\begin{document}
\title{Voronoi-based estimation of Minkowski tensors from finite point samples}
\begin{abstract} Intrinsic volumes and Minkowski tensors have been used to describe the geometry of real-world objects. This paper presents an estimator that allows us to approximate these quantities from digital images. It is based on a generalized Steiner formula for Minkowski tensors of sets of positive reach. As the resolution tends to infinity, the estimator converges to the true value if the underlying object is a set of positive reach. The underlying algorithm is based on a simple expression in terms of the cells of a Voronoi decomposition associated with the image.
\end{abstract} \section{Introduction} Intrinsic volumes, such as volume, surface area, and Euler characteristic, are widely-used tools to capture geometric features of an object; see, for instance, \cite{meckeEtAl,OM,milesSerra}. Minkowski tensors are tensor valued generalizations of the intrinsic volumes, associating with every sufficiently regular compact set in $\mathbb{R}^d$ a symmetric tensor, rather than a scalar. They carry information about geometric features of the set such as position, orientation, and eccentricity. For instance, the volume tensor -- defined formally in Section \ref{minkowski} -- of rank $0$ is just the volume of the set, while the volume tensors of rank $1$ and $2$ are closely related to the center of gravity and the tensor of inertia, respectively. For this reason, Minkowski tensors are used as shape descriptors in materials science \cite{mickel,aste}, physics \cite{kapfer}, and biology \cite{beisbart,ziegel}.
The main purpose of this paper is to present estimators that approximate all the Min\-kow\-ski tensors of a set $K$ when only weak information on $K$ is available. More precisely, we assume that a finite set $K_0$ which is close to $K$ in the Hausdorff metric is known. The estimators are based on the Voronoi decomposition of $\mathbb{R}^d$ associated with the finite set $K_0$, following an idea of M\'{e}rigot et al.\ \cite{merigot}. What makes these estimators so interesting is that they are consistent; that is, they converge to the respective Minkowski tensors of $K$ when applied to a sequence of finite approximations converging to $K$ in the Hausdorff metric. We emphasize that the notion of `estimator' is used here in the sense of digital geometry \cite{digital} meaning `approximation of the true value based on discrete input' and should not be confused with the statistical concept related to the inference from data with random noise. The main application we have in mind is the case where $K_0$ is a digitization of $K$. This is detailed in the following.
As data is often only available in digital form, there is a need for estimators that allow us to approximate the {Minkowski} tensors from digital images. In a black-and-white image of a compact geometric object $K\subseteq \mathbb{R}^d$, each pixel (or voxel) is colored black if {its} midpoint belongs to $K$ and white otherwise. Thus, the information about $K$ contained in the image is the set of black pixel (voxel) midpoints $K_0=K\cap a\mathbb{L}$, where $\mathbb{L}$ is the lattice formed by {all} pixel (voxel) midpoints and $a^{-1}$ is the resolution. A natural criterion for {the reliability of} a digital estimator is that it yields the correct tensor when $a\to 0_+$. If this property holds for all objects in a given family of sets, for instance, for all sets with smooth boundary, then the estimator is called \emph{multigrid convergent} for this class.
Digital estimators for the scalar Minkowski tensors, that is, for the intrinsic volumes, are widespread in the digital geometry literature; see, e.g.,~\cite{digital,OM,OS} and the references therein. For Minkowski tensors up to rank two, estimators based on binary images are given in \cite{turk} for the two-dimensional and in \cite{mecke} for the three-dimensional case. Even for the class of convex sets, multigrid convergence has not been proven for any of the above-mentioned estimators. The only exceptions are volume-related quantities. Most of the above-mentioned estimators are \emph{$n$-local} for some given fixed $n\in \mathbb{N}$. We call an estimator $n$-local if it depends on the image only through the histogram of all $n\times \dotsm \times n$ configurations of black and white points. For instance, a natural surface area estimator \cite{lindblad} in three-dimensional space scans the image with a voxel cube of size $2\times 2\times 2$ and assigns a surface contribution to each observed configuration. The sum of all contributions is then the surface area estimator, which is clearly $2$-local. The advantage of $n$-local estimators is that they are intuitive, easy to implement, and the computation time is linear in the number of pixels or voxels.
However, many $n$-local estimators are not multigrid convergent for convex sets; see \cite{am3} and the detailed discussion in Section \ref{known}. This implies that many established estimators, like the one mentioned in \cite{lindblad}, cannot be multigrid convergent for convex sets. All the estimators of 2D-Minkowski tensors in \cite{turk} are $2$-local. By the results in \cite{am3}, the estimators for the perimeter and the Euler characteristic thus cannot be multigrid convergent for convex sets. The multigrid convergence of the other estimators has not been investigated. The algorithms for 3D-Minkowski tensors in \cite{mecke} have as input a triangulation of the object's boundary, and the way this triangulation is obtained determines whether the resulting estimators are $n$-local or not. There are no known results on multigrid convergence for these estimators either. Summarizing, to the best of our knowledge, this paper presents for the first time estimators of all Minkowski tensors of arbitrary rank that come with a multigrid convergence proof for a class of sets that is considerably larger than the class of convex sets.
The present work is inspired by \cite{merigot}, and we therefore start by recalling some basic notions from this paper. For a nonempty compact set $K$, the authors of \cite{merigot} define a tensor valued measure, which they call the \emph{Voronoi covariance measure}, defined on a Borel set $A\subseteq \mathbb{R}^d$ by \begin{equation*} \mathcal{V}_R(K;A) = \int_{ K^R }\mathds{1}_A(p_K(y)) (y-p_K(y))(y-p_K(y))^\top\,dy. \end{equation*} Here, $K^R$ is the set of points at distance at most $R>0$ from $K$ and $p_K$
is the \emph{metric projection} on $K$: the point $p_K(x)$ is the point in $K$ closest to $x$, provided that this closest point is unique. The metric projection of $K$ is well-defined on $\mathbb{R}^d$ with the possible exception of a set of Lebesgue-measure zero; see, e.g., \cite{fremlin}.
The paper \cite{merigot} uses the Voronoi covariance measure to determine local features of surfaces. It is proved there that if $K \subseteq \mathbb{R}^3$ is a smooth surface, then \begin{equation}\label{eigen} \mathcal{V}_R(K;B(x,r)) \approx \frac{2\pi}{3}R^3r^2\bigg(u(x)u(x)^\top + \frac{r^2}{4}\sum_{i=1,2}k_i(x)^2P_i(x)P_i(x)^\top\bigg), \end{equation} where $B(x,r)$ is the Euclidean ball with midpoint $x\in K$ and radius $r$, $u(x)$ is one of the two surface unit normals at $x\in K$, $P_1(x),P_2(x)$ are the principal directions and $k_1(x),k_2(x)$ the corresponding principal curvatures. Hence, the eigenvalues and -directions of the Voronoi covariance measure carry information about local curvatures and normal directions.
Assuming that a compact set $K_0$ approximates $K$, \cite{merigot} suggests estimating $\mathcal{V}_R(K;\cdot) $ by $\mathcal{V}_R(K_0;\cdot)$. It is shown in that paper that $\mathcal{V}_R(K_0;\cdot)$ converges to $\mathcal{V}_R(K;\cdot)$ in the bounded Lipschitz metric when $K_0 \to K$ in the Hausdorff metric. Moreover, if $K_0$ is a finite set, then the Voronoi covariance measure can be expressed in the form \begin{equation*} \mathcal{V}_R(K_0;A) = \sum_{x\in K_0 \cap A} \int_{B(x,R)\cap V_x(K_0) } (y-x)(y-x)^\top \,dy. \end{equation*} Here, $V_x(K_0)$ is the Voronoi cell of $x$ in the Voronoi decomposition of $\mathbb{R}^d$ associated with $K_0$. Thus, the estimator used to approximate $\mathcal{V}_R(K;A)$ is easily computed. Given the Voronoi cells of $K_0$, each Voronoi cell contributes a simple integral. Figure \ref{fig} (a) shows the Voronoi cells of a finite set of points on an ellipse. The Voronoi cells are elongated in the normal direction. This is the intuitive reason why they can be used to approximate \eqref{eigen}.
The Voronoi covariance measure $\mathcal{V}_R(K;A) $ can be identified with a symmetric 2-tensor. In the present work, we explore how
natural extensions of the Voronoi covariance measure can be used to estimate general Minkowski tensors. The generalizations of the Voronoi covariance measure, which we will introduce, will be called \emph{Voronoi tensor measures}. {We will then show how the Minkowski tensors can be recovered from these}. When we apply the results to digital images, we will work with full-dimensional sets $K$, and the finite point sample $K_0$ is obtained from the representation $K_0=K\cap a\mathbb{L}$ of a digital image of $K$. The Voronoi cells associated with $K_0=K\cap a\mathbb{L}$ are sketched in Figure~\ref{fig}~(b). Taking point samples from $K$ with increasing resolution, convergence results will follow from an easy generalization of the convergence proof in \cite{merigot}.
\begin{figure}
\caption{(a) The Voronoi cells of a finite set of points on a surface. (b) A digital image and the associated Voronoi cells.}
\label{fig}
\end{figure}
The paper is structured as follows: In Section~\ref{minkowski}, we recall the definition of Minkowski tensors and the classical as well as a local Steiner formula for sets of positive reach. In Section~\ref{construction}, we define the Voronoi tensor measures, discuss how they can be estimated from finite point samples, and explain how the Steiner formula can be used to connect the Voronoi tensor measures with the
Minkowski tensors.
Section \ref{convergence} is concerned with the convergence of the estimator. The results are specialized to digital images in Section \ref{DI}. Finally, the estimator is compared with existing approaches in Section \ref{known}.
\section{Minkowski tensors}\label{minkowski}
We work in Euclidean space $\mathbb{R}^d$ with scalar product $\langle\cdot\,,\cdot\rangle$ and norm $|\cdot|$. The Euclidean ball with center $x\in\mathbb{R}^d$ and radius $r\ge 0$ is denoted by $B(x,r)$, and we write $S^{d-1}$ for the unit sphere in $\mathbb{R}^d$. Let $\partial A$ and $\text{int}\,A$ be the boundary and the interior of a set $A\subseteq{\mathbb R}^d$, respectively. The $k$-dimensional Hausdorff measure in $\mathbb{R}^d$ is denoted by ${\mathcal H}^k$, $0\le k\le d$. Let ${\mathcal C}^d$ be the family of nonempty compact subsets of $\mathbb{R}^d$ and ${\mathcal K}^d\subseteq \mathcal{C}^d$ the subset of nonempty compact convex sets. For two compact sets $K,M \in{\mathcal C}^d$, we define their \emph{Hausdorff distance} by \begin{equation*} d_H(K,M) = \inf\{\varepsilon>0\mid K\subseteq M^\varepsilon, M \subseteq K^\varepsilon\}. \end{equation*}
Let $\mathbb{T}^p$ denote the space of symmetric $p$-tensors (tensors of rank $p$) over $\mathbb{R}^d$. Identifying $\mathbb{R}^d$ with its dual (via the scalar product), a symmetric $p$-tensor defines a symmetric multilinear map $(\mathbb{R}^d)^p\to \mathbb{R}$. Letting $e_1,\dots,e_d$ be the standard basis in $\mathbb{R}^d$, a tensor $T\in \mathbb{T}^p$ is determined by its coordinates \begin{equation*} T_{i_1\dots i_p}=T(e_{i_1},\dots,e_{i_p}) \end{equation*} with respect to the standard basis, for all choices of ${i_1},\dots,{i_p} \in \{1,\dots,d\}$. We use the norm on $\mathbb{T}^p$ given by \begin{equation*}
|T|=\sup\big\{|T(v_1,\dots,v_p)| \,\mid \, |v_1|=\dots =|v_p|=1\big\} \end{equation*} for $T\in \mathbb{T}^p$. The same definition is used for arbitrary tensors of rank $p$.
The symmetric tensor product of $y_1,\ldots, y_m\in \mathbb{R}^{d}$
is given by the symmetrization $y_1\odot\cdots\odot y_m=(m!)^{-1}\sum \otimes_{i=1}^m y_{\sigma(i)}$, where the sum extends over all permutations $\sigma$ of $\{1,\ldots,m\}$ and $\otimes$ is the usual tensor product. We write $x^r$ for the $r$-fold tensor product of $x\in \mathbb{R}^d$. For two symmetric tensors of the form $T_1=y_1 \odot \cdots \odot y_r$ and $T_2=y_{r+1} \odot \cdots \odot y_{r+s}$,
where $y_1, \ldots , y_{r+s} \in\mathbb{R}^d$, the symmetric tensor product $T_1\odot T_2$ of $T_1$ and $T_2$, which we often abbreviate by $T_1T_2$, is the symmetric
tensor product of $y_1, \ldots, y_{r+s} $. This is extended to general symmetric tensors $T_1$ and $T_2$ by linearity.
Moreover, it follows from the preceding definitions that
$$
|y_1\odot\cdots\odot y_m|\le |y_1|\cdots |y_m|,
$$ $y_1,\ldots, y_m\in \mathbb{R}^{d}$.
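In coordinates, symmetric tensor products can be computed directly from this definition. The following Python sketch (an illustration only; it is not used in the formal development, and all function names are ours) realizes the tensor product and its symmetrization as \texttt{numpy} arrays and checks that the coordinates of the result are invariant under permutations of the indices.
\begin{verbatim}
import itertools, math
import numpy as np

def outer(vectors):
    # usual tensor product y_1 (x) ... (x) y_m, stored as an m-dimensional array
    t = np.array(1.0)
    for y in vectors:
        t = np.tensordot(t, y, axes=0)
    return t

def sym_product(vectors):
    # symmetric tensor product y_1 . ... . y_m: average over all permutations
    m = len(vectors)
    return sum(outer([vectors[i] for i in p])
               for p in itertools.permutations(range(m))) / math.factorial(m)

rng = np.random.default_rng(0)
ys = [rng.standard_normal(3) for _ in range(3)]
T = sym_product(ys)
# the coordinates T_{i_1 i_2 i_3} are symmetric in their indices:
assert np.allclose(T, np.transpose(T, (1, 0, 2)))
assert np.allclose(T, np.transpose(T, (2, 1, 0)))
\end{verbatim}
For the $r$-fold power $x^r$ the symmetrization step is unnecessary, since the $r$-fold tensor product of a single vector is already symmetric.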
For any compact set $K\subseteq \mathbb{R}^d$, we can define an element of $\mathbb{T}^r$ called the \emph{$r$th volume tensor} \begin{equation*} \Phi_{d}^{r,0}(K) = \frac{1}{r!} \int_{K} x^r \,dx. \end{equation*} For $s\geq 1$ we define $\Phi_{d}^{r,s}(K)=0$. Some of the volume tensors have well-known physical interpretations. For instance, $\Phi_{d}^{0,0}(K)$ is the usual volume of $K$, $\Phi_{d}^{1,0}(K)$ is up to normalization the center of gravity, and $\Phi_{d}^{2,0}(K)$ is closely related to the tensor of inertia. All three tensors together can be used to find the best approximating ellipsoid of a particle \cite{ziegel}. The sequence of all volume tensors $(\Phi_{d}^{r,0}(K))_{r=0}^\infty$ determines the compact set $K$ up to a set of Lebesgue measure zero.
For convex sets in the plane even the following stability result \cite[Remark 4.4.]{JuliaAstrid} holds: If $K, L\in {\mathcal K}^2$ are contained in the unit square and have coinciding volume tensors up to rank $r$, then their distance, measured in the symmetric difference metric ${\mathcal H}^2\big((K\setminus L) \cup (L\setminus K)\big)$, is of order $O(r^{-1/2})$ as $r\to \infty$.
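To illustrate the volume tensors numerically, the following Python sketch (ours; the disk parameters and the sample size are arbitrary) approximates $\Phi_2^{r,0}(K)$ for $r\le 2$ of a planar disk by Monte Carlo integration and recovers the center of gravity from the rank-one tensor.
\begin{verbatim}
import math
import numpy as np

rng = np.random.default_rng(1)
center, radius = np.array([1.0, 0.5]), 0.8      # K = disk in the plane (d = 2)
n = 200_000
pts = rng.uniform(center - radius, center + radius, size=(n, 2))
xs = pts[np.linalg.norm(pts - center, axis=1) <= radius]  # samples lying in K
weight = (2 * radius) ** 2 / n                  # Lebesgue measure per sample

vol0 = weight * len(xs)                             # Phi_2^{0,0}(K): area of K
vol1 = weight * xs.sum(axis=0)                      # Phi_2^{1,0}(K)
vol2 = weight * np.einsum('ni,nj->ij', xs, xs) / 2  # Phi_2^{2,0}(K)

print(vol0, math.pi * radius ** 2)   # rank 0: volume
print(vol1 / vol0, center)           # rank 1 (normalized): center of gravity
\end{verbatim}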
We will now define \emph{Minkowski surface tensors}. These can also be used to characterize the shape of an object or the structure of a material as in \cite{beisbart,kapfer}. They require stronger regularity assumptions on $K$. Usually, like in \cite[Section 5.4.2]{schneider}, the set $K$ is assumed to be convex. However, as Minkowski tensors are tensor-valued integrals with respect to the generalized curvature measures (also called support measures) of $K$, they can be defined whenever the latter are available. We will use this to define Minkowski tensors for sets of positive reach.
First, we recall the definition of a set of positive reach and explain how curvature measures of such sets are determined (see \cite{Federer59,zahle}). For a compact set $K\in {\mathcal C}^d$, we let $d_K(x)$ denote the distance from $x\in \mathbb{R}^d$ to $K$. Then, for $R\ge 0$, $K^R=\{x\in \mathbb{R}^d \mid d_K(x)\leq R\}$ is the $R$-parallel set of $K$. The \emph{reach} $\reach(K)$ of $K$ is defined as the supremum over all $R\geq 0$ such that for all $x\in \mathbb{R}^d$ with $d_K(x)<R$ there is a unique closest point $p_K(x)$ in $K$. We say that $K$ has positive reach if $\reach(K)>0$. Smooth surfaces (of class $C^{1,1}$) are examples of sets of positive reach, and compact convex sets are characterized by having infinite reach. By definition, the map $p_K$ is defined everywhere on $K^R$ if $R<\reach(K)$. Let $K\subseteq \mathbb{R}^d$ be a (compact) set of positive reach. The (global) Steiner formula for sets with positive reach states that for all $R<\reach(K)$ the $R$-parallel volume of $K$ is a polynomial, that is, \begin{align}\label{gloSt} \mathcal{H}^d( K^R){}&= \sum_{k=0}^d \kappa_{d-k} R^{d-k} \Phi_{k}^{0,0}(K). \end{align}
Here $\kappa_j$ is the volume of the unit ball in $\mathbb{R}^j$ and the numbers $\Phi^{0,0}_0(K),\ldots, \allowbreak \Phi_d^{0,0}(K)$ are the so-called \emph{intrinsic volumes} of $K$. They are special cases of the Minkowski tensors to be defined below. Some of them have well-known interpretations. As mentioned, $\Phi^{0,0}_d(K)$ is the volume of $K$. Moreover, $2\Phi^{0,0}_{d-1}(K)$ is the surface area, $\Phi^{0,0}_{d-2}(K)$ is proportional to the total mean curvature, and $\Phi^{0,0}_0(K)$ is the Euler characteristic of $K$. For convex sets, \eqref{gloSt} is the classical Steiner formula which holds for all $R\ge 0$.
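As a quick sanity check of \eqref{gloSt}, the following small Python snippet (ours, purely illustrative) evaluates the Steiner polynomial for a planar disk of radius $\rho$, whose intrinsic volumes are $\Phi_0^{0,0}=1$, $\Phi_1^{0,0}=\pi\rho$ and $\Phi_2^{0,0}=\pi\rho^2$, and compares it with the area $\pi(\rho+R)^2$ of the $R$-parallel disk.
\begin{verbatim}
import math

rho, R = 1.3, 0.4                    # K = disk of radius rho, parallel distance R
kappa = [1.0, 2.0, math.pi]          # kappa_j = volume of the unit ball in R^j
phi = [1.0, math.pi * rho, math.pi * rho ** 2]   # (Phi_0, Phi_1, Phi_2) of the disk
steiner = sum(kappa[2 - k] * R ** (2 - k) * phi[k] for k in range(3))
print(steiner, math.pi * (rho + R) ** 2)         # the two numbers agree
\end{verbatim}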
Z\"ahle \cite{zahle} showed that a local version of \eqref{gloSt} can be established giving rise to the \emph{generalized curvature measures} $\Lambda_k(K;\cdot)$ of $K$, for $k=0,\dots,d-1$. An extension to general closed sets is
considered in \cite{last}. The generalized curvature measures (also called support measures) are measures on $\Sigma = \mathbb{R}^d\times S^{d-1}$. They are determined by the following {\em local} Steiner formula, which holds for all $R < \reach(K)$ and all Borel sets $B\subseteq \Sigma$: \begin{equation}\label{clasSteiner}
\mathcal{H}^d\left(\left\{x\in K^R \backslash K \mid \Big(p_K(x), \tfrac{x-p_K(x)}{|x-p_K(x)|}\Big)\in B\right\}\right) = \sum_{k=0}^{d-1} R^{d-k} \kappa_{d-k} \Lambda_k(K;B). \end{equation} The coefficients $\Lambda_k(K;B)$ on the right side of \eqref{clasSteiner} are signed Borel measures $\Lambda_k(K;\cdot)$ evaluated on $B\subseteq\Sigma$. These measures are called the {\em generalized curvature measures} of $K$.
Since the pairs of points in $B$ on the left side of \eqref{clasSteiner} always consist of a boundary point of $K$ and an outer unit normal of $K$ at that point, each of the measures $\Lambda_k(K,\cdot)$ is concentrated on the set of all such pairs. For this reason, the generalized curvature measures $\Lambda_k(K;\cdot)$, $k\in\{0,\ldots,d-1\}$, are also called {\em support measures}. They describe the local boundary behavior of the part of $\partial K$ that consists of points $x$ with an outer unit normal $u$ such that $(x,u)\in B$. A description of the generalized curvature measures $\Lambda_k(K,\cdot)$ by means of generalized curvatures living on the normal bundle of $K$ was first given in \cite{zahle} (see also \cite[\S 2.5 and p.~217]{schneider} and the references given there).
The total measures $\Lambda_k(K;\Sigma)$ are the intrinsic volumes $\Phi_k^{0,0}(K)$.
Based on the generalized curvature measures, for every $k\in\{0,\dots,d-1\}$, $r,s\geq 0$ and every set $K\subseteq\mathbb{R}^d$ with positive reach, we define the {\em Minkowski tensor} \begin{equation*} \Phi_{k}^{r,s}(K) = \frac{1}{r!s!}\frac{\omega_{d-k}}{\omega_{d-k+s}}\int_{\Sigma} x^r u^{s} \Lambda_k(K;d(x,u)) \end{equation*} in $\mathbb{T}^{r+s}$. Here $\omega_k$ is the surface area of the unit sphere $S^{k-1}$ in $\mathbb{R}^k$. More information on Minkowski tensors can for instance be found in \cite{hug,mcmullen,schuster,KVJLNM}. As in the case of volume tensors, the Minkowski tensors carry strong information on the underlying set.
For instance, already the sequence $(\Phi_{1}^{0,s}(K))_{s=0}^\infty$ determines any $K\in {\mathcal K}^d$ up to a translation. A stability result also holds: if $K$ and $L$ are both contained in a fixed ball and have the same tensors $\Phi_{1}^{0,s}$ of
rank $s\le s_0$, then a translation of $K$ is close to $L$ in the Hausdorff metric and the distance is $O(s_0^{-\beta})$ as $s_0\to \infty$ for any $0<\beta<3/(n+1)$; see \cite[Theorem 4.9]{AstridMarkus}.
One can define \emph{local Minkowski tensors} in a similar way (see \cite{HS14}). For a Borel set $B\subseteq \Sigma$, for $k\in\{0,\dots,d-1\}$, $r,s\geq 0$ and a set $K\subseteq\mathbb{R}^d$ with positive reach, we put \begin{equation*} \Phi_{k}^{r,s}(K;B) = \frac{1}{r!s!}\frac{\omega_{d-k}}{\omega_{d-k+s}}\int_{B} x^r u^{s} \,\Lambda_k(K;d(x,u)) \end{equation*} and, for a Borel set $A \subseteq \mathbb{R}^d$, \begin{equation*} \Phi_{d}^{r,0}(K;A) = \frac{1}{r!} \int_{K\cap A} x^r \,dx. \end{equation*} In order to avoid a distinction of cases, we also write $\Phi_{d}^{r,0}(K;A\times S^{d-1})$ instead of $\Phi_{d}^{r,0}(K;A)$. Moreover, we define $\Phi_{d}^{r,s}(K;\cdot)=0$ if $s\ge 1$. The local Minkowski tensors can be used to describe local boundary properties. For instance, local 1- and 2-tensors are used for the detection of sharp edges and corners on surfaces in \cite{clarenz}. They also carry information about normal directions and principal curvatures as explained in the introduction.
We conclude this section with a general remark on continuity properties of the Minkowski tensors.
Although the functions $K\mapsto \Phi_{k}^{r,s}(K)$ are continuous when considered in the metric space $(\mathcal{K}^d,d_H)$, they are not continuous on ${\mathcal C}^d$. (For instance, the volume tensors of a finite set are always vanishing, but finite sets can be used to approximate any compact set in the Hausdorff metric.) This is the reason why our approach requires an approximation argument with parallel sets as outlined below. The consistency of our estimator is mainly based on a continuity result for the metric projection map. We quote this result \cite[Theorem 3.2]{chazal} in a slightly different formulation which is symmetric in the two bodies involved.
Let $\|f\|_{L^1(E)}$ be the usual $L^1$-norm of the restriction of $f$ to a Borel set $E\subseteq \mathbb{R}^d$.
\begin{proposition}\label{CHAZProp}
Let $\rho>0$ and let $E\subseteq \mathbb{R}^d$ be a bounded measurable set. Then there is a constant $C_1=C_1\left(d,\diam(E\cup\{0\}),\rho\right)>0$ such that
\[
\|p_K-p_{K_0}\|_{L^1(E)}
\le C_1 d_H(K,K_0)^{\frac 12}
\]
for all $K,K_0\in {\mathcal C}^d$ with $K,K_0\subseteq B(0,\rho)$. \end{proposition} \begin{proof}
Let $E'$ be the convex hull of $E$ and observe that
\begin{equation*}
\|p_K-p_{K_0}\|_{L^1(E)} \leq \|p_K-p_{K_0}\|_{L^1(E')}. \end{equation*}
It is shown in \cite[Lemma 3.3]{chazal} (see also \cite[Theorem 4.8]{Federer59}) that the map $v_K:\mathbb{R}^d\to\mathbb{R}$ given by $v_K(x)=|x|^2-d_K^2(x)$ is convex and that its gradient coincides almost everywhere with $2p_K$. Since $E'$ has rectifiable boundary, \cite[Theorem~3.5]{chazal} implies
that
\begin{align*}
\|p_K-p_{K_0}\|_{L^1(E')}
\le {}& c_1(d) ({\mathcal H}^d(E')+(c_2+\|d_K^2-d_{K_0}^2\|_{\infty,E'}^{\frac 12}){\mathcal H}^{d-1}(\partial E'))\\
&\times \|d_K^2-d_{K_0}^2\|_{\infty,E'}^{\frac 12}.
\end{align*}
Here $c_2=\diam(2p_K(E')\cup 2p_{K_0}(E'))\le 2\diam (K\cup K_0)\le 4\rho$ and the supremum-norm $\|\cdot\|_{\infty,E'}$ on $E'$ can be estimated by
\begin{align*}
\|d_K^2-d_{K_0}^2\|_{\infty,E'}&\le 2\diam(E'\cup K\cup K_0) \|d_K-d_{K_0}\|_{\infty,E'}
\\&\le 2\left[\diam(E'\cup\{0\})+2\rho\right]d_H(K,K_0).
\end{align*}
Moreover, intrinsic volumes are increasing on the class of convex sets, so
\begin{align*}
\mathcal{H}^d(E'){}&\leq \mathcal{H}^d(B(0, \diam(E'\cup \{0\})))\\
{\mathcal H}^{d-1}(\partial E') {}&\leq \mathcal{H}^{d-1}(\partial B(0, \diam(E'\cup \{0\}))).
\end{align*}
Together with the trivial estimate $d_H(K,K_0)\le2\rho$ and with the equality $\diam(E\cup\{0\})=\diam(E'\cup\{0\})$, this yields the claim.
\end{proof}
The authors of \cite{chazal} argue that the exponent $1/2$ in Proposition \ref{CHAZProp} is best possible.
\section{Construction of the estimator} \label{construction} In Section \ref{VTM} below, we define the Voronoi tensor measures and show how the Minkowski tensors can be obtained from these. We then explain in Section \ref{finite} how the Voronoi tensor measures can be estimated from finite point samples. As a special case, we obtain estimators for all intrinsic volumes. This is detailed in Section \ref{intvol}.
\subsection{The Voronoi tensor measures} \label{VTM} Let $K$ be a compact set. Here and in the following subsections, we let $r,s\in\mathbb{N}_0$ and $R\ge 0$. Define the $ \mathbb{T}^{r+s}$-valued measures $\mathcal{V}_{R}^{r,s}(K;\cdot)$ given on a Borel set $A\subseteq \mathbb{R}^d$ by \begin{equation}\label{star} \mathcal{V}_{R}^{r,s}(K;A) = \int_{K^R }\mathds{1}_A(p_K(x)) \,p_K(x)^r(x-p_K(x))^s \, dx. \end{equation} When $K$ is a smooth surface, $\mathcal{V}_{R}^{0,2}(K;\cdot)$ {corresponds to} the Voronoi covariance measure in \cite{merigot}. We will refer to the measures defined in \eqref{star} as the \emph{Voronoi tensor measures}. Note that if $f:\mathbb{R}^d \to \mathbb{R}$ is a bounded Borel function, then \begin{equation}\label{integralf} \int_{\mathbb{R}^d} f(x) \,\mathcal{V}_{R}^{r,s}(K;dx) = \int_{K^R}f(p_K(x))\,p_K(x)^r(x-p_K(x))^s \, dx \in \mathbb{T}^{r+s}. \end{equation}
Suppose now that $K$ has positive reach with $\reach(K)>R$. Then a special case of the generalized Steiner formula derived in \cite{last} (or an extension of \eqref{clasSteiner}) implies the following version of the local Steiner formula for the Voronoi tensor measures: \begin{align}\nonumber \mathcal{V}_{R}^{r,s}(K;A) {}&= \sum_{k=1}^{d} \omega_{k} \int_{\Sigma} \int_{0}^R \mathds{1}_{A}(x) t^{s+k-1} x^r u^s \, dt\, \Lambda_{d-k}(K;d(x,u))\nonumber\\ &\qquad +\mathds{1}_{\{s = 0\}}\int_{K\cap A} x^r \,dx\nonumber\\ &= r!s! \sum_{k=0}^d \kappa_{k+s} R^{s+k} \Phi_{d-k}^{r,s}(K;A\times S^{d-1}),\label{steiner} \end{align} where $A\subseteq {\mathbb R}^d$ is a Borel set.
In particular, the total measure is \begin{equation*} \mathcal{V}_{R}^{r,s}(K)=\mathcal{V}_{R}^{r,s}(K;\mathbb{R}^d) = r!s!\sum_{k=0}^d \kappa_{k+s} R^{s+k} \Phi_{d-k}^{r,s}(K) . \end{equation*} Note that the special case $r=s=0$ is the Steiner formula \eqref{gloSt} for sets with positive reach.
Equation \eqref{steiner}, used for different parallel distances $R$, can be solved for the Minkowski tensors. More precisely, choosing $d+1$ different values $0<R_0<\ldots <R_d<\reach(K)$ for $R$, we obtain a system of $d+1$ linear equations: \begin{align}\label{matrixeq} \begin{pmatrix} \mathcal{V}_{R_0}^{r,s}(K;A)\\ \vdots \\ \mathcal{V}_{R_d}^{r,s}(K;A) \end{pmatrix} =r!s! \begin{pmatrix}\kappa_s R_0^{s} & \dots & \kappa_{s+d}R_0^{s+d} \\ \vdots & & \vdots \\ \kappa_sR_{d}^{s} & \dots & \kappa_{s+d}R_{d}^{s+d} \end{pmatrix} \begin{pmatrix}\Phi_{d}^{r,s}(K;{A\times S^{d-1}})\\ \vdots \\ \Phi_{0}^{r,s}(K;{A\times S^{d-1}}) \end{pmatrix}. \end{align} Since the Vandermonde-type matrix
\begin{align}\label{matrixA}
A_{R_0,\ldots,R_d}^{r,s}
=
{r!s!}
\begin{pmatrix}
\kappa_s R_0^{s} & \dots & \kappa_{s+d}R_0^{s+d} \\
\vdots & & \vdots
\\
\kappa_s R_{d}^{s} & \dots & \kappa_{s+d}R_{d}^{s+d}
\end{pmatrix}\in \mathbb{R}^{(d+1)\times(d+1)}
\end{align} in \eqref{matrixeq} is invertible, the system can be solved for the tensors, and thus we get \begin{align}\label{matrix} \begin{pmatrix}{\Phi}_{d}^{r,s}(K;A{\times S^{d-1}})\\ \vdots \\ {\Phi}_{0}^{r,s}(K;{A{\times S^{d-1}}}) \end{pmatrix} =\left(A_{R_0,\ldots,R_d}^{r,s}\right)^{-1} \begin{pmatrix} \mathcal{V}_{R_0}^{r,s}(K;A)\\ \vdots \\ \mathcal{V}_{R_d}^{r,s}(K;A) \end{pmatrix}. \end{align} If $s>0$, then ${\Phi}_{d}^{r,s}(K;A\times S^{d-1})=0$ by definition, so we may omit one of the equations in the system \eqref{matrixeq}.
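Solving \eqref{matrixeq} is a small linear-algebra task. The following Python sketch (our own illustration; the helper names \texttt{steiner\_matrix} and \texttt{tensors\_from\_measures} are ours) assembles the matrix $A^{r,s}_{R_0,\ldots,R_d}$ and recovers the tensor vector in \eqref{matrix} from given values of the Voronoi tensor measures. These helpers are reused in the illustrative example in Section \ref{intvol}.
\begin{verbatim}
import math
import numpy as np

def kappa(j):
    # volume of the unit ball in R^j
    return math.pi ** (j / 2) / math.gamma(j / 2 + 1)

def steiner_matrix(radii, r, s):
    # the matrix A^{r,s}_{R_0,...,R_d}: entry (i,k) = r! s! kappa_{s+k} R_i^{s+k}
    d = len(radii) - 1
    A = [[kappa(s + k) * R ** (s + k) for k in range(d + 1)] for R in radii]
    return math.factorial(r) * math.factorial(s) * np.array(A)

def tensors_from_measures(radii, r, s, V):
    # V[i] approximates V_{R_i}^{r,s}(K;A); returns estimates of
    # (Phi_d^{r,s}, ..., Phi_0^{r,s}) evaluated at A x S^{d-1}
    V = np.asarray(V, dtype=float)
    flat = V.reshape(len(radii), -1)          # tensors as coordinate vectors
    sol = np.linalg.solve(steiner_matrix(radii, r, s), flat)
    return sol.reshape(V.shape)
\end{verbatim}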
\subsection{Estimation of Minkowski tensors}\label{finite} Let $K$ be a compact set of positive reach. Suppose that we are given a compact set $K_0$ that is close to $K$ in the Hausdorff metric. In the applications we have in mind, $K_0$ is a finite subset of $K$, but this is not necessary for the algorithm to work. Based on $K_0$, we want to estimate the local Minkowski tensors of $K$. We do this by approximating $\mathcal{V}_{R_k}^{r,s}(K;A)$ in Formula \eqref{matrix} by $\mathcal{V}_{R_k}^{r,s}(K_0;A)$, for $k=0,\dots,d$ and $A\subseteq \mathbb{R}^d$ a Borel set. This leads to the following set of estimators for $\Phi_k^{r,s}(K;A\times S^{d-1})$, $k\in\{0,\ldots,d\}$: \begin{align} \begin{pmatrix}\hat{\Phi}_{d}^{r,s}(K_0;A\times S^{d-1})\\ \vdots \\ \hat{\Phi}_{0}^{r,s}(K_0;A\times S^{d-1}) \end{pmatrix} =\left(A_{R_0,\ldots,R_d}^{r,s}\right)^{-1} \begin{pmatrix} \mathcal{V}_{R_0}^{r,s}(K_0;A)\\ \vdots \\ \mathcal{V}_{R_d}^{r,s}(K_0;A) \end{pmatrix}\label{defEst} \end{align} with $A_{R_0,\ldots,R_d}^{r,s}$ given by \eqref{matrixA}. Setting $A=\mathbb{R}^d$ in \eqref{defEst}, we obtain estimators \[ \hat{\Phi}_{k}^{r,s}(K_0)=\hat{\Phi}_{k}^{r,s}(K_0;\mathbb{R}^d\times S^{d-1}) \] of the Minkowski tensors $\Phi_{k}^{r,s}(K)$. Note that this approach requires an estimate for the reach of $K$ because we need to choose $0<R_0<\dots<R_d <\reach(K)$. The idea of inverting the Steiner formula is not new. It was used in \cite{chazal} to approximate curvature measures of sets of positive reach. In \cite{spodarev} and \cite{jan} it was used to estimate intrinsic volumes but without proving convergence for the resulting estimators.
We now consider the case where $K_0$ is finite. Let \begin{equation*} V_x(K_0)=\{y\in \mathbb{R}^d \mid p_{K_0}(y)=x\} \end{equation*} denote the Voronoi cell of $x\in K_0$ with respect to the set $K_0$. Since $\mathbb{R}^d$ is the union of the finitely many Voronoi cells of $K_0$, it follows that $K^R_0$ is the union of the $R$-bounded parts $B(x,R)\cap V_x(K_0)$, $x\in K_0$, of the Voronoi cells $V_x(K_0)$, $x\in K_0$, which have pairwise disjoint interiors. Thus \eqref{star} simplifies to \begin{equation}\label{algorithm} \mathcal{V}_{R}^{r,s}(K_0;A)= \sum_{x\in K_0\cap A } x^r \int_{B(x,R)\cap V_x(K_0)} (y-x)^s \, dy. \end{equation} Like the Voronoi covariance measure, the Voronoi tensor measure $\mathcal{V}_{R}^{r,s}(K_0;A)$ is a sum of simple contributions from the individual Voronoi cells.
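For readers who wish to experiment with \eqref{algorithm}, the following Python sketch (ours; it is not the implementation of any of the cited references, and all names are our own) approximates the integrals over the $R$-bounded Voronoi cells by rejection sampling: points are drawn uniformly in $B(x,R)$ and kept if their nearest neighbor in $K_0$ is $x$. An exact implementation would instead integrate the polynomial integrands over the polytopes $B(x,R)\cap V_x(K_0)$; the Monte Carlo version below is only a simple stand-in.
\begin{verbatim}
import math
import numpy as np
from scipy.spatial import cKDTree

def voronoi_tensor_measure(K0, R, r, s, n_mc=2000, rng=None):
    # Monte Carlo approximation of V_R^{r,s}(K0; R^d) for a finite sample K0,
    # using that the measure splits into contributions of the R-bounded
    # Voronoi cells B(x,R) \cap V_x(K0).  Intended for small d (2 or 3).
    rng = rng or np.random.default_rng(0)
    K0 = np.asarray(K0, dtype=float)
    d = K0.shape[1]
    tree = cKDTree(K0)
    ball_vol = math.pi ** (d / 2) / math.gamma(d / 2 + 1) * R ** d
    total = np.zeros((d,) * (r + s))
    for i, x in enumerate(K0):
        # uniform samples in B(x,R): rejection from the surrounding cube
        u = rng.uniform(-R, R, size=(4 * n_mc, d))
        u = u[np.linalg.norm(u, axis=1) <= R][:n_mc]
        y = x + u
        _, nearest = tree.query(y)
        v = y[nearest == i] - x          # samples that fall into V_x(K0)
        contrib = np.zeros((d,) * (r + s))
        for w in v:
            t = np.array(1.0)
            for vec in [x] * r + [w] * s:
                t = np.tensordot(t, vec, axes=0)   # builds x^r (y-x)^s
            contrib = contrib + t
        total = total + ball_vol * contrib / len(y)
    return total
\end{verbatim}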
An example of a Voronoi decomposition associated with a digital image is sketched in Figure~\ref{redblue}. The original set $K$ is the disk bounded by the inner black circle, and the disk bounded by the outer black circle is its $R$-parallel set $K^R$. The finite point sample is $K_0 = K \cap \mathbb{Z}^2$, which is shown as the set of red dots in the picture, and the red curve is the boundary of its $R$-parallel set. The Voronoi cells of $K_0$ are indicated by blue lines. The $R$-bounded part of one of the Voronoi cells is the part that is cut off by the red arc.
\begin{figure}
\caption{The Voronoi decomposition (blue lines) and $R$-parallel set (red curve) associated with a digital image.}
\label{redblue}
\end{figure}
\subsection{The case of intrinsic volumes}\label{intvol} Recall that $\Phi_k^{0,0}(K)=\Lambda_k(K;\Sigma)$ is the $k$th intrinsic volume. Thus, Section \ref{finite} provides estimators for all intrinsic volumes as a special case. This case is particularly simple. The measure $\mathcal{V}_{R}^{0,0}(K;A)$ is simply the volume of a local parallel set: \begin{align*} \mathcal{V}_{R}^{0,0}(K;A){}&=\mathcal{H}^d\left(\{ x \in K^R\mid p_K(x) \in A\}\right),\\ \mathcal{V}_{R}^{0,0}(K){}&=\mathcal{H}^d( K^R). \end{align*} In particular, if $K\subseteq \mathbb{R}^d$ is a compact set with $\reach(K)>R$, then Equation~\eqref{steiner} reduces to the usual local Steiner formula \begin{align*} \mathcal{H}^d( \{ x \in K^R\mid p_K(x) \in A\})&= \sum_{k=0}^d \kappa_{k} R^{k} \Lambda_{d-k}(K;A\times S^{d-1}), \end{align*} where we use the convention $\Lambda_{d}(K;A\times S^{d-1})=\mathcal{H}^d(K\cap A)$, and to the (global) Steiner formula \eqref{gloSt} if $A=\mathbb{R}^d$.
In this case, our algorithm approximates the parallel volume $\mathcal{H}^d(K^R)$ by $\mathcal{H}^d(K_0^R)$. In the example in Figure \ref{redblue}, this corresponds to approximating the volume of the larger black disk by the volume of the region bounded by the red curve. This volume, in turn, is the sum of the volumes of the $R$-bounded Voronoi cells, each of which is bounded by pieces of the blue lines and of the red curve. In other words, it is the sum of the volumes appearing on the right-hand side of the equation \begin{equation*} \mathcal{V}_{R}^{0,0}(K_0;A)= \sum_{x\in K_0\cap A } \mathcal{H}^d(B(x,R) \cap V_x(K_0)), \end{equation*} evaluated at $A=\mathbb{R}^d$.
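As a hedged end-to-end illustration, assuming that the hypothetical helpers \texttt{voronoi\_tensor\_measure} and \texttt{steiner\_matrix} from the sketches above are in scope, the intrinsic volumes of the unit disk can be estimated from a point sample on a grid of spacing $a$; the exact values are $(\Phi_2^{0,0},\Phi_1^{0,0},\Phi_0^{0,0})=(\pi,\pi,1)$.
\begin{verbatim}
import numpy as np

# K = unit disk; K0 = K intersected with a grid of spacing a (digital image)
a = 0.05
g = np.arange(-1.5, 1.5 + a, a)
X, Y = np.meshgrid(g, g)
pts = np.column_stack([X.ravel(), Y.ravel()])
K0 = pts[np.linalg.norm(pts, axis=1) <= 1.0]

radii = [0.3, 0.5, 0.7]     # 0 < R_0 < R_1 < R_2; K is convex, so its reach is infinite
V = [voronoi_tensor_measure(K0, R, 0, 0) for R in radii]
phi_hat = np.linalg.solve(steiner_matrix(radii, 0, 0), np.asarray(V, dtype=float))
print(phi_hat)              # estimates of (Phi_2, Phi_1, Phi_0); exact: (pi, pi, 1)
\end{verbatim}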
\subsection{Estimators for general local Minkowski tensors}\label{general1} In Section \ref{finite} we have only considered estimators for local tensors of the form $\Phi_k^{r,s}(K;A \times S^{d-1})$, where $K\subseteq\mathbb{R}^d$ is a set with positive reach. The natural way to estimate $\Phi_k^{r,s}(K;B)$, for a measurable set $B\subseteq \Sigma $, would be to copy the idea in Section \ref{finite} with $\mathcal{V}_{R}^{r,s}(K;A)$ replaced by the following generalization of the Voronoi tensor measures, \begin{equation}\label{baddef} \mathcal{W}_{R}^{r,s}(K;B) =
\int_{K^R \backslash K}\mathds{1}_B(p_K(x),u_K(x))p_K(x)^r (x-p_K(x))^s \, dx, \end{equation}
where $u_K(x) = ({x-p_K(x)})/{|x-p_K(x)|}$ estimates the normal direction. Of course, this definition works for any $K\in\mathcal{C}^d$. Moreover, we could define estimators related to \eqref{baddef} whenever we have a set $K_0$ which approximates $K$. However, even if $K$ has positive reach, the map $x\mapsto u_K(x)$ is not Lipschitz on $K^R\backslash K$, and therefore
the convergence results in Section \ref{convergence} will not work with this definition. Since the map $x\mapsto u_K(x)$ is Lipschitz on $K^R\backslash K^{R/2}$, it is natural to proceed as follows. For any $K\in\mathcal{C}^d$, we define \begin{align}\label{modify} \overline{\mathcal{V}}_{R}^{r,s}(K;B) {}&=
\int_{K^R \backslash K^{R/2}}\mathds{1}_B(p_K(x),u_K(x))p_K(x)^r (x-p_K(x))^s \, dx. \end{align} Note that \begin{equation}\label{Wdifference} \overline{\mathcal{V}}_{R}^{r,s}(K;\cdot)=\mathcal{W}_{R}^{r,s}(K;\cdot)-\mathcal{W}_{R/2}^{r,s}(K;\cdot), \end{equation} where $\mathcal{W}_{R}^{r,s}(K;\cdot)$ is defined as in \eqref{baddef}. We will not use the notation $\mathcal{W}_{R}^{r,s}(K;\cdot)$ in the following. If $K$ has positive reach and $0<R<\text{reach}(K)$, then the generalized Steiner formula yields \begin{align*} {\overline{\mathcal{V}}_{R}^{r,s}(K;B)} &= r!s! \sum_{k=1}^d \kappa_{s+k} R^{s+k} (1-2^{-(s+k)}){ {\Phi}_{d-k}^{r,s}(K;B)}. \end{align*} Again, choosing $0<R_1<\ldots<R_d<\text{reach}(K)$, we can recover the Minkowski tensors from \begin{align*} \begin{pmatrix}{\Phi}_{d-1}^{r,s}(K;B)\\ \vdots \\ {\Phi}_{0}^{r,s}(K;B) \end{pmatrix} = \left( \overline{A}_{R_1,\ldots,R_d}^{r,s} \right)^{-1} \begin{pmatrix} \overline{\mathcal{V}}_{R_1}^{r,s}(K;B)\\ \vdots \\ \overline{\mathcal{V}}_{R_d}^{r,s}(K;B) \end{pmatrix} \end{align*} where \[ \overline {A}_{R_1,\ldots,R_d}^{r,s}= r!s! \begin{pmatrix} \kappa_{s+1} (1-2^{-(s+1)}) R_1^{s+1} & \dots & \kappa_{s+d}(1-2^{-(s+d)})R_1^{s+d} \\ \vdots & & \vdots \\ \kappa_{s+1} (1-2^{-(s+1)}) R_{d}^{s+1} & \dots & \kappa_{s+d}(1-2^{-(s+d)})R_{d}^{s+d} \end{pmatrix} \] is a regular matrix. Using this, we can define estimators for ${\Phi}_{k}^{r,s}(K;B)$, for $0\le k\leq d-1$, by \begin{align*} \begin{pmatrix}\overline{\Phi}_{d-1}^{r,s}(K_0;B)\\ \vdots \\ \overline{\Phi}_{0}^{r,s}(K_0;B) \end{pmatrix} = \left( \overline{A}_{R_1,\ldots,R_d}^{r,s} \right)^{-1} \begin{pmatrix} \overline{\mathcal{V}}_{R_1}^{r,s}(K_0;B)\\ \vdots \\ \overline{\mathcal{V}}_{R_d}^{r,s}(K_0;B) \end{pmatrix}, \end{align*} where $K_0$ is a compact set which approximates $K$. Convergence of these modified estimators will be discussed in Section \ref{convergence}.
The estimators $\overline{\Phi}_{k}^{r,s}$ can be used to approximate local tensors of the form $\Phi_k^{r,s}(K;B)$ where the set $B\subseteq \Sigma$ involves normal directions. Thus, they are more general than $\hat{\Phi}_{k}^{r,s}$. However, \eqref{Wdifference} shows that estimating $\overline{\mathcal{V}}_{R}^{r,s}(K;B)$ requires an approximation of two parallel sets, rather than one. We therefore expect more severe numerical errors for $\overline{\Phi}_{k}^{r,s}$.
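The matrix $\overline{A}^{r,s}_{R_1,\ldots,R_d}$ can be assembled in the same way as $A^{r,s}_{R_0,\ldots,R_d}$. A minimal Python sketch (helper names ours, mirroring the sketch in Section \ref{VTM}) reads as follows.
\begin{verbatim}
import math
import numpy as np

def kappa(j):
    return math.pi ** (j / 2) / math.gamma(j / 2 + 1)  # volume of the unit ball in R^j

def modified_steiner_matrix(radii, r, s):
    # the matrix bar{A}^{r,s}_{R_1,...,R_d}: entry (i,k), k = 1,...,d, equals
    # r! s! kappa_{s+k} (1 - 2^{-(s+k)}) R_i^{s+k}
    d = len(radii)
    A = [[kappa(s + k) * (1 - 2.0 ** -(s + k)) * R ** (s + k)
          for k in range(1, d + 1)] for R in radii]
    return math.factorial(r) * math.factorial(s) * np.array(A)
\end{verbatim}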
\section{Convergence properties}\label{convergence} In this section we prove the main convergence results. The first of them, Theorem \ref{converge}, is an immediate generalization of \cite[Theorem 5.1]{merigot}.
\subsection{The convergence theorem}
For a bounded Lipschitz function $f:\mathbb{R}^d \to \mathbb{R}$, we let $|f|_\infty$ denote the usual supremum norm, \begin{equation*}
|f|_L = \sup \bigg\{ \frac{|f(x)-f(y)|}{|x-y|} \mid x\neq y\bigg\} \end{equation*} the Lipschitz semi-norm, and \begin{equation*}
|f|_{bL}=|f|_L + |f|_\infty \end{equation*} the bounded Lipschitz norm. Let $d_{bL} $ be the bounded Lipschitz metric on the space of bounded $\mathbb{T}^p$-valued Borel measures on $\mathbb{R}^d$. For any two such measures $\mu$ and $\nu$ on $\mathbb{R}^d$, the distance with respect to $d_{bL}$ is defined by \begin{equation*}
d_{bL}(\mu,\nu) = \sup \bigg\{\bigg|\int f \, d\mu - \int f \, d\nu\bigg| \mid |f|_{bL} \leq 1\bigg\}, \end{equation*}
where the supremum extends over all bounded Lipschitz functions $f:\mathbb{R}^d\to \mathbb{R}$ with $ |f|_{bL} \leq 1$. The following theorem shows that the map \begin{equation*} K \mapsto \mathcal{V}_{R}^{r,s}(K;\cdot) \end{equation*} is H\"{o}lder continuous with exponent $ \frac{1}{2}$ with respect to the Hausdorff metric on $\mathcal{C}^d$ (restricted to compact subsets of a fixed ball) and the bounded Lipschitz metric. In the proof, we use the symmetric difference $A\Delta B=(A\setminus B)\cup(B\setminus A)$ of sets $A,B\subseteq \mathbb{R}^d$. \begin{thm}\label{converge} Let $R,\rho>0$ and $r,s\in {\mathbb N}_0$ be given. Then there is a positive constant $C_2=C_2(d,R,\rho,r,s)$ such that \begin{align*} d_{bL}(\mathcal{V}_{R}^{r,s}(K;\cdot),\mathcal{V}_{R}^{r,s}(K_0;\cdot )) \leq C_2 d_H(K,K_0)^{\frac{1}{2}} \end{align*} for all compact sets $K,K_0\subseteq B(0,\rho)$. \end{thm}
\begin{proof}
Let $f$ with $|f|_{bL} \leq 1$ be given. Then \eqref{integralf} yields \begin{align}\nonumber
&\bigg|\int_{\mathbb{R}^d} f(x)\, \mathcal{V}_{R}^{r,s}(K;dx)-\int_{\mathbb{R}^d} f(x)\, \mathcal{V}_{R}^{r,s}(K_0;dx)\bigg|\\
&=\bigg|\int_{K^R }f(p_K(x))\,p_K(x)^r(x-p_K(x))^s \, dx\nonumber\\
&\qquad\qquad\qquad\qquad-\int_{K_0^R }f(p_{K_0}(x))p_{K_0}(x)^r(x-p_{K_0}(x))^s \, dx\bigg| \nonumber \\ &\leq {I}+{II},\label{AB} \end{align} where $I$ is the integral \begin{align*}
\int_{K^R \cap K_0^R }|f(p_K(x))p_K(x)^r(x-p_K(x))^s-f(p_{K_0}(x))\,p_{K_0}(x)^r(x-p_{K_0}(x))^s |\,dx \end{align*} and \begin{align*} II={{\rho}^r}R^s \mathcal{H}^{d}(K^R \Delta K_0^R). \end{align*} By \cite[Corollary 4.4]{chazal}, there is a constant $c_1=c_1(d,R,\rho)>0$ such that \begin{equation}\label{symdif} \mathcal{H}^{d}(K^R \Delta K_0^R) \leq c_1\,d_H(K,K_0) \end{equation} when $d_H(K,K_0)\leq {R}/{2}$. Replacing $c_1$ by a possibly even bigger constant, we can ensure that \eqref{symdif} also holds when $R/2\le d_H(K,K_0)\leq 2 \rho$. Hence, \begin{equation}\label{B} II \leq {c_2}\,d_H(K,K_0)^{\frac 12} \end{equation} with some constant $c_2=c_2(d,R,\rho,r,s)>0$.
Using the inequalities (and interpreting empty products as 1) \begin{align}\label{product}
\bigg|\bigodot_{i=1}^m y_i - \bigodot_{i=1}^m z_i\bigg|\leq
\bigg|\bigotimes_{i=1}^m y_i - \bigotimes_{i=1}^m z_i\bigg|\leq \sum_{j=1}^m |y_j-z_j|\prod_{i=1}^{j-1} |y_i| \prod_{i=j+1}^m |z_i|, \end{align} with $m=r+s$ and the rank-one tensors
\[
\begin{array}{lcl}
y_1=\ldots=y_r=p_K(x), &\qquad& y_{r+1}=\ldots=y_{r+s}=x-p_K(x),\\
z_1=\ldots=z_r=p_{K_0}(x), &\qquad& z_{r+1}=\ldots=z_{r+s}=x-p_{K_0}(x),
\end{array}
\] we get \begin{align*}
|f{}&(p_K(x))\, p_K(x)^r(x-p_K(x))^s-f(p_{K_0}(x))\,p_{K_0}(x)^r(x-p_{K_0}(x))^s | \\
&\leq |f(p_K(x))-f(p_{K_0}(x)) | |p_K(x)|^{r} |x-p_{K}(x)|^s \\
&+|f(p_{K_0}(x))| \sum_{j=1}^r |p_K(x)-p_{K_0}(x)||p_K(x)|^{j-1} |p_{K_0}(x)|^{r-j}|x-p_{K_0}(x)|^s \\
&+|f(p_{K_0}(x))| \sum_{j=1}^s |p_K(x)-p_{K_0}(x)||p_K(x)|^{r} |x-p_{K}(x)|^{j-1}|x-p_{K_0}(x)|^{s-j}. \end{align*}
Since we assumed that $|f|_{bL}\le 1$, we get
\begin{align}\nonumber I&\leq (r+s+1)\max\{\rho,1\}^r\max\{R,1\}^s\int_{K^R \cap K_0^R } |p_K(x)-p_{K_0}(x)|\, dx\\ &\leq c_3\, d_H(K,K_0)^{\frac{1}{2}}.\label{A} \end{align} The existence of the constant $c_3=c_3(d,R,\rho,r,s)$ in the last inequality is guaranteed by Proposition \ref{CHAZProp} with $K^R \cap K_0^R$ as the set $E$, because this choice of $E$ satisfies $\diam(E \cup \{0\})\leq 2(\rho + R)$. \end{proof}
When $r=s=0$ and $f=1$, the above proof simplifies to Inequality \eqref{symdif} as $I$ vanishes. Hence we obtain the following strengthening of the theorem, which is relevant for the estimation of intrinsic volumes.
\begin{thm}\label{IVconverge} Let $R,\rho>0$. Then there is a constant $C_3=C_3(d,R,\rho)>0$ such that \begin{equation*}
\Big|\mathcal{V}_{R}^{0,0}(K)-\mathcal{V}_{R}^{0,0}(K_0) \Big|\leq C_3\, d_H(K,K_0) \end{equation*} for all compact sets $K,K_0\subseteq B(0,\rho)$. \end{thm}
For local tensors, the proof of Theorem \ref{converge} can also be adapted to show a convergence result.
\begin{thm}\label{locallip} Let $r,s\in\mathbb{N}_0$ and $R>0$. If $K_i \to K$ with respect to the Hausdorff metric on ${\mathcal C}^d$, as $i\to \infty$, then $\mathcal{V}_{R}^{r,s}(K_i;A)\to \mathcal{V}_{R}^{r,s}(K;A)$ in the tensor norm, for every Borel set $A\subseteq\mathbb{R}^d$ which satisfies \begin{equation}\label{4.3exceptional} \mathcal{H}^d(p_K^{-1}(\partial A)\cap K^R)=0. \end{equation} \end{thm} \begin{proof} Convergence of tensors is equivalent to coordinate-wise convergence. Hence, it is enough to show that the coordinates satisfy $$\mathcal{V}_{R}^{r,s}(K_i;A)_{i_1\dots i_{r+s}}\to \mathcal{V}_{R}^{r,s}(K;A)_{i_1\dots i_{r+s}}\qquad\text{as $i\to\infty$},$$ for all choices of indices ${i_1\dots i_{r+s}}$; see the notation at the beginning of Section~\ref{minkowski}.
We write $T_K(x)=p_K(x)^r(x-p_K(x))^s$. Then \begin{equation*} \mathcal{V}_{R}^{r,s}(K;A)_{i_1\dots i_{r+s}}=\int_{K^R} \mathds{1}_A(p_K(x))T_K(x)_{i_1\dots i_{r+s}}\, dx \end{equation*} is a signed measure. Let $T_K(x)_{i_1\dots i_{r+s}}^+$ and $T_K(x)_{i_1\dots i_{r+s}}^-$ denote the positive and negative part of $T_K(x)_{i_1\dots i_{r+s}}$, respectively. Then \begin{equation*} \mathcal{V}_{R}^{r,s}(K;A)^{\pm}_{i_1\dots i_{r+s}}=\int_{K^R} \mathds{1}_A(p_K(x))T_K(x)_{i_1\dots i_{r+s}}^{\pm}\,dx \end{equation*} are non-negative measures such that \begin{equation*} \mathcal{V}_{R}^{r,s}(K;\cdot)_{i_1\dots i_{r+s}}=\mathcal{V}_{R}^{r,s}(K;\cdot)_{i_1\dots i_{r+s}}^+-\mathcal{V}_{R}^{r,s}(K;\cdot)_{i_1\dots i_{r+s}}^-. \end{equation*}
The proof of Theorem \ref{converge} can immediately be generalized to show that $\mathcal{V}_{R}^{r,s}(K_i;\cdot)^{\pm}_{i_1\dots i_{r+s}}$ converges to $\mathcal{V}_{R}^{r,s}(K;\cdot)^{\pm}_{i_1\dots i_{r+s}}$ in the bounded Lipschitz norm (as $i\to\infty$), and hence the measures converge weakly. In particular, they converge on every continuity set of $\mathcal{V}_{R}^{r,s}(K;\cdot)^{\pm}_{i_1\dots i_{r+s}}$. If $\mathcal{H}^d(p_K^{-1}(\partial A)\cap K^R)=0$, then $A$ is such a continuity set. \end{proof}
\begin{remark} Though relatively mild, the condition $\mathcal{H}^d(p_K^{-1}(\partial A)\cap K^R)=0$ can be hard to control if $K$ is unknown. It is satisfied if, for instance, $K$ and $A$ are smooth and their boundaries intersect transversely. A special case of this is when $K$ is a smooth surface and $A$ is a small ball centered on the boundary of $K$. This is the case in the application from \cite{merigot} that was described in the introduction. Examples where it is not satisfied are when $A=K$ or when $K$ is a polytope intersecting $\partial A$ at a vertex. \end{remark}
\begin{remark}\label{Rem4.6new} Let $f:\mathbb{R}^d\to\mathbb{R}$ be a bounded measurable function. We define $$ \mathcal{V}_{R}^{r,s}(K;f):=\int_{\mathbb{R}^d} f(x)\, \mathcal{V}_{R}^{r,s}(K;dx). $$ Hence $\mathcal{V}_{R}^{r,s}(K;A)=\mathcal{V}_{R}^{r,s}(K;\mathds{1}_A)$ for every Borel set $A\subseteq\mathbb{R}^d$. Then, Theorem \ref{locallip} is equivalent to saying that, for all continuous test functions $f:\mathbb{R}^d\to\mathbb{R}$, $$ \mathcal{V}_{R}^{r,s}(K_i;f)\to \mathcal{V}_{R}^{r,s}(K;f),\quad \text{as }i\to\infty, $$ in the tensor norm, whenever $K_i \to K$ with respect to the Hausdorff metric on ${\mathcal C}^d$, as $i\to \infty$.
Thus, if one is interested in the local behavior of $\Phi^{r,s}_k(K; \cdot)$ in a neighborhood $A$, as in \cite{merigot}, then one can study $$ \Phi^{r,s}_k(K;f):=\int_{\Sigma} f(x)x^ru^s\, \Lambda_k(K;d(x,u)), $$ where $f$ is a continuous function with support in $A$. This avoids the extra condition \eqref{4.3exceptional}.
\end{remark}
As the matrix $A_{R_0,\ldots,R_d}^{r,s}$ in the definition \eqref{defEst} of $\hat \Phi_k^{r,s}(K_0;A\times S^{d-1})$ does not depend on the set $K_0$, the above results immediately yield a consistency result for the estimation of the Minkowski tensors. We formulate this only for $A=\mathbb{R}^d$.
\begin{corollary}\label{corNew}
Let $\rho>0$ and let $K\subseteq B(0,\rho)$ be a compact set of positive reach such that $\reach(K)>R_d>\ldots>R_0>0$.
Let $K_0\subseteq B(0,\rho)$ be a compact set.
Then there is a constant $C_4=C_4(d,R_0,\ldots,R_d,\rho)$ such that
\[
\left| \hat{\Phi}^{0,0}_k(K_0)-\Phi^{0,0}_k(K)\right|\le C_4\, d_H(K_0,K),
\]
for all $k\in\{0,\ldots,d\}$.
For $r,s\in {\mathbb N}_0$ there is a constant
$C_5=C_5(d,R_0,\ldots,R_d,\rho,r,s)$ such that
\[
\left| \hat{\Phi}^{r,s}_k(K_0)-\Phi^{r,s}_k(K)\right|\le C_5\, d_H(K_0,K)^{\frac12},
\]
for all $k\in\{0,\ldots,d-1\}$.
\end{corollary}
Finally, we state the convergence results for the modified estimators for $\Phi_k^{r,s}(K;B)$, where $B\subseteq \Sigma$
is a Borel set, that were defined in Section \ref{general1}. The map $x\mapsto {x}/{|x|}$ is Lipschitz on $\mathbb{R}^d \backslash \text{int}\,B(0,{R}/{2})$ with Lipschitz constant ${4}/{R}$, and therefore the mapping $u_K$, which was defined after \eqref{baddef}, satisfies \begin{equation*}
|u_K(x)-u_{K_0}(x)|\leq \tfrac{4}{R}|p_K(x)-p_{K_0}(x)|, \end{equation*} for $x\in (K^R \backslash K^{R/2}) \cap(K_0^R \backslash K_0^{R/2})$. Moreover, \begin{equation*} \left(K^R \backslash K^{R/2}\right) \Delta \left(K_0^R \backslash K_0^{R/2}\right) \subseteq \left(K^R \Delta K_0^{R}\right) \cup \left(K^{R/2} \Delta K_0^{R/2}\right). \end{equation*} Using this, it is straightforward to generalize the proofs of Theorems \ref{converge} and \ref{locallip} to obtain the following result.
\begin{thm}\label{convergeloc2} Let $R,\rho>0$ and $r,s\in {\mathbb N}_0$ be given. Then there is a positive constant $C_6=C_6(d,R,\rho,r,s)$ such that \begin{align*} d_{bL}(\overline{\mathcal{V}}_{R}^{r,s}(K;\cdot),\overline{\mathcal{V}}_{R}^{r,s}(K_0;\cdot )) \leq C_6 d_H(K,K_0)^{\frac{1}{2}} \end{align*} for all compact sets $K,K_0\subseteq B(0,\rho)$. \end{thm}
This in turn leads to the next convergence result.
\begin{thm} Let $r,s\in\mathbb{N}_0$ and $R>0$. If $K,K_i\in \mathcal{C}^d$ are compact sets such that $K_i\to K$ in the Hausdorff metric, as $i\to\infty$, then $\overline{\mathcal{V}}_{R}^{r,s}(K_i;B)$ converges to $\overline{\mathcal{V}}_{R}^{r,s}(K;B)$ in the tensor norm, for any measurable set $B\subseteq \Sigma$ satisfying \begin{equation*} \mathcal{H}^d(\{x\in K^R \mid (p_K(x), u_K(x))\in \partial B\})=0. \end{equation*} Here $\partial B$ is the boundary of $B$ as a subset of $\Sigma$.
If $B$ satisfies this condition and $\reach(K)>R_d$, then \begin{equation*} \lim_{i \to \infty} \overline{\Phi}_{k}^{r,s}(K_i;B) = {\Phi_{k}^{r,s}(K;B)} . \end{equation*} \end{thm}
\begin{remark} We can argue as in Remark \ref{Rem4.6new} to see that if $K,K_i\in \mathcal{C}^d$ are compact sets such that $K_i\to K$ in the Hausdorff metric, as $i\to\infty$, then $$ \overline{\mathcal{V}}_{R}^{r,s}(K_i;g)\to \overline{\mathcal{V}}_{R}^{r,s}(K;g),\quad \text{as }i\to\infty, $$ whenever $g:\Sigma\to\mathbb{R}$ is a continuous test function and $\overline{\mathcal{V}}_{R}^{r,s}(K;g)$ is defined similarly as before.
If $K$ satisfies $\text{Reach}(K)>R_d$, we get $\overline{\Phi}_{k}^{r,s}(K_i;g) \to {\Phi_{k}^{r,s}(K;g)}$, as $i\to\infty$. \end{remark}
\section{Application to digital images}\label{DI} Our main motivation for this paper is the estimation of Minkowski tensors from digital images. Recall that we model a black-and-white digital image of $K\subseteq \mathbb{R}^d$ as the set $K\cap a\mathbb{L}$, where $\mathbb{L}\subseteq \mathbb{R}^d$ is a fixed lattice and $a>0$. We refer to \cite{barvinok02} for basic information about lattices.
The lower dimensional parts of $K$ are generally invisible in the digital image. When dealing with digital images, we will therefore always assume that the underlying set is topologically regular, which means that it is the closure of its own interior.
In digital stereology, the underlying object $K$ is often assumed to belong to one of the following two set classes: \begin{itemize} \item
$K$ is called \emph{$\delta$-regular} if it is topologically regular and the reach of its closed complement ${\rm cl}({\mathbb{R}^d \backslash K})$ and the reach of $K$ itself are both at least $\delta>0$. This is a kind of smoothness condition on the boundary, ensuring in particular that $\partial K$ is a $C^1$ manifold (see the discussion after Definition 1 in \cite{svane15b}). \item $K$ is called \emph{polyconvex} if it is a finite union of compact convex sets. While convex sets have infinite reach, polyconvex sets in general do not have positive reach. Also note that for a compact convex set $K\subseteq\mathbb{R}^d$, the set ${\rm cl}({\mathbb{R}^d \backslash K})$ need not have positive reach. \end{itemize} It should be observed that for a compact set $K\subseteq \mathbb{R}^d$ both assumptions imply that the boundary of $K$ is a $(d-1)$-rectifiable set in the sense of \cite{Federer69} (i.e., $\partial K$ is the image of a bounded subset of $\mathbb{R}^{d-1}$ under a Lipschitz map), which is a much weaker property that will be sufficient for the analysis in Section \ref{volten}.
\subsection{The volume tensors}\label{volten} Simple and efficient estimators for the volume tensors $\Phi_d^{r,0}(K)$ of a (topologically regular) compact set $K$ are already known and are usually based on the approximation of $K$ by the union of all pixels (voxels) with midpoint in $K$. This leads to the estimator \begin{equation*} \phi_d^{r,0}(K\cap a\mathbb{L} ) = \frac1 {r!} \sum_{z \in K\cap a\mathbb{L}} \int_{z+aV_0(\mathbb{L})}x^r\,dx, \end{equation*} where $V_0(\mathbb{L})$ is the Voronoi cell of 0 in the Voronoi decomposition generated by $\mathbb{L}$. This, in turn, can be approximated by \begin{equation*} \hat{\phi}_d^{r,0}(K\cap a\mathbb{L} ) = \frac{a^{d}}{r!} \mathcal{H}^d\left(V_0(\mathbb{L})\right) \sum_{z \in K\cap a\mathbb{L}} z^r. \end{equation*} When $r\in \{0,1\}$, we even have ${\phi}_d^{r,0}(K\cap a\mathbb{L} )=\hat{\phi}_d^{r,0}(K\cap a\mathbb{L} )$.
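For the standard lattice $\mathbb{L}=\mathbb{Z}^d$, where $\mathcal{H}^d(V_0(\mathbb{L}))=1$, the estimator $\hat{\phi}_d^{r,0}$ reduces to a sum over the black pixel (voxel) midpoints. A minimal Python sketch (ours, purely illustrative) reads as follows.
\begin{verbatim}
import math
import numpy as np

def digital_volume_tensor(black_midpoints, a, r):
    # hat{phi}_d^{r,0}(K \cap aL) for the standard lattice L = Z^d, where
    # black_midpoints is an (n, d) array containing the points of K \cap aL
    z = np.asarray(black_midpoints, dtype=float)
    d = z.shape[1]
    total = np.zeros((d,) * r)
    for x in z:
        t = np.array(1.0)
        for _ in range(r):
            t = np.tensordot(t, x, axes=0)      # the tensor power x^r
        total = total + t
    return a ** d / math.factorial(r) * total
\end{verbatim}
For $r=0$ this is just $a^d$ times the number of black pixels, and for $r=1$ it is $a^d$ times the sum of the black pixel midpoints, in accordance with the formula above.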
Choose $C>0$ such that $V_0(\mathbb{L}) \subseteq B(0,C)$. Then $$ K\Delta \bigcup_{z\in K\cap a\mathbb{L}} (z+aV_0(\mathbb{L}))\subseteq (\partial K)^{ aC}. $$ In fact, if $x\in \left[\bigcup_{z\in K\cap a\mathbb{L}} (z+aV_0(\mathbb{L}))\right]\setminus K$, then there is some $z\in K\cap a\mathbb{L}$ such that $x\in z+aV_0(\mathbb{L})$ and $x\notin K$. Since $z\in K$ and $x\notin K$, we have $[x,z]\cap\partial K\neq\emptyset$. Moreover, $x-z\in aV_0(\mathbb{L})\subseteq
B(0,aC)$, and hence $|x-z|\le aC$. This shows that $x\in(\partial K)^{aC}$. Now assume that $x\in K$ and $x\notin (\partial K)^{aC}$. Then $B(x,\rho)\subseteq K$ for some $\rho>aC$. Since $\bigcup_{z\in a\mathbb{L}}(z+aV_0(\mathbb{L}))=\mathbb{R}^d$, there is some $z\in a\mathbb{L}$ such that $x\in z+aV_0(\mathbb{L})$. Hence $x-z\in aV_0(\mathbb{L})\subseteq B(0,aC)$. We conclude that $z\in B(x,aC)\subseteq K$, therefore $z\in K\cap a\mathbb{L}$ and thus $x\in \bigcup_{z\in K\cap a\mathbb{L}} (z+aV_0(\mathbb{L}))$.
Hence \begin{equation}\label{Oabound}
|{\phi}_d^{r,0}(K\cap a\mathbb{L} ) - {\Phi}_d^{r,0}(K)| \leq \frac{1} {r!} \int_{(\partial K)^{ aC}}|x|^r \, dx. \end{equation} If $\mathcal{H}^{d}(\partial K)=0$, then the integral on the right-hand side goes to zero by monotone convergence, so \begin{equation}\label{convzero} \lim_{a\to 0_+}{\phi}_d^{r,0}(K\cap a\mathbb{L} ) ={\Phi}_d^{r,0}(K). \end{equation} If $\partial K$ is $(d-1)$-rectifiable in the sense of \cite[Section 3.2.14]{Federer69}, that is, $\partial K$ is the image of a bounded subset of $\mathbb{R}^{d-1}$ under a Lipschitz map, then $\mathcal{H}^{d}(\partial K)=0$. Since $\partial K$ is compact, \cite[Theorem 3.2.39]{Federer69} implies that $\lim_{a\to 0_+}\mathcal{H}^d((\partial K)^{ aC})/a $ exists and equals a fixed multiple of $\mathcal{H}^{d-1}(\partial K)$ which is finite. Hence, \eqref{Oabound} shows that the speed of convergence in \eqref{convzero} is $O(a)$ as $a\to 0_+$.
Inequality \eqref{product} yields that $|x^r-z^{r}|\leq aC r(|x|+aC)^{r-1}$ whenever $x\in z+ aV_0(\mathbb{L})$ and $r\ge 1$. Therefore, \begin{align*}
|\hat{\phi}_d^{r,0}(K\cap a\mathbb{L} ) - \phi_d^{r,0}(K\cap a \mathbb{L})|{}& \leq \frac{aC } {(r-1)!} \sum_{z \in K\cap a\mathbb{L}} \int_{z+aV_0(\mathbb{L})}(|x|+aC)^{r-1}\, dx\\
& \leq \frac{aC } {(r-1)!} \int_{K^{aC}} (|x|+aC)^{r-1} \,dx, \end{align*} which shows that \begin{equation*} \lim_{a\to 0_+}\hat{\phi}_d^{r,0}(K\cap a\mathbb{L} ) ={\Phi}_d^{r,0}(K), \end{equation*} provided that $\mathcal{H}^d(\partial K)=0$. If $\partial K$ is $(d-1)$-rectifiable, then the speed of convergence is of the order $O(a)$.
Hence, we suggest simply using the estimators $\hat{\phi}_d^{r,0}(K\cap a\mathbb{L} )$ for the volume tensors. These estimators can be computed much faster and more directly than $\hat{\Phi}_d^{r,0}(K\cap a\mathbb{L} )$. Moreover, they do not require an estimate for the reach of $K$, and they converge for a much larger class of sets than those of positive reach.
\subsection{Convergence for digital images} For the estimation of the remaining tensors we suggest to use the Voronoi tensor measures. Choosing $K_0=K \cap a\mathbb{L}$ in \eqref{algorithm}, we obtain \begin{equation}\label{algorithm2} \mathcal{V}_{R}^{r,s}(K\cap a\mathbb{L} ;A)= \sum_{x\in K \cap a\mathbb{L} \cap A } x^r \int_{B(x,R)\cap V_x(K\cap a\mathbb{L})} (y-x)^s \,dy, \end{equation} where $A\subseteq\mathbb{R}^d$ is a Borel set.
To show some convergence results in Corollary \ref{convercor} below, we first note that the digital image converges to the original set in the Hausdorff metric.
\begin{lemma}\label{dHbounds} If $K$ is compact and topologically regular, then \begin{equation*} \lim_{a\to 0_+} d_H(K,K\cap a\mathbb{L}) = 0. \end{equation*} If $K $ is $\delta$-regular, then $d_H(K,K\cap a\mathbb{L})$ is of order $O(a)$. The same holds if $K$ is topologically regular and polyconvex. \end{lemma}
\begin{proof} Recall from \cite[p.~311]{barvinok02} that $ \mu(\mathbb{L})=\max_{x\in\mathbb{R}^d}\text{dist}(x,\mathbb{L}) $ is well defined; it is called the \emph{covering radius} of $\mathbb{L}$.
Let $\varepsilon>0$ be given. Since $K$ is compact, there are points $x_1,\ldots,x_m\in K$ such that $$ K\subseteq\bigcup_{i=1}^m B(x_i,\varepsilon). $$ Using the fact that $K$ is topologically regular, we conclude that there are points $y_i\in\text{int}(K)\cap \text{int}(B(x_i,2\varepsilon))$ for $i=1,\ldots,m$. Hence, there are $\varepsilon_i\in (0,2\varepsilon)$ such that $ B(y_i,\varepsilon_i)\subseteq K\cap B(x_i,2\varepsilon)$ for $i=1,\ldots,m$. Let $0<a<\min\{\varepsilon_i/\mu(\mathbb{L}) \mid i=1,\ldots,m\}$. Since $\varepsilon_i/a>\mu(\mathbb{L})$ it follows that $a\mathbb{L} \cap B(y_i,\varepsilon_i)\neq\emptyset$, for $i=1,\ldots,m$. Thus we can choose $z_i\in a\mathbb{L} \cap B(y_i,\varepsilon_i)\subseteq a\mathbb{L}\cap K$ for
$i=1,\ldots,m$. By the triangle inequality, we have $|z_i-x_i|\le \varepsilon_i+2\varepsilon\le 4\varepsilon$, and hence $x_i\in (K\cap a\mathbb{L})+B(0,4\varepsilon)$, for $i=1,\ldots,m$. Therefore, $K\subseteq (K\cap a\mathbb{L}) +B(0,5\varepsilon)$ if $a>0$ is sufficiently small.
Assume that $K$ is $\delta$-regular, for some $\delta>0$. We choose $0<a<\delta/(2\mu(\mathbb{L}))$. Since $a\mu(\mathbb{L})<\delta/2$, for any $x\in K$ there is a ball $B(y,a\mu(\mathbb{L}))$ of radius $a\mu(\mathbb{L})$ such that $x\in B(y,a\mu(\mathbb{L}))\subseteq K$. From $a\mathbb{L}\cap B(y,a\mu(\mathbb{L}))\neq\emptyset$ we
conclude that there is a point $z\in K\cap a\mathbb{L}$ with $|x-z|\le 2a\mu(\mathbb{L})$. Hence $x\in (K\cap a\mathbb{L}) +B(0,2a\mu(\mathbb{L}))$, and therefore $d_H(K,K\cap a\mathbb{L})\le 2a\mu(\mathbb{L})$.
Finally, we assume that $K$ is topologically regular and polyconvex. Then $K$ is the union of finitely many compact convex sets with interior points. Hence, for the proof we may assume that $K$ is convex with $B(0,\rho)\subseteq K$ for a fixed $\rho>0$. Choose $0<a<\rho/(2\mu(\mathbb{L}))$ and put $r=2a\mu(\mathbb{L})<\rho$. If $x\in K$, then $B((1-r/\rho)x,r)\subseteq K$ and $B((1-r/\rho)x,r)$ contains a point $z\in a\mathbb{L}$. Since $$
|x-z|\le r+({r}/{\rho})|x|\le 2a\mu(\mathbb{L})\left(1+\text{diam}(K)/\rho\right), $$ we get $$ K\subseteq (K\cap a\mathbb{L}) +B\big(0,2a\mu(\mathbb{L})\left(1+\text{diam}(K)/\rho \right)\big), $$ which completes the argument. \end{proof}
Thus Theorems \ref{converge} and \ref{IVconverge} and Corollary \ref{corNew} together with Lemma \ref{dHbounds} yield the following result. \begin{corollary}\label{convercor} If $K $ is compact and topologically regular, then \begin{align*} &\lim_{a\to 0_+} d_{bL}(\mathcal{V}_{R}^{r,s}(K;\cdot),\mathcal{V}_{R}^{r,s}(K\cap a\mathbb{L};\cdot)) = 0,\\ &\lim_{a\to 0_+} \mathcal{V}_{R}^{r,s}(K\cap a\mathbb{L}) = \mathcal{V}_{R}^{r,s}(K). \end{align*} If, in addition, $K$ has positive reach, then \begin{align}\label{multigrid} &\lim_{a\to 0_+} \hat{\Phi}^{r,s}_k(K\cap a\mathbb{L}) = {\Phi}^{r,s}_k(K). \end{align} If $K$ is $\delta$-regular or a topologically regular convex set, then the speed of convergence is $O(a)$ when $r=s=0$ and $O(\sqrt{a})$ otherwise. \end{corollary} The property \eqref{multigrid} means that $\hat{\Phi}^{r,s}_k(K\cap a\mathbb{L})$ is multigrid convergent for the class of sets of positive reach as defined in the introduction. A similar statement about local tensors, but without the speed of convergence, can be made. We omit this here.
\subsection{Possible refinements of the algorithm for digital images}\label{refinement} We first describe how the number of necessary radii $R_0<R_1<\ldots<R_d$ in \eqref{defEst} can be reduced by one if $s=0$ and $A=\mathbb{R}^d$. Setting $s=0$ and $A=\mathbb{R}^d$ and subtracting
$(r!)\Phi_d^{r,0}(K)$ on both sides of Equation \eqref{steiner} yields \begin{align}\label{modstein} \int_{K^R\backslash K} p_K(x)^r \,dx = \mathcal{V}_{R}^{r,0}(K)-(r!)\Phi_d^{r,0}(K) = (r!) \sum_{k=1}^d \kappa_{k} R^{k} \Phi_{d-k}^{r,0}(K). \end{align} As mentioned in Section \ref{volten}, the volume tensor $\Phi_d^{r,0}(K)$ can be estimated by
$\hat{\phi}_d^{r,0}(K\cap a\mathbb{L})$. We may take $\mathcal{V}_{R}^{r,0}(K\cap a\mathbb{L})-(r!)\hat{\phi}_d^{r,0}(K\cap a\mathbb{L})$ as an improved estimator for \eqref{modstein}. This corresponds to replacing the integration domains $B(x,R)\cap V_x(K\cap a\mathbb{L})$ in \eqref{algorithm2} by \[ (B(x,R)\cap V_x(K\cap a\mathbb{L}))\backslash V_x(a\mathbb{L}). \] This makes sense since $V_x(a\mathbb{L})$ is likely to be contained in $K$ while the left-hand side of \eqref{modstein} is an integral over $K^R\backslash K$. The Minkowski tensors can now be isolated from only $d$ equations of the form \eqref{modstein} with $d$ different values of $R$.
We now suggest a slightly modified estimator for the Minkowski tensors satisfying the same convergence results as $\hat{\Phi}_k^{r,s}(K\cap a\mathbb{L})$ but where the number of summands in \eqref{algorithm2} is considerably reduced. As the volume tensors can easily be estimated with the estimators in Section \ref{volten}, we focus on the tensors with $k<d$.
Let $K$ be a compact set. We define the {\em Voronoi neighborhood} $N_\mathbb{L}(0)$ of $0$ to be the set of points $y\in \mathbb{L}$ such that the Voronoi cells $V_0(\mathbb{L})$ and $V_y(\mathbb{L})$ of $0$ and $y$, respectively, have exactly one common $(d-1)$-dimensional face. Similarly, for $z\in \mathbb{L}$ the Voronoi neighborhood $N_\mathbb{L}(z)$ of $z$ is defined, and thus clearly $N_\mathbb{L}(z)=z+N_\mathbb{L}(0)$. When $\mathbb{L}\subset \mathbb{R}^2$ is the standard lattice, $N_\mathbb{L}(z)$ consists of the four points in $\mathbb{L}$ that are neighbors of $z$ in the usual $4$-neighborhood \cite{OM}. Define $I(K\cap a\mathbb{L})$ to be the set of points $z\in K\cap a\mathbb{L}$ such that $N_{a\mathbb{L}}(z)\subseteq K\cap a\mathbb{L}$.
The relative complement $B(K\cap a\mathbb{L})=(K\cap a\mathbb{L})\setminus I(K\cap a\mathbb{L})$ of $I(K\cap a\mathbb{L})$ can be considered as the set of lattice points in $K\cap a\mathbb{L}$ that are close to the boundary of the given set $K$.
We modify \eqref{algorithm2} by removing contributions from $I(K\cap a\mathbb{L})$ and define
\begin{equation}\label{algorithm3}
\tilde{\mathcal{V}}_{R}^{r,s}(K\cap a\mathbb{L} ;A)= \sum_{x\in B(K \cap a\mathbb{L}) \cap A } x^r \int_{B(x,R)\cap V_x(K\cap a\mathbb{L})} (y-x)^s\, dy.
\end{equation} Assuming that $K$ has positive reach, let $0<R_0<R_1<\ldots<R_d< \textrm{Reach}(K)$. We write again $K_0$ for $K\cap a\mathbb{L}$. Then we obtain the estimators \begin{align} \begin{pmatrix} {\tilde{\Phi}}_{d}^{r,s}(K_0;A\times S^{d-1})\\ \vdots \\ {\tilde{\Phi}}_{0}^{r,s}(K_0;A\times S^{d-1}) \end{pmatrix} =\left(A_{R_0,\ldots,R_d}^{r,s}\right)^{-1} \begin{pmatrix} \tilde{\mathcal{V}}_{R_0}^{r,s}(K_0;A)\\ \vdots \\ \tilde{\mathcal{V}}_{R_d}^{r,s}(K_0;A) \end{pmatrix}\label{defEstcheck} \end{align} with $A_{R_0,\ldots,R_d}^{r,s}$ given by \eqref{matrixA}.
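To make the inversion step concrete, the following is a minimal sketch (our own illustration, not the authors' implementation) of the scalar case $r=s=0$: for a finite sample $K_0$ the measure $\mathcal{V}_{R}^{0,0}(K_0;\mathbb{R}^d)$ is just the volume of the $R$-parallel set of the point cloud, and, assuming that in this case the matrix \eqref{matrixA} reduces to the Steiner coefficients $\kappa_k R^k$ appearing in \eqref{modstein}, the intrinsic volumes are recovered by solving a $(d+1)\times(d+1)$ linear system. The parallel volumes are approximated by Monte Carlo sampling with a nearest-neighbour query, which implicitly performs the Voronoi assignment.
\begin{verbatim}
# Minimal sketch of the r = s = 0 case; not the implementation used in the paper.
import numpy as np
from math import gamma, pi
from scipy.spatial import cKDTree

def kappa(k):
    """Volume of the k-dimensional unit ball."""
    return pi ** (k / 2) / gamma(k / 2 + 1)

def parallel_volume(points, R, n_samples=200_000, seed=0):
    """Monte Carlo estimate of vol({y : dist(y, points) <= R})."""
    rng = np.random.default_rng(seed)
    lo, hi = points.min(axis=0) - R, points.max(axis=0) + R
    samples = rng.uniform(lo, hi, size=(n_samples, points.shape[1]))
    dist, _ = cKDTree(points).query(samples)   # nearest sample point = Voronoi assignment
    return np.prod(hi - lo) * np.mean(dist <= R)

def intrinsic_volume_estimates(points, radii):
    """Solve vol(K0^{R_i}) = sum_k kappa_k R_i^k V_{d-k} for the intrinsic volumes."""
    d = points.shape[1]
    assert len(radii) == d + 1
    A = np.array([[kappa(k) * R ** k for k in range(d + 1)] for R in radii])
    v = np.array([parallel_volume(points, R) for R in radii])
    sol = np.linalg.solve(A, v)               # sol[k] estimates V_{d-k}
    return sol[::-1]                          # (V_0, ..., V_d)

# Example: lattice points inside the unit disc, spacing a = 0.05
a = 0.05
g = np.arange(-1.2, 1.2, a)
X, Y = np.meshgrid(g, g)
pts = np.c_[X.ravel(), Y.ravel()]
pts = pts[np.hypot(pts[:, 0], pts[:, 1]) <= 1.0]
print(intrinsic_volume_estimates(pts, radii=[0.2, 0.3, 0.4]))   # roughly (1, pi, pi)
\end{verbatim}
As in the proposition below, the radii should exceed $aC$ and stay below $\mathrm{Reach}(K)$.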
Working with $\tilde{\mathcal{V}}_{R}^{r,s}(K\cap a\mathbb{L};A)$ reduces the workload considerably. For instance, when $K$ is $\delta$-regular or polyconvex and topologically regular, the number of elements in $I(K\cap a\mathbb{L})$ increases with $a^{-d}$, whereas the number of elements in $B(K \cap a\mathbb{L})$ only increases with $a^{-(d-1)}$ as $a\to 0_+$. The set $I(K\cap a\mathbb{L})$ can be obtained from the digital image of $K$ in linear time using a linear filter.
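For the standard cubic lattice the Voronoi neighborhood of a lattice point is its usual $2d$-neighborhood, so this linear filter is simply a morphological erosion. A minimal sketch (assuming a boolean image whose true entries are the points of $K\cap a\mathbb{L}$):
\begin{verbatim}
# Sketch of the linear-time filter separating interior and boundary lattice points.
import numpy as np
from scipy.ndimage import binary_erosion, generate_binary_structure

def split_interior_boundary(image):
    """image: boolean array, True where the lattice point belongs to K.
    Returns (interior, boundary) with interior = I(K cap aL) and
    boundary = B(K cap aL), the points of K cap aL not in I(K cap aL)."""
    cross = generate_binary_structure(image.ndim, 1)   # 2d-neighbourhood (4-neighbourhood in 2D)
    interior = binary_erosion(image, structure=cross, border_value=0)
    return interior, image & ~interior
\end{verbatim}
Only the points in \texttt{boundary} enter the sum \eqref{algorithm3}.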
Moreover, we have the following convergence result.
\begin{proposition} Let $K$ be a topologically regular compact set with positive reach and let $C$ be such that $V_0(\mathbb{L})\subseteq B(0,C)$. If $A$ is a Borel set in $\mathbb{R}^d$ and $aC<R_0<R_1<\ldots<R_d<\mathrm{Reach}(K)$ and $K_0=K\cap a\mathbb{L}$, then \[ \tilde{\Phi}_{k}^{r,s}(K_0;A\times S^{d-1})=\hat{\Phi}_{k}^{r,s}(K_0;A\times S^{d-1}) \] for all $k\in\{0,\ldots,d-1\}$, whenever $s=0$ or $s$ is odd. If $s$ is even and $k\in\{0,\ldots,d-1\}$, then \begin{equation*} \lim_{a\to 0_+} \tilde{\Phi}_{k}^{r,s}(K_0;A\times S^{d-1})=\lim_{a\to 0_+}\hat{\Phi}_{k}^{r,s}(K_0;A\times S^{d-1}). \end{equation*}
\end{proposition}
\begin{proof} Let $aC<R<\mathrm{Reach}(K)$.
For $x\in I(K\cap a\mathbb{L})$, we have
\[
B(x,R)\cap V_{x}(K\cap a\mathbb{L})=V_{x}(a\mathbb{L}),
\]
so the contribution of
$x$ to the sum in \eqref{algorithm2} is $(s!)x^r\Phi^{s,0}_d(V_{0}(a\mathbb{L}))$. It follows that
\begin{align}\label{Vred}
{\mathcal{V}}_{R}^{r,s}(K\cap a\mathbb{L} ;A)-\tilde{\mathcal{V}}_{R}^{r,s}(K\cap a\mathbb{L} ;A)=
(s!)\Phi^{s,0}_d(V_{0}(a\mathbb{L}))\sum_{x\in I(K\cap a\mathbb{L})\cap A}x^r.
\end{align}
For odd $s$ we have $\Phi^{s,0}_d(V_{0}(a\mathbb{L}))=0$, so the claim follows. For $s=0$ the right-hand side of
\eqref{Vred} does not vanish, but it is independent of $R$. A combination of
\[
\left(A_{R_0,\ldots,R_d}^{r,0}\right)^{-1}
\begin{pmatrix} 1\\1\\
\vdots
\\ 1 \end{pmatrix}=
\begin{pmatrix} (r!)^{-1}\\ 0\\
\vdots
\\
0 \end{pmatrix},
\] with \eqref{Vred}, \eqref{defEst} and \eqref{defEstcheck} gives the claim.
For even $s>0$, we have that $\Phi^{s,0}_d(V_{0}(a\mathbb{L}))=a^{d+s}\Phi^{s,0}_d(V_{0}(\mathbb{L}))$, while \begin{align*}
\left|\sum_{x\in I(K\cap a\mathbb{L})\cap A}x^r \right| &\leq \sum_{x\in I(K\cap a\mathbb{L})}|x|^r \\
&\leq \sup_{x\in K}|x|^r\sum_{x\in I(K\cap a\mathbb{L})}
\left(a^{d}{\mathcal H}^d(V_{0}(\mathbb{L}))\right)^{-1}{\mathcal H}^d(V_{x}(a\mathbb{L}))\\
&\leq \sup_{x\in K}|x|^r \cdot a^{-d}\cdot {\mathcal H}^d(V_0(\mathbb{L}))^{-1}\cdot \mathcal{H}^d(K^{aC}). \end{align*}
Therefore, the expression on the right-hand side of \eqref{Vred} converges to $0$.
\end{proof}
It should be noted that a similar modification for $\overline \Phi_k^{r,s}$ is not necessary. In fact the modified Voronoi tensor measure \eqref{modify} with $K=K_0$ has the advantage that small Voronoi cells that are completely contained in the $R_0/2 $-parallel set of $K\cap a\mathbb{L}$ do not contribute. In particular, contributions from $I(K\cap a\mathbb{L})$ are automatically ignored when $a$ is sufficiently small.
\section{Comparison to known estimators}\label{known} Most {existing} estimators of intrinsic volumes \cite{digital,lindblad,OM} and Minkowski tensors \cite{turk,mecke} are $n$-local for some $n\in \mathbb{N}$. The idea is to look at all $n\times \dotsm \times n$ pixel blocks in the image and count how many times each of the $2^{n^d}$ possible configurations of black and white points occurs. Each configuration is weighted by an element of $\mathbb{T}^{r+s}$, and $\Phi^{r,s}_k(K)$ is estimated as a weighted sum of the configuration counts. It is known that estimators of this type for intrinsic volumes other than ordinary volume are not multigrid convergent, even when $K$ is known to be a convex polytope; see \cite{am3}. {It is not difficult to see that there cannot be a multigrid convergent $n$-local estimator for the (even rank) tensors $\Phi_k^{0,2s}(K)$ with $k=0,\ldots,d-1$, $s\in\mathbb{N}$, for polytopes $K$, either. In fact, repeatedly taking the trace of such an estimator would lead to a multigrid convergent $n$-local estimator of the $k$th intrinsic volume, in contradiction to \cite{am3}.}
The algorithm presented in this paper is not $n$-local for any $n\in \mathbb{N}$. It is required in the convergence proof that the parallel radius $R$ is fixed while the resolution $a^{-1}$ goes to infinity. {The non-local operation in the definition of our estimator is the calculation of the Voronoi diagram.} The computation time for Voronoi diagrams of $k$ points is $O(k\log k + k^{\lfloor d/2\rfloor})$, see \cite{chazelle}, which is somewhat slower than $n$-local algorithms for which the computation time for $k$ data points is $O(k)$. The computation time can be improved by ignoring interior points as discussed in Section \ref{refinement}.
The idea to base digital estimators for intrinsic volumes on an inversion of the Steiner formula as in \eqref{matrix} has occurred before in \cite{spodarev,jan}. In both references, the authors define estimators for polyconvex sets which are not necessarily of positive reach. This more ambitious aim leads to problems with the convergence.
In \cite{spodarev}, the authors use a version of the Steiner formula for polyconvex sets given in terms of the Schneider index, see \cite{schneider}. Since its definition is, however, $n$-local in nature, the authors choose an $n$-local algorithm to estimate it. As already mentioned, such algorithms are not multigrid convergent.
In \cite{jan}, it is used that the intrinsic volumes of a polyconvex set can, on the one hand, be approximated by those of a parallel set with small parallel radius, and on the other hand, the closed complement of this parallel set has positive reach, so that its intrinsic volumes can be computed via the Steiner formula. The authors employ a discretization of the parallel volumes of digital images, but without showing that the convergence is preserved.
It is likely that the ideas of the present paper combined with the ones of \cite{jan} could be used to construct {multigrid} convergent digital algorithms for polyconvex sets. The price for this is that the notion of convergence in \cite{jan} is slightly artificial for practical purposes, requiring very small parallel radii in order to get good approximations and at the same time large radii compared to resolution.
In \cite{svane}, $n$-local algorithms based on grey-valued images are suggested. They are shown to converge to the true value when the resolution {tends} to infinity. However, they only apply to surface and certain mean curvature tensors. Moreover, they are hard to apply in practice, since they require detailed information about the underlying point spread function {which specifies the representation of the object as a grey-value image. If grey-value images are given, the}
algorithm of the present paper could be applied to thresholded images, but there may be more efficient ways to exploit the additional information of the grey-values.
\end{document} |
\begin{document}
\title{Concordance Rate of a Four-Quadrant Plot for Repeated Measurements}
\begin{abstract} Before new clinical measurement methods are implemented in clinical practice, it must be confirmed whether their results are equivalent to those of existing methods. The agreement of the trend between two methods is evaluated using the four-quadrant plot, which displays the changes in the differences of the two measurement methods' values between sequential time points, and the plot's concordance rate, which is the number of data points in the four-quadrant plot that agree with this trend divided by the number of all accepted data points. However, the conventional concordance rate does not consider the covariance between the data of individual subjects, which may affect its proper evaluation. Therefore, we propose a new concordance rate that is calculated per individual according to the number of agreements. Moreover, the proposed method introduces a parameter specifying the minimum number of concordant changes required between the two measurement techniques, which provides a more detailed interpretation of the degree of agreement. A numerical simulation conducted with several factors indicated that the proposed method yields a more accurate evaluation. We also analyze a real data set, compare the proposed method with the conventional approach, and conclude with a discussion of the implementation in clinical studies. \end{abstract}
\keywords{Clinical trial \and Method comparison \and Monte Carlo Simulation \and Trending agreement}
\section{Introduction}
New clinical measurement methods and technologies, such as cardiac output (CO) monitoring, continue to be introduced, and it must be verified whether the results of a new testing method are equivalent to those of the standard measurement method before it is implemented in clinical practice. For example, an improved cardiac index (CI) tracking device was compared with a traditional CI method based on transpulmonary thermodilution to assess its reliability in accurately tracking the changes in CI induced by changes in norepinephrine dose during operations (Monnet {\it{et al}}., 2012). In the study of Cox {\it{et al}}. (2017), bioimpedance electrical cardiometry, another experimental CI measurement device, was examined against continuous pulmonary artery thermodilution catheterization as the gold standard, with measurements taken before, during, and after cardiac surgery.
Various statistical methods have been proposed to assess the equivalence of a new testing measurement method with a gold standard (e.g., Carstensen, 2010; Choudhary and Nagaraja, 2005; Choudhary and Nagaraja, 2017). In Altman and Bland (1983), Bland and Altman (1986), and Bland and Altman (1996), the Bland-Altman analysis was proposed to evaluate the accuracy of a new clinical test based on the differences between its values and those of a gold standard and on the means of the two tests' values. In addition, a method for calculating the sample size when conducting the Bland-Altman analysis in clinical trials has been proposed by Shieh (2019). The Bland-Altman analysis has also been extended to repeated measurements (e.g., Bland and Altman, 2007; Bartko, 1976; Zou, 2013), and these extensions have been used in clinical studies. Asamoto {\it{et al}}. (2017) used the Bland-Altman analysis to evaluate the accuracy of a less invasive continuous CO monitor during two different types of surgery. However, the Bland-Altman plot cannot describe the trending ability between the two compared measurements, because the Bland-Altman analysis does not consider the order of the observed data.
Thus, to evaluate the trending ability, researchers also use the four-quadrant plot, which displays the changes in the measurement results, and calculate its concordance rate. In fact, in such comparative equivalence clinical trials, the four-quadrant plot and the concordance rate are often used along with the Bland-Altman analysis.
To assess the degree of agreement in the trend of the CO changes between time points, the use of the four-quadrant plot and the concordance rate has been proposed (Perrino {\it{et al}}., 1994; Perrino {\it{et al}}., 1998). The four-quadrant plot and the four-quadrant concordance analysis are often employed together with the Bland-Altman analysis when evaluating the equivalence of two measurement methods (e.g., Monnet {\it{et al}}., 2012). The four-quadrant plot and the concordance rate focus on the trending agreement between the differences of the two testing values, while the Bland-Altman analysis assesses the accuracy and precision of the values of the two measurement methods. In a four-quadrant plot, pairs of the differences of the two testing values at sequential time points are plotted. For example, a point is plotted with the gold-standard value at the second time point minus that at the first time point on the horizontal axis, and the corresponding difference between the same time points measured by the experimental method on the vertical axis.
The evaluation of the four-quadrant plot is based on whether the trends of the differences of the new experimental measurement and of the gold standard are concordant. When the two measurements increase or decrease together, those points are regarded as being in agreement (Saugel {\it{et al}}., 2015). Here, small difference values are excluded from the concordance rate by introducing an ``exclusion zone''.
The concordance rate of a four-quadrant plot is calculated as the ratio of the number of agreements to the number of all data points. However, this conventional concordance rate does not consider the covariance within an individual, even though, in general, one subject is measured multiple times in clinical practice. When the covariance within an individual is high, ignoring it may lead to incorrect results. Moreover, unlike the Bland-Altman analysis, the concordance rate for the four-quadrant plot has not been extended to repeated measurements.
Thus, our study proposes a new concordance rate for the four-quadrant plot based on the multivariate normal distribution in order to take the individual subjects into account. This new method can be applied to any number of repeated measurements.
Specifically, the proposed concordance rate is formulated as the conditional probability of agreement given the event that no data points of an individual fall into the exclusion zone.
In this study, we examine the case of three time points in numerical simulation.
The proposed method also has a parameter $m$, the minimum number of concordant changes required for the two measurement methods to be regarded as being in ``agreement''. The number of agreements is counted out of the $T$ differences of measurement values. This parameter is the smallest number of agreements for which the trending of the two clinical measurement methods is regarded as concordant in the calculation of the concordance rate. For instance, when the parameter $m$ is $3$ and $T$ is $5$, the concordance rate evaluates the case of at least $3$ agreements out of $5$. Analysts can set this parameter from a clinical perspective. In general, $m = T$ together with a high concordance probability is ideal, but adjusting $m$ can provide a more detailed interpretation of the degree of agreement. \
Accordingly, this study first proposes the new concordance rate for the four-quadrant plot in a general framework and then takes the calculation at three time points as an example. The remainder of this paper is organized as follows: Section 2 explains the conventional concordance rate for the four-quadrant plot. In Section 3, we introduce the proposed concordance rate and present the case wherein the maximum number of agreements is two. Section 4 presents the application of the proposed method to simulations and its results. Section 5 describes the results of the application to a real example. We conclude this paper in Section 6.
\section{Concordance Rate} \label{sec:concordance}
This section explains how to draw the four-quadrant plot and how to calculate the concordance rate with the conventional method. The assessment of the trending agreement of two testing values using the four-quadrant plot was first proposed by Perrino {\it{et al}}. (1994). The four-quadrant plot uses the pairs of differences between the values measured by the two clinical methods being compared. Here, $x^{*}_{it^{*}}$ $(i=1,2,\cdots,n;\; t^{*}=1,2,\cdots,(T+1))$ denotes the value of the gold standard for subject $i$ at the $t^*$th time point, and $y^{*}_{it^{*}}$ $(i=1,2,\cdots,n;\; t^{*}=1,2,\cdots,(T+1))$ is the corresponding value of the experimental technique. Then, the $t$th difference of the values measured by the gold standard is
\begin{align*} x_{it} = x^{*}_{i(t+1)} - x^{*}_{it} \quad (t=1,2,\cdots,T), \end{align*}
and the $t$th difference of the values measured by the experimental technique is
\begin{align*} y_{it} = y^{*}_{i(t+1)} - y^{*}_{it} (t=1,2,\cdots,T). \end{align*}
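In code, these successive differences are obtained directly from the raw measurement arrays; a small sketch (the array names are hypothetical, with one row per subject and $T+1$ columns):
\begin{verbatim}
import numpy as np

def successive_differences(x_star, y_star):
    """x_star, y_star: arrays of shape (n, T+1) holding x*_{it*} and y*_{it*}.
    Returns the difference arrays of shape (n, T)."""
    return np.diff(x_star, axis=1), np.diff(y_star, axis=1)
\end{verbatim}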
Plot 1 in Figure \ref{4q step} shows an example of measurement values in a time sequence comparing two tests for one subject. Focusing on the first two data points in Plot 1, the difference between [2] and [1] can be described as [4] of the four-quadrant plot in Plot 2. At this time, both $x$ and $y$ increase, which indicates that the direction of change in $x$ and $y$ is the same. A point such as [4] plotted in the upper-right of the four-quadrant plot can be evaluated as being in ``agreement." In contrast, the difference between [3] and [2] is plotted as [5] in the lower-right of Plot 2. In this case, $x$ increases but $y$ decreases, which means that the trend of $x$ and $y$ is recognized as being in ``disagreement." Similarly, if the difference in both $x$ and $y$ is negative, as plotted in the lower-left, the change is also in ``agreement," while the data points in the upper-left can be assessed as being in ``disagreement."
\begin{figure}
\caption{Plots for the step of drawing the four-quadrant plot. The horizontal axis denotes $x$, and the vertical axis denotes $y$. Plot 1: Data plotted for three pairs of values on Cartesian coordinates. Plot 2: Four-quadrant plot of the data in Plot 1.}
\label{4q step}
\end{figure}
\begin{figure}
\caption{Four-quadrant plot with artificial example data.}
\label{4q plot}
\end{figure}
Figure \ref{4q plot} is a four-quadrant plot with artificial example data. In the figure, the red points in the upper-right and lower-left sections are counted as being in ``agreement." The blue points, on the other hand, signify ``disagreement." When the difference value of the experimental technique is equal to that of the gold standard, the data point lies on the $45^\circ$ line (dotted lines in Figure \ref{4q plot}).
The concordance rate is calculated based on the idea above. The conventional concordance rate (CCR) is defined as follows:
\begin{align} {\rm CCR}(a) = \frac{ \# {\rm SA} - \# {\rm AEz}(a) }{ nT - \# {\rm Ez} (a) }, \end{align}
where \begin{align*}
{\rm SA} =& \{(x_{it}, y_{it})| \; {\big(} (x_{it}\geq0,\;y_{it}\geq0)\ \cup\ (x_{it}<0,\;y_{it}<0){\big)}, \\ &i=1,2,\cdots,n;\;t=1,2,\cdots,T \}, \\
{\rm AEz}(a) =& \{(x_{it}, y_{it})| \; {\big(} (0\leq x_{it}\leq a,\;0\leq y_{it}\leq a)\ \cup\ (-a< x_{it}<0,\;-a<y_{it}<0){\big)} \\ &i=1,2,\cdots,n;\;t=1,2,\cdots,T \}, \quad {\rm and } \\
{\rm Ez}(a)=&\{(x_{it},y_{it})|\;-a\leq x_{it} \leq a,\;-a \leq y_{it} \leq a,\; i=1,2,\cdots,n;\;t=1,2,\cdots,T\}. \end{align*}
{\rm SA} is the set of ``agreement" pairs of the differences between the values of the gold standard and of the experimental technique. ${\rm Ez}(a)$ is the set of pairs plotted in the exclusion zone. In the four-quadrant plot, the exclusion zone (middle square in Figure \ref{4q plot}) is usually placed to remove data points close to the origin of the plot, because it is difficult to determine whether such small values have occurred due to examination or mechanical errors (e.g., Critchley {\it{et al}}., 2010). The gray points plotted in the exclusion zone in Figure \ref{4q plot} are excluded when calculating the concordance rate. The range of the exclusion zone depends on $a$, which is set from a clinical point of view (e.g., Saugel {\it{et al}}., 2015). ${\rm AEz}(a)$ is the set of the ``agreement" pairs in the exclusion zone. \# signifies the cardinality of a set. The concordance rate in Eq. (1) is the ratio of the number of data points in the ``agreement" sections outside the exclusion zone to the number of all data points that fall outside the exclusion zone.
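A minimal sketch of ${\rm CCR}(a)$ in Eq. (1) (our own illustration, not the authors' code; ties on the quadrant boundaries are treated as in the set definitions above):
\begin{verbatim}
import numpy as np

def ccr(x, y, a):
    """Conventional concordance rate CCR(a).
    x, y: arrays of shape (n, T) with the differences x_{it} and y_{it}."""
    x, y = np.asarray(x).ravel(), np.asarray(y).ravel()
    agree = ((x >= 0) & (y >= 0)) | ((x < 0) & (y < 0))   # SA
    in_ez = (np.abs(x) <= a) & (np.abs(y) <= a)           # Ez(a)
    kept = ~in_ez
    return np.sum(agree & kept) / np.sum(kept)
\end{verbatim}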
This conventional concordance rate simply counts the number of data points that show the same trend of change. However, multiple measurements are generally taken for a single patient in a clinical setting. Individual tendencies may influence the measurement results for a single subject. Therefore, individuals must be considered to calculate a more precise concordance rate.
\section{Concordance Rate for the Four-quadrant Plot} \label{sec:proposal}
\subsection{General framework of the proposed concordance rate}
The proposed concordance rate evaluates the equivalence between the experimental technique and the gold standard through a calculation that takes the individual subjects into account. The proposed method also includes the exclusion zone and is defined as a conditional probability given the event that the data fall outside the exclusion zone at all time points. The population parameters are estimated from all the data.
The calculation of the proposed method starts from the four-quadrant plot at each time $t$. First, the quadrant sections are named $A_t$ to $D_t$. The events that the $t$th pair falls into each section are described as follows:
\begin{align*}
A_{t}=&\{\omega |\; X_{t} (\omega) \geq 0, Y_{t} (\omega) \geq 0\}, \\
B_{t}=&\{\omega |\; X_{t} (\omega) < 0, Y_{t} (\omega) < 0\}, \\
C_{t}=&\{\omega |\; X_{t} (\omega) < 0, Y_{t} (\omega) \geq 0\}, \quad {\rm and} \\
D_{t}=&\{\omega |\; X_{t} (\omega) \geq 0, Y_{t} (\omega) < 0\} \quad (t = 1, 2, \cdots, T). \end{align*}
Here, $X_t$ and $Y_t$ are the random variables of the differences of the values of the gold standard and of the experimental technique, respectively; $X_t$ and $Y_t$ correspond to $x_{it}$ and $y_{it}$. ${\rm X} = (X_1, X_2, \cdots, X_T)$ and ${\rm Y} = (Y_1, Y_2, \cdots, Y_T)$ are assumed to follow multivariate normal distributions. $A_t$ in the upper-right and $B_t$ in the lower-left quadrant of the four-quadrant plot (Figure \ref{4q plot}) correspond to ``agreement," whereas $C_t$ in the upper-left and $D_t$ in the lower-right quadrant correspond to ``disagreement."
Here, the family of sets is defined as follows:
\begin{align*} \mathscr{W}_t = \{A_t \cup B_t, C_t \cup D_t\} \quad (t = 1, 2, \cdots, T). \end{align*}
Then, exclusion zone at the $t$th time is
\begin{align*}
{\rm Ez_t}(a)=&\{\omega|\;-a \leq X_t(\omega) \leq a, -a \leq Y_t(\omega) \leq a\} \quad (t = 1, 2, \cdots, T). \end{align*}
${\rm Ez}(a)$ is also divided into four-quadrant sections:
\begin{align*}
{\rm EzA_t}(a)=&\{\omega|\;0 \leq X_t(\omega) \leq a, 0 \leq Y_t(\omega) \leq a\}, \\
{\rm EzB_t}(a)=&\{\omega|\;-a \leq X_t(\omega) \leq 0, -a \leq Y_t(\omega) \leq 0\}, \\
{\rm EzC_t}(a)=&\{\omega|\;-a \leq X_t(\omega) \leq 0, 0 \leq Y_t(\omega) \leq a\}, \\
{\rm EzD_t}(a)=&\{\omega|\;0 \leq X_t(\omega) \leq a, -a \leq Y_t(\omega) \leq 0\}\quad (t = 1, 2, \cdots, T). \end{align*}
The events that the random variables fall in $A_t, B_t, C_t$, and $D_t$ but outside the exclusion zone are defined as follows:
\begin{align*} A_t^{\dagger} =& A_t \cap {\rm EzA_t}(a)^c, \\
B_t^{\dagger} =& B_t \cap {\rm EzB_t}(a)^c, \\
C_t^{\dagger} =& C_t \cap {\rm EzC_t}(a)^c, \quad {\rm and}\\
D_t^{\dagger} =& D_t \cap {\rm EzD_t}(a)^c , \end{align*}
where $Z^c$ is the complement of arbitrary set $Z$.\ $A_t^{\dagger}$ and $B_t^{\dagger}$ are the events of ``agreement" that do not fall into the exclusion zone, whereas $C_t^{\dagger}$ and $D_t^{\dagger}$ are the events of ``disagreement" out of the exclusion zone.
The proposed concordance rate is calculated under the condition that no pair $(X_t,Y_t)$ lies in the exclusion zone. This means that all data of a subject are excluded from the calculation if any pair of data points for that subject falls into the exclusion zone at least once. This can be described as
\begin{align*}
{\rm NEz(a)} = \Big \{ \omega|\; \forall t \; (t = 1,2, \cdots, T); \; \omega \notin {\rm Ez}_t(a)\Big \}. \end{align*}
Here, the two clinical testing methods are regarded as equivalent if $X_t$ and $Y_t$ show the same direction of change at least $m$ times out of the $T$ times per subject. The concordance rate is then the probability of at least $m$ agreements; $m$ is determined from a clinical perspective, and $T$ is the number of differences of measurement values. Given this idea, we propose the new concordance rate, in which the probability of ``agreement" at least $m$ times out of $T$ is defined as follows:
\begin{align}
P\Big[ \bigcup_{t = m}^T H_t | {\rm NEz}(a)
\Big] \nonumber \ =& \frac{ P\Big[ (\bigcup_{t = m}^T H_t) \cap {\rm NEz}(a)
\Big] } { P\Big [ {\rm NEz}(a) \Big] }\\ =& \frac{ \sum_{t = m}^T P\Big[ H_t \cap {\rm NEz}(a)
\Big] } { 1 - P\Big [ \bigcup_{s=1}^T {\rm Ez_s}(a) \Big] }, \label{condition1} \end{align}
where
\begin{align}
H_t = \Big \{ \omega |\; (W_1(\omega), W_2(\omega), \cdots, W_T(\omega)) \in \prod_{s=1}^T \mathscr{W}_s, \sum_{s=1}^T I(W_s(\omega) = A_s(\omega) \cup B_s(\omega)) = t \Big \}. \label{condition2} \end{align}
$H_t$ in Eq. (\ref{condition1}) is the subset of the sample space on which the trends of $X$ and $Y$ agree exactly $t$ times. $I$ is the indicator function of the event that the $s$th pair falls in the ``agreement" quadrants $A_s \cup B_s$. $\prod_{s=1}^T \mathscr{W}_s$ in Eq. (\ref{condition2}) denotes the Cartesian product.
\subsection{Example of the proposed index, $T = 2$}
Next, we explain the proposed concordance rate in the case of $m = 1$ and $T = 2$, that is, at three points in time. The probability can be calculated as follows:
\begin{align}
P\Big[ \bigcup_{t = 1}^2 H_t \,\Big|\, {\rm NEz}(a)
\Big] \ = \frac{ \sum_{t = 1}^2 P\Big[ H_t \cap {\rm NEz}(a)
\Big] } { 1 - P\Big [ \bigcup_{s=1}^2 {\rm Ez}_s(a) \Big] }. \label{condition3} \end{align}
We apply the definition with $T =2$ to the four-quadrant plot. There are three patterns in the case of $T =2$: agreement only at $t=1$, agreement only at $t=2$, and agreement at both $t=1$ and $t=2$. The probabilities in the numerator of the definition are
\begin{align}
P[H_1 \cap {\rm NEz}(a)] =& P[(A_1^{\dagger} \cup B_1^{\dagger}) \cap (C_2^{\dagger} \cup D_2^{\dagger})]
+ P[(C_1^{\dagger} \cup D_1^{\dagger}) \cap (A_2^{\dagger} \cup B_2^{\dagger})]\label{eq2} \\
P[H_2 \cap {\rm NEz}(a)] =& P[(A_1^{\dagger} \cup B_1^{\dagger}) \cap (A_2^{\dagger} \cup B_2^{\dagger})]. \label{eq3} \end{align}
To describe each case, the range wherein the data point enters into each quadrant of the plot is set as $F = \{ [0, \infty]^T, \; [-\infty, 0]^T \}$, and the range of the exclusion zone is $E = \{ [0, a]^T, \; [-a, 0]^T \}$.
Vectors to describe the range for the probability calculations are as follows:
\begin{align*} {\bf v_1} = \left[ \begin{array}{c} v_{11} \\ v_{21} \\ \end{array} \right] , \quad
{\bf v_2} = \left[ \begin{array}{c} v_{12} \\ v_{22} \\ \end{array} \right], \quad
{\bf z_1} = \left[ \begin{array}{c} z_{11} \\ z_{21} \\ \end{array} \right], \quad
{\bf z_2} = \left[ \begin{array}{c} z_{12} \\ z_{22} \\ \end{array} \right]. \end{align*}
The first term of Eq. (\ref{eq2}) means the probability with which the trend of $X_1$ and $Y_1$ is in agreement, whereas that of $X_2$ and $Y_2$ is not. This can also be expressed as
\begin{align*} &P\Big[ (A_1^{\dagger} \cup B_1^{\dagger}) \cap (C_2^{\dagger} \cup D_2^{\dagger}) \Big] \nonumber \\
=& \sum_{\substack{{\bf v_1} = {\bf z_1}, {\bf v_2} \neq {\bf z_2} \\ {\bf v_1}, {\bf v_2}, {\bf z_1}, {\bf z_2} \in F }} P(v_{11} < X_1 < v_{21},\; v_{12} < X_2 < v_{22},\; z_{11} < Y_1 < z_{21},\; z_{12} < Y_2 < z_{22} ) \nonumber \\
&+ \sum_{\substack{{\bf v_1} = {\bf z_1}, {\bf v_2} \neq {\bf z_2} \\ {\bf v_1},{\bf v_2}, {\bf z_1}, {\bf z_2} \in E }} P(v_{11} < X_1 < v_{21},\; v_{12} < X_2 < v_{22},\; z_{11} < Y_1 < z_{21},\; z_{12} < Y_2 < z_{22} ) \nonumber \\
&- \sum_{\substack{{\bf v_1} = {\bf z_1}, {\bf v_2} \neq {\bf z_2} \\ {\bf v_1}, {\bf z_1} \in F, \; {\bf v_2}, {\bf z_2} \in E }} P(v_{11} < X_1 < v_{21},\; v_{12} < X_2 < v_{22},\; z_{11} < Y_1 < z_{21},\; z_{12} < Y_2 < z_{22} ) \nonumber \\
&- \sum_{\substack{{\bf v_1} = {\bf z_1}, {\bf v_2} \neq {\bf z_2} \\ {\bf v_1}, {\bf z_1} \in E, \; {\bf v_2}, {\bf z_2} \in F }} P(v_{11} < X_1 < v_{21},\; v_{12} < X_2 < v_{22},\; z_{11} < Y_1 < z_{21},\; z_{12} < Y_2 < z_{22} ). \end{align*}
Then, the second term of Eq. (\ref{eq2}) is the probability when the trend of $X_1$ and $Y_1$ is in disagreement, but that of $X_2$ and $Y_2$ is in agreement. This can be rewritten similarly as \begin{align*} &P\Big[ (C_1^{\dagger} \cup D_1^{\dagger}) \cap (A_2^{\dagger} \cup B_2^{\dagger}) \Big] \nonumber \\
=& \sum_{\substack{\bf{v}_1 \neq {\bf z_1}, {\bf v_2} = {\bf z_2} \\ {\bf v_1}, {\bf v_2}, {\bf z_1}, {\bf z_2} \in F }} P(v_{11} < X_1 < v_{21},\; v_{12} < X_2 < v_{22},\; z_{11} < Y_1 < z_{21},\; z_{12} < Y_2 < z_{22} ) \nonumber \\
&+ \sum_{\substack{{\bf v_1} \neq {\bf z_1}, {\bf v_2} = {\bf z_2} \\ {\bf v_1}, {\bf v_2}, {\bf z_1}, {\bf z_2} \in E }} P(v_{11} < X_1 < v_{21},\; v_{12} < X_2 < v_{22},\; z_{11} < Y_1 < z_{21},\; z_{12} < Y_2 < z_{22} ) \nonumber \\
&- \sum_{\substack{{\bf v_1} \neq {\bf z_1}, {\bf v_2} = {\bf z_2} \\ {\bf v_1}, {\bf z_1} \in F, \; {\bf v_2}, {\bf z_2} \in E }} P(v_{11} < X_1 < v_{21},\; v_{12} < X_2 < v_{22},\; z_{11} < Y_1 < z_{21},\; z_{12} < Y_2 < z_{22} ) \nonumber \\
&- \sum_{\substack{{\bf v_1} \neq {\bf z_1}, {\bf v_2} = {\bf z_2} \\ {\bf v_1}, {\bf z_1} \in E, \; {\bf v_2}, {\bf z_2} \in F }} P(v_{11} < X_1 < v_{21},\; v_{12} < X_2 < v_{22},\; z_{11} < Y_1 < z_{21},\; z_{12} < Y_2 < z_{22} ). \end{align*}
Eq. (\ref{eq3}) is the probability that the trends of $X_1$ and $Y_1$ and of $X_2$ and $Y_2$ are both concordant: \begin{align*} &P\Big[ (A_1^{\dagger} \cup B_1^{\dagger}) \cap (A_2^{\dagger} \cup B_2^{\dagger}) \Big] \nonumber \\
=& \sum_{\substack{{\bf v_1} = {\bf z_1}, {\bf v_2} = {\bf z_2} \\ {\bf v_1}, {\bf v_2}, {\bf z_1}, {\bf z_2} \in F }} P(v_{11} < X_1 < v_{21},\; v_{12} < X_2 < v_{22},\; z_{11} < Y_1 < z_{21},\; z_{12} < Y_2 < z_{22} ) \nonumber \\
&+ \sum_{\substack{{\bf v_1} = {\bf z_1}, {\bf v_2} = {\bf z_2} \\ {\bf v_1}, {\bf v_2}, {\bf z_1}, {\bf z_2} \in E }} P(v_{11} < X_1 < v_{21},\; v_{12} < X_2 < v_{22},\; z_{11} < Y_1 < z_{21},\; z_{12} < Y_2 < z_{22} ) \nonumber \\
&- \sum_{\substack{{\bf v_1} = {\bf z_1}, {\bf v_2} = {\bf z_2} \\ {\bf v_1}, {\bf z_1} \in F, \; {\bf v_2}, {\bf z_2} \in E }} P(v_{11} < X_1 < v_{21},\; v_{12} < X_2 < v_{22},\; z_{11} < Y_1 < z_{21},\; z_{12} < Y_2 < z_{22} ) \nonumber \\
&- \sum_{\substack{{\bf v_1} = {\bf z_1}, {\bf v_2} = {\bf z_2} \\ {\bf v_1}, {\bf z_1} \in E, \; {\bf v_2}, {\bf z_2} \in F }} P(v_{11} < X_1 < v_{21},\; v_{12} < X_2 < v_{22},\; z_{11} < Y_1 < z_{21},\; z_{12} < Y_2 < z_{22} ). \end{align*}
Finally, the probability of the denominator in $T =2$ is
\begin{align*} &1 - P\Big [ \bigcup_{s=1}^2 {\rm Ez}_s(a) \Big]\\
=& 1- P(-a < X_1 < a,\; -\infty < X_2 <\infty,\;-a < Y_1 <a,\; -\infty < Y_2 < \infty)\\ &- P(-\infty < X_1 < \infty,\; -a < X_2 <a,\;-\infty < Y_1 < \infty,\; -a < Y_2 < a)\\ &+ P(-a < X_1 < a,\; -a < X_2 <a,\;-a < Y_1 < a,\; -a < Y_2 < a). \end{align*}
In the proposed concordance rate, we assume that all random variables follow a multivariate normal distribution. Therefore, we must estimate the mean vector and covariance matrix to calculate the concordance rate. The method of estimating these parameters is described next.
\subsection{Estimation}\label{sec3}
First, we define
$Z = (X_1,\; \cdots,\; X_T,\; Y_1,\; \cdots,\; Y_T) = (Z_1,\; \cdots,\; Z_{T}, Z_{T+1},\; \cdots,\; Z_{2T})$.
Since the proposed method assumes that $Z$ follows a $2T$-dimensional normal distribution, it is necessary to estimate the $2T$-dimensional mean vector and the variance-covariance matrix to calculate the concordance rate. The estimated mean vector in the proposed approach is
$\bar{\bm z}=(\bar{x}_1,\;\cdots,\;\bar{x}_T,\; \bar{y}_1,\;\cdots,\;\bar{y}_T )^T$
, where $\bar{x}_t$ and $\bar{y}_t$ are the means of the $t$th difference values of the gold standard and of the experimental technique, respectively. The estimated covariance matrix of the difference values is ${\bm{S}} = (s_{tk})\quad (t,k=1,2,\cdots,2T)$, where $s_{tk}$ is the sample covariance between $Z_t$ and $Z_k$. Using these estimators, the proposed concordance rate in Eq. (\ref{condition1}), defined as a conditional probability, can be calculated.
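A minimal Monte Carlo sketch of this computation (an illustration under our own simplification, not the authors' code, which evaluates the multivariate normal rectangle probabilities directly): the $2T$-dimensional mean vector and covariance matrix are estimated from the data, and the conditional probability in Eq. (\ref{condition1}) is approximated by sampling from the fitted normal distribution.
\begin{verbatim}
import numpy as np

def proposed_concordance_rate(x, y, a, m, n_draws=1_000_000, seed=0):
    """Monte Carlo approximation of the proposed concordance rate.
    x, y: difference arrays of shape (n, T); a: exclusion-zone size;
    m: minimum number of agreements."""
    n, T = x.shape
    z = np.hstack([x, y])                      # observed (X_1..X_T, Y_1..Y_T)
    mean, cov = z.mean(axis=0), np.cov(z, rowvar=False)
    draws = np.random.default_rng(seed).multivariate_normal(mean, cov, size=n_draws)
    X, Y = draws[:, :T], draws[:, T:]
    # NEz(a): at every time point the pair lies outside the exclusion zone
    outside = np.all((np.abs(X) > a) | (np.abs(Y) > a), axis=1)
    agree = ((X >= 0) & (Y >= 0)) | ((X < 0) & (Y < 0))
    enough = agree.sum(axis=1) >= m            # at least m agreements
    denom = outside.mean()
    return np.nan if denom == 0 else np.mean(enough & outside) / denom
\end{verbatim}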
\section{Numerical Simulation} \label{sec:4} In this section, we describe the simulation design and present the simulation results. We conducted a simulation with two types of evaluation. First, we examined how close the concordance rates calculated with the conventional methods and with the proposed approach were to the true concordance rate. The assessment of each concordance rate was expressed as its difference from the true concordance rate. The results of the proposed method cannot simply be compared with CCR$(a)$, since CCR$(a)$ does not consider repeated measurements. In order to enable a comparison with the conventional concordance rate, control1 and control2 were constructed as adjustments that allow for repeated measurements; the details are given in Factor 7.\ Secondly, to compare the diagnosability of the proposed method with CCR$(a)$, we calculated ROC curves and the area under the curve (AUC) (e.g., Pepe, 2003); the second evaluation is based on the AUC. In this simulation, we used {\choosefont{pcr} RStudio Version 1.1.453.}
\subsection{Simulation design}
We set $T=2$, and the data generation procedure is as follows: \begin{align*}
\bm{Z} \sim N (\bm{\mu}_{Z}, \bm{\Sigma}_{Z}) \end{align*}
where ${\bm Z} = (X_1,X_2,Y_1,Y_2)^{T}$. $X_t$ is the difference in the measurement values of the gold standard between the $t$th and $(t+1)$th times $(t=1,2)$, and $Y_t$ is that of the experimental technique.
In addition, \begin{align*}
\bm{\mu}_{Z} = \left[ \begin{array}{c} \bm{\mu}_{X}\\ \bm{\mu}_{Y}\\ \end{array} \right] , \quad
\bm{\Sigma}_{Z} = \left[ \begin{array}{cc} \bm{\Sigma}_{X} & \bm{\Sigma}_{XY} \\ \bm{\Sigma}_{XY} & \bm{\Sigma}_{Y}\\ \end{array} \right], \end{align*}
where $\bf{\mu}_{X} = (\mu_{x1},\mu_{x2})^{T}$ and $\bf{\mu}_{Y} = (\mu_{y1},\mu_{y2})^{T}$ are the mean vectors of the gold standard and experimental technique, and $\bf{\Sigma}_{X}$ and $\bf{\Sigma}_{Y}$ are the covariance matrices, respectively.\\ Here,
\begin{align*}
\bm{\Sigma}_{X} = \left[ \begin{array}{cc} {\sigma}_{x1} & {\rho} \\ {\rho} & {\sigma}_{x2}\\ \end{array} \right], \quad
\bm{\Sigma}_{Y} = \left[ \begin{array}{cc} {\sigma}_{y1} & {\rho} \\ {\rho} & {\sigma}_{y2}\\ \end{array} \right], \ {\rm and} \ \quad
\bm{\Sigma}_{XY} = \left[ \begin{array}{cc} {\rho}_{XY} & {\rho}_{XY} \\ {\rho}_{XY} & {\rho}_{XY}\\ \end{array} \right]. \end{align*}
We set $\sigma_{x1} = \sigma_{x2} = \sigma_{y1} = \sigma_{y2} = 1$.
Factors set in the simulation are presented in Table \ref{T1}. The number of total patterns is $30\times3\times2\times2\times2\times2\times3=4320$. For each pattern, corresponding artificial data are generated 100 times and we evaluate the results. The levels of the seven factors are set as follows.
\
\noindent {\bf Factor 1: Means}
The mean is of 30 patterns, as shown in Table \ref{T2}. The setting depends on the combination of the magnitude of the mean value and the direction of change in $x$ and $y$.
\
\noindent {\bf Factor 2: Covariance between the difference values within each measurement method }
The covariance between the difference values within each measurement method, ${\rho}$, is set as $0$, $1/3$, and $2/3$ for both $X$ and $Y$.
\
\noindent {\bf Factor 3: Covariance between $X$ and $Y$}
${\rho}_{XY} = 0$ and $1/3$.
\
\noindent {\bf Factor 4: Number of agreements}
Factor 4 is the required number of trending agreements between $X$ and $Y$. We set two different situations: (1) agreement at least once in $T=2$ ($m=1$), and (2) agreement at both time points ($m=2$).
\
\noindent {\bf Factor 5: Exclusion zone}
$a$ of the exclusion zone ${\rm Ez}(a)$ is set as 0.5 and 1.0.
\
\noindent {\bf Factor 6: Number of subjects}
The number of subjects is set as 15 and 40.
\
\noindent {\bf Factor 7: Methods}
We calculate the concordance rate by four methods. Control1, control2, and the proposed method are used in the first evaluation, and CCR, control1, control2, and the proposed method are used in the second evaluation. We denote the proposed concordance rate as ``proposal".
Control1, based on binomial distribution, is calculated as follows: \begin{align*}
\sum_{s=m}^{2} {}_{2}C_{s} p^s(1-p)^{(2-s)}, \end{align*} where \begin{align*}
p = \frac{k_1+k_2}{n_1^{\dagger}+n_2^{\dagger}}. \end{align*} ${k_t}$ ($t = 1,2$) is the number of data points that show the same trend between $X_t$ and $Y_t$ outside the exclusion zone, and $n_t^{\dagger}$ is the number of subjects whose data points fall outside the exclusion zone.
The concordance rate in control2 is calculated from the probability of each number of agreements: the probability of agreement at both of the two time points is $p_1p_2$, and that of agreement at exactly one of the two time points is $p_1(1-p_2)+(1-p_1)p_2$,
where \begin{align*}
p_t = \frac{k_t}{n_t^{\dagger}}\quad (t=1,2). \end{align*} \
Subjects whose difference values fall in the exclusion zone of the four-quadrant plot even once are excluded from the calculation of the concordance rate in both control1 and control2, in the same manner as for the proposed method.
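A small sketch of the two control rates for $T=2$ under these conventions (our reading that $n_1^{\dagger}=n_2^{\dagger}$ equals the number of retained subjects, and that the control2 probabilities are summed for all numbers of agreements from $m$ to $2$, is an assumption):
\begin{verbatim}
import numpy as np
from math import comb

def control_rates(x, y, a, m):
    """control1 (binomial) and control2 for T = 2.
    Subjects with any pair in the exclusion zone are discarded first."""
    keep = np.all((np.abs(x) > a) | (np.abs(y) > a), axis=1)
    x, y = x[keep], y[keep]
    agree = ((x >= 0) & (y >= 0)) | ((x < 0) & (y < 0))   # shape (n_kept, 2)
    k = agree.sum(axis=0)                                  # k_1, k_2
    n_dag = np.full(2, x.shape[0])                         # n_1^dagger, n_2^dagger
    p = k.sum() / n_dag.sum()
    control1 = sum(comb(2, s) * p**s * (1 - p)**(2 - s) for s in range(m, 3))
    p1, p2 = k / n_dag
    probs = {2: p1 * p2, 1: p1 * (1 - p2) + (1 - p1) * p2}
    control2 = sum(probs[s] for s in range(m, 3))
    return control1, control2
\end{verbatim}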
\
The first evaluation index for the simulation results is the absolute value of the difference between the concordance rate based on the estimated parameters and the concordance rate computed with the true mean vector $\bm{\mu}_{Z}$ and the true covariance matrix $\bm{\Sigma}_{Z}$. A method is assessed as better when this absolute difference from the true value is smaller.
For the second evaluation index, we attach a label to each pattern of means in Table \ref{T2}. If $\bf{\mu}_{X}$ and $\bf{\mu}_{Y}$ are concordant at both time points, we mark the corresponding mean pattern as ``$\circ$", and the rest as ``$\times$". Then, the $4320 \times 100$ generated data sets in total carry this label. ROC curves and the AUC (e.g., Pepe, 2003) are calculated from the labels and the concordance rates of each method, and we compare the resulting AUC values among the proposed method, CCR$(a)$, control1, and control2.
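The second evaluation can be sketched with scikit-learn (the argument names are hypothetical; \texttt{labels} codes the $\circ$/$\times$ patterns as 1/0 and \texttt{scores} collects the concordance rates of one method):
\begin{verbatim}
from sklearn.metrics import roc_auc_score, roc_curve

def evaluate_diagnosability(labels, scores):
    """labels: 1 for mean patterns marked 'o', 0 for 'x';
    scores: concordance rates of one method for the same data sets."""
    fpr, tpr, _ = roc_curve(labels, scores)
    return roc_auc_score(labels, scores), fpr, tpr
\end{verbatim}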
\begin{table} \begin{center} \caption{Factors of the simulation design} \label{T1} \begin{tabular}{llll} \hline Factor No. & Factor name & levels \\ \hline Factor 1 & Means & 30 \\ Factor 2 & Covariance between the difference values within each measurement method & 3\\ Factor 3 & Covariance between $X$ and $Y$ & 2 \\ Factor 4 & Number of agreements & 2 \\ Factor 5 & Exclusion zone & 2 \\ Factor 6 & Number of subjects & 2 \\ Factor 7 & Methods & 3 / 4 \\ \hline \end{tabular} \end{center} \end{table}
\begin{table} \begin{center} \caption{Mean patterns in Factor 1: Label $\circ$ indicates the pattern of agreement between $\mu_{X}$ and $\mu_{Y}$ 2 times, $\times$ indicates the pattern of not agreement between $\mu_{X}$ and $\mu_{Y}$ 2 times} \label{T2} \begin{tabular}{lrrrrclrrrrc} \hline Pattern No. &$\mu_{X1}$ & $\mu_{X2}$ & $\mu_{Y1}$ & $\mu_{Y2}$ & Label & Pattern No. &$\mu_{X1}$ & $\mu_{X2}$ & $\mu_{Y1}$ & $\mu_{Y2}$ & Label\\ \hline Pattern1 & -1.5 & -1.5 & 1.5 & 1.5 & $\times$ & Pattern16 & 0.5 & 0.5 & -0.5 & -0.5 & $\times$\\ Pattern2 & -0.5 & -0.5 & 0.5 & 0.5 & $\times$ & Pattern17 & -0.5 & -1.5 & -0.5 & -1.5 & $\circ$\\ Pattern3 & -1.5 & 1.5 & 1.5 & 1.5 & $\times$ & Pattern18 & 0.5 & -1.5 & -0.5 & -1.5 & $\times$\\ Pattern4 & 0.5 & -0.5 & 0.5 & 0.5 & $\times$ & Pattern19 & -0.5 & 1.5 & -0.5 & -1.5 & $\times$\\ Pattern5 & 1.5 & 1.5 & 1.5 & 1.5 & $\circ$ & Pattern20 & 0.5 & 1.5 & -0.5 & -1.5 & $\times$\\ Pattern6 & 0.5 & 0.5 & 0.5 & 0.5 & $\circ$ & Pattern21 & -1.5 & -1.5 & -1.5 & 1.5 & $\times$\\ Pattern7 & -0.5 & -1.5 & 0.5 & 1.5 & $\times$ & Pattern22 & -0.5 & -0.5 & -0.5 & 0.5 & $\times$\\ Pattern8 & 0.5 & -1.5 & 0.5 & 1.5 & $\times$ & Pattern23 & -1.5 & 1.5 & -1.5 & 1.5 & $\circ$\\ Pattern9 & -0.5 & 1.5 & 0.5 & 1.5 & $\times$ & Pattern24 & 0.5 & -0.5 & -0.5 & 0.5 & $\times$\\ Pattern10 & 0.5 & 1.5 & 0.5 & 1.5 & $\circ$ & Pattern25 & 1.5 & 1.5 & -1.5 & 1.5 & $\times$\\ Pattern11 & -1.5 & -1.5 & -1.5 & -1.5 & $\circ$ & Pattern26 & 0.5 & 0.5 & -0.5 & 0.5 & $\times$\\ Pattern12 & -0.5 & -0.5 & -0.5 & -0.5 & $\circ$ & Pattern27 & -0.5 & -1.5 & -0.5 & 1.5 & $\times$\\ Pattern13 & -1.5 & 1.5 & -1.5 & -1.5 & $\times$ & Pattern28 & 0.5 & -1.5 & -0.5 & 1.5 & $\times$\\ Pattern14 & 0.5 & -0.5 & -0.5 & -0.5 & $\times$ & Pattern29 & -0.5 & 1.5 & -0.5 & 1.5 & $\circ$\\ Pattern15 & 1.5 & 1.5 & -1.5 & -1.5 & $\times$ & Pattern30 & 0.5 & 1.5 & -0.5 & 1.5 & $\times$\\ \hline \end{tabular} \end{center} \end{table}
\subsection{Simulation results} \subsubsection{Difference between the true value and the estimation of each concordance rate method}
In all simulation results, the proposed approach was closer to the true value than the control methods. Figure \ref{fig.1} shows the results of this simulation. We also report the median, first quartile, and third quartile for each factor in the tables. The medians of the proposed method were smaller, and the interquartile ranges narrower, than those of control1 and control2 for all factors. These results indicate that the variation of the proposed method was smaller than that of the two control concordance methods; the estimation of the proposal was stable.\ Table \ref{TF1} shows the results for each pattern of the means. In Patterns 3, 13, 21, and 25, the bias of control1 tended to be large. These are the patterns in which all absolute mean values of $X$ and $Y$ are 1.5 and the directions of the trends disagree at both time points. Compared with the control methods, the proposed method was stable in all patterns. Table \ref{TF2} shows the results for the covariance of the difference values within each measurement method, and Table \ref{TF3} shows those for the covariance between $X$ and $Y$. The results of all methods were almost the same with respect to both covariances, and the proposed method was more stable than the conventional methods for both factors. The proposed method was closer to the true values than the control methods for both $m=1$ and $m=2$ (Table \ref{TF4}), which means that the proposal gave a more proper evaluation for every number of agreements in the case of $T=2$. Regarding the exclusion zone, the deviations from the true value were slightly larger for the larger exclusion zone (Table \ref{TF5}). The deviations of all methods were smaller for the larger number of subjects (Table \ref{TF6}).
\subsubsection{Diagnosability of the estimation of each concordance method} To compare the diagnosability of the proposed method with that of the conventional methods, we show the ROC curves of the proposal, CCR, control1, and control2 in Figure \ref{fig.8.1} and report their AUC values in Table \ref{T3}. Judging from the AUC results, the proposed method was better than the conventional methods; in other words, the diagnostic capability of the proposed method was superior to that of the conventional methods.
\begin{figure}
\caption{Result of the simulation}
\label{fig.1}
\end{figure}
\begin{table} \begin{center} \caption{The result of the simulation for Factor1: Means.}\label{TF1}
\begin{tabular}{l|lll} \hline Pattern No. & control1 & control2 & proposal\\ \hline Pattern1 & 0.028 (0.011, 0.059) & 0.028 (0.011, 0.060) & 0.018 (0.007, 0.042) \\ Pattern2 & 0.076 (0.036, 0.137) & 0.079 (0.036, 0.141) & 0.047 (0.022, 0.086) \\ Pattern3 & 0.158 (0.131, 0.191) & 0.038 (0.019, 0.069) & 0.025 (0.012, 0.043)\\ Pattern4 & 0.066 (0.030, 0.117) & 0.071 (0.034, 0.123) & 0.046 (0.021, 0.080)\\ Pattern5 & 0.023 (0.010, 0.057) & 0.023 (0.011, 0.057) & 0.015 (0.006, 0.037)\\ Pattern6 & 0.072 (0.034, 0.124) & 0.074 (0.036, 0.127) & 0.043 (0.019, 0.079)\\ Pattern7 & 0.048 (0.022, 0.093) & 0.053 (0.024, 0.096) & 0.033 (0.015, 0.068)\\ Pattern8 & 0.070 (0.035, 0.118) & 0.051 (0.023, 0.093) & 0.033 (0.016, 0.062)\\ Pattern9 & 0.059 (0.029, 0.105) & 0.043 (0.020, 0.086) & 0.028 (0.013, 0.059)\\ Pattern10 & 0.035 (0.016, 0.081) & 0.039 (0.020, 0.085) & 0.025 (0.011, 0.060)\\ Pattern11 & 0.022 (0.010, 0.055) & 0.023 (0.011, 0.056) & 0.014 (0.006, 0.036)\\ Pattern12 & 0.072 (0.035, 0.124) & 0.074 (0.038, 0.126) & 0.042 (0.020, 0.077)\\ Pattern13 & 0.159 (0.132, 0.190) & 0.038 (0.019, 0.069) & 0.025 (0.012, 0.042)\\ Pattern14 & 0.065 (0.029, 0.117) & 0.069 (0.032, 0.127) & 0.045 (0.021, 0.082)\\ Pattern15 & 0.029 (0.011, 0.061) & 0.030 (0.011, 0.062) & 0.018 (0.007, 0.042)\\ Pattern16 & 0.079 (0.038, 0.137) & 0.082 (0.038, 0.141) & 0.047 (0.021, 0.086)\\ Pattern17 & 0.036 (0.016, 0.085) & 0.040 (0.021, 0.086) & 0.026 (0.011, 0.060)\\ Pattern18 & 0.059 (0.029, 0.104) & 0.043 (0.020, 0.090) & 0.030 (0.013, 0.061)\\ Pattern19 & 0.070 (0.034, 0.116) & 0.052 (0.024, 0.095) & 0.034 (0.015, 0.062)\\ Pattern20 & 0.047 (0.021, 0.092) & 0.052 (0.023, 0.092) & 0.033 (0.014, 0.067)\\ Pattern21 & 0.159 (0.132, 0.190) & 0.038 (0.019, 0.069) & 0.024 (0.011, 0.042)\\ Pattern22 & 0.066 (0.031, 0.117) & 0.073 (0.035, 0.127) & 0.046 (0.021, 0.082)\\ Pattern23 & 0.015 (0.005, 0.056) & 0.014 (0.005, 0.056) & 0.009 (0.003, 0.036)\\ Pattern24 & 0.068 (0.030, 0.120) & 0.071 (0.031, 0.125) & 0.045 (0.021, 0.080)\\ Pattern25 & 0.158 (0.130, 0.190) & 0.038 (0.019, 0.069) & 0.025 (0.012, 0.043)\\ Pattern26 & 0.067 (0.030, 0.118) & 0.074 (0.034, 0.128) & 0.046 (0.022, 0.082)\\ Pattern27 & 0.072 (0.034, 0.117) & 0.050 (0.023, 0.094) & 0.034 (0.016, 0.064)\\ Pattern28 & 0.046 (0.020, 0.086) & 0.045 (0.020, 0.090) & 0.031 (0.013, 0.062)\\ Pattern29 & 0.042 (0.017, 0.089) & 0.036 (0.016, 0.084) & 0.022 (0.009, 0.055)\\ Pattern30 & 0.058 (0.028, 0.103) & 0.042 (0.019, 0.086) & 0.029 (0.013, 0.059)\\ \hline \end{tabular}\\ \end{center} $\quad \quad \quad \quad \quad \quad \quad$ median(first quartile, third quartile) \end{table}
\begin{table} \begin{center} \caption{The result of the simulation for Factor2: Covariance of the difference values within each measurement method.}\label{TF2}
\begin{tabular}{l|lll} \hline
& control1 & control2 & proposal \\ \hline
${\rho} = 0$ & 0.061 (0.022, 0.123) & 0.043 (0.017, 0.091) & 0.028 (0.011, 0.058)\\
${\rho} = 1/3$ & 0.063 (0.022, 0.124) & 0.045 (0.019, 0.091) & 0.029 (0.012, 0.060)\\
${\rho} = 2/3$ & 0.069 (0.029, 0.137) & 0.052 (0.025, 0.102) & 0.033 (0.014, 0.065)\\ \hline \end{tabular} \end{center} $\quad \quad \quad \quad \quad \quad \quad$ median(first quartile, third quartile) \end{table}
\begin{table} \begin{center} \caption{The result of the simulation for Factor3: Covariance between $X$ and $Y$}\label{TF3}
\begin{tabular}{l|lll} \hline
& control1 & control2 & proposal\\ \hline ${\rho}_{XY} = 0$ & 0.065 (0.025, 0.127) & 0.048 (0.02, 0.094) & 0.031 (0.013, 0.062)\\ ${\rho}_{XY} = 1/3$ & 0.064 (0.024, 0.130) & 0.046 (0.019, 0.095) & 0.029 (0.012, 0.060)\\ \hline \end{tabular} \end{center} $\quad \quad \quad \quad \quad \quad \quad$ median(first quartile, third quartile) \end{table}
\begin{table} \begin{center} \caption{The result of the simulation for Factor4: Number of agreements.}\label{TF4}
\begin{tabular}{l|lll} \hline
& control1 & control2 & proposal\\ \hline $m = 1$ & 0.059 (0.021, 0.122) & 0.042 (0.017, 0.087) & 0.027 (0.011, 0.056)\\ $m = 2$ & 0.070 (0.029, 0.135) & 0.052 (0.023, 0.102) & 0.034 (0.014, 0.066)\\ \hline \end{tabular} \end{center} $\quad \quad \quad \quad \quad \quad \quad$ median(first quartile, third quartile) \end{table}
\begin{table} \begin{center} \caption{The result of the simulation for Factor5: Exclusion zone. }\label{TF5}
\begin{tabular}{l|lll} \hline
& control1 & control2 & proposal\\ \hline $a = 0.5$ & 0.058 (0.023, 0.116) & 0.043 (0.019, 0.085) & 0.028 (0.012, 0.055)\\ $a = 1.0$ & 0.072 (0.026, 0.141) & 0.053 (0.021, 0.106) & 0.032 (0.013, 0.067)\\ \hline \end{tabular} \end{center} $\quad \quad \quad \quad \quad \quad \quad$ median(first quartile, third quartile) \end{table}
\begin{table} \begin{center} \caption{The result of the simulation for Factor6: Number of subjects. }\label{TF6}
\begin{tabular}{l|lll} \hline
& control1 & control2 & proposal\\ \hline $n = 15$ & 0.077 (0.029, 0.145) & 0.061 (0.025, 0.117) & 0.039 (0.016, 0.078)\\ $n = 40$ & 0.055 (0.022, 0.111) & 0.038 (0.016, 0.074) & 0.024 (0.010, 0.047)\\ \hline \end{tabular} \end{center} $\quad \quad \quad \quad \quad \quad \quad$ median(first quartile, third quartile)
\end{table} \begin{table} \begin{center} \caption{AUC of the proposed method, CCR, control1, and control2 in the simulation}\label{T3}
\begin{tabular}{l|llll} \hline
& proposal & CCR & control1 & control2\\ \hline AUC & 0.930 & 0.898 & 0.898 & 0.909\\ \hline \end{tabular} \label{table3} \end{center} \end{table}
\begin{figure}
\caption{ROC curves of proposal method, CCR, control1 and control2 for the simulation}
\label{fig.8.1}
\end{figure}
\section{Real Example} \label{sec:5}
In this section, we demonstrate the usefulness of the proposed concordance rate in terms of diagnosability through a real example.
We applied the proposed method to the blood pressure data of the package {\choosefont{pcr}MethComp} in {\choosefont{pcr} R software} (Carstensen {\it{et al}}., 2020). The data (Altman and Bland, 1991; Bland and Altman, 1999) comprise blood pressure measurements for 85 subjects of three types: the data labeled J and R were measured with the gold standard by two different human observers, and S was measured by an automatic machine as the experimental method. The measurements were taken at three time points for each subject.\
The four-quadrant plots generated from the real data are presented in Figure \ref{fig.9}. Comparing two of the three measurement results with one another, there are three pairs: J (observer 1) and R (observer 2), R and S (automatic machine), and J and S. Each pattern has two plots, (1) $t=1$ and (2) $t=2$. We calculated the concordance rate with the proposed method, CCR, control1, and control2 as described in Section 4. The concordance rate was calculated in the two cases where the trend of change agrees at least once in the two time points ($m = 1$) and at both time points ($m = 2$). The exclusion-zone parameter of ${\rm Ez}(a)$ was set to the 10 percent quantile point in each pair (e.g., Critchley {\it{et al}}., 2010). \
As the assessment of the methods, we compared the diagnostic performance of the proposal and the conventional methods CCR, control1, and control2. Specifically, 10 subjects out of the 85 were randomly selected $100$ times, and the concordance rates were obtained by the four methods for the case $m=2$ in each pair. Based on the results, the AUCs of the proposal, CCR, control1, and control2 were calculated, and the ROC curves of the proposal and CCR were drawn to assess the diagnosability.
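A sketch of this resampling evaluation (the callable \texttt{rate\_fn} stands for any of the concordance-rate functions, for example those sketched above; \texttt{x} and \texttt{y} are the difference arrays of one measurement pair):
\begin{verbatim}
import numpy as np

def resampled_scores(x, y, a, rate_fn, n_subjects=10, n_iter=100, seed=0):
    """Randomly select n_subjects subjects n_iter times and apply rate_fn,
    e.g. a concordance-rate function, to each subsample."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_iter):
        idx = rng.choice(x.shape[0], size=n_subjects, replace=False)
        scores.append(rate_fn(x[idx], y[idx], a))
    return np.array(scores)

# The scores of the three pairs (J-R, R-S, J-S) are then pooled with their
# agreement/disagreement labels and fed into an ROC/AUC computation.
\end{verbatim}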
Each pattern of the four-quadrant plots in Figure \ref{fig.9} shows the characteristics of the real example. The data of J and R in Pattern 1 have many red points indicating ``agreement" of the trend between the two data sets, and most of these points lie close to the $45^\circ$ line, because both are obtained by the same established measurement method. On the other hand, the data of S, the experimental measurement, are collected in a different way; thus, the plots of Pattern 2 and Pattern 3 have more blue points indicating ``disagreement" than the plot of Pattern 1, and the data are more widely dispersed.
Then, the data of Pattern 1 are given the ``agreement" label, and the data of Patterns 2 and 3 the ``disagreement" label. For the evaluation, $10$ subjects out of the 85 are randomly selected and the concordance rates are calculated by the proposal, CCR, control1, and control2 in all three patterns. The procedure was iterated $1000$ times, and the diagnostic performance of each method was evaluated.
The AUCs of the proposed method, CCR, control1, and control2 are shown in Table \ref{Table4}. Each concordance rate was estimated with high accuracy for $m=2$ in the example data, and the proposal was better than CCR, control1, and control2. As for the ROC curves in Figure \ref{fig.10}, the proposed method traces a curve with an almost right angle, while the ROC curve of CCR is more moderate. The AUC values and ROC curves indicate that the proposed approach is more accurate than the conventional concordance rates.
\begin{figure}
\caption{Four-quadrant plots with real example data. Pattern1: J(observer1) and R(observer2), Pattern2: R and S(automatic machine), and Pattern3: J and S.}
\label{fig.9}
\end{figure}
\begin{figure}
\caption{ROC of proposal and CCR}
\label{fig.10}
\end{figure}
\begin{table}[htbp] \begin{center} \caption{AUC of CCR, control1, control2 and proposal in a real example}\label{R3} \begin{tabular}{llll} \hline proposal &CCR & control1 & control2 \\ \hline $0.999$ & $0.964$ & $0.964$ & $0.965$ \\ \hline \label{Table4} \end{tabular} \end{center} \end{table}
\section{Discussion} \label{sec:6}
The conventional concordance rate for a four-quadrant plot is one of the methods for evaluating the equivalence between a new testing method and a standard measurement method. In many clinical practice situations, these values are observed repeatedly for the same subjects. However, the conventional concordance rate for the four-quadrant plot does not consider the individual subjects when evaluating the trend of measurement values between the two clinical testing methods being compared. Therefore, we proposed a new concordance rate based on the multivariate normal distribution that is calculated from the difference values of each measurement technique and depends on the number of agreements. The minimum number of agreements required to evaluate equivalence, introduced as a parameter, can be set according to the total number of time points in the data and the clinical point of view.
For most factors set in the simulation, the proposed concordance rate was closer to the true value than the conventional methods. In addition, the numerical simulations showed that the diagnosability of the proposed method was superior to both the existing concordance method and the control methods derived from it.
In addition, through the real example using sbp data, we showed the superiority of the proposed method in terms of diagnostic performance by the AUC values.
In this study, we provided the results of the numerical simulations and the real example only for the case of $T = 2$ time points; however, the proposed concordance rate can be calculated for any $T$.
Here we mention the assumptions of the proposed method and compare it with existing statistical methods. In the proposed method, we assumed that the data follow a multivariate normal distribution. In practical situations, the concordance rate is used together with Bland-Altman analysis to evaluate the equivalence of two measurement methods, and Bland-Altman analysis also assumes normally distributed data (e.g., Bland and Altman, 2007; Bartko, 1976; Zou, 2013). Therefore, the assumption of the proposed method is consistent with that of Bland-Altman analysis.
Next, Goodman and Kruskal's gamma (Goodman and Kruskal, 1963) is similar to the concordance rate, although its range is different. The gamma statistic does not account for the exclusion zone, and in the practical setting of clinical trials the concordance rate is usually used together with Bland-Altman analysis.
Finally, we discuss four points of future work for this study. First, for the values of the proposed concordance rate there are no absolute criteria, just as for the conventional concordance rate. Although various criteria have been proposed, there are no commonly accepted criteria for the conventional concordance rate (e.g., Saugel {\it{et al}}., 2015). Therefore, it is difficult to judge a result as good, acceptable, or poor. Second, the proposed concordance rate may also face the problem of the time intervals between the measurement values, similar to the conventional concordance rate (e.g., Saugel {\it{et al}}., 2015). Third, we have to determine the parameters of the exclusion zone (e.g., Critchley {\it{et al}}., 2011). Fourth, in the proposed method we introduced the hyperparameter $m$, which allows a flexible interpretation of the results. While Bland-Altman analysis is sometimes used in confirmatory clinical trials based on statistical inference (e.g., Asamoto {\it{et al}}., 2017), our proposed concordance rate for the four-quadrant plot has not yet been established in this regard; an estimation of its confidence interval will be needed.
In this study, we found that the conventional concordance rate is not an appropriate indicator for repeated measurements, while the proposed concordance rate can enhance the accuracy by being calculated according to the number of agreements. As the proposed concordance rate assesses trending agreement from various perspectives, this new method is expected to contribute to clinical decisions as an exploratory analysis. Further consideration is thus required from these points of view.
\
\noindent {\bf{Conflict of Interest}}
\noindent {\it{The authors have declared no conflict of interest. }}
\end{document}
\begin{document}
\selectlanguage{english}
\begin{abstract} We study spectral properties of unbounded Jacobi matrices with periodically modulated or blended entries. Our approach is based on uniform asymptotic analysis of generalized eigenvectors. We determine when the studied operators are self-adjoint. We identify regions where the point spectrum has no accumulation points. This allows us to completely describe the essential spectrum of these operators. \end{abstract} \maketitle
\section{Introduction} Consider two sequences $a = (a_n : n \in \mathbb{N}_0)$ and $b = (b_n : n \in \mathbb{N}_0)$ such that $a_n > 0$ and $b_n \in \mathbb{R}$ for all $n \geq 0$. Let $A$ be the closure in $\ell^2(\mathbb{N}_0)$ of the operator acting on sequences having finite support by the matrix \[
\begin{pmatrix}
b_0 & a_0 & 0 & 0 &\ldots \\
a_0 & b_1 & a_1 & 0 & \ldots \\
0 & a_1 & b_2 & a_2 & \ldots \\
0 & 0 & a_2 & b_3 & \\
\vdots & \vdots & \vdots & & \ddots
\end{pmatrix}. \] The operator $A$ is called a \emph{Jacobi matrix}. Recall that $\ell^2(\mathbb{N}_0)$ is the Hilbert space of square-summable complex-valued sequences with the scalar product \[
\sprod{x}{y}_{\ell^2(\mathbb{N}_0)} = \sum_{n=0}^\infty x_n \overline{y_n}. \] The most thoroughly studied are bounded Jacobi matrices, see e.g. \cite{Simon2010Book}. Let us recall that the Jacobi matrix $A$ is bounded if and only if the sequences $a$ and $b$ are bounded. In this article we are exclusively interested in \emph{unbounded} Jacobi matrices. We shall consider two classes: periodically modulated and periodically blended. The first class has been introduced in \cite{JanasNaboko2002} and systematically studied since then. To be precise, let $N$ be a positive integer. We say that $A$ has \emph{$N$-periodically modulated} entries if there are two $N$-periodic sequences $(\alpha_n : n \in \mathbb{Z})$ and $(\beta_n : n \in \mathbb{Z})$ of positive and real numbers, respectively, such that \begin{enumerate}[leftmargin=2em, label=\alph*)]
\item
$\begin{aligned}[b]
\lim_{n \to \infty} a_n = \infty
\end{aligned},$
\item
$\begin{aligned}[b]
\lim_{n \to \infty} \bigg| \frac{a_{n-1}}{a_n} - \frac{\alpha_{n-1}}{\alpha_n} \bigg| = 0
\end{aligned},$
\item
$\begin{aligned}[b]
\lim_{n \to \infty} \bigg| \frac{b_n}{a_n} - \frac{\beta_n}{\alpha_n} \bigg| = 0
\end{aligned}.$ \end{enumerate} This class contains sequences one can find in many applications. It is also rich enough to allow building an intuition about the general case. In particular, in this class there are examples of Jacobi matrices with purely absolutely continuous spectrum filling the whole real line (see \cite{JanasNaboko2002, PeriodicI, PeriodicII, SwiderskiTrojan2019, JanasNaboko2001}), having a bounded gap in absolutely continuous spectrum (see \cite{Dombrowski2004, Dombrowski2009, DombrowskiJanasMoszynskiEtAl2004, DombrowskiPedersen2002a, DombrowskiPedersen2002, JanasMoszynski2003, JanasNabokoStolz2004, Sahbani2016, Janas2012}), having absolutely continuous spectrum on the half-line (see \cite{Damanik2007, Janas2001, Janas2009, Motyka2015, Naboko2009, Naboko2010, Simonov2007, DombrowskiPedersen1995, Naboko2019}), having purely singular continuous spectral measure with explicit Hausdorff dimension (see \cite{Breuer2010}), having a dense point spectrum on the real line (see \cite{Breuer2010}), and having an empty essential spectrum (see \cite{Silva2004, Silva2007, Silva2007a, HintonLewis1978, Szwarc2002}).
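As a simple illustration of conditions a)--c) (given here for concreteness only), one may take $N = 1$, $\alpha_n \equiv 1$ and $\beta_n \equiv \lambda$ for some $\lambda \in \mathbb{R}$, together with $a_n = n+1$ and $b_n = \lambda a_n$: then $a_n \to \infty$, $\big| \tfrac{a_{n-1}}{a_n} - 1 \big| = \tfrac{1}{n+1} \to 0$ and $\tfrac{b_n}{a_n} = \lambda$ for every $n$, so the corresponding Jacobi matrix has $1$-periodically modulated entries.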
The second class, that is, periodically blended Jacobi matrices (see Definition~\ref{def:3}), has been introduced in \cite{Janas2011} as an example of unbounded Jacobi matrices having absolutely continuous spectrum equal to a finite union of compact intervals. It has been further studied in \cite{ChristoffelI, SwiderskiTrojan2019} in the context of orthogonal polynomials.
Before we formulate the main results of this paper, let us introduce some definitions. In our investigation, the crucial r\^ole is played by the \emph{transfer matrix} defined as \[
B_j(x) =
\begin{pmatrix}
0 & 1 \\
-\frac{a_{j-1}}{a_j} & \frac{x - b_j}{a_j}
\end{pmatrix}. \] We say that a sequence $(x_n : n \in \mathbb{N})$ of vectors from a normed vector space $X$ belongs to $\mathcal{D}_r (X)$ for a certain $r \in \mathbb{N}_0$, if it is \emph{bounded}, and for each $j \in \{1, \ldots, r\}$, \[
\sum_{n = 1}^\infty \big\| \Delta^j x_n \big\|^\frac{r}{j} < \infty \] where \begin{align*}
\Delta^0 x_n &= x_n, \\
\Delta^j x_n &= \Delta^{j-1} x_{n+1} - \Delta^{j-1} x_n, \qquad j \geq 1. \end{align*} If $X$ is the real line with Euclidean norm we abbreviate $\mathcal{D}_{r} = \mathcal{D}_{r}(X)$. Given a compact set $K \subset \mathbb{C}$ and a normed vector space $R$, we denote by $\mathcal{D}_{r}(K, R)$ the case when $X$ is the space of all continuous mappings from $K$ to $R$ equipped with the supremum norm.
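In particular, $\mathcal{D}_1$ consists precisely of the real sequences of bounded variation; for instance, every bounded monotone sequence $(x_n : n \in \mathbb{N})$ belongs to $\mathcal{D}_1$, since in that case $\sum_{n = 1}^\infty |\Delta x_n| = \big| \lim_{n \to \infty} x_n - x_1 \big| < \infty$.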
\begin{main_theorem}
\label{thm:A}
Suppose that $A$ is a Jacobi matrix with $N$-periodically modulated entries. Let
\[
\mathcal{X}_0(x) = \lim_{n \to \infty} X_{nN}(x)
\]
where
\[
X_n(x) = B_{n+N-1}(x) B_{n+N-2}(x) \cdots B_n(x).
\]
Assume that\footnote{For a real matrix $X$ we define its discriminant as $\operatorname{discr} X = (\operatorname{tr} X)^2 - 4 \det X$.}
$\operatorname{discr} \mathcal{X}_0(0) > 0$. If there are a compact set $K \subset \mathbb{R}$ with at least $N+1$ points,
$r \in \mathbb{N}$ and $i \in \{0, 1, \ldots, N-1 \}$, so that\footnote{By $\operatorname{Mat}(d, \mathbb{R})$ we denote the real matrices
of dimension $d \times d$ with the operator norm.}
\begin{equation}
\label{eq:131}
\big( X_{nN+i} : n \in \mathbb{N} \big) \in \mathcal{D}_r \big( K, \operatorname{Mat}(2, \mathbb{R}) \big),
\end{equation}
then $A$ is self-adjoint and\footnote{For a self-adjoint operator $A$ we denote by $\sigma_{\mathrm{ess}}(A), \sigma_{\mathrm{ac}}(A)$
and $\sigma_{\mathrm{sing}}(A)$ its essential spectrum, absolutely continuous spectrum and singular spectrum, respectively.}
$\sigma_{\mathrm{ess}}(A) = \emptyset$. \end{main_theorem} Recall that a sufficient condition for self-adjointness of the operator $A$ is \emph{Carleman's condition} (see e.g. \cite[Corollary 6.19]{Schmudgen2017}), that is \begin{equation}
\label{eq:19}
\sum_{n=0}^\infty \frac{1}{a_n} = \infty. \end{equation} The conclusion of Theorem~\ref{thm:A} is in strong contrast with the case when $\operatorname{discr} \mathcal{X}_0(0) < 0$. Indeed, if $\operatorname{discr} \mathcal{X}_0(0) < 0$, then by \cite[Theorem A]{SwiderskiTrojan2019}, the operator $A$ is self-adjoint if and only if Carleman's condition is satisfied. If this is the case, then $A$ is purely absolutely continuous and $\sigma(A) = \mathbb{R}$.
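For instance, for $a_n = (n+1)^\gamma$ with $\gamma > 0$, Carleman's condition \eqref{eq:19} holds precisely when $\gamma \leq 1$.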
Under Carleman's condition, the conclusion of Theorem~\ref{thm:A} for $r=1$ has been proven in \cite{JanasNaboko2002} by showing that the resolvent of $A$ is compact. Furthermore, by \cite[Theorem 8]{Chihara1962} (see also \cite[Theorem 2.6]{Szwarc2002}) it follows that if a self-adjoint Jacobi matrix $A$ is $1$-periodically modulated with $\operatorname{discr} \mathcal{X}_0(0) > 0$, then $\sigma_{\mathrm{ess}}(A) = \emptyset$, i.e. the condition \eqref{eq:131} is not necessary here. \begin{main_theorem}
\label{thm:B}
Suppose that $A$ is a Jacobi matrix with $N$-periodically blended entries. Let
\[
\mathcal{X}_1(x) = \lim_{n \to \infty} X_{n(N+2)+1}(x)
\]
where
\[
X_n(x) = B_{n+N+1}(x) B_{n+N}(x) \cdots B_n(x).
\]
If there are a compact set $K \subset \mathbb{R}$ with at least $N+3$ points, $r \in \mathbb{N}$, and $i \in \{1,2, \ldots, N \}$,
so that
\[
\big( X_{n(N+2)+i} : n \in \mathbb{N} \big) \in \mathcal{D}_{r} \big( K, \operatorname{Mat}(2, \mathbb{R}) \big),
\]
then $A$ is self-adjoint and
\[
\sigma_{\mathrm{sing}}(A) \cap \Lambda = \emptyset \quad \text{and} \quad
\sigma_{\mathrm{ac}}(A) = \sigma_{\mathrm{ess}}(A) = \overline{\Lambda}
\]
where
\[
\Lambda = \big\{ x \in \mathbb{R} : \operatorname{discr} \mathcal{X}_1(x) < 0 \big \}.
\] \end{main_theorem} For the proof of Theorem \ref{thm:B} see Theorem \ref{thm:6}. Let us comment that in Theorem \ref{thm:B}, the absolute continuity of $A$ follows from \cite[Theorem B]{SwiderskiTrojan2019}. Moreover, by \cite[Theorem 3.13]{ChristoffelI} it follows that $\Lambda$ is a union of $N$ open disjoint bounded intervals. For $r = 1$ and under certain very strong assumptions, Theorem \ref{thm:B} has been proven in \cite[Theorem 5]{Janas2011}.
The following result concerns the case when $\operatorname{discr} \mathcal{X}_0(0) = 0$. For the proof see Theorem \ref{thm:10}. \begin{main_theorem}
\label{thm:C}
Let $A$ be a Jacobi matrix with $N$-periodically modulated entries, and let $X_n$ and $\mathcal{X}_0$ be defined as in
Theorem~\ref{thm:A}. Suppose that $\mathcal{X}_0(0) = \sigma \operatorname{Id}$ for some $\sigma \in \{-1, 1\}$, and that there are two
$N$-periodic sequences $(s_n : n \in \mathbb{N}_0)$ and $(z_n : n \in \mathbb{N}_0)$, such that
\[
\lim_{n \to \infty} \bigg|\frac{\alpha_{n-1}}{\alpha_n} a_n - a_{n-1} - s_n\bigg| = 0,
\qquad
\lim_{n \to \infty} \bigg|\frac{\beta_n}{\alpha_n} a_n - b_n - z_n\bigg| = 0.
\]
Let $R_n = a_{n+N-1}(X_n - \sigma \operatorname{Id})$. Then $(R_{kN} : k \in \mathbb{N}_0)$ converges locally uniformly on $\mathbb{R}$ to
$\mathcal{R}_0$. If there are a compact set $K \subset \mathbb{R}$ with at least $N+1$ points and $i \in \{1,2, \ldots, N \}$,
so that
\[
\big( R_{nN+i} : n \in \mathbb{N} \big) \in
\mathcal{D}_{1} \big( K, \operatorname{Mat}(2, \mathbb{R}) \big),
\]
then $A$ is self-adjoint and
\[
\sigma_{\mathrm{sing}}(A) \cap \Lambda = \emptyset
\qquad \text{and} \qquad
\sigma_{\mathrm{ac}}(A) = \sigma_{\mathrm{ess}}(A) = \overline{\Lambda}
\]
where
\[
\Lambda = \big\{ x \in \mathbb{R} : \operatorname{discr} \mathcal{R}_0(x) < 0 \big\}.
\] \end{main_theorem} In fact, Theorem~\ref{thm:C} completes the analysis started in \cite{PeriodicIII} where it has been shown that $\overline{\Lambda} \subset \sigma_{\mathrm{ac}}(A)$.
Finally, we investigate the case when Carleman's condition \eqref{eq:19} is \emph{not} satisfied. \begin{main_theorem}
\label{thm:D}
Let $A$ be a Jacobi matrix with $N$-periodically modulated entries, and let $X_n$ and $\mathcal{X}_0$ be defined as in
Theorem~\ref{thm:A}. Suppose that $\mathcal{X}_0(0) = \sigma \operatorname{Id}$ for some $\sigma \in \{-1, 1\}$, and that Carleman's
condition is \emph{not} satisfied. Assume that there are $i \in \{0, 1, \ldots, N-1\}$ and a sequence of positive
numbers $(\gamma_n : n \in \mathbb{N}_0)$ satisfying
\[
\sum_{n=0}^\infty \frac{1}{\gamma_n} = \infty,
\]
such that $R_{nN+i}(0) = \gamma_n(X_{nN+i}(0) - \sigma \operatorname{Id})$ converges to a non-zero matrix $\mathcal{R}_i$.
Suppose that
\[
\big( R_{nN+i}(0) : n \in \mathbb{N} \big) \in \mathcal{D}_1 \big( \operatorname{Mat}(2, \mathbb{R}) \big).
\]
Then
\begin{enumerate}[(i), leftmargin=2em]
\item if $\operatorname{discr} \mathcal{R}_i < 0$, then $A$ is \emph{not} self-adjoint;
\item if $\operatorname{discr} \mathcal{R}_i > 0$, then $\sigma_{\mathrm{ess}}(A) = \emptyset$ provided $A$ is self-adjoint.
\end{enumerate} \end{main_theorem} In fact, in Theorem \ref{thm:8a} we characterize when $A$ is self-adjoint. To illustrate Theorem \ref{thm:D}, in Section \ref{sec:KM} we consider the $N$-periodically modulated Kostyuchenko--Mirzoev class. In this context we can precisely describe when the operator $A$ is self-adjoint.
In our analysis the basic objects are \emph{generalized eigenvectors} of $A$. Let us recall that $(u_n : n \in \mathbb{N}_0)$ is a generalized eigenvector associated with $x \in \mathbb{C}$, if for all $n \geq 1$ \[
\begin{pmatrix}
u_n \\
u_{n+1}
\end{pmatrix}
=
B_n(x)
\begin{pmatrix}
u_{n-1} \\
u_{n}
\end{pmatrix} \] for a certain $(u_0, u_1) \neq (0,0)$. The spectral properties of $A$ are intimately related to the asymptotic behavior of generalized eigenvectors. For example, $A$ is self-adjoint if and only if there is a generalized eigenvector associated with some $x_0 \in \mathbb{R}$ that is \emph{not} square-summable. In another vein, the theory of subordinacy (see \cite{Khan1992}) describes spectral properties of a self-adjoint $A$ in terms of the asymptotic behavior of generalized eigenvectors. In particular, it has been shown in \cite{Silva2007} that the subordinacy theory together with some general properties of self-adjoint operators implies the following: if $K \subset \mathbb{R}$ is a compact interval such that for each $x \in K$ there is a generalized eigenvector $(u_n(x) : n \in \mathbb{N}_0)$ associated with $x$, so that \begin{equation}
\label{eq:132}
\sum_{n=0}^\infty \sup_{x \in K} |u_n(x)|^2 < \infty, \end{equation} then $\sigma_{\mathrm{ess}}(A) \cap K = \emptyset$. In \cite{Silva2007}, for some class of Jacobi matrices the condition~\eqref{eq:132} has been checked with the help of uniform discrete Levinson's type theorems. In this article we take a similar approach. In particular, in Theorems~\ref{thm:2} and \ref{thm:3}, we prove our uniform Levinson's type theorems. They improve the existing results known in the literature. More precisely, in the case of negative discriminant, Theorem~\ref{thm:2} with $r \geq 2$ improves the pointwise theorem \cite[Theorem 3.1]{Moszynski2006}. The case of positive discriminant for $r > 2$ has not been studied before, even pointwise. Concerning the uniformity, Theorem~\ref{thm:2} improves \cite{Silva2004}, where for $r=1$ it was assumed that the limiting matrix is constant. Our analysis shows that this condition can be dropped (see the comment after the proof of Theorem~\ref{thm:3}). We prove uniformity by constructing an explicit diagonalization of the relevant matrices. The case of positive discriminant poses more technical challenges than the negative one. If Carleman's condition is not satisfied, our Levinson's type theorems allow us to study the asymptotic behavior of generalized eigenvectors on the whole complex plane for a large class of sequences $a$ and $b$. In particular, our results cover the asymptotics recently obtained by Yafaev in \cite{Yafaev2019}, see Corollary~\ref{cor:3} for details. Let us emphasize that our approach is different from the one used in \cite{Yafaev2019}.
The organization of the paper is as follows. In Section~\ref{sec:prelim} we collect basic properties and definitions. In particular, we prove auxiliary results concerning periodically modulated and blended Jacobi matrices. In Section~\ref{sec:stolz} we describe Stolz classes and prove the results needed to show, in Section \ref{sec:levinson}, our Levinson's type theorems, which might be of independent interest. In Section~\ref{sec:essential} we apply them to deduce Theorems~\ref{thm:A}, \ref{thm:B} and \ref{thm:C}. Finally, in Section~\ref{sec:notCarleman} we prove Theorem~\ref{thm:D} and study the Kostyuchenko--Mirzoev class of Jacobi matrices in detail.
\subsection*{Notation} By $\mathbb{N}$ we denote the set of positive integers and $\mathbb{N}_0 = \mathbb{N} \cup \{0\}$. Throughout the whole article, we write $A \lesssim B$ if there is an absolute constant $c>0$ such that $A \le cB$. We write $A \asymp B$ if $A \lesssim B$ and $B \lesssim A$. Moreover, $c$ stands for a positive constant whose value may vary from occurrence to occurrence.
\subsection*{Acknowledgment} The first author was partially supported by the Foundation for Polish Science (FNP) and by long term structural funding -- Methusalem grant of the Flemish Government.
\section{Preliminaries} \label{sec:prelim} Given two sequences $a = (a_n : n \in \mathbb{N}_0)$ and $b = (b_n : n \in \mathbb{N}_0)$ of positive and real numbers, respectively, we define the $k$th associated \emph{orthonormal} polynomials as \[
\begin{gathered}
p^{[k]}_0(x) = 1, \qquad p^{[k]}_1(x) = \frac{x - b_k}{a_k}, \\
a_{n+k-1} p^{[k]}_{n-1}(x) + b_{n+k} p^{[k]}_n(x) + a_{n+k} p^{[k]}_{n+1}(x) =
x p^{[k]}_n(x), \qquad n \geq 1.
\end{gathered} \] We usually omit the superscript if $k = 0$. Suppose that the Jacobi matrix $A$ corresponding to the sequences $a$ and $b$ is self-adjoint. Let us denote by $E_A$ its spectral resolution of the identity. Then for any Borel subset $B \subset \mathbb{R}$, we set \[
\mu(B) = \langle E_A(B) \delta_0, \delta_0 \rangle_{\ell^2(\mathbb{N}_0)} \] where $\delta_0$ is the sequence having $1$ on the $0$th position and $0$ elsewhere. The polynomials $(p_n : n \in \mathbb{N}_0)$ form an orthonormal basis of $L^2(\mathbb{R}, \mu)$.
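Let us also note that, by the recurrence relation, for every $x \in \mathbb{C}$ the sequence $\big( p_n(x) : n \in \mathbb{N}_0 \big)$ is a generalized eigenvector associated with $x$, that is,
\[
	\begin{pmatrix}
		p_n(x) \\
		p_{n+1}(x)
	\end{pmatrix}
	=
	B_n(x)
	\begin{pmatrix}
		p_{n-1}(x) \\
		p_n(x)
	\end{pmatrix},
	\qquad n \geq 1.
\]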
In this article, we are interested in Jacobi matrices associated with two classes of sequences that are defined in terms of periodic Jacobi parameters. The latter are described as follows. Let $(\alpha_n : n \in \mathbb{Z})$ and $(\beta_n : n \in \mathbb{Z})$ be two $N$-periodic sequences of positive and real numbers, respectively. Let $(\mathfrak{p}_n : n \in \mathbb{N}_0)$ be the corresponding polynomials, that is \[
\begin{gathered}
\mathfrak{p}_0(x) = 1, \qquad \mathfrak{p}_1(x) = \frac{x-\beta_0}{\alpha_0}, \\
\alpha_{n-1} \mathfrak{p}_{n-1}(x) + \beta_n \mathfrak{p}_n(x)
+ \alpha_{n} \mathfrak{p}_{n+1}(x)
= x \mathfrak{p}_n(x), \qquad n \geq 1.
\end{gathered} \] Let \[
\mathfrak{B}_n(x) =
\begin{pmatrix}
0 & 1 \\
-\frac{\alpha_{n-1}}{\alpha_n} & \frac{x - \beta_n}{\alpha_n}
\end{pmatrix},
\qquad\text{and}\qquad
\mathfrak{X}_n(x) = \prod_{j = n}^{N+n-1} \mathfrak{B}_j(x), \qquad n \in \mathbb{Z} \] where for a sequence of square matrices $(C_n : n_0 \leq n \leq n_1)$ we have set \[
\prod_{k = n_0}^{n_1} C_k = C_{n_1} C_{n_1-1} \cdots C_{n_0}. \]
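In particular, the factors in the product are applied in the order of increasing index, so that $\mathfrak{X}_n(x) = \mathfrak{B}_{N+n-1}(x) \mathfrak{B}_{N+n-2}(x) \cdots \mathfrak{B}_n(x)$.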
\subsection{Periodic modulation} \begin{definition}
\label{def:2}
We say that the Jacobi matrix $A$ associated to $(a_n : n \in \mathbb{N}_0)$ and $(b_n : n \in \mathbb{N}_0)$ has
\emph{$N$-periodically modulated entries,} if there are two $N$-periodic sequences
$(\alpha_n : n \in \mathbb{Z})$ and $(\beta_n : n \in \mathbb{Z})$ of positive and real numbers, respectively, such that
\begin{enumerate}[(a), leftmargin=2em]
\item
$\begin{aligned}[b]
\lim_{n \to \infty} a_n = \infty
\end{aligned},$
\item
$\begin{aligned}[b]
\lim_{n \to \infty} \bigg| \frac{a_{n-1}}{a_n} - \frac{\alpha_{n-1}}{\alpha_n} \bigg| = 0
\end{aligned},$
\item
$\begin{aligned}[b]
\lim_{n \to \infty} \bigg| \frac{b_n}{a_n} - \frac{\beta_n}{\alpha_n} \bigg| = 0
\end{aligned}.$
\end{enumerate} \end{definition} For a Jacobi matrix $A$ with $N$-periodically modulated entries, we set \[
X_n = \prod_{j = n}^{N+n-1} B_j. \] Then for each $i \in \{0, 1, \ldots, N-1\}$ the sequence $(X_{jN+i} : j \in \mathbb{N}_0)$ has a limit $\mathcal{X}_i$. In view of \cite[Proposition 3.8]{ChristoffelI}, we have $\mathcal{X}_i(x) = \mathfrak{X}_i(0)$ for all $x \in \mathbb{C}$.
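For example, if $N = 1$, then for all $x \in \mathbb{C}$
\[
	\mathcal{X}_0(x) = \mathfrak{B}_0(0) =
	\begin{pmatrix}
		0 & 1 \\
		-1 & -\frac{\beta_0}{\alpha_0}
	\end{pmatrix},
\]
so that $\operatorname{discr} \mathcal{X}_0(0) = \big( \tfrac{\beta_0}{\alpha_0} \big)^2 - 4$, which is positive if and only if $|\beta_0| > 2 \alpha_0$.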
\begin{proposition}
\label{prop:10}
Let $N$ be a positive integer and $\sigma \in \{-1, 1\}$. Let $A$ be a Jacobi matrix with $N$-periodically modulated
entries so that $\mathfrak{X}_0(0) = \sigma \operatorname{Id}$. If there are two $N$-periodic sequences $(s_n : n \in \mathbb{Z})$
and $(z_n : n \in \mathbb{Z})$, such that
\[
\lim_{n \to \infty} \bigg|\frac{\alpha_{n-1}}{\alpha_n} a_n - a_{n-1} - s_n\bigg| = 0,
\qquad
\lim_{n \to \infty} \bigg|\frac{\beta_n}{\alpha_n} a_n - b_n - z_n\bigg| = 0,
\]
then for each $i \in \{0, 1, \ldots, N-1 \}$ the sequence
$\big(a_{(k+1)N+i-1} (X_{kN+i} - \sigma \operatorname{Id}) : k \in \mathbb{N} \big)$ converges locally uniformly on $\mathbb{C}$ to
$\mathcal{R}_i$, and
\[
\operatorname{tr} \mathcal{R}_i = -\sigma \lim_{k \to \infty} \big( a_{(k+1)N+i-1} - a_{kN+i-1} \big).
\] \end{proposition} \begin{proof}
According to \cite[Proposition 9]{PeriodicIII}, we have
\begin{equation} \label{eq:102a}
\mathcal{R}_i(x) = \alpha_{i-1} \mathcal{C}_i(x)+ \alpha_{i-1} \mathcal{D}_i
\end{equation}
where
\[
\mathcal{C}_i(x) = x
\begin{pmatrix}
-\frac{\alpha_{i-1}}{\alpha_i} \Big(\mathfrak{p}^{[i+1]}_{N-2}\Big)'(0) & \Big(\mathfrak{p}^{[i]}_{N-1}\Big)'(0) \\
-\frac{\alpha_{i-1}}{\alpha_i} \Big(\mathfrak{p}^{[i+1]}_{N-1}\Big)'(0) & \Big(\mathfrak{p}^{[i]}_{N}\Big)'(0)
\end{pmatrix}
\]
and
\begin{equation}
\label{eq:100}
\mathcal{D}_i =
\sum_{j=0}^{N-1}
\frac{1}{\alpha_{i+j}}
\left\{
\prod_{m=j+1}^{N-1} \mathfrak{B}_{i+m} (0)
\right\}
\begin{pmatrix}
0 & 0 \\
s_{i+j} & z_{i+j}
\end{pmatrix}
\left\{
\prod_{m=0}^{j-1} \mathfrak{B}_{i+m} (0)
\right\}.
\end{equation}
In view of \cite[Proposition 6]{PeriodicIII},
\begin{equation}
\label{eq:102}
\operatorname{tr} \mathcal{C}_i \equiv 0.
\end{equation}
Since the trace is linear and invariant under cyclic permutations, by \eqref{eq:100} we get
\begin{equation}
\label{eq:101}
\operatorname{tr} \mathcal{D}_i =
\sum_{j=0}^{N-1}
\frac{1}{\alpha_{i+j}}
\operatorname{tr} \left\{
\begin{pmatrix}
0 & 0 \\
s_{i+j} & z_{i+j}
\end{pmatrix}
\prod_{m=j+1}^{N+ j-1} \mathfrak{B}_{i+m} (0)
\right\}.
\end{equation}
Using \cite[Proposition 3]{PeriodicIII}
\[
\prod_{m=j+1}^{N+j-1} \mathfrak{B}_{i+m} (0) =
\begin{pmatrix}
-\frac{\alpha_{i+j}}{\alpha_{i+j+1}} \mathfrak{p}^{[i+j+2]}_{N-3}(0) &
\mathfrak{p}^{[i+j+1]}_{N-2}(0) \\
-\frac{\alpha_{i+j}}{\alpha_{i+j+1}} \mathfrak{p}^{[i+j+2]}_{N-2}(0) &
\mathfrak{p}^{[i+j+1]}_{N-1}(0)
\end{pmatrix},
\]
thus
\begin{align}
\nonumber
\operatorname{tr} \left\{
\begin{pmatrix}
0 & 0 \\
s_{i+j} & z_{i+j}
\end{pmatrix}
\prod_{m=j+1}^{N+ j-1} \mathfrak{B}_{i+m} (0)
\right\}
&=
s_{i+j} \mathfrak{p}^{[i+j+1]}_{N-2}(0) + z_{i+j} \mathfrak{p}^{[i+j+1]}_{N-1}(0) \\
\label{eq:18}
&=
-\sigma \frac{\alpha_{i+j}}{\alpha_{i+j-1}} s_{i+j},
\end{align}
where the last equality follows by \cite[formula (13)]{PeriodicIII}. Inserting \eqref{eq:18} into
\eqref{eq:101} results in
\[
\operatorname{tr} \mathcal{D}_i = -\sigma
\sum_{j=0}^{N-1} \frac{s_{i+j}}{\alpha_{i+j-1}}.
\]
Hence, by \eqref{eq:102a} and \eqref{eq:102}, we get
\[
\operatorname{tr} \mathcal{R}_i = \alpha_{i-1} \operatorname{tr} \mathcal{D}_i =
-\sigma \alpha_{i-1}
\sum_{j=0}^{N-1} \frac{s_{i+j}}{\alpha_{i+j-1}}.
\]
Finally, by \cite[Proposition 3]{christoffelII}, we obtain
\[
\operatorname{tr} \mathcal{R}_i
=
-\sigma \lim_{k \to \infty} \big( a_{(k+1)N+i-1} - a_{kN+i-1} \big),
\]
which completes the proof. \end{proof}
\subsection{Periodic blend} \begin{definition}
The Jacobi matrix $A$ associated to $(a_n : n \in \mathbb{N}_0)$ and $(b_n : n \in \mathbb{N}_0)$ has
\emph{asymptotically $N$-periodic entries} if there are two $N$-periodic sequences
$(\alpha_n : n \in \mathbb{Z})$ and $(\beta_n : n \in \mathbb{Z})$ of positive and real numbers, respectively, such that
\begin{enumerate}[(a), leftmargin=2em]
\item
$\begin{aligned}[b]
\lim_{n \to \infty} \big|a_n - \alpha_n\big| = 0
\end{aligned}$,
\item
$\begin{aligned}[b]
\lim_{n \to \infty} \big|b_n - \beta_n\big| = 0
\end{aligned}$.
\end{enumerate} \end{definition}
\begin{definition}
\label{def:3}
The Jacobi matrix $A$ associated with sequences $(a_n : n \in \mathbb{N}_0)$ and $(b_n : n \in \mathbb{N}_0)$ has
\emph{$N$-periodically blended entries} if there are an asymptotically $N$-periodic Jacobi matrix $\tilde{A}$
associated with sequences $(\tilde{a}_n : n \in \mathbb{N}_0)$ and $(\tilde{b}_n : n \in \mathbb{N}_0)$, and a sequence of positive
numbers $(\tilde{c}_n : n \in \mathbb{N}_0)$, such that
\begin{enumerate}[(a), leftmargin=2em]
\item
$\begin{aligned}[b]
\lim_{n \to \infty} \tilde{c}_n = \infty, \qquad\text{and}\qquad
\lim_{m \to \infty} \frac{\tilde{c}_{2m+1}}{\tilde{c}_{2m}} = 1
\end{aligned}$,
\item
$\begin{aligned}[b]
a_{k(N+2)+i} =
\begin{cases}
\tilde{a}_{kN+i} & \text{if } i \in \{0, 1, \ldots, N-1\}, \\
\tilde{c}_{2k} & \text{if } i = N, \\
\tilde{c}_{2k+1} & \text{if } i = N+1,
\end{cases}
\end{aligned}$
\item
$\begin{aligned}[b]
b_{k(N+2)+i} =
\begin{cases}
\tilde{b}_{kN+i} & \text{if } i \in \{0, 1, \ldots, N-1\}, \\
0 & \text{if } i \in \{N, N+1\}.
\end{cases}
\end{aligned}$
\end{enumerate} \end{definition} If $A$ is a Jacobi matrix having $N$-periodically blended entries, we set \[
X_n(x) = \prod_{j = n}^{N+n+1} B_j(x). \] By \cite[Proposition 3.12]{ChristoffelI}, for each $i \in \{1, 2, \ldots, N-1\}$, \[
\lim_{j \to \infty} B_{j(N+2)+i}(x) = \mathfrak{B}_i(x), \] locally uniformly with respect to $x \in \mathbb{C}$, thus the sequence $(X_{j(N+2)+i}:j \in \mathbb{N})$ converges to $\mathcal{X}_i$ locally uniformly on $\mathbb{C}$ where \begin{equation} \label{eq:32}
\mathcal{X}_i(x) =
\bigg( \prod_{j = 1}^{i-1} \mathfrak{B}_j(x) \bigg) \mathcal{C}(x) \bigg(\prod_{j = i}^{N-1} \mathfrak{B}_j(x) \bigg), \end{equation} and \[
\mathcal{C}(x) =
\begin{pmatrix}
0 & -1 \\
\frac{\alpha_{N-1}}{\alpha_0} & -\frac{2x-\beta_0}{\alpha_0}
\end{pmatrix}. \] Moreover, we have the following proposition. \begin{proposition}
\label{prop:1}
We have
\begin{align}
\label{eq:21a}
\lim_{j \to \infty} B^{-1}_{j(N+2)}(x) &=
\begin{pmatrix}
0 & 0 \\
1 & 0
\end{pmatrix}, \\
\label{eq:21b}
\lim_{j \to \infty} B_{j(N+2)+N}(x) &=
\begin{pmatrix}
0 & 1 \\
0 & 0
\end{pmatrix}, \\
\label{eq:21c}
\lim_{j \to \infty} B_{j(N+2)+N+1}(x) &=
\begin{pmatrix}
0 & 1 \\
-1 & 0
\end{pmatrix}
\end{align}
locally uniformly with respect to $x \in \mathbb{C}$. \end{proposition} \begin{proof}
The proposition easily follows from Definition \ref{def:3}. Indeed, we have
\[
B_{j(N+2)}^{-1}(x) =
\begin{pmatrix}
\frac{x-\tilde{b}_{jN}}{\tilde{c}_{2j-1}} & -\frac{\tilde{a}_{jN}}{\tilde{c}_{2j-1}} \\
1 & 0
\end{pmatrix},
\]
and
\[
B_{j(N+2)+N}(x)
=
\begin{pmatrix}
0 & 1 \\
-\frac{\tilde{a}_{jN+N-1}}{\tilde{c}_{2j}} & \frac{x}{\tilde{c}_{2j}}
\end{pmatrix},
\qquad
B_{j(N+2)+N+1}(x)
=
\begin{pmatrix}
0 & 1 \\
-\frac{\tilde{c}_{2j}}{\tilde{c}_{2j+1}} & \frac{x}{\tilde{c}_{2j+1}}
\end{pmatrix}.
\]
Thus using Definition \ref{def:3}(a) and boundedness of the sequence $(\tilde{a}_n : n \in \mathbb{N}_0)$, we can compute
the limits. \end{proof}
\section{Stolz class} \label{sec:stolz} In this section we define a proper class of slowly oscillating sequences which is motivated by \cite{Stolz1994}, see also \cite[Section 2]{SwiderskiTrojan2019}. Let $X$ be a normed space. We say that a bounded sequence $(x_n)$ belongs to $\mathcal{D}_{r, s}(X)$ for a certain $r \in \mathbb{N}$ and $s \in \{0, 1, \ldots, r-1\}$, if for each $j \in \{1, \ldots, r-s\}$, \[
\sum_{n=1}^\infty \big\|\Delta^j x_n \big\|^{\frac{r}{j+s}} < \infty. \] Moreover, $(x_n) \in \mathcal{D}_{r,s }^0(X)$, if $(x_n) \in \mathcal{D}_{r,s}(X)$ and \[
\sum_{n=1}^\infty \|x_n\|^{\frac{r}{s}} < \infty. \] Let us observe that \[
\mathcal{D}_{r, s}(X) \subset \mathcal{D}_{r, 0}(X),
\qquad\text{and}\qquad
\mathcal{D}_{r, r-1}(X) = \mathcal{D}_{1, 0}(X). \] To simplify the notation, if $X$ is the real line with Euclidean norm we simply write $\mathcal{D}_{r, s} = \mathcal{D}_{r,s}(\mathbb{R})$. Given a compact set $K \subset \mathbb{C}$ and a normed vector space $R$, by $\mathcal{D}_{r, s}(K, R)$ we denote the case when $X$ is the space of all continuous mappings from $K$ to $R$ equipped with the supremum norm. Moreover, given a positive integer $N$, we say that $(x_n) \in \mathcal{D}^N_{r, s}(X)$ if for each $i \in \{0, 1, \ldots, N-1 \}$, \[
\big( x_{nN+i} : n \in \mathbb{N} \big) \in \mathcal{D}_{r,s}(X). \] \begin{lemma}
\label{lem:3}
Let $d$ and $M$ be positive integers, and let $K_0 \subset \mathbb{C}$ be a set with at least $M+1$ points.
Suppose that $(x_n : n \in \mathbb{N})$ is a sequence of elements from $\operatorname{Mat}(d, \mathbb{P}_M)$ where $\mathbb{P}_M$ denotes the
linear space of complex polynomials of degree at most $M$. If there are $r \geq 1$ and $s \in \{0, 1, \ldots, r-1\}$
so that for all $z \in K_0$,
\[
\big( x_n(z) : n \in \mathbb{N} \big) \in \mathcal{D}_{r,s} \big(\operatorname{Mat}(d, \mathbb{C}) \big),
\]
then for every compact set $K \subset \mathbb{C}$,
\[
\big( x_n : n \in \mathbb{N} \big) \in \mathcal{D}_{r,s} \big(K, \operatorname{Mat}(d, \mathbb{C}) \big).
\] \end{lemma} \begin{proof}
Let $\{ z_0, z_1, \ldots, z_M \}$ be a subset of $K_0$ consisting of distinct points. By the Lagrange interpolation
formula, we can write
\[
x_n(z) = \sum_{j=0}^M \ell_j(z) x_n(z_j)
\]
where
\[
\ell_j(z) =
\prod_{\stackrel{m = 0}{m \neq j}}^M
\frac{z - z_m}{z_j - z_m}.
\]
Let $K$ be a compact subset of $\mathbb{C}$. Then there is a constant $c>0$ such that for any $j \in \{0, 1, \ldots, M \}$,
\[
\sup_{z \in K} |\ell_j(z)| \leq c.
\]
Since, for each $k \geq 0$,
\[
\Delta^k x_n(z) = \sum_{j=0}^M \ell_j(z) \Delta^k x_n(z_j),
\]
we obtain
\[
\sup_{z \in K} \big\| \Delta^k x_n(z) \big\|
\leq
c \sum_{j=0}^M \big\| \Delta^k x_n(z_j) \big\|,
\]
and the conclusion follows. \end{proof}
The following lemma is well-known and its proof is straightforward. \begin{lemma}
\label{lem:1}
For any two sequences $(x_n)$ and $(y_n)$, and $j \in \mathbb{N}$,
\[
\Delta^j(x_n y_n : n \in \mathbb{N})_n = \sum_{k = 0}^j {j \choose k} \Delta^{j-k}x_n \cdot
\Delta^k y_{n + j - k}.
\] \end{lemma}
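For instance, for $j = 1$ the formula reduces to
\[
	\Delta(x_n y_n) = (\Delta x_n) \, y_{n+1} + x_n \, \Delta y_n.
\]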
\begin{corollary}[{\cite[Corollary 1]{SwiderskiTrojan2019}}]
\label{cor:1}
Let $r \in \mathbb{N}$ and $s \in \{0, \ldots, r-1\}$.
\begin{enumerate}[(i), leftmargin=2em]
\item If $(x_n) \in \mathcal{D}_{r, 0}(X)$ and $(y_n) \in \mathcal{D}_{r, s}^0(X)$ then
$(x_n y_n) \in \mathcal{D}_{r, s}^0(X)$.
\item If $(x_n), (y_n) \in \mathcal{D}_{r, s}(X)$, then $(x_n y_n) \in \mathcal{D}_{r, s}(X)$.
\end{enumerate} \end{corollary}
\begin{lemma}[{\cite[Lemma 2]{SwiderskiTrojan2019}}]
\label{lem:2}
Fix $r \in \mathbb{N}$, $s \in \{0, \ldots, r-1\}$ and a compact set $K \subseteq \mathbb{R}$. Let
$(f_n : n \in \mathbb{N}) \in \mathcal{D}_{r, s}(K, \mathbb{R})$ be a sequence of real functions on $K$ with values in $I \subseteq \mathbb{R}$
and let $F \in \mathcal{C}^{r-s}(I, \mathbb{R})$. Then $(F \circ f_n : n \in \mathbb{N}) \in \mathcal{D}_{r, s}(K, \mathbb{R})$. \end{lemma}
By Lemma \ref{lem:2}, we easily get the following corollary. \begin{corollary}
\label{cor:2}
Let $r \in \mathbb{N}$. If $(x_n) \in \mathcal{D}_{r, 0}(K, \mathbb{C})$, and
\[
0 < \delta \leq \abs{x_n(x)},
\]
for all $n \in \mathbb{N}$ and $x \in K$, then $(x_n^{-1} : n \in \mathbb{N}) \in \mathcal{D}_{r, 0}(K, \mathbb{C})$. \end{corollary}
The next theorem is the main result of this section. \begin{theorem}
\label{thm:1}
Fix two integers $r \geq 2$ and $s \in \{0, \ldots, r-2\}$, and a compact set $K \subset \mathbb{R}$. Suppose that
$(\lambda_n^+ : n \in \mathbb{N})$ and $(\lambda_n^- : n \in \mathbb{N})$ are two uniform Cauchy sequences from
$\mathcal{D}_{r, 0}(K, \mathbb{R})$ so that for all $x \in K$ and $n \in \mathbb{N}$,
\begin{equation}
\label{eq:7}
\begin{aligned}
\lambda_n^+(x) \lambda_n^-(x) &> 0, \\
\abs{\lambda_n^+(x)} - \abs{\lambda_n^-(x)} &\geq \delta > 0.
\end{aligned}
\end{equation}
Let $(X_n : n \in \mathbb{N}) \in \mathcal{D}_{r, s}\big(K, \operatorname{GL}(2, \mathbb{R})\big)$ be such that
\begin{equation}
\label{eq:8}
\sup_{x \in K} \sup_{n \in \mathbb{N}} \big(\|X_n(x)\| + \|X_n^{-1}(x)\|\big) < \infty.
\end{equation}
Then there are sequences $(\mu_n^+ : n \in \mathbb{N}), (\mu_n^- : n \in \mathbb{N}) \in \mathcal{D}_{r, 0}(K, \mathbb{R})$ and
$(Y_n : n \in \mathbb{N}) \in \mathcal{D}_{r, s+1}\big(K, \operatorname{GL}(2, \mathbb{R})\big)$ satisfying
\begin{equation}
\label{eq:20}
\begin{pmatrix}
\lambda^+_n & 0 \\
0 & \lambda_n^-
\end{pmatrix}
X_n^{-1} X_{n-1} =
Y_n
\begin{pmatrix}
\mu_n^+ & 0 \\
0 & \mu_n^-
\end{pmatrix}
Y_n^{-1},
\end{equation}
such that $(\mu_n^+ : n \in \mathbb{N})$ and $(\mu_n^- : n \in \mathbb{N})$ are uniform Cauchy sequences with
\begin{align*}
\mu_n^+(x) \mu_n^-(x) &> 0, \\
\abs{\mu_n^+(x)} - \abs{\mu_n^-(x)} &\geq \delta' > 0,
\end{align*}
for all $x \in K$ and $n \in \mathbb{N}$. Moreover,
\begin{equation}
\label{eq:10}
\lim_{n \to \infty} \sup_{x \in K} \big\| Y_n(x) - \operatorname{Id} \big\| = 0.
\end{equation} \end{theorem} \begin{proof}
Let
\[
D_n =
\begin{pmatrix}
\lambda_n^+ & 0 \\
0 & \lambda_n^-
\end{pmatrix}.
\]
We set
\[
W_n = D_n X_n^{-1} X_{n-1} = D_n \big( \operatorname{Id} - X_n^{-1} \Delta X_{n-1}\big).
\]
By \eqref{eq:8}, we have
\[
\sup_{K} \big\|W_n - D_n \big\| = \sup_{K} \big\|D_n X_n^{-1} \Delta X_{n-1} \big\|
\leq
c
\sup_K \big\| \Delta X_{n-1} \big\|.
\]
Since $(X_n) \in \mathcal{D}_{r, s}\big(K, \operatorname{GL}(2, \mathbb{R})\big)$,
\[
\lim_{n \to \infty} \sup_K \| \Delta X_n \|= 0,
\]
thus
\[
\lim_{n \to \infty} \sup_K \big\| W_n - D_n \big\|= 0.
\]
In particular, $W_n$ has positive discriminant. Let $\mu^+_n$ and $\mu_n^-$ be its eigenvalues with
$\abs{\mu^+_n} > \abs{\mu^-_n}$. Then
\[
\lim_{n \to \infty} \sup_K \big|\mu_n^+ - \lambda_n^+ \big| = 0,
\qquad\text{and}\qquad
\lim_{n \to \infty} \sup_K \big|\mu_n^- - \lambda_n^- \big| = 0,
\]
and hence $(\mu^+_n : n \in \mathbb{N})$ and $(\mu_n^- : n \in \mathbb{N})$ are uniform Cauchy sequences satisfying \eqref{eq:7}.
Setting
\[
X_n = \begin{pmatrix}
x_{11}^{(n)} & x_{12}^{(n)} \\
x_{21}^{(n)} & x_{22}^{(n)}
\end{pmatrix},
\qquad\text{and}\qquad
W_n =
\begin{pmatrix}
w_{11}^{(n)} & w_{12}^{(n)} \\
w_{21}^{(n)} & w_{22}^{(n)}
\end{pmatrix},
\]
we obtain
\[
W_n
=
\frac{1}{\det X_n}
\begin{pmatrix}
\lambda_n^+ \big(x_{11}^{(n-1)} x_{22}^{(n)} - x_{21}^{(n-1)} x_{12}^{(n)}\big) &
\lambda_n^+ \big(x_{12}^{(n-1)} x_{22}^{(n)} - x_{22}^{(n-1)} x_{12}^{(n)}\big) \\
\lambda_n^- \big( x_{21}^{(n-1)} x_{11}^{(n)} - x_{11}^{(n-1)} x_{21}^{(n)}\big) &
\lambda_n^- \big( x_{22}^{(n-1)} x_{11}^{(n)} - x_{12}^{(n-1)} x_{21}^{(n)}\big)
\end{pmatrix}.
\]
By \eqref{eq:8} and Corollary \ref{cor:2}, we have
\[
\bigg(\frac{1}{\det X_n} \bigg) \in \mathcal{D}_{r, 0},
\]
hence by Corollary \ref{cor:1}(ii), we get that
\[
\big(w_{11}^{(n)} : n \in \mathbb{N} \big), \big(w_{22}^{(n)} : n \in \mathbb{N} \big)\in \mathcal{D}_{r, 0}.
\]
Moreover,
\begin{align*}
w_{12}^{(n)}
&=
\frac{\lambda_n^+}{\det X_n} \big(x_{12}^{(n-1)} x_{22}^{(n)} - x_{22}^{(n-1)} x_{12}^{(n)}\big) \\
&=
\frac{\lambda_n^+}{\det X_n} \Big(\big(x_{22}^{(n)} - x_{22}^{(n-1)}\big) x_{12}^{(n)}
- \big(x_{12}^{(n)}-x_{12}^{(n-1)}\big) x_{22}^{(n)}\Big),
\end{align*}
and
\begin{align*}
w_{21}^{(n)}
&=
\frac{\lambda_n^-}{\det X_n} \Big(\big(x_{22}^{(n)} - x_{22}^{(n-1)}\big) x_{21}^{(n)}
- \big(x_{21}^{(n)}-x_{21}^{(n-1)}\big) x_{22}^{(n)}\Big),
\end{align*}
thus, by Corollary \ref{cor:1}(i),
\[
\big(w_{12}^{(n)} : n \in \mathbb{N} \big), \big(w_{21}^{(n)} : n \in \mathbb{N} \big)\in \mathcal{D}_{r, s+1}^0.
\]
Next, we compute the eigenvalues. We obtain
\[
\mu_n^+ = \frac{w_{11}^{(n)}+w_{22}^{(n)}}{2}
+ \frac{\sigma_n}{2} \sqrt{\operatorname{discr}{W_n}},\qquad\text{and}\qquad
\mu_n^- = \frac{w_{11}^{(n)}+w_{22}^{(n)}}{2}
- \frac{\sigma_n}{2} \sqrt{\operatorname{discr}{W_n}}
\]
where $\sigma_n = \sign{w_{11}^{(n)}}$, and
\[
\operatorname{discr}{W_n} = \big(w_{22}^{(n)} - w_{11}^{(n)}\big)^2 + 4 w_{12}^{(n)} w_{21}^{(n)}.
\]
Since for all $n$ sufficiently large
\begin{equation}
\label{eq:11}
\big| w_{11}^{(n)}-w_{22}^{(n)} \big| \geq
\big|\lambda_n^+ - \lambda_n^-\big|
- \big|w_{11}^{(n)}-\lambda_n^+ \big|
- \big|w_{22}^{(n)}-\lambda_n^- \big| \geq \frac{\delta}{2},
\end{equation}
by Lemma \ref{lem:2}, we have $(\mu_n^+),(\mu_n^-) \in \mathcal{D}_{r, 0}(K, \mathbb{R})$. It remains to compute the matrix
$Y_n$. Suppose that the equations
\begin{equation}
\label{eq:30}
W_n
\begin{pmatrix}
1 \\
v_n^+
\end{pmatrix}
=
\mu_n^+
\begin{pmatrix}
1 \\
v_n^+
\end{pmatrix}
\qquad\text{and}\qquad
W_n
\begin{pmatrix}
v_n^- \\
1
\end{pmatrix}
=
\mu_n^-
\begin{pmatrix}
v_n^- \\
1
\end{pmatrix}
\end{equation}
both have solutions, then the matrix
\[
Y_n =
\begin{pmatrix}
1 & v_n^- \\
v_n^+ & 1
\end{pmatrix}
\]
satisfies \eqref{eq:20}. Observe that equations \eqref{eq:30} are equivalent to
\begin{align}
\label{eq:34}
\left\{
\begin{aligned}
w_{11}^{(n)} + v_n^+ w_{12}^{(n)} &= \mu_n^+, \\
w_{21}^{(n)} + v_n^+ w_{22}^{(n)} &= \mu_n^+ v_n^+,
\end{aligned}
\right.
\qquad\text{and}\qquad
\left\{
\begin{aligned}
w_{11}^{(n)} v_n^- + w_{12}^{(n)} &= \mu_n^- v_n^-, \\
w_{21}^{(n)} v_n^- + w_{22}^{(n)} &= \mu_n^-.
\end{aligned}
\right.
\end{align}
If $\sigma_n = 1$ then by \eqref{eq:11},
\[
w_{22}^{(n)} - w_{11}^{(n)} - \sqrt{\operatorname{discr}{W_n}} \leq -\frac{\delta}{2},
\]
otherwise
\[
w_{22}^{(n)} - w_{11}^{(n)} + \sqrt{\operatorname{discr}{W_n}} \geq \frac{\delta}{2}.
\]
Thus
\[
\big| w_{22}^{(n)} - w_{11}^{(n)} - \sigma_n \sqrt{\operatorname{discr}{W_n}} \big| \geq \frac{\delta}{2},
\]
and
\begin{align*}
v_n^+
=
\frac{-2 w_{21}^{(n)}}{w_{22}^{(n)} - w_{11}^{(n)}
- \sigma_n \sqrt{\operatorname{discr} W_n}},
\quad\text{and}\quad
v_n^-
=
\frac{2 w_{12}^{(n)}}{w_{22}^{(n)} - w_{11}^{(n)}
- \sigma_n \sqrt{\operatorname{discr}{W_n}}},
\end{align*}
satisfy the systems \eqref{eq:34}. In view of \eqref{eq:11}, Corollary \ref{cor:2} and Corollary \ref{cor:1}(i),
we conclude that $(v_n^+), (v_n^-) \in \mathcal{D}_{r, s+1}^0(K, \mathbb{R})$. Finally, Lemma \ref{lem:2} implies that $(Y_n)$
belongs to $\mathcal{D}_{r, s+1}\big(K, \operatorname{GL}(2, \mathbb{R})\big)$. Because
\[
\lim_{n \to \infty} \sup_K{\abs{v_n^+}} = \lim_{n \to \infty} \sup_K{\abs{v_n^-}} = 0,
\]
we easily obtain \eqref{eq:10}. \end{proof}
\begin{corollary}
The sequences $(\mu^-_n)$ and $(\mu^+_n)$ converge to the same limit as $(\lambda^-_n)$ and $(\lambda^+_n)$,
respectively. \end{corollary}
\section{Levinson's type theorems} \label{sec:levinson} In this section we develop discrete variants of Levinson's theorem. There are two cases we need to distinguish, according to whether or not the limiting matrix has two different eigenvalues.
\subsection{Different eigenvalues} \begin{theorem}
\label{thm:2}
Let $(X_n : n \in \mathbb{N})$ be a sequence of continuous mappings defined on $\mathbb{R}$ with values in $\operatorname{GL}(2, \mathbb{R})$
that converges uniformly on a compact set $K$ to the mapping $\mathcal{X}$ with $\operatorname{discr} \mathcal{X}(x) \neq 0$ and
$\det \mathcal{X}(x) > 0$ for each $x \in K$. If $\operatorname{discr} \mathcal{X} > 0$, we additionally assume that for all $x \in K$,
\begin{equation}
\label{eq:35}
\big|[\mathcal{X}(x)]_{1, 1} - \lambda_1(x)\big| > 0
\quad\text{and}\quad
\big|[\mathcal{X}(x)]_{2, 2} - \lambda_2(x)\big| > 0
\end{equation}
where $\lambda_1$ and $\lambda_2$ are continuous functions on $K$ so that $\lambda_1(x)$ and $\lambda_2(x)$ are
eigenvalues of $\mathcal{X}(x)$. Let $(E_n : n \in \mathbb{N})$ be a sequence of continuous mappings defined on $\mathbb{R}$ with values
in $\operatorname{Mat}(2, \mathbb{C})$ such that
\begin{equation}
\label{eq:40}
\sum_{n = 1}^\infty \sup_K{\| E_n\|} < \infty.
\end{equation}
If $(X_n : n \in \mathbb{N})$ belongs to $\mathcal{D}_{r, 0}\big(K, \operatorname{GL}(2, \mathbb{R}) \big)$ for a certain $r \geq 1$ and
$\eta$ is a continuous eigenvalue of $\mathcal{X}$, then there are continuous mappings
$\Phi_n: K \rightarrow \mathbb{C}^2$, $\mu_n: K \rightarrow \mathbb{C}$, and $v : K \rightarrow \mathbb{C}^2$, satisfying
\[
\Phi_{n+1} = (X_n + E_n)\Phi_n
\]
and
\[
\lim_{n \to \infty} \sup_{x \in K}{|\mu_n(x) - \eta(x)|} = 0,
\]
such that
\begin{equation}
\label{eq:14}
\lim_{n \to \infty}
\sup_{x \in K}{
\bigg\|
\frac{\Phi_n(x)}{\prod_{j=1}^{n-1} \mu_j(x)} - v(x)
\bigg\|}
=
0
\end{equation}
where $v(x)$ is an eigenvector of $\mathcal{X}(x)$ corresponding to $\eta(x)$ for each $x \in K$. \end{theorem} \begin{proof}
Suppose that $\operatorname{discr} \mathcal{X}(x) > 0$ and $\det \mathcal{X}(x) > 0$ for all $x \in K$. In particular, $\operatorname{tr} \mathcal{X}(x) \neq 0$
for all $x \in K$. Let $\lambda^+$ and $\lambda^-$ denote the eigenvalues of $\mathcal{X}$ such that
$|\lambda^+| > |\lambda^-|$, namely we set
\[
\lambda^+(x) = \frac{\operatorname{tr} \mathcal{X}(x) + \sigma \sqrt{\operatorname{discr} \mathcal{X}(x)}}{2},
\qquad\text{and}\qquad
\lambda^-(x) = \frac{\operatorname{tr} \mathcal{X}(x) - \sigma \sqrt{\operatorname{discr} \mathcal{X}(x)}}{2}
\]
where $\sigma = \sign{\operatorname{tr} \mathcal{X}}$. Without loss of generality we can assume that \eqref{eq:35} is satisfied with
$\lambda_1 = \lambda^+$ and $\lambda_2 = \lambda^-$, since otherwise we consider mappings conjugated by
\[
J =
\begin{pmatrix}
0 & 1\\
1 & 0
\end{pmatrix}.
\]
Select $\delta > 0$ such that for all $x \in K$,
\[
\big|[\mathcal{X}(x)]_{1, 1} - \lambda^+(x)\big| \geq 2 \delta
\quad\text{and}\quad
\big|[\mathcal{X}(x)]_{2, 2} - \lambda^-(x)\big| \geq 2 \delta,
\]
and
\[
\operatorname{discr} \mathcal{X}(x) \geq 2 \delta^2, \qquad \det \mathcal{X}(x) \geq 2 \delta^2.
\]
Since $(\operatorname{discr} X_n : n \in \mathbb{N})$ converges uniformly on $K$, there is $M \geq 1$ such that for all $n \geq M$ and
$x \in K$,
\begin{equation}
\label{eq:43}
\operatorname{discr} X_n(x) \geq \delta^2, \qquad
\det X_n(x) \geq \delta^2.
\end{equation}
Hence, the matrix $X_n(x)$ has two eigenvalues
\[
\lambda_n^+(x) = \frac{\operatorname{tr} X_n(x) + \sigma \sqrt{\operatorname{discr} X_n(x)}}{2},
\qquad\text{and}\qquad
\lambda_n^-(x) = \frac{\operatorname{tr} X_n(x) - \sigma \sqrt{\operatorname{discr} X_n(x)}}{2}.
\]
By increasing $M$, we can also assume that for all $n \geq M$ and $x \in K$,
\begin{equation}
\label{eq:41}
\big|[X_n(x)]_{1, 1} - \lambda^+_n(x)\big| \geq \delta
\quad\text{and}\quad
\big|[X_n(x)]_{2, 2} - \lambda^-_n(x)\big| \geq \delta.
\end{equation}
Then setting
\[
C_{n, 0} =
\begin{pmatrix}
\frac{[X_n]_{1,2}}{\lambda^+_n-[X_n]_{1,1}} & 1 \\
1 & \frac{[X_n]_{2,1}}{\lambda^-_n - [X_n]_{2,2}}
\end{pmatrix}
\qquad\text{and}\qquad
D_{n, 0} =
\begin{pmatrix}
\lambda_n^+ & 0 \\
0 & \lambda_n^-
\end{pmatrix},
\]
we obtain
\[
X_n = C_{n, 0} D_{n, 0} C_{n, 0}^{-1}.
\]
In view of \eqref{eq:43} and \eqref{eq:41}, by Corollaries \ref{cor:2} and \ref{cor:1}, the sequences
$(C_{n, 0} : n \geq M)$ and $(D_{n, 0} : n \geq M)$ belong to $\mathcal{D}_{r, 0}\big(K, \operatorname{GL}(2, \mathbb{R})\big)$.
If $r \geq 2$, in view of \eqref{eq:43} we can apply Theorem \ref{thm:1} to get two sequences of mappings
\[
(C_{n,1} : n \geq M) \in \mathcal{D}_{r, 1}\big(K, \operatorname{GL}(2, \mathbb{R}) \big),
\qquad\text{and}\qquad
(D_{n, 1} : n \geq M) \in \mathcal{D}_{r, 0}\big(K, \operatorname{GL}(2, \mathbb{R}) \big),
\]
such that
\[
D_{n, 0} C_{n, 0}^{-1} C_{n-1, 0} = C_{n, 1} D_{n, 1} C_{n, 1}^{-1},
\]
and
\[
D_{n, 1} =
\begin{pmatrix}
\gamma_{n, 1}^+ & 0 \\
0 & \gamma_{n, 1}^-
\end{pmatrix}.
\]
Then for $n \geq M+1$,
\begin{align*}
X_{n+1} X_n
&=
(C_{n+1,0} D_{n+1, 0} C_{n+1, 0}^{-1}) (C_{n, 0} D_{n, 0} C_{n, 0}^{-1}) \\
&=
C_{n+1, 0} (D_{n+1, 0} C_{n+1, 0}^{-1} C_{n, 0}) (D_{n, 0} C_{n, 0}^{-1} C_{n-1, 0}) C_{n-1, 0}^{-1} \\
&=
C_{n+1, 0} (C_{n+1, 1} D_{n+1, 1} C_{n+1,1}^{-1}) (C_{n, 1} D_{n, 1} C_{n, 1}^{-1}) C_{n-1, 0}^{-1} \\
&=
C_{n+1, 0} C_{n+1, 1} (D_{n+1, 1} C_{n+1, 1}^{-1} C_{n, 1}) (D_{n, 1} C_{n, 1}^{-1} C_{n-1, 1})
(C_{n-1, 0} C_{n-1, 1})^{-1}.
\end{align*}
By repeated application of Theorem \ref{thm:1} for $k \in \{2, 3, \ldots, r-1\}$, we can find sequences
\begin{equation}
\label{eq:6}
(C_{j, k} : j \geq M+k) \in \mathcal{D}_{r, k}\big(K, \operatorname{GL}(2, \mathbb{R})\big),
\quad\text{and}\quad
(D_{j, k} : j \geq M+k) \in \mathcal{D}_{r, 0}\big(K, \operatorname{GL}(2, \mathbb{R})\big),
\end{equation}
such that
\[
D_{j, k-1} C_{j, k-1}^{-1} C_{j-1, k-1} = C_{j, k} D_{j, k} C_{j, k}^{-1}.
\]
Hence,
\[
X_{n+1} X_n =
Q_{n+1} \big(D_{n+1, r-1} C_{n+1, r-1}^{-1} C_{n, r-1}\big) \big(D_{n, r-1} C_{n, r-1}^{-1} C_{n-1, r-1} \big)
Q_{n-1}^{-1}
\]
where
\[
Q_m = C_{m, 0} C_{m, 1} \cdots C_{m, r-1}.
\]
Notice that
\[
\lim_{m \to \infty} Q_m =
\begin{pmatrix}
\frac{[\mathcal{X}]_{1,2}}{\lambda^+-[\mathcal{X}]_{1,1}} & 1 \\
1 & \frac{[\mathcal{X}]_{2,1}}{\lambda^--[\mathcal{X}]_{2,2}}
\end{pmatrix}
\]
uniformly on $K$.
Let us now consider the recurrence equation
\begin{align*}
\Psi_{k+1} &= Q_{2k+1}^{-1} (X_{2k+1} + E_{2k+1})(X_{2k} + E_{2k}) Q_{2k-1} \Psi_k \\
&=\big(Y_k + R_k + F_k\big) \Psi_k, \qquad k \geq M
\end{align*}
where
\[
Y_k = D_{2k+1, r-1} D_{2k, r-1}
=
\begin{pmatrix}
\gamma_{2k+1, r-1}^+ \gamma_{2k, r-1}^+ & 0 \\
0 & \gamma_{2k+1, r-1}^- \gamma_{2k, r-1}^-
\end{pmatrix},
\]
\[
R_k = (D_{2k+1, r-1} C_{2k+1, r-1}^{-1} C_{2k, r-1})
(D_{2k,r-1} C_{2k, r-1}^{-1} C_{2k-1, r-1}) - D_{2k+1, r-1} D_{2k, r-1},
\]
and
\[
F_k = Q_{2k+1}^{-1} X_{2k+1} E_{2k} Q_{2k-1} + Q_{2k+1}^{-1} E_{2k+1} X_{2k} Q_{2k-1}
+ Q_{2k+1}^{-1} E_{2k+1} E_{2k} Q_{2k-1}.
\]
Since
\begin{align*}
R_k &= -D_{2k+1, r-1} C_{2k+1, r-1}^{-1} \big( \Delta C_{2k, r-1}\big) D_{2k, r-1} \\
&\phantom{=}- D_{2k+1, r-1} C_{2k+1, r-1}^{-1} C_{2k, r-1} D_{2k, r-1} C_{2k, r-1}^{-1}
\big( \Delta C_{2k-1, r-1} \big),
\end{align*}
we easily see that
\[
\|R_k\| \leq
c
\big(\big\|\Delta C_{2k, r-1}\big\| + \big\|\Delta C_{2k-1, r-1} \big\| \big),
\]
which together with \eqref{eq:6} implies that
\[
\sum_{k = M}^\infty \sup_K \|R_k\| < \infty.
\]
In view of \eqref{eq:40} we also get
\[
\sum_{k = M}^{\infty} \sup_K \|F_k\| < \infty.
\]
Let us consider the case $\eta = \lambda^-$. The sequence $(\gamma_{n, r-1}^- : n \geq M)$ converges to $\lambda^-$,
thus there are $n_0 \geq M$ and $\delta' > 0$, so that for all $n \geq n_0$,
\[
\bigg|\frac{\gamma^+_{n, r-1}}{\gamma^-_{n, r-1}}\bigg| \geq 1 + \frac{\delta}{\abs{\gamma^-_{n, r-1}}}
\geq 1 + \delta',
\]
thus for all $k_1 \geq k_0 \geq n_0$,
\[
\prod_{j = k_0}^{k_1}
\bigg|\frac{\gamma^+_{2j+1, r-1}}{\gamma^-_{2j+1, r-1}}\bigg| \cdot
\bigg|\frac{\gamma^+_{2j, r-1}}{\gamma^-_{2j, r-1}} \bigg|
\geq (1+ \delta')^{2(k_1-k_0)}.
\]
In particular, $(Y_k : k \geq n_0)$ satisfies the uniform Levinson's condition, see
\cite[Definition 2.1]{Silva2004}. Therefore, in view of \cite[Theorem 4.1]{Silva2004}, there is a sequence
$(\Psi_k : k \geq n_0)$ such that
\[
\lim_{n \to \infty}
\sup_{x \in K}{
\bigg\|
\frac{\Psi_k(x)}{\prod_{j = n_0}^{k-1} \gamma^-_{2j+1, r-1}(x) \gamma^-_{2j, r-1}(x)}
-
e_2
\bigg\|} = 0
\]
where
\[
e_2 = \begin{pmatrix}
0 \\
1
\end{pmatrix}.
\]
In fact, in the proof of \cite[Theorem 4.1]{Silva2004} the author used the supremum norm with respect to the
parameter, thus when all the mappings in \cite[Theorem 4.1]{Silva2004} are continuous (or holomorphic) with respect
to this parameter, the functions $\Psi_k$ are continuous (or holomorphic, respectively).
We are now in the position to define $(\Phi_n : n \geq 2 n_0)$. Namely, for $x \in K$ and $n \geq 2n_0$, we set
\[
\Phi_{n}(x) =
\begin{cases}
Q_{2k-1}(x) \Psi_k(x) & \text{if } n = 2k, \\
(X_{2k}(x) + E_{2k}(x)) Q_{2k-1}(x) \Psi_k(x) & \text{if } n = 2k+1.
\end{cases}
\]
As is easy to check, $(\Phi_n : n \geq 2 n_0)$ satisfies
\[
\Phi_{n+1} = (X_n + E_n) \Phi_n.
\]
Observe that for
\[
v^- =
\begin{pmatrix}
1 \\
\frac{[\mathcal{X}]_{2,1}}{\lambda^--[\mathcal{X}]_{2,2}}
\end{pmatrix}
\]
we obtain
\[
\lim_{k \to \infty} \sup_{x \in K}{ \big\|Q_{2k-1}(x) e_2 - v^-(x)\big\|} = 0,
\]
and
\[
\lim_{k \to \infty} \sup_{x \in K}{
\bigg\| \frac{X_{2k}(x) + E_{2k}(x)}{\gamma^-_{2k, r-1}(x)} Q_{2k-1}(x) e_2 - v^-(x) \bigg\|} = 0.
\]
Therefore, \eqref{eq:14} is satisfied for $(\mu_n : n \in \mathbb{N})$ defined on $K$ by the formula
\[
\mu_n =
\begin{cases}
1 & \text{for } n < n_0, \\
\gamma^-_{n, r-1} &\text{for } n \geq n_0.
\end{cases}
\]
This completes the argument in the case $\eta = \lambda^-$; the reasoning when $\eta = \lambda^+$ is analogous.
If $\operatorname{discr} \mathcal{X}(x) < 0$ for $x \in K$, the argument is simpler. First, let us observe that the matrix $\mathcal{X}(x)$ has
real entries, thus $|[\mathcal{X}(x)]_{1, 2}| > 0$ for all $x \in K$. Since $(X_n : n \in \mathbb{N})$ converges uniformly on $K$,
there are $\delta > 0$ and $M \geq 1$, such that for all $n \geq M$ and $x \in K$,
\[
\operatorname{discr} X_n(x) \leq -\delta,\qquad\text{and}\qquad |[X_n(x)]_{1, 2}| > \delta.
\]
Therefore, for each $x \in K$, the matrix $X_n(x)$ has two eigenvalues $\lambda_n$ and $\overline{\lambda_n}$ where
\[
\lambda_n(x) = \frac{\operatorname{tr} X_n(x) + i\sqrt{\abs{\operatorname{discr} X_n(x)}}}{2}.
\]
Hence, setting
\[
C_{n, 0} =
\begin{pmatrix}
1 & 1 \\
\frac{\lambda_n - [X_n]_{1,1}}{[X_n]_{1,2}} & \frac{\overline{\lambda_n} - [X_n]_{1,1}}{[X_n]_{1,2}}
\end{pmatrix},
\qquad\text{and}\qquad
D_{n, 0} =
\begin{pmatrix}
\lambda_n & 0 \\
0 & \overline{\lambda_n}
\end{pmatrix},
\]
we obtain
\[
X_n = C_{n, 0} D_{n, 0} C_{n, 0}^{-1}.
\]
Moreover, $(C_{n, 0} : n \geq M)$ and $(D_{n, 0} : n \geq M)$ belong to $\mathcal{D}_{r, 0}\big(K, \operatorname{GL}(2, \mathbb{C}) \big)$.
If $r \geq 2$, then by \cite[Theorem 1]{SwiderskiTrojan2019}, there are two sequences of matrices
\[
(C_{n, 1} : n \geq M) \in \mathcal{D}_{r, 1}\big(K, \operatorname{GL}(2, \mathbb{C}) \big), \qquad\text{and}\qquad
(D_{n, 1} : n \geq M) \in \mathcal{D}_{r, 0}\big(K, \operatorname{GL}(2, \mathbb{C}) \big),
\]
such that
\[
D_{n, 0} C_{n, 0}^{-1} C_{n-1, 0} = C_{n, 1} D_{n, 1} C_{n, 1}^{-1},
\]
and
\[
D_{n, 1} =
\begin{pmatrix}
\gamma_{n, 1} & 0 \\
0 & \overline{\gamma_{n, 1}}
\end{pmatrix}.
\]
By repeated application of \cite[Theorem 1]{SwiderskiTrojan2019}, for $k \in \{2, 3, \ldots, r-1\}$, we can
find sequences
\[
(C_{j, k} : j \geq M+k) \in \mathcal{D}_{r, k}\big(K, \operatorname{GL}(2, \mathbb{C})\big),
\qquad\text{and}\qquad
(D_{j, k} : j \geq M+k) \in \mathcal{D}_{r, 0}\big(K, \operatorname{GL}(2, \mathbb{C})\big),
\]
such that
\[
D_{j, k-1} C_{j, k-1}^{-1} C_{j-1, k-1} = C_{j, k} D_{j, k} C_{j, k}^{-1}.
\]
Hence,
\[
X_{n+1}X_n = Q_{n+1} \big(D_{n+1, r-1} C_{n+1, r-1}^{-1} C_{n, r-1} \big)
\big(D_{n, r-1} C_{n, r-1}^{-1} C_{n-1, r-1}\big) Q_{n-1}^{-1},
\]
where
\[
Q_m = C_{m, 0} C_{m, 1} \cdots C_{m, r-1}.
\]
Notice that
\[
\lim_{m \to \infty} Q_m =
\begin{pmatrix}
1 & 1 \\
\frac{\lambda - [\mathcal{X}]_{1,1}}{[\mathcal{X}]_{1,2}} & \frac{\overline{\lambda} - [\mathcal{X}]_{1,1}}{[\mathcal{X}]_{1,2}}
\end{pmatrix}
\]
uniformly on $K$.
We next consider the recurrence equation
\begin{align*}
\Psi_{k+1}
&= Q_{2k+1}^{-1} (X_{2k+1} + E_{2k+1})(X_{2k} + E_{2k})Q_{2k-1} \Psi_k \\
&= (Y_k + R_k + F_k) \Psi_k, \qquad k \geq M
\end{align*}
where
\[
Y_{k} = D_{2k+1, r-1} D_{2k, r-1} =
\begin{pmatrix}
\gamma_{2k+1, r-1} \gamma_{2k, r-1} & 0 \\
0 & \overline{\gamma_{2k+1, r-1}} \overline{\gamma_{2k, r-1}}
\end{pmatrix},
\]
\[
R_k = (D_{2k+1, r-1} C_{2k+1, r-1}^{-1} C_{2k, r-1})
(D_{2k,r-1} C_{2k, r-1}^{-1} C_{2k-1, r-1})-D_{2k+1, r-1} D_{2k, r-1},
\]
and
\[
F_k = Q_{2k+1}^{-1} X_{2k+1} E_{2k} Q_{2k-1} + Q_{2k+1}^{-1} E_{2k+1} X_{2k} Q_{2k-1}
+Q_{2k+1}^{-1} E_{2k+1} E_{2k} Q_{2k-1}.
\]
Suppose that $\eta = \overline{\lambda}$. Since for all $n \geq M$,
\[
\bigg|\frac{\gamma_{n, r-1}}{\overline{\gamma_{n, r-1}}} \bigg| = 1,
\]
the sequence $(Y_k : k \geq M)$ satisfies the uniform Levinson's condition. Therefore, by
\cite[Theorem 4.1]{Silva2004}, there is a sequence $(\Psi_k : k \geq M)$ such that
\[
\lim_{k \to \infty}\sup_{x \in K}{ \bigg\|\frac{\Psi_k(x)}
{\prod_{j = M}^{k-1} \overline{\gamma_{2j+1, r-1}(x)} \, \overline{\gamma_{2j, r-1}(x)}} - e_2\bigg\|} = 0.
\]
Hence, $(\Phi_n : n \geq 2M)$ defined by the formula
\[
\Phi_{n}(x) =
\begin{cases}
Q_{2k-1}(x) \Psi_k(x) & \text{if } n = 2k, \\
(X_{2k}(x) + E_{2k}(x)) Q_{2k-1}(x) \Psi_k(x) & \text{if } n = 2k+1.
\end{cases}
\]
together with
\[
\mu_n =
\begin{cases}
1 & \text{for } n < 2 M, \\
\overline{\gamma_{n, r-1}} & \text{for } n \geq 2 M,
\end{cases}
\]
satisfies \eqref{eq:14}. This completes the proof of the theorem. \end{proof}
The following lemma guarantees that in the case of positive discriminant Theorem \ref{thm:2} can at least be applied locally. \begin{lemma}
\label{lem:4}
Suppose that $X$ is a continuous mapping defined on a closed interval $I \subset \mathbb{R}$ with values in $\operatorname{Mat}(2, \mathbb{R})$
that has positive discriminant on $I$. Let $\lambda_1, \lambda_2: I \rightarrow \mathbb{R}$, be continuous functions
so that $\lambda_1(x)$ and $\lambda_2(x)$ are the distinct eigenvalues of $X(x)$. Then for each
$x \in I$ there is an open interval $I_x$ containing $x$ such that
\begin{enumerate}[(i), leftmargin=2em]
\item
for all $y \in I_x \cap I$,
\[
\big([X(y)]_{1,1} - \lambda_1(y)\big)\big([X(y)]_{2,2} - \lambda_2(y)\big) \neq 0,
\]
\end{enumerate}
or
\begin{enumerate}[(i), resume, leftmargin=2em]
\item
for all $y \in I_x \cap I$,
\[
\big([X(y)]_{1,1} - \lambda_2(y)\big)\big([X(y)]_{2,2} - \lambda_1(y)\big) \neq 0.
\]
\end{enumerate} \end{lemma} \begin{proof}
Let $x \in I$. Since $\operatorname{discr} X(x) > 0$, we have $\lambda_1(x) \neq \lambda_2(x)$.
By the continuity of $X$, it is enough to show that
\[
\big([X(x)]_{1,1} - \lambda_1(x)\big)\big([X(x)]_{2,2} - \lambda_2(x)\big) \neq 0,
\]
or
\[
\big([X(x)]_{1,1} - \lambda_2(x)\big)\big([X(x)]_{2,2} - \lambda_1(x)\big) \neq 0.
\]
If neither of the conditions were met then, since $\lambda_1(x) \neq \lambda_2(x)$, we would have
\[
[X(x)]_{1,1} = \lambda_1(x) \quad\text{and}\quad [X(x)]_{2,2} = \lambda_1(x),
\]
or
\[
[X(x)]_{1,1} = \lambda_2(x) \quad\text{and}\quad [X(x)]_{2,2} = \lambda_2(x).
\]
Thus $\operatorname{tr} X(x)$ would equal $2 \lambda_1(x)$ or $2 \lambda_2(x)$, which is impossible since $\operatorname{tr} X(x) = \lambda_1(x) + \lambda_2(x)$ and $\lambda_1(x) \neq \lambda_2(x)$. \end{proof} The following corollary gives a Levinson's type theorem in the case when the limit $\mathcal{X}$ is a constant matrix. It may be proved in much the same way as Theorem \ref{thm:2}. Here, the condition \eqref{eq:35} can be dropped since $\mathcal{X}$ is a constant matrix. \begin{corollary}
\label{cor:4}
Let $(X_n : n \in \mathbb{N})$ be a sequence of matrices belonging to $\mathcal{D}_1 \big(\operatorname{GL}(2, \mathbb{R})\big)$
convergent to the matrix $\mathcal{X}$. Let $(E_n : n \in \mathbb{N})$ be a sequence of continuous (or holomorphic)
mappings defined on a compact set $K \subset \mathbb{C}$ with values in $\operatorname{Mat}(2, \mathbb{C})$, such that
\[
\sum_{n = 1}^\infty \sup_K \|E_n\| < \infty.
\]
Suppose that $\operatorname{discr} \mathcal{X} \neq 0$ and $\det \mathcal{X} > 0$. If $\eta$ is an eigenvalue of $\mathcal{X}$, then there
are continuous (or holomorphic, respectively) mappings $\Phi_n: K \rightarrow \mathbb{C}^2$, satisfying
\[
\Phi_{n+1} = (X_n + E_n) \Phi_n
\]
such that
\[
\lim_{n \to \infty}
\sup_{x \in K}{\bigg\|
\frac{\Phi_n(x)}{\prod_{j = 1}^{n-1} \mu_j} - v
\bigg\|}=0
\]
where $v$ is the eigenvector of $\mathcal{X}$ corresponding to $\eta$, and $\mu_n$ is the eigenvalue of $X_n$ such
that
\[
\lim_{n \to \infty} |\mu_n - \eta| = 0.
\] \end{corollary}
\subsection{Perturbations of the identity}
\begin{theorem}
\label{thm:3}
Let $(X_n : n \in \mathbb{N})$ be a sequence of continuous mappings defined on $\mathbb{R}$
with values in $\operatorname{GL}(2, \mathbb{R})$ that uniformly converges on a compact interval $K$ to $\sigma \operatorname{Id}$ for a certain
$\sigma \in \{-1, 1\}$. Suppose that there is a sequence of positive numbers $(\gamma_n : n \in \mathbb{N}_0)$
such that $R_n = \gamma_n(X_n - \sigma \operatorname{Id})$ converges uniformly on $K$ to the mapping $\mathcal{R}$ satisfying
$\operatorname{discr} \mathcal{R}(x) \neq 0$ for all $x \in K$. If $\operatorname{discr} \mathcal{R} > 0$, we additionally assume that
\begin{equation}
\label{eq:110}
\sum_{n=0}^\infty \frac{1}{\gamma_n} = \infty.
\end{equation}
Let $(E_n : n \in \mathbb{N})$ be a sequence of continuous mappings defined on $\mathbb{R}$ with values in $\operatorname{Mat}(2, \mathbb{C})$,
such that
\begin{equation}
\label{eq:37}
\sum_{n = 1}^\infty \sup_K{\|E_n\|} < \infty.
\end{equation}
If $(R_n : n \in \mathbb{N})$ belongs to $\mathcal{D}_{1, 0}\big(K, \operatorname{Mat}(2, \mathbb{R})\big)$ and $\eta$ is a continuous eigenvalue of
$\mathcal{R}$, then there are $n_0 \geq 1$ and continuous mappings $\Phi_n : K \rightarrow \mathbb{C}^2$,
$\mu_n : K \rightarrow \mathbb{C}$, and $v : K \rightarrow \mathbb{C}^2$ satisfying
\[
\Phi_{n+1} = (X_n + E_n) \Phi_n,
\]
such that
\begin{equation}
\label{eq:17}
\lim_{n \to \infty}
\sup_{x \in K}{\bigg\|\frac{\Phi_n(x)}{\prod_{j = n_0}^{n-1}
\big( \sigma + \gamma_j^{-1} \mu_j(x)\big)} - v(x) \bigg\|} = 0
\end{equation}
where for each $x \in K$, $v(x)$ is an eigenvector of $\mathcal{R}(x)$ corresponding to $\eta(x)$, and $\mu_n(x)$
is an eigenvalue of $R_n(x)$ such that
\[
\lim_{n \to \infty} \sup_{x \in K}{\big|\mu_n(x) - \eta(x)\big|} = 0.
\] \end{theorem} \begin{proof}
Let us first consider the case of positive discriminant. There is $\delta > 0$ such that for all $x \in K$,
\[
\operatorname{discr} \mathcal{R}(x) \geq 2\delta^2.
\]
Then the matrix $\mathcal{R}(x)$ has two eigenvalues
\[
\xi^+(x) = \frac{\operatorname{tr} \mathcal{R} (x) + \sigma \sqrt{\operatorname{discr} \mathcal{R}(x)}}{2},
\qquad\text{and}\qquad
\xi^-(x) = \frac{\operatorname{tr} \mathcal{R} (x) - \sigma \sqrt{\operatorname{discr} \mathcal{R}(x)}}{2}.
\]
Since $(R_n : n \in \mathbb{N})$ converges uniformly on $K$, there is $M \geq 1$, such that for all $n \geq M$ and
$x \in K$,
\begin{equation}
\label{eq:108}
\operatorname{discr} R_n(x) \geq \delta^2,
\qquad\text{and}\qquad
\gamma_n \geq \delta.
\end{equation}
In particular, the matrix $R_{n}$ has two eigenvalues
\begin{equation}
\label{eq:107}
\xi^+_n(x) = \frac{\operatorname{tr} R_n(x) + \sigma \sqrt{\operatorname{discr} R_n(x)}}{2},
\qquad\text{and}\qquad
\xi^-_n(x) = \frac{\operatorname{tr} R_n(x) - \sigma \sqrt{\operatorname{discr} R_n(x)}}{2}.
\end{equation}
Now, let us consider the collection of intervals $\{I_x : x \in K\}$ determined in Lemma \ref{lem:4} for
the mapping $\mathcal{R}$. By compactness of $K$ we can find a finite subcollection $\{I_1, \ldots, I_J\}$ that covers $K$.
Let us consider the case $\eta = \xi^-$. It is clear that
\[
\lim_{n \to \infty} \sup_{x \in K}{\big|\xi^-_n(x) - \eta(x)\big|} = 0.
\]
Suppose that on each $K_j = \overline{I_j} \cap K$, one can find $\Phi_n^{(j)}$ and $v^{(j)}$ so that
\begin{equation}
\label{eq:46}
\lim_{n \to \infty} \sup_{x \in K_j}{
\bigg\|
\frac{\Phi^{(j)}_n(x)}{\prod_{m = n_0}^{n-1} \big(\sigma + \gamma_m^{-1} \mu_m(x)\big)}
-
v^{(j)}(x)
\bigg\|
}
=0.
\end{equation}
Let $\{\psi_1, \ldots, \psi_J\}$ be a continuous partition of unity subordinate to the covering
$\{I_1, \ldots , I_J\}$; that is, each $\psi_j$ is a continuous non-negative function with
$\operatornamewithlimits{supp} \psi_j \subset I_j$, and
\[
\sum_{j = 1}^J \psi_j \equiv 1.
\]
We set
\[
\Phi_n = \sum_{j = 1}^J \Phi^{(j)}_n \psi_j, \qquad\text{and}\qquad
v = \sum_{j = 1}^J v^{(j)} \psi_j.
\]
Observe that $v(x)$ is an eigenvector of $\mathcal{R}(x)$ corresponding to $\eta(x)$ for all $x \in K$. Moreover,
since $\psi_j$ is supported inside $I_j$,
\begin{align*}
\lim_{n \to \infty}
\sup_{x \in K}{
\bigg\|
\frac{\Phi_n(x)}{\prod_{m = n_0}^{n-1} \big(\sigma + \gamma_m^{-1} \mu_m(x)\big)}
-
v(x)
\bigg\|}
\leq
\lim_{n \to \infty}
\sum_{j = 1}^J
\sup_{x \in K_j}{
\bigg\|
\frac{\Phi_n^{(j)}(x)}{\prod_{m = n_0}^{n-1} \big(\sigma + \gamma_m^{-1} \mu_m(x)\big)}
-
v^{(j)}(x)
\bigg\|}
=
0.
\end{align*}
Therefore, it is sufficient to prove \eqref{eq:46} for $K = K_j$ where $j \in \{ 1, \ldots, J \}$.
To simplify the notation, we drop the dependence on $j$. Without loss of generality, we can assume that
for each $x \in K$,
\[
\big|[\mathcal{R}(x)]_{1,1} - \xi^+(x)\big| \geq 2\delta,
\qquad\text{and}\qquad
\big|[\mathcal{R}(x)]_{2,2} - \xi^-(x) \big| \geq 2\delta.
\]
Since $(R_n : n \in \mathbb{N})$ converges to $\mathcal{R}$ uniformly on $K$, there is $M \geq 1$ such that for all
$x \in K$ and $n \geq M$,
\begin{equation}
\label{eq:49}
\big| [R_n(x)]_{1,1} - \xi^+_n(x) \big| \geq \delta,
\qquad\text{and}\qquad
\big| [R_n(x)]_{2,2} - \xi^-_n(x) \big| \geq \delta.
\end{equation}
Now, we can define
\[
C_n =
\begin{pmatrix}
\frac{[R_n]_{1,2}}{\xi^+_n - [R_n]_{1,1}} & 1 \\
1 & \frac{[R_n]_{2,1}}{\xi^-_n-[R_n]_{2,2}}
\end{pmatrix},
\qquad\text{and}\qquad
\tilde{D}_n(x) =
\begin{pmatrix}
\xi^+_n(x) & 0 \\
0 & \xi^-_n(x)
\end{pmatrix}.
\]
Then
\[
R_{n}(x) = C_n(x) \tilde{D}_n(x) C_n^{-1}(x),
\]
and in view of \eqref{eq:49}, \eqref{eq:108}, \eqref{eq:107}, Corollary~\ref{cor:1} and Lemma~\ref{lem:2},
we conclude that
\begin{equation}
\label{eq:113}
(C_n : n \geq M) \in \mathcal{D}_{1,0} \big( K, \operatorname{GL}(2, \mathbb{R}) \big).
\end{equation}
Notice that
\begin{equation}
\label{eq:16}
\lim_{n \to \infty} C_n =
\begin{pmatrix}
\frac{[\mathcal{R}]_{1,2}}{\xi^+ - [\mathcal{R}]_{1,1}} & 1 \\
1 & \frac{[\mathcal{R}]_{2,1}}{\xi^- - [\mathcal{R}]_{2,2}}
\end{pmatrix}
\end{equation}
uniformly on $K$. Since
\[
X_{n} = \sigma \operatorname{Id} + \frac{1}{\gamma_n} R_{n},
\]
we obtain
\[
X_{n}(x) = C_n(x) D_n(x) C_n^{-1}(x)
\]
where
\[
D_n = \sigma \operatorname{Id} + \frac{1}{\gamma_n} \tilde{D}_n.
\]
Hence, the eigenvalues of $X_n$ are
\begin{equation}
\label{eq:52}
\lambda^+_n = \sigma + \frac{1}{\gamma_n} \xi^+_n, \qquad\text{and}\qquad
\lambda^-_n = \sigma + \frac{1}{\gamma_n} \xi^-_n.
\end{equation}
Let us now consider the recurrence equation
\begin{align*}
\Psi_{n+1} &=
C_{n+1}^{-1} (X_n + E_n)C_n \Psi_n\\
&=(D_n + F_n) \Psi_n
\end{align*}
where
\[
F_n = -C^{-1}_{n+1} \big( \Delta C_n \big) D_n + C_{n+1}^{-1} E_n C_n.
\]
By \eqref{eq:16}, we easily see that
\[
\| F_n \| \leq c \big(\| \Delta C_n \| + \|E_n\|\big),
\]
which together with \eqref{eq:113} and \eqref{eq:37} gives
\[
\sum_{n=M}^\infty \sup_K \| F_n \| < \infty.
\]
Next, in view of \eqref{eq:52}, \eqref{eq:107} and \eqref{eq:108}, for $n \geq M$,
\begin{align} \label{eq:126}
\bigg| \frac{\lambda^+_n}{\lambda^-_n} \bigg|
=
\bigg| 1 + \frac{1}{\gamma_n} \frac{\sqrt{\operatorname{discr} R_n}}{1 + \frac{\sigma}{\gamma_n} \xi_n^-}\bigg|
\geq
1 + \frac{\sqrt{\operatorname{discr} R_n}}{2 \gamma_n}
\geq
\exp\bigg(\frac{\delta}{4 \gamma_n} \bigg),
\end{align}
after possibly enlarging $M$. Therefore, for all $n_2 > n_1 \geq M$,
\[
\prod_{n=n_1}^{n_2}
\bigg| \frac{\lambda^+_n}{\lambda^-_n} \bigg|
\geq
\exp\bigg(\frac{\delta}{4} \sum_{n = n_1}^{n_2} \frac{1}{\gamma_n} \bigg).
\]
Hence, \eqref{eq:110} guarantees that the sequence $(D_n : n \geq M)$ satisfies the uniform Levinson's
condition. Let us recall that we are considering $\eta = \xi^-$. In view of \cite[Theorem 4.1]{Silva2004}, there
is a sequence $(\Psi_n : n \geq M)$ such that
\[
\lim_{n \to \infty} \sup_{x \in K}{
\bigg\|
\frac{\Psi_n(x)}{\prod_{j=M}^{n-1} \lambda_{j}^-(x)} - e_2
\bigg\|}
=
0.
\]
Now, for $x \in K$ and $n \geq M$, we set
\[
\Phi_n(x) = C_n(x) \Psi_n(x).
\]
It is easy to verify that $(\Phi_n : n \geq M)$ satisfies
\[
\Phi_{n+1} = (X_n + E_n) \Phi_n.
\]
Setting
\[
v =
\begin{pmatrix}
1 \\
\frac{[\mathcal{R}]_{2,1}}{\xi^-- [\mathcal{R}]_{2,2}}
\end{pmatrix}
\]
by \eqref{eq:16}, we get
\[
\lim_{n \to \infty} \sup_{x \in K}{\big\| C_n(x) e_2 - v(x) \big\|} = 0,
\]
which completes the proof of \eqref{eq:17} for $K = K_j$, and the case of positive discriminant follows.
When $\operatorname{discr} \mathcal{R} < 0$ on $K$, the reasoning is similar. Since the matrix $\mathcal{R}$
has real entries, $[\mathcal{R}(x)]_{1,2} \neq 0$ for all $x \in K$. Therefore, for $n \geq M$, we can set
\[
C_n =
\begin{pmatrix}
1 & 1 \\
\frac{\xi^+_n - [R_n]_{1,1}}{[R_n]_{1, 2}} & \frac{\xi^-_n - [R_n]_{1,1}}{[R_n]_{1, 2}}
\end{pmatrix}
\]
where
\[
\xi^+_n(x) = \frac{\operatorname{tr} R_n(x) + i \sqrt{|\operatorname{discr} R_n(x)|}}{2},
\qquad\text{and}\qquad
\xi^-_n(x) = \frac{\operatorname{tr} R_n(x) - i \sqrt{|\operatorname{discr} R_n(x)|}}{2}.
\]
Since
\[
\bigg| \frac{\lambda^+_n}{\lambda^-_n} \bigg| = 1,
\]
the sequence $(D_n : n \geq M)$ satisfies the uniform Levinson's condition. The rest of the proof runs as before. \end{proof}
The method of proof used in Theorem \ref{thm:3} can also be applied in the case of different eigenvalues and $r = 1$. In particular, the condition \eqref{eq:35} can be dropped.
The proof of the following corollary is analogous to the proof of Theorem \ref{thm:3}. \begin{corollary}
\label{cor:5}
Let $(X_n : n \in \mathbb{N})$ be a sequence of matrices in $\operatorname{GL}(2, \mathbb{R})$ convergent to the matrix $\sigma \operatorname{Id}$
for a certain $\sigma \in \{-1, 1\}$. Suppose that there is a sequence of positive numbers $(\gamma_n : n \in \mathbb{N}_0)$
such that $R_n = \gamma_n(X_n - \sigma \operatorname{Id})$ converges to the matrix $\mathcal{R}$ satisfying $\operatorname{discr} \mathcal{R} \neq 0$.
If $\operatorname{discr} \mathcal{R} > 0$, we additionally assume
\[
\sum_{n = 0}^\infty \frac{1}{\gamma_n} = \infty.
\]
Let $(E_n : n \in \mathbb{N})$ be a sequence of continuous (or holomorphic) mappings defined on a compact set $K \subset \mathbb{C}$ with values in
$\operatorname{Mat}(2, \mathbb{C})$, such that
\[
\sum_{n = 1}^\infty \sup_K \|E_n\| < \infty.
\]
If $(R_n: n \in \mathbb{N})$ belongs to $\mathcal{D}_{1, 0}(\operatorname{Mat}(2, \mathbb{R}))$, and $\eta$ is an eigenvalue of $\mathcal{R}$, then there
are $n_0 \geq 1$ and continuous (or holomorphic, respectively) mappings $\Phi_n: K \rightarrow \mathbb{C}^2$, satisfying
\[
\Phi_{n+1} = (X_n + E_n) \Phi_n,
\]
and such that
\[
\lim_{n \to \infty}
\sup_{x \in K}{
\bigg\|
\frac{\Phi_n(x)}{\prod_{j = n_0}^{n-1} (\sigma + \gamma_j^{-1} \mu_j)} - v
\bigg\|}=0
\]
where $v$ is an eigenvector of $\mathcal{R}$ corresponding to $\eta$, and $\mu_n$ is the eigenvalue of $R_n$ such that
\[
\lim_{n \to \infty} |\mu_n - \eta| = 0.
\] \end{corollary}
In the following proposition we describe a way to estimate the denominator in \eqref{eq:17}. \begin{proposition} \label{prop:7}
Let $(X_n : n \in \mathbb{N})$ be a sequence of mappings defined on $\mathbb{R}$ with values in $\operatorname{GL}(2, \mathbb{R})$ convergent
on a compact set $K$ to $\sigma \operatorname{Id}$ for a certain $\sigma \in \{-1, 1\}$. Suppose that there is a sequence of
positive numbers $(\gamma_n : n \in \mathbb{N})$ satisfying
\[
\lim_{n \to \infty} \Gamma_n = \infty \qquad \text{where} \qquad
\Gamma_n = \sum_{j=1}^n \frac{1}{\gamma_j},
\]
such that $R_n = \gamma_n (X_n - \sigma \operatorname{Id})$ converges uniformly on $K$ to the mapping $\mathcal{R}$. Assume that
$\operatorname{discr} \mathcal{R}(x) > 0$ for all $x \in K$, and
\begin{equation}
\label{eq:50}
\sum_{n=1}^\infty \Gamma_n \cdot \sup_{x \in K} \| R_{n+1}(x) - R_n(x) \| < \infty.
\end{equation}
Then there is $n_0$ such that for all $n \geq n_0$, and $x \in K$,
\begin{equation}
\label{eq:38}
\prod_{j = n_0}^n \big|\sigma + \gamma_j^{-1} \mu_j^-(x) \big|^2 \asymp
\exp \Big(\Gamma_n \big( \sigma \operatorname{tr} \mathcal{R}(x) - \sqrt{\operatorname{discr} \mathcal{R}(x)} \big) \Big)
\end{equation}
and
\begin{equation}
\label{eq:39}
\prod_{j = n_0}^n \big|\sigma + \gamma_j^{-1} \mu_j^+(x) \big|^2 \asymp
\exp \Big(\Gamma_n \big( \sigma \operatorname{tr} \mathcal{R}(x) + \sqrt{\operatorname{discr} \mathcal{R}(x)} \big) \Big)
\end{equation}
where
\begin{equation}
\label{eq:56}
\mu_j^- = \frac{1}{2} \Big(\operatorname{tr} R_{j} - \sigma \sqrt{\operatorname{discr} R_{j}}\Big),
\qquad\text{and}\qquad
\mu_j^+ = \frac{1}{2}\Big(\operatorname{tr} R_{j} + \sigma \sqrt{\operatorname{discr} R_{j}}\Big).
\end{equation}
The implicit constants in \eqref{eq:38} and \eqref{eq:39} are independent of $x$ and $n$. \end{proposition} \begin{proof}
Since $\operatorname{discr} \mathcal{R} > 0$ on $K$, there is $n_0$ such that for all $j \geq n_0$ and $x \in K$,
$\operatorname{discr} R_{j}(x) > 0$. Thus $R_{j}$ has two eigenvalues given by the formulas \eqref{eq:56}.
By possibly enlarging $n_0$, for all $n \geq n_0$, we have
\[
\log\Big(
\prod_{j = n_0}^n \big|\sigma + \gamma_j^{-1} \mu_j^- \big|^2
\Big)
\asymp
\sum_{j = n_0}^n \frac{1}{\gamma_j} \Big(\sigma \operatorname{tr} R_{j} - \sqrt{\operatorname{discr} R_{j}}\Big)
\]
uniformly on $K$. Let
\[
A_n^- = \sigma \operatorname{tr} R_{n} - \sqrt{\operatorname{discr} R_{n}},
\qquad
A_{\infty}^- = \sigma \operatorname{tr} \mathcal{R} - \sqrt{\operatorname{discr} \mathcal{R}}.
\]
Since for $m \geq n$,
\begin{align*}
\big|A_n^- - A_m^- \big| \cdot \Gamma_n
&\leq
c \sum_{k = n}^\infty \big\|R_{k+1} - R_{k} \big\| \cdot \Gamma_n \\
&\leq
c \sum_{k = n}^\infty \big\|R_{k+1} - R_{k} \big\| \cdot \Gamma_k,
\end{align*}
we obtain
\begin{equation}
\label{eq:59}
\sup_K{\big| A_n^- - A_{\infty}^- \big|} \cdot \Gamma_n
\leq
c.
\end{equation}
Now, by summation by parts, we get
\begin{align*}
\sum_{j = n_0}^n \frac{1}{\gamma_j} A_j^-
&=
(\Gamma_n - \Gamma_{n_0-1}) A_{\infty}^- + \sum_{j = n_0}^n (\Gamma_j - \Gamma_{j-1})
(A_j^- - A_{\infty}^-) \\
&=
\Gamma_n A^-_{\infty} - \Gamma_{n_0-1} A_{n_0}^- + \Gamma_n (A_n^- - A_{\infty}^-) +
\sum_{j = n_0}^{n-1} \Gamma_j (A_j^- - A_{j+1}^-),
\end{align*}
thus, by \eqref{eq:50} and \eqref{eq:59},
\[
\sup_K{\bigg| \sum_{j = n_0}^n \frac{1}{\gamma_j} A_j^- - A_{\infty}^- \cdot \Gamma_n\bigg|}
\leq c.
\]
Hence,
\[
\prod_{j = n_0}^n \big|\sigma + \gamma_j^{-1} \mu_j^- \big|^2
\asymp
\exp \Big(\Gamma_n \big( \sigma \operatorname{tr} \mathcal{R} - \sqrt{\operatorname{discr} \mathcal{R}} \big) \Big),
\]
uniformly on $K$. The proof of \eqref{eq:39} is similar. \end{proof}
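To illustrate \eqref{eq:38} and \eqref{eq:39}: when, for instance, $\gamma_n = n+1$ (the situation of Proposition~\ref{prop:5} below), we have $\Gamma_n = \log n + \mathcal{O}(1)$, and the estimates read
\[
\prod_{j = n_0}^n \big|\sigma + \gamma_j^{-1} \mu_j^\pm(x) \big|^2 \asymp
n^{\sigma \operatorname{tr} \mathcal{R}(x) \pm \sqrt{\operatorname{discr} \mathcal{R}(x)}}
\]
uniformly on $K$, so the two products grow or decay at polynomial rates.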
\section{Essential spectrum for positive discriminant} \label{sec:essential} In this section we prove the main results of the paper. \begin{theorem}
\label{thm:6}
Let $N$ and $r$ be positive integers and $i \in \{1, 2, \ldots, N\}$. Let $A$ be a Jacobi matrix with
$N$-periodically blended entries. If there is a compact set $K_0 \subset \mathbb{R}$ with at least $N+3$ points so
that
\begin{equation}
\label{eq:127}
\big( X_{n(N+2)+i} : n \in \mathbb{N} \big) \in \mathcal{D}_{r,0} \big( K_0, \operatorname{Mat}(2, \mathbb{R}) \big),
\end{equation}
then $A$ is self-adjoint and
\[
\sigma_{\mathrm{sing}}(A) \cap \Lambda = \emptyset \quad \text{and} \quad
\sigma_{\mathrm{ac}}(A) = \sigma_{\mathrm{ess}}(A) = \overline{\Lambda}
\]
where
\[
\Lambda = \big\{ x \in \mathbb{R} : \operatorname{discr} \mathcal{X}_1(x) < 0 \big\}
\]
wherein $\mathcal{X}_1$ is given by the formula \eqref{eq:32}. \end{theorem} \begin{proof}
Fix $x_0 \in \mathbb{R} \setminus \overline{\Lambda}$. Let $I$ be an open interval containing $x_0$ such that
$\overline{I} \subset \mathbb{R} \setminus \overline{\Lambda}$. Since $\operatorname{discr} \mathcal{X}_1 = \operatorname{discr} \mathcal{X}_i$,
we have $\operatorname{discr} \mathcal{X}_i > 0$ on $\overline{I}$. Thus the matrix $\mathcal{X}_i$ has two different eigenvalues
$\lambda^+$ and $\lambda^-$. Since $\det \mathcal{X}_i \equiv 1$, we can select them in such a way that
\[
\abs{\lambda^-} < 1 < \abs{\lambda^+}.
\]
Let $I_0$ be an open interval determined by Lemma \ref{lem:4} for $x_0$ and the mapping $\mathcal{X}_i: \overline{I}
\rightarrow \operatorname{Mat}(2, \mathbb{R})$. Without loss of generality we can assume that, for all $x \in I_0$,
\[
\big| [\mathcal{X}_i(x)]_{1, 1} - \lambda^-(x)\big| > 0,
\qquad\text{and}\qquad
\big| [\mathcal{X}_i(x)]_{2, 2} - \lambda^+(x)\big| > 0.
\]
Let $K = \overline{I_0}$. In view of Lemma \ref{lem:3},
\begin{equation}
\label{eq:130}
\big( X_{j(N+2)+i} : j \in \mathbb{N} \big) \in \mathcal{D}_{r,0} \big( K, \operatorname{GL}(2, \mathbb{R}) \big).
\end{equation}
Now, by Theorem \ref{thm:2}, there are sequences $(\Phi^\pm_n : n \geq n_0)$ and $(\mu_n^\pm : n \in \mathbb{N})$,
such that
\begin{equation} \label{eq:138}
\lim_{n \to \infty} \sup_{x \in K}{
\bigg\| \frac{\Phi^\pm_n(x)}{\prod_{j=1}^n \mu_j^\pm(x)} - v^\pm(x) \bigg\|} = 0
\end{equation}
where $v^\pm$ is a continuous eigenvector of $\mathcal{X}_i$ corresponding to $\lambda^\pm$. We set
\[
\phi_1^\pm = B_1^{-1} \cdots B_{n_0(N+2)+i-1}^{-1} \Phi^\pm_{n_0},
\]
and for $n \geq 1$,
\begin{equation}
\label{eq:24}
\phi^\pm_{n+1} = B_n \phi^\pm_n.
\end{equation}
Then for $k(N+2)+i' > n_0(N+2) + i$ with $i' \in \{0, 1, \ldots, N+1\}$, we have
\begin{equation} \label{eq:139}
\phi^\pm_{k(N+2)+i'}
=
\begin{cases}
B^{-1}_{k(N+2)+i'} B^{-1}_{k(N+2)+i'+1} \cdots B^{-1}_{k(N+2)+i-1} \Phi^\pm_k
&\text{if } i' \in \{0, 1, \ldots, i-1\}, \\
\Phi^\pm_k
&\text{if } i' = i, \\
B_{k(N+2)+i'-1} B_{k(N+2)+i'-2} \cdots B_{k(N+2)+i} \Phi_k^\pm
&\text{if } i' \in \{i+1, \ldots, N+1\}.
\end{cases}
\end{equation}
Since for $i' \in \{1, \ldots, i-1\}$,
\[
\lim_{k \to \infty}
B^{-1}_{k(N+2)+i'} B^{-1}_{k(N+2)+i'+1} \cdots B^{-1}_{k(N+2)+i-1}
=
\mathfrak{B}^{-1}_{i'} \mathfrak{B}^{-1}_{i'+1} \cdots \mathfrak{B}^{-1}_{i-1},
\]
we obtain
\begin{equation}
\label{eq:25}
\lim_{k \to \infty}
\sup_K{
\bigg\|
\frac{\phi^{-}_{k(N+2)+i'}}{\prod_{j=1}^{k-1} \mu_j^-}
-
\mathfrak{B}^{-1}_{i'} \mathfrak{B}^{-1}_{i'+1} \cdots \mathfrak{B}^{-1}_{i-1} v^-
\bigg\|
}
=0.
\end{equation}
Analogously, for $i' \in \{i+1, \ldots, N\}$, we get
\begin{equation}
\label{eq:26}
\lim_{k \to \infty}
\sup_K{
\bigg\|
\frac{\phi^{-}_{k(N+2)+i'}}{\prod_{j=1}^{k-1} \mu_j^-}
-
\mathfrak{B}_{i'-1} \mathfrak{B}_{i'-2} \cdots \mathfrak{B}_i v^-
\bigg\|
}
=0.
\end{equation}
Lastly, by Proposition \ref{prop:1},
\begin{equation}
\label{eq:22}
\lim_{k \to \infty}
\sup_K
\Bigg\|
\frac{\phi^{-}_{k(N+2)}}{\prod_{j = 0}^{k-1} \mu_j^-}
-
\begin{pmatrix}
0 & 0 \\
1 & 0
\end{pmatrix}
\mathfrak{B}_{1}^{-1} \mathfrak{B}_{2}^{-1} \cdots \mathfrak{B}_{i-1}^{-1} v^-
\Bigg\|
=0
\end{equation}
and
\begin{equation}
\label{eq:23}
\lim_{k \to \infty}
\sup_K
\Bigg\|
\frac{\phi^{-}_{k(N+2)+N+1}}{\prod_{j=1}^{k-1} \mu_j^-}
-
\begin{pmatrix}
0 & 1 \\
0 & 0
\end{pmatrix}
\mathfrak{B}_{N-1} \mathfrak{B}_{N-2} \cdots \mathfrak{B}_i v^-
\Bigg\|
=0.
\end{equation}
Since $(\phi^\pm_n : n \in \mathbb{N})$ satisfies \eqref{eq:24}, the sequence $(u^\pm_n(x) : n \in \mathbb{N}_0)$
defined as
\[
u^\pm_n(x) =
\begin{cases}
\langle \phi^\pm_1(x), e_1 \rangle & \text{if } n = 0, \\
\langle \phi^\pm_n(x), e_2 \rangle & \text{if } n \geq 1,
\end{cases}
\]
is a generalized eigenvector associated to $x \in K$. Observe that $(u^\pm_0, u^\pm_1) \neq (0, 0)$ on $K$.
Indeed, otherwise there would be $x \in K$ such that $\phi^\pm_1(x) = 0$, hence $\phi^\pm_n(x) = 0$ for
all $n \in \mathbb{N}$, and therefore $v^\pm(x) = 0$, which is impossible. Now, in view of \eqref{eq:138} and \eqref{eq:139},
there are constants $c > 0$ and $\delta > 0$ such that for all $n \geq n_0$ and $x \in K$,
\[
\big| u^+_{n(N+2)+i-1}(x) \big|^2 + \big| u^+_{n(N+2)+i}(x) \big|^2 =
\big\| \phi^+_{n(N+2)+i}(x) \big\|^2 \geq c
\prod_{j=n_0}^{n-1} |\mu^+_j(x)|^2 \geq c (1 + \delta)^n.
\]
Moreover, by \eqref{eq:25}--\eqref{eq:23}, for all $n \geq n_0$, $i' \in \{0, 1, \ldots, N+1 \}$, and $x \in K$,
\[
\big\| \phi^-_{n(N+2)+i'}(x) \big\|^2 \leq c \prod_{j=n_0}^{n-1} |\mu^-_j(x)|^2 \leq c (1+\delta)^{-n}.
\]
Consequently, for any $x \in K$,
\[
\sum_{n = 0}^\infty \abs{u^+_n(x)}^2 = \infty
\]
which shows that $A$ is self-adjoint (see \cite[Theorem 6.16]{Schmudgen2017}). Moreover,
\[
\sum_{n = 0}^\infty \sup_{x \in K}{\abs{u^-_n(x)}}^2 < \infty,
\]
thus by the proof of \cite[Theorem 5.3]{Silva2007},
\[
\sigma_{\mathrm{ess}}(A) \cap K = \emptyset.
\]
Therefore, for all $x_0 \in \mathbb{R} \setminus \overline{\Lambda}$ there is an open interval $I_0$ containing $x_0$
such that $\sigma_{\mathrm{ess}}(A) \cap I_0 = \emptyset$. Consequently, $\sigma_{\mathrm{ess}}(A) \subseteq \overline{\Lambda}$.
In view of \eqref{eq:130}, \cite[Theorem B]{SwiderskiTrojan2019} implies that $A$ is purely absolutely continuous
on $\Lambda$, and $\overline{\Lambda} \subset \sigma_{\mathrm{ac}}(A)$. This completes the proof. \end{proof}
\begin{remark}
The proof of \cite[Corollary 6.7]{Swiderski2020} entails that \eqref{eq:127} is satisfied for any compact
set $K \subset \mathbb{R}$, and all $i \in \{ 1, 2, \ldots, N \}$, provided that
\[
\bigg( \frac{1}{a_n} : n \in \mathbb{N} \bigg),
\big( b_n : n \in \mathbb{N} \big) \in \mathcal{D}^{N+2}_{r,0} \quad \text{and} \quad
\bigg( \frac{a_{n(N+2)+N}}{a_{n(N+2)+N+1}} : n \in \mathbb{N} \bigg) \in \mathcal{D}_{r,0}.
\] \end{remark}
Essentially the same reasoning as in the proof of Theorem \ref{thm:6} leads to the following theorem. \begin{theorem}
\label{thm:7}
Let $N$ and $r$ be positive integers and $i \in \{0, 1, \ldots, N-1\}$. Let $A$ be a Jacobi matrix with
$N$-periodically modulated entries. If $|\operatorname{tr} \mathfrak{X}_0(0)| > 2$ and there is a compact set $K_0 \subset \mathbb{R}$ with
at least $N+1$ points so that
\begin{equation}
\label{eq:128}
\big( X_{nN+i} : n \in \mathbb{N} \big) \in \mathcal{D}_{r,0} \big( K_0, \operatorname{Mat}(2, \mathbb{R}) \big),
\end{equation}
then $A$ is self-adjoint and $\sigma_{\mathrm{ess}}(A) = \emptyset$. \end{theorem}
\begin{remark}
The proof of \cite[Corollary 8]{SwiderskiTrojan2019} implies that \eqref{eq:128} is satisfied for any compact
set $K \subset \mathbb{R}$, and all $i \in \{0, 1, \ldots, N-1 \}$, provided that
\[
\bigg( \frac{a_{n-1}}{a_n} : n \in \mathbb{N} \bigg),
\bigg( \frac{b_n}{a_n} : n \in \mathbb{N} \bigg),
\bigg( \frac{1}{a_n} : n \in \mathbb{N} \bigg) \in \mathcal{D}^N_{r,0}.
\] \end{remark}
We next consider the case when $\mathfrak{X}_0$ has equal eigenvalues. \begin{theorem}
\label{thm:10}
Let $N$ and $r$ be positive integers and $i \in \{0, 1, \ldots, N-1\}$. Let $A$ be a Jacobi matrix with
$N$-periodically modulated entries so that $\mathfrak{X}_0(0) = \sigma \operatorname{Id}$ for a certain $\sigma \in \{-1, 1\}$. Suppose
that there are two $N$-periodic sequences $(s_n : n \in \mathbb{N}_0)$ and $(z_n : n \in \mathbb{N}_0)$, such that
\[
\lim_{n \to \infty} \bigg|\frac{\alpha_{n-1}}{\alpha_n} a_n - a_{n-1} - s_n\bigg| = 0,
\qquad
\lim_{n \to \infty} \bigg|\frac{\beta_n}{\alpha_n} a_n - b_n - z_n\bigg| = 0.
\]
Let
\[
R_n = a_{n+N-1} \big(X_n - \sigma \operatorname{Id}\big).
\]
Then $(R_{nN} : n \in \mathbb{N})$ converges to $\mathcal{R}_0$ locally uniformly on $\mathbb{R}$.
If there is a compact set $K_0 \subset \mathbb{R}$ with at least $N+1$ points such that
\begin{equation} \label{eq:129}
\big( R_{nN+i} : n \in \mathbb{N} \big) \in
\mathcal{D}_{1,0} \big( K_0, \operatorname{Mat}(2, \mathbb{R}) \big),
\end{equation}
then $A$ is self-adjoint and
\[
\sigma_{\mathrm{sing}}(A) \cap \Lambda = \emptyset \qquad \text{and} \qquad
\sigma_{\mathrm{ac}}(A) = \sigma_{\mathrm{ess}}(A) = \overline{\Lambda}
\]
where
\[
\Lambda = \big\{ x \in \mathbb{R} : \operatorname{discr} \mathcal{R}_0(x) < 0 \big\}.
\] \end{theorem} \begin{proof}
In view of Proposition~\ref{prop:10}, there is $c > 0$ such that for all $k \geq 0$,
\begin{align*}
a_{kN+i}
&=
\sum_{j=0}^{k-1} \big( a_{(j+1)N+i} - a_{jN+i} \big) + a_i \\
&\leq
c(k+1).
\end{align*}
Therefore,
\[
\sum_{n = 0}^{k_0N+i} \frac{1}{a_n}
\geq
\sum_{k = 0}^{k_0} \frac{1}{a_{kN+i}}
\geq
\frac{1}{c} \sum_{k=1}^{k_0} \frac{1}{k}.
\]
Thus Carleman's condition is satisfied, and so $A$ is self-adjoint.
Thanks to Lemma~\ref{lem:3}, for any compact set $K \subset \mathbb{R}$,
\[
\big( X_{jN+i} : j \in \mathbb{N}_0 \big),
\big( R_{nN+i} : n \in \mathbb{N} \big) \in
\mathcal{D}_{1,0} \big( K, \operatorname{Mat}(2, \mathbb{R}) \big).
\]
Since $\operatorname{discr} \mathcal{R}_0 = \operatorname{discr} \mathcal{R}_i$, by \cite[Criterion 5.8]{Moszynski2009} together with
\cite[Proposition 5.7]{Moszynski2009} and \cite[Theorem 5.6]{Moszynski2009}, we conclude that $A$ is purely
absolutely continuous on $\Lambda$ and $\overline{\Lambda} \subset \sigma_{\mathrm{ac}}(A)$. Hence, it remains to show that
$\sigma_{\mathrm{ess}}(A) \subset\overline{\Lambda}$. To do so, we fix a compact set $K \subset \mathbb{R} \setminus \overline{\Lambda}$
with non-empty interior. Since $\operatorname{discr} \mathcal{R}_i > 0$ on $K$, for each $x \in K$ the matrix $\mathcal{R}_i(x)$ has two
distinct eigenvalues
\[
\xi^+(x) = \frac{\operatorname{tr} \mathcal{R}_{i}(x) + \sigma \sqrt{\operatorname{discr} \mathcal{R}_i(x)}}{2},
\qquad\text{and}\qquad
\xi^-(x) = \frac{\operatorname{tr} \mathcal{R}_i(x) - \sigma \sqrt{\operatorname{discr} \mathcal{R}_i(x)}}{2}.
\]
Moreover, $(\operatorname{discr} R_{jN+i} : j \in \mathbb{N})$ converges uniformly on $K$, thus there are $M \geq 1$ and $\delta > 0$
such that for all $j \geq M$ and $x \in K$,
\[
\operatorname{discr} R_{jN+i}(x) \geq \delta.
\]
Therefore, $R_{jN+i}(x)$ has two distinct eigenvalues
\[
\xi^+_j(x) = \frac{\operatorname{tr} R_{jN+i}(x) + \sigma \sqrt{\operatorname{discr} R_{jN+i}(x)}}{2},
\qquad\text{and}\qquad
\xi^-_j(x) = \frac{\operatorname{tr} R_{jN+i}(x) - \sigma \sqrt{\operatorname{discr} R_{jN+i}(x)}}{2}.
\]
Since
\[
X_n = \sigma \operatorname{Id} + \frac{1}{a_{n+N-1}} R_n,
\]
the eigenvalues of $X_{jN+i}(x)$ are
\[
\lambda_j^+(x) = \sigma + \frac{\xi_j^+(x)}{a_{(j+1)N+i-1}},
\qquad\text{and}\qquad
\lambda_j^-(x) = \sigma + \frac{\xi_j^-(x)}{a_{(j+1)N+i-1}}.
\]
By Theorem \ref{thm:3}, there is a sequence $(\Phi_n : n \geq n_0)$ such that
\begin{equation}
\label{eq:42}
\lim_{n \to \infty}
\sup_{x \in K} \bigg\|\frac{\Phi_n(x)}{\prod_{j = n_0}^{n-1} \lambda_j^-(x)} - v^-(x) \bigg\|= 0
\end{equation}
where $v^-$ is a continuous eigenvector of $\mathcal{R}_i$ corresponding to $\xi^-$. We set
\[
\phi_1 = B_1^{-1} \cdots B^{-1}_{n_0} \Phi_{n_0},
\]
and for $n \geq 1$,
\begin{equation}
\label{eq:15}
\phi_{n+1} = B_n \phi_n.
\end{equation}
Then for $kN+i' > n_0N+i$ with $i' \in \{0, 1, \ldots, N-1\}$, we have
\[
\phi_{kN+i'} =
\begin{cases}
B_{kN+i'}^{-1} B_{kN+i'+1}^{-1} \cdots B_{kN+i-1}^{-1} \Phi_k
&\text{if } i' \in \{0, 1, \ldots, i-1\}, \\
\Phi_k & \text{if } i' = i, \\
B_{kN+i'-1} B_{kN+i'-2} \cdots B_{kN+i} \Phi_k &
\text{if } i' \in \{i+1, \ldots, N-1\}.
\end{cases}
\]
Since for $i' \in \{0, 1, \ldots, i-1\}$,
\[
\lim_{k \to \infty} B_{kN+i'}^{-1} B_{kN+i'+1}^{-1} \cdots B_{kN+i-1}^{-1}
=
\mathfrak{B}_{i'}^{-1}(0) \mathfrak{B}_{i'+1}^{-1}(0) \cdots \mathfrak{B}_{i-1}^{-1}(0),
\]
we obtain
\begin{equation}
\label{eq:27}
\lim_{k \to \infty}
\sup_K{
\bigg\|
\frac{\phi_{kN+i'}}{\prod_{j = n_0}^{k-1} \lambda^-_j} -
\mathfrak{B}_{i'}^{-1}(0) \mathfrak{B}_{i'+1}^{-1}(0) \cdots \mathfrak{B}_{i-1}^{-1}(0)
v^-
\bigg\|
} = 0.
\end{equation}
Analogously, for $i' \in \{i+1, \ldots, N-1\}$,
\begin{equation}
\label{eq:28}
\lim_{k \to \infty}
\sup_K{
\bigg\|
\frac{\phi_{kN+i'}}{\prod_{j = n_0}^{k-1} \lambda^-_j} -
\mathfrak{B}_{i'-1}(0) \mathfrak{B}_{i'-2}(0) \cdots \mathfrak{B}_{i}(0)
v^-
\bigg\|}
=0.
\end{equation}
Since $(\phi_n : n \in \mathbb{N})$ satisfies \eqref{eq:15}, the sequence $(u_n(x) : n \in \mathbb{N}_0)$ defined as
\[
u_n(x) = \begin{cases}
\langle \phi_1(x), e_1 \rangle & \text{if } n = 0, \\
\langle \phi_n(x), e_2 \rangle & \text{if } n \geq 1,
\end{cases}
\]
is a generalized eigenvector associated to $x \in K$. By \eqref{eq:42}, \eqref{eq:27} and \eqref{eq:28}, for each
$i' \in \{0, 1, \ldots, N-1\}$, $n > n_0$, and $x \in K$,
\begin{equation}
\label{eq:29}
|u_{nN+i'}(x)|
\leq
c
\prod_{j = n_0}^{n-1} |\lambda_j^-(x)|.
\end{equation}
Since $(R_{nN+i} : n \in \mathbb{N})$ converges to $\mathcal{R}_i$ uniformly on $K$, and
\[
\lim_{n \to \infty} a_n = \infty,
\]
there is $M \geq n_0$, such that for $n \geq M$,
\[
\frac{|\operatorname{tr} R_{nN+i}(x)| + \sqrt{\operatorname{discr} R_{nN+i}(x)}}{ a_{(n+1)N+i-1}} \leq 1.
\]
Therefore, for $n \geq M$,
\[
|\lambda_n^-(x)|
=
1 +
\frac{ \sigma \operatorname{tr} R_{nN+i}(x) - \sqrt{\operatorname{discr} R_{nN+i}(x)} }{ 2 a_{(n+1)N+i-1}}.
\]
We next claim the following holds true.
\begin{claim}
\label{clm:1}
There are $\delta' > 0$ and $M_0 \geq M$ such that for all $n \geq M_0$ and $x \in K$,
\begin{equation}
\label{eq:103}
n \frac{\sigma \operatorname{tr} R_{nN+i}(x) - \sqrt{\operatorname{discr} R_{nN+i}(x)}} {a_{(n+1)N+i-1}}
\leq
-1-\delta'.
\end{equation}
\end{claim}
First observe that by the Stolz--Ces\'aro theorem and Proposition~\ref{prop:10}, we have
\begin{equation}
\label{eq:104}
0 \leq
\lim_{n \to \infty} \frac{a_{(n+1)N+i-1}}{n} =
\lim_{n \to \infty} \big( a_{(n+1)N+i-1} - a_{nN+i-1} \big) =
-\sigma \operatorname{tr} \mathcal{R}_i(x).
\end{equation}
Since $(R_{nN+i} : n \in \mathbb{N})$ converges to $\mathcal{R}_i$ uniformly on $K$,
\begin{equation}
\label{eq:105}
\lim_{n \to \infty}
\Big( \sigma \operatorname{tr} R_{nN+i}(x) - \sqrt{\operatorname{discr} R_{nN+i}(x)} \Big) =
\sigma \operatorname{tr} \mathcal{R}_i(x) - \sqrt{\operatorname{discr} \mathcal{R}_i(x)}.
\end{equation}
Thus, by \eqref{eq:104} and \eqref{eq:105} we get
\[
\lim_{n \to \infty}
n \frac{ \sigma \operatorname{tr} R_{nN+i}(x) - \sqrt{\operatorname{discr} R_{nN+i}(x)}}{a_{(n+1)N+i-1}} =
\begin{cases}
-\infty & \text{if } \operatorname{tr} \mathcal{R}_i = 0, \\
-1 - \tfrac{\sqrt{\operatorname{discr} \mathcal{R}_i}}{-\sigma \operatorname{tr} \mathcal{R}_i} & \text{otherwise,}
\end{cases}
\]
which together with \eqref{eq:104} implies \eqref{eq:103}.
Now, using Claim \ref{clm:1}, we conclude that for all $n \geq M_0$,
\[
\sup_{x \in K}
|\lambda^-_n(x)| \leq 1 - \frac{1+\delta'}{2n}.
\]
Consequently, by \eqref{eq:29}, there is $c' > 0$ such that for all $i' \in \{0, 1, \ldots, N-1\}$ and $n \geq M_0$,
\begin{align*}
\sup_{x \in K}{|u_{nN+i'}(x)|}
&\leq c \prod_{j = n_0}^{n-1} \bigg(1 - \frac{1+\delta'}{2j} \bigg) \\
&\leq
c' \exp\bigg(-\frac{1+\delta'}{2} \log (n-1) \bigg).
\end{align*}
Hence,
\[
\sum_{n = 0}^\infty \sup_{x \in K}{|u_n(x)|^2} < \infty,
\]
thus by the proof of \cite[Theorem 5.3]{Silva2007} we conclude that $\sigma_{\mathrm{ess}}(A) \cap K = \emptyset$. Since
$K$ was any compact subset of $\mathbb{R} \setminus \overline{\Lambda}$, we obtain
$\sigma_{\mathrm{ess}}(A) \subseteq \overline{\Lambda}$, and the theorem follows. \end{proof}
\begin{remark}
By \cite[Proposition 9]{PeriodicIII}, the regularity \eqref{eq:129} holds true for any compact set
$K \subset \mathbb{R}$, and all $i \in \{0, 1, \ldots, N-1 \}$, if
\[
\bigg( \frac{\alpha_{n-1}}{\alpha_n} a_n - a_{n-1} : n \in \mathbb{N} \bigg),
\bigg( \frac{\beta_n}{\alpha_n} a_n - b_n : n \in \mathbb{N} \bigg),
\bigg( \frac{1}{a_n} : n \in \mathbb{N} \bigg) \in \mathcal{D}_1^N(\mathbb{R}).
\] \end{remark}
\section{Periodic modulations in non-Carleman setup} \label{sec:notCarleman} In this section we shall consider Jacobi matrices such that \[
\sum_{n=0}^\infty \frac{1}{a_n} < \infty. \] Let us start with the following general observation. \begin{proposition}
\label{prop:3}
Let $N$ be a positive integer and
\[
X_n(z) = \prod_{j=n}^{n+N-1} B_j(z).
\]
Let $K$ be a compact subset of $\mathbb{C}$ containing $0$, and suppose that
\begin{equation}
\label{eq:115}
\sup_{n \geq 1} \sup_{z \in K} \| B_n(z) \| < \infty.
\end{equation}
Then there is $c > 0$ such that
\[
\sup_{z \in K} \| X_n(z) - X_n(0) \| \leq c \sum_{j = 0}^{N-1} \frac{1}{a_{n+j}}.
\]
In particular, if
\begin{equation}
\label{eq:116}
\sum_{n=0}^\infty \frac{1}{a_n} < \infty,
\end{equation}
then
\[
\sum_{n=1}^\infty \sup_{z \in K} \| X_n(z) - X_n(0) \| < \infty.
\] \end{proposition} \begin{proof}
Let us notice that
\[
B_j(z) - B_j(0) =
\begin{pmatrix}
0 & 0 \\
0 & \frac{z}{a_j}
\end{pmatrix},
\]
thus
\begin{equation}
\label{eq:114}
\| B_j(z) - B_j(0) \| \leq \frac{|z|}{a_j}.
\end{equation}
Since
\[
X_n(z) - X_n(0) =
\sum_{j=0}^{N-1}
\Bigg\{ \prod_{m=j+1}^{N-1} B_{n+m}(0) \Bigg\}
\big( B_{n+j}(z) - B_{n+j}(0) \big)
\Bigg\{ \prod_{m=0}^{j-1} B_{n+m}(z) \Bigg\},
\]
we have
\[
\| X_n(z) - X_n(0) \| \leq
\sum_{j=0}^{N-1}
\Bigg\{ \prod_{m=j+1}^{N-1} \big\| B_{n+m}(0) \big\| \Bigg\}
\big\| B_{n+j}(z) - B_{n+j}(0) \big\|
\Bigg\{ \prod_{m=0}^{j-1} \big\| B_{n+m}(z) \| \Bigg\}.
\]
Now the conclusion easily follows by \eqref{eq:115} and \eqref{eq:114}. \end{proof}
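For instance, for $N = 2$ the telescoping identity used in the proof specializes to
\[
X_n(z) - X_n(0) = B_{n+1}(0) \big( B_n(z) - B_n(0) \big) + \big( B_{n+1}(z) - B_{n+1}(0) \big) B_n(z),
\]
and, by \eqref{eq:115} and \eqref{eq:114}, the two summands are bounded on $K$ by constant multiples of $1/a_n$ and $1/a_{n+1}$, respectively.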
The following corollary reproves the main result of \cite{Yafaev2019} obtained by a different method. \begin{corollary}[Yafaev]
\label{cor:3}
Suppose that Carleman's condition is \emph{not} satisfied and
\begin{equation} \label{eq:134}
\bigg( \frac{a_n}{\sqrt{a_{n-1} a_{n+1}}} - 1 : n \in \mathbb{N} \bigg) \in \ell^1 \quad \text{and} \quad
\bigg( \frac{b_n}{\sqrt{a_{n-1} a_n}} : n \in \mathbb{N} \bigg) \in \mathcal{D}_1.
\end{equation}
Let
\begin{equation} \label{eq:136}
q = \lim_{n \to \infty} \frac{b_n}{2 \sqrt{a_{n-1} a_n}}.
\end{equation}
If $|q| \neq 1$, then for every $z \in \mathbb{C}$ there is a basis $\{ u^+(z), u^-(z) \}$ of generalized eigenvectors
associated with $z$ such that
\begin{equation}
\label{eq:133}
u^\pm_n(z) =
\bigg(\prod_{j=1}^n \lambda_j^\pm(0) \bigg) \big( 1 + \epsilon^\pm_n(z) \big)
\end{equation}
where $\lambda^\pm_j(0)$ are the eigenvalues of $B_j(0)$, and
$(\epsilon_n^\pm)$ is a sequence of holomorphic functions tending to zero uniformly on any compact subset of $\mathbb{C}$. \end{corollary} \begin{proof}
By \cite[Lemma 2.1]{Yafaev2019}
\begin{equation} \label{eq:135}
\bigg( \sqrt{\frac{a_{n+1}}{a_n}} : n \in \mathbb{N} \bigg) \in \mathcal{D}_1 \quad \text{and} \quad
\lim_{n \to \infty} \sqrt{\frac{a_{n+1}}{a_n}} \geq 1.
\end{equation}
By Corollary~\ref{cor:2} and Lemma~\ref{lem:2}, this implies
\begin{equation}
\label{eq:57}
\bigg( \frac{a_{n-1}}{a_n} : n \in \mathbb{N} \bigg) \in \mathcal{D}_1.
\end{equation}
Next, we write
\begin{equation}
\label{eq:137}
\frac{b_n}{a_n} = \frac{b_n}{\sqrt{a_{n-1} a_n}} \sqrt{\frac{a_{n-1}}{a_n}},
\end{equation}
thus by \eqref{eq:134}, \eqref{eq:135} and Corollary~\ref{cor:1}
\begin{equation}
\label{eq:58}
\bigg( \frac{b_n}{a_n} : n \in \mathbb{N} \bigg) \in \mathcal{D}_1.
\end{equation}
In particular, by \eqref{eq:57} and \eqref{eq:58} we conclude that
\[
(B_n(0) : n \in \mathbb{N}) \in \mathcal{D}_1 \big( \operatorname{GL}(2, \mathbb{R}) \big).
\]
Now, in view of Proposition~\ref{prop:3}
\[
B_n(z) = B_n(0) + E_n(z)
\]
where for any compact set $K \subset \mathbb{C}$,
\[
\sum_{n=1}^\infty \sup_{z \in K} \| E_n(z) \| < \infty.
\]
By \eqref{eq:57} and \eqref{eq:58}, there are $r, s \in \mathbb{R}$,
\[
r = \lim_{n \to \infty} \frac{a_{n-1}}{a_n} \quad \text{and} \quad
s = \lim_{n \to \infty} \frac{b_{n}}{a_n}.
\]
Then the limit of $(B_n(0) : n \in \mathbb{N})$ is
\[
\mathcal{B} =
\begin{pmatrix}
0 & 1 \\
-r & -s
\end{pmatrix}.
\]
Notice that
\[
\operatorname{discr} \mathcal{B} = s^2 - 4r =
4 r \Big(\frac{s}{2 \sqrt{r}} - 1 \Big)
\Big(\frac{s}{2 \sqrt{r}} + 1 \Big).
\]
On the other hand, by \eqref{eq:135}, \eqref{eq:136} and \eqref{eq:137}, we can easily deduce that
\[
r \in (0, 1] \quad \text{and} \quad q = \frac{s}{2 \sqrt{r}}.
\]
Therefore, $\operatorname{discr} \mathcal{B} \neq 0$ whenever $\abs{q} \neq 1$. Fix a compact set $K \subset \mathbb{C}$.
If $\operatorname{discr} \mathcal{B} > 0$, then $\mathcal{B}$ has two eigenvectors
\[
v^+ =
\begin{pmatrix}
1 \\
\lambda^+
\end{pmatrix}
\qquad
v^- =
\begin{pmatrix}
1 \\
\lambda^-
\end{pmatrix}
\]
corresponding to the eigenvalues
\[
\lambda^+ = \frac{-s + \sqrt{s^2-4r}}{2}, \qquad
\lambda^- = \frac{-s - \sqrt{s^2-4r}}{2}.
\]
Since $\det \mathcal{B} = r > 0$, these eigenvalues are non-zero.
Let us consider the system
\[
\Phi_{n+1} = \big( B_n(0) + E_n \big) \Phi_n.
\]
By Corollary \ref{cor:4}, there is a sequence of mappings $(\Phi^\pm_n : n \geq n_0)$ so that
\begin{equation}
\label{eq:36}
\lim_{n \to \infty} \sup_{z \in K}{\bigg\|\frac{\Phi^\pm_n(z)}{\prod_{j=1}^{n-1} \lambda^\pm_j(0)} - v^\pm
\bigg\|} = 0.
\end{equation}
Since $B_n$ is invertible for any $n$, we set
\[
\phi^\pm_1 = B_1^{-1} \cdots B_{n_0}^{-1} \Phi^\pm_{n_0}.
\]
Then for $n \geq 1$, we define
\[
\phi^\pm_{n+1} = B_n \phi_n^\pm.
\]
Finally, to obtain a generalized eigenvector associated with $z \in K$, we set
\[
u_n^\pm(z) =
\begin{cases}
\langle \phi^\pm_1(z), e_1\rangle & \text{if } n = 0,\\
\langle \phi^\pm_n(z), e_2\rangle & \text{if } n \geq 1.
\end{cases}
\]
Now, by \eqref{eq:36} we easily see that
\[
u^\pm_n(z) =
\bigg(\prod_{j = 1}^{n-1} \lambda_j^\pm(0) \bigg)
\big(\lambda^\pm + \epsilon^\pm_n(z)\big)
\]
with
\[
\lim_{n \to \infty} \sup_{z \in K}{|\epsilon^\pm_n(z)|} = 0.
\]
Since $(\lambda^\pm_j(0))$ converges to $\lambda^\pm$, we obtain \eqref{eq:133}.
When $\operatorname{discr} \mathcal{B} < 0$, the reasoning is analogous. \end{proof}
\subsection{Perturbation of the identity} \begin{theorem}
\label{thm:8a}
Let $N$ be a positive integer. Let $A$ be a Jacobi matrix with $N$-periodically modulated entries so that
$\mathfrak{X}_0(0) = \sigma \operatorname{Id}$ for a certain $\sigma \in \{-1, 1 \}$. Assume that there are $i \in \{0, 1, \ldots, N-1 \}$,
and a sequence of positive numbers $(\gamma_n : n \in \mathbb{N}_0)$ satisfying
\[
\sum_{n=0}^\infty \frac{1}{\gamma_n} = \infty,
\]
such that $R_{nN+i}(0) = \gamma_n \big(X_{nN+i}(0) - \sigma \operatorname{Id} \big)$ converges to the non-zero matrix $\mathcal{R}_i$.
If $\big( R_{nN+i}(0) : n \in \mathbb{N} \big)$ belongs to $\mathcal{D}_1 \big( \operatorname{Mat}(2, \mathbb{R}) \big)$, $\operatorname{discr} \mathcal{R}_i > 0$, and
\begin{equation}
\label{eq:45}
\sum_{n=0}^\infty \frac{1}{a_n} < \infty,
\end{equation}
then $A$ is self-adjoint if and only if there is $n_0 \geq 1$ such that
\begin{equation} \label{eq:72}
\sum_{n=n_0}^{\infty}
\prod_{j=n_0}^n
\bigg| 1 + \frac{\sigma \operatorname{tr} R_{jN+i}(0) + \sqrt{\operatorname{discr} R_{jN+i}(0)}}{2 \gamma_j} \bigg|^2 = \infty.
\end{equation}
Moreover, if $A$ is self-adjoint, then $\sigma_{\mathrm{ess}}(A) = \emptyset$. \end{theorem} \begin{proof}
Since $\operatorname{discr} \mathcal{R}_i > 0$, there are $\delta > 0$ and $n_0 \in \mathbb{N}$, such that for all $j \geq n_0$,
\[
\operatorname{discr} R_{jN+i}(0) > \delta.
\]
Hence, the matrix $R_{jN+i}(0)$ has two distinct eigenvalues
\[
\xi^+_j(0) = \frac{\operatorname{tr} R_{jN+i}(0) + \sigma \sqrt{\operatorname{discr} R_{jN+i}(0)}}{2},
\qquad\text{and}\qquad
\xi^-_j(0) = \frac{\operatorname{tr} R_{jN+i}(0) - \sigma \sqrt{\operatorname{discr} R_{jN+i}(0)}}{2},
\]
thus the matrix $X_{jN+i}(0)= \sigma \operatorname{Id} + \gamma_j^{-1}R_{jN+i}(0)$ has two eigenvalues
\[
\lambda^+_j(0) = \sigma + \frac{\xi^+_j(0)}{\gamma_j},
\qquad\text{and}\qquad
\lambda^-_j(0) = \sigma + \frac{\xi^-_j(0)}{\gamma_j}.
\]
Let $K$ be any compact subset of $\mathbb{R}$. By Proposition~\ref{prop:3}, we can write
\[
X_{nN+i}(x) = \sigma \operatorname{Id} + \frac{1}{\gamma_n} R_{nN+i}(0) + E_{nN+i}(x)
\]
where
\[
\sum_{n=0}^\infty \sup_{x \in K} \| E_{nN+i}(x) \| < \infty.
\]
Since $(R_{jN+i}(0) : j \in \mathbb{N})$ belongs to $\mathcal{D}_1\big(\operatorname{Mat}(2, \mathbb{R})\big)$, by Corollary~\ref{cor:5},
there are two sequences $\big( \Phi_j^- : j \geq n_0 \big)$ and $\big( \Phi_j^+ : j \geq n_0 \big)$ satisfying
\[
\Phi_{j+1} = \big( X_{jN+i}(0) + E_{jN+i}\big) \Phi_j,
\]
and such that
\begin{equation} \label{eq:140}
\lim_{n \to \infty} \sup_K
\bigg\| \frac{\Phi_n^\pm}{\prod_{j = n_0}^{n-1} \lambda^\pm_j(0)} - v^\pm \bigg\| = 0
\end{equation}
for certain $v^-, v^+ \neq 0$. Let
\[
\phi_1^\pm = B_1^{-1} B_2^{-1} \cdots B_{n_0}^{-1} \Phi_{n_0}^\pm.
\]
For $n \geq 1$, we set
\[
\phi_{n+1}^\pm = B_n \phi_n^\pm.
\]
Then for $kN+i' > n_0N+i$ with $i' \in \{0, 1, \ldots, N-1\}$, we have
\[
\phi_{kN+i'}^\pm =
\begin{cases}
B_{kN+i'}^{-1} B_{kN+i'+1}^{-1} \cdots B_{kN+i-1}^{-1} \Phi_k^\pm
&\text{if } i' \in \{0, 1, \ldots, i-1\}, \\
\Phi_k^\pm & \text{if } i' = i, \\
B_{kN+i'-1} B_{kN+i'-2} \cdots B_{kN+i} \Phi_k^\pm &
\text{if } i' \in \{i+1, \ldots, N-1\}.
\end{cases}
\]
Consequently, we obtain
\begin{equation}
\label{eq:71}
\lim_{k \to \infty} \frac{\phi_{kN+i'}^\pm}{\prod_{j = n_0}^n \lambda^\pm_j(0)} =
\begin{cases}
\mathfrak{B}_{i'}^{-1}(0) \mathfrak{B}_{i'+1}^{-1}(0) \cdots \mathfrak{B}_{i-1}^{-1}(0) v^\pm
&\text{if } i' \in \{0, 1, \ldots, i-1\} \\
v^\pm &\text{if } i' = i, \\
\mathfrak{B}_{i'-1}(0) \mathfrak{B}_{i'-2}(0) \cdots \mathfrak{B}_{i}(0) v^\pm
&\text{if } i' \in \{i+1, \ldots ,N-1\},
\end{cases}
\end{equation}
uniformly on $K$. Let
\[
u_n^\pm(x) =
\begin{cases}
\langle \phi_1^\pm(x), e_1 \rangle & \text{if } n = 0, \\
\langle \phi_n^\pm(x), e_2 \rangle & \text{if } n \geq 1.
\end{cases}
\]
Then $(u_n^+(x) : n \in \mathbb{N}_0)$ and $(u_n^-(x) : n \in \mathbb{N}_0)$ are two generalized eigenvectors associated with
$x \in K$. Since their asymptotic behaviors are different, they are linearly independent.
By \eqref{eq:140} and \eqref{eq:71}, there is a constant $c>0$ such that for all $n > n_0$, and $x \in K$,
\begin{equation}
\label{eq:44}
\big| u^{\pm}_{nN+i}(x) \big|^2 + \big| u^{\pm}_{nN+i-1}(x) \big|^2
=
\big\| \phi^{\pm}_{nN+i}(x) \big\|^2
\geq
c \prod_{j = n_0}^{n-1} \big| \lambda_j^{\pm}(0) \big|^2.
\end{equation}
Moreover, for all $n > n_0$, $i' \in \{0, 1, \ldots, N-1\}$, and $x \in K$,
\begin{equation}
\label{eq:48a}
\big\| \phi^\pm_{nN+i'}(x) \big\|^2 \leq
c \prod_{j = n_0}^{n-1} \big| \lambda_j^{\pm}(0) \big|^2.
\end{equation}
Since $|\lambda_{j}^{-}(0)| \leq |\lambda_{j}^{+}(0)|$, we obtain
\begin{equation}
\label{eq:73}
\sum_{n=n_0+1}^\infty |u_n^-(x)|^2 \leq c
\sum_{n=n_0+1}^\infty |u_n^+(x)|^2.
\end{equation}
Now, observe that if \eqref{eq:72} is satisfied then by \eqref{eq:44} the generalized eigenvector
$(u_n^+(x) : n \in \mathbb{N}_0)$ is not square-summable, hence by \cite[Theorem 6.16]{Schmudgen2017}, the operator
$A$ is self-adjoint. On the other hand, if \eqref{eq:72} is not satisfied, then by \eqref{eq:48a} and \eqref{eq:73},
all generalized eigenvectors are square-summable, thus by \cite[Theorem 6.16]{Schmudgen2017}, the operator
$A$ is not self-adjoint.
Finally, let us suppose that $A$ is self-adjoint. By the proof of \cite[Theorem 5.3]{Silva2007}, if
\begin{equation}
\label{eq:70}
\sum_{n=0}^\infty \sup_{x \in K} | u^-_n(x) |^2 < \infty,
\end{equation}
then $\sigma_{\mathrm{ess}}(A) \cap K = \emptyset$, and since $K$ is any compact subset of $\mathbb{R}$ this implies that
$\sigma_{\mathrm{ess}}(A) = \emptyset$. Therefore, to complete the proof of the theorem it is enough to show \eqref{eq:70}.
Observe that $E_{nN+i}(0)=0$, thus
\[
\lambda_j^+(0) \lambda_j^-(0) =
\det X_{jN+i}(0) = \frac{a_{jN+i-1}}{a_{(j+1)N+i-1}},
\]
and so
\[
\prod_{j=n_0}^k
\lambda^-_j(0) \lambda^+_j(0)
=
\frac{a_{n_0 N+i-1}}{a_{(k+1)N+i-1}}.
\]
Consequently, by \eqref{eq:45},
\[
\sum_{k=n_0}^\infty
\prod_{j=n_0}^k
\big| \lambda^-_j(0) \lambda^+_j(0) \big|
=
\sum_{k=n_0}^\infty \frac{a_{n_0 N+i-1}}{a_{(k+1)N+i-1}} < \infty,
\]
which together with $| \lambda^-_j(0) | \leq | \lambda^+_j(0) |$ implies that
\[
\sum_{k=n_0}^\infty
\prod_{j=n_0}^k
\big| \lambda^-_j(0) \big|^2
< \infty.
\]
Hence, by \eqref{eq:48a} we obtain \eqref{eq:70}, and the theorem follows. \end{proof}
By similar reasoning one can prove the following theorem. \begin{theorem}
\label{thm:8}
Let $N$ be a positive integer. Let $A$ be a Jacobi matrix with $N$-periodically
modulated entries so that $\mathfrak{X}_0(0) = \sigma \operatorname{Id}$ for a certain $\sigma \in \{-1, 1 \}$. Assume that there are
$i \in \{0, 1, \ldots, N-1 \}$ and a sequence of positive numbers $(\gamma_n : n \in \mathbb{N}_0)$ such that
$R_{nN+i}(0) = \gamma_n \big(X_{nN+i}(0) - \sigma \operatorname{Id}\big)$ converges to the non-zero matrix $\mathcal{R}_i$.
If $\big( R_{nN+i}(0) : n \in \mathbb{N} \big)$ belongs to $\mathcal{D}_1 \big( \operatorname{Mat}(2, \mathbb{R}) \big)$, $\operatorname{discr} \mathcal{R}_i < 0$,
and
\[
\sum_{n=0}^\infty \frac{1}{a_n} < \infty,
\]
then the operator $A$ is \emph{not} self-adjoint. \end{theorem}
Proposition~\ref{prop:7} motivates the following notion. Given a sequence $(w_n : n \in \mathbb{N})$ such that $w_n > 0$ for all $n \in \mathbb{N}$, we introduce the weighted Stolz class. We say that a bounded sequence $(x_n)$ from a normed vector space $X$ belongs to $\mathcal{D}_1(X; w)$ if \[
\sum_{n = 1}^\infty \left\|\Delta x_n \right\| w_n < \infty. \] Moreover, given a positive integer $N$, we say that $x \in \mathcal{D}^N_{1}(X; w)$ if for each $i \in \{0, 1, \ldots, N-1 \}$, \[
\big( x_{nN+i} : n \in \mathbb{N} \big) \in \mathcal{D}_{1}(X; w). \]
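For instance, for the weight $w_n = n$ the scalar sequence $(n^{-2} : n \in \mathbb{N})$ belongs to $\mathcal{D}_1(\mathbb{R}; w)$, since $|\Delta n^{-2}| = \mathcal{O}(n^{-3})$, whereas $(n^{-1} : n \in \mathbb{N})$ does not, because $|\Delta n^{-1}| \cdot n = \tfrac{1}{n+1}$ is not summable.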
Reasoning similar to that in \cite[Corollary 1]{SwiderskiTrojan2019} leads to the following fact. \begin{proposition}
\label{prop:4}
For any weight $(w_n)$ the following hold:
\begin{enumerate}[(i), leftmargin=2em]
\item If $(x_n), (y_n) \in \mathcal{D}_1(X; w)$, then $(x_n y_n), (x_n + y_n) \in \mathcal{D}_1(X;w)$.
\item If $(x_n) \in \mathcal{D}_1(K, \mathbb{C}; w)$ and $|x_n(t)| \geq c > 0$ for all $n \in \mathbb{N}$ and $t \in K$, then
$(x_n^{-1}) \in \mathcal{D}_1(K, \mathbb{C}; w)$.
\end{enumerate} \end{proposition}
The following proposition describes a way to construct matrices $(R_n : n \in \mathbb{N})$ satisfying the hypotheses of Theorems \ref{thm:8a} and \ref{thm:8}. \begin{proposition}
\label{prop:2}
Let $N$ be a positive integer and $i \in \{0, 1, \ldots, N-1\}$. Let $A$ be a Jacobi matrix with $N$-periodically
modulated entries so that $\mathfrak{X}_0(0) = \sigma \operatorname{Id}$ for a certain $\sigma \in \{-1, 1 \}$. Assume that there is
a sequence of positive numbers $(\gamma_n : n \in \mathbb{N}_0)$ such that
$R_{nN+i}(0) = \gamma_n\big(X_{nN+i}(0) - \sigma \operatorname{Id}\big)$ converges to a non-zero matrix $\mathcal{R}_i$.
If there are two $N$-periodic sequences $(\tilde{s}_{i'} : i' \in \mathbb{N}_0)$ and
$(\tilde{z}_{i'} : i' \in \mathbb{N}_0)$ such that
\begin{equation}
\label{eq:117}
\tilde{s}_{i'} =
\lim_{n \to \infty} \gamma_n \Big(\frac{\alpha_{i'-1}}{\alpha_{i'}} - \frac{a_{nN+i'-1}}{a_{nN+i'}} \Big),
\qquad\text{and}\qquad
\tilde{z}_{i'} =
\lim_{n \to \infty} \gamma_n \Big(\frac{\beta_{{i'}}}{\alpha_{i'}} - \frac{b_{nN+i'}}{a_{nN+i'}} \Big),
\end{equation}
then
\begin{equation}
\label{eq:118}
\mathcal{R}_i
=
\sum_{j=0}^{N-1}
\left\{ \prod_{m=j+1}^{N-1} \mathfrak{B}_{i+m}(0) \right\}
\begin{pmatrix}
0 & 0 \\
\tilde{s}_{i+j} & \tilde{z}_{i+j}
\end{pmatrix}
\left\{ \prod_{m=0}^{j-1} \mathfrak{B}_{i+m}(0) \right\}
\end{equation}
and
\begin{equation} \label{eq:118a}
\operatorname{tr} \mathcal{R}_i = -\sigma \sum_{j=0}^{N-1} \tilde{s}_{i+j} \frac{\alpha_{i+j}}{\alpha_{i+j-1}}.
\end{equation}
Moreover, if there is a weight $(w_n : n \in \mathbb{N})$ so that for all $i' \in \{0, 1, \ldots, N-1 \}$,
\begin{equation}
\label{eq:120}
\bigg( \frac{1}{\gamma_n} : n \in \mathbb{N} \bigg),
\bigg( \gamma_n \Big(\frac{\alpha_{i'-1}}{\alpha_{i'}} - \frac{a_{nN+i'-1}}{a_{nN+i'}} \Big) : n \in \mathbb{N} \bigg),
\bigg( \gamma_n \Big(\frac{\beta_{i'}}{\alpha_{i'}} - \frac{b_{nN+i'}}{a_{nN+i'}} \Big) : n \in \mathbb{N} \bigg)
\in \mathcal{D}_1(\mathbb{R}; w),
\end{equation}
then
\begin{equation}
\label{eq:121}
\big( R_{nN+i}(0) : n \in \mathbb{N} \big) \in \mathcal{D}_1 \big( \operatorname{Mat}(2, \mathbb{R}); w\big).
\end{equation} \end{proposition} \begin{proof}
Since
\[
X_n(0) - \mathfrak{X}_n(0) =
\sum_{i'=0}^{N-1}
\left\{ \prod_{m=i'+1}^{N-1} \mathfrak{B}_{n+m}(0) \right\}
\big( B_{n+i'}(0) - \mathfrak{B}_{n+i'}(0) \big)
\left\{ \prod_{m=0}^{i'-1} B_{n+m}(0) \right\},
\]
and $\mathfrak{X}_i(0) = \sigma \operatorname{Id}$, we get
\begin{equation}
\label{eq:119}
R_{nN+i}(0)
=
\sum_{i'=0}^{N-1}
\left\{ \prod_{m=i'+1}^{N-1} \mathfrak{B}_{i+m}(0) \right\}
\gamma_n
\big( B_{nN+i+i'}(0) - \mathfrak{B}_{i+i'}(0) \big)
\left\{ \prod_{m=0}^{i'-1} B_{nN+i+m}(0) \right\}.
\end{equation}
Observe that
\begin{equation}
\label{eq:122}
\gamma_n \big( B_{nN+i+i'}(0) - \mathfrak{B}_{i+i'}(0) \big)
=
\gamma_n
\begin{pmatrix}
0 & 0 \\
\frac{\alpha_{i+i'-1}}{\alpha_{i+i'}} - \frac{a_{nN+i+i'-1}}{a_{nN+i+i'}} &
\frac{\beta_{i+i'}}{\alpha_{i+i'}} - \frac{b_{nN+i+i'}}{a_{nN+i+i'}}
\end{pmatrix},
\end{equation}
thus by \eqref{eq:117} we obtain
\[
\lim_{n \to \infty}
\gamma_n \big( B_{nN+i+i'}(0) - \mathfrak{B}_{i+i'}(0) \big)
=
\begin{pmatrix}
0 & 0 \\
\tilde{s}_{i+i'} & \tilde{z}_{i+i'}
\end{pmatrix}.
\]
Now, \eqref{eq:118} easily follows from \eqref{eq:119}. The proof of \eqref{eq:118a} is analogous to that of Proposition
\ref{prop:10}, cf. \eqref{eq:101} and \eqref{eq:18}.
We proceed to show \eqref{eq:121}. By \eqref{eq:120}, for each $i' \in \{0, 1, \ldots, N-1 \}$,
\[
\bigg( \frac{a_{nN+i'-1}}{a_{nN+i'}} : n \in \mathbb{N} \bigg),
\bigg( \frac{b_{nN+i'}}{a_{nN+i'}} : n \in \mathbb{N} \bigg) \in \mathcal{D}_1(\mathbb{R}; w),
\]
thus,
\begin{equation}
\label{eq:123}
\big( B_{nN+i'}(0) : n \in \mathbb{N} \big) \in \mathcal{D}_1 \big( \operatorname{GL}(2, \mathbb{R}); w\big).
\end{equation}
Moreover, in view of \eqref{eq:122}, the condition \eqref{eq:120} implies that
\begin{equation}
\label{eq:124}
\Big( \gamma_n \big( B_{nN+i+i'}(0) - \mathfrak{B}_{i+i'}(0) \big) : n \in \mathbb{N} \Big)
\in \mathcal{D}_1 \big( \operatorname{Mat}(2, \mathbb{R}); w\big).
\end{equation}
Now, \eqref{eq:123} and \eqref{eq:124} together with \eqref{eq:119} imply \eqref{eq:121}. \end{proof}
\subsection{Periodic modulations of the Kostyuchenko--Mirzoev class} \label{sec:KM} Let $N$ be a positive integer. We say that a Jacobi matrix $A$ associated to $(a_n : n \in \mathbb{N}_0)$ and $(b_n: n \in \mathbb{N}_0)$ belongs to the $N$-periodically modulated Kostyuchenko--Mirzoev class if there are two $N$-periodic sequences $(\alpha_n : n \in \mathbb{Z})$ and $(\beta_n : n \in \mathbb{Z})$ of positive and real numbers, respectively, such that \[
a_n = \alpha_n \tilde{a}_n \Big( 1 + \frac{f_n}{\gamma_n} \Big) > 0, \qquad\text{and}\qquad
b_n = \frac{\beta_n}{\alpha_n} a_n \] where $(f_n : n \in \mathbb{N}_0)$ is a bounded sequence, and $(\tilde{a}_n : n \in \mathbb{N}_0)$ is a positive sequence satisfying \[
\sum_{n=0}^\infty \frac{1}{\tilde{a}_n} < \infty \qquad \text{and} \qquad
\lim_{n \to \infty} \gamma_n \Big( 1- \frac{\tilde{a}_{n-1}}{\tilde{a}_n} \Big) = \kappa > 0 \] for a certain positive sequence $(\gamma_n : n \in \mathbb{N}_0)$ tending to infinity.
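Note that, since $(f_n : n \in \mathbb{N}_0)$ is bounded, $\gamma_n \to \infty$, and $(\alpha_n)$ is positive and $N$-periodic, we have $a_n \geq c\, \tilde{a}_n$ for all large $n$; hence $\sum_{n=0}^\infty \frac{1}{a_n} < \infty$, and every matrix of this class falls within the scope of the present section.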
This class contains interesting examples of Jacobi matrices giving rise to self-adjoint operators which do not satisfy Carleman's condition. Moreover, we formulate certain conditions under which the essential spectrum is empty. This class has been studied in \cite{Kostyuchenko1999, JanasMoszynski2003, Silva2004, Silva2007, Silva2007a} in the case when $N$ is an even integer, $\alpha_n \equiv 1$, $\beta_n \equiv 0$, and \[
\tilde{a}_n = (n+1)^\kappa, \qquad \gamma_n = n+1 \] for some $\kappa > 1$.
\begin{theorem}
\label{thm:4}
Let $N$ be a positive integer. Let $A$ be a Jacobi matrix from the $N$-periodically modulated
Kostyuchenko--Mirzoev class so that $\mathfrak{X}_0(0) = \sigma \operatorname{Id}$ for a certain $\sigma \in \{-1, 1\}$.
Suppose that there is a weight $(w_n : n \in \mathbb{N})$, so that
\begin{equation}
\label{eq:51}
\bigg( \gamma_n \Big( 1 - \frac{\tilde{a}_{n-1}}{\tilde{a}_n} \Big) : n \in \mathbb{N} \bigg),
\big( f_n : n \in \mathbb{N} \big),
\bigg( \frac{\gamma_{n-1}}{\gamma_n} : n \in \mathbb{N} \bigg) \in \mathcal{D}_1^N(\mathbb{R}; w),
\end{equation}
and
\begin{equation} \label{eq:51a}
\lim_{n \to \infty} \frac{\gamma_{n-1}}{\gamma_n} = 1.
\end{equation}
Then for all $i \in \{0, 1, \ldots, N-1\}$, the matrices
$R_{nN+i}(0) = \gamma_{nN} \big(X_{nN+i}(0) - \sigma \operatorname{Id}\big)$ converge to the non-zero matrix $\mathcal{R}_i$,
\begin{equation}
\label{eq:33}
\big( R_{nN+i}(0) : n \in \mathbb{N} \big) \in \mathcal{D}_1 \big(\operatorname{Mat}(2, \mathbb{R}); w\big),
\end{equation}
and
\[
\sum_{n = 0}^\infty
\sup_{z \in K}{
\big\|X_{nN+i}(z) - X_{nN+i}(0) \big\|
}
< \infty
\]
for every compact set $K \subset \mathbb{C}$. Moreover, $\operatorname{tr} \mathcal{R}_i = -\kappa \sigma N$, and
\begin{equation}
\label{eq:31}
\mathcal{R}_i
=
\sum_{j=0}^{N-1}
\frac{\alpha_{i+j-1}}{\alpha_{i+j}} \big( \kappa + \mathfrak{f}_{i+j} - \mathfrak{f}_{i+j-1} \big)
\left\{ \prod_{m=j+1}^{N-1} \mathfrak{B}_{i+m}(0) \right\}
\begin{pmatrix}
0 & 0 \\
1 & 0
\end{pmatrix}
\left\{ \prod_{m=0}^{j-1} \mathfrak{B}_{i+m}(0) \right\}
\end{equation}
where $(\mathfrak{f}_n : n \in \mathbb{Z})$ is an $N$-periodic sequence so that
\begin{equation} \label{eq:63}
\lim_{n \to \infty} |f_n - \mathfrak{f}_n| = 0.
\end{equation} \end{theorem} \begin{proof}
To prove \eqref{eq:33}, we are going to apply Proposition \ref{prop:2}.
To do so, we need to check \eqref{eq:120}. In fact, it is enough to show that for any $i \in \{0, 1, \ldots, N-1 \}$,
\begin{equation}
\label{eq:61}
\bigg( \gamma_{jN} \Big( \frac{\alpha_{i-1}}{\alpha_{i}} - \frac{a_{jN+i-1}}{a_{jN+i}} \Big) : j \in \mathbb{N} \bigg)
\in \mathcal{D}_1(\mathbb{R}; w).
\end{equation}
We write
\[
\gamma_{jN} \Big(
\frac{\alpha_{i-1}}{\alpha_{i}} -
\frac{a_{jN+i-1}}{a_{jN+i}}
\Big) =
\frac{\alpha_{i-1}}{\alpha_i}
\gamma_{jN}
\bigg(
1 -
\frac{\tilde{a}_{jN+i-1}}{\tilde{a}_{jN+i}} \Big( 1 + \frac{e_{j}}{\gamma_{jN}} \Big)
\bigg)
\]
where
\begin{equation} \label{eq:65}
e_{j} =
\gamma_{jN} \bigg( \frac{1+\tfrac{f_{jN+i-1}}{\gamma_{jN+i-1}}}{1+\tfrac{f_{jN+i}}{\gamma_{jN+i}}} -1 \bigg) =
\frac{\gamma_{jN}}{\gamma_{jN+i-1}} \frac{f_{jN+i-1}- f_{jN+i} \tfrac{\gamma_{jN+i-1}}{\gamma_{jN+i}}}
{1+\tfrac{f_{jN+i}}{\gamma_{jN+i}}}.
\end{equation}
Thus
\begin{equation} \label{eq:66}
\gamma_{jN} \Big(
\frac{\alpha_{i-1}}{\alpha_{i}} -
\frac{a_{jN+i-1}}{a_{jN+i}}
\Big) =
\frac{\alpha_{i-1}}{\alpha_i}
\gamma_{jN} \Big(
1 -
\frac{\tilde{a}_{jN+i-1}}{\tilde{a}_{jN+i}} \Big)
- \frac{\alpha_{i-1}}{\alpha_i}
\frac{\tilde{a}_{jN+i-1}}{\tilde{a}_{jN+i}}
e_j
\end{equation}
and by \eqref{eq:51} we easily obtain \eqref{eq:61}.
In view of \eqref{eq:51a} and \eqref{eq:63}, the formula \eqref{eq:65} gives
\[
\lim_{j \to \infty} e_{j} = \mathfrak{f}_{i-1} - \mathfrak{f}_i.
\]
Thus, by \eqref{eq:66}
\[
\lim_{j \to \infty}
\gamma_{jN} \Big(
\frac{\alpha_{i-1}}{\alpha_{i}} -
\frac{a_{jN+i-1}}{a_{jN+i}}
\Big) = \frac{\alpha_{i-1}}{\alpha_i} (\kappa + \mathfrak{f}_{i} - \mathfrak{f}_{i-1}).
\]
Fix a compact set $K \subset \mathbb{C}$. Since the condition \eqref{eq:116} is satisfied, by Proposition \ref{prop:3},
for all $z \in K$,
\[
X_{jN+i}(z) = \sigma \operatorname{Id} + \frac{1}{\gamma_{jN}} R_{jN+i}(0) + E_{jN+i}(z)
\]
where
\[
\sup_{z \in K} \|E_n(z)\| \leq \frac{c}{\tilde{a}_n}.
\]
Finally, by Proposition \ref{prop:2}, we obtain \eqref{eq:33} and \eqref{eq:31}. \end{proof}
\subsubsection{Examples of modulated sequences} In this section we present examples of sequences $(\tilde{a}_n : n \in \mathbb{N}_0)$ and $(\gamma_n : n \in \mathbb{N}_0)$ satisfying the assumptions of Theorem~\ref{thm:4}.
\begin{example} Let $\kappa > 1$ and \[
\tilde{a}_n = (n+1)^{\kappa} \quad \text{and} \quad
\gamma_n = n+1. \] Then \[
\gamma_n \Big( 1 - \frac{\tilde{a}_{n-1}}{\tilde{a}_n} \Big) =
\kappa - \frac{\kappa (\kappa-1)}{2n} + \mathcal{O} \Big( \frac{1}{n^2} \Big). \] \end{example}
\begin{example} Let \[
\tilde{a}_n = (n+1) \log^2(n+2) \quad \text{and} \quad
\gamma_n = n+1. \] Then \[
\gamma_n \Big( 1 - \frac{\tilde{a}_{n-1}}{\tilde{a}_n} \Big) =
1 + \frac{2}{\log n} - \frac{3}{n \log n} + \mathcal{O} \Big( \frac{1}{n \log^2 n} \Big). \] \end{example}
\begin{proposition}
\label{prop:5}
Suppose that the hypotheses of Theorem~\ref{thm:4} are satisfied with $\gamma_n = n+1$. Assume that
$\operatorname{discr} \mathcal{R}_0 > 0$. Then
\begin{enumerate}[(i), leftmargin=2em]
\item
\label{cas:1}
if $-\kappa + \frac{1}{N} \sqrt{\operatorname{discr} \mathcal{R}_0} > -1$, then the operator $A$ is self-adjoint;
\item
\label{cas:2}
if $-\kappa + \frac{1}{N} \sqrt{\operatorname{discr} \mathcal{R}_0} < -1$, then the operator $A$ is not self-adjoint.
\end{enumerate}
Moreover, if the operator $A$ is self-adjoint then $\sigma_{\mathrm{ess}}(A) = \emptyset$. \end{proposition} \begin{proof}
We shall consider the case \ref{cas:1} only, as the reasoning in \ref{cas:2} is similar.
By Theorem~\ref{thm:8a} it is enough to check whether there is $n_0 \geq 1$ so that the series
\begin{equation}
\label{eq:75}
\sum_{n=n_0}^\infty
\prod_{j=n_0}^n
\bigg| 1 + \frac{\sigma \operatorname{tr} R_{jN}(0) + \sqrt{\operatorname{discr} R_{jN}(0)}}{2 jN} \bigg|^2
\end{equation}
diverges. Let us select $\delta > 0$ so that
\begin{equation}
\label{eq:47}
- \kappa + \frac{1}{N} \sqrt{\operatorname{discr} \mathcal{R}_0} - \delta > -1.
\end{equation}
By Theorem \ref{thm:4}, $\operatorname{tr} \mathcal{R}_0 = -\kappa \sigma N$. Hence, there is $j_0 \in \mathbb{N}$ such that for all
$j \geq j_0$,
\[
\Big|
\big( \sigma \operatorname{tr} R_{jN}(0) + \sqrt{\operatorname{discr} R_{jN}(0)}\big)
-
\big( -\kappa N + \sqrt{\operatorname{discr} \mathcal{R}_0}\big)\Big| \leq N \delta.
\]
Thus,
\[
1 + \frac{\sigma \operatorname{tr} R_{jN}(0) + \sqrt{\operatorname{discr} R_{jN}(0)}}{2 jN}
\geq 1 + \frac{1}{2jN}\big(-\kappa N + \sqrt{\operatorname{discr} \mathcal{R}_0} - N \delta\big),
\]
and so
\begin{align*}
\log \bigg( \prod_{j=j_0}^n \bigg| 1 + \frac{\sigma \operatorname{tr} R_{jN}(0) + \sqrt{\operatorname{discr} R_{jN}(0)}}{2 jN} \bigg| \bigg)
&\geq
-c + \frac{1}{2} \Big(-\kappa + \frac{1}{N} \sqrt{\operatorname{discr} \mathcal{R}_0} - \delta \Big) \sum_{j = 1}^n \frac{1}{j} \\
&\geq
-c' + \frac{1}{2} \Big( -\kappa + \frac{1}{N} \sqrt{\operatorname{discr} \mathcal{R}_0} - \delta \Big) \log n.
\end{align*}
Therefore,
\[
\prod_{j=j_0}^n
\bigg| 1 + \frac{\sigma \operatorname{tr} R_{jN}(0) + \sqrt{\operatorname{discr} R_{jN}(0)}}{2 jN} \bigg|^2 \geq
c n^{-\kappa + \frac{1}{N} \sqrt{\operatorname{discr} \mathcal{R}_0} - \delta}
\]
which, in view of \eqref{eq:47}, implies that the series \eqref{eq:75} is divergent. \end{proof}
\begin{example}
For $0 < \tau < 1$ we set
\[
\tilde{a}_n = \textrm{e}^{n^\tau}
\qquad\text{and}\qquad
\gamma_n = \max\{ n^{1-\tau}, 1\}.
\]
Let $m \in \mathbb{N}$ be chosen so that
\[
1 - \frac{1}{m-2} \leq \tau < 1 - \frac{1}{m-1}.
\]
Then
\begin{align*}
1-\frac{\tilde{a}_{n-1}}{\tilde{a}_n}
&=
\sum_{j = 1}^{m-1}
\frac{(-1)^{j+1}}{j!} \big(n^\tau - (n-1)^\tau \big)^j + \mathcal{O} \big(n^{m(\tau-1)}\big) \\
&=
\sum_{j = 1}^{m-1} \frac{(-1)^{j+1}}{j!} n^{\tau j}
\Big(1 - (1-n^{-1})^\tau\Big)^j + \mathcal{O}\big(n^{m(\tau-1)}\big).
\end{align*}
Since
\[
1 - (1-n^{-1})^\tau
= \tau n^{-1} - \tfrac{\tau(\tau-1)}{2} n^{-2} + \mathcal{O}\big(n^{-3}\big),
\]
we obtain
\begin{align*}
1-\frac{\tilde{a}_{n-1}}{\tilde{a}_n}
&=
n^{\tau}\Big(\tau n^{-1} - \tfrac{\tau(\tau-1)}{2} n^{-2} + \mathcal{O}\big(n^{-3}\big) \Big)
-\sum_{j = 2}^{m-1} \frac{(-1)^j}{j!} n^{\tau j} \Big(\tau n^{-1} + \mathcal{O}\big(n^{-2}\big)\Big)^j
+ \mathcal{O}\big(n^{m(\tau-1)}\big)\\
&=
\tau n^{\tau-1} - \tfrac{\tau(\tau-1)}{2} n^{\tau-2} + \mathcal{O}\big(n^{\tau-3}\big)
-
\sum_{j = 2}^{m-1} \frac{(-1)^j}{j!} n^{\tau j} \Big(\tau^j n^{-j} + \mathcal{O}\big(n^{-j-1}\big)\Big)
+\mathcal{O}\big(n^{m(\tau-1)}\big)\\
&=
\tau n^{\tau-1} - \tfrac{\tau(\tau-1)}{2} n^{\tau-2}
-
\sum_{j = 2}^{m-1} \frac{(-\tau)^j}{j!} n^{j(\tau-1)} + \mathcal{O}\big(n^{2\tau-3}\big)
+\mathcal{O}\big(n^{m(\tau-1)}\big).
\end{align*}
Hence,
\[
\gamma_n \bigg(1- \frac{\tilde{a}_{n-1}}{\tilde{a}_n}\bigg)
=
\tau + \tfrac{\tau(1-\tau)}{2} n^{-1} - \sum_{j = 2}^{m-1}\frac{(-\tau)^{j}}{j!} n^{-(j-1)(1-\tau)}
+\mathcal{O}\big(n^{-2+\tau}\big)
+\mathcal{O}\big(n^{-(m-1)(1-\tau)}\big).
\]
In particular, the assumptions of Theorem \ref{thm:4} are satisfied. \end{example}
For a given sequence $(\gamma_n : n \in \mathbb{N}_0)$, the following proposition provides an explicit sequence $(\tilde{a}_n : n \in \mathbb{N}_0)$ satisfying the regularity assumptions of Theorem \ref{thm:4}. \begin{proposition}
Suppose that $(\gamma_n : n \in \mathbb{N})$ is a positive sequence such that
\[
\lim_{n \to \infty} \gamma_n = \infty,
\qquad\text{and}\qquad
\bigg( \frac{1}{\gamma_n} : n \in \mathbb{N} \bigg) \in \mathcal{D}_1^N(\mathbb{R}; w)
\]
where $w=(w_n : n \in \mathbb{N})$ is a weight. For $\kappa > 0$ we set
\[
\tilde{a}_n = \exp \bigg(\sum_{j=1}^n \frac{\kappa}{\gamma_j} \bigg).
\]
Then
\[
\lim_{n \to \infty} \gamma_n \Big( 1 - \frac{\tilde{a}_{n-1}}{\tilde{a}_n} \Big) = \kappa,
\]
and
\[
\bigg( \gamma_n \Big( 1 - \frac{\tilde{a}_{n-1}}{\tilde{a}_n} \Big) : n \in \mathbb{N} \bigg) \in \mathcal{D}_1^N(\mathbb{R}; w).
\] \end{proposition} \begin{proof}
We have
\[
\gamma_n \Big( 1 - \frac{\tilde{a}_{n-1}}{\tilde{a}_n} \Big) =
\gamma_n \bigg( 1 - \exp \Big(-\frac{\kappa}{\gamma_n} \Big) \bigg) =
f \Big(\frac{1}{\gamma_n} \Big)
\]
where
\[
f(x) = \frac{1 - \textrm{e}^{-\kappa x}}{x}.
\]
Observe that
\[
\lim_{x \to 0} f(x) = \kappa.
\]
Moreover, $f$ has an analytic extension to $\mathbb{R}$; thus, by the mean value theorem,
\[
\Big| f \Big(\frac{1}{\gamma_{n+N}} \Big) - f \Big(\frac{1}{\gamma_n} \Big) \Big| \leq c
\Big| \frac{1}{\gamma_{n+N}} - \frac{1}{\gamma_{n}} \Big|,
\]
from which the conclusion follows. \end{proof}
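For instance, for $\gamma_n = n+1$ the proposition produces
\[
\tilde{a}_n = \exp \bigg( \kappa \sum_{j=1}^n \frac{1}{j+1} \bigg),
\]
which grows like a constant multiple of $(n+1)^{\kappa}$, so up to a multiplicative constant one recovers the first example above.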
The following proposition settles the question of when Carleman's condition is satisfied in terms of the growth of the sequence $(\gamma_n : n \in \mathbb{N}_0)$. \begin{proposition}
\label{prop:6}
Suppose that $(\gamma_n : n \in \mathbb{N})$ and $(\tilde{a}_n : n \in \mathbb{N}_0)$ are positive sequences satisfying
\[
\lim_{n \to \infty} \gamma_n = \infty, \qquad \text{and} \qquad
\lim_{n \to \infty} \gamma_n \Big( 1 - \frac{\tilde{a}_{n-1}}{\tilde{a}_n} \Big) = \kappa > 0.
\]
Then
\begin{enumerate}[(i), leftmargin=2em]
\item
\label{prop:6:a}
if $\lim_{n \to \infty} \frac{\gamma_n}{n} = 0$, then $\sum_{n=0}^\infty \frac{1}{\tilde{a}_n} < \infty$;
\item
\label{prop:6:b}
if $\lim_{n \to \infty} \frac{\gamma_n}{n} = \infty$, then $\sum_{n=0}^\infty \frac{1}{\tilde{a}_n} = \infty$.
\end{enumerate} \end{proposition} \begin{proof}
We shall prove \ref{prop:6:a} only, as the proof of \ref{prop:6:b} is similar. Let
\[
r_n = \gamma_n \Big( 1 - \frac{\tilde{a}_{n-1}}{\tilde{a}_n} \Big).
\]
There is $n_0$ such that for $n \geq n_0$,
\[
\frac{\gamma_n}{n} \leq \frac{\kappa}{4} \leq \frac{r_n}{2}.
\]
Hence, for $j \geq n_0$,
\[
\frac{\tilde{a}_{j-1}}{\tilde{a}_j} = 1 - \frac{r_j}{\gamma_j}
\leq 1 - \frac{2}{j},
\]
and so
\[
\frac{\tilde{a}_{n_0-1}}{\tilde{a}_n}
=
\prod_{j = n_0}^n \frac{\tilde{a}_{j-1}}{\tilde{a}_j}
\leq
\prod_{j=n_0}^n \bigg( 1 - \frac{2}{j} \bigg).
\]
Consequently, for a certain $c>0$,
\[
\frac{\tilde{a}_{n_0-1}}{\tilde{a}_n}
\leq c n^{-2},
\]
which implies that
\[
\sum_{n=0}^\infty \frac{1}{\tilde{a}_n} < \infty.\qedhere
\] \end{proof}
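For illustration, in the example $\tilde{a}_n = \mathrm{e}^{n^\tau}$, $\gamma_n = \max\{n^{1-\tau}, 1\}$ considered above we have $\gamma_n / n = n^{-\tau} \to 0$, and indeed $\sum_{n=0}^\infty \mathrm{e}^{-n^\tau} < \infty$, in accordance with \ref{prop:6:a}.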
The following proposition has a proof similar to that of Proposition \ref{prop:5}. \begin{proposition}
Suppose that the hypotheses of Theorem~\ref{thm:4} are satisfied for a sequence $(\gamma_n : n \in \mathbb{N}_0)$ such that
\[
\lim_{n \to \infty} \frac{\gamma_n}{n} = 0.
\]
Assume that $\operatorname{discr} \mathcal{R}_0 > 0$. Then
\begin{enumerate}[(i), leftmargin=2em]
\item if $-\kappa + \frac{1}{N} \sqrt{\operatorname{discr} \mathcal{R}_0} > 0$ then the operator $A$ is self-adjoint;
\item if $-\kappa + \frac{1}{N} \sqrt{\operatorname{discr} \mathcal{R}_0} < 0$ then the operator $A$ is not self-adjoint.
\end{enumerate}
Moreover, if $A$ is self-adjoint then $\sigma_{\mathrm{ess}}(A) = \emptyset$. \end{proposition}
\subsubsection{Construction of the modulating sequences} In this section we present examples of sequences $(\alpha_n : n \in \mathbb{N}_0)$ and $(\beta_n : n \in \mathbb{N}_0)$ for which one can compute $\operatorname{tr} \mathcal{R}_0$ and $\operatorname{discr} \mathcal{R}_0$.
The first example illustrates that $\operatorname{discr} \mathcal{R}_0$ may be positive or negative. \begin{example}
\label{ex:1}
Let $N=3$, and
\[
\alpha_n \equiv 1, \qquad\text{and}\qquad
\beta_n \equiv 1.
\]
Then $\sigma = 1$ and
\[
\mathcal{R}_0 =
\begin{pmatrix}
-\kappa - \mathfrak{f}_0 + \mathfrak{f}_2 & \kappa - \mathfrak{f}_0 + \mathfrak{f}_1 \\
-\kappa + \mathfrak{f}_1 - \mathfrak{f}_2 & -2 \kappa + \mathfrak{f}_0 - \mathfrak{f}_2
\end{pmatrix}.
\]
Consequently,
\[
\operatorname{tr} \mathcal{R}_0 = - 3 \kappa \qquad \text{and} \qquad
\operatorname{discr} \mathcal{R}_0 =
4
\Big( \mathfrak{f}_0^2 + \mathfrak{f}_1^2 + \mathfrak{f}_2^2 - \mathfrak{f}_0 \mathfrak{f}_1 - \mathfrak{f}_0 \mathfrak{f}_2 - \mathfrak{f}_1 \mathfrak{f}_2 \Big)
- 3 \kappa^2.
\]
In particular, taking $\mathfrak{f}_0 = \mathfrak{f}_1 = 0$ and $\mathfrak{f}_2 = t$, we obtain
\[
\sign{\operatorname{discr} \mathcal{R}_0} =
\begin{cases}
1 & |t| > \frac{\sqrt{3}}{2} \kappa, \\
0 & |t| = \frac{\sqrt{3}}{2} \kappa, \\
-1 & |t| < \frac{\sqrt{3}}{2} \kappa.
\end{cases}
\] \end{example}
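With the convention that $\operatorname{discr} M = (\operatorname{tr} M)^2 - 4 \det M$ for a $2 \times 2$ matrix $M$, the sign pattern above is obtained directly: for $\mathfrak{f}_0 = \mathfrak{f}_1 = 0$ and $\mathfrak{f}_2 = t$ the matrix reduces to
\[
\mathcal{R}_0 =
\begin{pmatrix}
-\kappa + t & \kappa \\
-\kappa - t & -2\kappa - t
\end{pmatrix},
\]
so $\det \mathcal{R}_0 = 3\kappa^2 - t^2$ and $\operatorname{discr} \mathcal{R}_0 = 9\kappa^2 - 4(3\kappa^2 - t^2) = 4t^2 - 3\kappa^2$.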
In the following example, the discriminant of $\mathcal{R}_0$ is non-negative regardless of $(\mathfrak{f}_n : n \in \mathbb{Z})$. \begin{example}
\label{ex:2}
Let $N=4$, and
\[
\alpha_n \equiv 1, \qquad
\beta_{n} =
\begin{cases}
(-1)^{n/2} & \text{$n$ even}, \\
0 & \text{otherwise.}
\end{cases}
\]
Then $\sigma = 1$ and
\[
\mathcal{R}_0 =
\begin{pmatrix}
-2 \kappa - \mathfrak{f}_0 + \mathfrak{f}_1 - \mathfrak{f}_2 + \mathfrak{f}_3 & -\mathfrak{f}_0 + 2 \mathfrak{f}_1 - \mathfrak{f}_2 \\
0 & -2 \kappa + \mathfrak{f}_0 - \mathfrak{f}_1 + \mathfrak{f}_2 - \mathfrak{f}_3
\end{pmatrix}.
\]
Consequently,
\[
\operatorname{tr} \mathcal{R}_0 = -4 \kappa \qquad \text{and} \qquad
\operatorname{discr} \mathcal{R}_0 =
4 \bigg( \sum_{j=0}^3 (-1)^j \mathfrak{f}_j \bigg)^2 \geq 0.
\] \end{example}
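Here the computation is immediate: $\mathcal{R}_0$ is upper triangular with diagonal entries $-2\kappa \mp \sum_{j=0}^3 (-1)^j \mathfrak{f}_j$, so $\operatorname{discr} \mathcal{R}_0$ equals the square of the difference of its eigenvalues.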
The following theorem provides a large class of modulating sequences for which $\operatorname{discr} \mathcal{R}_0$ is always non-negative. \begin{theorem}
\label{thm:5}
Let $N$ be an even integer and $\kappa > 0$. Let $(\mathfrak{f}_n : n \in \mathbb{Z})$ be an $N$-periodic sequence of non-negative numbers and $(\alpha_n : n \in \mathbb{Z})$ an $N$-periodic sequence of positive numbers satisfying
\begin{equation}
\label{eq:55}
\alpha_0 \alpha_2 \cdots \alpha_{N-2} = \alpha_1 \alpha_3 \cdots \alpha_{N-1}.
\end{equation}
Let $\mathfrak{B}_n$ denote the transfer matrix associated with sequences $(\alpha_n : n \in \mathbb{Z})$
and $\beta_n \equiv 0$. We set
\[
\mathcal{R}_0
=
\sum_{j=0}^{N-1}
\frac{\alpha_{j-1}}{\alpha_{j}} \big( \kappa + \mathfrak{f}_{j} - \mathfrak{f}_{j-1} \big)
\left\{ \prod_{m=j+1}^{N-1} \mathfrak{B}_{m}(0) \right\}
\begin{pmatrix}
0 & 0 \\
1 & 0
\end{pmatrix}
\left\{ \prod_{m=0}^{j-1} \mathfrak{B}_{m}(0) \right\}.
\]
Then
\[
\operatorname{tr} \mathcal{R}_0 = -(-1)^{N/2} N \kappa \qquad \text{and} \qquad
\operatorname{discr} \mathcal{R}_0 = 4 \bigg( \sum_{j=0}^{N-1} (-1)^j \mathfrak{f}_j \bigg)^2.
\] \end{theorem} \begin{proof}
Let $N = 2M$. By \cite[Proposition 3]{PeriodicIII}, for all $\ell \geq k \geq 0$ we have
\begin{equation}
\label{eq:53}
\prod_{m=k}^{\ell} \mathfrak{B}_m(0) =
\begin{pmatrix}
-\frac{\alpha_{k-1}}{\alpha_k} \mathfrak{p}^{[k+1]}_{\ell-k-1}(0) & \mathfrak{p}^{[k]}_{\ell-k}(0) \\
-\frac{\alpha_{k-1}}{\alpha_k} \mathfrak{p}^{[k+1]}_{\ell-k}(0) & \mathfrak{p}^{[k]}_{\ell-k+1}(0)
\end{pmatrix}.
\end{equation}
Observe that for $k \geq 1$ and $j \geq 0$,
\begin{align*}
\prod_{m=j}^{j+2k-1} \mathfrak{B}_m(0) &=
\prod_{m=0}^{k-1} \Big( \mathfrak{B}_{j+2m+1}(0) \mathfrak{B}_{j+2m}(0) \Big)
=
\prod_{m=0}^{k-1}
\begin{pmatrix}
-\frac{\alpha_{j+2m-1}}{\alpha_{j+2m}} & 0 \\
0 & -\frac{\alpha_{j+2m}}{\alpha_{j+2m+1}}
\end{pmatrix} \\&=
(-1)^k
\begin{pmatrix}
\frac{\alpha_{j+2k-3}}{\alpha_{j+2k-2}} \ldots
\frac{\alpha_{j+1}}{\alpha_{j+2}} \frac{\alpha_{j-1}}{\alpha_j} & 0 \\
0 & \frac{\alpha_{j+2k-2}}{\alpha_{j+2k-1}} \ldots
\frac{\alpha_{j+2}}{\alpha_{j+3}} \frac{\alpha_{j}}{\alpha_{j+1}}
\end{pmatrix}.
\end{align*}
In particular, by \eqref{eq:55}, we obtain
\[
\prod_{m=0}^{N-1} \mathfrak{B}_m(0) = (-1)^M \operatorname{Id}.
\]
Moreover, by \eqref{eq:53}, for all $j \geq 0$ and $n \geq 0$,
\begin{equation}
\label{eq:54}
\mathfrak{p}^{[j]}_{n}(0) =
\begin{cases}
(-1)^k \frac{\alpha_{j+2k-2}}{\alpha_{j+2k-1}} \ldots
\frac{\alpha_{j+2}}{\alpha_{j+3}} \frac{\alpha_{j}}{\alpha_{j+1}} & n=2k, \\
0 & \text{otherwise.}
\end{cases}
\end{equation}
Setting
\[
s_j = \kappa + \mathfrak{f}_j - \mathfrak{f}_{j-1},
\]
by \eqref{eq:31}, we write
\[
\mathcal{R}_0 =
\sum_{j=0}^{N-1} \frac{\alpha_{j-1}}{\alpha_j} s_j
\left\{ \prod_{m=j+1}^{N-1} \mathfrak{B}_{m}(0) \right\}
\begin{pmatrix}
0 & 0 \\
1 & 0
\end{pmatrix}
\left\{ \prod_{m=0}^{j-1} \mathfrak{B}_{m}(0) \right\}.
\]
Therefore, by \eqref{eq:53},
\[
\mathcal{R}_0 =
\sum_{j=0}^{N-1} \frac{\alpha_{j-1}}{\alpha_j} s_j
\begin{pmatrix}
-\frac{\alpha_{j}}{\alpha_{j+1}} \mathfrak{p}^{[j+2]}_{N-j-3}(0) &
\mathfrak{p}^{[j+1]}_{N-j-2}(0) \\
-\frac{\alpha_{j}}{\alpha_{j+1}} \mathfrak{p}^{[j+2]}_{N-j-2}(0) &
\mathfrak{p}^{[j+1]}_{N-j-1}(0)
\end{pmatrix}
\begin{pmatrix}
0 & 0 \\
1 & 0
\end{pmatrix}
\begin{pmatrix}
-\frac{\alpha_{N-1}}{\alpha_0} \mathfrak{p}^{[1]}_{j-2}(0) &
\mathfrak{p}^{[0]}_{j-1}(0) \\
-\frac{\alpha_{N-1}}{\alpha_0} \mathfrak{p}^{[1]}_{j-1}(0) &
\mathfrak{p}^{[0]}_{j}(0)
\end{pmatrix},
\]
and consequently,
\[
\mathcal{R}_0 =
\sum_{j=0}^{N-1} \frac{\alpha_{j-1}}{\alpha_j} s_j
\begin{pmatrix}
-\frac{\alpha_{N-1}}{\alpha_0} \mathfrak{p}^{[1]}_{j-2}(0) \mathfrak{p}^{[j+1]}_{N-j-2}(0) &
\mathfrak{p}^{[j+1]}_{N-j-2}(0) \mathfrak{p}^{[0]}_{j-1}(0) \\
-\frac{\alpha_{N-1}}{\alpha_0} \mathfrak{p}^{[j+1]}_{N-j-1}(0) \mathfrak{p}^{[1]}_{j-2}(0) &
\mathfrak{p}^{[j+1]}_{N-j-1}(0) \mathfrak{p}^{[0]}_{j-1}(0)
\end{pmatrix}.
\]
In view of \eqref{eq:54}, we have
\[
\mathcal{R}_0 =
\sum_{j=0}^{N-1} \frac{\alpha_{j-1}}{\alpha_j} s_j
\begin{pmatrix}
-\frac{\alpha_{N-1}}{\alpha_0} \mathfrak{p}^{[1]}_{j-2}(0) \mathfrak{p}^{[j+1]}_{N-j-2}(0) &
0 \\
0 &
\mathfrak{p}^{[j+1]}_{N-j-1}(0) \mathfrak{p}^{[0]}_{j-1}(0)
\end{pmatrix}.
\]
By considering even and odd $j$, the last formula can be written in the form
\begin{align*}
\mathcal{R}_0 &=
\sum_{k=0}^{M-1} \frac{\alpha_{2k-1}}{\alpha_{2k}} s_{2k}
\begin{pmatrix}
-\frac{\alpha_{N-1}}{\alpha_0} \mathfrak{p}^{[1]}_{2k-2}(0) \mathfrak{p}^{[2k+1]}_{N-2k-2}(0) & 0 \\
0 & 0
\end{pmatrix} \\
&\phantom{=}+
\sum_{k=0}^{M-1} \frac{\alpha_{2k}}{\alpha_{2k+1}} s_{2k+1}
\begin{pmatrix}
0 & 0 \\
0 & \mathfrak{p}^{[2k+2]}_{N-2k-2}(0) \mathfrak{p}^{[0]}_{2k}(0)
\end{pmatrix}.
\end{align*}
Now, using \eqref{eq:54} and \eqref{eq:55} we obtain
\[
\mathfrak{p}^{[2k+2]}_{N-2k-2}(0) \mathfrak{p}^{[0]}_{2k}(0) =
(-1)^{M-1} \frac{\alpha_0 \alpha_2 \ldots \alpha_{N-2}}{\alpha_{1} \alpha_3 \ldots \alpha_{N-1}} \cdot
\frac{\alpha_{2k+1}}{\alpha_{2k}} =
(-1)^{M-1} \cdot 1 \cdot \frac{\alpha_{2k+1}}{\alpha_{2k}}.
\]
Analogously one can show
\[
-\frac{\alpha_{N-1}}{\alpha_0} \mathfrak{p}^{[1]}_{2k-2}(0) \mathfrak{p}^{[2k+1]}_{N-2k-2}(0)
= (-1)^{M-1} \frac{\alpha_{2k}}{\alpha_{2k-1}}.
\]
Therefore,
\begin{align*}
\mathcal{R}_0 &= (-1)^{M-1}
\begin{pmatrix}
\sum_{k=0}^{M-1} s_{2k} & 0 \\
0 & \sum_{k=0}^{M-1} s_{2k+1}
\end{pmatrix} \\
&=
-\sigma
\begin{pmatrix}
M \kappa + \sum_{k=0}^{M-1} \big( \mathfrak{f}_{2k} - \mathfrak{f}_{2k-1} \big) & 0 \\
0 & M \kappa + \sum_{k=0}^{M-1} \big( \mathfrak{f}_{2k+1} - \mathfrak{f}_{2k} \big)
\end{pmatrix},
\end{align*}
and the conclusion readily follows. \end{proof}
\begin{bibliography}{jacobi}
\end{bibliography}
\end{document}
\begin{document}
\title{Atom diode: Variants, stability, limits, and adiabatic interpretation} \author{A. Ruschhaupt} \email[Email address: ]{[email protected]} \affiliation{Departamento de Qu\'\i mica-F\'\i sica, Universidad del Pa\'\i s Vasco, Apdo. 644, 48080 Bilbao, Spain} \author{J. G. Muga} \email[Email address: ]{[email protected]} \affiliation{Departamento de Qu\'\i mica-F\'\i sica, Universidad del Pa\'\i s Vasco, Apdo. 644, 48080 Bilbao, Spain}
\begin{abstract}
We examine and explain the stability properties of the ``atom diode'', a laser device that lets the ground state atom pass in one direction but not in the opposite direction. The diodic behavior and the variants that result from using different laser configurations may be understood with an adiabatic approximation. The conditions for the breakdown of the approximation, which also imply the failure of the diode, are analyzed.
\end{abstract} \pacs{03.75.Be,42.50.Lc} \maketitle
\section{Introduction}
In a previous paper \cite{ruschhaupt_2004_diode} we proposed simple models for an ``atom diode'', a laser device that lets the neutral atom in its ground state pass in one direction (conventionally from left to right) but not in the opposite direction for a range of incident velocities. A diode is a very basic control element in a circuit, so many applications may be envisioned to trap or cool atoms, or to build logic gates for quantum information processing in atom chips or other setups. Similar ideas have been developed independently by Raizen and coworkers \cite{raizen.2005,dudarev.2005}. While their work has emphasized phase space compression, we looked for the laser interactions leading to the most effective diode. This led us to consider first STIRAP \cite{STIRAP} (stimulated Raman adiabatic passage) transitions and three-level atoms, although we also proposed schemes for two-level atoms. In this paper we continue the investigation of the atom diode, concentrating on its two-level version, by examining its stability with respect to parameter changes, and also several variants, including in particular the ones discussed in \cite{ruschhaupt_2004_diode} and \cite{raizen.2005}. We shall see that the behaviour of the diode, its properties, and its working parameter domain can be understood and quantified with the aid of an adiabatic basis (equivalently, partially dressed states) obtained by diagonalizing the effective interaction potential.
We restrict the atomic motion, similarly to \cite{ruschhaupt_2004_diode}, to one dimension. This occurs when the atom travels in waveguides formed by optical fields \cite{schneble.2003}, or by electric or magnetic interactions due to charged or current-carrying structures \cite{folman.2002}. It can be also a good approximation in free space for atomic packets which are broad in the laser direction, perpendicular to the incident atomic direction \cite{HHM05}. Three dimensional effects should not imply a dramatic disturbance, in any case, as we shall analyze elsewhere.
\begin{figure}
\caption{(a) Schematic action of the different lasers on the atom levels and (b) location of the different laser potentials.}
\label{fig1}
\end{figure}
The basic setting can be seen in Fig. \ref{fig1}, and consists of three, partially overlapping laser fields: two of them are state-selective mirror lasers blocking the
excited ($|2\rangle$) and ground ($|1\rangle$) states on the left and right, respectively of a central pumping laser on resonance with the atomic transition. They are all assumed to be traveling waves perpendicular to the atomic motion direction. The corresponding effective, time-independent, interaction-picture Hamiltonian for the two-level atom may be written, using
$|1\rangle \equiv {1 \choose 0}$ and $|2\rangle \equiv {0 \choose 1}$, as
\begin{eqnarray} \bm{H} = \frac{\hat{p}_x^2}{2m} + \underbrace{\frac{\hbar}{2} \left(\begin{array}{cc} W_1(x) & \Omega (x)\\ \Omega (x) & W_2(x) \end{array}\right)}_{\bm{M}(x)}, \label{ham2} \end{eqnarray}
where $\Omega(x)$ is the Rabi frequency for the resonant transition and the effective reflecting potentials are $W_1(x)\hbar/2$ and $W_2(x)\hbar/2$. $\hat{p}_x = -i\hbar \frac{\partial}{\partial x}$ is the momentum operator and $m$ is the mass (corresponding to Neon in all numerical examples).
Spontaneous decay is neglected here for simplicity, but it could be incorporated following \cite{ruschhaupt_2004_diode}. It implies both perturbing and beneficial effects for unidirectional transmission. Notice that in the ideal diode operation the ground state atom must be excited during its left-to-right crossing of the device. In principle, excited atoms could cross the diode ``backwards'', i.e., from right to left, but an irreversible decay from the excited state to the ground state would block any backward motion \cite{ruschhaupt_2004_diode}.
The behaviour of this device is quantified by the scattering transmission and reflection amplitudes for left (l) and right (r) incidence. Using $\alpha$ and $\beta$ to denote the channels, $\alpha=1,2$, $\beta=1,2$, let us denote by $R^l_{\beta\alpha} (v)$ ($R^r_{\beta\alpha} (v)$) the scattering amplitudes for incidence with modulus of velocity $v>0$ from the left (right) in channel $\alpha$, and reflection in channel $\beta$. Similarly we denote by $T^l_{\beta\alpha} (v)$ ($T^r_{\beta\alpha} (v)$) the scattering amplitude for incidence in channel $\alpha$ from the left (right) and transmission in channel $\beta$.
For some figures, it will be preferable to use an alternative notation in which the information of the superscript ($l/r$) is contained instead in the sign of the velocity argument $w$, positive for left incidence and negative otherwise,
\begin{eqnarray} R_{\beta\alpha}(w):= \left\{ \begin{array}{ll} R^l_{\beta\alpha}(\fabs{w}),& {\rm{if}}\; w>0 \\ R^r_{\beta\alpha}(\fabs{w}),& {\rm{if}}\; w<0 \end{array} \right. \nonumber\\ T_{\beta\alpha}(w):= \left\{ \begin{array}{ll} T^l_{\beta\alpha}(\fabs{w}),& {\rm{if}}\; w>0 \\ T^r_{\beta\alpha}(\fabs{w}),& {\rm{if}}\; w<0 \end{array} \right. \nonumber \end{eqnarray}
The ideal diode configuration must be such that
\begin{eqnarray} \fabsq{T_{21}^l (v)}\approx \fabsq{R_{11}^r (v)} \approx 1, \label{di1} \\ \fabsq{R_{\beta 1}^l (v)} \approx \fabsq{T_{\beta 1}^r (v)} \approx \fabsq{R_{21}^r (v)} \approx \fabsq{T_{11}^l (v)} \approx 0, \label{di2} \end{eqnarray}
with $\beta=1,2$, in an interval $v_{min}<v<v_{max}$ of the modulus of the velocity. In words, there must be full transmission for left incidence and full reflection for right incidence in the ground state. This was achieved in \cite{ruschhaupt_2004_diode} with a particular choice of the potential in which $\Omega(x)$, $W_1(x)$, and $W_2(x)$ were related to two partially overlapping functions $f_1(x)$, $f_2(x)$. However, other forms are also possible, so we shall deal here with the more general structure of Eq. (\ref{ham2}). We shall use Gaussian laser profiles
\begin{eqnarray*} &W_1 (x) = \hat{W}_1 \;\Pi(x,d),\quad W_2 (x) = \hat{W}_2 \;\Pi(x,-d),&\\ &\Omega (x) = \hat{\Omega}\; \Pi (x,0),& \end{eqnarray*}
where
$$ \Pi (x,x_0)=\exp[-(x-x_0)^2/(2\Delta x^2)] $$
and $\Delta x = 15 \mum$.
In section \ref{s2} we shall examine the stability and limits of the ``diodic'' behavior while the variants of the atom diode are presented in section \ref{s3}. They are explained in section \ref{s4} with the aid of an adiabatic basis and approximation. The paper ends with a summary and an appendix on the adiabaticity criterion.
\section{``Diodic'' behaviour and its limits\label{s2}}
\begin{figure}\label{fig2}
\end{figure}
The behavior of the two-level atom diode is examined by solving numerically the stationary Schr\"odinger equation,
\begin{eqnarray} E_v \bm{\Psi} (x) = \bm{H} \bm{\Psi} (x), \label{stat} \end{eqnarray}
with the Hamiltonian given by Eq. (\ref{ham2}) and $E_v = \frac{m}{2}v^2$. The results, obtained by the ``invariant imbedding method'' \cite{singer.1982,band.1994}, are shown in Fig. \ref{fig2} for different parameters. In the plotted velocity range, the ``diodic'' behaviour holds, i.e. Eqs. (\ref{di1}) and (\ref{di2}) are fulfilled. (The transmission and reflection probabilities for incidence in the ground state,
$|R_{21}^{l/r}|^2$ and $|T_{11}^{l/r}|^2$, which are not shown in the figure, are zero.) The device may be asymmetric, i.e. even with $\hat{W}_1 \neq \hat{W}_2$ there can be a ``diodic'' behaviour; see some examples in Fig. \ref{fig2}.
Note in passing that the device works as a diode for incidence in the excited state too, but in the opposite direction, namely,
$|T^r_{12}(v)|^2 \approx |R^l_{22}(v)|^2\approx 1$, whereas all other probabilities for incidence in the excited state are approximately zero.
Now let us examine the stability of the diode with respect to changes in the separation between laser field centers $d$. We define $v_{max}$ and $v_{min}$ as the upper and lower limits where diodic behaviour holds, by imposing that all scattering probabilities from the ground state be small except the ones in Eq. (\ref{di1}) (i.e., the transmission probability from $1$ to $2$ for left incidence and the reflection probability from $1$ to $1$ for right incidence). More precisely, $v_{max/min}$ are chosen as the limiting values such that
$\sum_{\beta=1}^2 (|R_{\beta 1}^l|^2+|T^r_{\beta 1}|^2)
+(|R^r_{21}|^2+|T^l_{11}|^2)+(1-|T_{21}^l|^2)+
(1-|R^r_{11}|^2)<\epsilon$ for all $v_{min}< v <v_{max}$. In Fig. \ref{fig3}, $v_{max/min}$ are plotted versus the distance between the laser centers, $d$, for different combinations of $\hat{\Omega}$, $\hat{W}_1$, and $\hat{W}_2$. For the intensities considered, $v_{max}$ is in the ultra-cold regime below $1$ m/s. In the $v_{max}$ surface, unfilled boxes indicate reflection failure for right incidence and filled circles indicate transmission failure for left incidence. In the $v_{min}$ surface, the failure is always due to transmission failure for left incidence. We see that the valid $d$ range for ``diodic'' behaviour can be increased by increasing the Rabi frequency $\hat{\Omega}$, compare e.g. (a) and (b), or (c) and (d). Moreover, higher mirror intensities increase $v_{max}$ at the plateau but also make it narrower, compare e.g. (b) and (d). This narrowing can be simply compensated by increasing $\hat{\Omega}$ too, compare e.g. (a) and (d).
\begin{figure}
\caption{ Limit $v_{min}$ (solid lines) and $v_{max}$ (symbols connected with dashed lines) for ``diodic'' behaviour, $\epsilon = 0.01$; the circles (boxes) correspond to breakdown due to transmission (reflection); (a) $\hat{\Omega} = 0.2 \Msi$, $\hat{W}_1 = \hat{W}_2 = 20 \Msi$; (b) $\hat{\Omega} = 1 \Msi$, $\hat{W}_1 = \hat{W}_2 = 20 \Msi$; (c) $\hat{\Omega} = 0.2 \Msi$, $\hat{W}_1 = \hat{W}_2 = 100 \Msi$; (d) $\hat{\Omega} = 1 \Msi$, $\hat{W}_1 = \hat{W}_2 = 100 \Msi$.}
\label{fig3}
\end{figure}
\begin{figure}
\caption{ Limit $v_{min}$ (solid lines) and $v_{max}$ (symbols connected with dashed lines) for ``diodic'' behaviour versus the shift $\Delta$ for different $d$, $\epsilon = 0.01$; the circles (boxes) correspond to breakdown first for transmission (reflection); $\hat{\Omega} = 1 \Msi$, $\hat{W}_1 = \hat{W}_2 = 100 \Msi$; (a) $d=46\mum$, (b) $d=50\mum$, (c) $d=60\mum$, (d) $d=70\mum$.}
\label{figx}
\end{figure}
Finally, we have also examined the stability with respect to a shift $\Delta$ of the central position of the pumping laser, see Fig. \ref{figx}. It turns out that there is a range, which depends on $d$, where the limits $v_{min}$ and $v_{max}$ practically do not change.
\section{Variants of the atom diode\label{s3}}
Is the mirror potential $W_2$ really necessary? If we want ground state atoms to pass from left to right but not from right to left, it is not intuitively obvious why we should add a reflection potential for the excited state on the left of the pumping potential $\Omega$, see again Fig. \ref{fig1}. In other words, it could appear that the pumping potential and a reflecting potential $W_1$ on its right would be enough to make a perfect diode. This simpler two-laser scheme, however, only works partially. In Fig. \ref{fig5} the scattering probabilities for the case $\hat{W}_1 >0$, $\hat{W}_2 = 0$ are represented. While there is still full reflection if the atom comes from the right, the transmission probability is only $1/2$ when the atom comes from the left; accordingly there is a $1/2$ reflection probability from the left, which is equally distributed between the ground and excited state channels. This is in contrast to the $\hat{W}_1>0$, $\hat{W}_2 > 0$ case of Fig \ref{fig2}. We may thus conclude that the counterintuitive state-selective mirror $W_2$ is really important to attain a perfect diode.
In Fig. \ref{fig5} the case $\hat{W}_1=0$, $\hat{W}_2>0$ is also plotted. For incidence from the right in the ground state there is no full reflection, so this case is not useful as a diode. But for incidence from the left there is equal transmission in ground and excited states so that this device might be useful to build an interferometer. A very remarkable and useful property in this case, and in fact in all cases depicted in Figs. \ref{fig2} and \ref{fig5}, is the constant value of the transmission and reflection probabilities in a broad velocity range. This is calling for an explanation. Moreover, why do they take the values $1$, $1/2$, or $1/4$? None of these facts is very intuitive, neither within the representations and concepts we have put forward so far, nor according to the following arguments: Let us consider again the simple two-laser configuration with $\hat{W}_2=0$ and $\hat{W}_1>0$. From a classical perspective, the atom incident from the left finds first the pumping laser and then the state-selective mirror potential for the ground state. According to this ``sequential'' model, one would expect an important effect of the velocity on the pumping efficiency. A different velocity implies a different traversal time and thus a different final phase for the Rabi oscillation, which should lead to a smooth, continuous variation of the final atomic state with the velocity. In particular, the probability of the excited state after the pumping would oscillate with the velocity and therefore the final transmission after the right mirror should oscillate too, if the sequential model picture were valid. Indeed, these oscillations are clearly seen in Fig. \ref{fig6} when $\hat{W}_1=\hat{W}_2=0$ (above a low velocity threshold in which the Rabi oscillations are suppressed and all channels are equally populated; for a related effect see \cite{ROS}, see also the explanation of this low velocity regime in section \ref{s4}). Clearly, however, the oscillations are absent when $\hat{W}_1>0$, so the sequential, classical-like picture cannot be right. In summary, the mirror potentials added to the pumping laser imply a noteworthy stabilization of the probabilities and velocity independence. The failure of the sequential scattering picture must be due to some sort of quantum interference phenomenon. Interference effects are well known in scattering off composite potentials, but in comparison with, e.g., resonance peaks in a double barrier, the present results are of a different nature. There is indeed a clean explanation of all the mysteries we have dropped along the way, as the reader will find out in the next section.
\begin{figure}\label{fig5}
\end{figure}
\begin{figure}\label{fig6}
\end{figure}
\section{Adiabatic interpretation of the diode and its variants\label{s4}}
Depending on the mirror potentials included in the device, let us label the four possible cases discussed in the previous section as follows: case ``0'': $\hat{W}_1=\hat{W}_2=0$; case ``1'': $\hat{W}_1>0$, $\hat{W}_2=0$; case ``2'': $\hat{W}_1=0$, $\hat{W}_2>0$; case ``12'': $\hat{W}_1>0$, $\hat{W}_2>0$.
We diagonalize now the potential matrix $\bm{M}(x)$
\begin{eqnarray*} \bm{U}(x)\bm{M}(x)\bm{U}^+ (x) = \left(\begin{array}{cc} \lambda_-(x) & 0 \\ 0 & \lambda_+(x) \end{array}\right). \end{eqnarray*}
The orthogonal matrix $\bm{U}(x)$ is given by
\begin{eqnarray*} \bm{U}(x) = \left(\begin{array}{cc} \frac{W_-(x) - \mu(x)}{\sqrt{4\Omega^2(x)+[W_-(x) - \mu(x)]^2}} & \frac{W_-(x) + \mu(x)}{\sqrt{4\Omega^2(x)+[W_-(x) + \mu(x)]^2}}\\ \frac{2\Omega(x)}{\sqrt{4\Omega^2(x)+[W_-(x) - \mu(x)]^2}} & \frac{2\Omega(x)}{\sqrt{4\Omega^2(x)+[W_-(x) + \mu(x)]^2}} \end{array}\right) \end{eqnarray*}
where
\begin{eqnarray} W_- &=& W_1 - W_2, \nonumber\\ \mu&=&\sqrt{4\Omega^2(x)+W_-^2(x)}, \nonumber \end{eqnarray}
and the eigenvalues of $\bm{M}(x)$ are
$$ \lambda_{\mp} (x) = \frac{\hbar}{4}\left[W_1(x)+W_2(x) \mp \mu(x)\right] $$
with corresponding (normalized) eigenvectors $|\lambda_\mp(x)\rangle$. The asymptotic form of $\bm{U}$ varies for the different cases distinguished with a superscript, $U^{(j)}$, $j=0,1,2,12$. For $x\to-\infty$, the same $\bm{U}$ is found for cases $0$ and $1$, in which the left edge corresponds to the pumping potential. Similarly, the cases $2$ and $12$ share the same left edge potential $W_2$ and thus a common form of $\bm{U}$,
\begin{eqnarray*} \bm{U}^{(0,1)}(-\infty)\!=\!\frac{1}{\sqrt{2}}\left(\begin{array}{cc} -1 & 1\\ 1 & 1 \end{array}\right),\quad\!
\!\!\bm{U}^{(2,12)}(-\infty)\!=\!\left(\begin{array}{cc} -1 & 0\\ 0 & 1 \end{array}\right)\!. \end{eqnarray*}
The corresponding analysis for $x\to \infty$ gives the asymptotic forms
\begin{eqnarray*} \bm{U}^{(0,2)}(\infty) = \frac{1}{\sqrt{2}}\left(\begin{array}{cc} -1 & 1\\ 1 & 1 \end{array}\right),\quad
\bm{U}^{(1,12)}(\infty) = \left(\begin{array}{cc} 0 & 1\\ 1 & 0 \end{array}\right). \end{eqnarray*}
\begin{figure}
\caption{ Eigenvalues (a) $\lambda_+$ and (b) $\lambda_-$; $d=50\mum$, $\hat{\Omega} = 1 \Msi$; $\hat{W}_1 = \hat{W}_2 = 100 \Msi$ (solid lines); $\hat{W}_1 = 100 \Msi$, $\hat{W}_2 = 0$ (boxes); $\hat{W}_1 = 0$, $\hat{W}_2 = 100 \Msi$ (circles).}
\label{fig7}
\end{figure}
The eigenvalues $\lambda_\pm (x)$ for the same parameters of Fig. \ref{fig5} are plotted in Fig. \ref{fig7}. We see that $\lambda_+ (x) > 0$ has at least one high barrier whereas $\lambda_-(x) \approx 0$.
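The qualitative features of Fig.~\ref{fig7} are easy to reproduce. The following short script is only an illustrative sketch and is not part of the original calculations: it assumes that the frequency unit $\Msi$ corresponds to $10^6\,\mathrm{s}^{-1}$, uses the parameters of the solid lines in Fig.~\ref{fig7} ($d = 50\mum$, $\hat{\Omega} = 1 \Msi$, $\hat{W}_1 = \hat{W}_2 = 100 \Msi$) and the Gaussian profiles defined above, and evaluates $\lambda_\pm(x)$ on a grid. It also prints the velocity $\sqrt{2 \max_x \lambda_+(x)/m}$ associated with the $\lambda_+$ barrier top for the (approximate) mass of Neon, which comes out at roughly half a metre per second, consistent with the sub-$1$ m/s working range quoted in section \ref{s2}.
\begin{verbatim}
# Illustrative sketch (assumed units: frequencies in s^-1, lengths in m).
import numpy as np

hbar = 1.0545718e-34          # J s
m_Ne = 20.18 * 1.6605e-27     # approximate mass of Neon in kg
dxw  = 15e-6                  # Delta x = 15 micrometres
d    = 50e-6                  # mirror-pump separation, as in Fig. 7
W1h, W2h, Omh = 100e6, 100e6, 1e6   # peak values (assuming 1 Msi = 1e6 s^-1)

def profile(x, x0):
    return np.exp(-(x - x0)**2 / (2 * dxw**2))

def lambdas(x):
    W1, W2, Om = W1h*profile(x, d), W2h*profile(x, -d), Omh*profile(x, 0.0)
    mu = np.sqrt(4*Om**2 + (W1 - W2)**2)
    return hbar/4*(W1 + W2 - mu), hbar/4*(W1 + W2 + mu)

x = np.linspace(-150e-6, 150e-6, 4001)
lam_m, lam_p = lambdas(x)
# lambda_- stays near zero while lambda_+ develops high barriers near x = +-d;
# the barrier top translates into the velocity scale below (about 0.56 m/s here).
print(lam_m.max(), lam_p.max(), np.sqrt(2*lam_p.max()/m_Ne))
\end{verbatim}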
If $\bm{\Psi}$ is a two-component solution of the stationary Schr\"odinger equation, Eq. (\ref{stat}), we define now the vector
$$ \bm{\Phi}(x)= {\phi_-(x) \choose \phi_+(x)} := \bm{U}(x)\bm{\Psi}(x) $$
in a potential-adapted, ``adiabatic representation''. Note that if no approximation is made, $\bm{\Phi}$ and $\bm{\Psi}$ are both exact and contain the same information expressed in different bases. Starting from Eq. (\ref{stat}), using $\bm{\Psi}=\bm{U}^+\bm{\Phi}$, and multiplying from the left by $\bm{U}$, we arrive at the following equation for $\bm{\Phi}(x)$
\begin{eqnarray*} E_v \bm{\Phi}(x) &=& -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} \bm{\Phi}(x) + \left(\begin{array}{cc} \!\lambda_-(x) & 0 \\ 0 & \lambda_+(x) \end{array}\!\right) \bm{\Phi} (x)\\ & & + \bm{Q} \bm{\Phi}(x), \end{eqnarray*}
where
\begin{eqnarray} \bm{Q} &=& -\frac{\hbar^2}{2m} \left(\bm{U}(x) \frac{\partial^2 \bm{U}^+}{\partial x^2}(x) + 2 \bm{U}(x)\frac{\partial \bm{U}^+}{\partial x}(x) \frac{\partial}{\partial x}\right) \nonumber \\ & = & \left(\begin{array}{cc} m\,B^2(x)/2 & -A(x) + i B(x) \hat{p}_x \\ A(x) - i B(x) \hat{p}_x & m\,B^2(x)/2\end{array}\right) \label{q} \end{eqnarray}
is the coupling term in the adiabatic basis, and $A(x)$, $B(x)$ are real functions,
\begin{eqnarray*} A(x)&=& \frac{1}{32 \mu^4(x)\Delta x^4 m} \Big\{\\ & & \fexp{-\frac{(x+d)^2}{\Delta x^2}} d^2 \hbar^6 \Omega(x) W_-(x)\\ & & + \fexp{\frac{(x+d)^2}{\Delta x^2}} \times\\ & & \left[-4\Omega^2(x) + W_1^2(x) + W_2^2(x) + 6 W_1(x)W_2(x)\right]\Big\},\\ B(x)&=& \frac{d \hbar^3}{4 \mu^2(x) \Delta x^2 m} \Omega(x) \left[W_1(x) + W_2(x)\right]. \end{eqnarray*}
Let us consider incidence from the left and assume first that the coupling $\bm{Q}$ can be neglected so that there are two independent adiabatic modes ($\pm$) in which the internal state of the atom
adapts to the position-dependent eigenstates $|\lambda_\pm\rangle$ of the laser potential $\bm{M}$, whereas the atom center-of-mass motion is affected in each mode by the effective adiabatic potentials $\lambda_\pm(x)$.
Because $\lambda_- \approx 0$, an approximate solution for $\phi_- (x)$ is a fully transmitted wave and, because $\lambda_+$ consists of at least one ``high'' barrier (at any rate the present argument is only applicable for energies below the barrier top), an approximate solution for $\phi_+ (x)$ is a wave which is fully reflected by a wall. So we can write for $x\ll 0$,
\begin{eqnarray*} \bm{\Phi} (x) \approx \bm{\Phi}_{-\infty} (x) := \left(\begin{array}{c}c_-\\c_+\end{array}\right) e^{ikx} + \left(\begin{array}{c}0\\-c_+\end{array}\right) e^{-ikx}, \end{eqnarray*}
and for $x\gg 0$,
\begin{eqnarray*} \bm{\Phi} (x) \approx \bm{\Phi}_{\infty} (x) := \left(\begin{array}{c}c_-\\0\end{array}\right) e^{ikx}. \end{eqnarray*}
In order to determine the amplitudes $c_\pm$ we have to compare with the asymptotic form of the scattering solution for left incidence,
\begin{eqnarray*} {\bm{\Psi}}(x) \approx {\bm{\Psi}}_{-\infty} (x) :=
\left(\begin{array}{c} 1\\0 \end{array}\right) e^{i k x} + \left(\begin{array}{c} R_{11}^l\\R_{21}^l \end{array}\right) e^{-i k x} \end{eqnarray*}
if $x\ll 0$ and
\begin{eqnarray*} \bm{\Psi}(x) \approx \bm{\Psi}_{\infty} (x) := e^{i k x} \left(\begin{array}{c} T_{11}^l\\T_{21}^l \end{array}\right) \end{eqnarray*}
if $x\gg 0$.
The transmission and reflection coefficients can now be approximately calculated for each case from the boundary conditions $\bm{\Phi}_{-\infty}(x) = \bm{U}(-\infty) \bm{\Psi}_{-\infty} (x)$ and $\bm{\Phi}_{\infty}(x) = \bm{U}(\infty) \bm{\Psi}_{\infty} (x)$.
\begin{table}[tbp] \label{tab} \caption{Reflection and transmission probability for the different variations of the atom diode} (a) incidence from the right: \begin{eqnarray*}
\begin{array}{cc|c|c|c|c|c|c|} &\mbox{case} & c_-^{r} & c_+^{r} & R_{11}^{r}& R_{21}^{r} & T_{11}^{r} & T_{21}^{r}\\[0.1cm] \hline (0)& \hat{W}_1=\hat{W}_2=0 & -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2}\\[0.1cm] \hline (1)& \hat{W}_1>0, \hat{W}_2=0 & 0 & 1 & -1 & 0 & 0 & 0\\[0.1cm] \hline (2)& \hat{W}_1=0, \hat{W}_2>0 & -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{\sqrt{2}} & 0\\[0.1cm] \hline (12)& \hat{W}_1>0, \hat{W}_2>0 & 0 & 1 &
-1 & 0 & 0 & 0\\[0.1cm] \hline \end{array} \end{eqnarray*}
(b) incidence from the left:
\begin{eqnarray*}
\begin{array}{cc|c|c|c|c|c|c|} &\mbox{case} & c_-^{l} & c_+^{l} & R_{11}^{l}& R_{21}^{l} & T_{11}^{l} & T_{21}^{l}\\[0.1cm] \hline (0)& \hat{W}_1=\hat{W}_2=0 & -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2}\\[0.1cm] \hline (1)& \hat{W}_1>0, \hat{W}_2=0 & -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & -\frac{1}{2} & -\frac{1}{2} & 0 & -\frac{1}{\sqrt{2}}\\[0.1cm] \hline (2)& \hat{W}_1=0, \hat{W}_2>0 & -1 & 0 & 0 & 0 & \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}}\\[0.1cm] \hline (12)& \hat{W}_1>0, \hat{W}_2>0 & -1 & 0 & 0 & 0 & 0 & -1\\[0.1cm] \hline \end{array} \end{eqnarray*} \end{table}
The incidence from the right can be treated in a similar way. All the amplitudes are given in Table I, from which we can find, taking the squares, the transmission and reflection probabilities $1$, $1/2$, $1/4$, and $0$, of Figs. \ref{fig2}, \ref{fig5}, and \ref{fig6}.
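As an illustration of how the entries of Table I arise, consider case ``12'' and incidence from the left. Since $\bm{U}^{(2,12)}(-\infty)$ maps $(1,0)^T$ into $(-1,0)^T$, the matching at $x \ll 0$ gives $c_-^l = -1$, $c_+^l = 0$, and the reflected wave must vanish, $R^l_{11} = R^l_{21} = 0$. The matching at $x \gg 0$ requires $\bm{U}^{(1,12)}(\infty) (T^l_{11}, T^l_{21})^T = (c_-^l, 0)^T$, which yields $T^l_{11} = 0$ and $T^l_{21} = -1$, i.e. full transmission into the excited state, in agreement with the table and with Eq. (\ref{di1}).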
\begin{figure}
\caption{ $|\langle 1|\lambda_-^{(j)}\rangle|^2$ (solid lines)
and $|\langle 2|\lambda_-^{(j)}\rangle|^2$ (dashed lines) for $d = 50 \mum$, $\hat{\Omega} = 1 \Msi$, $\hat{W}_1 = 100 \Msi$; (a) $\hat{W}_2 = 100 \Msi$ (case $j=12$), (b) $\hat{W}_2 = 0$ (case $j=1$).}
\label{fig8}
\end{figure}
These results provide, in summary, a simple explanation of the behaviour of the diode and its variants. In particular, the perfect diode behaviour of case ``12'' occurs because the (approximately) ``freely'' moving mode $\phi_-$ transfers adiabatically the ground state to the excited state from left to right. To visualize this, let us represent the probabilities to find the ground and excited state
in the eigenvectors $|\lambda^{(j)}_-(x)\rangle$ for the cases $j=12,1$. They are plotted in Fig. \ref{fig8}a for the case ``12'': the perfect adiabatic transfer can be seen clearly. On the other hand, the mode ``$+$'' (not plotted), which tends to the ground state on the right edge of the device, is blocked by a high barrier. The stability of this blocking effect with respect to incident velocities holds for energies smaller than the $\lambda_+$ barrier top, more on this below. In Fig. \ref{fig8}b the ground and excited state probabilities for case ``1'' are plotted. If the mirror potential laser $W_2$ is removed on the left edge of the device, the ground state is not any more an eigenstate of the potential for $x\ll 0$. The adiabatic transfer
of the mode ``$-$'' occurs instead from $(|2\rangle-|1\rangle)/2^{1/2}$ on the left
to $|2\rangle$ on the right, whereas the blocked mode ``$+$'' on the left
corresponds to the linear combination $(|2\rangle+|1\rangle)/2^{1/2}$. This results in a $1/2$ reflection probability for ground-state incidence from the left. A similar analysis would be applicable in the other cases.
\begin{figure}
\caption{ Limits of the ``diodic'' behaviour $v_{min}$ (thick dashed line) and $v_{max}$ (filled circles connected with a dashed line, see also Fig. \ref{fig3}), $\epsilon = 0.01$; limits of condition (\ref{cond1}) $v_{\lambda,min}$ (lower solid line) and $v_{\lambda,max}$ (upper solid line); limit of the adiabatic approximation $v_{ad,max}$ (unfilled circles), $\epsilon = 0.01$; $\hat{W}_1 = \hat{W}_2 = 100 \Msi$, $\hat{\Omega} = 0.2 \Msi$.}
\label{fig9}
\end{figure}
Of course all approximations have a range of validity that depends on the potential parameters and determines the working conditions of the diode. Even though these conditions can be easily found numerically from the exact results, approximate breakdown criteria are helpful to understand the limits of the device and the different reasons for its failure.
For the approximation that $\phi_-$ is a fully transmitted wave and $\phi_+$ a fully reflected one a necessary condition is
\begin{eqnarray} \mbox{max}_x \left[\lambda_-(x)\right] < E_v < \mbox{max}_x \left[\lambda_+ (x)\right]. \label{cond1} \end{eqnarray}
This defines the limits
\begin{eqnarray} v_{\lambda,min} &:=& \sqrt{\frac{2}{m}\mbox{max}_x \left[ \lambda_-(x)\right]}, \label{deflm}\\ v_{\lambda,max} &:=& \sqrt{\frac{2}{m}\mbox{max}_x \left[\lambda_+(x)\right]}, \label{deflp} \end{eqnarray}
such that Eq. (\ref{cond1}) is fulfilled for all $v$ with $v_{\lambda,min} < v < v_{\lambda,max}$. The plateaus of $v_{max}$ seen e.g. in Fig. \ref{fig3} for a range of $d$-values are essentially coincident with $v_{\lambda,max}$. Fig. \ref{fig9} shows the exact limits $v_{min}$ and $v_{max}$ for the ``diodic'' behaviour, as in Fig. \ref{fig3}c, and also the limits $v_{\lambda,min}$, $v_{\lambda,max}$ resulting from the condition of Eq. (\ref{cond1}). We see that the exact limit $v_{min}$ coincides essentially with $v_{\lambda,min}$ so that the lower ``diodic'' velocity boundary can be understood by the breakdown of the condition that $\phi_-$ is fully transmitted due to a $\lambda_-$ barrier. This effect is only relevant for small distances $d$ between the lasers.
Another reason for the breakdown of the diode may be that the adiabatic modes are no longer independent, i.e. that $\bm{Q}$, see Eq. (\ref{q}), cannot be neglected. An approximate criterion for adiabaticity, more precisely for neglecting the non-diagonal elements of $\bm{Q}$ (see the Appendix), is
\begin{eqnarray} q(v) &:=& \mbox{max}_{x \in I} \frac{\fabsq{A(x)} + 2m\fabsq{B(x)}[E_v - \lambda_-(x)]} {\fabsq{\lambda_+(x)-\lambda_-(x)}}\nonumber\\ &\ll& 1 \label{neglectQ} \end{eqnarray}
with $I = [-d, d]$. A velocity boundary $v_{ad,max}$ defined by $q(v) < \epsilon$ for all $v_{\lambda,min} < v < v_{ad,max}$ is shown in Fig. \ref{fig9}. (Note that the condition of Eq. (\ref{neglectQ}) only makes sense if $E_v > \lambda_-(x)$, i.e. $v_{\lambda,min} < v$.) We see in Fig. \ref{fig9} that the breakdown of the diode at $v_{max}$ for large $d$ is due to a failure of the adiabatic approximation.
\section{Summary\label{s5}}
Summarizing, we have studied a two-level model for an ``atom diode'', a laser device in which ground state atoms can pass in one direction, conventionally from left to right, but not in the opposite direction. The proposed scheme includes three lasers: two of them are state-selective mirrors, one for the excited state on the left, and the other one for the ground state on the right, whereas the third one -located between the two mirrors- is a pumping laser on resonance with the atomic transition.
We have shown that the ``diodic'' behaviour is very stable with respect to atom velocity in a given range, and with respect to changes in the distances between the centers of the lasers. The inclusion of the laser on the left, reflecting the excited state, is somewhat counterintuitive, but it is essential for a perfect diode effect; the absence of this laser leads to a $50\%$ drop in efficiency. The stability properties as well as the actual mechanism of the diode are explained with an adiabatic basis and an adiabatic approximation. The diodic transmission is due to the adiabatic transfer of population from left to right, from the ground state to the excited state in a free-motion adiabatic mode, while the other mode is blocked by a barrier.
\begin{acknowledgments}
AR acknowledges support by the Ministerio de Educaci\'on y Ciencia. This work has been supported by Ministerio de Educaci\'on y Ciencia (BFM2003-01003), and UPV-EHU (00039.310-15968/2004). \end{acknowledgments}
\begin{appendix} \section{}
To motivate Eq. (\ref{neglectQ}), see also \cite{messiah.book}, let us assume
\begin{eqnarray} E \bm{\Phi}(x) &=& -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} \bm{\Phi}(x) + \left(\begin{array}{cc} \lambda_- & 0 \\ 0 & \lambda_+ \end{array}\right) \bm{\Phi} (x)\nonumber\\ & + & \epsilon \left(\begin{array}{cc} 0 & -\tilde{A}+i \tilde{B} \hat{p}_x\\ \tilde{A} - i \tilde{B} \hat{p}_x & 0 \end{array}\right) \bm{\Phi}(x) \label{ap} \end{eqnarray}
where $\lambda_\pm$, $\tilde{A}$ and $\tilde{B}$ are real and independent of $x$. We assume that $E > \lambda_-$ and that $\epsilon$ is small such that we can treat $\bm{\Phi}$ perturbatively,
\begin{eqnarray*} \bm{\Phi}(x) \approx \left(\begin{array}{c}\phi_{0,-} (x) \\ \phi_{0,+} (x)\end{array}\right) + \epsilon \left(\begin{array}{c}\phi_{1,-} (x) \\ \phi_{1,+} (x)\end{array}\right) \end{eqnarray*}
with
\begin{eqnarray*} \phi_{0,-} (x) &=& \fexp{\frac{i}{\hbar} \sqrt{2m(E-\lambda_-)} x}\\ \phi_{0,+} (x) &=& 0 \end{eqnarray*}
Then it follows from Eq. (\ref{ap}) that the first-order correction satisfies
\begin{eqnarray*} \phi_{1,-} & = & 0\\ \phi_{1,+} & = & \left[E- \lambda_+ -\hat{p}^2_x/(2m)\right]^{-1} (\tilde{A} - i \tilde{B} \hat{p}_x)\; \phi_{0,-}\\ & = & \frac{\tilde{A} - i \tilde{B} \sqrt{2m(E-\lambda_-)}}{\lambda_- - \lambda_+}\; \phi_{0,-} \end{eqnarray*}
because $\hat{p}_x\,\phi_{0,-} = \sqrt{2m(E-\lambda_-)}\,\phi_{0,-}$. If we want to neglect $\phi_+ = 0 + \epsilon\,\phi_{1,+}$ we get the condition
\begin{eqnarray*} \epsilon^2 \frac{\fabsq{\tilde{A}} + \fabsq{\tilde{B}} 2m (E-\lambda_-)} {\fabsq{\lambda_- - \lambda_+}} \ll 1. \end{eqnarray*}
If $\lambda_\pm$, $\tilde{A}$ and $\tilde{B}$ depend on $x$, we may use the condition
\begin{eqnarray*} \mbox{max}_{x\in I} \frac{\fabsq{\epsilon \tilde{A}(x)} + \fabsq{\epsilon\tilde{B}(x)} 2m [E-\lambda_-(x)]} {\fabsq{\lambda_-(x) - \lambda_+(x)}} \ll 1 \end{eqnarray*}
where $I$ is chosen in such a way that the assumption $\phi_{0,+}(x)=0$ is approximately valid.
In Eq. (\ref{ap}), we have not included any diagonal elements in the coupling, compare with Eq. (\ref{q}). We neglect them in the condition (\ref{neglectQ}) but in principle it would be also possible to absorb them by defining effective adiabatic potentials $\tilde{\lambda}_{\pm} = \lambda_{\pm} + mB^2/2$. \end{appendix}
\end{document}
\begin{document}
\date{} \title{Central intersections of element centralisers}
\begin{center} \small \textit{FB Mathematik, TU Kaiserslautern, Postfach 3049}
\textit{67653 Kaiserslautern, Germany}
\text{E-mail: [email protected]} \end{center}
\paragraph{}
\textit{MSC:}
\textit{Primary: 20E34}
\textit{Secondary: 20D99}
\paragraph{}
\textit{Keywords:}
\textit{Finite groups, element centralisers, CA-groups, F-groups}
\normalsize \begin{abstract} In 1970 R. Schmidt gave a structural classification of CA-groups. In this paper we consider a condition on the intersection of element centralisers which turns out to be equivalent to the definition of a CA-group. We then weaken which centralisers we choose to intersect and structurally classify this new family of groups. Furthermore we apply a similar weakening to the class of F-groups introduced by It{\^o} in 1953 and classified by Rebmann in 1971.
\end{abstract}
\section{Introduction}
A finite group is called a CA-group if the centraliser of every non-central element is abelian. If $G$ is a CA-group and $x,y\in G\setminus Z(G)$, then $C_G(x)$ can never be properly contained in $C_G(y)$. Furthermore we shall see in Lemma~\ref{TIC=CA} that $G$ being a CA-group is equivalent to saying that for all non-central elements $x$ and $y$ in $G$ either $C_G(x)=C_G(y)$ or $C_G(x)\cap C_G(y)=Z(G)$. In addition to the class of CA-groups, It{\^o} \cite{ItoTypeI} introduced the notion of an F-group. This is a group in which every non-central element centraliser contains no other non-central element centraliser. That is, $G$ is an F-group if, for any $x\in G\setminus Z(G)$, $C_G(x)< C_G(y)$ implies $y\in Z(G)$. We shall also see in Lemma~\ref{CentralTIC=F} that $G$ being an F-group is equivalent to $Z(C_G(x))\cap Z(C_G(y))=Z(G)$ for all $x,y\in G\setminus Z(G)$ such that $C_G(x)\ne C_G(y)$.
The aim of this paper is to consider these intersection conditions for a specific subset of centralisers, in particular, the set of minimal centralisers in a group (those which do not properly contain any other element centraliser). Thus we define a group to be a ${\rm CA}_{min}$-group if for any two non-central elements $x$ and $y$ with minimal element centralisers in $G$ either $C_G(x)=C_G(y)$ or $C_G(x)\cap C_G(y)=Z(G)$. Similarly we call $G$ an ${\rm F}_{min}$-group if for any two non-central elements $x$ and $y$ with minimal element centralisers in $G$ either $C_G(x)=C_G(y)$ or $Z(C_G(x))\cap Z(C_G(y))=Z(G)$. Note that the analogous condition for CA-groups obtained by considering maximal centralisers was studied by Schmidt \cite{SchmidtCaGps} (although Schmidt defined such groups using subgroup centralisers).
More precisely, the aim of this paper is to prove the following structural classification of ${\rm CA}_{min}$-groups and ${\rm F}_{min}$-groups.
\begin{thm}\label{MainThm} $G$ is a ${\rm CA}_{min}$-group {\rm (respectively ${\rm F}_{min}$)} if and only if $G$ has one of the following forms: \begin{enumerate} \item $G/Z(G)$ is a Frobenius group with kernel $L/Z(G)$ and complement $K/Z(G)$ such that both $K$ and $L$ are abelian. \item $G/Z(G)$ is a Frobenius group with kernel $L/Z(G)$ and complement $K/Z(G)$ such that $K$ is abelian, $L$ is a ${\rm CA}_{min}$-group {\rm (respectively ${\rm F}_{min}$)}, $Z(G)=Z(L)$ and $L/Z(L)$ is a $p$-group. \item $G/Z(G)\cong {\rm Sym}(4)$ and if $V/Z(G)\cong V_4$, then $V$ is non-abelian. \item $G$ has an abelian normal subgroup of index $p$, $G$ is not abelian. \item $G\cong A\times P$, where $A$ is abelian and $P$ is a non-abelian $p$-group for some prime $p$; therefore $P$ is a ${\rm CA}_{min}$-group {\rm (respectively ${\rm F}_{min}$)}. \item $G/Z(G)\cong PGL_2(p^n)$ or $PSL_2(p^n)$ with $p^n > 3$. \end{enumerate} \end{thm}
Note the similarity to Rebmann's structural classification of F-groups \cite{FGroups}. In fact the groups of type (1), (3) and (4) are CA-groups \cite{SchmidtCaGps}. Moreover, for the families (2) and (5), replacing ${\rm CA}_{min}$-group by F-group yields the corresponding family of F-groups, while the non-solvable case (6) contains all the non-solvable cases of F-groups. In particular we obtain the following corollary, as given in Rebmann \cite{FGroups}.
\begin{cor}\label{NonSolFIsCA} Let $G$ be a non-solvable F-group. Then $G$ is a CA-group. \end{cor}
By using the above theorem it is easy to see that the class of CA-groups is strictly smaller than the class of ${\rm CA}_{min}$-groups, as $PSL_2(q)$ and $PGL_2(q)$ have a non-abelian centraliser. However, in Rebmann's paper no example of an F-group which is not a CA-group was provided. We finish this paper by providing a family of $p$-groups which are F-groups but not CA-groups.
\begin{prop}\label{FNotCA} Let $G$ be an extraspecial group of order $p^{2n+1}$ with $n>1$. Then $G$ is an F-group which is not a CA-group. \end{prop}
Note that this family will also be a family of ${\rm F}_{min}$-groups which are not ${\rm CA}_{min}$-groups. Finally, observe that if there exists a solvable ${\rm CA}_{min}$-group which is not a CA-group, then such a $p$-group exists for some prime $p$. However, running over the GAP libraries we have been unable to find a $p$-group which is a ${\rm CA}_{min}$-group and not a CA-group. Note that \cite{RockeAbCent} studied such groups; however, we were unable to use the results and methods of that paper to produce such an example.
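As an illustrative sanity check of Proposition~\ref{FNotCA} (it is not part of the proof), the claim can be verified computationally in a small case. The short script below is only a sketch for one instance: the extraspecial group of order $3^5$ and exponent $3$, realised, as a standard model used here purely for illustration, on the set $\mathbb{F}_p^2 \times \mathbb{F}_p^2 \times \mathbb{F}_p$ with the product $(a,b,c)(a',b',c') = (a+a', b+b', c+c'+a\cdot b')$. It confirms that every non-central element centraliser has order $p^4 = 81$ (so no such centraliser properly contains another, and the group is an F-group), while some of these centralisers are non-abelian (so the group is not a CA-group).
\begin{verbatim}
# Illustrative check for p = 3, n = 2: the exponent-p extraspecial group of
# order p^(2n+1), modelled as tuples (a, b, c) with a, b in (Z/p)^n, c in Z/p
# and product (a,b,c)(a',b',c') = (a+a', b+b', c+c'+a.b').
from itertools import product

p, n = 3, 2

def mul(x, y):
    a, b, c = x
    a2, b2, c2 = y
    dot = sum(ai * bi for ai, bi in zip(a, b2)) % p
    return (tuple((u + v) % p for u, v in zip(a, a2)),
            tuple((u + v) % p for u, v in zip(b, b2)),
            (c + c2 + dot) % p)

def commute(x, y):
    return mul(x, y) == mul(y, x)

vecs = list(product(range(p), repeat=n))
G = [(a, b, c) for a in vecs for b in vecs for c in range(p)]

centre = [x for x in G if all(commute(x, y) for y in G)]
cents = {frozenset(y for y in G if commute(x, y))
         for x in G if x not in centre}

print("order:", len(G), "centre:", len(centre),
      "centraliser orders:", {len(C) for C in cents})              # 243, 3, {81}
# F-group: no non-central centraliser properly contains another one.
print("F-group:", not any(C < D for C in cents for D in cents))    # True
# CA-group: every non-central centraliser is abelian.
print("CA-group:", all(commute(x, y)
                       for C in cents for x in C for y in C))      # False
\end{verbatim}
This of course only checks a single member of the family; the general statement is the content of Proposition~\ref{FNotCA}.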
\section{Preliminaries} \subsection{Conditions on element centraliser intersections}
Let $G$ be a non-abelian CA-group with $x$ and $y$ non-central elements in $G$. Consider $C_G(x)\cap C_G(y)$. If $z\in C_G(x)\cap C_G(y)$, then $\langle C_G(x),C_G(y)\rangle\leq C_G(z)$. As these centralisers are abelian either $C_G(x)= C_G(y)$ or $z\in Z(G)$. Moreover we shall show in the next lemma that this condition is equivalent to a CA-group.
\begin{lm}\label{TIC=CA} Let $G$ be a finite non-abelian group. Then $G$ is a CA-group if and only if for any pair of non-central elements $x$ and $y$ such that $C_G(x)\ne C_G(y)$ then $C_G(x)\cap C_G(y)=Z(G)$. \begin{proof} We commented above that any CA-group satisfies this condition. Hence it remains to show to converse. Let $z$ be a non-central element in $G$ and $x,y\in C_G(z)\setminus Z(G)$. Then $\langle z\rangle \leq C_G(y)\cap C_G(x)$ and so $C_G(x)\cap C_G(y)\ne Z(G)$. Therefore $C_G(x)= C_G(y)$, and so $x$ and $y$ commute. In particular, any two elements in $C_G(z)$ commute. Thus $C_G(z)$ is abelian and $G$ is a CA-group. \end{proof} \end{lm}
Furthermore, with this observation we have the following corollary. \begin{cor} Let $G$ be a CA-group such that $Z_2(G)>Z(G)$. Then $G$ is meta-abelian. \begin{proof} We want to show that $G'$ is abelian. However, by \cite[Theorem III.2.11]{Huppert}, $G'\leq C_G(Z_2(G))$. Thus it is enough to show $C_G(Z_2(G))$ is abelian. Let $x,y\in C_G(Z_2(G))\setminus Z(G)$. Then $Z(G)<Z_2(G)\leq C_G(x)\cap C_G(y)$. Thus $x$ and $y$ commute and so $C_G(Z_2(G))$ is abelian.
\end{proof} \end{cor}
We now make clear the definition of a minimal element centraliser for use in the definitions of ${\rm CA}_{min}$-groups and ${\rm F}_{min}$-groups.
\begin{df} An element centraliser $C_G(x)$ for $x$ a non-central element is called a minimal centraliser if $C_G(y)\leq C_G(x)$ implies $C_G(y)=C_G(x)$. \end{df}
Thus to relax the notion of a CA-group, we want to consider the intersection property for minimal element centralisers. Note that any non-abelian group must have at least two minimal non-central element centralisers. Otherwise, if $C=C_G(x)$ were the unique minimal centraliser in $G$, then for all $y\in G$ we would have $C\leq C_G(y)$, and therefore $x\in \cap_{y\in G} C_G(y)=Z(G)$, contradicting the fact that $x$ is non-central.
Note that we could also consider maximal centralisers ($C_G(x)$ is called a maximal centraliser if $C_G(x)<C_G(y)$ implies $y\in Z(G)$). In fact Schmidt considered the set of groups in which any two distinct maximal non-central element centralisers have intersection equal to the center of the group \cite{SchmidtCaGps}; these were referred to as $\mathfrak{D}$-groups. However, he only classified the soluble $\mathfrak{D}$-groups, although he did discuss the non-solvable case in depth too.
\begin{df} Let ${\rm CA}_{min}$ denote the set of finite groups $G$ such that $C_G(x)\cap C_G(y)=Z(G)$ for any two distinct minimal centralisers $C_G(x)$ and $C_G(y)$. \end{df}
By definition, a group is an F-group if and only if for any non-central element $x$ in $G$ we have $C_G(x)$ is both a maximal and minimal centraliser in $G$. Therefore, for an F-group, the definitions of $\mathfrak{D}$ and ${\rm CA}_{min}$ are equivalent. Furthermore, the following corollary follows from Lemma~\ref{TIC=CA}.
\begin{cor}\label{CA=D+F} Let $G$ be a finite group. Then $G$ is a CA-group if and only if $G$ is an F-group and a ${\rm CA}_{min}$-group. \end{cor}
As with the notion of CA-groups, we shall weaken the notion of an F-group. However, first we need an analogue of Lemma~\ref{TIC=CA}.
\begin{lm}\label{CentralTIC=F} Let $G$ be a finite non-abelian group. Then $G$ is an F-group if and only if for any pair of non-central elements $x$ and $y$ such that $C_G(x)\ne C_G(y)$ then $Z(C_G(x))\cap Z(C_G(y))=Z(G)$. \begin{proof}
Assume $G$ is an F-group and let $C_G(x)\ne C_G(y)$ for $x$ and $y$ non-central elements. If $z\in Z(C_G(x))\cap Z(C_G(y))$, then $\langle C_G(x),C_G(y)\rangle \leq C_G(z)$. Hence as in an F-group every centraliser is both maximal and minimal, it follows that $C_G(z)=G$. Or in other words $z\in Z(G)$. Therefore $Z(C_G(x))\cap Z(C_G(y))=Z(G)$.
For the converse direction assume that $C_G(x)<C_G(y)$ for both $x$ and $y$ non-central elements in $G$. Then $Z(C_G(y))\leq Z(C_G(x))$ and therefore $Z(C_G(y))=Z(G)$, which implies $y\in Z(G)$. \end{proof} \end{lm}
Thus we now make the following definition.
\begin{df} Let ${\rm F}_{min}$ denote the set of finite groups $G$ such that $Z(C_G(x))\cap Z(C_G(y))=Z(G)$ for any two distinct minimal centralisers $C_G(x)$ and $C_G(y)$. \end{df}
Therefore we have the following inclusions (Theorem~\ref{MainThm} and Proposition~\ref{FNotCA} show that these inclusions are strict):
\begin{center} \begin{tikzpicture} \node (1) {${\rm F}_{min}$-groups}; \node[below left of=1,node distance=10mm , rotate=45] (2) {$\subsetneq$}; \node[below right of=1,node distance=10mm , rotate=315] (3) {$\supsetneq$}; \node[below left of=1] (4) {${\rm CA}_{min}$-groups}; \node[below right of=1] (5) {F-groups}; \node[below left of=5] (6) {CA-groups}; \node[above right of=6,node distance=10mm , rotate=45] (7) {$\subsetneq$}; \node[above left of=6,node distance=10mm , rotate=315] (8) {$\supsetneq$};
\end{tikzpicture} \end{center}
We finally observe that the intersection of ${\rm CA}_{min}$-groups with F-groups equals the set of CA-groups.
\subsection{Exhibiting a partition}
In the works of Rebmann and Schmidt \cite{FGroups, SchmidtCaGps}, exhibiting an abelian normal partition of the central quotient $G/Z(G)$ provided a powerful tool for the structural classification of families of groups; in particular, it allowed them to apply the following classifications by Baer and Suzuki. Neither result appears as a single statement; the pieces are spread across several papers, so we combine them into one statement each.
\begin{thm}\cite{BaerPart1}\cite{BaerPart2}\label{BaerSolPart} Let $G$ be a solvable group with a normal non-trivial partition $\beta$, then $G$ is one of the following: \begin{enumerate} \item A component of $\beta$ is self normalising in $G$ and $G$ is a Frobenius group. \item $G\cong {\rm Sym}(4)$ and $\beta$ is the set of maximal cyclic subgroups of $G$.
\item $G$ has a nilpotent normal subgroup $N$ which lies in $\beta$ with $|G:N|=p$ and every element in $G\setminus N$ has order $p$. \item $G$ is a $p$-group, for $p$ a prime. \end{enumerate} \end{thm}
\begin{thm}\cite{SuzPart}\label{SuzNonSolPart} Let $G$ be a non-solvable group with a normal non-trivial partition $\beta$. Then $G\cong PGL_2(p^n)$ or $PSL_2(p^n)$ for $p$ prime and $p^n>3$, or $G\cong Sz(2^n)$ for $n\geq 3$, or a component of $\beta$ is self-normalising and $G$ is a Frobenius group. \end{thm}
We aim to show that for $G$ a ${\rm CA}_{min}$-group or an ${\rm F}_{min}$-group, as for F-groups, the central quotient $G/Z(G)$ exhibits a normal abelian partition. For the case of ${\rm CA}_{min}$-groups we require the following preliminary result.
\begin{lm}\label{CAminCenAb} Let $G$ be a ${\rm CA}_{min}$-group; then each minimal centraliser is abelian. \begin{proof} Let $C$ be a minimal centraliser in $G$ and $x\in C$; suppose $x\not\in Z(C)$. In particular $x$ is non-central, so there exists a minimal centraliser $D \leq C_G(x)$. It is clear that $Z(C_G(x))\leq Z(D)$, so $x\in Z(D)$ and hence $x\in C\cap D$. If $C\ne D$, then $C\cap D=Z(G)$, so $x\in Z(G)\leq Z(C)$, a contradiction. Thus $C=D$; but then $x\in Z(D)=Z(C)$, again a contradiction. Therefore every element of $C$ lies in $Z(C)$, that is, $C$ is abelian. \end{proof} \end{lm}
\begin{lm} Let $G$ be a ${\rm CA}_{min}$-group. Then \[ \beta = \{C/Z(G) \mid C \text{ a minimal centraliser in $G$}\} \] forms a non-trivial normal partition of $G/Z(G)$ consisting of abelian subgroups. \begin{proof} It is clear that the set $\beta$ is closed under conjugation and by Lemma~\ref{CAminCenAb} every subgroup in $\beta$ is abelian. Thus to show $\beta$ is a partition we need to show that every element in $G/Z(G)$ lies in a unique subgroup in $\beta$.
Take $C/Z(G)$ and $D/Z(G)$ distinct in $\beta$. Then $C/Z(G)\cap D/Z(G)=(C\cap D)/Z(G)=1$. Thus it is enough to show that any $xZ(G)$ lies in some $C/Z(G)$.
Consider $C_G(x)$ which contains some minimal centraliser $C$. Then as in Lemma~\ref{CAminCenAb}, $Z(C_G(x))\leq Z(C)$, hence $x\in C$. In particular $xZ(G)\in C/Z(G)$. \end{proof} \end{lm}
\begin{lm} Let $G$ be an ${\rm F}_{min}$-group. Then \[ \beta = \{Z(C)/Z(G) \mid C \text{ a minimal centraliser in $G$}\} \] forms a non-trivial normal partition of $G/Z(G)$ consisting of abelian subgroups. \begin{proof} It is clear that the set $\beta$ is closed under conjugation and every subgroup in $\beta$ is abelian. Thus to show $\beta$ is a partition we need to show that every element in $G/Z(G)$ lies in a unique subgroup in $\beta$.
Take $Z(C)/Z(G)$ and $Z(D)/Z(G)$ distinct in $\beta$. Then $Z(C)/Z(G)\cap Z(D)/Z(G)=(Z(C)\cap Z(D))/Z(G)=1$. Thus it is enough to show that any $xZ(G)$ lies in some $Z(C)/Z(G)$.
Consider $C_G(x)$ which contains some minimal centraliser $C$. Then as in Lemma~\ref{CAminCenAb}, $Z(C_G(x))\leq Z(C)$, hence $x\in Z(C)$. In particular $xZ(G)\in Z(C)/Z(G)$. \end{proof} \end{lm}
\section{Proof of main theorem}
We now aim to classify ${\rm CA}_{min}$-groups and ${\rm F}_{min}$-groups. In fact, the argument used for F-groups carries over with only minor changes when we replace F-group by ${\rm CA}_{min}$-group or ${\rm F}_{min}$-group.
\subsection{Classifying ${\rm CA}_{min}$-groups}
Due to the classification of partitions by Baer and Suzuki, first we shall consider the solvable ${\rm CA}_{min}$-groups.
\begin{thm}\label{SolCAminGp} Let $G$ be a solvable ${\rm CA}_{min}$ group. Then $G$ is one of the following: \begin{enumerate} \item $G/Z(G)$ is a Frobenius group with kernel $L/Z(G)$ and complement $K/Z(G)$ such that both $K$ and $L$ are abelian. \item $G/Z(G)$ is a Frobenius group with kernel $L/Z(G)$ and complement $K/Z(G)$ such that $K$ is abelian, $L$ is a ${\rm CA}_{min}$-group, $Z(L)=Z(G)$ and $L/Z(L)$ is a $p$-group. \item $G/Z(G)\cong {\rm Sym}(4)$ and if $V/Z(G)\cong V_4$, then $V$ is non-abelian. \item $G$ has an abelian normal subgroup of index $p$, $G$ is not abelian. \item $G\cong A\times P$, where $A$ is abelian and $P$ is a non-abelian $p$-group for some prime $p$; therefore $P$ is a ${\rm CA}_{min}$-group. \end{enumerate}
\begin{proof} As $G/Z(G)$ admits a non-trivial normal partition, we apply the classification of Baer (Theorem~\ref{BaerSolPart}) to determine $G/Z(G)$.
\underline{\bf Case (1)}\newline Let $L/Z(G)$ denote the Frobenius kernel of $G/Z(G)$. Furthermore, let $K/Z(G)$ denote an element in the partition of $G/Z(G)$ which is self-normalising. Then $K=C_G(x)$ for some minimal centraliser $C_G(x)$ in $G$. We want to show that $K/Z(G)$ is a Frobenius complement, thus we need that $K\cap K^g=Z(G)$ for all $g\in G\setminus K$.
As $K$ is a minimal centraliser in $G$, so is $K^g$, and therefore either $K\cap K^g=Z(G)$ or $K=K^g$. However, if $K^g=K$, then $gZ(G)\in N_{G/Z(G)}(K/Z(G))=K/Z(G)$, as $K/Z(G)$ was chosen to be self-normalising. Thus $K/Z(G)$ is a Frobenius complement in $G/Z(G)$. Furthermore, as $K$ is a minimal centraliser in $G$, it is abelian (Lemma~\ref{CAminCenAb}).
Let $x\in L\setminus Z(G)$. As $G/Z(G)$ is Frobenius with kernel $L/Z(G)$, \[ C_G(x)/Z(G)\leq C_{G/Z(G)}(xZ(G))\leq L/Z(G), \] so $C_G(x)\leq L$; or in other words $C_G(x)=C_L(x)$.
If $L$ has a unique minimal centraliser, then $L$ is abelian by Lemma~\ref{CAminCenAb} and thus $G$ is of type $(1)$. Hence assume $L$ has two distinct minimal centralisers $C_L(x)<L$ and $C_L(y)<L$ for $x,y\in L\setminus Z(G)$. Then $C_L(x)=C_G(x)$ and $C_L(y)=C_G(y)$. If $C_G(x)$ is not a minimal centraliser in $G$, then there exists $C_G(z)<C_G(x)$. As $C_G(z)<C_G(x)\leq L$, it follows that $z\in L\setminus Z(G)$. Thus $C_L(z)< C_L(x)$ and so $C_L(x)$ is not minimal. Therefore $C_G(x)$ and $C_G(y)$ are distinct minimal centralisers in $G$. As $G$ is a ${\rm CA}_{min}$-group, we have that $Z(L)\leq C_L(x)\cap C_L(y)=C_G(x)\cap C_G(y)=Z(G)$. However $Z(G)\leq L$ and therefore $Z(G)\leq Z(L)$, implying that $Z(L)=Z(G)$. Furthermore, we have shown that $L$ is a ${\rm CA}_{min}$-group. By Thompson's theorem \cite[Theorem V.8.7]{Huppert}, $L/Z(G)$ is nilpotent (as it is a Frobenius kernel). By \cite[Remark 2.4]{BaerPart1}, the only nilpotent groups with a non-trivial partition are $p$-groups for some prime $p$. This implies $G$ is of type $(2)$.
\underline{\bf Case (2)}\newline In this case $G/Z(G)\cong {\rm Sym}(4)$. Let $V\leq G$ be such that $V/Z(G)\cong V_4$, the Klein four-group. Suppose that $V$ is abelian; then for all $x\in V\setminus Z(G)$ we have $V\leq C_G(x)$. As $xZ(G)$ has order $2$, it follows that $C_{G/Z(G)}(xZ(G))\cong D_8$ or $V_4$ (when $xZ(G)$ is a double or a single transposition, respectively); we also observe that $V/Z(G)\leq C_G(x)/Z(G)\leq C_{G/Z(G)}(xZ(G))$.
If there exists an $x\in V\setminus Z(G)$ such that $C_G(x)/Z(G)\cong V_4$, then $C_G(x)=V$, which is abelian and thus a minimal centraliser. However, by Theorem~\ref{BaerSolPart} the partition of $G/Z(G)$ consists of the maximal cyclic subgroups of ${\rm Sym}(4)$, so it does not contain $V/Z(G)\cong V_4$, which yields a contradiction. Thus for all $x\in V\setminus Z(G)$ we have that $C_G(x)/Z(G)=C_{G/Z(G)}(xZ(G))\cong D_8$ and $xZ(G)$ lies in the centre of this $D_8$. In particular, it follows that $V/Z(G)$ must be the normal Klein four-group of ${\rm Sym}(4)$.
Inside $C_G(x)/Z(G)\cong D_8$ there exists a unique cyclic subgroup of order 4 and another copy of $V_4$ which is not normal in ${\rm Sym}(4)$. Let $N,M\leq C_G(x)$ be such that $N/Z(G)\cong C_4$ and $M/Z(G)$ equals the non-normal copy of $V_4$. By Theorem~\ref{BaerSolPart}, $N/Z(G)$ lies in the partition and therefore $N$ is a minimal centraliser in $G$. The subgroup $M$ contains $x$ and therefore $\langle x,Z(G)\rangle \leq Z(M)$. In particular, we see that $M/Z(M)$ must be cyclic and so $M$ is abelian. However $x\in N\cap M$ and so $M$ cannot be a minimal centraliser in $G$. Thus $C_G(y)>M$ for all $y\in M$. As $M/Z(G)$ lies in a unique maximal subgroup isomorphic to $D_8$, it follows that for each $y\in M\setminus Z(G)$, $C_G(y)/Z(G)$ equals the unique maximal subgroup containing $M/Z(G)$. Thus $M/Z(G)\leq Z(D_8)$, which is a contradiction.
\underline{\bf Case (3)}\newline In this case $G/Z(G)$ has a nilpotent normal subgroup $N/Z(G)$ of index $p$ which lies in $\beta$. Hence $N$ is a minimal centraliser in $G$ and therefore abelian, so $G$ has an abelian normal subgroup of index $p$; that is, $G$ is of type $(4)$.
\underline{\bf Case (4)}\newline In this case $G/Z(G)$ is a $p$-group and therefore $G$ is nilpotent. Therefore $G=A\times P$, where $A\leq Z(G)$ is abelian and $P$ is a Sylow $p$-subgroup, which is then a ${\rm CA}_{min}$-group. \end{proof}
\end{thm}
We next show that each case occurring in Theorem~\ref{SolCAminGp} yields a ${\rm CA}_{min}$-group.
\begin{prop}\label{ListAreCAmin} Any solvable group occurring in Theorem~\ref{SolCAminGp} is a ${\rm CA}_{min}$-group. \begin{proof} The solvable groups in Theorem~\ref{MainThm} are those of type $(1)-(5)$. Any group of type $(5)$ is easily seen to be a ${\rm CA}_{min}$-group. For the groups of type $(1),(3)$ and $(4)$, Schmidt \cite{SchmidtCaGps} has shown that they are CA-groups and hence are ${\rm CA}_{min}$-groups. Thus it only leaves those of type $(2)$. If $x\in L\setminus Z(G)$, then it was shown in the proof of Theorem~\ref{SolCAminGp} that $C_G(x)=C_L(x)$. If $x \in G\setminus L$, then as $G/Z(G)$ is a Frobenius group $xZ(G)$ lies in some conjugate of $K/Z(G)$ \cite[Page 496]{Huppert}. Thus assume $x\in K$. As $K$ is abelian, then $K/Z(G)\leq C_G(x)/Z(G)\leq C_{G/Z(G)}(xZ(G))\leq K/Z(G)$. That is $K=C_G(x)$. It now follows that any two distinct minimal non-central element centralisers have intersection equal to $Z(G)$. \end{proof} \end{prop}
Thus it only remains to study the non-solvable ${\rm CA}_{min}$-groups.
\begin{thm}\label{NonSolCAminGp} Let $G$ be a non-solvable group. Then $G$ is a ${\rm CA}_{min}$ group if and only if $G/Z(G)\cong PSL_2(p^n)$ or $PGL_2(p^n)$ with $p^n>3$. \begin{proof} As $G/Z(G)$ admits a non-trivial normal partition, we will use the classification of Suzuki (Theorem~\ref{SuzNonSolPart}) to determine $G/Z(G)$.
\underline{\bf Case (1)}\newline If $G/Z(G)$ is a Frobenius group, then as in the solvable case the kernel $L/Z(G)$ is nilpotent and the complement $K/Z(G)$ is abelian. However, this implies that $G/Z(G)$, and therefore $G$, is solvable, contradicting our assumption.
\underline{\bf Case (2)}\newline If $G/Z(G)$ is isomorphic to $Sz(2^n)$, then Schmidt [8] showed that the Sylow $2$-subgroups of $Sz(2^n)$ are contained in components of any non-trivial partition. However, $Sz(2^n)$ has non-abelian Sylow $2$-subgroups, whereas the components of the partition associated with a ${\rm CA}_{min}$-group are abelian; therefore $G$ cannot be a ${\rm CA}_{min}$-group.
\underline{\bf Case (3)}\newline Assume $G/Z(G)\cong PSL_2(p^n)$ or $PGL_2(p^n)$ with $p^n>3$. In this case we want to show that any group arising in this way is a ${\rm CA}_{min}$-group.
It is well known that every element centraliser in $PGL_2(q)$ and $PSL_2(q)$ takes one of the following forms: \begin{enumerate} \item A cyclic group of order $q$, $q-1$ or $q+1$. \item A dihedral group of order $2(q+1)$ or $2(q-1)$. \end{enumerate}
Let $x\in G\setminus Z(G)$ such that $C_G(x)$ is a minimal centraliser and set $C$ to be the subgroup of $G$ such that $C_{G/Z(G)}(xZ(G))=C/Z(G)$. If $C/Z(G)$ is cyclic, then $C$ is abelian and thus $C_G(x)=C$ . If $C/Z(G)$ is dihedral, then as $xZ(G)\in Z(C/Z(G))$, it follows that $xZ(G)\in C'/Z(G)$ for $C'/Z(G)$ the cyclic subgroup of index $2$ in $C/Z(G)$. Hence $C'\leq C_G(x)\leq C$. If $C_G(x)=C$, then there exists a $y\in G$ such that $C_{G/Z(G)}(yZ)=C'/Z(G)$ and so $C_G(y)<C_G(x)$ contradicting minimality. Thus every minimal centraliser in $G$ is abelian and its quotient is a centraliser in $G/Z(G)$.
Let $x,y\in G\setminus Z(G)$ such that $C_G(x)$ and $C_G(y)$ are distinct minimal centralisers in $G$. Then $C_G(x)/Z(G)$ and $C_G(y)/Z(G)$ are centralisers in $G/Z(G)$. It is enough to show that $(C_G(x)/Z(G))\cap (C_G(y)/Z(G))$ is trivial in $G/Z(G)$. Let $kZ(G)\in (C_G(x)/Z(G)\cap C_G(y)/Z(G))$. Then $C_G(x)/Z(G)$ and $C_G(y)/Z(G)$ are distinct abelian centralisers in $G/Z(G)$ which are subgroups of $C_{G/Z(G)}(kZ)$. However, no centraliser in $PSL_2(q)$ or $PGL_2(q)$ contains two distinct abelian centralisers in $PSL_2(q)$ or $PGL_2(q)$ respectively. Therefore $kZ=Z$ and hence the intersection is trivial. In particular, we have shown that $C_G(x)\cap C_G(y)=Z(G)$ and $G$ is a ${\rm CA}_{min}$-group. \end{proof} \end{thm}
\begin{cor*}[Corollary~\ref{NonSolFIsCA}] Any non-solvable F-group is a CA-group. \begin{proof} By Rebmann, any non-solvable F-group must satisfy $G/Z(G)\cong PSL_2(q)$ or $PGL_2(q)$, with some extra condition on $G'$. However, by the previous result, any group such that $G/Z(G)$ has this structure is a ${\rm CA}_{min}$-group. Hence $G$ is both an F-group and a ${\rm CA}_{min}$-group, and it follows from Corollary~\ref{CA=D+F} that it is a CA-group. \end{proof} \end{cor*}
\subsection{Classifying ${\rm F}_{min}$-groups}
We now repeat an argument similar to that used for ${\rm CA}_{min}$-groups. Most details are omitted; however, we include cases (1) and (2) of the solvable case to highlight that the partition now consists of quotients of centres of centralisers.
\begin{thm} Let $G$ be a solvable ${\rm F}_{min}$-group. Then $G$ is one of the following: \begin{enumerate} \item $G/Z(G)$ is a Frobenius group with kernel $L/Z(G)$ and complement $K/Z(G)$ such that both $K$ and $L$ are abelian. \item $G/Z(G)$ is a Frobenius group with kernel $L/Z(G)$ and complement $K/Z(G)$ such that $K$ is abelian, $L$ is an ${\rm F}_{min}$-group, $Z(L)=Z(G)$ and $L/Z(L)$ is a $p$-group. \item $G/Z(G)\cong {\rm Sym}(4)$ and if $V/Z(G)\cong V_4$, then $V$ is non-abelian. \item $G$ has an abelian normal subgroup of index $p$, $G$ is not abelian. \item $G\cong A\times P$, where $A$ is abelian and $P$ is a non-abelian $p$-group for some prime $p$; therefore $P$ is an ${\rm F}_{min}$-group. \end{enumerate}
\begin{proof} As $G/Z(G)$ admits a non-trivial normal partition, we apply the classification of Baer (Theorem~\ref{BaerSolPart}) to determine $G/Z(G)$ and then $G$. Note that cases (3) and (4) use the exact same argument as in Theorem~\ref{SolCAminGp} and so we shall not repeat them.
\underline{\bf Case (1)}\newline Let $L/Z(G)$ denote the Frobenius kernel of $G/Z(G)$ and $K/Z(G)$ an element in the partition of $G/Z(G)$ which is self-normalising. Then $K=Z(C_G(x))$ for some minimal centraliser $C_G(x)$ in $G$. Using the same argument as in Theorem~\ref{SolCAminGp} shows that $K/Z(G)$ is a Frobenius complement and abelian. Furthermore, recall that for $x\in L\setminus Z(G)$ we have $C_G(x)=C_L(x)$, and that if $C_L(x)$ is a minimal centraliser in $L$ then $C_G(x)$ is a minimal centraliser in $G$.
If $L$ has a unique minimal centraliser, then $L$ is abelian by Lemma~\ref{CAminCenAb} and thus $G$ is of type $(1)$. Hence assume $L$ has two distinct minimal centralisers $C_L(x)<L$ and $C_L(y)<L$ for $x,y\in L\setminus Z(G)$. Then $C_L(x)=C_G(x)$ and $C_L(y)=C_G(y)$ are distinct minimal centralisers in $G$. Moreover $Z(L)\leq Z(C_L(x))\cap Z(C_L(y))=Z(G)$ and so $Z(L)=Z(G)$. In particular, we have shown that $L$ is an ${\rm F}_{min}$-group. Repeating the argument in Theorem~\ref{SolCAminGp} also implies $G$ is of type $(2)$.
\underline{\bf Case (2)}\newline In this case $G/Z(G)\cong {\rm Sym}(4)$. Let $V\leq G$ be such that $V/Z(G)\cong V_4$. Suppose that $V$ is abelian; then for all $x\in V\setminus Z(G)$ we have $V\leq C_G(x)$. As $xZ(G)$ has order $2$, it follows that $C_{G/Z(G)}(xZ(G))\cong D_8$ or $V_4$ (when $xZ(G)$ is a double or a single transposition, respectively); we also observe that $V/Z(G)\leq C_G(x)/Z(G)\leq C_{G/Z(G)}(xZ(G))$.
If there exists an $x\in V\setminus Z(G)$ such that $C_G(x)/Z(G)\cong V_4$, then $C_G(x)=V$, which is abelian and thus a minimal centraliser. However, by Theorem~\ref{BaerSolPart} the partition of $G/Z(G)$ consists of the maximal cyclic subgroups of ${\rm Sym}(4)$, so it does not contain $V/Z(G)\cong V_4$, which yields a contradiction. Thus for all $x\in V\setminus Z(G)$ we have that $C_G(x)/Z(G)=C_{G/Z(G)}(xZ(G))\cong D_8$ and $xZ(G)$ lies in the centre of this $D_8$. In particular, it follows that $V/Z(G)$ must be the normal Klein four-group of ${\rm Sym}(4)$.
Inside $C_G(x)/Z(G)\cong D_8$ there exists a unique cyclic subgroup of order 4 and another copy of $V_4$ which is not normal in ${\rm Sym}(4)$. Let $N,M\leq C_G(x)$ be such that $N/Z(G)\cong C_4$ and $M/Z(G)$ equals the non-normal copy of $V_4$. By Theorem~\ref{BaerSolPart}, $N/Z(G)$ lies in the partition and therefore $N$ is the centre of a minimal centraliser in $G$. The subgroup $M$ contains $x$ and therefore $\langle x,Z(G)\rangle \leq Z(M)$. In particular, we see that $M/Z(M)$ must be cyclic and so $M$ is abelian. However $x\in Z(N)\cap Z(M)$ and so $M$ cannot be a minimal centraliser in $G$. Thus $C_G(y)>M$ for all $y\in M$. As $M/Z(G)$ lies in a unique maximal subgroup isomorphic to $D_8$, it follows that for each $y\in M\setminus Z(G)$, $C_G(y)/Z(G)$ equals the unique maximal subgroup containing $M/Z(G)$. Thus $M/Z(G)\leq Z(D_8)$, which is a contradiction. \end{proof}
\end{thm}
Using the same arguments as in Proposition~\ref{ListAreCAmin} gives the analogous result for ${\rm F}_{min}$-groups.
\begin{prop} Any solvable group occurring in Theorem~\ref{MainThm} is an ${\rm F}_{min}$-group. \end{prop}
We are therefore left to study the non-solvable ${\rm F}_{min}$-groups.
\begin{thm} Let $G$ be a non-solvable group. Then $G$ is an ${\rm F}_{min}$-group if and only if $G/Z(G)\cong PSL_2(p^n)$ or $PGL_2(p^n)$ with $p^n>3$. \begin{proof} As $G/Z(G)$ admits a non-trivial normal partition, we will use the classification of Suzuki (Theorem~\ref{SuzNonSolPart}) to determine $G/Z(G)$. Note that Cases (1) and (2) use exactly the same argument as in Theorem~\ref{NonSolCAminGp} and so shall not be repeated.
\underline{\bf Case (3)}\newline Assume $G/Z(G)\cong PSL_2(p^n)$ or $PGL_2(p^n)$ with $p^n>3$. We saw that any such group is a ${\rm CA}_{min}$-group, which implies it is an ${\rm F}_{min}$-group. \end{proof} \end{thm}
Moreover, we now obtain the analogous corollary of Rebmann for non-solvable ${\rm F}_{min}$-groups.
\begin{cor} Any non-solvable ${\rm F}_{min}$-group is a ${\rm CA}_{min}$-group. \end{cor}
Finally, by combining the theorems in this section we obtain Theorem~\ref{MainThm}.
\section{A family of F-groups which are not CA-groups} As we saw in the introduction, given the classification of ${\rm CA}_{min}$-groups, it is easy to see that there is a non-solvable ${\rm CA}_{min}$-group which is not a CA-group. However, as noted by Rebmann, any non-solvable F-group is also a CA-group. Thus we need to consider Rebmann's classification of the solvable F-groups. In particular, using \cite[Corollary 5.1]{FGroups}, if $G$ is an F-group that is not a CA-group, then $G$ must take one of the following two forms:
\begin{enumerate} \item $G\cong A\times P$ where $A$ is abelian and $P$ is a non-abelian $p$-group which is also an F-group. \item $G/Z(G)$ is a Frobenius group with kernel $L/Z(G)$ and complement $K/Z(G)$, $K$ is abelian, $Z(L)=Z(G)$, $L$ is an F-group and $L/Z(L)$ is a $p$-group. \end{enumerate}
Note that if we have an F-group of the second type which is not a CA-group, then the subgroup $L$ cannot be a CA-group. In particular, if there exists an F-group which is not a CA-group, then there exists a $p$-group with this property. Thus, to find an F-group which is not a CA-group, the first place to look is the class of $p$-groups. In particular, we shall consider the class of extraspecial groups.
First we state the following lemma which will be of use to us. \begin{lm} Let $G$ be a finite group in which the derived subgroup has order $p$ for some prime $p$. Then $G$ is an F-group. \begin{proof}
This result follows from the observation that the conjugacy class $g^G$ is contained in the coset $gG'$. Therefore $|g^G|$ equals $p$ or $1$. Hence $|C_G(g)|=\frac{|G|}{p}$ or $|G|$ and so every non-central element centraliser is both maximal and minimal. In particular $G$ is an F-group. \end{proof} \end{lm}
Let $G$ be an extraspecial group; such groups are usually denoted $p^{1+2n}_{\pm}$ for some positive integer $n$. Then $G'=\Phi(G)=Z(G)$ has order $p$. By the previous lemma, $G$ is an F-group.
Assume $n>1$; otherwise $G$ has order $p^3$ and it is easy to see that such groups are CA-groups. Then $G$ is isomorphic to the central product of $H$ and $P$, where $H$ is an extraspecial group of order $p^{1+2(n-1)}$ and $P$ is extraspecial of order $p^3$. Take $x\in H\setminus Z(H)$. Then $x\not\in Z(G)$; however, $P\leq C_G(x)$, which is therefore non-abelian. Thus $G$ is not a CA-group.
\begin{prop*}[Proposition~\ref{FNotCA}] Let $G$ be an extraspecial group of order $p^{2n+1}$ with $n>1$. Then $G$ is an F-group which is not a CA-group. \end{prop*}
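To illustrate Proposition~\ref{FNotCA} concretely, the following brute-force computation (a sketch in Python; the presentation of the extraspecial group of order $p^5$ and exponent $p$ used below is an assumption of the sketch, not taken from the argument above) verifies for $p=3$ that every non-central element centraliser has index $p$, so the group is an F-group, while some such centraliser is non-abelian, so the group is not a CA-group.
\begin{verbatim}
from itertools import product

# Assumed presentation of the extraspecial group of order p^5 and exponent p:
# elements (a,b,c,d,z) over Z_p with
# (a1,b1,c1,d1,z1)*(a2,b2,c2,d2,z2)
#   = (a1+a2, b1+b2, c1+c2, d1+d2, z1+z2+a1*b2+c1*d2) mod p.
p = 3

def mul(g, h):
    a1, b1, c1, d1, z1 = g
    a2, b2, c2, d2, z2 = h
    return ((a1 + a2) % p, (b1 + b2) % p, (c1 + c2) % p,
            (d1 + d2) % p, (z1 + z2 + a1 * b2 + c1 * d2) % p)

G = list(product(range(p), repeat=5))
centre = [g for g in G if all(mul(g, h) == mul(h, g) for h in G)]
assert len(centre) == p          # Z(G) = {(0,0,0,0,z)}, of order p

def centraliser(x):
    return [g for g in G if mul(g, x) == mul(x, g)]

def is_abelian(S):
    return all(mul(g, h) == mul(h, g) for g in S for h in S)

some_nonabelian = False
for x in G:
    if x in centre:
        continue
    C = centraliser(x)
    # every non-central element centraliser has index p, so G is an F-group
    assert len(C) == len(G) // p
    if not some_nonabelian and not is_abelian(C):
        some_nonabelian = True

# a non-abelian element centraliser shows that G is not a CA-group
print("non-abelian centraliser found:", some_nonabelian)
\end{verbatim}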
Note that not all F-groups which are not CA-groups occur from extraspecial groups. In particular, using GAP we can find 5 groups of order 64 which are F-groups but not CA-groups.
\section*{Acknowledgments} The author gratefully acknowledges financial support by the ERC Advanced Grant $291512$. In addition the author would like to thank Benjamin Sambale for reading and discussing a preliminary version of this paper.
\end{document}
\begin{document}
\title{Generalized uncertainty relations and entanglement dynamics in quantum Brownian motion models} \author{C. Anastopoulos, S. Kechribaris, and D. Mylonas\\ Department of Physics, University of Patras, 26500 Patras, Greece} \maketitle
\begin{abstract} We study entanglement dynamics in quantum Brownian motion (QBM) models. Our main tool is the Wigner function propagator. Time evolution in the Wigner picture is physically intuitive and it leads to a simple derivation of a master equation for any number of system harmonic oscillators and spectral density of the environment. It also provides generalized uncertainty relations, valid for any initial state, that allow a characterization of the environment in terms of the modifications it causes to the system's dynamics. In particular, the uncertainty relations are very informative about the entanglement dynamics of Gaussian states, and to a lesser extent for other families of states. For concreteness, we apply these techniques to a bipartite QBM model, describing the processes of entanglement creation, disentanglement, and decoherence at all temperatures and time scales.
\end{abstract}
\section{Introduction}
The study of quantum entanglement is of both practical and theoretical significance: entanglement is viewed as a physical resource for quantum-information processing and it constitutes a major issue in the foundations of quantum theory. The quantification of entanglement is difficult in multipartite systems (see, for example, Refs. \cite{PeresBook,KarolBook,AlickiBook, 4Hor}); however, there are useful separability criteria and entanglement measures for bipartite states, pure and mixed \cite{Peres, Horod, Simon, Duan, Barnum, HHH, AlickiHorod, GMVT03}.
Realistic quantum systems, including multipartite ones, cannot avoid interactions with their environments, which can degrade their quantum coherence and entanglement. Thus quantum decoherence and disentanglement are obstacles to quantum-information processing \cite{RajRendell,Diosi,Dodd,DoddHal}. On the other hand, some environments act as intermediates that generate entanglement in multipartite systems, even if the components do not interact directly \cite{Braun, BFP, OK}. The theoretical study of entanglement dynamics in open quantum systems has uncovered important physical effects, such as the sudden death of entanglement \cite{YE, suddeath}, entanglement revival after sudden death \cite{FicTan06}, the significance of non-Markovian effects \cite{ASH, nonmark, CYH}, the possibility of a rich phase structure for the asymptotic behavior of entanglement \cite{PR1, PR2}, and intricacies in the evolution of entanglement in multipartite systems \cite{Li10}.
Here, we study entanglement and decoherence in quantum Brownian motion (QBM) models \cite{HPZ, QBM, QBM2}, focusing on their description in terms of generalized uncertainty relations. Our main tool in this study is the Wigner function propagator. QBM models are defined by a quadratic total Hamiltonian, and they are characterized by a Gaussian propagator. This propagator is solely determined by two matrices: one corresponding to the classical dissipative equations of motion and one containing the effect of environment-induced diffusion. In Sec. II we provide explicit formulas for their determination.
The simplicity of time evolution in the Wigner picture leads to a concise derivation of an exact master equation for general QBM models, with any number of system oscillators and spectral density. Moreover, time evolution in the Wigner picture allows for a derivation of generalized uncertainty relations, valid for {\em any} initial state, that incorporate the influence of the environment upon the system. These uncertainty relations generalize the ones of Ref. \cite{AnHa} to QBM models with an arbitrary number of system oscillators---see also Refs. \cite{HZ, CYH}. Their most important feature is that the lower bound is independent of the initial state, and for this reason, they allow for general statements about the process of decoherence and thermalization.
The uncertainty relations are also related to
separability criteria for bipartite systems \cite{Simon, GMVT03}. Hence, they provide an important tool for the study of entanglement dynamics.
For Gaussian states, in particular, the uncertainty relations, derived here, provide a general characterization of processes such as entanglement creation and disentanglement without the need to specify detailed properties of the initial state. However, uncertainty relations do not suffice to distinguish all entangled {\em non-Gaussian} states. For such states, the description of entanglement dynamics from the uncertainty relations is rather partial, but still leads to nontrivial results.
The uncertainty relations derived in this article apply to
any open quantum system characterized by Gaussian propagation, and they are expressed solely in terms of the coefficients of the Wigner function propagator. They can be used for the study of entanglement dynamics, not only in bipartite but also in multipartite systems.
To demonstrate their usefulness, we apply them to a concrete bipartite QBM model system that has been studied by Paz and Roncaglia \cite{PR1, PR2}. In this model, there exist two coupled subalgebras of observables, only one of which couples directly to the environment. For a special case of the system parameters, considered in Ref. \cite{PR1}, one of the subalgebras is completely decoupled, and thus there exists a decoherence-free subspace for the system. Here we focus on the generic case, also explored in Ref. \cite{PR2}.
We find that in the high-temperature regime, decoherence and disentanglement are generic and the uncertainty relations allow for an identification of the characteristic timescales, which in some cases may be of very different orders of magnitude. At low temperature, entanglement creation often occurs and we demonstrate that it is accompanied by ``entanglement oscillations'', that is, a sequence of entanglement sudden deaths and revivals at early times. In this regime, there is no decoherence, and disentanglement arises because of relaxation. At a time scale of the order of the relaxation time, the system tends to a unique asymptotic state, which coincides with a thermal state in the weak-coupling limit. The generalized uncertainty relations allow for the determination of upper limits to the disentanglement time with respect to all Gaussian initial states.
The structure of the article is the following. In Sec. II we construct the Wigner function propagator for the most general QBM model and we provide explicit formulas for the propagator's coefficients. The master equation is then simply obtained from the propagator. In Sec. III we construct the generalized uncertainty relations valid for all QBM models, we show that they can be used for the study of multipartite entanglement, and we then consider their special case in the model of Refs. \cite{PR1, PR2}. In Sec. IV we employ the uncertainty relations for the study of decoherence, disentanglement, and entanglement creation in different regimes and time scales of this model.
\section{Quantum Brownian motion models for multipartite systems} In this section, we consider the most general setup for quantum Brownian motion, namely, a system of $N$ harmonic oscillators of masses $M_r$ and frequencies $\Omega_r$ interacting with a heat bath. The heat bath is modeled by a set of harmonic oscillators of masses $m_i$ and frequencies $\omega_i$, initially at a thermal state of temperature $T$. The Hamiltonian of the total system is a sum of three terms $\hat{H} = \hat{H}_{sys} + \hat{H}_{env} + \hat{H}_{int}$, where \begin{eqnarray} \hat{H}_{sys} &=& \sum_r\left( \frac{1}{2M_r} \hat{P}_r^2 + \frac{M_r \Omega^2_r}{2} \hat{X}_r^2\right) \label{ho}\\ \hat{H}_{env} &=& \sum_i (\frac{1}{2m_i} \hat{p}_i^2 + \frac{m_i \omega_i^2}{2} \hat{q}_i^2)\\ \hat{H}_{int} &=& \sum_i \sum_r c_{ir} \hat{X}_r \hat{q}_i, \label{hint} \end{eqnarray} where $\hat{X}_r$ and $\hat{P}_r$ are the position and momentum operators for the system oscillators and $\hat{q}_i$ and $\hat{p}_i$ are the position and momentum operators for the environment oscillators. The interaction Hamiltonian Eq. (\ref{hint}) involves a different coupling $c_{ir}$ of each system oscillator to the bath. Thus it can also be used to describe systems different from the classic setup of Brownian motion, for example, particle detectors at different locations interacting with a quantum field \cite{LinHu}.
For an initial state that is factorized in system and environment degrees of freedom the evolution of the reduced density matrix for the system variables is {\em autonomous}, and it can be expressed in terms of a master equation. For the issues we explore in this article, in particular entanglement dynamics, the determination of the propagator of the reduced density matrix is more important than the construction of the master equation, because it allows us to follow the time evolution of the relevant observables. The construction of the propagator is simpler in the Wigner picture.
Instead of the density operator, we work with the Wigner function, defined by \begin{eqnarray} W({\bf X},{\bf P}) = \frac{1}{(2 \pi)^N} \int d^N \zeta e^{-i {\bf P} \cdot {\bf \zeta}} \hat{\rho}({\bf X} + \frac{1}{2}{\bf \zeta}, {\bf X}- \frac{1}{2}{\bf \zeta}). \end{eqnarray} Its inverse is \begin{eqnarray} \hat{\rho}({\bf X},{\bf X'}) = \int d^NP \; e^{i{\bf P} \cdot ({\bf X} - {\bf X'})} \; W(\frac{1}{2}({\bf X} + {\bf X'}), {\bf P}). \end{eqnarray}
For a factorized initial state, time evolution in QBM models is encoded in the density matrix propagator $J({\bf X}_f, {\bf Y}_f, t| {\bf X}_0, {\bf Y}_0,0)$, defined by \begin{eqnarray}
\hat{\rho}_t({\bf X}_f, {\bf Y}_f) = \int d^NX_0 d^NY_0 \; \; J({\bf X}_f, {\bf Y}_f, t| {\bf X}_0, {\bf Y}_0, 0) \hat{\rho}_0({\bf X}_0, {\bf Y}_0). \label{jprop} \end{eqnarray} The Wigner function propagator is defined as \begin{eqnarray}
K({\bf X}_f, {\bf P}_f, t| {\bf X}_0, {\bf P}_0, 0) = \int \frac{d^N \zeta_f d^N\zeta_0}{(2\pi)^N} e^{i {\bf P}_{0}\cdot{\bf \zeta}_0 - i {\bf P}_{f} \cdot {\bf \zeta}_f} \; J({\bf X}_f + \frac{{\bf \zeta}_f}{2}, {\bf X}_f - \frac{{\bf \zeta}_f}{2}, t| {\bf X}_0 + \frac{{\bf \zeta}_0}{2}, {\bf X}_0 - \frac{ {\bf \zeta}_0}{2},0). \label{wfprop} \end{eqnarray}
Denoting the phase-space coordinates by the vector \begin{eqnarray} \xi_a = (X_1, P_1, X_2, P_2, \ldots, X_N, P_N), \hspace{1cm} a = 1, 2, \ldots, 2N, \label{xidef}
\end{eqnarray}
we write the Wigner function propagator compactly as $K_t(\xi_f, \xi_0)$ and express Eq. (\ref{jprop}) as \begin{eqnarray} W_t(\xi) = \int \frac{d^{2N} \xi_0}{(2 \pi)^N} K_t(\xi_f, \xi_0) W_0(\xi_0), \label{wt} \end{eqnarray} where $W_t$ and $W_0$ are the Wigner functions at times $t$ and $0$, respectively.
In QBM models the Wigner function propagator is Gaussian. This follows from the fact that the total Hamiltonian for the system is quadratic and the initial state for the bath is Gaussian. The most general form of a Gaussian Wigner function propagator is \begin{eqnarray} K_t(\xi_f, \xi_0) = \frac{\sqrt{\det S^{-1}(t)}}{\pi^N} \exp \left[ - \frac{1}{2} [\xi_f^a - \xi_{cl}^a(t)] S^{-1}_{ab}(t) [\xi_f^b - \xi_{cl}^b(t)] \right], \label{gauss} \end{eqnarray} where $S^{-1}_{ab}(t)$ is a positive real-valued matrix, and $\xi_{cl}(t)$ is the solution of the corresponding classical equations of motion (including dissipation) with initial condition $\xi = \xi_0$ at $t = t_0$. The equations of motion are linear, so $\xi_{cl}(t)$ is of the form \begin{eqnarray} \xi^a_{cl}(t) = R^{a}_b(t) \xi_0^b, \label{ceq} \end{eqnarray} in terms of a matrix $R^a_b(t)$.
Equation (\ref{gauss}) holds if there are no ``decoherence-free'' subalgebras, that is, if there is no subalgebra of the canonical variables that remains decoupled from the environment. Observables in such a decoupled subalgebra evolve with a delta-function propagator, rather than with a Gaussian one. However, this case corresponds to a set of measure zero in the space of parameters, and it can be obtained as a weak limit of the generic expression, Eq. (\ref{gauss}).
In order to specify the Wigner function propagator, we must construct the matrix-valued functions $R(t)$ and $S(t)$. To this end, we consider the two-point correlation matrix $V$ of a quantum state $\hat{\rho}$, defined by \begin{eqnarray} V_{ab} = \frac{1}{2} Tr\left[\hat{\rho} (\hat{\xi}_a \hat{\xi}_b + \hat{\xi}_b \hat{\xi}_a) \right] - Tr (\hat{\rho} \hat{\xi}_a) Tr(\hat{\rho} \hat{\xi}_b). \label{Vab} \end{eqnarray}
Gaussian propagation decouples the evolution of two-point correlations from any higher-order correlations. From Eqs. (\ref{wt}) and (\ref{gauss}), we find the two-point correlation matrix, Eq. (\ref{Vab}), $V_t$ at time $t$, \begin{eqnarray} V_t = R(t)V_0 R^T(t) + S(t), \label{Vt} \end{eqnarray} where $V_0$ is the correlation matrix of the initial state.
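Equation (\ref{Vt}) is the basic computational identity used in what follows: once $R(t)$ and $S(t)$ are known, correlation matrices of Gaussian states evolve by a single matrix operation. A minimal sketch (in Python, with placeholder matrices that are not model values) reads:
\begin{verbatim}
import numpy as np

def propagate_covariance(V0, R, S):
    """Eq. (Vt): V_t = R V_0 R^T + S."""
    return R @ V0 @ R.T + S

V0 = 0.5 * np.eye(2)                      # vacuum-like initial correlations (N = 1)
R = np.array([[0.9, 0.1], [-0.1, 0.9]])   # placeholder classical propagator
S = 0.05 * np.eye(2)                      # placeholder diffusion matrix
print(propagate_covariance(V0, R, S))
\end{verbatim}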
The first term in the right-hand side of Eq. (\ref{Vt}) corresponds to the evolution of the initial phase-space correlations according to the {\em classical} equations of motion. The second term incorporates the effect of environment-induced fluctuations and it does not
depend on the initial state. Hence, the matrix $S$ can be explicitly constructed, by identifying the part of the correlation matrix that does not depend on the initial state.
To this end, we proceed as follows. From the Heisenberg-picture evolution of the bath oscillators, we obtain the equations \begin{eqnarray} \ddot{\hat{q}}_i (t) + \omega_i^2 \hat{q}_i (t) = \sum_r \frac{c_{ir}}{m_i} \hat{X}_r(t), \label{qeq} \end{eqnarray} with solution \begin{eqnarray} \hat{q}_i(t) = \hat{q}_i^0(t) + \sum_r\frac{c_{ir}}{m_i \omega_i} \int_0^t ds \sin\left(\omega_i(t-s)\right) \hat{X}_r(s), \label{q1} \end{eqnarray} where \begin{eqnarray} \hat{q}_i^0(t) = \hat{q}_i \cos \left(\omega_i t\right) + \frac{\hat{p}_i}{m_i \omega_i} \sin\left(\omega_it\right). \end{eqnarray} For the system variables, we obtain \begin{eqnarray} \ddot{\hat{X}}_r (t) + \Omega_r^2 \hat{X}_r (t) + \frac{2}{M_r} \sum_{r'} \int_0^t ds \gamma_{rr'}(t-s) \hat{X}_{r'}(s) = \sum_i \frac{c_{ir}}{M_r} \hat{q}^0_i(t) \label{Xeq}, \end{eqnarray} where \begin{eqnarray} \gamma_{rr'} (s) = - \sum_i \frac{c_{ir} c_{ir'}}{2 m_i \omega_i^2} \sin\left(\omega_i s\right) \end{eqnarray} is the dissipation kernel. In general, the matrix $\gamma_{rr'}$ is symmetric and has $\frac{1}{2}N(N+1)$ independent terms, each defining a different relaxation time-scale for the system. However, symmetries of the couplings $c_{ir}$ may reduce the number of independent components of the dissipation kernel.
The solution of Eq. (\ref{Xeq}) is \begin{eqnarray} \hat{X}_r(t) = \sum_{r'} (\dot{v}_{rr'}(t) \hat{X}_{r'} + \frac{1}{M_{r'}}v_{rr'}(t) \hat{P}_{r'}) +\sum_{r'} \frac{1}{M_{r'}} \int_0^t ds v_{r r'}(t-s) \sum_i c_{ir'} \hat{q}_i^0(s), \label{Xsol} \end{eqnarray} where $v_{rr'}(t)$ is the solution of the homogeneous part of Eq. (\ref{Xeq}), with initial conditions $v_{rr'}(0) = 0$ and $\dot{v}_{rr'}(0) = \delta_{rr'}$. It can be expressed as an inverse Laplace transform \begin{eqnarray} v(t) = {\cal L}^{-1} [A^{-1}(z)], \label{vt} \end{eqnarray} where $A_{rr'}(z) = (z^2 + \Omega_r^2)\delta_{rr'} + \frac{2}{M_r} \tilde{\gamma}_{rr'}(z)$ and $\tilde{\gamma}_{rr'}(z)$ is the Laplace transform of the dissipation kernel.
The classical equations of motion follow from the expectation values of $\hat{X}_r$ and $\hat{P}_r = M_r \dot{X}_r$ in Eq. (\ref{Xsol}) \begin{eqnarray} \left(\begin{array}{c} X(t) \\ P(t) \end{array} \right) = \left( \begin{array}{cc} \dot{v}(t) & v(t) M^{-1}\\ M \ddot{v}(t) & M \dot{v}(t) M^{-1} \end{array} \right) \left(\begin{array}{c} X(0) \\ P(0) \end{array} \right), \label{ceq2} \end{eqnarray} where $M = \mbox{diag} ( M_1, \ldots, M_N)$ is the mass matrix for the system. The matrix $R$ of Eq. (\ref{ceq}) follows from Eq. (\ref{ceq2}) by a relabeling of coordinates according to the definition of the vector $\xi^a$, Eq. (\ref{xidef}).
We next employ Eq. (\ref{Xsol}), in order to construct the correlation matrix Eq. (\ref{Vab}). Using the following equation for the correlation functions of harmonic oscillators in a thermal state at temperature $T$, \begin{eqnarray} \langle \hat{q}_i^0(s) \hat{q}_j^0(s') \rangle_{T} = \delta_{ij} \frac{1}{2m_i\omega_i} \coth \left(\frac{\omega_i}{2T}\right) \cos \left(\omega_i(s-s')\right), \end{eqnarray} we find \begin{eqnarray} S_{X_r X_{r'}} &=& \sum_{qq'} \frac{1}{M_q M_{q'}} \int_0^t ds \int_0^t ds' v_{rq}(s) \nu_{qq'}(s-s') v_{q'r'}(s'),\label{sxx} \\ S_{P_r P_{r'}} &=& M_r M_{r'} \sum_{qq'} \frac{1}{M_q M_{q'}} \int_0^t ds \int_0^t ds' \dot{v}_{rq}(s) \nu_{qq'}(s-s') \dot{v}_{q'r'}(s'),\\ S_{X_r P_{r'}} &=& M_{r'} \sum_{qq'} \frac{1}{M_q M_{q'}} \int_0^t ds' v_{rq}(s) \nu_{qq'}(s-s') \dot{v}_{q'r'}(s') \label{sxp}, \end{eqnarray} where the symmetric matrix \begin{eqnarray} \nu_{rr'}(s) = \sum_i \frac{c_{ir} c_{ir'}}{2 m_i \omega_i^2} \coth \left( \frac{\omega_i}{2T}\right)\cos \left(\omega_is\right) \end{eqnarray} is the noise kernel. Similarly to the dissipation kernel, the noise kernel has $\frac{1}{2}N(N+1)$ independent components.
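To indicate how Eqs. (\ref{vt}), (\ref{ceq2}), and (\ref{sxx}) can be evaluated in practice, the sketch below treats a single oscillator ($N=1$). The damped-oscillator form of the homogeneous equation and the Ohmic spectral density (anticipating Sec. III C with $s=0$) are assumptions of the sketch, and the numerical prefactors are illustrative rather than fixed by the conventions of the text.
\begin{verbatim}
import numpy as np

M, Omega, gamma, T, Lam = 1.0, 1.0, 0.05, 2.0, 20.0   # assumed parameters
t_grid = np.linspace(0.0, 20.0, 2001)
dt = t_grid[1] - t_grid[0]

# Homogeneous solution v(t) with v(0) = 0, v'(0) = 1, assuming the local
# (Ohmic) dissipation form  v'' + gamma v' + Omega^2 v = 0.
v = np.zeros_like(t_grid)
vdot = np.zeros_like(t_grid)
vdot[0] = 1.0
for n in range(len(t_grid) - 1):
    vddot = -gamma * vdot[n] - Omega**2 * v[n]
    vdot[n + 1] = vdot[n] + dt * vddot
    v[n + 1] = v[n] + dt * vdot[n + 1]
vddot_f = -gamma * vdot[-1] - Omega**2 * v[-1]

# Classical propagator R(t_f) of Eq. (ceq2) for N = 1.
R = np.array([[vdot[-1], v[-1] / M],
              [M * vddot_f, vdot[-1]]])

# Noise kernel nu(s) from an assumed Ohmic spectral density with cutoff.
w = np.linspace(1e-4, 6.0 * Lam, 4000)
dw = w[1] - w[0]
I_w = M * gamma * w * np.exp(-(w / Lam) ** 2)
coth = 1.0 / np.tanh(w / (2.0 * T))
def nu(s):
    return np.sum(I_w * coth * np.cos(w * s)) * dw

# Diffusion entry S_XX(t_f) of Eq. (sxx) by double quadrature.
sub = t_grid[::20]
ds = sub[1] - sub[0]
v_sub = np.interp(sub, t_grid, v)
nu_mat = np.array([[nu(si - sj) for sj in sub] for si in sub])
S_XX = (v_sub @ nu_mat @ v_sub) * ds * ds / M**2
print("R(t_f) =", R, " S_XX(t_f) =", S_XX)
\end{verbatim}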
Equations (\ref{sxx}---\ref{sxp}) together with the classical equations of motion (\ref{ceq2}) fully specify the Wigner function propagator. The master equation in the Wigner representation easily follows,
by taking the time derivative of Eq. (\ref{wt}) and using the identities \begin{eqnarray}
\int \frac{d^{2N} \xi_0}{(2 \pi)^N} (\xi - \xi_{cl})^a K_t(\xi_f, \xi_0) W_0(\xi_0) &=& - S^{ab} \frac{\partial W_t(\xi)}{\partial \xi^b}, \\
\int \frac{d^{2N} \xi_0}{(2 \pi)^N} (\xi - \xi_{cl})^a (\xi - \xi_{cl})^b K_t(\xi_f, \xi_0) W_0(\xi_0) &=& S^{ab} + S^{ac}S^{bd} \frac{\partial^2 W_t(\xi)}{\partial \xi^c \partial \xi^d}. \end{eqnarray}
The result is \begin{eqnarray} \frac{\partial W_t}{\partial t} = -(\dot{R}R^{-1})^a_b \frac{\partial (\xi^b W_t)} {\partial \xi^a} + (\frac{1}{2} \dot{S}^{ab} - (\dot{R}R^{-1})_c^{(a} S^{cb)}) \frac{\partial^2 W_t(\xi)}{\partial \xi^a \partial \xi^b}. \label{master} \end{eqnarray}
The method leading to the master equation (\ref{master}) is a generalization of the approach in Ref. \cite{HalYu} for the derivation of the Hu, Paz and Zhang master equation for $N = 1$. To the best of our knowledge the only other derivation of the QBM master equation in such a general setup (also including external force terms) is the one by Fleming, Roura and Hu, Ref. \cite{QBM2}. The benefit of the present derivation is that, by construction, it also provides the solution of the master equation, i.e., explicit formulas for the coefficients of the propagator.
The first term in the right-hand side of Eq. (\ref{master}) corresponds to the Hamiltonian and dissipation terms, and the second one to diffusion with diffusion functions $D^{ab}(t) = \frac{1}{2} \dot{S}^{ab} - (\dot{R}R^{-1})_c^{(a} S^{cb)}$. A necessary condition for the master equation to be Markovian is that dissipation is local, that is, that the matrix $A:= \dot{R}R^{-1}$ is time independent. Then $A$ is a generator of a one-parameter semi-group on the classical-state space. Moreover, the diffusion functions must be constant, which implies that $S$ must be a solution of the equation $\ddot{S} = A\dot{S}+\dot{S}A^T$.
\section{Generalized uncertainty relations} In this section, we derive generalized uncertainty relations for the QBM models described in Sec. II, which are relevant to the discussion of entanglement dynamics.
\subsection{Background}
Let ${\cal H} = L^2(R^N)$ be the Hilbert space of a quantum system corresponding to a classical phase-space $R^{2N}$. ${\cal H}$ carries a representation of canonical commutation relations \begin{eqnarray} [\hat{q}_i, \hat{p}_j] = i \delta_{ij}, i = 1, \ldots, N. \end{eqnarray}
We employ a vector notation, analogous to Eq. (\ref{xidef}), for the canonical operators $\hat{q}_i$ and $\hat{p}_j$. Then the commutation relations take the form \begin{eqnarray} [\hat{\xi}_a, \hat{\xi}_b] = i \Omega_{ab}, \hspace{0.5cm} a,b = 1,2, \ldots 2N, \end{eqnarray} where \begin{eqnarray} \Omega = \left(\begin{array}{cccc} J &0 & \ldots &0 \\
0& J & \ldots &0 \\
\ldots& & &\\
0& 0& \ldots &J \end{array} \right) \hspace{1.2cm}
J = \left(\begin{array}{cc} 0&1 \\ -1&0\end{array} \right). \end{eqnarray}
The standard uncertainty relations for this system take the form \begin{eqnarray} V \geq - \frac{i}{2} \Omega. \label{V} \end{eqnarray}
For a bipartite system, with $n$ degrees of freedom for the first subsystem, and $N-n$ ones for the second, the Peres-Horodecki partial transpose operation defines a transformation $\xi \rightarrow \Lambda \xi$, where $\Lambda$ inverts the momenta of the second subsystem. Then, the correlation matrix of a separable state satisfies the inequality \cite{Simon} \begin{eqnarray} V \geq - \frac{i}{2} \tilde{\Omega}, \hspace{1cm} \tilde{\Omega} = \Lambda \Omega \Lambda. \label{V2} \end{eqnarray}
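The criterion of Eq. (\ref{V2}) is straightforward to test numerically: one checks whether the Hermitian matrix $V+\frac{i}{2}\tilde{\Omega}$ has a negative eigenvalue. The sketch below (in Python, using the ordering of Eq. (\ref{xidef}) with two degrees of freedom; the two-mode squeezed-vacuum correlation matrix is a standard test case assumed here, not a state considered in this article) illustrates the test.
\begin{verbatim}
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])
Omega = np.block([[J, np.zeros((2, 2))], [np.zeros((2, 2)), J]])
L_pt = np.diag([1.0, 1.0, 1.0, -1.0])      # partial transpose: flip P_2
Omega_t = L_pt @ Omega @ L_pt

def violates_simon(V):
    """True if V + (i/2) Omega_t is not positive semidefinite,
    i.e. the Gaussian state with correlation matrix V is entangled."""
    return np.linalg.eigvalsh(V + 0.5j * Omega_t).min() < -1e-12

r = 0.5                                    # assumed squeezing parameter
c, s = np.cosh(2 * r), np.sinh(2 * r)
V_tmsv = 0.5 * np.array([[c, 0, s, 0],
                         [0, c, 0, -s],
                         [s, 0, c, 0],
                         [0, -s, 0, c]])
print(violates_simon(V_tmsv))              # True: entangled
print(violates_simon(0.5 * np.eye(4)))     # False: the vacuum is separable
\end{verbatim}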
Of special interest is the case $N = 2$, where Eqs. (\ref{V}) and (\ref{V2}) lead to a simple, if weaker, set of uncertainty relations. These have a simple generalization in the QBM model considered in this article. We introduce the variables \begin{eqnarray} X_+ = \frac{1}{2}(X_1 + X_2) , \hspace{1cm} P_+ = P_1 + P_2 , \\ X_- = \frac{1}{2}(X_1 - X_2) , \hspace{1cm} P_- = P_1 - P_2. \end{eqnarray} The partial transpose operation then interchanges $P_+$ with $P_-$, that is, \begin{eqnarray} \Lambda (X_+, P_+, X_-, P_-) = (X_+, P_-, X_- , P_+). \end{eqnarray}
Hence, the uncertainty relations,
\begin{eqnarray} {\cal A}_{X_+P_+} := (\Delta X_+)^2 (\Delta P_+ )^2 - V_{X_+P_+}^2 \geq \frac{1}{4}, \hspace{0.8cm} {\cal A}_{X_-P_-} := (\Delta X_-)^2 (\Delta P_- )^2 - V_{X_-P_-}^2 \geq \frac{1}{4} \label{unc2a}, \end{eqnarray} satisfied by any pair of conjugate variables (they follow from the positivity of the $2\times 2$ diagonal subdeterminants of $V$), imply that a factorized state must satisfy the following relations \begin{eqnarray} {\cal A}_{X_+ P_-} := (\Delta X_+)^2 (\Delta P_- )^2 - V_{X_+ P_-}^2 \geq \frac{1}{4}, \hspace{0.8cm} {\cal A}_{X_- P_+} := (\Delta X_-)^2 (\Delta P_+ )^2 - V_{X_-P_+}^2 \geq \frac{1}{4} \label{unc2b}.
\end{eqnarray} If either inequality in Eq. (\ref{unc2b}) is violated, then the state is entangled. Hence, the uncertainty functions ${\cal A}_{X_+ P_-}$ and ${\cal A}_{X_- P_+}$ provide witnesses of entanglement for any state. They are weaker than the full Eq. (\ref{V2}). Equation (\ref{V2}) fully specifies entanglement in all Gaussian states, while Eq. (\ref{unc2b}) does so only for pure Gaussian states.
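The weaker witnesses of Eq. (\ref{unc2b}) can likewise be read off directly from the correlation matrix. In the sketch below (same conventions and the same assumed two-mode squeezed test state as in the previous snippet), only ${\cal A}_{X_-P_+}$ falls below $1/4$, which already certifies entanglement.
\begin{verbatim}
import numpy as np

def witnesses(V):
    # X+ = (X1+X2)/2, X- = (X1-X2)/2, P+ = P1+P2, P- = P1-P2,
    # with V given in the ordering (X1, P1, X2, P2).
    Xp = np.array([0.5, 0.0, 0.5, 0.0]); Xm = np.array([0.5, 0.0, -0.5, 0.0])
    Pp = np.array([0.0, 1.0, 0.0, 1.0]); Pm = np.array([0.0, 1.0, 0.0, -1.0])
    A = lambda u, w: (u @ V @ u) * (w @ V @ w) - (u @ V @ w) ** 2
    return A(Xp, Pm), A(Xm, Pp)            # (A_{X+P-}, A_{X-P+})

r = 0.5
c, s = np.cosh(2 * r), np.sinh(2 * r)
V_tmsv = 0.5 * np.array([[c, 0, s, 0], [0, c, 0, -s],
                         [s, 0, c, 0], [0, -s, 0, c]])
print(witnesses(V_tmsv))                   # the second value is below 1/4
\end{verbatim}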
\subsection{Uncertainty relations in QBM models}
The initial correlation matrix $V_0$ in Eq. (\ref{Vt}) satisfies
the inequality (\ref{V}). It follows that
\begin{eqnarray}
V_t \geq - \frac{i}{2} R(t) \Omega R^T(t) + S(t). \label{gunc1}
\end{eqnarray} The inequality (\ref{gunc1}) is a generalized uncertainty relation that incorporates the effect of environment-induced fluctuations. It generalizes the uncertainty relations of Ref. \cite{AnHa} to oscillator systems with an arbitrary number of degrees of freedom. The right-hand side of Eq. (\ref{gunc1}) depends only on the coefficients of the Wigner function propagator and not on any properties of the initial state. Hence, Eq. (\ref{gunc1}) provides a lower bound to the correlation matrix at time $t$, for a system that comes into contact with a heat bath at time $t = 0$.
Equality in Eq. (\ref{gunc1}) is achieved for pure Gaussian states. The bound is to be understood in the sense of an envelope. No single Gaussian state saturates the bound in Eq. (\ref{gunc1}) at all moments of time, but equality is achieved by a different family of Gaussians at each moment $t$.
\subsubsection{Bipartite entanglement} When applied to a bipartite system, Eq. (\ref{gunc1}) implies that the condition \begin{eqnarray} - \frac{i}{2} R(t) \Omega R^T(t) + S(t) < -\frac{i}{2} \tilde{\Omega} \label{gunc2} \end{eqnarray}
is sufficient for the existence of entangled states at time $t$, irrespective of the degradation caused by the environment. For Gaussian initial states, this condition is also necessary.
For a factorized initial state, Eqs. (\ref{Vt}) and (\ref{V2}) yield \begin{eqnarray}
V_t \geq - \frac{i}{2} R(t) \tilde{\Omega} R^T(t) + S(t). \label{gunc3} \end{eqnarray}
Inequality (\ref{gunc3}) is saturated for {\em factorized} pure Gaussian states, and, similarly to Eq. (\ref{gunc1}), the lower bound to the correlation matrix is to be understood as an envelope.
If an initially factorized state remains factorized at time $t$, then $V_t \geq -\frac{i}{2}\tilde{\Omega}$. Then Eq. (\ref{gunc3}) implies that the inequality
\begin{eqnarray}
- \frac{i}{2} \left( R(t) \tilde{\Omega} R^T(t) - \tilde{\Omega} \right) + S(t) \leq 0 \label{gunc4}
\end{eqnarray}
is a necessary condition for the preservation of factorizability at time $t$.
\subsubsection{Tripartite entanglement} Equations (\ref{Vt}) and (\ref{gunc1}) apply to systems of $N$ oscillators. Used in conjunction with suitable separability criteria for multipartite systems \cite{DCT}, they also allow the derivation of uncertainty relations relevant to multipartite systems. For example, we can use the criteria of Ref. \cite{GKLC}, which apply to systems of three oscillators, labeled by the index $i = 1, 2, 3$. One defines the matrices $\Lambda_i$ that effect partial transposition with respect to the $i$th subsystem. Then, separable states satisfy \begin{eqnarray} V \geq - \frac{i}{2} \tilde{\Omega}_i, \hspace{1cm} \tilde{\Omega}_i = \Lambda_i \Omega \Lambda_i, \label{unc6} \end{eqnarray} for all $i$. There are some subtleties in the application of the criterion Eq. (\ref{unc6}) for Gaussians: there exist states that satisfy Eq. (\ref{unc6}) that are not fully separable, but only biseparable with respect to all possible bipartite splits---see Ref. \cite{GKLC} for details. However, the reasoning of Sec. III B 1 applies.
The condition \begin{eqnarray} - \frac{i}{2} R(t) \Omega R^T(t) + S(t) < -\frac{i}{2} \tilde{\Omega}_i, \label{guncb1} \end{eqnarray} for all $i$, is sufficient for the existence of entangled states at time $t$, irrespective of the degradation caused by the environment. For a factorized initial state, Eqs. (\ref{Vt}) and (\ref{unc6}) yield \begin{eqnarray}
V_t \geq - \frac{i}{2} R(t) \tilde{\Omega}_i R^T(t) + S(t), \label{guncb2} \end{eqnarray} for all $i$. Equation (\ref{guncb2}) implies that the condition
\begin{eqnarray}
- \frac{i}{2} \left( R(t) \tilde{\Omega}_i R^T(t) - \tilde{\Omega}_i \right) + S(t) \leq 0 \label{guncb3}
\end{eqnarray}
is necessary for the preservation of factorizability at time $t$.
\subsection{A case model}
The uncertainty relations (\ref{gunc1})--(\ref{gunc4}) hold for any Gaussian QBM system and depend only on the matrices $R$ and $S$ defining the density-matrix propagator, for which explicit expressions were given in Sec. II. In what follows, we elaborate on these relations in the context of a specific QBM model for a bipartite system, which has been studied by Paz and Roncaglia \cite{PR1, PR2}.
In this model, the system consists of two harmonic oscillators with equal masses $M$ and frequencies $\Omega_1$ and $\Omega_2$. We also consider symmetric coupling to the environment, that is, $c_{i1} = c_{i2}:= c_i$ in Eq. (\ref{hint}). The latter assumption is a strong simplification, because the dissipation and noise kernels then become scalars, \begin{eqnarray} \gamma(s) &=& \int d \omega I(\omega) \sin \left(\omega s\right) \left(\begin{array}{cc}1&1\\1&1 \end{array} \right), \\ \nu(s) &=& \int d \omega I(\omega) \coth \left(\frac{\omega}{2T}\right) \cos \left(\omega s\right) \left(\begin{array}{cc}1&1\\1&1 \end{array} \right), \end{eqnarray} where \begin{eqnarray} I(\omega) = \sum_i \frac{c_i^2}{2 m_i \omega_i^2} \delta (\omega - \omega_i) \end{eqnarray} is the bath's spectral density. A common form for $I(\omega)$ is
\begin{eqnarray} I(\omega)=M\gamma\omega \left(\frac{\omega}{\tilde{\omega}} \right)^s e^{-\frac{\omega^2}{\Lambda^2}}, \end{eqnarray} where $\gamma$ is a dissipation constant, $\Lambda$ is a high-frequency cutoff, $\tilde{\omega}$ is a frequency scale, and the exponent $s$ characterizes the infrared behavior of the bath. For this model, it is convenient to employ the dimensionless parameter $\delta := \frac{\Omega_1^2 - \Omega_2^2}{ \Omega_1^2 + \Omega_2^2 }$, denoting how far the system is from resonance, and the scaled temperature $\theta := \frac{T}{\sqrt{ \Omega_1^2 + \Omega_2^2 }}$.
In this model, the pair of oscillators is coupled to the environment only through the variable $X_+$. The variable $X_-$ is affected by the environment only through its coupling with $X_+$, which is proportional to $\Delta^2 = |\Omega_1^2 - \Omega_2^2|$. For resonant oscillators ($\Omega_1 = \Omega_2$) this coupling vanishes, the subalgebra generated by $\hat{X}_-$ and $\hat{P}_-$ is isolated from the environment, and it is therefore decoherence free. This means in particular that some entanglement may persist even at late times. This case has been studied in detail in Ref. \cite{PR1}. For nonzero $\Delta$, the $\hat{X}_-$ and $\hat{P}_-$ subalgebra is not totally isolated from the environment.
The uncertainty relations simplify when the environment is ohmic ($s = 0$). Then, dissipation is local and in the weak-coupling limit ($\gamma << \Omega_i$), the matrix $R$ describing the classical evolution takes the form
\begin{eqnarray}
R(t) = e^{- \frac{1}{2}\gamma t} U(t),
\end{eqnarray}
where $U(t)$ is a canonical transformation: $U(t)\Omega U^T(t) = \Omega$. Hence, Eq. (\ref{gunc1}) becomes
\begin{eqnarray}
V_t \geq -\frac{i}{2} e^{- \gamma t} \Omega + S(t). \label{gunc5}
\end{eqnarray} From Eq. (\ref{gunc5}) we see that dissipation tends to shrink phase-space areas, but this is compensated by the effects of diffusion incorporated into the definition of the matrix $S$. For an initial factorized state, we obtain \begin{eqnarray} V_t \geq -\frac{i}{2} e^{- \gamma t} F(t) + S(t), \label{gunc6b} \end{eqnarray} where $F(t) = U(t)\tilde{\Omega}U^T(t)$ is an oscillating function of time. The oscillations in $F(t)$ may lead to violation of the bound $V_t \geq -\frac{i}{2} \tilde{\Omega}$ for factorized states and thus to entanglement creation. However, the oscillating character of $F(t)$ implies that entanglement creation will in general be accompanied by entanglement death and revival. For times $t >> \gamma^{-1}$ the first term in the right-hand side of Eq. (\ref{gunc6b}) is suppressed.
\paragraph{The Wigner function area.} According to Eq. (\ref{gunc5}), the matrix $V_t + \frac{i}{2}e^{- \gamma t} \Omega - S(t)$ is positive. Its $2\times 2$ submatrix in the $X_+, P_+$ coordinates should also be positive; hence,
\begin{eqnarray}
[(\Delta X_+)^2 - S_{X_+X_+}][(\Delta P_+)^2 - S_{P_+P_+}] - (V_{X_+P_+} - S_{X_+P_+})^2 \geq \frac{1}{4} e^{- 2 \gamma t}. \label{gunc6}
\end{eqnarray} By virtue of Schwartz's inequality, $(\Delta X_+)^2 S_{X_+X_+} + (\Delta P_+)^2 S_{P_+P_+} - V_{X_+P_+}S_{X_+P_+} \geq 0$; hence, \begin{eqnarray} {\cal A}_{X_+P_+} \geq \frac{1}{4} e^{- \gamma t} + \left(S_{X_+X_+} S_{P_+P_+} - S_{X_+P_+}^2\right ). \label{gunc7a} \end{eqnarray} Similarly, \begin{eqnarray} {\cal A}_{X_- P_-} &\geq& \frac{1}{4} e^{- \gamma t} + \left(S_{X_-X_-} S_{P_- P_-} - S_{X_- P_-}^2\right), \label{gunc7b} \\ {\cal A}_{X_+ P_-} &\geq& \left(S_{X_+X_+} S_{P_- P_-} - S_{X_+ P_-}^2 \right), \label{gunc7c}\\ {\cal A}_{X_-, P_+} &\geq& \left(S_{X_-X_-} S_{P_+P_+} - S_{X_- P_+}^2 \right). \label{gunc7d} \end{eqnarray}
The uncertainty functions ${\cal A}_{X_iP_j}$ correspond to the area of the projection of the Wigner function ellipse onto a two-dimensional subspace defined by $X_i$ and $P_j$. The right-hand sides of the inequalities are plotted in Fig. 1 as functions of time. Except possibly at early times, the functions increase monotonically and reach a constant asymptotic value at a time scale of order $\gamma^{-1}$.
\begin{figure}
\caption{ \small (Color online) (a) The lower bounds for ${\cal A}_{X_+P_-}$ in Eq. (\ref{gunc7c}) and ${\cal A}_{X_+P_+}$ in Eq. (\ref{gunc7a}) as functions of $\gamma t$, for parameter values $\theta = 0.7$ and $\delta = 0.38$. (b) The lower bound to ${\cal A}_{X_+P_-}$ as a function of $\gamma t$ for different values of the dimensionless temperature $\theta$.}
\end{figure}
\section{Entanglement dynamics}
\subsection{Disentanglement at high temperature}
A widely studied regime in quantum Brownian motion models is the so-called Fokker-Planck limit in ohmic environments, because in this limit the master equation is Markovian. The Fokker-Planck limit is defined by the condition $T >> \Lambda$, and then taking $\Lambda \rightarrow \infty$, in order to obtain time-local dissipation and noise.
In this regime, thermal noise is strong, resulting in loss of quantum coherence and entanglement at early times. It is convenient to work with the uncertainty functions ${\cal A}_{X_iP_j}$, because they can be explicitly evaluated\footnote{There is no loss of information in this choice, because of the rapid degradation of coherence. The sharper inequality, Eq. (\ref{gunc5}), gives the same estimation for the characteristic time scales of these processes.}. We find \begin{eqnarray} {\cal A}_{X_+P_+} &\geq& \frac{1}{4}(1 - \gamma t + \gamma^2 T^2 t^4), \label{gunc9a}
\\ {\cal A}_{X_-P_-} &\geq& \frac{1}{4}[1 - \gamma t + \frac{\gamma^2 T^2}{2^{8}\cdot 3^4\cdot 35} \Delta^8 t^{12}], \label{gunc9b} \\ {\cal A}_{X_+P_-} &\geq& \frac{11 \gamma^2 T^2}{256} \Delta^4 t^8 , \label{gunc9c} \\
{\cal A}_{X_-P_+} &\geq& \frac{\gamma^2 T^2}{256} \Delta^4 t^8, \label{gunc9d} \end{eqnarray}
The above equations are obtained far from resonance for the two oscillators, that is, $\Delta >> \gamma$.
Equations (\ref{gunc9a}) and (\ref{gunc9b}) represent the initial growth of fluctuations starting from purely quantum fluctuations at $t = 0$. The growth of the fluctuations for the variables $X_+$ and $P_+$ is faster than that of the variables $X_-$ and $P_-$, because the former couple directly to the bath. The $-\gamma t$ term in these equations indicates an initial decrease of the fluctuations, in apparent violation of the uncertainty principle. The violation in Eq. (\ref{gunc9a}) occurs at a timescale of order $(\gamma T^2)^{-1/3}$. This is because these equations are derived taking the infinite cut-off limit $\Lambda \rightarrow \infty$, which leads to violations of the positivity of the density operator at $t < \Lambda^{-1}$ \cite{AnHa}. For $t > \Lambda^{-1} >> T^{-1}$, and $T$ sufficiently large so that $\gamma T^2/\Lambda^3 >> 1$, such violations do not arise.
Ignoring the positivity-violating terms, Eq. (\ref{gunc9a}) leads to an expression $t_{th} \sim 1/\sqrt{\gamma T}$ for the time scale where the thermal fluctuations overcome the purely quantum ones. This is an upper limit to the decoherence time for the $X_+$ and $P_+$ variables \cite{AnHa}.
From Eqs. (\ref{gunc9c}) and (\ref{gunc9d}) we obtain the characteristic time scale where ${\cal A}_{X_+ P_-}$ and ${\cal A}_{X_- P_+}$ reach the value $\frac{1}{4}$ starting from 0. This is indicative of the time scale for disentanglement $t_{dis}$ in this model: \begin{eqnarray} t_{dis} \sim \frac{1}{(\gamma T \Delta^2)^{1/4}}. \end{eqnarray} The characteristic scale for disentanglement is distinct from the time scale $t_{th}$ characterizing the growth of thermal fluctuations: \begin{eqnarray} t_{dis}/t_{th} = \left(\frac{\gamma T}{\Delta^2}\right)^{1/4}.
\end{eqnarray}
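To illustrate these scalings, the following minimal Python sketch evaluates $t_{th}$ and $t_{dis}$ and their ratio; the parameter values are hypothetical placeholders chosen only for illustration (in units with $\hbar = k_B = 1$), not the values used in the figures.
\begin{verbatim}
# Minimal sketch with assumed illustrative parameter values.
gamma = 0.01    # dissipation constant (hypothetical)
T     = 100.0   # bath temperature (hypothetical)
Delta = 0.5     # frequency difference Omega_1 - Omega_2 (hypothetical)

t_th  = 1.0 / (gamma * T) ** 0.5                # thermalization/decoherence time scale
t_dis = 1.0 / (gamma * T * Delta ** 2) ** 0.25  # disentanglement time scale
ratio = t_dis / t_th                            # equals (gamma * T / Delta**2) ** 0.25

print(t_th, t_dis, ratio)
\end{verbatim}
For the values above the ratio is larger than one, illustrating that for weak coupling between the $+$ and $-$ variables entanglement can outlive the decoherence of the $X_+$ and $P_+$ variables.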
For sufficiently small values of $\Delta$, that is, weak coupling between the $+$ and $-$ variables, the disentanglement timescale may be much larger than the decoherence time scale for the $X_+$ and $P_+$ variables. Hence, even if the $X_-, P_-$ degrees of freedom are only partially protected from degradation from the environment, they can sustain entanglement
long after the $X_+$ and $P_+$ variables have decohered.
\subsection{Long-time limit}
While entanglement may be preserved much longer than the coherence of the $X_+$ and $P_+$ degrees of freedom, the interaction with the environment sets the relaxation time scale $\gamma^{-1}$ as an upper limit for the disentanglement time. For times $t \gg \gamma^{-1}$, all states tend toward the stationary state $\hat{\rho}_{\infty}$ corresponding to the Wigner function \begin{eqnarray} W_{\infty}(\xi) = \frac{\sqrt{\det S^{-1}_{\infty}}}{\pi} \exp[- \frac{1}{2} \xi S^{-1}_{\infty} \xi], \label{asympt} \end{eqnarray} where $S_{\infty}$ is the asymptotic value of the matrix $S$ as $t \rightarrow \infty$. In this limit, the correlation matrix $V$ coincides with $S$. Explicit evaluation of Eqs. (\ref{sxx}--\ref{sxp}) shows that, as $t\rightarrow \infty$, the only nonvanishing elements of the matrix $S$ are the diagonal ones: $S_{X_+X_+}, S_{X_-X_-}, S_{P_+P_+}, S_{P_-P_-}$ (see the Appendix). For states of this form, the uncertainty functions ${\cal A}_{X_+P_-}$ and ${\cal A}_{X_- P_+}$ fully determine entanglement. We further find that, to leading order in $\gamma/\Omega_i$ and $\Omega_i/\Lambda$, the asymptotic state coincides with the thermal state for
Hamiltonian $\hat{H}_0$ in Eq. (\ref{ho}); hence, it is factorized.
However, at low temperatures the thermal states are close to the boundary that separates factorized from entangled states (for example, they satisfy ${\cal A}_{X_+P_-} \simeq \frac{1}{4}$). Hence, the corrections from the nonzero values of $\gamma/\Omega_i$ and $\Omega_i/\Lambda$ may lead the asymptotic state to retain some degree of entanglement, as was found in Ref. \cite{PR2}. We have verified numerically that the residual entanglement decreases with increasing values of the cutoff parameter $\Lambda$.
This result applies to a system of nondegenerate oscillators. For degenerate oscillators, the $X_-$ and $P_-$ subalgebra is protected from the environment. Hence, the asymptotic state is not unique and it may sustain entanglement or even be characterized by a nonterminating sequence of entanglement deaths and revivals.
The analysis of Sec. II allows us to make a general characterization of the asymptotic state valid for any QBM model. The key observation is that the uniqueness of the asymptotic state is determined solely by the classical equations of motion, that is, by the matrix $R^a_{b}$ in Eq. (\ref{ceq}). In the generic case the phase space contains no dissipation-free subspace, and $\xi^a_{cl}(t) \rightarrow 0$ as $t \rightarrow \infty$, irrespective of the initial condition. Hence, for times $t$ much larger than the relaxation time $\tau_{rel}$ the memory of the initial state is lost from the Wigner function propagator, Eq. (\ref{gauss}). Moreover, if $\xi^a_{cl}(t) \rightarrow 0$ sufficiently fast as $t \rightarrow \infty$, the limit $t \rightarrow \infty$ for the matrix $S$, Eqs. (\ref{sxx}--\ref{sxp}), is well defined. Thus a unique asymptotic state of the form (\ref{asympt}) is obtained. In the weak-coupling limit, one expects that the asymptotic state will be close to the thermal state at temperature $T$; hence, it will be factorized.
If, on the other hand, the classical equations of motion admit a dissipation-free subspace, time evolution in this subspace is Hamiltonian, and there $\xi^a_{cl}(t)$ does not converge to a unique value as $t \rightarrow \infty$. This implies that the Wigner function propagator, Eq. (\ref{gauss}), preserves its dependence on the initial variables even for $t \gg \tau_{rel}$. As a consequence, an asymptotic state may not exist or, if it exists, it may not be unique. Hence, in this case asymptotic entanglement or a sequence of entanglement deaths and revivals is possible.
Nonetheless, the case of a unique asymptotic state is the generic one. Dissipation-free subspaces exist only for a set of measure zero in the space of parameters (e.g., system-environment couplings) characterizing a QBM model. For example, even a small dependence of the coupling on the oscillator's position will prevent the existence of a dissipation-free subspace. Hence, unless some symmetry can be invoked that fully protects a subalgebra from degradation by the environment, we expect that the relaxation time sets an absolute upper limit to the time scale over which entanglement can be preserved in any oscillator system interacting with a QBM-type environment.
\subsection{Entanglement creation}
In general, two noninteracting quantum systems may become entangled by their interaction with a third system. In QBM the role of the third system can be played by the environment, and indeed, low-temperature baths have the tendency to create entanglement.
The uncertainty relations Eqs. (\ref{gunc3}) and (\ref{gunc4}) are particularly useful for the study of entanglement creation. We apply them as follows.
The positivity of the matrix $V_t + \frac{i}{2} \tilde{\Omega}$ is a necessary criterion for a state to be factorized at time $t$. Hence, in a factorized state, the minimal eigenvalue $\lambda_{min}(t)$ of $V_t + \frac{i}{2} \tilde{\Omega}$ is positive. By Eq. (\ref{gunc3}), $\lambda_{min}(t)$ is always bounded from below by the minimal eigenvalue of the matrix $-\frac{i}{2}(R\tilde{\Omega}R^T - \tilde{\Omega}) + S$, which we denote by $\tilde{\lambda}_{bound}(t)$. Hence, the function $\tilde{\lambda}_{bound}(t)$ determines the capacity of the environment to create entanglement irrespective of the initial state. In particular, the condition $\tilde{\lambda}_{bound}(t) \leq 0$ implies that at least some factorized states can develop entanglement at time $t$.
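As a minimal numerical sketch of these eigenvalue criteria, the following Python fragment evaluates $\lambda_{min}(t)$ and $\tilde{\lambda}_{bound}(t)$ for given matrices $V_t$, $R$, $S$, and $\tilde{\Omega}$; the matrices are assumed to be available, e.g., from a numerical evaluation of the Wigner function propagator, and the function names are hypothetical.
\begin{verbatim}
import numpy as np

def min_eigenvalue(H):
    """Smallest eigenvalue of a Hermitian matrix."""
    return np.linalg.eigvalsh(H).min()

def lambda_min(V_t, Omega_tilde):
    """Minimal eigenvalue of V_t + (i/2) Omega_tilde for the evolved state."""
    return min_eigenvalue(V_t + 0.5j * Omega_tilde)

def lambda_bound_tilde(R_t, S_t, Omega_tilde):
    """Lower bound for initially factorized states: minimal eigenvalue of
    -(i/2)(R Omega_tilde R^T - Omega_tilde) + S."""
    M = -0.5j * (R_t @ Omega_tilde @ R_t.T - Omega_tilde) + S_t
    return min_eigenvalue(M)
\end{verbatim}
Both matrices are Hermitian (a real symmetric part plus $i/2$ times a real antisymmetric part), so \texttt{eigvalsh} applies.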
Figure 2(b) provides a plot of the minimal eigenvalue $\lambda_{min}(t)$ of $V_t + \frac{i}{2} \tilde{\Omega}$ for an initial factorized Gaussian state, together with the lower bound $\tilde{\lambda}_{bound}(t)$, as functions of time. $\lambda_{min}(t)$ oscillates rapidly on a scale of $\Omega_i^{-1}$, so that on a time scale of order $\gamma^{-1}$ we can distinguish only two enveloping curves that bound it from above and below.
$\tilde{\lambda}_{bound}(t)$ is close to the lower enveloping curve of $\lambda_{min}(t)$, and we note that at specific instants the inequality $\lambda_{min}(t) \geq \tilde{\lambda}_{bound}(t)$ is saturated.
\begin{figure}
\caption{ \small (Color online) (a) The rapidly oscillating minimal eigenvalue $\lambda_{min}$ of $V_t + \frac{i}{2} \tilde{\Omega}$ for an initial factorized state $|0, 1 \rangle$, together with the lower bound $\tilde{\lambda}_{bound}(t)$ corresponding to the minimal eigenvalue of the matrix $-\frac{i}{2}(R\tilde{\Omega}R^T - \tilde{\Omega}) + S$. In this plot $\delta = 0.02$ and $\theta = 0.21$. (b) Same as in (a) but for an initial factorized Gaussian state.}
\end{figure}
For Gaussian states, the failure of the condition $V_t + \frac{i}{2} \tilde{\Omega} \geq 0$ completely specifies entanglement; hence, for times $t$ at which $\tilde{\lambda}_{bound}(t) > 0$, no initially factorized Gaussian state can sustain entanglement. In Figs. 2(b) and 3, we see that $\tilde{\lambda}_{bound}(t)$ exhibits oscillations around zero at low temperatures. This implies that, at least for Gaussian states, entanglement creation at low temperature is typically accompanied by a period of ``entanglement oscillations'', that is, a sequence of entanglement deaths and revivals, which terminates on a time scale of order $\gamma^{-1}$, when the system relaxes to an asymptotic factorized state.
Figure 2(a) provides a plot of $\lambda_{min}(t)$ for an initial factorized energy eigenstate $|0, 1\rangle$, together with the bound $\tilde{\lambda}_{bound}(t)$. For non-Gaussian states, a positive value of $\lambda_{min}(t)$ does not imply factorizability of the state; information about entanglement is carried in higher-order correlation functions of the system. Nonetheless, a negative value of $\lambda_{min}$ is a definite sign of entanglement. Despite the fact that $\lambda_{min}(t)$ saturates the bound at some instants, in general its behavior is qualitatively different.
In Fig. 3, the minimal eigenvalue $\tilde{\lambda}_{bound}(t)$ is plotted for different values of temperature. With increasing temperature the time intervals of persisting entanglement shrink and the entanglement oscillations are suppressed. At sufficiently high temperature (of order $\theta > 10$), no creation of entanglement occurs.
\begin{figure}
\caption{\small (Color online) The minimal eigenvalue $\tilde{\lambda}_{bound}(t)$ of the matrix $-\frac{i}{2}(R\tilde{\Omega}R^T - \tilde{\Omega}) + S$ for $\delta = 0.02$ and different values of temperature.}
\end{figure}
\subsection{Disentanglement at low temperature} We saw that at high temperature, the noise from the environment degrades the quantum state and causes rapid decoherence and disentanglement. At low temperatures ($\theta < 1 $), however, the noise is not sufficiently strong to cause decoherence \cite{HPZ}, and entanglement is preserved longer. The physical mechanism responsible for disentanglement at low temperature is relaxation: the existence of a unique asymptotic factorized state implies that at a time scale of order $\gamma^{-1}$ all memory of the initial state (including entanglement) is lost. In other words, a low-temperature bath is much more efficient in creating and preserving entanglement, but relaxation to equilibrium will inevitably lead to a factorized state.
By Eq. (\ref{gunc1}), the minimal eigenvalue $\lambda_{min}(t)$ of the matrix $V_t + \frac{i}{2} \tilde{\Omega}$ is always bounded from below by the minimal eigenvalue $\lambda_{bound}(t)$ of the matrix $-\frac{i}{2}(R\Omega R^T - \tilde{\Omega}) + S$. Hence, the condition $\lambda_{bound}(t) < 0$ is sufficient for the existence of entangled states at time $t$. Moreover, the condition $\lambda_{bound}(t) > 0$ establishes that the evolved state at time $t$ of any {\em Gaussian} initial state is factorized.
Figure 4 contains plots of the minimal eigenvalue $\lambda_{min}(t)$ of $V_t + \frac{i}{2} \tilde{\Omega}$ for two different initial states, together with the lower bound $\lambda_{bound}(t)$. In Fig. 4(a) the initial state is an entangled Gaussian, and in Fig. 4(b) the initial state is $\frac{1}{\sqrt{2(1 + e^{- |z|^2})}} (|z, 0\rangle +|0, z\rangle)$, where $|z\rangle$ is a coherent state. In both cases, $\lambda_{min}(t)$ approaches the lower bound only after a time scale of order $\gamma^{-1}$, when the system has started relaxing to the unique asymptotic state. We note that there are no entanglement oscillations for such states, only a gradual decay of entanglement. This behavior is typical for initial states that violate Eq. (\ref{V2}) by a substantial margin. However, the uncertainty relations do not provide any significant information about the entanglement dynamics of initial states that are entangled but do not violate the bound, Eq. (\ref{V2}). This is the case, for example, for states of the form $\frac{1}{\sqrt{2}}(|0,1\rangle + e^{i \theta} |1, 0\rangle)$. In order to study such states, we would have to obtain generalized uncertainty relations pertaining to correlation functions of order higher than 2.
In Fig. 5, we plot the minimal eigenvalue $\lambda_{bound}(t)$ as a function of time $t$, for different temperatures. As expected, the time interval during which the system sustains entangled states [i.e., $\lambda_{bound}(t) < 0$] shrinks with increasing temperature.
\begin{figure}
\caption{ \small (Color online) The minimal eigenvalue $\lambda_{bound}(t)$ of the matrix $-\frac{i}{2}(R\Omega R^T - \tilde{\Omega}) + S$ for $\delta = 0.02$ and different values of temperature.}
\end{figure}
The uncertainty relation, Eq. (\ref{gunc1}), allows for the definition of the disentanglement time $t_{dis}$ as the instant at which $\lambda_{bound}(t) = 0$. Thus defined, $t_{dis}$ is an upper bound to the disentanglement time for {\em any} Gaussian initial state. In general, non-Gaussian states may preserve entanglement for times larger than $t_{dis}$. However, $t_{dis}$ depends only on the matrices $S$ and $R$, and the evolution of higher-order correlation functions of non-Gaussian states is governed by the matrices $S$ and $R$ alone. Moreover, $t_{dis} \sim \gamma^{-1}$ refers to the regime of relaxation to a unique thermal equilibrium state, hence to the loss of any memory of the initial condition.
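As an illustration of how $t_{dis}$ can be extracted in practice, the following sketch locates the last zero crossing of $\lambda_{bound}(t)$ on a time grid; the callable \texttt{lambda\_bound} is hypothetical and assumed to be available, e.g., from a numerical evaluation of the matrices $R$ and $S$.
\begin{verbatim}
import numpy as np

def disentanglement_time(lambda_bound, t_max, n=10000):
    """Return the last zero crossing of lambda_bound(t) on [0, t_max],
    or None if no sign change is found on the grid."""
    t = np.linspace(0.0, t_max, n)
    lb = np.array([lambda_bound(ti) for ti in t])
    crossings = np.where(np.diff(np.sign(lb)) != 0)[0]
    if len(crossings) == 0:
        return None
    i = crossings[-1]
    # Linear interpolation between the bracketing grid points.
    return t[i] - lb[i] * (t[i + 1] - t[i]) / (lb[i + 1] - lb[i])
\end{verbatim}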
For this reason, it is reasonable to assume that $t_{dis}$ provides a good estimate of the disentanglement time that is valid for a larger class of initial states, at least as far as its qualitative dependence on temperature and other bath parameters is concerned. Figure 6 plots $t_{dis}$ as a function of temperature for different values of $\delta$. As expected, $t_{dis}$ decreases with temperature. However, there is no monotonic dependence of $t_{dis}$ on $\delta$, and for $\theta > 0.5$, $t_{dis}$ is largely insensitive to $\delta$.
Finally, we note that the weaker uncertainty relations for the Wigner function areas ${\cal A}_{X_{\pm}P_{\mp}}$ also provide an estimate of the disentanglement time $t_{dis}$. Since these inequalities are weaker, the values of $t_{dis}$ thus obtained are smaller, but their dependence on the parameters $\delta$ and $\theta$ is qualitatively similar.
\begin{figure}
\caption{\small (Color online) Disentanglement time $t_{dis}$ in units of $\gamma^{-1}$ as a function of the dimensionless temperature $\theta$ for different values of $\delta$.}
\end{figure}
\section{Conclusions} The main results of our article are the following: (i) the explicit construction of the Wigner function propagator for QBM models with any number of system oscillators and for any spectral density; the propagator allows for a simple derivation of the corresponding master equation; (ii) the identification of generalized uncertainty relations valid in any QBM model that provide a state-independent lower bound to the fluctuations induced by the environment; (iii) the application of the uncertainty relations to a concrete model, for the study of decoherence, disentanglement, and entanglement creation in different regimes. In particular, we showed that entanglement creation is often accompanied by entanglement oscillations at early times and that the uncertainty relations provide an upper bound to the disentanglement time with respect to all initial Gaussian states.
In our opinion, the most important feature of the techniques developed in this article is that they can be immediately generalized to more complex systems and issues in the study of entanglement dynamics: for example, the derivation of uncertainty relations for higher-order correlation functions, or for information-theoretic quantities that contain more detailed information about the entanglement of general initial states, and the exploration of entanglement dynamics in multipartite systems, including its dependence on the spatial separation of the subsystems.
\begin{appendix} \section{The coefficients in the Wigner function propagator} In this appendix, we sketch the calculations of the coefficients in the Wigner function propagator for the model presented in Sec. III C.
We first compute the function $v(s)$ of Eq. (\ref{vt}) in the $X_+, X_-$ coordinates. To leading order in $\gamma$ for the poles in the Laplace transform (\ref{vt}), we obtain
\begin{eqnarray} v_{++}(s)&=&\frac{e^{-\frac{1}{2}\gamma s}}{4\Omega_1 \Omega_2(\Omega_2^2-\Omega_1^2)} \left[\Omega_2 \sin(\Omega_1 s)\left(\gamma^2 - 2\Omega_1^2 + 2 \Omega_2^2 \right) \right. \nonumber \\ &&\left. + 4 \gamma \Omega_1\Omega_2 \left(\cos(\Omega_2 s)-\cos(\Omega_1 s)\right)-\Omega_1 \sin(\Omega_2 s) \left(\gamma^2+2\Omega_1^2-2 \Omega_2^2\right)\right] \\ v_{+-}(s)&=&\frac{e^{-\frac{1}{2}\gamma s}}{2\Omega_1 \Omega_2}\left[\Omega_2\sin(\Omega_1 s)-\Omega_1\sin(\Omega_2 s)\right] \\ v_{--}(s)&=&\frac{e^{-\frac{1}{2}\gamma s}}{4\Omega_1 \Omega_2(\Omega_2^2-\Omega_1^2)}\left[\Omega_2\sin(\Omega_1 s)\left(\gamma^2-8\gamma\Omega_1-2\Omega_1^2+2\Omega_2^2\right)\right. \nonumber \\ && \left.+ 4\gamma\Omega_1\Omega_2\left(\cos(\Omega_2 s)-\cos(\Omega_1 s)\right)-\Omega_1\sin(\Omega_2 s)\left(\gamma^2-8\gamma\Omega_1+2\Omega_1^2- 2\Omega_2^2\right)\right] \hspace{1cm} \end{eqnarray}
From $v(s)$ one constructs the matrix $R$ using Eq. (\ref{ceq2}) and the matrix $S$ using Eqs. (\ref{sxx}--\ref{sxp}). To obtain the asymptotic state, we compute $S$ in the limit $t \rightarrow \infty$. The off-diagonal elements in the $X_+, X_-$ basis vanish, while \begin{eqnarray} S_{X_+X_+}&=&\frac{\gamma}{M \pi}\int_0^\infty d\omega \, \omega f(\omega) (-2\omega^2+\Omega_1^2+\Omega_2^2)^2 , \\ S_{P_+P_+}&=&\frac{4M\gamma}{\pi}\int_0^\infty d\omega \, \omega^3 f(\omega) (-2\omega^2+\Omega_1^2+\Omega_2^2)^2 , \\ S_{X_-X_-}&=&\frac{\gamma}{M\pi} (\Omega_1^2-\Omega_2^2)^2\int_0^\infty d\omega \, \omega f(\omega), \\ S_{P_- P_-}&=&\frac{4M\gamma}{\pi}(\Omega_1^2-\Omega_2^2)^2 \int_0^\infty d\omega \, \omega^3 f(\omega), \end{eqnarray} where \begin{eqnarray} f(\omega) = \frac{e^{-\frac{\omega^2}{\Lambda^2}}\coth\left(\frac{\omega}{2T}\right)} {[2(\omega^2-\Omega_1^2)^2+\gamma^2(\omega^2+\Omega_1^2)] [2(\omega^2-\Omega_2^2)^2+\gamma^2(\omega^2+\Omega_2^2)]}. \end{eqnarray}
The asymptotic values for $S$ can be evaluated to leading order in $\gamma/\Omega_i$ and $\Omega_i/\Lambda$ by substituting the Lorentzians in the integrals with a delta function, that is, $[(x-a)^2+\gamma^2]^{-1} \simeq \frac{\pi}{2 \gamma} \delta (x-a)$. This corresponds to the weak-damping limit of Ref. \cite{HZ}. We then obtain \begin{eqnarray} S_{X_+X_+} = S_{X_-X_-} = \frac{1}{8M} \left( \frac{\coth\frac{\Omega_1}{2T}}{\Omega_1} + \frac{\coth\frac{\Omega_2}{2T}}{\Omega_2} \right), \\ S_{P_+P_+} = S_{P_-P_-} = \frac{M}{2} \left(\Omega_1 \coth\frac{\Omega_1}{2T} + \Omega_2 \coth\frac{\Omega_2}{2T} \right), \end{eqnarray} which correspond to an asymptotic thermal state for the pair of oscillators.
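The frequency integrals above are straightforward to evaluate numerically. The following sketch (with hypothetical, dimensionless parameter values chosen only for illustration) computes $S_{X_+X_+}$ from $f(\omega)$ and compares it with the weak-damping expression; the integration limit and subdivision points are numerical choices, not part of the model.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Hypothetical, dimensionless parameters for illustration only.
M, gamma, T, Lam = 1.0, 0.05, 2.0, 50.0
Om1, Om2 = 1.0, 1.3

def f(w):
    """Spectral weight f(omega) entering the asymptotic S integrals."""
    num = np.exp(-w**2 / Lam**2) / np.tanh(w / (2 * T))
    den = (2 * (w**2 - Om1**2)**2 + gamma**2 * (w**2 + Om1**2)) * \
          (2 * (w**2 - Om2**2)**2 + gamma**2 * (w**2 + Om2**2))
    return num / den

# S_{X+X+} = (gamma / (M pi)) int_0^inf dw w f(w) (-2 w^2 + Om1^2 + Om2^2)^2;
# the tail above w = 20 is negligible for these parameters.
integral, _ = quad(lambda w: w * f(w) * (-2 * w**2 + Om1**2 + Om2**2)**2,
                   0.0, 20.0, points=[Om1, Om2], limit=400)
S_xx = gamma / (M * np.pi) * integral

# Weak-damping benchmark obtained from the delta-function substitution.
S_weak = (1 / (8 * M)) * (1 / (np.tanh(Om1 / (2 * T)) * Om1)
                          + 1 / (np.tanh(Om2 / (2 * T)) * Om2))
print(S_xx, S_weak)
\end{verbatim}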
\end{appendix}
\end{document} |
\begin{document}
\title{Probing multimode squeezing with correlation functions} \author{Andreas Christ\(^{1,2}\), Kaisa Laiho\(^2\), Andreas Eckstein\(^2\), Kati\'{u}scia N. Cassemiro\(^2\), and Christine Silberhorn\(^{1,2}\)} \address{\(^1\)Applied Physics, University of Paderborn, Warburger Straße 100, 33098 Paderborn, Germany} \address{\(^2\)Max Planck Institute for the Science of Light,\\ G\"unther-Scharowsky Straße 1/Bau 24, 91058 Erlangen, Germany} \ead{[email protected]}
\date{\today}
\begin{abstract} Broadband multimode squeezers constitute a powerful quantum resource with promising potential for different applications in quantum information technologies, such as information coding in quantum communication networks or quantum simulations in higher-dimensional systems. However, the characterization of a large array of squeezers that coexist in a single spatial mode is challenging. In this paper we address this problem and propose a straightforward method to determine the number of squeezers and their respective squeezing strengths by using broadband multimode correlation function measurements. These measurements employ the large detection windows of state-of-the-art avalanche photodiodes to simultaneously probe the full Hilbert space of the generated state, which enables us to benchmark the squeezed states. Moreover, due to the structure of correlation functions, our measurements are not affected by losses. This is a significant advantage, since detectors with low efficiencies are sufficient. Our approach is less costly than tomographic methods relying on multimode homodyne detection, which are based on much more demanding measurement and analysis tools and appear to be impractical for large Hilbert spaces. \end{abstract}
\pacs{42.50.-p 42.65.Yj 42.65.Lm 42.65.Wi 03.65.Wj}
\maketitle
\section{Introduction} The study of correlation functions has a long history and lies at the heart of coherence theory \cite{mandel_optical_1995}. Intensity correlation measurements were first performed by Hanbury Brown and Twiss in the context of classical optics \cite{brown_correlation_1956}. Since then correlation functions have become a standard tool in quantum optical experiments to study the properties of laser beams \cite{chopra_higher-order_1973}, parametric downconversion sources \cite{blauensteiner_photon_2009, ivanova_multiphoton_2006} or heralded single photons \cite{tapster_photon_1998, uren_characterization_2005, bussieres_fast_2008}. Current state-of-the-art experiments are able to measure correlation functions up to the eighth order \cite{avenhaus_accessing_2010}, giving access to diverse characteristics of photonic states. The normalized second-order correlation function \(g^{(2)}(0)\) probes whether the generated photons are bunched or anti-bunched, with \(g^{(2)}(0) < 1\) being a genuine sign of non-classicality \cite{loudon_quantum_2000}. The measurement of all unnormalized moments \(G^{(n)}\) of a given optical quantum state provides complete access to the photon-number distribution for arbitrary single-mode input states \cite{mandel_optical_1995}. Moreover, it is possible to perform a full state tomography with the help of correlation function measurements \cite{shchukin_universal_2006}.
The measurement of these correlation functions is, in general, performed in a time-resolved manner \(g^{(n)}(t_1, t_2, \dots t_n)\). Limited time resolution has been considered a detrimental effect and treated as an experimental imperfection \cite{tapster_photon_1998}. In contrast to previous work, we employ the finite time resolution of photo-detectors to gain access to the spectral character of broadband multimode quantum states. Our scheme of measuring broadband multimode correlation functions of pulsed quantum light is especially useful for probing squeezed states. These states are commonly generated via the interaction of light with a crystal exhibiting a \(\chi^{(2)}\)-nonlinearity, a process referred to as parametric downconversion (PDC) \cite{rarity_quantum_1992, mauerer_how_2009, wasilewski_pulsed_2006, lvovsky_decomposing_2007, wenger_pulsed_2004, anderson_pulsed_1997}, or with optical fibers featuring a \(\chi^{(3)}\)-nonlinearity, a process called four-wave mixing (FWM) \cite{loudon_squeezed_1987, levenson_generation_1985}.
In general the generated squeezed states exhibit multimode characteristics in the spectral degree of freedom, i.e. a set of independent squeezed states is created, with each squeezer residing in its own Hilbert space. This inherent multimode character renders these states powerful for coding quantum information, yet the same feature impedes a proper experimental characterization in a straightforward manner. Due to the sheer vastness of the corresponding Hilbert space, standard quantum tomography methods become time-consuming and ineffective. It is neither easy to determine the degree of squeezing in each mode nor the number of independently generated squeezers. Nonetheless, these are the key benchmarks defining the potential of a source for quantum information and quantum cryptography applications. In the following we investigate how to overcome these issues and elaborate on an alternative approach to determine the properties of multimode squeezed states based on measuring broadband multimode correlation functions.
This paper is structured as follows: In section \ref{sec:multimode_squeezer} we revisit the general structure of multimode twin-beam squeezers, with special attention --- though not restricted --- to states generated by parametric downconversion and four-wave mixing. Section \ref{sec:correlation_functions} presents the formalism of correlation functions, introduces the intricacies of finite time resolution and defines broadband multimode correlation measurements. Section \ref{sec:probing_twin_beam_squeezing} combines the findings of sections \ref{sec:multimode_squeezer} and \ref{sec:correlation_functions}: We analyze the relation between the number of generated squeezers, their respective squeezing strengths and broadband multimode correlation functions, which leads us to propose our scheme for characterizing multimode squeezing with the aid of broadband multimode correlation functions.
\section{Multimode Squeezers}\label{sec:multimode_squeezer} In a squeezed state of light one quadrature of the field exhibits an uncertainty below the standard quantum level at the expense of an increased variance in the conjugate quadrature, such that Heisenberg's uncertainty relation holds at its minimum attainable value. The standard description of squeezed states usually considers two different types of squeezers: single-beam squeezers and twin-beam squeezers. Single-beam squeezers create squeezing in a single optical mode \(\hat{S} = \exp\left(-\zeta \hat{a}^{\dagger2} + \zeta^* \hat{a}^2 \right)\), whereas twin-beam squeezers consist of \textit{two} beams with inter-beam squeezing \(\hat{S}^{ab} = \exp\left(-\zeta \hat{a}^\dagger \hat{b}^\dagger + \zeta^* \hat{a} \hat{b}\right) \) \cite{barnett_methods_2003}. In these equations \(\zeta\) labels the squeezing strength and the operators \(\hat{a}^\dagger, \hat{b}^\dagger\) create photons in distinct optical modes.
In this section we go beyond the standard description and discuss the theory of squeezed states which are generated by the interaction of ultrafast pump pulses with nonlinear crystals or optical fibers. Here, we concentrate on the spectral structure of the broadband output beams. In general the utilized optical processes, typically called optical parametric amplification (OPA) or parametric downconversion (PDC), do not generate one squeezer but a variety of different squeezers in multiple frequency modes. A whole set of independent squeezed beams is generated in broadband orthogonal spectral modes within an optical beam. We refer to these states as frequency multimode single- or twin-beam squeezers \cite{wasilewski_pulsed_2006}. Here the \textit{multimode} prefix indicates that more than one squeezer is present in the optical beam and the term \textit{single- or twin-beam} identifies whether one squeezed beam or two entangled squeezed beams are created. Due to the single-pass configuration of our sources, losses are negligible; hence we restrict ourselves to the analysis of pure squeezed states.
\subsection{Multimode twin-beam squeezers} The subject of our analysis is twin-beam squeezing generated by the propagation of an ultrafast pump pulse through a nonlinear medium (single-beam squeezers are discussed in \ref{app:single_beam_squeezer}). For simplicity we focus on the collinear propagation of all involved fields, each generated into a single spatial mode. This description is rigorously fulfilled for PDC in waveguides \cite{mosley_direct_2009, christ_spatial_2009}, but can also be applied to other experimental configurations, since the approximation carries all the complexities of the multimode propagation in the spectral degree of freedom. If the pump field is undepleted, we can neglect its quantum fluctuations and describe this OPA process by the effective quadratic Hamiltonian (see \ref{app:multimode_two_mode_squeezer_generation} for a detailed derivation) \begin{eqnarray}
\hat{H}_{OPA} = A \int \mathrm d \omega_s \int \mathrm d \omega_i\, f(\omega_s, \omega_i) \hat{a}_s^\dagger(\omega_s) \hat{a}_i^\dagger(\omega_i) + h.c. \, ,
\label{eq:effective_OPA_hamiltonian_two_mode} \end{eqnarray} in which the constant \(A\) denotes the overall efficiency of the OPA, and the function \(f(\omega_s, \omega_i)\) describes the normalized output spectrum of the downconverted beam, which --- in many cases --- is close to a two-dimensional Gaussian distribution. The operators \(\hat{a}^\dagger_s(\omega_s)\) and \(\hat{a}^\dagger_i(\omega_i)\) are the photon creation operators in the two twin-beam arms, conventionally labelled signal and idler, respectively.
The unitary transformation generated by the effective OPA Hamiltonian in equation \eref{eq:effective_OPA_hamiltonian_two_mode} can be written in the form \begin{eqnarray}
\fl \qquad \hat{U}_{OPA} &= \exp\left[-\frac{\imath}{\hbar} \left( A \int \mathrm d \omega_s \int \mathrm d \omega_i\, f(\omega_s, \omega_i) \hat{a}_s^\dagger(\omega_s) \hat{a}_i^\dagger(\omega_i) + h.c. \right) \right].
\label{eq:effective_OPA_unitary_two_mode_hamilton} \end{eqnarray} By virtue of the singular-value-decomposition theorem \cite{law_continuous_2000} we decompose the two terms in the exponential of equation \eref{eq:effective_OPA_unitary_two_mode_hamilton} as \begin{eqnarray}
\nonumber
-\frac{\imath}{\hbar} A f(\omega_s, \omega_i) = \sum_k r_k \psi^*_k(\omega_s) \phi^*_k(\omega_i), \,\,\,\, \mathrm{and}\\
-\frac{\imath}{\hbar} A^* f^*(\omega_s, \omega_i) = - \sum_k r_k \psi_k(\omega_s) \phi_k(\omega_i).
\label{eq:singular_value_decomposition} \end{eqnarray} Here both \(\left\{\psi_k(\omega_s)\right\}\) and \(\left\{\phi_k(\omega_i)\right\}\) each form a complete set of orthonormal functions. The amplitudes of the generated modes \(\psi_k(\omega_s)\) and \(\phi_k(\omega_i)\) are given by the \(r_k \in \mathbb{R}^+\) distribution. Employing equation \eref{eq:singular_value_decomposition} and introducing a new broadband mode basis \cite{rohde_spectral_2007} for the generated state as: \begin{eqnarray}
\hat{A}_k = \int \mathrm d \omega_s \psi_k(\omega_s) \hat{a}_s(\omega_s) \,\,\,\, \mathrm{and} \,\,\,\,
\hat{B}_k = \int \mathrm d \omega_i \phi_k(\omega_i) \hat{a}_i(\omega_i), \end{eqnarray} we obtain the unitary transformation \cite{mauerer_how_2009} \begin{eqnarray}
\nonumber
\hat{U}_{OPA} &= \exp\left[\sum_k r_k \hat{A}_k^\dagger \hat{B}_k^\dagger - h.c. \right] \\
\nonumber
&= \bigotimes_k \exp\left[r_k \hat{A}_k^\dagger \hat{B}_k^\dagger - h.c. \right] \\
&= \bigotimes_k \hat{S}^{ab}_k(-r_k).
\label{eq:effective_OPA_unitary_two_mode} \end{eqnarray} In total the OPA generates a tensor product of distinct broadband twin-beam squeezers as defined in \cite{barnett_methods_2003} with squeezing amplitudes \(r_k\), related to the available amount of squeezing via: \(\mathrm{squeezing[dB]} = -10 \log_{10}\left(e^{-2 r_k}\right)\). The Heisenberg representation of the multimode twin-beam squeezers is given by independent input-output relations for each broadband beam \begin{eqnarray}
\nonumber
\hat{A}_k \Rightarrow \cosh(r_k) \hat{A}_k + \sinh(r_k)\hat{B}_k^\dagger \\
\hat{B}_k \Rightarrow \cosh(r_k) \hat{B}_k + \sinh(r_k)\hat{A}_k^\dagger .
\label{eq:two_mode_squeezer_input_output_relation} \end{eqnarray} Note that the squeezer distribution \(r_k\) and basis modes \(\hat{A}_k\) and \(\hat{B}_k\) are unique and well-defined properties of the generated twin-beam. Their exact form is given by the Schmidt decomposition of the joint spectral amplitude \(-\frac{\imath}{\hbar} A f(\omega_s, \omega_i)\). This mathematical transformation directly yields the physical shape of the generated optical modes \(\psi_k(\omega_s)\), \(\phi_k(\omega_i)\) with each pair \(\hat{A}_k\) and \(\hat{B}_k\) being strictly correlated.
In figure \ref{fig:schmidt_modes} we illustrate one possible squeezer distribution and the corresponding broadband modes. The joint spectral distribution \(f(\omega_s, \omega_i)\) of the generated twin-beams shown in figure \ref{fig:schmidt_modes} defines the shape of the broadband signal and idler modes \(\hat{A}_k\) and \(\hat{B}_k\). In the special case of a Gaussian spectral distribution the form of the squeezing modes resembles the Hermite functions. The number of different squeezer modes is closely connected to the frequency correlations between the signal and idler beam. In the presented case the spectrally correlated beams lead to over 20 independent squeezers. The total amount of squeezing depends on the constant \(A\) appearing in the Hamiltonian in equation \eref{eq:effective_OPA_hamiltonian_two_mode}, which is directly related to the applied pump power \(I\) and the strength of the nonlinearity \(\chi^{(2)}\) of the medium \((A \propto \chi^{(2)} \sqrt{I})\).
\begin{figure}
\caption{Visualization of the singular value decomposition in equation \eref{eq:singular_value_decomposition}. The frequency distribution \(- \frac{\imath}{\hbar} A f(\omega_s, \omega_i)\) of the generated state defines the shape of the signal and idler modes \(\psi_k(\omega_s), \phi_k(\omega_i)\) and the squeezer distribution \(r_k\).}
\label{fig:schmidt_modes}
\end{figure}
The OPA state is mainly characterized by the number of squeezed modes and the overall gain of the process, both being determined by the distribution of the individual squeezing amplitudes \(r_k\). In order to analyze the number of generated squeezers independently of the amount of squeezing, we split the distribution of squeezing weights \(r_k\) into a normalized distribution \(\lambda_k\) \(\left(\sum_k \lambda_k^2 = 1\right)\), which characterizes the relative occupation of the different squeezers in the respective optical quantum state, and an overall gain of the process \(B \in \mathbb{R}^+\), quantifying the total amount of generated squeezing, according to \begin{eqnarray}
r_k = B \, \lambda_k. \end{eqnarray} The characterization of these two fundamental properties of a multimode twin-beam state is a major experimental challenge. While these states are easily generated in the lab, a tomography by means of homodyne detection would require matching, for each squeezed mode \(\hat{A}_k\) and \(\hat{B}_k\), a different local oscillator beam with an adapted temporal-spectral pulse shape. Multimode homodyning \cite{beck_joint_2001} may provide a route to circumvent this difficulty; however, an experimental implementation still appears challenging.
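As a concrete numerical illustration of the decomposition introduced above, the following minimal Python sketch discretizes a hypothetical Gaussian joint spectral amplitude, obtains the mode amplitudes \(r_k\) from a singular value decomposition, and splits them into the overall gain \(B\) and the normalized distribution \(\lambda_k\); all parameter values are illustrative assumptions, not data from an experiment.
\begin{verbatim}
import numpy as np

# Hypothetical joint spectral amplitude: a correlated two-dimensional Gaussian
# (detunings from the central frequencies, arbitrary units).
w = np.linspace(-3, 3, 400)
ws, wi = np.meshgrid(w, w, indexing='ij')
sigma_p, sigma_pm = 0.5, 1.5        # assumed pump and phasematching widths
f = np.exp(-(ws + wi)**2 / (2 * sigma_p**2)) \
    * np.exp(-(ws - wi)**2 / (2 * sigma_pm**2))

A = 0.2                              # assumed overall efficiency constant
dw = w[1] - w[0]

# Discretized Schmidt (singular value) decomposition: the columns of U and the
# rows of Vh approximate the broadband modes psi_k(w_s) and phi_k(w_i).
U, s, Vh = np.linalg.svd(A * f * dw)

r_k = s                              # squeezing amplitudes r_k
B = np.sqrt(np.sum(r_k**2))          # overall optical gain
lam_k = r_k / B                      # normalized distribution, sum(lam_k**2) = 1
print(B, lam_k[:5])
\end{verbatim}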
\section{Correlation functions}\label{sec:correlation_functions} The n-th order (normalized) correlation function \(g^{(n)}(t_1, t_2, \dots ,t_n)\) is generally defined as a time-dependent function of the electromagnetic field. For quantized electric field operators, it can be expressed as \cite{glauber_quantum_1963, loudon_quantum_2000, mandel_optical_1995, vogel_quantum_2006} \begin{eqnarray}
g^{(n)}(t_1, t_2, \dots, t_n)
=\frac{\left< \hat{E}^{(-)}(t_1) \dots \hat{E}^{(-)}(t_n)\hat{E}^{(+)}(t_1) \dots \hat{E}^{(+)}(t_n)\right>}
{\left< \hat{E}^{(-)}(t_1)\hat{E}^{(+)}(t_1)\right> \dots \left< \hat{E}^{(-)}(t_n) \hat{E}^{(+)}(t_n) \right>},
\label{eq:correlation_function-time_resolved} \end{eqnarray} and it measures the (normalized) n-th order temporal correlations at different points in time. Note that this definition of the correlation functions is independent of coupling losses and detection inefficiencies, yielding a loss-resilient measure \cite{avenhaus_accessing_2010}. Realistic detectors, however, suffer from internal jitter and finite gating times. We account for these resolution effects by weighting the correlation function with the appropriate detection window \(T(t)\) of the applied detectors as presented in \cite{tapster_photon_1998}, and obtain \begin{figure}
\caption{a) perfect time-resolved detection; b) finite detection gate; c) broadband detection gate exceeding the pulse duration giving rise to different types of correlation measures.}
\label{fig:correlation_function}
\end{figure}
\begin{eqnarray}
\nonumber
\fl g^{(n)}(t_1, t_2, \dots, t_n) = \\
\fl \qquad \frac{\int \mathrm d t_1 T(t_1) \dots \int \mathrm d t_n T(t_n) \left< \hat{E}^{(-)}(t_1) \dots \hat{E}^{(-)}(t_n)\hat{E}^{(+)}(t_1) \dots \hat{E}^{(+)}(t_n)\right>}
{ \int \mathrm d t_1 T(t_1) \left< \hat{E}^{(-)}(t_1)\hat{E}^{(+)}(t_1)\right> \dots \int \mathrm d t_n T(t_n) \left< \hat{E}^{(-)}(t_n) \hat{E}^{(+)}(t_n) \right>}.
\label{eq:correlation_function-time_resolved_finite_gating} \end{eqnarray} If the employed photo-detectors exhibit flat detection windows, exceeding the length of the investigated pulses (\(T(t) \rightarrow \mathrm{const.}\)), equation \eref{eq:correlation_function-time_resolved_finite_gating} can be simplified to \begin{eqnarray}
g^{(n)} =
\frac{\int \mathrm d t_1 \dots \mathrm d t_n \left< \hat{E}^{(-)}(t_1) \dots \hat{E}^{(-)}(t_n)\hat{E}^{(+)}(t_1) \dots \hat{E}^{(+)}(t_n)\right>}
{ \int \mathrm d t_1 \left< \hat{E}^{(-)}(t_1)\hat{E}^{(+)}(t_1)\right> \dots \int \mathrm d t_n \left< \hat{E}^{(-)}(t_n) \hat{E}^{(+)}(t_n) \right>}.
\label{eq:correlation_function-time_resolved_flat} \end{eqnarray} This theoretical model is adequate for the detection of ultrafast pulses with standard avalanche photodetectors. Furthermore, equation \eref{eq:correlation_function-time_resolved_flat} exhibits the convenient property of time independence and represents our generalized broadband multimode correlation function. Despite its similarity to the common correlation functions as defined in equation \eref{eq:correlation_function-time_resolved}, the broadband multimode correlation function in equation \eref{eq:correlation_function-time_resolved_flat} should no longer be naively interpreted as a general measure of n-th order coherence. In figure \ref{fig:correlation_function} we illustrate the main difference between the time-integrated and time-resolved correlation measurements.
Equation \eref{eq:correlation_function-time_resolved_flat} is still not optimal for our studies of squeezed light fields. We transform it further by replacing the electric field operators by photon creation and destruction operators (\(\hat{E}^{(+)}(t_n) \propto \hat{a}(t_n)\)) and perform a Fourier transform from the time domain into the frequency domain (\( \hat{a}(t) = \int \mathrm d \omega\, \hat{a}(\omega) e^{-\imath \omega t}\)). Equation \eref{eq:correlation_function-time_resolved_flat} is then rewritten as \begin{eqnarray}
\nonumber
g^{(n)} &=
\frac{\int \mathrm d \omega_1 \dots \mathrm d \omega_n \left< \hat{a}^\dagger(\omega_1) \dots \hat{a}^\dagger(\omega_n)\hat{a}(\omega_1) \dots \hat{a}(\omega_n)\right>}
{ \int \mathrm d \omega_1 \left< \hat{a}^\dagger(\omega_1)\hat{a}(\omega_1)\right> \dots \int \mathrm d \omega_n \left< \hat{a}^\dagger(\omega_n) \hat{a}(\omega_n) \right>} \\
&= \frac{\left<: \left( \int \mathrm d \omega \hat{a}^\dagger(\omega) \hat{a}(\omega) \right)^n: \right>}{\left< \int \mathrm d \omega \hat{a}^\dagger(\omega) \hat{a}(\omega) \right>^n},
\label{eq:correlation_function-time_resolved_flat_frequency_domain} \end{eqnarray} in which \(\left<: \cdots :\right>\) indicates normal ordering of the enclosed photon creation and destruction operators. In addition we adapt the correlation function to the basis of the measured quantum system, i.e. we perform a general basis transform from \(\hat{a}(\omega)\) to the basis of the measured multimode twin-beam squeezers \(\hat{A}_k\). This results in: \begin{eqnarray}
g^{(n)} = \frac{\left<:\left(\sum_k \hat{A}_k^\dagger\hat{A}_k\right)^n:\right>}{\left<\sum_k \hat{A}_k^\dagger\hat{A}_k\right>^n} \label{eq:broadband_correlation_function_final} \end{eqnarray} Equations \eref{eq:correlation_function-time_resolved_flat}, \eref{eq:correlation_function-time_resolved_flat_frequency_domain} and \eref{eq:broadband_correlation_function_final} stress the key difference between time-resolved and time-integrated correlation function measurements. While time-resolved correlation functions probe specific temporal modes, time-integrating detectors directly measure a superposition of all the different modes. This specific feature of broadband multimode detection is essential for our analysis. The simultaneous measurement of all different optical modes gives us direct \textit{loss-independent} access to the squeezer distribution of the probed state.
\subsection{Broadband multimode cross-correlation functions} In the previous section we restricted ourselves to intra-beam correlations. To allow for measurements of correlations between different beams, we extend our analysis. The identification of such inter-beam correlations is of special importance in quantum optics and quantum information applications, since they quantify the continuous variable entanglement between different subsystems, in our case the analyzed optical beams. In section \ref{sec:multimode_squeezer} we have already discussed one of the most widely employed entanglement sources: twin-beam squeezers. These states are not only entangled in their quadratures, but also in their spectral and spatial degrees of freedom \cite{braunstein_quantum_2005}. In order to probe higher-order cross-correlations of orders \(n\) and \(m\) between the two different beams, or subsystems, \(a\) and \(b\) \cite{vogel_quantum_2006}, we generalize equation \eref{eq:correlation_function-time_resolved} to \begin{eqnarray}
\nonumber
& \fl g^{(n, m)}(t^{(a)}_1, t^{(a)}_2, \dots, t^{(a)}_n; t^{(b)}_1, t^{(b)}_2, \dots, t^{(b)}_m) =\\
& \fl =\frac{\left< \hat{E}_a^{(-)}(t^{(a)}_1) \dots \hat{E}_a^{(-)}(t^{(a)}_n)\hat{E}_a^{(+)}(t^{(a)}_1) \dots \hat{E}_a^{(+)}(t^{(a)}_n) \times \hat{E}_b^{(-)}(t^{(b)}_1) \dots \hat{E}_b^{(-)}(t^{(b)}_m)\hat{E}_b^{(+)}(t^{(b)}_1) \dots \hat{E}_b^{(+)}(t^{(b)}_m)\right>}
{\left< \hat{E}_a^{(-)}(t^{(a)}_1)\hat{E}_a^{(+)}(t^{(a)}_1)\right> \dots \left< \hat{E}_a^{(-)}(t^{(a)}_n) \hat{E}_a^{(+)}(t^{(a)}_n) \right> \times \left<\hat{E}_b^{(-)}(t^{(b)}_1)\hat{E}_b^{(+)}(t^{(b)}_1)\right> \dots \left<\hat{E}_b^{(-)}(t^{(b)}_m)\hat{E}_b^{(+)}(t^{(b)}_m)\right>}.
\label{eq:cross_correlation_function-time_resolved} \end{eqnarray} Taking into account broadband detection windows --- exceeding the pulse duration --- the above formula can be reformulated as \begin{eqnarray}
g^{(n,m)} = \frac{\left<:\left(\int\mathrm d t\,\hat{E}_a^{(-)}(t)\hat{E}_a^{(+)}(t)\right)^n: :\left(\int\mathrm d t \,\hat{E}_b^{(-)}(t)\hat{E}_b^{(+)}(t)\right)^m: \right>}{\left<\int\mathrm d t \,\hat{E}_a^{(-)}(t)\hat{E}_a^{(+)}(t)\right>^n\left<\int\mathrm d t \,\hat{E}_b^{(-)}(t)\hat{E}_b^{(+)}(t)\right>^m}. \end{eqnarray} Again we perform the same simplifications as in equation \eref{eq:correlation_function-time_resolved_flat_frequency_domain} in section \ref{sec:correlation_functions}, namely we replace the electric field operators by photon creation and destruction operators, apply the Fourier transform from the time to frequency domain and finally we adapt the measurement basis to the given optical state. We find an extended version of equations \eref{eq:correlation_function-time_resolved_flat_frequency_domain} and \eref{eq:broadband_correlation_function_final} \begin{eqnarray}
g^{(n,m)} &= \frac{\left<:\left(\int\mathrm d\omega \,\hat{a}^\dagger(\omega)\hat{a}(\omega)\right)^n: :\left(\int\mathrm d\omega \,\hat{b}^\dagger(\omega)\hat{b}(\omega)\right)^m: \right>}{\left<\int\mathrm d\omega \,\hat{a}^\dagger(\omega)\hat{a}(\omega)\right>^n\left<\int\mathrm d\omega \,\hat{b}^\dagger(\omega)\hat{b}(\omega)\right>^m} \\
&= \frac{\left<:\left(\sum_k \hat{A}_k^\dagger\hat{A}_k\right)^n::\left(\sum_k \hat{B}_k^\dagger\hat{B}_k\right)^m:\right>}{\left<\sum_k \hat{A}_k^\dagger\hat{A}_k\right>^n\left<\sum_k \hat{B}_k^\dagger\hat{B}_k\right>^m}. \label{eq:broadband_cross-correlation_function} \end{eqnarray} Further extensions of cross-correlation measurements to systems consisting of more than two different beams are possible \cite{mandel_optical_1995}, but are not necessary within the scope of this paper.
\section{Probing frequency multimode squeezers via correlation functions}\label{sec:probing_twin_beam_squeezing} Using the theoretical description of squeezers as well as the derived broadband multimode correlation functions, we now combine the findings of sections \ref{sec:multimode_squeezer} and \ref{sec:correlation_functions}. We establish a connection between the broadband multimode correlation functions and the properties of the squeezing, i.e. the mode distribution \(\lambda_k\) and the optical gain \(B\).
\subsection{Probing the number of modes via \(g^{(2)}\)-measurements}\label{sec:probing_twin_beam_squeezing_mode_distribution} The most important property of frequency multimode squeezers is the number of independent squeezers in the generated twin-beam state, which is specified by the mode distribution \(\lambda_k\). In contrast to the optical gain \(B\), which is easily tuned by adjusting the pump power, the mode distribution \(\lambda_k\) is heavily constrained by the dispersion in the nonlinear material and hence --- in general --- not easily adjustable \cite{frequency_filter}. The effective number of modes in a multimode twin-beam state is given by the Schmidt number or cooperativity parameter \(K\) as defined in \cite{eberly_schmidt_2006, r_grobe_measure_1994} with \begin{eqnarray}
K = 1 / \sum_k \lambda_k^4.
\label{eq:schmidt_number} \end{eqnarray} Under the assumption of a uniform distribution of independent squeezers, it directly reflects the number of occupied modes. \begin{figure}
\caption{Setup to measure \(g^{(2)}\) of a multimode twin-beam squeezer.}
\label{fig:mm_two_mode_squeezr_g2_setup}
\end{figure} The mode number \(K\) of a multimode twin-beam squeezer can be directly accessed by measuring the broadband multimode \(g^{(2)}\)-correlation function in the signal or idler arm as depicted in figure \ref{fig:mm_two_mode_squeezr_g2_setup}. This is a result of the structure of the second-order correlation function, which --- by using \eref{eq:broadband_correlation_function_final} and \eref{eq:two_mode_squeezer_input_output_relation} --- can be expressed as \begin{eqnarray}
g^{(2)} &= 1 + \frac{\sum_k \sinh^4(r_k) } { \left[\sum_k \sinh^2(r_k)\right]^2 }.
\label{eq:gtwo_mm_two_mode_squeezer_nonlinear} \end{eqnarray} For our further analysis it is useful to distinguish the low gain from the high gain regime, corresponding to low and high levels of squeezing. In the low gain regime corresponding to biphotonic states typically referred to in the context of PDC experiments \(\sinh(r_k) \approx r_k = B \lambda_k\) and we are able to simplify equation \eref{eq:gtwo_mm_two_mode_squeezer_nonlinear} to \begin{eqnarray}
\nonumber
g^{(2)} &\approx 1 + \frac{\sum_k B^4 \lambda_k^4 } { \left(\sum_k B^2 \lambda_k^2\right)^2 }
= 1 + \frac{\sum_k \lambda_k^4 } { \left(\sum_k \lambda_k^2\right)^2 }
= 1 + \sum_k \lambda_k^4\\
&=1 + \frac{1}{K}.
\label{eq:gtwo_mm_two_mode_squeezer} \end{eqnarray} Consequently the effective number of modes is directly available from the correlation function measurement via \(K = 1 / ( g^{(2)} -1 )\). For a single twin-beam squeezer (\(K = 1\)), \(g^{(2)} = 2\), whereas for higher numbers of squeezers (\(K \gg 1\)) the contribution from the term \(\sum_k \lambda_k^4\) becomes negligible and \(g^{(2)}\) approaches one. This direct correspondence between \(g^{(2)}\) and the effective number of modes \(K\) is presented in figure \ref{fig:g2_mm_squeezer_results} (a).
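The following minimal sketch checks this correspondence numerically for an assumed thermal squeezer distribution (all parameter values are illustrative assumptions): it evaluates the exact expression \eref{eq:gtwo_mm_two_mode_squeezer_nonlinear} for \(g^{(2)}\) and compares the recovered mode number \(K = 1/(g^{(2)}-1)\) and, anticipating the thermal-distribution case discussed below, the parameter \(\mu = \sqrt{2/g^{(2)} - 1}\) with their true values.
\begin{verbatim}
import numpy as np

# Assumed thermal distribution lambda_k = sqrt(1 - mu^2) * mu^k.
mu, B = 0.8, 0.1                 # distribution parameter and low gain (hypothetical)
k = np.arange(200)
lam = np.sqrt(1 - mu**2) * mu**k # normalized: sum(lam**2) = 1
r = B * lam                      # squeezing amplitudes r_k

# Exact second-order correlation function of the multimode twin beam.
g2 = 1 + np.sum(np.sinh(r)**4) / np.sum(np.sinh(r)**2)**2

K_true = 1 / np.sum(lam**4)      # Schmidt number of the assumed distribution
K_est = 1 / (g2 - 1)             # mode number recovered from g2 (low-gain relation)
mu_est = np.sqrt(2 / g2 - 1)     # thermal-distribution parameter recovered from g2
print(g2, K_true, K_est, mu_est)
\end{verbatim}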
Another way of interpreting equation \eref{eq:gtwo_mm_two_mode_squeezer} is to approach the correlation function measurement from the photon-number point of view. The \(g^{(2)}\)-value of a single twin-beam squeezer, which exhibits a thermal photon-number distribution, evaluates to \(g^{(2)}\)\(=2\). If more squeezers are involved, the detector cannot distinguish between the different thermal distributions, i.e. it measures a convolution of all the different thermal photon streams, which approaches a Poissonian photon-number distribution \cite{mauerer_how_2009, avenhaus_photon_2008}. In fact one can show that the \(g^{(2)}\)-correlation function in equation \eref{eq:gtwo_mm_two_mode_squeezer_nonlinear} is the convolution of the second-order moments of each individual squeezer.
Once more, we stress that the \(g^{(2)}\)-measurement does not give access to the exact distribution of squeezers \(\lambda_k\), but to the \textit{effective} number of modes under the assumption that all squeezed states share an identical amount of squeezing. This is a rather crude model and does not fit many experimental realizations very well. Fortunately, there is a common class of squeezed states for which a much more refined mode distribution \(\lambda_k\) is accessible: In the case of a two-dimensional Gaussian joint-spectral distribution \(f(\omega_s, \omega_i)\), the distribution \(\lambda_k\) is thermal, \(\lambda_k = \sqrt{1 - \mu^2}\, \mu^k\), and thus it can be characterized by a single distribution parameter \(\mu\) \cite{uren_photon_2003}. The latter can be retrieved from a \(g^{(2)}\)-measurement via \(\mu = \sqrt{2 / g^{(2)} - 1}\), as depicted in figure \ref{fig:g2_mm_squeezer_results} (b), where we illustrate how the detection of the \(g^{(2)}\)-function can provide us directly with comprehensive knowledge about the underlying spectral mode structure of the analyzed state. \begin{figure}
\caption{a) Plot of the effective mode number \(K\) as a function of \(g^{(2)}\) for various effective numbers of modes. b) Visualization of \(\mu\) as a function of \(g^{(2)}\) for different thermal squeezer distributions.}
\label{fig:g2_mm_squeezer_results}
\end{figure}
In conclusion we have shown that by measuring the second-order correlation function \(g^{(2)}\) of a multimode broadband twin-beam state one can probe the corresponding distribution of spectral modes \(\lambda_k\). Our method displays the advantage that correlation functions can be measured in a very practical way \cite{eckstein_highly_2011}, resulting in an approach that is much easier than realizing homodyne measurements, which require addressing individual modes. As a side remark we would like to point out that one can also determine the effective number of squeezers from the higher moments \(g^{(n)}\), \(n \ge 2\), in a manner similar to the presented approach, yet \(g^{(2)}\) is already sufficient for our purposes.
\subsection{Probing the optical gain \(B\) of a multimode twin-beam squeezer via \(g^{(1,1)}\) measurements} In section \ref{sec:probing_twin_beam_squeezing_mode_distribution} we determined the number of modes in a loss-resilient way by measuring \(g^{(2)}\) for low gains \(B\). Here we investigate the amount of generated squeezing, determined by the overall optical gain \(B\). In order to probe this value, the setup has to be changed to measure the correlation function \(g^{(1,1)}\) of the generated twin-beam squeezer, as presented in figure \ref{fig:g11_mm_squeezer_setup}. \begin{figure}
\caption{Schematic setup to measure \(g^{(1,1)}\) of a multimode twin-beam squeezer generated via PDC.}
\label{fig:g11_mm_squeezer_setup}
\end{figure} Using equation \eref{eq:broadband_cross-correlation_function} and \eref{eq:two_mode_squeezer_input_output_relation} we obtain for \(g^{(1,1)}\) the form \begin{eqnarray}
\nonumber
\fl g^{(1,1)} = \frac{\sum_{k,l} \sinh^2(r_k) \sinh^2(r_l) + \sum_{k} \sinh^2(r_k) \cosh^2(r_k)}{\left[\sum_k \sinh^2(r_k)\right]^2} \\
\fl \qquad = 1 + \underbrace{\frac{1}{\sum_k \sinh^2(r_k)}}_{1/\left<n\right>} + \underbrace{\frac{\sum_k \sinh^4(r_k)}{\left[\sum_k \sinh^2(r_k)\right]^2}}_{g^{(2)}-1}. \label{eq:g11_two_mode_squeezer} \end{eqnarray} The relevant characteristic we exploit from this measurement is its dependence on both the number of modes in the system, as given by the \(g^{(2)}\)-function, \textit{and} the mean photon number in each arm, which is closely connected to the coupling coefficient \(B\). In the low gain regime (\(\sinh(r_k) \approx r_k\)), \(g^{(1,1)}\) simplifies to \begin{eqnarray}
g^{(1,1)} &\approx 1 + \frac{1}{B^2} + \underbrace{\frac{\sum_k \lambda_k^4}{\left[\sum_k \lambda_k^2 \right]^2}}_{g^{(2)}-1}
\approx \underbrace{g^{(2)}}_{\le 2} + \underbrace{\frac{1}{B^2}}_{\gg 1}
\approx \frac{1}{B^2}.
\label{eq:g11_two_mode_squeezer_low_gain} \end{eqnarray} Hence, the optical gain is --- in the low gain regime --- obtained from the \(g^{(1,1)}\)-measurement via the simple relation \(B \approx 1 / \sqrt{g^{(1,1)}}\). A dependence of this relation on the mode structure only appears at high squeezing strengths, where it deviates from equation \eref{eq:g11_two_mode_squeezer_low_gain} and takes on a more complicated form. In figure \ref{fig:two_mode_squeezer_g11_analytic_inverse} we plot the overall coupling value \(B\) as a function of \(g^{(1,1)}\), as given by equation \eref{eq:g11_two_mode_squeezer}: \(g^{(1,1)}\) takes on a high value for small optical gains \(B\) and rapidly decreases when the high gain regime is approached. \begin{figure}
\caption{The optical gain \(B\) plotted as a function of \(g^{(1,1)}\). For small values of \(B\) the correlation function \(g^{(1,1)}\) takes on a high value, yet rapidly decreases when the high gain regime is approached.}
\label{fig:two_mode_squeezer_g11_analytic_inverse}
\end{figure}
In total, measuring \(g^{(1,1)}\) gives direct \textit{loss-independent} access to the optical gain \(B\). This enables a loss-tolerant probing of the generated mean photon number, which, in the low gain regime, is even independent of the underlying mode structure.
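A minimal sketch of this step, reusing the thermal distribution assumed in the previous example, evaluates the exact \(g^{(1,1)}\) of equation \eref{eq:g11_two_mode_squeezer} and recovers the gain via the low-gain relation \(B \approx 1/\sqrt{g^{(1,1)}}\); the parameter values are again illustrative assumptions.
\begin{verbatim}
import numpy as np

mu, B = 0.8, 0.1                  # hypothetical distribution parameter and gain
k = np.arange(200)
lam = np.sqrt(1 - mu**2) * mu**k
r = B * lam

sh2 = np.sinh(r)**2
# Exact cross-correlation between the signal and idler beams.
g11 = 1 + 1 / np.sum(sh2) + np.sum(sh2**2) / np.sum(sh2)**2

B_est = 1 / np.sqrt(g11)          # low-gain estimate of the optical gain
print(g11, B, B_est)
\end{verbatim}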
Taking into account the prior knowledge we gained from section \ref{sec:probing_twin_beam_squeezing_mode_distribution}, we can now ascertain all parameters needed to fully determine the highly complex multimode state. The optical gain \(B\) not only defines the photon distribution, but also quantifies the generated twin-beam squeezing, i.e. the available CV-entanglement in each mode. Note that all modes exhibit different entanglement parameters. Depending on the state and its respective mode distribution, determined by the \(g^{(2)}\)-measurement, all the entanglement could be generated in a single spectral mode --- where it is readily available for quantum information experiments --- or in a multitude of different squeezed modes. Note, however, that after the state generation process multiple squeezers cannot be combined into a single optical mode by using only Gaussian operations, since this operation would be equivalent to continuous-variable entanglement distillation \cite{eisert_distilling_2002, fiurascaronek_gaussian_2002, giedke_characterization_2002}.
\section{Outlook} In this paper we focused on the state characterization of ultrafast twin-beam squeezers in the time domain and their experimental analysis. The presented approach, however, is not limited to twin-beam squeezers:
On the one hand, our measurement technique also applies to probing the squeezing of ultrafast multimode \textit{single}-beam squeezers as presented in \ref{app:single_beam_squeezer}. On the other hand, our approach is easily adapted to spatial multimode squeezed states \cite{treps_quantum_2005, chalopin_multimode_2010, lassen_generation_2006}. These are characterized by measuring correlation functions that are broadband in the spatial domain, in direct analogy to the spectral degree of freedom analyzed in this work.
\section{Conclusion} We elaborated on the generation of multimode squeezed beams and their characterization with multimode broadband correlation functions. We expanded the formalism of correlation functions by including the effects of finite time resolution. These extended correlation function measurements serve as a versatile tool for the characterization of optical quantum states such as twin-beam squeezers. They provide a simple, straightforward and \textit{loss-independent} way to investigate the characteristics of multimode squeezed states. Our findings are important for the field of efficient quantum state characterization and have already proven to be a useful experimental tool in the laboratory \cite{laiho_testing_2010, eckstein_highly_2011}.
\section{Acknowledgments} This work was supported by the EC under the grant agreements CORNER (FP7-ICT-213681), and QUESSENCE (248095). Kati\'{u}scia N. Cassemiro acknowledges support from the Alexander von Humboldt foundation. The authors thank Agata M. Bra\'nczyk, Malte Avenhaus and Benjamin Brecht for useful discussions and helpful comments. \\
\begin{appendix}
\section{Multimode twin-beam squeezer generation via nonlinear optical processes}\label{app:multimode_two_mode_squeezer_generation} \subsection{Generation of multimode twin-beam squeezers via parametric downconversion}\label{app:multimode_two_mode_squeezer_generation_PDC} In the process of parametric downconversion, squeezed states are generated by the interaction of a strong pump field with the \(\chi^{(2)}\)-nonlinearity of a crystal. Regarding the generation of twin-beam squeezers, the Hamiltonian of the corresponding three-wave-mixing process is given by \cite{mauerer_how_2009, braczyk_optimized_2010, grice_spectral_1997}: \begin{eqnarray}
\hat{H}_{PDC} = \int_{-\frac{L}{2}}^{\frac{L}{2}} \mathrm d z \, \chi^{(2)} \hat{E}^{(+)}_p(z,t) \hat{E}^{(-)}_s(z,t) \hat{E}_i^{(-)}(z,t) + h.c.
\label{eq:pdc_twin-beam_hamiltonian} \end{eqnarray} where we focused on a collinear interaction of all three beams. In equation \eref{eq:pdc_twin-beam_hamiltonian} \(L\) labels the length of the medium, \( \chi^{(2)} \) the nonlinearity of the crystal, and \(\hat{E}_p^{(+)}(z,t)\), \(\hat{E}^{(-)}_s(z,t)\), \(\hat{E}^{(-)}_i(z,t)\) the pump, the signal and the idler fields. The electric field operators used in equation \eref{eq:pdc_twin-beam_hamiltonian} are defined as follows \begin{eqnarray}
\fl \qquad \hat{E}_{x}^{(-)}(z,t) = \hat{E}_{x}^{(+)\dagger}(z,t) = C \int \mathrm d \omega_{x} \, \exp\left[-\imath\left(k_{x}(\omega_{x})z + \omega_{x} t\right)\right] \hat{a}^\dagger_{x}(\omega_{x}) \label{eq:electric_field_operator}, \end{eqnarray} in which we have merged all constants and slowly varying field amplitudes into the overall parameter \(C\). In order to simplify the Hamiltonian we treat the strong pump field as a classical wave \begin{eqnarray}
\hat{E}_p^{(+)}(z,t) \Rightarrow E_p(z,t) = \int \mathrm d \omega_p \, \alpha(\omega_p) \exp\left[\imath\left(k_p(\omega_p)z + \omega_p t\right)\right].
\label{eq:electric_pump_field} \end{eqnarray} Here \(\alpha(\omega_p) = A_p \exp\left[-(\omega_p - \mu_p)^2/ (2\sigma_p^2)\right]\) is the Gaussian pump envelope function generated by an ultrafast laser system, with field amplitude \(A_p\), central pump frequency \(\mu_p\), and pump width \(\sigma_p\).
The PDC Hamiltonian in equation \eref{eq:pdc_twin-beam_hamiltonian} generates the following unitary transformation: \begin{eqnarray}
\hat{U} = \exp\left[-\frac{\imath}{\hbar} \int_{-\infty}^{\infty}\mathrm d t' \, \hat{H}_{PDC}(t')\right]
\label{eq:unitary_operator_two_mode_squeezer} \end{eqnarray} In the low downconversion regime we can ignore the time-ordering of the electric field operators \cite{wasilewski_pulsed_2006, lvovsky_decomposing_2007} and directly evaluate the time integration. This yields a delta-function \(2 \pi \delta(\omega_s + \omega_i - \omega_p)\) and hence allows us to perform the integral over the pump frequency \(\omega_p\). Equation \eref{eq:unitary_operator_two_mode_squeezer} can be re-expressed as \begin{eqnarray}
\nonumber
\fl \hat{U} = \exp \left[ -\frac{\imath}{\hbar} \left(A' \int_{-\frac{L}{2}}^{\frac{L}{2}} \mathrm d z \int \mathrm d \omega_s \int \mathrm d \omega_i \, \right. \right. \\
\left. \left. \times \alpha(\omega_s + \omega_i) \exp\left[\imath \Delta k z\right] \hat{a}_s^\dagger(\omega_s) \hat{a}_i^\dagger(\omega_i) + h.c. \right) \right], \end{eqnarray} in which \(\Delta k = k_p(\omega_s + \omega_i) - k_s(\omega_s) - k_i(\omega_i)\) is the so-called phase mismatch and \(A'\) accumulates all constants. Finally, we perform the integration over the length of the crystal and obtain \begin{eqnarray}
\fl \qquad \hat{U} = \exp\left[ -\frac{\imath}{\hbar} \left( A \int \mathrm d \omega_s \int \mathrm d \omega_i \, \alpha(\omega_s + \omega_i)
\phi(\omega_s, \omega_i) \hat{a}_s^\dagger(\omega_s) \hat{a}_i^\dagger(\omega_i) + h.c. \right)\right],
\label{eq:unitary_typeII_process_derivation} \end{eqnarray} where \(\phi(\omega_s, \omega_i) = \mathrm{sinc}\left(\frac{\Delta k L}{2}\right)\) is referred to as the phasematching function. The latter combined with the pump distribution \(\alpha(\omega_s + \omega_i)\) gives the overall frequency distribution or joint spectral amplitude \(f(\omega_s, \omega_i)\) of the generated state. The final unitary squeezing operator of the downconversion process is \begin{eqnarray}
\hat{U} = \exp\left[-\frac{\imath}{\hbar}\underbrace{\left( A \int \mathrm d \omega_s \int \mathrm d \omega_i f(\omega_s, \omega_i) \hat{a}_s^\dagger(\omega_s) \hat{a}_i^\dagger(\omega_i) + h.c. \right)}_{\hat{H}_{eff}} \right].
\label{eq:unitary_PDC_twin_beam_derivation} \end{eqnarray}
The \(\mathrm{sinc}\) function in the phasematching function \(\phi(\omega_s, \omega_i)\) can be approximated by a Gaussian distribution \begin{eqnarray}
\fl \qquad \phi(\omega_s, \omega_i) = \mathrm{sinc}\left(\frac{\Delta k(\omega_s,\omega_i) L}{2}\right)
\approx \exp\left[-0.193 \left(\frac{\Delta k(\omega_s, \omega_i) L}{2}\right)^2\right].
\label{eq:phasematching_function_approximation} \end{eqnarray} With this simplification the joint frequency distribution \(f(\omega_s, \omega_i)\) takes on the form of a two-dimensional Gaussian distribution, and the exact squeezer distribution becomes accessible, as presented in section \ref{sec:probing_twin_beam_squeezing}.
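For illustration, the following short numerical sketch (with purely assumed parameter values and a linearized phase mismatch, i.e. not the settings of the experiment described in this work) constructs the Gaussian-approximated joint spectral amplitude \(f(\omega_s, \omega_i) = \alpha(\omega_s + \omega_i)\, \phi(\omega_s, \omega_i)\) on a frequency grid and extracts the squeezer mode distribution via a singular value decomposition:
\begin{verbatim}
import numpy as np

# Assumed, purely illustrative parameters (arbitrary frequency units).
mu_p, sigma_p = 2.0, 0.05        # pump central frequency and width
mu_s = mu_i = 1.0                # signal / idler central frequencies
k_p, k_s, k_i = 2.5, 1.0, 1.3    # linearized inverse group velocities
L = 50.0                         # crystal length

w = np.linspace(0.8, 1.2, 400)   # frequency grid for signal and idler
W_s, W_i = np.meshgrid(w, w, indexing="ij")

# Pump envelope alpha(w_s + w_i) and Gaussian-approximated phasematching.
alpha = np.exp(-((W_s + W_i - mu_p) ** 2) / (2 * sigma_p ** 2))
dk = k_p * (W_s + W_i - mu_p) - k_s * (W_s - mu_s) - k_i * (W_i - mu_i)
phi = np.exp(-0.193 * (dk * L / 2) ** 2)
f = alpha * phi                  # joint spectral amplitude (unnormalized)

# A singular value decomposition yields the Schmidt mode distribution.
_, s, _ = np.linalg.svd(f)
lam = s ** 2 / np.sum(s ** 2)
K = 1.0 / np.sum(lam ** 2)       # effective mode number
print("leading Schmidt coefficients:", np.round(lam[:5], 3))
print("effective mode number K =", round(K, 2))
\end{verbatim}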
\subsection{Generation of multimode twin-beam squeezers via four-wave-mixing} In a four-wave-mixing (FWM) process two strong pump fields interact with the \(\chi^{(3)}\)-nonlinearity of a fiber to create two new electric fields. If the two generated fields are distinguishable the Hamiltonian of the process is given by \cite{chen_quantum_2007} \begin{eqnarray}
\fl \qquad \hat{H}_{\mathrm{FWM}} = \int_{-\frac{L}{2}}^{\frac{L}{2}} \mathrm d z \, \chi^{(3)} \hat{E}^{(+)}_{p1}(z,t) \hat{E}^{(+)}_{p2}(z,t) \hat{E}^{(-)}_s(z,t) \hat{E}_i^{(-)}(z,t) + h.c. \,\, .
\label{eq:fwm_twin-beam_hamiltonian} \end{eqnarray} Again, we assume a collinear interaction of all beams. The electric fields for signal, idler and pump are defined in equations \eref{eq:electric_field_operator} and \eref{eq:electric_pump_field}. Performing the same steps as in \ref{app:multimode_two_mode_squeezer_generation_PDC} we obtain a similar unitary transformation \begin{eqnarray}
\fl \qquad \hat{U} = \exp\left[ -\frac{\imath}{\hbar} \underbrace{\left( A \int \mathrm d \omega_s \int \mathrm d \omega_i \, f_{\mathrm{FWM}}(\omega_s, \omega_i) \hat{a}_s^\dagger(\omega_s) \hat{a}_i^\dagger(\omega_i) + h.c. \right)}_{\hat{H}_{eff}}\right] .
\label{eq:unitary_FWM_twin_beam_derivation} \end{eqnarray} Equation \eref{eq:unitary_FWM_twin_beam_derivation} resembles equation \eref{eq:unitary_PDC_twin_beam_derivation} with the exception of the joint frequency distribution \(f_{\mathrm{FWM}}(\omega_s, \omega_i)\) which takes on a more complicated shape in comparison to the PDC case \begin{eqnarray}
\fl \qquad f_{\mathrm{FWM}}(\omega_s, \omega_i) = \int \mathrm d \omega_p \, \alpha(\omega_{p}) \alpha(\omega_s + \omega_i - \omega_p)\, \mathrm{sinc}\left( \frac{\Delta k (\omega_p, \omega_s, \omega_i) L}{2} \right) . \end{eqnarray} By comparing the unitary transformations in equations \eref{eq:unitary_typeII_process_derivation} and \eref{eq:unitary_FWM_twin_beam_derivation}, it is apparent that the two different processes create the same fundamental quantum state: multimode twin-beam squeezers.
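The additional integral over the pump frequency can be handled in the same spirit. The following sketch (again with assumed, purely illustrative parameters and a toy linearized phase mismatch) evaluates \(f_{\mathrm{FWM}}(\omega_s, \omega_i)\) by discretizing the \(\omega_p\)-integration:
\begin{verbatim}
import numpy as np

# Assumed, purely illustrative parameters (arbitrary frequency units).
mu_p, sigma_p = 1.0, 0.05          # pump central frequency and width
kp, ks, ki = 1.0, 0.9, 1.1         # linearized inverse group velocities
L = 30.0                           # fiber length

def alpha(w):                      # Gaussian pump envelope
    return np.exp(-((w - mu_p) ** 2) / (2 * sigma_p ** 2))

def delta_k(wp, ws, wi):           # toy linearized phase mismatch
    return (kp * (wp - mu_p) + kp * (ws + wi - wp - mu_p)
            - ks * (ws - mu_p) - ki * (wi - mu_p))

wp = np.linspace(mu_p - 5 * sigma_p, mu_p + 5 * sigma_p, 201)
dw = wp[1] - wp[0]

def f_fwm(ws, wi):
    """Riemann-sum evaluation of the pump-frequency integral."""
    integrand = (alpha(wp) * alpha(ws + wi - wp)
                 * np.sinc(delta_k(wp, ws, wi) * L / (2 * np.pi)))
    return np.sum(integrand) * dw

print(f_fwm(1.0, 1.0))
\end{verbatim}
Note that \texttt{numpy.sinc} is the normalized sinc function, which is why its argument carries an additional factor of \(1/\pi\).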
\section{Multimode single-beam squeezers}\label{app:single_beam_squeezer} In the main body of the paper we discussed the characterization of multimode twin-beam squeezers. Here we call attention to the fact that the broadband multimode correlation function formalism is also applicable to probe multimode single-beam squeezed states.
\subsection{Generation of multimode single-beam squeezers}\label{app:single_beam_squeezer_generation} Single-beam squeezers are created by PDC and FWM processes similar to the twin-beam states. The difference between the twin-beam and single-beam squeezer generation is that in the latter the generated beams are emitted into the same optical mode, whereas in the former two different optical modes are generated as discussed in \ref{app:multimode_two_mode_squeezer_generation}.
The PDC Hamiltonian generating a single-beam squeezer is given by \begin{eqnarray}
\hat{H} = \int_{-\frac{L}{2}}^{\frac{L}{2}} \mathrm d z \, \chi^{(2)} \hat{E}^{(+)}_p(z,t) \hat{E}^{(-)}(z,t) \hat{E}^{(-)}(z,t) + h.c. \,\, .
\label{eq:pdc_single-beam_hamiltonian} \end{eqnarray} Performing the same steps as in the case of twin-beam generation we obtain the unitary transformation \begin{eqnarray}
\hat{U} = \exp\left[-\frac{\imath}{\hbar}\underbrace{\left( A \int \mathrm d \omega_s \int \mathrm d \omega_i f(\omega_s, \omega_i) \hat{a}^\dagger(\omega_s) \hat{a}^\dagger(\omega_i) + h.c. \right)}_{\hat{H}_{eff}} \right]. \end{eqnarray} If the joint spectral distribution \(f(\omega_s, \omega_i)\) is engineered to be symmetric under permutation of signal and idler, the Schmidt decomposition is given by: \begin{eqnarray}
-\frac{\imath}{\hbar} A f(\omega_s, \omega_i) = \sum_k r_k \phi_k^*(\omega_s) \phi_k^*(\omega_i) \,\,\, \mathrm{and}\\
-\frac{\imath}{\hbar} A^* f^*(\omega_s, \omega_i) = -\sum_k r_k \phi_k(\omega_s) \phi_k(\omega_i)
\label{eq:singular_value_decomposition_single_beam} \end{eqnarray} Introducing broadband modes we obtain the multimode broadband unitary transformation \begin{eqnarray}
\nonumber
\hat{U} &= \exp\left[\sum_k r_k \hat{A}_k^\dagger \hat{A}_k^\dagger - h.c.\right] \\
\nonumber
&= \bigotimes_k \exp\left[r_k \hat{A}_k^\dagger \hat{A}_k^\dagger - h.c.\right]\\
&= \bigotimes_k \hat{S}(-r_k).
\label{eq:OPA_unitary_single_beam} \end{eqnarray} This is exactly the form of a frequency multimode single-beam squeezed state \cite{barnett_methods_2003}. Written in the Heisenberg picture, the broadband modes transform as \begin{eqnarray}
\hat{A}_k \rightarrow \cosh(r_k) \hat{A}_k + \sinh(r_k) \hat{A}_k^\dagger . \end{eqnarray} Single-beam squeezers are --- like twin-beam squeezers --- widely employed in quantum optics experiments \cite{zhu_photocount_1990, sasaki_multimode_2006}. As in the twin-beam squeezer case, the same states are generated by properly engineered FWM processes.
\subsection{Probing frequency multimode single-beam squeezers via correlation function measurements} In order to characterize the generated states we have to determine the optical gain \(B\) and mode distribution \(\lambda_k\) as in the case of multimode twin-beam squeezers (see section \ref{sec:probing_twin_beam_squeezing}). Therefore, we adapt the scheme presented in section \ref{sec:probing_twin_beam_squeezing} and probe the correlation functions \(g^{(2)}\) and \(g^{(3)}\) as sketched in figure \ref{fig:squeezer_two_mode_g2_g3_setup}. \begin{figure}
\caption{Schematic setup to measure a) \(g^{(2)}\) and b) \(g^{(3)}\) of a frequency multimode single-beam squeezer.}
\label{fig:squeezer_two_mode_g2_g3_setup}
\end{figure} For a multimode single-beam squeezer they can be written as: \begin{eqnarray}
g^{(2)} =& 1 + 2 \frac{\sum_k \sinh^4(r_k)}{\left[\sum_k \sinh^2(r_k)\right]^2} + \underbrace{\frac{1}{\sum_k \sinh^2(r_k)}}_{1/\left<n\right>} \,\,\,\,\, \mathrm{and}\\
\nonumber
g^{(3)} =& 1 + 6 \frac{\sum_k \sinh^4(r_k)}{\left[\sum_k \sinh^2(r_k)\right]^2} + 8 \frac{\sum_k \sinh^6(r_k)}{\left[\sum_k \sinh^2(r_k)\right]^3}\\
& + \frac{3}{\sum_k \sinh^2(r_k)} + 6 \frac{\sum_k \sinh^4(r_k)}{\left[\sum_k \sinh^2(r_k)\right]^3}.
\label{eq:mm_sm_squeezer_correlations} \end{eqnarray} In the single-beam case, however, \(g^{(2)}\) does not directly yield the effective number of modes \(K\) or the thermal mode distribution parameter \(\mu\), as it does for the multimode twin-beam squeezers in equation \eref{eq:gtwo_mm_two_mode_squeezer}. A joint measurement of \(g^{(2)}\) and \(g^{(3)}\) is necessary, as sketched in figure \ref{fig:g3_vs_g2_mm_squeezer_K_mu_dependence}. \begin{figure}
\caption{\(g^{(3)}\) as a function of \(g^{(2)}\) for various multimode single-beam squeezers. The effective number of modes and the thermal mode distribution parameter \(\mu\) of a multimode single-beam squeezer are encoded in the slope.}
\label{fig:g3_vs_g2_mm_squeezer_K_mu_dependence}
\end{figure} Clearly, the effective mode number \(K\) and the thermal mode distribution parameter \(\mu\) are given by the slope \(s\) of \(g^{(3)}\) versus \(g^{(2)}\). In figure \ref{fig:g3_vs_g2_mm_squeezer_K_mu_analysis} we plot the explicit dependence of \(K\) and \(\mu\) on the slope \(s\). Surprisingly, the functions exhibit almost the same shape as in the twin-beam squeezer case. \begin{figure}
\caption{a) Effective mode number \(K\) as a function of the slope of \(g^{(3)}[g^{(2)}]\). b) Thermal mode distribution \(\mu\) as a function of the slope of \(g^{(3)}[g^{(2)}]\) for multimode single-beam squeezed states.}
\label{fig:g3_vs_g2_mm_squeezer_K_mu_analysis}
\end{figure}
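As a numerical cross-check of the expressions above, the following sketch evaluates \(g^{(2)}\), \(g^{(3)}\) and the slope of \(g^{(3)}[g^{(2)}]\) for an assumed thermal-like mode distribution. The convention \(r_k = B \lambda_k\) with \(\sum_k \lambda_k^2 = 1\) is carried over from the twin-beam treatment and, like the parameter values, is an assumption of this illustration only:
\begin{verbatim}
import numpy as np

def g2_g3_single_beam(r):
    """Evaluate the g2 and g3 expressions above for squeezing parameters r_k."""
    s2 = np.sinh(np.asarray(r)) ** 2
    n = s2.sum()                                   # mean photon number <n>
    g2 = 1 + 2 * np.sum(s2 ** 2) / n ** 2 + 1 / n
    g3 = (1 + 6 * np.sum(s2 ** 2) / n ** 2 + 8 * np.sum(s2 ** 3) / n ** 3
            + 3 / n + 6 * np.sum(s2 ** 2) / n ** 3)
    return g2, g3

# Assumed thermal-like distribution lambda_k ~ mu^k with sum(lambda_k^2) = 1.
mu = 0.5
lam = mu ** np.arange(20)
lam /= np.linalg.norm(lam)

g2, g3 = g2_g3_single_beam(0.8 * lam)              # gain B = 0.8
print("g2 =", round(g2, 3), " g3 =", round(g3, 3))

# The slope of g3 versus g2, obtained by varying the gain B, encodes K and mu.
g2s, g3s = zip(*(g2_g3_single_beam(B * lam) for B in np.linspace(0.2, 2.0, 10)))
print("slope of g3[g2] =", round(np.polyfit(g2s, g3s, 1)[0], 3))
\end{verbatim}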
In order to obtain the gain of a multimode single-beam squeezer, a single \(g^{(2)}\)-measurement, which is sensitive to the coupling value \(B\), is sufficient, as presented in figure \ref{fig:g3_vs_g2_mm_squeezer_B} (similar to the \(g^{(1,1)}\)-measurement in the twin-beam squeezer case). In the low gain regime it is given via the relation \(B = 1 / \sqrt{g^{(2)}}\). Again, while describing a different system, the shape of the function \(B[g^{(2)}]\) is very similar to the twin-beam squeezer case. \begin{figure}
\caption{Optical gain \(B\) as a function of \(g^{(2)}\) for a multimode single-beam squeezed state.}
\label{fig:g3_vs_g2_mm_squeezer_B}
\end{figure}
In total, the theoretical description and derivation of multimode single-beam squeezers are very similar to the mathematics behind multimode twin-beam states. These similarities translate to multimode correlation functions which are able to probe the generated optical gain \(B\) and mode distribution \(\lambda_k\) as in the twin-beam case. \end{appendix}
\end{document} |
\begin{document}
\begin{abstract} We develop an elementary theory of partially additive rings as a foundation of ${\mathbb F}_1$-geometry. Our approach is so concrete that an analog of classical algebraic geometry is established very straightforwardly. As applications, (1) we construct a kind of group scheme ${{\mathbb{G}\mathbb{L}}}_n$ whose value at a commutative ring $R$ is the group of $n\times n$ invertible matrices over $R$ and at ${\mathbb F}_1$ is the $n$-th symmetric group, and (2) we construct a projective space $\mathbb P^n$ as a kind of scheme, count the number of points of ${\mathbb P}^n({\mathbb F}_q)$ for $q=1$ or $q$ a power of a rational prime, and explain why the number 1 appears in the subscript of ${\mathbb F}_1$ even though this field has two elements. \end{abstract} \title{Partially additive rings \
and group schemes over $un$} \tableofcontents \section{Introduction}
It seems that the notion of a so-called field with one element was first proposed by J. Tits\cite{tits-sur-les-analogues-algebriques}. There have been many attempts to define an algebraic geometry over $\FF_1.$
We start from a partially additive algebraic system, partial monoids. We impose strict associativity on partial monoids, in the sense of G. Segal\cite{segal-configuration-spaces-and}, who used the topological version of this structure in the study of configuration spaces. Then we define a partial ring as a partial monoid equipped with a binary commutative, associative multiplication with unity. Our approach is so concrete that we can establish an analog of the classical theory of schemes based on commutative rings with unity and define so-called partial schemes, which are locally partial-ringed spaces that are isomorphic to an affine partial scheme in an open neighborhood of each point.
Among others, rather concrete constructions of $\FF_1$-geometry from (partial) algebraic systems are those of Deitmar\cite{deitmar-schemes-over-f1}, Deitmar\cite{deitmar-congruence-schemes} and Lorscheid\cite{lorscheid-the-geometry-of-1}. In particular, the latter two treat partially additive algebraic systems, so our approach resembles theirs most closely. Indeed, the category of partial rings is embedded in the category of blueprints.
The main outcome of our construction is that we can describe a group valued functor ${\mathbb{G}\mathbb{L}}_n$ by a partial group object in the category of partial rings, which means that we have constructed an affine partial group partial scheme. When this functor is applied to good partial rings, it takes values in the category of groups. For example, if $A$ is a commutative ring with unity, then ${\mathbb{G}\mathbb{L}}_n(A)$ is nothing but the general linear group of $n\times n$ matrices, and if $A=\FF_1, $ then ${\mathbb{G}\mathbb{L}}_n(\FF_1) =\mathfrak S_n$ is the $n$-th symmetric group.
Another modest outcome of our approach is an explanation of why the number 1 appears in the notation of our field even though it has two elements. Namely, the number is there because only one element can be added to 1 in the field $\FF_1 = \set{0,1},$ while in the usual finite field $\mathbb{F}_q$ there are $q$ such elements. (See Example \ref{ex:projective}.) In this paper, $\mathbb{N}$ denotes the set of non-negative integers.
{\it Acknowledgement.} Conversations with Bastiaan Cnossen about tensor products of partial monoids were very useful to the author. Indeed, the associative closure and the tensor product which appear in this article are contained in his argument \cite{bastiaan-cnossen-master-thesis}, explicitly or implicitly. Yoshifumi Tsuchimoto suggested to the author that the search for a group scheme in $\FF_1$-geometry is important. Bastiaan Cnossen, Katsuhiko Kuribayashi, Makoto Masumoto, Shuhei Masumoto, Jun-ichi Matsuzawa, Kazunori Nakamoto and Yasuhiro Omoda gave valuable comments on previous versions of this article. The author is grateful to these people.
\section{Additive Part} \subsection{Partial Magmas and Monoids} \begin{defn}[partial magma and partial monoid]
A (commutative and unital) {\bf partial magma} is
\begin{enumerate}
\item a set $A,$ with a distinguished element $0,$
\item a subset $A_2 $ of $A\times A,$
\item a map $+\colon A_2 \to A,$
\end{enumerate}
such that
\begin{enumerate}[label= (\alph*)]
\item $(0,a)\in A_2, (a,0) \in A_2$ and $a+0 = a = 0+a,$ for all $a\in A,$
\item if $(a,b) \in A_2$ then $(b,a) \in A_2$ and $a+b = b+a,$ for all $a, b \in A.$
\end{enumerate}
If, moreover, it satisfies the following condition, we say that it is a (commutative and unital) {\bf partial monoid} :
\begin{enumerate}[label= (\alph*)]
\setcounter{enumi}{2}
\item $(a,b), (a+b,c) \in A_2$ if and only if $(b,c), (a,b+c)\in A_2$ for all $a, b, c \in A$
and in such a case, $(a+b)+c = a+(b+c).$
\end{enumerate} \end{defn}
If, instead, a partial magma satisfies the following condition, we say that it is a (commutative and unital) {\bf weak partial monoid} (or
weakly associative partial magma) :
\begin{enumerate}[label= (\alph*)]
\setcounter{enumi}{3}
\item If $a_1 + \dots + a_r$ can be calculated in $A,$ after inserting parentheses, in two or more ways,
then the results are equal for all $r\in \mathbb{N}$ and $a_1, \dots, a_r \in A.$
\end{enumerate}
\begin{defn}
Let $A$ and $B$ be partial magmas. A map $f\colon A\to B$ is called a {\bf homomorphism} if
$f(0) = 0, (f\times f)(A_2) \subseteq B_2$ and $f(a_1 + a_2) = f(a_1) + f(a_2)$ for all $(a_1, a_2) \in A_2.$
If $A$ and $B$ are partial monoids, a map $A\to B$ is a {\bf homomorphism} if it is a homomorphism of partial magmas.
The categories of partial magmas and of partial monoids are denoted by ${\mathcal{P\!M}ag}$ and ${\mathcal{P\!M}on},$
respectively. \end{defn}
\begin{example}
A partial magma of order 1 is isomorphic to $0 = \Set{0},$ which is a partial monoid.
A partial magma of order 2 is isomorphic to one of the following :
\begin{enumerate}
\item $\mathbb{F}_1 = \Set{0,1}$ where $1+1$ is undefined,
\item $\mathbb{F}_2 = \Set{0,1}$ where $1+1 = 0$ and
\item $\mathbb{B} = \Set{0,1}$ where $1+1 = 1.$
\end{enumerate}
These are also partial monoids. \end{example}
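For readers who prefer an operational description, the following sketch (not part of the formal development) encodes a finite partial magma by its partial addition table and checks condition (c) by brute force; the three partial monoids of order two above appear as instances:
\begin{verbatim}
from itertools import product

class PartialMagma:
    """A finite commutative unital partial magma, given by a partial
    addition table on a finite carrier containing 0."""
    def __init__(self, elements, table):
        self.elements = set(elements) | {0}
        self.table = dict(table)
        for (a, b), c in list(self.table.items()):
            self.table[(b, a)] = c                        # commutativity (b)
        for a in self.elements:
            self.table[(a, 0)] = self.table[(0, a)] = a   # unit law (a)

    def add(self, a, b):
        return self.table.get((a, b))      # None means "undefined"

    def is_partial_monoid(self):
        """Check (c): (a+b)+c is defined iff a+(b+c) is, and then they agree."""
        for a, b, c in product(self.elements, repeat=3):
            ab, bc = self.add(a, b), self.add(b, c)
            left = self.add(ab, c) if ab is not None else None
            right = self.add(a, bc) if bc is not None else None
            if (left is None) != (right is None) or left != right:
                return False
        return True

F1 = PartialMagma({0, 1}, {})                # 1 + 1 undefined
F2 = PartialMagma({0, 1}, {(1, 1): 0})       # 1 + 1 = 0
B  = PartialMagma({0, 1}, {(1, 1): 1})       # 1 + 1 = 1
print([M.is_partial_monoid() for M in (F1, F2, B)])   # [True, True, True]
\end{verbatim}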
\begin{example}
Any based set $(X,0)$ can be given a partial monoid structure by putting $X_2 = \Set{0} \times X \cup X\times \Set{0}.$
A homomorphism between such partial monoids is nothing but a based map between the given based sets. \end{example}
\begin{example}
Any abelian monoid $M$ can be given a partial monoid structure by putting $M_2 = M\times M.$
A homomorphism between such partial monoids is nothing but a homomorphism between
the given abelian monoids. \end{example}
These examples show that we can embed the category of based sets ${\mathcal{S}et}_0$ and that of abelian monoids ${\mathcal{A}b\mathcal{M}on}$ into ${\mathcal{P\!M}on}.$ In this paper, based sets and abelian monoids (and abelian groups) are always considered as partial monoids unless otherwise specified.
It is easily shown that a homomorphism $f\colon A\to B$ is a monomorphism in the category of partial monoids if and only if it is an injective map of underlying sets. When $A$ and $B$ are partial magmas, we say that $A$ is contained in $B$ to mean that $A$ is a partial submagma of $B.$ Remark that a homomorphism $f\colon A\to B$ is an epimorphism if it is a surjective map of underlying sets, but the converse is false. For example, the map $\FF_1 \to \mathbb{N}$ determined by $1\mapsto 1$ is a monomorphism and epimorphism, but not an isomorphism.
\subsection{Monoid completion}
In this section, we show that for any partial magma $A,$ there exists a monoid $A_{mon}$ and a homomorphism $\mu\colon A\to A_{mon}$ which is universal among the homomorphisms from $A$ to abelian monoids.
Let $A$ be a partial magma and $\mathbb{N}[A]$ be the free abelian monoid generated by the underlying set of $A.$ More precisely, \[
\mathbb{N}[A] = \Set{ a_1 \dotplus \dots \dotplus a_r | r\in \mathbb{N}, a_i \in A\,(1\leq i\leq r)}, \] where the empty sum is the unit. By the injective homomorphism $A \to \mathbb{N}[A]~;~a\mapsto a$ of partial magmas, we regard $A$ as a partial submagma of $\mathbb{N}[A].$ Let $\sim$ be the equivalence relation on $\mathbb{N}[A]$ generated by $0\dotplus x\sim x$ and $(a_1+a_2) \dotplus x \sim a_1\dotplus a_2\dotplus x,$ where $x$ is any element of $\mathbb{N}[A]$ and $(a_1, a_2) \in A_2.$ We put $A_{mon} = \mathbb{N}[A]/\sim.$ Since $\sim$ is an additive equivalence relation, $A_{mon}$ has a monoid structure such that the projection $\mathbb{N}[A]\to A_{mon}$ is a homomorphism of monoids. The composite $A\to \mathbb{N}[A] \to A_{mon}$ is denoted by $\mu\colon A\to A_{mon}.$
\begin{prop}
Let $A$ be a partial magma and $f\colon A\to B$ be a homomorphism to an abelian monoid $B.$
Then there exists a unique homomorphism $f_{mon}\colon A_{mon} \to B$
such that $f_{mon}\circ \mu = f.$
\end{prop}
\begin{proof}
We put $f_{mon}([a_1\dotplus \dots \dotplus a_r]) = f(a_1) + \dots + f(a_r),$ then we have a homomorphism
$f_{mon} \colon A_{mon} \to B$ such that $f_{mon}\circ \mu = f,$ which is unique. \end{proof}
Remark that $\mu$ is not necessarily a monomorphism. The image $\mu(A)$ has a natural structure of a partial submagma of $A_{mon};$ it is weakly associative and is denoted by $A_{wass}.$
\subsection{Associative closure}
In this section, we show that for any partial magma $A,$ there exists a partial monoid $A_{ass}$ and a homomorphism $\alpha\colon A\to A_{ass}$ which is universal among the homomorphisms from $A$ to partial monoids.
Let $A$ be a partial magma and $\Set{B^{(\lambda)}}$ be a family of partial submagmas of $A.$ If we put \[ B = \cap_\lambda B^{(\lambda)} \mbox{~and~} B_2 = \cap_\lambda B^{(\lambda)}_2 \] then $B$ is the largest partial submagma of $A$ which is contained in every $B^{(\lambda)}.$ If all the $B^{(\lambda)}$ are partial submonoids of $A,$ then $B$ is also a partial submonoid of $A.$
On the other hand, let $A$ be a partial magma and $\Set{B^{(\lambda)}}$ be a family of partial submagmas of $A$ which is totally ordered by containment. If we put \[ B = \cup_\lambda B^{(\lambda)} \mbox{~and~} B_2 = \cup_\lambda B^{(\lambda)}_2 \] then $B$ is the smallest partial submagma of $A$ which contains every $B^{(\lambda)}.$ If all the $B^{(\lambda)}$ are partial submonoids of $A,$ then $B$ is also a partial submonoid of $A.$
Let $A$ be a partial monoid and $B$ be a partial submagma of $A.$ We define $B_{ass, A}$ to be the smallest partial submonoid of $A$ which contains $B.$ We call $B_{ass,A}$ the associative closure of $B$ in $A.$ $B_{ass, A}$ can be constructed inductively as follows:
Put $B^{(0)} = B$ and $B^{(0)}_2 = B_2.$ Suppose we have constructed a partial submagma $B^{(n-1)}\subseteq A.$ Consider a condition \[
(*)~~~(a+b) + c \mbox{~can be calculated in~} B^{(n-1)} \] for a triple $(a,b,c) \in B^{(n-1)}\times B^{(n-1)}\times B^{(n-1)}.$ If we put \begin{align*}
B^{(n)} &= B^{(n-1)} \cup \Set{ b+c | (a,b,c) \mbox{~satisfies~}(*) }\mbox{~and~}\\
B^{(n)}_2 &= B^{(n-1)}_2\cup (\Set{0}\times B^{(n-1)} )\cup (B^{(n-1)}\times \Set{0} )\\
&\phantom{=} \cup \Set{ (b,c), (c,b), (a,b+c), (b+c, a) | (a,b,c)\mbox{~satisfies~}(*)}, \end{align*} then $B^{(n)}$ has a natural structure of partial submagma of $A.$ Constructing $B^{(n)}, n=0,1,2,\dots $ inductively, we put \[
B' = \cup_{n\geq 0} B^{(n)} \mbox{~and~} B'_2 = \cup_{n\geq 0} B^{(n)}_2. \] Now, $B'$ is a partial submonoid of $A$ which contains $B$ and which is the smallest. Thus $B' = B_{ass, A}.$
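The construction above is a fixed-point computation and can be carried out mechanically. The following sketch (an illustration only; the ambient partial monoid is assumed to be finite and given by its partial addition table) implements it:
\begin{verbatim}
def associative_closure(add_A, B, B2):
    """Associative closure B_{ass,A} of a partial submagma (B, B2) inside an
    ambient finite partial monoid A with partial addition add_A = {(a,b): a+b}."""
    B, B2 = set(B) | {0}, set(B2)
    B2 |= {(0, b) for b in B} | {(b, 0) for b in B}
    while True:
        new_elems, new_pairs = set(), set()
        for (a, b) in B2:
            ab = add_A[(a, b)]
            for c in B:
                if (ab, c) not in B2:
                    continue                 # condition (*) fails for (a, b, c)
                bc = add_A[(b, c)]           # exists since A is a partial monoid
                new_elems.add(bc)
                new_pairs |= {(b, c), (c, b), (a, bc), (bc, a),
                              (0, bc), (bc, 0)}
        if new_elems <= B and new_pairs <= B2:
            return B, B2                     # fixed point: B' = B_{ass, A}
        B |= new_elems
        B2 |= new_pairs

# Example: ambient monoid Z/4; on {0, 1, 2} only 1+1 and 2+2 are declared summable.
Z4 = {(a, b): (a + b) % 4 for a in range(4) for b in range(4)}
print(associative_closure(Z4, {0, 1, 2}, {(1, 1), (2, 2)}))
\end{verbatim}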
For any partial magma $A,$ we define $A_{ass}$ to be the smallest partial submonoid of $A_{mon}$ which contains $A_{wass}.$ The composite $A\to A_{wass} \to A_{ass}$ is denoted by $\alpha.$
\begin{prop}
Let $A$ be a partial magma and $f\colon A\to B$ be a homomorphism to a partial monoid $B.$
Then there exists a unique homomorphism $f_{ass}\colon A_{ass} \to B$
such that $f_{ass}\circ \alpha = f.$ \end{prop}
\subsection{Limits and colimits in ${\mathcal{P\!M}ag}$ and ${\mathcal{P\!M}on}$} \label{sec:pmag_pmon_complete_cocomplete} It is easily checked that ${\mathcal{P\!M}ag}$ has all small limits and all small colimits. It is also easily checked that limits of diagrams in ${\mathcal{P\!M}on}$ computed in ${\mathcal{P\!M}ag}$ are again limits in ${\mathcal{P\!M}on},$ so ${\mathcal{P\!M}on}$ has all small limits. If we construct a colimit in ${\mathcal{P\!M}ag}$ from a given diagram in ${\mathcal{P\!M}on},$ then it is not in general a colimit in ${\mathcal{P\!M}on},$ but composing with the associative-closure functor makes it one. This shows that ${\mathcal{P\!M}on}$ has all small colimits.
\subsection{Tensor product and $\mathop{\mathrm{Hom}}\nolimits$}
In this section, we define tensor product and $\mathop{\mathrm{Hom}}\nolimits.$ Propositions are given without proof since each of them can be proved by a formal argument.
Let $A, B$ be partial monoids and $\mathbb{N}[A\times B]$ be the free abelian monoid generated by the set $A\times B.$ Let $\sim$ be the equivalence relation on $\mathbb{N}[A\times B]$ generated by \begin{enumerate}
\item $(0, b) \dotplus x \sim x \sim (a,0)\dotplus x$ for all $a\in A, b\in B$ and $x\in \mathbb{N}[A\times B],$
\item $(a_1, b) \dotplus (a_2, b) \dotplus x \sim (a_1+a_2, b) \dotplus x$ for all $x\in \mathbb{N}[A\times B], b\in B$ and $(a_1, a_2) \in A_2,$
\item $(a, b_1) \dotplus (a, b_2) \dotplus x \sim (a, b_1+b_2) \dotplus x$ for all $x\in \mathbb{N}[A\times B], a\in A$ and $(b_1, b_2) \in B_2.$ \end{enumerate} We put $T(A, B) = \mathbb{N}[A\times B]/ \sim.$ Then $T(A,B)$ has an abelian monoid structure such that $\pi\colon \mathbb{N}[A\times B] \to T(A,B)$ is a homomorphism of monoids. An element of $T(A,B)$ represented by $(a,b) \in A\times B$ is denoted by $a\otimes b.$ We give $\pi(A\times B) \subseteq T(A,B)$ the maximal partial magma structure.
\begin{defn}[tensor product]
Let $A$ and $B$ be partial monoids.
The associative closure of $\pi(A\times B)$ is
denoted by $A\otimes B$ and is called the {\bf tensor product} of $A$ and $B.$ \end{defn} \begin{defn}[bilinear map] We say that a map $f\colon A\times B \to C$ is bilinear if for each $a\in A$ the map $B\to C,$ given by $b\mapsto f(a,b)$ is a partial monoid homomorphism and for each $b\in B$ the map $A\to C$ given by $a\mapsto f(a,b)$ is a partial monoid homomorphism. \end{defn}
\begin{prop}
Let $A, B, C$ be partial monoids and $f\colon A\times B \to C$ a bilinear map.
Then there exists a unique partial monoid homomorphism
$\tilde{f}\colon A\otimes B \to C$ which makes the following diagram commute:
\begin{center}
\begin{tikzcd}
A\times B \ar{r}{f}\ar{d} & C\\
A\otimes B\ar{ru}[below]{\tilde{f}},
\end{tikzcd}
\end{center}
where the vertical map is the canonical map.\qed \end{prop}
For partial magmas $A,B$ we would like to define \begin{align*}
\mathop{\mathrm{Hom}}\nolimits_{\mathcal{P\!M}ag} (A, B) &= \Set{ f\colon A\to B | f : \mbox{homomorphism} }\\
\mathop{\mathrm{Hom}}\nolimits_{\mathcal{P\!M}ag} (A, B)_2 &= \Set{ (f,g) | (f(a), g(a)) \in B_2 \mbox{~for all~} a\in A }. \end{align*} But a formula $(f+g)( a ) = f(a) + g(a)$ does not define a homomorphism $f+g \colon A\to B$ unless $B$ is associative. If we assume that $B$ is a partial monoid, then $\mathop{\mathrm{Hom}}\nolimits_{\mathcal{P\!M}ag} (A,B)$ given above is a partial monoid. Thus we have a functor $\mathop{\mathrm{Hom}}\nolimits_{\mathcal{P\!M}ag}( -, - ) \colon {\mathcal{P\!M}ag}^{op}\times {\mathcal{P\!M}on} \to {\mathcal{P\!M}on}.$ We also have a functor $\mathop{\mathrm{Hom}}\nolimits_{\mathcal{P\!M}on}( -, - ) \colon {\mathcal{P\!M}on}^{op}\times {\mathcal{P\!M}on} \to {\mathcal{P\!M}on}.$
\begin{prop}\label{prop:bilinear_pmag_pmon}
Let $A$ and $B$ be partial magmas and $C$ be a partial monoid.
If $f \colon A\times B\to C$ is a bilinear map, then there exists a unique bilinear map
$\tilde{f}\colon A_{ass}\times B_{ass} \to C$ which makes the following diagram commute:
\begin{center}
\begin{tikzcd}
A\times B \ar{r}{f}\ar{d} & C\\
A_{ass}\times B_{ass}\ar{ru}[below]{\tilde{f}}.
\end{tikzcd}
\end{center}\qed \end{prop}
\begin{prop}
Let $A$ be a partial monoid, then
we have an adjoint pair of functors
\[
- \otimes A \colon {\mathcal{P\!M}on} \leftrightarrows {\mathcal{P\!M}on} : \mathop{\mathrm{Hom}}\nolimits(A, -)
\]
\qed \end{prop} \begin{prop}
$({\mathcal{P\!M}on}, \otimes , \FF_1)$ constitutes a symmetric monoidal category. \qed \end{prop}
\subsection{Equivalence relations}
For a detailed discussion on equivalence relations, see \S 2.5 of \cite{borceux-handbook-2} or \cite{nlab-congruences}. As opposed to the terminology in \cite{nlab-congruences}, we reserve the word ``congruence'' for effective equivalence relations on partial rings, which appear in a later section of this paper. Let $A$ be a partial magma and $R$ be an equivalence relation on $A$ in ${\mathcal{P\!M}ag}.$ Thus $R$ is a partial submagma of $A\times A$ such that for any partial magma $X,$ \[
\mathop{\mathrm{Hom}}\nolimits_{{\mathcal{P\!M}ag}} (X, R) \subseteq \mathop{\mathrm{Hom}}\nolimits_{{\mathcal{P\!M}ag}} (X, A\times A)
\cong \mathop{\mathrm{Hom}}\nolimits_{{\mathcal{P\!M}ag}}(X, A)\times \mathop{\mathrm{Hom}}\nolimits_{{\mathcal{P\!M}ag}}(X, A) \] is an equivalence relation on the set $\mathop{\mathrm{Hom}}\nolimits_{{\mathcal{P\!M}ag}} (X,A).$ Recall that $R$ is said to be effective if $R\rightrightarrows A$ is a kernel pair of some $f\colon A\to B,$ {\it i.e.} the following diagram is a pullback diagram: \[
\begin{tikzcd}
R\ar{r}\ar{d} & A\ar{d}{f}\\
A\ar{r}[below]{f} & B.
\end{tikzcd} \]
\begin{defn}
Let $A$ be a partial magma.
We say that an equivalence relation $R$ on $A$ is
{\bf additive} if $R_2 = (R\times R) \cap A_2.$ \end{defn}
If $R$ is an additive equivalence relation on a partial magma $A,$ then we can define a partial magma by \begin{align*}
&A/\!\!/ R = \mbox{~the quotient set of~} A\mbox{~by~} R,\\
&(A/\!\!/ R)_2 = \Set{ ([a_1], [a_2] ) | (a_1, a_2) \in A_2 }\\
&[a_1]+[a_2] = [a_1+a_2]. \end{align*}
It is easily checked that the following diagram is a coequalizer diagram in ${\mathcal{P\!M}ag}:$ \begin{center}
\begin{tikzcd}
R \ar[shift left = 0.3em]{r}\ar[shift right = 0.3em]{r}& A \ar{r} & A/\!\!/ R.
\end{tikzcd} \end{center}
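As an operational illustration (again not part of the formal development), the quotient $A/\!\!/ R$ of a finite partial magma by an additive equivalence relation, given by the list of its classes, can be tabulated directly:
\begin{verbatim}
def quotient(add_A, classes):
    """Form A // R from the partial addition add_A = {(a, b): a + b} and an
    additive equivalence relation R given by its equivalence classes."""
    cls = {a: frozenset(c) for c in classes for a in c}
    table = {}
    for (a, b), s in add_A.items():
        # Additivity of R guarantees that the class of the sum does not depend
        # on the chosen representatives, so the overwrite below is harmless.
        table[(cls[a], cls[b])] = cls[s]
    return table

# Example: C = {0, x, y, x+y} with x + y the only non-trivial sum, encoded as
# 0, 1, 2, 3; quotient by the additive relation identifying x and y.
C = {(0, 0): 0}
for a in (1, 2, 3):
    C[(0, a)] = C[(a, 0)] = a
C[(1, 2)] = C[(2, 1)] = 3
print(quotient(C, [[0], [1, 2], [3]]))
\end{verbatim}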
\begin{prop}\label{prop:effective-additive-pmag}
An equivalence relation $R$ on a partial magma $A$ is effective if and only if
it is additive. \end{prop} \begin{proof} Suppose that $R$ is effective. Then $R\rightrightarrows A$ is a kernel pair of some $f\colon A\to B.$ Let $C=\Set{ 0, a, b, a+b}$ be a partial monoid in which $a+b$ is the only non-trivial sum. If $a_1Ra_2, b_1Rb_2$ and $(a_1, b_1), (a_2, b_2) \in A_2$, we define $h_i\colon C\to A$ by $h_i(a) = a_i, h_i(b) = b_i\,(i=1,2).$ Since $f\circ h_1 = f\circ h_2$ we have a morphism $h\colon C\to R$ such that $h(a) = (a_1, a_2)$ and $h(b) = (b_1, b_2)$ by universality of the pullback. Then $((a_1, a_2) , (b_1, b_2))\in R_2,$ which means that $R$ is additive.
Conversely, suppose that $R$ is additive. Let $\pi \colon A\to A/\!\!/ R$ be the projection morphism. To show that the diagram \[
\begin{tikzcd}
R\ar{r}\ar{d} & A\ar{d}{\pi}\\
A\ar{r}[below]{\pi} & A/\!\!/ R
\end{tikzcd} \] is a pullback diagram, let $g_1, g_2\colon B\to A$ be two morphisms such that $\pi\circ g_1 = \pi\circ g_2.$ It is easily checked that $(g_1,g_2) \colon B\to A\times A$ factors through a morphism $B\to R$ since $R$ is additive. This proves that $R$ is effective. \end{proof}
Let $A$ be a partial monoid and $R$ be an additive equivalence relation on $A.$ Let $A/R = (A/\!\!/ R)_{ass}$ be the associative closure of $A/\!\!/ R.$ Then it is easily checked that the following diagram is a coequalizer diagram in ${\mathcal{P\!M}on}:$ \begin{center}
\begin{tikzcd}
R \ar[shift left = 0.3em]{r}\ar[shift right = 0.3em]{r}& A \ar{r} & A/R.
\end{tikzcd} \end{center}
\begin{prop}
An equivalence relation $R$ on a partial monoid $A$ is effective if and only if
it is additive and the canonical map $\alpha \colon A/\!\!/ R \to A/R$ is injective. \end{prop} \begin{proof} Suppose that $R$ is effective. Then we can show that $R$ is additive by the same argument as in the proof of Proposition \ref{prop:effective-additive-pmag}. It follows that $R\rightrightarrows A$ is the kernel pair of the canonical morphism $\pi \colon A\to A/R.$ We need to show that $\alpha$ is injective. Assume, on the contrary, that $\alpha$ is not injective and there exist $a_1, a_2 \in A$ such that $[a_1] \neq [a_2]$ in $A/\!\!/ R$ but $\alpha([a_1]) = \alpha([a_2]).$ Let $f_i \colon \FF_1 \to A$ be a morphism given by $f_i (1) = a_i\, (i=1,2).$ Since $R\rightrightarrows A$ is the kernel pair of $\pi \colon A\to A/R$ and $\pi\circ f_1 = \pi\circ f_2,$ there exists a morphism $f\colon \FF_1 \to R$ such that $f(1) = (f_1(1), f_2(1)) = (a_1, a_2).$ Hence $(a_1, a_2) \in R$ and $[a_1] = [a_2]$ in $A/\!\!/ R,$ a contradiction. This proves that $\alpha$ is injective.
Conversely, suppose that $R$ is additive and the canonical map $\alpha \colon A/\!\!/ R \to A/R$ is injective. To show that $R\rightrightarrows A$ is the kernel pair of $\pi\colon A\to A/R,$ let $f_1, f_2\colon B\to A$ be morphisms such that $\pi \circ f_1 = \pi \circ f_2.$ If $\pi'\colon A\to A/\!\!/ R$ denotes the canonical morphism, $\pi'\circ f_1 = \pi'\circ f_2,$ since $\alpha$ is injective. Since $R\rightrightarrows A$ is the kernel pair of $\pi'$ in ${\mathcal{P\!M}ag},$ we have a unique morphism $f\colon B\to R$ such that $f = (f_1, f_2)$ in ${\mathcal{P\!M}ag}.$ Since $B$ and $R$ are partial monoids, $f$ is a morphism in ${\mathcal{P\!M}on},$ which is uniquely determined. \end{proof} \begin{cor}
Let $R$ be an additive equivalence relation on a partial monoid $A.$
Then $R$ is effective if and only if the following condition holds:
\[
\mbox{(Condition)} \left\{
\begin{array}{l}
\mbox{~if~} aRa', bRb', cRc', (a,b) \in A_2, (b',c')\in A_2,\\
(a+b)Rx, (b'+c')Rx', (x,c) \in A_2 \mbox{~and~} (a',x') \in A_2\\
\mbox{~then~}(x+c) R (a'+x').
\end{array}\right.
\] \end{cor} \begin{cor}
If $\Set{R_\lambda}$ is a family of effective equivalence relations on a partial monoid $A,$
then $\cap_\lambda R_{\lambda}$ is an effective equivalence relation on $A.$ \end{cor}
The next theorem is not used in the following part of this paper.
\begin{thm}
The category of partial magmas is regular. \end{thm}
\begin{proof}
As noted in \S\ref{sec:pmag_pmon_complete_cocomplete}, ${\mathcal{P\!M}ag}$ is
complete and cocomplete. To show that a pullback of a regular epimorphism
is a regular epimorphism, let $f\colon A\to B$ be a regular epimorphism.
If $K$ is the kernel pair of $f,$ then it is readily checked that $B \cong A/\!\!/ K.$
So we may assume that $B = A/\!\!/ K$ and $f=\pi \colon A\to A/\!\!/ K.$
Let
\[
\begin{tikzcd}
P\ar{r}\ar{d}{h} & A \ar{d}{\pi}\\
C \ar{r}{g} & A/\!\!/ K
\end{tikzcd}
\]
be the pullback of $\pi$ along a morphism $g\colon C\to A/\!\!/ K.$
Since $\pi$ is a surjective map, so is $h.$
Let $L$ be the kernel pair of $h.$
We show that the diagram
\[
\begin{tikzcd}
L \ar[shift left = 0.3em]{r}{l_1}\ar[shift right = 0.3em]{r}[below]{l_2}& P \ar{r} & C.
\end{tikzcd}
\]
is a coequalizer diagram. For that purpose, suppose that there exists a morphism
$m\colon P\to D$ such that $m\circ l_1 = m\circ l_2.$ Since $h$ is surjective,
we have a map $\tilde{m}\colon C\to D$ of sets which satisfies $\tilde{m} \circ h= m.$
To show that $\tilde{m}$ is a homomorphism of partial magmas, suppose
$(c_1, c_2) \in C_2.$ Then $(g(c_1), g(c_2)) \in (A/\!\!/ K)_2.$ So we can take a summable
pair $(a_1, a_2) \in A_2$ such that $(\pi(a_1), \pi(a_2)) =(g(c_1), g(c_2)).$
Thus, $((a_1, c_1) , (a_2, c_2)) \in P_2,$ which implies that
$(\tilde{m}(c_1), \tilde{m}(c_2)) = (m(a_1, c_1) , m(a_2, c_2)) \in D_2.$
Therefore $\tilde{m}$ is a unique homomorphism which satisfies $\tilde{m} \circ h= m.$ \end{proof}
\section{Partial Rings}
\subsection{Partial Rings} \begin{defn}[partial ring]
A {\bf partial ring} is a partial monoid $A$ with a bilinear operation
$\cdot,$ called multiplication, which is associative, commutative,
and has a unit $1.$ Here, bilinearity of $\cdot$ is, by definition, equivalent to the following
condition:
\begin{enumerate}
\item $0\cdot a = 0$ for all $a\in A$ and
\item $(a_1, a_2) \in A_2$ implies $(a_1\cdot x, a_2\cdot x) \in A_2$
and $a_1\cdot x + a_2\cdot x = (a_1+a_2)\cdot x$ for all $x\in A.$
\end{enumerate} \end{defn}
\begin{defn}
Let $A$ and $B$ be partial rings. A map $f\colon A\to B$ is a
homomorphism of partial rings if it is a homomorphism of underlying partial monoids and
satisfies $f(1) = 1$ and $f(ab) = f(a)f(b)$ for all $a, b\in A.$
The category of partial rings is denoted by ${\mathcal{P\!R}ing}.$ \end{defn}
\begin{rem}
A partial ring is nothing but a commutative monoid object in the symmetric monoidal category
$({\mathcal{P\!M}on}, \otimes , \FF_1).$ \end{rem}
\begin{example}
A partial ring of order 2 is isomorphic to one of
$\FF_1, \mathbb{F}_2$ and $\mathbb{B}.$ \end{example}
\subsection{A partial ring given by generators and a summability list}
In this section let $n$ be a fixed positive integer and $N=\mathbb{N}[x_1,\dots, x_n]$ be the set of polynomials in the
indeterminates $x_1, \dots, x_n$ with coefficients in $\mathbb{N}.$
Let $S = \Set{s_1, \dots, s_r }$ be a subset of $N.$
Then let
\[
\FF_1 \angles{x_1,\dots, x_n | \exists s_1, \dots, s_r }
\]
denote the smallest partial subring of $N$ which contains every subsum of
one of $s_1, \dots , s_r,$ and in which every pair $(a,b)$
such that $a+b$ is a subsum of one of $s_1, \dots , s_r$ is summable.
If $c_1, \dots, c_n$ are elements of a partial ring $A,$
and $s \in N,$ then $s(c_1, \dots, c_n) \in A$ denotes the value of the polynomial $s$ as usual.
Let $s = m_1 + \dots + m_r$ be the unique factorization of $s$ into a sum of monomials with coefficient 1.
We say that $s(c_1, \dots, c_n)$ {\bf can be calculated in $A$} if $(m_{i_1}(c_1, \dots, c_n), \dots, m_{i_k}(c_1, \dots, c_n)) \in A_k$ for
all subsets $\set{i_1,\dots , i_k} \subseteq \set{1, \dots, r}.$
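Operationally, the condition that $s(c_1, \dots, c_n)$ can be calculated in $A$ may be tested by brute force. The following sketch (an illustration only, for a finite partial ring given by its partial addition table) checks that every subset of the evaluated monomials can be summed in some order:
\begin{verbatim}
from itertools import combinations

def can_calculate(add_A, monomial_values):
    """True if every subset of the evaluated monomials is summable, in some
    order, using the partial addition add_A = {(a, b): a + b}."""
    def summable(vals):
        if len(vals) <= 1:
            return True
        for i in range(len(vals)):
            for j in range(i + 1, len(vals)):
                if (vals[i], vals[j]) in add_A:
                    rest = (vals[:i] + vals[i + 1:j] + vals[j + 1:]
                            + (add_A[(vals[i], vals[j])],))
                    if summable(rest):
                        return True
        return False

    vals = tuple(monomial_values)
    return all(summable(sub)
               for k in range(1, len(vals) + 1)
               for sub in combinations(vals, k))

# In F_1 the sum 1 + 1 is undefined, in F_2 it equals 0.
F1_add = {(0, 0): 0, (0, 1): 1, (1, 0): 1}
F2_add = dict(F1_add, **{(1, 1): 0})
print(can_calculate(F1_add, [1, 1]))   # False
print(can_calculate(F2_add, [1, 1]))   # True
\end{verbatim}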
\begin{prop}\label{prop:pring_hom_by_generators}
Let $c_1, \dots, c_n$ be elements of a partial ring $A$ and $S = \Set{s_1, \dots, s_r }$ be a subset of $N.$
If $s_i (c_1, \dots, c_n)$ can be calculated in $A$ for all $i=1,\dots , r, $ then there exists a unique
partial ring homomorphism
\[
\varphi \colon \FF_1 \angles{x_1,\dots, x_n | \exists s_1, \dots, s_r }\to A
\]
such that $\varphi(x_i) = c_i\,(i=1,\dots, n).$
\end{prop}
The remainder of this section is devoted to a proof of the above proposition.
Let $W$ be a partial submagma of the underlying partial monoid of $N.$
We say that an element $w\in W$
{\bf is factorized in $W$} if the unique factorization $w = m_1 + \dots + m_r$
in $N$ can be calculated in $W$ under some appropriate reordering and
supplement of parentheses.
We say that $W$ {\bf has the factorization property} if every element of $W$ is
factorized in $W.$
\begin{lem}
Let $B \subseteq N$ be a partial submagma of the underlying partial monoid of $N.$
If $B$ has the factorization property,
then so does its associative closure $B_{ass, N}$ in $N.$
\end{lem}
\begin{proof}
Recalling the inductive construction of the associative closure,
it is sufficient to show that if $B^{(n-1)}$ has the factorization property then so does $B^{(n)}.$
So assume that $B^{(n-1)}$ has the factorization property.
New elements in $B^{(n)}$ are of the form $b+c$ where there exists $a\in B^{(n-1)}$ such that
$(a+b)+c$ can be calculated in $B^{(n-1)}.$
Let $b= m_1 + \dots + m_r$ and $c = m'_1 + \dots + m'_s$ be the unique factorization of $b$ and $c$
in $N.$ By assumption, there exists a way to supply parentheses to these formulas so that they can be
calculated in $B^{(n-1)}.$
Using these parenthesizations, $b+c$ can be calculated in $B^{(n)}.$
Thus $B^{(n)}$ has the factorization property.
\end{proof}
\begin{lem}\label{lem:inc_seq_of_submonoids_unique_factorization}
There exists an increasing sequence
$X^{(0)}\subseteq X^{(1)}\subseteq \dots$ of partial submonoids of $N$
such that
\begin{enumerate}
\item $X^{(i)}$ has the factorization property for all $i$,
\item if $a,b \in X^{(i)}$ then $ab \in X^{(i+1)}$ for all $i$ and
\item $\FF_1 \angles{x_1,\dots, x_n | \exists s_1, \dots, s_r } = \cup_{i\geq 0} X^{(i)}$ as a partial monoid.
\end{enumerate}
\end{lem}
\begin{proof}
If we put
\begin{align*}
Y^{(0)} &= \Set{ 0, 1 } \cup \Set{ s | s \mbox{~is a subsum of some~}s_i \in S},\\
(Y^{(0)})_2 &= \Set{ (a,b) | a+b \mbox{~is a subsum of some~}s_i \in S}
\end{align*}
then this is a partial submagma of $N,$ which has the factorization property.
Let $X^{(0)} = (Y^{(0)})_{ass, N}$ be its associative closure in $N.$ By the previous lemma,
$X^{(0)}$ has the factorization property.
Suppose that we have constructed $X^{(0)}, \dots, X^{(k-1)}$ such that $X^{(i)}$ has the factorization
property for $i=0,\dots, k-1$ and if $a,b \in X^{(i)}$ then $ab \in X^{(i+1)}$ for all $i = 0, \dots, k-2.$
Then we put
\begin{align*}
Y^{(k)} &= X^{(k-1)}\cup \Set{ ab | a, b \in X^{(k-1)} },\\
(Y^{(k)})_2 &= (X^{(k-1)})_2 \cup \Set{ (a_1 b, a_2 b) | (a_1, a_2) \in (X^{(k-1)})_2 , b\in X^{(k-1)} }.
\end{align*}
$Y^{(k)}$ is a partial submagma of $N.$ To show that $Y^{(k)}$ has the factorization property,
let $a= m_1 + \dots + m_r$ and $b = m'_1 + \dots + m'_s$ be the unique factorization of $a$ and $b$
in $N.$ By assumption, there exists a way to supply parentheses to these formulas so that they can be
calculated in $X^{(k-1)}.$ Then the unique factorization of $ab$ is given by
$ab = \sum m_i m'_j.$ Now,
supply parentheses to $ab = m_1 b + \dots + m_r b$ in the way that we supplied parentheses
to the formula $a = m_1 + \dots + m_r.$ This formula can be calculated in
$Y^{(k)},$ once $b$ is calculated. So we supply parentheses to each
$b = m'_1 + \dots + m'_s$ so that it is calculated in $Y^{(k)}.$ Thus $Y^{(k)}$ has the factorization property.
Let $X^{(k)} = (Y^{(k)})_{ass, N}$ be its associative closure in $N.$ By the previous lemma,
$X^{(k)}$ has the factorization property.
We have constructed an increasing sequence $X^{(0)}\subseteq X^{(1)}\subseteq \dots$
of partial submonoids of $N$ which satisfies (1) and (2). Now it is clear that it satisfies (3).
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:pring_hom_by_generators}]
Let $Y^{(k)}$ and $X^{(k)}$ be as in (the proof of)
Lemma \ref{lem:inc_seq_of_submonoids_unique_factorization}.
The assumption of the proposition that $s_i(c_1, \dots, c_n)$ can be calculated in $A$ for all $i$
is equivalent to the condition that a partial magma homomorphism
$\varphi\colon Y^{(0)} \to A$ can be defined by $\varphi(x_i) = c_i.$ Suppose we have shown that
$\varphi$ extends to a partial magma homomorphism $\varphi \colon Y^{(k-1)} \to A.$
If $(a + b) + c$ can be calculated in $Y^{(k-1)},$ then $(\varphi(b) , \varphi(c)) \in A_2,$
since $A$ is a
partial monoid. If we put $\varphi(b+c) = \varphi(b)+\varphi(c),$
then this is well-defined since $Y^{(k-1)}$ has the factorization property.
In this way, we can extend $\varphi$ uniquely to
a partial monoid homomorphism $\varphi \colon X^{(k-1)}\to A.$
Let $a, b \in X^{(k-1)}.$ We put $\varphi(ab) = \varphi(a)\varphi(b).$ This is well-defined by
the factorization property of $Y^{(k)}.$ So $\varphi$ uniquely extends to
a partial magma homomorphism $\varphi\colon Y^{(k)} \to A.$
Continuing this process inductively, $\varphi$ extends uniquely to a partial monoid homomorphism
$\varphi \colon \cup_{k} X^{(k)} \to A,$ which is clearly a partial ring homomorphism
$\varphi \colon \FF_1 \angles{x_1,\dots, x_n | \exists s_1, \dots, s_r }\to A.$
\end{proof}
\subsection{Congruences}
\begin{defn}[Congruence]
Let $A$ be a partial ring. By a {\bf congruence} on $A,$ we mean an effective equivalence
relation on $A.$
\end{defn}
\begin{prop}
Let $A$ be a partial ring.
A partial subring $R \subseteq A\times A$ is a congruence on $A$
if and only if its underlying partial monoid is an effective equivalence relation
on the underlying partial monoid of $A.$
\end{prop}
\begin{proof}
Assume that $R$ is a congruence on $A.$
Then $R\rightrightarrows A$ is a kernel pair of some morphism $f\colon A\to B$ in
${\mathcal{P\!R}ing}.$ Since a limit of any diagram
in ${\mathcal{P\!R}ing}$ is constructed by taking a limit of the corresponding
diagram in ${\mathcal{P\!M}on},$ $R\rightrightarrows A$ is the kernel pair of the
underlying homomorphism $f\colon A\to B$ in ${\mathcal{P\!M}on}.$ So the underlying
partial monoid of $R$ is an effective relation on the underlying partial monoid of $A.$
Conversely, assume that the underlying
partial monoid of $R$ is an effective relation on the underlying partial monoid of $A.$
We can define a map $m'\colon A/\!\!/ R \times A/\!\!/ R \to A/\!\!/ R$ by
$([a_1], [a_2]) \mapsto [a_1 a_2],$ since $R$ is a partial subring of $A\times A.$
Then $m'$ is bilinear since the multiplication map of $A$ is bilinear.
Then by Proposition \ref{prop:bilinear_pmag_pmon}, $m'$ induces
a bilinear map $\bar{m}\colon A/R\times A/R \to A/R.$
This makes $A/R$ into a partial ring.
It is clear that the canonical morphism $\pi\colon A\to A/R$ is
a homomorphism of partial rings.
To show that $R$ is the kernel pair of $\pi\colon A\to A/R,$
let $f_1, f_2 \colon B\to A$ be two morphisms such that
$\pi\circ f_1 = \pi\circ f_2.$
Since the underlying partial monoid of $R$ is the kernel pair of $\pi$ in ${\mathcal{P\!M}on},$
there exists a unique morphism $f\colon B\to R$ of underlying partial monoids
such that $f = (f_1, f_2).$
Since $R$ is a partial subring of $A\times A,$ $f$ is a homomorphism of partial rings.
\end{proof}
\begin{cor}
If $\Set{R_\lambda}$ is a family of congruences on a partial ring $A,$
then $\cap_{\lambda} R_{\lambda}$ is a congruence on $A.$
\end{cor}
\begin{cor}
Let $A$ be a partial ring.
For any subset $S$ of $A\times A,$ there exists the smallest congruence on $A$
which contains $S.$ (It is denoted by $\angles{S}.$)
\end{cor}
\begin{prop}
Let $R$ be a congruence on a partial ring $A.$ Then
for any homomorphism $f\colon A\to B$
for which $f(a_1) = f(a_2)$ for all $(a_1, a_2) \in R,$ there exists
a unique homomorphism $\tilde{f} \colon A/R \to B$ which makes the following diagram commute:
\begin{center}
\begin{tikzcd}
A \ar{r}{f}\ar{d}[left]{\pi} & B\\
A/R \ar{ru}[below]{\tilde{f}}.
\end{tikzcd}
\end{center}
\end{prop}
\begin{proof}
Let $f\colon A\to B$ be a morphism of partial rings such that
$f(a_1) = f(a_2)$ for all $(a_1, a_2) \in R.$ Since $A\to A/\!\!/ R$ is a coequalizer of
$R\rightrightarrows A$ in ${\mathcal{P\!M}ag}$ we have a homomorphism
$A/\!\!/ R \to B$ of partial magmas. Similarly, we have a homomorphism $A/R \to B$
of partial monoids, and we have a commutative diagram
\begin{center}
\begin{tikzcd}
A\ar{r}\ar{rd}[below]{f} & A/\!\!/ R \ar{r}\ar{d} & A/R\ar{ld}{\tilde{f}}\\
& B.
\end{tikzcd}
\end{center}
In the following diagram, the largest rectangle is commutative, since
$f$ is a homomorphism of partial rings and by the definition of $m' :$
\begin{center}
\begin{tikzcd}
A/\!\!/ R \times A/\!\!/ R\ar{r}\ar{d}{m'}
& A/R \times A/R \ar{d}{\bar{m}} \ar{r} & B\times B \ar{d}{m_B}\\
A/\!\!/ R\ar{r} & A/R \ar{r}& B.
\end{tikzcd}
\end{center}
Also, the left small rectangle is commutative by the definition of $\bar{m}.$
Then so is the right small rectangle, by Proposition \ref{prop:bilinear_pmag_pmon}.
\end{proof}
\subsection{Ideals}
\begin{defn}[ideal]
An {\bf ideal} of a partial ring $A$ is a partial submonoid $I$ of $A$
such that $I_2 = A_2 \cap (I\times I)$
and $ax \in I $ for any $a\in A$ and $x\in I.$ \end{defn}
\begin{example}
Let $T$ be a subset of $A.$ If we put
\begin{align*}
I &= \Set{ a_1 t_1 + \dots + a_r t_r | \begin{array}{l}r\in \mathbb{N}, a_i \in A, t_i \in T \\
(a_1 t_1 , \dots , a_r t_r)\in A_r
\end{array}
},
\end{align*}
then $I$ is the smallest ideal which contains $T.$ This ideal $I$ is denoted by $(T).$
If $T = \Set{a}$ is a singleton, $(T)$ is also denoted by $(a).$ \end{example}
Let $\fraka$ and $\frakb$ be two ideals of $A.$ The smallest ideal which contains $\fraka$ and $\frakb$ is denoted by $\fraka + \frakb.$ On the other hand, if we put
\begin{align*}
I &= \Set{ a_1 b_1 + \dots + a_r b_r | \begin{array}{l}r\in \mathbb{N}, a_i \in \fraka, b_i \in \frakb \\
(a_1 b_1 , \dots , a_r b_r)\in A_r
\end{array}
},
\end{align*} then $I$ is an ideal which is contained in both $\fraka$ and $\frakb.$ This ideal $I$ is denoted by $\fraka \frakb.$
Let $\varphi\colon A\to B$ be a homomorphism of partial rings and $J\subseteq B$ be an ideal. If we put \[ I = \varphi^{-1}(J) \]
then $I$ is an ideal of $A.$ This ideal $I$ is denoted by $\varphi^*(J).$ On the other hand, let $I$ be an ideal of $A.$ The smallest ideal of $B$ which contains $\varphi(I)$ is denoted by $\varphi_*(I).$
\begin{defn}[prime ideal]
An ideal $I$ of a partial ring $A$ is called a {\bf prime ideal} if $I\neq A$ and
$ab \in I$ implies $a\in I$ or $b\in I$ for any $a,b \in A.$ \end{defn}
\subsection{Localization}
Let $S$ be a multiplicative subset of $A.$ We put \begin{align*}
S^{-1} A &= \Set{ a/s | a\in A, s\in S} \\
(S^{-1} A)_2 &= \Set{ (a/s, b/t) | \mbox{there exists~} u \in S \mbox{~s.t.~}(uta, usb) \in A_2}, \end{align*} where $a/s$ denotes the usual equivalence class such that $a/s = b/t$ if and only if there exists $u\in S$ such that $uta = usb.$ It is readily checked that $S^{-1}A$ is a partial ring by putting $\frac{a}{s}\frac{b}{t} = \frac{ab}{st}.$ The homomorphism $\lambda \colon A\to S^{-1}A$ given by $\lambda(a) = a/1$ has the universal property of localization.
\begin{defn}[local pring]
A partial ring $A$ is called a {\bf local partial-ring} (local pring for short)
if it has a unique maximal ideal. \end{defn}
If $\frakp$ is a prime ideal of a partial ring $A,$ then the localization $A_\frakp$ of $A$ by the multiplicative set $A\setminus \frakp$ is a local pring.
If $A , B$ are local prings, a homomorphism $\varphi\colon A\to B$ is called a {\bf homomorphism of local prings} if $\varphi^*(\frakm_B) = \frakm_A,$ where $\frakm_A$ (resp. $\frakm_B$) is the maximal ideal of $A$ (resp. $B).$
\begin{prop} Let $A$ be a partial ring and $\lambda\colon A\to S^{-1}A$ be the localization by a multiplicative subset $S$ of $A.$ For any ideal $I\subseteq A, \lambda^*\lambda_*(I) = I.$ \end{prop}
\begin{defn}[partial field]
A {\bf partial field} is a partial ring in which every non-zero element is multiplicatively invertible. \end{defn}
A partial ring is a partial field iff the ideal $(0)$ is a maximal ideal.
\subsection{Blueprints}
We can compare partial rings to Lorscheid's blueprints\cite{lorscheid-the-geometry-of-1}. As is explained in the above paper, Deitmar's sesquiads\cite{deitmar-congruence-schemes} are special kinds of blueprints. In this section, we show that partial rings are also special kinds of blueprints as explained below.
Let $A$ be a partial ring and let $\mathbb{N}[A]$ denote the monoid-semiring determined by the underlying multiplicative monoid of $A.$ We put \[
R_0(A) = \Set{ (a_1\dotplus a_2, a ) | (a_1, a_2) \in A_2, a_1+a_2 = a}\cup \Set{(0,\emptyset)}. \] Let $R(A)$ be the smallest additive equivalence relation on $\mathbb{N}[A]$ which contains $R_0(A).$ In other words, $R(A)$ is the smallest equivalence relation on $\mathbb{N}[A]$ which contains \[
R_1(A) = \Set{(0 \dotplus x, x) , (a_1 \dotplus a_2 \dotplus x, (a_1+a_2) \dotplus x ) | (a_1, a_2 ) \in A_2, x\in \mathbb{N}[A]}. \]
\begin{lem}\label{lem:pring_to_blueprint_to_pring}
If $(a_1\dotplus a_2, a ) \in R(A),$ then $(a_1,a_2) \in A_2$ and $a_1+a_2 = a.$ \end{lem} \begin{proof} Consider the following property for $x = a_1\dotplus \dots \dotplus a_r\in \mathbb{N}[A]$ \[ \mbox{(Property)}~~~\mbox{if~}\dotplus\mbox{~is replaced with~}+, x\mbox{~can be calculated in~}A. \] Since $A$ is a partial monoid, for any $(x,y) \in R_1(A), x$ has this property if and only if $y$ does, and the sum for $x$ and that for $y$ are equal. Then the same is true for any $(x,y) \in R(A).$ Suppose $(a_1 \dotplus a_2, a) \in R(A).$ Since $a$ can be calculated in $A,$ so can $a_1 \dotplus a_2,$ and $a_1 + a_2 = a.$ \end{proof}
\begin{lem}
$R(A)$ is multiplicative. \end{lem} \begin{proof}
If $(\alpha, \beta) \in R_0(A)$ and $c\in A$ then $(\alpha c, \beta c) \in R_0(A).$
Then it is readily checked that
\begin{enumerate}
\item if $(\alpha, \beta) \in R(A)$ and $c\in A$ then $(\alpha c, \beta c) \in R(A),$
\item if $(\alpha, \beta) \in R(A)$ and $\xi\in \mathbb{N}[A]$
then $(\alpha \xi, \beta \xi) \in R(A)$ and
\item if $(\alpha, \beta) \in R(A)$ and $(\gamma, \delta)\in R(A)$
then $(\alpha \gamma, \beta \delta) \in R(A),$
\end{enumerate}
which shows that $R(A)$ is multiplicative. \end{proof}
It follows that $B(A) = (A, R(A))$ is a blueprint.
Now, let $M$ be a commutative (multiplicative) monoid and $B = (M,R)$ be a blueprint. Put \[
R_0 = \Set{ (a_1\dotplus a_2, a ) \in R | a_1, a_2 , a \in M} \cup \Set{(0,\emptyset)}. \] \begin{defn} A blueprint $(M, R)$ is {\bf generated by binary operations} if $R$ is the smallest additive equivalence relation which contains $R_0.$ A blueprint $(M, R)$ is {\bf associative} if, whenever $(a_1\dotplus a_2 \dotplus a_3, b)\in R$ with $a_i\,(i = 1,2,3), b\in M,$ there exists $c\in M$ such that $(a_1\dotplus a_2, c) \in R.$ \end{defn}
\begin{lem} Let $A$ be a partial ring. The blueprint $B(A) = (A, R(A))$ is generated by binary operations and associative. \end{lem} \begin{proof}
$B(A)$ is generated by binary operations by its construction.
If $(a_1\dotplus a_2 \dotplus a_3, b)\in R$ with $a_i\,(i = 1,2,3), b\in A,$
then by Lemma \ref{lem:pring_to_blueprint_to_pring}, $a_1 + a_2 + a_3 = b$ in $A.$
Then by the associativity of a partial monoid $A,$ there exists $c\in A$
such that $a_1 + a_2 = c $ in $A.$ This means that $B(A)$ is associative. \end{proof}
\begin{prop}
The category of partial rings is isomorphic to the category of
proper cancellative and associative blueprints with zero which are
generated by binary operations. \end{prop}
\begin{proof}
Let $B = (A, R)$ be a proper cancellative and associative blueprint with zero which is
generated by binary operations.
We put
\[
A_2 = \set{(a_1, a_2) \in A^2 | \exists a\in A \mbox{~s.t.~}(a_1 \dotplus a_2 , a ) \in R}
\]
and define $a_1 + a_2 = a$ if $(a_1 \dotplus a_2 , a ) \in R.$ Since $B$ is a proper
blueprint with a zero, this determines a partial magma structure on $A.$
Since $B$ is associative, $A$ is a partial monoid. Finally, $A$ is a partial ring since
$R$ is multiplicative.
Conversely, if $A$ is a partial ring, let $\mathbb{N}[A]$ denote the semiring determined by
the underlying multiplicative monoid of $A.$ Then $B(A) = (A, R(A))$ is the blueprint constructed above, and Lemma \ref{lem:pring_to_blueprint_to_pring} shows that the partial addition recovered from $R(A)$ is exactly that of $A,$ so the two constructions are mutually inverse.
\end{proof}
\section{Partial Schemes}
\subsection{Locally PRinged Spaces}
\begin{defn}[locally pringed space]
A {\bf locally partial-ringed space} (locally pringed space for short) is a pair $(X, \mathcal O_X)$ where $X$ is a topological space and $\mathcal O_X$ is a
sheaf of partial rings on $X$ whose stalks are local prings.
Let $(X,\mathcal O_X)$ and $(Y,\mathcal O_Y)$ be two locally pringed spaces.
A morphism $(X, \mathcal O_X) \to (Y,\mathcal O_Y)$ is
\begin{enumerate}
\item a map $f\colon X\to Y$ of topological spaces and
\item a homomorphism $f^\#\colon \mathcal O_Y\to f_* \mathcal O_X$ of sheaves of partial rings over $Y$
which induces a homomorphism of local prings $o_{Y,f(x)} \to o_{X,x}$ for every point $x\in X.$
\end{enumerate} \end{defn}
\subsection{Affine Partial Schemes}
In this section, we will define affine partial schemes.
Most statements and proofs in \S 3.1 and \S 3.2 of \cite{lorscheid-the-geometry-of-1}
about affine blue schemes can be read as statements and proofs about affine partial schemes.
We give proofs for some of them when they are simpler than those in
\cite{lorscheid-the-geometry-of-1}, in order to illustrate the simplicity of our theory.
Let $A$ be a partial ring and
$X_A$ be the set of prime ideals of $A.$
For any $a\in A,$ let $D(a)$ be the set of prime ideals $\frakp$ such that $a\notin \frakp.$
We give $X_A$ the topology generated by $D(a)$'s for all $a\in A.$ This topology is called the Zariski topology.
For any ideal $\fraka,$ let $V(\fraka)$ denote the set of prime ideals which contain $\fraka.$
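For a finite partial ring given by its addition and multiplication tables, the points of $X_A$ can be enumerated directly from the definitions above. The following brute-force sketch (an illustration only) does so for the Boolean partial ring $\mathbb{B}$:
\begin{verbatim}
from itertools import combinations

def spectrum(elements, add, mul):
    """List the prime ideals of a finite partial ring given by its partial
    addition table `add` and (total) multiplication table `mul`."""
    elements = list(elements)

    def is_ideal(I):
        closed = all(add[(x, y)] in I
                     for x in I for y in I if (x, y) in add)
        absorbs = all(mul[(a, x)] in I for a in elements for x in I)
        return 0 in I and closed and absorbs

    def is_prime(I):
        return set(I) != set(elements) and all(
            a in I or b in I
            for a in elements for b in elements if mul[(a, b)] in I)

    candidates = {frozenset(set(c) | {0})
                  for k in range(len(elements) + 1)
                  for c in combinations(elements, k)}
    return [set(I) for I in candidates if is_ideal(I) and is_prime(I)]

# The Boolean partial ring B = {0, 1} with 1 + 1 = 1.
add_B = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}
mul_B = {(a, b): a * b for a in (0, 1) for b in (0, 1)}
print(spectrum([0, 1], add_B, mul_B))   # [{0}]
\end{verbatim}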
\begin{prop}
If $\fraka$ does not meet a multiplicative set $S,$ then there exists a prime ideal $\frakp$
which contains $\fraka$ and does not meet $S.$
\end{prop}
\begin{proof}
Let $\Sigma$ be the family of all ideals which contain $\fraka$ and do not meet $S.$
Since $\fraka$ is an element of it, $\Sigma$ is not empty. By Zorn's lemma,
there exists a maximal element $\frakp$ of $\Sigma.$ Suppose that $ab \in \frakp.$
Then $(\frakp + (a)) (\frakp + (b)) \subseteq \frakp.$
Since $\frakp$ does not meet $S,$ at least one of $(\frakp + (a))$ and $(\frakp + (b))$
does not meet $S.$
Since $\frakp$ is maximal with this property, one of $a$ and $b$ is an element of $\frakp.$
This proves that $\frakp$ is a prime ideal.
\end{proof}
\begin{cor}
If $D(b) \subseteq D(a),$ then there exists $k\in \mathbb{N}$ such that $b^k \in (a).$
\end{cor}
\begin{proof}
Take $\fraka = (a)$ and $S = \Set{b^k | k\in \mathbb{N}}$ in the previous proposition: if no power of $b$ belonged to $(a),$ there would be a prime ideal $\frakp$ containing $(a)$ and containing no power of $b;$ then $\frakp \in D(b)$ but $\frakp \notin D(a),$ contradicting $D(b) \subseteq D(a).$
\end{proof}
\begin{cor}
If we put $\sqrt{\fraka} = \Set{ a\in A | \exists r \in \mathbb{N}\mbox{~s.t.~} a^r \in \fraka},$ then
\[
\sqrt{\fraka} = \bigcap_{\fraka \subseteq \frakp,~\frakp\in X_A} \frakp
\]
\end{cor}
\begin{proof}
It is clear that $\sqrt{\fraka} \subseteq \bigcap \frakp.$ For the converse, assume that
$a \in A$ is not in $\sqrt{\fraka},$ or equivalently, $\fraka$ does not meet the multiplicative
subset $S = \Set{ a^r | r\in \mathbb{N}}$ of $A.$ Then
by the previous proposition, there exists a prime ideal $\frakp$ which contains $\fraka$
and does not meet $S.$ In particular, $a\notin \frakp$ and this proves that
$\sqrt{\fraka} \supseteq \bigcap \frakp.$
\end{proof}
\begin{prop}
$D(a)$ is quasi-compact for all $a\in A.$ In particular $X_A$ is quasi-compact.
\end{prop}
\begin{proof}
It is sufficient to show that
if $D(a) = \cup_{\lambda\in \Lambda} D(a_{\lambda})$ for some subset $\set{a_\lambda}\subseteq A,$ then
there exist finitely many elements $\lambda_1, \dots , \lambda_r \in \Lambda$ such that
$D(a) = D(a_{\lambda_1})\cup \dots \cup D(a_{\lambda_r}).$
So assume that
$D(a) = \cup_{\lambda\in \Lambda} D(a_{\lambda})$ for some subset $\set{a_\lambda}\subseteq A.$
Then $a\in \frakp \iff \Set{a_\lambda}\subseteq \frakp$ for all $\frakp \in X_A,$ which implies
that $\sqrt{(a)} = \sqrt{(\Set{a_\lambda})}.$ So there exists $r\in \mathbb{N}$ such that
$a^r \in (\Set{a_\lambda}),$ which means that we can take some
$a_{\lambda_1}, \dots , a_{\lambda_s}$ and $c_1, \dots, c_s \in A$ such that
$(c_1a_{\lambda_1}, \dots , c_sa_{\lambda_s} )\in A_s$ and
$a^r =c_1a_{\lambda_1}+ \dots + c_sa_{\lambda_s},$ so
$a \in \sqrt{(a_{\lambda_1}, \dots , a_{\lambda_s})}.$
Then
$\Set{a_{\lambda_1}, \dots , a_{\lambda_s}} \subseteq \frakp \implies a\in \frakp$ for all
$\frakp \in X_A$ and this means that
$D(a) \subseteq D(a_{\lambda_1}) \cup \dots \cup D(a_{\lambda_s}).$
\end{proof}
For any open set $U$ of $X= X_A,$ let $S_U$ be the subset of $A$ consisting of all $a\in A$ such that
$a\notin \frakp$ for all $\frakp \in U.$ Then $S_U$ is a multiplicative subset of $A.$
If we put $\mathcal O'_X(U) = S_U^{-1} A,$ then $\mathcal O'_X$ is a presheaf of partial rings on $X.$ Its sheafification is
denoted by $\mathcal O_X.$
\begin{prop}
If $x = \frakp\in X_A$ is a point, the stalk $o_{X,x}$ at $x$ coincides with the
local pring $A_\frakp.$
\end{prop}
The locally pringed space $(X_A, \mathcal O_X)$ constructed above is denoted by $\mathop{\mathrm{Spec}}\nolimits (A).$
\begin{defn}[affine pscheme]
A locally pringed space is called an {\bf affine partial scheme} (affine pscheme for short)
if it is isomorphic
as a locally pringed space to $\mathop{\mathrm{Spec}}\nolimits A$ for a partial ring $A.$
\end{defn}
Let $A, B$ be partial rings and $\varphi\colon A\to B$ be a homomorphism.
Suppose that $\mathop{\mathrm{Spec}}\nolimits(A) = (X,\mathcal O_X)$ and $\mathop{\mathrm{Spec}}\nolimits(B) = (Y,\mathcal O_Y).$
\begin{prop}
Let $A$ be a partial ring and $(X,\mathcal O_X) = \mathop{\mathrm{Spec}}\nolimits (A).$
For any element $s\in A,$ there exists a monomorphism $A_s \to \mathcal O_X(D(s)).$
\end{prop}
\begin{proof}
We can define a homomorphism $\varphi\colon A_s \to \mathcal O_X(D(s))$ by putting
$\varphi(a/s^r) (\frakp) = a/s^r \in A_{\frakp}$ for all $\frakp \in D(s).$
Suppose $\varphi(a/s^r) = \varphi(b/s^q).$
This means that $a/s^r = b/s^q$ in $A_\frakp$ for all $\frakp\in D(s);$ that is, for each $\frakp \in D(s)$ we have some $h\notin \frakp$
such that $hs^q a = hs^r b.$
If we put
\[
\fraka = \Set{ x\in A | xs^qa = xs^rb},
\]
then $\fraka$ is an ideal of $A$ and $\fraka \not\subseteq \frakp$ for all $\frakp \in D(s).$ Then we have that
$\sqrt{(s)} \subseteq \sqrt{\fraka}.$ If $s^k \in \fraka,$ then $s^{k+q} a = s^{k+r} b, $ which means that
$a/s^r = b/s^q$ in $A_s.$ Therefore $\varphi$ is injective.
\end{proof}
If we take $s=1$ in the above proposition, we obtain a monomorphism $\gamma \colon A\to \mathcal O_X(X).$
The following lemma, which is extracted from the proof of Lemma 3.16 in \cite{lorscheid-the-geometry-of-1},
is useful.
\begin{lem}\label{lem:integralization}
Let $A$ be a partial ring and $(X,\mathcal O_X) = \mathop{\mathrm{Spec}}\nolimits(A).$ We put $B = \mathcal O_X(X).$
For any element $\sigma \in B,$ there exists an element $s\in A$ such that $s\sigma \in \gamma(A).$
\end{lem}
\begin{proof}
We can prove this similarly as in the second paragraph of the proof of Lemma 3.16 of \cite{lorscheid-the-geometry-of-1}.
\end{proof}
\begin{thm}
Let $A$ be a partial ring and $(X,\mathcal O_X) = \mathop{\mathrm{Spec}}\nolimits(A).$ We put $B = \mathcal O_X(X)$ and let $(Y,\mathcal O_Y) = \mathop{\mathrm{Spec}}\nolimits (B).$
Then $(X, \mathcal O_X)$ and $(Y, \mathcal O_Y)$ are isomorphic.
\end{thm}
\begin{proof}
This theorem follows by combining Lemma 3.16 and Lemma 3.18 of \cite{lorscheid-the-geometry-of-1}.
At the final step of the proof of
Lemma 3.18 of \cite{lorscheid-the-geometry-of-1}, we can use Lemma \ref{lem:integralization}.
\end{proof}
The following corollary is immediate from the theorem.
\begin{cor}\label{cor:one-one-correspondence}
There exists a one-to-one correspondence between the morphisms $\mathop{\mathrm{Spec}}\nolimits A\to \mathop{\mathrm{Spec}}\nolimits B$ of
affine pschemes and the morphisms $B\to \Gamma ( \mathop{\mathrm{Spec}}\nolimits A )$ of partial rings.
\end{cor}
The following definition is a translation into our case from \cite{lorscheid-the-geometry-of-1}.
\begin{defn}
A partial ring of the form $\Gamma(X, \mathcal O)$ for some affine pscheme $(X,\mathcal O)$ is called
{\bf global}.
\end{defn}
\begin{prop}
Every partial field is global.
\end{prop}
\begin{proof}
If $F$ is a partial field and $\mathop{\mathrm{Spec}}\nolimits(F) = (X, \mathcal O),$ then $X = \Set{ (0) }$ is a point.
Then it is clear that $\mathcal O_X(X) = F.$
\end{proof}
\subsection{Partial schemes} \begin{defn}[partial scheme]
A locally pringed space is called a {\bf partial scheme} if it is locally an affine pscheme. \end{defn}
\subsection{Projective Spaces}
In the polynomial semiring $\mathbb{N}[y_0, \dots, y_n]$ of $n+1$ indeterminates, we can consider a partial monoid \begin{align*}
\mathbb{N}_1[y_0, \dots, y_n]=\Set{ f(y_0, \dots, y_n) | \mbox{~the constant term of~}f \mbox{~is~}0\mbox{~or~}1 }, \end{align*} in which two polynomials are summable if constant terms of them are summable in $\FF_1.$ Let $B$ be a partial subring of $\mathbb{N}_1[x_0, \dots, x_n]$ which contains $\angles{y_0, \dots, y_n},$ the
multiplicatively written free commutative monoid thought of as a partial ring with trivial sum. Consider a multiplicative subset $S_i = \Set{ y_i^r | r\in \mathbb{N}},$ then the localization $S_i^{-1} B$ has a $\mathbb{Z}$ grading by the degree of polynomials. Let $A^{(i)}$ denote the 0-th part of $S_i^{-1} B\,(0\leq i \leq n).$ More precisely, \begin{align*}
A^{(i)} &= \Set{y_i^{-r} f_r | r\in \mathbb{N}, f_r\in B \mbox{~: a homogeneous polynomial of degree~}r},\\
A^{(i)} &= \Set{ (y_i^{-r} f_r,y_i^{-r} g_r) | (f_r, g_r ) \in B_2}. \end{align*}
If $A^{(i,j)}$ denotes the localization of $A^{(i)}$ by the multiplicative subset \[\Set{(y_j/y_i)^r | r\in \mathbb{N}},\] then we have $A^{(i,j)}=A^{(j,i)}.$ This observation ensures that, for any partial ring $A,$ we can glue affine pschemes $\mathbb{A}_i = \mathop{\mathrm{Spec}}\nolimits A^{(i)}$ along the isomorphisms of open subschemes $\mathop{\mathrm{Spec}}\nolimits A^{(i,j)}$ along the isomorphisms given by the equalities $A^{(i,j)}=A^{(j,i)}.$ Let $\mathbb{P}^n_V$ denote the resulting partial scheme, where $V = \mathop{\mathrm{Spec}}\nolimits(B),$ which is thought of as a vector space determined by $B.$
\begin{example}\label{ex:projective}
Let
\[
B = \FF_1 \angles {y_0, \dots, y_n | \exists(y_0 + \dots + y_n) }.
\]
At this point, we propose to think of this partial ring as the most fundamental one
lying between $\angles{y_0, \dots, y_n}$ and $\mathbb{N}_1[y_0, \dots, y_n].$ For example,
$\mathop{\mathrm{Hom}}\nolimits_{{\mathcal{A}lg}_{\FF_1}}(B, A)$ equals $A_{n+1},$ the set of summable $(n+1)$-tuples in $A,$
which may be thought of as the most fundamental $\FF_1$-module ``of rank $n+1$''.
In this example, $\mathbb{P}^n_V $ for this specific $B$ is abbreviated as $\mathbb{P}^n.$
In this case,
\begin{align*}
A^{(i)} &= \Set{\mbox{~subsum of~}(y_0/y_i + \dots + y_n/y_i)^r | r\in \mathbb{N}}\cup \set{0},\\
A^{(i,j)} &= \Set{y_i^s y_j^{-s}\left(\mbox{~subsum of~}(y_0/y_i + \dots + y_n/y_i)^r\right) |r\in \mathbb{N}, s\in \mathbb{Z}}\cup \set{0},\\
A^{(i,j,k)} &= \Set{
\begin{array}{l}y_i^s y_j^t y_k^u\times\\
\left(\mbox{~subsum of~}(y_0/y_i + \dots + y_n/y_i)^r\right)
\end{array} |\begin{array}{l}
r\in \mathbb{N}, \\
s,t,u\in \mathbb{Z},\\
s+t+u = 0
\end{array}
}\cup \set{0},\\
\vdots
\end{align*}
and so on.
Now let $F$ be a partial field. Then by Corollary \ref{cor:one-one-correspondence},
there is a one-to-one correspondence between the set of $F$-points $\mathop{\mathrm{Spec}}\nolimits F \to \mathbb{A}_i$ and
\[
A_i(F) = \mathop{\mathrm{Hom}}\nolimits_{{\mathcal{A}lg}_{\FF_1}}(A^{(i)}, F).
\]
Then the set of $F$-points $\mathop{\mathrm{Spec}}\nolimits F\to \mathbb{P}^n$ can be identified with
\[
\mathbb{P}^n(F) = \coprod_{i=0}^{n} A_i(F)/\sim,
\]
where for $v_i \in A_i(F)$ and $v_j \in A_j(F),$ $v_i\sim v_j$
if there exists a homomorphism $v\colon A^{(i,j)} \to F$ such that $v|_{A^{(i)}} = v_i$ and $v|_{A^{(j)}} = v_j.$
For any subset $\set{i_1,\dots, i_r} \subseteq \Set{0,1,\dots , n},$ we put
\[
A_{i_1,\dots, i_r}(F) = \mathop{\mathrm{Hom}}\nolimits_{{\mathcal{A}lg}_{\FF_1}}\left(A^{(i_1,\dots, i_r)}, F \right).
\]
Then as sets,
\[A_{i_1,\dots, i_r}(F) \cong
\left(\begin{array}{l}
n\mbox{-tuples~} (x_1, \dots, x_n)\mbox{~for which}\\
1+x_1 + \dots +x_n \mbox{~can be calculated in~}F\\
\mbox{and~} x_1, \dots , x_{r-1} \neq 0
\end{array}\right).
\]
Then
\[
\# A_{i_1,\dots, i_r}(F)= (\kappa-1)^{r-1}\kappa^{n-r+1},
\]
where $\kappa = \kappa(F)$ denotes the number of elements of $F$ which are summable with $1.$
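For instance, for $n = 2$ and $F = \mathbb{F}_2$ (so that $\kappa = 2,$ as noted at the end of this example), each chart $A_{i}(\mathbb{F}_2)$ has $\kappa^{2} = 4$ points, each double overlap $A_{i_1,i_2}(\mathbb{F}_2)$ has $(\kappa - 1)\kappa = 2$ points, and the triple overlap $A_{0,1,2}(\mathbb{F}_2)$ has $(\kappa-1)^{2} = 1$ point.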
Now we can calculate $\# \mathbb{P}^n(F )$ as
\begin{align*}
\# \mathbb{P}^n(F) &= \sum_{i=1}^{n+1} (-1)^{i-1} \binom{n+1}{i}(\kappa-1)^{i-1}\kappa^{n-i+1}\\
&= -\frac{(\kappa-(\kappa-1))^{n+1} - \kappa^{n+1}}{\kappa-1}\\
&= \frac{\kappa^{n+1} - 1}{\kappa-1} = \kappa^n + \dots + \kappa + 1.
\end{align*}
Of course, $\kappa(\mathbb{F}_q) = q,$ where $\mathbb{F}_q$ denotes the finite field with $q$ elements,
and $\kappa(\FF_1 ) = 1.$ \end{example}
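The count just obtained is easy to check mechanically. The following fragment (a minimal computational sketch in Python; the function names are ours and not part of the theory) compares the inclusion--exclusion sum with the closed form $\kappa^{n} + \dots + \kappa + 1$ for small values of $n$ and $\kappa;$ for $n = 2$ and $\kappa = \kappa(\mathbb{F}_2) = 2$ it recovers $\#\mathbb{P}^2(\mathbb{F}_2) = 3\cdot 4 - 3\cdot 2 + 1 = 7 = 2^2 + 2 + 1.$
\begin{verbatim}
from math import comb

def point_count_inclusion_exclusion(n, kappa):
    # Alternating sum over the overlaps A_{i_1,...,i_r}(F),
    # each of cardinality (kappa-1)^(r-1) * kappa^(n-r+1),
    # with binom(n+1, r) overlaps of depth r.
    return sum((-1) ** (r - 1) * comb(n + 1, r)
               * (kappa - 1) ** (r - 1) * kappa ** (n - r + 1)
               for r in range(1, n + 2))

def point_count_closed_form(n, kappa):
    # kappa^n + ... + kappa + 1
    return sum(kappa ** j for j in range(n + 1))

# The two expressions agree as polynomials in kappa, so we simply
# test a range of values (recall kappa(F_q) = q and kappa(F_1) = 1).
for n in range(1, 6):
    for kappa in range(1, 10):
        assert point_count_inclusion_exclusion(n, kappa) == \
               point_count_closed_form(n, kappa)
print("inclusion-exclusion count agrees with kappa^n + ... + 1")
\end{verbatim}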
\section{Affine Group PSchemes and Affine PGroup PSchemes} \subsection{$\mathbb{G}_a$}
Put $K = \FF_1\angles{ x, y | \exists (x+y) }$ and let $R$ be the congruence $\angles{(x+y, 0)}.$
Then put $G = K/ R.$ If we put $t = [x] = -[y] \in G,$ then
\[
G = \Set{ a_0 + a_1t + \dots + a_r t^r | r\in \mathbb{N}, a_0 =0\mbox{~or~}1, a_i \in \mathbb{Z} }.
\]
A cogroup structure on $G$ is given by
\begin{align*}
&m\colon G\to G\otimes G~;~ t\mapsto 1\otimes t + t\otimes 1\\
&e\colon G\to \FF_1 ~;~t \mapsto 0\\
&i\colon G\to G~;~t\mapsto -t.
\end{align*}
Then $\mathbb{G}_a = \mathop{\mathrm{Hom}}\nolimits_{{{\mathcal{A}lg}}_{\FF_1}} (G, -)$ is the additive group. \subsection{$\mathbb{G}_m$}
Put $K= \angles{ x, y }$ and let $R$ be the congruence $\angles{(xy, 1)}.$
Then put $G = K/R.$ If we put $t = [x] \in G, $ then $G = \Set{ t^n | n\in \mathbb{Z}}.$
A cogroup structure on $G$ is given by
\begin{align*}
&m\colon G\to G\otimes G~;~ t\mapsto t\otimes t\\
&e\colon G\to \FF_1 ~;~t \mapsto 1\\
&i\colon G\to G~;~t\mapsto t^{-1}.
\end{align*}
Then $\mathbb{G}_m = \mathop{\mathrm{Hom}}\nolimits_{{{\mathcal{A}lg}}_{\FF_1}} (G, -)$ is the multiplicative group. \subsection{${\mathbb{G}\mathbb{L}}_n$} In this section, we construct an affine pgroup pscheme ${\mathbb{G}\mathbb{L}}_n,$ which induces a ${\mathcal{G}rp}$-valued functor when restricted to `good' partial rings.
\subsubsection{Linear Algebra} Let $A$ be a partial ring. \begin{defn}[$A$-modules]
An $A$-module is a partial monoid $M$ equipped with a bilinear action $A\times M\to M$ by $A.$ \end{defn}
For any natural number $n,$ $A^n$ is an $A$-module in a natural way. Let $\varphi \colon A^n \to A^m$ be an $A$-module homomorphism. Let $\set{e_1,\dots, e_n}$ and $\set{f_1,\dots, f_m}$ be the canonical bases of $A^n$ and $A^m,$ respectively. Suppose $\varphi( e_j ) = \sum_{i=1}^m c_{ij} f_i.$ Let $(a_1,\dots, a_n)\in A^n$ be any element. Since $( a_1e_1, \dots, a_n e_n )$ is a summable $n$-tuple in $A^n,$ $( a_1\varphi(e_1), \dots, a_n \varphi(e_n) )$ is a summable $n$-tuple in $A^m.$ This implies that $( a_1 c_{i1} , a_2c_{i2}, \dots, a_n c_{in} ) $ is a summable $n$-tuple in $A$ for each $i.$ Now, we put \[
A_{(n)} = \Set{ (c_1, \dots, c_n) \in A^n |
\begin{array}{l}
(a_1 c_1 , \dots , a_n c_n) \mbox{~is a summable~}n\mbox{-tuple}\\
\mbox{for any } a_1, \dots, a_n \in A
\end{array}
}. \] If we think of $C = (c_{ij})$ as an $m\times n$ matrix, then we have shown above that each row of $C$ is an element of $A_{(n)}.$ The set of $m\times n$ matrices with this property is denoted by $M_{m,n}(A).$ Conversely, if we are given an $m\times n$ matrix $C = (c_{ij})\in M_{m,n}(A),$ we can define an $A$-module homomorphism $\varphi \colon A^n\to A^m$ by the usual matrix multiplication $\varphi(a_1,\dots, a_n) = C\,^t\!(a_1,\dots, a_n).$
If $m=n,$ matrix multiplication gives $M_n(A) = M_{n,n}(A)$ a non-commutative monoid structure. The invertible elements of $M_n(A)$ constitute a group, which is denoted by $GL_n(A).$
We introduce a weaker version $M'_{m,n}(A)$ of $M_{m,n}(A),$ in which matrices have their rows in $A_n$ instead of $A_{(n)}.$ If $m=n,$ matrix multiplication gives $M'_n(A) = M'_{n,n}(A)$, in this case, a non-commutative partial magma structure, as illustrated below. If we put \begin{align*}
M &= M'_n(A), \mbox{~and}\\
M_2 &= \Set{ ( (c_{ij}), (d_{ij}) ) | (c_{i1}d_{1j}, \dots, c_{in}d_{nj}) \in A_n \mbox{~for each~} i, j }, \end{align*} then a multiplication $M_2 \to M$ is given by matrix multiplication. The unit matrix gives a unit for the partial magma $M,$ which can be multiplied with any element of $M.$ As a definition of an invertible matrix in $M'_n(A)$, we give the following one (hinted at by the pgroup pscheme construction given in a following section): a matrix $C \in M'_n(A)$ is invertible if there exists a matrix $C'\in M'_n(A)$ such that we can form $CC'$ and $C'C$ in $M'_n(A)$ and $CC' = C'C = I_n,$ the unit matrix. The invertible elements of $M'_n(A)$ constitute a partial group, which is denoted by $GL'_n(A),$ with the following (ad hoc) definition of a partial group.
\begin{defn}[partial group]
A partial group is a non-commutative
partial magma $G$ in which for every element $a\in G,$ there exists
an element $b\in G$ such that $ab$ and $ba$ can be formed in $G$ and $ab=ba=1,$ where
$1$ is the unit element of the partial magma $G.$ \end{defn}
\begin{defn}[good partial ring]
A partial ring $A$ is called good if $A_n = A_{(n)}.$ \end{defn}
If $A$ is a good partial ring, $GL_n(A) = GL'_n(A).$ Note that commutative monoids and commutative rings are good partial rings.
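Before turning to the cogroup object representing ${\mathbb{G}\mathbb{L}}_n,$ here is a small brute-force illustration of the definitions above (ours, not part of the construction; it assumes the usual description of $\FF_1$ as $\set{0,1}$ with $1+1$ undefined, so that a tuple over $\FF_1$ is summable exactly when at most one of its entries is nonzero). It enumerates $M'_3(\FF_1)$ and its invertible elements and finds exactly the $3! = 6$ permutation matrices, in line with the value ${\mathbb{G}\mathbb{L}}_n(\FF_1) = \mathfrak S_n$ stated in the theorem at the end of this section.
\begin{verbatim}
from itertools import product, permutations

n = 3

def summable(t):
    # Over F_1 = {0,1} with 1+1 undefined, a tuple is summable
    # exactly when at most one entry is nonzero.
    return sum(t) <= 1

# M'_n(F_1): n x n matrices whose rows are summable n-tuples.
rows = [r for r in product((0, 1), repeat=n) if summable(r)]
M_prime = list(product(rows, repeat=n))

def mult(C, D):
    # Partial matrix product: defined only when, for every (i, j),
    # the family (c_i1*d_1j, ..., c_in*d_nj) is summable.
    E = []
    for i in range(n):
        row = []
        for j in range(n):
            terms = tuple(C[i][k] * D[k][j] for k in range(n))
            if not summable(terms):
                return None
            row.append(sum(terms))
        E.append(tuple(row))
    return tuple(E)

identity = tuple(tuple(int(i == j) for j in range(n)) for i in range(n))

invertible = {C for C in M_prime
              if any(mult(C, D) == identity and mult(D, C) == identity
                     for D in M_prime)}

perm_matrices = {tuple(tuple(int(s[i] == j) for j in range(n)) for i in range(n))
                 for s in permutations(range(n))}

assert invertible == perm_matrices
print(len(invertible), "invertible matrices: the permutation matrices")
\end{verbatim}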
\subsubsection{Partial Cogroups}
If $\mathcal C$ is a category with finite coproducts, then we can define a partial cogroup in $\mathcal C.$ Let $I$ be an initial object of $\mathcal C,$ and $\otimes $ be a binary coproduct in $\mathcal C.$
\begin{defn}[Partial cogroup]
A partial cogroup in $\mathcal C$ is
\begin{enumerate}[label = \arabic*)]
\item an object $G,$
\item an object $H$ and an epimorphism $j\colon G\otimes G\to H,$
\item a morphism $e\colon G\to I,$ called the counit, and epimorphisms
$e_L\colon H \to I\otimes G$ and $e_R \colon H \to G\otimes I,$
\item a morphism $m\colon G\to H,$ called the comultiplication, and
\item a morphism $i\colon G\to G,$ called the inverse, and morphisms
$i_L, i_R \colon H \to G$
\end{enumerate}
which make the following diagrams commute:
\begin{enumerate}[label = \alph*)]
\item
\[
\begin{tikzcd}
& G\otimes G \ar{ld}[above]{e\otimes id}\ar[two heads]{d}{j}\ar{rd}{id\otimes e}\\
I\otimes G\ar{d}[left]{(id,!)} & H \ar[two heads]{l}{e_L}\ar[two heads]{r}[below]{e_R} & G\otimes I\ar{d}{(!, id)}\\
G & G \ar[equal]{l}\ar{u}{m}\ar[equal]{r} & G
\end{tikzcd}
\]
\item
\[
\begin{tikzcd}
& G\otimes G\ar{ld}[above]{i\otimes id}\ar{rd}{id\otimes i}\ar[two heads]{d}{j}\\
G & H\ar{l}{i_L}\ar{r}[below]{i_R} & G\\
I\ar{u}{!} & G\ar{u}{m}\ar{l}{e}\ar{r}[below]{e} & I\ar{u}{!}.
\end{tikzcd}
\]
\end{enumerate} \end{defn}
\subsubsection{${\mathbb{G}\mathbb{L}}_n$}
Let $\mathbb{N}[x_{ij}, y_{ij}\,(1\leq i,j \leq n) ]$ be the semiring of polynomials of
$2n^2$ indeterminates $x_{ij}, y_{ij}\,(1\leq i,j \leq n).$
Consider $n\times n$ matrices $X = (x_{ij}), Y = (y_{ij}), Z = XY = (z_{ij})$ and $W = YX = (w_{ij}).$
Let $K$ be the subset of $\mathbb{N}[x_{ij}, y_{ij}\,(1\leq i,j \leq n) ]$
consisting of $4n$ elements
\[
\begin{array}{l}
x_i = x_{i1}+\dots +x_{in} \,(1\leq i\leq n),\\
y_i = y_{i1}+\dots +y_{in} \,(1\leq i\leq n),\\
z_i = z_{i1}+\dots +z_{in} \,(1\leq i\leq n)\mbox{~and~}\\
w_i = w_{i1}+\dots +w_{in} \,(1\leq i\leq n).
\end{array}
\]
We put $G' = \FF_1 \angles{x_{ij}, y_{ij}\,(1\leq i,j \leq n) | \exists t, \forall t\in K}.$
Let $Q$ be the smallest congruence on $G'$ which contains $2n^2$ pairs
$(z_{ij} ,\delta_{ij})$ and $(w_{ij}, \delta_{ij})\,(1\leq i, j\leq n).$
Then we put $G=G'/Q.$
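As a quick sanity check (not needed for the construction): for $n = 1$ the set $K$ consists of the four one-term sums $x_{11},$ $y_{11},$ $z_{11} = x_{11}y_{11}$ and $w_{11} = y_{11}x_{11},$ so the summability conditions imposed on $G'$ are vacuous, and $Q$ identifies both $x_{11}y_{11}$ and $y_{11}x_{11}$ with $1.$ Writing $t = [x_{11}],$ we get $G \cong \Set{ t^m | m\in \mathbb{Z}},$ the partial ring representing $\mathbb{G}_m$ above, so for $n = 1$ this construction recovers $\mathbb{G}_m.$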
Next, let $N = \mathbb{N}[x_{ij}, y_{ij}, x'_{ij}, y'_{ij}\,(1\leq i,j \leq n) ]$ be the semiring of polynomials of
$4n^2$ indeterminates $x_{ij}, y_{ij}, x'_{ij}, y'_{ij}\,(1\leq i,j \leq n).$
Consider $n\times n$ matrices
\begin{align*}
&X = (x_{ij}), Y = (y_{ij}), Z = XY = (z_{ij}), W = YX = (w_{ij}), \\
&X' = (x'_{ij}), Y' = (y'_{ij}), Z' = X'Y' = (z'_{ij}), W' = Y'X' = (w'_{ij}), \\
&S = XX' = (s_{ij}), T = Y'Y = (t_{ij}), U = ST = (u_{ij}), V = TS = (v_{ij}).
\end{align*}
We put
\[
L = \Set{ x_i, y_i, z_i, w_i, x'_i, y'_i, z'_i, w'_i, s_i, t_i, u_i, v_i | 1\leq i \leq n},
\]
where $*_i$ denotes the sum of the $i$-th row of the matrix denoted by the corresponding capital letter $*.$
We put $H' = \FF_1 \angles{x_{ij}, y_{ij},x'_{ij}, y'_{ij}\,(1\leq i,j \leq n) | \exists t, \forall t\in L}.$
Let $R$ be the smallest congruence on $H'$ which contains $6n^2$ pairs
$(z_{ij} ,\delta_{ij}),$ $(w_{ij}, \delta_{ij}),$ $(z'_{ij} ,\delta_{ij}),$ $(w'_{ij}, \delta_{ij}),$
$(u_{ij} ,\delta_{ij})$ and $(v_{ij}, \delta_{ij})\,(1\leq i, j\leq n).$ Then we put $H = H'/R.$
The following list of maps define partial ring homomorphisms which give
$G$ a partial cogroup structure:
\begin{align*}
& j\colon G\otimes G \to H~&;~ & j(x_{ij}\otimes 1) = x_{ij}, j(y_{ij}\otimes 1) = y_{ij},\\
& & & j(1\otimes x_{ij}) = x'_{ij}, j(1\otimes y_{ij}) = y'_{ij},\\
& e\colon G \to \FF_1~&;~& e(x_{ij}) = e(y_{ij}) = \delta_{ij},\\
& e_L\colon H \to \FF_1\otimes G~&;~& e_L(x_{ij}) = e_L(y_{ij}) = \delta_{ij}\otimes 1,\\
& & & e_L(x'_{ij}) = 1\otimes x_{ij}, e_L(y'_{ij}) = 1\otimes y_{ij},\\
& e_R\colon H \to G\otimes \FF_1~&;~& e_R(x_{ij}) = x_{ij}\otimes 1, e_R(y_{ij}) = y_{ij}\otimes 1,\\
& & & e_R(x'_{ij}) = e_R(y'_{ij}) = 1\otimes \delta_{ij},\\
& m\colon G\to H~&;~& m(x_{ij}) = s_{ij}, m(y_{ij}) = t_{ij},\\
& i\colon G\to G~&;~& i(x_{ij}) = y_{ij}, i(y_{ij}) = x_{ij},\\
& i_L\colon H\to G~&;~& i_L(x_{ij}) = y_{ij}, i_L(y_{ij}) = x_{ij},\\
& & & i_L(x'_{ij}) = x_{ij}, i_L(y'_{ij}) = y_{ij},\\
& i_R\colon H\to G~&;~& i_R(x_{ij}) = x_{ij}, i_R(y_{ij}) = y_{ij},\\
& & & i_R(x'_{ij}) = y_{ij}, i_R(y'_{ij}) = x_{ij}.
\end{align*}
\begin{thm}
There exists a representable functor
${\mathbb{G}\mathbb{L}}_n \colon {\mathcal{P\!R}ing} \to \mathcal{PG}rp$
from the category of partial rings to the category of partial groups
which enjoys the following properties:
\begin{enumerate}
\item its restriction to the category of good partial rings factors through
${\mathcal{G}rp}, $ the category of groups.
\item ${\mathbb{G}\mathbb{L}}_n (A)$ is
the general linear group of degree $n$ with entries in $A$ if
$A$ is a commutative ring with $1,$ and
\item ${\mathbb{G}\mathbb{L}}_n (\FF_1) = \mathfrak S_n$ is the symmetric group on $n$ letters.
\end{enumerate}
\end{thm}
\end{document}
\begin{document}
\maketitle
\begin{abstract} The notion of retrocell in a double category with companions is introduced and its basic properties established. Explicit descriptions in some of the usual double categories are given. Monads in a double category provide an important example where retrocells arise naturally. Cofunctors appear as a special case. The motivating example of vertically closed double categories is treated in some detail. \end{abstract}
\tableofcontents
\section*{Introduction}
In \cite{Par21} an in-depth study of the double category $ {\mathbb R}{\rm ing} $ of rings, homomorphisms, bimodules and linear maps was made, and several interesting features were uncovered. It became apparent that considering this double category, rather than the category of rings and homomorphisms or the bicategory of bimodules, could provide some important insights into the nature of rings and modules.
An important property of the bicategory of bimodules is that it is biclosed, i.e. the $ \otimes $ has right adjoints in each variable so that we have bijections of linear maps \begin{center} \begin{tabular}{c}
$M \to N \obslash_T P $ \\[3pt] \hline \\[-12pt]
$N \otimes_S M \to P$ \\[3pt] \hline \\[-12pt] $ N \to P \oslash_R M $ \end{tabular} \end{center} for bimodules $$ \bfig\scalefactor{.8}
\Ctriangle/@{<-}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}/<400,300>[S`R`T\rlap{\ .};M`N`P]
\efig $$ We use a slight modification of Lambek's notation for the hom bimodules \cite{Lam66}: $ P \oslash_R M $ is the $ T\mbox{-}S $ bimodule of $ R $-linear maps $ M \to P $, and $ N \obslash_T P $ the bimodule of $ T $-linear maps. Both $ P \oslash_R M $ and $ N \obslash_T P $ are covariant in $ P $ but contravariant in the other variables. This is for $ 2 $-cells in the bicategory $ {\cal{B}}{\it im} $, but it does not extend to cells in the double category $ {\mathbb R}{\rm ing} $, which casts a shadow on our contention that $ {\mathbb R}{\rm ing} $ works better than $ {\cal{B}}{\it im} $.
The way out of this dilemma is hinted at in the commuter cells of \cite{GraPar08} (there called commutative cells) introduced to deal with the universal property of internal comma objects. That is, to use companions to define new cells, which we call retrocells below, and thus recover functoriality.
After a quick review of companions in Section 1, we introduce retrocells in Section 2 and see that they are the cells of a new double category, and if we apply this construction twice, we get the original double category, up to isomorphism.
Section 3 extends the mates calculus to double categories where we see retrocells appearing as the mates of standard cells. A careful study of dualities in Section 4 completes this.
Retrocells in the standard double categories whose vertical arrows are spans, relations, profunctors or $ {\bf V} $-matrices are analyzed in Section 5. They correspond to various sorts of liftings reminiscent of fibrations.
Section 6 studies retrocells in the context of monads in a double category. It is seen that, while Kleisli objects are certain kinds of universal cells, Eilenberg-Moore objects are universal retrocells. In $ {\mathbb S}{\rm pan} {\bf A} $, monads are category objects in $ {\bf A} $ and internal functors are cells preserving identities and multiplication. Retrocells, on the other hand, give cofunctors.
In Section 7 we extend Shulman's closed equipments to general double categories, and establish the functoriality of internal homs, covariant in one variable and retrovariant in the other, formulated in terms of ``twisted cospans''.
We end in Section 8 by re-examining commuter cells in the light of retrocells and see that this leads to an interesting triple category, though we do not pursue the triple category aspect.
The results of this paper were presented in preliminary form at CT2019 in Edinburgh and in the MIT Categories Seminar in October 2020. We thank Bryce Clarke, Matt Di~Meglio and David Spivak for expressing their interest in retrocells. We also thank the anonymous referee for a careful reading and numerous suggestions resulting in a much better presentation.
\section{Companions}
The whole paper will be concerned with double categories that have companions, so we recall the definition and the principal properties we will use, and establish some notation (see \cite{GraPar04} for more details).
\begin{definition} Let $ f \colon A \to B $ be a horizontal arrow in a double category $ {\mathbb A} $. A {\em companion} for $ f $ is a vertical arrow $ v \colon A \xy\morphism(0,0)|m|<225,0>[`;\hspace{-1mm}\bullet\hspace{-1mm}]\endxy B $ together with two {\em binding cells} $ \alpha $ and $ \beta $ as below, such that $$ \bfig\scalefactor{.8}
\square/=`=`@{>}|{\usebox{\bbox}}`>/[A`A`A`B;``v`f]
\place(250,250)[{\scriptstyle \alpha}]
\square(500,0)/>``=`=/[A`B`B`B;f```]
\place(750,250)[{\scriptstyle \beta}]
\place(1300,250)[=\ \ \id_f]
\place(1700,250)[\mbox{and}]
\square(2100,-250)/`@{>}|{\usebox{\bbox}}`=`=/[A`B`B`B\rlap{\ .};`v``]
\place(2350,0)[{\scriptstyle \beta}]
\square(2100,250)/=`=`@{>}|{\usebox{\bbox}}`>/[A`A`A`B;``v`f]
\place(2350,500)[{\scriptstyle \alpha}]
\place(3000,250)[= \ \ 1_v]
\efig $$
\end{definition}
We can always assume the vertical identities are strict and usually denote them by long equal signs in diagrams, as we just did. Of course horizontal identities are always strict, and we use a similar diagrammatic notation.
The vertical identity on $ A $, $ \id_A $, is a companion to the horizontal identity $ 1_A $, with both binding cells the common value $ 1_{\id_A} = \id_{1_A} $, $$ \bfig\scalefactor{.8} \square/=`=`=`=/[A`A`A`A\rlap{\ .};```]
\place(250,250)[{\scriptstyle 1}]
\efig $$ If $ f \colon A \to B $ and $ g \colon B \to C $ have respective companions $ (v, \alpha, \beta) $ and $ (w, \gamma, \delta) $ then $ g f $ has $ w \bdot v $ as companion with binding cells $$ \bfig\scalefactor{.8} \square/`=`=`>/[A`B`A`B;```f]
\place(250,250)[{\scriptstyle \id_f}]
\square(500,0)/=``@{>}|{\usebox{\bbox}}`>/[B`B`B`C;``w`g]
\place(750,250)[{\scriptstyle \gamma}]
\square(0,500)/=`=`@{>}|{\usebox{\bbox}}`>/[A`A`A`B;``v`f]
\place(250,750)[{\scriptstyle \alpha}]
\square(500,500)/=``@{>}|{\usebox{\bbox}}`/[A`A`B`B;``v`]
\place(750,750)[{\scriptstyle 1_v}]
\place(1450,500)[\mbox{and}]
\square(1900,0)/=`@{>}|{\usebox{\bbox}}``=/[B`B`C`C;`w``]
\place(2150,250)[{\scriptstyle 1_w}]
\square(2400,0)/>`@{>}|{\usebox{\bbox}}`=`=/[B`C`C`C\rlap{\ .};g`w``]
\place(2650,250)[{\scriptstyle \delta}]
\square(1900,500)/>`@{>}|{\usebox{\bbox}}`=`/[A`B`B`B;f`v``]
\place(2150,750)[{\scriptstyle \beta}]
\square(2400,500)/>``=`/[B`C`B`C;g```]
\place(2650,750)[{\scriptstyle \id_g}]
\efig $$
Two companions $ (v, \alpha, \beta) $ and $ (v', \alpha', \beta') $ for the same $ f $ are isomorphic by the globular isomorphism $$ \bfig\scalefactor{.8}
\square/`@{>}|{\usebox{\bbox}}`=`=/[A`B`B`B\rlap{\ .};f`v``]
\place(250,250)[{\scriptstyle \beta}]
\square(0,500)/=`=`@{>}|{\usebox{\bbox}}`>/[A`A`A`B;``v'`]
\place(250,750)[{\scriptstyle \alpha'}]
\efig $$
We usually choose a representative companion from each isomorphism class and call it $ (f_*, \psi_f, \chi_f) $ $$ \bfig\scalefactor{.8}
\square/=`=`@{>}|{\usebox{\bbox}}`>/[A`A`A`B;``f_*`f]
\place(250,250)[{\scriptstyle \psi_f}]
\square(1200,0)/>`@{>}|{\usebox{\bbox}}`=`=/[A`B`B`B\rlap{\ .};f`f_*``]
\place(1450,250)[{\scriptstyle \chi_f}]
\efig $$ The choice is arbitrary but it simplifies things if we choose the companion of $ 1_A $ to be $ (\id_A, 1_{\id_A}, 1_{\id_A}) $. In all of our examples there is a canonical choice and for that $ (1_A)_* = \id_A $.
To lighten the notation, we often write the binding cells $ \psi_f $ and $ \chi_f $ as corner brackets in diagrams: $$ \bfig\scalefactor{.8}
\square/=`=`@{>}|{\usebox{\bbox}}`>/[A`A`A`B;``f_*`f]
\place(250,200)[\ulcorner]
\place(850,250)[\mbox{and}]
\square(1200,0)/>`@{>}|{\usebox{\bbox}}`=`=/[A`B`B`B\rlap{\ .};f`f_*``]
\place(1450,250)[\lrcorner]
\efig $$ We also use $ = $ and $ {\mbox{\rule{.23mm}{2.3mm}\hspace{.6mm}\rule{.23mm}{2.3mm}}} $ for horizontal and vertical identity cells.
There is a useful technique, called {\em sliding}, where we slide a horizontal arrow around a corner into a vertical one. Specifically, there are bijections natural in every way that makes sense, $$ \bfig\scalefactor{.8}
\square/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/<1000,500>[A`C`D`E;`v`w`h]
\morphism(0,500)|a|/>/<500,0>[A`B;f]
\morphism(500,500)|a|/>/<500,0>[B`C;g]
\place(500,250)[{\scriptstyle \alpha}]
\place(1500,250)[\longleftrightarrow]
\square(2000,-150)/>`@{>}|{\usebox{\bbox}}``>/<500,800>[A`B`D`E;f`v``h]
\morphism(2500,650)|r|/@{>}|{\usebox{\bbox}}/<0,-400>[B`C;g_*]
\morphism(2500,250)|r|/@{>}|{\usebox{\bbox}}/<0,-400>[C`E;w]
\place(2250,250)[{\scriptstyle \beta}] \efig $$ and also $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/<1000,500>[A`B`C`E;f`v`w`]
\morphism(0,0)|b|/>/<500,0>[C`D;g]
\morphism(500,0)|b|/>/<500,0>[D`E;h]
\place(500,250)[{\scriptstyle \alpha}]
\place(1500,250)[\longleftrightarrow]
\square(2000,-150)/>``@{>}|{\usebox{\bbox}}`>/<500,800>[A`B`D`E\rlap{\ .};f``w`h]
\morphism(2000,650)|l|/@{>}|{\usebox{\bbox}}/<0,-400>[A`C;v]
\morphism(2000,250)|l|/@{>}|{\usebox{\bbox}}/<0,-400>[C`D;g_*]
\place(2250,250)[{\scriptstyle \beta}]
\efig $$ If we combine the two we get a bijection $$ \bfig\scalefactor{.8}
\square(0,150)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`v`w`g]
\place(250,400)[{\scriptstyle \alpha}]
\place(1000,400)[\longleftrightarrow]
\square(1500,0)/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/<500,400>[C`B`D`D;`g_*`w`]
\square(1500,400)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/<500,400>[A`A`C`B;`v`f_*`]
\place(1750,400)[{\scriptstyle \widehat{\alpha}}]
\efig $$ which is, in a sense, the conceptual basis for retrocells. That, and the idea that $ f $ and $ f_* $ are really the same morphism in different roles.
We refer the reader to \cite{Gra20} for all unexplained double category matters.
\section{Retrocells}
Let $ {\mathbb A} $ be a double category in which every horizontal arrow has a companion and choose a companion for each (with $ \id_A $ as the companion of $ 1_A $).
\begin{definition}
A {\em retrocell} $ \alpha $ in $ {\mathbb A} $, denoted $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`v`w`g]
\morphism(170,250)/<=/<200,0>[`;\alpha]
\efig $$ is a (standard) double cell of $ {\mathbb A} $ of the form $$ \bfig\scalefactor{.8}
\square/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/<500,400>[B`C`D`D\rlap{\ .};`w`g_*`]
\square(0,400)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/<500,400>[A`A`B`C;`f_*`v`]
\place(250,400)[{\scriptstyle \alpha}]
\efig $$
\end{definition}
\begin{theorem}
The objects, horizontal and vertical arrows of $ {\mathbb A} $ together with retrocells, form a double category $ {\mathbb A}^{ret} $.
\end{theorem}
\begin{proof}
The horizontal composite $ \beta \alpha $ of retrocells $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`v`w`g]
\morphism(180,250)/<=/<200,0>[`;\alpha]
\square(500,0)/>``@{>}|{\usebox{\bbox}}`>/[B`E`D`F;h``x`k]
\morphism(680,250)/<=/<200,0>[`;\beta]
\efig $$ is given by $$ \bfig\scalefactor{.8}
\square/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[E`E`F`F;`x`x`]
\place(250,250)[=]
\square(0,500)/=`@{>}|{\usebox{\bbox}}``/<500,1000>[A`A`E`E;`(hf)_*``]
\place(250,1000)[{\scriptstyle \cong}]
\morphism(500,1500)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[A`B;f_*]
\morphism(500,1000)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[B`E;h_*]
\square(500,0)/``@{>}|{\usebox{\bbox}}`=/[E`D`F`F;``k_*`]
\place(750,500)[{\scriptstyle \beta}]
\square(500,500)/=``@{>}|{\usebox{\bbox}}`/[B`B`E`D;``w`]
\square(500,1000)/=``@{>}|{\usebox{\bbox}}`/[A`A`B`B;``f_*`]
\place(750,1250)[=]
\square(1000,0)/=``@{>}|{\usebox{\bbox}}`=/[D`D`F`F;``k_*`]
\place(1250,250)[=]
\square(1000,500)/``@{>}|{\usebox{\bbox}}`/[B`C`D`D;``g_*`]
\place(1250,1000)[{\scriptstyle \alpha}]
\square(1000,1000)/=``@{>}|{\usebox{\bbox}}`/[A`A`B`C;``v`]
\square(1500,0)/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[C`C`F`F;``(kg)_*`]
\place(1750,500)[{\scriptstyle \cong}]
\square(1500,1000)/=``@{>}|{\usebox{\bbox}}`/[A`A`C`C;``v`]
\place(1750,1250)[=]
\efig $$ where the \ $ \cong $ \ represent the canonical isomorphisms $ (hf)_* \cong h_* \bdot f_* $ and $ (kg)_* \cong k_* \bdot g_* $, $$ \bfig\scalefactor{.8}
\square/`@{>}|{\usebox{\bbox}}`=`=/<1000,500>[A`E`E`E;`(hf)_*``]
\place(500,250)[\lrcorner]
\morphism(0,500)|a|/>/<500,0>[A`B;f]
\morphism(500,500)|a|/>/<500,0>[B`E;h]
\square(0,500)/>`=`=`/[A`B`A`B;f```]
\place(250,750)[{\mbox{\rule{.2mm}{2mm}\hspace{.5mm}\rule{.2mm}{2mm}}}]
\square(500,500)/=``@{>}|{\usebox{\bbox}}`/[B`B`B`E;``h_*`]
\place(750,750)[\ulcorner]
\square(0,1000)/=`=`@{>}|{\usebox{\bbox}}`/[A`A`A`B;``f_*`]
\place(250,1250)[\ulcorner]
\square(500,1000)/=``@{>}|{\usebox{\bbox}}`/[A`A`B`B;``f_*`]
\place(750,1250)[=]
\place(1400,750)[\mbox{and}]
\square(1850,0)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[D`D`F`F;`k_*`k_*`]
\place(2100,250)[=]
\square(2350,0)/>``=`=/[D`F`F`F\rlap{\ .};k```]
\place(2600,250)[\lrcorner]
\square(1850,500)/>`@{>}|{\usebox{\bbox}}`=`/[C`D`D`D;g`g_*``]
\place(2100,750)[\lrcorner]
\square(2350,500)/>``=`/[D`F`D`F;k```]
\place(2600,750)[{\mbox{\rule{.2mm}{2mm}\hspace{.5mm}\rule{.2mm}{2mm}}}]
\square(1850,1000)/=`=`@{>}|{\usebox{\bbox}}`/<1000,500>[C`C`C`F;``(kg)_*`]
\place(2350,1250)[\ulcorner]
\efig $$
The vertical composite $ \alpha' \bdot \alpha $ of retrocells $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[C`D`C'`D';g`v'`w'`h]
\morphism(180,250)/<=/<200,0>[`;\alpha']
\square(0,500)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`B`C`D;f`v`w`]
\morphism(180,750)/<=/<200,0>[`;\alpha]
\efig $$ is $$ \bfig\scalefactor{.8}
\square/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[D`D`D'`D';`w'`w'`]
\place(250,250)[=]
\square(0,500)/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[B`C`D`D;`w`g_*`]
\square(0,1000)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`B`C;`f_*`v`]
\place(250,1000)[{\scriptstyle \alpha}]
\square(500,0)/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[D`C'`D'`D'\rlap{\ .};``h_*`]
\place(750,500)[{\scriptstyle \alpha'}]
\square(500,500)/=``@{>}|{\usebox{\bbox}}`/[C`C`D`C';``v'`]
\square(500,1000)/=``@{>}|{\usebox{\bbox}}`/[A`A`C`C;``v`]
\place(750,1250)[=]
\efig $$
Horizontal and vertical identities are $$ \bfig\scalefactor{.8}
\square(0,250)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[A`A`C`C;`v`v`]
\morphism(180,500)/<=/<200,0>[`;1_v]
\place(700,500)[=]
\square(900,0)/`@{>}|{\usebox{\bbox}}`=`=/[A`C`C`C;`v``]
\place(1150,500)[=]
\square(900,500)/=`=`@{>}|{\usebox{\bbox}}`/[A`A`A`C;``v`]
\place(1650,500)[\mbox{and}]
\square(1900,250)/>`=`=`>/[A`B`A`B;f```f]
\place(2150,500)[{\scriptstyle \id_f}]
\place(2600,500)[=]
\square(2900,0)/`=`@{>}|{\usebox{\bbox}}`=/[B`A`B`B\rlap{\ .};``f_*`]
\place(3150,500)[=]
\square(2900,500)/=`@{>}|{\usebox{\bbox}}`=`/[A`A`B`A;`f_*``]
\efig $$
There are a number of things to check (horizontal and vertical unit laws and associativities as well as interchange), all of which are straightforward calculations and will be left to the reader. It is merely a question of writing out the diagrams and following the steps indicated schematically below.
The identity laws are trivial because of our conventions that $ (1_A)_* = \id_A $ and vertical identities are as strict as in $ {\mathbb A} $.
For retrocells $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A_0`A_1`C_0`C_1;f_1`v_0`v_1`g_1]
\morphism(180,250)/<=/<200,0>[`;\alpha_1]
\square(500,0)/>``@{>}|{\usebox{\bbox}}`>/[A_1`A_2`C_1`C_2;f_2``v_2`g_2]
\morphism(680,250)/<=/<200,0>[`;\alpha_2]
\square(1000,0)/>``@{>}|{\usebox{\bbox}}`>/[A_2`A_3`C_2`C_3\rlap{\ ,};f_3``v_3`g_3]
\morphism(1180,250)/<=/<200,0>[`;\alpha_3]
\efig $$ $ \alpha_3 (\alpha_2 \alpha_1) $ is a composite of 17 cells arranged in a $ 4 \times 7 $ array represented schematically as
\begin{center}
\setlength{\unitlength}{.9mm}
\begin{picture}(70,40)
\put(0,0){\framebox(70,40){}}
\put(10,0){\line(0,1){40}}
\put(20,0){\line(0,1){40}}
\put(30,0){\line(0,1){40}}
\put(40,0){\line(0,1){40}}
\put(50,0){\line(0,1){40}}
\put(60,0){\line(0,1){40}}
\put(0,10){\line(1,0){10}} \put(10,20){\line(1,0){10}} \put(20,10){\line(1,0){10}} \put(20,20){\line(1,0){10}} \put(30,10){\line(1,0){10}} \put(30,30){\line(1,0){10}} \put(40,20){\line(1,0){10}} \put(50,10){\line(1,0){10}} \put(50,30){\line(1,0){10}} \put(60,30){\line(1,0){10}}
\put(5,25){\makebox(0,0){$\scriptstyle\cong$}}
\put(15,10){\makebox(0,0){$\scriptstyle \alpha_3$}}
\put(25,30){\makebox(0,0){$\scriptstyle \cong$}}
\put(35,20){\makebox(0,0){$\scriptstyle \alpha_2$}}
\put(45,30){\makebox(0,0){$\scriptstyle \alpha_1$}}
\put(55,20){\makebox(0,0){$\scriptstyle \cong$}}
\put(65,15){\makebox(0,0){$\scriptstyle \cong$}}
\put(115,20){(1)}
\put(75,0){.}
\end{picture}
\end{center}
\noindent The empty rectangles are horizontal identities and the $ \cong $ represent canonical isomorphisms generated by companions.
$ (\alpha_3 \alpha_2) \alpha_1 $ on the other hand is of the form
\begin{center} \setlength{\unitlength}{.9mm} \begin{picture}(70,40) \put(0,0){\framebox(70,40){}}
\put(10,0){\line(0,1){40}} \put(20,0){\line(0,1){40}} \put(30,0){\line(0,1){40}} \put(40,0){\line(0,1){40}} \put(50,0){\line(0,1){40}} \put(60,0){\line(0,1){40}}
\put(0,10){\line(1,0){10}} \put(10,10){\line(1,0){10}} \put(10,30){\line(1,0){10}} \put(20,20){\line(1,0){10}} \put(30,10){\line(1,0){10}} \put(30,30){\line(1,0){10}} \put(40,20){\line(1,0){10}} \put(40,30){\line(1,0){10}} \put(50,20){\line(1,0){10}} \put(60,30){\line(1,0){10}}
\put(5,25){\makebox(0,0){$\scriptstyle\cong$}} \put(15,20){\makebox(0,0){$\scriptstyle \cong$}} \put(25,10){\makebox(0,0){$\scriptstyle \alpha_3$}} \put(35,20){\makebox(0,0){$\scriptstyle \alpha_2$}} \put(45,10){\makebox(0,0){$\scriptstyle \cong$}} \put(55,30){\makebox(0,0){$\scriptstyle \alpha_1$}} \put(65,20){\makebox(0,0){$\scriptstyle \cong$}}
\put(115,20){(2)}
\put(75,0){.}
\end{picture}
\end{center}
\noindent It is now clear what to do. Switch $ \alpha_3 $ with $ \cong $
in (1) and $ \alpha_1 $ with $ \cong $ in (2) to get
\begin{center} \setlength{\unitlength}{.9mm} \begin{picture}(30,40) \put(0,0){\framebox(30,40){}}
\put(10,0){\line(0,1){40}} \put(20,0){\line(0,1){40}} \put(30,0){\line(0,1){40}}
\put(0,20){\line(1,0){10}} \put(10,10){\line(1,0){10}} \put(10,30){\line(1,0){10}} \put(20,20){\line(1,0){10}}
\put(5,10){\makebox(0,0){$\scriptstyle \alpha_3$}} \put(15,20){\makebox(0,0){$\scriptstyle \alpha_2$}} \put(25,30){\makebox(0,0){$\scriptstyle \alpha_1$}}
\end{picture}
\end{center}
\noindent in the middle in both cases. The $ 4 \times 2 $ block on the left in (1) becomes
\begin{center} \setlength{\unitlength}{.9mm} \begin{picture}(20,40) \put(0,0){\framebox(20,40){}}
\put(10,0){\line(0,1){40}} \put(20,0){\line(0,1){40}}
\put(0,10){\line(1,0){10}} \put(10,20){\line(1,0){10}}
\put(5,25){\makebox(0,0){$\scriptstyle \cong$}} \put(15,30){\makebox(0,0){$\scriptstyle \cong$}}
\put(25,0){.}
\end{picture} \end{center}
\noindent which is not formally the same as the $ 4 \times 2 $ block in (2), but they are equal by one of the coherence identities for $ (\ )_* $. We write it out: $$ \bfig\scalefactor{.8}
\square/=`@{>}|{\usebox{\bbox}}``=/<500,1500>[A_0`A_0`A_3`A_3;`(f_3 f_2 f_1)_*``]
\place(250,750)[{\scriptstyle \cong}]
\morphism(500,1500)|r|/@{>}|{\usebox{\bbox}}/<0,-1000>[A_0`A_2;(f_2 f_1)_*]
\morphism(500,500)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[A_2`A_3;f_{3*}]
\square(500,0)/=``@{>}|{\usebox{\bbox}}`=/<500,500>[A_2`A_2`A_3`A_3;``f_{3*}`]
\place(750,250)[=]
\square(500,0)/=```/<500,1500>[A_0`A_0`A_3`A_3;```]
\morphism(1000,1500)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[A_0`A_1;f_{1*}]
\morphism(1000,1000)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[A_1`A_2;f_{2*}]
\place(750,1150)[{\scriptstyle \cong}]
\place(1400,850)[=]
\square(2000,0)/=`@{>}|{\usebox{\bbox}}``=/<500,1500>[A_0`A_0`A_3`A_3;`(f_3 f_2 f_1)_*``]
\place(2250,750)[{\scriptstyle \cong}]
\morphism(2500,1500)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[A_0`A_1;f_{1*}]
\morphism(2500,1000)|r|/@{>}|{\usebox{\bbox}}/<0,-1000>[A_1`A_3;(f_3 f_2)_*]
\square(2500,1000)/=``@{>}|{\usebox{\bbox}}`=/[A_0`A_0`A_1`A_1;``f_{1*}`]
\place(2750,1250)[{\scriptstyle =}]
\morphism(3000,1000)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[A_1`A_2;f_{2*}]
\morphism(3000,500)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[A_2`A_3\rlap{\ .};f_{3*}]
\morphism(2500,0)/=/<500,0>[A_3`A_3;]
\place(2750,700)[{\scriptstyle \cong}]
\efig $$ There may be something to worry about here because $ (f_2 f_1)_* \cong f_{2*} \bdot f_{1*} $ involves $ \chi_{f_2 f_1} $ whereas $ (f_3 f_2)_* \cong f_{3*} \bdot f_{2*} $ involves $ \chi_{f_3 f_2} $ which are unrelated. However both $ \chi_{f_2 f_1} $ and $ \chi_{f_3 f_2} $ cancel in the composites. The left hand side is
\begin{center} \setlength{\unitlength}{1mm} \begin{picture}(40,50) \put(0,0){\framebox(40,50){}}
\put(10,10){\line(0,1){10}} \put(10,20){\line(0,1){10}} \put(20,0){\line(0,1){50}} \put(30,30){\line(0,1){20}} \put(40,0){\line(0,1){50}}
\put(0,10){\line(1,0){20}}
\put(0,20){\line(1,0){40}} \put(0,30){\line(1,0){40}}
\put(20,40){\line(1,0){20}}
\put(10,5){\makebox(0,0){$\scriptstyle \chi_{f_3 f_2 f_1}$}} \put(15,15){\makebox(0,0){$\scriptstyle \psi_{f_3}$}} \put(5,25){\makebox(0,0){$\scriptstyle \psi_{f_2 f_1}$}} \put(30,25){\makebox(0,0){$\scriptstyle\chi_{f_2 f_1}$}} \put(35,35){\makebox(0,0){$\scriptstyle \psi_{f_2}$}} \put(25,45){\makebox(0,0){$\scriptstyle \psi_{f_1}$}}
\end{picture}
\end{center}
\noindent and when we cancel $ \chi_{f_2 f_1} $ with $ \psi_{f_2 f_1} $
leaving $ \id_{f_2 f_1} $, that composite reduces to
\begin{center} \setlength{\unitlength}{1mm} \begin{picture}(30,40) \put(0,0){\framebox(30,40){}}
\put(10,10){\line(0,1){30}} \put(20,10){\line(0,1){30}} \put(30,0){\line(0,1){40}}
\put(0,10){\line(1,0){30}} \put(0,20){\line(1,0){30}} \put(0,30){\line(1,0){30}}
\put(15,5){\makebox(0,0){$\scriptstyle \chi_{f_3 f_2 f_1}$}} \put(25,15){\makebox(0,0){$\scriptstyle \psi_{f_3}$}} \put(15,25){\makebox(0,0){$\scriptstyle \psi_{f_2}$}} \put(5,35){\makebox(0,0){$\scriptstyle \psi_{f_1}$}}
\end{picture}
\end{center}
\noindent as does the right hand side.
The $ 4 \times 2 $ block on the right is the same with the roles of $ \psi $ and $ \chi $ interchanged. This completes the proof of associativity of horizontal composition of retrocells.
The associativity for vertical composition is much simpler as it does not involve $ \psi $'s or $ \chi $'s, only the associativity isomorphisms of $ {\mathbb A} $. In particular if $ {\mathbb A} $ were strict, then $ {\mathbb A}^{ret} $ would be too, and the proof of associativity would be merely a question of writing down the two composites and observing that they are exactly the same.
For interchange consider retrocells $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[B_0`B_1`C_0`C_1;g_1`w_0`w_1`h_1]
\morphism(180,250)/<=/<200,0>[`;\beta_1]
\square(500,0)/>``@{>}|{\usebox{\bbox}}`>/[B_1`B_2`C_1`C_2\rlap{\ .};g_2``w_2`h_2]
\morphism(680,250)/<=/<200,0>[`;\beta_2]
\square(0,500)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A_0`A_1`B_0`B_1;f_1`v_0`v_1`]
\morphism(180,750)/<=/<200,0>[`;\alpha_1]
\square(500,500)/>``@{>}|{\usebox{\bbox}}`/[A_1`A_2`B_1`B_2;f_2``v_2`]
\morphism(680,750)/<=/<200,0>[`;\alpha_2]
\efig $$ Then the pattern for $ (\beta_2 \beta_1) \bdot (\alpha_2 \alpha_1) $ is
\begin{center}
\setlength{\unitlength}{.9mm}
\begin{picture}(80,40)
\put(0,0){\framebox(80,40){}}
\put(10,0){\line(0,1){40}}
\put(20,0){\line(0,1){40}}
\put(30,0){\line(0,1){40}}
\put(40,0){\line(0,1){40}}
\put(50,0){\line(0,1){40}}
\put(60,0){\line(0,1){40}}
\put(70,0){\line(0,1){40}}
\put(0,10){\line(1,0){50}} \put(60,10){\line(1,0){10}} \put(0,20){\line(1,0){10}} \put(20,20){\line(1,0){10}} \put(50,20){\line(1,0){10}} \put(10,30){\line(1,0){10}} \put(30,30){\line(1,0){50}}
\put(5,30){\makebox(0,0){$\scriptstyle \cong$}} \put(15,20){\makebox(0,0){$\scriptstyle \alpha_2$}} \put(25,30){\makebox(0,0){$\scriptstyle \alpha_1$}} \put(35,20){\makebox(0,0){$\scriptstyle \cong$}} \put(45,20){\makebox(0,0){$\scriptstyle \cong$}} \put(55,10){\makebox(0,0){$\scriptstyle \beta_2$}} \put(65,20){\makebox(0,0){$\scriptstyle \beta_1$}} \put(75,15){\makebox(0,0){$\scriptstyle \cong$}}
\end{picture}
\end{center}
\noindent and for $ (\beta_2 \bdot \alpha_2) (\beta_1 \bdot \alpha_1) $ it is
\begin{center}
\setlength{\unitlength}{.9mm}
\begin{picture}(60,40)
\put(0,0){\framebox(60,40){}}
\put(10,0){\line(0,1){40}}
\put(20,0){\line(0,1){40}}
\put(30,0){\line(0,1){40}}
\put(40,0){\line(0,1){40}}
\put(50,0){\line(0,1){40}}
\put(10,10){\line(1,0){10}} \put(30,10){\line(1,0){20}} \put(0,20){\line(1,0){10}} \put(20,20){\line(1,0){20}} \put(50,20){\line(1,0){10}} \put(10,30){\line(1,0){20}} \put(40,30){\line(1,0){10}}
\put(5,30){\makebox(0,0){$\scriptstyle \cong$}} \put(15,20){\makebox(0,0){$\scriptstyle \alpha_2$}} \put(25,10){\makebox(0,0){$\scriptstyle \beta_2$}} \put(35,30){\makebox(0,0){$\scriptstyle \alpha_1$}} \put(45,20){\makebox(0,0){$\scriptstyle \beta_1$}} \put(55,10){\makebox(0,0){$\scriptstyle \cong$}}
\put(65,0){.}
\end{picture}
\end{center}
\noindent The two $ \cong $ in the middle of the first one are inverse to each other, $$ g_{2*} \bdot g_{1*} \to^{\cong} (g_2 g_1)_* \to^{\cong} g_{2*} \bdot g_{1*}\ , $$ so each of $ (\beta_2 \beta_1) \bdot (\alpha_2 \alpha_1) $ and $ (\beta_2 \bdot \alpha_2) (\beta_1 \bdot \alpha_1) $ is equal to
\begin{center}
\setlength{\unitlength}{.9mm}
\begin{picture}(50,40)
\put(0,0){\framebox(50,40){}}
\put(10,0){\line(0,1){40}}
\put(20,0){\line(0,1){40}}
\put(30,0){\line(0,1){40}}
\put(40,0){\line(0,1){40}}
\put(50,0){\line(0,1){40}}
\put(10,10){\line(1,0){10}} \put(30,10){\line(1,0){10}} \put(0,20){\line(1,0){10}} \put(20,20){\line(1,0){10}} \put(40,20){\line(1,0){10}} \put(10,30){\line(1,0){10}} \put(30,30){\line(1,0){10}}
\put(5,30){\makebox(0,0){$\scriptstyle \cong$}} \put(15,20){\makebox(0,0){$\scriptstyle \alpha_2$}} \put(25,30){\makebox(0,0){$\scriptstyle \alpha_1$}} \put(25,10){\makebox(0,0){$\scriptstyle \beta_2$}} \put(35,20){\makebox(0,0){$\scriptstyle \beta_1$}} \put(45,10){\makebox(0,0){$\scriptstyle \cong$}}
\end{picture}
\end{center}
\noindent completing the proof.
\end{proof}
\begin{theorem} \label{Thm-DoubDual} (1) $ {\mathbb A}^{ret} $ has a canonical choice of companions.
\noindent (2) There is a canonical isomorphism of double categories with companions $$ {\mathbb A} \to^\cong {\mathbb A}^{ret\ ret} $$ which is the identity on objects and horizontal and vertical arrows.
\end{theorem}
\begin{proof} The companion of $ f \colon A \to B $ in $ {\mathbb A}^{ret} $ is $ f_* $, $ f $'s companion in $ {\mathbb A} $ with binding retrocells $$ \bfig\scalefactor{.8}
\square(0,250)/>`@{>}|{\usebox{\bbox}}`=`=/[A`B`B`B;f`f_*``]
\morphism(170,500)/<=/<200,0>[`;]
\place(800,500)[=]
\square(1200,0)/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B`B`B`B;`\id_B`\id_B`]
\square(1200,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`B`B;`f_*`f_*`]
\place(1450,500)[{\scriptstyle 1}]
\efig $$ and $$ \bfig\scalefactor{.8}
\square(0,250)/=`=`@{>}|{\usebox{\bbox}}`>/[A`A`A`B;``f_*`f]
\morphism(170,500)/<=/<200,0>[`;]
\place(800,500)[=]
\square(1200,0)/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[A`A`B`B\rlap{\ .};`f_*`f_*`]
\square(1200,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`A`A;`\id_A`\id_A`]
\place(1450,500)[{\scriptstyle 1}]
\efig $$ The binding equations only involve canonical isos so hold by coherence.
A cell $ \alpha $ in $ {\mathbb A}^{ret\,ret} $, i.e. a retrocell in $ {\mathbb A}^{ret} $ is $$ \bfig\scalefactor{.8}
\square(0,250)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`v`w`g]
\morphism(170,500)/<=/<200,0>[`;\alpha]
\place(800,500)[=]
\square(1200,0)/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B`C`D`D;`w`g_*`]
\square(1200,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`B`C;`f_*`v`]
\place(1450,500)[{\scriptstyle \alpha}]
\place(2200,500)[\mbox{in\ \ ${\mathbb A}^{ret}$}]
\efig $$ $$ \bfig\scalefactor{.8} \place(800,750)[=]
\place(100,0)[\ ]
\square(1250,0)/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[C`D`D`D;`g_*`\id_D`]
\square(1250,500)/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`B`C`D;`v`w`]
\square(1250,1000)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`A`B;`\id_A`f_*`]
\place(1500,750)[{\scriptstyle \alpha}]
\place(2200,750)[\mbox{in\ \ ${\mathbb A}$}]
\efig $$ and these are in canonical bijection with $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D\rlap{\ .};f`v`w`g]
\place(250,250)[{\scriptstyle \alpha'}]
\efig $$
Checking that composition and identities are preserved is a straightforward calculation and is omitted.
\end{proof}
\begin{example}\rm
If $ \cal{A} $ is a $ 2 $-category, the double category of quintets $ {\mathbb Q}\cal{A} $ has the same objects as $ \cal{A} $, the $ 1 $-cells of $ \cal{A} $ as both horizontal and vertical arrows, and cells $ \alpha $ $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`h`k`g]
\place(250,250)[{\scriptstyle \alpha}]
\place(850,250)[=]
\square(1200,0)[A`B`C`D;f`h`k`g]
\morphism(1550,350)/=>/<-140,-140>[`;\alpha]
\efig $$ i.e. a $ 2 $-cell $ \alpha \colon k f \to g h $. Horizontal and vertical composition are given by pasting. Every horizontal arrow $ f \colon A \to B $ has a companion, $ f_* $, namely $ f $ itself considered as a vertical arrow. A retrocell $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`h`k`g]
\morphism(180,250)/<=/<200,0>[`;\alpha]
\efig $$ is $$ \bfig\scalefactor{.8}
\square/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B`C`D`D;`k`g_*`]
\square(0,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`B`C;`f_*`h`]
\place(250,500)[{\scriptstyle \alpha}]
\place(900,500)[=]
\square(1400,250)[A`A`D`D;1_A`k f`g h`1_D]
\morphism(1750,600)/=>/<-140,-140>[`;\alpha]
\efig $$ i.e. a coquintet $$ \bfig\scalefactor{.8} \square[A`B`C`D\rlap{\ .};f`h`k`g]
\morphism(220,200)/=>/<140,140>[`;\alpha]
\efig $$ Thus $$ ({\mathbb Q} {\cal{A}})^{ret} = {\rm co}{\mathbb Q}{\cal{A}} = {\mathbb Q}({\cal{A}}^{co})\ . $$
\end{example}
\section{Adjoints, companions, mates}
The well-known mates calculus says that if we have functors $ F, G, H, K, U, V $ as below with $ F \dashv U $ and $ G \dashv V $, then there is a bijection between natural transformations $ t $ and $ u $ as below $$ \bfig\scalefactor{.8} \square[{\bf A}`{\bf B}`{\bf C}`{\bf D};H`U`V`K]
\morphism(180,250)/=>/<200,0>[`;t]
\place(900,250)[\longleftrightarrow]
\square(1300,0)[{\bf C}`{\bf D}`{\bf A}`{\bf B}\rlap{\ .};K`F`G`H]
\morphism(1680,250)/=>/<-200,0>[`;u]
\efig $$ This is usually stated for bicategories but with the help of retrocells we can extend it to double categories (with companions).
To say that two horizontal arrows are adjoint in a double category $ {\mathbb A} $ means they are so in the $ 2 $-category of horizontal arrows $ {\cal{H}}{\it or} {\mathbb A} $. So $ h $ left adjoint to $ f $ means we are given cells $$ \bfig\scalefactor{.8} \square/`=`=`=/<800,400>[A`A`A`A;```]
\morphism(0,400)/>/<400,0>[A`B;f]
\morphism(400,400)/>/<400,0>[B`A;h]
\place(400,200)[{\scriptstyle \epsilon}]
\place(1100,200)[\mbox{and}]
\square(1400,0)/=`=`=`/<800,400>[B`B`B`B;```]
\morphism(1400,0)|b|/>/<400,0>[B`A;h]
\morphism(1800,0)|b|/>/<400,0>[A`B;f]
\place(1800,200)[{\scriptstyle \eta}]
\efig $$ satisfying the ``triangle'' identities $$ \bfig\scalefactor{.8} \square/`=`=`=/<1000,500>[A`A`A`A;```]
\morphism(0,500)/>/<500,0>[A`B;f]
\morphism(500,500)/>/<500,0>[B`A;h]
\place(500,250)[{\scriptstyle \epsilon}]
\square(1000,0)/>``=`>/[A`B`A`B;f```f]
\place(1250,250)[{\mbox{\rule{.2mm}{2mm}\hspace{.5mm}\rule{.2mm}{2mm}}}]
\square(0,500)/>`=`=`/[A`B`A`B;f```]
\place(250,750)[{\mbox{\rule{.2mm}{2mm}\hspace{.5mm}\rule{.2mm}{2mm}}}]
\square(500,500)/=``=`/<1000,500>[B`B`B`B;```]
\place(1000,750)[{\scriptstyle \eta}]
\place(1900,500)[=]
\square(2300,250)/>`=`=`>/[A`B`A`B;f```f]
\place(2550,500)[{\mbox{\rule{.2mm}{2mm}\hspace{.5mm}\rule{.2mm}{2mm}}}]
\efig $$ and $$ \bfig\scalefactor{.8} \square/>`=`=`>/[B`A`B`A;h```h]
\place(250,250)[{\mbox{\rule{.2mm}{2mm}\hspace{.5mm}\rule{.2mm}{2mm}}}]
\square(500,0)/``=`=/<1000,500>[A`A`A`A;```]
\place(1000,250)[{\scriptstyle \epsilon}]
\morphism(500,500)/>/<500,0>[A`B;f]
\morphism(1000,500)/>/<500,0>[B`A;h]
\square(0,500)/=`=`=`/<1000,500>[B`B`B`B;```]
\place(500,750)[{\scriptstyle \eta}]
\square(1000,500)/>``=`/[B`A`B`A;h```]
\place(1250,750)[{\mbox{\rule{.2mm}{2mm}\hspace{.5mm}\rule{.2mm}{2mm}}}]
\place(1900,500)[=]
\square(2300,250)/>`=`=`>/[B`A`B`A;h```h]
\place(2550,500)[{\mbox{\rule{.2mm}{2mm}\hspace{.5mm}\rule{.2mm}{2mm}}}]
\place(2950,0)[.]
\efig $$
To say that the vertical arrows are adjoint means that they are so in the vertical bicategory $ {\cal{V}}{\it ert} {\mathbb A} $. So $ x $ is left adjoint to $ v $ if we are given cells $$ \bfig\scalefactor{.8} \square/=``=`=/<550,1000>[A`A`A`A;```]
\place(275,500)[{\scriptstyle \epsilon}]
\morphism(0,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A`C;v]
\morphism(0,500)/@{>}|{\usebox{\bbox}}/<0,-500>[C`A;x]
\place(1000,500)[\mbox{and}]
\square(1450,0)/=`=``=/<550,1000>[C`C`C`C;```]
\morphism(2000,1000)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[C`A;x]
\morphism(2000,500)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[A`C;v]
\place(1750,500)[{\scriptstyle \eta}]
\efig $$ also satisfying the triangle identities.
Suppose we are given horizontal arrows $ f $ and $ h $ with cells $ \alpha_1 $ and $ \beta_1 $ as below. In the presence of companions we can use sliding to transform them. We have bijections $$ \bfig\scalefactor{.8} \square(0,0)/`=`=`=/<1000,500>[A`A`A`A;```]
\place(500,250)[{\scriptstyle \alpha_1}]
\morphism(0,500)/>/<500,0>[A`B;f]
\morphism(500,500)/>/<500,0>[B`A;h]
\place(1300,250)[\longleftrightarrow]
\square(1600,0)/>`=`@{>}|{\usebox{\bbox}}`=/[A`B`A`A;f``h_*`]
\place(1850,250)[{\scriptstyle \alpha_2}]
\place(2450,250)[\longleftrightarrow]
\square(2800,-250)/=`=``=/<500,1000>[A`A`A`A;```]
\place(3050,250)[{\scriptstyle \alpha_3}]
\morphism(3300,750)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[A`B;f_*]
\morphism(3300,250)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[B`A;h_*]
\efig $$
$$ \bfig\scalefactor{.8} \square(0,0)/=`=`=`/<1000,500>[B`B`B`B;```]
\place(500,250)[{\scriptstyle \beta_1}]
\morphism(0,0)|b|/>/<500,0>[B`A;h]
\morphism(500,0)|b|<500,0>[A`B;f]
\place(1300,250)[\longleftrightarrow]
\square(1600,0)/=`@{>}|{\usebox{\bbox}}`=`>/[B`B`A`B;`h_*``f]
\place(1850,250)[{\scriptstyle \beta_2}]
\place(2400,250)[\longleftrightarrow]
\square(2800,-250)/=``=`=/<500,1000>[B`B`B`B\rlap{\ .};```]
\place(3050,250)[{\scriptstyle \beta_3}]
\morphism(2800,750)/@{>}|{\usebox{\bbox}}/<0,-500>[B`A;h_*]
\morphism(2800,250)/@{>}|{\usebox{\bbox}}/<0,-500>[A`B;f_*]
\efig $$
\begin{proposition} \label{Prop-AdjComp}
$ h $ is left adjoint to $ f $ with adjunctions $ \alpha_1 $ and $ \beta_1 $ if and only if $ f_* $ is left adjoint to $ h_* $ with adjunctions $ \beta_3 $ and $ \alpha_3 $.
\end{proposition}
\begin{theorem} \label{Thm-Mates}
Consider horizontal morphisms $ f $ and $ g $ and vertical morphisms $ v $ and $ w $ as in $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D\rlap{\ .};f`v`w`g]
\efig $$ (1) If $ x $ is left adjoint to $ v $ and $ y $ left adjoint to $ w $, then there is a bijection between cells $ \alpha $ and retrocells $ \beta $ as in $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`v`w`g]
\place(250,250)[{\scriptstyle \alpha}]
\place(900,250)[\longleftrightarrow]
\square(1300,0)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[C`D`A`B\rlap{\ .};g`x`y`f]
\morphism(1620,250)/=>/<-150,0>[`;\beta]
\efig $$ (2) If $ h $ is left adjoint to $ f $ and $ k $ left adjoint to $ g $, then there is a bijection between cells $ \alpha $ and retrocells $ \gamma $ as in $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`v`w`g]
\place(250,250)[{\scriptstyle \alpha}]
\place(900,250)[\longleftrightarrow]
\square(1300,0)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[B`A`D`C\rlap{\ .};h`w`v`k]
\morphism(1620,250)/=>/<-150,0>[`;\gamma]
\efig $$
\end{theorem}
\begin{proof} (1) Standard cells $ \alpha $ are in bijection with $ 2 $-cells $ \widehat{\alpha} $ in the bicategory $ {\cal{V}}{\it ert} {\mathbb A} $ $$ \bfig\scalefactor{.8}
\square(0,250)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`v`w`g]
\place(250,500)[{\scriptstyle \alpha}]
\place(900,500)[\longleftrightarrow]
\square(1300,0)/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[C`B`D`D;`g_*`w`]
\square(1300,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`C`B;`v`f_*`]
\place(1550,500)[{\scriptstyle \widehat{\alpha}}]
\efig $$ and retrocells $ \beta $ are defined to be $ 2 $-cells in $ {\cal{V}}{\it ert}{\mathbb A} $ $$ \bfig\scalefactor{.8}
\square/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[D`A`B`B\rlap{\ .};`y`f_*`]
\square(0,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[C`C`D`A;`g_*`x`]
\place(250,500)[{\scriptstyle \beta}]
\efig $$ Then our claimed bijection is just the usual bijection from bicategory theory: $$ \frac{\widehat{\alpha} \colon g_* \bdot v \to w \bdot f_*} {\beta \colon y \bdot g_* \to f_* \bdot x\rlap{\ .}} $$ (2) From the previous proposition we have that $ f_* $ is left adjoint to $ h_* $ and $ g_* $ left adjoint to $ k_* $, and again our bijection follows from the usual bicategory one: $$ \frac{\widehat{\alpha} \colon g_* \bdot v \to w \bdot f_*} {\gamma \colon v \bdot h_* \to k_* \bdot w\rlap{\ .}} $$
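Explicitly, for (1) the retrocell $ \beta $ corresponding to $ \widehat{\alpha} $ is the usual mate, namely the composite $$ y \bdot g_* \to y \bdot g_* \bdot v \bdot x \to y \bdot w \bdot f_* \bdot x \to f_* \bdot x $$ obtained by whiskering with the unit of $ x \dashv v $, then with $ \widehat{\alpha} $, and finally with the counit of $ y \dashv w $; part (2) is entirely analogous.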
\end{proof}
\begin{corollary} (1) If $ f $ has a left adjoint $ h $, $ g $ a left adjoint $ k $, $ v $ a right adjoint $ x $ and $ w $ a right adjoint $ y $, then we have a bijection of cells $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`v`w`g]
\place(250,250)[{\scriptstyle \alpha}]
\place(900,250)[\longleftrightarrow]
\square(1300,0)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[D`C`B`A\rlap{\ .};k`y`x`h]
\place(1550,250)[{\scriptstyle \delta}]
\efig $$ (2) We get the same bijection if left and right are interchanged in all four adjunctions.
\end{corollary}
\begin{proof} (1) We have the following bijections $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`v`w`g]
\place(250,250)[{\scriptstyle \alpha}]
\place(800,250)[\longleftrightarrow]
\square(1100,0)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[B`A`D`C;h`w`v`k]
\morphism(1450,250)/=>/<-200,0>[`;\beta]
\place(1900,250)[\longleftrightarrow]
\square(2200,0)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[D`C`B`A;k`y`x`h]
\place(2450,250)[{\scriptstyle \delta}]
\efig $$ the first by direct application of part (2) of Theorem \ref{Thm-Mates} and the second by applying part (1) of Theorem \ref{Thm-Mates} in $ {\mathbb A}^{ret} $ where $ x \dashv v $ and $ y \dashv w $. Finally $ \delta $ is a cell in $ ({\mathbb A}^{ret})^{ret} \cong {\mathbb A} $.
\noindent (2) For this we use (1) first and then (2) in $ {\mathbb A}^{ret} $ $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`v`w`g]
\place(250,250)[{\scriptstyle \alpha}]
\place(800,250)[\longleftrightarrow]
\square(1100,0)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[C`D`A`B;g`x`y`f]
\morphism(1450,250)/=>/<-200,0>[`;\gamma]
\place(1900,250)[\longleftrightarrow]
\square(2200,0)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[D`C`B`A\rlap{\ .};k`y`x`h]
\place(2450,250)[{\scriptstyle \delta}]
\efig $$
\end{proof}
Note that the statement of the corollary does not refer to retrocells or companions but it does not seem possible to prove it directly without companions. The infamous pinwheel \cite{DawPar93B} pops up in all attempts to do so.
\section{Coretrocells}
There is a dual situation giving two more bijections in the presence of right adjoints, but the notion of retrocell is not self-dual. In fact there is a dual notion, coretrocell, which also comes up in practice as we will see later.
As for $ 2 $-categories, there are duals op and co for double categories: $ {\mathbb A}^{op} $ has the horizontal direction reversed and $ {\mathbb A}^{co} $ the vertical. If $ {\mathbb A} $ has companions there is no reason why $ {\mathbb A}^{op} $ or $ {\mathbb A}^{co} $ should, and even if they did there would be no relation between the retrocells there and those of $ {\mathbb A} $. Companions in $ {\mathbb A}^{op} $ or $ {\mathbb A}^{co} $ correspond to conjoints in $ {\mathbb A} $, and we will use these to define coretrocells.
For completeness we recall the notion of conjoint. More details can be found in \cite{Gra20}.
\begin{definition}
Let $ f \colon A \to B $ be a horizontal arrow in $ {\mathbb A} $. A {\em conjoint} for $ f $ is a vertical arrow $ v \colon B \xy\morphism(0,0)|m|<225,0>[`;\hspace{-1mm}\bullet\hspace{-1mm}]\endxy A $ together with two cells (conjunctions) $$ \bfig\scalefactor{.8}
\square/>`=`@{>}|{\usebox{\bbox}}`=/[A`B`A`A;f``v`]
\place(250,250)[{\scriptstyle \alpha}]
\place(900,250)[\mbox{and}]
\square(1300,0)/=`@{>}|{\usebox{\bbox}}`=`>/[B`B`A`B;`v``f]
\place(1550,250)[{\scriptstyle \beta}]
\efig $$ such that $$ \bfig\scalefactor{.8}
\square(0,250)/>`=`@{>}|{\usebox{\bbox}}`=/[A`B`A`A;f``v`]
\place(250,500)[{\scriptstyle \alpha}]
\square(500,250)/=``=`>/[B`B`A`B;```f]
\place(750,500)[{\scriptstyle \beta}]
\place(1600,500)[= \mbox{\ \ $\id_f$\quad and\quad}]
\square(2200,0)/>`=`@{>}|{\usebox{\bbox}}`=/[A`B`A`A;f``v`]
\place(2450,250)[{\scriptstyle \alpha}]
\square(2200,500)/=`@{>}|{\usebox{\bbox}}`=`/[B`B`A`B;`v``]
\place(2450,750)[{\scriptstyle \beta}]
\place(3100,500)[= \mbox{\ \ $1_v$}]
\place(3200,0)[.]
\efig $$
\end{definition}
As we said, this is the vertical dual of the notion of companion and therefore has the corresponding properties. Conjoints are unique up to globular isomorphism when they exist, and we choose a representative that we call $ f^* $. We have $ (g f)^* \cong f^* \bdot g^* $ and $ 1^*_A \cong \id_A $. The choice is arbitrary, but in practice there is a canonical one, for which $ 1^*_A $ is usually $ \id_A $; we will assume this.
The dual of sliding is {\em flipping}: we have bijections, natural in every way that makes sense, $$ \bfig\scalefactor{.9}
\square(0,250)/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/<1000,500>[A`C`D`E;`v`w`h]
\place(500,500)[{\scriptstyle \alpha}]
\morphism(0,750)|a|/>/<500,0>[A`B;f]
\morphism(500,750)|a|/>/<500,0>[B`C;g]
\place(1500,500)[\longleftrightarrow]
\square(2100,0)/>``@{>}|{\usebox{\bbox}}`>/<500,1000>[B`C`D`E;g``w`h]
\place(2350,500)[{\scriptstyle \beta}]
\morphism(2100,1000)|l|/@{>}|{\usebox{\bbox}}/<0,-500>[B`A;f^*]
\morphism(2100,500)|l|/@{>}|{\usebox{\bbox}}/<0,-500>[A`D;v]
\efig $$ and $$ \bfig\scalefactor{.8}
\square(0,250)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/<1000,500>[A`B`C`E;f`v`w`]
\place(500,500)[{\scriptstyle \alpha}]
\morphism(0,250)|b|/>/<500,0>[C`D;g]
\morphism(500,250)|b|/>/<500,0>[D`E;h]
\place(1500,500)[\longleftrightarrow]
\square(2100,0)/>`@{>}|{\usebox{\bbox}}``>/<500,1000>[A`B`C`D\rlap{\ .};f`v``g]
\place(2350,500)[{\scriptstyle \beta}]
\morphism(2600,1000)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[B`E;w]
\morphism(2600,500)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[E`D;h^*]
\efig $$
We now complete Proposition \ref{Prop-AdjComp}.
\begin{proposition}
Assuming only those companions and conjoints mentioned, we have the following natural bijections $$ \bfig\scalefactor{.67} \square(0,250)/`=`=`=/<1000,500>[A`A`A`A;```] \place(500,500)[{\scriptstyle \alpha_1}]
\morphism(0,750)|a|/>/<500,0>[A`B;f]
\morphism(500,750)|a|/>/<500,0>[B`A;h]
\place(1350,500)[\longleftrightarrow]
\square(1700,250)/>`=`@{>}|{\usebox{\bbox}}`=/[A`B`A`A;f``h_*`] \place(1950,500)[{\scriptstyle \alpha_2}]
\place(2650,500)[\longleftrightarrow]
\square(3100,0)/=`=``=/<500,1000>[A`A`A`A;```] \place(3350,500)[{\scriptstyle \alpha_3}]
\morphism(3600,1000)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[A`B;f_*]
\morphism(3600,500)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[B`A;h_*]
\morphism(500,100)/<->/<0,-300>[`;]
\morphism(1950,100)/<->/<0,-300>[`;]
\square(250,-1000)/>`@{>}|{\usebox{\bbox}}`=`=/[B`A`A`A;h`f^*``] \place(500,-750)[{\scriptstyle \alpha_4}]
\place(1250,-750)[\longleftrightarrow]
\square(1700,-1000)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B`B`A`A;`f^*`h_*`] \place(1950,-750)[{\scriptstyle \alpha_5}]
\morphism(500,-1200)/<->/<0,-300>[`;]
\square(250,-2700)/=``=`=/<500,1000>[A`A`A`A;```] \place(500,-2200)[{\scriptstyle \alpha_6}]
\morphism(250,-1700)|l|/@{>}|{\usebox{\bbox}}/<0,-500>[A`B;h^*]
\morphism(250,-2200)|l|/@{>}|{\usebox{\bbox}}/<0,-500>[B`A;f^*]
\square(2850,-3950)/=`=`=`/<1000,500>[B`B`B`B;```] \place(3350,-3700)[{\scriptstyle \beta_1}]
\morphism(2850,-3950)|b|/>/<500,0>[B`A;h]
\morphism(3350,-3950)|b|/>/<500,0>[A`B;f]
\place(2520,-3700)[\longleftrightarrow]
\square(1700,-3950)/=`@{>}|{\usebox{\bbox}}`=`>/[B`B`A`B;`h_*``f] \place(1950,-3700)[{\scriptstyle \beta_2}]
\place(1200,-3700)[\longleftrightarrow]
\square(250,-4200)/=``=`=/<500,1000>[B`B`B`B;```] \place(500,-3700)[{\scriptstyle \beta_3}]
\morphism(250,-3200)|l|/@{>}|{\usebox{\bbox}}/<0,-500>[B`A;h_*]
\morphism(250,-3700)|l|/@{>}|{\usebox{\bbox}}/<0,-500>[A`B;f_*]
\morphism(1950,-2900)/<->/<0,-300>[`;]
\morphism(3350,-2900)/<->/<0,-300>[`;]
\square(3100,-2700)/=`=`@{>}|{\usebox{\bbox}}`>/[B`B`B`A;``f^*`h] \place(3350,-2450)[{\scriptstyle \beta_4}]
\place(2650,-2450)[\longleftrightarrow]
\square(1700,-2700)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B`B`A`A;`h_*`f^*`] \place(1950,-2450)[{\scriptstyle \beta_5}]
\morphism(3350,-1750)/<->/<0,-300>[`;]
\square(3100,-1550)/=`=``=/<500,1000>[B`B`B`B\rlap{\ .};```] \place(3350,-1050)[{\scriptstyle \beta_6}]
\morphism(3600,-550)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[B`A;f^*]
\morphism(3600,-1050)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[A`B;h^*]
\efig $$
The following are then equivalent. \begin{itemize}
\item[(1)] $ h $ is left adjoint to $ f $ with adjunctions $ \alpha_1 $
and $ \beta_1 $
\item[(2)] $ h_* $ is a conjoint for $ f $ with conjunctions $ \alpha_2$
and $ \beta_2 $
\item[(3)] $ f_* $ is left adjoint to $ h_* $ with adjunctions $ \alpha_3 $
and $ \beta_3 $
\item[(4)] $ f^* $ is a companion for $ h $ with binding cells $ \alpha_4 $
and $ \beta_4 $
\item[(5)] $ f^* $ is isomorphic to $ h_* $ with inverse isomorphisms
$ \alpha_5 $ and $ \beta_5 $
\item[(6)] $ f^* $ is left adjoint to $ h^* $ with adjunctions $ \alpha_6 $
and $ \beta_6 $.
\end{itemize}
\end{proposition}
\begin{definition} Suppose that in $ {\mathbb A} $ every horizontal arrow $ f $ has a conjoint $ f^* $. Then a {\em coretrocell} $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`v`w`g]
\morphism(260,200)|r|/=>/<0,200>[`;\alpha]
\efig $$ is a (standard) cell $$ \bfig\scalefactor{.8}
\square/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[D`A`C`C;`g^*`v`]
\place(250,500)[{\scriptstyle \alpha}]
\square(0,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[B`B`D`A;`w`f^*`]
\efig $$ in $ {\mathbb A} $.
\end{definition}
Coretrocells are retrocells in $ {\mathbb A}^{co} $. So all properties of retrocells dualize to coretrocells. In particular we have a double category $ {\mathbb A}^{cor} $ whose cells are coretrocells. Dualities can be confusing so we list them here.
\begin{proposition} (1) If $ {\mathbb A} $ has conjoints then $ {\mathbb A}^{op} $ and $ {\mathbb A}^{co} $ have companions and
\noindent (a) $ ({\mathbb A}^{cor})^{op} = ({\mathbb A}^{op})^{ret} $
\noindent (b) $ ({\mathbb A}^{cor})^{co} = ({\mathbb A}^{co})^{ret} $
\noindent (2) If $ {\mathbb A} $ has companions then $ {\mathbb A}^{op} $ and $ {\mathbb A}^{co} $ have conjoints and
\noindent (a) $ ({\mathbb A}^{ret})^{op} = ({\mathbb A}^{op})^{cor} $
\noindent (b) $ ({\mathbb A}^{ret})^{co} = ({\mathbb A}^{co})^{cor} $
\noindent (3) Under the above conditions
\noindent (a) $ ({\mathbb A}^{ret})^{coop} = ({\mathbb A}^{coop})^{ret} $
\noindent (b) $ ({\mathbb A}^{cor})^{coop} = ({\mathbb A}^{coop})^{cor} $.
\end{proposition}
Passing between $ {\mathbb A} $ and $ {\mathbb A}^{co} $ switches left adjoints to right (both horizontal and vertical), companions with conjoints, and retrocells with coretrocells. Thus we get the dual theorem for mates.
\begin{theorem} Assume $ {\mathbb A} $ has conjoints.
(1) If $ x $ is right adjoint to $ v $ and $ y $ right adjoint to $ w $, then there is a bijection between cells $ \alpha $ and coretrocells $ \beta $ $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`v`w`g]
\place(250,250)[{\scriptstyle \alpha}]
\place(900,250)[\longleftrightarrow]
\square(1300,0)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[C`D`A`B\rlap{\ .};g`x`y`f]
\morphism(1550,200)|r|/=>/<0,200>[`;\beta]
\efig $$
(2) If $ h $ is right adjoint to $ f $ and $ k $ right adjoint to $ g $, then there is a bijection between cells $ \alpha $ and coretrocells $ \gamma $ $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`v`w`g]
\place(250,250)[{\scriptstyle \alpha}]
\place(900,250)[\longleftrightarrow]
\square(1300,0)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[B`A`D`C\rlap{\ .};g`w`v`f]
\morphism(1550,200)|r|/=>/<0,200>[`;\gamma]
\efig $$
\end{theorem}
Whereas we think of companions as vertical arrows isomorphic to horizontal ones, it makes sense to think of a cell $ \alpha $ as above as a cell $$ \bfig\scalefactor{.8}
\square/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[C`B`D`D;`g_*`w`]
\place(250,500)[{\scriptstyle \widehat{\alpha}}]
\square(0,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`C`B;`v`f_*`]
\efig $$ (which it corresponds to bijectively) and reversing its direction would give a natural notion of a cell in the opposite direction, thus giving retrocells. Coretrocells, on the other hand, are less intuitive. We think of conjoints as vertical arrows adjoint to horizontal ones, and although there is a bijection between cells $ \alpha $ and cells $$ \bfig\scalefactor{.8}
\square/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[A`D`C`C\rlap{\ ,};`v`g^*`]
\place(250,500)[{\scriptstyle \alpha^\vee}]
\square(0,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[B`B`A`D;`f^*`w`]
\efig $$ this is more in the nature of a proposition than a tautology. Nevertheless, formally the two bijections are dual, so have the same status. Reversing the direction of the $ \alpha^\vee $ gives us coretrocells, and they do come up in practice as we will see in the next sections.
\section{Retrocells for spans and such}
If $ {\bf A} $ is a category with pullbacks, we get a double category $ {\mathbb S}{\rm pan} {\bf A} $ whose horizontal part is $ {\bf A} $, whose vertical arrows are spans and whose cells are span morphisms, modified to account for the horizontal arrows $$ \bfig\scalefactor{.8}
\square(0,250)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`S`T`g]
\place(250,500)[{\scriptstyle \alpha}]
\place(900,500)[=]
\square(1300,0)[S`T`C`D\rlap{\ .};\alpha`\sigma_1 `\tau_1`g]
\square(1300,500)/>`<-`<-`/[A`B`S`T;f`\sigma_0`\tau_0`]
\efig $$ $ {\mathbb S}{\rm pan} {\bf A} $ has companions $ f_* $ and conjoints $ f^* $: $$ \bfig\scalefactor{.7} \place(0,400)[f_*\ \ =]
\morphism(400,400)|r|/>/<0,400>[A`A;1_A]
\morphism(400,400)|r|/>/<0,-400>[A`B;f]
\place(900,400)[\mbox{and}]
\place(1400,400)[f^*\ \ =]
\morphism(1800,400)|r|/>/<0,400>[A`B;f]
\morphism(1800,400)|r|/>/<0,-400>[A`A\rlap{\ \ .};1_A]
\efig $$
A retrocell $ \beta $ is $$ \bfig\scalefactor{.8}
\square(0,250)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`S`T`g]
\morphism(350,500)/=>/<-200,0>[`;\beta]
\place(900,500)[=]
\square(1400,0)/>`>`>`=/<650,500>[T \times_B A`S`D`D;\beta`\tau_1 p_1`g \sigma_1`]
\square(1400,500)/=`<-`<-`/<650,500>[A`A`T \times_B A`S;`p_2`\sigma_0`]
\efig $$ where $$ \bfig\scalefactor{.8} \square[T \times_B A`A`T`B;p_2`p_1`f`\tau_0]
\efig $$ is a pullback.
A coretrocell $ \gamma $ is $$ \bfig\scalefactor{.8}
\square(0,250)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`S`T`g]
\morphism(250,450)|r|/=>/<0,200>[`;\gamma]
\place(900,500)[=]
\square(1400,0)/>`>`>`=/<550,500>[C \times_D T`S`C`C\rlap{\ .};\gamma`p_1`\sigma_1`]
\square(1400,500)/=`<-`<-`/<550,500>[B`B`C \times_D T`S;`\tau_0 p_2`f \sigma_0`]
\efig $$
When $ {\bf A} = {\bf Set} $ we can represent an element $ s \in S $ with $ \sigma_0 s = a $ and $ \sigma_1 s = c $ by an arrow $ a \todo{s} c $. Then a morphism of spans $ \alpha $ is a function $$ (a \todo{s} c) \longmapsto (f a \todo{\alpha(s)} g c) . $$
For a retrocell $ \beta $, an element of $ T \times_B A $ is a pair $ (b \todo{t} d, a) $ such that $ f a = b $ so we can represent it as $ f a \todo{t} d $. Then $ \beta $ is a function $ (f a \todo{t} d) \longmapsto (a \todo{\beta t} \beta_1 t) $ with $ g \beta_1 t = d $. If we picture $ S $ as lying over $ T $ (thinking of (co)fibrations) then $ \beta $ is a lifting: for every $ t $ we are given a $ \beta t $ $$ \bfig\scalefactor{.8}
\square/@{>}|{\usebox{\bbox}}`--`--`@{>}|{\usebox{\bbox}}/[a`\beta_1 t`f a`d;\beta t```t]
\morphism(250,200)/|->/<0,200>[`;]
\morphism(900,0)/--/<0,500>[T`S;]
\efig $$ So it is like an opfibration but without any of the category structure around (in particular we cannot say that ``$ \beta t $ is over $ t $'').
For a coretrocell $ \gamma $, an element of $ C \times_D T $ is a pair $ (c, b \todo{t} d) $ with $ g c = d $ which we can write as $ b \todo{t} g c $. $ \gamma $ then assigns to such a $ t $ an $ S $ element $ \gamma_0 t \to^{\gamma t} c $ with $ f \gamma_0 t = b $, i.e. a lifting from $ T $ to $ S $ $$ \bfig\scalefactor{.8}
\square/@{>}|{\usebox{\bbox}}`--`--`@{>}|{\usebox{\bbox}}/[\gamma_0 t`c`b`g c\rlap{\ ,};\gamma t```t]
\morphism(250,200)/|->/<0,200>[`;]
\efig $$ much like a fibration, though without the category structure.
This example shows well the difference between retrocells and coretrocells and their comparison with actual cells.
The story for relations is much the same. If $ {\bf A} $ is a regular category and $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`R`S`g]
\efig $$ is a boundary in $ {\mathbb R}{\rm el} {\bf A} $, i.e. $ f $ and $ g $ are morphisms and $ R $ and $ S $ are relations, then in the internal language of $ {\bf A} $, there is a (necessarily unique) cell iff $$ a \sim_R c \Rightarrow f a \sim_S g c\ , $$ there is a retrocell iff $$ f a \sim_S d \Rightarrow \exists c (a \sim_R c \wedge g c = d) $$ and a coretrocell iff $$ b \sim_S g c \quad\Rightarrow\quad \exists a (a \sim_R c \wedge f a = b) . $$
Profunctors are the relations of the $ {\cal C}{\it at} $ world. There is a double category which we call $ {\mathbb C}{\rm at} $ whose objects are small categories, horizontal arrows functors, vertical arrows profunctors, and cells the appropriate natural transformations. In a typical cell $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[{\bf A}`{\bf B}`{\bf C}`{\bf D};F`P`Q`G]
\place(250,250)[{\scriptstyle t}]
\efig $$ $ t $ is a natural transformation $ P (-, =) \ \to \ Q (F -, G =) $. $ {\mathbb C}{\rm at} $ has companions and conjoints: $$ F_* (A, B) = {\bf B} (FA, B) $$ $$ F^* (B, A) = {\bf B} (B, FA) . $$ We denote an element $ p \in P (A, C) $ by an arrow $ p \colon A \xy\morphism(0,0)|m|<225,0>[`;\hspace{-1mm}\bullet\hspace{-1mm}]\endxy C $. So the action of $ t $ is $$ t \colon (p \colon A \xy\morphism(0,0)|m|<225,0>[`;\hspace{-1mm}\bullet\hspace{-1mm}]\endxy C) \longmapsto (t p \colon FA \xy\morphism(0,0)|m|<225,0>[`;\hspace{-1mm}\bullet\hspace{-1mm}]\endxy G C) $$ natural in $ A $ and $ C $, of course.
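Recall that composition of profunctors is computed by the usual coend (tensor over the middle category) formula; in particular $$ Q \otimes_{\bf B} F_* (A, D) = \int^{B \in {\bf B}} {\bf B}(FA, B) \times Q(B, D) \cong Q(FA, D) $$ by the co-Yoneda lemma, whereas $ G_* \otimes_{\bf C} P (A, D) = \int^{C \in {\bf C}} P(A, C) \times {\bf D}(GC, D) $ is genuinely a quotient. This is used in the description of retrocells below.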
A retrocell $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[{\bf A}`{\bf B}`{\bf C}`{\bf D};F`P`Q`G]
\morphism(350,250)/=>/<-200,0>[`;\phi]
\efig $$ is a natural transformation $ \phi \colon Q \otimes_{\bf B} F_* \to G_* \otimes_{\bf C} P $. An element of $ Q \otimes_{\bf B} F_* (A, D) $ is an element of $ Q (FA, D) $, $ g \colon FA \xy\morphism(0,0)|m|<225,0>[`;\hspace{-1mm}\bullet\hspace{-1mm}]\endxy D $. An element of $ G_* \otimes_{\bf C} P (A, D) $ is an equivalence class $$ [p \colon A \xy\morphism(0,0)|m|<225,0>[`;\hspace{-1mm}\bullet\hspace{-1mm}]\endxy C, d \colon GC \to D]_C . $$ So a retrocell assigns to each element of $ Q $, $ q \colon FA \to D $, an equivalence class $$ [\phi (q) \colon A \xy\morphism(0,0)|m|<225,0>[`;\hspace{-1mm}\bullet\hspace{-1mm}]\endxy C, \ov{\phi} (q) \colon GC \to D] . $$ We can think of it as a lifting, like for spans $$ \bfig\scalefactor{.8}
\square/@{>}|{\usebox{\bbox}}`--``@{>}|{\usebox{\bbox}}/<600,800>[A`C`FA`D\rlap{\ .};\phi(q)```q]
\morphism(600,800)/--/<0,-400>[C`GC;]
\morphism(600,400)/>/<0,-400>[GC`D;]
\morphism(300,250)/|->/<0,400>[`;]
\efig $$ The lifting $ C $ does not lie over $ D $, there is merely a comparison $ GC \to D $. Furthermore the lifting is not unique, but two liftings are connected by a zigzag of $ {\bf C} $ morphisms. We have not spelled out the details because we do not know of any occurrences of these retrocells in print.
Coretrocells of profunctors are similar (dual). We get a ``lifting'' $$ \bfig\scalefactor{.8}
\square/@{>}|{\usebox{\bbox}}``--`@{>}|{\usebox{\bbox}}/<600,800>[A`C`B`GC\rlap{\ .};p```q]
\morphism(0,800)/--/<0,-400>[A`FA;]
\morphism(0,400)/<-/<0,-400>[FA`B;]
\morphism(300,250)/|->/<0,400>[`;]
\efig $$
A final variation on the span theme is $ {\bf V} $-matrices. Let $ {\bf V} $ be a monoidal category with coproducts preserved by $ \otimes $ in each variable separately. There is associated a double category which we call $ {\bf V} $-$ {\mathbb S}{\rm et} $. Its objects are sets and horizontal arrows functions. A vertical arrow $ A \xy\morphism(0,0)|m|<225,0>[`;\hspace{-1mm}\bullet\hspace{-1mm}]\endxy C $ is an $ A \times C $ matrix of objects of $ {\bf V} $, $ [V_{ac}] $. A cell is a matrix of morphisms $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`{[}V_{ac}{]}`{[}W_{bd}{]}`g]
\place(250,250)[{\scriptstyle {[}\alpha_{ac}{]}}]
\efig $$ $$ \alpha_{ac} \colon V_{ac} \to W_{fa, gc} . $$ Vertical composition is matrix multiplication $$ [X_{ce}] \otimes [V_{ac}] = [\sum_{c\in C} X_{ce} \otimes V_{ac}] . $$ Every horizontal arrow has a companion $$ f_* = [\Delta_{fa, b}] $$ and a conjoint $$ f^* = [\Delta_{b, fa}] $$ where $ \Delta $ is the ``Kronecker delta'' $$ \Delta_{b, b'} = \left\{ \begin{array}{lll} I & \mbox{if} & b = b'\\ 0 & \mbox{if} & b \neq b'. \end{array} \right. $$
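In the formulas below we use the easily checked composites $$ [W_{bd}] \otimes f_* = \Big[\sum_b W_{bd} \otimes \Delta_{fa, b}\Big] \cong [W_{fa, d}] \qquad \mbox{and} \qquad g_* \otimes [V_{ac}] = \Big[\sum_c \Delta_{gc, d} \otimes V_{ac}\Big] \cong \Big[\sum_{gc = d} V_{ac}\Big]\ , $$ the isomorphisms coming from the unit isomorphisms and preservation of coproducts by $ \otimes $.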
A retrocell $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`{[}V_{ac}{]}`{[}W_{bd}{]}`g]
\morphism(340,240)/=>/<-200,0>[`;\phi]
\efig $$ is an $ A \times D $ matrix $ [\phi_{ad}] $ $$ \phi_{ad} \colon W_{fa, d} \to \sum_{gc = d} V_{ac} \ . $$ A coretrocell $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`{[}V_{ac}{]}`{[}W_{bd}{]}`g]
\morphism(250,200)|r|/=>/<0,200>[`;\psi]
\efig $$ is a $ B \times C $ matrix $ [\psi_{bc}] $ $$ \psi_{bc} \colon W_{b, gc} \to \sum_{fa = b} V_{ac} \ . $$ For example, if $ {\bf V} = {\bf Ab} $, and we again represent elements of $ V_{ac} $ by arrows $ a \todo{v} c $ (resp. of $ W_{bd} $ by $ b \todo{w} d $), then $ \phi $ associates to each $ f a \todo{w} d $ a finite number of elements $ a \todo{v_i} c_i $ with $ g c_i = d $ $$ \bfig\scalefactor{.8}
\square/@{>}|{\usebox{\bbox}}`--`--`@{>}|{\usebox{\bbox}}/[a`c_i`f a`d;v_i```w]
\morphism(250,200)/|->/<0,200>[`;]
\place(900,250)[(i = 1, ..., n)\ .]
\efig $$ Of course the dual situation holds for coretrocells $ \psi $.
So we see that (co)retrocells in each case give liftings but of a type adapted to the situation. For spans they are uniquely specified, for relations they exist but are not specified, for profunctors only up to a connectedness condition and for matrices of Abelian groups we get a finite number of them.
\section{Monads}
A monad in $ {\cal C}{\it at} $ is a quadruple $ ({\bf A}, T, \eta, \mu) $ where $ {\bf A} $ is a category, $ T \colon {\bf A} \to {\bf A} $ an endo\-functor, $ \eta \colon 1_{\bf A} \to T $ and $ \mu \colon T^2 \to T $ natural transformations satisfying the well-known unit and associativity laws. In \cite{Str72} Street introduced morphisms of monads $$ (F, \phi) \colon ({\bf A}, T, \eta, \mu) \to ({\bf B}, S, \kappa, \nu) $$ as functors $ F \colon {\bf A} \to {\bf B} $ together with a natural transformation $$ \bfig\scalefactor{.8} \square[{\bf A}`{\bf B}`{\bf A}`{\bf B};F`T`S`F]
\morphism(340,320)/=>/<-140,-140>[`;\phi]
\efig $$ respecting units and multiplications in the obvious way. He called these monad functors, now called lax monad morphisms (see \cite{Lei04}). This was done, not just in $ {\cal C}{\it at} $, but in a general $ 2 $-category. Using duality, he also considered what he called monad opfunctors, i.e. oplax morphisms of monads, with the $ \phi $ in the opposite direction.
The lax morphisms work well with Eilenberg-Moore algebras, giving a functor $$ {\bf EM} (F, \phi) \colon {\bf EM} ({\mathbb T}) \to {\bf EM} ({\mathbb S}) $$ $$ (TA \to^a A) \longmapsto (SFA \to^{\phi A} FTA \to^{Fa} FA) $$ whereas the oplax ones give functors on the Kleisli categories $$ {\bf Kl} (F, \psi) \colon {\bf Kl} ({\mathbb T}) \to {\bf Kl} ({\mathbb S}) $$ $$ (A \to^f TB) \longmapsto (FA \to^{Ff} FTB \to^{\psi B} SFB) . $$
The story for monads in a double category is this (see \cite{FioGamKoc11, FioGamKoc12}, though note that there horizontal and vertical are reversed). In general we just get one kind of morphism, the oplax ones. If we have companions then we also get the lax ones, and if we also have conjoints we have another kind. The $ 2 $-category case considered by Street corresponds to the double category of coquintets which has companions but not conjoints.
Let $ {\mathbb A} $ be a double category. A vertical {\em monad} in $ {\mathbb A} $, $ t = (A, t, \eta, \mu) $ consists of an object $ A $, a vertical endomorphism $ t $ and two cells $ \eta $ and $ \mu $ as below $$ \bfig\scalefactor{.8}
\square(0,250)/=`=`@{>}|{\usebox{\bbox}}`=/[A`A`A`A;``t`]
\place(250,500)[{\scriptstyle \eta}]
\square(1100,0)/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[A`A`A`A;``t`]
\place(1350,500)[{\scriptstyle \mu}]
\morphism(1100,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A`A;t]
\morphism(1100,500)/@{>}|{\usebox{\bbox}}/<0,-500>[A`A;t]
\efig $$ satisfying $$ \bfig\scalefactor{.8}
\square/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[A`A`A`A;`t`t`]
\place(250,250)[{\scriptstyle =}]
\square(0,500)/=`=`@{>}|{\usebox{\bbox}}`/[A`A`A`A;``t`]
\place(250,750)[{\scriptstyle \eta}]
\square(500,0)/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[A`A`A`A;``t`]
\place(750,500)[{\scriptstyle \mu}]
\place(1300,500)[=]
\square(1600,250)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[A`A`A`A;`t`t`]
\place(1850,500)[{\scriptstyle 1_t}]
\place(2400,500)[=]
\square(2700,0)/=`=`@{>}|{\usebox{\bbox}}`=/[A`A`A`A;``t`]
\place(2950,250)[{\scriptstyle \eta}]
\square(2700,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`A`A;`t`t`]
\place(2950,750)[{\scriptstyle =}]
\square(3200,0)/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[A`A`A`A;``t`]
\place(3450,500)[{\scriptstyle \mu}]
\efig $$ $$ \bfig\scalefactor{.8}
\square/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[A`A`A`A;`t`t`]
\place(250,250)[{\scriptstyle =}]
\square(0,500)/=``@{>}|{\usebox{\bbox}}`/<500,1000>[A`A`A`A;``t`]
\place(250,1000)[{\scriptstyle \mu}]
\morphism(0,1500)/@{>}|{\usebox{\bbox}}/<0,-500>[A`A;t]
\morphism(0,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A`A;t]
\square(500,0)/=``@{>}|{\usebox{\bbox}}`=/<500,1500>[A`A`A`A;``t`]
\place(750,750)[{\scriptstyle \mu}]
\place(1500,750)[=]
\square(2000,0)/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[A`A`A`A;``t`]
\place(2250,500)[{\scriptstyle \mu}]
\square(2000,1000)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`A`A;`t`t`]
\place(2250,1250)[{\scriptstyle =}]
\morphism(2000,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A`A;t]
\morphism(2000,500)/@{>}|{\usebox{\bbox}}/<0,-500>[A`A;t]
\square(2500,0)/=``@{>}|{\usebox{\bbox}}`=/<500,1500>[A`A`A`A\rlap{\ .};``t`]
\place(2750,750)[{\scriptstyle \mu}]
\efig $$
A (horizontal) {\em morphism of monads} $ (f, \psi) \colon (A, t, \eta, \mu) \to (B, s, \kappa, \nu) $ consists of a horizontal arrow $ f $ and a cell $ \psi $ as below, such that $$ \bfig\scalefactor{.8}
\square/=`=`@{>}|{\usebox{\bbox}}`=/[A`A`A`A;``t`]
\place(250,250)[{\scriptstyle \eta}]
\square(500,0)/>``@{>}|{\usebox{\bbox}}`>/[A`B`A`B;f``s`f]
\place(750,250)[{\scriptstyle \psi}]
\place(1300,250)[=]
\square(1600,0)/>`=`=`>/[A`B`A`B;f```f]
\place(1850,250)[{\scriptstyle \id_f}]
\square(2100,0)/=``@{>}|{\usebox{\bbox}}`=/[B`B`B`B;``s`]
\place(2350,250)[{\scriptstyle \kappa}]
\efig $$ and $$ \bfig\scalefactor{.8}
\square/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[A`A`A`A;``t`]
\place(250,500)[{\scriptstyle \mu}]
\morphism(0,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A`A;t]
\morphism(0,500)/@{>}|{\usebox{\bbox}}/<0,-500>[A`A;t]
\square(500,0)/>``@{>}|{\usebox{\bbox}}`>/<500,1000>[A`B`A`B;f``s`f]
\place(750,500)[{\scriptstyle \psi}]
\place(1400,500)[=]
\square(1800,0)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`A`B;f`t`s`f]
\place(2050,250)[{\scriptstyle \psi}]
\square(1800,500)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`B`A`B;f`t`s`]
\place(2050,750)[{\scriptstyle \psi}]
\square(2300,0)/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[B`B`B`B\rlap{\ .};``s`]
\place(2550,500)[{\scriptstyle \nu}]
\efig $$ These are the oplax morphisms referred to above.
There are also vertical morphisms of monads, ``bimodules'', whose composition requires certain well-behaved coequalizers. They are interesting (see e.g. \cite{Shu08}), of course, but will not concern us here.
If $ {\mathbb A} $ has companions we can also define retromorphisms of monads. (See \cite{Cla22, DiM22}.)
\begin{definition} A {\em retromorphism of monads} $ (f, \phi) \colon (A, t, \eta, \mu) \to (B, s, \kappa, \nu) $ consists of a horizontal arrow $ f $ and a retrocell $ \phi $ $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`A`B;f`t`s`f]
\morphism(350,250)/=>/<-200,0>[`;\phi]
\efig $$ satisfying $$ \bfig\scalefactor{.8} \square/=`=`>`=/[B`B`B`B;``s`]
\place(250,250)[{\scriptstyle \kappa}]
\square(0,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`B`B;`f_*`f_*`]
\place(250,750)[{\scriptstyle =}]
\square(500,0)/``>`=/[B`A`B`B;``f_*`]
\square(500,500)/=``@{>}|{\usebox{\bbox}}`/[A`A`B`A;``t`]
\place(750,500)[{\scriptstyle \phi}]
\place(1500,500)[=]
\square(2000,0)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[A`A`B`B;`f_*`f_*`]
\place(2250,250)[{\scriptstyle =}]
\square(2000,500)/=`=`@{>}|{\usebox{\bbox}}`/[A`A`A`A;``t`]
\place(2250, 750)[{\scriptstyle \eta}]
\efig $$
$$ \bfig\scalefactor{.8}
\square/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[B`B`B`B;``s`]
\place(250,500)[{\scriptstyle \nu}]
\morphism(0,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[B`B;s]
\morphism(0,500)/@{>}|{\usebox{\bbox}}/<0,-500>[B`B;s]
\square(0,1000)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`B`B;`f_*`f_*`]
\place(250,1250)[{\scriptstyle =}]
\morphism(500,1500)/=/<500,0>[A`A;]
\morphism(1000,1500)|r|/@{>}|{\usebox{\bbox}}/<0,-1000>[A`A;t]
\morphism(1000,500)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[A`B;f_*]
\morphism(500,0)/=/<500,0>[B`B;]
\place(750,750)[{\scriptstyle \phi}]
\place(1300,750)[=]
\square(1700,0)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B`B`B`B;`s`s`]
\place(1950,250)[{\scriptstyle =}]
\square(1700,500)/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[B`A`B`B;`s`f_*`]
\square(1700,1000)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`B`A;`f_*`t`]
\place(1950,1250)[{\scriptstyle \phi}]
\square(2200,0)/``@{>}|{\usebox{\bbox}}`=/[B`A`B`B;``f_*`]
\square(2200,500)/``@{>}|{\usebox{\bbox}}`/[A`A`B`A;``t`]
\place(2450,500)[{\scriptstyle \phi}]
\square(2200,1000)/=``@{>}|{\usebox{\bbox}}`=/[A`A`A`A;``t`]
\place(2450,1250)[{\scriptstyle =}]
\square(2700,0)/=``@{>}|{\usebox{\bbox}}`=/[A`A`B`B;``f_*`]
\square(2700,500)/=``@{>}|{\usebox{\bbox}}`/<500,1000>[A`A`A`A;``t`]
\place(2950,1000)[{\scriptstyle \mu}]
\efig $$
\end{definition}
\begin{proposition} The identity retrocell is a retromorphism $ (A, t, \eta, \mu) \to (A, t, \eta, \mu) $. The composite of two retromorphisms of monads is again one.
\end{proposition}
\begin{proof} Easy calculation.
\end{proof}
For a monad $ t = (A, t, \eta, \mu) $, Kleisli is a colimit construction, a universal morphism of the form $$ (A, t, \eta, \mu) \to (X, \id_X, 1, 1) . $$ \begin{definition} The {\em Kleisli object} of a vertical monad in a double category, if it exists, is an object $ Kl(t) $, a horizontal arrow $ f $ and a cell $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`=`>/[A`Kl(t)`A`Kl(t);f`t``f]
\place(250,250)[{\scriptstyle \pi}]
\efig $$ such that
\noindent (1) $$ \bfig\scalefactor{.8}
\square/=`=`@{>}|{\usebox{\bbox}}`=/[A`A`A`A;``t`]
\place(250,250)[{\scriptstyle \eta}]
\square(500,0)/>``=`>/[A`Kl(t)`A`Kl(t);f```f]
\place(750,250)[{\scriptstyle \pi}]
\place(1500,250)[=]
\square(2000,0)/>`=`=`>/[A`Kl(t)`A`Kl(t);f```f]
\place(2250,250)[{\scriptstyle \id_f}]
\efig $$
\noindent (2) $$ \bfig\scalefactor{.8}
\square/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[A`A`A`A;``t`]
\place(250,500)[{\scriptstyle \mu}]
\morphism(0,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A`A;t]
\morphism(0,500)/@{>}|{\usebox{\bbox}}/<0,-500>[A`A;t]
\square(500,0)/>``=`>/<500,1000>[A`Kl(t)`A`Kl(t);f```f]
\place(750,500)[{\scriptstyle \pi}]
\place(1500,500)[=]
\square(2000,0)/>`@{>}|{\usebox{\bbox}}`=`>/[A`Kl(t)`A`Kl(t);f`t``f]
\place(2250,250)[{\scriptstyle \pi}]
\square(2000,500)/>`@{>}|{\usebox{\bbox}}`=`/[A`Kl(t)`A`Kl(t);f`t``]
\place(2250,750)[{\scriptstyle \pi}]
\square(2500,0)/=``=`=/<600,1000>[Kl(t)`Kl(t)`Kl(t)`Kl(t);```]
\place(2850,500)[{\scriptstyle \cong}]
\efig $$ and universal with those properties. That is, for any $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`=`>/[A`B`A`B;X`t``X]
\place(250,250)[{\scriptstyle \xi}]
\efig $$ such that (1) $ \xi \eta = \id $ and (2) $ \xi \mu = \xi \cdot \xi $, there exists a unique $ h \colon Kl(t) \to B $ such that (1) $ h f = X $ and (2) $ h \pi = \xi $. \end{definition}
Just by universality, if we have a morphism of monads $ (h, \psi) \colon (A, t, \eta, \mu) \to (B, s, \kappa, \nu) $ and the Kleisli objects $ Kl(t) $ and $ Kl(s) $ exist, we get a horizontal arrow $ Kl(h, \psi) $ such that $$ \bfig\scalefactor{.8} \square[A`Kl(t)`B`Kl(s)\rlap{\ .};f`h`Kl(h, \psi)`g]
\efig $$
This does not work for Eilenberg-Moore objects. Asking for a universal morphism of the form $$ (X, \id_X, 1, 1) \to (A, t, \eta, \mu) $$ is not the right thing as can be seen from the usual $ {\cal C}{\it at} $ example, but also in general. For such a morphism $ (u, \theta) $, the unit law says $$ \bfig\scalefactor{.8} \square/=`=`=`=/[X`X`X`X;```]
\place(250,250)[{\scriptstyle 1_{\id_X}}]
\square(500,0)/>``@{>}|{\usebox{\bbox}}`>/[X`A`X`A;u``t`u]
\place(750,250)[{\scriptstyle \theta}]
\place(1500,250)[=]
\square(2000,0)/>`=`=`>/[X`A`X`A;u```u]
\place(2250,250)[{\scriptstyle \id_u}]
\square(2500,0)/=``@{>}|{\usebox{\bbox}}`=/[A`A`A`A;``t`]
\place(2750,250)[{\scriptstyle \eta}]
\efig $$ i.e. $ \theta $ must be $ \eta u $ and this is a morphism. Thus monad morphisms $ (u, \theta) $ are in bijection with horizontal arrows $ X \to A $. The universal such is $ 1_A $, i.e. we get $$ \bfig\scalefactor{.8}
\square/>`=`@{>}|{\usebox{\bbox}}`>/[A`A`A`A;1_A``t`1_A]
\place(250,250)[{\scriptstyle \eta}]
\efig $$ not the Eilenberg-Moore object.
\begin{definition} The {\em Eilenberg-Moore object} of a vertical monad $ (A, t, \eta, \mu) $ is the universal retromorphism of monads $$ (X, \id_X, 1, 1) \to^{(u, \theta)} (A, t, \eta, \mu) $$ $$ \bfig\scalefactor{.8}
\square/>`=`@{>}|{\usebox{\bbox}}`>/[X`A`X`A\rlap{\ .};u``t`u]
\morphism(350,250)/=>/<-200,0>[`;\theta]
\efig $$
\end{definition}
\begin{proposition}
Let $ \cal{A} $ be a $ 2 $-category and $ (A, t, \eta, \mu) $ a monad in $ \cal{A} $. Then $ (A, t, \eta, \mu) $ is also a monad in the double category of coquintets $ {\rm co}{\mathbb Q}{\cal{A}} $, and a retromorphism $$ (u, \theta) \colon (X, \id_X, 1, 1) \to (A, t, \eta, \mu) $$ is a $ 1 $-cell $ u \colon X \to A $ and a $ 2 $-cell $ \theta \colon t u \to u $ in $ \cal{A} $ satisfying the unit and associativity laws for a $ t $-algebra. The universal such is the Eilenberg-Moore object for $ t $.
\end{proposition}
\begin{proof}
This is merely a question of interpreting the definition of retromorphism in $ {\rm co}{\mathbb Q}{\cal{A}} $.
\end{proof}
We now see immediately how a retromorphism of monads $ (f, \phi) \colon (A, t, \eta, \mu) \to (B, s, \kappa, \nu) $ produces, by universality, a horizontal arrow $$ \bfig\scalefactor{.8} \square<850,500>[EM(t)`EM(s)`A`B\rlap{\ .};EM(f, \phi)`u`u'`f]
\efig $$
\begin{example}\rm Let $ \cal{A} $ be a $ 2 $-category and $ {\mathbb Q}{\cal{A}} $ the double category of quintets in $ \cal{A} $. Recall that a cell in $ {\mathbb Q}{\cal{A}} $ is a quintet in $ \cal{A} $ $$ \bfig\scalefactor{.8} \square[A`B`C`D\rlap{\ .};f`h`k`g]
\morphism(330,330)/=>/<-140,-140>[`;\alpha]
\efig $$ Every horizontal arrow $ f $ has a companion, namely $ f $ itself but viewed as a vertical arrow. A (vertical) monad in $ {\mathbb Q}{\cal{A}} $ is a comonad in $ \cal{A} $. A morphism of monads in $ {\mathbb Q}{\cal{A}} $ is then a lax morphism of comonads, and a retromorphism of monads in $ {\mathbb Q}{\cal{A}} $ is an oplax morphism of comonads in $ \cal{A} $.
To make the connection with Street's monad functors and opfunctors, we must take coquintets (the $ \alpha $ in the opposite direction) $ {\rm co}{\mathbb Q}{\cal{A}} $. Now a monad in $ {\rm co}{\mathbb Q}{\cal{A}} $ is a monad in $ \cal{A} $, a monad morphism in $ {\rm co}{\mathbb Q}{\cal{A}} $ is an oplax morphism of monads, i.e. a monad opfunctor in $ \cal{A} $, whereas a retromorphism of monads is now a lax morphism of monads, i.e. a monad functor.
It is unfortunate that the most natural morphisms from a double category point of view are not the established ones in the literature. At the time of \cite{Str72}, people were more interested in the Eilenberg-Moore algebras for a monad as a generalization of Lawvere theories and their algebras, so it was natural to choose the monad morphisms that worked well with those, namely lax morphisms, as monad functors. Now, with the advent of categorical computer science, Kleisli categories have come into their own; it is no longer so clear which is the leading concept, and double category theory suggests that it may well be the oplax morphisms.
\end{example}
\begin{example}\rm
Let $ {\bf C} $ be a category with (a choice of) pullbacks. As is well known, a monad in $ {\mathbb S}{\rm pan} {\bf C} $ is a category object in $ {\bf C} $. A morphism of monads in $ {\mathbb S}{\rm pan} {\bf C} $ is an internal functor.
A retromorphism of monads $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A_0`B_0`A_0`B_0;F`A_1`B_1`F]
\morphism(350,250)/=>/<-200,0>[`;\phi]
\efig $$ is first of all a morphism $ F \colon A_0 \to B_0 $ and then a cell $$ \bfig\scalefactor{.8} \square/>`>`>`=/<800,500>[B_1 \times_{B_0} A_0`A_0`B_0`B_0;\phi`d_1 p_1`F d_1`]
\square(0,500)/=`<-`<-`/<800,500>[A_0`A_0`B_1 \times_{B_0} A_0`A_0;`p_2`d_0`]
\efig $$ which must satisfy the unit law $$ \bfig\scalefactor{.8} \qtriangle/>`>`>/<850,550>[A_0`B_1\times_{B_0} A_0`A_1; \langle \id F, 1_{A_0} \rangle`\id`\phi]
\efig $$ and the composition law $$ \bfig\scalefactor{.9} \square/`>``>/<2200,1000>[B_1 \times_{B_0} B_1 \times_{B_0} A_0`B_1 \times_{B_0} A_0 \times_{A_0} A_1 `B_1 \times_{B_0} A_0`A_1\rlap{\ .};`\nu \times_{B_0} A_0``\phi]
\morphism(0,1000)/>/<1100,0>[B_1 \times_{B_0} B_1 \times_{B_0} A_0`B_1 \times_{B_0} A_1;B_1 \times_{B_0} \phi]
\morphism(1100,1000)/>/<1100,0>[B_1 \times_{B_0} A_1`B_1 \times_{B_0} A_0 \times_{A_0} A_1;\cong]
\morphism(2200,1000)|r|/>/<0,-500>[B_1 \times_{B_0} A_0 \times_{A_0} A_1`A_1 \times_{A_0} A_1;\phi \times_{A_0} A_1]
\morphism(2200,500)|r|/>/<0,-500>[A_1 \times_{A_0} A_1`A_1;\mu]
\efig $$ This is precisely an internal cofunctor \cite{Agu97, Cla20}.
When $ {\bf C} = {\bf Set} $, a cofunctor $ F \colon {\bf A} \xy\morphism(0,0)|m|<225,0>[`;\hspace{-1mm}+\hspace{-1mm}]\endxy {\bf B} $ consists of an object function $ F \colon {\rm Ob} {\bf A} \to {\rm Ob} {\bf B} $ and a lifting function $ \phi \colon (b \colon FA \to B) \longmapsto (a \colon A \to A') $ with $ FA' = B $ $$ \bfig\scalefactor{.8} \square/>`--`--`>/[A`A'`FA`B;a```b]
\morphism(250,180)/|->/<0,200>[`;]
\efig $$ satisfying
\noindent (1) (unit law) $ \phi (A, 1_{FA}) = 1_A $
\noindent (2) (composition law) $$ \phi(A, b'b) = \phi (A', b') \phi (A, b) , $$ where $ A' $ is the codomain of $ \phi(A, b) $. So $ F $ is like a split opfibration, given algebraically but without the functor part.
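For instance, if $ {\bf B} $ is a monoid $ M $ regarded as a one-object category, a cofunctor $ {\bf A} \xy\morphism(0,0)|m|<225,0>[`;\hspace{-1mm}+\hspace{-1mm}]\endxy {\bf B} $ amounts to a choice, for every object $ A $ and every $ m \in M $, of an arrow $ \phi(A, m) \colon A \to A \cdot m $ (writing $ A \cdot m $ for its codomain), such that $$ \phi(A, 1) = 1_A \qquad \mbox{and} \qquad \phi(A, m'm) = \phi(A \cdot m, m')\, \phi(A, m)\ . $$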
\end{example}
If $ {\mathbb A} $ has conjoints, we can define coretromorphisms of monads as retromorphisms in $ {\mathbb A}^{op} $, which now has companions; monads in $ {\mathbb A}^{op} $ are the same as monads in $ {\mathbb A} $. Explicitly, a {\em coretromorphism} $$ (f, \theta) \colon (A, t, \eta, \mu) \to (B, s, \kappa, \nu) $$ consists of a horizontal morphism $ f \colon A \to B $ in $ {\mathbb A} $ and a coretrocell $ \theta $ $$ \bfig\scalefactor{.8}
\square(0,250)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`A`B;f`t`s`f]
\morphism(250,450)|r|/=>/<0,200>[`;\theta]
\place(800,500)[=]
\square(1100,0)/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B`A`A`A;`f^*`t`]
\place(1350,500)[{\scriptstyle \theta}]
\square(1100,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[B`B`B`A;`s`f^*`]
\efig $$ such that $$ \bfig\scalefactor{.8}
\square/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B`B`A`A;`f^*`f^*`]
\place(250,250)[{\scriptstyle =}]
\square(0,500)/=`=`@{>}|{\usebox{\bbox}}`/[B`B`B`B;``s`]
\place(250,750)[{\scriptstyle \kappa}]
\square(500,0)/``@{>}|{\usebox{\bbox}}`=/[B`A`A`A;``t`]
\place(750,500)[{\scriptstyle \theta}]
\square(500,500)/=``@{>}|{\usebox{\bbox}}`/[B`B`B`A;``f^*`]
\place(1500,500)[=]
\square(2000,0)/=`=`@{>}|{\usebox{\bbox}}`=/[A`A`A`A;``t`]
\place(2250,250)[{\scriptstyle \eta}]
\square(2000,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[B`B`A`A;`f^*`f^*`]
\place(2250,750)[{\scriptstyle =}]
\efig $$ and $$ \bfig\scalefactor{.8}
\square/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B`B`A`A;`f^*`f^*`]
\square(0,500)/=``@{>}|{\usebox{\bbox}}`/<500,1000>[B`B`B`B;``s`]
\place(250,250)[{\scriptstyle =}]
\place(250,1000)[{\scriptstyle \nu}]
\morphism(0,1500)/@{>}|{\usebox{\bbox}}/<0,-500>[B`B;s]
\morphism(0,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[B`B;s]
\morphism(500,1500)/=/<500,0>[B`B;]
\morphism(1000,1500)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[B`A;f^*]
\morphism(1000,1000)|r|/@{>}|{\usebox{\bbox}}/<0,-1000>[A`A;t]
\morphism(500,0)/=/<500,0>[A`A;]
\place(750,750)[{\scriptstyle \theta}]
\place(1300,750)[=]
\square(1600,0)/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B`A`A`A;`f^*`t`]
\square(1600,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[B`B`B`A;`s`f^*`]
\square(1600,1000)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[B`B`B`B;`s`s`]
\place(1850,500)[{\scriptstyle \theta}]
\place(1850,1250)[{\scriptstyle =}]
\square(2100,0)/=``@{>}|{\usebox{\bbox}}`=/[A`A`A`A;``t`]
\place(2350,250)[{\scriptstyle =}]
\square(2100,500)/``@{>}|{\usebox{\bbox}}`/[B`A`A`A;``t`]
\square(2100,1000)/=``@{>}|{\usebox{\bbox}}`/[B`B`B`A;``f^*`]
\place(2350,1000)[{\scriptstyle \theta}]
\square(2600,0)/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[A`A`A`A\rlap{\ .};``t`]
\place(2850,500)[{\scriptstyle \mu}]
\square(2600,1000)/=``@{>}|{\usebox{\bbox}}`/[B`B`A`A;``f^*`]
\place(2850,1250)[{\scriptstyle =}]
\efig $$
Coretromorphisms do not come up in the formal theory of monads because the \nobreak{double} category of coquintets of a $ 2 $-category seldom has conjoints, but $ {\mathbb S}{\rm pan} {\bf C} $ does, and we get opcofunctors, i.e. cofunctors $ {\bf A}^{op} \to {\bf B}^{op} $. These consist of an object function $ F \colon {\rm Ob} {\bf A} \to {\rm Ob}{\bf B} $ and a lifting function $$ \theta \colon (b \colon B \to FA) \longmapsto (a \colon A' \to A) $$ with $ F A' = B $ $$ \bfig\scalefactor{.8} \square/>`--`--`>/[A'`A`B`FA;a```b]
\morphism(250,200)/|->/<0,200>[`;]
\efig $$ satisfying
\noindent (1) $ \theta (A, 1_{FA}) = 1_A $
\noindent (2) $ \theta (A, bb') = \theta (A, b) \theta (A', b') $.
This again illustrates well the difference between retromorphisms and coretromorphisms and, at the same time, the symmetry of the concepts. They all move objects forward. Functors move arrows forward $$ (a \colon A \to A') \longmapsto (Fa \colon FA \to FA') , $$ cofunctors move arrows of the form $ FA \to B $ backward $$ (b \colon FA \to B) \longmapsto (\phi b \colon A \to A') $$ and opcofunctors move arrows of the form $ B \to FA $ backward $$ (b \colon B \to FA) \longmapsto (\theta b \colon A' \to A). $$
All of this can be extended to the enriched setting for a monoidal category $ {\bf V} $ which has coproducts preserved by the tensor in each variable. Then a monad in $ {\bf V} $-$ {\mathbb S}{\rm et} $ is exactly a small $ {\bf V} $-category and the retromorphisms are exactly the enriched cofunctors of Clarke and Di~Meglio \cite{ClaDim22}, to which we refer the reader for further details.
\section{Closed double categories}
Many bicategories that come up in practice are closed, i.e. composition $ \otimes $ has right adjoints in each variable, $$ Q \otimes (-) \dashv Q \obslash (\ ) $$ $$ (\ )\otimes P \dashv (\ ) \oslash P\rlap{\ .} $$ Thus we have bijections \begin{center} \begin{tabular}{c}
$P \to Q \obslash R $ \\[3pt] \hline \\[-12pt]
$Q \otimes P \to R$ \\[3pt] \hline \\[-12pt] $Q \to R \oslash P $ \end{tabular} \end{center}
We adapt (and adopt) Lambek's notation for the internal homs: $ \otimes $ is a kind of multiplication, and $ \obslash $ and $ \oslash $ are divisions.
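In particular, the counits of these adjunctions give evaluation morphisms $$ Q \otimes (Q \obslash R) \to R \qquad \mbox{and} \qquad (R \oslash P) \otimes P \to R\ , $$ which will reappear below as the evaluation cell $ \epsilon $.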
\begin{example}\rm
The original example in \cite{Lam66}, though not expressed in bicategorical terms, was $ {\cal{B}} {\it im} $ the bicategory whose objects are rings, $ 1 $-cells bimodules and $ 2 $-cells linear maps. Composition is $ \otimes $ $$ \bfig\scalefactor{.8}
\Atriangle/@{<-}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}/<400,300>[S`R`T\rlap{\ .};M`N`N\otimes_S M]
\efig $$ ($ M $ is an $ S $-$ R $-bimodule, i.e. left $ S $ - right $ R $ bimodule, etc.) Given $ P \colon R \xy\morphism(0,0)|m|<225,0>[`;\hspace{-1mm}\bullet\hspace{-1mm}]\endxy T $, we have the usual bijections \begin{center} \begin{tabular}{c}
$N \to P \oslash_R M $\mbox{\quad $T$-$S$ linear} \\[3pt] \hline \\[-12pt]
$N \otimes_S M \to P$ \mbox{\quad $T$-$R$ linear} \\[3pt] \hline \\[-12pt]
$M \to N \obslash_T P$\mbox{\quad $S$-$R$ linear} \end{tabular} \end{center} where $$ P \oslash_R M = \Hom_R (M, P) $$ $$ N \obslash_T P = \Hom_T (N, P) $$ are the hom bimodules of $ R $-linear (resp. $ T $-linear) maps.
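Explicitly, the $ T $-$ S $-bimodule structure on $ \Hom_R (M, P) $ is given by $$ (t \cdot h \cdot s)(m) = t\, h(s m) \qquad (t \in T,\ s \in S,\ m \in M)\ , $$ and under the bijections a $ T $-$ R $-linear map $ \theta \colon N \otimes_S M \to P $ corresponds to the $ T $-$ S $-linear map $ n \mapsto \theta(n \otimes -) $.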
\end{example}
\begin{example}\rm
The bicategory of small categories and profunctors is closed. For profunctors $$ \bfig\scalefactor{.8}
\Atriangle/@{<-}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}/<400,300>[{\bf B}`{\bf A}`{\bf C};P`Q`R]
\efig $$ we have $$ (Q \obslash_{\bf C} R) (A, B) = \{n.t. \ Q(B, -) \to R (A, -)\} $$ and $$ (R \oslash_{\bf A} P) (B, C) = \{n.t. \ P(-, B) \to R(-, C)\}\rlap{\ .} $$
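Indeed, by the coend description of $ \otimes_{\bf B} $, maps $ Q \otimes_{\bf B} P \to R $ correspond to families of maps $ P(A, B) \times Q(B, C) \to R(A, C) $, natural in $ A $ and $ C $ and dinatural in $ B $, and hence, by currying, to maps $ P(A, B) \to \{n.t. \ Q(B, -) \to R(A, -)\} $ natural in $ A $ and $ B $, i.e. to morphisms $ P \to Q \obslash_{\bf C} R $.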
\end{example}
\begin{example}\rm If $ {\bf A} $ has finite limits, then it is locally cartesian closed if and only if the bicategory of spans in $ {\bf A} $, $ {\cal{S}}{\it pan} {\bf A} $, is closed (Day \cite{Day74}).
For spans $ A \to/<-/^{p_0} R \to^{p_1} B $ and $ B \to/<-/^{\tau_0} T \to^{\tau_1} C $, the composite is given by the pullback $ T \times_B R $, which we can compute as the pullback $ P $ below, followed by composition with $ A \times \tau_1 $ $$ \bfig\scalefactor{.8} \square/<-`>`>`<-/<700,500>[R`P`A \times B`A \times T;```A \times \tau_0]
\morphism(700,0)|b|/>/<600,0>[A \times T`A \times C;A \times \tau_1]
\place(350,270)[\mbox{$\scriptstyle PB$}]
\efig $$ i.e. $ T \otimes_B (\ ) $ is the composite $$ {\bf A}/(A \times B) \to^{(A \times \tau_0)^*} {\bf A}/(A \times T) \to^{\sum_{A \times \tau_1}} {\bf A}/(A \times C)\ . $$ $ \sum_{A \times \tau_1} $ always has a right adjoint, namely $ (A \times \tau_1)^* $, and if $ {\bf A} $ is locally cartesian closed then $ (A \times \tau_0)^* $ has one too, namely $ \prod_{A \times \tau_0} $. So, for $ A \to/<-/^{\sigma_0} S \to^{\sigma_1} C $, $$ T \obslash_C S = \prod_{A \times \tau_0} (A \times \tau_1)^* S\rlap{\ .} $$ If we interpret this for $ {\bf A} = {\bf Set} $, in terms of fibers we get $$ (T \obslash_C S)_{ab} = \prod_c S_{ac}^{T_{bc}} \ . $$
The situation for $ \oslash_A $ is similar $$ (S \oslash_A R)_{bc} = \prod_a S_{ac}^{R_{ab}} \ . $$
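In $ {\bf Set} $ these formulas are just currying: a fiberwise map $ X_{ab} \to \prod_c S_{ac}^{T_{bc}} $ corresponds to maps $ T_{bc} \times X_{ab} \to S_{ac} $, i.e. to a map $ (T \otimes_B X)_{ac} = \sum_b T_{bc} \times X_{ab} \to S_{ac} $, and similarly for $ \oslash_A $.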
\end{example}
These bicategories, and in fact most bicategories that occur in practice, are the vertical bicategories of naturally occurring double categories. So a definition of a (vertically) closed double category would seem in order. And indeed Shulman in \cite{Shu08} did give one. A double category is closed if its vertical bicategory is. This definition was taken up by Koudenburg \cite{Kou14} in his work on pointwise Kan extensions. But both were working with ``equipments'', double categories with companions and conjoints. Something more is needed for general double categories.
\begin{definition} (Shulman) $ {\mathbb A} $ has {\em globular left homs} if for every $ y $, $ y \bdot (\ ) $ has a right adjoint $ y \bsd (\ ) $ in $ {\cal{V}}{\it ert} {\mathbb A} $.
\end{definition}
Thus for every $ z $ we have a bijection $$ \frac{y \bdot x \to z}{x \to y \bsd z} \mbox{\quad\quad in $ {\cal{V}}{\it ert} {\mathbb A} $} $$
$$ \bfig\scalefactor{.8}
\square/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[A`A`C`C;``z`]
\place(250,500)[{\scriptstyle \alpha}]
\morphism(0,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A`B;x]
\morphism(0,500)/@{>}|{\usebox{\bbox}}/<0,-500>[B`C;y]
\morphism(900,-20)/-/<0,1060>[`;]
\square(1300,250)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[A`A`B`B;`x`y \bsd z`]
\place(1550,500)[{\scriptstyle \beta}]
\place(1900,0)[.]
\efig $$ Of course there is the usual naturality condition on $ x $, which is guaranteed by expressing the above bijection as composition with an evaluation cell $ \epsilon \colon y \bdot (y \bsd z) \to z $ $$ \bfig\scalefactor{.8}
\square/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[A`A`C`C\rlap{\ .};``z`]
\place(250,500)[{\scriptstyle \epsilon}]
\morphism(0,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A`B;y \bsd z]
\morphism(0,500)/@{>}|{\usebox{\bbox}}/<0,-500>[B`C;y]
\efig $$ The universal property is then: for every $ \alpha $ there is a unique $ \beta $, as below, such that $$ \bfig\scalefactor{.8}
\square/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B`B`C`C;`y`y`]
\place(250,250)[{\scriptstyle =}]
\square(0,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`B`B;`x`y \bsd z`]
\place(250,750)[{\scriptstyle \beta}]
\square(500,0)/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[A`A`C`C;``z`]
\place(750,500)[{\scriptstyle \epsilon}]
\place(1400,500)[=]
\square(1800,0)/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[A`A`C`C\rlap{\ .};``z`]
\place(2050,500)[{\scriptstyle \alpha}]
\morphism(1800,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A`B;x]
\morphism(1800,500)/@{>}|{\usebox{\bbox}}/<0,-500>[B`C;y]
\efig $$ This shows clearly that $ \bsd $, so defined, has nothing to do with horizontal arrows, yet the interplay between the horizontal and the vertical is at the very heart of double categories.
\begin{definition} $ {\mathbb A} $ has {\em strong left homs (is left closed)} if for every $ y $ and $ z $ as below there is a vertical arrow $ y \bsd z $ and an evaluation cell $ \epsilon $ such that for every $ \alpha $ there is a unique $ \beta $ such that $$ \bfig\scalefactor{.8}
\square/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B`B`C`C;`y`y`]
\place(250,250)[{\scriptstyle =}]
\square(0,500)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`B`B;f`x`y \bsd z`]
\place(250,750)[{\scriptstyle \beta}]
\square(500,0)/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[A`A`C`C;``z`]
\place(750,500)[{\scriptstyle \epsilon}]
\place(1400,500)[=]
\square(1800,0)/>``@{>}|{\usebox{\bbox}}`=/<500,1000>[A'`A`C`C\rlap{\ .};f``z`]
\place(2050,500)[{\scriptstyle \alpha}]
\morphism(1800,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A'`B;x]
\morphism(1800,500)/@{>}|{\usebox{\bbox}}/<0,-500>[B`C;y]
\efig $$
\end{definition}
\begin{proposition}
If $ {\mathbb A} $ has companions and has globular left homs, then the strong universal property is equivalent to stability under companions: for every $ f $, the canonical morphism $$ (y \bsd z) \bdot f_* \to y \bsd (z \bdot f_*) $$ is an isomorphism.
\end{proposition}
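\noindent Here the canonical morphism is the transpose, under the adjunction $ y \bdot (\ ) \dashv y \bsd (\ ) $, of the cell $ y \bdot (y \bsd z) \bdot f_* \to z \bdot f_* $ obtained by whiskering the evaluation cell $ \epsilon $ with $ f_* $.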
\begin{proof} (Sketch) For every $ f $ and $ x $ as below we have the following natural bijections of cells $$ \bfig\scalefactor{.8}
\square/>``@{>}|{\usebox{\bbox}}`=/<500,1000>[A'`A`C`C;f``z`]
\place(250,500)[{\scriptstyle \alpha}]
\morphism(0,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A'`B;x]
\morphism(0,500)/@{>}|{\usebox{\bbox}}/<0,-500>[B`C;y]
\morphism(900,-20)/-/<0,1060>[`;]
\square(1300,0)/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B`A`C`C;`y`z`]
\place(1550,500)[{\scriptstyle \ov{\alpha}}]
\square(1300,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A'`A'`B`A;`x`f_*`]
\morphism(2200,-20)/-/<0,1060>[`;]
\square(2600,250)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[A'`A'`B`B;`x`y \bsd (z \bdot f_*)`]
\place(2850,500)[{\scriptstyle \beta}]
\place(3500,0)[.]
\efig $$ $ y \bsd z $ is strong iff we have the following bijections $$ \bfig\scalefactor{.8}
\square/>``@{>}|{\usebox{\bbox}}`=/<500,1000>[A'`A`C`C;f``z`]
\place(250,500)[{\scriptstyle \alpha}]
\morphism(0,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A'`B;x]
\morphism(0,500)/@{>}|{\usebox{\bbox}}/<0,-500>[B`C;y]
\morphism(900,-20)/-/<0,1060>[`;]
\square(1300,250)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[A'`A`B`B;f`x`y \bsd z`]
\place(1550,500)[{\scriptstyle \gamma}]
\morphism(2200,-20)/-/<0,1060>[`;]
\square(2600,0)/=`@{>}|{\usebox{\bbox}}``=/<500,1000>[A'`A'`B`B;`x``]
\place(2850,500)[{\scriptstyle \ov{\gamma}}]
\morphism(3100,1000)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[A'`A;f_*]
\morphism(3100,500)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[A`B\rlap{\ .};y \bsd z]
\efig $$
\end{proof}
\begin{proposition} If $ {\mathbb A} $ has conjoints, then the strong universal property is equivalent to the globular one.
\end{proposition}
\begin{proof} (Sketch) For every $ f $ and $ x $ as below we have the following natural bijections $$ \bfig\scalefactor{.8}
\square/>``@{>}|{\usebox{\bbox}}`=/<500,1000>[A'`A`C`C;f``z`]
\place(250,500)[{\scriptstyle \alpha}]
\morphism(0,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A'`B;x]
\morphism(0,500)/@{>}|{\usebox{\bbox}}/<0,-500>[B`C;y]
\morphism(800,-320)/-/<0,1650>[`;]
\square(1200,-250)/=``@{>}|{\usebox{\bbox}}`=/<600,1500>[A`A`C`C;``z`]
\place(1500,550)[{\scriptstyle \widetilde{\alpha}}]
\morphism(1200,1250)/@{>}|{\usebox{\bbox}}/<0,-500>[A`A';f^*]
\morphism(1200,750)/@{>}|{\usebox{\bbox}}/<0,-500>[A'`B;x]
\morphism(1200,250)/@{>}|{\usebox{\bbox}}/<0,-500>[B`C;y]
\morphism(2150,-320)/-/<0,1650>[`;]
\square(2500,0)/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[A`A`B`B;``y \bsd z`]
\morphism(2500,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A`A';f^*]
\morphism(2500,500)/@{>}|{\usebox{\bbox}}/<0,-500>[A'`B;x]
\place(2750,500)[{\scriptstyle \widetilde{\beta}}]
\morphism(3300,-320)/-/<0,1650>[`;]
\square(3600,250)/>`@{>}|{\usebox{\bbox}}`>`=/[A'`A`B`B;f`x`y \bsd z`]
\place(3850,500)[{\scriptstyle \beta}]
\efig $$
\end{proof}
All of the examples above have conjoints so the left homs are automatically strong.
Of course, $ y \bsd z $ is functorial in $ y $ and $ z $, contravariant in $ y $ and covariant in $ z $, but only for globular cells $ \beta $, $ \gamma $ $$ y' \to^\beta y \quad \& \quad z \to^\gamma z' \quad \leadsto \quad y \bsd z \to^{\beta \bsd \gamma} y' \bsd z' \ . $$ For general double category cells $ \beta $, $ \gamma $ $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[B'`B`C'`C;b`y'`y`c]
\place(250,250)[{\scriptstyle \beta}]
\place(950,250)[\mbox{and}]
\square(1400,0)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`A'`C`C';a`z`z'`c']
\place(1650,250)[{\scriptstyle \gamma}]
\efig $$ we would hope to get a cell $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`A'`B`B';a`y \bsd z`y' \bsd z'`]
\place(250,250)[{\scriptstyle \beta \bsd \gamma}]
\efig $$ but $ b $ is in the wrong direction, and there are $ c $ and $ c' $ in opposite directions. If we reverse $ b $ and $ c $ then $ \beta $ is in the wrong direction. That was the motivation for retrocells.
\begin{proposition} Suppose $ {\mathbb A} $ has companions and is (strongly) left closed. Then a retrocell $ \beta $ and a standard cell $ \gamma $ $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[B`B'`C`C';b`y`y'`c]
\morphism(350,250)/=>/<-200,0>[`;\beta]
\square(1000,0)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`A'`C`C';a`z`z'`c]
\place(1250,250)[{\scriptstyle \gamma}]
\efig $$ induce a canonical cell $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`A'`B`B'\rlap{\ .};a`y \bsd z`y' \bsd z'`b]
\place(250,250)[{\scriptstyle \beta \bsd \gamma}]
\efig $$
\end{proposition}
\begin{proof} (Sketch) A candidate $ \xi $ for $ \beta \bsd \gamma $ would satisfy the following bijections $$ \bfig\scalefactor{.8}
\square(0,250)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`A'`B`B';a`y \bsd z`y' \bsd z'`b]
\place(250,500)[{\scriptstyle \xi}]
\morphism(950,-120)/-/<0,1250>[`;]
\square(1400,0)/>``@{>}|{\usebox{\bbox}}`=/<500,1000>[A`A'`B'`B';a``y' \bsd z'`]
\place(1650,500)[{\scriptstyle \ov{\xi}}]
\morphism(1400,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A`B;y \bsd z]
\morphism(1400,500)/@{>}|{\usebox{\bbox}}/<0,-500>[B`B';b_*]
\morphism(2300,-320)/-/<0,1650>[`;]
\square(2800,-250)/>``@{>}|{\usebox{\bbox}}`=/<600,1500>[A`A'`C'`C';a``z'`]
\place(3100,500)[{\scriptstyle \ov{\ov{\xi}} }]
\morphism(2800,1250)/@{>}|{\usebox{\bbox}}/<0,-500>[A`B;y \bsd z]
\morphism(2800,750)/@{>}|{\usebox{\bbox}}/<0,-500>[B`B';b_*]
\morphism(2800,250)/@{>}|{\usebox{\bbox}}/<0,-500>[B'`C';y']
\efig $$ and there is indeed a canonical $ \ov{\ov{\xi}} $, namely $$ \bfig\scalefactor{.8}
\square/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B'`C`C'`C';`y'`c_*`]
\square(0,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[B`B`B'`C;`b_*`y`]
\square(0,1000)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`B`B;`y \bsd z`y \bsd z`]
\square(500,0)/=``@{>}|{\usebox{\bbox}}`=/[C`C`C'`C';``c_*`]
\square(500,500)/=``@{>}|{\usebox{\bbox}}`/<500,1000>[A`A`C`C;``z`]
\square(1000,0)/>``=`=/[C`C'`C'`C'\rlap{\ .};c```]
\square(1000,500)/>``@{>}|{\usebox{\bbox}}`/<500,1000>[A`A'`C`C';a``z'`]
\place(250,500)[{\scriptstyle \beta}]
\place(250,1250)[{\scriptstyle =}]
\place(750,250)[{\scriptstyle =}]
\place(750,1000)[{\scriptstyle \epsilon}]
\place(1250,250)[{\lrcorner}]
\place(1250,1000)[{\scriptstyle \gamma}]
\efig $$
\end{proof}
\noindent In fact the cell $ \beta \bsd \gamma $ is not only canonical but also functorial, i.e. $ (\beta' \beta) \bsd (\gamma' \gamma) = (\beta' \bsd \gamma') (\beta \bsd \gamma) $. To express this properly we must define the categories involved. The codomain of $ \bsd $ is simply $ {\bf A}_1 $, the category whose objects are vertical arrows of $ {\mathbb A} $ and whose morphisms are (standard) cells. The domain of $ \bsd $ is the category which, for lack of a better name, we call $ {\bf TC} ({\mathbb A}) $ (twisted cospans). Its objects are cospans of vertical arrows and its morphisms are pairs $ (\beta, \gamma) $ $$ \bfig\scalefactor{.8}
\square/>`@{<-}|{\usebox{\bbox}}`@{<-}|{\usebox{\bbox}}`>/[C`C'`B`B';c`y`y'`b]
\morphism(340,250)/=>/<-200,0>[`;\beta]
\square(0,500)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A'`C`C';a`z`z'`]
\place(250,750)[{\scriptstyle \gamma}]
\efig $$ where $ \beta $ is a retrocell and $ \gamma $ a standard cell. Also we must flesh out our sketchy construction of $ y \bsd z $. We can express the universal property of $ y \bsd z $ as representability of a functor $$ L_{y,z} \colon {\bf A}^{op}_1 \to {\bf Set}\rlap{\ .} $$
For $ v \colon \ov{A} \xy\morphism(0,0)|m|<225,0>[`;\hspace{-1mm}\bullet\hspace{-1mm}]\endxy \ov{B} $, $ L_{y,z} (v) = \{(f, g, \alpha) | f, g, \alpha \mbox{\ \ as in}\ (*)\} $ $$ f \colon \ov{A} \to A , g \colon \ov{B} \to B $$ $$ \bfig\scalefactor{.8}
\square/>``@{>}|{\usebox{\bbox}}`=/<700,1500>[\ov{A}`A`C`C\rlap{\ .};f``z`]
\place(-1300,0)[\ ]
\place(350,750)[{\scriptstyle \alpha}]
\morphism(0,1500)/@{>}|{\usebox{\bbox}}/<0,-500>[\ov{A}`\ov{B};v]
\morphism(0,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[\ov{B}`B;g_*]
\morphism(0,500)/@{>}|{\usebox{\bbox}}/<0,-500>[B`C;y]
\place(2000,750)[(*)]
\efig $$
Some straightforward calculation will show that $ L_{y, z} $ is indeed a functor. The following bijections show that $ y \bsd z $ is a representing object for $ L_{y, z} $ $$ \bfig\scalefactor{.8}
\square/>``@{>}|{\usebox{\bbox}}`=/<700,1500>[\ov{A}`A`C`C;f``z`]
\place(250,750)[{\scriptstyle \alpha}]
\morphism(0,1500)/@{>}|{\usebox{\bbox}}/<0,-500>[\ov{A}`\ov{B};v]
\morphism(0,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[\ov{B}`B;g_*]
\morphism(0,500)/@{>}|{\usebox{\bbox}}/<0,-500>[B`C;y]
\morphism(1000,-50)/-/<0,1650>[`;]
\square(1300,250)/>``@{>}|{\usebox{\bbox}}`=/<500,1000>[\ov{A}`A`B`B;f``y \bsd z`]
\morphism(1300,1250)/@{>}|{\usebox{\bbox}}/<0,-500>[\ov{A}`\ov{B};v]
\morphism(1300,750)/@{>}|{\usebox{\bbox}}/<0,-500>[\ov{B}`B;g_*]
\place(1550,750)[{\scriptstyle \ov{\alpha}}]
\morphism(2100,100)/-/<0,1250>[`;]
\square(2400,500)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[\ov{A}`A`\ov{B}`B;f`v`y \bsd z`g]
\place(2650,750)[{\scriptstyle{\ovv\alpha}}]
\place(3000,0)[.]
\efig $$
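In other words, composing these bijections yields, for every object $ v $ of $ {\bf A}_1 $, a bijection $$ L_{y,z} (v) \cong {\bf A}_1 (v, y \bsd z) $$ natural in $ v $, which is exactly the representability asserted above.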
This gives the full double category universal property of $ \bsd $ : For every boundary $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[\ov{A}`A`\ov{B}`B;f`v`y \bsd z`g]
\efig $$ and $ \alpha $ as below, there exists a unique fill-in $ \beta $ such that $$ \bfig\scalefactor{.8}
\square/>``@{>}|{\usebox{\bbox}}`=/<600,1500>[\ov{A}`A`C`C;f``z`]
\place(300,750)[{\scriptstyle \alpha}]
\morphism(0,1500)/@{>}|{\usebox{\bbox}}/<0,-500>[\ov{A}`\ov{B};v]
\morphism(0,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[\ov{B}`B;g_*]
\morphism(0,500)/@{>}|{\usebox{\bbox}}/<0,-500>[B`C;y]
\place(900,750)[=]
\square(1200,0)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B`B`C`C;`y`y`]
\square(1200,500)/`@{>}|{\usebox{\bbox}}`=`/[\ov{B}`B`B`B;`g_*``]
\square(1200,1000)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[\ov{A}`A`\ov{B}`B;f`v`y \bsd z`g]
\place(1450,250)[{\scriptstyle =}]
\place(1450,750)[{\lrcorner}]
\place(1450,1250)[{\scriptstyle \beta}]
\square(1700,0)/=``@{>}|{\usebox{\bbox}}`=/<600,1500>[A`A`C`C\rlap{\ .};``z`]
\place(2000,750)[{\scriptstyle \epsilon}]
\efig $$
For $ (\beta, \gamma) $ in $ {\bf TC}({\mathbb A}) $ we get a natural transformation $$ \phi_{\beta \gamma} \colon L_{y, z} \to L_{y', z'} $$
$$ \bfig\scalefactor{.8}
\square(0,250)/>``@{>}|{\usebox{\bbox}}`=/<500,1500>[\ov{A}`A`C`C;f``z`]
\place(250,1000)[{\scriptstyle \alpha}]
\morphism(0,1750)/@{>}|{\usebox{\bbox}}/<0,-500>[\ov{A}`\ov{B};v]
\morphism(0,1250)/@{>}|{\usebox{\bbox}}/<0,-500>[\ov{B}`B;g_*]
\morphism(0,750)/@{>}|{\usebox{\bbox}}/<0,-500>[B`C;y]
\place(750,1000)[\longmapsto]
\square(1150,0)/=`@{>}|{\usebox{\bbox}}``=/[B'`B'`C'`C';`y'``]
\square(1150,500)/=`@{>}|{\usebox{\bbox}}``/<500,1000>[\ov{B}`\ov{B}`B'`B';`(bg)_*``]
\square(1150,1500)/=`@{>}|{\usebox{\bbox}}``/[\ov{A}`\ov{A}`\ov{B}`\ov{B};`v``]
\place(1400,250)[{\scriptstyle =}]
\place(1400,1000)[{\scriptstyle \cong}]
\place(1400,1750)[{\scriptstyle =}]
\square(1650,0)/`@{>}|{\usebox{\bbox}}``=/[B'`C`C'`C';`y'``]
\square(1650,500)/=`@{>}|{\usebox{\bbox}}``/[B`B`B'`C;`b_*``]
\square(1650,1000)/=`@{>}|{\usebox{\bbox}}``/[\ov{B}`\ov{B}`B`B;`g_*``]
\square(1650,1500)/=`@{>}|{\usebox{\bbox}}``/[\ov{A}`\ov{A}`\ov{B}`\ov{B};`v``]
\place(1900,500)[{\scriptstyle \beta}]
\place(1900,1250)[{\scriptstyle =}]
\place(1900,1750)[{\scriptstyle =}]
\square(2150,0)/=`@{>}|{\usebox{\bbox}}``=/[C`C`C'`C';`c_*``]
\square(2150,500)/>``@{>}|{\usebox{\bbox}}`/<500,1500>[\ov{A}`A`C`C;f``z`]
\morphism(2150,2000)/@{>}|{\usebox{\bbox}}/<0,-500>[\ov{A}`\ov{B};v]
\morphism(2150,1500)/@{>}|{\usebox{\bbox}}/<0,-500>[\ov{B}`B;g_*]
\morphism(2150,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[B`C;y]
\place(2400,250)[{\scriptstyle =}]
\place(2400,1250)[{\scriptstyle \alpha}]
\square(2650,0)/`@{>}|{\usebox{\bbox}}`=`=/[C`C'`C'`C'\rlap{\ .};`c_*``]
\square(2650,500)/>``@{>}|{\usebox{\bbox}}`>/<500,1500>[A`A'`C`C';a``z'`c]
\place(2900,250)[{ \lrcorner}]
\place(2900,1250)[{\scriptstyle \gamma}]
\efig $$
Some calculation is needed to show naturality, which we leave to the reader. This natural transformation is what gives $ \beta \bsd \gamma $.
We are now ready for the main theorem of the section.
\begin{theorem} For $ {\mathbb A} $ a left closed double category with companions, the internal hom is a functor $$ \bsd\ \colon {\bf TC} ({\mathbb A}) \to {\bf A}_1\rlap{\ .} $$
\end{theorem}
\begin{proof} Let $ (\beta, \gamma) $ and $ (\beta', \gamma') $ be composable morphisms in $ {\bf TC} ({\mathbb A}) $ $$ \bfig\scalefactor{.8}
\square/>`@{<-}|{\usebox{\bbox}}`@{<-}|{\usebox{\bbox}}`>/[C`C'`B`B';c`y`y'`b]
\square(0,500)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A'`C`C';a`z`z'`]
\square(500,0)/>``@{<-}|{\usebox{\bbox}}`>/[C'`C''`B'`B''\rlap{\ .};c'``y''`b']
\square(500,500)/>``@{>}|{\usebox{\bbox}}`/[A'`A''`C'`C'';a'``z''`]
\morphism(350,250)/=>/<-200,0>[`;\beta]
\place(250,750)[{\scriptstyle \gamma}]
\morphism(850,250)/=>/<-200,0>[`;\beta']
\place(750,750)[{\scriptstyle \gamma'}]
\efig $$ Then $$ L_{y,z} \to^{\phi_{\beta, \gamma}} L_{y', z'} \to^{\phi_{\beta', \gamma'}} L_{y'', z''} $$ takes $ v $ to the composite of 23 cells (most of which are bookkeeping -- identities, canonical isos, ...) arranged in a $ 5 \times 7 $ array, with 38 objects, and best represented schematically as
\begin{center}
\setlength{\unitlength}{.9mm}
\begin{picture}(70,50)
\put(0,0){\framebox(70,50){}}
\put(10,0){\line(0,1){50}}
\put(20,0){\line(0,1){50}}
\put(30,0){\line(0,1){50}}
\put(40,0){\line(0,1){50}}
\put(50,0){\line(0,1){50}}
\put(60,0){\line(0,1){50}}
\put(0,10){\line(1,0){10}} \put(10,20){\line(1,0){20}} \put(20,10){\line(1,0){50}}
\put(0,40){\line(1,0){40}} \put(30,30){\line(1,0){10}} \put(40,20){\line(1,0){30}}
\put(5,25){\makebox(0,0){$\scriptstyle\cong$}} \put(15,10){\makebox(0,0){$\scriptstyle \beta'$}} \put(25,30){\makebox(0,0){$\scriptstyle \cong$}} \put(35,20){\makebox(0,0){$\scriptstyle \beta$}} \put(45,35){\makebox(0,0){$\scriptstyle \alpha$}} \put(55,35){\makebox(0,0){$\scriptstyle \gamma$}} \put(55,15){\makebox(0,0){$ \lrcorner$}} \put(65,35){\makebox(0,0){$\scriptstyle \gamma'$}} \put(65,5){\makebox(0,0){$ \lrcorner$}}
\put(95,25){(*)}
\end{picture}
\end{center}
\noindent whereas $$ L_{y,z} \to^{\phi_{\beta' \beta, \gamma' \gamma}} L_{y'', z''} $$ takes $ v $ to
\begin{center}
\setlength{\unitlength}{.9mm}
\begin{picture}(70,50)
\put(0,0){\framebox(70,50){}}
\put(10,0){\line(0,1){50}}
\put(20,0){\line(0,1){50}}
\put(30,0){\line(0,1){50}}
\put(40,0){\line(0,1){50}}
\put(50,0){\line(0,1){50}}
\put(60,20){\line(0,1){30}}
\put(0,10){\line(1,0){10}} \put(10,20){\line(1,0){10}} \put(20,10){\line(1,0){10}} \put(30,20){\line(1,0){40}} \put(10,30){\line(1,0){30}} \put(0,40){\line(1,0){40}}
\put(5,25){\makebox(0,0){$\scriptstyle\cong$}} \put(15,10){\makebox(0,0){$\scriptstyle \beta'$}} \put(25,20){\makebox(0,0){$\scriptstyle \beta$}} \put(35,10){\makebox(0,0){$\scriptstyle \cong$}} \put(45,35){\makebox(0,0){$\scriptstyle \alpha$}} \put(55,35){\makebox(0,0){$\scriptstyle \gamma$}}
\put(65,35){\makebox(0,0){$\scriptstyle \gamma'$}} \put(60,10){\makebox(0,0){$ \lrcorner$}}
\put(90,25){(**)}
\end{picture}
\end{center}
\noindent The three bottom right cells of (**) compose to the $ 2 \times 2 $ block on the bottom right of (*), so the $ 5 \times 3 $ part on the right of (*) is equal to the $ 5 \times 4 $ part on the right of (**). And the rest are equal too by coherence. It follows that $$ (\beta' \bsd \gamma') (\beta \bsd \gamma) = (\beta' \beta) \bsd (\gamma' \gamma) . $$ For identities $ 1_{y \bsd z} = 1_y \bsd 1_z $.
\end{proof}
Right closure is dual, but the duality is op: it switches the direction of vertical arrows, which interchanges companions with conjoints and retrocells with coretrocells. We outline the changes.
\begin{definition} (Shulman) $ {\mathbb A} $ has {\em globular right homs} if for every $ x $, $ (\ ) \bdot x $ has a right adjoint $ (\ ) \slashdot x $ in $ {\cal{V}}{\it ert} {\mathbb A} $, $$ \frac{y \bdot x \to z}{y \to z \slashdot x} \mbox{\quad in \ ${\cal{V}}{\it ert}{\mathbb A} $}. $$ This bijection is mediated by an evaluation cell $$ \bfig\scalefactor{.8}
\square/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[A`A`C`C\rlap{\ .};```]
\morphism(0,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A`B;x]
\morphism(0,500)/@{>}|{\usebox{\bbox}}/<0,-500>[B`C;z \slashdot x]
\place(250,500)[{\scriptstyle \epsilon'}]
\efig $$ The right homs are {\em strong} if $ z \slashdot x $ has the universal property for cells of the form $$ \bfig\scalefactor{.8}
\square/=``@{>}|{\usebox{\bbox}}`>/<500,1000>[A`A`C'`C\rlap{\ .};``z`g]
\morphism(0,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A`B;x]
\morphism(0,500)/@{>}|{\usebox{\bbox}}/<0,-500>[B`C';y]
\place(250,500)[{\scriptstyle \alpha}]
\efig $$
\end{definition}
\begin{proposition} If $ {\mathbb A} $ has conjoints and globular right homs, then the strong universal property is equivalent to the canonical morphism $$ g^* \bdot (z \slashdot x) \to (g^* \bdot z) \slashdot x $$ being an isomorphism. If instead $ {\mathbb A} $ has companions, then strong is equivalent to globular.
\end{proposition}
Finally, if $ {\mathbb A} $ has conjoints, $ z \slashdot x $ is functorial in $ z $ and $ x $, for standard cells in $ z $ and for coretrocells in $ x $. More precisely, $ \slashdot $ is defined on the category $ {\bf TS} ({\mathbb A}) $ whose objects are spans of vertical arrows, $ (x, z) $, as below, and whose morphisms are pairs of cells $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`A'`C`C';a`z`z'`c]
\place(250,250)[{\scriptstyle \gamma}]
\square(0,500)/>`@{<-}|{\usebox{\bbox}}`@{<-}|{\usebox{\bbox}}`/[B`B'`A`A';b`x`x'`]
\morphism(250,850)/=>/<0,-200>[`;\alpha]
\efig $$ where $ \alpha $ is a coretrocell and $ \gamma $ a standard one.
\begin{theorem} If $ {\mathbb A} $ has conjoints and is right closed, then $ \slashdot $ is a functor $$ \slashdot \ \colon {\bf TS} ({\mathbb A}) \to {\bf A}_1 . $$
\end{theorem}
For completeness' sake, we end this section with a definition.
\begin{definition} A double category $ {\mathbb A} $ is closed if it is right closed and left closed.
\end{definition}
\section{A triple category}
As mentioned in the introduction, one of the inspirations for retrocells was the commuter cells of \cite{GraPar08}.
\begin{definition} Let $ {\mathbb A} $ be a double category with companions. A cell $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`v`w`g]
\place(250,250)[{\scriptstyle \alpha}]
\efig $$ is a {\em commuter} cell if the associated globular cell $ \widehat{\alpha} $ $$ \bfig\scalefactor{.8} \square/`>`=`=/[C`D`D`D;`g_*``]
\square(0,500)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[B`B`C`D;f`v`w`g]
\square(0,1000)/=`=`>`/[A`A`B`B;``f_*`]
\place(250,250)[{ \lrcorner}]
\place(250,750)[{\scriptstyle \alpha}]
\place(250,1250)[{ \ulcorner}]
\efig $$ is a horizontal isomorphism.
\end{definition}
The intent is that the cell $ \alpha $ itself is an isomorphism making the square commute (up to isomorphism).
The inverse of $ \widehat{\alpha} $ is a retrocell, so the question is, how do we express that a cell and a retrocell are inverse to each other?
Cells and retrocells form a double category (and ultimately a triple category). For a double category with companions $ {\mathbb A} $, we define a new (vertical arrow) double category $ {\mathbb V}{\rm ar} ({\mathbb A}) $ as follows. Its objects are the vertical arrows of $ {\mathbb A} $, its horizontal arrows are standard cells of $ {\mathbb A} $, and its vertical arrows are retrocells. It is a thin double category: there is a (necessarily unique) cell $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[v`w`v'`w';\alpha`\beta`\gamma`\alpha']
\place(250,250)[{\scriptstyle !}]
\efig $$
$$ \bfig\scalefactor{.9} \node a(300,0)[C'] \node b(800,0)[D'] \node c(0,250)[A'] \node d(300,500)[C] \node e(800,500)[D] \node f(0,800)[A] \node g(500,800)[B]
\arrow|b|/>/[a`b;g']
\arrow|l|/@{>}|{\usebox{\bbox}}/[c`a;v']
\arrow|r|/>/[d`a;c]
\arrow|r|/>/[e`b;d]
\arrow|b|/>/[d`e;g]
\arrow|l|/>/[f`c;a]
\arrow|l|/@{>}|{\usebox{\bbox}}/[f`d;v]
\arrow|r|/@{>}|{\usebox{\bbox}}/[g`e;w]
\arrow|a|/>/[f`g;f]
\place(430,650)[{\scriptstyle \alpha}]
\morphism(160,320)|l|/=>/<0,200>[`;\beta]
\node f'(1500,800)[A] \node g'(2000,800)[B] \node c'(1500,300)[A'] \node d'(2000,300)[B'] \node a'(1800,0)[C'] \node b'(2300,0)[D'] \node e'(2300,400)[D]
\arrow|a|/>/[f'`g';f]
\arrow|l|/>/[f'`c';a]
\arrow|l|/>/[g'`d';b]
\arrow|r|/@{>}|{\usebox{\bbox}}/[g'`e';w]
\arrow|a|/>/[c'`d';f']
\arrow|l|/@{>}|{\usebox{\bbox}}/[c'`a';v']
\arrow|b|/>/[a'`b';g']
\arrow|r|/@{>}|{\usebox{\bbox}}/[d'`b';w']
\arrow|r|/>/[e'`b';d]
\place(1950,150)[{\scriptstyle \alpha'}]
\morphism(2150,320)|l|/=>/<0,200>[`;\gamma]
\efig $$ if we have $$ f' a = b f $$ $$ g' c = d g\ , $$ and $$ \bfig\scalefactor{.8}
\square/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[A'`C`C'`C';`v'`c_*`]
\square(0,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`A'`C;`a_*`v`]
\place(250,500)[{\scriptstyle \beta}]
\square(500,0)/>``@{>}|{\usebox{\bbox}}`>/[C`D`C'`D';``d_*`g']
\square(500,500)/>``@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f``w`g]
\place(750,250)[{\scriptstyle *}]
\place(750,750)[{\scriptstyle \alpha}]
\place(1300,500)[=]
\square(1600,0)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A'`B'`C'`D';f'`v'`w'`g']
\square(1600,500)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`B`A'`B';f`a_*`b_*`]
\place(1850,250)[{\scriptstyle \alpha'}]
\place(1850,750)[{\scriptstyle *}]
\square(2100,0)/``@{>}|{\usebox{\bbox}}`=/[B'`D`D'`D';``d_*`]
\square(2100,500)/=``@{>}|{\usebox{\bbox}}`/[B`B`B'`D;``w`]
\place(2350,500)[{\scriptstyle \gamma}]
\efig $$ where the starred cells are the canonical ones gotten from the equations $ g' c = d g $ and $ f' a = b f $ by ``sliding''.
\begin{proposition} $ {\mathbb V}{\rm ar} ({\mathbb A}) $ is a strict double category.
\end{proposition}
\begin{proof} We just have to check that cells compose horizontally and vertically. We simply give a sketch of the proof.
Suppose we have two cells, $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[v`w`v'`w';\alpha`\beta`\gamma`\alpha']
\place(250,250)[{\scriptstyle !}]
\square(500,0)/>``@{>}|{\usebox{\bbox}}`>/[w`x`w'`x';\delta``\xi`\delta']
\place(750,250)[{\scriptstyle !}]
\efig $$ i.e. we have
\begin{center}
\setlength{\unitlength}{.9mm}
\begin{picture}(130,20)
\put(0,0){\framebox(20,20){}} \put(30,0){\framebox(20,20){}} \put(80,0){\framebox(20,20){}} \put(110,0){\framebox(20,20){}}
\put(10,0){\line(0,1){20}} \put(40,0){\line(0,1){20}} \put(90,0){\line(0,1){20}} \put(120,0){\line(0,1){20}}
\put(10,10){\line(1,0){10}} \put(30,10){\line(1,0){10}} \put(90,10){\line(1,0){10}} \put(110,10){\line(1,0){10}}
\put(25,10){\makebox(0,0){$=$}} \put(65,10){\makebox(0,0){and}} \put(105,10){\makebox(0,0){$=$}}
\put(5,10){\makebox(0,0){$\scriptstyle \beta$}} \put(15,5){\makebox(0,0){$\scriptstyle *$}} \put(15,15){\makebox(0,0){$\scriptstyle \alpha$}} \put(15,15){\makebox(0,0){$\scriptstyle \alpha$}} \put(35,5){\makebox(0,0){$\scriptstyle \alpha'$}} \put(35,15){\makebox(0,0){$\scriptstyle *$}} \put(45,10){\makebox(0,0){$\scriptstyle \gamma$}} \put(85,10){\makebox(0,0){$\scriptstyle \gamma$}} \put(95,5){\makebox(0,0){$\scriptstyle *$}} \put(95,15){\makebox(0,0){$\scriptstyle \delta$}} \put(115,5){\makebox(0,0){$\scriptstyle \delta'$}} \put(115,15){\makebox(0,0){$\scriptstyle *$}} \put(125,10){\makebox(0,0){$\scriptstyle \xi$}}
\put(135,0){.}
\end{picture}
\end{center}
\noindent Thus
\begin{center}
\setlength{\unitlength}{.9mm}
\begin{picture}(170,20)
\put(0,0){\framebox(20,20){}} \put(30,0){\framebox(30,20){}} \put(70,0){\framebox(30,20){}} \put(110,0){\framebox(30,20){}} \put(150,0){\framebox(20,20){}}
\put(10,0){\line(0,1){20}} \put(40,0){\line(0,1){20}} \put(50,0){\line(0,1){20}} \put(80,0){\line(0,1){20}} \put(90,0){\line(0,1){20}} \put(120,0){\line(0,1){20}} \put(130,0){\line(0,1){20}} \put(160,0){\line(0,1){20}}
\put(10,10){\line(1,0){10}} \put(40,10){\line(1,0){20}} \put(70,10){\line(1,0){10}} \put(90,10){\line(1,0){10}} \put(110,10){\line(1,0){20}} \put(150,10){\line(1,0){10}}
\put(25,10){\makebox(0,0){$=$}} \put(65,10){\makebox(0,0){$=$}} \put(105,10){\makebox(0,0){$=$}} \put(145,10){\makebox(0,0){$=$}}
\put(5,10){\makebox(0,0){$\scriptstyle \beta$}} \put(15,5){\makebox(0,0){$\scriptstyle *$}} \put(15,15){\makebox(0,0){$\scriptstyle \delta \alpha$}} \put(35,10){\makebox(0,0){$\scriptstyle \beta$}} \put(45,5){\makebox(0,0){$\scriptstyle *$}} \put(45,15){\makebox(0,0){$\scriptstyle \alpha$}} \put(55,5){\makebox(0,0){$\scriptstyle *$}} \put(55,15){\makebox(0,0){$\scriptstyle \delta$}} \put(75,5){\makebox(0,0){$\scriptstyle \alpha'$}} \put(75,15){\makebox(0,0){$\scriptstyle *$}} \put(85,10){\makebox(0,0){$\scriptstyle \gamma$}} \put(95,5){\makebox(0,0){$\scriptstyle *$}} \put(95,15){\makebox(0,0){$\scriptstyle \delta$}} \put(115,5){\makebox(0,0){$\scriptstyle \alpha'$}} \put(115,15){\makebox(0,0){$\scriptstyle *$}} \put(125,5){\makebox(0,0){$\scriptstyle \delta'$}} \put(125,15){\makebox(0,0){$\scriptstyle *$}} \put(135,10){\makebox(0,0){$\scriptstyle \xi$}} \put(155,5){\makebox(0,0){$\scriptstyle \alpha' \delta'$}} \put(155,15){\makebox(0,0){$\scriptstyle *$}} \put(165,10){\makebox(0,0){$\scriptstyle \xi$}}
\put(175,0){.}
\end{picture}
\end{center}
\noindent Consider cells $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[v'`w'`v''`w''\rlap{\ .};\alpha'`\beta'`\gamma'`\alpha'']
\place(250,250)[{\scriptstyle !}]
\square(0,500)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[v`w`v'`w';\alpha`\beta`\gamma`]
\place(250,750)[{\scriptstyle !}]
\efig $$ We have not said so explicitly, but vertical composition of arrows in $ {\mathbb V}{\rm ar} ({\mathbb A}) $ is given by horizontal composition of retrocells; it could not be otherwise, given their boundaries. Then we have the following
\begin{center}
\setlength{\unitlength}{.9mm}
\begin{picture}(170,30)
\put(0,5){\framebox(20,20){}} \put(30,0){\framebox(30,30){}} \put(70,0){\framebox(30,30){}} \put(110,0){\framebox(30,30){}} \put(150,5){\framebox(20,20){}}
\put(10,5){\line(0,1){20}} \put(40,0){\line(0,1){30}} \put(50,0){\line(0,1){30}} \put(80,0){\line(0,1){30}} \put(90,0){\line(0,1){30}} \put(120,0){\line(0,1){30}} \put(130,0){\line(0,1){30}} \put(160,5){\line(0,1){20}}
\put(10,15){\line(1,0){10}} \put(30,20){\line(1,0){10}} \put(40,10){\line(1,0){20}} \put(50,20){\line(1,0){10}} \put(70,20){\line(1,0){20}} \put(80,10){\line(1,0){20}} \put(110,10){\line(1,0){10}} \put(110,20){\line(1,0){20}} \put(130,10){\line(1,0){10}} \put(150,15){\line(1,0){10}}
\put(25,15){\makebox(0,0){$=$}} \put(65,15){\makebox(0,0){$=$}} \put(105,15){\makebox(0,0){$=$}} \put(145,15){\makebox(0,0){$=$}}
\put(5,15){\makebox(0,0){$\scriptstyle \beta' \bdot \beta$}} \put(15,10){\makebox(0,0){$\scriptstyle *$}} \put(15,20){\makebox(0,0){$\scriptstyle \alpha$}} \put(35,10){\makebox(0,0){$\scriptstyle \beta'$}} \put(35,25){\makebox(0,0){$\scriptstyle =$}} \put(45,5){\makebox(0,0){$\scriptstyle =$}} \put(45,20){\makebox(0,0){$\scriptstyle \beta$}} \put(55,5){\makebox(0,0){$\scriptstyle *$}} \put(55,25){\makebox(0,0){$\scriptstyle \alpha$}} \put(75,10){\makebox(0,0){$\scriptstyle \beta'$}} \put(75,25){\makebox(0,0){$\scriptstyle =$}} \put(85,5){\makebox(0,0){$\scriptstyle *$}} \put(85,15){\makebox(0,0){$\scriptstyle \alpha'$}} \put(85,25){\makebox(0,0){$\scriptstyle *$}} \put(95,5){\makebox(0,0){$\scriptstyle =$}} \put(95,20){\makebox(0,0){$\scriptstyle \gamma$}} \put(115,5){\makebox(0,0){$\scriptstyle \alpha''$}} \put(115,15){\makebox(0,0){$\scriptstyle *$}} \put(115,25){\makebox(0,0){$\scriptstyle *$}} \put(125,10){\makebox(0,0){$\scriptstyle \gamma'$}} \put(125,25){\makebox(0,0){$\scriptstyle =$}} \put(135,5){\makebox(0,0){$\scriptstyle =$}} \put(135,20){\makebox(0,0){$\scriptstyle \gamma$}} \put(155,10){\makebox(0,0){$\scriptstyle \alpha''$}} \put(155,20){\makebox(0,0){$\scriptstyle *$}} \put(165,15){\makebox(0,0){$\scriptstyle \gamma' \bdot \gamma$}}
\put(175,5){.}
\end{picture}
\end{center}
\noindent So horizontal and vertical composition of cells are again cells.
Identities pose no problem.
\end{proof}
\begin{proposition} A cell $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`v`w`g]
\place(250,250)[{\scriptstyle \alpha}]
\efig $$ in $ {\mathbb A} $ is a commuter cell iff $ \alpha \colon v \to w $ has a companion in $ {\mathbb V}{\rm ar} ({\mathbb A}) $.
\end{proposition}
\begin{proof} A companion $ \beta \colon v \xy\morphism(0,0)|m|<225,0>[`;\hspace{-1mm}\bullet\hspace{-1mm}]\endxy w $ for $ \alpha $ will have cells $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[v`w`w`w;\alpha`\beta`\id_w`]
\place(250,250)[{\scriptstyle !}]
\place(850,250)[\mbox{and}]
\square(1200,0)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[v`v`v`w;`\id_v`\beta`\alpha]
\place(1450,250)[{\scriptstyle !}]
\efig $$ i.e. it is a retrocell $$ \bfig\scalefactor{.8}
\square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f'`v`w`g']
\morphism(340,250)/=>/<-200,0>[`;\beta]
\efig $$ making the following cubes ``commute''
$$ \bfig\scalefactor{.9} \node a(300,0)[D] \node b(800,0)[D] \node c(0,250)[B] \node d(300,500)[C] \node e(800,500)[D] \node f(0,800)[A] \node g(500,800)[B]
\arrow|b|/=/[a`b;]
\arrow|l|/@{>}|{\usebox{\bbox}}/[c`a;w]
\arrow|r|/>/[d`a;g']
\arrow|r|/=/[e`b;]
\arrow|b|/>/[d`e;g]
\arrow|l|/>/[f`c;f']
\arrow|l|/@{>}|{\usebox{\bbox}}/[f`d;v]
\arrow|r|/@{>}|{\usebox{\bbox}}/[g`e;w]
\arrow|a|/>/[f`g;f]
\place(430,650)[{\scriptstyle \alpha}]
\morphism(150,300)|l|/=>/<0,200>[`;\beta]
\node f'(1500,800)[A] \node g'(2000,800)[B] \node c'(1500,300)[B] \node d'(2000,300)[B] \node a'(1800,0)[D] \node b'(2300,0)[D] \node e'(2300,400)[D]
\arrow|a|/>/[f'`g';f]
\arrow|l|/>/[f'`c';f']
\arrow|l|/=/[g'`d';]
\arrow|r|/@{>}|{\usebox{\bbox}}/[g'`e';w]
\arrow|a|/=/[c'`d';]
\arrow|l|/@{>}|{\usebox{\bbox}}/[c'`a';w]
\arrow|b|/=/[a'`b';]
\arrow/@{>}|{\usebox{\bbox}}/[d'`b';w]
\arrow|r|/=/[e'`b';]
\place(1950,150)[{\scriptstyle 1_w}]
\morphism(2130,320)|r|/=>/<0,200>[`;] \place(2215,335)[{\scriptstyle 1_w}]
\place(1150,500)[\mbox{``=''}]
\efig $$ and $$ \bfig\scalefactor{.9} \node a(300,0)[C] \node b(800,0)[D] \node c(0,250)[A] \node d(300,500)[C] \node e(800,500)[C] \node f(0,800)[A] \node g(500,800)[A]
\arrow|b|/>/[a`b;g]
\arrow|l|/@{>}|{\usebox{\bbox}}/[c`a;v]
\arrow|r|/=/[d`a;]
\arrow|r|/>/[e`b;g']
\arrow|b|/=/[d`e;]
\arrow|l|/=/[f`c;]
\arrow|l|/@{>}|{\usebox{\bbox}}/[f`d;v]
\arrow|r|/@{>}|{\usebox{\bbox}}/[g`e;v]
\arrow|a|/=/[f`g;]
\place(430,650)[{\scriptstyle 1_v}]
\morphism(160,320)|l|/=>/<0,200>[`;1_v]
\node f'(1500,800)[A] \node g'(2000,800)[A] \node c'(1500,300)[A] \node d'(2000,300)[B] \node a'(1800,0)[C] \node b'(2300,0)[D] \node e'(2300,400)[C]
\arrow|a|/=/[f'`g';]
\arrow|l|/=/[f'`c';]
\arrow|l|/>/[g'`d';f']
\arrow|r|/@{>}|{\usebox{\bbox}}/[g'`e';v]
\arrow|a|/>/[c'`d';f]
\arrow|l|/@{>}|{\usebox{\bbox}}/[c'`a';v]
\arrow|b|/>/[a'`b';g]
\arrow/@{>}|{\usebox{\bbox}}/[d'`b';w]
\arrow|r|/>/[e'`b';g']
\place(1950,150)[{\scriptstyle \alpha}]
\morphism(2150,320)|l|/=>/<0,200>[`;\beta]
\place(1150,500)[\mbox{``=''}]
\efig $$
\noindent So, first of all $ f = f' $ and $ g = g' $. The first ``equation'' says $$ \bfig\scalefactor{.8}
\square/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B`C`D`D;`w`g_*`]
\square(0,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`B`C;`f_*`v`]
\place(250,500)[{\scriptstyle \beta}]
\square(500,0)/>``=`=/[C`D`D`D;g```]
\place(750,250)[{ \lrcorner}]
\square(500,500)/>``@{>}|{\usebox{\bbox}}`/[A`B`C`D;f``w`]
\place(750,750)[{\scriptstyle \alpha}]
\place(1350,500)[=]
\square(1700,0)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B`B`D`D;`w`w`]
\place(1950,250)[{\scriptstyle 1_w}]
\square(1700,500)/>`@{>}|{\usebox{\bbox}}`=`/[A`B`B`B;f`f_*``]
\place(1950,750)[{\scriptstyle \lrcorner}]
\square(2200,0)/=```=/<500,1000>[B`B`D`D;```]
\morphism(2700,1000)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[B`D;w]
\morphism(2700,500)/=/<0,-500>[D`D;]
\place(2450,500)[{\scriptstyle \cong}]
\efig $$ which by sliding is equivalent to $ \widehat{\alpha} \beta = 1_{w \bdot f_*} $. Similarly the second equation says $ \beta \widehat{\alpha} = 1_{g_* \bdot v} $.
\end{proof}
We end by acknowledging the ``triple category in the room''. The cubes we have been discussing are clearly the triple cells of a triple category $ {\mathfrak{Ret}} {\mathbb A} $. We orient the cubes to be in line with our intercategories conventions of \cite{GraPar15}, where the faces of the cubes are horizontal, vertical (left and right), and basic (front and back), in decreasing order of strictness (or fanciness). The order here will be commutative, cell, and retrocell.
\begin{itemize}
\item[1.] Objects are the objects of $ {\mathbb A} $, ($ A, A', B, \ldots $)
\item[2.] Transversal arrows are the horizontal arrows of $ {\mathbb A} $,
($ f, f', g, g' $)
\item[3.] Horizontal arrows are the horizontal arrows of $ {\mathbb A} $,
($ a, b, c, d $)
\item[4.] Vertical arrows are the vertical arrows of $ {\mathbb A} $,
($ v, v', w, w' $)
\item[5.] Horizontal cells are commutative squares of horizontal arrows
\item[6.] Vertical cells are double cells in $ {\mathbb A} $, ($ \alpha, \alpha' $)
\item[7.] Basic cells are retrocells in $ {\mathbb A} $, ($ \beta, \gamma $)
\item[8.] Triple cells are ``commutative'' cubes as discussed above
\end{itemize}
$$ \bfig\scalefactor{.9} \node a(300,0)[D] \node b(800,0)[D'] \node c(0,250)[C] \node d(300,500)[B] \node e(800,500)[B'] \node f(0,800)[A] \node g(500,800)[A']
\arrow|b|/>/[a`b;d]
\arrow|l|/>/[c`a;g]
\arrow|r|/@{>}|{\usebox{\bbox}}/[d`a;w]
\arrow|r|/@{>}|{\usebox{\bbox}}/[e`b;w']
\arrow|b|/>/[d`e;b]
\arrow|l|/@{>}|{\usebox{\bbox}}/[f`c;v]
\arrow|l|/>/[f`d;f]
\arrow|r|/>/[g`e;f']
\arrow|a|/>/[f`g;a]
\place(150,400)[{\scriptstyle \alpha}]
\morphism(650,250)|l|/=>/<-200,0>[`;\gamma]
\node f'(1500,800)[A] \node g'(2000,800)[A'] \node c'(1500,300)[C] \node d'(2000,300)[C'] \node a'(1800,0)[D] \node b'(2300,0)[D'\rlap{\ .}] \node e'(2300,400)[B']
\arrow|a|/>/[f'`g';a]
\arrow|l|/@{>}|{\usebox{\bbox}}/[f'`c';v]
\arrow|l|/@{>}|{\usebox{\bbox}}/[g'`d';v']
\arrow|r|/>/[g'`e';f']
\arrow|a|/>/[c'`d';c]
\arrow|l|/>/[c'`a';g]
\arrow|b|/>/[a'`b';d] \arrow/>/[d'`b';g']
\arrow|r|/@{>}|{\usebox{\bbox}}/[e'`b';w']
\morphism(1850,550)/=>/<-200,0>[`;\beta]
\place(2150,400)[{\scriptstyle \alpha'}]
\efig $$
We leave the details to the interested reader.
\end{document} |
\begin{document}
\title{On Type I Blowups of Suitable Weak Solutions to Navier-Stokes Equations near Boundary.
} \begin{abstract} In this note, boundary Type I blowups of suitable weak solutions to the Navier-Stokes equations are discussed. In particular, it is shown that, under certain assumptions, the existence of a non-trivial mild bounded ancient solution in a half space leads to the existence of suitable weak solutions with a Type I blowup on the boundary.
\end{abstract}
\section{Introduction}
\setcounter{equation}{0}
The aim of the note is to study conditions under which solutions to the Navier-Stokes equations undergo Type I blowups on the boundary.
Consider the classical Navier-Stokes equations \begin{equation}
\label{NSE}
\partial_tv+v\cdot\nabla v-\Delta v=-\nabla q,\qquad {\rm div}\,v=0
\end{equation}
in the space time domain $Q^+=B^+\times ]-1,0[$, where $B^+=B^+(1)$ and $B^+(r)=\{x\in \mathbb R^3: |x|<r,\,x_3>0\}$ is a half ball of radius $r$ centred at the origin $x=0$. It is supposed that $v$ satisfies the homogeneous Dirichlet boundary condition \begin{equation}\label{bc}
v(x',0,t)=0
\end{equation}
for all $|x'|<1$ and $-1<t<0$. Here, $x'=(x_1,x_2)$ so that $x=(x',x_3)$ and $z=(x,t)$.
Our goal is to understand how to determine whether or not the origin $z=0$ is a singular point of the velocity field $v$. We say that $z=0$ is a regular point of $v$ if there exists $r>0$ such that $v\in L_\infty(Q^+(r))$ where $Q^+(r)=B^+(r)\times ]-r^2,0[$. It is known, see \cite{S3} and \cite{S2009},
that the velocity $v$ is H\"older continuous in a parabolic vicinity of $z=0$ if $z=0$ is a regular point. However, further smoothness even in spatial variables does not follow in the regular case, see \cite{Kang2005} and \cite{SerSve10} for counter-examples.
The class of solutions to be studied is as follows.
\begin{definition}\label{sws} A pair of functions $v$ and $q$ is called a suitable weak solution to the Navier-Stokes equations in $Q^+$ near the boundary if and only if the following conditions hold:
\begin{equation}\label{class}
v\in L_{2, \infty}(Q^+)\cap L_2(-1,0;W^1_2(Q^+)),\qquad q\in L_\frac 32(Q^+);
\end{equation}
$v$ and $q$ satisfy equations (\ref{NSE}) and boundary condition (\ref{bc});
$$\int\limits_{B^+}\varphi(x,t)|v(x,t)|^2dx+2\int\limits_{-1}^t\int\limits_{B^+}\varphi|\nabla v|^2dxdt\leq $$
\begin{equation}\label{energy_inequality}
\leq \int\limits_{-1}^t\int\limits_{B^+}(|v|^2(\partial_t\varphi+\Delta\varphi)+v\cdot\nabla v(|v|^2+2q))dxdt \end{equation}
for all non-negative functions $\varphi\in C^\infty_0(B\times]-1,1[)$ such that $\varphi|_{x_3=0}=0$.
\end{definition}
In what follows, some statements will be expressed in terms of scale invariant quantities (invariant with respect to the Navier-Stokes scaling: $\lambda v(\lambda x,\lambda^2 t)$ and $\lambda ^2q(\lambda x,\lambda^2 t)$). Here, they are:
$$A(v,r)=\sup\limits_{-r^2<t<0}\frac 1r\int\limits_{B^+(r)}|v(x,t)|^2dx, \qquad
E(v,r)=\frac 1r\int\limits_{Q^+(r)}|\nabla v|^2dz,$$$$
C(v,r)=\frac 1{r^2}\int\limits_{Q^+(r)}|v|^3dz,\qquad
D_0(q,r)=\frac 1{r^2}\int\limits_{Q^+(r)}|q-[q]_{B^+(r)}|^\frac 32dz, $$
$$D_2(q,r)=\frac 1{r^\frac {13}8}\int\limits^0_{-r^2}\Big(\int\limits_{B^+(r)}|\nabla q|^\frac {12}{11}dx\Big)^\frac {11}8dt,$$
where
$$[f]_\Omega=\frac 1{|\Omega|}\int\limits_\Omega fdx.$$
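For instance, the scale invariance of $A$ can be checked directly (the other quantities are treated in the same way): writing $v_\lambda(y,s)=\lambda v(\lambda y,\lambda^2 s)$ and substituting $x=\lambda y$, $t=\lambda^2 s$,
$$A(v_\lambda,r)=\sup\limits_{-r^2<s<0}\frac {\lambda^2}r\int\limits_{B^+(r)}|v(\lambda y,\lambda^2 s)|^2dy=\sup\limits_{-(\lambda r)^2<t<0}\frac 1{\lambda r}\int\limits_{B^+(\lambda r)}|v(x,t)|^2dx=A(v,\lambda r).$$
Identities of this type are used repeatedly below.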
We also introduce the following values: $$g(v):=\min\{\sup\limits_{0<R<1}A(v,R), \sup\limits_{0<R<1}C(v,R),\sup\limits_{0<R<1}E(v,R)\} $$ and, given $r>0$, $$G_r(v,q):=$$$$\max\{\sup\limits_{0<R<r}A(v,R), \sup\limits_{0<R<r}C(v,R),\sup\limits_{0<R<r}E(v,R),\sup\limits_{0<R<r}D_0(q,R)\}. $$
The relationship between $g(v)$ and $G_1(v,q)$ is described in the following proposition.
\begin{pro}\label{boundednesstheorem} Let $v$ and $q$ be a suitable weak solution to the Navier-Stokes equations in $Q^+$ near the boundary. Then, $G_1$ is bounded if and only if $g$ is bounded. \end{pro}
If $z=0$ is a singular point of $v$ and $g(v)<\infty$, then $z=0$ is called a Type I singular point or a Type I blowup point.
Now, we are ready to state the main results of the paper.
\begin{definition}
\label{leas} A function $u:Q^+_-:=\mathbb R^3_+\times]-\infty,0[\,\to\mathbb R^3$ is called a local energy ancient solution if there exists a function $p:Q_-^+\to\mathbb R$ such that the pair $u$ and $p$ is a suitable weak solution in $Q^+(R)$ for any $R>0$. Here, $\mathbb R^3_+:=\{x\in \mathbb R^3:\,x_3>0\}$. \end{definition}
\begin{theorem}\label{local energy ancient solution} There exists a suitable weak solution $v$ and $q$ with Type I blowup at the origin $z=0$ if and only if there exists a non-trivial local energy ancient solution $u$ such that $u$ and the corresponding pressure $p$ have the following properties: \begin{equation}\label{scalequatities} G_\infty(u,p):=\max\{\sup\limits_{0<R<\infty} A(u,R), \sup\limits_{0<R<\infty}E(u,R),$$$$\sup\limits_{0<R<\infty}C(u,R),\sup\limits_{0<R<\infty}D_0(p,R)\}<\infty \end{equation} and \begin{equation}\label{singularity} \inf\limits_{0<a<1}C(u,a)\geq \varepsilon_1>0. \end{equation}
\end{theorem} \begin{remark}\label{singType1} According to (\ref{scalequatities}) and (\ref{singularity}), the origin $z=0$ is a Type I blowup point of the velocity $u$. \end{remark}
There is another way to construct a suitable weak solution with Type I blowup. It is motivated by the recent result in \cite{AlBa18} for the interior case. Now, the main object is related to the so-called mild bounded ancient solutions in a half space, for details see \cite{SerSve15} and \cite{BaSe15}. \begin{definition}\label{mbas} A bounded function $u$ is a mild bounded ancient solution if and only if there exists a pressure $p=p^1+p^2$, where the even extension of $p^1$ in $x_3$ to the whole space $\mathbb R^3$ is a $L_\infty(-\infty,0;BMO(\mathbb R^3))$-function, $$\Delta p^1={\rm divdiv}\,u\otimes u$$ in $Q^+_-$ with $p^1_{,3}(x',0,t)=0$, and $p^2(\cdot,t)$ is a harmonic function in $\mathbb R^3_+$, whose gradient satisfies the estimate
$$|\nabla p^2(x,t)|\leq \ln (2+1/x_3)$$ for all $(x,t)\in Q^+_-$ and has the property
$$\sup\limits_{x'\in \mathbb R^2}|\nabla p^2(x,t)|\to 0 $$ as $x_3\to \infty$; functions $u$ and $p$ satisfy: $$\int\limits_{Q^+_-}u\cdot \nabla qdz=0$$ for all $q\in C^\infty_0(Q_-:=\mathbb R^3\times ]-\infty,0[)$ and, for any $t<0$, $$\int\limits_{Q^+_-}\Big(u\cdot(\partial_t\varphi+\Delta\varphi)+u\otimes u:\nabla \varphi+p{\rm div}\,\varphi\Big)dz=0 $$ for all $\varphi\in C^\infty_0(Q_-)$ with $\varphi(x',0,t)=0$ for all $x'\in \mathbb R^2$. \end{definition}
As has been shown in \cite{BaSe15}, any mild bounded ancient solution $u$ in a half space is infinitely smooth up to the boundary and $u|_{x_3=0}=0$. \begin{theorem}\label{mbas_type1}
Let $u$ be a mild bounded ancient solution such that $|u|\leq 1$ and $|u(0,a,0)|=1$ for a positive number $a$ and such that
(\ref{scalequatities}) holds. Then there exists a suitable weak solution in $Q^+$ having Type I blowup at the origin $z=0$. \end{theorem} {\bf Acknowledgement} The work is supported by the grant RFBR 17-01-00099-a.
\section{Basic Estimates}
\setcounter{equation}{0}
In this section, we are going to state and prove certain basic estimates for arbitrary suitable weak solutions near the boundary.
For our purposes, the main estimate of the convective term can be derived as follows. First, we apply H\"older inequality in spatial variables:
$$\||v||\nabla v|\|_{\frac {12}{11},\frac 32,Q_+(r)}^\frac 32=\int\limits^0_{-r^2}\Big(\int\limits_{B_+(r)}
|v|^\frac {12}{11}|\nabla v|^\frac {12}{11}dx\Big)^\frac{11}8dt\leq$$
$$\leq\int\limits^0_{-r^2}\Big(\int\limits_{B_+(r)}|\nabla v|^2dx\Big)^\frac 34\Big(\int\limits_{B_+(r)}|v|^\frac {12}5dx\Big)^\frac 58dt. $$ Then, by interpolation, since $\frac {12}5=2\cdot\frac 35+3\cdot\frac 25$, we find
$$\Big(\int\limits_{B_+(r)}|v|^\frac {12}5dx\Big)^\frac 58\leq \Big(\int\limits_{B_+(r)}|v|^2dx\Big)^\frac 38\Big(\int\limits_{B_+(r)}|v|^3dx\Big)^\frac 14. $$ So,
$$\||v||\nabla v|\|_{\frac {12}{11},\frac 32,Q_+(r)}^\frac 32\leq $$$$\leq \int\limits^0_{-r^2}\Big(\int\limits_{B_+(r)}|\nabla v|^2dx\Big)^\frac 34\Big(\int\limits_{B_+(r)}|v|^2dx\Big)^\frac 38\Big(\int\limits_{B_+(r)}|v|^3dx\Big)^\frac 14dt\leq $$
\begin{equation}\label{mainest}\leq \sup\limits_{-r^2<t<0}\Big(\int\limits_{B_+(r)}|v|^2dx\Big)^\frac 38\Big(\int\limits_{Q_+(r)}|\nabla v|^2dxdt\Big)^\frac 34\Big(\int\limits_{Q_+(r)}|v|^3dxdt\Big)^\frac 14\leq \end{equation} $$\leq r^\frac 38r^\frac 34r^\frac 12A^\frac 38(v,r)E^\frac 34(v,r)C^\frac 14(v,r) $$$$=r^\frac {13}8A^\frac 38(v,r)E^\frac 34(v,r)C^\frac 14(v,r). $$
Two other estimates are well known and valid for any $0<r\leq 1$: \begin{equation}\label{multiple} C(v,r)\leq cA^\frac 34(v,r)E^\frac 34(v,r)
\end{equation}
and \begin{equation}\label{embedding}
D_0(q,r)\leq cD_2(q,r). \end{equation}
Next, one more estimate immediately follows from the energy inequality (\ref{energy_inequality}) for a suitable choice of cut-off function $\varphi$: \begin{equation}\label{energy} A(v,\tau R)+E(v,\tau R)\leq c(\tau)\Big[C^\frac 23(v,R)+C^\frac 13(v,R)D_0^\frac 23(q,R)+C(v,R)\Big] \end{equation} for any $0<\tau<1$ and for all $0<R\leq 1$.
The last two estimates are coming out from the linear theory. Here, they are: \begin{equation}\label{pressure} D_2(q,r)\leq c \Big(\frac r\varrho\Big)^2\Big[D_2(q,\varrho)+E^\frac 34(v,\varrho)\Big]+$$$$+c\Big(\frac \varrho r\Big)^\frac {13}8A^\frac 38(v,\varrho)E^\frac 34(v,\varrho)C^\frac 14(v,\varrho) \end{equation} for any $0<r<\varrho\leq 1$ and \begin{equation}\label{highder}
\|\partial_tv\|_{\frac {12}{11},\frac 32,Q^+(\tau R)}+\|\nabla^2v\|_{\frac {12}{11},\frac 32,Q^+(\tau R)}+\|\nabla q\|_{\frac {12}{11},\frac 32,Q^+(\tau R)} \leq \end{equation} $$\leq c(\tau)R^\frac {13}{12}\Big[D_0^\frac 23(q,R)+C^\frac 13(v,R)+E^\frac 12(v,R)+$$$$+(A^\frac 38(v,R)E^\frac 34(v,R)C^\frac 14(v,R))^\frac 23\Big] $$ for any $0<\tau<1$ and for all $0<R\leq 1$.
Estimate (\ref{highder}) follows from bound (\ref{mainest}), from the local regularity theory for the Stokes equations (linear theory), see paper \cite{S2009}, and from scaling. Estimate (\ref{pressure}) will be proven in the next section.
\section{Proof of (\ref{pressure})}
\setcounter{equation}{0}
Here, we follow the paper \cite{S3}. We let
$\tilde f=-v\cdot\nabla v$ and observe
that
\begin{equation}\label{weakerterm}
\frac 1r\|\nabla v\|_{\frac {12}{11},\frac 32,Q^+(r)}\leq r^\frac {13}{12}E^\frac 12(v,r)
\end{equation} and, see (\ref{mainest}), \begin{equation}\label{righthand}
\|\tilde f\|_{\frac {12}{11},\frac 32,Q^+(r)} \leq cr^\frac {13}{12}(A^\frac 38(v,r)E^\frac 34(v,r)C^\frac 14(v,r))^\frac 23. \end{equation} Next, we select a convex domain with smooth boundary so that $$B^+(1/2)\subset \tilde B\subset B^+$$ and, for $0<\varrho<1$, we let $$\tilde B(\varrho)=\{x\in \mathbb R^3: x/\varrho\in \tilde B\},\qquad \tilde Q(\varrho)=\tilde B(\varrho)\times ]-\varrho^2,0[. $$ Now, consider the following initial boundary value problem: \begin{equation}\label{v1eq} \partial_tv^1-\Delta v^1+\nabla q^1=\tilde f, \qquad {\rm div}\,v^1=0 \end{equation} in $\tilde Q(\varrho)$ and \begin{equation}\label{v1ibc} v^1=0 \end{equation} on parabolic boundary $\partial'\tilde Q(\varrho)$ of $\tilde Q(\varrho)$. It is also supposed that $[q^1]_{\tilde B(\varrho)}(t)=0$ for all $-\varrho^2<t<0$.
Due to estimate (\ref{righthand}) and due to the Navier-Stokes scaling, a unique solution to problem (\ref{v1eq}) and (\ref{v1ibc}) satisfies the estimate
$$\frac 1{\varrho^2}\|v^1\|_{\frac {12}{11},\frac 32,\tilde Q(\varrho)}+\frac 1{\varrho}\|\nabla v^1\|_{\frac {12}{11},\frac 32,\tilde Q(\varrho)}
+\|\nabla^2 v^1\|_{\frac {12}{11},\frac 32,\tilde Q(\varrho)}+$$ \begin{equation}\label{v1est}
+ \frac 1{\varrho}\| q^1\|_{\frac {12}{11},\frac 32,\tilde Q(\varrho)}
+\|\nabla q^1\|_{\frac {12}{11},\frac 32,\tilde Q(\varrho)}\leq \end{equation}
$$\leq c\|\tilde f\|_{\frac {12}{11},\frac 32,\tilde Q(\varrho)}\leq c\varrho^\frac {13}{12}(A^\frac 38(v,\varrho)E^\frac 34(v,\varrho)C^\frac 14(v,\varrho))^\frac 23,$$ where a generic constant $c$ is independent of $\varrho$.
Regarding $v^2=v-v^1$ and $q^2=q-[q]_{B_+(\varrho/2)}-q^1$, one can notice the following:\begin{equation}\label{v2eq} \partial_tv^2-\Delta v^2+\nabla q^2=0, \qquad {\rm div}\,v^2=0 \end{equation} in $\tilde Q(\varrho)$ and \begin{equation}\label{v2ibc}
v^2|_{x_3=0}=0. \end{equation} As it was indicated in \cite{S2009}, functions $v^2$ and $q^2$ obey the estimate
\begin{equation}\label{v2q2est}\|\nabla^2 v^2\|_{9,\frac 32, Q^+(\varrho/4)}+\|\nabla q^2\|_{9,\frac 32, Q^+(\varrho/4)}\leq \frac c{\varrho^\frac {29}{12}}L,\end{equation} where
$$L:=\frac 1{\varrho^2}\| v^2\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}+\frac 1{\varrho}\| \nabla v^2\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}+\frac 1{\varrho}\| q^2\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}.$$ As to an evaluation of $L$, we have $$L\leq
\Big[\frac 1{\varrho^2}\| v\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}+\frac 1{\varrho}\| \nabla v\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}+\frac 1{\varrho}\| q-[q]_{B^+(\varrho/2)}\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}+$$
$$+\frac 1{\varrho^2}\| v^1\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}+\frac 1{\varrho}\| \nabla v^1\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}+\frac 1{\varrho}\| q^1\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}\Big]\leq$$
$$\leq \Big[\frac 1{\varrho}\| \nabla v\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}+\frac 1{\varrho}\|\nabla q\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}+$$$$+\frac 1{\varrho}\| \nabla v^1\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}+\frac 1{\varrho}\| q^1\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}\Big].$$ So, by (\ref{weakerterm}), by (\ref{highder}) with $R=\varrho$ and $\tau=\frac 12$, and by (\ref{v1est}), one can find the following bound \begin{equation}\label{q2est}
\|\nabla q^2\|_{9,\frac 32, Q^+(\varrho/4)} \leq \frac c{\varrho^\frac 43}\Big[E^\frac 12(v,\varrho)+D_2^\frac 23(q,\varrho)+$$$$+(A^\frac 38(v,\varrho)E^\frac 34(v,\varrho)C^\frac 14(v,\varrho))^\frac 23\Big]. \end{equation} Now, assuming $0<r<\varrho/4$, we can derive from (\ref{v1est}) and from (\ref{q2est}) the estimate
$$D_2(q,r)\leq \frac c{r^\frac {13}8}\Big[\int\limits^0_{-r^2}\Big(\int\limits_{B^+(r)}|\nabla q^1|^\frac {12}{11}dx\Big)^\frac {11}8dt+
\int\limits^0_{-r^2}\Big(\int\limits_{B^+(r)}|\nabla q^2|^\frac {12}{11}dx\Big)^\frac {11}8dt\Big]\leq $$$$
\leq \frac c{r^\frac {13}8}\int\limits^0_{-r^2}\Big(\int\limits_{B^+(r)}|\nabla q^1|^\frac {12}{11}dx\Big)^\frac {11}8dt+cr^2
\int\limits^0_{-r^2}\Big(\int\limits_{B^+(r)}|\nabla q^2|^9dx\Big)^\frac 16dt\leq$$ $$\leq c\Big(\frac \varrho r\Big)^\frac {13}8A^\frac 38(v,\varrho)E^\frac 34(v,\varrho)C^\frac 14(v,\varrho)+c\Big(\frac r\varrho\Big)^2\Big[E^\frac 34(v,\varrho)+D_2(q,\varrho)+$$$$+A^\frac 38(v,\varrho)E^\frac 34(v,\varrho)C^\frac 14(v,\varrho)\Big] $$ and thus $$D_2(q,r)\leq c\Big(\frac r\varrho\Big)^2\Big[E^\frac 34(v,\varrho)+D_2(q,\varrho)\Big]+$$$$+c\Big(\frac \varrho r\Big)^\frac {13}8A^\frac 38(v,\varrho)E^\frac 34(v,\varrho)C^\frac 14(v,\varrho) $$ for $0<r<\varrho/4$. The latter implies estimate (\ref{pressure}).
\section{Proof of Proposition \ref{boundednesstheorem}}
\setcounter{equation}{0}
\begin{proof} We let $g=g(v)$ and $G=G_1(v,q)$.
Let us assume that $g<\infty$. Our aim is to show that $G<\infty$. There are three cases:
\textsc{Case 1}. Suppose that \begin{equation}\label{case1} C_0:=\sup\limits_{0<R<1}C(v,R)<\infty. \end{equation} Then, from (\ref{energy}), one can deduce that $$A(v, R/2)+E(v, R/2)\le c_1(1+D_0^\frac 23(q,R)).$$ Here and in what follows in this case, $c_1$ is a generic constant depending on $C_0$ only.
Now, let us use (\ref{embedding}), (\ref{pressure}) with $\varrho= R/2$, and the above estimate. As a result, we find $$D_2(q,r)\leq c\Big(\frac rR\Big)^2D_2(q, R/2) +c_1\Big(\frac Rr\Big)^\frac {13}8[E^\frac 34(v, R/2)+1 +D_2^\frac 34(q,R)]\leq $$$$\leq c\Big(\frac rR\Big)^2D_2(q,R)+c_1\Big(\frac Rr\Big)^\frac {13}8[1+D_2(q,R)^\frac 23]$$ for all $0<r< R/2$. So, by Young's inequality, \begin{equation}\label{pressure1} D_2(q,r)\leq c\Big(\frac rR\Big)^2D_2(q,R)+c_1\Big(\frac Rr\Big)^\frac {71}8\end{equation} for all $0<r< R/2$. If $ R/2\leq r\leq R$, then
$$D_2(q,r)\leq \frac 1{( R/2)^\frac {13}8}\int\limits^0_{-R^2}\Big(\int\limits_{B^+(R)}|\nabla q|^\frac {12}{11}dx\Big)^\frac {11}8dt\leq 2^\frac {13}8D_2(q,R)\Big(\frac {2r}{R}\Big)^2.$$ So, estimate (\ref{pressure1})
holds for all $0<r<R<1$.
Now, for $\mu$ and $R$ in $]0,1[$, we let $r=\mu R$ in (\ref{pressure1}) and find $$D_2(q,\mu R)\leq c\mu^2D_2(q,R)+c_1\mu^{-\frac{71}8}. $$ Picking $\mu$ so small that $2c\mu\leq 1$, we show that $$D_2(q,\mu R)\leq \mu D_2(q,R)+c_1 $$ for any $0<R<1$. One can iterate the last inequality and get the following: $$D_2(q,\mu^{k+1}R)\leq \mu^{k+1}D_2(q,R)+c_1(1+\mu+...+\mu^k)$$ for all natural numbers $k$. Since $1+\mu+...+\mu^k\leq (1-\mu)^{-1}$, the latter implies that \begin{equation}\label{1case1est} D_2(q,r)\leq c_1\frac rRD_2(q,R)+c_1 \end{equation} for all $0<r<R<1$. And we can deduce from (\ref{embedding}) and from the above estimate that $$\max\{\sup\limits_{0<R<1}D_0(q,R), \sup\limits_{0<R<1}D_2(q,\tau R)\}<\infty$$ for any $0<\tau<1$. Uniform boundedness of $A(v,R)$ and $E(v,R)$ follows from the energy estimate (\ref{energy}) and from the assumption (\ref{case1}).
\textsc{Case 2}. Assume now that \begin{equation}\label{case2}
A_0:=\sup\limits_{0<R<1}A(v,R)<\infty.
\end{equation}
Then, from (\ref{multiple}), it follows that $$C(v,r)\leq cA_0^\frac 34E^\frac 34(v,r)$$ for any $0<r<1$ and thus $$A(v,\tau \varrho)+E(v,\tau \varrho)\leq c_3(A_0,\tau)\Big[E^\frac 12(v,\varrho)+E^\frac 14(v,\varrho)D_0^\frac 23(q,\varrho)+E^\frac 34(v,\varrho)\Big]. $$ for any $0<\tau<1$ and $0<\varrho<1$.
Our next step is an estimate for the pressure quantity: $$D_2(q,r)\leq c\Big(\frac r\varrho\Big)^2\Big[D_2(q,\varrho)+E^\frac34(v,\varrho)\Big]+c_2\Big(\frac \varrho r\Big)^\frac {13}8E^\frac {15}{16}(v,\varrho)\leq$$$$\leq c\Big(\frac r\varrho\Big)^2D_2(q,\varrho)+c_2\Big(\frac \varrho r\Big)^\frac {13}8(E^\frac {15}{16}(v,\varrho)+1)$$ for any $0<r<\varrho<1$. Here, a generic constant, depending on $A_0$ only, is denoted by $c_2$.
Letting $r=\tau \varrho$ and $\mathcal E(r):=E(v,r)+D_2(q,r)$, one can deduce from the latter inequalities, see also (\ref{embedding}), the following estimates: $$\mathcal E(\tau \varrho)\leq c\tau^2D_2(q,\varrho)+c_2\Big(\frac 1 \tau\Big)^\frac {13}8(E^\frac {15}{16}(v,\varrho)+1)+$$$$+c_3(A_0,\tau)\Big[E^\frac 12(v,\varrho)+E^\frac 14(v,\varrho)D_2^\frac 23(q,\varrho)+E^\frac 34(v,\varrho)\Big]\leq $$ $$\leq c\tau^2D_2(q,\varrho)+c_2\Big(\frac 1 \tau\Big)^\frac {13}8(E^\frac {15}{16}(v,\varrho)+1)+$$$$+c_3(A_0,\tau)\Big(\frac1{\tau}\Big)^4E^\frac 34(v,\varrho)+c_3(A_0,\tau)\Big[E^\frac 12(v,\varrho)+E^\frac 34(v,\varrho)\Big]\leq $$ $$\leq c\tau^2\mathcal E(\varrho) +c_3(A_0,\tau).
$$ The rest of the proof is similar to what has been done in Case 1, see derivation of (\ref{1case1est}).
\textsc{Case 3}. Assume now that \begin{equation}\label{case3}
E_0:=\sup\limits_{0<R<1}E(v,R)<\infty.
\end{equation}
Then, from (\ref{multiple}), $$C(v,r)\leq cE_0^\frac 34A^\frac 34(v,r)$$ for all $0<r\leq 1$. As to the pressure, we can find $$D_2(q,\tau\varrho)\leq c\tau^2D_2(q,\varrho)+c_4(E_0,\tau)A^\frac {9}{16}(v,\varrho) $$ for any $0<\tau<1$ and for any $0<\varrho<1$. In turn, the energy inequality gives: $$A(v,\tau \varrho)\leq c_5(E_0,\tau)\Big[A^\frac 12(v,\varrho)+A^\frac 14(v,\varrho)D_0^\frac 23(q,\varrho)+A^\frac 34(v,\varrho)\Big]\leq $$ $$\leq c_5(E_0,\tau)\Big[A^\frac 12(v,\varrho)+A^\frac 14(v,\varrho)D_2^\frac 23(q,\varrho)+A^\frac 34(v,\varrho)\Big] $$ for any $0<\tau<1$ and for any $0<\varrho<1$. Similar to Case 2, one can introduce the quantity $\mathcal E(r)=A(v,r)+D_2(q,r)$ and find the following inequality for it: $$\mathcal E(\tau\varrho)\leq c\tau^2D_2(q,\varrho)+c_4(E_0,\tau)A^\frac {9}{16}(v,\varrho)+$$ $$+c_5(E_0,\tau)\Big[A^\frac 12(v,\varrho)+A^\frac 14(v,\varrho)D_2^\frac 23(q,\varrho)+A^\frac 34(v,\varrho)\Big]\leq $$ $$\leq c\tau^2\mathcal E(\varrho)+c_5(E_0,\tau)$$ for any $0<\tau<1$ and for any $0<\varrho<1$. The rest of the proof is the same as in Case 2.
\end{proof}
\section{Proof of Theorem \ref{local energy ancient solution}}
\setcounter{equation}{0}
Assume that $v$ and $q$ is a suitable weak solution in $Q^+$ with Type I blowup at the origin, so that \begin{equation}\label{type1} g=g(v)=\min\{ \sup\limits_{0<R<1}A(v,R),\sup\limits_{0<R<1}E(v,R),\sup\limits_{0<R<1}C(v,R)\}<\infty. \end{equation} By Proposition \ref{boundednesstheorem}, \begin{equation}
\label{bound1} G_1=G_1(v,q):=\max\{\sup\limits_{0<R<1}A(v,R),\sup\limits_{0<R<1}E(v,R),$$$$\sup\limits_{0<R<1}C(v,R), \sup\limits_{0<R<1}D_0(q,R)\}<\infty. \end{equation} We know, see Theorem 2.2 in \cite{S2016}, that there exists a positive number $\varepsilon_1=\varepsilon_1(G_1)$ such that \begin{equation}\label{sing1} \inf\limits_{0<R<1}C(v,R)\geq \varepsilon_1>0. \end{equation} Otherwise, the origin $z=0$ would be a regular point of $v$.
Let $R_k\to0$ and $a>0$ and let $$u^{(k)}(y,s)=R_kv(x,t),\qquad p^{(k)}(y,s)=R_k^2q(x,t), $$ where $x=R_ky$, $t=R^2_ks$. Then, for $k$ so large that $aR_k\leq 1$, we have $$A(v,aR_k)=A(u^{(k)},a)\leq G_1,\qquad E(v,aR_k)=E(u^{(k)},a)\leq G_1,$$$$ C(v,aR_k)=C(u^{(k)},a)\leq G_1, \qquad D_0(q,aR_k)=D_0(p^{(k)},a)\leq G_1.$$ Thus, by (\ref{highder}),
$$\|\partial_tu^{(k)}\|_{\frac {12}{11},\frac 32,Q^+(a)}+\|\nabla^2u^{(k)}\|_{\frac {12}{11},\frac 32,Q^+(a)}+\|\nabla p^{(k)}\|_{\frac {12}{11},\frac 32,Q^+(a)}\leq c(a,G_1).$$ Moreover, the well-known multiplicative inequality implies the following bound:
$$\sup\limits_k\int\limits_{Q^+(a)}|u^{(k)}|^\frac {10}3dz\leq c(a,G_1).$$
Using known arguments, one can select a subsequence (still denoted in the same way as the whole sequence) such that, for any $a>0$, $$u^{(k)}\to u$$ in $L_3(Q^+(a))$, $$\nabla u^{(k)}\rightharpoonup \nabla u$$ in $L_2(Q^+(a))$, $$p^{(k)}\rightharpoonup p$$ in $L_\frac 32(Q^+(a))$. The first two statements are well known and we shall comment on the last one only.
Without loss of generality, we may assume that $$\nabla p^{(k)}\rightharpoonup w $$ in $L_{\frac {12}{11}}(Q^+(a))$ for all positive $a$.
We let $p^{(k)}_1(x,t)=p^{(k)}(x,t)-[p^{(k)}]_{B^+(1)}(t)$. Then, there exists a subsequence $\{k^1_j\}_{j=1}^\infty$ such that $$p^{(k^1_j)}_1\rightharpoonup p_1$$ in $L_\frac 32(Q^+(1))$ as $j\to\infty$. Indeed, this follows from the Poincar\'e-Sobolev inequality
$$\|p^{(k^1_j)}_1\|_{\frac 32, Q^+(1)}\leq c \|\nabla p^{(k^1_j)}\|_{\frac {12}{11},\frac 32,Q^+(1)}\leq c(1,G_1).$$ Moreover, one has $\nabla p_1=w$ in $Q^+(1)$.
Our next step is to define $p^{(k^1_j)}_2(x,t)=p^{(k^1_j)}(x,t)-[p^{(k^1_j)}]_{B^+(2)}(t)$. For the same reason as above, there is a subsequence $\{k^2_j\}_{j=1}^\infty$ of the sequence $\{k^1_j\}_{j=1}^\infty$ such that $$p^{(k^2_j)}_2\rightharpoonup p_2$$ in $L_\frac 32(Q^+(2))$ as $j\to\infty$. Moreover, we claim that $\nabla p_2=w$ in $Q^+(2)$ and $$p_2(x,t)-p_1(x,t)=[p_2]_{B^+(1)}(t)-[p_1]_{B^+(1)}(t)=[p_2]_{B^+(1)}(t)$$ for $x\in B^+(1)$ and $-1<t<0$, i.e., in $Q^+(1)$.
After $s$ steps, we arrive at the following: there exists a subsequence $\{k^s_j\}_{j=1}^\infty$ of the sequence $\{k^{s-1}_j\}_{j=1}^\infty$ such that $p^{(k^s_j)}_s(x,t) =p^{(k^s_j)}(x,t)-[p^{(k^s_j)}]_{B^+(s)}(t)$ in $Q^+(s)$ and $$p^{(k^s_j)}_s\rightharpoonup p_s$$ in $L_\frac 32(Q^+(s))$ as $j\to\infty$. Moreover, $\nabla p_s=w$ in $Q^+(s)$ and $$p_s(x,t)=p_{s-1}(x,t)+[p_s]_{B^+(s-1)}(t)$$ in $Q^+(s-1)$. And so on.
The following function $p$ is then well defined: $p=p_1$ in $Q^+(1)$ and $$p(x,t)=p_{s+1}(x,t)-\sum\limits_{m=1}^s[p_{m+1}]_{B^+(m)}(t)\chi_{]-m^2,0[}(t)$$ in $Q^+(s+1)$, where $\chi_\omega(t)$ is the indicator function of the set $\omega\subset \mathbb R$. Indeed, to this end, we need to verify that $$p_{s+1}(x,t)-\sum\limits_{m=1}^{s}[p_{m+1}]_{B^+(m)}(t)\chi_{]-m^2,0[}(t)=$$ $$=p_{s}(x,t)-\sum\limits_{m=1}^{s-1}[p_{m+1}]_{B^+(m)}(t)\chi_{]-m^2,0[}(t)$$ in $Q^+(s)$. The latter is an easy exercise.
Now, let us fix $s$ and consider the sequence $$p^{(k^s_j)}(x,t)=p^{(k^s_j)}_{s}(x,t)-\sum\limits_{m=1}^{s-1}[p^{(k^s_j)}_{m+1}]_{B^+(m)}(t)\chi_{]-m^2,0[}(t)$$ in $Q^+(s)$. Then, since the sequence $\{k^s_j\}_{j=1}^\infty$ is a subsequence of all sequences $\{k^{m+1}_j\}_{j=1}^\infty$ with $m\leq s-1$, one can easily check that $$p^{(k^s_j)}\rightharpoonup p$$ in $L_\frac 32(Q^+(s))$. It remains to apply the diagonal procedure of Cantor.
Having the above convergences in hand, we can conclude that the pair $u$ and $p$ is a local energy ancient solution in $Q^+_-$ and that (\ref{scalequatities}) and (\ref{singularity}) hold.
The converse statement is obvious.
\section{Proof of Theorem \ref{mbas_type1}} \setcounter{equation}{0}
The proof is similar to the proof of Theorem \ref{local energy ancient solution}. We start with the scaling $u^\lambda(y,s)=\lambda u(x,t)$ and $p^\lambda(y,s)=\lambda^2p(x,t)$, where $x=\lambda y$, $t=\lambda^2s$, and $\lambda\to\infty$. We have
$$|u^\lambda(0,y_{3\lambda},0)|=\lambda|u(0,a,0)|=\lambda,$$ where $y_{3\lambda}=a/\lambda\to0$ as $\lambda\to\infty$.
For any $R>0$, by the invariance with respect to the scaling, we have $$A(u^\lambda,R)=A(u,\lambda R)\leq G_\infty(u,p)=:G_0,\qquad E(u^\lambda,R)=E(u,\lambda R)\leq G_0, $$ $$C(u^\lambda,R)=C(u,\lambda R)\leq G_0,\qquad D_0(p^\lambda,R)=D_0(p,\lambda R)\leq G_0.$$ Now, one can apply estimate (\ref{highder}) and get the following:
$$\|\partial_tu^\lambda\|_{\frac {12}{11},\frac 32,Q^+(R)}+\|\nabla^2u^\lambda\|_{\frac {12}{11},\frac 32,Q^+(R)}+\|\nabla p^\lambda\|_{\frac {12}{11},\frac 32,Q^+(R)}\leq c(R,G_0).$$ Without loss of generality, we can deduce from the above estimates that, for any $R>0$, $$u^{\lambda}\to v$$ in $L_3(Q^+(R))$, $$\nabla u^{\lambda}\rightharpoonup \nabla v$$ in $L_2(Q^+(R))$, $$p^{\lambda}\rightharpoonup q$$ in $L_\frac 32(Q^+(R))$. Passing to the limit as $\lambda\to\infty$, we conclude that $v$ and $q$ are a local energy ancient solution in $Q^+_-$ for which $G_\infty(v,q)<\infty$.
Now, our goal is to prove that $z=0$ is a singular point of $v$. We argue ad absurdum. Assume that the origin is a regular point, i.e., there exist numbers $R_0>0$ and $A_0>0$ such that
$$|v(z)|\leq A_0$$ for all $z\in Q^+(R_0)$. Hence, \begin{equation}\label{estim1}
C(v,R)=\frac 1{R^2}\int\limits_{Q^+(R)}|v|^3dz\leq cA_0^3R^3 \end{equation} for all $0<R\leq R_0$. Moreover, \begin{equation}\label{pass}
C(u^\lambda,R)\to C(v,R) \end{equation} as $\lambda\to\infty$. By weak convergence, $$D_0(q,R)\leq G_0$$ for all $R>0$.
Now, we can calculate positive numbers $\varepsilon(G_0)$ and $c(G_0)$ of Theorem 2.2 in \cite{S2016}. Then, let us fix $0<R_1<R_0$, see (\ref{estim1}), so that $C(v,R_1)<\varepsilon(G_0)/2$. According to (\ref{pass}), one can find a number $\lambda_0>0$ such that $$C(u^\lambda,R_1)<\varepsilon(G_0)$$ for all $\lambda>\lambda_0$. By Theorem 2.2 of \cite{S2016},
$$\sup\limits_{z\in Q^+(R_1/2)}|u^\lambda(z)|<\frac {c(G_0)}{R_1} $$ for all $\lambda>\lambda_0$. It remains to select $\lambda_1>\lambda_0$ such that $y_{3\lambda_1}=a/\lambda_1 <R_1/2$ and $\lambda_1>c(G_0)/R_1$. Then
$$|u^{\lambda_1}(0,y_{3\lambda_1},0)|=\lambda_1\leq \sup\limits_{z\in Q^+(R_1/2)}|u^{\lambda_1}(z)|<\frac {c(G_0)}{R_1}.$$ This is a contradiction.
\end{document}
\begin{document}
\title{Noise sensitivity of the top eigenvector of a Wigner matrix \thanks{ G\'abor Lugosi was supported by the Spanish Ministry of Economy and Competitiveness, Grant PGC2018-101643-B-I00; ``High-dimensional problems in structured probabilistic models - Ayudas Fundaci\'on BBVA a Equipos de Investigaci\'on Cientifica 2017''; and Google Focused Award ``Algorithms and Learning for AI''. Charles Bordenave was supported by the research grants ANR-14-CE25-0014 and ANR-16-CE40-0024-01. Nikita Zhivotovskiy was supported by RSF grant No. 18-11-00132. }} \author{Charles Bordenave\thanks{Institut de Math\'ematiques de Marseille, CNRS \& Aix-Marseille University, Marseille, France.} \and G\'abor Lugosi \thanks{Department of Economics and Business, Pompeu
Fabra University, Barcelona, Spain, [email protected]} \thanks{ICREA, Pg. Llu\'{\i}s Companys 23, 08010 Barcelona, Spain} \thanks{Barcelona Graduate School of Economics} \and Nikita Zhivotovskiy\thanks{This work was prepared while Nikita Zhivotovskiy was a postdoctoral fellow at the Department of Mathematics, Technion I.I.T. and a researcher at the National University Higher School of Economics. Now at Google Research, Brain Team.} }
\maketitle
\begin{abstract} We investigate the noise sensitivity of the top eigenvector of a Wigner matrix in the following sense. Let $v$ be the top eigenvector of an $N\times N$ Wigner matrix. Suppose that $k$ randomly chosen entries of the matrix are resampled, resulting in another realization of the Wigner matrix with top eigenvector $v^{[k]}$. We prove that, with high probability, when $k \ll N^{5/3-o(1)}$, $v$ and $v^{[k]}$ are almost collinear, while when $k\gg N^{5/3}$, $v^{[k]}$ is almost orthogonal to $v$. \end{abstract}
\section{Introduction}
In this paper we study the \emph{noise sensitivity} of top eigenvectors of Wigner matrices. For a positive integer $N$, let $X=(X_{i,j})$ be a symmetric $N\times N$ matrix such that, for
$i\leq j$, the $X_{i,j}$ are independent real random variables, such that for some constant $\delta>0$ and for all $ i \leq j$, $\mathbb{E} X_{i,j} = 0$ and $\mathbb{E} \exp ( |X_{i,j}|^\delta ) \leq 1/\delta$. Note that this assumption is satisfied for a wide
class of distributions with a sufficiently light tail. Uniformly
bounded, sub-gaussian, and
sub-exponential distributions fall in this class. To guarantee that $X$ is a symmetric matrix, we set $X_{i,j} = X_{j,i}$. Finally, we assume that the off-diagonal entries have unit variance: for all $i \ne j$, $\mathbb{E} X_{ij}^2 = 1$ and for all $i$, $\mathbb{E} X_{ii}^2 = \sigma_0^2$, for some $\sigma_0 \geq 0$. Throughout this text, we call such a matrix $X$ a Wigner matrix.
In this paper we are concerned with large matrices and
the main results are asymptotic as $N\to \infty$. The distribution of the entries $X_{i,j}$ may change with $N$, though we suppress this dependence in the notation. However, the values of $\sigma_0$ and $\delta$ are assumed to be the same for all $N$.
Let $\lambda=\sup_{w\in S^{N-1}} \inr{w,Xw}$ be the top eigenvalue of $X$ and let $v$ denote the corresponding unit eigenvector. In this paper we study the noise sensitivity of $v$. In particular, we are interested in the behavior of the top eigenvector $v^{[k]}$ of the symmetric matrix $X^{[k]}$ obtained by resampling $k$ random entries of $X$. The main finding of the paper is that, with high probability, when $k \leq N^{5/3-o(1)}$, $v$ and $v^{[k]}$ are almost collinear, while when $k\gg N^{5/3}$, $v^{[k]}$ is almost orthogonal to $v$.
\subsection*{Related work and proof technique}
Noise sensitivity is an important notion in probability that has been extensively studied since the pioneering work of Benjamini, Kalai, and Schramm \cite{BeKaSc99}. Noise sensitivity has mostly been studied in the context of Boolean functions and it has been shown to have deep connections with threshold phenomena, measure concentration, and isoperimetric inequalities, see Talagrand \cite{Tal94a}, Friedgut and Kalai \cite{FrKa96}, Kahn, Kalai, and Linial \cite{KaKaLi88}, Bourgain, Kahn, Kalai, Katznelson, and Linial \cite{BoKaKaKaLi92} for some of the key early work and Garban \cite{Gar11}, Garban and Steif \cite{GaSt14}, Kalai and Safra \cite{KaSa06}, O'Donnell \cite{ODo14} for surveys. The key techniques for studying noise sensitivity typically use elements of harmonic analysis, in particular, hypercontractivity (\cite{Tal94a}, \cite{KaKaLi88}) but also the ``randomized algorithm'' approach of Schramm and Steif \cite{ScSt10} and other techniques, see Garban, Pete, and Schramm \cite{GaPeSc10}.
Our approach is inspired by the work of Chatterjee \cite{Cha16}, who shows that, for functions of independent standard Gaussian random variables, the notion of noise sensitivity (or ``chaos'' as Chatterjee calls it) is deeply related to the notion of ``superconcentration''.
In fact, a result in a similar spirit to ours for the \emph{Gaussian Unitary Ensemble} was proved by Chatterjee \cite[Section 3.6]{Cha16}. However, the perturbations considered in \cite{Cha16} are different from resampling random entries of the matrix. In Chatterjee's model, \emph{every} entry of the matrix $X$ is perturbed by replacing $X$ by $Y=e^{-t}X+\sqrt{1-e^{-2t}}X'$ where $X'$ is an independent copy of $X$ and $t>0$. It is proved in \cite{Cha16} that the top eigenvectors of $X$ and $Y$ are approximately orthogonal (in the sense that the expectation of their inner product goes to zero as $N\to\infty$) as soon as $t \gg N^{-1/3}$.
Chatterjee uses this example to illustrate how ``superconcentration'' implies ``chaos''. His techniques crucially depend on the Gaussian assumption as in that case explicit formulas may be exploited. Our techniques are similar in the sense that our starting point is also ``superconcentration'' (i.e., the fact that the variance of the largest eigenvalue of a Wigner matrix is small). However, outside of the Gaussian realm, the notions of superconcentration and chaos are murkier. Starting from a general formula for the variance of a function of independent random variables, due to Chatterjee \cite{Cha05}, we establish a monotonicity lemma that allows us to make the connection between the variance of the top eigenvalue and the inner product of interest. Then we use the fact that the top eigenvector has a small variance (i.e., in a sense, it is ``superconcentrated''). The monotonicity lemma may be of independent interest and it may have further uses when one tries to prove that ``superconcentration implies chaos'' for functions of independent--not necessarily Gaussian--random variables.
\subsection*{Result}
To formally describe the setup, let $X$ be a symmetric $N\times N$ Wigner matrix as defined above. For a positive integer $k \le \binom{N}{2} + N = N(N+1)/2$, let the random matrix $X^{[k]}$ be defined as follows. Let $S_k=\{(i_1,j_1),\ldots,(i_k,j_k)\}$ be a set of $k$ pairs chosen uniformly at random (without replacement) from the set
of all ordered pairs $(i,j)$ of indices with $1\le i\le j\le N$. We also assume that $S_k$ is independent of the entries of $X$. The entries of $X^{[k]}$ on or above the diagonal are \[
X^{[k]}_{i,j} = \left\{ \begin{array}{ll}
X'_{i,j} & \text{if $(i,j)\in S_k$} \\
X_{i,j} & \text{otherwise},
\end{array} \right. \] where $(X'_{i,j})_{1\le i\le j\le N}$ are independent random variables, independent of $X$, and $X'_{i,j}$ has the same distribution as $X_{i,j}$ for all $i\le j$. In words, $X^{[k]}$ is obtained from $X$ by resampling $k$ random entries of the matrix on or above the diagonal, together with the corresponding entries below the diagonal. Clearly, $X^{[k]}$ has the same distribution as $X$. Denote unit eigenvectors corresponding to the largest eigenvalues of $X$ and $X^{[k]}$ by $v$ and $v^{[k]}$, respectively.
Note that with
overwhelming probability, the spectrum of a Wigner matrix is simple
and, in particular, the top unit eigenvector is unique (up to
changing the sign), see \cite{MR2760897}.
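As an informal illustration of this resampling model (not used in any proof), the following Python sketch generates a Wigner matrix with Gaussian entries --- one admissible choice of distribution --- resamples $k$ uniformly chosen entries on or above the diagonal, and reports the overlap $|\inr{v,v^{[k]}}|$; all function names are ours.
\begin{verbatim}
import numpy as np

def wigner(N, rng):
    # Symmetric matrix with independent standard Gaussian entries on and
    # above the diagonal (one admissible choice under our assumptions).
    A = rng.standard_normal((N, N))
    return np.triu(A) + np.triu(A, 1).T

def resample(X, k, rng):
    # Resample k uniformly chosen entries on or above the diagonal and
    # mirror them below the diagonal, as in the definition of X^{[k]}.
    N = X.shape[0]
    rows, cols = np.triu_indices(N)
    picked = rng.choice(rows.size, size=k, replace=False)
    Y = X.copy()
    for t in picked:
        i, j = rows[t], cols[t]
        Y[i, j] = Y[j, i] = rng.standard_normal()
    return Y

def top_eigvec(X):
    # Unit eigenvector of the largest eigenvalue.
    return np.linalg.eigh(X)[1][:, -1]

rng = np.random.default_rng(0)
N = 500
X = wigner(N, rng)
v = top_eigvec(X)
for k in [10, N, int(N ** (5 / 3)), N * (N + 1) // 2]:
    vk = top_eigvec(resample(X, k, rng))
    print(k, abs(v @ vk))
\end{verbatim}
For moderate $N$ one expects the printed overlap to stay close to $1$ for small $k$ and to drop towards $0$ as $k$ approaches the total number of entries, in line with the theorems stated below.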
Our main results are the following.
\begin{theorem} \label{thm:main} Assume that $X$ is a Wigner matrix as above. If $k/N^{5/3}\to \infty$, then \[
\mathbb{E} \left|\inr{v,v^{[k]}}\right| = o(1)~. \] \end{theorem}
Conversely, our second result asserts that when $k \leq N^{5/3 -o(1)}$ then $v$ and $v^{[k]}$ are almost aligned. \begin{theorem} \label{thm:main2} Assume that $X$ is a Wigner matrix as above. There exists a constant $c >0$ such that, with $\varepsilon_N =( \log N)^{-c \log \log N}$, $$
\mathbb{E} \max_{1 \leq k \leq \varepsilon_N N^{5/3} } \min_{s \in \{-1,1\}} \| v - s v^{[k]} \|_{2} = o(1)~. $$ \end{theorem}
The proof of Theorem \ref{thm:main2} actually establishes that $\max_k \min _s \sqrt N \| v - s v^{[k]} \|_{\infty}$ goes to $0$ in probability.
The following heuristic argument may provide an intuition of why the threshold in the lower bound of Theorem \ref{thm:main2} is at $k = N^{5/3 - o(1)}$.
Since the seminal work of Erd\H{o}s, Schlein, and Yau \cite{ESY09b}, it is well known that unit eigenvectors of random matrices are delocalized in the sense that $\| v \|_\infty = N^{-1/2 + o(1)}$ with high probability.
Denoting the top eigenvalue of $X^{[k]}$ by $\lambda^{[k]}$, we might infer from the derivative of a simple eigenvalue as a function of the matrix entries that $$ \lambda^{[1]} - \lambda \simeq ( 1+ \mathbbm{1} (i_1 \ne j_1 )) v_{i_1}( X'_{i_1,j_1} - X_{i_1,j_1} )v_{j_1} \simeq \frac{ X'_{i_1,j_1} - X_{i_1,j_1} }{N^{1+o(1)}}~, $$ where $v_i$ is the $i$-th component of $v$. Assuming that $v_i$ is nearly independent of any matrix entry $X_{ij}$, since $X_{ij}$ is centered with unit variance, we would get from the central limit theorem that $$ \lambda^{[k]} - \lambda = \sum_{t =0}^{k-1}( \lambda^{[t+1]} - \lambda ^{[t]} ) \simeq \frac{\sqrt k}{N^{1+o(1)}}~ . $$ On the other hand, the known behavior of random matrices at the edge of the spectrum implies that the second largest eigenvalue of $X$ is at distance of order $N^{-1/6}$ from $\lambda$. The above heuristic should thus break down when $\sqrt k / N^{1 + o(1)}$ is of order $N^{-1/6}$. This gives the threshold at $k = N^{5/3+o(1)}$.
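For concreteness (this numerical illustration is ours and ignores all $o(1)$ corrections), take $N = 10^{6}$. The heuristic predicts
$$
\lambda^{[k]} - \lambda \simeq \frac{\sqrt{k}}{N} = 10^{-6}\sqrt{k}~,
\qquad
\lambda - \lambda_2 \asymp N^{-1/6} = 10^{-1}~,
$$
so the accumulated perturbation becomes comparable to the gap between the two largest eigenvalues when $\sqrt{k} \approx 10^{5}$, that is, when $k \approx 10^{10} = N^{5/3}$.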
To get an idea of how Theorem \ref{thm:main} is proved, consider the variance of the largest eigenvalue $\lambda$ of $X$. The key inequality we prove is that \[
\left(\mathbb{E} \left|\inr{v,v^{[k]}}\right|\right)^2 \lesssim \frac{N^2\mathrm{Var}(\lambda)}{k}. \] By the Tracy-Widom law \cite{tracy1994level, tracy1996orthogonal} for the largest eigenvalue, we expect that $\mathrm{Var}(\lambda)$ is of order $N^{-1/3}$, which implies the desired asymptotic orthogonality whenever $k/N^{5/3}\to \infty$. The proof of the inequality above is based on a variance formula for general functions of independent random variables due to Chatterjee \cite{Cha05}, see Lemma \ref{lem:chatterjee} below. The variance formula suggests that small variance implies noise sensitivity of the top eigenvalue in a certain sense. This is made precise by Lemmas \ref{lem:monotonicity} and \ref{lem:monotonicity_second}. Finally, noise sensitivity of the top eigenvalue translates to the inequality above.
\remark We expect that the arguments of Theorem \ref{thm:main} for the noise sensitivity of the top
eigenvalue may be modified to prove analogous results for the
eigenvector corresponding to the $j$-th largest eigenvalue, $1 \leq j \leq N$. However, the threshold is expected to occur at values different from $N^{5/3}$. In particular, a simple heuristic argument suggests that for the $j$-th eigenvector the threshold occurs around $N^{5/3+o(1)} \min ( j , N -j +1)^{-2/3}$. However, to keep the presentation transparent, in this paper we focus on the top eigenvalue.
Interestingly, the proof that the top eigenvalue is very sensitive to resampling more than $\Theta(N^{5/3})$ entries involves proving that it is insensitive to resampling just a single entry. As a consequence the proofs of Theorems \ref{thm:main} and \ref{thm:main2} share common techniques.
The rest of the paper is dedicated to proving Theorems \ref{thm:main} and \ref{thm:main2}. In Section \ref{sec:var} we introduce a general tool for proving noise sensitivity that generalizes Chatterjee's ideas based on ``superconcentration'' to functions of independent, not necessarily standard normal, random variables. In Section \ref{sec:rm} we summarize some of the tools from random matrix theory that are crucial for our arguments. In Sections \ref{sec:thm1} and \ref{sec:thm2} we give the proofs of Theorems \ref{thm:main} and \ref{thm:main2}.
\section{Variance and noise sensitivity} \label{sec:var}
The first building block in the proof of Theorem \ref{thm:main} is a formula for the variance of an arbitrary function of independent random variables, due to Chatterjee \cite{Cha05}. For any positive integer $i$, denote $[i]=\{1, \ldots, i\}$.
\begin{lemma} \label{lem:chatterjee} \emph{\cite{Cha05}} Let $X_1,\ldots,X_n$ be independent random variables taking values in some set $\mathcal{X}$ and let $f:\mathcal{X}^n \to \mathbb{R}$ be a measurable function. Denote $X=(X_1,\ldots,X_n)$. Let $X'=(X_1',\ldots,X_n')$ be an independent copy of $X$. Under the notation \[
X^{(i)}=(X_1,\ldots,X_{i-1},X_i',X_{i+1},\ldots,X_n) \quad \text{and} \quad
X^{[i]}=(X_1',\ldots,X_i',X_{i+1},\ldots,X_n) \] and, in particular, $X^{[0]}=X$ and $X^{[n]}=X'$, we have \[ \mathrm{Var}(f(X)) = \frac{1}{2} \sum_{i=1}^n \mathbb{E} \left[\left(f(X)-f(X^{(i)})\right) \left(f(X^{[i-1]})-f(X^{[i]})\right)\right]~. \] \end{lemma}
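As a sanity check (ours, not taken from \cite{Cha05}), the identity of Lemma \ref{lem:chatterjee} can be verified by Monte Carlo for a simple choice of $f$, for instance $f(x_1,\ldots,x_n)=\max_i x_i$ with independent uniform inputs; the sketch below estimates both sides of the formula from the same samples.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, M = 5, 200_000                    # dimension and Monte Carlo sample size
f = lambda x: x.max(axis=-1)         # a simple test function

X  = rng.random((M, n))              # samples of X = (X_1, ..., X_n)
Xp = rng.random((M, n))              # samples of the independent copy X'

rhs = 0.0
for i in range(n):                   # coordinate i (0-indexed here)
    Xi = X.copy(); Xi[:, i] = Xp[:, i]            # X^{(i)}
    Xl = X.copy(); Xl[:, :i] = Xp[:, :i]          # X^{[i-1]}
    Xr = X.copy(); Xr[:, :i + 1] = Xp[:, :i + 1]  # X^{[i]}
    rhs += 0.5 * np.mean((f(X) - f(Xi)) * (f(Xl) - f(Xr)))

print("empirical Var(f(X)):", np.var(f(X)))
print("Chatterjee's formula:", rhs)
\end{verbatim}
Both printed numbers approximate $\mathrm{Var}(f(X))$ and should agree up to Monte Carlo error.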
In general, for $A \subseteq [n]$ let $X^A$ denote the random vector, obtained from $X$ by replacing the components indexed by $A$ by corresponding components of $X^{\prime}$.
In the variance formula above, the order of the variables does not matter and the formula remains valid after permuting the indices $1,\ldots,n$ arbitrarily. In particular, one may take the variables in random order. Thus, if $\sigma=(\sigma(1),\ldots,\sigma(n))$ is a random permutation sampled uniformly from the symmetric group $S_n$ and $\sigma([i])$ denotes $\{\sigma(1), \ldots, \sigma(i)\}$,
then \begin{equation} \label{eq:var} \mathrm{Var}(f(X)) = \frac{1}{2} \sum_{i=1}^n \mathbb{E} \left[\left(f(X)-f(X^{{(\sigma(i))}})\right) \left(f(X^{\sigma([i-1])})-f(X^{\sigma([i])})\right)\right]~. \end{equation} Note that on the right-hand side of \eqref{eq:var} the expectation is taken with respect to both $X,X'$, and the random permutation $\sigma$.
One would intuitively expect that the terms on the right-hand side of \eqref{eq:var} decrease with $i$, as the differences $f(X)-f(X^{ (\sigma(i))} )$ and $f(X^{\sigma([i-1])})-f(X^{\sigma([i])})$ become less correlated as more randomly chosen components get resampled. This is indeed the case and this fact is one of our main tools in proving noise sensitivity. We believe that the following lemma can be useful in diverse situations. The proof is given in Section \ref{sec:prooflemmon} below.
\begin{lemma} \label{lem:monotonicity} Consider the setup of Lemma \ref{lem:chatterjee} and the notation above. For $i\in [n]$, denote \[
B_i= \mathbb{E} \left[\left(f(X)-f(X^{{(\sigma(i))}})\right) \left(f(X^{\sigma([i-1])})-f(X^{\sigma([i])})\right)\right]~, \] where the expectation is taken with respect to components of vectors and random permutations. Then $B_i \ge B_{i+1}$ for all $i=1,\ldots,n-1$ and $B_n \ge 0$. In particular, for any $k\in [n]$, \[
B_k \le \frac{2\mathrm{Var}(f(X))}{k}. \] \end{lemma}
We also introduce a modification of Lemma \ref{lem:monotonicity} that will be more convenient for our purposes. To do so, we introduce the following notation. Let $j$ have uniform distribution on $[n]$. Let $X^{(j) \circ \sigma([i-1])}$ denote the vector obtained from $X^{\sigma([i-1])}$ by replacing its $j$-th component by an independent copy of the random variable $X_j$, denoted by $X_j^{\prime\prime}$. Observe that $j$ may belong to $\sigma([i - 1])$ and in this case $X_j^{\prime\prime}$ is independent of $X_j^{\prime}$ appearing in $X^{\sigma([i-1])}$. With this notation in mind we may prove the following version of Lemma \ref{lem:monotonicity}.
\begin{lemma} \label{lem:monotonicity_second} Using the notation of Lemma \ref{lem:monotonicity}, assuming that $j$ is chosen uniformly at random from the set $[n]$ and independently of other random variables involved, we have for any $k\in [n]$, \[
B_k^{\prime} \le \frac{2\mathrm{Var}(f(X))}{k}\left(\frac{n + 1}{n}\right)~, \] where for any $i \in [n]$, \[ B_i^{\prime} = \mathbb{E} \left[\left(f(X)-f(X^{(j)})\right) \left(f(X^{\sigma([i-1])})-f(X^{(j) \circ \sigma([i-1])})\right)\right]~. \] \end{lemma}
\section{Random matrix results} \label{sec:rm}
In the proof of Theorem \ref{thm:main} we apply Lemma \ref{lem:monotonicity_second} with $f$ being the top eigenvalue of a Wigner matrix. The usefulness of this bound crucially hinges on the fact that the variance of the top eigenvalue is small, that is, in a sense, the top eigenvalue is ``superconcentrated''. This fact is quantified in this section.
Our first lemma on the variance of $\lambda$ is obtained as a combination of a result of Ledoux and Rider \cite{MR2678393} on Gaussian ensembles and the universality of fluctuations for Wigner matrices as stated in Erd\H{o}s, Yau and Yin \cite{MR2871147}.
\begin{lemma} \label{lem:variance} Assume that $X$ is a Wigner matrix as in Theorem \ref{thm:main}. Let $\lambda$ denote the largest eigenvalue of $X$. Then, \[
\mathrm{Var}(\lambda) \le (c +o(1)) N^{-1/3}~, \] where $c>0$ is an absolute constant.
\end{lemma}
{ \begin{remark} The result of Lemma \ref{lem:variance} improves on the variance bound \[ \mathrm{Var}(\lambda) \lesssim (\log N)^{C\log \log N}N^{-1/3}, \] which follows from \cite[Theorem 2.2]{MR2871147}. \end{remark} }
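As an informal numerical illustration (ours), the order $N^{-1/3}$ of Lemma \ref{lem:variance} is already visible at moderate sizes; the sketch below estimates $N^{1/3}\,\mathrm{Var}(\lambda)$ by Monte Carlo for Gaussian Wigner matrices, and one expects the printed values to stay of order one as $N$ grows.
\begin{verbatim}
import numpy as np

def top_eigenvalue(N, rng):
    # Gaussian Wigner matrix with unit-variance off-diagonal entries.
    A = rng.standard_normal((N, N))
    X = np.triu(A) + np.triu(A, 1).T
    return np.linalg.eigvalsh(X)[-1]

rng = np.random.default_rng(2)
for N in [100, 200, 400]:
    lam = np.array([top_eigenvalue(N, rng) for _ in range(300)])
    # N^{1/3} * Var(lambda) is expected to remain bounded in N.
    print(N, N ** (1 / 3) * lam.var())
\end{verbatim}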
We also need the following delocalization result of the top eigenvector of a Wigner matrix which can be found in Tao and Vu \cite[Proposition 1.12]{MR2669449}.
\begin{lemma} \label{delocalization}{\em \cite{MR2669449}.} Assume that $X$ is a Wigner matrix as in Theorem \ref{thm:main}. For any real $c_0>0$, there exists a constant $C>0$, such that,
with probability at least $1-C N^{-c_0}$, any eigenvector $w$ of $X$ with $\| w \|_2 =1 $ satisfies \[
\|w\|_{\infty} \le \frac{(\log N)^C}{\sqrt{N}}~. \]
\end{lemma}
Our final lemma is a perturbation inequality in $\ell^\infty$-norm of the top eigenvector of a Wigner matrix when a single entry is re-sampled. The proof uses precise estimates on the eigenvalue spacings in Wigner matrices proved in Tao and Vu \cite{MR2669449} and Erd\H{o}s, Yau, and Yin \cite{MR2871147}.
\begin{lemma} \label{fliponeelem}
Let $X$ be a Wigner matrix as in Theorem \ref{thm:main} and let $X'$ be an independent copy of $X$. For any $(i,j)$ with $1\le i,j \le N$, denote by $X^{(ij)}$ the symmetric matrix obtained from $X$ by replacing the entry $X_{ij}$ by $X'_{ij}$ and $X_{ji}$ by $X'_{ji}$. For any $0 < \alpha < 1/10$, there exists $\kappa >0$ such that, for all $N$ large enough, with probability at least $1 - N^{-\kappa}$, \[
\max_{1 \leq i,j \leq N} \inf_{s \in \{-1,1\}} \|s v - u^{(ij)} \|_\infty \le N^{ - \frac 1 2 - \alpha}~, \] where $v$ and $u^{(ij)}$ are any unit eigenvectors corresponding to the largest eigenvalues of $X$ and {$X^{(ij)}$}. \end{lemma}
\section{Proof of Theorem \ref{thm:main}} \label{sec:thm1}
Now we are ready for the proof of the main results of the paper.
We start by fixing some notation. Let $\lambda$ denote the largest eigenvalue of the Wigner matrix $X$ of Theorem \ref{thm:main} and let $v\in S^{N-1}$ be a corresponding normalized eigenvector. Fix $k\in \left[\binom{N}{2} + N\right]$, to be specified later, and let $X^{[k]}$ be the random symmetric matrix obtained by resampling $k$ random entries on or above the diagonal, as defined in the introduction. We denote by $S_k \subset \left[\binom{N}{2} + N\right]$ the set of random positions of the $k$ resampled entries. Let $\lambda^{[k]}$ denote the top eigenvalue of $X^{[k]}$ and $v^{[k]}$ a corresponding normalized eigenvector.
For $1 \leq i \leq j \leq N$, we denote by $Y_{(ij)}$ the symmetric matrix obtained from $X$ by replacing the entry $X_{ij}$ by $X^{\prime\prime}_{ij}$, where $X^{\prime\prime}$ is an independent copy of $X$. We obtain $Y^{[k]}_{(ij)}$ from $X^{[k]}$ by the same operation. We denote by $(\mu_{(ij)},u_{(ij)})$ and $(\mu^{[k]}_{(ij)},u^{[k]}_{(ij)})$ the top eigenvalue/eigenvector pairs of $Y_{(ij)}$ and $Y^{[k]}_{(ij)}$, respectively. Let $(s, t)$ be a pair of indices with $1\leq s \leq t\leq N$ chosen uniformly at random among all $\binom{N}{2} + N$ such pairs. For ease of notation, we set $Y = Y_{(st)}$, $\mu = \mu_{(st)} $ and $u = u_{(st)}$. We define similarly $Y^{[k]} = Y^{[k]}_{(st)}$, $\mu^{[k]} = \mu^{[k]}_{(st)}$ and $u^{[k]} = u^{[k]}_{(st)}$.
By applying Lemma \ref{lem:monotonicity_second} to the function of $n=\binom{N}{2} + N$ independent random variables $f\left((X_{i,j})_{1\le i\le j\le N}\right)=\lambda$, we obtain that, for any $k\in \left[\binom{N}{2} + N\right]$, \begin{equation} \label{eq:firststep} \frac{2\mathrm{Var}(\lambda)}{k} \cdot \frac{\binom{N}{2} + N + 1}{\binom{N}{2} + N } \ge \mathbb{E}\left[
(\lambda - \mu)\left(\lambda^{[k]}-\mu^{[k]}\right) \right]~. \end{equation} {In what follows, we show that the right-hand side of \eqref{eq:firststep} satisfies \[ \mathbb{E}\left[(\lambda - \mu)\left(\lambda^{[k]}-\mu^{[k]}\right) \right] \simeq \frac{1}{N^2}\mathbb{E}\left[\langle v, v^{[k]}\rangle^2\right]. \] This relation, combined with Lemma \ref{lem:variance} and \eqref{eq:firststep}, implies \[ \mathbb{E}\left[\langle v, v^{[k]}\rangle^2\right] \lesssim \frac{N^{\frac{5}{3}}}{k}~, \] which is sufficient for Theorem \ref{thm:main}. We proceed with the formal argument. } Using the notation of the previous section we have \[ \mathbb{E}\left[(\lambda - \mu)\left(\lambda^{[k]}-\mu^{[k]}\right) \right] = \mathbb{E}\left[(\inr{v,Xv} - \inr{u,Yu})\left(\inr{v^{[k]},X^{[k]}v^{[k]}} - \inr{u^{[k]},Y^{[k]}u^{[k]}}\right)\right]~. \] { Using the fact that $v$ maximizes $\inr{v,Xv}$ and $u$ maximizes $\inr{u,Yu}$ we have \[ \inr{u,(X - Y)u} \le \inr{v,Xv} - \inr{u,Yu} \le \inr{v,(X - Y)v}. \] Observe that the elements of $X - Y$ are all zeros except at most two that correspond to resampled values. If the element $X_{t, s}$ of $X$ was resampled to get $Y$, we have, for any vector $x$, \[ \inr{x,(X - Y)x} = U_{t, s} x_{t}x_{s} \] with $U_{t,s} = (X_{t,s} - X''_{t,s}) ( 1 + \mathbbm{1}(t \ne s))$. Similarly, if we set $U'_{t,s} = (X'_{t,s} - X''_{t,s}) ( 1 + \mathbbm{1}(t \ne s))$, we have $\inr{x,(X^{[k]} - Y^{[k]})x} = U'_{t, s} x_{t}x_{s}$. Therefore, it is straightforward to see that \begin{align*} &(\inr{v,Xv} - \inr{u,Yu})\left(\inr{v^{[k]},X^{[k]}v^{[k]}} - \inr{u^{[k]},Y^{[k]}u^{[k]}}\right)\ge I , \end{align*} where we have set, $$ I = V_{t,s} \min\left\{v_t v_s v^{[k]}_t v^{[k]}_s, u_t u_s v^{[k]}_t v^{[k]}_s, v_t v_s u^{[k]}_t u^{[k]}_s, u_t u_s u^{[k]}_t u^{[k]}_s\right\}, $$ and for $1 \leq i \leq j \leq N$, $$ V_{i,j} = U_{i,j} U'_{i,j} = ( 1 + \mathbbm{1}(i \ne j))^2 (X_{i,j} - X''_{i,j}) (X'_{i,j} - X''_{i,j}). $$
In order to have some extra independence, we introduce yet another independent copy of our random variables. For $1 \leq i \leq j \leq N$, let $Z_{(ij)}$ be the symmetric matrix obtained from $X$ by replacing the entry $X_{ij}$ by $X'''_{ij}$ where $X'''$ is an independent copy of $X$, independent of $X'$ and $X''$. We obtain $Z^{[k]}_{(ij)}$ from $X^{[k]}$ by the same operation. As above, we denote by $w_{(ij)}$, and $w^{[k]}_{(ij)}$ the top unit eigenvector of $Z_{(ij)}$ and $Z^{[k]}_{(ij)}$, respectively. For ease of notation, with $(s,t)$ as above, we define $w = w_{(s,t)}$ and $w^{[k]} = w^{[k]}_{(st)}$. The key observation is that $V_{i,j}$ is independent of $Z_{(ij)}$ and $Z^{[k]}_{(ij)}$.}
Fix $0 < \alpha< 1/10$ and let $C$ be as in Lemma \ref{delocalization} for $c_0 = 10$. We define {$\mathcal{E}= \mathcal{E}_1 \cap \mathcal{E}_2$} to be the {intersection} of the following two events: \begin{itemize}
{\item $\mathcal{E}_1$: for all $1 \leq i \leq j \leq N$: $\max(\|v - w_{(ij)} \|_{\infty} , \|u_{(ij)} - w_{(ij)} \|_{\infty} , \|v^{[k]} - w_{(ij)}^{[k]}\|_{\infty} , \|u_{(ij)}^{[k]} - w_{(ij)}^{[k]}\|_{\infty} ) \le N^{-\frac 1 2 - \alpha}$.
\item $\mathcal{E}_2$: $\|x\|_{\infty} \le \frac{(\log N)^C}{\sqrt{N}}$ for all $x \in \left\{v, u_{(ij)}, w_{(ij)}, v^{[k]}, u_{(ij)}^{[k]}, w_{(ij)}^{[k]} : 1 \leq i, j \leq N\right\}$.} \end{itemize} By Lemmas \ref{delocalization}, \ref{fliponeelem}, and the union bound, we have, for all $N$ large enough, $\mathbb{P}(\mathcal{E}_2^c) \leq N^{-6}$ and for some $\kappa >0$, $\mathbb{P}(\mathcal{E}^c) \le N^{-\kappa}$ (provided that we choose properly the $\pm$-phase for the eigenvectors $u$, $w$, $u^{[k]}$ and $w^{[k]}$). Observe that when $\mathcal{E}$ holds, for all \begin{equation}\label{eq:xyinf} x \in \{v_t v_s v^{[k]}_t v^{[k]}_s, u_t u_s v^{[k]}_t v^{[k]}_s, v_t v_s u^{[k]}_t u^{[k]}_s, u_t u_s u^{[k]}_t u^{[k]}_s \}~, \end{equation} we have, for all $N$ large enough,
$$|x - w_t w_s w^{[k]}_t w^{[k]}_s| \le \frac{4(\log N)^{3C}}{N^{2 + \alpha}}~.$$ We show this, for brevity, only for $v_t v_s v^{[k]}_t v^{[k]}_s$. Denoting $\delta_t = w_t-v_t$ and $\delta_t^{[k]}=w_t^{[k]}-v_t^{[k]}$, we write \[ v_t v_s v^{[k]}_t v^{[k]}_s = (w_t -\delta_t)(w_s - \delta_s)(w^{[k]}_t - \delta_t^{[k]})(w^{[k]}_s - \delta_s^{[k]})~. \] Then we expand the product and use that, on $\mathcal{E}$, \[
\max\{|\delta_t|, |\delta_s|, |\delta_t^{[k]}|, |\delta_s^{[k]}|\} \le N^{-\frac{1}{2} - \alpha}\quad \text{and}\quad \max\{|w_t|, |w_s|, |w^{[k]}_t|, |w^{[k]}_s|\} \le (\log N)^{C} / \sqrt{N}~. \] { If $\mathcal{E}$ holds, we thus have \begin{align*}
& I \ge V_{t,s} w_t w_s w^{[k]}_t w^{[k]}_s - \frac{4(\log N)^{3C}}{N^{2 + \alpha}} |V_{t,s}| .
\end{align*}
On the other hand, if $\mathcal{E}_2 \backslash \mathcal{E}$ holds, we get $$
I \geq - \frac{(\log N)^{4C}}{N^2}\mathbbm{1} (\mathcal{E}^c) |V_{t,s}|. $$
Finally, if $\mathcal{E}_2$ does not hold, using that all the vectors are of unit norm (and therefore, $\max\{|v_t|, |v_s|, |v^{[k]}_t|, |v^{[k]}_s|\} \le 1$), we have \begin{align*}
& I \geq - \mathbbm{1} (\mathcal{E}_2^c) |V_{t,s}|~. \end{align*} The same bounds hold for $ V_{t,s} w_t w_s w^{[k]}_t w^{[k]}_s $ on $\mathcal{E}_2 \backslash \mathcal{E}$ and $\mathcal{E}_2^c$. Note also that $\mathbb{E} V_{t,s}^2 \leq c^2_1$ for some constant $c_1 \geq 1$ depending on $\delta$. Combining the last three bounds and applying the Cauchy-Schwarz inequality, we arrive at $$ \mathbb{E} [ I ] \geq \mathbb{E} [ V_{t,s} w_t w_s w^{[k]}_t w^{[k]}_s ] - 4c_1\frac{(\log N)^{3C}}{N^{2 + \alpha}} - 2c_1 \frac{(\log N)^{4C}}{N^{2}} \sqrt{\mathbb{P}(\mathcal{E}^c)} - 2c_1 \sqrt{\mathbb{P}(\mathcal{E}_2^c)}. $$ Recalling \eqref{eq:firststep}, we find \begin{align*} \mathbb{E} [ V_{t,s} w_t w_s w^{[k]}_t w^{[k]}_s ] &\le \frac{4\mathrm{Var}(\lambda)}{k}+ 4c_1\frac{(\log N)^{3C}}{N^{2 + \alpha}} + 2c_1 \frac{(\log N)^{4C}}{N^{2}} \sqrt{\mathbb{P}(\mathcal{E}^c)} + 2c_1 \sqrt{\mathbb{P}(\mathcal{E}_2^c)}~. \end{align*}
Integrating over the random choice of $(s,t)$, we have \begin{equation} \label{decomp} \mathbb{E} [ V_{t,s} w_t w_s w^{[k]}_t w^{[k]}_s ]= \frac{1}{\binom{N}{2} + N}\mathbb{E} \left( \sum\limits_{1 \le i \le j \le N}V_{i,j} (w_{(ij)})_i (w_{(ij)})_j (w^{[k]}_{(ij)})_i (w^{[k]}_{(ij)})_j \right). \end{equation} Now, using \eqref{decomp} and using $\frac{\binom{N}{2} + N + 1}{\binom{N}{2} + N } \le 2$, we get \begin{equation} \label{eq:Z1} \mathbb{E} \left(\sum\limits_{1 \leq i , j \leq N} \tilde V_{i,j}(w_{(ij)})_i (w_{(ij)})_j (w^{[k]}_{(ij)})_i (w^{[k]}_{(ij)})_j \right) \leq 4 N^2 \frac{\mathrm{Var}(\lambda)}{k} + \varepsilon_N , \end{equation} where $\tilde V_{i,j} = V_{i,j} / 2$ if $i \ne j$, $\tilde V_{i,i} = V_{i,i}$ and $$ \varepsilon_N = 4c_1\frac{(\log N)^{3C}}{N^{ \alpha}} + 2c_1 (\log N)^{4C} \sqrt{\mathbb{P}(\mathcal{E}^c) } + 2c_1 N^2\sqrt{\mathbb{P}(\mathcal{E}_2^c)}. $$ Note that for $i \ne j$, $\mathbb{E} \tilde V_{i,j} = 2$ and $\mathbb{E} \tilde V_{i,i} = \sigma_0^2$. We have \begin{equation*} \mathbb{E}\left(\sum\limits_{i=1}^N (w_{(ij)})_i (w_{(ij)})_j (w^{[k]}_{(ij)})_i (w^{[k]}_{(ij)})_j \right) \le \frac{ (\log N) ^{4C} }{N} + N \mathbb{P}(\mathcal{E}_2^c). \end{equation*} Hence, using that the variable $V_{i,j}$ is independent of the vectors $w_{(ij)},w_{(ij)}^{[k]}$, we deduce that \begin{eqnarray} 2 \mathbb{E} \left(\sum\limits_{1 \leq i , j \leq N} (w_{(ij)})_i (w_{(ij)})_j (w^{[k]}_{(ij)})_i (w^{[k]}_{(ij)})_j \right) &\leq & 4 N^2 \frac{\mathrm{Var}(\lambda)}{k} + \varepsilon'_N ,\label{eq:Z2} \end{eqnarray} where $$
\varepsilon'_N = \varepsilon_N + | 2 - \sigma_0^2|\frac{ (\log N) ^{4C} }{N} + N | 2 - \sigma_0^2| \mathbb{P}(\mathcal{E}_2^c). $$ We now argue that in \eqref{eq:Z2}, we may replace the vectors $w_{(ij)}$ and $w_{(ij)}^{[k]}$ by $v$ and $v^{[k]}$, respectively. We repeat the above argument. Recall the event $\mathcal{E} = \mathcal{E}_1 \cap \mathcal{E}_2$ defined above. As already pointed out, on the event $\mathcal{E}$, we have $$
|v_i v_j v_i^{[k]} v_j ^{[k]} - (w_{(ij)})_i (w_{(ij)})_j (w^{[k]}_{(ij)})_i (w^{[k]}_{(ij)})_j| \le \frac{4(\log N)^{3C}}{N^{2 + \alpha}}~. $$ If $\mathcal{E}_2$ holds, we have $$
|v_i v_j v_i^{[k]} v_j ^{[k]} - (w_{(ij)})_i (w_{(ij)})_j (w^{[k]}_{(ij)})_i (w^{[k]}_{(ij)})_j| \le \frac{2(\log N)^{4C}}{N^{2}}~. $$ Finally, there is the deterministic bound $$
|v_i v_j v_i^{[k]} v_j ^{[k]} - (w_{(ij)})_i (w_{(ij)})_j (w^{[k]}_{(ij)})_i (w^{[k]}_{(ij)})_j| \le 2. $$ Combining the last three bounds we obtain that \begin{align*}
&\mathbb{E} \sum\limits_{1 \leq i , j \leq N} | (w_{(ij)})_i (w_{(ij)})_j (w^{[k]}_{(ij)})_i (w^{[k]}_{(ij)})_j - v_i v_j v^{[k]}_i v^{[k]}_j| \\ &\quad\leq \frac{4(\log N)^{3C}}{N^{\alpha}} + 2 (\log N)^{4C} \mathbb{P}(\mathcal{E}^c) + 2N^2 \mathbb{P}(\mathcal{E}_2^c). \end{align*} The right-hand side is upper bounded by $2 \varepsilon_N$. We thus have proved that \begin{eqnarray} 2 \mathbb{E} \left(\sum\limits_{1 \leq i , j \leq N} v_i v_j v^{[k]}_i v^{[k]}_j \right) &\leq & 4 N^2 \frac{\mathrm{Var}(\lambda)}{k} + \varepsilon''_N ,\label{eq:Z3} \end{eqnarray} with $\varepsilon''_N = \varepsilon'_N +2 \varepsilon_N$. As already pointed out, by Lemmas \ref{delocalization}, \ref{fliponeelem}, and the union bound, we have, for all $N$ large enough, $\mathbb{P}(\mathcal{E}_2^c) \leq N^{-6}$ and $\mathbb{P}(\mathcal{E}^c) \le N^{-\kappa}$. It follows that $\varepsilon''_N\to 0$ as $N\to\infty$.
Now, combining Jensen's inequality and \eqref{eq:Z3}, \begin{align*}
\left(\mathbb{E}\left|\inr{v, v^{[k]}}\right|\right)^2 &\le \mathbb{E}\left(\sum\limits_{i = 1}^Nv_iv_i^{[k]}\right)^2 \le \mathbb{E}\left(\sum\limits_{1 \le i , j \le N}v_i v_j v^{[k]}_i v^{[k]}_j\right) \le 2N^2 \frac{\mathrm{Var}(\lambda)}{k} + \frac{\varepsilon''_N}{2}. \end{align*} From Lemma \ref{lem:variance}, the claim follows.}
\subsection{Proof of Lemma \ref{lem:monotonicity} and Lemma \ref{lem:monotonicity_second}} \label{sec:prooflemmon}
We start with the following technical lemma. \begin{lemma} \label{varianceformula} Let $f: \mathcal{X}^n \to \mathbb{R}$ be a measurable function and let $\sigma \in S_n$ be any fixed permutation. Fix $i \in [n - 1]$ and $j \in [n]$ such that $j \notin \sigma([i])$. Let $X_1,\ldots,X_n$ be independent random variables taking values in $\mathcal{X}$. Then \begin{align*} A_i &= \mathbb{E} \left[\left(f(X)-f(X^{(\sigma(i))})\right) \left(f(X^{\sigma([i - 1])})-f(X^{\sigma([i])})\right)\right] \\ &\ge \mathbb{E} \left[\left(f(X)-f(X^{(\sigma(i))})\right) \left(f(X^{\sigma([i - 1])\cup j})-f(X^{\sigma([i])\cup j})\right)\right] \\ &\ge 0~. \end{align*} \end{lemma} \begin{proof} Without loss of generality, we may consider one particular permutation $\sigma$, defined as follows: set $\sigma(k) = k$ for $k \notin \{1, i\}$, $\sigma(i) = 1$, $\sigma(1) = i$, and we may also assume that $j = i + 1$. The proof is identical for any other $\sigma$ and $j$. In our case, \[ A_i = \mathbb{E} \left[\left(f(X)-f(X^{(1)})\right) \left(f(X^{[i]\setminus\{1\}})-f(X^{[i]})\right)\right]~. \] Moreover, we have \[ \mathbb{E} \left[\left(f(X)-f(X^{(\sigma(i))})\right) \left(f(X^{\sigma([i - 1])\cup j})-f(X^{\sigma([i])\cup j})\right)\right] = A _{i + 1}. \] We introduce a simplifying notation. Denote $B = (X_2, \ldots,X_i)$, $B' = (X'_2, \ldots, X'_i)$ and $C = (X_{i + 2}, \ldots, X_n)$. Therefore, we may rewrite \[ A_i = \mathbb{E} \left[\left(f(X_1, B, X_{i + 1}, C)-f(X_1', B, X_{i + 1}, C)\right) \left(f(X_1, B', X_{i + 1}, C)-f(X_1', B', X_{i + 1}, C)\right)\right] \] and \[ A_{i + 1} = \mathbb{E} \left[\left(f(X_1, B, X_{i + 1}, C)-f(X_1', B, X_{i + 1}, C)\right) \left(f(X_1, B', X_{i + 1}', C)-f(X_1', B', X_{i + 1}', C)\right)\right]~. \]
Denote $h(X_1, X_1', X_{i + 1}, C) = \mathbb{E}[\left(f(X_1, B, X_{i + 1}, C)-f(X_1', B, X_{i + 1}, C)\right)\big| X_1, X_1', X_{i + 1}, C]$. Using the independence of $B, B'$ and their independence from the remaining random variables, we have \[ A_i = \mathbb{E} h(X_1, X_1', X_{i + 1}, C)^2~. \] At the same time, using the same notation for $h$, we have, by the Cauchy-Schwarz inequality and the fact that $X_{i + 1}$ and $X_{i + 1}'$ have the same distribution, \begin{align*} A_{i + 1} &= \mathbb{E} h(X_1, X_1', X_{i + 1}, C)h(X_1, X_1', X'_{i + 1},C) \\
&= \mathbb{E}[\mathbb{E}[ h(X_1, X_1', X_{i + 1}, C)h(X_1, X_1', X'_{i + 1},C)|X_1, X_1', C ]] \\ &\le \mathbb{E} h(X_1, X_1', X_{i + 1}, C)^2 \\ &= A_{i}~. \end{align*}
Now to prove that $A_{i} \ge 0$, it is sufficient to show that $A_n \ge 0$. Denoting $g(X_1) = \mathbb{E} [f(X)|\ X_1]$, we have \begin{align*} A_{n} &= \mathbb{E} \left[\left(f(X)-f(X^{(1)})\right) \left(f(X^{[n]\setminus\{1\}})-f(X^{[n]})\right)\right] \\ &= \mathbb{E} (f(X)f(X^{[n]\setminus\{1\}}) - f(X)f(X^{[n]}) - f(X^{(1)})f(X^{[n]\setminus\{1\}}) + f(X^{(1)})f(X^{[n]})) \\ &= 2\mathbb{E} f(X)f(X^{[n]\setminus\{1\}}) - 2(\mathbb{E} f(X))^2 \\
&= 2\mathbb{E}[\mathbb{E} [f(X)f(X^{[n]\setminus\{1\}})| X_1]] - 2(\mathbb{E} f(X))^2 \\ &= 2\mathbb{E}[g(X_1)^2] - 2(\mathbb{E} f(X))^2 \\ &\ge 0~, \end{align*} where we used Jensen's inequality and that $\mathbb{E} g(X_1) = \mathbb{E} f(X)$. \end{proof}
We proceed with the proof of Lemma \ref{lem:monotonicity}.
\begin{proof} In this proof, by writing $i + 1$ we mean $i + 1\ (\text{mod}\ n)$. For each permutation $\sigma \in S_n$ and fixed $i \in [n]$ we construct a corresponding permutation $\sigma'$ by defining $\sigma'(i) = \sigma(i + 1),\ \sigma'(i + 1) = \sigma(i)$ and $\sigma'(k) = \sigma(k)$ for $k \notin \{i, i + 1\}$.
It is straightforward to see that for any fixed $i$ there is a one-to-one correspondence between $\sigma \in S_n$ and $\sigma'$. By observing that $\sigma'([i]) = \sigma([i - 1])\cup \sigma(i + 1)$ and $\sigma'([i +1]) = \sigma([i + 1])$ we have, conditionally on $\sigma$, \begin{align*} &\mathbb{E} \left[\left(f(X)-f(X^{(\sigma(i))})\right) \left(f(X^{\sigma([i - 1])})-f(X^{\sigma([i])})\right)\right] \\ &=\mathbb{E} \left[\left(f(X)-f(X^{(\sigma'(i + 1))})\right) \left(f(X^{\sigma([i - 1])})-f(X^{\sigma([i])})\right)\right] \\ &\ge \mathbb{E} \left[\left(f(X)-f(X^{(\sigma'(i + 1))})\right) \left(f(X^{\sigma'([i])})-f(X^{\sigma'([i+ 1])})\right)\right]~, \end{align*} where in the last step we used Lemma \ref{varianceformula}. Using the one to one correspondence between all $\sigma$ and $\sigma'$, we have \begin{align*} B_i &= \mathbb{E}_{\sigma}\mathbb{E} \left[\left(f(X)-f(X^{(\sigma(i))})\right) \left(f(X^{\sigma([i - 1])})-f(X^{\sigma([i])})\right)\right] \\ &= \frac{1}{n!}\sum\limits_{\sigma}\mathbb{E} \left[\left(f(X)-f(X^{(\sigma(i))})\right) \left(f(X^{\sigma([i - 1])})-f(X^{\sigma([i])})\right)\right] \\ &\ge\frac{1}{n!}\sum\limits_{\sigma'}\mathbb{E} \left[\left(f(X)-f(X^{(\sigma'(i + 1))})\right) \left(f(X^{\sigma'([i])})-f(X^{\sigma'([i+ 1])})\right)\right] \\ &=\mathbb{E}_{\sigma}\mathbb{E} \left[\left(f(X)-f(X^{(\sigma(i + 1))})\right) \left(f(X^{\sigma([i])})-f(X^{\sigma([i+ 1])})\right)\right] \\ &= B_{i + 1}. \end{align*} The proof that $B_n \ge 0$ follows from Lemma \ref{varianceformula} as well. \end{proof}
Finally, we prove Lemma \ref{lem:monotonicity_second}.
\begin{proof} To prove this lemma, we show an upper bound for $B_i^{\prime}$. We have \begin{align*} B_i^{\prime} &= \mathbb{E} \left[\left(f(X)-f(X^{(j)})\right) \left(f(X^{\sigma([i-1])})-f(X^{(j) \circ \sigma([i-1])})\right)\right] \\
&=\mathbb{E}_{\sigma}\left(\mathbb{E} \left[\left(f(X)-f(X^{(j)})\right) \left(f(X^{\sigma([i-1])})-f(X^{(j) \circ \sigma([i-1])})\right)\big| j \in { \sigma[i - 1]}\right]\mathbb{P}(j \in { \sigma[i - 1]})\right) \\
&\quad+\mathbb{E}_{\sigma}\left(\mathbb{E} \left[\left(f(X)-f(X^{(j)})\right) \left(f(X^{\sigma([i-1])})-f(X^{(j) \circ \sigma([i-1])})\right)\big| j \notin { \sigma[i - 1]}\right]\mathbb{P}(j \notin { \sigma[i - 1]})\right)~. \end{align*} Observe that $\mathbb{P}(j \in{ \sigma[i - 1]}) = \frac{i - 1}{n}$ and the second summand is equal to $B_i\frac{n - i + 1}{n}$. We proceed with the first summand. For $i \ge 1$, we have \begin{align*}
&\mathbb{E}_{\sigma}\mathbb{E} \left[\left(f(X)-f(X^{(j)})\right) \left(f(X^{\sigma([i-1])})-f(X^{(j) \circ \sigma([i-1])})\right)\big| j \in { \sigma[i - 1]}\right] \\
&= \mathbb{E}_{\sigma}\mathbb{E} \left[\left(f(X)-f(X^{(j)})\right) \left(f(X^{\sigma([i-1])\setminus\{j\}})-f(X^{(j) \circ \sigma([i-1])})\right)\big| j \in { \sigma[i - 1]}\right] \\
&\quad+\mathbb{E}_{\sigma}\mathbb{E} \left[\left(f(X)-f(X^{(j)})\right) \left(f(X^{\sigma([i-1])})-f(X^{\sigma([i-1])\setminus\{j\}})\right)\big| j \in { \sigma[i - 1]}\right] \\
& = B_{i - 1} - \mathbb{E}_{\sigma}\mathbb{E} \left[\left(f(X)-f(X^{(j)})\right) \left(f(X^{\sigma([i-1])\setminus\{j\}}) - f(X^{\sigma([i-1])})\right)\big| j \in { \sigma[i - 1]}\right]~. \end{align*} { Finally, we prove that \begin{equation} \label{diffeq}
\mathbb{E}_{\sigma}\mathbb{E} \left[\left(f(X)-f(X^{(j)})\right) \left(f(X^{\sigma([i-1])\setminus\{j\}}) - f(X^{\sigma([i-1])})\right)\big| j \in \sigma[i - 1]\right] \ge 0~. \end{equation} Without loss of generality, we consider a particular choice of $\sigma$ and $j$ such that $\sigma(k) = k$, for $k \in [n]$ and $j = 1$. Therefore, \eqref{diffeq} will follow from \[ \mathbb{E} f(X)(f(X^{[i-1]\setminus\{1\}}) - f(X^{[i-1]})) \ge \mathbb{E} f(X^{(1)})(f(X^{[i-1]\setminus\{1\}}) - f(X^{[i-1]}))~. \] Since $X^{(1)} = (X_1^{\prime\prime}, X_2, \ldots, X_n)$, we have $\mathbb{E} f(X)f(X^{[i-1]}) = \mathbb{E} f(X^{(1)})f(X^{[i-1]})$. This implies that \eqref{diffeq} is valid whenever \[ \mathbb{E} f(X)f(X^{[i-1]\setminus\{1\}}) \ge \mathbb{E} f(X^{(1)})f(X^{[i-1]\setminus\{1\}})~. \] As in the proof of Lemma \ref{varianceformula}, this relation holds due to Jensen's inequality. These lines together imply that \[ B_i^{\prime} \le \frac{i - 1}{n}B_{i - 1} + \frac{n - i + 1}{n}B_i~, \] which, using Lemma \ref{lem:monotonicity}, proves the claim.} \end{proof}
\subsection{Proof of Lemma \ref{lem:variance}}
We start with a special case. Let us say that a Wigner matrix as in Theorem \ref{thm:main} is {\em standard} if for all $i$, $ \mathbb{E} X_{ii}^2= 2$. In this case, the variance of the entries of $X$ is equal to the variance of the entries of a random matrix $Y$ sampled from the Gaussian Orthogonal Ensemble (GOE). If $\mu$ is the largest eigenvalue of $Y$, it follows from \cite[Corollary 3]{MR2678393} that for some absolute constant $c >0$, $$ \mathrm{Var} ( \mu ) \leq c N^{-1/3}. $$
On the other hand, it follows from \cite[Theorem 2.4]{MR2871147} (see also \cite[Theorem 1.6]{MR3034787} for a statement which can be used directly) that, $$
N^{1/3} \left| \mathrm{Var} ( \mu ) - \mathrm{Var} (\lambda) \right| = o(1). $$ We obtain the first claim of the lemma for standard Wigner matrices. To conclude the proof of the lemma for Wigner matrices, it suffices to prove that for any Wigner matrix $X$, for some $\kappa \geq 1/3$, we have for all $N$ large enough, \begin{equation} \label{eq:ll0}
\mathbb{E} |\lambda - \lambda_0 |^2 \leq N^{-\kappa}~, \end{equation} where $\lambda_0$ is the largest eigenvalue of the matrix $X_0$ obtained from $X$ by setting all diagonal entries to $0$. We will prove it for any $\kappa < 1/2$ (an improvement of the forthcoming Lemma \ref{lem:resres0} would give \eqref{eq:ll0} for any $\kappa <1$). The proof requires some care since the operator norm of $X - X_0$ may be much larger than $1$ and the rank of $X-X_0$ could be $N$.
There is an easy inequality which is half of \eqref{eq:ll0}. Let $v_0$ be a unit eigenvector of $X_0$ with eigenvalue $\lambda_0$. We have $$ \lambda \geq \langle v_0 , X v_0 \rangle = \langle v_0 , X_0 v_0 \rangle+ \langle v_0 , (X -X_0)v_0 \rangle = \lambda_0 + \sum_{i=1}^N (v_0)_i^2 X_{ii}~, $$ where $(v_0)_i$ is the $i$-th coordinate of $v_0$. We observe that $v_0$ is independent of { $X_{ii}$ for all $i$ and $\mathbb{E}
X_{ii} X_{jj} = 0$ for $i \neq j$. Denoting $(x)^2_+ = \max
(x,0)^2$, by the Cauchy-Schwarz inequality,} we deduce that $$
\mathbb{E} (\lambda_0 - \lambda)_+ ^2 \leq \mathbb{E} \sum_{i=1}^N (v_0)_i^4 \mathbb{E} X_{ii}^2 \leq \mathbb{E} \| v_0 \|^2_{\infty} \sigma_0^2~. $$
We write, $\mathbb{E} \| v_0 \|^2_{\infty} \leq (\log N)^ {2C}/ N + \mathbb{P}( \|v_0 \|_{\infty} \geq (\log N)^ {C}/ \sqrt N) $. From Lemma \ref{delocalization} applied to $c_0 = 2$, we deduce that for some constant $C>0$, $$ \mathbb{E} (\lambda_0 - \lambda)_+ ^2 \leq \frac{(\log N) ^C}{N}~. $$ It implies the easy half of \eqref{eq:ll0} for any $\kappa < 1$.
The proof of the converse inequality is more involved. For ease of notation, we introduce, for $N \geq 3$, the number \begin{equation}\label{eq:defL} L = L_N = (\log N)^{\log \log N}~. \end{equation} We say that a sequence of events $(A_N)$ holds {\em with overwhelming
probability} if for any $C>0$, there exists a constant $c>0$ such that $\mathbb{P}( A_N) \geq 1 - cN^{-C}$. We repeatedly use the fact that an intersection of polynomially many events of overwhelming probability is itself an event of overwhelming probability. We start with a small deviation lemma which can be found, for example, in \cite[Appendix B]{MR2981427}.
\begin{lemma}\label{lem:dev}
Assume that $(Z_i)_{1 \leq i \leq N}$ are independent centered complex random variables such that for some $\delta >0$, for all $i$, $\mathbb{E} \exp \left( |Z_i|^{\delta}\right) \leq 1/ \delta$. Then, for any $(x_i) \in \mathbb{C}^N$, with overwhelming probability, $$
\left| \sum_{i=1}^N x_i Z_i \right| \leq L \| x\|_2~. $$ \end{lemma}
For $z = E + {\mathbf{i}}\eta$ with $\eta >0$ and $E \in \mathbb{R}$, we introduce the resolvent matrices $$ R (z ) = ( X - z I) ^{-1} \hbox{ and } R_0 (z ) = ( X_0 - z I) ^{-1}~, $$ where $I$ denotes the identity matrix. The following lemma asserts that the resolvent can be used to estimate the largest eigenvalue of $X$ and $X_0$. \begin{lemma}\label{lem:resl} Let $X$ be a Wigner matrix as in Theorem \ref{thm:main} and let $\lambda_1 \geq \ldots \geq \lambda_N$ be its eigenvalues. For any $1 \leq k \leq N$, there exists an integer $1 \leq i \leq N$ such that for all $E$ and $\eta >0$ $$
\frac 1 2 \max( \eta, |\lambda_k - E |) ^{-2} \leq N\eta ^{-1} \Im R( E + {\mathbf{i}}\eta)_{ii}~. $$
Moreover, let $1 \leq k \leq L$. There exists $c_0 >0$ such that with overwhelming probability, we have $|\lambda_k - 2 \sqrt N| < L^{c_0} N^{-1/6}$ and for all integers $ 1 \leq i \leq N$, and all $E$ such that $|E - 2 \sqrt N| < L^{c_0} N^{-1/6}$, $$
N\eta ^{-1} \Im R( E + {\mathbf{i}}\eta)_{ii} \leq L^{c_0} \min_{ 1 \leq j \leq N} (\lambda_j -E )^{-2}~. $$ \end{lemma} \begin{proof} From the spectral theorem, we have $$ \Im R_{ii} = \sum_{p = 1}^N \frac{ \eta (v_p)_i ^2}{ (\lambda_p - E )^2 + \eta ^2}~, $$ where $(v_1, \ldots, v_N)$ is an orthonormal basis of eigenvectors of $X$ and $(v_p)_i$ is the $i$-th coordinate of $v_p$. In particular, $$
N \eta^{-1} \Im R_{ii} \geq \frac{ N ({v_k})_i ^2}{ (\lambda_k- E )^2 + \eta ^2} \geq \frac{ N ({v_k})_i ^2}{ 2\max( \eta, |\lambda_k - E |) ^{2} }~. $$ From the pigeonhole principle, for some $i$, $({v_k})_i^2 \geq 1/N$ and the first statement of the lemma follows.
Fix an integer $1 \leq k \leq L$. From \cite[Theorem 2.2]{MR2871147} and Lemma \ref{delocalization}, for some constants $c_0,C_0>0$, we have, with overwhelming probability, that the following event $\mathcal{E}$ holds: $|\lambda_k- 2 \sqrt N| \leq L^{c_0} N ^ {-1/6}$, for all integers $1 \leq p \leq N$, $$ \lambda_p \leq 2 \sqrt N - 2 C_0 p^{2/3} N^{-1/6} + L^c p^{-1/3}N^{-1/6}~, $$
and $ \| v_p \|^2_{\infty} \leq L / N$. We set $q = \lfloor C L^{3c_0} \rfloor$ for some $C$. Let $E $ be such that $|E - 2 \sqrt N| \leq L^{c_0} N ^ {-1/6}$. On the event $\mathcal{E}$, if $C$ is large enough, we have, for all $p > q$, $E - \lambda_p \geq C_0 p^{2/3} N^{-1/6}$ and $$ \sum_{p = q+1}^{N} \frac{ N (v_p)_i^2}{ (\lambda_p - E )^2 + \eta ^2} \leq \sum_{p = q+1}^{N} \frac{L}{ (\lambda_p - E )^2} \leq \frac 1 {C^2_0} \sum_{p = q+1}^{N} \frac{L N^{1/3}} {p ^{4/3}} \leq c_1 L N^{1/3} q^ {-1/3}. $$ On the other hand, on the same event $\mathcal{E}$, we have $$ \sum_{p = 1}^{q} \frac{ N (v_p)_i ^2}{ (\lambda_p - E )^2 + \eta ^2} \leq \sum_{p = 1}^{q} \frac{ N (v_p)_i ^2}{ \min_{1 \leq j\leq N} (\lambda_j - E )^2} \leq \frac{L q }{\min_{1 \leq j\leq N} (\lambda_j - E)^{2}}~. $$ It remains to adjust the value of the constant $c_0$ to conclude the proof. \end{proof}
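The spectral representation of $\Im R_{ii}$ used in the proof above can also be probed numerically: when $\eta$ is much smaller than the gap $\lambda_1-\lambda_2$, the quantity $N\eta\,\Im R(\lambda_1+{\mathbf{i}}\eta)_{ii}$ is essentially $N (v_1)_i^2$. The following Python sketch (ours, with Gaussian entries) illustrates this on a small matrix.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
N = 300
A = rng.standard_normal((N, N))
X = np.triu(A) + np.triu(A, 1).T          # Gaussian Wigner matrix

w, V = np.linalg.eigh(X)
lam, v = w[-1], V[:, -1]                  # top eigenvalue and eigenvector
eta = 0.01 * (w[-1] - w[-2])              # eta much smaller than the gap
R = np.linalg.inv(X - (lam + 1j * eta) * np.eye(N))

lhs = N * eta * R.diagonal().imag         # N * eta * Im R(lam + i eta)_{ii}
rhs = N * v ** 2                          # N * (v_1)_i^2
print(np.max(np.abs(lhs - rhs)))          # small in this regime
\end{verbatim}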
The next step in the proof of \eqref{eq:ll0} is a comparison between the resolvent of $X$ and $X_0$ for $z$ close to $2\sqrt N$. The following result is a corollary of \cite[Theorem 2.1 (ii)]{MR2871147}.
\begin{lemma}\label{loclaw}
Let $X$ be a Wigner matrix as in Theorem \ref{thm:main}. There exists $c >0$ such that, with overwhelming probability, the following event holds: for all $z = E + {\mathbf{i}}\eta$ such that $|2 \sqrt N - E| \leq \sqrt N$ and $N^{-1/2} L ^c \leq \eta \leq N^{1/2}$, all $i \ne j$, we have $$
| R(z)_{ij} | \leq \Delta
\quad \hbox{ and } \quad | R(z)_{ii} | \leq c N^{-1/2}, $$
where $\Delta = L^c (|E-2\sqrt N| + \eta)^{1/4} N^{-7/8} \eta^{-1/2} + L^c N^{-2} \eta^{-1} $. \end{lemma} \begin{proof}
Let $Y = X/ \sqrt N $ and for $z \in \mathbb{C}$, $\Im (z) >0$, $G (z) = ( Y - z I)^{-1}$. We have $R(z) = N^{-1/2} G(z N^{-1/2})$. Theorem 2.1 (ii) in \cite{MR2871147} asserts that with overwhelming probability for all $w= a + {\mathbf{i}} b$ such that $|a| \leq 5$ and $N^{-1} L ^c \leq b \leq 1$, all $i \ne j$, we have $$
| G(w)_{ij} | \leq \delta
\quad \hbox{ and } \quad | G(w)_{ii} - m (w) | \leq \delta~, $$
where $\delta = L^c \sqrt{\Im (m (w)) / (N b)} + L^c (Nb)^{-1} $ and $m(w)$ is the Cauchy-Stieltjes transform of the semi-circular law (for its precise definition see \cite{MR2871147}). Then \cite[Lemma 3.4]{MR2871147} implies that, for some $C >0$, for all $w = a + {\mathbf{i}}b$, $|a| \leq 5$ and $0 \leq b \leq 1$, we have $|m(w)| \leq C$ and $|\Im( m(w)) | \leq C \sqrt{ |a - 2| + b}$. We apply the above result for $a = E / \sqrt N$ and $b = \eta / \sqrt N$. We obtain the claimed statement for $R(z) = N^{-1/2} G(z N^{-1/2})$. \end{proof}
We use Lemma \ref{loclaw} to estimate the difference between $R(z)$ and $R_0(z)$.
\begin{lemma}\label{lem:resres0} Let $X$ be a Wigner matrix as in Theorem \ref{thm:main}, let $X_0$ be obtained from $X$ by setting to $0$ all diagonal entries, and let
$c_0$ be as in Lemma \ref{lem:resl}. With overwhelming probability, the following event holds: for all $z = E + {\mathbf{i}}\eta$ such that $|2 \sqrt N - E| \leq L^{c_0} N^{-1/6}$ and $ \eta = N^{-1/4}$, all $i$,
$$
| R_0 (z)_{ii} - R(z)_{ii} | \leq \frac{1}{4 N \eta}~. $$ \end{lemma} \begin{proof} The resolvent identity states that if $A-z I$ and $B-z I $ are invertible matrices then \begin{equation}\label{eq:resid} (A - zI)^{-1} = (B-zI)^{-1} + (B - zI)^{-1}(B-A)(A - zI)^{-1}~. \end{equation} Applying twice this identity, it implies that $$ R = R_0 + R_0 (X_0-X)R_0 + R_0 (X_0-X)R_0 (X_0-X)R $$ (where we omit to write the parameter $z$ for ease of notation). For any integer $1 \leq i \leq N$, we thus have $$ R_{ii} - (R_0)_{ii} = -\sum_{j} (R_0)_{ij} X_{jj} (R_0)_{ji} + \sum_{j,k} (R_0)_{ij} X_{jj} (R_0)_{jk} X_{kk} R_{ki} = - I(z) + J(z)~. $$ Note that $X_{jj}$ is independent of $R_0$. By Lemma \ref{lem:dev} and Lemma \ref{loclaw} we find that, with overwhelming probability, $$
|I(z)| \leq L \Delta^2 \sqrt N + c L N^{-1}~. $$
For a given $z = E + {\mathbf{i}} \eta$ such that $|E - 2 \sqrt N| \leq L^{c_0} N^{-1/6}$ and $\eta = N^{-1/4}$, it is straightforward to check that, for some $c>0$, {$\Delta \le L^c N^{-19/24}$} and $|I(z)|\leq L^c N^{-13/12} = o(1/(N\eta))$.
Similarly, we have $$
|J(z)| \leq \sum_{k} |X_{kk}| |R_{ki}| |G_k| \quad \hbox{ with } \; G_k = \sum_{j} (R_0)_{ij} X_{jj} (R_0)_{jk}~. $$
For a given $z$, by Lemma \ref{lem:dev} and Lemma \ref{loclaw}, we have with overwhelming probability, for all $k$, $|G_k| \leq L^c N^{-13/12}$ and $| J(z) | \leq L (\Delta N + c N^{-1/2} ) L^c N^{-13/12} =o(1/(N\eta))$.
For a given $z$, let $\mathcal E_z$ be the event that $\max_{1 \leq i \leq N} |R(z)_{ii} - R_0(z)_{ii}| \leq (8 N \eta)^ {-1}$ and $\mathcal E'_z$ the event that $\max_{1 \leq i \leq N}|R(z)_{ii} - R_0(z)_{ii}| \leq (4 N \eta)^ {-1}$. We have proved so far that for a given $z = E + {\mathbf{i}}\eta$ such that $|E - 2 \sqrt N| \leq L^{c_0} N^{-1/6}$ and $\eta = N^{-1/4}$, with overwhelming probability, $\mathcal E_z$ holds. By a net argument, it implies that with overwhelming probability, the events $\mathcal E_z'$ hold jointly for all $z = E + {\mathbf{i}} \eta$ with $|E - 2 \sqrt N| \leq L^{c_0} N^{-1/6}$ and $\eta = N^{-1/4}$. Indeed, from the resolvent identity \eqref{eq:resid}, we have $|R_{ij} (E + {\mathbf{i}} \eta) - R_{ij} ( E' + {\mathbf{i}} \eta) | \leq \eta^{-2} |E - E'|$. It follows that if $|E - E'| \leq \eta^2 (8 N \eta)^ {-1} \leq N^{-1} $ then $|R(z)_{ii} - R_0(z)_{ii}| \leq (8 N \eta)^ {-1}$. Let $\mathcal N$ be a finite subset of the interval $K = \{ E : |E - 2 \sqrt N| \leq L^{c_0} N^{-1/6}\}$ such that for all $E \in K$, $\min_{E' \in \mathcal N} |E - E'| \leq N^{-1}$. We may assume that $\mathcal N$ has at most $N$ elements. From what precedes we have the inclusion, with $\eta = N^{-1/4}$, $$
\bigcap_{z = E + {\mathbf{i}} \eta : E \in \mathcal N} \mathcal E_z \; \subseteq \bigcap_{z = E + {\mathbf{i}}\eta : E \in K} \mathcal E'_z~. $$ From the union bound, the right-hand side holds with overwhelming probability. It concludes the proof of the lemma. \end{proof}
Now we have all ingredients necessary to conclude the proof of \eqref{eq:ll0}. Let $\eta = N^{-1/4}$. We prove that for some $c >0$, with overwhelming probability, $$ \lambda \leq \lambda_0 + L^c \eta~. $$
By Lemma \ref{lem:resl}, with overwhelming probability, $|\lambda - 2 \sqrt N | \leq L^{c_0} N^{-1/6}$ and for some $j$, $$ N \eta^{-1} \Im R (\lambda + {\mathbf{i}} \eta)_{jj} \geq \frac 1 2 \eta^{-2}~, $$ and, if $\lambda > \lambda_0$, $$ N \eta^{-1} \Im R_0 (\lambda + {\mathbf{i}} \eta)_{jj} \leq L^{c_0} (\lambda - \lambda_0)^{-2}~. $$ By Lemma \ref{lem:resres0}, we deduce that with overwhelming probability, if $\lambda > \lambda_0$, $$ \frac 1 4 \eta^{-2} \leq L^{c_0} (\lambda - \lambda_0)^{-2}~. $$ Hence, $\lambda \leq \lambda_0 + 2 L^{c_0/2} \eta$, concluding the proof of \eqref{eq:ll0}.
\subsection{Proof of Lemma \ref{fliponeelem}}
Let $\lambda = \lambda_1 \geq \cdots \geq \lambda_N$ be the eigenvalues of $X$. For any $(i,j)$, let $\lambda^{(ij)}$ be the largest eigenvalue of ${X^{(ij)}}$. We start by proving that $\lambda$ and $\lambda^{(ij)}$ are close compared to their fluctuations. We have $$
\lambda \geq \langle u^{(ij)} , Xu^{(ij)} \rangle = \lambda^{(ij)} + \langle u^{(ij)} , (X - {X^{(ij)}} ) u^{(ij)} \rangle \geq \lambda^{(ij)} - 2 (|X_{ij}| + |X'_{ij}|) \| u^{(ij)} \|^2_{\infty}~, $$ { where $u^{(ij)}$ is as in Lemma \ref{fliponeelem}.}
Since $X$ and ${X^{(ij)}}$ have the same distribution, we deduce from
Lemma \ref{delocalization} that, { for any $c_0 >0$, there exists $C > 0$ such that with probability at least $1 - CN^{2-c_0}$, $\| v \|_{\infty} \leq (\log N)^C / \sqrt N$ and $\max_{ij} \| u^{(ij)} \|_{\infty} \leq (\log N)^C / \sqrt N$. For all $N$ large enough, we have $(\log N)^C \leq L$, where $L$ is defined in (\ref{eq:defL}). Hence for any $c_0$, for some new constant $C>0$, with probability at least $1 - CN^{2-c_0}$, $\| v \|_{\infty} \leq L/ \sqrt N$ and $\max_{ij} \| u^{(ij)} \|_{\infty} \leq L/ \sqrt N$. Since $c_0$ can be taken arbitrarily large, we deduce that} with overwhelming probability, $\| v \|_{\infty} \leq L / \sqrt N$,
$\max_{ij} \| u^{(ij)} \|_{\infty} \leq L/ \sqrt N$ and $\max_{ij}(
|X_{ij}| + |X'_{ij}|) \leq L/2$. On this event, we get $$ \lambda \geq \lambda^{(ij)} - \frac{L^{3}} {N}~. $$ Reversing the roles of $X$ and ${X^{(ij)}}$ and using the union bound, we deduce that, with overwhelming probability, $$
\max_{ij} |\lambda - \lambda^{(ij)} | \leq \frac{L^{3}} {N}~. $$ It follows from \cite[Theorem 1.14]{MR2669449} that, for any $\rho >0$, there exists $\kappa >0$ such that, for all $N$ large enough, $$ \mathbb{P} ( \lambda_2 < \lambda - N^{-1/2-\rho} ) \geq 1 - N^{-\kappa}~. $$ Let $(v_1, \ldots, v_N)$ be an orthonormal basis of eigenvectors of $X$ associated to the eigenvalues $(\lambda_1, \ldots, \lambda_N)$ with $v_1 = v$. We set $\theta = 2/5 - 3 \rho / 5$ and $q = \lfloor N^{\theta} \rfloor$. For some constant $c>0$ to be defined and $\rho \in (0,1/16)$, we introduce the event $\mathcal{E}_\rho$ on which the following conditions hold: \begin{itemize} \item $\lambda_2 < \lambda - N^{-1/2 - \rho}$ and $ \lambda_q \leq \lambda - c q^{2/3} N^{-1/6}$~; \item
$\max_{1 \leq p \leq q} \| v_p \|_{\infty} \leq L / \sqrt N$ and $\max_{ij} \| u^{(ij)} \|_{\infty} \leq L / \sqrt N$~; \item
$\max_{ij}( |X_{ij}| + |X'_{ij}|) \leq L/2$~. \end{itemize}
From what precedes, Lemma \ref{delocalization} and \cite[Theorem 2.2]{MR2871147}, for some $c$ small enough, for any $\rho >0$ there exists $\kappa >0$ such that for all $N$ large enough, $\mathbb{P}(\mathcal{E}_\rho) \geq 1 - N^{-\kappa}$. Note also that we have checked that if $\mathcal{E}_\rho$ holds then $\max_{ij} |\lambda - \lambda^{(ij)} | \leq L^{3} / N$.
On the event $\mathcal{E}_\rho$, we now prove that $v$ and $u^{(ij)}$ are close in $\ell^{\infty}$-norm. For a fixed $(i,j)$, we write, $u^{(ij)} = \alpha v + \beta x + \gamma y$, where $\alpha^2 + \beta ^2 + \gamma^2 = 1$ with $\beta,\gamma$ non-negative real numbers, $x$ is a unit vector in the vector space spanned by $(v_2, \ldots, v_q)$, and $y$ is a unit vector in the vector space spanned by $(v_{q+1}, \ldots, v_N)$. Set \[ w = {(X^{(ij)} - X)} u^{(ij)} + ( \lambda - \lambda^{(ij)} ) u^{(ij)}. \] We have $$ \lambda u^{(ij)} = \alpha \lambda v + \beta X x + \gamma X y + w~. $$ Taking the scalar product with $y$, we find $$ \lambda \gamma = \lambda \langle y , u^{(ij)} \rangle = \gamma \langle y , X y \rangle + \langle y, w\rangle \leq (\lambda - c q^{2/3} N^{-1/6})\gamma + \langle y, w\rangle~. $$ Hence,
$$
\gamma\leq c^{-1} q^{-2/3} N^{1/6} \| w\|_2 \leq
c^{-1} q^{-2/3} N^{1/6} \left( \frac{L^{2} }{\sqrt N} +
\frac{L^{4}}{N^{3/2}} \right) \leq 2 c^{-1} L^{4} N^{-2\theta / 3 - 1/3}~. $$ Similarly, taking the scalar product with $x$, we find $$
\beta \leq N^{1/2 + \rho} \langle x, w \rangle \leq N^{1/2 + \rho} \left( \left| \langle x, (X - {X^{(ij)}}) u^{(ij)} \rangle\right| + \frac{L^{ 3}}{N} \right)~. $$
Since $|\langle a , b \rangle| \leq \| a\|_{\infty} \|b \|_1 \leq m \|a \|_{\infty} \|b\|_{\infty} $ where $m$ is the number of non-zeros entries of $b$, we have $\left| \langle x, (X - {X^{(ij)}}) u^{(ij)} \rangle\right| \leq \| x \|_{\infty} L^{2} / \sqrt N$. By construction, $x = \sum_{p=2}^q \gamma_p v_p$ where $\sum_p |\gamma_p|^2 = 1$. If $\mathcal{E}_\rho$ holds, { using the Cauchy-Schwarz inequality and $ \| v_p \|_{2} = 1$,} we deduce that $$
\| x \|_{\infty} \leq \sum_{p=2}^q |\gamma_p| \| v_p \|_{\infty} \leq \frac{L}{\sqrt N} \sum_{p=2}^q |\gamma_p| \leq \frac{L \sqrt q}{\sqrt N} \leq L N^{\theta / 2 - 1/2}~. $$ So finally, $$ \beta \leq 2 L^{3} N^{-1/2 + \theta /2 + \rho}~. $$
We deduce that $|\alpha| = \sqrt{1 - \beta^2 - \gamma^2} \geq 1 - \beta - \gamma$ is positive for all $N$ large enough. We set $s = \alpha / |\alpha|$. We find, since $\|y\|_{\infty} \leq \| y\|_2 \leq 1$, $$
\| s v - u^{(ij)}\|_{\infty} \leq ( 1 - |\alpha| ) \| v \|_{\infty} + \beta \|x \|_{\infty} + \gamma \leq 2\beta \|x \|_{\infty} +2\gamma~. $$ For our choice of $\theta= 2/5 - 3 \rho / 5$, this last expression is $O( L^{4} N^{-3/5 + 8 \rho /5})$. {Indeed, we have \begin{align*} &\gamma \leq 2c^{-1} L^4 N^{-4/15+2\rho /5-1/3}= 2c^{-1} L^4 N^{-3/5+2\rho /5}, \\
&\|x \|_{\infty} \leq L N^{1/5-3\rho /10-1/2} = LN^{-3/10-3\rho /10},\\ &\beta \leq 2 L^3 N^{-1/2+1/5-3\rho/10+\rho} = 2L^3 N^{-3/10+7\rho/10}. \end{align*}}
Since $\rho<1/16$, we have $3/5 - 8 \rho /5 > 1/2$. Hence, finally, if we set ${\kappa^{\prime}} = 1/10 - 8 \rho/ 5 >0$, we get that $\| s v - u^{(ij)}\|_{\infty} = O ( L^4 N^{-1/2{ - \kappa^{\prime}}})$. This concludes the proof of the lemma.
\section{Proof of Theorem \ref{thm:main2}} \label{sec:thm2} { The proof of Theorem \ref{thm:main2} relies on the rigorous justification of the heuristic argument sketched below the statement of Theorem \ref{thm:main2}; see the forthcoming Lemma \ref{lem:llk}. This is performed by a careful perturbation argument on the resolvent in Lemma \ref{lem:resresk}. Indeed, the resolvent has nice analytical properties and it is intimately connected to the spectrum, as illustrated in Lemma \ref{lem:reslk}}.
Recall that $S_k=\{(i_1,j_1),\ldots,(i_k,j_k)\}$ is the set of $k$ pairs chosen uniformly at random (without replacement) from the set of all ordered pairs $(i,j)$ of indices with $1\le i\le j\le N$ which is used in the definition of $X^{[k]}$. We denote by $\lambda$ and $\lambda^{[k]}$ the largest eigenvalues of $X$ and $X^{[k]}$. Recall the definition of $L = L_N$ in \eqref{eq:defL} and the notion of overwhelming probability immediately below \eqref{eq:defL}. The main technical lemma is the following: \begin{lemma}\label{lem:llk} Let $X$ be a Wigner matrix as in Theorem \ref{thm:main2} and let $\lambda = \lambda_1 \geq \cdots \geq \lambda_N$ be its eigenvalues. For any $c>0$ there exists a constant $c_2 >0$ such that for all $\varepsilon >0$, for all $N$ large enough, with probability at least $1- \varepsilon$,
$$\max_{k \leq N^{5/3} L^{-c_2}}\max_{p \in \{1,2\}} | \lambda_p - \lambda_p^{[k]}| \leq N^{-1/6} L^{-c}~.$$ \end{lemma}
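Note, as a rough order of magnitude (this observation is ours and is not used in the proofs), that the range $k \leq N^{5/3} L^{-c_2}$ allows only a vanishing fraction of the $N(N+1)/2$ entries of the upper triangle to be resampled:
$$
\frac{N^{5/3} L^{-c_2}}{N(N+1)/2} \leq 2 N^{-1/3} L^{-c_2}~.
$$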
We postpone the proof of Lemma \ref{lem:llk} to the next subsection. We denote by $R (z) = (X - z I)^{-1}$ and $R^{[k]} (z) = (X^{[k]}- z I)^{-1}$ the resolvents of $X$ and $X^{[k]}$. The proof of Lemma \ref{lem:llk} is based on the following comparison lemma for the resolvents.
\begin{lemma}\label{lem:resresk}
Let $X$ be a Wigner matrix as in Theorem \ref{thm:main}. Let $c_0 >0$ be as in Lemma \ref{lem:resl} and let $c_1 >0$. There exists $c_2>0$ such that, with overwhelming probability, the following event holds: for all $k \leq N^{5/3} L^{-c_2}$, for all $z = E + {\mathbf{i}} \eta$ such that $|2 \sqrt N - E| \leq L^{c_0} N^{-1/6}$ and $ \eta = N^{-1/6} L^{-c_1}$,
$$
\max_{1 \leq i,j \leq N} N \eta | R^{[k]} (z)_{ij} - R(z)_{ij} | \leq \frac{1}{L^2}~. $$ \end{lemma} We postpone the proof of Lemma {\ref{lem:resresk}} to the next subsection. Our next lemma connects the resolvent with eigenvectors. \begin{lemma}\label{lem:reslk} Let $X$ be a Wigner matrix as in Theorem \ref{thm:main} and let $\varepsilon >0$. There exist $c_1,c_2$ such that the following event holds for all $N$ large enough with probability at least $1 - \varepsilon$: for all $k \leq N^{5/3} L^{-c_2}$, we have, with $z = \lambda + {\mathbf{i}}\eta$, $ \eta = N^{-1/6} L^{-c_1}$,
$$
\max_{1 \leq i,j \leq N} | N \eta \Im R(z)_{ij} - N v_i v_{j} | \leq \frac 1 {L^2} \quad \hbox{ and } \quad
\max_{1 \leq i,j \leq N} | N\eta \Im R^{[k]} (z )_{ij} - N v^{[k]}_i v^{[k]}_{j} | \leq \frac 1 {L^2}~. $$ \end{lemma}
\begin{proof} Let $\lambda = \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_N$ be the eigenvalues of $X$. Let $(v_1, \ldots, v_N)$ be an eigenvector basis of $X$. Recall that $$ N \eta \Im R(E + {\mathbf{i}} \eta )_{ij} = \sum_{p=1}^N \eta^2 \frac{ N (v_p)_i (v_p)_j }{ (\lambda_p - E )^2 + \eta ^2}~. $$ As in the proof of Lemma \ref{lem:resl}, from
\cite[Theorem 2.2]{MR2871147} and Lemma \ref{delocalization}, for some constants $c_0,C>0$, we have with overwhelming probability that the following event $\mathcal{E}$ holds: $|\lambda- 2 \sqrt N| \leq L^{c_0} N ^ {-1/6}$, for all integers $1 \leq p \leq N$, $ \| v_p \|^2_{\infty} \leq L / N$, and, with $q = \lfloor L^{c_0} \rfloor$, for all $E$ such that $|E - 2 \sqrt N| \leq L^{c_0} N^{-1/6}$ we have $$ \sum_{p = q+1}^{N} \frac{ N (v_p)_i (v_p)_j }{ (\lambda_p - E )^2 + \eta ^2} \leq C L N^{1/3} q^ {-1/3}~. $$ On the other hand, let $\mathcal{E}_\delta$ be the event that $\lambda_2 \leq \lambda - \delta N^{-1/6}$. Fix $\varepsilon >0$. From \cite[Theorem 2.7]{MR3253704} and, e.g., \cite[Chapter 3]{MR2760897}, there exists $\delta >0$ such that $$ \mathbb{P} (\mathcal{E}_\delta ) \geq 1 - \varepsilon~. $$
On the event $\mathcal{E}\cap \mathcal{E}_\delta$, if $| \lambda - E| \leq (\delta/2) N^{-1/6}$, we have $$ \sum_{p = 2}^{q} \frac{ N (v_p)_i (v_p)_j }{ (\lambda_p - E )^2 + \eta ^2} \leq \frac 4 {\delta^2}L q N^{1/3}~. $$
Finally, if $|\lambda - E | \leq \eta/L^2$, on the event $\mathcal{E}$, we find easily, where $v_i$ denotes the $i$-th coordinate of $v$, $$
\left| \eta^2 \frac{ N v_i v_j }{ (\lambda - E )^2 + \eta ^2} - N v_i v_j \right| \leq \frac 1 {L^3}~. $$
For some $c_1 >0$, we thus find, that if $\eta = N^ {-1/6} L^{-c_1}$ then on the event $\mathcal{E}\cap \mathcal{E}_\delta$, for all $E$ such that $|\lambda - E | \leq \eta/L^2$ we have $$
\max_{i ,j}| N \eta \Im R(E + {\mathbf{i}} \eta )_{ij} - N v_i v_{j} | \leq \frac 1 {L^2}~. $$
We apply this last estimate to $R$ with $E = \lambda$. For each $k$, let $\mathcal{E}^ {[k]}$ be the event corresponding to $\mathcal{E}$ for $X^{[k]}$ instead of $X$. We apply the above estimate on the event $\mathcal{E}'_k = \mathcal{E}^{[k]} \cap \mathcal{E}_\delta \cap \{ \max_{p = 1 ,2} |\lambda_p - \lambda_p^{[k]}| \leq \eta / L^2\}$ to $R^{[k]}$ and $E = \lambda$. By Lemma \ref{lem:llk} and the union bound, the event $\cap_{k \leq N^{5/3} L^{-c_2}} \mathcal{E}'_k$ has probability at least $1 - 2 \varepsilon$. This concludes the proof. \end{proof}
We may now conclude the proof of Theorem \ref{thm:main2}. Let $c_1,c_2$ be as in Lemma \ref{lem:reslk}, $k \leq N^{5/3} L^{- c_2}$ and $\eta = N^{-1/6} L^{-c_1}$. Up to increasing the value of $c_2$, we may also assume that the conclusion of Lemma \ref{lem:resresk} holds. By Lemma \ref{delocalization}, Lemma \ref{lem:resresk} and Lemma \ref{lem:reslk}, for any $\varepsilon >0$, for all $N$ large enough, with probability at least $1 - \varepsilon$, it holds that for some $c >0$: $ \sqrt N \| v\|_{\infty} \leq (\log N) ^c$, $\sqrt N \| v ^{[k]} \|_\infty \leq (\log N)^c$ and $$
\max_{i ,j}| N v_i v_{j} - N v^{[k]}_i v^{[k]}_{j} | \leq \frac 3 {L^2}. $$ Applied to $i = j$, we get that for some $s_i \in \{-1,1\}$, $$
\sqrt N | s_i v_i - v_i^{[k]} | \leq \frac{\sqrt 3}{ L}. $$ In particular, we find $$
(1 - s_i s_j) N | v_i v_j | \leq | N v_i v_{j} - N v^{[k]}_i v^{[k]}_{j} | + \frac {2 \sqrt 3} {L} (\log N) ^{c} \leq \frac 4 {L} (\log N) ^{c}. $$
Let $J = \{ 1 \leq i \leq N : \sqrt N |v_i| \geq L^{-1/3} \}$. It follows from the above inequality that for $i,j \in J$, $s_i = s_j$. Let $s$ be this common value. We have for all $i \in J$, $$
\sqrt N | s v_i - v_i^{[k]} | \leq \frac{\sqrt 3}{ L}. $$ Moreover, for all $i \notin J$, by definition, $$
\sqrt N | s v_i - v_i^{[k]} | \leq \sqrt N |v_i| + \sqrt N |v^{[k]}_i| \leq L^{-1/3} + L^{-1/3} + \sqrt 3 L ^{-1}. $$ This concludes the proof of Theorem \ref{thm:main2}.
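For the reader's convenience, let us record that the two cases above combine into the single uniform estimate (valid on the event considered, as soon as $L \geq 3$):
$$
\sqrt N \, \| s v - v^{[k]} \|_{\infty} \leq 2 L^{-1/3} + \sqrt 3 \, L^{-1} \leq 3 L^{-1/3}~.
$$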
\subsection{Proof of Lemma \ref{lem:resresk}}
{ The proof of Lemma \ref{lem:resresk} is based on a technical martingale argument. Thanks to the resolvent identity \eqref{eq:resid}, we will write $R^{[k]}_{ij}(z) - R_{ij}(z)$ as a sum of martingale differences up to small error terms; this is performed in Equation \eqref{eq:RkRij}. These martingales will allow us to use concentration inequalities. Each term of the martingale differences will be estimated thanks to the upper bound on resolvent entries given in Lemma \ref{loclaw}.}
{ We apply the resolvent identity many times and, for technical convenience, it will be easier to have a uniform bound on our random variables. }We thus start by truncating our random variables $(X_{ij})$. Set $\tilde X_{ij} = X_{ij} \mathbbm{1} ( |X_{ij} | \leq (\log N)^{c})$ and $\tilde X'_{ij} = X'_{ij} \mathbbm{1} ( |X'_{ij} | \leq (\log N)^{c})$ with $c= 2/ \delta$. The matrix $\tilde X$ has independent entries above the diagonal. Moreover, since $\mathbb{E} \exp ( |X_{ij}|^\delta) \leq 1/ \delta$, with overwhelming probability, $X = \tilde X$ and $X' = \tilde X'$. It is also straightforward to check that $\mathbb{E} |X_{ij}|^2 \mathbbm{1} ( |X_{ij} | \geq (\log N) ^{c} ) = O( \exp ( - (\log N)^{2}/2))$. It implies that $|\mathbb{E} \tilde X_{ij}| = O ( \exp ( - ( \log N)^2 /2 ))$ and $\mathrm{Var} (\tilde X_{ij}) = 1 + O ( \exp ( - (\log N)^2 /2))$ for $i \ne j$. We define the matrix $\bar X$ by setting, for $i \ne j$, $$\bar X_{ij} =( \tilde X_{ij} - \mathbb{E} \tilde X_{ij} ) / \sqrt{\mathrm{Var} (\tilde X_{ij} )}\quad \hbox{ and } \quad \bar X_{ii} = \tilde X_{ii} - \mathbb{E} \tilde X_{ii}.$$
The matrix $\bar X$ is a Wigner matrix as in Theorem \ref{thm:main2} with entries in $[-L/4,L/4]$. Moreover, from Gershgorin's circle theorem \cite[Theorem 6.6.1]{HJ85}, with overwhelming probability, the operator norm of $X - \bar X$ satisfies $\| X - \bar X \| = O ( N \exp ( - ( \log N)^2/2 ))$. Observe that from the spectral theorem, for any Hermitian matrix $A$, $\| (A-z)^{-1}\| \leq |\Im(z)|^ {-1}$. In particular, from the resolvent identity \eqref{eq:resid}, we get $\| (X-z)^{-1} - (\bar X - z)^{-1}\| = \| (X-z)^{-1} (\bar X - X) (\bar X - z)^{-1}\|\leq \Im (z)^{-2} \| X - \bar X \| = O ( N^3 \exp ( - ( \log N)^2/2 ))$ if $\Im (z) \geq N^{-1}$. The same truncation procedure applies for $X^{[k]}$. In the proof of Lemma \ref{lem:resresk}, we may thus assume without loss of generality that the random variables $X_{ij}$ have support in $[-L/4,L/4]$.
{ It will also be convenient to assume that the random subset $S_k$ does not contain too many points on a given row or column. To that end,} for $0 \leq t \leq k$, let ${\cal F}_t$ be the $\sigma$-algebra generated by the random variables $X$, $S_{k}$ and $(X'_{i_s,j_s})_{1 \leq s \leq t}$. For $1 \leq i,j \leq N$, we set $$T_{ij} = \{ t : \{ i_t ,j_t \} \cap \{i, j \} \ne \emptyset \}~.$$ Note that $T_{ij}$ is ${\cal F}_0$-measurable. We have $$
\mathbb{E} |T_{ij}| = \frac{2 k}{N+1}~. $$ Besides, from \cite[Proposition 1.1]{MR2288072}, for any $u >0$,
$$\mathbb{P} \left( |T_{ij}| \geq \mathbb{E} |T_{ij}| + u \right) \leq \exp \left( - \frac{u^2 }{ 4\mathbb{E} |T_{ij}| +2 u } \right)~. $$
If $k \leq N^{5/3} L^{-c_2}$, it follows that with overwhelming probability, the following event, say $\mathcal T$, holds: $\max_{ij} |T_{ij}| \leq 4k' /N$ where for ease of notation we have set $$ k' = \min ( k , N (\log N)^2)~. $$
Now, let $c$ be as in Lemma \ref{loclaw} and, for $0 \leq t \leq k$, we denote by $\mathcal{E}_t \in {\cal F}_t$ the event that $\mathcal T$ holds and that the conclusion of Lemma \ref{loclaw} holds for $X^{[t]}$ and $R^{[t]}$ (with the convention $X^{[0]} = X$). If $\mathcal{E}_t$ holds, then for all $z = E + {\mathbf{i}} \eta$ with $|2 \sqrt N - E| \leq L^{c _0} N^{-1/6}$ and $ \eta = N^{-1/6} L^{-c_1}$, we have, $$
\max_{i \ne j} |R^{[t]}_{ij} (z) | \leq \delta = L ^ {c'} N^{-5/6} \quad \hbox{ and } \quad ~ \max_{i} |R^{[t]}_{ii} (z) | \leq \delta_0 = c N^{-1/2}~, $$ where $c' =1 + c + \max ( c_0 / 2, c_0/4 + c_1/2) $.
{ After these preliminaries, we may now write the resolvent expansion. Our goal is to write $R^{[k]}_{ij}(z) - R_{ij}(z)$ as a sum of martingale differences up to error terms. The outcome will be Equation \eqref{eq:RkRij} below.} We define $X_0^{[t]}$ as the symmetric matrix obtained from $X^{[t]}$ by setting to $0$ the entries $(i_t,j_t)$ and $(j_t,i_t)$. By construction $X^{[t+1]}_0$ is ${\cal F}_t$-measurable. We denote by $R_0^{[t]}$ the resolvent of $X_0^{[t]}$. The resolvent identity \eqref{eq:resid} implies that $$ R_0^{[t+1]} = R^{[t]} + R^{[t]} ( X^{[t]}-X_0^{[t+1]})R^{[t]} + R^{[t]} (X^{[t]}-X_0^{[t+1]})R^{[t]} (X^{[t]}-X_0^{[t+1]}) R^{[t+1]}_0 $$ (we omit the argument $z$ for ease of notation). Now, we set for $i \ne j$, $E^s_{ij} = e_i e_j ^* + e_j e_i^*$ and {$E_{ii}^s = e_i e_i^*$}, where $e_i$ denotes the canonical vector of $\mathbb R^N$ with all entries equal to $0$ except the $i$-th entry equal to $1$. We have \begin{equation}\label{eq:XtXt0} X^{[t]}-X_0^{[t+1]} = X_{i_{t+1} j_{t+1}} E^s_{i_{t+1} j_{t+1}} \quad \hbox{ and } \quad X^{[t+1]}-X_0^{[t+1]} = X'_{i_{t+1} j_{t+1}} E^s_{i_{t+1} j_{t+1}}~. \end{equation}
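For later use, let us record the elementary identity behind the entrywise bounds below: for any matrices $A$, $B$ and any $i \ne j$, since $E^s_{ij} = e_i e_j^* + e_j e_i^*$,
$$
(A E^s_{ij} B)_{ab} = A_{ai} B_{jb} + A_{aj} B_{ib}~,
$$
so that each entry of $A E^s_{ij} B$ is a sum of two products of one entry of $A$ and one entry of $B$.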
We use that $|X_{ij}| \leq L/4$ and $|(R^{[t+1]}_0)_{ij}| \leq \eta^{-1}$. If $\mathcal{E}_t$ holds, we deduce that for all $z = E + {\mathbf{i}} \eta$ with $|2 \sqrt N - E| \leq L^{c_0} N^{-1/6}$ and $ \eta = N^{-1/6} L^{-c_1}$, we have \begin{equation}\label{eq:R0Rt}
\max_{i\ne j} |(R^{[t+1]}_0)_{ij} | \leq \sqrt 2 \delta \quad \hbox{ and } \quad \max_{i} |(R^{[t+1]}_0)_{ii} | \leq \sqrt 2\delta_0~. \end{equation}
Similarly, the resolvent identity \eqref{eq:resid} with $R^{[t+1]}$ and $R^{[t]}$ implies that, if $\mathcal{E}_t$ holds, for all $z = E + {\mathbf{i}} \eta$ with $|2 \sqrt N - E| \leq L^{c_0} N^{-1/6}$ and $ \eta = N^{-1/6} L^{-c_1}$, we have \begin{equation}\label{eq:R1Rt}
\max_{i \ne j} |R^{[t+1]}_{ij} | \leq \sqrt 2 \delta \quad \hbox{ and } \quad \max_{i} |R^{[t+1]}_{ii} | \leq \sqrt 2 \delta_0~. \end{equation} Finally, the resolvent identity with $R^{[t+1]}$ and $R_0^{[t+1]}$ gives \begin{align*} & R^{[t+1]} \; = \; R_0^{[t+1]} + R_0^{[t+1]} ( X_0^{[t+1]}-X^{[t+1]})R^{[t+1]} \\ &\quad = \; \sum_{\ell = 0}^2 \left(R_0^{[t+1]} ( X_0^{[t+1]}-X^{[t+1]}) \right)^{\ell} R_0^{[t+1]} + \left(R_0^{[t+1]} ( X_0^{[t+1]}-X^{[t+1]}) \right)^{3} R^{[t+1]} ~. \end{align*}
Note that $\mathbb{E} [X'_{i_{t+1} j_{t+1}} | {\cal F}_t] = 0$. Using $|X_{i_{t+1}j_{t+1}} |\leq L/4$ and \eqref{eq:XtXt0}-\eqref{eq:R0Rt}-\eqref{eq:R1Rt}, we deduce that \begin{equation}\label{eq:ERtR0}
\left| \mathbb{E} [ R^{[t+1]}_{ij} |{\cal F}_t ] - (R^{[t+1]}_0)_{ij} -s^{[t+1]}_{ij} {X'}^2_{i_{t+1}j_{t+1}} \right| \leq a_t \quad \hbox{ and } \quad |R^{[t+1]}_{ij} - (R^{[t+1]}_0)_{ij} | \leq b_t~, \end{equation} where $s^{[t]}_{ij} = ((R_0^{[t]} E^s_{i_{t} j_{t}})^2 R_0^{[t]})_{ij}$ and, if $\mathcal{E}_t$ holds, \begin{align*} a_t = L^3 \delta^2 \delta_0 ^2+ L^3 \delta_0^4 \mathbbm{1}_{(t \in T_{ij})} \quad \hbox{ and } \quad b_t = L \delta^2 + L \delta \delta_0 \mathbbm{1}_{(t \in T_{ij})} + L \delta_0^2 \mathbbm{1}_{ ( \{i_t,j_t\} = \{ i,j \})}~. \end{align*} We rewrite, one last time the resolvent identity with $R_0^{[t+1]}$ and $R^{[t]}$: $$ R^{[t]} = \sum_{\ell = 0}^2 \left(R_0^{[t+1]} ( X_0^{[t+1]}-X^{[t]}) \right)^{\ell} R_0^{[t+1]} + \left(R_0^{[t+1]} ( X_0^{[t+1]}-X^{[t]}) \right)^{3}R^{[t]} ~. $$ If $\mathcal{E}_t$ holds, we arrive at, $$
\left| \mathbb{E} [ R^{[t+1]}_{ij} |{\cal F}_t ] - (R^{[t]})_{ij} - r^{[t+1]}_{ij} X_{i_{t+1} j_{t+1}} + s^{[t+1]}_{ij} (X^2_{i_{t+1} j_{t+1}} - {X'}^2_{i_{t+1}j_{t+1} }) \right| \leq 2a_t~, $$ where $r^{[t]}_{ij} = (R_0^{[t]} E^s_{i_{t} j_{t}} R_0^{[t]})_{ij}$. We have thus found that \begin{equation}\label{eq:RkRij}
R^{[k]}_{ij} - R_{ij} = \sum_{t=0}^ {k-1} \left(R^{[t+1]}_{ij} - (R^{[t]})_{ij} \right) = \sum_{t=0}^ {k-1} \left(R^{[t+1]}_{ij} - \mathbb{E} [ R^{[t+1]}_{ij} |{\cal F}_t ] \right) + r_{ij} + s_{ij} - s'_{ij} +a_{ij} ~, \end{equation} where we have set, with $Y_{ij} = X_{ij}^2 - \mathbb{E} X_{ij}^2$, $Y'_{ij} = {X'_{ij}}^2 - \mathbb{E} X_{ij}^2$, $$
r_{ij} = \sum_{t=1}^{k} r^{[t]}_{ij} X_{i_{t} j_{t}} ~,\quad s_{ij} = \sum_{t=1}^{k} s^{[t]}_{ij} Y_{i_{t} j_{t}} ~, \quad s'_{ij} = \sum_{t=1}^{k} s^{[t]}_{ij} Y'_{i_{t} j_{t}} ~, \quad |a_{ij}| \leq 2 \sum_{t=1}^k a_t~. $$
{ In this final step of the proof, we use concentration inequalities to estimate the terms in \eqref{eq:RkRij}}. We set $Z_{t+1} = (R^{[t+1]}_{ij} - \mathbb{E} [ R^{[t+1]}_{ij} |{\cal F}_t ] ) \mathbbm{1}_{\mathcal{E}_t}$. We write, for any $u \geq 0$, $$
\mathbb{P} \left( \left| \sum_{t=0}^ {k-1} \left(R^{[t+1]}_{ij} - \mathbb{E} [ R^{[t+1]}_{ij} |{\cal F}_t ] \right)\right| \geq u \right) \leq \mathbb{P} \left( \left| \sum_{t=1}^ {k} Z_t \right| \geq u \right) + \sum_{t=0}^{k-1} \mathbb{P} ( \mathcal{E}_t^c)~. $$
By Lemma \ref{loclaw}, we have for any $c>0$, $\sum_{t=0}^{k-1} \mathbb{P} ( \mathcal{E}_t^c) = O ( N ^{-c})$. Since $ \mathcal{E}_t \in {\cal F}_t$, we have { that} $\mathbb{E}[ Z_{t+1} | {\cal F}_t ] = 0$. Also, from \eqref{eq:R0Rt}-\eqref{eq:ERtR0}, $|Z_{t}| \leq 2b_t$. On the event $\mathcal T$, we have $$
\sqrt{\sum_{t=1}^k b_t^2 } \leq L \delta^2 \sqrt k + L \delta \delta_0 \sqrt{\frac{4k'}{N} } + L \delta^2_0 \leq 2 L \delta^2 \sqrt{k'}~. $$ The Azuma-Hoeffding martingale inequality implies that, for $u \geq 0$, $$
\mathbb{P} \left( \left| \sum_{t=1}^ {k} Z_t \right| \geq 2 u L \delta^2 \sqrt{k'} \right) \leq 2 \exp \left( - \frac{u^2}{2}\right)~. $$ We apply the latter inequality with $u = \log N$. We deduce that, with overwhelming probability, \begin{equation}\label{eq:jook}
\sum_{t=0}^ {k-1} \left(R^{[t+1]}_{ij} - \mathbb{E} [ R^{[t+1]}_{ij} |{\cal F}_t ] \right) \leq L^2 \sqrt{k'} \delta^2~. \end{equation}
We may treat the random variable $s'_{ij}$ in \eqref{eq:RkRij} similarly. We set $Z'_{t+1} = s^{[t+1]}_{ij} Y'_{i_{t+1} j_{t+1}}\mathbbm{1}_{\mathcal{E}_t}$. Note that $s^{[t+1]}_{ij}$ is ${\cal F}_t$-measurable and $\mathbb{E} [ Y'_{i_{t+1} j_{t+1}} | {\cal F}_t ] = 0$. Thus $\mathbb{E} [Z'_{t+1} | {\cal F}_t] = 0$. Moreover, since $|Y'_{ij}| \leq L^2/ 16$, from \eqref{eq:R0Rt}, we find $|Z'_{t+1}| \leq b'_t = L^2 (\delta^2 \delta_0 + \delta_0^3 \mathbbm{1}_{( t \in T_{ij})})$. If $\mathcal T$ holds, we get $$ \sqrt{\sum_{t=0}^{k-1} {b'_t}^2} \leq L^2 \delta^2 \delta_0 \sqrt{k} + L^2 \delta_0^3 \sqrt{ \frac{4k'}{N}} \leq 2 L^2 \delta^2 \delta_0 \sqrt{k'}~. $$ We write, for $u \geq 0$, $$
\mathbb{P} \left( \left| s'_{ij} \right| \geq u \right) \leq \mathbb{P} \left( \left| \sum_{t=1}^{k} Z'_t \right| \geq u \right) + \sum_{t=0}^{k-1} \mathbb{P} ( \mathcal{E}_{t}^c) . $$ From Azuma-Hoeffding martingale inequality, we deduce that, with overwhelming probability, \begin{equation}\label{eq:jook3}
|s'_{ij}| \leq \sqrt{k'} \delta^2. \end{equation}
We now estimate the random variable $r_{ij}$ in \eqref{eq:RkRij}. We will also use the Azuma-Hoeffding inequality, but we need to introduce a backward filtration {(because we now have to deal with the random variables $X_{i_t j_t}$ instead of $X'_{i_t j_t}$ as in $s'_{ij}$)}. We define ${\cal F}'_t$ as the $\sigma$-algebra generated by the random variables $X'$, $S_k$ and $\{ X_{ij} : \{i,j\} \ne \{ i_s, j_s \} \hbox{ for all } s \leq t \}$. By construction $X^{[t]}$ and $X_0^{[t]}$ are ${\cal F}'_{t}$-measurable random variables. Let $\mathcal{E}'_{t} \in {\cal F}'_{t} $ be the event that $\mathcal T$ holds and that the conclusion of Lemma \ref{loclaw} holds for $X^{[t]}$. If $\mathcal{E}'_{t}$ holds, then for all $z = E + {\mathbf{i}} \eta$ with $|2 \sqrt N - E| \leq L^{c_0} N^{-1/6}$ and $ \eta = N^{-1/6} L^{-c_1}$, we have, $$
\max_{i \ne j} |R^{[t]}_{ij} (z) | \leq \delta \quad \hbox{ and } \quad ~ \max_{i} |R^{[t]}_{ii} (z) | \leq \delta_0~. $$ Arguing as in \eqref{eq:R0Rt}, if $\mathcal{E}'_t$ holds then $$
\max_{i \ne j} |(R_0^{[t]})_{ij} | \leq \sqrt 2\delta \quad \hbox{ and } \quad ~ \max_{i} |(R^{[t]}_0)_{ii} | \leq \sqrt 2\delta_0~. $$
The variable $r^{[t]}_{ij}$ is ${\cal F}'_{t}$-measurable and $\mathbb{E} (X_{i_{t} j_{t}} | {\cal F}'_{t} ) = 0$. We write, for $u \geq 0$, $$
\mathbb{P} \left( \left| r_{ij} \right| \geq u \right) \leq \mathbb{P} \left( \left| \sum_{t=1}^{k} \tilde Z_t \right| \geq u \right) + \sum_{t=1}^{k} \mathbb{P} ( {\mathcal{E}'_{t}}^c)~ , $$
where $\tilde Z_{t} = r^{[t]}_{ij} X_{i_{t} j_{t}} \mathbbm{1}_{\mathcal{E}'_{t}}$. We have $\mathbb{E} ( \tilde Z_{t} | {\cal F}'_t ) = 0$ and $$|\tilde Z_t| \leq \tilde b_t = L \delta^2 + L \delta \delta_0 \mathbbm{1}_{(t \in T_{ij})} + L \delta_0^2 \mathbbm{1}_{ (\{ i_t,j_t\} = \{ i,j \})}~.$$ Arguing as above, from the Azuma-Hoeffding martingale inequality, we deduce that, with overwhelming probability, \begin{equation}\label{eq:jook2}
|r_{ij}| \leq L^2\sqrt {k'} \delta^2. \end{equation}
Similarly, repeating the argument leading to \eqref{eq:jook3} with $s_{ij}$ and the filtration $({\cal F}'_t)$ gives with overwhelming probability, \begin{equation}\label{eq:jook4}
|s_{ij}| \leq \sqrt{k'} \delta^2. \end{equation}
We note also that if $\mathcal T$ holds then $$
|a_{ij}| \leq 2 \sum_{t=1}^k a_t \leq 2 L^3 \delta^2 \delta^2_0 k + 2L^3 \delta_0^4 \frac{4k'}{N} \leq \sqrt{k'} \delta^2, $$
where the last inequality holds provided that $k \leq N^{5/3}$. So finally, from \eqref{eq:RkRij}-\eqref{eq:jook}-\eqref{eq:jook3}-\eqref{eq:jook2}-\eqref{eq:jook4}, we have proved that for a given $z = E + {\mathbf{i}} \eta$ such that $|E - 2 \sqrt N| \leq L^{c_0} N^{-1/6}$ and $\eta = N^{-1/6} L ^{-c_1}$, with overwhelming probability $$
\left| R^{[k]}_{ij} (z)- R_{ij} (z) \right| \leq 3 L ^2 \sqrt {k'} \delta^2~, $$ where the inequality holds provided that $k \leq N^{5/3}$.
Recall that $|R_{ij} (E + {\mathbf{i}} \eta) - R_{ij} ( E' + {\mathbf{i}} \eta) | \leq \eta^{-2} |E - E'|$. By a net argument (as in the proof of Lemma \ref{lem:resres0}), we deduce that with overwhelming probability, for all $z = E + {\mathbf{i}} \eta$ such that $|2 \sqrt N - E| \leq L^{c_0} N^{-1/6}$, $\left| R^{[k]}_{ij} (z)- R_{ij} (z) \right| \leq 4 L^2 \sqrt {k'} \delta^2$. This concludes the proof of Lemma \ref{lem:resresk}.
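For the reader's convenience, let us also spell out why this last estimate is stronger than the bound stated in Lemma \ref{lem:resresk} once $c_2$ is chosen large enough (the explicit threshold below is ours and is certainly not optimal). Since $\delta = L^{c'} N^{-5/6}$, $N \eta = N^{5/6} L^{-c_1}$ and $\sqrt{k'} \leq \sqrt k \leq N^{5/6} L^{-c_2/2}$, we get
$$
N \eta \left| R^{[k]}_{ij} (z) - R_{ij} (z) \right| \leq 4 L^{2 + 2c' - c_1} \sqrt{k'} \, N^{-5/6} \leq 4 L^{2 + 2c' - c_1 - c_2/2}~,
$$
which is at most $L^{-2}$ as soon as $L^{c_2/2 - 4 - 2c' + c_1} \geq 4$; in particular, any fixed $c_2 > 8 + 4c' - 2c_1$ works for all $N$ large enough.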
\subsection{Proof of Lemma \ref{lem:llk}}
Let $c_0$ be as in Lemma \ref{lem:resl} and $c >0$. We set $c_1 = c_0/2 +2c$ and let $\eta = N^{-1/6} L ^{-c_1}$. Let $p \in \{1,2\}$. We start by bounding $\min_{ j}|\lambda_p - \lambda_j^{[k]}|$ and $\min_{j}|\lambda^{[k]}_p - \lambda_j|$. Since $X$ and $X^{[k]}$ have the same distribution, we only prove that with overwhelming probability \begin{equation}\label{eq:pton}
\min_{ 1 \leq j \leq N}|\lambda_p - \lambda_j^{[k]}| \leq 2 L^{c_0/2} \eta. \end{equation}
By Lemma \ref{lem:resl}, with overwhelming probability, $|\lambda_p - 2 \sqrt N | \leq L^{c_0} N^{-1/6}$ and for some integer $1 \leq i\leq N$, $$ N \eta^{-1} \Im R (\lambda_p + {\mathbf{i}} \eta)_{ii} \geq \frac 1 2 \eta^{-2}~, $$ and, $$ N \eta^{-1} \Im R^{[k]} (\lambda_p + {\mathbf{i}} \eta)_{ii} \leq L^{c_0} \min_{1 \leq j \leq N} (\lambda_p - \lambda_j^{[k]})^{-2}~. $$ By Lemma \ref{lem:resresk}, we deduce that if $k \leq N^{5/3} L^{-c_2}$, with overwhelming probability, $$ \frac 1 4 \eta^{-2} \leq L^{c_0}\min_{1 \leq j \leq N} (\lambda_p - \lambda_j^{[k]})^{-2}~. $$ It proves \eqref{eq:pton}.
We may now conclude the proof of Lemma \ref{lem:llk}. Fix $\varepsilon >0$. As already noticed, from \cite[Theorem 2.7]{MR3253704}, there exists $\delta >0$ such that, with probability at least $1 - \varepsilon$, $\lambda_2 < \lambda - \delta N^{-1/6}$. From what precedes, with probability at least $1 - 2 \varepsilon$, $\mathcal{E}_\delta$ holds and for all $k \leq N^{5/3} L^{-c_2}$, we have $$
\max \left( \min_{ 1 \leq j \leq N}|\lambda^{[k]}_p - \lambda_j| , \min_{ 1 \leq j \leq N}|\lambda_p - \lambda_j^{[k]}| \right) \leq \alpha~, $$ with $\alpha = 2 L^{c_0/2} \eta$.
On this event, we readily find $|\lambda - \lambda^{[k]} | \leq \alpha$ and for some $p$, $|\lambda_p - \lambda^{[k]}_2| \leq \alpha$. Assume that this last inequality is false for $p \ne 2$. Since $2\alpha < \delta N^{-1/6}$, if $p \ne 2$, then $p \leq 3$ and we deduce that $\lambda_2 > \lambda_2^{[k]} + \alpha$. We note that, on our event, for some $q$, we have $|\lambda_2 - \lambda_q^{[k]}| \leq \alpha$. In particular, $\lambda_q^{[k]} > \lambda_2^{[k]}$. So necessarily, $q = 1$ and, from the triangle inequality, $| \lambda_2 - \lambda_1| \leq 2 \alpha$. This is a contradiction since $2\alpha < \delta N^{-1/6}$. It concludes the proof of Lemma \ref{lem:llk}.
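For the reader's convenience, the arithmetic behind the choice $c_1 = c_0/2 + 2c$ can be spelled out as follows (a routine check): with $\eta = N^{-1/6} L^{-c_1}$ we have
$$
\alpha = 2 L^{c_0/2} \eta = 2 N^{-1/6} L^{-2c}~,
$$
so that $\alpha \leq N^{-1/6} L^{-c}$ as soon as $L^{c} \geq 2$, and $2 \alpha < \delta N^{-1/6}$ as soon as $4 L^{-2c} < \delta$; both conditions hold for all $N$ large enough since $L^{c}\to\infty$, which is the form in which the bound of Lemma \ref{lem:llk} is obtained.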
{\textbf{Acknowledgments} We would like to thank Jaehun Lee for pointing out a mistake in the proof of Lemma \ref{lem:monotonicity_second} in an early version of this paper. We also would like to thank the referees for their valuable reports.}
\end{document}
\begin{document}
\begin{center}{\large\bf Glauberman correspondents and extensions of nilpotent block algebras}
\large\bf Lluis Puig and Yuanyang Zhou
\end{center}
\insert\footins{\scriptsize 2000 {\it Mathematics Subject Classification}. Primary 20C15, 20C20
The second author is supported by Program for New Century Excellent Talents in University and by NSFC (No. 11071091).}
\begin{abstract} The main purpose of this paper is to prove that the extensions of a nilpotent block algebra and its Glauberman correspondent block algebra are Morita equivalent under an additional group-theoretic condition (see Theorem 1.6); in particular, Harris and Linckelmann's theorem and Koshitani and Michler's theorem are covered (see Theorems 7.5 and 7.6). The key ingredients for carrying out our purpose are the two main results in K\"ulshammer and Puig's work {\it Extensions of nilpotent blocks}; we actually revisit them, giving completely new proofs of both and slightly improving the second one (see Theorems 3.5 and 3.14).
\end{abstract}
\noindent{\bf\large 1. Introduction}
\noindent{\bf 1.1.}\quad Let ${\cal O}$ be a complete discrete valuation ring with an algebraically closed residue field $k$ of characteristic $p$ and a quotient field ${\cal K}$ of characteristic 0. In addition, ${\cal K}$ is also assumed to be big enough for all finite groups that we consider below. Let $H$ be a finite group. We denote by ${\rm Irr}_{\cal K}(H)$ the set of all irreducible characters of $H$ over ${\cal K}$. Let $A$ be another finite group and assume that there is a group homomorphism $A\rightarrow {\rm Aut}(H)$. Such a group $H$ with an $A$-action is called an $A$-group. We denote by $H^A$ the subgroup of all $A$-fixed elements in $H$. Clearly $A$ acts on ${\rm Irr}_{\cal K}(H)$. We denote by ${\rm Irr}_{\cal K}(H)^A$ the set of all $A$-fixed elements in ${\rm Irr}_{\cal K}(H)$. Assume that $A$ is solvable and the order of $A$ is coprime to the order of $H$. By \cite[Theorem 13.1]{I}, there is a bijection $$\pi(H,\,A): {\rm Irr}_{\cal K}(H)^A \rightarrow {\rm Irr}_{\cal K}(H^A)$$ such that
\noindent{\bf 1.1.1.} For any normal subgroup $B$ of $A$, the bijection $\pi(H,\,B)$ maps ${\rm Irr}_{\cal K}(H)^A$ to ${\rm Irr}_{\cal K}(H^B)^A$, and on ${\rm Irr}_{\cal K}(H)^A$ we have $$\pi(H,\,A) = \pi(H^B,\,A/B) \circ \pi(H,\,B)\,.$$
\noindent{\bf 1.1.2.} If $A$ is a $q$-group for some prime $q$, then for any $\chi\in {\rm Irr}_{\cal K}(H)^A$, the corresponding irreducible character $\pi(H,\,A)(\chi)$ of $H^A$ is the unique irreducible constituent of ${\rm Res}^H_{H^A}(\chi)$ occurring with a multiplicity coprime to~$q$.
\noindent The character $\pi(H,\,A)(\chi)$ of $H^A$ is called the {\it Glauberman correspondent} of the character $\chi$ of $H$.
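As a toy illustration (ours, and not needed in the sequel): take $H$ cyclic of order $3$ and let $A$ be of order $2$ acting by inversion, so that $H^A=1$ and the coprimality assumption holds. The two non-trivial irreducible characters of $H$ are swapped by $A$, hence ${\rm Irr}_{\cal K}(H)^A=\{1_H\}$, and $\pi(H,\,A)$ necessarily sends $1_H$ to the unique irreducible character of $H^A$; this is consistent with 1.1.2, since ${\rm Res}^H_{H^A}(1_H)$ is the trivial character occurring with multiplicity $1$, which is coprime to $q=2$.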
\noindent{\bf 1.2.}\quad For any central idempotent $c$ of ${\cal O}H$, we denote by ${\rm Irr}_{\cal K}(H, c)$ the set of all irreducible characters of $H$ provided by some ${\cal O}Hc$-module. Let $b$ be a block of $H$ --- namely $b$ is a primitive central idempotent of ${\cal O}H$; then ${\cal O}Hb$ is called the {\it block algebra} corresponding to $b$. Assume that $A$ stabilizes the block $b$ and centralizes a defect group of $b$. Then, by \cite[Proposition 1 and Theorem 1]{W}, $A$ stabilizes all characters of ${\rm Irr}_{\cal K}(H, b)$ and there is a unique block ${\it w}(b)$ of ${\cal O}(H^A)$ such that $${\rm Irr}_{\cal K}(H^A, {\it w}(b))=\pi(H,\,A)({\rm Irr}_{\cal K}(H, b))\,;$$ moreover, there is a perfect isometry (see \cite{B1}) $$R_H^b: {\cal R}_{\cal K} (H, b)\rightarrow {\cal R}_{\cal K} (H^A, {\it w}(b))$$ such that $R_H^b(\chi)=\pm\pi(H,\,A)(\chi)$ for any $\chi\in {\rm Irr}_{\cal K}(H, b)$, where we denote by ${\cal R}_{\cal K} (H, b)$ and ${\cal R}_{\cal K} (H^A, {\it w}(b))$ the additive groups generated by ${\rm Irr}_{\cal K}(H, b)$ and ${\rm Irr}_{\cal K}(H^A, {\it w}(b))$. Such a block ${\it w}(b)$ is called the {\it Glauberman correspondent} of $b$ (see \cite{W}). Since a perfect isometry between blocks is often nothing but
the character-theoretic `shadow' of a derived equivalence, it seems reasonable
to ask whether there is a derived equivalence between a block and its Glauberman correspondent. In the last few years, some Morita equivalences between $b$ and ${\it w}(b)$ were found in the cases where $H$ is $p$-solvable or the defect
groups of $b$ are normal in $H$, which supply Glauberman correspondences
from ${\rm Irr}_{\cal K}(H, b)$ to ${\rm Irr}_{\cal K}(H^A, {\it w}(b))$ (see \cite{H}, \cite{KG} and \cite{H1}); moreover, all these Morita equivalences between $b$ and ${\it w}(b)$ are {\it basic} in the sense of~\cite{P1}.
\noindent{\bf 1.3.}\quad By induction, the groups $H$ and $H^A$ and the blocks $b$ and ${\it w}(b)$ in the main results of \cite{H}, \cite{KG} and \cite{H1} can be reduced to the situation where, for some $A$-stable normal subgroup $K$ of $H$, we have $H=H^A \cdot K$, the block $b$ is an $H$-stable block of $K$ with trivial or central defect group, and the block ${\it w}(b)$ is an $H^A$-stable block of $K^A$ with trivial or central defect group. Recall that the block $b$ of $H$ is called {\it nilpotent} (see \cite{P4}) if the quotient group $N_H(R_\varepsilon)/C_H(R)$ is a $p$-group for any local pointed group $R_\varepsilon$ on ${\cal O}Hb$. Blocks with trivial or central defect group are nilpotent and therefore, in these situations, ${\cal O}Hb$ and ${\cal O}(H^A){\it w}(b)$ are extensions of the nilpotent block algebras ${\cal O}Kb$ and ${\cal O}K^A{\it w}(b)$ respectively. K\"ulshammer and Puig already precisely described the algebraic structure of extensions of nilpotent block algebras (see \cite{KP} or Section 3 below) and these results can be applied to blocks of $p$-solvable groups (see \cite{P2}) and to blocks with normal defect groups (see \cite{R, K}). Thus, it is reasonable to seek a common generalization of the main results of \cite{H, KG, H1} in the setting of extensions of nilpotent block algebras.
\noindent{\bf 1.4.}\quad Let $G$ be another finite $A$-group having $H$ as an $A$-stable normal subgroup and consider the $A$-action on $H$ induced by the $A$-group $G$. We assume that $A$ stabilizes $b$ and denote by $N$ the stabilizer of $b$ in $G$. Clearly $N$ is $A$-stable. Set \begin{center}$ c={\rm Tr}_N^G (b)$ and $\alpha=\{c\}$\quad ; \end{center} then the idempotent $c$ is $A$-stable and $\alpha$ is an $A$-stable point of $G$ on the group algebra ${\cal O}H$ (the action of $G$ on ${\cal O}H$ is induced by conjugation). In particular, $G_\alpha$ is a pointed group on ${\cal O}H$. Let~$P$ be a defect group of $G_\alpha$; then, by \cite[Proposition 5.3]{KP}, $Q=P\cap H$ is a defect group of the block $b$ of~$H$.
\noindent{\bf Theorem 1.5.}\quad {\it Assume that $A$ centralizes $P$, that $A$ is solvable and that the orders of $G$ and $A$ are coprime. Set ${\it w}(c)={\rm Tr}_{N^A}^{G^A}({\it w}(b))$ and ${\it w}(\alpha)=\{{\it w}(c)\}$. Then, ${\it w}(\alpha)$ is a point of $G^A$ on the group algebra ${\cal O}(H^A)$ and $P$ is a defect group of the pointed group $(G^A)_{{\it w}(\alpha)}$ on ${\cal O}(H^A)$. Moreover, if $G=H\cdot G^A$ and the block $b$ of $H$ is nilpotent, we have \begin{center}${\rm Irr}_{\cal K}(G, c)={\rm Irr}_{\cal K}(G, c)^A$ and $\pi(G, A)({\rm Irr}_{\cal K}(G, c))={\rm Irr}_{\cal K}(G^A, {\it w}(c))$. \end{center}}
The following theorem shows that there is a ``{\it basic}'' Morita equivalence between ${\cal O}Gc$ and ${\cal O}G^A {\it w}(c)$; that is to say, this Morita equivalence induces basic Morita equivalences~\cite{P1} between corresponding block algebras.
\noindent{\bf Theorem 1.6.}\quad {\it Assume that $A$ centralizes $P$, that $A$ is solvable and that the orders of $G$ and $A$ are coprime. Set ${\it w}(c)={\rm Tr}_{N^A}^{G^A}({\it w}(b))$. Assume that $G=G^A\cdot H$ and that the block $b$ is nilpotent. Then, there is an ${\cal O}(H\times H^A)$-module $M$ inducing a basic Morita equivalence between ${\cal O}Hb$ and ${\cal O}(H^A) {\it w}(b)$, which can be extended to the inverse image $K$ in $N\times N^A$ of the ``diagonal'' subgroup of $N/H\times N^A/H^A$ in such a way that ${\rm Ind}^{G\times G^A}_K (M)$ induces a ``basic'' Morita equivalence between ${\cal O}Gc$ and ${\cal O}(G^A){\it w}(c)$. }
\noindent{\bf Remark 1.7.}\quad Since $G=H\cdot G^A$, we have $N=H\cdot N^A$ and then the inclusion $N^A\subset N$ induces a group isomorphism $N/H\cong N^A/H^A$.
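Indeed (a routine verification, spelled out here for convenience): since $H\leq N$ and $G = H\cdot G^A$, Dedekind's modular law gives $N = N\cap (H\cdot G^A) = H\cdot (N\cap G^A) = H\cdot N^A$, and the second isomorphism theorem yields
$$
N/H \;=\; H N^A / H \;\cong\; N^A / (N^A\cap H) \;=\; N^A / H^A~,
$$
using that $N^A\cap H = G^A\cap H = H^A$.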
We use pointed groups introduced by Lluis Puig. For more details on pointed groups, readers can see \cite{P5} or Paragraph 2.5 below. In Section 2, we introduce some notation and terminology. Section 3 revisits K\"ulshammer and Puig's main results on extensions of nilpotent blocks; the proof of the existence and uniqueness of the finite group $L$ (see
\cite[Theorem 1.8]{KP} and Theorem 3.5 below) is dramatically simplified; actually, Corollary 3.14 below slightly improves \cite[Theorem 1.12]{KP}; explicitly, $S_\gamma$ in Corollary 3.14 is unique up to determinant one. With the Glauberman correspondents of blocks due to Watanabe, in Section 4 we define Glauberman correspondents of extensions of blocks and compare the local structures of extensions of blocks and their Glauberman correspondents.
By Puig's structure theorem of nilpotent blocks, there is a bijection between the sets of irreducible characters of the nilpotent block $b$ of $H$ and of its defect group $Q$; in Section 5, for a suitable local point $\delta$ of $Q$, we prove that this bijection preserves $N_G(Q_\delta)$-actions on these sets. As a consequence, we obtain an $N_G(Q_\delta)$-stable irreducible character $\chi$ of $H$ such that $\chi$ lifts the unique irreducible Brauer character of the nilpotent block $b$ of $H$ and that the Glauberman correspondent character $\pi(H, A)(\chi)$ lifts the unique irreducible Brauer character of the Glauberman correspondent block ${\it w}(b)$ of $H^A$ (see Lemma 5.6).
Obviously, $N$ stabilizes the unique simple module in the nilpotent block $b$ of $H$; with this $N$-stable simple ${\cal O}Hb$-module, we construct an $A$-stable $k^*$-group $\skew3\hat {\bar N}^{^k}$ (see 2.3 and 3.13 below); since $N^A$ stabilizes the unique simple module of the nilpotent block ${\it w}(b)$ of $H^A$, a $k^*$-group $\widehat{\overline{N^A}}^{k}$ is similarly constructed. In Section 6, we prove that $\widehat{\overline{N^A}}^{k}$ and $(\skew3\hat{\bar N}^{^k})^A$ are isomorphic as $k^*$-groups (see Theorem 6.4). In Section 7, we use the improved version of K\"ulshammer and Puig's main result to prove our main Theorem 1.6.
\eject
\vskip 1cm \noindent{\bf\large 2. Notation and terminology}
\noindent{\bf 2.1.}\quad Throughout this paper, all ${\cal O}$-modules are ${\cal O}$-free finitely generated --- except in 2.4 below; all ${\cal O}$-algebras have identity elements, but their subalgebras need not have the same identity element. Let $\cal A$ be an ${\cal O}$-algebra; we denote by ${\cal A}^\circ$, ${\cal A}^*$, $Z({\cal A})$, $J({\cal A})$ and $1_{\cal A}$ the opposite ${\cal O}$-algebra of ${\cal A}$, the multiplicative group of all invertible elements of ${\cal A}$, the center of~${\cal A}$, the radical of ${\cal A}$ and the identity element of ${\cal A}$ respectively. Sometimes we write $1$ instead of $1_{\cal A}$. For any abelian group $V$, ${\rm id }_V$ denotes the identity automorphism on $V$. Let ${\cal B}$ be an ${\cal O}$-algebra; a homomorphism ${\cal F}: {\cal A}\rightarrow {\cal B}$ of ${\cal O}$-algebras is said to be an {\it embedding} if ${\cal F}$ is injective and we have
$${\cal F}({\cal A})={\cal F}(1_{\cal A}){\cal B}{\cal F}(1_{\cal A})\quad .$$
Let $S$ be a set and $G$ be a group acting on $S$. For any $g\in G$ and $s\in S$, we write the action of $g$ on $s$ as $s\cdot g$.
\noindent{\bf 2.2.}\quad Let $X$ be a finite group. An $X$-interior ${\,\hskip-1pt\cdot\hskip-1pt\,}$-algebra ${\cal A}$ is an ${\,\hskip-1pt\cdot\hskip-1pt\,}$-algebra ${\cal A}$ endowed with a group homomorphism $\rho:X\rightarrow {\cal A}^*$; for any $x, y\in X $ and $a\in {\cal A}$, we write $\rho(x)a\rho(y)$ as $x{\,\hskip-1pt\cdot\hskip-1pt\,} a{\,\hskip-1pt\cdot\hskip-1pt\,} y$ or $xay$ if there is no confusion. Let $\varrho: Y\rightarrow X$ be a group homomorphism; the ${\,\hskip-1pt\cdot\hskip-1pt\,}$-algebra ${\cal A}$ with the group homomorphism $\rho\circ\varrho: Y\rightarrow {\cal A}^*$ is an $Y$-interior ${\,\hskip-1pt\cdot\hskip-1pt\,}$-algebra and we denote it by ${\rm Res}_{\varrho}({\cal A})$. Let ${\cal A}'$ be another $X$-interior ${\,\hskip-1pt\cdot\hskip-1pt\,}$-algebra; an ${\,\hskip-1pt\cdot\hskip-1pt\,}$-algebra homomorphism ${\cal F}:{\cal A}\rightarrow {\cal A}'$ is said to be a homomorphism of $X$-interior ${\,\hskip-1pt\cdot\hskip-1pt\,}$-algebras if for any $x, y\in X $ and any $a\in {\cal A}$, we have ${\cal F}(xay)=x{\cal F}(a)y$. The tensor product ${\cal A}\bigotimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\cal A}'$ of ${\cal A}$ and ${\cal A}'$ is an $X$-interior ${\,\hskip-1pt\cdot\hskip-1pt\,}$-algebra with the group homomorphism $$ X\rightarrow ({\cal A}\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\cal A}')^*,\quad x\mapsto x1_{\cal A}\otimes x1_{{\cal A}'}\quad .$$ Let $Z$ be a subgroup of $X$ and let ${\cal B}$ be an ${\,\hskip-1pt\cdot\hskip-1pt\,} Z$-interior algebra. Obviously, the left and right multiplications by ${\,\hskip-1pt\cdot\hskip-1pt\,} Z$ on ${\cal B}$ define an $({\,\hskip-1pt\cdot\hskip-1pt\,} Z, {\,\hskip-1pt\cdot\hskip-1pt\,} Z)$-bimodule structure on ${\cal B}$. Set $${\rm Ind}_Z^X ({\cal B})={\,\hskip-1pt\cdot\hskip-1pt\,} X\otimes_{{\,\hskip-1pt\cdot\hskip-1pt\,} Z} {\cal B}\otimes_{{\,\hskip-1pt\cdot\hskip-1pt\,} Z} {\,\hskip-1pt\cdot\hskip-1pt\,} X $$ and then this the $({\,\hskip-1pt\cdot\hskip-1pt\,} X, {\,\hskip-1pt\cdot\hskip-1pt\,} X)$-bimodule ${\rm Ind}_Z^X ({\cal B})$ becomes an $X$-interior ${\,\hskip-1pt\cdot\hskip-1pt\,}$-algebra with the product $$(x\otimes b\otimes y)(x'\otimes b'\otimes y') =\cases{x\otimes b{\,\hskip-1pt\cdot\hskip-1pt\,} y x'\.b'\otimes y'&if $yx'\in Z$\cr {}&{}\cr 0 &otherwise\cr}$$ for any $x, y, x', y'\in X$ and any $b,b'\in {\cal B}{\,\hskip-1pt\cdot\hskip-1pt\,},$ and with the homomorphism ${\,\hskip-1pt\cdot\hskip-1pt\,} X\rightarrow {\rm Ind}_Z^X ({\cal B})$ mapping $x\in X$ onto $\sum_y xy\otimes 1\otimes y^{-1}$, where $y$ runs over a set of representatives for left cosets of $Z$ in $X$.
\noindent{\bf 2.3.}\quad A $k^*$-group with $k^*$-quotient $X$ is a group $\hat X$ endowed with an injective group homomorphism $\theta: k^*\rightarrow Z(\hat X)$ together with an isomorphism $\hat X/\theta(k^*)\cong X$; usually we omit to mention $\theta$ and the quotient $X=\hat X/\theta(k^*)$ is called the $k^*$-quotient of $\hat X$, writing $\lambda{\,\hskip-1pt\cdot\hskip-1pt\,} \hat x$ instead of $\theta(\lambda)\hat x$ for any $\lambda\in k^*$ and any $\hat x\in \hat X$. We denote by $\hat Y$ the inverse image of $Y$ in $\hat X$ for any subset $Y$ of~$X$ and, if no precision is needed, we often denote by $\hat x$ some lifting of an element $x\in X$. We denote by $\hat X^\circ$ the $k^*$-group with the same underlying group $\hat X$ endowed with the group homomorphism $\theta^{-1}: k^*\rightarrow Z(\hat X),{\,\hskip-1pt\cdot\hskip-1pt\,} \lambda\mapsto \theta(\lambda)^{-1}$. Let $\vartheta: Z\rightarrow X$ be a group homomorphism; we denote by ${\rm Res}_{\vartheta}(\hat X)$ the $k^*$-group formed by the group of pairs $(\hat x, y)\in \hat X\times Z$ such that $\vartheta(y)$ is the image of $\hat x$ in $X$, endowed with the group homomorphism mapping $\lambda\in k^*$ on $(\theta(\lambda), 1)$; up to suitable identifications, $Z$ is the $k^*$-quotient of ${\rm Res}_{\vartheta}(\hat X)$. Let $\hat U$ be another $k^*$-group with $k^*$-quotient $U$. A group homomorphism $\phi: \hat X\rightarrow \hat U$ is a homomorphism of $k^*$-groups if $\phi(\lambda{\,\hskip-1pt\cdot\hskip-1pt\,} \hat x)=\lambda{\,\hskip-1pt\cdot\hskip-1pt\,} \phi(\hat x)$ for any $\lambda\in k^*$ and $\hat x\in \hat X$. For more details on $k^*$-groups, please see \cite[{\,\hskip-1pt\cdot\hskip-1pt\,} 5]{P6}.
\noindent{\bf 2.4.}\quad Let $\hat X$ be a $k^*$-group with $k^*$-quotient $X$. By \cite[Charpter II, Proposition 8]{S}, there exists a canonical decomposition ${\,\hskip-1pt\cdot\hskip-1pt\,}^*\cong (1+J({\,\hskip-1pt\cdot\hskip-1pt\,}))\times k^*$, thus $k^*$ can be canonically regarded as a subgroup of ${\,\hskip-1pt\cdot\hskip-1pt\,}^*$. Set $${\,\hskip-1pt\cdot\hskip-1pt\,}_*\hat X={\,\hskip-1pt\cdot\hskip-1pt\,}\otimes_{{\,\hskip-1pt\cdot\hskip-1pt\,} k^*}{\,\hskip-1pt\cdot\hskip-1pt\,} \hat X \quad, $$ where the left ${\,\hskip-1pt\cdot\hskip-1pt\,} k^*$-module ${\,\hskip-1pt\cdot\hskip-1pt\,}\hat X$ and the right ${\,\hskip-1pt\cdot\hskip-1pt\,} k^*$-module ${\,\hskip-1pt\cdot\hskip-1pt\,}$ are defined by the left and right multiplication by $k^*$ on $\hat X$ and ${\,\hskip-1pt\cdot\hskip-1pt\,} ^*$ respectively. It is straightforward to verify that the ${\,\hskip-1pt\cdot\hskip-1pt\,}$-module ${\,\hskip-1pt\cdot\hskip-1pt\,}_*\hat X$ is an ${\,\hskip-1pt\cdot\hskip-1pt\,}$-algebra with the distributive multiplication $$(a_1\otimes\hat x_1)(a_2\otimes\hat {x}_2) =a_1a_2\otimes\hat x_1\hat{x}_2$$ for any $a_1,a_2\in {\,\hskip-1pt\cdot\hskip-1pt\,}$ and any $\hat x_1,\hat{x}_2 \in \hat X$.
\noindent{\bf 2.5.}\quad Let ${\cal A}$ be an $X$-algebra over ${\,\hskip-1pt\cdot\hskip-1pt\,}$; that is to say, ${\,\hskip-1pt\cdot\hskip-1pt\,}$ is endowed with a group homomorphism $\psi: X\rightarrow {\rm Aut}({\cal A})$, where ${\rm Aut}({\cal A})$ is the group of all ${\,\hskip-1pt\cdot\hskip-1pt\,}$-automorphisms of $A{\,\hskip-1pt\cdot\hskip-1pt\,};$ usually, we omit to mention $\psi{\,\hskip-1pt\cdot\hskip-1pt\,}.$ For any subgroup $Y$ of $X$, we denote by ${\cal A}^Y$ the ${\,\hskip-1pt\cdot\hskip-1pt\,}$-subalgebra of all $Y$-fixed elements in ${\cal A}$. A {\it pointed group{\,\hskip-1pt\cdot\hskip-1pt\,}} $Y_\beta$ on ${\cal A}$ consists of a subgroup $Y$ of $X$ and of an $({\cal A}^Y)^*$-conjugate class $\beta$ of primitive idempotents of ${\cal A}^Y$. We often say that $\beta$ is a {\it point{\,\hskip-1pt\cdot\hskip-1pt\,}} of $Y$ on ${\cal A}$. Obviously, $X$ acts on the set of all pointed groups on ${\cal A}$ by the equality $(Y_\beta)^x=Y^x_{\psi(x^{-1})(\beta)}$ and we denote by $N_X(Y_\beta)$ the stabilizer of $Y_\beta$ in $X$ for any pointed group $Y_\beta$ on ${\cal A}$. Another pointed group $Z_\gamma$ is said {\it contained in{\,\hskip-1pt\cdot\hskip-1pt\,}} $Y_\beta$ if $Z\leq Y$ and there exist some $i\in \beta$ and $j\in \gamma$ such that $ij=ji=j$. For a subgroup $U$ of $G$, set $${\cal A}(U)=k\otimes _{\,\hskip-1pt\cdot\hskip-1pt\,} ({\cal A}^U/\sum_V {\cal A}^U_V)\quad$$ where $V$ runs over the set of proper subgroups of $U$ and ${\cal A}^U_V$ is the image of the relative trace map ${\rm Tr}_V^U: {\cal A}^V\rightarrow {\cal A}^U$; the canonical surjective homomorphism ${\rm Br}^{\cal A}_U: {\cal A}^U\rightarrow {\cal A}(U)$ is called the {\it Brauer homomorphism{\,\hskip-1pt\cdot\hskip-1pt\,}} of the $X$-algebra ${\cal A}$ at $U$. When ${\cal A}$ is equal to the group algebra ${\,\hskip-1pt\cdot\hskip-1pt\,} X$, the homomorphism $kC_X(U)\rightarrow {\cal A}(U)$ sending $x\in C_X(U)$ onto the image of $x$ in ${\cal A}(U)$ is an isomorphism, through which we identify ${\cal A}(U)$ with $kC_X(U)$. A pointed group $U_\gamma$ on $\cal A$ is said {\it local{\,\hskip-1pt\cdot\hskip-1pt\,}} if the image of $\gamma$ in ${\cal A}(U)$ is not equal to $\{0{\,\hskip-1pt\cdot\hskip-1pt\,}{\,\hskip-1pt\cdot\hskip-1pt\,},$ which forces $U$ to be a $p$-group; then, a local pointed group $U_\gamma$ is said a {\it defect pointed group{\,\hskip-1pt\cdot\hskip-1pt\,}} of a pointed group $Y_\beta$ on $\cal A$ if $U_\gamma\leq Y_\beta$ and we have $\beta\subset {\rm Tr}_U^Z({\cal A}^U{\,\hskip-1pt\cdot\hskip-1pt\,} \gamma{\,\hskip-1pt\cdot\hskip-1pt\,} A^U)$, where ${\cal A}^U{\,\hskip-1pt\cdot\hskip-1pt\,} \gamma{\,\hskip-1pt\cdot\hskip-1pt\,} A^U$ is the ideal of ${\cal A}^U$ generated by~$\gamma$. Let $c$ be a block of $X{\,\hskip-1pt\cdot\hskip-1pt\,};$ then $\{c{\,\hskip-1pt\cdot\hskip-1pt\,}$ is a point of $X$ on ${\,\hskip-1pt\cdot\hskip-1pt\,} X$ and if $P_\gamma$ is a defect pointed group of $X_{\{c{\,\hskip-1pt\cdot\hskip-1pt\,}}$ then $P$ is a defect group of $c$.
\vskip 1cm \noindent{\bf\large 3. Extensions of nilpotent blocks revisited}
In this section, we assume that ${\,\hskip-1pt\cdot\hskip-1pt\,}$ is a complete discrete valuation ring with an algebraically closed residue field of characteristic $p$.
\noindent {\bf 3.1.}\quad Let $G$ be a finite group, $H$ be a normal subgroup of $G$ and $b$ be a block of $H$ over~${\cal O}$. Denote by $N$ the stabilizer of $b$ in $G$ and set $\bar N=N/H$. Obviously, $\beta=\{b\}$ is a point of $H$ and $N$ on ${\cal O}H$ and there is a unique pointed group $G_\alpha$ on ${\cal O}H$ such that $$H_\beta\leq N_\beta\leq G_\alpha\quad .$$ Let $Q_\delta$ be a defect pointed group of~$H_\beta$ and $P_\gamma$ be a defect pointed group of $N_\beta$ such that $Q_\delta\leq P_\gamma$; by \cite[Proposition 5.3]{KP}, we have $Q=P\cap H$ and, since we have \cite[1.7]{KP} $${\cal O}G\,{\rm Tr}_N^G( b)\cong {\rm Ind}_N^G ({\cal O}N b) \quad ,$$ it is easily checked that $P_\gamma$ is also a {\it defect pointed group} of $G_\alpha$ \cite[1.12]{P3}. Assume that the block $b$ is {\it nilpotent}; it follows from \cite[Proposition~6.5]{KP} that $b$ remains a {nilpotent block} of $H\cdot R$ for any subgroup~$R$ of~$P$, and from \cite[Theorem~6.6]{KP} that there is a unique {local point} $\varepsilon$ of $R$ on ${\cal O}H$ such that~$R_\varepsilon\leq P_\gamma$.
\noindent {\bf £3.2.}\quad Set ${\cal A} = {\,\hskip-1pt\cdot\hskip-1pt\,} Nb$ and ${\cal B} = {\,\hskip-1pt\cdot\hskip-1pt\,} Hb{\,\hskip-1pt\cdot\hskip-1pt\,}$. Choosing $j\in \delta$ and $i\in \gamma$ such that $ij=ji=j{\,\hskip-1pt\cdot\hskip-1pt\,},$ we set \begin{center} ${\cal A}_\gamma=({\,\hskip-1pt\cdot\hskip-1pt\,} G)_\gamma=i{\cal A} i$, ${\cal B}_\gamma=({\,\hskip-1pt\cdot\hskip-1pt\,} H)_\gamma=i{\cal B}i$ and ${\cal B}_\delta=({\,\hskip-1pt\cdot\hskip-1pt\,} H)_\delta=j{\cal B}j$. \end{center} Then ${\cal A}_\gamma$ is a $P$-interior algebra with the group homomorphism $P\rightarrow {\cal A}_\gamma^*$ mapping $u$ onto $ui$ for any $u\in P$, ${\cal B}_\gamma$ is a $P$-stable subalgebra of ${\cal A}_{\gamma}$ and ${\cal B}_\delta$ is a $Q$-interior algebra with the group homomorphism $Q\rightarrow {\cal B}_\delta^*$ mapping $v\in Q$ onto $vj$ for any $v\in Q$. Clearly $\cal A$ is an $N/H$-graded algebra with the $\bar x$-component ${\,\hskip-1pt\cdot\hskip-1pt\,} H x b$, where $\bar x\in N/H$ and $x$ is a representative of $\bar x$ in $N$. Since $i$ belongs to the $1$-component $\cal B$, ${\cal A}_\gamma$ is an $N/H$-graded algebra with the $\bar x$-component $i({\,\hskip-1pt\cdot\hskip-1pt\,} H x)i$.
\noindent {\bf £3.3.}\quad In \cite{KP} K\"ulshammer and Puig describe the structure of any block of $G$ lying over $b$ in terms of a new finite group $L$ which need not be {involved{\,\hskip-1pt\cdot\hskip-1pt\,}} in $G$ \cite[Theorem~1.8]{KP}. More explicitly, $L$ is a {group extension{\,\hskip-1pt\cdot\hskip-1pt\,}} of $\bar N$ by~$Q$ holding {strong uniqueness{\,\hskip-1pt\cdot\hskip-1pt\,}} properties. In order to prove these properties, in \cite{KP} the group $L$ is exhibited inside a suitable ${\,\hskip-1pt\cdot\hskip-1pt\,}\hbox{-}$algebra \cite[Theorem~8.13]{KP}, demanding a huge effort. But, as a matter of fact, these properties can be obtained {directly{\,\hskip-1pt\cdot\hskip-1pt\,}} from the so-called {local structure{\,\hskip-1pt\cdot\hskip-1pt\,}} of $G$ over ${\,\hskip-1pt\cdot\hskip-1pt\,} H b{\,\hskip-1pt\cdot\hskip-1pt\,},$ a fact that we only have understood recently. Then, with these uniqueness properties in hand, the second main result \cite[Theorem~1.12]{KP} follows quite easily. With the notation and framework of \cite{KP}, we completely develop both new proofs.
\noindent {\bf £3.4.}\quad Denote by ${\,\hskip-1pt\cdot\hskip-1pt\,}_{(b,H,G)}$ the category --- called the {\it extension category{\,\hskip-1pt\cdot\hskip-1pt\,}} associated to $G{\,\hskip-1pt\cdot\hskip-1pt\,},$ $H$ and $b$ --- where the objects are all the subgroups of $P$ and, for any pair of subgroups $R$ and $T$ of $P{\,\hskip-1pt\cdot\hskip-1pt\,},$ the morphisms from $T$ to $R$ are the pairs $(\psi_x,\bar x)$ formed by an injective group homomorphism $\psi_x{\,\hskip-1pt\cdot\hskip-1pt\,}\colon T\to R$ and an element $\bar x$ of $\bar N$ both determined by an element $x\in N$ fulfilling $T_\nu{\,\hskip-1pt\cdot\hskip-1pt\,} (R_\varepsilon)^x$ where $\varepsilon$ and $\nu$ are the respective local points of $R$ and~$T$ on ${\,\hskip-1pt\cdot\hskip-1pt\,} H$ determined by~$P_\gamma$ --- in general, we should consider the {\it $(b,N)\hbox{-}$Brauer pairs{\,\hskip-1pt\cdot\hskip-1pt\,}} over the {$p\hbox{-}$permutation $N\hbox{-}$algebra ${\,\hskip-1pt\cdot\hskip-1pt\,} H b${\,\hskip-1pt\cdot\hskip-1pt\,}} [5,~Definition~1.6 and Theorem~1.8] but, in our situation, they coincide with the {local pointed groups{\,\hskip-1pt\cdot\hskip-1pt\,}} over this $N\hbox{-}$algebra. The composition in ${\,\hskip-1pt\cdot\hskip-1pt\,}_{(b,H,G)}$ is determined by the composition of group homomorphisms and by the product in $\bar N {\,\hskip-1pt\cdot\hskip-1pt\,}.$
\noindent {\bf Theorem £3.5.}\quad {\it There is a triple formed by a finite group $L$ and by two group homomorphisms $$\tau : P\longrightarrow L\quad and\quad \bar\pi : L\longrightarrow \bar N \leqno £3.5.1\phantom{.}$$ such that $\tau$ is injective, that $\bar\pi$ is surjective, that we have ${\rm Ker}(\bar\pi) = \tau (Q)$ and $\bar\pi\big(\tau (u)\big) =\bar u$ for any $u\in P{\,\hskip-1pt\cdot\hskip-1pt\,},$ and that these homomorphisms induce an equivalence of categories $${\,\hskip-1pt\cdot\hskip-1pt\,}_{(b,H,G)} \cong {\,\hskip-1pt\cdot\hskip-1pt\,}_{(1,\tau (Q),L)}. \leqno £3.5.2$$
\noindent Moreover, for another such a triple $L'{\,\hskip-1pt\cdot\hskip-1pt\,},$ $\tau'$ and $\bar\pi'{\,\hskip-1pt\cdot\hskip-1pt\,},$ there is a group isomorphism $\lambda{\,\hskip-1pt\cdot\hskip-1pt\,}\colon L\cong L'{\,\hskip-1pt\cdot\hskip-1pt\,},$ unique up to conjugation, fulfilling $$\lambda\circ\tau = \tau'\quad and \quad \bar\pi'\circ\lambda = \bar\pi{\,\hskip-1pt\cdot\hskip-1pt\,}.$${\,\hskip-1pt\cdot\hskip-1pt\,}}
\par \noindent {\bf Proof:} Set $Z = Z(Q){\,\hskip-1pt\cdot\hskip-1pt\,},$ $M = N_G (Q_\delta)$ and ${\,\hskip-1pt\cdot\hskip-1pt\,} = {\,\hskip-1pt\cdot\hskip-1pt\,}_{(b,H,G)}{\,\hskip-1pt\cdot\hskip-1pt\,},$ denote by ${\,\hskip-1pt\cdot\hskip-1pt\,} (R,T)$ the set of ${\,\hskip-1pt\cdot\hskip-1pt\,}\hbox{-}$morphisms from $T$ to $R{\,\hskip-1pt\cdot\hskip-1pt\,},$ and write ${\,\hskip-1pt\cdot\hskip-1pt\,} (R)$ instead of ${\,\hskip-1pt\cdot\hskip-1pt\,} (R,R){\,\hskip-1pt\cdot\hskip-1pt\,};$ by the very definition of the category ${\,\hskip-1pt\cdot\hskip-1pt\,}{\,\hskip-1pt\cdot\hskip-1pt\,},$ we have the exact sequence $$1\longrightarrow C_H (Q)\longrightarrow M\longrightarrow {\,\hskip-1pt\cdot\hskip-1pt\,} (Q) \longrightarrow 1 ; $$ it is clear that $M$ contains $P$ and that we have $C_H (Q)\cap P = Z{\,\hskip-1pt\cdot\hskip-1pt\,};$ moreover, denoting by ${\,\hskip-1pt\cdot\hskip-1pt\,}_P (Q)$ the image of $P$ in ${\,\hskip-1pt\cdot\hskip-1pt\,} (Q){\,\hskip-1pt\cdot\hskip-1pt\,},$ it is easily checked from \cite[~Proposition~5.3]{KP} that ${\,\hskip-1pt\cdot\hskip-1pt\,}_P (Q)$ is a Sylow $p\hbox{-}$subgroup of~${\,\hskip-1pt\cdot\hskip-1pt\,} (Q){\,\hskip-1pt\cdot\hskip-1pt\,}.$
We claim that the element $\bar h$ induced by $P$ in the second cohomology group ${\Bbb H}^2 \big({\,\hskip-1pt\cdot\hskip-1pt\,}_P (Q),Z\big)$ belongs to the image of~${\Bbb H}^2 ({\,\hskip-1pt\cdot\hskip-1pt\,} (Q),Z){\,\hskip-1pt\cdot\hskip-1pt\,}.$ Indeed, according to in~
\cite[~Ch.~XII,~Theorem~10.1]{CE},
it suffices to prove that, for any subgroup $R$ of $P$ containing~$Z$ and any $(\varphi_x,\bar x)\in {\,\hskip-1pt\cdot\hskip-1pt\,} (Q)$ such that $$(\varphi_x,\bar x)\circ{\,\hskip-1pt\cdot\hskip-1pt\,}_{{\,\hskip-1pt\cdot\hskip-1pt\,} R}(Q)\circ (\varphi_x,\bar x)^{-1}{\,\hskip-1pt\cdot\hskip-1pt\,} {\,\hskip-1pt\cdot\hskip-1pt\,}_{\!P} (Q), \leqno £3.5.3$$ the restriction ${\rm res}_{(\varphi_x,\bar x)} (\bar h)$ of $\bar h$ {via{\,\hskip-1pt\cdot\hskip-1pt\,}} the conjugation by $(\varphi_x,\bar x)$ and the element of ${\Bbb H}^2\big({\,\hskip-1pt\cdot\hskip-1pt\,}_R(Q), Z \big)$ determined by $R$ coincide; actually, we may assume that $R$ contains $Q{\,\hskip-1pt\cdot\hskip-1pt\,}.$ Thus, $x$ normalizes $Q_\delta$ and inclusion~£3.5.3 forces $$C_H(Q){\,\hskip-1pt\cdot\hskip-1pt\,} R {\,\hskip-1pt\cdot\hskip-1pt\,} \big(C_H (Q){\,\hskip-1pt\cdot\hskip-1pt\,} P\big)^x; $$ in particular, respectively denoting by $\lambda$ and $\mu$ the points of $C_H (Q){\,\hskip-1pt\cdot\hskip-1pt\,} P$ and $C_H (Q){\,\hskip-1pt\cdot\hskip-1pt\,} R$ on ${\,\hskip-1pt\cdot\hskip-1pt\,} H$ such that $\big(C_H (Q){\,\hskip-1pt\cdot\hskip-1pt\,} P\big)_\lambda$ and $\big(C_H (Q){\,\hskip-1pt\cdot\hskip-1pt\,} R\big)_\mu$ contain $Q_\delta$ \cite[Lemma~3.9]{P4}, by uniqueness we have $$\big(C_H (Q){\,\hskip-1pt\cdot\hskip-1pt\,} R\big)_\mu{\,\hskip-1pt\cdot\hskip-1pt\,} \big(C_H (Q){\,\hskip-1pt\cdot\hskip-1pt\,} P\big)_\lambda $$
and, with the notation above, it follows from \cite[~Proposition~3.5]{KP} that $P_\gamma$ and $R_\varepsilon$ are {defect pointed groups{\,\hskip-1pt\cdot\hskip-1pt\,}} of the respective pointed groups $\big(C_H (Q){\,\hskip-1pt\cdot\hskip-1pt\,} P\big)_\lambda$ and $\big(C_H (Q){\,\hskip-1pt\cdot\hskip-1pt\,} R\big)_\mu{\,\hskip-1pt\cdot\hskip-1pt\,};$ consequently, there is $z\in C_H (Q)$ fulfilling $R_\varepsilon {\,\hskip-1pt\cdot\hskip-1pt\,} (P_\gamma)^{zx}$ \cite[~Theorem~1.2]{P5}. That is to say, the conjugation by $zx$ induces a group homomorphism $R\to P$ mapping $Z$ onto $Z$ and inducing the element $(\psi_{zx},\overline{zx})$ of ${\,\hskip-1pt\cdot\hskip-1pt\,} (P,R)$ which extends $(\varphi_x,\bar x){\,\hskip-1pt\cdot\hskip-1pt\,},$ so that the map $${\rm res}_{(\varphi_x,\bar x)} : \Bbb H^2\big({\,\hskip-1pt\cdot\hskip-1pt\,}_P (Q),Z\big)\longrightarrow \Bbb H^2\big({\,\hskip-1pt\cdot\hskip-1pt\,}_R (Q),Z\big) $$ sends $\bar h$ to the element of~${\Bbb H}^2\big({\,\hskip-1pt\cdot\hskip-1pt\,}_R(Q), Z \big)$ determined by~$R$ \cite[~Chap.~XIV, Theorem~4.2]{CE}.
In particular, the corresponding element of ${\Bbb H}^2 \big({\,\hskip-1pt\cdot\hskip-1pt\,} (Q),Z\big)$ determines a group extension $$1\longrightarrow Z\buildrel \tau \over\longrightarrow L \buildrel \pi \over \longrightarrow {\,\hskip-1pt\cdot\hskip-1pt\,} (Q)\longrightarrow 1 $$ and, since $\bar h\in \Bbb H^2\big({\,\hskip-1pt\cdot\hskip-1pt\,}_P (Q),Z\big)$ is the image of this element, there is a {group extension{\,\hskip-1pt\cdot\hskip-1pt\,}} homomorphism $\tau{\,\hskip-1pt\cdot\hskip-1pt\,}\colon P\to L$ \cite[~Chap.~XIV, Theorem~4.2]{CE}; it is clear that $\tau$ is injective and, since ${\,\hskip-1pt\cdot\hskip-1pt\,}_P (Q)$ is a Sylow $p$-subgroup of ${\,\hskip-1pt\cdot\hskip-1pt\,} (Q){\,\hskip-1pt\cdot\hskip-1pt\,},$ ${\rm Im}(\tau)$ is a Sylow $p\hbox{-}$subgroup of $L{\,\hskip-1pt\cdot\hskip-1pt\,};$ moreover, since $N = H{\,\hskip-1pt\cdot\hskip-1pt\,} M$ \cite[Theorem~1.2]{P5}, we have $$\bar N \cong M/C_H (Q){\,\hskip-1pt\cdot\hskip-1pt\,} Q\cong {\,\hskip-1pt\cdot\hskip-1pt\,} (Q)/{\,\hskip-1pt\cdot\hskip-1pt\,}_Q (Q) ;$$ in particular, $\pi$ determines a group homomorphism $\bar\pi{\,\hskip-1pt\cdot\hskip-1pt\,}\colon L\to \bar N$ and, since $\tau$ is a {group extension{\,\hskip-1pt\cdot\hskip-1pt\,}} homomorphism, we get $\bar\pi\big(\tau (u)\big) = \bar u$ for any $u\in P$ and may choose $\pi$ in such a way that we have $$y\tau (v)y^{-1} = \tau\big(\varphi_x (v)\big) \leqno £3.5.4\phantom{.}$$ for any $y\in L$ and any $v\in Q$ where $\pi (y) = (\varphi_x,\bar x)$ for some $x\in N{\,\hskip-1pt\cdot\hskip-1pt\,}.$ Then, we claim that, up to a suitable modification of our choice of $\tau{\,\hskip-1pt\cdot\hskip-1pt\,},$ the group $L$ endowed with $\tau$ and $\bar\pi$ fulfills the conditions above; set $\hat{\,\hskip-1pt\cdot\hskip-1pt\,} = {\,\hskip-1pt\cdot\hskip-1pt\,}_{(1,\tau (Q),L)}$ for short.
For any pair of subgroups $R$ and $T$ of $P$ containing~$Q{\,\hskip-1pt\cdot\hskip-1pt\,},$ since we have $H\cap R = Q = H\cap T{\,\hskip-1pt\cdot\hskip-1pt\,},$ denoting by $\varepsilon$ and $\nu$ the respective local points of~$R$ and $T$ such that $P_\gamma$ contains $R_\varepsilon$ and $T_\nu{\,\hskip-1pt\cdot\hskip-1pt\,},$ these local pointed groups contain~$Q_\delta$ and, in particular, any ${\,\hskip-1pt\cdot\hskip-1pt\,}\hbox{-}$morphism $$(\psi_x,\bar x) : T\longrightarrow R $$ determines an element $(\varphi_x,\bar x)$ of ${\,\hskip-1pt\cdot\hskip-1pt\,} (Q)$ fulfilling $$(\varphi_x,\bar x)\circ {\,\hskip-1pt\cdot\hskip-1pt\,}_T (Q)\circ (\varphi_x,\bar x)^{-1} \subset {\,\hskip-1pt\cdot\hskip-1pt\,}_R (Q) \quad .$$ Thus, for any $y\in L$ such that $\pi (y) = (\varphi_x,\bar x){\,\hskip-1pt\cdot\hskip-1pt\,},$ we have $$y{\,\hskip-1pt\cdot\hskip-1pt\,}\tau (T)\,y^{-1}{\,\hskip-1pt\cdot\hskip-1pt\,} \tau (R) \quad ;$$ more precisely, for any $w\in T$ and any $v\in Q{\,\hskip-1pt\cdot\hskip-1pt\,},$ from equality~£3.5.4 we get $$y{\,\hskip-1pt\cdot\hskip-1pt\,}\tau (v^w)\,y^{-1} = \tau \big(\varphi_x (v^w)\big)
= \tau \big(\varphi_x(v)\big)^{\tau (\psi_x (w))} \quad ;$$
moreover, since $x T x^{-1}{\,\hskip-1pt\cdot\hskip-1pt\,} R{\,\hskip-1pt\cdot\hskip-1pt\,},$ we have
$$\bar\pi \big(y{\,\hskip-1pt\cdot\hskip-1pt\,}\tau (w)\,y^{-1}\big) = \bar x{\,\hskip-1pt\cdot\hskip-1pt\,}\bar w{\,\hskip-1pt\cdot\hskip-1pt\,} \bar x^{-1}
= \bar\pi \Big(\tau (\psi_x (w))\Big)
\quad .$$
Hence, for any $w\in T$ and a suitable $\theta_x (w)\in Z{\,\hskip-1pt\cdot\hskip-1pt\,},$ we get
$$y{\,\hskip-1pt\cdot\hskip-1pt\,}\tau \big(w{\,\hskip-1pt\cdot\hskip-1pt\,}\theta_x (w)\big)\,y^{-1} = \tau (\psi_x (w)) \quad .$$
Conversely, since $R$ and $T$ have a unique (local) point on ${\,\hskip-1pt\cdot\hskip-1pt\,} Q{\,\hskip-1pt\cdot\hskip-1pt\,},$ any
$\hat{\,\hskip-1pt\cdot\hskip-1pt\,}\hbox{-}$morphism from $T$ to $R$ induced by an element $y$ of $L$
determines an element $\pi (y) = (\varphi_x,\bar x)$ of ${\,\hskip-1pt\cdot\hskip-1pt\,} (Q){\,\hskip-1pt\cdot\hskip-1pt\,},$ for some
$x\in N{\,\hskip-1pt\cdot\hskip-1pt\,},$ which still fulfills
$$(\varphi_x,\bar x)\circ {\,\hskip-1pt\cdot\hskip-1pt\,}_T (Q)\circ (\varphi_x,\bar x)^{-1} \subset {\,\hskip-1pt\cdot\hskip-1pt\,}_R (Q) \quad ;$$ thus, as above, $x$ normalizes $Q_\delta$ and this inclusion forces $$C_H(Q){\,\hskip-1pt\cdot\hskip-1pt\,} T {\,\hskip-1pt\cdot\hskip-1pt\,} \big(C_H (Q){\,\hskip-1pt\cdot\hskip-1pt\,} R\big)^x \quad .$$ Once again, respectively denoting by $\lambda$ and $\mu$ the points of $C_H (Q){\,\hskip-1pt\cdot\hskip-1pt\,} R$ and $C_H (Q){\,\hskip-1pt\cdot\hskip-1pt\,} T$ on ${\,\hskip-1pt\cdot\hskip-1pt\,} H$ such that
$\big(C_H (Q){\,\hskip-1pt\cdot\hskip-1pt\,} R\big)_\lambda$ and $\big(C_H (Q){\,\hskip-1pt\cdot\hskip-1pt\,} T\big)_\mu$ contain
$Q_\delta$ \cite[ Lemma~3.9]{P4}, and by $\varepsilon$ and $\nu$ the local points of $R$ and $T$
on ${\,\hskip-1pt\cdot\hskip-1pt\,} H$ such that $P_\gamma$ contains $R_\varepsilon$ and~$T_\nu{\,\hskip-1pt\cdot\hskip-1pt\,},$ it follows from
\cite[~Proposition~3.5]{KP} that
$R_\varepsilon$ and $T_\nu$ are defect pointed groups of the respective pointed groups $\big(C_H (Q){\,\hskip-1pt\cdot\hskip-1pt\,} R\big)_\lambda$ and $\big(C_H (Q){\,\hskip-1pt\cdot\hskip-1pt\,} T\big)_\mu{\,\hskip-1pt\cdot\hskip-1pt\,};$ since by uniqueness we have $$\big(C_H (Q){\,\hskip-1pt\cdot\hskip-1pt\,} T\big)_\mu{\,\hskip-1pt\cdot\hskip-1pt\,} \big(C_H (Q){\,\hskip-1pt\cdot\hskip-1pt\,} R\big)_\lambda ,$$
there is $z\in C_H (Q)$ fulfilling $T_\nu {\,\hskip-1pt\cdot\hskip-1pt\,} (R_\varepsilon)^{zx}$ \cite[Theorem~1.2]{P5}. That is to say, the conjugation by $zx$ induces a group homomorphism $\psi_{zx}{\,\hskip-1pt\cdot\hskip-1pt\,}\colon T\to R$ mapping $Z$ onto $Z$ and inducing the element $(\psi_{zx},\overline{zx})$ of ${\,\hskip-1pt\cdot\hskip-1pt\,} (R,T)$ which extends $(\varphi_x,\bar x){\,\hskip-1pt\cdot\hskip-1pt\,};$ hence, as above, for any $w\in T$ and a suitable $\theta_y (w)\in Z$ we get
$$y{\,\hskip-1pt\cdot\hskip-1pt\,}\tau \big(w{\,\hskip-1pt\cdot\hskip-1pt\,}\theta_y (w)\big)\,y^{-1} = \tau (\psi_{zx} (w)).
\leqno £3.5.5$$
We claim that, for a suitable choice of $\tau{\,\hskip-1pt\cdot\hskip-1pt\,},$ the elements $\theta_x (w)$ and $\theta_y (w)$ are always trivial; then, the equivalence of categories~£3.5.2 will be an easy consequence of the above correspondences. Above, for any~$y\in L$ such that $\tau (T)\subset \tau (R)^y$ we have found an element $\big(\psi_y,\bar\pi (y)\big)\in {\,\hskip-1pt\cdot\hskip-1pt\,} (R,T)$ lifting~$\pi (y)$ in such a way that, for any~$w\in T{\,\hskip-1pt\cdot\hskip-1pt\,},$ we have $$\tau \big(w{\,\hskip-1pt\cdot\hskip-1pt\,}\theta_y (w)\big) = \tau\big(\psi_y (w)\big)^y \leqno £3.5.6\phantom{.}$$ for a suitable $\theta_y (w)\in Z{\,\hskip-1pt\cdot\hskip-1pt\,};$ note that, according to equality~£3.5.4, for any $v\in Q$ we have $\theta_y (v) = 1{\,\hskip-1pt\cdot\hskip-1pt\,},$ and whenever $y$ belongs to $\tau (R)$ we may choose $\psi_y$ in such a way that $\theta_y (w) = 1{\,\hskip-1pt\cdot\hskip-1pt\,}.$
In this situation, for any $w,w'\in T{\,\hskip-1pt\cdot\hskip-1pt\,},$ we get \begin{eqnarray*}
\tau \big(ww'\theta_y (ww')\big) &=& \tau \big(\psi_y (ww')\big)^y \\
&=& \tau\big(\psi_y (w) \big)^y\tau\big(\psi_y (w') \big)^y \\
&=& \tau \big(w{\,\hskip-1pt\cdot\hskip-1pt\,}\theta_y (w)\big)\tau \big(w'\theta_y (w')\big) \\
&=& \tau\big(w{\,\hskip-1pt\cdot\hskip-1pt\,}\theta_y (w)\,w'\theta_y (w')\big) \\
&=& \tau \big(ww'\theta_y (w)^{w'}\theta_y (w')\big) \end{eqnarray*} and therefore, since $\tau$ is injective, we still get $$\theta_y (ww') = \theta_y (w)^{w'}\theta_y (w') \quad ;$$ in particular, for any $z\in Z$ we have $$\theta_y (wz) = \theta_y (w)^z{\,\hskip-1pt\cdot\hskip-1pt\,}\theta_y (z) = \theta_y (w) \quad .$$ In other words, the map $\theta_y$ determines a $Z\hbox{-}$valued $1\hbox{-}${cocycle{\,\hskip-1pt\cdot\hskip-1pt\,}} from the image $\tilde T$ of~$T$ in $\widetilde{\rm Aut}(Q) = {\rm Out} (Q){\,\hskip-1pt\cdot\hskip-1pt\,}.$
Actually, the {cohomology class{\,\hskip-1pt\cdot\hskip-1pt\,}}~$\bar\theta_y$ of this $1\hbox{-}$cocycle does not depend on
the choice of~$\psi_y{\,\hskip-1pt\cdot\hskip-1pt\,};$ indeed, if another choice $\psi'_y$ determines~$\theta'_y{\,\hskip-1pt\cdot\hskip-1pt\,}\colon T\to Z$
then we clearly have $\psi'_y (T) = \psi_y (T)$ and, according to our argument above, there is $z\in C_H (Q)$ such that $$(T_\nu)^z = T_\nu\quad{\rm and}\quad \psi'_y = \psi_y\circ \kappa_z \quad ,$$ where $\kappa_z{\,\hskip-1pt\cdot\hskip-1pt\,}\colon T\to T$ denotes the conjugation by $z{\,\hskip-1pt\cdot\hskip-1pt\,};$ actually, we still have $$[z,T]{\,\hskip-1pt\cdot\hskip-1pt\,} H\cap T = Q \quad .$$ But, since $T_\nu$ is a defect pointed group of $\big(C_H (Q){\,\hskip-1pt\cdot\hskip-1pt\,} T\big)_\mu$ and, according to [4,~Theorem~1.2] and \cite[Proposition~6.5]{KP}, $\mu$ determines a {nilpotent block{\,\hskip-1pt\cdot\hskip-1pt\,}} of the group $C_H (Q){\,\hskip-1pt\cdot\hskip-1pt\,} T{\,\hskip-1pt\cdot\hskip-1pt\,},$ we have $N_{C_H (Q){\,\hskip-1pt\cdot\hskip-1pt\,} T} (T_\nu) = C_H (T){\,\hskip-1pt\cdot\hskip-1pt\,} T{\,\hskip-1pt\cdot\hskip-1pt\,}.$ Thus, $z$ belongs to~$Z{\,\hskip-1pt\cdot\hskip-1pt\,} C_H (T)$ and we actually may assume that $z$ belongs to $Z{\,\hskip-1pt\cdot\hskip-1pt\,}.$
In this case, it follows from equality~£3.5.6 applied twice that \begin{eqnarray*}
\tau \big(w{\,\hskip-1pt\cdot\hskip-1pt\,}\theta'_y (w)\big) &=& \tau\big(\psi'_y (w)\big)^y \\
&=& \tau\big(\psi_y (z w z^{-1})\big)^y \\
&=& \tau \big((z w z^{-1}){\,\hskip-1pt\cdot\hskip-1pt\,} \theta_y (z w z^{-1})\big) \end{eqnarray*} for any $w\in T$ and, since $\theta_y (z w z^{-1}) =\theta_y (w)$ and $\tau$ is injective, we get $$\theta'_y (w)\theta_y (w)^{-1} = w^{-1}z w z^{-1} = (z^{-1})^w z \quad .$$
Consequently, denoting by ${\,\hskip-1pt\cdot\hskip-1pt\,}_{\!L}$ the category where the objects are the subgroups of $\tau (P)$
and the set of morphisms ${\,\hskip-1pt\cdot\hskip-1pt\,}_{{\,\hskip-1pt\cdot\hskip-1pt\,} L} \big(\tau (R),\tau (T)\big)$ from $\tau (T)$ to $\tau (R)$ is
just the corresponding {\it transporter{\,\hskip-1pt\cdot\hskip-1pt\,}} in $L{\,\hskip-1pt\cdot\hskip-1pt\,},$ the correspondence sending an element $y\in {\,\hskip-1pt\cdot\hskip-1pt\,}_{{\,\hskip-1pt\cdot\hskip-1pt\,} L} \big(\tau (R),\tau (T)\big)$ to the cohomology class
$\bar\theta_y$ of $\theta_y$ determines a map $$ \bar\theta_{_{R,T}} : {\,\hskip-1pt\cdot\hskip-1pt\,}_{{\,\hskip-1pt\cdot\hskip-1pt\,} L} \big(\tau (R),\tau (T)\big)\longrightarrow {\Bbb H}^1 (\tilde T,Z) \quad .$$
Moreover, if $U$ is a subgroup of $P$ containing $Q$ and $t$ an element of~$L$ fulfilling $\tau (U)\subset \tau (T)^t{\,\hskip-1pt\cdot\hskip-1pt\,},$ as above we can choose $\big(\psi_t,\bar\pi (t)\big)\in {\,\hskip-1pt\cdot\hskip-1pt\,} (T,U)$ lifting~$\pi (t)$ in such a way that, for any~$u\in U{\,\hskip-1pt\cdot\hskip-1pt\,},$ we have $$\tau \big(u{\,\hskip-1pt\cdot\hskip-1pt\,}\theta_t (u)\big) = \tau\big(\psi_t (u)\big)^t $$ for a suitable $\theta_t (u)\in Z{\,\hskip-1pt\cdot\hskip-1pt\,};$ then, the composition $\big(\psi_y,\bar\pi (y)\big)\circ\big(\psi_t,\bar\pi (t)\big)$ lifts $\pi (yt)$ and, for any $u\in U{\,\hskip-1pt\cdot\hskip-1pt\,},$ we may assume that (cf.~£3.5.4)
\begin{eqnarray*}
\tau \big(u{\,\hskip-1pt\cdot\hskip-1pt\,}\theta_{yt} (u)\big) &=& \tau\big((\psi_y\circ\psi_t) (u)\big)^{yt} \\
&=& \tau \Big(\psi_t (u){\,\hskip-1pt\cdot\hskip-1pt\,}\theta_y\big(\psi_t (u)\big)\Big)^t \\
&=& \tau \big(u{\,\hskip-1pt\cdot\hskip-1pt\,}\theta_t (u)\big) \tau \Big(\theta_y \big(\psi_t (u)\big)\Big)^t \\
&=& \tau \bigg(u{\,\hskip-1pt\cdot\hskip-1pt\,}\theta_t (u){\,\hskip-1pt\cdot\hskip-1pt\,}\pi (t)^{-1} \Big(\theta_y \big(\psi_t (u)\big)\Big)\bigg)\quad ; \end{eqnarray*} finally, since $\tau$ is injective, using {additive notation{\,\hskip-1pt\cdot\hskip-1pt\,}} in $Z$ we get $$\theta_{yt} (u) = \theta_t (u) + \pi (t)^{-1}\Big(\theta_y \big(\psi_t (u)\big)\Big) \quad .$$
Hence, denoting by $\tilde t$ the image of $t$ in $\widetilde{\rm Aut}(Q)$ and by $\psi_{\tilde t}{\,\hskip-1pt\cdot\hskip-1pt\,}\colon \tilde U\to \tilde T$ and ${\,\hskip-1pt\cdot\hskip-1pt\,}(\tilde t){\,\hskip-1pt\cdot\hskip-1pt\,}\colon Z\cong Z$ the corresponding group homomorphisms, we get the~{$1\hbox{-}$cocycle condition{\,\hskip-1pt\cdot\hskip-1pt\,}} $$\bar\theta_{yt} = \bar\theta_t + {\Bbb H}^1 \big(\psi_{\tilde t}, {\,\hskip-1pt\cdot\hskip-1pt\,} (\tilde t)\big) (\bar\theta_y) \quad ; \leqno £3.5.7$$ in particular, since $\theta_y (w) = 0$ whenever $y\in\tau (R){\,\hskip-1pt\cdot\hskip-1pt\,},$ it is easily checked from this condition that $\bar\theta_y$ only depends on the class of $y$ in the {\it exterior quotient{\,\hskip-1pt\cdot\hskip-1pt\,}} $$\tilde{\,\hskip-1pt\cdot\hskip-1pt\,}_{{\,\hskip-1pt\cdot\hskip-1pt\,} L} \big(\tau (R),\tau (T)\big) = \tau (R)\backslash {\,\hskip-1pt\cdot\hskip-1pt\,}_{{\,\hskip-1pt\cdot\hskip-1pt\,} L} \big(\tau (R),\tau (T)\big) .$$ Thus, respectively denoting by $\tilde L{\,\hskip-1pt\cdot\hskip-1pt\,},$ $\tilde R{\,\hskip-1pt\cdot\hskip-1pt\,},$ $\tilde T$ and $\tilde P$ the images of $L{\,\hskip-1pt\cdot\hskip-1pt\,},$ $\tau(R){\,\hskip-1pt\cdot\hskip-1pt\,},$ $\tau(T)$ and~$\tau (P)$ in $\widetilde{\rm Aut}(Q){\,\hskip-1pt\cdot\hskip-1pt\,},$ the map $\bar\theta_{_{R,T}}$ above admits a factorization $$\skew4\tilde{\bar\theta}_{_{\tilde R,\tilde T}} : \tilde {\,\hskip-1pt\cdot\hskip-1pt\,}_{{\,\hskip-1pt\cdot\hskip-1pt\,}\tilde L} (\tilde R,\tilde T)\longrightarrow {\Bbb H}^1 \big(\tilde T,Z\big) .$$
That is to say, let us consider the {exterior quotient{\,\hskip-1pt\cdot\hskip-1pt\,}} $\tilde{\,\hskip-1pt\cdot\hskip-1pt\,}_{{\,\hskip-1pt\cdot\hskip-1pt\,}\tilde L}$
of the category ${\,\hskip-1pt\cdot\hskip-1pt\,}_{{\,\hskip-1pt\cdot\hskip-1pt\,}\tilde L}$ and the {contravariant{\,\hskip-1pt\cdot\hskip-1pt\,}} functor $${\frak h^1_Z }: \tilde{\,\hskip-1pt\cdot\hskip-1pt\,}_{{\,\hskip-1pt\cdot\hskip-1pt\,}\tilde L}\longrightarrow \frak A\frak b $$ to the category of Abelian groups $\frak A\frak b$ mapping~$\tilde T$ on~${\Bbb H}^1 \big(\tilde T,Z\big){\,\hskip-1pt\cdot\hskip-1pt\,};$ then, identifying the $\tilde{\,\hskip-1pt\cdot\hskip-1pt\,}_{{\,\hskip-1pt\cdot\hskip-1pt\,}\tilde L}\hbox{-}$morphism $\tilde y\in \tilde {\,\hskip-1pt\cdot\hskip-1pt\,}_{{\,\hskip-1pt\cdot\hskip-1pt\,}\tilde L} (\tilde R,\tilde T)$ with the obvious {$\tilde{\,\hskip-1pt\cdot\hskip-1pt\,}_{{\,\hskip-1pt\cdot\hskip-1pt\,}\tilde L}\hbox{-}$chain{\,\hskip-1pt\cdot\hskip-1pt\,}} $\Delta_1\longrightarrow \tilde{\,\hskip-1pt\cdot\hskip-1pt\,}_{\tilde L}$ --- the {functor{\,\hskip-1pt\cdot\hskip-1pt\,}} from the category $\Delta_1{\,\hskip-1pt\cdot\hskip-1pt\,},$ formed by the objects $0$ and $1$ and a non-identity morphism $0\bullet 1$ from $0$ to $1{\,\hskip-1pt\cdot\hskip-1pt\,},$ mapping $0$ on $T{\,\hskip-1pt\cdot\hskip-1pt\,},$ $1$ on $R$ and $0\bullet 1$ on~$\tilde y$ --- the family $\bar \theta = {\,\hskip-1pt\cdot\hskip-1pt\,}\bar\theta_{\tilde y}{\,\hskip-1pt\cdot\hskip-1pt\,}_{\tilde y}{\,\hskip-1pt\cdot\hskip-1pt\,},$ where $\tilde y$ runs over the set of all the $\tilde{\,\hskip-1pt\cdot\hskip-1pt\,}_{{\,\hskip-1pt\cdot\hskip-1pt\,}\tilde L}\hbox{-}$morphisms, defines a {$1\hbox{-}$cocycle{\,\hskip-1pt\cdot\hskip-1pt\,}} from $\tilde {\,\hskip-1pt\cdot\hskip-1pt\,}_{{\,\hskip-1pt\cdot\hskip-1pt\,}\tilde L}$ to ${\frak h^1_Z}$ since equalities~£3.5.7 guarantee that the {differential map{\,\hskip-1pt\cdot\hskip-1pt\,}} sends $\bar\theta$ to zero.
We claim that this {$1\hbox{-}$cocycle{\,\hskip-1pt\cdot\hskip-1pt\,}} is a {$1\hbox{-}$coboundary{\,\hskip-1pt\cdot\hskip-1pt\,}}; indeed, for any subgroup~$\tilde R$ of $\tilde P{\,\hskip-1pt\cdot\hskip-1pt\,},$ choose a set of representatives $\tilde X_{\tilde R}\subset \tilde L$ for the set of {double classes{\,\hskip-1pt\cdot\hskip-1pt\,}} $\tilde P\backslash \tilde L/\tilde R$ and,
for any $\tilde n\in \tilde X_{\tilde R}{\,\hskip-1pt\cdot\hskip-1pt\,},$ set $\tilde R_{\tilde n} = \tilde R\cap \tilde P^{\tilde n}{\,\hskip-1pt\cdot\hskip-1pt\,},$ consider the $\tilde{\,\hskip-1pt\cdot\hskip-1pt\,}_{\tilde L}\hbox{-}$morphisms $\tilde n{\,\hskip-1pt\cdot\hskip-1pt\,}\colon \tilde R_{\tilde n}\to \tilde P$ and $\tilde\imath_{\tilde R_{\tilde n}}^{\tilde R} {\,\hskip-1pt\cdot\hskip-1pt\,}\colon \tilde R_{\tilde n} \to \tilde R$ respectively determined by $\tilde n$ and by the trivial element of~$\tilde L{\,\hskip-1pt\cdot\hskip-1pt\,},$ and denote by $$({\frak h}^1_Z)^{^{{\,\hskip-1pt\cdot\hskip-1pt\,}\circ}} (\tilde\imath_{\tilde R_{\tilde n}}^{\tilde R}) : {\Bbb H}^1 \big(\tilde R_{\tilde n},Z\big)\longrightarrow {\Bbb H}^1 \big(\tilde R,Z\big) $$
the corresponding {transfer homomorphism{\,\hskip-1pt\cdot\hskip-1pt\,}} \cite[~Ch.~XII,~\S8]{CE}; then, setting $$\bar\sigma_{\tilde R} = {\vert P\vert \over \vert L\vert}{\,\hskip-1pt\cdot\hskip-1pt\,} \sum_{\tilde n\in \tilde X_{\tilde R}} \big(({\frak h}^1_Z)^{^{{\,\hskip-1pt\cdot\hskip-1pt\,}\circ}} (\tilde\imath_{\tilde R_{\tilde n}}^{\tilde R})\big) (\bar\theta_{\tilde n}) \quad ,$$
we claim that, for any ${\tilde y}\in\tilde{\,\hskip-1pt\cdot\hskip-1pt\,}_{{\,\hskip-1pt\cdot\hskip-1pt\,}\tilde L}(\tilde R,\tilde T){\,\hskip-1pt\cdot\hskip-1pt\,},$ we have $$\bar\theta_{\tilde y} = \bar\sigma_{\tilde T} - \big({\frak h^1_Z} (\tilde y)\big)(\bar\sigma_{\tilde R}) \quad . \leqno £3.5.8$$
Indeed, note that ${\frak h^1_Z} (\tilde y)$ is the composition of the restriction {via{\,\hskip-1pt\cdot\hskip-1pt\,}} the $\tilde{\,\hskip-1pt\cdot\hskip-1pt\,}_{\tilde L}\hbox{-}$morphism $$\tilde\imath_{\tilde y\tilde T \tilde y^{-1}}^{\tilde R} : \tilde y{\,\hskip-1pt\cdot\hskip-1pt\,}\tilde T {\,\hskip-1pt\cdot\hskip-1pt\,}\tilde y^{-1}\longrightarrow \tilde R $$ determined by the trivial element of $L{\,\hskip-1pt\cdot\hskip-1pt\,},$ with the conjugation determined by~$\tilde y{\,\hskip-1pt\cdot\hskip-1pt\,},$ which we denote by~${\frak h^1_Z} (\tilde y_*){\,\hskip-1pt\cdot\hskip-1pt\,};$ thus, by the corresponding {Mackey equalities{\,\hskip-1pt\cdot\hskip-1pt\,}}
\cite[Ch.~XII,~Proposition~9.1]{CE}, we get \begin{eqnarray*}
&{\frak h^1_Z} (\tilde y)\Big(\sum_{\tilde n\in \tilde X_{\tilde R}} \big(({\frak h}^1_Z)^{^{{\,\hskip-1pt\cdot\hskip-1pt\,}\circ}} (\tilde\imath_{\tilde R_{\tilde n}}^{\tilde R})\big) (\bar\theta_{\tilde n})\Big) \\ =& {\frak h^1_Z} (\tilde y_*)\Big(\sum_{\tilde n\in \tilde X_{\tilde R}}{\,\hskip-1pt\cdot\hskip-1pt\,} \sum_{\tilde r\in \tilde Y_{\tilde n}} \big(({\frak h}^1_Z)^{^{{\,\hskip-1pt\cdot\hskip-1pt\,}\circ}} (\tilde\imath_{\tilde P^{\tilde n \tilde r} {\,\hskip-1pt\cdot\hskip-1pt\,}\cap{\,\hskip-1pt\cdot\hskip-1pt\,} \tilde y{\,\hskip-1pt\cdot\hskip-1pt\,}\tilde T{\,\hskip-1pt\cdot\hskip-1pt\,} \tilde y^{-1}}^{\tilde y{\,\hskip-1pt\cdot\hskip-1pt\,} \tilde T{\,\hskip-1pt\cdot\hskip-1pt\,}\tilde y^{-1}}) \circ {\frak h^1_Z} (\tilde r)\big) (\bar\theta_{\tilde n})\Big) \\
=& \sum_{\tilde n\in \tilde X_{\tilde R}}{\,\hskip-1pt\cdot\hskip-1pt\,} \sum_{\tilde r\in \tilde Y_{\tilde n}} \big(({\frak h}^1_Z)^{^{{\,\hskip-1pt\cdot\hskip-1pt\,}\circ}} (\tilde\imath_{\tilde P^{\tilde n\tilde r \tilde y} {\,\hskip-1pt\cdot\hskip-1pt\,}\cap{\,\hskip-1pt\cdot\hskip-1pt\,}\tilde T}^{\tilde T}) \circ {\frak h^1_Z} (\tilde r\tilde y)\big) (\bar\theta_{\tilde n})\quad , \end{eqnarray*} where, for any $\tilde n\in X_{\tilde R}{\,\hskip-1pt\cdot\hskip-1pt\,},$ the subset $\tilde Y_{\tilde n}\subset \tilde R$ is a set of representatives for the set of {double classes{\,\hskip-1pt\cdot\hskip-1pt\,}} $\tilde R_{\tilde n} \backslash \tilde R/{\,\hskip-1pt\cdot\hskip-1pt\,}\tilde y{\,\hskip-1pt\cdot\hskip-1pt\,}\tilde T{\,\hskip-1pt\cdot\hskip-1pt\,} \tilde y^{-1}$ and, for any $\tilde r\in \tilde Y_{\tilde n}{\,\hskip-1pt\cdot\hskip-1pt\,},$ we consider the $\tilde{\,\hskip-1pt\cdot\hskip-1pt\,}_{\tilde L}\hbox{-}$morphisms $$\tilde r: \tilde P^{\tilde n \tilde r} \cap \tilde y{\,\hskip-1pt\cdot\hskip-1pt\,}\tilde T{\,\hskip-1pt\cdot\hskip-1pt\,}\tilde y^{-1} \longrightarrow \tilde R_{\tilde n}\quad{\rm and}\quad \tilde r\tilde y : \tilde P^{\tilde n\tilde r \tilde y}\cap\tilde T \longrightarrow \tilde R_{\tilde n} \quad .$$
Moreover, setting $\tilde m = \tilde n\tilde r\tilde y$ for~$\tilde n\in \tilde X_{\tilde R}$ and $\tilde r\in \tilde Y_{\tilde n}{\,\hskip-1pt\cdot\hskip-1pt\,},$ since we assume that $\theta_{\tilde r} = 0{\,\hskip-1pt\cdot\hskip-1pt\,},$ it follows from equality~£3.5.7 that $$ \big({\frak h^1_Z} (\tilde r\tilde y)\big) (\bar\theta_{\tilde n}) = \bar\theta_{\tilde m} - \bar\theta_{\tilde r\tilde y} =\bar\theta_{\tilde m} - \big({\frak h^1_Z}(\tilde\imath_{\tilde T_{\tilde m}}^{\tilde T})\big)(\bar\theta_{\tilde y}) \quad ;$$
thus, choosing $\tilde X_{\tilde T} = \bigsqcup_{{\,\hskip-1pt\cdot\hskip-1pt\,}\tilde n\in \tilde X_{\tilde R}} \tilde n{\,\hskip-1pt\cdot\hskip-1pt\,}\tilde Y_{\tilde n}{\,\hskip-1pt\cdot\hskip-1pt\,}\tilde y{\,\hskip-1pt\cdot\hskip-1pt\,},$
we get \cite[~Ch.~XII,~\S8.(6)]{CE} \begin{eqnarray*}
\bar\sigma_{\tilde T} - \big({\frak h^1_Z} (\tilde y)\big) (\bar\sigma_{\tilde R}) &=& {\vert P\vert \over \vert L\vert}{\,\hskip-1pt\cdot\hskip-1pt\,} \sum_{\tilde m\in \tilde X_{\tilde T}} \big(({\frak h}^1_Z)^{^{{\,\hskip-1pt\cdot\hskip-1pt\,}\circ}} (\tilde\imath_{ \tilde T_{\tilde m}}^{\tilde T})\big)\Big(\bar\theta_{\tilde m} - \big({\frak h^1_Z} (\tilde r\tilde y)\big) (\bar\theta_{\tilde n})\Big) \\
&=& {\vert P\vert \over \vert L\vert}{\,\hskip-1pt\cdot\hskip-1pt\,} \sum_{\tilde m\in \tilde X_{\tilde T}} \big(({\frak h}^1_Z)^{^{{\,\hskip-1pt\cdot\hskip-1pt\,}\circ}} (\tilde\imath_{\tilde T_{\tilde m}}^{\tilde T})\big)\Big(\big({\frak h^1_Z}(\tilde\imath_{\tilde T_{\tilde m}}^{\tilde T})\big) (\bar\theta_{\tilde y})\Big) \\
&=& \sum_{\tilde m\in \tilde X_{\tilde T}}{\vert\tilde T/ \tilde T_{\tilde m}\vert \over\vert\tilde L/\tilde P\vert}{\,\hskip-1pt\cdot\hskip-1pt\,} \bar\theta_{\tilde y}=\bar\theta_{\tilde y} \quad . \end{eqnarray*}
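Two standard facts are used tacitly in this computation and we record them here for the reader's convenience. First, for the inclusion $\tilde T_{\tilde m}\subset \tilde T$, composing the restriction ${\Bbb H}^1 (\tilde T,Z)\to {\Bbb H}^1 (\tilde T_{\tilde m},Z)$ with the corresponding transfer is multiplication by the index $\vert \tilde T/\tilde T_{\tilde m}\vert$ \cite[~Ch.~XII,~\S8]{CE}. Second, since $\tilde X_{\tilde T}$ is a set of representatives for $\tilde P\backslash \tilde L/\tilde T$ and $\tilde T_{\tilde m} = \tilde T\cap \tilde P^{\tilde m}$, counting the elements of $\tilde L$ double coset by double coset gives $$\vert \tilde L\vert = \sum_{\tilde m\in \tilde X_{\tilde T}} \vert \tilde P\,\tilde m\,\tilde T\vert = \vert \tilde P\vert \sum_{\tilde m\in \tilde X_{\tilde T}} \vert \tilde T/\tilde T_{\tilde m}\vert \quad ,$$ so that $\sum_{\tilde m\in \tilde X_{\tilde T}} \vert \tilde T/\tilde T_{\tilde m}\vert = \vert \tilde L/\tilde P\vert$, which is the count behind the last line. Note also that the averaging coefficient $\vert P\vert / \vert L\vert$ makes sense on ${\Bbb H}^1 (\tilde T,Z)$, since this group is annihilated by a power of $p$ whereas the index in $L$ of the Sylow $p$-subgroup $\tau (P)$ is prime to $p$.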
In particular, for any subgroup $\tilde R$ of $\tilde P{\,\hskip-1pt\cdot\hskip-1pt\,},$ we get $$\bar \sigma_{\tilde R} = \big({\frak h^1_Z}(\tilde\imath_{\tilde R}^{\tilde P})\big)(\bar \sigma_{\tilde P}) $$
and the element $\bar\sigma_{\tilde P}\in \Bbb H^1(\tilde P,Z)$ can be lifted to a $1\hbox{-}$cocycle $\sigma_{\tilde P}{\,\hskip-1pt\cdot\hskip-1pt\,}\colon \tilde P\to Z$ which determines a group automorphism $\sigma{\,\hskip-1pt\cdot\hskip-1pt\,}\colon P\cong P$ mapping $u\in P$ on $u{\,\hskip-1pt\cdot\hskip-1pt\,}\sigma_{\tilde P}(\tilde u)$ where $\tilde u$ denotes the image of $u$ in $\tilde P{\,\hskip-1pt\cdot\hskip-1pt\,};$ moreover, according to equality~£3.5.8, in~£3.5.5 we may choose $$\theta_y (w) = \sigma_{\tilde P}(\tilde w)\big(\pi (y)\big)^{-1}\Big(\sigma_{\tilde P} \big(\widetilde{\psi_y (w)}\big)\Big)^{-1} .$$
Hence, replacing $\tau$ by $\hat\tau = \tau\circ\sigma{\,\hskip-1pt\cdot\hskip-1pt\,},$ the maps $\pi$ and $\hat\tau$ still fulfill the conditions above and, for any $w\in T{\,\hskip-1pt\cdot\hskip-1pt\,},$ in~equality~£3.5.6 we get \begin{eqnarray*}
\tau\big(\psi_y (w)\big)^y &=& \tau \big(w{\,\hskip-1pt\cdot\hskip-1pt\,}\theta_y (w)\big) \\
&=& \tau\bigg(w\big(w^{-1}\sigma (w)\big)\big(\pi (y)\big)^{-1} \Big(\psi_y (w)^{-1} \sigma \big(\psi_y (w)\big)\Big)^{-1}\bigg) \\
&=& \tau \bigg(\sigma (w)\big(\pi (y)\big)^{-1}\Big(\sigma \big(\psi_y (w) \big)^{-1}\psi_y (w)\Big)\bigg) \\
&=& \hat\tau (w)\tau \Big(\sigma\big(\psi_y (w)\big)^{-1} \psi_y (w)\Big)^y \\
&=& \hat\tau (w) \hat\tau\big(\psi_y (w)^{-1}\big)^y \tau\big(\psi_y (w) \big)^y \end{eqnarray*} so that, as announced, we obtain $$\hat\tau\big(\psi_y (w)\big)^y = \hat\tau (w) \quad .$$
In conclusion, we get a functor from $\hat{\,\hskip-1pt\cdot\hskip-1pt\,}$ to ${\,\hskip-1pt\cdot\hskip-1pt\,}$ mapping any $\hat {\,\hskip-1pt\cdot\hskip-1pt\,}\hbox{-}$morphism $$(\kappa_y,\bar y) : \hat\tau (T)\longrightarrow \hat\tau (R) $$ induced by an element $y$ of $L{\,\hskip-1pt\cdot\hskip-1pt\,},$ where $\kappa_y$ denotes the corresponding conjugation by~$y$ which actually fulfills $\hat\tau (Q{\,\hskip-1pt\cdot\hskip-1pt\,} T){\,\hskip-1pt\cdot\hskip-1pt\,} \big(\hat\tau (Q{\,\hskip-1pt\cdot\hskip-1pt\,} R)\big)^y{\,\hskip-1pt\cdot\hskip-1pt\,},$ on the ${\,\hskip-1pt\cdot\hskip-1pt\,}\hbox{-}$morphism $$\big(\psi_y,\bar\pi (y)\big) : T\longrightarrow R $$
where $\psi_y{\,\hskip-1pt\cdot\hskip-1pt\,}\colon T\to R$ is the group homomorphism determined by the equality $$\hat\tau_R \circ\psi_y = \kappa_y\circ \hat\tau_T \quad ,$$ $\hat\tau_R$ and $\hat\tau_T$ denoting the respective restrictions of $\hat\tau$ to $R$ and $T{\,\hskip-1pt\cdot\hskip-1pt\,};$ indeed, it is clear that this correspondence maps the composition of $\hat{\,\hskip-1pt\cdot\hskip-1pt\,}\hbox{-}$morphisms on the corresponding
composition of ${\,\hskip-1pt\cdot\hskip-1pt\,}\hbox{-}$morphisms. Moreover, it is clear that this functor is
{faithful{\,\hskip-1pt\cdot\hskip-1pt\,}}, and it follows from our argument above that any ${\,\hskip-1pt\cdot\hskip-1pt\,}\hbox{-}$morphism
$$(\psi_x,\bar x) : T\longrightarrow R $$ comes from an $\hat{\,\hskip-1pt\cdot\hskip-1pt\,}\hbox{-}$morphism from $\hat\tau (T)$ to $\hat\tau (R){\,\hskip-1pt\cdot\hskip-1pt\,}.$
Moreover, for another triple $L'{\,\hskip-1pt\cdot\hskip-1pt\,},$ $\tau'$ and $\bar\pi'$ fulfilling the above conditions, the corresponding equivalences of categories~£3.5.2 induce an equivalence of categories $$\hat{\,\hskip-1pt\cdot\hskip-1pt\,}\cong {\,\hskip-1pt\cdot\hskip-1pt\,}_{(1,\tau' (Q),L')} = {\,\hskip-1pt\cdot\hskip-1pt\,}' \quad ;\leqno £3.5.9$$ in particular, we have a group homomorphism $$\bar\sigma : L\longrightarrow \hat{\,\hskip-1pt\cdot\hskip-1pt\,} \big(\hat\tau (Q)\big)\cong {\,\hskip-1pt\cdot\hskip-1pt\,}' \big(\tau' (Q)\big)\cong L'/\tau' (Z) $$ and we claim that Lemma~£3.6 below applies to the finite groups $L$ and $L'{\,\hskip-1pt\cdot\hskip-1pt\,},$ with the Sylow $p\hbox{-}$subgroup $\hat\tau (P)$ of $L{\,\hskip-1pt\cdot\hskip-1pt\,},$ the Abelian normal $p\hbox{-}$group $\tau' (Z)$ of~$L'$ and the group homomorphism $\bar\sigma{\,\hskip-1pt\cdot\hskip-1pt\,}\colon L\to L'/\tau' (Z)$ above; indeed, the group homomorphism $\hat\tau (P)\to L'$ mapping $\hat\tau (u)$ on $\tau' (u){\,\hskip-1pt\cdot\hskip-1pt\,},$ for any $u\in P{\,\hskip-1pt\cdot\hskip-1pt\,},$ clearly lifts the restriction of $\bar\sigma$ and it is easily checked from the equivalence~£3.5.9 that it fulfills condition~£3.6.1 below. Consequently, the last statement immediately follows from this lemma. We are done.
\noindent {\bf Lemma~£3.6.}\quad {\it Let $L$ be a finite group, $M$ a group, $Z$ a normal Abelian $p'\hbox{-}$divisible subgroup of $M$~and $\bar\sigma{\,\hskip-1pt\cdot\hskip-1pt\,}\colon L\to \bar M = M/Z$ a group homomorphism. Assume that, for a Sylow $p\hbox{-}$subgroup $P$ of $L{\,\hskip-1pt\cdot\hskip-1pt\,},$ there exists a group homomorphism $\tau{\,\hskip-1pt\cdot\hskip-1pt\,}\colon P\to M$ lifting the restriction of $\bar\sigma$ to $P$ and fulfilling the following condition
\noindent {\bf £3.6.1}\quad For any subgroup $R$ of $P$ and any $x\in L$ such that $R^x\subset P{\,\hskip-1pt\cdot\hskip-1pt\,},$ there is $y\in M$ such that $\bar\sigma (x) = \bar y$ and $\tau (u^x) = \tau (u)^y$~for any $u\in R{\,\hskip-1pt\cdot\hskip-1pt\,}.$
\noindent Then, there is a group homomorphism $\sigma{\,\hskip-1pt\cdot\hskip-1pt\,}\colon L\to M$ lifting $\bar\sigma$ and extending~$\tau{\,\hskip-1pt\cdot\hskip-1pt\,}.$ Moreover, if $\sigma'{\,\hskip-1pt\cdot\hskip-1pt\,}\colon L\to M$ is a group homomorphism which lifts $\bar\sigma$ and extends~$\tau{\,\hskip-1pt\cdot\hskip-1pt\,},$ then there is $z\in Z$ such that $\sigma' (x) = \sigma (x)^z$ for any $x\in L{\,\hskip-1pt\cdot\hskip-1pt\,}.${\,\hskip-1pt\cdot\hskip-1pt\,}}
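Before the proof, let us recall the classical facts on which it relies, stated for an arbitrary extension $1\to Z\to M\to \bar M\to 1$ with Abelian kernel and an arbitrary group homomorphism $\bar\sigma\colon L\to \bar M$: a lifting $L\to M$ of $\bar\sigma$ exists if and only if the image under ${\Bbb H}^2 (\bar\sigma,{\rm id}_Z)$ of the class $\bar\mu\in {\Bbb H}^2 (\bar M,Z)$ attached to the extension vanishes and, when liftings exist, any two of them differ by a $1$-cocycle $L\to Z$, those conjugate to a given one by an element of $Z$ corresponding exactly to the $1$-coboundaries (cf.~\cite[~Chap.~XIV, Theorem~4.2]{CE}). The proof below reduces both questions to the Sylow $p$-subgroup $P$ by means of the stable-element theorem already employed above.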
\noindent {\bf Proof:} It is clear that $\bar\sigma$ determines an action of $L$ on $Z$ and it makes sense to consider the {cohomology groups{\,\hskip-1pt\cdot\hskip-1pt\,}} $\Bbb H^n (L,Z)$ and $\Bbb H^n (P,Z)$ for any $n$ in~$\Bbb N{\,\hskip-1pt\cdot\hskip-1pt\,}.$ But, $M$ determines an element $\bar\mu$ of~${\Bbb H}^2 (\bar M,Z)$ \cite[~Chap.~XIV, Theorem~4.2]{CE}
and if there is a group homomorphism $\tau{\,\hskip-1pt\cdot\hskip-1pt\,}\colon P\to M$ lifting the restriction of~$\bar\sigma$
then the corresponding image of $\bar\mu$ in ${\Bbb H}^2 (P,Z)$ has to be zero \cite[Chap.~XIV, Theorem~4.2]{CE}; thus, since the restriction map $${\Bbb H}^2 (L,Z)\longrightarrow {\Bbb H}^2 (P,Z) $$
is injective \cite[~Ch.~XII,~Theorem~10.1]{CE}, we also get $$\big({\Bbb H}^2 (\bar\sigma,{\rm id}_Z)\big)(\bar\mu) = 0 $$
and therefore there is a group homomorphism $\sigma{\,\hskip-1pt\cdot\hskip-1pt\,}\colon L\to M$ lifting $\bar\sigma{\,\hskip-1pt\cdot\hskip-1pt\,}.$
At this point, the {difference{\,\hskip-1pt\cdot\hskip-1pt\,}} between $\tau$ and the restriction of $\sigma$ to $P$ defines a {$1\hbox{-}$cocycle{\,\hskip-1pt\cdot\hskip-1pt\,}} $\theta{\,\hskip-1pt\cdot\hskip-1pt\,}\colon P\to Z$ and, for any subgroup $R$ of $P$ and any $x\in L$ such that $R^x\subset P{\,\hskip-1pt\cdot\hskip-1pt\,},$ it follows from condition~£3.6.1 that, for a suitable $y\in M$ fulfilling $\bar y = \bar\sigma (x){\,\hskip-1pt\cdot\hskip-1pt\,},$ for any $u\in R$ we have
\begin{eqnarray*}
\theta (u^x) &=& \tau (u^x)^{-1}\sigma (u^x) \\
&=& \tau (u^{-1})^y\sigma (u)^{\sigma (x)} \\
&=& \tau (u^{-1})^y \tau (u)^{\sigma (x)}\theta (u)^{\sigma (x)} \\
&=& \Big(\big(y\sigma (x)^{-1}\big)^{-1}\big(y\sigma (x)^{-1}\big)^{\tau (u)}\theta (u)\Big)^{\sigma (x)}\quad ;
\end{eqnarray*} consequently, since the map sending $u\in R$ to $$\big(y\sigma (x)^{-1}\big)^{-1}\big(y\sigma (x)^{-1}\big)^{\tau (u)}\in Z $$ is a {$1\hbox{-}$coboundary{\,\hskip-1pt\cdot\hskip-1pt\,}}, the cohomology class $\bar\theta$ of $\theta$ is $L\hbox{-}${stable{\,\hskip-1pt\cdot\hskip-1pt\,}}, and it follows again from \cite[~Ch.~XII,~Theorem~10.1]{CE} that it is the restriction of a suitable element $\bar \eta\in {\Bbb H}^1 (L,Z){\,\hskip-1pt\cdot\hskip-1pt\,};$ then, it suffices to modify $\sigma$ by a representative of~$\bar\eta$ to~get a new group homomorphism $\sigma'{\,\hskip-1pt\cdot\hskip-1pt\,}\colon L\to M$ lifting $\bar\sigma$ and extending~$\tau{\,\hskip-1pt\cdot\hskip-1pt\,}.$
Now, if $\sigma'{\,\hskip-1pt\cdot\hskip-1pt\,}\colon L\to M$ is a group homomorphism which lifts $\bar\sigma$ and extends~$\tau{\,\hskip-1pt\cdot\hskip-1pt\,},$ the element $\sigma' (x)\sigma (x)^{-1}$ belongs to $Z$ for any $x\in L$ and thus we get a {$1\hbox{-}$cocycle{\,\hskip-1pt\cdot\hskip-1pt\,}} $\lambda{\,\hskip-1pt\cdot\hskip-1pt\,} {\,\hskip-1pt\cdot\hskip-1pt\,}\colon L\to Z$ mapping $x\in L$ on $\sigma' (x)\sigma (x)^{-1}{\,\hskip-1pt\cdot\hskip-1pt\,},$ which vanishes over~$P{\,\hskip-1pt\cdot\hskip-1pt\,};$ hence, it is a {$1\hbox{-}$coboundary{\,\hskip-1pt\cdot\hskip-1pt\,}} \cite[~Ch.~XII,~Theorem~10.1]{CE} and therefore there exists $z\in Z$ such that $$\lambda (x) = z^{-1}\sigma (x) z\sigma (x)^{-1} $$ so that we have $\sigma' (x) =\sigma (x)^z$ for any $x\in L{\,\hskip-1pt\cdot\hskip-1pt\,}.$ We are done.
\noindent {\bf £3.7.}\quad Since $Q$ normalizes a unitary {full matrix{\,\hskip-1pt\cdot\hskip-1pt\,}} ${\,\hskip-1pt\cdot\hskip-1pt\,}\hbox{-}$subalgebra $T$ of ${\cal B}_\delta$ such that \cite[~Theorem~1.6]{P4} $${\cal B}_\delta\cong T\,Q\quad{\rm and}\quad {\rm rank}_{\,\hskip-1pt\cdot\hskip-1pt\,} (T)\equiv 1 \bmod p \quad ,\leqno £3.7.1$$ the action of $Q$ on $T$ admits a unique lifting to a group homomorphism \cite[1.8]{P4} $$Q\longrightarrow {\rm Ker}({\rm det}_T) \quad ;$$ hence, we have $${\cal B}_\delta\cong T\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\,\hskip-1pt\cdot\hskip-1pt\,} Q$$ and therefore ${\cal B}_\delta$ admits a unique two-sided ideal ${\frak n}_\delta$ such that, considering ${\cal B}_\delta/{\frak n}_\delta$ as a $Q$-interior ${\,\hskip-1pt\cdot\hskip-1pt\,}$-algebra, there is an isomorphism $${\cal B}_\delta/{\frak n}_\delta\cong T$$ of $Q$-interior ${\,\hskip-1pt\cdot\hskip-1pt\,}$-algebras. Then, a canonical {embedding{\,\hskip-1pt\cdot\hskip-1pt\,}} $f_\delta{\,\hskip-1pt\cdot\hskip-1pt\,}\colon {\cal B}_\delta\to {\rm Res}_Q^H ({\cal B})$ \cite[~2.8]{P4} and the ideal ${\frak n}_\delta$ determine a two-sided ideal ${\frak n}$ of $\cal B$ such that $S = {\cal B}/{\frak n}$ is also a {full matrix{\,\hskip-1pt\cdot\hskip-1pt\,}} ${\,\hskip-1pt\cdot\hskip-1pt\,}\hbox{-}$algebra.
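Although it is not needed in the sequel, let us unwind the uniqueness of this lifting, since it follows directly from the rank condition in~£3.7.1; the notation $\rho$, $\rho'$ and $n$ below is ad hoc for this remark only. Two liftings $\rho,\rho'\colon Q\to {\rm Ker}({\rm det}_T)$ of the same action of $Q$ on $T$ differ at each $u\in Q$ by the central element $\rho' (u)\rho (u)^{-1}$ of determinant one, hence by an $n$-th root of unity, where $n$ denotes the matrix degree of $T$; consequently, the map $$u\longmapsto \rho' (u)\rho (u)^{-1}$$ is a group homomorphism from the $p$-group $Q$ to a cyclic group of order dividing $n$, and it is trivial because £3.7.1 forces $n^2\equiv 1 \bmod p$, so that $n$ is prime to $p$.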
\noindent {\bf Proposition~£3.8.} {\it With the notation above, the action of $N$ on $\cal B$ stabilizes~${\frak n}{\,\hskip-1pt\cdot\hskip-1pt\,}.${\,\hskip-1pt\cdot\hskip-1pt\,}}
\noindent {\bf Proof:} Since we have $N = H{\,\hskip-1pt\cdot\hskip-1pt\,} N_G (Q_\delta){\,\hskip-1pt\cdot\hskip-1pt\,},$ for the first statement we may consider $x\in N_G (Q_\delta){\,\hskip-1pt\cdot\hskip-1pt\,};$ then, denoting by $\sigma_x$ the automorphism of $Q$ induced by the conjugation by $x{\,\hskip-1pt\cdot\hskip-1pt\,},$ it is clear that the isomorphism $$f_x : {\rm Res}_{\sigma_x}\big({\rm Res}_Q^H ({\cal B})\big)\cong {\rm Res}_Q^H ({\cal B}) $$ of $Q$-interior algebras mapping $a\in \cal B$ on $a^x$ induces a commutative diagram of {\it exterior{\,\hskip-1pt\cdot\hskip-1pt\,}} homomorphisms of $Q$-interior algebras \cite[2.8]{P4} $$\matrix{{\rm Res}_{\sigma_x}\big({\rm Res}_Q^H ({\cal B})\big)&\buildrel \tilde f_x\over\cong &{\rm Res}_Q^H ({\cal B})\cr \hskip-10pt{\scriptstyle \tilde f_\delta}\hskip4pt\uparrow&\phantom{\Big\uparrow}&\uparrow\hskip4pt {\scriptstyle \tilde f_\delta}\hskip-10pt\cr {\rm Res}_{\sigma_x} ({\cal B}_\delta)&\buildrel (\tilde f_x)_\delta\over\cong& {\cal B}_\delta\cr} \quad ;$$ moreover, the uniqueness of ${\frak n}_\delta$ clearly implies that this ideal is stabilized by~$(\tilde f_x)_\delta{\,\hskip-1pt\cdot\hskip-1pt\,};$ consequently, ${\frak n} $ is still stabilized by~$\tilde f_x{\,\hskip-1pt\cdot\hskip-1pt\,}.$
\noindent {\bf £3.9.} In particular, $N$ acts on the {full matrix{\,\hskip-1pt\cdot\hskip-1pt\,}} ${\,\hskip-1pt\cdot\hskip-1pt\,}\hbox{-}$algebra $S$ and therefore the action on $S$ of any element $x\in N$ can be lifted to a suitable $s_x\in S^*{\,\hskip-1pt\cdot\hskip-1pt\,};$ thus, setting ${\rm r }= {\rm rank}_{\,\hskip-1pt\cdot\hskip-1pt\,}(S){\,\hskip-1pt\cdot\hskip-1pt\,},$ denoting by $\bar H$ the image of $H$ in $S^*$ and considering a finite extension ${\,\hskip-1pt\cdot\hskip-1pt\,}'$ of ${\,\hskip-1pt\cdot\hskip-1pt\,}$ containing the group $U$ of $\vert H\vert\hbox{-}$th roots of unity and the ${\rm r}\hbox{-}$th roots of ${\rm det}_S (s_x)$ for any $x\in N{\,\hskip-1pt\cdot\hskip-1pt\,},$ since ${\rm r}$ divides $\vert H\vert{\,\hskip-1pt\cdot\hskip-1pt\,},$ the {\it pull-back{\,\hskip-1pt\cdot\hskip-1pt\,}} $$\matrix{N &\longrightarrow & {\rm Aut}({\,\hskip-1pt\cdot\hskip-1pt\,}'\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} S)\cr \uparrow&\phantom{\big\uparrow}&\uparrow\cr \hat N &\longrightarrow &(U\otimes\bar H){\,\hskip-1pt\cdot\hskip-1pt\,}{\rm Ker}({\rm det}_{{\,\hskip-1pt\cdot\hskip-1pt\,}'\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} S})\cr}$$ determines a central extension $\hat N$ of $N$ by $U{\,\hskip-1pt\cdot\hskip-1pt\,},$ which clearly does not depend on the choice of ${\,\hskip-1pt\cdot\hskip-1pt\,}'{\,\hskip-1pt\cdot\hskip-1pt\,};$ moreover, the inclusion $H{\,\hskip-1pt\cdot\hskip-1pt\,} N$ and the structural group homomorphism $H\to ({\,\hskip-1pt\cdot\hskip-1pt\,}'\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} S)^*$ induce an injective group homomorphism $H\to \hat N$ with an image which is a normal subgroup of~$\hat N$ and has a {trivial{\,\hskip-1pt\cdot\hskip-1pt\,}} intersection with the image of $U$ --- we identify this image with $H$ and set $$\skew3\hat{\bar N} = \hat N/H\quad .$$ We will consider the $H\hbox{-}$interior $N\hbox{-}$algebras (see \cite[2.1]{P7}) $$\hat{\cal A} = S^\circ\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\cal A}\quad{\rm and}\quad \hat{\cal B} = S^\circ\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\cal B}$$ and note that ${\,\hskip-1pt\cdot\hskip-1pt\,}'\otimes \hat {\,\hskip-1pt\cdot\hskip-1pt\,}$ actually has an $\hat N\hbox{-}$interior algebra structure.
\noindent {\bf £3.10.}\quad On the other hand, since $b$ is also a {nilpotent{\,\hskip-1pt\cdot\hskip-1pt\,}} block of the group $H{\,\hskip-1pt\cdot\hskip-1pt\,} P{\,\hskip-1pt\cdot\hskip-1pt\,},$ it is easily checked that \cite[1.9]{P4} $${\,\hskip-1pt\cdot\hskip-1pt\,}(H{\,\hskip-1pt\cdot\hskip-1pt\,} P)b\big/J\big({\,\hskip-1pt\cdot\hskip-1pt\,}(H{\,\hskip-1pt\cdot\hskip-1pt\,} P)b\big)\cong k\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} S \quad ;$$ moreover, since the inclusion map ${\,\hskip-1pt\cdot\hskip-1pt\,} H\to {\,\hskip-1pt\cdot\hskip-1pt\,} (H{\,\hskip-1pt\cdot\hskip-1pt\,} P)$ is a {semicovering of $P\hbox{-}$algebras{\,\hskip-1pt\cdot\hskip-1pt\,}} \cite[Example~3.9, 3.10 and~Theorem~3.16]{KP}, we can identify $\gamma$ with a local point of $P$ on ${\,\hskip-1pt\cdot\hskip-1pt\,}(H{\,\hskip-1pt\cdot\hskip-1pt\,} P)b$. Set ${\,\hskip-1pt\cdot\hskip-1pt\,}(H{\,\hskip-1pt\cdot\hskip-1pt\,} P)_\gamma=i({\,\hskip-1pt\cdot\hskip-1pt\,}(H{\,\hskip-1pt\cdot\hskip-1pt\,} P))i$ and $S_\gamma=\bar\imath S\bar\imath$, where $\bar\imath$ is the image of $i$ in $S{\,\hskip-1pt\cdot\hskip-1pt\,};$ then, as in £3.7 above, we have an isomorphism of $P$-interior algebras \cite[Theorem~1.6]{P4} $${\,\hskip-1pt\cdot\hskip-1pt\,}(H{\,\hskip-1pt\cdot\hskip-1pt\,} P)_\gamma\cong S_\gamma{\,\hskip-1pt\cdot\hskip-1pt\,} P \quad ,\leqno £3.10.1$$ where $S_\gamma$ is actually a {\it Dade $P\hbox{-}$algebra{\,\hskip-1pt\cdot\hskip-1pt\,}} --- namely, a {full matrix{\,\hskip-1pt\cdot\hskip-1pt\,}} $P\hbox{-}$algebra over ${\,\hskip-1pt\cdot\hskip-1pt\,}$ where $P$ stabilizes an ${\,\hskip-1pt\cdot\hskip-1pt\,}\hbox{-}$basis containing the unity element --- such that ${\rm rank}_{\,\hskip-1pt\cdot\hskip-1pt\,}(S_\gamma)\equiv 1 \bmod p$, and the action of $P$ on $S_\gamma$ can be uniquely lifted to a group homomorphism $P\to {\rm Ker}({\rm det}_{S_\gamma})$ \cite[1.8]{P4}, so that isomorphism~£3.10.1 becomes $${\,\hskip-1pt\cdot\hskip-1pt\,}(H{\,\hskip-1pt\cdot\hskip-1pt\,} P)_\gamma\cong S_\gamma\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\,\hskip-1pt\cdot\hskip-1pt\,} P \quad .\leqno £3.10.2$$
\noindent {\bf Proposition~£3.11.}\quad {\it With the notation above, the structural homomorphism ${\cal B}_\gamma\to S_\gamma$ of $P\hbox{-}$algebras is a strict semicovering.{\,\hskip-1pt\cdot\hskip-1pt\,}}
\noindent {\bf Proof:} It follows from isomorphism~£3.10.2 that the canonical homomorphism of $P\hbox{-}$algebras $${\,\hskip-1pt\cdot\hskip-1pt\,}(H{\,\hskip-1pt\cdot\hskip-1pt\,} P)_\gamma\longrightarrow S_\gamma \leqno £3.11.1\phantom{.}$$ admits a $P\hbox{-}$algebra section mapping $s\in S_\gamma$ on the image of $s\otimes 1$ by the inverse of that isomorphism, which proves that the $P$-interior algebra homomorphism~£3.11.1 is a {covering{\,\hskip-1pt\cdot\hskip-1pt\,}} \cite[4.14 and Example~4.25]{P4}; thus, since the inclusion map ${\,\hskip-1pt\cdot\hskip-1pt\,} H\to {\,\hskip-1pt\cdot\hskip-1pt\,} (H{\,\hskip-1pt\cdot\hskip-1pt\,} P)$ is a semicovering of $P\hbox{-}$algebras, the canonical homomorphism of $P\hbox{-}$algebras $${\cal B}_\gamma = ({\,\hskip-1pt\cdot\hskip-1pt\,} H)_\gamma\longrightarrow S_\gamma $$ remains a {semicovering{\,\hskip-1pt\cdot\hskip-1pt\,}} \cite[~Proposition~3.13]{KP}; moreover, since ${\frak n}{\,\hskip-1pt\cdot\hskip-1pt\,} J(\cal B){\,\hskip-1pt\cdot\hskip-1pt\,},$ it is a {strict semicovering{\,\hskip-1pt\cdot\hskip-1pt\,}} \cite[~3.10]{KP}.
\noindent {\bf £3.12.}\quad Consequently, it easily follows from \cite[~Theorem~3.16]{KP} and \cite[~Proposition~5.6]{P4} that we still have a {strict semicovering{\,\hskip-1pt\cdot\hskip-1pt\,}} homomorphism of $P\hbox{-}$algebras $$(S_\gamma)^\circ\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\cal B}_\gamma\longrightarrow (S_\gamma)^\circ\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} S_\gamma\cong {\rm End}_{\,\hskip-1pt\cdot\hskip-1pt\,} (S_\gamma) \quad ;\leqno £3.12.1$$ hence, denoting by $\hat \gamma$ the local point of $P$ over $(S_\gamma)^\circ\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\cal B}_\gamma$ determined by~$\gamma{\,\hskip-1pt\cdot\hskip-1pt\,},$ the image of $\hat\gamma$ in $(S_\gamma)^\circ\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} S_\gamma$ is contained in the corresponding local point of $P$ and therefore we get a {strict semicovering{\,\hskip-1pt\cdot\hskip-1pt\,}} homomorphism \cite[~5.7]{P4} $$\hat{\cal B}_{\hat\gamma}\longrightarrow {\,\hskip-1pt\cdot\hskip-1pt\,} \cong ((S_\gamma)^\circ\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} S_\gamma)_{\hat\gamma}$$
of $P\hbox{-}$algebras; that is to say, any $\hat\imath\in \hat \gamma$ is actually a primitive idempotent
in~$\hat{\cal B}$ and therefore, for any local pointed group $R_{\hat\varepsilon}$ over $\hat{\cal B}$ contained in~$P_{\hat\gamma}{\,\hskip-1pt\cdot\hskip-1pt\,},$ it also belongs to $\hat\varepsilon{\,\hskip-1pt\cdot\hskip-1pt\,};$ in particular, denoting by $\hat \delta$ the local point of $Q$ over
$(S_\gamma)^\circ\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\cal B}_\gamma$ determined by $\delta{\,\hskip-1pt\cdot\hskip-1pt\,},$ we clearly have
$\hat{\cal B}_{\hat\delta} = \hat\imath\hat{\cal B}\hat\imath\cong {\,\hskip-1pt\cdot\hskip-1pt\,} Q$ (cf.~£3.7.1).
\noindent {\bf £3.13.}\quad As~in~\cite[~2.11]{KP}, we consider the $P$-interior algebra $\hat{\cal A}_{\hat\gamma} = \hat\imath\hat{\cal A}\hat\imath{\,\hskip-1pt\cdot\hskip-1pt\,};$ since $\cal A$ is an $N/H$-graded algebra, $\hat{\cal A}_{\hat\gamma}$ is also an $N/H$-graded algebra. On the other hand, since ${\,\hskip-1pt\cdot\hskip-1pt\,}'/J({\,\hskip-1pt\cdot\hskip-1pt\,}')\cong k{\,\hskip-1pt\cdot\hskip-1pt\,},$ we get a group homomorphism $\varpi{\,\hskip-1pt\cdot\hskip-1pt\,}\colon U\to k^*$ and, setting $\Delta_\varpi (U) = {\,\hskip-1pt\cdot\hskip-1pt\,}(\varpi (\xi),\xi^{-1}){\,\hskip-1pt\cdot\hskip-1pt\,}_{\xi\in U}{\,\hskip-1pt\cdot\hskip-1pt\,},$ we obtain the obvious $k^*\hbox{-}$group $$\skew3\hat{\bar N}^{^k} =( k^*\times \skew3\hat{\bar N})/\Delta_\varpi (U) \quad ; $$ then, with the notation of Theorem~£3.5, we set \cite[~5.7]{P6} $$\hat L = {\rm Res}_{\bar\pi} (\skew3\hat{\bar N}^{^k}) \quad ;\leqno £3.13.1$$ thus, ${\,\hskip-1pt\cdot\hskip-1pt\,}_*\hat L^{^\circ}$ becomes a $P$-interior algebra {via{\,\hskip-1pt\cdot\hskip-1pt\,}} the lifting $\hat\tau{\,\hskip-1pt\cdot\hskip-1pt\,}\colon P\to \hat L^{^\circ}$ of the group homomorphism $\tau{\,\hskip-1pt\cdot\hskip-1pt\,}\colon P\to L{\,\hskip-1pt\cdot\hskip-1pt\,},$ and it has an obvious $L/\tau(Q)$-graded algebra structure. The group homomorphism $\bar \pi$ induces a group isomorphism $L/\tau(Q)\cong N/H$, through which we identify $L/\tau(Q)$ and $N/H{\,\hskip-1pt\cdot\hskip-1pt\,},$ so that ${\,\hskip-1pt\cdot\hskip-1pt\,}_*\hat L^{^\circ}$ becomes an $N/H$-graded algebra.
\noindent {\bf Theorem~£3.14.}\quad {\it With the notation above, we have a $P$-interior and $N/H$-graded algebra isomorphism $\hat{\cal A}_{\hat\gamma}\cong {\,\hskip-1pt\cdot\hskip-1pt\,}_*\hat L^{^\circ}{\,\hskip-1pt\cdot\hskip-1pt\,}.$ }
\noindent {\bf Proof:} Choosing $\hat\imath\in \hat\gamma{\,\hskip-1pt\cdot\hskip-1pt\,},$ we consider the groups $$M = N_{(\hat\imath\hat{\cal A} \hat\imath)^*} (Q{\,\hskip-1pt\cdot\hskip-1pt\,} \hat\imath)/k^*{\,\hskip-1pt\cdot\hskip-1pt\,} \hat\imath\quad{\rm and}\quad Z = \big((\hat\imath\hat{\cal B}\hat\imath)^Q\big)^*{\,\hskip-1pt\cdot\hskip-1pt\,} \big/k^*{\,\hskip-1pt\cdot\hskip-1pt\,} \hat\imath\cong 1 + J\big(Z ({\,\hskip-1pt\cdot\hskip-1pt\,} Q)\big) \quad ;$$ it is clear that $Z$ is a normal Abelian $p'\hbox{-}$divisible subgroup of $M{\,\hskip-1pt\cdot\hskip-1pt\,},$ and we set~$\bar M = M/Z{\,\hskip-1pt\cdot\hskip-1pt\,}.$ In order to apply Lemma~£3.6, let $R$ be a subgroup of $P$ and $y$ an element of $L$ such that $\tau (R){\,\hskip-1pt\cdot\hskip-1pt\,} \tau (P)^y{\,\hskip-1pt\cdot\hskip-1pt\,};$ since $\tau (Q)$ is normal in~$L{\,\hskip-1pt\cdot\hskip-1pt\,},$ we actually may assume that $R$ contains $Q{\,\hskip-1pt\cdot\hskip-1pt\,}.$ According to the equivalence of categories~£3.5.2, denoting by $\varepsilon$ the unique local point of $R$ on $\cal B$ fulfilling $R_\varepsilon{\,\hskip-1pt\cdot\hskip-1pt\,} P_\gamma$ \cite[~Theorem~6.6]{KP}, there is $x_y\in N$ such that $$\bar x_y = \bar\pi (y)\quad,\quad R_\varepsilon{\,\hskip-1pt\cdot\hskip-1pt\,} (P_\gamma)^{x_y} \quad{\rm and}\quad \tau ({}^{x_y}v) ={}^y\tau (v) \hbox{{\,\hskip-1pt\cdot\hskip-1pt\,}{\,\hskip-1pt\cdot\hskip-1pt\,} for any $v\in R$} \quad ;\leqno £3.14.1$$ in particular, $x_y$ normalizes $Q_\delta{\,\hskip-1pt\cdot\hskip-1pt\,}.$
By Proposition~£3.11, a local pointed group $R_\varepsilon$ on $\cal B$ such that $$Q_\delta\leq R_\varepsilon\leq P_\gamma$$
determines a local pointed group $R_{\tilde \varepsilon}$ on $S$ through the composition
$${\cal B}_\gamma\longrightarrow S_\gamma\hookrightarrow S$$
(see \cite[Proposition 3.15]{KP}). Since $S_\gamma$ has a $P$-stable ${\,\hskip-1pt\cdot\hskip-1pt\,}$-basis, $S_\varepsilon$ still has an $R$-stable ${\,\hskip-1pt\cdot\hskip-1pt\,}$-basis and, by \cite[Theorem 5.3]{P4}, there are unique local pointed groups $R_{\tilde\varepsilon}$ on $S_\varepsilon$
and $R_{\hat\varepsilon}$ on $\hat{\cal B}$ such that $\hat l(\tilde l\otimes l)=\hat l=(\tilde l\otimes l)\hat l$ for suitable $l\in \varepsilon$,
$\tilde l\in \tilde\varepsilon$ and $\hat l\in \hat\varepsilon{\,\hskip-1pt\cdot\hskip-1pt\,};$ then, we claim that $R_{\hat\varepsilon}{\,\hskip-1pt\cdot\hskip-1pt\,} (P_{\hat\gamma})^{x_y}$ and that $x_y$ stabilizes~$Q_{\hat\delta}{\,\hskip-1pt\cdot\hskip-1pt\,}.$ Indeed, since $(R_\varepsilon)^{x_y^{-1}}{\,\hskip-1pt\cdot\hskip-1pt\,} P_\gamma$, we have $(R_{\tilde\varepsilon})^{x_y^{-1}}{\,\hskip-1pt\cdot\hskip-1pt\,} P_{\tilde\gamma}$ and then it follows from \cite[Proposition 5.6]{P4} that we have $(R_{\hat\varepsilon})^{x_y^{-1}}{\,\hskip-1pt\cdot\hskip-1pt\,} P_{\hat\gamma}$ or, equivalently, $R_{\hat\varepsilon}{\,\hskip-1pt\cdot\hskip-1pt\,} (P_{\hat\gamma})^{x_y}{\,\hskip-1pt\cdot\hskip-1pt\,};$ moreover, since $\delta$ is the unique local point of $Q$ such that $Q_\delta$ is contained in~$P_\gamma{\,\hskip-1pt\cdot\hskip-1pt\,},$ again by \cite[Proposition 5.6]{P4} we can easily conclude that $x_y$ stabilizes $Q_{\hat\delta}{\,\hskip-1pt\cdot\hskip-1pt\,}.$
In particular, since the image of $\hat\imath^{\,x_y}$ in $\hat{\cal B} (R_{\hat\varepsilon})$ is not zero [14,~2.7] and since $\hat\imath$ is primitive in $\hat{\cal B}{\,\hskip-1pt\cdot\hskip-1pt\,},$ $\hat\imath^{\,x_y}$ belongs to $\hat\varepsilon$ and therefore, since $\hat\imath$ also belongs to $\hat\varepsilon{\,\hskip-1pt\cdot\hskip-1pt\,},$ there is $\hat a_y\in (\hat{\cal B}^R)^*$ such that $\hat\imath^{\,x_y} = \hat\imath^{{\,\hskip-1pt\cdot\hskip-1pt\,}\hat a_y}{\,\hskip-1pt\cdot\hskip-1pt\,};$ choose $s_y\in S^*$ lifting the action of $x_y$ on $S$ and set $\hat x_y = s_y\otimes x_y{\,\hskip-1pt\cdot\hskip-1pt\,},$ so that we have $$\hat\imath^{\,x_y} = (\hat x_y)^{-1} \hat\imath{\,\hskip-1pt\cdot\hskip-1pt\,} \hat x_y \quad ;$$ then, since $\hat x_y$ and $\hat a_y$ normalize~$Q{\,\hskip-1pt\cdot\hskip-1pt\,},$ the element $\hat x_y \hat a_y^{-1}$ of $\hat A$ normalizes $Q{\,\hskip-1pt\cdot\hskip-1pt\,} \hat\imath$ and therefore $\hat x_y \hat a_y^{-1}\hat\imath$ determines an element $m_y$ of~$M{\,\hskip-1pt\cdot\hskip-1pt\,}.$ We claim that the image $\bar m_y$ of $m_y$ in $\bar M$ only depends on~$y\in L$ and that, in the case where $R_\varepsilon =Q_\delta{\,\hskip-1pt\cdot\hskip-1pt\,},$ this correspondence determines a group homomorphism $$\bar\sigma : L\longrightarrow \bar M \quad .$$
Indeed, if $x'\in N$ still fulfills conditions~£3.14.1 then we necessarily have $x' = x_y\,z$ for some $z\in C_H (R)$ and therefore it suffices to choose the element $\hat a_y{\,\hskip-1pt\cdot\hskip-1pt\,} z$ of $(\hat B^R)^*$ in the definition above. On the other hand, if $\hat a'\in (\hat B^R)^*$ still fulfills $\hat\imath^{{\,\hskip-1pt\cdot\hskip-1pt\,}\hat x_y} = \hat\imath^{{\,\hskip-1pt\cdot\hskip-1pt\,}\hat a'}$ then we clearly have $\hat a' = \hat c{\,\hskip-1pt\cdot\hskip-1pt\,}\hat a_y$ for some $\hat c\in (\hat B^R)^*$ centralizing $\hat\imath{\,\hskip-1pt\cdot\hskip-1pt\,},$ so that $\hat c{\,\hskip-1pt\cdot\hskip-1pt\,}\hat\imath$ belongs to $(\hat\imath\hat B\hat\imath)^Q{\,\hskip-1pt\cdot\hskip-1pt\,};$ hence, the image of $\hat x_y\hat a_y^{-1}\hat c^{-1}\hat\imath$ in $\bar M$ coincides with $\bar m_y{\,\hskip-1pt\cdot\hskip-1pt\,}.$ Moreover, in the case where $R_\varepsilon =Q_\delta{\,\hskip-1pt\cdot\hskip-1pt\,},$ for any element $y'$ in $L$ we clearly can choose $\hat x_{yy'} = \hat x_y{\,\hskip-1pt\cdot\hskip-1pt\,} \hat x_{y'}{\,\hskip-1pt\cdot\hskip-1pt\,};$ then, we have $$\hat\imath^{{\,\hskip-1pt\cdot\hskip-1pt\,}\hat x_{yy'}} = (\hat\imath^{{\,\hskip-1pt\cdot\hskip-1pt\,}\hat a_y})^{\hat x_{y'}} = \hat\imath^{\hat x_{y'}{\,\hskip-1pt\cdot\hskip-1pt\,} (\hat a_y)^{\hat x_{y'}}} = \hat\imath^{\hat a_{y'}(\hat a_y)^{\hat x_{y'}}}$$ and therefore, since $\hat a_{y'}(\hat a_y)^{\hat x_{y'}}$ still belongs to $(\hat B^Q)^*{\,\hskip-1pt\cdot\hskip-1pt\,},$ we clearly can choose $\hat a_{yy'} = \hat a_{y'}(\hat a_y)^{\hat x_{y'}}{\,\hskip-1pt\cdot\hskip-1pt\,},$ so that we get $$\hat x_{yy'}{\,\hskip-1pt\cdot\hskip-1pt\,} \hat a_{yy'}^{-1}\hat\imath = \hat x_y{\,\hskip-1pt\cdot\hskip-1pt\,}\hat x_{y'}{\,\hskip-1pt\cdot\hskip-1pt\,} \big(\hat a_{y'} (\hat a_y)^{\hat x_{y'}}\big)^{-1}\hat\imath = (\hat x_y{\,\hskip-1pt\cdot\hskip-1pt\,} \hat a_y^{-1}\hat\imath)(\hat x_{y'}{\,\hskip-1pt\cdot\hskip-1pt\,} \hat a_{y'}^{-1}\hat\imath)$$ which implies that $\bar m_{yy'} = \bar m_y{\,\hskip-1pt\cdot\hskip-1pt\,}\bar m_{y'}{\,\hskip-1pt\cdot\hskip-1pt\,}.$ This proves our claim.
In particular, for any $u\in P{\,\hskip-1pt\cdot\hskip-1pt\,},$ we can choose $x_{\tau (u)} = u$ and $\hat a_{\tau (u)} = 1{\,\hskip-1pt\cdot\hskip-1pt\,};$ moreover, since the action of $P$ on $S_\gamma$ can be lifted to a unique group~homomorphism $\varrho {\,\hskip-1pt\cdot\hskip-1pt\,}\colon P\to {\rm Ker}({\rm det}_{S_\gamma})$ \cite[~1.8]{P4}, we may choose $\hat x_{\tau (u)} = \varrho (u)\otimes u{\,\hskip-1pt\cdot\hskip-1pt\,};$ then, it is clear that the correspondence $\tau^*$ mapping $\tau (u)$ on the image of $(\varrho (u)\otimes u) \hat\imath$ in $M$ defines a group homomorphism from $\tau (P){\,\hskip-1pt\cdot\hskip-1pt\,} L$ to~$M$ lifting the corresponding restriction of $\bar\sigma{\,\hskip-1pt\cdot\hskip-1pt\,}.$
Finally, we claim that $\tau^*$ fulfills condition~£3.6.1; indeed, coming back to the general inclusion $\tau (R){\,\hskip-1pt\cdot\hskip-1pt\,} \tau (P)^y$ above, we clearly have $\bar\sigma (y) = \bar m_y$ and, according to the right-hand equalities
in~£3.14.1, for any $v\in R$ we get $$\tau^*\big(\tau (v)^y\big) = v^{x_y}{\,\hskip-1pt\cdot\hskip-1pt\,} \hat\imath = (v{\,\hskip-1pt\cdot\hskip-1pt\,} \hat\imath)^{m_y} =\tau^*\big(\tau (v)\big)^{m_y} \quad .$$ Consequently, it follows from Lemma~£3.6 that $\bar\sigma$ can be lifted to a group homomorphism $\sigma{\,\hskip-1pt\cdot\hskip-1pt\,}\colon L\to M$ extending $\tau^*{\,\hskip-1pt\cdot\hskip-1pt\,};$ moreover, the inverse image of~$\sigma (L)$ in $N_{(\hat\imath\hat{\cal A} \hat\imath)^*} (Q{\,\hskip-1pt\cdot\hskip-1pt\,}\hat\imath)$ is a $k^*\hbox{-}$group which is clearly contained in $$\hat N{\,\hskip-1pt\cdot\hskip-1pt\,}({\,\hskip-1pt\cdot\hskip-1pt\,}'^*\otimes 1)\subset {\,\hskip-1pt\cdot\hskip-1pt\,}'\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} \hat{\,\hskip-1pt\cdot\hskip-1pt\,} \quad ;$$ hence, according to definition~£3.13.1, $\sigma$ still can be lifted to a $k^*\hbox{-}$group homomorphism $$\hat\sigma : \hat L^{^\circ}\longrightarrow N_{(\hat\imath\hat{\cal A} \hat\imath)^*} (Q{\,\hskip-1pt\cdot\hskip-1pt\,} \hat\imath) $$ mapping $\tau (u)$ on $u{\,\hskip-1pt\cdot\hskip-1pt\,} \hat\imath$ for any $u\in P{\,\hskip-1pt\cdot\hskip-1pt\,};$ hence, we get a $P$-interior and $N/H$-graded algebra homomorphism $${\,\hskip-1pt\cdot\hskip-1pt\,}_*\hat L^{^\circ}\longrightarrow \hat{\cal A}_{\hat\gamma} \quad .\leqno £3.14.2$$ We claim that homomorphism £3.14.2 is an isomorphism.
Indeed, denoting by $X{\,\hskip-1pt\cdot\hskip-1pt\,} N_G (Q_\delta)$ a set of representatives for $\bar N = N/H{\,\hskip-1pt\cdot\hskip-1pt\,},$ it is clear that we have $${\cal A} = \bigoplus_{x\in X} x{\,\hskip-1pt\cdot\hskip-1pt\,} {\cal B} $$ and therefore we still have $$\hat{\cal A} = S\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\cal A} = \bigoplus_{x\in X} (s_x\otimes x) (S\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\cal B}) = \bigoplus_{x\in X} (s_x\otimes x) \hat{\cal B} \quad ;$$ moreover, choosing as above an element $\hat a_x\in (\hat{\cal B}^Q)^*$ such that $\hat\imath^{{\,\hskip-1pt\cdot\hskip-1pt\,} x} = \hat\imath^{{\,\hskip-1pt\cdot\hskip-1pt\,}\hat a_x}{\,\hskip-1pt\cdot\hskip-1pt\,},$ it is clear that $(s_x\otimes x) \hat a_x^{-1}\hat{\cal B} =(s_x\otimes x) \hat{\cal B} $ for any $x\in X$ and therefore we get $$\hat{\cal A}_{\hat\gamma} = \hat\imath \hat{\cal A}\hat\imath = \bigoplus_{x\in X} ((s_x\otimes x) \hat a_x^{-1}\hat\imath) (\hat\imath\hat{\cal B}\hat\imath) \quad ;$$ thus, since we know that $\hat\imath\hat{\cal B}\hat\imath\cong {\,\hskip-1pt\cdot\hskip-1pt\,} Q$ and that $L/\tau (Q)\cong \bar N{\,\hskip-1pt\cdot\hskip-1pt\,},$ denoting by $Y{\,\hskip-1pt\cdot\hskip-1pt\,} L$ a set of representatives for $L/\tau (Q)$ and by $\hat y$ a lifting of $y\in Y$ to $\hat L{\,\hskip-1pt\cdot\hskip-1pt\,},$ we still get $$\hat{\cal A}_{\hat\gamma} \cong \bigoplus_{y\in Y} \hat\sigma (\hat y){\,\hskip-1pt\cdot\hskip-1pt\,} {\,\hskip-1pt\cdot\hskip-1pt\,} Q $$ which proves that homomorphism~£3.14.2 is an isomorphism.
\noindent {\bf Corollary~£3.15.}\quad {\it With the notation above, we have a $P$-interior and $N/H$-graded algebra isomorphism ${\cal A}_\gamma\cong S_\gamma\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\,\hskip-1pt\cdot\hskip-1pt\,}_*\hat L^{^\circ}{\,\hskip-1pt\cdot\hskip-1pt\,}.${\,\hskip-1pt\cdot\hskip-1pt\,}}
\noindent {\bf Proof:} Since $\hat{\cal A} = S^\circ\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\cal A}$ and we have a $P$-interior algebra embedding ${\,\hskip-1pt\cdot\hskip-1pt\,}\to S_\gamma\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} S_\gamma^\circ$ \cite[~5.7]{P4}, we still have the following commutative diagram of {\it exterior{\,\hskip-1pt\cdot\hskip-1pt\,}} $P$-interior algebra embeddings and homomorphisms \cite[2.10]{KP} $$\matrix{&{\cal A}_\gamma&\longrightarrow&\hskip-20pt S_\gamma\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} S_\gamma^\circ \otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\cal A}_\gamma&\cr &\hskip-20pt\nearrow&\nearrow\hskip-60pt&\uparrow\cr {\cal B}_\gamma&\longrightarrow& S_\gamma\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} S_\gamma^\circ \otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\cal B}_\gamma& S_\gamma\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} \hat{\cal A}_{\hat\gamma}&\hskip-20pt\cong&\hskip-20pt S_\gamma\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\,\hskip-1pt\cdot\hskip-1pt\,}_*\hat L^{^\circ}\cr &&&\hskip-40pt\nearrow&\nearrow\hskip-20pt&\cr &&S_\gamma\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} \hat{\cal B}_{\hat\gamma}\hskip-40pt&\hskip-20pt\cong&\hskip-20pt S_\gamma\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\,\hskip-1pt\cdot\hskip-1pt\,} Q\cr} \quad ;\leqno £3.15.1$$ moreover, since the unity element is primitive in $(S_\gamma)^P$ and the kernel of the canonical homomorphism $$(S_\gamma\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\,\hskip-1pt\cdot\hskip-1pt\,} Q)^P\longrightarrow (S_\gamma)^P $$ is contained in the radical, the unity element is primitive in $(S_\gamma\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\,\hskip-1pt\cdot\hskip-1pt\,} Q)^P$ too; since $P$ has a unique local point over $ S_\gamma\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} S_\gamma^\circ \otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\cal A}_\gamma$ \cite[Proposition~5.6]{P4}, from diagram~£3.15.1 we get the announced isomorphism.
\noindent {\bf £3.16.}\quad Let us take advantage of this revision to correct the erroneous proof of~\cite[1.15.1]{KP}.
Indeed, as proved in Proposition~£3.11 above, we have a {strict covering{\,\hskip-1pt\cdot\hskip-1pt\,}} of
$Q\hbox{-}$interior $k\hbox{-}$algebras
$$k\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\cal B}_\delta\longrightarrow k\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} S_\delta
\leqno £3.16.1$$
but {\it not{\,\hskip-1pt\cdot\hskip-1pt\,}} a {strict covering{\,\hskip-1pt\cdot\hskip-1pt\,}} $k\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\cal B}\longrightarrow k\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} S$
of $H\hbox{-}$interior $k\hbox{-}$algebras as stated in~\cite[1.15]{KP}; however, it follows from
\cite[~2.14.4 and Lemma~9.12]{P6} that the isomorphism ${\cal B}_\delta (Q_\delta)\cong S_\delta (Q)$ induced by homomorphism~£3.16.1
\cite[4.14]{P4} forces the {embedding{\,\hskip-1pt\cdot\hskip-1pt\,}} ${\cal B}(Q_\delta)\to
S (Q_{\bar\delta})$ where $\bar\delta$ denotes the local point of $Q$ over $S$ determined by $\delta{\,\hskip-1pt\cdot\hskip-1pt\,};$ hence, we still have the isomorphism
\cite[1.15.5]{KP} which allows us to complete the argument.
\vskip 1cm \noindent{\bf\large 4. Extensions of Glauberman correspondents of blocks}
In this section, we continue to use the notation in Paragraph 3.1, namely ${\cal O}$ is a complete discrete valuation ring with an algebraically closed residue field $k$ of characteristic $p$; moreover we assume that its quotient field ${\cal K}$ has characteristic 0 and is big enough for all finite groups that we will consider; this assumption is kept throughout the rest of this paper.
\noindent{\bf 4.1.}\quad Let $A$ be a cyclic group of order $q$, where $q$ is a power of a prime. Assume that $G$ is an $A$-group, that $H$ is an $A$-stable normal subgroup of~$G$ and that $b$ is $A$-stable. Note that, in this section, $b$ is not necessarily nilpotent. Assume that $A$ and $G$ have coprime orders; by \cite[Theorem 1.2]{P5}, $G$~acts transitively on the set of all defect groups of $G_\alpha$ and, obviously, $A$ also acts on this set; hence, since $A$ and $G$ have coprime orders, by \cite[Lemma 13.8 and Corollary 13.9]{I} $A$ stabilizes some defect group of $G_\alpha$ and $G^A$ acts transitively on the set of them. Similarly, $A$ stabilizes some defect group
of $N_\beta$ and $N^A$ acts transitively on the set of them. Thus, we may assume that $A$ stabilizes $P{\,\hskip-1pt\cdot\hskip-1pt\,} N$ and actually we assume that $A$ centralizes~$P{\,\hskip-1pt\cdot\hskip-1pt\,};$ recall that~$Q=P\cap H$.
\noindent{\bf 4.2.}\quad Clearly $H^A$ is normal in $G^A$. We claim that $N^A$ is the stabilizer of ${\it w}(b)$ in $G^A$. Indeed, for any $x\in G^A$, $b^x$ is a block of $H$ and $Q^x$ is a defect group of $b^x$; since $A$ stabilizes $b^x$ and centralizes~$Q^x$, ${\it w}(b^x)$ makes sense. Note that $G$ acts on ${\rm Irr}_{\,\hskip-1pt\cdot\hskip-1pt\,}(H){\,\hskip-1pt\cdot\hskip-1pt\,},$ that $G^A$ acts on ${\rm Irr}_{\,\hskip-1pt\cdot\hskip-1pt\,}(H^A)$ and that the Glauberman correspondence $\pi(H, A)$ is compatible with the obvious actions of $G^A$ on ${\rm Irr}_{\,\hskip-1pt\cdot\hskip-1pt\,}(H)$ and ${\rm Irr}_{\,\hskip-1pt\cdot\hskip-1pt\,}(H^A)$. So we have \begin{eqnarray*} {\rm Irr}_{\,\hskip-1pt\cdot\hskip-1pt\,}(H^A, {\it w}(b^x)) &=& \pi(H, A)({\rm Irr}_{\,\hskip-1pt\cdot\hskip-1pt\,}(H, b^x)) \\ &=& \pi(H, A)({\rm Irr}_{\,\hskip-1pt\cdot\hskip-1pt\,}(H,b)^x )\\ &=& (\pi(H, A)({\rm Irr}_{\,\hskip-1pt\cdot\hskip-1pt\,}(H, b)))^x \\ &=& {\rm Irr}_{\,\hskip-1pt\cdot\hskip-1pt\,}(H^A,{\it w}(b))^x\quad ; \end{eqnarray*} in particular, we get ${\it w}(b^x)={\it w}(b)^x$ and therefore we have ${\it w}(b)^x={\it w}(b)$ if and only if $x$ belongs to $N^A$. We set \begin{center}${\it w}(c)={\rm Tr}^{G^A}_{N^A}({\it w}(b))$, ${\it w}(\beta)=\{{\it w}(b)\}$ and ${\it w}(\alpha)=\{{\it w}(c)\}$\quad. \end{center} Then ${\it w}(\beta)$ is a point of $N^A$ on ${\,\hskip-1pt\cdot\hskip-1pt\,} (H^A)$, ${\it w}(\alpha)$ is a point of $G^A$ on ${\,\hskip-1pt\cdot\hskip-1pt\,} (H^A)$, we have $(N^A)_{{\it w}(\beta)}\leq (G^A)_{{\it w}(\alpha)}$ and any defect group of $(N^A)_{{\it w}(\beta)}$ is a defect group of $(G^A)_{{\it w}(\alpha)}$.
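The following observation is not in the original text; it is a small check, included only for the reader's convenience, that the transfer used in the definition above is indeed an idempotent. Since $N^A$ is precisely the stabilizer of ${\it w}(b)$ in $G^A$, the conjugates ${\it w}(b)^x$, where $x$ runs over a set of representatives of the cosets of $N^A$ in $G^A$, are pairwise distinct blocks of $H^A$ and hence pairwise orthogonal idempotents; consequently $${\it w}(c)={\rm Tr}^{G^A}_{N^A}({\it w}(b))=\sum_x {\it w}(b)^x$$ is a $G^A$-fixed idempotent of the group algebra of $H^A$.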
\noindent{\bf 4.3.}\quad Let ${\frak B}$ and ${\it w}({\frak B})$ be the respective sets of $A$-stable blocks of $G$ covering $b$ and of $G^A$ covering ${\it w}(b){\,\hskip-1pt\cdot\hskip-1pt\,}.$ Take $e\in {\frak B}{\,\hskip-1pt\cdot\hskip-1pt\,};$ since $P$ is a defect group of $G_\alpha$ and $c$ fulfills $ec=e$, $e$ has a defect group contained in $P$ and therefore, since $A$ centralizes $P{\,\hskip-1pt\cdot\hskip-1pt\,},$ $e$ has a defect group centralized by~$A{\,\hskip-1pt\cdot\hskip-1pt\,};$ hence, by \cite[Proposition 1 and Theorem 1]{W}, ${\it w}(e)$ makes sense and $A$ stabilizes all the characters in ${\rm Irr}_{\,\hskip-1pt\cdot\hskip-1pt\,}(G, e){\,\hskip-1pt\cdot\hskip-1pt\,};$ that is to say, $A$ stabilizes all the characters of blocks in ${\frak B}$. Moreover, by \cite[Theorem 13.29]{I}, ${\it w}(e)$ belongs to ${\it w}({\frak B})$.
\noindent{\bf Proposition 4.4.}\quad {\it The map ${\it w}: {\frak B}\rightarrow {\it w}({\frak B}),{\,\hskip-1pt\cdot\hskip-1pt\,} e\mapsto {\it w}(e)$ is bijective and we have \begin{center}${\rm Irr}_{\,\hskip-1pt\cdot\hskip-1pt\,}(G^A, {\it w}(c))=\pi(G,\,A)({\rm Irr}_{\,\hskip-1pt\cdot\hskip-1pt\,}(G, c)^A)$\quad .\end{center} }
\noindent{\it Proof.}\quad Assume that $e, g\in {\frak B}$ are distinct and ${\it w}(e)={\it w}(g){\,\hskip-1pt\cdot\hskip-1pt\,};$ then there exist $\chi\in {\rm Irr}_{\,\hskip-1pt\cdot\hskip-1pt\,}(G, e)$ and $\phi\in {\rm Irr}_{\,\hskip-1pt\cdot\hskip-1pt\,}(G, g)$ such that $\chi\neq\phi$ and $\pi(G, A)(\chi)=\pi(G, A)(\phi)$; but this contradicts the bijectivity of the Glauberman correspondence. Therefore the map ${\it w}$ is injective.
Take $h\in {\it w}({\frak B}){\,\hskip-1pt\cdot\hskip-1pt\,};$ then $h$ covers ${\it w}(b)$ and so there exist $\zeta\in {\rm Irr}_{\,\hskip-1pt\cdot\hskip-1pt\,}(G^A, h)$ and $\eta\in {\rm Irr}_{\,\hskip-1pt\cdot\hskip-1pt\,}(H^A, {\it w}(b))$ such that $\eta$ is a constituent of ${\rm Res}^{G^A}_{H^A}(\zeta)$. Set $$\theta=(\pi(G, A))^{-1}(\zeta)\quad{\rm and}\quad \vartheta=(\pi(H, A))^{-1}(\eta) \quad ;$$
by \cite[Theorem 13.29]{I}, $\vartheta$ is a constituent of
${\rm Res}^G_H(\theta){\,\hskip-1pt\cdot\hskip-1pt\,};$ let $l$ be the block of~$G$ acting as the identity map on a ${\,\hskip-1pt\cdot\hskip-1pt\,} G$-module affording $\theta{\,\hskip-1pt\cdot\hskip-1pt\,};$ then $l$ covers $b$ and we have ${\it w}(l)=h$. Finally, we have \begin{eqnarray*} \pi(G,\,A)({\rm Irr}_{\,\hskip-1pt\cdot\hskip-1pt\,}(G, c)^A) &=& \pi(G,\,A)(\cup_{e\in {\frak B}}{\rm Irr}_{\,\hskip-1pt\cdot\hskip-1pt\,}(G, e)) \\
&=& \cup_{{\it w}(e)\in {\it w}({\frak B})}{\rm Irr}_{\,\hskip-1pt\cdot\hskip-1pt\,}(G^A, {\it w}(e)) \\
&=& {\rm Irr}_{\,\hskip-1pt\cdot\hskip-1pt\,}(G^A, {\it w}(c))\quad . \end{eqnarray*}
\noindent{\bf Proposition 4.5.}\quad {\it $P$ is a defect group of the pointed group $(G^A)_{{\it w}(\alpha)}$.}
\noindent{\it Proof.}\quad It suffices to show that $P$ is a defect group of~$(N^A)_{{\it w}(\beta)}$ (cf.~£3.1); thus, without loss of generality, we can assume that $G=N$. Obviously, $A$ stabilizes $P{\,\hskip-1pt\cdot\hskip-1pt\,} H$ and $b$ is the unique block of $P{\,\hskip-1pt\cdot\hskip-1pt\,} H$ covering the block $b$ of~$H{\,\hskip-1pt\cdot\hskip-1pt\,};$ since $P$ is a defect group of $G_\alpha$ and $N_\beta$, $P$ is maximal in $N$ such that ${\rm Br}_P^{{\,\hskip-1pt\cdot\hskip-1pt\,} H}(b)\neq 0{\,\hskip-1pt\cdot\hskip-1pt\,};$ thus $P$ is maximal in $P{\,\hskip-1pt\cdot\hskip-1pt\,} H$ such that ${\rm Br}_P^{{\,\hskip-1pt\cdot\hskip-1pt\,} (P{\,\hskip-1pt\cdot\hskip-1pt\,} H)}(b)\neq 0{\,\hskip-1pt\cdot\hskip-1pt\,};$ therefore $P$ is a defect group of $b$ as a block of $P{\,\hskip-1pt\cdot\hskip-1pt\,} H$. Since $A$ centralizes $P$, the Glauberman correspondent $b'$ of $b$ as a block of $P{\,\hskip-1pt\cdot\hskip-1pt\,} H$ makes sense; moreover by Proposition 4.4, $b'$ covers ${\it w}(b)$. Since ${\it w}(b)$ is the unique block of $P{\,\hskip-1pt\cdot\hskip-1pt\,} H^A$ covering the block ${\it w}(b)$ of $H^A$, $b'={\it w}(b)$, and then, by \cite[Theorem 1]{W}, $P$ is a defect group of ${\it w}(b)$ as a block of $P{\,\hskip-1pt\cdot\hskip-1pt\,} H^A$; in particular, ${\rm Br}_P^{{\,\hskip-1pt\cdot\hskip-1pt\,} (H^A)}({\it w}(b))\neq 0$.
Since $P$ is a defect group of $G_\alpha$, by \cite[Theorem 5.3]{KP} the image of $P$ in the quotient group $N/H$ is a Sylow $p$-subgroup of $N/H{\,\hskip-1pt\cdot\hskip-1pt\,};$ but, the inclusion map $N^A\hookrightarrow N$ induces a group isomorphism $N^A/H^A\cong (N/H)^A{\,\hskip-1pt\cdot\hskip-1pt\,};$ hence, the image of $P$ in $N^A/H^A$ is a Sylow $p$-subgroup of $N^A/H^A{\,\hskip-1pt\cdot\hskip-1pt\,};$ then, by \cite[Theorem 5.3]{KP} again, $P$ is a defect group of $(N^A)_{{\it w}(\alpha)}$.
\noindent{\bf 4.6.}\quad We may assume that $A$ stabilizes $P_\gamma{\,\hskip-1pt\cdot\hskip-1pt\,};$ then $A$ stabilizes $Q_\delta$ too (see \cite[Proposition 5.5]{KP}). Let $R$ be a subgroup such that $Q\leq R\leq P$ and $R_\varepsilon$ a local pointed group on ${\,\hskip-1pt\cdot\hskip-1pt\,} H$ contained in~$P_\gamma$. Since $A$ stabilizes $P_\gamma$ and centralizes $P$, $A$ centralizes $R$ and then, by \cite[Proposition 5.5]{KP}, it stabilizes~$R_\varepsilon$. Since~${\rm Br}_R^{{\,\hskip-1pt\cdot\hskip-1pt\,} H}(\varepsilon)$ is a point of $kC_H(R)$, then there is a unique block $b_\varepsilon$ of ${\,\hskip-1pt\cdot\hskip-1pt\,} C_H(R)$ such that ${\rm Br}_R^{{\,\hskip-1pt\cdot\hskip-1pt\,} H}(b_\varepsilon\varepsilon)={\rm Br}_R^{{\,\hskip-1pt\cdot\hskip-1pt\,} H}(\varepsilon)$ and, by \cite[Lemma 2.3]{Z}, $C_Q(R)$ is a defect group of $b_\varepsilon$; in particular, $b_\varepsilon$ is nilpotent. Obviously, $A$ centralizes $C_Q(R)$ and, since $A$ stabilizes $R_\varepsilon$ and thus it stabilizes $b_\varepsilon$, ${\it w}(b_\varepsilon)$ makes sense; moreover, ${\it w}(b_\varepsilon)$ is nilpotent and, since we have $$C_{H^A}(R)=C_{C_H(R)}(A)\quad ,$$
there is a unique local point ${\it w}(\varepsilon)$ of $R$ on ${\,\hskip-1pt\cdot\hskip-1pt\,} (H^A)$ such that \begin{center}${\rm Br}_R^{{\,\hskip-1pt\cdot\hskip-1pt\,} (H^A)}({\it w}(\varepsilon){\it w}(b_\varepsilon))={\rm Br}_R^{{\,\hskip-1pt\cdot\hskip-1pt\,} (H^A)}({\it w}(\varepsilon))\quad .$\end{center}
\noindent{\bf Proposition 4.7.}\quad {\it $P_{{\it w}(\gamma)}$ is a defect pointed group of $(G^A)_{{\it w}(\alpha)}$ and $Q_{{\it w}(\delta)}$ is a defect pointed group of $(H^A)_{{\,\hskip-1pt\cdot\hskip-1pt\,}{\it w}(b){\,\hskip-1pt\cdot\hskip-1pt\,}}$.}
\noindent{\it Proof.}\quad By \cite[Proposition 2.8]{P7}, the inclusion map ${\,\hskip-1pt\cdot\hskip-1pt\,} H\hookrightarrow {\,\hskip-1pt\cdot\hskip-1pt\,} (P\.H)$ is actually a strict semicovering $P{\,\hskip-1pt\cdot\hskip-1pt\,} H$-algebra homomorphism; hence, $\gamma$ determines a unique local point $\gamma'$ of $P$ on ${\,\hskip-1pt\cdot\hskip-1pt\,} (P{\,\hskip-1pt\cdot\hskip-1pt\,} H)$ such that $\gamma \subset \gamma'$. Obviously, $b$ is a block of $P{\,\hskip-1pt\cdot\hskip-1pt\,} H$. Since $\beta$ is also a point of $P{\,\hskip-1pt\cdot\hskip-1pt\,} H$ on ${\,\hskip-1pt\cdot\hskip-1pt\,} H$ and $P_\gamma$ is also a defect pointed group of the pointed group $(P{\,\hskip-1pt\cdot\hskip-1pt\,} H)_\beta$ on ${\,\hskip-1pt\cdot\hskip-1pt\,} H$, by \cite[Corollary 6.3]{KP} $P_{\gamma'}$ is a defect pointed group of the pointed group $(P{\,\hskip-1pt\cdot\hskip-1pt\,} H)_\beta$ on ${\,\hskip-1pt\cdot\hskip-1pt\,} (P{\,\hskip-1pt\cdot\hskip-1pt\,} H)$.
Let $b_{\gamma'}$ be the block of $C_{P{\,\hskip-1pt\cdot\hskip-1pt\,} H}(P)$ such that $${\rm Br}_P^{{\,\hskip-1pt\cdot\hskip-1pt\,} (P{\,\hskip-1pt\cdot\hskip-1pt\,} H)}(b_{\gamma'}\gamma')={\rm Br}_P^{{\,\hskip-1pt\cdot\hskip-1pt\,} (P{\,\hskip-1pt\cdot\hskip-1pt\,} H)}(\gamma')\quad ;$$
then $Z(P)$ is a defect group of $b_{\gamma'}$ and therefore ${\it w}(b_{\gamma'})$ makes sense. Obviously, $b_{\gamma'}$ covers $b_\gamma$ and thus ${\it w}(b_{\gamma'})$ covers ${\it w}(b_{\gamma})$ (see Proposition 4.4); but, since ${\it w}(b)$ is also the Glauberman correspondent of the block $b$ of $P{\,\hskip-1pt\cdot\hskip-1pt\,} H$ (see the first paragraph of the proof of Proposition 4.5), by \cite[Proposition 4]{W} we have
\begin{center}${\rm Br}_P^{{\,\hskip-1pt\cdot\hskip-1pt\,} (P{\,\hskip-1pt\cdot\hskip-1pt\,} H^A)}({\it w}(b){\it w}(b_{\gamma'}))={\rm Br}_P^{{\,\hskip-1pt\cdot\hskip-1pt\,} (P{\,\hskip-1pt\cdot\hskip-1pt\,} H^A)}({\it w}(b_{\gamma'}))$ \quad ;\end{center} this forces ${\rm Br}_P^{{\,\hskip-1pt\cdot\hskip-1pt\,} (H^A)}({\it w}(b){\it w}(b_{\gamma}))={\rm Br}_P^{{\,\hskip-1pt\cdot\hskip-1pt\,} (H^A)}({\it w}(b_{\gamma}))$, which implies that $$P_{{\it w}(\gamma)}\leq (P{\,\hskip-1pt\cdot\hskip-1pt\,} H^A)_{{\it w}(\beta)}\leq (G^A)_{{\it w}(\alpha)}\quad ;$$ hence, by Proposition 4.5, $P_{{\it w}(\gamma)}$ is a defect pointed group of $(G^A)_{{\it w}(\alpha)}$.
The statement that $Q_{{\it w}(\delta)}$ is a defect pointed group of $(H^A)_{{\,\hskip-1pt\cdot\hskip-1pt\,}{\it w}(b){\,\hskip-1pt\cdot\hskip-1pt\,}}$ is clear.
\noindent{\bf Lemma 4.8.}\quad {\it Let $R_\varepsilon$ and $T_\eta$ be local pointed groups on ${\,\hskip-1pt\cdot\hskip-1pt\,}$ such that $R$ is normal in $T$ and that we have
$Q_\delta\leq R_\varepsilon\leq P_\gamma$ and $Q_\delta\leq T_\eta\leq P_\gamma$. Then,
we have $R_\varepsilon\leq T_\eta$ if and only if we have $${\rm Br}_T^{{\,\hskip-1pt\cdot\hskip-1pt\,} C_H(R)}(b_\eta b_\varepsilon)={\rm Br}_T^{{\,\hskip-1pt\cdot\hskip-1pt\,} C_H(R)}(b_\eta)\quad .$$}
\par\noindent{\it Proof.}\quad Obviously, ${\,\hskip-1pt\cdot\hskip-1pt\,}$ is a $p$-permutation $P{\,\hskip-1pt\cdot\hskip-1pt\,} H$-algebra (see \cite[Def. 1.1]{BP1}) by $P{\,\hskip-1pt\cdot\hskip-1pt\,} H$-conjugation and $(T, {\rm Br}_T^{{\,\hskip-1pt\cdot\hskip-1pt\,}}(b_{\eta}))$ and $(R, {\rm Br}_R^{{\,\hskip-1pt\cdot\hskip-1pt\,}}(b_{\varepsilon}))$ are $(b, P{\,\hskip-1pt\cdot\hskip-1pt\,} H)\hbox{-}$Brauer pairs (see \cite[Def. 1.6]{BP1}). Moreover $T$ stabilizes $b_\varepsilon$, and $\eta$ and $\varepsilon$ are the unique local points of $T$ and $R$ on ${\,\hskip-1pt\cdot\hskip-1pt\,}$ (see \cite[Proposition 5.5]{KP}) such that \begin{center}${\rm Br}_T^{{\,\hskip-1pt\cdot\hskip-1pt\,}}(\eta){\rm Br}_T^{{\,\hskip-1pt\cdot\hskip-1pt\,}}(b_{\eta})={\rm Br}_T^{{\,\hskip-1pt\cdot\hskip-1pt\,}}(\eta)$ and ${\rm Br}_R^{{\,\hskip-1pt\cdot\hskip-1pt\,}}(\varepsilon){\rm Br}_R^{{\,\hskip-1pt\cdot\hskip-1pt\,}}(b_{\varepsilon})={\rm Br}_R^{{\,\hskip-1pt\cdot\hskip-1pt\,}}(\varepsilon)\quad .$\end{center}
Assume that $R_\varepsilon\leq T_\eta{\,\hskip-1pt\cdot\hskip-1pt\,};$ then, there are $h\in \eta$ and $l\in \varepsilon$ such that $hl=l=lh{\,\hskip-1pt\cdot\hskip-1pt\,};$ thus, we have $${\rm Br}_R^{{\,\hskip-1pt\cdot\hskip-1pt\,}}(hl)={\rm Br}_R^{{\,\hskip-1pt\cdot\hskip-1pt\,}}(l)\quad{\rm and}\quad {\rm Br}_R^{{\,\hskip-1pt\cdot\hskip-1pt\,}}(h){\rm Br}_R^{{\,\hskip-1pt\cdot\hskip-1pt\,}}(b_{\varepsilon})\neq 0\quad .$$
Then, it follows from \cite[Def. 1.7]{BP1} that
$$(R, {\rm Br}_R^{{\,\hskip-1pt\cdot\hskip-1pt\,}}(b_{\varepsilon}))\subset (T,{\rm Br}_T^{{\,\hskip-1pt\cdot\hskip-1pt\,}}(b_{\eta}))$$ and from \cite[Theorem 1.8]{BP1} that we have ${\rm Br}_T^{{\,\hskip-1pt\cdot\hskip-1pt\,} C_{H}(R)}(b_{\eta} b_{\varepsilon})={\rm Br}_T^{{\,\hskip-1pt\cdot\hskip-1pt\,} C_{H}(R)}(b_{\eta})$.
Conversely, if we have $${\rm Br}_T^{{\,\hskip-1pt\cdot\hskip-1pt\,} C_{H}(R)}(b_{\eta} b_{\varepsilon})={\rm Br}_T^{{\,\hskip-1pt\cdot\hskip-1pt\,} C_{H}(R)}(b_{\eta})$$ then, by \cite[Theorem 1.8]{BP1} we still have
${\rm Br}_R^{{\,\hskip-1pt\cdot\hskip-1pt\,}}(e_{\varepsilon} h)={\rm Br}_R^{{\,\hskip-1pt\cdot\hskip-1pt\,}}(h)$ for any $h\in \eta$; hence, by the lifting theorem for idempotents, we get $R_{\varepsilon}\leq T_\eta$.
Let $\cal R$ be a Dedekind domain of characteristic 0, $\pi$ be a finite set of prime numbers such that $l\cal R\neq \cal R$ for all $l\in \pi{\,\hskip-1pt\cdot\hskip-1pt\,},$ and $X$ and $Y$ be finite groups with $X$ acting on $Y$. We consider the group algebra ${\cal R} Y$ and set $$Z_{\rm id}({\cal R} Y)=\oplus {\cal R} c$$ where $c$ runs over all central primitive idempotents of ${\cal R}Y$. Obviously, $X$ acts on $Z_{\rm id}({\cal R} Y)$ and, in the case that $X$ is a solvable $\pi$-group, Lluis Puig exhibits an ${\cal R}$-algebra homomorphism ${\mathcal G l}^Y_X: Z_{\rm id}({\cal R} Y)\rightarrow Z_{\rm id}({\cal R} Y^X)$ (see \cite[Theorem 4.6]{P7}), which unifies the usual Brauer homomorphism and the Glauberman correspondence of characters --- called the {\it Brauer-Glauberman correspondence{\,\hskip-1pt\cdot\hskip-1pt\,}}.
\noindent{\bf Proposition 4.9.}\quad {\it Let $R_\varepsilon$ and $T_\eta$ be local pointed groups on ${\,\hskip-1pt\cdot\hskip-1pt\,}$ such that $Q_\delta\leq R_\varepsilon\leq P_\gamma$ and that $Q_\delta\leq T_\eta\leq P_\gamma$. Then $R_\varepsilon\leq T_\eta$ and $R_{{\it w}(\varepsilon)}\leq T_{{\it w}(\eta)}$ are equivalent to each other. }
\noindent{\it Proof.}\quad By induction we can assume that $R$ is normal and maximal in $T$; in particular, the quotient $T/R$ is cyclic. In this case, it follows from Lemma 4.8 that the inclusion $R_{{\it w}(\varepsilon)}\leq T_{{\it w}(\eta)}$ is equivalent to $${\rm Br}_T^{{\,\hskip-1pt\cdot\hskip-1pt\,} C_{H^A}(R)}({\it w}(b_\varepsilon){\it w}(b_\eta)) ={\rm Br}_T^{{\,\hskip-1pt\cdot\hskip-1pt\,} C_{H^A}(R)}({\it w}(b_\eta)) \quad .\leqno £4.9.1$$
Let ${\mathbb Z}$ be the ring of all rational integers and $S$ be the complement set of $p{\mathbb Z}\cup q\mathbb Z$ in $\mathbb Z{\,\hskip-1pt\cdot\hskip-1pt\,};$
then $S$ is a multiplicatively closed set in $\mathbb Z$. We take the localization $S^{-1}\mathbb Z$ of $\mathbb Z$ at $S$ and regard it as a subring of ${\,\hskip-1pt\cdot\hskip-1pt\,}{\,\hskip-1pt\cdot\hskip-1pt\,};$ since we assume that $\cal K$ is big enough for all finite groups we consider, we can assume that $\cal K$ contains an $|H|$-th primitive root $\omega$ of unity and we set \begin{center}${\cal R}=(S^{-1}{\mathbb Z})[\omega]\quad .$\end{center} Then $\cal R$ is a Dedekind domain (see \cite[Example 2 in Page 96 and Exercise 1 in Page 99]{AM}) and given a prime $l$, we have $l\cal R\neq \cal R$ if and only if $l=p$ or $l=q$. We consider the group algebra ${\cal R}C_H(R)$ and the obvious action of $(T\times A)/R\cong (T/R)\times A$ on it.
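The claim above that, for a prime $l$, we have $l{\cal R}\neq {\cal R}$ if and only if $l=p$ or $l=q$ can also be checked directly; the following short verification is not part of the original text and is included only for the reader's convenience. If $l\neq p,q$ then $l\in S$, so $l$ is already invertible in $S^{-1}{\mathbb Z}\subseteq {\cal R}$ and therefore $l{\cal R}={\cal R}$. Conversely, if $l\in\{p,q\}$ and $l$ were invertible in ${\cal R}$, then $l^{-1}$ would be integral over $S^{-1}{\mathbb Z}$, since ${\cal R}=(S^{-1}{\mathbb Z})[\omega]$ is generated by the root of unity $\omega$ and hence is integral over $S^{-1}{\mathbb Z}$; but $S^{-1}{\mathbb Z}$ is a localization of ${\mathbb Z}$ and thus integrally closed in ${\mathbb Q}$, so that $l^{-1}$ would belong to $S^{-1}{\mathbb Z}$, which is impossible because writing $l^{-1}=a/s$ with $a\in{\mathbb Z}$ and $s\in S$ would give $s=la$, whereas no element of $S$ is divisible by $l$. Hence $l{\cal R}\neq{\cal R}$.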
Since $\cal R$ contains a primitive $|H|$-th root of unity $\omega$, the blocks $b_\varepsilon$, $b_\eta$, ${\it w}(b_\varepsilon)$ and ${\it w}(b_\eta)$ respectively belong to $$Z_{\rm id}({\cal R}C_H(R))\quad ,\quad Z_{\rm id}({\cal R}C_H(T))\quad ,\quad Z_{\rm id}({\cal R}C_{H^A}(R))\quad{\rm and}\quad Z_{\rm id}({\cal R}C_{H^A}(T)) $$ (see \cite[Chapter IV, Lemma 7.2]{F}); then, by \cite[Corollary 5.9]{P7}, we have $${\mathcal G l}^{C_{H}(R)}_{A}(b_\varepsilon)={\it w}(b_\varepsilon)\quad{\rm and}\quad {\mathcal G l}^{C_{H}(T)}_{A}(b_\eta)={\it w}(b_\eta) \quad .$$
If $R_\varepsilon\leq T_\eta$, by Lemma 4.8 we have the equality $${\rm Br}_{T}^{{\,\hskip-1pt\cdot\hskip-1pt\,} C_H(R)}(b_\varepsilon b_\eta)={\rm Br}_{T}^{{\,\hskip-1pt\cdot\hskip-1pt\,} C_H(R)}(b_\eta)$$
which is equivalent to ${\mathcal G l}^{C_H(R)}_{T/R}(b_\varepsilon)b_\eta=b_\eta$ (see \cite[4.6.1 and the proof of Corollary 3.6]{P7}). Then by \cite[4.6.2]{P7}, we have \begin{eqnarray*} {\it w}(b_\eta) &=& {\mathcal G l}^{C_H(T)}_{A}(b_\eta)= {\mathcal G l}^{C_H(T)}_{A}({\mathcal G l}^{C_H(R)}_{T/R} (b_\varepsilon)b_\eta) \\ &=& {\mathcal G l}^{C_H(R)}_{(T/R)\times A}(b_\varepsilon) {\mathcal G l}^{C_H(T)}_{A}(b_\eta) \\ &=&{\mathcal G l}^{C_{H^A}(R)}_{T/R} ({\mathcal G l}^{C_{H}(R)}_{A}(b_\varepsilon)) {\mathcal G l}^{C_H(T)}_{A}(b_\eta)\\ &=&{\mathcal G l}^{C_{H^A}(R)}_{T/R} ({\it w}(b_\varepsilon)) {\it w}(b_\eta)\quad. \end{eqnarray*} This is again equivalent to equality~£4.9.1 above (see \cite[4.6.1 and the proof of Corollary 3.6]{P7}) and therefore it implies $R_{{\it w}(\varepsilon)}\leq T_{{\it w}(\eta)}$. The proof that $R_{{\it w}(\varepsilon)}\leq T_{{\it w}(\eta)}$ implies $R_\varepsilon\leq T_\eta$ is similar.
\noindent{\bf 4.10.}\quad The assumptions and consequences above are very scattered; we collect them in this paragraph, so that readers can easily find them and we can conveniently quote them later. Let $A$ be a cyclic group of order~$q$, where $q$ is a prime number; we assume that $G$ is an $A$-group, that $H$ is an $A$-stable normal subgroup of $G{\,\hskip-1pt\cdot\hskip-1pt\,},$ that $b$ is $A$-stable, that $A$ centralizes $P$ and stabilizes $P_\gamma{\,\hskip-1pt\cdot\hskip-1pt\,},$ and that $A$ and $G$ have coprime orders. Without loss of generality, we may assume that $P\leq N$. Then, $A$ centralizes $Q$ and stabilizes $Q_\delta{\,\hskip-1pt\cdot\hskip-1pt\,},$ so that the Glauberman correspondent ${\it w}(b)$ of the block $b$ makes sense; moreover, the block ${\it w}(b)$ determines two pointed groups $(N^A)_{{\it w}(\beta)}$ and $(G^A)_{{\it w}(\alpha)}$ such that $(N^A)_{{\it w}(\beta)}\leq (G^A)_{{\it w}(\alpha)}$ (see Paragraph 4.2), and the local pointed groups $P_\gamma$ and $Q_\delta$ determine respective defect pointed groups $P_{{\it w}(\gamma)}$ and $Q_{{\it w}(\delta)}$ of $(G^A)_{{\it w}(\alpha)}$ and $(H^A)_{{\it w}(\beta)}$ (see Paragraph 4.6 and Proposition 4.7); actually, by Proposition 4.9, we have $Q_{{\it w}(\delta)}\leq P_{{\it w}(\gamma)}$. Take ${\it w}(i)\in {\it w}(\gamma)$ and ${\it w}(j)\in {\it w}(\delta){\,\hskip-1pt\cdot\hskip-1pt\,},$ and set \begin{center} $({\,\hskip-1pt\cdot\hskip-1pt\,} G^A)_{{\it w}(\gamma)}={\it w}(i)({\,\hskip-1pt\cdot\hskip-1pt\,} G^A){\it w}(i)$ , $({\,\hskip-1pt\cdot\hskip-1pt\,} H^A)_{{\it w}(\gamma)}={\it w}(i)({\,\hskip-1pt\cdot\hskip-1pt\,} H^A){\it w}(i)${\,\hskip-1pt\cdot\hskip-1pt\,}
and
$({\,\hskip-1pt\cdot\hskip-1pt\,} H^A)_{{\it w}(\delta)}={\it w}(j)({\,\hskip-1pt\cdot\hskip-1pt\,} H^A){\it w}(j)\quad ;$\end{center} then, $({\,\hskip-1pt\cdot\hskip-1pt\,} G^A)_{{\it w}(\gamma)}$ is a $P$-interior and $(N^A/H^A)$-graded algebra; moreover, the $Q$-interior algebra $({\,\hskip-1pt\cdot\hskip-1pt\,} H^A)_{{\it w}(\delta)}$ with the group homomorphism $$Q\longrightarrow ({\,\hskip-1pt\cdot\hskip-1pt\,} H^A)_{{\it w}(\delta)}^*\quad , \quad u\mapsto u{\it w}(j)$$
is a source algebra of the block algebra ${\,\hskip-1pt\cdot\hskip-1pt\,} H^A{\it w}(b)$ (see \cite{P5}).
\vskip 1cm \noindent{\bf\large 5. A Lemma}
From now on, we use the notation and assumptions in Paragraphs 3.1, 3.2 and 4.10; in particular, we assume that the block $b$ of $H$ is nilpotent. Obviously, $N_G(Q_\delta)$ acts on ${\rm Irr}_{\,\hskip-1pt\cdot\hskip-1pt\,}(H, b)$ and ${\rm Irr}_{\,\hskip-1pt\cdot\hskip-1pt\,}(Q)$ via the corresponding conjugation actions. Since $b$ is nilpotent, there is an explicit bijection between ${\rm Irr}_{\,\hskip-1pt\cdot\hskip-1pt\,}(H, b)$ and ${\rm Irr}_{\,\hskip-1pt\cdot\hskip-1pt\,}(Q)$ (see \cite[Theorem 52.8]{T}); in this section, we will show that this bijection is compatible with the $N_G(Q_\delta)$-actions; our main purpose is to obtain Lemma 5.6 below as a consequence of this compatibility.
\noindent{\bf 5.1.}\quad For any $x\in N_G(Q_\delta)$, $xjx^{-1}$ belongs to $\delta$ and thus there is some invertible element $a_x\in {\,\hskip-1pt\cdot\hskip-1pt\,}^Q$ such that $xjx^{-1}=a_x ja_x^{-1}{\,\hskip-1pt\cdot\hskip-1pt\,};$ let us denote by $X$ the set of all elements $(a_x^{-1}x)j$ such that $a_x$ is invertible in ${\,\hskip-1pt\cdot\hskip-1pt\,}^Q$ and we have $xjx^{-1}=a_x ja_x^{-1}$ when $x$ runs over $N_G(Q_\delta)$. Set \begin{center}$E_G(Q_\delta)= N_G(Q_\delta)/QC_H(Q)\quad ;$\end{center} then, the following equality $$\Big((a_x^{-1}x)j\Big){\,\hskip-1pt\cdot\hskip-1pt\,} \Big((a_y^{-1}y)j\Big)= \Big((a_x^{-1}xa_y^{-1}x^{-1})xy\Big)j$$ shows that $X$ is a group with respect to the multiplication and it is easily checked that $Q{\,\hskip-1pt\cdot\hskip-1pt\,} ({\,\hskip-1pt\cdot\hskip-1pt\,}_\delta^Q)^*$ is normal in $X$ and that the map $$E_G(Q_\delta)\longrightarrow X/Q({\,\hskip-1pt\cdot\hskip-1pt\,}_\delta^Q)^*\leqno 5.1.1$$ sending the coset of $x\in N_G(Q_\delta)$ in $N_G(Q_\delta)/QC_H(Q)$ to the coset of $(a_x^{-1}x)j$ in $X/Q({\,\hskip-1pt\cdot\hskip-1pt\,}_\delta^Q)^*$ is a group isomorphism.
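The following elementary verification is not part of the original text; it is included only to make the multiplication rule displayed above more transparent, and the element $a'_{xy}$ introduced below is ad hoc notation used solely in this remark. Since $xjx^{-1}=a_xja_x^{-1}$, each element $a_x^{-1}x$ commutes with $j$; hence, for any $x,y\in N_G(Q_\delta)$, $$\Big((a_x^{-1}x)j\Big)\Big((a_y^{-1}y)j\Big)=(a_x^{-1}x)(a_y^{-1}y)j^2=\Big((a_x^{-1}xa_y^{-1}x^{-1})xy\Big)j\quad .$$ Moreover, setting $a'_{xy}=(xa_yx^{-1})a_x$, which is again a $Q$-fixed invertible element since $x$ normalizes $Q$, we have $a_x^{-1}xa_y^{-1}x^{-1}=(a'_{xy})^{-1}$ and $$(xy)j(xy)^{-1}=x(yjy^{-1})x^{-1}=x(a_yja_y^{-1})x^{-1}=(xa_yx^{-1})a_xja_x^{-1}(xa_y^{-1}x^{-1})=a'_{xy}\,j\,(a'_{xy})^{-1}\quad ,$$ so that the product of two elements of $X$ is again an element of the prescribed form.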
\noindent{\bf 5.2.}\quad We denote by $Y$ the set of all such elements $a_x^{-1}x$ when $x$ runs over $N_G(Q_\delta)$ and $a_x$ over the invertible elements of ${\,\hskip-1pt\cdot\hskip-1pt\,}^Q$ such that $a_x^{-1}x$ commutes with $j$. As in 5.1, it is easily checked that $Y$ is a group with respect to the multiplication $$(a_x^{-1}x){\,\hskip-1pt\cdot\hskip-1pt\,} (a_y^{-1}y)= (a_x^{-1}xa_y^{-1}x^{-1})xy \quad,$$ that $Y$ normalizes $Q{\,\hskip-1pt\cdot\hskip-1pt\,} (({\,\hskip-1pt\cdot\hskip-1pt\,} H)^Q)^*$ and that the map $$E_G(Q_\delta)\longrightarrow \Big(Y{\,\hskip-1pt\cdot\hskip-1pt\,} Q{\,\hskip-1pt\cdot\hskip-1pt\,} ({\,\hskip-1pt\cdot\hskip-1pt\,}^Q)^*\Big)\Big/\Big(Q{\,\hskip-1pt\cdot\hskip-1pt\,} ({\,\hskip-1pt\cdot\hskip-1pt\,}^Q)^*\Big) \leqno 5.2.1$$ sending the coset of $x\in N_G(Q_\delta)$ to the coset of $a_x^{-1}x$ in the right-hand quotient is a group isomorphism.
\noindent{\bf 5.3.}\quad Let $I$ and $J$ be the sets of isomorphism classes of all simple ${\,\hskip-1pt\cdot\hskip-1pt\,}\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\,\hskip-1pt\cdot\hskip-1pt\,}\hbox{-}$ and ${\,\hskip-1pt\cdot\hskip-1pt\,}\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\,\hskip-1pt\cdot\hskip-1pt\,}_\delta\hbox{-}$modules respectively. Clearly, $Y$ acts on $I{\,\hskip-1pt\cdot\hskip-1pt\,};$ but, since $Y\cap ({\,\hskip-1pt\cdot\hskip-1pt\,}^Q)^*$ acts trivially on $I$, the action of $Y$ on $I$ induces an action of $E_G(Q_\delta)$ on $I$ through isomorphism 5.2.1; actually, this action coincides with the action of $E_G(Q_\delta)$ on ${\rm Irr}_{\,\hskip-1pt\cdot\hskip-1pt\,}(H, b)$ induced by the $N_G(Q_\delta)$-conjugation. Similarly, $X$ acts on $J$ and this action of $X$ on $J$ induces an action of $E_G(Q_\delta)$ on $J$ through isomorphism 5.1.1. But, by \cite[Corollary 3.5]{P5}, the functor $M\mapsto j{\,\hskip-1pt\cdot\hskip-1pt\,} M$ is an equivalence between the categories of finitely generated ${\,\hskip-1pt\cdot\hskip-1pt\,}$- and ${\,\hskip-1pt\cdot\hskip-1pt\,}_\delta$-modules, which induces a bijection between the sets $I$ and $J$. Then, since $Y$ commutes with $j$ and the map $$Y\longrightarrow X\quad ,\quad y\mapsto yj$$
is a group homomorphism, it is easily checked that this bijection is compatible with the actions of $E_G(Q_\delta)$ on $I$ and $J$.
\noindent{\bf 5.4.}\quad Recall that (cf.~£3.7) $${\,\hskip-1pt\cdot\hskip-1pt\,}_\delta\cong T\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\,\hskip-1pt\cdot\hskip-1pt\,} Q \leqno 5.4.1$$ where $T = {\rm End}_{\,\hskip-1pt\cdot\hskip-1pt\,} (W)$ for an endo-permutation ${\,\hskip-1pt\cdot\hskip-1pt\,} Q$-module $W$ such that the determinant of the image of any element of $Q$ in $T$ is one; in this case, the ${\,\hskip-1pt\cdot\hskip-1pt\,} Q$-module $W$ with these properties is unique up to isomorphism. Then, for any simple ${\,\hskip-1pt\cdot\hskip-1pt\,}\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\,\hskip-1pt\cdot\hskip-1pt\,}_\delta$-module $V$ there is a ${\,\hskip-1pt\cdot\hskip-1pt\,} Q$-module $V_W$, unique up to isomorphism, such that $$V\cong W\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} V_W$$ as ${\,\hskip-1pt\cdot\hskip-1pt\,}\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\,\hskip-1pt\cdot\hskip-1pt\,}_\delta$-modules; moreover the correspondence $$V\mapsto V_W\leqno 5.4.2$$ determines a bijection between $J$ and the set of isomorphism classes of all simple ${\,\hskip-1pt\cdot\hskip-1pt\,} Q$-modules. Now, the composition of this bijection with the bijection between isomorphism classes in 5.3 is a bijection from $I$ to the set of isomorphism classes of all simple ${\,\hskip-1pt\cdot\hskip-1pt\,} Q$-modules; translating this bijection to characters, we obtain a bijection $${\rm Irr}_{\,\hskip-1pt\cdot\hskip-1pt\,}(H, b)\longrightarrow {\rm Irr}_{\,\hskip-1pt\cdot\hskip-1pt\,} (Q)\quad ,\quad \chi_\lambda\mapsto \lambda \quad ;\leqno 5.4.3$$ let us denote by $\chi\in {\rm Irr}_{\,\hskip-1pt\cdot\hskip-1pt\,}(H, b)$ the inverse image of the trivial character of $Q{\,\hskip-1pt\cdot\hskip-1pt\,}.$
\noindent{\bf 5.5.}\quad Moreover, the $N_G(Q_\delta)$-conjugation induces an action of $E_G(Q_\delta)$ on the set of isomorphism classes of all simple ${\,\hskip-1pt\cdot\hskip-1pt\,} Q$-modules and we claim that, for any simple ${\,\hskip-1pt\cdot\hskip-1pt\,}\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\,\hskip-1pt\cdot\hskip-1pt\,}_\delta$-module $V$ and any $\bar x\in E_G(Q_\delta){\,\hskip-1pt\cdot\hskip-1pt\,},$ we have a ${\,\hskip-1pt\cdot\hskip-1pt\,} Q$-module isomorphism $$^{\bar x}(V_W)\cong (^{\bar x}V)_W \quad ;\leqno 5.5.1$$ in particular, bijection 5.4.2 is compatible with the actions of $E_G(Q_\delta)$ on $J$ and on the set of isomorphism classes of simple ${\,\hskip-1pt\cdot\hskip-1pt\,} Q$-modules. Indeed, let $x$ be a lifting of $\bar x$ in $N_G(Q_\delta)$ and denote by $\varphi_x$ the isomorphism $$Q\cong Q\quad ,\quad u\mapsto xux^{-1} \quad ;$$ take a lifting $y=a_x^{-1}xj$ of $\bar x$ in $X$ through isomorphism 5.1.1; since the conjugation by $y$ stabilizes ${\,\hskip-1pt\cdot\hskip-1pt\,}_\delta$, the map $$f_y: {\,\hskip-1pt\cdot\hskip-1pt\,}_\delta\cong {\rm Res}_{\varphi_x}({\,\hskip-1pt\cdot\hskip-1pt\,}_\delta)\quad ,\quad a\mapsto yay^{-1}$$ is a $Q$-interior algebra isomorphism; then, by \cite[Corollary 6.9]{P4}, we can modify $y$ with a suitable element of $({\,\hskip-1pt\cdot\hskip-1pt\,}_\delta^Q)^*$ in such a way that $f_y$ stabilizes~$T{\,\hskip-1pt\cdot\hskip-1pt\,};$ in this case, the restriction of $f_y$ to $T$ has to be inner and thus we have $W\cong {\rm Res}_{f_y}(W)$ as ${\rm T}\hbox{-}$modules. Moreover, since the action of $Q$ on $T$ can be uniquely lifted to a $Q$-interior algebra structure such that the determinant of the image of any $u\in Q$ in $T$ is one, $f_y$ also stabilizes the image of $Q$ in~$T{\,\hskip-1pt\cdot\hskip-1pt\,};$ more precisely, $f_y$ maps the image of $u\in Q$ onto the image of $\varphi_x (u){\,\hskip-1pt\cdot\hskip-1pt\,}.$ The claim follows.
\noindent{\bf Lemma 5.6.}\quad {\it With the notation above,
\noindent{\bf 5.6.1.}\quad The irreducible character $\chi$ is $N_G(Q_\delta)$-stable and its restriction to the set $H_{p'}$ of all $p$-regular elements of $H$ is the unique irreducible Brauer character of $H{\,\hskip-1pt\cdot\hskip-1pt\,}.$
\noindent{\bf 5.6.2.}\quad The Glauberman correspondent $\phi$ of $\chi$ is $N_{G^A}(Q_{{\it w}(\delta)})$-stable and its restriction
to the set $H^A_{p'}$ of all $p$-regular elements of $H^A$ is the unique irreducible Brauer character of $H^A{\,\hskip-1pt\cdot\hskip-1pt\,}.$}
\noindent{\it Proof.}\quad It follows from £5.3 and £5.5 that the bijection~£5.4.3 is compatible with the actions of $E_G(Q_\delta)$ in ${\rm Irr}_{\,\hskip-1pt\cdot\hskip-1pt\,}(H, b)$ and ${\rm Irr}_{\,\hskip-1pt\cdot\hskip-1pt\,} (Q){\,\hskip-1pt\cdot\hskip-1pt\,};$ hence, $\chi$ is $E_G(Q_\delta)$-stable and thus $N_G(Q_\delta)$-stable. Since $\phi$ is the unique irreducible constituent of ${\rm Res}^H_{H^A}(\chi)$ occurring with a multiplicity coprime to $q$ and $N_{G^A}(Q_{{\it w}(\delta)})$ is contained in $N_G(Q_\delta)$, $\phi$ has to be $N_{G^A}(Q_{{\it w}(\delta)})$-stable. By the very definition of the bijection 5.4.3, the restriction of $\chi$ to $H_{p'}$ is the unique Brauer character of $H$. Since the perfect isometry $R_H^b$ between $ {\cal R}_{\,\hskip-1pt\cdot\hskip-1pt\,} (H, b)$ and ${\cal R}_{\,\hskip-1pt\cdot\hskip-1pt\,} (H^A, {\it w}(b))$ maps $\psi\in I$ onto $\pm\pi(H, A)(\psi)$ and the blocks $b$ and ${\it w}(b)$ are nilpotent, by \cite[Theorem 4.11]{B} the decomposition matrices of $b$ and ${\it w}(b)$ are the same if the characters indexing their columns correspond to each other by the Glauberman correspondence; hence, the restriction of~$\phi$ to $H^A_{p'}$ is the unique Brauer character of $H^A$.
\vskip 1cm \noindent{\bf\large 6. A $k^*$-group isomorphism $(\skew3\hat {\bar N}^{^k})^A\cong {\,\hskip-1pt\cdot\hskip-1pt\,}\widehat{\overline{\!N^A}}^{k}$}
\noindent{\bf 6.1.}\quad Let $xH$ be an $A$-stable coset in $\bar N$. We consider the action of $H\rtimes A$ on $xH$ defined by the obvious action of $A$ on $xH$ and the right multiplication of $H$ on $xH{\,\hskip-1pt\cdot\hskip-1pt\,};$ since $A$ and $G$ have coprime orders, it follows from \cite[Lemma 13.8 and Corollary 13.9]{I} that $xH\cap N^A$ is non-empty and that $H^A$ acts transitively on it; consequently, we have $\bar N^A= (H{\,\hskip-1pt\cdot\hskip-1pt\,} N^A)/H$ and the inclusion $N^A\subset N$ induces a group isomorphism $$\overline{ \!N^A}\cong \bar N^A= (H{\,\hskip-1pt\cdot\hskip-1pt\,} N^A)/H \quad .\leqno 6.1.1 $$ Note that if $G=H {\,\hskip-1pt\cdot\hskip-1pt\,} G^A$ then we have $\bar N^A= \bar N{\,\hskip-1pt\cdot\hskip-1pt\,}.$
\noindent{\bf 6.2.}\quad It follows from Lemma~£5.6 that $N = H{\,\hskip-1pt\cdot\hskip-1pt\,} N_G (Q_\delta)$ stabilizes $\chi$ and actually the central extension $\skew3\hat{\bar N}$ of $\bar N$ by $U$ in~£3.9 above is nothing but the so-called {\it Clifford extension{\,\hskip-1pt\cdot\hskip-1pt\,}} of $\bar N$ over $\chi{\,\hskip-1pt\cdot\hskip-1pt\,};$ moreover, since $A$ and $U$ also have coprime orders, we can prove as above that $\skew3\hat{\bar N}^A$ is a central extension of $\bar N^A$ by~$U{\,\hskip-1pt\cdot\hskip-1pt\,},$ which is the {\it Clifford extension{\,\hskip-1pt\cdot\hskip-1pt\,}} of $\bar N^A$ over $\chi{\,\hskip-1pt\cdot\hskip-1pt\,}.$ Since the Glauberman correspondent ${\it w}(b)$ is nilpotent, we can repeat all the above constructions for $G^A$, $H^A{\,\hskip-1pt\cdot\hskip-1pt\,},$ ${\it w}(b)$ and $N^A{\,\hskip-1pt\cdot\hskip-1pt\,};$ then, denoting by $U_A$ the group of $\vert H^A\vert\hbox{-}$th roots of unity, we obtain a central extension ${\,\hskip-1pt\cdot\hskip-1pt\,}\widehat{\overline{\!N^A}}$ of ${\,\hskip-1pt\cdot\hskip-1pt\,}\overline{ \!N^A} =\bar N^A$ by~$U_A{\,\hskip-1pt\cdot\hskip-1pt\,},$ which is the {\it Clifford extension{\,\hskip-1pt\cdot\hskip-1pt\,}} of $\bar N^A$ over $\phi{\,\hskip-1pt\cdot\hskip-1pt\,};$ moreover, note that $U_A$ is contained in~$U{\,\hskip-1pt\cdot\hskip-1pt\,}.$
\noindent{\bf 6.3.}\quad At this point, it follows from \cite[Corollary 4.16]{P8} that there is an extension group isomorphism $$\hat N^A\cong (U\times {\,\hskip-1pt\cdot\hskip-1pt\,}\widehat{\!N^A})/\Delta_{-1} (U_A) \leqno £6.3.1$$ where we are setting $\Delta_{-1}(U_A) = {\,\hskip-1pt\cdot\hskip-1pt\,}(\xi^{-1},\xi){\,\hskip-1pt\cdot\hskip-1pt\,}_{\xi\in U_A}{\,\hskip-1pt\cdot\hskip-1pt\,};$ moreover, according to \cite[Remark 4.17]{P7}, this isomorphism is defined by a sequence of Brauer homomorphisms --- in different characteristics --- and, in particular, it is quite clear that it maps any $y\in H{\,\hskip-1pt\cdot\hskip-1pt\,} \hat N^A$ in the classes of $(1,y)$ in the right-hand member, so that isomorphism~£6.3.1 induces a new
extension group isomorphism $$\skew3\hat{\bar N}^A\cong (U\times {\,\hskip-1pt\cdot\hskip-1pt\,}\widehat{\overline{\!N^A}})/\Delta_{-1} (U_A) \quad .$$ Consequently, denoting by $\varpi_A{\,\hskip-1pt\cdot\hskip-1pt\,}\colon U_A\to k^*$ the restriction of $\varpi{\,\hskip-1pt\cdot\hskip-1pt\,},$ we get a $k^*\hbox{-}$group isomorphism \begin{eqnarray*}(\skew3\hat {\bar N}^{^k})^A &=& \Big((k^*\times \skew3\hat {\bar N})/\Delta_\varrho (U)\Big)^A \cong (k^*\times \skew3\hat {\bar N}^A)/\Delta_\varrho (U) \\ &\cong &(k^*\times {\,\hskip-1pt\cdot\hskip-1pt\,}\widehat{\overline{\!N^A}})/\Delta_{\varrho_A} (U_A) {\,\hskip-1pt\cdot\hskip-1pt\,}= {\,\hskip-1pt\cdot\hskip-1pt\,}\widehat{\overline{\!N^A}}^k \end{eqnarray*} as announced.
\noindent{\bf Remark 6.4.}\quad Note that if $G=H{\,\hskip-1pt\cdot\hskip-1pt\,} G^A$ then we have $\skew3\hat {\bar N}^A= \skew3\hat {\bar N}$.
\vskip 1cm \noindent{\bf\large 7. Proofs of Theorems 1.5 and 1.6}
\noindent{\bf 7.1.}\quad The first statement in Theorem~£1.5 follows from Propositions~£4.4 and~£4.5. From now on, we assume that the block $b$ of $H$ is nilpotent; thus, the Glauberman correspondent ${\it w}(b)$ is also nilpotent and $({\,\hskip-1pt\cdot\hskip-1pt\,} G^A){\it w}(c)$ is an extension of the nilpotent block algebra $({\,\hskip-1pt\cdot\hskip-1pt\,} H^A){\it w}(b)$. This section will be devoted to comparing the extensions ${\,\hskip-1pt\cdot\hskip-1pt\,} G c$ and ${\,\hskip-1pt\cdot\hskip-1pt\,} G^A{\it w}(c)$ of the nilpotent block algebras ${\,\hskip-1pt\cdot\hskip-1pt\,} H b$ and ${\,\hskip-1pt\cdot\hskip-1pt\,} H^A{\it w}(b)$. Applying Theorem 3.5 to the finite groups $G^A$ and $H^A$ and the nilpotent block ${\it w}(b)$ of $H^A$, we get a finite group $L^A$ and respective injective and surjective group homomorphisms $$\tau^A: P\longrightarrow L^A\quad{\rm and}\quad \bar\pi^A: L^A\longrightarrow {\,\hskip-1pt\cdot\hskip-1pt\,}\overline{\!N^A}$$
such that $\bar\pi^A(\tau^A(u))=\bar u$ for any $u\in P$, that
${\rm Ker}(\bar\pi^A)=\tau^A(Q)$ and that they induce an equivalence
of categories $${\cal E}_{({\it w}(b),{\,\hskip-1pt\cdot\hskip-1pt\,} H^A,{\,\hskip-1pt\cdot\hskip-1pt\,} G^A)}\cong {\cal E}_{(1,{\,\hskip-1pt\cdot\hskip-1pt\,} \tau^A(Q),{\,\hskip-1pt\cdot\hskip-1pt\,} L^A)} \quad .$$ Similarly, we set $\widehat{ L^A}= {\rm res}_{\bar \pi^A}({\,\hskip-1pt\cdot\hskip-1pt\,}\widehat{\overline{\!N^A}}^k)$ and denote by $\widehat{ \tau^A}{\,\hskip-1pt\cdot\hskip-1pt\,}\colon P\to \widehat{ L^A}$ the lifting of~$\tau^A{\,\hskip-1pt\cdot\hskip-1pt\,};$ then, by Corollary 3.15, there is a $P$-interior full matrix algebra ${\it w}(S_\gamma)$ such that we have an isomorphism $$({\,\hskip-1pt\cdot\hskip-1pt\,} (G^A))_{{\it w}(\gamma)}\cong {\it w}(S_\gamma)\otimes_{{\,\hskip-1pt\cdot\hskip-1pt\,} } {\,\hskip-1pt\cdot\hskip-1pt\,}_*\widehat{L^A}^\circ\quad \leqno{7.1.1}$$ of both $P$-interior and $N^A/H^A$-graded algebras.
\noindent{\bf Lemma 7.2.}\quad {\it Assume that $G=H{\,\hskip-1pt\cdot\hskip-1pt\,} G^A{\,\hskip-1pt\cdot\hskip-1pt\,}.$ Then we have $N=H{\,\hskip-1pt\cdot\hskip-1pt\,} N^A{\,\hskip-1pt\cdot\hskip-1pt\,},$ the inclusion $N^A\subset N$ induces a group isomorphism ${\,\hskip-1pt\cdot\hskip-1pt\,}\overline{\!N^A}\cong \bar N$ and there is a group isomorphism $$\sigma: L^A\cong L$$ such that $\sigma\circ\tau^A=\tau$ and $\bar\pi\circ\sigma=\bar\pi^A$. }
\noindent{\it Proof.}\quad For any subgroups $R$ and $T$ of $P$ containing $Q$, let us denote by $${\cal E}_{(b,{\,\hskip-1pt\cdot\hskip-1pt\,} H,{\,\hskip-1pt\cdot\hskip-1pt\,} G)}(R, T)\quad{\rm and}\quad {\cal E}_{({\it w}(b),{\,\hskip-1pt\cdot\hskip-1pt\,} H^A,{\,\hskip-1pt\cdot\hskip-1pt\,} G^A)}(R, T)$$ the respective sets of ${\cal E}_{(b,{\,\hskip-1pt\cdot\hskip-1pt\,} H,{\,\hskip-1pt\cdot\hskip-1pt\,} G)}\hbox{-}$ and ${\cal E}_{({\it w}(b),{\,\hskip-1pt\cdot\hskip-1pt\,} H^A,{\,\hskip-1pt\cdot\hskip-1pt\,} G^A)}\hbox{-}$morphisms from $T$ to $R{\,\hskip-1pt\cdot\hskip-1pt\,};$ since $A$ acts trivially in ${\cal E}_{(b,{\,\hskip-1pt\cdot\hskip-1pt\,} H,{\,\hskip-1pt\cdot\hskip-1pt\,} G)}(R, T)$, by \cite[Lemma 13.8 and Corollary 13.9]{I} each morphism in ${\cal E}_{(b,{\,\hskip-1pt\cdot\hskip-1pt\,} H,{\,\hskip-1pt\cdot\hskip-1pt\,} G)}(R, T)$ is induced by some element in $N^A{\,\hskip-1pt\cdot\hskip-1pt\,};$ moreover, if $T_\nu$ and $R_\varepsilon$ are local pointed groups contained in $P_\gamma{\,\hskip-1pt\cdot\hskip-1pt\,},$ it follows from Proposition 4.9 that we have $T_\nu\leq (R_\varepsilon)^x$ for some $x\in N^A$ if and only if we have $T_{{\it w}(\nu)}\leq (R_{{\it w}(\varepsilon)})^x$. Therefore, we get $${\cal E}_{(b,{\,\hskip-1pt\cdot\hskip-1pt\,} H,{\,\hskip-1pt\cdot\hskip-1pt\,} G)}(T, R)={\cal E}_{({\it w}(b),{\,\hskip-1pt\cdot\hskip-1pt\,} H^A,{\,\hskip-1pt\cdot\hskip-1pt\,} G^A)}(T, R) \quad . $$
At this point, it is easy to check that $L$, $\tau$ and $\bar\pi$ fulfill the conditions in Theorem 3.5 with respect to $G^A$, $H^A$ and the nilpotent block ${\it w}(b)$. Then this lemma follows from the uniqueness part in Theorem 3.5.
\noindent{\bf Lemma 7.3.}\quad {\it Assume that $G=H{\,\hskip-1pt\cdot\hskip-1pt\,} G^A{\,\hskip-1pt\cdot\hskip-1pt\,}.$ Then there is a $k^*$-group isomorphism $\hat \sigma: \widehat{ L^A}\cong \hat L$ lifting $\sigma$ and fulfilling $\hat\sigma\circ \widehat{\tau^A }= \hat\tau{\,\hskip-1pt\cdot\hskip-1pt\,}.$ In particular, we have $${\rm Irr}_{{\,\hskip-1pt\cdot\hskip-1pt\,}}(G, c)={\rm Irr}_{{\,\hskip-1pt\cdot\hskip-1pt\,}}(G, c)^A\quad .$$}
\par\noindent{\it Proof.}\quad The first statement is an easy consequence of £6.3 and Lemma 7.2; then, the last equality follows from Corollary~£3.15.
\noindent{\bf 7.4.} {\it Proof of Theorem 1.6.}\quad First, we consider the case where the block $b$ of $H$ is not stabilized by~$G{\,\hskip-1pt\cdot\hskip-1pt\,};$ then we have an isomorphism $${\rm Ind}^G_{N}({\,\hskip-1pt\cdot\hskip-1pt\,} N b)\cong {\,\hskip-1pt\cdot\hskip-1pt\,} G c$$ of ${\,\hskip-1pt\cdot\hskip-1pt\,} G$-interior algebras mapping $1\otimes a\otimes 1$ onto $a$ for any $a\in {\,\hskip-1pt\cdot\hskip-1pt\,} N b$ and an isomorphism $${\rm Ind}^{G^A}_{N^A}({\,\hskip-1pt\cdot\hskip-1pt\,} (N^A ){\it w}(b))\cong {\,\hskip-1pt\cdot\hskip-1pt\,} (G^A){\it w}(c)$$ of ${\,\hskip-1pt\cdot\hskip-1pt\,} (G^A)$-interior algebras mapping $1\otimes a\otimes 1$ onto $a$ for any $a\in {\,\hskip-1pt\cdot\hskip-1pt\,} (N^A) {\it w}(b)$. Suppose that an ${\,\hskip-1pt\cdot\hskip-1pt\,}(N^A\times N)$-module $M$ induces a Morita equivalence from ${\,\hskip-1pt\cdot\hskip-1pt\,} (N^A) {\it w}(b)$ to ${\,\hskip-1pt\cdot\hskip-1pt\,} N b$. Then it is easy to see that the ${\,\hskip-1pt\cdot\hskip-1pt\,}(G^A\times G)$-module ${\rm Ind}^{G^A\times G}_{N^A\times N} (M)$ induces a Morita equivalence from ${\,\hskip-1pt\cdot\hskip-1pt\,} Gc$ to ${\,\hskip-1pt\cdot\hskip-1pt\,} (G^A) {\it w}(c){\,\hskip-1pt\cdot\hskip-1pt\,}.$ So, we can assume that $G= N$ and then we have $G^A=N^A{\,\hskip-1pt\cdot\hskip-1pt\,}.$
By Corollary 3.15, there exists an isomorphism of both $(N/H)$-graded and $P$-interior algebras $$({\,\hskip-1pt\cdot\hskip-1pt\,} G)_\gamma\cong S_\gamma\otimes_{{\,\hskip-1pt\cdot\hskip-1pt\,} } {\,\hskip-1pt\cdot\hskip-1pt\,}_*\hat L^{^\circ}\quad ; \leqno{7.4.1}$$ denote by $V_\gamma$ an ${\,\hskip-1pt\cdot\hskip-1pt\,} P$-module such that ${\rm End}_{\,\hskip-1pt\cdot\hskip-1pt\,}(V_\gamma)\cong S_\gamma{\,\hskip-1pt\cdot\hskip-1pt\,};$ choosing $i\in \gamma$ and assuming that $({\,\hskip-1pt\cdot\hskip-1pt\,} G)_\gamma = i({\,\hskip-1pt\cdot\hskip-1pt\,} G)i{\,\hskip-1pt\cdot\hskip-1pt\,},$ we know that the ${\,\hskip-1pt\cdot\hskip-1pt\,} Gb\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} ({\,\hskip-1pt\cdot\hskip-1pt\,} G)_\gamma^\circ\hbox{-}$module $({\,\hskip-1pt\cdot\hskip-1pt\,} G)i$ determines a Morita equivalence from ${\,\hskip-1pt\cdot\hskip-1pt\,} Gb$ to $({\,\hskip-1pt\cdot\hskip-1pt\,} G)_\gamma{\,\hskip-1pt\cdot\hskip-1pt\,},$ whereas the $({\,\hskip-1pt\cdot\hskip-1pt\,} G)_\gamma\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\,\hskip-1pt\cdot\hskip-1pt\,}_*\hat L\hbox{-}$module $V_\gamma\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\,\hskip-1pt\cdot\hskip-1pt\,}_*\hat L^{^\circ}$ determines a Morita equivalence from $({\,\hskip-1pt\cdot\hskip-1pt\,} G)_\gamma$ to~${\,\hskip-1pt\cdot\hskip-1pt\,}_*\hat L^{^\circ}{\,\hskip-1pt\cdot\hskip-1pt\,},$ so that the ${\,\hskip-1pt\cdot\hskip-1pt\,} Gb\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\,\hskip-1pt\cdot\hskip-1pt\,}_*\hat L\hbox{-}$ module $$({\,\hskip-1pt\cdot\hskip-1pt\,} G)i\otimes_{({\,\hskip-1pt\cdot\hskip-1pt\,} G)_\gamma} (V_\gamma\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\,\hskip-1pt\cdot\hskip-1pt\,}_*\hat L^{^\circ}) \cong ({\,\hskip-1pt\cdot\hskip-1pt\,} G)i\otimes_{S_\gamma} V_\gamma$$
determines a Morita equivalence from ${\,\hskip-1pt\cdot\hskip-1pt\,} Gb$ to~${\,\hskip-1pt\cdot\hskip-1pt\,}_*\hat L^{^\circ}{\,\hskip-1pt\cdot\hskip-1pt\,}.$
Similarly, choosing $j\in \delta$ such that $ji = j = ij{\,\hskip-1pt\cdot\hskip-1pt\,},$ assuming that
$j({\,\hskip-1pt\cdot\hskip-1pt\,} H)j = ({\,\hskip-1pt\cdot\hskip-1pt\,} H)_\delta$ and setting $j{\,\hskip-1pt\cdot\hskip-1pt\,} V_\gamma = V_\delta{\,\hskip-1pt\cdot\hskip-1pt\,},$
so that $S_\delta = {\rm End}_{\,\hskip-1pt\cdot\hskip-1pt\,} (V_\delta){\,\hskip-1pt\cdot\hskip-1pt\,},$ the ${\,\hskip-1pt\cdot\hskip-1pt\,} Hb\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\,\hskip-1pt\cdot\hskip-1pt\,} Q\hbox{-}$ module $$({\,\hskip-1pt\cdot\hskip-1pt\,} H)j\otimes_{({\,\hskip-1pt\cdot\hskip-1pt\,} H)_\delta} (V_\delta\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\,\hskip-1pt\cdot\hskip-1pt\,} Q) \cong ({\,\hskip-1pt\cdot\hskip-1pt\,} H)j\otimes_{S_\delta} V_\delta$$
determines a Morita equivalence from ${\,\hskip-1pt\cdot\hskip-1pt\,} Hb$ to~${\,\hskip-1pt\cdot\hskip-1pt\,} Q{\,\hskip-1pt\cdot\hskip-1pt\,}.$
Analogously, with evident notation, the ${\,\hskip-1pt\cdot\hskip-1pt\,} (G^A) w(b)\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,}
{\,\hskip-1pt\cdot\hskip-1pt\,}_*{\,\hskip-1pt\cdot\hskip-1pt\,}\widehat{\!L^A}\hbox{-}$ module $${\,\hskip-1pt\cdot\hskip-1pt\,} (G^A) w(i) \otimes_{w(S_\gamma)} w(V_\gamma)$$
determines a Morita equivalence from ${\,\hskip-1pt\cdot\hskip-1pt\,} (G^A)w(b)$ to~${\,\hskip-1pt\cdot\hskip-1pt\,}_*{\,\hskip-1pt\cdot\hskip-1pt\,}\widehat{L^A}^\circ{\,\hskip-1pt\cdot\hskip-1pt\,},$ whereas the ${\,\hskip-1pt\cdot\hskip-1pt\,} (H^A) w(b)\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\,\hskip-1pt\cdot\hskip-1pt\,} Q\hbox{-}$module $${\,\hskip-1pt\cdot\hskip-1pt\,} (H^A)w(j)\otimes_{w(S_\delta)} w(V_\delta)$$
determines a Morita equivalence from ${\,\hskip-1pt\cdot\hskip-1pt\,} (H^A) w(b)$ to~${\,\hskip-1pt\cdot\hskip-1pt\,} Q{\,\hskip-1pt\cdot\hskip-1pt\,}.$
Consequently, identifying ${\,\hskip-1pt\cdot\hskip-1pt\,}\widehat{L^A}$ with $\hat L$ through the isomorphism
$\hat\sigma$ (cf. Lemma~£7.3), the ${\,\hskip-1pt\cdot\hskip-1pt\,} (G\times G^A)\hbox{-}$module
$$D= (({\,\hskip-1pt\cdot\hskip-1pt\,} G)i\otimes_{S_\gamma} V_\gamma)\otimes_{{\,\hskip-1pt\cdot\hskip-1pt\,}_*\hat L}
(w(V_\gamma)^\circ \otimes_{w(S_\gamma)} w(i) {\,\hskip-1pt\cdot\hskip-1pt\,} (G^A))$$
determines a Morita equivalence from ${\,\hskip-1pt\cdot\hskip-1pt\,} Gb$ to~${\,\hskip-1pt\cdot\hskip-1pt\,} (G^A) w(b){\,\hskip-1pt\cdot\hskip-1pt\,},$
whereas the ${\,\hskip-1pt\cdot\hskip-1pt\,} (H\times H^A)\hbox{-}$module
$$M = (({\,\hskip-1pt\cdot\hskip-1pt\,} H)j\otimes_{S_\delta} V_\delta)\otimes_{{\,\hskip-1pt\cdot\hskip-1pt\,} Q }
(w(V_\delta)^\circ \otimes_{w(S_\delta)} w(j){\,\hskip-1pt\cdot\hskip-1pt\,} (H^A))$$
determines a Morita equivalence from ${\,\hskip-1pt\cdot\hskip-1pt\,} Hb$ to~${\,\hskip-1pt\cdot\hskip-1pt\,} (H^A) w(b){\,\hskip-1pt\cdot\hskip-1pt\,}.$
Moreover, since we have the obvious inclusions
$$({\,\hskip-1pt\cdot\hskip-1pt\,} H)j\subset ({\,\hskip-1pt\cdot\hskip-1pt\,} G)i\quad ,\quad S_\delta \subset S_\gamma\quad{\rm and}\quad
V_\delta \subset V_\gamma
\quad ,$$ it is easily checked that we have $$({\,\hskip-1pt\cdot\hskip-1pt\,} H)j\otimes_{S_\delta} V_\delta\cong ({\,\hskip-1pt\cdot\hskip-1pt\,} H)i\otimes_{S_\gamma} V_\gamma\subset ({\,\hskip-1pt\cdot\hskip-1pt\,} G)i\otimes_{S_\gamma} V_\gamma \quad ;\leqno £7.4.2$$ in particular, we have an evident section $$({\,\hskip-1pt\cdot\hskip-1pt\,} G)i\otimes_{S_\gamma} V_\gamma\longrightarrow ({\,\hskip-1pt\cdot\hskip-1pt\,} H)j\otimes_{S_\delta} V_\delta$$ which is actually an ${\,\hskip-1pt\cdot\hskip-1pt\,} Hb\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\,\hskip-1pt\cdot\hskip-1pt\,} Q\hbox{-}$module homomorphism. Similarly, we have a split ${\,\hskip-1pt\cdot\hskip-1pt\,} (H^A) w(b)\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\,\hskip-1pt\cdot\hskip-1pt\,} Q\hbox{-}$module monomorphism $${\,\hskip-1pt\cdot\hskip-1pt\,} (H^A)w(j)\otimes_{w(S_\delta)} w(V_\delta)\longrightarrow {\,\hskip-1pt\cdot\hskip-1pt\,} (G^A) w(i) \otimes_{w(S_\gamma)} w(V_\gamma) \quad .\leqno £7.4.3$$
In conclusion, the ${\,\hskip-1pt\cdot\hskip-1pt\,} Hb\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\,\hskip-1pt\cdot\hskip-1pt\,} Q\hbox{-}$ and
${\,\hskip-1pt\cdot\hskip-1pt\,} (H^A) w(b)\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\,\hskip-1pt\cdot\hskip-1pt\,} Q\hbox{-}$module homomorphisms~£7.4.2 and~£7.4.3, together with the inclusion ${\,\hskip-1pt\cdot\hskip-1pt\,} Q\subset {\,\hskip-1pt\cdot\hskip-1pt\,} \hat L{\,\hskip-1pt\cdot\hskip-1pt\,},$ determine
an ${\,\hskip-1pt\cdot\hskip-1pt\,} (H\times H^A)\hbox{-}$module homomorphism
$$M\longrightarrow {\rm Res}_{H\times H^A}^{G\times G^A} (D)
\leqno £7.4.4$$
which actually admits a section too. Now, denoting by $K$ the inverse image in
$G\times G^A$ of the ``diagonal'' subgroup of $(G/H)\times (G^A/H^A){\,\hskip-1pt\cdot\hskip-1pt\,},$
we claim that the product by $K$ stabilizes the image of $M$ in $D{\,\hskip-1pt\cdot\hskip-1pt\,},$ so that
$M$ can be extended to an ${\,\hskip-1pt\cdot\hskip-1pt\,} K\hbox{-}$module.
Actually, we have
$$K = (H\times H^A){\,\hskip-1pt\cdot\hskip-1pt\,}\Delta (N_{G^A}(Q_\delta))
\quad ,$$
so that it suffices to prove that the image of $M$ is stable by multiplication
by $\Delta (N_{G^A}(Q_\delta)){\,\hskip-1pt\cdot\hskip-1pt\,}.$ Given $x\in N_{G^A}(Q_\delta)$, there are some invertible elements $a_x\in ({\,\hskip-1pt\cdot\hskip-1pt\,} H)^Q$ and $b_x\in ({\,\hskip-1pt\cdot\hskip-1pt\,} (H^A))^Q$ such that $$xjx^{-1} = a_xj a_x^{-1}\quad{\rm and}\quad x w(j)x^{-1} = b_x w(j)b_x^{-1}$$ and therefore $a_x^{-1}x$ and $b_x^{-1}x$ respectively centralize $j$ and $w(j){\,\hskip-1pt\cdot\hskip-1pt\,},$ so that $a_x^{-1}xj$ and $b_x^{-1}x w(j)$ respectively belong to $({\,\hskip-1pt\cdot\hskip-1pt\,} G)_\delta$ and to $({\,\hskip-1pt\cdot\hskip-1pt\,} G^A)_{w(\delta)}{\,\hskip-1pt\cdot\hskip-1pt\,};$ but, according to isomorphisms~£7.4.1 and~£7.1.1, we have $G/H\hbox{-}$ and $G^A/H^A\hbox{-}$graded isomorphisms $$({\,\hskip-1pt\cdot\hskip-1pt\,} G)_\delta\cong S_\delta\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\,\hskip-1pt\cdot\hskip-1pt\,}_*\hat L^\circ\quad{\rm and}\quad ({\,\hskip-1pt\cdot\hskip-1pt\,} G^A)_{w(\delta)}\cong w(S_\delta)\otimes_{\,\hskip-1pt\cdot\hskip-1pt\,} {\,\hskip-1pt\cdot\hskip-1pt\,}_*\hat L^\circ $$ where we are setting $w(S_\delta) = w(j)w(S_\gamma)w(j){\,\hskip-1pt\cdot\hskip-1pt\,}.$
Hence, identifying with each other both members of these isomorphisms and modifying if necessary our choice of $a_x{\,\hskip-1pt\cdot\hskip-1pt\,},$ for some $s_x\in S_\delta{\,\hskip-1pt\cdot\hskip-1pt\,},$ $t_x\in w(S_\delta)$ and $\hat y_x\in \hat L^\circ{\,\hskip-1pt\cdot\hskip-1pt\,},$
we get $$a_x^{-1}x j = s_x\otimes \hat y_x\quad{\rm and}\quad b_x^{-1}x w(j) = t_x\otimes \hat y_x \quad .$$ Thus, setting $w(V_\delta) = w(j)w(V_\gamma){\,\hskip-1pt\cdot\hskip-1pt\,},$ for any $a\in ({\,\hskip-1pt\cdot\hskip-1pt\,} H)j{\,\hskip-1pt\cdot\hskip-1pt\,},$ any $b\in ({\,\hskip-1pt\cdot\hskip-1pt\,} H^A)w(j){\,\hskip-1pt\cdot\hskip-1pt\,},$ any $v\in V_\delta$ and any $w\in w(V_\delta){\,\hskip-1pt\cdot\hskip-1pt\,},$ in $D$ we have \begin{eqnarray*}(x,x)\cdot\big((a\otimes v)\otimes (w\otimes b)\big) &=& (x a\otimes v)\otimes (w\otimes bx^{-1})\\ &=& (x ax^{-1}a_x (a_x^{-1}xj)\otimes v)\otimes (w\otimes (w(j)x^{-1}b_x)b_x^{-1}xbx^{-1})\\ &=&(x ax^{-1}a_x \otimes s_x\.v){\,\hskip-1pt\cdot\hskip-1pt\,} \hat y_x\otimes \hat y_x^{-1}{\,\hskip-1pt\cdot\hskip-1pt\,}(w\.t_x^{-1}\otimes b_x^{-1}xbx^{-1})\\ &=&(x ax^{-1}a_x \otimes s_x\.v) \otimes (w\.t_x^{-1}\otimes b_x^{-1}xbx^{-1})\quad ; \end{eqnarray*} since $x ax^{-1}a_x $ and $b_x^{-1}xbx^{-1}$ respectively belong to $({\,\hskip-1pt\cdot\hskip-1pt\,} H)j$ and $w(j)({\,\hskip-1pt\cdot\hskip-1pt\,} H^A){\,\hskip-1pt\cdot\hskip-1pt\,},$ this proves our claim.
Finally, since homomorphism~£7.4.4 actually becomes an ${\,\hskip-1pt\cdot\hskip-1pt\,} K\hbox{-}$module homomorphism, it induces an ${\,\hskip-1pt\cdot\hskip-1pt\,} (G\times G^A)\hbox{-}$module homomorphism $${\rm Ind}_K^{G\times G^A}(M)\longrightarrow D$$ which is actually an isomorphism as it is easily checked. We are done.
The following theorem is due to Harris and Linckelmann (see \cite{H}).
\noindent{\bf Theorem 7.5.}\quad {\it Let $G$ be an $A$-group and assume that $G$ is a finite $p$-solvable group and $A$ is a solvable group of order prime to $|G|$. Let $b $ be an $A$-stable block of $G$ over ${\,\hskip-1pt\cdot\hskip-1pt\,}$ with a defect group $P$ centralized by $A$ and denote by ${\it w}(b)$ the Glauberman correspondent of the block $b$. Then the block algebras ${\,\hskip-1pt\cdot\hskip-1pt\,} Gb$ and ${\,\hskip-1pt\cdot\hskip-1pt\,} (G^A){\it w}(b)$ are basically Morita equivalent. }
\noindent{\it Proof.}\quad By \cite[Theorem 5.1]{H}, we can assume that $b$ is a $G\rtimes A$-stable block of ${\rm O}_{p'}(G)$, where ${\rm O}_{p'}(G)$ is the maximal normal $p'$-subgroup of $G$. Clearly $b$ as a block of ${\rm O}_{p'}(G)$ is nilpotent and thus ${\,\hskip-1pt\cdot\hskip-1pt\,} Gb$ is an extension of the nilpotent block algebra ${\,\hskip-1pt\cdot\hskip-1pt\,} {\rm O}_{p'}(G) b$. By \cite[Theorem 5.1]{H} again, ${\it w}(b)$ is a $G^A$-stable block of ${\rm O}_{p'}(G^A)$ and thus is nilpotent; thus ${\,\hskip-1pt\cdot\hskip-1pt\,} (G^A) {\it w}(b)$ is an extension of the nilpotent block algebra ${\,\hskip-1pt\cdot\hskip-1pt\,} {\rm O}_{p'}(G^A) {\it w}(b)$. By \cite[Theorem 4.1]{H}, ${\it w}(b)$ is also the Glauberman correspondent of $b$ as a block of ${\rm O}_{p'}(G)$. Then, by Theorem 1.6, the block algebras ${\,\hskip-1pt\cdot\hskip-1pt\,} Gb$ and ${\,\hskip-1pt\cdot\hskip-1pt\,} (G^A){\it w}(b)$ are basically Morita equivalent.
The following theorem is due to Koshitani and Michler (see \cite{KG}).
\noindent{\bf Theorem 7.6.}\quad {\it Let $G$ be an $A$-group and assume that $A$ is a solvable group of order prime to $|G|$. Let $b$ be an $A$-stable block of $G$ over ${\,\hskip-1pt\cdot\hskip-1pt\,}$ with a defect group $P$ centralized by $A$ and denote by ${\it w}(b)$ the Glauberman correspondent of the block $b$. Assume that $P$ is normal in $G$. Then, the block algebras ${\,\hskip-1pt\cdot\hskip-1pt\,} Gb$ and ${\,\hskip-1pt\cdot\hskip-1pt\,} (G^A){\it w}(b)$ have isomorphic source algebras. }
\noindent{\it Proof.}\quad Since $P$ is normal in $G$, by \cite[2.9]{AB} there is a block $b_P$ of $C_G(P)$ such that $b={\rm Tr}^G_{G_{b_P}}(b_P)$, where $G_{b_P}$ is the stabilizer of $b_P$ in $G$. Since $A$ and $G$ have coprime orders, by \cite[Lemma 13.8 and Corollary 13.9]{I}, $b_P$ can be chosen such that $A$ stabilizes $b_P$. Since $P$ is the unique defect group of $b$, $P$ has to be contained in $G_{b_P}{\,\hskip-1pt\cdot\hskip-1pt\,};$ then by \cite[Proposition 5.3]{KP}, the intersection $Z(P)=P\cap C_G(P)$ is the defect group of $b_P$ and, in particular, $b_P$ is nilpotent. Thus the block ${\,\hskip-1pt\cdot\hskip-1pt\,} G b$ is an extension of the nilpotent block algebra~${\,\hskip-1pt\cdot\hskip-1pt\,}(P{\,\hskip-1pt\cdot\hskip-1pt\,} C_G(P))b_P$ and, in particular, we have $\bar N \cong E_G (P_\gamma){\,\hskip-1pt\cdot\hskip-1pt\,}.$
The Glauberman correspondent of $b_P$ makes sense and by \cite[Proposition 4]{W}, we have
$${\it w}(b)={\rm Tr}^{G^A}_{(G^A)_{{\it w}(b_P)}}({\it w}(b_P))
\quad .$$
Since ${\it w}(b_P)$ has defect group $Z(P)$, it is also nilpotent and thus ${\,\hskip-1pt\cdot\hskip-1pt\,} (G^A){\it w}(b)$ is an extension of the nilpotent block algebra
${\,\hskip-1pt\cdot\hskip-1pt\,}(P{\,\hskip-1pt\cdot\hskip-1pt\,} C_{G^A}(P)){\it w}(b_P){\,\hskip-1pt\cdot\hskip-1pt\,};$ once again, we have
$\bar N ^A\cong E_{G^A} (P_{w(\gamma)}){\,\hskip-1pt\cdot\hskip-1pt\,}.$
On the other hand, since $P$ is normal in $G{\,\hskip-1pt\cdot\hskip-1pt\,},$ it follows from \cite[Proposition 14.6]{P6} that
$$({\,\hskip-1pt\cdot\hskip-1pt\,} G)_\gamma\cong {\,\hskip-1pt\cdot\hskip-1pt\,}_*(P \rtimes
\hat E_G (P_\gamma))\quad{\rm and}\quad ({\,\hskip-1pt\cdot\hskip-1pt\,} G^A)_{w(\gamma)}\cong {\,\hskip-1pt\cdot\hskip-1pt\,}_*(P \rtimes \hat E_{G^A} (P_{w(\gamma)}))
\quad ;$$
but, it follows from~\S6.3 that we have a $k^*\hbox{-}$group isomorphism
$$\hat E_G (P_\gamma)\cong \hat E_{G^A} (P_{w(\gamma)})
\quad .$$
We are done.
Lluis Puig
CNRS, Institut de Math\'ematiques de Jussieu
6 Av Bizet, 94340 Joinville-le-Pont, France
[email protected]
Yuanyang Zhou
Department of Mathematics and Statistics
Central China Normal University
Wuhan, 430079
P.R. China
[email protected]
\end{document} |
\begin{document}
\title{Some Existence Results for a Singular Elliptic Problem via Bifurcation Theory}
\section{Introduction}
\nocite{DelPinoManaMonte1992,RabinowitzSturmLiouville70,RabinowitzBifurPotOp77,CrandallRabinowitzSturmLiou1970,KieloeferBifurBook}
The aim of this paper is to prove the existence of solution pairs $({\lambda},u)$ of the singular problem:
\begin{equation}\label{eqn:singular-neumann-problem}
\begin{cases}
-\Delta u={\lambda} u-\dfrac{1}{u}&\mbox{ in }\Omega,
\\
u>0&\mbox{ in }\Omega,
\\
\nabla u\cdot\nu=0&\mbox{ on }\partial\Omega
\end{cases}
\tag{P}
\end{equation}
where $\Omega$ is a bounded smooth open subset of $\ensuremath{\mathbb{R}}^N$ and $\nu$ denotes the unit normal defined on $\partial\Omega$ -- we are therefore considering a Neumann boundary condition. This research is motivated by the increasing interest in singular problems that has been growing in the literature in recent years (starting more or less from the year 2000). If we confine ourselves to elliptic problems, studying Problem \thetag{P} seems a natural continuation of the paper \cite{MontSilva2012} by Montenegro and Silva, where a two-solutions result is proved for the Dirichlet problem with the semilinear term ${\lambda} u^p-\dfrac{1}{u^\alpha}$, with $0<p<1$ and $0<\alpha<1$. In our case the singular term has the exponent $\alpha=1$. Notice that the minus sign in front of the singular term makes the problem considerably more challenging than the plus sign. In the latter problem one can indeed exploit the convexity of the corresponding term in the Euler-Lagrange functional, which allows one to treat any positive exponent $\alpha$ (using suitable tricks, see e.g. \cite{CanDegioSingular}). If one makes some tests in the radial case it is not difficult to realize that the Dirichlet problem has no reasonable solutions for $\alpha=1$. Actually solutions starting from zero are forced to stick at zero and are not allowed to emerge (in contrast with the $\alpha<1$ case). In this respect see Remark \eqref{eqn:rmk-no-dirichlet-radial-soln}. For this reason we are led to consider the corresponding Neumann problem \thetag{P}.
In this alternative setting we need to mention the paper \cite{DelPinoManaMonte1992} by Del Pino, Man\'asevich, and Montero, which deals with the ODE case ($N=1$) with periodic conditions and with a more general, non-autonomous, singular term $f(u,x)$ (singular in $u$ and $T$-periodic in $x$). Using topological degree arguments the authors prove, for instance, that the equation: \[
-\ddot u={\lambda} u-\frac{1}{u^\alpha}
\qquad,\qquad
u(x)>0
\quad,\qquad
u(x+T)=u(x), \] where $\alpha\geq1$ (more generally, in \cite{DelPinoManaMonte1992} a non-autonomous $T$-periodic right-hand side is allowed), has a solution provided ${\lambda}\neq\dfrac{\mu_k}{4}$ for all $k$. Here the $\mu_k$ denote the eigenvalues of a suitable linearized problem which arises in a natural way from the problem. In the variational case the results of \cite{DelPinoManaMonte1992} can be seen as deriving from the existence of two global bifurcation branches originating from the pairs $(\mu_k/2,1/\sqrt{\mu_k/2})$ (the second element of the pair is a constant function).
Following this idea we prove a local bifurcation result for \thetag{P} (see Theorem \eqref{thm-local-bufurcation}), showing that there exist two bifurcation branches of solutions, emanating from the pairs $(\mu_k/2,1/\sqrt{\mu_k/2})$. In Theorem \eqref{thm:global-bifurvation-radial} of Section 3 we consider the radial case and, exploiting a continuation argument for the nodal regions of the solutions, we prove that one of the two branches is global and bounded in ${\lambda}$.
\section{A local bifurcation result} Let $\Omega$ be a bounded open subset of $\ensuremath{\mathbb{R}}^N$ with smooth boundary.
\begin{thm}\label{thm-local-bufurcation}
Let $\hat\mu>0$ be an eigenvalue of the following Neumann problem:
\begin{equation}\label{eqn:eigenvalue-neumann-problem}
\begin{cases}
-\Delta u=\mu u&\mbox{ in }\Omega,
\\
\nabla u\cdot\nu=0&\mbox{ on }\partial\Omega,
\end{cases}
\end{equation}
($\nu$ denotes the normal to $\partial\Omega$).
Then there exists $\rho_0>0$ such that for all $\rho\in]0,\rho_0[$
there exist two distinct pairs $({\lambda}_{1,\rho},u_{1,\rho})$ and
$({\lambda}_{2,\rho},u_{2,\rho})$ such that, for $i=1,2$:
\[
({\lambda}_{i,\rho},u_{i,\rho})\mbox{ are solutions of \thetag{P}}
\ ,\
\int_{\Omega}\left(u_{i,\rho}-\frac{1}{\sqrt{{\lambda}_{i,\rho}}}\right)^2 dx=\rho^2
\ ,\
{\lambda}_{i,\rho}\xrightarrow{\rho\to0}\frac{\hat\mu}{2}.
\] \end{thm} \begin{proof}
We start by introducing some changes of variables. First of all notice that, for all ${\lambda}>0$, Problem \thetag{P} has the constant solution $u(x)=\dfrac{1}{\sqrt{{\lambda}}}$.
If we seek solutions of the form
$u=\dfrac{1}{\sqrt{{\lambda}}}+z$ we easily end up with the equivalent problem on $z$:
\begin{equation}\label{eqn:z-neumann-problem}
\begin{cases}
-\Delta z=2{\lambda} z-h_{\lambda}(z)&\mbox{ in }\Omega,
\\
\sqrt{{\lambda}}z>-1&\mbox{ in }\Omega,
\\
\nabla z\cdot\nu=0&\mbox{ on }\partial\Omega
\end{cases}
\tag{P0}
\end{equation}
where $h_{{\lambda}}:\left]-\dfrac{1}{\sqrt{{\lambda}}},+\infty\right[\to\ensuremath{\mathbb{R}}$
is defined by:
\[
h_{\lambda}(s)=\frac{{\lambda}\sqrt{{\lambda}}s^2}{1+\sqrt{{\lambda}}s}.
\]
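For the reader's convenience, we record the elementary computation behind this equivalence: substituting $u=\dfrac{1}{\sqrt{{\lambda}}}+z$ into \thetag{P} and using only the definition of $h_{\lambda}$,
\[
{\lambda} u-\frac{1}{u}
={\lambda} z+\sqrt{{\lambda}}-\frac{\sqrt{{\lambda}}}{1+\sqrt{{\lambda}}z}
={\lambda} z+\frac{{\lambda} z}{1+\sqrt{{\lambda}}z}
=2{\lambda} z-\frac{{\lambda}\sqrt{{\lambda}}z^2}{1+\sqrt{{\lambda}}z}
=2{\lambda} z-h_{\lambda}(z),
\]
while $-\Delta u=-\Delta z$ and the constraint $u>0$ becomes $\sqrt{{\lambda}}z>-1$.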
Now we consider another simple transformation: $v:=\sqrt{{\lambda}}z$,
so that \thetag{P0} turns out to be equivalent to:
\begin{equation}\label{eqn:v-neumann-problem}
\begin{cases}
-\Delta v={\lambda}\left( 2v-h_1(v)\right)&\mbox{ in }\Omega,
\\
v>-1&\mbox{ in }\Omega,
\\
\nabla v\cdot\nu=0&\mbox{ on }\partial\Omega
\end{cases}
\tag{P1}
\end{equation}
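The second transformation is equally direct: if $v=\sqrt{{\lambda}}z$ then, by \thetag{P0},
\[
-\Delta v=\sqrt{{\lambda}}\left(2{\lambda} z-h_{\lambda}(z)\right)
=2{\lambda} v-\sqrt{{\lambda}}\,h_{\lambda}\!\left(\frac{v}{\sqrt{{\lambda}}}\right)
=2{\lambda} v-\frac{{\lambda} v^2}{1+v}
={\lambda}\left(2v-h_1(v)\right),
\]
while the constraint $\sqrt{{\lambda}}z>-1$ becomes $v>-1$.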
Choose $s_0\in]-1,0[$ (for instance $s_0:=-1/2$) and
define $\tilde h_1:\ensuremath{\mathbb{R}}\to\ensuremath{\mathbb{R}}$ by:
\begin{equation}\label{eqn:def-tilde-h-1}
\tilde h_1(s)(=\tilde h_{1,s_0}(s)):=
\begin{cases}
h_1(s)
&\mbox{ if }
s\geq s_0,
\\
h_1(s_0)+h_1'(s_0)(s-s_0)
\\
\qquad+\dfrac{h_1''(s_0)}{2}(s-s_0)^2
&\mbox{ if }
s\leq s_0.
\end{cases}
\end{equation}
Then $\tilde h_1\in\mathcal{C}^2(\ensuremath{\mathbb{R}})$ and
$\tilde h_1(0)=\tilde h_1'(0)=0$. Let $\tilde H_1:\ensuremath{\mathbb{R}}\to\ensuremath{\mathbb{R}}$ denote
the primitive function for $\tilde h_1$
(i.e. $\tilde H_1'=\tilde h_1$) such that $\tilde H_1(0)=0$.
Moreover we consider the functional
$\tilde I:\ensuremath{W^{1,2}(\Omega)}\to\ensuremath{\mathbb{R}}$ defined by
\[
\tilde I(v):=Q(v)+\tilde{\mathcal{H}}_1(v)
\]
where: \[
Q(v):=\frac{1}{2}\int_\Omega|\nabla v|^2\,dx-
\int_\Omega v^2\,dx
\quad,\quad
\tilde{\mathcal{H}}_{1}(v):=\int_\Omega\tilde H_1(v)\,dx. \] Using a classical bifurcation result for potential operators (see e.g. \cite{SzulkinWillem1999}) we get that there exists $\rho_0>0$ such that for all $\rho\in]0,\rho_0[$ there are two distinct pairs $({\lambda}_{1,\rho},v_{1,\rho})$ and $({\lambda}_{2,\rho},v_{2,\rho})$ which solve:
\begin{equation}\label{eqn:v-tilde-neumann-problem}
\begin{cases}
-\Delta v={\lambda}\left( 2v-\tilde h_1(v)\right)&\mbox{ in }\Omega,
\\
\nabla v\cdot\nu=0&\mbox{ on }\partial\Omega
\end{cases}
\end{equation} and such that \[
\int_\Omega v_{i,\rho}^2\,dx=\rho^2
\quad,\quad
\lim_{\rho\to 0}{\lambda}_{i,\rho}=\frac{\hat\mu}{2}
\qquad i=1,2. \]
Now using a standard regularity argument we can find a constant $K$ such that, for any solution $v$ of
\eqref{eqn:v-tilde-neumann-problem}:
\[
\|v\|_\infty\leq\|2v-\tilde h_1(v)\|_2\leq K\|v\|_2.
\]
Then, for $\rho_0$ small, $v_{i,\rho}>s_0$, $i=1,2$, so
$v_{i,\rho}$ actually solve \thetag{P1}. Going backwards and
setting
$u_{i,\rho}:=\dfrac{1}{\sqrt{{\lambda}_{i,\rho}}}+\dfrac{v_{i,\rho}}{\sqrt{{\lambda}_{i,\rho}}}$, we find the desired solutions of \eqref{eqn:singular-neumann-problem}.
\end{proof}
\section{A global bifurcation result for radial solutions}
We consider the case $N=2$ and $\Omega=B(0,R)=\set{x\in\ensuremath{\mathbb{R}}^2{\,:\,}\|x\|<R}$. We look for radial solutions, i.e. $z(x,y)=w(\|(x,y)\|)$. Actually, with similar arguments we could have considered the general case $N\geq2$. Given $R>0$, it is therefore convenient to introduce the Hilbert space:
\[
E:=\set{w:[0,R]\to\ensuremath{\mathbb{R}}{\,:\,}\int_0^R\rho\dot{w}^2\, d\rho<+\infty}
\]
endowed with
$\displaystyle{(v,w)_E:=\int_0^R\rho\dot v\dot w \,d\rho+\int_0^R\rho v w \,d\rho}$ and for ${\lambda}>0$ the sets:
\[
W_{\lambda}:=\set{w\in E{\,:\,} 1+\sqrt{{\lambda}}w(\rho)>0\ \forall\rho\in[0,R]},\qquad
\mathcal{W}:=\set{({\lambda},w)\in\ensuremath{\mathbb{R}}\times E{\,:\,} w\in W_{\lambda}}.
\]
It is clear that $\|w\|_\infty\leq \mathrm{const}\,\|w\|_E$, so $W_{\lambda}$ is open in $E$ and $\mathcal{W}$ is open in $\ensuremath{\mathbb{R}}\times E$. As is well known, the search for radial solutions leads to the equation: \begin{equation}\label{eqn:radial-equation} \left\{
\begin{aligned}
&\ddot{w}+\frac{\dot{w}}{\rho}=-{\lambda} w-\frac{{\lambda} w}{1+\sqrt{{\lambda}}w}=:f_{\lambda}(w),
\\
&\dot{w}(0)=\dot{w}(R)=0.
\end{aligned}
\right.
\tag{RP} \end{equation} By the above we mean that: \begin{equation}\label{eqn:radial-equation-weak}
({\lambda},w)\in\mathcal{W},
\qquad
\int_0^R\rho\dot{w}\dot{\delta}\,d\rho=
-\int_0^R \rho f_{\lambda}(w)\delta\,d\rho
\quad
\forall \delta\in E. \end{equation} It is standard to check that ``weak solutions'', i.e. solutions to \eqref{eqn:radial-equation-weak}, actually solve \eqref{eqn:radial-equation} in a classical sense.
It is clear that $({\lambda},0)$ is a solution for \eqref{eqn:radial-equation} for any ${\lambda}\in\ensuremath{\mathbb{R}}$. We call ``nontrivial'' solution a pair $({\lambda},w)$ with $w\neq0$ such that \eqref{eqn:radial-equation} holds.
\begin{rmk}
If $({\lambda},w)$ is a nontrivial solution then ${\lambda}>0$. To see this
it suffices to multiply \eqref{eqn:radial-equation} by $w$ and
integrate over $[0,R]$. Actually this property is true in the
general case (not just in the radial problem). \end{rmk} We shall use the following simple inequality. \begin{rmk}\label{rmk:inequality-logarithm}
Let $0<a<b<+\infty$. We have:
\begin{equation}\label{eqn:inequality-logarithm}
\frac{b-a}{b}\leq\ln\left(\frac{b}{a}\right)\leq\frac{b-a}{a}.
\end{equation}
We have indeed:
\[
\ln\left(\frac{b}{a}\right)=
\ln\left(1+\frac{b-a}{a}\right)\leq\frac{b-a}{a}
\]
and\[
\ln\left(\frac{b}{a}\right)=
-\ln\left(\frac{a}{b}\right)=
-\ln\left(1+\frac{a-b}{b}\right)\geq
-\frac{a-b}{b}=\frac{b-a}{b}.
\] \end{rmk}
Now let us suppose that a solution $({\lambda},w)$ exists, so we can find some properties and estimates on $w$. Arguing as in the proof of Lemma 2.2 in \cite{CrandallRabinowitzSturmLiou1970} we have that either $w=0$ or $[0,R]$ can be split as the union of a finite number of subintervals $[r_1,r_2]$ where $w$ has one of the following behaviors: \begin{description}
\item[(A)] $w(r_1)>0$, $\dot{w}(r_1)=0$, $\dot{w}<0$ in $]r_1,r_2]$, and $w(r_2)=0$;
\item[(B)] $w(r_1)=0$, $\dot{w}<0$ in $]r_1,r_2[$, $\dot{w}(r_2)=0$, and $w(r_2)<0$;
\item[(C)] $w(r_1)<0$, $\dot{w}(r_1)=0$, $\dot{w}>0$ in $]r_1,r_2]$, and $w(r_2)=0$;
\item[(D)] $w(r_1)=0$, $\dot{w}>0$ in $]r_1,r_2[$, $\dot{w}(r_2)=0$, and $w(r_2)>0$. \end{description}
\begin{center}
\includegraphics[height=3cm]{soluzione-radiale-1AB.pdf}
\qquad\qquad
\includegraphics[height=3cm]{soluzione-radiale-1CD.pdf}
\end{center}
So let $w:[r_1,r_2]\to\ensuremath{\mathbb{R}}$ be as in one of the above cases. Multiplying \eqref{eqn:radial-equation} by $\dot{w}$ gives \[
\frac{1}{2}\frac{d}{d\rho}\left(\dot{w}^2\right)+\frac{\dot{w}^2}{\rho}=\frac{d}{d\rho}F_{\lambda}(w), \] where $F_{\lambda}$ denotes the primitive of $f_{\lambda}$ with $F_{\lambda}(0)=0$. Let $p:=\dot{w}^2$; the previous equation can be written as: \[
\frac{1}{2}\dot{p}+\frac{p}{\rho}=\frac{d}{d\rho}F_{\lambda}(w) \] which is equivalent to \[
\frac{d}{d\rho}(\rho^2p)=2\rho^2\frac{d}{d\rho}F_{\lambda}(w)=
2\rho^2\frac{d}{d\rho}F_1\left(\sqrt{{\lambda}}w\right). \] We integrate between $\rho_1$ and $\rho_2$, where $r_1\leq\rho_1\leq\rho_2\leq r_2$: \[
\rho_2^2p(\rho_2)- \rho_1^2p(\rho_1)=
2\rho_2^2F_{\lambda}(w(\rho_2))-2\rho_1^2F_{\lambda}(w(\rho_1))-
\int_{\rho_1}^{\rho_2}4{\sigma} F_{\lambda}(w({\sigma}))\,d{\sigma} \] Notice that $F_{\lambda}$ is increasing on $\left]-\dfrac{1}{\sqrt{{\lambda}}},0\right[$ and decreasing on $]0,+\infty[$, so: \[
{\sigma}\mapsto F_{\lambda}(w({\sigma}))\mbox{ is increasing (decreasing) in cases \thetag{A} and \thetag{C} (in cases \thetag{B} and \thetag{D})} \] We hence get, in cases \thetag{A} and \thetag{C}: \[
-2(\rho_2^2-\rho_1^2)F_{\lambda}(w(\rho_2))\leq
-\int_{\rho_1}^{\rho_2}4{\sigma} F_{\lambda}(w({\sigma}))\,d{\sigma}\leq
-2(\rho_2^2-\rho_1^2)F_{\lambda}(w(\rho_1)) \] while in cases \thetag{B} and \thetag{D}: \[
-2(\rho_2^2-\rho_1^2)F_{\lambda}(w(\rho_1))\leq
-\int_{\rho_1}^{\rho_2}4{\sigma} F_{\lambda}(w({\sigma}))\,d{\sigma}\leq
-2(\rho_2^2-\rho_1^2)F_{\lambda}(w(\rho_2)) \] So in cases \thetag{A} and \thetag{C} we have: \begin{equation}\label{eqn:inequality-AC}
2\rho_1^2(F_{\lambda}(w(\rho_2))-F_{\lambda}(w(\rho_1)))\leq
\rho_2^2p(\rho_2)- \rho_1^2p(\rho_1)\leq
2\rho_2^2(F_{\lambda}(w(\rho_2))-F_{\lambda}(w(\rho_1))) \end{equation} and in cases \thetag{B} and \thetag{D}: \begin{equation}\label{eqn:inequality-BD}
2\rho_2^2(F_{\lambda}(w(\rho_2))-F_{\lambda}(w(\rho_1)))\leq
\rho_2^2p(\rho_2)- \rho_1^2p(\rho_1)\leq
2\rho_1^2(F_{\lambda}(w(\rho_2))-F_{\lambda}(w(\rho_1))) \end{equation} Now we estimate $w(\rho)$; we need to take into account all four cases \thetag{A},\thetag{B},\thetag{C},\thetag{D}.
Case \thetag{A}. We rename $\bar\rho:=r_1$, $\rho_0:=r_2$ and let $h:=w(\bar\rho)>0$. We use \eqref{eqn:inequality-AC} with $\rho_1=\bar\rho$ and $\rho_2={\sigma}\in[\bar\rho,\rho_0]$: \[
2\bar\rho^2(F_{\lambda}(w({\sigma}))-F_{\lambda}(h))\leq
{\sigma}^2\dot{w}({\sigma})^2\leq
2{\sigma}^2(F_{\lambda}(w({\sigma}))-F_{\lambda}(h)). \] Then we take the square root and divide: \[
\sqrt{2}\frac{\bar\rho}{{\sigma}}\leq
\frac{-\dot{w}({\sigma})}{\sqrt{F_{\lambda}(w({\sigma}))-F_{\lambda}(h)}}\leq
\sqrt{2} \] and now we integrate between $\bar\rho$ and $\rho\in[\bar\rho,\rho_0]$ getting: \[
\sqrt{2}\bar\rho\ln\left(\frac{\rho}{\bar\rho}\right)\leq
-\Phi_{{\lambda},h}(w(\rho))+\Phi_{{\lambda},h}(h)\leq
\sqrt{2}(\rho-\bar\rho) \] where $\Phi_{{\lambda},h}:[0,h]\to\ensuremath{\mathbb{R}}$ is defined by: \[
\Phi_{{\lambda},h}(s):=\int_0^s\frac{d\xi}{\sqrt{F_{\lambda}(\xi)-F_{\lambda}(h)}} \] (it is simple to check that the integral converges at $\xi=h$). So we deduce: \[
\Phi_{{\lambda},h}^{-1}\left(\Phi_{{\lambda},h}(h)-\sqrt{2}\left(\rho-\bar\rho\right)\right)
\leq w(\rho)\leq
\Phi_{{\lambda},h}^{-1}\left(\Phi_{{\lambda},h}(h)-\sqrt{2}\bar\rho\ln\left(\frac{\rho}{\bar\rho}\right)\right) \] which we prefer to write as \begin{equation}\label{eq-inequality-case-AB}
\Phi_{{\lambda},h}^{-1}\left(\Phi_{{\lambda},h}(h)+\sqrt{2}\left(\bar\rho-\rho\right)\right)
\leq w(\rho)\leq
\Phi_{{\lambda},h}^{-1}\left(\Phi_{{\lambda},h}(h)+\sqrt{2}\bar\rho\ln\left(\frac{\bar\rho}{\rho}\right)\right) \end{equation} In particular, taking $\rho=\rho_0$, which gives $w(\rho_0)=0$, (and using \eqref{eqn:inequality-logarithm}) we have: \begin{equation}\label{eqn:estimate-interval-case-A} \sqrt{2}\frac{\bar\rho}{\rho_0}(\rho_0-\bar\rho)\leq
\sqrt{2}\bar\rho\ln\left(\frac{\rho_0}{\bar\rho}\right)\leq\Phi_{{\lambda},h}(h)\leq\sqrt{2}\left(\rho_0-\bar\rho\right). \end{equation} Moreover taking $\rho_1=\bar\rho$ and $\rho_2=\rho_0$ in \eqref{eqn:inequality-AC} we have: \begin{equation}\label{eqn:estimate-derivative-case-A}
\sqrt{2}\frac{\bar\rho}{\rho_0}\sqrt{-F_{\lambda}(h)}\leq
-\dot{w}(\rho_0)\leq
\sqrt{2}\sqrt{-F_{\lambda}(h)} \end{equation}
Case \thetag{B}. We rename $\rho_0:=r_1$, $\bar\rho:=r_2$ and let $h:=w(\bar\rho)<0$. We use \eqref{eqn:inequality-BD} with $\rho_1={\sigma}\in[\rho_0,\bar\rho]$ and $\rho_2=\bar\rho$: \[
2\bar\rho^2(F_{\lambda}(h)-F_{\lambda}(w({\sigma})))\leq
-{\sigma}^2\dot{w}({\sigma})^2\leq
2{\sigma}^2(F_{\lambda}(h)-F_{\lambda}(w({\sigma}))). \] We change sign and proceed as in case \thetag{A}: \[
2{\sigma}^2(F_{\lambda}(w({\sigma}))-F_{\lambda}(h))\leq
{\sigma}^2\dot{w}({\sigma})^2\leq
2\bar\rho^2(F_{\lambda}(w({\sigma}))-F_{\lambda}(h)). \] Take the square root and divide: \[
\sqrt{2}\leq
\frac{-\dot{w}({\sigma})}{\sqrt{F_{\lambda}(w({\sigma}))-F_{\lambda}(h)}}\leq
\sqrt{2}\frac{\bar\rho}{{\sigma}}. \] Integrate on $[\rho,\bar\rho]$: \[
\sqrt{2}(\bar\rho-\rho)\leq
-\Phi_{{\lambda},h}(h)+\Phi_{{\lambda},h}(w(\rho))\leq
\sqrt{2}\bar\rho\ln\left(\frac{\bar\rho}{\rho}\right) \] defining $\Phi_{{\lambda},h}:[h,0]\to\ensuremath{\mathbb{R}}$ as in case \thetag{A}. Applying $\Phi_{{\lambda},h}^{-1}$ we get that
\eqref{eq-inequality-case-AB} holds in case \thetag{B} too. In particular, taking $\rho=\rho_0$ (and using \eqref{eqn:inequality-logarithm}): \begin{equation}\label{eqn:estimate-interval-case-B}
\sqrt{2}(\bar\rho-\rho_0)\leq
-\Phi_{{\lambda},h}(h)\leq
\sqrt{2}\bar\rho\ln\left(\frac{\bar\rho}{\rho_0}\right)\leq
\sqrt{2}\frac{\bar\rho}{\rho_0}(\bar\rho-\rho_0) \end{equation} and taking $\rho_1=\rho_0$ and $\rho_2=\bar\rho$ in \eqref{eqn:inequality-BD} we have: \begin{equation}\label{eqn:estimate-derivative-case-B}
\sqrt{2}\sqrt{-F_{\lambda}(h)}\leq
-\dot{w}(\rho_0)\leq
\sqrt{2}\frac{\bar\rho}{\rho_0}\sqrt{-F_{\lambda}(h)} \end{equation}
Case \thetag{C}. We rename $\bar\rho:=r_1$, $\rho_0:=r_2$ and let $h:=w(\bar\rho)<0$. Using \eqref{eqn:inequality-AC} with $\rho_1=\bar\rho$ and $\rho_2={\sigma}\in[\bar\rho,\rho_0]$ we obtain the same inequality as in case \thetag{A}.
After taking the square root and dividing: \[
\sqrt{2}\frac{\bar\rho}{{\sigma}}\leq
\frac{\dot{w}({\sigma})}{\sqrt{F_{\lambda}(w({\sigma}))-F_{\lambda}(h)}}\leq
\sqrt{2}. \] We integrate between $\bar\rho$ and $\rho\in[\bar\rho,\rho_0]$ getting: \[
\sqrt{2}\bar\rho\ln\left(\frac{\rho}{\bar\rho}\right)\leq
\Phi_{{\lambda},h}(w(\rho))-\Phi_{{\lambda},h}(h)\leq
\sqrt{2}(\rho-\bar\rho) \] with $\Phi_{{\lambda},h}:[h,0]\to\ensuremath{\mathbb{R}}$ defined as above. So we deduce: \begin{equation}\label{eq-inequality-case-CD}
\Phi_{{\lambda},h}^{-1}\left(\Phi_{{\lambda},h}(h)+\sqrt{2}\bar\rho\ln\left(\frac{\rho}{\bar\rho}\right)\right)
\leq w(\rho)\leq
\Phi_{{\lambda},h}^{-1}\left(\Phi_{{\lambda},h}(h)+\sqrt{2}\left(\rho-\bar\rho\right)\right) \end{equation} In particular, taking $\rho=\rho_0$ (and using \eqref{eqn:inequality-logarithm}): \begin{equation}\label{eqn:estimate-interval-case-C}
\sqrt{2}\frac{\bar\rho}{\rho_0}(\rho_0-\bar\rho)\leq
\sqrt{2}\bar\rho\ln\left(\frac{\rho_0}{\bar\rho}\right)\leq-\Phi_{{\lambda},h}(h)\leq\sqrt{2}\left(\rho_0-\bar\rho\right). \end{equation} Moreover taking $\rho_1=\bar\rho$ and $\rho_2=\rho_0$ in \eqref{eqn:inequality-AC} we have: \begin{equation}\label{eqn:estimate-derivative-case-C}
\sqrt{2}\frac{\bar\rho}{\rho_0}\sqrt{-F_{\lambda}(h)}\leq
\dot{w}(\rho_0)\leq
\sqrt{2}\sqrt{-F_{\lambda}(h)} \end{equation}
Case \thetag{D}. We rename $\rho_0:=r_1$, $\bar\rho:=r_2$ and let $h:=w(\bar\rho)>0$. Using \eqref{eqn:inequality-BD} with $\rho_1={\sigma}\in[\rho_0,\bar\rho]$ and $\rho_2=\bar\rho$ we obtain the same inequalities as in case \thetag{B}. Then we take the square root and divide:
\[
\sqrt{2}\leq
\frac{\dot{w}({\sigma})}{\sqrt{F_{\lambda}(w({\sigma}))-F_{\lambda}(h)}}\leq
\sqrt{2}\frac{\bar\rho}{{\sigma}}. \] Integrate on $[\rho,\bar\rho]$: \[
\sqrt{2}(\bar\rho-\rho)\leq
\Phi_{{\lambda},h}(h)-\Phi_{{\lambda},h}(w(\rho))\leq
\sqrt{2}\bar\rho\ln\left(\frac{\bar\rho}{\rho}\right) \] with the usual definition of $\Phi_{{\lambda},h}:[0,h]\to\ensuremath{\mathbb{R}}$. Applying $\Phi_{{\lambda},h}^{-1}$ we obtain that
\eqref{eq-inequality-case-CD} holds in case \thetag{D} too. In particular, taking $\rho=\rho_0$ (and using \eqref{eqn:inequality-logarithm}): \begin{equation}\label{eqn:estimate-interval-case-D}
\sqrt{2}\left(\bar\rho-\rho_0\right)\leq\Phi_{{\lambda},h}(h)\leq\sqrt{2}\bar\rho\ln\left(\frac{\bar\rho}{\rho_0}\right)\leq
\sqrt{2}\frac{\bar\rho}{\rho_0}(\bar\rho-\rho_0) \end{equation} and taking $\rho_1=\rho_0$ and $\rho_2=\bar\rho$ in \eqref{eqn:inequality-BD} we have: \begin{equation}\label{eqn:estimate-derivative-case-D}
\sqrt{2}\sqrt{-F_{\lambda}(h)}\leq
\dot{w}(\rho_0)\leq
\sqrt{2}\frac{\bar\rho}{\rho_0}\sqrt{-F_{\lambda}(h)} \end{equation}
Now we have: \begin{multline*}
\sqrt{2}\Phi_{{\lambda},h}(h)=
\\
\int_0^h\frac{d\xi}{\sqrt{F(\sqrt{{\lambda}}\xi)-F(\sqrt{{\lambda}}h)}}=
\int_0^1\frac{h\,d{\sigma}}{\sqrt{F({\sigma}\sqrt{{\lambda}}h)-F(\sqrt{{\lambda}}h)}}=
\frac{1}{\sqrt{{\lambda}}}\bar\Phi(\sqrt{{\lambda}}h) \end{multline*} where: \[
\bar\Phi(s):=\int_0^1\frac{s\,d{\sigma}}{\sqrt{F({\sigma} s)-F(s)}}=
\mathrm{sgn}(s)\int_0^1\sqrt{\frac{s^2}{F({\sigma} s)-F(s)}}\,d{\sigma}. \]
With simple computations: \[
\lim_{s\to0}\frac{s^2}{F({\sigma} s)-F(s)}=\frac{1}{1-{\sigma}^2},
\qquad
\lim_{s\to+\infty}\frac{s^2}{F({\sigma} s)-F(s)}=\frac{2}{1-{\sigma}^2}, \] and \[
\lim_{s\to-1^+}\frac{s^2}{F({\sigma} s)-F(s)}=0 \] So we deduce that \begin{gather}\label{eqn:limits-Phi-positive}
\lim_{h\to0^+}\Phi_{{\lambda},h}(h)=
\frac{\pi}{2\sqrt{2{\lambda}}}
,\quad
\lim_{h\to+\infty}\Phi_{{\lambda},h}(h)=
\frac{\pi}{2\sqrt{{\lambda}}},
\\
\label{eqn:limits-Phi-negative}
\lim_{h\to0^-}\Phi_{{\lambda},h}(h)=
-\frac{\pi}{2\sqrt{2{\lambda}}}
,\quad
\lim_{\sqrt{{\lambda}}h\to-1^+}\Phi_{{\lambda},h}(h)=0. \end{gather} \begin{center}
\includegraphics[height=5cm]{Phi-graph.pdf}
\end{center}
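For completeness, we record the explicit primitive behind these computations: with the normalization $F_{{\lambda}}(0)=0$, a direct integration of $f_1(s)=-s-\dfrac{s}{1+s}$ gives
\[
F_1(s)=-\frac{s^2}{2}-s+\ln(1+s),\qquad s>-1,
\]
so that $F_1(s)=-s^2+O(s^3)$ as $s\to0$ and $F_1(s)\sim-\dfrac{s^2}{2}$ as $s\to+\infty$, while $F_1(s)\to-\infty$ as $s\to-1^+$; these expansions give the three limits of $s^2/(F({\sigma} s)-F(s))$ displayed above.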
To state the main result we need some notation, which we take
from \cite{CrandallRabinowitzSturmLiou1970, RabinowitzSturmLiouville70}. For $k\in\ensuremath{\mathbb{N}}$, $k\geq1$, we consider
\begin{gather*}
\mathcal{S}:=\set{({\lambda},w)\in\mathcal{W}{\,:\,}({\lambda},w)\mbox{ is a solution to \eqref{eqn:radial-equation}}}
\\
\mathcal{S}^+_k:=\set{({\lambda},w)\in\mathcal{S}{\,:\,}\mbox{ $w$ has $k$ nodes in $]0,R[$}, w(0)>0},
\\
\mathcal{S}^-_k:=\set{({\lambda},w)\in\mathcal{S}{\,:\,}\mbox{ $w$ has $k$ nodes in $]0,R[$}, w(0)<0},
\end{gather*}
We also consider the two eigenvalue problems: \begin{equation}\label{eqn:radial-eigenvalue-problem}
\ddot{w}+\frac{\dot{w}}{\rho}=-\mu w,
\qquad
\dot{w}(0)=\dot{w}(R)=0.
\tag{RP*} \end{equation} \begin{equation}\label{eqn:radial-eigenvalue-problem-zero}
\ddot{v}+\frac{\dot{v}}{\rho}=-\nu v,
\qquad
\dot{v}(0)=0,{v}(R)=0.
\tag{RP0} \end{equation} It is clear that $w\neq0$ and $\mu\neq0$ solve \eqref{eqn:radial-eigenvalue-problem} if and only if, for some integer $k\geq1$: \begin{equation}\label{eqn:radial-eigenvalues}
\mu=\mu_k:=\left(\frac{y_k}{R}\right)^2 \end{equation} where $y_k$ denotes the $k$-th nontrivial zero of $J_0'$ and $J_0$ is the first Bessel function, and
\begin{equation}\label{eqn:radial-eigenfunctions}
w=\alpha w_k,\quad\alpha\in\ensuremath{\mathbb{R}},\qquad
w_k(\rho):=J_0\left(\frac{y_k}{R}\rho\right). \end{equation} For the sake of completeness we can agree that $\mu_0=0$ and $w_0(\rho)=J_0(0)$. In the same way $v\neq0$ and $\nu$ solve \eqref{eqn:radial-eigenvalue-problem-zero} if and only if, for some integer $k\geq1$: \begin{equation}\label{eqn:radial-eigenvalues-zero}
\nu=\nu_k:=\left(\frac{z_k}{R}\right)^2 \end{equation} where $z_k$ is the $k$-th zero of $J_0$ and
\begin{equation}\label{eqn:radial-eigenfunctions-zero}
v=\alpha v_k,\quad\alpha\in\ensuremath{\mathbb{R}},\qquad
v_k(\rho):=J_0\left(\frac{z_k}{R}\rho\right). \end{equation} Notice that $\nu_k<\mu_k<\nu_{k+1}$ for all $k$.
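For the reader's convenience, these formulas can be checked directly from the Bessel equation $J_0''(x)+\dfrac{J_0'(x)}{x}+J_0(x)=0$: setting $w_k(\rho)=J_0\!\left(\frac{y_k}{R}\rho\right)$ we get
\[
\ddot{w}_k(\rho)+\frac{\dot{w}_k(\rho)}{\rho}
=\left(\frac{y_k}{R}\right)^2\left[J_0''\!\left(\tfrac{y_k}{R}\rho\right)+\frac{J_0'\!\left(\tfrac{y_k}{R}\rho\right)}{\tfrac{y_k}{R}\rho}\right]
=-\mu_k\,w_k(\rho),
\]
together with $\dot{w}_k(0)=\frac{y_k}{R}J_0'(0)=0$ and $\dot{w}_k(R)=\frac{y_k}{R}J_0'(y_k)=0$; the verification for $v_k$ and \eqref{eqn:radial-eigenvalue-problem-zero} is identical, with $z_k$ in place of $y_k$ and the boundary condition $v_k(R)=J_0(z_k)=0$.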
\begin{thm}\label{thm:global-bifurvation-radial}
Let $\mu_k>0$ be an eigenvalue for \eqref{eqn:radial-eigenvalue-problem}. Then $\mathcal{S}_k^+$ is a connected set and
\begin{itemize}
\item $(\mu_k/2,0)\in\overline{\mathcal{S}_k^+}$;
\item
$\displaystyle{
0<
\inf\set{{\lambda}\in\ensuremath{\mathbb{R}}{\,:\,}\exists w\in E\mbox{ with }({\lambda},w)\in\mathcal{S}_k^+}
}$;
\item
$\displaystyle{
\sup\set{{\lambda}\in\ensuremath{\mathbb{R}}{\,:\,}\exists w\in E\mbox{ with }({\lambda},w)\in\mathcal{S}_k^+}
<+\infty
}$;
\item
$\mathcal{S}_k^+$ is unbounded and contains a sequence $({\lambda}_n,w_n)$
such that $\|w_n\|_E\to\infty$ and
\begin{equation}\label{eqn:limit-of-the-branch}
\lim_{n\to\infty}{\lambda}_n=
\begin{cases}
\mu_{k/2}&\mbox{ if $k$ is even},
\\
\nu_{(k+1)/2}&\mbox{ if $k$ is odd}.
\end{cases}
\end{equation}
\end{itemize}
\end{thm}
The proof of Theorem \eqref{thm:global-bifurvation-radial}
will be obtained from some preliminary statements.
\begin{rmk}
If $({\lambda},w)\in\mathcal{S}^+_k$ (resp. $({\lambda},w)\in\mathcal{S}^-_k$),
and $0=\rho_0<\rho_1<\dots<\rho_k<\rho_{k+1}=R$, $\rho_1,\dots,\rho_k$ being the nodal points of $w$, then:
\begin{equation}\label{eqn:estimate-positive-intervals}
\rho_{i+1}-\rho_i\geq
\frac{\pi}{4\sqrt{{\lambda}}}
\ \mbox{for $i$ even}
\quad(\mbox{resp. for $i$ odd}).
\end{equation}
This is easily seen using the right hand sides of the inequalities \eqref{eqn:estimate-interval-case-A},
\eqref{eqn:estimate-interval-case-D},
and \eqref{eqn:limits-Phi-positive}.
\end{rmk}
\begin{lma}
For any integer $k$ there exist two constants $\underline{{\lambda}}_k$
and $\overline{{\lambda}}_k$ such that
\begin{equation}\label{eqn:estimate-lambda-on-solutions}
({\lambda},w)\in\mathcal{S}_k^+\cup\mathcal{S}_k^-
\Rightarrow
0<\underline{{\lambda}}_k\leq{\lambda}\leq\overline{{\lambda}}_k<+\infty
\end{equation}
\end{lma}
\begin{proof}
Take any subinterval $[r_1,r_2]$ as in cases
\thetag{A}--\thetag{D} and consider the first eigenvalue
$\bar\mu=\bar\mu(r_1,r_2)$ for the mixed type boundary condition:
\[
\left\{
\begin{aligned}
-&(\rho\dot{w})'=\mu\rho w\quad\mbox{on }]r_1,r_2[
\\
&\dot{w}(r_1)=0,w(r_2)=0\qquad
(\mbox{resp. }w(r_1)=0,\dot{w}(r_2)=0)
\end{aligned}
\right.
\]
in cases \thetag{A},\thetag{C} (resp. cases \thetag{B},\thetag{D}). We can choose an eigenfunction $\bar e$ corresponding
to $\bar\mu$ so that $w\bar e>0$ in
$]r_1,r_2[$. Multiplying \eqref{eqn:radial-equation}
by $\bar e$ and integrating over $[r_1,r_2]$ yields:
\[
\bar\mu\int_{r_1}^{r_2}\rho w\bar e\,d\rho=
{\lambda}\int_{r_1}^{r_2}\rho w\bar e\left(1+\frac{1}{1+\sqrt{{\lambda}}w}\right)\,d\rho.
\]
This implies:
\[
{\lambda}\int_{r_1}^{r_2}\rho w\bar e\,d\rho\leq
\bar\mu\int_{r_1}^{r_2}\rho w\bar e\,d\rho\leq
2{\lambda}\int_{r_1}^{r_2}\rho w\bar e\,d\rho
\]
which gives
\(
\dfrac{\bar\mu}{2}\leq{\lambda}\leq\bar\mu.
\)
Now since $]r_1,r_2[\subset]0,R[$ we have
$\bar\mu\geq\bar\mu(0,R)$. On the other hand, since $w$ has $k$
nodal points we can choose $r_1$, $r_2$ such that
$r_2-r_1\geq R/k$, which implies
$\bar\mu\leq\sup\limits_{b-a=R/k}\bar\mu(a,b)<+\infty$.
This proves \eqref{eqn:estimate-lambda-on-solutions}.
\end{proof}
\begin{lma}\label{lma:main-lemma-sequences}
Let $({\lambda}_n,w_n)$ be a sequence in $\mathcal{S}_k^+$.
Then we can consider $0<\rho_{1,n}<\cdots<\rho_{k,n}<R$ to be the nodes of $w_n$ and set $\rho_{0,n}:=0$, $\rho_{k+1,n}:=R$;
in this way $w_n(\rho)>0$ on $]\rho_{i,n},\rho_{i+1,n}[$ if $i$ is even
and $w_n(\rho)<0$ on $]\rho_{i,n},\rho_{i+1,n}[$ if $i$ is odd.
The following facts are equivalent:
\begin{description}
\item[(a)]
\[
\lim_{n\to\infty}\sup_{\rho\in[0,R]}w_n(\rho)=+\infty;
\]
\item[(b)]
\[
\lim_{n\to\infty}\inf_{\rho\in[0,R]}(1+\sqrt{{\lambda}_n}\, w_n(\rho))=0;
\]
\item[(c)]
\[
\lim_{n\to\infty}\sup_{\rho\in[\rho_{i,n},\rho_{i+1,n}]}w_n(\rho)=+\infty
\quad\mbox{if $i$ is even};
\]
\item[(d)]
\[
\lim_{n\to\infty}\inf_{\rho\in[\rho_{i,n},\rho_{i+1,n}]}(1+\sqrt{{\lambda}_n}\, w_n(\rho))=0
\quad\mbox{if $i$ is odd};
\]
\item[(e)]
\[
\lim_{n\to\infty}\left(\rho_{i+1,n}-\rho_{i,n}\right)=0
\quad\mbox{if $i$ is odd};
\]
\end{description}
Moreover, if any of the above holds, then \eqref{eqn:limit-of-the-branch} holds.
\end{lma}
\begin{proof}
We can assume, passing to a subsequence, that
${\lambda}_n\to\hat{\lambda}\in[\underline{{\lambda}}_k,\overline{{\lambda}}_k]$.
First notice that for all $i$ even (corresponding to $w>0$)
we have:
\[
\rho_{i+1,n}-\rho_{i,n}\geq\frac{\pi}{4\sqrt{\overline{{\lambda}}_k}}
\]
as we can infer from \eqref{eqn:estimate-interval-case-A} or \eqref{eqn:estimate-interval-case-D} and the behaviour of $\Phi_{{\lambda},h}(h)$
in \eqref{eqn:limits-Phi-positive}.
Let
\[
h_{i,n}:=\max_{\rho_{i,n}\leq\rho\leq\rho_{i+1,n}}w_n(\rho)
\mbox{ for $i$ even},
\qquad
h_{i,n}:=\min_{\rho_{i,n}\leq\rho\leq\rho_{i+1,n}}w_n(\rho)
\mbox{ for $i$ odd}
\]
Then for any $i$ even:
\[
h_{i,n}\to+\infty
\Leftrightarrow
\Phi_{{\lambda}_n,h_{i,n}}(h_{i,n})\to\frac{\pi}{2\sqrt{\hat{\lambda}}}
\Leftrightarrow
\dot{w}_n(\rho_{i,n})\to+\infty
\Leftrightarrow
\dot{w}_n(\rho_{i+1,n})\to-\infty.
\]
This can be deduced from \eqref{eqn:limits-Phi-positive},
\eqref{eqn:estimate-derivative-case-A}, and \eqref{eqn:estimate-derivative-case-D}. In the same way, using
\eqref{eqn:limits-Phi-negative}, \eqref{eqn:estimate-derivative-case-B}, and \eqref{eqn:estimate-derivative-case-C} we get that,
for $i$ odd:
\[
1+\sqrt{{\lambda}_n} h_{i,n}\to0
\Leftrightarrow
\Phi_{{\lambda}_n,h_{i,n}}(h_{i,n})\to0
\Leftrightarrow
\dot{w}_n(\rho_{i,n})\to-\infty
\Leftrightarrow
\dot{w}_n(\rho_{i+1,n})\to+\infty.
\]
Now we prove our claims. Let $\bar i\in\set{0,\dots,k}$ with
$\bar i$ even (resp. odd) and suppose that $h_{\bar i,n}\to+\infty$
(resp. $1+\sqrt{{\lambda}_n} h_{\bar i,n}\to0$). Then, in both cases, $F_{{\lambda}_n}(h_{\bar i,n})\to-\infty$
and by
\eqref{eqn:estimate-derivative-case-A},
\eqref{eqn:estimate-derivative-case-D}
(resp.
\eqref{eqn:estimate-derivative-case-B},
\eqref{eqn:estimate-derivative-case-C}
) we get that:
\[
\dot{w}_n(\rho_{\bar i,n})\to+\infty,\
\dot{w}_n(\rho_{\bar i+1,n})\to-\infty
\quad
(\dot{w}_n(\rho_{\bar i,n})\to-\infty,\
\dot{w}_n(\rho_{\bar i+1,n})\to+\infty)
\]
which in turn implies:
\[
F_{{\lambda}_n}(h_{\bar i-1,n})\to-\infty,
\
F_{{\lambda}_n}(h_{\bar i+1,n})\to-\infty
\]
(with the obvious exceptions when $\bar i-1<0$ or $\bar i+1>k$).
So we get:
\[
1+\sqrt{{\lambda}_n}h_{\bar i-1,n}\to0\
(h_{\bar i-1,n}\to+\infty),
\
1+\sqrt{{\lambda}_n}h_{\bar i+1,n}\to0\
(h_{\bar i+1,n}\to+\infty).
\]
This shows that the property $|F_{{\lambda}}(h_{i,n})|\to+\infty$
``propagates'' from the $i$-th interval to the previous and to the next one. From this it is easy to deduce that
\thetag{a}--\thetag{d} are all equivalent. To prove that they
are equivalent to \thetag{e} just use \eqref{eqn:estimate-interval-case-A},\eqref{eqn:estimate-interval-case-B},\eqref{eqn:estimate-interval-case-C},\eqref{eqn:estimate-interval-case-D}, depending on the case, {\bf noticing that}
$\rho_{1,n}\geq\dfrac{\pi}{4\sqrt{\overline{{\lambda}}_k}}$, as follows from
\eqref{eqn:estimate-positive-intervals}
(this would not be possible if we were considering $\mathcal{S}_k^-$).
Finally suppose that $({\lambda}_n,w_n)$ verifies any of
\thetag{a}--\thetag{e}. Then $\|w_n\|_\infty \to+\infty$.
Let $\hat w_n:=\dfrac{w_n}{\|w_n\|_\infty}$. We can suppose that
$\hat w_n\rightharpoonup\hat w$ in $E$ and that:
\begin{gather*}
\rho_{1,n}\to\rho_1,
\
\rho_{2j-1,n}\to\rho_j,\ \rho_{2j,n}\to\rho_j
\mbox{ $1\leq j\leq k/2$},
\
\rho_{k,n}\to R
\mbox{ if $k$ is odd},
\end{gather*}
where $0=\rho_0<\rho_1<\cdots<\rho_h<\rho_{h+1}=R$ and
$h=\lfloor k/2\rfloor$ (so $\rho_1=R$ when $k=1$).
It is not difficult to prove that
$\hat w(\rho)>0$ in $]\rho_i,\rho_{i+1}[$ if $i=0,\dots,h$,
$\hat w(\rho_1)=\cdots=\hat w(\rho_h)=0$,
${\hat w}'(0)=0$ and ${\hat w}'(R)=0$ if $k$ is even
while $\hat w(R)=0$ if $k$ is odd.
Moreover for any $i=0,\dots,h$:
\[
-(\rho\hat{w}')'=\hat{\lambda}\rho\hat w\quad\mbox{on }]\rho_i,\rho_{i+1}[
\]
Now we can rearrange $\hat w$ defining
$\tilde w:=\sum_{j=0}^h(-1)^j\alpha_j\hat w\mathbb{1}_{[\rho_j,\rho_{j+1}]}$, where $\alpha_1=1$ and
$\alpha_{j}\hat{w}'_-(\rho_j)=\alpha_{j+1}\hat{w}'_+(\rho_j)$,
$j=1,\dots,h$. In this way $(\hat{\lambda},\tilde w)$ is an
eigenvalue--eigenfunction pair for
problem \eqref{eqn:radial-eigenvalue-problem} if $k$ is even
and for \eqref{eqn:radial-eigenvalue-problem-zero} if
$k$ is odd. Since $\tilde w$ has $h=k/2$ nodal points for
$k$ even and $h+1=(k+1)/2$ if $k$ is odd, then
\eqref{eqn:limit-of-the-branch} holds.
\end{proof}
\begin{proof}[Proof of Theorem \eqref{thm:global-bifurvation-radial}]
If $\varepsilon{}\in]0,1[$ we set:
\[
\mathcal{O}_\varepsilon{}:=\set{({\lambda},w)\in\ensuremath{\mathbb{R}}\times E{\,:\,} \varepsilon{}<{\lambda}<\varepsilon{}^{-1},
1+\sqrt{{\lambda}}w(\rho)>\varepsilon{},w(\rho)<\varepsilon{}^{-1}\ \forall\rho\in[0,R]}
\]
Clearly $\mathcal{O}_\varepsilon{}$ is an open set with
$\mathcal{O}_\varepsilon{}\subset\mathcal{W}$.
Moreover $(\mu_k/2,0)\in\mathcal{O}_\varepsilon{}$ if $\varepsilon{}$
is sufficiently small. Define $\tilde h_{1,\varepsilon{}}$ as in
\eqref{eqn:def-tilde-h-1} with $s_0=\varepsilon{}-1$ and let
$\tilde h_{{\lambda},\varepsilon{}}(s):=\sqrt{{\lambda}}\,\tilde h_{1,\varepsilon{}}(\sqrt{{\lambda}}s)$. Using \cite{RabinowitzSturmLiouville70}
we get that there exists a pair $({\lambda}_\varepsilon{},w_\varepsilon{})$
in $\partial\mathcal{O}_\varepsilon{}$,
with $w_\varepsilon{}$ having $k$ nodal points, which
solves Problem \eqref{eqn:radial-equation} with
$\tilde h_{{\lambda},\varepsilon{}}$ instead of $h_{\lambda}$.
Since $({\lambda},w)\in\partial\mathcal{O}_\varepsilon{}\Rightarrow \tilde h_{{\lambda},\varepsilon{}}(w)=h_{\lambda}(w)$, we get that
$({\lambda}_\varepsilon{},w_\varepsilon{})\in\mathcal{S}_k^+$.
For $\varepsilon{}$ small we have $\varepsilon{}<\underline{{\lambda}}_k\leq\overline{{\lambda}}_k<\varepsilon{}^{-1}$ so we get $w_\varepsilon{}\in\partial\set{1+\sqrt{{\lambda}_\varepsilon{}}w>\varepsilon{},w<\varepsilon{}^{-1}}$ i.e. there exists a point $\rho_\varepsilon{}\in[0,R]$ such that
\[
\mbox{either}\quad 1+\sqrt{{\lambda}_\varepsilon{}}w_\varepsilon{}(\rho_\varepsilon{})=\varepsilon{}
\qquad\quad
\mbox{or}\quad w_\varepsilon{}(\rho_\varepsilon{})=\varepsilon{}^{-1}.
\]
We can find a sequence $\varepsilon{}_n\to0$ such that the corresponding $({\lambda}_n,w_n):= ({\lambda}_{\varepsilon{}_n},w_{\varepsilon{}_n})$
verify one of the above properties for all $n\in\ensuremath{\mathbb{N}}$.
If the first one holds for all $n$, then $({\lambda}_n,w_n)$
verifies \thetag{b} of Lemma
\eqref{lma:main-lemma-sequences}; in the second case
$({\lambda}_n,w_n)$ verifies \thetag{a} of Lemma
\eqref{lma:main-lemma-sequences}. Then by Lemma
\eqref{lma:main-lemma-sequences} $\|w_n\|_\infty\to\infty$
and \eqref{eqn:limit-of-the-branch} holds.
This proves the Theorem.
\end{proof}
\begin{center}
\includegraphics[height=5cm]{bifur-radial-paper.pdf}
\end{center}
\begin{rmk}
As a consequence of Theorem \eqref{thm:global-bifurvation-radial}
we get that for any integer $h\geq1$ and any ${\lambda}$ strictly between $\mu_h$ and $\mu_{2h}/2$ there exists $u$ such that $({\lambda},u)$ solves \thetag{P}.
The same is true for all ${\lambda}$ strictly between $\nu_h$ and $\mu_{2h-1}/2$.
\end{rmk}
\begin{rmk}
The proof of Theorem \eqref{thm:global-bifurvation-radial} fails if we follow the bifurcation branch
$({\lambda}_\rho,w_\rho)$ with $w_\rho(0)<0$. In this case it seems possible that the branch tends to a point $(\tilde{\lambda},\tilde w)$
where $\sqrt{\tilde{\lambda}}\tilde w(0)=-1$
(while $\sqrt{{\lambda}_\rho}\,w_\rho(0)>-1$ for $\rho>0$). This phenomenon, if true, would be worth studying.
\end{rmk}
\begin{rmk}\label{eqn:rmk-no-dirichlet-radial-soln}
The computations of this section show that, if $\Omega$ is the ball, then there are no radial solutions for the Dirichlet problem. It is
indeed impossible to construct a (nontrivial) solution $({\lambda},w)$ for
\thetag{RP} with $w(R)=0$.
\end{rmk}
\end{document} |
\begin{document}
\title{Causal Incompleteness: A New Perspective on Quantum Non-locality}
\begin{abstract} The mathematical notion of incompleteness (eg of rational numbers, Turing-computable functions, and arithmetic proof) does not play a key role in conventional physics. Here, a reformulation of the kinematics of quantum theory is attempted, based on an inherently granular and discontinuous state space, in which the quantum wavefunction is associated with a finite set of finite bit strings, and the unitary transformations of complex Hilbert space are reformulated as finite permutation and related operators incorporating complex and hyper-complex structure. Such a reformulation, consistent with Wheeler's `It from Bit' programme, provides the basis for a novel interpretation of the Bell theorem: that the experimental violation of the Bell inequalities reveals the inevitable incompleteness of the causal structure of physical theory. The kinematic reformulation of quantum theory so developed, provides a new perspective on the age-old dichotomy of free will versus determinism. \end{abstract}
\section{Introduction} \label{sec:introduction}
It is often said that the most profound theorem of 20th Century mathematics concerns the incompleteness of arithmetic proof. The basis of G\"{o}del's theorem, via its use of the Cantor diagonal slash, is directly related to the incompleteness of the rational numbers and the Turing-computable functions. Thus incompleteness is rather fundamental in mathematics; and since mathematics completely underpins our scientific understanding of the physical world, one might ask whether incompleteness has any role to play in physical theory. The potential role of incompleteness in conventional physics is masked by its generic use of continuum equations. Here a novel interpretation of Bell's eponymous theorem of quantum physics is discussed, based on an attempt to recast the complex Hilbert space of quantum theory, using granular, discontinuous mathematics. In this interpretation, the experimental violation of the Bell inequalities reveals, not the type of non-local causality which mainstream physics regards as bizarre but unavoidable, but rather the inevitable incompleteness of the causal structure which may be inferred from such theory.
The conventional proof of Bell's theorem presumes that the settings of measuring instruments can be treated as free variables. That is, for a given entangled particle pair, it is assumed that the causal consequences of choosing alternate instrument settings on measurement outcomes are well defined. Bell (1993) himself realised that this issue was not metaphysically clear cut, since the world is given to us once only: `we cannot repeat an experiment changing just one variable; the hands of the clock will have moved and the moons of Jupiter'. However, for Bell, the existence or otherwise of free variables was something to be inferred from the mathematical structure of physical theory, rather than from metaphysical analysis. Hence, for example, if \begin{equation} \label{eq:det} \dot{\mathbf{X}}=\mathbf{F}[\mathbf{X}] \end{equation} denotes a conventional continuum evolution equation, such as occurs in standard quantum theory, electromagnetism, general relativity and so on, then (\ref{eq:det}) determines not only how a given initial state vector $\mathbf{X}(0)$ evolves at future times $t>0$, but also the causal consequences at $t>0$ of a hypothetical perturbation $\delta \mathbf{X}$, for example to one of $\mathbf{X}(0)$'s components. In this respect, most physicists (the author included) would agree with Bell (1993) that his eponymous theorem `is primarily an analysis of certain kinds of physical theory', metaphysical concerns notwithstanding.
On the other hand, the conventional non-locally causal interpretation of Bell's theorem, that the influence of some freely-chosen remote instrument setting can propagate through space at superluminal speed, remains as bizarre and incomprehensible today as when it was first propounded. Although there have been other proposals to understand the Bell theorem, such as backwards-in-time causality (Price, 1996), one might ask whether there exist classes of theory, formulated using less conventional mathematical structures than that of (\ref{eq:det}) above, for which the existence of an unrestricted set of causal consequences between alternate detector orientations and measurement outcomes cannot be assumed. The purpose of this paper is to analyse `certain kinds of physical theory' for which the freedom to perturb mathematically the values of certain key variables is determined by the underlying mathematical structure of the theory's state space. Of particular relevance here will be systems whose state space is generically granular and discontinuous (that is, state space cannot be `continued' by Cauchy-sequence methods).
In Section \ref{sec:toy} an idealised model is outlined as the basis for a discussion of the standard EPR-Bohm-Bell experiment, in which the role of causal incompleteness is made explicit. Here we make use of an elementary property of the cosine function (albeit one that the author is unaware of having been used before in physics): if $0<\cos \theta< 1$ is rational, $\theta/\pi$ does not have a finite binary expansion: see Appendix 1 for a simple proof of this.
In Section \ref{sec:q} a potential reformulation, granular and discontinuous, of the kinematics of conventional quantum theory is discussed. In this reformulation, the wavefunction becomes a set of finite bit strings and unitary transforms become permutation and related operators with inherent complex and hyper-complex structure. This kinematic reformulation may be of direct interest in quantum information theory as a novel attempt to define physical reality consistent with J.A. Wheeler's `It from Bit' programme (Wheeler, 1994). The analysis of causal incompleteness and the Bell theorem given in Section \ref{sec:toy} is then discussed in relation to this reformulation.
Some discussion of the notion of causal incompleteness in the context of the age-old cognitive dichotomy of free will versus determinism is given in Section \ref{sec:meta}. It is suggested that the type of mathematically-developed causal incompleteness discussed in Sections \ref{sec:toy} and \ref{sec:q} may present a new perspective on this dichotomy (Kane, 2002).
\section{Quantum Entanglement and Causal Incompleteness: A Simplified Model} \label{sec:toy} The main goal of this paper is to outline a possible reformulation of the kinematics of quantum theory, in which the incompleteness of causal structure can be made explicit. Before doing so, a simplified deterministic model is developed, consistent with two-particle quantum entanglement statistics, which illustrates the potential role of causal incompleteness in the interpretation of the Bell theorem. Linkage between this simplified model and the proposed reformulation of the kinematics of quantum theory is developed in the next section.
\subsection{Preliminaries}
Let $\mathcal{N}_0$ denote a dyadic rational, ie a member of the set $\mathbbm{Q}_2$ of numbers with finite binary expansion. For example, let the binary expansion of $\mathcal{N}_0$ agree with the first $2^N$ bits of the binary Champernowne number $=.11011100101\ldots$, formed by concatenating the natural numbers $1,2,3,4\ldots$ in binary representation. With $c_n$ denoting the $n$th bit of $\mathcal{N}_0$, let $a_n=2c_n-1 \in \{1,-1\}$. Then, with $\{\lambda_n=\frac{n-1}{2^N}\pi: 1\le n \le 2^N\}$ denoting a finite subset of the unit semi-circle $0 \le \lambda \le \pi$, define \begin{equation} \label{eq:s} S_0(\lambda_n)=a_n \ \ \ \ \ \ S_0(\lambda_n+\pi)=-a_n. \end{equation} For sufficiently large $N$, $S_0$ is defined on arbitrarily-dense subsets of the unit circle.
More generally, let $\mathcal{N}_{\theta}$ be the dyadic rational obtained by flipping ($0 \rightarrow 1, 1 \rightarrow 0$) every $1/\sin^2 \frac{\theta}{2}$th bit of $\mathcal{N}_0$, where $\cos \theta \in \mathbbm{Q}_2$. Hence, for example, with $\mathcal{N}_0$ based on the Champernowne number, then \begin{equation} \mathcal{N}_{\pi/2}=.10001001111\ldots \ \ \ \mathcal{N}_{\pi}=.00100011010\ldots \end{equation} Let $c_n(\theta)$ denote the corresponding bits of $\mathcal{N}_{\theta}$, $a_n(\theta)=2c_n(\theta)-1$, and define \begin{equation} \label{eq:s2} S_{\theta}(\lambda_n)=a_n(\theta) \ \ \ \ \ \ S_{\theta}(\lambda_n+\pi)=-a_n(\theta) \end{equation}
Since $\mathcal{N}_0$ is based on a normal number (Hardy and Wright, 1979), for large enough $N$ the values of either $S_0(\lambda)$ or $S_{\theta}(\lambda)$, sampled over $S_0$-defined points in any subset of the unit circle, comprise (to arbitrarily good approximation) equal numbers of $+1$'s and $-1$'s. Moreover, by construction, the corresponding sample coefficient of correlation
\begin{equation} \label{eq:correal} C(\theta)=\langle \ S_0(\lambda)S_{\theta}(\lambda+\pi)\rangle =-\cos\theta \end{equation}
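This value can be checked directly from the flipping construction, assuming for simplicity that $1/\sin^2\frac{\theta}{2}$ is an integer, so that exactly a fraction $\sin^2\frac{\theta}{2}$ of the bits of $\mathcal{N}_0$ is flipped in forming $\mathcal{N}_{\theta}$. By (\ref{eq:s}) and (\ref{eq:s2}), $S_0(\lambda_n)S_{\theta}(\lambda_n+\pi)=-a_na_n(\theta)$, which equals $-1$ at unflipped bits and $+1$ at flipped bits, so that for large $N$
\[
C(\theta)\approx(+1)\,\sin^2\frac{\theta}{2}+(-1)\left(1-\sin^2\frac{\theta}{2}\right)=2\sin^2\frac{\theta}{2}-1=-\cos\theta .
\]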
\subsection{A Specific Reality} \label{sec:reality}
We now add some interpretational baggage. Imagine two experimenters, each with Stern-Gerlach detectors, measuring the spin of entangled particle pairs in a standard EPR-Bohm-Bell experiment. The orientation of experimenter 1's detector is defined to be the $z$-axis; the orientation of experimenter 2's detector is at angle $\theta$ to the $z$ axis (corresponding to a rotation about the particle beam axis). Let $S_0(\lambda)$ and $S_{\theta}(\lambda+\pi)$ determine the measurement outcomes for a given entangled particle pair labelled by $\lambda$. (Nb if $\mathcal{M}_0=\mathcal{N}_{\theta}$ had been used in place of $\mathcal{N}_0$ as the generating base-2 normal number, then experimenter 1's spin measurement outcomes would be determined by $\mathcal{M}_{\theta}$.)
Consider a $\theta(t)$ in which the experimenters' detectors have relative orientation $\theta=\theta_A$ when $t_1<t<t_2$ and $\theta=\theta_B$ when $t_3<t<t_4$. We have \begin{eqnarray} \label{eq:corr} C(\theta_A)&=&\langle \ S_0(\lambda)S_{\theta_A}(\lambda+\pi)\ \rangle=-\cos\theta_A \nonumber \\ C(\theta_B)&=&\langle \ S_0(\lambda)S_{\theta_B}(\lambda+\pi)\ \rangle=-\cos \theta_B \end{eqnarray} consistent with quantum experimentation. The values $S_0(\lambda)$ and $S_{\theta}(\lambda)$ associated with such a $\theta(t)$ are referred to as a `specific reality'. The space $\mathcal{U}(\mathcal{N}, \theta(t))$ of such `specific realities' can be generated, as far as this idealised model is concerned, by varying over contiguous length-$2^N$ segments $\mathcal{N}$ of the Champernowne number, and time series $\theta(t)$ where, for all $t$, $\cos\theta(t) \in \mathbbm{Q}_2$.
\subsection{Causal Extension of the Specific Reality}
In the introduction, it was noted that (\ref{eq:det}) determined not only the evolution $\mathbf{X}(t)$ from some specific initial state, but also the causal effect on $\mathbf{X}(t)$ of a perturbation $\delta \mathbf{X}$ to that initial state. If the system is ergodic, then this causal effect is, in principle, determined from knowledge of the evolution $\mathbf{X}(t)$.
In keeping with our intuition that the experimenters are free to choose individually the orientations of their detectors, it can similarly be asked whether the functions which describe the `specific reality' above, $S_0(\lambda)$ and $S_{\theta}(\lambda)$, also provide the information required to describe the causal effect on measurement outcome, of hypothetical perturbations $\delta\theta_1$ and $\delta\theta_2$ to experimenter 1 and 2's actual detector orientations.
To this end, consider the functions \begin{eqnarray} \label{eq:cf} Sp_1(\delta\theta_1, \lambda)&=&\ \ S_0(\lambda-\delta\theta_1)\nonumber\\ Sp_2(\delta\theta_2, \lambda)&=&-S_{\theta}(\lambda-\delta\theta_2) \end{eqnarray} written in conventional local hidden-variable form. When $\delta\theta_1=\delta \theta_2=0$, $Sp_1$ and $Sp_2$ describe the specific reality above, ie \begin{eqnarray} Sp_1(0, \lambda)&=&\ \ S_0(\lambda)\nonumber\\ Sp_2(0, \lambda)&=&-S_{\theta}(\lambda)=S_{\theta}(\lambda+\pi) \end{eqnarray} Hence, assume that $Sp_1(\delta\theta_1, \lambda)$ determines the spin value of one particle of an entangled particle pair labelled by $\lambda$ under a hypothetical perturbation $\delta\theta_1$ to the orientation of experimenter 1's detector. Similarly, let $Sp_2(\delta\theta_2, \lambda)$ determine the spin value of the other entangled particle under a hypothetical perturbation $\delta\theta_2$ to the orientation of experimenter 2's detector. That is, we can think of (\ref{eq:cf}) as defining a pair of lists which give the causal consequences on measurement outcome of hypothetical perturbations to detector orientations. Since, for $N \rightarrow \infty$, $Sp_1$ and $Sp_2$ are defined on uniformly dense subsets of the circle, the functions $Sp_1$ and $Sp_2$ would, for sufficiently large $N$, appear to accommodate any hypothetical alternate choices of orientation experimenter 1 and 2 would care to make.
From the construction of $S_0$ and $S_{\theta}$ above, a necessary condition that $Sp_1$ and $Sp_2$ be defined is that both $(\lambda-\delta\theta_1)/\pi$ and $(\lambda-\delta\theta_2)/\pi$ belong to $\mathbbm{Q}_2$. On the other hand, in order that $Sp_1$ and $Sp_2$ describe one of the specific realities of section \ref{sec:reality}, the cosine of the hypothetical relative orientation $\Delta \theta = \theta+(\delta\theta_2-\delta\theta_1)$ must be dyadic rational. There are certainly occasions where (\ref{eq:cf}) returns the correct quantum correlations when $\delta \theta_1 \ne 0$. For example, for all $\delta\theta_1=\delta\theta_2=\delta\theta'$, $\cos \Delta \theta = \cos \theta$ which by construction belongs to $\mathbbm{Q}_2$. In this situation, the hypothetical correlation $\langle \ Sp_1(\delta \theta', \lambda)Sp_2(\delta \theta', \lambda)\ \rangle$ is invariant under hypothetical identical perturbations $\delta \theta'$ to the orientations of both detectors. On the other hand, in general, it cannot be assumed that $\cos \Delta \theta \in \mathbbm{Q}_2$, the implications of which are discussed in the next section.
\subsection{Bell's Theorem and Causal Incompleteness}
As written, (\ref{eq:cf}) is local in terms of the hypothetical detector perturbations; $Sp_1$ does not depend on $\delta\theta_2$, and $Sp_2$ does not depend on $\delta\theta_1$. Does this imply that correlation statistics derived from (\ref{eq:cf}) must satisfy a Bell inequality? In order to derive a Bell inequality from the statistics of the two experiments with $\theta=\theta_A$ and $\theta=\theta_B$, we need to assume what, following EPR, are usually called `Reality Conditions'. For the first experiment where $\theta=\theta_A$, the relevant Reality Condition states that if a hypothetical perturbation $\delta\theta_1=\theta_A$ to the orientation of experimenter 1's detector were to have aligned experimenter 1's detector with experimenter 2's detector, then experimenter 1 would have measured exactly the opposite of experimenter 2, ie \begin{equation} \label{eq:reality} Sp_1(\theta_A, \lambda)=-Sp_2(0, \lambda) \end{equation} For the second experiment ($\theta=\theta_B$), the Reality Condition similarly requires \begin{equation} \label{eq:reality2} Sp_1(\theta_B, \lambda)=-Sp_2(0, \lambda) \end{equation} The anti-correlations expressed in (\ref{eq:reality}) and (\ref{eq:reality2}) should be contrasted with the anti-correlations, which by the definitions of $S_0(\lambda)$ and $S_{\theta}(\lambda+\pi)$, are guaranteed in the subset of occasions when $\theta=0$ (ie both detectors aligned with the $z$ axis) within the specific reality defined by $\theta=\theta(t)$. The difference between these two situations is exactly equivalent to the difference between the counterfactual and regularity definitions of causality (Menzies, 2001) as first enunciated by the philosopher David Hume. If (\ref{eq:reality}) and (\ref{eq:reality2}) are assumed, then the Bell inequalities follow by standard text-book analysis (eg Rae, 1992).
However, in the present case, it must be asked whether (\ref{eq:reality}) and (\ref{eq:reality2}) are consistent with the global constraint $\cos \Delta \theta \in \mathbbm{Q}_2$. Consider (\ref{eq:reality}) in particular. Putting $\delta\theta_1=\theta_A$, $\delta\theta_2=0$ in (\ref{eq:cf}), we have \begin{eqnarray} \label{eq:crunch} Sp_1(\theta_A, \lambda)&=&S_0(\lambda-\theta_A)\nonumber\\ Sp_2(0, \lambda)&=&-S_{\theta_A}(\lambda) \end{eqnarray} But now the result in Appendix A becomes relevant. Since $\cos\theta_A$ is required to be dyadic rational, $\theta_A$ cannot be a dyadic rational fraction of $\pi$. Now, from (\ref{eq:crunch}), $Sp_2$ is only defined if $\lambda$ is a dyadic rational fraction of $\pi$. Hence, if $\lambda$ is a dyadic rational fraction of $\pi$, and $\theta_A$ not, then $(\lambda-\theta_A)/\pi\ \notin \mathbbm{Q}_2$. Hence, for given $\lambda$, ie entangled particle pair, $Sp_1(\theta_A, \lambda)$ and $Sp_2(0, \lambda)$ are not simultaneously defined (reminiscent of the Principle of Complementarity), no matter how large $N$ is. Alternatively, $Sp_1(\theta_A, \lambda)$ and $Sp_2(0,\lambda)$ are not contained in the lists of causal relations defined by (\ref{eq:cf}), even though, as $N \rightarrow \infty$, this list becomes infinitely long. A similar conclusion holds for (\ref{eq:reality2}). Hence, we cannot derive a Bell inequality from the correlation statistics associated with (\ref{eq:cf}).
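A concrete instance may help fix ideas (the numbers are chosen purely for illustration). Take $\cos\theta_A=\frac{1}{2}\in\mathbbm{Q}_2$, so that $\theta_A=\pi/3$ and $\theta_A/\pi=1/3=.0101\ldots$ in binary, which is not a finite expansion. Then, for any $\lambda$ with $\lambda/\pi\in\mathbbm{Q}_2$, we have $(\lambda-\theta_A)/\pi\notin\mathbbm{Q}_2$, so that $S_0(\lambda-\theta_A)$, and hence $Sp_1(\theta_A,\lambda)$, is simply not defined, exactly as described above.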
Is it not instead possible to derive a Bell inequality using in (\ref{eq:reality}) a hypothetical perturbation $\delta\theta$ which is a good dyadic rational approximation $\theta'_A$ to $\theta_A$? Since the hidden-variable model (\ref{eq:cf}) is generically discontinuous, it is not possible. That is to say, if we consider a Cauchy sequence $\{\theta'_A, \theta''_A, \theta'''_A,\ldots\}$ of dyadic rational perturbations which converge on $\theta_A$, the corresponding sequence $Sp_1(\theta'_A, \lambda), Sp_1(\theta''_A, \lambda), Sp_1(\theta'''_A, \lambda), \ldots$ will not converge to some well-defined value $Sp_1(\theta_A, \lambda)$.
Hence, although a hidden-variable model has been defined, whose support is as dense as we like on the circle, generating in the limit $N \rightarrow \infty$ an infinite set of causal relationships between measurement outcome and hypothetical perturbation to detector orientation, the causal structure is not sufficiently comprehensive to be able to derive a Bell inequality. Could quantum theory be recast in such a form as to be able to exploit this result?
\section{It from Bit - Towards A Theory of Quantum Beables} \label{sec:q}
In this section an attempt to reformulate the kinematics of quantum theory as a generically granular and discontinuous theory is outlined, in order to exploit the notion of causal incompleteness, as discussed above. In this reformulation, the quantum wavefunction $|\psi\rangle$ is a set of encoded bit strings, and the unitary transformations are self-similar permutation and related operators with complex and hyper-complex structure. As such, the wavefunction is literally identified with `information', consistent with J.A.Wheeler's (1994) `It from Bit' aphorism, and Bell's notion of beables. There is no additional collapse hypothesis. The reformulation renders the state space of the wavefunction as inherently granular and discontinuous, yielding a mathematical structure from which the notion of incompleteness, discussed above, is manifest.
In this reformulation, the wavefunction of an elementary 2-level system will be identified with the single bit string \begin{equation} \label{eq:s3} \mathcal{S}=\{a_1, a_2, a_3 \ldots, a_{2^N}\} \end{equation} where $a_i \in \{1,-1\}$ and $N \gg 1$ denotes the number of such 2-state systems in the universe. The wavefunction of the universe as a whole is given by $N$ such bit strings; equivalently, a single rational $\mathcal{R}_U$ defined from a length-$2^N$ string comprising base-$2^N$ digits. The entanglement structure of the universe is defined by non-zero coefficients of correlation between individual bit strings; equivalently, in unequal frequencies of occurrence of the digits in the base-$2^N$ expansion of $\mathcal{R}_U$. Following Bohm (1980), this non-normal number structure could be referred to as the `implicate order', whereas the apparently random sequence of bits in any one string could be referred to as the `explicate order'. In the following, some emphasis is placed on ensuring that the proposed reformulation can correctly account for the `vastness' of Hilbert space.
\subsection{Permutation Operators with Complex and Hyper-Complex Structure} One of the key features of quantum theory is that its state space is complex. Here we introduce complex structure through permutation operators $\mathbf{E}$, acting on bit strings $\mathcal{S}$, which satisfy
\begin{equation} \label{eq:sqrt} \mathbf{E}^2(\mathcal{S})=-\mathcal{S}=\{-a_1, -a_2, -a_3 \ldots, -a_{2^N}\}. \end{equation} We define such complex structures starting with the simplest $N=1$ `universe', and build up complexity for higher $N$. \subsubsection{N=1} With $\mathcal{S}=\{a_1,a_2\}$, define \begin{equation} \label{eq:i} \mathbf{i} (\mathcal{S}) =\{-a_2, a_1\} \end{equation} so that $\mathbf{E}=\mathbf{i}$ satisfies (\ref{eq:sqrt}). It is convenient to rewrite (\ref{eq:i}) as $\mathbf{i}(\mathcal{S}) =\{a_1, a_2\}i$ interpreting $\{a_1, a_2\}$ as a row vector, and \begin{equation} \label{eq:im} i=\left( \begin{array}{cc} 0&1\\ -1&0 \end{array} \right) \end{equation} The coefficient of correlation between $\mathcal{S}$ and $\mathbf{i}(\mathcal{S})$ is equal to zero.
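As a quick check of (\ref{eq:sqrt}) in this simplest case, note that, with the row-vector convention, $\mathbf{i}^2(\mathcal{S})=\{a_1,a_2\}\,i^2$ and
\[
i^2=\left( \begin{array}{cc} 0&1\\ -1&0 \end{array} \right)^2=\left( \begin{array}{cc} -1&0\\ 0&-1 \end{array} \right),
\]
so that $\mathbf{i}^2(\mathcal{S})=\{-a_1,-a_2\}=-\mathcal{S}$; the vanishing of the correlation reflects the antisymmetry of $i$, since $\mathcal{S}\cdot\mathbf{i}(\mathcal{S})=\{a_1,a_2\}\,i^{T}\{a_1,a_2\}^{T}=a_2a_1-a_1a_2=0$ whatever the bits $a_1,a_2$ are.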
\subsubsection{N=2} With $\mathcal{S}=\{a_1,a_2, a_3, a_4\}$, define \begin{eqnarray} \mathbf{e}_1(\mathcal{S})&=&\{-a_3, -a_4, a_1, a_2\}\nonumber\\ \mathbf{e}_2(\mathcal{S})&=&\{-a_4, a_3, -a_2, a_1\}\nonumber\\ \mathbf{e}_3(\mathcal{S})&=&\{-a_2, a_1, a_4, -a_3\} \end{eqnarray} In matrix notation, this can be written, for $j=1,2,3$, as \begin{equation} \mathbf{e}_j (\mathcal{S}) =\{a_1, a_2, a_3, a_4\}e_j
\end{equation} where $e_j$ are $4\times 4$ matrices in block $2 \times 2$ form \begin{equation} \label{eq:m2} e_1=\left( \begin{array}{cc} 0&1\\ -1&0 \end{array}\right),\ e_2=\left( \begin{array}{cc} 0&i\\ i&0 \end{array}\right),\ e_3=\left( \begin{array}{cc} i&0\\ 0&-i \end{array}\right) \end{equation} These matrices satisfy the laws of quaternionic multiplication, ie \begin{equation} e_j e_j = e_1 e_2 e_3 = -\mathrm{Id} \end{equation} and, hence, in particular, each $\mathbf{E}=\mathbf{e}_j$ satisfies (\ref{eq:sqrt}). Note that $e_1$ has the same block form as $i$ in (\ref{eq:im}). With $\mathbf{e}_0$ equal to the identity, the coefficient of correlation between any pair of sequences $(\mathbf{e}_j(\mathcal{S}), \mathbf{e}_k(\mathcal{S}))$, with $0 \le j,k \le 3$, is equal to zero.
\subsubsection{N=3} Based on the quaternions above, we can, by self-similarity, construct 7 independent square-root-of-minus-one permutation operators acting on 8-element sequences $\mathcal{S}=\{a_1, a_2, \ldots, a_8\}$, ie for $j=1,2, \ldots, 7$, \begin{equation} \mathbf{E}_j(\mathcal{S})=\{a_1, a_2, \ldots, a_8\} E_j \end{equation} where $E_j$ are $8\times8$ matrices in block $2 \times 2$ form \begin{eqnarray} \label{eq:m3} E_1&=&\left( \begin{array}{cc} 0&\ \ 1\\ -1&\ 0 \end{array}\right),\ \nonumber \\ E_2=\left( \begin{array}{cc} 0&e_1\\ e_1&0 \end{array}\right), \ E_3&=&\left( \begin{array}{cc} 0&\ e_2\\ \ e_2&0 \end{array}\right),\ E_4=\left( \begin{array}{cc} 0&e_3\\ e_3&0 \end{array}\right) \nonumber\\ E_5=\left( \begin{array}{cc} e_1&0\\ 0&-e_1 \end{array}\right), \ E_6&=&\left( \begin{array}{cc} e_2&0\\ 0&-e_2 \end{array}\right),\ E_7=\left( \begin{array}{cc} e_3&0\\ 0&-e_3 \end{array}\right) \end{eqnarray} Note that $E_1$ has the same block form as $i$ in (\ref{eq:im}), and each $E_j$, $j>1$, belongs to one of the pure imaginary quaternion triples $\{E_1, E_2, E_5\}$, $\{E_1,E_3,E_6\}$ and $\{E_1, E_4, E_7\}$. With $\mathbf{E}_0$ equal to the identity, the coefficient of correlation between any pair of sequences $(\mathbf{E}_j(\mathcal{S}), \mathbf{E}_k(\mathcal{S}))$, $0\le j,k \le 7$, is equal to zero.
\subsubsection{Arbitrary $N$} The construction above can be continued, by self-similarity, to $N=4, 5, \ldots$. For arbitrary $N$ we have $2^N-1$ square-root-of-minus-one permutation matrices $E_1, E_2, \ldots, {E}_{2^N-1}$, each of which can be written as a $2 \times 2$ block matrix, with blocks representing $2^{N-1}\times 2^{N-1}$ matrices. The square roots are orthogonal to one another, and to the identity $E_0$, in the sense that the coefficient of correlation between any pair $(\mathbf{E}_j(\mathcal{S}), \mathbf{E}_k(\mathcal{S}))$, $0\le j,k \le 2^N-1$, is equal to zero.
By self-similarity, each of the $M$th sets of square roots of minus one with $M<N$, eg (\ref{eq:m2}) and (\ref{eq:m3}), is embedded in this larger $N$th set.
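As an illustrative aside (not part of the formal development), the recursive construction can be checked numerically. In the short Python script below the function name \texttt{roots\_of\_minus\_one}, the choice $N=3$ and the use of the all-ones seed string are our own; for $N=2$ and $N=3$ the recursion reproduces (\ref{eq:m2}) and (\ref{eq:m3}), and the script verifies that each $E_j$ satisfies $E_j^2=-\mathrm{Id}$ and that the sequences $\mathbf{E}_j(\{1,1,\ldots,1\})$, $0 \le j \le 2^N-1$, are mutually uncorrelated.
\begin{verbatim}
# Sketch: build E_1,...,E_{2^N-1} by the self-similar block construction and check
# E_j^2 = -Id and zero correlation between the sequences E_j({1,...,1}).
import numpy as np

def roots_of_minus_one(N):
    e = [np.array([[0, 1], [-1, 0]])]                      # N = 1: the 2x2 matrix i
    for _ in range(N - 1):                                 # self-similar step N-1 -> N
        m = e[0].shape[0]
        I, Z = np.eye(m, dtype=int), np.zeros((m, m), dtype=int)
        e = ([np.block([[Z, I], [-I, Z]])]                 # E_1
             + [np.block([[Z, a], [a, Z]]) for a in e]     # E_2, ..., E_{2^{N-1}}
             + [np.block([[a, Z], [Z, -a]]) for a in e])   # E_{2^{N-1}+1}, ..., E_{2^N-1}
    return e

N = 3
E = roots_of_minus_one(N)
Id = np.eye(2 ** N, dtype=int)
ones = np.ones(2 ** N, dtype=int)
assert len(E) == 2 ** N - 1
assert all(np.array_equal(Ej @ Ej, -Id) for Ej in E)       # each E_j squares to -Id
seqs = [ones] + [ones @ Ej for Ej in E]                    # sequences E_j({1,...,1}), j >= 0
corr = np.array([[a @ b for b in seqs] for a in seqs]) / 2 ** N
assert np.array_equal(corr, np.eye(2 ** N))                # pairwise correlations vanish
print("checked E_j^2 = -Id and basis-sequence orthogonality for N =", N)
\end{verbatim}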
Focus now on the matrix $E_1 \equiv E$ which has the special block form \begin{equation} \label{eq:E} E=\left( \begin{array}{cc} 0&1\\ -1&0 \end{array} \right) \end{equation} similar to (\ref{eq:im}), but where `$0$' and `$1$' denote the $2^{N-1} \times 2^{N-1}$ zero and identity matrices, respectively. $E$ has a square root which can be written as the $4\times4$ block matrix \begin{equation} E^{1/2}= \left( \begin{array}{cccc} 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\\ -1&0&0&0 \end{array} \right) \end{equation} where `$0$' and `$1$' now denote the $2^{N-2} \times 2^{N-2}$ zero and identity matrices, respectively. In turn, $E^{1/2}$ has a square root which can be written as the $8\times8$ block matrix \begin{equation} E^{1/4}= \left( \begin{array}{cccccccc} 0&1&0&0&0&0&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&0&1&0&0&0&0\\ 0&0&0&0&1&0&0&0\\ 0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&1&0\\ 0&0&0&0&0&0&0&1\\ -1&0&0&0&0&0&0&0 \end{array} \right) \end{equation} where `$0$' and `$1$' now denote the $2^{N-3} \times 2^{N-3}$ zero and identity matrices, respectively. This procedure can be continued until we reach the $2^N$th root given by the $2^N \times 2^N$ matrix \begin{equation} \label{eq:root} E^{1/2^N}=\left( \begin{array}{cccccc} 0&1&\ &\ &\ldots&0\\ 0&0&1 &\ &\ &0\\ 0&0&0&1&\ &0\\ \ &\ &\ &\ &\ddots&\ \\ 0&0&0&0&\ldots&1\\ -1&0&0&0&\ldots&0 \end{array} \right) \end{equation} where `$0$' and `$1$' are scalars. Clearly $E^{1/2^N}$ is a $2^{N+2}$th root of unity and therefore generates a cyclic group of order $2^{N+2}$, a finite sub-group of $U(1)$. Applied to a bit string $\mathcal{S}=\{a_1, a_2, \ldots, a_{2^N}\}$, $E^{1/2^N}(\mathcal{S})=\{-a_{2^N}, a_1, a_2, \ldots, a_{2^N-1}\}$; that is, $E^{1/2^N}$ brings to the front, the (negation of the) trailing bit of $\mathcal{S}$, cf the discussion in section \ref{sec:meta}.
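As a minimal illustration of this action (the particular string below, of length $2^3=8$, is an arbitrary choice):
\begin{verbatim}
# E^{1/2^N} acting on a row vector: negate the trailing bit and move it to the front.
S = [1, 1, -1, 1, -1, -1, 1, -1]
shift = lambda s: [-s[-1]] + s[:-1]
print(shift(S))   # -> [1, 1, 1, -1, 1, -1, -1, 1]
\end{verbatim}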
\subsection{Towards A Granular Reformulation of Complex Hilbert Space}
Here the results above are applied to a possible kinematic reformulation of quantum theory. Some preliminary ideas on such a reformulation were first presented in Palmer (2004). A more complete exposition will appear elsewhere.
\subsubsection{1 Qubit} \label{sec:1qubit}
The general 1-qubit state in quantum theory is given, for example, by \begin{equation} \label{eq:qubit}
|\psi\rangle=p_0|\uparrow\rangle+p_1|\downarrow\rangle \end{equation}
where $p_0, p_1 \in \mathbbm{C}$ satisfy $|p_0|^2+|p_1|^2=1$. The Hilbert space of such a qubit is therefore three dimensional, including the phase degree of freedom \begin{equation} \label{eq:gphase}
|\psi\rangle \mapsto e^{i\phi}|\psi\rangle
\end{equation} which in quantum theory is viewed as `irrelevant' since the value $\phi$ does not affect the statistics of measurement outcomes. In the reformulation proposed here, $|\psi\rangle \mapsto \mathcal{S}$, see (\ref{eq:s3}). The elements of $\mathcal{S}$ can be associated with what, operationally, are measurement outcomes, but which, following Bell, might be better called `beables'; $\mathcal{S}$ can therefore be thought of as a time series of such beables, whose first element is the beable corresponding to `now'.
To account for the three degrees of freedom of complex Hilbert space, recall from (\ref{eq:m2}) that, in an $N$ qubit universe, any of the $2^N-2$ square roots of minus one, ${E}_j \ j>1$, is automatically part of the (pure imaginary) quaternionic triple $\{E, E_j, {E}_{j+2^{N-1}-1}\}$ if $j \le 2^{N-1}$, or the triple $\{E, E_{j-2^{N-1}+1}, E_j\}$ if $j>2^{N-1}$, where $E$ is given by (\ref{eq:E}). We build the reformulation of the qubit state (\ref{eq:qubit}) around one such quaternion triple, written in the generic form $\{E, E_a, E_b\}$. Then, \begin{eqnarray}
|\uparrow\rangle &\mapsto& \{1,1,\ldots,1\}\nonumber\\
|\uparrow\rangle+|\downarrow\rangle &\mapsto& \mathbf{E}_a ( \{1,1,\ldots,1\})\nonumber\\
|\downarrow\rangle &\mapsto& \{0,0,\ldots,0\}. \end{eqnarray}
Let us start with the degree of freedom associated with the phase transformation (\ref{eq:gphase}), here reformulated as $\mathcal{S} \mapsto \mathbf{E}^{2\phi/\pi}(\mathcal{S})$. Consistent with the invariance of the qubit wavefunction under a global phase transformation in standard quantum theory, here a qubit is regarded as an equivalence class of bit strings, where two bit strings $\mathcal{S},\ \mathcal{S}'$ in the class are related by $\mathcal{S}'=\mathbf{E}^{\alpha}(\mathcal{S})$, $\alpha \in \mathbbm{Q}_2$.
To reformulate the `nontrivial' unitary transformations associated with the remaining two degrees of freedom, those which transform one qubit state into a physically inequivalent qubit state, we define a notion of addition of sequences, eg as in \begin{equation} \label{eq:superp} \mathcal{S}=(\cos\theta\ \mathbf{E}_a + \sin\theta\ \mathbf{E}_b) (\{1,1,\ldots,1\}) \end{equation} as follows. If $\cos\theta \in \mathbbm{Q}_2$, then the $n$th element of $\mathcal{S}$ is equal to the $n$th element of $\mathbf{E}_a(\{1,1,\ldots,1\})$ if the non-zero element in the $n$th row of $E^{\cos\theta}$ is a `1', and otherwise is equal to the $n$th element of $\mathbf{E}_b(\{1,1,\ldots,1\})$. The following properties of the defined `addition' operation are easy to show:
\begin{itemize} \item When $\theta=0$, $\mathcal{S}=\mathbf{E}_a(\{1,1,\ldots,1\})$; when $\theta=\pi/2$, $\mathcal{S}=\mathbf{E}_b(\{1,1,\ldots,1\})$. \item With $\cos\theta \in \mathbbm{Q}_2$, the coefficient of correlation between $\mathcal{S}$ and $\mathbf{E}_a(\{1,1,\ldots,1\})$ is equal to $\cos\theta$. With $\theta$ varying uniformly in time, the phenomenon of wave interference is manifest. \item If $0<\theta < \pi/2$ and $\cos\theta \in \mathbbm{Q}_2$, then $\sin\theta \notin \mathbbm{Q}_2$, and vice versa; see Appendix A. \item If $\sin\theta \in \mathbbm{Q}_2$, then $\mathcal{S}$ is defined as in (\ref{eq:superp}) but with the roles of $\mathbf{E}_a$ and $\mathbf{E}_b$ swapped. \item The definition is distributive in the sense that \begin{equation} \mathbf{E}^{\alpha}(\mathcal{S})=(\cos\theta\ \mathbf{E}^{\alpha}\mathbf{E}_a + \sin\theta\ \mathbf{E}^{\alpha}\mathbf{E}_b) \{1,1,\ldots,1\} \end{equation} \end{itemize}
On this basis, we can write, for example \begin{eqnarray}
|\uparrow\rangle+e^{i\theta}|\downarrow\rangle &\mapsto& \mathbf{E}_a (\cos\theta\ \mathbf{E}_0+\sin\theta\ \mathbf{E}\mathbf{E}_0)(\{1,1,\ldots,1\})\nonumber\\ &=& (\cos\theta\ \mathbf{E}_a + \sin\theta\ \mathbf{E}_a\mathbf{E}) (\{1,1,\ldots,1\})\nonumber\\ &=& (\cos\theta\ \mathbf{E}_a + \sin\theta\ \mathbf{E}_b) (\{1,1,\ldots,1\})=\mathcal{S} \end{eqnarray} from (\ref{eq:superp}).
A key point in this proposed reformulation of Hilbert space is that the notion of `adding wavefunctions' does not imply `superposition' of states, with all the (Schr\"{o}dinger's cat) paradoxes which that would entail. Rather, the $n$th element (beable) of the bit string `$\mathcal{A}+i\mathcal{B}$' is formed from the $n$th elements of the bit strings $\mathcal{A}$ and $\mathcal{B}$, taking account of the hyper-complex structure of the associated permutation operators $\mathbf{E}_j$. Because of this complex structure, the $n$th element of $\mathcal{A}+i\mathcal{B}$ need equal neither the $n$th element of $\mathcal{A}$ nor that of $\mathcal{B}$. Note that no additional collapse hypothesis is required to obtain definite beable elements.
\subsubsection{2 Qubits}
In quantum theory, the general 2-qubit state is given by \begin{equation} \label{eq:2qubit}
|\psi\rangle=p_0|\uparrow\uparrow\rangle+p_1|\uparrow\downarrow\rangle +p_2|\downarrow\uparrow\rangle+p_3|\downarrow\downarrow\rangle \end{equation}
where $p_i \in \mathbbm{C}$ satisfy $|p_0|^2+|p_1|^2 +|p_2|^2+|p_3|^2=1$, giving a Hilbert space with seven degrees of freedom, including the global phase degree of freedom, modulo which gives the standard complex-three dimensional projective Hilbert space $\mathbbm{CP}_3$. In the reformulation proposed here, the wavefunction of a 2-qubit state (in a universe of $N$ qubits) is given by two $2^N$-long bit strings $\mathcal{S}_1$ and $\mathcal{S}_2$. Here the seven degrees of freedom are represented by recalling from (\ref{eq:m3}) that any ${E}_j \ j>1$ belongs to a set of seven square roots of minus one whose elements can be listed as $\{E, E_A, E_B, E_C, E_D, E_E, E_F\}$. As before we use roots of $E$ to represent global $`U(1)'$ invariance. Consistent with this, the coefficient of correlation between $\mathcal{S}_1$ and $\mathcal{S}_2$ is invariant under the global transformation \begin{equation} \label{eq:zphase} \mathcal{S}_1 \mapsto \mathbf{E}^{\alpha}(\mathcal{S}_1), \ \mathcal{S}_2 \mapsto \mathbf{E}^{\alpha}(\mathcal{S}_2) \end{equation} where $\alpha \in \mathbbm{Q}_2$. The remaining six degrees of freedom can be described by considering bit strings which combine (in the sense defined by (\ref{eq:superp})), the seven `basis' sequences \begin{eqnarray} \{1,1,&\ldots&,1\},\nonumber\\ \mathbf{E}_A(\{1,1,\ldots,1\} ), \mathbf{E}_B(\{1,1,&\ldots&,1\} ), \mathbf{E}_C(\{1,1,\ldots,1\} ),\nonumber\\ \mathbf{E}_D(\{1,1,\ldots,1\} ), \mathbf{E}_E(\{1,1,&\ldots&,1\} ), \mathbf{E}_F(\{1,1,\ldots,1\} ) \end{eqnarray}
Representing the wavefunction $|\psi\rangle$ for two qubits as the pair $\{\mathcal{S}_1, \mathcal{S}_2\}$, the qubits will be said to be entangled if $\mathcal{S}_1$ is correlated with $\mathcal{S}_2$. Consider, for example \begin{eqnarray} \label{eq:entangled} \mathcal{S}_1&=&\mathbf{E}_A(\{1,1,\ldots,1\})\nonumber \\ \mathcal{S}_2&=&(\cos\theta\ \mathbf{E}_A + \sin\theta\ \mathbf{E}_D) \{1,1,\ldots,1\} \end{eqnarray} By the discussion in section \ref{sec:1qubit}, if $\cos\theta \in \mathbbm{Q}_2$ then the correlation between $\mathcal{S}_1$ and $\mathcal{S}_2$ is equal to $\cos\theta$.
\subsubsection{$N$ Qubits}
Continuing to larger $N$, the wavefunction of an entire universe of $N$ qubits is represented by a set $\{\mathcal{S}_1, \mathcal{S}_2, \ldots, \mathcal{S}_N\}$ of bit strings each of length $2^N$ - equivalently, as discussed above, as a rational $\mathcal{R}_U$ with $2^N$ base-$2^N$ digits. In standard quantum theory, the Hilbert space of $N$ qubits has dimension $2^N-1$, including one global phase degree of freedom. In the reformulation, we have the set $\{E, E_2, \ldots, E_{2^N-1}\}$ of $2^N-1$ square roots of minus one. As before, the global phase degree of freedom is represented by the roots $E^{\alpha}$ of $E$. The remaining degrees of freedom are associated with linear combinations of the basis sequences
\begin{eqnarray} \{1,1,&\ldots&,1\},\nonumber\\ \mathbf{E}_2(\{1,1,\ldots,1\} ), \mathbf{E}_3(\{1,1,&\ldots&,1\} ), \ldots \mathbf{E}_{2^N-1}(\{1,1,\ldots,1\} ) \end{eqnarray} as in (\ref{eq:superp}).
It can be asked whether the proposed reformulation is testably different from quantum theory. One interesting fact, which may be relevant in this respect, emerges at the level of $4$ qubits. Unlike the state space of $1$, $2$ or $3$ qubits, the state space $\mathbbm{S}^{31}$ of 4 qubits in standard quantum theory cannot be Hopf fibrated, due to a theorem of Adams and Atiyah (1966). As Bernevig and Chen (2003) note, the failure of the state space to fibrate appears to lead to fundamental difficulties in describing the entanglement structure of 4 or more qubits. By contrast, the present theory is constrained neither by the continuum properties of hyper-complex algebraic fields, nor by their corresponding topological spaces. It is interesting to note that $4$-qubit structures are needed to describe the quantum of gravity. This indicates that the proposed reformulation may more readily incorporate the effects of gravity than does conventional quantum theory.
\subsubsection{Relation to the Idealised Model of Quantum Measurement}
The construction of $S_0$ and $S_{\theta}$ in section \ref{sec:toy} is an idealisation of the hyper-complex permutation operators developed here. A more precise linkage between $S_0$ and $S_{\theta}$ and the proposed reformulation of quantum theory can be given as follows. In terms of the proposed 2-qubit reformulation of Hilbert space, put $|\psi\rangle \mapsto \{\mathcal{S}, \mathcal{S}_{\theta}\}$, where \begin{eqnarray} \mathcal{S}&=&\mathbf{E}_a\{1,1,\ldots,1\}\nonumber\\ \mathcal{S}_{\theta}&=&(\cos \theta \ \mathbf{E}_a+\sin\theta \ \mathbf{E}_b)\{1,1,\ldots,1\}, \end{eqnarray} where $\{E, E_a, E_b\}$ denotes a pure imaginary quaternionic triple in the space of $2^N-1$ hyper-complex permutation operators.
Using (\ref{eq:root}), define \begin{eqnarray} \mathbf{E}^{\alpha}(\mathcal{S})&=&\{a_1, a_2,\ldots, a_{2^N}\}\nonumber\\ \mathbf{E}^{\alpha}(\mathcal{S}_{\theta})&=&\{a_1(\theta), a_2(\theta), \ldots, a_{2^N}(\theta)\} \end{eqnarray} and put \begin{eqnarray} S_0(\frac{\alpha\pi}{2})&=&a_1 \nonumber\\ S_{\theta}(\frac{\alpha\pi}{2})&=&a_1(\theta) \end{eqnarray} that is, $S_{\theta}(\alpha\pi/2)$ denotes the leading bit of $\mathbf{E}^{\alpha}(\mathcal{S}_{\theta})$.
When $\theta=0$, then $S_0$ and $S_{\theta}$ are identical. When $\theta=\pi/2$ then $\mathcal{S}_{\theta}=\mathbf{E}_b\{1,1,\ldots,1\}=\mathbf{E}\mathbf{E}_a\{1,1,\ldots,1\}=\mathbf{E}(\mathcal{S})$. Hence, $S_{\pi/2}(\alpha\pi/2)=S_0(\pi/2+\alpha\pi/2)$. When $\theta=\pi$, then $\mathcal{S}_{\theta}=-\mathbf{E}_a\{1,1,\ldots, 1\}=\mathbf{E}^2(\mathcal{S})$, hence $S_{\pi}(\alpha\pi/2)=S_0(\pi+\alpha\pi/2)$. When $\theta=3\pi/2$ then $\mathcal{S}_{\theta}=-\mathbf{E}_b\{1,1,\ldots,1\}=\mathbf{E}^3(\mathcal{S})$. Hence, $S_{3\pi/2}(\alpha\pi/2)=S_0(3\pi/2+\alpha\pi/2)$.
For all other values of $\theta$, $\mathcal{S}_{\theta}$ is never equal to $\mathbf{E}^{\alpha}(\mathcal{S})$ for any $\alpha$ - since $\mathbf{E}^{\alpha}$ induces a cyclic displacement of the elements of $\mathcal{S}$, there is no $\alpha$ where $\mathbf{E}^{\alpha}(\mathcal{S})$ is partially correlated with $\mathcal{S}$. That is, the values of $\theta$ for which $\mathcal{S}_{\theta}$ belongs to the qubit equivalence class $\{\mathbf{E}^{\alpha}(\mathcal{S}):\ \alpha \in \mathbbm{Q}_2\}$, are precisely the values $\theta$ for which $\theta$ is a dyadic rational multiple of $\pi$ and $\cos\theta$ is dyadic rational ie $\{0,\pi/2, \pi, 3\pi/2\}$. This result links the idealised model in section \ref{sec:toy} with the proposed reformulation of quantum theory, and makes the result on causal incompleteness relevant to the interpretation of Bell's theorem in (this reformulation of) quantum theory.
\section{Incompleteness and the Metaphysics of Free Will} \label{sec:meta}
It has been proposed that inherent mathematical incompleteness (of the type describing the sets of rational numbers, Turing-computable functions, arithmetic proofs and so on) provides a new interpretation of the experimental violation of the Bell inequalities, one that does not invoke or require non-local causality. This interpretation applies to a class of physical theory whose state space is inherently granular and discontinuous. Within such a class of theory, the freedom of experimenters to choose measurement settings is tempered by the granular structure of state space. The proposed interpretation of the Bell inequalities is that they reveal the inevitable mathematical incompleteness of the causal structure underlying such a theory. The kinematic structure of quantum theory has been reformulated so that it belongs to this class of theory.
It is well known that violation of the Bell inequalities can be `explained' if the notion of free will is completely rejected. This is generally considered an unsatisfactory explanation, for reasons which go under the general description `conspiratorial'. In an attempt to clarify this issue, Bell (1993) considers a deterministic dynamical system which replaces the whimsical experimenter. This system selects between two possible outputs $a$ or $a'$ on the basis of the parity of the digit in the millionth decimal place of some input variable. Then, fixing $a$ or $a'$ fixes something about the input - ie whether the millionth digit is odd or even. Bell's objection to a deterministic explanation of the violation of his eponymous inequalities is this: this peculiar piece of information, the millionth digit, is unlikely to be the vital piece of information for any distinctively different purpose ie it is otherwise rather useless.
However, in the reformulation of quantum kinematics, the wavefunction of the universe is constructed from a finite number $N$ of bit strings, each of length $2^N$, and the equivalent of unitary transformations involves permutation and related operators acting on these bit strings. Typically these permutation operators, represented as matrices, have terms on the anti-diagonal. For example, the global phase operator $\mathbf{E}^{1/2^N}$, see (\ref{eq:root}), acting on some sequence $\mathcal{S}$ brings to the front of the sequence an element that was previously at the back of the sequence. That is, at the heart of our reformulation of the complex Hilbert space of quantum theory are operators whose action is very similar, in essence, to that of Bell's deterministic dynamical system. The entanglement structure of the universe is given by precise, intricate relationships between the bits of the different bit strings. In this perspective, bits near the end of a bit string are no less `vital' for `distinctively different purposes' than bits near the front of the bit string. As in a Sudoku puzzle, violate one piece of the structure (either at the beginning or end of a bit string) and you violate the structure everywhere.
Rather, the real difficulty with deterministic explanations of the violation of the Bell inequalities is their contradiction of our strong intuition that the experiment could have been performed differently. In Bell's example above, our intuition suggests the input could have been otherwise, at least as far as the trailing digits of the input number are concerned. Any explanation of the violation of the Bell inequality which does not address this deeply-held feeling is not likely to be accepted.
In the current proposal, our intuition about free will is not rejected \emph{per se}, but rather (as far as the EPR-Bohm-Bell experiments are concerned) is derived from the computational properties of the models $Sp_1$ and $Sp_2$, see (\ref{eq:cf}). Hence, our intuition infers that the experimenter could have chosen from an arbitrarily dense set of alternative detector orientations to the one actually chosen, and, from the properties of $Sp_1$, this belief is not inconsistent with the (proposed reformulation of the) laws of physics. Since $Sp_1(0, \lambda)$ defines `reality', $Sp_1(\pi, \lambda)$ defines an alternate world precisely anticorrelated with reality, and $Sp_1(\pi/2, \lambda)$ and $Sp_1(3\pi/2, \lambda)$ define alternate worlds uncorrelated with reality. However, if $Sp_1(\delta\theta_1, \lambda)$ were solved algorithmically for a perturbed orientation, such as is required to derive a Bell inequality, $Sp_1$ would never halt. That is, in circumstances where our intuition might contradict physics, our intuition, acting computationally, would never be able to ascertain what the relevant measurement outcome would have been.
On the other hand, our cognitive reasoning is (from time to time, at least) able to transcend such a purely computational perspective. Does an awareness of algorithmic incompleteness imply that non-computability is a feature of such cognitive reasoning, and by implication a feature of the laws of physics, as has been suggested by Penrose (1994)? In fact, it would be hard to reconcile this notion with this paper's underlying premise that the granular reformulation of quantum theory is ultimately finite (with the wavefunction of the universe being given by a sequence of $2^N$ base-$2^N$ digits, for some very large but nevertheless finite $N$). A possible alternative suggestion, therefore, is that an awareness of algorithmic incompleteness may instead arise from some cognitive ability to jump (perhaps involuntarily) between computationally-inequivalent finite systems. For example, in the model developed here, a finite division of the circle based on angular segments which are equal dyadic rational fractions of $\pi$ is not equivalent to a finite division of the circle based on angular segments whose cosines are equal dyadic rational fractions, ie based on equal divisions of the diameter. From an awareness of both finite cyclotomies one can recognise the inability of one to contain the other.
In conclusion, it is ironic, perhaps, that the exploitation of mathematical incompleteness in physical theory may turn out to be the key notion that allows the achievement of EPR's goal of developing a theory of the quantum that is more physically complete than standard quantum theory.
\appendix \section{A fundamental property of the cosine function}
The discussion in section \ref{sec:toy} uses a rather basic property of the cosine function, albeit one rarely (if ever) used in physics. For completeness, we give a simple proof of this property, based on unpublished work by Jahnel (2005). It can be seen as a special example from the theory of trigonometric Diophantine equations (Conway and Jones, 1976).
\textbf{Theorem} If $\theta/\pi \in \mathbbm{Q}_2$ and $0<\theta/\pi<1/2$, then $\cos \theta \notin \mathbbm{Q}_2$.
\textbf{Proof} With $\theta/\pi \in \mathbbm{Q}_2$ and $0<\theta/\pi<1/2$, assume for contradiction that $2\cos \theta =a/b$ is rational, where $a,b \in \mathbbm{Z}$, $b \ne 0$, have no common factors.
Using the identity \begin{equation} 2\cos 2\theta = (2\cos\theta)^2-2 \end{equation} we have \begin{equation} 2\cos 2\theta=\frac{a^2-2b^2}{b^2}
\end{equation} Now $a^2-2b^2$ and $b^2$ have no common factors: if $p$ were a prime number dividing both, then $p|b^2 \Rightarrow p|b$, and $p|(a^2-2b^2)$ together with $p|b$ would give $p|a$, contradicting the assumption that $a$ and $b$ have no common factors.
Hence, if $b \ne \pm1$, then the denominators of $2\cos\theta$, $2\cos2\theta$, $2\cos4\theta$, $2\cos8\theta,\ldots$ grow without bound. On the other hand, $\theta/\pi=m/n$ implies that the angles $2^k\theta$ take only finitely many values modulo $2\pi$, so the sequence $(2\cos 2^k \theta)_{k\in \mathbbm{N}}$ admits only finitely many values. This is a contradiction; hence $b=\pm1$ and $\cos\theta\in\{0, \pm1/2, \pm1\}$. But no $\theta$ with $\theta/\pi \in \mathbbm{Q}_2$ and $0<\theta/\pi<1/2$ has $\cos\theta$ equal to one of these values. QED.
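The mechanism of the proof is easily illustrated numerically. The snippet below (an illustration only; the starting value is an arbitrary non-integer rational) iterates the doubling identity in exact rational arithmetic and exhibits the unbounded growth of the denominators.
\begin{verbatim}
# Iterate 2 cos(2^k theta) = (2 cos(2^{k-1} theta))^2 - 2 exactly.
from fractions import Fraction

x = Fraction(1, 2)            # pretend 2 cos(theta) = 1/2, so b = 2 and b != +-1
for k in range(1, 6):
    x = x * x - 2             # exact value of 2 cos(2^k theta)
    print(k, x, "denominator:", x.denominator)
# The denominators 4, 16, 256, 65536, ... = b^(2^k) grow without bound, whereas for
# theta a rational multiple of pi the sequence could take only finitely many values.
\end{verbatim}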
Finally note that primitive Pythagorean integer triples $\{x,y,z\}$ satisfying $x^2+y^2=z^2$ can be parametrised as $x=2uv$, $y=u^2-v^2$, $z=u^2+v^2$, where $(u, v)$ are integers without common factor and of different parity (eg Hardy and Wright, 1979). Hence, if $0 < \theta < \pi/2$ and both $\cos \theta$ and $\sin \theta$ are rational, then (possibly after exchanging the two expressions) $\cos \theta = 2uv/(u^2+v^2)$ and $\sin\theta=(u^2-v^2)/(u^2+v^2)$, and these fractions are in lowest terms. Since $u$ and $v$ are of different parity, $u^2+v^2$ is odd, so the denominators are odd and greater than $1$, and in particular are not powers of $2$. Hence $\cos \theta$ and $\sin\theta$ cannot both be dyadic rational.
\section*{References}
\begin{description}
\item Adams, J.F. and Atiyah, M.F., 1966: K-theory and the Hopf invariant. Quarterly J. Math. Oxford, 17, 31-38.
\item Bernevig, B.A. and Chen H.-D., 2003: Geometry of the three-qubit state, entanglement and division algebras. J.Phys A, 36, 8325-8339.
\item Bell, J.S., 1993: Free variables and local causality. In `Speakable and unspeakable in quantum mechanics.' Cambridge University Press. 212pp.
\item Bohm, D., 1980: Wholeness and the implicate order. Routledge and Kegan Paul, London.
\item Conway, J.H. and Jones, A.J., 1976: Trigonometric Diophantine Equations. Acta Arithmetica, 30, 229-240.
\item Hardy, G.H. and Wright, E.M., 1979: An Introduction to the Theory of Numbers. Oxford University Press.
\item Jahnel, J., 2005: When does the (co)-sine of a rational angle give a rational number? Available online at www.uni-math.gwdg.de/jahnel/linkstopaperse.html
\item Kane, R., 2002: The Oxford Handbook on Free Will. Oxford University Press. 638pp
\item Menzies, P. 2001: Counterfactual Theories of Causation. The Stanford Encyclopedia of Philosophy (Spring 2001 Edition), Edward N. Zalta (ed.), URL = http://plato.stanford.edu/archives/spr2001/entries/causation-counterfactual/
\item Palmer, T.N., 2004: A granular permutation-based representation of complex numbers and quaternions: elements of a possible realistic quantum theory. Proc. Roy. Soc. Lond., A460, 1039-1055.
\item Penrose, R., 1994: Shadows of the mind. Oxford University Press. Oxford. 450pp
\item Price, H., 1996: Time's Arrow and Archimedes Point. Oxford University Press. 306pp
\item Rae, A.I.M., 1992: Quantum Mechanics. Institute of Physics. Bristol.
\item Wheeler, J.A., 1994: In `Physical Origins of Time Asymmetry', ed. J.J. Halliwell, J. P\'erez-Mercader and W.H. Zurek, pp. 1-29. Cambridge University Press.
\end{description}
\end{document}
\begin{document}
\title{{\bf On the evolution equation of compressible\\ vortex sheets}} \author{ {\sc Alessandro Morando}\thanks{e-mail: [email protected]}\;, {\sc Paolo Secchi}\thanks{e-mail: [email protected]}\;, {\sc Paola Trebeschi}\thanks{e-mail: [email protected]}\\ {\footnotesize DICATAM, Sezione di Matematica, Universit\`a di Brescia, Via Valotti 9, 25133 Brescia, Italy} }
\date{}
\maketitle
\begin{abstract}
We are concerned with supersonic vortex sheets for the Euler equations of compressible inviscid fluids in two space dimensions. For the problem with constant coefficients we derive an evolution equation for the discontinuity front of the vortex sheet. This is a pseudo-differential equation of order two. In agreement with the classical stability analysis, if the jump of the tangential component of the velocity satisfies $|[v\cdot\tau]|<2\sqrt{2}\,c$ (here $c$ denotes the sound speed) the symbol is elliptic and the problem is ill-posed. On the contrary, if $|[v\cdot\tau]|>2\sqrt{2}\,c$, then the problem is weakly stable, and we are able to derive a wave-type a priori energy estimate for the solution, with no loss of regularity with respect to the data. Then we prove the well-posedness of the problem, by showing the existence of the solution in weighted Sobolev spaces.
\noindent{\bf Keywords:} Compressible Euler equations, vortex sheet, contact discontinuities, weak stability, loss of derivatives, linear stability.
\noindent{\bf Mathematics Subject Classification:}
35Q35, 76N10, 76E17, 35L50
\end{abstract}
\tableofcontents
\section{Introduction} \label{sect1}
We are concerned with the time evolution of vortex sheets for the Euler equations describing the motion of a compressible fluid. Vortex sheets are interfaces between two incompressible or compressible flows across which there is a discontinuity in fluid velocity. Across a vortex sheet, the tangential velocity field has a jump, while the normal component of the flow velocity is continuous. The discontinuity in the tangential velocity field creates a concentration of vorticity along the interface. In particular, compressible vortex sheets are contact discontinuities to the Euler equations for compressible fluids and as such they are fundamental waves which play an important role in the study of general entropy solutions to multidimensional hyperbolic systems of conservation laws.
It was observed in \cite{M58MR0097930,FM63MR0154509}, by normal mode analysis, that rectilinear vortex sheets for isentropic compressible fluids in two space dimensions are linearly stable when the Mach number $\mathsf{M}>\sqrt{2}$ and are violently unstable when $\mathsf{M}<\sqrt{2}$, while planar vortex sheets are always violently unstable in three space dimensions. This kind of instability is the analogue of the Kelvin--Helmholtz instability for incompressible fluids.
\citet{AM87MR914450} studied certain instabilities of two-dimensional supersonic vortex sheets by analyzing the interaction with highly oscillatory waves through geometric optics. A rigorous mathematical theory on nonlinear stability and local-in-time existence of two-dimensional supersonic vortex sheets was first established by Coulombel--Secchi \cite{CS08MR2423311,CS09MR2505379} based on their linear stability results in \cite{CS04MR2095445} and a Nash--Moser iteration scheme.
Characteristic discontinuities, especially vortex sheets, arise in a broad range of physical problems in fluid mechanics, oceanography, aerodynamics, plasma physics, astrophysics, and elastodynamics. The linear results in \cite{CS04MR2095445} have been generalized to cover the two-dimensional nonisentropic flows \cite{MT08MR2441089}, the three-dimensional compressible steady flows \cite{WY13MR3065290,WYuan15MR3327369}, and the two-dimensional two-phase flows \cite{RWWZ16MR3474128}.
Recently, the methodology in \cite{CS04MR2095445} has been developed to deal with several constant coefficient linearized problems arising in two-dimensional compressible magnetohydrodynamics (MHD) and elastic flows; see \cite{WY13ARMAMR3035981,CDS16MR3527627,CHW17Adv}. For three-dimensional MHD, Chen--Wang \cite{CW08MR2372810} and \citet{T09MR2481071} proved the nonlinear stability of compressible current-vortex sheets, which indicates that non-paralleled magnetic fields stabilize the motion of three-dimensional compressible vortex sheets. Moreover, the modified Nash--Moser iteration scheme developed in \cite{H76MR0602181,CS08MR2423311} has been successfully applied to the compressible liquids in vacuum \cite{T09MR2560044}, the plasma-vacuum interface problem \cite{ST14MR3151094}, three-dimensional compressible steady flows \cite{WY15MR3328144}, and MHD contact discontinuities \cite{MTT16Preprint}. The approach of \cite{CS04MR2095445, CS08MR2423311} has been recently extended to get the existence of solutions to the non linear problem of relativistic vortex sheets in three-dimensional Minkowski spacetime \cite{CSW1707.02672}, and the two-dimensional nonisentropic flows \cite{MTW17MR}.
The vortex sheet motion is a nonlinear hyperbolic problem with a characteristic free boundary. The analysis of the linearized problem in \cite{CS04MR2095445} shows that the so-called Kreiss-Lopatinski\u{\i} condition holds in a weak sense, thus one can only obtain an \emph{a priori} energy estimate with a loss of derivatives with respect to the source terms. Because of this fact, the existence of the solution to the nonlinear problem is obtained in \cite{CS08MR2423311} by a Nash-Moser iteration scheme, with a loss of the regularity of the solution with respect to the initial data.
To the best of our knowledge the approach of \cite{CS04MR2095445,CS08MR2423311} is the only one known up to now, and it would be interesting to have different methods of proof capable of giving the existence and possibly other properties of the solution.
In particular, the location of the discontinuity front of the vortex sheet is obtained through the jump conditions at the front, see \eqref{RH}, and is implicitly determined by the fluid motion in the interior regions, i.e. far from the front.
By contrast, it would be interesting to find an \lq\lq{explicit}\rq\rq\ evolution equation for the vortex sheet, i.e. for the discontinuity front, that might also be useful for numerical simulations. In this regard we recall that, in the case of irrotational, incompressible vortex sheets, the location of the discontinuity front is described by the Birkhoff-Rott equation, see \cite{MR1688875,MB02MR1867882,MP94MR1245492}, whose solution is sufficient to give a complete description of the fluid motion through the Biot-Savart law. The evolution equation of the discontinuity front of current-vortex sheets plays an important role in the paper \cite{SWZ}.
In this paper we are concerned with supersonic vortex sheets for the Euler equations of compressible inviscid fluids in two space dimensions. For the problem with constant coefficients we are able to derive an evolution equation for the discontinuity front of the vortex sheet. This is a pseudo-differential equation of order two. In agreement with the classical stability analysis \cite{FM63MR0154509,M58MR0097930}, if the jump of the tangential component of the velocity satisfies $|[v\cdot\tau]|<2\sqrt{2}\,c$ (here $c$ denotes the sound speed) the symbol is elliptic and the problem is ill-posed. On the contrary, if $|[v\cdot\tau]|>2\sqrt{2}\,c$, then the problem is weakly stable, and we are able to derive a wave-type a priori energy estimate for the solution, with no loss of regularity with respect to the data. By a duality argument we then prove the well-posedness of the problem, by showing the existence of the solution in weighted Sobolev spaces.
The fact that the evolution equation for the discontinuity front is well-posed, with no loss of regularity from the data to the solution, is somehow in agreement with the result of the linear analysis in \cite{CS04MR2095445} (see Theorem 3.1 and Theorem 5.2), where the solution has a loss of derivatives in the interior domains while the function describing the front conserves the regularity of the boundary data.
In a forthcoming paper we will consider the problem with variable coefficients, which requires a completely different approach.
\subsection{The Eulerian description}
We consider the isentropic Euler equations in the whole plane ${\mathbb R}^2$. Denoting by ${\bf v}=(v_1,v_2) \in {\mathbb R}^2$ the velocity of the fluid, and by $\rho$ its density, the equations read: \begin{equation} \label{euler} \begin{cases} \partial_t \rho +\nabla \cdot (\rho \, {\bf v}) =0 \, ,\\ \partial_t (\rho \, {\bf v}) +\nabla \cdot (\rho \, {\bf v} \otimes {\bf v}) +\nabla \, p =0 \, , \end{cases} \end{equation} where $p=p(\rho)$ is the pressure law. In all this paper $p$ is a $C^\infty$ function of $\rho$, defined on $]0,+\infty[$, and such that $p'(\rho)>0$ for all $\rho$. The speed of sound $c(\rho)$ in the fluid is defined by the relation: \begin{equation*} \forall \, \rho>0 \, ,\quad c(\rho) :=\sqrt{p'(\rho)} \, . \end{equation*} It is a well-known fact that, for such a pressure law, \eqref{euler} is a strictly hyperbolic system in the region $(t,x)\in\, ]0,+\infty[ \, \times {\mathbb R}^2$, and \eqref{euler} is also symmetrizable.
We are interested in solutions of \eqref{euler} that are smooth on either side of a smooth hypersurface $\Gamma(t):=\{x=(x_1,x_2)\in {\mathbb R}^2 : F(t,x)=0\}=\{x_2=f(t,x_1)\}$ for each $t$ and that satisfy suitable jump conditions at each point of the front $\Gamma (t)$.
Let us denote $\Omega^\pm(t):=\{(x_1,x_2)\in {\mathbb R}^2 :x_2\gtrless f(t,x_1)\}$. Given any function $g$ we denote $g^\pm=g$ in $\Omega^\pm(t)$ and $[g]=g^+_{|\Gamma}-g^-_{|\Gamma}$ the jump across $\Gamma (t)$.
We look for smooth solutions $({\bf v}^\pm,\rho^\pm)$ of \eqref{euler} in $\Omega^\pm(t)$ such that, at each time $t$, the tangential velocity is the only quantity that experiences a jump across the curve $\Gamma (t)$. (Tangential should be understood as tangential with respect to $\Gamma (t)$). The pressure and the normal velocity should be continuous across $\Gamma (t)$. For such solutions, the jump conditions across $\Gamma(t)$ read: \begin{equation*} \sigma ={\bf v}^\pm\cdot n \, ,\quad [p]=0 \quad {\rm on } \;\Gamma (t) \, . \end{equation*} Here $n=n(t)$ denotes the outward unit normal on $\partial\Omega^-(t)$ and $\sigma$ denotes the velocity of propagation of the interface $\Gamma (t)$. With our parametrization of $\Gamma (t)$, an equivalent formulation of these jump conditions is \begin{equation} \label{RH} \partial_t f ={\bf v}^+\cdot N ={\bf v}^-\cdot N \, ,\quad p^+ =p^- \quad {\rm on }\;\Gamma (t) \, , \end{equation} where \begin{equation}\label{defN} N=(-\partial_1 f, 1) \end{equation} and $p^\pm=p(\rho^\pm)$. Notice that the function $f$ describing the discontinuity front is itself an unknown of the problem, i.e. this is a free boundary problem.
For smooth solutions system \eqref{euler} can be written in the equivalent form \begin{equation} \label{euler1} \begin{cases} \partial_t \rho +({\bf v}\cdot\nabla) \rho +\rho \, \nabla\cdot{\bf v} =0 \, ,\\ \rho \,(\partial_t {\bf v} +({\bf v}\cdot\nabla)
{\bf v} ) +\nabla \, p =0 \, . \end{cases} \end{equation} Because $ p'(\rho)>0 $, the function $p= p(\rho ) $ can be inverted and we can write $ \rho=\rho(p) $. Given a positive constant $ \bar{\rho}>0 $, we introduce the quantity $ P(p)=\log(\rho(p)/\bar{\rho}) $ and consider $ P $ as a new unknown. In terms of $ (P,\bf v) $, the system \eqref{euler1} equivalently reads \begin{equation} \label{euler2} \begin{cases} \partial_t P +{\bf v}\cdot\nabla P + \nabla\cdot{\bf v} =0 \, ,\\ \partial_t {\bf v} +({\bf v}\cdot\nabla) {\bf v} +c^2\,\nabla \, P =0 \, , \end{cases} \end{equation} where now the speed of sound is considered as a function of $ P,$ that is $ c=c(P) $. Thus our problem reads \begin{equation} \label{euler3} \begin{cases} \partial_t P^\pm +{\bf v}^\pm\cdot\nabla P^\pm + \nabla\cdot{\bf v}^\pm =0 \, ,\\ \partial_t {\bf v}^\pm +({\bf v}^\pm\cdot\nabla) {\bf v}^\pm +c^2_\pm\,\nabla \, P^\pm =0 \, , \qquad {\rm in }\; \Omega^\pm(t), \end{cases} \end{equation} where we have set $ c_\pm=c(P^\pm) $.
The jump conditions \eqref{RH}
take the new form \begin{equation} \label{RH2} \partial_t f ={\bf v}^+\cdot N ={\bf v}^-\cdot N \, ,\quad P^+ =P^- \quad {\rm on }\;\Gamma (t) \, . \end{equation}
\section{Preliminary results}
Given functions $ {\bf v}^\pm, P^\pm$, we set \begin{equation} \begin{array}{ll}\label{defZ} Z^\pm:=\partial_t {\bf v}^\pm +( {\bf v}^\pm \cdot \nabla) {\bf v}^\pm . \end{array} \end{equation} Next, we study the behavior of $Z^\pm$ at $\Gamma(t)$. As in \cite{SWZ} we define \begin{equation} \begin{array}{ll}\label{deftheta} \theta(t,x_1):= {\bf v}^\pm(t,x_1,f(t,x_1))\cdot N(t,x_1), \end{array} \end{equation} for $N$ given in \eqref{defN}.
\begin{lemma}\label{lemmaN} Let $ f, {\bf v}^\pm, \theta$ be such that \begin{equation} \begin{array}{ll}\label{dtfthetavN} \partial_t f=\theta= {\bf v}^\pm\cdot N \, \qquad {\rm on }\; \Gamma(t), \end{array} \end{equation} and let $Z^\pm$ be defined by \eqref{defZ}. Then \begin{equation} \begin{array}{ll}\label{applN} Z^+ \cdot N \displaystyle = \partial_t\theta + 2 v_1^+\partial_1\theta + (v_1^+)^2 \partial^2_{11} f \,,\\ Z^- \cdot N \displaystyle = \partial_t\theta + 2 v_1^-\partial_1\theta + (v_1^-)^2 \partial^2_{11} f
\quad {\rm on }\; \Gamma(t) . \end{array} \end{equation} \end{lemma}
\begin{proof} Dropping for convenience the $\pm$ superscripts, we compute \begin{equation*} \begin{array}{ll}\label{} \partial_t \theta=(\partial_t{\bf v}+\partial_2{\bf v}\partial_t f)\cdot N + {\bf v}\cdot \partial_t N=(\partial_t v_2+\partial_2 v_2\partial_t f) -(\partial_t v_1+\partial_2 v_1\partial_t f)\partial_1 f - v_1 \partial_t \partial_1 f \,, \end{array} \end{equation*} and similarly \begin{equation*} \begin{array}{ll}\label{} \partial_1 \theta=(\partial_1 v_2+\partial_2 v_2\partial_1 f) -(\partial_1 v_1+\partial_2 v_1\partial_1 f)\partial_1 f - v_1 \partial^2_{11} f \, . \end{array} \end{equation*} Substituting \eqref{dtfthetavN} in the first of the two equations it follows that \begin{equation*} \begin{array}{ll}\label{} \partial_t v_2-\partial_t v_1\partial_1 f=\partial_t \theta+v_1\partial_1 \theta - \partial_t f(\partial_2 v_2 -\partial_2 v_1\partial_1 f ) \,, \end{array} \end{equation*} and from the second equation, after multiplication by $v_1$, it follows that \begin{equation*} \begin{array}{ll}\label{} v_1\partial_1 v_2- v_1\partial_1 v_1\partial_1 f=v_1\partial_1 \theta+v_1^2\partial^2_{11} f - v_1\partial_1 f(\partial_2 v_2 -\partial_2 v_1\partial_1 f ) \,. \end{array} \end{equation*} We substitute the last two equations in \begin{equation*} \begin{array}{ll}\label{} Z\cdot N=(\partial_t v_2 + {\bf v} \cdot \nabla v_2)-(\partial_t v_1 + {\bf v} \cdot \nabla v_1)\partial_1 f \,, \end{array} \end{equation*} rearrange the terms, use again \eqref{dtfthetavN}, and finally obtain \[ Z \cdot N \displaystyle = \partial_t\theta + 2 v_1\partial_1\theta + v_1^2 \partial^2_{11} f \,,
\] that is \eqref{applN}. \end{proof}
\subsection{A first equation for the front}
We take the scalar product of the equation for $\bf v^\pm$ in \eqref{euler3}, evaluated at $\Gamma(t)$, with the vector $N$. We get \begin{equation*} \big\{ Z^\pm + c^2_\pm \nabla P^\pm\big\} \cdot N =0\, \quad {\rm on} \; \Gamma(t) \, , \end{equation*} and applying Lemma \ref{lemmaN} we obtain \begin{equation} \begin{array}{ll}\label{puntoN} \displaystyle \partial_t\theta + 2 v_1^\pm\partial_1\theta + (v_1^\pm)^2 \partial^2_{11} f + c^2_\pm \nabla P^\pm \cdot N =0 \,\quad {\rm on} \; \Gamma(t) \, . \end{array} \end{equation} Now we apply an idea from \cite{SWZ}. We take the {\it sum} of the "+" and "-" equations in \eqref{puntoN} to obtain \begin{multline}\label{puntoN2} \displaystyle 2\partial_t\theta + 2 (v_1^++v_1^-)\partial_1\theta + ( (v_1^+)^2+(v_1^-)^2) \partial^2_{11} f + c^2 \nabla (P^+ + P^-) \cdot N =0 \,\quad {\rm on} \; \Gamma(t) \, , \end{multline}
where we have denoted the common value at the boundary $c=c_{\pm|\Gamma(t)}=c(P^\pm_{|\Gamma(t)})$. Next, following again \cite{SWZ}, we introduce the quantities \begin{equation} \label{defwV} {\bf w}=(w_1,w_2):=({\bf v}^++{\bf v}^-)/2, \qquad {\bf V}=(V_1,V_2):=({\bf v}^+-{\bf v}^-)/2. \end{equation} Substituting \eqref{defwV} in \eqref{puntoN2} gives \begin{equation}\label{puntoN3} \displaystyle \partial_t\theta + 2 w_1\partial_1\theta + (w_1^2 + V_1^2 )\partial^2_{11} f +\frac{c^2}2 \nabla (P^+ + P^-) \cdot N =0 \,\qquad {\rm on} \; \Gamma(t) \, . \end{equation} Finally we substitute the boundary condition $ \theta=\partial_t f $ in \eqref{puntoN3} and we obtain \begin{equation}\label{puntoN4} \displaystyle \partial^2_{tt} f + 2 w_1\partial_1\partial_t f + (w_1^2 + V_1^2 )\partial^2_{11} f +\frac{c^2}2 \nabla (P^+ + P^-) \cdot N =0 \,\qquad {\rm on} \; \Gamma(t) \, . \end{equation} Equation \eqref{puntoN4} is a second order equation for the front $f$. However, it is nonlinearly coupled at the highest order with the other unknowns $ ({\bf v}^\pm,P^\pm) $ of the problem through the last term on the left-hand side of \eqref{puntoN4}. In order to find an evolution equation for $f$, it is important to isolate the dependence of $f$ on $P^\pm$ at the highest order, i.e. up to lower order terms in $ ({\bf v}^\pm,P^\pm) $.
Notice that \eqref{puntoN4} can also be written in the form \begin{equation}\label{puntoN5} \displaystyle (\partial_t + w_1\partial_1)^2 f + V_1^2 \partial^2_{11} f +\frac{c^2}2 \nabla (P^+ + P^-) \cdot N -(\partial_t w_1+w_1\partial_1 w_1)\partial_1 f=0 \,\qquad {\rm on} \; \Gamma(t) \, . \end{equation}
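For the reader's convenience, the equivalence of \eqref{puntoN4} and \eqref{puntoN5} (an elementary expansion of the squared convective derivative) can also be confirmed symbolically; the following lines are only a sanity check and involve no assumptions beyond smoothness of $f$ and $w_1$.
\begin{verbatim}
import sympy as sp

t, x1 = sp.symbols('t x1')
f = sp.Function('f')(t, x1)
w1 = sp.Function('w1')(t, x1)
D = lambda u: sp.diff(u, t) + w1 * sp.diff(u, x1)          # the operator d_t + w1 d_1

lhs = D(D(f)) - (sp.diff(w1, t) + w1 * sp.diff(w1, x1)) * sp.diff(f, x1)
rhs = sp.diff(f, t, 2) + 2 * w1 * sp.diff(f, t, x1) + w1 ** 2 * sp.diff(f, x1, 2)
assert sp.simplify(lhs - rhs) == 0                          # (puntoN5) <=> (puntoN4)
\end{verbatim}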
\subsection{The wave problem for the pressure}
Applying the operator $ \partial_t+{\bf v}\cdot\nabla $ to the first equation of \eqref{euler2} and $ \nabla\cdot $ to the second one gives \begin{equation*}\label{} \begin{cases} (\partial_t +{\bf v}\cdot\nabla)^2 P + (\partial_t +{\bf v}\cdot\nabla)\nabla\cdot{\bf v} =0 \, ,\\ \nabla\cdot(\partial_t +{\bf v}\cdot\nabla) {\bf v} +\nabla\cdot(c^2\,\nabla \, P) =0 \, . \end{cases} \end{equation*} The difference of the two equations gives the wave-type equation\footnote[1]{Here we adopt the Einstein convention over repeated indices.} \begin{equation}\label{wave0} (\partial_t +{\bf v}\cdot\nabla)^2 P - \nabla\cdot(c^2\,\nabla \, P) = -[\partial_t +{\bf v}\cdot\nabla, \nabla\cdot\,]{\bf v}=\partial_i v_j\partial_j v_i. \end{equation} We repeat the same calculation for both $ ({\bf v}^\pm,P^\pm) $. As for the behavior at the boundary, we already know that \begin{equation}\label{bc1}
[P]=0 \, , \qquad {\rm on }\; \Gamma(t) \, . \end{equation} As a second boundary condition it is natural to add a condition involving the normal derivatives of $ P^\pm. $ We proceed as follows: instead of the {\it sum} of the equations \eqref{puntoN} as for \eqref{puntoN2}, we take the {\it difference} of the "+" and "-" equations in \eqref{puntoN} to obtain the jump of the normal derivatives $ \nabla P^\pm \cdot N $, \begin{equation} \label{jumpQ} [c^2 \nabla P \cdot N] =-[2 v_1\partial_1\theta + v_1^2 \partial^2_{11} f] \qquad {\rm on} \; \Gamma(t) \, . \end{equation} Recalling that $ \theta=\partial_t f $, we compute \begin{equation}\label{jumpQ1} [2 v_1\partial_1\theta + v_1^2 \partial^2_{11} f] = 4 V_1(\partial_t+w_1\partial_1)\partial_1 f . \end{equation} Thus, from \eqref{jumpQ}, \eqref{jumpQ1} we get \begin{equation}\label{bc2} [c^2 \nabla P \cdot N] =-4 V_1(\partial_t+w_1\partial_1)\partial_1 f \qquad {\rm on} \; \Gamma(t) \, . \end{equation} Collecting \eqref{wave0} for $P^\pm$, \eqref{bc1}, \eqref{bc2} gives the coupled problem for the pressure \begin{equation}\label{wave} \begin{cases} (\partial_t +{\bf v }^\pm\cdot\nabla)^2 P^\pm - \nabla\cdot(c^2_\pm\,\nabla \, P^\pm) =\mathcal F^\pm & {\rm in }\; \Omega^\pm(t) \, ,\\
[P]=0 \, ,\\ [c^2 \nabla P \cdot N] =-4 V_1(\partial_t+w_1\partial_1)\partial_1 f & {\rm on} \; \Gamma(t) \, ,
\end{cases} \end{equation} where \begin{equation*}\label{key} \mathcal F^\pm:=\partial_i v_j^\pm \partial_j v_i^\pm.
\end{equation*}
Notice that $ \mathcal F^\pm $ can be considered a lower order term in the second order differential equation for $ P^\pm $, unlike the right-hand side of the boundary condition for the jump of the normal derivatives, which is of order two in $ f $.
\section{The coupled problem \eqref{puntoN5}, \eqref{wave} with constant coefficients. The main result}
We consider a problem obtained by linearization of equation \eqref{puntoN5} and system \eqref{wave} about the constant velocity ${\bf v }^\pm=(v_1^\pm,0)$, constant pressure $P^+=P^-$, and flat front $\Gamma=\{x_2=0\}$, so that $N=(0,1)$, that is we study the equations \begin{equation}\label{puntoN6} \displaystyle (\partial_t + w_1\partial_1)^2 f + V_1^2 \partial^2_{11} f +\frac{c^2}2 \partial_2 (P^+ + P^-) =0 \,\qquad {\rm if} \; x_2=0 \, , \end{equation} \begin{equation}\label{wave2} \begin{cases} (\partial_t +v_1^\pm\partial_1)^2 P^\pm - c^2\Delta \, P^\pm =\mathcal F^\pm \quad & {\rm if }\; x_2\gtrless0 \, ,\\ [P]=0 \, ,\\ [c^2\partial_2 P ] =-4 V_1(\partial_t+w_1\partial_1)\partial_1 f & {\rm if} \; x_2=0 \, . \end{cases} \end{equation} In \eqref{puntoN6}, \eqref{wave2}, $v^\pm_1, c$ are constants and $c>0$, $w_1=(v_1^++v_1^-)/2, V_1=(v_1^+-v_1^-)/2$. $\mathcal F^\pm$ is a given source term. Equations \eqref{puntoN6}, \eqref{wave2} form a coupled system for $f$ and $P^\pm$, obtained by retaining the highest order terms of \eqref{puntoN5} and \eqref{wave}. We are interested in deriving from \eqref{puntoN6}, \eqref{wave2} an evolution equation for the front $f$.
For $\gamma\ge1$, we introduce $ \widetilde{f}:=e^{-\gamma t}f,\widetilde{P}^\pm:=e^{-\gamma t}P^\pm, \widetilde{\mathcal F}^\pm:=e^{-\gamma t}\mathcal F^\pm $ and consider the equations \begin{equation}\label{puntoN7} \displaystyle (\gamma+\partial_t + w_1\partial_1)^2 \widetilde{f} + V_1^2 \partial^2_{11} \widetilde{f} +\frac{c^2}2 \partial_2 (\widetilde{P}^+ + \widetilde{P}^-) =0 \,\qquad {\rm if} \; x_2=0 \, , \end{equation} \begin{equation}\label{wave3} \begin{cases} (\gamma+\partial_t +v_1^\pm\partial_1)^2 \widetilde{P}^\pm - c^2\Delta \, \widetilde{P}^\pm =\widetilde{\mathcal F}^\pm \quad & {\rm if }\; x_2\gtrless0 \, ,\\ [\widetilde{P}]=0 \, ,\\ [c^2\partial_2 \widetilde{P} ] =-4 V_1(\gamma+\partial_t+w_1\partial_1)\partial_1 \widetilde{f} & {\rm if} \; x_2=0 \, . \end{cases} \end{equation} System \eqref{puntoN7}, \eqref{wave3} is equivalent to \eqref{puntoN6}, \eqref{wave2}. Let us denote by $ \widehat{f},\widehat{P}^\pm,\widehat{\mathcal F}^\pm $ the Fourier transforms of $ \widetilde{f},\widetilde{P}^\pm, \widetilde{\mathcal F}^\pm $ in $(t,x_1)$, with dual variables denoted by $(\delta,\eta)$, and set $\tau=\gamma+i\delta$. We have the following result: \begin{theorem}\label{teo_equ} Let $\widetilde{\mathcal F}^\pm$ be such that \begin{equation}\label{cond_infF} \lim\limits_{x_2\to+\infty}\widehat{\mathcal F}^\pm(\cdot,\pm x_2)= 0 \, . \end{equation} Assume that $\widetilde{f},\widetilde{P}^\pm$ is a solution of \eqref{puntoN7}, \eqref{wave3} with \begin{equation}\label{cond_inf} \lim\limits_{x_2\to+\infty}\widehat{P}^\pm(\cdot,\pm x_2)= 0 \, . \end{equation} Then $f$ solves the second order pseudo-differential equation \begin{equation}\label{equ_f} \displaystyle \left( (\tau + iw_1\eta)^2 + V_1^2 \eta^2 \left( \frac{8(\tau+iw_1\eta)^2}{c^2(\mu^++\mu^-)^2} -1 \right)\right) \widehat{f} + \frac{\mu^+\mu^-}{\mu^++\mu^-}\,M =0 \, , \end{equation} where $\mu^\pm=\sqrt{\left(\frac{\tau+iv_1^\pm\eta}{c}\right)^2+\eta^2}$ is such that $ \Re\mu^\pm>0$ if $\Re\tau>0$, and \begin{equation}\label{def_M} M=M(\tau,\eta):= \frac{1}{\mu^+}\int_{0}^{+\infty}e^{-\mu^+ y}\widehat{\mathcal F}^+ (\cdot, y)\, dy - \frac{1}{\mu^-}\int_{0}^{+\infty}e^{-\mu^- y}\widehat{\mathcal F}^- (\cdot,- y)\, dy \, . \end{equation} \end{theorem} From the definition we see that the roots $ \mu^\pm $ are homogeneous functions of degree 1 in $(\tau, \eta)$. Therefore, the ratio $ (\tau+iw_1\eta)^2/(\mu^++\mu^-)^2 $ is homogeneous of degree 0. It follows that the symbol of \eqref{equ_f} is a homogeneous function of degree 2, see Remark \ref{remark52}. In this sense \eqref{equ_f} represents a second order pseudo-differential equation for $f$.
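Before stating the main result, we record an elementary numerical sanity check (an illustration only, not used in the proofs) of the two facts just mentioned: the branch condition $\Re\mu^\pm>0$ for $\Re\tau>0$ and the degree-two homogeneity of the symbol of \eqref{equ_f}. The values of $c$, $v_1^\pm$ and of the sampled frequencies below are arbitrary.
\begin{verbatim}
import numpy as np

c, v1p, v1m = 1.0, 1.2, -0.7
w1, V1 = (v1p + v1m) / 2, (v1p - v1m) / 2

def mu(tau, eta, v1):
    # root of c^2 s^2 = (tau + i v1 eta)^2 + c^2 eta^2 with positive real part
    s = np.sqrt(((tau + 1j * v1 * eta) / c) ** 2 + eta ** 2)
    return s if s.real > 0 else -s

def Sigma(tau, eta):
    # symbol of the front equation (equ_f)
    m = mu(tau, eta, v1p) + mu(tau, eta, v1m)
    z = tau + 1j * w1 * eta
    return z ** 2 + V1 ** 2 * eta ** 2 * (8 * z ** 2 / (c ** 2 * m ** 2) - 1)

rng = np.random.default_rng(0)
for _ in range(5):
    tau = rng.uniform(0.1, 2.0) + 1j * rng.uniform(-2.0, 2.0)    # Re(tau) > 0
    eta, lam = rng.uniform(-2.0, 2.0), rng.uniform(0.5, 3.0)
    assert mu(tau, eta, v1p).real > 0 and mu(tau, eta, v1m).real > 0
    assert np.isclose(Sigma(lam * tau, lam * eta), lam ** 2 * Sigma(tau, eta))
print("branch condition and degree-2 homogeneity confirmed on the samples")
\end{verbatim}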
The main result of the paper is the following theorem.
\begin{theorem}\label{teoexist}
Assume $\frac{v}{c}>\sqrt{2}$, and let $ \mathcal F^+\in L^2({\mathbb R}^+;H^s_\gamma({\mathbb R}^2)) , \mathcal F^-\in L^2({\mathbb R}^-;H^s_\gamma({\mathbb R}^2))$. There exists a unique solution $f\in H^{s+1}_\gamma({\mathbb R}^2)$ of equation \eqref{equ_f} (with $w_1=0$), satisfying the estimate
\begin{equation}\label{stimafF1}
\gamma^3 \|f\|^2_{H^{s+1}_\gamma({\mathbb R}^2)} \le C\left( \|\mathcal F^+\|^2_{L^2({\mathbb R}^+;H^s_\gamma({\mathbb R}^2))}+\|\mathcal F^-\|^2_{L^2({\mathbb R}^-;H^s_\gamma({\mathbb R}^2))}\right) , \qquad\forall \gamma\ge1\, ,
\end{equation}
for a suitable constant $C>0$ independent of $\mathcal F^\pm$ and $\gamma$. \end{theorem}
See Remark \ref{ell_hyp} for a discussion about the different cases $ \frac{v}{c}\gtrless\sqrt{2} $ in relation with the classical stability analysis \cite{CS04MR2095445,FM63MR0154509,M58MR0097930,S00MR1775057}.
\subsection{Weighted Sobolev spaces and norms}\label{sec2.w} We are going to introduce certain weighted Sobolev spaces in order to prove Theorem \ref{teoexist}. Functions are defined over the two half-spaces $\{(t,x_1,x_2)\in\mathbb{R}^3:x_2\gtrless0\}$; the boundary of the half-spaces is identified to $\mathbb{R}^2$. For all $s\in\mathbb{R}$ and for all $\gamma\geq 1$, the usual Sobolev space $H^s(\mathbb{R}^2)$ is equipped with the following norm: \begin{align*}
\|v\|_{s,\gamma}^2:=\frac{1}{(2\pi)^2} \iint_{\mathbb{R}^2}\Lambda^{2s}(\tau,\eta) |\widehat{v}(\delta,\eta)|^2\,\mathrm{d} \delta \,\mathrm{d}\eta,\qquad
\Lambda^{s}(\tau,\eta):=(\gamma^2+\delta^2+\eta^2)^{\frac{s}{2}}=(|\tau|^2+\eta^2)^{\frac{s}{2}}, \end{align*} where $\widehat{v}(\delta,\eta)$ is the Fourier transform of $v(t,x_1)$ and $ \tau=\gamma+i\delta $. We will abbreviate the usual norm of $L^2(\mathbb{R}^2)$ as \begin{align*}
\|\cdot\|:=\|\cdot\|_{0,\gamma}\, . \end{align*} The scalar product in $L^2(\mathbb{R}^2)$ is denoted as follows: \begin{align*} \langle a,b\rangle:=\iint_{\mathbb{R}^2} a(x)\overline{b(x)}\,\mathrm{d} x, \end{align*} where $\overline{b(x)}$ is the complex conjugate of $b(x)$.
For $s\in\mathbb{R}$ and $\gamma\geq 1$, we introduce the weighted Sobolev space $H^{s}_{\gamma}(\mathbb{R}^2)$ as \begin{align*} H^{s}_{\gamma}(\mathbb{R}^2)&:=\left\{ u\in\mathcal{D}'(\mathbb{R}^2)\,:\, \mathrm{e}^{-\gamma t}u(t,x_1)\in H^{s}(\mathbb{R}^2) \right\}, \end{align*}
and its norm $\|u\|_{H^{s}_{\gamma}(\mathbb{R}^2)}:=\|\mathrm{e}^{-\gamma t}u\|_{s,\gamma}$. We write $L^2_{\gamma}(\mathbb{R}^2):=H^0_{\gamma}(\mathbb{R}^2)$ and $\|u\|_{L^2_{\gamma}(\mathbb{R}^2)}:=\|\mathrm{e}^{-\gamma t}u\|$.
We define $L^2(\mathbb{R}^\pm;H^{s}_{\gamma}(\mathbb{R}^2))$ as the spaces of distributions with finite norm \begin{align*}
\|u\|_{L^2(\mathbb{R}^\pm;H^s_{\gamma}(\mathbb{R}^2))}^2:=\int_{\mathbb{R}^+}\|u(\cdot,\pm x_2)\|_{H^s_{\gamma}(\mathbb{R}^2)}^2\,\mathrm{d} x_2 \, . \end{align*}
\section{Proof of Theorem \ref{teo_equ}}
In order to obtain an evolution equation for $f$, we will find an explicit formula for the solution $P^\pm$ of \eqref{wave3}, and substitute into \eqref{puntoN7}.
We first perform the Fourier transform of problem \eqref{puntoN7}, \eqref{wave3} and obtain \begin{equation}\label{puntoN8} \displaystyle (\tau + iw_1\eta)^2 \widehat{f} - V_1^2 \eta^2\widehat{f} +\frac{c^2}2 \partial_2 (\widehat{P}^+ + \widehat{P}^-) =0 \,\qquad {\rm if} \; x_2=0 \, , \end{equation} \begin{equation}\label{wave4} \begin{cases} (\tau +iv_1^\pm\eta)^2 \widehat{P}^\pm + c^2\eta^2 \widehat{P}^\pm -c^2\partial^2_{22} \widehat{P}^\pm =\widehat{\mathcal F}^\pm \quad & {\rm if }\; x_2\gtrless0 \, ,\\ [\widehat{P}]=0 \, ,\\ [c^2\partial_2 \widehat{P} ] =-4i\eta V_1(\tau+iw_1\eta) \widehat{f} & {\rm if} \; x_2=0 \, . \end{cases} \end{equation} To solve \eqref{wave4} we take the Laplace transform in $x_2$ with dual variable $s\in{\mathbb C} $, defined by \begin{equation*} \mathcal{L}[\widehat{P}^\pm](s)=\int_0^\infty e^{-sx_2}\widehat{P}^\pm(\cdot,\pm x_2)\, dx_2\,, \end{equation*} \begin{equation*} \mathcal{L}[\widehat{\mathcal F }^\pm](s)=\int_0^\infty e^{-sx_2}\widehat{\mathcal F }^\pm(\cdot,\pm x_2)\, dx_2\,. \end{equation*} For the sake of simplicity of notation, here we neglect the dependence on $\tau,\eta$. From \eqref{wave4} we obtain \begin{equation*} \left((\tau+iv^\pm_1\eta)^2+c^2\eta^2-c^2s^2\right)\mathcal{L}[\widehat{P}^\pm](s)= \mathcal{L}[\widehat{\mathcal F }^\pm](s) - c^2s\widehat{P}^\pm(0) \mp c^2 \partial_2\widehat{P}^\pm(0)\, . \end{equation*} It follows that \begin{equation}\label{laplace} \mathcal{L}[\widehat{P}^\pm](s)= \frac{c^2s\widehat{P}^\pm(0) \pm c^2 \partial_2\widehat{P}^\pm(0)}{c^2s^2-(\tau+iv^\pm_1\eta)^2-c^2\eta^2} - \frac{\mathcal{L}[\widehat{\mathcal F }^\pm](s)}{c^2s^2-(\tau+iv^\pm_1\eta)^2-c^2\eta^2}\, . \end{equation} Let us denote by $\mu^\pm=\sqrt{\left(\frac{\tau+iv_1^\pm\eta}{c}\right)^2+\eta^2}$ the root of the equation (in $s$) \[c^2s^2-(\tau+iv^\pm_1\eta)^2-c^2\eta^2=0\,,\] such that \begin{equation}\label{mu} \Re\mu^\pm>0\quad{\rm if }\quad \gamma>0 \, \end{equation} ($\Re$ denotes the real part). We show this property in Lemma \ref{lemma_mu}. Recalling that $\mathcal{L}[e^{\alpha x}H(x)](s)=\frac1{s-\alpha}$ for any $\alpha\in{\mathbb C}$, where $H(x)$ denotes the Heaviside function, we take the inverse Laplace transform of \eqref{laplace} and obtain \begin{multline}\label{{formulaQ+}} \widehat{P}^+(\cdot,x_2)=\widehat{P}^+(0)\cosh(\mu^+ x_2) +\partial_2 \widehat{P}^+(0) \frac{\sinh(\mu^+ x_2)}{\mu^+}\\ - \int_{0}^{x_2} \frac{\sinh(\mu^+ (x_2-y))}{c^2\mu^+} \widehat{\mathcal F }^+(\cdot,y)\, dy \, ,\qquad x_2>0 \,, \end{multline} \begin{multline}\label{{formulaQ-}} \widehat{P}^-(\cdot,-x_2)=\widehat{P}^-(0)\cosh(\mu^- x_2) -\partial_2 \widehat{P}^-(0) \frac{\sinh(\mu^- x_2)}{\mu^-}\\ - \int_{0}^{x_2} \frac{\sinh(\mu^- (x_2-y))}{c^2\mu^-} \widehat{\mathcal F }^-(\cdot,-y)\, dy \, ,\qquad x_2>0 \, . \end{multline} We need to determine the values of $\widehat{P}^\pm(0) , \partial_2 \widehat{P}^\pm(0) $ in \eqref{{formulaQ+}}, \eqref{{formulaQ-}}. Two conditions are given by the boundary conditions in \eqref{wave4}, and two more conditions are obtained by imposing the behavior at infinity \eqref{cond_inf}. Recalling \eqref{mu}, under the assumption \eqref{cond_infF} it is easy to show that \begin{equation}\label{cond_infF_int} \lim\limits_{x_2\to+\infty}\int_{0}^{x_2}e^{-\mu^\pm(x_2-y)}\widehat{\mathcal F}^\pm (\cdot,\pm y)\, dy= 0 \, . \end{equation}
From \eqref{cond_inf}, \eqref{{formulaQ+}}, \eqref{{formulaQ-}}, \eqref{cond_infF_int} it follows that \begin{equation}\label{cond_inf2} \widehat{P}^\pm(0) \pm \frac{1}{\mu^\pm} \partial_2 \widehat{P}^\pm(0) - \frac{1}{c^2\mu^\pm}\int_{0}^{+\infty}e^{-\mu^\pm y}\widehat{\mathcal F}^\pm (\cdot,\pm y)\, dy= 0 \, . \end{equation} Collecting the boundary conditions in \eqref{wave4} and \eqref{cond_inf2} gives the linear system \begin{equation}\label{system} \begin{cases} \widehat{P}^+(0)-\widehat{P}^-(0)=0 \, ,\\ \partial_2 \widehat{P}^+(0) - \partial_2 \widehat{P}^-(0) =-4i\eta \frac{ V_1}{c}\left(\frac{\tau+iw_1\eta}{c}\right) \widehat{f} \\ \mu^+\widehat{P}^+(0) + \partial_2 \widehat{P}^+(0)= \frac{1}{c^2}\int_{0}^{+\infty}e^{-\mu^+ y}\widehat{\mathcal F}^+ (\cdot, y)\, dy \\ \mu^-\widehat{P}^-(0) - \partial_2 \widehat{P}^-(0)=\frac{1}{c^2}\int_{0}^{+\infty}e^{-\mu^- y}\widehat{\mathcal F}^- (\cdot,- y)\, dy \, . \end{cases} \end{equation} The determinant of the above linear system equals $\mu^++\mu^- $; from \eqref{mu} it never vanishes as long as $ \gamma>0. $ Solving \eqref{system} gives \begin{equation}\label{somma_deriv} \partial_2 \widehat{P}^+(0) + \partial_2 \widehat{P}^-(0) =-4i\eta \frac{ V_1}{c}\left(\frac{\tau+iw_1\eta}{c}\right) \widehat{f}\; \frac{\mu^+-\mu^-}{\mu^++\mu^-} + 2\frac{\mu^+\mu^-}{\mu^++\mu^-}\frac{M}{c^2} \, , \end{equation} where we have set \begin{equation*} M:= \frac{1}{\mu^+}\int_{0}^{+\infty}e^{-\mu^+ y}\widehat{\mathcal F}^+ (\cdot, y)\, dy - \frac{1}{\mu^-}\int_{0}^{+\infty}e^{-\mu^- y}\widehat{\mathcal F}^- (\cdot,- y)\, dy \, . \end{equation*} We substitute \eqref{somma_deriv} into \eqref{puntoN8} and obtain the equation for $\widehat{f}$ \begin{equation}\label{equ_f0} \displaystyle \left( (\tau + iw_1\eta)^2 - V_1^2 \eta^2 -2i { V_1} \eta\left(\tau+iw_1\eta\right) \; \frac{\mu^+-\mu^-}{\mu^++\mu^-} \right) \widehat{f} + \frac{\mu^+\mu^-}{\mu^++\mu^-}M =0 \, . \end{equation} Finally, we compute \[ \frac{\mu^+-\mu^-}{\mu^++\mu^-} =4\frac{V_1}{c^2}\frac{i\eta(\tau+iw_1\eta)}{(\mu^++\mu^-)^2}, \] and substituting this last expression in \eqref{equ_f0} we can rewrite it as \begin{equation*} \displaystyle \left( (\tau + iw_1\eta)^2 + V_1^2 \eta^2 \left(8 \left(\frac{\tau+iw_1\eta}{c(\mu^++\mu^-)}\right)^2 -1 \right)\right) \widehat{f} + \frac{\mu^+\mu^-}{\mu^++\mu^-}M =0 \, , \end{equation*} that is \eqref{equ_f}.
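For completeness, the last identity follows from the difference of squares \[ (\mu^+)^2-(\mu^-)^2=\frac{(\tau+iv_1^+\eta)^2-(\tau+iv_1^-\eta)^2}{c^2} =\frac{i(v_1^+-v_1^-)\eta\,\big(2\tau+i(v_1^++v_1^-)\eta\big)}{c^2} =\frac{4iV_1\eta\,(\tau+iw_1\eta)}{c^2}\,, \] where the last equality uses $v_1^+-v_1^-=2V_1$ and $v_1^++v_1^-=2w_1$ (consistently with the specialization $v_1^\pm=\pm v$, $w_1=0$, $V_1=v$ adopted below), together with $\mu^+-\mu^-=\big((\mu^+)^2-(\mu^-)^2\big)/(\mu^++\mu^-)$.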
\section{The symbol of the pseudo-differential equation \eqref{equ_f} for the front}
Let us denote the symbol of \eqref{equ_f} by $ \Sigma: $
\[ \Sigma=\Sigma(\tau,\eta):= (\tau + iw_1\eta)^2 + V_1^2 \eta^2
\left( \frac{8(\tau+iw_1\eta)^2}{c^2(\mu^+(\tau,\eta)+\mu^-(\tau,\eta))^2} -1 \right).\] In order to take the homogeneity into account, we define the hemisphere: \begin{align*} \Xi_1:=\left\{(\tau,\eta)\in \mathbb{C}\times\mathbb{R}\, :\,
|\tau|^2+\eta^2=1,\Re \tau\geq 0 \right\}, \end{align*} and the set of ``frequencies'': \begin{align*} \Xi:=\left\{(\tau,\eta)\in \mathbb{C}\times\mathbb{R}\, :\, \Re \tau\geq 0, (\tau,\eta)\ne (0,0) \right\}=(0,\infty)\cdot\Xi_1 \,. \end{align*} From now on we assume \[ v^+_1=v>0, \qquad v^-_1=-v \,, \] so that \[ w_1=0, \qquad V_1=v\, . \] From this assumption it follows that \begin{equation}\label{def_Sigma} \Sigma(\tau,\eta)= \tau^2 + v^2 \eta^2 \left(8\left( \frac{\tau/c}{\mu^+(\tau,\eta)+\mu^-(\tau,\eta)}\right)^2 -1 \right). \end{equation}
\subsection{Study of the roots $\mu^\pm$} \begin{lemma}\label{lemma_mu} Let $ (\tau,\eta)\in\Xi $ and let us consider the equation \begin{equation}\label{equ_mu} s^2= \left(\frac{\tau \pm iv\eta}{c}\right)^2+\eta^2. \end{equation} For both cases $\pm$ of \eqref{equ_mu} there exists one root, denoted by $ \mu^\pm=\mu^\pm(\tau,\eta) $, such that $ \Re\mu^\pm>0 $ as long as $ \Re\tau>0 $. The other root is $ -\mu^\pm $. The roots $ \mu^\pm$ admit a continuous extension to points $ (\tau,\eta)=(i\delta,\eta)\in\Xi $, i.e. with $ \Re\tau=0 $. Specifically we have:
(i) if $ \eta=0 $, $ \mu^\pm(i\delta,0)=i\delta/c $ ;
(ii) if $ \eta\not=0 $, \begin{equation}\label{mu+} \begin{array}{ll} \mu^+(i\delta,\eta)=\sqrt{-\left(\frac{\delta +v\eta}{c}\right)^2+\eta^2} \qquad &{\it if } \; -\left(\frac{v}{c}+1\right) <\frac{\delta}{c\eta}<-\left(\frac{v}{c}-1\right)\, , \\ \mu^+(i\delta,\eta)=0 \qquad &{\it if } \; \frac{\delta}{c\eta}=-\left(\frac{v}{c}\pm 1\right)\, , \\ \mu^+(i\delta,\eta)=-i\sgn(\eta) \sqrt{\left(\frac{\delta +v\eta}{c}\right)^2-\eta^2} \qquad &{\it if }\; \frac{\delta}{c\eta}<-\left(\frac{v}{c}+1\right)\, , \\ \mu^+(i\delta,\eta)=i\sgn(\eta) \sqrt{\left(\frac{\delta +v\eta}{c}\right)^2-\eta^2} \qquad &{\it if }\; \frac{\delta}{c\eta}>-\left(\frac{v}{c}-1\right)\, , \end{array} \end{equation} and \begin{equation}\label{mu-} \begin{array}{ll} \mu^-(i\delta,\eta)=\sqrt{-\left(\frac{\delta -v\eta}{c}\right)^2+\eta^2} \qquad &{\it if } \; \frac{v}{c}-1 <\frac{\delta}{c\eta}<\frac{v}{c}+1\, , \\ \mu^-(i\delta,\eta)=0 \qquad &{\it if } \; \frac{\delta}{c\eta}=\frac{v}{c}\pm 1\, , \\ \mu^-(i\delta,\eta)=-i\sgn(\eta) \sqrt{\left(\frac{\delta -v\eta}{c}\right)^2-\eta^2} \qquad &{\it if }\; \frac{\delta}{c\eta}<\frac{v}{c}-1\, , \\ \mu^-(i\delta,\eta)=i\sgn(\eta) \sqrt{\left(\frac{\delta -v\eta}{c}\right)^2-\eta^2} \qquad &{\it if }\; \frac{\delta}{c\eta}>\frac{v}{c}+1\, . \end{array} \end{equation} \end{lemma} \begin{proof} (i) If $ \eta=0 $, \eqref{equ_mu} reduces to $s^2=(\tau/c)^2$. We choose $\mu^\pm=\tau/c$ which has $\Re\mu^\pm>0$ if $\Re\tau=\gamma>0$; obviously, the continuous extension for $\Re\tau=0$ is $\mu^\pm=i\delta/c$. (ii) Assume $ \eta\not=0.$ Let us denote \begin{equation*} \alpha^\pm:=\left(\frac{\tau \pm iv\eta}{c}\right)^2+\eta^2=\frac{\gamma^2-(\delta \pm v\eta)^2+c^2\eta^2}{c^2} +2i\gamma\frac{\delta\pm v\eta}{c^2}\, . \end{equation*} For $\gamma>0$, $\Im\alpha^\pm=0$ if and only if $\delta\pm v\eta=0$, and
$\alpha^\pm_{|\delta\pm v\eta=0}=(\gamma/c)^2+\eta^2>0$. It follows that either $\alpha^\pm\in{\mathbb R}, \alpha^\pm>0$, or $\alpha^\pm\in{\mathbb C}$ with $\Im\alpha^\pm\not=0$. In both cases $\alpha^\pm$ has two square roots, one with strictly positive real part (that we denote by $\mu^\pm$), the other one with strictly negative real part. For the continuous extension in points with $\Re\tau=0,$ we have \begin{equation}\label{caso+} \mu^\pm(i\delta,\eta)=\sqrt{-\left(\frac{\delta \pm v\eta}{c}\right)^2+\eta^2} \qquad {\rm if } \quad -\left(\frac{\delta \pm v\eta}{c}\right)^2+\eta^2\ge0\, , \end{equation} and \begin{equation}\label{caso-} \mu^\pm(i\delta,\eta)=i\sgn(\delta\pm v\eta)\sqrt{\left(\frac{\delta \pm v\eta}{c}\right)^2-\eta^2} \qquad {\rm if } \quad -\left(\frac{\delta \pm v\eta}{c}\right)^2+\eta^2<0\, . \end{equation} We also observe that \begin{equation}\label{caso+2} \begin{array}{ll} \sgn(\delta+v\eta)=-\sgn(\eta) \qquad{\rm if } \quad \frac{\delta}{c\eta}<-\left(\frac{v}{c}+1\right) ,\\ \sgn(\delta+v\eta)=\sgn(\eta)\qquad{\rm if } \quad \frac{\delta}{c\eta}>-(\frac{v}{c}-1) , \end{array} \end{equation} and \begin{equation}\label{caso-2} \begin{array}{ll} \sgn(\delta-v\eta)=-\sgn(\eta) \qquad{\rm if } \quad \frac{\delta}{c\eta}<\frac{v}{c}-1 ,\\ \sgn(\delta-v\eta)=\sgn(\eta)\qquad{\rm if } \quad \frac{\delta}{c\eta}>\frac{v}{c}+1 . \end{array} \end{equation} From \eqref{caso+}--\eqref{caso-2} we obtain \eqref{mu+}, \eqref{mu-}. \end{proof}
\begin{corollary}\label{coroll_mu} From \eqref{mu+}, \eqref{mu-} the roots $ \mu^\pm $ only vanish in points $ (\tau,\eta)=(i\delta,\eta) $ with \[ \delta=-(v\pm c) \eta \qquad\forall\eta\not=0 \qquad ({\rm where}\;\mu^+=0)\, ,\] or \[ \delta=(v\pm c) \eta \qquad\forall\eta\not=0\qquad ({\rm where}\;\mu^-=0)\, .\] If $v\not= c $, the above four families of points \[ \delta=-(v+ c) \eta, \quad \delta=-(v- c)\eta, \quad \delta=(v- c) \eta, \quad \delta=(v+ c) \eta \, ,
\] are always mutually distinct. If $ v=c $, the two families in the middle coincide and we have
\begin{equation}\label{sommavc} \mu^+(-2ic\eta,\eta)=0, \qquad \mu^+(0,\eta)=\mu^-(0,\eta)=0, \qquad \mu^-(2ic\eta,\eta)=0, \qquad\forall\eta\not=0\, .
\end{equation} \end{corollary}
From \eqref{def_Sigma}, the symbol $ \Sigma $ is not defined in points $ (\tau,\eta)\in\Xi$ where $\mu^++\mu^-$ vanishes. From Lemma \ref{lemma_mu} we already know that $\Re\mu^\pm>0$ in all points with $\Re\tau>0$. It follows that $\Re(\mu^++\mu^-)>0$ and thus $\mu^++\mu^-\not=0$ in all such points. Therefore the symbol is defined for $\Re\tau>0$.
It remains to study whether $\mu^++\mu^-$ vanishes in points $ (\tau,\eta)=(i\delta,\eta) $ with $ \Re\tau=0 $. From Corollary \ref{coroll_mu} we obtain that if $v\not= c $ then $\mu^++\mu^-\not=0$ in all points with $\delta=-(v\pm c) \eta$ and $\delta=(v\pm c) \eta$ (in these points, if $\mu^+=0$ then $\mu^-\not=0$ and vice versa). If $v= c $, then $\mu^+(0,\eta)+\mu^-(0,\eta)=0$.
From now on we adopt the usual terminology: $ v>c $ is the {\it supersonic} case, $ v<c $ is the {\it subsonic} case, $ v=c $ is the {\it sonic} case. The next lemma concerns the supersonic case.
\begin{lemma}[$ v>c $]\label{super_mu} Let $ (\tau,\eta)=(i\delta,\eta)\in\Xi $ such that $ \Re\tau=0, \eta\not=0 $. For all such points the following facts hold. \begin{itemize} \item[(i)] If $\frac{\delta}{c\eta}<-\left(\frac{v}{c}+1\right)$ then $\mu^+\in i{\mathbb R}$, $\mu^-\in i{\mathbb R}$, $\mu^++\mu^-\not=0$, $\Re(\mu^+\mu^-)<0$; \\ \item[(ii)] If $-\left(\frac{v}{c}+1\right) <\frac{\delta}{c\eta}<-\left(\frac{v}{c}-1\right) $ then $\mu^+\in {\mathbb R}^+$, $\mu^-\in i{\mathbb R}$, $\mu^++\mu^-\not=0$, $\Re(\mu^+\mu^-)=0$; \\ \item[(iii)] If $-\left(\frac{v}{c}-1\right) <\frac{\delta}{c\eta}< \frac{v}{c}-1 $ then $\mu^+\in i{\mathbb R}$, $\mu^-\in i{\mathbb R}$, $\mu^+(0,\eta)+\mu^-(0,\eta)=0$, $\Re(\mu^+\mu^-)>0$; \\ \item[(iv)] If $\frac{v}{c}-1 <\frac{\delta}{c\eta}< \frac{v}{c}+1 $ then $\mu^+\in i{\mathbb R}$, $\mu^-\in {\mathbb R}^+$, $\mu^++\mu^-\not=0$, $\Re(\mu^+\mu^-)=0$; \\ \item[(v)] If $\frac{\delta}{c\eta}> \frac{v}{c}+1 $ then $\mu^+\in i{\mathbb R}$, $\mu^-\in i{\mathbb R}$, $\mu^++\mu^-\not=0$, $\Re(\mu^+\mu^-)<0$.
\end{itemize} \end{lemma} We emphasize that the above properties hold in all points $(i\delta,\eta)$ as indicated according to the value of ${\delta}/({c\eta})$, except for the case (iii) where $\mu^++\mu^-=0$ if and only if $(\delta,\eta)=(0,\eta).$ From \eqref{sommavc} and (iii) we have \begin{equation}\label{somma_mu} {\rm if }\;\, v\ge c \qquad \mu^+(0,\eta)+\mu^-(0,\eta)=0 \qquad \forall\eta\not=0\, . \end{equation} \begin{proof}[Proof of Lemma \ref{super_mu}] (i) If $\frac{\delta}{c\eta}<-\left(\frac{v}{c}+1\right)$ then $\mu^+\in i{\mathbb R}$ follows directly from $\eqref{mu+}_3$ and $\mu^-\in i{\mathbb R}$ follows from $\eqref{mu-}_3$ because $\frac{\delta}{c\eta}<\frac{v}{c}-1$. Moreover, from \eqref{mu+}, \eqref{mu-} we have \[ \mu^++\mu^-=-i\sgn(\eta)\left(\sqrt{\left(\frac{\delta +v\eta}{c}\right)^2-\eta^2}+\sqrt{\left(\frac{\delta -v\eta}{c}\right)^2-\eta^2} \right)\not=0 \,,
\]
\[
\Re(\mu^+\mu^-)=- \sqrt{\left(\left(\frac{\delta +v\eta}{c}\right)^2-\eta^2 \right) \left( \left(\frac{\delta -v\eta}{c}\right)^2-\eta^2\right)} <0\,.
\] The cases (ii) and (iv) follow directly from \eqref{mu+}, \eqref{mu-}.
(iii) If $-\left(\frac{v}{c}-1\right) <\frac{\delta}{c\eta}< \frac{v}{c}-1 $ then $\mu^\pm\in i{\mathbb R}$ follows from \eqref{mu+}, \eqref{mu-}. Moreover it holds \[ \mu^++\mu^-=i\sgn(\eta)\left(\sqrt{\left(\frac{\delta +v\eta}{c}\right)^2-\eta^2}-\sqrt{\left(\frac{\delta -v\eta}{c}\right)^2-\eta^2} \right)=0 \quad\mbox{if and only if }\, \delta=0 \,, \] recalling that here $\eta\not=0$. It follows that $\mu^+(0,\eta)+\mu^-(0,\eta)=0$ for all $\eta\not=0$. We also have
\[ \Re(\mu^+\mu^-)= \sqrt{\left(\left(\frac{\delta +v\eta}{c}\right)^2-\eta^2 \right) \left( \left(\frac{\delta -v\eta}{c}\right)^2-\eta^2\right)} >0\,. \] The proof of case (v) is similar to the proof of case (i). \end{proof}
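As an illustration, for $v=2c$ the five regimes of Lemma \ref{super_mu} correspond to $\frac{\delta}{c\eta}<-3$, $-3<\frac{\delta}{c\eta}<-1$, $-1<\frac{\delta}{c\eta}<1$, $1<\frac{\delta}{c\eta}<3$, and $\frac{\delta}{c\eta}>3$, respectively.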
The next lemma concerns the subsonic case.
\begin{lemma}[$ v<c $]\label{sub_mu}
Let $ (\tau,\eta)=(i\delta,\eta)\in\Xi $ such that $ \Re\tau=0, \eta\not=0 $. For all such points the following facts hold.
\begin{itemize}
\item[(i)] If $\frac{\delta}{c\eta}<-\left(\frac{v}{c}+1\right)$ then $\mu^+\in i{\mathbb R}$, $\mu^-\in i{\mathbb R}$, $\mu^++\mu^-\not=0$, $\Re(\mu^+\mu^-)<0$;
\\
\item[(ii)] If $-\left(\frac{v}{c}+1\right) <\frac{\delta}{c\eta}<\frac{v}{c}-1
$ then $\mu^+\in {\mathbb R}^+$, $\mu^-\in i{\mathbb R}$, $\mu^++\mu^-\not=0$, $\Re(\mu^+\mu^-)=0$;
\\
\item[(iii)] If $\frac{v}{c}-1 <\frac{\delta}{c\eta}< -\left(\frac{v}{c}-1\right)
$ then $\mu^+\in {\mathbb R}^+$, $\mu^-\in {\mathbb R}^+$, $\mu^++\mu^-\not=0$, $\Re(\mu^+\mu^-)>0$;
\\
\item[(iv)] If $-\left(\frac{v}{c}-1\right) <\frac{\delta}{c\eta}< \frac{v}{c}+1
$ then $\mu^+\in i{\mathbb R}$, $\mu^-\in {\mathbb R}^+$, $\mu^++\mu^-\not=0$, $\Re(\mu^+\mu^-)=0$;
\\
\item[(v)] If $\frac{\delta}{c\eta}> \frac{v}{c}+1
$ then $\mu^+\in i{\mathbb R}$, $\mu^-\in i{\mathbb R}$, $\mu^++\mu^-\not=0$, $\Re(\mu^+\mu^-)<0$.
\end{itemize} \end{lemma} \begin{proof} The proof is similar to the proof of Lemma \ref{super_mu} and so we omit the details. \end{proof}
\begin{lemma}[$ v=c $]\label{eq_mu}
Let $ (\tau,\eta)=(i\delta,\eta)\in\Xi $ such that $ \Re\tau=0, \eta\not=0 $. For all such points the following facts hold.
\begin{itemize}
\item[(i)] If $\frac{\delta}{c\eta}<-2$ then $\mu^+\in i{\mathbb R}$, $\mu^-\in i{\mathbb R}$, $\mu^++\mu^-\not=0$, $\Re(\mu^+\mu^-)<0$;
\\
\item[(ii)] If $-2<\frac{\delta}{c\eta}<0$ then $\mu^+\in {\mathbb R}^+$, $\mu^-\in i{\mathbb R}$, $\mu^++\mu^-\not=0$, $\Re(\mu^+\mu^-)=0$;
\\
\item[(iii)] If $0 <\frac{\delta}{c\eta}< 2
$ then $\mu^+\in i{\mathbb R}$, $\mu^-\in {\mathbb R}^+$, $\mu^++\mu^-\not=0$, $\Re(\mu^+\mu^-)=0$;
\\
\item[(iv)] If $\frac{\delta}{c\eta}> 2
$ then $\mu^+\in i{\mathbb R}$, $\mu^-\in i{\mathbb R}$, $\mu^++\mu^-\not=0$, $\Re(\mu^+\mu^-)<0$.
\end{itemize} \end{lemma} \begin{proof} The proof is similar to the proof of Lemma \ref{super_mu} and so we omit the details. \end{proof}
\begin{corollary}\label{zerosommamu} From Lemma \ref{super_mu}, see also \eqref{somma_mu}, and Lemma \ref{sub_mu} it follows that $\mu^++\mu^-=0$ at $ (\tau,\eta)\in\Xi$ if and only if $\tau=0$ and $v\ge c$. \end{corollary} Although $\mu^++\mu^-=0$ at the points $(0,\eta)\in\Xi$ when $v\ge c$, we can nevertheless define $\Sigma(0,\eta)$ by continuous extension, see Lemma \ref{extend}.
We are also interested in whether the difference $\mu^+-\mu^-$ vanishes somewhere. \begin{lemma}\label{diff_mu} Let $ (\tau,\eta)\in\Xi$. Then $\mu^+(\tau,\eta)=\mu^-(\tau,\eta)$ if and only if:
\begin{itemize}
\item[(i)] $ (\tau,\eta)= (\tau,0) $,
\item[(ii)] $ (\tau,\eta)= (0,\eta)$ and $v\le c$.
\end{itemize} \end{lemma} \begin{proof}
From \eqref{equ_mu} we obtain that $(\mu^+)^2 =(\mu^-)^2$ if and only if $\eta=0$ or $\tau=0$. If $\eta=0$ then $\mu^+ =\mu^-=\tau/c$ which gives the first case. If $\tau=0$ then $(\mu^+)^2
=(\mu^-)^2=(1-(v/c)^2)\eta^2$. For $1-(v/c)^2<0$ we obtain from Lemma \ref{lemma_mu} $\mu^\pm=\pm i\eta\sqrt{(v/c)^2-1}$ which yields $\mu^+-\mu^-=2i\eta\sqrt{(v/c)^2-1}\not=0$. For $1-(v/c)^2\ge 0$ we obtain $\mu^\pm= \sqrt{1-(v/c)^2}|\eta|$, that is the second case. \end{proof} From Lemma \ref{lemma_mu} we know that the roots $\mu^\pm$ satisfy $\Re\mu^\pm>0$ if $\Re\tau=\gamma>0$. Actually we can prove more than that. \begin{lemma}\label{stima_Re_mu}
Let $ (\tau,\eta)\in\Xi$ with $\Re\tau=\gamma>0$. Then \begin{equation}\label{est_Re_mu} \Re\mu^\pm(\tau,\eta)\ge \frac{1}{\sqrt{2}\,c}\,\gamma\, .
\end{equation}
\end{lemma} \begin{proof}
We consider $\mu^+$. From \eqref{equ_mu} we obtain \[ (\Re\mu^+)^2-(\Im\mu^+)^2=\frac{1}{c^2}(\gamma^2-(\delta+v\eta)^2)+\eta^2, \qquad \Re\mu^+\Im\mu^+=\frac{1}{c^2}\gamma(\delta+v\eta) \,.
\] Since $\Re\mu^+>0$ for $\gamma>0$, we can divide by $\Re\mu^+$ the second equation, then substitute the value of $\Im\mu^+$ into the first one and obtain \[ (\Re\mu^+)^4+\alpha(\Re\mu^+)^2+\beta=0\,,
\]
where we have set
\[
\alpha=-\left(\frac{1}{c^2}(\gamma^2-(\delta+v\eta)^2)+\eta^2
\right)\,, \qquad \beta=-\frac{1}{c^4}\gamma^2(\delta+v\eta)^2 \le0 \,.
\]
Note that $\alpha^2-4\beta>0$ for $\gamma>0$: indeed $\beta\le0$, and if $\beta=0$ then $\delta+v\eta=0$, so that $\alpha=-\left((\gamma/c)^2+\eta^2\right)<0$. We then obtain
\[
2 (\Re\mu^+)^2=-\alpha+\sqrt{\alpha^2-4\beta}\ge
(\gamma/c)^2+\left|\frac{1}{c^2}(\delta+v\eta)^2-\eta^2
\right| -\left(\frac{1}{c^2}(\delta+v\eta)^2-\eta^2
\right) \ge (\gamma/c)^2 \,,
\]
which gives \eqref{est_Re_mu} for $\mu^+$. The proof for $\mu^-$ is similar. \end{proof}
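For instance, when $\eta=0$ we have $\mu^\pm=\tau/c$ by Lemma \ref{lemma_mu}, so that $\Re\mu^\pm=\gamma/c>\frac{1}{\sqrt{2}\,c}\,\gamma$, and \eqref{est_Re_mu} holds with strict inequality.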
\subsection{Study of the symbol $\Sigma$}
The next lemma concerns the continuous extension of the symbol in points $(0,\eta)$ where $\mu^++\mu^-$ vanishes, see Corollary \ref{zerosommamu}. We only consider the case $v\geq c$.
\begin{lemma}\label{extend} Assume $v\geq c$. Let $\bar{\eta}\not=0$ be fixed and let $(\tau,\eta)\in\Xi$, so that $\Re\tau\ge0$. Then \begin{equation}\label{extension}\lim\limits_{(\tau,\eta)\to(0,\bar{\eta})}\left( \frac{\tau/c}{\mu^++\mu^-} \right)^2=\frac{(v/c)^2-1}{4(v/c)^2} \, . \end{equation} \end{lemma} \begin{proof}
{\it First case: $v>c$}. We first consider the case $ \Re\tau>0. $ Let $ \tau=\gamma+i\delta$ with $0<\gamma\ll1, \delta\in{\mathbb R}.$ Then \[ \left(\frac{\tau \pm iv\eta}{c}\right)^2+\eta^2 =a_\pm+ib_\pm \,,
\]
with \begin{equation}\label{def_ab} a_\pm=\left(\frac{\gamma}{c}\right)^2- \left(\frac{\delta\pm v\eta}{c}\right)^2+\eta^2 , \quad b_\pm=2\gamma\frac{\delta\pm v\eta}{c^2} \,. \end{equation}
For the computation of the square roots of $a_\pm+ib_\pm$ it is useful to recall that the square roots of the complex number $a+ib$ ($a,b$ real) are \begin{equation}\label{roots}
\pm\left\{\sgn(b)\sqrt{\frac{r+a}{2}} +i\sqrt{\frac{r-a}{2}}\right\}, \qquad r=|a+ib| \end{equation} (by convention $\sgn(0)=1$). In our case we compute \begin{equation}\label{rpm} r_\pm^2:=a_\pm^2+b_\pm^2 =
\left[ \left(\frac{\gamma}{c}\right)^2+ \left(\left| \frac{\delta\pm v\eta}{c} \right| - |\eta|\right)^2\right]
\left[ \left(\frac{\gamma}{c}\right)^2+ \left(\left| \frac{\delta\pm v\eta}{c} \right| + |\eta|\right)^2\right] \, . \end{equation} Substituting the definition of $ a_\pm, b_\pm $ in \eqref{def_ab}, $ r_\pm $ in \eqref{rpm}, into \eqref{roots} and taking the limit as $\gamma\downarrow 0, \delta\to\accentset{{\cc@style\underline{\mskip10mu}}}\delta, \eta\to\accentset{{\cc@style\underline{\mskip10mu}}}\eta$, with $ (\accentset{{\cc@style\underline{\mskip10mu}}}\delta, \accentset{{\cc@style\underline{\mskip10mu}}}\eta) \not=(0,0)$ we can prove again the formulas \eqref{mu+}, \eqref{mu-} of continuous extension of $ \mu^\pm $ to points with $ \Re\tau=0 $.
Let us study the limit of $ \mu^++\mu^- $ as $\gamma\downarrow 0, \delta\to0, \eta\to\accentset{{\cc@style\underline{\mskip10mu}}}\eta$, with $ \accentset{{\cc@style\underline{\mskip10mu}}}\eta\not=0$. By continuity, for $ (\gamma,\delta,\eta) $ sufficiently close to $ (0,0,\accentset{{\cc@style\underline{\mskip10mu}}}\eta) $, we have from \eqref{def_ab} that $ \sgn(b_+)=\sgn(\delta+v\eta) =\sgn(\accentset{{\cc@style\underline{\mskip10mu}}}\eta)$. If $ \accentset{{\cc@style\underline{\mskip10mu}}}\eta>0$, from \eqref{roots} it follows that \begin{equation*} \mu^+= \sqrt{\frac{r_++a_+}{2}} +i\sqrt{\frac{r_+-a_+}{2}}\,. \end{equation*} With similar considerations we get \begin{equation*} \mu^-= \sqrt{\frac{r_-+a_-}{2}} -i\sqrt{\frac{r_--a_-}{2}}\,. \end{equation*} If $ \accentset{{\cc@style\underline{\mskip10mu}}}\eta<0$, from \eqref{roots} it follows that \begin{equation*} \mu^+= \sqrt{\frac{r_++a_+}{2}} -i\sqrt{\frac{r_+-a_+}{2}}\,, \qquad \mu^-= \sqrt{\frac{r_-+a_-}{2}} +i\sqrt{\frac{r_--a_-}{2}}\,. \end{equation*} Thus, \begin{equation}\label{sommamu} \mu^++\mu^-= \sqrt{\frac{r_++a_+}{2}}+ \sqrt{\frac{r_-+a_-}{2}} +i\sgn(\accentset{{\cc@style\underline{\mskip10mu}}}\eta)\left(\sqrt{\frac{r_+-a_+}{2}} -\sqrt{\frac{r_--a_-}{2}}\right)\,. \end{equation} From \eqref{def_ab}, \eqref{rpm} we obtain (recall that $ v>c $) \begin{equation}\label{lim_diff_ra} \lim\limits_{(\gamma,\delta,\eta) \to(0,0,\accentset{{\cc@style\underline{\mskip10mu}}}\eta)}(r_\pm-a_\pm)=2\left(\left(\frac{v}{c}\right)^2-1\right)\accentset{{\cc@style\underline{\mskip10mu}}}\eta^2 \,, \end{equation} \begin{equation}\label{sommara} r_\pm+a_\pm=\frac{r_\pm^2-a_\pm^2} {r_\pm-a_\pm}=\frac{b_\pm^2} {r_\pm-a_\pm}=\frac{4\left(\frac{\gamma}{c}\right)^2\left(\frac{\delta\pm v\eta}{c}\right)^2} {r_\pm-a_\pm} \,. \end{equation} From \eqref{sommamu}, \eqref{sommara}, the real part of $ \mu^++\mu^- $ is given by \begin{equation}\label{Resommamu}
\Re(\mu^++\mu^-)=\sqrt{\frac{r_++a_+}{2}}+ \sqrt{\frac{r_-+a_-}{2}}=\sqrt{2}\, \frac{\gamma}{c} \, \left(\frac{\left| \frac{\delta+ v\eta}{c} \right|}{\sqrt{r_+-a_+}} + \frac{\left| \frac{\delta- v\eta}{c} \right|}{\sqrt{r_--a_-}} \right) \, . \end{equation} From \eqref{lim_diff_ra}, \eqref{Resommamu} it follows that \begin{equation}\label{lim_Re_somma_mu} \Re(\mu^++\mu^-)= \frac{\gamma}{c} \, \left(\frac{2\frac{v}{c}}{\sqrt{\left(\frac{v}{c}\right)^2-1}}+o(1)\right) \qquad{\rm as}\quad (\gamma,\delta,\eta) \to(0,0,\accentset{{\cc@style\underline{\mskip10mu}}}\eta)\, . \end{equation} Now we consider the imaginary part of $ \mu^++\mu^- $ \begin{equation}\label{Imsommamu} \Im(\mu^++\mu^-)=\sgn(\accentset{{\cc@style\underline{\mskip10mu}}}\eta)\left(\sqrt{\frac{r_+-a_+}{2}} -\sqrt{\frac{r_--a_-}{2}}\right)= \frac{\sgn(\accentset{{\cc@style\underline{\mskip10mu}}}\eta)}{\sqrt{2}}\frac{(r_+-a_+)-(r_--a_-)}{\sqrt{r_+-a_+} +\sqrt{r_--a_-}} \,. \end{equation} Using \eqref{def_ab}, \eqref{rpm} gives \begin{equation}\label{diff_ra} (r_+-a_+)-(r_--a_-)=\frac{r_+^2-r_-^2}{r_++r_-}+4\frac{v}{c^2}\delta\eta \,, \end{equation} and \begin{equation}\label{diff_ra2} r_+^2-r_-^2=8\frac{v}{c^2}\delta\eta\left[\left(\frac{\gamma}{c}\right)^2+\left(\frac{\delta}{c}\right)^2+ \left(\left(\frac{ v}{c}\right)^2 -1\right)\eta^2 \right] \,. \end{equation} Moreover it holds \begin{equation}\label{lim_somma_r} \lim\limits_{(\gamma,\delta,\eta) \to(0,0,\accentset{{\cc@style\underline{\mskip10mu}}}\eta)}(r_++r_-)=2\left(\left(\frac{v}{c}\right)^2-1\right)\accentset{{\cc@style\underline{\mskip10mu}}}\eta^2 \,. \end{equation} Combining \eqref{lim_diff_ra}, \eqref{Imsommamu}--\eqref{lim_somma_r} gives \begin{equation}\label{lim_Im_somma_mu} \Im(\mu^++\mu^-)= \frac{\delta}{c} \, \left(\frac{2\frac{v}{c}}{\sqrt{\left(\frac{v}{c}\right)^2-1}}+o(1)\right) \qquad{\rm as}\quad (\gamma,\delta,\eta) \to(0,0,\accentset{{\cc@style\underline{\mskip10mu}}}\eta)\, . \end{equation} From \eqref{lim_Re_somma_mu}, \eqref{lim_Im_somma_mu} we deduce \begin{equation}\label{lim_somma_mu} \mu^++\mu^-= \frac{\tau}{c} \, \left(\frac{2\frac{v}{c}}{\sqrt{\left(\frac{v}{c}\right)^2-1}}+o(1)\right) \qquad{\rm as}\quad (\gamma,\delta,\eta) \to(0,0,\accentset{{\cc@style\underline{\mskip10mu}}}\eta)\, . \end{equation} Thus, it follows that \begin{equation*} \lim\limits_{(\gamma,\delta,\eta) \to(0,0,\accentset{{\cc@style\underline{\mskip10mu}}}\eta)}\left( \frac{\tau/c}{\mu^++\mu^-} \right)^2=\frac{(v/c)^2-1}{4(v/c)^2} \, , \end{equation*} that is \eqref{extension}.
Consider now the case $ \Re\tau=0 $, that is $ \tau=i\delta $, and let us assume without loss of generality that $ \accentset{{\cc@style\underline{\mskip10mu}}}\eta>0.$ For $ (\delta,\eta) $ in a small neighborhood of $ (0,\accentset{{\cc@style\underline{\mskip10mu}}}\eta) $ we have $ -(v-c)\eta<\delta<(v-c)\eta $, that is $ -(\frac{v}{c}-1)<\frac{\delta}{c\eta}<(\frac{v}{c}-1) $. From Lemma \ref{lemma_mu} (see also the proof of Lemma \ref{super_mu} (iii)) we get \begin{equation*} \frac{\tau/c}{\mu^++\mu^-}=\frac{\delta/c}{\sqrt{\left(\frac{\delta +v\eta}{c}\right)^2-\eta^2}-\sqrt{\left(\frac{\delta -v\eta}{c}\right)^2-\eta^2}}= \frac{\sqrt{\left(\frac{\delta +v\eta}{c}\right)^2-\eta^2}+\sqrt{\left(\frac{\delta -v\eta}{c}\right)^2-\eta^2}}{4v\eta/c} \, . \end{equation*} Passing to the limit as $ (\delta,\eta) \to (0,\accentset{{\cc@style\underline{\mskip10mu}}}\eta) $ we obtain again \eqref{extension}. This completes the proof in the first case.
{\it Second case: $v=c$}. Again we first consider the case $\Re\tau >0$: hence $\tau =\gamma + i\delta$ with $0<\gamma\ll1$, $\delta\in \mathbb{R}$. For $(\gamma,\delta,\eta)$ sufficiently close to $(0,0,\accentset{{\cc@style\underline{\mskip10mu}}}\eta)$, with $\accentset{{\cc@style\underline{\mskip10mu}}}\eta\neq0$, $\mu^+ + \mu^-$ is given by \eqref{sommamu}, where $a_{\pm}$, $b_{\pm}$ and $r_{\pm}$ are computed in \eqref{def_ab} and in \eqref{rpm} with $v=c$. Hence \begin{equation}\label{mod_somma} \begin{split} \vert \mu^+ &+\mu^-\vert^2=r_++r_{-}+\sqrt{r_++a_+}\sqrt{r_-+a_-}-\sqrt{r_+-a_+}\sqrt{r_--a_-}\\ &=\left\vert\frac{\tau}{c}\right\vert(\alpha_++\alpha_-)+\sqrt{\left\vert\frac{\tau}{c}\right\vert\left(\alpha_++\left\vert\frac{\tau}{c}\right\vert\right)-\beta_+}\,\sqrt{\left\vert\frac{\tau}{c}\right\vert\left(\alpha_-+\left\vert\frac{\tau}{c}\right\vert\right)-\beta_-}\\ &\qquad -\sqrt{\left\vert\frac{\tau}{c}\right\vert\left(\alpha_+-\left\vert\frac{\tau}{c}\right\vert\right)+\beta_+}\,\sqrt{\left\vert\frac{\tau}{c}\right\vert\left(\alpha_--\left\vert\frac{\tau}{c}\right\vert\right)+\beta_-}\,, \end{split} \end{equation} where we have set: \begin{equation}\label{alphabeta} \alpha_\pm=\alpha_\pm(\tau,\eta):=\sqrt{\left\vert\frac{\tau}{c}\right\vert^2+4\eta\left(\eta\pm\frac{\delta}{c}\right)}\,,\quad\beta_\pm=\beta_\pm(\delta,\eta):=2\frac{\delta}{c}\left(\frac{\delta}{c}\pm\eta\right)\,. \end{equation} Assume that $\overline\eta>0$, so that $\eta>0$ when it is sufficiently close to $\overline\eta$. For $\delta>0$ sufficiently close to zero, we have $\beta_-<0$ thus \begin{equation*} \sqrt{\left\vert\frac{\tau}{c}\right\vert\left(\alpha_++\left\vert\frac{\tau}{c}\right\vert\right)-\beta_+}\,\sqrt{\left\vert\frac{\tau}{c}\right\vert\left(\alpha_-+\left\vert\frac{\tau}{c}\right\vert\right)-\beta_-}\ge \sqrt{\left\vert\frac{\tau}{c}\right\vert\left(\alpha_++\left\vert\frac{\tau}{c}\right\vert\right)-\beta_+}\,\sqrt{\left\vert\frac{\tau}{c}\right\vert\left(\alpha_-+\left\vert\frac{\tau}{c}\right\vert\right)} \end{equation*} and \begin{equation*} \sqrt{\left\vert\frac{\tau}{c}\right\vert\left(\alpha_+-\left\vert\frac{\tau}{c}\right\vert\right)+\beta_+}\,\sqrt{\left\vert\frac{\tau}{c}\right\vert\left(\alpha_--\left\vert\frac{\tau}{c}\right\vert\right)+\beta_-}\le \sqrt{\left\vert\frac{\tau}{c}\right\vert\left(\alpha_+-\left\vert\frac{\tau}{c}\right\vert\right)+\beta_+}\,\sqrt{\left\vert\frac{\tau}{c}\right\vert\left(\alpha_--\left\vert\frac{\tau}{c}\right\vert\right)}\,; \end{equation*} moreover from $\beta_+=2\frac{\delta}{c}\left(\frac{\delta}{c}+\eta\right)\le 2\left\vert\frac{\tau}{c}\right\vert\left(\frac{\delta}{c}+\eta\right)$ we have \begin{equation*} \sqrt{\left\vert\frac{\tau}{c}\right\vert\left(\alpha_++\left\vert\frac{\tau}{c}\right\vert\right)-\beta_+}\ge\sqrt{\left\vert\frac{\tau}{c}\right\vert\left(\alpha_++\left\vert\frac{\tau}{c}\right\vert\right)-2\left\vert\frac{\tau}{c}\right\vert\left(\frac{\delta}{c}+\eta\right)} \end{equation*} and \begin{equation*} \sqrt{\left\vert\frac{\tau}{c}\right\vert\left(\alpha_+-\left\vert\frac{\tau}{c}\right\vert\right)+\beta_+}\le\sqrt{\left\vert\frac{\tau}{c}\right\vert\left(\alpha_+-\left\vert\frac{\tau}{c}\right\vert\right)+2\left\vert\frac{\tau}{c}\right\vert\left(\frac{\delta}{c}+\eta\right)}\,. 
\end{equation*} We use the last inequalities in \eqref{mod_somma} to find \begin{equation*} \begin{split} \vert\mu^++\mu^-\vert^2\ge \left\vert\frac{\tau}{c}\right\vert\Theta(\tau,\eta)\,, \end{split} \end{equation*} where \begin{equation*} \Theta(\tau,\eta):=\alpha_++\alpha_-+\sqrt{\alpha_++\left\vert\frac{\tau}{c}\right\vert-2\left(\frac{\delta}{c}+\eta\right)}\,\sqrt{\alpha_++\left\vert\frac{\tau}{c}\right\vert}-\sqrt{\alpha_+-\left\vert\frac{\tau}{c}\right\vert+2\left(\frac{\delta}{c}+\eta\right)}\,\sqrt{\alpha_--\left\vert\frac{\tau}{c}\right\vert} \end{equation*} satisfies \begin{equation}\label{lim_theta} \lim\limits_{(\tau,\eta)\to (0,\overline\eta)}\Theta(\tau,\eta)=2(2-\sqrt 2)\overline\eta>0\,. \end{equation} Hence for $(\gamma,\delta,\eta)$ sufficiently close to $(0,0,\overline{\eta})$ with $\gamma,\delta>0$, we get \begin{equation}\label{stima_frac} \left\vert\frac{\tau/c}{\mu^++\mu^-}\right\vert^2\le\frac{\vert\tau/c\vert}{\Theta(\tau,\eta)}\,. \end{equation} We observe that the same estimate is true also for $\delta<0$ by noticing that \[ \vert(\mu^++\mu^-)(\gamma,\delta,\eta)\vert^2=\vert(\mu^++\mu^-)(\gamma,-\delta,\eta)\vert^2 \] (see \eqref{mod_somma}, \eqref{alphabeta}) and \begin{equation*} \left\vert\frac{\tau/c}{(\mu^++\mu^-)(\tau,\eta)}\right\vert^2=\left\vert\frac{\overline\tau/c}{(\mu^++\mu^-)(\overline\tau,\eta)}\right\vert^2\,. \end{equation*} From \eqref{lim_theta} and \eqref{stima_frac} we get \begin{equation}\label{ext0} \lim\limits_{(\tau,\eta)\to 0}\left(\frac{\tau/c}{\mu^++\mu^-}\right)^2=0\,, \end{equation} that is \eqref{extension} for $v=c$.
Consider now the case $\Re\tau=0$, that is $\tau=i\delta$ and, as above, assume that $\overline\eta>0$. If $\delta>0$, we may assume that $0<\frac{\delta}{c\eta}<2$ for $(\delta,\eta)$ sufficiently close to $(0,\overline\eta)$, since $\eta>0$; then from Lemma \ref{lemma_mu} (see formulas $\eqref{mu+}_4$ and $\eqref{mu-}_1$) we get \begin{equation*} \mu^+(i\delta,\eta)=i\sqrt{\left(\frac{\delta}{c}+\eta\right)^2-\eta^2}\,,\quad\mu^-(i\delta,\eta)=\sqrt{-\left(\frac{\delta}{c}-\eta\right)^2+\eta^2}\,. \end{equation*} If $\delta<0$ (so $-2<\frac{\delta}{c\eta}<0$ for $(\delta,\eta)$ close to $(0,\overline\eta)$) again from Lemma \ref{lemma_mu} (formulas $\eqref{mu+}_1$ and $\eqref{mu-}_3$) we get \begin{equation*} \mu^+(i\delta,\eta)=\sqrt{-\left(\frac{\delta}{c}+\eta\right)^2+\eta^2}\,,\quad\mu^-(i\delta,\eta)=-i\sqrt{\left(\frac{\delta}{c}-\eta\right)^2-\eta^2}\,. \end{equation*} From the above values of $\mu^\pm$ we get for all $\delta\neq 0$ \begin{equation}\label{mod_idelta} \vert\mu^++\mu^-\vert^2=4\left\vert\frac{\delta}{c}\right\vert\eta\,, \end{equation} hence for $\tau=i\delta$ \[ \left\vert\frac{\tau/c}{\mu^++\mu^-}\right\vert^2=\frac{\vert\delta/c\vert}{4\eta} \] and passing to the limit as $(\delta,\eta)\to (0,\overline\eta)$ we obtain again \eqref{ext0}.
The same calculations can be repeated in the case $\overline{\eta}<0$. \end{proof}
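For instance, for $v=2c$ the limit in \eqref{extension} equals $\frac{(v/c)^2-1}{4(v/c)^2}=\frac{3}{16}$, while in the sonic case $v=c$ it equals $0$, in agreement with \eqref{ext0}.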
\begin{remark}Because of \eqref{extension}, the symbol $\Sigma$ can be extended to points $(0,\eta)$ where $\mu^++\mu^-$ vanishes. In particular we have the following limit for the coefficient in brackets, see \eqref{def_Sigma}, \begin{equation}\lim\limits_{(\tau,\eta)\to(0,\bar{\eta})}8\left( \frac{\tau/c}{\mu^++\mu^-} \right)^2-1=\frac{(v/c)^2-2}{(v/c)^2} \, , \end{equation} which changes sign according to $v/c\gtrless \sqrt2$. This is related to the well-known stability criterion for vortex sheets, see \cite{CS04MR2095445,FM63MR0154509,M58MR0097930,S00MR1775057}; see also Remark \ref{ell_hyp}. \end{remark} \begin{remark}\label{remark52} We easily verify that $ \mu^++\mu^- $ is a homogeneous function of degree 1 in $(\tau,\eta)\in \Xi $ if $ \Re(\tau)>0. $ It follows that the continuous extension to points with $ \Re(\tau)=0$ of $ \frac{\tau/c}{\mu^++\mu^-} $ is homogeneous of degree 0 and the continuous extension of $ \Sigma $ is homogeneous of degree 2.
\end{remark} In the next lemma we study the roots of the symbol $\Sigma$. \begin{lemma}\label{zeri_Sigma} Let $ \Sigma(\tau,\eta) $ be the symbol defined in \eqref{def_Sigma}, for $ (\tau,\eta)\in\Xi. $ \begin{itemize} \item[(i)] If $\frac{v}{c}<\sqrt{2}$, then $ \Sigma(\tau,\eta)=0 $ if and only if \[
\tau=cY_1|\eta| \qquad \forall\eta\not=0\, , \] where \[ Y_1= \sqrt{-\left(\left(\frac{v}{c}\right)^2+1\right) + \sqrt{4\left(\frac{v}{c}\right)^2+1}}\, . \]
\item[(ii)] If $\frac{v}{c}>\sqrt{2}$, then $ \Sigma(\tau,\eta)=0 $ if and only if \[ \tau=\pm icY_2\eta \qquad \forall\eta\not=0 \, , \] where \[ Y_2= \sqrt{\left(\frac{v}{c}\right)^2+1 - \sqrt{4\left(\frac{v}{c}\right)^2+1}}\, . \] Each of these roots is simple. For instance, there exists a neighborhood $\mathcal V$ of $( icY_2\eta,\eta)$ in $\Xi_1$ and a $C^\infty$ function $H$ defined on $\mathcal V$ such that \[ \Sigma(\tau,\eta)=(\tau-icY_2\eta)H(\tau,\eta), \quad H(\tau,\eta)\not=0 \quad\forall (\tau,\eta)\in\mathcal V.
\]
A similar result holds near $(-icY_2\eta,\eta)\in\Xi_1$. \end{itemize} \end{lemma}
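As a numerical illustration, for $v=c$ (so $v/c<\sqrt{2}$) one finds $Y_1=\sqrt{\sqrt{5}-2}\approx 0.486$, and $\Sigma$ vanishes on the half-lines $\tau=cY_1|\eta|$, which lie in the region $\Re\tau>0$; for $v=2c$ (so $v/c>\sqrt{2}$) one finds $Y_2=\sqrt{5-\sqrt{17}}\approx 0.936$, and the roots $(\pm icY_2\eta,\eta)$ lie on the boundary $\Re\tau=0$ of $\Xi$.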
\begin{remark}\label{ell_hyp} (i) Recall that the equation \eqref{equ_f} was obtained by taking the Fourier transform with respect to $(t,x_1)$ of \eqref{puntoN7}, \eqref{wave3}, which corresponds to taking the Laplace transform with respect to $t$ and the Fourier transform with respect to $x_1$ of \eqref{puntoN6}, \eqref{wave2}. Taking the Fourier transform with respect to $t$ of \eqref{puntoN6}, \eqref{wave2} corresponds to the case $\gamma=\Re\tau=0 $, i.e. $ (\tau,\eta)=(i\delta,\eta) $.
If $\frac{v}{c}<\sqrt{2}$, from Lemma \ref{zeri_Sigma} the symbol $ \Sigma(\tau,\eta) $ only vanishes in points $ (\tau,\eta)$ with $\tau\in{\mathbb R}, \tau>0$. It follows that $ \Sigma(i\delta,\eta)\not=0 $ for all $(\delta,\eta)\in{\mathbb R}^2$. Therefore the symbol is elliptic, according to the standard definition. In this case planar vortex sheets are violently unstable, see \cite{S00MR1775057}.
(ii) If $\frac{v}{c}>\sqrt{2}$, $ \Sigma(\tau,\eta) $ vanishes in points $ (\tau,\eta)$ with $\Re\tau=0$, that is on the boundary of the frequency set $\Xi$. In this case planar vortex sheets are known to be weakly stable, in the sense that the so-called Lopatinski\u{\i} condition holds in a weak sense, see \cite{CS04MR2095445,FM63MR0154509,M58MR0097930,S00MR1775057}. For this case we expect a loss of derivatives for the solution with respect to the data. \end{remark}
\begin{proof}[Proof of Lemma \ref{zeri_Sigma}]
As we can easily verify $ \Sigma(\tau,0)=\tau^2\not=0 $ for $ (\tau,0)\in\Xi $ and $ \Sigma(0,\eta)\neq 0 $ for $ (0,\eta)\in\Xi $ (see Corollary \ref{zerosommamu} and Lemma \ref{extend}). Thus we assume without loss of generality that $\tau\neq 0$ and $ \eta\not=0 $ and from Lemma \ref{diff_mu} $(\mu^+-\mu^-)(\tau,\eta)\neq 0$. We compute \[ \frac{\tau/c}{\mu^++\mu^-}=\frac{(\tau/c)(\mu^+-\mu^-)}{(\mu^+)^2-(\mu^-)^2}=\frac{c}{4iv}\frac{\mu^+-\mu^-}{\eta}\,,
\]
\[
\left(\frac{\mu^+-\mu^-}{\eta}\right)^2= 2\left(\left(\frac{\tau}{c\eta}\right)^2-\left(\frac{v}{c}\right)^2+1-\frac{\mu^+\mu^-}{\eta^2}\right)\,,
\]
and substituting in \eqref{def_Sigma} gives \begin{equation}\label{sigma1} \Sigma=c^2\left(\mu^+\mu^--\eta^2\right). \end{equation} Let us introduce the quantities \[ X:=\frac{\tau}{c\eta}, \qquad \tilde{\mu}^\pm:=\frac{\mu^\pm}{\eta}.
\] It follows from \eqref{sigma1} that \begin{equation}\label{sigma2} \Sigma=0 \qquad\mbox{if and only if }\quad \tilde{\mu}^+\tilde{\mu}^-=1 \,. \end{equation} Let us study the equation \begin{equation}\label{muquadro} (\tilde{\mu}^+)^2(\tilde{\mu}^-)^2=1 \,. \end{equation} This last equation is equivalent to the biquadratic equation \[ X^4+2\left(\left(\frac{v}{c}\right)^2+1\right)X^2 + \left(\frac{v}{c}\right)^2 \left(\left(\frac{v}{c}\right)^2-2\right)=0 \,.
\] This is a polynomial equation of degree 2 in $ X^2 $ with real and distinct roots \[ X^2=-\left(\left(\frac{v}{c}\right)^2+1\right)- \sqrt{4\left(\frac{v}{c}\right)^2+1} \,,
\]
and \[ X^2=-\left(\left(\frac{v}{c}\right)^2+1\right)+ \sqrt{4\left(\frac{v}{c}\right)^2+1} \,. \] The first one gives the imaginary roots \begin{equation}\label{roots_imag}
X_{1,2}=\pm i Y_0, \qquad Y_0:=\sqrt{\left(\frac{v}{c}\right)^2+1 + \sqrt{4\left(\frac{v}{c}\right)^2+1}}\,. \end{equation} The second root of $ X^2 $ gives real or imaginary roots according to $ v/c \lessgtr \sqrt{2}.$ If $ v/c<\sqrt{2} $, there are 2 real and distinct roots \begin{equation}\label{roots_real} X_{3,4}=\pm Y_1, \qquad Y_1:= \sqrt{-\left(\left(\frac{v}{c}\right)^2+1\right) + \sqrt{4\left(\frac{v}{c}\right)^2+1}}\,. \end{equation} If $ v/c>\sqrt{2} $, there are 2 imaginary roots \begin{equation}\label{roots_imag2} X_{3,4}=\pm iY_2, \qquad Y_2:= \sqrt{\left(\frac{v}{c}\right)^2+1- \sqrt{4\left(\frac{v}{c}\right)^2+1}}\,. \end{equation} If $ v/c=\sqrt{2} $, then $ X_{3,4}=0 $.
Assume $ v/c<\sqrt{2} $ and consider first the real roots \eqref{roots_real}. In order to obtain $ \Re(\tau)>0 $ we choose $ X_3 $ or $ X_4 $ depending on $ \sgn(\eta), $ and we obtain that
$\tau=cY_1 |\eta|.$ By construction the pairs $ (cY_1 |\eta|,\eta) $ solve the equation \eqref{muquadro}. In order to verify if \eqref{sigma2} holds we proceed as follows. We compute \[ (\tilde{\mu}^\pm)^2=a\pm ib, \qquad a=X^2-(v/c)^2+1, \qquad b=2Xv/c \,.
\] Recalling \eqref{roots} we can determine $ \tilde{\mu}^\pm $ and obtain that \[
\tilde{\mu}^+ \tilde{\mu}^-=\left|\sqrt{\frac{r+a}{2}}+i\sqrt{\frac{r-a}{2}}\right|^2=r=|a+ib|>0\,. \] From \eqref{muquadro} it follows that $ \tilde{\mu}^+ \tilde{\mu}^-=1 $, that is \eqref{sigma2}. Then, let us consider the imaginary roots in \eqref{roots_imag}. Correspondingly we have points $ (\tau,\eta)=(\pm icY_0\eta,\eta) $ with $ \Re(\tau)=0. $ Comparing the values $ \delta/(c\eta)=\pm Y_0$, where $Y_0>v/c+1,$ with the corresponding cases (i), (v) of Lemma \ref{super_mu} (if $ 1<v/c<\sqrt{2} $) and (i), (v) of Lemma \ref{sub_mu} (if $ v/c<1 $), and using Lemma \ref{lemma_mu} in the sonic case $v=c$, we get \[ \Re(\tilde{\mu}^+ \tilde{\mu}^-)=\eta^{-2}\Re(\mu^+\mu^-)<0\,.
\]
This means that at such points $ \tilde{\mu}^+ \tilde{\mu}^-=-1 $, so that \eqref{sigma2} is not satisfied. Therefore we have proved that in the case $ v/c<\sqrt{2} $, the (only) roots of the symbol $\Sigma$ are the points $ (cY_1 |\eta|,\eta) $, for all $\eta\not=0$.
Now we assume $ v/c>\sqrt{2} $. For the imaginary roots \eqref{roots_imag} we can repeat the analysis made before. Corresponding to the roots $X_{1,2}$ we have the same points $ (\tau,\eta)=(\pm icY_0\eta,\eta) $ with $ \Re(\tau)=0. $ Comparing the values $ \delta/(c\eta)=\pm Y_0$, where $Y_0>v/c+1,$ with the different cases of Lemma \ref{super_mu} we get \[ \Re(\tilde{\mu}^+ \tilde{\mu}^-)=\eta^{-2}\Re(\mu^+\mu^-)<0\,. \] This means that at such points $ \tilde{\mu}^+ \tilde{\mu}^-=-1 $, and \eqref{sigma2} is not satisfied. Corresponding to the roots $X_{3,4}$ in \eqref{roots_imag2} we have the points $ (\tau,\eta)=(\pm icY_2\eta,\eta) $ with $ \Re(\tau)=0. $ Because $ -(v/c-1)< \pm Y_2<v/c-1$, from Lemma \ref{super_mu} (iii) we deduce \[ \Re(\tilde{\mu}^+ \tilde{\mu}^-)=\eta^{-2}\Re(\mu^+\mu^-)>0\,. \] This means that at such points $ \tilde{\mu}^+ \tilde{\mu}^-=1 $, and \eqref{sigma2} is satisfied. Therefore we have proved that in the case $ v/c>\sqrt{2} $, the (only) roots of the symbol $\Sigma$ are the points $ (\pm icY_2\eta,\eta) $, for all $\eta\not=0$.
This completes the first part of the proof of Lemma \ref{zeri_Sigma}; it remains to prove that the roots corresponding to $ X_{3,4}=\pm iY_2 $ are simple. Let us set $\sigma(X):=\tilde{\mu}^+ \tilde{\mu}^- -1$. From \eqref{sigma1} we have $\Sigma=c^2\eta^2\sigma(X)$. We wish to study $\sigma(X)$ in sufficiently small neighborhoods of points $ (\pm icY_2\eta,\eta) $. From Lemma \ref{lemma_mu} we may assume that $ \tilde{\mu}^\pm $ are different from 0 in such neighborhoods. First of all, from $$ (\tilde{\mu}^\pm)^2=(X\pm iv/c)^2 +1 $$ we obtain \[ \frac{d\tilde{\mu}^+}{dX}=\frac{1}{\tilde\mu^+}(X+ iv/c), \qquad \frac{d\tilde{\mu}^-}{dX}=\frac{1}{\tilde\mu^-}(X- iv/c)\, .
\] We prove that \begin{equation*} \begin{array}{ll} \displaystyle \frac{d\sigma}{dX}(X)=\frac{d\tilde{\mu}^+}{dX}\tilde{\mu}^- + \tilde{\mu}^+ \frac{d\tilde{\mu}^-}{dX}=\frac{\tilde\mu^-}{\tilde\mu^+}(X+ iv/c) + \frac{\tilde\mu^+}{\tilde\mu^-}(X- iv/c) \\ \displaystyle=\frac{1}{\tilde\mu^+\tilde\mu^-}\left\{\left((\tilde\mu^+)^2+(\tilde\mu^-)^2\right)X -i\left((\tilde\mu^+)^2-(\tilde\mu^-)^2\right) v/c\right\} \\ \displaystyle=\frac{2X}{\tilde\mu^+\tilde\mu^-}\left\{X^2+(v/c)^2+1\right\} \,. \end{array} \end{equation*} Moreover we have \begin{equation*}
\sigma(X_{3})=0, \qquad K:=\left\{X^2+(v/c)^2+1\right\}_{|X=X_{3}}>0 \,. \end{equation*} Consequently we can write \[ \sigma(X)=(X-X_3)\, \tilde{H}(X)\,,
\] where, by continuity $ \tilde{H}(X)\not=0 $ in a neighborhood of $ X=X_3, $ because $ \tilde{H}(X_3)=\frac{d\sigma}{dX}(X_3)=2X_3K\not=0 $. Thus we write \begin{equation*} \Sigma(\tau,\eta)=c^2\eta^2\sigma(X)=c^2\eta^2\sigma\left(\frac{\tau}{c\eta}\right)=(\tau-X_3c\eta)\,H(\tau,\eta), \qquad H(\tau,\eta):=c\eta\, \tilde{H}\left(\frac{\tau}{c\eta}\right)\,. \end{equation*} Since \[ H(X_3c\eta,\eta)=c\eta\, \tilde{H}\left(X_3\right)\not=0 \qquad\forall \eta\not=0\,,
\] by continuity $ H(\tau,\eta)\not=0 $ in a small neighborhood of $ (X_3c\eta,\eta) $. It is easily verified that $H$ is a homogeneous function of degree 1. By the same argument we prove the similar result for $ X=X_4. $ The proof of Lemma \ref{zeri_Sigma} is complete. \end{proof}
\section{Proof of Theorem \ref{teoexist}}
\begin{lemma} Let $\Sigma$ be the symbol defined by \eqref{def_Sigma} and $s\in{\mathbb R}, \gamma\ge1 $. Given any $f\in H^{s+2}_\gamma({\mathbb R}^2)$, let $g$ be the function defined by \begin{equation}\label{def_g} \Sigma(\tau,\eta)\widehat f(\tau,\eta)=\widehat g(\tau,\eta) \qquad (\tau,\eta)\in\Xi\, , \end{equation} where $ \widehat{g} $ is the Fourier transform of $\widetilde{g}:= e^{-\gamma t}g. $ Then $g\in H^{s}_\gamma({\mathbb R}^2)$ with \begin{equation*}
\|g\|_{H^{s}_\gamma({\mathbb R}^2)}\le C \|f\|_{H^{s+2}_\gamma({\mathbb R}^2)} \, , \end{equation*} for a suitable positive constant $C$ independent of $ \gamma $. \end{lemma} \begin{proof} The proof follows by observing that $ \Sigma(\tau,\eta)$ is a homogeneous function of degree 2 on $\Xi$, so there exists a positive constant $C$ such that \begin{equation}\label{stimaSigma}
|\Sigma(\tau,\eta)|\le C(|\tau|^2+\eta^2)=C\Lambda^2(\tau,\eta) \qquad \forall (\tau,\eta)\in\Xi\,. \end{equation} Then \begin{equation*}
\|g\|_{H^{s}_\gamma({\mathbb R}^2)}=\frac{1}{2\pi}\|\Lambda^s\widehat{g}\|=\frac{1}{2\pi}\|\Lambda^s\Sigma\widehat{f}\|
\le C\|\Lambda^{s+2}\widehat{f}\|=C \|f\|_{H^{s+2}_\gamma({\mathbb R}^2)} \, . \end{equation*} \end{proof}
In the following theorem we prove the a priori estimate of the solution $f$ to equation \eqref{def_g}, for a given $g$. \begin{theorem}\label{teofg} Assume $\frac{v}{c}>\sqrt{2}$. Let $\Sigma$ be the symbol defined by \eqref{def_Sigma} and $s\in{\mathbb R}$. Given any $f\in H^{s+2}_\gamma({\mathbb R}^2)$, let $g\in H^s_\gamma({\mathbb R}^2)$ be the function defined by \eqref{def_g}. Then there exists a positive constant $C$ such that for all $\gamma\ge1$ the following estimate holds \begin{equation}\label{stimafg}
\gamma \|f\|_{H^{s+1}_\gamma({\mathbb R}^2)} \le C \|g\|_{H^s_\gamma({\mathbb R}^2)} \, . \end{equation} \end{theorem} \begin{proof}
The study of $\Sigma$ in the proof of Lemma \ref{zeri_Sigma} implies that for all $(\tau_0,\eta_0)\in\Xi_1$, there exists a neighborhood $\mathcal V$ of $(\tau_0,\eta_0)$ with suitable properties, as explained in the following. Because $\Xi_1$ is a $C^\infty$ compact manifold, there exists a finite covering $(\mathcal V_1,\dots,\mathcal V_I)$ of $\Xi_1$ by such neighborhoods, and a smooth partition of unity $(\chi_1,\dots,\chi_I)$ associated with this covering. The $\chi_i$'s are nonnegative $C^\infty$ functions with
\[
\supp\chi_i\subset\mathcal V_i, \qquad \sum_{i=1}^I\chi_i^2=1.
\]
We consider two different cases.
{\it In the first case}
$\mathcal V_i$ is a neighborhood of an {\it elliptic} point, that is a point $ (\tau_0,\eta_0) $ where $ \Sigma(\tau_0,\eta_0)\not=0. $ By taking $\mathcal V_i$ sufficiently small we may assume that $ \Sigma(\tau,\eta)\not=0 $ in the whole neighborhood $\mathcal V_i$, and there exists a positive constant $C$ such that $$ |\Sigma(\tau,\eta)|\ge C \qquad \forall (\tau,\eta)\in\mathcal V_i\,. $$ Let us extend the associated function $\chi_i$ to the whole set of frequencies $\Xi$, as a homogeneous mapping of degree 0 with respect to $(\tau,\eta)$. $ \Sigma(\tau,\eta)$ is a homogeneous function of degree 2 on $\Xi$, so we have \begin{equation}\label{stima_ellip0}
|\Sigma(\tau,\eta)|\ge C(|\tau|^2+\eta^2) \qquad \forall (\tau,\eta)\in\mathcal V_i\cdot{\mathbb R}^+\,. \end{equation} We deduce that \begin{equation}\label{stima_ellip}
C(|\tau|^2+\eta^2)|\chi_i\widehat f(\tau,\eta)|\le
|\Sigma(\tau,\eta)\chi_i\widehat f(\tau,\eta)|=|\chi_i\widehat g(\tau,\eta)| \qquad \forall (\tau,\eta)\in\mathcal V_i\cdot{\mathbb R}^+\,. \end{equation}
{\it In the second case}\label{first} $\mathcal V_i$ is a neighborhood of a {\it root} of the symbol $ \Sigma $, i.e. a point $ (\tau_0,\eta_0) $ where $ \Sigma(\tau_0,\eta_0)=0. $ For instance we may assume that $ (\tau_0,\eta_0)=(icY_2\eta_0,\eta_0), \eta_0\not=0$, see Lemma \ref{zeri_Sigma}; a similar argument applies for the other family of roots $(\tau,\eta)=(-icY_2\eta,\eta)$. According to Lemma \ref{zeri_Sigma} we may assume that on $\mathcal V_i$ it holds \[ \Sigma(\tau,\eta)=(\tau-icY_2\eta)H(\tau,\eta), \quad H(\tau,\eta)\not=0 \quad\forall (\tau,\eta)\in\mathcal V_i. \]
We extend the associated function $\chi_i$ to the whole set of frequencies $\Xi$, as a homogeneous mapping of degree 0 with respect to $(\tau,\eta)$. Because $ H(\tau,\eta)\not=0 $ on $\mathcal V_i$, there exists a positive constant $C$ such that $$ |H(\tau,\eta)|\ge C \qquad \forall (\tau,\eta)\in\mathcal V_i\,. $$ $H(\tau,\eta)$ is a homogeneous function of degree 1 on $\Xi$, so we have \[
|H(\tau,\eta)|\ge C(|\tau|^2+\eta^2)^{1/2} \qquad \forall (\tau,\eta)\in\mathcal V_i\cdot{\mathbb R}^+\,. \] Then we obtain \begin{equation}\label{stima_nellip0}
|\Sigma(\tau,\eta)|=|(\tau-icY_2\eta)H(\tau,\eta)|\ge C\gamma(|\tau|^2+\eta^2)^{1/2} \qquad \forall (\tau,\eta)\in\mathcal V_i\cdot{\mathbb R}^+\,, \end{equation} and we deduce that \begin{equation}\label{stima_nellip}
C\gamma(|\tau|^2+\eta^2)^{1/2}|\chi_i\widehat f(\tau,\eta)|\le
|\chi_i\widehat g(\tau,\eta)| \qquad \forall (\tau,\eta)\in\mathcal V_i\cdot{\mathbb R}^+\,. \end{equation}
In conclusion, observing that $\gamma\le|\tau|\le(|\tau|^2+\eta^2)^{1/2}$, adding up the squares of \eqref{stima_ellip} and \eqref{stima_nellip}, and using that the $ \chi_i $'s form a partition of unity gives \begin{equation}\label{stimapointwise}
C\gamma^2(|\tau|^2+\eta^2)|\widehat f(\tau,\eta)|^2\le
|\widehat g(\tau,\eta)|^2 \qquad \forall (\tau,\eta)\in\Xi\,. \end{equation}
Multiplying the previous inequality by $ (|\tau|^2+\eta^2)^s $, integrating with respect to $ (\delta,\eta)\in{\mathbb R}^2 $ and using Plancherel's theorem finally yields the estimate \begin{equation*}
\gamma^2\|\widetilde{f}\|_{s+1,\gamma}^2\le C\|\widetilde{g}\|_{s,\gamma}^2\, , \end{equation*} for a suitable constant $C$, that is \eqref{stimafg}. \end{proof}
In the following theorem we prove the existence of the solution $f$ to equation \eqref{def_g}.
\begin{theorem}\label{teoexistfg} Assume $\frac{v}{c}>\sqrt{2}$. Let $\Sigma$ be the symbol defined by \eqref{def_Sigma} and $s\in{\mathbb R}, \gamma\ge1$. Given any $g\in H^s_\gamma({\mathbb R}^2)$ there exists a unique solution $f\in H^{s+1}_\gamma({\mathbb R}^2)$ of equation \eqref{def_g}, satisfying the estimate \eqref{stimafg}. \end{theorem} \begin{proof} We use a duality argument. Let us denote by $ \Sigma^* $ the symbol of the adjoint of the operator with symbol $ \Sigma $, such that \begin{align*} \langle \Sigma\widehat{f},\widehat{h}\rangle=\langle \widehat{f},\Sigma^*\widehat{h}\rangle \end{align*} for $ f,h $ sufficiently smooth. From the definition \eqref{def_Sigma} we easily deduce that \begin{equation}\label{equSS*} \Sigma^\ast(\tau,\eta)=\Sigma(\bar{\tau},\eta)\,. \end{equation} Thus, from Theorem \ref{teofg}, see in particular \eqref{stima_ellip0}, \eqref{stima_nellip0}, \eqref{stimapointwise}, we obtain the estimate
\begin{equation*}
\gamma^2(|\tau|^2+\eta^2)|\widehat h(\tau,\eta)|^2\le C|\Sigma^\ast(\tau,\eta)\widehat h(\tau,\eta)|^2 \,,
\end{equation*}
which gives by integration in $ (\delta,\eta) $ \begin{equation}\label{stimaSigma*}
\gamma\|\Lambda\widehat h \|\le C\|\Sigma^\ast\widehat h\| \,. \end{equation} We compute \begin{align}\label{duality}
\left|\langle \widehat{g},\widehat{h}\rangle\right|=\left| \langle \Lambda^s\widehat{g},\Lambda^{-s}\widehat{h}\rangle\right|
\le\|\Lambda^s\widehat{g}\| \, \|\Lambda^{-s}\widehat{h}\|\,. \end{align} From \eqref{stimaSigma}, \eqref{equSS*}, \eqref{stimaSigma*} (with $\Lambda^{-s-1}\widehat{h}$ instead of $\widehat{h}$) we obtain \begin{equation}\label{stimaL-s}
\|\Lambda^{-s}\widehat{h}\|=\|\Lambda\Lambda^{-s-1}\widehat{h}\|\le \frac{C}{\gamma}\|\Sigma^\ast\Lambda^{-s-1}\widehat h\| \le \frac{C}{\gamma}\|\Lambda^{-s+1}\widehat h\|= \frac{C}{\gamma}\|h\|_{H^{-s+1}_\gamma({\mathbb R}^2)} \, . \end{equation} Let us denote \[
\mathcal R:=\left\{ \Sigma^\ast\Lambda^{-s-1}\widehat h \,\, | \,\, h\in H^{-s+1}_\gamma({\mathbb R}^2) \right\} \,.
\]
From \eqref{stimaL-s} it is clear that $ \mathcal R$ is a subspace of $ L^2({\mathbb R}^2) $; moreover, the map $ \Sigma^\ast\Lambda^{-s-1}\widehat h \mapsto \Lambda^{-s}\widehat{h} $ is well-defined and continuous from $ \mathcal R$ into $ L^2({\mathbb R}^2) $. Given $ g\in H^s_\gamma({\mathbb R}^2) $, we define a linear form $ \ell $ on $ \mathcal R $ by
\[
\ell(\Sigma^\ast\Lambda^{-s-1}\widehat h)= \langle \widehat{g},\widehat{h} \rangle \,.
\] From \eqref{duality}, \eqref{stimaL-s} we obtain \[
\left| \ell(\Sigma^\ast\Lambda^{-s-1}\widehat h)\right| \le \frac{C}{\gamma}\|\Lambda^s\widehat{g}\| \, \|\Sigma^\ast\Lambda^{-s-1}\widehat h\| \,.
\]
Thanks to the Hahn-Banach and Riesz theorems, there exists a unique $ w\in L^2({\mathbb R}^2) $ such that
\[ \langle w,\Sigma^\ast\Lambda^{-s-1}\widehat h \rangle = \ell(\Sigma^\ast\Lambda^{-s-1}\widehat h)\,, \qquad
\|w\|= \|\ell\|_{\mathcal L(\mathcal R)} \le \frac{C}{\gamma}\|\Lambda^s\widehat{g}\| \,.
\] Defining $ \widehat f:= \Lambda^{-s-1}w$ we get $ f\in H^{s+1}_\gamma({\mathbb R}^2) $ such that \[ \langle \Sigma\widehat f, \widehat h \rangle = \langle \widehat f, \Sigma^\ast\widehat h \rangle= \langle \widehat{g},\widehat{h} \rangle \qquad \forall h\in H^{-s+1}_\gamma({\mathbb R}^2)\,,
\] which shows that $ f $ is a solution of equation \eqref{def_g}. Moreover \[
\|f\|_{H^{s+1}_\gamma({\mathbb R}^2)}=\frac{1}{2\pi}\|\Lambda^{s+1}\widehat{f}\|=\frac{1}{2\pi}\|w\| \le \frac{C}{\gamma}\|\Lambda^s\widehat{g}\|=\frac{C}{\gamma}\|g\|_{H^{s}_\gamma({\mathbb R}^2)}\,,
\]
that is \eqref{stimafg}. The uniqueness of the solution follows from the linearity of the problem and the a priori estimate. \end{proof}
Now we can conclude the proof of Theorem \ref{teoexist}. \begin{proof}[Proof of Theorem \ref{teoexist}] We apply the result of Theorem \ref{teoexistfg} for \[ \widehat g(\tau,\eta)=-\frac{\mu^+\mu^-}{\mu^++\mu^-}\,M \, , \] with $M$ defined in \eqref{def_M}. We write \begin{equation*} \widehat g=\widehat g_1-\widehat g_2, \end{equation*} where \begin{equation*} \widehat g_1=-\frac{\mu^-}{\mu^++\mu^-}\int_{0}^{+\infty}e^{-\mu^+ y}\widehat{\mathcal F}^+ (\cdot, y)\, \,\mathrm{d} y \,, \qquad \widehat g_2=-\frac{\mu^+}{\mu^++\mu^-}\int_{0}^{+\infty}e^{-\mu^- y}\widehat{\mathcal F}^- (\cdot,- y)\, \,\mathrm{d} y \, . \end{equation*} By the Plancherel theorem and Cauchy-Schwarz inequality we have \begin{equation}\label{stimag1} \begin{array}{ll}
\displaystyle \|g_1\|_{H^s_\gamma({\mathbb R}^2)}^2=\frac{1}{(2\pi)^2}\iint_{{\mathbb R}^2} \Lambda^{2s}\left| \frac{\mu^-}{\mu^++\mu^-}\int_{0}^{+\infty}e^{-\mu^+ y}\widehat{\mathcal F}^+ (\cdot, y)\, dy \right|^2\,\mathrm{d}\delta \,\mathrm{d}\eta\\
\displaystyle \le \frac{1}{(2\pi)^2}\iint_{{\mathbb R}^2} \Lambda^{2s}\left| \frac{\mu^-}{\mu^++\mu^-}\right|^2\frac{1}{2\Re\mu^+}
\left(\int_{0}^{+\infty}|\widehat{\mathcal F}^+ (\cdot, y)|^2\, dy\right) \,\mathrm{d}\delta \,\mathrm{d}\eta \, . \end{array} \end{equation} Then we use the fact that $\frac{\mu^-}{\mu^++\mu^-}$ is a homogeneous function of degree zero in $\Xi$ so that
\[ \left| \frac{\mu^-}{\mu^++\mu^-}\right|^2\le C \qquad \forall (\tau,\eta)\in\Xi\, , \] for a suitable constant $C>0$. Moreover, we have the estimate from below \begin{equation*}\Re\mu^+\ge \frac{1}{\sqrt{2}\,c}\,\gamma\, , \end{equation*} see Lemma \ref{stima_Re_mu}. Thus we obtain from \eqref{stimag1} \begin{equation*} \begin{array}{ll}
\displaystyle \|g_1\|_{H^s_\gamma({\mathbb R}^2)}^2
\le \frac{C}{\gamma}\iint_{{\mathbb R}^2} \Lambda^{2s}
\left(\int_{0}^{+\infty}|\widehat{\mathcal F}^+ (\cdot, y)|^2\, \,\mathrm{d} y\right) \,\mathrm{d}\delta \,\mathrm{d}\eta =\frac{C}{\gamma}\|\mathcal F^+\|_{L^2({\mathbb R}^+;H^s_\gamma({\mathbb R}^2))}^2 \, . \end{array} \end{equation*} The proof of the estimate of $g_2$ is similar. This completes the proof of Theorem \ref{teoexist}. \end{proof}
\end{document} |
\begin{document}
\title{\textbf{$\mathcal{O}(\log^{*} n)$-Approximation for $k$-Center in Low-Local-Space MPC}}
\begin{abstract} We consider the classic $k$-center problem in a parallel setting, on the low-local-space Massively Parallel Computation (MPC) model, with local space per machine of $\mathcal{O}(n^{\delta})$, where $\delta \in (0,1)$ is an arbitrary constant. As a central clustering problem, the $k$-center problem has been studied extensively. Still, until very recently, all parallel MPC algorithms have required $\Omega(k)$ or even $\Omega(k n^{\delta})$ local space per machine. While this setting covers the case of small values of $k$, for a large number of clusters these algorithms require large local memory, making them poorly scalable. The case of large $k$, $k \ge \Omega(n^{\delta})$, has been considered recently for the low-local-space MPC model by Bateni et al.\ (2021), who gave an $\mathcal{O}(\log \log n)$-round \textsc{MPC}\xspace algorithm that produces $k(1+o(1))$ centers whose cost is within a multiplicative factor of $\mathcal{O}(\log\log\log n)$ of the optimum. In this paper we extend the algorithm of Bateni et al.\ and design a low-local-space MPC algorithm that in $\mathcal{O}(\log\log n)$ rounds returns a clustering with $k(1+o(1))$ clusters that is an $\mathcal{O}(\log^*n)$-approximation for $k$-center. \end{abstract}
\section{Introduction} \label{sec:intro}
Clustering large data is a fundamental primitive extensively studied because of its numerous applications in a variety of areas of computer science and data science. It is a central type of problem in modern data analysis, including the fields of data mining, pattern recognition, machine learning, networking and social networks, and bioinformatics. In a typical clustering problem, the goal is to partition the input data into subsets (called clusters) such that the points assigned to the same cluster are ``similar'' to one another, and data points assigned to different clusters are ``dissimilar''.
The most extensively studied clustering problems are $k$-means, $k$-median, $k$-center, various notions of hierarchical clustering, and also variants of these problems with some additional constraints (e.g., fairness or balance).
While originally the clustering problems have been studied in the context of classical sequential computation,
most recently a large amount of research has been devoted to the non-sequential computational settings such as distributed and parallel computing, mainly because these are the only settings capable of performing computations in a reasonable time on large inputs, and because data is frequently collected on different sites and clustering needs to be performed in a distributed manner with low communication.
In this paper we consider one of the most fundamental clustering problems, the \emph{$k$-center} problem, on the \emph{Massively Parallel Computation (MPC)} model. \textsc{MPC}\xspace is a modern theoretical model of parallel computation, inspired by frameworks such as MapReduce \cite{mapreduce}, Hadoop \cite{hadoop}, Dryad \cite{dryad}, and Spark \cite{spark}. Introduced just over a decade ago by Karloff et al.\ \cite{mpc_introduced} (and later refined, e.g., in \cite{ANOY14,BKS17,CLMMOS18,GSZ11}), the model has been the subject of an increasing quantity of fundamental research in recent years, becoming nowadays the standard theoretical parallel model of algorithmic study.
\textsc{MPC}\xspace is a parallel system with \ensuremath{{\mathrm{m}}}\xspace \emph{machines}, each with \ensuremath{{\mathrm{s}}}\xspace words of \emph{local memory}. (We also consider the \emph{global space} \ensuremath{{\mathrm{g}}}\xspace, which is the total space used across all machines, $\ensuremath{{\mathrm{g}}}\xspace = \ensuremath{{\mathrm{s}}}\xspace \cdot \ensuremath{{\mathrm{m}}}\xspace$.) Computation takes place in synchronous rounds: in each round, each machine may perform arbitrary computation on its local memory, and then exchange messages with other machines. Each message is sent to a single machine specified by the machine sending the message. Machines must send and receive at most \ensuremath{{\mathrm{s}}}\xspace words each round. The messages are processed by recipients in the next round. At the end of the computation, machines collectively output the solution.
The goal is to design an \textsc{MPC}\xspace algorithm that solves a given task in as few rounds as possible.
If the input is of size $n$, then one wants \ensuremath{{\mathrm{s}}}\xspace to be sublinear in $n$ (for if $\ensuremath{{\mathrm{s}}}\xspace \ge n$ then a single machine can solve any problem without any communication, in a single round), and the total space across all the machines to be at least $n$ (in order for the input to fit onto the machines) and ideally not much larger. In this paper, we focus on the \emph{low-local-space} \textsc{MPC}\xspace setting, where the local space of each machine is strongly sublinear in the input size, i.e., $\ensuremath{{\mathrm{s}}}\xspace = \mathcal{O}(n^{\delta})$ for an arbitrary constant $\delta \in (0,1)$. This low-local-space regime is especially attractive because of its scalability. At the same time, this setting is particularly challenging in that it requires extensive inter-machine communication to solve clustering problems for the input data scattered over many machines.
In recent years we have seen a number of very efficient, often constant-round, parallel clustering algorithms that rely on a combination of \emph{core-sets} and a ``reduce-and-merge'' approach. In this setting, one gradually filters the data set, typically by reducing its size on every machine to $\widetilde{\mathcal{O}}(k)$, continuing until all the data can be stored on a single machine, at which point the problem is solved locally. Observe that this approach has an inherent bottleneck: every machine must be able to store $\Omega(k)$ data points. Intuitively, this follows from the fact that if a machine sees $k$ data points that are all very far away from each other, it needs to keep track of all $k$ of them, for otherwise it might miss all the information about one of the clusters, which in turn could lead to a large miscalculation of the objective value. Similar arguments can also be used to argue that each machine needs to communicate $\Omega(k)$ points to the others (see \cite{CSWZ16} for a formalization of this intuition for a \emph{worst-case partition} of points for the $k$-median, $k$-means, and $k$-center problems, though the worst-case partition assumption means that this bound does not extend directly to \textsc{MPC}\xspace).
Because of that, most of the earlier clustering \textsc{MPC}\xspace algorithms, especially those working in a constant number of rounds (see, e.g., \cite{coreset_kcenter1,coreset_kcenter2}), require $\Omega(k)$ or even $\Omega(k) \cdot n^{\Omega(1)}$ local space. Therefore in the setting considered in this paper, of \textsc{MPC}\xspace with local space per machine of $\ensuremath{{\mathrm{s}}}\xspace = \mathcal{O}(n^{\delta})$, the approach described above cannot be applied when the number of clusters is large, when $k = \omega(\ensuremath{{\mathrm{s}}}\xspace)$. This naturally leads to the main challenge in the design of clustering algorithms for \textsc{MPC}\xspace with low-local-space: \emph{how to efficiently partition the data into $k$ good quality clusters on an \textsc{MPC}\xspace with local space $\ensuremath{{\mathrm{s}}}\xspace \ll k$}. We believe that this setting is quantitatively different (and more challenging) from the setting when $k$ is smaller (or even comparable to) the amount of local space~\ensuremath{{\mathrm{s}}}\xspace.
In this paper, we focus on the \emph{$k$-center clustering problem}, a standard, widely studied, and widely used formulation of metric clustering. The problem is, given a set of $n$ input points, to find a subset of size $k$ of these points (called \emph{centers}) such that the maximum distance of a point to its nearest center is minimized. Specifically, in this work we focus on the case where $k \gg \ensuremath{{\mathrm{s}}}\xspace$, and hence $k$ is quite large relative to $n$: one can think of these problem instances as ``compressing'' the input set of $n$ points into $k$ points.
Very recently this problem has been addressed by Bateni et al.\ ~\cite{bateni-kcenter}, who showed that one can design an $\mathcal{O}(\log\log n)$-round \textsc{MPC}\xspace algorithm, with local space $\ensuremath{{\mathrm{s}}}\xspace=\mathcal{O}(n^{\delta})$ and global space $\ensuremath{{\mathrm{g}}}\xspace=\widetilde{\mathcal{O}}(n^{1+\delta})$,\footnote{$\widetilde{\mathcal{O}}(f)$ hides a polynomial factor in $\log f$.} that returns an $\mathcal{O}(\log\log\log n)$-approximate solution with $k+o(k)$ centers. Our main result is an improved bound in the \textsc{MPC}\xspace model:
\begin{theo}[\textbf{Main result stated informally}] \label{theo:inf}\label{thm:main-informal} In $\mathcal{O}(\log\log n)$ rounds on an \textsc{MPC}\xspace, we can compute an $\mathcal{O}(\log^* n)$ approximate solution to the $k$-centers problem using $k(1+o(1))$ centers.
The \textsc{MPC}\xspace has local space $\ensuremath{{\mathrm{s}}}\xspace = \mathcal{O}(n^\delta)$ and global space $\ensuremath{{\mathrm{g}}}\xspace = \widetilde{\mathcal{O}}(n^{1+\rho})$ for any constant $\rho >0$. The $n$ input points are in $\mathbb{R}^d$ for some constant $d$ and we assume that $k = \Omega(\log^cn)$ for a suitable constant $c$. Our algorithm succeeds with high probability.
\end{theo}
The algorithmic framework is based on a repeated application of \emph{locally sensitive sampling}: sampling a set of ``hub'' points, assigning all other points to a nearby hub, and then adding new hubs to well-approxi\-ma\-te the point set. We improve the approximation factor by a careful examination of the progress of clusters in some fixed optimal clustering over the course of the algorithm. Due to the depth of our iteration, clusters no longer satisfy certain properties with high probability, and carefully bounding the size of the clusters that fail to meet certain checks is an important challenge to overcome in our analysis.
Additionally, we provide a more flexible guarantee on the global space, providing an accuracy parameter which can be set to reduce global space used at the expense of a larger approximation ratio (or vice versa).\footnote{In particular, the constant in $\mathcal{O}(\log^*n)$ of \Cref{theo:inf} depends on $\rho$.} This is possible because of the way we implement \emph{locally-sensitive hashing (LSH)} in \textsc{MPC}\xspace. We believe our implementation of LSH in \textsc{MPC}\xspace could potentially see further applications, e.g., for other geometric problems.
\subsection{Related work} \label{subsec:related-work}
There has been a large amount of work on various variants of the clustering problems (see, e.g., \cite{clustering_survey} for a survey of research until 2005), including some extensive study of the $k$-center clustering problem. The $k$-center problem is well known to be NP-hard and simple algorithms are known to achieve a 2-approximation \cite{DF85,gonzalez1985kcenter,HS85}; this approximation ratio is tight unless $\mathrm{P} = \mathrm{NP}$ \cite{HN79}.
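For intuition, the classical farthest-first greedy algorithm achieving this $2$-approximation can be summarized as follows; the snippet below is a minimal sequential Python sketch, for illustration only (it is not part of the \textsc{MPC}\xspace algorithms discussed in this paper, and the helper names are ours).
\begin{verbatim}
# Sketch of the classical farthest-first greedy 2-approximation for k-center
# (sequential, for illustration only).
import math

def greedy_k_center(points, k):
    # points: list of coordinate tuples in R^d; k: number of centers
    centers = [points[0]]                       # arbitrary first center
    dist = [math.dist(p, centers[0]) for p in points]
    while len(centers) < k:
        i = max(range(len(points)), key=lambda j: dist[j])
        centers.append(points[i])               # furthest point becomes a center
        dist = [min(dist[j], math.dist(points[j], points[i]))
                for j in range(len(points))]
    return centers, max(dist)                   # centers and the clustering cost
\end{verbatim}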
The study of clustering in the context of parallel computing is extremely well-motivated: as the size of typical data sets continues to increase, it becomes infeasible to store input data on a single machine, let alone iterate over it many times (as greedy sequential algorithms require; see, e.g., \cite{gonzalez1985kcenter}). It comes therefore as no surprise that there has been a considerable amount of work on $k$-center clustering algorithms in \textsc{MPC}\xspace. In particular, several constant-round, constant-approximation algorithms in the \textsc{MPC}\xspace setting were given recently for general metric $k$-center clustering, see, e.g., \cite{coreset_kcenter1,coreset_kcenter2,coreset_kcenter3}. Much of this work used \emph{coresets} or similar techniques as a means of approximating the structure of the underlying data, naturally implying a requirement that the local space per machine satisfies $\ensuremath{{\mathrm{s}}}\xspace = \Omega(k)$ and the global space is $\ensuremath{{\mathrm{g}}}\xspace = \Omega(nk)$ or $\Omega(n^{\epsilon} k^2)$ \cite{coreset_kcenter1,coreset_kcenter2,coreset_kcenter3}. Specifically, Ene et al.\ \cite{coreset_kcenter1} gave an $\mathcal{O}(1)$-round $10$-approximation \textsc{MPC}\xspace algorithm that uses local space $\ensuremath{{\mathrm{s}}}\xspace = \mathcal{O}(k^2 n^{\Theta(1)})$, Malkomes et al.\ \cite{coreset_kcenter2} obtained a $2$-round $4$-approxi\-ma\-tion \textsc{MPC}\xspace algorithm with local space $\ensuremath{{\mathrm{s}}}\xspace = \Omega(\sqrt{nk})$, and Ceccarello et al.\ \cite{coreset_kcenter3} obtained a $2$-round $(2+\varepsilon)$-approximation \textsc{MPC}\xspace algorithm that uses local space $\ensuremath{{\mathrm{s}}}\xspace = \mathcal{O}_{d,\varepsilon} (\sqrt{nk})$ for the problem in metric spaces with doubling dimension $d$. As mentioned earlier, these algorithms are not scalable if $k$ is large relative to $n$ (for example, when $k = n^{1/3}$), making the case of large $k$ particularly challenging. Furthermore, as argued by Bateni et al.\ \cite{bateni-kcenter}, the case of large $k$ appears naturally in some applications of $k$-clustering, including label propagation used in semi-supervised learning, or same-meaning query clustering for online advertisement or document search \cite{WLFXY09}. Unfortunately, we do not know of any $O(1)$-round, $O(1)$-approximation \textsc{MPC}\xspace algorithm that would use local space $\ensuremath{{\mathrm{s}}}\xspace = o(k)$.
In order to address the case of large $k$, Bateni et al.\ \cite{bateni-kcenter} considered a relaxed version of $k$-center clustering for low dimensional Euclidean spaces with constant dimension. The goal of that work was to design a scalable \textsc{MPC}\xspace algorithm for the $k$-center clustering problem with a \emph{sublogarithmic number of rounds} of computation, \emph{sublinear space per machine}, and small global space. Bateni et al.\ \cite{bateni-kcenter} showed that in $\mathcal{O}(\log\log n)$ rounds on an \textsc{MPC}\xspace with $\ensuremath{{\mathrm{s}}}\xspace = \mathcal{O}(n^\delta)$, one can compute an $\mathcal{O}(\log\log\log n)$-approximate solution to constant-dimension Euclidean $k$-center with $k(1 + o(1))$ centers. Their algorithm uses $\widetilde{\mathcal{O}}(n^{1 + \delta} \cdot \log \Delta)$ global space. Bateni et al.\ \cite{bateni-kcenter} complemented their analysis by some empirical study to demonstrate that the designed algorithm performs well in practice.
Finally, in the related PRAM model of parallel computation, Blelloch and Tangwongsan gave a $2$-approximation algorithm for $k$-{center}\xspace \cite{pram-k-center}. However, their algorithm requires $\Omega(n^2)$ processors and it is therefore difficult to translate the approach to our setting.
\subsection{Technical contributions} \label{sec:diff-intro}
Our main result in \Cref{thm:main-informal} is an extension of the approach developed in Bateni et al.\ \cite{bateni-kcenter} that significantly improves the quality of the approximation guarantee. To present these two results in the right context, we will briefly describe the main differences between these two works at a high level.
The approach of Bateni et al.\ \cite{bateni-kcenter} starts with the entire point set $P$ as a set of potential centers (solution), and refines it to
$P = P_0 \supseteq \dots \supseteq P_{\tau}$, until $\size{P_\tau} = k+o(k)$. The final set $P_\tau$ is reported as the output.
It is not difficult to see that if we take an optimal clustering $\mathcal{C}^*$ for $P$ (i.e., $\mathcal{C}^*$ is the optimal solution to the $k$-center problem for $P$), then {the number of potential centers in any cluster $C \in \mathcal{C}^*$ reduces over rounds (that is, $|P_{i+1} \cap C| \leq |P_i \cap C|$).}
Let us define a cluster $C \in \mathcal{C}^*$ to be \emph{irreducible from round $i$}, if $i$ is the minimum index such that $\size{C \cap P_i} \leq 1$.
Two central properties of the cluster refinement due to Bateni et al.\ \cite{bateni-kcenter} are that after $\mathcal{O}(\log\log n)$ rounds the size of each cluster in $\mathcal{C}^*$ reduces to $\widetilde{\mathcal{O}}(\log n)$, and that afterwards the total number of points in the reducible clusters in $\mathcal{C}^*$ decreases by a constant factor in each round, implying that another $\mathcal{O}(\log\log n)$ rounds suffice to reach the desired number of centers (at most $k$ due to the irreducible clusters and $o(k)$ due to the reducible clusters), and hence $\tau = \mathcal{O}(\log\log n)$. This is then complemented by an analysis of the quality of the refinements, which guarantees that each refinement adds an additive term of $\mathcal{O}(\textsc{opt}\xspace)$ to the cost of the solution, giving in total a doubly logarithmic approximation ratio. They further sketched an analysis yielding an approximation ratio of $\mathcal{O}(\log\log\log n)$.
In our paper we substantially improve the approximation factor to $\mathcal{O}(\log^* n)$ by extending the framework in the following sense. We show that, after $\mathcal{O}(\log\log n)$ rounds, the size of each cluster in $\mathcal{C}^*$ reduces to $\widetilde{\mathcal{O}}(\log n)$ such that the refinement in each round adds an additive error of $\mathcal{O}(\frac{\textsc{opt}\xspace}{\log\log n})$ to the cost of the solution. Then, we show that after an additional $\mathcal{O}(\log\log\log n)$ rounds, the sizes of \emph{almost all} (but not all) clusters in $\mathcal{C}^*$ reduce to $\widetilde{\mathcal{O}}(\log\log n)$ such that the refinement in each round adds an additive error of $\mathcal{O}(\frac{\textsc{opt}\xspace}{\log\log\log n})$ to the cost of the solution. Next, we show that after another $\mathcal{O}(\log\log\log\log n)$ rounds, the sizes of \emph{almost all} clusters in $\mathcal{C}^*$ reduce to $\widetilde{\mathcal{O}}(\log\log\log n)$ such that the refinement in each round adds $\mathcal{O}(\frac{\textsc{opt}\xspace}{\log\log\log\log n})$ to the cost of the solution, and so on. We continue this until the sizes of almost all clusters in $\mathcal{C}^*$ reduce to $\mathcal{O}(\log^*n)$. Observe that the total number of rounds taken so far is bounded by $\mathcal{O}(\log\log n)$, and we can argue that the current solution has an approximation ratio of $\mathcal{O}(\log^*n)$.

An important challenge in analyzing this approach is that \emph{not all clusters satisfy these size guarantees with high probability}. Indeed, we cannot obtain a high probability guarantee by cluster refinement relying on random sampling of the already small clusters; we can ensure only that \emph{most of the clusters are getting small}. Let $\mathcal{C}^{**} \subseteq \mathcal{C}^*$ be the clusters that satisfy the reduction property as discussed above, that is, the clusters in which the number of points is currently bounded by $\mathcal{O}(\log^* n)$. We argue that the total number of points in the reducible clusters in $\mathcal{C}^{**}$ reduces by a constant factor after each successive round, adding an additive error of $\mathcal{O}(\textsc{opt}\xspace)$ each time. This implies that another $\mathcal{O}(\log (\log^* n))$ rounds suffice to ensure that we have the desired number of centers at the end. To bound the total number of centers, we also need to show that the number of centers in clusters in $\mathcal{C}^{*}\setminus \mathcal{C}^{**}$ (that is, the set of clusters which fail to adhere to a size guarantee at some point during the algorithm) is bounded. Note that we cannot \emph{track} which clusters succeed or fail (doing so would require us to know an optimal clustering), and so we use $\mathcal{C}^*$ and $\mathcal{C}^{**}$ only for the analysis. In summary, the approach sketched above will reduce the number of centers to $k(1+o(1))$, and will ensure that the total number of rounds spent by our algorithm is $\mathcal{O}(\log\log n)$ and the approximation ratio of our solution is $\mathcal{O}(\log^*n)$. A more detailed overview is in \Cref{sec:over}.
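To make the accounting behind this schedule explicit (a back-of-the-envelope summary of the narrative above, with all constants suppressed): stage $j$ uses $\mathcal{O}(\log^{(j+1)}n)$ rounds, each adding $\mathcal{O}(\textsc{opt}\xspace/\log^{(j+1)}n)$ to the cost, so each stage contributes $\mathcal{O}(\textsc{opt}\xspace)$ in total, and the final cleanup rounds each add $\mathcal{O}(\textsc{opt}\xspace)$. Summing over the $\alpha = \Theta(\log^* n)$ stages gives
\begin{align*}
\text{rounds} &= \sum_{j=1}^{\alpha} \mathcal{O}\bigl(\log^{(j+1)} n\bigr) + \mathcal{O}\bigl(\log (\log^* n)\bigr) = \mathcal{O}(\log\log n), \\
\text{added cost} &= \alpha \cdot \mathcal{O}(\textsc{opt}\xspace) + \mathcal{O}\bigl(\log (\log^* n)\bigr)\cdot \mathcal{O}(\textsc{opt}\xspace) = \mathcal{O}(\log^* n)\cdot \textsc{opt}\xspace.
\end{align*}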
Our approach relies heavily on the use of LSH (locality sensitive hashing), and we provide a flexible implementation of LSH in \textsc{MPC}\xspace which one can configure with an appropriate parameter $\rho$. Reducing the value of $\rho$ decreases the amount of global space used by the algorithm (global space used is $\widetilde{\mathcal{O}}(n^{1 + \rho})$) while increasing the approximation ratio.
\subsection{Notation and preliminaries}
We now introduce the notation used throughout the paper.
First, we present the setting of the parameters of our \textsc{MPC}\xspace. The $k$-{center}\xspace algorithm in this paper works for any local space $\ensuremath{{\mathrm{s}}}\xspace = \mathcal{O}(n^\delta)$ for a constant $0 < \delta < 1$: the setting of $\delta$ has only a constant factor impact on the running time. Similarly, the \textsc{MPC}\xspace can have any global space $\ensuremath{{\mathrm{g}}}\xspace = \widetilde{\mathcal{O}}(n^{1+\rho})$ for some constant $\rho > 0$: $\rho$ can be made arbitrarily small, and its setting has a constant factor impact on the approximation ratio. We sometimes refer to \textsc{MPC}\xspace with these choices of \ensuremath{{\mathrm{s}}}\xspace and \ensuremath{{\mathrm{g}}}\xspace simply as ``\mpc'' in the rest of this paper.
Let us recall that certain operations, particularly sorting and prefix sum of $n$ elements, and broadcasting a value of size $<\ensuremath{{\mathrm{s}}}\xspace$, can be computed deterministically in $O(1)$ rounds (see \cite{GSZ11}).
The input to our problem is a set $P$ of $n$ points in $\mathbb{R}^d$, where $d$ is a constant, and an integer parameter $k < n$. We define $d(p, q)$ as the Euclidean distance between points $p$ and $q$ in $\mathbb{R}^d$. We generalize this notation to the distance between a point and a set: $d(p, S) := \min\limits_{q \in S} d(p, q)$ is the minimum distance from $p$ to a point in $S$. We define $\textsc{cost}\xspace(P,S) := \max\limits_{p \in P} d(p,S)$ as the distance from $S$ of the point in $P$ which is ``furthest away'' from $S$. Without loss of generality, we assume that the input set is re-scaled so that the minimum distance between any two points in $P$ is $1$; then we let $\Delta$ be the maximum distance between any two points in $P$.
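For illustration, these definitions translate directly into code; the following minimal Python sketch (function names are ours) computes $d(p,S)$ and $\textsc{cost}\xspace(P,S)$ for points given as coordinate tuples.
\begin{verbatim}
# Sketch of the distance and cost notation: d(p, S) and cost(P, S).
import math

def d_point_set(p, S):
    # d(p, S) = min_{q in S} d(p, q); points are coordinate tuples
    return min(math.dist(p, q) for q in S)

def cost(P, S):
    # cost(P, S) = max_{p in P} d(p, S)
    return max(d_point_set(p, S) for p in P)
\end{verbatim}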
We denote the set $\{1,\ldots,t\}$ by $[t]$, and we write $\log^{(i)} n := \underbrace{\log\dots\log}_{i} n$ for the $i$-times iterated logarithm of $n$; by convention, $\log^{(0)} n := n$. The notations $\widetilde{\mathcal{O}}(f)$ and $\widetilde{\Theta}(f)$ hide polynomial factors in $\log f$.
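The iterated-logarithm conventions can likewise be sketched in code (base-$2$ logarithms assumed here; the base only affects constants).
\begin{verbatim}
# Sketch of the iterated logarithm log^{(i)} n and of log* n (base 2).
import math

def iter_log(n, i):
    # log^{(i)} n: apply log i times; log^{(0)} n = n by convention
    for _ in range(i):
        n = math.log2(n)
    return n

def log_star(n):
    # log* n: number of log applications until the value drops to at most 1
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count
\end{verbatim}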
We now define formally the $k$-center clustering problem.
\begin{defi} Let $P$ be a set of points in $\mathbb{R}^d$. A \emph{clustering} $\mathcal{C}$ of $P$ is a partition of $P$ into nonempty clusters $C_1,\ldots,C_t$. The \emph{radius} of cluster $C_i$ is $\min\limits_{x \in C_i} \max\limits_{y \in C_i}d(x,y)$, and the \emph{cost} of the clustering $\mathcal{C}$ is the maximum of the radii of the clusters $C_1,\ldots,C_t$. \end{defi}
\begin{defi}[\textbf{$k$-center clustering problem}] Let $k,n, d \in \mathbb{N}$ with $k \leq n$, and $P$ be a set of $n$ points in $\mathbb{R}^d$. The $k$-{center}\xspace problem for $P$ is to find a set $S^* \subseteq P$ such that $$S^*=\argmin\limits_{S \subseteq P: \size{S}=k} \textsc{cost}\xspace(P,S).$$ Moreover, $\textsc{cost}\xspace(P,S^*)$ is defined as the (optimal) cost of the $k$-{center}\xspace problem for $P$. \end{defi}
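For very small instances, this definition can be checked directly by exhaustive search; the following sketch (exponential time, only for illustrating the objective, and reusing the \texttt{cost} function from the sketch above) enumerates all $k$-subsets of $P$.
\begin{verbatim}
# Exhaustive-search sketch of the k-center objective (exponential time;
# only for illustrating the definition, reusing cost() from above).
from itertools import combinations

def k_center_brute_force(P, k):
    best_S, best_cost = None, float("inf")
    for S in combinations(P, k):
        c = cost(P, S)
        if c < best_cost:
            best_S, best_cost = list(S), c
    return best_S, best_cost
\end{verbatim}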
We assume throughout the paper that $k > \ensuremath{{\mathrm{s}}}\xspace$, which is the main focus of this work; however, our algorithms work as described provided only that $k=\Omega((\log n)^c)$ for a suitable constant $c$.
\subsection{Our results --- detailed bounds} \label{sec:results}
We now present in details the main result of this paper:
\begin{theo}[{\bf Main result}] \label{theo:main-star} Let $P$ be any set of $n$ points in $\mathbb{R}^d$ and let \textsc{opt}\xspace denote the optimal cost of the $k$-center clustering problem for $P$.
There exists an \textsc{MPC}\xspace algorithm that in ${\mathcal{O}}(\log\log n)$ rounds determines with high probability a set $T \subseteq P$ of $k+o(k)$ centers, such that $\textsc{cost}\xspace(P,T) = \mathcal{O}(\log^*n) \cdot \textsc{opt}\xspace$.
The \textsc{MPC}\xspace uses local space $\ensuremath{{\mathrm{s}}}\xspace = \mathcal{O}(n^{\delta})$ and global space $\ensuremath{{\mathrm{g}}}\xspace = \widetilde{\mathcal{O}}(n^{1+\rho} \cdot \log^2\Delta)$.
\end{theo}
\Cref{theo:main-star} follows directly from a more general theorem.
\begin{theo}[{\bf Generalization of \Cref{theo:main-star}}] \label{theo:main} Let $\alpha$ be an arbitrary integer, $1 \le \alpha \le \log^*n - c_0$ for some suitable constant $c_0$. Let $P$ be any set of $n$ points in $\mathbb{R}^d$ and let \textsc{opt}\xspace denote the optimal cost of the $k$-center clustering problem for $P$.
There exists an \textsc{MPC}\xspace algorithm that in ${\mathcal{O}}(\log\log n)$ rounds determines with high probability a set $T \subseteq P$ of centers, such that $\textsc{cost}\xspace(P,T) = \mathcal{O}((\alpha + \log^{(\alpha + 1)}n)) \cdot \textsc{opt}\xspace$ and $\size{T} \le k \cdot \left(1+\frac{1}{\widetilde{\Theta}(\log^{(\alpha)} n)}\right) + \widetilde{\Theta}((\log ^{(\alpha)} n)^3)$.
The \textsc{MPC}\xspace uses local space $\ensuremath{{\mathrm{s}}}\xspace = \mathcal{O}(n^{\delta})$ and global space $\ensuremath{{\mathrm{g}}}\xspace = \widetilde{\mathcal{O}}(n^{1+\rho} \cdot \log^2\Delta)$.
\end{theo}
Observe that in \Cref{theo:main} we have $\size{T} = k+o(k)$, since we are assuming $k=\Omega((\log n)^c)$ for some suitable constant $c$.
Let $\alpha_0$ be the solution to the equation $\alpha = \log^{(\alpha + 1)} n$; observe that $\alpha_0 = \Theta(\log^*n)$. Then \Cref{theo:main-star} is a corollary of \Cref{theo:main} when we choose $\alpha = \alpha_0$.
\Cref{theo:main} can be seen as a fine-grained version of \Cref{theo:main-star}: as $\alpha$ increases, the cost of the solution decreases and the number of centers increases (with the number of rounds always being $\mathcal{O}(\log\log n)$). Therefore \Cref{theo:main} is more amenable to practical scenarios in the following sense: $\alpha$ in \Cref{theo:main} can be set to trade off the quality of the solution against the number of centers in the solution. We would also like to highlight that the results of Bateni et al.\ \cite{bateni-kcenter} correspond to the special cases $\alpha=1$ and $\alpha=2$ of \Cref{theo:main}, which yield $\mathcal{O}(\log\log n)$- and $\mathcal{O}(\log\log\log n)$-approximations, respectively.
\subsection{Organization of the paper} \label{subsec:organization}
In \Cref{sec:over} we give a proof of our main result predicated on the correctness of our main algorithm, and then give an overview of the subroutines which our main algorithm contains. In \Cref{sec:hub} we explain how LSH (locality-sensitive hashing) on \textsc{MPC}\xspace can be implemented to assign each point $p \in P$ to a hub in $H \subseteq P$ which is within a constant factor of the closest hub to $p$. In Sections \ref{sec:samp}~and~\ref{sec:uni} we prove critical properties of subroutines used in our main algorithm, and then in \Cref{sec:main} we prove the correctness of our main algorithm. Finally, \Cref{sec:conclude} contains some conclusions. \section{Technical overview} \label{sec:over}
Recall that \Cref{theo:main-star} is our main result and \Cref{theo:main} is its parameterized generalization. Our proof of \Cref{theo:main} (and hence of \Cref{theo:main-star}) relies on the following main technical theorem.
\begin{theo}[{\bf Main technical theorem proved in this paper}] \label{theo:main1} Let $\alpha$ be an arbitrary integer, $1 \le \alpha \le \log^*n - c_0$ for some suitable constant $c_0$. Let $r$ be an arbitrary positive real. Let $P$ be any set of $n$ points in $\mathbb{R}^d$ and let $\mathcal{C}_r $ be a clustering of $P$ that has the minimum number of centers among all clusterings of $P$ with cost at most $r$ and {$\size{\mathcal{C}_r}=\Omega((\log n)^c)$ for a suitable constant $c$}. There exists an \textsc{MPC}\xspace algorithm {\sc Ext}-$k$-{\sc Center}\xspace(Algorithm~\ref{algo:main}) that with probability at least $1-\frac{1}{(\log^{(\alpha - 1)} n)^{\Omega(1)}}$, in ${\mathcal{O}}(\log\log n)$ rounds determines a set $T \subseteq P$ of centers, such that $\textsc{cost}\xspace(P,T) = \mathcal{O}(r \cdot (\alpha+\log^{(\alpha + 1)}n))$ and $\size{T} \le \size{\mathcal{C}_r} \cdot \left(1 + \frac{1}{\widetilde{\Theta}(\log^{(\alpha)} n)} \right) + \widetilde{\Theta}((\log^{(\alpha)}n)^3)$.
The \textsc{MPC}\xspace uses local space $\ensuremath{{\mathrm{s}}}\xspace = \mathcal{O}(n^{\delta})$ and global space $\ensuremath{{\mathrm{g}}}\xspace = \widetilde{\mathcal{O}}(n^{1+\rho} \cdot \log\Delta)$.
\end{theo}
Algorithm {\sc Ext}-$k$-{\sc Center}\xspace used in \Cref{theo:main1} takes two parameters, an accuracy parameter $\alpha$ and a cost parameter $r$, and produces the output in a form similar to that required in \Cref{theo:main}, except that the number of centers is bounded in terms of $\size{\mathcal{C}_r}$, the minimum number of clusters in any clustering of $P$ with cost at most $r$, rather than in terms of $k$. This is in contrast with a standard clustering setting, where the number of clusters is given as input, with no relationship to the cost of the solution. Therefore, if we knew a constant factor approximation of the optimal cost of the $k$-center problem, then setting it as $r$ in \Cref{theo:main1} would give a solution as required in \Cref{theo:main}. This naturally suggests running {\sc Ext}-$k$-{\sc Center}\xspace multiple times in parallel in order to obtain \Cref{theo:main}. Note that the success probability in \Cref{theo:main1} is not high. Hence we first run {\sc Ext}-$k$-{\sc Center}\xspace a suitable number of times in parallel to get an algorithm $\mbox{{\sc Ext}-$k$-{\sc Center}\xspace}'$ whose output and space requirements are the same as those of {\sc Ext}-$k$-{\sc Center}\xspace, but whose success probability is high. Then we run $\mbox{{\sc Ext}-$k$-{\sc Center}\xspace}'$ in parallel for $O(\log\Delta)$ choices of $r$ (starting with $r=\Delta$ and decreasing by a constant factor each time) to get algorithm $\mbox{{\sc Ext}-$k$-{\sc Center}\xspace}''$ (the algorithm of \Cref{theo:main}). Moreover, $\mbox{{\sc Ext}-$k$-{\sc Center}\xspace}''$ reports the output of $\mbox{{\sc Ext}-$k$-{\sc Center}\xspace}'$ for the minimum $r$ for which the number of centers is $k+o(k)$. The details are in \Cref{sec:missing1}.
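A minimal sketch of this outer wrapper is given below (sequential Python, for illustration only; \texttt{ext\_k\_center\_prime} stands for the boosted algorithm $\mbox{{\sc Ext}-$k$-{\sc Center}\xspace}'$ and \texttt{slack} for the $o(k)$ allowance, both of which are placeholders, and the $\mathcal{O}(\log\Delta)$ instances that run in parallel on the \textsc{MPC}\xspace are shown here as a loop for readability).
\begin{verbatim}
# Sketch of the outer wrapper Ext-k-Center'': try r = Delta, Delta/2, ...
# and keep the answer of the boosted algorithm Ext-k-Center' for the
# smallest r that yields at most k + slack centers (slack = o(k)).
def ext_k_center_wrapper(P, k, alpha, Delta, ext_k_center_prime, slack):
    best = None
    r = Delta
    while r >= 1:                       # pairwise distances are scaled to >= 1
        T = ext_k_center_prime(P, alpha, r)
        if len(T) <= k + slack:
            best = T                    # overwritten by smaller feasible r
        r /= 2
    return best
\end{verbatim}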
\subsection{Overview of the proof of \Cref{theo:main1}} \label{subsec:proof-theo:main1}
The idea to prove \Cref{theo:main1} is based on the framework which we call \emph{locally sensitive sampling}.
We generate a set $H \subseteq P$ of points (called \emph{hubs}) by sampling each point in $P$ independently with a \emph{suitable} probability, and assign every other point to one of the hubs based on its \emph{locality}. Let $B_h$ be the \emph{bag of hub $h$}, that is, the set of points assigned to a hub $h \in H$. For each bag, in parallel, we run a variation of a well known greedy algorithm \cite{gonzalez1985kcenter} (for $k$-{center}\xspace in the sequential setting) to find a set of intermediate centers $C_h$ for hub $h$ such that $\textsc{cost}\xspace(B_h,C_h) = \mathcal{O}(r)$. We then repeat the procedure with $\bigcup_{h \in H} C_h$ as the point set. We continue this process for a suitable number of iterations with suitable choices of the probability and radius parameters, and report the centers obtained at that point as the final solution.
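The following sequential Python sketch illustrates one round of this framework (for intuition only; \texttt{assign\_to\_hub} stands for the approximate nearest-hub assignment performed by {\sc Nearest-Hub-Search}\xspace in \textsc{MPC}\xspace, and the distance threshold $2r$ in the greedy step is illustrative rather than the exact constant used by the algorithm).
\begin{verbatim}
# Sketch of one round of locally sensitive sampling: sample hubs, assign
# points to nearby hubs, and run a greedy center selection inside each bag.
import math, random

def one_refinement_round(P, p, r, assign_to_hub):
    H = [q for q in P if random.random() < p] or [P[0]]   # sampled hubs
    bags = {h: [] for h in H}
    for q in P:
        bags[assign_to_hub(q, H)].append(q)                # approx. nearest hub
    new_centers = []
    for h, B in bags.items():
        C = [h]
        for q in B:                                        # greedy: add q if it
            if min(math.dist(q, c) for c in C) > 2 * r:    # is far from all centers
                C.append(q)
        new_centers.extend(C)
    return new_centers
\end{verbatim}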
This framework was recently used by Bateni et al.\ \cite{bateni-kcenter} to give an $\mathcal{O}(\log\log n)$-round \textsc{MPC}\xspace algorithm with local space $\ensuremath{{\mathrm{s}}}\xspace = \mathcal{O}(n^\delta)$ and global space $\ensuremath{{\mathrm{g}}}\xspace = \widetilde{\mathcal{O}}(n^{1+\delta})$, which computes an $\mathcal{O} (\log\log\log n)$-approximate solution to $k$-{center}\xspace with $k(1 + o(1))$ centers, with high probability. We extend their framework and generalize the analysis to give an $\mathcal{O}(\log^*n)$ approximate solution as stated in \Cref{theo:main-star}. Note that \Cref{theo:main1} takes care of \Cref{theo:main-star} via \Cref{theo:main}.
The algorithm corresponding to \Cref{theo:main1} is {\sc Ext}-$k$-{\sc Center}\xspace (Algorithm~\ref{algo:main} in \Cref{sec:main}). Before describing {\sc Ext}-$k$-{\sc Center}\xspace, we describe and contextualize the three subroutines which it uses ({\sc Nearest-Hub-Search}\xspace, {\sc Sample-And-Solve}\xspace and {\sc Uniform}-{\sc Center}\xspace).
The main algorithm of Bateni et al.\ \cite{bateni-kcenter} uses subroutines {\sc Sample-And-Solve} and {\sc Uniform}-$k$-{\sc center}. In our algorithm we use the analogous subroutines {\sc Sample-And-Solve}\xspace and {\sc Uniform}-{\sc Center}\xspace, respectively, but there are some differences, which we will discuss when we describe {\sc Sample-And-Solve}\xspace and {\sc Uniform}-{\sc Center}\xspace. Due to our implementation of {\sc Nearest-Hub-Search}\xspace, we are able to give a more flexible bound on the global space. We improve the approximation ratio mainly by generalizing their {\sc Uniform}-$k$-{\sc Center} to our {\sc Uniform}-{\sc Center}\xspace and by a more refined analysis of our main algorithm, which calls {\sc Uniform}-{\sc Center}\xspace.
Let us first discuss at a high level what these subroutines achieve in the context of the framework of locally sensitive sampling (discussed at the beginning of this section). Intuitively, the purpose of {\sc Sample-And-Solve}\xspace is to sparsify dense regions of points: it samples points with a given probability and iteratively adds centers in order to ensure that the cost of the centers remains low. {\sc Uniform}-{\sc Center}\xspace repeatedly uses {\sc Sample-And-Solve}\xspace: its main purpose is to guarantee that the number of centers in each cluster of some fixed optimal clustering decreases in a certain way over time.
\subsubsection*{{\sc Nearest-Hub-Search}\xspace ($Q,H$)}
Takes as input a set $Q$ of at most $n$ points and a set of hubs $H \subseteq Q$. For all points $q \in Q \setminus H$, it finds a point $\mbox{close}(q) \in H$ such that $d(q,\mbox{close}(q)) = \mathcal{O}( d(q,H))$, with probability at least $1-\frac{1}{n^{\Omega(1)}}$. {\sc Nearest-Hub-Search}\xspace{} can be implemented in MPC with local space $\ensuremath{{\mathrm{s}}}\xspace=\mathcal{O}(n^\delta)$ and global space $\ensuremath{{\mathrm{g}}}\xspace=\widetilde{\mathcal{O}}(n^{1+\rho} \cdot \log \Delta)$ in $\mathcal{O}(1)$ rounds. {\sc Nearest-Hub-Search}\xspace{} uses \emph{locally sensitive hashing}~\cite{lsh} and its implementation in \textsc{MPC}\xspace. For details on {\sc Nearest-Hub-Search}\xspace, see \Cref{sec:hub}.
\subsubsection*{{\sc Sample-And-Solve}\xspace($Q,p,r$)}
Takes a set $Q$ of at most $n$ points, a sampling parameter $p$, and a radius parameter $r$. It produces some set of centers $S \subseteq Q$ such that $\textsc{cost}\xspace(Q,S)=\mathcal{O}(r)$.\footnote{The constant inside $\mathcal{O}(\cdot)$ depends on $\rho.$} Importantly, this can be implemented in an \textsc{MPC}\xspace with local space $\ensuremath{{\mathrm{s}}}\xspace=\mathcal{O}(n^{\delta})$ and global space $\widetilde{\mathcal{O}}(n^{1+\rho}\cdot \log \Delta)$ in $\mathcal{O}(1)$ rounds (\Cref{lem:samp}) as, aside from using {\sc Nearest-Hub-Search}\xspace{} to assign points to hubs, the computation is all done locally. {\sc Sample-And-Solve}\xspace first samples each point in $Q$ (independently) with probability $p$: let $H \subseteq Q$ be the set of sampled points, called hubs. Then {\sc Sample-And-Solve}\xspace calls {\sc Nearest-Hub-Search}\xspace{} with input point set $Q$ and hub set $H$. After getting $\mbox{close}\xspace(q)$ for each $q \in Q\setminus H$, {\sc Sample-And-Solve}\xspace collects all points $B_h \subseteq Q$ assigned to a hub $h \in H$ (including hub $h$) and selects a set of centers $C_h$ from $B_h$ greedily using a variation of the sequential algorithm of \cite{gonzalez1985kcenter}, such that $\textsc{cost}\xspace(B_h,C_h) =\mathcal{O}(r)$. Finally, the algorithm outputs $S=\bigcup_{h \in H}C_h$. However, there is a difficulty to overcome: note that $\size{B_h}$ may be $\omega(n^\delta)$, so $B_h$ may not fit into the local memory of a machine. We show that this can be handled by distributing the points in $B_h$ over multiple machines, duplicating $h$ across all such machines. See \Cref{sec:samp} for more details about {\sc Sample-And-Solve}\xspace.
Algorithm {\sc Sample-And-Solve}\xspace in our paper serves essentially the same purpose as the corresponding algorithm due to Bateni et al.\ \cite{bateni-kcenter}. The approximation guarantee and number of rounds performed are the same in both cases. However, the global space used by our algorithm {\sc Sample-And-Solve}\xspace is more flexible in the following sense: reducing the value of $\rho$ decreases the amount of global space used by the algorithm (global space used is $\widetilde{\mathcal{O}}(n^{1 + \rho}\cdot \log \Delta)$) while increasing the approximation ratio.
\subsubsection*{{\sc Uniform}-{\sc Center}\xspace$(V_t,r,t)$}
Takes a set $V_t$ of at most $n$ points, a radius parameter $r$, and an additional parameter $t \leq n$. It produces a set $S$ of centers by calling {\sc Sample-And-Solve}\xspace $\tau=\Theta(\log \log t)$ times. $S_{i-1}$ is the input to the $i$-th call and $S_i$ is the output of the $i$-th call: overall we have $S_0=V_t$ and $S_\tau=S$ (the output of {\sc Uniform}-{\sc Center}\xspace). The probability and radius parameters of the calls to {\sc Sample-And-Solve}\xspace are set \emph{suitably}. From the guarantees of {\sc Sample-And-Solve}\xspace, we obtain the following guarantees for {\sc Uniform}-{\sc Center}\xspace: (i) it can be implemented in an \textsc{MPC}\xspace with local space $\ensuremath{{\mathrm{s}}}\xspace=\mathcal{O}(n^{\delta})$ and global space $\ensuremath{{\mathrm{g}}}\xspace=\widetilde{\mathcal{O}}(n^{1+\rho}\cdot \log \Delta)$ in $\mathcal{O}(\log \log t)$ rounds (\Cref{lem:uniform1}), and (ii) $\textsc{cost}\xspace(V_t,S)=\mathcal{O}(r\cdot \tau)=\mathcal{O}(r \log \log t)$ (\Cref{lem:uniform2}). {\sc Uniform}-{\sc Center}\xspace guarantees a reduction in cluster sizes in an optimal clustering in the following sense.
Consider a fixed clustering $\mathcal{C}_r^t$ of $V_t$ that has cost at most $r$. For $C \in \mathcal{C}_r^t$: if $\size{ C \cap V_t}\leq t$, then $\size{C \cap S} =\mathcal{O}( \log t \cdot (\log \log t)^2)$, with probability at least $1-\frac{1}{t^{\Omega(1)}}$. This is formally stated in \Cref{lem:uniform3}: note that this ceases to be a high probability guarantee when $t = o(n)$. This guarantee on the size reduction plays a crucial role when bounding the number of centers reported by {\sc Ext}-$k$-{\sc Center}\xspace in \Cref{sec:main}. For more details on {\sc Uniform}-{\sc Center}\xspace, see \Cref{sec:uni}.
Our {\sc Uniform}-{\sc Center}\xspace is a full generalization of the analogous {\sc Uniform}-$k$-{\sc center} in Bateni et al.\ ~\cite{bateni-kcenter}. In particular, {\sc Uniform}-$k$-{\sc Center} is a special case of our {\sc Uniform}-{\sc Center}\xspace when $t=n$. This generalization plays a crucial role in the correctness of {\sc Ext}-$k$-{\sc Center}\xspace when we call {\sc Uniform}-{\sc Center}\xspace multiple times. {\sc Uniform}-$k$-{\sc Center} is not robust enough to be called from {\sc Ext}-$k$-{\sc Center}\xspace multiple times to achieve the desired result.
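A minimal sketch of the {\sc Uniform}-{\sc Center}\xspace driver loop is shown below (Python, for illustration; \texttt{sample\_and\_solve} stands for the subroutine above, and \texttt{p\_schedule} is a placeholder for the sampling-probability schedule, whose exact form is set in \Cref{sec:uni}).
\begin{verbatim}
# Sketch of the Uniform-Center driver: tau = Theta(log log t) successive
# calls to Sample-And-Solve, with S_0 = V_t and S_tau returned.
import math

def uniform_center(V_t, r, t, sample_and_solve, p_schedule):
    tau = max(1, round(math.log2(max(2.0, math.log2(max(2.0, float(t)))))))
    S = list(V_t)
    for i in range(1, tau + 1):
        # p_schedule(i, t) supplies the sampling probability of the i-th call
        S = sample_and_solve(S, p_schedule(i, t), r)
    return S
\end{verbatim}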
\subsubsection*{{\sc Ext}-$k$-{\sc Center}\xspace}
The algorithm consists of two phases, where {\bf Phase~1} consists of $\alpha$ subphases and {\bf Phase 2} consists of $\beta={\Theta}(\log ^{(\alpha + 1)} n)$ subphases. In the $j$-th subphase of {\bf Phase 1}, that is, in {\bf Phase 1.j}, {\sc Ext}-$k$-{\sc Center}\xspace calls $\mbox{{\sc Uniform}-{\sc Center}\xspace}(T_{j-1},r_{j-1},t_{j-1})$ to obtain $T_j$, where $T_0=P$, $r_0=\frac{r}{\log \log n}$, $t_0=n$, $t_{j}=\widetilde{\Theta}(\log ^{(j)} n),$ and $r_j=\frac{r}{\log \log t_{j}}$. Observe that the guarantees of {\sc Uniform}-{\sc Center}\xspace ensure the following:
\begin{enumerate} \item[(i)] {\bf Phase 1} can be implemented in an \textsc{MPC}\xspace with local space $\ensuremath{{\mathrm{s}}}\xspace=\mathcal{O} (n^\delta)$ and global space $\ensuremath{{\mathrm{g}}}\xspace=\widetilde{\mathcal{O}}(n^{1+\rho} \cdot \log\Delta)$ in $\sum\limits_{j=1}^{\alpha} \log\log t_{j-1}=\mathcal{O}(\log \log n)$ rounds; \item[(ii)] $\textsc{cost}\xspace(T_{j-1},T_j)=\mathcal{O}(r_{j-1} \log \log t_{j-1})=\mathcal{O}(r)$ for each $j \in [\alpha]$. Hence, $\textsc{cost}\xspace(P,T_{\alpha}) = \mathcal{O}(r\alpha)$. \end{enumerate}
Now consider {\bf Phase 2} of {\sc Ext}-$k$-{\sc Center}\xspace.
In the $i$-th subphase of {\bf Phase 2}, that is, in {\bf Phase 2.i}, {\sc Ext}-$k$-{\sc Center}\xspace calls $\mbox{{\sc Sample-And-Solve}\xspace}(T_{\alpha + i-1},\frac{1}{2},r)$ to obtain $T_{\alpha+i}$; the final output of {\sc Ext}-$k$-{\sc Center}\xspace is $T=T_{\alpha + \beta}$. From the guarantees of {\sc Sample-And-Solve}\xspace, we have
\begin{enumerate} \item[(i)] {\bf Phase 2} can be implemented in an \textsc{MPC}\xspace with local space $\ensuremath{{\mathrm{s}}}\xspace=\mathcal{O}(n^\delta)$ and global space $\ensuremath{{\mathrm{g}}}\xspace=\widetilde{\mathcal{O}}(n^{1+\rho}\cdot \log \Delta)$ in $\mathcal{O}(\beta)=\mathcal{O}(\log ^{(\alpha +1)}n)$ rounds; \item[(ii)] $\textsc{cost}\xspace(T_{\alpha+i-1},T_{\alpha +i})=\mathcal{O}(r)$ for each $i \in [\beta]$. Hence,
\begin{align*}
\textsc{cost}\xspace(P,T) &=
\textsc{cost}\xspace(P,T_{\alpha+\beta}) =
\textsc{cost}\xspace(P,T_{\alpha})+\mathcal{O}(\beta r)
\\&=
\mathcal{O}(r\cdot (\alpha + \log^{(\alpha +1)}n))
\enspace.
\end{align*} \end{enumerate}
Combining the guarantees concerning the round complexity, global space and approximation guarantee of {\bf Phase 1} and {\bf Phase 2}, we get the claimed guarantees on round complexity, global space and approximation guarantees in \Cref{theo:main1} (see \Cref{lem:main1} for round and global space guarantee and \Cref{lem:main2} for the guarantee on approximation factor).
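Putting the two phases together, the skeleton of {\sc Ext}-$k$-{\sc Center}\xspace can be sketched as follows (Python-style pseudocode reusing the sketches above, with \texttt{sample\_and\_solve}, \texttt{uniform\_center}, \texttt{p\_schedule}, and \texttt{iter\_log} as placeholders; all $\Theta$ and $\widetilde{\Theta}$ constants are suppressed, so the parameter settings shown are indicative only).
\begin{verbatim}
# Skeleton of Ext-k-Center: Phase 1 runs alpha calls to Uniform-Center with
# shrinking size targets t_j and radii r_j; Phase 2 runs beta calls to
# Sample-And-Solve with sampling probability 1/2 (constants suppressed).
import math

def ext_k_center(P, r, alpha, n, uniform_center, sample_and_solve,
                 iter_log, p_schedule):
    T = list(P)
    t_prev = float(n)                                  # t_0 = n
    for j in range(1, alpha + 1):                      # Phase 1.j
        r_prev = r / max(1.0, math.log2(max(2.0, math.log2(max(2.0, t_prev)))))
        T = uniform_center(T, r_prev, t_prev, sample_and_solve, p_schedule)
        t_prev = max(2.0, iter_log(n, j))              # t_j ~ log^{(j)} n
    beta = max(1, round(iter_log(n, alpha + 1)))       # beta ~ log^{(alpha+1)} n
    for _ in range(beta):                              # Phase 2
        T = sample_and_solve(T, 0.5, r)
    return T
\end{verbatim}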
Now, we discuss how we bound the number of centers that {\sc Ext}-$k$-{\sc Center}\xspace outputs, that is, $\size{T}$. Consider an optimal clustering $\mathcal{C}_r$ of $P$ with cost at most $r$. A cluster $C \in \mathcal{C}_r$ is said to be \emph{active} (after {\bf Phase 1}) if $\size{C \cap T_j} \leq t_j$ for each $j$ with $1 \leq j \leq \alpha$; otherwise, we say that $C$ is \emph{inactive}. Using the guarantee given by {\sc Uniform}-{\sc Center}\xspace concerning the reduction in cluster sizes, we can show that the total number of centers in $T_\alpha$ that lie in inactive clusters is
$\mathcal{O}\left(\frac{ \size{\mathcal{C}_r}}{(\log ^{(\alpha) } n)^{\Omega(1)}}\right)$,
with probability at least $1-\sum\limits_{i=1}^{\alpha}\frac{1}{t_{i-1}^{\Omega(1)}}$ (see \Cref{lem:inter1}). Recall that $T_\alpha$ denotes the set of intermediate centers we have after {\bf Phase 1}. Moreover, any cluster $C \in \mathcal{C}_r$ that is active after {\bf Phase 1} satisfies $\size{C \cap T_\alpha}\leq t_\alpha =\widetilde{\Theta}{(\log ^{(\alpha)} n)}$. That is, with probability at least $1-\sum\limits_{i=1}^{\alpha}\frac{1}{t_{i-1}^{\Omega(1)}}$, we have the following:
\begin{align*}
\size{T_\alpha}&\leq
\size{\mathcal{C}_r}\cdot t_{\alpha}+\mathcal{O}\left(\frac{ \size{\mathcal{C}_r}}{(\log ^{(\alpha ) } n)^{\Omega(1)}}\right)
\enspace. \end{align*}
We say that an active cluster $C \in \mathcal{C}_r$ is \emph{$i$-large} if $\size{C \cap T_{\alpha+i-1}}\geq 2$. We show that the total number of intermediate centers in the $i$-large clusters reduces by a constant factor in {\bf Phase 2.i}, with probability at least $1-\frac{1}{t_{\alpha-1}^{\Omega(1)}}$. Note that the total number of intermediate centers in all active large clusters just before {\bf Phase 2} is at most $\size{\mathcal{C}_r}\cdot t_\alpha =\size{\mathcal{C}_r}\cdot \widetilde{\Theta}(\log ^{(\alpha)} n) $, and we execute $\beta={\Theta}(\log ^{(\alpha +1)} n)$ subphases in {\bf Phase 2}. We can show that the total number of centers in the active large clusters after {\bf Phase 2} is at most
$\frac{\size{\mathcal{C}_r}}{\widetilde{\Theta}(\log ^{(\alpha)} n)}+\widetilde{\Theta}((\log^{(\alpha)} n)^3)$,
with probability at least $1-\frac{1}{t_{\alpha -1}^{\Omega(1)}}$ (\Cref{lem:inter2}). Combining this with the fact that the number of active small clusters is at most $\size{\mathcal{C}_r}$, together with the bound on the number of centers in inactive clusters, we obtain the desired bound on $\size{T}$. Full details of {\sc Ext}-$k$-{\sc Center}\xspace and its analysis are presented in \Cref{sec:main}.
\section{Nearest Hub Search} \label{sec:hub}
Recall that our {\sc Nearest-Hub-Search}\xspace algorithm takes a set $Q$ of points and a set $H \subseteq Q$ of hubs. For each $q \in (Q \setminus H)$, we want to find a hub $h \in H$ such that the distance between $q$ and $h$ is only a constant factor more than the distance between $q$ and the closest hub to $q$ in $H$: {informally, $h$ is ``almost'' the closest hub to $q$ in $H$.}
In this section, we use \emph{locally sensitive hashing} (LSH)~\cite{lsh} to implement algorithm {\sc Nearest-Hub-Search}\xspace ($Q, H$) on \textsc{MPC}\xspace. Our implementation of locally sensitive hashing is parameterizable: by setting the parameter $\rho$ appropriately, one can reduce the global space while increasing the approximation ratio, or vice versa.
First, we begin by recalling the definition of locally sensitive hashing, introduced in \cite{lsh}:
\begin{defi}[{\bf Locally sensitive hashing} \cite{lsh}] Let $r \in \mathbb{R}^+$, $c>1$ and $p_1,p_2 \in(0,1)$ be such that $p_1 > p_2$. A hash family $\mathcal{H}=\{h:\mathbb{R}^d \rightarrow U\}$ is said to be a $(r,cr,p_1,p_2)$-LSH family if for all $x,y \in \mathbb{R}^d$ the following hold: \begin{itemize} \item If $d(x,y)\leq r$, then $\mbox{Pr}_{h \in \mathcal{H}}(h(x)=h(y)) \geq p_1$; \item If $d(x,y)\geq c r$, then $\mbox{Pr}_{h \in \mathcal{H}}(h(x)=h(y)) \leq p_2$. \end{itemize} \end{defi}
The following proposition asserts the existence of a particular hash family, which will be useful in describing and analysing {\sc Nearest-Hub-Search}\xspace ($Q, H$) in Algorithm~\ref{algo:nn}.
\begin{pro}[\cite{lsh}] \label{pre:exist-hash} Let $r,n \in \mathbb{N}$ and $\rho \in (0,1)$. There exists a $(r,c_\rho r,(1/n)^\rho, 1/n)$-LSH family, where $c_\rho$ is a constant depending only on $\rho$. \end{pro}
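For Euclidean distances, one standard way to realize such a family is a random-projection hash with bucket width proportional to $r$; the sketch below illustrates this construction only for intuition, and is not necessarily the exact family guaranteed by Proposition~\ref{pre:exist-hash}, which we use as a black box.
\begin{verbatim}
# Sketch of one hash function of a random-projection LSH family for
# Euclidean space: h(x) = floor((a . x + b) / w), with a a random Gaussian
# direction, b uniform in [0, w), and bucket width w proportional to r.
import math, random

def make_lsh_hash(d, r, w_factor=4.0):
    w = w_factor * r
    a = [random.gauss(0.0, 1.0) for _ in range(d)]
    b = random.uniform(0.0, w)
    def h(x):
        proj = sum(ai * xi for ai, xi in zip(a, x))
        return math.floor((proj + b) / w)
    return h
\end{verbatim}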
In {\sc Nearest-Hub-Search}\xspace ($Q, H$), $Q$ is a set of at most $n$ points and $H \subseteq Q$ is the set of hubs. Our objective is to find, for each point, a hub that is at most a constant factor further away than its nearest hub, rather than the nearest hub itself. We do this by making $\log \Delta$ guesses about the distance to the nearest hub, and for each guess trying to find a hub within that distance.
For each of our $\log \Delta$ guesses for $r$ (the distance to the closest hub), we take (independently and uniformly at random) $L = \Theta(n^\rho)$ hash functions from an $(r,c_\rho r, (1/n)^\rho, 1/n)$-LSH family and use them to hash all the points, including the hubs.\footnote{The constant $\rho$ is the same constant as in the exponent of $n$ in the global space bound. Recall our tradeoff: choosing a smaller $\rho$ decreases the amount of global space needed, but increases the approximation ratio.} Then we gather all points with the same hash value on consecutive machines. We then need to find, for each point, a hub that is close to it. This is difficult if the number of hubs mapped to a given hash value is large: if $h$ hubs and $m$ points are mapped to the same hash value, then we have to perform $h\cdot m$ distance checks, which is potentially prohibitive if $h\cdot m > \ensuremath{{\mathrm{s}}}\xspace$. To overcome this, we show that, if many hubs are mapped to the same hash value, we can discard all but a constant number of them and still retain, for each point, a hub within a constant factor of the distance to its closest hub. This works because of the choice of our hash functions and the definition of LSH. The full algorithm {\sc Nearest-Hub-Search}\xspace ($Q,H$) is described in Algorithm~\ref{algo:nn}, and its correctness is proved in \Cref{theo:lhs}.
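To make the bucket-and-truncate idea concrete, here is a minimal sequential Python sketch of the work done for a single guess $r$ and a single hash function (corresponding to lines 7--8 of Algorithm~\ref{algo:nn}). The function names, the Euclidean helper \texttt{dist}, and the cap of $10$ hubs per bucket are illustrative choices mirroring the pseudocode, not the actual \textsc{MPC}\xspace implementation.
\begin{verbatim}
import math
from collections import defaultdict

def dist(x, y):
    # Euclidean distance between two points given as tuples of coordinates
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def hub_search_one_hash(points, hubs, f, r, c_rho, cap=10):
    """For one hash function f and one guess r: bucket the hubs by
    hash value, keep at most `cap` hubs per bucket, and give each
    point a hub at distance <= c_rho * r from its own bucket
    (or None if no such hub is kept)."""
    bucket = defaultdict(list)
    for h in hubs:
        if len(bucket[f(h)]) < cap:   # discard all but `cap` hubs per value
            bucket[f(h)].append(h)
    close = {}
    for q in points:
        close[q] = None
        for h in bucket.get(f(q), []):
            if dist(q, h) <= c_rho * r:
                close[q] = h
                break
    return close
\end{verbatim}
The full algorithm repeats this for $L$ hash functions and all $\log \Delta$ guesses, and keeps, for each point, the answer obtained for the smallest successful guess.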
\begin{lem}[{\bf Nearest hub search}] \label{theo:lhs} Let $Q$ be a set of at most $n$ points in $\mathbb{R}^d$, let $H \subseteq Q$ denote the set of hubs, and let $c_\rho$ be a suitable constant depending only on $\rho$. There exists an \textsc{MPC}\xspace algorithm {\sc Nearest-Hub-Search}\xspace ($Q, H$) (as described in Algorithm~\ref{algo:nn}) that, with high probability, in $\mathcal{O}(1)$ rounds, finds for every $q \in Q\setminus H$ a hub $\mbox{close}(q) \in H$ such that $d(q, \mbox{close}\xspace(q)) \leq 2c_\rho \cdot d( q,H)$. The \textsc{MPC}\xspace algorithm uses local space $\ensuremath{{\mathrm{s}}}\xspace = \mathcal{O}(n^\delta)$ and global space $\ensuremath{{\mathrm{g}}}\xspace = \mathcal{O}(n^{1+\rho} \cdot \log^2 n \cdot \log \Delta )$. \end{lem}
\begin{algorithm}[h] \caption{{\sc Nearest-Hub-Search}\xspace ($Q, H$)}\label{algo:nn} \KwIn{A set $Q$ of at most $n$ points and a set of hubs $H \subseteq Q$.} \KwOut{For each point $p$ in $Q$, report $\mbox{close}(p) \in H$ such that $d(p,\mbox{close}(p))\leq 2c_\rho \cdot d(p,H)$, where $c_\rho$ is a suitable constant depending only on $\rho$.} \Begin{ \For{($i=1$ to $I=\Theta(\log n)$)} { \For{($j=0$ to $\log \Delta$)} { Set $r=2^j$
Take $L=\Theta(n^{\rho})$ hash functions $f_1,\ldots,f_L$ ({independently and uniformly at random}) from an $(r,c_\rho r, (1/n)^{\rho},1/n)$-LSH family.
\For{($\ell=1$ to $L$)} { Determine $f_\ell(q)$ for each $q \in Q$.
Compute the distance from each $q \in Q$ to at most a constant number (say $10$) of hubs $h \in H$ such that $f_\ell(q)=f_\ell(h)$. If we find such an $h \in H$ with $d(q,h)\leq c_\rho \cdot r$, then we set $\mbox{close}_{ij\ell}(q)=h$; otherwise, we set $\mbox{close}_{ij\ell}(q)=\mbox{{\sc null}}$.
} Set $\mbox{close}_{ij}(q) =\mbox{{\sc null}}$ if $\mbox{close}_{ij\ell}(q)=\mbox{{\sc null}}$ for all $\ell \in [L]$. Otherwise, set $\mbox{close}_{ij}(q)=\mbox{close}_{ij \ell}(q)$ for some $\ell \in [L]$ with $\mbox{close}_{ij\ell}(q) \neq \mbox{{\sc null}}$.
}
Set $\mbox{close}_{i}(q) =\mbox{{\sc null}}$ if $\mbox{close}_{ij}(q)=\mbox{{\sc null}}$ for all $j$ with $0 \leq j \leq \log \Delta$. Otherwise, set $\mbox{close}_{i}(q)=\mbox{close}_{ij^*}(q)$, where $j^*$ is the minimum $j$ for which $\mbox{close}_{ij}(q)$ is not {\sc null}. } If there exists a $q \in Q$ such that $\mbox{close}_i(q)$ is {\sc null} for all $i \in [I]$, then report {\sc Fail}.
Otherwise, set $\mbox{close}(q)=\mbox{close}_i(q)$
for some $i \in [I]$ with $\mbox{close}_i(q)$ not {\sc null}.
} \end{algorithm}
From Algorithm~\ref{algo:nn}, note that we repeat a procedure (lines 3--12, which finds an almost-closest hub with probability $2/3$) $I=\Theta(\log n)$ times, and report the output we get from any of the successful instances. Consider \Cref{lem:inter-lhs}, which says that, in {\sc Nearest-Hub-Search}\xspace, each point $q \in Q$ finds $\mbox{close}(q)\in H$ satisfying the required property with high probability. This immediately implies the correctness part of \Cref{theo:lhs}. We then discuss the \textsc{MPC}\xspace implementation of {\sc Nearest-Hub-Search}\xspace.
Note that $\mbox{close}\xspace_i(q)$ (which is either {\sc null} or a point in $H$ such that $d(q,\mbox{close}\xspace_i(q))=\mathcal{O}(d(q,H))$) denotes the output of instance $i \in [I]$ of {\sc Nearest-Hub-Search}\xspace for point $q \in Q \setminus H$.
\begin{lem} \label{lem:inter-lhs} For a particular $q \in Q \setminus H$ and $i \in [I]$, with probability at least $2/3$, $\mbox{close}\xspace_i(q)\in H$ is not {\sc null} and $d(q,\mbox{close}\xspace_i(q))\leq 2 c_\rho \cdot d(q,H)$. \end{lem}
\begin{proof}
Consider $j^*$ such that $d(q,H) \leq r=2^{j^*} \leq 2 \cdot d(q,H)$, and let $q_h \in H$ be a nearest hub to $q$, so that $d(q,q_h) = d(q,H) \leq r$. As each $f_\ell$, $\ell \in [L]$, is chosen from an $(r,c_\rho r, (1/n)^\rho,1/n)$-LSH family, $\mbox{Pr}(f_\ell(q)=f_\ell(q_h)) \geq \frac{1}{n^{\rho}}$. As $L=\Theta(n^\rho)$, the probability that no $\ell \in [L]$ satisfies $f_{\ell}(q)=f_{\ell}(q_h)$ is at most $\left(1-n^{-\rho}\right)^{L} \leq e^{-L/n^{\rho}}$, which is at most $1/10$ for a suitable choice of the constant in $L$. Hence, there exists an $\ell^* \in [L]$ such that $f_{\ell^*}(q)=f_{\ell^*}(q_h)$ with probability at least $9/10$. However, our algorithm may not find this particular $q_h$ while considering the hubs $h \in H$ such that $f_{\ell^*}(q)=f_{\ell^*}(q_h)=f_{\ell^*}(h)$ (see line 8 of {\sc Nearest-Hub-Search}\xspace). Again, as $f_{\ell^*}$ is chosen from an $(r,c_\rho r, (1/n)^\rho,1/n)$-LSH family, the expected number of hubs $h \in H$ with $d(q,h)> c_\rho r$ but $f_{\ell^*}(q)=f_{\ell^*}(h)$ is at most $1$. By Markov's inequality, the probability that the number of such hubs is more than $10$ is at most $1/10$. So, with probability at least $2/3$, {\sc Nearest-Hub-Search}\xspace sets $\mbox{close}\xspace_{ij^*\ell^*}(q)=h$ for some $h \in H$ such that $d(q,h)\leq c_\rho r$, that is, $d(q,h)\leq 2c_\rho \cdot d(q,H)$. Now, considering the way we set $\mbox{close}\xspace_{ij}(q)$ from the $\mbox{close}\xspace_{ij\ell}(q)$'s ($\ell \in [L]$) in line 10 and $\mbox{close}\xspace_{i}(q)$ from the $\mbox{close}\xspace_{ij}(q)$'s $(0 \leq j \leq \log \Delta)$ in line 12, we conclude that {\sc Nearest-Hub-Search}\xspace sets $\mbox{close}\xspace_i(q)\in H$ such that $d(q,\mbox{close}\xspace_i(q))\leq 2 c_\rho \cdot d(q,H)$ with probability at least $2/3$. \end{proof}
Now, consider the way {\sc Nearest-Hub-Search}\xspace sets the value of $\mbox{close}\xspace(q)$ in lines 14--15 from the $\mbox{close}\xspace_i(q)$'s. By \Cref{lem:inter-lhs}, and since $I=\Theta(\log n)$, with probability at least $1-\frac{1}{n^{\Omega(1)}}$, $\mbox{close}\xspace(q)$ is not {\sc null} and satisfies $d(q,\mbox{close}\xspace(q))\leq 2 c_\rho \cdot d(q,H)$. Applying the union bound over all points in $Q \setminus H$, we see that \Cref{theo:lhs} is implied by \Cref{lem:inter-lhs}, except for the details of the \textsc{MPC}\xspace implementation, which we discuss next.
\subsection*{\textsc{MPC}\xspace implementation of {\sc Nearest-Hub-Search}\xspace }
Without loss of generality, we assume that $\rho<\delta$, as otherwise we can set $\rho=\delta$. First, notice that, if we can implement lines 4--10 of {\sc Nearest-Hub-Search}\xspace in \textsc{MPC}\xspace with local space $\ensuremath{{\mathrm{s}}}\xspace = \mathcal{O}(n^\delta)$ and global space $\ensuremath{{\mathrm{g}}}\xspace = \mathcal{O}( n^{1+\rho} \cdot \log n)$, then we can run these lines in parallel for each possible value of $i$ and $j$ (adding a factor of $\mathcal{O}(\log n \cdot \log \Delta)$ to the global space). The results can then be aggregated in $\mathcal{O}(1)$ rounds using sorting and prefix sum \cite{GSZ11}.
It thus suffices to show that lines 4--10 of {\sc Nearest-Hub-Search}\xspace can be implemented within the desired number of rounds and space. The hash functions in line 5 can be generated locally by some ``leader'' machine and broadcast to the other machines in $\mathcal{O}(1)$ rounds, since we assume $\rho <\delta$. We again perform lines 6--9 in parallel, giving each hash function $f_\ell$, $\ell \in [L]$, its own set of machines to use.
We next consider the implementation of lines 7--8 for a specific hash function $f=f_\ell$. Machines can compute $f$ locally and without communication. Each point $q\in Q$ is now represented by a tuple $(f(q), \text{hub}(q), q)$, where $\text{hub}(q) = 1$ if $q \in H$ and $0$ otherwise. Machines then sort these tuples lexicographically and remove (using prefix sum) all but $10$ hubs for each value in $\ensuremath{\text{range}}(f)$. For each $v \in \ensuremath{\text{range}}(f)$, we now have to compute the distance between each point $q \in Q$ with $f(q) = v$ and each hub $h \in H$ with $f(h) = v$ that was not removed. It might be the case that some points in $Q$ are not located on the same machine as the hubs hashed to the same value (and in general, these points might not all fit on one machine; see the discussion in \Cref{sec:samp}). However, all that is required in this case is that the hubs can be sent to all machines containing points hashed to the same value: this can be done using prefix sum in a constant number of rounds; since there are at most $10$ hubs for each value in $\ensuremath{\text{range}}(f)$, each machine receives at most $10$ hubs. Now the machines have the information necessary to locally compute $\mbox{close}\xspace_{ij\ell}(q)$ for all points that they hold, and for each point $q \in Q$ the tuple $(q, \mbox{close}\xspace_{ij\ell}(q))$ is generated.
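The tuple-based truncation can be illustrated by the following sequential Python sketch; it is an illustrative stand-in for the sort-and-prefix-sum step, and \texttt{truncate\_hubs} as well as the tuple layout are our own naming.
\begin{verbatim}
from collections import defaultdict

def truncate_hubs(tuples, cap=10):
    """tuples: (hash_value, is_hub, point) triples as in the text.
    Keep every non-hub tuple, but at most `cap` hub tuples per
    hash value -- a sequential stand-in for the sorting and
    prefix-sum based truncation performed across machines."""
    kept, hubs_seen = [], defaultdict(int)
    for value, is_hub, point in sorted(tuples):  # groups equal hash values
        if is_hub:
            if hubs_seen[value] < cap:
                hubs_seen[value] += 1
                kept.append((value, is_hub, point))
        else:
            kept.append((value, is_hub, point))
    return kept
\end{verbatim}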
Finally, observe that line 10 can be implemented in $\mathcal{O}(1)$ rounds using sorting and prefix sum. \section{Sample and Solve} \label{sec:samp}
In this section, we describe {\sc Sample-And-Solve}\xspace $(Q,p,r)$, a subroutine used by {\sc Uniform}-{\sc Center}\xspace and {\sc Ext}-$k$-{\sc Center}\xspace in Sections \ref{sec:uni} and \ref{sec:main}, respectively. {\sc Sample-And-Solve}\xspace $(Q,p,r)$ takes a set $Q$ of at most $n$ points, a sampling parameter $p$, and a radius parameter $r$; it relies on {\sc Nearest-Hub-Search}\xspace discussed in \Cref{sec:hub}, and produces a set of centers $S \subseteq Q$ such that $\textsc{cost}\xspace(Q,S)=\mathcal{O}(r)$.
\begin{algorithm}[h] \caption{{\sc Greedy}\xspace ($R, h, r$)}\label{algo:greedy}
\KwIn{Set $R$ of at most $n$ points; a point $h \in R$; radius parameter $r \in \mathbb{R}^+$.} \KwOut{A set $G \subseteq R$ of centers.} \Begin{ Set $G \leftarrow \{h\}$.
\While{ ( $ \exists ~x \in R$ with $d(x,G)=\min_{y \in G} d(x,y) > 4c_\rho r$)} { // \textit{\small
Here $c_\rho$ is the constant as in \Cref{theo:lhs}.}
Let $x \in R$ be the point furthest from $G$; add $x$ to $G$.
} Report the set $G$ of centers. } \end{algorithm}
\begin{algorithm}[h]
\caption{{\sc Sample-And-Solve}\xspace ($Q, p,r$)}\label{algo:sample}
\KwIn{Set $Q$ of at most $n$ points; probability parameter $p \in (0,1)$; radius parameter $r \in \mathbb{R}^+$.} \KwOut{A set $S \subseteq Q$ of centers.}
\Begin
{
\If{$\left(\size{Q}\leq \ensuremath{{\mathrm{s}}}\xspace=\mathcal{O}(n^\delta)\right)$}
{
Call $\mbox{{\sc Greedy}\xspace}(Q,q,r)$ for some arbitrary $q \in Q$, and report the set of centers output by it as $S$.
}
Sample each point in $Q$ independently with probability $p$. Points which are sampled form the set of hubs $H$.
{If $H=\emptyset,$ report {\sc Fail}.}
For each point $q$ in $Q$, assign it to an approximately closest hub in $H$ by calling $\mbox{{\sc Nearest-Hub-Search}\xspace}(Q,H)$. We call the set of points assigned to a hub $h \in H$ the {\em bag corresponding to $h$}, and denote it by $B_h$. Note that $B_h$ includes $h$.
\For{(each $h \in H$)}
{
\If {$(\size{B_h} \leq \ensuremath{{\mathrm{s}}}\xspace)$}
{
Collect $B_h$ on a single machine.
$S_h \leftarrow \mbox{{\sc Greedy}\xspace}(B_h,h,r)$.
}
\Else{ Form bags $B_{h_1},\ldots,B_{h_w}$, keeping $h$ in every $B_{h_i}$ ($i \in [w]$) and putting every other point of $B_h \setminus \{h\}$ into exactly one of the $B_{h_i}$'s, such that $\size{B_{h_i}}\leq \ensuremath{{\mathrm{s}}}\xspace=\mathcal{O}(n^{\delta})$ for each $i \in [w]$.
$S_{h_i} \leftarrow \mbox{{\sc Greedy}\xspace}(B_{h_i},h,r)$, where $i \in [w]$.
$S_h \leftarrow \bigcup\limits_{i=1}^w S_{h_i}$.
}
}
Report set of centers $S=\bigcup\limits_{h \in H} S_h$. } \end{algorithm}
{\sc Sample-And-Solve}\xspace $(Q,p,r)$ calls algorithm {\sc Greedy}\xspace$(R,h,r)$ as a subroutine, which produces a set of centers $G\subseteq R$ such that $\textsc{cost}\xspace(R,G)=\mathcal{O}(r)$. {\sc Greedy}\xspace$(R,h,r)$ is a variation of a classic 2-approximation algorithm for $k$-center in the sequential setting~\cite{gonzalez1985kcenter}. In {\sc Sample-And-Solve}\xspace $(Q,p,r)$, the idea is to sample each point in $Q$ (independently) with probability $p$ to form a set of hubs $H$. Then each point $q \in Q$ is assigned to some hub $h \in H$ using {\sc Nearest-Hub-Search}\xspace (as described in Algorithm~\ref{algo:nn}). For $h \in H$, let $B_h$ be the set of points assigned to $h$ (including $h$ itself). We run {\sc Greedy}\xspace on the points in $B_h$ to produce a set of centers $S_h$. Finally, $\bigcup_{h \in H} S_h$ is the output reported by {\sc Sample-And-Solve}\xspace. There is one further technicality: $\size{B_h}$ may be much larger than $\ensuremath{{\mathrm{s}}}\xspace$. In that case, we distribute the points in $B_h \setminus \{h\}$ across a number of machines, but we send $h$ to each machine, ensuring that the total number of points assigned to a machine (including $h$) is less than \ensuremath{{\mathrm{s}}}\xspace---and then we apply {\sc Greedy}\xspace to the points on each of these machines.
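For concreteness, here is a minimal sequential Python sketch of {\sc Greedy}\xspace (Algorithm~\ref{algo:greedy}); the Euclidean helper \texttt{dist} is our own illustrative choice, and all \textsc{MPC}\xspace aspects are ignored.
\begin{verbatim}
import math

def dist(x, y):
    # Euclidean distance between two points given as tuples of coordinates
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def greedy(R, h, r, c_rho):
    """Farthest-point style selection: repeatedly add the point of R
    farthest from the current centers G until every point of R is
    within 4*c_rho*r of G, so that cost(R, G) <= 4*c_rho*r holds on
    termination."""
    G = [h]
    while True:
        x = max(R, key=lambda p: min(dist(p, g) for g in G))
        if min(dist(x, g) for g in G) <= 4 * c_rho * r:
            return G
        G.append(x)
\end{verbatim}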
The formal algorithm for {\sc Sample-And-Solve}\xspace is presented in Algorithm~\ref{algo:sample}. The approximation guarantee, round complexity and space complexity of {\sc Sample-And-Solve}\xspace are stated in \Cref{lem:samp}. An additional property of {\sc Sample-And-Solve}\xspace is stated in \Cref{lem:samp-add} which will be useful in both \Cref{sec:uni} and \Cref{sec:main}.
\begin{lem}[{\bf Approximation guarantee, round complexity and space complexity of {\sc Sample-And-Solve}\xspace}] \label{lem:samp} Consider {\sc Sample-And-Solve}\xspace ($Q, p,r$), as described in Algorithm~\ref{algo:sample}. With probability at least $1-\min\left \lbrace e^{-\Omega \left( p \cdot n^{\delta}\right)}, \frac{1}{n^{\Omega(1)}} \right \rbrace$, it does not report {\sc Fail}, and moreover: \begin{enumerate} \item[(i)] It produces a set of centers $S\subseteq Q$ such that $\textsc{cost}\xspace(Q,S)\leq 4c_\rho r=\mathcal{O}(r)$, where $c_\rho$ is the constant as in \Cref{theo:lhs}; \item[(ii)] It takes $\mathcal{O}(1)$ \textsc{MPC}\xspace rounds with local space $\ensuremath{{\mathrm{s}}}\xspace=\mathcal{O}\left(n^{\delta} \right)$ and global space $\ensuremath{{\mathrm{g}}}\xspace=\widetilde{\mathcal{O}}(n^{1+\rho} \cdot \log \Delta)$.
\end{enumerate} \end{lem}
\begin{rem} \label{rem:sas} We call {\sc Sample-And-Solve}\xspace from {\sc Uniform}-{\sc Center}\xspace (Algorithm~\ref{algo:uniform}) with probability parameter $p =\Omega\left( \frac{ \log n}{n^{\delta}}\right)$. Therefore, the success probability of {\sc Sample-And-Solve}\xspace in our case is always at least $1-\frac{1}{n^{\Omega(1)}}$. \end{rem} \begin{proof}[Proof of \Cref{lem:samp}] Note that {\sc Sample-And-Solve}\xspace (Algorithm \ref{algo:sample}) crucially calls the subroutine {\sc Greedy}\xspace (Algorithm~\ref{algo:greedy}) multiple times, namely in lines 2, 10 and 14. We start the proof with the following observation about algorithm {\sc Greedy}\xspace$(R,h,r)$, which follows from the description of Algorithm~\ref{algo:greedy}.
\begin{obs} \label{obs:grdy}
The output $G \subseteq R$ produced by {\sc Greedy}\xspace($R,h,r$) (as described in Algorithm \ref{algo:greedy}) satisfies $\textsc{cost}\xspace (R,G)\leq 4 c_\rho r$. \end{obs}
Note that both (i) and (ii) of \Cref{lem:samp} are immediate if $\size{Q}\leq \ensuremath{{\mathrm{s}}}\xspace$: in that case, we execute $\mbox{{\sc Greedy}\xspace}(Q,q,r)$ for some $q \in Q$ locally on one machine, and report its output as $S$. By \Cref{obs:grdy}, we have $\textsc{cost}\xspace(Q,S)\leq 4c_\rho r$.
Now consider the case when $\size{Q} > \ensuremath{{\mathrm{s}}}\xspace$. Note that {\sc Sample-And-Solve}\xspace reports {\sc Fail} only when the set of hubs $H$ is $\emptyset$. As every point in $Q$ is added to $H$ independently with probability $p$, the probability that {\sc Sample-And-Solve}\xspace reports {\sc Fail} is at most $\left(1-p\right)^{\size{Q}} \leq e^{-\Omega\left(p \cdot n^{\delta}\right)}$. Now, we argue (i) and (ii) separately. Recall the description of Algorithm~\ref{algo:sample} in lines 7--17. \begin{enumerate} \item[(i)] By \Cref{obs:grdy}, for each $h \in H$, $\textsc{cost}\xspace(B_h,S_h) \leq 4c_\rho r$. As $S=\bigcup\limits_{h \in H} S_h$ and $Q=\bigcup\limits_{h \in H} B_h$, $\textsc{cost}\xspace(Q,S)\leq 4c_\rho r$. \item[(ii)] From \Cref{theo:lhs}, {\sc Nearest-Hub-Search}\xspace can be implemented in \textsc{MPC}\xspace with local space $\ensuremath{{\mathrm{s}}}\xspace= \mathcal{O}(n^\delta)$ and global space $\ensuremath{{\mathrm{g}}}\xspace=\widetilde{\mathcal{O}}(n^{1+\rho} \cdot \log \Delta )$ in $\mathcal{O}(1)$ rounds. After {\sc Nearest-Hub-Search}\xspace is performed, each point knows its assigned hub. Using sorting, we can place all points with the same hub on consecutive machines in $\mathcal{O}(1)$ rounds, and using prefix sum, we can count the number of points assigned to each hub in $\mathcal{O}(1)$ rounds. Now, we consider two cases:
If $|B_h| \leq \ensuremath{{\mathrm{s}}}\xspace $ (that is, the bag fits on a single machine), then {\sc Greedy}\xspace on $B_h$ can be performed on a single machine without communication, that is, in $0$ rounds.\footnote{A minor technical matter is moving $B_h$ to a single machine if it is small enough to fit but originally stored on two consecutive machines. This can be done in $1$ round if we have $\geq 2n/\ensuremath{{\mathrm{s}}}\xspace$ machines: when sorting, use only the first $n/\ensuremath{{\mathrm{s}}}\xspace$ machines, then if $B_h$ was originally stored on machines $i$ and $i+1$, move it to machine $i + n/\ensuremath{{\mathrm{s}}}\xspace$.}
If $|B_h| > \ensuremath{{\mathrm{s}}}\xspace$ (that is, the bag does not fit on a single machine), then we arbitrarily partition the bag and perform {\sc Greedy}\xspace on each part. Specifically, we send $h$ to each of the consecutive machines on which $B_h$ is stored, and these machines perform {\sc Greedy}\xspace on the subset of the bag that they hold locally. This can be performed in $\mathcal{O}(1)$ rounds. \end{enumerate} \end{proof} \begin{lem}[{\bf An additional guarantee of {\sc Sample-And-Solve}\xspace}] \label{lem:samp-add} {Let $\mathcal{C}_r$ be a clustering of $Q$ having cost at most $r$. Then, with high probability, the following holds for any $C \in \mathcal{C}_r$:} if at least one hub is selected from $C$, then no further point in $C \setminus H$ is selected as a center, that is, $\size{S \cap C} = \size{H \cap C}$. \end{lem} \begin{proof} Consider any point $q \in C \setminus H$. As $C$ has cost at most $r$, any two points of $C$ are within distance $2r$ of each other; since at least one hub is selected from $C$, $d(q,H) \leq 2r$. By the guarantee of {\sc Nearest-Hub-Search}\xspace{} (see \Cref{theo:lhs}), with probability at least $1-\frac{1}{n^{\Omega(1)}}$, $q$ is assigned to some hub $h \in H$ such that $d(q,h) \leq 2 c_\rho d(q,H) \leq 4 c_\rho r$. So, when we call {\sc Greedy}\xspace ($B_h, h, r$), as $d(q,h) \leq 4 c_\rho r$, $q$ will not be selected as a center. This implies that $\size{S \cap C} \leq \size{H \cap C}$. The claim follows as $H \subseteq S$. \end{proof} \input{unifomr.tex}
\section{The main algorithm} \label{sec:main}
In this section, we present our main algorithm {\sc Ext}-$k$-{\sc Center}\xspace. Recall the overall description of {\sc Ext}-$k$-{\sc Center}\xspace in \Cref{sec:over}. {\sc Ext}-$k$-{\sc Center}\xspace has two phases. In {\bf Phase 1}, it calls {\sc Uniform}-{\sc Center}\xspace $\alpha$ times, and in {\bf Phase 2}, it calls {\sc Sample-And-Solve}\xspace $\beta$ times, where $\alpha$ is the input tradeoff parameter and $\beta=\Theta(\log ^{(\alpha +1)} n)$. The formal algorithm is described in Algorithm~\ref{algo:main}. We prove the round complexity and space complexity of {\sc Ext}-$k$-{\sc Center}\xspace in \Cref{lem:main1}, the approximation guarantee in \Cref{lem:main2}, and the bound on the number of centers in \Cref{lem:main3}.
\begin{algorithm}[h] \caption{{\sc Ext}-$k$-{\sc Center}\xspace ($P, r$)}\label{algo:main}
\KwIn{Set $P$ of $n$ points; tradeoff parameter $\alpha$; radius parameter $r \in \mathbb{R}^+$.} \KwOut{A set $T\subseteq P$ of centers.} \Begin{
{\bf Phase 1:}
$T_0 \leftarrow P$, $t_0=n $, and $r_0= \frac{r}{\log \log n}$.
\For{$(j=1~\mbox{to}~\alpha)$} {
{\bf Phase 1.j:}
$T_{j} \leftarrow \mbox{{\sc Uniform}-{\sc Center}\xspace} (T_{j-1},r_{j-1}, t_{j-1})$.
$t_{j}=\Theta(\log t_{j-1} \cdot (\log \log t_{j-1})^{d+2})$.
\textit{\small
// Note that $t_j=\widetilde{\Theta}(\log t_{j-1})=\widetilde{\Theta}(\log ^{(j)} n).$ }
$r_j=\frac{r}{\log \log t_{j}}.$
} {\bf Phase 2:}
\For{ $(i=1~\mbox{to}~\beta= \Theta (\log ^{(\alpha +1)} n))$} {
{\bf Phase 2.i:}
$T_{\alpha+i} \leftarrow \mbox{{\sc Sample-And-Solve}\xspace} (T_{\alpha+i-1},\frac{1}{2},r)$.
}
Report $T=T_{\alpha+\beta}$. } \end{algorithm}
\begin{lem}[{\bf Round complexity and global space of {\sc Ext}-$k$-{\sc Center}\xspace}] \label{lem:main1} Consider {\sc Ext}-$k$-{\sc Center}\xspace ($P,r$), as described in Algorithm~\ref{algo:main}. The number of rounds taken by the algorithm is $\mathcal{O}( \log \log n)$ and the global space used by the algorithm is $\ensuremath{{\mathrm{g}}}\xspace=\widetilde{\mathcal{O}}\left(n^{1+\rho} \cdot \log \Delta\right)$. \end{lem}
\begin{proof} For any $j$ with $1\leq j \leq \alpha$, in {\bf Phase 1.j}, {\sc Ext}-$k$-{\sc Center}\xspace ($P, r$) calls $\mbox{{\sc Uniform}-{\sc Center}\xspace}(T_{j-1},r_{j-1},t_{j-1})$. By Lemma \ref{lem:uniform1} (i), the total number of rounds spent by {\sc Ext}-$k$-{\sc Center}\xspace ($P, r$) in {\bf Phase 1} is $\sum_{j=1}^\alpha \mathcal{O}\left(\log \log t_{j-1}\right)=\mathcal{O}\left(\log \log n\right)$. This is because $t_0=n$ and $t_j=\widetilde{\Theta}\left(\log t_{j-1}\right)$ for any $j\geq 1$, that is, $t_j=\widetilde{\Theta}\left(\log ^{(j)} n\right)$. In {\bf Phase 2}, {\sc Ext}-$k$-{\sc Center}\xspace ($P, r$) calls {\sc Sample-And-Solve}\xspace $\beta=\Theta\left(\log ^{(\alpha +1)} n\right)$ times. By \Cref{lem:samp} (ii), the total number of rounds spent by {\sc Ext}-$k$-{\sc Center}\xspace ($P, r$) in {\bf Phase 2} is $\mathcal{O}(\beta)=\mathcal{O}\left(\log ^{(\alpha +1)} n\right)$. So, the round complexity of {\sc Ext}-$k$-{\sc Center}\xspace follows. The global space complexity of {\sc Ext}-$k$-{\sc Center}\xspace follows from the global space complexities of {\sc Uniform}-{\sc Center}\xspace and {\sc Sample-And-Solve}\xspace (see Lemma \ref{lem:uniform1} and \Cref{lem:samp}(ii), respectively). \end{proof}
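For concreteness, the telescoping in the above round count can be spelled out as follows (using that $\log\log t_{j-1}=\Theta(\log^{(j+1)} n)$ when $t_{j-1}=\widetilde{\Theta}(\log^{(j-1)} n)$):
\begin{align*}
\sum_{j=1}^{\alpha} \mathcal{O}\left(\log \log t_{j-1}\right)
= \sum_{j=1}^{\alpha} \mathcal{O}\left(\log^{(j+1)} n\right)
= \mathcal{O}\left(\log \log n\right)
\enspace,
\end{align*}
since the first summand dominates: the remaining summands add up to $\mathcal{O}\left(\log\log\log n + \alpha\right)=\mathcal{O}\left(\log\log\log n + \log^* n\right)$.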
\begin{lem}[{\bf Approximation guarantee of {\sc Ext}-$k$-{\sc Center}\xspace}] \label{lem:main2} Consider {\sc Ext}-$k$-{\sc Center}\xspace ($P, r$) as described in Algorithm~\ref{algo:main}. It produces output $T \subseteq P$ such that $\textsc{cost}\xspace(P ,T) = \mathcal{O}(r \cdot(\alpha+ \log ^{(\alpha +1)} n)).$ \end{lem}
\begin{proof} Observe that
\begin{align*}
\textsc{cost}\xspace(P,T) &= \textsc{cost}\xspace(T_0,T_{\alpha+\beta})
\leq \textsc{cost}\xspace(T_0,T_\alpha)+\textsc{cost}\xspace(T_\alpha,T_{\alpha+\beta})
\enspace. \end{align*}
It therefore suffices to show that $\textsc{cost}\xspace(T_0,T_\alpha)$ and $\textsc{cost}\xspace(T_\alpha,T_{\alpha+\beta})$ are bounded by $\mathcal{O}(r \alpha)$ and $\mathcal{O}(r \cdot \log^{(\alpha+1)} n)$, respectively.
For any $j$ with $1\leq j \leq \alpha$, note that {\sc Ext}-$k$-{\sc Center}\xspace($P, r$) calls $\mbox{{\sc Uniform}-{\sc Center}\xspace}(T_{j-1},r_{j-1},t_{j-1})$ in {\bf Phase 1.j} and produces $T_j$ as the output. So, by \Cref{lem:uniform2}, $\textsc{cost}\xspace (T_{j-1},T_j) = \mathcal{O}(r_{j-1} \cdot \log \log t_{j-1})$, which is $\mathcal{O}(r)$. Hence,
\begin{align*}
\textsc{cost}\xspace(T_0,T_\alpha) &\leq
\sum_{j=1}^\alpha \textsc{cost}\xspace \left(T_{j-1},T_j \right) =
\alpha \cdot \mathcal{O}(r) =
\mathcal{O}\left(r \cdot \alpha\right)
\enspace. \end{align*}
For any $i$ with $1\leq i \leq \beta$, note that {\sc Ext}-$k$-{\sc Center}\xspace($P, r$) calls $\mbox{{\sc Sample-And-Solve}\xspace}(T_{\alpha+i-1},1/2,r)$ in {\bf Phase 2.i} and produces $T_{\alpha +i}$ as the output. So, by \Cref{lem:samp} (i), $\textsc{cost}\xspace (T_{\alpha+i-1},T_{\alpha+i}) = \mathcal{O}(r).$ Hence, as $\beta=\Theta(\log ^{(\alpha +1)} n)$,
\begin{align*}
\textsc{cost}\xspace\left(T_\alpha,T_{\alpha +\beta}\right) &\leq
\sum\limits_{i=1}^\beta \textsc{cost}\xspace \left(T_{\alpha+i-1},T_{\alpha + i} \right)
=
\mathcal{O}(r \cdot \log ^{(\alpha +1)} n)
\enspace.
\qedhere \end{align*} \end{proof}
\begin{lem}[{\bf Number of centers reported by {\sc Ext}-$k$-{\sc Center}\xspace}] \label{lem:main3} Consider {\sc Ext}-$k$-{\sc Center}\xspace $(P, r)$ as described in Algorithm~\ref{algo:main}. It produces output $T$ such that, with probability at least $1-\frac{1}{(\log ^{(\alpha-1)} n)^{\Omega(1)}}$,
\begin{align*}
\size{T} &\leq
\size{\mathcal{C}_r}\left(1+\frac{1}{\widetilde{\Theta}(\log ^{(\alpha)} n)}\right)+\widetilde{\Theta}((\log ^{(\alpha)} n)^3)
\enspace. \end{align*}
Here, $\mathcal{C}_r $ is a clustering of $P$ that has the minimum number of centers among all clusterings of $P$ with cost at most $r$, and we assume $\size{\mathcal{C}_r}=\Omega((\log n)^c)$, where $c$ is a suitable constant. \end{lem}
Now, we introduce the notion of \emph{active} and \emph{inactive} clusters in the following definition, which is useful in proving \Cref{lem:main3}. {Inactive clusters are clusters which, at some point during {\bf Phase 1}, fail to reduce in size sufficiently. After the sub-phase during which they fail to reduce in size sufficiently, we assume that they never reduce in size again (since this is the worst case). We are then able to bound the total number of centers in inactive clusters (\Cref{lem:inter1}). Active clusters, by contrast, always reduce in size as we expect: the number of centers in active clusters is therefore easy to bound.}
\begin{defi} \label{defi:active} Let $\mathcal{C}_r$ be an optimal clustering with cost at most~$r$. For each $C \in \mathcal{C}_r$ and $j$ with $1 \leq j \leq \alpha$, we say $C$ is \emph{inactive} in \textbf{Phase 1.j} if $\size{C \cap T_i} > t_i$ for some $i$ with $1 \leq i < j$. Otherwise, if $\size{C \cap T_i} \leq t_i$ for every $i$ with $1 \leq i < j$, $C$ is called \emph{active} in~\textbf{Phase~1.j}. \end{defi}
Let $\mathcal{C}_r' \subseteq \mathcal{C}_r$ be the set of clusters that are active after {\bf Phase 1}, that is, {\bf Phase 1.$\alpha$}. By the definition of active clusters, for each $C \in \mathcal{C}_r'$, $\size{C \cap T_\alpha} \leq t_{\alpha}$. Note that {\sc Ext}-$k$-{\sc Center}\xspace goes over $\beta$ sub-phases in {\bf Phase 2}. After {\bf Phase 1} and before the start of {\bf Phase 2}, it has $T_{\alpha}$ as the set of intermediate centers. For $1 \leq i \leq \beta$, in {\bf Phase 2.i}, we call $\mbox{{\sc Sample-And-Solve}\xspace}\left(T_{\alpha+i-1},\frac{1}{2},r \right)$, and get $T_{\alpha+i}$ as the intermediate centers. For $0 \leq i \leq \beta$, a cluster $C \in \mathcal{C}_r'$ is said to be $i$-\emph{large} if $\size{C \cap T_{\alpha+i}} \geq 2$. Let $\Gamma_i \subseteq \mathcal{C}_r'$ denote the set of $i$-\emph{large} clusters, and let $Y_i$ denote the total number of points that are {in $i$-large clusters}, that is, $Y_i=\sum\limits_{C \in \Gamma_i} \size{C \cap T_{\alpha+i}}$.
Note that, in \Cref{lem:main3}, we want to bound the number of centers in $T=T_{\alpha+\beta}$. We first observe that $\size{T}$ can be bounded by the sum of three quantities:
\begin{obs} \label{obs:boundT} $\size{T}=\size{T_{\alpha+\beta}}\leq\size{\mathcal{C}_r}+Y_{\beta}+\sum\limits_{C \in \mathcal{C}_r \setminus \mathcal{C}_r'} \size{C \cap T_\alpha}$. \end{obs}
\begin{proof} Observe that since $T_{\alpha+\beta} \subseteq T_\alpha$, we obtain,
\begin{align*}
\size{T_{\alpha +\beta}} &=
\sum_{C \in \mathcal{C}_r \setminus \mathcal{C}_r'}\size{C \cap T_{\alpha+\beta}}+\sum_{C \in \mathcal{C}_r'}\size{C \cap T_{\alpha+\beta}}
\\&\leq
\sum_{C \in \mathcal{C}_r \setminus \mathcal{C}_r'}\size{C \cap T_{\alpha}}+\sum_{C \in \mathcal{C}_r'}\size{C \cap T_{\alpha+\beta}}
\enspace. \end{align*}
To bound the second sum by $\size{\mathcal{C}_r}+Y_\beta$, observe that
\begin{align*}
\sum_{C \in \mathcal{C}_r'}\size{C \cap T_{\alpha+\beta}}
&=
\!\!\!\sum_{C \in \mathcal{C}_r' : \size{C \cap T_{\alpha + \beta}}=1 }\!\!\!\size{C \cap T_{\alpha+\beta}} +
\!\!\!\!\!\!\sum_{C \in \mathcal{C}_r':\size{C \cap T_{\alpha + \beta}}\geq 2}\!\!\!\size{C \cap T_{\alpha+\beta}}
\\&\leq
\size{\mathcal{C}_r'}+Y_\beta \leq \size{\mathcal{C}_r}+Y_\beta
\enspace, \end{align*}
where the last inequality uses $\mathcal{C}_r'\subseteq \mathcal{C}_r$. This yields \Cref{obs:boundT}. \end{proof}
In the following lemmas, we bound $\sum_{C \in \mathcal{C}_r \setminus \mathcal{C}_r'} \size{C \cap T_\alpha}$ and $Y_\beta$; together with \Cref{obs:boundT}, the result of \Cref{lem:main3} immediately follows from these bounds. Lemmas~\ref{lem:inter1}~and~\ref{lem:inter2} are technical and will be proved later.
\begin{lem} \label{lem:inter1} With probability at least $1-\sum_{i=1}^{\alpha}\frac{1}{t_{i-1}^{\Omega(1)}}$, $\sum_{C \in \mathcal{C}_r \setminus \mathcal{C}_r'} \size{C \cap T_\alpha}$ is $\mathcal{O}\left( \frac{ \size{\mathcal{C}_r}}{(\log ^{(\alpha ) } n)^{\Omega(1)}}\right)$, that is, the number of points in $T_{\alpha}$ that lie in clusters that are inactive after {\bf Phase 1} is $\mathcal{O}\left( \frac{ \size{\mathcal{C}_r}}{(\log ^{(\alpha) } n)^{\Omega(1)}}\right)$. \end{lem}
\begin{lem} \label{lem:inter2} With probability at least $1-\frac{1}{t_{\alpha - 1}^{\Omega(1)}}$, we have $Y_\beta = \mathcal{O}\left(\frac{\size{\mathcal{C}_r}}{t_\alpha}+t_\alpha^3 \cdot \log t_\alpha\right)$. \end{lem}
\begin{proof}[Proof of \Cref{lem:main3} using \Cref{lem:inter1} and \Cref{lem:inter2}] From the above two lemmas, along with \Cref{obs:boundT} and the fact that $t_j=\widetilde{\Theta}(\log ^{(j)} n)$, we have the following bound on $\size{T}$ with probability at least $1-\sum\limits_{i=1}^{\alpha} \frac{1}{t_{i-1}^{\Omega(1)}}\geq 1-\frac{1}{(\log ^{(\alpha - 1)} n)^{\Omega(1)}}$:
\begin{align*}
\size{T} \leq \size{\mathcal{C}_r}\left(1+\frac{1}{\widetilde{\Theta}(\log ^{(\alpha)} n)}\right)+\widetilde{\Theta}\left((\log ^{(\alpha)} n)^3\right)
\enspace. \end{align*} Hence, we are done with the proof of \Cref{lem:main3}. \end{proof}
\subsection*{Proof of \Cref{lem:inter1}}
We prove \Cref{lem:inter1} by using the following lemma, which we prove later.
\begin{lem} \label{lem:main-P1i} Consider {\sc Ext}-$k$-{\sc Center}\xspace ($P, r$) as described in Algorithm \ref{algo:main}, and its {\bf Phase 1}. With probability at least $1-\sum_{i=1}^{\alpha}\frac{1}{t_{i-1}^{\Omega(1)}}$, the following holds: for every $i$ with $1\leq i \leq \alpha$, the number of clusters $C \in \mathcal{C}_r$ such that $C$ is active in {\bf Phase 1.i} but inactive after {\bf Phase 1.i} is at most $\mathcal{O} \left(\frac{\size{\mathcal{C}_r}}{t_{i-1}^{\Omega(1)}}\right)$. \end{lem}
\begin{proof}[Proof of \Cref{lem:inter1} using \Cref{lem:main-P1i}] Let us partition the clusters in $\mathcal{C}_r \setminus \mathcal{C}_r'$ into $\mathcal{A}_1,\ldots,\mathcal{A}_{\alpha}$, where $\mathcal{A}_i$ is the set of clusters that are active in {\bf Phase 1.i} but inactive after {\bf Phase 1.i}, for $1 \leq i \leq \alpha$. Consider a cluster $C \in \mathcal{A}_i$. By the definition of $\mathcal{A}_i$, $\size{C \cap T_{i-1}}\leq t_{i-1}$. Hence, for each $C \in \mathcal{A}_i$, $\size{C \cap T_\alpha} \leq t_{i-1}$, because $T_0 \supseteq T_1 \supseteq \ldots \supseteq T_{\alpha}$.
Applying \Cref{lem:main-P1i}, we have $\size{\mathcal{A}_i}=\mathcal{O}\left(\frac{\size{\mathcal{C}_r}}{t_{i-1}^{\Omega(1)}}\right)$ for each $i~(1 \leq i \leq \alpha)$, with probability at least $1-\sum_{i=1}^{\alpha}\frac{1}{t_{i-1}^{\Omega(1)}}$. Hence, with the same probability,
\begin{align*}
\sum_{C \in \mathcal{C}_r \setminus \mathcal{C}_r'} \size{C \cap T_\alpha} &\leq
\sum_{i=1}^{\alpha} \sum_{C \in \mathcal{A}_i}\size{T_\alpha \cap C} \leq
\sum_{i=1}^{\alpha} \size{\mathcal{A}_i} t_{i-1}
\\ &=
\mathcal{O}\left(\sum_{i=1}^\alpha\frac{\size{\mathcal{C}_r}}{t_{i-1}^{\Omega(1)}}\right) =
\mathcal{O}\left( \frac{ \size{\mathcal{C}_r}}{\left(\log ^{(\alpha )}n\right)^{\Omega(1)}}\right)
\enspace. \end{align*}
The last step uses that $t_i=\widetilde{\Theta}\left(\log ^{(i)} n\right)$.
\end{proof}
\begin{proof}[Proof of \Cref{lem:main-P1i}]
Consider $i$ with $1 \leq i \leq \alpha$. Let $\mathcal{A}_i$ be the set of clusters that were active in {\bf Phase 1.i} but inactive after {\bf Phase 1.i}, and let $\mathcal{B}_i \supseteq \mathcal{A}_i$ be the set of clusters that were active in {\bf Phase 1.i}. It suffices to show that $\size{\mathcal{A}_i}=\mathcal{O}\left(\frac{\size{\mathcal{C}_r}}{t_{i -1}^{\Omega(1)}}\right)$ holds with probability at least $1-\frac{1}{t_{i-1}^{\Omega(1)}}$.
For $C \in \mathcal{B}_{i}$, let $X_C$ be the random variable defined as
\begin{align*}
X_C &= \begin{cases}
1 & \text{ if } \size{C \cap T_{i}} > t_i \\
0 & \mbox{otherwise.}
\end{cases} \end{align*}
Observe that $\size{\mathcal{A}_i} =\sum_{C \in \mathcal{B}_i} X_C$.
\begin{cl} \label{cl:X_C} The probability that $X_C=1$ is $\mathcal{O}\left(\frac{1}{t_{i-1}^{\Omega(1)}}\right).$ \end{cl}
Using the above claim, we prove \Cref{lem:main-P1i} separately for $i=1$ and $i \geq 2$.
For $i=1$, applying the union bound over all $C \in \mathcal{B}_i=\mathcal{B}_1$, the probability that there exists a $C \in \mathcal{B}_1$ such that $X_C=1$ is at most $\frac{\size{\mathcal{B}_1}}{t_0^{\Omega(1)}}<\frac{1}{t_0^{\Omega(1)}}$, because $\size{\mathcal{B}_1}\leq n$ and $t_0=n$. This implies that $\size{\mathcal{A}_1}=0$ with probability at least $1-\frac{1}{t_0^{\Omega(1)}}$, which completes the case $i=1$.
For $2 \leq i \leq \alpha$, $\mathbb{E} [\size{\mathcal{A}_i}] =\mathcal{O}\left(\frac{\size{\mathcal{B}_i}}{t_{i-1}^{\Omega(1)}}\right)=\mathcal{O}\left(\frac{\size{\mathcal{C}_r}}{t_{i-1}^{\Omega(1)}}\right)$. As $i \geq 2$, $t_{i-1}\leq t_1=\widetilde{\mathcal{O}}(\log n)$. Recall that we assume $\size{\mathcal{C}_r}=\Omega((\log n)^{c})$ for a suitable constant $c$. So, we can assume that $\frac{\size{\mathcal{C}_r}}{t_{i-1}^{\Omega(1)}}\geq \log t_{i-1}$. Using a Chernoff bound (\Cref{lem:cher}), $ \Pr \left(\size{\mathcal{A}_i} \geq c_1 \cdot \frac{\size{\mathcal{C}_r}}{t_{i-1}^{\Omega(1)}}\right)
\leq \frac{1}{t_{i -1 }^{\Omega(1)}},$ where $c_1$ is a suitable large constant. So, we are done with the proof of \Cref{lem:main-P1i}, except for the proof of \Cref{cl:X_C}.
\begin{proof}[Proof of \Cref{cl:X_C}]
Consider the clustering $\mathcal{C}_{r_{i-1}}^{t_{i-1}}$ of $T_{i-1}$ defined as follows. For each $C \in \mathcal{C}_r$, consider a partition of the cluster $C$ into at most $z=\mathcal{O}\left(\left(\log \log t_{i-1}\right)^d\right)$ clusters $C_1,\ldots, C_z$ such that the radius of each part is at most $r_{i-1}$; such a partition exists because a set of radius at most $r$ in $\mathbb{R}^d$ can be covered by $\mathcal{O}\left((r/r_{i-1})^d\right)=\mathcal{O}\left(\left(\log\log t_{i-1}\right)^d\right)$ sets of radius at most $r_{i-1}$. For each $C_y$ ($y \in [z]$), the corresponding cluster in $\mathcal{C}_{r_{i -1}}^{t_{i -1}}$ is $C_y'=C_y \cap T_{i -1}$. So, $\size{\mathcal{C}_{r_{i -1}}^{t_{i-1}}}=\size{\mathcal{C}_r} \cdot \mathcal{O} \left(\left( \log \log t_{i -1} \right)^d\right).$
Consider the particular $C \in \mathcal{B}_{i}$. By \Cref{defi:active}, $\size{C \cap T_{i-1}} \leq t_{i-1}$. If we consider the partition of $C$ into $C_1,\ldots,C_z$ in $\mathcal{C}_{r_{i-1}}^{t_{i -1}}$, then $\size{C_y \cap T_{i -1}} \leq t_{i-1}$ for each $y \in [z]$. Let $\mathcal{B}_i'$ be the set of clusters in $\mathcal{C}_{r_{i-1}}^{t_{i -1}}$ that are formed due to the partition of some $C \in \mathcal{B}_i$. So, for each $C' \in \mathcal{B}_i'$, we have $\size{C' \cap T_{i -1}}\leq t_{i-1}.$
Consider {\bf Phase 1.i} of {\sc Ext}-$k$-{\sc Center}\xspace: we get $T_i$ as the current set of centers by calling $\mbox{{\sc Uniform}-{\sc Center}\xspace}(T_{i-1}, r_{i-1}, t_{i-1})$. Let us apply \Cref{lem:uniform3} with $t=t_{i-1}$, $V_t=T_{i-1}$, $r=r_{i-1}$, $S=T_i$, and $\mathcal{C}_{r}^t=\mathcal{C}_{r_{i-1}}^{t_{i-1}}$. For $y\in [z]$, with probability at least $1-\frac{1}{t_{i-1}^{\Omega(1)}}$, we have $\size{C_y \cap T_i} =\mathcal{O} \left(\log t_{i -1}\cdot \left(\log \log t_{i -1}\right)^2\right)$. Applying the union bound over all $y \in [z]$, with probability at least $1-\frac{1}{t_{i-1}^{\Omega(1)}}$, we have $$\size{C \cap T_i} =\sum_{y \in [z]}\size{C_y \cap T_i} =\mathcal{O} \left( \log t_{i-1} \cdot \left(\log \log t_{i-1}\right)^{d+2}\right)=t_i.$$ This is because $z=\mathcal{O}\left((\log \log t_{i-1})^{d}\right)$, and we are done with the proof of the claim.
\end{proof}
This completes the proof of \Cref{lem:main-P1i}, and hence of \Cref{lem:inter1}. \end{proof}
\subsection*{Proof of \Cref{lem:inter2}}
We prove \Cref{lem:inter2} by using the following lemma, which we prove later.
\begin{lem} \label{lem:Y_i} Let $\zeta \in (0,1)$ be a suitable constant, and let $i$ be such that $1\leq i \leq \beta$ and $Y_{i-1} =\Omega\left(t_{\alpha}^3 \log t_{\alpha}\right)$. With probability at least $1-\frac{1}{t_{\alpha}^{\Omega(1)}}$, $Y_{i} \leq \sum_{C \in \Gamma_{i-1}} \size{C \cap T_{\alpha + i}} \leq \zeta \cdot Y_{i-1}.$ \end{lem}
Applying the union bound over all $i$ from $1$ to $\beta$, with probability at least $1-\frac{1}{t_{\alpha}^{\Omega(1)}}$, we have $Y_\beta\leq\zeta^{\beta}Y_0+\mathcal{O}(t_\alpha^3 \log t_\alpha).$ As $Y_0=\mathcal{O}\left(\size{\mathcal{C}_r'} \cdot t_\alpha \right)$ $=\mathcal{O}\left(\size{\mathcal{C}_r} \cdot t _\alpha\right)$, $\beta=\Theta\left(\log ^{(\alpha + 1)} n\right)$, and $\zeta$ is a suitable constant, we are done with the proof of \Cref{lem:inter2}. \qed
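For completeness, we spell out the calculation behind the last step, assuming the constant hidden in $\beta=\Theta(\log^{(\alpha+1)} n)$ is chosen sufficiently large:
\begin{align*}
Y_\beta \;\leq\; \zeta^{\beta}\, Y_0+\mathcal{O}\left(t_\alpha^3 \log t_\alpha\right)
\;\leq\; \zeta^{\Theta(\log t_\alpha)} \cdot \mathcal{O}\left(\size{\mathcal{C}_r}\cdot t_\alpha\right)+\mathcal{O}\left(t_\alpha^3 \log t_\alpha\right)
\;=\; \mathcal{O}\left(\frac{\size{\mathcal{C}_r}}{t_\alpha}\right)+\mathcal{O}\left(t_\alpha^3 \log t_\alpha\right)
\enspace,
\end{align*}
where we used $\log t_\alpha=\Theta\left(\log^{(\alpha+1)} n\right)=\Theta(\beta)$ and $\zeta^{\Theta(\log t_\alpha)}\leq t_\alpha^{-2}$ for a suitable choice of constants.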
\begin{proof}[Proof of \Cref{lem:Y_i}] As $Y_i=\sum_{C \in \Gamma_i} \size{C \cap T_{\alpha + i}}$ and $\Gamma_i \subseteq \Gamma_{i-1}$, $Y_{i} \leq \sum_{C \in \Gamma_{i-1}} \size{C \cap T_{\alpha + i}} $ follows.
For the other part, $\sum_{C \in \Gamma_{i-1}} \size{C \cap T_{\alpha + i}} \leq \zeta \cdot Y_{i-1}$, let $Z_C=\size{C \cap T_{\alpha + i}}$ and $Z=\sum_{C \in \Gamma_{i-1}} Z_C$. Now consider the following claim, which we will prove right after proving \Cref{lem:Y_i}.
\begin{cl}\label{cl:const-P2} Let $C \in \Gamma_{i-1}$. The probability that $Z_C=\size{C \cap T_{\alpha + i}} \leq \frac{3}{4} \cdot \size{C \cap T_{\alpha +i-1}}$ is at least a constant $\kappa \in (0,1)$. \end{cl} Note that $T_{\alpha+i} \subseteq T_{\alpha + i -1}$. So, $\size{C \cap T_{\alpha+i}} \leq \size{C \cap T_{\alpha+i-1}}$ always. For $C \in \Gamma_{i-1}$, note that $\size{C \cap T_{\alpha + i -1}} \leq t_{\alpha}$. From the above claim,
\begin{align*}
\mathbb{E}[Z_C] &=
\mathbb{E}[\size{C \cap T_{\alpha+i}}]
\\&\leq
\kappa \cdot \frac{3}{4} \size{C \cap T_{\alpha+i-1}}+ (1-\kappa) \cdot \size{C \cap T_{\alpha +i -1 }}
\\&\leq
\zeta^{'} \size{C \cap T_{\alpha +i -1 }}
\enspace. \end{align*}
where $\zeta' = 1-\kappa/4 < 1$. Recalling the definition of $Y_{i-1}$, we have $$\mathbb{E}[Z]\leq \zeta^{'}\sum_{C \in \Gamma_{i-1}}\size{C \cap T_{\alpha+i-1}}=\zeta' Y_{i-1}.$$
Moreover, $0 \leq Z_C \leq t_{\alpha}$ for each $C \in \Gamma_{i-1}$. Hence, setting $\zeta_1=\zeta-\zeta'>0$ (the suitable constant $\zeta$ in \Cref{lem:Y_i} is chosen with $\zeta'<\zeta<1$) and applying a Hoeffding bound (\Cref{lem:hoeff}), we have
\begin{align*}
\mbox{Pr}\left(Z \geq \zeta Y_{i-1}\right)
&\leq
\mbox{Pr}\left( Z \geq \mathbb{E}[Z] + \zeta_1 Y_{i-1} \right)
\\&\leq
e^{-\frac{\zeta_1^2 Y_{i-1}^2}{\size{\Gamma_{i-1}}t_{\alpha}^2}}
\leq
{1}/{t_{\alpha - 1}^{\Omega(1)}}
\enspace. \end{align*}
The last inequality follows from $Y_{i-1}\geq 2 \size{\Gamma_{i-1}}$ and $Y_{i-1}=\Omega(t_{\alpha}^3 \cdot \log t_{\alpha})$. This concludes the proof of \Cref{lem:Y_i}, since $Z = \sum_{C \in \Gamma_{i-1}} \size{C \cap T_{\alpha + i}}$. \end{proof}
We are left with only the proof of \Cref{cl:const-P2}.
\begin{proof}[Proof of \Cref{cl:const-P2}] We prove \begin{enumerate} \item[(i)] $\mbox{Pr}\left(Z_C \leq \frac{3}{4} \size{C \cap T_{\alpha+i-1}}\right) \geq \mbox{Pr}(Z_C=1)= \frac{\size{C \cap T_{\alpha+i-1}}}{2^{\size{C \cap T_{\alpha + i -1}}}}$; \item[(ii)] $\mbox{Pr} \left(Z_C \leq \frac{3}{4} \size{C \cap T_{\alpha+i-1}}\right) \geq 1- \frac{1}{2^{\Omega(\size{C \cap T_{\alpha + i -1}})}}$. \end{enumerate}
From the above two statements, we are done with the claim by setting $\kappa= \max \left \lbrace \frac{\size{C \cap T_{\alpha+i-1}}}{2^{\size{C \cap T_{\alpha + i -1}}}},1- e^{-\Omega(\size{C \cap T_{\alpha + i -1}})} \right \rbrace $, which is $\Omega(1)$.
Note that {\sc Ext}-$k$-{\sc Center}\xspace calls $\mbox{{\sc Sample-And-Solve}\xspace} \left(T_{\alpha+i-1},\frac{1}{2},r\right)$ in {\bf Phase 2.i}. In $\mbox{{\sc Sample-And-Solve}\xspace} \left(T_{\alpha+i-1},\frac{1}{2},r\right)$, let $H_i \subseteq T_{\alpha + i -1}$ be the set of hubs sampled, where each point in $T_{\alpha + i -1}$ is (independently) included in $H_i$ with probability $\frac{1}{2}$.
For (i), $\mbox{Pr}\left(Z_C \leq \frac{3}{4} \size{C \cap T_{\alpha+i-1}}\right) \geq \mbox{Pr}(Z_C=1)$ is direct as $\size{C \cap T_{\alpha+i-1}} \geq 2$. From \Cref{lem:samp-add}, $Z_C=\size{C \cap T_{\alpha +i}}=1$ if $\size{H_i \cap C}=1$. So,
\begin{align*}
\mbox{Pr}(Z_C=1) &= \frac{\size{C \cap T_{\alpha+i-1}}}{2^{\size{C \cap T_{\alpha + i -1}}}}
\enspace. \end{align*}
Now, we prove (ii). From \Cref{lem:samp-add}, $Z_C =\size{C \cap T_{\alpha+i}} = \size{H_i \cap C}$ if $\size{H_i \cap C}>0$. Observe that $\mbox{Pr}(\size{H_i \cap C} >0)=1-\frac{1}{2^{\size{C \cap T_{\alpha + i -1}}}}$. The expected number of points in $H_i \cap C$ is $\frac{\size{C \cap T_{\alpha + i -1}}}{2}$. Using a Chernoff bound (\Cref{lem:cher}), $\mbox{Pr}\left(\size{H_i \cap C} > \frac{3}{4}\size{C \cap T_{\alpha + i -1}}\right) \leq e^{-\Omega\left(\size{C \cap T_{\alpha + i -1}}\right)}.$ Hence, putting things together,
\begin{align*}
\mbox{Pr}\left(Z_C \leq \frac{3}{4}\size{C \cap T_{\alpha + i -1}}\right)
&\geq
1- \mbox{Pr}\left(\size{H_i \cap C}=0\right) - \mbox{Pr}\left(\size{H_i \cap C} > \frac{3}{4}\size{C \cap T_{\alpha + i -1}}\right)
\\&\geq
1-\frac{1}{2^{\size{C \cap T_{\alpha + i -1}}}} - \frac{1}{e^{\Omega\left(\size{C \cap T_{\alpha + i -1}}\right)}}
\\&\geq
1- \frac{1}{2^{\Omega(\size{C \cap T_{\alpha + i -1}})}}
\enspace. \end{align*} \end{proof}
\section{Conclusions} \label{sec:conclude}
In this paper we showed that, even for large values of $k$, the classic $k$-center clustering problem in low-dimensional Euclidean space can be efficiently and accurately approximated in the parallel setting of low-local-space \textsc{MPC}\xspace. While some earlier works (see, e.g., \cite{coreset_kcenter1,coreset_kcenter2,coreset_kcenter3}) obtained constant-round \textsc{MPC}\xspace algorithms, they relied on a large local space $\ensuremath{{\mathrm{s}}}\xspace \gg k$, which allows one to successfully apply the core-set approach with only limited communication. On the other hand, the low-local-space setting considered in this paper seems to require extensive communication between the machines to achieve any reasonable approximation guarantees. Therefore we believe (though we have no formal evidence) that a number of rounds of order $\mathcal{O}(\log\log n)$ may be almost as good as it gets. We also concede that our algorithm does not achieve a constant approximation guarantee, but we feel the approximation bound of $\mathcal{O}(\log^*n)$ is almost as good. Finally, our algorithm does not solve the $k$-center problem exactly with respect to the number of centers: it allows slightly more centers in the solution, namely $k + o(k)$. Improving on these three parameters is the main open problem left by our work.
We believe that, using solely the techniques of this paper, improving the approximation factor and/or the number of rounds may not be possible (a detailed explanation is given in \Cref{app:reason}), but the approach may be useful for related problems in \textsc{MPC}\xspace or other models. We remark that the extra factor of $n^{\rho}$ in the global space complexity is mainly due to the use of LSH; note that, even in the RAM model, the use of LSH requires some extra space.
Our work naturally suggests some open directions for future research:
\begin{itemize} \item Can we improve the approximation factor beyond $\mathcal{O}(\log ^{*} n )$ and/or the number of rounds beyond $\mathcal{O}(\log \log n)$? \item In the large-$k$ regime, can we design an efficient algorithm that uses (almost) linear global space? \item In the large-$k$ regime, can we design an efficient algorithm that reports exactly $k$ centers? \item Are similar results possible for the related $k$-means and $k$-medians problems for large $k$ in \textsc{MPC}\xspace? \item What \textsc{MPC}\xspace results are possible when the points are in high-dimensional Euclidean space or in a general metric space? Our work is limited to constant dimension, as we are not aware of any \emph{efficient} LSH for high dimensions. \end{itemize}
\appendix
\section*{Appendix}
\section{Concentration inequalities} \label{sec:conc}
In our analysis we use some basic and standard concentration inequalities, which we present here for the sake of completeness.
\begin{lem}[{\bf Chernoff bound}] \label{lem:cher} Let $X_1, \dots, X_n$ be independent random variables such that $X_i \in [0,1]$. For $X = \sum_{i=1}^n X_i$ and $\mu_l \leq \mathbb{E}[X] \leq \mu_h$, the following holds for any $\varepsilon >0$.
\begin{align*}
\mbox{Pr} \left( X \geq (1+\varepsilon)\mu_h \right) &\leq e^{-\mu_h \varepsilon^2/3}
\enspace. \end{align*}
\end{lem}
\begin{lem}[{\bf Hoeffding bound}] \label{lem:hoeff} Let $X_1, \dots, X_n$ be independent random variables such that $a_i \leq X_i \leq b_i$ and $X = \sum_{i=1}^n X_i$. Then, for all $\varepsilon >0$,
\begin{align*}
\mbox{Pr}\left(\size{X-\mathbb{E}[X]} \geq \varepsilon\right) &\leq
2\exp\left({-2\varepsilon^2}/ {\sum_{i=1}^{n}(b_i-a_i)^2}\right)
\enspace. \end{align*} \end{lem}
\section{Proof of Theorem~\ref{theo:main}} \label{sec:missing1}
In this section we prove \Cref{theo:main} using \Cref{theo:main1}.
Let us consider algorithm $\mbox{{\sc Ext}-$k$-{\sc Center}\xspace}'$ that runs $\psi$ instances, $\psi=\mathcal{O}(\log (\max \{n, \log \Delta\}))$, of the algorithm {\sc Ext}-$k$-{\sc Center}\xspace in parallel. Let $T(1), \ldots, T(\psi)$ be the outputs of the runs of {\sc Ext}-$k$-{\sc Center}\xspace. $\mbox{{\sc Ext}-$k$-{\sc Center}\xspace}'$ reports $T'=T(i)$ with the minimum cardinality as the output. Therefore by \Cref{theo:main1}, with probability at least $1-\frac{1}{(\max\{n, \log \Delta\})^{\Omega(1)}}$, the following is true for $\mbox{{\sc Ext}-$k$-{\sc Center}\xspace}'$: \begin{enumerate}
\item[(i)] the number of rounds spent by $\mbox{{\sc Ext}-$k$-{\sc Center}\xspace}'$ is $\mathcal{O}(\log \log n)$ and the global space used by $\mbox{{\sc Ext}-$k$-{\sc Center}\xspace}'$ is $\widetilde{\mathcal{O}}(n^{1+\rho} \log \Delta)$;
\item[(ii)] $\textsc{cost}\xspace(P,T')=\mathcal{O}(r\cdot (\alpha+\log ^{(\alpha + 1)}n))$; and
\item[(iii)] $\size{T'}\le \size{\mathcal{C}_r}(1+\frac{1}{\widetilde{\Theta}(\log ^ {(\alpha)} n )})$ + $\widetilde{\Theta}((\log ^{(\alpha)} n)^3)$. \end{enumerate}
Next, consider the following observation about $\mbox{{\sc Ext}-$k$-{\sc Center}\xspace}'$:
\begin{obs} \label{obs:malg-dash} Let \textsc{opt}\xspace be the optimal cost to the $k$-{center}\xspace problem for $P$. If one runs $\mbox{{\sc Ext}-$k$-{\sc Center}\xspace}'$ with radius parameter $r$ with $r \geq \textsc{opt}\xspace$, then the number of centers reported by $\mbox{{\sc Ext}-$k$-{\sc Center}\xspace}'$ is at most $k\left(1+\frac{1}{\widetilde{\Theta}(\log^{(\alpha)} n)}\right)+\widetilde{\Theta}((\log ^{(\alpha)} n)^3)$, with probability at least $1-\frac{1}{(\max\{n,\log \Delta\})^{\Omega(1)}}$. \end{obs}
\Cref{obs:malg-dash} follows from the bound $\size{\mathcal{C}_r} \le k$ for $r \geq \textsc{opt}\xspace$.
Now we describe the algorithm $\mbox{{\sc Ext}-$k$-{\sc Center}\xspace}^{''}$, which is the algorithm corresponding to \Cref{theo:main}. $\mbox{{\sc Ext}-$k$-{\sc Center}\xspace}^{''}$ runs $\phi=\mathcal{O}(\log \Delta)$ instances of $\mbox{{\sc Ext}-$k$-{\sc Center}\xspace}'$, with radius parameters $r(1)=\Delta, r(2)=\frac{\Delta}{2}, r(3)=\frac{\Delta}{4}, \ldots, r({\phi})=\mathcal{O}(1)$, in parallel. Let $T'(1), \ldots, T'(\phi)$ be the corresponding outputs of the runs of $\mbox{{\sc Ext}-$k$-{\sc Center}\xspace}'$. $\mbox{{\sc Ext}-$k$-{\sc Center}\xspace}^{''}$ reports $T''=T'(i)$ as the output, where $i$ is chosen such that $\size{T'(i)}\leq k(1+\frac{1}{\widetilde{\Theta}(\log ^ {(\alpha)} n)})+ \widetilde{\Theta}((\log ^ {(\alpha)} n)^3)$ and $r(i)$ is minimum among all such indices. So, the round complexity and space complexity of \Cref{theo:main} follow from the round and space complexity guarantees of $\mbox{{\sc Ext}-$k$-{\sc Center}\xspace}'$, respectively. From the guarantee of algorithm $\mbox{{\sc Ext}-$k$-{\sc Center}\xspace}'$ on the set of centers returned by it, we have $\textsc{cost}\xspace(P,T'')=\mathcal{O}\left(r(i)\cdot (\alpha+\log ^{(\alpha + 1)} n)\right)$, and $\size{T''}\leq k(1+\frac{1}{\widetilde{\Theta}(\log^{(\alpha)} n)})$ + $\widetilde{\Theta}((\log ^{(\alpha)} n)^3)$, with probability at least $1-\frac{1}{(\max\{ n, \log \Delta\})^{\Omega(1)}}$. From \Cref{obs:malg-dash}, $r(i) \leq 2 \cdot \textsc{opt}\xspace$ with probability at least $1-\frac{1}{(\max\{n, \log \Delta\})^{\Omega(1)}}$. This yields the claimed approximation factor and the bound on the number of centers in \Cref{theo:main}.
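A minimal sequential Python sketch of this radius-guessing wrapper is given below; \texttt{ext\_k\_center\_prime} and \texttt{threshold} are placeholders for $\mbox{{\sc Ext}-$k$-{\sc Center}\xspace}'$ and the center-count bound from \Cref{obs:malg-dash}, and the parallel runs are replaced by a simple loop.
\begin{verbatim}
def ext_k_center_double_prime(P, k, Delta, ext_k_center_prime, threshold):
    """Try the radius guesses Delta, Delta/2, Delta/4, ..., O(1) and
    return the solution for the smallest guess whose number of
    centers stays within threshold(k)."""
    best = None
    r = Delta
    while r >= 1:
        T = ext_k_center_prime(P, r)
        if len(T) <= threshold(k):
            best = (r, T)   # feasible guess; later iterations use smaller r
        r /= 2
    return best
\end{verbatim}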
\section{Limitation of the techniques} \label{app:reason}
Our algorithm is an iterative algorithm that refines the set of centers, starting with the entire point set as the set of centers. As with many distributed algorithms, such an iterative approach usually does not lead to a constant-round algorithm. In particular, roughly speaking, our algorithm first samples points with inverse-polynomial probability and then increases the probability in a square-root fashion. So, to go from $n$ centers to $k$ or even $\mathcal{O}(k)$ centers, one needs $\Omega(\log\log n)$ rounds. Increasing the probability along a faster schedule is unlikely to allow us to sufficiently bound the number of centers, while increasing the probability along a slower schedule will increase the running time. Note also that the cost of the solution accumulates over the rounds of our algorithm. The way our algorithm works, the cost of the solution in the first phase is $\mathcal{O}(r)$. After that, each optimal cluster has at most $\widetilde{\mathcal{O}}(\log^*n)$ centers. In Phase 2, we apply {\sc Sample-And-Solve}\xspace for $\mathcal{O}(\log^*n)$ rounds with $r$ as the radius, which leads to an approximation ratio of $\mathcal{O}(\log^*n)$. One may think of applying {\sc Sample-And-Solve}\xspace in Phase 2 with a radius parameter smaller than $r$. But in that case, guaranteeing that the total number of centers is $k+o(k)$ seems unlikely.
\end{document} |
\begin{document}
\title{On the Regularization of Autoencoders}
\begin{abstract}
While much work has been devoted to understanding the implicit (and explicit) regularization of deep nonlinear networks in the \emph{supervised} setting, this paper focuses on \emph{unsupervised} learning, i.e., autoencoders that are trained with the objective of reproducing the input at the output. We extend recent results on unconstrained linear models \cite{jin21} and apply them to (1) \emph{nonlinear} autoencoders and
(2) \emph{constrained} linear autoencoders, obtaining the following two results: First, we show that the \emph{unsupervised} setting by itself induces strong additional regularization, i.e., a severe reduction in the model-capacity of the learned autoencoder: we derive that a \emph{deep nonlinear} autoencoder \emph{cannot} fit the training data more accurately than a \emph{linear} autoencoder does if both models have the same dimensionality in their last hidden layer (and under a few additional assumptions). Our second contribution is concerned with the low-rank EDLAE model \cite{steck20b}, which is a linear autoencoder with a constraint on the diagonal of the learned low-rank parameter-matrix for improved generalization: we derive a closed-form approximation to the optimum of its non-convex training-objective, and empirically demonstrate that it is an accurate approximation across \emph{all} model-ranks in our experiments on three well-known data sets. \end{abstract}
\section{Introduction} In recent years, much progress has been made in better understanding various kinds of implicit and explicit regularization when training deep nonlinear networks, e.g., \cite{neyshabur15b,c_zhang17,c_zhang17b,du18,chaudhari18,nagarajan19,belkin21}. This work has typically been done in the context of \emph{supervised} learning. In this paper, we instead focus on training \emph{autoencoders (AE)}, which are used for \emph{unsupervised} learning. We discuss two kinds of regularization / restrictions of the model-capacity of the learned AE: \begin{enumerate} \item First, we analytically show that a deep nonlinear AE \emph{cannot} fit the data more accurately than a linear AE with the same number of latent dimensions in the last hidden layer does. We argue that \emph{unsupervised} learning by itself induces this severe reduction of model-capacity in the deep nonlinear AE, which may hence be viewed as a new kind of implicit regularization.
\item Second, we focus on the Emphasized Denoising Linear Autoencoder (EDLAE) \cite{steck20b}, which is trained with the objective of preventing this linear AE from learning the identity function between its input and output, as to improve generalization. While the resulting training-objective is non-convex, and was optimized iteratively in \cite{steck20b}, we show that there exists a simple yet accurate closed-form approximation.
\end{enumerate}
These two contributions are obtained by extending the results on unconstrained linear models in \cite{jin21} to (1) \emph{deep nonlinear} AEs (first contribution, see Section \ref{sec_nonlin}), and
(2) \emph{constrained} linear AEs that are prevented from overfitting to the identity function (second contribution, see Section \ref{sec_approx}). \section{Nonlinear vs Linear Auto-encoders} \label{sec_nonlin} In the first part of this paper, we show that the learned \emph{nonlinear} AE cannot fit the data better than a linear AE does if their last hidden layer has the same latent dimensionality. This is formalized in the following, and various aspects are discussed in the sections thereafter. \subsection{Training Objective} While cross-entropy or log-likelihood are popular training-objectives in the deep-learning literature, we here consider the squared error, as it allows for analytical derivations, see also \cite{tian21,razin20,belkin21}. Also note that minimizing the squared error as a training objective has been related to approximately optimizing ranking metrics \cite{ramaswamy13,calauzenes20} or classification error \cite{mai19,muthukumar20,thrampoulidis20,hui21}. Formally, given training data $X \in \mathbb{R}^{m\times n}$ regarding $m$ data-points and $n$ features, we denote the squared error (SE) by \begin{equation}
{\rm SE} = || X - f_\theta(X) ||_F^2 , \label{eq_se} \end{equation}
where the function $f_\theta: \mathbb{R}^n \rightarrow \mathbb{R}^n$ represents an arbitrary AE with model-parameters $\theta$, which is applied row-wise to the data-matrix $X$; and $||\cdot||_F$ is the Frobenius norm.
In addition to the squared error, regularization is typically applied during training, which reduces model-capacity with the goal of preventing overfitting. For the first contribution of this paper, we employ a low-dimensional embedding-space as the sole means of reducing model-capacity--we do not consider any additional explicit regularization here.
The reason is that the possibly high model-capacity of the learned deep nonlinear AE is actually severely reduced in the \emph{unsupervised} setting, even without additional regularization: this is reflected by the fact that
a deep \emph{nonlinear} AE cannot fit the training-data better than a \emph{linear} AE does, as derived below. The model-fit on the training data can only be further reduced when applying additional regularization. Moreover, as shown in the seminal papers \cite{neyshabur15b,c_zhang17}, explicit regularization may not be fundamental to understanding the ability of deep nonlinear models to generalize to unseen data.
\subsection{Autoencoders}
\label{sec_prop} In the following, we compare two AEs with each other: a deep-nonlinear AE and a linear AE, which share the following two properties: \begin{enumerate} \item the \emph{last} hidden layer has $k$ dimensions, where $k < \min(m,n)$. Note that this (partially) prevents the AE from learning the identity-function. It can severely restrict the model-capacity of the AE for small $k \ll \min(m,n)$. This is the only kind of regularization we consider here. \item the \emph{output-layer} has a \emph{linear} activation-function. This is a natural choice when using the squared error as training-objective. Note that this choice was also made, for instance, in \cite{neyshabur15b,liu20} to facilitate analytical derivations. \end{enumerate}
We can hence write the deep nonlinear AE in the following form: \begin{equation}
f_\theta^{\rm(deep)} (X) = g_{\theta'}(X) \cdot W_L ,
\label{eq_deep} \end{equation} where the function $g_{\theta'}: \mathbb{R}^n \rightarrow \mathbb{R}^k$ is an arbitrary deep nonlinear model parameterized by $\theta'$, which is applied to the data-matrix $X$ in a row-wise manner; and $W_L \in \mathbb{R}^{k\times n}$ is the weight matrix between the last hidden layer (with $k$ dimensions) and the output layer. Hence, in the nonlinear AE, the model parameters to be learnt are $\theta = (\theta' ,W_L)$.
The linear AE can be written as \begin{equation}
f_\theta^{\rm(linear)} (X) = X \cdot W_1 \cdot W_2 ,
\label{eq_lin} \end{equation} where $W_1 \in \mathbb{R}^{n\times k}$ and $W_2 \in \mathbb{R}^{k\times n}$ are the two weight matrices of rank $k$ to be learnt in the linear AE.
Having motivated and defined the problem in detail, we are now ready to present the main result of our first contribution, which will be discussed in the following sections:
{\bf Proposition:} \emph{Consider the squared errors} \begin{eqnarray}
{\rm SE}^{\rm(deep)} &=& \min_{\theta', W_L}|| X- g_{\theta'}(X) \cdot W_L ||_F^2 \label{eq_se_deep}\\
{\rm SE}^{\rm(linear)} &=& \min_{W_1,W_2}|| X- X \cdot W_1 \cdot W_2 ||_F^2 \label{eq_se_lin} , \end{eqnarray} \emph{of the deep-nonlinear and linear autoencoders on the training data $X\in \mathbb{R}^{m\times n}$, respectively, where the last hidden layer has dimensionality $k < \min(m,n)$ in both autoencoders, i.e., ${\rm rank} (W_L) = {\rm rank} (W_2) = k$. Moreover, note that the output-layer is required to have a linear activation-function in the nonlinear autoencoder in Eq. \ref{eq_se_deep}, while $g_{\theta'}(X)$ can be any deep nonlinear architecture. Then it holds that the deep nonlinear autoencoder \emph{cannot} fit the data better than the linear AE does, i.e., } \begin{equation}
{\rm SE}^{\rm(deep)} \ge {\rm SE}^{\rm(linear)} .
\label{eq_prop} \end{equation}
{\bf Proof:} This proposition follows from combining two simple mathematical facts: first, the squared error of the deep non-linear model (which can be viewed as a parametric model) can be bounded by the squared error of a matrix factorization (MF) model (which can be viewed as a nonparametric model), given that the singular value decomposition (SVD) provides the best rank-$k$ approximation (Eckart–Young–Mirsky theorem). Second, the (non-parametric) MF model can then be rewritten as a (parametric) linear AE, using simple properties of the SVD-solution. This relates the deep-nonlinear AE with the linear AE. The details can be found in the Appendix. $\Box$
\subsection{Discussion} \label{sec_discuss}
In this section, we discuss several aspects of this seemingly counterintuitive result that a deep-nonlinear AE cannot fit the data better than a linear AE does.
\subsubsection{Unsupervised Learning}
It is crucial for the proof to hold that the \emph{inputs are identical to the targets} when training the model. This \emph{unsupervised} training-objective is hence responsible for inducing a severe reduction in model-capacity of the deep-nonlinear AE as reflected by its squared error in the Proposition. This may be viewed as a novel kind of implicit regularization, in addition to the ones discussed in the literature, e.g., \cite{neyshabur15b,c_zhang17,c_zhang17b,du18,chaudhari18,nagarajan19,belkin21} etc. It is important to note that these results do \emph{not} apply to deep nonlinear models in general where the input-vector is \emph{different} from the target-vector during training, like in supervised learning.
{\bf Recommender Systems: } An important application of unsupervised learning of AEs is the 'classic' recommendation problem in the literature, where only the user-item interaction-data $X$ are available, which are then randomly split into disjoint training and test data.\footnote{In contrast, a scenario that we do not consider to be a 'classic' recommendation problem, is the prediction of the next item in a sequence of user-item interactions.} The empirical observations in the literature have been puzzling so far, given that (1) among the deep nonlinear AEs, rather shallow architectures with typically only 1-3 hidden layers were empirically found to obtain the highest ranking accuracy on the test data \cite{sedhain15, liang18, shenbin20,lobel20,khawar20}; and (2) simple linear AEs were recently found to be competitive with deep nonlinear AEs \cite{steck19a, steck20b}.
The derived proposition may provide an explanation for these otherwise counterintuitive empirical findings. While our result only applies to the training-error, the model capacity can only be further reduced by applying additional regularization to improve generalization / test-error. Given that deep nonlinear AEs typically have a more complex architecture than linear AEs do, this may provide more 'knobs' for applying various kinds of regularization, e.g., \cite{sedhain15, liang18, shenbin20,lobel20,khawar20}, which may eventually result in improved test-errors, compared to linear AEs. The empirical observations that deep nonlinear AEs achieve slightly better test-errors on some datasets, while linear AEs are better on others, e.g., \cite{steck19a}, suggest that deep-nonlinear AEs only appear to be extremely flexible models, while in fact their prediction accuracy may be comparable to the one of linear AEs after adding proper regularization, as corroborated by the derived proposition regarding the training error.
\subsubsection{Weighted Loss} \label{sec_limit} \emph{Weighted} loss-functions have been widely adopted in recent years. Two scenarios have to be distinguished: (1) a weight applies to a data point (i.e., row in $X$), and (2) a weight applies to a feature of a data-point (i.e., entry in $X$). Examples of the former include off-policy evaluation, reinforcement learning, and causal inference, while an example of the latter is weighted matrix factorization for collaborative filtering, e.g., \cite{hu08,pan08}.
The Proposition obviously carries over to the first scenario, where the weight applies to a row in $X$, as a weight can be viewed as the multiplicity with which a data-point appears in the data. In contrast, the Proposition does not immediately carry over to the second scenario, as the proof hinges on the singular value decomposition, which does not allow for a different weight being applied regarding each entry (rather than row) of $X$.
\subsubsection{Representation Learning}
Despite the limitations in terms of accuracy, a deep-nonlinear AE may still have advantages over a linear AE, namely when it comes to representation learning: a deep nonlinear AE may be able to learn a \emph{lower}-dimensional embedding (i.e., with dimensionality less than $k$) in one of the layers before the last hidden layer without a (significant) loss in accuracy. Such an increased dimensionality reduction can be beneficial in the subsequent step when these lower-dimensional embeddings are used as features in a supervised model, resulting in improved classification accuracy of the final supervised model, e.g., \cite{rifai11}.
\subsubsection{Neural Tangent Kernel} In the regime of the \emph{neural tangent kernel} (NTK), i.e., in the limit of infinite width of the deep nonlinear architecture, neural networks (with a linear output-layer) behave like linear models \cite{jacot18,liu20}. Note that this connection to linear models is unrelated to the scenario considered here, as we are concerned with \emph{unsupervised} learning of AEs that have a \emph{limited} rank/width $k<\min(m,n)$.
\section{Approximate EDLAE} \label{sec_approx}
In the second contribution of this paper, after a brief review of the EDLAE model \cite{steck20b} in the following section, we show that there exists an (approximate) \emph{closed-form} solution for training a low-rank EDLAE model (Section \ref{sec_derivation}). The derived solution results in Algorithm \ref{algo}, which consists of only four steps (Section \ref{sec_algo}). In Section \ref{sec_exp}, we observe that the approximation accuracy is very high for all model-ranks $k$ on all three well-known data-sets used in our experiments, compared to the (exact) iterative approach (ADMM) used in \cite{steck20b}. Hence, iterative approaches to optimize the non-convex training-loss of EDLAE may actually not be necessary.
\subsection{Brief Review of EDLAE} \label{sec_edlae}
In \cite{steck20b}, it was shown that it is crucial to prevent the AE from learning the identity function between its input and output. This can be achieved by the stochastic training-procedure called \emph{emphasized denoising} \cite{vincent10} (in particular for so-called \emph{full emphasis}), which was shown in \cite{steck20b} to be equivalent to modifying the least-squares objective of a linear low-rank AE as follows: \begin{eqnarray} &&\lVert X - X \cdot\left\{UV^\top -{\rm diagM}\left({\rm diag}(UV^\top )\right)\right\} \rVert_F^2 \nonumber\\
&&+ \lVert \Lambda^{\nicefrac{1}{2}} \cdot\left\{UV^\top -{\rm diagM}\left({\rm diag}(UV^\top )\right)\right\} \rVert_F^2 , \label{eq_trainuv_orig} \end{eqnarray} where $X \in \mathbb{R}^{m\times n}$ is the given training data, and the weight matrices $U,V \in \mathbb{R}^{n\times k}$ of rank $k$ are the parameters of the low-rank EDLAE to be learnt. The L$_2$-norm regularization is controlled by the diagonal matrix \begin{equation}
\Lambda = \lambda \cdot I + \frac{p}{q} \cdot {\rm diagM}({\rm diag}(X^\top X) ), \label{eq_l2} \end{equation}
where $p\in[0,1]$ is the dropout probability in emphasized denoising, and $q=1-p$; '${\rm diagM}$' denotes a diagonal matrix, while '${\rm diag}$' denotes the diagonal of a matrix. We added $\lambda\in \mathbb{R}$ as an additional regularization parameter (and $I$ denotes the identity matrix), which provides the same regularization across all features, while the second term (controlled by $p$) is feature-specific.
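As a concrete illustration, a small NumPy sketch (our own notation, not code from \cite{steck20b}) of assembling this matrix $\Lambda$ from the Gram matrix $X^\top X$, the scalar $\lambda$, and the dropout probability $p$ might look as follows:
\begin{verbatim}
# Sketch (our notation) of the regularization matrix Lambda defined above.
import numpy as np

def build_lambda(G, lam, p):
    """G = X^T X (n x n); lam = scalar L2 strength; p = dropout probability."""
    q = 1.0 - p
    return lam * np.eye(G.shape[0]) + (p / q) * np.diag(np.diag(G))
\end{verbatim}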
The key result of the analytic derivation in \cite{steck20b} is that the diagonal ${\rm diag}(UV^\top )$ gets ignored/removed from the matrix $UV^\top$ when learning its parameters in Eq. \ref{eq_trainuv_orig}.
\subsection{Closed-form Solution} \label{sec_derivation} This section outlines the derivation of the closed-form solution that approximates the learned EDLAE model, i.e., the optimum of Eq. \ref{eq_trainuv_orig}. Following \cite{jin21}, we first rewrite the training objective in Eq. \ref{eq_trainuv_orig} as follows: with the definitions \begin{equation}
Y =\left [ \begin{array}{c} X \\ 0 \end{array} \right ]\,,
\,\,\,\,\,\,\,
Z =\left [ \begin{array}{l} X \\ \Lambda^{\nicefrac{1}{2}} \end{array} \right ]\,, \end{equation} Eq. \ref{eq_trainuv_orig} is equal to \begin{equation}
\lVert Y - Z \cdot\left\{UV^\top -{\rm diagM}\left({\rm diag}(UV^\top )\right)\right\} \rVert_F^2 . \label{eq_trainuv_2} \end{equation} Now, introducing the \emph{full-rank} solution ${\hat{ B}} \in \mathbb{R}^{n\times n}$ of the minimization problem in Eq. \ref{eq_trainuv_2}, we obtain \begin{eqnarray}
&&\lVert Y - Z \left\{UV^\top -{\rm diagM}\left({\rm diag}(UV^\top )\right)\right\} \rVert_F^2 \nonumber\\
&=&
\lVert Y - Z \left\{{\hat{ B}} -{\rm diagM}\left({\rm diag}({\hat{ B}} )\right)\right\}\nonumber\\
&&\,\,\,\, +Z \left\{{\hat{ B}} -{\rm diagM}\left({\rm diag}({\hat{ B}} )\right)\right\}\nonumber\\
&&\,\,\,\, -Z \left\{UV^\top -{\rm diagM}\left({\rm diag}(UV^\top )\right)\right\} \rVert_F^2 \nonumber\\
&=&
\lVert Y - Z \left\{{\hat{ B}} -{\rm diagM}\left({\rm diag}({\hat{ B}} )\right)\right\} \rVert_F^2\nonumber\\
&&\!\!\!\!\!+\lVert Z \left\{{\hat{ B}} -{\rm diagM}\left({\rm diag}({\hat{ B}} )\right)\right\}\nonumber\\
&&\,\,\,\,\, -Z \left\{UV^\top -{\rm diagM}\left({\rm diag}(UV^\top )\right)\right\} \rVert_F^2 \label{eq_train_2part} \end{eqnarray} Note that the second equality holds for the following two reasons: (1) because ${\hat{ B}}$ is the \emph{optimum} of the least-squares problem $\lVert Y - Z \{B -{\rm diagM}({\rm diag}(B ))\} \rVert_F^2$,
the residuals $ Y - Z \{{\hat{ B}} -{\rm diagM}({\rm diag}({\hat{ B}} ))\}$ hence have to be orthogonal to the predictions $Z \{{\hat{ B}} -{\rm diagM}({\rm diag}({\hat{ B}} ))\}$; (2) the low-rank model $UV^\top$ is obviously nested within the full-rank model $B$, in other words, it lives in the linear span of the full-rank model: hence the difference $ \{{\hat{ B}} -{\rm diagM}({\rm diag}({\hat{ B}} ))\} - \{UV^\top -{\rm diagM}({\rm diag}(UV^\top ))\}$ also lives in the linear span of the full rank model. Hence we have that $ Z ( \{{\hat{ B}} -{\rm diagM}({\rm diag}({\hat{ B}} ))\} - \{UV^\top -{\rm diagM}({\rm diag}(UV^\top ))\})$ is orthogonal to the residuals $ Y - Z \{{\hat{ B}} -{\rm diagM}({\rm diag}({\hat{ B}} ))\}$, and hence the trace of their product vanishes, which eliminates the cross-term in Eq. \ref{eq_train_2part}. This is also corroborated by our experiments, where we computed the squared errors of the learned models in these different ways.
Based on the simplification in Eq. \ref{eq_train_2part}, the low-rank optimization problem can hence be decomposed into the following two steps: \begin{enumerate}
\item Computing the full-rank solution ${\hat{ B}}\in \mathbb{R}^{n\times n}$: its closed-form solution is given by \cite{steck20b}:\footnote{Note that $B -{\rm diagM}({\rm diag}(B ))$ may seem to leave the learned diagonal of $B$ unspecified. As outlined in \cite{steck20b}, it makes sense to set the diagonal to zero in the full rank model $B$, where all the parameters are independent of each other. Note that this is different from the situation in the \emph{low-rank} model $UV^\top$, where the diagonal ${\rm diag}(UV^\top )$ is specified by minimizing Eq. \ref{eq_trainuv_2}: the off-diagonal elements of $UV^\top$ are fit to the data, which in turn determines the diagonal ${\rm diag}(UV^\top )$ due to the low-rank nature of $UV^\top$. Note that ${\rm diag}(UV^\top )\ne 0$ in general.} \begin{eqnarray}
{\hat{ B}} &=& \argmin_B \lVert Y - Z \cdot\left\{B -{\rm diagM}\left({\rm diag}(B )\right)\right\} \rVert_F^2 \nonumber\\
&=& \argmin_{B \,\,{\rm s.t.\,\,}{\rm diag}(B)=0 } \lVert Y - Z B \rVert_F^2 \nonumber\\
&=& I -{\hat{ C}} \cdot {\rm diagM}(1 \oslash {\rm diag}({\hat{ C}})) ,
\label{eq_bb} \end{eqnarray} where $\oslash$ denotes the element-wise division of vectors, and \begin{equation} {\hat{ C}} = \left(Z^\top Z \right)^{-1} = \left(X^\top X + \Lambda \right)^{-1}, \end{equation} where the inverse exists for appropriately chosen L$_2$-norm regularization $\Lambda$.
\item Estimating the low-rank solution ${\hat{ U}},{\hat{ V}}\in \mathbb{R}^{n\times k}$ of\footnote{Note that we used ${\rm diag}({\hat{ B}})=0$, see previous footnote and Eq. \ref{eq_bb}.}
\begin{equation} \min_{U,V} \lVert Z {\hat{ B}} -Z \left\{UV^\top -{\rm diagM}\left({\rm diag}(UV^\top )\right)\right\} \rVert_F^2. \label{eq_part2}
\end{equation} It is important to realize that the diagonal values of the low-rank model $UV^\top$ do not matter (as they get cancelled) in this optimization. In other words, only the \emph{off-diagonal} elements of the low-rank model $UV^\top$ have to be fitted.
In the following, we propose a simple yet accurate approximation to this optimization problem by essentially ignoring the diagonal. To this end, let us assume that we knew the optimal values of the diagonal $\hat{\beta}:={\rm diag}({\hat{ U}}{\hat{ V}}^\top )$. Then the optimization problem becomes $$ \min_{U,V} \lVert Z \{{\hat{ B}} + {\rm diagM}(\hat{\beta})\} -Z UV^\top \rVert_F^2, $$
where $\hat{\beta}$ is added to the zero-diagonal of ${\hat{ B}}$, and the low-rank model $UV^\top$ gets fitted. Let us now consider the contributions to this squared error from the diagonal vs. off-diagonal elements of $UV^\top$. The first observation is that the contribution from the diagonal vanishes for the optimal ${\hat{ U}}{\hat{ V}}^\top$, given that $\hat{\beta}={\rm diag}({\hat{ U}}{\hat{ V}}^\top )$. As we generally do not know $\hat{\beta}$, we assume that we can instead use a 'reasonable' approximation $\beta$ such that the average squared error regarding a diagonal element, i.e., $|\beta - \hat{\beta}|^2 / n $, is of the same order of magnitude as the average squared error regarding an off-diagonal element. In the case that we increase the model-rank $k$ such that it approaches $n$ (full-rank), we know that the optimal $\hat{\beta}$ approaches 0, see full-rank solution in Eq. \ref{eq_bb}. In this case, $\beta=0$ is hence a reasonable choice. Moreover, for arbitrary model-rank $k$, given that $UV^\top$ is learned with the objective of predicting a feature from other (typically similar) features (due to emphasized denoising, see Section \ref{sec_edlae}), one can expect that the diagonal element $({\hat{ U}}{\hat{ V}}^\top)_{i,i}$ regarding a feature $i$ is of the same order of magnitude as the largest element in the $i$-th column $({\hat{ U}}{\hat{ V}}^\top)_{-i,i}$, where $-i$ denotes the set of all indices $j\ne i$ (i.e., the largest value is associated with a feature $j$ that is similar to feature $i$). We can use this heuristic to obtain reasonable values for $\beta$. But also when we simply use $\beta=0$ here, the squared error regarding each element on the diagonal on average is not orders of magnitude larger, compared to the average squared error regarding an off-diagonal element. This is also corroborated by our experiments, where we tried these heuristics (and a few more), and found that they all resulted in very small differences in the solutions, often within the confidence intervals of the ranking metrics on the test data. We hence show results for the simple choice $\beta=0$ in the experiments in Section \ref{sec_exp}.
Now that we have seen that all the squared errors (on and off the diagonal) are on average of the same order of magnitude, the second observation is that there are $n$ diagonal elements, while there are $n^2-n$ off-diagonal elements--hence the diagonal elements are outnumbered by the off-diagonal ones if there is a reasonably large number $n$ of features, i.e., when $n \ll n^2-n$, which clearly holds in many real-world applications, where the number of features is often in the hundreds, thousands or even millions. Hence, the aggregate squared error of all the off-diagonal elements can be (possibly several) orders of magnitude larger than the aggregate squared error of all the diagonal elements. For this reason, any 'reasonable' choice $\beta$ (like $\beta=0$) suffices for obtaining a very accurate approximation of the optimal ${\hat{ U}}{\hat{ V}}^\top$ if the number of features $n$ is large, as confirmed by our experiments.
With the choice $\beta=0$, the approximate optimization problem hence becomes \begin{equation} \min_{U,V} \lVert Z {\hat{ B}} -Z UV^\top \rVert_F^2, \label{eq_approx} \end{equation} which can easily be solved: let the singular value decomposition (SVD) of $ Z {\hat{ B}} $ be given by $$ Z {\hat{ B}} = PDQ^\top, $$ where $D$ is the diagonal matrix of singular values, while the matrices $P$ and $Q$ are composed of the left and right singular vectors, respectively. Then, according to the well-known Eckart–Young–Mirsky theorem, the optimal rank-$k$ solution of Eq. \ref{eq_approx} is given by $P_kD_kQ_k^\top$ pertaining to the largest $k$ singular values. Given that \cite{jin21} $$ P_kD_kQ_k^\top =PDQ^\top Q_k Q_k^\top = Z {\hat{ B}} Q_k Q_k^\top, $$ and comparing the last expression with Eq. \ref{eq_approx}, we obtain the solution of Eq. \ref{eq_approx}: \begin{eqnarray}
{\hat{ U}} &=& {\hat{ B}} Q_k \label{eq_uuhat}\\
{\hat{ V}} &=& Q_k \label{eq_vvhat}, \end{eqnarray} which is the approximate solution of Eq. \ref{eq_part2}.
\end{enumerate}
To compute $Q_k$ in practice, instead of applying the SVD to $Z{\hat{ B}}$, which is an $(m+n) \times n$ matrix, it might be more efficient to determine the eigenvectors of the $n\times n$ matrix ${\hat{ B}}^\top Z^\top Z {\hat{ B}} = {\hat{ B}}^\top (X^\top X+\Lambda) {\hat{ B}}$. Note that the latter matrix can also be computed more efficiently: \begin{equation*} {\hat{ B}}^\top Z^\top Z {\hat{ B}} = Z^\top Z - {\rm diagM}(1 \oslash {\rm diag}({\hat{ C}}))\cdot (I +{\hat{ B}}). \end{equation*}
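A quick numerical sanity check (our own, on random data) of this identity, using the closed form of ${\hat{ B}}$ from Eq. \ref{eq_bb}:
\begin{verbatim}
# Numerical check that B^T (Z^T Z) B = Z^T Z - diagM(1/diag(C)) (I + B), with B, C as above.
import numpy as np

rng = np.random.default_rng(0)
n = 6
X = rng.standard_normal((40, n))
G = X.T @ X + 0.3 * np.eye(n)          # stands in for Z^T Z = X^T X + Lambda
C = np.linalg.inv(G)                   # C_hat
D = np.diag(1.0 / np.diag(C))          # diagM(1 / diag(C_hat))
B = np.eye(n) - C @ D                  # B_hat
print(np.allclose(B.T @ G @ B, G - D @ (np.eye(n) + B)))   # True
\end{verbatim}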
\subsection{Algorithm} \label{sec_algo} The derived closed-form solution that approximates the optimal low-rank EDLAE gives rise to a particularly simple algorithm, as shown in Algorithm \ref{algo}: in the first two lines, the full-rank solution is computed in closed form. In the third line, 'eig$_k$' denotes the function that returns the $k$ eigenvectors pertaining to the $k$ largest eigenvalues, which yields the solution ${\hat{ V}}$ (see also Eq. \ref{eq_vvhat}). In the fourth line, the solution ${\hat{ U}}$ is computed (see also Eq. \ref{eq_uuhat}).
\begin{algorithm}[b] \SetAlgoNoLine \KwIn{data matrix $X^\top X \in \mathbb{R}^{n\times n }$, \\
\hspace{11 mm} L$_2$-norm regularization $\Lambda$, see Eq. \ref{eq_l2},\\
\hspace{11 mm} rank $k$ of low-rank solution.} \KwOut{approx. low-rank solution ${\hat{ U}},{\hat{ V}} \in \mathbb{R}^{n\times k }$.} ${\hat{ C}} = \left(X^\top X + \Lambda \right)^{-1}$ \\ $ {\hat{ B}} = I -{\hat{ C}} \cdot {\rm diagM}(1 \oslash {\rm diag}({\hat{ C}})) $\\ ${\hat{ V}} = {\rm eig}_k ( {\hat{ B}}^\top (X^\top X+\Lambda) {\hat{ B}} )$\\ ${\hat{ U}} = {\hat{ B}} {\hat{ V}}$ \\ \caption{Pseudocode for the approximate closed-form solution of Eq. \ref{eq_trainuv_orig}.} \label{algo} \end{algorithm}
The computational cost of this closed-form approximation is considerably smaller than the cost of the iterative optimization (ADMM) used in \cite{steck20b} unless the model-rank $k$ is very large (see Table \ref{tab_runtime}). In both approaches, an $n \times n$ matrix has to be inverted, which has computational complexity of about ${\cal O}(n^{2.376})$ when using the Coppersmith-Winograd algorithm.
The key difference is that the leading $k$ eigenvectors have to be computed in the proposed approximation, while each step of the iterative ADMM updates involves several matrix multiplications of size $n \times k$. While computing all eigenvectors of a symmetric matrix of size $n\times n$ has the same computational complexity as matrix multiplication or inversion \cite{pan99}, i.e., ${\cal O}(n^{2.376})$, computing the top $k$ eigenvectors for small $k$ can have a smaller cost, typically ${\cal O}(kn^2)$, e.g., using the Lanczos algorithm \cite{lanczos50,lehoucq98}. An empirical comparison of the training-times is shown in Table \ref{tab_runtime}: as we can see, the closed-form approximation is up to three times faster than ADMM (which we ran for 10 epochs here).\footnote{Note that 10 epochs is often the minimum that yields 'acceptable' accuracy in practice \cite{boyd11}.} Only for very large model-ranks $k$, ADMM is faster on the \emph{ML-20M} data set in Table \ref{tab_runtime}.
\begin{table}[t] \caption{Runtimes (wall clock time in seconds) of ADMM used in \cite{steck20b} and of the proposed approximation for different model-ranks $k$ on the \emph{ML-20M} dataset, using an AWS instance with 128 GB memory and 16 vCPUs. For simplicity, the times reported for ADMM are based on 10 epochs, even though ADMM may not have reached the optimal test error.$^{\rm 4}$} \label{tab_runtime} \begin{tabular}{lrrrrr} \hline rank $k$ & 10 & 500 & 1,000 & 2,000 & 5,000\\ \hline ADMM & 519s & 557s & 597s & 701s & 1,040s\\ Algo. 1 & 171s & 191s & 220s & 393s & 1,496s \\ \hline speed-up & $3.0\times$ & $2.9\times$ & $2.7\times$ & $1.8\times$ & $0.7\times$ \\ \hline \end{tabular} \end{table}
As an aside, this approach may also be viewed as a simple example of teacher-student learning for knowledge distillation, which was introduced in \cite{bucilua06}: first, the \emph{teacher} is learned from the given data, which is then followed by learning the \emph{student} based on the predictions of the teacher. In our case, the key step is to learn the 'correct' \emph{off-diagonal} elements of the full-rank model ${\hat{ B}}$ from the given data, which is achieved by constraining its diagonal to zero. In the second step, the low-rank student ${\hat{ U}}{\hat{ V}}^\top$ then learns these off-diagonal values from the teacher ${\hat{ B}}$.
\subsection{Experiments} \label{sec_exp} In this section, we empirically assess the approximation-accuracy of the closed-form solution derived above. To this end, we follow the experimental protocol as in \cite{steck20b} and \cite{liang18}, using their publicly available Python notebooks.\footnote{{\tt https://github.com/hasteck/EDLAE\_NeurIPS2020} and {\tt https://github.com/dawenl/vae\_cf}}
\begin{figure}
\caption{Ranking accuracy for different model-ranks $k={\rm rank}(UV^\top)$ on the three data-sets \emph{ML-20M}, \emph{Netflix} and \emph{MSD}, where the standard errors are 0.002, 0.001, and 0.001, respectively: the proposed closed-form solution (red dotted line) provides an accurate approximation to the exact solution of EDLAE (black line) across \emph{all} ranks $k$. Like in \cite{steck20b}, also the models with the constraint ${\rm diag}(UV^\top)=0$ (green solid line) and without any constraints on the diagonal (blue dashed line) are shown for comparison.}
\label{fig_res}
\end{figure}
We reproduced figure 1 from \cite{steck20b} for the Netflix Prize data, which shows the ranking accuracy (nDCG@100) on the test data for various model-ranks $k={\rm rank}(UV^\top)$, ranging from 10 to full-rank. These results, obtained by iteratively optimizing the training objective using the update-equations of ADMM derived in \cite{steck20b}, serve as the exact solution (ground truth) in our experiments. In addition to the \emph{Netflix} data, we also generated the same kind of graph for the MovieLens 20 Million data (\emph{ML-20M}) as well as the Million Song Data (\emph{MSD}) using the publicly available code of \cite{steck20b}. We tuned the L$_2$-norm regularization $\Lambda$ for each of the different model-ranks $k$ as to optimize prediction accuracy of each model.
We then added the results based on the closed-form solution derived above (red dotted line in Figure \ref{fig_res}): as we can see in Figure \ref{fig_res}, the proposed closed-form solution (red dotted line) provides an excellent approximation to the (exact) solution obtained by ADMM in \cite{steck20b} (black solid line) \emph{across the entire range} of model-ranks $k$. This is particularly remarkable, as the other two approximations considered in \cite{steck20b} (green solid line and blue dashed line in Figure \ref{fig_res}) only provide an accurate approximation for either very large or very small model-ranks $k$, respectively. The green solid line reflects the model with the constraint ${\rm diag}(UV^\top)=0$ applied to Eq. \ref{eq_trainuv_orig}, while the blue dashed line is based on the unconstrained linear AE, i.e., $\min_{U,V} \lVert X - X UV^\top \rVert_F^2 + \lVert \Lambda^{\nicefrac{1}{2}}UV^\top \rVert_F^2$.
These results also imply that, despite our 'initialization' $\beta=0$ for the diagonal values in our approach (see item 2 in Section \ref{sec_derivation}), the proposed closed-form solution is indeed able to learn the approximately correct values of the diagonal ${\rm diag}({\hat{ U}}{\hat{ V}}^\top)$, which can be markedly different from zero at small model-ranks $k$. This is implied by the fact that the unconstrained AE, which obviously has a non-zero diagonal (blue dashed line), is close to the proposed approximation (red dotted line) in Figure \ref{fig_res} for small $k$. On the other hand, at large model-ranks $k$, the proposed closed-form solution learns that ${\rm diag}({\hat{ U}}{\hat{ V}}^\top)$ is indeed close to zero, as implied by the fact that the proposed approximation (red dotted line) is close to the model trained with the constraint ${\rm diag}(UV^\top)=0$ (green solid line) in Figure \ref{fig_res} for large $k$.
Apart from that, Figure \ref{fig_res} also shows that the proposed approximation of EDLAE (red dotted line in Figure \ref{fig_res}) achieves a considerable gain in ranking-accuracy on test-data, compared to the exact closed-form solution of the unconstrained linear AE (blue dashed line), which was derived in \cite{jin21}.
\section*{Conclusions}
In this paper, we used simple properties of the singular value decomposition (SVD) to gain insights into the training of autoencoders in two scenarios.
First, we showed that \emph{unsupervised} training by itself induces severe regularization / reduction in model-capacity of \emph{deep nonlinear} autoencoders. This is reflected by the fact that \emph{deep nonlinear} autoencoders \emph{cannot} fit the data more accurately than \emph{linear} autoencoders do if they have the same dimensionality in their last hidden layer (and under a few additional mild assumptions). We discussed several aspects of this new and counterintuitive insight, including that this might provide the first explanation for the puzzling experimental observations in the literature of recommender systems that deep nonlinear autoencoders struggled to achieve markedly higher ranking-accuracies than simple linear autoencoders did.
Second, we derived a simple teacher-student algorithm for approximately learning a low-rank EDLAE model \cite{steck20b}, a linear autoencoder that is regularized to not overfit towards the identity-function. We empirically observed that the derived closed-form solution provides an accurate approximation across the entire range of latent dimensions. Moreover, except for very small numbers of latent dimensions, it considerably outperformed the exact closed-form solution of the unconstrained linear autoencoder derived in \cite{jin21}.
\section*{Appendix: Proof of Proposition} In this proof of the Proposition in Section \ref{sec_prop}, two mathematically simple facts are combined. The singular value decomposition (SVD) plays a pivotal role, which we relate to the deep nonlinear AE in the first step, and in the second step to the linear AE. The least-squares problem \begin{equation}
\min_{M: {\rm rank}(M)\le k}|| X- M ||_F^2 ,
\label{eq_m} \end{equation} where $M \in \mathbb{R}^{m \times n}$ is a matrix with a rank of (at most) $k<\min(m,n)$, has a well-known solution (Eckart–Young–Mirsky theorem): let \begin{equation}
X= U\cdot S \cdot V^\top
\label{eq_svd} \end{equation} denote the SVD of $X$, where $U \in \mathbb{R}^{m\times \min(m,n)}$ and $V \in \mathbb{R}^{n \times \min(m,n)}$ are the matrices with the left and right singular vectors, respectively, while $ S \in \mathbb{R}^{\min(m,n)\times \min(m,n)}$ is the diagonal matrix of the singular values of $X$. Then the rank-$k$ solution that minimizes the squared error in Eq. \ref{eq_m} is \begin{equation}
M^*= U_k\cdot S_k \cdot V_k^\top ,
\label{eq_m_solution} \end{equation} where $ S_k \in \mathbb{R}^{k \times k}$ is the diagonal matrix with the $k$ largest singular values in $S$; $U_k \in \mathbb{R}^{m\times k}$ and $V_k \in \mathbb{R}^{n \times k}$ are the sub-matrices of $U$ and $V$ with the singular vectors corresponding to these $k$ largest singular values.
Now, in the first step, we can make the connection to the \emph{deep nonlinear} AE: when comparing Eq. \ref{eq_m_solution} to Eq. \ref{eq_deep}, note that $g_{\theta'}(X)$ in the nonlinear AE may be viewed as a parametric model, while the corresponding matrix $U_k$ in $M^*$ may be viewed as a non-parametric approach.\footnote{'Non-parametric' is used here in the sense that, for each data-point (i.e., row) in the given training-data matrix $X$, there is a corresponding row (with $k$ model-parameters) in matrix $U_k$, i.e., the number of model parameters in $U_k$ grows linearly with the number of data-points in $X$. In contrast, $g_{\theta'}(X)$ in the nonlinear AE is a parametric model, as the function $g_{\theta'}(\cdot)$ has a fixed number of parameters, and it is applied to each row of $X$.} Obviously, the parametric model cannot fit the training-data more accurately than the non-parametric approach does. On the other hand, $W_L$ in the nonlinear AE and the corresponding $S_k \cdot V_k^\top$ in $M^*$ are both (non-parametric) matrices of rank $k$. This implies that the deep nonlinear AE in Eq. \ref{eq_deep} cannot fit the training data better than $M^*$ in Eq. \ref{eq_m_solution} does. Hence, we have \begin{eqnarray}
{\rm SE}^{\rm(deep)} \!\!\!\!\!\! &=& \!\!\!\!\!\! \min_{\theta', W_L}|| X- g_{\theta'}(X) \cdot W_L ||_F^2\nonumber \\
&\ge& \!\!\!\!\!\! ||X - U_k\cdot S_k \cdot V_k^\top||_F^2 = || X- M^* ||_F^2 .
\label{eq_proof_deep} \end{eqnarray} Now, in the second step, we can see that $M^*$ (non-parametric approach) is actually equivalent to the \emph{linear} AE (parametric model), due to the simple identity \begin{equation} U_k\cdot S_k \cdot V_k^\top = U\cdot S \cdot \underbrace{V^\top \cdot V_k}_{=I_{n\times k}} \cdot V_k^\top , \end{equation} which follows immediately from the fact that $I_{n\times k} \in \mathbb{R}^{n\times k}$ is essentially a (rectangular) identity-matrix of rank $k$, which then selects the submatrices $S_k$ from $S$, and $U_k$ from $U$. This simple equation seems to have been overlooked until it was pointed out in \cite{jin21}. With Eqs. \ref{eq_m_solution} and \ref{eq_svd}, it now follows \cite{jin21} \begin{equation} M^*= U_k\cdot S_k \cdot V_k^\top = U\cdot S \cdot V^\top \cdot (V_k \cdot V_k^\top) = X \cdot (V_k \cdot V_k^\top). \end{equation} Comparing this to the linear AE in Eq. \ref{eq_lin}, we can see that one of possibly many solutions of Eq. \ref{eq_se_lin} is $W_1 := W_2^\top := V_k$. Hence, the rank-$k$ SVD (non-parametric approach) and the linear AE (parametric model) can fit the training data equally well: \begin{eqnarray}
{\rm SE}^{\rm(linear)} \!\!\!\!\!\! &=& \!\!\!\!\!\! \min_{W_1,W_2}|| X- X \cdot W_1 \cdot W_2 ||_F^2 \nonumber\\
&=& \!\!\!\!\!\! ||X - X \cdot V_k \cdot V_k^\top||_F^2 = || X- M^* ||_F^2
\label{eq_proof_lin} \end{eqnarray} Comparing Eqs. \ref{eq_proof_deep} and \ref{eq_proof_lin}, we obtain the claim in Eq. \ref{eq_prop}, i.e., ${\rm SE}^{\rm(deep)} \ge {\rm SE}^{\rm(linear)}$. $\Box$
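As a small numerical illustration of this identity (our own check, not part of the original argument), one can verify on random data that the truncated SVD and the linear AE with $W_1=V_k$, $W_2=V_k^\top$ produce the same reconstruction and hence the same squared error:
\begin{verbatim}
# Numerical check that U_k S_k V_k^T = X V_k V_k^T, and that the squared errors coincide.
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 50, 20, 5
X = rng.standard_normal((m, n))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
M_star = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # best rank-k approximation
Vk = Vt[:k, :].T                                 # top-k right singular vectors (n x k)
linear_ae = X @ Vk @ Vk.T                        # linear AE with W1 = Vk, W2 = Vk^T

print(np.allclose(M_star, linear_ae))                                    # True
print(np.linalg.norm(X - M_star)**2, np.linalg.norm(X - linear_ae)**2)   # equal
\end{verbatim}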
\end{document}
\begin{document}
\baselineskip = 13.5pt
\title{\bf On weak-strong uniqueness and singular limit for the compressible Primitive Equations }
\author{ Hongjun Gao$^{1}$ \footnote{Email:[email protected]}\ \ \ \v{S}\'{a}rka Ne\v{c}asov\'{a}$^2$ \footnote{Email: [email protected]} \ \ \ Tong Tang$^{3,2}$ \footnote{Email: [email protected]}\\ {\small 1.Institute of Mathematics, School of Mathematical Sciences,}\\ {\small Nanjing Normal University, Nanjing 210023, P.R. China}\\ {\small 2. Institute of Mathematics of the Academy of Sciences of the Czech Republic,} \\ {\small \v Zitn\' a 25, 11567, Praha 1, Czech Republic}\\ {\small 3. Department of Mathematics, College of Sciences,}\\ {\small Hohai University, Nanjing 210098, P.R. China}\\ \date{}}
\maketitle \begin{abstract} The paper addresses the weak-strong uniqueness property and the singular limit for the compressible Primitive Equations (PE). We show that a weak solution coincides with the strong solution emanating from the same initial data. On the other hand, we prove that the compressible PE converge to the incompressible inviscid PE in the regime of low Mach number and large Reynolds number in the case of well-prepared initial data. To the best of the authors' knowledge, this is the first work to establish a link between the compressible PE and the incompressible inviscid PE.
{{\bf Key words:} compressible Primitive Equations, singular limit, low Mach number, weak-strong uniqueness.}
{ {\bf 2010 Mathematics Subject Classifications}: 35Q30.} \end{abstract}
\maketitle
\section{Introduction}\setcounter{equation}{0} The earth is surrounded and occupied by the atmosphere and the ocean, which play an important role in human life. From the mathematical and numerical point of view, it is very complicated to use the full hydrodynamical and thermodynamical equations to describe the motion and fascinating phenomena of the atmosphere and the ocean. In order to simplify the model, scientists introduced the Primitive Equations (PE) in meteorology and geophysical fluid dynamics, which help us to predict long-term weather and detect global climate changes. In this paper, we study the following Compressible Primitive Equations (CPE):
\begin{eqnarray} \left\{ \begin{array}{llll} \partial_{t}\rho+\text{div}_x(\rho \mathbf{u})+\partial_z(\rho w)=0, \\ \partial_t(\rho \mathbf{u})+\textrm{div}_x(\rho\mathbf{u}\otimes\mathbf{u})+\partial_z(\rho\mathbf uw)+\nabla_x p(\rho)=\mu\Delta_x\mathbf u+\lambda\partial^2_{zz}\mathbf u,\\ \partial_zp(\rho)=0, \end{array}\right.\label{a} \end{eqnarray}
in $(0,T)\times\Omega$. Here $\Omega=\{(x,z)|x\in\mathbb{T}^2,0<z<1\}$, $x$ denotes the horizontal direction and $z$ denotes the vertical direction. $\rho=\rho(t,x)$, $\mathbf{u}(t,x,z)\in\mathbb{R}^2$ and $w(t,x,z)\in\mathbb{R}$ represent the density, the horizontal velocity and the vertical velocity, respectively.
From the hydrostatic balance equation $(1.1)_3$, it follows that {\bf the density $\rho$ is independent of $z$}. Here $\mu>0$ and $\lambda\geq0$ are the constant viscosity coefficients. The system is supplemented by the boundary conditions \begin{eqnarray}
w|_{z=0}=w|_{z=1}=0,\hspace{4pt}\partial_z\mathbf u|_{z=0}=\partial_z\mathbf u|_{z=1}=0, \end{eqnarray} and initial data \begin{eqnarray}
\rho\mathbf u|_{t=0}=\mathbf m_0(x,z),\hspace{3pt}\rho|_{t=0}=\rho_0(x). \end{eqnarray}
The pressure $p(\rho)$ satisfies the barotropic pressure law, i.e., the pressure and the density are related by the following formula: \begin{eqnarray} p(\rho)=\rho^\gamma\hspace{5pt}(\gamma>1). \end{eqnarray} The PE model is widely used in meteorology and geophysical fluid dynamics, as it is amenable to both rigorous theoretical analysis and practical numerical computation. Concerning geophysical fluid dynamics we refer to the works by Chemin, Desjardins, Gallagher and Grenier \cite{ch} or Feireisl, Gallagher, Novotn\'{y} \cite{e}. There is a large number of results about PE, such as \cite{bg,b1,c2,c4,c5,l3,l4,s,t,ws}; we mention just a few of them. Guill\'{e}n-Gonz\'{a}lez, Masmoudi and Rodr\'{\i}guez-Bellido \cite{gu} proved the local existence of strong solutions. The celebrated breakthrough result was made by Cao and Titi \cite{c1}, who were the first to prove the global well-posedness of PE. After that, many researchers focused on the dynamics and regularity of PE, e.g., \cite{g1,g2,ju,kukavica}. Recently, in \cite{c2,c4,c5}, the authors considered strong solutions for PE with vertical eddy diffusivity and only horizontal dissipation. Concerning random perturbations of PE, for local and global strong solutions we refer to \cite{d1, d2, gao}, for large deviation principles see \cite{dong}, and for the diffusion limit see \cite{g3}. On the other hand, regarding the inviscid PE (hydrostatic incompressible Euler equations), existence and uniqueness is an outstanding open problem and only a few results are available. Under a convexity assumption on the horizontal velocity, Brenier \cite{b} proved the existence of smooth solutions in two dimensions. Then, Masmoudi and Wong \cite{m} utilized weighted $H^s$ a priori estimates and obtained existence, uniqueness and weak-strong uniqueness; removing the convexity assumption on the horizontal velocity, they extended Brenier's result. By virtue of the Cauchy-Kowalevski theorem, the authors of \cite{k} constructed a local, unique and real-analytic solution. Notably, Brenier \cite{by} suggested that the existence problem may be ill-posed in Sobolev spaces. Further, Cao et al. \cite{c3} established finite-time blow-up for a certain class of smooth solutions.
In order to account for the compressibility of the atmosphere and the ocean, Ersoy et al. \cite{er1} used the fact that the vertical scale of the atmosphere is significantly smaller than the horizontal scales and derived the CPE from the compressible Navier-Stokes equations. To be precise, the CPE system is obtained by replacing the momentum equation for the vertical velocity with the hydrostatic balance equation. Compared with the compressible Navier-Stokes equations, the vertical velocity in the CPE system is less regular than the horizontal velocity. The absence of sufficient information about the vertical velocity inevitably leads to difficulties in obtaining the existence of solutions. \emph{Lions, Temam and Wang \cite{l1,l2} were the first to study the CPE and obtained fundamental results in this field.} Using a clever change to pressure coordinates ($p$-coordinates), they reformulated the system as the classical PE with the incompressibility condition. Later on, Gatapov and Kazhikhov \cite{g}, and Ersoy and Ngom \cite{er2}, proved the global existence of weak solutions in the 2D case. Liu and Titi \cite{liu1} used classical methods to prove the local existence of strong solutions in the 3D case. Ersoy et al. \cite{er1} and Tang and Gao \cite{tang} showed the stability of weak solutions with viscosity coefficients depending on the density; stability here means that a subsequence of weak solutions converges to another weak solution provided it satisfies suitable uniform bounds. Recently, based on the works \cite{b1,b2,b3,li,v}, Liu and Titi \cite{liu2} and, independently, Wang et al. \cite{w} used the B-D entropy to prove the global existence of weak solutions in the case where the viscosity coefficients depend on the density.
Our paper is divided into two parts. The first part concerns the weak-strong uniqueness of the CPE. Recently, Liu and Titi \cite{liu3} studied the zero Mach number limit of the CPE, proving that it converges to the incompressible PE, a breakthrough result that bridges the CPE and the PE systems. In the second part, inspired by \cite{liu3}, we investigate the singular limit of the CPE, showing that it converges to the incompressible inviscid PE system. \emph{This is the first attempt to use the relative entropy method to study asymptotic limits of the CPE.} Let us mention that the cornerstone of our analysis is the relative energy inequality, which was invented by Dafermos, see \cite{D}. It was introduced by Germain \cite{ge} and generalized by Feireisl \cite{e2} for compressible fluid models. Feireisl and his co-authors \cite{e3,e4} used this versatile tool to solve various problems. However, compared with the previous classical results, there is a significant difference in applying the relative energy inequality to the CPE model, due to the absence of information on the vertical velocity. Therefore, it is not straightforward to transfer the method from the Navier-Stokes equations to the CPE. We utilize the special structure of the CPE to uncover deeper relationships and reveal important features of the system.
The paper is organized as follows. In Section 2, we introduce dissipative weak solutions and the relative energy, and state our first theorem. In Section 3, we prove the weak-strong uniqueness. We recall the target system, state the singular limit theorem and derive the necessary uniform bounds in Section 4. Section 5 is devoted to the proof of the convergence in the case of well-prepared initial data.
\vskip 0.5cm
\noindent {\bf Part I: Weak-Strong uniqueness} \vskip 0.2cm In this part, we focus on the weak-strong uniqueness of the CPE system. \section{Preliminaries and main result}
First of all, we should point out that a proper notion of weak solution to the CPE has not yet been fully understood. Recently, Bresch and Jabin \cite{br} considered a compactness method different from those of Lions and Feireisl, which can similarly be applied to anisotropic stress tensors. They obtained the global existence of weak solutions provided $|\mu-\lambda|$ is not too large. Let us state one of the possible definitions here.
\subsection{Dissipative weak solutions}
\begin{definition}\label{def1} We say that $[\rho,\mathbf u,w]$ is a dissipative weak solution to the system \eqref{a}, supplemented with the initial data (1.3) and the pressure law (1.4), if $\rho=\rho(x,t)$ and \begin{align}
\mathbf u\in L^2(0,T;H^1(\Omega)),\hspace{3pt} \rho|\mathbf u|^2\in L^\infty(0,T; L^1(\Omega)), \hspace{3pt}\rho\in L^\infty(0,T;L^\gamma(\Omega)\cap L^1(\Omega)). \end{align}
\noindent $\bullet$ the continuity equation \begin{align} [\int_\Omega\rho\varphi dxdz]^{t=\tau}_{t=0}=\int^\tau_0\int_{\Omega}\rho\partial_t\varphi+\rho\mathbf{u}\nabla_x\varphi+\rho w\partial_z\varphi dxdzdt, \end{align} holds for all $\varphi\in C^\infty_c([0,T)\times\Omega)$;
\noindent $\bullet$ the momentum equation \begin{align} [\int_\Omega\rho\mathbf u\varphi dxdz]^{t=\tau}_{t=0}&=\int^\tau_0\int_{\Omega}\rho\mathbf{u}\partial_t\varphi+ \rho\mathbf{u}\otimes\mathbf{u}:\nabla_x\varphi+\rho\mathbf uw\partial_z\varphi+ p(\rho)\text{div}\varphi dxdzdt\nonumber\\ &\hspace{8pt}-\int^\tau_0\int_{\Omega}[\mu\nabla_x\mathbf u:\nabla_x\varphi+\lambda\partial_z\mathbf u\partial_z\varphi]dxdzdt, \end{align} holds for all $\varphi\in C^\infty_c([0,T)\times\Omega)$,
\noindent $\bullet$ the energy inequality \begin{align}
[\int_{\Omega}\frac{1}{2}\rho|\mathbf{u}|^2+P(\rho)-P'(\overline{\rho})(\rho-\overline{\rho})-P(\overline{\rho})dxdz]|^{t=\tau}_{t=0}
+\int^\tau_0\int_\Omega(\mu|\nabla_x\mathbf u|^2+\lambda|\partial_z\mathbf u|^2)dxdzdt\leq 0, \end{align} holds for a.a.\ $\tau\in(0,T)$ and an arbitrary constant $\overline{\rho}$, where $P(\rho)=\rho\int^\rho_1\frac{p(z)}{z^2}dz$.
Moreover, as there is no information about $w$, we require the following identity: \begin{align} \rho w(x,z,t)=-\text{div}_x(\rho\widetilde{\mathbf u})+z\,\text{div}_x(\rho\overline{\mathbf u}), \hspace{4pt} \text{in the sense of} \hspace{4pt}H^{-1}(\Omega), \label{b1} \end{align} where \begin{align*} \widetilde{\mathbf u}(x,z,t)=\int^z_0\mathbf u(x,s,t)ds,\hspace{5pt}\overline{\mathbf u}(x,t)=\int^1_0\mathbf u(x,z,t)dz. \end{align*}
\end{definition} We should emphasize that \eqref{b1} is the key step in obtaining the existence of weak solutions in \cite{liu2,w}, and it is inspired by the incompressible case.
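Let us also indicate, at least formally for smooth solutions, how \eqref{b1} arises: since $\partial_zp(\rho)=0$ forces $\rho$ to be independent of $z$, integrating the continuity equation $(1.1)_1$ over $z\in(0,1)$ and using $w|_{z=0}=w|_{z=1}=0$ gives $\partial_t\rho=-\text{div}_x(\rho\overline{\mathbf u})$, while integrating over $(0,z)$ yields
\begin{align*}
\rho w(x,z,t)=-z\,\partial_t\rho-\text{div}_x(\rho\widetilde{\mathbf u})
=z\,\text{div}_x(\rho\overline{\mathbf u})-\text{div}_x(\rho\widetilde{\mathbf u}),
\end{align*}
which is exactly \eqref{b1}.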
\subsection{Relative entropy inequality}
Motivated by \cite{e2,e3}, for any finite weak solution $(\rho,\mathbf u,w)$ to the CPE system, we introduce the relative energy functional \begin{align}
\mathcal{E}(\rho,\mathbf{u}|r, \mathbf{U})&=\int_{\Omega}[\frac{1}{2}\rho|\mathbf u-\mathbf U|^2+P(\rho)-P'(r)(\rho-r)-P(r)]dxdz\nonumber\\
&=\int_\Omega(\frac{1}{2}\rho|\mathbf u|^2+P(\rho))dxdz-\int_\Omega\rho\mathbf u\cdot\mathbf Udxdz
+\int_\Omega\rho[\frac{|\mathbf U|^2}{2}-P'(r)]dxdz+\int_\Omega p(r)dxdz\nonumber\\ &=\sum^4_{i=1}I_i,\label{a1} \end{align} where $r>0$ and $\mathbf U$ are smooth ``test'' functions, compactly supported in $\Omega$.
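Let us note that the last term in \eqref{a1} stems from the elementary identity $rP'(r)-P(r)=p(r)$, which follows directly from the definition of the pressure potential:
\begin{align*}
P(r)=r\int^r_1\frac{p(z)}{z^2}dz
\quad\Longrightarrow\quad
P'(r)=\int^r_1\frac{p(z)}{z^2}dz+\frac{p(r)}{r}
\quad\Longrightarrow\quad
rP'(r)-P(r)=p(r).
\end{align*}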
\begin{lemma}\label{relativeentropy} Let $(\rho,\mathbf{u}, w)$ be a dissipative weak solution as introduced in Definition \ref{def1}. Then $(\rho,\mathbf{u}, w)$ satisfies the relative entropy inequality
\begin{align}
\mathcal{E}&(\rho,\mathbf{u}|r,\mathbf U)|^{t=\tau}_{t=0}+\int^\tau_0\int_\Omega\big{(}\mu\nabla_x\mathbf u\cdot(\nabla_x\mathbf u-\nabla_x\mathbf U)+\lambda\partial_z\mathbf u(\partial_z\mathbf u-\partial_z\mathbf U)\big{)}dxdzdt\nonumber\\ &\leq\int^\tau_0\int_{\Omega}\rho(\mathbf U-\mathbf u)\partial_t\mathbf U+\rho\mathbf u(\mathbf U-\mathbf u)\cdot\nabla_x\mathbf U +\rho w(\mathbf U-\mathbf u)\cdot\partial_z\mathbf U-p(\rho)\text{div}_x\mathbf Udxdzdt\nonumber\\ &\hspace{15pt}-\int^\tau_0\int_{\Omega}P''(r)(\rho\partial_tr+\rho\mathbf u\nabla_xr)dxdzdt +\int^\tau_0\int_{\Omega}\partial_tp(r)dxdzdt. \end{align} \end{lemma} {\bf Proof:} From the weak formulation and energy inequality (2.2)-(2.4) we deduce \begin{align}
&I_1|^{t=\tau}_{t=0}+\int^\tau_0\int_\Omega(\mu|\nabla_x\mathbf u|^2+\lambda|\partial_z\mathbf u|^2)dxdzdt\leq0,\\
&I_2|^{t=\tau}_{t=0}=-\int^\tau_0\int_\Omega\rho\mathbf u\partial_t\mathbf U+\rho\mathbf u\otimes\mathbf u:\nabla_x\mathbf U+ \rho\mathbf uw\partial_z\mathbf U+p(\rho)\text{div}_x\mathbf Udxdzdt\nonumber\\ &\hspace{40pt}+\int^\tau_0\int_\Omega\mu\nabla_x\mathbf u:\nabla_x\mathbf U+\lambda\partial_z\mathbf u\partial_z\mathbf Udxdzdt,\\
&I_3|^{t=\tau}_{t=0}=\int^\tau_0\int_\Omega\rho\partial_t\frac{|\mathbf U|^2}{2}+\rho\mathbf u\cdot\nabla_x\frac{|\mathbf U|^2}{2}+\rho w\partial_z\frac{|\mathbf U|^2}{2}dxdzdt\nonumber\\ &\hspace{40pt}-\int^\tau_0\int_\Omega\rho\partial_tP'(r)+\rho\mathbf u\cdot\nabla_xP'(r)+\rho w\partial_zP'(r)dxdzdt\nonumber\\ &\hspace{20pt}=\int^\tau_0\int_\Omega\rho\mathbf U\partial_t\mathbf U+\rho\mathbf u\mathbf U\cdot\nabla_x\mathbf U+\rho w\mathbf U\partial_z\mathbf Udxdzdt\nonumber\\ &\hspace{30pt}-\int^\tau_0\int_\Omega\rho P''(r)\partial_tr+P''(r)\rho\mathbf u\cdot\nabla_xr dxdzdt,\\
&I_4|^{t=\tau}_{t=0}=[\int_\Omega p(r)dxdz]|^{t=\tau}_{t=0}=\int^\tau_0\int_{\Omega}\partial_tp(r)dxdzdt. \end{align}
Summing the above estimates for $I_1,\ldots,I_4$, we obtain \begin{align}
\mathcal{E}&(\rho,\mathbf{u}|r,\mathbf U)|^{t=\tau}_{t=0}+\int^\tau_0\int_\Omega\big{(}\mu\nabla_x\mathbf u\cdot(\nabla_x\mathbf u-\nabla_x\mathbf U)+\lambda\partial_z\mathbf u(\partial_z\mathbf u-\partial_z\mathbf U)\big{)}dxdzdt\nonumber\\ &\leq\int^\tau_0\int_{\Omega}\rho(\mathbf U-\mathbf u)\partial_t\mathbf U+\rho\mathbf u(\mathbf U-\mathbf u)\cdot\nabla_x\mathbf U +\rho w(\mathbf U-\mathbf u)\cdot\partial_z\mathbf U-p(\rho)\text{div}_x\mathbf Udxdzdt\nonumber\\ &\hspace{15pt}-\int^\tau_0\int_{\Omega}P''(r)(\rho\partial_tr+\rho\mathbf u\nabla_xr)dxdzdt +\int^\tau_0\int_{\Omega}\partial_tp(r)dxdzdt. \end{align}
\subsection{Main result} We say that $(r,\mathbf U,W)$ is a strong solution to the CPE system $(1.1)-(1.4)$ in $(0,T)\times\Omega$, if \begin{align*} &r^\frac{1}{2}\in L^\infty(0,T;H^2(\Omega)),\hspace{3pt}\partial_tr^\frac{1}{2}\in L^\infty(0,T;H^1(\Omega)),\hspace{3pt}r>0\hspace{3pt}\text{for all}\hspace{3pt}(t,x),\\ &\mathbf U\in L^\infty(0,T;H^3(\Omega))\cap L^2(0,T;H^4(\Omega)),\hspace{3pt} \partial_t\mathbf U\in L^2(0,T; H^2(\Omega)), \end{align*} with initial data $r^\frac{1}{2}_0\in H^2(\Omega)$, $r_0>0$ and $\mathbf U_0\in H^3(\Omega)$.
Now, we are ready to state our first result. \begin{theorem} Let $\gamma>6$ and let $(\rho,\mathbf u,w)$ be a dissipative weak solution to the CPE system $(1.1)-(1.4)$ in $(0,T)\times\Omega$. Let $(r,\mathbf U, W)$ be a strong solution to the same problem, emanating from the same initial data. Then, \begin{align*} \rho=r,\hspace{5pt}\mathbf u=\mathbf U,\hspace{4pt}\text{in}\hspace{3pt}(0,T)\times\Omega. \end{align*}
\end{theorem}
\begin{remark} Liu and Titi \cite{liu1} obtained the local existence of strong solutions to the CPE. It is important to point out that our result requires more regularity of the strong solution than is provided by \cite{liu1}. \end{remark}
Section 3 is devoted to the proof of the above theorem.
\section{Weak-strong uniqueness} The proof of Theorem 2.1 relies on the relative energy inequality, taking the strong solution $[r,\mathbf U, W]$ as the test functions in \eqref{a1}.
\subsection{Step 1} We write \begin{align*} \int_\Omega\rho\mathbf u(\mathbf U-\mathbf u)\cdot\nabla_x\mathbf Udxdz= \int_\Omega\rho(\mathbf u-\mathbf U)(\mathbf U-\mathbf u)\cdot\nabla_x\mathbf Udxdz +\int_\Omega\rho\mathbf U(\mathbf U-\mathbf u)\cdot\nabla_x\mathbf Udxdz. \end{align*}
As $[r,\mathbf U, W]$ is a strong solution, it is easy to obtain that \begin{align} \int_\Omega\rho(\mathbf u-\mathbf U)(\mathbf U-\mathbf u)\cdot\nabla_x\mathbf Udxdz
\leq C\mathcal{E}(\rho,\mathbf u|r,\mathbf U). \end{align}
Moreover, the momentum equation reads as \begin{align*} (r\mathbf U)_t+\text{div}(r\mathbf U\otimes\mathbf U)+\partial_z(r\mathbf UW)+\nabla_xp(r)=\mu\Delta_x\mathbf U+\lambda\partial_{zz}\mathbf U, \end{align*} implying that \begin{align*} \mathbf U_t+\mathbf U\cdot\nabla_x\mathbf U+W\partial_z\mathbf U=-\frac{1}{r}\nabla_xp(r)+\frac{\mu}{r}\Delta_x\mathbf U +\frac{\lambda}{r}\partial_{zz}\mathbf U. \end{align*}
So we rewrite \begin{align*} \int_\Omega\rho(\mathbf U-\mathbf u)\cdot\partial_t\mathbf U+ \rho\mathbf U(\mathbf U-\mathbf u)\cdot\nabla_x\mathbf U +\rho W(\mathbf U-\mathbf u)\cdot\partial_z\mathbf U+\rho(w-W)(\mathbf U-\mathbf u)\cdot\partial_z\mathbf Udxdz\\ =\int_\Omega\frac{\rho}{r}(\mathbf U-\mathbf u)(-\nabla_xp(r)+\mu\Delta_x\mathbf U+\lambda\partial_{zz}\mathbf U)dxdz +\int_\Omega\rho(w-W)(\mathbf U-\mathbf u)\cdot\partial_z\mathbf Udxdz. \end{align*}
Thus, we obtain that \begin{align*}
\mathcal{E}&(\rho,\mathbf{u}|r,\mathbf U)|^{t=\tau}_{t=0}+\int^\tau_0\int_\Omega\big{(}\mu\nabla_x\mathbf u\cdot(\nabla_x\mathbf u-\nabla_x\mathbf U)+\lambda\partial_z\mathbf u(\partial_z\mathbf u-\partial_z\mathbf U)\big{)}dxdzdt\nonumber\\
&\leq C\int^\tau_0\mathcal{E}(\rho,\mathbf{u}|r,\mathbf U)dt -\int^\tau_0\int_{\Omega}P''(r)(\rho\partial_tr+\rho\mathbf u\nabla_xr)dxdzdt\nonumber\\ &\hspace{8pt}+\int^\tau_0\int_\Omega\frac{\rho}{r}(\mathbf U-\mathbf u)(\mu\Delta_x\mathbf U+\lambda\partial_{zz}\mathbf U)dxdz -\int^\tau_0\int_\Omega\frac{\rho}{r}(\mathbf U-\mathbf u)\nabla_xp(r)dxdz\nonumber\\ &\hspace{8pt}+\int^\tau_0\int_\Omega\rho(w-W)(\mathbf U-\mathbf u)\cdot\partial_z\mathbf Udxdzdt+\int^\tau_0\int_\Omega\partial_tp(r)dxdzdt -\int^\tau_0\int_\Omega p(\rho)\text{div}_x\mathbf Udxdzdt.\nonumber\\ \end{align*}
Before estimating, we should recall the following useful inequality from \cite{e2}: \begin{equation} \label{pres} P(\rho)-P'(r)(\rho-r)-P(r)\geq\left\{
\begin{array}{llll} C|\rho-r|^2,\hspace{5pt}\text{when} \hspace{3pt} \frac{r}{2}<\rho<r, \nonumber\\ C(1+\rho^\gamma),\hspace{5pt}\text{otherwise}. \end{array}\right. \end{equation}
Moreover, from \cite{e2}, we learn that \begin{align}
&\mathcal{E}(\rho,\mathbf{u}|r,\mathbf U)(t)\in L^\infty(0,T),\hspace{3pt}
\int_\Omega\chi_{\rho\geq r}\rho^{\gamma}dxdz\leq C\mathcal{E}(\rho,\mathbf{u}|r,\mathbf U)(t),\nonumber\\
&\int_\Omega\chi_{\rho\leq \frac{r}{2}}1dxdz\leq C\mathcal{E}(\rho,\mathbf{u}|r,\mathbf U)(t),\hspace{3pt}
\int_\Omega\chi_{\frac{r}{2}<\rho<r}(\rho-r)^2dxdz\leq C\mathcal{E}(\rho,\mathbf{u}|r,\mathbf U)(t).\label{a3} \end{align}
The main difficulty is to estimate the complicated nonlinear term $\int_\Omega\rho(w-W)(\mathbf U-\mathbf u)\cdot\partial_z\mathbf Udxdz$. We rewrite it as \begin{align} \int_\Omega\rho(w-W)(\mathbf U-\mathbf u)\cdot\partial_z\mathbf Udxdz =\int_\Omega\rho w(\mathbf U-\mathbf u)\cdot\partial_z\mathbf Udxdz-\int_\Omega\rho W(\mathbf U-\mathbf u)\cdot\partial_z\mathbf Udxdz.\label{b} \end{align}
According to \cite{e2,kr}, we divide the second term on the right side of (3.3) into three parts \begin{align} \int_\Omega&\rho W(\mathbf U-\mathbf u)\cdot\partial_z\mathbf Udxdz\nonumber\nonumber\\ &=\int_\Omega\chi_{\rho\leq \frac{r}{2}}\rho W(\mathbf U-\mathbf u)\cdot\partial_z\mathbf Udxdz +\int_\Omega\chi_{\frac{r}{2}<\rho<r}\rho W(\mathbf U-\mathbf u)\cdot\partial_z\mathbf Udxdz+\int_\Omega\chi_{\rho\geq r}\rho W(\mathbf U-\mathbf u)\cdot\partial_z\mathbf Udxdz\nonumber\nonumber\\
&\leq \|\chi_{\rho\leq \frac{r}{2}}1\|_{L^2(\Omega)}\|r\|_{L^\infty}\|W\partial_z\mathbf U\|_{L^3}\|\mathbf U-\mathbf u\|_{L^6(\Omega)} +\int_\Omega\chi_{\rho\geq r}\rho^{\frac{\gamma}{2}}W\partial_z\mathbf U\cdot(\mathbf U-\mathbf u)dxdz\nonumber\\
&\hspace{8pt}+C\|\chi_{\frac{r}{2}<\rho< r}(\rho-r)\|_{L^2(\Omega)}\|W\partial_z\mathbf U\|_{L^3}\|\mathbf U-\mathbf u\|_{L^6(\Omega)}\nonumber\\ &\leq C\int_\Omega\chi_{\rho\leq \frac{r}{2}}1dxdz +C\int_\Omega\chi_{\frac{r}{2}<\rho<r}(\rho-r)^2dxdz +C\int_\Omega\chi_{\rho\geq r}\rho^\gamma dxdz
+\delta\|\mathbf U-\mathbf u\|^2_{L^6(\Omega)}\nonumber\\
&\leq C\mathcal{E}(\rho,\mathbf u|r,\mathbf U)+\delta\|\nabla_x\mathbf U-\nabla_x\mathbf u\|^2_{L^2(\Omega)}
+\delta\|\partial_z\mathbf U-\partial_z\mathbf u\|^2_{L^2(\Omega)},\label{33c} \end{align} where in the last inequality we have used the following celebrated inequality from Feireisl \cite{e1}: \begin{lemma} Let $2\leq p\leq6$ and let $\rho\geq0$ be such that $0<\int_\Omega\rho dx\leq M$ and $\int_\Omega\rho^\gamma dx\leq E_0$ for some $\gamma>1$. Then \begin{align*}
\|f\|_{L^p(\Omega)}\leq C\|\nabla f\|_{L^2(\Omega)}+\|\rho^{\frac{1}{2}}f\|_{L^2(\Omega)}, \end{align*} where $C$ depends on $M$ and $E_0$. \end{lemma}
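To make the use of the lemma transparent, we sketch how it is applied here: taking $f=\mathbf U-\mathbf u$ and $p=6$, and recalling that $\int_\Omega\frac{1}{2}\rho|\mathbf u-\mathbf U|^2dxdz\leq\mathcal{E}(\rho,\mathbf u|r,\mathbf U)$, we get \begin{align*} \|\mathbf U-\mathbf u\|^2_{L^6(\Omega)}\leq C\big(\|\nabla_x\mathbf U-\nabla_x\mathbf u\|^2_{L^2(\Omega)}+\|\partial_z\mathbf U-\partial_z\mathbf u\|^2_{L^2(\Omega)}+\mathcal{E}(\rho,\mathbf u|r,\mathbf U)\big), \end{align*} so the term $\delta\|\mathbf U-\mathbf u\|^2_{L^6(\Omega)}$ above is controlled by $C\delta\mathcal{E}(\rho,\mathbf u|r,\mathbf U)+C\delta\|\nabla_x\mathbf U-\nabla_x\mathbf u\|^2_{L^2(\Omega)}+C\delta\|\partial_z\mathbf U-\partial_z\mathbf u\|^2_{L^2(\Omega)}$, which yields \eqref{33c} after redefining $\delta$.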
On the other hand, we insert \eqref{b1} into the first term on the right-hand side of \eqref{b} and get \begin{align} \int_\Omega\rho w&(\mathbf U-\mathbf u)\cdot\partial_z\mathbf Udxdz\nonumber\\ &=\int_\Omega[-\text{div}_x(\rho\widetilde{\mathbf u})+z\text{div}_x(\rho\overline{\mathbf u})]\partial_z\mathbf U \cdot(\mathbf U-\mathbf u)dxdz\nonumber\\ &=\int_\Omega(\rho\widetilde{\mathbf u}-z\rho\overline{\mathbf u})\partial_z\nabla_x\mathbf U\cdot(\mathbf U-\mathbf u)dxdz+\int_\Omega(\rho\widetilde{\mathbf u}-z\rho\overline{\mathbf u})\partial_z\mathbf U\cdot(\nabla_x\mathbf U-\nabla_x\mathbf u)dxdz.\label{c} \end{align}
In the following, we estimate the terms on the right-hand side of \eqref{c}. We choose the most complicated terms as examples; the remaining terms can be analyzed similarly. Firstly, we deal with $\int_\Omega\rho\widetilde{\mathbf u}\partial_z\nabla_x\mathbf U\cdot(\mathbf U-\mathbf u)dxdz$: \begin{align*} \int_\Omega\rho\widetilde{\mathbf u}\partial_z\nabla_x\mathbf U\cdot(\mathbf U-\mathbf u)dxdz &=\int_\Omega\rho(\widetilde{\mathbf u}-\widetilde{\mathbf U})\partial_z\nabla_x\mathbf U\cdot(\mathbf U-\mathbf u)dxdz +\int_\Omega\rho\widetilde{\mathbf U}\partial_z\nabla_x\mathbf U\cdot(\mathbf U-\mathbf u)dxdz\\ &=J_1+J_2, \end{align*} where $\widetilde{\mathbf U}=\int^z_0\mathbf U(x,s,t)ds$.
Similar to the above analysis, we divide the term $J_2$ into three parts \begin{align*} J_2&=\int_\Omega\rho\widetilde{\mathbf U}\partial_z\nabla_x\mathbf U\cdot(\mathbf U-\mathbf u)dxdz\\ &=\int_\Omega\chi_{\rho\leq \frac{r}{2}}\rho\widetilde{\mathbf U}\partial_z\nabla_x\mathbf U\cdot(\mathbf U-\mathbf u)dxdz +\int_\Omega\chi_{\frac{r}{2}<\rho<r}\rho\widetilde{\mathbf U}\partial_z\nabla_x\mathbf U\cdot(\mathbf U-\mathbf u)dxdz\\ &\hspace{5pt}+\int_\Omega\chi_{\rho\geq r}\rho\widetilde{\mathbf U}\partial_z\nabla_x\mathbf U\cdot(\mathbf U-\mathbf u)dxdz\\
&\leq \|\chi_{\rho\leq \frac{r}{2}}1\|_{L^2(\Omega)}\|r\|_{L^\infty}\|\widetilde{\mathbf U}\partial_z\nabla_x\mathbf U\|_{L^3}\|\mathbf U-\mathbf u\|_{L^6(\Omega)}
+\|\chi_{\rho\geq r}\rho^{\frac{\gamma}{2}}\|_{L^2(\Omega)}\|\widetilde{\mathbf U}\partial_z\nabla_x\mathbf U\|_{L^3(\Omega)}\|\mathbf U-\mathbf u\|_{L^6(\Omega)}\\
&\hspace{8pt}+C\|\chi_{\frac{r}{2}<\rho<r}(\rho-r)\|_{L^2(\Omega)}\|\widetilde{\mathbf U}\partial_z\nabla_x\mathbf U\|_{L^3(\Omega)}\|\mathbf U-\mathbf u\|_{L^6(\Omega)}\\
&\leq C\mathcal{E}(\rho,\mathbf u|r,\mathbf U)(t)+\delta\|\nabla_x\mathbf U-\nabla_x\mathbf u\|^2_{L^2(\Omega)}
+\delta\|\partial_z\mathbf U-\partial_z\mathbf u\|^2_{L^2(\Omega)}. \end{align*}
On the other hand, by virtue of Cauchy inequality, we obtain \begin{align} J_1&=\int_\Omega\rho(\widetilde{\mathbf u}-\widetilde{\mathbf U})\partial_z\nabla_x\mathbf U\cdot(\mathbf U-\mathbf u)dxdz\nonumber\\
&\leq\|\partial_z\nabla_x\mathbf U\|_{L^\infty}\int_\Omega\rho|\widetilde{\mathbf u}-\widetilde{\mathbf U}|^2dxdz+\int_\Omega\rho|\mathbf u-\mathbf U|^2dxdz\nonumber\\
&\leq C\int_\Omega\rho|\int^z_0(\mathbf u(s)-\mathbf U(s))ds|^2dxdz+\mathcal{E}(\rho,\mathbf u|r,\mathbf U)\nonumber\\
&\leq C\int_\Omega\rho\big{(}\int^1_0|\mathbf u-\mathbf U|^2ds\big{)}dxdz+\mathcal{E}(\rho,\mathbf u|r,\mathbf U)\nonumber\\
&\leq C\int^1_0\int_\Omega \rho|\mathbf u-\mathbf U|^2dxdzds+\mathcal{E}(\rho,\mathbf u|r,\mathbf U)\nonumber\\
&\leq C\int_\Omega \rho|\mathbf u-\mathbf U|^2dxdz+\mathcal{E}(\rho,\mathbf u|r,\mathbf U)\nonumber\\
&\leq C\mathcal{E}(\rho,\mathbf u|r,\mathbf U).\label{3aaa} \end{align}
Secondly, we tackle another complicated nonlinear term, $\int_\Omega\rho\widetilde{\mathbf u}\partial_z\mathbf U\cdot(\nabla_x\mathbf U-\nabla_x\mathbf u)dxdz$, which is easily rewritten as \begin{align} \int_\Omega\rho\widetilde{\mathbf u}&\partial_z\mathbf U\cdot(\nabla_x\mathbf U-\nabla_x\mathbf u)dxdz\nonumber\\ &=\int_\Omega\chi_{\rho< r}\rho\widetilde{\mathbf u}\partial_z\mathbf U\cdot(\nabla_x\mathbf U-\nabla_x\mathbf u)dxdz +\int_\Omega\chi_{\rho\geq r}\rho\widetilde{\mathbf u}\partial_z\mathbf U\cdot(\nabla_x\mathbf U-\nabla_x\mathbf u)dxdz,\label{2a} \end{align} where \begin{align*} \int_\Omega\chi_{\rho< r}&\rho\widetilde{\mathbf u}\partial_z\mathbf U\cdot(\nabla_x\mathbf U-\nabla_x\mathbf u)dxdz\\ &=\int_{\Omega}\chi_{\rho<r}\rho(\widetilde{\mathbf u}-\widetilde{\mathbf U})\partial_z\mathbf U\cdot(\nabla_x\mathbf U-\nabla_x\mathbf u)dxdz +\int_{\Omega}\chi_{\rho<r}\rho\widetilde{\mathbf U}\partial_z\mathbf U\cdot(\nabla_x\mathbf U-\nabla_x\mathbf u)dxdz\\ &=\int_{\Omega}\chi_{\rho<r}\rho(\widetilde{\mathbf u}-\widetilde{\mathbf U})\partial_z\mathbf U\cdot(\nabla_x\mathbf U-\nabla_x\mathbf u)dxdz +\int_{\Omega}\chi_{\frac{r}{2}<\rho<r}\rho\widetilde{\mathbf U}\partial_z\mathbf U\cdot(\nabla_x\mathbf U-\nabla_x\mathbf u)dxdz\\ &\hspace{20pt}+\int_{\Omega}\chi_{\rho\leq \frac{r}{2}}\rho\widetilde{\mathbf U}\partial_z\mathbf U\cdot(\nabla_x\mathbf U-\nabla_x\mathbf u)dxdz\\
&\leq \|\chi_{\rho<r}\rho^{\frac{1}{2}}\|_{L^\infty(\Omega)}\|\sqrt{\rho}(\widetilde{\mathbf u}-\widetilde{\mathbf U})\|_{L^2(\Omega)}\|\partial_z\mathbf U\|_{L^\infty(\Omega)}\|\nabla_x\mathbf U-\nabla_x\mathbf u\|_{L^2(\Omega)}\\
&\hspace{10pt}+\|\chi_{\frac{r}{2}<\rho< r}\rho\|_{L^2(\Omega)}\|\widetilde{\mathbf U}\partial_z\mathbf U\|_{L^\infty(\Omega)}\|\nabla_x\mathbf U-\nabla_x\mathbf u\|_{L^2(\Omega)}\\
&\hspace{10pt}+\|\chi_{\rho\leq \frac{r}{2}}1\|_{L^2(\Omega)}\|r\|_{L^\infty(\Omega)}
\|\widetilde{\mathbf U}\partial_z\mathbf U\|_{L^\infty(\Omega)}\|\nabla_x\mathbf U-\nabla_x\mathbf u\|_{L^2(\Omega)}\\
&\leq C\mathcal{E}(\rho,\mathbf u|r,\mathbf U)(t)+\delta\|\nabla_x\mathbf U-\nabla_x\mathbf u\|^2_{L^2(\Omega)}. \end{align*}
Then we will deal with the second term on the right side of \eqref{2a}: \begin{align} \int_\Omega\chi_{\rho\geq r}&\rho\widetilde{\mathbf u}\partial_z\mathbf U\cdot(\nabla_x\mathbf U-\nabla_x\mathbf u)dxdz\nonumber\\ &=\int_\Omega\chi_{\rho\geq r}\rho(\widetilde{\mathbf u}-\widetilde{\mathbf U})\partial_z\mathbf U\cdot(\nabla_x\mathbf U-\nabla_x\mathbf u)dxdz +\int_\Omega\chi_{\rho\geq r}\rho\widetilde{\mathbf U}\partial_z\mathbf U\cdot(\nabla_x\mathbf U-\nabla_x\mathbf u)dxdz\nonumber\\ &=K_1+K_2,\label{333} \end{align} where \begin{align} K_2&\leq \int_\Omega\chi_{\rho\geq r}\rho^{\frac{\gamma}{2}}\widetilde{\mathbf U}\partial_z\mathbf U\cdot(\nabla_x\mathbf U-\nabla_x\mathbf u)dxdz\nonumber\\
&\leq \|\chi_{\rho\geq r}\rho^{\frac{\gamma}{2}}\|_{L^2(\Omega)}
\|\widetilde{\mathbf U}\partial_z\mathbf U\|_{L^\infty(\Omega)}
\|\nabla_x\mathbf U-\nabla_x\mathbf u\|_{L^2(\Omega)}\nonumber\\
&\leq C\|\chi_{\rho\geq r}\rho^{\frac{\gamma}{2}}\|^2_{L^2(\Omega)}
+\delta\|\nabla_x\mathbf U-\nabla_x\mathbf u\|^2_{L^2(\Omega)}\nonumber\\
&\leq C\mathcal{E}(\rho,\mathbf u|r,\mathbf U)(t)+\delta\|\nabla_x\mathbf U-\nabla_x\mathbf u\|^2_{L^2(\Omega)}. \end{align}
Next, by virtue of H\"{o}lder inequality, we get \begin{align*}
K_1&\leq \|\chi_{\rho\geq r}\rho\|_{L^\gamma(\Omega)}
\|\chi_{\rho\geq r}(\widetilde{\mathbf u}-\widetilde{\mathbf U})\|_{L^3(\Omega)}
\|\partial_z\mathbf U\|_{L^\frac{6\gamma}{\gamma-6}(\Omega)}
\|\nabla_x\mathbf U-\nabla_x\mathbf u\|_{L^2(\Omega)}\nonumber\\
&\leq C\|\chi_{\rho\geq r}\rho\|^2_{L^\gamma(\Omega)}
\|\chi_{\rho\geq r}(\widetilde{\mathbf u}-\widetilde{\mathbf U})\|^2_{L^3(\Omega)}
+\delta\|\nabla_x\mathbf U-\nabla_x\mathbf u\|^2_{L^2(\Omega)}\nonumber\\
&\leq C\|\chi_{\rho\geq r}\rho\|^2_{L^\gamma(\Omega)}
\|\chi_{\rho\geq r}(\widetilde{\mathbf u}-\widetilde{\mathbf U})\|_{L^2(\Omega)}
\|\chi_{\rho\geq r}(\widetilde{\mathbf u}-\widetilde{\mathbf U})\|_{H^1(\Omega)}
+\delta\|\nabla_x\mathbf U-\nabla_x\mathbf u\|^2_{L^2(\Omega)}\nonumber\\
&\leq C\|\chi_{\rho\geq r}\rho\|^4_{L^\gamma(\Omega)}
\|\chi_{\rho\geq r}(\widetilde{\mathbf u}-\widetilde{\mathbf U})\|^2_{L^2(\Omega)}
+\delta\|\chi_{\rho\geq r}(\widetilde{\mathbf u}-\widetilde{\mathbf U})\|^2_{L^2(\Omega)}
+\delta\|\nabla_x\widetilde{\mathbf U}-\nabla_x\widetilde{\mathbf u}\|^2_{L^2(\Omega)}\nonumber\\
&\hspace{10pt}+\delta\|\partial_z\widetilde{\mathbf U}-\partial_z\widetilde{\mathbf u}\|^2_{L^2(\Omega)}
+\delta\|\nabla_x\mathbf U-\nabla_x\mathbf u\|^2_{L^2(\Omega)}, \end{align*} where we have used the interpolation inequality \begin{align*}
\|f\|_{L^3(\Omega)}\leq C\|f\|^{\frac{1}{2}}_{L^2(\Omega)}\|f\|^{\frac{1}{2}}_{H^1(\Omega)}. \end{align*} According to \eqref{a3} and \eqref{3aaa}, we have \begin{align*}
\|\chi_{\rho\geq r}\rho\|^4_{L^\gamma(\Omega)} =(\int_{\rho\geq r}\rho^\gamma dxdz)^{\frac{4}{\gamma}}
\leq\mathcal{E}(\rho,\mathbf u|r,\mathbf U)^{\frac{4}{\gamma}}(t), \end{align*} and \begin{align*}
\|\chi_{\rho\geq r}(\widetilde{\mathbf u}-\widetilde{\mathbf U})\|^2_{L^2(\Omega)}
=\int_{\rho\geq r}|\widetilde{\mathbf u}-\widetilde{\mathbf U}|^2dxdz
=\int_{\rho\geq r}\frac{1}{\rho}\rho|\widetilde{\mathbf u}-\widetilde{\mathbf U}|^2dxdz
\leq \frac{1}{\inf_{\Omega}r}\int_{\rho\geq r}\rho|\widetilde{\mathbf u}-\widetilde{\mathbf U}|^2dxdz \leq C\mathcal{E}(\rho,\mathbf u|r,\mathbf U)(t). \end{align*}
Similar to the estimate of \eqref{3aaa}, we obtain \begin{align*}
\|\nabla_x\widetilde{\mathbf U}-\nabla_x\widetilde{\mathbf u}\|^2_{L^2(\Omega)}
\leq \|\nabla_x\mathbf U-\nabla_x\mathbf u\|^2_{L^2(\Omega)},
\hspace{5pt}\|\partial_z\widetilde{\mathbf U}-\partial_z\widetilde{\mathbf u}\|^2_{L^2(\Omega)}
\leq\|\partial_z\mathbf U-\partial_z\mathbf u\|^2_{L^2(\Omega)}. \end{align*}
Combining the above estimates, we get \begin{align*}
\int^\tau_0K_1dt\leq C\int^\tau_0h(t)\mathcal{E}(\rho,\mathbf u|r,\mathbf U)(t)dt
+\delta\int^\tau_0\|\nabla_x\mathbf U-\nabla_x\mathbf u\|^2_{L^2(\Omega)}+
\|\partial_z\mathbf U-\partial_z\mathbf u\|^2_{L^2(\Omega)}dt, \end{align*} where $h(t)\in L^1(0,T)$.
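For definiteness, one admissible choice of $h$, consistent with the preceding bounds, is \begin{align*} h(t)=C\big(1+\mathcal{E}(\rho,\mathbf u|r,\mathbf U)^{\frac{4}{\gamma}}(t)\big), \end{align*} which belongs to $L^\infty(0,T)\subset L^1(0,T)$ because $\mathcal{E}(\rho,\mathbf{u}|r,\mathbf U)\in L^\infty(0,T)$ by \eqref{a3}.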
Using the same method we estimate the remaining terms. Therefore, we conclude that \begin{align*}
\mathcal{E}&(\rho,\mathbf{u}|r,\mathbf U)|^{t=\tau}_{t=0}+\int^\tau_0\int_\Omega\big{(}\mu\nabla_x\mathbf u\cdot(\nabla_x\mathbf u-\nabla_x\mathbf U)+\lambda\partial_z\mathbf u(\partial_z\mathbf u-\partial_z\mathbf U)\big{)}dxdzdt\nonumber\\
&\leq C\int^\tau_0h(t)\mathcal{E}(\rho,\mathbf{u}|r,\mathbf U)dt+\delta\int^\tau_0\|\nabla_x\mathbf U-\nabla_x\mathbf u\|^2_{L^{2}(\Omega)}
+\|\partial_z\mathbf U-\partial_z\mathbf u\|^2_{L^{2}(\Omega)}dt\nonumber\\ &\hspace{8pt}+\int^\tau_0\int_\Omega\frac{\rho}{r}(\mathbf U-\mathbf u)(\mu\Delta_x\mathbf U+\lambda\partial_{zz}\mathbf U)dxdzdt -\int^\tau_0\int_\Omega\frac{\rho}{r}(\mathbf U-\mathbf u)\nabla_xp(r)dxdzdt\nonumber\\ &\hspace{8pt}-\int^\tau_0\int_{\Omega}P''(r)(\rho\partial_tr+\rho\mathbf u\nabla_xr)dxdzdt +\int^\tau_0\int_\Omega\partial_tp(r)dxdzdt-\int^\tau_0\int_\Omega p(\rho)\text{div}_x\mathbf Udxdzdt. \end{align*}
Then we deduce that \begin{align}
\mathcal{E}&(\rho,\mathbf{u}|r,\mathbf U)|^{t=\tau}_{t=0}+\int^\tau_0\int_\Omega\big{(}\mu(\nabla_x\mathbf u-\nabla_x\mathbf U):(\nabla_x\mathbf u-\nabla_x\mathbf U)+\lambda(\partial_z\mathbf u-\partial_z\mathbf U)^2\big{)}dxdzdt\nonumber\\
&\leq C\int^\tau_0h(t)\mathcal{E}(\rho,\mathbf{u}|r,\mathbf U)dt+\delta\int^\tau_0\|\nabla_x\mathbf U-\nabla_x\mathbf u\|^2_{L^{2}(\Omega)}
+\|\partial_z\mathbf U-\partial_z\mathbf u\|^2_{L^{2}(\Omega)}dt\nonumber\\ &\hspace{8pt}+\int^\tau_0\int_\Omega(\frac{\rho}{r}-1)(\mathbf U-\mathbf u)(\mu\Delta_x\mathbf U+\lambda\partial_{zz}\mathbf U)dxdzdt -\int^\tau_0\int_\Omega\frac{\rho}{r}(\mathbf U-\mathbf u)\nabla_xp(r)dxdzdt\nonumber\\ &\hspace{8pt}-\int^\tau_0\int_{\Omega}P''(r)(\rho\partial_tr+\rho\mathbf u\nabla_xr)dxdzdt +\int^\tau_0\int_\Omega\partial_tp(r)dxdzdt-\int^\tau_0\int_\Omega p(\rho)\text{div}_x\mathbf Udxdzdt. \end{align}
It is easy to check that \begin{align} -\int^\tau_0&\int_\Omega\frac{\rho}{r}(\mathbf U-\mathbf u)\nabla_xp(r)+p(\rho)\text{div}_x\mathbf U+P''(r)(\rho\partial_tr+\rho\mathbf u\nabla_xr)dxdzdt +\int^\tau_0\int_\Omega\partial_tp(r)dxdzdt\nonumber\\ &=-\int^\tau_0\int_\Omega(\rho-r)P''(r)\partial_tr+P''(r)\rho\mathbf u\cdot\nabla_xr+\rho P''(r)(\mathbf U-\mathbf u)\cdot\nabla_xr+p(\rho)\text{div}_x\mathbf Udxdzdt\nonumber\\ &=-\int^\tau_0\int_\Omega(\rho-r)P''(r)\partial_tr+P''(r)\rho\mathbf U\cdot\nabla_xr+p(\rho)\text{div}_x\mathbf Udxdzdt\nonumber\\ &=-\int^\tau_0\int_\Omega\rho P''(r)(\partial_tr+\mathbf U\cdot\nabla_xr) -rP''(r)\partial_tr+p(\rho)\text{div}_x\mathbf Udxdzdt\nonumber\\ &=-\int^\tau_0\int_\Omega\rho P''(r)(-r\text{div}_x\mathbf U-r\partial_zW) -rP''(r)\partial_tr+p(\rho)\text{div}_x\mathbf Udxdzdt\nonumber\\ &=-\int^\tau_0\int_\Omega\text{div}_x\mathbf U\big{(}p(\rho)-p'(r)(\rho-r)-p(r)\big{)}dxdzdt +\int^\tau_0\int_\Omega p'(r)(\rho-r)\partial_zWdxdzdt, \end{align} where we have used the fact that $\partial_tr+\text{div}_x\mathbf Ur+\mathbf U\cdot\nabla_xr+r\partial_zW=0$.
Recalling the boundary condition $W|_{z=0,1}=0$, we have \begin{align} \int^\tau_0\int_\Omega p'(r)(\rho-r)\partial_zWdxdzdt =\int^\tau_0dt\int_{\mathbb{T}^2}(\int^1_0\partial_zWdz)p'(r)(\rho-r)dx=0. \end{align}
Moreover, we can use the method of \cite{kr}, Section 6.3, to get \begin{align} \int_\Omega&(\frac{\rho}{r}-1)(\mathbf U-\mathbf u)(\mu\Delta_x\mathbf U+\lambda\partial_{zz}\mathbf U)dxdz\nonumber\\
&\leq C\mathcal{E}(\rho,\mathbf u|r,\mathbf U)+\delta\|\nabla_x\mathbf u-\nabla_x\mathbf U\|^2_{L^2}
+\delta\|\partial_z\mathbf u-\partial_z\mathbf U\|^2_{L^2}. \end{align}
Putting $(3.10)-(3.13)$ together, choosing $\delta$ small enough to absorb the gradient terms into the left-hand side, and recalling that $\mathcal{E}(\rho,\mathbf u|r,\mathbf U)(0)=0$ because the two solutions emanate from the same initial data, we have \begin{align}
\mathcal{E}(\rho,\mathbf u|r,\mathbf U)(\tau)\leq C\int^\tau_0h(t)\mathcal{E}(\rho,\mathbf u|r,\mathbf U)(t)dt. \end{align}
Gronwall's inequality then finishes the proof of Theorem 2.1; for the reader's convenience, we spell out this last step.
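Indeed, applying Gronwall's inequality to (3.14) gives \begin{align*} \mathcal{E}(\rho,\mathbf u|r,\mathbf U)(\tau)\leq C\int^\tau_0h(t)\mathcal{E}(\rho,\mathbf u|r,\mathbf U)(t)dt,\hspace{5pt}h\in L^1(0,T) \hspace{5pt}\Longrightarrow\hspace{5pt} \mathcal{E}(\rho,\mathbf u|r,\mathbf U)(\tau)=0\hspace{3pt}\text{for a.e.}\hspace{3pt}\tau\in(0,T). \end{align*} Since the relative energy controls $\int_\Omega\rho|\mathbf u-\mathbf U|^2dxdz$ and, through the coercivity of $P(\rho)-P'(r)(\rho-r)-P(r)$ recalled above, the distance between $\rho$ and $r$, this forces $\rho=r$ and $\mathbf u=\mathbf U$ a.e. in $(0,T)\times\Omega$, which is the assertion of Theorem 2.1.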
\vskip 0.5cm
\noindent {\bf Part II: Singular limit of CPE} \vskip 0.2cm
This part is devoted to studying the singular limit of the CPE in the case of well-prepared initial data.
\section{Preliminaries and main result} As emphasized in the notable survey paper by Klein \cite{Klein}, singular limits of fluid models play an important role in mathematics, physics and meteorology. We consider the following scaled CPE system with Coriolis forces: \begin{eqnarray} \left\{ \begin{array}{llll} \partial_{t}\rho_\epsilon+\text{div}_x(\rho_\epsilon \mathbf{u}_\epsilon)+\partial_z(\rho_\epsilon w_\epsilon)=0, \\ \partial_t(\rho_\epsilon\mathbf{u}_\epsilon)+\textrm{div}_x(\rho_\epsilon\mathbf{u}_\epsilon\otimes\mathbf{u}_\epsilon) +\partial_z(\rho_\epsilon\mathbf u_\epsilon w_\epsilon) +\rho_\epsilon\mathbf u_\epsilon\times\omega+\frac{1}{\epsilon^2}\nabla_x p(\rho_\epsilon)=\mu\Delta_x\mathbf u_\epsilon+\lambda\partial^2_{zz}\mathbf u_\epsilon,\\ \partial_zp(\rho_\epsilon)=0, \end{array}\right.\label{4a} \end{eqnarray} where $\epsilon$ represents the Mach number and $\omega=(0,0,1)$ is the rotation axis. The boundary conditions and the pressure are the same as in (1.2) and (1.4). Problem \eqref{4a} is supplemented with initial data \begin{align} \rho_\epsilon (0, \cdot) = \rho_{0, \epsilon} =\overline{\rho} + \epsilon \rho^{(1)}_{0, \epsilon},\hspace{5pt} \mathbf{u}_\epsilon (0, \cdot) = \mathbf{u}_{0, \epsilon}, \end{align} where the constant $\overline{\rho}$ in (4.2) can be taken arbitrarily.
There is a quite broad consensus that compressible flows become incompressible in the low Mach number limit. In the following sections, we simply write $\rho=\rho_\epsilon$ and $\mathbf u=\mathbf u_\epsilon$. In this part, our goal is to study system \eqref{4a} in the singular limit $\epsilon\rightarrow0$, that is, the inviscid, incompressible limit. More precisely, we want to show that the weak solutions of the CPE converge to the solution of the incompressible PE system.
\subsection{Target equation}
The expected limit problem reads \begin{align} &\partial_t\mathbf{V}+(\mathbf{V}\cdot\nabla_x)\mathbf{V}+\partial_z\mathbf VW+\mathbf V^{\perp}+\nabla_x\Pi=0,\nonumber\\ &\text{div}_x\mathbf{V}+\partial_zW=0,\nonumber\\ &\partial_z\Pi=0,\label{4b} \end{align} where $\mathbf V^{\perp}=(v_2,-v_1)$ and $\Pi$ is the pressure. We supplement the system with the initial condition \begin{align*}
\mathbf V|_{t=0}=\mathbf V_0. \end{align*}
As shown by Kukavica et al. \cite{kukavica}, the problem \eqref{4b} possesses a unique local analytic solution $(\mathbf V,\Pi)$ on some time interval $(0,T)$, $T>0$, for any initial datum \begin{align} \mathbf V_0\in C^{\infty}(\Omega),\hspace{3pt}\int^1_0\text{div}\mathbf V_0dz=0. \end{align}
\subsection{Relative energy inequality} According to the previous definition, we define the relative entropy functional, \begin{align}
\mathcal{E}(\rho,\mathbf{u}|r, \mathbf{V})=\int_{\Omega}[\frac{1}{2}\rho|\mathbf u-\mathbf V|^2+\frac{1}{\epsilon^2}(P(\rho)-P'(r)(\rho-r)-P(r))]dxdz, \end{align} where $r>0$ and $\mathbf V$ are continuously differentiable test functions. The following relation can be deduced \begin{align}
\mathcal{E}&(\rho,\mathbf{u}|r,\mathbf V)|^{t=\tau}_{t=0}+\int^\tau_0\int_\Omega\big{(}\mu\nabla_x\mathbf u\cdot(\nabla_x\mathbf u-\nabla_x\mathbf V)+\lambda\partial_z\mathbf u(\partial_z\mathbf u-\partial_z\mathbf V)\big{)}dxdzdt\nonumber\\ &\leq\int^\tau_0\int_{\Omega}\rho(\mathbf V-\mathbf u)\partial_t\mathbf V+\rho\mathbf u(\mathbf V-\mathbf u)\cdot\nabla_x\mathbf V +\rho w(\mathbf V-\mathbf u)\partial_z\mathbf V-\frac{1}{\epsilon^2}p(\rho)\text{div}_x\mathbf Vdxdzdt\nonumber\\ &\hspace{15pt}-\frac{1}{\epsilon^2}\int^\tau_0\int_{\Omega}P''(r)(\rho\partial_tr+\rho\mathbf u\nabla_xr)dxdzdt +\frac{1}{\epsilon^2}\int^\tau_0\int_{\Omega}\partial_tp(r)dxdzdt\nonumber\\ &\hspace{15pt}-\int^\tau_0\int_\Omega(\rho\mathbf u\times\omega)\cdot(\mathbf V-\mathbf u)dxdzdt,\label{4c} \end{align} for any $r,\mathbf V\in C^1([0,T]\times\Omega)$ with $r>0$.
\subsection{Main result} The second result concerns the singular limit.
\begin{theorem} Let $\gamma>6$, and $(\rho,\mathbf u,w)$ be a weak solution of the scaled system \eqref{4a} on a time interval $(0,T)$ with well-prepared initial data satisfying the following assumptions \begin{align}
&\|\rho^{(1)}_{0,\epsilon}\|_{L^\infty(\Omega)}+\|\mathbf u_{0,\epsilon}\|_{L^\infty(\Omega)}\leq D,\nonumber\\ &\frac{\rho_{0,\epsilon}-\overline{\rho}}{\epsilon}\rightarrow 0\hspace{3pt}\text{in}\hspace{3pt}L^{1}(\Omega),\hspace{8pt} \mathbf{u}_{0,\epsilon}\rightarrow \mathbf{V}_0\hspace{3pt}\text{in}\hspace{3pt}L^{2}(\Omega). \end{align}
Let $\mathbf V$ be the unique analytic solution of the target problem \eqref{4b}. Suppose that $T<T_{\max}$, where $T_{\max}$ denotes the maximal life-span of the regular solution to the incompressible PE system \eqref{4b} with initial data $\mathbf V_0$, then \begin{align}
\sup_{t\in[0,T]}&\int_\Omega[\rho|\mathbf u-\mathbf V|^2+\frac{1}{\epsilon^2}(P(\rho)-P'(\overline{\rho})(\rho-\overline{\rho})-P(\overline{\rho}))]\nonumber\\
&\leq C[\epsilon+\mu+\lambda+\int_\Omega|\mathbf u_{0,\epsilon}-\mathbf V_0|^2], \end{align} where the constant $C$ depends on the initial data $\rho_{0}$, $\mathbf u_{0}$, $\mathbf V_0$ and $T$, and on the size $D$ of the initial data perturbation. The constant $\overline{\rho}$ can be taken arbitrarily. \end{theorem}
\begin{remark} Theorem 4.1 yields that $\rho_\epsilon$ and $\mathbf u_\epsilon$ converge to the solution of the target system in the regime $\epsilon\rightarrow0$ and $\mu,\lambda\rightarrow0$ for well-prepared initial data; in other words, the right-hand side of (4.8) tends to zero. \end{remark}
\subsection{Uniform bounds} Before proving Theorem 4.1, we derive uniform bounds for the weak solutions $(\rho,\mathbf u)$. Here and hereafter, $C$ denotes a positive constant, independent of $\epsilon$, whose value may change from line to line. The following uniform bounds are derived from the relative energy inequality \eqref{4c} by taking $r=\overline{\rho}$ and $\mathbf U=0$: \begin{align}
&ess\sup_{t\in(0,T)}||\frac{\rho-\overline{\rho}}{\epsilon}||_{L^2(\Omega)+L^\gamma(\Omega)}\leq C,\nonumber\\
&ess\sup_{t\in(0,T)}||\sqrt{\rho}\mathbf u||_{L^2(\Omega)}\leq C,\hspace{5pt}
\sqrt{\mu}||\nabla_x\mathbf u||_{L^2((0,T)\times\Omega)}
+\sqrt{\lambda}||\partial_z\mathbf u||_{L^2((0,T)\times\Omega)}\leq C. \end{align}
\section{Convergence of well-prepared initial data}
The proof of convergence is based on the ansatz \begin{equation} r=\overline{\rho},\hspace{5pt} \mathbf{U}=\mathbf{V}, \end{equation} in the relative energy inequality \eqref{4c}, where $\mathbf V$ is the analytic solution of the target problem \eqref{4b}. The corresponding relative energy inequality reads as: \begin{align}
\mathcal{E}&(\rho,\mathbf{u}|\overline{\rho}, \mathbf V)(\tau)+\int^\tau_0\int_\Omega\mu(\nabla_x\mathbf u-\nabla_x\mathbf V\big{)}:(\nabla_x\mathbf u -\nabla_x\mathbf V) +\lambda(\partial_z\mathbf u-\partial_z\mathbf V)^2dxdzdt\nonumber\\
&\leq\mathcal{E}(\rho,\mathbf{u}|\overline{\rho}, \mathbf V)(0) +\int^\tau_0\int_{\Omega}\rho(\mathbf V-\mathbf u)\partial_t\mathbf V+\rho\mathbf u(\mathbf V-\mathbf u)\cdot\nabla_x\mathbf V+\rho w(\mathbf V-\mathbf u)\partial_z\mathbf Vdxdzdt\nonumber\\ &\hspace{10pt}-\frac{1}{\epsilon^2}\int^\tau_0\int_\Omega p(\rho)\text{div}_x\mathbf Vdxdzdt -\int^\tau_0\int_\Omega(\rho\mathbf u\times\omega)\cdot(\mathbf V-\mathbf u)dxdzdt\nonumber\\ &\hspace{10pt}+\int^\tau_0\int_\Omega\mu\nabla_x\mathbf V(\nabla_x\mathbf u-\nabla_x\mathbf V)dxdzdt +\int^\tau_0\int_\Omega\lambda\partial_z\mathbf V(\partial_z\mathbf u-\partial_z\mathbf V)dxdzdt. \end{align}
First we deal with the initial data and the viscous terms. It is easy to compute the initial relative energy: \begin{align}
\mathcal{E} (\rho_{},\mathbf{u}_{}|\overline{\rho}, \mathbf V)|_{t=0}\leq C\int_\Omega[|\mathbf u_{0,\epsilon}-\mathbf V_0|^2+|\rho_{0,\epsilon}-\overline{\rho}|^2]dx, \end{align} and viscous term \begin{align} &\mu\int^\tau_0\int_\Omega\nabla_x\mathbf V(\nabla_x\mathbf u-\nabla_x\mathbf V)dxdzdt\leq
\int^\tau_0\frac{\mu}{2}\|\nabla_x\mathbf u-\nabla_x\mathbf V\|^2_{L^2(\Omega)}
+\frac{\mu}{2}\|\nabla_x\mathbf V\|^2_{L^2(\Omega)}dt,\nonumber\\ &\lambda\int^\tau_0\int_\Omega\partial_z\mathbf V(\partial_z\mathbf u-\partial_z\mathbf V)dxdzdt\leq
\int^\tau_0\frac{\lambda}{2}\|\partial_z\mathbf u-\partial_z\mathbf V\|^2_{L^2(\Omega)}
+\frac{\lambda}{2}\|\partial_z\mathbf V\|^2_{L^2(\Omega)}dt. \end{align}
Next, we consider the remaining terms. Utilizing $(4.3)_1$, we get that \begin{align} \int^\tau_0\int_{\Omega}\rho&(\mathbf V-\mathbf u)\partial_t\mathbf V+\rho\mathbf u(\mathbf V-\mathbf u)\cdot\nabla_x\mathbf V+\rho w(\mathbf V-\mathbf u)\partial_z\mathbf Vdxdzdt\nonumber\\ &=\int^\tau_0\int_{\Omega}\rho(\mathbf V-\mathbf u)(\partial_t\mathbf V+(\mathbf V\cdot\nabla_x)\mathbf V+W\partial_z\mathbf V)+\rho(\mathbf u-\mathbf V)(\mathbf V-\mathbf u)\nabla_x\mathbf V\nonumber\\ &\hspace{30pt}+\rho(\mathbf V-\mathbf u)(w-W)\partial_z\mathbf Vdxdzdt\nonumber\\ &=\int^\tau_0\int_{\Omega}\rho(\mathbf u-\mathbf V)(\nabla_x\Pi+\mathbf V^\bot)+\rho(\mathbf u-\mathbf V)(\mathbf V-\mathbf u)\nabla_x\mathbf V +\rho(\mathbf V-\mathbf u)(w-W)\partial_z\mathbf Vdxdzdt. \end{align}
It is easy to check that \begin{align*}
\int^\tau_0\int_\Omega\rho(\mathbf u-\mathbf V)(\mathbf V-\mathbf u)\nabla_x\mathbf Vdxdzdt\leq C\int^\tau_0\mathcal{E}(\rho,\mathbf{u}|\overline{\rho}, \mathbf V)(t)dt. \end{align*}
Next, we estimate the term $\int^\tau_0\int_{\Omega}\rho\mathbf V\cdot\nabla_x\Pi dxdzdt$, which we rewrite in the form \begin{align} \int^\tau_0\int_{\Omega}\rho\mathbf V\cdot\nabla_x\Pi dxdzdt= \epsilon\int^\tau_0\int_{\Omega}\frac{\rho-\overline{\rho}}{\epsilon}\mathbf V\cdot\nabla_x\Pi dxdzdt+ \overline{\rho}\int^\tau_0\int_{\Omega}\mathbf V\cdot\nabla_x\Pi dxdzdt. \end{align}
The second term on the right side of (5.6) is estimated as: \begin{align*} \int^\tau_0\int_{\Omega}\mathbf V\nabla_x\Pi dxdzdt =-\int^\tau_0\int_{\Omega}\text{div}_x\mathbf V\Pi dxdzdt =\int^\tau_0\int_{\Omega}\partial_zW\Pi dxdzdt=0, \end{align*} where we have used the fact that $\Pi$ is independent of $z$. We deduce from the energy inequality that \begin{align} \int_{\Omega}\frac{1}{\epsilon^2}(P(\rho)-P'(r)(\rho-r)-P(r))dxdz\leq C, \hspace{5pt}\text{uniformly as} \hspace{3pt}\epsilon\rightarrow0. \end{align}
Similar to the previous analysis, it is enough to establish the uniform bound \begin{align*} \int_\Omega\Big|\frac{\rho-\overline{\rho}}{\epsilon}\Big|dxdz\leq C. \end{align*}
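A sketch of this bound, combining the coercivity inequality for $P(\rho)-P'(\overline{\rho})(\rho-\overline{\rho})-P(\overline{\rho})$ recalled in Section 3 (applied with $r=\overline{\rho}$) and (5.7): \begin{align*} \int_\Omega\Big|\frac{\rho-\overline{\rho}}{\epsilon}\Big|dxdz \leq |\Omega|^{\frac{1}{2}}\Big(\int_{\{\frac{\overline{\rho}}{2}<\rho<\overline{\rho}\}}\frac{|\rho-\overline{\rho}|^2}{\epsilon^2}dxdz\Big)^{\frac{1}{2}} +\frac{C}{\epsilon}\int_{\{\rho\leq\frac{\overline{\rho}}{2}\}\cup\{\rho\geq\overline{\rho}\}}(1+\rho^\gamma)dxdz \leq C+C\epsilon\leq C, \end{align*} where we used $|\rho-\overline{\rho}|\leq C(1+\rho^\gamma)$ on the second set, together with the fact that, by the coercivity inequality and (5.7), both $\int_{\{\frac{\overline{\rho}}{2}<\rho<\overline{\rho}\}}|\rho-\overline{\rho}|^2dxdz$ and $\int_{\{\rho\leq\frac{\overline{\rho}}{2}\}\cup\{\rho\geq\overline{\rho}\}}(1+\rho^\gamma)dxdz$ are bounded by $C\epsilon^2$.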
Since the pressure $\Pi$ is analytic, and hence $\mathbf V\cdot\nabla_x\Pi$ is bounded, the first integral on the right-hand side of (5.6) is of order $\epsilon$ and vanishes as $\epsilon\rightarrow0$.
From the previous definition of dissipative weak solutions, we choose $\Pi$ as the test function, so that \begin{align*} \int^\tau_0\int_{\Omega}&\rho\mathbf u\nabla_x\Pi dxdzdt\\
&=[\int_\Omega\rho\Pi dxdz]|^{t=\tau}_{t=0}-\int^\tau_0\int_\Omega\rho\partial_t\Pi dxdzdt -\int^\tau_0\int_\Omega\rho w\partial_z\Pi dxdzdt\\
&=\epsilon[\int_\Omega\frac{\rho-\overline{\rho}}{\epsilon}\Pi dxdz]|^{t=\tau}_{t=0} -\epsilon\int^\tau_0\int_\Omega\frac{\rho-\overline{\rho}}{\epsilon}\partial_t\Pi dxdzdt\rightarrow0, \hspace{3pt}\text{as}\hspace{3pt}\epsilon\rightarrow0. \end{align*}
Compared with the Navier-Stokes equations, the pressure term in the PE system is easy to estimate. By virtue of the incompressibility condition $(4.3)_2$, the fact that $p(\rho)$ is independent of $z$ (see $(4.1)_3$), and the boundary condition $W|_{z=0,1}=0$, we have \begin{align*} -\frac{1}{\epsilon^2}\int^\tau_0\int_\Omega p(\rho)\text{div}_x\mathbf Vdxdzdt=\frac{1}{\epsilon^2}\int^\tau_0\int_\Omega p(\rho)\partial_zWdxdzdt=0. \end{align*}
Moreover, we find that \begin{align*} \int_{\Omega}\rho(\mathbf u-\mathbf V)\cdot\mathbf V^\bot dxdz+\int_\Omega(\rho\mathbf u\times\omega)\cdot(\mathbf V-\mathbf u)dxdz=0. \end{align*}
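This cancellation can be checked directly: with $\omega=(0,0,1)$ and the convention $\mathbf a^{\perp}=(a_2,-a_1)$, the horizontal part of $\mathbf u\times\omega$ is exactly $\mathbf u^{\perp}$, so \begin{align*} \int_{\Omega}\rho(\mathbf u-\mathbf V)\cdot\mathbf V^\perp dxdz+\int_\Omega\rho\,\mathbf u^{\perp}\cdot(\mathbf V-\mathbf u)dxdz =\int_\Omega\rho(\mathbf V-\mathbf u)\cdot(\mathbf u-\mathbf V)^{\perp}dxdz=0, \end{align*} since $\mathbf a\cdot\mathbf a^{\perp}=0$ pointwise.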
Now, utilizing \eqref{b1}, we deal with the complicated nonlinear term \begin{align*} \int^\tau_0\int_{\Omega}&\rho(\mathbf V-\mathbf u)(w-W)\partial_z\mathbf V dxdzdt\\ &=\int^\tau_0\int_{\Omega}(\mathbf V-\mathbf u)\partial_z\mathbf V\big{(} -\text{div}_x(\rho\widetilde{\mathbf u})+z\text{div}_x(\rho\overline{\mathbf u}) -\rho W \big{)}dxdzdt. \end{align*}
These nonlinear terms are estimated one by one \begin{align} -\int_{\Omega}(\mathbf V-\mathbf u)\partial_z\mathbf V \text{div}_x(\rho\widetilde{\mathbf u})dxdz =\int_{\Omega}\rho\widetilde{\mathbf u}(\nabla_x\mathbf V-\nabla_x\mathbf u)\cdot\partial_z\mathbf Vdxdz +\int_{\Omega}\rho\widetilde{\mathbf u}(\mathbf V-\mathbf u)\cdot\partial_z\nabla_x\mathbf Vdxdz.\label{55a} \end{align}
From the incompressible condition, it follows that $W=-\int^z_0\text{div}_x\mathbf V(x,s,t)ds$. We define $\widetilde{\mathbf V}=\int^z_0\mathbf V(x,s,t)ds$ and get \begin{align} \int_{\Omega}&\rho\widetilde{\mathbf u}(\nabla_x\mathbf V-\nabla_x\mathbf u)\cdot\partial_z\mathbf Vdxdz\nonumber\\ &=\int_{\Omega}\chi_{\rho\leq\overline{\rho}}\rho\widetilde{\mathbf u}(\nabla_x\mathbf V-\nabla_x\mathbf u)\partial_z\mathbf Vdxdz +\int_{\Omega}\chi_{\rho\geq\overline{\rho}}\rho\widetilde{\mathbf u}(\nabla_x\mathbf V-\nabla_x\mathbf u)\partial_z\mathbf Vdxdz\nonumber\\ &=\int_{\Omega}\chi_{\rho\leq\overline{\rho}}\rho(\widetilde{\mathbf u}-\widetilde{\mathbf V})(\nabla_x\mathbf V-\nabla_x\mathbf u)\partial_z\mathbf Vdxdz +\int_{\Omega}\chi_{\rho\leq\overline{\rho}}\rho\widetilde{\mathbf V}(\nabla_x\mathbf V-\nabla_x\mathbf u)\partial_z\mathbf Vdxdz\nonumber\\ &\hspace{5pt}+\int_{\Omega}\chi_{\rho\geq\overline{\rho}}\rho\widetilde{\mathbf u}(\nabla_x\mathbf V-\nabla_x\mathbf u)\partial_z\mathbf Vdxdz\label{5a} \end{align}
The first two terms on the right-hand side of \eqref{5a} can be handled as in \eqref{2a}: \begin{align} \int_{\Omega}&\chi_{\rho\leq\overline{\rho}}\rho(\widetilde{\mathbf u}-\widetilde{\mathbf V})(\nabla_x\mathbf V-\nabla_x\mathbf u)\partial_z\mathbf Vdxdz +\int_{\Omega}\chi_{\rho\leq\overline{\rho}}\rho\widetilde{\mathbf V}(\nabla_x\mathbf V-\nabla_x\mathbf u)\partial_z\mathbf Vdxdz\nonumber\\ &=\int_{\Omega}\chi_{\rho\leq\overline{\rho}}\rho(\widetilde{\mathbf u}-\widetilde{\mathbf V})(\nabla_x\mathbf V-\nabla_x\mathbf u)\partial_z\mathbf Vdxdz +\int_{\Omega}\chi_{\frac{\overline{\rho}}{2}<\rho\leq\overline{\rho}}\rho\widetilde{\mathbf V}(\nabla_x\mathbf V-\nabla_x\mathbf u)\partial_z\mathbf Vdxdz\nonumber\\ &\hspace{5pt}+\int_{\Omega}\chi_{\rho\leq\frac{\overline{\rho}}{2}}\rho\widetilde{\mathbf V}(\nabla_x\mathbf V-\nabla_x\mathbf u)\partial_z\mathbf Vdxdz\nonumber\\
&\leq\delta\|\nabla_x\mathbf V-\nabla_x\mathbf u\|_{L^2(\Omega)}^2
+C\mathcal{E}(\rho,\mathbf{u}|\overline{\rho}, \mathbf V)(t). \end{align}
On the other hand, following \eqref{333}, we have \begin{align} \int^\tau_0\int_{\Omega}&\chi_{\rho\geq\overline{\rho}}\rho\widetilde{\mathbf u}(\nabla_x\mathbf V-\nabla_x\mathbf u)\partial_z\mathbf Vdxdz\nonumber\\ &=\int^\tau_0\int_{\Omega}\chi_{\rho\geq\overline{\rho}}\rho(\widetilde{\mathbf u}-\widetilde{\mathbf V})(\nabla_x\mathbf V-\nabla_x\mathbf u)\partial_z\mathbf Vdxdz +\int^\tau_0\int_{\Omega}\chi_{\rho\geq\overline{\rho}}\rho\widetilde{\mathbf V}(\nabla_x\mathbf V-\nabla_x\mathbf u)\partial_z\mathbf Vdxdz\nonumber\\
&\leq C\int^\tau_0h(t)\mathcal{E}(\rho,\mathbf u|\overline{\rho},\mathbf V)(t)dt
+\delta\int^\tau_0\|\nabla_x\mathbf V-\nabla_x\mathbf u\|^2_{L^2(\Omega)}
+\|\partial_z\mathbf V-\partial_z\mathbf u\|^2_{L^2(\Omega)}dt. \end{align}
Similarly, the second nonlinear term on the right side of \eqref{55a} is divided into two parts: \begin{align} \int_{\Omega}\rho\widetilde{\mathbf u}(\mathbf V-\mathbf u)\cdot\partial_z\nabla_x\mathbf Vdxdz &=\int_{\Omega}\rho(\widetilde{\mathbf u}-\widetilde{\mathbf V})(\mathbf V-\mathbf u)\partial_z\nabla_x\mathbf Vdxdz +\int_{\Omega}\rho\widetilde{\mathbf V}(\mathbf V-\mathbf u)\partial_z\nabla_x\mathbf Vdxdz.\nonumber \end{align}
Utilizing estimates similar to (3.6), we have \begin{align} \int_{\Omega}\rho(\widetilde{\mathbf u}-\widetilde{\mathbf V})(\mathbf V-\mathbf u)\partial_z\nabla_x\mathbf Vdxdz
\leq C\mathcal{E}(\rho,\mathbf{u}|\overline{\rho}, \mathbf V)(t). \end{align}
Moreover, similar to \eqref{33c}, we get \begin{align} \int_{\Omega}&\rho\widetilde{\mathbf V}(\mathbf V-\mathbf u)\partial_z\nabla_x\mathbf Vdxdz\nonumber\\ &=\int_{\Omega}\chi_{\rho\leq\frac{\overline{\rho}}{2}}\rho\widetilde{\mathbf V}(\mathbf V-\mathbf u)\partial_z\nabla_x\mathbf Vdxdz +\int_{\Omega}\chi_{\frac{\overline{\rho}}{2}<\rho<\overline{\rho}}\rho\widetilde{\mathbf V}(\mathbf V-\mathbf u)\partial_z\nabla_x\mathbf Vdxdz\nonumber\\ &\hspace{5pt}+\int_{\Omega}\chi_{\rho\geq\overline{\rho}}\rho\widetilde{\mathbf V}(\mathbf V-\mathbf u)\partial_z\nabla_x\mathbf Vdxdz\nonumber\\
&\leq \delta\|\nabla_x\mathbf V-\nabla_x\mathbf u\|_{L^2(\Omega)}^2
+\delta\|\partial_z\mathbf V-\partial_z\mathbf u\|_{L^2(\Omega)}^2
+C\mathcal{E}(\rho,\mathbf{u}|\overline{\rho}, \mathbf V)(t), \end{align} and \begin{align} \int_{\Omega}(\mathbf V-\mathbf u)\partial_z\mathbf Vz\text{div}_x(\rho\overline{\mathbf u})dxdz
\leq \delta\|\nabla_x\mathbf V-\nabla_x\mathbf u\|_{L^2(\Omega)}^2
+\delta\|\partial_z\mathbf V-\partial_z\mathbf u\|_{L^2(\Omega)}^2
+C\mathcal{E}(\rho,\mathbf{u}|\overline{\rho}, \mathbf V)(t). \end{align}
The last term can be estimated as \begin{align*} \int_{\Omega}&\rho(\mathbf V-\mathbf u)\partial_z\mathbf VWdxdz\\ &=\int_\Omega\chi_{\rho\leq\frac{\overline{\rho}}{2}}\rho(\mathbf V-\mathbf u)\partial_z\mathbf VWdxdz+ \int_\Omega\chi_{\rho\geq\overline{\rho}}\rho(\mathbf V-\mathbf u)\partial_z\mathbf VWdxdz\\ &\hspace{8pt}+\int_\Omega\chi_{\frac{\overline{\rho}}{2}<\rho<\overline{\rho}}\rho(\mathbf V-\mathbf u)\partial_z\mathbf VWdxdz\\
& \leq\delta\|\nabla_x\mathbf V-\nabla_x\mathbf u\|_{L^2(\Omega)}^2
+\delta\|\partial_z\mathbf V-\partial_z\mathbf u\|_{L^2(\Omega)}^2
+C\mathcal{E}(\rho,\mathbf{u}|\overline{\rho}, \mathbf V)(t). \end{align*}
Combining the above estimates and choosing $\delta$ small enough to absorb the gradient terms into the left-hand side of (5.2), we conclude by Gronwall's inequality.
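Schematically (a sketch of this final step), the combined estimate reads \begin{align*} \mathcal{E}(\rho,\mathbf{u}|\overline{\rho},\mathbf V)(\tau)\leq \mathcal{E}(\rho,\mathbf{u}|\overline{\rho},\mathbf V)(0)+C(\epsilon+\mu+\lambda) +C\int^\tau_0h(t)\mathcal{E}(\rho,\mathbf{u}|\overline{\rho},\mathbf V)(t)dt, \end{align*} so that Gronwall's inequality, together with the bound (5.3) on the initial relative energy, yields the estimate (4.8). This finishes the proof of Theorem 4.1.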
\vskip 0.5cm \noindent {\bf Acknowledgements}
\vskip 0.1cm We are very much indebted to an anonymous referee for many helpful suggestions. The research of H.G. is partially supported by the NSFC Grant No. 11531006. The research of \v S.N. leading to these results has received funding from the Czech Sciences Foundation (GA\v CR), GA19-04243S, and RVO 67985840. The research of T.T. is supported by the NSFC Grant No. 11801138. The paper was written while Tong Tang was visiting the Institute of Mathematics of the Czech Academy of Sciences, whose hospitality and support are gladly acknowledged.
\end{document} |
\begin{document}
\title{Auxiliary-cavity-assisted ground-state cooling of optically levitated nanosphere in the unresolved-sideband regime}
\author{Jin-Shan Feng} \affiliation{Institute of Theoretical Physics, Lanzhou University, Lanzhou $730000$, China}
\author{Lei Tan} \email{[email protected]} \affiliation{Institute of Theoretical Physics, Lanzhou University, Lanzhou $730000$, China}
\author{Huai-Qiang Gu} \affiliation{School of Nuclear Science and Technology, Lanzhou University, Lanzhou $730000$, China}
\author{Wu-Ming Liu} \affiliation{Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190,China}
\date{\today} \begin{abstract} We theoretically analyse the ground-state cooling of an optically levitated nanosphere in the unresolved-sideband regime by introducing a coupled high-quality-factor cavity. On account of the quantum interference stemming from the presence of the coupled cavity, the spectral density of the optical force exerted on the nanosphere is modified, and the symmetry between the heating and cooling processes is broken. By adjusting the detuning of the strongly dissipative cavity mode, one obtains an enhanced net cooling rate for the nanosphere. It is illustrated that ground-state cooling can be realized in the unresolved-sideband regime even if the effective optomechanical coupling is weaker than the frequency of the nanosphere, which can be understood by the picture that the effective interplay of the nanosphere and the auxiliary cavity mode brings the system back to an effectively resolved regime. Besides, the coupled cavity improves the dynamical stability of the system. \end{abstract}
\pacs{42.50.Wk,42.50.Pq,07.10.Cm,42.50.Lc}
\maketitle
\section{Introduction}
Cavity optomechanics, providing an effective coupling between light and mesoscale matter, has been of interest in theoretical and experimental investigations\cite{Wilson,Marquardt,Kippenberg,Kipp,F.Marquardt,Aspe}. As an implementation of cavity optomechanics, an optically levitated nanosphere\cite{Zoller,Isart,Monteiro,Yin,Neukirch} in a cavity is an important platform for realizing quantum behavior at the macroscale and exploring new applications of this field. The lack of mechanical support in such a levitated system leads to a high mechanical quality factor and a long coherence time. These benefits give the system prominent advantages in ultrasensitive measurement\cite{Vernooy,Libbrecht,Geraci,Arvanitaki,LiTZ2,Millen,Moore,Ranjit,LIJ,Ranjit1,Rider,LiuJ,Aranas} and in tests of fundamental theories, including nonlinear effects\cite{Xuereb,Gieseler0,Gieseler,Genoni,Ge,Fonseca,Rashid,XIAO}, nonequilibrium physics\cite{Gieseler15,Gieseler1}, macroscopic quantum behavior\cite{Isart1,Isart2,Bateman,Zhang,YIN}, and so on\cite{LITZ1,Nie3,LiTZ3,Nie2,Abdi,Goldwater,Zabolotskii,Honang,LIU,Minowa}. Although remarkable advances have been seen for the levitated nanosphere system, many related studies and highly sensitive measurements are still limited by thermal noise. It is therefore a prerequisite for such work to cool the nanosphere\cite{Barker,TZLi,Pender,Nie,Arita,Mestres,Millen1,Rodenburg,Frimmer,Jain}, as for a micromechanical resonator, all the way to the quantum ground state.
Cooling utilizing the radiation pressure on a levitated nanosphere in a cavity\cite{Yin1,Gieseler2,Asenbaum,Kiesel} is based on the principle that the scattering process related to cooling (the anti-Stokes process) can be enhanced by choosing an appropriate detuning between the driving field and the cavity mode\cite{Teufel}. This requires the levitated nanosphere system to be in the ``resolved-sideband'' regime, where the cavity linewidth should be smaller than the mechanical oscillator resonance frequency. Such a requirement is stringent for the levitated nanosphere system, which is characterized by a low oscillation frequency ($<1$ MHz) and a large cavity decay. On the other hand, a low-frequency nanosphere has a large zero-point motion, and methods for cooling such a nanosphere to the quantum regime are beneficial to new technical applications as well as fundamental studies. Therefore, extending the cooling domain to the unresolved-sideband regime has a favorable prospect for cavity optomechanics. There have been several specific proposals to achieve ground-state cooling of cavity optomechanical systems in the unresolved-sideband regime\cite{LiuYC}, such as the dissipative coupling mechanism\cite{Elste,LiM,Xuereb1,Weiss,Yan}, parameter modulations\cite{LiY,Liao,Wang,Machnes}, hybrid system approaches\cite{Genes,Hammerer,Purdy,Paternostro,GENES,Camerer,Vogell,Gu,Bariani,Dantan,Bennett,Restrepo,Ojanen,Guo,Ranjit2,Chen,Nie1,Sarma,Zhou}, etc.\cite{LIUYC,Asjad,Yasir}. However, these proposals are still hard to realize. A pragmatic scenario for relaxing the resolved-sideband limitation is to enhance the effective optomechanical response of the nanosphere by coupling the cavity to an auxiliary quantum system that can be easily prepared in experiments\cite{Liu}.
In this article, we couple the strongly damped optomechanical cavity of the levitated nanosphere system to an additional high-Q cavity. Even with a large decay rate of the optomechanical cavity, the cooling remains efficient owing to the interaction between the high-Q cavity and the optomechanical cavity. Since there is no direct coupling between the auxiliary cavity and the nanosphere, the optical and mechanical parameters can be optimized individually. We show that destructive quantum interference in the optical force spectrum changes the symmetry between the cooling and heating processes of the nanosphere. One can obtain a high cooling rate at a large optomechanical cavity decay rate by tuning the optical parameters of the two cavities. The cooling rate for the coupled-cavity system exhibits two unusual features: in contrast to the red detuning required for optimum cooling in the single-cavity system, the maximum cooling rate in the coupled-cavity system corresponds to blue detuning; and while a large cooling rate in the single-cavity system is available only at a low optomechanical cavity decay rate, it is achievable for both small and large optomechanical cavity damping in the coupled-cavity system. For our model, ground-state cooling can be achieved under the condition that the effective coupling between the cooling field and the nanosphere is weaker than the frequency of the nanosphere. To understand this result and explain the unusual features of the cooling process, we derive an effective indirect coupling between the auxiliary cavity and the nanosphere, which also improves the dynamical stability of the system compared with the case without the auxiliary cavity.
The paper is organized as follows. In Sec. \ref{1} we describe the Hamiltonian of the system, and in Sec. \ref{2} we derive the quantum Langevin equations for the system operators. Section \ref{3} is devoted to the analysis of the cooling of the nanosphere. The coherent coupling and the dynamical stability of the system are discussed in Sec. \ref{4}, followed by the conclusion of our work in Sec. \ref{5}.
\section{Model Hamiltonian}\label{1} \begin{figure}
\caption{(Color online) Hybrid optomechanical setup containing two coupled optical cavities. The first cavity, with an optically levitated nanosphere, has a low Q factor, while the second one has a high Q factor and does not interact with the levitated nanosphere. These two cavities are coupled by photon tunneling.}
\label{Fig:1}
\end{figure} The system we consider consists of two coupled cavities, as shown in Fig. \ref{Fig:1}. The first one provides a simple optomechanical system, whose mechanical part is formed by an optically levitated nanosphere\cite{Zoller}. The dielectric nanosphere is manipulated by two spatial modes of this cavity. It is confined in an optical dipole trap \cite{Grimm} provided by one mode (denoted mode 1), which is driven resonantly. The other mode (denoted mode 2), driven by a weaker beam, provides the radiation pressure for cooling the motion of the nanosphere. The second cavity supports an auxiliary field (denoted mode 3), which does not interact with the nanosphere directly. The coupling between the two cavities is realized by photon hopping through their joint mirror\cite{Grudinin,Grudinin1,Zheng,Peng,Xu,Xiao,Sato,Cho,LiBB}. The joint mirror does not contribute to the decay of either cavity; the dissipative nature of the cavities is determined by the mirrors on the two sides, through which each cavity is driven by its corresponding pump laser. In what follows, we refer to them as the optomechanical cavity and the auxiliary cavity, respectively. The Hamiltonian of the system is given in a rotating frame (with $\hbar=1$) by\cite{Zoller,Liu} \begin{eqnarray}\label{Hamiltonian} \hat H &=& - {\Delta _1}\hat a_1^ \dag {{\hat a}_1} - {\Delta_2}\hat a_2^\dag {{\hat a}_2} - {\Delta_3}\hat a_3^ \dag {{\hat a}_3}+ \frac{{\hat p^2}}{{2m}} \nonumber \\ &&-g_1\hat a_1^ \dag {{\hat a}_1}({\cos2k_1}{\hat x}-1) - g_2\hat a_2^ \dag {{\hat a}_2}{\cos 2(k_2{\hat x}-\frac{\pi}{4})} + J{\hat a_2^ \dag {{\hat a}_3}} + J^\ast{\hat a_3^ \dag {{\hat a}_2}} \nonumber \\
&& +({E_1^\ast}\hat a_1 + {E_1}{\hat a_1^\dag}) + ({E_2^\ast}\hat a_2 + {E_2}{\hat a_2^\dag})+({E_3^\ast}\hat a_3 + {E_3}{\hat a_3^\dag}). \end{eqnarray} The first line represents the free Hamiltonian of the system, where $\Delta_1 = \omega_1- \omega_o $, $\Delta_2 = \omega_2- \omega_o$ and $\Delta_3= \omega_3- \omega_a$ are the detunings between the driving field and cavity mode frequencies. $\omega_{i~(i=1,2,3)}$, $\omega_o$, and $\omega_a$ correspond to the pump fields, optomechanical cavity mode and auxiliary cavity mode frequencies, respectively. $\hat a_{i~(i=1,2,3)}$ is the annihilation operator for the corresponding cavity mode, $\hat p$ is the momentum operator of the center-of-mass of the nanosphere, and $m$ is the mass of the nanosphere.
The interactions are described by the second line. The first two terms, $g_1\hat a_1^ \dag {{\hat a}_1}({\cos2k_1}{\hat x}-1)$ and $g_2\hat a_2^ \dag {{\hat a}_2}{\cos 2(k_2{\hat x}-\frac{\pi}{4})}$, correspond to the optomechanical coupling of the optical modes $\hat a_{1,2}$ with the nanosphere. $g_{i~(i=1,2)} = \frac{3V}{4V_{c,i}} \frac{\epsilon - 1}{\epsilon + 2} \omega_i$ quantifies the optomechanical interaction strength, where $V$ and $V_{c,i}$ are the volumes of the nanosphere and of the corresponding optical modes, $\epsilon$ is the dielectric constant of the nanosphere, and $\hat x$ is the position operator of the nanosphere\cite{Zoller}. The two remaining terms, $J{\hat a_2^ \dag {{\hat a}_3}}$ and $ J^\ast{\hat a_3^ \dag {{\hat a}_2}}$, stand for the interplay between the optomechanical and auxiliary cavity modes. The tunnel-coupling strength of the cavities is characterized by the parameter $J$. This parameter is difficult to define precisely because of detailed technical factors, such as the material properties of the joint mirror, the mode matching, etc. Phenomenologically, we neglect the losses in the hopping and assume that the mode matching is perfect. Applying the input-output relations, we set $J = \sqrt{\kappa_{2} \kappa_{3}}$\cite{Bariani,Gardiner}.
The last line accounts for the optical driving, with $E_1 = \sqrt{\kappa_1^{ex}P_1/\hbar\omega_1}e^{i\phi_1}$, $E_2 = \sqrt{\kappa_2^{ex}P_2/\hbar\omega_2}e^{i\phi_2}$ and $E_3 = \sqrt{\kappa_3^{ex}P_3/\hbar\omega_3}e^{i\phi_3}$ the amplitudes of pump lasers, $P_{i~(i=1,2,3)}$ the input powers, $\kappa_{i~(i=1,2,3)}^{ex}$ the decay rates of the photons into the associated outgoing mode and $\phi_{i~(i = 1,2,3)}$ the initial phases for the input lasers\cite{Liu}.
Based on the fact that $\omega_1,\omega_2 \gg |\omega_1 - \omega_2|$, we assume that modes $1$ and $2$ have similar properties, so that $\omega_1 \approx \omega_2 = \omega, k_1 \approx k_2 = k, g_1 \approx g_2= g, \kappa_1 = \kappa_2 = \kappa$, for simplicity.
\section{Heisenberg Equations of Motion and Linearization}\label{2}
From the Hamiltonian given by Eq. (\ref{Hamiltonian}), we obtain the Heisenberg-Langevin equations of the system operators: \begin{eqnarray} &&\dot{\hat{a}}_1 =( i{\Delta _1}-\frac{\kappa}{2}){\hat a}_1 - iE_1 + \sqrt{\kappa}{\hat a_{in,1}},\nonumber\\ &&\dot{\hat{a}}_2 =[ i({\Delta _2} + 2gk\hat x) - \frac{\kappa}{2}]{{\hat a}_2} - iJ{\hat a_3} - iE_2 + \sqrt{\kappa}{\hat a_{in,2}}, \nonumber \\ &&\dot{\hat{a}}_3 =( i{\Delta _3}-\frac{\kappa_3}{2}){\hat a}_3 - i{J^\ast}{\hat a_2} - iE_3 + \sqrt{\kappa_3}{\hat a_{in,3}}, \nonumber \\ &&\dot{\hat{p}} = - 4g{k^2}{\hat a_1^ \dag {{\hat a}_1}}{\hat x} + 2gk\hat a_2^\dag {{\hat a}_2} - \frac{\gamma}{2}\hat p + \hat F_p(t),\nonumber \\ &&\dot{\hat{x}} = \frac{\hat p}{m}, \end{eqnarray} where $\kappa$ and $\kappa_3$ are the cavity mode loss of optomechanical and auxiliary cavities, respectively. $\gamma$ is the dissipation rate of the nanosphere motion. $\hat a_{in, 1}$, $\hat a_{in, 2}$, and $\hat a_{in, 3}$ are the input vacuum noise operators, which have zero mean values and obey the nonzero correlation functions given $\langle\hat a_{in, 1}(t)\hat a^\dag_{in, 1}(t')\rangle = \langle\hat a_{in, 2}(t)\hat a^\dag_{in, 2}(t')\rangle = \langle\hat a_{in, 3}(t)\hat a^\dag_{in, 3}(t')\rangle = \delta( t - t')$, $\langle\hat a^\dag_{in, 1}(t)\hat a_{in, 1}(t')\rangle = \langle\hat a^\dag_{in, 2}(t)\hat a_{in, 2}(t')\rangle = \langle\hat a^\dag_{in, 3}(t)\hat a_{in, 3}(t')\rangle = 0$\cite{Gardiner}. $\hat F_p(t)$ is the damping force acting on the sphere with zero mean value, which obeys the following correlation function\cite{Giovannetti}: \begin{equation}
\langle\hat F_p(t)\hat F_p(t')\rangle = \frac{\hbar \gamma m}{2\pi}\int d\omega e^{-i\omega(t-t')} \omega \left[1 + \coth(\frac{\hbar\omega}{2k_BT})\right]. \end{equation} Here $k_B$ is the Boltzmann constant and $T$ is the thermal bath temperature related to the nanosphere.
Under the condition of strong driving, we can linearize the Eqs. (3.1) around the steady-state mean values by using the transformation $\hat{a}_1 \rightarrow \alpha_1 + a_1 $, $\hat{a}_2 \rightarrow \alpha_2 + a_2 $, $\hat{a}_3 \rightarrow \alpha_3 + a_3 $, $\hat{x} \rightarrow x_0 + x$, where $\alpha_1, \alpha_2, \alpha_3$ and $x_0$ are the mean values of the operators, and $a_1, a_2, a_3$ and $x$ are the small fluctuating terms. After segregating the mean values and the fluctuating terms, we obtain the equations for the steady-state expectation values of the nanosphere and cavity field
\begin{eqnarray}
&&0 = -\frac {\kappa}{2}{\alpha _1}- iE_1, \\
&&0 =[ i({\Delta _2} + 2gk{x_0}) - \frac{\kappa}{2}]{\alpha _2} - iJ{\alpha _3} - iE_2 ,\\
&&0 =( i{\Delta _3} - \frac{\kappa_3}{2}){\alpha _3} - i{J^\ast}{\alpha _2} - iE_3 , \\
&&0 = -4g{k^2}{\left| {{\alpha _1}} \right|^2}x_0 + 2gk{\left| {{\alpha _2}} \right|^2},\\
&&0 = p_0.
\end{eqnarray} By neglecting the higher-order terms and choosing $\Delta_1 =0$, the linear quantum Langevin equations read
\begin{eqnarray}
{\dot a_1} &=& -i4g{k^2}{x_0}{\alpha _1}x - \frac{\kappa}{2}{a_1} + \sqrt{\kappa}{\hat a_{in,1}} , \\
{\dot a_2} &=& [ i({\Delta _2} + 2gk{x_0}) - \frac{\kappa}{2}]{a _2} + 2ig{\alpha _2}kx - iJ{a_3} + \sqrt{\kappa}{\hat a_{in,2}} , \\
{\dot a_3} &=& ( i{\Delta _3} - \frac{\kappa_3}{2}){a _3} - i{J^\ast}{a_2} + \sqrt{\kappa_3}{\hat a_{in,3}} , \\
{\dot p} &=& - 4g{k^2}{\left| {{\alpha _1}} \right|^2}x - \frac{\gamma}{2} p + \hat F_p(t)
+ 2gk[{\alpha_2}{a_2^\dag} + {\alpha_2^\ast}{a_2} - 2k{x_0}({\alpha_1}{a_1^\dag} + {\alpha_1^\ast}{a_1})],\\
{\dot x} &=& \frac{p}{m}.
\end{eqnarray}
From Eq. (3.11), we note that cavity mode 1 provides a linear restoring force $dp/dt \sim - 4g{k^2}{\left| {{\alpha _1}} \right|^2}x = -m\omega^2_mx$, where $\omega_m=2k|\alpha_1|\sqrt{g/m}$ is the harmonic oscillator frequency of the nanosphere. The corresponding linearized system Hamiltonian is written as \begin{eqnarray}\label{LHamiltonian}
H &=& - {\Delta_1} a_1^ \dag {a_1} - {\Delta'_2} a_2^\dag {a_2} - {\Delta_3} a_3^ \dag {a_3} + \frac{p^2}{2m} + 4gk^2|\alpha_1|^2x^2 \nonumber \\ &&-(\Omega_ma_2^\dag +\Omega_m^\ast a_2)(b^\dag + b) + J{ a_2^ \dag { a_3}} + J^\ast{a_3^ \dag {a_2}}, \end{eqnarray} where $\Delta'_2 = \Delta _2 + 2gkx_0$ is the detuning relative to the new resonance frequency of the optomechanical cavity, and $b = \frac{x}{x_m} + i\frac{p}{\sqrt{2m\hbar\omega_m}}$ is the annihilation operator of the mechanical mode. $\Omega_m = 2gkx_{ZPF}\alpha_2$ is the effective optomechanical coupling strength and $x_{ZPF} \equiv \sqrt{\hbar/2m\omega_m}$ is the zero-point fluctuation of the nanosphere. The energy levels of the linearized Hamiltonian are shown in Fig. \ref{level} (a). The transition processes among the levels contain two parts. The primary one is the cooling and heating processes arising from the interaction between the cooling optical mode and the nanosphere in the optomechanical cavity\cite{Zoller,Liu1}, which are denoted by the one-way arrows. The other is the energy swapping between the optomechanical and the auxiliary cavities due to the tunneling between them, which is labeled by the red double arrows. It is worthwhile to mention that the energy level structure of the system is transformed from a two-level to a three-level configuration after the auxiliary cavity is added, as in Fig. \ref{level} (b). \begin{figure}
\caption{(Color online) (a) Energy level diagram of the linearized Hamiltonian (see Eq. (\ref{LHamiltonian})). Here $|n_2, n_3, m\rangle$ denotes the state with $n_2$ cooling-field photons in the optomechanical cavity, $n_3$ photons in the auxiliary cavity, and $m$ phonons in the mechanical mode of the nanosphere. The one-way arrows represent the cooling (blue arrows) and heating (red arrows) processes due to sideband resonance. The transition between energy levels of the two coupled cavities is denoted by the red double arrow. (b) The three-level configuration extracted from Fig. \ref{level} (a). State $|1\rangle$ stands for a short-lived state with high decay rate $\kappa$, and $|2\rangle$ represents a long-lived metastable state with small decay rate $\kappa_3$. It should be emphasized that the levels $|n_2 + 1, n_3\rangle$ and $|n_2 , n_3 + 1\rangle$ are drawn with a visible separation in this figure, but they are actually degenerate.}
\label{level}
\end{figure}
\section{Cooling of Nanosphere}\label{3}
\subsection{Optical force spectrum} From the interaction term $-(\Omega_ma_2^\dag +\Omega_m^\ast a_2)(b^\dag + b)$ in Eq. (3.13), we can derive the optical force on the nanosphere, $F = (\Omega_ma_2^\dag +\Omega_m^\ast a_2)/x_{ZPF}$. By the Fourier transformation of the correlation function, the corresponding quantum noise spectrum is expressed as $S_{FF}(\omega) \equiv \int \langle F(t)F(0)\rangle e^{i\omega t}dt$. To obtain the analytic expression of this noise spectrum, we treat the optomechanical coupling as a perturbation to the optical field because of the strongly dissipative nature of the optomechanical cavity. Firstly, we transform the corresponding linear equations of motion to the frequency domain, i.e., \begin{eqnarray}
-i\omega \tilde{a}_2(\omega) &=& ( i{\Delta' _2} - \frac{\kappa}{2})\tilde{a}_2(\omega) + i\Omega_m [\tilde{b}^\dag(\omega) + \tilde{b}(\omega)]- iJ\tilde{a}_3(\omega) + \sqrt{\kappa}\tilde{a}_{in,2}(\omega), \\
-i\omega \tilde{a}_3(\omega) &=& ( i{\Delta _3} - \frac{\kappa_3}{2})\tilde{a }_3(\omega) - i{J^\ast}\tilde{a}_2(\omega) + \sqrt{\kappa_3}\tilde{a}_{in,3}(\omega), \\
-i\omega \tilde{b}(\omega) &=& (- i\omega_{m} - \frac{\gamma}{2}) \tilde{b}(\omega) + i[\Omega_m\tilde{b}^\dag(\omega) + \Omega_m^\ast\tilde{b}(\omega)] + \sqrt{\gamma}\tilde{b}_{in}(\omega). \end{eqnarray} Then we derive the expression for $\tilde{b}(\omega)$ as \begin{equation}
\tilde{b}(\omega) \simeq \frac{\sqrt{\gamma}\tilde{b}_{in}(\omega) + i\sqrt{\kappa}A_2(\omega) + \sqrt{\kappa_3}A_3(\omega)}{i\omega - i[\omega_m + \Sigma(\omega)] - \frac{\gamma}{2}}, \end{equation} where\begin{eqnarray}
A_2(\omega) &=& \Omega^\ast_m \chi(\omega)\tilde{a}_{in,2}(\omega) + \Omega_m\chi^\ast(-\omega)\tilde{a}^\dag_{in,2}(\omega),\\
A_3(\omega)&=& J[\Omega^\ast\chi(\omega)\chi_3(\omega)\tilde{a}_{in,3}(\omega) - \Omega\chi^\ast(-\omega)\chi_3^\ast(-\omega)\tilde{a}^\dag_{in,3}(\omega)], \\
\Sigma(\omega) &=& -i|\Omega_m|^2[\chi(\omega) - \chi^\ast(-\omega)],\\
\chi_2(\omega) &=& \frac {1}{-i(\omega + {\Delta _2^\prime}) + {\kappa}/{2}},\\
\chi_3(\omega) &=& \frac {1}{-i(\omega + {\Delta _3}) + {\kappa_3}/{2}},\\
\chi(\omega) &=& \frac {1}{\frac{1}{\chi_{2}(\omega)} + \left| J \right|^2{\chi_{3}(\omega)}},\\
\chi_m(\omega) &=& \frac {1}{-i(\omega - \omega_{m}) + {\gamma}/{2}}.
\end{eqnarray} Here the effect of the optomechanical and the auxiliary cavities is represented by $A_{2, 3}(\omega)$. $\Sigma(\omega)$ accounts for the optomechanical self-energy; $\chi(\omega)$ is the total response function of the two coupled cavities, and $\chi_2(\omega),\chi_3(\omega),$ and $\chi_m(\omega)$ are the response functions of the optomechanical cavity, the auxiliary cavity, and the mechanical mode, respectively. The influence of the optomechanical coupling on the nanosphere motion is a modification of its mechanical frequency, $\delta\omega_m = Re\Sigma(\omega_m)$, and of its damping, $\Gamma_{opt} = -2Im\Sigma(\omega_m)$.
With the above preparation, we obtain the spectral density of the optical force: \begin{eqnarray}\label{spectrum}
S_{FF}(\omega) &=& \frac{|\Omega_m\chi(\omega)|^2}{x^2_{ZPF}}\left[\kappa + \kappa_3|J|^2|\chi_3(\omega)|^2\right]\nonumber \\
&=& \frac{|\Omega_m |^2}{x^2_{ZPF}}\left|\frac{1}{-i(\Delta'_2 + \omega) + \frac{\kappa}{2} + \frac{|J|^2}{-i(\Delta_3 + \omega) + \frac{\kappa_3}{2}}} \right|^2\left(\kappa + \frac{\kappa_3 |J|^2}{(\Delta_3 + \omega)^2 + \frac{\kappa_3^2}{4}}\right). \end{eqnarray}
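As a minimal numerical sketch, Eq. (\ref{spectrum}) can be evaluated directly; the following Python lines assume the parameter values quoted in the caption of Fig. \ref{spectrumplot} for the resonant case $\Delta'_2 = 0$, in units where $\omega_m = 1$ and with $x_{ZPF}$ set to $1$.
\begin{verbatim}
kappa, kappa3, Delta2p, Delta3 = 100.0, 1.0, 0.0, 0.5
J, Omega_m = kappa ** 0.5, 5.0                 # J = sqrt(kappa * omega_m)

def S_FF(w, J):
    # 1/chi(w): total response of the coupled cavities (J = 0: single cavity)
    inv_chi = -1j*(Delta2p + w) + kappa/2 + J**2/(-1j*(Delta3 + w) + kappa3/2)
    noise = kappa + kappa3 * J**2 / ((Delta3 + w)**2 + kappa3**2 / 4)
    return abs(Omega_m)**2 / abs(inv_chi)**2 * noise

for w in (-Delta3, -10.0):                     # at, and away from, omega = -Delta3
    print("w = %5.1f  single: %.3f  coupled: %.3f" % (w, S_FF(w, 0.0), S_FF(w, J)))
# Near w = -Delta3 the coupled spectrum shows a deep EIT-like dip
# (about 0.20 against 1.00); away from it the two spectra are close.
\end{verbatim}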
\begin{figure*}
\caption{(Color online) Optical force spectrum $S_{FF}(\omega)$ of the single cavity and the coupled cavities vs normalized frequency $\omega / \omega_m$ for various normalized detunings $\Delta'_2/\omega_m$. (a) The spectrum for blue detuning $\Delta'_2 = 100\omega_m$. (b) Detailed view of (a) showing the Fano line shape. (c) The spectrum for resonance, $\Delta'_2 = 0$. (d) Detailed view of (c) showing the EIT-like line shape. (e) The spectrum for red detuning $\Delta'_2 = -100\omega_m$. (f) Detailed view of (e) showing the Fano line shape. The other parameters are $\Delta_3 = 0.5\omega_m, \kappa/\omega_m = 100, \kappa_3 = \omega_m, J = \sqrt{\kappa\omega_m}, \Omega_m = 5\omega_m$, and $\gamma = 10^{-5}\omega_m$.}
\label{spectrumplot}
\end{figure*}
For a general cavity optomechanical system, the noise spectrum has the form $S_{FF}(\omega) = |\Omega_m\chi(\omega)|^2\kappa/x^2_{ZPF}$, which equals Eq. (\ref{spectrum}) with $J=0$. This is a typical Lorentzian lineshape. From Eq. (\ref{spectrum}), it can be observed that the spectral density of the coupled-cavity system acquires a more complex modification compared to the single cavity. This results from the interaction of the two optical modes once the auxiliary cavity is added. The spectral density of the optical force $S_{FF}(\omega)$ for both the single cavity and the coupled cavities with different detuning values in the unresolved-sideband regime is depicted in Fig. \ref{spectrumplot}. From Fig. \ref{spectrumplot} (a), (c) and (e), we find that the noise spectra of the single cavity and the coupled cavities are identical in the range far away from the resonant region of the auxiliary cavity, while a new lineshape appears in the resonant region of the auxiliary cavity for the coupled-cavity system. The character of the new resonance peaks in Fig. \ref{spectrumplot} (b), (d) and (f) is related to the relative position of the resonant regions of the optomechanical and the auxiliary cavities. When the resonant regions are separate, the lineshape of the new resonance peaks is an asymmetric Fano lineshape, as shown in Fig. \ref{spectrumplot} (b) and (f). For the overlapping case, i.e., $\Delta'_2\simeq\Delta_3$, the lineshape is a symmetric EIT-like lineshape. The emergence of the new lineshape breaks the symmetry of the otherwise symmetric Lorentzian background.
The EIT-like line shape is a result of the interference between two resonant processes. The physical mechanism is shown in Fig. \ref{level} (b), where the transition processes $|0\rangle \to |1\rangle$ and $|0\rangle \to |1\rangle \to |2\rangle \to |1\rangle$ are indistinguishable, which causes destructive quantum interference. Therefore, the excitation channel $|0\rangle \to |1\rangle$ corresponding to the heating process is suppressed. Meanwhile, the cooling process, being off resonance, remains intact. The interference of resonant and nonresonant processes leads to the appearance of the Fano line shape\cite{Elste,Stassi}. In this case, one process is enhanced while the other is restrained\cite{Sarma}. This means that the symmetry between the heating and the cooling processes is modulated by the presence of the auxiliary cavity.
Hence, a promising route to better cooling performance is to exploit this interference: by adjusting the optical parameters of the system, the heating effect can be suppressed and the cooling one enhanced.
\subsection{Cooling rate}
For our system, the cooling and heating rates $A_{\mp}$ are given by \begin{equation}
A_\mp = S_{FF}(\pm\omega_m) x^2_{ZPF}= \left|\frac{\Omega_m}{-i(\Delta'_2 \pm \omega_m) + \frac{\kappa}{2} + \frac{|J|^2}{-i(\Delta_3 \pm \omega_m) + \frac{\kappa_3}{2}}} \right|^2\left(\kappa + \frac{\kappa_3 |J|^2}{(\Delta_3 \pm \omega_m)^2 + \frac{\kappa_3^2}{4}}\right). \end{equation} The net cooling rate is defined as \begin{equation}
\Gamma_{opt} = A_- - A_+. \end{equation}
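As a minimal numerical illustration (in units of $\omega_m$, with $x_{ZPF}=1$, taking $\kappa = 100\,\omega_m$ and the other parameter values used in the figures below, including the detuning $\Delta'_2 = J^2/(\Delta_3+\omega_m)$ adopted later), the rates can be evaluated with a few lines of Python.
\begin{verbatim}
kappa, kappa3, Delta3, Omega_m = 100.0, 1.0, 0.5, 0.25   # units of omega_m
J = kappa ** 0.5                                         # J = sqrt(kappa * omega_m)
Delta2p = J**2 / (Delta3 + 1.0)                          # blue detuning used below

def A(sign, J):
    """A_- (sign = +1) and A_+ (sign = -1), i.e. S_FF(+/- omega_m) x_ZPF^2."""
    w = sign * 1.0
    inv_chi = -1j*(Delta2p + w) + kappa/2 + J**2/(-1j*(Delta3 + w) + kappa3/2)
    noise = kappa + kappa3 * J**2 / ((Delta3 + w)**2 + kappa3**2 / 4)
    return abs(Omega_m)**2 / abs(inv_chi)**2 * noise

for label, j in (("coupled", J), ("single ", 0.0)):
    Am, Ap = A(+1, j), A(-1, j)
    print(label, "A_- = %.2e  A_+ = %.2e  Gamma_opt = %.2e" % (Am, Ap, Am - Ap))
# coupled: Gamma_opt ~ +1.4e-3 omega_m (net cooling at this blue detuning)
# single : Gamma_opt ~ -3.5e-5 omega_m (net heating at the same detuning)
\end{verbatim}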
\begin{figure*}
\caption{(Color online) Net cooling rate $\Gamma_{opt}$ as a function of the normalized detuning $\Delta'_2/\omega_m$ and the normalized decay rate $\kappa/\omega_m$ for a single cavity (a) and coupled cavities (b). The relevant parameters are $\Delta_3 = 0.5\omega_m, \kappa_3 = \omega_m, J = \sqrt{\kappa\omega_m}, \Omega_m = \omega_m/4$, and $\gamma = 10^{-5}\omega_m$.}
\label{coolingrate}
\end{figure*}
In Figs. \ref{coolingrate} (a) and (b), we plot the net cooling rate for the single cavity and the coupled cavities. There are two differences between them. For the coupled cavities, the optimum cooling detuning is blue, and a high cooling rate appears over a wide region of larger decay rates. Due to the existence of the auxiliary cavity, the effective detuning governing the nanosphere cooling dynamics is no longer $\Delta'_2$, so $\Delta'_2 < 0$ is not the appropriate choice for the nanosphere cooling. The details will be discussed in Section \ref{discuss}. When the coupled-cavity system is in the cooling regime, the cooling rate $A_-$ is unchanged while the heating rate $A_+$ is strongly suppressed on account of the quantum interference. Consequently, a large net cooling rate is obtained. The larger the damping, the more strongly the auxiliary cavity modifies the symmetry between the heating and cooling processes over a wide range of detunings, so a large net cooling rate appears over a wide parameter region.
\subsection{Cooling limit} The steady-state cooling limit (i.e. the final mean phonon number) of the coupled cavities is similar to that of the single cavity\cite{Aspe}, and reads \begin{eqnarray}
n_f
&=& \frac{A_+}{\Gamma_{opt}}+\frac{\gamma_{sc}}{\Gamma_{opt}}. \end{eqnarray} The cooling limit consists of two parts. $n_f^q = A_+/\Gamma_{opt}$ is the quantum limit of cooling, which is related to the quantum backaction. The classical cooling limit $n_f^c = \gamma_{sc}/\Gamma_{opt}$ is tied to the specific conditions of a particular system.
According to the above analysis, the quantum interference suppresses the heating rate $A_+$ associated with quantum backaction heating and gives rise to a larger net cooling rate $\Gamma_{opt}$. As a result, the coupled-cavity system has a much smaller quantum limit of cooling $n_f^q$ than the single cavity. With the same physical quantity $\gamma_{sc}$ in both the single-cavity and the coupled-cavity systems, the classical cooling limit $n_f^c$ is also much smaller in the coupled-cavity system due to the large net cooling rate $\Gamma_{opt}$. To recap, the coupled-cavity system can achieve ground-state cooling in an extensive range of parameters. In the following, a set of experimentally plausible parameters is adopted, with reference to related experiments \cite{Zoller,Pender,Rodenburg,Kiesel}, to show this result. We consider a silica sphere with radius $r = 50$ nm and mechanical frequency $\omega_m /(2\pi) = 0.5$ MHz, levitated inside a cavity with $L=1$ cm and waist $w = 25$ $\mu$m. The wavelength of the trap laser is taken as $\lambda = 1$ $\mu$m and the dielectric constant of the material as $\epsilon = 2$. Specifically, we take the effective optomechanical coupling strength $\Omega_m /\omega_{m} = 1/4 < 1$ to ensure the validity of the perturbative treatment. This means that the coupling between the cavity mode 2 and the nanosphere is weaker than the mechanical frequency of the nanosphere, which differs from the related studies\cite{Sarma,Liu}. Besides, the influence of the background gas is negligible.
The effect of the tunnelling strength $J$ on the cooling limit is shown in Fig. \ref{coolinglimit} (a). We find that ground-state cooling can be achieved over a wide range of larger tunnelling strengths. More precisely, the significant change occurs in a narrow region as the tunnelling strength increases from zero, while the cooling limit hardly changes upon further increasing the strength once the limit has attained a certain value. This means that the benefit of the auxiliary cavity for the nanosphere cooling saturates. In Fig. \ref{coolinglimit} (b), we demonstrate the steady-state cooling limit of the single cavity and the coupled cavities for various normalized dampings $\kappa/\omega_m$. As shown in Fig. \ref{coolinglimit} (b), the single-cavity system in the unresolved-sideband regime ($\kappa/\omega_m\gg1$) is not able to cool the nanosphere to the ground state. For the coupled cavities, due to the quantum interference originating from the addition of the auxiliary cavity, ground-state cooling can be achieved for a much larger range of normalized damping $\kappa/\omega_m$.
\begin{figure}
\caption{(Color online) The steady-state cooling limit as a function of (a) the normalized coupling strength $J/\omega_m$ and (b) the normalized damping $\kappa/\omega_m$ for the single cavity and the coupled cavities. In (a), the detuning is $\Delta'_2 = \omega_m$ and the decay rate is $\kappa/\omega_m =(J/\omega_m)^2$. In (b), the solid blue line denotes the final phonon number of the nanosphere for the coupled cavities, while the dashed green line stands for the single cavity. The shaded region denotes $n_f < 1$. The optimum detuning $\Delta'_2 = J^2/(\Delta_3 + \omega_m)$ is in accordance with Ref. \cite{Liu} and $J = \sqrt{\kappa\omega_m}$. The other parameters are $\Delta_3 = 0.5\omega_m, \kappa_3 = \omega_m, \gamma = 10^{-5}\omega_m,$ and $\Omega_m = \omega_m/4$. For the nanosphere, the radius is chosen as $r = 50$ nm and the operating wavelength is taken as $\lambda = 1$ $\mu$m.}
\label{coolinglimit}
\end{figure}
The physical quantity $\gamma_{sc}$ in the classical cooling limit is characterized by the nanosphere volume $V$ for a given material and trap field, since $\gamma_{sc} = \omega_m\frac{4\pi^2}{5}\frac{\epsilon-1}{\epsilon+2}(V/\lambda^3)$\cite{Zoller}. The decay rate of the auxiliary cavity influences the optomechanical response of the nanosphere, and thereby changes the cooling limit of the nanosphere. For these reasons, the radius of the nanosphere (which sets the nanosphere volume) and the damping rate of the auxiliary cavity are crucial parameters in the coupled-cavity-nanosphere system. Fig. \ref{coolinglimit2} shows their influence on the cooling limit. Fig. \ref{coolinglimit2} (a) plots the cooling limit as a function of the normalized damping $\kappa/\omega_m$ for different radii of the nanosphere. One finds that the cooling limit is not sensitive to the size of the nanosphere when the decay rate $\kappa$ is small. The size of the nanosphere largely affects the cooling limit in the large-decay-rate regime, and a nanosphere with smaller radius can achieve ground-state cooling over a wide range of the parameter $\kappa$. When $\kappa$ is large, an increase of the nanosphere radius changes the physical quantity $\gamma_{sc}$ rapidly, so the classical cooling limit increases too quickly for the nanosphere to remain in the ground-state regime. Fig. \ref{coolinglimit2} (b) plots the cooling limit as a function of the normalized auxiliary cavity damping $\kappa_3/\omega_m$ for different decay rates $\kappa$. It demonstrates that the system can realize ground-state cooling over a wide range of the parameter $\kappa$ under the condition $\kappa_3/\omega_m < 1$, and that the cooling limit is sensitive to the decay rate $\kappa$ for $\kappa_3/\omega_m > 1$. It is more difficult to achieve ground-state cooling for larger $\kappa$ in the range $\kappa_3/\omega_m > 1$. The quantum interference makes the actual damping of the hybrid system depend on $\kappa_3$. When $\kappa_3/\omega_m < 1$, the hybrid system is effectively in the resolved regime and ground-state cooling can be obtained easily, whereas for $\kappa_3/\omega_m > 1$ one obtains the opposite result (see Section \ref{discuss} for details).
\begin{figure*}
\caption{(Color online) The cooling limit as a function of the normalized damping $\kappa/\omega_m$ for different radii of the nanosphere with $\kappa_3 = \omega_m$ (a), and of the normalized auxiliary cavity damping $\kappa_3/\omega_m$ for different normalized decay rates $\kappa/\omega_m$ with $r = 50$ nm (b). The shaded region denotes $n_f < 1$. The relevant parameters are $\Delta_3 = 0.5\omega_m, \Delta'_2 = J^2/(\Delta_3 + \omega_m), J = \sqrt{\kappa\omega_m}, \Omega_m = \omega_m/4$, and $\gamma = 10^{-5}\omega_m$.}
\label{coolinglimit2}
\end{figure*}
\section{Discussion}\label{4}
From the above study, we know that the auxiliary cavity not only changes the symmetry between the heating and cooling processes of the nanosphere, but also modifies the cooling dynamics of the nanosphere. There exists an indirect interaction between the cavity mode $a_3$ and the nanosphere. In order to understand these results, we derive the effective parameters of the coupled cavities and discuss the dynamical stability condition of our model in this section.
\subsection{Effective coupling}\label{discuss}
The current system is in the highly unresolved regime $\kappa\gg\omega_m$. The coupling between the cavity mode $a_2$ and the nanosphere is weak ($\Omega_m\ll\omega_m$), and can therefore be treated as a perturbation. Hence, analytical dynamical equations can be derived for the cavity mode $a_3$ and the nanosphere alone. From Eqs. (3.9), (3.10) and (3.12), we obtain the formal solutions of the corresponding operators by formal integration:
\begin{equation}\label{e}
a_2 = a_2(0)e^{i\Delta_{2}^{'}t-\frac{\kappa}{2}t} + e^{i\Delta_{2}^{'}t-\frac{\kappa}{2}t}\int_{0}^{t}[2ig\alpha_2kx(\tau) - iJa_3(\tau) + \sqrt{\kappa}a_{in,2}(\tau)]e^{-i\Delta_{2}^{'}\tau + \frac{\kappa}{2}\tau}d\tau, \end{equation}
\begin{equation}\label{}
a_3 = a_3(0)e^{i\Delta_{3}t-\frac{\kappa_3}{2}t} + e^{i\Delta_{3}t-\frac{\kappa_3}{2}t}\int_{0}^{t}[ - iJ^\ast a_2(\tau) + \sqrt{\kappa_3}a_{in,3}(\tau)]e^{-i\Delta_{3}\tau + \frac{\kappa_3}{2}\tau}d\tau, \end{equation}
\begin{equation}\label{}
x = \frac{p}{m}t + \int_{0}^{t}F_x(\tau)d\tau. \end{equation} Because $\kappa \gg J$ and $g \ll \omega_m$, we neglect the corresponding terms and obtain \begin{eqnarray}
a_3 &=& a_3(0)e^{i\Delta_{3}t-\frac{\kappa_3}{2}t} + A_{in,3}(t) \label{c},\\
x &=& \frac{p}{m}t + F_X(t)\label{d}, \end{eqnarray}
where $A_{in,3}(t)$ and $F_X(t)$ represent the noise terms. Plugging Eqs. (\ref{c}) and (\ref{d}) into Eq. (\ref{e}) under the condition $|\Delta'_2|\gg|\Delta_3|, \kappa\gg(\kappa_3, \gamma)$, we have \begin{equation}\label{f}
a_2 = a_2(0)e^{i\Delta_{2}^{'}t - \frac{\kappa}{2}t} + \frac{2ig\alpha_2kx(t)}{-i\Delta_{2}^{'} + \frac{\kappa}{2}} - \frac{ iJ a_3(t)}{-i\Delta_{2}^{'} + \frac{\kappa}{2}} + A_{in,2}(t). \end{equation} Substituting Eq. (\ref{f}) into Eqs. (3.10) and (3.12) and neglecting the terms containing $e^{- \frac{\kappa}{2}t}$, one can compare the equations with the single cavity case and then derive \begin{eqnarray}
i\Delta_3 - \frac{\kappa_3}{2} + \frac{|J|^2}{i\Delta_{2}^{'} - \frac{\kappa}{2}} &\longleftrightarrow& i\Delta_{eff} - \frac{\kappa_{eff}}{2},\\
\left|\frac{J^\ast\Omega_{m}}{i\Delta_{2}^{'} - \frac{\kappa}{2}}\right| &\longleftrightarrow& |\Omega_{m~eff}|, \end{eqnarray}
where $|\Omega_{m~eff}| = \eta|\Omega_{m}|$, $\kappa_{eff} = \kappa_{3} + \eta^2\kappa$, $\Delta_{eff} = \Delta_{3} - \eta^2 \Delta_{2}^{'}$ and $\eta = \frac{|J|}{[\Delta_{2}^{'2} + (\frac{\kappa}{2})^2]^{\frac{1}{2}}}$.
Therefore, we reduce the three-mode system to a two-mode system\cite{Liu}. Regarding the effective detuning $\Delta_{eff}$: because the detuning $\Delta_{3}$ is positive and small when the system is in the cooling regime, only $\Delta'_2\gg0$ (i.e. blue detuning) can make $\Delta_{eff} <0$ lie in the optimum detuning regime. Under the condition $\kappa\gg J$, the parameter $\eta$ is far less than $1$, so the effective decay rate is $ \kappa_{eff} \simeq \kappa_{3}$. This means that the indirect coupling can bring the system from the highly unresolved regime to an effectively resolved regime, and it explains why the actual damping of the hybrid system is governed by $\kappa_3$.
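A quick numerical sketch (in Python, units of $\omega_m$, with the same parameter values as in the figures, i.e. $\kappa = 100\,\omega_m$, $\kappa_3 = \omega_m$, $\Delta_3 = 0.5\,\omega_m$, $J = \sqrt{\kappa\omega_m}$, $\Omega_m = \omega_m/4$ and $\Delta'_2 = J^2/(\Delta_3+\omega_m)$) makes these orders of magnitude explicit.
\begin{verbatim}
kappa, kappa3, Delta3, Omega_m = 100.0, 1.0, 0.5, 0.25   # units of omega_m
J = kappa ** 0.5
Delta2p = J**2 / (Delta3 + 1.0)

eta = abs(J) / (Delta2p**2 + (kappa / 2)**2) ** 0.5
print("eta       =", round(eta, 3))                       # ~0.12
print("Omega_eff =", round(eta * Omega_m, 3))             # ~0.03
print("kappa_eff =", round(kappa3 + eta**2 * kappa, 2))   # ~2.44
print("Delta_eff =", round(Delta3 - eta**2 * Delta2p, 2)) # ~-0.46
# kappa_eff is two orders of magnitude below the bare kappa = 100 omega_m:
# the reduced two-mode system is effectively much closer to the resolved regime.
\end{verbatim}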
\subsection{Dynamical stability condition}
The dynamical stability condition of the system is derived by the Routh-Hurwitz criterion\cite{Routh}. For the single cavity system, the dynamical stability condition reads \begin{equation}\label{stability}
\Delta_{2}^{'}[16\Delta_{2}^{'}|\Omega_{m}|^2 + (4\Delta_{2}^{'2} + \kappa^{2} )\omega_m] < 0. \end{equation} When the system is in the resolved regime, the detuning for the optimum cooling limit is $\Delta_{2}^{'} = -\kappa/2$. Thus Eq. (\ref{stability}) is simplified as \begin{equation}\label{a}
|\Omega_{m}|^2 < \frac{\kappa\omega_{m}}{4}. \end{equation}
The dynamical stability condition for the coupled cavities is given in terms of the derived effective parameters \begin{equation}\label{stability1}
\Delta_{eff}[16\Delta_{eff}|\Omega_{m~eff}|^2 + (4\Delta_{eff}^{2} + \kappa_{eff}^{2} )\omega_{m~eff}] < 0. \end{equation} Similarly, we take the effective detuning $\Delta_{eff} = -\omega_{m}$ which is the optimum detuning for effective optomechanical interaction in the resolved regime. Then Eq. (\ref{stability1}) reduces to \begin{equation}
|\Omega_{m~eff}|^2 < \omega_{m}^2/4 + \kappa_{eff}^{2}/16. \end{equation} Back to real parameters, we have \begin{equation}\label{b}
|\Omega_{m}|^2 < \frac{4\omega_{m}^{2} + (\kappa_3 + \eta^2\kappa)^2}{16\eta^2}. \end{equation}
For Eq. (\ref{b}), when $\eta = \eta_{min} \equiv \sqrt[4]{4\omega_{m}^{2} + \kappa_{3}^{2}}/\sqrt{\kappa}$, its right-hand side attains the minimum $S_{min} = \frac{\kappa}{4}\sqrt{\omega_{m}^{2} + \frac{\kappa_{3}^{2}}{4}} + \frac{\kappa\kappa_{3}}{8}$. Comparing $S_{min}$ with the right-hand side of Eq. (\ref{a}), one finds that $S_{min}$ is larger. This indicates that, in comparison with the single cavity, the coupled-cavity system tolerates a larger optomechanical coupling while remaining in the stable regime.
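This comparison can be checked with a few lines of Python (a minimal sketch, with $\omega_m=1$, $\kappa=100$, $\kappa_3=1$), minimizing the right-hand side of Eq. (\ref{b}) over $\eta$ and comparing it with the bound of Eq. (\ref{a}).
\begin{verbatim}
omega_m, kappa, kappa3 = 1.0, 100.0, 1.0
rhs_single = kappa * omega_m / 4                           # bound in Eq. (a)

def rhs_coupled(eta):                                      # bound in Eq. (b)
    return (4*omega_m**2 + (kappa3 + eta**2 * kappa)**2) / (16 * eta**2)

S_min_numeric = min(rhs_coupled(k / 10000) for k in range(1, 20001))
S_min_formula = kappa/4 * (omega_m**2 + kappa3**2/4) ** 0.5 + kappa*kappa3/8
print(rhs_single, round(S_min_numeric, 2), round(S_min_formula, 2))
# 25.0  40.45  40.45   (the coupled bound exceeds the single-cavity one)
\end{verbatim}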
\section{Conclusion}\label{5}
In conclusion, we have theoretically investigated the ground-state cooling of an optically levitated nanosphere in the highly unresolved-sideband regime by introducing an auxiliary cavity. The auxiliary cavity is coupled to the optomechanical cavity, but does not interact with the levitated nanosphere. This specific configuration of energy transitions gives rise to quantum interference, which modifies the optomechanical response of the nanosphere and produces an asymmetry between the heating and cooling processes. By tuning the detuning between the optomechanical cavity and the cooling field, one can take advantage of this interference to enhance the cooling process and restrain the heating process. As a result, a larger net cooling rate is obtained in a wide range of parameters and the cooling limit is lowered dramatically. It is found that ground-state cooling can still be achieved for a large optomechanical cavity decay rate $\kappa$ even if the effective optomechanical coupling $\Omega_m$ is weaker than the frequency of the nanosphere $\omega_m$. The cooling limit in our study is sensitive to the radius of the nanosphere as well as to the damping rate of the auxiliary cavity. An increase of the nanosphere radius makes the classical cooling limit grow too quickly for the nanosphere to remain in the ground-state regime. The larger the decay rate of the auxiliary cavity, the smaller the optomechanical cavity dissipation that ground-state cooling can tolerate. The effective interaction between the auxiliary cavity and the levitated nanosphere brings the system from the highly unresolved-sideband regime to an effectively resolved-sideband regime. This significantly relaxes the restrictive condition that the system must be in the resolved-sideband regime for nanosphere cooling. Furthermore, the interaction improves the dynamical stability compared to the case without the auxiliary cavity. This work may open possibilities for research on, and applications of, levitated nanosphere systems beyond the restrictions of current experiments.
\end{document}
\begin{document}
\title[Bijections between triangular walks and Motzkin paths] {Bijections between
walks inside a triangular domain and Motzkin paths of bounded amplitude}
\author{Julien Courtiel} \address{ \vspace*{-.5cm} \small Normandie University, UNICAEN, ENSICAEN, CNRS, GREYC} \author{Andrew Elvey Price} \address{ \vspace*{-.5cm} Universit\'e de Bordeaux, LaBRI, Universit\'e de Tours, IDP } \author{Ir\`ene Marcovici} \address{ \vspace*{-.5cm} \small Universit\'e de Lorraine, CNRS, Inria, IECL, F-54000 Nancy, France}
\begin{abstract} This paper solves an open question of Mortimer and Prellberg asking for an explicit bijection between two families of walks. The first family is formed by what we name \textit{triangular walks}, which are two-dimensional walks moving in six directions ($0^{\circ}$, $60^{\circ}$, $120^{\circ}$, $180^{\circ}$, $240^{\circ}$, $300^{\circ}$) and confined within a triangle. The other family consists of two-colored Motzkin paths with bounded height, in which the horizontal steps may be forbidden at maximal height.
We provide several new bijections. The first one is derived from a simple inductive proof, taking advantage of a $2^n$-to-one function from generic triangular walks to triangular walks only using directions $0^{\circ}$, $120^{\circ}$, $240^{\circ}$. The second is based on an extension of Mortimer and Prellberg's results to triangular walks starting not only at a corner of the triangle, but at any point inside it. It has a linear-time complexity and is in fact adjustable: by changing some set of parameters called a \textit{scaffolding}, we obtain a wide range of different bijections.
Finally, we extend our results to higher dimensions. In particular, by adapting the previous proofs, we discover an unexpected bijection between three-dimensional walks in a pyramid and two-dimensional simple walks confined in a bounded domain shaped like a waffle. \end{abstract}
\maketitle
\subsection*{Thanks} \thanks{JC was supported by the ``\textit{CNRS projet JCJC}'' named ASTEC. AEP was supported by the European Research Council (ERC) in the European Union’s Horizon 2020 research and innovation programme, under the Grant Agreement No.~759702. The authors also wish to thank the sponsors of the conference ALEA Young (ANR-MOST MetAConC, Normastic, Université de Caen Normandie), without which this collaboration would never have been born.}
\section{Introduction}
In part due to the ubiquity of random walks in probability theory, lattice walks are extensively studied in enumerative combinatorics~\cite{Mohanty,Humphreys,BoMi10}. In this context, it is frequently discovered that two families of walks, which seem to be very different, are in fact counted by the same numbers. The initial proof is often not combinatorial, and finding an explicit bijection between such families can prove to be a difficult task (see for example \cite{Eliz15,basket}).
In this spirit, this paper answers a $5$-year-old open question from Mortimer and Prellberg~\cite[Section 4.3]{MortimerPrellberg}. By solving a functional equation satisfied by the generating function, the two authors realized that the number of walks in a triangular domain starting from a corner of this domain is equal to the number of Motzkin paths of bounded height -- we will give precise definitions of these families in the following subsections. Their proof was purely analytic and, consequently, it raised the issue of finding an explanatory bijection. This gave rise to an open question, which became rather famous in the community, since Prellberg, one of the authors of \cite{MortimerPrellberg}, regularly asked for a bijection in open problems sessions during combinatorics conferences. The current paper solves this question, in several ways.
In the rest of this section, we introduce the notions of triangular paths, Motzkin paths and Motzkin meanders, which will be our objects of study, and we present more formally Mortimer and Prellberg's problem. Then, in the last subsection, we give a detailed outline of the present paper.
\subsection{Triangular paths}
Let $(e_1,e_2,e_3)$ denote the standard basis of $\R^3$. For some $L\in\N$, we define the subset $\T_L$ of $\N^3$ as the triangular section of side length $L$ of the integer lattice: $$\T_L=\{x_1\,e_1+x_2\,e_2+x_3\,e_3 : x_1, x_2, x_3\in\N, x_1+x_2+x_3=L\}.$$ An example of such a lattice is shown by Figure~\ref{figure:T3} (left).
We also introduce the notation $$s_1=e_1-e_3, \quad s_2=e_2-e_1, \quad s_3=e_3-e_2,$$ and for $i\in\{1,2,3\}$, we set $\overline{s_i}=-s_i.$ We will interpret the vectors $s_i$ as \emph{forward steps} and the vectors $\overline{s_i}$ as \emph{backward} steps. We denote by $\Sf=\{s_1,s_2,s_3\}$ and $\Sb=\{\bsa,\bsb,\bsc\}$ the set of forward and backward steps, respectively.
\begin{figure}
\caption{\textit{Left.} The triangular lattice $\T_3$. \textit{Right.} The planar representation of the same lattice, with $\Sf$ and $\Sb$. }
\label{figure:T3}
\end{figure}
For convenience, we define the indices modulo $3$, thus $s_0 = s_3$ and $s_4 = s_1$.
The triangular lattice $\T_L$ can be naturally drawn in the plane, as an equilateral triangle of side length $L$, subdivided into smaller equilateral triangles of side length $1$ (see Figure~\ref{figure:T3} right). We will use this planar representation for the remainder of the document.
We define $\origin$ as the bottom left corner of $\T_L$, that is to say $\origin = L e_3$. In some sense, it denotes an origin for the lattice $\T_L$.
\begin{Definition}[Forward paths, triangular paths] Given an integer $L\in\N$, and a point $z\in \T_L$, a \emph{forward (triangular) path} of length $n$ starting from $z$ is a sequence $(\sigma_1,\ldots,\sigma_n)\in \Sf^{n}$ satisfying \[\forall k\in\{0,\ldots,n\}, \quad z+\sum_{i=1}^k \sigma_i \in \T_L.\] A \emph{(generic) (triangular) path} of length $n$ starting from $z$ is a sequence $(\omega_1,\ldots,\omega_n) \in \left(\Sf \cup \Sb\right)^n$ satisfying \[\forall k\in\{0,\ldots,n\}, \quad z+\sum_{i=1}^k \omega_i \in \T_L.\] \end{Definition}
\begin{figure}
\caption{All triangular paths of $\T_3$ with length $2$ starting at $\origin$.}
\label{figure:triangular}
\end{figure}
If $L \geq 2$, there are $2$ forward paths of length $2$ and $8$ generic paths of length $2$ starting from $\origin$, as shown by Figure~\ref{figure:triangular}.
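These small counts can be checked by brute force. In the following Python sketch, points of $\T_L$ are encoded as integer triples $(x_1,x_2,x_3)$ with $x_1+x_2+x_3=L$, and a path is valid as long as all coordinates stay nonnegative.
\begin{verbatim}
from itertools import product

S_FWD = [(1, 0, -1), (-1, 1, 0), (0, -1, 1)]            # s1, s2, s3
S_ALL = S_FWD + [tuple(-c for c in s) for s in S_FWD]   # add the backward steps

def count_paths(start, steps, n):
    """Length-n paths from `start` whose points all have nonnegative coordinates."""
    count = 0
    for path in product(steps, repeat=n):
        z, ok = start, True
        for s in path:
            z = tuple(a + b for a, b in zip(z, s))
            ok = ok and min(z) >= 0
        count += ok
    return count

corner = (0, 0, 3)                                      # the corner O of T_3
print(count_paths(corner, S_FWD, 2),                    # 2 forward paths
      count_paths(corner, S_ALL, 2))                    # 8 generic paths
\end{verbatim}
It prints $2$ and $8$, in agreement with Figure~\ref{figure:triangular}.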
For those who are familiar with the enumeration of walks in the quarter plane, forward paths can be seen as a subfamily of tandem walks~\cite[Section 4.7]{yellowBook}. \textit{Tandem walks} are walks on $\N^2$ using steps $(1,0)$, $(-1,1)$, $(0,-1)$ (East, North-West, South steps). Their name comes from the fact that in queuing theory, they model the behavior of two queues in series.
To be precise, forward paths of $\T_L$ are
equivalent to tandem walks confined in the part of the positive quarter plane below the anti-diagonal $x+y = L$. In terms of queues, forward paths can be represented by two queues in series where the total number of jobs (or customers) in both queues is never greater than $L$.
\begin{figure}
\caption{Equivalent definitions of the same object: forward paths of $\T_3$ (left); tandem walks in the positive quarter of plane and below the antidiagonal $x + y = 3$ (middle); standard Young tableaux with three rows or less such that the label of the $i$th cell of the bottom row must be less than the label of the $(i+3)$th cell of the top row (right). }
\label{figure:tandem}
\end{figure}
Since tandem walks are also described by \textit{standard Young tableaux}~\cite{young} with three rows or less, forward paths on $\T_L$ form a particular subfamily of standard Young tableaux: they must have $3$ rows or less, and for every $k > L$, if there is a $k$th cell in the top row of the tableau, then its label must be greater than the label of the $(k-L)$th cell of the third row (which must exist). The three equivalent definitions of forward paths are illustrated by Figure~\ref{figure:tandem}.
As for generic triangular paths, they are naturally encoded by \textit{double-tandem walks}, which are walks on $\N^2$ using steps $(1,0)$, $(-1,1)$, $(0,-1)$, $(-1,0)$, $(1,-1)$, $(0,1)$ (we add to the base step set of the tandem walks the opposite steps).
\subsection{Motzkin paths and meanders} \label{ss:motzkin}
A \textit{Motzkin path} is a path using up, horizontal and down steps, respectively denoted $\nearrow$, $\rightarrow$ and $\searrow$, such that: \begin{itemize} \item it starts at height $0$; \item it remains at height $\geq 0$ (i.e. inside any prefix of a Motzkin path, the number of $\nearrow$ steps is greater than or equal to the number of $\searrow$ steps); \item it ends at height $0$ (i.e. in total, there are as many $\nearrow$ steps as $\searrow$ steps). \end{itemize}
The following definition refines the notion of maximum height for a Motzkin path.
\begin{Definition}[Amplitude] Let $M$ be a Motzkin path and $H$ its maximum height (i.e. the maximal difference between the number of $\nearrow$ steps and the number of $\searrow$ steps in a prefix of $M$).
The \emph{amplitude} of $M$ is defined as \[ \left\{ \begin{array}{cl}
2H +1 & \textrm{if a horizontal step }\rightarrow\textrm{ is performed at height }H, \\ 2 H & \textrm{otherwise.} \end{array} \right.\] \end{Definition}
\begin{figure}
\caption{Motzkin paths of length $4$ sorted with respect to their amplitude (from $1$ to $4$)}
\label{figure:motzkin}
\end{figure}
For example, all the Motzkin paths of length $4$ are listed by Figure~\ref{figure:motzkin}: there is one such path with amplitude $1$, four with amplitude $2$, three with amplitude $3$ and one with amplitude $4$.
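This example can be reproduced mechanically: the following Python sketch enumerates the Motzkin paths of length $4$, encoded as sequences over $\{+1,0,-1\}$, and tallies them by amplitude.
\begin{verbatim}
from itertools import product

counts = {}
for path in product((1, 0, -1), repeat=4):
    h, heights = 0, [0]
    for step in path:
        h += step
        heights.append(h)
    if min(heights) < 0 or heights[-1] != 0:
        continue                                   # not a Motzkin path
    top = max(heights)
    flat_at_top = any(s == 0 and heights[i] == top for i, s in enumerate(path))
    amp = 2 * top + (1 if flat_at_top else 0)      # amplitude as defined above
    counts[amp] = counts.get(amp, 0) + 1
print(dict(sorted(counts.items())))                # {1: 1, 2: 4, 3: 3, 4: 1}
\end{verbatim}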
A \textit{Motzkin meander} is a suffix\footnote{Usually a meander is defined as a prefix, but up to a vertical symmetry, it is equivalent.} of a Motzkin path. A Motzkin meander can thus start at any height, but must end at height $0$.
\subsection{Mortimer and Prellberg's open question}
We now state Mortimer and Prellberg's enumerative result (reformulated in terms of amplitude), for which we are going to give explanatory bijections.
\begin{Theorem}[Corollary 4 \cite{MortimerPrellberg}] Given any $L \geq 0$, there are as many triangular paths in $\T_L$ starting at $\origin$ with $p$ forward steps and $q$ backward steps as bicolored Motzkin paths of length $p+q$ with an amplitude less than or equal to $L$ where $p$ steps are colored in black and $q$ are colored in white. \label{theo:mortimerprellberg} \end{Theorem}
Setting $p=n$ and $q=0$, we obtain the following corollary about forward paths.
\begin{Corollary} Given any $L \geq 0$, there are as many forward paths in $\T_L$ of length $n$ starting at $\origin$ as Motzkin paths of length $n$ with an amplitude less than or equal to $L$. \label{cor:forward-motzkin} \end{Corollary}
\begin{figure}
\caption{Equinumeracy between forward paths of $\T_3$ with length $4$ starting at $\origin$ and Motzkin paths with amplitude bounded by $3$.}
\label{figure:exponential-bijection}
\end{figure}
An illustration of this corollary for $n=4$ is shown by Figure~\ref{figure:exponential-bijection}.
Connections between Motzkin paths and tandem walks (the natural superset of forward paths) are not new. Regev~\cite{regev} was the first to notice via an algebraic method that standard Young tableaux with $3$ rows or less and Motzkin paths are counted by the same numbers. Gouyou-Beauchamps~\cite{gouyouBeauchamps} then found an explanation for this equinumeracy, thanks to the Robinson-Schensted correspondence. Since then, several authors~\cite{eu,eu2,ChyzakYeats,bousquetmelouFusyRaschel} have given new bijections between tandem walks and Motzkin paths, which each have their own ways to be generalized. It should be noted that none of these bijections restrict to a bijection between forward paths in $\T_L$ and Motzkin paths with amplitude bounded by $L$.
By comparing Theorem~\ref{theo:mortimerprellberg} and its corollary, one can remark that there is a factor $2^n$ between forward paths in $\T_L$ of length $n$ and generic triangular paths in $\T_L$ of length $n$. This fact was known before Mortimer and Prellberg's article for tandem walks and double-tandem walks (in other words, whenever $L$ is infinite). Bousquet-Mélou and Mishna~\cite{BoMi10} were the first to notice it and wondered whether there is a combinatorial explanation for this phenomenon. This was solved by Yeats via a convoluted bijection~\cite{yeats2014bijection}. This bijection was subsequently improved by Chyzak and Yeats~\cite{ChyzakYeats} by using the formalism of automata. Again, their bijection does not restrict to the triangular lattice $\T_L$.
\subsection{Outline of the paper}
This paper presents bijections that explain Theorem~\ref{theo:mortimerprellberg}. More precisely, we demonstrate on one hand why the ratio between forward paths and generic paths of length $n$ is $2^n$, and on the other hand, we find several bijections for Corollary~\ref{cor:forward-motzkin}. Combining both results will give different combinatorial proofs of Theorem~\ref{theo:mortimerprellberg}.
First, Section~\ref{s:symmetry} concentrates on a symmetry property of the triangular paths: the number of paths starting from a point in $\T_L$ with a fixed sequence of forward and backward steps does not depend on the sequence of forward and backward steps. This property, stated as Theorem~\ref{theo:directions}, yields the above-mentioned $1$-to-$2^n$ function between forward paths and triangular paths of length $n$. The proof is based on a convergent rewriting system.
Section~\ref{s:expo} provides a simple inductive proof of the equinumeracy between triangular paths in $\T_L$ and Motzkin paths with amplitude bounded by $L$ (Proposition~\ref{prop:motzkin_inductive}). Furthermore, we manage to tweak this proof into a bijection which explains Corollary~\ref{cor:forward-motzkin} (see Figure~\ref{figure:Omega}). However, this bijection is highly complex in the sense that it is based on an inclusion-exclusion argument and can take exponential time to compute.
Almost independently from the previous sections, we describe in Section~\ref{s:scaffolding} a method to build numerous bijections between triangular paths and Motzkin paths of bounded amplitude. To do so, we relate the number of triangular paths starting at any $z \in \T_L$ and the numbers of Motzkin meanders of amplitude bounded by $L$ starting at height $i$ (Theorem~\ref{theo:anywhere}). This proves the existence of an object which we name \textit{scaffolding}, which works in much the same way as a finite-state transducer. This enables us to find several parameterized bijections between forward paths and Motzkin paths (Algorithm~\ref{algo:scaffolding1}), which can be extended into bijections between generic triangular paths and bicolored Motzkin paths (Subsection~\ref{ss:bico}). In Subsection~\ref{ss:trapeziums} we give an explicit scaffolding, with simple, albeit numerous transition rules, which has the additional property that it is independent of the size $L$.
Finally, in Section~\ref{s:generalization} we generalize our results to higher dimensions. The triangular lattice naturally extends to a simplicial lattice, in which the ratio property between forward paths and generic paths (Theorem~\ref{theo:generic_forward}) still holds. More surprisingly, we find a new bijection specifically in dimension $3$. It matches walks using $4$ steps confined within a pyramid with walks using the $4$ cardinal steps returning to the $x$-axis confined in a domain which is the upper half of a square that has been rotated by $45^\circ$ (Theorem~\ref{theo:waffle_to_pyramid}). The second family of walks being easier to count than the first one, we find a formula for the generating function of the pyramidal walks, which was part of an open question from~\cite{MortimerPrellberg}.
The bijections between forward paths and Motzkin paths have been implemented in \texttt{python} and are available at {\url{https://tinyurl.com/yajkqlyv}}.
\section{From forward paths to generic triangular paths} \label{s:symmetry}
This section describes a one-to-$2^n$ function from the set of forward paths of length $n$ in $\T_L$ to the set of generic paths of length $n$ in $\T_L$. This is a crucial step in finding a combinatorial proof of Theorem~\ref{theo:mortimerprellberg}.
More precisely, we are going to describe a bijection between {different} sets of paths {where in each set, all paths} have the same sequence of forward and backward steps, which we call {the} \textit{direction vector}.
\begin{Definition}\label{def:direction_vector} The \emph{direction vector} of a generic path $(\omega_1,\ldots,\omega_n)$ is the finite sequence $(D_1,\dots,D_n)$ where $D_i = F$ if $\omega_i$ is a forward step {and} $D_i = B$ {if $\omega_{i}$} is a backward step. \end{Definition}
A forward path is then a generic path with direction vector $(F,\ldots,F)$. Many examples of paths along with their direction vectors are shown in Figure~\ref{figure:boolean}.
\begin{Theorem}\label{theo:directions} Given $z \in \T_L$ and two sequences $W$ and $W'$ of $\{F,B\}^n$, the set of triangular paths starting from $z$ of direction vector $W$ {is} in bijection with the set of triangular paths starting from $z$ of direction vector $W'$. \end{Theorem}
This theorem will be proved in Section~\ref{ss:bij_generic}.
\subsection{Forward and backward paths}
This subsection shows by induction, without a bijection, a particular case of Theorem~\ref{theo:directions} between two direction vectors: $W= (F,\dots,F)$ and $W'=(B,\dots,B)$. This provides an elementary proof of a weaker result, which helps to understand why the more general theorem works.
\begin{Definition} A \emph{backward (triangular) path} is a triangular path of direction vector $(B,B,\dots,B)$. In other words, a backward path starting at $z \in \T_L$ is a sequence $(\overline{\sigma_1},\ldots,\overline{\sigma_n})\in \Sb^{n}$ satisfying: \[\forall k\in\{1,\ldots,n\}, \quad z+\sum_{i=1}^k \overline{\sigma_i} \in \T_L.\] \end{Definition}
\begin{Theorem} Let $z$ be any point of $\T_L$ and $n \geq 0$. Inside $\T_L$, there are as many \emph{forward} paths of length $n$ starting from $z$ as \emph{backward} paths of length $n$ starting from $z$. \label{theo:first} \end{Theorem}
The proof will use the following lemma, which concerns paths with \textit{one} forward step and \textit{one} backward step:
\begin{Lemma} Given a starting point $z$ and an ending point $z'$, there are as many paths of length $2$ from $z$ to $z'$ made of a forward step then a backward step, as paths of length $2$ from $z$ to $z'$ made of a backward step then a forward step. \label{lemma:length2} \end{Lemma}
\begin{proof} This lemma is obvious whenever the two steps can be permuted.
Let us {first} show that given a forward step $\sigma$ and a backward step~$\overline \tau$ such that $\sigma \neq - \overline \tau$, the path $(\sigma,\overline \tau)$ stays in $\T_L$ from $z$ to $z'$ if and only if the path $(\overline \tau, \sigma)$ stays in $\T_L$ from $z$ to $z'$. For such steps $\sigma$ and $\overline \tau$, there are two possibilities: \begin{enumerate} \item \textbf{ $\boldsymbol \sigma$ is a step $\boldsymbol{s_i}$ and $\boldsymbol {\overline \tau}$ is $\boldsymbol{\overline{s_{i+1}}}$.} By cyclic permutation, we can assume that $\sigma = s_1 = e_1 - e_3$ and $\overline \tau = \overline{s_2} = e_1 - e_2$. If $z+\sigma \in \T_L$ and $z + \sigma + \overline \tau \in \T_L$, then $z$ must have a positive $e_2$-coordinate and a positive $e_3$-coordinate. The same property holds if we replace the condition $z+\sigma \in \T_L$ by $z+\overline \tau \in \T_L$. Therefore, we can permute the forward step and the backward step in that case. \item \textbf{ $\boldsymbol {\overline \tau}$ is a step $\boldsymbol{\overline{s_{i}}}$ and $\boldsymbol \sigma$ is a step $\boldsymbol{s_{i+1}}$.} Again, we can assume that $\overline \tau = \overline{s_1} = e_3 - e_1$ and $\sigma = s_2 = e_2-e_1$. Under the assumption that $z + \sigma + \overline \tau \in \T_L$, we need $z$ to have an $e_1$-coordinate at least equal to $2$. In this case, both paths $(\sigma,\overline \tau)$ and $(\overline \tau, \sigma)$ are valid. \end{enumerate}
\begin{figure}
\caption{All paths of length $2$ returning to their starting point.}
\label{figure:length2}
\end{figure}
It remains to deal with paths satisfying $\sigma = -\overline \tau$. It is equivalent to treat the case $z=z'$. It is then easy to check that for each possible position of $z$, there are as many paths of length $2$ beginning with a forward step as paths of length $2$ beginning with a backward step,
as summarized by Figure~\ref{figure:length2}. \end{proof}
\begin{proof}[Proof of Theorem~\ref{theo:first}] Let $f_n(z)$ be the number of forward paths of length $n$ and starting at $z \in \T_L$, and $b_n(z)$ be the analogue for backward paths. We wish to prove that $f_n(z) = b_n(z)$ for every $z \in \T_L$ by strong induction on $n\geq 0$.
For $n=0$ and $n=1$, the property is straightforward.
Let us assume that the assumption is true for some $n \geq 1$ and $n-1$. For $z \in \T_L$ we have: \[ f_{n+1}(z) = \sum_{ \substack{ \sigma \in \Sf \\ z + \sigma \in \T_L }} f_n(z + \sigma). \] By the induction assumption, \begin{align*} f_{n+1}(z) & = \sum_{ \substack{ \sigma \in \Sf \\ z + \sigma \in \T_L }} b_n(z + \sigma) \\ &= \sum_{ \substack{ \textrm{path of length } 2\\\textrm{from }z\textrm{ to }z' \\ \textrm{made of a forward step}\\ \textrm{then a backward step}}} b_{n-1}(z'). \end{align*} We use the induction assumption now for $n-1$, and Lemma~\ref{lemma:length2}: \begin{align*} f_{n+1}(z) &= \sum_{ \substack{ \textrm{path of length } 2\\\textrm{from }z\textrm{ to }z' \\ \textrm{made of a backward step}\\ \textrm{then a forward step}}} f_{n-1}(z')\\
&= \sum_{ \substack{ \overline \tau \in \Sb \\ z + \overline \tau \in \T_L }} f_n(z + \overline \tau). \\
&= \sum_{ \substack{ \overline \tau \in \Sb \\ z + \overline \tau \in \T_L }} b_n(z + \overline \tau). &\textrm{(by induction)} \\ & = b_{n+1}(z), \end{align*} which concludes the induction, and hence the proof. \end{proof}
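On small instances, Theorem~\ref{theo:first} can also be verified by brute force, as in the following Python sketch, which counts forward and backward paths from every point of $\T_3$ for all lengths $n\leq 4$.
\begin{verbatim}
L = 3
S_FWD = [(1, 0, -1), (-1, 1, 0), (0, -1, 1)]          # s1, s2, s3
S_BWD = [tuple(-c for c in s) for s in S_FWD]         # backward steps

def count(z, steps, n):
    """Paths of length n from z staying in T_L (all coordinates >= 0)."""
    if n == 0:
        return 1
    total = 0
    for s in steps:
        w = tuple(a + b for a, b in zip(z, s))
        if min(w) >= 0:
            total += count(w, steps, n - 1)
    return total

points = [(x1, x2, L - x1 - x2) for x1 in range(L + 1) for x2 in range(L + 1 - x1)]
assert all(count(z, S_FWD, n) == count(z, S_BWD, n)
           for z in points for n in range(5))
print("f_n(z) = b_n(z) for every z in T_3 and every n <= 4")
\end{verbatim}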
\subsection{Bijection between sets of different direction vectors} \label{ss:bij_generic}
In this subsection, we describe a bijection that proves Theorem~\ref{theo:directions}.
\begin{figure}
\caption{The bijections between all direction vectors (arranged as a Boolean lattice) applied to the forward path $(s_1,s_2,s_1)$.}
\label{figure:boolean}
\end{figure}
This bijection consists in combining the elementary operations below, in any possible order, until reaching a path with the desired direction vector.
\begin{Definition}[Flips] We define here elementary reversible operations on a generic path $(\omega_1,\dots,\omega_n)$. \\ {A \textbf{swap flip}} changes two consecutive steps $\omega_i$ and $\omega_{i+1}$ with respect to the rules: \begin{align*} (s_j,\overline{s_k}) \lra (\overline{s_k}, s_j)& \quad \mbox{if} \; j\not=k, \\ (s_k,\overline{s_k}) \lra (\overline{s_{k-1}},s_{k-1}) & \quad \mbox{otherwise.} \end{align*} (Recall that by convention, $s_0 = s_3$.) This has the effect of doing a flip $(F,B) \lra (B,F)$ in the direction vector.
{A} \textbf{last-step flip} changes the direction of the last step $\omega_n$ thanks to the rule: \[s_i \lra \overline{s_{i-1}}\]
\label{def:flips} \end{Definition}
For example, if we wish to bijectively transform the path $(\bsc,\bsc,\bsb)$ into a path of direction vector $(F,B,B)$, we use the following flips (cf Figure~\ref{figure:boolean}): \begin{align*} (\bsc,\bsc,\bsb) \quad & \underset{ \bsb \rightarrow s_3 }\longleftrightarrow \quad (\bsc,\bsc,s_3) \quad \underset{ (\bsc,s_3) \rightarrow (s_1,\bsa) }\longleftrightarrow \quad (\bsc,s_1,\bsa) \\
& \underset{ (\bsc,s_1) \rightarrow (s_1,\bsc) }\longleftrightarrow \quad (s_1,\bsc,\bsa). \end{align*}
Note that swap flips give a constructive proof to Lemma~\ref{lemma:length2}.
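For concreteness, the following Python sketch implements the two flips, encoding $s_i$ as \texttt{('s', i)} and $\overline{s_i}$ as \texttt{('b', i)}, and replays the example above.
\begin{verbatim}
def m3(i):                                   # representative of i in {1, 2, 3}
    return (i - 1) % 3 + 1

def swap_flip(u, v):
    """Swap flip on a consecutive pair, one forward and one backward step."""
    (tu, iu), (tv, iv) = u, v
    if tu == 's' and tv == 'b':              # (s_j, b_k)
        if iu != iv:
            return v, u                      # -> (b_k, s_j)
        return ('b', m3(iu - 1)), ('s', m3(iu - 1))  # (s_k, b_k) -> (b_{k-1}, s_{k-1})
    if tu == 'b' and tv == 's':              # (b_j, s_k)
        if iu != iv:
            return v, u
        return ('s', m3(iv + 1)), ('b', m3(iv + 1))  # (b_k, s_k) -> (s_{k+1}, b_{k+1})
    raise ValueError("one forward and one backward step expected")

def last_step_flip(u):
    """s_i <-> b_{i-1}, applied to the last step of the path."""
    t, i = u
    return ('b', m3(i - 1)) if t == 's' else ('s', m3(i + 1))

p = [('b', 3), ('b', 3), ('b', 2)]           # the path (b3, b3, b2)
p[2] = last_step_flip(p[2])                  # -> (b3, b3, s3)
p[1], p[2] = swap_flip(p[1], p[2])           # -> (b3, s1, b1)
p[0], p[1] = swap_flip(p[0], p[1])           # -> (s1, b3, b1): direction vector (F, B, B)
print(p)
\end{verbatim}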
\begin{proof}[Proof of Theorem~\ref{theo:directions}]
We want to prove that successive flips induce a well-defined bijection between sets of triangular paths with different direction vectors. To do so, we have to establish the following points.
\begin{enumerate} \item \textbf{The flips are well defined.}
In other words, we want to show that a flip does not make a path of $\T_L$ go outside $\T_L$.
For flips swapping steps $s_i$ and $\overline{s_j}$ such that $s_i \neq - \overline{s_j}$, we showed in the proof of Lemma~\ref{lemma:length2} that a forward step and a backward step can commute under the condition that the two steps are not opposite.
The swap flip $(s_1,\bsa) \lra (\bsc,s_3)$ is also well-defined because $s_1$ and $\bsc$ have both a negative $e_3$-coordinate. Therefore, the position of the point just before the flip must have a positive $e_3$-coordinate. One can safely apply $s_1$ or $\bsc$.
Similar arguments hold for the other swap flips, and for last-step flips.
\item \textbf{Each flip is bijective.}
This is clear from the definition of the flips.
\item \textbf{Given two sequences $W$ and $W'$ of $\{F,B\}^n$, one can transform any path with direction vector $W$ into a path of direction vector $W'$ by successive flips.}
If $W$ and $W'$ have the same number of $B$'s, then we can use swap flips to transform a walk of direction vector $W$ into one of direction vector $W'$.
Otherwise, we can increment (resp. decrement) the number of $B$'s of the direction vector by putting a forward step (resp. a backward step) at the end of the walk using successive swap flips, then changing the direction of this last step using a last-step flip. We rinse and repeat until obtaining the desired number of $B$'s, then use swap flips as above.
\item \textbf{If two different sequences of flips lead to triangular paths $p$ and $p'$ that share a same direction vector, then $p = p'$. }\label{item:unique} \end{enumerate} The proof of the last point is postponed {until} the next subsection (Proposition~\ref{prop:tiling}). \end{proof}
In particular, Theorem~\ref{theo:directions} gives a bijective proof of Theorem~\ref{theo:first}. If we wish to make it explicit, we can write an algorithm that chooses a specific sequence of flips that transforms an $(F,\dots,F)$ direction vector into a $(B,\dots,B)$ vector.
\begin{Corollary} Given $z \in \T_L$ and an integer $n$, Algorithm~\ref{algo1} forms a bijection between forward paths of length $n$ starting at $z$ and backward paths of length $n$ starting at $z$. This bijection depends neither on the length $L$ of the triangular lattice, nor on the position of the starting point $z$. \end{Corollary}
\begin{algorithm}[caption={Bijection between forward paths and backward paths (for \textit{flips}, see Definition~\ref{def:flips}).}, label={algo1}] input: a forward path p output: a backward path p n $\gets$ length of p; for i from 1 to n do make a last-step flip on p[n];
for j decreasing from n-1 to i
do make a swap flip between p[j] and p[j+1]; \end{algorithm}
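A runnable Python transcription of Algorithm~\ref{algo1} reads as follows, with $s_i$ encoded as $+i$ and $\overline{s_i}$ as $-i$.
\begin{verbatim}
def m3(i):                                       # representative of i in {1, 2, 3}
    return (i - 1) % 3 + 1

def swap(u, v):
    """Swap flip on a (forward, backward) or (backward, forward) pair of steps."""
    if u > 0 > v:
        return (v, u) if u != -v else (-m3(u - 1), m3(u - 1))
    if u < 0 < v:
        return (v, u) if -u != v else (m3(v + 1), -m3(v + 1))
    raise ValueError("one forward and one backward step expected")

def last_flip(u):                                # last-step flip: s_i <-> b_{i-1}
    return -m3(u - 1) if u > 0 else m3(-u + 1)

def algo1(p):
    """The algorithm above: flip the last step, bubble it leftwards, repeat."""
    p, n = list(p), len(p)
    for i in range(n):
        p[-1] = last_flip(p[-1])
        for j in range(n - 2, i - 1, -1):
            p[j], p[j + 1] = swap(p[j], p[j + 1])
    return p

print(algo1([1, 2, 1]))                          # [-3, -3, -2], i.e. (b3, b3, b2)
assert algo1(algo1([1, 2, 1])) == [1, 2, 1]      # an involution (see the remark below)
\end{verbatim}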
\begin{Remark} Algorithm~\ref{algo1} also transforms (in a bijective manner) a backward path into a forward path. Thus, if we apply Algorithm~\ref{algo1} twice to a forward path, we again obtain a forward path. Therefore, assuming that the uniqueness claimed in Item~\eqref{item:unique} in the proof of Theorem~\ref{theo:directions} holds (and it does), the two forward paths must be the same: Algorithm~\ref{algo1} is in fact an involution. \end{Remark}
\subsection{Description of the bijection in terms of folded paths}
This section presents the bijection of Theorem~\ref{theo:directions} in a more symmetric fashion. The last-step flip, which we defined in Definition~\ref{def:flips}, can be actually seen as a disguised swap flip, under the condition that the path is extended to what we call a \textit{folded path}.
\begin{Definition}[Folded paths] Given a generic path $\omega = (\omega_1,\dots,\omega_n) \in (\Sf\cup \Sb)^n$, we define the \emph{folding} of $\omega$ as the path $$\dba{\omega}=(\omega_1,\ldots,\omega_n,-\omega_n,\ldots,-\omega_1).$$ Such paths are said to be \emph{folded}. \end{Definition}
Let us denote by $\mathcal S_n$ the tilted square lattice
\[ \mathcal S_n = \{ (i,j) \, \in \, \N \times \N \quad : \quad |i| + |j| \leq n \}. \] We will geometrically represent folded paths of length $2n$ as labeled walks on $\mathcal S_n$ starting at $(-n,0)$. To construct the walk on $\mathcal S_n$, we replace every forward step by a North-East step $(+1,+1)$, and every backward step by a South-East step $(+1,-1)$. Moreover, these North-East and South-East steps will carry labels, which are the steps of $\Sf \cup \Sb$ from which they originate. For example, the folding of the path $(s_1,\bsc,\bsa)$ is represented on the left of Figure~\ref{figure:diamond}.
\begin{figure}
\caption{The geometric representation of the bijection}
\label{figure:diamond}
\end{figure}
Now, we are going to emulate the effect of swap flips (see Definition~\ref{def:flips}) on these walks. More precisely, we view $\mathcal S_n$ as a square of size $n \times n$ which can be filled out with $1 \times 1$ square tiles of $9$ types (see Figure~\ref{figure:rules}). The four sides of the $9$ allowed tiles are labeled with elements of $\Sf \cup \Sb$ such that the pairs formed by the two top labels and the two bottom labels correspond to a commutation rule described in Definition~\ref{def:flips}.
The tiling of $S_n$ proceeds as follows. We begin with the labels given by a folded path. Then, we place copies of the tiles of Figure~\ref{figure:rules} in such a way that the two top labels or the two bottom labels match (like a domino) with labels which were already in $S_n$. Eventually, we obtain an alternative description of the bijection of Theorem~\ref{theo:first}, and thus the required uniqueness:
\begin{figure}
\caption{The $9$ possible tiles}
\label{figure:rules}
\end{figure}
\begin{Proposition} Let $\dba \omega$ be the folding of a triangular path $\omega$ of length $n$, which we embed in the tilted square lattice $\mathcal S_n$ as described above.
There is a unique way to tile $\mathcal S_n$ with the $9$ tiles of Figure~\ref{figure:rules} while preserving the labels of $\dba \omega$.
Furthermore, let us fix a sequence $W = (W_1,\dots,W_n)$ of $\{F,B\}^n$. The path of direction vector $W$ which corresponds to $\omega$ under the bijection of Theorem~\ref{theo:directions} is defined by the sequence of labels obtained by following the walk in $S_n$ whose $k$-th step is North-East if $W_k = F$ or South-East if $W _k = B$.
\label{prop:tiling} \end{Proposition}
\begin{Example} Let us consider the path $(s_1,\bsc,\bsa)$, represented in Figure~\ref{figure:rules} (left). The unique corresponding tiling is displayed on the right of the figure.
If we want the path of direction vector $(B,F,F)$ corresponding to $(s_1,\bsc,\bsa)$, then we have to read labels from the walk going SE, NE, NE (in this order). We find $(\bsc,s_1,s_2)$. \end{Example}
\begin{proof}[Proof of Proposition~\ref{prop:tiling}] The existence and the uniqueness of the tiling are proved by induction. We just have to notice that every pair $(\sigma,\overline \tau)$ with $\sigma \in \Sf$ and $\overline \tau \in \Sb$ appears once among the top labels of the $9$ tiles, and every pair $(\overline \tau, \sigma)$ appears also once among the bottom labels. We have no choice {in} how to place new tiles: the tiling is automatic and unambiguous.
To connect the tiling with the bijection of Theorem~\ref{theo:first}, note that: \begin{itemize} \item A swap flip at positions $k$ and $k+1$ can be emulated by positioning a tile along the $k$-th and the $(k+1)$-th step \textit{and} by symmetrically placing a second tile along the $(2n-k+1)$-th and the $(2n-k)$-th step. \item A last-step flip can be emulated by positioning a tile on the vertical axis of $S_n$. \end{itemize} One thus recovers what we described in previous subsection. \end{proof}
As a consequence, in view of the vertical symmetry of the tiling, one can describe the bijection of Theorem~\ref{theo:first} uniquely in terms of swap flips -- as claimed at the beginning of this subsection.
\begin{Corollary} The folded paths of direction vector $(F,\dots,F,B,\dots,B)$ are in bijection with the folded paths of direction vector $(B,\dots,B,F,\dots,F)$ via successive uses of swap flips. \end{Corollary}
\section{A first bijection between forward paths and Motzkin meanders} \label{s:expo}
In this section, we provide two proofs of Corollary~\ref{cor:forward-motzkin}: the first one uses an induction and is elementary, the second one is based on a recursive bijection which is derived from the first proof.
\subsection{Recursive proof of the equinumeracy} \label{ss:equinumerosity}
The following proposition links Motzkin meanders and forward paths starting from the border of $\T_L$.
\begin{Proposition}\label{prop:motzkin_inductive} For any $n \geq 0$ and $L > 0$, let $f_n(z)$ be the number of forward paths in $\T_L$ of length $n$ starting at $z$, and $\Mo{H}{n}{\ell}$, where $H=\lfloor L/2\rfloor$, the number of Motzkin meanders of length $n$ starting at height $\ell$ and with an amplitude bounded by $L$ (see Subsection~\ref{ss:motzkin} for the definitions).
Then, we have the formula \[f_n(\origin+\ell s_1)=\sum_{i=0}^{\ell} \Mo{H}{n}{i},\] for $\ell \in \{0,\ldots,\lfloor L/2\rfloor\}$. \end{Proposition}
As a particular case $\ell=0$ of the result above, we recover the statement of Corollary~\ref{cor:forward-motzkin}.
\begin{Example} Figure~\ref{figure:forward-motzkin} corroborates Proposition~\ref{prop:motzkin_inductive} with $n=3$, $L = 3$, and $\ell = 1$: numbers agree ($8$ on each side). Remark that if $L$ is larger ($L \geq 4$), the forward path $s_1 s_1 s_1$ will be added on the left, and the Motzkin meander $\nearrow, \searrow, \searrow$ on the right. \end{Example}
\begin{figure}
\caption{\textit{Left.} $8$ forward paths of length $3$ starting from $\origin + s_1$ in $\T_3$. \textit{Right.} $8$ Motzkin meanders of length $3$ and amplitude bounded by $L = 3$: four of them begin at height $0$, the remaining four begin at height $1$.}
\label{figure:forward-motzkin}
\end{figure}
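Before turning to the proof, note that the identity of Proposition~\ref{prop:motzkin_inductive} is easily verified by brute force on small cases, as in the following Python sketch for $L=3$ and $n=3$.
\begin{verbatim}
from itertools import product

def forward_count(L, ell, n):
    """Forward paths of length n in T_L starting from O + ell*s1 = (ell, 0, L-ell)."""
    steps = [(1, 0, -1), (-1, 1, 0), (0, -1, 1)]
    def rec(z, m):
        if m == 0:
            return 1
        total = 0
        for s in steps:
            w = tuple(a + b for a, b in zip(z, s))
            if min(w) >= 0:
                total += rec(w, m - 1)
        return total
    return rec((ell, 0, L - ell), n)

def meander_count(L, start, n):
    """Motzkin meanders of length n starting at height `start`, amplitude <= L."""
    total = 0
    for path in product((1, 0, -1), repeat=n):
        h, heights = start, [start]
        for s in path:
            h += s
            heights.append(h)
        if min(heights) < 0 or heights[-1] != 0:
            continue
        top = max(heights)
        amp = 2 * top + (1 if any(s == 0 and heights[k] == top
                                  for k, s in enumerate(path)) else 0)
        total += amp <= L
    return total

L, n = 3, 3
for ell in range(L // 2 + 1):
    print(ell, forward_count(L, ell, n),
          sum(meander_count(L, i, n) for i in range(ell + 1)))
# ell = 0: 4 = 4        ell = 1: 8 = 4 + 4   (cf. the figure above)
\end{verbatim}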
\begin{proof}[Proof of Proposition~\ref{prop:motzkin_inductive}.] Let us introduce the notation $g_n( \ell)=f_n(\origin + \ell s_1)$, with the convention that $g_n(\ell)=0$ for $\ell<0$. Let us also write $\D g_n(\ell)=g_n(\ell)-g_n(\ell-1)$, and $H = \lfloor L/2\rfloor$.
Note that the numbers of Motzkin meanders $\Mo{H}{n}{\ell}$ satisfy the obvious recurrences \begin{align*} \Mo{H}{n}{\ell} & = \Mo{H}{n-1}{\ell-1} + \Mo{H}{n-1}{\ell} + \Mo{H}{n-1}{\ell+1} & \textrm{for }\ell \in \{1,\dots,H-1\}, \\ \Mo{H}{n}{0} & = \Mo{H}{n-1}{0} + \Mo{H}{n-1}{1}, \\ \Mo{H}{n}{H} & = \left\{ \begin{array}{ll}\Mo{H}{n-1}{H-1} + \Mo{H}{n-1}{H} & \textrm{ if }L\textrm{ is odd}\\ \Mo{H}{n-1}{H-1} & \textrm{ if }L\textrm{ is even}\\ \end{array}\right. , \end{align*} for $n \geq 1$. The proof is completed whenever we find the same recurrences for $\D g_n(\ell)$. The reader can refer to Figure~\ref{figure:recursion_explanation} as a visual support for what follows.
\begin{figure}
\caption{Explanation of Equations \eqref{eq:firstpiece} and \eqref{eq:secondpiece} in the generic case. A dot with a subscript~$n$ represents the number of forward paths of length $n$ starting from this point (which is, by Theorem~\ref{theo:first}, also the number of backward paths). }
\label{figure:recursion_explanation}
\end{figure}
For any $\ell\in\{1,\ldots,H-1\}$, starting from $\origin +\ell s_1$, the only possible forward steps are $s_1$ and $s_2$, so that \begin{align} g_n(\ell)&=f_{n-1}(\origin+ (\ell+1) s_1) + f_{n-1} (\origin + \ell s_1 +s_2) \nonumber \\ & = g_{n-1}(\ell+1)+f_{n-1}(\origin +\ell s_1+s_2). \label{eq:firstpiece} \end{align} We now count backward paths starting from $\origin+ (\ell-1) s_1$. By Theorem~\ref{theo:first}, if $b_n(z)$ is the number of backward paths of length $n$ starting at $z$, we have $f_n(z)=b_n(z)$ for every $z \in \T_L$. In particular, $g_n(\ell - 1) = b_n(\origin + (\ell - 1) s_1)$. Since the only possible backward steps from $\origin + (\ell - 1) s_1$ are $\bsa$ and $\bsc$, we have for any $\ell\in\{1,\ldots,H-1\}$, \begin{align} g_n(\ell-1)&= b_{n-1}(\origin +(\ell-1) s_1 + \bsa )+b_{n-1}(\origin + (\ell-1) s_1 + \bsc) \nonumber \\ &= f_{n-1}(\origin +(\ell-1) s_1 + \bsa )+f_{n-1}(\origin + (\ell-1) s_1 + \bsc) \nonumber \\ & = g_{n-1}(\ell-2) + f_{n-1}(\origin+ \ell s_1 + (\bsc - s_1) ) \nonumber \\ &=g_{n-1}(\ell-2) + f_{n-1}(\origin +\ell s_1+s_2) \label{eq:secondpiece}. \end{align} (Note that the case $\ell = 1$ is correctly handled since, by convention, $g_{n-1}(-1)=0$.) Combining \eqref{eq:firstpiece} and \eqref{eq:secondpiece}, we deduce that for $\ell\in\{1,\ldots,H-1\}$, \[g_n(\ell)-g_n(\ell-1)=g_{n-1}(\ell+1)-g_{n-1}(\ell-2),\] and hence \[\D g_n(\ell)=\D g_{n-1}(\ell-1)+\D g_{n-1}(\ell)+\D g_{n-1}(\ell+1).\]
As for $\ell=0$, we straightforwardly have \begin{align*} \D g_{n}(0)&=g_n(0)=g_{n-1}(1)\\ &=\D g_{n-1}(0)+\D g_{n-1}(1). \end{align*}
\textbf{(i) Let us first assume that $L=2H+1$ is odd.} Then, using a symmetry through the plane of equation $x_1 = x_3$ ($x_1$ being the coordinate in $e_1$ and $x_3$ the one in $e_3$), we have $f_{n-1}( \origin + H s_1 )=b_{n-1}(\origin + (H+1) s_1 )$. By Theorem~\ref{theo:first}, this translates into $g_{n-1}(H) = g_{n-1}(H+1)$. Thus, $\D g_{n-1}(H+1)=0$, and $$\D g_{n}(H)=\D g_{n-1}(H-1)+\D g_{n-1}(H).$$ It follows that $(\D g_{n}(\ell))_{0\leq \ell \leq H}$ satisfies the following recursion $$ \begin{pmatrix} \D g_n(0)\\ \D g_n(1)\\ \vdots\\ \vdots \\ \vdots\\ \D g_n(H)\\ \end{pmatrix} = \begin{pmatrix} 1&1&0&\cdots&\cdots&0\\ 1&1&1&\ddots&&\vdots\\ 0&1&1&1&\ddots&\vdots\\ \vdots&\ddots&\ddots&\ddots&\ddots&0\\ \vdots&&\ddots&1&1&1\\ 0&\cdots&\cdots&0&1&1\\ \end{pmatrix} \begin{pmatrix} \D g_{n-1}(0)\\ \D g_{n-1}(1)\\ \vdots\\ \vdots \\ \vdots\\ \D g_{n-1}(H)\\ \end{pmatrix} $$ which is the same recursion that we saw for $(\Mo{H}{n}{\ell})_{0\leq \ell \leq H}$. Since the base cases agree ($\D g_0(\ell) = \Mo{H}{0}{\ell} = 0$ for $\ell \geq 1$, and $\D g_0(0) = \Mo{H}{0}{0} = 1$),
we have the equality $\Mo{H}{n}{\ell}=\D g_n(\ell)$, and the result directly follows.
\textbf{(ii) Let us now assume that $L=2H$ is even.} Again, thanks to the symmetry with respect to the plane $x_1 = x_3$, we have $g_{n-1}(H-1)=g_{n-1}(H+1)$, so that $\D g_{n-1}(H+1)+\D g_{n-1}(H)=0$, and \[\D g_n(H)=\D g_{n-1}(H-1).\] It follows that $(\D g_n(\ell))_{0\leq \ell \leq H}$ satisfies the following recursion $$ \begin{pmatrix} \D g_n(0)\\ \D g_n(1)\\ \vdots\\ \vdots \\ \vdots\\ \D g_n(H)\\ \end{pmatrix} = \begin{pmatrix} 1&1&0&\cdots&\cdots&0\\ 1&1&1&\ddots&&\vdots\\ 0&1&1&1&\ddots&\vdots\\ \vdots&\ddots&\ddots&\ddots&\ddots&0\\ \vdots&&\ddots&1&1&1\\ 0&\cdots&\cdots&0&1&0\\ \end{pmatrix} \begin{pmatrix} \D g_{n-1}(0)\\ \D g_{n-1}(1)\\ \vdots\\ \vdots \\ \vdots\\ \D g_{n-1}(H)\\ \end{pmatrix}. $$ We thus recover the recursion of $(\Mo{H}{n}{\ell})_{0\leq \ell \leq H}$, and we conclude as above.
\end{proof}
\subsection{Exponential bijection}
We now convert the argument of Subsection~\ref{ss:equinumerosity} to a bijection, albeit one which is defined recursively and takes non-linear time to apply.
We fix in this section the length $L$ of the triangular lattice $\T_L$, and $H$ the semi-length: $H = \lfloor L/2\rfloor$.
Let $G_{n}(k)$ be the set of forward paths of length $n$ starting at $\origin + k s_{1}$ and let $M_{n}(k)$ be the set of Motzkin meanders of length $n$ starting at height $k$ and having amplitude bounded by $L$.
It follows from Proposition \ref{prop:motzkin_inductive} that $|M_{n}(k)|=|G_{n}(k)|-|G_{n}(k-1)|$.
To show this bijectively, we will recursively define a sequence of bijective functions $\Omega_{n,k}:G_{n}(k)\to M_{n}(k)\cup G_{n}(k-1)$ for $n\in\mathbb{N}$ and $k\in[0,H]$. This will use the bijection of Theorem \ref{theo:directions} between triangular paths with different direction vectors. In particular, we will use it in the special case sending paths with a given direction vector $W$ of length $n$ to paths with direction vector $(F,\dots,F)$. We denote this function by $W_{n}$; it is a bijection once its domain is restricted to the paths with a fixed direction vector.
\begin{figure}
\caption{Algorithm computing $\Omega_{n,k}(\omega)$ where $\omega$ is a path of length $n$ starting at $\origin + k s_1$}
\label{figure:Omega}
\end{figure}
\begin{Theorem} Let $k$ and $n$ be two integers with $k \leq H$. The function $\Omega_{n,k}$, defined by Figure~\ref{figure:Omega}, is a bijection from $G_{n}(k)$ to $M_{n}(k)\cup G_{n}(k-1)$, where $G_{n}(k)$ is the set of forward paths of length $n$ starting at $\origin +k s_{1}$, and $M_{n}(k)$ is the set of Motzkin meanders of length $n$ starting at height $k$ and having amplitude bounded by $L$. \label{theo:Omega} \end{Theorem}
\begin{proof}\textbf{ 1. Let us show that the map is well-defined, i.e. its image is included in $M_{n}(k)\cup G_{n}(k-1)$. }
This is quite straightforward, except perhaps for two points: \begin{itemize} \item Why is the path $\rho$ from block $15$ of Figure~\ref{figure:Omega} a valid triangular path of $\T_L$? By replacing the steps $s_{1},s_{2},s_{3}$ with $\bsa, \bsc, \bsb$, the path $\omega'$ undergoes a reflection about the vertical midline of $\T_L$. Thus, $\omega'$ is transformed into a backward path starting at $\origin + H s_1$ (if $L$ is odd) or at $\origin + (H-1) s_1$ (if $L$ is even). Applying $W_{n-1}$ makes it a forward path, which is $\rho$, and which belongs to $G_{n-1}(H)$ (when $L$ is odd) or $G_{n-1}(H-1)$ (when $L$ is even).
\item When $k = H$, it is impossible to output a Motzkin meander starting at height $H$ and beginning with a $\nearrow$ step. So the amplitude of every meander in the image is bounded by $2H+1$. Moreover, when $L$ is even and $k = H$, the returned meanders cannot begin with a horizontal step, which explains why they have amplitude bounded by $L = 2H$. \end{itemize}
\noindent\textbf{ 2. Let us show by induction on $n$ that $\Omega_{n,k}$ is a bijection for every $k \geq 0$. }
The case $n=0$ is clear.
Let $n$ be a positive integer. If the image is a Motzkin meander beginning with $\nearrow$ (resp. $\rightarrow$, $\searrow$), then the algorithm must end at block $8$ (resp. $10$, resp. $12$). This covers all Motzkin paths of $M_n(k)$ (or $M'_n(k)$). Then we can bijectively recover the original path $\omega$ by following the arrows backwards up to block $1$. In fact, all the arrows are reversible, notably because of the induction hypothesis. There is no ambiguity from blocks $9$ and $12$ (where there are \textit{a priori} two possible ingoing arrows) because one can only go to blocks $7$ and $9$ if $k < H$. Otherwise, when $k = H$, one has to go to the right side of the diagram (blocks 17 and 18).
If the image is in $G_{n}(k-1)$, then the algorithm ends either at block 13 or at block 14. Since $W_{n}$ is a bijection from paths with direction vector $(B,F,F,\dots,F)$ to forward paths in $G_{n}(k-1)$, we can recover the preimage under $W_n$. If this preimage begins with $\bsa$, then the algorithm actually ended at block $13$; if it begins with $\bsc$, the algorithm ended at block $14$. At this point, we can use the above reasoning to go backwards to the root of the decision tree and find $\omega$. This proves that $\Omega_{n,k}$ is a bijection. \end{proof}
When $k=0$, Theorem~\ref{theo:Omega} provides a bijection between forward paths and Motzkin paths of bounded amplitude. Go back to Figure~\ref{figure:exponential-bijection} for examples: each forward path is displayed next to its image under $\Omega_{3,0}$.
Thus, at this point, we have answered Mortimer and Prellberg's open question (Theorem~\ref{theo:mortimerprellberg}). Indeed, starting from a bicolored Motzkin path $m$ (let us say in black and white) of length $n$ and of amplitude bounded by $L$, we can construct a direction vector $W$ from it: write $F$ for each black step; $B$ for each white step. Then, we compute $\Omega_{n,0}(m)$, which is a forward path. Finally, we use the bijection from Theorem~\ref{theo:directions} to transform the forward path into a triangular path of direction vector $W$.
Finally, let us discuss about the complexity of the algorithm. If $c(n,k)$ denotes the worst-case complexity of $\Omega_{n,k}$, then we can derive from Figure~\ref{figure:Omega} the (rough) upper bound \[c(n,k) \leq c(n-1,k+1) + c(n-1,k) + c(n-1,k-1) + n^2.\] (The $n^2$ term reflects the complexity of the function $W_{n-1}$ appearing in block $15$.) Then, by a simple induction, one can see that $c(n,k) \leq \Mo H n k + O(n^3)$ where $\Mo H n k$ is the number of Motzkin meanders of length $n$ starting at height $k$ and having amplitude bounded by $L$. Since $\Mo H n 0$ is $O(3^n)$, we deduce that the complexity of $\Omega_{n,0}$ is bounded by an exponential in $n$. However, we do not know if this bound is tight. Experimentally, we have observed that the complexity of the algorithm has a large standard deviation when the input is randomly chosen: in most cases, the complexity is linear in $n$ (in terms of running time and the number of recursive calls) but sometimes the complexity seems to be quadratic in $n$.
\section{Many other bijections} \label{s:scaffolding}
In the previous section, we described a bijection between forward paths and Motzkin paths of bounded amplitude. However, the definition being recursive, the computation of an image takes \textit{a priori} a long time, and its description lacks some clarity.
This section proposes a new way to define bijections between forward paths and Motzkin paths. Such bijections will have a double advantage. First, they only require linear time to compute. Second, these bijections are parameterized: each one of them comes with specific metadata (which we call a \textit{scaffolding}), making them all different.
\subsection{Profile}
We start by defining an integer vector for each point of $\T_L$:
\begin{Definition}[Profile] Let $z = i e_1 + j e_2 + k e_3$ be any point of $\T_L$. The \emph{profile} of $z$ is the vector $(p_0(z),\dots,p_H(z))$ where $H = \left \lfloor { \frac L 2 } \right\rfloor$ and $p_0(z),\dots,p_H(z)$ is the first half of the coefficients of the polynomial \[ \frac{(1 - x^{i+1})(1 - x^{j+1})(1- x^{k+1})}{(1-x)^2} = p_0(z) + p_1(z) x + \dots + p_H(z) x^H + \dots + p_{L+1}(z) x^{L+1}.\] \label{def:profile} \end{Definition}
\begin{figure}
\caption{A cell representation of $\T_5$. The highlighted zone corresponds to the point $e_1 + e_2 + 3 e_3$.}
\label{figure:profile}
\end{figure}
\begin{Example} Fix $L=5$. The profile of any corner of $\T_5$ (that is $5e_1$, $5 e_2$ or $5 e_3$) is $(1,0,0)$ since the corresponding polynomial is $(1-x^6)$ (regardless of the corner). The profile of the point $ e_1 + e_2 + 3 e_3$ is $(1,2,1)$, which can be found by expanding the polynomial $(1-x^2)^2(1-x^4)/(1-x)^2 = 1 + 2 x + x^2 - x^4 - 2 x^5 -x^6.$ \end{Example}
Note that one can also extend the definition of the profile to points $i e_1 + j e_2 + k e_3$ where $i=-1$ or $j=-1$ or $k = -1$. Even if they are not in $\T_L$, the polynomial $\frac{(1 - x^{i+1})(1 - x^{j+1})(1- x^{k+1})}{(1-x)^2}$ vanishes for such points, so by convention we define their profile to be the zero vector $(0,\dots,0)$. This convention will be useful to deal with border cases.
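For concreteness, the profile can be computed with a few lines of Python; the following helper (ours, not part of the paper) reproduces the two profiles of the example above, and returns the zero vector when a coordinate equals $-1$, in accordance with the convention just described.
\begin{verbatim}
def profile(i, j, k):
    """Profile (p_0(z),...,p_H(z)) of z = i*e1 + j*e2 + k*e3, with L = i+j+k."""
    L = i + j + k
    H = L // 2
    if min(i, j, k) < 0:                    # convention: zero profile outside T_L
        return [0] * (H + 1)
    # (1 - x^{i+1})(1 - x^{j+1}) / (1-x)^2 = (1 + ... + x^i)(1 + ... + x^j)
    ab = [0] * (i + j + 1)
    for s in range(i + 1):
        for t in range(j + 1):
            ab[s + t] += 1
    # multiply by (1 - x^{k+1})
    full = ab + [0] * (k + 1)
    for s, c in enumerate(ab):
        full[s + k + 1] -= c
    return full[:H + 1]

print(profile(5, 0, 0))   # corner of T_5 -> [1, 0, 0]
print(profile(1, 1, 3))   # e1 + e2 + 3 e3 -> [1, 2, 1]
\end{verbatim}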
It is convenient to represent the profiles as sets of square cells.
\begin{Definition}[Cell representation]
A \emph{cell representation of a point $z$} is a finite subset $\C(z)$ of $\Z^2$ satisfying $\left|\{ \ell \ : \ (f,\ell) \in \C(z)\}\right| = p_f(z)$ for every $f \in \{0,\dots,H\}$. A \emph{cell representation of $\T_L$} is a family $\C=(\C(z))_{z \in \T_L}$ of cell representations of points of $\T_L$. The \emph{height} of a cell $c=(f,\ell)$ is defined as $h(c)=f$. \label{def:cell_representation} \end{Definition}
The profile of every point $z$ is then illustrated by the cell representation $\C(z)$: for every $(f,\ell) \in \C(z)$, a square is placed at coordinates $(\ell,f)$.\footnote{We swap the two coordinates so that $f$ (which stands for \textit{floor}) corresponds to the height of a cell, consistent with the fact that $f$ represents the height in a Motzkin path.} For example, as shown by Figure~\ref{figure:profile}, the cell representation of $e_1 + e_2 + 3 e_3$ in $\T_5$ (whose profile is $(1,2,1)$, as mentioned above) can be represented as three rows of squares: the first (bottom) and the third (top) rows have $1$ square each while the central row has $2$ squares.
It is not obvious from Definition~\ref{def:profile} that we always have $p_f(z) \geq 0$, and hence that a cell representation of $\T_L$ exists for every $L \in \N$. However a cell representation of $\T_L$ will be explicitly given by Proposition~\ref{prop:cell_rep}, proving the non-negativity of the components of a profile.
The next lemma establishes some identities about the profile.
\begin{Lemma}Let $z$ be in $\T_L$. Then for $i \in \{1,\dots,H-1\},$ the identities \begin{align} p_i(z+s_1)+p_i(z+s_2) + p_i(z+s_3) &= p_{i-1}(z) + p_i(z) + p_{i+1}(z), \label{eq:pi} \\ p_0(z+s_1)+p_0(z+s_2) + p_0(z+s_3) &= p_0(z) + p_1(z), \label{eq:p0} \\ p_H(z+s_1)+p_H(z+s_2) + p_H(z+s_3) &= \left \{\begin{array}{ll} p_H(z) + p_{H-1}(z) & \textrm{ if }L\textrm{ is odd}\\ p_{H-1}(z) & \textrm{ if }L\textrm{ is even} \label{eq:pH} \end{array} \right. , \end{align} hold. \label{lem:identity_profile} \end{Lemma} \begin{proof} For $z = i e_1 + j e_2 + k e_3 \in \T_L$, let $Pol_z(x)$ be the polynomial of Definition~\ref{def:profile}, that is \[Pol_z(x) = \frac{(1 - x^{i+1})(1 - x^{j+1})(1- x^{k+1})}{(1-x)^2}. \] We also extend for any integer $i$ the definition of $p_i(z)$ as the coefficient of $x^i$ in $Pol_z(x)$.
By an inelegant but simple expansion, one can check the identity \[Pol_{z+s_1}(x)+Pol_{z+s_2}(x)+Pol_{z+s_3}(x) = \left(x + 1 + \frac 1 x\right) Pol_z(x) + x^{L+2} - \frac 1 x.\] Extracting the coefficient of $x^i$ in the above equality for $i \in \{0,\dots,H\}$ straightforwardly gives
\[p_i(z+s_1)+p_i(z+s_2) + p_i(z+s_3) = p_{i-1}(z) + p_i(z) + p_{i+1}(z),\]
which proves \eqref{eq:pi}. The equality \eqref{eq:p0} comes from the fact that $p_{-1}(z) = 0$.
Concerning $i = H$, we remark that \[x^{L+1}Pol_z(1/x)= -Pol_z(x),\] and hence $p_{L+1-j}(z) = -p_j(z)$ for every integer $j$. In particular, if $L = 2H+1$, then for $j = H + 1$, we have $p_{H+1}(z) = -p_{H+1}(z)$ and so $p_{H+1}(z) = 0$. Equality \eqref{eq:pH} is then obtained by substituting $i=H$ and $p_{i+1} = 0$ in \eqref{eq:pi}. As for $L = 2H$ even, set $j=H$, and get $p_{H+1}(z) = -p_H(z)$, which implies that only the term $p_{H-1}(z)$ does not disappear in the right-hand side of the equality. \end{proof}
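The ``inelegant but simple expansion'' can also be delegated to a computer. The following Python check (ours) compares the integer coefficient lists of both sides of the identity, after multiplying them by $x$ so as to work with ordinary polynomials: $x\bigl(Pol_{z+s_1}(x)+Pol_{z+s_2}(x)+Pol_{z+s_3}(x)\bigr)$ versus $(x^2+x+1)\,Pol_z(x)+x^{L+3}-1$.
\begin{verbatim}
def pol(i, j, k):
    """Coefficients of Pol_z(x) = (1-x^{i+1})(1-x^{j+1})(1-x^{k+1})/(1-x)^2."""
    if i < 0 or j < 0 or k < 0:
        return [0]
    ab = [0] * (i + j + 1)
    for s in range(i + 1):
        for t in range(j + 1):
            ab[s + t] += 1
    full = ab + [0] * (k + 1)
    for s, c in enumerate(ab):
        full[s + k + 1] -= c
    return full

def add(p, q):
    n = max(len(p), len(q))
    return [(p[s] if s < len(p) else 0) + (q[s] if s < len(q) else 0) for s in range(n)]

def shift(p, d):                       # multiply by x^d
    return [0] * d + p

def check(i, j, k):
    L = i + j + k
    lhs = [0]
    for di, dj, dk in [(1, 0, -1), (-1, 1, 0), (0, -1, 1)]:   # z+s1, z+s2, z+s3
        lhs = add(lhs, pol(i + di, j + dj, k + dk))
    lhs = shift(lhs, 1)                                       # times x
    pz = pol(i, j, k)
    rhs = add(add(pz, shift(pz, 1)), shift(pz, 2))            # (1 + x + x^2) Pol_z
    rhs = add(rhs, shift([1], L + 3))                         # + x^{L+3}
    rhs = add(rhs, [-1])                                      # - 1
    while lhs and lhs[-1] == 0: lhs.pop()                     # strip trailing zeros
    while rhs and rhs[-1] == 0: rhs.pop()
    return lhs == rhs

assert all(check(i, j, k) for i in range(5) for j in range(5) for k in range(5))
print("identity verified for all 0 <= i, j, k <= 4")
\end{verbatim}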
Thus, Proposition~\ref{prop:motzkin_inductive} is naturally extended to any point of $\T_L$ (not only the ones on the border).
\begin{Theorem} Let $z$ be any point of $\T_L$ and $(p_0(z),\dots,p_H(z))$ be the profile of $z$. Let us denote by $f_n(z)$ the number of forward paths of length $n$ in $\T_L$ starting from $z$. We have \[f_n(z)= \sum_{i=0}^{H} p_i(z) \Mo{H}{n}{i},\] where $\Mo H n i$ is the number of Motzkin meanders of length $n$ starting at height $i$ and having an amplitude bounded by $L$. \label{theo:anywhere} \end{Theorem}
\begin{proof} We only do the proof for the odd case, since the even case is very similar. We proceed to an induction on $n$.
For $n=0$, we have $p_0(z)=1$ since it is the constant term in the polynomial $\frac{(1 - x^{i+1})(1 - x^{j+1})(1- x^{k+1})}{(1-x)^2}$. Moreover, $\Mo H 0 i$ is equal to $0$ if $i > 0$, and $\Mo H 0 0 = 1$. We consistently find $f_0(z) = 1$.
Let us assume that the equality holds for a given $n$ and for every $z' \in \T_L$. We have \begin{align*} f_{n+1}(z) &= f_{n}(z + s_1) + f_{n}(z + s_2) + f_{n}(z + s_3) \\ & = \sum_{i=0}^H \left( p_i(z+s_1)+p_i(z+s_2) + p_i(z+s_3) \right) \Mo H {n} i & \textrm{by induction,}\\ & = \sum_{i=1}^{H - 1} \left( p_{i-1}(z)+p_i(z) + p_{i+1}(z) \right) \Mo H {n} i \\ & + (p_0(z) + p_1(z)) \Mo H {n} 0 + (p_{H-1}(z) + p_H(z)) \Mo H {n} H & \textrm{by Lemma~\ref{lem:identity_profile}.} \end{align*} Collecting terms with respect to $p_i(z)$, we get \begin{multline*} f_{n+1}(z) = p_0(z) \left( \Mo H {n} 0 + \Mo H {n} 1 \right) \\ + \sum_{j = 1}^{H-1} p_j(z) \left( \Mo H {n} {j-1} + \Mo H {n} {j} + \Mo H {n} {j+1} \right) \\
+ p_H(z) \left( \Mo H {n} {H-1} + \Mo H {n} H\right), \end{multline*} which reads $f_{n+1}(z) = \sum_{j = 0}^H p_j(z) \Mo H {n+1} j$. \end{proof}
Let us explain why Proposition~\ref{prop:motzkin_inductive} is a special case of the previous theorem. Given a point on the border $\origin + \ell s_1 = \ell e_1 + (L-\ell) e_3$ with $\ell \leq H = \lfloor L/2 \rfloor$, the associated polynomial is \[\frac{(1-x^{\ell+1})(1-x^{L - \ell + 1})}{1-x} = \left( 1 + x + \dots + x^\ell \right) (1- x^{L-\ell+1}).\] But since $\ell \leq H$, we have $L - \ell + 1 > H$. So the profile of $\origin + \ell s_1$ follows the expansion of $1 + x + \dots + x^\ell$. In other words, \[p_i(\origin + \ell s_1) = \left\{\begin{array}{cl} 1 & \textrm{ if }i \leq \ell \\ 0 & \textrm{ otherwise} \end{array} \right. . \] We thus recover the formula $f_n(\origin+\ell s_1)=\sum_{i=0}^{\ell} \Mo{H}{n}{i}$.
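Before moving on, let us record a brute-force numerical check of Theorem~\ref{theo:anywhere} (a check of ours, covering both parities of $L$); the helper functions below simply implement Definition~\ref{def:profile} and the meander recurrences from the proof of Proposition~\ref{prop:motzkin_inductive}.
\begin{verbatim}
def profile(i, j, k, H):
    """Profile (p_0,...,p_H) of i*e1 + j*e2 + k*e3 (zero vector outside T_L)."""
    if min(i, j, k) < 0:
        return [0] * (H + 1)
    ab = [0] * (i + j + 1)
    for s in range(i + 1):
        for t in range(j + 1):
            ab[s + t] += 1
    full = ab + [0] * (k + 1)
    for s, c in enumerate(ab):
        full[s + k + 1] -= c
    return (full + [0] * (H + 1))[:H + 1]

def meander_numbers(n, L):
    H = L // 2
    mo = [1] + [0] * H
    for _ in range(n):
        new = [sum(mo[h + d] for d in (-1, 0, 1) if 0 <= h + d <= H)
               for h in range(H + 1)]
        if L % 2 == 0:
            new[H] = mo[H - 1] if H > 0 else 0
        mo = new
    return mo

def forward_paths(z, n):
    if min(z) < 0:
        return 0
    if n == 0:
        return 1
    steps = [(1, 0, -1), (-1, 1, 0), (0, -1, 1)]
    return sum(forward_paths(tuple(z[t] + s[t] for t in range(3)), n - 1)
               for s in steps)

for L in (4, 5):
    H = L // 2
    for n in range(6):
        mo = meander_numbers(n, L)
        for i in range(L + 1):
            for j in range(L + 1 - i):
                k = L - i - j
                p = profile(i, j, k, H)
                assert forward_paths((i, j, k), n) == sum(p[h] * mo[h] for h in range(H + 1))
print("Theorem verified for L in {4,5} and n <= 5")
\end{verbatim}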
\subsection{Scaffoldings and new bijections}
In order to illustrate the following definition, we begin this subsection by explaining the idea behind the bijection we are going to present next.
\begin{figure}
\caption{A zoom on a scaffolding -- more specifically it depicts the function $s \mapsto \delta_{e_1+ e_2 + 3 e_3}( (1,2),s)$.}
\label{figure:introduction_scaffolding}
\end{figure}
By Theorem~\ref{theo:anywhere}, we know there should be a bijection between the set of triangular paths starting at $z \in \T_L$ and the set of pairs $(m,c)$ where $m$ is a Motzkin meander of bounded amplitude and $c$ is a cell in the cell representation of $z$ such that $h(c)$ is the starting height of $m$.
For the sake of example, let us choose $L = 5$, $z = e_1 + e_2 + 3 e_3$, $c = (f,\ell) = (1,2)$. It corresponds to a specific cell of the profile of $z$, which is highlighted in Figure~\ref{figure:introduction_scaffolding}.
We now consider a Motzkin path $m$ which we wish to transform into a triangular path starting at $z$, in a recursive manner. This transformation will depend on the cell we have chosen (here $(1,2)$). At this point there are naturally three possibilities: $m$ begins with $\nearrow$, with $\rightarrow$, or with $\searrow$. The idea is then to map these three possibilities to three other cells located in the profiles of the neighbors of $z$. The $f$-coordinates of these cells must be respectively $2$, $1$ and $0$. We then use a recursion, which now depends on the new cell, to find the {desired} triangular path.
Of course there are several choices for these new cells. For example, if $m$ begins with $\nearrow$, we have $3$ choices: there are $2$ cells in the top floor of $z+s_1$, $1$ cell in the top floor of $z+s_2$, and $0$ cells in the top floor of $z+s_3$. Following Figure~\ref{figure:introduction_scaffolding}, we choose the cell $(2,2)$ from the cell representation of $z+s_1$. The triangular path we would like to output will begin with $s_1$ (because the chosen cell is in the profile of $z + \boldsymbol{s_1}$), and the rest will be computed by recursion.
A \textit{scaffolding} is precisely the data which dictates the choice of the new cells for the whole lattice. More precisely, it indicates in which cell we have to go when we consider a specific cell in some profile, and a particular step in $\{\nearrow,\rightarrow,\searrow\}$.
\begin{Definition}[Scaffolding] Let us fix $L$ the size of the triangular lattice, and let $H$ be $\lfloor L/2 \rfloor$.
For a height $f \in \{0,\ldots,H\}$, we say that a step $s\in\{\nearrow,\rightarrow,\searrow\}$ is an \emph{allowed step} from height $f$ if it is a possible step from height $f$ in a Motzkin meander. Precisely, the only restrictions are that $(f,s)$ cannot be equal to $(0,\searrow)$ nor $(H,\nearrow)$, and furthermore, if $L$ is even, $ (f,s)$ cannot be equal to $(H,\rightarrow)$.
For $z\in \T_L$, we define the set \[A(z):=\{(c,s)\in \C(z)\times \{\nearrow,\rightarrow,\searrow\} : s \mbox{ is an allowed step from } h(c)\},\] where $\C(z)$ is the cell representation of $z$ (see Definition~\ref{def:cell_representation}). For $i\in\{1,2,3\}$, we also introduce the notation \[\C_i(z):= \{ (s_i,c) : c \in \C(z)\}.\] The set $\C_i(z)$ is thus a subset of $\Sf\times \C(z)$, having same cardinality as $\C(z)$, since all the elements of $\C_i(z)$ have the same first coordinate $s_i$.
A \emph{scaffolding} is a collection of functions $(\delta_z)_{z \in \T_L}$, such that for each $z\in \T_L$, the function $$\delta_z : A(z) \to \C_1(z+s_1)\cup \C_2(z+s_2) \cup \C_3(z+s_3)$$ is a bijection. Furthermore, for every $(c,s) \in A(z)$ with $(\sigma,c') = \delta_z(c,s)$, we have the restriction \[ h(c') = \left\{\begin{array}{cl} h(c) + 1 & \textrm{if }s=\nearrow \\ h(c) & \textrm{if }s=\rightarrow \\ h(c) - 1 & \textrm{if }s=\searrow
\end{array} \right.. \] \label{def:scaffolding} \end{Definition}
An entire scaffolding is shown in Figure~\ref{figure:example_scaffolding}.
\begin{figure}
\caption{A random scaffolding for $\T_3$. }
\label{figure:example_scaffolding}
\end{figure}
\begin{Proposition} For any $L \geq 0$, there exists a scaffolding. \end{Proposition} \begin{proof} Let us consider any point $z$ of $\T_L$, and let $f'$ be an integer in $\{0,\dots,H\}$.
Consider the sets \begin{align*} \mathcal U_{f'}(z) & := \{ (c,\nearrow) \in A(z) \ : h(c) = f' - 1 \}, \\ \mathcal F_{f'}(z) & := \{ (c,\rightarrow) \in A(z) \ :h(c) = f' \}, \\ \mathcal D_{f'}(z) & := \{ (c,\searrow) \in A(z) \ : h(c) = f' + 1 \}, \\ \C_{i,f'}(z) & := \{ (s_i,c') \in \C_i(z)\ : \ h(c') = f'\} & \textrm{ for }i \in \{1,2,3\}. \end{align*} By Lemma~\ref{lem:identity_profile}, we have
\[ \left| \mathcal U_{f'}(z) \cup \mathcal F_{f'}(z) \cup \mathcal D_{f'}(z) \right| = \left| \C_{1,f'}(z+s_1) \cup \C_{2,f'}(z+s_2) \cup \C_{3,f'}(z+s_3) \right|. \] We can then choose any bijection $b_{f'}$ between these two sets and define $\delta_z(c,s)$ for every $(c,s) \in \mathcal U_{f'}(z) \cup \mathcal F_{f'}(z) \cup \mathcal D_{f'}(z)$ as $b_{f'}(c,s)$.
Doing so for every $f' \in \{0,\dots,H\}$ covers every pair $(c,s) \in A(z)$, and thus defines $\delta_z$ on the whole of $A(z)$.
The required bijectivity of $\delta_z$ is straightforward (because $b_{f'}$ is also bijective). \end{proof}
\begin{figure}
\caption{The Motzkin paths and the triangular paths of length $3$ in correspondence under Algorithms~\ref{algo:scaffolding1} and~\ref{algo:scaffolding2}, given the scaffolding of Figure~\ref{figure:example_scaffolding}.}
\label{figure:scaffolding-bijection}
\end{figure}
Once we fix a scaffolding for our triangular lattice, one can describe a bijection between triangular paths and Motzkin paths. The bijection is given by Algorithms~\ref{algo:scaffolding1} and~\ref{algo:scaffolding2}.
\begin{algorithm}[caption={Bijection from Motzkin paths to triangular paths, given a scaffolding $(\delta_z)_{z \in \T_L}$ (for \textit{scaffolding}, see Definition~\ref{def:scaffolding}).}, label={algo:scaffolding1}] metadata: a scaffolding $\delta_z$ input: a Motzkin path m output: a triangular path p starting at $\origin$ n $\gets$ length of m; p $\gets$ empty path; z $\gets \origin$; c $\gets$ unique cell of height $0$ in the cell representation of z; for i from 1 to n do ($\sigma$, c) $\gets$ $\delta_{\textrm z}$(c, m[i]);
add $\sigma$ to the end of p;
z $\gets$ z + $\sigma$; return p; \end{algorithm}
\begin{algorithm}[caption={Bijection from triangular paths to Motzkin paths, given a scaffolding $(\delta_z)_{z \in \T_L}$ (for \textit{scaffolding}, see Definition~\ref{def:scaffolding}).}, label={algo:scaffolding2}] metadata: a scaffolding $\delta_z$ input: a triangular path p starting at $\origin$ output: a Motzkin path m n $\gets$ length of p; m $\gets$ empty path; z $\gets \origin + \sum_{i=1}^n$ p[i]; c $\gets$ unique cell of height $0$ in the cell representation of z; for i decreasing from n to 1 do z $\gets$ z - p[i];
(c, s) $\gets$ $\delta_{\textrm z}^{-1}$(p[i], c); add s to the beginning of m;
return m; \end{algorithm}
\begin{Theorem} Let $(\delta_z)_{z \in \T_L}$ be a scaffolding. Algorithms~ \ref{algo:scaffolding1} and~\ref{algo:scaffolding2} give two inverse bijections between the set of Motzkin paths of length $n$ with bounded amplitude $L$ and the set of triangular paths of $\T_L$ of length $n$ starting at $\origin$. \end{Theorem}
\begin{proof} At the end of Algorithm~\ref{algo:scaffolding1}, note that the height of the final cell is $0$: by the last restriction of Definition~\ref{def:scaffolding}, the height $h(c)$ of the current cell follows the height reached along the input Motzkin path, and a Motzkin path always ends at height $0$. Moreover, because the polynomial $(1-x^{i+1})(1-x^{j+1})(1-x^{k+1})/(1-x)^2$ always has a constant term equal to $1$, by Definition~\ref{def:profile} we have $p_0(z) = 1$ for every $z \in \T_L$; hence the cell representation of any point contains a unique cell of height $0$, and the final cell must be that one.
Thus, the values of $z$ and $c$ are the same at the end of Algorithm~\ref{algo:scaffolding1} and at the beginning of Algorithm~\ref{algo:scaffolding2}. From this point, it is easy to see that the loop of Algorithm~\ref{algo:scaffolding2} reverses what the loop of Algorithm~\ref{algo:scaffolding1} did. Therefore the two algorithms are mutually inverse bijections. \end{proof}
\begin{Remark} If we omit the cost of a precalculation (the construction of a scaffolding, which can be done in $O(L^4)$ time), both algorithms have linear-time complexity.
The scaffolding bijection of Subsection \ref{ss:trapeziums} does not require any precalculation (which can be costly if $L$ is large) and it still has a linear-time complexity. \end{Remark}
\begin{Remark}If two Motzkin paths $m$ and $m'$ share a common prefix of length $j$, then the two corresponding triangular paths under Algorithm~\ref{algo:scaffolding1} will also share a common prefix of length $j$. The converse is not true.
This property is not shared by the exponential bijection of Figure~\ref{figure:Omega}. This is why this bijection is not a particular case of the scaffolding bijections. \end{Remark}
\begin{Remark} No scaffolding is necessary if we wish to sample a random forward path under the uniform distribution, given a uniform random Motzkin path of bounded amplitude.
Indeed, since any scaffolding is suitable to have a bijection, one can pick this scaffolding at random, on the fly. To do so, at each step of the loop in Algorithm~\ref{algo:scaffolding1}, we choose $\delta_z(c,m[i])$ as one of the cells with height $h'$ belonging to $\C(z+s_1)\cup \C(z+s_2) \cup \C(z+s_3)$, where $h'=h(c) + 1$ if $m[i]=\nearrow$, $h'=h(c) $ if $m[i]=\rightarrow$, or $h'=h(c) - 1$ if $m[i]=\searrow$. This choice must be uniform among all cells of height $h'$. \end{Remark}
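To illustrate, here is a minimal Python sketch of this on-the-fly procedure (the function names and the hard-coded example meander are ours). It assumes that the input is a valid Motzkin meander for the amplitude bound $L$, encoded as a list of steps $+1$, $0$, $-1$, and it only keeps track of the height of the current cell, which is all the uniform choice requires.
\begin{verbatim}
import random

def profile(i, j, k, H):
    """Profile (p_0,...,p_H); the zero vector if a coordinate is negative."""
    if min(i, j, k) < 0:
        return [0] * (H + 1)
    ab = [0] * (i + j + 1)
    for s in range(i + 1):
        for t in range(j + 1):
            ab[s + t] += 1
    full = ab + [0] * (k + 1)
    for s, c in enumerate(ab):
        full[s + k + 1] -= c
    return (full + [0] * (H + 1))[:H + 1]

def random_forward_path(meander, L):
    """Map a Motzkin meander to a random forward path of T_L, scaffolding chosen on the fly."""
    H = L // 2
    steps = [(1, 0, -1), (-1, 1, 0), (0, -1, 1)]      # s1, s2, s3
    z, h, path = (0, 0, L), 0, []                      # origin = L*e3, cell of height 0
    for m in meander:
        target = h + m
        options = []                                   # cells of height `target`, with owner
        for s in steps:
            nz = tuple(z[t] + s[t] for t in range(3))
            p = profile(*nz, H)
            options += [s] * p[target]                 # one entry per cell of that height
        s = random.choice(options)                     # uniform among all such cells
        z = tuple(z[t] + s[t] for t in range(3))
        h = target
        path.append(s)
        assert min(z) >= 0                             # the walk never leaves T_L
    return path

print(random_forward_path([1, 0, 1, -1, -1, 0], 5))
\end{verbatim}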
\subsection{Two direct bijective proofs of Mortimer and Prellberg's theorem} \label{ss:bico}
We mention two ways to extend this to a bijection between bounded Motzkin paths with bicolored (black and white) edges and triangular paths (potentially including forward and backward steps), which provides a direct combinatorial interpretation of Theorem~\ref{theo:mortimerprellberg}.
The first method is as mentioned at the end of Section~\ref{s:expo}: Starting with a bicolored Motzkin path, use the scaffolding bijection above to send the Motzkin path to a forward path, and map the colors to a direction vector based on the order in which they appear (black $\to F$ and white $\to B$). Then, using the bijection of Theorem \ref{theo:directions}, send the forward path to a path with that direction vector.
For the second method we start by defining a {\em reverse scaffolding} \[\overline{\delta_z} : A(z) \to \overline{\C_1}(z+\overline{s_1})\cup \overline{\C_2}(z+\overline{s_2}) \cup \overline{\C_3}(z+\overline{s_3}),\] where each $\overline{\C_i}(z)$ is defined by \[ \overline{\C_i}(z):= \{(\overline{s_i},c) : c\in\C(z)\}.\] We define $\overline{\delta_z}$ symmetrically to $\delta_{z}$, reflected about the midline of $\T_{L}$ passing through $\origin=L e_{3}$. To be precise, if $z=x_{1}e_{1}+x_{2}e_{2}+x_{3}e_{3}$, let $z'=x_{2}e_{1}+x_{1}e_{2}+x_{3}e_{3}$ and $\delta_{z'}(a)=(s_{j},c)$. Then we define $\overline{\delta_{z}}(a):=(\overline{s_{4-j}},c)$. This is possible because the cell representation of $z'$ is necessarily the same as that of $z$. The bijection then runs as follows: starting with a bicolored Motzkin path, we apply the scaffolding $\delta_{z}$ when there is a black step, and we apply the reverse scaffolding $\overline{\delta_z}$ when there is a white step. An advantage of this second version is that it takes linear time to apply.
\subsection{A canonical scaffolding in terms of colored trapeziums} \label{ss:trapeziums}
In this subsection we provide an explicit scaffolding whose rules do not depend on $L$; it yields a bijection between bounded Motzkin paths and triangular paths which takes linear time to compute. First we define a new cell representation for $\T_L$.
\begin{Proposition} For every $z = x_{1}e_{1}+x_{2}e_{2}+x_{3}e_{3} \in \T_L$, the set
\[\C(z):=\left\{(f,\ell)\in\mathbb{Z}^{2}\ | \ \max(0,f-x_{3}) \leq \ell \leq \min(f,x_{1},x_{2},x_{1}+x_{2}-f)\right\}\] is a cell representation of $z$ (see Definition~\ref{def:cell_representation}). \label{prop:cell_rep} \end{Proposition}
\begin{figure}
\caption{\textit{Left.} The shape of the cell representation from Proposition~\ref{prop:cell_rep} of a point $x_1 e_1 + x_2 e_2 + x_3 e_3$. \textit{Right.} The associated cell representation of $\T_5$. }
\label{figure:cell_representation}
\end{figure}
\begin{proof} Set $z=x_{1}e_{1}+x_{2}e_{2}+x_{3}e_{3}$, so $x_{1}+x_{2}+x_{3}=L$. Recall that \[p_{f}(z)=[y^{f}](1+\cdots+y^{x_{1}})(1+\cdots+y^{x_{2}})(1-y^{x_{3}+1})\] for $2f\leq L$. For $x_{3} \geq x_{1}+x_{2}$, an expansion of the first two factors shows that the numbers $p_f(z)$ are \[1,2,\ldots, \underbrace{\min(x_{1},x_{2})+1,\min(x_{1},x_{2})+1,\ldots,\min(x_{1},x_{2})+1}_{\textrm{repeated }\max(x_{1},x_{2})-\min(x_{1},x_{2})+1\textrm{ times}},\min(x_{1},x_{2}),\dots,2,1,\]
for $f=0,1,\ldots,x_{1}+x_{2}$. So, if we simply define $\C(z):=\{(f,\ell) \ |\ 0\leq\ell\leq p_{f}(z)-1\}$ with $h((f,\ell))=f$, then $\C(z)$ can alternatively be written as
\[\C(z)=\left\{(f,\ell)\in\mathbb{Z}^{2}\ | \ 0\leq \ell \leq \min(f,x_{1},x_{2},x_{1}+x_{2}-f)\right\}.\] For $x_{3}<x_{1}+x_{2}$, it suffices to remove from $\C(z)$ any points $(f,\ell)$ for which $(f,\ell-x_{3}-1)$ belongs to $\C(z)$, as this corresponds to multiplying the polynomial by $(1-y^{x_{3}+1})$. This yields the above general formula for $\C(z)$.
\end{proof}
Examples of the cell representation of Proposition~\ref{prop:cell_rep} are shown in Figure~\ref{figure:cell_representation}.
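The counting claim behind Proposition~\ref{prop:cell_rep} is easy to test numerically: the following Python check (ours) compares, floor by floor, the size of the explicit set above with the profile of Definition~\ref{def:profile}.
\begin{verbatim}
def profile(x1, x2, x3, H):
    """Profile (p_0,...,p_H) of x1*e1 + x2*e2 + x3*e3."""
    ab = [0] * (x1 + x2 + 1)
    for s in range(x1 + 1):
        for t in range(x2 + 1):
            ab[s + t] += 1
    full = ab + [0] * (x3 + 1)
    for s, c in enumerate(ab):
        full[s + x3 + 1] -= c
    return (full + [0] * (H + 1))[:H + 1]

def cells_per_floor(x1, x2, x3, H):
    """Number of cells of each height in the explicit set of the proposition."""
    counts = []
    for f in range(H + 1):
        lo = max(0, f - x3)
        hi = min(f, x1, x2, x1 + x2 - f)
        counts.append(max(0, hi - lo + 1))
    return counts

for L in range(1, 9):
    H = L // 2
    for x1 in range(L + 1):
        for x2 in range(L + 1 - x1):
            x3 = L - x1 - x2
            assert cells_per_floor(x1, x2, x3, H) == profile(x1, x2, x3, H)
print("cell representation checked for all points of T_L, 1 <= L <= 8")
\end{verbatim}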
\begin{figure}
\caption{A diagram defining the scaffolding $\delta_{z}$.}
\label{figure:trapezium_scaffolding}
\end{figure}
\begin{figure}
\caption{A geometric depiction of the bijection $\delta_{z}$ in the case $x_{1}=8$, $x_{2}=13$, $L=37$. On the left, there are three copies of $\C(z)$ while on the right we have $\C(s_{1}+z)$, $\C(s_{2}+z)$ or $\C(s_{3}+z)$. Each case in Figure \ref{figure:trapezium_scaffolding} is represented by a colored zone with labels matching the numbers shown in Figure \ref{figure:trapezium_scaffolding}. Cells for which the given step is not allowed are colored in red. The grey polygon on each of the cell representations is the outline of $\C(z)$ (equivalently, the pentagon delimited by the lines $\ell = 0$, $f = \ell$, $\ell = 8$, $f + \ell = 21$ and $f = \ell + 16$).}
\label{figure:trapeziums_diagram}
\end{figure}
\begin{figure}
\caption{The bijection $\delta_{z}$ in the case $x_{1}=13$, $x_{2}=7$, $L=36$. In comparison with Figure~\ref{figure:trapeziums_diagram}, this decomposition features the case where $L$ is even, but most importantly, the case where $x_1 > x_2 +1$.}
\label{figure:trapeziums_diagram2}
\end{figure}
Finally it remains to define a scaffolding $$\delta_z : A(z) \to \C_1(z+s_1)\cup \C_2(z+s_2) \cup \C_3(z+s_3),$$ where we recall that $A(z)$ and $\C_i(z)$ are defined by \begin{align*}A(z)&:=\{(c,s)\in \C(z)\times \{\nearrow,\rightarrow,\searrow\} : s \mbox{ is an allowed step from } h(c)\}.\\ \C_i(z)&:= \{(s_i,c) : c\in\C(z)\}.\end{align*} We define $\delta_{z}$ by the procedure shown in Figure \ref{figure:trapezium_scaffolding}. Under this procedure there are 12 different cases, shown by the {colored} boxes labeled from $1$ to $12$.
In the following theorem we show that this is indeed a bijection. We give a geometric interpretation of this bijection in two specific cases in Figures \ref{figure:trapeziums_diagram} and \ref{figure:trapeziums_diagram2}. \begin{Theorem} For each $z\in \T_L$, the function $\delta_{z}$ defined by the procedure in Figure \ref{figure:trapezium_scaffolding} is a bijection from $A(z)$ to $\C_1(z+s_1)\cup \C_2(z+s_2) \cup \C_3(z+s_3)$. \label{theo:trapezium} \end{Theorem} \begin{proof} To see that this is a bijection, it suffices to show that each element of $\C_1(z+s_1)\cup \C_2(z+s_2) \cup \C_3(z+s_3)$ is covered exactly once by $\delta_{z}$.
First, we claim that $\C_{1}(z+s_{1})$ is covered by cases $2$, $3$, $8$, $9$ and $11$. Note that $\C(z+s_1)=\{(f',\ell')\in\mathbb{Z}^{2}\ |\ \max(0,f'-x_{3}+1) \leq \ell' \leq \min( f', 1+x_{1},x_{2},1+x_{1}+x_{2}-f')\}.$ In particular, the pairs $(f',\ell')\in\C(z+s_{1})$ covered by each of the five cases are those satisfying the following: \begin{itemize} \item Case 2: $\ell'=1+x_{1}+x_{2}-f'\neq 0$. \item Case 3: $\ell'=0=1+x_{1}+x_{2}-f'$ (this case only occurs if $x_{1}+x_{2}\leq x_{3}$ i.e., $2(x_{1}+x_{2})\leq L$ ). \item Case 8: $\ell'\leq x_{1}$ and $\ell'<x_{1}+x_{2}-f'$. \item Case 9: $\ell'\leq x_{1}$ and $\ell'=x_{1}+x_{2}-f'$. \item Case 11: $\ell'=x_{1}+1\leq x_{1}+x_{2}-f'$ (this case only occurs for $x_{1}<x_{2}$). \end{itemize}
Next, we show that the set $\C_{2}(z+s_{2})$ is covered by cases $1$, $4$, $5$, $7$ and $10$. We have \[\C(z+s_2)=\{(f',\ell')\in\mathbb{Z}^{2}\ |\ \max(0,f'-x_{3})\leq \ell'\leq \min(f', x_{1}-1,x_{2}+1,x_{1}+x_{2}-f')\}.\]
In particular, the pairs $(f',\ell')\in\C(z+s_{2})$ covered by each of the five cases are those satisfying the following: \begin{itemize} \item Case 1: $f'=x_{1}$ and $\ell'=x_{2}$ (this case only occurs if $x_{2}\leq x_{1}-1$). \item Case 4: $\ell'=x_{1}+x_{2}-f'\leq x_{2}-1$. \item Case 5: $\ell'=x_{2}+1$ (this case only occurs if $x_{2}+1\leq x_{1}-1$). \item Case 7: $\ell'=f'\leq x_{2}-1$. \item Case 10: $\ell'\leq x_{1}+x_{2}-f'-1,x_{2},f'-1$ or $\ell'=f'=x_{2}$ (the latter case only occurs for $x_{2}\leq x_{1}-1$). \end{itemize}
Finally, we show that the set $\C_{3}(z+s_{3})$ is covered by cases $6$ and $12$. Note that \[\C(z+s_3)=\{(f',\ell')\in\mathbb{Z}^{2}\ |\ \max(0,f'-x_{3}-1)\leq \ell'\leq \min(f',x_{1},x_{2}-1,x_{1}+x_{2}-1-f')\}.\] In particular, the pairs $(f',\ell') \in \C(z+s_{3})$ covered by each of the two cases are those satisfying the following: \begin{itemize} \item Case 6: $\ell'\leq f'-1$. \item Case 12: $\ell'=f'$. \end{itemize} We thus have dealt with every element of $\C_1(z+s_1)\cup \C_2(z+s_2) \cup \C_3(z+s_3)$. \end{proof}
Note that the rules in the definition of $\delta_{z}$ only depend on $x_{1}$, $x_{2}$, $f$ and $\ell$, but not $L$. As a consequence, this bijection can be applied to any Motzkin path to yield a path in the $1/6$-plane, and if $L$ is the minimum sidelength of a triangle containing the resulting path then $L$ is the amplitude of the Motzkin path.
\section{Generalization to higher dimensions} \label{s:generalization}
This section explains to what extent the results of the previous sections can be generalized. In fact, there is a natural extension of triangular paths to higher dimension (already introduced by \cite{MortimerPrellberg}) for which there still exists a bijective correspondence between forward and backward paths. More surprisingly, we can find in dimension $3$ a new bijection between two families of lattice walks, which is an analogue of the bijection between triangular paths and Motzkin path of bounded amplitude.
\subsection{What can be extended in any dimension}
\subsubsection{Definition}
For dimension $d$, let $(e_1,e_2,e_3,\ldots,e_{d+1})$ denote the standard basis of $\R^{d+1}$. For some $L\in\N$, we define the subset $\Ss_{d,L}$ of $\N^{d+1}$ as the simplicial section of side length $L$ of the integer lattice: $$\Ss_{d,L}=\{x_1\,e_1+\cdots+x_{d+1}\,e_{d+1} : x_1, \ldots, x_{d+1}\in\N, x_1+\cdots+x_{d+1}=L\}.$$ We will consider walks in this simplex using \textit{forward} steps $s_{j}=e_{j}-e_{j-1}$ for $1\leq j\leq d+1$ (with the convention that $e_{0}=e_{d+1}$) and \textit{backward} steps $-s_{j}$. Paths of $\Ss_{d,L}$ only using forward steps are again called \textit{forward paths}. The \textit{origin} of $\Ss_{d,L}$, denoted $\origin$, is defined as $L e_{d+1}$. The triangular lattice $\T_L$ can be recovered by setting $d=2$ -- in other words $\T_L = \Ss_{2,L}$.
As in the triangle case, forward paths of $\Ss_{d,L}$ starting from $\origin$ form a subfamily of standard Young tableaux. Precisely, they are in bijection with standard Young tableaux with $d+1$ rows or less, with an extra restriction:
for $i>L$, if there is a cell with label $\ell$ at position $i$ in the top row of the Young tableau, then there is a cell at position $i-L$ in the bottom row of the Young tableau with a label less than $\ell$. The enumeration of standard Young tableaux with a bounded number of rows is the object of very active research -- see \cite{mishna} for a survey.
\subsubsection{Equinumeracy of forward and backward paths}
Defining direction vectors as in Definition \ref{def:direction_vector}, the equivalent of Theorem \ref{theo:directions} still holds: \begin{Theorem} Given two sequences $W$ and $W'$ of $\{\Sf,\Sb\}^n$, the set of paths in $\Ss_{d,L}$ of direction vector $W$ is in bijection with the set of paths in $\Ss_{d,L}$ of direction vector $W'$. \label{theo:generic_forward} \end{Theorem} We can use the same proof almost \textit{verbatim}. In fact, the bijection uses swap flips, defined exactly as in Definition \ref{def:flips}: \begin{align*} (s_j,\overline{s_k}) \lra (\overline{s_k}, s_j)& \quad \mbox{if} \; j\not=k, \\ (s_k,\overline{s_k}) \lra (\overline{s_{k-1}},s_{k-1}) & \quad \mbox{otherwise,} \end{align*} where, by convention, $s_0 = s_{d+1}$.
\subsection{Dimension 3}
It turns out that forward paths in dimension $3$ are {equinumerous} with another family of paths, as in the two dimensional case. We {will show this inductively, then give} a bijection {analogous} to those in Section~\ref{s:scaffolding}.
In dimension $3$, the set $$\Ss_{3,L}=\{x_1\,e_1+x_2\,e_2+x_3\,e_3+x_4\,e_4: x_1, x_2, x_3, x_4\in\N, x_1+x_2+x_3+x_4=L\}$$ is a pyramidal lattice, as shown by Figure~\ref{figure:chef_doeuvre2} (left). We denote by $\Sf$ the set of forward steps, i.e., $\Sf=\{e_1-e_4,e_2-e_1,e_3-e_2,e_4-e_3\}$, and we denote by $\Sb$ the set of backward steps, i.e., $\Sb=-\Sf$. A \textit{pyramidal walk} is a walk in $\Ss_{3,L}$ using steps in $\Sf \cup \Sb$.
\begin{figure}
\caption{\textit{Left.} The Pyramid $\Ss_{3,3}$. \textit{Right.} The waffle $W_{12}$. }
\label{figure:chef_doeuvre2}
\end{figure}
By reducing the dimension of the recurrence using the bijection between forward and backward paths, we find a family of paths equinumerous with forward pyramid paths:
\begin{Theorem}\label{theo:waffle_to_pyramid} Define the {\em waffle} $W_{L}$ of size $L$ by \[W_{L}=\{(i,j)\in\mathbb{N}^{2} : j\leq i\leq L-j\}\] (see Figure~\ref{figure:chef_doeuvre2} (right) for a picture). For $(i,j)\in W_{L}$, the number $w_{n,i,j}$ of square lattice walks in $W_{L}$, starting at $(i,j)$ and ending on the axis $j=0$, is given by \[w_{n,i,j}=p_{n,i,j}-p_{n,i-1,j-1},\] where $p_{n,i,j}$ is the number of forward (or equally backward) pyramid paths of length $n$ starting at the point $(i-j) e_{1}+je_{2}+(L-i)e_{4}$. \end{Theorem}
\begin{proof} We prove this using an inductive approach. We define $q_{n,i,j}$ to be the number of such paths starting at the point $(i-j) e_{1}+je_{2}+e_{3}+(L-i-1)e_{4}$ (this is $0$ if the starting point is outside the region).
Considering the first step in a forward path of length $n+1$ starting at $(i-j) e_{1}+je_{2}+(L-i)e_{4}$ yields the following equation for $n,i,j\geq 0$ satisfying $j\leq i\leq L$: \[p_{n+1,i,j}=p_{n,i+1,j}+p_{n,i,j+1}+q_{n,i-1,j-1}.\] Using the same method for backward paths yields \[p_{n+1,i,j}=p_{n,i-1,j}+p_{n,i,j-1}+q_{n,i,j}.\] Canceling the $q$ terms, we obtain the following equation as long as $1\leq j\leq i\leq L$: \[p_{n+1,i,j}-p_{n+1,i-1,j-1}=p_{n,i+1,j}+p_{n,i,j+1}-p_{n,i-2,j-1}-p_{n,i-1,j-2}.\] Finally, writing $w_{n,i,j}:=p_{n,i,j}-p_{n,i-1,j-1}$, we have the following recurrence for $w$: \[w_{n+1,i,j}=w_{n,i+1,j}+w_{n,i,j-1}+w_{n,i,j+1}+w_{n,i-1,j},\] which has only positive coefficients. By analysing this equation on the boundary, we deduce that it holds for $0\leq j\leq i\leq L+1$, if we define $w_{n,i,j}=0$ for $i,j$ outside this region. Finally the initial condition for $w_{0,i,j}$ follows from $p_{0,i,j}=1$ for $0\leq j\leq i\leq L$: \begin{align*}
w_{0,i,j} &= 0, & \text{ for } & 1\leq j\leq i\leq L, \\
w_{0,i,0} &= 1, & \text{ for } & 0\leq i\leq L,\\
w_{0,L+1,j} &= -1, & \text{ for } & 1\leq j\leq L+1, \\
w_{0,L+1,0} &= 0.&&
\end{align*}
These initial conditions along with the recurrence uniquely define the terms $w_{n,i,j}$. Now, by symmetry, $w_{n,i,j}=-w_{n,L+1-j,L+1-i}$, and in particular, $w_{n,i,L+1-i}=0$, so we only need to consider the region $i+j\leq L$. Within this region, all terms are positive, so $w_{n,i,j}$ can be understood combinatorially. The combinatorial interpretation of the recurrence is precisely the statement of the theorem: $w_{n,i,j}$ is the number of square lattice walks starting at $(i,j)$ and ending on the axis $j=0$, which are confined to the region $W_{L}=\{(i,j)\in\mathbb{N}^{2}: j\leq i\leq L-j\}$. \end{proof}
In particular, $p_{n,0,0}=w_{n,0,0}$.
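The relation $p_{n,0,0}=w_{n,0,0}$ can be tested by brute force; the following Python check (ours) counts forward pyramid paths from the corner $L\,e_4$ against waffle walks from $(0,0)$ ending on the axis $j=0$.
\begin{verbatim}
def pyramid_forward(z, n):
    """Forward paths of length n in S_{3,L} starting at z (coordinates stay >= 0)."""
    if min(z) < 0:
        return 0
    if n == 0:
        return 1
    steps = [(1, 0, 0, -1), (-1, 1, 0, 0), (0, -1, 1, 0), (0, 0, -1, 1)]  # s1..s4
    return sum(pyramid_forward(tuple(z[t] + s[t] for t in range(4)), n - 1)
               for s in steps)

def waffle_walks(pos, n, L):
    """Walks of length n in W_L = {0 <= j <= i, i + j <= L} ending on the axis j = 0."""
    i, j = pos
    if j < 0 or j > i or i + j > L:
        return 0
    if n == 0:
        return 1 if j == 0 else 0
    return sum(waffle_walks((i + di, j + dj), n - 1, L)
               for di, dj in [(1, 0), (-1, 0), (0, 1), (0, -1)])

L = 3
for n in range(7):
    p = pyramid_forward((0, 0, 0, L), n)     # corner L*e4
    w = waffle_walks((0, 0), n, L)
    print(n, p, w)
    assert p == w
\end{verbatim}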
\begin{Remark} If we apply the transformation $(x,y) \mapsto (x-y,y)$ to waffle walks, we remark that pyramidal walks starting at $\origin$ are in bijection with \emph{Gouyou-Beauchamps walks}, i.e. walks with North-West, West, East, South-East steps, going from $(0,0)$ to a point on the $x$-axis and confined in the part of the positive quarter of plane below the line $x + 2y = L$. This is consistent with the fact that standard Young tableaux with $4$ rows or less are in bijection with Gouyou-Beauchamps walks returning to the $x$-axis confined in the quarter of plane~\cite{gouyouBeauchamps}. \end{Remark}
More generally, the following proposition relates the enumeration of pyramid walks starting at any point to waffle walks.
\begin{Proposition} \label{prop:waffle_to_pyramid} The number $p_{n}(z)$ of length $n$ forward pyramid paths starting at a point $z=x_{1}e_{1}+x_{2}e_{2}+x_{3}e_{3}+x_{4}e_{4}$ is equal to the number of length $n$ waffle walks starting at a point in the set $W(z)$, defined by
\[W(z):=\{(x_{1}+x_{3}+p-q,p+q):p,q\in\mathbb{N},~p\leq\min(x_{2},x_{4}),~q\leq\min(x_{1},x_{3})\}.\] \end{Proposition}
Now, we will give a bijective proof of this. The proof is via a scaffolding, analogous to Definition \ref{def:scaffolding}. Again, before we define scaffolding we define the profile of a point.
\begin{Definition}[Profile] For a point $z=x_{1}e_{1}+x_{2}e_{2}+x_{3}e_{3}+x_{4}e_{4}$, we define the {\em profile} $\C(z)$ of $z$ by \[\C(z):=\{(p,q)\in\mathbb{N}^{2}:~p\leq\min(x_{2},x_{4}),~q\leq\min(x_{1},x_{3})\}.\] We have a natural bijection $h_{z}:\C(z)\to W(z)$ defined by $h_{z}(p,q):=(x_{1}+x_{3}+p-q,p+q)$. \end{Definition}
\begin{figure}
\caption{The sets $\mathcal C(z)$ and $W(z)$ for $z = 4 e_1 + e_2 + 3 e_3 + 4 e_4$.}
\label{figure:natural_bijection}
\end{figure}
For $z\in \Ss_{3,L}$, we define the set $$A(z):=\{(c,s)\in \C(z)\times \{\uparrow,\rightarrow,\downarrow,\leftarrow\} : s \mbox{ is an allowed step from } h_{z}(c)\}.$$ For $i\in\{1,2,3,4\}$, we also introduce the notation $$\C_i(z):= \{(s_i,c) : c\in\C(z)\}.$$ The set $\C_i(z)$ is thus a subset of $\Sf\times \C(z)$, having same cardinality as $\C(z)$, since all the elements of $\C_i(z)$ have the same first coordinate $s_i$.
\begin{Definition}[Scaffolding] Let us fix the size $L$ of the pyramid. A \emph{scaffolding} is a collection of functions $(\delta_z)_{z \in \Ss_{3,L}}$, such that for each $z\in \Ss_{3,L}$, the function $$\delta_z : A(z) \to \C_1(z+s_1)\cup \C_2(z+s_2) \cup \C_3(z+s_3) \cup \C_4(z+s_4)$$ is a bijection and whenever $\delta_z(c,s)=(s_{j},c_{j})$, we have $h_{z}(c)+s=h_{z+s_{j}}(c_{j})$. \label{def:scaffolding3d} \end{Definition}
\begin{figure}
\caption{A diagram defining the scaffolding $\delta_{z}$.}
\label{diamond_scaffolding}
\end{figure}
Figure~\ref{figure:natural_bijection} shows an example of the sets $W(z)$ and $\C(z)$.
\begin{figure}
\caption{A geometric representation of $\delta_z$ for $x_1 = 8$, $x_2 = 4$, $x_3 = 6$ and $x_4 = 7$. The numbers of the colored zones match with cases of the diagram of Figure~\ref{diamond_scaffolding}.}
\label{scaffolding_waffle}
\end{figure}
An explicit scaffolding $\delta_{z}$ is given in Figure \ref{diamond_scaffolding}. The proof of the bijectivity of $\delta_z$ is omitted (because of its tediousness --- it is a case-by-case proof, similar to the one of Theorem~\ref{theo:trapezium}), but some particular configuration is illustrated by Figure~\ref{scaffolding_waffle}.
Given such a scaffolding, a bijection for each point $z_{c}\in \Ss_{3,L}$ from the set of waffle walks starting at a point in the set $W(z_{c})$ to the set of pyramid walks starting at $z_{c}$ is given by Algorithm~\ref{algo:scaffolding3d}.
\begin{algorithm}[caption={Bijection from waffle paths to pyramid paths, given a scaffolding $(\delta_z)_{z \in \Ss_{3,L}}$ (for \textit{scaffolding}, see Definition~\ref{def:scaffolding3d}).}, label={algo:scaffolding3d}] metadata: a scaffolding $\delta_{z}$ input: A point $(p_{c},q_{c})\in \C(z_{c})$, a waffle path w starting at $h_{z_{c}}(p_{c},q_{c})$ output: a pyramid path y starting at $z_{c}$. n $\gets$ length of $w$; y $\gets$ empty path; z $\gets z_{c}$; p $\gets p_{c}$; q $\gets q_{c}$; for i from 1 to n do ($\sigma$, p, q) $\gets$ $\delta_{\textrm z}$(p, q, w[i]);
add $\sigma$ to the end of y;
z $\gets$ z + $\sigma$; return y; \end{algorithm}
In the following corollary {of} Theorem \ref{theo:waffle_to_pyramid}, we enumerate forward pyramid paths starting at~$\origin$ using the relation $p_{n,0,0}=w_{n,0,0}$, which relates their enumeration to that of waffle walks. This partially answers another open question of Mortimer and Prellberg~\cite[Section 4.1]{MortimerPrellberg}.
\begin{Corollary}The generating function \[P(t)=\sum_{n=0}^{\infty}p_{n,0,0}t^{n}\] for forward pyramid paths starting in a corner is given by
\[P(t)=\frac{1}{(L+4)^2}\sum_{\substack{1\leq j<k\leq L+3\\2\nmid j,k}}\frac{(\alpha^{k}+\alpha^{-k}-\alpha^{j}-\alpha^{-j})^2(2+\alpha^{j}+\alpha^{-j})(2+\alpha^{-k}+\alpha^{k})}{1-(\alpha^{j}+\alpha^{-j}+\alpha^{k}+\alpha^{-k})t},\] where $\alpha=e^{\frac{i\pi}{L+4}}$. \label{cor:enum_pyramid_walks} \end{Corollary} \begin{proof} To prove this, we relate walks confined to the waffle to unconfined walks using the reflection principle~\cite{gessel1992random}, which is possible because the waffle $W_L$ forms a Weyl chamber of some reflection group.
Let $(x,y)$ be a point inside the waffle, let $\Omega$ be the set of unconstrained square lattice walks starting at $(x,y)$ and let $\Omega'$ be the set of walks in the waffle starting at $(x,y)$. Let $\ell_{1}$, $\ell_{2}$ and $\ell_{3}$ be the lines just outside the boundary of $W_{L}$, defined by $y=-1$, $x-y=-1$ and $x+y=L+1$ respectively. We consider the involution $f:\Omega\setminus \Omega'\to\Omega\setminus \Omega'$ defined by reflecting the section of the walk after its first intersection with one of the lines $\ell_{1}$, $\ell_{2}$ and $\ell_{3}$ in that line.
Now, define \begin{align*}T_{L}&:=((2L+8)\mathbb{Z})\times((2L+8)\mathbb{Z})\cup (L+4+(2L+8)\mathbb{Z})\times(L+4+(2L+8)\mathbb{Z})\\ A_{L}&:=T_{L}\cup\left((-1,-3)+T_{L}\right)\cup\left((-4,-2)+T_{L}\right)\cup\left((-3,1)+T_{L}\right)\\ B_{L}&:=\left((-1,1)+T_{L}\right)\cup\left((0,-2)+T_{L}\right)\cup\left((-3,-3)+T_{L}\right)\cup\left((-4,0)+T_{L}\right). \end{align*} Then the involution $f$ sends walks in $\Omega\setminus \Omega'$ ending at a point in $A_{L}$ to walks ending at a point in $B_{L}$ and vice-versa. The only walks in $\Omega'$ ending at a point in $A_{L}$ (or $B_{L}$) are those ending at $(0,0)$. Hence the number of waffle walks of a given length from $(x,y)$ to $(0,0)$ is equal to the number of (unconstrained) walks of the same length from $(x,y)$ to a point in $A_{L}$ minus the number of such walks from $(x,y)$ to a point in $B_{L}$. By shifting the starting point, this is the number of walks from a point in $\{(x,y),(x+1,y+3),(x+4,y+2),(x+3,y-1)\}$ to a point in $T_{L}$ minus the number of walks from a point in $\{(x+1,y-1),(x,y+2),(x+3,y+3),(x+4,y)\}$ to a point in $T_{L}$. These numbers can easily be computed using the generating function for unconstrained walks, and doing so yields the formula in the statement of the corollary. As an example, we show how to compute the generating function for walks from $(x,y)$ to a point in $T_{L}$ counted by length.
Let $F(t,a,b)$ be the generating function for walks starting at $(x,y)$ with walks of length $n$ ending at $(x_{1},y_{1})$ contributing $a^{x_{1}}b^{y_{1}}t^{n}$. We want to sum the coefficients where the powers $x_{1}$ and $y_{1}$ of $a$ and $b$ are both multiples of $2L+8$ or both $L+4$ more than multiples of $2L+8$. For those where both $x_{1}$ and $y_{1}$ are multiples of $2L+8$, This is achieved by setting $\alpha=e^{\frac{i\pi}{L+4}}$, and writing the sum \[\frac{1}{(2L+8)^2}\sum_{1\leq j,k\leq 2L+7}F(t,\alpha^{j},\alpha^{k}),\] as the contribution to this sum from a monomial $a^{x_{1}}b^{y_{1}}t^{n}$ is \[t^n\left(\frac{1}{2L+8}\sum_{1\leq j\leq 2L+7}\alpha^{x_{1}j}\right)\left(\frac{1}{2L+8}\sum_{1\leq k\leq 2L+7}\alpha^{y_{1}k}\right),\] which is $0$ unless $x_{1}$ and $y_{1}$ are both multiples of $2L+8$, in which case it is $t^n$. Similarly, the generating function for the cases where $x_{1}-L-4$ and $y_{1}-L-4$ are multiples of $2L+8$ is \[\frac{1}{(2L+8)^2}\sum_{1\leq j,k\leq 2L+7}(-1)^{j+k}F(t,\alpha^{j},\alpha^{k}).\] Similarly, one can write expressions for the generating function of walks from any given point to a point in $T_{L}$. Adding and subtracting these as appropriate yields the desired result. \end{proof}
\section{Conclusion}
To sum up, we have found several bijections between forward triangular walks and Motzkin paths with bounded amplitude, thus answering Mortimer and Prellberg's open question~\cite{MortimerPrellberg}.
This discovery had some interesting consequences. First, by looking for a bijection, we discovered an unexpected symmetry property between forward and backward paths (Theorem~\ref{theo:directions}). Second, we refined Mortimer and Prellberg's results by considering triangular walks starting not only at the origin, but at any point in the triangle (Theorem~\ref{theo:anywhere}). Finally, by mimicking the proofs of the first sections, we managed to extend some of our results to higher dimensions. In particular, we discovered a new bijective correspondence in dimension 3 (Theorem~\ref{theo:waffle_to_pyramid}), which in the process enabled us to find an expression for the generating function of pyramid walks (Corollary~\ref{cor:enum_pyramid_walks}), which was also an open question in Mortimer and Prellberg's paper.
However, we still do not know whether there exists a bijection between triangular walks in dimension $d \geq 4$ and some class of walks in dimension $d-1$. Our two- and three-dimensional argument (more precisely, the one in the proofs of Proposition~\ref{prop:motzkin_inductive} and Theorem~\ref{theo:waffle_to_pyramid}) no longer seems to work. We leave the question of Mortimer and Prellberg about the enumeration of triangular walks in higher dimensions open.
There is another conjecture from a different paper that may relate to this current work: the three authors of~\cite{bousquetmelouFusyRaschel} conjecture that there exists a length-preserving involution on double-tandem walks that exchanges $x_{start}-x_{min}$
and
$y_{end}-y_{min}$,
while preserving $y_{start}-y_{min}$ and $x_{end}-x_{min}$ (point $(x_{start},y_{start})$ denotes the starting point, and $x_{min}$ and $y_{min}$ are respectively the minimal x- and y-coordinates during the walk). It may be interesting to see if techniques of Section~\ref{s:symmetry} facilitate the discovery of this involution.
Finally, this paper shows two examples of bijections where there is a trade-off between domain and endpoint constraints: \begin{itemize} \item The one between triangular paths and Motzkin paths transforms two-dimensional walks with no constraint on the endpoint into one-dimensional walks which must finish at the origin; \item the one between pyramid paths and waffle walks transforms three-dimensional walks with no constraint on the endpoint into two-dimensional walks which must end on one of the axes. \end{itemize} This is somewhat reminiscent of \cite{Eliz15,courtiel2018bijections}. We wonder whether there are some other examples of this phenomenon, or even a generic framework for such bijections.
\end{document}
\begin{document}
\title{{\Large Phase Controlled Continuously Entangled Two-Photon Laser with Double }$\Lambda ${\Large \ Scheme}} \author{C. H. Raymond Ooi}
\affiliation{{\sl Department of Physics, KAIST, Guseong-dong, Yuseong-gu, Daejeon, 305-701 Korea\\ Max-Planck-Institut f\"{u}r Quantenoptik, D-85748, Garching, Germany}} \date{\today}
\begin{abstract} We show that an absolute coherent phase of a laser can be used to manipulate the entanglement of photon pairs of a two-photon laser. Our focus is on the generation of a continuous source of entangled photon pairs in the double $ \Lambda $ (or Raman) scheme. We study the dependence of steady state entanglement on the phase and laser parameters. We obtain a relationship between entanglement and two-photon correlation. We derive conditions that give steady state entanglement for the Raman-EIT scheme and use them to identify the region of steady state macroscopic entanglement. No entanglement is found for the double resonant Raman scheme for any laser parameters. \end{abstract}
\maketitle
\section{Introduction}
Entangled photon pairs are an integral asset to quantum communication technology with continuous variables \cite{qtm comm}. A bright source of entangled photon pairs could also be useful for quantum lithography \cite {qtm litho}. Transient entanglement of a large number of photon pairs has been shown to exist for the cascade scheme \cite{Han},\cite{transient}, and the double Raman scheme \cite{Kiffner}. The transient regime does not provide a continuous source of entangled photon pairs that could be as useful and practical as typical lasers in c.w. operation. One might wonder whether the entanglement still survives in the long time limit.
In this paper, we use the coherent phase of the controlling lasers to generate a large (macroscopic) number of entangled photon pairs in the steady state. This also shows the possibility of coherently controlling entanglement in the steady state. We focus on the Raman-EIT scheme (Fig. \ref {REDlaserscheme}a) that has been shown to produce nonclassically correlated photon pairs in the single-atom and many-atom cases.
First, we discuss the physics of a two-photon emission laser using the master equation in Section II. The physical significance of each term in the master equation is elaborated and related to the quantities of interest (in Section III) such as two-photon correlation and Duan's \cite{Duan} entanglement measure. In Section IV, we show the importance of the laser phase for acquiring entanglement. In Section V, the steady state solutions for the photon numbers and the correlation between photon pairs are given. We show that the laser phase provides a useful knob for controlling entanglement. We then use the results to derive a condition for entanglement in the double Raman scheme. By using proper values of cavity damping and laser parameters based on an analysis of this condition, we obtain a macroscopic number of entangled photon pairs in the steady state. We also analyze the double resonant Raman scheme but find no entanglement.
\begin{figure}
\caption{a) Double Raman atom in a doubly-resonant optical cavity. The atom is trapped by an optical dipole force and driven by a pump laser and a control laser. The \emph{Raman-EIT} (REIT) scheme ($\Omega _{c},\Delta _{c}(=\Delta )>>\Omega _{p},\protect\gamma $) and the \emph{double resonant Raman} (DRR) scheme ($\Omega _{c}=\Omega _{p},\Delta _{c}=\Delta _{p}=0$) are the focus of the analysis. b) Four off-diagonal two-photon emission density matrix elements with their respective coefficients $C_{k}$.}
\label{REDlaserscheme}
\end{figure}
\section{Physics of Two-Photon Laser}
For simplicity, we consider a single atom localized in the doubly resonant cavity. Using the usual approach \cite{QO} we obtain the master equation \begin{equation*} \frac{d}{dt}\hat{\rho}=[C_{\text{loss1}}(\hat{a}_{1}\hat{\rho}\hat{a}_{1}^{\dagger }-\hat{\rho}\hat{a}_{1}^{\dagger }\hat{a}_{1})+C_{\text{gain1}}(\hat{a}_{1}^{\dagger }\hat{\rho}\hat{a}_{1}-\hat{a}_{1}\hat{a}_{1}^{\dagger }\hat{\rho}) \end{equation*} \begin{equation*} +C_{\text{loss2}}(\hat{a}_{2}\hat{\rho}\hat{a}_{2}^{\dagger }-\hat{a}_{2}^{\dagger }\hat{a}_{2}\hat{\rho})+C_{\text{gain2}}(\hat{a}_{2}^{\dagger }\hat{\rho}\hat{a}_{2}-\hat{\rho}\hat{a}_{2}\hat{a}_{2}^{\dagger })+ \end{equation*} \begin{equation} e^{-i\varphi _{t}}(C_{\text{1}}\hat{a}_{2}\hat{\rho}\hat{a}_{1}-C_{\text{2}}\hat{\rho}\hat{a}_{1}\hat{a}_{2}+C_{\text{3}}\hat{a}_{1}\hat{\rho}\hat{a}_{2}-C_{\text{4}}\hat{a}_{1}\hat{a}_{2}\hat{\rho})]\text{+adj.,} \label{master} \end{equation} with the effective phase $\varphi _{t}$. For the double Raman scheme, $\varphi _{t}=\varphi _{p}+\varphi _{c}-(\varphi _{s}+\varphi _{a})$. The phases $\varphi _{\alpha }=k_{\alpha }z+\phi _{\alpha }$ of the lasers depend on both the position $z$ of the atom and the controllable absolute phases $\phi _{\alpha }$ of the lasers; we take $\phi _{s}=\phi _{a}=0$. Since $k_{p}+k_{c}-k_{s}-k_{a}=0$, the effective phase becomes $\varphi _{t}=\phi _{p}+\phi _{c}=\phi $. The explicit expressions for $C_{\text{lossj}}$, $C_{\text{gainj}}$ and $C_{\text{k}}$ (where j$=$1,2 and k$=$1,2,3,4) are given in Appendix \ref{coefficients}.
Equation (\ref{master}) already includes the cavity damping Liouvillean $L\hat{\rho}=-\sum\limits_{j=1,2}\kappa _{j}\{\hat{a}_{j}^{\dagger }\hat{a}_{j}\hat{\rho}+\hat{\rho}\hat{a}_{j}^{\dagger }\hat{a}_{j}-2\hat{a}_{j}\hat{\rho}\hat{a}_{j}^{\dagger }\}$. We find that the following relation holds: \begin{equation} C_{\text{1}}+C_{\text{3}}=C_{\text{2}}+C_{\text{4}}\text{.} \label{C relation} \end{equation} Note that Eq. (\ref{master}) generalizes the master equation for the cascade scheme \cite{cascade master}, in which $C_{\text{3}}=C_{\text{gain2}}=0$ and $C_{\text{lossj}}=\kappa _{j}$.
The $C_{\text{gainj}}$ are due to the emission processes of the atom in the excited levels and the Raman process via the laser fields, which provide gain. On the other hand, the $C_{\text{lossj}}$ are due to the cavity dissipation $\kappa _{j}$ and the absorption processes of the atom in the ground levels, which create loss. The $C_{\text{k}}$ coefficients correspond to squeezing. Each such term gives the coherence between $n_{j}$ and $n_{j}\pm 1$ such that the difference between the total photon number in the bra and in the ket is always $2$. Figure \ref{REDlaserscheme}b illustrates in the two-dimensional photon number space the essence of each off-diagonal term in Eq. (\ref{master}).
The consequence of this relation is that, for a large number of photons $n_{j}>>1$, the coherences due to $\hat{a}_{2}\hat{\rho}\hat{a}_{1},\hat{\rho}\hat{a}_{1}\hat{a}_{2},\hat{a}_{1}\hat{\rho}\hat{a}_{2},\hat{a}_{1}\hat{a}_{2}\hat{\rho}$ and their adjoints are approximately equal. Hence, the contribution of the off-diagonal terms vanishes and the master equation reduces to a rate equation. Since the off-diagonal terms give rise to entanglement (as we show below), we can understand that there will be no entanglement for very large $n_{j}$.
\begin{figure}\label{REDg2-Duan}
\end{figure}
\section{Relation between entanglement and two-photon correlation}
The two-photon correlation for the \emph{Raman emission doublet} (RED) (large detuning and weak pump) version of the double Raman scheme shows nonclassical properties, such as antibunching and violation of the Cauchy-Schwarz inequality, for a single atom \cite{HH} and for an extended medium \cite{paper I},\cite{Harris}. It is useful to show how this nonclassical correlation relates to entanglement. The normalized two-photon correlation at zero time delay is
\begin{eqnarray}
g^{(2)}(t) &\doteq &\frac{|\langle \hat{a}_{2}\hat{a}_{1}\rangle |^{2}}{ \langle \hat{a}_{2}^{\mathbf{\dagger }}\hat{a}_{2}\rangle \langle \hat{a} _{1}^{\mathbf{\dagger }}\hat{a}_{1}\rangle }+1\text{,} \\
|\langle \hat{a}_{2}\hat{a}_{1}\rangle | &=&\sqrt{\bar{n}_{1}\bar{n} _{2}(g^{(2)}(t)-1)}\text{.} \end{eqnarray} Thus, the $g^{(2)}(t)$ does not provide phase $\phi _{21}$ information of the correlation $\langle \hat{a}_{2}\hat{a}_{1}\rangle $. We introduce the phase via \begin{equation}
\langle \hat{a}_{2}\hat{a}_{1}\rangle =|\langle \hat{a}_{2}\hat{a}
_{1}\rangle |e^{i\phi _{21}}\text{.} \label{<a2a1>} \end{equation} Hence Duan's parameter $D(t)={\left( {\Delta \hat{u}}\right) ^{2}+\left( {\Delta \hat{v}}\right) ^{2}}$ can be rewritten as \begin{eqnarray} D(t) &=&2[1+\bar{n}_{1}+\bar{n}_{2}+2\sqrt{\bar{n}_{1}\bar{n}_{2}(g^{(2)}(t)-1)}\cos \phi _{21} \notag \\
&&-|\langle \hat{a}_{2}\rangle |^{2}-|\langle \hat{a}_{1}\rangle
|^{2}-\langle \hat{a}_{2}\rangle \langle \hat{a}_{1}\rangle -\langle \hat{a} _{2}^{\mathbf{\dagger }}\rangle \langle \hat{a}_{1}^{\mathbf{\dagger } }\rangle ]\text{.} \label{criteria vs phi} \end{eqnarray}
The terms in the second line would be $-|\alpha _{1}|^{2}-|\alpha _{2}|^{2}-(\alpha _{1}\alpha _{2}+\alpha _{1}^{\ast }\alpha _{2}^{\ast })$ for an input coherent state. If $g^{(2)}(t)=1$ we can have entanglement that is independent of the phase $\phi _{21}$, \begin{equation} D(t)=2[1-(\alpha _{1}\alpha _{2}+\alpha _{1}^{\ast }\alpha _{2}^{\ast })]\text{.} \end{equation} We shall consider an initial vacuum state, for which the mean-field terms in the second line of Eq. (\ref{criteria vs phi}) vanish. Then the presence of inseparability or entanglement is entirely determined by the phase $\phi _{21}$ in Eq. (\ref{criteria vs phi}). We have thus found the knob to control entanglement: $\cos \phi _{21}$ must be negative, i.e. $\pi /2<\phi _{21}<3\pi /2$.
The condition for inseparability or entanglement, ${0<\left( {\Delta \hat{u}}\right) ^{2}+\left( {\Delta \hat{v}}\right) ^{2}<2}$, can be re-expressed in terms of the phase and the two-photon correlation, \begin{equation} -\frac{\bar{n}_{1}+\bar{n}_{2}+1}{2\sqrt{\bar{n}_{1}\bar{n}_{2}(g^{(2)}-1)}}<\cos \phi _{21}<-\frac{\bar{n}_{1}+\bar{n}_{2}}{2\sqrt{\bar{n}_{1}\bar{n}_{2}(g^{(2)}-1)}} \end{equation} where the lower limit corresponds to maximum entanglement. The midpoint value $\cos \phi _{21}=-\frac{\bar{n}_{1}+\bar{n}_{2}+1/2}{2\sqrt{\bar{n}_{1}\bar{n}_{2}(g^{(2)}-1)}}$ gives $D(t)=1$. When $\bar{n}_{1}=\bar{n}_{2}=\bar{n}$, we have $-\frac{1+1/(2\bar{n})}{\sqrt{g^{(2)}-1}}<\cos \phi _{21}<-\frac{1}{\sqrt{g^{(2)}-1}}$. Note that for photon antibunching $g^{(2)}\gtrsim 1$ the entanglement window for $\phi _{21}$ can be quite large, provided $\bar{n}_{1},\bar{n}_{2}$ are small too.
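As an illustrative numerical check (the values are chosen purely for illustration), take $\bar{n}_{1}=\bar{n}_{2}=1$ and $g^{(2)}=5$. The window above becomes \begin{equation*} -0.75<\cos \phi _{21}<-0.5\text{,} \end{equation*} i.e. entanglement requires $\phi _{21}$ to lie in the rather narrow ranges $120^{\circ }<\phi _{21}<138.6^{\circ }$ or $221.4^{\circ }<\phi _{21}<240^{\circ }$.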
In the limit of large two-photon correlation (photon bunching), $g^{(2)}>>1$, and large photon numbers, $\bar{n}_{1},\bar{n}_{2}>>1$, the range for entanglement becomes quite restrictive. Here, $\cos \phi _{21}\simeq -\frac{1}{\sqrt{g^{(2)}-1}}$ becomes very small in magnitude (but negative) and, from Eq. (\ref{criteria vs phi}), $D(t)\lesssim 2$, so the entanglement is small. This explains the results in Fig. \ref{REDg2-Duan}, where a large transient correlation is accompanied by a small entanglement.
In the long time limit Fig. \ref{REDg2-Duan} shows that the correlation vanishes (corresponding to antibunching) and that there is entanglement, $D<<2$. Thus antibunching and entanglement are compatible quantum mechanical properties, since they appear together here. However, the entanglement measured by Duan's parameter increases with time while the correlation decreases with time, so the two do not vary in the same way. This clearly shows that correlation should be distinguished from entanglement. As expected, the decoherence $\gamma _{bc}$ tends to reduce both the degree of entanglement and the magnitude of the correlation.
\section{Laser Phase for Entanglement}
Here, we show, by using a simple example from the resonant cascade work of Han \textsl{et al.} \cite{Han}, that a \emph{nonzero phase} of the paired correlation $\langle \hat{a}_{1}\hat{a}_{2}\rangle $ is necessary for entanglement. Let us analyze the transient equation (written in their notation with zero laser phase)
\begin{equation*} \frac{d\langle \hat{a}_{1}\hat{a}_{2}\rangle }{dt}=-\langle \hat{a}_{1}\hat{a }_{2}\rangle (\beta _{22}^{\ast }-\beta _{11}+\kappa _{2}+\kappa _{1}) \end{equation*}
\begin{equation} -\beta _{21}^{\ast }(\langle \hat{a}_{1}^{\dagger }\hat{a}_{1}\rangle +1)+\beta _{12}\langle \hat{a}_{2}^{\dagger }\hat{a}_{2}\rangle \text{.} \end{equation} The coefficients for the resonant case are such that $\beta _{11},\beta _{22}$ are real while $\beta _{12}=i\alpha _{12}$ and $\beta _{21}=i\alpha _{21}$ are purely imaginary. Clearly we then have a purely imaginary value \begin{equation*} \langle \hat{a}_{1}\hat{a}_{2}\rangle (t)=i\int_{0}^{t}e^{-K(t-t^{\prime })}\{\alpha _{12}\langle \hat{a}_{2}^{\dagger }\hat{a}_{2}\rangle (t^{\prime })+ \end{equation*} \begin{equation} \alpha _{21}(\langle \hat{a}_{1}^{\dagger }\hat{a}_{1}\rangle (t^{\prime })+1)\}dt^{\prime }\simeq iX \end{equation} where $K=\beta _{22}-\beta _{11}+\kappa _{2}+\kappa _{1}$ and $X$ is real; its explicit expression is not important for the present discussion.
For the initial condition $\langle \hat{a}_{j}(0)\rangle =0$, Duan's criterion \begin{equation} D=2[1+\langle \hat{a}_{1}^{\mathbf{\dagger }}\hat{a}_{1}\rangle +\langle \hat{a}_{2}^{\mathbf{\dagger }}\hat{a}_{2}\rangle +\langle \hat{a}_{1}\hat{a}_{2}\rangle +\langle \hat{a}_{1}^{\mathbf{\dagger }}\hat{a}_{2}^{\mathbf{\dagger }}\rangle ] \label{D vacuum} \end{equation} clearly gives $D=2+2\{\langle \hat{a}_{1}^{\mathbf{\dagger }}\hat{a}_{1}\rangle +\langle \hat{a}_{2}^{\mathbf{\dagger }}\hat{a}_{2}\rangle +iX-iX\}>2$; hence there is no entanglement.
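For completeness, Eq. (\ref{D vacuum}) follows from the quadrature form of Duan's criterion. Assuming the standard conventions (not written out above) $\hat{u}=\hat{x}_{1}+\hat{x}_{2}$, $\hat{v}=\hat{p}_{1}-\hat{p}_{2}$ with $\hat{x}_{j}=(\hat{a}_{j}+\hat{a}_{j}^{\dagger })/\sqrt{2}$, $\hat{p}_{j}=(\hat{a}_{j}-\hat{a}_{j}^{\dagger })/(\sqrt{2}i)$, and zero mean fields $\langle \hat{a}_{j}\rangle =0$, one finds \begin{equation*} \left( {\Delta \hat{u}}\right) ^{2}+\left( {\Delta \hat{v}}\right) ^{2}=\sum_{j=1,2}\langle \hat{x}_{j}^{2}+\hat{p}_{j}^{2}\rangle +2\langle \hat{x}_{1}\hat{x}_{2}-\hat{p}_{1}\hat{p}_{2}\rangle =\sum_{j=1,2}(2\bar{n}_{j}+1)+2(\langle \hat{a}_{1}\hat{a}_{2}\rangle +\langle \hat{a}_{1}^{\dagger }\hat{a}_{2}^{\dagger }\rangle )\text{,} \end{equation*} since the single-mode anomalous terms $\langle \hat{a}_{j}^{2}\rangle $ cancel between $\hat{x}_{j}^{2}$ and $\hat{p}_{j}^{2}$; this is precisely Eq. (\ref{D vacuum}).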
For a finite phase $\phi $ associated with the pump laser, the correlation $\langle \hat{a}_{1}\hat{a}_{2}\rangle $ becomes $\langle \hat{a}_{1}\hat{a}_{2}\rangle e^{i\phi }$, but the photon numbers are not affected. The parameter becomes \begin{equation} D{=}2(1+\langle \hat{a}_{1}^{\mathbf{\dagger }}\hat{a}_{1}\rangle +\langle \hat{a}_{2}^{\mathbf{\dagger }}\hat{a}_{2}\rangle +2X\sin \phi ) \end{equation} which gives maximum entanglement when $\phi =-\pi /2$ or $3\pi /2$, and no entanglement when $\phi =0$.
\section{Steady State Entanglement}
The master equation (\ref{master}) is linear and does not include saturation, so one might wonder whether steady state solutions exist at all. We find that there are steady state solutions when the photon numbers $\bar{n}_{j}$ do not increase indefinitely. Parameters that give non-steady-state solutions manifest themselves as negative values of $D$ and should be disregarded. The study of entanglement via a nonlinear theory will be published elsewhere.
In the case of initial vacuum, only $\frac{d\bar{n}_{1}}{dt},\frac{d\bar{n} _{2}}{dt},\frac{d\langle \hat{a}_{1}\hat{a}_{2}\rangle }{dt},\frac{d\langle \hat{a}_{1}^{\dagger }\hat{a}_{2}^{\dagger }\rangle }{dt}$ are sufficient to compute Duan's entanglement parameter, where $\bar{n}_{j}=\langle \hat{a} _{j}^{\dagger }\hat{a}_{j}\rangle $, $j=1,2$. From the master equation (\ref {master}), we obtain the coupled equations \begin{eqnarray} \frac{d\bar{n}_{1}}{dt} &=&\bar{n}_{1}K_{1}+e^{-i\phi }(C_{\text{1}}-C_{ \text{2}})\langle \hat{a}_{1}\hat{a}_{2}\rangle + \notag \\ &&e^{i\phi }(C_{\text{1}}^{\ast }-C_{\text{2}}^{\ast })\langle \hat{a} _{1}^{\dagger }\hat{a}_{2}^{\dagger }\rangle +C_{\text{gain1}}+C_{\text{gain1 }}^{\ast }\text{,} \label{dn1/dt} \end{eqnarray} \begin{eqnarray} \frac{d\bar{n}_{2}}{dt} &=&\bar{n}_{2}K_{2}+e^{-i\phi }(C_{\text{3}}-C_{ \text{2}})\langle \hat{a}_{1}\hat{a}_{2}\rangle + \notag \\ &&e^{i\phi }(C_{\text{3}}^{\ast }-C_{\text{2}}^{\ast })\langle \hat{a} _{1}^{\dagger }\hat{a}_{2}^{\dagger }\rangle +C_{\text{gain2}}+C_{\text{gain2 }}^{\ast }\text{,} \label{dn2/dt} \end{eqnarray} \begin{equation} (\frac{d}{dt}-K_{12})\langle \hat{a}_{1}\hat{a}_{2}\rangle =e^{i\phi }[\bar{n }_{1}(C_{\text{3}}^{\ast }-C_{\text{2}}^{\ast })+\bar{n}_{2}(C_{\text{1} }^{\ast }-C_{\text{2}}^{\ast })-C_{\text{2}}^{\ast }] \label{da1a2/dt} \end{equation}
where the net gain/loss coefficients are \begin{eqnarray} K_{j} &=&C_{\text{gainj}}+C_{\text{gainj}}^{\ast }-(C_{\text{lossj}}+C_{\text{lossj}}^{\ast }) \\ K_{12} &=&C_{\text{gain2}}+C_{\text{gain1}}^{\ast }-(C_{\text{loss2}}+C_{\text{loss1}}^{\ast })\text{.} \end{eqnarray}
The steady state solution for the correlation is $\langle \hat{a}_{1}\hat{a} _{2}\rangle =Ee^{i\phi }$ where \begin{eqnarray*} E &=&[-C_{32}^{\ast }(K_{2}K_{12}^{\ast }+C_{12}^{\ast }C_{32}-C_{12}C_{32}^{\ast })(C_{\text{gain1}}+C_{\text{gain1}}^{\ast }) \\ &&-C_{12}^{\ast }(K_{1}K_{12}^{\ast }+C_{12}C_{32}^{\ast }-C_{12}^{\ast }C_{32})(C_{\text{gain2}}+C_{\text{gain2}}^{\ast }) \\ &&+C_{2}^{\ast }(K_{1}C_{12}C_{32}^{\ast }+K_{2}C_{32}C_{12}^{\ast })-C_{2}^{\ast }K_{1}K_{2}K_{12}^{\ast } \end{eqnarray*} \begin{equation} -C_{2}C_{32}^{\ast }C_{12}^{\ast }(K_{1}+K_{2})]\frac{1}{M} \end{equation} where \begin{eqnarray} M &=&(K_{1}K_{12}^{\ast }+K_{2}K_{12})\left( C_{12}^{\ast }C_{32}\right) + \text{c.c.} \notag \\ &&-\left( C_{12}C_{32}^{\ast }-C_{12}^{\ast }C_{32}\right) ^{2}-K_{1}K_{2}K_{12}K_{12}^{\ast }\text{.} \end{eqnarray} The steady state solutions for the photon numbers are \begin{eqnarray} \bar{n}_{1} &=&(C_{\text{gain1}}+C_{\text{gain1}}^{\ast })\frac{ K_{2}K_{12}K_{12}^{\ast }-(C_{12}^{\ast }C_{32}K_{12}^{\ast }+\text{c.c.})}{M } \notag \\ &&+(C_{\text{gain2}}+C_{\text{gain2}}^{\ast })C_{12}^{\ast }C_{12}\frac{ K_{12}^{\ast }+K_{12}}{M} \notag \\ &&+C_{\text{2}}^{\ast }C_{12}\frac{K_{2}K_{12}^{\ast }+(C_{12}^{\ast }C_{32}- \text{c.c.})}{M} \notag \\ &&+C_{\text{2}}C_{12}^{\ast }\frac{K_{2}K_{12}+(C_{12}C_{32}^{\ast }-\text{ c.c.})}{M}\text{,} \label{n1 st} \end{eqnarray} \begin{eqnarray} \bar{n}_{2} &=&(C_{\text{gain2}}+C_{\text{gain2}}^{\ast })\frac{ K_{1}K_{12}K_{12}^{\ast }-(C_{32}^{\ast }C_{12}K_{12}^{\ast }+\text{c.c.})}{M } \notag \\ &&+(C_{\text{gain1}}+C_{\text{gain1}}^{\ast })C_{32}^{\ast }C_{32}\frac{ K_{12}^{\ast }+K_{12}}{M} \notag \\ &&+C_{\text{2}}^{\ast }C_{32}\frac{K_{1}K_{12}^{\ast }+(C_{12}C_{32}^{\ast }- \text{c.c.})}{M} \notag \\ &&+C_{\text{2}}C_{32}^{\ast }\frac{K_{1}K_{12}+(C_{12}^{\ast }C_{32}-\text{ c.c.})}{M}\text{.} \label{n2 st} \end{eqnarray}
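The steady state expressions above are somewhat unwieldy, and a quick numerical sanity check is often useful. The following minimal Python sketch simply integrates the coupled equations (\ref{dn1/dt})--(\ref{da1a2/dt}) from the vacuum until the steady state is reached and then evaluates Duan's parameter of Eq. (\ref{D vacuum}); the coefficient values are purely illustrative placeholders, not the physical values of Appendix \ref{coefficients}.
\begin{verbatim}
import numpy as np

# Illustrative placeholder coefficients (NOT the physical values of Appendix A).
C1, C2, C3 = 0.15j, -0.20j, 0.05j      # squeezing coefficients C_k
Cg1, Cg2 = 0.05 + 0j, 0.05 + 0j        # gain coefficients C_gain,j
K1, K2, K12 = -1.0, -1.0, -1.0         # net gain/loss rates; negative => steady state
phi = -np.pi / 2                       # controllable laser phase

n1, n2, c = 0.0, 0.0, 0.0 + 0j         # initial vacuum: photon numbers and <a1 a2>
dt, steps = 0.01, 10000                # integrate Eqs. (dn1/dt)-(da1a2/dt) up to t = 100
for _ in range(steps):
    dn1 = n1*K1 + 2*(np.exp(-1j*phi)*(C1 - C2)*c).real + 2*Cg1.real
    dn2 = n2*K2 + 2*(np.exp(-1j*phi)*(C3 - C2)*c).real + 2*Cg2.real
    dc = K12*c + np.exp(1j*phi)*(n1*np.conj(C3 - C2)
                                 + n2*np.conj(C1 - C2) - np.conj(C2))
    n1, n2, c = n1 + dt*dn1, n2 + dt*dn2, c + dt*dc

D = 2*(1 + n1 + n2 + 2*c.real)         # Duan parameter, Eq. (D vacuum)
print(f"n1={n1:.3f} n2={n2:.3f} Re<a1a2>={c.real:.3f} D={D:.3f} entangled: {D < 2}")
\end{verbatim}
For these placeholder numbers the integration settles at $\bar{n}_{1}\approx 0.38$, $\bar{n}_{2}\approx 0.30$, $\langle \hat{a}_{1}\hat{a}_{2}\rangle \approx -0.40$ and $D\approx 1.76<2$, while flipping the phase to $\phi =+\pi /2$ gives $D>2$; this illustrates numerically the role of the phase as a knob for entanglement.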
From Eq. (\ref{D vacuum}), the necessary condition for entanglement is $Ee^{i\phi }+E^{\ast }e^{-i\phi }<-(\bar{n}_{1}+\bar{n}_{2})$. If $E$ is real and positive there is no entanglement in the region $\cos \phi >0$. Entanglement is still possible even if $\phi =0$, provided $\mathrm{Re}\{E\}<0$. Thus, the phase $\phi $ is not necessary for entanglement, but it provides an \emph{extra knob} for controlling entanglement.
Let us search for entanglement conditions in the limiting cases of the Raman-EIT (REIT) scheme, which produces nonclassically correlated photon pairs, and the double resonant Raman (DRR) scheme.
\subsection{Raman-EIT regime}
For this scheme $\Omega _{c},\Delta _{c}(=\Delta )>>\Omega _{p},\gamma _{x}$ ($x=ab,ac,db,dc,bc$ indices for decoherence rates) and $\Delta =0$. Thus, we have $p_{ba}=\frac{-i\Omega _{c}^{\ast }}{\gamma _{ab}}(p_{bb}-p_{aa}) \rightarrow 0$, $p_{cd}=\frac{-\Omega _{p}}{\Delta }=p_{dc}$. It follows from Appendix A that the only finite coefficients in the master equation are \begin{equation} C_{\text{loss1}}\simeq \kappa _{s},C_{\text{loss2}}\simeq \kappa _{a},C_{ \text{2}}\simeq -C_{\text{4}}\simeq i\Xi \text{,} \end{equation} \begin{equation} K_{1}=-2\kappa _{s},K_{2}=-2\kappa _{a},K_{12}=-(\kappa _{s}+\kappa _{a}) \label{K for REIT} \end{equation} where \begin{equation} \Xi =g_{a}g_{s}\frac{\Omega _{p}\Omega _{c}}{\Delta (\gamma _{ac}\gamma _{bc}+\Omega _{c}^{2})}\text{.} \label{CAS} \end{equation}
Using these results, the steady state solutions become \begin{equation} n_{1}=\Xi ^{2}\frac{\kappa _{a}}{(\kappa _{s}+\kappa _{a})[\kappa _{a}\kappa _{s}-\Xi ^{2}]}\text{,} \label{n1 st REIT} \end{equation} \begin{equation} n_{2}=\Xi ^{2}\frac{\kappa _{s}}{(\kappa _{s}+\kappa _{a})[\kappa _{a}\kappa _{s}-\Xi ^{2}]}\text{,} \label{n2 st REIT} \end{equation} \begin{equation} \langle \hat{a}_{1}\hat{a}_{2}\rangle =e^{i\theta _{t}}i\Xi \frac{\kappa _{a}\kappa _{s}}{(\kappa _{s}+\kappa _{a})[\kappa _{a}\kappa _{s}-\Xi ^{2}]}\text{.} \label{a12 st REIT} \end{equation} The entanglement criterion can be rewritten as \begin{equation} \Xi \frac{\Xi -\frac{\kappa _{a}\kappa _{s}}{(\kappa _{s}+\kappa _{a})}2\sin \theta _{t}}{\kappa _{a}\kappa _{s}-\Xi ^{2}}<0\text{.} \label{condition REIT} \end{equation} For negative detuning, $\Xi <0$, there are two possibilities: a) if $\kappa _{a}\kappa _{s}<\Xi ^{2}$ entanglement occurs in the region $\frac{\kappa _{a}\kappa _{s}}{(\kappa _{s}+\kappa _{a})}2\sin \theta _{t}>\Xi $, b) if $\kappa _{a}\kappa _{s}>\Xi ^{2}$ we have entanglement in $\frac{\kappa _{a}\kappa _{s}}{(\kappa _{s}+\kappa _{a})}2\sin \theta _{t}<\Xi $. Similarly, for positive detuning, $\Xi >0$: a) if $\kappa _{a}\kappa _{s}<\Xi ^{2}$ then we need $\Xi >\frac{\kappa _{a}\kappa _{s}}{(\kappa _{s}+\kappa _{a})}2\sin \theta _{t}$ and b) if $\kappa _{a}\kappa _{s}>\Xi ^{2}$ then we need $\Xi <\frac{\kappa _{a}\kappa _{s}}{(\kappa _{s}+\kappa _{a})}2\sin \theta _{t}$.
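For the record, Eq. (\ref{condition REIT}) follows directly from these steady state values: Eq. (\ref{D vacuum}) gives entanglement when $\bar{n}_{1}+\bar{n}_{2}+\langle \hat{a}_{1}\hat{a}_{2}\rangle +\langle \hat{a}_{1}^{\dagger }\hat{a}_{2}^{\dagger }\rangle <0$, and Eqs. (\ref{n1 st REIT})--(\ref{a12 st REIT}) give \begin{equation*} \bar{n}_{1}+\bar{n}_{2}=\frac{\Xi ^{2}}{\kappa _{a}\kappa _{s}-\Xi ^{2}},\qquad \langle \hat{a}_{1}\hat{a}_{2}\rangle +\langle \hat{a}_{1}^{\dagger }\hat{a}_{2}^{\dagger }\rangle =-\frac{2\Xi \kappa _{a}\kappa _{s}\sin \theta _{t}}{(\kappa _{s}+\kappa _{a})[\kappa _{a}\kappa _{s}-\Xi ^{2}]}\text{,} \end{equation*} whose sum is negative exactly when Eq. (\ref{condition REIT}) holds.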
To obtain a large entanglement, we tune the cavity damping such that $|\kappa _{a}\kappa _{s}-C_{2}^{\ast }C_{2}|$ is small and $\sin \theta _{t}\sim 1$. Note that the \emph{sign} of the detuning $\Delta $ in Eq. (\ref{CAS}) is important for entanglement generation. We can arrange the signs of these quantities such that Eq. (\ref{condition REIT}) is satisfied.
Figure \ref{REDlaserSTRamanEIT} is plotted using $\kappa _{a}=\kappa _{s}=\sqrt{1.01\left( C_{2}^{\ast }C_{2}\right) }$ and $\Delta =40\gamma _{ac}$ for $\theta _{t}\sim 90^{\circ }$. We find that the region $\Omega _{c}\sim \Delta $ gives a large entanglement, but there the photon numbers are at a minimum. This prevents a steady state macroscopic entanglement. We verify that if we change to a negative detuning, $\Delta =-40\gamma _{ac}$, there is no entanglement.
\begin{figure*}\label{REDlaserSTRamanEIT}
\end{figure*} The region of maximum entanglement occurs around $\phi =90^{\circ }$. Entanglement can occur over a wide range of large $\Omega _{c}$. However, the photon numbers $\bar{n}_{j}$ decrease as $\Omega _{c}$ increases.
Figure \ref{REDlaserSTRamanEITmacro} shows that it is possible to obtain a continuous bright source of entangled photons. We realize that the number of nonclassical photon pairs in the REIT scheme is limited by the weak pump field. Thus, by increasing the pump field we can generate more Stokes photons (Fig. \ref{REDlaserSTRamanEITmacro}a). At the same time, the detuning is increased as well to ensure that the scheme remains in the nonclassical Raman-EIT regime. By applying the derived condition Eq. (\ref{condition REIT}) we obtain an even larger number of entangled photon pairs (Fig. \ref{REDlaserSTRamanEITmacro}b).
\begin{figure}\label{REDlaserSTRamanEITmacro}
\end{figure}
\subsection{Resonant Case}
Steady state entanglement in the double resonant Raman case ($\Omega _{c}=\Omega _{p},\Delta _{c}=\Delta _{p}=0$) seems hardly possible. In the following, we investigate this analytically. Here, we find that $C_{ac,ac},C_{ac,bd},C_{bd,ac},C_{bd,bd}$ are real and positive while $C_{ac,ad},C_{ac,bc},C_{bd,ad},C_{bd,bc}$ are purely imaginary (with positive or negative imaginary part). Since $p_{cd}=-i|p_{cd}|,p_{ba}=-i|p_{ba}|$, all of $C_{\text{1}},C_{\text{2}},C_{\text{3}},C_{\text{4}}$, $C_{\text{lossj}}$ and $C_{\text{gainj}}$ are real and could be negative. So, $K_{j}=2C_{\text{gainj}}-2C_{\text{lossj}}$ and $K_{12}=\frac{1}{2}(K_{1}+K_{2})$.
For a \emph{symmetric} system with $\Omega _{p}\simeq \Omega _{c}$ we find $p_{ab}=-p_{ba}=p_{dc}=-p_{cd}$, $p_{cc}\simeq p_{bb}$ and $p_{aa}\simeq p_{dd}$ \cite{pop range}. Then $C_{ac,ac}=C_{bd,bd}$ and $C_{ac,bd}=C_{bd,ac}$. If we take $T_{ac}=T_{dc}=T_{ab}=T_{db}=\gamma $ (spontaneous decay rate) with $T_{bc}=\gamma _{bc}$ and $T_{ad}=2\gamma $, we further have $C_{bd,ad}=-C_{ac,ad},C_{bd,bc}=-C_{ac,bc}$.
The steady state solutions for the DRR scheme can be written as \begin{equation} \bar{n}_{1}=\bar{n}_{2}=\frac{C_{\text{gain}}(C_{\text{gain}}-C_{\text{loss}})+\frac{1}{2}C_{2}C_{12}}{C_{12}^{2}-(C_{\text{gain}}-C_{\text{loss}})^{2}}\text{,} \label{nj st DRR} \end{equation} \begin{equation} \langle \hat{a}_{1}\hat{a}_{2}\rangle =-e^{i\theta _{t}}\left( \frac{C_{1}C_{\text{gain}}-\frac{1}{2}C_{2}(C_{\text{gain}}+C_{\text{loss}})}{C_{12}^{2}-(C_{\text{gain}}-C_{\text{loss}})^{2}}\right) \label{a12 st DRR} \end{equation} and the entanglement condition for an initial vacuum as \begin{equation} \bar{n}_{1}+\bar{n}_{2}<2\xi \cos \theta _{t} \label{condition DRR} \end{equation} where $\xi $ denotes the term in the brackets $(...)$ of Eq. (\ref{a12 st DRR}).
In order to determine whether Eq. (\ref{condition DRR}) can be met we consider a simpler case where $\gamma _{bc}=0$. From Appendix B, we have $C_{ \text{loss}}-C_{\text{gain}}=\kappa $ and $C_{12}=\frac{g^{2}}{\gamma } (p_{cc}-p_{aa})$, $C_{1}=\frac{g^{2}}{\gamma }p_{aa}$, $C_{2}=\frac{g^{2}}{ \gamma }(2p_{aa}-p_{cc})$, $C_{\text{loss}}=(\frac{g^{2}}{2\gamma } p_{bb}+\kappa )$, $C_{\text{gain}}=\frac{g^{2}}{2\gamma }p_{cc}$. These results are used to rewrite Eqs. (\ref{nj st DRR}) and (\ref{a12 st DRR}) as \begin{equation} \bar{n}_{j}=\frac{g^{2}}{2\gamma }\frac{-p_{cc}\kappa +\frac{g^{2}}{\gamma } (2p_{aa}-p_{cc})(p_{cc}-p_{aa})}{[\frac{g^{2}}{\gamma }(p_{cc}-p_{aa})]^{2}- \kappa ^{2}} \label{nj st nodec DRR} \end{equation} \begin{equation} \langle \hat{a}_{1}\hat{a}_{2}\rangle =e^{i\theta _{t}}\frac{g^{2}}{2\gamma } \frac{(2p_{aa}-p_{cc})(\frac{g^{2}}{2\gamma }2p_{cc}+\kappa )-\frac{g^{2}}{ \gamma }p_{aa}p_{cc}}{[\frac{g^{2}}{\gamma }(p_{cc}-p_{aa})]^{2}-\kappa ^{2}} \label{a12 st nodec DRR} \end{equation}
For a strong field, $p_{cc}\simeq p_{aa}=0.25$. The steady state solutions become $\bar{n}_{1}=\bar{n}_{2}=\frac{g^{2}}{8\gamma \kappa }$, $\langle \hat{a}_{1}\hat{a}_{2}\rangle =-\frac{g^{2}}{8\gamma \kappa }e^{i\theta _{t}}$ and $D=2(1+\frac{g^{2}}{2\gamma \kappa }\sin ^{2}\frac{1}{2}\theta _{t})$, i.e. no entanglement.
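(For completeness: with these values $\bar{n}_{1}+\bar{n}_{2}=\frac{g^{2}}{4\gamma \kappa }$ and $\langle \hat{a}_{1}\hat{a}_{2}\rangle +\langle \hat{a}_{1}^{\dagger }\hat{a}_{2}^{\dagger }\rangle =-\frac{g^{2}}{4\gamma \kappa }\cos \theta _{t}$, so Eq. (\ref{D vacuum}) gives \begin{equation*} D=2[1+\tfrac{g^{2}}{4\gamma \kappa }(1-\cos \theta _{t})]=2(1+\tfrac{g^{2}}{2\gamma \kappa }\sin ^{2}\tfrac{1}{2}\theta _{t})\geq 2 \end{equation*} for any phase $\theta _{t}$.)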
For a weak field, $p_{cc}\simeq 0.5,p_{aa}\simeq 0$. The steady state solutions are $\bar{n}_{1}=\bar{n}_{2}=\frac{g^{2}}{4\gamma }\frac{1}{\kappa -\frac{g^{2}}{2\gamma }}$, $\langle \hat{a}_{1}\hat{a}_{2}\rangle =\frac{g^{2}}{4\gamma }\frac{e^{i\theta _{t}}}{\kappa -\frac{g^{2}}{2\gamma }}$ with $\kappa >\frac{g^{2}}{2\gamma }$, and hence $D=2(1+\frac{2g^{2}}{2\gamma \kappa -g^{2}}\cos ^{2}\frac{1}{2}\theta _{t})$, again with no entanglement. In the weak field regime, the cavity damping has to be sufficiently large to ensure the existence of steady state solutions, i.e. $\kappa >g^{2}/2\gamma $. If the cavity damping is small, $\kappa <g^{2}/2\gamma $, regions with negative values of $D$ and $\bar{n}_{j}$ appear, which reflects the non-steady-state regime.
Thus, we have shown that there is \emph{no} steady state entanglement for the DRR scheme in either the weak or the strong field regime. This is compatible with its classical two-photon correlation $G^{(2)}$ \cite{G2 for DRR}. In contrast, the REIT photon pairs, which are nonclassically correlated, are also entangled in the steady state.
\section{Conclusion}
We have shown that a two-photon laser can produce a continuous source of entangled photon pairs, based on the steady state solutions and an entanglement criterion. We have obtained a relationship between entanglement and two-photon correlation, and found that the two do not vary with time in the same manner. We have derived a condition for steady state entanglement in the Raman-EIT (REIT) scheme and shown that steady state macroscopic entanglement is possible. We find that a large steady state entanglement occurs at the expense of smaller photon numbers. We have also shown that the double resonant Raman (DRR) scheme does not generate steady state entangled photon pairs for any laser parameters.
\appendix
\section{Coefficients for double Raman scheme}
\label{coefficients} The coefficients in Eq. (\ref{master}) for the double Raman scheme are \begin{eqnarray}
C_{\text{loss1}} &=&|g_{s}|^{2}(C_{bd,ad}p_{ab}+C_{bd,bd}p_{bb})+\kappa _{s}, \\
C_{\text{gain1}} &=&|g_{s}|^{2}\{C_{bd,bd}p_{dd}+C_{bd,bc}p_{dc}\}, \\
C_{\text{loss2}} &=&|g_{a}|^{2}(C_{ac,ac}p_{cc}+C_{ac,ad}p_{cd})+\kappa _{a}, \\
C_{\text{gain2}} &=&|g_{a}|^{2}\{C_{ac,ac}p_{aa}+C_{ac,bc}p_{ba}\}, \end{eqnarray} \begin{eqnarray} J_{1} &=&C_{bd,ac}p_{cc}+C_{ac,bd}^{\ast }p_{dd}+(C_{bd,ad}+C_{ac,bc}^{\ast })p_{cd} \\ J_{2} &=&C_{bd,ac}p_{aa}+C_{ac,bd}^{\ast }p_{dd}+C_{bd,bc}p_{ba}+C_{ac,bc}^{\ast }p_{cd} \\ J_{3} &=&C_{bd,ac}p_{aa}+C_{ac,bd}^{\ast }p_{bb}+(C_{bd,bc}+C_{ac,ad}^{\ast })p_{ba} \\ J_{4} &=&C_{bd,ac}p_{cc}+C_{ac,bd}^{\ast }p_{bb}+C_{bd,ad}p_{cd}+C_{ac,ad}^{\ast }p_{ba} \end{eqnarray} where $J_{k}=\frac{C_{k}}{g_{a}g_{s}}$, $g_{a},g_{s}$ are atom-field coupling strengths, $C_{\alpha \beta ,\gamma \delta }$ ($\alpha ,\beta ,\gamma ,\delta =a,b,c,d$) are complex coefficients that depend on decoherence rates $\gamma _{\alpha \beta }$, laser detunings $\Delta _{p}$ , $\Delta _{c}$ and Rabi frequencies $\Omega _{p}$, $\Omega _{c}$. The $ p_{\alpha \alpha },p_{ab},p_{cd}$ ($\alpha =a,b,c,d$) are steady state populations and coherences.
\begin{eqnarray} C_{ac,ac} &=&\frac{T_{ad}^{\ast }T_{bc}^{\ast }T_{db}+I_{p}T_{ad}^{\ast }+I_{c}T_{bc}^{\ast }}{Z} \\ C_{ac,ad} &=&-i\Omega _{p}\frac{T_{bc}^{\ast }T_{db}+I_{p}-I_{c}}{Z} \end{eqnarray}
\begin{eqnarray} C_{ac,bc} &=&-i\Omega _{c}\frac{-T_{ad}^{\ast }T_{db}+I_{p}-I_{c}}{Z} \\ C_{ac,bd} &=&\Omega _{c}\Omega _{p}\frac{T_{bc}^{\ast }+T_{ad}^{\ast }}{Z} \end{eqnarray}
\begin{eqnarray} C_{bd,ac} &=&\Omega _{p}\Omega _{c}\frac{T_{bc}^{\ast }+T_{ad}^{\ast }}{Z} \\ C_{bd,ad} &=&-i\Omega _{c}\frac{-T_{ac}^{\ast }T_{bc}^{\ast }+I_{p}-I_{c}}{Z} \end{eqnarray}
\begin{eqnarray} C_{bd,bc} &=&-i\Omega _{p}\frac{T_{ac}^{\ast }T_{ad}^{\ast }+I_{p}-I_{c}}{Z} \\ C_{bd,bd} &=&\frac{T_{ac}^{\ast }T_{ad}^{\ast }T_{bc}^{\ast }+I_{p}T_{bc}^{\ast }+I_{c}T_{ad}^{\ast }}{Z} \end{eqnarray}
\begin{eqnarray} Z &=&T_{ac}^{\ast }T_{ad}^{\ast }T_{bc}^{\ast }T_{db}+I_{p}T_{ac}^{\ast }T_{ad}^{\ast }+I_{p}T_{bc}^{\ast }T_{db} \notag \\ &&+I_{c}T_{ac}^{\ast }T_{bc}^{\ast }+I_{c}T_{ad}^{\ast }T_{db}+(I_{p}-I_{c})^{2} \end{eqnarray}
where $I_{p}=\Omega _{p}^{2}$ and $I_{c}=\Omega _{c}^{2}$.
\section{Coefficients for the DRR scheme}
From Appendix A, we obtain the coefficients for the DRR scheme: \begin{eqnarray} C_{\text{1}} &=&g_{a}g_{s}\frac{\Omega ^{2}}{Z}2\{T_{bc}p_{cc}+T_{ad}p_{aa}\} \\ C_{\text{2}} &=&g_{a}g_{s}\frac{\Omega ^{2}}{Z}2\{T_{bc}p_{aa}+T_{ad}(2p_{aa}-p_{cc})\} \end{eqnarray} \begin{eqnarray} C_{12} &=&C_{\text{1}}-C_{\text{2}}=C_{\text{3}}-C_{\text{2}} \notag \\ &=&g_{a}g_{s}\Omega ^{2}\frac{T_{bc}+T_{ad}}{Z}2(p_{cc}-p_{aa}) \label{C12} \end{eqnarray} \begin{equation} Z=\gamma \{T_{ad}T_{bc}\gamma +2I(T_{ad}+T_{bc})\}\text{.} \label{Z} \end{equation} Taking $\kappa _{s}=\kappa _{a}=\kappa $ we also have $C_{\text{loss1}}=C_{\text{loss2}}$, $C_{\text{gain1}}=C_{\text{gain2}}$ and $K_{2}=K_{1}=K_{12}=2(C_{\text{gain}}-C_{\text{loss}})$, so \begin{eqnarray}
C_{\text{loss}} &=&|g_{s}|^{2}(\frac{IT_{bc}}{Z}p_{aa}+T_{ad}\frac{TT_{bc}+I }{Z}p_{bb})+\kappa , \label{Closs} \\
C_{\text{gain}} &=&|g_{s}|^{2}\{T_{bc}\frac{TT_{ad}+I}{Z}p_{dd}+\frac{IT_{ad} }{Z}p_{cc}\}, \label{Cgain} \end{eqnarray}
\end{document} |
\begin{document}
\title{Petersson scalar products and $L$-functions arising from modular forms} \markboth{Shigeaki Tsuyumine}{L-functions arising from modular forms} \footnote[0]{This work was partly supported by Grants-in-Aid for Scientific Research (C) from the Ministry of Education, Science, Sports and Culture of Japan, Grant Number 16K05056.}
The Rankin-Selberg method, developed by Rankin \cite{Rankin} and by Selberg \cite{Selberg}, produces $L$-functions from cuspidal automorphic forms by taking their scalar products with real analytic Eisenstein series of weight $0$. Some analytic properties of the Eisenstein series, such as functional equations or the positions of possible poles, are inherited by these $L$-functions. Zagier \cite{Zagier} showed that this important method can also be applied to automorphic forms on $\mathrm{SL}_{2}(\mathbf{Z})$ which are not cuspidal. Research in this direction has also been carried out by Gupta \cite{Gupta} and Chiera \cite{Chiera}.
In the present paper we consider mainly modular forms on $\Gamma_{0}(N):=\{\left({a\ b\atop c\ d}\right)\in\mathrm{SL}_{2}(\mathbf{Z})\mid c\equiv0\pmod{N}\}$. Let $z=x+\sqrt{-1}y\in\mathfrak{H}$ where $\mathfrak{H}$ denotes the complex upper half plane. Instead of Eisenstein series of weight $0$ we employ Eisenstein series of integral weight $k$ \begin{align*}
y^{s}E_{k,\chi}(z,s):=2^{-1}y^{s}\sum_{c,d}\chi(d)(cz+d)^{-k}|cz+d|^{-2s}\hspace{1em}(s\in\mathbf{C}) \end{align*} with a Dirichlet character $\chi$ modulo $N$, where $(c,d)$ runs over the set of second rows of matrices in $\Gamma_{0}(N)$, or Eisenstein series of half integral weight (see Sect.~\ref{sect:ESHIW}). We show that the analytic properties of these Eisenstein series are inherited by the following $L$-functions. Let $f(z)=\sum_{n=0}^{\infty}a_{n}\mathbf{e}(nz),g(z)=\sum_{n=0}^{\infty}b_{n}\mathbf{e}(nz)$ with $\mathbf{e}(z)=e^{2\pi\sqrt{-1}z}$ be modular forms for $\Gamma_{0}(N)$ of integral or half integral weight, whose weights, as well as characters, are not necessarily equal to each other. Define \begin{align} L(s;f,g):=\sum_{n=1}^{\infty}\frac{a_{n}\overline{b}_{n}}{n^{s}}, \label{eqn:lseries} \end{align} which converges for $s$ with sufficiently large $\Re s$. Let $f,g$ be of weight $l,l'$ respectively, and suppose that $l\ge l'$. We show that $L(s;f,g)$ extends meromorphically to the whole complex plane, and has a functional equation under $s\longmapsto-(l-l')+1-s$ (Corollary \ref{cor:lseries}, Corollary \ref{cor:lseries2}). If $l=l'$ and $f,g$ have the same character, then \begin{align*} \langle f,g\rangle_{\Gamma_{0}(N)}=c\,\mathrm{Res}_{s=l}L(s;f,g) \end{align*} for a suitable constant $c$, where $\langle f,g\rangle_{\Gamma_{0}(N)}$ denotes the Petersson scalar product (see Sect.~\ref{sect:PSP}) and $\mathrm{Res}_{s=l}$ denotes the residue at $s=l$. This was proved in Petersson \cite{Petersson}, Satz 6, for cusp forms $f,g$ of integral weight. If either the weights or the characters of $f,g$ are distinct, then $L(l-1;f,g)$ is written in terms of a scalar product involving $f,g$ and a suitable Eisenstein series (Corollary \ref{cor:lseries}, Corollary \ref{cor:lseries2}).
A Dirichlet character $\chi$ is called {\it even} or {\it odd} according as $\chi(-1)$ is $1$ or $-1$. Let \begin{align*} \theta(z):=\sum_{n=-\infty}^{\infty}\mathbf{e}(n^{2}z)=1+2\sum_{n=1}^{\infty}\mathbf{e}(n^{2}z) \end{align*} be the theta series, and let $\theta_{\chi}(z):=\sum_{n=-\infty}^{\infty}\chi(n)\mathbf{e}(n^{2}z)$ be its twist by an even Dirichlet character $\chi$. For an odd $\chi$, put $\Theta_{\chi}(z):=\sum_{n=-\infty}^{\infty}\chi(n)n\mathbf{e}(n^{2}z)$, which is a cusp form of weight $3/2$. Then the Riemann zeta function and the Dirichlet $L$-functions appear as $L(s;\theta,\theta)=4\zeta(2s)$, $L(s;\theta_{\chi},\theta)=4L(2s,\chi)$ for $\chi$ even, and $L(s;\Theta_{\chi},\theta)=4L(2s-1,\chi)$ for $\chi$ odd, and we may expect that many other interesting $L$-functions appear by this method. We have the following application. Let $Q,Q'$ be positive definite integral quadratic forms in $2l,2l'$ variables ($l,l'\in\frac{1}{2}\mathbf{N},\,l\ge l'$). Let $r_{Q}(n),r_{Q'}(n)$ be the numbers of integral representations of $n$ by $Q,Q'$ respectively. Then the Dirichlet series $\sum_{n=1}^{\infty}r_{Q}(n)r_{Q'}(n)n^{-s}$ extends meromorphically to the whole $s$ plane. Indeed if $f(z),g(z)$ are the theta series associated with $Q,Q'$ respectively, then $L(s;f,g)=\sum_{n=1}^{\infty}r_{Q}(n)r_{Q'}(n)n^{-s}$. It satisfies a functional equation under $l-1+s\longmapsto l'-s$ such as (\ref{eqn:fn-eq3}) or (\ref{eqn:fn-eq4}). Further, the asymptotic value of $X^{-l-l'+1}\sum_{0<n\le X}r_{Q}(n)r_{Q'}(n)$ as $X\longrightarrow\infty$ is obtained in terms of the $0$-th Fourier coefficients of $f,g$ at the cusps and the constant terms of appropriately chosen Eisenstein series (Section \ref{sect:ALF}).
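The first of these identities is a straightforward check: writing $\theta(z)=\sum_{m=0}^{\infty}a_{m}\mathbf{e}(mz)$ with $a_{0}=1$, $a_{m}=2$ if $m$ is a positive square and $a_{m}=0$ otherwise, one has \begin{align*} L(s;\theta,\theta)=\sum_{m=1}^{\infty}\frac{a_{m}\overline{a}_{m}}{m^{s}}=\sum_{n=1}^{\infty}\frac{4}{(n^{2})^{s}}=4\zeta(2s), \end{align*} and the identities for $\theta_{\chi}$ and $\Theta_{\chi}$ follow in the same way.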
Let us fix our notation. Let $N\in\mathbf{N}$. We denote by $(\mathbf{Z}/N)^{\ast}$, the group of Dirichlet characters modulo $N$. The identity element of $(\mathbf{Z}/N)^{\ast}$ is denoted by $\mathbf{1}_{N}$, where $\mathbf{1}_{N}(n)$ is $1$ or $0$ according as $n$ is coprime to $N$ or not. In particular $\mathbf{1}_{1}$ always takes the value $1$, which we denote simply by $\mathbf{1}$. We denote by $\mathfrak{f}_{\chi}$ the conductor of $\chi$, and denote by $\widetilde{\chi}$ the primitive character associated with $\chi$. For a prime $p$, $v_{p}(n)$ denotes the $p$-adic valuation of $n\in\mathbf{Z},\ne0$. We put \begin{align*} \mathfrak{e}_{\chi}:=\mathfrak{f}_{\chi}\prod_{p\nmid\mathfrak{f}_{\chi},\chi(p)=0}p,\hspace{2em}\mathfrak{e}_{\chi}':=\mathfrak{f}_{\chi}\prod_{p\nmid\mathfrak{f}_{\chi},\chi(p)=0}p^{2}. \end{align*}
For a prime $p|N$, $\{\chi\}_{p}\in(\mathbf{Z}/p^{v_{p}(N)})^{\ast}$ denotes the $p$-part of $\chi$, and there holds an equality $\chi=\prod_{p|N}\{\chi\}_{p}$.
For $D$ a discriminant of a quadratic number field, $\chi_{D}$ is defined to be the Kronecker-Jacobi-Legendre symbol, and for $D=1$, $\chi_{1}$ is defined to be $\mathbf{1}$. We extend this notation as follows. Let $a\in\mathbf{Z},\ne0$. Then we define $\chi_{a}=\chi_{D(a)}\mathbf{1}_{a}$ for the discriminant $D(a)$ of the quadratic number field $\mathbf{Q}(\sqrt{a})$. When $a$ is odd, $\chi_{a^{\vee}}$ denotes $\chi_{a}$ if $a\equiv1\pmod{4}$, and $\chi_{-a}$ if $a\equiv-1\pmod{4}$. Let $\mu(n),\varphi(n)$ for $n\in\mathbf{N}$ denote, as usual, the M\"obius function and the Euler function, respectively.
Let $\mathfrak{F}$ be the fundamental domain of the group $\mathrm{SL}_{2}(\mathbf{Z})$;\ $\{z=x{+}\sqrt{{-}1}y\in\mathbf{C}\mid |x|\le1/2,|z|\ge1\}$. Let $\Gamma$ be a congruence subgroup. The fundamental domain of $\Gamma$ is obtained by $\mathfrak{F}(\Gamma):=\bigcup_{A} A\mathfrak{F}$ where $A$ runs over the set of left representatives of $\mathrm{SL}_{2}(\mathbf{Z})$ modulo $\Gamma$. We denote by $\mathfrak{F}(N)$, the fundamental domain $\mathfrak{F}(\Gamma_{0}(N))$ of $\Gamma_{0}(N)$. We denote by $\mathcal{M}_{s_{1},s_{2}}(\Gamma)$ for $s_{1},s_{2}\in\mathbf{C}$, the space of real analytic functions on $\mathfrak{H}$ which satisfy \begin{align*} f(Az)=(cz+d)^{s_{1}}(\overline{cz+d})^{s_{2}}f(z)\quad(A=\left(a\ \,b\atop c\ \,d\right)\in\Gamma) \end{align*}with $-\pi<\arg(cz+d)\le\pi$ and $\arg(\overline{cz+d})=-\arg(cz+d)$. The imaginary part $y$ of $z$ is in $\mathcal{M}_{-1,-1}(\Gamma)$. We define \begin{align*}
f|_{A}(z):=(cz+d)^{-s_{1}}(\overline{cz+d})^{-s_{2}}f(Az)\quad(A=\left(a\ \,b\atop c\ \,d\right))
\end{align*}with $a,b,c,d\in\mathbf{Q},ad-bc>0$. For two congruence subgroups $\Gamma_{1},\Gamma_{2}$ with $\Gamma_{1}\supset\Gamma_{2}$, the trace of $f\in\mathcal{M}_{s_{1},s_{2}}(\Gamma_{2})$ is defined by $\mathrm{tr}_{\Gamma_{1}/\Gamma_{2}}(f):=\sum_{A}f|_{A}$ where $A$ runs over the set of left representatives of $\Gamma_{1}$ modulo $\Gamma_{2}$. The trace of $f$ is in $\mathcal{M}_{s_{1},s_{2}}(\Gamma_{1})$. For $k\in\mathbf{Z},\,s\in\mathbf{C}$ and for a Dirichlet character $\chi$ modulo $N$ with the same parity as $k$, $\mathcal{M}_{k+s,s}(N,\chi)$ denotes the space of elements $f\in\mathcal{M}_{k+s,s}(\Gamma_{1}(N))$ satisfying \begin{align*}
f(Az)=\chi(d)(cz+d)^{k}|cz+d|^{2s}f(z)\quad(A\in\Gamma_{0}(N)). \end{align*} Let $j(A,z):=\theta(Az)/\theta(z)\ (A\in\mathrm{SL}_{2}(\mathbf{Z}))$. For $A=\left(a\ \,b\atop c\ \,d\right)\in\Gamma_{0}(4)$, $j(A,z)=1$ if $c=0$, and $j(A,z)=\chi_{c}(d)\iota_{d}^{-1}(cz+d)^{1/2}$ if $c>0$ where $\iota_{d}$ is $1$ or $\sqrt{{-}1}$ according as $d\equiv1$ modulo $4$ or $d\equiv3$ and where $-\pi/2<\arg(cz+d)^{1/2}\le\pi/2$. On the group $\Gamma_{0}(4)$, $j(A,z)$ gives the automorphy factor.
Let $4|N$. For a Dirichlet character $\chi$ modulo $N$ with the same parity as $k$, $\mathcal{M}_{k+1/2+s,s}(N,\chi)$ denotes the space of elements $f\in\mathcal{M}_{k+1/2+s,s}(\Gamma_{1}(N))$ satisfying \begin{align*}
f(Az)=\chi(d)j(A,z)(cz+d)^{k}|cz+d|^{2s}f(z)\quad(A=\left(a\ \,b\atop c\ \,d\right)\in\Gamma_{0}(N)). \end{align*}We denote by $\mathbf{M}_{l}(N,\chi)$ for $l\in(1/2)\mathbf{N}$ the space of holomorphic modular forms in $\mathcal{M}_{l,0}(N,\chi)$. If $\chi=\mathbf{1}_{N}$, we drop $\chi$ from the notations $\mathbf{M}_{l}(N,\chi)$ or $\mathcal{M}_{l,0}(N,\chi)$.
\section{Petersson scalar product}\label{sect:PSP} In this section, we define the Petersson scalar product of modular forms which are not necessarily holomorphic. Further, we obtain a formula relating the scalar product to a special value of the $L$-function (\ref{eqn:lseries}) of holomorphic modular forms of the same weight and with the same character.
Let $\Gamma$ be a congruence subgroup of $\mathrm{SL}_{2}(\mathbf{Z})$. Let $r$ be a cusp of $\Gamma$, and let $A_{r}$ be a matrix so that \begin{align}A_{r}(\sqrt{-1}\infty)=r\hspace{1em}(A_{r}\in\mathrm{SL}_{2}(\mathbf{Z})).\label{defAr} \end{align} Let $w^{(r)}=w_{\Gamma}^{(r)}$ be the width of a cusp $r$ of $\Gamma$, namely, $w^{(r)}$ is the least natural number so that $A_{r}\left(\pm1\ \,w^{(r)}\atop 0\ \ \,\pm1\right)A_{r}^{-1}\in \Gamma$. Let $f,g$ be real analytic modular forms for $\Gamma$ of weight $l\in\tfrac{1}{2}\mathbf{Z},\ge0$ with the same character, so that $y^{l}f\overline{g}$ has a Fourier expansion at each cusp $r$ of the form \begin{align}
(y^{l}f\overline{g})|_{A_{r}}(z)=P_{y^{l}f\overline{g}}^{(r)}(y)+\sum_{n=-\infty}^{\infty}u_{n}^{(r)}(y)\mathbf{e}(nx/w^{(r)}),\hspace{.7em}P_{y^{l}f\overline{g}}^{(r)}(y)=\sum_{j}c_{\nu_{j}^{(r)}}^{(r)}y^{\nu_{j}^{(r)}}\label{eqn:dfP} \end{align} where $u_{n}^{(r)}(y)$ is a rapidly decreasing function as $y\longrightarrow\infty$ and where the constant term $P_{y^{l}f\overline{g}}^{(r)}(y)$ with respect to $x$, is a finite linear combination of powers of $y$ with $\nu_{j}^{(r)},c_{\nu_{j}^{(r)}}^{(r)}\in\mathbf{C}$. For $T>1$, we put \begin{align} Q_{y^{l}f\overline{g}}^{(r)}(T):&=w^{(r)}\{c_{1}^{(r)}\log T+\int_{0}^{T}\sum_{\Re \nu_{j}^{(r)}\ge1,\nu_{j}^{(r)}\ne1}c_{\nu_{j}^{(r)}}^{(r)}y^{\nu_{j}^{(r)}-2}dy\}\nonumber\\ &=w^{(r)}\{c_{1}^{(r)}\log T+\sum_{\Re \nu_{j}^{(r)}\ge1,\nu_{j}^{(r)}\ne1}c_{\nu_{j}^{(r)}}^{(r)}\tfrac{T^{\nu_{j}^{(r)}-1}}{\nu_{j}^{(r)}-1}\}. \label{eqn:defQr} \end{align} For $T>1$, let $\mathfrak{F}_{T}(\Gamma)$ denote the domain obtained from $\mathfrak{F}(\Gamma)$ by cutting off neighborhoods of all cusps $r$ along the lines $\Im(A_{r}^{-1}z)=T$ (see Figure \ref{fig:fdomain0}) where $\Im$ means the imaginary part. \begin{figure}
\caption{ }
\label{fig:fdomain0}
\end{figure} Then we define the Petersson scalar product of $f$ and $g$ by \begin{align} \langle f,g\rangle_{\Gamma}:=\lim_{T\to\infty}\left(\rule{0cm}{1.1em}\right.\int_{\mathfrak{F}_{T}(\Gamma)}y^{l}f(z)\overline{g}(z)\tfrac{dxdy}{y^{2}}{-}\sum_{r}Q_{y^{l}f\overline{g}}^{(r)}(T)\left.\rule{0cm}{1.1em}\right), \label{eqn:psp} \end{align} $r$ running over the set of the representatives of cusps of $\Gamma$, or equivalently, \begin{align*} \langle f,g\rangle_{\Gamma}=&\int_{\mathfrak{F}_{T}(\Gamma)}y^{l}f(z)\overline{g}(z)\tfrac{dxdy}{y^{2}}\\
&+\sum_{r}\left(\int_{T}^{\infty}\hspace{-.5em}\int_{0}^{w^{(r)}}\hspace{-.5em}\{(y^{l}f\overline{g})|_{A_{r}}(z)-P_{y^{l}f\overline{g}}^{(r)}(y)\}dx\tfrac{dy}{y^{2}}-Q_{y^{l}f\overline{g}}^{(r)}(T)\right), \end{align*} where the right hand side is independent of $T>1$.
Suppose that $f,g$ are holomorphic with $l\in\tfrac{1}{2}\mathbf{Z},>0$. Let $a_{0}^{(r)},b_{0}^{(r)}$ be the $0$-th Fourier coefficients of $f,g$ at a cusp $r$, respectively. Then $P_{y^{l}f\overline{g}}^{(r)}(y)$ is equal to $a_{0}^{(r)}\overline{b}_{0}^{(r)}y^{l}$, and the equality (\ref{eqn:psp}) becomes $\langle f,g\rangle_{\Gamma}=\lim_{T\to\infty}\left(\rule{0cm}{.9em}\right.\int_{\mathfrak{F}_{T}(\Gamma)}y^{l}f(z)\overline{g}(z)\tfrac{dxdy}{y^{2}}-\sum_{r}a_{0}^{(r)}\overline{b}_{0}^{(r)}w^{(r)}\tfrac{T^{l-1}}{l-1}\left)\rule{0cm}{.9em}\right.$ $(l\ne1)$, $\langle f,g\rangle_{\Gamma}=\lim_{T\to\infty}$ $\left(\rule{0cm}{.9em}\right.\int_{\mathfrak{F}_{T}(\Gamma)}yf(z)\overline{g}(z)\tfrac{dxdy}{y^{2}}-\sum_{r}a_{0}^{(r)}\overline{b}_{0}^{(r)}$ $\times w^{(r)}\log T\left)\rule{0cm}{.9em}\right.\ (l=1)$. In Zagier \cite{Zagier} this definition is given in the case $\Gamma=\mathrm{SL}_{2}(\mathbf{Z})$. If at least one of $f,g$ is a cusp form, then the definition coincides with the common definition of the scalar product (Petersson \cite{Petersson}). For $l=1/2$, $\tfrac{T^{l-1}}{l-1}$ tends to $0$ as $T\longrightarrow\infty$, and so the equality $\langle f,g\rangle_{\Gamma}=\int_{\mathfrak{F}(\Gamma)}y^{1/2}f(z)\overline{g}(z)\tfrac{dxdy}{y^{2}}$ holds. The convergence of this integral was pointed out in the appendix by Deligne to Serre and Stark \cite{Serre-Stark}.
Let $f(z)=\sum_{n=0}^{\infty}a_{n}\mathbf{e}(nz/w),\,g(z)=\sum_{n=0}^{\infty}b_{n}\mathbf{e}(nz/w)$ be the Fourier expansions with $w=w^{(\sqrt{-1}\infty)}$ the width of the cusp $\sqrt{-1}\infty$. We define the Dirichlet series associated with them by $L(s;f,g):=\sum_{n=1}^{\infty}a_{n}\overline{b}_{n}n^{-s}$, namely we make the same definition as (\ref{eqn:lseries}) ignoring $w$.
Replacing $\Gamma$ by $\pm\Gamma$ if necessary, we may assume that $\pm1_{2}\in\Gamma$ where $1_{2}$ denotes the identity matrix. We define the Eisenstein series $E_{\Gamma,r}(z,s):=y^{s}\sum_{(c,d)}|cz{+}d|^{-2s}$ where $(c,d)$ runs over the set of the second rows of matrices in $A_{r}^{-1}\Gamma$ with $c>0$, or with $c=0,d>0$. It converges absolutely and uniformly on any compact subset of $\mathfrak{H}$ for $\Re s>1$, and gives a function in $\mathcal{M}_{0,0}(\Gamma)$. As a function of $s$, it extends meromorphically to the whole plane, and satisfies a functional equation under $s\longmapsto1-s$ (cf. Kubota \cite{Kubota}). We denote $E_{\Gamma,r}(z,s)$ by $E_{\Gamma}(z,s)$ when $r=\sqrt{-1}\infty$. The constant term of the Fourier expansion of $E_{\Gamma,r}(z,s)$ with respect to $x$ at the cusp $r$ is of the form $y^{s}+\xi(s;r)y^{1-s}$, and at a cusp $r'$ not equivalent to $r$, of the form $\xi^{(r')}(s;r)y^{1-s}$. Here both $\xi(s;r)$ and $\xi^{(r')}(s;r)$ involve the Riemann zeta function or partial zeta functions; they are meromorphic functions on the $s$-plane and holomorphic on the domain $\Re s>1$. They have poles of order $1$ at $s=1$, and $\mathrm{Res}_{s=1}E_{\Gamma,r}(z,s)=\mathrm{Res}_{s=1}\xi(s;r)=\mathrm{Res}_{s=1}\xi^{(r')}(s;r)$.
Let $l\in\frac{1}{2}\mathbf{Z},\ge 3/2$. An alternative definition of the scalar product is \begin{align*} \langle f,g\rangle_{\Gamma}:=\int_{\mathfrak{F}(\Gamma)}\{y^{l}f(z)\overline{g}(z)-\sum_{r}a_{0}^{(r)}\overline{b}_{0}^{(r)}E_{\Gamma,r}(z,l)\}\frac{dxdy}{y^{2}} \end{align*} where the integral is well-defined since the integrand is decreasing at all the cusps, indeed it is $O(y^{-l-1})$ as $y\longrightarrow\infty$ at each cusp (cf Zagier \cite{Zagier}, Sect. 5).
We denote the Laplacian by \begin{align*} \Delta:=y^{2}\left(\tfrac{\partial^{2}}{\partial x^{2}}+\tfrac{\partial^{2}}{\partial y^{2}}\right)=4y^{2}\tfrac{\partial}{\partial z}\tfrac{\partial}{\partial \overline{z}} \end{align*} where $\frac{\partial}{\partial z}=2^{-1}(\frac{\partial}{\partial x}-\sqrt{-1}\frac{\partial}{\partial y}),\,\frac{\partial}{\partial \overline{z}}=2^{-1}(\frac{\partial}{\partial x}+\sqrt{-1}\frac{\partial}{\partial y})$. It is an $\mathrm{SL}_{2}(\mathbf{R})$ invariant differential operator on $\mathfrak{H}$. As is well known, the Eisenstein series $E_{\Gamma,r}(z,s)$ is an eigenfunction of $\Delta$, in fact, we have $\Delta(E_{\Gamma,r}(z,s))=s(s-1)E_{\Gamma,r}(z,s)$.
\begin{thm}\label{thm:psp} Let $\Gamma$ be a congruence subgroup of $\mathrm{SL}_{2}(\mathbf{Z})$ with the width $w_{\Gamma}$ of the cusp $\sqrt{-1}\infty$. Let $f,g$ be holomorphic modular forms for $\Gamma$ of weight $l\in\frac{1}{2}\mathbf{Z},\ge 3/2$ or $l=1/2$ with the same character. Then $L(s;f,g)$ extends meromorphically to the whole $s$-plane, and satisfies a functional equation under $l-1+s\longmapsto l-s$ which comes from the functional equation of $E_{\Gamma}(z,s)$ under $s\longmapsto 1-s$, and \begin{align} \langle f,g\rangle_{\Gamma}=(4\pi)^{-l}w_{\Gamma}^{l+1}\Gamma(l)C^{-1}\mathrm{Res}_{s=l}L(s;f,g)\label{eqn:psp-formula} \end{align} where $C=\mathrm{Res}_{s=1}E_{\Gamma}(z,s)=\mathrm{Res}_{s=1}\xi(s)$, $\xi(s)$ being such that the constant term with respect to $x$ of $E_{\Gamma}(z,s)$ is $y^{s}+\xi(s)y^{1-s}$. \end{thm} \begin{remark} The method used in the following proof is found in Chiera \cite{Chiera}, in which Theorem \ref{thm:psp} in the case $l=1/2$ is proved. He also uses the method to compute the scalar products of Eisenstein series of integral weight. \end{remark} \begin{proof} Let $l\ge3/2$. Let $F(z)$ be an automorphic form such that $F$ and $\Delta(F)$ are both integrable on $\mathfrak{F}(\Gamma)$. We compute the integral of $\Delta(F)$ over the fundamental domain as in Kubota \cite{Kubota}, Section 2.3. Then \begin{align*} &\int_{\mathfrak{F}(\Gamma)}\Delta(F)\tfrac{dxdy}{y^{2}}=\int_{\mathfrak{F}(\Gamma)}(\tfrac{\partial^{2}}{\partial x^{2}}+\tfrac{\partial^{2}}{\partial y^{2}})F(z)dxdy=\int_{\partial(\mathfrak{F}(\Gamma))}y\tfrac{\partial}{\partial n}F(z)\tfrac{dl}{y} \end{align*} by Green's theorem, where $\partial(\mathfrak{F}(\Gamma))$ is the boundary of $\mathfrak{F}(\Gamma)$, $\partial/\partial n$ is the outer normal derivative, and $dl$ is the Euclidean arc length. We note that $y\partial/\partial n$ and $dl/y$ are invariant under $\mathrm{SL}_{2}(\mathbf{R})$. But on the lines, arcs, or pieces of arcs of the boundary which are paired under $\Gamma$, the integrals cancel, and $\int_{\mathfrak{F}(\Gamma)}\Delta(F)\tfrac{dxdy}{y^{2}}$ vanishes. We apply this to $F(z):=y^{l}f(z)\overline{g}(z)-\sum_{r}a_{0}^{(r)}\overline{b}_{0}^{(r)}E_{\Gamma,r}(z,l)$, and employ the method developed by Chiera \cite{Chiera}.
We have the equality $\Delta(y^{l}f\overline{g})=Q(f,g)+l(l-1)y^{l}f\overline{g}$ with \begin{align} Q(f,g):=4y^{l+2}\tfrac{\partial f}{\partial z}\overline{\tfrac{\partial g}{\partial z}}+2\sqrt{-1}ly^{l+1}(\tfrac{\partial f}{\partial z}\overline{g}-f\overline{\tfrac{\partial g}{\partial z}})\label{eqn:defQ} \end{align} (\cite{Chiera} Proposition 2.1). Then $\Delta(F)=Q(f,g)+l(l-1)F$, and hence \begin{align} \langle f,g\rangle_{\Gamma}=-\frac{1}{l(l-1)}\int_{\mathfrak{F}(\Gamma)}Q(f,g)\tfrac{dxdy}{y^{2}}\label{eqn:psp-Q} \end{align} by the above result. Since $Q(f,g)$ is an automorphic function rapidly decreasing at each cusp, to evaluate the integral we follow the argument due to Petersson \cite{Petersson}, in which the Rankin-Selberg method (Rankin \cite{Rankin}, Selberg \cite{Selberg}) is made use of.
By a standard unfolding trick, we have for $\Re s>1$, \begin{align*} &\int_{\mathfrak{F}(\Gamma)}Q(f,g)E_{\Gamma}(z,s)\tfrac{dxdy}{y^{2}}=\int_{0}^{\infty}y^{s-2}\int_{0}^{w_{\Gamma}}Q(f,g)dxdy\\ =&w_{\Gamma}\int_{0}^{\infty}\left[\sum_{n=1}^{\infty}a_{n}\overline{b}_{n}e^{-4\pi ny/w_{\Gamma}}\{(4\pi/w_{\Gamma})^{2}n^{2}y^{l+s}-(8\pi l/w_{\Gamma})ny^{l-1+s}\}\right]dy\\ =&-(4\pi)^{-l+1-s}w_{\Gamma}^{l+s}(l-s)\Gamma(l+s)L(l-1+s;f,g), \end{align*} where the $y$-integral is evaluated termwise by $\int_{0}^{\infty}y^{a-1}e^{-cy}dy=\Gamma(a)c^{-a}$. The extreme left hand side has a meromorphic continuation to the whole $s$-plane since $E_{\Gamma}(z,s)$ has a meromorphic continuation and $Q(f,g)$ is rapidly decreasing at the cusps. Hence so does $L(l-1+s;f,g)$, and the functional equation of $E_{\Gamma}(z,s)$ gives that of $L(l-1+s;f,g)$. We note that this part holds true also for $l=1$. Evaluating the residues at $s=1$, we have $-Cl(l-1)\langle f,g\rangle_{\Gamma}=-(4\pi)^{-l}w_{\Gamma}^{l+1}(l-1)\Gamma(l+1)\mathrm{Res}_{s=1}L(l-1+s;f,g)$, which gives (\ref{eqn:psp-formula}) since $l\ne1$. \end{proof}
When $l=1$, the equality (\ref{eqn:psp-Q}) does not make sense, and (\ref{eqn:psp-formula}) does not hold in general (see Remark \ref{rem:lseries} later).
\section{Eisenstein series of integral weight} For the rest of the paper we restrict ourselves to modular forms on $\Gamma_{0}(N)$. In this section we study the analytic properties of real analytic Eisenstein series for $\Gamma_{0}(N)$ of weight $(k+s,s)$ with $k\in\mathbf{Z},\ge0$, which, multiplied by $y^{s}$, are of integral weight $k$. A generalization of Theorem \ref{thm:psp} is given by using Eisenstein series of weight $0$.
Let $l\in\frac{1}{2}\mathbf{Z},\ge0$. For a real number $m$ and for $s\in\mathbf{C}$ with $l+2\Re s>1$ we put \begin{align*}
w_{-m}(y,l,s):=\mathbf{e}(mx)\int_{-\infty}^{\infty}\frac{\mathbf{e}(mt)}{(z+t)^{l}|z+t|^{2s}}\,dt \end{align*} with $z=x+\sqrt{{-}1}y\in\mathfrak{H}$, which satisfies $w_{m}(ny,l,s)=n^{-l+1-2s}w_{nm}(y,l,s)\ (n\in\mathbf{N})$. Then $w_{-m}(y,l,s)$ is equal to \begin{alignat*}{2} &\mathbf{e}(-\tfrac{l}{4})\cdot2\pi\cdot(2y)^{-l+1-2s}\Gamma(l-1+2s)\Gamma(s)^{-1}\Gamma(l+s)^{-1}&&(m=0),\\ &\mathbf{e}(-\tfrac{l}{4})m^{-1}(m\pi y^{-1})^{l/2+s}\Gamma(s)^{-1}W_{-\frac{l}{2},-\frac{l}{2}+\frac{1}{2}-s}(4\pi my)&&(m>0),\\
&\mathbf{e}(-\tfrac{l}{4})|m|^{-1}(|m|\pi y^{-1})^{l/2+s}\Gamma(l+s)^{-1}W_{\frac{l}{2},-\frac{l}{2}+\frac{1}{2}-s}(4\pi|m|y)&\quad&(m<0) \end{alignat*}
which extend meromorphically to the whole complex $s$-plane, where $W_{\pm\frac{l}{2},-\frac{l}{2}+\frac{1}{2}-s}(y)$ denote the Whittaker functions. We have $w_{-m}(y,l,0)=0$ for $m\ge0$ except for $(l,m)=(1,0)$. If $m<0$, then $w_{-m}(y,0,0)=0$, and $w_{-m}(y,l,0)=(2\pi)^{l}\mathbf{e}(-\frac{l}{4})\Gamma(l)^{-1}|m|^{l-1}e^{2\pi my}$ for $l>0$. By the Poisson summation formula, we have the Fourier expansion \begin{align}
\sum_{m=-\infty}^{\infty}(z+m)^{-l}|z+m|^{-2s}=\sum_{n=-\infty}^{\infty}w_{n}(y,l,s)\mathbf{e}(nx).\label{eqn:fex} \end{align} The right hand side extends meromorphically to the whole complex $s$-plane.
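For instance, at $s=0$ and for an integer $l\ge2$ only the terms with $n>0$ survive on the right hand side, and (\ref{eqn:fex}) reduces to the classical Lipschitz formula \begin{align*} \sum_{m=-\infty}^{\infty}(z+m)^{-l}=\frac{(-2\pi\sqrt{-1})^{l}}{\Gamma(l)}\sum_{n=1}^{\infty}n^{l-1}\mathbf{e}(nz). \end{align*}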
Let $k\in\mathbf{N}$, $N\ge3$ and $0\le c_{0},d_{0}<N$. Put $g_{k}(z;c_{0},d_{0};N;s)=\sum_{c\equiv c_{0}(N)\atop d\equiv d_{0}(N)}(cz+d)^{-k}|cz+d|^{-2s}$. This series converges if $k+2\Re s>2$ and has Fourier expansion \begin{align}
&\delta_{c_{0},0}\sum_{d\equiv d_{0}(N)}d^{-k}|d|^{-2s}+N^{-1}\sum_{c\equiv c_{0}(N),\ne0}c^{-k}|c|^{1-2s}w_{0}(y,k,s)\nonumber\\
&+N^{-1}\sum_{n\in\mathbf{Z},\ne0}\left(\rule{0cm}{1.2em}\right.\sum_{c\equiv c_{0}(N),\ne0\atop c|n}c^{-k}|c|^{1-2s}\mathbf{e}(\tfrac{nd_{0}}{Nc})\left.\rule{0cm}{1.2em}\right)w_{n/N}(y,k,s)\mathbf{e}(\tfrac{nx}{N}),\label{feg} \end{align} $\delta$ denoting the Kronecker delta. The infinite series appearing in the constant term with respect to $x$ are essentially the Hurwitz zeta functions, which extend meromorphically to the whole $s$ plane. Hence $g_{k}(z;c_{0},d_{0};N;s)$ is a meromorphic function on the whole complex plane as a function of $s$.
Let $\chi$ be a Dirichlet character, and let $\widetilde{\chi}$ be the primitive character associated with $\chi$. Let $\mathcal{I}_{\mathbf{Z}}$ denote the characteristic function of $\mathbf{Z}$ on $\mathbf{Q}$. The Gauss sum $\tau(\widetilde{\chi})$ is defined to be $\sum_{i:\mathbf{Z}/\mathfrak{f}_{\chi}}\widetilde{\chi}(i)\mathbf{e}(i/\mathfrak{f}_{\chi})$ where $i:\mathbf{Z}/\mathfrak{f}_{\chi}$ implies that $i$ runs the set of representatives of $\mathbf{Z}$ modulo $\mathfrak{f}_{\chi}$. Then for $m\in\mathbf{Z}$, we have the formula for the Gauss sum of a Dirichlet character not necessarily primitive, \begin{align*}
\sum_{i:\mathbf{Z}/N}\chi(i)\mathbf{e}(im/N)=\tau(\widetilde{\chi})\sum_{0<R|\mathfrak{e}_{\chi}\mathfrak{f}_{\chi}^{-1}}\tfrac{\mu(R)\varphi(N)}{\varphi(\mathfrak{f}_{\chi}R)}\widetilde{\chi}(R)(\overline{\widetilde{\chi}}\mathbf{1}_{R}\mathcal{I}_{\mathbf{Z}})(m\mathfrak{f}_{\chi}RN^{-1}), \end{align*} with $\mathfrak{e}_{\chi}$ as in the introduction where in the summation of the right hand side, at most one term survives for each $m$ in $\mathbf{Z}$.
As a set of representatives of cusps of $\Gamma_{0}(N)$, we take \begin{align}\label{eqn:cusps}
\mathcal{C}_{0}(N):=\{i/M\mid0<M\le N,\ M|N,\ (i,M)=1,\ 0\le i\le (M,N/M)\}, \end{align} where $0\in\mathcal{C}_{0}(N)$ is considered to be $0/1$. Each rational number $r$ is equivalent to exactly one element of $\mathcal{C}_{0}(N)$ under the action of $\Gamma_{0}(N)$. We note that $1/N$ is equivalent to $\sqrt{-1}\infty$, and we denote it also by $\sqrt{-1}\infty$ as a cusp. The width of a cusp $i/M$ in $\mathcal{C}_{0}(N)$ is given by \begin{align}\label{eqn:w-cusp} w^{(i/M)}=N/(M^{2},N). \end{align}
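To illustrate (\ref{eqn:w-cusp}): for $N=12$ the cusps $0,\,1/2,\,1/3,\,1/4,\,1/6,\,\sqrt{-1}\infty$ have widths $12,3,4,3,1,1$ respectively, and these widths add up to the index $[\Gamma_{0}(1):\Gamma_{0}(12)]=24$.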
Let $k\in\mathbf{Z},\ge0$. We assume that $N\ge3$ if $k$ is odd. Let $M$ be a fixed positive divisor of $N$, and let $c_{0},d_{0}\in\mathbf{Z}$ with $(c_{0},N/M)=1,\,(d_{0},M)=1$. For $s\in\mathbf{C}$, we define Eisenstein series for $k+2\Re s>2$, by \begin{align*}
G_{k}(z;c_{0},d_{0};M,N;s)&:=\mathop{{\sum}'}_{c\equiv c_{0}\,(N/M)\atop d\in M^{-1}d_{0}+\mathbf{Z}}(cz+d)^{-k}|cz+d|^{-2s},\\
E_{k}(z;c_{0},d_{0};M,N;s)&:=M^{-k-2s}\mathop{{\sum}'}_{{c\equiv c_{0}\,(N/M)\atop d\in M^{-1}d_{0}+\mathbf{Z}}\atop(Mc,Md)=1}(cz+d)^{-k}|cz+d|^{-2s} \end{align*} where ${\sum}'$ implies that the term corresponding to $c=d=0$ is omitted in the summation.
Let $\rho\in(\mathbf{Z}/M)^{\ast},\rho' \in(\mathbf{Z}/(N/M))^{\ast}$ with $N/M=\mathfrak{e}_{\rho'}$ so that $\rho\rho'$ has the same parity as $k$. We define Eisenstein series \begin{align*} G_{k,\rho,M}^{\rho'}(z,s)&:=\tfrac{\Gamma(k+s)}{(-2\sqrt{{-}1}\pi)^{k}\tau(\overline{\widetilde{\rho}})}\sum_{c_{0}:(\mathbf{Z}/(N/M))^{\times}}\sum_{d_{0}:(\mathbf{Z}/M)^{\times}}\overline{\rho}(d_{0})\rho'(c_{0})G_{k}(z;c_{0},d_{0};M,N;s),\\ E_{k,\rho,M}^{\rho'}(z,s)&:=2^{-1}M^{-k-2s}\sum_{c_{0}:(\mathbf{Z}/(N/M))^{\times}}\sum_{d_{0}:(\mathbf{Z}/M)^{\times}}\overline{\rho}(d_{0})\rho'(c_{0})E_{k}(z;c_{0},d_{0};M,N;s)\\
&=2^{-1}\sum_{(c,d)=1\atop c\equiv0(\mathrm{mod}\,M)}\overline{\rho}(d)\rho'(c/M)(cz+d)^{-k}|cz+d|^{-2s} \end{align*} for $\Gamma_{0}(N)$ with character $\rho\rho'$, where $(\mathbf{Z}/M)^{\times}$ denotes the reduced residue class group modulo $M$ and $d_{0}:(\mathbf{Z}/M)^{\times}$ means that $d_{0}$ runs over a complete set of representatives. We omit $M$ from $G_{k,\rho,M}^{\rho'}$ and $E_{k,\rho,M}^{\rho'}$ when $M=\mathfrak{e}_{\rho}$. We also omit $\rho$ or $\rho'$ if $\rho=\mathbf{1}$ or $\rho'=\mathbf{1}$. The following equalities hold: \begin{align} &G_{k,\rho,M}^{\rho'}(z,s)\nonumber\\ =&(\sqrt{{-}1}\pi)^{-k}\rho'(-1)2^{-k+1}M^{k+2s}\mathfrak{f}_{\rho}^{-1}\tau(\widetilde{\rho})\Gamma(k{+}s)L(k{+}2s,\overline{\rho}\rho')E_{k,\rho,M}^{\rho'}(z,s),\label{eqn:G-E} \end{align}
and if $M=\mathfrak{e}_{\rho}$, then $M^{k+2s}E_{k,\rho}^{\rho'}(z,s)|_{\left({\,0\ -1\atop N\ 0\,}\right)} = \rho'(-1)(N/M)^{k+2s}E_{k,\overline{\rho}'}^{\overline{\rho}}(z,s)$.
\begin{lem}\label{lem:eisi} (i) The Eisenstein series $G_{k,\rho,M}^{\rho'}(z,s),E_{k,\rho,M}^{\rho'}(z,s)$ as functions of $s$ extend meromorphically to the whole complex plane.
(ii) We fix $s$ so that the Eisenstein series are holomorphic at $s$. The constant terms of $G_{k,\rho,M}^{\rho'}(z,s),E_{k,\rho,M}^{\rho'}(z,s)$ with respect to $x$ at each cusp are linear combinations of $1$ and $y^{-k+1-2s}$, and the Eisenstein series minus their constant terms are rapidly decreasing as $y\longrightarrow\infty$. \end{lem} \begin{proof}(i) The Eisenstein series are written as finite linear combinations of functions of the form $g_{k}(c_{0},d_{0};s)$. This shows the assertion.
(ii) Let $r$ be a cusp, and $A_{r}$ be as in (\ref{defAr}). Then $G_{k,\rho,M}^{\rho'}(z,s)|_{A_{r}},E_{k,\rho,M}^{\rho'}(z,s)|_{A_{r}}$ are also written as combinations of functions in the form $g_{k}(c_{0},d_{0};s)$. Then our assertion follows from the Fourier expansion (\ref{feg}). \end{proof} The Fourier expansion of $G_{k,\rho,M}^{\rho'}(z,s)$ is given by \begin{align} &G_{k,\rho,M}^{\rho'}(z,s)\nonumber\\ =&\delta_{M,N}(\sqrt{{-}1}\pi)^{-k}2^{-k+1}N^{k+2s}\mathfrak{f}_{\rho}^{-1}\tau(\widetilde{\rho})\Gamma(k+s)L(k+2s,\overline{\rho})\nonumber\\ &+\delta_{\mathfrak{f}_{\rho},1}\pi^{-k{+}1}2^{-2k+3-2s}\varphi(M)\tfrac{\Gamma(k-1+2s)}{\Gamma(s)}L(k{-}1{+}2s,\rho')y^{-k+1-2s}\nonumber\\
&+\tfrac{2\Gamma(k+s)}{(-2\sqrt{{-}1}\pi)^{k}}\sum_{-\infty<n<\infty\atop n\ne 0}n^{-k}|n|^{1-2s}\sum_{0<R|\mathfrak{e}_{\rho}\mathfrak{f}_{\rho}^{-1}}\tfrac{\mu(R)\varphi(M)}{\varphi(\mathfrak{f}_{\rho}R)}\nonumber\\
&\mbox{\hspace{3em}}\times\sum_{0<d|n}\overline{\widetilde{\rho}}(R)(\widetilde{\rho}\mathbf{1}_{R}\mathcal{I}_{\mathbf{Z}})(d\mathfrak{f}_{\rho}RM^{-1})\rho'(n/d)d^{k-1+2s}w_{n}(y,k,s)\mathbf{e}(nx) \label{eqn:fe} \end{align} where $\delta_{M,N},\delta_{\mathfrak{f}_{\rho},1}$ denote the Kronecker delta. The Fourier expansion of $E_{k,\rho,M}^{\rho'}(z,s)$ is obtained from (\ref{eqn:G-E}) and (\ref{eqn:fe}).
For later use we write down the constant term of the Fourier expansion with respect to $x$, of $y^{s}E_{k,\mathbf{1}_{N},N}(z,s)$ for even $k$ at each cusp $i/M\in\mathcal{C}_{0}(N)$. If we denote the constant term at $\sqrt{-1}\infty$ by $y^{s}+\xi^{(1/N)}(s)y^{-k+1-s}$, and the constant term at $i/M\ (M\ne N)$ by $\xi^{(i/M)}(s)y^{-k+1-s}$, then \begin{align}
\xi^{(i/M)}(s)=\tfrac{(-1)^{k/2}\pi\Gamma(k-1+2s)}{2^{k-2+2s}\Gamma(s)\Gamma(k+s)}\tfrac{\varphi(N)}{NM^{k-1+2s}\varphi(N/M)}\tfrac{\zeta(k-1+2s)\prod_{p|(N/M)}(1-p^{-k+1-2s})}{\zeta(k+2s)\prod_{p|N}(1-p^{-k-2s})}. \label{eqn:ctk} \end{align} In particular if $k=0$, then \begin{align} \xi^{(i/M)}(s)=\tfrac{\pi^{1/2}\Gamma(s-1/2)}{\Gamma(s)}
\tfrac{M^{1-2s}\varphi(N)}{N\varphi(N/M)}\tfrac{\zeta(-1+2s)\prod_{p|(N/M)}(1-p^{1-2s})}{\zeta(2s)\prod_{p|N}(1-p^{-2s})}, \label{eqn:ct0} \end{align}
all of which have poles of order $1$ at $s=1$ with the same residue $3\pi^{-1}[\Gamma_{0}(1):\Gamma_{0}(N)]^{-1}$, where $[\Gamma_{0}(1):\Gamma_{0}(N)]=N\prod_{p|N}(1+p^{-1})$. Moreover the equality $\mathrm{tr}_{\Gamma_{0}(1)/\Gamma_{0}(N)}(y^{s}E_{0,\mathbf{1}_{N},N}(z,s))=y^{s}E_{0}(z,s)=E_{\Gamma_{0}(1)}(z,s)$ holds.
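For instance, at the cusp $\sqrt{-1}\infty$, that is, taking $M=N$ in (\ref{eqn:ct0}), a direct computation of the residue gives \begin{align*} \mathrm{Res}_{s=1}\,\xi^{(1/N)}(s)=\tfrac{\pi^{1/2}\Gamma(1/2)}{\Gamma(1)}\cdot\tfrac{N^{-1}\varphi(N)}{N}\cdot\tfrac{1/2}{\zeta(2)\prod_{p|N}(1-p^{-2})}=\tfrac{3}{\pi}\cdot\tfrac{1}{N\prod_{p|N}(1+p^{-1})}, \end{align*} in agreement with the value stated above.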
For a square free $S\in\mathbf{N}$ and a character $\chi$, we define an operator on functions on $\mathfrak{H}$ by \begin{align*}
\Lambda_{S,s,\chi}f(z)&:=\sum_{0<R|S}\mu(S/R)\chi(S/R)R^{s}f(Rz). \end{align*} Then \begin{align} G_{k,\rho,M}^{\rho'}(z,s)=(\tfrac{M}{\mathfrak{e}_{\rho}})^{k+2s}\Lambda_{\mathfrak{e}_{\rho}\mathfrak{f}_{\rho}^{-1},k+2s,\overline{\widetilde{\rho}}}G_{k,\widetilde{\rho}}^{\rho'}(\tfrac{M}{\mathfrak{e}_{\rho}}z,s).\label{eqn:ses} \end{align} Now we assume that $\rho'$ is primitive and $N/M=\mathfrak{f}_{\rho'}$. If $\rho$ is also primitive, then the functional equation $y^{-k+1-s}G_{k,\rho}^{\rho'}(z,{-}k{+}1{-}s)=\pi^{-k+1-2s}y^{s}G_{k,\rho'}^{\rho}(z,s)$ follows from (\ref{eqn:fe}). For $\rho$ not necessarily primitive, we then obtain from (\ref{eqn:ses}) the functional equations \begin{align} &y^{-k+1-s}G_{k,\rho,M}^{\rho'}(z,{-}k{+}1{-}s)=\pi^{-k+1-2s}\tfrac{M}{\mathfrak{e}_{\rho}}y^{s}\Lambda_{\mathfrak{e}_{\rho}\mathfrak{f}_{\rho}^{-1},1,\overline{\widetilde{\rho}}}G_{k,\rho'}^{\widetilde{\rho}}(\tfrac{M}{\mathfrak{e}_{\rho}}z,s),\nonumber\\ &y^{-k+1-s}E_{k,\rho,M}^{\rho'}(z,{-}k{+}1{-}s)\nonumber\\ =&(-1)^{k}\pi^{-k+1-2s}\tfrac{(M\mathfrak{f}_{\rho'})^{k-1+2s}\tau(\rho')\Gamma(k+s)L(k+2s,\widetilde{\rho}\overline{\rho}')}{\mathfrak{e}_{\rho}\mathfrak{f}_{\rho}^{-1}\tau(\widetilde{\rho})\Gamma(1-s)L(-k+2-2s,\overline{\rho}\rho')}y^{s}\Lambda_{\mathfrak{e}_{\rho}\mathfrak{f}_{\rho}^{-1},1,\overline{\widetilde{\rho}}}E_{k,\rho'}^{\widetilde{\rho}}(\tfrac{M}{\mathfrak{e}_{\rho}}z,s). \label{eqn:fe2} \end{align} In particular, for $\rho'=\mathbf{1}$ we obtain from (\ref{eqn:fe2}) \begin{align} &y^{-k+1-s}E_{k,\rho,N}(z,{-}k{+}1{-}s)\nonumber\\ =&(-1)^{k}\pi^{-k+1-2s}\tfrac{N^{k-1+2s}\Gamma(k+s)L(k+2s,\widetilde{\rho})}{\mathfrak{e}_{\rho}\mathfrak{f}_{\rho}^{-1}\tau(\widetilde{\rho})\Gamma(1-s)L(-k+2-2s,\overline{\rho})}y^{s}\Lambda_{\mathfrak{e}_{\rho}\mathfrak{f}_{\rho}^{-1},1,\overline{\widetilde{\rho}}}E_{k}^{\widetilde{\rho}}(\tfrac{N}{\mathfrak{e}_{\rho}}z,s)\nonumber\\
=&\pi^{-1/2}z^{-k}U_{k,\overline{\rho}}(s)\sum_{0<P|\mathfrak{e}_{\rho}\mathfrak{f}_{\rho}^{-1}}U_{k,\overline{\rho},P}(s)(\Im(-\tfrac{1}{Nz}))^{s}E_{k,\overline{\widetilde{\rho}}\mathbf{1}_{P},\mathfrak{f}_{\rho}P}(-\tfrac{1}{Nz},s) \label{eqn:felu} \end{align} for $\rho\in(\mathbf{Z}/N)^{\ast}$ with the same parity as $k$ and with \begin{align}
U_{k,\rho}(s)&:=\tfrac{(\sqrt{{-}1})^{k}N^{-1+s}\mathfrak{e}_{\rho}^{-1}\mathfrak{f}_{\rho}^{2}\Gamma(s)\Gamma(k{+}s)L(k{+}2s,\overline{\widetilde{\rho}})}{\Gamma((k{-}1)/2{+}s)\Gamma(k/2{+}s)L(k{-}1{+}2s,\overline{\widetilde{\rho}})\prod_{p|\mathfrak{e}_{\rho}\mathfrak{f}_{\rho}^{-1}}(1{-}\widetilde{\rho}(p)p^{-k+2-2s})},\label{eqn:Uk1}\\
U_{k,\rho,P}(s)&:=\prod_{p|P}(1{-}\widetilde{\rho}(p)p^{k+2s})\varphi(\mathfrak{e}_{\rho}\mathfrak{f}_{\rho}^{-1}P^{-1}).\label{eqn:Uk2} \end{align}
Since $E_{\Gamma_{0}(N)}(z,s)=y^{s}E_{0,\mathbf{1}_{N},N}(z,s)$, we have $E_{\Gamma_{0}(N)}(z,1{-}s)=\pi^{-1/2}U_{0}(s)\times$ $\sum\limits_{0<P|\mathfrak{e}_{\mathbf{1}_{N}}}U_{0,P}(s)E_{\Gamma_{0}(P)}(-\tfrac{1}{Nz},s) $ with $\mathfrak{e}_{\mathbf{1}_{N}}=\prod_{p|N}p$ and with \begin{align}\hspace*{-.3em}
U_{0}(s)&:=\tfrac{N^{{-}1{+}s}\mathfrak{e}_{\mathbf{1}_{N}}^{-1}\Gamma(s)\zeta(2s)}{\Gamma({-}1/2{+}s)\zeta({-}1{+}2s)\prod_{p|\mathfrak{e}_{\mathbf{1}_{N}}}(1{-}p^{2{-}2s})},\hspace{.2em}U_{0,P}(s):=\prod\limits_{p|P}(1{-}p^{2s})\varphi(\mathfrak{e}_{\mathbf{1}_{N}}P^{{-}1}).\label{eqn:U} \end{align}
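For orientation, we record the simplest instance of the last relation: if $N=p$ is a prime, then $\mathfrak{e}_{\mathbf{1}_{p}}=p$, the divisors $P$ are $1$ and $p$, and a direct substitution in (\ref{eqn:U}) gives $U_{0,1}(s)=p-1$, $U_{0,p}(s)=1-p^{2s}$, so that \begin{align*} E_{\Gamma_{0}(p)}(z,1{-}s)=\tfrac{p^{s-2}\Gamma(s)\zeta(2s)}{\pi^{1/2}\Gamma(s{-}1/2)\zeta(2s{-}1)(1-p^{2-2s})}\left\{(p-1)E_{\Gamma_{0}(1)}(-\tfrac{1}{pz},s)+(1-p^{2s})E_{\Gamma_{0}(p)}(-\tfrac{1}{pz},s)\right\}. \end{align*}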
We use these results in later sections. The Eisenstein series $y^{s}E_{0,\rho,N}(z,s)$ of weight $0$ is an eigenfunction of the Laplacian $\Delta$, unlike the Eisenstein series with $k>0$. By an argument similar to that in Section \ref{sect:PSP}, we obtain the following two theorems, making use of the Eisenstein series of weight $0$.
\begin{thm}\label{thm:lseries} Let $f,g$ be holomorphic modular forms for $\Gamma_{0}(N)$ of weight $l\in\frac{1}{2}\mathbf{Z},l\ge1/2$ with the same character. Then $L(s;f,g)$ extends meromorphically to the whole $s$-plane, and \begin{align}
\langle f,g\rangle_{\Gamma_{0}(N)}=3^{-1}4^{-l}\pi^{-l+1}\Gamma(l)N\prod_{p|N}(1+p^{-1})\mathrm{Res}_{s=l}L(s;f,g)\label{eqn:psp-formula2} \end{align}
except for $l=1$. Let $\widetilde{f\overline{g}}(z):=(f\overline{g})|_{S_{N}}(z)=|N^{1/2}z|^{-2l}(f\overline{g})(-1/(Nz))\in\mathcal{M}_{l,l}(N)$ with $S_{N}=\mbox{\tiny$\left(\begin{array}{@{}c@{\,}c@{}}0&-1/\sqrt{N}\\\sqrt{N}&0\end{array}\right)$}$. If $\mathrm{tr}_{\Gamma_{0}(N)/\Gamma_{0}(M)}(\widetilde{f\overline{g}})$ for $M|N$ has $\sum_{n=0}^{\infty}c_{n}^{(M)}e^{-4\pi ny}$ as the constant term of its Fourier expansion with respect to $x$, then we put \linebreak$L(s;\mathrm{tr}_{\Gamma_{0}(N)/\Gamma_{0}(M)}(\widetilde{f\overline{g}})):=\sum_{n=1}^{\infty}c_{n}^{(M)}n^{-s}$. Then we have a functional equation \begin{align*}
L(l-s;f,g)=&\frac{2^{2-4s}\pi^{1/2-2s}\Gamma(l{-}1{+}s)U_{0}(s)}{\Gamma(l{-}s)}{\textstyle\sum\limits_{P|\mathfrak{e}_{\mathbf{1}_{N}}}}U_{0,P}(s)L(l{-}1{+}s;\mathrm{tr}_{\Gamma_{0}(N)/\Gamma_{0}(P)}(\widetilde{f\overline{g}})) \end{align*} with $U_{0}(s),U_{0,P}(s)$ as in (\ref{eqn:U}). \end{thm} \begin{proof}We prove only the functional equation. Let $Q(f,g)$ be as in the proof of Theorem \ref{thm:psp}. Then \begin{align*} &-(4\pi)^{-l+s}(l-1+s)\Gamma(l+1-s)L(l-s;f,g)\\ =&\int_{\mathfrak{F}(N)}Q(f,g)(z)E_{\Gamma_{0}(N)}(z,1-s)\tfrac{dxdy}{y^{2}}\\
=&\pi^{-1/2}U_{0}(s){\textstyle\sum\limits_{P|\mathfrak{e}_{\mathbf{1}_{N}}}\prod\limits_{p|P}(1{-}p^{2s})\varphi(\mathfrak{e}_{\mathbf{1}_{N}}P^{-1})}\int_{\mathfrak{F}(N)}Q(f,g)(z)E_{\Gamma_{0}(P)}(-\tfrac{1}{Nz},s)\tfrac{dxdy}{y^{2}}\\
=&\pi^{-1/2}U_{0}(s){\textstyle\sum\limits_{P|\mathfrak{e}_{\mathbf{1}_{N}}}\prod\limits_{p|P}(1{-}p^{2s})\varphi(\mathfrak{e}_{\mathbf{1}_{N}}P^{-1})}\int_{\mathfrak{F}(N)}Q(f|_{S_{N}},g|_{S_{N}})(z)E_{\Gamma_{0}(P)}(z,s)\tfrac{dxdy}{y^{2}}\\
=&-\pi^{-1/2}U_{0}(s){\textstyle\sum\limits_{P|\mathfrak{e}_{\mathbf{1}_{N}}}\prod\limits_{p|P}(1{-}p^{2s})\varphi(\mathfrak{e}_{\mathbf{1}_{N}}P^{-1})}\\ &\hspace{5em}\times(4\pi)^{-l+1-s}(l-s)\Gamma(l+s)L(l{-}1{+}s;\mathrm{tr}_{\Gamma_{0}(N)/\Gamma_{0}(P)}(\widetilde{f\overline{g}})), \end{align*} which shows the functional equation. \end{proof}
\begin{thm}\label{thm:lseries2} Let $f,g$ be holomorphic modular forms for $\Gamma_{0}(N)$ of weight $l\in\frac{1}{2}\mathbf{Z},l\ge1/2$ with characters. We assume that $f\overline{g}\in\mathcal{M}_{l,l}(N,\rho)$ with $\rho\in(\mathbf{Z}/N)^{\ast},\ne\mathbf{1}_{N}$. Then $L(s;f,g)$ converges for $s$ with $\Re s>\max\{2l{-}1,1/2\}$, and extends meromorphically to the whole $s$-plane. Let $\widetilde{f\overline{g}}(z):=(f\overline{g})|_{S_{N}}(z)\in\mathcal{M}_{l,l}(N,\overline{\rho})$ with $S_{N}:=\mbox{\tiny$\left(\begin{array}{@{}c@{\,}c@{}}0&-N^{-1/2}\\N^{1/2}&0\end{array}\right)$}$. For $M\in\mathbf{N}$ with $\mathfrak{f}_{\rho}|M|N$, let $\mathrm{tr}_{\Gamma_{0}(N)/\Gamma_{0}(M),\rho}(\widetilde{f\overline{g}}):=\sum_{A:\Gamma_{0}(N)/\Gamma_{0}(M)}$ $\rho(d)\widetilde{f\overline{g}}|_{A}(z)$, $d$ being the $(2,2)$ entry of $A$. Then we have a functional equation \begin{align*} &L(l-s;f,g)\\ =&\frac{2^{2-4s}\pi^{1/2-2s}N^{-1+s}\mathfrak{e}_{\rho}^{-1}\mathfrak{f}_{\rho}^{2}\Gamma(l{-}1{+}s)\Gamma(s)L(2s,\overline{\widetilde{\rho}})}
{\Gamma(l-s)\Gamma({-}1/2{+}s)L({-}1{+}2s,\overline{\widetilde{\rho}})\prod_{p|\mathfrak{e}_{\rho}\mathfrak{f}_{\rho}^{-1}}(1{-}\widetilde{\rho}(p)p^{2-2s})}\\
&\hspace{1em}\times\sum_{P|\mathfrak{e}_{\rho}\mathfrak{f}_{\rho}^{-1}}\prod_{p|P}(1{-}\widetilde{\rho}(p)p^{2s})\varphi(\mathfrak{e}_{\rho}\mathfrak{f}_{\rho}^{-1}P^{-1})L(l{-}1{+}s;\mathrm{tr}_{\Gamma_{0}(N)/\Gamma_{0}(P\mathfrak{f}_{\rho}),\overline{\rho}}(\widetilde{f\overline{g}})). \end{align*} \end{thm} \begin{proof}Let $Q(f,g)$ be as in (\ref{eqn:defQ}). Then $Q(f,g)$ is a real analytic automorphic form with character $\rho$, and $Q(f,g)y^{s}E_{0,\overline{\rho},N}(z,s)$ is an automorphic form with trivial character. Then $\int_{\mathfrak{F}(\Gamma)}Q(f,g)y^{s}E_{0,\overline{\rho},N}(z,s)\frac{dxdy}{y^{2}}$ is well-defined, and it has a meromorphic continuation to the whole $s$-plane since $Q(f,g)$ is rapidly decreasing at cusps. As in the proof of Theorem \ref{thm:psp}, it is shown to be equal to $-(4\pi)^{-l+1-s}(l-s)\Gamma(l+s)L(l-1+s;f,g)$ by a standard unfolding trick. The functional equation is proved in the same manner as in the proof of Theorem \ref{thm:lseries}. \end{proof}
We note that the function $E_{0,\rho,N}(z,s)$ of $s$ is holomorphic on the real axis with $s\ge1$ for even $\rho\ne\mathbf{1}_{N}$. So the formula of type (\ref{eqn:psp-formula2}) is not obtained in this case.
Theorem \ref{thm:lseries} and Theorem \ref{thm:lseries2} are generalized in Corollary \ref{cor:lseries} and in Corollary \ref{cor:lseries2}.
The scalar product defined in Petersson \cite{Petersson} satisfies the equality $\langle f,g\rangle_{\Gamma_{0}(N)}=\langle f|_{S_{N}},g|_{S_{N}}\rangle_{\Gamma_{0}(N)}$ for holomorphic cusp forms $f,g$ for $\Gamma_{0}(N)$ of the same weight and with the same character, where $S_{N}$ is as in Theorem \ref{thm:lseries2}. Let $f,g$ be real analytic modular forms satisfying (\ref{eqn:dfP}). If the $Q_{y^{l}f\overline{g}}^{(r)}(T)$ for all cusps $r$ have only terms given by the definite integrals from $0$ to $\infty$, namely, no $Q_{y^{l}f\overline{g}}^{(r)}(T)$ has a term containing $\log T$, then the equality also holds for the scalar product (\ref{eqn:psp}). However it does not hold in general. \begin{lem}\label{lem:psp-sup} Let $f,g$ be as above. Then
$\langle f|_{S_{N}},g|_{S_{N}}\rangle_{\Gamma_{0}(N)}=\langle f,g\rangle_{\Gamma_{0}(N)}-\sum_{i/M\in\mathcal{C}_{0}(N)}$ $w^{(i/M)}c_{1}^{(i/M)}\log(N/M^{2})$ with $w^{(i/M)}$ in (\ref{eqn:w-cusp}) and with $c_{1}^{(i/M)}$ in (\ref{eqn:dfP}). \end{lem} \begin{proof} Let $\mathbf{T}=(T^{(r)})_{r\in\mathcal{C}_{0}(N)}$ with $T^{(r)}>N$, and let $\mathfrak{F}_{\mathbf{T}}(N)$ denote the domain obtained from $\mathfrak{F}(N)$ by cutting off neighborhoods of cusps $r$ along the lines $\Im(A_{r}^{-1}z)=T^{(r)}\ (r\in\mathcal{C}_{0}(N))$. As easily seen, the equality \begin{align*} \langle f,g\rangle_{\Gamma}=\lim_{T^{(r)}\to\infty\atop r\in\mathcal{C}_{0}(N)}\left(\rule{0cm}{1.1em}\right.\int_{\mathfrak{F}_{\mathbf{T}}(N)}y^{l}f(z)\overline{g}(z)\tfrac{dxdy}{y^{2}}{-}\sum_{r}Q_{y^{l}f\overline{g}}^{(r)}(T^{(r)})\left.\rule{0cm}{1.1em}\right) \end{align*} holds in the notation of (\ref{eqn:psp}).
The matrix $S_{N}$ maps $\mathfrak{F}(N)$ onto a fundamental domain of $\Gamma_{0}(N)$, and hence $S_{N}\mathfrak{F}(N)$ can be decomposed into a finite number of pieces so that the union of their suitable translations by matrices in $\Gamma_{0}(N)$ is equal to $\mathfrak{F}(N)$. Let $\phi_{S_{N}}$ denote the map of $\mathfrak{F}(N)$ onto itself obtained in this manner. Then $\phi_{S_{N}}^{2}$ is the identity map of $\mathfrak{F}(N)$. The map $\phi_{S_{N}}$ can be naturally extended to $\mathfrak{F}(N)\cup\mathcal{C}_{0}(N)$, and then a cusp in the form $i/M\ (M|N)$ in $\mathcal{C}_{0}(N)$ is mapped to a cusp in the form $j/(N/M)$ and vice versa. If we take $\mathbf{T}$ so that $T^{(i/M)}=(N/M^{2})T$, then $\phi_{S_{N}}(\mathfrak{F}_{T}(N))=\mathfrak{F}_{\mathbf{T}}(N)$. Hence $\int_{\mathfrak{F}_{T}(N)}(y^{l}f\overline{g})|_{S_{N}}(z)\tfrac{dxdy}{y^{2}}=\int_{\mathfrak{F}_{\mathbf{T}}(N)}(y^{l}f\overline{g})(z)\tfrac{dxdy}{y^{2}}$ for sufficiently large $T$.
Since $((y^{l}f\overline{g})|_{S_{N}})|_{A_{j/(N/M)}}(z)=((y^{l}f\overline{g})|_{S_{N}})|_{A_{i/M}}((N/M^{2})z)$, there is an equality $P_{(y^{l}f\overline{g})|_{S_{N}}}^{(j/(N/M))}(y)=P_{y^{l}f\overline{g}}^{(i/M)}((N/M^{2})y)$. Let $\widetilde{P}_{y^{l}f\overline{g}}^{(i/M)}(y):=\sum_{\Re\nu_{j}^{(i/M)}\ge1,\nu_{j}^{(i/M)}\ne1}c_{\nu_{j}^{(i/M)}}^{(i/M)}$ $\times y^{\nu_{j}^{(i/M)}}$. Then, noticing that $(N/M^{2})w^{(j/(N/M))}=w^{(i/M)}$, we have \begin{align*}
Q_{(y^{l}f\overline{g})|_{S_{N}}}^{(j/(N/M))}(T)&=w^{(j/(N/M))}\left\{\rule{0cm}{.9em}\right.c_{1}^{(i/M)}(N/M^{2})\log T+\int_{0}^{T}\widetilde{P}_{y^{l}f\overline{g}}^{(i/M)}((N/M^{2})y)y^{-2}dy\left.\rule{0cm}{.9em}\right\}\\ &=w^{(i/M)}\left\{\rule{0cm}{.9em}\right.c_{1}^{(i/M)}\log T+\int_{0}^{(N/M^{2})T}\widetilde{P}_{y^{l}f\overline{g}}^{(i/M)}(y)y^{-2}dy\left.\rule{0cm}{.9em}\right\}\\ &=Q_{y^{l}f\overline{g}}^{(i/M)}((N/M^{2})T)-w^{(i/M)}c_{1}^{(i/M)}\log(N/M^{2}). \end{align*} This yields the equality \begin{align*} &\int_{\mathfrak{F}_{T}(N)}(y^{l}f\overline{g})|_{S_{N}}(z)\tfrac{dxdy}{y^{2}}{-}\sum_{r}Q_{y^{l}f\overline{g}}^{(r)}(T^{(r)})\\ =&\int_{\mathfrak{F}_{\mathbf{T}}(N)}(y^{l}f\overline{g})(z)\tfrac{dxdy}{y^{2}}{-}\sum_{r}Q_{y^{l}f\overline{g}}^{(r)}(T^{(r)})-\sum_{i/M\in\mathcal{C}_{0}(N)}w^{(i/M)}c_{1}^{(i/M)}\log(N/M^{2}), \end{align*} which shows the lemma. \end{proof} \section{Eisenstein series of half integral weight}\label{sect:ESHIW} In this section, we study analytic properties of real analytic Eisenstein series for $\Gamma_{0}(N)$ of weight $(k+1/2+s,s)$ with $k\in\mathbf{Z},\ge0$, or equivalently, $y^{s}$ times an Eisenstein series of half integral weight $k+1/2$. The main purpose is to obtain the Fourier expansion of some specific Eisenstein series, which is necessary for investigating the $L$-function (\ref{eqn:lseries}) through the unfolding trick.
For $k\in\mathbf{Z},\ge0$, $8|N$ and $0\le c_{0},d_{0}<N,(N,c_{0},d_{0})=1$, we put $g_{k+1/2}(z;c_{0},d_{0};N;s)$ $:=\sum_{0\le c\equiv c_{0}(N)\atop{d\equiv d_{0}(N)\atop(c,d)=1}}\chi_{c}(d)(cz+d)^{-k-1/2}|cz+d|^{-2s}$ with $2|c_{0}$, and put $g_{k+1/2}'(z;c_{0},d_{0};N;s):=\sum_{0<c\equiv c_{0}(N)\atop{d\equiv d_{0}(N)\atop(c,d)=1}}\chi_{c^{\vee}}(d)(cz+d)^{-k-1/2}|cz+d|^{-2s}$ with $2\nmid c_{0}$, where we understand $\chi_{0}(\pm1)$ to be $1$. These series converge if $k+1/2+2\Re s>2$, and have a Fourier expansion in the form \begin{align*} a_{0}+a_{0}'(s)w_{0}(y,k+1/2,s)+\sum_{n\ne0}a_{n/N}(s)w_{n/N}(y,k+1/2,s)\mathbf{e}(nx/N). \end{align*} \begin{lem}The Eisenstein series $g_{k+1/2}(z;c_{0},d_{0};N;s),g_{k+1/2}'(z;c_{0},d_{0};N;s)$ as functions of $s$ extend meromorphically to the whole complex plane. \end{lem} \begin{proof} First we note that \begin{align} \sum_{m\equiv a(N),>0\atop(m,Q)=1}\frac{\varphi(m)}{m^{s}},\sum_{m\equiv a(N)\atop{(m,Q)=1\atop m:\mathrm{square\,free}}} \frac{\rho(m)}{m^{s}}\label{dseries} \end{align}for $(a,N)=1,Q\in\mathbf{N},\rho\in(\mathbf{Z}/N)^{\ast}$ extend meromorphically to the whole complex plane. Indeed the first one is equal to $\frac{1}{\varphi(N)}\sum_{\chi\in(\mathbf{Z}/N)^{\ast}}\overline{\chi}(a)\frac{L(s-1,\chi\mathbf{1}_{Q})}{L(s,\chi\mathbf{1}_{Q})}$, and the second is equal to $\frac{1}{\varphi(N)}\sum_{\chi\in(\mathbf{Z}/N)^{\ast}}\frac{\overline{\chi}(a)}{L(s,\chi\rho\mathbf{1}_{Q})}$.
Let $P:=\prod_{p|N,(p,N/(c_{0},N))=1}p$. Let $e_{p}\ (p|P)$ be the order of $p$ modulo $N/(c_{0},N)$. As for $g_{k+1/2}(z;c_{0},d_{0};N;s)$, we have $a_{0}=\delta_{c_{0},0}\delta_{d_{0},1}$ and $a_{0}'(s)=(c_{0},N)^{{-}k{-}1/2{-}2s}$\linebreak$\times\varphi((c_{0},N))\prod_{p|P}(1{-}p^{{-}2e_{p}(k{+}1/2{+}2s)})^{{-}1}\sum_{p|P,0\le j_{p}<2e_{p}}\chi_{(\prod_{p}p^{j_{p}})(c_{0},N)}(d_{0})$\linebreak$\times\prod_{p|P}p^{-j_{p}(k+1/2+2s)}\sum_{n>0,(m,N)=1\atop (\prod_{p}p^{j_{p}})n^{2}\equiv c_{0}/(c_{0},N)(\mathrm{mod}\,N/(c_{0},N))}$ $\varphi(m)m^{-2k-4s}$. The last summation is $0$ or a finite linear combination of Dirichlet series in the form of the first series in (\ref{dseries}). As for $g_{k+1/2}'(z;c_{0},d_{0};N;s)$, we have $a_{0}=0$ and $a_{0}'(s)=(c_{0},N)^{-k-1/2-2s}\varphi((c_{0},N))\prod_{p|P}(1-p^{-2e_{p}(k+1/2+2s)})^{-1}\sum_{p|P,\,0\le j_{p}<2e_{p}}$\\$\chi_{\{(\prod_{p}p^{j_{p}})(c_{0},N)\}^{\vee}}(d_{0})\prod_{p|P}p^{-j_{p}(k+1/2+2s)}\sum_{m>0,(m,N)=1\atop (\prod_{p}p^{j_{p}})n^{2}\equiv c_{0}/(c_{0},N)(\mathrm{mod}\,N/(c_{0},N))}\varphi(m)$\linebreak$\times m^{-2k-4s}$. In either case, the constant term extends meromorphically to the whole $s$ plane.
For $n\ne0$, we consider $a_{n/N}(s)$ of $g_{k+1/2}(z;c_{0},d_{0};N;s)$. Let $n=(c_{0},N)n_{P}n'\in\mathbf{Z}$ where $|n'|$ is the maximal divisor of $n$ coprime to $N$, and $n_{P}>0$ is the divisor of $n$ with $\mathrm{rad}(n_{P})|P$. Then it is checked that $a_{n/N}(s)$ is $0$ if $n$ is not in this form. Put $t_{n'}(p^{i}):=\varphi(p^{i})p^{-i(k+1/2+2s)}$ if $2|i\ge0,i\le v_{p}(n')$, $t_{n'}(p^{i}):=-p^{-(v_{p}(n')+1)(k+1/2+2s)}$ if $2|i\ge0,i=v_{p}(n)+1$, $t_{n'}(p^{i}):=p^{1/2+i-1}$ if $2\nmid i\ge0,i=v_{p}(n)+1$, and $t_{n'}(p^{i}):=0$ if otherwise. Then $a_{n/N}(s)=\sum_{M|n_{P}}\chi_{(c_{0},N)M}(d_{0})$ $((c_{0},N)M)^{-k+1/2-2s}\sum_{M'|n'}\prod_{p|M'}$ $t_{n'}(p^{v_{p}(M')})\sum_{(m,Nn)=1\atop{m\equiv\overline{MM'}c_{0}/(c_{0},N)(\mathrm{mod}\,N/(c_{0},N))\atop m:\mathrm{square\,free}}}\iota_{M'm}$\\$\times\chi_{-4}(m)^{(d_{0}-1)/2}\mathbf{e}(\tfrac{nd_{0}\overline{c}'}{N(c_{0},N)M})\chi_{(c_{0},N)M}(m)m^{-k-1/2-2s}$ where $\overline{M},\overline{M}'$ are inverses of $M,M'$ modulo $N/(c_{0},N)$ respectively, and $\overline{c}'$ is an inverse of $c'$ modulo $N$. The summations and the product of the right hand side of this equation are all finite except for the last summation, and the last summation is written as a finite linear combination of the series in the form of the second series in (\ref{dseries}). Then $a_{n/N}(s)$ extends meromorphically to the whole $s$ plane.
The coefficient $a_{n/N}(s)$ of $g_{k+1/2}'(z;c_{0},d_{0};N;s)$ also vanishes if $n$ is not in the form $n=(c_{0},N)n_{P}n'$. For such $n$, $a_{n/N}(s)=\sum_{M|n_{P}}\chi_{\{(c_{0},N)M\}^{\vee}}(d_{0})$\linebreak$\times((c_{0},N)M)^{-k+1/2-2s} \sum_{M'|n'}\prod_{p|M'}t_{n'}(p^{v_{p}(M')})\sum_{(m,Nn)=1\atop{m\equiv\overline{MM'}c_{0}/(c_{0},N)(\mathrm{mod}\,N/(c_{0},N))\atop m:\mathrm{square\,free}}}\iota_{M'm}$\linebreak$\times\mathbf{e}(\tfrac{nd_{0}\overline{c}'}{N(c_{0},N)M})\chi_{\{(c_{0},N)M\}^{\vee}}(m)m^{-k-1/2-2s}$, which extends meromorphically to the whole complex plane. \end{proof}
Let $N,M\in\mathbf{N}$ with $4|N, M|N$ so that $4|M$ or $4|(N/M)$. Let $\rho\in(\mathbf{Z}/M)^{\ast},\rho'\in(\mathbf{Z}/(N/M))^{\ast}$ where \begin{align}
&2|v_{p}(M\mathfrak{f}_{\rho}^{-1})\hspace{1em}(p|M\mbox{ for which }\{\rho\}_{p}\mbox{ is trivial or real}),\label{cond:M}\\ &N/M=\mathfrak{e}_{\rho'}'.\nonumber \end{align}
Here $\mathfrak{e}_{\rho'}'$ is as in the introduction. For $k\in\mathbf{Z},\ge0$ with the same parity as $\rho\rho'$, we define an Eisenstein series of weight $(k+1/2+s,s)$ for $\Gamma_{0}(N)$ with character $\rho\rho'$ as follows. If $4|M$, then \begin{align}
E_{k+1/2,\rho,M}^{\rho'}(z,s):=\sum_{c,d} \overline{\rho}(d)\rho'(c/M)\chi_{c}(d)\iota_{d}(cz+d)^{-k-1/2}|cz+d|^{-2s},\label{eqn:eshiwe} \end{align} and if $2\nmid M$, then \begin{align}
E_{k+1/2,\rho,M}^{\rho'}(z,s):=\sum_{c,d} \overline{\rho}(d)\rho'(c/M)\chi_{c^{\vee}}(d)\iota_{c}^{-1}(cz+d)^{-k-1/2}|cz+d|^{-2s}\label{eqn:eshiwo} \end{align}
where $c,d$ run over the set of the second rows of matrices in $A_{1/M}^{-1}\Gamma_{0}(N)$ with $c>0$, or with $c=0$ and $d>0$, $A_{1/M}$ being as in (\ref{defAr}). More precisely $c,d$ satisfy the condition that $c>0,M|c,(c/M,N/M)=1, d\in\mathbf{Z},(d,M)=1, (c,d)=1$ where $c=0,d=1$ is added if $M=N$. We drop the notation $M$ from $E_{k+1/2,\rho,M}^{\rho'}(z,s)$ if $M=\mathfrak{e}_{\rho}'$, and drop $\rho$ or $\rho'$ if $\rho=\mathbf{1}$ or $\rho'=\mathbf{1}$.
\begin{lem}\label{lem:eishi} (i) The Eisenstein series $E_{k+1/2,\rho,M}^{\rho'}(z,s)$ as a function of $s$ extends meromorphically to the whole complex plane.
(ii) We fix $s$ so that the Eisenstein series is holomorphic at $s$. Then the constant term of $E_{k+1/2,\rho,M}^{\rho'}(z,s)$ with respect to $x$ at each cusp is a linear combination of $1$ and $y^{-k+1-2s}$, and the Eisenstein series minus the constant term is rapidly decreasing as $y\longrightarrow\infty$. \end{lem} \begin{proof} The Eisenstein series $E_{k+1/2,\rho,M}^{\rho'}(z,s)$ is written as a linear combination of Eisenstein series of the form $g_{k+1/2}(z;c_{0},d_{0};N;s)$ or $g_{k+1/2}'(z;c_{0},d_{0};N;s)$. The rest of the proof is parallel to that of Lemma \ref{lem:eisi}. \end{proof}
Let $\widetilde{\rho}$ be a primitive Dirichlet character. We put \begin{align} \rho:=\widetilde{\rho}\mathbf{1}_{2},\ \ N:=\mathrm{lcm}(4,\mathfrak{f}_{\rho}) \label{eqn:N} \end{align}
and we consider $\rho$ as a character in $(\mathbf{Z}/N)^{\ast}$. If $2|\mathfrak{f}_{\widetilde{\rho}}$, then the equality $\rho=\widetilde{\rho}$ holds. Obviously $\rho,N$ as in (\ref{eqn:N}) satisfy the condition (\ref{cond:M}) with $M=N$. We compute in detail the Fourier expansion of the Eisenstein series \begin{align}
E_{k+1/2,\rho}(z,s):=1+\sum_{c\equiv0(\mathrm{mod}N),c>0\atop(c,d)=1}(\overline{\rho}\chi_{c})(d)\iota_{d}(cz+d)^{-k-1/2}|cz+d|^{-2s}.\label{eqn:eshiw} \end{align} For $k\ge2$, $E_{k+1/2,\rho}(z,0)$ is holomorphic in $z$ and lies in $\mathbf{M}_{k+1/2}(N,\rho)$. For $s\in\mathbf{C}$ with $2\Re s+k+1/2>2$, the Eisenstein series has the Fourier expansion \begin{align} 1+c_{k,s,\rho,N}(0)w_{0}(y,k{+}1/2,s)+\sum_{n\ne0}c_{k,s,\rho,N}(n)w_{n}(y,k{+}1/2,s)\mathbf{e}(nx)\label{feoe}
\end{align}with $c_{k,s,\rho,N}(n):=\sum_{m\equiv0(N),m>0}\,m^{-k-1/2-2s}\sum_{i:(\mathbf{Z}/m)^{\times}}(\overline{\rho}\chi_{m})(i)\iota_{i}\mathbf{e}(ni/m)\ (n\in\mathbf{Z})$ by (\ref{eqn:fex}). Let $\rho_{2}:=\{\rho\}_{2}$, and let $\rho_{\mathbf{c}}$ be the product of complex $\{\rho\}_{p}\ (2\ne p|N)$, and $\rho_{\mathbf{r}}$ be the product of real $\{\rho\}_{p}\ (2\ne p|N)$, so that \begin{align} \rho=\rho_{2}\rho_{\mathbf{c}}\rho_{\mathbf{r}}.\label{eqn:dcmp-rho} \end{align} By definition, $\mathfrak{f}_{\rho_{\mathbf{r}}}$ is an odd natural number. We put $\rho_{2\mathbf{c}}:=\rho_{2}\rho_{\mathbf{c}},\rho_{2\mathbf{r}}:=\rho_{2}\rho_{\mathbf{r}},\rho_{\mathbf{c}\mathbf{r}}:=\rho_{\mathbf{c}}\rho_{\mathbf{r}}$. The primitive character $\widetilde{\rho^{2}}$ is equal to $\rho_{2\mathbf{c}}^{2}$ or $\rho_{\mathbf{c}}^{2}$ according as $\rho_{2}$ is complex or not. Put $c_{k,s,\rho,N}'(n):=\sum_{2\nmid m\equiv0(2^{-v_{2}(N)}N),m>0}$ $\overline{\rho}_{2}(m)m^{-k-1/2-2s}\iota_{m}^{-1}\sum_{i:(\mathbf{Z}/m)^{\times}}$ $(\overline{\rho}_{\mathbf{cr}}\chi_{m^{\vee}})(i)\mathbf{e}(ni/m)\ (n\in\mathbf{Z})$, and \begin{align*} c_{k,s,\rho,N}^{(2)}(n):&=2^{-1}(1{+}\sqrt{{-}1})\sum_{l=v_{2}(N)}^{\infty}\overline{\rho}_{\mathbf{cr}}(2^{l})2^{-l(k+1/2+2s)}\times\\ &\{\sum_{i:(\mathbf{Z}/2^{l})^{\times}}(\overline{\rho}_{2}\chi_{2^{l}})(i)\mathbf{e}(ni/2^{l}){-}\sqrt{{-}1}\sum_{i:(\mathbf{Z}/2^{l})^{\times}}(\overline{\rho}_{2}\chi_{-2^{l}})(i)\mathbf{e}(ni/2^{l})\}, \end{align*} where the summation of $l=v_{2}(N)\ge2$ to $\infty$ is actually finite for $n\ne0$. We have the decomposition \begin{align} c_{k,s,\rho,N}(n)=c_{k,s,\rho,N}^{(2)}(n)c_{k,s,\rho,N}'(n). \label{eqn:dcmp-c} \end{align}for $n\in\mathbf{Z}$. Put $c_{k,s,\rho,N}''(n):=\sum_{(m,N)=1,m>0}$ $\overline{\rho}(m)m^{-k-1/2-2s}\iota_{m}^{-1}\sum_{i:(\mathbf{Z}/m)^{\times}}\chi_{m^{\vee}}(i)$ $\times\mathbf{e}(ni/m)\ (n\in\mathbf{Z})$, and $c_{k,s,\rho,N}^{(\mathbf{c})}(n):=\sum_{m}\rho_{2\mathbf{r}}(m)m^{-k-1/2-2s}\iota_{m}^{-1}\sum_{i:(\mathbf{Z}/m)^\times}$\linebreak$(\overline{\rho}_{\mathbf{c}}\chi_{m^{\vee}})(i)\mathbf{e}(ni/m)$ where $m$ runs over the set of all multiples of $\mathfrak{f}_{\rho_{\mathbf{c}}}$ whose radicals equal that of $\mathfrak{f}_{\rho_{\mathbf{c}}}$, and $c_{k,s,\rho,N}^{(\mathbf{r})}(n):=\sum_{m}\overline{\rho}_{2\mathbf{c}}(m)m^{-k-1/2-2s}\iota_{m}^{-1}\sum_{i:(\mathbf{Z}/m)^\times}(\rho_{\mathbf{r}}\chi_{m^{\vee}})(i)$ $\times\mathbf{e}(ni/m)$ where $m$ runs the set of all positive integers whose radicals are $\mathfrak{f}_{\rho_{\mathbf{r}}}$. Then there holds the decomposition \begin{align} c_{k,s,\rho,N}'(n)=c_{k,s,\rho,N}^{(\mathbf{c})}(n)c_{k,s,\rho,N}^{(\mathbf{r})}(n)c_{k,s,\rho,N}''(n)\hspace{1.5em}(n\in\mathbf{Z}). \label{eqn:dcmp-cprime} \end{align}
Let $n=0$. Then \begin{align}
c_{k,s,\rho,N}''(0)=\tfrac{L(2k-1+4s,\overline{\rho}^2)}{L(2k+4s,\overline{\rho}^{2})}=\tfrac{L(2k-1+4s,\overline{\widetilde{\rho^{2}}})}{L(2k+4s,\overline{\widetilde{\rho^{2}}})}\prod_{p|2\mathfrak{f}_{\rho_{\mathbf{r}}}}\tfrac{1-\overline{\widetilde{\rho^{2}}}(p)p^{-2k+1-4s}}{1-\overline{\widetilde{\rho^{2}}}(p)p^{-2k-4s}}.\label{eqn:c2prime0} \end{align} If $\rho_{2}$ is complex, then $c_{k,s,\rho,N}^{(2)}(0)$ vanishes since $\overline{\rho}_{2}\chi_{\pm2^{l}}$ is nontrivial for any $l$ and $\sum_{i:(\mathbf{Z}/2^{l})^{\times}}(\overline{\rho}_{2}\chi_{2^{l}})(i)=0\ (l\ge4)$. If $\rho_{\mathbf{c}}$ is nontrivial, then $c_{k,s,\rho,N}'(0)$ vanishes by the same reason. Hence $c_{k,s,\rho,N}(0)=0$ if $\rho$ is complex. If $\rho$ is real, then putting $t_{\rho_{2}}=1$ or $2^{-k+1/2-2s}$ according as $\mathfrak{f}_{\rho_{2}}\le4$ or $\mathfrak{f}_{\rho_{2}}=8$, we have \begin{align} c_{k,s,\rho,N}^{(2)}(0)&=(1{+}\rho_{2}({-}1)\sqrt{{-}1})\tfrac{2^{-2k-1-4s}t_{\rho_{2}}}{1-2^{-2k+1-4s}},
c_{k,s,\rho,N}^{(\mathbf{r})}(0)=\rho_{2}(\mathfrak{f}_{\rho_{\mathbf{r}}})\iota_{\mathfrak{f}_{\rho_{\mathbf{r}}}}^{-1}\prod_{p|\mathfrak{f}_{\rho_{\mathbf{r}}}}\tfrac{(p-1)p^{-k-1/2-2s}}{1-p^{-2k+1-4s}},\nonumber\\
c_{k,s,\rho,N}'(0)&=\rho_{2}(\mathfrak{f}_{\rho_{\mathbf{r}}})\iota_{\mathfrak{f}_{\rho_{\mathbf{r}}}}^{-1}\tfrac{\zeta(2k-1+4s)}{\zeta(2k+4s)}\tfrac{1-2^{-2k+1-4s}}{1-2^{-2k-4s}}\prod_{p|\mathfrak{f}_{\rho_{\mathbf{r}}}}\tfrac{(p-1)p^{-k-1/2-2s}}{1-p^{-2k-4s}},\nonumber\nonumber\\
c_{k,s,\rho,N}(0)&=(1{+}({-}1)^{k}\sqrt{{-}1})\tfrac{\zeta(2k-1+4s)}{\zeta(2k+4s)}\tfrac{2^{-2k-1-4s}t_{\rho_{2}}}{1-2^{-2k-4s}}\prod_{p|\mathfrak{f}_{\rho_{\mathbf{r}}}}\tfrac{(p-1)p^{-k-1/2-2s}}{1-p^{-2k-4s}}.\nonumber \end{align}
Let $n\in\mathbf{Z},\ne0$. We put $n_{\mathbf{c}}:=\prod_{p|\mathfrak{f}_{\rho_{\mathbf{c}}}}p^{v_{p}(n)},n_{\mathbf{r}}:=\prod_{p|N,p\nmid2\mathfrak{f}_{\rho_{\mathbf{c}}}}p^{v_{p}(n)},n'=\prod_{p\nmid N}p^{v_{p}(n)}$, so that $n=\mathrm{sgn}(n)2^{v_{2}(n)}n_{\mathbf{c}}n_{\mathbf{r}}n'$. Then there holds an equality \begin{align*} c_{k,s,\rho,N}^{(\mathbf{c})}(n)=\iota_{n_{\mathbf{c}}\mathfrak{f}_{\rho_{\mathbf{c}}}}^{-1}\overline{\rho}_{2\mathbf{r}}(n_{\mathbf{c}}\mathfrak{f}_{\rho_{\mathbf{c}}})(\rho_{\mathbf{c}}\chi_{(n_{\mathbf{c}}\mathfrak{f}_{\rho_{\mathbf{c}}})^{\vee}})(n/n_{\mathbf{c}})\tau(\overline{\rho}_{\mathbf{c}}\chi_{(n_{\mathbf{c}}\mathfrak{f}_{\rho_{\mathbf{c}}})^{\vee}})\mathfrak{f}_{\rho_{\mathbf{c}}}^{-k-1/2-2s}n_{\mathbf{c}}^{-k+1/2-2s}, \end{align*}and if $\rho_{2}$ is complex, then $c_{k,s,\rho,N}^{(2)}(n)=2^{-1}(1{+}\sqrt{{-}1})$ $\overline{\rho}_{\mathbf{cr}}(2^{v_{2}(n)}$ $\mathfrak{f}_{\rho_{2}})(\rho_{2}\chi_{2^{v_{2}(n)}\mathfrak{f}_{\rho_{2}}})(n2^{-v_{2}(n)})$ $\{1{-}\chi_{-4}(n2^{-v_{2}(n)})(\overline{\rho}_{2}\chi_{2^{v_{2}(n)}})(1{+}2^{-2}\mathfrak{f}_{\rho_{2}})\}$ $\tau(\overline{\rho}_{2}\chi_{2^{v_{2}(n)}\mathfrak{f}_{{\rho}_{2}}})$ $\mathfrak{f}_{{\rho}_{2}}^{-k-1/2-2s}2^{-v_{2}(n)(k-1/2+2s)}$.
We put \begin{align}
\psi_{n}:=(\overline{\rho}\chi_{n})^{\sim}=(\overline{\rho}\chi_{2^{v_{2}(n)}}\chi_{(2^{-v_{2}(n)}|n|)^{\vee}}\chi_{-4}^{(2^{-v_{2}(n)}|n|-\mathrm{sgn}(n))/2})^{\sim}. \label{eqn:psin} \end{align} Then $\widetilde{\psi_{n}^{2}}=\overline{\rho}_{\mathbf{c}}^{2}$ if $\rho_{2}$ is real, and $\widetilde{\psi_{n}^{2}}=\overline{\rho}_{2\mathbf{c}}^{2}$ if $\rho_{2}$ is complex. The conductors $\mathfrak{f}_{\rho_{\mathrm{c}}}$ and $\mathfrak{f}_{\rho_{\mathrm{c}}^{2}}$ have the same prime factors. We define $f_{k,s,\rho}(n,p)$ to be \begin{align} &f_{k,s,\rho}(n,p)\nonumber\\ :=&\begin{cases} \sum\limits_{0\le l\le v_{p}(n)/2}\widetilde{\overline{\rho}^{2}}(p)^{l}p^{-2l(k-1/2+2s)}\\
\hspace{2.5em}-\psi_n(p)p^{-k-2s}\sum\limits_{0\le l\le(v_{p}(n)-2)/2}\widetilde{\overline{\rho}^{2}}(p)^{l}p^{-2l(k-1/2+2s)}&(2| v_{p}(n)),\\ \sum\limits_{0\le l \le(v_{p}(n)-1)/2}\widetilde{\overline{\rho}^{2}}(p)^{l}p^{-2l(k-1/2+2s)}&(2\nmid v_{p}(n)), \end{cases}\label{deffksrho} \end{align}
where in the case $2|v_{p}(n)$, the second summation is $0$ if $v_{p}(n)=0$. As a function of $s$, $f_{k,s,\rho}(n,p)$ is holomorphic.
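For instance, (\ref{deffksrho}) gives $f_{k,s,\rho}(n,p)=1$ if $v_{p}(n)\le1$, and $f_{k,s,\rho}(n,p)=1+\widetilde{\overline{\rho}^{2}}(p)p^{-2k+1-4s}-\psi_{n}(p)p^{-k-2s}$ if $v_{p}(n)=2$.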
Let $\rho_{2}$ be real. If $\{\psi_{n}\}_{2}=\mathbf{1}_{2}$, then we put $f_{k,s,\rho}^{(\mathbf{r})}(n,2):=2^{-1}(1{+}\rho_{2}({-}1)\sqrt{{-}1})\{-(1+\psi_{n}(2)2^{-k-2s})^{-1}+f_{k,s,\rho}(4n,2)\}$ for $v_{2}(n)$ even, and $f_{k,s,\rho}^{(\mathbf{r})}(n,2):=\overline{\rho}_{\mathbf{cr}}(2)2^{-k-1/2-2s}(1+\rho_{2}({-}1)\sqrt{{-}1})\{-(1+\psi_{n}(2)2^{-k-2s})^{-1}+f_{k,s,\rho}(2n,2)\}$ for $v_{2}(n)$ odd. If $\{\psi_{n}\}_{2}=\chi_{-4}$, then we put $f_{k,s,\rho}^{(\mathbf{r})}(n,2):=2^{-1}(1{+}\rho_{2}({-}1)\sqrt{{-}1})\{{-}(1-\widetilde{\overline{\rho}^{2}}(2)2^{-2k-4s})^{-1}+f_{k,s,\rho}(2n,2)\}$ for $v_{2}(n)$ even, and $f_{k,s,\rho}^{(\mathbf{r})}(n,2):=\overline{\rho}_{\mathbf{cr}}(2)2^{-k-1/2-2s}(1+\rho_{2}({-}1)\sqrt{{-}1})\{-(1-\widetilde{\overline{\rho}^{2}}(2)2^{-2k-4s})^{-1}+f_{k,s,\rho}(n,2)\}$ for $v_{2}(n)$ odd. If $\{\psi_{n}\}_{2}=\chi_{\pm8}$, then we put $f_{k,s,\rho}^{(\mathbf{r})}(n,2):=-\overline{\rho}_{\mathbf{cr}}(2)2^{-k-1/2-2s}(1+\rho_{2}({-}1)\sqrt{{-}1})(1-\widetilde{\overline{\rho}^{2}}(2)2^{-2k-4s})^{-1} $ for $v_{2}(n)=0$, $f_{k,s,\rho}^{(\mathbf{r})}(n,2):=\overline{\rho}_{\mathbf{cr}}(2)2^{-k-1/2-2s}(1{+}\rho_{2}({-}1)\sqrt{{-}1})\{-(1-\widetilde{\overline{\rho}^{2}}(2)2^{-2k-4s})^{-1}+f_{k,s,\rho}(n/2,2)\}$ for $v_{2}(n)>0$ even, and $f_{k,s,\rho}^{(\mathbf{r})}(n,2):=2^{-1}(1{+}\rho_{2}({-}1)\sqrt{{-}1})\{-(1-\widetilde{\overline{\rho}^{2}}(2)2^{-2k-4s})^{-1}+f_{k,s,\rho}(n,2)\}$ for $v_{2}(n)$ odd. Then $c_{k,s,\rho,N}^{(2)}(n)=(1+\psi_{n}(2)2^{-k-2s})f_{k,s,\rho}^{(\mathbf{r})}(n,2)$ if $\{\psi_{n}\}_{2}=\mathbf{1}_{2}$, and $c_{k,s,\rho,N}^{(2)}(n)=(1-\widetilde{\overline{\rho}^{2}}(2)2^{-2k-4s})f_{k,s,\rho}^{(\mathbf{r})}(n,2)$ if otherwise. The factor $c_{k,s,\rho,N}^{(2)}(n)\ (n\ne0)$ is holomorphic in $s$ at least if $k+2s\ne0$.
For an odd $p|\mathfrak{f}_{\rho_{\mathbf{r}}}$, the $p$-factor of $c_{k,s,\rho,N}^{(\mathbf{r})}(n)$, namely $\sum_{l=1}^{\infty}(\overline{\rho}_{2\mathbf{c}}\chi_{(\mathfrak{f}_{\mathbf{r}}p^{-1})^{\vee}})(p)^{l}$\linebreak$\times p^{-l(k+1/2+2s)}\iota_{p^{l}}^{-1}\sum_{i:(\mathbf{Z}/p^{l})^{\times}}(\chi_{{p^{l-1}}^{\vee}})(i)\mathbf{e}(ni/p^{l})$ is given by $(1+\psi_{n}(p)p^{-k-2s})$\linebreak$\times f_{k,s,\rho}^{(\mathbf{r})}(n,p)$ for $v_{p}(n)$ odd, or $(1-\overline{\rho}(p)^{2}p^{-2k-4s})f_{k,s,\rho}^{(\mathbf{r})}(n,p)$ for $v_{p}(n)$ even where $f_{k,s,\rho}^{(\mathbf{r})}(n,p)$ denotes \begin{alignat*}{2} &\iota_{p}\widetilde{\rho\chi_{4p}}(p)p^{k-1/2+2s}\{-(1+\psi_{n}(p)p^{-k-2s})^{-1}+f_{k,s,\rho}(pn,p)\}&\hspace{1em}&(2\nmid v_{p}(n)),\\
&\iota_{p}\widetilde{\rho\chi_{4p}}(p)p^{k-1/2+2s}\{-(1-\widetilde{\overline{\rho}^{2}}(p)p^{-2k-4s})^{-1}+f_{k,s,\rho}(pn,p)\}&&(2|v_{p}(n)). \end{alignat*}
We have $c_{k,s,\rho,N}^{(\mathbf{r})}(n)=\prod_{p|\mathfrak{f}_{\rho_{\mathbf{r}}},2\nmid v_{p}(n)}(1+\psi_{n}(p)p^{-k-2s})\prod_{p|\mathfrak{f}_{\rho_{\mathbf{r}}},2|v_{p}(n)}(1-\widetilde{\overline{\rho}^{2}}(p)p^{-2k-4s})$ $\times\prod_{p|\mathfrak{f}_{\rho_{\mathbf{r}}}}f_{k,s,\rho}^{(\mathbf{r})}(n,p)$.
For $p\nmid N$, the $p$-factor of $c_{k,s,\rho,N}''(n)$, namely $\sum_{l=0}^{\infty}\overline{\rho}(p)^{l}p^{-l(k+1/2+2s)}\iota_{p^{l}}^{-1}\sum_{i:(\mathbf{Z}/p^{l})^{\times}}$ $\chi_{{p^{l}}^{\vee}}(i)\mathbf{e}(ni/p^{l})$, is equal to $(1+\psi_{n}(p)p^{-k-2s})f_{k,s,\rho}(n,p)$ if $2|v_{p}(n)$, and to $(1-\widetilde{\overline{\rho}^{2}}(p)p^{-k-2s})f_{k,s,\rho}(n,p)$ if $2\nmid v_{p}(n)$. Then we have for $n\ne0$, \begin{align}
c_{k,s,\rho,N}''(n)=&\tfrac{L(k{+}2s,\psi_{n})}{L(2k{+}4s,\widetilde{\overline{\rho}^{2}})}\prod_{p|2\mathfrak{f}_{\rho_{\mathbf{r}}}}\tfrac{1{-}\psi_{n}(p)p^{-k-2s}}{1{-}\widetilde{\overline{\rho}^{2}}(p)p^{-2k-4s}}\prod_{p\nmid N,p|n}f_{k,s,\rho}(n,p).\label{eqn:c2prime} \end{align} Then
\begin{align*}c_{k,s,\rho,N}(n)=c_{k,s,\rho,N}^{(2)}(n)c_{k,s,\rho,N}^{(\mathbf{c})}(n)\tfrac{L(k{+}2s,\psi_{n})}{L(2k{+}4s,\widetilde{\overline{\rho}^{2}})}\prod_{p|\mathfrak{f}_{\rho_{\mathbf{r}}}}f_{k,s,\rho}^{(\mathbf{r})}(n,p)\prod_{p\nmid N,p|n}f_{k,s,\rho}(n,p) \end{align*} and in particular if $\rho_{2}$ is real, then \begin{align*}
c_{k,s,\rho,N}(n)=c_{k,s,\rho,N}^{(\mathbf{c})}(n)\tfrac{L(k{+}2s,\psi_{n})}{L(2k{+}4s,\widetilde{\overline{\rho}^{2}})}\prod_{p|2\mathfrak{f}_{\rho_{\mathbf{r}}}}f_{k,s,\rho}^{(\mathbf{r})}(n,p)\prod_{p\nmid N, p|n}f_{k,s,\rho}(n,p). \end{align*} Thus the Fourier coefficients of (\ref{eqn:eshiw}) are obtained.
\section{Constant terms of Eisenstein series of half integral weight}\label{sect:CTEHIW} We compute the constant terms of Fourier expansions with respect to $x$ at cusps in $\mathcal{C}_{0}(N)$ of (\ref{eqn:cusps}), of some specific Eisenstein series of half integral weight. They are useful to investigate the functional equations of Eisenstein series.
Let $4|N,\rho\in(\mathbf{Z}/N)^{\ast}$ be as in (\ref{cond:M}) with $M=N$. Then \begin{alignat*}{2} E_{k+1/2,\rho,2^{2m}N}(z,s)&=E_{k+1/2,\rho,N}(2^{2m}z,s)&\mbox{\hspace{2em}}&(m\ge0),\\ E_{k+1/2,\rho\chi_{8},2^{2m+1}N}(z,s)&=E_{k+1/2,\rho,N}(2^{2m+1}z,s)&&(m\ge0),\\ E_{k+1/2,\rho\mathbf{1}_{p},p^{2}N}(z,s)&=E_{k+1/2,\rho\chi_{p},pN}(pz,s)&&(p\nmid N),\\
E_{k+1/2,\rho,p^{2}N}(z,s)&=E_{k+1/2,\rho,N}(p^{2}z,s)&&(p|N). \end{alignat*} For $\rho'\in(\mathbf{Z}/N')^{\ast}$ satisfying (\ref{cond:M}) with $\rho=\rho'$ and $M=N'$, the above equalities imply that the Eisenstein series $E_{k+1/2,\rho',N'}(z,s)$ is written in the form $E_{k+1/2,\rho',N'}(z,s)=E_{k+1/2,\rho,N}(mz,s)$ for some natural number $m$ and for $\rho,N$ so that \begin{align} \rho=\widetilde{\rho}\mathbf{1}_{2},\ N=\mathrm{lcm}(4,\mathfrak{f}_{\rho})\mbox{, and }\rho_{2}=\mathbf{1}_{2},\chi_{-4}\mbox{ or }\rho_{2}\mbox{ is complex} \label{eqn:N2} \end{align} where $\widetilde{\rho}$ is a primitive character and where $\rho_{2}$ is as in (\ref{eqn:dcmp-rho}).
The constant terms of Eisenstein series (\ref{eqn:eshiwe}),(\ref{eqn:eshiwo}) at cusps are in the form \begin{align}\label{cte} c_{0}+\xi_{0}(s)y^{-k-1/2-2s} \end{align} with constants $c_{0}$ and with functions $\xi_{0}(s)$ of $s$ by (\ref{eqn:fex}), where $c_{0}$ and $\xi_{0}(s)$ could be $0$.
\begin{lem}\label{lem:vac0} Let $\rho,\widetilde{\rho},N$ be as in (\ref{eqn:N2}), and let $\rho=\rho_{2}\rho_{\mathbf{c}}\rho_{\mathbf{r}}=\rho_{2\mathbf{c}}\rho_{\mathbf{r}}$ be as in (\ref{eqn:dcmp-rho}). Let $k$ be a nonnegative integer with the same parity as $\rho$.
(i) Suppose that $\rho_{2}$ is real. Then $N=4\mathfrak{f}_{\rho_{\mathbf{c}}}\mathfrak{f}_{\rho_{\mathbf{r}}}$, and $E_{k+1/2,\rho}(z,s)$ vanishes at a cusp $i/(2m)\ ((i,2m)=1)$ for any odd $m$.
(ii) Suppose that $\rho_{2}$ is complex, which is irreducible by our assumption. Then $E_{k+1/2,\rho}(z,s)$ vanishes at a cusp $i/(2^{-1}\mathfrak{f}_{\rho_{2}}m)\ ((i,2\mathfrak{f}_{\rho_{2}}m)=1)$ for any odd $m$.
(iii) Let $M|N$. Suppose that $M$ has a prime divisor $p$ so that $\{\rho\}_{p}$ is complex. Then $E_{k+1/2,\rho}(z,s)$ has the constant term (\ref{cte}) at a cusp $i/M\ ((i,M)=1)$ with $\xi_{0}(s)=0$. \end{lem} \begin{proof} (i) Put $B_{n}=\mbox{\footnotesize$\begin{pmatrix}{-}1{+}2imn&{-}i^{2}n\\4m^{2}n&{-}1{-}2imn\end{pmatrix}$}\in\mathrm{SL}_{2}(\mathbf{Z})$, which stabilizes the cusp $i/(2m)$. It is checked that the automorphy factor of $E_{k+1/2,\rho}(z,s)$ has the value in $\pm\sqrt{-1}$ at $z=i/(2m)$ for $B_{\mathfrak{f}_{\rho_{\mathbf{c}}}\mathfrak{f}_{\rho_{\mathbf{r}}}}\in\Gamma_{0}(N)$ noting that ${-}1{-}2im\mathfrak{f}_{\rho_{\mathbf{c}}}\mathfrak{f}_{\rho_{\mathbf{r}}}\equiv1\pmod{4}$, namely $\iota_{({-}1{-}2im\mathfrak{f}_{\rho_{\mathbf{c}}}\mathfrak{f}_{\rho_{\mathbf{r}}})}=1$, and $\lim_{z\to i/(2m)\atop z\in\mathfrak{H}}(4m^{2}\mathfrak{f}_{\rho_{\mathbf{c}}}\mathfrak{f}_{\rho_{\mathbf{r}}}z{-}1{-}2im\mathfrak{f}_{\rho_{\mathbf{c}}}\mathfrak{f}_{\rho_{\mathbf{r}}})^{1/2}=\sqrt{-1}$. Then the constant term of the Fourier expansion of $E_{k+1/2,\rho}(z,s)$ at the cusp $i/(2m)$ vanishes.
(ii) We note that $16|\mathfrak{f}_{\rho_{2}}$. Put $B_{n}=\mbox{\footnotesize$\begin{pmatrix}{-}1{+}2^{-1}\mathfrak{f}_{\rho_{2}}imn&{-}i^{2}n\\2^{-2}\mathfrak{f}_{\rho_{2}}^{2}m^{2}n&{-}1{-}2^{-1}\mathfrak{f}_{\rho_{2}}imn\end{pmatrix}$}\in\mathrm{SL}_{2}(\mathbf{Z})$, which stabilizes the cusp $i/(2^{-1}\mathfrak{f}_{\rho_{2}}m)$. Take $\mathfrak{f}_{\rho_{\mathbf{c}}}\mathfrak{f}_{\rho_{\mathbf{r}}}(>0)$ as $n$. Then $\lim_{z\to i/(2m)\atop z\in\mathfrak{H}}$ $(4m^{2}nz{-}1{-}2imn)^{k+1/2}=(-1)^{k}\sqrt{-1}$, $\chi_{(2^{-2}\mathfrak{f}_{\rho_{2}}^{2}m^{2}n)}({-}1{-}2^{-1}\mathfrak{f}_{\rho_{2}}imn)=$\linebreak$\chi_{n}({-}1{-}2^{-1}\mathfrak{f}_{\rho_{2}}imn)=\chi_{n}({-}1)=1$, and ${-}1{-}2^{-1}\mathfrak{f}_{\rho_{2}}imn\equiv3\pmod{4}$. Hence the automorphy factor of $E_{k+1/2,\rho}(z,s)$ has the value $(-1)^{k}\rho({-}1{-}2^{-1}\mathfrak{f}_{\rho_{2}}imn)$ at $z=i/(2^{-2}\mathfrak{f}_{\rho_{2}}^{2}m^{2}n)$ for $B_{n}\in\Gamma_{0}(N)$. Then $(-1)^{k}\rho({-}1{-}2^{-1}\mathfrak{f}_{\rho_{2}}imn)=\rho_{2}(1{+}2^{-1}\mathfrak{f}_{\rho_{2}}imn)$ $=-1$. Then the constant term of the Fourier expansion at the cusp $i/(2^{-1}\mathfrak{f}_{\rho_{2}}m)$ vanishes. Though the lemma makes no mention of the case $\rho_{2}=\chi_{\pm8}$, this proof applies to that case as well.
(iii) First we assume that there is a prime divisor $p_{0}$ of $M$ with $1\le v_{p_{0}}(M)<v_{p_{0}}(N)$ so that $\{\rho\}_{p_{0}}$ is complex. When $p_{0}=2$, we may assume, by assertion (ii), that $v_{2}(M)\le v_{2}(N)-2$. Put $B_{n}=\mbox{\footnotesize$\begin{pmatrix}{-}1{+}iMn&{-}i^{2}n\\M^{2}n&{-}1{-}iMn\end{pmatrix}$}\in\mathrm{SL}_{2}(\mathbf{Z})$, which stabilizes the cusp $i/M$. We take $n>0$ so that $N|M^{2}n,\ v_{p}(Mn)\ge v_{p}(N)\ (p|N,p\ne p_{0})$ and that $v_{p_{0}}(Mn)<v_{p_{0}}(N)$ if $p_{0}\ne2$, and $v_{2}(Mn)<v_{2}(N)-1$ if $p_{0}=2$. Then the automorphy factor of $E_{k+1/2,\rho}(z,s)$ does not take a real value at $i/M$ for $B_{n}\in\Gamma_{0}(N)$, in particular it does not take the value $1$ and hence the constant term of the Fourier expansion at $i/M$ vanishes.
Now we may assume that $(M,N/M)=1$ and the cusp $i/M$ is $1/M$ (see (\ref{eqn:cusps})). Let $A_{1/M}=\left({\, 1\ \ b_{0}\atop M\ d_{0}}\right)\in\mathrm{SL}_{2}(\mathbf{Z})$. Put $u(c,d):=(cz+d)^{-k-1/2}|cz+d|^{-2s}$ for short. Then \begin{align}
&E_{k+1/2,\chi}(z,s)|_{A_{i/M}}\nonumber\\
=&u(M,d_{0})+\sum_{N|c>0,(c,d)=1\atop c+dM>0}(\overline{\rho}\chi_{c})(d)\iota_{d}u(c+dM,cb_{0}+dd_{0})\nonumber\\
&-(-1)^{k}\sqrt{-1}\sum_{N|c>0,(c,d)=1\atop c+dM<0}(\overline{\rho}\chi_{c})(d)\iota_{d}u(-(c+dM),-cb_{0}-dd_{0})\nonumber\\
=&\sum_{N|c>0,(c,d)=1\atop c+dM>0}(\overline{\rho}\chi_{c})(d)\iota_{d}u(c+dM,cb_{0}+dd_{0}).\label{eqn:cfc} \end{align} We fix $c+dM>0$. Then (\ref{eqn:cfc}) has the partial sum $\sum_{n=-\infty}^{\infty}\overline{\rho}(d-\tfrac{N}{M}n)\chi_{c+Nn}(d-\tfrac{N}{M}n)$ $\times\iota_{d-\tfrac{N}{M}n}u(c+dM,cb_{0}+dd_{0}-\frac{N}{M}n)$, whose constant term containing $y^{-k-1/2-2s}$ is equal to \begin{align*} &2^{-1}(1+\sqrt{-1})\sum_{n=0}^{M}\{(\overline{\rho}\chi_{c+Md})(d-\tfrac{N}{M}n)-(\overline{\rho}\chi_{(c+Md)}\chi_{-4})(d-\tfrac{N}{M}n)\sqrt{-1}\}\\ &\times w(N^{-1}(c+dM),k+1/2,s). \end{align*} This is $0$, and hence the Fourier expansion of (\ref{eqn:cfc}) does not have the constant term containing $y^{-k-1/2-2s}$. \end{proof}
The nonzero constant terms of the Fourier expansions at the cusps of the Eisenstein series (\ref{eqn:eshiw}) are obtained in the same way as in the preceding section. We state them as a lemma. \begin{lem}\label{lem:vac} Let $\rho,\widetilde{\rho},N$ be as in (\ref{eqn:N2}), and let $\rho=\rho_{2}\rho_{\mathbf{c}}\rho_{\mathbf{r}}=\rho_{2\mathbf{c}}\rho_{\mathbf{r}}$ be as in (\ref{eqn:dcmp-rho}). Let $k\in\mathbf{Z},\ge0$ be so that $k$ and $\rho$ have the same parity. Put \begin{align}\hspace*{-.4em} U_{k+1/2,\rho}(s):=\,&\mathbf{e}({-}\tfrac{k{+}1/2}{4})2^{{-}k{-}1/2{-}2s}(\mathfrak{f}_{\rho_{\mathbf{c}}}\mathfrak{f}_{\rho_{\mathbf{r}}})^{-1}\pi\tfrac{\Gamma(k{-}1/2{+}2s)}{\Gamma(s)\Gamma(k+1/2+s)}\tfrac{L(2k-1+4s,\overline{\rho}^{2}\mathbf{1}_{2})}{L(2k+4s,\overline{\rho}^{2}\mathbf{1}_{2})},\label{eqn:cffe} \end{align}
and put for $R|4\mathfrak{f}_{\rho_{\mathbf{r}}}$, \begin{align}
U_{k+1/2,\rho,R}(s):=R^{-k+1/2-2s}\prod_{p|R}\{(p-1)(1-\overline{\rho}_{\mathbf{c}}(p)^{2}p^{-2k+1-4s})^{-1}\}.\label{eqn:cffe2} \end{align}
(i) Suppose that $\rho_{2}$ is real. Then $N=4\mathfrak{f}_{\rho_{\mathbf{c}}}\mathfrak{f}_{\rho_{\mathbf{r}}}$, and $E_{k+1/2,\rho}(z,s)\in$ $\mathbf{M}_{k{+}1/2{+}s,s}(N,\rho)$ has $0$ as the constant terms of the Fourier expansion with respect to $x$ at cusps in $\mathcal{C}_{0}(N)$ except $1/N,1/P,1/(4P)$ with $P|\mathfrak{f}_{\rho_{\mathbf{r}}}$. The constant term of $E_{k+1/2,\rho}(z,s)$ at a cusp $1/P$ is $U_{k+1/2,\rho}(s)U_{k+1/2,\rho,P}(s)y^{-k+1/2-2s}$, and the constant term at a cusp $1/(4P)$ is $\{1{+}\rho_{2}(-1)\chi_{-4}(P)\sqrt{-1}\}U_{k+1/2,\rho}(s) U_{k+1/2,\rho,4P}(s)$ $y^{-k+1/2-2s}$ where there is the additional term $1$ if $\mathfrak{f}_{\rho_{\mathbf{c}}}=1$ and $P=\mathfrak{f}_{\rho_{\mathbf{r}}}$. If $\mathfrak{f}_{\rho_{\mathbf{c}}}>1$, then the constant term at a cusp $1/N$ is $1$.
(ii) Suppose that $\rho_{2}$ is complex. Then $N=\mathfrak{f}_{\rho_{2\mathbf{c}}}\mathfrak{f}_{\rho_{\mathbf{r}}}$, and $E_{k+1/2,\rho}(z,s)$ has $0$ as the constant terms at cusps in $\mathcal{C}_{0}(N)$ except $1/N,1/P$ with $P|\mathfrak{f}_{\rho_{\mathbf{r}}}$. The constant term at a cusp $1/N$ of $E_{k+1/2,\rho}(z,s)$ is $1$, and the constant term at a cusp $1/P$ is $2^{2}\mathfrak{f}_{\rho_{2}}^{-1}U_{k+1/2,\rho}(s)U_{k+1/2,\rho,P}(s)y^{-k+1/2-2s}$ . \end{lem}
For $\rho,N$ as in (\ref{eqn:N2}), the Eisenstein series $E_{k+1/2}^{\rho}(z,s)=\sum_{c>0,(c,N)=1 \atop(c,d)=1}\rho(c)\chi_{c^{\vee}}(d)\iota_{c}^{-1}$ $\times(cz+d)^{-k-1/2}|cz+d|^{-2s}$ has the Fourier expansion \begin{align} \sum_{n=-\infty}^{\infty}c_{k,s,\overline{\rho},N}''(n)w_{n}(y,k+1/2,s)\mathbf{e}(nx) \label{feoe2} \end{align}
where $c_{k,s,\rho,N}''(0)$ is as in (\ref{eqn:c2prime0}), and where $c_{k,s,\rho,N}''(n)$ is as in (\ref{eqn:c2prime}) for $n\ne0$. We have $E_{k+1/2,\rho}(z,s)|_{\mbox{\tiny$\left(\begin{array}{@{}c@{\,}c@{}}0&-1/N\\1&0\end{array}\right)$}}=z^{-k-1/2}|z|^{-2s}E_{k+1/2,\rho}(-\tfrac{1}{Nz},s)=E_{k+1/2}^{\overline{\rho}}(z,s)$, and $E_{k+1/2}^{\overline{\rho}}(z,s)|_{\mbox{\tiny$\left(\begin{array}{@{}c@{\,}c@{}}0&-1/N\\1&0\end{array}\right)$}}=(-1)^{k-1}\sqrt{-1}E_{k+1/2,\rho}(z,s)$.
\begin{lem}\label{lem:vac2} Let $\rho,N,k$ be as in Lemma \ref{lem:vac}.
(i) Let $P|\mathfrak{f}_{\rho_{\mathbf{r}}}$ and let $\varepsilon\in\{\pm 1\}$ be so that $\rho=\rho_{2\mathbf{c}}\chi_{P^{\vee}}\chi_{(\varepsilon4\mathfrak{f}_{\rho_{\mathbf{r}}}/P)}$, namely $\varepsilon=\chi_{{-}4}(\mathfrak{f}_{\rho_{\mathbf{r}}}/P)$. Then the constant term of $E_{k+1/2,\chi_{P^{\vee}}}^{\rho_{2\mathbf{c}}\chi_{(\varepsilon4\mathfrak{f}_{\rho_{\mathbf{r}}}/P)}}(z,s)\in\mathbf{M}_{k+1/2+s,s}(N,\rho)$ at a cusp $1/P$ is in the form (\ref{cte}) with $c_{0}=(-1)^{k-1}\sqrt{-1}\chi_{-4}(P)\iota_{P}$, and the constant terms at other cusps in $\mathcal{C}_{0}(N)$ are in the form (\ref{cte}) with $c_{0}=0$. The constant terms (\ref{cte}) with $\xi_{0}(s)\ne0$ possibly appear only at cusps $i/M\in\mathcal{C}_{0}(N)\ ((i,M)=1)$ where $\mathfrak{f}_{\rho_{\mathbf{c}}}|M$ if $\rho_{2}$ is real, and $\mathfrak{f}_{\rho_{2\mathbf{c}}}|M$ if $\rho_{2}$ is complex.
(ii) Suppose that $\rho_{2}=\mathbf{1}_{2},\chi_{-4}$. Let $\rho=\rho_{2\mathbf{c}}\chi_{\varepsilon4P}\chi_{(\mathfrak{f}_{\rho_{\mathbf{r}}}/P)^{\vee}}$, namely $\varepsilon=\chi_{-4}(P)$. Then the constant term of $E_{k+1/2,\rho_{2}\chi_{\varepsilon 4P}}^{\rho_{\mathbf{c}}\chi_{(\mathfrak{f}_{\rho_{\mathbf{r}}}/P)^{\vee}}}(z,s)\in\mathbf{M}_{k+1/2+s,s}(N,\rho)$ at a cusp $1/(4P)$ is in the form (\ref{cte}) with $c_{0}=(-1)^{k}\rho_{2}(-1)\chi_{-4}(P)$, and the constant terms at other cusps in $\mathcal{C}_{0}(N)$ are in the form (\ref{cte}) with $c_{0}=0$. The constant terms (\ref{cte}) with $\xi_{0}(s)\ne0$ possibly appear only at cusps $i/M\in\mathcal{C}_{0}(N)\ ((i,M)=1)$ with $\mathfrak{f}_{\rho_{\mathbf{c}}}|M$. \end{lem}
\begin{proof} (i) Let $b_{0},d_{0}$ be so that $A_{1/P}=\left(\,1\ \,b_{0}\atop P\ \,d_{0}\right)\in\mathrm{SL}_{2}(\mathbf{Z})$. Put $Q:=\mathfrak{f}_{\rho_{\mathbf{r}}}/P$. Then \begin{align}
&E_{k+1/2,\chi_{P^{\vee}}}^{\rho_{2\mathbf{c}}\chi_{\varepsilon4Q}}(z,s)|_{A_{1/P}}=(Pz+d_{0})^{-k-1/2}|Pz+d_{0}|^{-2s}E_{k+1/2,\chi_{P^{\vee}}}^{\rho_{2\mathbf{c}}\chi_{\varepsilon4Q}}(\tfrac{z+b_{0}}{Pz+d_{0}},s)\nonumber\\
=&(-1)^{k-1}\iota_{-P}+\sum_{P|c\in\mathbf{Z},(c/P,N/P)=1\atop{d\in\mathbf{Z},(c,d)=1\atop c+dP>0}}\chi_{P^{\vee}}(d)(\rho_{2\mathbf{c}}\chi_{\varepsilon4Q})(c/P)\left(\tfrac{d}{|c|}\right)\iota_{c}^{-1}\nonumber\\
&\hspace{5em}\times((c+dP)z+cb_{0}+dd_{0})^{-k-1/2}|(c+dP)z+cb_{0}+dd_{0}|^{-2s},\label{esc} \end{align}
which implies that $c_{0}=(-1)^{k-1}\iota_{-P}$ in (\ref{cte}) at the cusp $1/P$. In the series \linebreak$E_{k+1/2,\chi_{P^{\vee}}}^{\rho_{2\mathbf{c}}\chi_{\varepsilon4Q}}(z,s)|_{A_{i/M}}$ corresponding to (\ref{esc}) at all other cusps $i/M\in\mathcal{C}_{0}(N)$, the coefficients $c$ of $z$ in $(cz+d)^{-k-1/2}$ are always nonzero, and hence $c_{0}=0$.
An argument similar to that in the proof of Lemma \ref{lem:vac0} (iii) shows that $\xi_{0}(s)$ in the constant term (\ref{cte}) of the Fourier expansion at $i/M$ is $0$ unless $M$ satisfies the condition of the lemma.
The assertion (ii) is proved similarly. \end{proof}
\section{Theta series as Eisenstein series} In this section, we mainly consider Eisenstein series of weight at most $3/2$ with character $\rho$. Their relations to theta series are given when $N=4,\rho=\chi_{-4}^{k}$. \begin{prop} Assume that $k\in\mathbf{Z},\ge1,\,\rho\in(\mathbf{Z}/N)^{\ast}$ have the same parity. If $k\ge2$, or if $k=1$ and $\rho$ is a complex character, then $E_{k+1/2,\rho,N}(z,0)$ is in $\mathbf{M}_{k+1/2}(N,\rho)$. \end{prop} \begin{proof} If $k\ge2$, then the series $E_{k+1/2,\rho,N}(z,0)$ converges absolutely and uniformly on any compact subset of $\mathfrak{H}$, and hence it is holomorphic and it is in $\mathbf{M}_{k+1/2}(N,\rho)$.
Let $k=1$, and $\rho$ be complex. We may assume that $\rho,N$ satisfy (\ref{eqn:N}), since for $\rho'\in(\mathbf{Z}/N')^{\ast}$ satisfying (\ref{cond:M}) with $\rho=\rho'$ and $M=N'$, $E_{k+1/2,\rho',N'}(z,0)$ is written as $E_{k+1/2,\rho,N}(mz,0)$ for some $m\in\mathbf{N}$ as stated in the beginning of Section \ref{sect:CTEHIW}. Then $c_{1,s,\rho,N}(0)=0$, and hence the constant term of the Fourier expansion (\ref{feoe}) is $1$. Since $\rho$ is complex, the character $\psi_{n}$ of (\ref{eqn:psin}) is nontrivial for any $n\in\mathbf{Z},\ne0$. Then $L(1+2s,\psi_{n})$ appearing in (\ref{eqn:c2prime}) is finite at $s=0$, and hence $c_{1,s,\rho,N}(n)$ is finite for all $n\in\mathbf{Z}$. Since $w_{n}(y,k+1/2,s)$ tends to $0$ for $n\le0$ as $s\longrightarrow0$, the expansion (\ref{feoe}) gives a function holomorphic in $z$ at $s=0$. \end{proof}
In the rest of this section, we consider the Eisenstein series with $N$ of (\ref{eqn:N}) exclusively in the case that $\rho$ is real. Then $N=\mathfrak{e}_{\rho}'$. Though our main objective here is the case $k\le1$, we do not restrict $k$ to be $\le1$. When $k\le1$, $E_{k+1/2,\rho}(z,s)$ does not give a holomorphic Eisenstein series at $s=0$. Let us put \begin{align*} \mathscr{E}_{k+1/2,\rho}(z,s):=E_{k+1/2,\rho}(z,s)-c_{k,0,\rho,N}^{(2)}(0)c_{k,0,\rho,N}^{(\mathbf{r})}(0)E_{k+1/2}^{\rho}(z,s). \end{align*} Then \begin{align*} &\mathscr{E}_{k+1/2,\rho}(z,s)\\ =&1+\sum_{n=-\infty}^{\infty}\{c_{k,s,\rho,N}^{(2)}(n)c_{k,s,\rho,N}^{(\mathbf{r})}(n)-c_{k,0,\rho,N}^{(2)}(0)c_{k,0,\rho,N}^{(\mathbf{r})}(0)\}c_{k,s,\rho,N}''(n)w_{n}(y,k,s)\mathbf{e}(nx). \end{align*}
Let $k=1$. If $n$ is so that the character $\psi_{n}$ of (\ref{eqn:psin}) is not trivial, then $c_{1,s,\rho,N}''(n)$, $c_{1,s,\rho,N}(n)$ are finite at $s=0$ by (\ref{eqn:c2prime}), (\ref{eqn:dcmp-cprime}) and (\ref{eqn:dcmp-c}). The terms $c_{1,s,\rho,N}''(n),c_{1,s,\rho,N}(n)$ have a pole at $s=0$ only for $n$ in the form $n=-\mathfrak{f}_{\rho}m^{2}\ (m\in\mathbf{Z})$. The pole is of order one. It is checked that $c_{1,0,\rho,N}^{(2)}(0)=c_{1,0,\rho,N}^{(2)}(-\mathfrak{f}_{\rho}m^{2})$ for any $m\in\mathbf{Z}$, and that $f_{1,0,\rho,N}^{(\mathbf{r})}(-\mathfrak{f}_{\rho}m^{2},p)=(\rho_{2}\chi_{(\mathfrak{f}_{\mathbf{r}}/p)^{\vee}})(p)\iota_{p}^{-1}p^{-1/2}=f_{1,0,\rho,N}^{(\mathbf{r})}(0,p)$ for $p|\mathfrak{f}_{\rho_{\mathbf{r}}}$. Hence $c_{1,0,\rho,N}^{(\mathbf{r})}(-\mathfrak{f}_{\rho}m^{2})=c_{1,0,\rho,N}^{(\mathbf{r})}(0)$. On the other hand, $c_{1,s,\rho,N}^{(2)}(n)c_{1,s,\rho,N}^{(\mathbf{r})}(n)-c_{1,0,\rho,N}^{(2)}(0)c_{1,0,\rho,N}^{(\mathbf{r})}(0)$ has a zero at $s=0$ for such $n$. Thus $\mathscr{E}_{3/2,\rho}(z,0)$ is holomorphic since $w_{-m}(y,3/2,0)=0$ for $m\ge0$, and gives a modular form in $\mathbf{M}_{3/2}(N,\rho)$. The computation in the case where $k=1$ and $\rho$ is real is found in Pei \cite{Pei}.
We take $4$ as $N$, and $\chi_{-4}^{k}$ as $\rho$. Then $\rho_{\mathbf{r}}=\rho_{\mathbf{c}}=\mathbf{1}$. From Section \ref{sect:ESHIW} and from (\ref{feoe2}), we have \begin{align} &E_{k+1/2,\chi_{-4}^{k}}(z,s)\nonumber\\ =&1+\tfrac{(-1)^{k(k+1)/2}2^{-k+1-2s}}{2^{2k+4s}-1}\tfrac{\pi\Gamma(k-1/2+2s)}{\Gamma(s)\Gamma(k+1/2+s)}\tfrac{\zeta(2k-1+4s)}{\zeta(2k+4s)}y^{-k+1/2-2s}\nonumber\\
&+\sum_{n\ne0}\tfrac{L(k{+}2s,\widetilde{\chi}_{(-1)^{k}n})}{\zeta(2k{+}4s)}f_{k,s,\rho}^{(\mathbf{r})}(n,2)\prod_{2\ne p|n}f_{k,s,\rho}(n,p)w_{n}(y,k+1/2,s)\mathbf{e}(nx),\label{eqn:feehiw1}\\ &E_{k+1/2}^{\chi_{-4}^{k}}(z,s)\nonumber\\ =&\tfrac{({-}\sqrt{{-}1})^{k}(1{-}\sqrt{{-}1})(2^{2k-1+4s}-1)}{2^{k-2+2s}(2^{2k+4s}-1)}\tfrac{\pi\Gamma(k-1/2+2s)}{\Gamma(s)\Gamma(k+1/2+s)}\tfrac{\zeta(2k-1+4s)}{\zeta(2k+4s)}y^{-k+1/2-2s}+\sum_{n\ne0}\tfrac{L(k{+}2s,\widetilde{\chi}_{(-1)^{k}n})}{\zeta(2k{+}4s)}\nonumber\\
&\hspace{4em}\times\tfrac{1{-}\widetilde{\chi}_{(-1)^{k}n}(2)2^{-k-2s}}{1{-}2^{-2k-4s}}\prod_{2\ne p|n}f_{k,s,\rho}(n,p)w_{n}(y,k+1/2,s)\mathbf{e}(nx),\label{eqn:feehiw2} \end{align}
where $f_{k,s,\rho}$ is as in (\ref{deffksrho}) and $f_{k,s,\rho}^{(\mathbf{r})}$ is defined below (\ref{deffksrho}). Equations $E_{k+1/2,\chi_{-4}^{k}}(z,s)|_{\left(\,0\ -1\atop 1\ \ 0\right)}=2^{-2k-1-4s}E_{k+1/2}^{\chi_{-4}^{k}}(z/4,s),\ E_{k+1/2}^{\chi_{-4}^{k}}(z,s)|_{\left(\,0\ -1\atop 1\ \ 0\right)}=(-1)^{k-1}\sqrt{-1}E_{k+1/2,\chi_{-4}^{k}}(z/4,s)$ hold. By (\ref{eqn:feehiw1}), a simple computation gives the following lemma.
\begin{lem} Let $\alpha_{k}(n,2):=-(1+\widetilde{\chi}_{n}(2)2^{-k})^{-1}+(1-2^{-2k+1})^{-1}\{1-\widetilde{\chi}_{n}(2)2^{-k}+2^{(v_{2}(n)/2+1)(-2k+1)-k}(1-\widetilde{\chi}_{n}(2)2^{-k+1})\}$, or $-(1-2^{-2k})^{-1}+(1-2^{-2k+1})^{-1}$\linebreak $\times (1-2^{(v_{2}(n)/2+1)(-2k+1)})$, or $-(1-2^{-2k})^{-1}+(1-2^{-2k+1})^{-1}(1-2^{(v_{2}(n)-1)(-2k+1)/2})$ according as $2|v_{2}(n)$ and $2^{-v_{2}(n)}n\equiv(-1)^{k}\pmod{4}$, or $2|v_{2}(n)$ and $2^{-v_{2}(n)}n\equiv(-1)^{k}3\pmod{4}$, or $2\nmid v_{2}(n)$. Then we have for $k\ge1$ \begin{align} E_{k+1/2,\chi_{-4}^{k}}(z,0)=&1+\tfrac{(-1)^{k}2^{2k-1}\pi^{2k+1/2}}{(k-1)!\Gamma(k+1/2)\zeta(2k)}\sum_{n>0}\alpha_{k}(n,2) L(1{-}k,\widetilde{\chi}_{(-1)^{k}n})\nonumber\\
&\hspace{3em}\times(\mathfrak{f}_{\chi_{(-1)^{k}n}}^{-1}n)^{k-1/2}\prod_{2\ne p|n}f_{k,0,\chi_{-4}^{k}}(n,p)\mathbf{e}(nz),\label{eqn:feehiw1s0} \end{align} where there is the additional term $-\pi^{-1}y^{-1/2}-2\pi^{-1/2}\sum_{m=1}^{\infty}m(4\pi m^{2}y)^{-3/4}$\linebreak$\times W_{-3/4,1/4}(4\pi m^{2}y)\mathbf{e}(-m^{2}x)$ when $k=1$. For $k=0$, we have \begin{align} &E_{1/2,\mathbf{1}_{2}}(z,0)\nonumber\\ =&-\tfrac{\pi}{6\log 2}y^{1/2}+1+2\sum_{m=1}^{\infty}(1{-}2^{-v_{2}(m)-2})\mathbf{e}(m^{2}z)\nonumber\\
&+(2\log2)^{-1}\sum_{n>0}(1{-}\widetilde{\chi}_{n}(2))L(1,\widetilde{\chi}_{n})(\mathfrak{f}_{\chi_{n}}n^{-1})^{1/2}\prod_{2\ne p|n}f_{0,0,\mathbf{1}_{2}}(n,p)\mathbf{e}(nz)\nonumber\\
&+(2\log2)^{-1}\pi^{1/2}\sum_{n<0}(1{-}\widetilde{\chi}_{n}(2))L(0,\widetilde{\chi}_{n})|n|^{-1/2}\prod_{2\ne p|n}f_{0,0,\mathbf{1}_{2}}(n,p)\nonumber\\
&\mbox{\hspace{11em}}\times(4\pi|n|y)^{-1/4}W_{-1/4,1/4}(4\pi|n|y)\mathbf{e}(nx).\label{eqn:feehiw2s0} \end{align} \end{lem} We describe theta series $\theta(z),\theta(z)^{3},\theta(z)^{5},\theta(z)^{7}$ as Eisenstein series. We have $\mathscr{E}_{k+1/2,\chi_{-4}^{k}}(z,s)=1+\sum_{n=-\infty}^{\infty}\{c_{k,s,\chi_{-4}^{k},4}^{(2)}(n)-c_{k,0,\chi_{-4}^{k},4}^{(2)}(0)\}c_{k,s,\chi_{-4}^{k},4}''(n)w_{n}(y,k,s)$ $\times\mathbf{e}(nx)$. Let $k=0$. Then a direct computation shows that $\mathscr{E}_{1/2,\mathbf{1}_{2}}(z,0)$ is equal to $\theta(z)$. As shown above, $\mathscr{E}_{k+1/2,\chi_{-4}^{k}}(z,0)$ is a holomorphic modular form for $k\ge1$, and it is in $\mathbf{M}_{k+1/2}(4,\chi_{-4}^{k})$ with the value $1$ at $\sqrt{{-}1}\infty$ and with the value $2^{-k-1/2}\mathbf{e}(\tfrac{6k-1}{8})$ at $1$. Since the subspaces of $\mathbf{M}_{k+1/2}(4,\chi_{-4}^{k})$ consisting of such modular forms are one dimensional for $k=0,1,2,3$ and since the $\theta(z)^{2k+1}$ satisfy the same condition, we have the following proposition.
\begin{prop}(1) Let $N,\rho$ be as in (\ref{eqn:N}). Then $\mathscr{E}_{k+1/2,\rho,N}(z,0)$ is a holomorphic modular form in $\mathbf{M}_{k+1/2}(N,\rho)$ for $k\ge1$ with the same parity as $\rho$.
(2) If $k=0,1,2,3$, then \begin{align} \theta(z)^{2k+1}=\mathscr{E}_{k+1/2,\chi_{-4}^{k}}(z,0). \label{eqn:t-e} \end{align} \end{prop}
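As an illustration of the left-hand side of (\ref{eqn:t-e}), note that the $n$-th Fourier coefficient of $\theta(z)^{2k+1}$ is the number of representations of $n$ as a sum of $2k+1$ squares; for instance, a direct count of lattice points gives \begin{align*} \theta(z)^{3}=1+6\,\mathbf{e}(z)+12\,\mathbf{e}(2z)+8\,\mathbf{e}(3z)+6\,\mathbf{e}(4z)+24\,\mathbf{e}(5z)+\cdots, \end{align*} so that (\ref{eqn:t-e}) with $k=1$ expresses these representation numbers through the Fourier coefficients of $\mathscr{E}_{3/2,\chi_{-4}}(z,0)$.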
\begin{remark} The equality (\ref{eqn:t-e}) holds for any $k\ge0$ up to cusp forms. As for even powers of $\theta(z)$, the following equalities hold up to cusp forms: $\theta(z)^{2}=E_{1,\chi_{-4}}(z,0)$, and $\theta(z)^{2k}=E_{k,\chi_{-4}}(z,0)+2^{-k}\sqrt{{-}1}^{k}E_{k}^{\chi_{-4}}(z,0)$ for $k>1$ odd. For $k\ge2$ even, $\theta(z)^{2k}=\{(-1)^{k/2}2^{-k}E_{k}(z,0)-(-1)^{k/2}2^{-k}E_{k,\mathbf{1}_{2},2}(z,0)+E_{k,\mathbf{1}_{2},4}(z,0)\}$.
These equalities are exact if the weight is less than or equal to $4$. \end{remark}
\begin{cor} All the holomorphic Eisenstein series of weight $1/2$ on $\Gamma_{1}(N)$ are linear combinations of $\sum_{i:(\mathbf{Z}/\mathfrak{f}_{\omega})^{\times}}\omega(i)\mathscr{E}_{1/2,\mathbf{1}_{2},4}(tz+i/\mathfrak{f}_{\omega},0)$ where $\omega$ are primitive complex characters whose squares are also primitive, and where $t$ are natural numbers with $4(\mathfrak{f}_{\omega^{2}})^{2}t|N$. \end{cor}
\begin{proof} Let $\chi$ be the square of $\omega$. If both of $\omega$ and $\chi$ are primitive, then an equality $\mathfrak{f}_{\chi}=\mathfrak{f}_{\omega}/(2,\mathfrak{f}_{\omega})$ holds. By Serre and Stark \cite{Serre-Stark} Corollary 1 to Theorem A, the subspace in $\mathbf{M}_{1/2}(\Gamma_{1}(N))$ consisting of Eisenstein series is spanned by $\theta_{\chi}(tz):=\sum_{n=-\infty}^{\infty}\chi(n)\mathbf{e}(tn^{2}z)$ with primitive characters $\chi$ which are squares, and with $4\mathfrak{f}_{\chi}^{2}t|N$. From (\ref{eqn:t-e}) we see that $\theta_{\chi}(z)=\tau(\overline{\omega})^{-1}\sum_{i:(\mathbf{Z}/\mathfrak{f}_{\omega})^{\times}}\overline{\omega}(i)\theta(z+i/\mathfrak{f}_{\omega})=\tau(\overline{\omega})^{-1}\sum_{i:(\mathbf{Z}/\mathfrak{f}_{\omega})^{\times}}$ $\overline{\omega}(i)\mathscr{E}_{1/2,\mathbf{1}_{2},4}(z+i/\mathfrak{f}_{\omega},0)$, which shows our assertion. \end{proof}
\section{Rankin-Selberg method} Using the unfolding trick together with a real analytic Eisenstein series, we show that the functions of $s$ obtained from the constant terms of certain real analytic modular forms in $\mathcal{M}_{l,l'}(N,\rho)$, with $l,l'\in\tfrac{1}{2}\mathbf{Z}$, $l,l'\ge0$, and $\rho\in(\mathbf{Z}/N)^{\ast}$, have analytic continuation to the whole complex $s$-plane. From this, the analytic continuation of the $L$-function (\ref{eqn:lseries}) is proved in the case $l-l'\in\mathbf{Z}$. Further, certain special values of the $L$-function are expressed in terms of the scalar product, and a functional equation relating the $L$-function (\ref{eqn:lseries}) to a similarly defined $L$-function is derived.
Let $l\ge l'\ge0$. Let $N\in\mathbf{N}$ be such that $4|N$ if at least one of $l,l'$ is not integral. Put $k:=l-l'$ if $l-l'\in\mathbf{Z}$, and $k:=l-l'-1/2$ otherwise, and let $\rho$ be a Dirichlet character modulo $N$ with the same parity as $k$. We assume that $\rho,N$ satisfy (\ref{cond:M}) with $M=N$ if $l-l'$ is not integral. We consider a real analytic modular form $F(z)$ in $\mathcal{M}_{l,l'}(\Gamma_{1}(N))$ which satisfies $F(Az)=\overline{\rho}(d)(cz+d)^{l}(\overline{cz+d})^{l-k}F(z)$ for $A=\left(a\ \,b\atop c\ \,d\right)\in\Gamma_{0}(N)$ if $l-l'\in\mathbf{Z}$, or $F(Az)=\overline{\rho}(d)(cz+d)^{l}(\overline{cz+d})^{l-k}\overline{j}(A,z)^{-1}F(z)$ if $l-l'\not\in\mathbf{Z}$, $j(A,z)$ being the automorphy factor of $\theta(z)$ defined in the introduction. We assume that $F(z)$ has the Fourier expansion in the form \begin{align}\label{eqn:fearc}
F|_{A_{r}}(z)=P_{F}^{(r)}(y)+\sum_{n=-\infty}^{\infty}u_{n}^{(r)}(y)\mathbf{e}(nx/w^{(r)})\mbox{ \ with \ }P_{F}^{(r)}(y)=\sum_{j} a_{j}^{(r)}y^{q_{j}^{(r)}} \end{align} at each cusp $r\in\mathcal{C}_{0}(N)$, $A_{r}$ being as in (\ref{defAr}), where the summation $P_{F}^{(r)}(y)$ is a finite sum with $a_{j}^{(r)},q_{j}^{(r)}\in\mathbf{C}$ and all $u_{n}^{(r)}(y)$ are rapidly decreasing as $y\longrightarrow\infty$. We drop the notation $(r)$ from $P_{F}^{(r)},a_{j}^{(r)},q_{j}^{(r)},u_{n}^{(r)}$ when $r=1/N$ or equivalently $r=\sqrt{-1}\infty$. We define the Rankin-Selberg transform \begin{align} R(F,s):=\int_{0}^{\infty}y^{l+s-2}u_{0}(y)dy,\label{eqn:rst} \end{align} following Zagier \cite{Zagier}. In Theorem \ref{thm:analytic continuation}, we show that the integral converges for $s$ with sufficiently large $\Re s$. \begin{figure}
\caption{The disks $S_{a/c,T}$ tangent to the real axis at the points $a/c$, used in the truncation of the fundamental domain.}
\label{fig:fdomain}
\end{figure}
We denote by $Q(i/M)$, the set of rational numbers in $(-1/2,1/2]$ equivalent to $i/M$ under $\Gamma_{0}(N)$. \begin{lem}\label{lem:ub}
Let $q$ be the maximum of $0,\Re q_{j}^{(r)}\ (r\in\mathcal{C}_{0}(N))$. Then $|F(z)|=$\linebreak$O(y^{-q-(l+l')/2})$ as $y\longrightarrow+0$. \end{lem} \begin{proof}
The function $\psi(z):=y^{(l+l')/2}\mathrm{tr}_{\Gamma_{0}(N)/\mathrm{SL}_{2}(\mathbf{Z})}(|F(z)|)$ is an $\mathrm{SL}_{2}(\mathbf{Z})$-invariant function. Then $y^{-q-(l+l')/2}\psi(z)$ is bounded on the fundamental domain $\mathfrak{F}$ of $\mathrm{SL}_{2}(\mathbf{Z})$, and hence $$(\max_{A\in\mathrm{SL}_{2}(\mathbf{Z})}\Im(Az))^{-q-(l+l')/2}\psi(z)$$
is bounded on $\mathfrak{H}$. Since $\max_{A\in\mathrm{SL}_{2}(\mathbf{Z})}\Im(Az)\le \max\{y,y^{-1}\}$, we have $|F(z)|=O(y^{-q-(l+l')/2})$ as $y\longrightarrow+0$. \end{proof} We fix the notation for the rest of the section. We denote by $y^{s}+\xi^{(\sqrt{-1}\infty)}(s)$ $\times y^{-(l-l')+1-s}$ and $\xi^{(r)}(s)y^{-(l-l')+1-s}$, the constant terms of Fourier expansions with respect to $x$, of $y^{s}E_{l-l',\rho,N}(z,s)$ at $\sqrt{-1}\infty$ and at $r\in\mathcal{C}_{0}(N),\ne 1/N$ respectively. The function $\xi^{(\sqrt{-1}\infty)}(s)$ is also denoted by $\xi^{(1/N)}(s)$. For $T>0$, \begin{align} &h_{+}(T,s)=h_{+}^{(\sqrt{-1}\infty)}(T,s):=\sum_{j}\tfrac{a_{j}T^{q_{j}+l-1+s}}{q_{j}+l-1+s}\left(\rule{0cm}{1.em}\right.\!\!=\int_{0}^{T}y^{l+s}P_{F}^{(\sqrt{-1}\infty)}(y)\tfrac{dy}{y^{2}}\ (\Re s\gg 0)\left)\rule{0cm}{1.em}\right.,\nonumber\\ &h_{-}(T,s)=h_{-}^{(\sqrt{-1}\infty)}(T,s):=\sum_{j}\tfrac{a_{j}T^{q_{j}+l'-s}}{q_{j}+l'-s}\left(\rule{0cm}{1.em}\right.\!\!=-\int_{T}^{\infty}y^{l'+1-s}P_{F}^{(\sqrt{-1}\infty)}(y)\tfrac{dy}{y^{2}}\ (\Re s\gg 0)\left)\rule{0cm}{1.em}\right.,\nonumber\\ &h^{(r)}(T,s):=w^{(r)}\sum_{j}\tfrac{a_{j}^{(r)}T^{q_{j}^{(r)}+l'-s}}{q_{j}^{(r)}+l'-s}\left(\rule{0cm}{1.em}\right.\!\!=-w^{(r)}\int_{T}^{\infty}y^{l'+1-s}P_{F}^{(r)}(y)\tfrac{dy}{y^{2}}\ (\Re s\gg 0)\left)\rule{0cm}{1.em}\right.(r\ne\tfrac{1}{N}),\nonumber\\ &h(T,s)=h_{{+}}(T,s)+\overline{\xi^{(1/N)}(\overline{s})}h_{{-}}(T,s)+\sum_{r\in\mathcal{C}_{0}(N)-\{1/N\}}\overline{\xi^{(r)}(\overline{s})}h^{(r)}(T,s) \label{eqn:defHTS} \end{align} where $\Re s\gg0$ means that $\Re s$ is sufficiently large. If $\Re s\ll0$ (sufficiently small), then $h_{+}(T,s)=-\int_{T}^{\infty}y^{l+s}P_{F}^{(1/N)}(y)\tfrac{dy}{y^{2}},h_{-}(T,s)=\int_{0}^{T}y^{l'+1-s}P_{F}^{(1/N)}(y)\tfrac{dy}{y^{2}},h^{(r)}(T,s)=w^{(r)}\int_{0}^{T}y^{l'+1-s}P_{F}^{(r)}(y)\tfrac{dy}{y^{2}}$. \begin{thm}\label{thm:analytic continuation} Let $l,l',k,\xi^{(r)},h_{+},h_{-},h^{(r)},F,R(F,s)$ be as above. Then for $s$ with $\Re s$ sufficiently large, the integral (\ref{eqn:rst}) converges and $R(F,s)$ is defined. Further $R(F,s)$ extends meromorphically to the whole $s$ plane, and its possible poles are poles of the Eisenstein series $E_{l-l',\rho,N}(z,s)$ and $s=q_{j}^{(r)}+l'\ (r\in\mathcal{C}_{0}(N))$,\ $s=-q_{j}-l+1$. We have \begin{align} R(F,s)&=\lim_{T\to\infty}\left(\rule{0cm}{1.5em}\right.\int_{\mathfrak{F}_{T}(N)}y^{l+s}F(z)\overline{E_{l-l',\rho,N}(z,\overline{s})}\tfrac{dxdy}{y^2}-h_{+}(T,s)\left.\rule{0cm}{1.5em}\right) \label{eqn:RFs} \end{align} for $s$ with sufficiently large $\Re s$. For $s$ with sufficiently small $\Re s$ at which $E_{l-l',\rho,N}(z,s)$ is holomorphic, we have \begin{align} R(F,s)&=\lim_{T\to\infty}\left(\rule{0cm}{1.5em}\right. \int_{\mathfrak{F}_{T}(N)}y^{l+s}F(z)\overline{E_{l-l',\rho,N}(z,\overline{s})}\tfrac{dxdy}{y^2}\nonumber\\ &\hspace{4em}-\overline{\xi^{(1/N)}(\overline{s})}h_{{-}}(T,s)-\sum_{r\in\mathcal{C}_{0}(N)-\{1/N\}}\overline{\xi^{(r)}(\overline{s})}h^{(r)}(T,s)\left.\rule{0cm}{1.5em}\right). \label{eqn:RFs2} \end{align} \end{thm} \begin{proof}
Let \begin{align}
\mathfrak{D}:=\{z\in\mathfrak{H}\mid|x|\le1/2\}.\label{eqn:defD} \end{align}
We denote by $\mathfrak{F}_{T}\ (T>1)$, the truncated fundamental domain $\{z \in\mathfrak{D}\mid |z|\ge1,\, y\le T\}$, and by $\mathfrak{F}_{T}(N)$, the union $\bigcup_{A} A\mathfrak{F}_{T}\,(\subset\mathfrak{D})$ where $A$ runs over the representatives of $\mathrm{SL}_{2}(\mathbf{Z})$ modulo $\Gamma_{0}(N)$.
Let $a/c$ be a rational number with $-1/2<a/c\le 1/2,\,c>0,\,(a,c)=1$. For $T>1$, let $S_{a/c,T}$ be the open disk of radius $(2c^{2}T)^{-1}$ tangent to the real axis at $a/c$ (Fig. \ref{fig:fdomain}) where $S_{1/2,T}$ is exceptionally the union of the left half of the disk tangent to the real axis at $1/2$ and the right half of the disk tangent to the real axis at $-1/2$. The domain $\{z\in\mathfrak{H}\mid y>T\}$ and all $S_{a/c,T}$ but $S_{1/2,T}$ are mapped onto each other by matrices of $\mathrm{SL}_{2}(\mathbf{Z})$.
Put $\mathfrak{B}_{T}(n):=\{z\in\mathfrak{H}\mid -1/2\le x\le n-1/2,\,y>T\}$ for $n\in\mathbf{N}$. For $r\in\mathcal{C}_{0}(N)$, let $A_{r}$ be as in (\ref{defAr}) so that it sends $\mathfrak{B}_{T}(w^{(r)})$ onto $S_{r,T}\cap\mathfrak{F}(N)$. By Lemma \ref{lem:ub}, the integral $\int_{0}^{T}\int_{-1/2}^{1/2}y^{l+s}F(z)\frac{dxdy}{y^{2}}$ converges absolutely and uniformly for $s$ with $\Re s\gg0$. Let $\mathfrak{D}_{T}=\{z\in\mathfrak{D}\mid\Im z\le T\}-\cup_{r\in\mathcal{C}_{0}(N)}\cup_{a/c\in Q(r)}S_{a/c,T}$. Then by taking suitable representatives of $\Gamma_{0}(N)$ modulo $\Gamma_{\infty}=\{\pm\left(1\ \,n\atop 0\ \,1\right)\mid n\in\mathbf{Z}\}$, we have $\mathfrak{D}=\cup_{A:\Gamma_{\infty}\backslash\Gamma_{0}(N)}A\mathfrak{F}(N)$ and $\mathfrak{D}_{T}=\cup_{A:\Gamma_{\infty}\backslash\Gamma_{0}(N)}A\mathfrak{F}_{T}(N)$. Then \begin{align} \int_{\mathfrak{F}_{T}(N)}y^{l+s}F(z)\overline{E_{l-l',\rho,N}(z,\overline{s})}\frac{dxdy}{y^2}=\int_{\mathfrak{D}_{T}}y^{l+s}F(z)\frac{dxdy}{y^2}. \label{eqn:int-on-FT} \end{align} Indeed if $l-l'$ is integral, then the left hand side of (\ref{eqn:int-on-FT}) is equal to \begin{align*}
&\int_{\mathfrak{F}_{T}(N)}y^{l+s}F(z)\{1+\sum_{(c,d)=1,c>0\atop c\equiv0(\mathrm{mod}\,N)}\rho(d)(\overline{cz+d})^{-k}|cz+d|^{-2s}\}\frac{dxdy}{y^2}\\
=&\int_{\mathfrak{F}_{T}(N)}[y^{l+s}F(z)+\sum_{(c,d)=1,c>0\atop c\equiv0(\mathrm{mod}\,N)}\{\overline{\rho}(d)(\overline{cz+d})^{k}|cz+d|^{2s}\}^{-1}y^{l+s}F(z)]\frac{dxdy}{y^2}\\ =&\int_{\cup_{A:\Gamma_{\infty}\backslash\Gamma_{0}(N)}A\mathfrak{F}_{T}(N)}y^{l+s}F(z)\frac{dxdy}{y^2}=\int_{\mathfrak{D}_{T}}y^{l+s}F(z)\frac{dxdy}{y^2}. \end{align*} A similar argument also applies in the case that $l-l'$ is a half integer. Since $h_{+}(T,s)=\int_{0}^{T}\int_{-1/2}^{1/2}y^{l+s}P_{F}(y)\tfrac{dxdy}{y^{2}}$, from (\ref{eqn:int-on-FT}) we obtain \begin{align} &\int_{\mathfrak{F}_{T}(N)}y^{l+s}F(z)\overline{E_{l-l',\rho,N}(z,\overline{s})}\tfrac{dxdy}{y^2}\nonumber\\ =&\int_{\mathfrak{D}-\mathfrak{B}_{T}(1)}y^{l+s}F(z)\tfrac{dxdy}{y^2}-\int_{\cup_{r\in\mathcal{C}_{0}(N)}\cup_{a/c\in Q(r)}S_{a/c,T}}y^{l+s}F(z)\tfrac{dxdy}{y^2}\nonumber\\ =&\int_{0}^{T}y^{l+s-2}u_{0}(y)dy+h_{{+}}(T,s){-}\sum\limits_{r\in\mathcal{C}_{0}(N)}\sum\limits_{a/c\in Q(r)}\int_{S_{a/c,T}}y^{l+s}F(z)\tfrac{dxdy}{y^2},\label{eqn:int-cal} \end{align} and for $r\in\mathcal{C}_{0}(N),\ne 1/N$, \begin{align} &\int_{\mathfrak{F}(N)\cap S_{r,T}}y^{l+s}F(z)\overline{E_{l-l',\rho,N}(z,\overline{s})}\tfrac{dxdy}{y^2}=\int_{\cup_{a/c\in Q(r)}S_{a/c,T}}y^{l+s}F(z)\tfrac{dxdy}{y^2}\nonumber\\ =&\sum_{a/c\in Q(r)}\int_{S_{a/c,T}}y^{l+s}F(z)\tfrac{dxdy}{y^2}.\label{eqn:int-on-S} \end{align}
We have $E_{l-l',\rho,N}(z,s)|_{A_{r}}=O(y^{1-2s})$ as $y\longrightarrow\infty$ for $r\in\mathcal{C}_{0}(N),\ne 1/N$, and $E_{l-l',\rho,N}(z,s)-1=O(y^{1-2s})$. We take $s$ so that $\Re s>\max\{q+l'+1,q+l\}$. Then $y^{l+s}F|_{A_{r}}(z)\overline{E_{l-l',\rho,N}(z,\overline{s})}|_{A_{r}}$ is integrable on $\mathfrak{B}_{T}(w^{(r)})$, and $y^{l+s}F(z)$ $\times \{\overline{E_{l-l',\rho,N}(z,\overline{s})}-1\}$ is integrable on $\mathfrak{B}_{T}(1)$. Then the integral (\ref{eqn:int-on-S}) is equal to $\int_{\mathfrak{B}_{T}(w^{(r)})}y^{l+s}F|_{A_{r}}(z)\overline{E_{l-l',\rho,N}(z,\overline{s})}|_{A_{r}}\frac{dxdy}{y^2}$. Since the equality\linebreak $\cup_{A:\Gamma_{\infty}\backslash(\Gamma_{0}(N)-\Gamma_{\infty})}A\mathfrak{B}_{T}(1)=\cup_{a/c\in Q(1/N)}S_{a/c,T}$ holds for a suitable choice of representatives, we have \begin{align*} &\int_{\mathfrak{B}_{T}(1)}y^{l+s}F(z)\{\overline{E_{l-l',\rho,N}(z,\overline{s})}-1\}\tfrac{dxdy}{y^2}\\
=&\int_{\mathfrak{B}_{T}(1)}y^{l+s}F(z)\sum_{(c,d)=1,c>0\atop c\equiv0(\mathrm{mod}\,N)}\rho(d)(\overline{cz+d})^{-k}|cz+d|^{-2s}\frac{dxdy}{y^2}\\ =&\sum_{a/c\in Q(1/N)}\int_{S_{a/c,T}}y^{l+s}F(z)\tfrac{dxdy}{y^2}. \end{align*} If $\Re s\gg0$, then we obtain from (\ref{eqn:int-cal}), \begin{align} &R(F,s)-\int_{T}^{\infty}y^{l+s-2}u_{0}(y)dy=\int_{0}^{T}y^{l+s-2}u_{0}(y)dy\nonumber\\ =&\int_{\mathfrak{F}_{T}(N)}y^{l+s}F(z)\overline{E_{l-l',\rho,N}(z,\overline{s})}\tfrac{dxdy}{y^2}+\int_{\mathfrak{B}_{T}(1)}y^{l+s}F(z)\{\overline{E_{l-l',\rho,N}(z,\overline{s})}-1\}\tfrac{dxdy}{y^2}\nonumber\\
&+\sum_{r\in\mathcal{C}_{0}(N)-\{1/N\}}\int_{\mathfrak{B}_{T}(w^{(r)})}y^{l+s}F|_{A_{r}}(z)\overline{E_{l-l',\rho,N}(z,\overline{s})}|_{A_{r}}\tfrac{dxdy}{y^2}-h_{+}(T,s). \label{eqn:RFs3} \end{align} Thus the integral $\int_{0}^{T}y^{l+s-2}u_{0}(y)dy$ converges, and since $y^{l+s-2}u_{0}(y)$ is rapidly decreasing as $y\longrightarrow\infty$ by our assumption, the integral (\ref{eqn:rst}) converges for $\Re s\gg0$.
Noting that $\int_{T}^{\infty}y^{l+s-2}u_{0}(y)dy+\int_{\mathfrak{B}_{T}(1)}y^{l+s}F(z)\{\overline{E_{l-l',\rho,N}(z,\overline{s})}-1\}\tfrac{dxdy}{y^2}=$\linebreak$\int_{\mathfrak{B}_{T}(1)}\{y^{l+s}F(z)\overline{E_{l-l',\rho,N}(z,\overline{s})}-(y^{l+s}+\overline{\xi^{(1/N)}(\overline{s})}y^{l'+1-s})P_{F}(y)\}\tfrac{dxdy}{y^2}$, we obtain from (\ref{eqn:RFs3}), \begin{align} R(F,s)=&\int_{\mathfrak{F}_{T}(N)}y^{l+s}F(z)\overline{E_{l-l',\rho,N}(z,\overline{s})}\tfrac{dxdy}{y^2}\nonumber\\ &+\int_{T}^{\infty}\!\!\int_{-1/2}^{1/2}\{y^{l+s}F(z)\overline{E_{l-l',\rho,N}(z,\overline{s})}-(y^{l+s}+\overline{\xi^{(1/N)}(\overline{s})}y^{l'+1-s})P_{F}(y)\}\tfrac{dxdy}{y^2}\nonumber\\
&+\sum_{r\in\mathcal{C}_{0}(N)-\{1/N\}}\int_{T}^{\infty}\!\!\int_{-1/2}^{w^{(r)}-1/2}\{y^{l+s}F|_{A_{r}}(z)\overline{E_{l-l',\rho,N}(z,\overline{s})}|_{A_{r}}\nonumber\\ &\hspace{6em}-\overline{\xi^{(r)}(\overline{s})}y^{l'{+}1{-}s}P_{F}^{(r)}(y)\}\tfrac{dxdy}{y^2}-h(T,s) \label{eqn:RFs4} \end{align} with $h(T,s)$ of (\ref{eqn:defHTS}) since $h(T,s)=h_{{+}}(T,s)-\int_{T}^{\infty}\!\int_{-1/2}^{1/2}\overline{\xi^{(1/N)}(\overline{s})}y^{l'+1-s}P_{F}(y)\frac{dxdy}{y^{2}}-\sum_{r\in\mathcal{C}_{0}(N)-\{1/N\}}$ $\int_{T}^{\infty}\!\int_{-1/2}^{w^{(r)}-1/2}\overline{\xi^{(r)}(\overline{s})}y^{l'{+}1{-}s}P_{F}^{(r)}(y)\frac{dxdy}{y^{2}}$.
By our assumption that the second term of (\ref{eqn:fearc}) is rapidly decreasing as $y\longrightarrow\infty$, and by Lemma \ref{lem:eisi} and Lemma \ref{lem:eishi}, the integrands of the second and third terms of (\ref{eqn:RFs4}) are rapidly decreasing. The equality (\ref{eqn:RFs4}) is proved for $s$ with $\Re s\gg0$. However the right hand side is a meromorphic function on the whole $s$ plane since the first integral is over a compact set, the integrands of the other integrals are rapidly decreasing, and $h(T,s)$ is a meromorphic function on the $s$ plane. Thus $R(F,s)$ is meromorphic on the $s$ plane, and $R(F,s)$ is holomorphic at $s$ if both $E_{l-l',\rho,N}(z,s)$ and $h(T,s)$ are holomorphic at $s$. This shows the second statement of the theorem.
In the right hand side of (\ref{eqn:RFs4}), the second and the third terms tend to $0$ as $T\longrightarrow\infty$, and $h(T,s)-h_{+}(T,s)$ also tends to $0$ as $T\longrightarrow\infty$ for $s$ with $\Re s\gg0$. This shows (\ref{eqn:RFs}). The equality (\ref{eqn:RFs2}) is proved similarly. \end{proof} \begin{cor}\label{cor:int-eis} (i) For $s$ with sufficiently large $\Re s$, there holds the equality \begin{align} R(F,s)&=\int_{\mathfrak{F}(N)}\{y^{l+s}F(z)\overline{E_{l-l',\rho,N}(z,\overline{s})}-\mathbf{E}_{0}(z,s)\}\tfrac{dxdy}{y^2} \label{eqn:RFs5} \end{align} for $\mathbf{E}_{0}(z,s):=\sum_{j}a_{j}E_{\Gamma_{0}(N)}(z,l+q_{j}+s)\in\mathcal{M}_{0,0}(N)$.
(ii) Let the notation be as in the theorem. For $s$ with $\Re s\ll0$ at which $E_{l-l',\rho,N}(z,s)$ is holomorphic, there holds \begin{align*} R(F,s)&=\int_{\mathfrak{F}(N)}\{y^{l+s}F(z)\overline{E_{l-l',\rho,N}(z,\overline{s})}-\mathbf{E}_{0}(z,s)\}\tfrac{dxdy}{y^{2}} \end{align*} for $\mathbf{E}_{0}(z,s):=\overline{\xi^{(1/N)}(\overline{s})}\sum_{j}a_{j}E_{\Gamma_{0}(N)}(z,q_{j}{+}l'{+}3{-}s)+\sum_{r\in\mathcal{C}_{0}(N){-}\{1/N\}}\overline{\xi^{(r)}(\overline{s})}$\linebreak$\times E_{\Gamma_{0}(N),r}(z,q_{j}^{(r)}{+}l'{+}3{-}s)\in\mathcal{M}_{0,0}(N)$.
(iii) Assume that ${E_{l-l',\rho,N}(z,s)}$ is holomorphic in $s$ at $s=s_{0}$. Let $\widetilde{h}(T,s)$ denote the sum of those terms of (\ref{eqn:defHTS}) whose powers of $T$ at $s=s_{0}$ are nonzero with nonnegative real parts. Let $\tfrac{C_{0}}{s-s_{0}}T^{s-s_{0}}-\tfrac{\alpha(s)}{s-s_{0}}T^{-s+s_{0}}$ be the sum of those terms of (\ref{eqn:defHTS}) whose powers of $T$ vanish at $s=s_{0}$, where $C_{0}$ is a constant. Then \begin{align}
&\left(\rule{0cm}{1.2em}\right.R(F,s){+}\frac{C_{0}{-}\alpha(s_{0})}{s-s_{0}}-\frac{d}{ds}\alpha(s_{0})\left)\rule{0cm}{1.2em}\right.|_{s=s_{0}}\nonumber\\ \hspace*{-.3em}=&\lim_{T\to\infty}\left(\rule{0cm}{1.2em}\right.\int_{\mathfrak{F}_{T}(N)}y^{l{+}s}F(z)\overline{E_{l{-}l',\rho,N}(z,\overline{s})}\tfrac{dxdy}{y^2}-(C_{0}{+}\alpha(s_{0}))\log T{-}\widetilde{h}(T,s_{0})\left)\rule{0cm}{1.2em}\right..\label{eqn:r-i-rel} \end{align} If $C_{0}=\alpha(s)=0$, then $R(F,s)$ is holomorphic at $s=s_{0}$. \end{cor}
\begin{proof} (i) We put $H(z,s):=y^{l+s}F(z)\overline{E_{l-l',\rho,N}(z,\overline{s})}-\sum_{j}a_{j}E_{\Gamma_{0}(N)}(z,l+q_{j}+s)$. Then $H(z,s)=O(y^{l'+1+q-s}),\ H(z,s)|_{A_{r}}=O(y^{l'+1+q-s})$ as $y\longrightarrow\infty$ where $q$ is as in Lemma \ref{lem:ub}. Hence the integral on the right hand side of (\ref{eqn:RFs5}) converges for $\Re s\gg0$. We apply the argument in the theorem to $E_{\Gamma_{0}(N)}(z,l+q_{j}+s)$. Since the term $u_{0}(y)$ of $E_{\Gamma_{0}(N)}(z,l+q_{j}+s)$ is $0$, we obtain from (\ref{eqn:RFs3}), \begin{align*} 0=&\int_{\mathfrak{F}_{T}(N)}E_{\Gamma_{0}(N)}(z,l{+}q_{j}{+}s)\tfrac{dxdy}{y^2}+\int_{\mathfrak{B}_{T}(1)}\{E_{\Gamma_{0}(N)}(z,l{+}q_{j}{+}s)-y^{l+q_{j}+s}\}\tfrac{dxdy}{y^2}\\
&+\sum_{r\in\mathcal{C}_{0}(N)-\{1/N\}}\int_{\mathfrak{B}_{T}(w^{(r)})}E_{\Gamma_{0}(N)}(z,l{+}q_{j}{+}s)|_{A_{r}}\tfrac{dxdy}{y^2}-\tfrac{T^{q_{j}+l-1+s}}{q_{j}+l-1+s}. \end{align*} Then by this and by the equality (\ref{eqn:RFs3}), $R(F,s)$ is equal to \begin{align} &\int_{\mathfrak{F}_{T}(N)}H(z,s)\tfrac{dxdy}{y^2}+\int_{T}^{\infty}\!\!\int_{-1/2}^{1/2}\{H(z,s)+Q(y,s)\}\tfrac{dxdy}{y^2}\nonumber\\
&+\sum_{r\in\mathcal{C}_{0}(N)-\{1/N\}}\int_{T}^{\infty}\!\!\int_{-1/2}^{w^{(r)}-1/2}H(z,s)|_{A_{r}}\tfrac{dxdy}{y^2}+Q'(T,s)\label{eqn:st} \end{align} where $Q(y,s)$ is a linear combination of powers of $y$, and $Q'(T,s)$ is a linear combination of powers of $T$. It is easily checked that the real parts of powers of terms in $Q(y,s)$ or in $Q'(T,s)$ are all negative if $\Re s\gg0$. The first term of (\ref{eqn:st}) tends to the right hand side of (\ref{eqn:RFs5}) as $T\longrightarrow\infty$ and the other terms tend to $0$. This shows the equality (\ref{eqn:RFs5}).
The assertion (ii) is proved similarly.
(iii) In (\ref{eqn:RFs4}), the first integral is holomorphic in $s$ at $s=s_{0}$ since it is over the compact set, and the other integrals are also since integrands are rapidly decreasing. The coefficients of $h(T,s)$ of (\ref{eqn:RFs4}) in $T$ as well as $\alpha(s)$ are holomorphic at $s=s_{0}$ from our assumption that ${E_{l-l',\rho,N}(z,s)}$ is holomorphic at $s=s_{0}$. If $C_{0}=\alpha(s)=0$, then $R(F,s)$ is obviously holomorphic at $s=s_{0}$. By (\ref{eqn:RFs4}), we can write $R(F,s)$ as $R(F,s)=\int_{\mathfrak{F}_{T}(N)}y^{l+s}F(z)\overline{E_{l-l',\rho,N}(z,\overline{s})}\tfrac{dxdy}{y^2}+g_{T}(z,s)-h(T,s)$ for a holomorphic function $g_{T}(z,s)$ of $s$ rapidly decreasing as $T\longrightarrow\infty$ which is a sum of integrals in (\ref{eqn:RFs4}) other than the first one. If we put $n(T,s):=h(T,s)-\frac{C_{0}T^{s-s_{0}}}{s-s_{0}}+\frac{\alpha(s)T^{-s+s_{0}}}{s-s_{0}}-\widetilde{h}(T,s)$, then $\lim_{T\to\infty}n(T,s_{0})=0$ since the powers of $T$ of terms in $n(T,s_{0})$ have negative real parts. By the Taylor expansions $T^{s-s_{0}}=1+(s-s_{0})\log T+O((s-s_{0})^{2}),\,T^{-s+s_{0}}=1-(s-s_{0})\log T+O((s-s_{0})^{2})$ at $s=s_{0}$, we have $\{R(F,s){+}\tfrac{C_{0}{-}\alpha(s_{0})}{s-s_{0}}-\tfrac{d}{ds}\alpha(s_{0})\}|_{s=s_{0}}=\int_{\mathfrak{F}_{T}(N)}y^{l+s_{0}}F(z)\overline{E_{l-l',\rho,N}(z,\overline{s}_{0})}\tfrac{dxdy}{y^2}+g_{T}(z,s_{0})-(C_{0}{+}\alpha(s_{0}))\log T-\widetilde{h}(T,s_{0})-n(T,s_{0})$. Taking the limit as $T\longrightarrow\infty$ of the right hand side, the equality (\ref{eqn:r-i-rel}) follows. \end{proof}
\begin{thm}\label{thm:functional equation} Assume that $F(z)$ of Theorem \ref{thm:analytic continuation} is in $\mathcal{M}_{l,l-k}(N,\rho)$ with $k\in\mathbf{Z},\ge0$ and with $\rho\in(\mathbf{Z}/N)^{\ast}$ where $\rho$ and $k$ have the same parity. Put $\widetilde{F}(z):=F|_{S_{N}}(z)=(N^{1/2}z)^{-k}|N^{1/2}z|^{-2(l-k)}F(-1/(Nz))\in \mathcal{M}_{l,l-k}(N,\overline{\rho})$ with $S_{N}:=\mbox{\tiny$\left(\begin{array}{@{}c@{\,}c@{}}0&-N^{-1/2}\\N^{1/2}&0\end{array}\right)$}$. Then $R(F,s)$ defined in (\ref{eqn:rst}) has the meromorphic continuation to the whole complex plane, and satisfies the functional equation \begin{align*} R(F,-k+1-s)
=\pi^{-1/2}U_{k,\rho}(s)\sum_{0<P|\mathfrak{e}_{\rho}\mathfrak{f}_{\rho}^{-1}}U_{k,\rho,P}(s)R(\mathrm{tr}_{\Gamma_{0}(N)/\Gamma_{0}(\mathfrak{f}_{\rho}P),\rho}(\widetilde{F}),s), \end{align*}
where $\mathrm{tr}_{\Gamma_{0}(N)/\Gamma_{0}(M),\rho}(\widetilde{F})$ denotes $\sum\limits_{A:\Gamma_{0}(N)/\Gamma_{0}(M)}\rho(d)\widetilde{F}|_{A}(z)$, $d$ being the $(2,2)$ entry of $A$, and where $U_{k,\rho}(s)$ and $U_{k,\rho,P}(s)$ are as in (\ref{eqn:Uk1}) and (\ref{eqn:Uk2}) respectively. \end{thm} \begin{proof} By (\ref{eqn:felu}) and by Corollary \ref{cor:int-eis} (ii), we have \begin{align} &(-1)^{k}\pi^{1/2}U_{k,\rho}(s)^{-1}R(F,-k+1-s)\nonumber\\
=&\int_{\mathfrak{F}(N)}\{y^{l}F(z)\overline{z}^{-k}\sum_{0<P|\mathfrak{e}_{\rho}\mathfrak{f}_{\rho}^{-1}}\prod_{p|P}(1-\widetilde{\rho}(p)p^{k+2s})\varphi(\mathfrak{e}_{\rho}\mathfrak{f}_{\rho}^{-1}P^{-1})\nonumber\\ &\hspace{10em}\times\Im(-\tfrac{1}{Nz})^{s}\overline{E_{k,\overline{\widetilde{\rho}}\mathbf{1}_{P},\mathfrak{f}_{\rho}P}(-\tfrac{1}{Nz},\overline{s})}-\mathbf{E}_{0}(z,s)\}\tfrac{dxdy}{y^2}\label{eqn:int-eis-h} \end{align} for a linear combination $\mathbf{E}_{0}(z,s)$ of Eisenstein series of weight $(0,0)$ if $\Re s\gg 0$ and $E_{k,\rho,N}(z,s)$ is holomorphic at $s$. The integrand of (\ref{eqn:int-eis-h}) takes the value $0$ at each cusp, and hence its transformation by the matrix $S_{N}$ also takes the value $0$ at each cusp. Then (\ref{eqn:int-eis-h}) is equal to \begin{align*}
&\int_{\mathfrak{F}(N)}\{(-1)^{k}y^{l+s}\widetilde{F}(z)\sum_{0<P|\mathfrak{e}_{\rho}\mathfrak{f}_{\rho}^{-1}}\prod_{p|P}(1-\widetilde{\rho}(p)p^{k+2s})\\ &\hspace{7em}\times\varphi(\mathfrak{e}_{\rho}\mathfrak{f}_{\rho}^{-1}P^{-1})\overline{E_{k,\overline{\widetilde{\rho}\mathbf{1}_{P}},\mathfrak{f}_{\rho}P}(z,\overline{s})}-\widetilde{\mathbf{E}}_{0}(z,s)\}\tfrac{dxdy}{y^2} \end{align*} where $\widetilde{\mathbf{E}}_{0}(z,s)=\mathbf{E}_{0}(-\frac{1}{Nz},s)$. Hence \begin{align*} &(-1)^{k}\pi^{1/2}U_{k,\rho}(s)^{-1}R(F,-k+1-s)\nonumber\\
=&(-1)^{k}\sum_{0<P|\mathfrak{e}_{\rho}\mathfrak{f}_{\rho}^{-1}}\prod_{p|P}(1-\widetilde{\rho}(p)p^{k+2s})\varphi(\mathfrak{e}_{\rho}\mathfrak{f}_{\rho}^{-1}P^{-1})R(\mathrm{tr}_{\Gamma_{0}(N)/\Gamma_{0}(\mathfrak{f}_{\rho}P)}(\widetilde{F}),s), \end{align*} which shows our assertion. \end{proof}
\begin{cor}\label{cor:lseries} Let $k\in\mathbf{Z},\ge0$ and let $l\in\frac{1}{2}\mathbf{Z},\, l>0,\,l\ge k$. Let $f,g$ be holomorphic modular forms for $\Gamma_{0}(N)$ of weight $l$ and of weight $l-k$ with characters respectively. We assume that $f\overline{g}\in\mathcal{M}_{l,l-k}(N,\rho)$ for $\rho\in(\mathbf{Z}/N)^{\ast}$ with the same parity as $k$.
(i) Then $L(s;f,g)$ defined in (\ref{eqn:lseries}) converges at least if $\Re s>\max\{2l-k-1,1/2\}$, and extends meromorphically to the whole $s$-plane. Let $\widetilde{f\overline{g}}(z):=(f\overline{g})|_{S_{N}}(z)\in\mathcal{M}_{l,l-k}(N,\overline{\rho})$ with $S_{N}:=\mbox{\tiny$\left(\begin{array}{@{}c@{\,}c@{}}0&-N^{-1/2}\\N^{1/2}&0\end{array}\right)$}$. For $M\in\mathbf{N}$ with $\mathfrak{f}_{\rho}|M|N$, put \linebreak$\mathrm{tr}_{\Gamma_{0}(N)/\Gamma_{0}(M),\rho}(\widetilde{f\overline{g}}):=\sum_{A:\Gamma_{0}(N)/\Gamma_{0}(M)}$ $\rho(d)\widetilde{f\overline{g}}|_{A}(z)$, $d$ being the $(2,2)$ entry of $A$. Let $L(s;\mathrm{tr}_{\Gamma_{0}(N)/\Gamma_{0}(M),\rho}(\widetilde{f\overline{g}}))$ $:=\sum_{n=1}^{\infty}\frac{c_{n}^{(M)}}{n^{s}}$ if $\mathrm{tr}_{\Gamma_{0}(N)/\Gamma_{0}(M),\rho}(\widetilde{f\overline{g}})$ has $\sum_{n=0}^{\infty}c_{n}^{(M)}\times$ $e^{-4\pi ny}$ as the constant term of its Fourier expansion with respect to $x$. Then we have a functional equation \begin{align}
L(l-k-s;f,g)=&\tfrac{2^{-2k+2-4s}\pi^{-k+1/2-2s}\Gamma(l{-}1{+}s)}{\Gamma(l{-}k{-}s)}U_{k,\rho}(s)\nonumber\\&\hspace{1em}\times\sum_{0<P|\mathfrak{e}_{\rho}\mathfrak{f}_{\rho}^{-1}}U_{k,\rho,P}(s)L(l-1+s;\mathrm{tr}_{\Gamma_{0}(N)/\Gamma_{0}(M),\rho}(\widetilde{f\overline{g}})).\label{eqn:fn-eq3} \end{align}
(ii) Assume that $\rho$ is primitive with $N=\mathfrak{f}_{\rho}$, or $k\ge1$. Let $P_{y^{l}f\overline{g}}^{(r)}(y)$ be as in (\ref{eqn:dfP}). If $P_{y^{l}f\overline{g}}^{(\sqrt{-1}\infty)}(y)$ does not have a term containing $y$ to the power of $1$, and if $P_{y^{l}f\overline{g}}^{(r)}(y)$ does not have a term containing $y^{k}$ for any $r\in\mathcal{C}_{0}(N)$, then there holds \begin{align} \langle f(z),g(z)E_{k,\rho,N}(z,0) \rangle_{\Gamma_{0}(N)}=(4\pi)^{-l+1}\Gamma(l-1)L(l-1;f,g)\label{eqn:l-psp} \end{align}
with the scalar product defined in (\ref{eqn:psp}). (Note that the right hand side is the value of the analytic continuation of $\Gamma(l-1+s)L(l-1+s;f,g)$ at $s=0$.) Suppose otherwise. If $C_{0}$ is a coefficient of $y$ in $P_{y^{l}f\overline{g}}^{(\sqrt{-1}\infty)}(y)$, and if $\alpha(s)$ is the coefficient of $y^{k}$ in $\sum_{r\in\mathcal{C}_{0}(N)}\overline{\xi^{(r)}(\overline{s})}w^{(r)}P_{y^{l}f\overline{g}}^{(r)}(y)$, then the equation (\ref{eqn:l-psp}) holds replacing the right hand side by $\{(4\pi)^{-l+1-s}\Gamma(l{-}1{+}s)L(l{-}1{+}s;f,g)+s^{-1}(C_{0}{-}\alpha(0))-\frac{d}{ds}\alpha(0)\}|_{s=0}$.
(iii) Let $k=0,\,\rho=\mathbf{1}_{N}$. If $P_{y^{l}f\overline{g}}^{(\sqrt{-1}\infty)}(y)$ does not have a nonzero constant term, and if $P_{y^{l}f\overline{g}}^{(r)}(y)$ does not have a term containing $y$ to the power of $1$ for any $r\in\mathcal{C}_{0}(N)$, then the equality (\ref{eqn:psp-formula2}) holds.
If $C_{0}$ is a constant term of $P_{y^{l}f\overline{g}}^{(\sqrt{-1}\infty)}(y)$, and if $\alpha(s)$ is the coefficient of $y$ in $\sum_{r\in\mathcal{C}_{0}(N)}\overline{\xi^{(r)}(\overline{s})}w^{(r)}P_{y^{l}f\overline{g}}^{(r)}(y)$, then an equality \begin{align*}
\langle f,g\rangle_{\Gamma_{0}(N)}-C_{0}+\tfrac{1}{2}\tfrac{d^{2}}{ds^{2}}(s{-}1)\alpha(s)|_{s=1}=\mathrm{Res}_{s=l}\tfrac{\Gamma(s)N\prod_{p|N}(1+\tfrac{1}{p})}{3\cdot 4^{s}\pi^{s-1}}L(s;f,g) \end{align*} holds. \end{cor} \begin{proof} (i) We take $f(z)\overline{g}(z)$ as $F(z)$ in the theorem and then $R(F,s)=(4\pi)^{-l+1-s}\Gamma(l-1+s)L(l-1+s;f,g)$. The other assertion follows from the theorem.
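In more detail (a routine verification; we take the Fourier expansions $f(z)=\sum_{n\ge0}a_{n}\mathbf{e}(nz)$, $g(z)=\sum_{n\ge0}b_{n}\mathbf{e}(nz)$, and assume the normalization $L(s;f,g)=\sum_{n\ge1}a_{n}\overline{b_{n}}n^{-s}$ of (\ref{eqn:lseries})): the constant term of $f\overline{g}$ with respect to $x$ is $a_{0}\overline{b_{0}}+\sum_{n\ge1}a_{n}\overline{b_{n}}e^{-4\pi ny}$, the first summand being absorbed into $P_{F}(y)$, so that
\begin{align*}
R(f\overline{g},s)=\sum_{n\ge1}a_{n}\overline{b_{n}}\int_{0}^{\infty}y^{l+s-2}e^{-4\pi ny}dy=(4\pi)^{-l+1-s}\Gamma(l-1+s)L(l-1+s;f,g).
\end{align*}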
(ii) Our assumption implies that the Eisenstein series $E_{k,\rho}(z,s)$ is holomorphic in $s$ at $s=0$. Then the assertion follows from the identity (\ref{eqn:r-i-rel}) for $s_{0}=0$. Indeed $(C_{0}+\alpha(0))\log T+\widetilde{h}(T,0)$ of the right hand side of (\ref{eqn:r-i-rel}) is equal to $\sum_{r\in\mathcal{C}_{0}(N)}Q_{y^{l}f\overline{gE_{k,\rho}(z,\overline{0})}}^{(r)}(T)$, and by (\ref{eqn:psp}), the right hand side of (\ref{eqn:r-i-rel}) equals\linebreak $\langle f(z),g(z)E_{k,\rho,N}(z,0) \rangle_{\Gamma_{0}(N)}$.
(iii) The Eisenstein series $y^{s}E_{0,\mathbf{1}_{N},N}(z,s)$ has a simple pole at $s=1$ with residue $C=3(\pi N\prod_{p|N}(1+p^{-1}))^{-1}$, and $C=\mathrm{Res}_{s=1}\xi^{(r)}(s)$ for all cusps $r\in\mathcal{C}_{0}(N)$. So $(s-1)\xi^{(r)}(s)$ tends to $C$ as $s\longrightarrow1$. By (\ref{eqn:RFs4}), \begin{align} &(s-1)R(f\overline{g},s)\nonumber\\ =&\int_{\mathfrak{F}_{T}(N)}y^{l+s}f(z)\overline{g(z)}(s-1)\overline{E_{0,\mathbf{1}_{N},N}(z,\overline{s})}\tfrac{dxdy}{y^2}+g_{T}(z,s)-(s-1)h(T,s) \label{eqn:RFs4-2} \end{align} where $g_{T}(z,s)$ is $s-1$ times the sum of all integrals but the first one in (\ref{eqn:RFs4}). Then $g_{T}(z,s)$ is holomorphic in $s$ at $s=1$ and $g_{T}(z,1)$ is rapidly decreasing as $T\longrightarrow\infty$. The integral on the right hand side of (\ref{eqn:RFs4-2}) tends to $C\int_{\mathfrak{F}_{T}(N)}y^{l}(f\overline{g})(z)\tfrac{dxdy}{y^2}$ as $s\longrightarrow1$ since $(s-1)E_{0,\mathbf{1}_{N},N}(z,s)$ converges uniformly to $C$ on the compact set $\mathfrak{F}_{T}(N)$. If $C_{0}=0$ and $\alpha(s)=0$, then we see from (\ref{eqn:defHTS}) that $(s-1)h(T,s)$ tends to $C\sum_{r}Q_{y^{l}f\overline{g}}^{(r)}(T)+n(T)$ where the powers of $T$ of terms in $n(T)$ have only negative real parts. Then the right hand side of (\ref{eqn:RFs4-2}) turns out to be $C\{\int_{\mathfrak{F}_{T}(N)}y^{l}(f\overline{g})(z)\tfrac{dxdy}{y^2}-\sum_{r}Q_{y^{l}f\overline{g}}^{(r)}(T)\}+g_{T}(z,1)+n(T)$. Taking the limit as $T\longrightarrow\infty$, we obtain (\ref{eqn:psp-formula2}).
Suppose that $C_{0}\ne0$ or $\alpha(s)\ne0$. Then $(s-1)h(T,s)$ has the additional term $C_{0}T^{-1+s}-\alpha(s)T^{1-s}$. Let $a$ be the coefficient of $y$ in $\sum_{r}P_{y^{l}f\overline{g}}^{(r)}(y)$. Then $\alpha(s)$ has $aC$ as the residue at $s=1$, since all $\xi^{(r)}(s)$ have the common residue $C$. Let $\alpha(s)=\frac{aC}{s-1}+c_{0}+O(s-1)$ be the Laurent expansion at $s=1$ with $c_{0}=\frac{1}{2}\frac{d^{2}}{ds^{2}}(s{-}1)\alpha(s)|_{s=1}$. Then $C_{0}T^{-1+s}-\alpha(s)T^{1-s}=-\tfrac{aC}{s-1}+C_{0}+aC\log T-c_{0}+O(s-1)$. Hence $(s-1)h(T,s)+\tfrac{aC}{s-1}-C_{0}+c_{0}$ tends to $C\sum_{r}Q_{y^{l}f\overline{g}}^{(r)}(T)+n(T)$ as $s\longrightarrow1$, and the same argument as above leads to an equality \begin{align*}
\tfrac{\Gamma(l-1+s)N\prod_{p|N}(1+\tfrac{1}{p})}{3\cdot 4^{l-1+s}\pi^{l-2+s}}L(l{-}1{+}s;f,g)
=\tfrac{a}{(s-1)^2}+\tfrac{\langle f,g\rangle_{\Gamma_{0}(N)}-C_{0}+\frac{1}{2}\frac{d^{2}}{ds^{2}}(s{-}1)\alpha(s)|_{s=1}}{s-1}+O(1) \end{align*} which holds near $s=1$. The last assertion of (iii) follows from this. \end{proof} \begin{remark}\label{rem:lseries} Let $f,g\in\mathbf{M}_{1}(N,\rho)$, and let $a_{0}^{(r)},b_{0}^{(r)}$ be the $0$-th Fourier coefficients of $f,g$ at a cusp $r$ respectively. Let $a:=\sum_{r\in\mathcal{C}_{0}(N)}a_{0}^{(r)}b_{0}^{(r)}w^{(r)}$ and $\alpha(s):=\sum_{r\in\mathcal{C}_{0}(N)}a_{0}^{(r)}b_{0}^{(r)} w^{(r)}\xi^{(r)}(s)$ where $w^{(r)}$ is as in (\ref{eqn:w-cusp}), and $\xi^{(r)}(s)$ is as in (\ref{eqn:ct0}). Corollary \ref{cor:lseries} (iii) and its proof imply that \begin{align*}
\tfrac{\Gamma(s)N\prod_{p|N}(1+\tfrac{1}{p})}{3\cdot 4^{s}\pi^{-1+s}}L(s;f,g)=\tfrac{a}{(s-1)^2}+\tfrac{\langle f,g\rangle_{\Gamma_{0}(N)}+\frac{1}{2}\frac{d^{2}}{ds^{2}}(s{-}1)\alpha(s)|_{s=1}}{s-1}+O(1) \end{align*}
near $s=1$. Hence if $a\ne0$, then $L(s;f,g)$ has a pole of order $2$ at $s=1$. If $a=0$, then (\ref{eqn:psp-formula2}) holds by adding $\frac{1}{2}\frac{d^{2}}{ds^{2}}(s{-}1)\alpha(s)|_{s=1}$ to the left hand side, and if the product $fg$ is a cusp form, then $\alpha(s)=0$ and (\ref{eqn:psp-formula2}) holds. \end{remark}
\section{The case of half integral weight} In this section, the analytic continuation of the $L$-function (\ref{eqn:lseries}) is proved in the case $l-l'\in\tfrac{1}{2}+\mathbf{Z}$. The properties of the $L$-function (\ref{eqn:lseries}) established in the preceding section are also proved in the half integral weight case. We first obtain the functional equations of Eisenstein series of half integral weight, and then argue as in the proof of Theorem \ref{thm:functional equation}. \begin{prop}\label{prop:fn-eq} Let $\rho,\widetilde{\rho},N$ be as in (\ref{eqn:N2}) with a decomposition $\rho=\rho_{2}\rho_{\mathbf{c}}\rho_{\mathbf{r}}$ as in (\ref{eqn:dcmp-rho}). Let $k$ be a nonnegative integer with the same parity as $\rho$, and let $U_{k+1/2,\rho}(s),U_{k+1/2,\rho,R}(s)$ be as in (\ref{eqn:cffe}),(\ref{eqn:cffe2}) respectively. Then $U_{k+1/2,\rho}({-}k{+}1/2{-}s)^{-1}$ $\times y^{{-}k{+}1/2{-}2s}E_{k+1/2,\rho}(z,{-}k{+}1/2{-}s)$ is equal to \begin{align*}
\sum_{P|\mathfrak{f}_{\rho_{\mathbf{r}}}}[&(-1)^{k}\sqrt{-1}\iota_{P}U_{k+1/2,\rho,P}({-}k{+}1/2{-}s)E_{k+1/2,\chi_{P^{\vee}}}^{\rho_{2\mathbf{c}}\chi_{(\varepsilon_{P}4\mathfrak{f}_{\rho_{\mathbf{r}}}/P)}}(z,s)\\ &+(-1)^{k}\{\rho_{2}(-1)\chi_{{-}4}(P){+}\sqrt{{-}1}\}U_{k+1/2,\rho,4P}({-}k{+}1/2{-}s)E_{k+1/2,\rho_{2}\chi_{\varepsilon_{P}'4P}}^{\rho_{\mathbf{c}}\chi_{(\mathfrak{f}_{\rho_{\mathbf{r}}}/P)^{\vee}}}(z,s)] \end{align*} if $\rho_{2}$ is real, or to \begin{align*}
\sum_{P|\mathfrak{f}_{\rho_{\mathbf{r}}}}(-1)^{k}\sqrt{-1}\iota_{P}2^{2}\mathfrak{f}_{\rho_{2}}^{-1}U_{k+1/2,\rho,P}({-}k{+}1/2{-}s)E_{k+1/2,\chi_{P^{\vee}}}^{\rho_{2\mathbf{c}}\chi_{(\varepsilon_{P}4\mathfrak{f}_{\rho_{\mathbf{r}}}/P)}}(z,s) \end{align*} if $\rho_{2}$ is complex where $\varepsilon_{P}:=\chi_{-4}(\mathfrak{f}_{\rho_{\mathbf{r}}}/P),\varepsilon_{P}':=\chi_{-4}(P)\in\{\pm1\}$. \end{prop}
\begin{proof} We prove the assertion in the case $\rho_{2}$ is complex. In other cases the proof is similar. Lemma \ref{lem:vac} gives the constant terms of Fourier expansions with respect to $x$, of $y^{-k+1/2-2s}E_{k+1/2,\rho}(z,-k+1/2-s)$ at cusps, that is, $U_{k+1/2,\rho}(-k+1/2-s)^{-1}y^{-k+1/2-2s}E_{k+1/2,\rho}(z,-k+1/2-s)$ has the constant term $2^{2}\mathfrak{f}_{\rho_{2}}^{-1}U_{k+1/2,\rho,P}(-k+1/2-s)$ at $1/P$ for $P|\mathfrak{f}_{\rho_{\mathbf{r}}}$, and the constant term $y^{-k+1/2-2s}$ at $1/N$, and $0$ at other cusps. We put $G_{0}(z;\rho,s):=U_{k+1/2,\rho}({-}k{+}1/2{-}s)^{-1}y^{{-}k{+}1/2{-}2s} E_{k+1/2,\rho}(z,-k+1/2-s)-\sum_{P|\mathfrak{f}_{\rho_{\mathbf{r}}}}(-1)^{k}\sqrt{-1}\iota_{P}2^{2}\mathfrak{f}_{\rho_{2}}^{-1}U_{k+1/2,\rho,P}({-}k{+}1/2{-}s)E_{k+1/2,\chi_{P^{\vee}}}^{2\rho_{\mathbf{c}}\chi_{(\varepsilon_{P}4\mathfrak{f}_{\rho_{\mathbf{r}}}/P)}}(z,s)$. Our purpose is to show $G_{0}(z;\rho,s)=0$. It has the constant terms in the form (\ref{cte}) with $c_{0}=0$ at all the cusps by Lemma \ref{lem:vac2}. By the construction of $G_{0}(z;\rho,s)$ and by Lemma \ref{lem:vac} and Lemma \ref{lem:vac2}, $G_{0}(z;\rho,s)$ has the constant term (\ref{cte}) with $\xi_{0}(s)\ne0$ at cusp $1/M\in\mathcal{C}_{0}(N)$ only if $\mathfrak{f}_{\rho_{2\mathbf{c}}}|M$.
Assume that $\Re s>3/4$. Then $y^{s}G_{0}(z;\rho,s)\in\mathcal{M}_{k+1/2,0}(N,\rho)$ and its absolute value has the order $O(y^{-k+1/2-\Re s})$ as $y\longrightarrow\infty $ at all cusps. Since $y^{(k+1/2)/2}|y^{s}$ $\times G_{0}(z;\rho,s)|$ is $\Gamma_{0}(N)$-invariant, and vanishes at each cusp, in particular we have $(|y^{s}G_{0}(z;\rho,s)|)|_{A_{r}}=o(y^{-(k+1/2)/2})$ as $y\longrightarrow0+$ for any cusp $r$, $A_{r}$ being as in (\ref{defAr}).
Let $\xi_{0}(s)y^{-k+1/2-s}$ be the constant term of the Fourier expansion of $y^{s}G_{0}(z;\rho,s)\in\mathcal{M}_{k+1/2,0}(N,\rho)$ at the cusp $\sqrt{-1}\infty$. First we show that $\xi_{0}(s)$ vanishes. We take a complex variable $t$ so that $-k/2+3/4<\Re t<\Re s$. The series $E_{k+1/2,\rho}(z,t)$ of (\ref{eqn:eshiw}) converges absolutely and locally uniformly on $\mathfrak{H}$ since $k+1/2+2\Re t>2$, and the unfolding trick applies to $E_{k+1/2,\rho}(z,t)$. The product $y^{k+1/2}y^{s}G_{0}(z;\rho,s)\overline{y^{\overline{t}}E_{k+1/2,\rho}(z,\overline{t})}$ $\in\mathcal{M}_{0,0}(N)$ has order $O(y^{1-\Re s+\Re t})$ as $y\longrightarrow\infty$ at all cusps. Since $1-\Re s+\Re t<1$, its integral over the fundamental domain converges. Then by using the unfolding trick, there holds \begin{align*} \int_{\mathfrak{F}(N)}y^{k+1/2}y^{s}G_{0}(z;\rho,s)\overline{y^{\overline{t}}E_{k+1/2,\rho}(z,\overline{t})}\tfrac{dxdy}{y^{2}}=\int_{\mathfrak{D}}y^{k-3/2+s+t}G_{0}(z;\rho,s)dxdy \end{align*}
with $\mathfrak{D}$ as in (\ref{eqn:defD}), where the right hand side has meaning since $|y^{k-3/2+s+t}G_{0}(z;\rho,s)|=o(y^{k/2{-}7/4{+}\Re t})$ as $y\longrightarrow 0+$ and $k/2-7/4+\Re t>-1$. However \begin{align*} \int_{\mathfrak{D}}y^{k-3/2+s+t}G_{0}(z;\rho,s)dxdy=\int_{0}^{\infty}\xi_{0}(s)y^{-1+t-s}dy, \end{align*} and $\xi_{0}(s)$ must be $0$ for the integral to converge. Since the function $\xi_{0}(s)$ is meromorphic on the entire plane and since $\xi_{0}(s)=0$ for $s$ with $\Re s>\Re t$, $\xi_{0}(s)$ vanishes identically.
Next we prove that the constant term of $y^{s}G_{0}(z;\rho,s)$ at a cusp $1/(N/P)$ with $P|\mathfrak{f}_{\rho_{\mathbf{r}}}$ vanishes. An Eisenstein series $E_{k+1/2,\rho_{2\mathbf{c}}\chi_{(\varepsilon_{P}4\mathfrak{f}_{\rho_{\mathbf{r}}}/P)}}^{\chi_{P^{\vee}}}(z,s)$ has a constant term (\ref{cte}) with $c_{0}\ne0$ at the cusp $1/(N/P)$. Since $y^{k+t+1/2}y^{s}G_{0}(z;\rho,s)$\linebreak$\times\overline{E_{k+1/2,\rho_{2\mathbf{c}}\chi_{(\varepsilon_{P}4\mathfrak{f}_{\rho_{\mathbf{r}}}/P)}}^{\chi_{P^{\vee}}}(z,\overline{t})}$ is in $\mathcal{M}_{0,0}(N)$, $(y^{k+t+1/2}y^{s}G_{0}(z;\rho,s)\times$\linebreak$\overline{E_{k+1/2,\rho_{2\mathbf{c}}\chi_{(\varepsilon_{P}4\mathfrak{f}_{\rho_{\mathbf{r}}}/P)}}^{\chi_{P^{\vee}}}(z,\overline{t})})|_{A_{1/(N/P)}}$ is in $\mathcal{M}_{0,0}(A_{1/(N/P)}^{-1}\Gamma_{0}(N)A_{1/(N/P)})$. Here \linebreak$E_{k+1/2,\rho_{2\mathbf{c}}\chi_{(\varepsilon_{P}4\mathfrak{f}_{\rho_{\mathbf{r}}}/P)}}^{\chi_{P^{\vee}}}(z,t)|_{A_{1/(N/P)}}$ is in the form $\sum_{c,d}\eta_{c,d}(cz+d)^{-k-1/2}|cz+d|^{-2t}$ with constants $\eta_{c,d}$ of absolute value $1$ or $0$ where $c,d$ runs over the set of the second rows of matrices in $A_{1/(N/P)}^{-1}\Gamma_{0}(N)A_{1/(N/P)}$ with $c>0$, or with $c=0,d>0$. Then we can apply the unfolding trick, and we have \begin{align*} &\int_{(A_{1/(N/P)}^{-1}\Gamma_{0}(N)A_{1/(N/P)})\backslash\mathfrak{H}}y^{k+1/2}y^{s}G_{0}(z;\rho,s)\overline{y^{\overline{t}}E_{k+1/2,\rho_{2\mathbf{c}}\chi_{(\varepsilon_{P}4\mathfrak{f}_{\rho_{\mathbf{r}}}/P)}}^{\chi_{P^{\vee}}}(z,\overline{t})}\tfrac{dxdy}{y^{2}}\\
=&\int_{0}^{\infty}\int_{0}^{w_{1/(N/P)}}y^{k-3/2+s+t}G_{0}(z;\rho,s)|_{A_{1/(N/P)}}dxdy, \end{align*} $w_{1/(N/P)}$ denoting the width of the cusp $1/(N/P)$ of $\Gamma_{0}(N)$. Then by the same argument as in the case of the cusp $1/N$, it follows that the constant term of $G_{0}(z;\rho,s)$ at the cusp $1/(N/P)$ vanishes.
Now the constant terms of the Fourier expansions of $G_{0}(z;\rho,s)$ at all the cusps in $\mathcal{C}_{0}(N)$ are $0$, and hence $G_{0}(z;\rho,s)$ is rapidly decreasing at all the cusps, namely, it is cuspidal. Then the integrals \begin{align*}
&\int_{\mathfrak{F}(N)}y^{k+1/2}y^{s}G_{0}(z;\rho,s)\overline{y^{\overline{s}}E_{k+1/2,\chi_{P^{\vee}}}^{\rho_{2\mathbf{c}}\chi_{(\varepsilon_{P}4\mathfrak{f}_{\rho_{\mathbf{r}}}/P)}}(z,\overline{s})}\tfrac{dxdy}{y^{2}}\hspace{1em}(P|\mathfrak{f}_{\rho_{\mathbf{r}}}),\\ &\int_{\mathfrak{F}(N)}y^{k+1/2}y^{s}G_{0}(z;\rho,s)\overline{y^{-k+1/2-2\overline{s}}E_{k+1/2,\rho}(z,{-}k{+}1/2{-}\overline{s})}\tfrac{dxdy}{y^{2}} \end{align*}
make sense for $s\in\mathbf{C}$. Applying the unfolding trick to the first integral at a cusp $1/P$ for $\Re s\gg0$, and to the second integral at $1/N$ for $\Re s\ll0$, it follows that they are $0$. This implies that $\int_{\mathfrak{F}(N)}y^{k+1/2}y^{s}G_{0}(z;\rho,s)\overline{y^{\overline{s}}E(z,\overline{s})}\tfrac{dxdy}{y^{2}}=0$. For $s$ real, the integrand turns out to be $y^{k+1/2}|y^{s}G_{0}(z;\rho,s)|^{2}(\ge0)$, and since the integral is $0$, $G_{0}(z;\rho,s)$ must be $0$ for $s\in\mathbf{R}$ and hence for $s\in\mathbf{C}$. \end{proof}
For example, we have functional equations \begin{align} &(1{+}({-}1)^{k}\sqrt{-1})\tfrac{1-2^{2k-2+4s}}{2^{2k-1+4s}}\tfrac{\zeta(2k-1+4s)}{\zeta(2k+4s)}y^{s}w_{0}(y,k{+}1/2,s)E_{k+1/2,\chi_{-4}^{k}}(z,{-}k{+}1/2{-}s)\nonumber\\ =&y^{s}E_{k+1/2,\chi_{-4}^{k}}(z,s)+(1{+}({-}1)^{k}\sqrt{{-}1})\tfrac{1-2^{2k-1+4s}}{2^{2k+4s}}y^{s}E_{k+1/2}^{\chi_{-4}^{k}}(z,s), \label{eqn:exfe}\\ &(1{+}({-}1)^{k}\sqrt{-1})(1{-}2^{2k-2+4s})\tfrac{\zeta(2k-1+4s)}{\zeta(2k+4s)}y^{s}w_{0}(y,k{+}1/2,s)E_{k+1/2}^{\chi_{-4}^{k}}(z,{-}k{+}1/2{-}s)\nonumber\\ =&(1{-}({-}1)^{k}\sqrt{{-}1})2(1{-}2^{2k-1+4s})y^{s}E_{k+1/2,\chi_{-4}^{k}}(z,s)+y^{s}E_{k+1/2}^{\chi_{-4}^{k}}(z,s).\nonumber \end{align}
As is stated in the beginning of Section \ref{sect:CTEHIW}, an Eisenstein series $E_{k+1/2,\rho',N'}(z,s)$ for $\rho',N'$ satisfying (\ref{cond:M}) with $\rho=\rho',M=N'$, but not satisfying (\ref{eqn:N2}), is written as $E_{k+1/2,\rho}(mz,s)$ for some $m\in\mathbf{N}$ and for $\rho$ satisfying (\ref{eqn:N2}). Then the functional equation of $E_{k+1/2,\rho'}(z,s)$ under $s\longmapsto-k+1/2-s$ is obtained from that of $E_{k+1/2,\rho}(z,s)$. We note that if $\Gamma_{0}(N)\supset\Gamma_{0}(N')$, then an Eisenstein series on $\Gamma_{0}(N)$ is written as a finite sum of Eisenstein series on $\Gamma_{0}(N')$ because $A_{r}^{-1}\Gamma_{0}(N)$ is a finite union of cosets $A_{r'}^{-1}\Gamma_{0}(N')$ for suitable $A_{r'}$. For example, replacing $z$ by $2z$ in (\ref{eqn:exfe}), we have a functional equation \begin{align*} &(1{+}({-}1)^{k}\sqrt{-1})\tfrac{1-2^{2k-2+4s}}{2^{3k-3/2+6s}}\tfrac{\zeta(2k-1+4s)}{\zeta(2k+4s)}y^{s}w_{0}(y,k{+}1/2,s)E_{k+1/2,\chi_{(-1)^{k}8}}(z,{-}k{+}1/2{-}s)\\ =&y^{s}E_{k+1/2,\chi_{(-1)^{k}8}}(z,s)+(1{+}({-}1)^{k}\sqrt{{-}1})\tfrac{1-2^{2k-1+4s}}{2^{3k-1/2+6s}}y^{s}E_{k+1/2}^{\chi_{(-1)^{k}8}}(z,s)\\ &+(1{+}({-}1)^{k}\sqrt{{-}1})\tfrac{1-2^{2k-1+4s}}{2^{2k+4s}}y^{s}\sum_{2c,d}\chi_{-4}(c)^{k}
\chi_{c^{\vee}}(d)\iota_{c}^{-1}(2cz+d)^{k-1/2}|2cz+d|^{-2s} \end{align*} where in the summation $2c,d$ runs over the set of second rows of $A_{1/2}^{-1}\Gamma_{0}(8)$ with $2c>0$, and the sum is in $\mathcal{M}_{2k+1+s,s}(8,\chi_{(-1)^{k}8})$.
\begin{thm}Assume that $F(z)$ of Theorem \ref{thm:analytic continuation} is in $\mathcal{M}_{l,l-k-1/2}(N,\rho)$ with $k\in\mathbf{Z},\ge0$ and with $\rho\in(\mathbf{Z}/N)^{\ast}$ where $\rho$ and $k$ have the same parity and where $\rho,N$ satisfy (\ref{cond:M}) with $M=N$. Put \begin{align*} R_{r}(F,s):=w^{(r)}\int_{0}^{\infty}y^{l+s-2}u_{0}^{(r)}(y)dy \end{align*} for $u_{0}^{(r)}(y)$ in (\ref{eqn:fearc}). Let $U({-}k{+}1/2{-}s)^{-1}y^{-k+1/2-s}E_{k+1/2,\rho,N}(z,{-}k{+}1/2{-}s)=$ $\sum_{r\in\mathcal{C}_{0}(N)}$ $U_{k+1/2,\rho}^{(r)}({-}k{+}1/2{-}s))y^{s}E_{k+1/2}^{(r)}(z,\rho,s)$ be the functional equation where\linebreak $E_{k+1/2}^{(r)}(z,\rho,s)\in\mathcal{M}_{k+1/2+s,s}(N,\rho)$ is the Eisenstein series which is a sum on second rows $(c,d)$ of $A_{r}^{-1}\Gamma_{0}(N)$ with $c>0$ or with $c=0,d>0$, and it is normalized to have the constant term (\ref{cte}) with $c_{0}=1$ at a cusp $r$. Then $R(F,s)(=R_{1/N}(F,s))$ defined in (\ref{eqn:rst}) has the meromorphic continuation to the whole complex plane, and satisfies the functional equation \begin{align} R(F,{-}k{+}1/2{-}s)=U({-}k{+}1/2{-}s)\sum_{r\in\mathcal{C}_{0}(N)}U_{k+1/2,\rho}^{(r)}({-}k{+}1/2{-}s)R_{r}(F,s).\label{eqn:int-r} \end{align}
Suppose that $\rho,N$ satisfy (\ref{eqn:N2}), and let $U_{k+1/2,\rho},U_{k+1/2,\rho,R}$ be as in Proposition \ref{prop:fn-eq}. Then there holds the functional equation \begin{align*} &R(F,-k+1/2-s)\\
=&U_{k+1/2,\rho}({-}k{+}1/2{-}s)\sum_{P|\mathfrak{f}_{\rho_{\mathbf{r}}}}[U_{k+1/2,\rho,P}({-}k{+}1/2{-}s)R_{1/P}(F,s)\\ &\hspace{3em}+\{1+\rho_{2}(-1)\chi_{{-}4}(P)\sqrt{{-}1}\}U_{k+1/2,\rho,4P}({-}k{+}1/2{-}s)R_{1/(4P)}(F,s)] \end{align*} if $\rho_{2}$ is real, and \begin{align*}
&R(F,{-}k{+}1/2{-}s)=2^{2}\mathfrak{f}_{\rho_{2}}^{-1}U_{k+1/2,\rho}({-}k{+}1/2{-}s)\sum_{P|\mathfrak{f}_{\rho_{\mathbf{r}}}}U_{k+1/2,\rho,P}({-}k{+}1/2{-}s)R_{1/P}(F,s) \end{align*} if $\rho_{2}$ is complex. \end{thm} \begin{proof} By Corollary \ref{cor:int-eis} (ii), we have \begin{align*} &U_{k+1/2,\rho}({-}k{+}1/2{-}s)^{-1}R(F,{-}k{+}1/2{-}s)\\ =&\int_{\mathfrak{F}(N)}\{y^{l+s}F(z)\sum_{r\in\mathcal{C}_{0}(N)}U_{k+1/2,\rho}^{(r)}({-}k{+}1/2{-}s)E_{k+1/2}^{(r)}(z,\rho,s)-\mathbf{E}_{0}(z,s)\}\tfrac{dxdy}{y^2} \end{align*} for $\Re s\gg0$, $\mathbf{E}_{0}(z,s)$ being a linear combination of Eisenstein series of weight $(0,0)$. Let $\mathbf{E}_{0}(z,s)=\sum_{r\in\mathcal{C}_{0}(N)}E_{0}^{(r)}(z,s)$ where $E_{0}^{(r)}(z,s)$ are the sums of Eisenstein series vanishing at all the cusps in $\mathcal{C}_{0}(N)$ but $r$. Then $U_{k+1/2,\rho}({-}k{+}1/2{-}s)^{-1}R(F,-k+1/2-s)$ is equal to \begin{align*} &\sum_{r\in\mathcal{C}_{0}(N)}\int_{\mathfrak{F}(N)}\{U_{k+1/2,\rho}^{(r)}({-}k{+}1/2{-}s)y^{l+s}F(z)E_{k+1/2}^{(r)}(z,\rho,s)-E_{0}^{(r)}(z,s)\}\tfrac{dxdy}{y^2}\\
=&\sum_{r\in\mathcal{C}_{0}(N)}\int_{(A_{r}^{-1}\Gamma_{0}(N)A_{r})\backslash\mathfrak{H}}\{U_{k+1/2,\rho}^{(r)}({-}k{+}1/2{-}s)y^{l+s}F(z)|_{A_{r}}E_{k+1/2}^{(r)}(z,\rho,s)|_{A_{r}}\\
&\hspace{25em}-E_{0}^{(r)}(z,s)|_{A_{r}}\}\tfrac{dxdy}{y^{2}}, \end{align*}
where $y^{l+s}F(z)|_{A_{r}}E_{k+1/2}^{(r)}(z,\rho,s)|_{A_{r}}$ is in $\mathcal{M}_{0,0}(A_{r}^{-1}\Gamma_{0}(N)A_{r})$. In the summation defining the Eisenstein series $E_{k+1/2}^{(r)}(z,\rho,s)|_{A_{r}}$, $(c,d)$ runs over the set of the second rows of matrices in $A_{r}^{-1}\Gamma_{0}(N)A_{r}$ with $c>0$, or with $c=0,d>0$. Then we can apply the unfolding trick to the integration, and we obtain (\ref{eqn:int-r}). \end{proof}
\begin{cor}\label{cor:lseries2} Let $k\in\mathbf{Z},\ge0$ and let $l\in\frac{1}{2}\mathbf{Z},l\ge k+1/2$. Let $f,g$ be holomorphic modular forms for $\Gamma_{0}(N)$ of weight $l$ and of weight $l-k-1/2$ with characters respectively. We assume that $f\overline{g}\in\mathcal{M}_{l,l-k-1/2}(N,\rho)$ for $\rho\in(\mathbf{Z}/N)^{\ast}$ with the same parity as $k$ and where $\rho,N$ satisfy (\ref{cond:M}) with $M=N$.
(i) Then $L(s;f,g)$ defined in (\ref{eqn:lseries}) converges at least if $\Re s>\max\{2l{-}k{-}1/2,1/2\}$, and extends meromorphically to the whole $s$-plane. Under the notation of the theorem there holds the functional equation \begin{align} &L(l{-}k{-}1/2{-}s;f,g)\nonumber\\ =&\tfrac{(4\pi)^{{-}k{+}1/2{-}2s}\Gamma(l{-}1{+}s)U({-}k{+}1/2{-}s)}{\Gamma(l{-}k{-}1/2{-}s)}\nonumber\\
&\times\sum_{r\in\mathcal{C}_{0}(N)}{w^{(r)}}^{l+s}U_{k+1/2,\rho}^{(r)}({-}k{+}1/2{-}s)L(l{-}1{+}s;f|_{A_{r}},g|_{A_{r}}), \label{eqn:fn-eq4} \end{align} $w^{(r)}$ being the width of a cusp $r$ in $\mathcal{C}_{0}(N)$. Let $\rho,N$ be as in (\ref{eqn:N2}). Then there holds the functional equation \begin{align*} &L(l{-}k{-}1/2{-}s;f,g)\\
=&\tfrac{(4\pi)^{{-}k{+}1/2{-}2s}\Gamma(l{-}1{+}s)U_{k+1/2,\rho}({-}k{+}1/2{-}s)}{\Gamma(l{-}k{-}1/2{-}s)}\sum_{P|\mathfrak{f}_{\rho_{\mathbf{r}}}}[(N/P)^{l+s}U_{k+1/2,\rho,P}({-}k{+}1/2{-}s)\\
&\times L(l{-}1{+}s;f|_{A_{1/P}},g|_{A_{1/P}})+\{1{+}\rho_{2}({-}1)\chi_{{-}4}(P)\sqrt{{-}1}\}(N/(4P))^{l+s}\\
&\times U_{k+1/2,\rho,4P}({-}k{+}1/2{-}s)L(l{-}1{+}s;f|_{A_{1/(4P)}},g|_{A_{1/(4P)}})] \end{align*} if $\rho_{2}=\mathbf{1}_{2},\chi_{-4}$, and \begin{align*} &L(l{-}k{-}1/2{-}s;f,g)\\ =&\tfrac{2^{2}\mathfrak{f}_{\rho_{2}}^{-1}(4\pi)^{{-}k{+}1/2{-}2s}\Gamma(l{-}1{+}s)U_{k+1/2,\rho}({-}k{+}1/2{-}s)}{\Gamma(l{-}k{-}1/2{-}s)}\\
&\times\sum_{P|\mathfrak{f}_{\rho_{\mathbf{r}}}}(N/P)^{l+s}U_{k+1/2,\rho,P}({-}k{+}1/2{-}s)L(l{-}1{+}s;f|_{A_{1/P}},g|_{A_{1/P}}) \end{align*} if $\rho_{2}$ is complex.
(ii) Assume that $\rho,N$ are as in (\ref{eqn:N}) with $\rho_{\mathbf{r}}=\mathbf{1}$, or $k\ge1$. Let $P_{y^{l}f\overline{g}}^{(r)}(y)$ be as in (\ref{eqn:dfP}). If $P_{y^{l}f\overline{g}}^{(\sqrt{-1}\infty)}(y)$ does not have a term containing $y$ to the power of $1$, and if $P_{y^{l}f\overline{g}}^{(r)}(y)$ does not have a term containing $y^{k+1/2}$ for any $r\in\mathcal{C}_{0}(N)$, then there holds \begin{align}
with the scalar product defined in (\ref{eqn:psp}). Suppose otherwise. Let $y^{s}+\xi^{(1/N)}(s)y^{-k+1/2-s}$ be the constant term of the Fourier expansion of $y^{s}E_{k+1/2,\rho}(z,s)$ at a cusp $\sqrt{-1}\infty$, and let $\xi^{(r)}(s)y^{-k+1/2-s}$ be the constant term at a cusp $r\in\mathcal{C}_{0}(N),\ne1/N$. If $C_{0}$ is a coefficient of $y$ in $P_{y^{l}f\overline{g}}^{(\sqrt{-1}\infty)}(y)$, and if $\alpha(s)$ is the coefficient of $y^{k}$ in $\sum_{r\in\mathcal{C}_{0}(N)}\overline{\xi^{(r)}(\overline{s})}w^{(r)}P_{y^{l}f\overline{g}}^{(r)}(y)$, then the equation (\ref{eqn:psp-hiw}) holds replacing the right hand side by $\{(4\pi)^{-l+1-s}\Gamma(l{-}1{+}s)L(l{-}1{+}s;f,g)+s^{-1}(C_{0}{-}\alpha(0))-\frac{d}{ds}\alpha(0)\}|_{s=0}$. \end{cor} \begin{proof} (i) We take $f(z)\overline{g}(z)$ as $F(z)$ in the theorem. Then $R(F,s)=(4\pi)^{-l+1-s}\Gamma(l-1+s)L(l-1+s;f,g)$, and the assertion follows from the theorem.
(ii) The Eisenstein series $E_{k+1/2,\rho}(z,s)$ is holomorphic in $s$ at $s=0$ under our assumption. Then the proof is the same as in the proof of Corollary \ref{cor:lseries} (ii). \end{proof}
\section{Applications --- Scalar products}\label{sect:ASP} \begin{prop}\label{prop:vl-sp}
Let $l\in\frac{1}{2}\mathbf{Z},\ge0,\,N\in\mathbf{N}$ where $N\ge3$ if $l$ is odd, and $4|N$ if $l$ is not integral. Let $\rho\in(\mathbf{Z}/N)^{\ast}$ be such that $\rho$ has the same parity as $l$ if $l\in\mathbf{Z}$, and that $\rho$ has the same parity as $l-1/2$ and $\rho,N$ satisfy (\ref{cond:M}) with $M=N$ if $l\not\in\mathbf{Z}$. Assume that $E_{l,\rho,N}(z,s)$ is holomorphic in $s$ at $s=0$. Denote by $y^{s}+\xi^{(1/N)}(s)y^{-l+1-s}$, the constant term of the Fourier expansion with respect to $x$, of the Eisenstein series $y^{s}E_{l,\rho,N}(z,s)$ at the cusp $\sqrt{-1}\infty$, and by $\xi^{(r)}(s)y^{-l+1-s}$, the constant term at a cusp $r\in\mathcal{C}_{0}(N)$.
Let $F(z)$ be in $\mathcal{M}_{l,0}(N,\rho)$ which has the Fourier expansion (\ref{eqn:fearc}) at each cusp $r$ with $u_{0}(y)=0$. Denote by $b$, the coefficient of $y^{-l+1}$ in $P_{F}^{(1/N)}(y)$, and by $c^{(r)}$, the (absolutely) constant term in $P_{F}^{(r)}(y)$ for $r\in\mathcal{C}_{0}(N)$. Then $b=\sum_{r\in\mathcal{C}_{0}(N)}c^{(r)}w^{(r)}\overline{\xi}^{(r)}(0)$, and \begin{align} \langle F(z),E_{l,\rho,N}(z,0)\rangle_{\Gamma_{0}(N)} = -\sum_{r\in\mathcal{C}_{0}(N)}c^{(r)}w^{(r)}\tfrac{d}{ds}\overline{\xi}^{(r)}(0).\label{eqn:sproduct-eis-eis} \end{align} \end{prop} \begin{proof} Since $u_{0}(y)=0$, the equality (\ref{eqn:RFs4}) turns out to be \begin{align} \int_{\mathfrak{F}_{T}(N)}y^{l+s}F(z)\overline{E_{l,\rho,N}(z,\overline{s})}\tfrac{dxdy}{y^2}=-g_{T}(z,s)+h(T,s)\label{eqn:prelim} \end{align} where $g_{T}(z,s)$ is the sum of the integrals of the right hand side of (\ref{eqn:RFs4}) except the first one, and $h(T,s)$ is as in (\ref{eqn:defHTS}) with $l'=l$. The function $g_{T}(z,s)$ is holomorphic in $s$ at $s=0$.
For $\Re s\gg0$, we have $\int_{0}^{T} y^{l}by^{-l+1}\cdot y^{s}\tfrac{dy}{y^{2}}=bs^{-1}T^{s}$, and $\int_{T}^{\infty}y^{l}c^{(r)}\cdot\overline{\xi^{(r)}(\overline{s})}y^{-l+1-s}\tfrac{dy}{y^{2}}=c^{(r)}\overline{\xi^{(r)}(\overline{s})}s^{-1}T^{-s}$. Since near $s=0$ there holds $bs^{-1}T^{s}=\frac{b}{s}+b\log T+O(s)$ and $c^{(r)}\overline{\xi^{(r)}(\overline{s})}s^{-1}T^{-s}=\frac{c^{(r)}\overline{\xi}^{(r)}(0)}{s}+c^{(r)}\frac{d}{ds}\overline{\xi}^{(r)}(0)-c^{(r)}\overline{\xi}^{(r)}(0)\log T+O(s)$, we have \begin{align*} h(T,s)=&\tfrac{b-\sum_{r\in\mathcal{C}_{0}(N)}c^{(r)}w^{(r)}\overline{\xi}^{(r)}(0)}{s}+\sum_{r\in\mathcal{C}_{0}(N)}Q_{y^{l}F\overline{E_{l,\rho,N}(z,0)}}^{(r)}(T)\\ &-\sum_{r\in\mathcal{C}_{0}(N)}c^{(r)}w^{(r)}\tfrac{d}{ds}\overline{\xi}^{(r)}(0)+n(T)+O(s) \end{align*} near $s=0$ where $Q_{y^{l}F\overline{E_{l,\rho,N}(z,0)}}^{(r)}(T)$ is as in (\ref{eqn:defQr}) and $n(T)$ is a finite sum of terms in which the real parts of powers of $T$ are all negative. In (\ref{eqn:prelim}), the left hand side is holomorphic in $s$ at $s=0$ since it is the integral over a compact set, and $g_{T}(z,s)$ is also holomorphic in $s$. Then $h(T,s)$ is holomorphic, and in particular $b-\sum_{r\in\mathcal{C}_{0}(N)}c^{(r)}w^{(r)}\overline{\xi}^{(r)}(0)=0$. Then \begin{align*} &\int_{\mathfrak{F}_{T}(N)}y^{l+s}F(z)\overline{E_{l,\rho,N}(z,\overline{s})}\tfrac{dxdy}{y^2}-\sum_{r\in\mathcal{C}_{0}(N)}Q_{y^{l}F\overline{E_{l,\rho,N}(z,0)}}^{(r)}(T)\\ =&-g_{T}(z,s)-\sum_{r\in\mathcal{C}_{0}(N)}c^{(r)}w^{(r)}\tfrac{d}{ds}\overline{\xi}^{(r)}(0)+n(T)+o(s). \end{align*} At $s=0$ taking the limit as $T\longrightarrow\infty$, the left hand side tends to $\langle F(z),E_{l,\rho,N}(z,0)\rangle_{\Gamma_{0}(N)}$ by (\ref{eqn:psp}) and we obtain the equality (\ref{eqn:sproduct-eis-eis}).
Let $\rho$ be a character with $N=\mathfrak{e}_{\rho}$ and with the same parity as $k$. From (\ref{eqn:fe}) and (\ref{eqn:G-E}), we have \begin{align}
E_{k,\rho}(z,s)=&1+\delta_{\mathfrak{f}_{\rho},1}\tfrac{(-\sqrt{{-}1})^{k}\pi\varphi(\mathfrak{e}_{\rho})}{2^{k-2+2s}\mathfrak{e}_{\rho}^{k+2s}}\tfrac{\Gamma(k{-}1{+}2s)}{\Gamma(s)\Gamma(k+s)}\tfrac{\zeta(k{-}1{+}2s)}{\zeta(k{+}2s)\prod_{p|\mathfrak{e}_{\rho}}(1{-}p^{{-}k{-}2s})}y^{-k+1-2s}\nonumber\\
&+\tfrac{\tau(\overline{\widetilde{\rho}})\mu(\mathfrak{e}_{\rho}\mathfrak{f}_{\rho}^{-1})\overline{\widetilde{\rho}}(\mathfrak{e}_{\rho}\mathfrak{f}_{\rho}^{-1})}{\mathfrak{e}_{\rho}^{k+2s}L(k+2s,\overline{\rho})}\sum_{-\infty<n<\infty\atop n\ne 0}n^{-k}|n|^{1-2s}\sum_{0<d|n}\prod_{p|(d,\mathfrak{e}_{\rho}\mathfrak{f}_{\rho}^{-1})}(1-p)\nonumber \\ &\hspace{12em}\times\widetilde{\rho}(d)d^{k-1+2s}w_{n}(y,k,s)\mathbf{e}(nx),\label{eqn:fee}\\ E_{k}^{\rho}(z,s)=&\delta_{\mathfrak{e}_{\rho},1}+\tfrac{(-\sqrt{{-}1})^{k}\pi}{2^{k-2+2s}}\tfrac{\Gamma(k-1+2s)}{\Gamma(s)\Gamma(k+s)}\tfrac{L(k{-}1{+}2s,\rho)}{L(k{+}2s,\rho)}y^{-k+1-2s}+\tfrac{1}{L(k{+}2s,\rho)}\nonumber\\
&\times\sum_{-\infty<n<\infty\atop n\ne 0}n^{-k}|n|^{1-2s}\sum_{0<d|n}\rho(n/d)d^{k-1+2s}w_{n}(y,k,s)\mathbf{e}(nx), \label{eqn:fee2} \end{align}
and $\{y^{s}E_{k,\rho}(z,s)\}|_{\left(\,0\ -1\atop 1\ \ 0\right)}=\mathfrak{e}_{\rho}^{-k-s}\{y^{s}E_{k}^{\overline{\rho}}(z,s)\}|_{z\to z/\mathfrak{e}_{\rho}}$, $\{y^{s}E_{k}^{\rho}(z,s)\}|_{\left(\,0\ -1\atop 1\ \ 0\right)}=(-1)^{k}$ $\times \mathfrak{e}_{\rho}^{s}\{y^{s}E_{k,\overline{\rho}}(z,s)\}|_{z\to z/\mathfrak{e}_{\rho}}$.
We compute some scalar products by using Proposition \ref{prop:vl-sp}. \\
(I) The case of integral weight and $\mathfrak{f}_{\rho}=1$.
Then $k$ is even. Using the notation of Proposition \ref{prop:vl-sp}, $y^{s}E_{k,\rho}(z,s)$ has $\xi^{(1/\mathfrak{e}_{\rho})}(s)=\tfrac{({-}1)^{k/2}\pi\varphi(\mathfrak{e}_{\rho})}{2^{k-2+2s}\mathfrak{e}_{\rho}^{k+2s}}\tfrac{\Gamma(k{-}1{+}2s)}{\Gamma(s)\Gamma(k+s)}\tfrac{\zeta(k{-}1{+}2s)}{\zeta(k{+}2s)\prod_{p|\mathfrak{e}_{\rho}}(1{-}p^{{-}k{-}2s})}$ by (\ref{eqn:fee}), and $\xi^{(0)}(s)=\tfrac{({-}1)^{k/2}\pi}{2^{k-2+2s}\mathfrak{e}_{\rho}}\tfrac{\Gamma(k{-}1{+}2s)}{\Gamma(s)\Gamma(k+s)}$ $\times \tfrac{\zeta(k{-}1{+}2s)\prod_{p|\mathfrak{e}_{\rho}}(1{-}p^{{-}k{+}1{-}2s})}{\zeta(k{+2s)\prod_{p|\mathfrak{e}_{\rho}}(1{-}p^{{-}k{-}2s})}}$ by (\ref{eqn:fee2}) and by the transformation law written below (\ref{eqn:fee2}). In the case $\mathfrak{e}_{\rho}=1$, namely, $\rho=\mathbf{1}$, we have $\tfrac{d}{ds}\overline{\xi}(0)=-\frac{\pi}{3}\,(k=0)$,\ $3\pi^{-1}\{{-}1{+}4\log2{+}$ $2\log\pi$ ${+}24\tfrac{d}{ds}\zeta(s)|_{s=-1}\}$ $(k=2)$,\ $\frac{(-1)^{k/2}\pi\zeta(k-1)}{2^{k-2}(k-1)\zeta(k)}$ $(k\ge4)$. Then by Proposition \ref{prop:vl-sp}, \begin{align*} &\langle E_{k}(z,0),E_{k}(z,0)\rangle_{\Gamma_{0}(1)}=-\tfrac{d}{ds}\overline{\xi}(0)\\
=&\begin{cases}\pi/3&(k=0),\\-3\pi^{-1}\{{-}1{+}4\log2{+}2\log\pi{+}24\tfrac{d}{ds}\zeta(s)|_{s=-1}\}&(k=2),\\
-\frac{(-1)^{k/2}\pi\zeta(k-1)}{2^{k-2}(k-1)\zeta(k)}&(2|k\ge4). \end{cases} \end{align*}
The Eisenstein series $E_{0}(z,0)$ is a constant $1$, and the above formula shows $\pi/3=\langle E_{0}(z,0),E_{0}(z,0)\rangle_{\Gamma_{0}(1)}=\int_{\Gamma_{0}(1)\backslash\mathfrak{H}}1\tfrac{dxdy}{y^{2}}$, namely, the volume of the fundamental domain of $\mathrm{SL}_{2}(\mathbf{Z})$ is $\pi/3$, which is a well-known fact. The values of the scalar products $\langle E_{k}(z,0),E_{k}(z,0)\rangle_{\Gamma_{0}(1)}$ for $k\ge 4$ are already obtained in Chiera \cite{Chiera}, and our result coincides with his.
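The volume computation is the standard one over the fundamental domain $\mathfrak{F}$ of $\mathrm{SL}_{2}(\mathbf{Z})$,
\begin{align*}
\int_{\mathfrak{F}}\tfrac{dxdy}{y^{2}}=\int_{-1/2}^{1/2}\int_{\sqrt{1-x^{2}}}^{\infty}\tfrac{dy}{y^{2}}\,dx=\int_{-1/2}^{1/2}\tfrac{dx}{\sqrt{1-x^{2}}}=\tfrac{\pi}{3},
\end{align*}
and, as a sample value of the formula above, $k=4$ gives $\langle E_{4}(z,0),E_{4}(z,0)\rangle_{\Gamma_{0}(1)}=-\tfrac{\pi\zeta(3)}{12\,\zeta(4)}=-\tfrac{15\,\zeta(3)}{2\pi^{3}}$.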
Let $\mathfrak{e}_{\rho}>1$. When $k=0$, $y^{s}E_{0,\rho}(z,s)$ is holomorphic in $s$ at $s=0$ only if $\mathfrak{e}_{\rho}$ is a prime, say $\mathfrak{e}_{\rho}=p$. Then $\langle E_{0,\rho}(z,0),E_{0,\rho}(z,0)\rangle_{\Gamma_{0}(p)}=-\tfrac{d}{ds}\overline{\xi}^{(1/p)}(0)=$ $-6^{-1}\pi(p-1)+\pi(p-1)(3\log p)^{-1}\{1-2\log2-\log\pi-12\tfrac{d}{ds}\zeta(s)|_{s=-1}\}$. An Eisenstein series $E_{0}^{\rho}(z,0)$ has $c^{(0)}=1$ and $c^{(r)}=0\ (r\in\mathcal{C}_{0}(p),\ne0)$ in the notation of Proposition \ref{prop:vl-sp}. Then $\langle E_{0}^{\rho}(z,0),E_{0,\rho}(z,0)\rangle_{\Gamma_{0}(p)}=-\mathfrak{e}_{\rho}\tfrac{d}{ds}\xi^{(0)}(0)=6^{-1}(1+p)\pi+(3\log p)^{-1}(1-p)\pi\{1-2\log2-\log\pi-12\tfrac{d}{ds}\zeta(s)|_{s=-1}\}$. Let $\mathfrak{e}_{\rho}>1$ be square free, and let $k=2$. Then $\langle E_{2,\rho}(z,0),E_{2,\rho}(z,0)\rangle_{\Gamma_{0}(\mathfrak{e}_{\rho})}=3\mathfrak{e}_{\rho}^{-2}\varphi(\mathfrak{e}_{\rho})\pi^{-1}[\tfrac{d}{ds}\prod_{p|\mathfrak{e}_{\rho}}(1{-}p^{{-}2{-}2s})^{{-}1}|_{s=-1}+\prod_{p|\mathfrak{e}_{\rho}}(1{-}p^{{-}2})^{{-}1}\{1-4\log2-2\log(\pi\mathfrak{e}_{\rho})-24\tfrac{d}{ds}\zeta(s)|_{s=0}\}]$ and $\langle E_{2}^{\rho}(z,0),E_{2,\rho}(z,0)\rangle_{\Gamma_{0}(\mathfrak{e}_{\rho})}$ $=3\mathfrak{e}_{\rho}^{2}\pi^{-1}[\tfrac{d}{ds}\prod_{p|\mathfrak{e}_{\rho}}\tfrac{1-p^{-2-2s}}{1-p^{-1-2s}}+\prod_{p|\mathfrak{e}_{\rho}}(1{+}p^{-1})\{1-4\log2-2\log\pi+\log\mathfrak{e}_{\rho}-24\tfrac{d}{ds}\zeta(s)|_{s=-1}\}]$. For $k\ge4$ even, \begin{align*}
\langle E_{k,\rho}(z,0),E_{k,\rho}(z,0)\rangle_{\Gamma_{0}(\mathfrak{e}_{\rho})}&=-\tfrac{({-}1)^{k/2}\pi\varphi(\mathfrak{e}_{\rho})\zeta(k{-}1)}{2^{k-2}(k-1)\prod_{p|\mathfrak{e}_{\rho}}(p^{k}{-}1)\zeta(k)},\\
\langle E_{k}^{\rho}(z,0),E_{k,\rho}(z,0)\rangle_{\Gamma_{0}(\mathfrak{e}_{\rho})}&=-\tfrac{({-}1)^{k/2}\pi\zeta(k{-}1)\prod_{p|\mathfrak{e}_{\rho}}(1{-}p^{{-}k{+}1})}{2^{k-2}(k-1)\zeta(k)\prod_{p|\mathfrak{e}_{\rho}}(1{-}p^{{-}k})}. \end{align*}
(II) The case of integral weight and $\mathfrak{f}_{\rho}>1$.
Using the notation of Proposition \ref{prop:vl-sp}, $y^{s}E_{k,\rho}(z,s)$ has $0$ as $\xi^{(1/\mathfrak{e}_{\rho})}(s)$ by (\ref{eqn:fee}) if $\mathfrak{f}_{\rho}\ne1$. Hence if $F(z)\in\mathcal{M}_{l,0}(\mathfrak{e}_{\rho},\rho)$ satisfies $c^{(r)}=0\ (r\in \mathcal{C}_{0}(\mathfrak{e}_{\rho}),\ne1/\mathfrak{e}_{\rho})$, then $\langle F(z),E_{k,\rho}(z,0)\rangle_{\Gamma_{0}(\mathfrak{e}_{\rho})}=0$. For example, \begin{align*} \langle E_{k,\rho}(z,0),E_{k,\rho}(z,0)\rangle_{\Gamma_{0}(\mathfrak{e}_{\rho})}=0\hspace{1.5em}(k\ne1). \end{align*} If $k=1$, then $E_{1,\rho}(z,0)$ has nonzero $c^{(r)}$ other than $c^{(1/\mathfrak{e}_{\rho})}$. Suppose that $\rho$ is primitive. Then $c^{(1/\mathfrak{f}_{\rho})}=1$, $c^{(0)}=\tfrac{-\sqrt{-1}\pi L(0,\overline{\rho})}{\mathfrak{f}_{\rho}L(1,\overline{\rho})}$, $c^{(r)}=0\ (r\in\mathcal{C}_{0}(\mathfrak{f}_{\rho}),\ne 1/\mathfrak{f}_{\rho},0)$, and $y^{s}E_{1,\rho}(z,s)$ has $\xi^{(1/\mathfrak{f}_{\rho})}(s)=0$ and $\xi^{(0)}(s)=\tfrac{(-\sqrt{{-}1})\pi}{2^{-1+2s}\mathfrak{f}_{\rho}}\tfrac{\Gamma(2s)}{\Gamma(s)\Gamma(1+s)}\tfrac{L(2s,\overline{\rho})}{L(1+2s,\overline{\rho})}$ in the notation of Proposition \ref{prop:vl-sp}. Then \begin{align} &\langle E_{1,\rho}(z,0),E_{1,\rho}(z,0)\rangle_{\Gamma_{0}(\mathfrak{f}_{\rho})}=-c^{(0)}\mathfrak{f}_{\rho}\tfrac{d}{ds}\overline{\xi}^{(0)}(0)\nonumber\\
=&\{2\log2+2\tfrac{d}{ds}\log\tfrac{L(1+s,\rho)}{L(s,\rho)}|_{s=0}\}\ \ (k=1,\,\rho\mbox{\, is primitive}). \label{eqn:psp-w1} \end{align} For primitive $\rho$, the equality $E_{1,\rho}(z,0)=\tfrac{\tau(\overline{\rho})L(1,\rho)}{\mathfrak{f}_{\rho}\,L(1,\overline{\rho})}E_{1}^{\rho}(z,0)$ holds, and the scalar product $\langle E_{1}^{\rho}(z,0),E_{1,\rho}(z,0)\rangle_{\Gamma_{0}(\mathfrak{f}_{\rho})}$ can be obtained from (\ref{eqn:psp-w1}).
We compute $\langle E_{k}^{\rho}(z,0),E_{k,\rho}(z,0)\rangle_{\Gamma_{0}(\mathfrak{e}_{\rho})}$ for $k\ge2$ and for $\rho$ not necessarily primitive. We take $E_{k}^{\rho}(z,0)$ as $F(z)$ in Proposition \ref{prop:vl-sp}. Then $c^{(0)}=(-1)^{k}$ and $c^{(r)}=0\ (r\in C_{0}(\mathfrak{e}_{\rho}),\ne0)$, and $\langle E_{k}^{\rho}(z,0),E_{k,\rho}(z,0)\rangle_{\Gamma_{0}(\mathfrak{e}_{\rho})}={-}({-}1)^{k}\mathfrak{e}_{\rho}\tfrac{d}{ds}\overline{\xi}^{(0)}(0)$ where $y^{s}E_{k,\rho}(z,s)$ has $\tfrac{(-\sqrt{-1})^{k}\pi\Gamma(k-1+2s)L(k-1+2s,\overline{\rho})}{2^{k-2+2s}\mathfrak{e}_{\rho}\Gamma(s)\Gamma(k+s)L(k+2s,\overline{\rho})}$ as $\xi^{(0)}(s)$. Then \begin{align*} \langle E_{k}^{\rho}(z,0),E_{k,\rho}(z,0)\rangle_{\Gamma_{0}(\mathfrak{e}_{\rho})}=-\tfrac{(-\sqrt{-1})^{k}\pi L(k-1,\rho)}{2^{k-2}(k-1)L(k,\rho)}\hspace{1em}(k\ge2). \end{align*}
If $k=0$ and $\rho$ is primitive, then the same computation leads to $\langle E_{0}^{\rho}(z,0),E_{0,\rho}(z,0)\rangle_{\Gamma_{0}(\mathfrak{f}_{\rho})}$ $=4\pi\tau(\rho)^{-1}L(-1,\rho)L(1,\overline{\rho})^{-1}\{-1-\log(\pi/\mathfrak{f}_{\rho})+\tfrac{d}{ds}\log\Gamma(s)|_{s=1/2}+L(-1,\rho)^{-1}$ \linebreak$\times\tfrac{d}{ds} L(s,\rho)|_{s=-1}+L(1,\overline{\rho})^{-1}\tfrac{d}{ds}L(s,\overline{\rho})|_{s=1}\}$.\\
(III) The case of half integral weight.
By (\ref{eqn:feehiw1}),(\ref{eqn:feehiw2}) and by Lemma \ref{lem:vac0}, $y^{s}E_{k+1/2,\chi_{-4}^{k}}(z,s)\in\mathcal{M}_{k+1/2,0}(4,\chi_{-4}^{k})$ has $\xi^{(1/4)}(s)=\tfrac{(-1)^{k(k+1)/2}}{2^{k-1+2s}(2^{2k+4s}-1)}\tfrac{\pi\Gamma(k-1/2+2s)}{\Gamma(s)\Gamma(k+1/2+s)}\tfrac{\zeta(2k-1+4s)}{\zeta(2k+4s)}$, $\xi^{(1/2)}(s)=0$, $\xi^{(0)}(s)=$\linebreak$\tfrac{({-}\sqrt{{-}1})^{k}(1{-}\sqrt{{-}1})(2^{2k-1+4s}-1)}{2^{k+2s}(2^{2k+4s}-1)}\tfrac{\pi\Gamma(k-1/2+2s)}{\Gamma(s)\Gamma(k+1/2+s)}\tfrac{\zeta(2k-1+4s)}{\zeta(2k+4s)}$. The Fourier expansion of\linebreak $E_{k+1/2,\chi_{-4}^{k}}(z,0)$ is obtained from (\ref{eqn:feehiw1s0}),(\ref{eqn:feehiw2s0}). Then by Proposition \ref{prop:vl-sp}, \begin{align} &\langle E_{k+1/2,\chi_{-4}^{k}}(z,0),E_{k+1/2,\chi_{-4}^{k}}(z,0)\rangle_{\Gamma_{0}(4)}=-\tfrac{d}{ds}\overline{\xi}^{(1/4)}(0)\nonumber\\
=&\begin{cases}-\tfrac{\pi}{3\log 2}\{-2+5\log 2+2\log\pi+24\tfrac{d}{ds}\zeta(s)|_{s=-1}\}&(k=0),\\
\frac{2}{3\pi}\{3-20\log2-6\log\pi-72\tfrac{d}{ds}\zeta(s)|_{s=-1}\}&(k=1),\\ -\tfrac{(-1)^{k(k+1)/2}2^{-k+2}\pi\zeta(2k-1)}{(2^{2k}-1)(2k-1)\zeta(2k)}&(k\ge2). \end{cases}\label{eqn:vl-psp-e} \end{align}
Since $E_{k+1/2,\chi_{-4}^{k}}(z,0)|_{\left({0\ {-}1/2\atop2\ \ \ 0\ }\right)}=2^{-k-1/2}E_{k+1/2}^{\chi_{-4}^{k}}(z,0)$, $2^{-2k-1}\langle E_{k+1/2}^{\chi_{-4}^{k}}(z,0),$\linebreak$E_{k+1/2}^{\chi_{-4}^{k},4}(z,0)\rangle_{\Gamma_{0}(4)}$ is equal to (\ref{eqn:vl-psp-e}) with the additional term $2\pi/3$ for $k=0$ and $(4\log2)/\pi$ for $k=1$, by Lemma \ref{lem:psp-sup}. Again by Proposition \ref{prop:vl-sp}, \begin{align*} &\langle E_{k+1/2}^{\chi_{-4}^{k}}(z,0),E_{k+1/2,\chi_{-4}^{k}}(z,0)\rangle_{\Gamma_{0}(4)}=-(-1)^{k-1}\sqrt{-1}\cdot4\cdot\tfrac{d}{ds}\overline{\xi}^{(0)}(0)\\ =&\begin{cases}
\tfrac{(1-\sqrt{-1})\pi}{3\log 2}\{-2+7\log 2+2\log \pi+24\tfrac{d}{ds}\zeta(s)|_{s=-1}\}&(k=0)\\
\tfrac{4(1+\sqrt{-1})}{3\pi}\{3-8\log2 -6\log \pi-72\tfrac{d}{ds}\zeta(s)|_{s=-1}\}&(k=1)\\ -\tfrac{(-\sqrt{-1})^{k}(1-\sqrt{-1})2^{-k+3}(2^{2k-1}-1)\pi\zeta(2k-1)}{(2^{2k}-1)(2k-1)\zeta(2k)}&(k\ge2) \end{cases} \end{align*} and \begin{align*} \langle \mathscr{E}_{k+1/2,\chi_{-4}^{k}}(z,0),\mathscr{E}_{k+1/2,\chi_{-4}^{k}}(z,0)\rangle_{\Gamma_{0}(4)}=\begin{cases} 2\pi&(k=0),\\ -12(\log2)/\pi&(k=1),\\ \tfrac{(-1)^{k(k+1)/2}(2^{2k-2}-1)\pi\zeta(2k-1)}{2^{k-2}(2^{2k-1}-1)^{2}(2k-1)\zeta(2k)}&(k\ge2). \end{cases} \end{align*} In particular $\langle \theta,\theta\rangle_{\Gamma_{0}(4)}=2\pi,\,\langle \theta^{3},\theta^{3}\rangle_{\Gamma_{0}(4)}=-(12\log2)/\pi,\,\langle \theta^{5},\theta^{5}\rangle_{\Gamma_{0}(4)}=-\tfrac{2\cdot3^{2}5\zeta(3)}{7^{2}\pi^{3}}$, $\langle \theta^{7},\theta^{7}\rangle_{\Gamma_{0}(4)}=\tfrac{3^{4}5\cdot7\zeta(5)}{2\cdot31^{2}\pi^{5}}$. From $\langle \theta,\theta\rangle_{\Gamma_{0}(4)}=2\pi$, we see that $L(s;\theta,\theta)$ has the residue $2$ at $s=1/2$ by (\ref{eqn:psp-formula2}); however, this is also clear directly, because $L(s;\theta,\theta)=4\zeta(2s)$.
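The last identity can be checked directly, using only that $r_{1}(n)=2$ when $n$ is a nonzero square and $r_{1}(n)=0$ otherwise:
\begin{align*}
L(s;\theta,\theta)=\sum_{n=1}^{\infty}r_{1}(n)^{2}n^{-s}=\sum_{m=1}^{\infty}4\,(m^{2})^{-s}=4\zeta(2s),
\end{align*}
which indeed has a simple pole at $s=1/2$ with residue $4\cdot\tfrac{1}{2}=2$.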
\section{Applications --- $L$-functions}\label{sect:ALF}
\begin{prop}\label{prop:mr-pole} Let $f,g$ be holomorphic modular forms for $\Gamma_{0}(N)$ of weight $l,l'\in\tfrac{1}{2}\mathbf{N}\ (l\ge l',\,l+l'>1)$ respectively with $f\overline{g}\in\mathcal{M}_{l,l'}(N,\rho)$ for $\rho\in(\mathbf{Z}/N)^{\ast}$. If $l-l'$ is odd, then $N\ge3$, and if $l-l'\not\in\mathbf{Z}$, then $4|N$. If $l-l'$ is integral, then $\rho$ has the same parity as $l-l'$, and if otherwise, then $\rho$ has the same parity as $l-l'-1/2$, and $\rho,N$ satisfy (\ref{cond:M}) with $M=N$. Let $a_{0}^{(r)},b_{0}^{(r)}$ be the $0$-th Fourier coefficients of $f,g$ respectively at a cusp $r\in\mathcal{C}_{0}(N)$. Denote by $y^{s}+\xi^{(1/N)}(s)y^{-l+l'+1-s}$ the constant term of the Fourier expansion with respect to $x$ of the Eisenstein series $y^{s}E_{l-l',\rho,N}(z,s)$ at the cusp $\sqrt{-1}\infty$, and by $\xi^{(r)}(s)y^{-l+l'+1-s}$ the constant term at a cusp $r\in\mathcal{C}_{0}(N)$. Let $m\in\mathbf{Z}$ be the order of a pole of $E_{l-l',\rho,N}(z,s)$ at $s=l'$. Let $\xi^{(r)}(s)=c_{-m}^{(r)}(s-l')^{-m}+O((s-l')^{-m+1})$. Then \begin{align} \lim_{s\to0}s^{m+1}L(l{+}l'{-}1{+}s;f,g)=(4\pi)^{l+l'-1}\Gamma(l{+}l'{-}1)^{-1}\sum_{r\in\mathcal{C}_{0}(N)}w^{(r)}a_{0}^{(r)}\overline{b}_{0}^{(r)}\overline{c}_{-m}^{(r)}.\label{eqn:al-sng} \end{align} In particular if $E_{l-l',\rho,N}(z,s)$ is holomorphic at $s=l'$, then \begin{align} \mathrm{Res}_{s=l+l'-1}L(s;f,g)=(4\pi)^{l+l'-1}\Gamma(l{+}l'{-}1)^{-1}\sum_{r\in\mathcal{C}_{0}(N)}w^{(r)}a_{0}^{(r)}\overline{b}_{0}^{(r)}\overline{\xi}^{(r)}(l').\label{eqn:l-res} \end{align} If $E_{l-l',\rho,N}(z,s)$ is holomorphic on the domain $\Re s\ge l'$, then the only possible pole of $L(s;f,g)$ on the domain $\Re s\ge l+l'-1$ is at $s=l+l'-1$. \end{prop} \begin{proof} By (\ref{eqn:RFs4}), we have \begin{align} &(4\pi)^{-l+1-s}\Gamma(l-1+s)(s-l')^{m}L(l-1+s;f,g)=(s-l')^{m}R(f\overline{g},s)\nonumber\\ =&\int_{\mathfrak{F}_{T}(N)}y^{l+s}f(z)\overline{g(z)}(s{-}l')^{m}\overline{E_{0,\mathbf{1}_{N},N}(z,\overline{s})}\tfrac{dxdy}{y^2}{+}g_{T}(z,s){-}(s{-}l')^{m}h(T,s) \label{eqn:lpsp} \end{align} where $g_{T}(z,s)$ is $(s-l')^{m}$ times the sum of all integrals but the first one in (\ref{eqn:RFs4}). Then $g_{T}(z,s)$ is holomorphic in $s$ at $s=l'$, and \begin{align*} h(T,s)=\tfrac{a_{0}^{(1/N)}\overline{b}_{0}^{(1/N)}}{l-1+s}T^{l-1+s}+\sum_{r\in\mathcal{C}_{0}(N)}w^{(r)}a_{0}^{(r)}\overline{b}_{0}^{(r)}\tfrac{\overline{\xi^{(r)}(\overline{s})}}{l'-s}T^{l'-s}. \end{align*} Then the integral in (\ref{eqn:lpsp}) is holomorphic in $s$ at $s=l'$, and $h(T,s)=-\sum_{r}w^{(r)}a_{0}^{(r)}\overline{b}_{0}^{(r)}\overline{c}_{-m}^{(r)}$ $\times(s-l')^{-m-1}+O((s-l')^{-m})$ around $s=l'$. Then, replacing $s$ by $l'+s$, we obtain (\ref{eqn:al-sng}). \end{proof} The $L$-series $L(s;f,g)$ for $f,g$ in Proposition \ref{prop:mr-pole} converges for $s>l+l'-1$. Hence if $L(s;f,g)$ has a pole at $s=l+l'-1$, then it is the rightmost pole.
\begin{thm} Let $f,g$ be as in Proposition \ref{prop:mr-pole}. Let $f(z)=\sum_{n=0}^{\infty}a_{n}\mathbf{e}(nz),g(z)=\sum_{n=0}^{\infty}b_{n}\mathbf{e}(nz)$ be the Fourier expansions. Assume that (i) there is a nonzero constant $c$ for which $c\,a_{n}\overline{b}_{n}\ (n\ge1)$ are all real and non-negative, (ii) $E_{l-l',\rho,N}(z,s)$ is holomorphic at $s=l'$, and (iii) the right hand side of (\ref{eqn:l-res}), which we denote by $C$, is not zero. Then $\sum_{0<n\le X}a_{n}\overline{b}_{n}\sim C\frac{X^{l+l'-1}}{l+l'-1}$ as $X\longrightarrow\infty$. \end{thm} \begin{proof} We just apply the Wiener-Ikehara theorem to the $L$-function $L(s;f,g)$ at the rightmost pole $s=l+l'-1$. \end{proof}
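For the reader's convenience we recall the form of the Wiener-Ikehara theorem used here (a standard formulation of the Tauberian theorem, stated without proof): if $D(s)=\sum_{n\ge1}d_{n}n^{-s}$ with $d_{n}\ge0$ converges for $\Re s>a>0$ and $D(s)-C_{0}(s-a)^{-1}$ extends continuously to $\Re s\ge a$, then
\begin{align*}
\sum_{0<n\le X}d_{n}\sim\frac{C_{0}}{a}X^{a}\qquad(X\longrightarrow\infty).
\end{align*}
Applying this with $d_{n}=c\,a_{n}\overline{b}_{n}$, $a=l+l'-1$ and $C_{0}=cC$, and dividing by $c$, gives the asserted asymptotics.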
\begin{cor}\label{cor:om} (i) Let $k\ge0$ be even and let $l\in\tfrac{1}{2}\mathbf{N}$ with $l>\max\{k,1/2\}$. Let $f,g$ be holomorphic modular forms for $\Gamma_{0}(N)$ of weight $l,l-k$ respectively and with the same character. Let $a_{0}^{(r)},b_{0}^{(r)}$ denote the $0$-th Fourier coefficients at a cusp $r\in\mathcal{C}_{0}(N)$ of $f,g$, respectively. Then for $l=1$ and $k=0$, an equality $\lim_{s\to0}s^{2}L(1{+}s;f,g)=\tfrac{12}{\prod_{p|N}(1+p^{-1})}\sum_{i/M\in\mathcal{C}_{0}(N)}\tfrac{a_{0}^{(i/M)}\overline{b}_{0}^{(i/M)}}{(M^{2},N)}$ holds. If $l\ge3/2$, then $L(s;f,g)$ has a possible pole of order $1$ at $s=2l-k-1$ with the residue \begin{align}
\mathrm{Res}_{s=2l-k-1}L(s;f,g)=&\tfrac{(-1)^{k/2}2^{2l-k}\pi^{2l-k}\varphi(N)\zeta(2l-k-1)}{\Gamma(l-k)\Gamma(l)\zeta(2l-k)\prod_{p|N}(1-p^{-2l+k})}\nonumber\\
&\times\sum_{i/M\in\mathcal{C}_{0}(N)}\tfrac{a_{0}^{(i/M)}\overline{b}_{0}^{(i/M)}\prod_{p|(N/M)}(1-p^{-2l+k+1})}{(M^{2},N)\varphi(N/M)M^{2l-k-1}}. \label{eqn:mr-pole} \end{align}
(ii) Let $f(z)=\sum_{n=0}^{\infty}a_{n}\mathbf{e}(nz)$ be a holomorphic modular form for $\Gamma_{0}(N)$ of weight $l\in\frac{1}{2}\mathbf{N},\ge 3/2$ with any character. Let $a_{0}^{(r)}$ be the $0$-th Fourier coefficients at cusps $r\in\mathcal{C}_{0}(N)$ where $a_{0}^{(1/N)}=a_{0}$. We assume that at least one of $a_{0}^{(r)}$ is not zero. Then \begin{align*}
\sum_{0<n\le X}|a_{n}|^{2}\sim\tfrac{2^{2l}\pi^{2l}\varphi(N)\zeta(2l-1)}{\Gamma(l)^{2}\zeta(2l)\prod_{p|N}(1-p^{-2l})}\sum_{i/M\in\mathcal{C}_{0}(N)}\tfrac{|a_{0}^{(i/M)}|^{2}\prod_{p|(N/M)}(1-p^{-2l+1})}{(M^{2},N)\varphi(N/M)M^{2l-1}}\tfrac{X^{2l-1}}{2l-1}. \end{align*}
(iii) Let $Q,Q'$ be positive definite integral quadratic forms of $2l,2l-2k$ variables respectively with $l\in\frac{1}{2}\mathbf{N},\ge 3/2,\,2|k\ge0$. Let $N$ be the maximum of the levels of $Q,Q'$. Let $a_{0}^{(r)},b_{0}^{(r)}$ be the $0$-th Fourier coefficients at $r\in\mathcal{C}_{0}(N)$ of the theta series associated with $Q,Q'$, respectively. If the discriminants $d_{Q},d_{Q'}$ are equal to each other up to square factors, then $\sum_{0<n\le X}r_{Q}(n)r_{Q'}(n)\sim C X^{2l-k-1}/(2l-k-1)$ where $C$ denotes the right hand side of (\ref{eqn:mr-pole}). \end{cor} \begin{proof} (i) The assertion follows from (\ref{eqn:l-res}) and (\ref{eqn:al-sng}), where $w^{(i/M)}$ is given in (\ref{eqn:w-cusp}), and $\xi^{(i/M)}(s)$ is given in (\ref{eqn:ctk}).
As for (ii) and (iii), the assertions follow from Theorem, since $|a_{n}|^{2}\ge0$ in the case (ii), and since $r_{Q}(n)r_{Q'}(n)\ge0$ in the case (iii). \end{proof}
For $k>0$, let $r_{k}(n)$ denote the number of representations of $n$ as a sum of $k$ squares. Then $\theta(z)^{k}=1+\sum_{n=1}^{\infty}r_{k}(n)\mathbf{e}(nz)$, which is a modular form for $\Gamma_{0}(4)$ with the value $1$ at $\sqrt{-1}\infty$, the value $2^{-k/2}\mathbf{e}(-k/8)$ at $0$, and the value $0$ at $1/2$. We have $L(s;\theta^{k}\overline{\theta}{}^{k'})=\sum_{n=1}^{\infty}r_{k}(n)r_{k'}(n)n^{-s}$. Then by applying Corollary \ref{cor:om} (ii) to $L(s;\theta^{k}\overline{\theta}{}^{k})$, we obtain \begin{align*} \sum_{0<n\le X}r_{k}(n)^{2}\sim\tfrac{\pi^{k}\zeta(k-1)}{\Gamma(k/2)^{2}\zeta(k)(1-2^{-k})}&\tfrac{X^{k-1}}{k-1} \end{align*} for $k\ge3$, which is known as Wagon's conjecture, proved by R.~Crandall and S.~Wagon (for details, see Borwein and Choi \cite{Borwein-Choi}, Choi, Kumchev and Osburn \cite{Choi-Kumchev-Osburn}). More generally we obtain from Corollary \ref{cor:om} (iii), \begin{align*} \sum_{0<n\le X}r_{k}(n)r_{k-4m}(n)\sim\tfrac{\{1-(1{-}({-}1)^{m})2^{-k+2m+1}\}\pi^{k-2m}\zeta(k-2m-1)}{\Gamma(k/2-2m)\Gamma(k/2)\zeta(k-2m)(1-2^{-k+2m})}\tfrac{X^{k-2m-1}}{k-2m-1} \end{align*} for $k\in\mathbf{N},m\in\mathbf{Z},\ge0,\,k>\max\{4m,2\}$. Estimates of this kind can be obtained for many other quadratic forms.
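For instance, specializing the first asymptotic formula to $k=4$ (with $\Gamma(2)=1$, $\zeta(4)=\pi^{4}/90$ and $1-2^{-4}=15/16$) gives
\begin{align*}
\sum_{0<n\le X}r_{4}(n)^{2}\sim\frac{\pi^{4}\zeta(3)}{\zeta(4)(1-2^{-4})}\cdot\frac{X^{3}}{3}=32\zeta(3)\,X^{3}.
\end{align*}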
For modular forms on $\Gamma_{0}(4)$, we have the following: \begin{cor}\label{cor:om2} (i) Let $f(z)=\sum_{n=0}^{\infty}a_{n}\mathbf{e}(nz),g(z)=\sum_{n=0}^{\infty}b_{n}\mathbf{e}(nz)$ be holomorphic modular forms for $\Gamma_{0}(4)$ of weight $l\in\tfrac{1}{2}\mathbf{N}$ and $l-k>0$ respectively with odd $k\ge1$. Let $a_{0}^{(r)},b_{0}^{(r)}$ denote the $0$-th Fourier coefficients at a cusp $r\in\mathcal{C}_{0}(4)$ of $f,g$, respectively. If $a_{0}^{(0)}b_{0}^{(0)}\ne0$, then $L(s;f,g)$ has a simple pole at $s=2l-k-1$ with residue \begin{align} \mathrm{Res}_{s=2l{-}k{-}1}L(s;f,g)=\tfrac{\sqrt{{-}1}^{k}2^{2l-k}\pi^{2l-k}a_{0}^{(0)}\overline{b}_{0}^{(0)}}{\Gamma(l{-}k)\Gamma(l)}\tfrac{L(2l{-}k{-}1,\chi_{-4})}{L(2l{-}k,\chi_{-4})}. \label{eqn:mr-pole2} \end{align} Suppose further that there is a nonzero constant $c$ so that $c\,a_{n}\overline{b}_{n}\ (n\ge1)$ are all non-negative. Then $\sum_{0<n\le X}a_{n}\overline{b}_{n}\sim CX^{2l-k-1}/(2l-k-1)$ where $C$ is the left hand side of (\ref{eqn:mr-pole2}).
(ii) Let $k\in\mathbf{Z},\ge0$, and let $l\in\tfrac{1}{2}\mathbf{N}$ with $l>k+1/2$. Let $f,g$ be holomorphic modular forms for $\Gamma_{0}(4)$ with automorphy factors $j(*,z)^{2l},j(*,z)^{2l{-}2k{-}1}$ respectively, where $j$ denotes the automorphy factor of $\theta$ as in the Introduction. If $l\ge3/2$, then $L(s;f,g)$ has a possible pole of order $1$ at $s=2l-k-3/2$ with the residue \begin{align} \mathrm{Res}_{s=2l-k-3/2}L(s;f,g)=&\tfrac{(-1)^{k(k+1)/2}2^{2l-k-3}\pi^{2l{-}k{-}1/2}\zeta(4l{-}2k{-}3)}{(2^{4l-2k-2}{-}1)\Gamma(l{-}k{-}1/2)\Gamma(l)\zeta(4l{-}2k{-}2)}\{a_{0}^{(1/4)}\overline{b}_{0}^{(1/4)}\nonumber\\ &\hspace{3em}+(1{+}({-}1)^{k}\sqrt{{-}1})2^{3}(2^{4l{-}2k{-}3}{-}1)a_{0}^{(0)}\overline{b}_{0}^{(0)}\}.\label{eqn:mr-pole3} \end{align} If $L(s;f,g)$ has the pole and if there is a nonzero constant $c$ so that $c\,a_{n}\overline{b}_{n}\ (n\ge1)$ are all non-negative, then $\sum_{0<n\le X}a_{n}\overline{b}_{n}\sim CX^{2l-k-3/2}/(2l-k-3/2)$ where $C$ is the left hand side of (\ref{eqn:mr-pole3}). \end{cor} \begin{proof} (i) Any modular form for $\Gamma_{0}(4)$ of odd weight has character $\chi_{-4}$. We apply Theorem to $f,g$ and $y^{s}E_{k,\chi_{-4}}(z,s)$ where $y^{s}E_{k,\chi_{-4}}(z,s)$ has $\xi^{(1/4)}(s)=\xi^{(1/2)}(s)=0,\,\xi^{(0)}(s)=\tfrac{(-\sqrt{{-}1})^{k}\pi}{2^{k+2s}}\tfrac{\Gamma(k-1+2s)}{\Gamma(s)\Gamma(k+s)}\tfrac{L(k{-}1{+}2s,\chi_{-4})}{L(k{+}2s,\chi_{-4})}$. Then the assertion follows from Proposition \ref{prop:mr-pole} and Theorem.
(ii) We apply Proposition \ref{prop:mr-pole} and Theorem to $f,g$ and $E_{k+1/2,\chi_{-4}^{k}}(z,s)$ where $\xi^{(i/M)}(s)$ is written down in Section \ref{sect:ASP}, (III). \end{proof} By Corollary \ref{cor:om2} we have \begin{align*} \sum_{0<n\le X}r_{k}(n)r_{k-2m}(n)\sim\tfrac{\pi^{k-m}L(k{-}m{-}1,\chi_{-4})}{\Gamma(k/2{-}m)\Gamma(k/2)L(k{-}m,\chi_{-4})}\tfrac{X^{k-m-1}}{k{-}m{-}1} \end{align*} for $m\equiv1\pmod{2}, k>2m$, and \begin{align*} \sum_{0<n\le X}r_{k}(n)r_{k-m}(n)\sim&\tfrac{\{1{+}\chi_{8}(m)2^{-k+(m-3)/2}{-}2^{-2k+m+2}\}\pi^{k{-}m/2}\zeta(2k{-}m{-}2)}{\Gamma((k{-}m)/2)\Gamma(k/2)\zeta(2k{-}m{-}1)(1{-}2^{-2k+m+1})}\tfrac{X^{k-m/2-1}}{k{-}m/2{-}1} \end{align*} for $m\equiv1\pmod{2} $ and $k\ge 3/2,\,k>m$.
The $L$-function $L(s;\theta^{k}\overline{\theta}{}^{k})=\sum_{n=1}^{\infty}r_{k}(n)^{2}n^{-s}$ has a pole also at $s=k/2$. The residues at $s=k/2$ for $k$ up to $8$ are $2\,(k=1),\ 2^{2}\{\log2 +\tfrac{d}{ds}\log\tfrac{L(1+s,\chi_{-4})}{L(s,\chi_{-4})}|_{s=0}\} \,(k=2)$,\linebreak$ -2^{5}3\pi^{-1}\log2\,(k=3),\ 2^{-1}\{3-86\log2-6\log\pi-72\tfrac{d}{ds}\zeta(s)|_{s=-1}\}\,(k=4),\ -2^{7}3\cdot7^{-2}\pi^{-2}\zeta(3)\,(k=5),\ 2\pi^{-2}L(2,\chi_{-4})\,(k=6),\ 2^{8}3^{3}7\cdot31^{-2}\pi^{-3}\zeta(5)\,(k=7)$ and $2^{-5}3^{-1}\cdot 2153\zeta(3)\,(k=8)$.
\end{document} |
\begin{document}
\maketitle \Opensolutionfile{ans}[answers_lecture_4]
\fbox{\fbox{\parbox{5.5in}{ \textbf{Problem:}\\ How to find invariants of singularities of a $G$-structure? }}}
\section*{Simple example} Let $V = (v^1,v^2)$ be a vector field on $\mathbb{R}^2$. If $V(x_0) \ne 0$, then $V$ restricted to some neighborhood $U(x_0)$ is equivalent to $\partial_1$ (by the rectification theorem); hence all nonvanishing vector fields are locally equivalent and there are no invariants.
If $V(x_0) = 0$ and $W(x_0) = 0$, but the matrix $\|\partial_i V^j(x_0)\|$ is not similar to the matrix $\|\partial_i W^j(x_0)\|$, then $V$ and $W$ are not locally equivalent, so invariants do arise.
Generally, \emph{non-regular points have their own invariants}.
\section{Jets of smooth maps} Let $M$ and $N$ be smooth manifolds, and let $x \in M$ be a point. We will denote by $f : (M,x) \to N$ a smooth map which is defined in an open neighborhood $U$ of $x$. By $D(f)$ we will denote the domain $U$.
Denote by $C^\infty((M,x),N)$ the set of all smooth maps $f : (M,x) \to N$. On the set $C^\infty((M,x),N)$ we introduce the equivalence relation: we say that $f$, $g$ in $C^\infty( (M,x),N)$ are equivalent at $x \in M$ ($f \sim_x g$) if \begin{equation}
\exists W \in \mathcal{U}_x, \quad W \subset D(f) \cap D(g) \text{ such that } f|_W = g|_W.
\label{eq:equivalence_between_local_maps} \end{equation}
\begin{definition} \label{def:germ_of_map} A \emph{germ of a map} $f \in C^\infty( (M,x) ,N)$ at the point $x$ is the equivalence class of $f \in C^\infty((M,x),N)$ with respect to $\sim_x$ denoted by $\langle f \rangle_x$. \end{definition} We set \begin{multline} \mathcal{G}_x(M,N) = C^\infty( (M,x) ,N) / \sim_x = \\ = \left\{ \langle f \rangle_x \mid f \in C^\infty( (M,x) ,N) \right\}. \label{eq:set_of_germs} \end{multline}
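For instance (a simple illustration with hypothetical maps), take $M=N=\mathbb{R}$, $f(x)=x$ and $g(x)=x+e^{-1/x^{2}}$ (with $g(0)=0$). Since $g-f$ vanishes only at $x=0$, the maps $f$ and $g$ do not coincide on any open set, so $\langle f\rangle_{0}\ne\langle g\rangle_{0}$, even though, as we will see below, $f$ and $g$ have the same $k$-jet at $0$ for every $k$.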
If $M_1$, $M_2$, and $M_3$ are smooth manifolds, and $f_1 \in C^\infty((M_1, x_1),M_2)$, $f_2 \in C^\infty((M_2,x_2),M_3)$ with $f_1(x_1) = x_2$, then we can define the composition of germs as follows: \begin{eqnarray} \langle f_2 \rangle_{x_2} \circ \langle f_1 \rangle_{x_1} = \langle f_2 \circ f_1 \rangle_{x_1} \label{eq:comp_of_germs} \end{eqnarray}
Another equivalence relation $\sim_k$ on the set $C^\infty( (M,x),N)$ is introduced as follows: for $f,g \in C^\infty((M,x),N)$ such that $f(x)=g(x)$ we take a coordinate system $(U,x^i)$ in a neighborhood $U$ of $x$ and a coordinate system $(V,y^\alpha)$ in a neighborhood $V$ of $y=f(x)=g(x)$. Then we say that $f$ and $g$ are equivalent ($f \sim_k g$) if the Taylor series of the coordinate representations of $f$ and $g$ coincide up to the order $k$. One can prove that this equivalence relation does not depend on the choice of coordinate systems. \begin{definition} The equivalence class $j^k_x f$ of $f$ is called the \emph{$k$-jet of the map $f$ at the point $x$}. \end{definition} The set of all $k$-jets of maps $f \in C^\infty( (M,x) ,N)$ will be denoted by $J^k_x(M,N)$.
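Continuing the illustration above (with the hypothetical maps $f(x)=x$ and $g(x)=x+e^{-1/x^{2}}$ on $\mathbb{R}$): all derivatives of $e^{-1/x^{2}}$ vanish at $0$, so $j^k_0 f=j^k_0 g$ for every $k$, although $\langle f\rangle_0\ne\langle g\rangle_0$; thus passing from germs to $k$-jets forgets information. Conversely, $f(x)=\sin x$ and $g(x)=x$ satisfy $j^2_0 f=j^2_0 g$ but $j^3_0 f\ne j^3_0 g$, since their Taylor expansions first differ in the cubic term.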
It is clear that if $ f, g \in C^\infty( (M,x) ,N)$ determine the same germ at $x$, that is if $f \sim_x g$, then $j^k_x f = j^k_x g$ for any $k$. Also, the composition of maps, or the composition of germs, defines the composition of $k$-jets: \begin{equation} j^k_{f_1(x)} (f_2) \circ j^k_x(f_1) = j^k_x( f_2 \circ f_1). \label{eq:composition_of_jets} \end{equation}
For the set of all $k$-jets $J^k(M,N)=\bigcup_{x\in M}J^k_x(M,N)$ one can define two natural projections: \begin{eqnarray} \pi_0 : J^k(M,N) \to M, \quad j^k_{x}f \mapsto x, \\ \pi_1 : J^k(M,N) \to N, \quad j^k_{x}f \mapsto f(x). \label{eq:projections_on_jet_space} \end{eqnarray} The set $J^k(M,N)$, where $\dim M = m$ and $\dim N = n$, can be endowed with a manifold structure so that these projections are bundle projections. We set \begin{equation} J^k_{x,y}(M,N) = \left\{ j^k f \in J^k(M,N) \mid \pi_0(j^k f) = x, \pi_1(j^k f) = y \right\}. \label{eq:set_of_jets_joining_two_points} \end{equation} The typical fiber of the bundle $(J^k(M,N),\pi_0,M)$ is the manifold $J^k_{0,0}(\mathbb{R}^m,\mathbb{R}^n) \times N$, and that of the bundle $(J^k(M,N),\pi_1,N)$ is $J^k_{0,0}(\mathbb{R}^m,\mathbb{R}^n) \times M$.
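As a dimension check (assuming the standard identification of a $k$-jet with the collection of Taylor coefficients of order $\le k$ in local coordinates),
\begin{equation*}
\dim J^k(M,N)=m+n\binom{m+k}{k},
\end{equation*}
since a jet is determined by its source point ($m$ coordinates) together with the coefficients of an $\mathbb{R}^n$-valued polynomial of degree at most $k$ in $m$ variables; this is consistent with the description of the typical fiber above.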
The manifold $J^k(M,M)$ endowed with the projections $\pi_0$ and $\pi_1$, and the composition of $k$-jets is a groupoid.
If $\pi : E \to M$ is a fiber bundle, then the set of $k$-jets of sections of the bundle $E$ is denoted by $J^k(E)$.
\section{Bundle of germs of diffeomorphisms. Coframe bundle of $k$th order}
\subsection{Bundle of germs of diffeomorphisms} Let $\mathcal{D}(m)$ be the group of all germs of local diffeomorphisms of $\mathbb{R}^m$ at $0 \in \mathbb{R}^m$: \begin{equation*} \mathcal{D}(m) = \left\{ \langle \varphi\rangle_0 \mid \varphi : (\mathbb{R}^m,0) \to (\mathbb{R}^m,0) \text{ is a local diffeomorphism} \right\} \end{equation*} endowed with the operation of composition of germs: $\langle \varphi_1 \rangle_0 \circ \langle \varphi_2 \rangle_0 = \langle \varphi_1 \circ \varphi_2 \rangle_0$.
Now let us consider \begin{equation*} \mathcal{B}(M) = \left\{ \langle f \rangle_x \mid f : (M,x) \to (\mathbb{R}^m,0) \text{ is a local diffeomorphism }\right\} \end{equation*} We have the natural projection \begin{equation*} \pi : \mathcal{B}(M) \to M, \quad \pi(\langle f\rangle_x)=x, \end{equation*} and the natural right action of $\mathcal{D}(m)$ on $\mathcal{B}(M)$: \begin{equation*} \langle f\rangle_x \cdot \langle \varphi\rangle_0 = \langle \varphi^{-1} \circ f\rangle_x, \end{equation*} so $(\mathcal{B}(M),\pi,M)$ can be considered as a ``principal fiber bundle''.
Let $(U, u : U \to V \subset \mathbb{R}^m)$ be a coordinate map. This map determines a ``trivialization'' of the bundle $\mathcal{B}(M)$ over $U$. Let \begin{equation*} t_a : \mathbb{R}^m \to \mathbb{R}^m, \quad t_a(v) = v + a \end{equation*} be the parallel translation of $\mathbb{R}^m$. Then \begin{equation*} \mathcal{U}: \pi^{-1}(U) \to U \times \mathcal{D}(m), \quad \langle f \rangle_x \to \left(x, \langle t_{-u(x)} \circ u \circ f^{-1} \rangle_0\right) \end{equation*} gives us the required trivialization. The inverse map is \begin{equation*} \mathcal{U}^{-1} : U \times \mathcal{D}(m) \to \pi^{-1}(U), \quad (x, \langle \varphi \rangle_0) \to \langle \varphi^{-1} \circ t_{-u(x)} \circ u \rangle_x. \end{equation*}
Now assume that we have two coordinate systems $(U,u)$ and $(U,\bar u)$ on $M$ with the same domain $U$. Then \begin{equation*} \bar{\mathcal{U}} \circ \mathcal{U}^{-1} : U \times \mathcal{D}(m) \to U \times \mathcal{D}(m), \quad (x, \langle\varphi\rangle_0) \to \left(x, \langle t_{-\bar u(x)} \circ \bar u \circ u^{-1} \circ t_{u(x)}\rangle_0 \circ \langle\varphi\rangle_0\right), \end{equation*} so the gluing functions of the atlas of the ``principal bundle'' $(\mathcal{B}(M),\pi,M)$ constructed from an atlas $(U_\alpha,u_\alpha)$ of the manifold $M$ are \begin{equation*} g_{\beta\alpha} : U_{\alpha} \cap U_\beta \to \mathcal{D}(m), \quad
g_{\beta\alpha}(x) = t_{-u_\beta(x)} \circ u_\beta \circ u^{-1}_\alpha \circ t_{u_\alpha(x)}. \end{equation*}
\begin{remark} In what follows we will use unordered multiindices. We denote by $\mathcal{I}(m)$ the set of all unordered multiindices $I = \left\{ i_1 i_2 \dots i_k \right\}$, where $1 \le i_l \le m$, for all $l=\overline{1,k}$.
The number $k$ is called the length of the multiindex and is denoted by $|I|$.
Also, we set $I_k(m) = \left\{ I \in \mathcal{I}(m) \mid |I| = k \right\}$. \end{remark}
\subsection{Differential group of $k$th order}
The $k$th order \emph{differential group} is the set of $k$-jets: \begin{equation*} D^k(m) = \left\{ j^k_0(\varphi) \mid \varphi : (\mathbb{R}^m,0) \to (\mathbb{R}^m,0) \text{ is a local diffeomorphism } \right\}. \end{equation*}
On the set $D^k(m)$ consider the operation \begin{equation} D^k(m) \times D^k(m) \to D^k(m), \quad j^k_0(\varphi) \cdot j^k_0(\psi) = j^k_0(\varphi \circ \psi). \label{eq:operation_in_kth_order_differential_group} \end{equation} Then $(D^k(m),\cdot)$ is a group.
Denote $\varphi^k_I = \left.\partial_I\right|_0 \varphi^k$. Then \begin{equation} \mathcal{C}^k : D^k(m) \to \mathbb{R}^N, \quad j^k_0(\varphi) \to \{\varphi^k_I\} \label{eq:coordinates_on_kth_order_differential_group} \end{equation} is a one-to-one map of $D^k(m)$ onto the open set in $\mathbb{R}^N$ determined by the inequality
$\det\|\varphi^k_i\| \ne 0$. In this way we get globally defined coordinates on $D^k(m)$, which will be called \emph{natural coordinates}. With respect to the natural coordinates the product \eqref{eq:operation_in_kth_order_differential_group} is written in terms of polynomials and is therefore a smooth map. Thus $D^k(m)$ is a Lie group.
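Here the number $N$ can be made explicit (under the identification of a jet with its Taylor coefficients, as above): since $\varphi(0)=0$, the multiindices $I$ range over $1\le|I|\le k$, so
\begin{equation*}
N=\dim D^k(m)=m\left(\binom{m+k}{k}-1\right);
\end{equation*}
for $k=2$ this gives $m^2+m^2(m+1)/2$, in agreement with the count for $D^2(m)$ given below.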
\subsection{Bundle of $k$th order holonomic coframes} For an $m$-dimensional manifold $M$ consider the set \begin{equation*} B^k(M) = \left\{ j^k_x{f} \mid f : (M,x) \to (\mathbb{R}^m,0) \text{ is a local diffeomorphism }\right\} \end{equation*} whose elements are called \emph{$k$-coframes} or \emph{coframes of order $k$} of the manifold $M$. We have the projection \begin{equation*} \pi^k : B^k(M) \to M, \quad \pi^k(j^k_x f)=x. \end{equation*}
On the set $B^k(M)$ we have the right $D^k(m)$-action: \begin{equation*} j^k_x(f) \cdot j^k_0(\varphi) = j^k_x(\varphi^{-1} \circ f), \end{equation*} and one can easily prove that this action is free and that its orbits are the fibers of the projection $\pi^k$.
\subsubsection{Trivializing charts of $B^k(M)$. Gluing maps} In what follows we set $t_a : \mathbb{R}^m \to \mathbb{R}^m$, $t_a(x) = x + a$, the parallel translation of $\mathbb{R}^m$ with respect to $a \in \mathbb{R}^m$.
Let $(U, u : U \to V \subset \mathbb{R}^m)$ be a coordinate chart on $M$. We have the one-to-one map \begin{multline} \mathcal{T}^k: (\pi^k)^{-1}(U) \to U \times D^k(m), \\ j^k_x(f) \to \left(x, j^k_0(t_{-u(x)} \circ u \circ f^{-1}) \right). \label{eq:trivialization_of_kth_order_coframe_bundle} \end{multline} The map $\mathcal{T}^k$ is $D^k(m)$-equivariant because \begin{multline*} \mathcal{T}^k( j^k_x f \cdot j^k_0\varphi) = \mathcal{T}^k \left(j^k_x (\varphi^{-1} \circ f)\right) =
\left(x, j^k_0(t_{-u(x)} \circ u \circ f^{-1} \circ \varphi)\right)= \\ =\left(x, j^k_0(t_{-u(x)} \circ u \circ f^{-1}) \cdot j^k_0\varphi \right)= \left(x, j^k_0(t_{-u(x)} \circ u \circ f^{-1}) \right) \cdot j^k_0\varphi. \end{multline*} Since $D^k(m)$ is a Lie group, the map $\mathcal{T}^k$ defines a trivializing chart for the map $\pi^k : B^k(M) \to M$.
Therefore, for each atlas $(U_\alpha,u_\alpha)$, we construct the atlas of trivializing charts $(U_\alpha,\mathcal{T}^k_\alpha)$. Find the gluing maps for this atlas.
Assume that $(U_\alpha,u_\alpha)$, $(U_\beta,u_\beta)$ are two coordinate systems on $M$, and $U_\alpha \cap U_\beta \ne \emptyset$. Then \begin{equation*} j^k_0(t_{-u_\beta(x)} \circ u_\beta \circ f^{-1}) = j^k_0(t_{-u_\beta(x)} \circ u_\beta \circ u_\alpha^{-1} \circ t_{u_\alpha(x)}) \cdot j^k_0(t_{-u_\alpha(x)} \circ u_\alpha \circ f^{-1}) \end{equation*} Therefore, the gluing maps are \begin{equation} g_{\beta\alpha} : U_\alpha \cap U_\beta \to D^k(m), \quad g_{\beta\alpha}(x) = j^k_0(t_{-u_\beta(x)} \circ u_\beta \circ u_\alpha^{-1} \circ t_{u_\alpha(x)}) \label{eq:gluing_maps_for_B_k} \end{equation}
Since the gluing functions are smooth, we conclude that $\pi^k : B^k(M) \to M$ is a $D^k(m)$-principal bundle over $M$ which is called \emph{the bundle of $k$-coframes of $M$} or \emph{the bundle of coframes of order $k$ of $M$}.
For a coordinate chart $(U,u)$ there is a section \begin{equation} s : U \to B^k(M), \quad s(x) = j^k_x u, \label{eq:natural_sections_of_B_k} \end{equation} which is called the \emph{natural $k$-coframe field} associated with the coordinate chart $(U,u)$.
\subsubsection{Natural coordinates on $B^k(M)$} Let $(U,u)$ be a coordinate chart on $M$, and let $\mathcal{T}^k$ be the corresponding trivialization of $B^k(M)$. Then \begin{equation} (u \times \mathcal{C}^k) \circ \mathcal{T}^k : (\pi^k)^{-1}(U) \to \mathbb{R}^m \times \mathbb{R}^N
\end{equation} gives \emph{natural local coordinates on} $B^k(M)$.
The section $s : U \to B^k(M)$ \eqref{eq:natural_sections_of_B_k} is written with respect to this coordinate system as follows: \begin{equation} s(u^k) = (u^k, \delta^k_i, 0, \dots, 0). \label{eq:natural_section_wrt_natural_coordinates} \end{equation}
\begin{remark} We have the natural projections $\pi^k_l : B^k(M) \to B^l(M)$, $k \ge l$, which are, in turn, principal fiber bundles with the group $H^k_l$ which is the kernel of the natural homomorphism $D^k(m) \to D^l(m)$. \end{remark}
\subsection{Case $k=1$} The Lie group $D^1(m)$ is isomorphic to $GL(m)$, and $B^1(M)=B(M)$ is the coframe bundle of $M$.
\subsection{Case $k=2$} \subsubsection{The group $D^2(m)$} Elements of the group $D^2(m)$ are the $2$-jets of germs $\varphi \in \mathcal{D}(m)$. The coordinate system \eqref{eq:coordinates_on_kth_order_differential_group} in this case is \begin{equation}
j^2_0\varphi \longrightarrow (\varphi^k_i, \varphi^k_{ij}), \text{ where } \varphi^k_i = \frac{\partial \varphi^k}{\partial u^i}(0), \quad \varphi^k_{ij} = \frac{\partial^2 \varphi^k}{\partial u^i \partial u^j}(0).
\end{equation} Here $u^i$ are coordinates on $\mathbb{R}^m$, and it is clear that $\varphi^k_{ij} = \varphi^k_{ji}$. From this follows that $\dim D^2(m) = m^2 + m^2(m+1)/2$.
Now, if $j^2_0\varphi \rightarrow (\varphi^k_i, \varphi^k_{ij})$, $j^2_0\psi \rightarrow (\psi^k_i, \psi^k_{ij})$, and \begin{equation*} j^2_0\psi \cdot j^2_0\varphi = j^2_0(\psi\circ\varphi) \rightarrow (\eta^k_i, \eta^k_{ij}),
\end{equation*} by the chain rule we get that \begin{equation} \eta^k_i = \psi^k_s \varphi^s_i, \quad \eta^k_{ij} = \psi^k_{pq} \varphi^p_i \varphi^q_j + \psi^k_s \varphi^s_{ij} \label{eq:product_D_2} \end{equation} These formulas express the product in the group $D^2(m)$ in terms of the natural coordinates $(\varphi^k_i,\varphi^k_{ij})$.
\subsubsection{The bundle $B^2(M)$} The elements of $B^2(M)$ are the $2$-jets of local diffeomorphisms $f : (M,x) \to (\mathbb{R}^m,0)$. The natural coordinates in this case can be found as follows. Let $(U,u)$ be a coordinate chart on $M$. Then, for any $j^2_x f$ with $x \in U$, the diffeomorphism \begin{equation*} f \circ u^{-1} : (\mathbb{R}^m, u(x)) \to (\mathbb{R}^m,0) \end{equation*} can be written as $w^k=f^k(u^i)$, where $w^k$ are standard coordinates on $\mathbb{R}^m$, and $f^k(u^i(x)) = 0$. Then take the inverse diffeomorphism $u^k = \widetilde{f}^k(w^i)$, and the local diffeomorphism $t_{-u(x)} \circ u \circ f^{-1}$ has the form $\widetilde{f}^k(w^i)-u^k(x)$. Therefore, the natural coordinates of $j^2_x f$ induced by a coordinate chart $(U,u)$ on $M$ are $(u^k,u^k_i,u^k_{ij})$, where \begin{equation} u^k_i = \frac{\partial\widetilde{f}^k}{\partial w^i}(0), \quad u^k_{ij} = \frac{\partial^2\widetilde{f}^k}{\partial w^i \partial w^j}(0). \label{eq:natural_coordinates_B_2} \end{equation} The derivatives of $\widetilde{f}$ at $0$ can be expressed in terms of the derivatives of $f$ at $u(x)$. If we denote \begin{equation*} f^k_i = \frac{\partial f^k}{\partial u^i}(u(x)), \quad f^k_{ij} = \frac{\partial^2 f^k}{\partial u^i \partial u^j}(u(x)), \end{equation*} then \begin{equation*} u^k_i = \widetilde{f}^k_i, \quad u^k_{ij} = - \widetilde{f}^k_s f^s_{lm} \widetilde{f}^l_i \widetilde{f}^m_j. \end{equation*}
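The last formula is obtained by differentiating the identity $\widetilde{f}^k(f(u))=u^k$ twice. The first differentiation gives $\widetilde{f}^k_s f^s_i=\delta^k_i$ (so $\widetilde{f}^k_s$ is the matrix inverse of $f^k_s$), and the second gives
\begin{equation*}
\frac{\partial^2\widetilde{f}^k}{\partial w^s\partial w^t}(0)\,f^s_i f^t_j+\widetilde{f}^k_s f^s_{ij}=0 ;
\end{equation*}
contracting with $\widetilde{f}^i_l\widetilde{f}^j_m$ and relabeling the indices yields $u^k_{ij}=-\widetilde{f}^k_s f^s_{lm}\widetilde{f}^l_i\widetilde{f}^m_j$.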
With respect to the natural coordinate system the $D^2(m)$-action is written as follows: \begin{equation} (u^k, u^k_i, u^k_{ij}) \cdot (\varphi^k_i, \varphi^k_{ij}) = (u^k, u^k_s \varphi^s_i, u^k_{pq} \varphi^p_i \varphi^q_j + u^k_s \varphi^s_{ij}). \label{eq:D_2(m)-action_wrt_natural_coordinates} \end{equation}
Let us express the gluing maps in terms of the natural coordinates. If $(U,u)$ and $(U',u')$ are coordinate charts on $M$ such that $U \cap U' \ne \emptyset$, then from \eqref{eq:gluing_maps_for_B_k} it follows that the corresponding gluing map is \begin{equation*} g : U \cap U' \to D^2(m), \quad g(x) = j^2_0(t_{-u'(x)} \circ u' \circ u^{-1} \circ t_{u(x)}) \end{equation*} If the coordinate change $u' \circ u^{-1}$ is written as $v^k = v^{k}(u^i)$, then we have to take derivatives at $0$ of the map $v^{k}(u^i+u^i(x))-v^k(u^i(x))$, which are equal to the derivatives of the functions $v^k$ at $u(x)$. Therefore, \begin{equation} g : U \cap U' \to D^2(m), \quad g(x) = \left(\frac{\partial v^k}{\partial u^i}(u(x)), \frac{\partial^2 v^k}{\partial u^i \partial u^j}(u(x))\right). \label{eq:gluing_maps_for_B_2} \end{equation}
Therefore, \emph{the bundle $B^2(M) \to M$ is the $D^2(m)$-principal bundle with gluing maps \eqref{eq:gluing_maps_for_B_2}}.
\section{First prolongation of a $G$-structure} \subsection{First prolongation of an integrable $G$-structure} Let $P(M) \to M$ be an integrable $G$-structure, that is, a subbundle of $B(M)$ for which there exists an atlas $\mathcal{A} = (U_\alpha,u_\alpha)$ such that the natural coframes of the atlas are sections of $P(M)$, or, equivalently,
such that every coordinate change $u^{k'} = u^{k'}(u^i)$ has the property that $\|\frac{\partial u^{k'}}{\partial u^k}\| \in G$.
In this case, we can specify the set $\mathcal{B}_G$ of local diffeomorphisms $f : (M,x) \to (\mathbb{R}^m,0)$ such that, for each coordinate map $u$ of the atlas $\mathcal{A}$, the local diffeomorphism $f \circ u^{-1}$ has its Jacobi matrix in $G$ at all points of its domain. It is clear that \begin{equation}
P(M) = \left\{ j^1_x f \mid \| \frac{\partial (f \circ u^{-1})^k}{\partial u^i}|_{u(x)} \| \in G \right\}
\end{equation} Now consider \begin{equation} P^1(M) = \left\{ j^2_x f \mid f \in \mathcal{B}_G \right\}. \label{eq:first_holonomic_prolongation_of_P} \end{equation} \begin{remark} Note that if $j^2_x f$ is an element of $P^1(M)$, then the Jacobi matrix
$\|\left.\frac{\partial (f\circ u^{-1})^k}{\partial u^i}\right|_{u(x)}\|$ is an element of $G$. However, the converse is not true, because by definition the Jacobi matrix must belong to $G$ at each point of the domain, which imposes conditions on the second derivatives of $f \circ u^{-1}$ (see below). \end{remark}
In the same manner introduce the set $\mathcal{D}_G$ of local diffeomorphisms $\varphi : (\mathbb{R}^m,0) \to (\mathbb{R}^m,0)$ whose Jacobi matrices are elements of $G$ at all points of their domains. Consider the Lie subgroup of $D^2(m)$: \begin{equation} G^1 = \left\{ j^2_0 \varphi \in D^2(m) \mid \varphi \in \mathcal{D}_G \right\} \label{eq:first_holonomic_prolongation_of_G} \end{equation} The Lie subgroup $G^1 \subset D^2(m)$ is called the \emph{first holonomic prolongation of the group} $G$.
The subset $P^1(M)$ is a submanifold of $B^2(M)$, and is the total space of a principal subbundle of $B^2(M) \to M$ with the subgroup $G^1 \subset D^2(m)$. This subbundle is called the \emph{first holonomic prolongation of the integrable $G$-structure $P$}.
It is also clear that, for the atlas $\mathcal{A}$, the gluing map \eqref{eq:gluing_maps_for_B_2} takes values in the subgroup $G^1$. Therefore, \emph{an integrable $G$-structure defines a reduction of the principal bundle $B^2(M)$ to the structure group $G^1 \subset D^2(m)$}.
\subsection{Algebraic structure of the Lie group $G^1$} We have the surjective Lie group morphism \begin{equation} p^1 : G^1 \to G, \quad j^2_0 \varphi \mapsto j^1_0 \varphi. \label{eq:surjective_morphism_G_1_to_G} \end{equation} Let us find the kernel of $p^1$. Let $\varphi : (\mathbb{R}^m,0) \to (\mathbb{R}^m,0)$ be an element of $\mathcal{D}_G$ with $j^2_0\varphi\in\ker p^1$.
Then the map $g(u^k) = \|\frac{\partial \varphi^k}{\partial u^i}\|$ takes values in $G$, and $g(0) = I \in G$ because $j^1_0\varphi$ is the identity. Therefore, \begin{equation*}
\left.\frac{\partial^2\varphi^k}{\partial u^i\partial u^j}\right|_0 = \left.\frac{\partial g^k_i}{\partial u^j}\right|_0 \end{equation*} is a linear map $t : \mathbb{R}^m \cong T_0 \mathbb{R}^m \to \mathfrak{g}(G) \subset \mathfrak{gl}(m)$ with the property that $t(u)v = t(v)u$. In other words, elements of $\ker p^1$ are tensors of type $(2,1)$ on $\mathbb{R}^m$ whose components $t^k_{ij}$ satisfy: for each fixed $j$ the matrix $\|t^k_{ij}\|$ belongs to $\mathfrak{g}$, and $t^k_{ij} = t^k_{ji}$. The vector space of such tensors is called the \emph{first prolongation of the Lie algebra $\mathfrak{g}$} and is denoted by $\mathfrak{g}^1$. It follows that $\ker p^1$ is a commutative Lie group.
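Two standard examples (an added illustration) may help. For $\mathfrak{g}=\mathfrak{gl}(m)$ the symmetry condition is the only one, so $\mathfrak{gl}(m)^{1}$ consists of all symmetric $t^k_{ij}$ and has dimension $m^{2}(m+1)/2$. For $\mathfrak{g}=\mathfrak{so}(m)$ one has $\mathfrak{so}(m)^{1}=0$: a tensor that is symmetric in the lower indices and skew-symmetric in $k,i$ for each fixed $j$ must vanish, since
\begin{equation*}
t^k_{ij}=t^k_{ji}=-t^j_{ki}=-t^j_{ik}=t^i_{jk}=t^i_{kj}=-t^k_{ij}\ \Longrightarrow\ t^k_{ij}=0 .
\end{equation*}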
Thus we have the exact sequence of Lie groups \begin{equation*} 0 \to \mathfrak{g}^1 \to G^1 \to G \to e, \label{eq:exact_splitting_sequence} \end{equation*}
This sequence admits a splitting: for any $\|g^k_i\| \in G$ we take the linear diffeomorphism $\varphi^k(u^i) = g^k_i u^i$, which evidently lies in $\mathcal{D}_G$, and set $s(j^1_0\varphi) = j^2_0\varphi$.
By general group theory, the exact splitting sequence \eqref{eq:exact_splitting_sequence} determines the right action of $G$ on $\mathfrak{g}^1$: $R_g t = s(g^{-1}) \cdot t \cdot s(g)$, and $G^1$ is the extension of $G$ by the commutative group $\mathfrak{g}^1$ with respect to the action $R$. This means that we have the group isomorphism \begin{equation} G^1 \to G \times \mathfrak{g}^1, \quad g^1 \mapsto \left(p^1(g^1),\ s\left( p^1( (g^1)^{-1})\right)g^1\right), \label{eq:G_1_as_extension} \end{equation} therefore \begin{equation} G^1 \cong G \times \mathfrak{g}^1, \text{ and } (g_1,t_1) \cdot (g_2,t_2) = (g_1g_2, R(g_2)t_1 + t_2). \label{eq:G_1_as_extension_1} \end{equation}
Let us express this representation of $G^1$ in terms of the natural coordinates. We have $p^1(\varphi^k_i,\varphi^k_{ij})=\varphi^k_i$ and $s(\varphi^k_i) = (\varphi^k_i,0)$. It follows that the isomorphism \eqref{eq:G_1_as_extension} is \begin{equation*} (\varphi^k_i, \varphi^k_{ij}) \to (g^k_i, t^k_{ij}) \text{ with } g^k_i = \varphi^k_i, \ t^k_{ij} = \widetilde{\varphi}^k_s \varphi^s_{ij}. \end{equation*} Thus we get \emph{algebraic coordinates} $(g^k_i,t^k_{ij})$ on $G^1$ which are adapted to the algebraic structure of $G^1$.
The right action $R$ with respect to the natural coordinates is written as follows (we use \eqref{eq:product_D_2}): \begin{equation*} (\widetilde{\varphi}^k_i, 0) \cdot (\delta^k_i, \psi^k_{ij}) \cdot (\varphi^k_i,0) = (\delta^k_i, \widetilde{\varphi}^k_s \psi^s_{lm} \varphi^l_i \varphi^m_j). \end{equation*} At the same time, for the elements $(\varphi^k_i,0)$ and $(\delta^k_i,\psi^k_{ij})$ the algebraic coordinates coincide with the natural ones; they are $(g^k_i = \varphi^k_i,\ 0)$ and $(\delta^k_i,\ t^k_{ij} = \psi^k_{ij})$, respectively. Therefore, with respect to the algebraic coordinates the product of the group $G^1$ looks like (see \eqref{eq:G_1_as_extension_1}): \begin{equation} (g^k_i, t^k_{ij}) \cdot (h^k_i, q^k_{ij}) = (g^k_s h^s_i, \widetilde{h}^k_s t^s_{pq} h^p_i h^q_j + q^k_{ij}). \label{eq:product_G_1_wrt_algebraic_coordinates} \end{equation}
\subsection{Algebraic coordinates on $B^2(M)$. Description of $P^1(M)$ with respect to algebraic coordinates} One can easily see that $D^2(m) = (GL(m))^1$. Therefore, we can consider the algebraic coordinates on $D^2(m)$ which give rise to the \emph{algebraic coordinates on the total space $B^2(M)$}. Namely, if $(U,u)$ is a coordinate chart on $M$, and $(u^k, u^k_i, u^k_{ij})$ are the corresponding natural coordinates on $(\pi^2)^{-1}(U)$, for the \emph{algebraic coordinates} on $(\pi^2)^{-1}(U)$ we take \begin{equation} (u^k, p^k_i, p^k_{ij}) \text{ where } p^k_i = u^k_i, p^k_{ij} = \widetilde{u}^k_s u^s_{ij}. \label{eq:algebraic_coordinates_on_B_2} \end{equation} In fact, we change coordinates on the second factor of $U \times D^2(m) \cong (\pi^2)^{-1}(U)$.
With respect to the algebraic coordinates the first prolongation $P^1$ of an integrable $G$-structure $P$ is described in the following way: \begin{equation}
\left. P^{1} \right|_U = \left\{ (u^k, p^k_i, p^k_{ij}) \mid \|p^k_i\| \in G, \|p^k_{ij}\| \in \mathfrak{g}^1 \right\}. \label{eq:P_1_wrt_algebraic_coordinates} \end{equation}
If we have two coordinate charts of the atlas $\mathcal{A}$ on $M$ and $v^k = v^k(u^i)$ is the coordinate change, then the corresponding gluing map \eqref{eq:gluing_maps_for_B_k} of trivializing charts of $P^1$ is written with respect to the algebraic coordinates as follows: \begin{equation} g : U \cap U' \to G^1, \quad g(x) = \left(\frac{\partial v^k}{\partial u^i}(u(x)), \frac{\partial u^k}{\partial v^s}(v(x)) \frac{\partial^2 v^s}{\partial u^i \partial u^j}(u(x))\right). \label{eq:gluing_map_of_P_2_wrt_algebraic_coordinates} \end{equation}
\subsection{$P^1(M)$ in terms of $P(M)$}
\section{First prolongation of $G$-structure and associated bundles.}
Let us now consider the constructions of the previous section for cases $k=1$ and $k=2$.
The case $k=1$ is rather simple. We have $D^1(m) \cong GL(m)$ and $B^1(M)$ is the coframe bundle of $M$, because to each $j^1_x f$ we can put in correspondence the coframe $\{f^* du^i\}$ at $x \in M$.
So we will consider the case $k=2$ in detail.
\subsection{First prolongation of the Lie subgroup $G \subset GL(m)$} \label{subsec:first_prolongation_of_lie_subgroup_GL(m)}
Now let us consider the set $\widetilde{D}^2(m) = \left\{(\varphi^k_i, \varphi^k_{ij})\right\}$, where $\|\varphi^k_i\|$ is an invertible matrix and the $\varphi^k_{ij}$ are not necessarily symmetric with respect to the lower indices. The set $\widetilde{D}^2(m)$ endowed with the operation $*$ given by \eqref{eq:product_D_2} is a Lie group called the \emph{nonholonomic differential group of second order}. We will also call it the \emph{first prolongation of the group} $GL(m)$ and denote it by $GL^{(1)}(m)$.
On the group $GL^{(1)}(m)$ we can introduce another coordinate system: \begin{equation} g^k_j = \varphi^k_j, \quad a^k_{ij} = \widetilde{\varphi}^k_s \varphi^s_{ij} \label{eq:new_coorinates_differential_group} \end{equation} Then, with respect to these coordinates, by \eqref{eq:product_D_2}, we see that the product $*$ can be written as follows: \begin{equation} (g^k_i, a^k_{ij}) * (h^k_i, b^k_{ij}) = (g^k_s h^s_i, \widetilde{h}^k_s a^s_{pq} h^p_i h^q_j + b^k_{ij}). \label{eq:product_new_coordinates} \end{equation} These formulas can be written in a matrix form. To do this, we consider $a^k_{ij}$ as a map $a : \mathbb{R}^m \to \mathfrak{gl}(m)$, $w \mapsto a(w)$, with $(a(w))^k_i = a^k_{ij}w^j$. Then \eqref{eq:product_new_coordinates} takes the form \begin{equation} (g,a)*(h,b) = (g h, ad\, h^{-1}\, a \circ h + b). \label{eq:product_matrix_form} \end{equation} Therefore \begin{equation*} GL^{(1)}(m) \cong (GL(m)\times Hom_{\mathbb{R}}(\mathbb{R}^m,\mathfrak{gl}(m)),*), \end{equation*} where $*$ is defined in \eqref{eq:product_matrix_form}. \begin{remark} The vector space $Hom_{\mathbb{R}}(\mathbb{R}^m,\mathfrak{gl}(m))$ is a right $GL(m)$-module with respect to the action \begin{equation} \forall g \in GL(m), \quad a \in Hom_{\mathbb{R}}(\mathbb{R}^m,\mathfrak{gl}(m)), \quad g \cdot a = ad\, g^{-1}\, a\, g. \label{eq:hom(R_m,gl(m))_is_a_right_GL(m)_module} \end{equation} The group $(GL(m)\times Hom_{\mathbb{R}}(\mathbb{R}^m,\mathfrak{gl}(m)),*)$ is the extension of the group $GL(m)$ by the right $GL(m)$-module $Hom_{\mathbb{R}}(\mathbb{R}^m,\mathfrak{gl}(m))$. This fact motivates the definition of the first prolongation of a subgroup $G \subset GL(m)$. \end{remark}
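As a quick consistency check of the group law \eqref{eq:product_new_coordinates} (an elementary verification): the identity element is $(\delta^k_i,0)$, and the inverse of $(g^k_i,a^k_{ij})$ is obtained by solving $(g,a)*(h,b)=(\delta,0)$, which gives
\begin{equation*}
(g^k_i,a^k_{ij})^{-1}=\left(\widetilde{g}^k_i,\ -g^k_s a^s_{pq}\widetilde{g}^p_i\widetilde{g}^q_j\right).
\end{equation*}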
\begin{remark} The same considerations can be done for the holonomic jet group $D^2(m)$. \end{remark}
\subsubsection{First prolongation of a Lie subgroup $G \subset GL(m)$} The considerations of the previous subsection motivate the following definition. \begin{definition} Let $G \subset GL(m)$ be a Lie subgroup. Then the \emph{first prolongation of $G$} is the group \begin{equation} G^{(1)} = G \times Hom_{\mathbb{R}}(\mathbb{R}^m,\mathfrak{g}(G))
\end{equation} with product \begin{equation*} (g_1,a_1)(g_2,a_2) = (g_1 g_2, ad\, g_2^{-1}\, a_1\, g_2 + a_2). \end{equation*} \end{definition} \begin{remark} In this case also the vector space $Hom_{\mathbb{R}}(\mathbb{R}^m,\mathfrak{g}(G))$ is a right $G$-module with respect to the action \begin{equation} \forall g \in G, \quad a \in Hom_{\mathbb{R}}(\mathbb{R}^m,\mathfrak{g}(G)), \quad g \cdot a = ad\, g^{-1}\, a\, g. \label{eq:hom(R_m,g)_is_a_right_G_module} \end{equation} The group $(G\times Hom_{\mathbb{R}}(\mathbb{R}^m,\mathfrak{g}(G)),*)$ is the extension of the group $G$ by the right $G$-module $Hom_{\mathbb{R}}(\mathbb{R}^m,\mathfrak{g}(G))$.
Therefore we have the short exact sequence of Lie groups: \begin{equation*} 0 \to Hom_{\mathbb{R}}(\mathbb{R}^m,\mathfrak{g}(G)) \to G^{(1)} \overset{\pi}{\longrightarrow} G \to e. \end{equation*} The Lie group homomorphism $\pi: G^{(1)} \rightarrow G$ comes from the natural projection of $2$-jets onto $1$-jets. \end{remark}
\subsubsection{Another coordinate system on $B^2(M)$} Now recall that on $D^2(m)$ we can take another coordinate system $(g^k_i, a^k_{ij})$ (see \ref{eq:new_coorinates_differential_group}). With respect to the coordinates $(g^k_i, a^k_{ij})$, the $D^2(m)$-action on $B^2(M)$ is written as \begin{equation} h^k_i = \widetilde{g}^k_s f^s_i, \quad h^k_{ij} = \widetilde{g}^k_s f^s_{ij} - a^k_{lm} \widetilde{g}^l_t\widetilde{g}^m_r f^t_i f^r_j. \label{eq:action_of_D_2_wrt_coordinates_(g,a)} \end{equation}
Let us find the expression for a trivialization \begin{equation*} (\pi^2)^{-1}(U) \to U \times D^2(m) \end{equation*} with respect to the coordinates $(g,a)$. For this purpose we take the section $s(x^i) = (x^i,\delta^k_i,0)$ of $B^2(M)$ over $U$; in fact, this is the natural coframe field of order $2$, which means that it consists of the $2$-jets of the coordinate functions. Then any point $b^2 = (x^i,f^k_i,f^k_{ij})$ can be written as $b^2 = s(x) \cdot (g,a)$, and the trivialization is given by \begin{equation} b^2 = (x^i,f^k_i,f^k_{ij}) \leftrightarrow (x, (g,a)).
\end{equation} Using \eqref{eq:action_of_D_2_wrt_coordinates_(g,a)}, and the coordinate expression $s(x)$, we get \begin{equation} f^k_i = \widetilde{g}^k_i, \quad f^k_{ij} = - a^k_{lm} \widetilde{g}^l_i \widetilde{g}^m_j.
\end{equation} Now let us write the gluing functions with respect to the coordinates $(p^k_i,p^k_{ij})$ on $B^2(M)$ and $(g^k_i,a^k_{ij})$ on the group. To do this we use the coordinate change \eqref{eq:change_of_coordinates_f_to_p} on $B^2(M)$ and \eqref{eq:new_coorinates_differential_group} on the group $D^2(m)$. We have \begin{equation*} \begin{split} p^k_i = \widetilde{f}^k_i, \quad p^k_{ij} = - f^k_{lm} \widetilde{f}^l_i \widetilde{f}^m_j \\ \bar x^k_i = g^k_i, \quad \bar x^k_{ij} = g^k_s a^s_{ij} \end{split} \end{equation*} and so, \begin{equation} f^k_i = \widetilde{p}^k_i, \quad f^k_{ij} = - p^k_{lm} \widetilde{p}^l_i \widetilde{p}^m_j \label{eq:inverse_coordinate_transformation_B^2(M)} \end{equation}
\begin{equation} \begin{split} & (x^k,f^k_i,f^k_{ij}) \to (x^k, (p^k_i,p^k_{ij}) ), \text{ where } \\ &p^k_i = \widetilde{f}^k_i, \quad p^k_{ij} = - f^k_{lm} \widetilde{f}^l_i \widetilde{f}^m_j. \end{split} \label{eq:change_of_coordinates_f_to_p} \end{equation} Using these formulas, and \eqref{eq:action_of_D_2_wrt_coordinates_(g,a)}, we can write the $D^2(m)$-action with respect to the coordinates $(x^i,p^k_i,p^k_{ij})$. If $(x^i,(p^k_i,p^k_{ij}))\cdot(g,a) = (x^i,(q^k_i,q^k_{ij}))$, then \begin{equation} q^k_i = p^k_s g^s_i \quad q^k_{ij} = \widetilde{g}^k_s p^s_{lm} g^l_i g^m_j + a^k_{ij}. \label{eq:action_of_D_2_wrt_coordinates_p_and_(g,a)} \end{equation} So, as it should be, at the second argument we get the product of elements of the group $D^2(m)$ (cf. \eqref{eq:product_new_coordinates}). Hence, first of all from
we get that \begin{equation}
\widetilde{\bar p}{}^k_s g^s_i = \widetilde{p}{}^k_i, \text{ and hence } \bar p^k_i = \widetilde{g}^k_s p^s_i. \label{eq:first_order_coordinate_change} \end{equation} Now, from
with \eqref{eq:inverse_coordinate_transformation_B^2(M)}, we get \begin{equation*} - \bar p^k_{rt}\, \widetilde{\bar p}{}^r_l\widetilde{\bar p}{}^t_m g^l_i g^m_j + \widetilde{\bar p}{}^k_s g^s_t a^t_{ij} = f^k_{ij}, \end{equation*} hence follows \begin{equation*} -\bar p^k_{lm} \widetilde{p}^l_i \widetilde{p}^m_j + \widetilde{p}^k_s a^s_{ij} = f^k_{ij} = - p^k_{lm} \widetilde{p}^l_i \widetilde{p}^m_j. \end{equation*} Finally, we obtain \begin{equation*} \bar p^k_{lm} = p^k_{lm} + \widetilde{p}^k_t a^t_{lm} p^l_i p^m_j. \end{equation*} As the result, we get the following theorem.
\begin{theorem}
\label{} \end{theorem}
Note that \begin{equation} x^k_i = \widetilde{\bar x^k_i}, \label{eq:correspondence_between_x_and_bar_x} \end{equation} With this notation, we have \begin{equation} \bar f^k_i = \label{eq:change_coordinates_B_2(M)} \end{equation}
\section{Prolongation of $G$-structure} \subsection{First prolongation of integrable $G$-structure} Let $P(M,G)$ be an integrable $G$-structure; this means that there exists an atlas $(U_\alpha,u_\alpha)$ such that the natural coframes $\left\{ du^i_\alpha \right\}$ are sections of $P$.
A first prolongation of $P(M,G)$ is the subbundle in $B^2(M)$ with the total space \begin{equation}
P^1(M) = \left\{ j^2_x f \mid \left( \frac{\partial (f \circ u^{-1})^k}{\partial u^i}|_{u(x)} \right) \in G \right\}
\end{equation} and the structure group $G^1 \subset D^2(m)$ (the holonomic prolongation of $G$): \begin{equation} G^1 = \left\{ j^2_0 \varphi \in D^2(m) \mid \left(\frac{\partial\varphi^k}{\partial u^i}\right) \in G \right\}.
\end{equation}
The product in $G^1$ is induced by the chain rule: if $j^k_0 \varphi = (\varphi^k_i, \varphi^k_{i_1i_2}, \dots, \varphi^k_{i_1,\dots,i_k})$ and $j^k_0 \psi = (\psi^k_i, \psi^k_{i_1i_2}, \dots, \psi^k_{i_1,\dots,i_k})$, then $j^k_0(\psi\circ\varphi) = (\eta^k_i, \eta^k_{i_1i_2}, \dots)$ with \begin{equation} \eta^k_i = \psi^k_s \varphi^s_i, \quad \eta^k_{ij} = \psi^k_{pq} \varphi^p_i \varphi^q_j + \psi^k_s \varphi^s_{ij}, \dots
\end{equation}
\subsection{Prolongation of $G$-structure} \subsubsection{Nonholonomic jet bundle} A $k$-th order nonholonomic jet is a ``nonsymmetric'' Taylor series and is defined by coordinates: \begin{equation} f^j_0 = \left( f^j, f^j_i, f^j_{i_1 i_2}, \dots, f^j_{i_1,i_2,\dots,i_k} \right),
\end{equation} where the $f^j_{i_1 \dots i_l}$ are not assumed to be symmetric with respect to the lower indices.
\subsection{Nonholonomic differential group} The nonholonomic $k$-th order differential group is
$D^{(k)}(m) = \{\varphi^k_0 = \left( \varphi^j, \varphi^j_i, \varphi^j_{i_1 i_2}, \dots, \varphi^j_{i_1,i_2,\dots,i_k} \right) \mid \det \left(\varphi^j_i\right) \ne 0\}$,
and the product is also ``induced by the chain rule'': if $\eta^k_0 = \psi^k_0 \cdot \varphi^k_0$ and $\varphi^k_0 = (\varphi^j_i, \varphi^j_{i_1i_2}, \dots, \varphi^j_{i_1,\dots,i_k})$ $\psi^k_0 = (\psi^j_i, \psi^j_{i_1i_2}, \dots, \psi^j_{i_1,\dots,i_k})$, then $\eta^j_i = \psi^j_s \varphi^s_i$, $\eta^j_{i_1i_2} = \psi^j_{p_1p_2} \varphi^{p_1}_{i_1} \varphi^{p_2}_{i_2} + \psi^j_s \varphi^s_{i_1i_2}$, \dots
It is clear that $D^k(m)$ is a Lie subgroup of $D^{(k)}(m)$, so we have the left $D^k(m)$-action on $D^{(k)}(m)$.
A \emph{nonholonomic $k$-th order jet bundle} is the locally trivial bundle over $M$ with the fiber $D^{(k)}(m)$ associated with the $D^k(m)$-principal bundle $B^k(M)$ with respect to the left $D^k(m)$-action on $D^{(k)}(m)$.
\subsection{First prolongation of arbitrary $G$-structure} Let $P(M,G)$ be a $G$-structure. A \emph{first prolongation of $P(M,G)$} is the subbundle in $B^{(2)}(M)$ with the total space \begin{equation} P^{(1)}(M) = \left\{ f^2_0 \in B^{(2)}(M) \mid \left( f^j_i \right) \in G \right\}
\end{equation} and the structure group $G^{(1)} \subset D^{(2)}(m)$ (the nonholonomic prolongation of $G$): \begin{equation} G^{(1)} = \left\{ \varphi^2_0 \in D^{(2)}(m) \mid \left(\varphi^k_i\right) \in G \right\}.
\end{equation}
\section{First prolongation of $G$-structure in terms of coframe bundle}
The first prolongation $P^{(1)}$ of $P$ can be expressed in terms of $P$ in the following ways: \begin{equation}
P^{(1)} = \{p^1 : \mathbb{R}^m \to T_p P \mid \theta_p \circ p^1 = 1_{\mathbb{R}^m} \}, \label{eq:P_1_1} \end{equation} where $\theta$ denotes the canonical $\mathbb{R}^m$-valued form of the coframe bundle, or \begin{equation}
P^{(1)} = \{\omega : T_p P \to \mathfrak{g} \mid \omega \circ \sigma_p = 1_{\mathfrak{g}} \}, \label{eq:P_1_2} \end{equation} where $\sigma_p : \mathfrak{g} \to T_p P$ is the infinitesimal generator map of the right $G$-action, or \begin{equation} P^{(1)} = \left\{ H_p \mid H_p \oplus V_p = T_p P\right\}. \label{eq:P_1_3} \end{equation} We will mainly use the first representation \eqref{eq:P_1_1}, but note that the third representation \eqref{eq:P_1_3} says that, geometrically, $P^{(1)}$ consists of tangent subspaces transversal to the vertical subspaces, i.e., of horizontal subspaces, which are the pointwise data of connections.
The projection $\pi^1_0 : P^{(1)} \to P$ is defined in terms of \eqref{eq:P_1_1} as follows: $(p^1 : \mathbb{R}^m \to T_p P) \mapsto p$.
\begin{theorem}[Algebraic structure of $G^{(1)}$] $G^{(1)}$ is isomorphic to the extension of $G$ via the $G$-module $\mathcal{L}(\mathbb{R}^m,\mathfrak{g})$: \begin{equation} G^{(1)} = G \times \mathcal{L}(\mathbb{R}^m,\mathfrak{g}), \quad (g_1,a_1)*(g_2,a_2) = (g_1 g_2, ad\, g_2^{-1}\, a_1\, g_2 + a_2).
\end{equation} \end{theorem}
The action of $G^{(1)}$ on $P^{(1)}$ is described in the following way: for $b^1 : \mathbb{R}^m \to T_b P$ and $g^1=(g,a) \in G^{(1)}$, \begin{equation} R^{(1)}_{g^1} b^1 = dR_g \circ (b^1 \circ g) + \sigma_{b\cdot g} \circ a \circ g
\end{equation}
\section{First prolongation of equivariant map}
\subsection{First prolongation of a $G$-space} Let $V$ be a manifold; then the first prolongation of $V$ is \begin{equation} V^{(1)} = \left\{ v^1 : \mathbb{R}^m \to T_v V \mid v \in V \right\}.
\end{equation} Let $\rho : G \times V \to V$ be a left action; then the first prolongation of $\rho$ is \begin{equation} \rho^{(1)} : G^{(1)} \times V^{(1)} \to V^{(1)}, \quad \rho^{(1)}(g^1,v^1) = [dL_g \circ v^1 + \sigma_{gv}\circ a] \circ g^{-1}, \quad\text{where } g^1=(g,a).
\end{equation}
\subsection{Prolongation of an equivariant map} Let $f : P \to V$ be an equivariant map.
The prolongation of $f$ is $f^{(1)} : P^{(1)} \to V^{(1)}$, $f^{(1)}(b^1) = df_{\pi^1_0(b^1)} \circ b^1$.
An equivariant map $f : P \to V$ determines a section $s : M \to E$, where $\pi_E : E \to M$ is a bundle with standard fiber $V$ associated with $P$.
The prolongation $f^{(1)} : P^{(1)} \to V^{(1)}$ maps $b^1 \in P^{(1)}$ to the coordinates of $(s(\pi(b)),(\nabla s)(\pi(b)))$ with respect to $b$, where $b=\pi^1_0(b^1)$ and $\nabla$ is the covariant derivative corresponding to $b^1$.
\begin{example}[Simple example: vector field on $\mathbb{R}^m$] Let $V$ be a vector field on $\mathbb{R}^m$. The corresponding equivariant map is \begin{equation} f : B(\mathbb{R}^m) \to \mathbb{R}^m, \quad f(b) = \{\eta^a(V(\pi(b)))\} \quad\text{for a coframe } b = \{\eta^a\}.
\end{equation} Then the first prolongation of $f$ is defined as follows. If $b^{(1)}$ maps $e_i \in \mathbb{R}^m$ to $\frac{\partial}{\partial x^i} + \Gamma^k_{ij} \frac{\partial}{\partial x^k_j}$, then \begin{equation} f^{(1)} (b^{(1)}) = (V, \nabla(\omega) V) = (V^i, \partial_j V^i + \Gamma^i_{js} V^s).
\end{equation}
The action of $G^{(1)}$ is written as follows: \begin{equation} (g,a)(V,\nabla(\omega) V) = (\tilde g^k_i V^i, \tilde g^k_s \nabla_m V^s g^m_i + \tilde g^k_s a^s_{mj} V^j g^m_i),
\end{equation}
At $x_0$ such that $V(x_0) = 0$, we get the action \begin{equation} (g,a) (0,\nabla(\omega) V) = (0, \tilde g^k_s \nabla_m V^s g^m_i) = (0, \tilde g^k_s \partial_m V^s g^m_i)
\end{equation}
Therefore, the prolonged action coincides with the action of the group $GL(m)$ on the vector space of $m\times m$-matrices by conjugation. The invariants of this action are well known; for example, the trace and the determinant. Therefore, to find the invariants of a zero $x_0$ of a vector field $V$, we form the matrix $[\partial_i V^j(x_0)]$ and then write down the invariants of this matrix under conjugation; for example, one of them is $\det [\partial_i V^j(x_0)]$. \end{example}
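To illustrate with a concrete (hypothetical) instance: for the vector field $V=(x^{2},-x^{1})$ on $\mathbb{R}^{2}$, which vanishes at the origin, one has
\begin{equation*}
[\partial_i V^j(0)]=\begin{pmatrix}0&1\\-1&0\end{pmatrix},
\end{equation*}
so the conjugation invariants are $\operatorname{tr}=0$ and $\det=1$; any vector field whose linearization at a zero has a different trace or determinant cannot be locally equivalent to $V$ near that zero.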
\end{document} |
\begin{document}
\title{Scalable Planning for Energy Storage \\in Energy and Reserve Markets }
\author{Bolun~Xu,~\IEEEmembership{Student Member,~IEEE,}
Yishen~Wang,~\IEEEmembership{Student Member,~IEEE,}
Yury~Dvorkin,~\IEEEmembership{Member,~IEEE,}
Ricardo~Fern\'andez-Blanco,
C. A. Silva-Monroy, \IEEEmembership{Member,~IEEE,}
Jean-Paul Watson, \IEEEmembership{Member,~IEEE,}\\
and Daniel~S.~Kirschen,~\IEEEmembership{Fellow,~IEEE}
\thanks{This work was supported by the US Department of Energy under grant 1578574.}
\thanks{B.~Xu, Y.~ Wang, R.~Fern\'andez-Blanco, and D.~S.~Kirschen are with University of Washington, USA (emails: \{xubolun, ywang11, kirschen\}@uw.edu and [email protected]). }
\thanks{Y.~Dvorkin is with New York University, USA (email: [email protected]). }
\thanks{C.~Silva-Monroy and J.~Watson are with Sandia National Laboratories (emails: \{casilv, jwaston\}@sandia.gov). } }
\maketitle \makenomenclature
\begin{abstract}
Energy storage can facilitate the integration of renewable energy resources by providing arbitrage and ancillary services. Jointly optimizing energy and ancillary services in a centralized electricity market reduces the system's operating cost and enhances the profitability of energy storage systems. However, achieving these objectives requires that storage be located and sized properly. We use a bi-level formulation to optimize the location and size of energy storage systems which perform energy arbitrage and provide regulation services. Our model also ensures the profitability of investments in energy storage by enforcing a rate of return constraint. Computational tractability is achieved through the implementation of a primal decomposition and a subgradient-based cutting-plane method. We test the proposed approach on a 240-bus model of the Western Electricity Coordinating Council (WECC) system and analyze the effects of different storage technologies, rate of return requirements, and regulation market policies for storage participation on the optimal storage investment decisions. We also demonstrate that the proposed approach significantly outperforms an exact method in terms of computational performance while maintaining solution quality within a preset tolerance.
\end{abstract}
\begin{IEEEkeywords} Energy storage, arbitrage, ancillary services, power system planning, cutting-plane method, primal decomposition. \end{IEEEkeywords}
\IEEEpeerreviewmaketitle
\section*{Nomenclature} \addcontentsline{toc}{section}{Nomenclature}
\subsection{Sets and Indices}
\begin{IEEEdescription}[\IEEEusemathlabelsep\IEEEsetlabelwidth{$V_1,V_2,V_3$}] \item[$B, B\up{E}, B\up{N}$] Set of buses and subset of buses with and without energy storage, indexed by $b$ \item[$I, I_b$] Set of generators and subset of generators connected to bus $b$, indexed by $i$ \item[$J$] Set of typical days, indexed by $j$ \item[$L$] Set of transmission lines, indexed by $l$ \item[$T$] Set of time intervals, indexed by $t$ \item[$o(l), r(l)$] Sending and receiving ends of line $l$ \end{IEEEdescription}
\subsection{Variables} \begin{IEEEdescription}[\IEEEusemathlabelsep\IEEEsetlabelwidth{$V_1,V_2,V_3$}] \item[$C\up{[\cdot]}(\cdot)$] Cost function for different problems, \$ \item[$e\up{R}_b, p\up{R}_{b}$] Energy storage energy (MWh) and power (MW) rating. \item[$e\up{soc}_{j,t,b}$] Energy storage state of charge, MWh \item[$f_{j,t,l}$] Line power flow, MW \item[$p\up{ch}_{j,t,b}, p\up{dis}_{j,t,b}$] Energy storage charging and discharging rates, MW \item[$p\up{g}_{j,t,i}$] Generator output power, MW \item[$p\up{rs}_{j,t,b}$] Renewable spillage, MW \item[$r\up{ed}_{j,t,b}, r\up{eu}_{j,t,b}$] Regulation down and up provided by energy storage, MW \item[$r\up{gd}_{j,t,i},r\up{gu}_{j,t,i}$] Regulation down and up provided by generator, MW \item[$\mathnormal{x}\up{[\cdot]}$] Vector of decision variables \item[$y$] Cutting-plane method decision variable vector \item[$\theta_{j,t,b}$] Voltage phase angle \item[$\lambda\up{lmp}_{j,t,b}$] Locational marginal price, \$/MWh \item[$\lambda\up{rd}_{j,t}, \lambda\up{ru}_{j,t}$] Price for down/up regulation, \$/MWh \item[$\varphi\up{[.]}_{[.]}, \psi\up{[.]}_{[.]}, \gamma\up{[.]}_{[.]}$] Dual variables associated with upper bounds, lower bounds, and equality constraints \end{IEEEdescription}
\subsection{Parameters}
\begin{IEEEdescription}[\IEEEusemathlabelsep\IEEEsetlabelwidth{$V_1,V_2,V_3$}] \item[$c\up{p}, c\up{e}$] Daily prorated capital costs of energy storage, \$/MW and \$/MWh \item[$c\up{g}_i,c\up{gd}_i,c\up{gu}_i$] Hourly incremental, regulation down, and regulation up costs of generators, \$/MWh \item[$c\up{dis},c\up{ch}$] Hourly incremental discharge and charge cost of ES, \$/MWh \item[$c\up{eu},c\up{ed}$] Hourly regulation up and down cost of ES, \$/MWh \item[$c\up{rs}$] Value of renewable spillage, \$/MWh \item[$D_{j,t,b}$] Load demands, MW \item[$F_l$] Capacity of transmission lines, MW \item[$G\up{max}_i,G\up{min}_i$] Maximum and minimum power production of generators, MW \item[$G\up{rn}_{j,t,b}, G\up{rs}_{j,t,b}$] Renewable power maximum expected forecast and maximum allowable spillage, MW \item[$R\up{u}_i, R\up{d}_i$] Ramp up and ramp down capacity of generators, MW/h \item[$T\up{ru}, T\up{rd}$] Ramp up/down speed requirement for up/down regulation, h \item[$T\up{es}$] Continuous full dispatch time requirement for energy storage, h \item[$X_l$] Transmission line reactance
\item[$\omega_j$] Weight of typical day \item[$\rho\up{min}, \rho\up{max}$] Minimum and maximum allowable values of the power/energy ratio of storage systems, h$^{-1}$ \item[$\eta\up{ch}, \eta\up{dis}$] Charging and discharging efficiency of energy storage \item[$\phi\up{D}, \phi\up{R}$] Regulation requirement as a percentage of the load and renewable injections \item[$\nu$] Iteration index \item[$\epsilon$] Relative tolerance for system cost savings \end{IEEEdescription}
\section{Introduction}
Energy storage (ES) is a highly flexible resource that has the potential to facilitate the integration of renewable energy sources such as wind and solar~\cite{barton2004energy,oudalov2006value}. U.S. system operators and regulators have recognized ES as a key technology for achieving sustainability in the power sector~\cite{ercot_es}. For instance, 263~MW of ES capacity has already been deployed in PJM~\cite{pjm_es}. In ISO New England, 94~MW of battery energy storage capacity has been proposed for deployment as of January 2016~\cite{isone_outlook}. The California Public Utilities Commission has mandated a merchant ES procurement goal of 1325~MW by 2020~\cite{cal_rm}.
ES can relieve congestion in the transmission system by performing spatio-temporal arbitrage, enable a more efficient dispatch of conventional generators, and hence reduce the cost of dealing with the intermittency of renewable resources~\cite{pandzic2015near}. Advanced ES technologies such as batteries also provide rapid responses in reserve and regulation services, which lower the ancillary service procurement requirements and reduce the cost of handling the stochasticity of wind and solar~\cite{xu2016comparison}. However, because it is likely that many of these ES systems will be deployed by private investors, we should not consider only whether they provide a social benefit in terms of reduced operating cost, but also whether they generate a sufficient return on investment~\cite{dvorkin2016ensuring,wang2017look}. Since ES operators may wish to participate in multiple electricity markets to increase their profits~\cite{krishnan2015optimal, xu2014bess, sandia_es, ny_es,akhavan2014optimal}, we need to consider the arbitrage jointly with the provision of ancillary services when identifying opportunities for investments in ES.
An accurate long-term planning decision must account for its impact on short-term system operations~\cite{pudjianto2014whole}. However, solving a single optimization problem that includes the entire planning horizon (i.e., a full year of operation) is far beyond what is computationally tractable at this point in time. To overcome such computational barriers, heuristic ES planning models \cite{pandzic2015near, dvijotham2014storage} split ES siting and sizing into sequential decisions according to heuristic rules. While heuristic models are solvable over longer planning horizons, they may produce suboptimal planning decisions. To obtain more rigorous planning decisions, stochastic programming has been extensively incorporated in power system planning problems~\cite{gorenstin1993power, buygi2003transmission, de2008transmission,garces2009bilevel}. Stochastic planning models co-optimize ES siting and sizing decisions over a set of selected representative scenarios~\cite{buygi2003transmission, de2008transmission, oh2011optimal, bayram2015stochastic, qiu2016stochastic, wogrin2015optimizing,fernandez2016optimal}. The computational complexity of a stochastic planning model depends on the number of scenarios, and a sufficient number of scenarios must be considered to represent the uncertain renewable generation resources and demand effectively. Although a larger number of scenarios improves the robustness of the planning result, the resulting problems can be computationally intractable when applied to large power systems~\cite{winston2004operations}. Moreover, additional planning criteria, such as a guaranteed rate of return on ES investments~\cite{dvorkin2016ensuring,nasrolahpour2016strategic} or the co-optimization of energy and reserve markets~\cite{krishnan2015optimal}, further increase the complexity of the planning model.
The aforementioned ES planning approaches aim to trade off modeling accuracy and computational complexity. Still, solving ES planning problems in realistically large systems is a non-trivial task, and modeling accuracy has been sacrificed for the sake of solvability. Nasrolahpour~\emph{et al.}~\cite{nasrolahpour2016strategic} formulated strategic ES sizing in energy markets as a bi-level problem, and adopted a solution algorithm which combines a mathematical program with equilibrium constraints (MPEC) with Benders decomposition. However, their algorithm takes hours to solve the bi-level ES sizing problem on a single-bus case study. We propose a decomposition algorithm that provides more accurate and faster solutions to the ES planning problem for a large number of scenarios. This paper makes the following contributions: \begin{itemize}
\item It formulates the optimal profit-constrained ES siting and sizing problem in a joint energy and reserve market as a bi-level problem from the perspective of the system operator, anticipating that energy storage units will act as profit-seeking entities in a market environment.
\item It describes and tests a solution method that combines primal decomposition with subgradient cutting planes. This solution method scales to any number of planning scenarios and has a non-heuristic termination criterion.
\item It benchmarks the computational performance of this algorithm against an exact linear programming (LP) approach, and demonstrates the accuracy and scalability of this algorithm.
\item It uses compressed air energy storage and lithium-ion batteries to represent two different types of ES technologies, and compares their investment for different regulation market policies.
\item It analyzes the effect of a minimum profit constraint on the ES siting and sizing decisions as well as on the system operating cost. \end{itemize} All simulations are performed on a modified version of the 240-bus system of Western Electricity Coordinating Council (WECC)~\cite{price2011reduced}. The WECC system is a realistically large testbed for planning studies that demonstrates scalability of the proposed solution method. Section~\ref{Sec:PD} formulates the bi-level optimal ES siting and sizing model. Section~\ref{Sec:Meth} proposes the solution method to the formulated problem. Section~\ref{Sec:Case} describes the test-bed system parameters. Section~\ref{Sec:Test} presents and discusses the numerical results. Finally, conclusions are drawn in Section~\ref{Sec:Con}.
\section{Problem Formulation}\label{Sec:PD}
We formulate the optimal ES siting and sizing as a bi-level problem. The upper-level (UL) problem identifies the ES investment decisions which minimize the overall system cost over a set of typical (or representative) days, while the lower-level (LL) problems minimize the operating cost of each typical day.
\subsection{Upper-Level Problem: Energy Storage Siting and Sizing}
The UL problem minimizes the total system cost ($C\up{S}$) over all typical days, i.e., the sum of the expected system operating cost and the ES investment cost:
\begin{gather}\label{PD:UL_Obj} \textstyle\min_{\mathnormal{x}\up{U}} C\up{S}(\mathnormal{x}\up{U}, \mathnormal{x}\up{P}_j) :=\sum_{j\in J}\omega_j C\up{P}_j(\mathnormal{x}\up{U}, \mathnormal{x}\up{P}_j)+C\up{E}(\mathnormal{x}\up{U})\,, \end{gather} where $\mathnormal{x}\up{U}$ are the upper-level decision variables and $\mathnormal{x}\up{P}_j$ are the lower-level decision variables. The system operating cost is the sum of the dispatch costs $C\up{P}_j$ of the typical days $j$, each weighted by the relative importance $\omega_j$ of the days it represents. The ES investment cost $C\up{E}$ is calculated based on both the power rating $p\up{R}_b$ and the energy capacity $e\up{R}_b$ of the ES installed at each bus $b\in B$:
\begin{gather} \textstyle C\up{E}(\mathnormal{x}\up{U}) := \sum_{b\in B} (c\up{p} p\up{R}_b + c\up{e} e\up{R}_b)\,, \end{gather} where $\mathnormal{x}\up{U} = \{ p\up{R}_b$, $e\up{R}_b \}$. This problem is constrained by limits on the power-to-energy (P/E) ratio of the ES (which depends on the technology adopted), on the capital available for investment in ES, and by the need to achieve a minimum rate of return $\chi$ on these investments: \begin{align} \rho\up{min} e\up{R}_b \leq p\up{R}_b &\leq \rho\up{max} e\up{R}_b\,\label{PD:UL_C1}\,,\\ \textstyle\sum_{b\in B}\big(c\up{p} p\up{R}_b + c\up{e} e\up{R}_b\big) &\leq c\up{ic,max}\,, \label{PD:UL_C2}\\ C\up{R}( x\up{P}_j, x\up{D}_j) &\geq \chi C\up{E}(x\up{U})\,, \label{PD:UL_ESP} \end{align} where the ES operational profit $C\up{R}$ is calculated as follows: \begin{gather} \textstyle C\up{R}(x\up{P}_j, x\up{D}_j) := \sum_{j\in J}\omega_j\sum_{\subalign{t&\in T \\ b&\in B}} \big( p\up{dis}_{j,t,b}\lambda\up{lmp}_{j,t,b}\eta\up{dis} \nonumber\\ - p\up{ch}_{j,t,b}\lambda\up{lmp}_{j,t,b} /\eta\up{ch} + r\up{eu}_{j,t,b} \lambda\up{eu}_{j,t} \eta\up{dis} + r\up{ed}_{j,t,b} \lambda\up{ed}_{j,t} / \eta\up{ch}\nonumber\\ -c\up{dis}_{b} p\up{dis}_{j,t,b}-c\up{ch}_{b} p\up{ch}_{j,t,b}-c\up{eu}_b r\up{eu}_{j,t,b}-c\up{ed}_b r\up{ed}_{j,t,b}\big) \,. \end{gather} The first two terms calculate the payment the ES receives from the energy market, which settles at the locational marginal price (LMP); the third and fourth terms calculate the payment from the regulation market, which settles at the system-wide regulation up and down prices; the last four terms represent the operating cost of discharging, charging, and providing regulation.
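As a purely illustrative transcription (not the authors' GAMS code), the expression for $C\up{R}$ above maps directly to a short Python function; the array shapes and names are assumptions of this sketch, and $\lambda\up{eu}$, $\lambda\up{ed}$ denote the system-wide regulation up and down prices.
\begin{verbatim}
import numpy as np

def es_revenue(omega, lmp, lam_eu, lam_ed,
               p_dis, p_ch, r_eu, r_ed,
               eta_dis, eta_ch, c_dis, c_ch, c_eu, c_ed):
    """Operational profit C^R summed over typical days, hours, and buses.

    Assumed shapes: omega (J,); lmp, p_dis, p_ch, r_eu, r_ed (J, T, B);
    lam_eu, lam_ed (J, T); costs and efficiencies are scalars.
    """
    w = np.asarray(omega)[:, None, None]
    energy = p_dis * lmp * eta_dis - p_ch * lmp / eta_ch
    regulation = (r_eu * lam_eu[..., None] * eta_dis
                  + r_ed * lam_ed[..., None] / eta_ch)
    op_cost = c_dis * p_dis + c_ch * p_ch + c_eu * r_eu + c_ed * r_ed
    return float(np.sum(w * (energy + regulation - op_cost)))
\end{verbatim}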
\subsection{Lower-Level Problem: Economic Dispatch}
Each lower-level problem minimizes the system operating cost, $C\up{P}_j$, for a particular typical day using an hourly interval. This economic dispatch takes into account the generation and regulation cost of conventional generators and ES units, as well as the cost associated with spillage of renewable energy. For each typical day $j$, this problem can be formulated in a compact way as follows: \begin{align} &\textstyle \min_{\mathnormal{x}\up{P}_j} C\up{P}_j (\mathnormal{x}\up{P}_j) := \sum_{\subalign{t&\in T \\ b&\in B}} c\up{rs}p\up{rs}_{j,t,b}\nonumber\\ &+ \textstyle\sum_{\subalign{t&\in T \\ i&\in I}}\big(c\up{g}_i p\up{g}_{j,t,i}+c\up{gu}_i r\up{gu}_{j,t,i}+c\up{gd}_i r\up{gd}_{j,t,i}\big)\\ &+ \textstyle\sum_{\subalign{t&\in T \\ b&\in B}}\big(c\up{dis}_{b} p\up{dis}_{j,t,b}+c\up{ch}_{b} p\up{ch}_{j,t,b}+c\up{eu}_b r\up{eu}_{j,t,b}+c\up{ed}_b r\up{ed}_{j,t,b}\big)\,,\label{PD:PLL_obj}\\ &\text{subject to:} \nonumber \\ &\mathbf{M}\up{P}_j \mathnormal{x}\up{P}_j + \mathbf{M}\up{E} \mathnormal{x}\up{U} \leq \mathbf{V}\up{P}_j\,, \label{PD:PLL_C} \end{align} where the decision variables are $\mathnormal{x}\up{p}_j = \{$ $p\up{ch}_{j,t,b}$, $p\up{dis}_{j,t,b}$, $p\up{g}_{j,t,i}$, $p\up{rs}_{j,t,b}$, $r\up{ed}_{j,t,b}$, $r\up{eu}_{j,t,b}$, $r\up{gd}_{j,t,i}$, $r\up{gu}_{j,t,i}$, $e\up{soc}_{j,t,b}$, $f_{j,t,l}$, $\theta_{j,t,b}\}$. $\mathbf{M}\up{P}_j$, $\mathbf{M}\up{E}$, $\mathbf{V}\up{P}_j$ are constraint coefficient matrices. The compact expression of the constraints \eqref{PD:PLL_C} is expanded below and the dual variables associated with each constraint are shown in parentheses after a colon. \subsubsection{Nodal power balance equations ($\forall t\in T, b\in B$)} At each bus, the sum of the power injections and the inflows must be equal to the demand: \begin{gather}
\textstyle \sum_{i\in I\ud{b}}p\up{g}_{j,t,i}-\sum_{l|b\in o(l)}f_{j,t,l}+\sum_{l|b\in r(l)}f_{j,t,l} +G\up{rn}_{j,t,b}\nonumber\\-p\up{rs}_{j,t,b}+p_{j,t,b}\up{dis} \eta\up{dis} - p_{j,t,b}\up{ch}/ \eta\up{ch} = D_{j,t,b}:(\lambda\up{lmp}_{j,t,b})\label{PLL:C_bus} \end{gather} where $\lambda\up{lmp}_{j,t,b}$ is the locational marginal price.
\subsubsection{Regulation requirement ($t\in T, b\in B\up{E}, i\in I$)} Hourly up/down regulation requirements are expressed as a percentage of the system-wide demand ($\phi\up{D}$) plus a percentage of the system-wide renewable injection ($\phi\up{R}$): \begin{gather} \textstyle \sum_{b\in B} r\up{eu}_{j,t,b}\eta\up{dis} + \sum_{i\in I}r\up{gu}_{j,t,i} \nonumber\\ \textstyle \geq \sum_{b\in B}\big[\phi\up{R}G\up{rn}_{j,t,b}+\phi\up{D} D_{j,t,b}\big]:(\lambda\up{ru}_{j,t})\, \label{PLL:C_res1}\\ \textstyle \sum_{b\in B} r\up{ed}_{j,t,b}/\eta\up{ch} + \sum_{i\in I}r\up{gd}_{j,t,i} \nonumber\\ \textstyle \geq \sum_{b\in B}\big[\phi\up{R}G\up{rn}_{j,t,b}+\phi\up{D} D_{j,t,b}\big]:(\lambda\up{rd}_{j,t})\,, \label{PLL:C_res2} \end{gather} where $\lambda\up{ru}_{j,t}$ and $\lambda\up{rd}_{j,t}$ are the hourly up and down regulation prices.
\subsubsection{Energy storage constraints ($\forall t\in T, b\in B\up{E}$)} The evolution of the state of charge $e\up{soc}_{j,t,b}$ is calculated from the energy market dispatch schedules: \begin{gather}\label{PLL_CES1}
e\up{soc}_{j,t,b} - e\up{soc}_{j,t-1,b} = p\up{ch}_{j,t,b}-p\up{dis}_{j,t,b} :(\gamma\up{e}_{j,t,b})\,, \end{gather} the initial ES SoC is set to zero ($e\up{soc}_{j,0,b} = 0$) for all ES operations, and the end-of-day SoC is not enforced in \eqref{PLL_CES1}. The charging and discharging power must remain within the rated power: \begin{gather} p\up{ch}_{j,t,b} + r\up{ed}_{j,t,b}\leq p\up{R}_b:(\varphi\up{ch}_{j,t,b})\,\label{PLL_CES2} \\ p\up{dis}_{j,t,b} + r\up{eu}_{j,t,b}\leq p\up{R}_b:(\varphi\up{dis}_{j,t,b})\,,\label{PLL_CES3} \end{gather} and the ES must sustain the full regulation reserve dispatch for the required time interval ($T\up{es}$): \begin{gather} e\up{soc}_{j,t,b} +T\up{es} r\up{ed}_{j,t,b}\leq e\up{R}_b:(\varphi\up{soc}_{j,t,b})\,\label{PLL_CES5}\\ e\up{soc}_{j,t,b}- T\up{es} r\up{eu}_{j,t,b}\geq 0:(\psi\up{soc}_{j,t,b})\,.\label{PLL_CES4} \end{gather}
\subsubsection{Other constraints} Appendix~\ref{App_SL} defines the formulation of the generator power rating and ramp constraints, the constraints on renewable spillage, and the dc power flow model used to enforce the network constraints.
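To make the structure of one lower-level subproblem concrete, the sketch below sets up a heavily simplified, single-bus version of the dispatch for one typical day with fixed ratings $p\up{R}$ and $e\up{R}$, using Python and the CVXPY modeling package. It is only an illustration under assumed data (load profile, quadratic generator cost, efficiencies); the network, regulation requirements, and renewable spillage are omitted, and it is not the GAMS/CPLEX implementation used in the case study.
\begin{verbatim}
import numpy as np
import cvxpy as cp

# Hypothetical one-day, single-bus data (the actual model uses the
# 240-bus WECC network with regulation requirements).
T = 24
load = 800.0 + 300.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, T))  # MW
a, b = 20.0, 0.02           # generator cost coefficients ($/MWh, $/MW^2h)
eta_ch = eta_dis = 0.95     # assumed charging/discharging efficiencies
pR, eR = 100.0, 400.0       # fixed upper-level decisions x^U (MW, MWh)

p_g   = cp.Variable(T, nonneg=True)   # generator output
p_ch  = cp.Variable(T, nonneg=True)   # storage charging
p_dis = cp.Variable(T, nonneg=True)   # storage discharging
soc   = cp.Variable(T, nonneg=True)   # state of charge

# Nodal balance: generation plus net storage injection meets the load.
balance = [p_g[t] + eta_dis * p_dis[t] - p_ch[t] / eta_ch == load[t]
           for t in range(T)]
constraints = balance + [
    p_ch <= pR, p_dis <= pR, soc <= eR,          # rating constraints
    soc[0] == p_ch[0] - p_dis[0],                # initial SoC is zero
    soc[1:] == soc[:-1] + p_ch[1:] - p_dis[1:],  # SoC evolution
]
cost = cp.sum(a * p_g + b * cp.square(p_g))
prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()

# Duals of the balance constraints play the role of hourly prices
# lambda^lmp_t (up to the solver's sign convention).
lmp = np.array([c.dual_value for c in balance])
print(round(prob.value, 1), lmp.round(1)[:6])
\end{verbatim}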
\subsection{Dual Lower-Level Problem} We apply the primal-dual transformation to the primal lower-level~(PLL) problem due to its convexity. The dual lower-level~(DLL) problem optimizes system prices so that constraint \eqref{PD:UL_ESP} is enforced. The DLL objective function is formulated as follows: \begin{align} &\textstyle \max_{\mathnormal{x}\up{D}_j} C\up{D}_j(\mathnormal{x}\up{U}, \mathnormal{x}\up{D}_j) := \textstyle \sum_{\subalign{t&\in T \\ i&\in I}}\big[(\varphi\up{g}_{j,t,i} G\up{max}_{i}+\psi\up{g}_{j,t,i} G\up{min}_{i})\nonumber\\ &\textstyle + R\up{u}_i (\varphi\up{R}_{j,t,i}+T\up{ru} \varphi\up{gu}_{j,t,i})+R\up{d}_i (-\psi\up{R}_{j,t,i}+T\up{rd} \varphi\up{gd}_{j,t,i})\big]\nonumber\\ &\textstyle +\sum_{i\in I}(\varphi\up{R}_{e,1,i}+\varphi\up{R}_{e,1,i}) G_{j,t}\up{0}+\sum_{\subalign{t&\in T \\ l&\in L}}(\varphi\up{f}_{j,t,l}-\psi\up{f}_{j,t,l}) F\up{max}_l\nonumber\\ &\textstyle + \sum_{\subalign{t&\in T \\ b&\in B}} \Big[ \varphi\up{rs}_{j,t,b}G\up{rs}_{j,t,b}+ \lambda_{j,t,b}\up{lmp}(D_{j,t,b}-G\up{rn}_{j,t,b})\big]\nonumber\\ &\textstyle +\sum_{\subalign{t&\in T \\ b&\in B}} \big[\phi\up{rn} G\up{rn}_{j,t,b}(\lambda_{j,t}\up{ru}+\lambda_{j,t}\up{rd}) +\phi\up{D} D_{j,t,b}(\lambda_{j,t}\up{ru}+\lambda_{j,t}\up{rd})\big]\nonumber\\&\textstyle +\sum_{\subalign{t&\in T \\ b&\in B}} \big[p\up{R}_b(\varphi\up{ch}_{j,t,b}+\varphi\up{ dis}_{j,t,b})+e\up{R}_b\varphi\up{soc}_{j,t,b}\big]\,.\\ &\text{subject to:} \nonumber \\ &\mathbf{M}\up{D}_j\mathnormal{x}\up{D}_j \leq \mathbf{V}\up{D}_j\,. \end{align} where $\mathbf{M}\up{D}_j$ and $\mathbf{V}\up{D}_j$ are the constraint coefficient matrices. The detail of these constraints are given in Appendix~\ref{App_SL}. Note that the objective function includes products of UL ($x\up{U}_j$) and DLL variables ($x\up{D}_j$).
\begin{figure}
\caption{Flowchart of the solution algorithm.}
\label{fig:cp}
\end{figure}
\section{Solution Method}\label{Sec:Meth}
Decomposition has been used extensively for solving large-scale programming problems~\cite{zhao2014marginal,linderoth2003decomposition,birge1985decomposition,nielsen1997scalable}, especially scenario-based stochastic programs~\cite{nap2016next, papavasiliou2011reserve, kazempour2012strategic,baringo2012wind,nasrolahpour2016strategic}. A stochastic planning problem couples independent scenarios with a few planning decision variables. An effective decomposition breaks each scenario into a subproblem, which can be solved in sequence or in parallel. It is also easier to aggregate subproblem results in such decomposition structures, and the master problem can be solved rapidly and accurately. Fig.~\ref{fig:cp} illustrates the proposed solution algorithm, which involves an inner loop and an outer loop. The inner loop identifies the optimal ES locations subject to the maximum ES investment budget, $c\up{ic,max}$, and the outer loop enforces the ES rate of return constraint \eqref{PD:UL_ESP}.
In the inner loop, the main problem is decomposed into scenario subproblems by fixing the values of the ES planning variables. Each scenario subproblem solves an economic dispatch (ED) problem. The inner loop is initialized with no ES installed in the system and solves the ED for all scenarios. Based on the ED results, the potential benefit of ES installation is calculated for each bus in the system in a subgradient form, and the ES installations are updated accordingly. Therefore, the inner loop updates the ES siting and sizing decisions iteratively, until the estimated distance to the exact optimal solution is sufficiently small. The decomposition technique is described in Section~\ref{Sec:PDc}. Section~\ref{Sec:ESSG} explains how the subgradient of the objective function with respect to the ES planning variables is calculated. Section~\ref{Sec:CP} explains the subgradient cutting-plane method used to update the ES planning variables at each iteration and solve the optimal ES location problem.
In the outer loop, the optimal ES siting and sizing decisions are tested against the ES rate of return constraint \eqref{PD:UL_ESP}. If it is not satisfied, the maximum ES investment budget, $c\up{ic,max}$, is reduced and the inner loop is repeated (see Section~\ref{Sec:ESP}). The algorithm terminates once the current solution satisfies constraint \eqref{PD:UL_ESP} or the maximum ES investment budget, $c\up{ic,max}$, reaches the minimum ES investment limit, $c\up{ic,min}$.
\subsection{Problem Decomposition}\label{Sec:PDc} The bi-level problem \eqref{PD:UL_Obj}--\eqref{PLL_CES4} can be recast into a single-level (SL) equivalent. The objective function of this problem is: \begin{align} &\hspace{-0.2cm} \min_{\mathnormal{x}\up{U}, \mathnormal{x}\up{P}_j, \mathnormal{x}\up{D}_j} C\up{S}(\mathnormal{x}\up{U}, \mathnormal{x}\up{P}_j) :=\sum_{j\in J}\omega_jC\up{P}_j(\mathnormal{x}\up{U}, \mathnormal{x}\up{P}_j) + C\up{E}(\mathnormal{x}\up{U})\,, \label{SL:obj}\\ &\text{subject to:} \nonumber\\ &\text{UL, PLL, and DLL constraints} \\ &C\up{P}_j(\mathnormal{x}\up{U}, \mathnormal{x}\up{P}_j) = C\up{D}_j(\mathnormal{x}\up{U}, \mathnormal{x}\up{D}_j)\,,\;j\in J\,, \label{SL:SDC} \end{align} where \eqref{SL:SDC} represents the strong duality constraint. The details of the formulation of this problem are given in Appendix~\ref{App_SL}.
When the value of the coupling variables $\mathnormal{x}\up{U}$ is fixed, we can then apply primal decomposition to this problem. For the sake of simplicity, we first ignore the profit constraint \eqref{PD:UL_ESP}. The subproblem becomes a linear ED problem for each typical day. This decomposed SL problem can then be solved iteratively as follows~\cite{boyd2004convex}: \subsubsection{Set initial values for the coupling variables} The solution algorithm starts with $\mathnormal{x}\up{U,(0)}=0$, indicating no ES deployment.
\subsubsection{Solve the subproblems} At iteration $\nu$, set $\mathnormal{x}\up{U}=\mathnormal{x}\up{U,(\nu)}$ and solve each economic dispatch subproblem (EDSP) in parallel to obtain $\hat{\mathnormal{x}}\up{P, (\nu)}_j$ and $\hat{\mathnormal{x}}\up{D, (\nu)}_j$. \subsubsection{Solve the master problem} Calculate the subgradients of $C\up{S}$ with respect to $x\up{U}$ and update the UL variables accordingly. \subsubsection{Iteration} Check for convergence, and repeat from Step 2) if needed. The convergence criterion is explained in Section~\ref{Sec:CP}.
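Schematically, steps 2) and 3) amount to the following Python sketch; \texttt{solve\_edsp} is a hypothetical stand-in for a per-day economic dispatch solver, so this is an outline of the iteration rather than the GAMS implementation used in the case study.
\begin{verbatim}
from concurrent.futures import ProcessPoolExecutor

def decomposition_step(x_U, solve_edsp, days, weights):
    """One inner-loop pass: fix x^U and solve all typical-day subproblems.

    solve_edsp(day, x_U) is a placeholder returning (C_P_j, g_j), the
    dispatch cost of day j and the subgradient of C_P_j w.r.t. x^U.
    The master problem adds the investment-cost gradient before
    forming the cut.
    """
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(solve_edsp, days, [x_U] * len(days)))
    cost = sum(w * c for w, (c, _) in zip(weights, results))
    subgrad = [sum(w * g[i] for w, (_, g) in zip(weights, results))
               for i in range(len(x_U))]
    return cost, subgrad
\end{verbatim}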
While subgradient methods can easily be used to solve the master problem, their convergence is slow and they do not provide a measure of the optimality of the results. We therefore incorporate the subgradient cutting-plane method in the proposed approach because it converges faster and has a non-heuristic stopping criterion.
\subsection{The Cutting-plane Method}\label{Sec:CP}
We apply cutting-plane methods~\cite{kelley1960cutting, belloni2005introduction, boyd2007localization} to solve the master problem. Cutting-plane methods incorporate results from previous iterations and form a piecewise linear approximation ($\tilde{C}^{(\nu)}$) of the objective function: \begin{gather} \textstyle \tilde{C}^{(\nu+1)}(y) := \max_{k\leq \nu} [C\up{S}(\tilde{y}\up{(k)})+(y-\tilde{y}\up{(k)})\cdot g\up{U, (k)}]\,, \label{CP_obj} \end{gather} where $y$ is an inquiry point defined over the same space as $x\up{U}$, and $\tilde{C}^{(\nu)}(\tilde{y}\up{(\nu)})$ is a lower-bound estimate of the optimal objective function value, i.e., the minimum system cost.
At each iteration, the ES subproblems are solved by setting $x\up{U,(\nu)}=\tilde{y}\up{(\nu)}$ where $\tilde{y}\up{(\nu)}\in \arg \min_y \tilde{C}^{(\nu)}(y)$ subject to constraints \eqref{PD:UL_C1} and \eqref{PD:UL_C2}. The inquiry point $\tilde{y}\up{(\nu)}$, the current system cost value, and the calculated subgradients are then added to \eqref{CP_obj} as a new objective cut for future iterations. As iterations proceed, $\tilde{C}^{(\nu)}$ approaches the actual objective function, and a solution is found when the difference between $C\up{S}(x\up{U,(\nu)})$ and $\tilde{C}^{(\nu)}(\tilde{y}\up{(\nu)})$ is below a tolerance. We assume that the algorithm terminates based on a relative tolerance with respect to the estimated maximum system cost saving: \begin{gather} C\up{S}(x\up{U,(\nu)})-\tilde{C}^{(\nu)}(\tilde{y}\up{(\nu)}) \leq \epsilon[C\up{S}(0)-\tilde{C}^{(\nu)}(\tilde{y}\up{(\nu)})]\,, \end{gather} where $C\up{S}(0)$ is the system cost without ES deployments, and $\epsilon$ is the relative tolerance. Therefore, the system cost saving achieved by the algorithm is always at least $(1-\epsilon)$ times the optimal system cost saving.
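A minimal sketch of this loop is given below in Python/CVXPY; the oracle is a smooth toy function standing in for $C\up{S}$ (evaluating the true $C\up{S}$ requires solving all day subproblems), and the box bounds, dimensions, and tolerance are illustrative assumptions rather than the paper's settings.
\begin{verbatim}
import numpy as np
import cvxpy as cp

def oracle(y):
    """Return C^S(y) and a subgradient g^U at y (toy stand-in)."""
    Q = np.diag([1.0, 4.0])
    c = np.array([-3.0, -2.0])
    return 0.5 * y @ Q @ y + c @ y + 10.0, Q @ y + c

lo, hi = np.zeros(2), 10.0 * np.ones(2)   # box standing in for the
eps = 0.05                                # P/E and budget constraints
y_k, cuts = lo.copy(), []
C0, _ = oracle(lo)                        # system cost with no ES, C^S(0)

for nu in range(50):
    C, g = oracle(y_k)                    # solve the subproblems at y_k
    cuts.append((C, g, y_k.copy()))       # add a new objective cut
    y, t = cp.Variable(2), cp.Variable()
    model = [t >= Ck + gk @ (y - yk) for Ck, gk, yk in cuts]
    prob = cp.Problem(cp.Minimize(t), model + [y >= lo, y <= hi])
    prob.solve()
    lower, y_k = prob.value, y.value      # lower bound, next inquiry point
    if C - lower <= eps * (C0 - lower):   # relative stopping criterion
        break
print(nu, y_k.round(3), round(C, 3), round(lower, 3))
\end{verbatim}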
\subsection{Energy Storage Subgradient Cuts}\label{Sec:ESSG} ES subgradients for $B\up{E}$ buses are calculated directly using dual variables associated with constraints \eqref{PLL_CES2}--\eqref{PLL_CES4}; this derivation is shown in Appendix~\ref{App:SBE}. However, the value of $\hat{\varphi}\up{soc, (\nu)}_{j,t,b}$ is always zero at the first iteration, because energy rating constraints are implicit, thus \eqref{PLL_CES5} never binds when the ES power rating is zero. Moreover, $\hat{\varphi}\up{ch, (\nu)}_{j,t,b}$ and $\hat{\varphi}\up{dis, (\nu)}_{j,t,b}$ cannot reflect the value for arbitrage unless the ES has non-zero SoC evolutions to correlate temporal arbitrage decisions across different time intervals. Therefore, we calculate ES subgradients for $B\up{N}$ buses through a subgradient subproblem (SGSP) to correctly identify the value of marginal ES investment. The SGSP provides an initial ES P/E ratio $\rho_b^0$, so that solutions to the master problem have reduced perturbations and converge faster. The derivation of the SGSP is shown in Appendix~\ref{App:SBN}. At iteration $\nu$, the subgradients of $C\up{S}$ with respect to the ES power rating ($g\up{p}_b$) and energy rating ($g\up{e}_b$) are calculated as: \begin{align} g\up{p,(\nu)}_b &= \begin{cases} \;\, c\up{p}+\hat{\varphi}\up{ch, (\nu)}_{j,t,b} + \hat{\varphi}\up{dis, (\nu)}_{j,t,b}\,, & \quad b\in B\up{E}\,\\ \;\, \hat{g}\up{0,(\nu)}_b \hat{\rho}^{0,(\nu)}_b/(1+\hat{\rho}^{0,(\nu)}_b) \,, & \quad b\in B\up{N}\, \end{cases}\label{ESSG:P} \\ g\up{e,(\nu)}_b &= \begin{cases} \;\, c\up{e}+\hat{\varphi}\up{soc, (\nu)}_{j,t,b}\,, & \qquad\quad b\in B\up{E}\,\\ \;\, \hat{g}\up{0,(\nu)}_b /(1+\hat{\rho}^{0,(\nu)}_b) \,, & \qquad\quad b\in B\up{N}\,, \end{cases}\label{ESSG:E} \end{align} where $\hat{g}\up{0, (\nu)}_b$ and $\hat{\rho}^{0, (\nu)}_b$ are determined by solving the following subgradient subproblem (SGSP): \begin{align} \textstyle &\textstyle \min_{p\up{ch}_{j,t,b}, p\up{dis}_{j,t,b}, r\up{eu}_{j,t,b}, r\up{ed}_{j,t,b}, e\up{soc}_{j,t,b}, \rho^0_b} g\up{0,(\nu)}_b := \nonumber\\ \textstyle &\textstyle\quad\sum_{j\in J}\omega_j\sum_{t\in T}\big[ p\up{ch}_{j,t,b}\hat{\lambda}\up{lmp, (\nu)}_{j,t,b}/\eta\up{ch} -p\up{dis}_{j,t,b}\hat{\lambda}\up{lmp, (\nu)}_{j,t,b}\eta\up{dis} \nonumber\\ &\textstyle\quad- r\up{eu}_{j,t,b}\hat{\lambda}\up{ru, (\nu)}_{j,t}\eta\up{dis} - r\up{ed}_{j,t,b}\hat{\lambda}\up{rd, (\nu)}_{j,t}/\eta\up{ch} + c\up{dis}_{b} p\up{dis}_{j,t,b}\nonumber\\ &\textstyle\quad +c\up{ch}_{b} p\up{ch}_{j,t,b}+c\up{eu}_b r\up{eu}_{j,t,b}+c\up{ed}_b r\up{ed}_{j,t,b}\big] + \rho\up{0}_bc\up{p}+c\up{e}\,, \label{ESSG:BN} \end{align} subject to constraints \eqref{PD:UL_C1} and \eqref{PLL_CES1}--\eqref{PLL_CES4} by setting $p\up{R}_b = \rho^0_b$ and $e\up{R}_b=1$. This subproblem maximizes the profit of incremental ES deployments at $B\up{N}$ buses, where ES units are price-takers and profit maximization is equivalent to system operating cost minimization~\cite{castillo2013profit}. $\hat{\rho}^0_b$ is the optimal P/E ratio for price-taker ES deployments, which is close to the true optimal P/E ratio if the ES has limited price influence. Subgradients at $B\up{N}$ buses are designed to enforce $\hat{\rho}^0_b$ over all new ES deployments. ES deployments with near-optimal P/E ratios have faster convergence due to minimal perturbations between ES power and energy investment decisions.
\subsection{Incorporating the ES Profit Constraint}\label{Sec:ESP}
In Appendix~\ref{App:ESP}, we show that the ES operational revenue can be represented using the ES subgradients: \begin{align} C\up{R} = -\textstyle\sum_{b\in B}\big[(g\up{p}_b-c\up{p})p\up{R}_b + (g\up{e}_b-c\up{e})e\up{R}_b\big]\,. \end{align} Because all ES allocation variables must have non-negative values, all $p\up{R}_b$ and $e\up{R}_b$ with non-zero values must have negative subgradients. We can therefore infer that $C\up{R} \geq C\up{E}$ in all optimal locations, and in unconstrained ES locations, the ES rate of return $\chi$ converges to one. Hence $\chi \geq 1$ is guaranteed for all optimal ES locations. For $\chi > 1$, we can reasonably assume that ES has a limited effect on system prices and that the system-wide ES operating revenue should only increase with ES investment. Therefore, for the optimal ES locations, the ES revenue $\hat{C}\up{R}$ is a concave, monotonically increasing function of the ES investment cost $C\up{E}$ such that: \begin{gather} 0 \leq d\hat{C}\up{R}(C\up{E})/dC\up{E} \leq 1\,, \label{ESPA_A} \end{gather} where $C\up{E}$ is capped by $c\up{ic,max}$ in constraint \eqref{PD:UL_C2}. If a rate of return $\chi$ is achievable in the system, then there must be some $C\up{E'}$ that satisfies: \begin{gather} \hat{C}\up{R}(C\up{E'}) - \chi C\up{E'} \geq 0\,. \end{gather} When an ES investment cost $C\up{E}$ violates \eqref{PD:UL_ESP}, we can estimate an upper bound on $C\up{E'}$ as \begin{gather} C\up{E'} \leq \hat{C}\up{R}(C\up{E'})/\chi \leq \hat{C}\up{R}(C\up{E})/\chi\,,\label{ESP:UB} \end{gather} because $\hat{C}\up{R}(C\up{E'}) \leq \hat{C}\up{R}(C\up{E})$ according to \eqref{ESPA_A}. Therefore, \eqref{PD:UL_ESP} can be satisfied by iteratively solving the optimal ES allocation with a reduced maximum ES investment cost $c\up{ic,max}=C\up{R}/\chi$.
Since we use the cutting-plane method to solve the master problem and the feasible region is reduced when setting $c\up{ic,max}=C\up{R}/\chi$ (recall that \eqref{PD:UL_ESP} only binds when $\chi>1$), solving the optimal ES allocations recursively will not add much complexity because the method already has a fairly good estimate of the objective function.
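The outer loop can be written in a few lines; in this sketch \texttt{solve\_siting\_sizing} stands for the whole inner cutting-plane loop and the toy revenue curve is invented purely to show the budget update, so neither reflects the actual case-study data.
\begin{verbatim}
def enforce_rate_of_return(solve_siting_sizing, chi, c_ic_max, c_ic_min):
    """Shrink the budget until C^R >= chi * C^E or the floor is hit.

    solve_siting_sizing(budget) must return (C_E, C_R): the investment
    cost and operational revenue of the inner-loop solution under that
    budget.  A practical version would also cap the iteration count.
    """
    budget = c_ic_max
    while budget >= c_ic_min:
        C_E, C_R = solve_siting_sizing(budget)
        if C_R >= chi * C_E:     # rate-of-return constraint satisfied
            return C_E, C_R, budget
        budget = C_R / chi       # upper bound on an admissible budget
    return None                  # required rate of return not achievable

# Toy usage with a made-up concave revenue curve.
toy = lambda budget: (budget, min(1.5 * budget, 60.0))
print(enforce_rate_of_return(toy, chi=1.2, c_ic_max=100.0, c_ic_min=1.0))
\end{verbatim}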
\subsection{Comparison to Benders Decomposition}
Benders decomposition is a classic approach for solving block-structured optimization problems with coupling (complicating) variables~\cite{conejo2006decomposition}, and has been used extensively for solving strategic bi-level planning problems in power systems~\cite{kazempour2012strategic,baringo2012wind,nasrolahpour2016strategic}. The proposed algorithm is similar to Benders decomposition because it decomposes the optimization problem by fixing the coupling variables and solves the master problem using cutting planes. However, it incorporates two key improvements over a classic Benders decomposition.
First, it uses coordinated subgradient cuts, which provide more accurate information on the value of marginal ES investments than Benders dual cuts. Since the value of an ES deployment is jointly affected by its power and energy ratings, these planning decisions are not independent, and the P/E ratio must be optimized. However, Benders dual cuts based on the binding conditions of the ES rating constraints \eqref{PLL_CES2}--\eqref{PLL_CES4} are inefficient at coordinating investments in power and energy ratings, especially during the early iterations, where the ES ratings are mostly zero. These uncoordinated cuts cause the master problem solution to oscillate around the optimal point and significantly slow down the convergence. Instead of using dual cuts, the proposed algorithm uses coordinated ES subgradients, which enforce near-optimal P/E ratios over all new ES deployments and thus speed up the convergence.
Second, in addition to using coordinated subgradient cuts, we analytically derive a relationship between the maximum ES investment budget and the profitability, and decompose the bi-level problem into a recursive structure, as shown in Fig.~\ref{fig:cp}. Previous studies~\cite{kazempour2012strategic,baringo2012wind,nasrolahpour2016strategic} follow an approach that combines Benders decomposition with MPEC. This approach first recasts the bi-level problem as an MPEC problem; the MPEC problem is then transformed into a MILP using the `big M' method~\cite{floudas1995nonlinear}, and Benders decomposition is applied to decompose and solve the MILP. The `big M' method uses auxiliary integer variables and a sufficiently large constant $M$ to linearize nonlinear terms. However, the accuracy and computational speed of the `big M' linearization are very sensitive to the value of $M$: if $M$ is not large enough, the linearization is not accurate; if $M$ is too large, the computation can be extremely slow. Compared to the MPEC+Benders method, our algorithm requires no auxiliary linearization variables, and the size of the master problem does not increase with the number of scenarios. Therefore, the proposed algorithm has better scalability and leads to more robust planning results. In addition, the proposed algorithm generates simpler subproblems that are solved faster than in the MPEC+Benders method. In Section~\ref{Sec:CS:ESP:CTC}, we demonstrate that the computational speed of the proposed method surpasses the solution time achieved in a similar previous study.
\section{Case Study Test System}\label{Sec:Case}
\subsection{System Settings}
The proposed ES planning model and solution method were tested using a modified 240-bus reduced WECC system~\cite{price2011reduced}. This system includes 448 transmission lines, 71 aggregated thermal plants, and renewable sources including hydro, wind, and solar. The maximum expected forecasts of all renewable generation are grouped as $G\up{rn}_{j,t,b}$; the maximum allowable spillage for hydro generation is enforced through $G\up{rs}_{j,t,b}$; other types of renewable generation have no curtailment limits. Renewable curtailments are necessary in the modified WECC testbed because large renewable generation capacities are installed at some buses with limited transmission capacity, and the objective of the economic dispatch is to minimize the system operating cost. In certain cases, such as days with strong winds and low demand, a certain amount of wind power generation must be curtailed to maintain secure operation of the system.
We use a `3+5'\% reserve policy for setting the requirements for regulation~\cite{papavasiliou2011reserve}, hence $\phi\up{D}=3\%$ and $\phi\up{R}=5\%$. Regulation parameters are adjusted so that the regulation prices are identical to the actual day-ahead regulation clearing prices in CAISO~\cite{CAISO_Report2014}. The value of renewable spillages is set to zero. The modified WECC system has a daily ED operating cost ranging from 15 to 35 M\$.
All simulations were carried out in CPLEX under GAMS~\cite{GAMS} on an Intel Xeon 2.55 GHz processor with 32 GB of RAM. Typical days and their respective weights were identified from the year-long demand and renewable generation profiles using a hierarchical clustering algorithm~\cite{pitt2000applications}. The convergence tolerance is set to $\epsilon=5\%$.
\begin{table}[t]
\begin{center}
\centering
\caption{Energy Storage Model Parameters.}
\label{tab:ES_tech}
\begin{tabular}{r || c || c}
\hline
\hline
Energy storage technology & AA-CAES & LiBES \Tstrut\Bstrut\\
\hline
\hline
Power rating investment (\$/kW-year) & 1250 - 20 & 409 - 20 \Tstrut\Bstrut\\
\hline
Energy rating investment (\$/kWh-year) & 150 - 20 & 468 - 20 \Tstrut\Bstrut\\
\hline
Battery replacement cost (\$/kWh) & n/a & 406 \Tstrut\Bstrut\\
\hline
P/E ratio range (h$^{-1}$) & 0.05 to 0.25 & 0.1 to 4 \Tstrut\Bstrut\\
\hline
Round trip efficiency & 0.72 & 0.9 \Tstrut\Bstrut\\
\hline
Incremental production cost (\$/MWh) & 0 & 87 \Tstrut\Bstrut\\
\hline
Incremental consumption cost (\$/MWh) & 0 & 0 \Tstrut\Bstrut\\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Energy Storage Cost Model}\label{Sec:Ben:ESC}
We consider two types of representative ES technologies: 1) above-ground advanced adiabatic compressed air energy storage (AA-CAES), and 2) lithium-ion battery energy storage (LiBES)~\cite{zakeri2015electrical, nykvist2015rapidly}. The ES cost model consists of three parts: \begin{itemize}
\item Investment cost of the power equipments ($c\up{p}$) proportional to the ES power rating (unit: \$/kW). This investment covers the turbine generator and air compressor in AA-CAES, or the power electronic equipments in LiBES.
\item Investment cost of the storage system ($c\up{e}$) proportional to the ES energy rating (unit: \$/kWh). This investment covers the storage tank in AA-CAES, or the battery management system in LiBES.
\item Marginal production cost (unit: \$/kWh). This is the energy production cost of ES units. \end{itemize} Because we consider large-scale ES installations, we assume that the fixed storage installation cost, including the land and construction cost, scales linearly with the ES rating and can therefore be incorporated in the costs proportional to the power and energy ratings. AA-CAES units have high investment cost for power ratings and low investment cost for the storage capacity, the operation cost is also negligible because AA-CAES consumes no fuel for power generation. Electrochemical battery energy storage such as LiBES~\cite{dunn2011electrical, zakeri2015electrical} has more evenly distributed investment cost components. The lifetime of lithium batteries is very sensitive to operations due to degradation, the marginal production cost of LiBES is therefore defined based on its cell cycle life.
We assume that cycle aging only occurs during battery discharging and has a constant marginal cost. Using a cycle life test data set for lithium manganese oxide (LMO) batteries~\cite{xu2016modeling}, we apply a linear fit to the LMO cycle life loss per cycle up to 70\% depth of discharge (DoD) (Fig.~\ref{Fig:ES_deg}). The operation region of the LiBES is limited to the range from 20\% to 90\% of SoC to avoid deep discharges as well as overcharge and overdischarge effects, because these factors severely reduce the battery life~\cite{vetter2005ageing}. Instead of introducing new SoC constraints, the LiBES is oversized to reflect the increased cost due to the narrowed SoC operation region. We assume that the battery cells in the LiBES are always replaced once they reach their end of life. The marginal discharging cost of the LiBES is calculated by prorating the battery cell replacement cost to the cycle life loss curve: \begin{align}
c\up{dis} = a\up{fit}\frac{\text{Cell replacement cost (\$406/kWh)}}{\text{DoD operation range (70\%)}}\,, \end{align} where $a\up{fit}$ is the slope of the linear fit to the cycle life loss curve in Fig.~\ref{Fig:ES_deg}.
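For illustration only, this prorating can be scripted as below; the (DoD, life loss) samples are hypothetical and merely chosen to land near the LiBES production cost in Table~\ref{tab:ES_tech}, since the actual LMO test data of \cite{xu2016modeling} are not reproduced here.
\begin{verbatim}
import numpy as np

# Hypothetical (DoD, life-loss-per-cycle) samples standing in for the
# LMO test data; chosen only to roughly reproduce the tabulated cost.
dod = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7])
life_loss = np.array([1.6, 3.1, 4.6, 6.1, 7.6, 9.0, 10.5]) * 1e-5

a_fit = np.polyfit(dod, life_loss, 1)[0]   # slope of the linear fit

replacement_cost = 406.0   # $/kWh battery cell replacement cost
dod_range = 0.70           # usable DoD window (20%-90% SoC)
c_dis_per_kwh = a_fit * replacement_cost / dod_range  # $/kWh discharged
print(round(c_dis_per_kwh * 1000.0, 1))               # approx. 86 $/MWh
\end{verbatim}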
Table~\ref{tab:ES_tech} shows the capital cost, P/E ratio range, efficiency, and operating cost of these two ES cost models. We assume a 5\% annual interest rate and calculate the daily prorated cost as in~\cite{pandzic2015near}. All 240 buses are considered as ES deployment candidates.
\begin{figure}
\caption{LMO battery cycle life curve and linear fit up to 70\% DoD.}
\label{Fig:ES_deg}
\end{figure}
\subsection{Negative Pricing and Storage Dispatch}
In optimal ED solutions, an ES unit may be dispatched to charge and discharge simultaneously when negative LMPs occur~\cite{go2016assessing}. The round-trip efficiency loss then dissipates energy, which is beneficial to the system at negative prices, and the ES units gain additional revenue. Such dispatches are physically achievable for AA-CAES units because the air compressor and the generator use separate pipelines~\cite{steta2010modeling}, so that the compressor and the generator can operate at the same time.
LiBES can only either charge or discharge at any given time, so simultaneous dispatches must be avoided. In Appendix~\ref{App:SCER}, we derive the following sufficient condition for avoiding simultaneous charging and discharging ($\forall b\in B, t\in T, j\in J$): \begin{align}
c\up{dis} + c\up{ch} > -({1}/{\eta\up{dis}}-\eta\up{ch})\lambda\up{lmp}_{j,t,b}\,. \end{align} This sufficient condition states that, in optimal ED solutions, an ES unit will not charge and discharge simultaneously as long as the operating cost of a round-trip dispatch is higher than the product of the round-trip efficiency loss and the magnitude of the negative LMP. In other words, the cost of performing simultaneous dispatches is higher than the market payment. In the modified WECC model, the most negative LMP never exceeds $-200$~\$/MWh in magnitude, and the round-trip efficiency of the LiBES is 90\%. Therefore, as long as the marginal production cost of the LiBES is higher than 20~\$/MWh, simultaneous LiBES dispatches are avoided.
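A quick numerical check of this condition under the LiBES parameters, assuming (as an illustrative choice) that the 90\% round-trip efficiency splits evenly between charging and discharging:
\begin{verbatim}
# Sufficient condition: c_dis + c_ch > -(1/eta_dis - eta_ch) * lmp.
eta_rt = 0.90
eta_ch = eta_dis = eta_rt ** 0.5   # assume an even efficiency split
worst_lmp = -200.0                 # $/MWh, most negative LMP in the model
c_dis, c_ch = 87.0, 0.0            # $/MWh, LiBES costs from the ES table

threshold = -(1.0 / eta_dis - eta_ch) * worst_lmp
print(round(threshold, 1), c_dis + c_ch > threshold)  # about 21, True
\end{verbatim}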
\begin{figure}
\caption{Computation test results of the 16 test cases.}
\label{Fig:Comp_SC}
\label{Fig:Comp_TC}
\label{Fig:Comp}
\end{figure}
\subsection{Regulation Cost and Dispatch Model}
The cost of providing regulation is estimated using normalized CAISO area control error (ACE) data~\cite{caiso_ace, xu2016comparison}. The provision of regulation does not increase the operating cost for AA-CAES because these units have no marginal operating cost. The average cost for LiBES to provide 1 MW of regulation up for one hour is 10\% of the marginal production cost ($c\up{eu} = 0.1c\up{dis}$), under the assumption that an average of 100 kWh of energy is generated. The regulation cost scales linearly with the regulation up capacity within the 70\% DoD region. The LiBES has no cost for providing regulation down because charging has no marginal cost.
In CAISO, ES units have two options for participating in regulation: the regulation energy management program (REM) and the traditional option (non-REM). In the REM program, CAISO co-optimizes the ACE with the real-time energy market and generates a regularized regulation signal that is energy-neutral over every 15-minute interval. Therefore, REM units are only required to have a 15-minute capacity ($T\up{es}=0.25$) and do not deviate from their scheduled SoC levels.
Non-REM units must satisfy a continuous full dispatch time requirement of one hour ($T\up{es}=1$). SoC deviations caused by providing regulation in non-REM ES units are not accounted for in the ED formulation because we primarily evaluate the economic value of ES investment in hourly economic dispatch. In the regulation market, energy deviations caused by regulation provision are settled at the real-time locational marginal prices after the dispatch period~\cite{caiso_settlement}. Therefore, from an economic point of view, ES neither gains nor loses energy in regulation provision (i.e., ES cannot receive ``free energy for charging'' by providing regulation; the charged energy is still settled at market prices), and the proposed planning model leads to sufficiently accurate decisions without considering the real-time regulation energy deviations. In real-time dispatches, ES units can adopt control strategies against large energy deviations~\cite{xu2014bess} and maintain the scheduled dispatch.
\section{Simulation Results}\label{Sec:Test}
\subsection{Computational Performance}
\begin{table*}[t]
\centering
\caption{Comparison of ES rate of return with a maximum ES investment cost of 50~k\$/day.}
\label{tab:ES_pc}
\begin{tabular}{r || c c c c|| c c c c|| c c c c }
\hline
\hline
Selected typical day & \multicolumn{4}{c||}{Day 100} & \multicolumn{4}{c||}{Day 141} & \multicolumn{4}{c}{Day 285} \Tstrut\\
\hline
\hline
ES rate of return (\%) & 100 & 110 & 120 & 150 & 100 & 110 & 120 & 150 & 100 & 110 & 120 & 150 \Tstrut\Bstrut\\
\hline
Runtime (s) & 102 & 164 & 148 & 219 & 86 & 91 & 91 & 113 & 398 & 639 & 551 & 566 \Tstrut\Bstrut\\
\hline
ED operation cost (M\$/day) & 15.83 & 15.84 & 15.85 & 15.86 & 22.87 & 22.87 & 22.87 & 22.91 & 25.19 & 25.20 & 25.20 & 25.20\Tstrut\Bstrut\\
\hline
ES operation revenue (k\$/day) & 27.2 & 13.6 & 10.0 & 0 & 60.0 & 60.0 & 60.0 & 36.4 & 9.2 & 1.9 & 0 & 0 \Tstrut\Bstrut\\
\hline
ES investment cost (k\$/day) & 26.8 & 12.3 & 8.0 & 0 & 50.0 & 50.0 & 50.0 & 24.1 & 9.1 & 1.7 & 0 & 0 \Tstrut\Bstrut\\
\hline
ES location (bus number) & 155 & 155 & 155 & n/a & 155 & 155 & 155 & 155 & 228 & 228 & n/a & n/a \Tstrut\Bstrut\\
\hline
\hline
\end{tabular} \end{table*}
We compare the computational performance of the proposed method against solving the problem directly using CPLEX. When the profit constraint \eqref{PD:UL_ESP} is ignored, the objective function \eqref{SL:obj} and constraints \eqref{PD:UL_C1}, \eqref{PD:UL_C2}, and \eqref{PD:PLL_C} form a linear program (LP), which can be solved directly with CPLEX.
We designed 16 test cases with different planning scenarios. Cases 1--4 optimize the ES allocation considering 1, 3, 5, and 10 typical days, respectively, subject to a maximum ES investment budget constraint and using the AA-CAES model. Cases 5--8 are identical to Cases 1--4 except that they do not include a maximum investment constraint. Cases 9--16 are identical to Cases 1--8 except that the LiBES model is used.
As shown in Fig.~\ref{Fig:Comp}, the proposed method is significantly faster than solving the LP directly using CPLEX, while all system cost saving results are within the set tolerance. CPLEX exhibits an approximately quadratic increase in computation time with the number of typical days, while the proposed method exhibits a much slower increase. However, the computational speed of the proposed method depends on the renewable and demand profiles because some profiles result in smoother system cost functions, which facilitates the convergence of the subgradient cutting-plane method.
When the maximum investment budget constraint \eqref{PD:UL_C2} is excluded, the search region for the ES allocation expands and thus the computation time of both methods increases. However, this effect is much smaller for the proposed method.
\subsection{Rate of Return on ES Investment} We performed ES planning on three different days subject to different ES rate of return constraints. Table~\ref{tab:ES_pc} shows the results for the LiBES model with $T\up{es}=0.25$. A higher ES rate of return reduces the installed ES capacity and increases the system operating cost. A return rate of 150\% is only achievable in one of the three days.
The computation time of the proposed method increases moderately when the required rate of return is greater than 1, because the optimal ES allocation is solved repeatedly. However, this does not result in a polynomial or exponential increase in complexity because the cutting-plane method keeps track of historical results. Enforcing a higher rate of return reduces the maximum ES investment budget and hence shrinks the feasible region. In turn, this reduces the solution time when the problem is solved iteratively.
This table also shows that buses 155 and 228 are the only locations where ES is deployed for days 100, 141, and 285. In other single-day tests that we performed, ES was also located at buses 15, 90, 198, 226, 227, and 228. These buses are good locations for performing spatio-temporal arbitrage because they are connected to frequently congested lines and renewable sources, especially hydro units. In particular, LMPs are frequently negative at bus 155.
\begin{figure}
\caption{AA-CAES planning in different market scenarios.}
\label{Fig:CAES_P}
\label{Fig:CAES_E}
\label{Fig:CAES_rev}
\label{Fig:CAES}
\end{figure}
\begin{figure}
\caption{LiBES planning with decreasing investment cost.}
\label{Fig:LiBES_P}
\label{Fig:LiBES_E}
\label{Fig:LiBES_rev}
\label{Fig:LiBES}
\end{figure}
\subsection{Stochastic ES Planning}
We performed stochastic ES planning considering 20 typical days with no maximum ES investment limit. Three market scenarios are considered: in \emph{Arb-only}, AA-CAES only participates in the energy market; in \emph{Non-REM}, AA-CAES participates in the energy and regulation markets under traditional regulation requirements; and in \emph{REM}, AA-CAES participates in the energy and regulation markets under REM regulation requirements.
\subsubsection{AA-CAES Results} Fig.~\ref{Fig:CAES} shows the stochastic planning results for AA-CAES. Because the system-wide regulation prices are independent of the location, arbitrage is the sole factor driving ES siting. Bus 155 is the optimal choice for all ES allocations, mainly due to its high occurrence of negative LMPs. In the Arb-only market scenario, AA-CAES has a P/E ratio of 0.11~h$^{-1}$, equivalent to 9 hours of rated energy capacity. In the Non-REM and REM cases, the planning results have larger installation capacities, and the P/E ratio also increases. The change in the regulation requirement does not have a significant impact on the planning results, and arbitrage is still the primary source of market income.
\subsubsection{LiBES Results} No LiBES is installed in any market scenario under the current investment cost shown in Table~\ref{tab:ES_tech}. Since the decreasing trend in LiBES investment cost is expected to continue for the next ten years~\cite{dunn2011electrical,nykvist2015rapidly}, it is reasonable to assess the planning of LiBES using reduced investment costs. In Fig.~\ref{Fig:LiBES}, LiBES planning results are shown for investment cost reductions of up to 50\% for the Non-REM and REM market scenarios; in the Arb-only case, no LiBES is installed. The results show that investments in LiBES become profitable when the investment cost drops by at least 30\% from its current value, and the installed capacity increases steadily with further cost reductions. At each cost level, the market revenue from arbitrage is roughly the same in the Non-REM and REM cases, while the regulation revenue almost doubles with REM. The difference in revenue is also reflected in the installed capacity: while the installed energy capacity is similar in the REM and Non-REM cases, LiBES has a much higher P/E ratio with REM.
\subsection{Computational Speed Comparison}\label{Sec:CS:ESP:CTC}
The proposed solution algorithm is faster than solving a bi-level ES planning problem using the combination of Benders decomposition and MPEC. Nasrolahpour~\emph{et al.}~\cite{nasrolahpour2016strategic} use the MPEC+Benders approach to solve a bi-level ES planning problem for optimal ES sizing in a single-bus system in an energy market environment. Their solution times range from 3 to 6 hours. By comparison, when our method was applied to the optimization of ES siting and sizing in a 240-bus system considering both energy and regulation markets, the longest simulation finished within one hour, and most simulations finished within 15 minutes.
\section{Summary}\label{Sec:Con}
In this paper, we have formulated the optimal profit-constrained ES siting and sizing problem as a bi-level problem with a minimum rate of return constraint. We have proposed a scalable solution method combining primal decomposition with subgradient cutting planes. The proposed method is significantly faster than CPLEX at solving the LP formulation of the ES planning problem.
The proposed solution method has the same order of complexity as conventional economic dispatch, thus making this method computationally tractable for any system with a feasible ED solution. Since the decomposed subproblems are independent of each other, the computation time increases linearly with the number of typical days considered. The solution time could be further improved by solving the subproblems in parallel.
We have analyzed the optimal ES siting in joint energy and reserve markets on a modified WECC 240-bus model. The sensitivity of these siting decisions has been studied with respect to different ES technologies, the rate of return on ES investments, and regulation market policies. The results show that increasing the rate of return requirement greatly reduces the deployment of ES. In the stochastic ES planning study, AA-CAES shows a higher potential for reducing the system cost than LiBES, which depends on the design of the regulation market for its profitability. However, AA-CAES technology is still at the pilot stage, while grid-scale installations of LiBES are already taking place worldwide.
\subsection{Single-Level Equivalent Problem Formulation}\label{App_SL}
This problem consists of the objective function \eqref{SL:obj}, the UL constraints \eqref{PD:UL_C1}--\eqref{PD:UL_ESP}, and the following constraints: \subsubsection{PLL constraints} Equations \eqref{PLL:C_bus}--\eqref{PLL_CES4} and the following constraints ($\forall i\in I, j\in J, t\in T, b\in B$): \begin{gather} G\up{min}_i + r\up{gd}_{j,t,i} \leq p\up{g}_{j,t,i}\leq G\up{max}_i-r\up{gu}_{j,t,i}:(\psi\up{g}_{j,t,i},\varphi\up{g}_{j,t,i})\,\label{SL_C1b}\\ 0 \leq r\up{gu}_{j,t,i}\leq T\up{ru} R\up{u}_i:(\psi\up{gu}_{j,t,b}, \varphi\up{gu}_{j,t,b})\,\label{SL_C1c}\\ 0 \leq r\up{gd}_{j,t,i}\leq T\up{rd} R\up{d}_i:(\psi\up{gd}_{j,t,b}, \varphi\up{gd}_{j,t,b})\,\label{SL_C1d}\\ -R\up{d}_i \leq p\up{g}_{j,t,i}-p\up{g}_{j,t-1,i}\leq R\up{u}_i:(\psi\up{R}_{j,t,i}, \varphi\up{R}_{j,t,i})\,\label{SL_C1e}\\ f_{j,t,l}=(\theta_{j,t,o(l)}-\theta_{j,t,r(l)})/x_l : (\gamma\up{f}_{j,t,l})\,\label{SL_C2a}\\ -F\up{max}_l\leq f_{j,t,l} \leq F\up{max}_l : (\psi\up{f}_{j,t,l}, \varphi\up{f}_{j,t,l})\,\label{SL_C2b}\\ 0\leq p\up{rs}_{j,t,b} \leq G\up{rs}_{j,t,b}:(\psi\up{rn}_{j,t,b}, \varphi\up{rn}_{j,t,b})\,,\label{SL_C3} \end{gather} The minimum and maximum capacity of all generators are enforced in \eqref{SL_C1b}. \eqref{SL_C1c} and \eqref{SL_C1d} model the ramp requirement for regulation, and \eqref{SL_C1e} models the ramp requirement for dispatch. The DC power flow is modeled in \eqref{SL_C2a} and \eqref{SL_C2b}. The maximum expected forecast for renewable generation and the maximum allowable renewable spillage are enforced in \eqref{SL_C3}. \subsubsection{DLL constraints} The DLL problem has the following constraints ($\forall i\in I, j\in J, t\in T, b\in B$): \begin{align} \varphi_{j,t,l}\up{f}+\psi\up{f}_{j,t,l}+\gamma\up{f}_{j,t,l}-\lambda_{j,t,o(l)}\up{lmp}+\lambda_{j,t,r(l)}\up{lmp} &=0\,\label{SL_DC1}\\ \psi\up{rn}_{j,t,b}+\varphi\up{rn}_{j,t,b}-\lambda_{j,t,b}\up{lmp}+(\lambda_{j,t}\up{ru} + \lambda_{j,t}\up{rd})\phi\up{D} &=c\up{rs}\,\label{SL_DC2}\\ \varphi\up{g}_{j,t,i}+\psi\up{g}_{j,t,i}+\varphi\up{R}_{j,t,i}-\varphi\up{R}_{j,t+1,i}+\psi\up{R}_{j,t,i}-&\psi\up{R}_{j,t+1,i}\nonumber\\ +\lambda_{j,t,b(i)}\up{lmp}&=c_i\up{g}\,\label{SL_DC3}\\ \varphi\up{g}_{j,t,i}+\lambda_{j,t}\up{ru}+\varphi\up{gu}_{j,t,i}+\psi\up{gu}_{j,t,i}&=c\up{gu}_i\,\label{SL_DC5}\\ -\psi\up{g}_{j,t,i} +\lambda_{j,t}\up{rd}+\varphi\up{gd}_{j,t,i}+\psi\up{gd}_{j,t,i}&=c\up{gd}_i\,\label{SL_DC6}\\ \varphi\up{soc}_{j,t,b}+\psi\up{soc}_{j,t,b}+\gamma\up{soc}_{j,t,b}-\gamma\up{soc}_{j,t+1,b}&=0\,\label{SL_DC7}\\ \varphi\up{soc}_{j,n_T,b}+\psi\up{soc}_{j,n_T,b}+\gamma\up{soc}_{e,n_T,b} &= 0\,\label{SL_DC8}\\ \varphi_{j,t,b}\up{ch}+\psi_{j,t,b}\up{ch}-\gamma\up{soc}_{j,t,b}-\lambda\up{lmp}_{j,t,b}/\eta\up{ch} &= c\up{ch}\,\label{SL_DC9}\\ \varphi_{j,t,b}\up{dis}+\psi_{j,t,b}\up{dis}+\gamma\up{soc}_{j,t,b}+ \lambda\up{lmp}_{j,t,b} \eta\up{dis} &= c\up{dis}\,\label{SL_DC10}\\ \varphi_{j,t,b}\up{ch}+\psi_{j,t,b}\up{rd}+T\up{es}\varphi_{j,t,b}\up{soc} - \overline{e}\up{ed}\gamma\up{soc}_{j,t,b}+ \lambda\up{rd}_{j,t}/\eta\up{ch} &= c\up{ed}\,\label{SL_DC11}\\ \varphi_{j,t,b}\up{dis}+\psi_{j,t,b}\up{ru}-T\up{es}\psi_{j,t,b}\up{soc} - \overline{e}\up{eu}\gamma\up{soc}_{j,t,b} +\lambda\up{ru}_{j,t}\eta\up{dis} &= c\up{eu}\,,\label{SL_DC12} \end{align} where $\psi \geq 0$ and $\varphi \leq 0$.
\subsection{ES Profit Constraint Transformation}\label{App:ESP} From DLL constraints \eqref{SL_DC9}--\eqref{SL_DC10} and \eqref{SL_DC11}--\eqref{SL_DC12}, we obtain the following equalities: \begin{align} &p\up{dis}_{j,t,b}(\lambda\up{lmp}_{j,t,b}\eta\up{dis} - c\up{dis}) - p\up{ch}_{j,t,b}(\lambda\up{lmp}_{j,t,b} /\eta\up{ch} + c\up{ch}) \nonumber\\ &=\gamma\up{soc}_{j,t,b}(p\up{ch}_{j,t,b}-p\up{dis}_{j,t,b}) - p\up{dis}_{j,t,b}(\psi\up{dis}_{j,t,b} + \varphi\up{dis}_{j,t,b}) \nonumber\\&-p\up{ch}_{j,t,b}(\psi\up{ch}_{j,t,b} + \varphi\up{ch}_{j,t,b})\,\label{ESP_Eq2}\\ &r\up{eu}_{j,t,b} (\lambda\up{eu}_{j,t}\eta\up{dis} - c\up{eu}) + r\up{ed}_{j,t,b}(\lambda\up{ed}_{j,t} / \eta\up{ch} - c\up{ed}) \nonumber\\ &= -(\varphi_{j,t,b}\up{ch}+\psi_{j,t,b}\up{rd} +T\up{es}\varphi_{j,t,b}\up{soc}) r\up{ed}_{j,t,b} \nonumber\\&-(\varphi_{j,t,b}\up{dis}+\psi_{j,t,b}\up{ru}-T\up{es}\psi_{j,t,b}\up{soc}) r\up{eu}_{j,t,b}\,. \label{ESP_Eq3} \end{align}
By using \eqref{PLL_CES1}, \eqref{SL_DC7}, and \eqref{SL_DC8}, we derive the following expression: \begin{align} &\textstyle \sum_{t\in T}\gamma\up{soc}_{j,t,b}(p\up{ch}_{j,t,b}-p\up{dis}_{j,t,b}) \nonumber\\ &=\textstyle \sum_{t=1}^{n_{T}-1}e\up{soc}_{j,t,b}(\gamma\up{soc}_{j,t,b} - \gamma\up{soc}_{j,t+1,b}) + e\up{soc}_{j, n_T,b}\gamma\up{soc}_{j, n_T, b}\nonumber\\ &\textstyle= \sum_{t\in T}e\up{soc}_{j,t,b}(\psi\up{soc}_{j,t,b}+\varphi\up{soc}_{j,t,b})\,. \label{ESP_Eq4} \end{align} We can obtain the linear daily revenue collected by ES by 1) combining and rearranging \eqref{ESP_Eq2}--\eqref{ESP_Eq4}, and 2) substituting the complementary slackness conditions associated with \eqref{PLL_CES2}--\eqref{PLL_CES4}: \begin{align} &(p\up{ch}_{j,t,b} + r\up{ed}_{j,t,b})\varphi\up{ch}_{j,t,b} + (p\up{dis}_{j,t,b} + r\up{eu}_{j,t,b})\varphi\up{dis}_{j,t,b} \nonumber\\ &+ (e\up{soc}_{j,t,b}+T\up{es}r\up{ed}_{j,t,b})\varphi\up{soc}_{j,t,b} + (e\up{soc}_{j,t,b}-T\up{es}r\up{eu}_{j,t,b})\psi\up{soc}_{j,t,b} \nonumber\\ &= \textstyle\sum_{t\in T} \big[(\varphi\up{ch}_{j,t,b} + \varphi\up{dis}_{j,t,b}) p\up{R}_b + \varphi\up{soc}_{j,t,b} e\up{R}_b\big]\,, \label{ESP_Eq5} \end{align} which leads to \begin{align} C\up{R} = -\textstyle\sum_{j\in J}\sum_{\subalign{t&\in T \\ b&\in B}}\big[(\varphi\up{ch}_{j,t,b} + \varphi\up{dis}_{j,t,b}) p\up{R}_b + \varphi\up{soc}_{j,t,b} e\up{R}_b\big]\,. \label{ESP_Eq6} \end{align} Because ES revenue only applies to $b\in B\up{E}$, by comparing \eqref{ESP_Eq6} with \eqref{ESSG:P} and \eqref{ESSG:E}, we can represent ES revenue using ES subgradients: \begin{align} C\up{R} = -\textstyle\sum_{b\in B}\big[(g\up{p}_b-c\up{p})p\up{R}_b + (g\up{e}_b-c\up{e})e\up{R}_b\big]\,. \end{align}
\subsection{ES Subgradient Derivation at $B\up{E}$ Buses}\label{App:SBE} We calculate the ES subgradients assuming $C\up{S}$ is differentiable. At the $(\nu)$th iteration, the ES subgradient vector $g\up{U, (\nu)}$ includes the subgradients with respect to $p\up{R}_b$ and $e\up{R}_b$ for $b\in B$: \begin{gather} g\up{U,(\nu)} = [g\up{p,(\nu)}_b\quad g\up{e,(\nu)}_b]^T\,. \end{gather} $g\up{U,(\nu)}$ can be calculated using either the primal or the dual form of the ED problem, evaluated at their respective minimizers (or maximizers): \begin{align} g\up{U, (\nu)} \approx &\textstyle \nabla_{\mathnormal{x}\up{U}}C\up{S}(\mathnormal{x}\up{U, (\nu)}, \hat{\mathnormal{x}}\up{P, (\nu)}) \\ =&\nabla_{\mathnormal{x}\up{U}}C\up{E}(\mathnormal{x}\up{U, (\nu)}) +\textstyle \sum_{j\in J}\omega_j\nabla_{\mathnormal{x}\up{U}}C\up{P}_j(\mathnormal{x}\up{U, (\nu)}, \hat{\mathnormal{x}}\up{P, (\nu)}_j)\nonumber\\ =&\nabla_{\mathnormal{x}\up{U}}C\up{E}(\mathnormal{x}\up{U, (\nu)}) +\textstyle \sum_{j\in J}\omega_j\nabla_{\mathnormal{x}\up{U}}C\up{D}_j(\mathnormal{x}\up{U, (\nu)}, \hat{\mathnormal{x}}\up{D, (\nu)}_j)\,.\nonumber \end{align} The subgradients satisfy \begin{align}
\textstyle \lim_{\Delta x\up{U} \to 0} &||C\up{D}_j(\mathnormal{x}\up{U, (\nu)}+\Delta x\up{U}, \hat{\mathnormal{x}}\up{D, (\nu)}_j) - C\up{D}_j(\mathnormal{x}\up{U, (\nu)}, \hat{\mathnormal{x}}\up{D, (\nu)}_j)\nonumber\\
&- [g\up{U}]^T\Delta x\up{U} || / ||\Delta x\up{U}||= 0\,. \end{align} We use the dual form of the ED problem, and the subgradient for $b\in B\up{E}$ is: \begin{align} \begin{bmatrix} g\up{p, (\nu)} \\[1em] g\up{e, (\nu)} \end{bmatrix} = \begin{bmatrix} c\up{p}+\sum_{j\in J}\omega_j\sum_{\subalign{t&\in T \\ b&\in B}}(\hat{\varphi}\up{ch, (\nu)}_{j,t,b} + \hat{\varphi}\up{dis, (\nu)}_{j,t,b})\\[1em]
c\up{e}+\sum_{j\in J}\omega_j\sum_{\subalign{t&\in T \\ b&\in B}}\hat{\varphi}\up{soc, (\nu)}_{j,t,b} \end{bmatrix}\,. \end{align}
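As an illustration of how the scenario-wise dual solutions enter the subgradient above, the following short sketch (hypothetical shapes and data, not code from the paper; the sums are taken over scenarios and hours, yielding one entry per candidate bus) aggregates the ED multipliers $\hat{\varphi}\up{ch}$, $\hat{\varphi}\up{dis}$, and $\hat{\varphi}\up{soc}$ into per-bus subgradients.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
J, T, B = 4, 24, 10                      # scenarios (typical days), hours, buses
omega = np.full(J, 1.0 / J)              # scenario weights
# Optimal ED multipliers, shape (J, T, B); they are non-positive by definition.
phi_ch, phi_dis, phi_soc = (-rng.random((J, T, B)) for _ in range(3))
c_p, c_e = 90.0, 20.0                    # toy annualized investment costs

# g_p[b] = c_p + sum_j omega_j sum_t (phi_ch + phi_dis); g_e[b] likewise.
g_p = c_p + np.einsum('j,jtb->b', omega, phi_ch + phi_dis)
g_e = c_e + np.einsum('j,jtb->b', omega, phi_soc)
\end{verbatim}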
\subsection{ES Subgradient Derivation for $B\up{N}$ Buses}\label{App:SBN} For $b\in B\up{N}$, let $\Delta p\up{R}_b$ and $\Delta e\up{R}_b$ be sufficiently small. We use the strong duality condition and replace $C\up{P}_j$ with $C\up{D}_j$ in $C\up{S}$. Because $\Delta p\up{R}_b\to 0$ and $\Delta e\up{R}_b\to 0$, the other decision variables are not affected and can be removed, leaving only the terms that are directly associated with energy storage at $B\up{N}$ buses. We thus obtain the following problem, which calculates the ES gradient at $B\up{N}$ buses at iteration $\nu$: \begin{gather} \textstyle \max_{x\up{\Delta}} C\up{0, (\nu)}(\hat{x}\up{\Delta}, \hat{x}\up{D}_j) := \nonumber\\ \textstyle \sum_{j\in J}\omega_j\sum_{\subalign{t&\in T \\ b&\in B}} \big[\Delta p\up{R}_b({\varphi}\up{ch}_{j,t,b}+{\varphi}\up{dis}_{j,t,b})+\Delta e\up{R}_b{\varphi}\up{soc}_{j,t,b}\big] \end{gather} We let $\rho^0_b = \Delta p\up{R}_b / \Delta e\up{R}_b$. Because all $\Delta p\up{R}_b$ and $\Delta e\up{R}_b$ variables are completely independent, the problem is equivalent to: \begin{align} &\textstyle \max_{\Delta x} \sum_{j\in J}\omega_j\sum_{\subalign{t&\in T \\ b&\in B}} \big[\rho^0_b({\varphi}\up{ch}_{j,t,b}+{\varphi}\up{dis}_{j,t,b})+{\varphi}\up{soc}_{j,t,b}\big] \\ &\text{subject to:} \nonumber \\ &\rho\up{min} \leq \rho^0_b \leq \rho\up{max}\,,\label{g0_C1} \end{align} and \eqref{SL_DC5} to \eqref{SL_DC12}, with $\lambda\up{lmp}_{j,t,b}$, $\lambda\up{ru}_{j,t,b}$, $\lambda\up{rd}_{j,t,b}$ replaced by $\hat{\lambda}\up{lmp, (\nu)}_{j,t,b}$, $\hat{\lambda}\up{ru, (\nu)}_{j,t,b}$, $\hat{\lambda}\up{rd, (\nu)}_{j,t,b}$ from $x\up{D, (\nu)}_j$, because these ES units do not affect prices. This subproblem can be transformed into its equivalent primal form \begin{gather} \textstyle \min_{x\up{\Delta}} C\up{0}(x\up{\Delta}, \hat{x}\up{D, (\nu)}_j) := \sum_{b\in B} g\up{0}(x\up{\Delta}, \hat{x}\up{D, (\nu)}_j)\,, \end{gather} which is equivalent to \eqref{ESSG:BN}.
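Once the ED duals are fixed, the objective above is linear in the single scalar $\rho^0_b$, so a rough screening of the $B\up{N}$ buses can be obtained by simply evaluating both bounds of \eqref{g0_C1}. The sketch below is an illustration of this idea only (hypothetical data, and it deliberately ignores the additional dual-feasibility constraints \eqref{SL_DC5}--\eqref{SL_DC12} retained in the actual subproblem).

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
J, T, B = 4, 24, 10
omega = np.full(J, 1.0 / J)
phi_ch, phi_dis, phi_soc = (-rng.random((J, T, B)) for _ in range(3))
rho_min, rho_max = 0.25, 4.0             # admissible power-to-energy ratios (toy)

a = np.einsum('j,jtb->b', omega, phi_ch + phi_dis)   # coefficient of rho0_b
c = np.einsum('j,jtb->b', omega, phi_soc)            # constant term
rho_star = np.where(a > 0, rho_max, rho_min)         # maximizer of a*rho + c
screen_value = a * rho_star + c                      # per-bus gradient proxy
\end{verbatim}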
\subsection{Exact Relaxation of ES Dispatch Constraints}\label{App:SCER}
In the established ED problem, an ES can be prevented from charging and discharging simultaneously at any time step by enforcing the following non-convex complementarity constraint~\cite{li2016sufficient} ($\forall$ $j\in J$, $t\in T$, $b\in B$): \begin{align}
p\up{dis}_{j,t,b}p\up{ch}_{j,t,b} = 0\,.\label{SCER:cc} \end{align} A sufficient condition for an exact relaxation of \eqref{SCER:cc} is derived using the Karush-Kuhn-Tucker~(KKT) conditions. In the KKT conditions for the ED problem in \eqref{PD:PLL_obj} and \eqref{PD:PLL_C}, the derivative of the Lagrangian function with respect to the ES discharging variable $p\up{dis}_{j,t,b}$ must equal zero; hence the following equation holds ($\forall$ $j\in J$, $t\in T$, $b\in B$) \begin{align}
c\up{dis} - \varphi\up{dis}_{j,t,b} - \psi\up{dis}_{j,t,b} + \gamma\up{e}_{j,t,b} + \lambda\up{lmp}_{j,t,b}/\eta\up{dis} = 0\,.
\label{SCER:dis} \end{align} Similarly, for the ES charging variable $p\up{ch}_{j,t,b}$, the following equation holds ($\forall$ $j\in J$, $t\in T$, $b\in B$) \begin{align}
c\up{ch} - \varphi\up{ch}_{j,t,b} - \psi\up{ch}_{j,t,b} - \gamma\up{e}_{j,t,b} - \lambda\up{lmp}_{j,t,b}\eta\up{ch} = 0\,.
\label{SCER:ch} \end{align} Assume there exist $p\up{dis}_{j,t,b} > 0$ and $p\up{ch}_{j,t,b} > 0$ at bus $b$ at time $t$ during typical day $j$ in the optimal solution of the ED problem. Then $\psi\up{dis}_{j,t,b}=0$ and $\psi\up{ch}_{j,t,b}=0$ because of the complementary slackness conditions. Summing \eqref{SCER:dis} and \eqref{SCER:ch} yields \begin{align}
c\up{dis} + c\up{ch} -\varphi\up{dis}_{j,t,b}-\varphi\up{ch}_{j,t,b} + ({1}/{\eta\up{dis}}-\eta\up{ch})\lambda\up{lmp}_{j,t,b} = 0\,. \label{SCER:e1} \end{align} Because $\varphi\up{dis}_{j,t,b} \leq 0$ and $\varphi\up{ch}_{j,t,b} \leq 0$, \eqref{SCER:e1} reduces to \begin{align}
c\up{dis} + c\up{ch} \leq -({1}/{\eta\up{dis}}-\eta\up{ch})\lambda\up{lmp}_{j,t,b}\,. \label{SCER:e2} \end{align} Inequality \eqref{SCER:e2} is a necessary condition for $p\up{dis}_{j,t,b} > 0$ and $p\up{ch}_{j,t,b} > 0$. Hence, a sufficient condition for the exact relaxation of the complementarity constraint \eqref{SCER:cc} is \begin{align}
c\up{dis} + c\up{ch} > -({1}/{\eta\up{dis}}-\eta\up{ch})\lambda\up{lmp}_{j,t,b}\,, \label{SCER:e3} \end{align} for all $j\in J$, $t\in T$, $b\in B$.
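A quick numerical illustration of condition \eqref{SCER:e3} (all parameter values below are hypothetical): since $\eta\up{ch},\eta\up{dis}<1$ imply $1/\eta\up{dis}-\eta\up{ch}>0$, the right-hand side of \eqref{SCER:e3} is non-positive whenever the LMP is non-negative, so any strictly positive cycling cost makes the relaxation exact; the condition can only fail at sufficiently negative prices.

\begin{verbatim}
import numpy as np

eta_ch, eta_dis = 0.92, 0.92
c_ch, c_dis = 2.0, 2.0                       # toy marginal cycling costs ($/MWh)
lmp = np.linspace(-50.0, 200.0, 1001)        # sweep of locational prices
rhs = -(1.0 / eta_dis - eta_ch) * lmp
exact = c_dis + c_ch > rhs                   # condition (SCER:e3), elementwise
if (~exact).any():
    print("relaxation may be inexact for LMP <=", lmp[~exact].max())
else:
    print("relaxation exact over the whole price sweep")
\end{verbatim}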
\begin{IEEEbiographynophoto}{Bolun Xu} (S'14) received B.S. degrees in Electrical and Computer Engineering from the University of Michigan, Ann Arbor, USA and Shanghai Jiaotong University, Shanghai, China in 2011, and the M.Sc degree in Electrical Engineering from Swiss Federal Institute of Technology, Zurich, Switzerland in 2014.
He is currently pursuing the Ph.D. degree in Electrical Engineering at the University of Washington, Seattle, WA, USA. His research interests include energy storage, power system operations, and power system economics. \end{IEEEbiographynophoto}
\begin{IEEEbiographynophoto}{Yishen Wang} (S'12) received the B.S. degree from the Department of Electrical Engineering, Tsinghua University, Beijing, China, in 2011. He is currently pursuing the Ph.D. degree in electrical engineering at the University of Washington, Seattle, WA, USA.
His research interests include power system economics and operation, energy storage, renewable forecasting and electricity markets. \end{IEEEbiographynophoto}
\begin{IEEEbiographynophoto}{Yury Dvorkin} (S'11-M'16) received his Ph.D. degree from the University of Washington, Seattle, WA, USA, in 2016.
He is currently an Assistant Professor in the Department of Electrical and Computer Engineering at New York University, New York, NY, USA. He was awarded the 2016 Scientific Achievement Award by the Clean Energy Institute (University of Washington) for his doctoral dissertation ``Operations and Planning in Sustainable Power Systems''. His research interests include short- and long-term planning in power systems with renewable generation and power system economics. \end{IEEEbiographynophoto}
\begin{IEEEbiographynophoto}{Ricardo~Fern\'andez-Blanco} (S'10-M'15) received the Ingeniero Industrial degree and the Ph.D. degree in electrical engineering from the Universidad de Castilla-La Mancha, Ciudad Real, Spain, in 2009 and 2014, respectively.
He is currently a Scientific/Technical Project Officer at the JRC.C7 Knowledge for the Energy Union (Joint Research Center), Petten, The Netherlands. His research interests include the fields of operations and economics of power systems, bilevel programming, hydrothermal coordination, and electricity markets. \end{IEEEbiographynophoto}
\begin{IEEEbiographynophoto}{Jean-Paul Watson} (M'10) received the B.S., M.S., and Ph.D. degrees in computer science.
He is a Distinguished Member of Technical Staff with the Discrete Math and Optimization Department, Sandia National Laboratories, Albuquerque, NM, USA. He leads a number of research efforts related to stochastic optimization, ranging from fundamental algorithm research and development, to applications including power grid operations and planning. \end{IEEEbiographynophoto}
\begin{IEEEbiographynophoto}{Cesar A. Silva-Monroy} (M'15) received the B.S. degree from the Universidad Industrial de Santander, Bucaramanga, Colombia, and the M.S. and Ph.D. degrees from University of Washington, Seattle, WA, USA, all in electrical engineering.
He is a Senior Member of Technical Staff with the Electric Power Systems Research Department at Sandia National Laboratories, Albuquerque, NM, USA. He leads and collaborates in several projects that seek to increase the efficiency, resilience, and reliability of the grid by applying advanced optimization techniques to power system operations, planning, and control. \end{IEEEbiographynophoto}
\begin{IEEEbiographynophoto}{Daniel S. Kirschen} (M'86-SM'91-F'07) received his electrical and mechanical engineering degree from the Universit\'e Libre de Bruxelles, Brussels, Belgium, in 1970 and his M.S. and Ph.D. degrees from the University of Wisconsin, Madison, WI, USA, in 1980 and 1985, respectively.
He is currently the Donald W. and Ruth Mary Close Professor of Electrical Engineering at the University of Washington, Seattle, WA, USA. His research interests include smart grids, the integration of renewable energy sources in the grid, power system economics, and power system security. \end{IEEEbiographynophoto}
\end{document} |
\begin{document}
\title{Dimensions of Prym Varieties} \author{Amy E. Ksir\\
Mathematics Department\\
State University of New York at Stony Brook \\
Stony Brook, NY, 11794} \date{July 26, 2000} \email{[email protected]} \begin{abstract} Given a tame Galois branched cover of curves $\pi: X \to Y$ with any finite Galois group $G$ whose representations are rational, we compute the dimension of the (generalized) Prym variety $\Prym_{\rho}(X)$ corresponding to any irreducible representation $\rho$ of $G$. This formula can be applied to the study of algebraic integrable systems using Lax pairs, in particular systems associated with Seiberg-Witten theory. However, the formula is much more general and its computation and proof are entirely algebraic. \end{abstract} \maketitle
\section{Introduction}
The most familiar Prym variety arises from a (possibly branched) double cover $\pi: X \to Y$ of curves. In this situation, there is a surjective norm map $\Nm: \Jac(X) \to \Jac(Y)$, and the Prym (another abelian variety) is a connected component of its kernel. Another way to think of this is that the involution $\sigma$ of the double cover induces an action of $\mathbb{Z}/2\mathbb{Z}$ on the vector space $H^{0}(X, \omega_{X})$, which can then be decomposed as a representation of $\mathbb{Z}/2\mathbb{Z}$. The Jacobian of the base curve $Y$ and the Prym correspond to the trivial and sign representations, respectively. The Prym variety can be defined as the component containing the identity of $(\Jac(X) \otimes_{\mathbb{Z}} \varepsilon)^{\sigma}$, where $\varepsilon$ denotes the sign representation of $\mathbb{Z}/2\mathbb{Z}$.
The generalization of this construction that we will study in this paper is as follows. Let $G$ be a finite group, and $\pi: X \to Y$ be a tame Galois branched cover, with Galois group $G$, of smooth projective curves over an algebraically closed field. The action of $G$ on $X$ induces an action on the vector space of differentials $H^0(X,\omega_{X})$, and on the Jacobian $\Jac(X)$. For any representation $\rho$ of $G$, we define $\Prym_{\rho}(X)$ to be the connected component containing the identity of $(\Jac(X) \otimes_{\mathbb{Z}} \rho^{*})^{G}$. The vector space $H^0(X,\omega_{X})$ will decompose as a $\mathbb{Z}[G]$-module into a direct sum of isotypic pieces \begin{equation} H^0(X,\omega_{X}) = \bigoplus_{j=1}^{N} \rho_j \otimes V_j \end{equation} where $\rho_1, \ldots , \rho_N$ are the irreducible representations of $G$. If $G$ is such that all of its representations are rational, then the Jacobian will also decompose, up to isogeny, into a direct sum of Pryms [D2]: \begin{equation} \Jac(X) \sim \bigoplus_{j=1}^{N} \rho_j \otimes \Prym_{\rho_j}(X). \end{equation} In particular, if $G$ is the Weyl group of a semisimple Lie algebra, then it will satisfy this property.
The goal of this paper is to compute the dimension of such a Prym variety. This formula is given in section 2, with a proof that uses only the Riemann-Hurwitz theorem and some character theory. Special cases of this formula relevant to integrable systems have appeared previously [A, Me, S, MS].
One motivation for this work comes from the study of algebraically integrable systems. An algebraically integrable system is a Hamiltonian system of ordinary differential equations, where the phase space is an algebraic variety with an algebraic (holomorphic, over $\mathbb{C}$) symplectic structure. The complete integrability of the system means that there are $n$ commuting Hamiltonian functions on the $2n$-dimensional phase space. For an algebraically integrable system, these functions should be algebraic, in which case they define a morphism to an $n$-dimensional space of states for the system. The flow of the system will be linearized on the fibers of this morphism, which, if they are compact, will be $n$-dimensional abelian varieties.
Many such systems can be solved by expressing the system as a Lax pair depending on a parameter $z$. The equations can be written in the form $\frac{d}{dt}A = [A,B]$, where $A$ and $B$ are elements of a Lie algebra $\mathfrak{g}$, and depend both on time $t$ and on a parameter $z$, which is thought of as a coordinate on a curve $Y$. In this case, the flow of the system is linearized on a subtorus of the Jacobian of a Galois cover of $Y$. If it can be shown that this subtorus is isogenous to a Prym of the correct dimension, then the system is completely integrable.
In section 3, we will briefly discuss two examples of such systems, the periodic Toda lattice and Hitchin systems. Both of these are important in Seiberg-Witten theory, providing solutions to $\mathcal{N}=2$ supersymmetric Yang-Mills gauge theory in four dimensions.
This work appeared as part of a Ph.D. thesis at the University of Pennsylvania. The author would like to thank her thesis advisor, Ron Donagi, for suggesting this project and for many helpful discussions. Thanks are also due to David Harbater, Eyal Markman, and Leon Takhtajan.
\section{Dimensions}
We can start by using the Riemann-Hurwitz formula to find the genus $g_X$ of $X$, which will be the dimension of the whole space $H^0(X,\omega_{X})$ and of
$\Jac (X)$. Since $\pi: X \to Y$ is a cover of degree $|G|$, we get \begin{equation}
g_X = 1 + |G|(g-1) + \frac{\deg R}{2} \end{equation} where $g$ is the genus of the base curve $Y$ and $R$ is the ramification divisor.
The first isotypic piece we can find the dimension of is $V_1$, corresponding to the trivial representation. The subspace where $G$ acts trivially is the subspace of differentials which are pullbacks by $\pi$ of differentials on $Y$. This tells us that $\dim V_1 = \dim H^0(Y,\omega_{Y}) = g$.
In the case of classical Pryms, where $G = \mathbb{Z}/2$, there is only one other isotypic piece, $V_{\varepsilon}$ corresponding to the sign representation $\varepsilon$. Thus we have \begin{equation} \dim V_{\varepsilon} = g_X - g = g-1 + \frac{\deg R}{2}. \end{equation}
For larger groups $G$, there are more isotypic pieces, but we also have more information: we can look at intermediate curves, i.e. quotients of $X$ by subgroups $H$ of $G$. Differentials on $X/H$ pull back to differentials on $X$ where $H$ acts trivially. Thus \begin{equation} H^0(X/H,\omega_{X/H}) = \bigoplus_{j=1}^{N} (\rho_j)^{H} \otimes V_j. \end{equation}
The map $\pi_H: X/H \to Y$ will be a cover of degree $\frac{|G|}{|H|}$, so Riemann-Hurwitz gives us the following formula for the genus $g_H$ of $X/H$, which is the dimension of $H^0(X/H,\omega_{X/H})$: \begin{equation}
g_H = 1 + \frac{|G|}{|H|}(g-1) + \frac{\deg R_H}{2}, \end{equation} where again $R_H$ is the ramification divisor.
We can further analyze the ramification divisor by classifying the branch points according to their inertial groups. Since $\pi: X \to Y$ is a tame Galois cover of curves, all of the inertial groups must be cyclic.
\begin{lemma} Let $G$ be a finite group all of whose characters are defined over $\mathbb{Q}$. If two elements $x,y \in G$ generate conjugate cyclic subgroups, then they are conjugate. \end{lemma}
Proof (adapted from [BZ]): We want to show that for any character $\chi$ of $G$, $\chi(x) = \chi(y)$. Then the properties of characters will tell us that $x$ and $y$ must be in the same conjugacy class.
We may assume that $x$ and $y$ generate the same subgroup, $H$. Then
$y = x^k$ for some integer $k$ relatively prime to $|H|$. Let $\chi$ be a character of $G$, and $\rho: G \to GL(n,\mathbb{C})$
a representation with character $\chi$. Then $\rho(x)$ will be a matrix with eigenvalues $\lambda_1, \ldots, \lambda_n$, and $\rho(y)$ will have eigenvalues $\lambda_1^k, \ldots, \lambda_n^k$. Since $x^{|H|} = 1$,
we have $\lambda_1^{|H|} = \ldots = \lambda_n^{|H|} = 1$. Let $\xi$ be a primitive $|H|$th root of unity. Then we can write $\lambda_1 = \xi^{\nu_1}, \ldots, \lambda_n = \xi^{\nu_n}$ for some integers $\nu_i$. Now $\chi(x) = \mathrm{Trace}(\rho(x)) = \lambda_1 + \ldots + \lambda_n$, and $\chi(y) = \chi(x^k) = \lambda_1^k + \ldots + \lambda_n^k$. Thus $\chi(y)$ will be the image of $\chi(x)$ under the element of $\mathrm{Gal}(\mathbb{Q}(\xi)/\mathbb{Q})$ which sends $\xi \mapsto \xi^k$. Since the values of $\chi$ are rational, this element will act trivially, so $\chi(y) = \chi(x)$.
$\square$
From now on, we will suppose that $G$ is such that all of its characters are rational. (This will be true, for instance, if $G$ is a Weyl group). Pick representative elements $h_1 \ldots h_N$ for each conjugacy class in $G$, and let $H_1 \ldots H_N$ be the cyclic groups that each of them generates. By Lemma 1, this will be the whole set (up to conjugacy) of cyclic subgroups of $G$. We can partially order this set of cyclic subgroups by their size, so that $H_1$ is the trivial subgroup. Now we can classify the branch points: let $R_k, k=2 \ldots N$ be the degree of the branch locus with inertial group conjugate to $H_k$ (ignoring the trivial group).
Over each point of the branch locus where the inertial group is conjugate to $H_k$, there will be $|G|/|H_k|$ points in the fiber. Thus the degree of the ramification divisor $R$ of $\pi: X \to Y$ will be \begin{equation}
\deg R = \sum_{k=1}^{N} (|G| - \frac{|G|}{|H_k|}) R_k \end{equation} For each quotient curve $X/H$, each point in the fiber of $\pi_H: X/H \to Y$ over a point with inertial group $H_k$ will correspond to a double coset $H_k \backslash G / H$. Thus the degree of the ramification divisor $R_H$ will be \begin{equation}
\deg R_H = \sum_{k=1}^{N} (\frac{|G|}{|H|} - \#(H_k \backslash G / H)) R_k. \end{equation} Combining these formulas with the earlier Riemann-Hurwitz computations, we get: \begin{equation}
g_X = 1 + |G|(g-1) + \sum_k(|G| - \frac{|G|}{|H_k|} ) \frac{R_k}{2} \end{equation} \begin{equation}
g_H = 1 + \frac{|G|}{|H|}(g-1) + \sum_k(\frac{|G|}{|H|} - \#(H_k \backslash G / H) ) \frac{R_k}{2} \end{equation}
Since the genera $g_H$ are exactly the dimensions $\dim H^0(X/H,\omega_{X/H})$, we also have \begin{equation} g_H = \sum_{j=1}^{N} \dim \rho_j^{H} \dim V_j. \end{equation}
For each subgroup $H$, this is a linear equation for the unknown dimensions $\dim V_j$ in terms of the genus $g_{H}$. Thus by taking quotients by the set of all cyclic subgroups $H_1 \ldots H_N$, we get a system of $N$ equations. We wish to invert the matrix $\dim\rho_j^{H_i}$ and find the $N$ unknowns $\dim V_j$. \begin{lemma} The matrix $\dim\rho_j^{H_i}$ is invertible. \end{lemma}
Proof: We show that the rows of the matrix are linearly independent, using the fact that rows of the character table are linearly independent. First, note that $\dim \rho_j^{H_i}$, the dimension of the subspace of $\rho_j$ invariant under $H_i$, is equal to the inner product of characters $\langle \Res^G_{H_i} \rho_j, \mathbf{1} \rangle$, which we can read off from the character table of $G$ as \begin{equation}
\dim \rho_j^{H_i} = \frac{1}{|H_i|} \sum_{a_i \in H_i} \chi_{\rho_j}(a_i). \end{equation}
Compare this matrix to the matrix of the character table $\chi_{\rho_j}(a_i)$. From (12) we see that each row is a sum of multiples of rows of the character table. Since each element of $H_i$ generates a cyclic subgroup whose order is at most $|H_i|$, the rows of the character table being added to get row $i$ have index at most $i$. Thus if we write the matrix $\dim\rho_j^{H_i}$ in terms of the basis of the character table, we will get a lower triangular matrix with non-zero entries on the diagonal. By row reduction, we see that the linear independence of the rows of $\dim\rho_j^{H_i}$ is equivalent to the linear independence of the rows of the character table.
$\square$
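The following small computation (an illustration only, not part of the proof) checks the lemma for $G = S_3$, whose character table is rational: the matrix $\dim\rho_j^{H_i}$, computed from (12), has determinant $2$ and is therefore invertible.

\begin{verbatim}
import numpy as np

# Character table of S_3: rows are the irreducible characters (trivial, sign,
# standard); columns are the classes of e, (12), (123), of sizes 1, 3, 2.
chi = np.array([[1,  1,  1],
                [1, -1,  1],
                [2,  0, -1]])
# Up to conjugacy the cyclic subgroups are H_1={e}, H_2=<(12)>, H_3=<(123)>.
# For each H_i, list the column index of the class of each of its elements.
subgroup_elements = [[0], [0, 1], [0, 2, 2]]

M = np.array([[np.mean([chi[j, c] for c in elems]) for j in range(3)]
              for elems in subgroup_elements])       # formula (12)
print(M)                  # [[1. 1. 2.], [1. 0. 1.], [1. 1. 0.]]
print(np.linalg.det(M))   # 2.0 (up to rounding), hence invertible
\end{verbatim}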
\begin{theorem} For each nontrivial irreducible representation $\rho_j$ of $G$, $V_j$ has dimension \begin{equation} (\dim \rho_j) (g-1) + \sum_{k=1}^{N} \Bigl((\dim \rho_j) - (\dim \rho_j^{H_k})
\Bigr) \frac{R_{H_k}}{2} \end{equation} \end{theorem}
Proof: Since the matrix $\dim\rho_j^{H_i}$ is invertible, there is a unique solution to the system of equations (11), so we only need to show that this is a solution. Namely, given this formula for $\dim V_j$, and combining (10) and (11), we wish to show that for each cyclic subgroup $H_i$, \begin{equation}
\sum_{j=1}^{N} \dim \rho_j^{H_i} \dim V_j = 1 + \frac{|G|}{|H_i|}(g-1)
+ \sum_k(\frac{|G|}{|H_i|} - \#(H_k \backslash G / H_i) ) \frac{R_k}{2}. \end{equation}
Note that on the left side we are summing over all representations, not just the nontrivial ones, so our notation will be simpler if we write $\dim V_1 = g$ in a similar form to (11). For the trivial representation $\rho_1$, $(\dim \rho_1) - (\dim \rho_1^{H_k}) = 0$ (since $\rho_1$ is fixed by any subgroup $H_k$), so \begin{equation} \dim V_1 = 1 + (\dim \rho_1) (g-1) + \sum_{k=1}^{N} \Bigl((\dim \rho_1) -
(\dim \rho_1^{H_k}) \Bigr) \frac{R_{H_k}}{2}. \end{equation}
The sum on the left hand side of (14) will be \begin{equation} 1 + \sum_{j=1}^{N} \dim \rho_j^{H_i} \Bigl((\dim \rho_j) (g-1) + \sum_{k=1}^{N} \bigl((\dim \rho_j) - (\dim \rho_j^{H_k}) \bigr) \frac{R_{H_k}}{2} \Bigr). \end{equation}
Let us look at the $(g-1)$ term and the $R_{H_k}$ terms separately. For the $(g-1)$ coefficient, we can write both $\dim \rho_j^{H_i}$ and $\dim \rho_j$ in terms of characters of $G$ (as in (12)) and exchange the order of summation to get
\begin{equation} \sum_{j=1}^{N} \dim \rho_j^{H_i} \dim \rho_j =
\frac{1}{|H_i|} \sum_{a_i \in H_i} \sum_{j=1}^N \chi_{\rho_j}(a_i)
\chi_{\rho_j}(e) \end{equation} where $e$ is the identity element of $G$. The inner sum amounts to taking the inner product of two columns of the character table of $G$. The orthogonality of characters tells us that this inner product will be zero unless the two columns are the same, in this case if $a_i = e$. Thus the sum over elements in $H_i$ disappears, and we get the sum of the squares of the dimensions of the characters: \begin{equation}
\frac{1}{|H_i|} \sum_{j=1}^N \chi_{\rho_j}(e)^2 = \frac{|G|}{|H_i|}, \end{equation} which is what we want.
The $R_{H_k}$ term looks like \begin{equation} \sum_{j=1}^{N} \dim \rho_j^{H_i} \sum_{k=1}^{N} \bigl((\dim \rho_j) -
(\dim \rho_j^{H_k}) \bigr) \frac{R_{H_k}}{2}. \end{equation}
We can distribute and rearrange the sums to get: \begin{equation} \sum_{k=1}^{N} \Bigl(\sum_{j=1}^{N} \dim \rho_j^{H_i} \dim \rho_j -
\sum_{j=1}^{N} \dim \rho_j^{H_i} \dim \rho_j^{H_k}
\Bigr)\frac{R_{H_k}}{2}. \end{equation}
As in (17) and (18), the first term becomes $\frac{|G|}{|H_i|}$. The second term is also the inner product of columns of the character table: \begin{equation} \sum_{j=1}^{N} \dim \rho_j^{H_i} \dim \rho_j^{H_k} =
\frac{1}{|H_i|} \frac{1}{|H_k|} \sum_{a_i \in H_i} \sum_{a_k \in H_k}
\sum_{j=1}^{N} \chi_{\rho_j}(a_i) \chi_{\rho_j}(a_k). \end{equation} This will be zero unless $a_i$ and $a_k$ are conjugate, in which case $\chi_{\rho_j}(a_i) = \chi_{\rho_j}(a_k)$ and character theory tells us (see for example [FH], p. 18) that \begin{equation}
\sum_{j=1}^N \chi_{\rho_j}(a_i)^2 = \frac{|G|}{c(a_i)}, \end{equation} where $c(a_i)$ is the number of elements in the conjugacy class of $a_i$. Now the second term has become \begin{equation}
\frac{|G|}{|H_i||H_k|} \sum_{\{a_i, a_k\}} \frac{1}{c(a_i)} \end{equation} where the sum is taken over pairs of elements $a_i \in H_i, a_k \in H_k$ such that $a_i$ and $a_k$ are conjugate. This is exactly the number of double cosets $\#(H_k \backslash G / H_i)$.
Adding up all of the terms, the sum on the left hand side becomes \begin{equation}
1 + \frac{|G|}{|H_i|}(g-1) + \sum_k\Bigl(\frac{|G|}{|H_i|} - \#(H_k \backslash G / H_i)\Bigr)
\frac{R_k}{2}, \end{equation} which is exactly the right hand side.
$\square$
\begin{corollary} For each nontrivial irreducible representation $\rho_j$ of $G$, $\Prym_{\rho_j}(X)$ has dimension \begin{equation} (\dim \rho_j) (g-1) + \sum_{k=1}^{N} \Bigl((\dim \rho_j) - (\dim \rho_j^{H_k})
\Bigr) \frac{R_{H_k}}{2}. \end{equation}
$\square$ \end{corollary}
\section{Integrable Systems.}
\textbf{Periodic Toda lattice.}
The periodic Toda system is a Hamiltonian system of differential equations with Hamiltonian \begin{equation*}
H(p,q) = \frac{|p|^{2}}{2} + \sum_{\alpha} e^{\alpha(q)} \end{equation*} where $p$ and $q$ are elements of the Cartan subalgebra $\mathfrak{t}$ of a semisimple Lie algebra $\mathfrak{g}$, and the sum is over the simple roots of $\mathfrak{g}$ plus the highest root. This system can be expressed in Lax form [AvM] $\frac{d}{dt}A = [A,B]$, where $A$ and $B$ are elements of the loop algebra $\mathfrak{g}^{(1)}$, and can be thought of as elements of $\mathfrak{g}$ which depend on a parameter $z \in \mathbb{P}^{1}$. For $\mathfrak{sl}(n)$, $A$ is of the form \begin{equation*} \begin{pmatrix}
y_{1} & 1 & & x_{0}z \\
x_{1} & y_{2} & \ddots & \\
& \ddots & \ddots & 1 \\
z & & x_{n-1} & y_{n} \end{pmatrix} \end{equation*}
For any representation $\varrho$ of $\mathfrak{g}$, the spectral curve $S_{\varrho}$ defined by the equation $\det (\varrho(A(z)) - \lambda I) = 0$ is independent of time (i.e. is a conserved quantity of the system). The spectral curve is a finite cover of $Y$ which for generic $z$ parametrizes the eigenvalues of $\varrho(A(z))$. While the eigenvalues are conserved by the system, the eigenvectors are not. The eigenvectors of $\varrho(A)$ determine a line bundle on the spectral cover, and hence an element of $\Jac(S_{\varrho})$. The flow of the system is linearized on this Jacobian. Since the original system of equations did not depend on a choice of representation $\varrho$, the flow is actually linearized on an abelian variety which is a subvariety of $\Jac(S_{\varrho})$ for every $\varrho$.
In fact, instead of considering each spectral cover we can look at the cameral cover $X \to \mathbb{P}^{1}$. This is constructed as a pullback to $\mathbb{P}^{1}$ of the cover $\mathfrak{t} \to \mathfrak{t}/G$, where $G$ is the Weyl group of $\mathfrak{g}$. This cover is pulled back by the rational map $\mathbb{P}^{1} \dashrightarrow \mathfrak{t}/G$ defined by the class of $A(z)$ under the adjoint action of the corresponding Lie group. (For $A(z)$ a regular semisimple element of $\mathfrak{sl}(n)$, this map sends $z$ to the unordered set of eigenvalues of $A(z)$.) Thus the cameral cover is a finite Galois cover of $\mathbb{P}^{1}$ whose Galois group $G$ is the Weyl group of $\mathfrak{g}$. The flow of the Toda system is linearized on the Prym of this cover corresponding to the representation of $G$ on $\mathfrak{t}^{*}$. This is an $r$-dimensional representation, where $r$ is the rank, so the dimension of this Prym is \begin{equation*}
r(-1) + \sum_{k=1}^{N} \Bigl(r - (\dim \mathfrak{t}^{H_k})
\Bigr) \frac{R_{H_k}}{2}. \end{equation*}
The ramification of this cover has been analyzed in [D1] and [MS]. There are $2r$ branch points where the inertial group $H$ is $\mathbb{Z}/2\mathbb{Z}$ generated by one reflection, so for each of these $\dim \mathfrak{t}^{H}$ is $r-1$. There are also two points ($z=0$ and $\infty$) where the inertial group $H$ is generated by the Coxeter element, the product of the reflections corresponding to the simple roots. This element of $G$ doesn't fix any element of $\mathfrak{t}$, so for these two points $\dim \mathfrak{t}^{H} = 0$. Thus the dimension of the Prym is \begin{eqnarray*}
-r + (r-(r-1)) \frac{2r}{2} + (r-0) \frac{2}{2} \\
= r. \end{eqnarray*} Since the original system of equations had a $2r$-dimensional phase space, this is the answer that we want.
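As a quick illustration (a consistency check only, not needed for the general argument), take $\mathfrak{g} = \mathfrak{sl}(2)$, so that $r=1$ and the Weyl group is $\mathbb{Z}/2\mathbb{Z}$; the unique reflection is also the Coxeter element. The cameral cover is then a double cover of $\mathbb{P}^{1}$ branched at the $2r+2 = 4$ points above, hence a curve of genus one, and the formula gives $1\cdot(-1) + (1-0)\frac{4}{2} = 1 = r$. Since the Jacobian of the base $\mathbb{P}^{1}$ is trivial, the Prym in this case is the full one-dimensional Jacobian of the cameral curve.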
\textbf{Hitchin systems.} Hitchin showed [H] that the cotangent bundle to the moduli space of semistable vector bundles on a curve $Y$ has the structure of an algebraically completely integrable system. His proof, later extended to principal $\mathcal{G}$-bundles for any reductive Lie group $\mathcal{G}$ [F,S], uses the fact that this moduli space is equivalent (by deformation theory) to the space of \emph{Higgs pairs}, pairs $(P,\phi)$ of a principal bundle and an endomorphism $\phi \in H^{0}(Y, ad(P) \otimes \omega_{Y})$. As in the case of the Toda system, the key construction is of a cameral cover of $Y$. The eigenvalues of $\phi$, which are sections of the line bundle $\omega_{Y}$, determine a spectral cover of $Y$ inside the total space of $\omega_{Y}$. The eigenvectors determine a line bundle on this spectral cover. The Hitchin map sends a Higgs pair $(P,\phi)$ to the set of coefficients of the characteristic polynomial of $\phi$. Each coefficient is a section of a power of $\omega_{Y}$, so the image of the Hitchin map is $B := \bigoplus_{i=1}^{r}H^{0}(Y,\omega_{Y}^{\otimes d_i})$, where the $d_{i}$ are the degrees of the basic invariant polynomials of the Lie algebra $\mathfrak{g}$.
Again, we can consider instead the cameral cover $X_{b} \to Y$, which is obtained as a pullback to $Y$ via $\phi$ of $\mathfrak{t} \otimes \omega_{Y} \to \mathfrak{t} \otimes \omega_{Y}/G$. The generic fiber of the Hitchin map is isogenous to $\Prym_{\mathfrak{t}}(X)$, which has dimension \begin{equation*}
r(g-1) + \sum_{k=1}^{N} \Bigl(r - (\dim \mathfrak{t}^{H_k})
\Bigr) \frac{R_{H_k}}{2} \end{equation*} By looking at the generic fiber, we can restrict our attention to cameral covers where the only ramification is of order two, with inertial group $H$ generated by one reflection. The last piece of information we need to compute the dimension is the degree of the branch divisor of $X \to Y$.
The cover $\mathfrak{t} \otimes \omega_{Y} \to \mathfrak{t} \otimes \omega_{Y}/G$ is ramified where any of the roots, or their product, is equal to zero. There are $(\dim \mathcal{G} - r)$ roots, so this defines a hypersurface of degree $(\dim \mathcal{G} - r)$ in the total space of $\omega_{Y}$. The ramification divisor of $X \to Y$ is the intersection of this hypersurface with the section $\phi$, which is the divisor corresponding to the line bundle $\omega_{Y}^{\otimes(\dim \mathcal{G} - r)}.$ Thus the degree of the branch divisor will be $(\dim \mathcal{G} - r)(2g-2)$.
Combining all of this information, we see that the dimension of the Prym is \begin{eqnarray*} \dim \Prym_{\mathfrak{t}} (X) & = & r (g - 1) + (r-(r-1)) \frac{(\dim \mathcal{G} - r)(2g-2)}{2} \\ & = & r(g-1) + (\dim \mathcal{G} - r)(g-1) \\ & = & \dim \mathcal{G}(g-1). \end{eqnarray*}
By comparison, the dimension of the base space is \begin{equation*} \sum_{i=1}^{r} h^{0}(Y,\omega_{Y}^{\otimes d_i}). \end{equation*}
The sum of the degrees $d_{i}$ of the basic invariant polynomials of $\mathfrak{g}$ is the dimension of a Borel subalgebra, $(\dim \mathcal{G} + r)/2$. For $g>1$, Riemann-Roch gives \begin{eqnarray*} \sum_{i=1}^{r} h^{0}(Y,\omega_{Y}^{\otimes d_i}) & = & \sum_{i=1}^{r} (2d_{i}-1) (g-1) \\ & = & (\dim \mathcal{G} + r - r)(g-1) \\ & = & \dim \mathcal{G}(g-1). \end{eqnarray*}
This, as Hitchin said, ``somewhat miraculously'' turns out to be the same thing.
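For concreteness (a standard verification, included here only as an example), for $\mathcal{G} = \mathrm{SL}(n)$ the degrees are $d_i = 2,3,\ldots,n$, so that $\sum_{i=1}^{r} d_{i} = \frac{n(n+1)}{2} - 1 = \frac{\dim \mathcal{G} + r}{2}$ with $\dim \mathcal{G} = n^{2}-1$ and $r = n-1$, and indeed $\sum_{i=1}^{r}(2d_{i}-1)(g-1) = (n^{2}-1)(g-1) = \dim \mathcal{G}\,(g-1)$.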
Markman [Ma] and Bottacin [B] generalized the Hitchin system by twisting the line bundle $\omega_{Y}$ by an effective divisor $D$. The effect of this is to create a family of integrable systems, parametrized by the residue of the Higgs field $\phi$ at $D$. The base space of each system is a fiber of the map \begin{gather*} B := \bigoplus_{i=1}^{r}H^{0}(Y,\omega_{Y}(D)^{\otimes d_i}) \\ \downarrow \\ \bar{B} := \text{the space of possible residues at } D \end{gather*} which sends the set of $r$ sections in $B$ to its set of residues at $D$. At each point of $D$, there are $r$ independent coefficients, so the dimension of $\bar B$ is $r(\deg D)$. Thus the base space of each system has dimension \begin{eqnarray*} \dim B - \dim \bar B & = & \sum_{i=1}^{r}h^{0}(Y,\omega_{Y}(D)^{\otimes d_i}) - r(\deg D) \\ & = & \sum_{i=1}^{r}( d_{i}(2g-2 + \deg D) - (g-1)) -r(\deg D) \\ & = & (1/2)(\dim \mathcal{G} + r)(2g - 2 + \deg D) -r(g-1) -r(\deg D) \\ & = & (\dim \mathcal{G})(g-1) + \frac{\dim \mathcal{G} -r}{2} \deg D \end{eqnarray*}
Markman showed that the generic fiber of this system is again isogenous to $\Prym_{\mathfrak{t}}(X)$, where $X$ is a cameral cover of the base curve $Y$. The construction of the cameral cover is similar to the case of the Hitchin system, except that $\phi$ is a section of $ad(P) \otimes \omega_{Y}(D)$. Thus the ramification divisor is $(\omega_{Y}(D))^{\otimes(\dim \mathcal{G} -r)}$, and the dimension is \begin{eqnarray*} \dim \Prym_{\mathfrak{t}} (X) & = & r (g - 1) + \frac{(\dim \mathcal{G} - r)(2g-2 + \deg D)}{2} \\ & = & \dim \mathcal{G}(g-1) + \frac{(\dim \mathcal{G} -r)}{2} \deg D. \end{eqnarray*}
Again, this is the same dimension as the base of the system.
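The agreement of the last two dimension counts can also be checked symbolically; the following short sketch (purely illustrative) verifies that the two expressions above coincide as polynomials in $g$, $\deg D$, $\dim\mathcal{G}$ and $r$.

\begin{verbatim}
import sympy as sp

dimG, r, g, D = sp.symbols('dimG r g D')
base = sp.Rational(1, 2)*(dimG + r)*(2*g - 2 + D) - r*(g - 1) - r*D
prym = r*(g - 1) + sp.Rational(1, 2)*(dimG - r)*(2*g - 2 + D)
print(sp.simplify(base - prym))   # prints 0
\end{verbatim}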
\end{document} |
\begin{document}
\begin{abstract} Multiplicative constants are a fundamental tool in the study of maximal representations. In this paper we show how to extend this notion, and the associated framework, to the theory of measurable cocycles. As an application of this approach, we define and study the Cartan invariant for measurable $\textup{PU}(m,1)$-cocycles of complex hyperbolic lattices.
\end{abstract}
\maketitle
\section{Introduction}
A fruitful approach to the study of geometric structures on a topological space $X$ is to introduce a bounded numerical invariant whose maximum detects those structures on $X$ which have many symmetries. An instance of this situation is the study of the representation space of lattices in (semi)simple Lie groups. More precisely, given two simple Lie groups of non-compact type $G,G'$ and a lattice $\Gamma \leq G$, Burger and Iozzi~\cite{BIuseful} described how to associate to every representation $\rho \colon \Gamma \rightarrow G'$ a real number. Using the pullback map $\upH^\bullet_{cb}(\rho)$ induced by $\rho$ in continuous bounded cohomology, they defined a numerical invariant $\lambda(\rho)$, which depends on a chosen class $\Psi' \in \upH^\bullet_{cb}(G';\bbR)$, as follows: $$ \lambda(\rho) \coloneqq \langle \comp^\bullet_\Gamma \circ \upH^\bullet_{cb}(\rho)(\Psi'), [\Gamma \backslash \calX] \rangle \ , $$ where $\comp^\bullet_\Gamma$ denotes the comparison map (Section~\ref{subsec:transfer:maps}), $\calX$ denotes the Riemannian symmetric space associated to $G$, $[\Gamma \backslash \calX]$ is the (relative) fundamental class of the quotient manifold and $\langle \cdot, \cdot \rangle$ is the Kronecker product. We say that $\lambda(\rho)$ is a \emph{multiplicative constant} if it appears in an integral formula, called \emph{useful formula} by Burger and Iozzi~\cite{BIuseful}. When $\lambda$ is a multiplicative constant, the formula implies that the numerical invariant has bounded absolute value. In several cases~\cite{bucher2:articolo,Pozzetti,BBIborel}, its maximum corresponds precisely to representations induced by representations of the ambient group.
\subsection{A multiplicative formula for measurable cocycles}
One of the main goals of this paper is to set up the foundational framework needed to define multiplicative constants for measurable cocycles. We carefully choose a setting where we can coherently extend ordinary numerical invariants for representations. Moreover, we introduce an integral formula in such a way that our definition of multiplicative constants is the natural extension of Burger-Iozzi's one. Our techniques make use of bounded cohomology theory.
Let $G,G'$ be two locally compact groups and let $L,Q \leq G$ be two closed subgroups. Assume that $Q$ is \emph{amenable} and that $L$ is a lattice. Let $(X,\mu_X)$ be a \emph{standard Borel probability $L$-space} and let $Y$ be a measurable $G'$-space. Following Burger-Iozzi's approach, given a measurable cocycle $\sigma \colon L \times X \rightarrow G'$, we define the \emph{pullback induced by $\sigma$} in continuous bounded cohomology using continuous cochains on the groups directly (Definition~\ref{def:pullback:cocycle}). Unfortunately, this approach does not lead to the desired multiplicative formula. For this reason, we need to consider boundary maps. A \emph{(generalized) boundary map} $\phi \colon G/Q \times X \rightarrow Y$ is a measurable $\sigma$-equivariant map and its existence is closely related to the properties of $\sigma$ (Remark~\ref{oss:esistenza:mappe:bordo}). Inspired by the definition of Bader-Furman-Sauer's Euler number~\cite{sauer:articolo}, assuming the existence of a boundary map $\phi$, we describe how to construct a new \emph{pullback map} $\upC^\bullet(\Phi^X)$ in terms of $\phi$ (Definition \ref{def:pullback:not:fibered}). The notation $\upC^\bullet(\Phi^X)$ emphasizes the fact that it is not simply the pullback along $\phi$, but we also need to integrate over $X$ (compare with Definitions~\ref{def:pullback:boundary} and~\ref{def:integration:map}). The map induced by $\upC^\bullet(\Phi^X)$ in continuous bounded cohomology agrees with the natural pullback along $\sigma$ (Lemma \ref{lem:pullback:implemented:boundary:map}).
Our aim is to coherently extend the study of numerical invariants of representations to the case of measurable cocycles. Recall that given a continuous representation $\rho \colon L \rightarrow G'$ with boundary map $\varphi$, there always exists a natural \emph{measurable cocycle} $\sigma_\rho$ associated to it. Using the previous pullback $\upC^\bullet(\Phi^X)$, we then show that the map induced by $\rho$ in continuous bounded cohomology agrees with the one induced by $\sigma_\rho$ (Proposition~\ref{prop:pullback:coc:vs:repr}). Moreover, the pullback along $\sigma$ is invariant along the $G'$-cohomology class of the cocycle (Proposition~\ref{prop:invariance:cohomology}).
The study of pullback maps along measurable cocycles (and their boundary maps) leads to the following \emph{multiplicative formula}, which extends Burger-Iozzi's useful formula~\cite[Proposition 2.44, Principle 3.1]{BIuseful}. Recall that the \emph{transfer map} is a cohomological left inverse of the restriction from $G$ to $L$.
\begin{intro_prop}[Multiplicative formula]\label{prop:baby:formula} Keeping the notation above, let $\psi' \in \calB^\infty(Y^{\bullet+1};\bbR)^{G'}$ be an everywhere defined $G'$-invariant cocycle. Let $\psi \in \upL^\infty((G/Q)^{\bullet+1})^G$ be a $G$-invariant cocycle and let $\Psi \in \upH^\bullet_{cb}(G;\bbR)$ denote the class of $\psi$. Assume that $\Psi=\textup{trans}_{G/Q}^{\bullet} [\upC^\bullet(\Phi^X)(\psi')]$, where $\textup{trans}_{G/Q}^\bullet$ is the transfer map. \begin{enumerate}
\item We have that $$ \int_{L \backslash G} \int_X \psi'(\phi(\overline{g}.\eta_1, x), \ldots, \phi(\overline{g}.\eta_{\bullet+1}, x)) d\mu_X(x) d\mu(\overline{g}) = \psi(\eta_1, \ldots, \eta_{\bullet+1}) + \textup{cobound.} \ , $$ for almost every $(\eta_1,\ldots,\eta_{\bullet+1}) \in (G/Q)^{\bullet+1}$.
\item If $\upH^\bullet_{cb}(G;\bbR) \cong \bbR \Psi (= \bbR[\psi])$, then there exists a real constant $\lambda_{\psi',\psi}(\sigma) \in \bbR$ depending on $\sigma,\psi',\psi$ such that \begin{align*} \int_{L\backslash G} \int_X \psi'(\phi(\overline{g}.\eta_1, x), \ldots, \phi(\overline{g}.\eta_{\bullet+1}, x)) d\mu_X(x) d\mu(\overline{g})&=\lambda_{\psi',\psi}(\sigma) \cdot \psi(\eta_1,\ldots,\eta_{\bullet+1}) \\ &+\textup{cobound.} \ , \end{align*} for almost every $(\eta_1,\ldots,\eta_{\bullet+1}) \in (G/Q)^{\bullet+1}$. \end{enumerate} \end{intro_prop}
Although this formula might appear slightly complicated at first sight, it contains all the ingredients for defining the \emph{multiplicative constant} $\lambda_{\psi',\psi}(\sigma)$ associated to a measurable cocycle $\sigma$ and two given bounded cochains $\psi,\psi'$ (Definition~\ref{def:multiplicative:constant}). When no coboundary terms appear in the previous formula, we provide an explicit upper bound for the multiplicative constant (Proposition~\ref{prop:multiplicative:upperbound}). This leads to the definition of \emph{maximal measurable cocycles} (Definition \ref{def:maximal:cocycle}). Finally, under suitable hypotheses, we prove that a maximal cocycle is \emph{trivializable} (Theorem~\ref{teor:coniugato:standard:embedding}), i.e. it is cohomologous to a cocycle induced by a representation $L \leq G \rightarrow G'$.
This general framework has the great advantage that we can easily deduce several applications (Section~\ref{subsec:applications:baby} and Section~\ref{sec:concluding:remarks}).
\subsection{Cartan invariant of measurable cocycles}
Let $\Gamma \leq \pu(n,1)$ be a torsion-free lattice with $n \geq 2$. The study of representations of $\Gamma$ into $\pu(m,1)$ dates back to the work of Goldman and Millson~\cite{GoldM}, Corlette~\cite{Cor88} and Toledo~\cite{Toledo89}. In order to investigate rigidity properties of maximal representations $\rho \colon \Gamma \rightarrow \pu(m,1)$, Burger and Iozzi~\cite{BIcartan} defined the \emph{Cartan invariant} $i_\rho$ associated to $\rho$. Inspired by their work, we make use of our techniques to define the \emph{Cartan invariant} $i(\sigma)$ for a measurable cocycle $\sigma:\Gamma \times X \rightarrow \pu(m,1)$, where $(X,\mu_X)$ is a standard Borel probability $\Gamma$-space.
If the cocycle admits a boundary map (e.g. if it is \emph{non elementary}), the Cartan invariant can be realized as the multiplicative constant associated to $\sigma$ and the Cartan cocycles $c_n,c_m$. More precisely, as an application of Proposition~\ref{prop:baby:formula}, we prove the following
\begin{intro_prop}\label{prop:cartan:multiplicative:cochains} Let $\Gamma \leq \pu(n,1)$ be a torsion-free lattice and let $(X,\mu_X)$ be a standard Borel probability $\Gamma$-space. Consider a non-elementary measurable cocycle $\sigma \colon \Gamma \times X \rightarrow \pu(m,1)$ with boundary map $\phi \colon \partial_\infty \bbH^n_{\bbC} \times X \rightarrow \partial_\infty \bbH^m_{\bbC}$. Then, for every triple of pairwise distinct points $\xi_1,\xi_2,\xi_3 \in \partial_\infty \bbH^n_{\bbC}$, we have \begin{small} \begin{equation*} i(\sigma)c_n(\xi_1,\xi_2,\xi_3)=\int_{\Gamma \backslash \pu(n,1)} \int_X c_m(\phi(\overline{g}\xi_1,x),\phi(\overline{g}\xi_2,x),\phi(\overline{g}\xi_3,x))d\mu(\overline{g})d\mu_X(x) \ . \end{equation*} \end{small} Here $\mu$ is a $\pu(n,1)$-invariant probability measure on the quotient $\Gamma \backslash \pu(n,1)$. \end{intro_prop}
First we show that our Cartan invariant extends the one defined for representations (Proposition~\ref{prop:cartan:rep}). Moreover, using our results about the pullback along boundary maps, we prove that the Cartan invariant is constant along $\pu(m,1)$-cohomology classes and it has absolute value bounded by $1$ (Proposition \ref{prop:cartan:cohomology:bound}).
Then, a natural problem is to provide a complete characterization of measurable cocycles whose Cartan invariant attains extremal values, i.e. either $0$ or $1$. Since we are not interested in elementary cocycles, we can assume the existence of a boundary map~\cite[Proposition 3.3]{MonShal0}.
Following the work by Burger and Iozzi~\cite{BIreal}, we introduce the notion of \emph{totally real cocycles}. A cocycle is totally real if it is cohomologous to a cocycle whose image is contained in a subgroup of $\pu(m,1)$ preserving a totally geodesically embedded copy $\bbH^k_{\bbR} \subset \bbH^m_{\bbC}$, for some $1 \leq k \leq m$ (Definition \ref{def:totally:real}). Totally real cocycles can be easily constructed by taking the composition of a measure equivalence cocycle with a totally real representation.
We show that totally real cocycles have trivial Cartan invariant. The converse seems unlikely to hold in general. However, if $X$ is $\Gamma$-ergodic, we obtain the following
\begin{intro_thm}\label{teor:totally:real} Let $\Gamma \leq \pu(n,1)$ be a torsion-free lattice and let $(X,\mu_X)$ be a standard Borel probability $\Gamma$-space. Consider a non-elementary measurable cocycle $\sigma \colon \Gamma \times X \rightarrow \pu(m,1)$ with boundary map $\phi \colon \partial_\infty \mathbb{H}^n_{\bbC} \times X \rightarrow \partial_\infty \bbH^m_{\bbC}$. Then the following hold \begin{enumerate}
\item If $\sigma$ is totally real, then $i(\sigma)=0$;
\item If $X$ is $\Gamma$-ergodic and $\textup{H}^2(\phi)([c_m])=0$, then $\sigma$ is totally real. \end{enumerate} \end{intro_thm}
The next step in our investigation is the study of the \emph{algebraic hull} of a cocycle with non-vanishing pullback. Recall that the algebraic hull is the smallest algebraic group containing the image of a cocycle cohomologous to $\sigma$ (Definition \ref{def:alg:hull}).
\begin{intro_thm}\label{teor:alg:hull} Let $\Gamma \leq \textup{PU}(n,1)$ be a torsion-free lattice and let $(X,\mu_X)$ be an ergodic standard Borel probability $\Gamma$-space. Consider a non-elementary measurable cocycle $\sigma \colon \Gamma \times X \rightarrow \pu(m,1)$ with boundary map $\phi \colon \partial_\infty \bbH^n_{\bbC} \times X \rightarrow \partial_\infty \bbH^m_{\bbC}$. Let $\mathbf{L}$ be the algebraic hull of $\sigma$ and denote by $L=\mathbf{L}(\bbR)^\circ$ the connected component of the identity of the real points.
If $\upH^2(\Phi^X)([c_m])\neq 0$, then $L$ is an almost direct product $K \cdot M$, where $K$ is compact and $M$ is isomorphic to $\textup{PU}(p,1)$ for some $1 \leq p \leq m$.
In particular, the symmetric space associated to $L$ is a totally geodesically embedded copy of $\bbH^p_{\bbC}$ inside $\bbH^m_{\bbC}$. \end{intro_thm}
We conclude with a complete characterization of maximal cocycles.
\begin{intro_thm}\label{teor:cartan:rigidity} Consider $n \geq 2$. Let $\Gamma \leq \pu(n,1)$ be a torsion-free lattice. Let $(X,\mu_X)$ be an ergodic standard Borel probability $\Gamma$-space. Consider a maximal measurable cocycle $\sigma \colon \Gamma \times X \rightarrow \pu(m,1)$. Let $\mathbf{L}$ be the algebraic hull of $\sigma$ and let $L=\mathbf{L}(\bbR)^\circ$ be the connected component of the identity of the real points.
Then, we have \begin{enumerate} \item $m \geq n$;
\item $L$ is an almost direct product $\textup{PU}(n,1) \cdot K$, where $K$ is compact;
\item $\sigma$ is cohomologous to the cocycle $\sigma_i$ associated to the standard lattice embedding $i \colon \Gamma \to \pu(m,1)$ (possibly modulo the compact subgroup $K$ when $m >n$). \end{enumerate} \end{intro_thm}
Since one of the authors, together with Sarti, has recently proved a generalization of the previous theorem for cocycles with target $\textup{PU}(p,q)$~\cite[Theorem 2]{sarti:savini}, we will mainly refer to their more complete result for the proof.
\subsection*{Plan of the paper}
In Section~\ref{sec:prel}, we recall some preliminary definitions and results that we need in the paper. We report the definitions of amenable action, measurable cocycle, boundary map and algebraic hull in Section~\ref{subsec:zimmer:cocycle}. We then review Burger and Monod's functorial approach to continuous bounded cohomology (Section~\ref{subsec:burger:monod}) and we conclude this preliminary section with the definition of transfer maps (Section~\ref{subsec:transfer:maps}).
Section \ref{sec:easy:formula} is devoted to the description of the general framework in which we study multiplicative constants associated to measurable cocycles. There, we first define the pullback along a measurable cocycle and along its boundary map (Section~\ref{subsec:pullback:boundary}). Then, we compare our definition with the usual one given for representations (Section~\ref{subsec:rep:coc}). In Section \ref{subsec:easy:mult:formula} we state our multiplicative formula (Proposition~\ref{prop:baby:formula}) and we introduce the notion of multiplicative constant associated to a measurable cocycle. We conclude the section studying the notion of maximality (Section~\ref{subsec:multiplicative:constant}) and showing some applications of the previous results (Section~\ref{subsec:applications:baby}).
Section~\ref{sec:cartan:invariant} contains the new application of our machinery. There, we introduce and study the Cartan invariant of measurable cocycles (Section~\ref{sec:cartan:invariant}). We prove that it is a multiplicative constant (Proposition \ref{prop:cartan:multiplicative:cochains}) and it extends the same invariant for representations (Proposition \ref{prop:cartan:rep}). Moreover, we show that the Cartan invariant has bounded absolute value in Proposition~\ref{prop:cartan:cohomology:bound}.
In Section \ref{sec:tot:real} we define totally real cocycles and we prove Theorem \ref{teor:totally:real}. Then in Section \ref{sec:cartan:rigidity} we study maximal measurable cocycles and we prove both Theorem \ref{teor:alg:hull} and Theorem \ref{teor:cartan:rigidity}. We conclude with some remarks about recent applications of our theory in Section \ref{sec:concluding:remarks}.
\subsection*{Acknowledgements}
We truly thank the anonymous referee for the detailed report, which allowed us to substantially improve the quality of our paper.
\section{Preliminary definitions and results}\label{sec:prel}
\subsection{Amenability and measurable cocycles} \label{subsec:zimmer:cocycle} In this section we are going to recall some classic definitions related to both amenability and measurable cocycles. We start fixing the following notation: \begin{itemize} \item Let $G$ be a locally compact second countable group endowed with its Haar measurable structure.
\item Let $(X,\mu)$ be a \emph{standard Borel measure $G$-space}, i.e. a standard Borel measure space endowed with a measure-preserving $G$-action. \end{itemize} If $\mu$ is a probability measure, we will refer to $(X, \mu)$ as a \emph{standard Borel probability $G$-space}. Given another measure space $(Y,\nu)$, we denote by $\textup{Meas}(X,Y)$ the space of measurable functions from $X$ to $Y$ endowed with the topology of convergence in measure. \begin{oss} In the literature about the ergodic version of simplicial volume~\cite{Sauer, Sthesis, FFM,LP,FLPS,FLF,Camp-Corro,FLMQ}, it is often convenient to work with essentially free actions. For this reason, one might find reasonable to stick with the same assumptions also here working with the dual notion of bounded cohomology. However, it is easy to check that every probability measure-preserving action can be promoted to an essentially-free action just by taking the product with an essentially free action and considering the diagonal action on that product. \end{oss}
We recall now some definitions about amenability. We mainly refer the reader to the books by Zimmer~\cite[Section 4.3]{zimmer:libro} and by Monod~\cite[Section~5.3]{monod:libro} for further details about this topic.
Let $\upL^\infty(G;\bbR)$ denote the space of essentially bounded real functions over $G$. Then, $G$ acts on $\upL^\infty(G;\bbR)$ as follows $$ g.f (g_0) = f(g^{-1} g_0) \ , $$ for all $g, g_0 \in \, G$ and $f \in \, \upL^\infty(G;\bbR)$.
\begin{deft}\label{def:amenable:group} A \emph{mean} on $\upL^\infty(G;\bbR)$ is a continuous linear functional $$ m \colon \upL^\infty(G;\bbR) \rightarrow \bbR \ , $$ such that $m(f)\geq 0$ whenever $f \geq 0$ and $m(\chi_G)=1$, where $\chi_G$ denotes the characteristic function on $G$.
We say that a mean is \emph{left invariant} if for all $g \in \, G$ and $f \in \, \upL^\infty(G;\bbR)$ we have $m(g.f) = m(f)$.
A group is \emph{amenable} if it admits a left-invariant mean. \end{deft}
\begin{es}\label{es:amenable:groups} The following families are examples of amenable groups: \begin{enumerate} \item Abelian groups~\cite[Theorem 4.1.2]{zimmer:libro};
\item Finite/Compact groups~\cite[Theorem 4.1.5]{zimmer:libro};
\item Extensions of amenable groups by amenable groups~\cite[Proposition 4.1.6]{zimmer:libro};
\item Let $G$ be a Lie group and let $P \subset G$ be any minimal parabolic subgroup. Then, $P$ is an extension of a solvable group by a compact group, whence it is amenable~\cite[Corollary 4.1.7]{zimmer:libro}. \end{enumerate} \end{es}
In the sequel we will need a more general notion of amenability which is related to group actions. In fact, amenable spaces and amenable actions will play a crucial role in the functorial approach to the computation of continuous bounded cohomology (Section \ref{subsec:burger:monod}). Following Monod's convention, we begin by defining \emph{regular} $G$-{spaces}~\cite[Definition~2.1.1]{monod:libro}.
\begin{deft} Let $G$ be a locally compact second countable group and let $S$ be a standard Borel space with a measurable $G$-action which preserves a measure class. We say that $(S, \mu)$ is a \emph{regular} $G$-\emph{space} if this measure class contains a probability measure $\mu$ such that the isometric action $R \colon G \curvearrowright \upL^1(S, \mu)$ defined by $$ (R(g).f)(s) = f(g^{-1}.s) \frac{d g^{-1}\mu}{d\mu}(s) $$ is continuous. Here $d g^{-1}\mu \slash d\mu$ denotes the Radon-Nikod\'{y}m derivative. \end{deft}
\begin{es}\label{es:reg:spaces} Let $G$ be a locally compact second countable group; then the following are examples of regular $G$-spaces~\cite[Example~2.1.2]{monod:libro}: \begin{enumerate} \item If $G$ is endowed with its Haar measure, then $G$ is a regular $G$-space.
\item If $Q$ is a closed subgroup of $G$, then $G \slash Q$ endowed with the natural almost invariant measure is a regular $G$-space.
\item Furstenberg-Poisson boundaries~\cite{furst:articolo73,furst:articolo} associated to a probability measure on $G$ are regular $G$-spaces. \end{enumerate} \end{es}
The notion of regular $G$-spaces allows us to introduce the definitions of \emph{amenable actions} and \emph{amenable spaces}~\cite[Theorem~5.3.2]{monod:libro}.
\begin{deft}\label{def:amenable:action} Let $G$ be a locally compact second countable group and let $(S,\mu)$ be a regular $G$-space. We say that the action of $G$ on $(S,\mu)$ is \emph{amenable} if there exists a continuous norm-one $G$-equivariant linear operator $$ p \colon \upL^\infty(G \times S;\bbR) \rightarrow \upL^\infty(S;\bbR) \ , $$ with the following two properties: First $p(\chi_{G \times S})=\chi_S$, secondly for all $f \in \, \upL^\infty(G \times S)$ and for all measurable sets $A \subset S$ we have $p(f \cdot \chi_{G \times A}) = p(f) \cdot \chi_A$.
If the action by $G$ on $(S, \mu)$ is amenable, then we say that $(S,\mu)$ is an \emph{amenable $G$-space}. \end{deft}
\begin{oss}\label{oss:am:spaces} The previous definition extends the notion of amenable groups in the following sense: A group is amenable if and only if every regular $G$-space is an amenable $G$-space~\cite[Theorem~5.3.9]{monod:libro}.
Amenable actions not only characterize groups but also subgroups. By Example~\ref{es:reg:spaces}.2, given a closed subgroup $Q \subset G$, the quotient $G \slash Q$ is a regular $G$-space. Additionally, we have that $Q$ is amenable if and only if the $G$-action on $G \slash Q$ is amenable~\cite[Proposition 4.3.2]{zimmer:libro}. Hence, Example~\ref{es:amenable:groups}.4 shows that if $G$ is a Lie group and $P \subset G$ is any minimal parabolic subgroup, then the action $G \curvearrowright G \slash P$ is amenable. This applies to the Furstenberg-Poisson boundary of a Lie group, which can be identified with $G/P$. \end{oss}
We recall now the notion of measurable cocycles and some of their properties.
\begin{deft}\label{def:zimmer:cocycle} Let $G$ and $H$ be locally compact groups and let $(X,\mu)$ be a standard Borel probability $G$-space. A \emph{measurable cocycle} (or, simply \emph{cocycle}) is a measurable map $\sigma \colon G \times X \rightarrow H$ satisfying the following formula \begin{equation}\label{eq:zimmer:cocycle} \sigma(g_1g_2,x)=\sigma(g_1,g_2.x)\sigma(g_2,x) \ , \end{equation} for almost every $g_1,g_2 \in G$ and almost every $x \in X$. Here, $g_2.x$ denotes the action by $g_2 \in G$ on $x \in \, X$. \end{deft}
Associated to measurable cocycles there exists the crucial notion of \emph{boundary map}.
\begin{deft} Let $G$ and $H$ be two locally compact groups and let $Q \leq G$ be a closed amenable subgroup. Let $(X,\mu)$ be a standard Borel probability $G$-space and let $(Y,\nu)$ be a measure space on which $H$ acts by preserving the measure class of $\nu$. Given a measurable cocycle $\sigma \colon G \times X \rightarrow H$, we say that a measurable map $\phi \colon G/Q \times X \rightarrow Y$ is \emph{$\sigma$-equivariant} if we have $$ \phi(g.\eta,g.x)=\sigma(g,x)\phi(\eta,x) \ , $$ for almost every $g \in G,\eta \in G/Q$ and $x \in X$.
A \emph{(generalized) boundary map} associated to $\sigma$ is a $\sigma$-equivariant measurable map. \end{deft}
We will make use of generalized boundary maps in Section~\ref{subsec:pullback:boundary}, when we will explain how to compute the pullback in continuous bounded cohomology.
\begin{oss}\label{oss:esistenza:mappe:bordo} It is quite natural to ask when a (generalized) boundary map actually exists. Let $G(n)=\textup{Isom}(\bbH^n_{K})$ be the isometry group of the $K$-hyperbolic space, where $K$ is either $\bbR$ or $\bbC$. Given a lattice $\Gamma \leq G(n)$, let us consider a standard Borel probability $\Gamma$-space $(X,\mu_X)$ and a measurable cocycle $\sigma:\Gamma \times X \rightarrow G(m)$. In the previous situation, Monod and Shalom \cite[Proposition 3.3]{MonShal0} proved that if the cocycle $\sigma$ is \emph{non elementary} then there exists an essentially unique boundary map $$ \phi:\partial_\infty \bbH^n_{K} \times X \rightarrow \partial_\infty \bbH^m_{K} \ . $$ The notion of non-elementary cocycle relies on the definition of \emph{algebraic hull} (Definition \ref{def:alg:hull}) and it will be explained more carefully later in this paper.
Also in the case of higher rank lattices there are relevant results about the existence of boundary maps. Indeed, a key step in the proof of Zimmer's Superrigidity Theorem \cite[Theorem 4.1]{zimmer:annals} is to prove the existence of generalized boundary maps for \emph{Zariski dense} measurable cocycles (see Definition \ref{def:alg:hull}). \end{oss}
Since Equation~(\ref{eq:zimmer:cocycle}) suggests that $\sigma$ can be interpreted as a Borel $1$-cocycle in $\textup{Meas}(G,\textup{Meas}(X,H))$~\cite{feldman:moore}, it is natural to introduce the definition of \emph{cohomologous cocycles}.
\begin{deft}\label{def:cohomology:cocycle} Let $\sigma \colon G \times X \rightarrow H$ be a measurable cocycle between locally compact groups. Let $f \colon X \rightarrow H$ be a measurable map. We define the \emph{twisted cocycle associated to $\sigma$ and $f$} as $$ f.\sigma \colon G \times X \rightarrow H, \hspace{5pt} (f.\sigma)(g,x) \coloneqq f(g.x)^{-1}\sigma(g,x)f(x) \ , $$ for almost every $g \in G$ and almost every $x \in X$.
We say that two measurable cocycles $\sigma_1,\sigma_2\colon G \times X \rightarrow H$ are \emph{cohomologous} if there exists a measurable function $f \colon X \rightarrow H$ such that $$ \sigma_2=f.\sigma_1 \ . $$ Similarly, we say that $\sigma_1$ and $\sigma_2$ are \emph{cohomologous modulo a closed subgroup $C \leq H$} if $$ \sigma_2=f.\sigma_1 \ \ \mod C \ , $$ that is $$ \sigma_2(g,x) \cdot (f.\sigma_1(g,x))^{-1} \in C \ , $$ for almost every $g \in G, x \in X$. \end{deft}
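For the reader's convenience, we spell out the standard verification that the twisted cocycle $f.\sigma$ is again a measurable cocycle, i.e. that it satisfies Equation~(\ref{eq:zimmer:cocycle}): for almost every $g_1,g_2 \in G$ and almost every $x \in X$ we have \begin{align*} (f.\sigma)(g_1g_2,x)&=f(g_1g_2.x)^{-1}\sigma(g_1g_2,x)f(x)=\\ &=f(g_1.(g_2.x))^{-1}\sigma(g_1,g_2.x)\sigma(g_2,x)f(x)=\\ &=\left(f(g_1.(g_2.x))^{-1}\sigma(g_1,g_2.x)f(g_2.x)\right)\left(f(g_2.x)^{-1}\sigma(g_2,x)f(x)\right)=\\ &=(f.\sigma)(g_1,g_2.x)(f.\sigma)(g_2,x) \ , \end{align*} where the second equality follows from Equation~(\ref{eq:zimmer:cocycle}) applied to $\sigma$.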
When a measurable cocycle $\sigma$ admits a generalized boundary map, then all its cohomologous cocycles share the same property.
\begin{deft}\label{def:twisted:map} Let $\sigma \colon G \times X \rightarrow H$ be a measurable cocycle with generalized boundary map $\phi \colon G/Q \times X \rightarrow Y$. Given a measurable function $f \colon X \rightarrow H$, the \emph{twisted boundary map associated to $f$ and $\phi$} is defined as $$ f.\phi \colon G/Q \times X \rightarrow Y, \hspace{5pt} (f.\phi)(\eta,x) \coloneqq f(x)^{-1}\phi(\eta,x) \ , $$ for almost every $\eta \in G/Q$ and $x \in X$. \end{deft} \begin{oss}\label{oss:twisted:boundary:map} Let $\sigma, \sigma' \colon G \times X \rightarrow H$ be two cohomologous cocycles and let $f \colon X \rightarrow H$ be the measurable map such that $\sigma' = f.\sigma$. If $\sigma$ admits a generalized boundary map $\phi$, then the twisted boundary map associated to $f$ and $\phi$ is a generalized boundary map associated to $\sigma'$. \end{oss}
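The proof of Remark~\ref{oss:twisted:boundary:map} is a short computation that we include for the reader's convenience: using first the $\sigma$-equivariance of $\phi$ and then the definition of $f.\sigma$, for almost every $g \in G$, $\eta \in G/Q$ and $x \in X$ we have \begin{align*} (f.\phi)(g.\eta,g.x)&=f(g.x)^{-1}\phi(g.\eta,g.x)=f(g.x)^{-1}\sigma(g,x)\phi(\eta,x)=\\ &=\left(f(g.x)^{-1}\sigma(g,x)f(x)\right)f(x)^{-1}\phi(\eta,x)=(f.\sigma)(g,x)(f.\phi)(\eta,x) \ , \end{align*} that is, $f.\phi$ is $(f.\sigma)$-equivariant.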
Representations provide special cases of measurable cocycles:
\begin{deft}\label{def:rep:cocycle} Let $\rho \colon G \rightarrow H$ be a continuous representation and let $(X,\mu)$ be a standard Borel probability $G$-space. The \emph{cocycle associated to the representation $\rho$} is defined as $$ \sigma_\rho \colon G \times X \rightarrow H, \hspace{5pt} \sigma_\rho(g,x)=\rho(g) \ , $$ for every $g \in G$ and almost every $x \in X$. \end{deft}
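Note that $\sigma_\rho$ satisfies Equation~(\ref{eq:zimmer:cocycle}) simply because $\rho$ is a homomorphism: $$ \sigma_\rho(g_1g_2,x)=\rho(g_1g_2)=\rho(g_1)\rho(g_2)=\sigma_\rho(g_1,g_2.x)\sigma_\rho(g_2,x) \ , $$ for every $g_1,g_2 \in \, G$ and almost every $x \in \, X$.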
Given a representation $\rho \colon G \rightarrow H$, one can obtain useful information about $\rho$ by studying the closure of the image $\overline{\rho(G)}$ as a subgroup of $H$. On the other hand, in general the image of a measurable cocycle does not have any nice algebraic structure. Nevertheless, when $H$ is assumed to be an algebraic group, we can give the following
\begin{deft}\label{def:alg:hull} Let $\mathbf{H}$ be a real algebraic group and denote by $H=\mathbf{H}(\bbR)$ the set of real points of $\mathbf{H}$. The \emph{algebraic hull} of a measurable cocycle $\sigma\colon G \times X \rightarrow H$ is the (conjugacy class of the) smallest algebraic subgroup $\mathbf{L} \leq \mathbf{H}$ such that $\mathbf{L}(\bbR)^\circ$ contains the image of a cocycle cohomologous to $\sigma$. Here $\mathbf{L}(\bbR)^\circ$ denotes the connected component of the neutral element. \end{deft}
\begin{oss} The algebraic hull is well-defined by the Noetherian property of algebraic groups. Moreover, it only depends on the cohomology class of the cocycle $\sigma$~\cite[Proposition 9.2.1]{zimmer:libro}. \end{oss} We will use the previous definition when we will work with~\emph{totally real cocycles} (Section~\ref{sec:tot:real}) and when we will investigate the properties of cocycles with non-vanishing pullback (Theorem~\ref{teor:alg:hull}).
\subsection{Bounded cohomology and its functorial approach} \label{subsec:burger:monod}
In this section we are going to recall the definitions and the properties of both continuous and continuous bounded cohomology that we will need in the sequel.
We first introduce continuous (bounded) cohomology via the homogeneous resolution and then, following the work by Burger and Monod \cite{monod:libro,burger2:articolo}, we describe it in terms of strong resolutions by relatively injective modules.
\begin{deft} Let $G$ be a locally compact group. A \emph{Banach} $G$-\emph{module} $(E, \pi)$ is a Banach space $E$ endowed with a $G$-action induced by a representation $\pi \colon G \to \textup{Isom}(E)$, or equivalently a $G$-action via linear isometries: \begin{align*} \theta_\pi \colon G \times E &\to E \\ \theta_\pi (g, v) &\coloneqq \pi (g) v \ . \end{align*} We say that a Banach $G$-module $(E, \pi)$ is \emph{continuous} if the map $\theta(\cdot, v)$ is continuous for all $v \in \, E$. Finally, we denote by $E^G$ the submodule of $G$-invariant vectors in $E$, i.e. the space of vectors $v$ such that $\theta(g, v) = v$ for all $g \in \, G$. \end{deft} \begin{notation} In the sequel $\mathbb{R}$ will denote the Banach $G$-module of trivial real coefficients. In other words, it is endowed with the trivial $G$-action: $\pi(g) v = v$ for all $v \in \, \mathbb{R}$ and $g \in \, G$. \end{notation}
\begin{es} Let $(E, \pi)$ be a Banach $G$-module. Then, the space of \emph{continuous} $E$-\emph{valued functions} $$
\textup{C}_{c}^\bullet(G; E) \coloneqq \{f \colon G^{\bullet+1} \rightarrow E \, | \, \mbox{$f$ is continuous} \} $$ is a continuous Banach $G$-module with the following action \begin{equation}\label{eq:azione:funzioni} g.f (h_1, \cdots, h_{\bullet +1}) = \pi(g) f(g^{-1} h_1, \cdots, g^{-1} h_{\bullet+1}) \ , \end{equation} for all $g, h_1, \cdots, h_{\bullet+1} \in \, G$. \end{es}
The spaces of continuous $E$-valued functions give rise to a cochain complex $(\textup{C}_{c}^\bullet(G; E), \delta^\bullet)$ together with the standard homogeneous coboundary operator $$ \delta^\bullet \colon \textup{C}_{c}^\bullet(G; E) \rightarrow \textup{C}_{c}^{\bullet + 1}(G; E) $$ $$ \delta^\bullet (f) (g_1, \ldots, g_{\bullet + 2}) \coloneqq \sum_{j = 1}^{\bullet + 2} (-1)^{j - 1} f(g_1, \ldots, g_{j-1}, g_{j+1}, \ldots, g_{\bullet + 2}) \ . $$ Since this complex is exact, we are going to focus our attention on the subcomplex of $G$-invariant vectors $(\upC_c^\bullet(G; E)^G, \delta^\bullet)$.
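To fix ideas, we record the low-degree instances of the homogeneous coboundary operator: $$ \delta^0(f)(g_1,g_2)=f(g_2)-f(g_1) \ , \qquad \delta^1(f)(g_1,g_2,g_3)=f(g_2,g_3)-f(g_1,g_3)+f(g_1,g_2) \ , $$ for all $g_1,g_2,g_3 \in \, G$.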
\begin{deft} The \emph{continuous cohomology} of $G$ with coefficients in $E$, denoted by $\upH_{c}^\bullet(G; E)$, is the cohomology of the complex $(\upC_{c}^\bullet(G; E)^G, \delta^\bullet)$. \end{deft} \begin{oss}\label{oss:discr:group:cohom:same:G} If $G$ is a discrete group, then there is no difference between continuous and ordinary cohomology. Hence, in this situation we will usually drop the subscript $c$ from the notation. \end{oss}
Since $(E, \lVert \cdot \rVert_E)$ is a Banach space, the Banach $G$-module $\upC_c^\bullet(G; E)$ has a natural $\upL^\infty$-norm: For every $f \in \, \upC_c^\bullet(G; E)$, we have $$
\lVert f \rVert_\infty \coloneqq \sup \{ \lVert f(g_1, \ldots, g_{\bullet + 1} )\rVert_E \, | \, g_1, \ldots, g_{\bullet +1} \in \, G \} \ . $$ A continuous function is said to be \emph{bounded} if its $\upL^\infty$-norm is finite. Let $\upC_{cb}^\bullet(G; E) \subset \upC_c^\bullet(G; E)$ be the subspace of continuous bounded functions. Since the coboundary operator $\delta^\bullet$ is a finite sum of norm non-increasing maps, it preserves boundedness, so we can restrict $\delta^\bullet$ to the space of continuous bounded $G$-invariant functions $\upC_{cb}^\bullet(G;E)^G$. Then we get the following complex $$ (\upC_{cb}^\bullet(G; E)^G, \delta^\bullet) \ . $$
\begin{deft} The \emph{continuous bounded cohomology} of $G$ with coefficients in $E$, denoted by $\upH_{cb}^\bullet(G; E)$, is the cohomology of the complex $(\upC_{cb}^\bullet(G; E)^G, \delta^\bullet)$. \end{deft} \begin{oss} If $L \subset G$ is a closed subgroup, then we can compute the continuous bounded cohomology of $L$ with $E$-coefficients as the cohomology of the complex $(\upC_{cb}^\bullet(G;E)^L, \delta^\bullet)$. Here the $L$-action is the restriction of the natural $G$-action on $\upC_{cb}^\bullet(G;E)$~\cite[Corollary~7.4.10]{monod:libro}. \end{oss}
The $\upL^\infty$-norm defined on cochains induces a canonical $\upL^\infty$-seminorm in cohomology given by $$
\lVert f \rVert_\infty \coloneqq \inf \{ \lVert \psi \rVert_\infty \ | \ [\psi]=f \} \ . $$ We say that an isomorphism between seminormed cohomology groups is \emph{isometric} if the corresponding seminorms are preserved.
Beyond the difference determined by the quotient seminorm, one can study the gap between continuous cohomology and continuous bounded cohomology via the map induced in cohomology by the inclusion $ i \colon \upC_{cb}^\bullet(G; E)^G \rightarrow \upC^\bullet_{c}(G; E)^G$. The resulting map $$\comp_G^\bullet \colon \upH_{cb}^\bullet(G; E) \rightarrow \upH_c^\bullet(G; E) $$ is called \emph{comparison map}.
In the sequel we will need an alternative description of continuous bounded cohomology in terms of strong resolutions via relatively injective modules. Since we will not make an explicit use of these notions, we refer the reader to Monod's book for a broad discussion on them~\cite[Sections~4.1 and~7.1]{monod:libro}. The main result in this direction is the following \begin{teor}[{\cite[Theorem~7.2.1]{monod:libro}}]\label{thm:Monod:strong:rel:inj} Let $G$ be a locally compact group and let $(E, \pi)$ be a Banach $G$-module. Then, for every strong resolution $(E^\bullet, \delta^\bullet)$ of $E$ via relatively injective $G$-modules, the cohomology of the complex of $G$-invariants $\upH^n((E^\bullet)^G, \delta^\bullet)$ is isomorphic as a topological vector space to the continuous bounded cohomology $\upH^n_{cb}(G; E)$, for every $n \geq 0$. \end{teor}
We now describe a strong resolution via relatively injective modules which allows us to compute bounded cohomology \emph{isometrically}. Let $G$ be a locally compact second countable group. Let $(E, \pi, \lVert \cdot \rVert_E)$ be a Banach $G$-module such that $E$ is the dual of some Banach space. This implies that $E$ can be endowed with the weak-$^*$ topology and the associated weak-$^*$ Borel structure. Moreover, let $(S, \mu)$ be a regular $G$-space. We have the following
\begin{deft} We define the Banach $G$-module of \emph{bounded weak}-$^*$ \emph{measurable} $E$-\emph{valued functions on} $S$ to be the Banach space \begin{align*}
\mathcal{B}^\infty(S^{\bullet+1};E) \coloneqq \{ f \colon S^{\bullet+1} \rightarrow E \, | \, &\textup{$f$ is weak-$^*$ measurable}, \\ \sup_{s_1,\cdots,s_{\bullet+1} \in S} \lVert &f(s_1,\cdots,s_{\bullet+1}) \rVert_E < \infty \} \ \end{align*} endowed with the following $G$-action $\tau$: $$ (\tau(g) f) (s_1, \cdots, s_{\bullet+1}) \coloneqq \pi(g) f(g^{-1}s_1, \cdots, g^{-1}s_{\bullet+1}) $$ for every $g \in \, G$, $s_1, \cdots, s_{\bullet+1} \in \, S$ and $f \in \, \mathcal{B}^\infty(S^{\bullet+1};E)$.
We define the Banach $G$-module of \emph{essentially bounded weak-$^*$ measurable $E$-valued functions} on $S$ to be $$
\textup{L}^\infty_{\textup{w}^*}(S^{\bullet+1};E) \coloneqq \{ [f]_\sim \ | \ f \in \, \mathcal{B}^\infty(S^{\bullet+1};E)\} \ , $$ where $f \sim g$ if and only if they agree $\mu$-almost everywhere and $[f]_\sim$ denotes the equivalence class of $f$ with respect to $\sim$. \end{deft} \begin{oss} For ease of notation we will denote elements in $\textup{L}^\infty_{\textup{w}^*}(S^{\bullet+1};E)$ simply by one chosen representative $f$. \end{oss} \begin{deft}\label{def:alternating} Let us consider the situation above. We say that a(n essentially) bounded weak-${}^*$ measurable function $f \colon S^{\bullet + 1} \rightarrow E$ is \emph{alternating} if for every $s_1, \cdots, s_{\bullet+1} \in \, S$ and every permutation $\varepsilon \in \mathfrak{S}_{\bullet+1}$ we have $$ \sign(\varepsilon) f(s_1, \ldots, s_{\bullet + 1}) = f(s_{\varepsilon(1)}, \ldots, s_{\varepsilon({\bullet + 1})}) \ , $$ where $\sign(\varepsilon)$ denotes the sign of the permutation $\varepsilon$. \end{deft} Since the standard homogeneous coboundary operator $\delta^\bullet$ preserves $G$-invariant (alternating) essentially bounded weak-${}^*$ measurable functions up to a shift of the degree, we can consider the complex $(\textup{L}^\infty_{\textup{w}^*}(S^{\bullet+1};E), \delta^\bullet)$. The following theorem shows when the previous complex computes isometrically the continuous bounded cohomology of $G$ with $E$-coefficients. \begin{teor}\cite[Theorem 7.5.3]{monod:libro}\label{teor:monod:2:rel:inj:strong} Let $G$ be a locally compact second countable group. Let $(E, \pi)$ be a dual Banach $G$-module. Let $(S, \mu)$ be an \emph{amenable} regular $G$-space. Then, the cohomology in degree $n$ of the complex $(\textup{L}^\infty_{\textup{w}^*}(S^{\bullet+1};E)^G, \delta^\bullet)$ is \emph{isometrically} isomorphic to $\upH^n_{cb}(G; E)$, for every integer $n \geq 0$.
The same result still holds if we restrict to the subcomplex of alternating essentially bounded weak-${}^*$ measurable functions on $S$. \end{teor} \begin{oss}\label{oss:L:resol:uguale:G} In the situation of the previous theorem if $L \subset G$ is a closed subgroup, then also the cohomology of the complex $(\textup{L}^\infty_{\textup{w}^*}(S^{\bullet+1};E)^L, \delta^\bullet)$ is \emph{isometrically} isomorphic to $\upH^n_{cb}(L; E)$, for every $n \geq 0$~\cite[Lemma~4.5.3]{monod:libro}. \end{oss} \begin{es}\label{es:L:inf:G:Q}
Let $G$ be a locally compact second countable group and let $Q \subset G$ be a closed amenable subgroup. By Remark~\ref{oss:am:spaces} and Example~\ref{es:reg:spaces}.2 we know that $G \slash Q$ is an amenable regular $G$-space. Thus for every Banach $G$-module $(E, \pi)$ the cohomology of the complex $(\textup{L}^\infty_{\textup{w}^*}((G \slash Q)^{\bullet+1};E)^G, \delta^\bullet)$ isometrically computes the continuous bounded cohomology of $G$ with coefficients in $E$. An instance of this situation is when $Q$ is a minimal parabolic subgroup of a semisimple Lie group $G$. \end{es}
As we have just discussed, one can compute continuous bounded cohomology by working with equivalence classes of bounded weak-${}^*$ measurable functions. However, in some cases it might be convenient to work directly with $\mathcal{B}^\infty(S^{\bullet + 1}; E)$. Also in this case the homogeneous coboundary operator sends (alternating) bounded weak-$^*$ measurable functions to themselves up to shifting the degree. Hence, we can still construct a complex $(\mathcal{B}^\infty(S^{\bullet + 1}; E), \delta^\bullet)$. Unfortunately, the associated resolution of $E$ is only strong in general~\cite[Proposition~2.1]{burger:articolo}. So it cannot be used to compute the continuous bounded cohomology of $G$ with $E$-coefficients. Nevertheless, one obtains the following canonical map~\cite[Corollary~2.2]{burger:articolo} \begin{equation}\label{eq:canonical:map:B:L} \mathfrak{c}^n \colon \upH^{n}(\mathcal{B}^\infty(S^{\bullet + 1}; E)^G) \rightarrow \upH^{n}(\upL_{\text{w}^*}^\infty(S^{\bullet + 1}; E)^G) \cong \upH_{cb}^n(G; E) \ , \end{equation} for every $n \in \, \mathbb{N}$. This shows that each bounded weak-${}^*$ measurable $G$-invariant cocycle canonically determines a cohomology class in $\upH_{cb}^n(G; E)$. The same result still holds in the situation of alternating functions.
In Section~\ref{subsec:pullback:boundary}, we will tacitly use this result for showing that the pullback of a bounded weak-${}^*$ measurable $G$-invariant function lies in fact in $\upL^\infty_{\text{w}^*}$.
\subsection{Transfer maps}\label{subsec:transfer:maps}
In this section we briefly recall the notion of \emph{transfer maps}~\cite{monod:libro}. Let $G$ be a locally compact second countable group and let $i \colon L \rightarrow G$ be the inclusion of a closed subgroup $L$ into $G$. By functoriality the inclusion induces a pullback in continuous bounded cohomology $$ \upH^\bullet_{cb}(i) \colon \upH_{cb}^\bullet(G; \mathbb{R}) \rightarrow \upH_{cb}^\bullet(L; \mathbb{R}) \ . $$
\begin{oss}\label{oss:notation:restriction}
Since the map $\upH^\bullet_{cb}(i)$ is implemented by the restriction to $L$ of cochains on $G$, we will sometimes write $\kappa|_L$ instead of $\upH^\bullet_{cb}(i)(\kappa)$, for $\kappa \in \upH^\bullet_{cb}(G;\bbR)$. \end{oss}
A transfer map provides a cohomological left inverse to $\upH^\bullet_{cb}(i)$. Assume that $L \backslash G$ admits a $G$-invariant probability measure $\mu$ (e.g. when $L$ is a lattice in $G$); then we have
\begin{deft}\label{def:trans:map} We define the \emph{transfer cochain map} as $$ \widehat{\textup{trans}}^\bullet_L \colon \upC_{cb}^\bullet(G;\bbR)^L \rightarrow \upC_{cb}^\bullet(G;\bbR)^G $$ $$ \widehat{\textup{trans}}^\bullet_L(\psi)(g_1, \ldots, g_{\bullet+1}) \coloneqq \int_{L \backslash G} \psi(\overline{g}.g_1, \ldots, \overline{g}.g_{\bullet+1}) d\mu(\overline{g}) \ , $$ for every $(g_1, \ldots, g_{\bullet+1}) \in \, G^{\bullet+1}$ and $\psi \in \, \upC_{cb}^\bullet(G;\bbR)^L$. Here $\overline{g}$ denotes the equivalence class of $g$ in the quotient $L \backslash G$.
The \emph{transfer map} $\trans_L^\bullet$ is the one induced in cohomology by $\widehat{\textup{trans}}^\bullet_L$: $$ \trans_L^\bullet \colon \upH_{cb}^\bullet(L;\bbR) \rightarrow \upH_{cb}^\bullet(G;\bbR) \ . $$ \end{deft}
\begin{oss} The transfer map is well-defined since we can compute the continuous bounded cohomology of $L$ by looking at the complex $(\upC_{cb}^\bullet(G;\bbR)^L, \delta^\bullet)$ (Remark~\ref{oss:discr:group:cohom:same:G}).
Moreover, since $\psi$ is $L$-invariant, it induces a well-defined function on the quotient $L \backslash G$. With a slight abuse of notation, in the previous formula we still denoted by $\psi$ this induced function. \end{oss}
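For the reader's convenience, we also record the elementary computation behind the fact that $\trans^\bullet_L$ is a left inverse of $\upH^\bullet_{cb}(i)$: if $\psi \in \, \upC^\bullet_{cb}(G;\bbR)^G$ is $G$-invariant, then $\psi(\overline{g}.g_1,\ldots,\overline{g}.g_{\bullet+1})=\psi(g_1,\ldots,g_{\bullet+1})$ for every $\overline{g} \in \, L \backslash G$, whence $$ \widehat{\textup{trans}}^\bullet_L(\psi)(g_1, \ldots, g_{\bullet+1}) = \int_{L \backslash G} \psi(g_1, \ldots, g_{\bullet+1}) \, d\mu(\overline{g}) = \psi(g_1, \ldots, g_{\bullet+1}) \ , $$ since $\mu$ is a probability measure. In particular, $\trans^\bullet_L \circ \upH^\bullet_{cb}(i) = \textup{id}$ on $\upH^\bullet_{cb}(G;\bbR)$.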
We give now an alternative definition of the transfer map for essentially bounded weak-${}^*$ measurable functions. Let $Q$ and $L$ be closed subgroups of a locally compact second countable group $G$. If $Q$ is amenable, then the subcomplex of $L$-invariant essentially bounded functions on $G/Q$ computes the continuous bounded cohomology $\upH^\bullet_{cb}(L;\bbR)$ (Remark~\ref{oss:L:resol:uguale:G} and Example~\ref{es:L:inf:G:Q}). Hence, the new \emph{transfer map} $$\trans_{G \slash Q}^\bullet \colon \upH^\bullet_{cb}(L; \mathbb{R}) \rightarrow \upH^\bullet_{cb}(G; \mathbb{R})$$ is the map induced in cohomology by the following $$ \widehat{\textup{trans}}^\bullet_{G \slash Q} \colon \upL^\infty((G \slash Q)^{\bullet+1};\bbR)^L \rightarrow \upL^\infty((G \slash Q)^{\bullet+1};\bbR)^G $$ $$ \widehat{\textup{trans}}^\bullet_{G \slash Q}(\psi)(\xi_1, \ldots, \xi_{\bullet+1}) \coloneqq \int_{L \backslash G} \psi(\overline{g}.\xi_1, \ldots, \overline{g}.\xi_{\bullet+1}) d \mu(\overline{g}) \ , $$ for almost all $(\xi_1, \ldots, \xi_{\bullet+1}) \in \, (G \slash Q)^{\bullet + 1}$ and $\psi \in \, \upL^\infty((G \slash Q)^{\bullet+1};\bbR)^L$.
The following commutative diagram completely describes the relation between the two transfer maps $\trans^\bullet_L$ and $\trans^\bullet_{G \slash Q}$~\cite[Lemma 2.43]{BIuseful} \begin{equation}\label{eq:diagramma:due:transf} \xymatrix{ \upH^\bullet_{cb}(L; \mathbb{R}) \ar[rr]^-{\trans_L^\bullet} \ar[d]_-\cong && \upH^\bullet_{cb}(G; \mathbb{R}) \ar[d]^-{\cong} \\ \upH^\bullet_{cb}(L; \mathbb{R}) \ar[rr]_-{\trans^\bullet_{G \slash Q}} && \upH^\bullet_{cb}(G; \mathbb{R}) \ . } \end{equation} Here the vertical arrows are the canonical isomorphisms obtained by extending the identity $\mathbb{R} \rightarrow \mathbb{R}$ to the complex of continuous bounded and essentially bounded functions, respectively.
\section{Pullback maps, multiplicative constants and maximal measurable cocycles}\label{sec:easy:formula}
The main goal of this section is to define pullbacks in continuous bounded cohomology via measurable cocycles and generalized boundary maps. As an application we extend Burger and Iozzi's \emph{useful formula} for representations~\cite[Proposition~2.44]{BIuseful} to the wider setting of measurable cocycles. This allows us to introduce the notion of multiplicative constants and to investigate the rigidity of cocycles.
\begin{setup}\label{setup:mult:const} Let us consider the following setting: \begin{itemize}
\item Let $G$ be a second countable locally compact group.
\item Let $G'$ be a locally compact group which acts measurably on a measure space $(Y, \nu)$ by preserving the measure class.
\item Let $Q$ be a closed amenable subgroup of $G$.
\item Let $L$ be a lattice in $G$.
\item Let $(X,\mu_X)$ be a standard Borel probability $L$-space.
\item Let $\sigma \colon L \times X \rightarrow G'$ be a measurable cocycle with an essentially unique generalized boundary map $\phi \colon G/Q \times X \rightarrow Y$. \end{itemize} \end{setup}
\subsection{Pullback along measurable cocycles and generalized boundary maps}\label{subsec:pullback:boundary}
In this section we introduce two different pullback maps in continuous bounded cohomology associated to a measurable cocycle. The first pullback will only depend on the measurable cocycle $\sigma$. The second one will be defined in terms of the generalized boundary map $\phi$. We will show that under suitable conditions the two definitions agree (Lemma~\ref{lem:pullback:implemented:boundary:map}). Although the first definition might a priori appear more natural, we will mainly exploit the second pullback in the study of the rigidity properties of measurable cocycles.
Given a measurable cocycle $\sigma \colon L \times X \rightarrow G'$ we define a pullback map from $\upC^\bullet_{cb}(G';\bbR)^{G'}$ to $\upC^\bullet_b(L;\bbR)^L$ as follows (compare with~\cite[Remark 14]{moraschini:savini}).
\begin{deft}\label{def:pullback:cocycle} In the situation of Setup \ref{setup:mult:const}, the \emph{pullback map induced by the measurable cocycle $\sigma$} is given by $$ \upC_b^\bullet(\sigma) \colon \upC^\bullet_{cb}(G';\bbR) \rightarrow \upC^\bullet_b(L;\bbR) \ , $$ $$ \psi \mapsto \upC^\bullet_b(\sigma)(\psi)(\gamma_1,\ldots,\gamma_{\bullet+1}) \coloneqq \int_X \psi(\sigma(\gamma_1^{-1},x)^{-1},\ldots,\sigma(\gamma_{\bullet+1}^{-1},x)^{-1}) \, d\mu_X(x) \ . $$ \end{deft}
\begin{oss} The previous formula takes inspiration both from Bader-Furman-Sauer's result~\cite[Theorem 5.6]{sauer:companion} and Monod-Shalom's cohomological induction for measurable cocycles associated to couplings~\cite[Section 4.2]{MonShal}. \end{oss}
\begin{lem}\label{lem:pullback:map:cocycle:cohomology} In the situation of Setup~\ref{setup:mult:const}, the map $\upC^\bullet_b(\sigma)$ is a well-defined cochain map which restricts to the subcomplexes of invariant cochains. Hence $\upC^\bullet_b(\sigma)$ induces a map in bounded cohomology $$ \upH^\bullet_b(\sigma):\upH^\bullet_{cb}(G';\bbR) \rightarrow \upH^\bullet_b(L;\bbR)\ , \ \upH^\bullet_b(\sigma)([\psi]):=\left[ \upC^\bullet_b(\sigma)(\psi) \right] \ . $$ \end{lem}
\begin{proof} It is easy to check that $\upC^\bullet_b(\sigma)$ is a cochain map. Moreover, it sends bounded cochains to bounded cochains because $\mu_X$ is a probability measure.
It only remains to prove that $\upC^\bullet_b(\sigma)$ sends $G'$-invariant continuous cochains to $L$-invariant ones. Let $\psi \in \upC^\bullet_{cb}(G';\bbR)^{G'}$ and $\gamma,\gamma_1,\ldots,\gamma_{\bullet+1} \in L$, then we have \begin{align*} \gamma \cdot \upC^\bullet_b(\sigma)(\psi)(\gamma_1,\ldots,\gamma_{\bullet+1})&=\upC^\bullet_b(\sigma)(\psi)(\gamma^{-1}\gamma_1,\ldots,\gamma^{-1}\gamma_{\bullet+1})=\\ &=\int_X \psi(\sigma(\gamma^{-1}_1 \gamma,x)^{-1},\ldots,\sigma(\gamma^{-1}_{\bullet+1} \gamma,x)^{-1})d\mu_X(x)=\\ &=\int_X \psi(\sigma(\gamma,x)^{-1}\sigma(\gamma^{-1}_1,\gamma.x)^{-1},\ldots,\sigma(\gamma,x)^{-1}\sigma(\gamma_{\bullet+1}^{-1},\gamma.x)^{-1})d\mu_X(x)=\\ &=\int_X \psi(\sigma(\gamma,x)^{-1}\sigma(\gamma^{-1}_1,x)^{-1},\ldots,\sigma(\gamma,x)^{-1}\sigma(\gamma_{\bullet+1}^{-1},x)^{-1})d\mu_X(x)=\\ &=\int_X\psi(\sigma(\gamma^{-1}_1,x)^{-1},\ldots,\sigma(\gamma_{\bullet+1}^{-1},x)^{-1})d\mu_X(x)=\\ &=\upC^\bullet_b(\sigma)(\psi)(\gamma_1,\ldots,\gamma_{\bullet+1}) \ , \end{align*} where the second line is equal to the third one because of the definition of measurable cocycle (Equation~\eqref{eq:zimmer:cocycle}). Then, the $L$-invariance of the measure $\mu_X$ shows that the third line is equal to the fourth one. Finally, the $G'$-invariance of $\psi$ concludes the computation. \end{proof}
As anticipated, we now explain how to define a different pullback map via generalized boundary maps in the situation of Setup~\ref{setup:mult:const}. This approach takes inspiration from a work by Bader, Furman and Sauer~\cite[Proposition~4.2]{sauer:articolo} and has already produced some applications in special settings (Subsection~\ref{subsec:applications:baby}). We define the pullback along a generalized boundary map as the composition of two different maps defined in continuous bounded cohomology. The Banach space $\textup{L}^\infty(X) \coloneqq \textup{L}^\infty(X; \mathbb{R})$ has a natural structure of Banach $L$-module given by the following $L$-action $$ (\gamma.f)(x) = f(\gamma^{-1}.x) \ , $$ for all $\gamma \in L$, $f \in \, \upL^\infty(X)$ and almost every $x \in \, X$. This leads to the following
\begin{deft}\label{def:pullback:boundary} In the situation of Setup~\ref{setup:mult:const}, the \emph{$\upL^\infty(X)$-pullback along $\phi$} is the following map $$ \upC^\bullet(\phi) \colon \mathcal{B}^\infty(Y^{\bullet + 1}; \mathbb{R})^{G'} \rightarrow \textup{L}_{\text{w}^*}^\infty((G \slash Q)^{\bullet+1}; \upL^\infty(X))^L $$ $$ \upC^\bullet(\phi)(\psi) (\eta_1, \ldots, \eta_{\bullet+1}) \coloneqq \left(x \mapsto \psi(\phi(\eta_1, x), \ldots, \phi(\eta_{\bullet+1}, x))\right) \ , $$ where $\psi \in \, \mathcal{B}^\infty(Y^{\bullet + 1}; \mathbb{R})^{G'}$, $\eta_1, \ldots, \eta_{\bullet+1} \in \, G \slash Q$ and $x \in \, X$. \end{deft}
\begin{lem}\label{lemma:pullback:cochain} The map $\upC^\bullet(\phi)$ is a well-defined norm non-increasing cochain map. \end{lem} \begin{proof} Since $\upC^\bullet(\phi)$ is defined as a pullback, it is immediate to check that it is a norm non-increasing cochain map.
Let us show now that for every $\psi \in \, \mathcal{B}^\infty(Y^{\bullet + 1}; \mathbb{R})^{G'}$, the cocycle $\upC^\bullet(\phi)(\psi)$ is $L$-invariant. First, by \cite[Corollary 2.3.3]{monod:libro} we can identify $$ \textup{L}_{\text{w}^*}^\infty((G \slash Q)^{\bullet+1}; \upL^\infty(X))^L \cong \textup{L}^\infty((G \slash Q)^{\bullet+1} \times X)^L \ , $$ where the latter space is endowed with its natural diagonal $L$-action. Then, for almost every $x \in \, X$, $\gamma \in \, L$ and $\eta_1, \ldots, \eta_{\bullet+1} \in \, G \slash Q$, we have \begin{align*} \gamma \cdot \upC^\bullet(\phi)(\psi)(\eta_1, \ldots, \eta_{\bullet+1}) (x) &= \upC^\bullet(\phi)(\psi)(\gamma^{-1}.\eta_1,\ldots,\gamma^{-1}. \eta_{\bullet+1})(\gamma^{-1}.x)=\\ &=\psi(\phi(\gamma^{-1}.\eta_1, \gamma^{-1}.x), \ldots, \phi(\gamma^{-1}. \eta_{\bullet+1},\gamma^{-1}.x))= \\ &= \psi(\sigma(\gamma^{-1}, x) \phi(\eta_1, x), \ldots, \sigma(\gamma^{-1}, x) \phi(\eta_{\bullet+1}, x))=\\ &= \psi(\phi(\eta_1, x), \ldots, \phi(\eta_{\bullet+1}, x)) \\ &= \upC^\bullet(\phi)(\psi)(\eta_1, \ldots, \eta_{\bullet+1}) (x) \ . \end{align*} Here we first used the definition of diagonal action, then the $\sigma$-equivariance of $\phi$ and finally the $G'$-invariance of $\psi$. \end{proof}
Since our final goal is to pull back a cocycle $\psi \in \, \mathcal{B}^\infty(Y^{\bullet + 1}; \mathbb{R})^{G'}$ along $\phi$, obtaining a new cocycle in $\textup{L}^\infty((G \slash Q)^{\bullet+1}; \mathbb{R})^L$, we need to compose the $\upL^\infty(X)$-pullback along $\phi$ with the integration map (compare with \cite{sauer:articolo, savini3:articolo, moraschini:savini}).
\begin{deft}\label{def:integration:map} In the situation of Setup~\ref{setup:mult:const}, the \emph{integration map} $\upI_X^\bullet$ is the following cochain map $$ \upI_X^\bullet \colon \textup{L}_{\text{w}^*}^\infty((G \slash Q)^{\bullet+1}; \upL^\infty(X))^L \rightarrow \textup{L}^\infty((G \slash Q)^{\bullet+1}; \mathbb{R})^L $$ $$ \upI_X^\bullet(\psi)(\eta_1,\ldots,\eta_{\bullet+1}) \coloneqq \int_X \psi(\eta_1, \ldots, \eta_{\bullet+1})(x) d\mu_X(x) \ , $$ where $\psi \in \, \textup{L}_{\text{w}^*}^\infty((G \slash Q)^{\bullet+1}; \upL^\infty(X))^L$, $\eta_1, \ldots, \eta_{\bullet+1} \in \, G \slash Q$ and $\mu_X$ is the probability measure on the standard Borel probability $L$-space $X$. \end{deft}
\begin{lem}\label{lemma:int:cochain:map} The integration map $\upI^\bullet_X$ is a well-defined norm non-increasing cochain map. \end{lem} \begin{proof} Given a cocycle $\psi \in \, \textup{L}_{\text{w}^*}^\infty((G \slash Q)^{\bullet+1}; \upL^\infty(X))^L$, it is easy to show that $\upI^\bullet_X(\psi)$ is $L$-invariant. Indeed, given $\eta_1,\ldots,\eta_{\bullet+1} \in G/Q$ and $\gamma \in \, L$, we have \begin{align*} \gamma . \upI_X^\bullet(\psi)(\eta_1, \ldots, \eta_{\bullet+1}) &= \int_X \psi(\gamma^{-1}.\eta_1, \ldots, \gamma^{-1}. \eta_{\bullet+1})(x) d\mu_X(x)=\\ &= \int_X \psi(\eta_1, \ldots, \eta_{\bullet+1})(\gamma.x) d\mu_X(x) \\ &= \int_X \psi(\eta_1, \ldots, \eta_{\bullet+1})(x) d\mu_X(x) = \upI^\bullet_X(\psi)(\eta_1, \ldots, \eta_{\bullet+1}) \ , \end{align*} where we used the $L$-invariance of both $\psi$ and $\mu_X$.
Since it is immediate to check that the integration map is also a norm non-increasing cochain map, the claim follows. \end{proof}
\begin{oss}\label{oss:unbounded:cochains} The previous construction via integration is only possible when working with bounded cocycles. Indeed, there is no hope of extending this map to the case of unbounded cochains~\cite[Remarks~13 and~16]{moraschini:savini}. \end{oss}
We are now ready to define the \emph{pullback map along} $\phi$.
\begin{deft}\label{def:pullback:not:fibered} In the situation of Setup~\ref{setup:mult:const}, the \emph{pullback map along the (generalized) boundary map} $\phi$ is the following cochain map $$ \upC^\bullet(\Phi^X) \colon \mathcal{B}^\infty(Y^{\bullet + 1}; \mathbb{R})^{G'} \rightarrow \textup{L}^\infty((G \slash Q)^{\bullet+1}; \mathbb{R})^L $$ $$ \upC^\bullet(\Phi^X) \coloneqq \upI^\bullet_X \circ \upC^\bullet(\phi) \ . $$ \end{deft}
\begin{oss}\label{oss:pullback:restringe:alternating:cochains} The restriction of the pullback along $\phi$ to the subcomplexes of alternating cochains (Definition~\ref{def:alternating}) is well-defined. \end{oss}
The fact that the pullback map induces a well-defined map in cohomology is proved in the following
\begin{prop}\label{prop:pullback:cohomology} In the situation of Setup~\ref{setup:mult:const} the pullback map $\upC^\bullet(\Phi^X)$ is a norm non-increasing cochain map. Hence, it induces a well-defined map $$ \upH^\bullet(\Phi^X) \colon \upH^\bullet(\calB^\infty(Y^{\bullet+1};\bbR)^{G'}) \rightarrow \upH^\bullet_{b}(L;\bbR), \hspace{5pt} \upH^\bullet(\Phi^X)([\psi])\coloneqq[\upC^\bullet(\Phi^X)(\psi)] \ . $$ The same result still holds for the subcomplexes of alternating cochains. \end{prop}
\begin{proof} As a consequence of both Lemmas~\ref{lemma:pullback:cochain} and~\ref{lemma:int:cochain:map}, the pullback $\upC^\bullet(\Phi^X)$ is a norm non-increasing cochain map. Indeed, it is the composition of two such maps, namely $\upC^\bullet(\phi)$ and $\upI^\bullet_X$.
Since $Q$ is an amenable group, $G\slash Q$ is an amenable regular $G$-space (Example~\ref{es:reg:spaces}.2 and Remark~\ref{oss:am:spaces}). Hence, by Remark~\ref{oss:L:resol:uguale:G} the complex of $L$-invariant essentially bounded functions $\upL^\infty((G/Q)^{\bullet+1};\bbR)^L$ computes the continuous bounded cohomology $\upH^\bullet_{b}(L;\bbR)$.
The same proof adapts mutatis mutandis to the case of alternating cochains. \end{proof}
\begin{oss} \label{oss:more:general:amenable} One might define a pullback map in cohomology using any measurable $\sigma$-equivariant map $\phi \colon S \times X \rightarrow Y$, where $S$ is any amenable $L$-space. However, since we will not need this formulation in the sequel, we preferred to keep the previous setting. \end{oss}
Since we have introduced two different pullback maps in continuous bounded cohomology arising from measurable cocycles, it is natural to ask whether they agree. The following lemma completely describes the situation (compare with~\cite[Corollary 2.7]{burger:articolo}).
\begin{lem}\label{lem:pullback:implemented:boundary:map} In the situation of Setup~\ref{setup:mult:const}, let $\psi \in \calB^\infty(Y^{\bullet+1};\bbR)^{G'}$ be a bounded measurable cocycle. Then $$ \upC^\bullet(\Phi^X)(\psi) \in \upL^\infty((G/Q)^{\bullet+1};\bbR)^L $$ is a representative of the class $\upH^\bullet_b(\sigma)([\psi]) \in \upH^\bullet_{b}(L;\bbR)$. \end{lem}
\begin{proof} It is sufficient to consider the following commutative diagram~\cite[Proposition 1.2]{burger:articolo} $$ \xymatrix{ \upH^\bullet(\calB^\infty(Y^{\bullet+1};\bbR)^{G'}) \ar[rrr]^-{\upH^\bullet(\Phi^X)} \ar[dd]_-{\mathfrak{c}^\bullet} &&& \upH^\bullet_b(L;\bbR)\\ \\ \upH^\bullet_{cb}(G';\bbR) \ar[uurrr]_-{\upH^\bullet_b(\sigma)} \ , } $$ where $\mathfrak{c}^\bullet$ is the map introduced in Equation~(\ref{eq:canonical:map:B:L}). \end{proof}
Finally, we show that the pullback along cohomologous measurable cocycles is the same (compare with~\cite[Propositions~13 and~20]{moraschini:savini}).
\begin{prop}\label{prop:invariance:cohomology} In the situation of Setup~\ref{setup:mult:const}, let $f.\sigma \colon L \times X \rightarrow G'$ be a cocycle cohomologous to $\sigma$ with respect to a measurable map $f \colon X \rightarrow G'$. Then, for every $\psi \in \mathcal{B}^\infty(Y^{\bullet + 1}; \mathbb{R})^{G'}$, we have $$\upC^\bullet(\Phi^X)(\psi) = \upC^\bullet(f.\Phi^X)(\psi) \ .$$ Here $\upC^\bullet(\Phi^X)$ and $\upC^\bullet(f.\Phi^X)$ denote the pullback maps along the associated boundary maps $\phi$ and $f.\phi$, respectively. \end{prop} \begin{proof} The boundary map $f.\phi$ associated to $f.\sigma$ is given by $$ f.\phi \colon G \slash Q \times X \rightarrow Y \ , \hspace{10pt} (f.\phi)(\eta, x) = f^{-1}(x) \phi(\eta, x) \ , $$ for almost every $\eta \in \, G \slash Q$ and $x \in \, X$ (Remark~\ref{oss:twisted:boundary:map}). Hence, we have \begin{align*} \upC^\bullet(f.\Phi^X)(\psi)(\eta_1, \ldots, \eta_{\bullet+1}) &= \int_X \psi((f.\phi)(\eta_1, x), \ldots, (f.\phi)(\eta_{\bullet +1}, x)) d\mu_X(x)= \\ &= \int_X \psi(f^{-1}(x) \phi(\eta_1, x), \ldots, f^{-1}(x) \phi(\eta_{\bullet +1}, x)) d\mu_X(x)= \\ &= \int_X \psi( \phi(\eta_1, x), \ldots, \phi(\eta_{\bullet +1}, x)) d\mu_X(x)= \\ &=\upC^\bullet(\Phi^X)(\psi)(\eta_1, \ldots, \eta_{\bullet+1}) \ , \end{align*} for almost every $\eta_1, \ldots, \eta_{\bullet +1 } \in \, G \slash Q$. This finishes the proof. \end{proof}
\begin{oss} Sometimes it is natural to consider the $G'$-module $\mathbb{R}$ with a twisted action. For instance if $G'$ admits a sign homomorphism, we can use it to twist the real coefficients. In that situation the previous equality will be true only up to a sign (see for instance~\cite[Proposition~13]{moraschini:savini}). \end{oss}
\subsection{Pullback along generalized boundary maps vs.~pullback of representations} \label{subsec:rep:coc}
In the situation of Setup~\ref{setup:mult:const}, let $(X,\mu_X)$ be a standard Borel probability $L$-space and let $\rho \colon L \rightarrow G'$ be a representation. Then, there exists an associated measurable cocycle $\sigma_\rho \colon L \times X \rightarrow G'$ defined by $\sigma_\rho(\gamma, x) = \rho(\gamma)$ for every $\gamma \in \, L$ and $x \in \, X$ (Definition~\ref{def:rep:cocycle}). If $\rho$ admits a $\rho$-equivariant measurable map $\varphi \colon G/Q \rightarrow Y$, the corresponding generalized boundary map of $\sigma_\rho$ is $$ \phi \colon G/Q \times X \rightarrow Y, \hspace{5pt} \phi(\eta, x) = \varphi(\eta) \ , $$ for almost every $\eta \in \, G \slash Q$ and $x \in \, X$.
As explained by Burger and Iozzi~\cite{burger:articolo, BIuseful}, one can implement the pullback map $$\upH^\bullet_{cb}(\rho) \colon \upH^\bullet_{cb}(G'; \mathbb{R}) \rightarrow \upH^\bullet_{b}(L; \mathbb{R})$$ using a cochain map $\upC^\bullet(\varphi)$ defined by $$ \upC^\bullet(\varphi):\calB^\infty(Y^{\bullet+1};\bbR)^{G'} \rightarrow \upL^\infty((G/Q)^{\bullet+1};\bbR)^L \ , $$ $$ \psi \mapsto \upC^\bullet(\varphi)(\psi)(\eta_1,\ldots,\eta_{\bullet+1}):=\psi(\varphi(\eta_1),\ldots,\varphi(\eta_{\bullet+1})) \ , $$ for almost every $\eta_1,\ldots,\eta_{\bullet+1} \in G/Q$.
The following result shows that the pullback associated to $\rho$ via $\varphi$ agrees with the one along $\phi$. This property turns out to be fundamental to coherently extend the numerical invariants of representations to the ones of measurable cocycles (see~\cite[Proposition 3.4]{savini3:articolo} and~\cite[Propositions~12, Proposition~19]{moraschini:savini}). \begin{prop}\label{prop:pullback:coc:vs:repr} In the situation of Setup~\ref{setup:mult:const}, let $\rho \colon L \rightarrow G'$ be a representation which admits a $\rho$-equivariant measurable map $\varphi \colon G/Q \rightarrow Y$. Then, we have $$ \upC^\bullet(\Phi^X)=\upC^\bullet(\varphi) \ . $$ \end{prop} \begin{proof} Since the boundary map $\phi$ associated to $\sigma_\rho$ does not depend on the second variable, it is immediate to check that the following diagram commutes $$ \xymatrix{ \mathcal{B}^\infty(Y^{\bullet + 1}; \mathbb{R})^{G'} \ar[rr]^-{\upC^\bullet(\phi)} \ar[rd]_-{\upC^\bullet(\varphi)} && \textup{L}_{\text{w}^*}^\infty((G \slash Q)^{\bullet+1}; \upL^\infty(X))^L \ar[ld]^-{\upI_X^\bullet} \\ & \textup{L}^\infty((G \slash Q)^{\bullet+1}; \mathbb{R})^L \ , } $$ whence the thesis. \end{proof}
\begin{oss} The existence of a cocycle of the form $\sigma \colon L \times X \rightarrow G'$ required in Setup~\ref{setup:mult:const} is irrelevant in the previous result. \end{oss}
\subsection{Multiplicative formula}\label{subsec:easy:mult:formula}
In this section we show how to deduce the multiplicative formula stated in Proposition~\ref{prop:baby:formula}. Some applications of the formula are then discussed in Section~\ref{subsec:applications:baby}.
\begin{repprop}{prop:baby:formula} In the situation of Setup~\ref{setup:mult:const}, let $\psi' \in \calB^\infty(Y^{\bullet+1};\bbR)^{G'}$ be an everywhere-defined $G'$-invariant cocycle. Let $\psi \in \upL^\infty((G/Q)^{\bullet+1})^G$ be a $G$-invariant cocycle. Denote by $\Psi \in \upH^\bullet_{cb}(G;\bbR)$ the class of $\psi$. Assume that $\Psi=\textup{trans}_{G/Q}^{\bullet} [\upC^\bullet(\Phi^X)(\psi')]$. \begin{enumerate}
\item We have that $$ \int_{L \backslash G} \int_X \psi'(\phi(\overline{g}.\eta_1, x), \ldots, \phi(\overline{g}.\eta_{\bullet+1}, x)) d\mu_X(x) d\mu(\overline{g}) = \psi(\eta_1, \ldots, \eta_{\bullet+1}) + \textup{cobound.} \ , $$ for almost every $(\eta_1,\ldots,\eta_{\bullet+1}) \in (G/Q)^{\bullet+1}$.
\item If $\upH^\bullet_{cb}(G;\bbR) \cong \bbR \Psi (= \bbR[\psi])$, then there exists a real constant $\lambda_{\psi',\psi}(\sigma) \in \bbR$ depending on $\sigma,\psi',\psi$ such that \begin{align*} \int_{L\backslash G} \int_X \psi'(\phi(\overline{g}.\eta_1, x), \ldots, \phi(\overline{g}.\eta_{\bullet+1}, x)) d\mu_X(x) d\mu(\overline{g})&=\lambda_{\psi',\psi}(\sigma) \cdot \psi(\eta_1,\ldots,\eta_{\bullet+1}) \\ &+\textup{cobound.} \ , \end{align*} for almost every $(\eta_1,\ldots,\eta_{\bullet+1}) \in (G/Q)^{\bullet+1}$. \end{enumerate} \end{repprop} \begin{proof} \emph{Ad~1.} Setup~\ref{setup:mult:const} ensures the existence of the transfer map $\trans_{G \slash Q}^\bullet$. Unravelling the definitions of $\widehat{\textup{trans}}^\bullet_{G \slash Q}$ and of $\upC^\bullet(\Phi^X)=\upI^\bullet_X \circ \upC^\bullet(\phi)$, the left-hand side of the first formula is precisely the cochain $\widehat{\textup{trans}}^\bullet_{G \slash Q}(\upC^\bullet(\Phi^X)(\psi'))$ evaluated at $(\eta_1,\ldots,\eta_{\bullet+1})$, which represents the class $\trans_{G \slash Q}^\bullet[\upC^\bullet(\Phi^X)(\psi')]=\Psi=[\psi]$. Hence it differs from $\psi$ by a coboundary.
\emph{Ad~2.} Since $\upH^\bullet_{cb}(G;\bbR)$ is one-dimensional and generated by $\Psi = [\psi]$ as an $\mathbb{R}$-vector space, $\textup{trans}_{G/Q}^{\bullet} [\upC^\bullet(\Phi^X)(\psi')]$ must be a real multiple of $\Psi$. This finishes the proof. \end{proof}
\begin{oss}\label{oss:evaluate:everywhere:formula} A priori Proposition~\ref{prop:baby:formula}.2 only holds almost everywhere. However, as proved by Monod~\cite[Section~1.C]{monod:lifting}, working with $\textup{L}^\infty$-cocycles on Furstenberg-Poisson boundaries one can always show that the previous formula holds everywhere~\cite[Theorem~B]{monod:lifting} (compare with~\cite[Section~4]{bucher2:articolo}). We will use this fact in the proof of Theorem~\ref{teor:coniugato:standard:embedding} in order to evaluate the formula at a given point. \end{oss}
\subsection{Multiplicative constants and maximal measurable cocycles} \label{subsec:multiplicative:constant}
In this section we are going to introduce the notion of \emph{multiplicative constant}. This definition will allow us to introduce \emph{maximal (measurable) cocycles} and to investigate their rigidity properties.
In the situation of Setup~\ref{setup:mult:const}, let $\psi' \in \calB^\infty(Y^{\bullet+1};\bbR)^{G'}$ and let $\Psi=[\psi] \in \upH^\bullet_{cb}(G;\bbR)$ be represented by a bounded Borel cocycle $\psi \colon (G/Q)^{\bullet+1} \rightarrow \bbR$. If $\upH^\bullet_{cb}(G;\bbR)=\bbR \Psi$, then Proposition~\ref{prop:baby:formula} implies \begin{align}\label{equation:easy:formula} \int_{L \backslash G} \int_X \psi'(\phi(\overline{g}.\eta_1,x),&\ldots,\phi(\overline{g}.\eta_{\bullet+1},x))d\mu_X(x)d\mu(\overline{g})=\\ &=\lambda_{\psi',\psi}(\sigma)\psi(\eta_1,\ldots,\eta_{\bullet+1}) + \textup{cobound.} \nonumber \ . \end{align}
\begin{deft}\label{def:multiplicative:constant} The real number $\lambda_{\psi',\psi}(\sigma) \in \bbR$ appearing in Equation (\ref{equation:easy:formula}) is the \emph{multiplicative constant associated to} $\sigma, \psi', \psi$. \end{deft}
A particularly nice situation for the study of rigidity phenomena is when in Equation~(\ref{equation:easy:formula}) there are no coboundary terms. For this reason we are going to introduce the following notation. \begin{deft} We say that \emph{condition} $(\NCT)$ (no coboundary terms) is satisfied when Equation~(\ref{equation:easy:formula}) reduces to \begin{align*} \int_{L \backslash G} \int_X \psi'(\phi(\overline{g}.\eta_1,x),&\ldots,\phi(\overline{g}.\eta_{\bullet+1},x))d\mu_X(x)d\mu(\overline{g})=\\ =&\lambda_{\psi',\psi}(\sigma)\psi(\eta_1,\ldots,\eta_{\bullet+1}) \nonumber \ . \end{align*} \end{deft} \begin{es}\label{es:when:NCT} Standard examples in which condition $(\NCT)$ is satisfied are the following: \begin{enumerate} \item Given a torsion-free lattice $L \leq G$ in a semisimple Lie group and a minimal parabolic subgroup $P \leq G$, $L$ acts doubly ergodically on the Furstenberg-Poisson boundary $G/P$~\cite[Theorem 5.6]{albuquerque99}. Hence condition $(\NCT)$ is satisfied in degree $n=2$ for bounded alternating cochains.
\item In degree $n \geq 3$, if $G=\textup{PO}(n,1)$ and $G/Q=\bbS^{n-1}$, condition $(\NCT )$ holds when we consider the real bounded cohomology twisted by the sign action \cite[Lemma 2.2]{bucher2:articolo}. \end{enumerate} \end{es}
\begin{oss}\label{oss:NCT} The condition $(\NCT)$ has the following equivalent reformulation via cochains $$ \widehat{\textup{trans}}^\bullet_{G/Q} \circ \upC^\bullet(\Phi^X)(\psi')=\lambda_{\psi',\psi}(\sigma)\psi \ . $$ \end{oss}
If condition $(\NCT)$ is satisfied, then there exists an explicit upper bound for the multiplicative constant $\lambda_{\psi',\psi}(\sigma)$.
\begin{prop}\label{prop:multiplicative:upperbound} In the situation of Setup~\ref{setup:mult:const}, let $\psi' \in \calB^\infty(Y^{\bullet+1};\bbR)^{G'}$ and let $\Psi=[\psi] \in \upH^\bullet_{cb}(G;\bbR)$ be represented by a bounded Borel cocycle $\psi \colon (G/Q)^{\bullet+1} \rightarrow \bbR$. If condition $(\textup{NCT})$ is satisfied, then we have $$
|\lambda_{\psi',\psi}(\sigma)| \leq \frac{\lVert \psi' \rVert_\infty}{\lVert \psi \rVert_\infty} \ . $$ \end{prop}
\begin{proof} By Remark~\ref{oss:NCT} we know that $$ \widehat{\textup{trans}}^\bullet_{G/Q} \circ \upC^\bullet(\Phi^X)(\psi')=\lambda_{\psi',\psi}(\sigma)\psi \ . $$ Since by Proposition \ref{prop:pullback:cohomology} $\widehat{\textup{trans}}^\bullet_{G/Q}$ and $\upC^\bullet(\Phi^X)$ are norm non-increasing maps, the left-hand side admits the following estimate $$ \lVert \widehat{\textup{trans}}^\bullet_{G/Q} \circ \upC^\bullet(\Phi^X)(\psi') \rVert_\infty \leq \lVert \psi' \rVert_\infty \ . $$ Hence, we get $$
|\lambda_{\psi',\psi}(\sigma)| \lVert \psi \rVert_\infty \leq \lVert \psi' \rVert_\infty \ , $$ as desired. \end{proof}
Using the previous upper bound, we introduce the following
\begin{deft}\label{def:maximal:cocycle} In the situation of Setup~\ref{setup:mult:const} assume that condition $(\NCT)$ is satisfied. We say that a measurable cocycle $\sigma \colon L \times X \rightarrow G'$ is \emph{maximal} if its multiplicative constant $\lambda_{\psi',\psi}(\sigma)$ attains the maximum value: $$ \lambda_{\psi',\psi}(\sigma) = \frac{\lVert \psi' \rVert_\infty}{\lVert \psi \rVert_\infty} \ . $$ \end{deft}
For every representation $\pi \colon G \to G'$, we denote the restriction of $\pi$ to $L$ by $\pi |_L \colon L \to G'$. We now prove that, under suitable assumptions, maximal cocycles can be trivialized, i.e. they are cohomologous to the restriction $\pi |_L \colon L \rightarrow G'$ of a suitable representation.
\begin{setup}\label{setup:complete:mult:const} In the situation of Setup~\ref{setup:mult:const} assume that condition $(\textup{NCT})$ is satisfied. We also assume that \begin{itemize}
\item Both $\psi'$ and $\psi$ are defined everywhere and they attain their essential supremum:
There exist $\eta_1,\ldots,\eta_{\bullet+1} \in G/Q$ and $y_1,\ldots,y_{\bullet+1} \in Y$ such that $$ \psi'(y_1,\ldots,y_{\bullet+1})=\lVert \psi' \rVert_\infty \hspace{5pt} \mbox{ and } \hspace{5pt} \psi(\eta_1,\ldots,\eta_{\bullet+1})=\lVert \psi \rVert_\infty \ . $$ \item A \emph{maximal} map $\varphi \colon G/Q \rightarrow Y$ is a measurable map such that $$ \psi'(\varphi(g\eta_1),\ldots,\varphi(g\eta_{\bullet+1}))=\lVert \psi' \rVert_\infty \ , $$ for almost every $g \in G$ and for every $\eta_1,\ldots,\eta_{\bullet+1} \in G/Q$ such that $$\psi(\eta_1,\ldots,\eta_{\bullet+1})=\lVert \psi \rVert_\infty \ .$$
\item There exist a continuous representation $\pi \colon G \rightarrow G'$ and a unique continuous $\pi$-equivariant map $\Pi \colon G/Q \rightarrow Y$ which satisfy the following: Given any \emph{maximal} measurable map $\varphi \colon G/Q \rightarrow Y$, there exists a unique element $g_\varphi ' \in G'$ such that $$ \varphi(\eta)=g_\varphi '\Pi(\eta) \ , $$ for almost every $\eta \in G/Q$.
\item The $G'$-pointwise stabilizer of the map $\Pi$ is trivial, i.e. the only element $g' \in G'$ such that $g' \Pi (x) = \Pi(x)$ for all $x \in \, G \slash Q$ is the neutral element of $G'$. We denote the previous stabilizer by $\textup{Stab}_{G'}(\Pi)$. \end{itemize} \end{setup}
\begin{teor}\label{teor:coniugato:standard:embedding}
In the situation of Setup~\ref{setup:complete:mult:const} let $\pi |_L \colon L \to G'$ be the restriction of the representation $\pi \colon G \to G'$ to $L$. If the measurable cocycle $\sigma \colon L \times X \rightarrow G'$ is maximal, then $\sigma$ is cohomologous to $\pi |_L$. \end{teor}
\begin{oss} More precisely, the theorem shows the existence of a measurable map $f \colon X \rightarrow G'$ such that: For all $\gamma \in L$ and almost every $x \in X$, we have $$
\pi |_L (\gamma)=f(\gamma.x)^{-1}\sigma(\gamma,x)f(x) \ . $$ \end{oss}
\begin{proof} Since the cocycle $\sigma$ is maximal, we know that $$ \lambda_{\psi',\psi}(\sigma) = \frac{\lVert \psi' \rVert_\infty}{\lVert \psi \rVert_\infty} \ . $$ Under condition $(\NCT)$, if we substitute the value of $\lambda_{\psi',\psi}(\sigma)$ in Equation (\ref{equation:easy:formula}) we get \begin{small} \begin{equation}\label{equation:maximal:nct:substitution} \int_{L \backslash G} \int_X \psi'(\phi(\overline{g}.\eta_1,x),\ldots,\phi(\overline{g}.\eta_{\bullet+1},x))d\mu_X(x)d\mu(\overline{g})=\frac{\lVert \psi' \rVert_\infty}{\lVert \psi \rVert_\infty}\psi(\eta_1,\ldots,\eta_{\bullet+1})\ . \end{equation} \end{small} Moreover, by assumption $\psi$ attains its essential supremum, whence there exist $\hat{\eta}_1,\ldots,\hat{\eta}_{\bullet+1} \in G/Q$ such that \begin{equation}\label{equation:maximum:attain} \psi(\hat{\eta}_1,\ldots,\hat{\eta}_{\bullet+1})=\lVert \psi \rVert_\infty \ . \end{equation} By Remark~\ref{oss:evaluate:everywhere:formula} we can evaluate Equation \eqref{equation:maximal:nct:substitution} at $\hat{\eta}_1,\ldots,\hat{\eta}_{\bullet+1} \in G/Q$. Hence, by Equation~(\ref{equation:maximum:attain}), we have \begin{equation}\label{equation:maximal:integral} \int_{L \backslash G} \int_X \psi'(\phi(\overline{g}.\hat{\eta}_1,x),\ldots,\phi(\overline{g}.\hat{\eta}_{\bullet+1},x))d\mu_X(x)d\mu(\overline{g})=\lVert \psi' \rVert_\infty \ . \end{equation} This shows that $$ \psi'(\phi(\overline{g}.\hat{\eta}_1,x),\ldots,\phi(\overline{g}.\hat{\eta}_{\bullet+1},x))=\lVert \psi' \rVert_\infty \ , $$ for almost every $\overline{g} \in L \backslash G$ and almost every $x \in X$. Additionally, the $\sigma$-equivariance of $\phi$ implies that in fact \begin{equation}\label{equation:maximal:map} \psi'(\phi(g.\hat{\eta}_1,x),\ldots,\phi(g.\hat{\eta}_{\bullet+1},x))=\lVert \psi' \rVert_\infty \end{equation} holds for almost every $g \in G$ and almost every $x \in X$.
We can define for almost every $x \in \, X$ a map $$\phi_x \colon G/Q \rightarrow Y, \hspace{5pt} \phi_x(\eta)\coloneqq\phi(\eta,x) \ ,$$ which is measurable~\cite[Lemma 2.6]{fisher:morris:whyte} and maximal by Equation \eqref{equation:maximal:map}. Hence, by the assumptions of Setup~\ref{setup:complete:mult:const}, for almost every $x \in X$ there must exist an element $g_x \in G'$ such that $$ \phi_x(\eta)=g_x\Pi(\eta) \ , $$ for almost every $\eta \in G/Q$. This shows that $\phi_x$ lies in the $G'$-orbit of $\Pi$. In this way we get a map $$ \widehat{\phi} \colon X \rightarrow G'.\Pi, \ \ \ \widehat{\phi}(x)=\phi_x \ , $$ which is measurable~\cite[Lemma 2.6]{fisher:morris:whyte}. By Setup~\ref{setup:complete:mult:const} the stabilizer of $\Pi$ is trivial and hence the orbit $G'.\Pi$ is naturally homeomorphic to $G'$ through a map $\jmath \colon G'.\Pi \rightarrow G'$. Composing the identification $\jmath$ with the map $\widehat{\phi}$ we get a map $$ f \colon X \rightarrow G', \ \ \ f(x)\coloneqq (\jmath \circ \widehat{\phi})(x) \ , $$ which is defined almost everywhere and it is measurable being the composition of measurable maps (notice that the composition above gives back the element $g_x$).
We can now conclude the proof (compare with~\cite[Proposition 3.2]{sauer:articolo}). Given $\gamma \in L$, on the one hand we have $$ \phi(\gamma.\eta,\gamma.x)=\sigma(\gamma,x)\phi(\eta,x)=\sigma(\gamma,x)f(x)\Pi(\eta) \ , $$ and on the other $$
\phi(\gamma.\eta,\gamma.x)=f(\gamma.x)\Pi(\gamma.\eta)=f(\gamma.x)\pi |_L (\gamma)\Pi(\eta) \ . $$ In the second equality we used the $\pi$-equivariance of the map $\Pi$. The fact that $\textup{Stab}_{G'}(\Pi)$ is trivial implies that $$
\pi |_L (\gamma)=f(\gamma.x)^{-1}\sigma(\gamma,x)f(x) \ , $$ which finishes the proof. \end{proof}
\subsection{Applications of the multiplicative formula}\label{subsec:applications:baby}
For the convenience of the reader, we collect here some examples of applications of Proposition~\ref{prop:baby:formula}.
\begin{es} Let $n \geq 3$. Let $L \leq G = \po^\circ(n, 1)$ be a torsion-free non-uniform lattice and $(X, \mu_X)$ be a standard Borel probability $L$-space. Following the notation of Setup~\ref{setup:mult:const}, we set $G' = \po^\circ(n, 1)$ and $Y = G / Q = \partial \mathbb{H}^n_\mathbb{R} \cong \mathbb{S}^{n-1}$, where $Q$ is a (minimal) parabolic subgroup of $G$. Using bounded cohomology theory~\cite{Grom82, FM:Grom}, one can define the \emph{volume} $\vol(\sigma)$ of a measurable cocycle $\sigma \colon L \times X \to \po^\circ(n,1)$~\cite[Section~4.1]{moraschini:savini}. As proved by the authors~\cite[Proposition~2]{moraschini:savini}, in this setting the multiplicative constant is given by $$ \lambda_{\psi', \psi}(\sigma) = \frac{\vol(\sigma)}{\vol(L \backslash \mathbb{H}^n)} \ . $$ Since condition $(\NCT)$ is satisfied for twisted real coefficients~\cite[Lemma~2.2]{bucher2:articolo}, Proposition~\ref{prop:multiplicative:upperbound} shows that the following Milnor-Wood inequality holds~\cite[Proposition~15]{moraschini:savini} $$
| \vol(\sigma) | \leq \vol(L \backslash \mathbb{H}^n) \ . $$ Finally one can apply Theorem~\ref{teor:coniugato:standard:embedding} to show that if $\sigma$ is maximal, then $\sigma$ is cohomologous to the cocycle associated to the standard lattice embedding $L \rightarrow G$. In fact, one can strengthen this result: A cocycle is maximal \emph{if and only if} it is cohomologous to the cocycle associated to the standard lattice embedding~\cite[Theorem~1]{moraschini:savini}.
Similarly, one can apply an analogous strategy to study the case of closed surfaces. The main difference is that we have to fix a hyperbolization. Then, maximal cocycles will be cohomologous to the given hyperbolization~\cite[Theorem~5]{moraschini:savini}. \end{es}
\begin{es} Fix a torsion-free lattice $L \leq G = \textup{PSL}(2,\mathbb{C})$ together with a standard Borel probability $L$-space $(X, \mu_X)$. Following the notation of Setup~\ref{setup:mult:const}, we set $G' = \textup{PSL}(n,\mathbb{C})$, $Y=\mathscr{F}(n,\mathbb{C})$ is the space of full flags, and $G / Q = \mathbb{P}^1(\mathbb{C})$. Here $Q$ is a (minimal) parabolic subgroup of $G$. The second author defined the \emph{Borel invariant} $\beta_n(\sigma)$ of a measurable cocycle $\sigma \colon L \times X \to \textup{PSL}(n,\mathbb{C})$~\cite[Section~4]{savini3:articolo}. Then, the multiplicative constant is given by~\cite[Proposition 1.2]{savini3:articolo} $$ \lambda_{\psi', \psi}(\sigma) = \frac{\beta_n(\sigma)}{\vol(L \backslash \mathbb{H}^3)} \ . $$ Since condition $(\NCT)$ is satisfied, Proposition~\ref{prop:multiplicative:upperbound} leads to the following Milnor-Wood inequality~\cite[Proposition~4.5]{savini3:articolo} $$
| \beta_n(\sigma) | \leq {n+1 \choose 3}\vol(L \backslash \mathbb{H}^3) \ . $$ Finally, one can apply Theorem~\ref{teor:coniugato:standard:embedding} to show that if $\sigma$ is maximal, then $\sigma$ is cohomologous to the cocycle associated to the standard lattice embedding $L \rightarrow G$ composed with the irreducible representation $\pi_n:\textup{PSL}(2,\mathbb{C}) \rightarrow \textup{PSL}(n,\mathbb{C})$. In fact also the converse holds true~\cite[Theorem~1.1]{savini3:articolo}. \end{es}
\section{Cartan invariant of measurable cocycles of complex hyperbolic lattices}\label{sec:cartan:invariant}
Let $\Gamma \leq \pu(n,1)$ be a torsion-free lattice with $n \geq 2$ and let $(X,\mu_X)$ be a standard Borel probability $\Gamma$-space. In this section we are going to define the \emph{Cartan invariant} $i(\sigma)$ associated to a measurable cocycle $\sigma \colon \Gamma \times X \rightarrow \pu(m,1)$. Then, when $\sigma$ is non-elementary, we will express the Cartan invariant as a multiplicative constant (Proposition \ref{prop:cartan:multiplicative:cochains}). This interpretation allows us to deduce many properties of the Cartan invariant for non-elementary measurable cocycles.
We recall here just a few notions of complex hyperbolic geometry that we will need in the sequel. We refer the reader to Goldman's book~\cite{Goldmancomplex} for a complete discussion of this topic. Let $\mathbb{H}^n_{\mathbb{C}}$ be the complex hyperbolic space. For every $k \in \{ 0, \ldots, n\}$ a \emph{$k$-plane} is a totally geodesic copy of $\bbH^{k}_{\bbC}$ holomorphically embedded in $\bbH^n_{\bbC}$. When $k=1$, a $1$-plane is simply a \emph{complex geodesic}. Similarly, a \emph{$k$-chain} is the boundary of a $k$-plane in $\partial_\infty \bbH^n_{\bbC}$, i.e. it is an embedded copy of $\partial_\infty \bbH^k_{\bbC}$. When $k = 1$, we will just call them \emph{chains}. Since a chain is completely determined by any two of its points, two distinct chains are either disjoint or they meet in exactly one point.
Let us consider the Hermitian triple product $$ \langle \cdot, \cdot, \cdot \rangle \colon (\bbC^{n,1})^3 \rightarrow \bbC, \hspace{5pt} \langle z_1, z_2, z_3 \rangle \coloneqq h(z_1,z_2)h(z_2,z_3)h(z_3,z_1) \ . $$ If we denote by $(\partial_\infty \bbH^n_{\bbC})^{(3)}$ the set of triples of distinct points in the boundary at infinity, we can define the following function $$ c_n \colon (\partial_\infty \bbH^n_{\bbC})^{(3)} \rightarrow [-1,1], \hspace{5pt} c_n(\xi_1,\xi_2,\xi_3) \coloneqq \frac{2}{\pi} \arg \langle z_1,z_2,z_3 \rangle \ . $$ Here $\xi_i=[z_i]$ and we choose the branch of the argument function such that $\arg(z) \in [-\pi/2,\pi/2]$. Then, we can extend $c_n$ to a $\pu(n,1)$-invariant alternating Borel cocycle on the whole $(\partial_\infty \bbH^n_{\bbC})^3$.
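Notice that $c_n$ is well defined, that is, it does not depend on the choice of the lifts $z_i$ of $\xi_i$: replacing $z_i$ by $\lambda_i z_i$ with $\lambda_i \in \bbC^\ast$ rescales the Hermitian triple product by a positive real number, $$ \langle \lambda_1 z_1, \lambda_2 z_2, \lambda_3 z_3 \rangle = |\lambda_1|^2 |\lambda_2|^2 |\lambda_3|^2 \langle z_1, z_2, z_3 \rangle \ , $$ so its argument, and hence $c_n(\xi_1,\xi_2,\xi_3)$, is unchanged.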
Moreover, $|c_n(\xi_1,\xi_2,\xi_3)|=1$ if and only if $\xi_1,\xi_2,\xi_3 \in \partial_\infty \bbH^n_{\bbC}$ are distinct and they lie on the same chain \cite[Section 3]{BIW09}.
\begin{deft} The cocycle $$ c_n \in \calB^\infty_{\textup{alt}}((\partial_\infty \bbH^n_{\bbC})^3;\bbR)^{\pu(n,1)} $$ is called \emph{Cartan cocycle}. \end{deft} \begin{oss}\label{oss:cartan:cocycle:det:class} The Cartan cocycle $c_n$ canonically determines a class in $\upH^2_{cb}(\pu(n,1);\bbR)$ via the map defined in Equation~(\ref{eq:canonical:map:B:L}). \end{oss}
Let $\omega_n \in \Omega^2(\bbH^n_{\bbC})$ be the K\"ahler form, which is a $\pu(n,1)$-invariant $2$-form. By the Van Est isomorphism~\cite[Corollary~7.2]{guichardet} the space $\Omega^2(\bbH^n_{\bbC})^{\pu(n,1)}$ is isomorphic to $\upH^2_c(\pu(n,1);\bbR)$. We call \emph{K\"ahler class} the element $\kappa_n \in \, \upH^2_c(\pu(n,1);\bbR)$ corresponding to $\omega_n$ via the previous isomorphism. Since the K\"ahler class is bounded, $\kappa_n$ lies in the image of the comparison map $$ \textup{comp}^2 \colon \upH^2_{cb}(\pu(n,1);\bbR) \rightarrow \upH^2_c(\pu(n,1);\bbR) \ . $$ Hence, there exists a class $\kappa_n^b \in \upH^2_{cb}(\pu(n,1);\bbR)$ which is sent to $\kappa_n$ under $\textup{comp}^2$. Since the group $\upH^2_{cb}(\pu(n,1);\bbR)$ is one-dimensional, we can assume that $\kappa_n^b$ is its generator as a real vector space. The relation between the Cartan cocycle and the bounded K\"ahler class is the following (Remark~\ref{oss:cartan:cocycle:det:class}) $$ [c_n]=\frac{\kappa^b_n}{\pi} \in \upH^2_{cb}(\pu(n,1);\bbR) \ . $$ \begin{oss}\label{oss:cartan:repr:kaehler} The previous equality shows that the cocycle $\pi c_n$ is a representative of the bounded K\"ahler class. \end{oss}
\begin{setup}\label{setup:cartan:invariant} Let $n \geq 2$. We assume the following \begin{itemize} \item Let $\Gamma \leq \pu(n,1)$ be a torsion-free lattice;
\item Let $(X,\mu_X)$ be a standard Borel probability $\Gamma$-space;
\item Let $\sigma \colon \Gamma \times X \rightarrow \pu(m,1)$ be a measurable cocycle. \end{itemize} \end{setup} In the previous situation $\sigma$ induces a map in bounded cohomology (Lemma~\ref{lem:pullback:map:cocycle:cohomology}) $$ \upH^2_b(\sigma): \textup{H}^2_{cb}(\pu(m,1);\bbR) \rightarrow \upH^2_b(\Gamma;\bbR) \ . $$ Moreover, since $\Gamma$ is a lattice, there exists a transfer map (Definition~\ref{def:trans:map}) $$ \trans_\Gamma^2:\upH^2_b(\Gamma;\bbR) \rightarrow \upH^2_{cb}(\pu(n,1);\bbR) \ . $$ Composing the two maps above we can give the following \begin{deft}\label{def:cartan:invariant} In the situation of Setup~\ref{setup:cartan:invariant}, the \emph{Cartan invariant associated to the cocycle $\sigma$} is the real number $i(\sigma)$ appearing in the following equation \begin{equation}\label{eq:cartan:no:boundary:map} \trans_\Gamma^2 \circ \upH^2_b(\sigma)(\kappa^b_m)=i(\sigma)\kappa^b_n \ . \end{equation} \end{deft} \begin{oss}\label{oss:cartan:inv:cocycle:1:well:def} The previous formula is well-defined since $\upH^2_{cb}(\pu(n,1);\bbR) \cong \mathbb{R}\kappa_n^b$. \end{oss}
We explain now how to compute the Cartan invariant in terms of a boundary map associated to $\sigma$. This will show that the Cartan invariant is a multiplicative constant in the sense of Definition \ref{def:multiplicative:constant}.
First recall that every non-elementary measurable cocycle $\sigma \colon \Gamma \times X \rightarrow \pu(m,1)$ admits an \emph{essentially unique} boundary map~\cite[Proposition 3.3]{MonShal0} (Remark~\ref{oss:esistenza:mappe:bordo}). Here essentially unique means that any two boundary maps coincide on a set of full measure. As noticed by Monod and Shalom, the non-elementary condition means that the group of the real points of the algebraic hull of $\sigma$ (Definition \ref{def:alg:hull}) is a non-elementary subgroup of $\pu(m,1)$.
By Lemma \ref{lem:pullback:implemented:boundary:map} the existence of a boundary map implies that the pullback map $\upH^2_b(\sigma)$ coincides with the following composition $$ \upH^2_b(\sigma)=\upH^2(\Phi^X) \circ \mathfrak{c}^2 \ , $$ where $\mathfrak{c}^2$ and $\upH^2(\Phi^X)$ are the maps introduced in Equation~(\ref{eq:canonical:map:B:L}) and Definition~\ref{def:pullback:boundary}, respectively. Thus, Equation~\eqref{eq:cartan:no:boundary:map} is equivalent to \begin{equation}\label{eq:cartan:multiplicative:constant} \trans_\Gamma^2 \circ \upH^2(\Phi^X)([\pi c_m])=i(\sigma) \kappa^b_n \ . \end{equation} This shows that the Cartan invariant is a \emph{multiplicative constant} in the sense of Definition \ref{def:multiplicative:constant}. We are going to prove that Equation \eqref{eq:cartan:multiplicative:constant} actually holds at the level of cochains.
\begin{repprop}{prop:cartan:multiplicative:cochains} In the situation of Setup~\ref{setup:cartan:invariant}, let $\sigma$ be a non-elementary measurable cocycle with boundary map $\phi \colon \partial_\infty \bbH^n_{\bbC} \times X \rightarrow \partial_\infty \bbH^m_{\bbC}$. Then, for every triple of pairwise distinct points $\xi_1,\xi_2,\xi_3 \in \partial_\infty \bbH^n_{\bbC}$, we have \begin{small} \begin{equation}\label{eq:cartan:multiplicative:cochains} i(\sigma)c_n(\xi_1,\xi_2,\xi_3)=\int_{\Gamma \backslash \pu(n,1)} \int_X c_m(\phi(\overline{g}\xi_1,x),\phi(\overline{g}\xi_2,x),\phi(\overline{g}\xi_3,x))d\mu(\overline{g})d\mu_X(x) \ . \end{equation} \end{small} Here $\mu$ is a $\pu(n,1)$-invariant probability measure on the quotient $\Gamma \backslash \pu(n,1)$. \end{repprop}
\begin{proof} We already know that $\pi c_n$ is a representative of $\kappa^b_n$ (Remark~\ref{oss:cartan:repr:kaehler}). Moreover, since $\Gamma$ acts doubly ergodically on $\partial_\infty \bbH^n_{\bbC}$, there are no essentially bounded $\Gamma$-invariant alternating functions on $(\partial_\infty \bbH^n_{\bbC})^2$. Hence, if we rewrite Equation \eqref{eq:cartan:multiplicative:constant} in terms of cochains, we obtain a formula $$ \widehat{\trans}^2_{\partial_\infty \bbH^n_{\bbC}} \circ \upC^2(\Phi^X)(\pi c_m)=i(\sigma)(\pi c_n) \ , $$ without coboundaries. Here $\widehat{\trans}^2_{\partial_\infty \bbH^n_{\bbC}}$ is the map introduced at the end of Section \ref{subsec:transfer:maps}. Since the constant $\pi$ appears on both sides, we get $$ i(\sigma)c_n(\xi_1,\xi_2,\xi_3)=\int_{\Gamma \backslash \pu(n,1)} \int_X c_m(\phi(\overline{g}\xi_1,x),\phi(\overline{g}\xi_2,x),\phi(\overline{g}\xi_3,x))d\mu(\overline{g})d\mu_X(x) \ , $$ for almost every $\xi_1,\xi_2,\xi_3 \in \partial_\infty \bbH^n_{\bbC}$. The fact that the equation is in fact true for \emph{every triple of pairwise distinct points} can be proved by following \emph{verbatim} Pozzetti's proof~\cite[Lemma 2.11]{Pozzetti}. This finishes the proof. \end{proof}
The interpretation of the Cartan invariant as a multiplicative constant has many consequences. For instance, the Cartan invariant of measurable cocycles extends the one for representations introduced by Burger and Iozzi~\cite{BIcartan}.
\begin{prop}\label{prop:cartan:rep} In the situation of Setup~\ref{setup:cartan:invariant}, let $\rho \colon \Gamma \rightarrow \pu(m,1)$ be a non-elementary representation and let $\sigma_\rho \colon \Gamma \times X \rightarrow \pu(m,1)$ be the associated measurable cocycle. Then, we have $$ i(\sigma_\rho)=i(\rho) \ . $$ \end{prop}
\begin{proof} Since $\rho$ is non-elementary, $\rho$ admits an essentially unique boundary map $\varphi:\partial_\infty \bbH^n_{\bbC} \rightarrow \partial_\infty \bbH^m_{\bbC}$~\cite[Corollary 3.2]{burger:mozes}. Hence, one can define a boundary map for the cocycle $\sigma_\rho$ as $$ \phi \colon \partial_\infty \bbH^n_{\bbC} \times X \rightarrow \partial_\infty \bbH^m_{\bbC} \ , \ \phi(\xi,x):=\varphi(\xi) \ , $$ for almost every $\xi \in \partial_\infty \bbH^n_{\bbC}$ and almost every $x \in X$. This readily implies that \begin{align*} i(\sigma_\rho)c_n&=\widehat{\trans}^2_{\partial_\infty \bbH^n_{\bbC}} \circ \upC^2(\Phi^X)(c_m) \\ &=\widehat{\trans}^2_{\partial_\infty \bbH^n_{\bbC}} \circ \upC^2(\varphi)(c_m) \\ &=i(\rho)c_n \ . \end{align*} The first equality comes from Proposition~\ref{prop:cartan:multiplicative:cochains}, the second one is due to Proposition~\ref{prop:pullback:coc:vs:repr} and finally the last equality is proved by Burger and Iozzi~\cite[Lemma 5.3]{BIgeo}. \end{proof}
We conclude this section by showing that the Cartan invariant is constant along cohomology classes and it has bounded absolute value. \begin{prop}\label{prop:cartan:cohomology:bound} In the situation of Setup~\ref{setup:cartan:invariant}, let $\sigma$ be a non-elementary measurable cocycle with boundary map $\phi:\partial_\infty \bbH^n_{\bbC} \times X \rightarrow \partial_\infty \bbH^m_{\bbC}$. Then, the following hold \begin{enumerate}
\item The Cartan invariant $i(\sigma)$ is constant along the $\pu(m,1)$-cohomology class of $\sigma$.
\item $|i(\sigma)| \leq 1$. \end{enumerate} \end{prop}
\begin{proof} \emph{Ad.~1} The first statement follows by Proposition~\ref{prop:invariance:cohomology}. Indeed if $f \colon X \rightarrow \pu(m,1)$ is a measurable map, then Proposition~\ref{prop:invariance:cohomology} shows that $$ \upH^2(f.\Phi^X)([\pi c_m])=\upH^2(\Phi^X)([\pi c_m]) \ . $$ Hence, we get $$ i(f.\sigma)\kappa^b_n=\trans_\Gamma^2 \circ \upH^2(f.\Phi^X)([\pi c_m])=\trans_\Gamma^2 \circ \upH^2(\Phi^X)([\pi c_m])=i(\sigma) \kappa^b_n \ .$$ This shows that $ i(f.\sigma)=i(\sigma) $, as desired.
\emph{Ad.~2} Since the Cartan invariant is a multiplicative constant and condition $(\NCT)$ is satisfied, Proposition \ref{prop:multiplicative:upperbound} implies $$
|i(\sigma)| \leq 1 \ . $$ Here we used the fact that $\lVert c_n \rVert_\infty=\lVert c_m \rVert_\infty=1$. \end{proof}
The second item of the previous proposition leads to the following definition (compare with Definition \ref{def:maximal:cocycle}).
\begin{deft}\label{def:maximal:cartan} In the situation of Setup~\ref{setup:cartan:invariant}, a non-elementary cocycle $\sigma$ is \emph{maximal} if $i(\sigma)=1$. \end{deft}
\section{Totally real cocycles}\label{sec:tot:real}
In this section we introduce the notion of \emph{totally real cocycles}. Our definition extends the one by Burger and Iozzi~\cite{BIreal} for representations. We aim to investigate the relation between the vanishing of the Cartan invariant and the condition of being totally real. We will show that totally real cocycles have trivial Cartan invariant. On the other hand, it is natural to ask whether the converse is also true. We partially answer this question by showing that ergodic cocycles inducing the trivial map in bounded cohomology are totally real.
\begin{deft}\label{def:totally:real} In the situation of Setup~\ref{setup:cartan:invariant} we denote by $\mathbf{L}$ the algebraic hull of $\sigma$. Let $L \coloneqq \mathbf{L}(\mathbb{R})^\circ$ be the connected component of the identity of the group of real points of $\mathbf{L}$. A measurable cocycle $\sigma$ is \emph{totally real} if for some $1 \leq k \leq m$ we have $$ L \subseteq \calN_{\textup{PU}(m,1)}(\mathbb{H}^k_{\mathbb{R}}) \ , $$ where $\mathbb{H}^k_{\mathbb{R}} \subset \mathbb{H}^m_{\mathbb{C}}$ is a totally geodesic copy of the real hyperbolic $k$-space. Here $\calN_{\pu(m,1)}(\mathbb{H}^k_{\mathbb{R}})$ denotes the subgroup of $\pu(m,1)$ preserving the fixed copy of $\bbH^k_{\bbR}$, i.e. $g (\bbH^k_{\bbR}) \subset \bbH^k_{\bbR}$ for every $g \in \calN_{\textup{PU}(m,1)}(\mathbb{H}^k_{\mathbb{R}})$. \end{deft}
\begin{oss}\label{oss:tot:real:cohomologous} By the definition of algebraic hull (Definition~\ref{def:alg:hull}) every totally real cocycle $\sigma$ is cohomologous to a cocycle $\widehat{\sigma}$ whose image is contained in $L$. Additionally, $\calN_{\textup{PU}(m,1)}(\mathbb{H}^k_{\mathbb{R}})$ is an almost direct product of a compact subgroup $K \leq \pu(m,1)$ with an embedded copy of $\textup{PO}(k,1)$ inside $\textup{PU}(m,1)$. Hence, the cocycle $\widehat{\sigma}$ preserves the totally geodesic copy $\mathbb{H}^k_{\mathbb{R}} \subset \mathbb{H}^m_{\mathbb{C}}$ stabilized by $L$. \end{oss}
In the sequel we need the following
\begin{lem}\label{lem:no:two:coboundary} Let $\Gamma \leq \textup{PU}(n,1)$ be a torsion-free lattice, with $n \geq 2$, and let $(X,\mu_X)$ be a standard Borel probability $\Gamma$-space. Then $$ \upH^2_b(\Gamma;\upL^\infty(X)) \cong \mathcal{Z}\upL^\infty_{\textup{w}^\ast,\textup{alt}}((\partial_\infty \bbH^n_{\bbC})^3; \upL^\infty(X))^\Gamma \ . $$ Here, the letter $\mathcal{Z}$ denotes the set of cocycles and the subscript \emph{alt} denotes the restriction to alternating essentially bounded weak-${}^*$ measurable functions. \end{lem}
\begin{proof} For every $k \in \bbN$ we have the following $$ \upL^\infty_{\textup{w}^\ast}((\partial_\infty \bbH^n_{\bbC})^k; \upL^\infty(X))^\Gamma \cong \upL_{\textup{w}^\ast}^\infty((\partial_\infty \bbH^n_{\bbC})^k \times X;\bbR)^\Gamma \ , $$ where $\Gamma$ acts on $(\partial_\infty \bbH^n_{\bbC})^k \times X$ diagonally \cite[Corollary 2.3.3]{monod:libro}. Moreover, every $\Gamma$-invariant essentially bounded weak-${}^*$ measurable function on $(\partial_\infty \bbH^n_{\bbC})^2\times X$ must be essentially constant~\cite[Proposition 2.4]{MonShal0}. Since an alternating function that is constant vanishes, we have that $$ \upL^\infty_{\textup{w}^\ast,\textup{alt}}((\partial_\infty \bbH^n_{\bbC})^2; \upL^\infty(X))^\Gamma = 0 \ . $$ This shows that there are no coboundaries in dimension two, whence the thesis. \end{proof}
The notion of totally real cocycles is strictly related to the vanishing of the Cartan invariant. This correspondence is described by the following result which is a suitable adaptation of a result by Burger and Iozzi~\cite[Theorem 1.1]{BIreal} to the case of measurable cocycles.
\begin{repteor}{teor:totally:real} In the situation of Setup~\ref{setup:cartan:invariant}, let $\sigma$ be a non-elementary measurable cocycle with boundary map $\phi:\partial_\infty \mathbb{H}^n_{\bbC} \times X \rightarrow \partial_\infty \bbH^m_{\bbC}$. Then, the following hold
\begin{enumerate}
\item If $\sigma$ is totally real, then $i(\sigma)=0$;
\item If $X$ is $\Gamma$-ergodic and $\textup{H}^2(\phi)([c_m])=0$, then $\sigma$ is totally real. \end{enumerate} \end{repteor}
\begin{proof} \emph{Ad~1.} Let $\mathbf{L}$ be the algebraic hull of $\sigma$ and let $L=\mathbf{L}(\bbR)^\circ$ be the connected component of the real points of $\mathbf{L}$ containing the identity. By Remark~\ref{oss:tot:real:cohomologous}, there exists a cocycle $\widehat{\sigma}$ cohomologous to $\sigma$ such that $$\widehat{\sigma} (\Gamma \times X) \subset L \subset \calN_{\textup{PU}(m,1)}(\bbH^k_{\bbR}) \ ,$$ for some $1 \leq k \leq m$. Since $\widehat{\sigma}$ is cohomologous to $\sigma$, it admits a boundary map $\widehat{\phi}$ (Remark~\ref{oss:twisted:boundary:map}). Hence, since the Cartan invariant is constant along the $\pu(m,1)$-cohomology class of $\sigma$, it is sufficient to show that $i(\widehat{\sigma}) = 0$ (Proposition \ref{prop:cartan:cohomology:bound}).
By Remark~\ref{oss:tot:real:cohomologous} the cocycle $\widehat{\sigma}$ also preserves the totally geodesic copy of $\mathbb{H}^k_{\bbR}$ stabilized by $L$, whence it preserves the boundary at infinity $\partial_\infty \mathbb{H}^k_{\bbR}$. We identify $\partial_\infty \mathbb{H}^k_{\bbR}$ with a $(k-1)$-dimensional sphere $\widehat{\calS} \subset \partial_\infty \bbH^m_{\bbC}$ as explained in Section~\ref{sec:cartan:invariant}. Hence, the boundary map $\widehat{\phi}$ takes values in $\widehat{\calS}$, that is
$$ \widehat{\phi} \colon \partial_\infty \bbH^n_{\bbC} \times X \rightarrow \widehat{\calS} \ . $$
For almost every $x \in X$, we then define \begin{align*} \widehat{\phi}_x \colon \partial_\infty \bbH^n_{\bbC} \rightarrow \widehat{\calS} \\ \widehat{\phi}_x(\xi) \coloneqq \widehat{\phi}(\xi,x) \ . \end{align*} The map $\widehat{\phi}_x$ is measurable for almost every $x \in X$~\cite[Lemma 2.6]{fisher:morris:whyte}. By Proposition \ref{prop:cartan:multiplicative:cochains} we have $$ \int_{\Gamma \backslash \pu(n,1)} \int_X c_m(\widehat{\phi}_x(\overline{g}.\xi_1),\widehat{\phi}_x(\overline{g}.\xi_2),\widehat{\phi}_x(\overline{g}.\xi_3))d\mu_X(x)d\mu(\overline{g})=i(\widehat{\sigma}) c_n(\xi_1,\xi_2,\xi_3) \ , $$ for almost every $\xi_1,\xi_2,\xi_3 \in \partial_\infty \bbH^n_{\bbC}$. Here $\mu$ is the $\pu(n,1)$-invariant probability measure on $\Gamma \backslash \pu(n,1)$. Since $\widehat{\phi}_x$ takes values in the sphere $\widehat{\calS}$ for almost every $x \in X$, we have that $$ c_m(\widehat{\phi}_x(\overline{g}.\xi_1),\widehat{\phi}_x(\overline{g}.\xi_2),\widehat{\phi}_x(\overline{g}.\xi_3))=0 \ , $$ for almost every $x \in X$ and almost every $\overline{g} \in \Gamma \backslash \pu(n,1)$~\cite[Corollary 3.1]{BIreal}. Thus, $i(\widehat{\sigma})$ vanishes. Since $i(\sigma) = i(\widehat{\sigma})$ we get the thesis.
\emph{Ad~2.} Since $\upC^2(\phi)$ is a cochain map (Lemma~\ref{lemma:pullback:cochain}), it induces a map $\upH^2(\phi)$ in cohomology. If $\upH^2(\phi)([c_m])=0$, then we have $$ \upH^2(\phi)([c_m])=\left[ \upC^2(\phi)(c_m)\right] =0 \ . $$ Since by Lemma \ref{lem:no:two:coboundary} there are no $\upL^\infty(X)$-coboundaries in degree $2$, we have that $$ \upC^2(\phi)(c_m)=0 \ . $$ More precisely \begin{equation}\label{equation:vanishing:cartan:ess:image} c_m(\phi_x(\xi_1),\phi_x(\xi_2),\phi_x(\xi_3))=0 \ , \end{equation} for almost every $x \in X$ and almost every $\xi_1,\xi_2,\xi_3 \in \partial_\infty \bbH^n_{\bbC}$. For almost every point $x \in \, X$, let us denote by $E_x$ the essential image of $\phi_x$, i.e. the support of the push-forward measure $(\phi_x)_\ast \nu$, where $\nu$ is the standard round measure on $\partial_\infty \bbH^n_{\bbC}$.
We have just proved in Equation~(\ref{equation:vanishing:cartan:ess:image}) that for almost every $x \in \, X$ the Cartan cocycle $c_m$ vanishes over $E_x$. Hence, as proved by Burger and Iozzi~\cite[Corollary 3.1]{BIreal}, for almost every $x \in X$, there exists an integer $1 \leq k(x) \leq m$ and a real $(k(x)-1)$-sphere $\calS_x$ embedded in $\partial_\infty \bbH^m_{\bbC}$, such that $$ E_x \subseteq \calS_x \ . $$ Moreover, we can choose $\calS_x$ to be minimal with respect to the inclusion. We claim now that \begin{equation} \label{eq:dim:inv} \calS_{\gamma.x}=\sigma(\gamma,x)\calS_x \ , \end{equation} for every $\gamma \in \Gamma$ and almost every $x \in X$. First, the definition of $E_x$ and the $\sigma$-equivariance of $\phi$ imply that $$ E_{\gamma.x}=\sigma(\gamma,x) E_x\ , $$ for every $\gamma \in \Gamma$ and almost every $x \in X$. Hence, we have $$ E_{\gamma.x} =\sigma(\gamma,x)E_x \subset \sigma(\gamma,x)\calS_x \ . $$ Thus, the minimality assumption shows that $$ \calS_{\gamma.x} \subset \sigma(\gamma,x)\calS_x \ . $$
By interchanging the roles of $E_{\gamma.x}$ and $E_x$, we get the claim. As a consequence $k(\gamma.x)=k(x)$ for every $\gamma \in \Gamma$ and almost every $x \in X$. The ergodicity assumption on the space $(X,\mu_X)$ then implies that almost all the spheres have the same dimension, i.e. $k(x) = k \in \, \mathbb{N}$ for almost every $x \in \, X$.
Let us now denote by $\textup{Sph}^{k-1}(\partial_\infty \bbH^m_{\bbC})$ the space of $(k-1)$-spheres embedded in the boundary at infinity $\partial_\infty \bbH^m_{\bbC}$. Since the action of $\pu(m,1)$ on $(k-1)$-spheres is transitive, $\textup{Sph}^{k-1}(\partial_\infty \bbH^m_{\bbC})$ is a $\pu(m,1)$-homogeneous space. Let $G_0=\calN_{\pu(m,1)}(\calS_0)$ be the subgroup of $\pu(m,1)$ preserving a fixed $(k-1)$-sphere $\calS_0$. Then we can define a map $$ \calS \colon X \rightarrow \textup{Sph}^{k-1}(\partial_\infty \bbH^m_{\bbC}) \ , \ \ \calS(x)=\calS_x \ , $$ which is measurable because $\phi_x$ varies measurably with respect to $x \in X$ \cite[Lemma 2.6]{fisher:morris:whyte}. Since $\textup{Sph}^{k-1}(\partial_\infty \bbH^m_{\bbC}) \cong \textup{PU}(m,1)/G_0$, we can compose the previous map with a measurable section~ \cite[Corollary A.8]{zimmer:libro} $$ s \colon\textup{PU}(m,1)/G_0 \rightarrow \textup{PU}(m,1) \ . $$ Let $f \colon X \rightarrow \pu(m,1)$ be the composition $s \circ \calS$. Since $f$ is a composition of measurable maps, it is measurable. Moreover, by construction we have $$ \calS_x=f(x)\calS_{0} \ , $$ for almost every $x \in X$.
Let us consider now the $f$-twisted cocycle $\sigma_0=f.\sigma$ associated to $\sigma$ (Definition \ref{def:cohomology:cocycle}). On the one hand we have that $$ \calS_{\gamma.x}=f(\gamma.x)\calS_0 \ , $$ on the other $$ \calS_{\gamma.x}=\sigma(\gamma,x)f(x)\calS_0 \ . $$ Hence $\sigma_0$ preserves $\calS_0$. This implies that $\sigma_0(\Gamma \times X) \subset G_0$. If $\mathbf{L}$ denotes the algebraic hull of $\sigma$ (which is the same for $\sigma_0$) and $L=\mathbf{L}(\bbR)^\circ$, we get $$ L \subseteq G_0 \ , $$ whence the thesis. \end{proof}
\begin{oss} Unfortunately, we are not able to show that \emph{Ad.~2} actually provides a complete converse to \emph{Ad.~1}. Indeed, it may well be that the vanishing of the pullback $\upH^2(\phi)([c_m])$ is in fact a stronger condition than the vanishing of the Cartan invariant associated to the cocycle $\sigma$. A priori the condition $i(\sigma)=0$ does not necessarily imply that the pullback induced by $\phi$ vanishes on $c_m$. However, at the moment we are not able to construct an explicit example of such a situation.
On the other hand, our formulation of Theorem~\ref{teor:totally:real}.2 suitably extends Burger-Iozzi's result~\cite[Theorem 1.1]{BIreal} to the setting of measurable cocycles. Indeed, when $\sigma$ is actually cohomologous to a non-elementary representation $\rho$, the pullback along $\phi$ boils down to the pullback along $\rho$ (Proposition~\ref{prop:pullback:coc:vs:repr}). Thus, we completely recover~\cite[Theorem 1.1]{BIreal} in this particular situation. \end{oss}
\section{Rigidity of the Cartan invariant}\label{sec:cartan:rigidity}
In this section we discuss some rigidity results which can be deduced using the Cartan invariant of measurable cocycles. We first study the algebraic hull (Definition~\ref{def:alg:hull}) of cocycles whose pullback does not vanish. Then, we characterize maximal measurable cocycles (Definition~\ref{def:maximal:cartan}).
We begin with the following result, which is a suitable extension of Burger-Iozzi's result for representations~\cite[Theorem 1.2]{BIreal}: \begin{repteor}{teor:alg:hull} In the situation of Setup~\ref{setup:cartan:invariant}, let $\sigma$ be a non-elementary measurable cocycle with boundary map $\phi \colon \partial_\infty \bbH^n_{\bbC} \times X \rightarrow \partial_\infty \bbH^m_{\bbC}$. Let $\mathbf{L}$ be the algebraic hull of $\sigma$ and let $L=\mathbf{L}(\bbR)^\circ$ be the connected component of the identity of the real points.
If $\upH^2(\Phi^X)([c_m])\neq 0$, then $L$ is an almost direct product $K \cdot M$, where $K$ is compact and $M$ is isomorphic to $\textup{PU}(p,1)$ for some $1 \leq p \leq m$.
In particular, the symmetric space associated to $L$ is a totally geodesically embedded copy of $\bbH^p_{\bbC}$ inside $\bbH^m_{\bbC}$. \end{repteor}
\begin{proof}
Since $\textup{H}^2(\Phi^X)([c_m])$ does not vanish, the restriction of the bounded K\"ahler class $\kappa^b_m$ to $\upH^2_b(L; \mathbb{R})$ does not vanish (Remark~\ref{oss:cartan:repr:kaehler}). Thus, $L$ cannot be amenable. Since $\textup{PU}(m,1)$ has real rank one, $L$ is reductive with semisimple part $M$ of rank one. We denote by $K$ the compact term in the decomposition of $L$. Since $K$ is compact, hence amenable, we have that $\kappa^b_m|K=0$ (see Remark \ref{oss:notation:restriction} for the notation). Hence, $\kappa^b_m|L \neq 0$ implies $\kappa^b_m|M \neq 0$. As a consequence we have $$ \upH^2_c(M;\bbR) \cong \upH^2_{cb}(M ; \bbR) \neq 0 \ . $$ Thus, $M$ is a group of Hermitian type. Then, since $M$ has real rank one, we have $$ M \cong \textup{PU}(p,1) \ , $$ for some $1 \leq p \leq m$. Take an isomorphism $\pi:\textup{PU}(p,1) \rightarrow M$ such that $\upH^2_{cb}(\pi)(\kappa^b_m)=\lambda \kappa^b_p$ for some $\lambda >0$. If $p \geq 2$, the map $\pi:\pu(p,1) \rightarrow \pu(m,1)$ corresponds to a totally geodesic embedding of $\bbH^p_{\bbC}$ inside $\bbH^m_{\bbC}$ (which is holomorphic by the positivity of $\lambda$). When $p=1$, the group $\pi(\pu(1,1))$ cannot correspond to a totally real embedding, otherwise $\lambda=0$ by Theorem \ref{teor:totally:real}. Hence it must correspond to a complex geodesic and the statement is proved. \end{proof}
Among the cocycles with non-trivial pullback, maximal ones can be completely characterized. Maximal cocycles always admit an (essentially unique) boundary map. Indeed, they are non-elementary, since elementary cocycles have trivial Cartan invariant.
\begin{repteor}{teor:cartan:rigidity} In the situation of Setup~\ref{setup:cartan:invariant}, let $(X, \mu_X)$ be ergodic and let $\sigma$ be a maximal cocycle. Let $\mathbf{L}$ be the algebraic hull of $\sigma$ and let $L=\mathbf{L}(\bbR)^\circ$ be the connected component of the identity of the real points.
Then, the following hold \begin{enumerate} \item $m \geq n$;
\item $L$ is an almost direct product $\textup{PU}(n,1) \cdot K$, where $K$ is compact;
\item $\sigma$ is cohomologous to the cocycle $\sigma_i$ associated to the standard lattice embedding $i \colon \Gamma \to \pu(m, 1)$ (possibly modulo the compact subgroup $K$ when $m >n$). \end{enumerate} \end{repteor} \begin{proof} One can restrict the image of $\sigma$ to its algebraic hull, which is completely characterized by Theorem \ref{teor:alg:hull}. In this way we obtain a Zariski dense cocycle. The thesis now follows \emph{verbatim} as in the proof of~\cite[Theorem 2]{sarti:savini}. \end{proof}
\section{Concluding remarks}\label{sec:concluding:remarks}
We conclude the paper with a short list of comments that relate the notion of Cartan invariant with more recent results in this field. These results have been obtained by combining the theory developed in this paper with new insights.
Recently one of the authors has proved a statement analogous to~Theorem \ref{teor:cartan:rigidity} but with completely different techniques~\cite[Theorem 1.2]{savini:natural:cocycle}.
The main new ingredient was the existence of natural maps associated to measurable cocycles. Natural maps exist for ergodic Zariski dense cocycles,
e.g. cocycles arising from ergodic couplings \cite[Lemma 3.6]{savini:tautness}. The existence of natural maps also played an important role in the recent proof of the $1$-tautness conjecture for $\pu(n,1)$, with $n \geq 2$~\cite[Theorem 1]{savini:tautness}. This result provides a nice classification of discrete groups that are $\upL^1$-measure equivalent to a lattice $\Gamma \leq \pu(n,1)$.
In that situation the key point is to show that measurable cocycles arising from ergodic self-couplings associated to a uniform lattice $\Gamma \leq \pu(n,1)$ are maximal. By the results of this paper, this implies that they are cohomologous to the cocycle associated to the standard lattice embedding. The notion of maximality introduced in \cite{savini:tautness} agrees with the one in Definition \ref{def:maximal:cartan}. This provides a wide family of cocycles which do not come from representations but are nevertheless cohomologous to cocycles induced by representations.
Unfortunately, the authors were not able to prove the $1$-tautness conjecture directly with the use of the Cartan invariant. The main obstruction to this approach concerns the study of cup products of bounded cohomology classes, which is a highly non-trivial subject~\cite{Heuer:cup, BM:cup, AmonBuch}.
The study of lattices in $\pu(1,1)$ was separated from this project because it contained some additional difficulties. Recently one of the authors used some ideas of this paper to provide a complete characterization of the algebraic hull for maximal cocycles of surface groups~\cite{savini:surface} by extending Burger, Iozzi and Wienhard's \emph{tightness} to the wider setting of measurable cocycles.
\end{document}
\begin{document}
\title{A Resource Efficient Source of Multi-photon Polarization Entanglement}
\author{E. Megidish} \affiliation{Racah Institute of Physics, Hebrew University of Jerusalem, Jerusalem 91904, Israel} \author{T. Shacham} \affiliation{Racah Institute of Physics, Hebrew University of Jerusalem, Jerusalem 91904, Israel} \author{A. Halevy} \affiliation{Racah Institute of Physics, Hebrew University of Jerusalem, Jerusalem 91904, Israel} \author{L. Dovrat} \affiliation{Racah Institute of Physics, Hebrew University of Jerusalem, Jerusalem 91904, Israel} \author{H. S. Eisenberg} \affiliation{Racah Institute of Physics, Hebrew University of Jerusalem, Jerusalem 91904, Israel}
\pacs{03.67.Bg, 42.50.Dv}
\begin{abstract} Current photon entangling schemes require resources that grow with the photon number. We present a new approach that generates quantum entanglement between many photons, using only a single source of entangled photon pairs. The different spatial modes, one for each photon as required by other schemes, are replaced by different time slots of only two spatial modes. States of any number of photons are generated with the same setup, solving the scalability problem caused by the previous need for extra resources. Consequently, entangled photon states of larger numbers than before are practically realizable. \end{abstract}
\maketitle
The generation of quantum entangled states of many particles is a central goal of quantum information science. These states are required for the one-way quantum computer scheme \cite{Raussendorf01,Walther05}. In quantum communication, they enable error correction \cite{Schlingemann02} and multi-party protocols \cite{Hillery99,Zhao04}. Additionally, they can refute local realistic theories with an increasing violation as the particle number increases \cite{Mermin90,Zukowski97,Guhne05}.
Polarized photons are an attractive realization of qubits due to their simple single-qubit operations and their weak interaction with the environment. Pairs of polarization entangled photons are easily generated by using the nonlinear optical effect of parametric down-conversion (PDC) \cite{Kwiat95}. Difficulties are encountered when trying to entangle more than two photons. Eight photons have already been entangled in a GHZ state \cite{Greenberger90,Huang11,Yao12}, and six photons in an H-shaped graph \cite{Lu07} and Dicke \cite{Wieczorek09} states. Currently, the state with the largest number of entangled particles of any realization is a GHZ state composed of 14 ions trapped in a linear trap \cite{Monz11}.
In previous experiments, two polarization-entangled photon pairs were fused into a four-photon GHZ state by a polarizing beam-splitter (PBS) \cite{Pan01} (see Fig. \ref{Fig1}a). A PBS is an optical element that transmits horizontally ($h$) polarized photons and reflects them when vertically polarized ($v$). Assuming an entangled pair in the $|\phi^+\rangle$ Bell state \cite{Kwiat95} in paths 1 and 2, and another one in paths 3 and 4, the four-photon state is: \begin{equation}\label{product}
|\phi_{12}^+\rangle\otimes|\phi_{34}^+\rangle=\frac{1}{2}(|h_1h_2\rangle+|v_1v_2\rangle)\otimes(|h_3h_4\rangle+|v_3v_4\rangle)\,. \end{equation} If paths 2 and 3 are combined at a PBS, and we demand that one photon comes out of each of the two output ports, only the two amplitudes with identical polarizations in these modes are left and the result is a four-photon GHZ state: \begin{equation}\label{GHZ4}
|\Psi^{(4)}_{GHZ}\rangle=\frac{1}{\sqrt{2}}(|h_1h_2h_3h_4\rangle+|v_1v_2v_3v_4\rangle)\,. \end{equation} There is a strict requirement for the combined paths to be indistinguishable in all degrees of freedom, in order for their amplitudes to interfere. Temporal indistinguishability is satisfied by generating the photons with a pulsed laser that defines their generation time, and carefully matching their relative path lengths. Fortunately, there is no need for sensitive phase accuracy, but just an overlap of the pulse envelopes.
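For concreteness, the post-selection leading to Eq. \ref{GHZ4} can be traced term by term. Expanding Eq. \ref{product} gives the four amplitudes $|h_1h_2h_3h_4\rangle$, $|h_1h_2v_3v_4\rangle$, $|v_1v_2h_3h_4\rangle$ and $|v_1v_2v_3v_4\rangle$. As the PBS transmits $h$ and reflects $v$, the two cross terms send the photons of paths 2 and 3 into the same output port; they are removed by the requirement of one photon per port, leaving, after normalization, the two amplitudes of Eq. \ref{GHZ4}.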
This scheme is extendable in a straightforward manner. For example, if a third entangled photon-pair is added in paths 5 and 6, another PBS can fuse it by combining paths 4 and 5 to create a six-photon GHZ state \cite{Lu07} (see Fig. \ref{Fig1}a). The addition of each extra pair requires another passage of the pump beam through a generating crystal, the adjustment of an additional delay line, an additional projecting PBS and a detection setup that includes two additional PBS elements and four more single-photon detectors. Clearly, this approach is non-scalable in material resources. Another issue is the non-scalability in temporal resources. Such experiments typically have an entangled-photon-pair generation probability of one pair every 100 pump pulses and an overall single-photon detection efficiency of 10-25\%. Thus, the more photons an experiment is trying to produce, the longer the time it takes to accumulate sufficient data statistics.
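As a rough illustration, treating the pairs as independent and neglecting higher-order emissions, a pair-generation probability of $p\approx10^{-2}$ per pulse and an overall detection efficiency of $\eta\approx0.2$ per photon give a probability of roughly $(p\eta^2)^k\approx(4\times10^{-4})^k$ per pump pulse for registering a $2k$-fold coincidence from $k$ pairs. Every additional pair therefore costs more than three orders of magnitude in count rate, regardless of the entangling scheme used.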
\begin{figure*}
\caption{(Color online) A comparison between previous setups and our setup. (a) The previously used setup for entangling four and six photons in a GHZ state. Three $\beta$-BaB$_2$O$_4$ (BBO) crystals generate from a single pump pulse three entangled photon pairs in six spatial modes. Delay lines synchronize the projection of a photon from each pair onto a PBS. Six analyzing wave-plates (WP), six PBS's and twelve detectors are required to measure the state. To produce eight photon entanglement, this setup needs additional components. (b) Our resource efficient setup. Only a single crystal generates pairs from many pump pulses. The pairs are projected onto a large entangled state on a single PBS and occupy two spatial modes and additional temporal modes. The same setup is applicable to entanglement generation of any photon number. (c) The two multi-photon entangled graph states that are possible to obtain without fast polarization rotations: a growing star shaped GHZ state and a connected branched chain (which for 3 pairs is an H-graph state). For two photon pairs, both states are identical. Numbered circles mark photons and their creation order, where dimmed circles represent possible future states of 4 and 5 fused pairs. Connecting lines mark the entangling operations that define the quantum graph state.}
\label{Fig1}
\end{figure*}
In this Letter, we suggest a new approach that solves the problem of scalability with material resources. Our scheme uses only a single entangled photon-pair source, a single delay line and a single projecting PBS element to create entangled states of any number of photons. A pump pulse is down-converted in a nonlinear crystal. When a polarization entangled photon pair is generated, the right photon is directed to a PBS (see Fig. \ref{Fig1}b), while the left photon enters a delay line. The delay time $\tau$ is chosen such that if a second entangled photon pair is created by the next pump pulse, the right photon of the second pair meets the left photon of the first pair at the projecting PBS. Post-selecting the events in which one photon exits at each PBS output port projects the two entangled pairs onto a four-photon GHZ state. The left photon of the second pair arrives at the PBS after travelling through the delay line. The four spatial modes of previous schemes \cite{Pan01} (1, 2, 3 and 4, as in Eq. \ref{GHZ4}) are replaced by two spatial modes (1 and 2 after the projecting PBS, 1' and 2' before it) and three temporal modes (0, $\tau$, and $2\tau$): \begin{equation}\label{GHZ4inTime}
|\Psi^{(4)}_{GHZ}\rangle=\frac{1}{\sqrt{2}}(|h_{1'}^0h_2^\tau h_1^\tau h_{2'}^{2\tau}\rangle+|v_{1'}^0v_2^\tau v_1^\tau v_{2'}^{2\tau}\rangle)\,, \end{equation} where the lower and upper indices designate the spatial and the temporal modes, respectively. The first and last photons are considered before the projecting PBS. It is possible to convert the mixed spatio-temporal mode partition to only spatial modes by fast polarization-independent switches.
The key point is what happens when a third entangled photon pair is generated by the next pump pulse. The right photon of this third pair is entangled at the PBS with the delayed left photon of the second pair. All six photons from the three pairs are now in a six-photon GHZ state. No modification of the setup is needed between the generation of the four- and the six-photon entangled states. It is also clear by induction that as long as additional consecutive pairs are created, larger entangled states can be produced.
The use of a PDC source is merely for demonstration purposes, as our approach can use any photon entangling source, current or future \cite{Akopian06,Stevenson06,Salter10}. Therefore, our scheme addresses neither the probabilistic nature of PDC sources nor other issues that these sources raise, such as the spectral distinguishability between different photon pairs \cite{Mosley08}. Nevertheless, our scheme greatly simplifies the standard approach, enabling the demonstration of entangled photon states of high photon numbers.
In order to demonstrate our scheme, we created polarization entangled photon pairs by the nonlinear type-II parametric down-conversion process \cite{Kwiat95}. A pulsed Ti:Sapphire laser source with a 76\,MHz repetition rate is frequency doubled to a wavelength of 390\,nm and an average power of 400\,mW. The laser beam is corrected for astigmatism and focused on a 2\,mm thick $\beta$-BaB$_2$O$_4$ (BBO) crystal. Down-converted photons, with a wavelength of 780\,nm, are spatially filtered by coupling them into and out of single-mode fibers, and spectrally filtered by 3\,nm wide bandpass filters.
The delay length is chosen such that it fuses pairs that are created eight pulses apart. The delay time is 105\,ns, which is longer than the dead time of 50\,ns of the single-photon detectors (Perkin Elmer SPCM-AQ4C). The delay line is a 31.6\,m long free-space line, built from highly reflective dielectric mirrors. The total transmittance is higher than 90\% after 10 reflections. The delay line was designed to cancel distinguishability that results from the different beam propagation properties of the short and long paths.
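These numbers are mutually consistent: eight periods of the 76\,MHz pump laser correspond to $8/(76\,\mathrm{MHz})\approx105$\,ns, during which light travels $c\cdot105\,\mathrm{ns}\approx31.6$\,m in free space, fixing the length of the delay line.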
In order to characterize the four-photon state, the four-photon correlation statistics in the \textbf{HV} (horizontal-vertical polarization) basis was measured (see Fig. \ref{Fig2}). Two opposite possibilities (\textbf{HVVH} and \textbf{VHHV}) are much more probable than the others, as expected from a four-photon GHZ state. Additionally, measurements in a rotated polarization basis (plus and minus (\textbf{PM}) $45^\circ$ linear or right and left (\textbf{RL}) circular) are required in order to demonstrate the coherence between these two amplitudes. After rotation, there are 16 possible amplitudes, divided into two groups: the even and the odd amplitude groups, where each polarization appears an even or an odd number of times, respectively. When the two projected photons are indistinguishable, one of the amplitude groups interferes constructively and the other destructively (which one depends on the specific polarization rotations).
\begin{figure}
\caption{(Color online) Measured amplitude histogram of the generated four-photon GHZ state. Two opposite amplitudes occur on average 65 times more often than any other amplitude. Data was accumulated over 2000\,sec. We used photon pairs in the
$|\psi^+\rangle$ Bell state. The results are equivalent to using a
$|\phi^+\rangle$ state, up to local polarization rotations.}
\label{Fig2}
\end{figure}
The two projected photons can be rotated individually with wave plates positioned after the PBS projection. As the first and last photons are actually measured at the projecting PBS, it would seem that fast Pockels cells would need to be placed before this PBS in order to rotate them. Fortunately, there is a way to circumvent this complication. If the phase of the entangled pairs is tuned to $90^\circ$ such that their state becomes \begin{equation}\label{PhiI}
|\phi^{i}\rangle=\frac{1}{\sqrt{2}}(|h_1h_2\rangle+i|v_1v_2\rangle)\,, \end{equation} and half-wave plates at $22.5^\circ$ are positioned before the projecting PBS, the polarization of the first and last photons is non-locally rotated before the PBS to the circular polarization basis while that of the two projected photons remains unchanged (see the Supplemental Material for a detailed calculation for this rotation \cite{SM}).
\begin{figure}
\caption{(Color online) Coherence of the GHZ state. The sum of all even amplitudes (black squares) and the sum of all odd amplitudes (red circles) at a rotated polarization basis vs. the relative delay between the short and the long paths. At zero delay, the even amplitudes interfere destructively and the odd amplitudes interfere constructively. When delay is introduced, the two GHZ amplitudes are temporally distinguishable and interference is lost. The interference visibility is $69.5\pm0.8$\%. Data was accumulated over 480\,sec per point.}
\label{Fig3}
\end{figure}
\begin{figure}
\caption{(Color online) Fourfold coincidence counts of the 16 individual amplitudes of four photons at the plus-minus 45$^\circ$ (\textbf{PM}) linear polarization basis. At zero delay, counts from all odd amplitudes (light green background) increase while counts from all even amplitudes (light orange background) decrease. Figure \ref{Fig3} is composed of these scans.}
\label{Fig4}
\end{figure}
We applied these rotations and detected all of the 16 even and odd amplitudes. Each possibility corresponds to a different detection sequence of the four detectors. Programmable electronics is used to register the various sequences. The overall post-selected four-photon rate is 13 events per second. Figure \ref{Fig3} presents the sums of all counts of the 8 even and of the 8 odd amplitudes (showing destructive and constructive interference, respectively), as the delay length is scanned. Figure \ref{Fig4} presents the 16 amplitudes that Fig. \ref{Fig3} is composed of. There are enough events to observe the interference of all individual amplitudes. The threshold visibility required for the demonstration of non-locality with four particles is below 35\% \cite{Mermin90,Zukowski97}. The observed interference visibility here is $69.5\pm0.8$\%. As the two-photon visibilities are relatively high (larger than 90\% for the \textbf{HV}, \textbf{PM}, and \textbf{RL} measurement bases at low pump power), the four-photon visibility is an indication of the projection quality. Nevertheless, the entangled-pair quality is still a major cause of the fourfold visibility degradation. We estimate the effect of higher-order terms to cause a degradation of only $\sim3\%$.
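For comparison, the Mermin-Klyshko inequalities give a critical visibility scaling as $2^{-(N-1)/2}$ for an $N$-photon GHZ state, i.e. about 35\% for $N=4$, so the measured visibility exceeds the non-locality threshold by roughly a factor of two.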
The fidelity with a GHZ state can be estimated from the histogram data of Fig. \ref{Fig2} and the observed fourfold visibility. Assuming the worst case of full coherence between the unwanted diagonal elements of the state density matrix yields a lower bound of 75.2\% on the fidelity, while assuming no coherence between these terms results in 79.9\%. As the fidelity is higher than 50\%, this is clear evidence of genuine quantum entanglement between the four generated photons. We see continuous improvement of the fourfold visibility and the fidelity as the setup is technically improved. See the Supplemental Material for a detailed description of the causes of visibility degradation \cite{SM}.
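Explicitly, writing $|\mathrm{GHZ}\rangle=(|HVVH\rangle+|VHHV\rangle)/\sqrt{2}$ for the target state (up to local phases), the fidelity decomposes as $F=\langle \mathrm{GHZ}|\rho|\mathrm{GHZ}\rangle=\frac{1}{2}(\rho_{HVVH,HVVH}+\rho_{VHHV,VHHV})+\mathrm{Re}\,\rho_{HVVH,VHHV}$. The two populations are read off from the histogram of Fig. \ref{Fig2}, while the coherence term is bounded using the fourfold visibility, with the unwanted populations treated either as fully coherent (worst case) or as incoherent; this gives the two bounds quoted above.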
As the same setup can generate six-photon entanglement, we used it to detect six-photon states and recorded 30 sixfold events per hour for a pump power of 500\,mW. As all of the entangled pairs are produced by the same source, all three pairs have identical quantum properties. The PBS entangling operations between the first and second pairs and between the second and third pairs are identical. Therefore, measuring genuine entanglement between four photons indicates that entanglement exists between all of the six detected photons. Quantifying the amount of sixfold entanglement is left for a future work.
As long as additional pairs are being entangled, a larger GHZ state is created. Interestingly, when fusing $|\phi^i\rangle$ pair states (Eq. \ref{PhiI}) instead of $|\phi^+\rangle$, the growing state is described by a different type of graph from the GHZ graph \cite{Hein04} (see Fig. \ref{Fig1}c). The four-photon state is still a GHZ state, but the six-photon state is an H-graph state \cite{Lu07} (Fig. \ref{Fig1}c), as the left photon from the second pulse is projected on the PBS after already being rotated to the \textbf{PM} basis. When yet another (fourth) pair is entangled, the sixth photon is also rotated, and this pair branches from a corner of the H-graph.
To conclude, we have demonstrated a new scheme for the generation of entanglement between many photons. It is efficient in the required material resources, as the same setup can entangle any number of photons, without any change. Hence, our scheme can enable the demonstration of states with larger photon numbers than what is practically realizable today. Similar to other available schemes, our scheme suffers from decreasing state production rate with the number of photons. Nevertheless, there is much room for improvement as the photon-pair generation probability is currently between $1-2\%$, but up to $10\%$ is acceptable (see Supplemental Material \cite{SM}). Photon numbers can be further increased by using a pump laser with higher power \cite{Yao12}, a generating crystal with higher nonlinear coefficients \cite{Halevi11}, improving photon collection \cite{Kurtsiefer01} and detection \cite{Takeuchi99} efficiencies, using detectors with shorter dead times \cite{Goltsman01,Eraerds07}, lasers with higher repetition rates \cite{Bartels08} and the coherent addition of pump pulses \cite{Krischek10}. More virtual qubits can be added to the generated state by using hyper-entanglement \cite{Kwiat98}, resulting in states of more connected graphs and higher quantum Hilbert spaces \cite{Gao10}. The incorporation of fast polarization rotations will enable the generation of different graph states, as well as on-the-fly alteration of the measurement basis according to previous measurement outcomes, a procedure known as \textit{feed-forward} \cite{Prevedel07} which is required for one-way quantum computation \cite{Raussendorf01}.
The authors thank the Israeli Science Foundation for supporting this work under grants 366/06 and 546/10.
\end{document}
\begin{document}
\title{On the Decidability of Connectedness Constraints in 2D and 3D Euclidean Spaces}
\begin{abstract} We investigate (quantifier-free) spatial constraint languages with equality, contact and connectedness predicates, as well as Boolean operations on regions, interpreted over low-dimensional Euclidean spaces. We show that the complexity of reasoning varies dramatically depending on the dimension of the space and on the type of regions considered. For example, the logic with the interior-connectedness predicate (and without contact) is undecidable over polygons or regular closed sets in $\mathbb{R}^2$, \textsc{ExpTime}-complete over polyhedra in $\mathbb{R}^3$, and \textsc{NP}-complete over regular closed sets in $\mathbb{R}^3$. \end{abstract}
\section{Introduction}\label{sec:intro}
A central task in Qualitative Spatial Reasoning is that of determining whether some described spatial configuration is geometrically realizable in 2D or 3D Euclidean space. Typically, such a description is given using a spatial logic---a formal language whose variables range over (typed) geometrical entities, and whose non-logical primitives represent geometrical relations and operations involving those entities. Where the geometrical primitives of the language are purely topological in character, we speak of a \emph{topological
logic}; and where the logical syntax is confined to that of propositional calculus, we speak of a \emph{topological constraint
language}.
Topological constraint languages have been intensively studied in Artificial Intelligence over the last two decades. The best-known of these, \ensuremath{\mathcal{RCC}8}{} and \ensuremath{\mathcal{RCC}5}, employ variables ranging over regular closed sets in topological spaces, and a collection of eight (respectively, five) binary predicates standing for some basic topological relations between these sets~\cite{ijcai:Egenhofer&Franzosa91,ijcai:Randelletal92,ijcai:Bennett94,ijcai:Renz&Nebel98}. An important extension of \ensuremath{\mathcal{RCC}8}, known as \ensuremath{\mathcal{BRCC}8}{}, additionally features standard Boolean operations on regular closed sets~\cite{ijcai:Wolter&Z00ecai}.
A remarkable characteristic of these languages is their \emph{in}sensitivity to the underlying interpretation. To show that an \ensuremath{\mathcal{RCC}8}-formula is satisfiable in $n$-dimensional Euclidean space, it suffices to demonstrate its satisfiability in {\em any} topological space \cite{ijcai:Renz98}; for \ensuremath{\mathcal{BRCC}8}-formulas, satisfiability in \emph{any connected} space is enough. This inexpressiveness yields (relatively) low computational complexity: satisfiability of~\ensuremath{\mathcal{BRCC}8}-, \ensuremath{\mathcal{RCC}8}- and \ensuremath{\mathcal{RCC}5}-formulas over arbitrary topological spaces is \textsc{NP}-complete; satisfiability of~\ensuremath{\mathcal{BRCC}8}{}-formulas over connected spaces is \textsc{PSpace}-complete.
However, satisfiability of spatial constraints by {\em arbitrary} regular closed sets by no means guarantees realizability by practically meaningful geometrical objects, where {\em connectedness} of regions is typically a minimal requirement~\cite{Borgo96,ijcai:Cohn&Renz08}. (A connected region is one which consists of a `single piece.') It is easy to write constraints in $\ensuremath{\mathcal{RCC}8}$ that are satisfiable by connected regular closed sets over arbitrary topological spaces but not over $\mathbb{R}^2$; in $\ensuremath{\mathcal{BRCC}8}$ we can even write formulas satisfiable by connected regular closed sets over arbitrary spaces but not over $\mathbb{R}^n$ for any $n$. Worse still: there exist very simple collections of spatial constraints (involving connectedness) that are satisfiable in the Euclidean plane, but only by `pathological' sets that cannot plausibly represent the regions occupied by physical objects~\cite{ijcai:HSL2}. Unfortunately, little is known about the complexity of topological constraint satisfaction by non-pathological objects in low-dimensional Euclidean spaces. One landmark result~\cite{ijcai:iscloes:sss03} in this area shows that satisfiability of \ensuremath{\mathcal{RCC}8}-formulas by \emph{disc-homeomorphs} in $\mathbb{R}^2$ is still \textsc{NP}-complete, though the decision procedure is vastly more intricate than in the general case. In this paper, we investigate the computational properties of more general and flexible spatial logics with connectedness constraints interpreted over $\mathbb{R}^2$ and $\mathbb{R}^3$.
We consider two `base' topological constraint languages. The language $\mathcal{B}$ features $=$ as its only predicate, but has function symbols $+$, $-$, $\cdot$ denoting the standard operations of fusion, complement and taking common parts defined for regular closed sets, as well as the constants $1$ and $0$ for the entire space and the empty set. Our second base language, $\mathcal{C}$, additionally features a binary predicate, $C$, denoting the `contact' relation (two sets are in {\em contact} if they share at least one point). The language $\mathcal{C}$ is a notational variant of~\ensuremath{\mathcal{BRCC}8}{} (and thus an extension of \ensuremath{\mathcal{RCC}8}), while $\mathcal{B}$ is the analogous extension of \ensuremath{\mathcal{RCC}5}{}. We add to $\mathcal{B}$ and $\mathcal{C}$ one of two new unary predicates: $c$, representing the property of connectedness, and $c^\circ$, representing the (stronger) property of having a connected \emph{interior}. We denote the resulting languages by $\ensuremath{\mathcal{B}c}$, $\ensuremath{\mathcal{B}c^\circ}$\!, $\ensuremath{\mathcal{C}c}$ and $\ensuremath{\mathcal{C}c^\circ}$\!. We are interested
in interpretations over ({\em i}) the regular closed sets of $\mathbb{R}^2$ and $\mathbb{R}^3$, and ({\em ii}) the regular closed \emph{polyhedral} sets of $\mathbb{R}^2$ and $\mathbb{R}^3$. (A set is polyhedral if it can be defined by finitely many bounding hyperplanes.) By restricting interpretations to polyhedra we rule out satisfaction by pathological sets and use the same `data structure' as in geographic information systems (GISs).
When interpreted over {\em arbitrary} topological spaces, the complexity of reasoning with these languages is known: satisfiability of $\ensuremath{\mathcal{B}c^\circ}$-formulas is \textsc{NP}-complete, while for the other three languages, it is \textsc{ExpTime}-complete. Likewise, the 1D Euclidean case is completely solved. For the spaces $\mathbb{R}^n$ ($n \geq 2$), however, most problems are still open. All four languages contain formulas satisfiable by regular closed sets in $\mathbb{R}^2$, but not by regular closed polygons; in $\mathbb{R}^3$, the analogous result is known only for $\ensuremath{\mathcal{B}c^\circ}$ and $\ensuremath{\mathcal{C}c^\circ}$. The satisfiability problem for \ensuremath{\mathcal{B}c}{}, \ensuremath{\mathcal{C}c}{} and \ensuremath{\mathcal{C}c^\circ}{} is \textsc{ExpTime}-hard (in both polyhedral and unrestricted cases) for $\mathbb{R}^n$ ($n \geq 2$); however, the only known upper bound is that satisfiability of $\ensuremath{\mathcal{B}c^\circ}$-formulas by polyhedra in $\mathbb{R}^n$ ($n \geq 3$) is \textsc{ExpTime}-complete. (See~\cite{ijcai:kphz10} for a summary.)
This paper settles most of these open problems, revealing considerable differences between the computational properties of constraint languages with connectedness predicates when interpreted over $\mathbb{R}^2$ and over abstract topological spaces. Sec.~\ref{sec:sensitivity} shows that $\ensuremath{\mathcal{B}c}$, $\ensuremath{\mathcal{B}c^\circ}$, $\ensuremath{\mathcal{C}c}$ and $\ensuremath{\mathcal{C}c^\circ}$ are all sensitive to restriction to polyhedra in $\mathbb{R}^n$ ($n \geq 2$). Sec.~\ref{sec:undecidability} establishes an unexpected result: all these languages are \emph{undecidable} in 2D, both in the polyhedral and unrestricted cases (\cite{Dornheim} proves undecidability of the \emph{first-order} versions of these languages). Sec.~\ref{sec:3d} resolves the open issue of the complexity of $\ensuremath{\mathcal{B}c^\circ}$ over regular closed sets (not just polyhedra) in $\mathbb{R}^3$ by establishing an NP upper bound. Thus, Qualitative Spatial Reasoning in Euclidean spaces proves much more challenging if connectedness of regions is to be taken into account. We discuss the obtained results in the context of spatial reasoning in Sec.~\ref{conclusion}. Omitted proofs can be found in the appendix.
\section{Constraint Languages with Connectedness}\label{sec:preliminaries}
Let $T$ be a topological space. We denote the closure of any $X \subseteq T$ by $\tc{X}$, its interior by $\ti{X}$ and its boundary by $\delta X = \tc{X} \setminus \ti{X}$. We call $X$ {\em regular closed} if $X = \tc{\ti{X}}$, and denote by ${\sf RC}(T)$ the set of regular closed subsets of $T$. Where $T$ is clear from context, we refer to elements of ${\sf RC}(T)$ as {\em regions}. ${\sf RC}(T)$ forms a Boolean algebra under the operations $X + Y = X \cup Y$, $X \cdot Y = \smash{\tc{\ti{\smash{(X \cap Y)}}}}$ and $-X = \tc{\smash{(T
\setminus X)}}$. We write $X \leq Y$ for $X \cdot (-Y) = \emptyset$; thus $X \leq Y$ iff $X \subseteq Y$. A subset $X \subseteq T$ is \emph{connected} if it cannot be decomposed into two disjoint, non-empty sets closed in the subspace topology; $X$ is \emph{interior-connected} if $\ti{X}$ is connected.
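To illustrate the regularized operations: in ${\sf RC}(\mathbb{R})$, take $X = [0,1]$ and $Y = [1,2]$. Then $X \cap Y = \{1\}$, whereas $X \cdot Y = \tc{\ti{(X \cap Y)}} = \emptyset$; similarly, $-X = \tc{(\mathbb{R} \setminus X)} = (-\infty,0] \cup [1,\infty)$, so that $X + (-X) = 1$ even though $X$ and $-X$ share the boundary points $0$ and $1$.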
Any $(n-1)$-dimensional hyperplane in $\mathbb{R}^n$, $n \geq 1$, bounds two elements of ${\sf RC}(\mathbb{R}^n)$ called \emph{half-spaces}. We denote by ${\sf RCP}(\mathbb{R}^n)$ the Boolean subalgebra of ${\sf RC}(\mathbb{R}^n)$ generated by the half-spaces, and call the elements of ${\sf RCP}(\mathbb{R}^n)$ (regular closed) \emph{polyhedra}. If $n = 2$, we speak of (regular closed) \emph{polygons}. Polyhedra may be regarded as `well-behaved' or, in topologists' parlance, `\emph{tame}.' In particular, every polyhedron has finitely many connected components, a property which is not true of regular closed sets in general.
The topological constraint languages considered here all employ a countably infinite collection of variables $r_1, r_2, \ldots$ The language $\mathcal{C}$ features binary predicates $=$ and $C$, together with the individual constants $0$, $1$ and the function symbols $+$, $\cdot$, $-$. The \emph{terms} $\tau$ and \emph{formulas} $\varphi$ of $\mathcal{C}$ are given by:
\begin{align*} \tau \quad & ::= \quad r \ \ \mid \ \ \tau_1 + \tau_2 \ \ \mid \ \ \tau_1 \cdot \tau_2 \ \ \mid \ \ - \tau_1 \ \ \mid \ \ 1 \ \ \mid \ \ 0,\\
\varphi \quad & ::= \quad \tau_1 = \tau_2 \ \ \mid \ \ C(\tau_1,\tau_2) \ \ \mid \ \ \varphi_1 \land \varphi_2 \ \ \mid \ \ \neg \varphi_1. \end{align*}
The language $\mathcal{B}$ is defined analogously, but without the predicate $C$. If $S \subseteq {\sf RC}(T)$ for some topological space $T$, an \emph{interpretation over} $S$ is a function $\cdot^\mathfrak{I}$ mapping variables $r$ to elements $r^\mathfrak{I} \in S$. We extend $\cdot^\mathfrak{I}$ to terms $\tau$ by setting $0^\mathfrak{I} = \emptyset$, $1^\mathfrak{I} = T$, $(\tau_1 + \tau_2)^\mathfrak{I} = \tau_1^\mathfrak{I} + \tau_2^\mathfrak{I}$, etc. We write $\mathfrak{I} \models \tau_1 = \tau_2$ iff $\tau_1^\mathfrak{I} = \tau_2^\mathfrak{I}$, and $\mathfrak{I} \models C(\tau_1,\tau_2)$ iff $\tau_1^\mathfrak{I} \cap \tau_2^\mathfrak{I} \neq \emptyset$. We read $C(\tau_1, \tau_2)$ as `$\tau_1$ \emph{contacts} $\tau_2$.' The relation $\models$ is extended to non-atomic formulas in the obvious way. A formula $\varphi$ is \emph{satisfiable over} $S$ if $\mathfrak{I} \models \varphi$ for some interpretation $\mathfrak{I}$ over $S$.
Turning to languages with connectedness predicates, we define $\ensuremath{\mathcal{B}c}$ and $\ensuremath{\mathcal{C}c}$ to be extensions of $\mathcal{B}$ and $\mathcal{C}$ with the unary predicate $c$. We set $\mathfrak I \models c(\tau)$ iff $\smash{\tau^{\mathfrak I}}$ is connected in the topological space under consideration. Similarly, we define $\ensuremath{\mathcal{B}c^\circ}$ and $\ensuremath{\mathcal{C}c^\circ}$ to be extensions of $\mathcal{B}$ and $\mathcal{C}$ with the predicate $c^\circ$, setting $\mathfrak I \models c^\circ(\tau)$ iff $\ti{\smash{(\tau^{\mathfrak
I})}}$ is connected. $\textit{Sat}(\mathcal{L},S)$ is the set of $\mathcal{L}$-formulas satisfiable over $S$, where $\mathcal{L}$ is one of $\ensuremath{\mathcal{B}c}$, $\ensuremath{\mathcal{C}c}$, $\ensuremath{\mathcal{B}c^\circ}$ or $\ensuremath{\mathcal{C}c^\circ}$ (the topological space is implicit in this notation, but will always be clear from context). We shall be concerned with $\textit{Sat}(\mathcal{L}, S)$, where $S$ is ${\sf RC}(\mathbb{R}^n)$ or ${\sf RCP}(\mathbb{R}^n)$ for $n=2,3$.
To illustrate, consider the $\ensuremath{\mathcal{B}c^\circ}$-formulas $\varphi_k$ given by
\begin{equation} \bigwedge_{1 \leq i \leq k}\hspace*{-0.5em} \bigl( c^\circ(r_i) \land (r_i \neq 0) \bigr)
\land \bigwedge_{i < j} \bigl( c^\circ(r_i + r_j) \land (r_i \cdot r_j =0) \bigr). \label{eq:ex1} \end{equation}
One can show that $\varphi_3$ is satisfiable over ${\sf RCP}(\mathbb{R}^n)$, $n \geq 2$, but not over ${\sf RCP}(\mathbb{R})$, as no three intervals with non-empty, disjoint interiors can be in pairwise contact. Also, $\varphi_5$ is satisfiable over ${\sf RCP}(\mathbb{R}^n)$, for $n\geq 3$, but not over ${\sf RCP}(\mathbb{R}^2)$, as the graph $K_5$ is non-planar. Thus, $\ensuremath{\mathcal{B}c^\circ}$ is sensitive to the dimension of the space.
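The graph-theoretic fact behind this dimension-sensitivity is easy to check mechanically. The following Python fragment is purely illustrative (it is not part of the formal development and assumes the \texttt{networkx} package): it confirms that $K_4$ is planar while $K_5$ is not.
\begin{verbatim}
# Illustrative only: planarity of the complete graphs K_4 and K_5
# (requires the networkx package).
import networkx as nx

for k in (4, 5):
    is_planar, _ = nx.check_planarity(nx.complete_graph(k))
    print(f"K_{k} planar: {is_planar}")   # K_4 planar: True, K_5 planar: False
\end{verbatim}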
Or again, consider the $\ensuremath{\mathcal{B}c^\circ}$-formula
\begin{equation}\label{eq:wiggly} \bigwedge_{1 \leq i \leq 3}\hspace*{-0.5em} c^\circ(r_i) \ \ \land\ \ c^\circ(r_1 + r_2 + r_3) \ \ \land\ \ \bigwedge_{2 \leq i \leq 3} \neg c^\circ(r_1 + r_i). \end{equation}
One can show that~\eqref{eq:wiggly} is satisfiable over ${\sf RC}(\mathbb{R}^n)$, for any $n\ge 2$ (see, e.g., Fig.~\ref{fig:wiggly}), but not over ${\sf RCP}(\mathbb{R}^n)$. Thus $\ensuremath{\mathcal{B}c^\circ}$ is sensitive to tameness in Euclidean spaces.
\begin{figure}
\caption{Three regions in ${\sf RC}(\mathbb{R}^2)$ satisfying \eqref{eq:wiggly}.}
\label{fig:wiggly}
\end{figure}
It is known~\cite{ijcai:kphz10} that, for the Euclidean {\em plane}, the same is true of $\ensuremath{\mathcal{B}c}$ and $\ensuremath{\mathcal{C}c}$: there is a $\ensuremath{\mathcal{B}c}$-formula satisfiable over ${\sf RC}(\mathbb{R}^2)$, but not over ${\sf RCP}(\mathbb{R}^2)$. (The example required to show this is far more complicated than the \ensuremath{\mathcal{B}c^\circ}-formula~\eqref{eq:wiggly}.) In the next section, we prove that any of $\ensuremath{\mathcal{B}c}$, $\ensuremath{\mathcal{C}c}$ and $\ensuremath{\mathcal{C}c^\circ}$ contains formulas satisfiable over ${\sf RC}(\mathbb{R}^n)$, for every $n \geq 2$, but only by regions with infinitely many components. Thus, all four of our languages are sensitive to tameness in all dimensions greater than one.
\section{Regions with Infinitely Many Components}\label{sec:sensitivity}
Fix $n \ge 2$ and let $d_0,d_1,d_2,d_3$ be regions partitioning $\mathbb{R}^n$: \begin{align} \label{eq:InfPart1} \textstyle \big( \sum_{0 \leq i \le 3} d_i =1 \big) \quad\land\quad \bigwedge_{0 \leq i<j\leq3}(d_i\cdot d_j=0).
\end{align}
We construct formulas forcing the $d_i$ to have infinitely many connected components. To this end we require non-empty regions $a_i$ contained in $d_i$, and a non-empty region $t$:
\begin{align}\label{eq:basic-regions} \textstyle\bigwedge_{0 \leq i \leq 3} \bigl((a_i \ne 0) \land (a_i \leq d_i)\bigr) \quad\land\quad (t\ne 0). \end{align}
The configuration of regions we have in mind is depicted in Fig.~\ref{fig:InfCmpSat}, where components of the $d_i$ are arranged like the layers of an onion. The `innermost' component of $d_0$ is surrounded by a component of $d_1$, which in turn is surrounded by a component of $d_2$, and so on. The region $t$ passes through every layer, but avoids the $a_i$. To enforce a configuration of this sort, we need the following three formulas, for $0 \leq i \leq 3$:
\begin{align} \label{eq:InfContact} &c(a_i+d_{\md{i+1}}+t), \\ \label{notC} &\neg C(a_i,d_{\md{i+1}}\cdot (-a_{\md{i+1}})) \ \ \land \ \ \neg C(a_i, t), \\ \label{eq:InfNTriv1} &\neg C(d_i, d_{\md{i+2}}), \end{align}
where $\md{k}= k\, {\rm mod}\, 4$. Formulas~\eqref{eq:InfContact} and \eqref{notC} ensure that each component of $a_i$ is in contact with $a_{\md{i+1}}$, while \eqref{eq:InfNTriv1} ensures that no component of $d_i$ can touch any component of $d_{\md{i+2}}$.
\begin{figure}
\caption{Regions satisfying $\varphi_\infty$.}
\label{fig:InfCmpSat}
\end{figure}
Denote by $\varphi_\infty$ the conjunction of the above constraints. Fig.~\ref{fig:InfCmpSat} shows how $\varphi_\infty$ can be satisfied over ${\sf RC}(\mathbb{R}^2)$. By cylindrification, it is also satisfiable over any ${\sf RC}(\mathbb{R}^n)$, for $n> 2$.
The arguments of this section are based on the following property of regular closed subsets of Euclidean spaces:
\begin{lemma}\label{lma:ourNewman} If $X \in {\sf RC}(\mathbb{R}^n)$ is connected, then every component of $-X$ has a connected boundary. \end{lemma}
The proof of this lemma, which follows from a result in~\cite{ijcai:Newman64}, can be found in Appendix~\ref{sec:sensitivityA}. The result fails for other familiar spaces such as the torus.
\begin{theorem}\label{theo:inftyCc} There is a $\ensuremath{\mathcal{C}c}$-formula satisfiable over ${\sf RC}(\mathbb{R}^n)$, $n \geq 2$, but not by regions with finitely many components.
\end{theorem}
\begin{proof} Let $\varphi_\infty$ be as above. To simplify the presentation, we ignore the difference between variables and the regions they stand for, writing, for example, $a_i$ instead of $a_i^\mathfrak{I}$. We construct a sequence of disjoint components $X_i$ of $d_{\md{i}}$ and open sets $V_i$ connecting $X_i$ to $X_{i+1}$ (Fig.~\ref{fig:InfCmpConstr}). By the first conjunct of~\eqref{eq:basic-regions}, let $X_0$ be a component of $d_0$ containing points in $a_0$. Suppose $X_i$ has been constructed. By~\eqref{eq:InfContact} and~\eqref{notC}, $X_i$ is in contact with $a_{\md{i+1}}$. Using~\eqref{eq:InfNTriv1} and the fact that $\mathbb{R}^n$ is locally connected, one can find a component $X_{i+1}$ of $d_{\md{i+1}}$ which has points in $a_{\md{i+1}}$, and a connected open set $V_i$ such that $V_i \cap X_i$ and $V_i \cap X_{i+1}$ are non-empty, but $V_i \cap d_{\md{i+2}}$ is empty.
\begin{figure}
\caption{The sequence $\{X_i,V_i\}_{i\geq0}$ generated by $\varphi_\infty$. ($S_{i+1}$ and $R_{i+1}$ are the `holes' of $X_{i+1}$ containing $X_i$ and $X_{i+2}$.) }
\label{fig:InfCmpConstr}
\end{figure}
To see that the $X_i$ are distinct, let $S_{i+1}$ and $R_{i+1}$ be the components of $-X_{i+1}$ containing $X_i$ and $X_{i+2}$, respectively. It suffices to show $S_{i+1} \subseteq\ti{S}_{i+2}$. Note that the connected set $V_i$ must intersect $\delta S_{i+1}$. Evidently, $\delta S_{i+1} \subseteq X_{i+1} \subseteq d_{\md{i+1}}$. Also, $\delta S_{i+1} \subseteq -X_{i+1}$; hence, by~\eqref{eq:InfPart1} and~\eqref{eq:InfNTriv1}, $\delta S_{i+1} \subseteq d_{\md{i}} \cup d_{\md{i+2}}$. By Lemma~\ref{lma:ourNewman}, $\delta S_{i+1}$ is connected, and therefore, by~\eqref{eq:InfNTriv1}, is entirely contained either in $d_{\md{i}}$ or in $d_{\md{i+2}}$. Since $V_i \cap \delta S_{i+1} \neq \emptyset$ and $V_i \cap d_{\md{i+2}} = \emptyset$, we have $\delta S_{i+1} \not \subseteq d_{\md{i+2}}$, so $\delta S_{i+1} \subseteq d_{\md{i}}$. Similarly, $\delta R_{i+1}\subseteq d_{\md{i+2}}$. By~\eqref{eq:InfNTriv1}, then, $\delta S_{i+1} \cap \delta R_{i+1} = \emptyset$, and since $S_{i+1}$ and $R_{i+1}$ are components of the same set, they are disjoint. Hence, $S_{i+1}\subseteq \ti{(-R_{i+1})}$, and since $X_{i+2}\subseteq R_{i+1}$, also $S_{i+1}\subseteq \ti{(-X_{i+2})}$. So, $S_{i+1}$ lies in the interior of a component of $-X_{i+2}$, and since $\delta S_{i+1}\subseteq X_{i+1}\subseteq S_{i+2}$, that component must be $S_{i+2}$. \end{proof}
\vspace*{-2mm}
Now we show how the $\ensuremath{\mathcal{C}c}$-formula $\varphi_\infty$ can be transformed to $\ensuremath{\mathcal{C}c^\circ}$- and $\ensuremath{\mathcal{B}c}$-formulas with similar properties. Note first that all occurrences of $c$ in $\varphi_\infty$ have positive polarity. Let $\ti{\varphi}_\infty$ be the result of replacing them with the predicate $c^\circ$. In Fig.~\ref{fig:InfCmpSat}, the connected regions mentioned in~\eqref{eq:InfContact} are in fact interior-connected; hence $\ti{\varphi}_\infty$ is satisfiable over ${\sf RC}(\mathbb{R}^n)$. Since interior-connectedness implies connectedness, $\ti{\varphi}_\infty$ entails $\varphi_\infty$, and we obtain:
\begin{corollary}\label{cor:inftyCci} There is a $\ensuremath{\mathcal{C}c^\circ}$-formula satisfiable over ${\sf RC}(\mathbb{R}^n)$, $n \geq 2$, but not by regions with finitely many components. \end{corollary}
To construct a $\ensuremath{\mathcal{B}c}$-formula, we observe that all occurrences of $C$ in $\varphi_\infty$ are negative. We eliminate these using the predicate $c$. Consider, for example, the formula $\neg C(a_i, t)$ in~\eqref{notC}.
By inspection of Fig.~\ref{fig:InfCmpSat}, one can find regions $r_1$, $r_2$ satisfying
\begin{equation} \label{eq:contactTrick} c(r_1) \wedge c(r_2) \wedge (a_i \leq r_1) \wedge (t \leq r_2) \wedge \neg c(r_1+r_2). \end{equation}
On the other hand, \eqref{eq:contactTrick} entails $\neg C(a_i,t)$. By treating all other non-contact relations similarly, we obtain a $\ensuremath{\mathcal{B}c}$-formula $\psi_\infty$ that is satisfiable over ${\sf RC}(\mathbb{R}^n)$, and that entails $\varphi_\infty$. Thus:
\begin{corollary}\label{cor:inftyBc} There is a $\ensuremath{\mathcal{B}c}$-formula satisfiable over ${\sf RC}(\mathbb{R}^n)$, $n \geq 2$, but not by regions with finitely many components. \end{corollary}
Obtaining a $\ensuremath{\mathcal{B}c^\circ}$ analogue is complicated by the fact that we must enforce non-contact constraints using $c^\circ$ (rather than $c$). In the Euclidean plane, this can be done using \emph{planarity constraints}; see Appendix~\ref{sec:sensitivityA}.
\begin{theorem}\label{theo:inftyBci} There is a $\ensuremath{\mathcal{B}c^\circ}$-formula satisfiable over ${\sf RC}(\mathbb{R}^2)$, but not by regions with finitely many components. \end{theorem}
Theorem~\ref{theo:inftyCc} and Corollary~\ref{cor:inftyBc} entail that, if $\mathcal{L}$ is $\ensuremath{\mathcal{B}c}$ or $\ensuremath{\mathcal{C}c}$, then $\textit{Sat}(\mathcal{L},{\sf RC}(\mathbb{R}^n)) \neq \textit{Sat}(\mathcal{L},{\sf RCP}(\mathbb{R}^n))$ for $n \geq 2$. Theorem~\ref{theo:inftyBci} fails for ${\sf RC}(\mathbb{R}^n)$ with $n\geq 3$ (Sec.~\ref{sec:3d}). However, we know from~\eqref{eq:wiggly} that $\textit{Sat}(\ensuremath{\mathcal{B}c^\circ},{\sf RC}(\mathbb{R}^n)) \neq \textit{Sat}(\ensuremath{\mathcal{B}c^\circ},{\sf RCP}(\mathbb{R}^n))$ for all $n \geq 2$. Theorem~\ref{theo:inftyCc} fails in the 1D case; moreover, $\textit{Sat}(\mathcal{L},{\sf RC}(\mathbb{R}))=\textit{Sat}(\mathcal{L},{\sf RCP}(\mathbb{R}))$ only in the case $\mathcal{L} = \ensuremath{\mathcal{B}c}$ or $\ensuremath{\mathcal{B}c^\circ}$~\cite{ijcai:kphz10}.
\section{Undecidability in the Plane}\label{sec:undecidability}
Let $\mathcal{L}$ be any of $\ensuremath{\mathcal{B}c}$, $\ensuremath{\mathcal{C}c}$, $\ensuremath{\mathcal{B}c^\circ}$ or $\ensuremath{\mathcal{C}c^\circ}$. In this section, we show, via a reduction of the {\em Post correspondence
problem} (PCP), that $\textit{Sat}(\mathcal{L},{\sf RC}(\mathbb{R}^2))$ is r.e.-hard, and $\textit{Sat}(\mathcal{L},{\sf RCP}(\mathbb{R}^2))$ is r.e.-complete. An {\em instance} of the PCP is a quadruple $\mathbf{w} = (S, T, \mathsf{w}_1, \mathsf{w}_2)$ where $S$ and $T$ are finite alphabets, and each $\mathsf{w}_i$ is a word morphism from $T^*$ to $S^*$. We may assume that $S = \set{0,1}$ and $\mathsf{w}_i(t)$ is non-empty for any $t \in T$. The instance $\mathbf{w}$ is {\em positive} if there exists a non-empty $\tau \in T^*$ such that $\mathsf{w}_1(\tau) = \mathsf{w}_2(\tau)$. The set of positive PCP-instances is known to be r.e.-complete. The reduction can only be given in outline here: full details are given in Appendix~\ref{sec:UndecidabilityB}.
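For concreteness, the following Python fragment is purely illustrative (the particular instance shown is a hypothetical example, not part of the reduction): it searches for PCP solutions by brute force, making explicit what a positive instance is.
\begin{verbatim}
# Illustrative only: brute-force search for a PCP solution.  The morphisms
# w1, w2 : T* -> {0,1}* are represented as dictionaries from letters to words.
from itertools import product

def apply_morphism(w, tau):
    return ''.join(w[t] for t in tau)

def find_solution(w1, w2, alphabet, max_len=6):
    """Return a non-empty word tau with w1(tau) == w2(tau) and
    |tau| <= max_len, if one exists (the unbounded problem is undecidable)."""
    for length in range(1, max_len + 1):
        for tau in product(alphabet, repeat=length):
            if apply_morphism(w1, tau) == apply_morphism(w2, tau):
                return ''.join(tau)
    return None

# Example instance over T = {a, b, c}; tau = 'baac' is a solution, since
# w1(baac) = w2(baac) = '101111110'.
w1 = {'a': '1', 'b': '10111', 'c': '10'}
w2 = {'a': '111', 'b': '10', 'c': '0'}
print(find_solution(w1, w2, 'abc'))   # -> 'baac'
\end{verbatim}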
To deal with arbitrary regular closed subsets of ${\sf RC}(\mathbb{R}^2)$, we use the technique of `wrapping' a region inside two bigger ones. Let us say that a \emph{3-region} is a triple $\tseq{a} = (a,\intermediate{a},\inner{a})$ of elements of ${\sf RC}(\mathbb{R}^2)$ such that $0 \neq \inner{a} \ll \intermediate{a} \ll a$, where $r \ll s$ abbreviates $\neg C(r, -s)$. It helps to think of $\tseq{a} = (a,\intermediate{a},\inner{a})$ as consisting of a kernel, $\inner{a}$, encased in two protective layers of shell. As a simple example, consider the sequence of 3-regions $\tseq{a}_1, \tseq{a}_2, \tseq{a}_3$ depicted in Fig.~\ref{fig:stack}, where the inner-most regions form a sequence of externally touching polygons.
\begin{figure}
\caption{A chain of 3-regions satisfying $\mathsf{stack}(\tseq{a}_1, \tseq{a}_2, \tseq{a}_3)$. }
\label{fig:stack}
\end{figure}
When describing arrangements of 3-regions, we use the variable $\tseq{r}$ for the triple of variables $(r, \intermediate{r}, \inner{r})$, taking the conjuncts $\inner{r} \neq 0$, $\inner{r} \ll \intermediate{r}$ and $\intermediate{r} \ll r$ to be implicit. As with ordinary variables, we often ignore the difference between 3-region variables and the 3-regions they stand for.
For $k \geq 3$, define the formula $\mathsf{stack}(\tseq{a}_1, \ldots, \tseq{a}_k)$ by
\begin{equation*} \bigwedge_{1 \leq i \leq k}
c(\intermediate{a}_i + \inner{a}_{i + 1} + \cdots + \inner{a}_k) \ \ \ \ \land \ \ \bigwedge_{j - i > 1} \neg C(a_i,a_j). \end{equation*}
Thus, the triple of 3-regions in Fig.~\ref{fig:stack} satisfies $\mathsf{stack}(\tseq{a}_1, \tseq{a}_2, \tseq{a}_3)$. This formula plays a crucial role in our proof. If $\mathsf{stack}(\tseq{a}_1,\ldots,\tseq{a}_k)$ holds, then any point $p_0$ in the inner shell $\intermediate{a}_1$ of $\tseq{a}_1$ can be connected to any point $p_k$ in the kernel $\inner{a}_k$ of $\tseq{a}_k$ via a Jordan arc $\gamma_1\cdots\gamma_k$ whose $i$th segment, $\gamma_i$, never leaves the outer shell $a_i$ of $\tseq{a}_i$. Moreover, each $\gamma_i$ intersects the inner shell $\intermediate{a}_{i+1}$ of $\tseq{a}_{i+1}$, for $1 \leq i <k$.
This technique allows us to write $\ensuremath{\mathcal{C}c}$-formulas whose satisfying regions are guaranteed to contain various networks of arcs, exhibiting almost any desired pattern of intersections. Now recall the construction of Sec.~\ref{sec:sensitivity}, where constraints on the variables $d_0, \ldots, d_3$ were used to enforce `cyclic' patterns of components. Using $\mathsf{stack}(\tseq{a}_1,\ldots,\tseq{a}_k)$, we can write a formula with the property that the regions in any satisfying assignment are forced to contain the pattern of arcs having the form shown in Fig.~\ref{fig:Summary1}.
\begin{figure}
\caption{Encoding the PCP: Stage 1.}
\label{fig:Summary1}
\end{figure}
These arcs define a `window,' containing a sequence $\set{\zeta_i}$ of `horizontal' arcs ($1 \leq i \leq n$), each connected by a corresponding `vertical arc,' $\eta_i$, to some point on the `top edge.' We can ensure that each $\zeta_i$ is included in a region $a_{\md{i}}$, and each $\eta_i$ ($1 \leq i \leq n$) in a region $b_{\md{i}}$, where $\md{i}$ now indicates $i \ \mbox{mod}\ 3$. By repeating the construction, a second pair of arc-sequences, $\set{\zeta'_i}$ and $\set{\eta'_i}$ ($1 \leq i \leq n'$) can be established, but with each $\eta'_i$ connecting $\zeta'_i$ to the `bottom edge.' Again, we can ensure each $\zeta'_i$ is included in a region $a'_{\md{i}}$ and each $\eta'_i$ in a region $b'_{\md{i}}$ ($1 \leq i \leq n'$). Further, we can ensure that the final horizontal arcs $\zeta_n$ and $\zeta'_{n'}$ (but no others) are joined by an arc $\zeta^*$ lying in a region $z^*$.
\begin{figure}
\caption{Encoding the PCP: Stage 2.}
\label{fig:Summary2}
\end{figure}
The crucial step is to match up these arc-sequences. To do so, we write $\neg C(a'_i, b_j) \wedge \neg C(a_i, b'_j) \wedge \neg C(b_i + b'_i, b_j + b'_j + z^*)$, for all $i$, $j$ ($0 \leq i, j < 3$, $i \neq j$). A simple argument based on planarity considerations then ensures that the upper and lower sequences of arcs must cross (essentially) as shown in Fig.~\ref{fig:Summary2}. In particular, we are guaranteed that $n = n'$ (without specifying the value $n$), and that, for all $1 \leq i \leq n$, $\zeta_i$ is connected by $\eta_i$ (and also by $\eta'_i$) to $\zeta'_i$.
Having established the configuration of Fig.~\ref{fig:Summary2}, we write $(b_i \leq l_0 + l_1) \wedge \neg C(b_i \cdot l_0, b_i \cdot l_1)$, for $0\leq i < 3$, ensuring that each $\eta_i$ is included in exactly one of $l_0$, $l_1$. These inclusions naturally define a word $\sigma$ over the alphabet $\set{0,1}$. Next, we write $\ensuremath{\mathcal{C}c}$-constraints which organize the sequences of arcs $\set{\zeta_i}$ and $\set{\zeta'_i}$ (independently) into consecutive blocks. These blocks of arcs can then be put in 1--1 correspondence using essentially the same construction used to put the individual arcs in 1--1 correspondence. Each pair of corresponding blocks can now be made to lie in exactly one region from a collection $t_1, \ldots, t_\ell$. We think of the $t_j$ as representing the letters of the alphabet $T$, so that the labelling of the blocks with these elements defines a word $\tau \in T^*$. It is then straightforward to write non-contact constraints involving the arcs $\zeta_i$ ensuring that $\sigma = \mathsf{w}_1(\tau)$ and non-contact constraints involving the arcs $\zeta'_i$ ensuring that $\sigma = \mathsf{w}_2(\tau)$. Let $\varphi_\mathbf{w}$ be the conjunction of all the foregoing $\ensuremath{\mathcal{C}c}$-formulas. Thus, if $\varphi_\mathbf{w}$ is satisfiable over ${\sf RC}(\mathbb{R}^2)$, then $\mathbf{w}$ is a positive instance of the PCP. On the other hand, if $\mathbf{w}$ is a positive instance of the PCP, then one can construct a tuple satisfying $\varphi_\mathbf{w}$ over ${\sf RCP}(\mathbb{R}^2)$ by `thickening' the above collections of arcs into polygons in the obvious way. So, $\mathbf{w}$ is positive iff $\varphi_\mathbf{w}$ is satisfiable over ${\sf RC}(\mathbb{R}^2)$ iff $\varphi_\mathbf{w}$ is satisfiable over ${\sf RCP}(\mathbb{R}^2)$. This shows r.e.-hardness of $\textit{Sat}(\ensuremath{\mathcal{C}c},{\sf RC}(\mathbb{R}^2))$ and $\textit{Sat}(\ensuremath{\mathcal{C}c},{\sf RCP}(\mathbb{R}^2))$. Membership of the latter problem in r.e.~is immediate because all polygons may be assumed to have vertices with rational coordinates, and so may be effectively enumerated. Using the techniques of Corollaries~\ref{cor:inftyCci}--\ref{cor:inftyBc} and Theorem~\ref{theo:inftyBci}, we obtain:
\begin{theorem} \label{theo:undecidable} For $\mathcal{L}\in \{\ensuremath{\mathcal{B}c^\circ}, \ensuremath{\mathcal{B}c}, \ensuremath{\mathcal{C}c^\circ}, \ensuremath{\mathcal{C}c}\}$, $\textit{Sat}(\mathcal{L},{\sf RC}(\mathbb{R}^2))$ is r.e.-hard, and $\textit{Sat}(\mathcal{L},{\sf RCP}(\mathbb{R}^2))$ is r.e.-complete. \end{theorem}
The complexity of $\textit{Sat}(\mathcal{L},{\sf RC}(\mathbb{R}^3))$ remains open for the languages $\mathcal{L}\in \{\ensuremath{\mathcal{B}c}, \ensuremath{\mathcal{C}c^\circ}, \ensuremath{\mathcal{C}c}\}$. However, as we shall see in the next section, for $\ensuremath{\mathcal{B}c^\circ}$ it drops dramatically.
\section{\ensuremath{\mathcal{B}c^\circ}{} in 3D}\label{sec:3d}
In this section, we consider the complexity of satisfying \ensuremath{\mathcal{B}c^\circ}-constraints by polyhedra and regular closed sets in three-dimensional Euclidean space. Our analysis rests on an important connection between geometrical and graph-theoretic interpretations. We begin by briefly discussing the results of~\cite{ijcai:kp-hwz10} for the {\em polyhedral} case.
Recall that every quasi-order $(W,R)$, that is, every set $W$ equipped with a reflexive and transitive relation $R$, can be regarded as a topological space by taking $X \subseteq W$ to be open just in case $x \in X$ and $xRy$ imply $y\in X$. Such topologies are called \emph{Aleksandrov spaces}. If $(W,R)$ contains no proper paths of length greater than 2, we call $(W,R)$ a \emph{quasi-saw} (Fig.~\ref{fig:broom}). If, in addition, no $x \in W$ has more than two proper $R$-successors, we call $(W,R)$ a \emph{$2$-quasi-saw}. The properties of 2-quasi-saws we need are as follows~\cite{ijcai:kp-hwz10}:
\begin{itemize}\itemsep=0pt \item[--] satisfiability of $\ensuremath{\mathcal{B}c}$-formulas in arbitrary topological
spaces coincides with satisfiability in 2-quasi-saws, and is
\textsc{ExpTime}-complete;
\item[--] $X \subseteq W$ is connected in a 2-quasi-saw $(W,R)$ iff it is interior-connected in $(W,R)$. \end{itemize}
The following construction lets us apply these results to the problem $\textit{Sat}(\ensuremath{\mathcal{B}c^\circ},{\sf RCP}(\mathbb{R}^3))$. Say that a \emph{connected partition} in ${\sf RCP}(\mathbb{R}^3)$ is a tuple $X_1,\dots,X_k$ of non-empty polyhedra having connected and pairwise disjoint interiors, which sum to the entire space $\mathbb{R}^3$. The \emph{neighbourhood graph} $(V,E)$ of this partition has vertices $V = \{X_1, \ldots, X_k\}$ and edges $E = \{\{X_i, X_j\} \mid i \ne j \text{ and } \ti{(X_i + X_j)} \text{ is connected}\}$ (Fig.~\ref{fig:conn-part}).
\begin{figure}
\caption{A connected partition and its neighbourhood graph.}
\label{fig:conn-part}
\end{figure}
One can show that {\em every} connected graph is the neighbourhood graph of some connected partition in ${\sf RCP}(\mathbb{R}^3)$. Furthermore, every neighbourhood graph $(V,E)$ gives rise to a 2-quasi-saw, namely, $(W_0 \cup W_1, R)$, where $W_0 = V$, $W_1 = \{z_{x,y} \mid \{x,y\} \in E\}$, and $R$ is the reflexive closure of $\{(z_{x,y}, x), (z_{x,y}, y) \mid \{x,y\} \in E \}$. From this, we see that ({\em i}) a $\ensuremath{\mathcal{B}c^\circ}$-formula $\varphi$ is satisfiable over ${\sf RCP}(\mathbb{R}^3)$ iff ({\em
ii}) $\varphi$ is satisfiable over a connected $2$-quasi-saw iff ({\em
iii}) the $\ensuremath{\mathcal{B}c}$-formula $\varphi^{\bullet}$, obtained from $\varphi$ by replacing every occurrence of $c^\circ$ with $c$, is satisfiable over a connected 2-quasi-saw. Thus, $\textit{Sat}(\ensuremath{\mathcal{B}c^\circ}, {\sf RCP}(\mathbb{R}^3))$ is \textsc{ExpTime}-complete.
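This passage from graphs to 2-quasi-saws is purely combinatorial; the following Python fragment is illustrative only (the example graph is hypothetical) and builds the 2-quasi-saw of a given graph exactly as described above.
\begin{verbatim}
# Illustrative only: the 2-quasi-saw (W_0 u W_1, R) of a graph (V, E).
# Depth-0 points are the vertices; each edge {x,y} contributes a depth-1
# point z_{x,y}; R is the reflexive closure of {(z_{x,y},x), (z_{x,y},y)}.

def two_quasi_saw(vertices, edges):
    W = set(vertices) | {('z', frozenset(e)) for e in edges}
    R = {(w, w) for w in W}                          # reflexive part
    for e in edges:
        x, y = tuple(e)
        R |= {(('z', frozenset(e)), x), (('z', frozenset(e)), y)}
    return W, R

# Hypothetical neighbourhood graph on four polyhedra (a path plus a chord).
V = ['X1', 'X2', 'X3', 'X4']
E = [{'X1', 'X2'}, {'X2', 'X3'}, {'X3', 'X4'}, {'X1', 'X3'}]
W, R = two_quasi_saw(V, E)
print(len(W), len(R))   # 8 points; 8 reflexive pairs + 2 per edge = 16 pairs
\end{verbatim}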
The picture changes if we allow variables to range over ${\sf RC}(\mathbb{R}^3)$ rather than ${\sf RCP}(\mathbb{R}^3)$. Note first that the $\ensuremath{\mathcal{B}c^\circ}$-formula \eqref{eq:wiggly} is not satisfiable over 2-quasi-saws, but has a quasi-saw model as in Fig.~\ref{fig:broom}.
\begin{figure}
\caption{A quasi-saw model $\mathfrak{I}$ of~\eqref{eq:wiggly}: $r_i^\mathfrak{I} = \{x_i,z\}$.}
\label{fig:broom}
\end{figure}
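This quasi-saw model can be verified mechanically. The Python fragment below is illustrative only: it implements interior and connectedness for the Aleksandrov topology of a preorder and checks that, in the four-point quasi-saw with $z R x_i$ for $i=1,2,3$, the assignment $r_i^{\mathfrak I}=\{x_i,z\}$ satisfies~\eqref{eq:wiggly}.
\begin{verbatim}
# Illustrative only: Aleksandrov topology of a preorder (W, R).
# A set is open iff it is upward closed under R.

def interior(X, W, R):
    # largest upward-closed subset of X
    return {x for x in X if all(y in X for y in W if (x, y) in R)}

def connected(X, R):
    # in a finite space, X is connected iff it is chain-connected under
    # the specialisation preorder restricted to X
    X = set(X)
    if not X:
        return True
    seen, todo = set(), [next(iter(X))]
    while todo:
        p = todo.pop()
        if p not in seen:
            seen.add(p)
            todo += [q for q in X - seen if (p, q) in R or (q, p) in R]
    return seen == X

# The quasi-saw of the figure: x1, x2, x3 of depth 0, one point z of depth 1.
W = {'x1', 'x2', 'x3', 'z'}
R = {(w, w) for w in W} | {('z', 'x1'), ('z', 'x2'), ('z', 'x3')}
r = {1: {'x1', 'z'}, 2: {'x2', 'z'}, 3: {'x3', 'z'}}
c_int = lambda X: connected(interior(X, W, R), R)        # the predicate c^o

assert all(c_int(r[i]) for i in (1, 2, 3))                # c^o(r_i)
assert c_int(r[1] | r[2] | r[3])                          # c^o(r_1+r_2+r_3)
assert not c_int(r[1] | r[2]) and not c_int(r[1] | r[3])  # not c^o(r_1+r_i)
\end{verbatim}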
Some extra geometrical work now shows that ({\em iv}) a $\ensuremath{\mathcal{B}c^\circ}$-formula is satisfiable over ${\sf RC}(\mathbb{R}^3)$ iff ({\em v}) it is satisfiable over a connected quasi-saw. Moreover, as shown in~\cite{ijcai:kp-hwz10}, satisfiability of $\ensuremath{\mathcal{B}c^\circ}$-formulas in connected spaces coincides with satisfiability over connected quasi-saws, and is \textsc{NP}-complete.
\begin{theorem}\label{theo:BciRCR3} The problem $\textit{Sat}(\ensuremath{\mathcal{B}c^\circ},{\sf RC}(\mathbb{R}^3))$ is \textsc{NP}-complete. \end{theorem}
\begin{proof} From the preceding discussion, it suffices to show that ({\em v}) implies ({\em iv}) for any $\ensuremath{\mathcal{B}c^\circ}$-formula $\varphi$. So suppose $\mathfrak A \models \varphi$, with $\mathfrak A$ based on a finite connected quasi-saw $(W_0\cup W_1,R)$, where $W_i$ contains all points of depth $i \in \{0,1\}$ (Fig.~\ref{fig:broom}). Without loss of generality we will assume that there is a special point $z_0$ of depth 1 such that $z_0 R x$ for all $x$ of depth 0. We show how $\mathfrak{A}$ can be embedded into ${\sf RC}(\mathbb{R}^3)$.
Take pairwise disjoint \emph{closed} balls $B^1_x$, for $x$ of depth 0, and pairwise disjoint \emph{open} balls $D_z$, for all $z$ of depth 1 except $z_0$ (we assume the $D_z$ are disjoint from the $B^1_x$). Let $D_{z_0}$ be the closure of the complement of the union of all the $B_x^1$ and $D_z$.
We expand the $B^1_x$ to sets $B_x$ in such a way that
\begin{itemize}\itemsep=0pt \item[(A)] the $B_x$ form a connected partition in ${\sf RC}(\mathbb{R}^3)$, that
is, they are regular closed and sum up to $\mathbb{R}^3$, and their
interiors are non-empty, connected and pairwise disjoint;
\item[(B)] every point in $D_z$ is either in the interior of some $B_x$ with $zRx$, or on the boundary of \emph{all} of the $B_x$ with $zRx$. \end{itemize}
The required $B_x$ are constructed as follows. Let $q_1, q_2, \ldots$ be an enumeration of all the points in the interiors of $D_z$ with \emph{rational} coordinates.
For $x\in W_0$, we set $B_x$ to be the closure of the infinite union $\bigcup_{k=1}^\infty \ti{(B_x^k)}$, where the regular closed sets $B_x^k$ are defined inductively as follows (Fig.~\ref{fig:apollonian}).
Assuming that the $B^k_x$ are defined, let $q_i$ be the first point in the list $q_1, q_2, \ldots$ that is not in any $B^k_x$ yet. So, $q_i$ is in the interior of some $D_z$. Take an open ball $C_{q_i}$ in the interior of $D_z$ centred at $q_i$ and disjoint from the $B^k_x$. For each $x\in W_0$ with $zRx$, expand $B_x^k$ by a closed ball in $C_{q_i}$ and a closed `rod' connecting it to $B_x^1$ in such a way that the ball and the rod are disjoint from the rest of the $B^k_x$; the result is denoted by $B^{k+1}_x$.
\begin{figure}
\caption{Filling $D_{z_1}$ with $B_{x_i}$, for $z_1 R x_i$, $i = 1,2,3$.}
\label{fig:apollonian}
\end{figure}
Consider a function $f$ that maps regular closed sets $X \subseteq W$ to ${\sf RC}(\mathbb{R}^3)$ so that $f(X)$ is the union of all $B_x$, for $x$ of depth $0$ in $X$. By~(A), $f$ preserves $+$, $\cdot$, $-$, $0$ and $1$.
Define an interpretation $\mathfrak{I}$ over ${\sf RC}(\mathbb{R}^3)$ by $r^\mathfrak{I} = f(r^\mathfrak{A})$. To show that $\mathfrak I \models \varphi$, it remains to prove that $\ti{X}$ is connected iff $\ti{(f(X))}$ is connected (details are in Appendix~\ref{sec:Bci3D_C}). \end{proof}
The remarkably diverse computational behaviour of \ensuremath{\mathcal{B}c^\circ}{} over ${\sf RC}(\mathbb{R}^3)$, ${\sf RCP}(\mathbb{R}^3)$ and ${\sf RCP}(\mathbb{R}^2)$ can be explained as follows. To satisfy a \ensuremath{\mathcal{B}c^\circ}-formula $\varphi$ in ${\sf RC}(\mathbb{R}^3)$, it suffices to find polynomially many points in the regions mentioned in $\varphi$ (witnessing non-emptiness constraints and failures of interior-connectedness), and then to `inflate' those points into regular closed sets, interior-connected where required, using the technique of Fig.~\ref{fig:apollonian}.
By contrast, over ${\sf RCP}(\mathbb{R}^3)$, one can write a \ensuremath{\mathcal{B}c^\circ}-formula analogous to \eqref{eq:contactTrick} stating that two internally connected polyhedra do not share a 2D face. Such `face-contact' constraints can be used to generate constellations of exponentially many polyhedra simulating runs of alternating Turing machines on polynomial tapes, leading to \textsc{ExpTime}-hardness. Finally, over ${\sf RCP}(\mathbb{R}^2)$, planarity considerations endow \ensuremath{\mathcal{B}c^\circ}{} with the extra expressive power required to enforce full non-contact constructs (not possible in higher dimensions), and thus to encode the PCP as sketched in Sec.~\ref{sec:undecidability}.
\section{Conclusion}\label{conclusion}
This paper investigated topological constraint languages featuring connectedness predicates and Boolean operations on regions. Unlike their less expressive cousins, \ensuremath{\mathcal{RCC}8}{} and \ensuremath{\mathcal{RCC}5}, such languages are highly sensitive to the spaces over which they are interpreted, and exhibit more challenging computational behaviour. Specifically, we demonstrated that the languages $\ensuremath{\mathcal{C}c}$, $\ensuremath{\mathcal{C}c^\circ}$ and $\ensuremath{\mathcal{B}c}$ contain formulas satisfiable over ${\sf RC}(\mathbb{R}^n)$, $n \geq 2$, but only by regions with infinitely many components. Using a related construction, we proved that the satisfiability problem for any of $\ensuremath{\mathcal{B}c}$, $\ensuremath{\mathcal{C}c}$, $\ensuremath{\mathcal{B}c^\circ}$ and $\ensuremath{\mathcal{C}c^\circ}$, interpreted either over ${\sf RC}(\mathbb{R}^2)$ or over its polygonal subalgebra, ${\sf RCP}(\mathbb{R}^2)$, is \emph{undecidable}. Finally, we showed that the satisfiability problem for $\ensuremath{\mathcal{B}c^\circ}$, interpreted over ${\sf RC}(\mathbb{R}^3)$, is \textsc{NP}-complete, which contrasts with \textsc{ExpTime}-completeness for ${\sf RCP}(\mathbb{R}^3)$. The complexity of satisfiability for $\ensuremath{\mathcal{B}c}$, $\ensuremath{\mathcal{C}c}$ and $\ensuremath{\mathcal{C}c^\circ}$ over ${\sf RC}(\mathbb{R}^n)$ or ${\sf RCP}(\mathbb{R}^n)$ for $n \geq 3$ remains open.
The obtained results rely on certain distinctive topological properties of Euclidean spaces. Thus, for example, the argument of Sec.~\ref{sec:sensitivity} is based on the property of Lemma~\ref{lma:ourNewman}, while Sec.~\ref{sec:undecidability} similarly relies on {\em planarity} considerations. In both cases, however, the moral is the same: the topological spaces of most interest for Qualitative Spatial Reasoning exhibit special characteristics which any topological constraint language able to express connectedness must take into account.
The results of Sec.~\ref{sec:undecidability} pose a challenge for Qualitative Spatial Reasoning in the Euclidean plane. On the one hand, the relatively low complexity of \ensuremath{\mathcal{RCC}8}{} over disc-homeomorphs suggests the possibility of usefully extending the expressive power of \ensuremath{\mathcal{RCC}8}{} without compromising computational properties. On the other hand, our results impose severe limits on any such extension. We observe, however, that the constructions used in the proofs depend on a strong interaction between the connectedness predicates and the Boolean operations on regular closed sets. We believe that by restricting this interaction one can obtain non-trivial constraint languages with more acceptable complexity. For example, the extension of \ensuremath{\mathcal{RCC}8}{} with connectedness constraints is still in \textsc{NP}{} for both ${\sf RC}(\mathbb{R}^2)$ and ${\sf RCP}(\mathbb{R}^2)$~\cite{ijcai:kphz10}.
\noindent {\bf Acknowledgments.}\ \ This work was partially supported by the U.K. EPSRC grants EP/E034942/1 and EP/E035248/1.
\begin{thebibliography}{}\itemsep=1pt
\bibitem[\protect\citeauthoryear{Bennett}{1994}]{ijcai:Bennett94} B.~Bennett. \newblock Spatial reasoning with propositional logic. \newblock In {\em Proc.\ of KR}, pages 51--62. 1994.
\bibitem[\protect\citeauthoryear{Borgo \bgroup \em et al.\egroup
}{1996}]{Borgo96} S.~Borgo, N.~Guarino, and C.~Masolo. \newblock A pointless theory of space based on strong connection and
congruence. \newblock In {\em Proc.\ of
KR}, pages 220--229. 1996.
\bibitem[\protect\citeauthoryear{Cohn and Renz}{2008}]{ijcai:Cohn&Renz08} A. Cohn and J. Renz. \newblock Qualitative spatial representation and reasoning. \newblock In {\em
Handbook of Kno\-w\-ledge Representation}, pages 551--596. Elsevier, 2008.
\bibitem[\protect\citeauthoryear{Dornheim}{1998}]{Dornheim} C.\ Dornheim. \newblock Undecidability of plane po\-ly\-gonal mereo\-topology. \newblock In {\em Proc.\ of KR}. 1998.
\bibitem[\protect\citeauthoryear{Egenhofer and
Franzosa}{1991}]{ijcai:Egenhofer&Franzosa91} M.~Egenhofer and R.~Franzosa. \newblock Point-set topological spatial relations. \newblock {\em International J.\ of Geographical Information Systems},
5:161--174, 1991.
\bibitem[\protect\citeauthoryear{Kontchakov \bgroup \em et al.\egroup
}{2010a}]{ijcai:kp-hwz10} R.~Kontchakov, I.~Pratt-Hartmann, F.~Wolter, and M.~Zakharyaschev. \newblock Spatial logics with connectedness predicates. \newblock {\em Logical Methods in Computer Science}, 6(3), 2010.
\bibitem[\protect\citeauthoryear{Kontchakov \bgroup \em et al.\egroup
}{2010b}]{ijcai:kphz10} R.~Kontchakov, I.~Pratt-Hart\-mann, and M.~Zakharyaschev. \newblock Interpreting topological logics over {E}uclidean spaces. \newblock In {\em Proc.\
of KR}. 2010.
\bibitem[\protect\citeauthoryear{Newman}{1964}]{ijcai:Newman64} M.H.A. Newman. \newblock {\em Elements of the Topology of Plane Sets of Points}. \newblock Cambridge, 1964.
\bibitem[\protect\citeauthoryear{Pratt-Hartmann}{2007}]{ijcai:HSL2} I.~Pratt-Hartmann. \newblock First-order mere\-o\-topology. \newblock In {\em
Handbook of Spatial Logics}, pages 13--97. Springer, 2007.
\bibitem[\protect\citeauthoryear{Randell \bgroup \em et al.\egroup
}{1992}]{ijcai:Randelletal92} D.~Randell, Z.~Cui, and A.~Cohn. \newblock A spatial logic based on regions and connection. \newblock In {\em Proc.\ of
KR}, pages 165--176. 1992.
\bibitem[\protect\citeauthoryear{Renz and Nebel}{2001}]{ijcai:Renz&Nebel98} J.~Renz and B.~Nebel. \newblock Efficient methods for qualitative spatial reasoning. \newblock {\em J.~Artificial Intelligence Research}, 15:289--318, 2001.
\bibitem[\protect\citeauthoryear{Renz}{1998}]{ijcai:Renz98} J.~Renz. \newblock A canonical model of the region connection calculus. \newblock In {\em Proc.\ of KR}, pages 330--341. 1998.
\bibitem[\protect\citeauthoryear{Schaefer \bgroup \em et al.\egroup
}{2003}]{ijcai:iscloes:sss03} M.~Schaefer, E.~Sedgwick, and D.~{\v{S}}tefankovi{\v{c}}. \newblock Recognizing string graphs in {NP}. \newblock {\em J.\ of Computer and System Sciences}, 67:365--380, 2003.
\bibitem[\protect\citeauthoryear{Wolter and
Zakharyaschev}{2000}]{ijcai:Wolter&Z00ecai} F.~Wolter and M.~Zakharyaschev. \newblock Spatial reasoning in \ensuremath{\mathcal{RCC}8}{} with {B}oolean region terms. \newblock In {\em Proc.\ of ECAI}, pages 244--248. 2000.
\end{thebibliography}
\cleardoublepage
\appendix
\section{Regions with infinitely many components} \label{sec:sensitivityA} First we give detailed proofs of Lemma~\ref{lma:ourNewman} and Theorem~\ref{theo:inftyCc}. \begin{theorem}[\cite{ijcai:Newman64}]\label{thm:NewmanBnd} If $X$ is a connected subset of $\mathbb{R}^n$, then every connected component of $\mathbb{R}^n\setminus X$ has a connected boundary. \end{theorem}
\begin{swetheorem}{Lemma~\ref{lma:ourNewman}} If $X \in {\sf RC}(\mathbb{R}^n)$ is connected, then every component of $-X$ has a connected boundary. \end{swetheorem}
\begin{proof} Let $Y$ be a connected component of $-X$. Suppose that the boundary $\beta$ of $Y$ is not connected, and let $\beta_1$ and $\beta_2$ be two sets separating $\beta$: $\beta_1$ and $\beta_2$ are disjoint, non-empty, closed subsets of $\beta$ whose union is $\beta$. We will show that $Y$ is not connected. We have $Y=\tc{(\bigcup_{i\in
I}Z_i)}$, for some index set $I$, where the $Z_i$ are distinct connected components of $\mathbb{R}^n\setminus X$. By Theorem~\ref{thm:NewmanBnd}, the boundaries $\alpha_i$ of $Z_i$ are connected subsets of $\beta$, for each $i\in I$. Hence, either $\alpha_i\subseteq\beta_1$ or $\alpha_i\subseteq\beta_2$, for otherwise $\alpha_i\cap\beta_1$ and $\alpha_i\cap\beta_2$ would separate $\alpha_i$. Let $I_j=\{i\in I\mid \alpha_i\subseteq \beta_j\}$ and $Y_j=\tc{(\bigcup_{i\in I_j}Z_i)}$, for $j=1,2$. Clearly, $Y_1$ and $Y_2$ are closed, and $Y=Y_1\cup Y_2$. Hence, it suffices to show that $Y_1$ and $Y_2$ are disjoint. We know that, for $j=1,2$,
$$ Y_j=\tc{(\bigcup_{i\in I_j}\alpha_i)}\cup\bigcup_{i\in I_j}Z_i. $$
Clearly, $\bigcup_{i\in I_1}Z_i$ and $\bigcup_{i\in I_2}Z_i$ are disjoint. We also know that $\tc{(\bigcup_{i\in I_1}\alpha_i)}$ and $\tc{(\bigcup_{i\in I_2}\alpha_i)}$ are disjoint, as subsets of $\beta_1$ and $\beta_2$, respectively. Finally, $\tc{(\bigcup_{i\in I_j}\alpha_i)}$ and $\bigcup_{i\in I_k}Z_i$ are disjoint, for $j,k=1,2$, as subsets of the boundary and the interior of $Y$, respectively. So, $Y$ is not connected, which is a contradiction. \end{proof}
\begin{swetheorem}{Theorem~\ref{theo:inftyCc}}
If $\mathfrak I$ is an interpretation over ${\sf RC}(\mathbb{R}^n)$ such that
$\mathfrak I \models \varphi_\infty$, then every $d_i^\mathfrak{I}$
has infinitely many components. \end{swetheorem}
\begin{proof} To simplify presentation, we ignore the difference between variables and the regions they stand for, writing, for example, $a_i$ instead of $a_i^\mathfrak{I}$. We also set $b_i=d_i\cdot(-a_i)$. We construct a sequence of disjoint components $X_i$ of $d_{\md{i}}$ and open sets $V_i$ connecting $X_i$ to $X_{i+1}$ (Fig.~\ref{fig:InfCmpConstr}). By the first conjunct of~\eqref{eq:basic-regions}, let $X_0$ be a component of $d_0$ containing points in $a_0$. Suppose $X_i$ has been constructed, for $i \geq 0$. By~\eqref{eq:InfContact} and~\eqref{notC}, there exists a point $q \in X_i \cap a_{\md{i+1}}$. Since $q\notin b_{\md{i+1}}\cup d_{\md{i+2}}\cup d_{\md{i+3}}$, and because $\mathbb{R}^n$ is locally connected, there exists a connected neighbourhood $V_i$ of $q$ such that $V_i\cap (b_{\md{i+1}}\cup d_{\md{i+2}}\cup d_{\md{i+3}})=\emptyset$, and so, by~\eqref{eq:InfPart1}, $V_i\subseteq d_{\md{i}}+a_{\md{i+1}}$. Further, since $q\in a_{\md{i+1}}$, $V_i\cap \ti{a_{\md{i+1}}}\neq \emptyset$. Take $X_{i+1}'$ to be a component of $a_{\md{i+1}}$ that intersects $V_i$ and $X_{i+1}$ the component of $d_{\md{i+1}}$ containing $X_{i+1}'$.
To see that the $X_i$ are distinct, let $S_{i+1}$ and $R_{i+1}$ be the components of $-X_{i+1}$ containing $X_i$ and $X_{i+2}$, respectively. It suffices to show $S_{i+1} \subseteq\ti{S}_{i+2}$. Note that the connected set $V_i$ must intersect $\delta S_{i+1}$. Evidently, $\delta S_{i+1} \subseteq X_{i+1} \subseteq d_{\md{i+1}}$. Also, $\delta S_{i+1} \subseteq -X_{i+1}$; hence, by~\eqref{eq:InfPart1} and~\eqref{eq:InfNTriv1}, $\delta S_{i+1} \subseteq d_{\md{i}} \cup d_{\md{i+2}}$. By Lemma~\ref{lma:ourNewman}, $\delta S_{i+1}$ is connected, and therefore, by~\eqref{eq:InfNTriv1}, is entirely contained either in $d_{\md{i}}$ or in $d_{\md{i+2}}$. Since $V_i \cap \delta S_{i+1} \neq \emptyset$ and $V_i \cap d_{\md{i+2}} = \emptyset$, we have $\delta S_{i+1} \not \subseteq d_{\md{i+2}}$, so $\delta S_{i+1} \subseteq d_{\md{i}}$. Similarly, $\delta R_{i+1}\subseteq d_{\md{i+2}}$. By~\eqref{eq:InfNTriv1}, then, $\delta S_{i+1} \cap \delta R_{i+1} = \emptyset$, and since $S_{i+1}$ and $R_{i+1}$ are components of the same set, they are disjoint. Hence, $S_{i+1}\subseteq \ti{(-R_{i+1})}$, and since $X_{i+2}\subseteq R_{i+1}$, also $S_{i+1}\subseteq \ti{(-X_{i+2})}$. So, $S_{i+1}$ lies in the interior of a component of $-X_{i+2}$, and since $\delta S_{i+1}\subseteq X_{i+1}\subseteq S_{i+2}$, that component must be $S_{i+2}$. \end{proof}
Now we extend the result to the language $\ensuremath{\mathcal{C}c^\circ}$. All occurrences of $c$ in $\varphi_\infty$ have positive polarity. Let $\ti{\varphi}_\infty$ be the result of replacing them with the predicate $c^\circ$. In the configuration of Fig.~\ref{fig:InfCmpSat}, all connected regions mentioned in $\varphi_\infty$ are in fact interior-connected; hence $\ti{\varphi}_\infty$ is satisfiable over ${\sf RC}(\mathbb{R}^n)$. Since interior-connectedness implies connectedness, $\ti{\varphi}_\infty$ entails $\varphi_\infty$ in a common extension of $\ensuremath{\mathcal{C}c^\circ}$ and $\ensuremath{\mathcal{C}c}$. Hence:
\begin{swetheorem}{Corollary~\ref{cor:inftyCci}} There is a $\ensuremath{\mathcal{C}c^\circ}$-formula satisfiable over ${\sf RC}(\mathbb{R}^n)$, $n \geq 2$, but not by regions with finitely many components. \end{swetheorem}
\begin{figure}
\caption{Satisfying $\varphi_{\lnot C}^c(a_0, b_1,s,t)$ and $\varphi_{\lnot C}^c(a_0, b_2,s,t)$.}
\label{fig:InfCmpElAiBi}
\end{figure}
To extend Theorem~\ref{theo:inftyCc} to the language $\ensuremath{\mathcal{B}c}$, notice that all occurrences of $C$ in $\varphi_\infty$ are negative. We shall eliminate these using only the predicate $c$. We use the fact that, if the sum of two connected regions is not connected, then they must be disjoint. Consider the formula \begin{align*}
\varphi_{\lnot C}^c(r,s,r',s'):=c(r+r')\land c(s+s') \hspace{2cm}
\\
\land \lnot c((r+r')+(s+s')). \end{align*} Note that $\varphi_{\lnot C}^c(r,s,r',s')$ implies $\lnot C(r,s)$. We replace $\lnot C(a_i,t)$ with $\varphi_{\lnot
C}^c(a_i,t,a_0+a_1+a_2+a_3,t)$, which is clearly satisfiable by the regions in Fig.~\ref{fig:InfCmpSat}. Further, we replace $\lnot C(a_i, b_{\md{i+1}})$ with $\varphi_{\lnot C}^c(a_i,b_{\md{i+1}},s,t)$. As shown in Fig.~\ref{fig:InfCmpElAiBi}, there exists a region $s$ satisfying this formula. Instead of dealing with $\lnot C(d_i,d_{\md{i+2}})$, we consider the equivalent: \begin{align*}
\lnot C(a_i,b_{\md{i+2}})\land\lnot C(b_i,a_{\md{i+2}})\land \hspace{3cm}\\
\lnot C(a_i,a_{\md{i+2}})\land\lnot C(b_i,b_{\md{i+2}}). \end{align*} We replace $\lnot C(a_i,b_{\md{i+2}})$ by $\varphi^c_{\lnot C}(a_i,b_{\md{i+2}},s,t)$, which is satisfiable by the regions depicted in Fig.~\ref{fig:InfCmpElAiBi}. We ignore $\lnot C(b_i,a_{\md{i+2}})$ because, by the symmetry of $C$, it coincides with the conjunct $\lnot C(a_{j},b_{\md{j+2}})$ for $j=\md{i+2}$. We replace $\lnot C(a_i,a_{\md{i+2}})$ by $\varphi_{\lnot C}^c(a_i,a_{\md{i+2}},a_i',a_{\md{i+2}}')$, which is satisfiable by the regions depicted in Fig.~\ref{fig:InfCmpElAiAi2}. The fourth conjunct is then treated symmetrically.
\begin{figure}
\caption{Satisfying $\varphi_{\lnot C}^c(a_0, a_2,a_0', a_2')$.}
\label{fig:InfCmpElAiAi2}
\end{figure} Transforming $\varphi_\infty$ in the way just described, we obtain a $\ensuremath{\mathcal{B}c}$-formula $\varphi_\infty^c$, which implies $\varphi_\infty$ (in the language $\ensuremath{\mathcal{C}c}$) and which is satisfiable over ${\sf RC}(\mathbb{R}^n)$ by the arrangement described above. Hence, we obtain the following:
\begin{swetheorem}{Corollary~\ref{cor:inftyBc}} There is a $\ensuremath{\mathcal{B}c}$-formula satisfiable over ${\sf RC}(\mathbb{R}^n)$, $n \geq 2$, but not by regions with finitely many components. \end{swetheorem}
The only remaining task in this section is to prove Theorem~\ref{theo:inftyBci}. The construction is similar to the one developed in Sec.~\ref{sec:undecidability}, and as such uses similar techniques. We employ the following notation. If $\alpha$ is a Jordan arc, and $p$, $q$ are points on $\alpha$ such that $q$ occurs after $p$, we denote by $\alpha[p,q]$ the segment of $\alpha$ from $p$ to $q$.
Consider the formula $\ti{\mathsf{stack}}(a_1,\ldots, a_n)$ given by: \begin{align*}
\bigwedge_{1\leq i<n} \left(c^\circ(a_i+\cdots+a_n)\land a_i\cdot a_{i+1}=0\right)
\land \bigwedge_{j-i>1} \lnot C(a_i,a_j) \end{align*}
This formula allows us to construct sequences of arcs in the following sense:
\begin{lemma}\label{lma:StackLemmai} Suppose that the condition $\ti{\mathsf{stack}}(a_1,\ldots,a_n)$ obtains, $n>1$. Then every point $p_1\in \ti a_1$ can be connected to every point $p_n\in \ti a_n$ by a Jordan arc $\alpha=\alpha_1\cdots\alpha_{n-1}$ such that for all $i$ \textup{(}$1\leq i< n$\textup{)}, each segment $\alpha_i\subseteq \ti{(a_i+a_{i+1})}$ is a non-degenerate Jordan arc starting at some point $p_i\in\ti a_i$. \end{lemma}
\begin{proof}
By $c^\circ(a_1+\cdots+a_n)$, let $\alpha_1'\subseteq
\ti{(a_1+\cdots+a_n)}$ be a Jordan arc connecting $p_1$ to
$p_n$ (Fig.~\ref{fig:stacki}). By the non-contact constraints,
$\alpha_1'$ has to contain points in $\ti a_2$. Let $p_2'$ be
one such point. For $2\leq i<n$ we suppose $\alpha_1, \ldots,
\alpha_{i-2}$, $\alpha'_{i-1}$ and $p'_i$ to have been
defined, and proceed as follows. By $c^\circ(a_i+\cdots+a_n)$, let
$\alpha_i''\subseteq \ti{(a_i+\cdots+a_n)}$ be a Jordan arc
connecting $p_i'$ to $p_n$. By the non-contact constraints,
$\alpha_i''$ can intersect
$\alpha_1\cdots\alpha_{i-2}\alpha_{i-1}'$ only in its final
segment $\alpha_{i-1}'$. Let $p_i$ be the first point of $\alpha_{i-1}'$ lying on $\alpha_i''$; let $\alpha_{i-1}$ be the initial segment of $\alpha_{i-1}'$ ending at $p_i$; and let $\alpha_i'$ be the final segment of $\alpha_i''$ starting at $p_i$. As in the case of $\alpha_1'$, the non-contact constraints ensure that, for $i<n-1$, $\alpha_i'$ contains points in $\ti a_{i+1}$; let $p_{i+1}'$ be one such point. It remains only to define
$\alpha_{n-1}$, and to this end, we simply set
$\alpha_{n-1}:=\alpha_{n-1}'$. To see that $p_i$, $2\leq i<n$,
are as required, note that $p_i\in
\alpha_i\cap\alpha_{i-1}$. By the non-contact constraints, $p_i$ must be in $a_i$. If $p_i$ were in $\delta(a_i)$, it would also have to be in $\delta(a_{i-1})$ and $\delta(a_{i+1})$, which is forbidden by the non-contact constraints. Hence $p_i\in\ti
a_i$, $1\leq i\leq n$. Given $a_i\cdot a_{i+1}=0$, $1\leq
i<n$, this also guarantees that the arcs $\alpha_i$ are
non-degenerate. \end{proof}
\begin{figure}\label{fig:stacki}
\end{figure}
Consider now the formula $\ti{\mathsf{frame}}(a_0,\ldots,a_{n-1})$ given by: \begin{align*}
\bigwedge_{0\leq i < n} \left(c^\circ(a_i)\land
c^\circ(a_i+ a_{\md{i+1}})\land a_i\neq 0 \right)\land \\ \bigwedge_{j-i>1}a_i\cdot a_j=0, \end{align*} where $\md{k}$ denotes $k \mbox{ mod } n$. This formula allows us to construct Jordan curves in the plane, in the following sense: \begin{lemma}\label{lma:FrameLemmaInt}
Let $n\geq 3$, and suppose $\ti{\mathsf{frame}}(a_0, \ldots,a_{n-1})$.
Then there exist Jordan arcs $\alpha_0$, \ldots, $\alpha_{n-1}$
such that $\alpha_0\ldots\alpha_{n-1}$ is a Jordan curve lying in
the interior of $a_0+\cdots+a_{n-1}$, and $\alpha_i \subseteq
\ti{(a_i+ a_{\md{i+1}})}$, for all $i$, $0 \leq i < n$. \end{lemma} \begin{proof} For all $i$ ($0 \leq i < n$), pick $p'_i \in \ti{a}_i$, and pick a Jordan arc $\alpha'_i \subseteq \ti{(a_i + a_{\md{i+1}})}$ from $p'_i$ to $p'_{\md{i+1}}$. For all $i$ ($2 \leq i \leq n$), let $p_{\md{i}}$ be the first point of $\alpha'_{i-1}$ lying on $\alpha'_{\md{i}}$, and let $p''_1$ be the first point of $\alpha'_0$ lying on $\alpha'_1$. For all $i$ ($2 \leq i < n$), let $\alpha_i = \alpha'_i[p_i, p_{\md{i+1}}]$, let $\alpha''_1 = \alpha'_1[p''_1, p_2]$, and let $\alpha''_0$ denote the section of $\alpha'_0$ (in the appropriate direction) from $p_0$ to $p''_1$. Now let $p_1$ be the first point of $\alpha''_0$ lying on $\alpha''_1$, let $\alpha_0 = \alpha''_0[p_0,p_1]$, and let $\alpha_1 = \alpha''_1[p_1,p_2]$. It is routine to verify that the arcs $\alpha_0$, \ldots, $\alpha_{n-1}$ have the required properties. \end{proof}
We will now show how to separate certain types of regions in the language $\ensuremath{\mathcal{B}c^\circ}$. We make use of Lemma~\ref{lma:FrameLemmaInt} and the following fact.
\begin{lemma}\label{lma:Newman}{\cite[p.~137]{ijcai:Newman64}}
Let $F$, $G$ be disjoint, closed subsets of $\mathbb{R}^2$ such that
$\mathbb{R}^2\setminus F$ and $\mathbb{R}^2 \setminus G$ are connected. Then
$\mathbb{R}^2\setminus (F \cup G)$ is connected. \end{lemma}
\begin{figure}
\caption{The Jordan curve $\Gamma=\tau_0\tau_1\tau_2$ separating $m_1$ from $m_2$.}
\label{fig:SepBci}
\end{figure}
We say that a region $r$ is \emph{quasi-bounded} if either $r$ or $-r$ is bounded. We can now prove the following.
\begin{lemma}\label{lma:Cci2BciStar}
There exists a $\ensuremath{\mathcal{B}c^\circ}$-formula $\eta^*(r,s, \bar{v})$ with the
following properties: \textup{(}i\textup{)} $\eta^*(r,s, \bar{v})$
entails $\neg C(r,s)$ over ${\sf RC}(\mathbb{R}^2)$; \textup{(}ii\textup{)}
if the regions $r$ and $s$ can be separated by a Jordan curve,
then there exist polygons $\bar{v}$ such that
$\eta^*(r,s, \bar{v})$; \textup{(}iii\textup{)} if $r$,
$s$ are disjoint polygons such that $r$ is quasi-bounded and
$\mathbb{R}^2 \setminus (r+s)$ is connected, then there exist polygons
$\bar{v}$ such that $\eta^*(r,s, \bar{v})$. \end{lemma} \begin{proof}
Let $\bar{v}$ be the tuple of variables $(t_0,\ldots,t_5, m_1, m_2)$, and let
$\eta^*(r,s,\bar{v})$ be the formula
\begin{multline*}
\ti{\mathsf{frame}}(t_0,\ldots,t_5)\land r\leq m_1 \wedge s\leq m_2 \wedge \\
(t_0 + \ldots + t_5) \cdot (m_1 +m_2) = 0 \wedge \bigwedge_{\substack{i=1,3,5\\j=1,2}} c^\circ(t_i + m_j).
\end{multline*}
Property ({\em i}) follows by a simple planarity argument. By
$\ti{\mathsf{frame}}(t_0,\ldots,t_5)$ and Lemma~\ref{lma:FrameLemmaInt},
let $\alpha_i$, for $0\leq i\leq 5$, be such that
$\Gamma=\alpha_0\cdots\alpha_5$ is a Jordan curve included in
$\ti{(t_0 + \cdots + t_5)}$. Further, let
$\tau_i=\alpha_{2i}\alpha_{2i+1}$, $0\leq i\leq 2$
(Fig.~\ref{fig:SepBci}). Note that all points in $t_{2i+1}$,
$0\leq i\leq 2$, that are on $\Gamma$ are on $\tau_i$. By
$c^\circ(t_{2i+1} + m_1)$, $0\leq i\leq 2$, let
$\mu_i\subseteq(m_1+t_{2i+1})^\circ$ be a Jordan arc with
endpoints $M_1\in m_1^\circ$ and $T_i\in\tau_i\cap t_{2i+1}^\circ$.
We may assume that these arcs intersect only at their common
endpoint $M_1$, so that they divide the residual domain of
$\Gamma$ which contains $M_1$ into three sub-domains $n_i$, for
$0\leq i\leq 2$. The existence of a point $M_2\in m_2$ in any
$n_i$, $0\leq i\leq 2$, would contradict $c^\circ(t_{2i+1} + m_2)$.
So, $m_2$ must be contained entirely in the residual domain of
$\Gamma$ not containing $M_1$. Similarly, all points in $m_1$ must
lie in the residual domain of $\Gamma$ containing $M_1$. It follows
that $m_1$ and $m_2$ are disjoint, and by $r\leq m_1$ and
$s\leq m_2$, that $r$ and $s$ are disjoint as well.
For Property ({\em ii}), let $\Gamma$ be a Jordan curve
separating $r$ and $s$. Now thicken $\Gamma$ to form an annular
element of ${\sf RCP}(\mathbb{R}^2)$, still disjoint from $r$ and $s$, and divide
this annulus into the six regions $t_0,\ldots,t_5$, essentially as shown in Fig.~\ref{fig:Cci2BciStar}.
Choose $m_1$ and $m_2$ to be the connected components
of $-(t_0+\cdots+t_5)$ containing $r$ and $s$, respectively.
For Property ({\em iii}), it is routine using Lemma~\ref{lma:Newman}
to show that there exists a piecewise linear Jordan curve
$\Gamma$ in $\mathbb{R}^2 \setminus(r+s)$ separating $r$ and $s$. \end{proof}
\begin{figure}
\caption{Separating disjoint polygons by an annulus.}
\label{fig:Cci2BciStar}
\end{figure} \begin{lemma}\label{lma:Cci2Bci}
There exists a $\ensuremath{\mathcal{B}c^\circ}$-formula $\eta(r,s, \bar{v})$ with the following
properties: \textup{(}i\textup{)} $\eta(r,s, \bar{v})$ entails $\neg
C(r,s)$ over ${\sf RC}(\mathbb{R}^2)$; \textup{(}ii\textup{)} if $r$, $s$ are
disjoint quasi-bounded polygons, then there exist
polygons $\bar{v}$ such that $\eta(r, s, \bar{v})$. \end{lemma} \begin{proof}
Let $\eta(r,s, \bar{v})$ be the formula
\begin{equation*}
r = r_1 + r_2 \wedge s = s_1 + s_2 \wedge \bigwedge_{\substack{1 \leq i \leq 2\\
1 \leq j \leq 2}} \eta^*(r_i, s_j, \bar{u}_{i,j}),
\end{equation*}
where $\eta^*$ is the formula given in
Lemma~\ref{lma:Cci2BciStar}. Property ({\em i}) is then immediate. For
Property ({\em ii}), it is routine to show that there exist polygons $r_1$, $r_2$
such that $r = r_1 + r_2$ and $\mathbb{R}^2 \setminus r_i$ is connected for $i
= 1,2$; let $s_1$, $s_2$ be chosen analogously. Then for all $i$ ($1 \leq i
\leq 2$) and $j$ ($1 \leq j \leq 2$) we have $r_i \cap s_j =
\emptyset$ and, by Lemma~\ref{lma:Newman}, $\mathbb{R}^2 \setminus (r_i +
s_j)$ is connected. By Lemma~\ref{lma:Cci2BciStar}, let
$\bar{u}_{i,j}$ be such that $\eta^*(r_i, s_j, \bar{u}_{i,j})$. \end{proof} We are now ready to prove: \begin{swetheorem}{Theorem~\ref{theo:inftyBci}} There is a $\ensuremath{\mathcal{B}c^\circ}$-formula satisfiable over ${\sf RC}(\mathbb{R}^2)$, but only by regions with infinitely many components. \end{swetheorem} \begin{proof} We first write a $\ensuremath{\mathcal{C}c^\circ}$-formula, $\varphi^*_\infty$ with the required properties, and then show that all occurrences of $C$ can be eliminated. Note that $\varphi^*_\infty$ is not the same as the formula $\ti{\varphi}_\infty$ constructed for the proof of Corollary~\ref{cor:inftyCci}.
Let $s$, $s'$, $a$, $a'$, $b$, $b'$, $a_{i,j}$ and $b_{i,j}$ ($0 \leq i <2$, $1 \leq j \leq 3$) be variables. The constraints \begin{align}
&\ti{\mathsf{frame}}(s,s',b,b',a,a')\label{eq:BciInf1}\\
&\ti{\mathsf{stack}}(s,b_{i,1},b_{i,2},b_{i,3},b)\label{eq:BciInf2}\\
&\ti{\mathsf{stack}}(b_{\md{i-1},2},a_{i,1},a_{i,2},a_{i,3},a)\label{eq:BciInf3}\\
&\ti{\mathsf{stack}}(a_{\md{i-1},2},b_{i,1},b_{i,2},b_{i,3},b)\label{eq:BciInf4} \end{align} are evidently satisfied by the arrangement of Fig.~\ref{fig:InfBci}. \begin{figure}\label{fig:InfBci}
\end{figure}
Let $\varphi^*_\infty$ be the conjunction of~\eqref{eq:BciInf1}--\eqref{eq:BciInf4} as well as all conjuncts \begin{align}
r\cdot r'=0, \label{eq:BciInf5} \end{align} where $r$ and $r'$ are any two distinct regions depicted in Fig.~\ref{fig:InfBci}. Note that, in the arrangement of Fig.~\ref{fig:InfBci}, the regions $a_{i,j}$ and $b_{i,j}$ have infinitely many connected components. We will now show that the same is true for every satisfying tuple of $\varphi^*_\infty$.
By \eqref{eq:BciInf1}, we can use Lemma~\ref{lma:FrameLemmaInt} to construct a Jordan curve $\Gamma=\sigma\sigma'\beta\beta'\alpha\alpha'$ whose segments are Jordan arcs lying in the respective sets $(s+s')^\circ$, $(s'+b)^\circ$, $(b+b')^\circ$, $(b'+a)^\circ$, $(a+a')^\circ$, $(a'+s)^\circ$. Further, let $\sigma_0=\sigma\sigma'$, $\beta_0=\beta\beta'$ and $\alpha_0=\alpha\alpha'$ (Fig.~\ref{subfig:BciFrame}). Note that all points in $s$, $a$ and $b$ that are on $\Gamma$ are on $\sigma_0$, $\alpha_0$ and $\beta_0$, respectively. Let $o_0'\in\sigma_0 \cap\ti s$, and let $q^*\in \beta_0\cap\ti b$. By \eqref{eq:BciInf2} and Lemma~\ref{lma:StackLemmai} we can connect $o_0'$ to $q^*$ by a Jordan arc $\beta_{0,1}'\beta_{0,2}\beta_{0,3}'$ whose segments lie in the respective sets $\ti{(s+b_{0,1})}$, $\ti{(b_{0,1}+b_{0,2}+b_{0,3})}$ and $\ti{(b+b_{0,3})}$ (Fig.~\ref{subfig:BciBeta0}). Let $o_0$ be the last point on $\beta_{0,1}'$ that is on $\sigma_0$ and let $\beta_{0,1}$ be the final segment of $\beta_{0,1}'$ starting at $o_0$. Similarly, let $q_0$ be the first point on $\beta_{0,3}'$ that is on $\beta_0$ and let $\beta_{0,3}$ be the initial segment of $\beta_{0,3}'$ ending at $q_0$. Hence, the arc $\beta_{0,1}\beta_{0,2} \beta_{0,3}$ divides one of the regions bounded by $\Gamma$ into two sub-regions. We denote the sub-region whose boundary is disjoint from $\alpha_0$ by $U_0$, and the other sub-region we denote by $U_0'$. Let $\beta_1:=\beta_{0,3}\beta_0[q_0,r]\subseteq \ti{(b+b_{0,3}+b_{1,3})}$. \begin{figure}
\caption{Establishing infinite sequences of arcs.}
\label{subfig:BciFrame}
\label{subfig:BciBeta0}
\label{subfig:BciAlpha0}
\label{subfig:BciRedrawn}
\label{subfig:BciBeta1}
\label{fig:BciInfCmp}
\end{figure}
We will now construct a cross-cut $\alpha_{0,1}\alpha_{0,2} \alpha_{0,3}$ in $U_0'$. Let $e_0'\in\beta_{0,2}\cap \ti{b_{0,2}}$ and $p^*\in\alpha_0\cap\ti a$. By \eqref{eq:BciInf3} and Lemma~\ref{lma:StackLemmai} we can connect $e_0'$ to $p^*$ by a Jordan arc $\alpha_{0,1}'\alpha_{0,2}\alpha_{0,3}'$ whose segments lie in the respective sets $\ti{(b_{0,2}+a_{0,1})}$, $\ti{(a_{0,1}+a_{0,2}+a_{0,3})}$ and $\ti{(a+a_{0,3})}$ (Fig.~\ref{subfig:BciAlpha0}). Let $e_0$ be the last point on $\alpha_{0,1}'$ that is on $\beta_{0,2}$ and let $\alpha_{0,1}$ be the final segment of $\alpha_{0,1}'$ starting at $e_0$. Similarly, let $p_0$ be the first point on $\alpha_{0,3}'$ that is on $\alpha_0$ and let $\alpha_{0,3}$ be the initial segment of $\alpha_{0,3}'$ ending at $p_0$. By the non-overlapping constraints, $\alpha_{0,1}\alpha_{0,2}\alpha_{0,3}$ does not intersect the boundaries of $U_0$ and $U_0'$ except at its endpoints, and hence it is a cross-cut in one of these regions. Moreover, that region has to be $U_0'$ since the boundary of $U_0$ is disjoint from $\alpha_0$. So, $\alpha_{0,1}\alpha_{0,2} \alpha_{0,3}$ divides $U_0'$ into two sub-regions. We denote the sub-region whose boundary contains
$\beta_1$ by $W_0$, and the other sub-region we denote by $V_0$. Let
$\alpha_1:=\alpha_{0,3}\alpha_0[p_0,r]$ (Fig.~\ref{subfig:BciRedrawn}). Note that
$\alpha_1\subseteq \ti{(a+a_{0,3}+a_{1,3})}$.
We can now forget about the region $U_0$, and start constructing a cross-cut
$\beta_{1,1}\beta_{1,2}\beta_{1,3}$ in $W_0$. As before, let $\beta_{1,1}'\beta_{1,2}\beta_{1,3}'$
be a Jordan arc connecting a point $o_1'\in\alpha_{0,2}\cap\ti a_{0,2}$ to a point
$q^*\in\beta_1\cap\ti b$ such that its segments are contained in the respective sets
$\ti{(a_{0,2}+b_{1,1})}$, $\ti{(b_{1,1}+b_{1,2}+b_{1,3})}$ and $\ti{(b+b_{1,3})}$. As before, we choose
$\beta_{1,1}\subseteq \beta_{1,1}'$ and $\beta_{1,3}\subseteq \beta_{1,3}'$ so that the Jordan arc
$\beta_{1,1}\beta_{1,2}\beta_{1,3}$ with its endpoints removed is disjoint from the boundaries of
$V_0$ and $W_0$. Hence $\beta_{1,1}\beta_{1,2}\beta_{1,3}$ has to be a cross-cut in $V_0$ or
$W_0$, and since the boundary of $V_0$ is disjoint from $\beta_1$ it has to be a cross-cut in
$W_0$ (Fig.~\ref{subfig:BciBeta1}). So, $\beta_{1,1}\beta_{1,2}\beta_{1,3}$ separates $W_0$
into two regions $U_1$ and $U_1'$ so that the boundary of $U_1$ is disjoint from $\alpha_1$.
Let $\beta_2:=\beta_{1,3}\beta_1[q_1,r]\subseteq \ti{(b+b_{0,3}+b_{1,3})}$. Now, we can ignore
the region $V_0$, and reasoning as before we can construct a cross-cut
$\alpha_{1,1}\alpha_{1,2}\alpha_{1,3}$ in $U_1'$ dividing it into two sub-regions $V_1$ and $W_1$. \begin{figure}\label{fig:BciInftySep}
\end{figure}
Evidently, this process continues forever. Now, note that by
construction and \eqref{eq:BciInf5}, $W_{2i}$ contains in its
interior $\beta_{2i+1,2}$ together with the connected component $c$
of $b_{1,2}$ which contains $\beta_{2i+1,2}$. On the other hand,
$W_{2i+2}$ is disjoint from $c$, and since $W_i\subseteq W_j$ whenever $i>j$, it follows that
$b_{1,2}$ has infinitely many connected components.
So far we know that the $\ensuremath{\mathcal{C}c^\circ}$-formula $\varphi^*_{\infty}$ forces infinitely many components. Now we replace every conjunct in $\varphi^*_{\infty}$ of the form $\lnot C(r,s)$ by $\eta^*(r,s,\bar v)$, where $\bar v$ are fresh variables each time. The resulting formula entails $\varphi^*_{\infty}$, so we only have to show that it is still satisfiable. By Lemma~\ref{lma:Cci2BciStar} (\emph{ii}), it suffices to separate by Jordan curves every two regions in Fig.~\ref{fig:InfBci} that are required to be disjoint. Fig.~\ref{fig:BciInftySep} shows a curve which separates the regions $b_{0,2}$ and $a_{0,2}$. All other non-contact constraints are treated analogously. \end{proof}
\section{Undecidability of \ensuremath{\mathcal{B}c}{} and \ensuremath{\mathcal{C}c}{} in the Euclidean plane} \label{sec:UndecidabilityB} In this section, we prove the undecidability of the problems $\textit{Sat}(\mathcal{L},{\sf RC}(\mathbb{R}^2))$ and $\textit{Sat}(\mathcal{L},{\sf RCP}(\mathbb{R}^2))$, for $\mathcal{L}$ any of $\ensuremath{\mathcal{B}c}$, $\ensuremath{\mathcal{C}c}$, $\ensuremath{\mathcal{B}c^\circ}$ or $\ensuremath{\mathcal{C}c^\circ}$. We begin with some technical preliminaries, again employing the notation from the proof of Theorem~\ref{theo:inftyBci}: if $\alpha$ is a Jordan arc, and $p$, $q$ are points on $\alpha$ such that $q$ occurs after $p$, we denote by $\alpha[p,q]$ the segment of $\alpha$ from $p$ to $q$. For brevity of exposition, we allow the case $p= q$, treating $\alpha[p,q]$ as a (degenerate) Jordan arc.
Our first technical preliminary is to formalize our earlier observations concerning the formula $\mathsf{stack}(\tseq{a}_1, \ldots, \tseq{a}_n)$, defined by:
\begin{equation*} \bigwedge_{1 \leq i \leq n}
c(\intermediate{a}_i + \inner{a}_{i + 1} + \cdots + \inner{a}_n) \wedge \bigwedge_{\substack{1 \leq i < j \leq n\\ j - i > 1}} \neg C(a_i,a_j). \end{equation*}
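For orientation, the definition unfolds in the smallest non-trivial case $n = 3$ (a worked instance included purely as an illustration) to the conjunction
\begin{equation*}
c(\intermediate{a}_1 + \inner{a}_2 + \inner{a}_3) \wedge c(\intermediate{a}_2 + \inner{a}_3) \wedge c(\intermediate{a}_3) \wedge \neg C(a_1,a_3),
\end{equation*}
the single non-contact conjunct arising because $(i,j)=(1,3)$ is the only pair with $j-i>1$.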
\begin{lemma} \label{lma:stackLemma} Let $\tseq{a}_1,\ldots,\tseq{a}_n$ be 3-regions satisfying $\mathsf{stack}(\tseq{a}_1,\ldots,\tseq{a}_n)$, for $n \geq 3$. Then, for every point $p_0\in \intermediate{a}_1$ and every point $p_n \in \inner{a}_n$, there exist points $p_1, \ldots, p_{n-1}$ and Jordan arcs $\alpha_1, \ldots, \alpha_n$ such that\textup{:}
\begin{itemize} \item[\textup{(}i\textup{)}] $\alpha = \alpha_1 \cdots \alpha_n$ is a Jordan arc from $p_0$ to $p_n$\textup{; } \item[\textup{(}ii\textup{)}] for all $i$ \textup{(}$0 \leq i < n$\textup{)}, $p_i \in \intermediate{a}_{i+1} \cap \alpha_i$\textup{;} and \item[\textup{(}iii\textup{)}] for all $i$ \textup{(}$1 \leq i \leq n$\textup{)}, $\alpha_i \subseteq a_i$. \end{itemize} \end{lemma}
\begin{proof} Since $\intermediate{a}_1 + \inner{a}_2 + \cdots + \inner{a}_n$ is a connected subset of $\ti{(a_1 + \intermediate{a}_2 + \cdots +
\intermediate{a}_n)}$, let $\beta_1$ be a Jordan arc connecting $p_0$ to $p_n$ in $\ti{(a_1 + \intermediate{a}_2 + \cdots +
\intermediate{a}_n)}$. Since $a_1$ is disjoint from all the $a_i$ except $a_2$, let $p_1$ be the first point of $\beta_1$ lying in $\intermediate{a}_2$, so $\beta_1[p_0,p_1]\subseteq \ti{a}_1\cup \{ p_1\}$, i.e., the arc $\beta_1[p_0,p_1]$ is either included in $\ti{a}_1$, or is an end-cut of $\ti{a}_1$. (We do not rule out $p_0 = p_1$.) Similarly, let $\beta'_2$ be a Jordan arc connecting $p_1$ to $p_n$ in $\ti{(a_2 + \intermediate{a}_3 + \cdots +
\intermediate{a}_n)}$, and let $q_1$ be the last point of $\beta'_2$ lying on $\beta_1[p_0,p_1]$. If $q_1 = p_1$, then set $v_1 = p_1$, $\alpha_1 = \beta_1[p_0,p_1]$, and $\beta_2 = \beta'_2$, so that the endpoints of $\beta_2$ are $v_1$ and $p_n$. Otherwise, we have $q_1 \in \ti{a}_1$. We can now construct an arc $\gamma_1 \subseteq \ti{a}_1 \cup \{ p_1 \}$ from $p_1$ to a point $v_1$ on $\beta'_2[q_1,p_n]$, such that $\gamma_1$ intersects $\beta_1[p_0,p_1]$ and $\beta'_2[q_1,p_n]$ only at its endpoints, $p_1$ and $v_1$ (upper diagram in Fig.~\ref{fig:stackLemma}). Let $\alpha_1 = \beta_1[p_0,p_1]\gamma_1$, and let $\beta_2 = \beta'_2[v_1,p_n]$.
Since $\beta_2$ contains a point $p_2 \in \intermediate{a}_3$, we may iterate this procedure, obtaining $\alpha_2, \alpha_3, \ldots \alpha_{n-1}, \beta_{n}$. We remark that $\alpha_i$ and $\alpha_{i+1}$ have a single point of contact by construction, while $\alpha_i$ and $\alpha_j$ ($i < j-1$) are disjoint by the constraint $\neg C(a_i, a_j)$. Finally, we let $\alpha_n = \beta_n$ (lower diagram in Fig.~\ref{fig:stackLemma}). \end{proof} \begin{figure}
\caption{Proof of Lemma~\ref{lma:stackLemma}.}
\label{fig:stackLemma}
\end{figure}
In fact, we can add a `switch' $w$ to the formula $\mathsf{stack}(\tseq{a}_1, \ldots, \tseq{a}_n)$, in the following sense. If $w$ is a region variable, consider the formula $\mathsf{stack}_w(\tseq{a}_1, \ldots, \tseq{a}_n)$
\begin{align*} \neg C(w\cdot \intermediate{a}_1, (-w) \cdot \intermediate{a}_1) \ \wedge \ \mathsf{stack}((-w)\cdot \tseq{a}_1, \tseq{a}_2, \ldots, \tseq{a}_n), \end{align*}
where $w \cdot \tseq{a}$ denotes the 3-region $(w\cdot a,w\cdot \intermediate{a},w\cdot \inner{a})$. The first conjunct of $\mathsf{stack}_w(\tseq{a}_1, \ldots, \tseq{a}_n)$ ensures that any component of $\intermediate{a}_1$ is either included in $w$ or included in $-w$. The second conjunct then has the same effect as $\mathsf{stack}(\tseq{a}_1, \ldots, \tseq{a}_n)$ for those components of $\intermediate{a}_1$ included in $-w$. That is, if $p \in \intermediate{a}_1 \cdot (-w)$, we can find an arc $\alpha_1 \cdots \alpha_n$ starting at $p$, with the properties of Lemma~\ref{lma:stackLemma}. However, if $p \in \intermediate{a}_1 \cdot w$, no such arc need exist. Thus, $w$ functions so as to `de-activate' the formula $\mathsf{stack}_w(\tseq{a}_1, \ldots, \tseq{a}_n)$ for any component of $\intermediate{a}_1$ included in it.
As a further application of Lemma~\ref{lma:stackLemma}, consider the formula $\mathsf{frame}(\tseq{a}_0, \ldots, \tseq{a}_n)$ given by:
\begin{align}
\mathsf{stack}(\tseq{a}_0, \ldots, \tseq{a}_{n-1}) \wedge \lnot C(a_n,a_1+\ldots+a_{n-2})\wedge\nonumber\\
c(\intermediate a_n)\wedge \intermediate{a}_0\cdot \intermediate a_n\neq 0\wedge
\inner{a}_{n-1}\cdot \intermediate a_n\neq 0. \label{eq:frame} \end{align}
This formula allows us to construct Jordan curves in the plane, in the following sense: \begin{lemma} \label{lma:FrameLemma} Let $n\geq 3$, and suppose $\mathsf{frame}(\tseq{a}_0, \ldots, \tseq{a}_n)$. Then there exist Jordan arcs $\gamma_0$, \ldots, $\gamma_n$ such that $\gamma_0\ldots\gamma_n$ is a Jordan curve, and $\gamma_i \subseteq a_i$, for all $i$, $0 \leq i \leq n$. \end{lemma}
\begin{proof} By $\mathsf{stack}(\tseq a_0,\ldots,\tseq a_{n-1})$, let $\alpha_0,\ldots,\alpha_{n-1}$ be Jordan arcs in the respective regions $a_0,\ldots,a_{n-1}$ such that, $\alpha=\alpha_0\cdots\alpha_{n-1}$ is a Jordan arc connecting a point $p'\in \intermediate{a}_0\cdot \intermediate a_n$ to a point $q'\in \inner{a}_{n-1}\cdot \intermediate a_n$ (see Fig.~\ref{fig:FrameLemma}). Because $\intermediate a_n$ is a connected subset of the interior of $a_n$, let $\alpha_n\subseteq \ti a_n$ be an arc connecting $p'$ and $q'$. Note that $\alpha_n$ does not intersect $\alpha_i$, for $1\leq i< n-1$. Let $p$ be the last point on $\alpha_0$ that is on $\alpha_n$ (possibly $p'$), and $q$ be the first point on $\alpha_{n-1}$ that is on $\alpha_n$ (possibly $q'$). Let $\gamma_0$ be the final segment of $\alpha_0$ starting at $p$. Let $\gamma_i:=\alpha_i$, for $1\leq i\leq n-2$. Let $\gamma_{n-1}$ be the initial segment of $\alpha_{n-1}$ ending at $q$. Finally, take $\gamma_n$ to be the segment of $\alpha_n$ between $p$ and $q$. Evidently, the arcs $\gamma_i$, $0\leq i\leq n$, are as required. \end{proof}
\begin{figure}
\caption{Establishing a Jordan curve.}
\label{fig:FrameLemma}
\end{figure}
Our final technical preliminary is a simple device for labelling arcs in diagrams.
\begin{lemma} \label{lma:labelling} \label{lma:labels} Suppose $r$, $t_1$, \ldots, $t_\ell$ are regions such that
\begin{equation} \label{eq:labelling} (r \leq t_1 + \cdots + t_\ell) \wedge \bigwedge_{1 \leq i < j \leq \ell} \neg C(r \cdot t_i, r \cdot t_j), \end{equation}
and let $X$ be a connected subset of $r$. Then $X$ is included in exactly one of the $t_i$, $1 \leq i \leq \ell$. \end{lemma}
\begin{proof} If $X \cap t_1$ and $X \cap t_2$ are non-empty, then $X \cap t_1$ and $X \cap (t_2 + \cdots + t_\ell)$ partition $X$ into non-empty, non-intersecting sets, closed in $X$, contradicting the connectedness of $X$. Hence $X$ meets exactly one of the $t_i$ and, since the $t_i$ cover $r$, is included in it. \end{proof}
When~\eqref{eq:labelling} holds, we may think of the regions $t_1, \ldots, t_\ell$ as `labels' for any connected $X \subseteq r$---and, in particular, for any Jordan arc $\alpha \subseteq r$. Hence, any sequence $\alpha_1, \ldots, \alpha_n$ of such arcs encodes a word over the alphabet $\set{t_1, \ldots, t_\ell}$.
The remainder of this section is given over to a proof of
\begin{swetheorem}{Theorem~\ref{theo:undecidable}} For $\mathcal{L}\in \{\ensuremath{\mathcal{B}c^\circ}, \ensuremath{\mathcal{B}c}, \ensuremath{\mathcal{C}c^\circ}, \ensuremath{\mathcal{C}c}\}$, $\textit{Sat}(\mathcal{L},{\sf RC}(\mathbb{R}^2))$ is r.e.-hard, and $\textit{Sat}(\mathcal{L},{\sf RCP}(\mathbb{R}^2))$ is r.e.-complete. \end{swetheorem}
We have already established the upper bounds; we consider here only the lower bounds, beginning with an outline of our proof strategy. Let a PCP-instance $\mathbf{w} = (\set{0,1}, T, \mathsf{w}_1, \mathsf{w}_2)$ be given, where $T$ is a finite alphabet, and $\mathsf{w}_i\colon T^* \rightarrow \set{0,1}^*$ a word-morphism ($i = 1,2$). We call the elements of $T$
{\em tiles}, and, for each tile $t$, we call $\mathsf{w}_1(t)$ the {\em
lower word} of $t$, and $\mathsf{w}_2(t)$ the {\em upper word} of
$t$. Thus, $\mathbf{w}$ asks whether there is a sequence of tiles
(repeats allowed) such that the concatenation of their upper words
is the same as the concatenation of their lower words. We shall
henceforth restrict all (upper and lower) words on tiles to be
non-empty. This restriction simplifies the encoding below, and
does not affect the undecidability of the PCP.
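To fix intuitions, the following small Python sketch (our own illustration; the toy instance and names are hypothetical and play no role in the construction below) spells out what it means for $\mathbf{w}$ to be positive: a non-empty tile sequence is a witness precisely when the concatenated lower words coincide with the concatenated upper words.
\begin{verbatim}
# Illustrative only: a toy PCP instance over the alphabet {0,1}.
# w1[t] is the lower word w_1(t) of tile t, w2[t] its upper word w_2(t).
w1 = {'t1': '1',   't2': '10', 't3': '011'}
w2 = {'t1': '101', 't2': '00', 't3': '11'}

def is_witness(tiles, w1, w2):
    """A non-empty tile sequence witnesses positivity iff the
    concatenation of lower words equals that of upper words."""
    if not tiles:
        return False
    return ''.join(w1[t] for t in tiles) == ''.join(w2[t] for t in tiles)

# This instance is positive: t1 t3 t2 t3 gives 1.011.10.011 = 101.11.00.11.
print(is_witness(['t1', 't3', 't2', 't3'], w1, w2))   # True
\end{verbatim}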
We define a formula $\varphi_\mathbf{w}$ consisting of a large conjunction of $\ensuremath{\mathcal{C}c}$-literals, which, for ease of understanding, we introduce in groups. Whenever conjuncts are introduced, it can be readily checked that---provided $\mathbf{w}$ is positive---they are satisfiable by elements of ${\sf RCP}(\mathbb{R}^2)$. (Figs.~\ref{fig:concrete1} and~\ref{fig:concrete2} depict {\em part} of a satisfying assignment; this drawing is additionally useful as an aid to intuition throughout the course of the proof.) The main object of the proof is to show that, conversely, if $\varphi_\mathbf{w}$ is satisfied by any tuple in ${\sf RC}(\mathbb{R}^2)$, then $\mathbf{w}$ must be positive. Thus, the following are equivalent: \begin{enumerate} \item $\mathbf{w}$ is positive; \item $\varphi_\mathbf{w}$ is satisfiable over ${\sf RCP}(\mathbb{R}^2)$; \item $\varphi_\mathbf{w}$ is satisfiable over ${\sf RC}(\mathbb{R}^2)$. \end{enumerate} This establishes the r.e.-hardness of $\textit{Sat}(\mathcal{L},{\sf RC}(\mathbb{R}^2))$ and $\textit{Sat}(\mathcal{L},{\sf RCP}(\mathbb{R}^2))$ for $\mathcal{L} = \ensuremath{\mathcal{C}c}$; we then extend the result to the languages $\ensuremath{\mathcal{B}c}$, $\ensuremath{\mathcal{C}c^\circ}$ and $\ensuremath{\mathcal{B}c^\circ}$.
\begin{figure}\label{fig:concrete1}
\end{figure} The proof proceeds in five stages.
\noindent \textbf{Stage 1.} In the first stage, we define an assemblage of arcs that will serve as a scaffolding for the ensuing construction. Consider the arrangement of polygonal 3-regions depicted in Fig.~\ref{fig:concrete1}, assigned to the 3-region variables $\tseq{s}_0, \dots, \tseq{s}_9$, $\tseq{s}_8', \dots, \tseq{s}_1'$, $\tseq{d}_0,\dots, \tseq{d}_6$
as indicated.
It is easy to verify that this arrangement can be made to satisfy the following formulas:
\begin{align} \label{eq:PCPFrame} & \mathsf{frame}(\tseq{s}_0, \tseq{s}_1,\dots, \tseq{s}_8,\tseq{s}_9,\tseq{s}_8',\dots, \tseq{s}_1'),\\
\label{eq:PCPCord:endpoints} & (s_0 \leq \intermediate{d}_0) \land (s_9 \leq \inner{d}_6),\\
\label{eq:PCPCord} & \mathsf{stack}(\tseq{d}_0,\dots,\tseq{d}_6). \end{align}
And trivially, the arrangement can be made to satisfy any formula
\begin{equation}\label{eq:pcp:C} \neg C(r,r') \end{equation}
for which the corresponding 3-regions $\tseq{r}$ and $\tseq{r}'$ are drawn as not being in contact. (Remember, $r$ is the outer-most shell of the 3-region $\tseq{r}$, and similarly for $r'$.) Thus, for example, \eqref{eq:pcp:C} includes $\neg C(s_0, d_1)$, but not $\neg C(s_0, d_0)$ or $\neg C(d_0, d_1)$.
Now suppose $\tseq{s}_0, \dots, \tseq{s}_9$, $\tseq{s}_8', \dots, \tseq{s}_1'$, $\tseq{d}_0,\dots, \tseq{d}_6$ is {\em any} collection of 3-regions (not necessarily polygonal) satisfying~\eqref{eq:PCPFrame}--\eqref{eq:pcp:C}. By Lemma~\ref{lma:FrameLemma} and~\eqref{eq:PCPFrame}, let $\gamma_0, \dots, \gamma_9,\gamma_8',\dots,\gamma_1'$ be Jordan arcs included in the respective regions $s_0, \dots, s_9,s_8',\dots,s_1'$, such that $\Gamma = \gamma_0 \cdots \gamma_9 \cdot \gamma_8' \cdots \gamma_1'$ is a Jordan curve (note that $\gamma_i'$ and $\gamma_i$ have opposite directions). We select points $\tilde{o}_{1}$ on $\gamma_0$ and $\tilde{o}_2$ on $\gamma_9$ (see Fig.~\ref{fig:arcs0}).
By~\eqref{eq:PCPCord:endpoints}, $\tilde{o}_1 \in \intermediate{d}_0$ and $\tilde{o}_2 \in \inner{d}_6$. By Lemma~\ref{lma:stackLemma} and~\eqref{eq:PCPCord}, let $\tilde{\chi}_1$, $\chi_2$, $\tilde{\chi}_3$ be Jordan arcs in the respective regions
\begin{equation*} (d_0 + d_1),\qquad (d_2 + d_3 + d_4),\qquad (d_5+ d_6) \end{equation*}
such that $\tilde{\chi}_1 \chi_2 \tilde{\chi}_3$ is a Jordan arc from $\tilde{o}_1$ to $\tilde{o}_2$. Let $o_{1}$ be the last point of $\tilde{\chi}_1$ lying on $\Gamma$, and let $\chi_1$ be the final segment of $\tilde{\chi}_1$, starting at $o_1$. Let $o_2$ be the first point of $\tilde{\chi}_3$ lying on $\Gamma$, and let $\chi_3$ be the initial segment of $\tilde{\chi}_3$, ending at $o_2$. \begin{figure}
\caption{The arcs $\gamma_0, \ldots, \gamma_9$ and $\chi_1, \ldots \chi_3$.}
\label{fig:arcs0}
\end{figure} By~\eqref{eq:pcp:C}, we see that the arc $\chi = \chi_1\chi_2\chi_3$ intersects $\Gamma$ only in its endpoints, and is thus a chord of $\Gamma$, as shown in Fig.~\ref{fig:arcs0}.
A word is required concerning the generality of this diagram. The reader is to imagine the figure drawn on a {\em spherical} canvas, of which the sheet of paper or computer screen in front of him is simply a small part. This sphere represents the plane with a `point' at infinity, under the usual stereographic projection. We do not say where this point at infinity is, other than that it never lies on a drawn arc. In this way, a diagram in which the spherical canvas is divided into $n$ cells represents $n$ different configurations in the plane---one for each of the cells in which the point at infinity may be located. For example, Fig.~\ref{fig:arcs0} represents three topologically distinct configurations in $\mathbb{R}^2$, and, as such, depicts the arcs $\gamma_0, \ldots, \gamma_9$, $\gamma'_1, \ldots, \gamma'_8$, $\chi_1$, $\chi_2$, $\chi_3$ and points $o_1$, $o_2$ in full generality. All diagrams in this proof are to be interpreted in this way. We stress that our `spherical diagrams' are simply a convenient device for using one drawing to represent several possible configurations in the Euclidean plane: in particular, we are interested only in the satisfiability of $\ensuremath{\mathcal{C}c}$-formulas over ${\sf RCP}(\mathbb{R}^2)$ and ${\sf RC}(\mathbb{R}^2)$, not over the regular closed algebra of any other space! For ease of reference, we refer to the two rectangles in Fig.~\ref{fig:arcs0} as the `upper window' and `lower window', it being understood that these are simply handy labels: in particular, either of these `windows' (but not both) may be unbounded.
\noindent \textbf{Stage 2.} In this stage, we construct two sequences of arcs, $\set{\zeta_i}$, $\set{\eta_i}$, of indeterminate length $n \geq 1$, such that the members of the former sequence all lie in the lower window. Here and in the sequel, we write $\md{k}$ to denote $k$ modulo 3. Let $\tseq{a}$, $\tseq{b}$, $\tseq{a}_{i,j}$ and $\tseq{b}_{i,j}$ ($0 \leq i < 3$, $1 \leq j \leq 6$) be 3-region variables, let $z$ be an ordinary region-variable, and consider the formulas
\begin{align} \label{eq:aSeq1} & (s_6 \leq \inner{a}) \wedge (s'_6 \leq \inner{b}) \wedge (s_3 \leq \intermediate{a}_{0,3}), \\
\label{eq:aSeq:b} & \mathsf{stack}_z(\tseq{a}_{\md{i-1},3}, \tseq{b}_{i,1}, \ldots, \tseq{b}_{i,6}, \tseq{b}),\\
\label{eq:aSeq:a} & \mathsf{stack}(\tseq{b}_{i,3}, \tseq{a}_{i,1}, \ldots, \tseq{a}_{i,6}, \tseq{a}).
\end{align}
The arrangement of polygonal 3-regions depicted in Fig.~\ref{fig:concrete2} (with $z$ assigned appropriately) satisfies these formulas, with \eqref{eq:aSeq:b} and \eqref{eq:aSeq:a} holding for all $i$ ($0 \leq i < 3$). \begin{figure}\label{fig:concrete2}
\end{figure} We stipulate that~\eqref{eq:pcp:C} applies now to all regions depicted in either Fig.~\ref{fig:concrete1} or Fig.~\ref{fig:concrete2}. Again,
these additional constraints are evidently satisfiable.
It will be convenient in this stage to rename the arcs $\gamma_6$ and $\gamma'_6$ as $\lambda_0$ and $\mu_0$, respectively. Thus, $\lambda_0$ forms the bottom edge of the lower window, and $\mu_0$ the top edge of the upper window. Likewise, we rename $\gamma_3$ as $\alpha_0$, forming part of the left-hand side of the lower window. Let $\tilde{q}_{1,1}$ be any point of $\alpha_0$, $p^*$ any point of $\lambda_0$, and $q^*$ any point of $\mu_0$ (see Fig.~\ref{fig:arcs0}). By~\eqref{eq:aSeq1}, then, $\tilde{q}_{1,1} \in \intermediate{a}_{0,3}$, $p^* \in \inner{a}$, and $q^* \in \inner{b}$. Adding the constraint \begin{equation*} \neg C(s_3,z), \end{equation*} further ensures that $\tilde{q}_{1,1} \in -z$. By Lemma~\ref{lma:stackLemma} and~\eqref{eq:aSeq:b}, we may draw an arc $\tilde{\beta}_1$ from $\tilde{q}_{1,1}$ to $q^*$, with successive segments $\tilde{\beta}_{1,1}$, $\beta_{1,2}$, \ldots, $\beta_{1,5}$, $\tilde{\beta}_{1,6}$ lying in the respective regions $a_{0,3} + b_{1,1}$, $b_{1,2}$, \dots, $b_{1,5}$, $b_{1,6} + b$; further, we can guarantee that $\beta_{1,2}$ contains a point $\tilde{p}_{1,1} \in \intermediate{b}_{1,3}$. Denote the last point of $\beta_{1,5}$ by $q_{1,2}$. Also, let $q_{1,1}$ be the last point of $\tilde{\beta}_1$ lying on $\alpha_0$, and $q_{1,3}$ the first point of $\tilde{\beta}_1$ lying on $\mu_0$. Finally, let $\beta_1$ be the segment of $\tilde{\beta}_1$ between $q_{1,1}$ and $q_{1,2}$; and we let $\mu_1$ be the segment of $\tilde{\beta}_1$ from $q_{1,2}$ to $q_{1,3}$ followed by the final segment of $\mu_0$ from $q_{1,3}$ (Fig.~\ref{subfig:arcs1}). By repeatedly using the constraints in~\eqref{eq:pcp:C}, it is easy to see that $\beta_1$ together with the initial segment of $\mu_1$ up to $q_{1,3}$ form a chord of $\Gamma$. Adding the constraint \begin{equation*} c(b_{0,5} + d_3) \end{equation*} and taking into account the constraints in~\eqref{eq:pcp:C} ensures that $\beta_1$ and $\chi$ lie in the same residual domain of $\Gamma$, as shown. The wiggly lines indicate that we do not care about the exact positions of $\tilde{q}_{1,1}$ or $q^*$; otherwise, Fig.~\ref{subfig:arcs1} is again completely general.
\caption{Construction of the arcs $\set{\alpha_i}$ and $\set{\beta_i}$}
\label{subfig:arcs1}
\label{subfig:arcs2}
\label{subfig:arcs3}
\label{subfig:arcs4}
\label{fig:arcsAlphaBeta}
\end{figure} Note that $\mu_1$ lies entirely in $b_{1,6} + b$, and hence certainly in the region \begin{equation*} b^* = b + b_{0,6} + b_{1,6} + b_{2,6}. \end{equation*}
Recall that $\tilde{p}_{1,1} \in \intermediate{b}_{1,3}$, and $p^* \in \inner{a}$. By Lemma~\ref{lma:stackLemma} and~\eqref{eq:aSeq:a}, we may draw an arc $\tilde{\alpha}_1$ from $\tilde{p}_{1,1}$ to $p^*$, with successive segments $\tilde{\alpha}_{1,1}$, $\alpha_{1,2}$, \ldots, $\alpha_{1,5}$, $\tilde{\alpha}_{1,6}$ lying in the respective regions $b_{1,3} + a_{1,1}$, $a_{1,2}$, \dots, $a_{1,5}$ $a_{1,6} + a$; further, we can guarantee that the segment lying in $a_{1,3}$ contains a point $\tilde{q}_{2,1}\in \intermediate{a}_{1,3}$. Denote the last point of $\alpha_{1,5}$ by $p_{1,2}$. Also, let $p_{1,1}$ be the last point of $\tilde{\alpha}_1$ lying on $\beta_1$, and $p_{1,3}$ the first point of $\tilde{\alpha}_1$ lying on $\lambda_0$. From~\eqref{eq:pcp:C}, these points must be arranged as shown in Fig.~\ref{subfig:arcs2}. Let $\alpha_1$ be the segment of $\tilde{\alpha}_1$ between $p_{1,1}$ and $p_{1,2}$. Noting that~\eqref{eq:pcp:C} entails \begin{align*} & \neg C(a_{1,k}, s_0 + s_9 + d_0+ \cdots + d_5 ) & & \qquad{1 \leq k
\leq 6}, \end{align*} we can be sure that $\alpha_1$ lies entirely in the `lower' window, whence $\beta_1$ crosses the central chord, $\chi$, at least once. Let $o_1$ be the first such point (measured along $\chi$ from left to right). Finally, let $\lambda_1$ be the segment of $\tilde{\alpha}_1$ between $p_{1,2}$ and $p_{1,3}$, followed by the final segment of $\lambda_0$ from $p_{1,3}$. Note that $\lambda_1$ lies entirely in $a_{1,6} + a$, and hence certainly in the region \begin{equation*} a^* = a + a_{0,6} + a_{1,6} + a_{2,6}. \end{equation*} We remark that, in Fig.~\ref{subfig:arcs2}, the arcs $\beta_1$ and $\mu_1$ have been slightly re-drawn, for clarity. The region marked $S_1$ may now be forgotten, and is suppressed in Figs.~\ref{subfig:arcs3} and~\ref{subfig:arcs4}.
By construction, the point $\tilde{q}_{2,1}$ lies in some component of $\intermediate{a}_{1,3}$, and, from the presence of the `switching' variable $z$ in~\eqref{eq:aSeq:b}, that component is either included in $z$ or included in $-z$. Suppose the latter. Then we can repeat the above construction to obtain an arc $\tilde{\beta}_2$ from $\tilde{q}_{2,1}$ to $q^*$, with successive segments $\tilde{\beta}_{2,1}$, $\beta_{2,2}$, \ldots, $\beta_{2,5}$, $\tilde{\beta}_{2,6}$ lying in the respective regions $a_{1,3} + b_{2,1}$, $b_{2,2}$, \dots, $b_{2,5}$, $b_{2,6} + b$; further, we can guarantee that $\beta_{2,2}$ contains a point $\tilde{p}_{2,1} \in \intermediate{b}_{2,3}$. Denote the last point of $\beta_{2,5}$ by $q_{2,2}$. Also, let $q_{2,1}$ be the last point of $\tilde{\beta}_2$ lying on $\alpha_1$, and $q_{2,3}$ the first point of $\tilde{\beta}_2$ lying on $\mu_1$. Again, we let $\beta_2$ be the segment of $\tilde{\beta}_2$ between $q_{2,1}$ and $q_{2,2}$; and we let $\mu_2$ be the segment of $\tilde{\beta}_2$ from $q_{2,2}$ to $q_{2,3}$, followed by the final segment of $\mu_1$ from $q_{2,3}$. Note that $\mu_2$ lies in the set $b^*$. It is easy to see that $\beta_2$ must be drawn as shown in Fig.~\ref{subfig:arcs3}: in particular, $\beta_2$ cannot enter the interior of the region marked $R_1$. For, by construction, $\beta_2$ can have only one point of contact with $\alpha_1$, and the constraints~\eqref{eq:pcp:C} ensure that $\beta_2$ cannot intersect any other part of $\delta R_1$; since the endpoint $q^*$ is guaranteed to lie outside $R_1$, we evidently have $\beta_2 \subseteq -R_1$. This observation having been made, $R_1$ may now be forgotten.
Symmetrically, we construct the arc $\tilde{\alpha}_2 \subseteq b_{2,3} + a_{2,1} + \cdots + a_{2,6} + a$, and points $p_{2,1}$, $p_{2,2}$, $p_{2,3}$, together with the arcs $\alpha_2$ and $\lambda_2$, as shown in Fig.~\ref{subfig:arcs4} (where the region $R_1$ has been suppressed and the region $S_2$ slightly re-drawn). Again, we know from~\eqref{eq:pcp:C} that $\alpha_2$ lies entirely in the `lower' window, whence $\beta_2$ must cross the central chord, $\chi$, at least once. Let $o_2$ be the first such point (measured along $\chi$ from left to right).
This process continues, generating arcs $\beta_i \subseteq a_{\md{i-1},3} + b_{\md{i},1} + \cdots + b_{\md{i},5}$ and $\alpha_i \subseteq b_{\md{i},3} + a_{\md{i},1} + \cdots + a_{\md{i},5}$, as long as $\alpha_i$ contains a point $\tilde{q}_{i,1} \in -z$. That we eventually reach a value $i = n$ for which no such point exists follows from~\eqref{eq:pcp:C}. For the conjuncts $\neg C(b_{i,j}, d_k)$ ($j \neq 5$) together entail $o_i \in b_{\md{i},5}$, for every $i$ such that $\beta_i$ is defined; and these points cycle on $\chi$ through the regions $b_{0,5}$, $b_{1,5}$ and $b_{2,5}$. If there were infinitely many $\beta_i$, the $o_i$ would have an accumulation point, lying in all three regions, contradicting, say, $\neg C(b_{0,5},b_{1,5})$. The resulting sequence of arcs and points is shown, schematically, in Fig.~\ref{fig:arcs2}. \begin{figure}
\caption{The sequences of arcs $\set{\alpha_i}$ and $\set{\beta_i}$.}
\label{fig:arcs2}
\end{figure}
We finish this stage in the construction by `re-packaging' the arcs $\set{\alpha_i}$ and $\set{\beta_i}$, as illustrated in Fig.~\ref{fig:arcs3}. Specifically, for all $i$ ($1 \leq i \leq n$), let $\zeta_i$ be the initial segment of $\beta_i$ up to the point $p_{i,1}$ followed by the initial segment of $\alpha_i$ up to the point $q_{i+1,1}$; and let $\eta_i$ be the final segment of $\beta_i$ from the point $p_{i,1}$: \begin{align*} & \zeta_i = \beta_i[q_{i,1},p_{i,1}]\alpha_i[p_{i,1},q_{i+1,1}]\\ & \eta_i = \beta_i[p_{i,1},q_{i,2}]. \end{align*} The final segment of $\alpha_i$ from the point $q_{i+1,1}$ may be forgotten. \begin{figure}
\caption{`Re-packaging' of $\alpha_i$ and $\beta_i$ into $\zeta_i$ and
$\eta_i$: before and after.}
\label{fig:arcs3}
\end{figure} Defining, for $0 \leq i < 3$, \begin{eqnarray*} a_i & = & a_{\md{i-1},3} + b_{i,1} + \cdots + b_{i,4} + a_{i,1} + \cdots + a_{i,4}\\ b_i & = & b_{i,2} + \cdots + b_{i,5}, \end{eqnarray*} the constraints~\eqref{eq:pcp:C} guarantee that, for $1 \leq i \leq n$, \begin{eqnarray*} \zeta_i & \subseteq & a_{\md{i}}\\ \eta_i & \subseteq & b_{\md{i}}. \end{eqnarray*} Observe that the arcs $\zeta_i$ are located entirely in the `lower window', and that each arc $\eta_i$ connects $\zeta_i$ to some point $q_{i,2}$, which in turn is connected to a point $q^* \in \mu_0$ by an arc in $b^*$.
\noindent \textbf{Stage 3.} We now repeat Stage~2 symmetrically, with the `upper' and `lower' windows exchanged. Let $\tseq{a}'_{i,j}$, $\tseq{b}'_{i,j}$ be 3-region variables (with indices in the same ranges as for $\tseq{a}_{i,j}$, $\tseq{b}_{i,j}$). Let $\tseq{a}' = \tseq{b}$, $\tseq{b}' = \tseq{a}$; and let \begin{eqnarray*} a'_i & = & a'_{\md{i-1},3} + b'_{i,1} + \cdots + b'_{i,4} + a'_{i,1} + \cdots + a'_{i,4}\\ b'_i & = & b'_{i,2} + \cdots + b'_{i,5}, \end{eqnarray*} for $0 \leq i < 3$. The constraints \begin{align*} & (s'_3 \leq \intermediate{a}'_{0,3}) \\
& \mathsf{stack}_z(\tseq{a}'_{\md{i-1},3}, \tseq{b}'_{i,1}, \ldots, \tseq{b}'_{i,6}, \tseq{b}'),\\
& \mathsf{stack}(\tseq{b}'_{i,3}, \tseq{a}'_{i,1}, \ldots, \tseq{a}'_{i,6}, \tseq{a}')\\
& c(b'_{0,5} + d_3) \end{align*} then establish sequences of arcs $\set{\zeta'_i}$, $\set{\eta'_i}$ ($1 \leq i \leq n'$) satisfying \begin{eqnarray*} \zeta_i' & \subseteq & a_{\md{i}}'\\ \eta_i' & \subseteq & b_{\md{i}}' \end{eqnarray*} for $1 \leq i \leq n'$. The arcs $\zeta'_i$ are located entirely in the `upper window', and each arc $\eta'_i$ connects $\zeta'_i$ to a point $p'_{i,2}$, which in turn is connected to a point $p^*$ by an arc in the region \begin{align*} & {b^*}' = b' + b'_{0,6} + b'_{1,6} + b'_{2,6}. \end{align*} Our next task is to write constraints to ensure that $n = n'$, and that, furthermore, each $\eta_i$ (also each $\eta'_i$) connects $\zeta_i$ to $\zeta'_i$, for $1\leq i\leq n=n'$. Let $z^*$ be a new region-variable, and write \begin{equation*} \neg C(z^*, s_0 + \cdots + s_9 + s'_1 + \cdots + s'_8 + d_1 + \cdots + d_4 + d_6). \end{equation*} Note that $d_5$ does not appear in this constraint, which ensures that the only arc depicted in Fig.~\ref{fig:arcs0} which $z^*$ may intersect is $\chi_3$. Recalling that $\alpha_n$ and $\alpha'_{n'}$ contain points $q_{n,1}$ and $q'_{n',1}$, respectively, both lying in $z$, the constraints \begin{equation*} c(z) \wedge \neg C(z, -z^*) \end{equation*} ensure that $q_{n,1}$ and $q'_{n',1}$ may be joined by an arc, say $\zeta^*$, lying in $\ti{(z^*)}$, and also lying entirely in the upper and lower windows, crossing $\chi$ only in $\chi_3$. Without loss of generality, we may assume that $\zeta^*$ contacts $\zeta_n$ and $\zeta'_{n'}$ in just one point. Bearing in mind that the constraints~\eqref{eq:pcp:C} force $\eta_n$ and $\eta'_{n'}$ to cross $\chi$ in its central section, $\chi_2$, writing \begin{align} & \neg C(b_{i,j}, z) \wedge \neg C(b'_{i,j}, z) \label{eq:zeta} \end{align} for all $i$ ($0 \leq i < 3$) and $j$ ($1 \leq j \leq 6$) ensures that $\zeta^*$ is (essentially) as shown in Fig.~\ref{fig:arcZeta}. \begin{figure}
\caption{The arc $\zeta^*$.}
\label{fig:arcZeta}
\end{figure} Now consider the arc $\eta_1$. Recalling that $\eta_1\mu_1$ joins $\zeta_1$ to the point $q^*$ (on the upper edge of the upper window), crossing $\chi_2$, we see by inspection of Fig.~\ref{fig:arcZeta} that~\eqref{eq:zeta} together with \begin{align*} & \neg C(a'_i, b^*) \end{align*} for $0 \leq i < 3$ forces $\eta_1$ to cross one of the arcs $\zeta'_{j'}$ ($1 \leq j' \leq n'$); and the constraints \begin{align*} & \neg C(a'_i,b_j) \end{align*} for $0 \leq i < 3$, $0 \leq j < 3$, $i \neq j$, ensure that $j' \equiv 1 $ modulo 3. We write the symmetric constraints \begin{align} & \neg C(a_i,b'_j) \label{eq:abPrime} \end{align} for $0 \leq i < 3$, $0 \leq j < 3$, $i \neq j$, together with \begin{align} & \neg C(b_i,b'_j) \label{eq:bbPrime} \end{align} for $0 \leq i < j \leq 3$. Now suppose $j' \geq 4$. The arc $\eta'_2 \lambda'_2$ must connect $\zeta'_2$ to the point $p^*$ on the bottom edge of the lower window, which is now impossible without $\eta'_2$ crossing either $\zeta_1$ or $\eta_1$---both forbidden by~\eqref{eq:abPrime}--\eqref{eq:bbPrime}. Thus, $\eta_1$ intersects $\zeta'_j$ if and only if $j = 1$. Symmetrically, $\eta'_1$ intersects $\zeta_j$ if and only if $j = 1$. And the reasoning can now be repeated for $\eta_2$, $\eta_2'$, $\eta_3$, $\eta_3'$ \ldots, leading to the 1--1 correspondence depicted in Fig.~\ref{fig:arcCorrespondence}. \begin{figure}
\caption{The 1--1 correspondence between the $\zeta_i$ and the
$\zeta'_i$ established by the $\eta_i$ and the $\eta'_i$.}
\label{fig:arcCorrespondence}
\end{figure} In particular, we are guaranteed that $n = n'$.
\noindent \textbf{Stage 4.} Recall the given PCP-instance, $\mathbf{w} = (\set{0,1}, T, \mathsf{w}_1, \mathsf{w}_2)$. We think of $T$ as a set of `tiles', and the morphisms $\mathsf{w}_1$, $\mathsf{w}_2$ as specifying, respectively, the `lower' and `upper' strings of each tile. In this stage, we shall `label' the arcs $\eta_1, \ldots, \eta_n$ with elements of $\set{0,1}$, thus defining a word $\sigma$ over this alphabet. Using a slightly more complicated labelling scheme, we shall label the arcs $\zeta_1, \ldots, \zeta_n$ so as to define a word $\tau$ (of length $m \leq n$) over the alphabet $T$; likewise we shall label the arcs $\zeta'_1, \ldots, \zeta'_n$ so as to define another word $\tau'$ (of length $m' \leq n$) over $T$.
We begin with the $\eta_i$. Consider the constraints \begin{equation*} b_i \leq l_0 + l_1 \wedge \neg C(b_i \cdot l_0, b_i \cdot l_1) \qquad (0 \leq i < 3). \end{equation*} By Lemma~\ref{lma:labelling}, in any satisfying assignment over ${\sf RC}(\mathbb{R}^2)$, every arc $\eta_i$ ($1 \leq i \leq n$) is included in (`labelled with') exactly one of the regions $l_0$ or $l_1$, so that the sequence of arcs $\eta_1, \ldots, \eta_n$ defines a word $\sigma \in \set{0,1}^*$, with $|\sigma| = n$.
Turning our attention now to the $\zeta_i$, let us write $T =
\set{t_1, \ldots, t_\ell}$. For all $j$ ($1 \leq j \leq \ell$), we shall write $\sigma_j = \mathsf{w}_1(t_j)$ and $\sigma'_j = \mathsf{w}_2(t_j)$; further, we denote $|\sigma_j|$ by $u(j)$ and $|\sigma'_j|$ by $u'(j)$. (Thus, by assumption, the $u(j)$ and $u'(j)$ are all positive.)
Now let $t_{j,k}$ ($1 \leq j \leq \ell$, $1 \leq k \leq u(j)$) and $t'_{j,k}$ ($1 \leq j \leq \ell$, $1 \leq k \leq u'(j)$) be fresh region variables. We think of $t_{j,k}$ as standing for the $k$th letter in the word $\sigma_j$, and likewise think of $t'_{j,k}$ as standing for the $k$th letter in the word $\sigma'_j$. By Lemma~\ref{lma:labelling}, we may write constraints ensuring that each component of either $a_0$, $a_1$ or $a_2$---and hence each of the arcs $\zeta_1, \ldots, \zeta_n$---is `labelled with' one of the $t_{j,k}$, in the by-now familiar sense. Further, we can ensure that these labels are organized into (contiguous) blocks, $E_1, \ldots, E_{m}$ such that, in the $h$th block, $E_h$, the sequence of labels reads $t_{j,1}, \ldots, t_{j,{u(j)}}$, for some fixed $j$ ($1 \leq j \leq \ell$). This amounts to insisting that: ({\em i}) the very first arc, $\zeta_1$, must be labelled with $t_{j,1}$ for some $j$; ({\em ii}) if $\zeta_i$ is labelled with $t_{j,k}$, where $i < n$ and $k < u(j)$, then the next arc, namely $\zeta_{i+1}$, must be labelled with the next letter of $\sigma_j$, namely $t_{j,k+1}$; ({\em iii}) if $\zeta_i$ ($i < n$) is labelled with the final letter of $\sigma_j$, then the next arc must be labelled with the initial letter of some possibly different word $\sigma_{j'}$; and ({\em iv}) $\zeta_n$ must be labelled with the final letter of some word. To do this we simply write: \begin{align*} & \neg C(t_{j,i}, s_3) & & (\mbox{if } i \neq 1)\\ & \neg C(a_k \cdot t_{j,i}, a_{\md{k+1}} \cdot t_{j',i'}) & & \begin{array}{l} \text{($i < u(j)$ and either}\\ \text{$j' \neq j$ or $i' \neq i+1$)} \end{array}\\ & \neg C(a_k \cdot t_{j,u(j)}, a_{\md{k+1}} \cdot t_{j',i'}) & & (\mbox{if } i' \neq 1)\\ & \neg C(t_{j,i}, z^*) & & (\mbox{if } i \neq u(j)), \end{align*} where $0 \leq k < 3$, $1 \leq j, j' \leq \ell$, $1 \leq i \leq u(j)$ and $1\leq i' \leq u(j')$.
Thus, within each block $E_h$, the labels read $t_{j,1}, \ldots, t_{j,u(j)}$, for some fixed $j$; we write $j(h)$ to denote the common subscript $j$. The sequence of indices $j(1), \ldots, j(m)$ corresponding to the successive blocks thus defines a word $\tau = t_{j(1)} \cdots t_{j(m)} \in T^*$.
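Purely as an illustration of this decoding (a sketch of ours, not part of the formal argument), the following Python fragment extracts the word $\tau$ from a sequence of letter labels, returning nothing when the block conditions ({\em i})--({\em iv}) fail; a pair $(j,k)$ stands for the label $t_{j,k}$ and \texttt{u[j]} for $u(j)$.
\begin{verbatim}
def decode_blocks(labels, u):
    """labels: the letter labels (j, k) carried by zeta_1, ..., zeta_n;
    u: dict sending a tile index j to the length u(j) of its lower word.
    Returns the tile word tau = j(1), ..., j(m), or None if the labels
    are not organised into contiguous blocks t_{j,1}, ..., t_{j,u(j)}."""
    tau, pos = [], 0
    while pos < len(labels):
        j, k = labels[pos]
        if k != 1:                       # each block must start with t_{j,1}
            return None
        block = labels[pos:pos + u[j]]
        if block != [(j, i) for i in range(1, u[j] + 1)]:
            return None                  # block must read t_{j,1},...,t_{j,u(j)}
        tau.append(j)
        pos += u[j]
    return tau                           # all labels consumed: last block complete

# With u = {1: 2, 2: 3}, the labelling t_{1,1} t_{1,2} t_{2,1} t_{2,2} t_{2,3}
# decodes to the tile word (1, 2):
print(decode_blocks([(1, 1), (1, 2), (2, 1), (2, 2), (2, 3)], {1: 2, 2: 3}))
\end{verbatim}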
Using corresponding formulas, we label the arcs $\zeta'_i$ ($1 \leq i \leq n$) with the alphabet $\set{t'_{j,k} \mid 1 \leq j \leq \ell, 1
\leq k \leq u'(j)}$, so that, in any satisfying assignment over ${\sf RC}(\mathbb{R}^2)$, every arc $\zeta'_i$ ($1 \leq i \leq n$) is labelled with exactly one of the regions $t'_{j,k}$. Further, we can ensure that these labels are organized into (say) $m'$ contiguous blocks, $E'_1, \ldots, E'_{m'}$ such that in the $h$th block, $E_h'$, the sequence of labels reads $t'_{j,1}, \ldots, t'_{j,u'(j)}$, for some fixed $j$. Again, writing $j'(h)$ for the common value of $j$, the sequence of indices $j'(1), \ldots, j'(m')$ corresponding to the successive blocks defines a word $\tau' = t_{j'(1)} \cdots t_{j'(m')} \in T^*$.
\noindent \textbf{Stage 5.} The basic job of the foregoing stages was to define the words $\sigma \in \set{0,1}^*$ and $\tau, \tau' \in T^*$. In this stage, we enforce the equations $\sigma = \mathsf{w}_1(\tau)$, $\sigma = \mathsf{w}_2(\tau')$ and $\tau = \tau'$; these equations imply that $\tau$ is a solution, i.e., that the PCP-instance $\mathbf{w} = (\set{0,1}, T, \mathsf{w}_1, \mathsf{w}_2)$ is positive.
We first add the constraints \begin{align*} & \neg C(l_h, t_{j,k}) \qquad \text{whenever the $k$th letter of $\sigma_j$ is not $h$}\\ & \neg C(l_h, t'_{j,k}) \qquad \text{whenever the $k$th letter of $\sigma'_j$ is not $h$}. \end{align*} Since $\eta_i$ is in contact with $\zeta_i$ for all $i$ ($1 \leq i \leq n$), the string $\sigma \in \set{0,1}^*$ defined by the arcs $\eta_i$ must be identical to the string $\sigma_{j(1)} \cdots \sigma_{j(m)}$. But this is just to say that $\sigma = \mathsf{w}_1(\tau)$. The equation $\mathsf{w}_2(\tau') = \sigma$ may be secured similarly.
It remains only to show that $\tau = \tau'$. That is, we must show that $m = m'$ and that, for all $h$ ($1 \leq h \leq m$), $j(h) = j'(h)$. The techniques required have in fact already been encountered in Stage~3. We first introduce a new pair of variables, $f_0$, $f_1$, which we refer to as `block colours', and with which we label the arcs $\zeta_i$ in the fashion of Lemma~\ref{lma:labelling}, using the constraints: \begin{align*} & (a_0 + a_1 + a_2) \leq (f_0 + f_1)\\ & \neg C(f_0 \cdot a_i, f_1 \cdot a_i), & & (0 \leq i <3). \end{align*} We force all arcs in each block $E_j$ to have a uniform block colour, and we force the block colours to alternate by writing, for $0 \leq h <2$, $1 \leq j,j' \leq \ell$, $1 \leq k < u(j)$ and $0\leq i<3$:
\begin{align*} & \neg C(f_h \cdot t_{j,k}, f_{1-h}\cdot t_{j,k+1}),
\\& \neg C(f_h \cdot t_{j,u(j)}\cdot a_i, f_{h}\cdot t_{j',1}\cdot a_{\md{i+1}}).
\end{align*} Thus, we may speak unambiguously of the colour ($f_0$ or $f_1$) of a block: if $E_1$ is coloured $f_0$, then $E_2$ will be coloured $f_1$, $E_3$ coloured $f_0$, and so on. Using the {\em same} variables $f_0$ and $f_1$, we similarly establish a block structure $E'_1, \ldots, E'_{m'}$ on the arcs $\zeta'_i$. (Note that there is no need for primed versions of $f_0$ and $f_1$.)
Now we can match up the blocks in a 1--1 fashion just as we matched up the individual arcs. Let $\tseq{g}_0$, $\tseq{g}_1$, $\tseq{g}'_0$ and $\tseq{g}'_1$ be new 3-region variables. We may assume that every arc $\zeta_i$ contains some point of $\intermediate{b}_{\md{i},1}$. We wish to connect any such arc that starts a block $E_h$ (i.e. any $\zeta_i$ labelled by $t_{j,1}$ for some $j$) to the top edge of the upper window, with the connecting arc depending on the block colour. Setting $w_k = -(f_k \cdot \sum_{j = 1}^{\ell} t_{j,1})$ ($0 \leq k < 2$), we can do this using the constraints: \begin{align*} & \mathsf{stack}_{w_k}(\tseq{b}_{i,1}, \tseq{g}_k, \tseq{a}) & & (0 \leq k < 2,\ 0 \leq i < 3). \end{align*} Specifically, the first arc in each block $E_h$ ($1 \leq h \leq m$) is connected by an arc $\theta_h\tilde{\theta}_h$ to some point on the upper edge of the upper window, where $\theta_h \subseteq b_{i,1} + g_k$ and $\tilde{\theta}_h \subseteq a$. Similarly, setting $w'_k = -(f_k \cdot \sum_{j = 1}^{\ell} t'_{j,1})$ ($0 \leq k < 2$), the constraints \begin{align*} & \mathsf{stack}_{w_k'}(\tseq{b}'_{i,1}, \tseq{g}'_k, \tseq{b}) & & (0 \leq k < 2,\ 0 \leq i < 3) \end{align*} ensure that the first arc in each block $E'_{h'}$ ($1 \leq h' \leq m'$) is connected by an arc $\theta'_{h'}\tilde{\theta}'_{h'}$ to some point on the bottom edge of the lower window, where $\theta'_{h'} \subseteq b'_{i,1} + g'_k$ and $\tilde{\theta}'_{h'} \subseteq b$. Furthermore, from the arrangement of the $\zeta_i$, $\zeta'_i$ and $\zeta^*$ (Fig.~\ref{fig:arcZeta}) we can easily write non-contact constraints
forcing each $\theta_h$ to intersect one of the arcs $\zeta'_i$ ($1 \leq i \leq n$), and each $\theta'_h$ to intersect one of the arcs $\zeta_{i'}$ ($1 \leq i' \leq n$).
We now write the constraints \begin{align*} & \neg C(g_k, f_{1-k}) \wedge \neg C(g'_k, f_{1-k}) & & (0 \leq k < 2). \end{align*} Thus, any $\theta_h$ included in $g_k$ must join some arc $\zeta_i$ in a block with colour $f_k$ to some arc $\zeta'_{i'}$ also in a block with colour $f_k$; and similarly for the $\theta'_h$. Adding \begin{align*} & \neg C(g_0 + g'_0, g_1 + g'_1) \end{align*} then ensures, via reasoning exactly similar to that employed in Stage~3, that $\theta_1$ connects the block $E_1$ to the block $E'_1$, $\theta_2$ connects $E_2$ to $E'_2$, and so on; and similarly for the $\theta'_h$ (as shown, schematically, in Fig.~\ref{fig:blockCorrespondence}). Thus, we have a 1--1 correspondence between the two sets of blocks, whence $m = m'$. \begin{figure}
\caption{The 1--1 correspondence between the $E_h$ and the
$E'_h$ established by the $\theta_i$ and the $\theta'_i$.}
\label{fig:blockCorrespondence}
\end{figure}
Finally, we let $e_1, \ldots, e_\ell$ be new region variables labelling the components of $g_0$ and of $g_1$, and hence the arcs $\theta_1, \ldots, \theta_m$: \begin{align*} & g_i \leq \sum_{1 \leq j \leq \ell} e_j \wedge \bigwedge_{1 \leq j \leq \ell} \neg C(e_j \cdot g_i, (-e_j) \cdot g_i) \end{align*} for $0 \leq i < 2$. Adding the constraints \begin{align*} & \neg C(t_{j,k}, e_{j'}) & & (j \neq j')\\ & \neg C(t'_{j,k}, e_{j'}) & & (j \neq j') \end{align*} where $1 \leq j, j' \leq \ell$ and $1 \leq k \leq u(j)$ (respectively, $1 \leq k \leq u'(j)$), instantly ensures that the sequences of tile indices $j(1), \ldots, j(m)$ and $j'(1), \ldots, j'(m)$ are identical. In other words, $\tau = \tau'$. This completes the proof that $\mathbf{w}$ is a positive instance of the PCP.
We have established the r.e.-hardness of $\textit{Sat}(\ensuremath{\mathcal{C}c},{\sf RC}(\mathbb{R}^2))$ and $\textit{Sat}(\ensuremath{\mathcal{C}c},{\sf RCP}(\mathbb{R}^2))$. We must now extend these results to the other languages considered here. We deal with the languages $\ensuremath{\mathcal{C}c^\circ}$ and $\ensuremath{\mathcal{B}c}$ as in Sec.~\ref{sec:sensitivity}. Let $\ti{\varphi}_\mathbf{w}$ be the $\ensuremath{\mathcal{C}c^\circ}$-formula obtained by replacing all occurrences of $c$ in $\varphi_\mathbf{w}$ with $c^\circ$. Since all occurrences of $c$ in $\varphi_\mathbf{w}$ are positive, $\ti{\varphi}_\mathbf{w}$ entails $\varphi_\mathbf{w}$. On the other hand, in the satisfying assignment for $\varphi_\mathbf{w}$ described above, the regions required to be connected are in fact interior-connected, and so this assignment satisfies $\ti{\varphi}_\mathbf{w}$ as well.
For the language $\ensuremath{\mathcal{B}c}$, observe that, as in Sec.~\ref{sec:sensitivity}, all conjuncts of $\varphi_\mathbf{w}$ featuring the predicate $C$ are {\em negative}. (Remember that there are additional such literals implicit in the use of 3-region variables; but let us ignore these for the moment.) Recall from Sec.~\ref{sec:sensitivityA} that \begin{align*}
\varphi_{\lnot C}^c(r,s,r',s') := c(r+r') \land c(s+s') \land \lnot c((r+r')+(s+s')),
\end{align*} and consider the effect of replacing any literal $\neg C(r,s)$ from~\eqref{eq:pcp:C} with the $\ensuremath{\mathcal{B}c}$-formula $\varphi_{\lnot
C}^c(r,s,r',s')$ where $r'$ and $s'$ are fresh variables, and let the formula obtained be $\psi$. It is easy to see that $\psi$ entails $\varphi_\mathbf{w}$; hence if $\psi$ is satisfiable, then $\mathbf{w}$ is a positive instance of the PCP. To see that $\psi$ is satisfiable, consider the satisfying tuple of $\varphi_\mathbf{w}$. Note that if $\tseq{r}$ and $\tseq{s}$ are 3-regions whose outer-most elements $r$ and $s$ are disjoint (for example: $\tseq{r} = \tseq{a}_{0,1}$, $\tseq{s} = \tseq{a}_{0,3}$), then $r$ and $s$ have finitely many connected components and have connected complements. Hence, it is easy to find $r'$ and $s'$ in ${\sf RCP}(\mathbb{R}^2)$ satisfying the corresponding formula $\varphi_{\lnot
C}^c(r,s,r',s')$. Fig.~\ref{fig:connectingRsAndSs} represents the situation in full generality. (As usual, we assume a spherical canvas, with the point at infinity not lying on the boundary of any of the depicted regions.) We may therefore assume that all such literals involving $C$ have been eliminated from $\varphi_\mathbf{w}$.
\begin{figure}
\caption{Satisfying $\varphi_{\lnot C}^c(r,s,r',s')$}
\label{fig:connectingRsAndSs}
\end{figure}
We are not quite done, however. We must show that we can replace the {\em implicit} non-contact constraints that come with the use of 3-region variables by suitable $\ensuremath{\mathcal{B}c}$-formulas. For example, a 3-region variable $\tseq{r}$ involves the implicit constraints $\neg C(\inner{r}, -\intermediate{r})$ and $\neg C(\intermediate{r}, -r)$. Since the two conjuncts are identical in form, we only show how to deal with $\neg C(\intermediate{r}, -r)$. Because the complement of $-r$ is in general not connected, a direct use of $\varphi_{\lnot C}^c$ will result in a formula which is not satisfiable. Instead, we represent $-r$ as the sum of two regions $s_1$ and $s_2$ with connected complements, and then proceed as before. In particular, we replace $\neg C(\intermediate{r}, -r)$ by: \begin{eqnarray*}
-r=s_1+s_2\land \varphi_{\lnot C}^c(\intermediate r,s_1,r_1,s_1)
\land \varphi_{\lnot C}^c(\intermediate r,s_2,r_2,s_2). \end{eqnarray*} For $i=1,2$, $\intermediate r+r_i$ is a connected region that is disjoint from $s_i$. So, $\intermediate r$ is disjoint from $s_1$ and $s_2$, and hence disjoint from their sum $-r=s_1+s_2$. Fig.~\ref{fig:crenellate} shows regions $s_i,r_i$, for $i=1,2$, which satisfy the above formula. \begin{figure}
\caption{Eliminating the conjuncts of the form $\lnot C(-r,\intermediate{r})$.}
\label{fig:crenellate}
\end{figure} Let $\psi_\mathbf{w}$ be the result of replacing all the conjuncts (explicit or implicit) containing the predicate $C$, as just described. We have thus shown that, if $\psi_\mathbf{w}$ is satisfiable over ${\sf RC}(\mathbb{R}^2)$, then $\mathbf{w}$ is positive, and that, if $\mathbf{w}$ is positive, then $\psi_\mathbf{w}$ is satisfiable over ${\sf RCP}(\mathbb{R}^2)$. This completes the proof.
The final case we must deal with is that of $\ensuremath{\mathcal{B}c^\circ}$. We use the r.e.-hardness results already established for $\ensuremath{\mathcal{C}c^\circ}$, and proceed, as before, to eliminate occurrences of $C$. Since all the polygons in the tuple satisfying $\ti{\varphi}_\mathbf{w}$ are quasi-bounded, we can eliminate all occurrences of $C$ from $\ti{\varphi}_\mathbf{w}$ using Lemma~\ref{lma:Cci2BciStar}~(\emph{iii}). This completes the proof of Theorem~\ref{theo:undecidable}.
\section{$\ensuremath{\mathcal{B}c^\circ}$ in 3D}\label{sec:Bci3D_C}
\newcommand{{\sf ConRC}}{{\sf ConRC}}
Denote by ${\sf ConRC}$ the class of all connected topological spaces, with regions interpreted as regular closed sets. As shown in~\cite{ijcai:kphz10}, every $\ensuremath{\mathcal{B}c^\circ}$-formula satisfiable over ${\sf ConRC}$ can be satisfied in a finite connected quasi-saw model, and the problem $\textit{Sat}(\ensuremath{\mathcal{B}c^\circ},{\sf ConRC})$ is \textsc{NP}-complete.
\begin{theorem} The problems $\textit{Sat}(\ensuremath{\mathcal{B}c^\circ},{\sf RC}(\mathbb{R}^n))$, $n \geq 3$, coincide with $\textit{Sat}(\ensuremath{\mathcal{B}c^\circ},{\sf ConRC})$, and so are all \textsc{NP}-complete. \end{theorem}
\begin{proof} It suffices to show that every $\ensuremath{\mathcal{B}c^\circ}$-formula $\varphi$ satisfiable over connected quasi-saws can also be satisfied over any of ${\sf RC}(\mathbb{R}^n)$, for $n \geq 3$.
So suppose that $\varphi$ is satisfied in a model $\mathfrak{A}$ based on a finite connected quasi-saw $(W,R)$. Denote by $W_i$ the set of points of depth $i$ in $(W,R)$, for $i=0,1$. Without loss of generality we may assume that there exists a point $z_0\in W_1$ with $z_0 R x$ for all $x\in W_0$. Indeed, if this is not the case, take the interpretation $\mathfrak B$ obtained by extending $\mathfrak A$ with such a point $z_0$ and setting $z_0 \in r^{\mathfrak B}$ iff $x \in r^{\mathfrak A}$ for some $x \in W_0$. Clearly, we have $\mathfrak A \models (\tau = \tau')$ iff $\mathfrak B \models (\tau = \tau')$, for any terms $\tau$, $\tau'$. To see that $\mathfrak A \models c^\circ(\tau)$ iff $\mathfrak B \models c^\circ(\tau)$, recall that $(W,R)$ is connected, and so $\ti{\tau}$ is disconnected in $\mathfrak A$ iff there are two distinct points $x,y \in \tau^{\mathfrak A} \cap W_0$ connected by at least one path in $(W,R)$ and such that no such path lies entirely in $\ti{(\tau^{\mathfrak A})}$. It follows that if $\ti{(\tau^{\mathfrak A})}$ is disconnected then $W_0 \setminus \tau^{\mathfrak A} \neq \emptyset$, and so $z_0 \notin \ti{(\tau^{\mathfrak B})}$. Thus, by adding $z_0$ to $(W,R)$ we cannot make a disconnected open set in $\mathfrak A$ connected in $\mathfrak B$.
We show now how $\mathfrak{A}$ can be embedded into $\mathbb{R}^n$, for any $n\ge 3$. First we take pairwise disjoint \emph{closed} balls $B^1_x$ for all $x\in W_0$. We also select pairwise disjoint \emph{open} balls $D_z$ for $z\in W_1\setminus\{z_0\}$, which are disjoint from all of the $B^1_x$, and take $D_{z_0}$ to be the complement of
\begin{equation*} \bigcup_{x\in W_0} \ti{(B^1_x)} \ \ \cup \bigcup_{z\in W_1\setminus\{z_0\}} D_z. \end{equation*}
(Note that $\ti{D_z}$ is connected for each $z\in W_1$; all $D_z$, for $z\in W_1\setminus\{z_0\}$, are open, while $D_{z_0}$ is closed). We then expand every $B^1_x$ to a set $B_x$ in such a way that the following two properties are satisfied:
\begin{itemize} \item[(A)] the $B_x$, for $x\in W_0$, form a \emph{connected partition in ${\sf RC}(\mathbb{R}^n)$} in the sense that the $B_x$ are regular closed sets in $\mathbb{R}^n$, whose interiors are non-empty, connected and pairwise disjoint, and which sum up to the entire space;
\item[(B)] every point in $D_z$, $z\in W_1$, is either
\begin{itemize} \item[--] in the interior of some $B_x$ with $zRx$, or
\item[--] on the boundary of \emph{all} of the $B_x$ for which $zRx$. \end{itemize} \end{itemize}
The required sets $B_x$ are constructed as follows. Let $q_1, q_2, \ldots$ be an enumeration of all the points in $\bigcup_{z\in W_1} \ti{D_z}$ with \emph{rational} coordinates.
For $x\in W_0$, we set $B_x$ to be the closure of the infinite union $\bigcup_{k \in \omega} \ti{(B_x^k)}$, where the regular closed sets $B_x^k$ are defined inductively as follows (see Fig.~\ref{fig:apollonian-app}):
\begin{itemize} \item[--] Assuming that the $B^k_x$ are already defined, let $q_i$ be the first point in the list $q_1, q_2, \ldots$ such that $q_i\notin B^k_x$, for all $x\in W_0$. Suppose $q_i \in \ti{D_z}$ for some $z\in W_1$. Take an open ball $C_{q_i} \subsetneqq \ti{D_z}$ of radius $< 1/k$ centred at $q_i$ and disjoint from the $B^k_x$. For each $x\in W_0$ with $zRx$, expand $B_x^k$ by a closed ball in $C_{q_i}$ and a closed rod connecting it to $B_x^1$ in such a way that the ball and the rod are disjoint from the rest of the $B^k_x$. The resulting set is denoted by $B^{k+1}_x$. \end{itemize}
\begin{figure}
\caption{The first two stages of filling $D_{z_1}$ with $B_{x_i}$, for $z_1 R x_i$, $i = 1,2,3$. (In $\mathbb{R}^3$, the sets $B_{x_1}$ and $B_{x_2}$ would not intersect.)}
\label{fig:apollonian-app}
\end{figure}
Let ${\sf RC}(W,R)$ be the Boolean algebra of regular closed sets in $(W,R)$ and let ${\sf RC}(\mathbb{R}^n)$ be the Boolean algebra of regular closed sets in $\mathbb{R}^n$.
Define a map $f$ from ${\sf RC}(W,R)$ to ${\sf RC}(\mathbb{R}^n)$ by taking
\begin{equation*} f(X) ~=~ \bigcup_{x\in X \cap W_0} B_x,\quad \text{ for $X \in {\sf RC}(W,R)$}. \end{equation*}
By~(A), $f$ is an isomorphic embedding of ${\sf RC}(W,R)$ into ${\sf RC}(\mathbb{R}^n)$, that is, $f$ preserves the operations $+$, $\cdot$ and $-$ and the constants $0$ and $1$.
Define an interpretation $\mathfrak{I}$ over ${\sf RC}(\mathbb{R}^n)$ by taking $r^\mathfrak{I} = f(r^\mathfrak{A})$. To show that $\mathfrak I \models \varphi$, it remains to prove that, for every $X \in {\sf RC}(W,R)$, $\ti{X}$ is connected if, and only if, $\ti{(f(X))}$ is connected. This equivalence follows from the fact that
\begin{equation*} \ti{(f(X))} ~=~ \bigcup_{x\in X \cap W_0} \ti{B}_x \quad\cup \bigcup_{z\in X\cap W_1,\ V_z\subseteq X} D_z, \end{equation*}
where $V_z\subseteq W_0$ is the set of all $R$-successors of $z$ of depth 0, which in turn is an immediate consequence of (B). \end{proof}
\end{document} |
\begin{document}
\title{Stable determination of an immersed body \ in a stationary Stokes fluid}
\begin{abstract} We consider the inverse problem of the detection of a single body, immersed in a bounded container filled with a fluid which obeys the Stokes equations, from a single measurement of force and velocity on a portion of the boundary. We obtain an estimate of stability of log-log type. \end{abstract}
{\bf Mathematical Subject Classification (2010):} Primary 35R30. Secondary 35Q35, 76D07, 74F10. \\ {\bf Keywords:} Cauchy problem, inverse problems, Stokes system, stability estimates.
\section{Introduction.} In this paper we deal with an inverse problem associated to the Stokes system. We consider $\Omega \subset \mathbb{R}^n$, with $n=2,3$, with a sufficiently smooth boundary $\partial \Omega$. We want to detect an object $D$ immersed in this container, by collecting measurements of the velocity of the fluid motion and of the boundary forces, but we only have access to a portion $\Gamma$ of the boundary $\partial \Omega$. The fluid obeys the Stokes system in ${\Omega} {\setminus} \overline{D}$: \begin{equation}
\label{NSE} \left\{ \begin{array}{rl}
\mathrm{div} {\hspace{0.25em}}\sigma(u,p) &= 0 \hspace{2em} \mathrm{\tmop{in}} \hspace{1em}
{\Omega} {\setminus} \overline{D},\\
\mathrm{div} {\hspace{0.25em}} u & = 0 \hspace{2em} \mathrm{\tmop{in}} \hspace{1em} {\Omega} {\setminus} \overline{D},\\
u & = g \hspace{2em} \mathrm{\tmop{on}} \hspace{1em} \Gamma,\\
u & = 0 \hspace{2em} \mathrm{\tmop{on}} \hspace{1em} \partial D.
\end{array} \right. \end{equation} Here, \begin{displaymath} \sigma (u, p) = \mu ( \nabla u + \nabla u ^T ) - p {\hspace{0.25em}}\mathbb{I} \end{displaymath} is the \tmtextit{stress tensor}, where ${\hspace{0.25em}}\mathbb{I}$ denotes the $n \times n$ identity matrix, and $\mu$ is the viscosity function. The last request in (\ref{NSE}) is the so called ``no-slip condition''. We will always assume constant viscosity, $\mu(x)=1$, for all $x \in {\Omega} {\setminus} \overline{D}$. We observe that if $(u,p) \in \accauno{{\Omega} {\setminus} \overline{D}} \times L^2({\Omega} {\setminus} \overline{D})$ solves (\ref{NSE}), then it also satisfies \begin{displaymath} \triangle u -\nabla p=0. \end{displaymath} Call $\nu$ the outer normal vector field to $\partial \Omega$. The ideal experiment we perform is to assign $g \in \accan{\frac{3}{2}}{\Gamma}$ and measure on $\Gamma$ the normal component of the stress tensor it induces, \begin{equation}\label{psi}\sigma (u, p) \cdot \nu = \psi, \end{equation} and try to recover $D$ from a single pair of Cauchy data $(g, \psi)$ known on the accessible part of the boundary $\Gamma$. Under the hypothesis that $\partial \Omega$ is of Lipschitz class, uniqueness for this inverse problem has been shown to hold (see \cite{ConcOrtega2}) by means of unique continuation techniques. For a different inverse problem regarding uniqueness of the viscosity function $\mu$, an analogous uniqueness result has been shown to hold, under some regularity assumptions (see \cite{HXW}). \\ The stability issue, however, remains largely an open question. There are some partial ``directional stability'' type results, given in \cite{ConcOrtega} and \cite{ConcOrtega2}. This type of result, however, would not guarantee an a priori uniform stability estimate for the distance between two domains that yield boundary measurements that are close to each other. In the general case, even if we add some a priori information on the regularity of the unknown domain, we can only obtain a weak rate of stability. This is not unexpected since, even for a much simpler system of the same kind, the dependence of $D$ on the Cauchy data is at most of logarithmic type. See, for example, \cite{ABRV} for a similar problem on electric conductivity, or \cite{MRC}, \cite{MR} for an inverse problem regarding elasticity. The purpose of this paper is thus to prove a log-log type stability estimate for the Hausdorff distance between the boundaries of the inclusions, assuming they have $C^{2,\alpha}$ regularity. Such estimates have been established for various kinds of elliptic equations, for example, \cite{ABRV}, \cite{AlRon}, for the electric conductivity equation, \cite{MRC} and \cite{MR} for the elasticity system and the detection of cavities or rigid inclusions. For the latter case, the optimal rate of convergence is known to be of log type, as several counterexamples (see \cite{Aless1} and \cite{DiCriRo}) show. The main tool used to prove stability here and in the aforementioned papers (\cite{ABRV}, \cite{MRC}, \cite{MR}) is essentially a quantitative estimate of continuation from boundary data, in the interior and at the boundary, in the form of a three spheres inequality, see Theorem \ref{teotresfere}, and its main consequences. However, while in \cite{ABRV} the estimates are of log type for a scalar equation, here, and in \cite{MRC} and \cite{MR}, only an estimate of log-log type could be obtained for a system of equations.
The reason for this is that, at the present time, no doubling inequalities at the boundary for systems are available, while on the other hand they are known to hold in the scalar case. \\ The basic steps of the present paper closely follow \cite{MRC}, \cite{MR}, and are the following: \begin{enumerate} \item {\it An estimate of propagation of smallness from the interior}. The proof of this estimate relies essentially on the three spheres inequality for solutions of the bilaplacian system. Since solutions of both the Lam\'e system and the Stokes system can be represented as solutions of such equations (at least locally and in the weak sense, see \cite{GAES} for a derivation of this for the elasticity system), we expect the same type of result to hold in both cases.
\item {\it A stability estimate of continuation from the Cauchy data}. This result also relies heavily on the three spheres inequality, but in order to obtain a useful estimate of continuation near the boundary, we need to extend a given solution of the Stokes equation a little outside the domain, so that the extended solution solves a similar system of equations. Once the solution has been properly extended, we may apply the stability estimates from the interior to the extended solution and treat them as estimates near the boundary for the original solution. \item{\it An extension lemma for solutions to the Stokes equations}. This step requires finding appropriate conditions on the velocity field $u$ as well as on the pressure $p$ at the same time, in order for the boundary conditions to make sense. In Section 5 we build such an extension. We point out that, if we were to study the inverse problem in which we assign the normal component $\psi$ of the stress tensor and measure the velocity $g$ induced on the accessible part of the boundary, the construction we mentioned would fail to work.
\end{enumerate} The paper is structured as follows. In Section 2, we state the a priori hypotheses we will need throughout the paper, and state the main result, Theorem \ref{principale}. In Section 3 we state the estimates of continuation from the interior we need, Propositions \ref{teoPOS}, \ref{teoPOSC}, and Propositions \ref{teostabest} and \ref{teostabestimpr}, which deal, in turn, with the stability estimates of continuation from Cauchy data and a better version of the latter under some additional regularity hypotheses, and we use them for the proof of Theorem \ref{principale}. In Section 4, we prove Propositions \ref{teoPOS} and \ref{teoPOSC} using the three spheres inequality, Theorem \ref{teotresfere}. Section 5 is devoted to the proof of Proposition \ref{teostabest}, which will use an extension argument, Proposition \ref{teoextensionNSE}, which will in turn be proven in Section 6.
\section{The stability result.} \subsection{Notations and definitions.}
Let $x\in \mathbb{R}^n$. We will denote by $ B_{\rho}(x)$ the ball in $\mathbb{R}^n$ centered at $x$ of radius $\rho$. We will indicate $x = (x_1, \dots ,x_n) $ as $x= (x^\prime, x_n)$ where $x^\prime = (x_1, \dots, x_{n-1})$. Accordingly, $B^\prime_{ \rho}(x^\prime)$ will denote the ball of center $x^\prime$ and radius $\rho$ in $\mathbb{R}^{n-1}$. We will often make use of the following definition of regularity of a domain. \begin{definition} Let $\Omega \subset \mathbb{R}^n$ be a bounded domain. We say $\Gamma \subset \partial \Omega$ is of class $C^{k, \alpha}$ with constants $\rho_0$, $M_0 >0$, where $k$ is a nonnegative integer, $\alpha \in [0,1]$, if, for any $P \in \Gamma$, there exists a rigid transformation of coordinates in which $P = 0$ and \begin{equation} \label{regolarita} \Omega \cap B_{\rho_0}(0) = \{ (x^\prime, x_n) \in B_{\rho_0}(0) \, \, \mathrm{s.t. } \, \, x_n > \varphi (x^\prime)\}, \end{equation}
where $\varphi$ is a real valued function of class $C^{k, \alpha}(B^\prime_{\rho_0}(0))$ such that \begin{displaymath} \begin{split} \varphi(0)&=0, \\ \nabla\varphi(0)&=0, \text{ if } k \ge 1 \\ \| \varphi\|_{C^{k, \alpha}(B^\prime_{\rho_0}(0))} &\le M_0 \rho_0. \end{split} \end{displaymath} \end{definition} When $k=0$, $\alpha=1$ we will say that $\Gamma$ is {\it of Lipschitz class with constants $\rho_0$, $M_0$}.
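For instance (a routine verification, included here only as an illustration), if $\Omega = B_R(z)$ is a ball of radius $R$, then $\partial \Omega$ is of class $C^{2,\alpha}$ with $\rho_0 = R/2$ and a suitable absolute constant $M_0$: after a rigid change of coordinates bringing $P$ to the origin and the inner normal at $P$ to the $x_n$-axis, the boundary is locally the graph of
\begin{displaymath}
\varphi(x^\prime) = R - \sqrt{R^2 - |x^\prime|^2}, \qquad x^\prime \in B^\prime_{\rho_0}(0),
\end{displaymath}
which satisfies $\varphi(0)=0$, $\nabla \varphi(0)=0$ and $\| \varphi\|_{C^{2, \alpha}(B^\prime_{\rho_0}(0))} \le M_0 \rho_0$ for some $M_0$ independent of $R$.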
\begin{remark} We normalize all norms in such a way that they are dimensionally equivalent to their argument and coincide with the usual norms when $\rho_0=1$. In this setup, the norm used in the previous definition is intended as follows: \begin{displaymath}
\| \varphi\|_{C^{k, \alpha}(B^\prime_{\rho_0}(0))} = \sum_{i=0}^{k} \rho_0^i \| D^i \varphi\|_{L^{\infty}(B^\prime_{\rho_0}(0))} + \rho_0^{k+\alpha} | D^k \varphi |_{\alpha,B^\prime_{\rho_0}(0) }, \end{displaymath}
where $| \cdot |$ represents the $\alpha$-H\"older seminorm
\begin{displaymath}
| D^k \varphi |_{\alpha,B^\prime_{\rho_0}(0) } = \sup_{x^\prime, y^\prime \in B^\prime_{\rho_0}(0), x^\prime \neq y^\prime } \frac{| D^k \varphi(x^\prime)-D^k \varphi(y^\prime)| }{|x^\prime -y^\prime|^\alpha}, \end{displaymath}
and $D^k \varphi=\{ D^\beta\varphi\}_{|\beta|= k}$ is the set of derivatives of order $k$. Similarly we set \begin{displaymath} \normadue{u}{\Omega}^2 = \frac{1}{\rho_0^n} \int_\Omega u^2 \, \end{displaymath}
\begin{displaymath}
\norma{u}{1}{\Omega}^2 = \frac{1}{\rho_0^n} \Big( \int_\Omega u^2 +\rho_0^2 \int_\Omega |\nabla u|^2 \Big). \end{displaymath} The same goes for the trace norms $\norma{u}{\frac{1}{2}}{\partial \Omega}$ and the dual norms $\norma{u}{-1}{\Omega}$, $\norma{u}{-\frac{1}{2}}{\partial \Omega}$ and so forth. \end{remark} We will sometimes use the following notation, for $h>0$: \begin{displaymath} \Omega_h = \{ x \in \Omega \, \, \mathrm{such \, \, that } \, \, d(x, \partial \Omega) > h \}. \end{displaymath}
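Concerning the normalization of the norms introduced above, we also note (a simple consistency check, recorded only for the reader's convenience) that if we rescale by setting $\tilde{\Omega} = \{ x/\rho_0 \, : \, x \in \Omega\}$ and $\tilde{u}(y) = u(\rho_0 y)$ for $y \in \tilde{\Omega}$, then a change of variables gives
\begin{displaymath}
\normadue{u}{\Omega}^2 = \int_{\tilde{\Omega}} \tilde{u}^2 \, dy, \qquad \norma{u}{1}{\Omega}^2 = \int_{\tilde{\Omega}} \tilde{u}^2 \, dy + \int_{\tilde{\Omega}} |\nabla \tilde{u}|^2 \, dy,
\end{displaymath}
that is, the normalized norms of $u$ coincide with the usual (unweighted) norms of the rescaled function $\tilde{u}$ on the rescaled domain, for which $\rho_0 = 1$.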
\subsection{A priori information.} Here we present all the a priori hypotheses we will make all along the paper. \\ (1) {\it A priori information on the domain.} \\ We assume $\Omega \subset \mathbb{R}^n$ to be a bounded domain, such that \begin{equation} \label{apriori0}
\partial \Omega \text{ is connected, } \end{equation} and it has a sufficiently smooth boundary, i.e., \begin{equation} \label{apriori1} \partial \Omega \text{ is of class } C^{2, \alpha} \text{ of constants } \rho_0, \, \, M_0, \end{equation} where $\alpha \in (0,1]$ is a real number, $M_0 > 0$, and $\rho_0 >0 $ is what we shall treat as our dimensional parameter. In what follows $\nu$ is the outer normal vector field to $\partial \Omega$. We also require that
\begin{equation} \label{apriori2} |\Omega| \le M_1 \rho_0^n, \end{equation} where $M_1 > 0$. \\ In our setup, we choose a special open and connected portion $\Gamma \subset \partial \Omega$ as being the accessible part of the boundary, where, ideally, all measurements are taken. We assume that there exists a point $P_0 \in \Gamma$ such that \begin{equation} \label{apriori2G} \partial \Omega \cap B_{\rho_0}(P_0) \subset \Gamma. \end{equation}
(2) { \it A priori information about the obstacles.} \\ We consider $D \subset \Omega$, which represents the obstacle we want to detect from the boundary measurements, on which we require that \begin{equation} \label{apriori2bis} \Omega \setminus \overline{D} \text{ is connected, } \end{equation} \begin{equation} \label{apriori2ter} \partial D \text{ is connected. } \end{equation} We require the same regularity on $D$ as we did for $\Omega$, that is, \begin{equation} \label{apriori3} \partial D \text{ is of class } C^{2, \alpha} \text{ with constants } \rho_0 , \, M_0. \end{equation} In addition, we suppose that the obstacle is ``well contained'' in $\Omega$, meaning \begin{equation} \label{apriori4} d (D, \partial \Omega) \ge \rho_0. \end{equation} \begin{remark} We point out that, in principle, assumptions (\ref{apriori1}), (\ref{apriori3}) and (\ref{apriori4}) could hold for different values of $\rho_0$. If that were the case, it would be sufficient to redefine $\rho_0$ as the minimum among the three constants; then (\ref{apriori1}), (\ref{apriori3}) and (\ref{apriori4}) would still be true with the same $\rho_0$, while we would need to assume a different value of the constant $M_1$ in (\ref{apriori2}) accordingly. As a simple example, if $\Omega = B_1(0)$, and $D=B_{1/2}(0)$, then (\ref{apriori1}) is true for every $\rho_0 <1$, while (\ref{apriori3}) and (\ref{apriori4}) are true for all $\rho_0 <1/2$, so $\rho_0$ would be assumed to be less than $1/2$. \end{remark} (3) { \it A priori information about the boundary data.} \\ For the Dirichlet-type data $g$ we assign on the accessible portion of the boundary $\Gamma$, we assume that \begin{equation} \begin{split} \label{apriori5} g \in \accan{\frac{3}{2}}{\partial \Omega}, \, \; \; g \not \equiv 0, \\ \mathrm{supp} \,g \subset \subset \Gamma. \end{split} \end{equation}
As is required in order to ensure the existence of a solution, we also assume that \begin{equation} \label{aprioriexist} \int_{\partial \Omega} g \, \mathrm{d} s =0. \end{equation} We also ask that, for a given constant $F>0$, we have \begin{equation} \label{apriori7} \frac{\norma{g}{\frac{1}{2}}{\Gamma} }{\normadue{g}{\Gamma} } \le F. \end{equation} Under the above conditions on $g$, one can prove that there exists a constant $c>0$, only depending on $M_0$, such that the following equivalence of norms holds: \begin{equation} \label{equivalence} \norma{g}{\frac{1}{2}}{\Gamma} \le \norma{g}{\frac{1}{2}}{\partial \Omega} \le c \norma{g}{\frac{1}{2}}{\Gamma}. \end{equation}
\subsection{The main result.}
Let $\Omega \subset \mathbb{R}^n$, and $\Gamma \subset \partial \Omega$ satisfy (\ref{apriori1})-(\ref{apriori2G}). Let $D_i \subset \Omega$, for $i=1,2$, satisfy (\ref{apriori2bis})-(\ref{apriori4}), and let us denote by $\Omega_i= \Omega \setminus \overline{D_i}$. We may state the main result as follows. \begin{theorem}[Stability] \label{principale} Let $g \in \accan{\frac{3}{2}}{\Gamma}$ be the assigned boundary data, satisfying (\ref{apriori5})-(\ref{apriori7}). Let $u_i \in \accauno{\Omega_i}$ solve (\ref{NSE}) for $D=D_i$. If, for $\epsilon > 0 $, we have \begin{equation} \label{HpPiccolo} \rho_0 \norma{\sigma(u_1, p_1)\cdot \nu -\sigma(u_2,p_2) \cdot \nu }{-\frac{1}{2}}{\Gamma} \le \epsilon, \end{equation} then \begin{equation}\label{stimstab} d_{\mathcal{H}} (\partial D_1, \partial D_2) \le \rho_0 \omega \Bigg( \frac{\epsilon}{\norma{g}{\frac{1}{2}}{\Gamma}}\Bigg), \end{equation} where $\omega : (0, +\infty) \to \mathbb{R}^+$ is an increasing function satisfying, for all $0<t<\frac{1}{e}$: \begin{equation}
\omega(t) \le C (\log | \log t |)^{-\beta }. \end{equation} The constants $C>0$ and $0<\beta<1$ only depend on $n$, $M_0$, $M_1$ and $F$. \end{theorem} \subsection{The Helmholtz-Weyl decomposition.} We find it convenient to recall a classical result which will come in handy later on. A basic tool in the study of the Stokes equations (\ref{NSE}) is the Helmholtz-Weyl decomposition of the space $\elledue{\Omega}$ in two orthogonal spaces: \begin{equation}\label{HW} \elledue{\Omega} = H \oplus H^{\perp}, \end{equation} where \[ H =\{u \in \elledue{\Omega} \hspace{0.25em} : \mathrm{div} {\hspace{0.25em}} u = 0, \hspace{0.25em}
u|_{\partial \Omega} = 0\} \] and \[ H^{\perp} =\{u \in \elledue{\Omega} \hspace{0.25em} : \exists \hspace{0.25em} p \in \accan{1}{\Omega} \,: \; u = \nabla p \hspace{0.25em} \}. \] This decomposition is used, for example, to prove the existence of a solution of the Stokes system (among many others, see \cite{LadyK}).
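As a simple illustration (in the planar case, and not needed in the sequel), take $\Omega = B_1(0) \subset \mathbb{R}^2$: the field $v = \nabla p$ with $p(x) = x_1$ belongs to $H^{\perp}$, while every field of the form $w = (\partial_{x_2} \phi, -\partial_{x_1} \phi)$ with $\phi \in C^{\infty}_c(\Omega)$ is divergence free and vanishes near $\partial \Omega$, hence belongs to $H$. The orthogonality of the two spaces can be read directly from the divergence theorem:
\begin{displaymath}
\int_{\Omega} \nabla p \cdot w \, dx = - \int_{\Omega} p \, \mathrm{div}\, w \, dx + \int_{\partial \Omega} p \, (w \cdot \nu) \, ds = 0,
\end{displaymath}
since $\mathrm{div}\, w = 0$ and $w$ vanishes on $\partial \Omega$.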
From this, and using a quite standard ``energy estimate'' reasoning, one can prove the following (see \cite{LadyK} or \cite{Temam}, among many others): \begin{theorem}[Regularity for the direct Stokes problem.] \label{TeoRegGen} Let $m \ge -1$ be an integer and let $E \subset \mathbb{R}^n$ be a bounded domain of class $C^r$, with $r= \max \{ m+2, 2\}$. Let us consider the following problem: \begin{equation}
\label{NSEdiretto} \left\{ \begin{array}{rl}
\mathrm{div} {\hspace{0.25em}} \sigma(u,p) & = f \hspace{2em} \mathrm{\tmop{in}} \hspace{1em}
E,\\
\mathrm{div} {\hspace{0.25em}} u & = 0 \hspace{2em} \mathrm{\tmop{in}} \hspace{1em} E, \\
u & = g \hspace{2em} \mathrm{\tmop{on}} \hspace{1em} \partial E,\\
\end{array} \right. \end{equation} where $f \in \mathbf{H}^{m} (E)$ and $g \in \accan{m+\frac{3}{2}}{\partial E}$. Then there exists a weak solution $(u,p) \in \mathbf{H}^{m+2}(E) \times H^{m+1} (E)$ and a constant $c_0$, only depending on the regularity constants of $E$, such that
\begin{equation} \label{stimanormadiretto}
\| u \|_{\mathbf{H}^{m+2} (E)} + \rho_0 \| p-p_E \|_{H^{m+1}(E)} \le c_0 \big(\rho_0 \| f \|_{\mathbf{H}^{m} (E)} + \| g \|_{\mathbf{H}^{m+\frac{3}{2}} (\partial E)} \big), \end{equation}
where $p_E$ denotes the average of $p$ in $E$, $p_E = \frac{1}{|E|} \int_E p . $ \end{theorem} Finally, we would like to recall the following version of the Poincar\'e inequality, dealing with functions that vanish on an open portion of the boundary: \begin{theorem}[Poincar\'e inequality.] Let $E\subset \mathbb{R}^n$ be a bounded domain with boundary of Lipschitz class with constants $\rho_0$, $M_0$ and satisfying (\ref{apriori2}). Then for every $ u \in \accan{1}{E}$ such that \begin{displaymath} u = 0 \, \, \text{on} \, \, \partial E \cap B_{\rho_0}(P), \end{displaymath} where $P$ is some point in $\partial E$, we have \begin{equation} \label{pancarre} \normadue{u}{E} \le C \rho_0 \normadue{\nabla u}{E}, \end{equation} where $C$ is a positive constant only depending on $M_0$ and $M_1$. \end{theorem}
\section{Proof of Theorem \ref{principale}.} The proof of Theorem \ref{principale} relies on the following sequence of propositions. \begin{proposition}[Lipschitz propagation of smallness] \label{teoPOS} Let $E$ be a bounded Lipschitz domain with constants $\rho_0$, $M_0$, satisfying (\ref{apriori2}).
Let $u$ be a solution to the following problem: \begin{equation}
\label{NSEPOS} \left\{ \begin{array}{rl}
\mathrm{div} {\hspace{0.25em}}\sigma(u,p) &= 0 \hspace{2em} \mathrm{\tmop{in}} \hspace{1em}
E,\\
\mathrm{div} {\hspace{0.25em}} u & = 0 \hspace{2em} \mathrm{\tmop{in}} \hspace{1em} E,\\
u & = g \hspace{2em} \mathrm{\tmop{on}} \hspace{1em} \partial E,\\
\end{array} \right. \end{equation} where $g$ satisfies \begin{equation} \label{apriori5POS} g \in \accan{\frac{3}{2}}{\partial E}, \, \; \; g \not \equiv 0, \end{equation}
\begin{equation} \label{aprioriexistPOS} \int_{\partial E} g \, \mathrm{d} s =0, \end{equation} \begin{equation} \label{apriori7POS} \frac{\norma{g}{\frac{1}{2}}{\partial E} }{\normadue{g}{\partial E} } \le F, \end{equation} for a given constant $F>0$. Also suppose that there exists a point $P \in \partial E$ such that \begin{equation} g = 0 \;\; \text{on} \; \; \partial E \cap B_{\rho_0}(P). \end{equation} Then there exists a constant $s>1$, depending only on $n$ and $M_0$ such that, for every $\rho >0$ and for every $\bar{x} \in E_{s\rho}$, we have
\begin{equation} \label{POS} \int_{B_{\rho}(\bar{x})} \! |\nabla u|^2 dx \ge C_\rho \int_{E} \! |\nabla u|^2 dx . \end{equation} Here $C_\rho>0$ is a constant depending only on $n$, $M_0$, $M_1$, $F$, $\rho_0$ and $\rho$. The dependence of $C_\rho$ on $\rho$ and $\rho_0$ can be traced explicitly as \begin{equation} \label{crho} C_\rho = \frac{C}{\exp \Big[ A \big( \frac{\rho_0}{\rho}\big) ^B \Big] } \end{equation} where $A$, $B$, $C>0$ only depend on $n$, $M_0$, $M_1$ and $F$. \end{proposition} \begin{proposition}[Lipschitz propagation of smallness up to boundary data] \label{teoPOSC} Under the hypotheses of Theorem \ref{principale}, for all $\rho>0$, if $\bar{x} \in (\Omega_i)_{(s+1)\rho}$, we have for $i=1,2$: \begin{equation} \label{POScauchy}
\frac{1}{\rho_0^{n-2}} \int_{B_{\rho}(\bar{x})} \! |\nabla u_i|^2 dx \ge C_\rho \norma{g}{\frac{1}{2}}{\Gamma}^2, \end{equation} where $C_\rho$ is as in (\ref{crho}) (with possibly a different value of the term $C$), and $s$ is given by Proposition \ref{teoPOS}. \end{proposition} \begin{proposition}[Stability estimate of continuation from Cauchy data] \label{teostabest} Under the hypotheses of Theorem \ref{principale} we have
\begin{equation}\label{stabsti1} \frac{1}{\rho_0^{n-2}} \int_{D_2\setminus D_1} |\nabla u_1|^2 \le C \norma{g}{\frac{1}{2}}{\Gamma}^2 \omega\Bigg( \frac{\epsilon}{\norma{g}{\frac{1}{2}}{\Gamma}} \Bigg) \end{equation}
\begin{equation}\label{stabsti2} \frac{1}{\rho_0^{n-2}} \int_{D_1\setminus D_2} |\nabla u_2|^2 \le C \norma{g}{\frac{1}{2}}{\Gamma}^2 \omega\Bigg( \frac{\epsilon}{\norma{g}{\frac{1}{2}}{\Gamma}} \Bigg)\end{equation} where $\omega$ is an increasing continuous function, defined on $\mathbb{R}^+$ and satisfying
\begin{equation}\label{andomega} \omega(t) \le C \big( \log |\log t|\big)^{-c} \end{equation} for all $t < e^{-1}$, where $C$ only depends on $n$, $M_0$, $M_1$, $F$, and $c>0$ only depends on $n$. \end{proposition} \begin{proposition}[Improved stability estimate of continuation] \label{teostabestimpr} Let the hypotheses of Theorem \ref{principale} hold. Let $G$ be the connected component of $\Omega_1 \cap \Omega_2$ containing $\Gamma$, and assume that $\partial G$ is of Lipschitz class with constants $\tilde{\rho}_0$ and $\tilde{M}_0$, where $\tilde{M}_0>0$ and $0<\tilde{\rho}_0<\rho_0$. Then (\ref{stabsti1}) and (\ref{stabsti2}) both hold with $\omega$ given by \begin{equation}\label{omegabetter}
\omega(t)= C |\log t|^{-\gamma}, \end{equation} defined for $t<1$, where $\gamma >0$ and $C>0$ only depend on $M_0$, $\tilde{M}_0$, $M_1$ and $\frac{\rho_0}{\tilde{\rho}_0}$. \end{proposition} \begin{proposition} \label{teoreggra} Let $\Omega_1$ and $\Omega_2$ be two bounded domains satisfying (\ref{apriori1}). Then there exist two positive numbers $d_0$, $\tilde{\rho}_0$, with $\tilde{\rho}_0 \le \rho_0$, such that the ratios $\frac{\rho_0}{\tilde{\rho}_0}$, $\frac{d_0}{\rho_0}$ only depend on $n$, $M_0$ and $\alpha$, and such that, if \begin{equation} \label{relgr1} d_{\mathcal{H}} (\overline{\Omega_1}, \overline{\Omega_2}) \le d_0, \end{equation} then there exists $\tilde{M}_0>0$ only depending on $n$, $M_0$ and $\alpha$ such that every connected component of $\Omega_1 \cap \Omega_2$ has boundary of Lipschitz class with constants $\tilde{\rho}_0$, $\tilde{M}_0$. \end{proposition} We postpone the proofs of Propositions \ref{teoPOS} and \ref{teoPOSC} to Section 4, while Propositions \ref{teostabest} and \ref{teostabestimpr} will be proven in Section 5. The proof of Proposition \ref{teoreggra} is purely geometrical and can be found in \cite{ABRV}. \begin{proof}[Proof of Theorem \ref{principale}.] Let us call \begin{equation} \label{distanza} d= d_\mathcal{H}(\partial D_1, \partial D_2). \end{equation} Let $\eta$ be the quantity on the right hand side of (\ref{stabsti1}) and (\ref{stabsti2}), so that \begin{equation} \label{eta} \begin{split}
\int_{D_2 \setminus D_1} |\nabla u_1|^2 \le \eta, \\
\int_{D_1 \setminus D_2} |\nabla u_2|^2 \le \eta. \\ \end{split}\end{equation} We can assume without loss of generality that there exists a point $x_1 \in \partial D_1$ such that $\mathrm{dist}(x_1, \partial D_2)=d$. That being the case, we distinguish two possible situations: \\ (i) $B_d(x_1) \subset D_2$, \\ (ii) $B_d(x_1) \cap D_2 =\emptyset$.\\ In case (i), by the regularity assumptions on $\partial D_1$, we find a point $x_2 \in D_2 \setminus D_1$ such that $B_{td}(x_2) \subset D_2 \setminus D_1$, where $t$ is small enough (for example, $t=\frac{1}{1+\sqrt{1+M_0^2}}$ suffices). Using (\ref{POScauchy}) with $\rho = \frac{t d}{s}$, we have
\begin{equation} \label{stimapos} \int_{B_\rho (x_2) } |\nabla u_1|^2 dx \ge \frac{C\rho_0^{n-2}}{\exp \Big[A \big(\frac{s\rho_0}{t d }\big)^B\Big]} \norma{g}{\frac{1}{2}}{\Gamma}^2.
\end{equation} By Proposition \ref{teostabest}, we have: \begin{equation} \label{quellaconomega} \omega\Bigg( \frac{\epsilon}{ \norma{g}{\frac{1}{2}}{\Gamma}} \Bigg) \ge \frac{C}{\exp \Big[A \big(\frac{s\rho_0}{t d }\big)^B\Big]} , \end{equation} and solving for $d$ we obtain an estimate of log-log-log type stability: \begin{equation}\label{logloglog}
d \le C \rho_0 \Bigg\{ \log \Bigg[ \log \Bigg|\log\frac{\epsilon}{\norma{g}{\frac{1}{2}}{\Gamma}} \Bigg| \Bigg] \Bigg\}^{-\frac{1}{B}}, \end{equation} provided $\epsilon < e^{-e} \norma{g}{\frac{1}{2}}{\Gamma}$: this is not restrictive since, for larger values of $\epsilon$, the thesis is trivial. Since the right hand side of (\ref{logloglog}) tends to zero as $\epsilon \to 0$, there exists $\epsilon_0$, only depending on $n$, $M_0$, $M_1$ and $F$, such that, if $\epsilon \le \epsilon_0$, then $d\le d_0$, where $d_0$ is the constant given by Proposition \ref{teoreggra}. Proposition \ref{teoreggra} then applies, so that $G$ satisfies the hypotheses of Proposition \ref{teostabestimpr}. This means that we may choose $\omega$ of the form (\ref{omegabetter}) in (\ref{quellaconomega}), obtaining the log-log estimate (\ref{stimstab}). Case (ii) can be treated analogously, upon substituting $u_1$ with $u_2$. \end{proof}
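For the reader's convenience, we sketch the elementary computation leading from (\ref{quellaconomega}) to (\ref{logloglog}) in the above proof; it is only an illustration, with all constants denoted generically by $C$. Writing $\tau = \epsilon / \norma{g}{\frac{1}{2}}{\Gamma}$ and combining (\ref{quellaconomega}) with the bound (\ref{andomega}) on $\omega$, we get
\begin{displaymath}
\frac{C}{\exp \Big[A \big(\tfrac{s\rho_0}{t d }\big)^B\Big]} \;\le\; \omega(\tau) \;\le\; C \big( \log |\log \tau|\big)^{-c},
\end{displaymath}
whence, for $\tau$ small enough (larger values being trivial, as noted above),
\begin{displaymath}
A \Big(\frac{s\rho_0}{t d }\Big)^B \;\ge\; \log \Big( C \big( \log |\log \tau|\big)^{c}\Big) \;\ge\; C \log \big[ \log |\log \tau| \big],
\end{displaymath}
and solving for $d$ gives (\ref{logloglog}). The same computation, performed with $\omega$ as in (\ref{omegabetter}), yields the log-log bound of Theorem \ref{principale}.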
\section{Proof of Proposition \ref{teoPOS}.} The main idea of the proof of Proposition \ref{teoPOS} is a repeated application of a three-spheres type inequality. Such inequalities play a crucial role in almost all stability estimates from Cauchy data, and they have been adapted to a variety of elliptic PDEs: first in the context of scalar elliptic equations (see \cite{ABRV}), then in the determination of cavities or inclusions in elastic bodies (\cite{MR}, \cite{MRC}), and more generally for scalar elliptic equations (\cite{ARRV}) as well as systems (\cite{LNW}) with suitably smooth coefficients. We recall in particular the following estimate, which is a special case of a result of Nagayasu, Lin and Wang (\cite{LNW}), dealing with systems of differential inequalities of the form:
\begin{equation} \label{ellgeneral} |\triangle^l u^i | \le K_0 \sum_{|\alpha| \le \big[ \frac{3l}{2} \big] } | D^\alpha u |\,, \quad i=1,\dots , n. \end{equation} Then the following holds (see \cite{LNW}): \begin{theorem}[Three spheres inequality.] \label{teotresfere} Let $E \subset \mathbb{R}^n$ be a bounded domain with Lipschitz boundary with constants $\rho_0$, $M_0$. Let $B_R(x)$ be a ball contained in $E$, and let $u \in \accan{2l}{E}$ be a solution to (\ref{ellgeneral}). Then there exists a real number $\vartheta^* \in (0, e^{-1/2})$, depending only on $n$, $l$ and $K_0$, such that, for all $0<r_1 <r_2 <\vartheta^* r_3$ with $r_3 \le R$ we have:
\begin{equation} \label{tresfere} \int_{B_{r_2}} \! | u|^2 dx \le C \Big(\int_{B_{r_1} } \! | u|^2 dx \Big)^\delta \Big(\int_{B_{r_3}} \! | u|^2 dx \Big)^{1-\delta} \end{equation} where $\delta \in (0,1)$ and $C>0$ are constants depending only on $n$, $l$, $K_0$, $\frac{r_1}{r_3}$ and $\frac{r_2}{r_3}$, and the balls $B_{r_i}$ are centered in $x$. \end{theorem} First, we show that Proposition \ref{teoPOSC} follows from Proposition \ref{teoPOS}: \begin{proof}[Proof of Proposition \ref{teoPOSC}.]
From Proposition \ref{teoPOS} we know that \begin{displaymath} \int_{B_{\rho}(x)} \! |\nabla u_i|^2 dx \ge C_\rho \int_{\Omega \setminus \overline{D_i}} \! |\nabla u_i|^2 dx, \end{displaymath} where $C_\rho$ is given in (\ref{crho}). We have, using the Poincar\'e inequality (\ref{pancarre}) and the trace theorem,
\begin{equation} \label{altofrequenza} \begin{split} \int_{\Omega\setminus \overline{D_i}} |\nabla u_i|^2 dx \ge C \rho_0^{n-2} \norma{u_i}{1}{\Omega \setminus \overline{D_i}}^2 \ge C \rho_0^{n-2} \norma{g}{\frac{1}{2}}{\partial \Omega}^2. \end{split}\end{equation} Applying the above estimate to (\ref{POS}) and using (\ref{equivalence}) will prove our statement. \end{proof} Next, we introduce a lemma we shall need later on: \begin{lemma} \label{42} Let the hypotheses of Proposition \ref{teoPOS} be satisfied. Then \begin{equation} \normadue{u}{E} \ge \frac{C}{F^2} \rho_0 \normadue{\nabla u}{E} \end{equation} where $C>0$ only depends on $n$, $M_0$ and $M_1$. \end{lemma} The proof is obtained in \cite{MRC}, with minor modifications. We report it here for the sake of completeness. \begin{proof} Assume $\rho_0=1$, otherwise the thesis follows by scaling. The following trace inequality holds (see \cite[Theorem 1.5.1.10]{21}): \begin{equation} \label{trace1} \normadue{u}{\partial E}^2 \le C (\normadue{\nabla u}{E} \normadue{u}{E} + \normadue{u}{E}^2), \end{equation} where $C$ only depends on $M_0$ and $M_1$. Using the Poincar\'e inequality (\ref{pancarre}), we have \begin{equation} \frac{\normadue{\nabla u}{E} }{ \normadue{u}{E} } \le C \frac{\normadue{\nabla u}{E}^2}{\normadue{u}{\partial E}^2}. \end{equation} This, together with (\ref{stimanormadiretto}) and (\ref{apriori7POS}), immediately gives the thesis: indeed, $\normadue{\nabla u}{E} \le C \norma{g}{\frac{1}{2}}{\partial E} \le C F \normadue{g}{\partial E} = C F \normadue{u}{\partial E}$, so that the right hand side of the last inequality is bounded by $C F^2$. \end{proof} A proof of Proposition \ref{teoPOS} has already been obtained in \cite{MRC} dealing with linearized elasticity equations; we give a sketch of it here, with the due adaptations. \begin{proof}[Proof of Proposition \ref{teoPOS}.] We outline the main steps taken in the proof. First, we show that the three spheres inequality (\ref{tresfere}) applies to $\nabla u$. Then, the goal is to estimate $\normadue{\nabla u}{E}$ by covering the set $E$ with a sequence of cubes $Q_i$ with center $q_i$ of ``relatively small'' size. Each of these cubes is contained in a sphere $S_i$, thus we estimate the norm of $\nabla u$ in every sphere of center $q_i$, by connecting $q_i$ to $x$ with a continuous arc, and applying an iteration of the three spheres inequality to estimate $\normadue{\nabla u}{S_i}$ in terms of $\normadue{\nabla u}{B_\rho(x)}$. However, the estimates deteriorate exponentially as we increase the number of spheres (or equivalently, if the radius $\rho$ is comparable with the distance of $x$ from the boundary), giving an exponentially worse estimate of the constant $C_\rho$. To solve this problem, the idea is to distinguish two areas within $E_{s \rho}$, which we shall call $A_1$, $A_2$. We consider $A_1$ as the set of points $y \in E_{s \rho}$ such that $\mathrm{dist}(y, \partial E)$ is sufficiently large, whereas $A_2$ is given as the complement in $E_{s \rho}$ of $A_1$. Then, whenever we need to compare the norm of $\nabla u$ on two balls whose centers lie in $A_2$, we reduce the number of spheres by iterating the three spheres inequality over a sequence of balls with increasing radius, exploiting the Lipschitz character of $\partial E$ by building a cone to which all the balls are internally tangent. Once we have reached a sufficiently large distance from the boundary, we are able to pick a chain of larger balls, on which we can iterate the three spheres inequality again without deteriorating the estimate too much. This line of reasoning allows us to estimate the norm of $\nabla u$ on any sphere contained in $E_{s \rho}$, thus the whole $\normadue{\nabla u}{E}$. \\
{\bf Step 1.} { \it If $u \in \accauno{E}$ solves (\ref{NSEPOS}) then the three spheres inequality (\ref{tresfere}) applies to $ \nabla u$.} \begin{proof}[Proof of Step 1.] We show that $u$ can be written as a solution of a system of the form (\ref{ellgeneral}). By Theorem \ref{TeoRegGen}, we have $u \in \mathbf{H}^2(E)$ so that we may take the laplacian of the second equation in (\ref{NSE}): \begin{displaymath} \triangle \mathrm{div} {\hspace{0.25em}} u =0. \end{displaymath} Commuting the differential operators, and recalling the first equation in (\ref{NSE}), \begin{displaymath} \triangle{p}=0 \end{displaymath} thus $p$ is harmonic, which means that, if we take the laplacian of the first equation in (\ref{NSE}) we get \begin{displaymath} \triangle^2 u=0, \end{displaymath} so that $\nabla u$ is also biharmonic, hence the thesis. \end{proof} In what follows, we will always suppose $\rho_0=1$: The general case is treated by a rescaling argument on the biharmonic equation.
We closely follow the geometric construction given in \cite{MRC}. In the aforementioned work the goal was to estimate $\| \hat{\nabla} u\|$, by applying the three spheres inequality to $\hat{\nabla} u$ (the symmetrized gradient of $u$); in order to relate it to the boundary data, this step had to be combined with Korn and Caccioppoli type inequalities. Here the estimates are obtained for $\|\nabla u \|$. \\
From now on we will denote, for $z \in \mathbb{R}^n$, $\xi \in \mathbb{R}^n$ such that $|\xi|=1$, and $\vartheta >0$, \begin{equation} \label{cono}
C(z, \xi, \vartheta)= \Big\{ x \in \mathbb{R}^n \text{ s.t. } \frac{(x-z) \cdot \xi}{|x-z|} > \cos \vartheta \Big\} \end{equation} the cone of vertex $z$, direction $\xi$ and width $2 \vartheta$. \\ Exploiting the Lipschitz character of $\partial E$, we can find $\vartheta_0 >0$ depending only on $M_0$, $\vartheta_1>0$, $\chi >1$ and $s>1$ depending only on $M_0$ and $n$, such that the following holds (we refer to \cite{MRC} for the explicit expressions of the constants $\vartheta_0$, $\vartheta_1$, $\chi$, $s$, and for all the detailed geometric constructions).\\
{\bf Step 2.} { \it Choose $0<\vartheta^* \le 1$ according to Theorem \ref{teotresfere}. There exists $\overline{\rho}>0$, only depending on $M_0$, $M_1$ and $F$, such that: \\ If $ 0<\rho \le \bar{\rho}$, and $x \in E$ is such that $ s \rho < \mathrm{dist} (x, \partial E) \le \frac{\vartheta^*}{4}$, then there exists $\hat{x} \in E$ satisfying the following conditions: \begin{enumerate}
\item[(i)] $B_{\frac{5 \chi \rho}{\vartheta^*}} (x) \subset C(\hat{x},e_n=\frac{x-\hat{x}}{|x-\hat{x}|} , \vartheta_0) \cap B_{\frac{\vartheta^* }{8}} (\hat{x}) \subset E$, \item[(ii)] Let $x_2 = x+ \rho(\chi+1)e_n$. Then the balls $B_\rho (x)$ and $B_{\chi \rho} (x_2)$ are internally tangent to the cone $C(\hat{x},e_n, \vartheta_1)$. \end{enumerate}}
The idea is now to repeat iteratively the construction made once in Step 2. We define the following sequence of points and radii: \begin{displaymath} \begin{split} \rho_1 &= \rho, \; \; \; \rho_k = \chi \rho_{k-1}, \; \; \text{ for } k \ge 2, \\ x_1 &= x, \; \; \; x_k=x_{k-1}+ (\rho_{k-1} + \rho_k) e_n , \qquad \text{ for } k \ge 2. \end{split} \end{displaymath} We claim the following geometrical facts (the proof of which can be found again in \cite{MRC}, except the first, which is \cite[Proposition 5.5]{ARRV}): \\
{\it There exist $0<h_0<1/4$ only depending on $M_0$, $\bar{\rho} >0$ only depending on $M_0$, $M_1$ and $F$, an integer $k(\rho)$ depending also on $M_0$ and $n$, such that, for all $h \le h_0$, $0<\rho \le \bar{\rho}$ and for all integers $1<k \le k(\rho)-1$ we have: \begin{enumerate} \item \label{fatto0} $E_h$ is connected, \item \label{fatto1} $B_{\rho_k}(x_k)$ is internally tangent to $C(\hat{x}, e_n, \vartheta_1) $, \item \label{fatto2} $B_{\frac{5 \chi \rho_k}{\vartheta^*}}(x_k) $ is internally tangent to $C(\hat{x}, e_n, \vartheta_0) $,
\item The following inclusion holds: \begin{equation} \label{fatto3} B_{\frac{5 \rho_k}{\vartheta^*}}(x_k) \subset B_{\frac{\vartheta^*}{8}}(\hat{x}), \end{equation} \item $k(\rho)$ can be bounded from above as follows: \begin{equation} \label{432} k(\rho) \le \log \frac{\vartheta^* h_0}{5 \rho} +1. \end{equation} \end{enumerate} }
Call $\rho_{k(\rho)}= \chi^{k(\rho)-1} \rho$; from (\ref{432}) we have that \begin{equation} \label{433} \rho_{k(\rho)} \le \frac{\vartheta^* h_0}{5}. \end{equation}
In what follows, in order to ease the notation, norms will be always understood as being $\mathbf{L}^2$ norms, so that $\|\cdot \|_U$ will stand for $\normadue{\cdot}{U}$. \\ {\bf Step 3.} {\it For all $0<\rho \le \bar{\rho}$ and for all $x \in E$ such that $s \rho \le \mathrm{dist}(x, \partial E) \le \frac{\vartheta^*}{4}$, the following hold: \begin{equation} \label{434} \frac{\nor{\nabla u}{B_{\rho_{k(\rho)}} (x_{k(\rho)}) }}{\nor{\nabla u}{E}} \le C \Bigg( \frac{\nor{\nabla u}{B_{\rho} (x)}}{\nor{\nabla u}{E}} \Bigg)^{\delta_\chi^{k(\rho)-1}}, \end{equation} \begin{equation} \label{435} \frac{\nor{\nabla u}{B_{\rho} (x)} }{\nor{\nabla u}{E}} \le C \Bigg( \frac{\nor{\nabla u}{B_{\rho_{k(\rho)}} (x_{\rho_{k(\rho)}})}}{\nor{\nabla u}{E}} \Bigg)^{\delta^{k(\rho)-1}}, \end{equation} where $C>0$ and $0<\delta_\chi<\delta<1$ only depend on $M_0$. } \begin{proof}[Proof of Step 3.]
We apply to $\nabla u$ the three-spheres inequality, with balls of center $x_j$ and radii $r_1^{j}=\rho_j$, $r_2^{j}=3\chi \rho_j$, $r_3^{j}=4 \chi\rho_j$, for all $j=1, \dots, k(\rho)-1$. Since $B_{r_1^{j+1}}(x_{j+1}) \subset B_{r_2^j}(x_j)$, by the three spheres inequality, there exist $C$ and $\delta_\chi$, only depending on $M_0$, such that: \begin{equation} \label{437} \nor{\nabla u}{B_{\rho_{j+1}}(x_{j+1})} \le C \Big(\nor{\nabla u}{B_{\rho_j}(x_j) }\Big)^{\delta_\chi} \Big(\nor{\nabla u}{B_{4\chi\rho_j}(x_j)}\Big) ^{1-\delta_\chi}. \end{equation} This, in turn, leads to: \begin{equation} \label{438} \frac{\nor{ \nabla u}{B_{\rho_{j+1}}(x_{j+1})} }{\nor{\nabla u}{E}} \le C \Bigg(\frac{\nor{ \nabla u}{B_{\rho_j}(x_j)} }{\nor{\nabla u}{E}} \Bigg)^{\delta_\chi},\end{equation} for all $j=1, \dots, k(\rho)-1$. Now call \begin{displaymath} m_j = \frac{\nor{ \nabla u}{B_{\rho_{j}}(x_{j})} }{\nor{\nabla u}{E}}, \end{displaymath} so that (\ref{438}) reads \begin{equation} \label{stepdue} m_{j+1} \le C m_j^{\delta_\chi} , \end{equation} which, inductively, leads to \begin{equation} \label{steptre} m_{k(\rho)} \le \tilde{C} m_1^{\delta_\chi^{k(\rho)-1}}, \end{equation} where $\tilde{C} = C^{1+\delta_\chi+ \dots+ \delta_\chi^{k(\rho)-2}}$. Since $ 0<\delta_\chi <1$, we have $1+\delta_\chi+ \dots+ \delta_\chi^{k(\rho)-2} \le \frac{1}{1-\delta_\chi}$, and since we may take $C>1$, \begin{equation} \label{stepquattro} \tilde{C} \le C^{\frac{1}{1-\delta_\chi}}, \end{equation} which proves (\ref{434}).
Similarly, we obtain (\ref{435}): we find a $0<\delta<1$ such that the three spheres inequality applies to the balls $B_{\rho_j}(x_j)$, $B_{3\rho_j}(x_j)$, $B_{4\rho_j}(x_j)$ for $j=2,\dots, k(\rho)$; observing that $B_{\rho_{j-1}}(x_{j-1}) \subset B_{3\rho_j}(x_j)$, the line of reasoning followed above applies identically. \end{proof} {\bf Step 4.} \\{\it For all $0<\rho \le \overline{\rho}$, and for every $\bar{x},\, y \in E_{s\rho}$, we have \begin{equation}\label{453} \frac{\nor{\nabla u}{B_{\rho}(y)}}{\nor{\nabla u}{E}} \le C \Bigg( \frac{\nor{\nabla u}{B_\rho (\bar{x})}}{\nor{\nabla u}{E}} \Bigg)^{ \delta_\chi^{A+B\log \frac{1}{\rho}}} . \end{equation} } \begin{proof} We distinguish two subcases: \begin{enumerate}
\item[\it (i).] $\bar{x}$ is such that $\mathrm{dist} (\bar{x}, \partial E) \le \frac{\vartheta^*}{4}$, \item[\it (ii).] $\bar{x}$ is such that $\mathrm{dist}(\bar{x}, \partial E) > \frac{\vartheta^*}{4}$. \end{enumerate}
{\it Proof of Case (i).} Let us consider the constants $\delta$, $\delta_\chi$ introduced in Step 3. Take any point $y \in E$ such that $s \rho < \mathrm{dist}(y, \partial E) \le \frac{\vartheta^*}{4}$. By construction, the set $E_{\frac{5\rho_{k(\rho)}}{\vartheta^*}}$ is connected, thus there exists a continuous path $\gamma : [0,1] \to E_{\frac{5\rho_{k(\rho)}}{\vartheta^*}}$ joining $\bar{x}_{k(\rho)}$ to $y_{k(\rho)}$. We define an ordered sequence of times $t_j$, and a corresponding sequence of points $x_j= \gamma(t_j)$, for $j=1, \dots, L$, in the following way: $t_1=0$, $t_L =1$, and \begin{displaymath}
t_j= \mathrm{max} \{t\in (0,1] \text{ such that } |\gamma(t)- x_{j-1}| = 2 \rho_{k(\rho)} \}, \; \text{ if } |x_{j-1}-y_{k(\rho)}| > 2 \rho_{k(\rho)}, \end{displaymath}
otherwise, let $k=L$ and the process is stopped. Now, all the balls $B_{\rho_{k(\rho)}}(x_i)$ are pairwise disjoint, the distance between centers $| x_{j+1}-x_j | = 2 \rho_{k(\rho)}$ for all $j=1 \dots L-1$ and for the last point, $|x_L - y_{k(\rho)}| \le 2 \rho_{k(\rho)}$. The number of points, using (\ref{apriori2}), is at most \begin{equation} \label{sferealmassimo} L \le \frac{M_1}{\omega_n \rho_{k(\rho)}^n}. \end{equation} Iterating the three spheres inequality over this chain of balls, we obtain \begin{equation} \label{442} \frac{\nor{\nabla u}{B_{\rho_{k(\rho)}}(y_{k(\rho)})}}{\nor{\nabla u}{E}} \le C \Bigg( \frac{\nor{\nabla u}{B_{\rho_{k(\rho)}}(\bar{x}_{k(\rho)}) }}{\nor{\nabla u}{E}} \Bigg)^{\delta^L} \end{equation} On the other hand, by the previous step we have, applying (\ref{434}) and (\ref{435}) for $x=\bar{x}$ and $x=y$ respectively, \begin{equation} \label{443} \frac{\nor{\nabla u}{B_{\rho_{k(\rho)}} (\bar{x}_{k(\rho)}) }}{\nor{\nabla u}{E}} \le C \Bigg( \frac{\nor{\nabla u}{B_{\rho} (\bar{x})}}{\nor{\nabla u}{E}} \Bigg)^{\delta_\chi^{k(\rho)-1}}, \end{equation} \begin{equation} \label{444} \frac{\nor{\nabla u}{B_{\rho}(y)} }{\nor{\nabla u}{E}} \le C \Bigg( \frac{\nor{\nabla u}{B_{\rho_{k(\rho)}} (y_{k(\rho)})}}{\nor{\nabla u}{E}} \Bigg)^{\delta^{k(\rho)-1}}, \end{equation} where $C$, as before, only depends on $n$ and $M_0$. Combining (\ref{442}), (\ref{443}) and (\ref{444}), we have \begin{equation} \label{445} \frac{\nor{\nabla u}{B_{\rho}(y)} }{\nor{\nabla u}{E}} \le C \Bigg( \frac{\nor{\nabla u}{B_{\rho} (\bar{x})}}{\nor{\nabla u}{E}} \Bigg)^{\delta_\chi^{k(\rho)-1} \delta^{k(\rho)+L-1}}, \end{equation} for every $y \in E_{s\rho}$ satisfying $\mathrm{dist} (y, \partial E) \le \frac{\vartheta^*}{4}$. Now consider $y \in E$ such that $\mathrm{dist} (y, \partial E) > \frac{\vartheta^*}{4}$. Call \begin{equation} \label{446} \tilde{r}= \vartheta^* \rho_{k(\rho)}. \end{equation} By construction (\ref{433}) and (\ref{fatto3}) we have \begin{equation} \mathrm{dist}(\bar{x}_{k(\rho)}, \partial E) \ge \frac{5 \rho_{k(\rho)}}{\vartheta^*} > \frac{5}{\vartheta^*} \tilde{r} , \end{equation} \begin{equation} \mathrm{dist}(y, \partial E) \ge \frac{5 \rho_{k(\rho)}}{\vartheta^*} > \frac{5}{\vartheta^*} \tilde{r}, \end{equation} and again $E_{\frac{5}{\vartheta^*} \tilde{r}}$ is connected, since $\tilde{r}< \rho_{k(\rho)}$. We are then allowed to join $\bar{x}_{k(\rho)}$ to $y$ with a continuous arc, and copy the argument seen before over a chain of at most $\tilde{L}$ balls of centers $x_j \in E_{\frac{5}{\vartheta^*} \tilde{r}}$ and radii $\tilde{r}$, $3\tilde{r}$, $4\tilde{r}$, where \begin{equation} \label{sferealmassimotilde} \tilde{L} \le \frac{M_1}{\omega_n \tilde{r}^n}. \end{equation} Up to possibly shrinking $\overline{\rho}$, we may suppose $\rho \le \tilde{r}$; iterating the three spheres inequality as we did before, we get \begin{equation} \label{451} \frac{\nor{\nabla u}{B_{\tilde{r}}(y)} }{\nor{\nabla u}{E}} \le C \Bigg( \frac{\nor{\nabla u}{B_{\tilde{r}} (\bar{x}_{k(\rho)})}}{\nor{\nabla u}{E}} \Bigg)^{ \delta^{\tilde{L}}}, \end{equation} which, in turn, by (\ref{443}) and since $\rho \le \tilde{r} < \rho_{k(\rho)}$, becomes \begin{equation} \label{452} \frac{\nor{\nabla u}{B_{\rho}(y)}}{\nor{\nabla u}{E}} \le C \Bigg( \frac{\nor{\nabla u}{B_\rho (\bar{x})}}{\nor{\nabla u}{E}} \Bigg)^{ \delta_\chi^{k(\rho)-1}\delta^{\tilde{L}}}, \end{equation} with $C$ depending only on $M_0$ and $n$. 
The estimate (\ref{452}) holds for all $y \in E$ such that $\mathrm{dist} (y, \partial E) > \frac{\vartheta^*}{4}$. Putting (\ref{432}), (\ref{452}), (\ref{445}), (\ref{sferealmassimo}) and (\ref{sferealmassimotilde}) together, and also observing that $\delta_\chi \le \delta$ and trivially $\frac{\nor{\nabla u}{B_{\rho}(y)}}{\nor{\nabla u}{E}} \le 1$, we obtain precisely (\ref{453}), for $\rho \le \overline{\rho}$, where $C>1$ and $B>0$ only depend on $M_0$, while $A>0$ only depends on $M_0$ and $M_1$.\\ { \it Proof of Case (ii).}
We use the same constants $\delta$ and $\delta_\chi$ introduced in Step 3. Take $\rho \le \bar{\rho}$, then $B_{s\rho}(\bar{x}) \subset B_{\frac{\vartheta^*}{16}}(\bar{x})$, and for any point $\tilde{x}$ such that $|\bar{x} - \tilde{x}| = s \rho$, we have $B_{\frac{\vartheta^*}{8}}(\tilde{x}) \subset E$. Following the construction made in Steps 2 and 3, we choose a point $\bar{x}_{k(\rho)} \in E_{\frac{5}{\vartheta^*}\rho_{k(\rho)}}$, such that \begin{equation} \frac{\nor{\nabla u}{B_{\rho_{k(\rho)}}(\bar{x}_{k(\rho)})}}{\nor{\nabla u}{E}} \le C \Bigg( \frac{\nor{\nabla u}{B_\rho (\bar{x})}}{\nor{\nabla u}{E}} \Bigg)^{ \delta_\chi^{k(\rho)-1}}, \end{equation} with $C>1$ only depending on $n$, $M_0$. If $y \in E$ is such that $s\rho <\mathrm{dist} (y, \partial E) \le \frac{\vartheta^*}{4}$, then, by the same reasoning as in Step 4.(i), we obtain \begin{equation}\label{459} \frac{\nor{\nabla u}{B_{\rho}(y)}}{\nor{\nabla u}{E}} \le C \Bigg( \frac{\nor{\nabla u}{B_\rho (\bar{x})}}{\nor{\nabla u}{E}} \Bigg)^{ \delta_\chi^{k(\rho)-1} \delta^{k(\rho)+L-1}}, \end{equation} with $C>1$ again depending only on $M_0$. If, on the other hand, $y \in E$ is such that $\mathrm{dist}(y, \partial E) \ge \frac{\vartheta^*}{4}$, taking $\tilde{r}$ as in (\ref{446}), using the same argument as in Step 4.(i), we obtain \begin{equation}\label{460} \frac{\nor{\nabla u}{B_{\rho}(y)}}{\nor{\nabla u}{E}} \le C \Bigg( \frac{\nor{\nabla u}{B_\rho(\bar{x})}}{\nor{\nabla u}{E}} \Bigg) ^{\delta_\chi^{k(\rho)-1} \delta^{\tilde{L}}}, \end{equation} where again $C>1$ only depends on $M_0$. From (\ref{459}),(\ref{460}), (\ref{sferealmassimo}),(\ref{sferealmassimotilde}) and (\ref{432}), and recalling that, again, $\delta_\chi \le \delta$, and $\frac{\nor{\nabla u}{B_{\rho}(y)}}{\nor{\nabla u}{E}} \le 1$, we obtain \begin{equation} \frac{\nor{\nabla u}{B_{\rho}(y)}}{\nor{\nabla u}{E}} \le C \Bigg( \frac{\nor{\nabla u}{B_\rho (\bar{x})}}{\nor{\nabla u}{E}} \Bigg)^{ \delta_\chi^{A+B\log\frac{1}{\rho}} }, \end{equation} where $C>1$ and $B>0$ only depend on $M_0$, while $A>0$ only depends on $M_0$, $M_1$. \end{proof}
{\bf Step 5.} {\it For every $\rho \le \bar{\rho}$ and for every $\bar{x} \in E_{s\rho}$ the thesis (\ref{POS}) holds. }
\begin{proof}[Proof of Step 5]
Suppose at first that $\bar{x} \in E_{s\rho}$ satisfies $\mathrm{dist} (\bar{x}, \partial E) \le \frac{\vartheta^*}{4}$. We cover $E_{(s+1)\rho}$ with a sequence of non-overlapping cubes of side $l= \frac{2 \rho}{\sqrt{n}}$, so that every cube is contained in a ball of radius $\rho$ and center in $E_{s \rho}$. The number of cubes is bounded by \begin{displaymath}
N \le \frac{|E|n^{\frac{n}{2}}}{(2\rho)^n} \le \frac{M_1 n^{\frac{n}{2}}}{(2 \rho)^n}. \end{displaymath} If we then sum (\ref{453}) over the balls of this covering, we can write: \begin{equation} \label{stepcinque} \frac{\nor{\nabla u}{E_{(s+1) \rho}}}{\nor{\nabla u}{E}} \le C \rho^{-\frac{n}{2}} \Biggr( \frac{\nor{\nabla u}{B_\rho(\bar{x})} }{\nor{\nabla u}{E}} \Biggr)^{\delta_\chi^{A+B\log \frac{1}{\rho}}} . \end{equation} Here $C$ depends only on $M_0$. Now, we need to estimate the left hand side in (\ref{stepcinque}). In order to do so, we start by writing \begin{equation} \label{unomeno} \frac{\nor{\nabla u}{E_{(s+1) \rho}}^2}{\nor{\nabla u}{E}^2} =1-\frac{\nor{\nabla u}{E \setminus E_{(s+1) \rho}}^2}{\nor{\nabla u}{E}^2}. \end{equation} By Lemma \ref{42} and the H\"older inequality, \begin{equation} \label{buttatali}
\nor{\nabla u}{E \setminus E_{(s+1)\rho}}^2 \le C F^2 \nor{u}{E \setminus E _{(s+1)\rho}}^2 \le C F^2 |E \setminus E _{(s+1)\rho}| ^{\frac{1}{n}} \| u\|^2_{\mathbf{L}^{\frac{2n}{n-1}}(E \setminus E _{(s+1)\rho})}. \end{equation} On the other hand, by the Sobolev and the Poincar\'e inequalities: \begin{equation} \label{buttatali2}
\| u\|_{\mathbf{L}^{\frac{2n}{n-1}}(E)} \le C \norma{ u}{\frac{1}{2}}{E} \le C \norma{u}{1}{E} \le C \nor{\nabla u} {E}. \end{equation} It can be proven (see \cite[Lemma 5.7]{ARRV}) that \begin{equation} \label{buttatali3}
|E \setminus E_{(s+1)\rho }| \le C \rho, \end{equation} where $C$ depends on $M_0$, $M_1$ and $n$. We thus obtain that \begin{equation} \label{storysofar}
\frac{\nor{\nabla u}{E \setminus E_{(s+1)\rho}}^2}{\nor{\nabla u} {E}^2} \le C F^2 |E \setminus E_{(s+1)\rho}|^{\frac{1}{n}}. \end{equation} Therefore, combining
(\ref{storysofar}) and (\ref{buttatali3}), and recalling (\ref{unomeno}), we have that for $\rho \le \bar{\rho}$, \begin{equation} \label{unmezzo} \frac{\nor{\nabla u}{E _{(s+1) \rho}}}{\nor{\nabla u}{E}} \ge \frac{1}{2}, \end{equation} which, inserted into (\ref{stepcinque}), yields \begin{equation*}
\int_{B_\rho(\bar{x})} |\nabla u|^2 \ge C \rho^{n\delta_\chi^{-A-B\log\frac{1}{\rho}}} \int_E |\nabla u|^2. \end{equation*}
Since for all $t>0$ we have $|\log t| \le \frac{1}{t}$, it is immediate to verify that (\ref{POS}) holds. Now take $\bar{x} \in E_{s\rho}$ such that $\mathrm{dist}(\bar{x}, \partial E) > \frac{\vartheta^*}{4}$. Then $B_{s\rho}(\bar{x}) \subset B_{\frac{\vartheta^*}{16}}(\bar{x})$, then for any point $\tilde{x}$ such that $|\bar{x} - \tilde{x}| = s \rho$, we have $B_{\frac{\vartheta^*}{8}}(\tilde{x}) \subset E$. Following the construction made in Steps 2 and 3, we choose a point $\bar{x}_{k(\rho)} \in E_{\frac{5}{\vartheta^*}\rho_{k(\rho)}}$, such that \begin{equation} \frac{\nor{\nabla u}{B_{\rho_{k(\rho)}}(\bar{x}_{k(\rho)})}}{\nor{\nabla u}{E}} \le C \Bigg( \frac{\nor{\nabla u}{B_\rho (\bar{x})}}{\nor{\nabla u}{E}} \Bigg)^{ \delta_\chi^{k(\rho)-1}}, \end{equation} with $C>1$ only depends on $n$, $M_0$. \\ If $y \in E$ is such that $s\rho <\mathrm{dist} (y, \partial E) \le \frac{\vartheta^*}{4}$, then, by the same reasoning as in Step 4, we obtain \begin{equation}\label{4591} \frac{\nor{\nabla u}{B_{\rho}(y)}}{\nor{\nabla u}{E}} \le C \Bigg( \frac{\nor{\nabla u}{B_\rho (\bar{x})}}{\nor{\nabla u}{E}} \Bigg)^{ \delta_\chi^{k(\rho)-1} \delta^{k(\rho)+L-1}}, \end{equation} with $C>1$ again depending only on $n$ and $M_0$. If, on the other hand, $y \in E$ is such that $\mathrm{dist}(y, \partial E) \ge \frac{\vartheta^*}{4}$, taking $\tilde{r}$ as in (\ref{446}), using the same argument as in Step 4, we obtain \begin{equation}\label{4601} \frac{\nor{\nabla u}{B_{\rho}(y)}}{\nor{\nabla u}{E}} \le C \Bigg( \frac{\nor{\nabla u}{B_\rho(\bar{x})}}{\nor{\nabla u}{E}} \Bigg) ^{\delta_\chi^{k(\rho)-1} \delta^{\tilde{L}}}, \end{equation} where again $C>1$ only depends on $n$ and $M_0$. From (\ref{4591}),(\ref{4601}), (\ref{sferealmassimo}),(\ref{sferealmassimotilde}) and (\ref{432}), and recalling that, again, $\delta_\chi \le \delta$, and $\frac{\nor{\nabla u}{B_{\rho}(y)}}{\nor{\nabla u}{E}} \le 1$, we obtain \begin{equation} \frac{\nor{\nabla u}{B_{\rho}(y)}}{\nor{\nabla u}{E}} \le C \Bigg( \frac{\nor{\nabla u}{B_\rho (\bar{x})}}{\nor{\nabla u}{E}} \Bigg)^{ \delta_\chi^{A+B\log\frac{1}{\rho}} }, \end{equation} where $C>1$ and $B>0$ only depend on $n$ and $M_0$, while $A>0$ only depends on $n$, $M_0$, $M_1$. The thesis follows from the same cube covering argument as in Step 4. \end{proof} {\bf Conclusion.} So far, we have proven (\ref{POS}) true for every $\rho \le \bar{\rho}$, and for every $\bar{x} \in E_{s \rho}$, where $\bar{\rho}$ only depends on $M_0$, $M_1$ and $F$. If $\rho > \bar{\rho}$ and $\bar{x} \in E_{s \rho} \subset E_{s \bar{\rho}}$, then, using what we have shown so far, \begin{equation} \label{462} \nor{\nabla u}{B_\rho (\bar{x})} \ge \nor{\nabla u}{B_{\bar{\rho}}(\bar{x})} \ge \tilde{C} \nor{\nabla u}{E}, \end{equation} where $\tilde{C}$ again only depends on $n$, $M_0$, $M_1$ and $F$. On the other hand, by the regularity hypotheses on $E$, it is easy to show that \begin{equation} \label{463} \rho \le \frac{\mathrm{diam}(\Omega)}{2s} \le \frac{C^*}{2s} \end{equation} thus the thesis \begin{displaymath}
\int_{B_\rho (\bar{x})} |\nabla u|^2 \ge \frac{C}{\exp \Big[ A \Big(\frac{1}{\rho}\Big)^B\Big] } \int_E |\nabla u|^2 \end{displaymath} is trivial, if we set \begin{displaymath} C = \tilde{C} \exp\Big[ A \Big( \frac{2s}{C^*}\Big)^B \Big]. \end{displaymath} \end{proof}
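For completeness, we record the elementary computation showing that the bound obtained in Step 5 indeed has the form (\ref{crho}) (a sketch only, with $\rho_0=1$ and generic constants). Since $\delta_\chi^{-B\log\frac{1}{\rho}} = \big(\tfrac{1}{\rho}\big)^{B|\log \delta_\chi|}$, we have
\begin{displaymath}
\rho^{\,n\delta_\chi^{-A-B\log\frac{1}{\rho}}} = \exp\Big[ -n\,\delta_\chi^{-A}\, \log\tfrac{1}{\rho}\, \Big(\tfrac{1}{\rho}\Big)^{B|\log \delta_\chi|}\Big] \ge \exp\Big[ -n\,\delta_\chi^{-A} \Big(\tfrac{1}{\rho}\Big)^{1+B|\log \delta_\chi|}\Big],
\end{displaymath}
where in the last step we used $\log\tfrac{1}{\rho} \le \tfrac{1}{\rho}$; this is precisely an expression of the form $\exp\big[-A'\big(\tfrac{1}{\rho}\big)^{B'}\big]$, as in (\ref{crho}).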
\section{Stability of continuation from Cauchy data.} Throughout this section, we shall again distinguish two domains $\Omega_i= \Omega \setminus \overline{D_i}$ for $i=1,2$, where the $D_i$ are two subsets of $\Omega$ satisfying (\ref{apriori2bis}) to (\ref{apriori4}). We start by setting up some notation. In the following, we shall call \begin{displaymath} U^i_\rho =\{x \in \overline{\Omega_i} \; \text{ s.t. } \; \mathrm{dist}(x,\partial \Omega) \le \rho \}. \end{displaymath} The following are well known results of interior regularity for the bilaplacian (see, for example, \cite{Miranda}, \cite{GilTru}): \begin{lemma}[Interior regularity of solutions] \label{teoschauder} Let $u_i$ be the weak solution to (\ref{NSE}) in $\Omega_i$. Then for all $0<\alpha<1$ we have that $u_i \in C^{1,\alpha}(\overline{\Omega_i \setminus U^i_{\frac{\rho_0}{8}}})$ and
\begin{equation} \label{schauder1} \|u_i \|_{C^{1,\alpha}(\overline{ \Omega_i \setminus U^i_{\frac{\rho_0}{8}}})} \le C
\norma{g}{\frac{1}{2}}{\Gamma} \end{equation}
\begin{equation} \label{schauder2} \|u_1-u_2 \|_{C^{1,\alpha}( \overline{\Omega_1 \cap \Omega_2})} \le C
\norma{g}{\frac{1}{2}}{\Gamma} \end{equation} where $C>0$ only depends on $\alpha$, $M_0$. \end{lemma} \begin{proof} Using standard energy estimates, as in Theorem \ref{TeoRegGen}, it follows that \begin{equation} \label{stimau} \norma{u_i}{1}{\Omega_i} \le C \norma{g}{\frac{1}{2}}{\partial \Omega}. \end{equation} On the other hand, using interior regularity estimates for biharmonic functions, we have \begin{equation} \label{intreg}
\|u_i \|_{C^{1,\alpha} (\overline{\Omega_i \setminus U^i_{\frac{\rho_0}{8}}})} \le C \|u_i \|_{\mathbf{L}^{\infty} (\overline{\Omega_i \setminus U^i_{\frac{\rho_0}{16}}})} \le C
\normadue{u_i}{\Omega_i}, \end{equation} where $C>0$ only depends on $\alpha$ and $M_0$. Combining (\ref{stimau}), (\ref{intreg}), and recalling (\ref{equivalence}), immediately leads to (\ref{schauder1}). As for (\ref{schauder2}), we observe that $u_1-u_2=0$ on $\Gamma$ (actually, on $\partial \Omega$); therefore, the $C^{1,\alpha}$ norm of $u_1-u_2$ in $U_{\frac{\rho_0}{2}}^1 \cap U_{\frac{\rho_0}{2}}^2$ can be estimated in the same fashion; using (\ref{schauder1}) in the remaining part, we get (\ref{schauder2}). \end{proof} We will also need the following lemma, proved in \cite{ABRV}: \begin{lemma}[Regularized domains] \label{regularized} Let $\Omega$ be a domain satisfying (\ref{apriori1}) and (\ref{apriori2}), and let $D_i$, for $i=1,2$ be two connected open subsets of $\Omega$ satisfying (\ref{apriori3}), (\ref{apriori4}). Then there exist a family of regularized domains $D_i^h \subset \Omega$, for $0 < h < a \rho_0$, with $C^1$ boundary of constants $\til{\rho_0}$, $\til{M_0}$ and such that \begin{equation} \label{643} D_i \subset D_i^{h_1} \subset D_i^{h_2} \; \text{ if } 0<h_1 \le h_2; \end{equation} \begin{equation} \label{644} \gamma_0 h \le \mathrm{dist}(x, \partial D_i) \le \gamma_1 h \; \text{ for all } x \in \partial D_i^h; \end{equation} \begin{equation} \label{645} \mathrm{meas}(D_i^h\setminus D_i)\le \gamma_2 M_1 \rho_0^2 h; \end{equation} \begin{equation} \label{646} \mathrm{meas}_{n-1}(\partial D_i^h)\le \gamma_3 M_1 \rho_0^2; \end{equation} and for every $x \in \partial D_i^h$ there exists $y \in \partial D_i$ such that
\begin{equation} \label{647} |y-x|= \mathrm{dist}(x, \partial D_i), \; \; |\nu(x) - \nu(y)|\le \gamma_4 \frac{h^\alpha}{\rho_0^\alpha}; \end{equation} where by $\nu(x)$ we mean the outer unit normal to $\partial D_i^h$, $\nu(y)$ is the outer unit normal to $D_i$, and the constants $a$, $\gamma_j$, $j=0 \dots 4$ and the ratios $\frac{\til{M}_0}{M_0}$, $\frac{\til{\rho}_0}{\rho_0}$ only depend on $M_0$ and $\alpha$. \end{lemma} We shall also need a stability estimate for the Cauchy problem associated with the Stokes system with homogeneous Cauchy data. The proof of the following result, which will be given in the next section, basically revolves around an extension argument. Let us consider a bounded domain $E\subset \mathbb{R}^n$ satisfying hypotheses (\ref{apriori1}) and (\ref{apriori2}), and take $\Gamma \subset \partial E$ a connected open portion of the boundary of class $C^{2, \alpha}$ with constants $\rho_0$, $M_0$. Let $P_0 \in \Gamma$ such that (\ref{apriori2G}) holds. By definition, after a suitable change of coordinates we have that $P_0 = 0$ and \begin{equation} E \cap B_{\rho_0}(0) = \{ (x^\prime, x_n) \in E \, \text{ s.t.} \, x_n>\varphi(x^\prime) \} \subset E, \end{equation} where $\varphi$ is a $C^{2,\alpha}(B^\prime_{\rho_0}(0))$ function satisfying \begin{displaymath}
\begin{split}
\varphi(0)&=0, \\
|\nabla \varphi (0)|&=0, \\
\|\varphi \|_{C^{2,\alpha} (B^\prime_{\rho_0}(0))}& \le M_0 \rho_0.
\end{split} \end{displaymath}
Define \begin{equation} \begin{split} \label{rho00}
\rho_{00} & = \frac{\rho_0}{\sqrt{1+M_0^2}}, \\ \Gamma_0 & = \{ (x^\prime, x_n) \in \Gamma \, \, \mathrm{s.t.} \, \, |x^\prime|\le \rho_{00}, \, \, x_n = \varphi(x^\prime) \}. \end{split} \end{equation} \begin{theorem} \label{stabilitycauchy} Under the above hypotheses, let $(u,p)$ be a solution to the problem: \begin{equation}
\label{NseHomDir} \left\{ \begin{array}{rl}
\mathrm{div} {\hspace{0.25em}} \sigma(u,p) & = 0 \hspace{2em} \mathrm{\tmop{in}} \hspace{1em}
E,\\
\mathrm{div} {\hspace{0.25em}} u & = 0 \hspace{2em} \mathrm{\tmop{in}} \hspace{1em} E,\\
u & = 0 \hspace{2em} \mathrm{\tmop{on}} \hspace{1em} \Gamma,\\
\sigma (u, p) \cdot \nu & = \psi \hspace{2em} \mathrm{\tmop{on}}
\hspace{1em} \Gamma,\\ \end{array} \right. \end{equation} where $\psi \in \accan{-\frac{1}{2}}{\Gamma}$. Let $P^* = P_0 + \frac{\rho_{00}}{4} \nu$ where $\nu$ is the outer normal field to $\partial \Omega$. Then we have \begin{equation} \label{NseHomDirEqn}
\| u \|_{{\bf L}^\infty(E \cap B_{\frac{3 \rho_{00}}{8}} (P^*))} \leq \frac{C}{\rho_0^{\frac{n}{2}}} \normadue{u}{E}^{1-\tau} (\rho_0 \norma{\psi}{-\frac{1}{2}}{\Gamma})^\tau, \end{equation} where $C>0$ and $\tau$ only depend on $\alpha$ and $M_0$. \end{theorem} \begin{proof}[Proof of Proposition \ref{teostabest}] Let $\theta= \mathrm{min} \{a, \frac{7}{8 \gamma_1} \frac{\rho_{0}}{2\gamma_0 (1+M_0^2)} \}$ where $a$, $\gamma_0$, $\gamma_1$ are the constants depending only on $M_0$ and $\alpha$ introduced in Lemma \ref{regularized}, then let $\overline{\rho}= \theta \rho_0$ and fix $\rho \le \overline{\rho}$. We introduce the regularized domains $D_1^\rho$, $D_2^\rho$ according to Lemma \ref{regularized}. Let $G$ be the connected component of $\Omega\setminus(\overline{D_1 \cup D_2})$ which contains $\partial \Omega$, and $G^\rho$ be the connected component of $\overline{\Omega}\setminus(D_1^\rho \cup D_2^\rho)$ which contains $\partial \Omega$. We have that \begin{equation*} D_2 \setminus \overline{D_1} \subset \Omega_1 \setminus \overline{G} \subset \big( (D_1^\rho \setminus \overline{D_1} ) \setminus\overline{G}\big) \cup \big( (\Omega \setminus G^\rho)\setminus D_1^\rho \big) \end{equation*} and \begin{equation*} \partial \big( (\Omega \setminus G^\rho)\setminus D_1^\rho \big) = \Gamma_1^\rho \cup \Gamma_2^\rho, \end{equation*} where $\Gamma_2^\rho= \partial D_2^\rho \cap \partial G^\rho$ and $\Gamma_1^\rho \subset \partial D_1^\rho$. It is thus clear that
\begin{equation} \label{652} \int_{D_2 \setminus \overline{D_1 }} |\nabla u_1|^2 \le \int_{\Omega_1 \setminus \overline{G}} |\nabla u_1|^2 \le \int_{(D_1^\rho \setminus \overline{D_1} )\setminus\overline{G}} |\nabla u_1|^2 +\int_{(\Omega \setminus G^\rho)\setminus D_1^\rho} |\nabla u_1|^2. \end{equation} The first summand is easily estimated, for using (\ref{schauder1}) and (\ref{645}) we have
\begin{equation} \label{6.53} \int_{(D_1^\rho \setminus \overline{D_1} )\setminus\overline{G}} |\nabla u_1|^2 \le C \rho_0^{n-2} \norma{g}{\frac{1}{2}}{\Gamma}^2 \frac{\rho}{\rho_0}, \end{equation} where $C$ only depends on $M_0$, $M_1$ and $\alpha$. We call $\Omega(\rho)= (\Omega \setminus G^\rho)\setminus D_1^\rho$. The second term in (\ref{652}), using the divergence theorem twice, becomes:
\begin{equation} \label{sommandi} \begin{split} & \int_{\Omega(\rho)} |\nabla u_1|^2 = \int_{\partial\Omega(\rho)} (\nabla u_1 \cdot \nu) u_1 - \int_{\Omega(\rho)} \triangle u_1 \cdot u_1 = \\& \int_{\partial\Omega(\rho)} (\nabla u_1 \cdot \nu) u_1 - \int_{\Omega(\rho)} \nabla p_1 \cdot u_1 = \int_{\partial\Omega(\rho)} (\nabla u_1 \cdot \nu) u_1 + \int_{\partial \Omega(\rho)} p_1 (u_1\cdot \nu) = \\ & \int_{\Gamma_1^\rho}(\nabla u_1 \cdot \nu) u_1 + \int_{\Gamma_2^\rho}(\nabla u_1 \cdot \nu) u_1 + \int_{\Gamma_1^\rho} p_1 (u_1 \cdot \nu) +\int_{\Gamma_2^\rho} p_1 (u_1 \cdot \nu) . \end{split} \end{equation}
About the first and third term, if $x \in \Gamma_1^\rho$, using Lemma \ref{regularized}, we find $y \in \partial D_1$ such that $|y-x|= d(x, \partial D_1) \le \gamma_1 \rho$; since $u_1(y)=0$, by Lemma \ref{teoschauder} we have
\begin{equation} \label{pezzobuono} |u_1(x)|= |u_1(x)-u_1(y)|\le C \frac{\rho}{\rho_0} \norma{g}{\frac{1}{2}}{\Gamma} . \end{equation}
On the other hand, if $x \in \Gamma_2^\rho$, there exists $y \in \partial D_2$ such that $|y-x| = d(x, \partial D_2) \le \gamma_1 \rho$. Again, since $u_2(y)=0$, we have
\begin{equation} \label{pezzocattivo} \begin{split} & |u_1(x)| \le |u_1(x)-u_1(y)|+|u_1(y)-u_2(y) | \\
& \le C \big( \frac{\rho}{\rho_0} \norma{g}{\frac{1}{2}}{\Gamma} + \max_{\partial G^\rho \setminus \partial \Omega} |w| \big) , \end{split}\end{equation} where $w=u_1-u_2$. Combining (\ref{pezzobuono}), (\ref{pezzocattivo}) and (\ref{sommandi}) and recalling (\ref{schauder1}) and (\ref{646}) we have: \begin{equation} \label{sommandi2}
\int_{D_2\setminus D_1} |\nabla u_1|^2 \le C\rho_0^{n-2} \Big( \norma{g}{\frac{1}{2}}{\Gamma}^2 \frac{\rho}{\rho_0} + \norma{g}{\frac{1}{2}}{\Gamma} \max_{\partial G^\rho \setminus \partial \Omega} |w| \Big) \end{equation}
We now need to estimate $\max_{\partial G^\rho \setminus \partial \Omega} |w| $. We may apply (\ref{tresfere}) to $w$, since it is biharmonic. Let $ x \in \partial G^\rho \setminus \partial \Omega$ and \begin{equation} \label{rhostar} \rho^*=\frac{\rho_0}{16(1+M_0^2)}, \end{equation} \begin{equation}\label{zetazero} x_0= P_0 - \frac{\rho_1}{16}\nu, \end{equation} where $\nu$ is the outer normal to $\partial \Omega$ at the point $P_0$. By construction $x_0 \in \overline{\til{\Omega}_{\frac{\rho^*}{2}}}$.
There exists an arc $\gamma: [0,1] \mapsto G^\rho \setminus \overline{\til{\Omega}_{\frac{\rho^*}{2}}} $ such that $\gamma(0)=x_0$, $\gamma(1)=x$ and $\gamma([0,1])\subset G^\rho \setminus \overline{\til{\Omega}_{\frac{\rho^*}{2}}}$. Let us define a sequence of points $\{x_i \}_{i=0 \dots S}$ as follows: $t_0=0$, and \begin{displaymath}
t_i= \mathrm{max} \{t\in (0,1] \text{ such that } |\gamma(t)- x_i| = \frac{\gamma_0 \rho \vartheta^*}{2} \} \; \text{, if } |x_i-x| >\frac{\gamma_0 \rho \vartheta^*}{2}, \end{displaymath}
otherwise, let $i=S$ and the process is stopped. Here $\vartheta^*$ is the constant given in Theorem \ref{teotresfere}. All the balls $B_{\frac{\gamma_0 \rho \vartheta^*}{4}}(x_i)$ are pairwise disjoint, the distance between centers $| x_{i+1}-x_i | =\frac{\gamma_0 \rho \vartheta^*}{2}$ for all $i=1 \dots S-1$ and for the last point, $|x_S - x| \le \frac{\gamma_0 \rho \vartheta^*}{2}$. The number of spheres is bounded by \begin{displaymath} S\le C \Big( \frac{\rho_0}{\rho} \Big)^n \end{displaymath} where $C$ only depends on $\alpha$, $M_0$ and $M_1$. For every $\rho \le \overline{\rho}$, we have that, letting \begin{displaymath} \rho_1 = \frac{\gamma_0 \rho \vartheta^*}{4},\; \rho_2= \frac{3 \gamma_0 \rho \vartheta^*}{4}, \; \rho_3={\gamma_0 \rho \vartheta^*} \end{displaymath} an iteration of the three spheres inequality on a chain of spheres leads to
\begin{equation} \label{iteratresfere} \int_{B_{\rho_2} (x)} \! | w|^2 dx \le C \Big(\int_G \! | w|^2 dx \Big)^{1-\delta^S} \Big(\int_{B_{\rho_3}(x_0)} \! | w |^2 dx \Big)^{\delta^S} \end{equation} where $0<\delta<1$ and $C>0$ only depend on $M_0$ and $\alpha$. From our choice of $\bar{\rho}$ and $\vartheta^*$, it follows that $B_{\frac{\gamma_0 \rho \vartheta^*}{4}}(x_0) \subset B_{\rho^*}(x_0) \subset G \cap B_{\frac{3 \rho_1 }{4}}(P^*)$, where we follow the notations from Theorem \ref{stabilitycauchy}. We can therefore apply Theorem \ref{stabilitycauchy}. Let us call \begin{equation} \label{epsilontilde} \tilde{\epsilon} = \frac{ \epsilon}{ \norma{g}{\frac{1}{2}}{\Gamma} }. \end{equation} Using (\ref{NseHomDirEqn}), (\ref{stimau}) and (\ref{HpPiccolo}) on (\ref{iteratresfere}) we then have: \begin{equation} \label{pallina}
\int_{B_{\rho_2}(x)} \! | w|^2 dx \le C \rho_0^{n-2} \norma{g}{\frac{1}{2}}{\Gamma}^2 \tilde{\epsilon}^{2 \tau \delta^S}. \end{equation} The following interpolation inequality holds for all functions $v$ defined on the ball $B_t(x) \subset \mathbb{R}^n$: \begin{equation} \label{interpolation}
\|v \|_{\mathbf{L}^\infty (B_t(x))} \le C \Big( \Big( \int_{B_t(x)} | v|^2 \Big)^{\frac{1}{n+2}} \|\nabla v \|^{\frac{n}{n+2}}_{\mathbf{L}^\infty (B_t(x))} + \frac{1}{t^{n/2}} \Big( \int_{B_t(x)} | v|^2 \Big)^{\frac{1}{2}} \Big). \end{equation}
We apply it to $w$ in $B_{\rho_2}(x)$, using (\ref{pallina}) and (\ref{schauder1}) we obtain \begin{equation} \label{stimaw}
\| w \|_{\mathbf{L}^\infty (B_{\rho_2}(x))} \le C \Big( \frac{\rho_0}{\rho} \Big)^{\frac{n}{2}} \norma{g}{\frac{1}{2}}{\Gamma} \tilde{\epsilon}^{\gamma \delta^S}, \end{equation} where $\gamma=\frac{2\tau}{n+2}$. Finally, from (\ref{stimaw}) and (\ref{sommandi2}) we get: \begin{equation} \label{sommandi3}
\int_{D_2\setminus D_1} |\nabla u_1|^2 \le C \rho_0^{n-2} \norma{g}{\frac{1}{2}}{\Gamma}^2 \Big( \frac{\rho}{\rho_0} + \Big( \frac{\rho_0}{\rho} \Big)^{\frac{n}{2}} \tilde{\epsilon}^{\gamma \delta^S} \Big). \end{equation} Now call $$\til{\mu}=\exp \Big( -\frac{1}{\gamma} \exp \Big(\frac{2S \log \delta}{\theta^n}\Big)\Big) $$ and $\overline{\mu}= \min \{ \til{\mu}, \exp(-\gamma^{-2}) \}.$ Choose $\rho$ depending upon $\tilde{\epsilon}$ of the form \begin{displaymath}
\rho(\tilde{\epsilon}) = \rho_0 \Bigg( \frac{2S |\log \delta|}{\log |\log \tilde{\epsilon}^\gamma|} \Bigg)^{\frac{1}{n}}. \end{displaymath} We have that $\rho$ is defined and increasing in the interval $(0, e^{-1})$, and by definition $\rho(\overline{\mu}) \le \rho(\til{\mu}) = \theta \rho_0 = \overline{\rho}$; we are thus able to apply (\ref{sommandi3}) to (\ref{652}) with $\rho=\rho(\til{\epsilon})$ and obtain \begin{equation} \label{quasifinito}
\int_{D_2 \setminus D_1} |\nabla u_1|^2 \le C \rho_0^{n-2} \norma{g}{\frac{1}{2}}{\Gamma}^2 \big(\log |\log \til{\epsilon}^{\gamma}|\big)^{-\frac{1}{n}}, \end{equation} and since $\til{\epsilon} \le \exp(-\gamma^{-2})$ it is elementary to prove that \begin{displaymath}
\log |\log {\til{\epsilon}^\gamma}| \ge \frac{1}{2} \log | \log \til{\epsilon}|, \end{displaymath} so that (\ref{quasifinito}) finally reads \begin{displaymath}
\int_{D_2 \setminus D_1} |\nabla u_1|^2 \le C \rho_0^{n-2} \norma{g}{\frac{1}{2}}{\Gamma}^2 \,\omega(\til{\epsilon}), \end{displaymath}
with $\omega(t) = \big(\log |\log t|\big)^{-\frac{1}{n}}$ defined for all $0<t<e^{-1}$, and $C$ depends on $M_0$, $M_1$ and $\alpha$. \end{proof} \begin{proof}[Proof of Proposition \ref{teostabestimpr}] We will prove the statement for $u_1$, the case of $u_2$ being completely analogous. First of all, we observe that \begin{equation} \label{sommandiB}
\int_{D_2 \setminus D_1} |\nabla u_1|^2 \le \int_{\Omega_1 \setminus G} |\nabla u_1|^2 =\int_{\partial (\Omega_1 \setminus G)} (\nabla u_1 \cdot \nu) u_1 + \int_{\partial (\Omega_1 \setminus G)} p_1 ( u_1 \cdot \nu) \end{equation} and that \begin{equation*} \partial (\Omega_1 \setminus G) \subset \partial D_1 \cup (\partial D_2 \cap \partial G) \end{equation*} and recalling the no-slip condition, applying to (\ref{sommandiB}) computations similar to those in (\ref{652}), (\ref{6.53}), we have \begin{equation*} \begin{split}
& \int_{D_2 \setminus D_1} |\nabla u_1|^2 \le \int_{\partial D_2 \cap \partial G} (\nabla u_1 \cdot \nu) w + \int_{\partial D_2 \cap \partial G} p_1 ( w \cdot \nu) \le \\ \le & C \rho_0^{n-2}\norma{g}{\frac{1}{2}}{\Gamma} \max_{\partial D_2 \cap \partial G} |w|, \end{split} \end{equation*}
where again $w= u_1 - u_2$ and $C$ only depends on $\alpha$, $M_0$ and $M_1$. Take a point $z \in \partial G$. By the regularity assumptions on $\partial G$, we find a direction $\xi \in \mathbb{R}^n$, with $|\xi|=1$, such that the cone (recalling the notations used during the proof of Proposition \ref{teoPOS}) $C(z, \xi, \vartheta_0) \cap B_{\rho_0} (z) \subset G$, where $\vartheta_0 =\arctan \frac{1}{M_0}$. Again (\cite[Proposition 5.5]{ARRV}) $G_\rho$ is connected for $\rho \le \frac{\rho_0 h_0 }{3}$ with $h_0$ only depending on $M_0$. Now set \begin{equation*}\begin{split} \lambda_1 &= \min \Big\{ \frac{\tilde{\rho}_0}{1+\sin \vartheta_0}, \frac{\tilde{\rho}_0}{3\sin \vartheta_0}, \frac{\tilde{\rho}_0}{16(1+M_0^2)\sin \vartheta_0} \frac{}{} \Big\}, \\ \vartheta_1 & = \arcsin\Big(\frac{\sin \vartheta_0}{4} \Big), \\ w_1 &=z+ \lambda_1 \xi, \\ \rho_1 &= \vartheta^* h_0 \lambda_1 \sin \vartheta_1. \end{split}\end{equation*} where $0<\vartheta^*\le 1$ was introduced in Theorem \ref{teotresfere}. By construction, $B_{\rho_1}(w_1) \subset C(z, \xi, \vartheta_1) \cap B_{\tilde{\rho}_0}(z)$ and $B_{\frac{4 \rho_1}{\vartheta^*}}(w_1) \subset C(z, \xi, \vartheta_0) \cap B_{\tilde{\rho}_0}(z) \subset G$. Furthermore $\frac{4 \rho_1}{\vartheta^*} \le \rho^*$, hence $B_{\frac{4 \rho_1}{\vartheta^*}} \subset G$, where $\rho^*$ and $x_0$ were defined by (\ref{rhostar}) and (\ref{zetazero}) respectively, during the previous proof. Therefore, $w_1$, $x_0 \in \overline{G_{\frac{4\rho_1}{\vartheta^*}}}$, which is connected by construction. Iterating the three spheres inequality (mimicking the construction made in the previous proof)
\begin{equation} \label{iteratresferei} \int_{B_{\rho_1} (w_1)} \! | w|^2 dx \le C \Big(\int_G \! | w|^2 dx \Big)^{1-\delta^S} \Big(\int_{B_{\rho_1 }(x_0)} \! | w |^2 dx \Big)^{\delta^S} \end{equation} where $0<\delta<1$ and $C \ge 1$ depend only on $n$, and $S \le \frac{M_1 \rho_0^n}{\omega_n \rho_1^n}$. Again, since $B_{\rho^*}(x_0) \subset G \cap B_{\frac{3}{8}\rho_1}(P_0)$, we apply Theorem \ref{stabilitycauchy} which leads to \begin{equation}
\int_{B_{\rho_1}(w_1)} |w|^2 \le C \rho_0^n \norma{g}{\frac{1}{2}}{\Gamma}^2 \tilde{\epsilon}^{2\beta}, \end{equation} where $0<\beta<1$ and $C \ge 1$ only depend on $\alpha$, $M_0$, and $\frac{\tilde{\rho}_0}{\rho_0}$ and $\tilde{\epsilon}$ was defined in (\ref{epsilontilde}). So far the estimate we have is only on a ball centered in $w_1$, we need to approach $z \in \partial G$ using a sequence of balls, all contained in $C(z, \xi, \vartheta_1)$, by suitably shrinking their radii. Take \begin{equation*} \chi = \frac{1-\sin \vartheta_1}{1+\sin\vartheta_1} \end{equation*} and define, for $k \ge 2$, \begin{equation*} \begin{split} \lambda_k&=\chi \lambda_{k-1}, \\ \rho_k&= \chi \rho_{k-1}, \\ w_k &= z + \lambda_k \xi. \\ \end{split}\end{equation*} With these choices, $\lambda_k= \lambda \chi^{k-1} \lambda_1$, $\rho_k=\chi^{k-1} \rho_1$ and $B_{\rho_{k+1}}(w_{k+1}) \subset B_{3\rho_k}(w_k)$, $B_{\frac{4}{\vartheta^*}\rho_k}(w_k) \subset C(z, \xi, \vartheta_0) \cap B_{\tilde{\rho}_0}(z) \subset G$. Denote by \begin{displaymath}
d(k)= |w_k-z|-\rho_k, \end{displaymath} we also have \begin{displaymath} d(k)= \chi^{k-1}d(1), \end{displaymath} with \begin{displaymath} d(1)= \lambda_1(1-\vartheta^* \sin \vartheta_1). \end{displaymath} Now take any $\rho \le d(1)$ and let $k=k(\rho)$ the smallest integer such that $d(k) \le \rho$, explicitly \begin{equation} \label{chirho}
\frac{\big|\log \frac{\rho}{d(1)}\big|}{|\log \chi|} \le k(\rho)-1 \le \frac{\big| \log \frac{\rho} {d(1)}\big|}{|\log \chi|}+1. \end{equation} We iterate the three spheres inequality over the chain of balls centered at $w_j$ with radii $\rho_j$, $3 \rho_j$, $4\rho_j$, for $j=1, \dots, k(\rho)-1$, which yields \begin{equation} \label{iteratresferetre}
\int_{B_{\rho_{k(\rho)}}(w_{k(\rho)})} |w|^2 \le C \norma{g}{\frac{1}{2}}{\Gamma}^2 \rho^n \tilde{\epsilon}^{2 \beta \delta^{k(\rho)-1}}, \end{equation} with $C$ only depending on $\alpha$, $M_0$ and $\frac{\tilde{\rho}_0}{\rho_0}$. Using the interpolation inequality (\ref{interpolation}) and (\ref{schauder2}) we obtain \begin{equation}\label{543}
\|w \|_{\mathbf{L}^\infty (B_{\rho_{k(\rho)}}(w_{k(\rho)}))} \le C \norma{g}{\frac{1}{2}}{\Gamma} \frac{\tilde{\epsilon}^{\beta_1 \delta^{k(\rho)-1}}}{\chi^{\frac{n}{2}(k(\rho)-1)}}, \end{equation} where $\beta_1=\frac{2 \beta}{n+2}$ depends only on $\alpha$, $M_0$, $M_1$ and $\frac{\tilde{\rho}_0}{\rho_0}$. From (\ref{543}) and (\ref{schauder2}) we obtain \begin{equation} \label{544}
|w(z) | \le C \norma{g}{\frac{1}{2}}{\Gamma} \Bigg( \frac{\rho}{\rho_0} +\frac{\tilde{\epsilon}^{\beta_1 \delta^{k(\rho)-1}}}{\chi^{\frac{n}{2}(k(\rho)-1)}} \Bigg). \end{equation} Finally, call \begin{displaymath}
\rho(\tilde{\epsilon})= d(1) |\log \tilde{\epsilon}^{\beta_1}|^{-B}, \end{displaymath} with \begin{displaymath}
B= \frac{|\log \chi|}{2 |\log \delta|}, \end{displaymath} and let $\tilde{\mu} = \exp(-\beta_1^{-1})$. We have that $\rho(\tilde{\epsilon})$ is monotone increasing in the interval $0<\tilde{\epsilon} < \tilde{\mu}$, and $\rho(\tilde{\mu})=d(1)$, so $\rho(\tilde{\epsilon}) \le d(1)$ there. Putting $\rho=\rho(\tilde{\epsilon})$ into (\ref{544}) we obtain \begin{equation}
\int_{D_2 \setminus D_1} |\nabla u_1|^2 \le C \rho_0^{n-2}\norma{g}{\frac{1}{2}}{\Gamma}^2 |\log \tilde{\epsilon}|^{-B}, \end{equation} where $C$ only depends on $\alpha$, $M_0$ and $\frac{\til{\rho}_0}{\rho_0}$. \end{proof}
\section{Proof of Theorem \ref{stabilitycauchy}. } As already premised, in order to prove Theorem \ref{stabilitycauchy}, we will need to perform an extension argument on the solution to (\ref{NSE}) we wish to estimate. This has been done for solutions to scalar elliptic equations with sufficiently smooth coefficients (\cite{Isa}). Here, however, we are dealing with a system: extending $u$ implies finding a suitable extension for the pressure $p$ as well; moreover, both extensions should preserve some regularity they inherit from the original functions. Following the notations given for Theorem \ref{stabilitycauchy} we define $$Q(P_0) = B^\prime_{\rho_{00}} (0) \times \Big[-\frac{M_0\rho_0^2}{\sqrt{1+M_0^2}}, \frac{M_0\rho_0^2}{\sqrt{1+M_0^2}}\Big].$$ We have: \begin{equation}\label{gagrafico} \begin{split} \Gamma_0 &= \partial E \cap Q(P_0). \\
\end{split}\end{equation} We then call $E^- = Q(P_0) \setminus E$ and $\til{E} = E \cup E^- \cup \Gamma_0$.
\begin{lemma}[Extension] \label{teoextensionNSE} Suppose the hypotheses of Theorem \ref{stabilitycauchy} hold. Consider the domains $E^-$, $\til{E}$ as constructed above. Take, furthermore, $g \in \accan{\frac{5}{2}}{\partial E}$. Let $(u,p)$ be the solution to the following problem: \begin{equation}
\label{NseHomDirExt} \left\{ \begin{array}{rl}
\mathrm{div} {\hspace{0.25em}} \sigma(u,p) & = 0 \hspace{2em} \mathrm{\tmop{in}} \hspace{1em}
E,\\
\mathrm{div} {\hspace{0.25em}} u & = 0 \hspace{2em} \mathrm{\tmop{in}} \hspace{1em} E,\\
u & = g \hspace{2em} \mathrm{\tmop{on}} \hspace{1em} \Gamma,\\
\sigma (u, p) \cdot \nu & = \psi \hspace{2em} \mathrm{\tmop{on}}
\hspace{1em} \Gamma,\\ \end{array} \right. \end{equation} Then there exist functions $\tilde{u} \in \accan{1}{\til{E}}$, $\til{p} \in L^2(\til{E})$ and a functional $\Phi \in \accan{-1}{\til{E}}$ such that $\tilde{u} = u$, $\tilde{p} = p$ in $E$ and $(\til{u}, \til{p})$ solve the following: \begin{equation} \begin{split} \label{sistilde} \triangle \til{u} + \nabla \til{p} &= \Phi \, \, \text{ in } \, \, \til{E}, \\ \mathrm{div} {\hspace{0.25em}} \til{u}&=0 \, \, \text{ in } \, \, \til{E}. \end{split} \end{equation} If \[ \norma{g}{\frac{1}{2}}{\Gamma}+ \rho_0\norma{\psi}{-\frac{1}{2} }{\Gamma} = \eta , \] then we have \begin{equation} \label{stimaPhi} \norma{\Phi}{-1}{\til{E}} \le C\frac{\eta}{\rho_0}. \end{equation} where $C>0$ only depends on $\alpha$ and $M_0$. \end{lemma} \begin{proof} From the assumptions we made on the boundary data and the domain, it follows that $(u, p) \in \accan{3}{E} \times L^2(E) $. We can find (see \cite{MITREA} or \cite{BedFix}) a function $u^- \in \accan{3}{E^-}$ such that \begin{equation} \label{propumeno} \begin{split} \mathrm{div} {\hspace{0.25em}} u^-=0 \quad \mathrm{in}\quad E^-,\qquad u^-=g \quad\mathrm{on} \quad \Gamma,\\ \norma{u^-}{3}{E^-} \le C \norma{g}{\frac{1}{2}}{\Gamma},
\end{split} \end{equation}
with $C$ only depending on $|E|$. We now call \begin{displaymath} F^-= \triangle u^-,\end{displaymath} by our assumptions we have $ F^- \in \accan{1}{E^-}$. Let $p^- \in H^1(E^-)$ be the weak solution to the following Dirichlet problem:\begin{equation}\label{pmeno} \left\{ \begin{array}{rl}
\triangle p^- - \mathrm{div} {\hspace{0.25em}} F^- &=0 \hspace{2em} \mathrm{\tmop{in}} \hspace{1em} E^-,\\
p^- & = 0 \hspace{1em}
\hspace{0.50em} \mathrm{\tmop{on}} \hspace{1em}
\partial E^-.\\
\end{array} \right. \end{equation}
We now define \begin{equation}\label{effestesa} X^-= F^- -\nabla p^-. \end{equation} This field is divergence free by construction, and its norm is controlled by \begin{equation} \label{stimaX} \|X^- \|_{\elledue{E^-}} \le C \norma{g}{\frac{1}{2}}{\Gamma} \end{equation} We thus extend $(u,p)$ as follows: \begin{displaymath} \til{u}= \left\{ \begin{array}{rl} & u \quad \text{ in } \; \; E, \\ & u^- \quad \text{ in } \; \; E^-,\end{array} \right.\end{displaymath} \begin{displaymath}\til{p}= \left\{ \begin{array}{rl} & p \quad \text{ in } \; E, \\ & p^- \quad \text{ in } \; E^-. \end{array} \right. \end{displaymath} We now investigate the properties of the thus built extension $(\til{u},\til{p})$. Take any $v \in \accano{1}{\til{E}}$, we have \begin{equation}
\label{NSEEXT} \begin{split} &\int_{\til{E}} (\nabla \til{u} +(\nabla \til{u})^T - \til{p} {\hspace{0.25em}}\mathbb{I} ) \cdot \nabla v = \\ =& \int_{E} (\nabla u +(\nabla u )^T - p {\hspace{0.25em}}\mathbb{I} ) \cdot \nabla v + \int_{E^-} (\nabla u^- +(\nabla u^-)^T - p^- {\hspace{0.25em}}\mathbb{I} ) \cdot \nabla v. \end{split}\end{equation} About the first term, using (\ref{NSE}) and the divergence theorem we obtain \begin{equation}
\label{phiuno} \int_{E} (\nabla u +(\nabla u)^T - p {\hspace{0.25em}}\mathbb{I} ) \cdot \nabla v = \int_{\Gamma} \psi \cdot v. \end{equation} Define $ \Phi_1(v)= \int_{\Gamma} \psi \cdot v $ for all $v \in \accano{1}{\til{E}}$. Using the decomposition made in (\ref{effestesa}) on the second term, we have \begin{equation}\label{phiduetre} \begin{split} & \int_{E^-} (\nabla u^- +(\nabla u^- )^T - p^- {\hspace{0.25em}}\mathbb{I} ) \cdot \nabla v = \\ =& \int_{\Gamma}(\nabla u^- +(\nabla u^- )^T - p^- {\hspace{0.25em}}\mathbb{I} ) \cdot \nu \, v -\int_{E^-} \mathrm{div} {\hspace{0.25em}} \big( \nabla u^- +(\nabla u^- )^T - p^-{\hspace{0.25em}}\mathbb{I} \big) \cdot v= \\=& \int_{\Gamma}(\nabla u^- +(\nabla u^- )^T ) \cdot \nu \, v -\int_{E^-} (\triangle u^- -\nabla p^- ) \cdot v= \\ =&\int_{\Gamma}(\nabla u^- +(\nabla u^- )^T ) \cdot \nu \, v -\int_{E^-} X^- \cdot v = \Phi_2(v)+\Phi_3(v), \end{split}\end{equation} where we define for all $v \in \accano{1}{\til{E}}$ the functionals \begin{displaymath}
\begin{split} \Phi_2(v)&=\int_{\Gamma}(\nabla u^- +(\nabla u^- )^T ) \cdot \nu \, v, \\ \Phi_3(v)&=-\int_{E^-} X^- \cdot v \end{split} \end{displaymath} We can estimate each of the linear functionals $\Phi_1$, $\Phi_2$ and $\Phi_3$ easily, for we have (by (\ref{phiuno}) and the trace theorem):
\begin{equation} \label{stimaphi1} \big| \Phi_1(v) \big| \le \norma{\psi}{-\frac{1}{2}}{\Gamma} \norma{v}{\frac{1}{2}}{\Gamma} \le C \rho_0\norma{\psi}{-\frac{1}{2}}{\Gamma} \norma{v}{1}{E^-}, \end{equation} moreover (using (\ref{phiduetre}) and (\ref{propumeno}) )
\begin{equation} \label{stimaphi2} \big| \Phi_2(v) \big| \le {\| \nabla u^- \|_{{\bf L}^2(\Gamma)}} {\|v \|_{{\bf L}^2(\Gamma)}} \le C \norma{g}{\frac{1}{2}}{\Gamma} \norma{v}{1}{E^-}, \end{equation} and, at last, by (\ref{stimaX}),
\begin{equation} \label{stimaphi3} \big| \Phi_3(v) \big| \le \|X^- \|_{\elledue{E^-}} \|v \|_{\elledue{E^-}} \le C \norma{g}{\frac{1}{2}}{\Gamma} \norma{v}{1}{E^-}. \end{equation}
Then, defining $\Phi(v)=\Phi_1(v) + \Phi_2(v) + \Phi_3(v)$ for all $v \in \accano{1}{\til{E}}$, putting together (\ref{phiuno}), (\ref{phiduetre}), (\ref{stimaphi1}), (\ref{stimaphi2}) and (\ref{stimaphi3}), we have (\ref{stimaPhi}). \end{proof}
\begin{proof}[Proof of Theorem \ref{stabilitycauchy}] Consider the domain $\til{E}$ built at the beginning of this section, and take $\til{u}$ to be the extension of $u$ built according to Lemma \ref{teoextensionNSE}. By linearity, we may write $\til{u}= u_0+w$ where $(w,q)$ solves \begin{equation} \label{NSEPARTIC} \mathrm{div} {\hspace{0.25em}} \sigma (w, q) = \Phi \hspace{2em} \mathrm{\tmop{in}} \hspace{1em} \til{E}, \end{equation} and $w \in \accano{1}{\til{E}}$, whereas $(u_0, p_0)$ solves \begin{equation} \label{NSEHOM} \left\{ \begin{array}{rl}
\mathrm{div} {\hspace{0.25em}} \sigma (u_0, p_0) &= 0 \hspace{2em} \mathrm{\tmop{in}} \hspace{1em} \til{E}, \\
u_0 & = 0 \hspace{2em} \mathrm{\tmop{on}} \hspace{1em} \Gamma,\\
\sigma (u_0, p_0) \cdot \nu & = \psi \hspace{2em} \mathrm{\tmop{on}}
\hspace{1em} \Gamma.
\end{array} \right.
\end{equation} Using well-known results about the interior regularity of solutions to strongly elliptic equations, we have \begin{equation}
\| u_0 \|_{{\bf L}^\infty( B_{\frac{t}{2}} (x))} \le C t^{-\frac{n}{2}} \normadue{u_0}{B_{t}(x)}. \end{equation} It is then sufficient to estimate $\normadue{u}{B(x)}$ for a ``large enough'' ball near the boundary. Since (see the proof of Proposition \ref{teoPOS}) $\triangle^2 u_0=0$, we may apply Theorem \ref{teotresfere} to $u_0$. Calling $r_1= \frac{\rho_{00}}{8}$, $r_2= \frac{3 \rho_{00}}{8}$ and $r_3= \rho_{00}$ we have (understanding that all balls are centered at $P^*$) \begin{equation} \label{3sfereu0} \normadue{u_0}{B_{r_2}} \le C \normadue{u_0}{B_{r_1}}^{\tau} \normadue{u_0}{B_{r_3}}^{1-\tau}. \end{equation} Let us call $\eta=\rho_0\norma{\psi}{-\frac{1}{2}}{\Gamma}$. By the triangle inequality, (\ref{propumeno}) and (\ref{stimau}) we have that \begin{equation} \label{trin1} \normadue{u_0}{B_{r}} \le \normadue{\til{u}}{B_{r}}+\normadue{w}{B_{r}} \le \normadue{\til{u}}{B_{r}} + C \eta,
\end{equation} for $r=r_1,r_3$; furthermore, we have \begin{equation} \label{trin2} \normadue{\til{u}}{B_{r_2}} \le \normadue{u_0}{B_{r_2}}+\normadue{w}{B_{r_2}} \le \normadue{u_0}{B_{r_2}} + C \eta. \end{equation} Putting together (\ref{3sfereu0}), (\ref{trin1}), (\ref{trin2}), and recalling (\ref{stimau}) and (\ref{stimanormadiretto}) we get \begin{equation} \begin{split} \label{3sfere2} & \normadue{u}{B_{r_2}} \le \normadue{\til{u}}{B_{r_2} \cap E} \le \\ \le & C \eta + C (\normadue{\til{u}}{B_{r_1}}+ C \eta)^{\tau} (\normadue{\til{u}}{B_{r_3} \cap E} + C \eta )^{1-\tau} \le \\ \le & C \big( \eta + \eta^\tau (\eta + \normadue{u}{E} )^{1-\tau} \big) \le C \eta^\tau \normadue{u}{E}^{1-\tau}. \end{split} \end{equation} \end{proof}
\end{document} |
\begin{document}
\title{Comparison of quantum discord and relative entropy in some bipartite quantum systems }
\author{M.~Mahdian} \altaffiliation{ Author to whom correspondence should be addressed; electronic mail: [email protected]} \affiliation{ Faculty of Physics, Theoretical and astrophysics department , University of Tabriz, 51665-163 Tabriz, Iran}
\author{M. B. ~Arjmandi}
\affiliation{ Faculty of Physics, Theoretical and astrophysics department , University of Tabriz, 51665-163 Tabriz, Iran}
\begin{abstract}
The study of quantum correlations in high-dimensional bipartite systems is crucial for the development of quantum computing. We propose the relative entropy as a distance measure of correlations: correlations may be measured by means of the distance from the quantum state to the closest classical-classical state. In particular, we establish relations between the relative entropy and the quantum discord quantifiers obtained by means of orthogonal projection measurements. We show that for symmetric X-state density matrices the quantum discord is equal to the relative entropy. At the end of the paper, various examples of X-states, such as two-qubit and qubit-qutrit states, are presented.
\end{abstract}
\pacs{03.67.Mn 03.65.Ta }
\maketitle
\section{Introduction}
The main goal of quantum information theory is to quantify and describe quantum processes that rely on quantum correlations [1,2,3,4]. These correlations are essential resources in information and computation sciences, and different measures have been put forward for them. Entanglement, as the cornerstone of correlation measures, plays an effective role in information processing and has applications in quantum computing, cryptography and superdense coding [5,6], and has been used to quantify quantum teleportation \cite{Bennett2,Oh}. Apart from entanglement, quantum states can exhibit other correlations not present in classical systems, and these could be a new resource for quantum computation. In order to quantify quantum correlations, a suitable measure is the so-called quantum discord, introduced by Ollivier and Zurek \cite{zurek} and, independently, by Henderson and Vedral \cite{Vedral}. Over the past decade, quantum discord has received a lot of attention, and many studies have been performed and articles written about it \cite{ Li, Lang, Mahdian, Luo, Girolami1, Daki, Girolami2, Saif, Shunlong}. For pure states, quantum discord reduces to entanglement, but it has a nonzero value for some mixed separable states, and it is defined as the discrepancy between the total correlation and the classical correlation. However, for mixed quantum states, the evaluation of quantum discord is based on minimization procedures over all possible positive operator valued measures (POVM), or von Neumann measurements, that can be performed on the subsystems, and thus it is somewhat difficult to calculate even numerically.\\ Quite recently, a few analytical results on quantum discord, including especially the case of two-qubit states such as rank-2 states \cite{shi} and Bell-diagonal states \cite{Luo}, have been obtained. In addition, for a rather limited set of two-qubit states, the so-called X states, an analytical formula for the quantum discord was proposed by Ali et al.\ \cite{mazhar}. We know that quantum discord measures the amount of information that cannot be obtained by performing the measurement on one subsystem alone, and that the state after the measurement is a conditional state. In this paper, however, we consider density matrices for which the state obtained after performing measurements on all the subsystems is a classical-classical state (which means that the right and left quantum discord are equal). For this kind of quantum state the quantum discord is equal to the relative entropy, and the calculation of the relative entropy is straightforward.\\
\emph{Quantum discord}. The classical mutual information $I(A:B)$ for two discrete random variables A and B is defined as $I(A:B)=H(A)+H(B)-H(A,B)$. Here, $H(p)=-\sum_{i}p_{i} \log p_{i} $ denotes the Shannon entropy of the corresponding probability distribution \cite{Nielsen}. For a classical probability distribution, Bayes' rule, $p(a_{i},b_{j})=p(a_{i}|b_{j})p(b_{j})=p(b_{j}|a_{i})p(a_{i})$, leads to an equivalent definition of the mutual information as
$I(A:B)=H(A)-H(A|B)$.\\ For a given quantum density matrix of a composite system $\rho_{AB}$, the total amount of correlations, including classical and quantum correlations, is quantified by the quantum mutual information \begin{equation} I(\rho_{AB})=S(\rho_{A})+S(\rho_{B})-S(\rho_{AB}), \end{equation} where $S(\rho) = -\mathrm{Tr}(\rho \log \rho)$ denotes the von Neumann entropy of the relevant state, and A and B share the quantum state $\rho_{AB}\in \cal{H}_{A}\otimes \cal{H}_{B}$.
Assume that we perform a set of local projective measurements (von Neumann measurements)
$\{\Pi^{(j)}_{B}=|j_B\rangle\langle j_B|\}$ on subsystem B .
The measurements will disturb subsystem B and the whole system AB simultaneously.
If the measurement is taken over a complete set of von Neumann projective measurements (one-dimensional orthogonal projectors) ${\{\Pi^{j}_{B}}\}$, corresponding to outcomes $j$, on subsystem B, the resulting state is given by the ensemble $\{\rho_{A|i}, P_{i}\}$, where $ \rho_{A|i} $ is the conditional density matrix of the bipartite system: \begin{equation}
\rho_{A|i}=\frac{1}{p_{i}}{(I_A\otimes\Pi^{j}_{B}) \rho_{AB} (I_A\otimes\Pi^{j}_{B})}, \end{equation} and, after taking the partial trace over subsystem B, the resulting state of subsystem A is \begin{equation}
\rho_{A|i}=\frac{1}{p_{i}}\texttt{Tr}_B\{(I_A\otimes\Pi^{j}_{B}) \rho_{AB} (I_A\otimes\Pi^{j}_{B})\}, \end{equation} \begin{equation} P_{i}=\texttt{Tr} (\Pi^{j}_{B}\rho_{AB}\Pi^{j}_{B}), \end{equation} with $I_A$ being the identity matrix of subsystem A and $\texttt{Tr}_{B}$ denoting the partial trace over subsystem B.
As an example of such a measurement set, for a two-qubit state one may take \begin{equation}
\Pi^{1}=\frac{1}{2} (I+\sum_{j} n_{j} \sigma_{j}), \end{equation} \begin{equation}
\Pi^{2}=\frac{1}{2} (I-\sum_{j} n_{j} \sigma_{j}), \end{equation} that $ \sigma_{j} $ are the Pauli matrices and $ \widehat{n} $ is the Bloch sphere eigen vectors : \begin{equation} \widehat{n}=(\widehat{n}_{x}, \widehat{n}_{y}, \widehat{n}_{z})=(sin\theta cos\phi, sin\theta sin\phi, cos\theta) . \end{equation}
A quantum analogue of the conditional entropy can then be defined as
$S_{\{\Pi^{j}_{B}\}}(A|B)\equiv\sum_{i}P_{i}S(\rho_{A|i})$ and an alternative version of the quantum mutual information can now be defined as $J_{\{\Pi^{j}_{B}\}}(A|B)=
S(\rho_{A} )- S_{\{\Pi^{j}_{B}\}}(A|B)$ where $\rho_{A} = \texttt{Tr}_{B}(\rho)$ and $\rho_{B} = \texttt{Tr}_{A}(\rho)$
are the reduced density matrices of subsystems A and B. The above quantity depends on the selected set of von Neumann measurements, or a suitable set of orthogonal projectors ${\{\Pi^{j}_{B}}\}$. To get all the classical correlations present in $\rho_{AB}$, we maximize $J_{\{\Pi^{j}_{B}\}}(\rho_{AB})$ over all ${\{\Pi^{j}_{B}}\}$: \begin{equation}\label{122}
J (\rho_{AB})= Max_{\{\Pi^{j}_{B}\}}\{S(\rho_{A})-S_{\{\Pi^{j}_{B}\}}(A|B)\}. \end{equation} Then, quantum discord on subsystem B is defined (right quantum discord) as: $$D_{R}(\rho_{AB})=I(\rho_{AB})-J(\rho_{AB})$$ \begin{equation}
=S(\rho_{B})-S(\rho_{AB})+Min_{{\{\Pi^{j}_{B}}\}}S_{\{\Pi_{j}\}}(A|B). \end{equation} If the measurement is taken over all possible POVMs
${\{\Pi^{j}_{A}}\}$ on subsystem A, the resulting state is given by the shared ensemble $\{\rho_{B|i}, P_{i}\}$, where $ \rho_{B|i} $ is conditional density matrix of bipartite system : \begin{equation}
\rho_{B|i}=\frac{1}{p_{i}}{(\Pi^{j}_{A}\otimes I_B) \rho_{AB} (\Pi^{j}_{A}\otimes I_B)}, \end{equation} and, by taking the partial trace over subsystem A, the resulting state of subsystem B is \begin{equation}
\rho_{B|i}=\frac{1}{p_{i}}\texttt{Tr}_A\{(\Pi^{j}_{A}\otimes I_B) \rho_{AB} (\Pi^{j}_{A}\otimes I_B)\}, \end{equation} \begin{equation} P_{i}=\texttt{Tr}(\Pi^{j}_{A}\rho_{AB}\Pi^{j}_{A}), \end{equation} with $I_B$ being the identity matrix of subsystem B and $\texttt{Tr}_{A}$ denoting the partial trace over subsystem A. Similarly to the above relations, the classical correlation and the quantum discord on subsystem A (left quantum discord) are defined as \begin{equation}\label{222}
J (\rho_{AB})= Max_{\{\Pi^{j}_{A}\}}\{S(\rho_{B})-S_{\{\Pi^{j}_{A}\}}(B|A)\}, \end{equation} and \begin{equation}\label{QD}
D_{L}(\rho_{AB})=S(\rho_{A})-S(\rho_{AB})+Min_{{\{\Pi^{j}_{A}}\}}S_{\{\Pi^{j}_{A}\}}(B|A). \end{equation} It has been shown that $D_{R}(\rho)$ and $D_{L}(\rho)$ are always non-negative and that the quantum discord is not symmetric, i.e.\ $D_{R}(\rho)\neq D_{L}(\rho)$ in general \cite{zurek}.\\ As mentioned above, for quantifying correlations we need to perform an optimization over POVM measures in order to extract all of the classical correlations, which is a nontrivial task \cite{Nielsen, Brandt, Sandor}. Therefore, it is difficult to calculate the quantum discord in the general case, since the optimization has to be carried out. In this article, for some bipartite density matrices in the SU(N) algebra (i.e.\ X-states), we will see that the density matrix of the composite subsystems A and B after von Neumann measurements is a conditional density matrix of the bipartite system that is a classical-classical state, and that for these X-states the right and left quantum discord are equal. Therefore, the set of measurements is complete and our states are turned into classical states, so we only extract classical correlation from them. Maximization of $J(\rho_{AB})$ captures the maximum classical correlation that can be extracted from the system, and whatever extra correlation may remain is the quantum correlation. So, for these bipartite quantum systems, we can use the relative entropy of discord instead of the quantum discord.\\ The organization of this paper is as follows. In Sec. II we explain the relative entropy of discord and reveal the relation between the quantum discord and the relative entropy of discord. In Sec. III, we give an account of the SU(N) algebra and obtain the general form of the density matrix. In Sec. IV, we perform our investigation on two-qubit states and establish the corresponding relations. In Sec. V, we perform our inquiry on qubit-qutrit states and obtain the same results. Finally, we summarize our results in Sec. VI.
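The optimization entering the above definitions can be carried out numerically by brute force for two qubits. The following Python sketch (an illustration of ours, not taken from the cited references; it reuses the \texttt{projectors} helper defined in the previous snippet and sweeps the Bloch angles on a finite grid, so it only approximates the true minimum) computes the right quantum discord $D_{R}$ of a two-qubit density matrix:
\begin{verbatim}
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def partial_trace(rho, keep):
    """Partial trace of a two-qubit state; keep = 0 keeps A, keep = 1 keeps B."""
    r = rho.reshape(2, 2, 2, 2)
    return np.trace(r, axis1=1, axis2=3) if keep == 0 else np.trace(r, axis1=0, axis2=2)

def right_discord(rho, grid=60):
    """Brute-force D_R: von Neumann measurements on subsystem B."""
    rho_B = partial_trace(rho, 1)
    best_cond = np.inf
    for theta in np.linspace(0.0, np.pi, grid):
        for phi in np.linspace(0.0, 2.0 * np.pi, grid):
            cond = 0.0
            for P in projectors(theta, phi):
                M = np.kron(np.eye(2), P)
                p = np.real(np.trace(M @ rho @ M))
                if p > 1e-12:
                    cond += p * vn_entropy(partial_trace(M @ rho @ M / p, 0))
            best_cond = min(best_cond, cond)
    # D_R = S(rho_B) - S(rho_AB) + min over measurements of S(A|B)
    return vn_entropy(rho_B) - vn_entropy(rho) + best_cond
\end{verbatim}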
\section{Relative Entropy Of Discord}
The relative entropy is a non-negative and appropriate measure of distance between two arbitrary states, which is defined as \cite{Modi3} \begin{equation}
S(\rho\|\gamma)=\texttt{Tr}(\rho log_2\rho-\rho log_2\gamma). \end{equation}
By using the concept of relative entropy, we can define the geometric discord (GD) as the minimum of the distance between the closest classical-classical state and the state of the bipartite system,
\begin{equation}
GD_{rel}(\rho_{AB})=Min_{\chi\in \cal{C}}S(\rho_{AB}\|\chi), \end{equation}
where $\chi $ belongs to the set of classical states $\cal{C}$ and the minimum is taken over all possible states $\chi$. \\
Modi et al.\ have shown in [29] that the definition of the relative entropy given in equation (15) can be replaced by \begin{equation}
S(x\|y)=S(y)-S(x). \end{equation} So we will have
$$GD_{rel}(\rho_{AB})=S(\rho_{AB}\|\chi)=S(\chi_{\rho_{AB}})-S(\rho_{AB})=$$ \begin{equation}\label{REL} \texttt{Tr}(\rho_{AB}\log\rho_{AB}-\rho_{AB}\log{\chi_{\rho_{AB}}}). \end{equation} Moreover, the closest classical-classical state is defined by \begin{equation} \chi_{\rho_{AB}}=\sum_{j}(\Pi^j_{A}\otimes\Pi^j_{B})\rho_{AB}(\Pi^j_{A}\otimes\Pi^j_{B}), \end{equation} where $ \Pi^j $ is the von Neumann projective measurement acting on subsystems A and B. Now, it can be shown that after the von Neumann measurement the result will be a conditional state as in equations (2) and (10), and for this kind of density matrix, e.g.\ an X-state, it is a classical-classical state. So, for these bipartite systems, the optimization problem over measurements used for computing the quantum discord in equation (14) can be turned into the minimization of the distance between the density matrix of the bipartite system and the closest classical-classical state. Then we can use the geometric discord instead of the quantum discord for these density matrices.
Let the projection measurements $\Pi^{1}_{A,B}$ and $\Pi^{2}_{A,B}$ act on subsystems A and B. After applying the measurements, the conditional states will be \begin{equation}
\rho_{A|i}=\sum_{i}\frac{(\mathbb{I}\otimes \Pi^{i}_B)\rho_{AB}(\mathbb{I}\otimes \Pi^{i}_B)}{\texttt{Tr} (\Pi^{i}_B\rho_{AB}\Pi^{i}_{B})}, \end{equation} \begin{equation}
\rho_{B|i}=\sum_{i}\frac{(\Pi^{i}_A\otimes \mathbb{I})\rho_{AB}(\Pi^{i}_A\otimes \mathbb{I})}{\texttt{Tr} (\Pi^{i}_{A}\rho_{AB}\Pi^{i}_{A})}. \end{equation} For the bipartite systems considered here, we have verified that after the projective measurements on the subsystems we get the conditional states
$$\rho_{A|i}=\rho_{A|\Pi^{1}_B}+\rho_{A|\Pi^{2}_B}=\chi_{\rho_{AB}},$$
and also
$$\rho_{B|i}=\rho_{B|\Pi^{1}_A}+\rho_{B|\Pi^{2}_A}=\chi_{\rho_{AB}},$$
and here
$$\texttt{Tr}(\Pi^{i}_{B}\rho_{AB}\Pi^{i}_{B})=\texttt{Tr}(\Pi^{i}_{A}\rho_{AB}\Pi^{i}_{A})=Tr(\chi_{\rho_{AB}})=1. $$
So, comparing equations (\ref{REL}) and (\ref{QD}), after some calculation for these bipartite quantum states (X-states) we get \begin{equation}
S(\chi_{\rho_{AB}})=S(\rho_{B})+Min_{{\{\Pi^{j}}\}}S_{\{\Pi^{j}\}}(B|A). \end{equation} So we have
$$D_{R,L}=S(\rho_{B})+Min_{{\{\Pi^{j}}\}}S_{\{\Pi^{j}\}}(B|A)-S(\rho_{AB})$$ \begin{equation}\label{re-dis} =S(\chi_{\rho_{AB}})-S(\rho_{AB})=GD_{rel}. \end{equation}
We apply this method to various examples of X-states, such as two-qubit states, calculate the quantum discord for these quantum states, and show that the result agrees with the results of previous papers, especially with Mazhar Ali's work in reference [11].\\ Another measure of quantum correlation is the quantum deficit, which is defined as the difference between the work, or information, of the total system and the information of the subsystems after LOCC operations are applied to localize the information \cite{Modi3}. It is categorized into zero-, one- and two-way deficits, which differ in the type of interaction allowed between the subsystems. The zero-way quantum deficit is quantified as the minimum of the distance between the state of the system and a classical-classical state, \begin{equation} \Delta=Min_{\Pi_{a},\Pi_{b}}(S(\chi_{\rho_{AB}})-S(\rho_{AB})), \end{equation} where $ \chi_{\rho_{AB}} $ is the classical-classical state represented in equation (19). By these considerations, the zero-way quantum deficit is equal to the minimum of the relative entropy.
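For fixed local bases, the quantity $S(\chi_{\rho_{AB}})-S(\rho_{AB})$ of Eq. (18) is immediate to evaluate: dephasing in the chosen product basis simply deletes the off-diagonal elements of $\rho_{AB}$. A minimal Python sketch (ours; it assumes the computational basis is the relevant one, as will be the case for the X-states considered below, and it reuses \texttt{vn\_entropy} from the previous snippet):
\begin{verbatim}
import numpy as np

def dephase(rho):
    """chi = sum_jk (|j><j| (x) |k><k|) rho (|j><j| (x) |k><k|):
    keep only the diagonal of rho in the product (computational) basis."""
    return np.diag(np.diag(rho))

def relative_entropy_discord(rho):
    """S(rho || chi_rho) = S(chi_rho) - S(rho), Eq. (18), computational basis."""
    return vn_entropy(dephase(rho)) - vn_entropy(rho)
\end{verbatim}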
\section{SU(N) Description}
In this section, we express Hermitian operators on a discrete N-dimensional Hilbert space $\cal{H}$ in terms of the generators of the SU(N) algebra \cite{Schlienz}. To obtain the generators of the SU(N) algebra, we introduce a set of projection operators as follows: \begin{equation}
\widehat{P}_{jk}=|j\rangle\langle k|, \end{equation}
where $|n\rangle$ are the orthonormalized eigenstates of a linear Hermitian operator. We can construct $N^{2}-1$ operators with \begin{equation} \widehat{U}_{jk}=\widehat{P}_{jk}+\widehat{P}_{kj}, \end{equation} \begin{equation} \widehat{V}_{jk}=-i(\widehat{P}_{jk}-\widehat{P}_{kj}), \end{equation} \begin{equation} \widehat{W}_{l}=\sqrt{\frac{2}{l(l+1)}}(P_{11}+\cdots+P_{ll}-lP_{l+1,l+1}), \end{equation}
where $1\leq j<k\leq N$ and $1\leq l\leq N-1$. The set of the resulting operators is given by \begin{equation} \{\widehat{\lambda}_{j}\}=\{\widehat{U}_{jk}\}\cup\{\widehat{V}_{jk}\}\cup\{\widehat{W}_{l}\}, \end{equation} $$\{j=1,2,...,N^{2}-1\};$$ the matrices $\{\widehat{\lambda}_{j}\}$ are called generalized Pauli matrices or SU(N) generators, and the density matrix for this algebra is represented by \begin{equation} \rho=\frac{1}{N}\mathbb{I}+\frac{1}{2}\sum_{j=1}^{N^{2}-1}\lambda_{j}\widehat{\lambda}_{j}. \end{equation} They also satisfy the following relations:
$$Tr(\widehat{\lambda}_{i}\widehat{\lambda}_{j})=2\delta_{ij},$$
$$S_{j}=Tr\{\widehat{\lambda}_{j}\rho\},$$
$$Tr\{\widehat{\lambda}_{j}\}=0.$$
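The generators defined above are straightforward to construct programmatically. The following Python sketch (ours, added only as an illustration) builds the full set $\{\widehat{\lambda}_{j}\}$ for arbitrary $N$ and verifies the normalization $Tr(\widehat{\lambda}_{i}\widehat{\lambda}_{j})=2\delta_{ij}$; for $N=3$ it reproduces the eight Gell-Mann matrices.
\begin{verbatim}
import numpy as np

def su_n_generators(N):
    """Generalized Pauli matrices U_jk, V_jk, W_l (SU(N) generators)."""
    def P(j, k):                       # P_jk = |j><k|
        m = np.zeros((N, N), dtype=complex)
        m[j, k] = 1.0
        return m
    gens = []
    for j in range(N):
        for k in range(j + 1, N):
            gens.append(P(j, k) + P(k, j))           # U_jk
            gens.append(-1j * (P(j, k) - P(k, j)))   # V_jk
    for l in range(1, N):                            # W_l, l = 1, ..., N-1
        W = np.zeros((N, N), dtype=complex)
        W[:l, :l] = np.eye(l)
        W[l, l] = -l
        gens.append(np.sqrt(2.0 / (l * (l + 1))) * W)
    return gens

lam = su_n_generators(3)     # the eight Gell-Mann matrices for N = 3
gram = np.array([[np.trace(a @ b).real for b in lam] for a in lam])
assert np.allclose(gram, 2.0 * np.eye(len(lam)))     # Tr(l_i l_j) = 2 delta_ij
assert all(abs(np.trace(a)) < 1e-12 for a in lam)    # traceless
\end{verbatim}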
For a bipartite system with states $\rho_{AB}\in\cal{H_A}\otimes \cal{H_B},$ $dim H_A=d_{A}$ and $dim H_B= d_{B},$ density matrix is shown in Fano form \cite{Fano} as \begin{equation} \rho_{AB}=\frac{1}{d_{A}d_{B}}(\mathbb{I}_{A}\otimes\mathbb{I}_{B}+ \sum^{N^{2}-1}_{i=1}\alpha_{i}\widehat{\lambda}_{i}^{A}\otimes\mathbb{I}_{B}+ \sum_{j=1}^{N^{2}-1}\beta_{j}\mathbb{I}_{A}\otimes\widehat{\lambda}_{j}^{B} \end{equation} \begin{equation} +\sum_{i=1}^{N^{2}-1}\sum_{j=1}^{N^{2}-1}\gamma_{ij}\widehat{\lambda}_{i}^{A}\otimes\widehat{\lambda}_{j}^{B}). \end{equation}
The closest classical-classical state, with projection operators $P_{k}=|k\rangle\langle k|$, is given by
$$\chi_{\rho_{(AB)}}=\sum_{k}(P_{k}^{A}\otimes P_{k}^{B})\rho_{AB} (P_{k}^{A}\otimes P_{k}^{B})=\frac{1}{d_{A}d_{B}}(\mathbb{I}_{A}\otimes\mathbb{I}_{B}$$ $$+\sum_{k=1}^{N}\sum_{i=1}^{N^{2}-1}\alpha_{i}(P_{k}^{A}\lambda_{i}^{A}P_{k}^{A})\otimes\mathbb{I}_{B}+ \sum_{k=1}^{N}\sum_{j=1}^{N^{2}-1}\beta_{j}\mathbb{I}_{A}\otimes(P_{k}^{B}\lambda_{j}^{B}P_{k}^{B})$$ \begin{equation} +\sum_{k=1}^{N}\sum_{i=1}^{N^{2}-1}\sum_{j=1}^{N^{2}-1}\gamma_{ij} (P_{k}^{A}\lambda_{i}^{A}P_{k}^{A})\otimes(P_{k}^{B}\lambda_{j}^{B}P_{k}^{B}), \end{equation} where after calculation takes the form: \begin{equation}
\chi_{\rho_{AB}}=\sum_{A,B}(|k_{A}k_{B}\rangle\langle k_{A}k_{B}|)\rho_{AB}(|k_{A}k_{B}\rangle\langle k_{A}k_{B}|), \end{equation} by applying the projection operators we obtain \begin{equation} P_{k}\{\widehat{V}_{jk}\}P_{k}=0, \end{equation} \begin{equation} P_{k}\{\widehat{U}_{jk}\}P_{k}=0, \end{equation} \begin{equation} P_{k}\{\widehat{W}_{l}\}P_{k}\neq0. \end{equation} Density matrix of projective measurements on the subsystem A for density matrix Eq. (21) is
$$\rho_{B|k_1}=\sum_{k}(P_{k}^{A}\otimes \mathbb{I})\rho_{AB}(P_{k}^{A}\otimes\mathbb{I})=\frac{1}{d_{A}d_{B}}(\mathbb{I}_{A}\otimes\mathbb{I}_{B}$$ $$+\sum_{k=1}^{N}\sum_{i=1}^{N^{2}-1}\alpha_{i}(P_{k}^{A}\lambda_{i}^{A}P_{k}^{A})\otimes\mathbb{I}_{B} +\sum_{k=1}^{N}\sum_{j=1}^{N^{2}-1}\beta_{j}\mathbb{I}_{A}\otimes \lambda_{j}^{B}$$ \begin{equation} +\sum_{k=1}^{N}\sum_{i=1}^{N^{2}-1}\sum_{j=1}^{N^{2}-1}\gamma_{ij}(P_{k}^{A}\lambda_{i}^{A}P_{k}^{A})\otimes \lambda_{j}^{B}), \end{equation} with considering Eqs. (25, 26, 27) we will gain
$$\rho_{B|k_1}=\frac{1}{d_{A}d_{B}}(\mathbb{I}_{A}\otimes\mathbb{I}_{B}$$ $$+\sum_{k=1}^{N}\sum_{i=1}^{N^{2}-1}\alpha_{i}(P_{k}^{A}\{\widehat{W}_{l}\}P_{k}^{A})\otimes\mathbb{I}_{B} +\sum_{k=1}^{N}\sum_{j=1}^{N^{2}-1}\beta_{j}\mathbb{I}_{A}\otimes \lambda_{j}^{B}$$ \begin{equation} +\sum_{k=1}^{N}\sum_{i=1}^{N^{2}-1}\sum_{j=1}^{N^{2}-1}\gamma_{ij}(P_{k}^{A}\{\widehat{W}_{l}\}P_{k}^{A})\otimes \lambda_{j}^{B}). \end{equation}
The density matrix after projective measurements on subsystem B, for the density matrix of Eq. (22), is
$$\rho_{A|k_2}=\sum_{k}(\mathbb{I}\otimes P_{k}^{B})\rho_{AB}(\mathbb{I}\otimes P_{k}^{B})=\frac{1}{d_{A}d_{B}}(\mathbb{I}_{A}\otimes\mathbb{I}_{B}$$ $$+\sum_{k=1}^{N}\sum_{i=1}^{N^{2}-1}\alpha_{i}\lambda_{i}^{A}\otimes\mathbb{I}_{B}+ \sum_{k=1}^{N}\sum_{j=1}^{N^{2}-1}\beta_{j}\mathbb{I}_{A}\otimes (P_{k}^{B}\lambda_{j}^{B}P_{k}^{B})$$ \begin{equation} +\sum_{k=1}^{N}\sum_{i=1}^{N^{2}-1}\sum_{j=1}^{N^{2}-1} \gamma_{ij}\lambda_{i}^{A}\otimes(P_{k}^{B}\lambda_{j}^{B}P_{k}^{B})), \end{equation} with considering Eqs. (25, 26, 27) we will get
$$\rho_{A|k_2}=\frac{1}{d_{A}d_{B}}(\mathbb{I}_{A}\otimes\mathbb{I}_{B}$$ $$+\sum_{k=1}^{N}\sum_{i=1}^{N^{2}-1}\alpha_{i}\lambda_{i}^{A}\otimes\mathbb{I}_{B}+ \sum_{k=1}^{N}\sum_{j=1}^{N^{2}-1}\beta_{j}\mathbb{I}_{A}\otimes (P_{k}^{B}\{\widehat{W}_{l}\}P_{k}^{B})$$ \begin{equation} +\sum_{k=1}^{N}\sum_{i=1}^{N^{2}-1}\sum_{j=1}^{N^{2}-1}\gamma_{ij}\lambda_{i}^{A} \otimes(P_{k}^{B}\{\widehat{W}_{l}\}P_{k}^{B})). \end{equation}
The computation of the quantum discord depends on the optimization over measurements. In the next sections, two examples of X-state density matrices, namely two-qubit and qubit-qutrit states, are presented. We show that the optimization over measurements can be replaced by the minimization of the distance between the state of the bipartite system and its classical-classical state. We also intend to extend this method to higher-dimensional bipartite systems.
\section{Two-Qubit density matrices}
As the first example, we investigate two-qubit states, which are frequently encountered in condensed matter systems, quantum dynamics, etc., and apply our results. The general form of a two-qubit density matrix is given by \begin{equation}\label{density} \rho_{AB}=\frac{1}{4}(\mathbb{I}_{2}\otimes \mathbb{I}_{2}+\sum_{i=1}^{3}\alpha_{i}\sigma_{i}\otimes \mathbb{I}_{2}+\sum_{i=1}^{3}\beta_{i}\mathbb{I}_{2}\otimes \sigma_{i}+\sum_{i,j=1}^{3}\gamma_{ij}\sigma_{i}\otimes\sigma_{j}), \end{equation}
where $\alpha_{i}, \beta_{i}, \gamma_{ij}\in\mathbb{R}$, $\sigma_{i}$ $(i=1,2,3)$ are the three Pauli matrices, and $\mathbb{I}$ is the identity matrix. For this density matrix, the closest classical-classical state, according to Eq. (25), is calculated as follows:
$$\chi_{\rho_{AB}}=\frac{1}{4}(\mathbb{I}_{2}\otimes\mathbb{I}_{2}
+\sum_{k_{A}=1}^{2}\sum_{i=1}^{3}\alpha_{i}(|k_{A}\rangle\langle k_{A}|\sigma_{i}^{A}|k_{A}\rangle\langle k_{A}|)\otimes\mathbb{I}_{2}+$$ $$\sum_{k_{B}=1}^{2}\sum_{j=1}^{3}\beta_{j}\mathbb{I}_{2}\otimes
(|k_{B}\rangle\langle k_{B}|\sigma_{j}^{B}|k_{B}\rangle\langle k_{B}|)+$$ \begin{equation}
\sum_{k_{A}=1}^{2}\sum_{k_{B}=1}^{2}\sum_{i,j=1}^{3}\gamma_{ij}(|k_{A}\rangle\langle k_{A}|\sigma_{i}^{A}|k_{A}\rangle\langle k_{A}|)\otimes(|k_{B}\rangle\langle k_{B}|\sigma_{j}^{B}|k_{B}\rangle\langle k_{B}|). \end{equation} Referring to equations (2) and (10) and applying the measurements to $ \rho_{AB} $, the conditional states are obtained as $$
\rho_{B|k_1}=\frac{1}{4}(\mathbb{I}_{2}\otimes\mathbb{I}_{2}+
\sum_{k_{A}=1}^{2}\sum_{i=1}^{3}\alpha_{i} (|k_{A}\rangle\langle k_{A}|\sigma_{i}^{A}|k_{A}\rangle\langle k_{A}|)\otimes\mathbb{I}_{2}$$ \begin{equation}+\sum_{j=1}^{3}\beta_{j}\mathbb{I}_{2}\otimes
\sigma_{j}^{B}+\sum_{k_{A}=1}^{2}\sum_{i=1}^{3}\sum_{j=1}^{3}\gamma_{ij}(|k_{A}\rangle\langle k_{A}|\sigma_{i}^{A}|k_{A}\rangle\langle k_{A}|)\otimes \sigma_{j}^{B}), \end{equation} and
$$\rho_{A|k_2}=\frac{1}{4}(\mathbb{I}_{A}\otimes\mathbb{I}_{B} +\sum_{i=1}^{3}\alpha_{i}\sigma_{i}^{A}\otimes\mathbb{I}_{B}$$ $$+\sum_{k_{B}=1}^{2}\sum_{j=1}^{3}\beta_{j}\mathbb{I}_{A}\otimes
(|k_{B}\rangle\langle k_{B}|\sigma_{j}^{B}|k_{B}\rangle\langle k_{B}|)$$ \begin{equation}
+\sum_{k_{B}=1}^{2}\sum_{i=1}^{3}\sum_{j=1}^{3}\gamma_{ij}\sigma_{i}^{A}\otimes(|k_{B}\rangle\langle k_{B}|\sigma_{j}^{B}|k_{B}\rangle\langle k_{B}|)). \end{equation}
We choose the measurement in the eigenbasis of $ \sigma_z $, i.e.
$$|k_{A}\rangle\langle k_{A}|= |0\rangle\langle 0|,$$ and
$$|k_{B}\rangle\langle k_{B}|= |1\rangle\langle 1|.$$ In this paper, we consider X-state density matrices, so called because their visual appearance resembles the letter X. Under the mentioned measurements, the density matrix of equation (40) becomes an X-state with the following conditions:
$$\alpha_{1}=\alpha_{2}=\beta_{1}=\beta_{2}=0,$$ $$\gamma_{31} =\gamma_{13}=\gamma_{32}=\gamma_{23}= 0$$ So density matrix for two qubits in form X-state obtains as \begin{equation} \rho_{AB}=\left( \begin{array}{cccc} \rho_{11}&0&0&\rho_{14}\\ 0&\rho_{22}&\rho_{23}&0\\ 0&\rho_{32}&\rho_{33}&0\\ \rho_{41}&0&0&\rho_{44}\\ \end{array} \right), \end{equation} where $$\rho_{11}=1+\gamma_{33}+\alpha_{3}+\beta_{3},$$ $$\rho_{22}=1-\gamma_{33}+\alpha_{3}-\beta_{3},$$ $$\rho_{33}=1-\gamma_{33}-\alpha_{3}+\beta_{3},$$ $$\rho_{44}=1+\gamma_{33}-\alpha_{3}-\beta_{3},$$ $$\rho_{14}=\rho^\ast_{41}=\gamma_{11}-i\gamma_{12}-i\gamma_{21}-\gamma_{22},$$ $$\rho_{23}=\rho^\ast_{32}=\gamma_{11}+i\gamma_{12}-i\gamma_{21}+\gamma_{22},$$ and also $ \sum_{i}{\rho_{ii}}=1 $. By apply the measurment in equation (32), closest classical-classical state will be \begin{equation} \chi_{\rho_{AB}}=\left( \begin{array}{cccc} \rho_{11}&0&0&0\\ 0&\rho_{22}&0&0\\ 0&0&\rho_{33}&0\\ 0&0&0&\rho_{44}\\ \end{array} \right). \end{equation} Moreover the conditional states represented by equations (42) and (43) are \begin{equation}
\rho_{A|i}=\rho_{A|1}+\rho_{A|2}=\chi_{\rho_{AB}}, \end{equation} and also \begin{equation}
\rho_{B|i}=\rho_{B|1}+\rho_{B|2}=\chi_{\rho_{AB}}. \end{equation} The von Neumann entropy of $ \chi_{\rho_{AB}} $ is
\begin{equation} S(\chi_{\rho_{AB}})=-\sum_{i} \rho_{ii}\log_{2}\rho_{ii}. \end{equation}
On the other hand, $ S_{\Pi^{j}}(B|A) $ will be [11] \begin{equation}
S_{\Pi^{j}}(B|A)=-\frac{1+\delta_{z}}{2} log \frac{1+\delta_{z}}{2} - \frac{1-\delta_{z}}{2} log \frac{1-\delta_{z}}{2}, \end{equation}
where $ \delta_{z} = |(\rho_{11}+\rho_{44})-(\rho_{22}+\rho_{33})| $, and the states of subsystems A and B are, respectively, \begin{equation} \rho_{A}=\left( \begin{array}{cccc} \rho_{11}+\rho_{22}&0\\ 0&\rho_{33}+\rho_{44}\\ \end{array} \right), \end{equation} \begin{equation} \rho_{B}=\left( \begin{array}{cccc} \rho_{11}+\rho_{33}&0\\ 0&\rho_{22}+\rho_{44}\\ \end{array} \right), \end{equation} so the von Neumann entropy of $ \rho_{B} $ is obtained as \begin{equation} S(\rho_{B})=-((\rho_{11}+\rho_{33}) \log (\rho_{11}+\rho_{33}) + (\rho_{22}+\rho_{44}) \log (\rho_{22}+\rho_{44})). \end{equation} By simplifying these equations we get \begin{equation}
S(\rho_B)+S_{\{\Pi^{j}_{B}\}}(B|A)=-\sum \rho_{ii}\log_{2}\rho_{ii}=S(\chi_{\rho_{AB}}). \end{equation} So
$$D(\rho_{AB})=S(\rho_B)+S_{\{\Pi^{j}_{B}\}}(B|A)-S(\rho_{AB})$$ \begin{equation}=S(\chi_{\rho_{AB}})-S(\rho_{AB})=GD(\rho_{AB}). \end{equation}
It can be shown that if $ \delta_{x} $ is the optimal value, then we choose the von Neumann measurement in the eigenbasis of $ \sigma_{x} $, i.e.\ $ |k_{A}\rangle=\frac{|0\rangle+|1\rangle}{\sqrt{2}} $, $|k_{B}\rangle=\frac{|0\rangle-|1\rangle}{\sqrt{2}} $, and likewise for $ \delta_{y} $, i.e.\ $ |k_{A}\rangle=\frac{|0\rangle+i|1\rangle}{\sqrt{2}} $, $|k_{B}\rangle=\frac{|0\rangle-i|1\rangle}{\sqrt{2}} $; the results are the same as those of Mazhar Ali et al.\ [11].
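As a numerical cross-check of the above identity (this sketch is our own and not part of the analytical argument; it reuses \texttt{vn\_entropy}, \texttt{dephase} and \texttt{right\_discord} from the earlier snippets), one can draw a random two-qubit X-state, evaluate $S(\chi_{\rho_{AB}})-S(\rho_{AB})$ by dephasing in the $\sigma_z$ basis, and compare it with the brute-force discord; the two values coincide whenever the $\sigma_z$ measurement is the optimal one.
\begin{verbatim}
import numpy as np

def random_x_state():
    """Random two-qubit X-state in the basis |00>, |01>, |10>, |11>."""
    d = np.random.dirichlet(np.ones(4))                   # diagonal entries
    a = 0.9 * np.sqrt(d[0] * d[3]) * np.exp(2j * np.pi * np.random.rand())
    b = 0.9 * np.sqrt(d[1] * d[2]) * np.exp(2j * np.pi * np.random.rand())
    rho = np.diag(d).astype(complex)
    rho[0, 3], rho[3, 0] = a, np.conj(a)                  # |rho_14| <= sqrt(rho_11 rho_44)
    rho[1, 2], rho[2, 1] = b, np.conj(b)                  # |rho_23| <= sqrt(rho_22 rho_33)
    return rho

rho = random_x_state()
print(vn_entropy(dephase(rho)) - vn_entropy(rho),         # S(chi) - S(rho)
      right_discord(rho))                                 # brute-force quantum discord
\end{verbatim}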
\section{Qubit-Qutrit states}
As the second example, we generalize our relations to the qubit-qutrit case, with $\rho_{AB}\in\cal{H_A}\otimes \cal{H_B},$ $dim H_A=2$ and $dim H_B=3$. The density matrix in terms of the SU(N) algebra can then be represented as \begin{equation} \rho=\frac{1}{6}(\mathbb{I}_{2}\otimes\mathbb{I}_{3}+ \sum_{i=1}^{3}\alpha_{i}\sigma_{i}\otimes\mathbb{I}_{3}+\sum_{i}^{8}\sqrt{3} \beta_{i}\mathbb{I}_{2}\otimes\lambda_{i} +\sum_{i}^{3}\sum_{j}^{8}\gamma_{ij}\sigma_{i}\otimes\lambda_{j}), \end{equation}
where $\alpha_{i}, \beta_{i}, \gamma_{ij} \in \mathbb{R},\, \sigma_{i} (i=1,2,3)$, are three Pauli matrices and $\lambda_{i} (i=1,\cdots,8)$, are Gell mann matrices and $\mathbb{I}$ is identity matrix. We applied the conditions until matrix comes in the form X-state. These conditions are as $\{\alpha_{3};\beta_{3};\gamma_{33};\gamma_{38};\gamma_{24};\gamma_{14};\gamma_{25};\gamma_{15}\}\neq0,$ and other coefficients are equal to zero. So, density matrix can be written as $$ \rho_{AB}=\frac{1}{6}(\mathbb{I}_{2}\otimes\mathbb{I}_{3}+ \alpha_{3}\sigma_{3}\otimes\mathbb{I}_{3}+\sqrt{3} \beta_{3}\mathbb{I}_{2}\otimes\lambda_{3}+\gamma_{33}\sigma_{3}\otimes\lambda_{3} $$ \begin{equation}\label{density2} +\gamma_{38}\sigma_{3}\otimes\lambda_{8}+ \gamma_{24}\sigma_{2}\otimes\lambda_{4}+\gamma_{14}\sigma_{1}\otimes\lambda_{4}+\gamma_{25}\sigma_{2}\otimes\lambda_{5}+\gamma_{15}\sigma_{1}\otimes\lambda_{5}). \end{equation} By using Eq. (\ref{REL}) relative entropy of discord for this density matrix Eq. (\ref{density2}) is equal to \begin{equation}\label{relt1} D_{rel}(\rho)=\sum_{i=1}^{6}-(\Phi_{i}\log_{2}\Phi_{i}+\Psi_{i}\log_{2}\Psi_{i}), \end{equation} where $$\Phi_{1,2}=\frac{1}{6}(1-2\beta_{8}\pm \alpha_{3}\mp\frac{2\gamma_{38}}{\sqrt{3}}),$$ $$\Phi_{3,4}=\frac{1}{6}(1+\sqrt{3}\beta_{3}+\beta_{8}\mp \alpha_{3}\mp \gamma_{33}\mp\frac{\gamma_{38}}{\sqrt{3}}),$$ $$\Phi_{5,6}=\Psi_{5,6}=\frac{1}{6} (1-\sqrt{3}\beta_{3}+\beta_{8}-\alpha_{3}\pm \gamma_{33}\mp\frac{\gamma_{38}}{\sqrt{3}}),$$ $$\Psi_{1,2}=\frac{1}{36}(6+ 3\sqrt{3}\beta_{3}-3\beta_{8}+3\gamma_{33}+3\sqrt{3}\gamma_{38}\pm$$ $$\sqrt{3}[9\beta_{3}^{2}+27\beta_{8}^{2}+36\beta_{8}\alpha_{3}+12\alpha_{3}^{2}+ 12((\gamma_{15}+\gamma_{24})^{2}+(\gamma_{14}-\gamma_{25})^{2})$$ $$+3\gamma_{33}^{2}+6\beta_{3} (3\sqrt{3}\beta_{8}+2\sqrt{3}\alpha_{3}+\sqrt{3}\gamma_{33}-\gamma_{38})$$ $$-6\sqrt{3}\beta_{8}\gamma_{38}-4\sqrt{3}\alpha_{3}\gamma_{38}+\gamma_{38}^{2}+2\gamma_{33} (9\beta_{8}+6\alpha_{3}-\sqrt{3}\gamma_{38})]^{\frac{1}{2}}),$$ $$\Psi_{3,4}=\frac{1}{36}(6 + 3\sqrt{3}\beta_{3}-3\beta_{8}-3\gamma_{33}-3\sqrt{3}\gamma_{38}\pm$$ $$\sqrt{3}[9\beta_{3}^{2}+27\beta_{8}^{2}-36\beta_{8}\alpha_{3}+12\alpha_{3}^{2}+ 12((\gamma_{15}-\gamma_{24})^{2}+(\gamma_{14}+\gamma_{25})^{2})$$ $$+3\gamma_{33}^{2}+6\beta_{3} (3\sqrt{3}\beta_{8}-2\sqrt{3}\alpha_{3}-\sqrt{3}\gamma_{33}+\gamma_{38})$$ $$+6\sqrt{3}\beta_{8}\gamma_{38}-4\sqrt{3}\alpha_{3}\gamma_{38}+\gamma_{38}^{2}+2\gamma_{33} (-9\beta_{8}+6\alpha_{3}-\sqrt{3}\gamma_{38})]^{\frac{1}{2}}).$$
It can be seen that the result of Eq. (\ref{relt1}) is equal to the result obtained for the quantum discord of the qubit-qutrit density matrix. Here we consider the set of measurements in the eigenbasis of $ S_{z} $. To better illustrate the results for $2\times3$ matrices, we consider the following example \cite{Karpat,kapil,Mazhar}\\
$\rho=\frac{p}{2}(|00\rangle\langle00|+|01\rangle\langle01|+|00\rangle\langle12|+
|11\rangle\langle11|+|12\rangle\langle12|+$ \begin{equation}
|12\rangle\langle00|)+\frac{1 -
2p}{2}(|02\rangle\langle02|+|02\rangle\langle10| +
|10\rangle\langle02|+|10\rangle\langle10|), \end{equation} where the closest classical-classical state $\chi_{\rho}$ is obtained as follows: \begin{equation} \chi_{\rho}=\frac{1}{2} \left( \begin{array}{cccccc} p&0&0&0&0&0\\ 0&p&0&0&0&0\\ 0&0&1-2p&0&0&0\\ 0&0&0&1-2p&0&0\\ 0&0&0&0&p&0\\ 0&0&0&0&0&p\\ \end{array} \right), \end{equation}
and we have \begin{equation} S(\chi_{\rho_{AB}})=1-2p\log_{2}p-(1-2p)\log_{2}(1-2p). \end{equation}
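A quick numerical sanity check of the last formula (our own snippet; the state is written in the product basis $|00\rangle,|01\rangle,|02\rangle,|10\rangle,|11\rangle,|12\rangle$ and \texttt{vn\_entropy} is the helper defined in an earlier snippet), e.g.\ for $p=0.3$:
\begin{verbatim}
import numpy as np

p = 0.3
rho = np.zeros((6, 6))
for i, j in [(0, 0), (1, 1), (4, 4), (5, 5), (0, 5), (5, 0)]:
    rho[i, j] = p / 2.0                  # p/2 block of the state above
for i, j in [(2, 2), (3, 3), (2, 3), (3, 2)]:
    rho[i, j] = (1.0 - 2.0 * p) / 2.0    # (1-2p)/2 block

chi = np.diag(np.diag(rho))              # dephased (classical-classical) state
analytic = 1.0 - 2.0 * p * np.log2(p) - (1.0 - 2.0 * p) * np.log2(1.0 - 2.0 * p)
assert np.isclose(vn_entropy(chi), analytic)
\end{verbatim}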
\section{Conclusions}
In this paper, we have investigated an analytical method for the quantum discord of some bipartite quantum systems. We showed that, with orthogonal projective measurements on the subsystems, the resulting matrix is a classical-classical state and the set of measurements is complete. Thus, for the states obtained after measurement, the optimization over orthogonal projective measurements can be turned into the minimization of the distance between the state of the bipartite system and its closest classical-classical state. This means that the quantum discord can be replaced by the relative entropy of discord, and we have justified our claim with the examples mentioned above. We intend to extend this method to the case of higher-dimensional bipartite systems in the future.
\section{Acknowledgments}
This work has been published as a part of a research project supported by the Research Affairs Office of the University of Tabriz.
\end{document}
\begin{document}
\title{The equivariant cohomology of isotropy actions on symmetric spaces}
\author{Oliver Goertsches} \address{Oliver Goertsches, Mathematisches Institut, Universit\"at zu K\"oln, Weyertal 86-90, 50931 K\"oln, Germany} \email{[email protected]}
\begin{abstract} We show that for every symmetric space $G/K$ of compact type with $K$ connected, the $K$-action on $G/K$ by left translations is equivariantly formal. \end{abstract} \maketitle
\section{Introduction} Given compact connected Lie groups $K\subset G$ of equal rank, it is well-known that the $K$-action on the homogeneous space $G/K$ is equivariantly formal because the odd de Rham cohomology groups of $G/K$ vanish. (See for example \cite{GHZ} for an investigation of the equivariant cohomology of such spaces.) If however the rank of $K$ is strictly smaller than the rank of $G$, then the isotropy action is not necessarily equivariantly formal, and in general it is unclear when this is the case.\footnote{A sufficient condition for equivariant formality of the isotropy action was introduced in \cite{Shiga}, see Remark \ref{rem:Shiga} below. If $K$ belongs to a certain class of subtori of $G$ this condition is in fact an equivalence, see \cite{ShigaTakahashi}.} Restricting our attention to symmetric spaces of compact type, we will prove the following theorem.
\begin{thm*} Let $(G,K)$ be a symmetric pair of compact type, where $G$ and $K$ are compact connected Lie groups. Then the $K$-action on the symmetric space $M=G/K$ by left translations is equivariantly formal. \end{thm*}
For symmetric spaces of type II, i.e., compact Lie groups, this result is already known, see Section \ref{sec:groups}. More generally, in the case of symmetric spaces of split rank ($\operatorname{rank} G=\operatorname{rank} K+\operatorname{rank} G/K$), the fact that all $K$-isotropy groups have maximal rank implies equivariant formality, see Section \ref{sec:split}. However, for the general case we have to rely on an explicit calculation of the dimension of the cohomology of the $T$-fixed point set $M^{T}$, where $T\subset K$ is a maximal torus, in order to use the characterization of equivariant formality via the condition $\dim H^*(M^{T})=\dim H^*(M)$. With the help of the notion of compartments introduced in \cite{EMQ} and several results proven therein we will find in Section \ref{sec:fixed set} a calculable expression for this dimension, and after reducing to the case of an irreducible simply-connected symmetric space in Section \ref{sec:reduction} we can invoke the classification of such spaces to show equivariant formality in each of the remaining cases by hand. On the way we obtain a formula for the number of compartments in a fixed $K$-Weyl chamber, see Proposition \ref{prop:rasquotientofweylgroups}. \\[0.3cm] \noindent {\bf Acknowledgements.} The author wishes to express his gratitude to Augustin-Liviu Mare for interesting discussions on a previous version of the paper.
\section{Symmetric spaces}
Let $G$ be a connected Lie group and $K\subset G$ a closed subgroup. Then $K$ is said to be a symmetric subgroup of $G$ if there is an involutive automorphism $\sigma:G\to G$ such that $K$ is an open subgroup of the fixed point subgroup $G^\sigma$. We will refer to the pair $(G,K)$ as a symmetric pair, and $G/K$ is a symmetric space.
Given a symmetric pair $(G,K)$ with corresponding involution $\sigma:G\to G$, then the Lie algebra $\mathfrak{g}$ decomposes into the $(\pm 1)$-eigenspaces of $\sigma$: $\mathfrak{g}=\mathfrak{k}\oplus \mathfrak{p}$, and the usual commutation relations hold: $[\mathfrak{k},\mathfrak{k}]\subset \mathfrak{k}$, $[\mathfrak{k},\mathfrak{p}]\subset \mathfrak{p}$ and $[\mathfrak{p},\mathfrak{p}]\subset \mathfrak{k}$. The rank of $G/K$ is by definition the maximal dimension of an abelian subalgebra of $\mathfrak{p}$. Then clearly $\operatorname{rank} G-\operatorname{rank} K \leq \operatorname{rank} G/K$, and if equality holds, then we say that $G/K$ is of split rank.
A symmetric pair $(G,K)$ is called (almost) effective if $G$ acts (almost) effectively on $G/K$. Given a symmetric pair $(G,K)$, then the kernel $N\subset G$ of the $G$-action on $G/K$ is contained in $K$, and $(G/N,K/N)$ is an effective symmetric pair with $(G/N)/(K/N)=G/K$. An almost effective symmetric pair $(G,K)$ (and the corresponding symmetric space $G/K$) will be called of compact type if $G$ is a compact semisimple Lie group. In this paper only symmetric spaces of compact type will occur. If $(G,K)$ is effective, then $G$ can be regarded as a subgroup of the isometry group of $G/K$ with respect to any $G$-invariant Riemannian metric on $G/K$. If $(G,K)$ is additionally of compact type, then this inclusion is in fact an isomorphism between $G$ and the identity component of the isometry group.
\section{Equivariant formality} The equivariant cohomology of an action of a compact connected Lie group $K$ on a compact manifold $M$ is by definition the cohomology of the Borel construction \[ H^*_K(M)=H^*(EK\times_K M); \] we use real coefficients throughout the paper. The projection $EK\times_K M\to EK/K=BK$ to the classifying space $BK$ of $K$ induces on $H^*_K(M)$ the structure of an $H^*(BK)$-algebra.
An action of a compact connected Lie group $K$ on a compact manifold $M$ is called equivariantly formal in the sense of \cite{GKM} if $H^*_K(M)$ is a free $H^*(BK)$-module. If the $K$-action on $M$ is equivariantly formal then automatically \begin{equation} \label{eq:eqformalequality} H^*_K(M)=H^*(M)\otimes H^*(BK) \end{equation} as graded $H^*(BK)$-modules, see \cite[Proposition 2.3]{GR}. In the following proposition we collect some known equivalent characterizations of equivariant formality. \begin{prop} \label{prop:eqformalequivalent} Consider an action of a compact connected Lie group $K$ on a compact manifold $M$, and let $T\subset K$ be a maximal torus. Then the following conditions are equivalent: \begin{enumerate} \item The $K$-action on $M$ is equivariantly formal. \item The $T$-action on $M$ is equivariantly formal. \item The cohomology spectral sequence associated to the fibration $ET\times_T M\to BT$ collapses at the $E_2$-term. \item We have $\dim H^*(M)=\dim H^*(M^T)$. \item The natural map $H^*_T(M)\to H^*(M)$ is surjective. \end{enumerate} \end{prop} \begin{proof} For the equivalence of $(1)$ and $(2)$ see \cite[Proposition C.26]{GGK}. The Borel localization theorem implies that the rank of $H^*_T(M)$ as an $H^*(BT)$-module always equals $\dim H^*(M^T)$. Then \cite[Lemma C.24]{GGK} implies the equivalence of $(2)$, $(3)$, and $(4)$; see also \cite[p.~46]{Hsiang}. For the equivalence to $(5)$, see \cite[p.~148]{McCleary}. \end{proof} Note that by \cite[p.~46]{Hsiang} the inequality $\dim H^*(M^T)\leq \dim H^*(M)$ holds for any $T$-action on $M$. Condition $(5)$ in the proposition shows that \begin{cor} \label{cor:subgroupseqformal} If a compact connected Lie group $K$ acts equivariantly formally on a compact manifold $M$, then so does every connected closed subgroup of $K$. \end{cor}
Applying the gap method to the spectral sequence in Item $(3)$ of Proposition \ref{prop:eqformalequivalent} we obtain the following well-known sufficient condition for equivariant formality. \begin{prop} \label{prop:hoddeqformal} Any action of a compact Lie group $K$ on a compact manifold $M$ with $H^{odd}(M)=0$ is equivariantly formal. \end{prop}
\section{Isotropy actions on symmetric spaces of compact type}
Let $G$ be a compact connected Lie group and $K\subset G$ a compact connected subgroup. Because an equivariantly formal torus action always has fixed points, the only tori $T\subset G$ that can act equivariantly formally on $G/K$ by left translations are those that are conjugate to a subtorus of $K$. On the other hand, if a maximal torus $T$ of $K$ acts equivariantly formally on $G/K$, then we know by Corollary \ref{cor:subgroupseqformal} that all these tori do in fact act equivariantly formally. In the following, we will prove that this indeed happens for symmetric spaces of compact type. More precisely:
\begin{thm} \label{thm:main} Let $(G,K)$ be a symmetric pair of compact type, where $G$ and $K$ are compact connected Lie groups. Then the $K$-action on the symmetric space $G/K$ by left translations is equivariantly formal. \end{thm}
\begin{rem} \label{rem:Shiga} The pair $(G,K)$ is a Cartan pair in the sense of \cite{Greub}, see \cite[p.~448]{Greub}. Therefore, \cite[Theorem A]{Shiga} shows that a sufficient condition for the $K$-action on $G/K$ to be equivariantly formal is that the map $H^*(G/K)^{N_G(K)}\to H^*(G)$ induced by the projection $G\to G/K$, where $N_G(K)$ acts on $G/K$ from the right, is injective. It would be interesting to know whether a symmetric pair always satisfies this condition. \end{rem}
\subsection{The fixed point set of a maximal torus in $K$} \label{sec:fixed set}
Let $(G,K)$ be a symmetric pair of compact type, where $G$ and $K$ are compact connected Lie groups. Denote by $\sigma:G\to G$ the corresponding involutive automorphism. Then $M=G/K$ is a symmetric space of compact type. We fix maximal tori $T_K\subset K$ and $T_G\subset G$ such that $T_K\subset T_G$. Let $\mathfrak{g}=\mathfrak{k}\oplus \mathfrak{p}$ be the decomposition of the Lie algebra $\mathfrak{g}$ into eigenspaces of $\sigma$.
In order to prove Theorem \ref{thm:main} we can without loss of generality assume that the symmetric pair $(G,K)$ is effective: if $N\subset K$ is the kernel of the $G$-action on $G/K$, then clearly the $K$-action on $G/K=(G/N)/(K/N)$ is equivariantly formal if and only if the $K/N$-action is equivariantly formal. (This follows for example from Proposition \ref{prop:eqformalequivalent} because the fixed point sets of appropriately chosen maximal tori in $K$ and $K/N$ coincide.)
\begin{lem} \label{lem:T-fixedpoints} The $T_K$-fixed point set in $M$ is $N_G(T_K)/N_K(T_K)$. \end{lem} \begin{proof} An element $gK\in M$ is fixed by $T_K$ if and only if $g^{-1}T_Kg\subset K$ (i.e., $g^{-1}T_K g$ is a maximal torus in the compact Lie group $K$), which is the case if and only if there is some $k\in K$ with $k^{-1}g^{-1}T_Kgk=T_K$. Thus, $(G/K)^{T_K}=N_G(T_K)/N_G(T_K)\cap K=N_G(T_K)/N_K(T_K)$. \end{proof}
\begin{lem}[{\cite[Proposition VII.3.2]{Loos}}]\label{lem:uniquetorus} $T_G$ is the unique maximal torus in $G$ containing $T_K$. \end{lem}
Lemma \ref{lem:uniquetorus} implies that the Lie algebra $\mathfrak{t}_\mathfrak{g}$ of $T_G$ decomposes according to the decomposition $\mathfrak{g}=\mathfrak{k}\oplus \mathfrak{p}$ as $\mathfrak{t}_\mathfrak{g}=\mathfrak{t}_\mathfrak{k}\oplus \mathfrak{t}_\mathfrak{p}$. (In fact, this statement is the first part of the proof of \cite[Proposition VII.3.2]{Loos}.)
\begin{prop} \label{prop:conncomponentsoffixedpoints} Each connected component of $M^{T_K}$ is a torus of dimension $\operatorname{rank} G-\operatorname{rank} K$. \end{prop} \begin{proof} Because of Lemma \ref{lem:uniquetorus}, the abelian subalgebra $\mathfrak{t}_\mathfrak{p} \subset \mathfrak{p}$ is the space of elements in $\mathfrak{p}$ that commute with $\mathfrak{t}_\mathfrak{k}$. Thus, Lemma \ref{lem:T-fixedpoints} implies that the component of $M^{T_K}$ containing $eK$ is $T_G/(T_G\cap K)=T_G/T_K$ (note that the centralizer of $T_K$ in $K$ is exactly $T_K$), i.e., a $\operatorname{rank} G-\operatorname{rank} K$-dimensional torus. Because the fixed set $M^{T_K}$ is a homogeneous space, all components are diffeomorphic. \end{proof} We therefore understand the structure of the $T_K$-fixed point set $M^{T_K}$ if we know its number of connected components, which we denote by $r$. In view of condition $(4)$ in Proposition \ref{prop:eqformalequivalent}, we are mostly interested in the dimension of its cohomology. \begin{prop}\label{prop:cohomoffixedpointset} We have $\dim H^*(M^{T_K})=2^{\operatorname{rank} G-\operatorname{rank} K}\cdot r$. \end{prop} In order to get a calculable expression for $r$ we will use several results from \cite[Sections 5 and 6]{EMQ} which we now collect. Denote by $\Delta_G=\Delta_\mathfrak{g}$ the root system of $G$ with respect to the maximal torus $T_G$, i.e., the set of nonzero elements $\alpha\in \mathfrak{t}_\mathfrak{g}^*$ such that the corresponding eigenspace $\mathfrak{g}_\alpha=\{X\in \mathfrak{g}^\mathbb C \mid [W,X]=i\alpha(W)X \text{ for all }W\in \mathfrak{t}_\mathfrak{g}\}$ is nonzero. Then we have the root space decomposition \begin{equation}\label{eq:rootspacedecomp} \mathfrak{g}^\mathbb C=\mathfrak{t}_\mathfrak{g}^\mathbb C \oplus\bigoplus_{\alpha\in \Delta_\mathfrak{g}} \mathfrak{g}_\alpha. \end{equation} The $\mathfrak{g}$-Weyl chambers are the connected components of the set $\mathfrak{t}_\mathfrak{g}\setminus \bigcup_{\alpha\in \Delta_\mathfrak{g}} \ker \alpha$. Because of Lemma \ref{lem:uniquetorus}, $\mathfrak{t}_\mathfrak{k}$ contains $\mathfrak{g}$-regular elements, hence no root in $\Delta_\mathfrak{g}$ vanishes on $\mathfrak{t}_\mathfrak{k}$. Therefore some of the $\mathfrak{g}$-Weyl chambers intersect $\mathfrak{t}_\mathfrak{k}$ nontrivially, and following \cite{EMQ} we will refer to these intersections as \emph{compartments}. Considering as in \cite{EMQ} the decomposition of $\Delta_\mathfrak{g}$ into complementary subsets $\Delta_\mathfrak{g}=\Delta'\cup \Delta''$, where \begin{equation}\label{eq:decomprootsystem} \Delta'=\{\alpha\in \Delta_\mathfrak{g}\mid \mathfrak{g}_\alpha\not\subset \mathfrak{p}^{\mathbb C}\},\quad \Delta''=\{\alpha\in \Delta_\mathfrak{g}\mid \mathfrak{g}_\alpha\subset \mathfrak{p}^{\mathbb C}\}, \end{equation} we have by \cite[Lemma 9]{EMQ} that the root system $\Delta_K=\Delta_\mathfrak{k}$ of $K$ with respect to $T_K$ is given by \begin{equation}\label{eq:Krootsystem}
\Delta_\mathfrak{k}=\{\left.\alpha\right|_{\mathfrak{t}_\mathfrak{k}}\mid \alpha\in \Delta'\}. \end{equation} In particular, $\mathfrak{g}$-regular elements in $\mathfrak{t}_\mathfrak{k}$ are also $\mathfrak{k}$-regular, and hence each compartment is contained in a $\mathfrak{k}$-Weyl chamber.
Because of Lemma \ref{lem:uniquetorus}, the group $N_G(T_K)$ is a subgroup of $N_G(T_G)$. Both groups have the same identity component $T_G$, so we may regard the quotient group $N_G(T_K)/T_G$ as a subgroup of the Weyl group $W(G)$ of $G$. The free action of $W(G)$ on the $\mathfrak{g}$-Weyl chambers induces an action of $N_G(T_K)/T_G$ on the set of compartments. Because any two compartments are $G$-conjugate \cite[Theorem 10]{EMQ}, this action is simply transitive on the set of compartments, and it follows that the number of connected components of $N_G(T_K)$ equals the total number of compartments in $\mathfrak{t}_\mathfrak{k}$. On the other hand no connected component of $N_G(T_K)$ contains more than one connected component of $N_K(T_K)$. (An element in $N_K(T_K)\cap T_G$ is an element in $K$ centralizing $T_K$, hence already contained in $T_K$.) Because the number of connected components of $N_K(T_K)$ equals the number of $\mathfrak{k}$-Weyl chambers, and each $\mathfrak{k}$-Weyl chamber contains the same number of compartments \cite[Theorem 10]{EMQ}, we have shown the following lemma. \begin{lem} \label{lem:rindependentofGK} The number $r$ of connected components of $M^{T_K}=N_G(T_K)/N_K(T_K)$ is the number of compartments in a fixed $\mathfrak{k}$-Weyl chamber. In particular it only depends on the Lie algebra pair $(\mathfrak{g},\mathfrak{k})$. \end{lem}
Let $C$ be a $\mathfrak{g}$-Weyl chamber that intersects $\mathfrak{t}_\mathfrak{k}$ nontrivially. By \cite[Lemma 8]{EMQ} the compartment $C\cap \mathfrak{t}_\mathfrak{k}$ can be described explicitly: The involution $\sigma:G\to G$ permutes the $\mathfrak{g}$-Weyl chambers and fixes $\mathfrak{t}_\mathfrak{k}$, hence it fixes $C$. Let $B=\{\alpha_1,\ldots,\alpha_{\operatorname{rank} G}\}$ be the corresponding simple roots such that $C$ is exactly the set of points where the elements of $B$ take positive values. The involution $\sigma$ acts as a permutation group on $B$ because for any $i$ the linear form $\alpha_i\circ \sigma$ is again positive on $C$. Note that for every root $\alpha\in \Delta_\mathfrak{g}$ the linear form $\frac{1}{2}(\alpha + \alpha \circ \sigma)$ vanishes on $\mathfrak{t}_\mathfrak{p}$ and coincides with $\left.{\alpha}\right|_{\mathfrak{t}_\mathfrak{k}}$ on $\mathfrak{t}_\mathfrak{k}$. The set $\left.B\right|_{\mathfrak{t}_\mathfrak{k}}=\{\left.{\alpha_i}\right|_{\mathfrak{t}_\mathfrak{k}}\mid i=1,\ldots,\operatorname{rank} G\}$ is a basis of $\mathfrak{t}_\mathfrak{k}^*$ (in particular it consists of $\dim \mathfrak{t}_\mathfrak{k}$ elements) and the compartment $C\cap \mathfrak{t}_\mathfrak{k}$ is exactly the set of points in $\mathfrak{t}_\mathfrak{k}$ where all $\left.{\alpha_i}\right|_{\mathfrak{t}_\mathfrak{k}}$ take positive values. It is a simplicial cone bounded by the hyperplanes $\ker \left.{\alpha_i}\right|_{\mathfrak{t}_\mathfrak{k}}$. Any such hyperplane is either a wall of a $\mathfrak{k}$-Weyl chamber or the kernel of a $\mathfrak{g}$-root $\alpha_i$ with $\alpha_i\circ \sigma=\alpha_i$, see \eqref{eq:Krootsystem}. In any case, reflection along the hyperplane defines an element of $N_G(T_K)/T_G$ and takes $C\cap \mathfrak{t}_\mathfrak{k}$ to an adjacent compartment. (This argument is taken from the proof of \cite[Theorem 10]{EMQ}.)
It follows that the action of $N_G(T_K)/T_G$ on the set of compartments described above is generated by the reflections along all hyperplanes $\ker \left.\alpha\right|_{\mathfrak{t}_\mathfrak{k}}$, where $\alpha\in \Delta_\mathfrak{g}$. Let $\langle\cdot,\cdot\rangle$ be the Killing form on $\mathfrak{g}$. The decomposition $\mathfrak{g}=\mathfrak{k}\oplus \mathfrak{p}$ is orthogonal with respect to $\langle\cdot,\cdot\rangle$. We identify $\mathfrak{t}_\mathfrak{g}^*$ with $\mathfrak{t}_\mathfrak{g}$ and $\mathfrak{t}_\mathfrak{k}^*$ with $\mathfrak{t}_\mathfrak{k}$ via $\langle\cdot,\cdot\rangle$. For $\alpha\in \Delta_\mathfrak{g}$, let $H_\alpha\in \mathfrak{t}_\mathfrak{g}$ be the element such that $\alpha(H)=\langle H,H_\alpha\rangle$ for all $H\in \mathfrak{t}_\mathfrak{g}$. Given $X\in \mathfrak{t}_\mathfrak{g}$, we write $X^\mathfrak{k}$ and $X^\mathfrak{p}$ for the $\mathfrak{k}$- and $\mathfrak{p}$-parts of $X$ respectively. Then $H_\alpha^\mathfrak{k}$ corresponds to $\left.\alpha\right|_{\mathfrak{t}_\mathfrak{k}}$ under the isomorphism $\mathfrak{t}_\mathfrak{k}\cong \mathfrak{t}_\mathfrak{k}^*$. \begin{lem}\label{lem:rootlemma} Let $\alpha\in \Delta_\mathfrak{g}$ be a root with $\alpha\circ \sigma\neq \alpha$. Then either \begin{enumerate}
\item $\langle H_\alpha,H_{\alpha\circ \sigma}\rangle=0$ and $|H^\mathfrak{p}_\alpha|^2=|H^\mathfrak{k}_\alpha|^2$ or
\item $2\cdot\frac{\langle H_\alpha,H_{\alpha\circ \sigma}\rangle}{|H_\alpha|^2}=-1$, $|H^\mathfrak{p}_\alpha|^2=3|H^\mathfrak{k}_\alpha|^2$ and $\alpha+\alpha\circ \sigma\in \Delta_\mathfrak{g}$. \end{enumerate} \end{lem} \begin{proof} We have $H_{\alpha\circ \sigma}=H^\mathfrak{k}_\alpha-H^\mathfrak{p}_\alpha$, and because $\Delta_\mathfrak{g}$ is a root system it follows that \[
2\cdot\frac{\langle H_\alpha,H_{\alpha\circ \sigma}\rangle}{|H_\alpha|^2}=2\cdot \frac{|H_\alpha^\mathfrak{k}|^2 - |H_\alpha^\mathfrak{p}|^2}{|H_\alpha^\mathfrak{k}|^2+ |H_\alpha^\mathfrak{p}|^2}\in \mathbb Z. \] Because $\alpha$ and $\alpha\circ \sigma$ are roots of equal length, this integer can only equal $0$ or $\pm 1$ \cite[Proposition 2.48.(d)]{Knapp}. Further, because $\alpha-\alpha\circ \sigma$ is not a root (by Lemma \ref{lem:uniquetorus} no root vanishes on $\mathfrak{t}_\mathfrak{k}$) and not $0$, only the possibilities $0$ and $-1$ remain, and in the latter case we also have that $\alpha+\alpha\circ\sigma\in \Delta_\mathfrak{g}$ \cite[Proposition 2.48.(e)]{Knapp}. \end{proof}
\begin{prop} The set $\left.\Delta_\mathfrak{g}\right|_{\mathfrak{t}_\mathfrak{k}}=\{\left.\alpha\right|_{\mathfrak{t}_\mathfrak{k}}\mid \alpha\in \Delta_\mathfrak{g}\}$ is a root system in $\mathfrak{t}_\mathfrak{k}^*$. \end{prop} \begin{proof}
It is clear that $\left.\Delta_\mathfrak{g}\right|_{\mathfrak{t}_\mathfrak{k}}$ spans $\mathfrak{t}_\mathfrak{k}^*$. We have to check that for all $\alpha,\beta\in \Delta_\mathfrak{g}$, the quantity \begin{equation}\label{eq:checkinteger}
2\cdot \frac{\langle H_\alpha^\mathfrak{k},H_\beta^\mathfrak{k}\rangle}{|H_\alpha^\mathfrak{k}|^2} \end{equation} is an integer. With respect to the decomposition $\Delta_\mathfrak{g}=\Delta'\cup \Delta''$ (see \eqref{eq:decomprootsystem}) there are four cases:
If both $\alpha$ and $\beta$ are elements of $\Delta'$, then \eqref{eq:checkinteger} is an integer because $\left.\alpha\right|_{\mathfrak{t}_\mathfrak{k}}$ and $\left.\beta\right|_{\mathfrak{t}_\mathfrak{k}}$ are $\mathfrak{k}$-roots, see \eqref{eq:Krootsystem}. In case $\alpha$ and $\beta$ are elements of $\Delta''$, then the corresponding vectors $H_\alpha$ and $H_\beta$ are already elements of $\mathfrak{t}_\mathfrak{k}$, so $H_\alpha^\mathfrak{k}=H_\alpha$ and $H_\beta^\mathfrak{k}=H_\beta$, hence \eqref{eq:checkinteger} is an integer.
Consider the case that $\alpha\in \Delta''$ and $\beta\in \Delta'$. Then $H_\alpha=H_\alpha^\mathfrak{k}\in \mathfrak{t}_\mathfrak{k}$, hence \[
2\cdot \frac{\langle H_\alpha^\mathfrak{k},H_\beta^\mathfrak{k}\rangle}{|H_\alpha^\mathfrak{k}|^2}= 2\cdot \frac{\langle H_\alpha,H_\beta\rangle}{|H_\alpha|^2}\in \mathbb Z. \]
The last case to be considered is that $\alpha\in \Delta'$ and $\beta\in \Delta''$. In this case $H_\beta=H_\beta^\mathfrak{k}\in \mathfrak{t}_\mathfrak{k}$. It may happen that $H_\alpha\in \mathfrak{t}_\mathfrak{k}$, but then the claim would follow as before, so we may assume that $H_\alpha\notin \mathfrak{t}_\mathfrak{k}$. It follows that $\alpha\circ \sigma$ is a root different from $\alpha$. By Lemma \ref{lem:rootlemma} we have $|H^\mathfrak{p}_\alpha|^2=c|H^\mathfrak{k}_\alpha|^2$ with $c=1$ or $c=3$. We know that
\[
2\cdot \frac{\langle H_\alpha,H_\beta \rangle}{|H_\alpha|^2} = 2\cdot \frac{\langle H_\alpha^\mathfrak{k},H_\beta^\mathfrak{k} \rangle}{|H_\alpha^\mathfrak{k}|^2+|H_\alpha^\mathfrak{p}|^2}=\frac{2}{1+c}\cdot \frac{\langle H_\alpha^\mathfrak{k},H_\beta^\mathfrak{k} \rangle}{|H_\alpha^\mathfrak{k}|^2}
\]
is an integer, hence multiplying with the integer $1+c$ shows that \eqref{eq:checkinteger} is an integer in this case as well.
Next we have to check that for each $\alpha\in \Delta_\mathfrak{g}$ the reflection $s_{\left.\alpha\right|_{\mathfrak{t}_\mathfrak{k}}}:\mathfrak{t}_\mathfrak{k}\to \mathfrak{t}_\mathfrak{k}$ along $\ker \left.\alpha\right|_{\mathfrak{t}_\mathfrak{k}}$ defined by \begin{equation}\label{eq:reflection}
X\mapsto X-2\cdot \frac{\langle H_\alpha^\mathfrak{k},X\rangle}{|H_\alpha^\mathfrak{k}|^2} H_\alpha^\mathfrak{k} \end{equation} sends $\{H_\beta^\mathfrak{k}\mid \beta\in \Delta_\mathfrak{g}\}$ to itself. If $H_\alpha\in \mathfrak{t}_\mathfrak{k}$ (this includes the case $\alpha\in \Delta''$), then the reflection $s_\alpha:\mathfrak{t}_\mathfrak{g}\to \mathfrak{t}_\mathfrak{g}$ along $\ker \alpha$ leaves invariant $\mathfrak{t}_\mathfrak{k}$, and \eqref{eq:reflection} is nothing but the restriction of this reflection to $\mathfrak{t}_\mathfrak{k}$. Thus, $\{H_\beta^\mathfrak{k}\mid \beta\in \Delta_\mathfrak{g}\}$ is sent to itself.
Let $\alpha\in \Delta'$ with $H_\alpha\notin \mathfrak{t}_\mathfrak{k}$. We treat the two cases that can arise by Lemma \ref{lem:rootlemma} separately: assume first that $\langle H_\alpha,H_{\alpha\circ \sigma}\rangle=0$. In this case the two reflections $s_\alpha$ and $s_{\alpha\circ \sigma}$ commute and we have, recalling that $H_{\alpha\circ \sigma}=H_\alpha^\mathfrak{k}-H_\alpha^\mathfrak{p}$, \begin{align*}
s_{\alpha\circ \sigma}\circ s_\alpha(X)&=X-2\cdot \frac{\langle H_\alpha,X\rangle}{|H_\alpha|^2} H_\alpha-2\cdot \frac{\langle H_{\alpha\circ \sigma},X\rangle}{|H_{\alpha\circ \sigma}|^2} H_{\alpha\circ \sigma}\\
&=X-2\cdot \frac{\langle H_{\alpha},X\rangle+\langle H_{\alpha\circ \sigma},X\rangle}{2|H_\alpha^\mathfrak{k}|^2} H_\alpha^\mathfrak{k}
- 2\cdot \frac{\langle H_{\alpha},X\rangle-\langle H_{\alpha\circ \sigma},X\rangle}{2|H_\alpha^\mathfrak{p}|^2} H_\alpha^\mathfrak{p}\\
&=X-2\cdot \frac{\langle H_\alpha^\mathfrak{k},X\rangle}{|H_\alpha^\mathfrak{k}|^2}H_\alpha^\mathfrak{k} + 2 \cdot \frac{\langle H_\alpha^\mathfrak{p},X\rangle}{|H_\alpha^\mathfrak{p}|^2}H_\alpha^\mathfrak{p}. \end{align*}
In particular for each $\beta\in \Delta_\mathfrak{g}$ the vector $H_\beta^\mathfrak{k}-2\cdot \frac{\langle H_\alpha^\mathfrak{k},H_\beta^\mathfrak{k}\rangle}{|H_\alpha^\mathfrak{k}|^2}H_\alpha^\mathfrak{k}$ is the $\mathfrak{k}$-part of some vector $H_\gamma$, which shows that \eqref{eq:reflection} sends $\{H_\beta^\mathfrak{k}\mid \beta\in \Delta_\mathfrak{g}\}$ to itself.
In the second case of Lemma \ref{lem:rootlemma} we have that $\alpha+\alpha\circ \sigma\in \Delta_\mathfrak{g}$, with $\ker (\alpha+\alpha\circ \sigma)=\ker \left.\alpha\right|_{\mathfrak{t}_\mathfrak{k}} \oplus \mathfrak{t}_\mathfrak{p}$. Thus, the reflection $s_{\left.\alpha\right|_{\mathfrak{t}_\mathfrak{k}}}$ is nothing but the restriction of $s_{\alpha+\alpha\circ\sigma}$ to $\mathfrak{t}_\mathfrak{k}$; in particular it sends $\{H_\beta^\mathfrak{k}\mid \beta\in \Delta_\mathfrak{g}\}$ to itself. \end{proof} \begin{rem} \label{rem:reduced}
The root system $\left.\Delta_\mathfrak{g}\right|_{\mathfrak{t}_\mathfrak{k}}$ is not necessarily reduced: if there exists a root $\alpha\in \Delta_\mathfrak{g}$ with $\alpha\circ \sigma\neq \alpha$ for which the second case of Lemma \ref{lem:rootlemma} holds, then it contains $\left.\alpha\right|_{\mathfrak{t}_\mathfrak{k}}$ as well as $2 \cdot\left.\alpha\right|_{\mathfrak{t}_\mathfrak{k}}$. This happens for instance for $\operatorname{SU}(2m+1)/\operatorname{SO}(2m+1)$. \end{rem}
Because $B$ is the set of simple roots of $\Delta_\mathfrak{g}$, every root $\alpha\in \Delta_\mathfrak{g}$ can be written as a linear combination of elements in $B$ with integer coefficients of the same sign. It follows that every restriction $\left.\alpha\right|_{\mathfrak{t}_\mathfrak{k}}\in \left.\Delta_\mathfrak{g}\right|_{\mathfrak{t}_\mathfrak{k}}$ is a linear combination of elements in $\left. B\right|_{\mathfrak{t}_\mathfrak{k}}$ of the same kind. We thus have proven the following lemma. \begin{lem}
The $\left.\Delta_\mathfrak{g}\right|_{\mathfrak{t}_\mathfrak{k}}$-Weyl chambers are exactly the compartments. If $C$ is a $\mathfrak{g}$-Weyl chamber that intersects $\mathfrak{t}_\mathfrak{k}$ nontrivially, with corresponding set of simple roots $B\subset \Delta_\mathfrak{g}$, then $\left.B\right|_{\mathfrak{t}_\mathfrak{k}}$ is the set of simple roots of the root system $\left.\Delta_\mathfrak{g}\right|_{\mathfrak{t}_\mathfrak{k}}$ corresponding to $C\cap \mathfrak{t}_\mathfrak{k}$. \end{lem}
Recall that the $N_G(T_K)/T_G$-action on the set of compartments was shown to be generated by the reflections along all hyperplanes $\ker \left.\alpha\right|_{\mathfrak{t}_\mathfrak{k}}$, where $\alpha\in \Delta_\mathfrak{g}$. Thus, we obtain
\begin{cor} The $N_G(T_K)/T_G$-action on the set of compartments is the same as the action of the Weyl group $W(\left.\Delta_\mathfrak{g}\right|_{\mathfrak{t}_\mathfrak{k}})$. In particular, it is generated by the reflections along the hyperplanes $\ker \left.\alpha_i\right|_{\mathfrak{t}_\mathfrak{k}}$. Furthermore, $r=\frac{|W(\left.\Delta_\mathfrak{g}\right|_{\mathfrak{t}_\mathfrak{k}})|}{|W(\mathfrak{k})|}$. \end{cor}
Recall that whereas a reduced root system is determined by its simple roots \cite[Proposition 2.66]{Knapp}, this is no longer true for nonreduced root systems such as $\left.\Delta_\mathfrak{g}\right|_{\mathfrak{t}_\mathfrak{k}}$, see \cite[II.8]{Knapp}. However, the reduced elements in a nonreduced root system always form a reduced root system \cite[Lemma 2.91]{Knapp} with the same simple roots and the same Weyl group. Using the following proposition taken from \cite{Loos} we will identify this reduced root system contained in $\left.\Delta_\mathfrak{g}\right|_{\mathfrak{t}_\mathfrak{k}}$ with the root system of a second symmetric subalgebra $\mathfrak{k}'\subset \mathfrak{g}$.
\begin{prop}[{\cite[Proposition VII.3.4]{Loos}}] \label{prop:loos} There is an extension of $\sigma:\mathfrak{t}_\mathfrak{g}\to \mathfrak{t}_\mathfrak{g}$ to an involutive automorphism $\sigma':\mathfrak{g}\to \mathfrak{g}$ such that its $\mathbb C$-linear extension $\sigma':\mathfrak{g}^\mathbb C\to \mathfrak{g}^\mathbb C$ satisfies $\left.\sigma'\right|_{\mathfrak{g}_{\alpha}}=\operatorname{id}$ for every root $\alpha\in B$ with $\alpha=\alpha\circ \sigma$. The root system of the fixed point algebra $\mathfrak{k}'=\mathfrak{g}^{\sigma'}$ relative to the maximal abelian subalgebra $\mathfrak{t}_\mathfrak{k}$ has $\left.B\right|_{\mathfrak{t}_\mathfrak{k}}$ as simple roots. \end{prop}
The roots of $\mathfrak{k}'$ relative to $\mathfrak{t}_\mathfrak{k}$ are restrictions of certain (not necessarily all) elements in $\Delta_\mathfrak{g}$ to $\mathfrak{t}_\mathfrak{k}$; the restrictions of all elements in $B$ occur. See \cite[p.~129]{Loos} for the root space decomposition of $\mathfrak{k}'$ with respect to $\mathfrak{t}_\mathfrak{k}$. Because the sub-root system of reduced elements in $\left.\Delta_\mathfrak{g}\right|_{\mathfrak{t}_\mathfrak{k}}$ and the root system of $\mathfrak{k}'$ have the same simple roots, these reduced root systems coincide. In particular we obtain the following formula for $r$:
\begin{prop} \label{prop:rasquotientofweylgroups} We have $r=\frac{|W(\mathfrak{k}')|}{|W(\mathfrak{k})|}$. \end{prop}
\begin{ex} If $\operatorname{rank} G=\operatorname{rank} K$, i.e., if $T_K$ is also a maximal torus of $G$, then the identity on $\mathfrak{g}$ satisfies the conditions of Proposition \ref{prop:loos}. Hence $\mathfrak{k}'=\mathfrak{g}$ and the proposition says $r=\frac{|W(G)|}{|W(K)|}$. This however follows already from Lemma \ref{lem:T-fixedpoints}. \end{ex}
\begin{ex} \label{ex:splitrank} If $G/K$ is a symmetric space of split rank, i.e., $\operatorname{rank} G=\operatorname{rank} K+\operatorname{rank} G/K$, then $\sigma$ itself satisfies the conditions of Proposition \ref{prop:loos}. In fact, let $\alpha\in B$ with $\alpha=\alpha\circ \sigma$. In this case $\alpha$ vanishes on $\mathfrak{t}_\mathfrak{p}$, which implies that $\mathfrak{g}_\alpha$ is contained either in $\mathfrak{k}^\mathbb C$ or in $\mathfrak{p}^\mathbb C$. But if it was contained in $\mathfrak{p}^\mathbb C$, then $[\mathfrak{t}_\mathfrak{p},\mathfrak{g}_\alpha]=0$ and $[\mathfrak{t}_\mathfrak{p},\mathfrak{g}_{-\alpha}]=0$, which would contradict the fact that $\mathfrak{t}_\mathfrak{p}$ is maximal abelian in $\mathfrak{p}$. Thus, we have $r=1$ in the split rank case. Note that $r=1$ also follows from \cite[Lemma 13]{EMQ}, combined with Lemma \ref{lem:rindependentofGK}. \end{ex}
\begin{ex} The symmetric space $G/K'$, where $K'$ is the connected subgroup of $G$ with Lie algebra $\mathfrak{k}'$, is not always of split rank. Assume as in Remark \ref{rem:reduced} that there exists a root $\alpha\in \Delta_\mathfrak{g}$ with $\alpha\circ \sigma\neq \alpha$ such that $\alpha+\alpha\circ \sigma\in \Delta_\mathfrak{g}$. Let $X\in \mathfrak{g}_\alpha$ be nonzero. Then $[X,\sigma'(X)]$ is a nonzero element in $\mathfrak{g}_{\alpha+\alpha\circ \sigma}$. We have $\sigma'([X,\sigma'(X)])=-[X,\sigma'(X)]$, thus $[X,\sigma'(X)]\in \mathfrak{p}'$, where $\mathfrak{p}'$ is the $-1$-eigenspace of $\sigma'$. By definition of $\sigma'$ we have $\mathfrak{t}_\mathfrak{p}\subset \mathfrak{p}'$, but $\mathfrak{t}_\mathfrak{p}$ is not a maximal abelian subspace of $\mathfrak{p}'$ because it commutes with $[X,\sigma'(X)]$. For example, in the case $\operatorname{SU}(2m+1)/\operatorname{SO}(2m+1)$ we have $K'=K$ although the space is not of split rank, see Subsection \ref{sssec:su2m+1} below. \end{ex}
We will use below that the symmetric subalgebra $\mathfrak{k}'$ can be determined via the Dynkin diagram of $G$: $\sigma$ defines an automorphism of the Dynkin diagram of $G$ (because it is a permutation group of $B$), which is nontrivial if and only if $\operatorname{rank} \mathfrak{g} >\operatorname{rank} \mathfrak{k}$. One can calculate the root system of $\mathfrak{k}'$ via the fact that by Proposition \ref{prop:loos} the simple roots of $\mathfrak{k}'$ are given by $\left.B\right|_{\mathfrak{t}_\mathfrak{k}}=\{\frac{1}{2}(\alpha_i+\alpha_i\circ \sigma)\mid i=1,\ldots,\operatorname{rank} G\}$.
\subsection{Reduction to the irreducible case}\label{sec:reduction}
\begin{lem} \label{lem:eqformalindependentofgroups} If $(G,K)$ and $(G',K')$ are two effective symmetric pairs of connected compact semisimple Lie groups associated to the same pair of Lie algebras $(\mathfrak{g},\mathfrak{k})$, then the $K$-action on $G/K$ is equivariantly formal if and only if the $K'$-action on $G'/K'$ is equivariantly formal. \end{lem} \begin{proof} Because $K$ and $K'$ are connected, both $H^*(G/K)$ and $H^*(G'/K')$ are given as the $\mathbb R$-algebra of $\mathfrak{k}$-invariant elements in $\Lambda^* \mathfrak{p}$, see \cite[Theorem 8.5.8]{Wolf}. In particular $\dim H^*(G/K)=\dim H^*(G'/K')$. Choosing maximal tori $T\subset K$ and $T'\subset K'$, we furthermore know from Propositions \ref{prop:cohomoffixedpointset} and \ref{lem:rindependentofGK} that $\dim H^*((G/K)^T)=\dim H^*((G'/K')^{T'})$ because $(G,K)$ and $(G',K')$ correspond to the same Lie algebra pair. The statement then follows from Proposition \ref{prop:eqformalequivalent}. \end{proof} \begin{lem} \label{lem:eqformalproduct} Given actions of compact connected Lie groups $K_i$ on compact manifolds $M_i$ ($i=1\ldots n$), then the $K_1\times\ldots \times K_n$-action on $M_1\times \ldots \times M_n$ is equivariantly formal if and only if all the $K_i$-actions on $M_i$ are equivariantly formal. \end{lem} \begin{proof} Choose maximal tori $T_i\subset K_i$. Then $T_1\times \ldots \times T_n$ is a maximal torus in $K_1\times\ldots \times K_n$. The claim follows from Proposition \ref{prop:eqformalequivalent} because the $T_1\times \ldots \times T_n$-fixed point set is exactly the product of the $T_i$-fixed point sets. \end{proof}
Lemmas \ref{lem:eqformalindependentofgroups} and \ref{lem:eqformalproduct} imply that for proving Theorem \ref{thm:main} it suffices to check it for effective symmetric pairs $(G,K)$ of compact connected Lie groups such that $G/K$ is an irreducible simply-connected symmetric space of compact type. Below we will make use of the classification of such spaces, see \cite{Helgason}.
\subsection{Lie groups}\label{sec:groups} Given a compact connected Lie group $G$, the product $G\times G$ acts on $G$ via $(g_1,g_2)\cdot g=g_1gg_2^{-1}$. The isotropy group of the identity element is the diagonal $D(G)\subset G\times G$. In the language of Helgason \cite{Helgason}, we obtain an irreducible symmetric pair $(G\times G,D(G))$ of type II. The $D(G)$-action on $(G\times G)/D(G)$ is nothing but the action of $G$ on itself by conjugation. But for any compact connected Lie group, the action on itself by conjugation is equivariantly formal. In fact, if $T\subset G$ is a maximal torus, then the fixed point set of the $T$-action, $G^T$, is $T$ itself, and thus $\dim H^*(G^T)=\dim H^*(T)=2^{\operatorname{rank} G}=\dim H^*(G)$. For other ways to prove that this action is equivariantly formal see \cite[Example 4.6]{GR}. For instance, equivariant formality would also follow from Proposition \ref{prop:splitrankequivformal} below as $(G\times G,D(G))$ is of split rank.
\subsection{Inner symmetric spaces}\label{sec:inner}
Consider the case that the symmetric space $G/K$ of compact type is inner, i.e., that the involution $\sigma$ is inner. By \cite[Theorem IX.5.6]{Helgason} this is the case if and only if $\operatorname{rank} G=\operatorname{rank} K$. Hence, a maximal torus $T_K\subset K$ is also a maximal torus in $G$, and the $T_K$-fixed point set is by Lemma \ref{lem:T-fixedpoints} a finite set of cardinality $\frac{|W(G)|}{|W(K)|}$. Because of the following classical result (see for example \cite[Chapter XI, Theorem VII]{Greub}), the case of inner symmetric spaces is easy to deal with. \begin{prop} Given any compact connected Lie groups $K\subset G$, the following conditions are equivalent: \begin{enumerate} \item $\operatorname{rank} G=\operatorname{rank} K$. \item $\chi(G/K)>0$. \item $H^{odd}(G/K)=0$. \end{enumerate} \end{prop} It follows from Proposition \ref{prop:hoddeqformal} that the $K$-action on a homogeneous space $G/K$ with $\operatorname{rank} G=\operatorname{rank} K$ is always equivariantly formal. Alternatively, \cite[Corollary 4.5]{GR} implies that the $G$-action on $G/K$ is equivariantly formal because all its isotropy groups have rank equal to the rank of $G$. Then by Corollary \ref{cor:subgroupseqformal} any closed subgroup of $G$ acts equivariantly formally on $G/K$.
\begin{prop} If $\operatorname{rank} G=\operatorname{rank} K$, then the $K$-action on $G/K$ is equivariantly formal. If $T_K\subset K$ is a maximal torus, then the fixed point set of the induced $T_K$-action consists of exactly $\dim H^*(G/K)=\frac{|W(G)|}{|W(K)|}$ points. \end{prop} \begin{rem} This is not a new result. For an investigation of the (algebra structure of the) equivariant cohomology of homogeneous spaces $G/K$ with $\operatorname{rank} G =\operatorname{rank} K$ see \cite{GHZ}, or \cite[Section 5]{HolmSjamaar} for an emphasis on other coefficient rings. \end{rem}
\subsection{Spaces of split rank}\label{sec:split}
Also when $G/K$ is of split rank, i.e., $\operatorname{rank} G=\operatorname{rank} K + \operatorname{rank} G/K$, there is a general argument that implies equivariant formality of the $K$-action on $G/K$. \begin{prop} \label{prop:splitrankequivformal} If $G/K$ is of split rank, then the natural $K$-action on $G/K$ is equivariantly formal. \end{prop} \begin{proof} We will show that every $K$-isotropy algebra has maximal rank, i.e., rank equal to $\operatorname{rank} \mathfrak{k}$. Then equivariant formality follows from \cite[Corollary 4.5]{GR}.
Consider the decomposition $\mathfrak{g}=\mathfrak{k}\oplus \mathfrak{p}$ and choose any $\operatorname{Ad}_K$-invariant scalar product on $\mathfrak{p}$ that turns $G/K$ into a Riemannian symmetric space. Then we have an exponential map $\exp:\mathfrak{p}\to G/K$, and it is known that every orbit of the $K$-action on $G/K$ meets $\exp(\mathfrak{a})$, where $\mathfrak{a}$ is a maximal abelian subalgebra of $\mathfrak{p}$. Because $G/K$ is of split rank, there is a maximal torus $T_K\subset K$ such that $\mathfrak{t}_\mathfrak{k} \oplus \mathfrak{a}$ is abelian. The torus $T_K$ acts trivially on $\exp(\mathfrak{a})$. Thus, the $K$-isotropy algebra of any point in $\exp(\mathfrak{a})$ (and hence of any point in $M$) has maximal rank. \end{proof}
In the split-rank case we have $r=1$ by Example \ref{ex:splitrank}. We thus have
\begin{prop} If $G/K$ is of split rank then $\dim H^*(G/K)=2^{\operatorname{rank} G/K}$. If $T_K\subset K$ is a maximal torus, then the fixed point set of the induced $T_K$-action on $G/K$ is a $\operatorname{rank} G/K$-dimensional torus (in particular connected). \end{prop}
\subsection{Outer symmetric spaces which are not of split rank} For the remaining cases that are not covered by any of the arguments above, i.e., irreducible simply-connected symmetric spaces of type I that are neither of equal nor of split rank, we do not have a general argument for equivariant formality of the isotropy action. Using the classification of symmetric spaces \cite[p.~518]{Helgason}, we calculate for each of these spaces the dimension of the cohomology of the $T_K$-fixed point set and show that it coincides with the dimension of the cohomology of $G/K$ (which we take from the literature), upon which we conclude equivariant formality via Proposition \ref{prop:eqformalequivalent}. Fortunately, there are only three (series of) such symmetric spaces, namely \[ \operatorname{SU}(n)/\operatorname{SO}(n),\quad \operatorname{SO}(2p+2q+2)/\operatorname{SO}(2p+1)\times \operatorname{SO}(2q+1), \text{ and } E_6/\operatorname{PSp}(4), \] where $n\geq 4$ and $p,q\geq 1$. We have shown with Propositions \ref{prop:cohomoffixedpointset} and \ref{prop:rasquotientofweylgroups} that \[
\dim H^*((G/K)^{T_K})=2^{\operatorname{rank} \mathfrak{g}-\operatorname{rank} \mathfrak{k}} \cdot \frac{|W(\mathfrak{k}')|}{|W(\mathfrak{k})|}, \] where the symmetric subalgebra $\mathfrak{k}'\subset \mathfrak{g}$ was introduced in Proposition \ref{prop:loos}. Because in this section we are dealing with outer symmetric spaces, we have $\operatorname{rank} \mathfrak{g}>\operatorname{rank} \mathfrak{k}$, so $\mathfrak{k}'\neq \mathfrak{g}$ is a symmetric subgroup of $\mathfrak{g}$. The orders of the appearing Weyl groups are listed in \cite[p.~66]{Humphreys}.
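For the reader's convenience we recall the orders that will be used below: $|W(A_{n-1})|=n!$, $|W(B_{n})|=|W(C_{n})|=2^{n}\cdot n!$, $|W(D_{n})|=2^{n-1}\cdot n!$, and $|W(F_{4})|=2^{7}\cdot 3^{2}=1152$.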
\subsubsection{$\operatorname{SU}(2m)/\operatorname{SO}(2m)$} Let $M=\operatorname{SU}(2m)/\operatorname{SO}(2m)$, where $m\geq 2$, and $T\subset \operatorname{SO}(2m)$ be a maximal torus. The only connected symmetric subgroup of $\operatorname{SU}(2m)$ of rank $m$ different from $\operatorname{SO}(2m)$ is $\operatorname{Sp}(m)$. The fact that $\mathfrak{k}'={\mathfrak{sp}}(m)$ can be visualized via the Dynkin diagrams: the involution $\sigma$ fixes only the middle root of the Dynkin diagram $A_{2m-1}$ of $\operatorname{SU}(2m)$. Hence, after restricting, the middle root becomes a root which is longer than the other roots, and only in $C_m$ there exists a root longer than the others, not in $D_m$. \begin{center} \includegraphics{dynkinAoddToCn} \end{center} We thus may calculate \[
r= \frac{|W(C_m)|}{|W(D_m)|} =\frac{2^m\cdot m!}{2^{m-1}\cdot m!}=2; \] note that for this example the number of compartments was also calculated in \cite[p.~11]{EMQ}. It is known that $\dim H^*(M)=2^m$ (see for example \cite[p.~493]{Greub} or \cite[Theorem III.6.7.(2)]{Mimura}), hence \[ \dim H^*(M^T)=2^{2m-1-m}\cdot r =2^m =\dim H^*(M). \] Thus, the action is equivariantly formal.
\subsubsection{$\operatorname{SU}(2m+1)/\operatorname{SO}(2m+1)$} \label{sssec:su2m+1} Let $M=\operatorname{SU}(2m+1)/\operatorname{SO}(2m+1)$, where $m\geq 2$, and $T\subset \operatorname{SO}(2m+1)$ be a maximal torus. It is known that $\dim H^*(M)=2^m$ (see for example \cite[p.~493]{Greub} or \cite[Theorem III.6.7.(2)]{Mimura}), hence \[ 2^m\cdot r = \dim H^*(M^T)\leq \dim H^*(M)=2^m \] for some natural number $r$. Thus necessarily $r=1$ (in fact $\mathfrak{k}'={\mathfrak{so}}(2m+1)$) and the action is equivariantly formal. Note that this space is also listed as an exception in \cite{EMQ} as it is the only outer symmetric space which is not of split rank such that the corresponding involution fixes no root in the Dynkin diagram (and hence every compartment is a $K$-Weyl chamber).
\subsubsection{$\operatorname{SO}(2p+2q+2)/\operatorname{SO}(2p+1)\times \operatorname{SO}(2q+1)$} Let $M=\operatorname{SO}(2p+2q+2)/\operatorname{SO}(2p+1)\times \operatorname{SO}(2q+1)$, where $p,q\geq 1$, and $T\subset \operatorname{SO}(2p+1)\times \operatorname{SO}(2q+1)$ be a maximal torus. The only connected symmetric subgroups of $\operatorname{SO}(2p+2q+2)$ of rank $p+q$ are $\operatorname{SO}(2p'+1)\times \operatorname{SO}(2q'+1)$, where $p'+q'=p+q$. The involution $\sigma$ fixes all roots of the Dynkin diagram $D_{p+q+1}$ of $\operatorname{SO}(2p+2q+2)$ but two; after restricting, these two become a single root which is shorter than the others. Because $A_{p+q-1}\oplus A_1$ and $D_{p+q}$ do not appear as the Dynkin diagram of any of the possible symmetric subgroups, the Dynkin diagram of $\mathfrak{k}'$ is forced to be $B_{p+q}$, which means that $\mathfrak{k}'={\mathfrak{so}}(2p+2q+1)$. \begin{center} \includegraphics{dynkinDnToBn} \end{center} We thus have \[
r=\frac{|W(B_{p+q})|}{|W(B_p)|\cdot |W(B_q)|}=\frac{2^{p+q}\cdot (p+q)!}{2^p \cdot p!\cdot 2^q\cdot q!} = {p+q \choose p}. \] By \cite[p.~496]{Greub} we have $\dim H^*(M)=2\cdot {p+q\choose p}$, and it follows that the action is equivariantly formal because of \begin{align*} \dim H^*(M^T)&=2^{p+q+1-p-q}\cdot r =2\cdot {p+q \choose p} = \dim H^*(M). \end{align*} \subsubsection{$E_6/\operatorname{PSp}(4)$} Let $M=E_6/\operatorname{PSp}(4)$ and $T\subset \operatorname{PSp}(4)$ be a maximal torus. The only symmetric subalgebra of $\mathfrak{e}_6$ of rank $4$ different from ${\mathfrak{sp}}(4)$ is $\mathfrak{f}_4$. \begin{center} \includegraphics{dynkinE6modToF4} \end{center} We obtain \[
r=\frac{|W(F_4)|}{|W(C_4)|}=\frac{2^7 \cdot 3^2}{2^4 \cdot 4!}=3. \] It is shown in \cite{Takeuchi} that $\dim H^*(M)=12$. Thus, \[ \dim H^*(M^T)=2^{6-4}\cdot r = 2^2\cdot 3 = 12 = \dim H^*(M) \] shows that the action is equivariantly formal.
\end{document}
\begin{document}
\newtheorem{thm}{Theorem}[section] \newtheorem{lem}{Lemma}[section] \newtheorem{cor}{Corollary}[section] \newtheorem{claim}{Proposition}[section] \newtheorem{rem}{Remark}[section] \newtheorem{defi}{Definition}[section] \newtheorem{example}{Example}
\title{\Large \bfseries \sffamily On the exact multiplicity of stable ground states of non-Lipschitz semilinear elliptic equations for some classes of starshaped sets} \author{\bfseries\sffamily J.I. D\'{\i}az, J.~Hern\'{a}ndez and Y.Sh.~Ilyasov \thanks{\hfil\break\indent {\sc Keywords}: semilinear elliptic equation, non-Lipschitz terms, spectral problem, Pohozaev identity, flat and compact support ground states, sharp multiplicity. \hfil\break\indent {\sc AMS Subject Classifications:} 35J60, 35J96, 35R35, 53C45 }} \date{}
\maketitle \begin{abstract} We prove the exact multiplicity of flat and compact support stable solutions of an autonomous non-Lipschitz semilinear elliptic equation of eigenvalue type according to the dimension $N$ and the two exponents, $0<\alpha<\beta<1$, of the involved nonlinearities. Suitable assumptions are made on the spatial domain $\Omega $ where the problem is formulated in order to avoid a possible continuum of those solutions and, on the contrary, to ensure the exact number of solutions according to the nature of the domain $\Omega $. Our results also clarify some previous works in the literature. The main techniques of proof are a Pohozaev-type identity and some fibering-type arguments in the variational approach. \end{abstract}
\section{Introduction}
In this paper we study the existence of non-negative solutions of the following problem \begin{equation} \label{PL} \begin{cases}
-\Delta u+|u|^{\alpha -1}u=\lambda |u|^{\beta -1}u~~\mbox{in}~\Omega , \\ ~~u=0~~\mbox{on}~\partial \Omega . \end{cases} \tag*{$P(\alpha ,\beta ,\lambda )$} \end{equation} Here $\Omega $ is a bounded domain in $\mathbb{R}^{N}$, $N\geq 3$ with a smooth boundary $\partial \Omega $, which is strictly star-shaped with respect to a point $x_{0}\in $ $\mathbb{R}^{N}$ (which will be identified as the origin of coordinates if no confusion may arise), $\lambda $ is a real parameter, $0<\alpha <\beta <1$. By a weak solution of $P(\alpha ,\beta ,\lambda ) $ we mean a critical point $u\in H_{0}^{1}:=H_{0}^{1}(\Omega ) $ of the energy functional \begin{equation*}
E_{\lambda }(u)=\frac{1}{2}\int_{\Omega }|\nabla u|^{2}dx+\frac{1}{{\alpha +1
}}\int_{\Omega }|u|^{{\alpha +1}}dx-\frac{\lambda }{\beta +1}\int_{\Omega
}|u|^{\beta +1}dx, \end{equation*} where $H_{0}^{1}(\Omega )$ is the standard Sobolev space of functions vanishing on the boundary. We are interested in \textit{ground states} of $P(\alpha ,\beta ,\lambda )$: i.e., a weak solution $u_{\lambda }$ of $P(\alpha ,\beta ,\lambda )$ which satisfies the inequality \begin{equation*} E_{\lambda }(u_{\lambda })\leq E_{\lambda }(w_{\lambda }) \end{equation*} for any non-zero weak solution $w_{\lambda }$ of $P(\alpha ,\beta ,\lambda )$. Notice that in \cite{PucciSerrin} the authors also use the term \textquotedblleft ground state\textquotedblright\ with a different meaning.
Since the diffusion-reaction balance $-\Delta u=f(\lambda ,u)$ involves the non-linear reaction term \begin{equation*}
f(\lambda ,u):=\lambda |u|^{\beta -1}u-|u|^{\alpha -1}u, \end{equation*} which is a non-Lipschitz function at zero (since $\alpha <1$ and $\beta <1$), an important and peculiar behavior of the solutions of these problems arises. For instance, this may lead to the violation of the Hopf maximum principle on the boundary and to the existence of compactly supported solutions, as well as of the so-called \textit{flat solutions}, which correspond to weak solutions $u>0$ in $\Omega $ such that \begin{equation} \frac{\partial u}{\partial \nu }=0~~\mbox{on}~~\partial \Omega , \label{N} \end{equation} where $\nu $ denotes the unit outward normal to $\partial \Omega $. When the additional information (\ref{N}) holds but the weak solution may vanish on a subset of $\Omega $ of positive measure, i.e. if $u\geq 0$ in $\Omega $, we shall call it a \textit{compact support solution} of $P(\alpha ,\beta ,\lambda )$ (sometimes also called a \textit{free boundary solution}, since the boundary of its support is not known a priori). Notice that in that case the support of $u$ is strictly included in $\overline{\Omega }$. If $u$ is a weak solution such that property (\ref{N}) is not satisfied, we shall call it a \textit{usual weak solution} (since, at least for the associated linear problem and for Lipschitz non-linear terms, the strong maximum principle due to Hopf implies that (\ref{N}) cannot be verified).
In what follows we shall use the following notation: any largest ball $
B_{R(\Omega )}:=\{x\in \mathbb{R}^{N}:~|x|\leq R(\Omega )\}$ contained in $ \Omega $ will be denoted as an \textit{inscribed ball} in $\Omega $. Our exact multiplicity results will concern the case of some classes of starshaped sets of $\mathbb{R}^{N}$ containing a finite number of different \textit{inscribed balls} in $\Omega .$
For sufficiently large $\lambda $ the existence of a \textit{compactly supported solution} of $P(\alpha ,\beta ,\lambda )$ follows from \cite{GazzolaSerin, Serrin-Zou} (see also, for the case $N=1$, \cite{diaz, Diaz-Hernan-Man}, \cite{CortElgFelmer-1, CortElgFelmer-2, Kaper1, Kaper2}). Indeed, by \cite{Kaper1, Kaper2, GazzolaSerin, Serrin-Zou} the equation in $P(\alpha ,\beta ,1)$ considered in $\mathbb{R}^{N}$ has a unique (up to translation in $\mathbb{R}^{N}$) compactly supported solution $u^{\ast }$; moreover, $u^{\ast }$ is radially symmetric and such that supp$(u^{\ast })$=$\overline{B}_{R^{\ast }}$ for some $R^{\ast }>0$. Hence, since the support of $u_{\sigma }^{\ast }(x):=u^{\ast }(x/\sigma ),~x\in B_{\sigma R^{\ast }}$, is contained in $\Omega $ for sufficiently small $\sigma $, the function $w_{\lambda }^{c}(x)=\sigma ^{\frac{2}{1-\alpha }}\cdot u_{\sigma }^{\ast }(x)$ weakly satisfies $P(\alpha ,\beta ,\lambda )$ in $\Omega $ with $\lambda =\sigma ^{-\frac{2(\beta -\alpha )}{1-\alpha }}$. However, it is not hard to show (see, e.g., Corollary 5.2 below) that, in general (for all sufficiently large $\lambda $), weak solutions $w_{\lambda }$ are not ground states.
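For the reader's convenience, the scaling can be checked directly: writing $w(x)=\sigma ^{s}u^{\ast }(x/\sigma )$ one has
\begin{equation*}
-\Delta w(x)+w(x)^{\alpha }=\sigma ^{s-2}\left( -\Delta u^{\ast }\right) (x/\sigma )+\sigma ^{s\alpha }\,u^{\ast }(x/\sigma )^{\alpha },
\end{equation*}
so the choice $s=\frac{2}{1-\alpha }$ equalizes the two exponents ($s-2=s\alpha $) and, using the equation satisfied by $u^{\ast }$,
\begin{equation*}
-\Delta w+w^{\alpha }=\sigma ^{s-2}\,u^{\ast }(\cdot /\sigma )^{\beta }=\sigma ^{s(\alpha -\beta )}\,w^{\beta },\qquad \sigma ^{s(\alpha -\beta )}=\sigma ^{-\frac{2(\beta -\alpha )}{1-\alpha }}.
\end{equation*}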
On the other hand, finding \textit{flat} or \textit{compactly supported ground states} is important in view of the study of non-stationary problems (see \cite{DIH1, DIH, ilCrit} and \cite{Rosenau}).
The existence of \textit{flat} and \textit{compact support ground states} of $P(\alpha ,\beta ,\lambda )$, for a certain $\lambda ^{\ast }$, has been obtained in \cite{IlEg} (see also \cite{DIH}). In the present paper we develop this result, presenting a sharper explanation of the main arguments of its proof. Furthermore, we shall offer some more precise results on the behaviour of ground states depending on $\lambda $.
It is well known that non-Lipschitz nonlinearities may entail the existence of a continuum of nonnegative compactly supported solutions of elliptic boundary value problems. However, the answer to the same question for ground states or \textit{usual} solutions remains unclear. Notice that this question is important in the investigation of the stability of solutions for non-stationary problems (see \cite{DIH1, DIH, ilCrit}). We recall that, as a matter of fact, flat solutions of $P(\alpha ,\beta ,\lambda ^{\ast })$ may only arise if $\Omega $ is the ball $B_{R^{\ast }}$ mentioned before. For the remaining domains, and values of $\lambda \geq \lambda ^{\ast },$ any weak solution which is not a \textquotedblleft usual\textquotedblright\ solution must have compact support.
Let us state our main results. For given $u\in H_{0}^{1}(\Omega )$, the \textit{fibering mappings} are defined by $\phi _{u}(t)=E_{\lambda }(tu)$, so that from the variational formulation of $P(\alpha ,\beta ,\lambda )$ we know that $\phi _{u}^{\prime
}(t)|_{t=1}=0$ for solutions, where we use the notation \begin{equation*} \phi _{u}^{\prime }(t)=\frac{\partial }{\partial t}E_{\lambda }(tu). \end{equation*}
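For the reader's convenience we also record the explicit form of the fibering map, which follows directly from the definition of $E_{\lambda }$:
\begin{equation*}
\phi _{u}(t)=\frac{t^{2}}{2}\int_{\Omega }|\nabla u|^{2}dx+\frac{t^{\alpha +1}}{\alpha +1}\int_{\Omega }|u|^{\alpha +1}dx-\frac{\lambda \,t^{\beta +1}}{\beta +1}\int_{\Omega }|u|^{\beta +1}dx,
\end{equation*}
so that, for $t>0$,
\begin{equation*}
\phi _{u}^{\prime }(t)=t\int_{\Omega }|\nabla u|^{2}dx+t^{\alpha }\int_{\Omega }|u|^{\alpha +1}dx-\lambda t^{\beta }\int_{\Omega }|u|^{\beta +1}dx.
\end{equation*}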
If we also define $\phi _{u}^{\prime \prime }(t)=\frac{\partial ^{2}}{\partial t^{2}}E_{\lambda }(tu)$, then, in the case $\beta <1$, the equation $\phi _{u}^{\prime }(t)=0$ may have at most two nonzero roots $t_{\min }(u)>0$ and {$t_{\max }(u)>0$} such that $\phi _{u}^{\prime \prime }(t_{\max }(u))\leq 0$, $\phi _{u}^{\prime \prime }(t_{\min }(u))\geq 0$ and $0<t_{\max }(u)\leq t_{\min }(u)$. This implies that any weak solution of $P(\alpha ,\beta ,\lambda )$ (any critical point of $E_{\lambda }(u)$) corresponds to one of the cases $t_{\min }(u)=1$ or $t_{\max }(u)=1$. However, it was discovered in \cite{IlEg} (see also \cite{DIH, ilDr, ilCrit}) that when we study flat or compactly supported solutions this correspondence essentially depends on the relation between $\alpha $, $\beta $ and $N$. Thus, following this idea (from \cite{DIH, ilDr, ilCrit, IlEg}), in the case $N\geq 3$ we consider the following subset of exponents \begin{equation*} \mathcal{E}_{s}(N):=\{(\alpha ,\beta ):~~2(1+\alpha )(1+\beta )-N(1-\alpha )(1-\beta )<0,~0<\alpha <\beta <1\}. \end{equation*} The main property of $\mathcal{E}_{s}(N)$ is that for star-shaped domains $\Omega $ in $\mathbb{R}^{N}$, $N\geq 3,$ if $(\alpha ,\beta )\in \mathcal{E}_{s}(N)$, then any ground state solution $u$ of $P(\alpha ,\beta ,\lambda )$
satisfies $\phi _{u}^{\prime \prime }(t)|_{t=1}>0$ (see Lemma \ref{pro} below and \cite{DIH, IlEg}).
\begin{rem} In the cases $N=1,2$, one has $\mathcal{E}_{s}(N)=\emptyset $. Furthermore, this implies (see \cite{DIH}) that if $N=1,2$ and $0<\alpha <\beta <1$, then any flat or compact support weak solution $u$ of $P(\alpha ,\beta ,\lambda )$
satisfies $\phi _{u}^{\prime \prime }(t)|_{t=1}<0$. \end{rem}
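Indeed, for $N\leq 2$ and $0<\alpha <\beta <1$ an elementary computation gives
\begin{equation*}
2(1+\alpha )(1+\beta )-N(1-\alpha )(1-\beta )\geq 2\left[ (1+\alpha )(1+\beta )-(1-\alpha )(1-\beta )\right] =4(\alpha +\beta )>0,
\end{equation*}
so the defining inequality of $\mathcal{E}_{s}(N)$ cannot hold. By contrast, for $N=3$ one has, for instance, $(0.01,0.02)\in \mathcal{E}_{s}(3)$, since $2(1.01)(1.02)\approx 2.06<3(0.99)(0.98)\approx 2.91$.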
In what follows we shall use the notations \begin{equation*}
E_{\lambda }^{\prime }(u)=\phi _{u}^{\prime }(t)|_{t=1}=\frac{\partial }{
\partial t}E_{\lambda }(tu)|_{t=1},~~E_{\lambda }^{\prime \prime }(u)=\phi _{u}^{\prime \prime }(t)|_{t=1}=\frac{\partial ^{2}}{\partial t^{2}}
E_{\lambda }(tu)|_{t=1},~~~u\in H_{0}^{1}(\Omega ). \end{equation*} Our first result is the following
\begin{thm} \label{Th1} Let $N\geq 3$ and let $\Omega $ be a bounded strictly star-shaped domain in $\mathbb{R}^{N}$ with $C^{2}$-manifold boundary $ \partial \Omega $. Assume that $(\alpha ,\beta )\in \mathcal{E}_{s}(N)$. Then there exists $\lambda ^{\ast }>0$ such that for any $\lambda \geq \lambda ^{\ast }$ problem $P(\alpha ,\beta ,\lambda )$ possess a ground state $u_{\lambda }$. Moreover $E_{\lambda }^{\prime \prime }(u_{\lambda })>0 $, $u_{\lambda }\in C^{1,\gamma }(\overline{\Omega })$ for some $\gamma \in (0,1)$ and $u_{\lambda }\geq 0$ in $\Omega $. For any $\lambda <\lambda ^{\ast }$, problem $P(\alpha ,\beta ,\lambda )$ has no weak solution. \end{thm}
Our second main result deals with the (non-)existence of flat or compactly supported ground states.
\begin{thm} \label{Th2} There is a non-negative ground state $u_{\lambda ^{\ast }}$ which is flat or has compact support. Moreover, $u_{\lambda ^{\ast }}$ is radially symmetric about some point of $\Omega $, and supp$(u_{\lambda ^{\ast }})$=$\overline{B}_{R(\Omega )}$ is an inscribed ball in $\Omega $. For all $\lambda >\lambda ^{\ast }$, any ground state $u_{\lambda }$ of $ P(\alpha ,\beta ,\lambda )$ is a \textquotedblleft usual\textquotedblright\ solution. \end{thm}
Our last result deals with the multiplicity of solutions. Our main goal is to extend the results of \cite{diaz} and \cite{Diaz-Hernan-Man} concerning the one-dimensional case. We also recall that the existence of what we now call \textquotedblleft usual\textquotedblright\ solutions was proved in some previous papers in the literature. The existence of a smooth branch of such positive solutions was proved for $\lambda >\lambda ^{\ast }$ in \cite{HMV} by using a change of variables and then a continuation argument. The existence of at least two non-negative solutions in such a case was shown in \cite{Montenegro} by using variational arguments, and this result was improved in \cite{Annello} by showing that one of the solutions is actually positive, again by variational arguments. Many of these results are valid even in the singular case $-1<\alpha <\beta <1.$
In order to present our exact multiplicity results we introduce the \textit{geometrical reflection} across a given hyperplane $H$, given by the usual isometry $R_{H}:\mathbb{R}^{N}\rightarrow \mathbb{R}^{N}$. Recall that any point of $H$ is a fixed point of $R_{H}$. Now we shall introduce some classes of starshaped sets $\Omega $ for which we can obtain the exact multiplicity of stable flat ground state solutions of problem $P(\alpha ,\beta ,\lambda ^{\ast })$. We say that $\Omega $ is of \textit{Strictly Starshaped Class} $m$ if it is a strictly starshaped domain and contains exactly $m$ inscribed balls of the same radius $R(\Omega )$ such that each of them can be obtained from any other by $k\in \{1,...,m\}$ reflections of $\Omega $ across some hyperplanes $H_{i}$, $i=1,...,k$.
\begin{thm} \label{ThmCor1} Assume $N\geq 3$, $(\alpha ,\beta )\in \mathcal{E}_{s}(N)$. Let $\Omega $ be a domain of Strictly Starshaped Class $m>1$ with a $C^{2}$-manifold boundary $\partial \Omega $. Then there exist exactly $m$ stable nonnegative flat or compactly supported ground states $u_{\lambda ^{\ast }}^{1}$, $u_{\lambda ^{\ast }}^{2}$,..., $u_{\lambda ^{\ast }}^{m}$ of problem $P(\alpha ,\beta ,\lambda ^{\ast })$ and $m$ sets of \textquotedblleft usual\textquotedblright\ ground states $(u_{\lambda _{n}}^{1})_{n=1}^{\infty }$, $(u_{\lambda _{n}}^{2})_{n=1}^{\infty }$,..., $(u_{\lambda _{n}}^{m})_{n=1}^{\infty }$ of $P(\alpha ,\beta ,\lambda _{n})$, with $\lim_{n\rightarrow \infty }\lambda _{n}=\lambda ^{\ast }$, $\lambda _{n}>\lambda ^{\ast }$, $n=1,2,...$, and such that $u_{\lambda _{n}}^{i}\rightarrow u_{\lambda ^{\ast }}^{i}$ strongly in $H_{0}^{1}$ as $n\rightarrow \infty $, for any $i=1,...,m$. \end{thm}
Let us show how some domains of Strictly Starshaped Class $m$ can be obtained. We start by considering an initial bounded Lipschitz set $\Omega _{1}$ of $\mathbb{R}^{N}$ such that: \begin{equation} \Omega _{1}\text{ contains exactly one inscribed ball of radius }R(\Omega_{1} ) \text{.} \label{First cond omega} \end{equation} We also introduce the following notation: given a general open set $G$ of $\mathbb{R}^{N}$ we define $S[G]$ as the set of points $y\in G$ such that $G$ is strictly starshaped with respect to $y$. Then, the second condition we shall require on $\Omega _{1}$ is \begin{equation} S[\Omega _{1}]~~\text{ is not empty.} \label{starshap} \end{equation} Then $\Omega $ belongs to the Strictly Starshaped Class $1$ if there exists $\Omega _{1}$ satisfying (\ref{First cond omega}) and (\ref{starshap}) such that $\Omega =\Omega _{1}$. Now, let us show how we can obtain a domain of Strictly Starshaped Class $2$.
Let $\Omega _{1}$ be a domain of Strictly Starshaped class $1$ and assume, additionally, that the set $S[\Omega _{1}]$ contains some other point different than $x_{1}$, $\{x_{1}\}\varsubsetneq S[\Omega _{1}]$, i.e. \begin{equation*} \text{there exists }y_{1}\in S[\Omega _{1}]\text{ such that }y_{1}\neq x_{1}.
\end{equation*} Let now $\Omega _{2}:=R_{H(y_{1})}(\Omega _{1})$ be the reflected set of $\Omega _{1}$ across some hyperplane $H(y_{1})$ containing the point $y_{1}$ such that \begin{equation*} \Omega _{1}\cup \Omega _{2}\text{ contains exactly one inscribed ball of radius }R(\Omega)\text{ with center }x_{2}\neq x_{1} \text{.} \end{equation*} We now consider \begin{equation*} \Omega =\Omega _{1}\cup \Omega _{2}. \end{equation*} Notice that, obviously, $\Omega $ is strictly starshaped with respect to $y_{1}$ (since $y_{1}\in S[\Omega _{1}]$ and any ray starting from $y_{1}$ is reflected to a ray linking $y_{1}$ with any other point of $\Omega _{2}$). Moreover, such a domain $\Omega $ verifies \begin{align*} \Omega &\text{ contains exactly two inscribed balls of radius }{R(\Omega )},\\ &\text{ with center at two different points }x_{i}\in \Omega ,\text{ }i=1,2. \end{align*} Thus $\Omega $ is a set of \textit{Strictly Starshaped Class 2}. Evidently we can repeat this construction with a domain of \textit{Strictly Starshaped Class 2} and obtain a domain $\Omega $ of \textit{Strictly Starshaped Class 3}, etc.
\begin{figure}
\caption{Domain generating exactly three ground states}
\end{figure}
We believe that we can iterate this process in a similar way up to some number $m:=m(N)\geq 3$, which may depend on the dimension $N$. However, we do not know how to prove this. Moreover, we raise the following conjecture: \textit{For a given dimension $N$, there exists a number $m(N)$ such that for any $k=1,2,...,m(N)$ there exists a domain of Strictly Starshaped Class $k$, whereas there is no domain in $\mathbb{R}^{N}$ of Strictly Starshaped Class $k$ with $k>m(N)$.}
\begin{figure}
\caption{Union of the supports of the three radially symmetric ground states corresponding to the domain given by Figure 1.}
\end{figure}
\begin{rem} We emphasize that by Theorems \ref{Th1}, \ref{Th2} and \ref{ThmCor1} we obtain the complete bifurcation diagram for the ground states of $P(\alpha ,\beta ,\lambda )$ for domains of Strictly Starshaped Class $m$. Indeed, the flat ground state $u_{\lambda ^{\ast }}$ corresponds to a fold bifurcation point (or turning point) from which $m+1$ different branches of weak solutions start: on one hand, the branch of \textquotedblleft usual\textquotedblright\ ground states $u_{\lambda }$, forming a branch of stable equilibria, and, on the other hand, $m$ branches formed by unstable compactly supported weak solutions, of the form $w_{\lambda }^{c}(x;x_{0,j})=\sigma ^{-\frac{2}{1-\alpha }}\cdot u_{\lambda ^{\ast }}((x-x_{0,j})/\sigma )$ with $\lambda =\sigma ^{-\frac{2(\beta -\alpha )}{1-\alpha }}$ (see Figure 1) and $m$ different points $x_{0,j}$, $j=1,...,m$. Furthermore, we have the following global information: the energy of $u_{\lambda ^{\ast }}$ is the maximum among all the possible energies associated with any weak solution of $P(\alpha ,\beta ,\lambda )$. \end{rem}
\begin{figure}
\caption{Bifurcation diagram for the energy levels of ground states and compact support solutions. }
\end{figure} In the last part of the paper we consider the associated parabolic problem
\begin{equation} PP(\alpha ,\beta ,\lambda ,v_{0})\quad \left\{ \begin{array}{ll}
v_{t}-\Delta v+|v|^{\alpha -1}v=\lambda |v|^{\beta -1}v & \text{in } (0,+\infty )\times \Omega \\ v=0 & \text{on }(0,+\infty )\times \partial \Omega \\ v(0,x)=v_{0}(x) & \text{on }\Omega . \end{array} \right. \label{p1} \end{equation} For the basic theory of this problem, under the structural assumption $0<\alpha <\beta <1$, we refer the reader to \cite{DIH} and its references. We apply here some local energy methods, for the two cases $\lambda >\lambda ^{\ast }$ and $\lambda =\lambda ^{\ast }$, to give some information on the evolution and formation, respectively, of the free boundary given by the boundary of the support of the solution $v(t,.)$ when $t$ increases. This provides complementary information since by Theorem 1.1 (and the asymptotic behaviour results for $PP(\alpha ,\beta ,\lambda ,v_{0})$) we know that, as $t\rightarrow +\infty $, the support of $v(t,.)$ must converge to a ball of $\mathbb{R}^{N}$, in the case $\lambda =\lambda ^{\ast }$, or to the whole domain $\overline{\Omega }$, if $\lambda >\lambda ^{\ast }$ (these being the supports of the corresponding stationary solutions).
\section{Preliminaries}
In this section we give some preliminary results. In what follows $H_{0}^{1}:=H_{0}^{1}(\Omega )$ denotes the standard Sobolev space of functions vanishing on the boundary. We can assume that its norm is given by \begin{equation*}
||u||_{1}=\left( \int_{\Omega }|\nabla u|^{2}\,dx\right) ^{1/2}. \end{equation*} Denote \begin{equation*}
P_{\lambda }(u):=\frac{1}{2^{\ast }}\int_{\Omega }|\nabla u|^{2}\,\mathrm{d}
x+\frac{1}{{\alpha +1}}\int_{\Omega }|u|^{{\alpha +1}}\,\mathrm{d}x-\lambda \frac{1}{{\beta +1}}\int_{\Omega
}|u|^{{\beta +1}}\,\mathrm{d}x, \end{equation*} where \begin{equation*} 2^{\ast }=\frac{2N}{N-2}~~\text{ for }N\geq 3. \end{equation*} We will use the notation $P_{\lambda }^{\prime }(tu)=dP_{\lambda }(tu)/dt$, $t>0$, $u\in H_{0}^{1}$. From now on we suppose that the boundary $\partial \Omega $ is a $C^{2}$-manifold. As usual, we denote by $\mathrm{d}\sigma $ the surface measure on $\partial \Omega $. We need the Pohozaev identity for weak solutions of $P(\alpha ,\beta ,\lambda ).$
\begin{lem} \label{lem1} Assume that $\partial \Omega $ is a $C^{2}$-manifold, $N\geq 3$ . Let $u\in C^{1}(\overline{\Omega })$ be a weak solution of $P(\alpha ,\beta ,\lambda )$. Then there holds the Pohozaev identity \begin{equation*} P_{\lambda }(u)=-\frac{1}{2N}\int_{\partial \Omega }\left\vert \frac{ \partial u}{\partial \nu }\right\vert ^{2}\,(x\cdot \nu (x))\mathrm{d}\sigma (x). \end{equation*} \end{lem}
For the proof see \cite{DIH, Takac_Ilyasov}; see also some related results in \cite{poh} and \cite{Temam}.
\noindent Notice that \begin{equation}
E_{\lambda }(u)=P_{\lambda }(u)+\frac{1}{N}\int_{\Omega }|\nabla u|^{2}dx,~~~\forall u\in H_{0}^{1}(\Omega ). \label{PandE} \end{equation}
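Indeed, identity \eqref{PandE} follows directly from the definitions of $E_{\lambda }$ and $P_{\lambda }$, whose gradient terms carry the coefficients $\frac{1}{2}$ and $\frac{1}{2^{\ast }}$ respectively, together with the elementary identity \begin{equation*} \frac{1}{2}-\frac{1}{2^{\ast }}=\frac{1}{2}-\frac{N-2}{2N}=\frac{1}{N}. \end{equation*}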
Assume $\Omega $ is strictly star-shaped with respect to a point $x_{0}\in $ $\mathbb{R}^{N}$ (which will be identified as the origin of coordinates of $\mathbb{R}^{N}$). Observe that if $\Omega $ is a star-shaped (strictly star-shaped) domain with respect to the origin of $\mathbb{R}^{N}$, then $x\cdot \nu \geq 0$ ($x\cdot \nu >0$) for all $x\in \partial \Omega $. This and Lemma \ref{lem1} imply
\begin{cor} \label{cor1} Let $\Omega $ be a bounded star-shaped domain in $\mathbb{R} ^{N} $ with a $C^{2}$-manifold boundary $\partial \Omega $. Then any weak solution $u\in C^{1}(\overline{\Omega })$ of $P(\alpha ,\beta ,\lambda )$ satisfies $P_{\lambda }(u)\leq 0$. Moreover, if $u$ is a flat solution or it has a compact support then $P_{\lambda }(u)=0$. Furthermore, in the case $ \Omega $ is strictly star-shaped, the converse is also true: if $P_{\lambda }(u)=0$ and $u\in C^{1}(\overline{\Omega })$ is a weak solution of $P(\alpha ,\beta ,\lambda )$, then $u$ is flat or it has a compact support. \end{cor}
The proof of the following result can be found in \cite{DIH, IlEg}.
\begin{lem} \label{pro} Assume $N\geq 3$ and $(\alpha ,\beta )\in \mathcal{E}_{s}(N)$.
$(i)$ Let $u\in C^{1}(\overline{\Omega })$ be a flat or compact support weak solution of $P(\alpha ,\beta ,\lambda )$. Then $E_{\lambda }(u)>0$ and $ E_{\lambda }^{\prime \prime }(u)>0$.
$(ii)$ If $E_{\lambda }^{\prime }(u)=0$, $P_{\lambda }(u)\leq 0$ for some $ u\in H_{0}^{1}(\Omega )\setminus 0$, then \begin{equation*} E_{\lambda }^{\prime \prime }(u)>0. \end{equation*} \end{lem}
\begin{rem} When $0<\beta <\alpha <1,$ a case which is not considered in this paper, we have $E_{\lambda }^{\prime \prime }(u)>0$ and $P_{\lambda }(u)<0$ for any weak solution $u\in H_{0}^{1}\setminus 0$ of $P(\alpha ,\beta ,\lambda )$. In particular, in this case, any solution of $P(\alpha ,\beta ,\lambda )$ is a \textquotedblleft usual\textquotedblright\ solution. The uniqueness of solution was shown in \cite{HMV}. \end{rem}
In what follows we shall also need
\begin{claim} \label{pradd} If $E_{\lambda}^{\prime }(tu)=0$ for $u\neq 0 $, then $ P^{\prime }_\lambda(tu)<0$. \end{claim} \noindent {\em Proof}\quad Observe that, \begin{equation*}
P_{\lambda }^{\prime }(tu)=\frac{N-2}{N}t\int_{\Omega }|\nabla u|^{2}\,
\mathrm{d}x-\lambda t^{\beta }\int_{\Omega }|u|^{\beta +1}\,\mathrm{d}
x+t^{\alpha }\int_{\Omega }|u|^{\alpha +1}\,\mathrm{d}x=E_{\lambda
}^{\prime }(tu)-\frac{2t}{N}\int_{\Omega }|\nabla u|^{2}\,\mathrm{d}x. \end{equation*} Thus $E_{\lambda }^{\prime }(tu)=0$ entails $P^{\prime}_{\lambda
}(tu)=-(2t/N)\int |\nabla u|^{2}\,\mathrm{d}x<0
\mbox{$\quad{}_{\Box}$}$.
\section{Auxiliary extremal values}
In this section we introduce some extremal values which will play an important role in what follows. Some of these values, and the corresponding variational functionals, have already been introduced in \cite{DIH, IlEg}. However, for our aims we shall introduce them using another approach, which is more natural and easier to handle.
Our approach will be based on using a \textit{nonlinear generalized Rayleigh quotient} (see \cite{ilyaReil}). In fact, we can associate to problem $ P(\alpha ,\beta ,\lambda )$ several nonlinear generalized Rayleigh quotients which may give useful information on the nature of the problem. In this paper we will deal with two of them.
First, let us consider the following Rayleigh's quotient \cite{ilyaReil} \begin{equation}
R^{0}(u)=\frac{\frac{1}{2}\int_{\Omega }|\nabla u|^{2}dx+\frac{1}{{\alpha +1}
}\int_{\Omega }|u|^{{\alpha +1}}dx}{\frac{1}{{\beta +1}}\int_{\Omega }|u|^{{ \beta +1}}dx},~~u\neq 0. \label{lamb1} \end{equation} Following \cite{ilyaReil}, we consider \begin{equation}
r_{u}^{0}(t):=R^{0}(tu)=\frac{\frac{t^{1-\beta }}{2}\int_{\Omega }|\nabla u|^{2}dx+\frac{t^{\alpha -\beta }}{{\alpha +1}}\int_{\Omega }|u|^{{\alpha +1}
}dx}{\frac{1}{{\beta +1}}\int_{\Omega }|u|^{{\beta +1}}dx},~~t>0,~~u\neq 0. \label{RaylCC} \end{equation} Notice that for any $u\neq 0$, \ and $\lambda \in \mathbb{R}$, \begin{equation}
\text{if}~R^{0}(u)\equiv r_{u}^{0}(t)|_{t=1}=\lambda ,~~\text{then} ~~E_{\lambda }(u)=0. \label{LzeroProp} \end{equation}
It is easy to see that $\partial r_{u}^{0}(t)/\partial t=0$ if and only if \begin{equation*}
(1-\beta )\frac{t^{-\beta }}{2}\int_{\Omega }|\nabla u|^{2}dx+(\alpha -\beta
)\frac{t^{\alpha -\beta -1}}{{\alpha +1}}\int_{\Omega }|u|^{{\alpha +1}}dx=0, \end{equation*} and that the only solution to this equation is \begin{equation} t_{0}(u)=\left( \frac{2(\beta -\alpha )}{(\alpha +1)(1-\beta )}\frac{
\int_{\Omega }|u|^{{\alpha +1}}dx}{\int_{\Omega }|\nabla u|^{2}dx}\right) ^{ \frac{1}{1-\alpha }}. \label{P11} \end{equation} Let us emphasize that $t_{0}(u)$ is a value where the function $r_{u}^{0}(t)$ attains its global minimum. Substituting $t_{0}(u)$ into $r_{u}^{0}(t)$ we obtain the following \textit{nonlinear generalized Rayleigh quotient}: \begin{equation}
\lambda _{0}(u)=r_{u}^{0}(t_{0}(u))\equiv R^{0}(tu)|_{t=t_{0}(u)}=c_{0}^{\alpha ,\beta }\lambda (u), \label{lambdazero} \end{equation} where \begin{equation} c_{0}^{\alpha ,\beta }=\frac{(1-\alpha )(\beta +1)}{(1-\beta )(1+\alpha )} \left( \frac{(1-\beta )(\alpha +1)}{2(\beta -\alpha )}\right) ^{\frac{\beta -\alpha }{1-\alpha }}, \label{cE} \end{equation} and \begin{equation*}
\lambda (u)=\frac{(\int_{\Omega }|u|^{{\alpha +1}}dx)^{\frac{1-\beta }{
1-\alpha }}(\int_{\Omega }|\nabla u|^{2}dx)^{\frac{\beta -\alpha }{1-\alpha }
}}{\int_{\Omega }|u|^{{\beta +1}}dx}. \end{equation*} See Figure 3.
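For completeness, we indicate the computation behind \eqref{lambdazero}--\eqref{cE}. Writing $A=\int_{\Omega }|\nabla u|^{2}dx$, $B=\int_{\Omega }|u|^{{\alpha +1}}dx$ and $C=\int_{\Omega }|u|^{{\beta +1}}dx$, relation \eqref{P11} reads $t_{0}(u)^{1-\alpha }=\frac{2(\beta -\alpha )}{(\alpha +1)(1-\beta )}\frac{B}{A}$, so that \begin{equation*} r_{u}^{0}(t_{0}(u))=(\beta +1)\,t_{0}(u)^{\alpha -\beta }\,\frac{B}{C}\left( \frac{\beta -\alpha }{(\alpha +1)(1-\beta )}+\frac{1}{\alpha +1}\right) =\frac{(1-\alpha )(\beta +1)}{(1-\beta )(1+\alpha )}\,t_{0}(u)^{\alpha -\beta }\,\frac{B}{C}. \end{equation*} Substituting $t_{0}(u)^{\alpha -\beta }=\left( \frac{(1-\beta )(\alpha +1)}{2(\beta -\alpha )}\frac{A}{B}\right) ^{\frac{\beta -\alpha }{1-\alpha }}$ yields precisely $\lambda _{0}(u)=c_{0}^{\alpha ,\beta }\lambda (u)$ with $c_{0}^{\alpha ,\beta }$ and $\lambda (u)$ as above.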
It is not hard to prove (see, e.g., page 400 of \cite{Zeidler}) that
\begin{claim} \label{propDiff} The map $\lambda (\cdot ):H_{0}^{1}(\Omega )\setminus 0\rightarrow \mathbb{R}$ is a $C^{1}$-functional. \end{claim}
Consider the following extremal value of $\lambda _{0}(u)$ \begin{equation} \Lambda _{0}=\inf_{u\in H_{0}^{1}(\Omega )\setminus 0}\lambda _{0}(u). \label{P3} \end{equation} Using Sobolev's and H\"{o}lder's inequalities (see, e.g., \cite{IlEg}) it can be shown that \begin{equation} 0<\Lambda _{0}<+\infty . \end{equation} By the above construction and using \eqref{LzeroProp} it is not hard to prove the following
\begin{claim} \label{propL0}
\begin{description} \item[(i)] If $\lambda <\Lambda _{0}$, then $E_{\lambda }(u)>0$ for any $u \neq 0$,
\item[(ii)] For any $\lambda >\Lambda _{0}$ there is $u\in H_{0}^{1}(\Omega )\setminus 0$ such that $E_{\lambda }(u)<0$, $E_{\lambda }^{\prime }(u)=0$. \end{description} \end{claim}
In what follows we shall use the following result:
\begin{claim} \label{CRRay} Let $u$ be a critical point of $\lambda _{0}(u)$ at some critical value $\bar{\lambda}$, i.e. $D_{u}\lambda _{0}(u)=0 $, $\bar{\lambda }=\lambda _{0}(u)$. Then $D_{u}E_{\bar{\lambda}}(u)=0$ and $E_{\bar{\lambda} }(u)=0$. \end{claim}
\noindent {\em Proof}\quad Observe that \begin{equation*} D_{u}\lambda _{0}(u)(\phi )=D_{u}r_{u}^{0}(t_{0}(u))(\phi )+\frac{\partial }{ \partial t}r_{u}^{0}(t_{0}(u))(D_{u}t_{0}(u)(\phi ))=0,~~~~\forall \phi \in C_{0}^{\infty }(\Omega ). \end{equation*}
Hence, since $\partial r_{u}^{0}(t)/\partial t|_{t=t_{0}(u)}=0$, we get \begin{equation*}
D_{u}r_{u}^{0}(t_{0}(u))(\phi )=t_{0}(u)\cdot D_{w}R^{0}(w)|_{w=t_{0}(u)u}(\phi )=0,~~~\forall \phi \in C_{0}^{\infty }(\Omega ). \end{equation*} Now taking into account that the equality $\bar{\lambda}=\lambda _{0}(u)$ implies $E_{\bar{\lambda}}(u)=0,$ we obtain \begin{equation*}
0=D_{w}R^{0}(w)|_{w=t_{0}(u)u}=\frac{1}{\int_{\Omega }|w|^{{\beta +1}}dx}
\cdot D_{w}E_{\bar{\lambda}}(w)|_{w=t_{0}(u)u}, \end{equation*} which yields the proof.$
\mbox{$\quad{}_{\Box}$}$
We shall need also the following Rayleigh's quotients: \begin{align}
& R^{P}(u)=\frac{\frac{1}{2^{\ast }}\int_{\Omega }|\nabla u|^{2}\,\mathrm{d}
x+\frac{1}{{\alpha +1}}\int_{\Omega }|u|^{{\alpha +1}}dx}{\frac{1}{{\beta +1}
}\int_{\Omega }|u|^{{\beta +1}}dx}, \label{lamb11} \\
& R^{1}(u)=\frac{\int_{\Omega }|\nabla u|^{2}\,\mathrm{d}x+\int_{\Omega
}|u|^{{\alpha +1}}dx}{\int_{\Omega }|u|^{{\beta +1}}dx},~~u\neq 0. \end{align} Notice that for any $u\neq 0$ and $\lambda \in \mathbb{R}$, \begin{equation} R^{P}(u)=\lambda \Leftrightarrow P_{\lambda }(u)=0~~\mbox{and} ~~R^{1}(u)=\lambda \Leftrightarrow E_{\lambda }^{\prime }(u)=0. \label{RPR1} \end{equation} Let $u\neq 0$. Consider $r_{u}^{P}(t):=R^{P}(tu)$, $r_{u}^{1}(t):=R^{1}(tu)$ , $t>0$. Then, arguing as above for $r_{u}^{0}(t),$ it can be shown that each of these functions attains its global minimum at some point, $t_{P}(u)$ and $t_{1}(u)$, respectively. Moreover, it is easily seen that the following equation \begin{equation} r_{u}^{P}(t)=r_{u}^{1}(t),~~~t>0, \label{eqPD} \end{equation} has a unique solution \begin{equation} t_{1P}(u)=\left( \frac{2^{\ast }(\beta -\alpha )}{(2^{\ast }-\beta
-1)(\alpha +1)}\frac{\int_{\Omega }|u|^{{\alpha +1}}dx}{\int_{\Omega
}|\nabla u|^{2}dx}\right) ^{\frac{1}{1-\alpha }}. \label{P1} \end{equation} Thus, we have the following \textit{nonlinear generalized Rayleigh quotient} \begin{equation*} \lambda _{1P}(u):=r_{u}^{P}(t_{1P}(u))=r_{u}^{1}(t_{1P}(u)). \end{equation*} It is easily seen that $\lambda _{1P}(u)=c_{1P}^{\alpha ,\beta }\lambda (u)$, where \begin{equation} c_{1P}^{\alpha ,\beta }=\frac{(\beta +1)(2^{\ast }-\alpha -1)}{(\alpha +1)(2^{\ast }-\beta -1)}\left( \frac{(2^{\ast }-\beta -1)(\alpha +1)}{2^{\ast }(\beta -\alpha )}\right) ^{\frac{\beta -\alpha }{1-\alpha }}. \label{cPD} \end{equation} Notice that \begin{equation} P_{\lambda _{1P}(u)}(t_{1P}(u)u)=0,~~E_{\lambda _{1P}(u)}^{\prime }(t_{1P}(u)u)=0,~~\forall u\neq 0. \label{P1Propert} \end{equation} Consider \begin{equation} \Lambda _{1P}=\inf_{u\neq 0}\lambda _{1P}(u). \label{PPoh} \end{equation} Using Sobolev's and H\"{o}lder's inequalities it can be shown (see, e.g., \cite{IlEg}) that \begin{equation} 0<\Lambda _{1P}<+\infty . \end{equation}
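For the reader's convenience we indicate the computation behind \eqref{cPD}. With $A=\int_{\Omega }|\nabla u|^{2}dx$, $B=\int_{\Omega }|u|^{{\alpha +1}}dx$ and $C=\int_{\Omega }|u|^{{\beta +1}}dx$, relation \eqref{P1} gives $t_{1P}(u)^{1-\alpha }=\frac{2^{\ast }(\beta -\alpha )}{(2^{\ast }-\beta -1)(\alpha +1)}\frac{B}{A}$, and therefore \begin{equation*} \lambda _{1P}(u)=r_{u}^{1}(t_{1P}(u))=t_{1P}(u)^{\alpha -\beta }\,\frac{B}{C}\left( \frac{2^{\ast }(\beta -\alpha )}{(2^{\ast }-\beta -1)(\alpha +1)}+1\right) =\frac{(\beta +1)(2^{\ast }-\alpha -1)}{(\alpha +1)(2^{\ast }-\beta -1)}\,t_{1P}(u)^{\alpha -\beta }\,\frac{B}{C}. \end{equation*} Substituting $t_{1P}(u)^{\alpha -\beta }=\left( \frac{(2^{\ast }-\beta -1)(\alpha +1)}{2^{\ast }(\beta -\alpha )}\frac{A}{B}\right) ^{\frac{\beta -\alpha }{1-\alpha }}$ then yields $\lambda _{1P}(u)=c_{1P}^{\alpha ,\beta }\lambda (u)$ with $c_{1P}^{\alpha ,\beta }$ given by \eqref{cPD}.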
Moreover we have (see Figure 5):
\begin{claim} \label{propEst} For any $u \neq 0$,
\begin{description} \item[(i)] $r_{u}^{P}(t)>r_{u}^{1}(t)$ iff $t\in (0,t_{1P}(u))$ and $ r_{u}^{P}(t)<r_{u}^{1}(t)$ iff $t\in (t_{1P}(u),+\infty )$;
\item[(ii)] $t_{1}(u)<t_{1P}(u)<t_{P}(u)$. \end{description} \end{claim}
\noindent {\em Proof}\quad Observe that $r_{u}^{P}(t)/r_{u}^{1}(t)\rightarrow \frac{\beta +1}{{\alpha +1}}>1$ as $t\rightarrow 0$. Hence, from the uniqueness of $t_{1P}(u)$ we obtain \textbf{(i)}.
\noindent By \eqref{P1Propert} we have $E_{\lambda _{1P}(u)}^{\prime }(t_{1P}(u)u)=0$. Therefore Proposition \ref{pradd} implies $P_{\lambda _{1P}(u)}^{\prime }(t_{1P}(u)u)<0$. Hence, since \begin{equation*}
\frac{d}{dt}r_{u}^{P}(t)|_{t=t_{1P}(u)}=\frac{\beta +1}{\int |tu|^{{\beta +1}
}dx}\cdot \frac{d}{dt}P_{\lambda _{1P}(u)}(tu)|_{t=t_{1P}(u)}, \end{equation*}
we conclude that $\frac{d}{dt}r_{u}^{P}(t)|_{t=t_{1P}(u)}<0$. Now, taking into account that $t_{P}(u)$ is a point of global minimum of $r_{u}^{P}(t)$, we obtain that $t_{1P}(u)<t_{P}(u)$. To prove that $t_{1}(u)<t_{1P}(u)$, first observe that \begin{equation*}
\frac{d}{dt}r_{u}^{1}(t)|_{t=t_{1P}(u)}=\frac{1}{
\int_{\Omega }|tu|^{{\beta +1}}dx}\cdot E_{\lambda_{1P}(u)}^{\prime \prime
}(tu)|_{t=t_{1P}(u)}, \end{equation*} and that by Lemma \ref{pro} the equalities $E_{\lambda _{1P}(u)}^{\prime }(t_{1P}(u)u)=0$, $P_{\lambda _{1P}(u)}(t_{1P}(u)u)=0$ imply $E_{\lambda _{1P}(u)}^{\prime \prime }(t_{1P}(u)u)>0$. Thus $\frac{d}{dt} r_{u}^{1}(t_{1P}(u))>0$ and the proof follows.$
\mbox{$\quad{}_{\Box}$}$
\begin{cor} \label{corPG}
\begin{description} \item[(i)] If $\lambda<\Lambda_{1P}$ and $E_{\lambda}^{\prime }(u)=0$, then $ P_\lambda(u)>0$.
\item[(ii) ] For any $\lambda>\Lambda_{1P}$, there exists $u \in H_{0}^{1}\setminus 0 $ such that $E_{\lambda}^{\prime }(u)=0$ and $ P_\lambda(u)\leq 0$ \end{description} \end{cor}
\noindent {\em Proof}\quad \textbf{(i)}.~Let $u \in H_{0}^{1}\setminus 0$ and $\lambda <\Lambda _{1P}$ be such that $E_{\lambda}^{\prime }(u)=0$. Since $\lambda <\Lambda _{1P}\leq \lambda _{1P}(u)$, in view of \eqref{RPR1} we have $r^1_u(1)=\lambda<\lambda_{1P}(u)$. Since $r_{u}^{1}$ attains its global minimum at its unique critical point $t_{1}(u)$ and, by \textbf{(ii)} of Proposition \ref{propEst}, $t_{1}(u)<t_{1P}(u)$, the inequality $r_{u}^{1}(1)<\lambda _{1P}(u)=r_{u}^{1}(t_{1P}(u))$ forces $1<t_{1P}(u)$. Therefore, by \textbf{(i)} of Proposition \ref{propEst}, we have $r^P_u(1)>r^1_u(1)=\lambda$, and thus, by \eqref{RPR1}, we get $P_\lambda(u)>0$.
The proof of \textbf{(ii)} is similar to \textbf{(i)}.$
\mbox{$\quad{}_{\Box}$}$
\begin{cor} \label{proL} $\Lambda_{1P}<\Lambda_0$. \end{cor} \noindent {\em Proof}\quad Suppose that $\Lambda _{0}<\Lambda _{1P}$. By Proposition \ref{propL0}, for any $\lambda \in (\Lambda _{0},\Lambda _{1P})$ there exists $u \neq 0$ such that $E_{\lambda }(u)<0$ and $E_{\lambda }^{\prime }(u)=0$. By Corollary \ref{corPG}, the equality $E_{\lambda }^{\prime }(u)=0$ entails $P_{\lambda }(u)>0$. Hence by \eqref{PandE} we have $E_{\lambda }(u)>P_{\lambda }(u)>0$, i.e., we get a contradiction. Moreover, since $\Lambda _{0}=c_{0}^{\alpha ,\beta }\inf_{u\neq 0}\lambda (u)$ and $\Lambda _{1P}=c_{1P}^{\alpha ,\beta }\inf_{u\neq 0}\lambda (u)$ with $0<\inf_{u\neq 0}\lambda (u)<+\infty $, the equality $\Lambda _{0}=\Lambda _{1P}$ is impossible since $c_{1P}^{\alpha ,\beta }\neq c_{0}^{\alpha ,\beta }.
\mbox{$\quad{}_{\Box}$}$
\begin{cor} \label{nonexist} Let $\Omega $ be a bounded star-shaped domain in $\mathbb{R}^{N}$ with $C^{2}$-manifold boundary $\partial \Omega $. Then for any $\lambda <\Lambda _{1P}$ problem $P(\alpha ,\beta ,\lambda )$ has no weak solution. \end{cor}
\noindent {\em Proof}\quad Let $\lambda<\Lambda_{1P}$. Assume conversely that there exists a weak solution $u$. By the regularity of solutions of elliptic equations, $u \in C^1(\overline{\Omega})$. Then since $E_{\lambda}^{\prime }(u)=0$ by Corollary \ref{corPG} we have $P_\lambda(u)>0$. However by Corollary \ref{cor1}, any weak solution $u \in C^1(\overline{\Omega})$ of $ P(\alpha ,\beta ,\lambda)$ satisfies $P_\lambda(u)\leq 0$. Thus we get a contradiction. $
\mbox{$\quad{}_{\Box}$}$
\section{Main constrained minimization problem}
Consider the constrained minimization problem: \begin{equation} \hat{E}_{\lambda }:=\min_{u\in M_{\lambda }}E_{\lambda }(u), \label{min1} \end{equation} where \begin{equation*} M_{\lambda }:=\{u\in H_{0}^{1}\setminus 0:~E_{\lambda }^{\prime }(u)=0,~P_{\lambda }(u)\leq 0\}. \end{equation*} Observe that, as follows from Corollary \ref{cor1}, any weak solution of $P(\alpha ,\beta ,\lambda )$ belongs to $M_{\lambda }$. Hence, if $\hat{E}_{\lambda }=E_{\lambda }(u_{\lambda })$ in \eqref{min1} for some solution $u_{\lambda }$ of $P(\alpha ,\beta ,\lambda )$, then $u_{\lambda }$ is a ground state.
\begin{claim} \label{PrL} $M_\lambda\neq \emptyset$ for any $\lambda> \Lambda_{1P}$. \end{claim}
\noindent {\em Proof}\quad Let $\lambda >\Lambda _{1P}$. Consider the function $\lambda _{1P}(\cdot ):H_{0}^{1}\setminus 0\rightarrow \mathbb{R}$. By Proposition \ref{propDiff} this is a continuous functional. Hence there is $u\in H_{0}^{1}\setminus 0$ such that $\Lambda _{1P}<\lambda _{1P}(u)<\lambda $. Since by \eqref{P1Propert} we have $P_{\lambda _{1P}(u)}(t_{1P}(u)u)=0$, $E_{\lambda _{1P}(u)}^{\prime }(t_{1P}(u)u)=0$, it follows that $P_{\lambda }(t_{1P}(u)u)<0$ and $E_{\lambda }^{\prime }(t_{1P}(u)u)<0$. Hence there is $t_{\min }(u)>t_{1P}(u)$ such that $E_{\lambda }^{\prime }(t_{\min }(u)u)=0$. Since $P_{\lambda }^{\prime }(tu)=E_{\lambda }^{\prime }(tu)-(2t/N)\int |\nabla u|^{2}$ for any $t>0$, we have $P_{\lambda }^{\prime }(t_{\min }(u)u)<0$, which implies that $P_{\lambda }(t_{\min }(u)u)<0$. Thus $t_{\min }(u)u\in M_{\lambda }.
\mbox{$\quad{}_{\Box}$}$
\begin{lem} \label{le1e} For any $\lambda> \Lambda_{1P}$ there exists a minimizer $ u_\lambda$ of problem (\ref{min1}), i.e., $E_\lambda(u_\lambda)=\hat{E} _\lambda$ and $u_\lambda\in M_\lambda$. \end{lem}
\noindent {\em Proof}\quad Let $\lambda >\Lambda _{1P}$. Then $M_{\lambda }$ is bounded. Indeed, if $u\in M_{\lambda }$,
then \begin{equation*}
\frac{1}{2^{\ast }}\int_{\Omega }|\nabla u|^{2}\,\mathrm{d}x+\frac{1}{{
\alpha +1}}\int_{\Omega }|u|^{{\alpha +1}}\,\mathrm{d}x\leq \lambda \frac{1}{
{\beta +1}}\int_{\Omega }|u|^{{\beta +1}}\,\mathrm{d}x\leq c\lambda \frac{1}{{\beta +1}}\Vert u\Vert _{1}^{\beta +1}. \end{equation*} Hence $\Vert u\Vert _{1}\leq C<+\infty $ for all $u\in M_{\lambda }$. Now, if $(u_{m})$ is a minimizing sequence of (\ref{min1}), then it is bounded and there exists a subsequence, again denoted by $(u_{m})$, which converges weakly, $u_{m}\rightharpoonup u_{0}$ in $H_{0}^{1}$, and strongly, $u_{m}\rightarrow u_{0}$ in $L^{q}$, $1<q<2^{\ast }$. We claim that $u_{m}\rightarrow u_{0}$ strongly in $H_{0}^{1}$. If not, $\Vert u_{0}\Vert _{1}<\liminf_{m\rightarrow \infty }\Vert u_{m}\Vert _{1}$ and this implies \begin{align*}
\int_{\Omega }|\nabla u_{0}|^{2}\,\mathrm{d}x+& \int_{\Omega }|u_{0}|^{{
\alpha +1}}\,\mathrm{d}x-\lambda \int_{\Omega }|u_{0}|^{{\beta +1}}\,\mathrm{ d}x< \\
& \liminf_{m\rightarrow \infty }\left( \int_{\Omega }|\nabla u_{m}|^{2}\,
\mathrm{d}x+\int_{\Omega }|u_{m}|^{{\alpha +1}}\,\mathrm{d}x-\lambda
\int_{\Omega }|u_{m}|^{{\beta +1}}\,\mathrm{d}x\right) =0 \end{align*} since $E_{\lambda }^{\prime }(u_{m})=0$, $m=1,2,...$. Hence $u_{0}\neq 0$ and $E_{\lambda }^{\prime }(u_{0})<0$. Then there exists $\gamma >1$ such that $E_{\lambda }^{\prime }(\gamma u_{0})=0$ and $E_{\lambda }(\gamma u_{0})<E_{\lambda }(u_{0})<\hat{E}_{\lambda }$. By Proposition \ref{pradd}, $E_{\lambda }^{\prime }(\gamma u_{0})=0$ implies $P_{\lambda }^{\prime }(\gamma u_{0})<0$. From this and since \begin{equation*} P_{\lambda }(u_{0})<\liminf_{m\rightarrow \infty }P_{\lambda }(u_{m})\leq 0, \end{equation*} we conclude that $P_{\lambda }(\gamma u_{0})<0$. Thus $\gamma u_{0}\in M_{\lambda }$ and $E_{\lambda }(\gamma u_{0})<\hat{E}_{\lambda }$, which is a contradiction.$
\mbox{$\quad{}_{\Box}$}$
\subsection{Existence of a flat or compact support ground state $u_{\protect \lambda ^{\ast }}$}
Let $\lambda >\Lambda _{1P}$. Then, by Lemma \ref{le1e}, there exists a minimizer $u_{\lambda }$ of (\ref{min1}). Notice that, since $\min \{\alpha ,\beta \}>0$, the functionals $E_{\lambda }(u)$, $E_{\lambda }^{\prime }(u)$ and $P_{\lambda }(u)$ are $C^{1}$ on $H_{0}^{1}(\Omega )$. Hence we may apply the Lagrange multiplier rule (see, e.g., page 417 of \cite{Zeidler}), and thereby there exist Lagrange multipliers $\mu _{0}$, $\mu _{1}$, $\mu _{2}$ such that
$|\mu _{0}|+|\mu _{1}|+|\mu _{2}|\neq 0$, $\mu _{2}\geq 0$ and \begin{eqnarray} &&\mu _{0}D_{u}E_{\lambda }(u_{\lambda })+\mu _{1}D_{u}E_{\lambda }^{\prime }(u_{\lambda })+\mu _{2}D_{u}P_{\lambda }(u_{\lambda })=0, \label{eq2} \\ &&\mu _{2}P_{\lambda }(u_{\lambda })=0. \label{eq22} \end{eqnarray}
\begin{claim} \label{Lag} Assume $(\alpha ,\beta )\in \mathcal{E}_{s}(N)$. Let $\lambda >\Lambda _{1P}$ and $u_{\lambda }\in H_{0}^{1}$ be a minimizer in (\ref{min1} ) such that $P_{\lambda }(u_{\lambda })<0$. Then $u_{\lambda }$ is a weak solution to $P(\alpha ,\beta ,\lambda )$. \end{claim}
\noindent {\em Proof}\quad Since $P_{\lambda }(u_{\lambda })<0$, equality \eqref{eq22} implies $\mu _{2}=0$. Moreover, since $(\alpha ,\beta )\in \mathcal{E}_{s}(N)$, (ii) of Lemma \ref{pro} implies that $E_{\lambda }^{\prime \prime }(u_{\lambda })>0$. Testing \eqref{eq2} by $u_{\lambda }$ and using $E_{\lambda }^{\prime }(u_{\lambda })=0$, we get $0=\mu _{0}E_{\lambda }^{\prime }(u_{\lambda })+\mu _{1}E_{\lambda }^{\prime \prime }(u_{\lambda })=\mu _{1}E_{\lambda }^{\prime \prime }(u_{\lambda })$. But $E_{\lambda }^{\prime \prime }(u_{\lambda })\neq 0$ and therefore $\mu _{1}=0$. Thus, $D_{u}E_{\lambda }(u_{\lambda })=0$, that is, $u_{\lambda }$ weakly satisfies $P(\alpha ,\beta ,\lambda )$. This completes the proof.$
\mbox{$\quad{}_{\Box}$}$
\noindent Introduce \begin{equation} Z:=\{\lambda \in (\Lambda _{1P},+\infty ):~~P_{\lambda }(u_{\lambda })<0,~~u_{\lambda }\in M_{\lambda }~~\mbox{s.t.}~E_{\lambda }(u_{\lambda })= \hat{E}_{\lambda }\}. \label{Pr1} \end{equation}
\begin{claim} $Z$ is a non-empty open subset of $(\Lambda_{1P}, +\infty)$. \end{claim}
\noindent {\em Proof}\quad Notice that by Lemma \ref{le1e}, for any $\lambda >\Lambda _{1P}$ there exists $u_{\lambda }\in M_{\lambda }$ such that $ E_{\lambda }(u_{\lambda })=\hat{E}_{\lambda }$. To prove that $Z\neq \emptyset $, we show that $[\Lambda _{0},+\infty )\subset Z$. Take $\lambda \geq \Lambda _{0}$. Then in view of (ii), Proposition \ref{propL0} we have $ \hat{E}_{\lambda }\leq 0$. Thus $E_{\lambda }(u_{\lambda })\leq 0$, for any $ u_{\lambda }\in M_{\lambda }$ such that $E_{\lambda }(u_{\lambda })=\hat{E} _{\lambda }$. In view of \eqref{PandE} we have $E_{\lambda }(u_{\lambda })>P_{\lambda }(u_{\lambda })$ and therefore $P_{\lambda }(u_{\lambda })<0$, $\forall u_{\lambda }\in M_{\lambda }$~~ such that $E_{\lambda }(u_{\lambda })=\hat{E}_{\lambda }$. Thus $\lambda \in Z$.
\noindent Let us show that $Z$ is an open subset of $(\Lambda _{1P},+\infty ) $. Notice that if $Z=(\Lambda _{1P},+\infty )$, then $Z$ is an open subset of $(\Lambda _{1P},+\infty )$ by the definition.
\noindent Assume $Z\neq (\Lambda _{1P},+\infty )$. Let $\lambda \in Z$. Suppose, contrary to our claim, that there is a sequence $(\lambda _{m})\subset (\Lambda _{1P},+\infty )\setminus Z$ such that $\lambda _{m}\rightarrow \lambda $ as $m\rightarrow \infty $. Then there is a sequences of solutions $(u_{\lambda _{m}})$ of (\ref{min1}) such that $ P_{\lambda _{m}}(u_{\lambda _{m}})=0$. Then by Lemma \ref{app} (see Appendix I), there exists a minimizer $u_{\lambda }$ of (\ref{min1}) and a subsequence, still denoted by $(u_{\lambda _{m}})$, such that $u_{\lambda _{m}}\rightarrow u_{\lambda }$ strongly in $H_{0}^{1}$ as $m\rightarrow +\infty $. However, then $P_{\lambda }(u_{\lambda })=0$, which contradicts the assumption $\lambda \in Z.
\mbox{$\quad{}_{\Box}$}$
Set \begin{equation*} \lambda^*:=\inf Z. \end{equation*}
\begin{lem} \label{Pr2} There exists a minimizer $u_{\lambda ^{\ast }}$ of (\ref{min1}) which is a flat or a compact support non-negative ground state of $P(\alpha ,\beta ,\lambda ^{\ast })$. Furthermore, $\Lambda _{1P}<\lambda ^{\ast }$ and there exists a set of \textquotedblleft usual\textquotedblright\ non-negative ground states $(u_{\lambda _{n}})_{n=1}^{\infty }$ of $P(\alpha ,\beta ,\lambda _{n})$, with $\lambda _{n}\downarrow \lambda ^{\ast }$ as $ n\rightarrow \infty $, such that $u_{\lambda _{n}}\rightarrow u_{\lambda ^{\ast }}$ strongly in $H_{0}^{1}$ as $n\rightarrow \infty $. \end{lem}
\noindent {\em Proof}\quad Since $Z$ is an open set, we can find a sequence $ \lambda _{n}\in Z$, $n=1,2,...$ such that $\lambda _{n}\rightarrow \lambda ^{\ast }$ as $n\rightarrow \infty $. By the definition of $Z$ for any $ n=1,2,...$ we can find a minimizer $u_{\lambda _{n}}$ of (\ref{min1}) such that $P_{\lambda _{n}}(u_{\lambda _{n}})<0$. Then Proposition \ref{Lag} yields that $u_{\lambda _{n}}$ weakly satisfies $P(\alpha ,\beta ,\lambda _{n})$, $n=1,2,...$. Moreover by Corollary \ref{cor1}, $u_{\lambda _{n}}$ is a \textquotedblleft usual\textquotedblright\ weak solution of $P(\alpha
,\beta ,\lambda _{n})$, $n=1,2,...$. Since $E_{\lambda }(|u|)=E_{\lambda }(u)
$, $E_{\lambda }^{\prime }(|u|)=E_{\lambda }^{\prime }(u)=0$, $P_{\lambda
}(|u|)=P_{\lambda }(u)$ for any $u\in H_{0}^{1}$ we may assume that $ u_{\lambda _{n}}\geq 0$, $n=1,2,...$. Furthermore, since $\hat{E}_{\lambda _{n}}={E}_{\lambda _{n}}(u_{\lambda _{n}})$, $u_{\lambda _{n}}$ is a ground state of $P(\alpha ,\beta ,\lambda _{n})$, $n=1,2,...$. Thus we have a set of \textquotedblleft usual\textquotedblright\ non-negative ground states $ (u_{\lambda _{n}})_{n=1}^{\infty }$ of $P(\alpha ,\beta ,\lambda _{n})$, $ n=1,2,...$.
\noindent By Lemma \ref{app} (see Appendix I), there exists a minimizer $u_{\lambda ^{\ast }}$ of (\ref{min1}) and a subsequence, still denoted by $(u_{\lambda _{n}})$, such that $u_{\lambda _{n}}\rightarrow u_{\lambda ^{\ast }}$ strongly in $H_{0}^{1}$ as $\lambda _{n}\rightarrow \lambda ^{\ast }$. This implies that $u_{\lambda ^{\ast }}$ is a non-negative solution of $P(\alpha ,\beta ,\lambda ^{\ast })$ and $P_{\lambda ^{\ast }}(u_{\lambda ^{\ast }})\leq 0$. Furthermore, since $u_{\lambda ^{\ast }}$ is a minimizer of (\ref{min1}), it is a ground state of $P(\alpha ,\beta ,\lambda ^{\ast })$.
\noindent Let us show that $\Lambda _{1P}<\lambda ^{\ast }$. To obtain a contradiction suppose, that $\Lambda _{1P}=\lambda ^{\ast }$. Then $\Lambda _{1P}=\lambda _{1P}(u_{\lambda ^{\ast }})$ and $u_{\lambda ^{\ast }}$ is a minimizer of \eqref{PPoh}. Since $\lambda _{1P}(u)=c^{\alpha ,\beta }\lambda _{0}(u)$, where $c^{\alpha ,\beta }=c_{1P}^{\alpha ,\beta }/c_{0}^{\alpha ,\beta }$, $u_{\lambda ^{\ast }}$ is also a critical point of $\lambda _{0}(u)$ with value $\Lambda _{0}$. Then by Proposition \ref{CRRay}, $ u_{\lambda ^{\ast }}$ satisfies $P(\alpha ,\beta ,\Lambda _{0})$. However, by the construction $u_{\lambda ^{\ast }}$ satisfies $P(\alpha ,\beta ,\lambda ^{\ast })$. Notice that by Corollary \ref{proL}, $\Lambda _{0}>\Lambda _{1P}=\lambda ^{\ast }$. Thus we get a contradiction.
\noindent Observe that $P_{\lambda ^{\ast }}(u_{\lambda ^{\ast }})=0$. Indeed, if $P_{\lambda ^{\ast }}(u_{\lambda ^{\ast }})<0$, then $\lambda ^{\ast }\in Z$. But this is impossible since $Z$ is an open subset of $ (\Lambda _{1P},+\infty )$.
\noindent A global (up to the boundary) regularity result (see \cite{Lieberman}) yields that $u_{\lambda}\in C^{1,\gamma }(\overline{\Omega })$, $\lambda \in \lbrack \lambda ^{\ast },+\infty )$, for some $\gamma \in (0,1)$. Thus we may apply Corollary \ref{cor1}, which yields that $u_{\lambda ^{\ast }}$ is flat or compactly supported in $\Omega .
\mbox{$\quad{}_{\Box}$}$
\section{On the radially symmetric property}
We need the following result that has been proved in \cite{Kaper1, Kaper2, Serrin-Zou}.
\begin{lem} \label{lem:3} Assume $0<\alpha <\beta<1$. Let $u$ be a non-negative $C^1$ distribution solution of \begin{equation} \label{Eqw} -\Delta u+u^{\alpha}=u^{\beta}~~~\mbox{in}~~\mathbb{R}^N \tag*{$Eq(\alpha ,\beta ,1)$} \end{equation} with connected support. Then the support of $u$ is a ball and $u$ is radially symmetric about its center.
Furthermore, equation $Eq(\alpha ,\beta ,1)$ admits at most one radially symmetric compact support solution. \end{lem}
We denote by $R^{\ast }$ the radius of the supporting ball $B_{R^{\ast }}$ of the unique (up to translation in $\mathbb{R}^{N}$) compact support solution of $Eq(\alpha ,\beta ,1)$, i.e., it is the unique flat solution of $ P(\alpha ,\beta ,1)$ for $\Omega =B_{R^{\ast }}$.
It is easy to see, from Lemma \ref{lem:3}, that if $u^{\ast }$ denotes this compactly supported solution of $Eq(\alpha ,\beta ,1)$, then the rescaled function $u_{\lambda }^{\ast }(x):=\sigma ^{\frac{2}{1-\alpha }}\cdot u^{\ast }(x/\sigma )$ is the unique flat solution of $P(\alpha ,\beta ,\lambda)$ with $\lambda =\sigma ^{-\frac{2(\beta -\alpha )}{1-\alpha }}$ and $\Omega=B_{\sigma R^{\ast }}$.
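Indeed, writing $v(x)=\mu \,u^{\ast }(x/\sigma )$ with $\mu ,\sigma >0$ and using $-\Delta u^{\ast }+(u^{\ast })^{\alpha }=(u^{\ast })^{\beta }$, one obtains \begin{equation*} -\Delta v(x)+v(x)^{\alpha }-\lambda v(x)^{\beta }=\left( \mu ^{\alpha }-\mu \sigma ^{-2}\right) \left[ u^{\ast }(x/\sigma )\right] ^{\alpha }+\left( \mu \sigma ^{-2}-\lambda \mu ^{\beta }\right) \left[ u^{\ast }(x/\sigma )\right] ^{\beta }, \end{equation*} which vanishes identically when $\mu ^{\alpha }=\mu \sigma ^{-2}$ and $\lambda \mu ^{\beta }=\mu \sigma ^{-2}$, that is, when $\mu =\sigma ^{\frac{2}{1-\alpha }}$ and $\lambda =\mu ^{1-\beta }\sigma ^{-2}=\sigma ^{-\frac{2(\beta -\alpha )}{1-\alpha }}$; moreover, supp$(v)=\overline{B}_{\sigma R^{\ast }}$.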
\begin{claim} Assume $u_{\lambda }\in C^{1}(\overline{\Omega })$ is a non-negative ground state of $P(\alpha ,\beta ,\lambda )$ which has compact support in $\Omega $. Then $u_{\lambda }$ is radially symmetric about some origin $0\in \Omega $, and supp$(u_{\lambda })$=$\overline{B_{R(\Omega )}}$ is an inscribed ball in $\overline{\Omega }$. \end{claim}
\noindent {\em Proof}\quad Observe that any compact support function $ u_{\lambda }$ from $C^{1}(\overline{\Omega })$ can be extended to $\mathbb{R} ^{N}$ as \begin{equation} \left\{ \begin{array}{ll} \tilde{u}_{\lambda }=u_{\lambda } & \mbox{in}~\Omega , \\ \tilde{u}_{\lambda }=0 & \mbox{in}~\mathbb{R}^{N}\setminus \Omega . \end{array} \right. \label{expan} \end{equation} Then $\tilde{u}_{\lambda }\in C^{1}(\mathbb{R}^{N})$ is a distribution solution of $P(\alpha ,\beta ,\lambda )$ on $\mathbb{R}^{N}$. Since $ u_{\lambda }$ is a ground state, it is not hard to show that $u_{\lambda }$ has a connected support. Thus by Lemma \ref{lem:3}, $\tilde{u}_{\lambda }$ is a radially symmetric function with respect to the centre of some ball $ B_{R^{\lambda }}$ with a radius $R^{\lambda }>0,$ so that supp($u_{\lambda })=\overline{B}_{R^{\lambda }}$.
\noindent Let us show that $B_{R^{\lambda }}$ is an inscribed ball in $ \Omega $. Consider $B_{\sigma R^{\lambda }}:=\{x\in \mathbb{R}^{N}:~x/\sigma \in B_{R^{\lambda }}\}$ where $\sigma >0$. Notice that $B_{\sigma R^{\lambda }}\subset \Omega $ if $\sigma \leq 1$. Suppose, contrary to our claim, that there is $\sigma _{0}>1$ such that $B_{\sigma R^{\lambda }}\subset \Omega $ for any $\sigma \in (1,\sigma _{0})$. Let $\sigma \in (1,\sigma _{0})$. Introduce $u_{\lambda }^{\sigma }(x)=u_{\lambda }(x/\sigma ),~x\in B_{\sigma R^{\lambda }}$ and set $u_{\lambda }^{\sigma }(x)=0$ in $\Omega \setminus B_{\sigma R^{\lambda }}$. Observe that \begin{equation*} E_{\lambda }(u_{\lambda }^{\sigma })=\frac{\sigma ^{N-2}}{2}\int_{\Omega
}|\nabla u_{\lambda }|^{2}\,\mathrm{d}x-\sigma ^{N}(\frac{\lambda }{{\beta +1
}}\int_{\Omega }|u_{\lambda }|^{{\beta +1}}\,\mathrm{d}x-\frac{1}{{\alpha +1}
}\int_{\Omega }|u_{\lambda }|^{{\alpha +1}}\,\mathrm{d}x). \end{equation*}
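Here, the identity follows from the change of variables $y=x/\sigma $, under which $\int_{\Omega }|\nabla u_{\lambda }^{\sigma }|^{2}\,\mathrm{d}x=\sigma ^{N-2}\int_{\Omega }|\nabla u_{\lambda }|^{2}\,\mathrm{d}x$ and $\int_{\Omega }|u_{\lambda }^{\sigma }|^{p}\,\mathrm{d}x=\sigma ^{N}\int_{\Omega }|u_{\lambda }|^{p}\,\mathrm{d}x$ for $p=\alpha +1,\beta +1$ (recall that $u_{\lambda }^{\sigma }$ vanishes outside $B_{\sigma R^{\lambda }}$).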
From the displayed identity, $dE_{\lambda }(u_{\lambda }^{\sigma })/d\sigma |_{\sigma =1}=N\,P_{\lambda }(u_{\lambda })=0$, and thus $\sigma =1$ is a maximizing point of the function $\psi _{u}(\sigma ):=E_{\lambda }(u_{\lambda }^{\sigma })$. Then $E_{\lambda }(u_{\lambda }^{\sigma })<E_{\lambda }(u_{\lambda })=\hat{E}_{\lambda }$ and $P_{\lambda }(u_{\lambda }^{\sigma })<0$ for $\sigma \in (1,\sigma _{0})$. From this it follows that for $\sigma $ sufficiently close to $1$ we have $E_{\lambda }(t_{\min }(u_{\lambda }^{\sigma })u_{\lambda }^{\sigma })<E_{\lambda }(u_{\lambda })=\hat{E}_{\lambda }$ and $P_{\lambda }(t_{\min }(u_{\lambda }^{\sigma })u_{\lambda }^{\sigma })<0$, $E_{\lambda }^{\prime }(t_{\min }(u_{\lambda }^{\sigma })u_{\lambda }^{\sigma })=0$, which is a contradiction. $
\mbox{$\quad{}_{\Box}$}$
\noindent From this and Lemma \ref{Pr2} we have
\begin{cor} \label{corFin1} $u_{\lambda ^{\ast }}$ is radially symmetric about some point of $\Omega $, and supp$(u_{\lambda ^{\ast }})$=$\overline{B}_{R(\Omega )}$ is an inscribed ball in $\Omega $. \end{cor}
Furthermore, we have
\begin{cor} \label{corFin2} For any $\lambda > \lambda^*$, problem $P(\alpha ,\beta ,\lambda)$ has no non-negative ground state with compact support. \end{cor}
\noindent {\em Proof}\quad Suppose, on the contrary, that there exist ${\lambda_a} >\lambda^*$ and a ground state $u_{\lambda_a}$ of $P(\alpha ,\beta ,\lambda_a)$ such that $u_{\lambda_a}$ has compact support. Then, arguing as above, one may infer that $u_{{\lambda_a}}$ is a radially symmetric function with respect to the centre of an inscribed ball $B_{R(\Omega)}$ in $\Omega$, so that supp($u_{\lambda_a})=\overline{B}_{R(\Omega)}$. Consider $u_{\lambda^*}^\sigma(x)=\sigma^{\frac{2}{1-\alpha}}u_{\lambda^*}(x/\sigma)$ with $\sigma=(\lambda^*/{\lambda_a})^{(1-\alpha)/2(\beta-\alpha)}$. Then $u_{\lambda^*}^\sigma$ is a compactly supported non-negative weak solution of $P(\alpha ,\beta ,\lambda_a)$. By the uniqueness of the radial compact support solution of $P(\alpha ,\beta ,\lambda_a)$ (see Lemma \ref{lem:3}) this is possible only if $u_{\lambda^*}^\sigma=u_{\lambda_a}$. However supp($u_{\lambda^*}^\sigma)=\overline{B}_{\sigma R(\Omega)}$ whereas supp($u_{\lambda_a})=\overline{B}_{R(\Omega)}$ and $\sigma<1$. Thus we get a contradiction.$
\mbox{$\quad{}_{\Box}$}$
\begin{cor} \label{corFin3} $Z=(\lambda^*, +\infty)$. \end{cor}
\noindent {\em Proof}\quad Suppose, contrary to our claim, that there is an another limit point $\lambda _{b}$ of $Z$ such that $\lambda _{b}\in (\lambda ^{\ast },+\infty )\setminus Z$. Then arguing similarly to the proof of Lemma \ref{Pr2} one may conclude that there exists a compactly supported non-negative ground state $u_{\lambda _{b}}$ of $P(\alpha ,\beta ,\lambda _{b})$. However $\lambda _{b}>\lambda ^{\ast }$ and therefore by Corollary \ref{corFin2} this is impossible.$
\mbox{$\quad{}_{\Box}$}$
\section{Proofs of Theorems}
\subsection{Proof of Theorem \protect\ref{Th1}}
For $\lambda=\lambda^*$, the existence of non-negative ground state $ u_{\lambda^*}$ of $P(\alpha ,\beta ,\lambda^*)$ follows from Lemma \ref{Pr2} . Since $Z=(\lambda^*, +\infty)$, we see that for $\lambda>\lambda^*$, any minimizer $u_\lambda $ of (\ref{min1}) satisfies $P_\lambda(u_\lambda)< 0 $. From this by Proposition \ref{Lag} we derive that $u_\lambda $ is a weak solution of $P(\alpha ,\beta ,\lambda)$. Moreover, since $\hat{E}_\lambda={E} _\lambda(u_\lambda)$, $u_\lambda$ is a ground state of $P(\alpha ,\beta ,\lambda )$ for all $\lambda \in (\lambda^*, +\infty)$. By the same arguments as in the proof of Lemma \ref{Pr2} we may assume that $ u_{\lambda}\geq 0$ in $\Omega$ for all $\lambda> \lambda^*$. In view of Lemma \ref{pro} we have $E^{\prime \prime }_\lambda(u_{\lambda})>0$, and by global (up to the boundary) regularity result for elliptic equations we have $u_\lambda \in C^{1,\gamma}(\overline{\Omega})$ for some $\gamma\in (0,1)$.
Let us prove that for $\lambda < \lambda^*$, problem $P(\alpha ,\beta ,\lambda)$ has no weak solution $u \in H^1_0(\Omega)$. Observe that any weak solution of $P(\alpha ,\beta ,\lambda)$ (if it exists) by global (up to the boundary) regularity result for elliptic equations belongs to $C^1(\overline{ \Omega})$. Notice that by Corollary \ref{nonexist} for any $ \lambda<\Lambda_{1P}$ equation $P(\alpha ,\beta ,\lambda)$ has no weak solution $u \in C^1(\overline{\Omega}) $. Thus since by Lemma \ref{Pr2}, $ \Lambda_{1P}<\lambda^*$ it remains to prove nonexistence of weak solutions in the case $\lambda \in [\Lambda_{1P}, \lambda^*)$.
Let $\lambda \in [\Lambda_{1P}, \lambda^*)$. Suppose, contrary to our claim, that there exists a weak solution $u_\lambda \in C^1(\overline{\Omega})$ of $ P(\alpha ,\beta ,\lambda)$. Then $E^{\prime}(u_\lambda)=0$ and by Corollary \ref{cor1} we have $P_\lambda(u_\lambda)\leq 0$. Hence $u_\lambda \in M_\lambda$.
Let us show that then there exists a ground state of $P(\alpha ,\beta ,\lambda )$ which belongs to $C^{1}(\overline{\Omega })$. Notice that if $ u_{\lambda }$ is a unique solution of $P(\alpha ,\beta ,\lambda )$ then it is a ground state. Assume there exists a set of such solutions $\tilde{M} _{\lambda }$ of $P(\alpha ,\beta ,\lambda )$. Notice that $\tilde{M} _{\lambda }\subset M_{\lambda }$. Consider \begin{equation} \tilde{E}_{\lambda }:=\min_{u\in \tilde{M}_{\lambda }}E_{\lambda }(u). \label{min1Til} \end{equation}
Let $(u_{m})$ be a minimizing sequence of (\ref{min1Til}), i.e., \begin{equation} E_{\lambda }(u_{m})\rightarrow \tilde{E}_{\lambda }~~\mbox{as}~~m\rightarrow \infty ~~\mbox{and}~u_{m}\in \tilde{M}_{\lambda },~m=1,2,... \label{maxsTil} \end{equation} Using the same arguments as in the proof of Lemma \ref{le1e} we may conclude that there exists a nonzero limit point $\tilde{u}_{0}$ such that (up to a subsequence) $u_{m}\rightharpoonup \tilde{u}_{0}$ weakly in $H_{0}^{1}$ and $u_{m}\rightarrow \tilde{u}_{0}$ strongly in $L_{q}$ for $1<q<2^{\ast }$. Then we have \begin{equation} E_{\lambda }(\tilde{u}_{0})\leq \tilde{E}_{\lambda } \label{lower} \end{equation} and \begin{equation*} 0=D_{u}E_{\lambda }(u_{m})(\psi )\rightarrow D_{u}E_{\lambda }(\tilde{u}_{0})(\psi )~~~\forall \psi \in C_{0}^{\infty }(\Omega ). \end{equation*} Thus $\tilde{u}_{0}$ is a nonzero weak solution of $P(\alpha ,\beta ,\lambda )$. Moreover, by the global (up to the boundary) regularity result for elliptic equations we have $\tilde{u}_{0}\in C^{1,\gamma }(\overline{\Omega })$ for some $\gamma \in (0,1)$. Thus $\tilde{u}_{0}\in \tilde{M}_{\lambda }$ and by \eqref{lower} we conclude that $E_{\lambda }(\tilde{u}_{0})=\tilde{E}_{\lambda }$. This implies that $\tilde{u}_{0}$ is a ground state of $P(\alpha ,\beta ,\lambda )$ belonging to $C^{1}(\overline{\Omega })$.
Thus we have proved that there exists a ground state $u_{\lambda }$ of $ P(\alpha ,\beta ,\lambda )$ which belongs to $C^{1}(\overline{\Omega })$. Then there are two possibilities $P_{\lambda }(u_{\lambda })<0$ or $ P_{\lambda }(u_{\lambda })=0$. In the first case, we get that $\lambda \in Z$ . But in view of Corollary \ref{corFin3} this is a contradiction. In the second case, Corollary \ref{cor1} implies that $u_{\lambda }$ has a compact support in $\Omega $. However the same arguments as in the proof of Corollary \ref{corFin2} show that for $\lambda \neq \lambda ^{\ast }$ this is impossible.
This concludes the proof of Theorem \ref{Th1}.
\subsection{Proof of Theorem \protect\ref{Th2}}
The existence of a non-negative ground state $u_{\lambda ^{\ast }}$ with compact support follows from Lemma \ref{Pr2}. By Corollary \ref{corFin1}, $u_{\lambda ^{\ast }}$ is radially symmetric about some point of $\Omega $, and supp$(u_{\lambda ^{\ast }})$=$\overline{B}_{R(\Omega)}$ is an inscribed ball in $\Omega $.
In view of Corollary \ref{corFin2}, for all $\lambda > \lambda^*$, any ground state $u_\lambda$ of $P(\alpha ,\beta ,\lambda)$ is a usual solution.
\subsection{Proof of Theorem \protect\ref{ThmCor1}}
We shall only prove the theorem, as an example, for the case $m=2$, i.e., when $\Omega $ is a domain of Strictly Starshaped Class $2$.
Let $\lambda ^{\ast }>0$ be the limit value obtained in Theorem \ref{Th1}. By Lemma \ref{Pr2} there exists a compactly supported ground state $u_{\lambda ^{\ast }}^{1}$ of $P(\alpha ,\beta ,\lambda ^{\ast })$ and there exists a set of usual non-negative ground states $(u_{\lambda _{n}}^{1})_{n=1}^{\infty }$, $\lambda_n>\lambda^{\ast }$, $n=1,2,...$, such that $u_{\lambda_n}^{1}\rightarrow u_{\lambda ^{\ast }}^{1}$ strongly in $H_{0}^{1}$ as $n\rightarrow \infty $. By Corollary \ref{corFin1}, $u_{\lambda^*}^{1}$ is radially symmetric about some origin $0\in \Omega$, and supp$(u_{\lambda^*}^{1})$=$\overline{B}_{R(\Omega)}$ is an inscribed ball in $\Omega$. By the assumptions, $\Omega$ contains exactly $2$ inscribed balls of radius $R(\Omega)$.
Set $u_{\lambda ^{\ast }}^{2}(x):=u_{\lambda ^{\ast }}^{1}(R_{H}x)$, $ u_{\lambda _{n}}^{2}(x):=u_{\lambda _{n}}^{1}(R_{H}x)$, $x\in \Omega $, $ n=1,2,...$, where $R_{H}:\mathbb{R}^{N}\rightarrow \mathbb{R}^{N}$ is the reflection map. By Theorem \ref{Th1}, the support of $u_{\lambda ^{\ast }}^{1}$ coincides with one of the balls $B^{1}$ or $B^{2}$. Assume supp($ u_{\lambda ^{\ast }}^{1})=B^{1}$. Then since $R_{H}B_{1}=B_{2}$ for some hyperplane $H$, we have supp($u_{\lambda ^{\ast }}^{2})=B^{2}$ and thus $ u_{\lambda ^{\ast }}^{2}\neq u_{\lambda ^{\ast }}^{1}$. Since $u_{\lambda _{n}}^{2}\rightarrow u_{\lambda ^{\ast }}^{2}$ strongly in $H_{0}^{1}$ as $ n\rightarrow \infty $, it follows that $u_{\lambda _{n}}^{1}\neq u_{\lambda _{n}}^{2}$ for sufficiently large $n$.
\section{On the free boundary for the parabolic problem}
We consider now the associate parabolic problem
\begin{equation} PP(\alpha ,\beta ,\lambda ,v_{0})\quad \left\{ \begin{array}{ll}
v_{t}-\Delta v+|v|^{\alpha -1}v=\lambda |v|^{\beta -1}v & \text{in } (0,+\infty )\times \Omega \\ v=0 & \text{on }(0,+\infty )\times \partial \Omega \\ v(0,x)=v_{0}(x) & \text{on }\Omega . \end{array} \right. \end{equation} For the basic theory for this problem, always under the structural assumption $0<\alpha <\beta <1$, we send the reader to \cite{CDE} and \cite{DIH}. In particular, we know that for any $v_{0}\in \mathrm{L}^{\infty }(\Omega ),$ $ v_{0}\geq 0$ there exists a nonnegative weak solution $v\in \mathcal{C} ([0,+\infty ),\mathrm{L}^{2}(\Omega ))\cap $ $L^{\infty }((0,+\infty )\times \Omega )$ of $PP(\alpha ,\beta ,\lambda ,v_{0})$. This solution is unique if $v_{0}$ is non-degenerate near its free boundary.
Our main goal in this Section is to give an idea of the time evolution of the support of the solution. We recall that, as $t\rightarrow +\infty $, the support of $v(t,.)$ must converge to a ball of $\mathbb{R}^{N}$, in the case $\lambda =\lambda ^{\ast }$, or to the whole domain $\overline{\Omega }$, if $\lambda >\lambda ^{\ast }$ (since the shape of the support of the associated stationary solutions was given in Theorem 1.1).
Our first result concerns the special case of $v_{0}=u_{\lambda ^{\ast }}$ (i.e. with support in a ball of $\mathbb{R}^{N}$ of radius $R(\Omega )$) and $\lambda >\lambda ^{\ast }$. It is clear that any stationary solution $u_{\lambda ^{\ast }}$ is a subsolution to the problem $PP(\alpha ,\beta ,\lambda ,v_{0})$. Indeed, \begin{equation*}
(u_{\lambda ^{\ast }})_{t}-\Delta u_{\lambda ^{\ast }}+|u_{\lambda ^{\ast
}}|^{\alpha -1}u_{\lambda ^{\ast }}=\lambda ^{\ast }|u_{\lambda ^{\ast
}}|^{\beta -1}u_{\lambda ^{\ast }}<\lambda |u_{\lambda ^{\ast }}|^{\beta -1}u_{\lambda ^{\ast }}. \end{equation*} So, if $u_{\lambda ^{\ast }}$ is nondegenerate near its free boundary, we get that $u_{\lambda ^{\ast }}(x)\leq v(t,x)$ for any $t>0$ and a.e. $x\in \Omega .$ As a matter of fact, it is easy to prove that under these assumptions $v_{t}\geq 0$ a.e. in $(0,+\infty )\times \Omega $. Thus, a priori, the support of the solution $v(t,.)$ contains the support of $u_{\lambda ^{\ast }}$ for any $t>0$. The following result gives some indication of how slowly the support of $v(t,.)$ should increase with time. We shall apply the general local energy methods for the study of free boundary problems (see, e.g., \cite{ADS}). Notice that for our goal we only need to get some information on $v(t,.)$ on the level sets where this function is small enough. So, given $\theta >0$ and $t\geq 0$ we introduce the notation \begin{equation*} \Omega _{v,\theta }(t):=\{x\in \Omega :v(t,x)\leq \theta \}. \end{equation*}
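On these level sets the key differential inequality used below is immediate: since $0\leq v\leq \theta $ on $\Omega _{v,\theta }(t)$ and $\beta >\alpha $, we have $\lambda v^{\beta }=\lambda v^{\beta -\alpha }v^{\alpha }\leq \lambda \theta ^{\beta -\alpha }v^{\alpha }$, and hence the equation in \eqref{p1} yields \begin{equation*} v_{t}-\Delta v+(1-\lambda \theta ^{\beta -\alpha })v^{\alpha }\leq 0\qquad \text{on }\{t\}\times \Omega _{v,\theta }(t), \end{equation*} with $1-\lambda \theta ^{\beta -\alpha }>0$ under the assumption $\theta ^{\beta -\alpha }<1/\lambda $.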
\begin{thm} \label{free boundary1} Assume $\lambda >\lambda ^{\ast }$, $v_{0}=u_{\lambda ^{\ast }}$ and let $\theta >0$ such that $\theta ^{\beta -\alpha }<1/\lambda $. Let $x_{0}\in \mathbb{R}^{N}\setminus $ supp($v_{0})$ such that $B_{\rho _{0}}(x_{0})\subset \mathbb{R}^{N}\setminus $supp($v_{0})$ for some $\rho _{0}>0$. Then there exists $\widehat{t}>0$ and a continuous decreasing function $\rho :[0, \widehat{t}]\rightarrow \lbrack 0,\rho _{0}]$ such that $\rho (0)=\rho _{0}$ , $\rho (\widehat{t})=0$ and $B_{\rho (t)}(x_{0})\subset \mathbb{R} ^{N}\setminus $supp($v(t,.)\cap \Omega _{v,\theta }(t))$ for any $t\in \lbrack 0,\widehat{t}]$. In particular, $v(t,x)=0$ a.e. $x\in B_{\rho (t)}(x_{0})$ for any $t\in \lbrack 0,\widehat{t}].$ \end{thm}
\noindent {\em Proof}\quad It is enough to apply Theorem 2.2 of \cite{ADS} to the special case of $\psi (u)=u$ and \begin{equation*}
A(x,t,u,Du)=Du,~~ B(x,t,u,Du)=0,~~C(x,t,u,Du)=(1-\lambda \theta ^{\beta -\alpha })|u|^{\alpha -1}u, \end{equation*} since we know that \begin{equation*}
v_{t}-\Delta v+(1-\lambda \theta ^{\beta -\alpha })|v|^{\alpha -1}v\leq 0 \text{ on }\cup _{t>0}\{t\}\times \Omega _{v,\theta }(t), \end{equation*} and all the assumptions of Theorem 2.2 of \cite{ADS} hold.$
\mbox{$\quad{}_{\Box}$}$
When $\lambda =\lambda ^{\ast }$ we can also give an idea of how the support of $v(t,.)$ corresponding to a strictly positive initial datum decreases, after a finite time large enough (recall that in that case the support of $v(t,.)$ must decrease from $\overline{\Omega }$ to the closed ball of $\mathbb{R}^{N}$ of radius $R(\Omega )$ contained in $\overline{\Omega }$). In this case, we shall pay attention to the special choice of $v_{0}=u_{\lambda }$ for some $\lambda >\lambda ^{\ast }$. Notice that now $u_{\lambda }$ is a supersolution to $PP(\alpha ,\beta ,\lambda ^{\ast },v_{0})$ since
\begin{equation*}
(u_{\lambda })_{t}-\Delta u_{\lambda }+|u_{\lambda }|^{\alpha -1}u_{\lambda }=\lambda |u_{\lambda }|^{\beta -1}u_{\lambda }>\lambda ^{\ast }|u_{\lambda }|^{\beta -1}u_{\lambda }. \end{equation*} As above, if $u_{\lambda }$ is nondegenerate, we can even prove that $v_{t}\leq 0$ a.e. in $(0,+\infty )\times \Omega $. Concerning the formation of the free boundary we have:
\begin{thm} \label{free boundary2} Assume $\lambda =\lambda ^{\ast }$, $v_{0}=u_{\lambda }$ for some $\lambda >\lambda ^{\ast }$, and let $\theta >0$ be such that $\theta ^{\beta -\alpha }<1/\lambda ^{\ast }$. Then, for any time $T>0$ large enough, there exist a finite time $t^{\#}>0$ and a continuous increasing function $\rho :[t^{\#},T]\rightarrow \lbrack 0,+\infty )$ such that $\rho (t^{\#})=0$, and $B_{\rho (t)}(x_{0})\subset \mathbb{R}^{N}\setminus $supp($v(t,.)\cap \Omega _{v,\theta }(t))$ for any $t\in \lbrack t^{\#},T]$. In particular, $v(t,x)=0$ a.e. $x\in B_{\rho (t)}(x_{0})$ for any $t\in \lbrack t^{\#},T].$ \end{thm}
\noindent {\em Proof}\quad This time it is enough to apply Theorem 4.2 of \cite{ADS} to the special case of $\psi (u)=u$, $A(x,t,u,Du)=Du$, $ B(x,t,u,Du)=0$ and \begin{equation*}
C(x,t,u,Du)=(1-\lambda ^{\ast }\theta ^{\beta -\alpha })|u|^{\alpha -1}u. \end{equation*} Indeed, as above we know that \begin{equation*}
v_{t}-\Delta v+(1-\lambda ^{\ast }\theta ^{\beta -\alpha })|v|^{\alpha -1}v\leq 0\text{ on }\cup _{t\in (0,T)}\{t\}\times \Omega _{v,\theta }(t). \end{equation*} and all the assumptions of Theorem 4.2 of \cite{ADS} hold.$
\mbox{$\quad{}_{\Box}$}$
\section{Appendix}
\begin{lem} \label{app} Assume $\lambda \in [\Lambda_{1P}, +\infty)$ and $u_{\lambda_m}$ is a sequence of solutions of (\ref{min1}), where $\lambda_m \to \lambda$ as $m\to +\infty$. Then there exist a minimizer $u_\lambda$ of (\ref{min1}) and a subsequence, still denoted by $(u_{\lambda_m})$, such that $u_{\lambda_m} \to u_\lambda$ strongly in $H_{0}^{1}$ as $m \to +\infty$. \end{lem}
\noindent {\em Proof}\quad Let $\lambda \in \lbrack \Lambda _{1P},+\infty )$ , $\lambda _{m}\rightarrow \lambda $ as $m\rightarrow +\infty $ and $ u_{\lambda _{m}}$ be a sequence of solutions of (\ref{min1}). As in the proof of Lemma \ref{le1e} it is derived that the set $(u_{\lambda _{m}})$ is bounded in $H_{0}^{1}$. Hence by the Sobolev embedding theorem there exists a subsequence, still denoted by $(u_{\lambda _{m}})$, such that \begin{equation} u_{\lambda _{m}}\rightharpoondown \bar{u}_{\lambda }~~\mbox{weakly in} ~~H_{0}^{1},~~~u_{\lambda _{m}}\rightarrow \bar{u}_{\lambda }~~ \mbox{strongly in}~~L_{q}(\Omega ), \label{convAp} \end{equation} where $0<q<2^{\ast }$, for some limit point $\bar{u}_{\lambda }$. As in the proof of Lemma \ref{le1e} one derives that $\bar{u}_{\lambda }\neq 0$ and \begin{equation}\label{C1} E_{\lambda }(\bar{u}_{\lambda })\leq \liminf_{m\rightarrow \infty }E_{\lambda _{m}}(u_{\lambda _{m}}),~~E_{\lambda }^{\prime }(\bar{u} _{\lambda })\leq 0,~~P_{\lambda }(\bar{u}_{\lambda })\leq 0. \end{equation} Let $\lambda >\Lambda _{1P}$. By Lemma \ref{le1e} there exists a minimizer $ u_{\lambda }$ of (\ref{min1}), i.e. $u_{\lambda }\in M_{\lambda }$ and $\hat{ E}_{\lambda }=E_{\lambda }(u_{\lambda })$. Then \begin{equation*}
|E_{\lambda }(u_{\lambda })-E_{\lambda _{m}}(u_{\lambda })|<C|\lambda
-\lambda _{m}|, \end{equation*} where $C<+\infty $ does not depend on $m$. Furthermore, \begin{equation*} E_{\lambda _{m}}(u_{\lambda })\geq E_{\lambda _{m}}(t_{\min }(u_{\lambda })u_{\lambda })\geq E_{\lambda _{m}}(u_{\lambda _{m}}) \end{equation*} provided that $m$ is a sufficiently large number. Thus we have \begin{equation*}
E_{\lambda }(u_{\lambda })+C|\lambda -\lambda _{m}|>E_{\lambda _{m}}(u_{\lambda })\geq E_{\lambda _{m}}(u_{\lambda _{m}}), \end{equation*} and therefore $\hat{E}_{\lambda }:=E_{\lambda }(u_{\lambda })\geq \liminf_{m\rightarrow \infty }E_{\lambda _{m}}(u_{\lambda _{m}})$. Hence by \eqref{C1} we have \begin{equation*} E_{\lambda }(\bar{u}_{\lambda })\leq \hat{E}_{\lambda }. \end{equation*} Assume $E_{\lambda }^{\prime }(\bar{u}_{\lambda })<0$. Then $E_{\lambda }^{\prime }(t_{\min }(\bar{u}_{\lambda })\bar{u}_{\lambda })=0$ and $ E_{\lambda }(t_{\min }(\bar{u}_{\lambda })\bar{u}_{\lambda })<E_{\lambda }( \bar{u}_{\lambda })\leq \hat{E}_{\lambda }$. In virtue of Proposition \ref {pradd}, this implies that $P_{\lambda }(t_{\min }(\bar{u}_{\lambda })\bar{u} _{\lambda })<0$. Thus $t_{\min }(\bar{u}_{\lambda })\bar{u}_{\lambda }\in M_{\lambda }$ and since $E_{\lambda }(t_{\min }(\bar{u}_{\lambda })\bar{u} _{\lambda })<\hat{E}_{\lambda }$ we get a contradiction. Hence $E_{\lambda }( \bar{u}_{\lambda })=\hat{E}_{\lambda }$, $E_{\lambda }^{\prime }(\bar{u} _{\lambda })=0$ and $u_{\lambda _{m}}\rightarrow \bar{u}_{\lambda }$ strongly in $H_{0}^{1}$ as $m\rightarrow +\infty $ .
Assume now that $\lambda =\Lambda _{1P}$. Since $E_{\lambda _{m}}^{\prime }(u_{\lambda _{m}})=0$, $P_{\lambda _{m}}(u_{\lambda _{m}})\leq 0$, we have $r_{u_{\lambda _{m}}}^{P}(1)\leq \lambda _{m}=r_{u_{\lambda _{m}}}^{1}(1)$. Then by Proposition \ref{propEst} (see Figure 5), $1\in \lbrack t_{1P}(u_{\lambda _{m}}),+\infty )$ and therefore \begin{equation*} \lambda _{1P}(u_{\lambda _{m}})=r_{u_{\lambda _{m}}}^{1}(t_{1P}(u_{\lambda _{m}}))\leq r_{u_{\lambda _{m}}}^{1}(1)=\lambda _{m},~~m=1,2,\ldots. \end{equation*} Hence, since $\lambda _{m}\downarrow \lambda $, we have $\lambda _{1P}(u_{\lambda _{m}})\downarrow \Lambda _{1P}$ as $m\rightarrow \infty $. Thus, $(u_{\lambda _{m}})$ is a minimizing sequence of \eqref{PPoh} and therefore, by \eqref{convAp}, $\lambda _{1P}(\bar{u}_{\lambda })\leq \Lambda _{1P}$. Since the strict inequality $\lambda _{1P}(\bar{u}_{\lambda })<\Lambda _{1P}$ is impossible, we conclude that $\lambda _{1P}(\bar{u}_{\lambda })=\Lambda _{1P}$, which yields that $u_{\lambda _{m}}\rightarrow \bar{u}_{\lambda }$ strongly in $H_{0}^{1}$.
\mbox{$\quad{}_{\Box}$}
\textbf{Acknowledgments}
The research of J.I. D\'{\i}az and J. Hern\'{a}ndez was partially supported by the projects ref. MTM 2014-57113-P and MTM2017-85449-P of the DGISPI (Spain).
\flushright{\small \begin{tabular}{llll} J.I. D\'{\i}az & J.~Hern\'{a}ndez & Y.Sh.~Ilyasov \\ Instituto de Matem\'{a}tica Interdisciplinar & Instituto de Matem\'{a}tica Interdisciplinar & Institute of Mathematics of UFRC RAS \\ Universidad Complutense de Madrid & Universidad Complutense de Madrid & Chernyshevsky str., 450008, Ufa, Russia\\ 28040 Madrid, Spain & 28040 Madrid, Spain & Instituto de Matem\'atica e Estat\'istica,\\ & & Universidade Federal de Goi\'as,\\ & & 74001-970, Goiania, Brazil\\ {\tt [email protected]} & {\tt [email protected]} & {\tt [email protected]} \end{tabular} }
\end{document}
\begin{document}
\title{One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities}
\begin{abstract} The softmax representation of probabilities for categorical variables plays a prominent role in modern machine learning with numerous applications in areas such as large scale classification, neural language modeling and recommendation systems. However, softmax estimation is very expensive for large scale inference because of the high cost associated with computing the normalizing constant. Here, we introduce an efficient approximation to softmax probabilities which takes the form of a rigorous lower bound on the exact probability. This bound is expressed as a product over pairwise probabilities and it leads to scalable estimation based on stochastic optimization. It allows us to perform doubly stochastic estimation by subsampling both training instances and class labels. We show that the new bound has interesting theoretical properties and we demonstrate its use in classification problems. \end{abstract}
\section{Introduction}
Based on the softmax representation, the probability that a variable $y$ takes the value $k \in \{1,\ldots,K\}$, where $K$ is the number of categorical symbols or classes, is modeled by \begin{equation}
p(y=k|\mathbf{x}) = \frac{e^{f_k(\mathbf{x}; \mathbf{w})}} {\sum_{m=1}^K e^{f_m(\mathbf{x}; \mathbf{w}) }}, \label{eq:softmaxGen} \end{equation} where each $f_k(\mathbf{x}; \mathbf{w})$ is often referred to as {\em the score function}; it is a real-valued function indexed by an input vector $\mathbf{x}$ and parameterized by $\mathbf{w}$. The score function measures the compatibility of input $\mathbf{x}$ with symbol $y=k$, so that the higher the score, the more compatible $\mathbf{x}$ is with $y=k$. The most common application of softmax is multiclass classification where $\mathbf{x}$ is an observed input vector and $f_k(\mathbf{x}; \mathbf{w})$ is often chosen to be a linear function or more generally a non-linear function such as a neural network \citep{Bishop:2006, Goodfellow-et-al-2016-Book}. Several other applications of softmax arise, for instance, in neural language modeling for learning word vector embeddings \citep{MnihTeh2012, mikolov2013, pennington-etal-2014}
and also
in collaborative filtering for representing probabilities of $(user,item)$ pairs \citep{PaquetKoenigsteinWinther14}. In such applications the number of symbols $K$ can often be very large, e.g.\ of the order of tens of thousands or millions, which makes the computation of softmax probabilities very expensive due to the large sum in the normalizing constant of Eq.\ \eqref{eq:softmaxGen}. Thus, exact training procedures based on maximum likelihood or Bayesian approaches are computationally prohibitive and approximations are needed.
While some rigorous bound-based approximations to the softmax exist \citep{bouchard_efficient_2007}, they are not sufficiently accurate or scalable, and therefore it would be highly desirable to develop accurate and computationally efficient approximations.
In this paper we introduce a new efficient approximation to softmax probabilities which takes the form of a lower bound on the probability of Eq. \eqref{eq:softmaxGen}. This bound draws an interesting connection between the exact softmax probability and all its one-vs-each pairwise probabilities, and it has several desirable properties. Firstly, for the non-parametric estimation case it leads to an approximation of the likelihood that shares the same global optimum as exact maximum likelihood, and thus estimation based on the approximation is a perfect surrogate for the initial estimation problem. Secondly, the bound allows for scalable learning through stochastic optimization where data subsampling can be combined with subsampling categorical symbols. Thirdly, whenever the initial exact softmax cost function is convex, the bound also remains convex.
Regarding related work, there exist several other methods that try to deal with the high cost of softmax such as methods that attempt to perform the exact computations \citep{gopal13, VijayanarasimhanSMY14}, methods that change the model based on hierarchical or stick-breaking constructions \citep{morin2005hierarchical, KhanMMM12} and sampling-based methods \citep{BengioSenecal-2003, mikolov2013, devlin2014, BlackOut}. Our method is a lower bound based approach that follows the
variational inference framework. Other rigorous variational lower bounds on the softmax have been used before \citep{Bohning92, bouchard_efficient_2007}, however they are not easily scalable since they require optimizing data-specific variational parameters. In contrast, the bound we introduce in this paper does not contain any variational parameter, which greatly facilitates stochastic minibatch training. At the same time it can be much tighter than previous bounds \citep{bouchard_efficient_2007} as we will demonstrate empirically in several classification datasets.
\section{One-vs-each lower bound on the softmax \label{sec:theory}}
Here, we derive the new bound on the softmax (Section \ref{sec:onevsone}) and we prove its optimality property when performing approximate maximum likelihood estimation (Section \ref{sec:optimality}). Such a property holds for the {\em non-parametric case}, where we estimate probabilities of the form $p(y=k)$, without conditioning on some $\mathbf{x}$, so that the score functions $f_k(\mathbf{x}; \mathbf{w})$ reduce to unrestricted parameters $f_k$; see Eq.\ \eqref{eq:softmax1} below. Finally, we also analyze the related bound derived by Bouchard \citep{bouchard_efficient_2007} and we compare it with our approach (Section \ref{sec:bouchnonnegsample}).
\subsection{Derivation of the bound \label{sec:onevsone}}
Consider a discrete random variable $y \in \{1,\ldots,K\}$ that takes the value $k$ with probability, \begin{equation} p(y=k) = \text{Softmax}_k(f_1,\ldots,f_K) = \frac{e^{f_k}} {\sum_{m=1}^K e^{f_m}}, \label{eq:softmax1} \end{equation} where each $f_k$ is a free real-valued scalar parameter. We wish to express a lower bound on $p(y=k)$ and the key step of our derivation is to re-write $p(y = k)$ as \begin{equation} p(y=k) = \frac{1} {1 + \sum_{m \neq k} e^{- (f_k - f_m)}}. \label{eq:softmax2} \end{equation} Then, by exploiting the fact that for any non-negative numbers $\alpha_1$ and $\alpha_2$ it holds $1 + \alpha_1 + \alpha_2 \leq 1 + \alpha_1 + \alpha_2 + \alpha_1 \alpha_2 = (1 + \alpha_1) (1 + \alpha_2)$, and more generally it holds $(1 + \sum_{i} \alpha_i) \leq \prod_{i} (1 + \alpha_i)$ where each $\alpha_i \geq 0 $, we obtain the following lower bound on the above probability, \begin{equation} p(y=k) \geq \prod_{m \neq k} \frac{1} {1 + e^{- (f_k - f_m)}} = \prod_{m \neq k} \frac{e^{f_k}} { e^{f_k} + e^{f_m}} = \prod_{m \neq k} \sigma(f_k - f_m), \label{eq:softmaxbound} \end{equation} where $\sigma(\cdot)$ denotes the sigmoid function. Clearly, the terms in the product are pairwise probabilities, each corresponding to the event $y=k$ conditional on the union of pairs of events, i.e.\ $ y \in \{k,m \}$ where $m$ is one of the remaining values. We will refer to this bound as the one-vs-each bound on the softmax probability, since it involves $K-1$ comparisons of a specific event $y=k$ versus each of the $K-1$ remaining events. Furthermore, the above result can be stated more generally to define bounds on arbitrary probabilities as the following statement shows.
\begin{prop} Assume a probability model with state space $\Omega$ and probability measure $P(\cdot)$. For any event $A \subset \Omega$ and an associated countable set of disjoint events $\{B_i\}$ such that $ \cup_{i} B_i = \Omega \setminus A$, it holds \begin{equation}
P(A) \geq \prod_{i} P(A|A \cup B_i). \label{eq:generalbound} \end{equation} \end{prop} \begin{proof} Given that $P(A) = \frac{P(A)}{P(\Omega)} = \frac{P(A)}{P(A) + \sum_i P(B_i)}$, the result follows by applying the inequality $(1 + \sum_{i} \alpha_i) \leq \prod_{i} (1 + \alpha_i)$ exactly as done above for the softmax parameterization. \end{proof}
{\bf Remark.} If the set $\{B_i\}$ consists of a single event $B$ then by definition $B = \Omega \setminus A$ and the bound is exact since in such case $P(A|A \cup B) = P(A)$.
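As a quick numerical sanity check (our addition, not part of the original derivation), the short Python snippet below evaluates the exact softmax probability of Eq.\ \eqref{eq:softmax1} and the one-vs-each product of Eq.\ \eqref{eq:softmaxbound} for random scores, confirming that the product never exceeds the exact probability and that the two coincide when $K=2$, the single-remaining-event case of the remark above.
\begin{verbatim}
import numpy as np

def softmax(f):
    # exact softmax probabilities (max-subtraction for numerical stability)
    z = np.exp(f - f.max())
    return z / z.sum()

def one_vs_each(f, k):
    # product of pairwise sigmoids sigma(f_k - f_m) over all m != k
    diffs = f[k] - np.delete(f, k)
    return np.prod(1.0 / (1.0 + np.exp(-diffs)))

rng = np.random.default_rng(0)
for K in (2, 5, 50):
    f = rng.normal(size=K)
    p = softmax(f)
    for k in range(K):
        bound = one_vs_each(f, k)
        assert bound <= p[k] + 1e-12          # the lower bound holds
        if K == 2:
            assert abs(bound - p[k]) < 1e-12  # exact for a single remaining event
\end{verbatim}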
Furthermore, based on the above construction we can express a full class of hierarchically ordered bounds. For instance, if we merge two events $B_i$ and $B_j$ into a single one, then
the term $P(A|A \cup B_i) P(A|A \cup B_j)$ in the initial bound is
replaced with $P(A|A \cup B_i \cup B_j )$ and the associated new bound, obtained after this merge, can only become tighter. To see a more specific example in the softmax probabilistic model, assume a small subset of categorical symbols $\mathcal{C}_k$ that does not include $k$, and denote the remaining symbols excluding $k$ as $\mathcal{\bar{C}}_k$ so that $\{k\} \cup \mathcal{C}_k \cup \mathcal{\bar{C}}_k = \{1, \ldots, K\}$. Then, a tighter bound, which sits higher in the hierarchy than the one-vs-each bound (see Eq.\ \ref{eq:softmaxbound}), takes the form, \begin{equation} p(y=k) \geq \text{Softmax}_k(f_k, \mathbf{f}_{\mathcal{C}_k}) \times \text{Softmax}_k(f_k,\mathbf{f}_{\mathcal{\bar{C}}_k}) \geq \text{Softmax}_k(f_k, \mathbf{f}_{\mathcal{C}_k}) \times \prod_{m \in \mathcal{\bar{C}}_k} \sigma(f_k - f_m), \end{equation} where $\text{Softmax}_k(f_k, \mathbf{f}_{\mathcal{C}_k}) = \frac{e^{f_k}}{e^{f_k} + \sum_{m \in \mathcal{C}_k} e^{f_m} }$ and $\text{Softmax}_k(f_k, \mathbf{f}_{\mathcal{\bar{C}}_k}) = \frac{e^{f_k}}{e^{f_k} + \sum_{m \in \mathcal{\bar{C}}_k} e^{f_m} }$.
For simplicity of presentation, in the remainder of the paper we do not discuss these more general bounds further and focus only on the one-vs-each bound.
The computationally useful aspect of the bound in Eq.\ (\ref{eq:softmaxbound}) is that it factorizes into a product, where each factor depends only on a pair of parameters $(f_k,f_m)$. Crucially, this avoids the evaluation of the normalizing constant associated with the global probability in Eq.\ \eqref{eq:softmax1} and, as discussed in Section \ref{sec:classification}, it leads to scalable training using stochastic optimization that can deal with very large $K$. Furthermore, approximate maximum likelihood estimation based on the bound can be very accurate and, as shown in the next section, it is exact for the non-parametric estimation case.
The fact that the one-vs-each bound in \eqref{eq:softmaxbound} is a product of pairwise probabilities suggests that there is a connection with Bradley-Terry (BT) models \citep{bradley1952rank, Huang:2006} for learning individual skills from paired comparisons and the associated multiclass classification systems obtained by combining binary classifiers, such as one-vs-rest and one-vs-one approaches \citep{Huang:2006}. Our method differs from BT models, since we do not combine binary probabilistic models to a posteriori form a multiclass model. Instead, we wish to develop scalable approximate algorithms that can surrogate the training of multiclass softmax-based models by maximizing lower bounds on the exact likelihoods of these models.
\subsection{Optimality of the bound for maximum likelihood estimation \label{sec:optimality}}
Assume a set of observations $(y_1,\ldots,y_N)$, where each $y_i \in \{1,\ldots, K\}$. The log likelihood of the data takes the form, \begin{equation} \mathcal{L}(\mathbf{f}) = \log \prod_{i=1}^N p(y_i) = \log \prod_{k=1}^K p(y=k)^{N_k}, \label{eq:exactlik} \end{equation} where $\mathbf{f} = (f_1,\ldots,f_K)$ and $N_k$ denotes the number of data points with value $k$. By substituting $p(y=k)$ from Eq.\ (\ref{eq:softmax1}) and then taking derivatives with respect to $\mathbf{f}$
we arrive at the standard stationary conditions of the maximum likelihood solution, \begin{equation}
\frac{ e^{f_k}} {\sum_{m=1}^K e^{f_m} } = \frac{N_k}{N}, \ k=1,\ldots,K. \label{eq:analyticPk} \end{equation} These stationary conditions are satisfied for $f_k = \log N_k + c$ where $c \in \Real$ is an arbitrary constant.
What is rather surprising is that the same solutions $f_k = \log N_k + c$ satisfy also the stationary conditions when maximizing a lower bound on the exact log likelihood obtained from the product of one-vs-each probabilities.
More precisely, by replacing $p(y=k)$ with the bound from Eq.\ (\ref{eq:softmaxbound}) we obtain a lower bound on the exact log likelihood, \begin{equation} \mathcal{F}(\mathbf{f}) = \log \prod_{k=1}^K \left[ \prod_{m \neq k} \frac{e^{f_k}} {e^{f_k} + e^{f_m}} \right]^{N_k} = \sum_{k > m} \log P(f_k,f_m), \label{eq:lowerboundlik} \end{equation} where $P(f_k,f_m) = \left[ \frac{e^{f_k}} {e^{f_k} + e^{f_m}} \right]^{N_k} \left[ \frac{e^{f_m}} {e^{f_k} + e^{f_m}} \right]^{N_m}$ is a likelihood involving only the data of the pair of states $(k,m)$, while there exist $K (K-1)/2$ possible such pairs. If instead of maximizing the exact log likelihood from Eq.\ \eqref{eq:exactlik} we maximize the lower bound we obtain the same parameter estimates.
\begin{prop} The maximum likelihood parameter estimates $f_k = \log N_k + c, k=1,\ldots,K$ for the exact log likelihood from Eq.\ (\ref{eq:exactlik}) globally also maximize the lower bound from Eq.\ (\ref{eq:lowerboundlik}). \end{prop} \begin{proof} By computing the derivatives of $\mathcal{F}(\mathbf{f})$ we obtain the following stationary conditions \begin{equation} K - 1 = \sum_{m \neq k} \frac{N_k + N_m}{N_k} \frac{e^{f_k}}{e^{f_k} + e^{f_m}}, \ k=1,\ldots,K, \end{equation} which form a system of $K$ non-linear equations over the unknowns $(f_1,\ldots,f_K)$. By substituting the values $f_k = \log N_k + c$ we can observe that all $K$ equations are simultaneously satisfied which means that these values are solutions. Furthermore, since $\mathcal{F}(\mathbf{f})$ is a concave function of $\mathbf{f}$ we can conclude that the solutions $f_k = \log N_k + c$ globally maximize $\mathcal{F}(\mathbf{f})$. \end{proof} {\bf Remark.} Not only is $\mathcal{F}(\mathbf{f})$ globally maximized by setting $f_k = \log N_k + c$, but also each pairwise likelihood $P(f_k,f_m)$ in Eq.\ (\ref{eq:lowerboundlik}) is separately maximized by the same setting of parameters.
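The following small Python check (ours; not in the original text) illustrates the proposition numerically: it evaluates the lower bound $\mathcal{F}(\mathbf{f})$ of Eq.\ \eqref{eq:lowerboundlik} at $f_k=\log N_k$ and at randomly perturbed parameter vectors, and the analytic solution attains the largest value in every trial.
\begin{verbatim}
import numpy as np

def ove_loglik(f, N):
    # F(f) = sum_k N_k * sum_{m != k} log sigma(f_k - f_m)
    F = 0.0
    for k in range(len(N)):
        for m in range(len(N)):
            if m != k:
                F -= N[k] * np.log1p(np.exp(-(f[k] - f[m])))
    return F

rng = np.random.default_rng(1)
N = rng.integers(1, 100, size=10).astype(float)   # counts N_k
f_star = np.log(N)                                # claimed maximizer (c = 0)
best = ove_loglik(f_star, N)
for _ in range(200):
    f = f_star + 0.5 * rng.normal(size=len(N))    # random perturbations
    assert ove_loglik(f, N) <= best + 1e-9
\end{verbatim}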
\subsection{Comparison with Bouchard's bound \label{sec:bouchnonnegsample}}
Bouchard \citep{bouchard_efficient_2007} proposed a related bound, which we next analyze in terms of its ability to approximate the exact maximum likelihood training in the non-parametric case, and then we compare it against our method. Bouchard \citep{bouchard_efficient_2007} was motivated by the problem of applying variational Bayesian inference to multiclass classification and he derived the following upper bound on the log-sum-exp function, \begin{equation} \log \sum_{m=1}^K e^{f_m} \leq \alpha + \sum_{m=1}^K \log \left(1 + e^{f_m - \alpha} \right), \label{eq:bouchard} \end{equation} where $\alpha \in \Real$ is a variational parameter that needs to be optimized in order for the bound to become as tight as possible. The above induces a lower bound on the softmax probability $p(y=k)$ from Eq.\ \eqref{eq:softmax1} that takes the form \begin{equation} p(y=k) \geq \frac{e^{f_k - \alpha}}{\prod_{m=1}^K \left( 1 + e^{f_m - \alpha} \right)}. \label{eq:softmaxBou} \end{equation} This is not the same as Eq.\ (\ref{eq:softmaxbound}), since there is no value of $\alpha$ for which the above bound reduces to our proposed one. For instance, if we set $\alpha = f_k$, then Bouchard's bound becomes half the one in Eq.\ (\ref{eq:softmaxbound}) due to the extra term $1 + e^{f_k - f_k} = 2$ in the product in the denominator.\footnote{Notice that the product in Eq.\ (\ref{eq:softmaxbound}) excludes the value $k$, while Bouchard's bound includes it.} Furthermore, such a value for $\alpha$ may not be the optimal one and in practice $\alpha$ must be chosen by minimizing the upper bound in Eq.\ \eqref{eq:bouchard}. While such an optimization is a convex problem, it requires iterative optimization since in general there is no analytical solution for $\alpha$. However, for the simple case where $K=2$ we can analytically find the optimal $\alpha$ and the optimal $\mathbf{f}$ parameters. The following proposition carries out this analysis and provides a clear understanding of how Bouchard's bound behaves when applied for approximate maximum likelihood estimation.
\begin{prop} Assume that $K=2$ and we approximate the probabilities $p(y=1)$ and $p(y=2)$ from (\ref{eq:softmax1}) with the corresponding Bouchard's bounds given by $\frac{e^{f_1 - \alpha}}{(1 + e^{f_1 - \alpha}) (1 + e^{f_2 - \alpha})}$ and $\frac{e^{f_2 - \alpha}}{(1 + e^{f_1 - \alpha}) (1 + e^{f_2 - \alpha})}$. These bounds are used to approximate the maximum likelihood solution by maximizing a bound $\mathcal{F}(f_1,f_2,\alpha)$ which is globally maximized for \begin{equation} \alpha = \frac{f_1 + f_2}{2}, \ \ f_k = 2 \log N_k + c, \ \ k=1,2. \label{eq:Bouchalphaf1f2} \end{equation} \end{prop} The proof of the above is given in the Appendix.
Notice that the above estimates are biased so that the probability of the most populated class (say the $y=1$ for which $N_1>N_2$) is overestimated while the other probability is underestimated. This is due to the factor $2$ that multiplies $\log N_1$ and $\log N_2$ in \eqref{eq:Bouchalphaf1f2}.
Also notice that the solution $\alpha = \frac{f_1 + f_2}{2}$ is not a general trend, i.e.\ for $K>2$ the optimal $\alpha$ is not the mean of the $f_k$'s. In such cases, approximate maximum likelihood estimation based on Bouchard's bound requires iterative optimization. Figure \ref{fig:toycomparisons}a shows some estimated softmax probabilities, using a dataset of $200$ points, each taking one out of ten values, where $\mathbf{f}$ is found by exact maximum likelihood, the proposed one-vs-each bound and Bouchard's method. As expected, estimation based on the bound in Eq.\ \eqref{eq:softmaxbound} gives the exact probabilities, while Bouchard's bound tends to overestimate large probabilities and underestimate small ones.
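For a concrete, hedged illustration of this comparison (our own sketch, not the code used for Figure \ref{fig:toycomparisons}), the Python fragment below evaluates, for random scores, the exact softmax probability together with the one-vs-each bound and Bouchard's bound of Eq.\ \eqref{eq:softmaxBou}, where $\alpha$ is obtained by bisection on the derivative of the convex upper bound in Eq.\ \eqref{eq:bouchard}; both quantities are checked to lie below the exact probability.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
K = 10
f = rng.normal(size=K)
p = np.exp(f - f.max()); p /= p.sum()        # exact softmax probabilities

def ove(k):                                  # one-vs-each bound
    return np.prod(1.0 / (1.0 + np.exp(-(f[k] - np.delete(f, k)))))

def bouchard(k, alpha):                      # Bouchard's bound
    return np.exp(f[k] - alpha) / np.prod(1.0 + np.exp(f - alpha))

# choose alpha by bisection on the derivative of the convex upper bound
# B(alpha) = alpha + sum_m log(1 + exp(f_m - alpha))
lo, hi = f.min() - 20.0, f.max() + 20.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    grad = 1.0 - np.sum(1.0 / (1.0 + np.exp(-(f - mid))))
    lo, hi = (lo, mid) if grad > 0 else (mid, hi)
alpha = 0.5 * (lo + hi)

for k in range(K):
    assert ove(k) <= p[k] + 1e-12 and bouchard(k, alpha) <= p[k] + 1e-12
    print(f"k={k}  exact={p[k]:.4f}  ove={ove(k):.4f}  "
          f"bouchard={bouchard(k, alpha):.4f}")
\end{verbatim}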
\begin{figure*}
\caption{(a) shows the probabilities estimated by exact softmax (blue bar), one-vs-each approximation (red bar) and Bouchard's method (green bar). (b) shows the 5-class artificial data together with the decision boundaries found by exact softmax (blue line), one-vs-each (red line) and Bouchard's bound (green line). (c) shows the maximized (approximate) log likelihoods for the different approaches when applied to the data of panel (b) (see Section \ref{sec:classification}). Notice that the blue line in (c) is the exact maximized log likelihood while the remaining lines correspond to lower bounds.
}
\label{fig:toycomparisons}
\end{figure*}
\section{Stochastic optimization for extreme classification \label{sec:classification}}
Here, we return to the general form of the softmax probabilities as defined by Eq.\ (\ref{eq:softmaxGen}) where the score functions are indexed by input $\mathbf{x}$ and parameterized by $\mathbf{w}$. We consider a classification task where, given a training set $\{\mathbf{x}_n, y_n \}_{n=1}^N$ with $y_n \in \{1,\dots,K\}$, we wish to fit the parameters $\mathbf{w}$ by maximizing the log likelihood, \begin{equation} \mathcal{L} = \log \prod_{n=1}^N \frac{e^{f_{y_n}(\mathbf{x}_n; \mathbf{w})}} {\sum_{m=1}^K e^{f_m(\mathbf{x}_n; \mathbf{w}) }}. \label{eq:Lwx} \end{equation} When the number of training instances is very large, the above maximization can be carried out by applying stochastic gradient descent (by minimizing $-\mathcal{L}$) where we cycle over minibatches. However, this stochastic optimization procedure cannot deal with large values of $K$ because the normalizing constant in the softmax couples all score functions so that the log likelihood cannot be expressed as a sum across class labels.
To overcome this, we can use the one-vs-each lower bound on the softmax probability from Eq.\ (\ref{eq:softmaxbound}) and obtain the following lower bound on the previous log likelihood, \begin{equation} \mathcal{F} = \log \prod_{n=1}^N \prod_{m \neq y_n} \frac{1} {1 + e^{ - [f_{y_n}(\mathbf{x}_n; \mathbf{w}) - f_m(\mathbf{x}_n; \mathbf{w}) ] }} = - \sum_{n=1}^N \sum_{m \neq y_n} \log \left(1 + e^{ - [f_{y_n}(\mathbf{x}_n; \mathbf{w}) - f_m(\mathbf{x}_n; \mathbf{w}) ]} \right) \label{eq:onevsonecostClass} \end{equation} which now consists of a sum over both data points and labels. Interestingly, the sum over the labels, $\sum_{m \neq y_n}$, runs over all remaining classes that are different from the label $y_n$ assigned to $\mathbf{x}_n$. Each term in the sum is a logistic regression cost, that depends on the pairwise score difference $f_{y_n}(\mathbf{x}_n; \mathbf{w}) - f_m(\mathbf{x}_n; \mathbf{w})$, and encourages the $n$-th data point to get separated from the $m$-th remaining class. The above lower bound can be optimized by stochastic gradient descent by subsampling terms in the double sum in Eq.\ (\ref{eq:onevsonecostClass}), thus resulting in a doubly stochastic approximation scheme. Next we further discuss the stochasticity associated with subsampling remaining classes.
The gradient for the cost associated with a single training instance $(\mathbf{x}_n, y_n)$ is \begin{equation} \nabla \mathcal{F}_n = \sum_{m \neq y_n} \sigma\left( f_m(\mathbf{x}_n; \mathbf{w}) - f_{y_n}(\mathbf{x}_n; \mathbf{w}) \right) \left[ \nabla_{\mathbf{w}} f_{y_n}(\mathbf{x}_n; \mathbf{w}) - \nabla_{\mathbf{w}} f_m(\mathbf{x}_n; \mathbf{w}) \right]. \label{eq:prodgrad} \end{equation} This gradient consists of a weighted sum where the sigmoidal weights $\sigma\left( f_m(\mathbf{x}_n; \mathbf{w}) - f_{y_n}(\mathbf{x}_n; \mathbf{w}) \right)$ quantify the contribution of the remaining classes to the whole gradient; the more a remaining class overlaps with $y_n$ (given $\mathbf{x}_n$) the higher its contribution is.
A simple way to get an unbiased stochastic estimate of \eqref{eq:prodgrad} is to randomly subsample
a small subset of remaining classes from the set $\{m | m \neq y_n\}$.
More advanced schemes could be based on importance sampling where we introduce a proposal distribution $p_{n}(m)$ defined on the set $\{m | m \neq y_n\}$ that could favor selecting classes with large sigmoidal weights. While such more advanced schemes could reduce variance, they require prior knowledge (or on-the-fly learning) about how classes overlap with one another. Thus, in Section \ref{sec:experiments} we shall experiment only with the simple random subsampling approach and leave the above advanced schemes for future work.
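To make the doubly stochastic scheme concrete, the following Python sketch (our illustration; the variable names, step sizes and rescaling choices are ours, not the paper's implementation) performs one stochastic ascent step on the bound of Eq.\ \eqref{eq:onevsonecostClass} for a linear model $f_k(\mathbf{x};\mathbf{w})=\mathbf{w}_k^T\mathbf{x}$, subsampling a minibatch of instances and, for each instance, a random subset of remaining classes, with rescaling factors that keep the gradient estimate unbiased.
\begin{verbatim}
import numpy as np

def ove_sgd_step(W, X, y, batch=10, neg=1, lr=1e-3, rng=None):
    """One doubly stochastic ascent step on the one-vs-each lower bound
    for a linear model f_k(x) = W[k] @ x (biases omitted for brevity)."""
    rng = rng or np.random.default_rng()
    N, K = X.shape[0], W.shape[0]
    grad = np.zeros_like(W)
    for n in rng.choice(N, size=batch, replace=False):       # subsample instances
        x, yn = X[n], y[n]
        others = np.array([m for m in range(K) if m != yn])
        for m in rng.choice(others, size=neg, replace=False): # subsample classes
            s = 1.0 / (1.0 + np.exp(W[yn] @ x - W[m] @ x))    # sigma(f_m - f_yn)
            scale = (K - 1) / neg        # unbiased for the sum over m != yn
            grad[yn] += scale * s * x
            grad[m]  -= scale * s * x
    W += lr * (N / batch) * grad         # unbiased for the sum over n
    return W

# toy usage on random data
rng = np.random.default_rng(0)
K, D, N = 5, 2, 200
X = rng.normal(size=(N, D))
y = rng.integers(0, K, size=N)
W = np.zeros((K, D))
for _ in range(500):
    W = ove_sgd_step(W, X, y, rng=rng)
\end{verbatim}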
To illustrate the above stochastic gradient descent algorithm we simulated a two-dimensional data set of $200$ instances, shown in Figure \ref{fig:toycomparisons}b, that belong to five classes. We consider a linear classification model where the score functions take the form $f_k(\mathbf{x}_n, \mathbf{w}) = \mathbf{w}_k^T \mathbf{x}_n$ and where the full set of parameters is $\mathbf{w} = (\mathbf{w}_1,\dots,\mathbf{w}_K)$. We consider minibatches of size ten to approximate the sum $\sum_n$ and subsets of remaining classes of size one to approximate $\sum_{m \neq y_n}$. Figure \ref{fig:toycomparisons}c shows the stochastic evolution of the approximate log likelihood (dashed red line), i.e.\ the unbiased subsampling based approximation of \eqref{eq:onevsonecostClass}, together with the maximized exact softmax log likelihood (blue line), the non-stochastically maximized approximate lower bound from \eqref{eq:onevsonecostClass} (red solid line) and Bouchard's method (green line). To apply Bouchard's method we construct a lower bound on the log likelihood by replacing each softmax probability with the bound from \eqref{eq:softmaxBou} where we also need to optimize a separate variational parameter $\alpha_n$ for each data point. As shown in Figure \ref{fig:toycomparisons}c our method provides a tighter lower bound than Bouchard's method despite the fact that it does not contain any variational parameters. Also, Bouchard's method can become very slow when combined with stochastic gradient descent since it requires tuning a separate variational parameter $\alpha_n$ for each training instance. Figure \ref{fig:toycomparisons}b also shows the decision boundaries discovered by the exact softmax, one-vs-each bound and Bouchard's bound.
Finally, the actual parameter values found by maximizing the one-vs-each bound were remarkably close (although not identical) to the parameters found by the exact softmax.
\section{Experiments \label{sec:experiments}}
\subsection{Toy example in large scale non-parametric estimation \label{sec:largedensity}}
Here, we illustrate the ability to stochastically maximize the bound in Eq.\ \eqref{eq:lowerboundlik} for the simple nonparametric estimation case. In this case, we can also maximize the bound based on the analytic formulas and therefore we will be able to test how well the stochastic algorithm can approximate the optimal/known solution. We consider a data set of $N = 10^6$ instances, each taking one out of $K = 10^4$ possible categorical values. The data were generated from a distribution $p(k) \propto u_k^2$, where each $u_k$ was randomly chosen in $[0,1]$. The probabilities estimated based on the analytic formulas are shown in Figure \ref{fig:DensityLarge}a. To stochastically estimate these probabilities we follow the doubly stochastic framework of Section \ref{sec:classification} so that we subsample minibatches of $b=100$ data instances and for each instance we subsample $10$ remaining categorical values. We used a learning rate initialized to $0.5/b$ (decreasing it by a factor of $0.9$ after each epoch) and performed $2 \times 10^5$ iterations. Figure \ref{fig:DensityLarge}b shows the final values for the estimated probabilities, while Figure \ref{fig:DensityLarge}c shows the evolution of the estimation error during the optimization iterations. We can observe that the algorithm performs well and exhibits typical stochastic approximation convergence.
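A scaled-down reconstruction of this experiment (ours; we use smaller $N$ and $K$ than in the text so that it runs in seconds, and the constant rescaling factors of the stochastic gradient are absorbed into the learning rate) is sketched below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
K, N, b, neg = 100, 10**5, 100, 10
u = rng.uniform(size=K)
probs = u**2 / np.sum(u**2)                 # p(k) proportional to u_k^2
data = rng.choice(K, size=N, p=probs)       # categorical observations
empirical = np.bincount(data, minlength=K) / N

f, lr = np.zeros(K), 0.5 / b
for epoch in range(5):
    for _ in range(2000):
        grad = np.zeros(K)
        for yn in data[rng.choice(N, size=b)]:      # minibatch of instances
            m = rng.choice(K - 1, size=neg)
            m += (m >= yn)                          # sampled classes != yn
            s = 1.0 / (1.0 + np.exp(f[yn] - f[m]))  # sigma(f_m - f_yn)
            grad[yn] += s.sum()
            np.add.at(grad, m, -s)
        f += lr * grad
    lr *= 0.9                                       # decay after each epoch

est = np.exp(f - f.max()); est /= est.sum()
print("max abs error vs empirical frequencies:", np.abs(est - empirical).max())
\end{verbatim}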
\begin{figure*}
\caption{(a) shows the optimally estimated probabilities, which have been sorted for visualization purposes. (b) shows the corresponding probabilities estimated by stochastic optimization. (c) shows the absolute norm for the vector of differences between exact estimates and stochastic estimates.}
\label{fig:DensityLarge}
\end{figure*}
\subsection{Classification \label{sec:experimClass}}
{\bf Small scale classification comparisons.} Here, we wish to investigate whether the proposed lower bound on the softmax is a good surrogate for exact softmax training in classification. More precisely, we wish to compare the parameter estimates obtained by the one-vs-each bound with the estimates obtained by exact softmax training.
To quantify closeness we use the normalized absolute norm \begin{equation}
\text{norm} = \frac{|\mathbf{w}_{\text{softmax}} - \mathbf{w}_*|}{|\mathbf{w}_{\text{softmax}}|}, \label{eq:norm} \end{equation} where $\mathbf{w}_{\text{softmax}}$ denotes the parameters obtained by exact softmax training and $\mathbf{w}_*$ denotes estimates obtained by approximate training. Further, we will also report predictive performance measured by classification error and negative log predictive density (nlpd) averaged across test data, \begin{equation} \text{error} = (1/N_{test}) \sum_{i=1}^{N_{test}} I( y_i \neq t_i), \quad
\text{nlpd} = (1/N_{test}) \sum_{i=1}^{N_{test}} - \log p(t_i|\mathbf{x}_i), \end{equation} where $t_i$ denotes the true label of a test point and $y_i$ the predicted one. We trained the linear multiclass model of Section \ref{sec:classification} with the following alternative methods: exact softmax training (\textsc{soft}), the one-vs-each bound (\textsc{ove}), the stochastically optimized one-vs-each bound (\textsc{ove-sgd}) and
Bouchard's bound (\textsc{bouchard}). For all approaches, the associated cost function was maximized together with an added regularization penalty term,
$-\frac{1}{2} \lambda ||\mathbf{w}||^2$, which ensures that the global maximum of the cost function is achieved for finite $\mathbf{w}$.
Since we want to investigate how well we surrogate exact softmax training, we used the same fixed value $\lambda=1$ in all experiments.
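For concreteness, the closeness score of Eq.\ \eqref{eq:norm} and the two predictive scores can be computed as in the short Python helper below (our illustration; the function and variable names are ours, and the absolute norm is taken to be the sum of absolute values).
\begin{verbatim}
import numpy as np

def parameter_closeness(w_softmax, w_approx):
    # normalized absolute norm between exact-softmax and approximate parameters
    return np.sum(np.abs(w_softmax - w_approx)) / np.sum(np.abs(w_softmax))

def predictive_scores(P_test, t_test):
    # P_test: (N_test, K) predictive probabilities, t_test: true test labels
    y_pred = P_test.argmax(axis=1)
    error = np.mean(y_pred != t_test)
    nlpd = -np.mean(np.log(P_test[np.arange(len(t_test)), t_test]))
    return error, nlpd
\end{verbatim}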
We considered three small scale multiclass classification datasets: \textsc{mnist}\footnote{\url{http://yann.lecun.com/exdb/mnist}}, \textsc{20news}\footnote{\url{http://qwone.com/~jason/20Newsgroups/}} and \textsc{bibtex} \citep{Katakis08multilabeltext}; see Table \ref{table:datasets} for details.
Notice that \textsc{bibtex} is originally a
multi-label classification dataset \citep{NIPS2015_5969}, where each example may have more than one label. Here, we maintained only a single label for each data point in order to apply standard multiclass classification. The maintained label was the first label appearing in each data entry in the repository files\footnote{\url{http://research.microsoft.com/en-us/um/people/manik/downloads/XC/XMLRepository.html}} from which we obtained the data.
Figure \ref{fig:smallScaleClass} displays the convergence of the lower bounds (and of the exact softmax cost) for all methods. Recall that the methods \textsc{soft}, \textsc{ove} and \textsc{bouchard} are non-stochastic and therefore their optimization can be carried out by standard gradient descent. Notice that in all three datasets the one-vs-each bound gets much closer to the exact softmax cost compared to Bouchard's bound. Thus, \textsc{ove} tends to give a tighter bound despite the fact that it does not contain any variational parameters, while \textsc{bouchard} has $N$ extra variational parameters, i.e.\ as many as the training instances. The application of the \textsc{ove-sgd} method (the stochastic version of \textsc{ove}) is based on a doubly stochastic scheme where we subsample minibatches of size $200$ and, for each instance, a single remaining class. We can observe that \textsc{ove-sgd} is able to stochastically approach its maximum value, which corresponds to \textsc{ove}.
Table \ref{table:scores} shows the parameter closeness score from Eq.\ \eqref{eq:norm} as well as the classification predictive scores. We can observe that \textsc{ove} and \textsc{ove-sgd} provide parameters closer to those of \textsc{soft} than the parameters provided by \textsc{bouchard}. Also, the predictive scores for \textsc{ove} and \textsc{ove-sgd} are similar to \textsc{soft}, although they tend to be slightly worse. Interestingly, \textsc{bouchard} gives the best classification error, even better than the exact softmax training, but at the same time it always gives the worst nlpd which suggests sensitivity to overfitting. However, recall that the regularization parameter $\lambda$ was fixed to the value one and it was not optimized separately for each method using cross validation. Also notice that \textsc{bouchard} cannot be easily scaled up (with stochastic optimization) to massive datasets since it introduces an extra variational parameter for each training instance.
{\bf Large scale classification.} Here, we consider \textsc{amazoncat-13k} (see footnote 4), which is a large scale classification dataset. This dataset is originally multi-labelled \citep{NIPS2015_5969} and here we maintained only a single label, as done for the \textsc{bibtex} dataset, in order to apply standard multiclass classification. This dataset is also highly imbalanced, since about $15$ classes contain half of the training instances, while many classes have very few (or just a single) training instance.
\begin{table}[t]
\caption{Summaries of the classification datasets.}
\label{table:datasets}
\centering
\begin{tabular}{lllll}
\toprule
Name & Dimensionality & Classes & Training examples & Test examples \\
\midrule
\textsc{mnist} & 784 & 10 & 60000 & 10000 \\
\textsc{20news} & 61188 & 20 & 11269 & 7505 \\
\textsc{bibtex} & 1836 & 148 & 4880 & 2515 \\
\textsc{amazoncat-13k} & 203882 & 2919 & 1186239 & 306759 \\
\bottomrule
\end{tabular} \end{table}
\begin{table}[t]
\caption{Score measures for the small scale classification datasets.}
\label{table:scores}
\centering
\begin{tabular}{lllll}
\toprule
& \textsc{soft} & \textsc{bouchard} & \textsc{ove} & \textsc{ove-sgd} \\
& (error, nlpd) & (norm, error, nlpd) & (norm, error, nlpd) & (norm, error, nlpd) \\ \midrule \textsc{mnist} & (0.074, 0.271) & (0.64, 0.073, 0.333) & (0.50, 0.082, 0.287) & (0.53, 0.080, 0.278) \\ \textsc{20news} & (0.272, 1.263) & (0.65, 0.249, 1.337) & (0.05, 0.276, 1.297) & (0.14, 0.276, 1.312) \\ \textsc{bibtex} & (0.622, 2.793) & (0.25, 0.621, 2.955) & (0.09, 0.636, 2.888) & (0.10, 0.633, 2.875) \\
\bottomrule
\end{tabular} \end{table} \begin{figure*}
\caption{(a) shows the evolution of the lower bound values for \textsc{mnist}, (b) for \textsc{20news} and (c) for \textsc{bibtex}. For clearer visualization, the bounds of the stochastic \textsc{ove-sgd} have been smoothed using a rolling window of $400$ previous values. (d) shows the evolution of the \textsc{ove-sgd} lower bound (scaled to correspond to a single data point) in the large scale \textsc{amazoncat-13k} dataset. Here, the plotted values have also been smoothed using a rolling window of size $4000$ and then thinned by a factor of $5$.
}
\label{fig:smallScaleClass}
\end{figure*}
Further, notice that in this large dataset the number of parameters we need to estimate for the linear classification model is very large: $K \times (D+1) = 2919 \times 203883$ parameters, where the plus one accounts for the biases. All methods apart from \textsc{ove-sgd} are practically very slow on this massive dataset, and therefore we consider only \textsc{ove-sgd}, which is scalable.
We applied \textsc{ove-sgd} where at each stochastic gradient update we consider a single training instance (i.e.\ the minibatch size was one) and for that instance we randomly select five remaining classes. This leads to sparse parameter updates, where the score function parameters of only six classes (the class of the current training instance plus the five remaining ones) are updated at each iteration. We used a very small learning rate of value $10^{-8}$ and performed five epochs across the full dataset, that is, we performed in total $5 \times 1186239$ stochastic gradient updates. After each epoch we halve the learning rate before the next epoch starts. By also taking into account the sparsity of the input vectors, each iteration is very fast and full training is completed in just $26$ minutes on a stand-alone PC. The evolution of the variational lower bound that indicates convergence is shown in Figure \ref{fig:smallScaleClass}d. Finally, the classification error on test data was $53.11\%$, which is significantly better than random guessing or than a method that always predicts the most populated class (in \textsc{amazoncat-13k} the most populated class occupies $19\%$ of the data, so the error of that method is around $79\%$).
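The sparse update described above can be sketched as follows (our illustration, assuming the input is given as index/value pairs for its nonzero features; this is not the authors' code).
\begin{verbatim}
import numpy as np

def sparse_ove_update(W, b, x_idx, x_val, yn, neg_classes, lr):
    """Update only the rows of W (and biases b) for the true class yn and the
    sampled remaining classes, and only at the nonzero features of x."""
    f_yn = W[yn, x_idx] @ x_val + b[yn]
    for m in neg_classes:
        f_m = W[m, x_idx] @ x_val + b[m]
        s = 1.0 / (1.0 + np.exp(f_yn - f_m))   # sigma(f_m - f_yn)
        W[yn, x_idx] += lr * s * x_val
        b[yn] += lr * s
        W[m, x_idx] -= lr * s * x_val
        b[m] -= lr * s
\end{verbatim}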
\section{Discussion}
We have presented the one-vs-each lower bound on softmax probabilities and we have analyzed its theoretical properties. This bound is just the most extreme case of a full family of hierarchically ordered bounds. We have explored the ability of the bound to perform parameter estimation through stochastic optimization in models having a large number of categorical symbols, and we have demonstrated this ability in classification problems.
There are several directions for future research. Firstly, it is worth investigating the usefulness of the bound in applications other than classification, such as learning word embeddings in natural language processing and training recommendation systems. Another interesting direction is to consider the bound not for point estimation, as done in this paper, but for Bayesian estimation using variational inference.
\subsection*{Acknowledgments}
We thank the reviewers for insightful comments. We would also like to thank Francisco J. R. Ruiz for useful discussions and David Blei for suggesting the name {\em one-vs-each} for the proposed method.
\appendix \section{Proof of Proposition 3}
Here we re-state and prove {\bf Proposition 3}.
{\bf Proposition 3.} {\em Assume that $K=2$ and we approximate the probabilities $p(y=1)$ and $p(y=2)$ from $(2)$
with the corresponding Bouchard's bounds given by $\frac{e^{f_1 - \alpha}}{(1 + e^{f_1 - \alpha}) (1 + e^{f_2 - \alpha})}$ and $\frac{e^{f_2 - \alpha}}{(1 + e^{f_1 - \alpha}) (1 + e^{f_2 - \alpha})}$. These bounds are used to approximate the maximum likelihood solution for $(f_1,f_2)$ by maximizing the lower bound \begin{equation} \mathcal{F}(f_1,f_2,\alpha) = \log \frac{e^{N_1 (f_1 - \alpha) + N_2 (f_2 -\alpha)}}{\left[(1 + e^{f_1 - \alpha}) (1 + e^{f_2 - \alpha})\right]^{N_1+N_2} }, \label{eq:Qf1f2alpha} \end{equation} obtained by replacing $p(y=1)$ and $p(y=2)$ in the exact log likelihood with Bouchard's bounds. Then, the global maximizer of $\mathcal{F}(f_1,f_2,\alpha)$ is such that \begin{equation} \alpha = \frac{f_1 + f_2}{2}, \ \ f_k = 2 \log N_k + c, \ \ k=1,2. \label{eq:Bouchalphaf1f2_2} \end{equation} } \begin{proof} The lower bound is written as $$ N_1 (f_1 - \alpha) + N_2 (f_2 - \alpha) - (N_1 + N_2) \left[ \log (1 + e^{f_1 -\alpha}) + \log (1 + e^{f_2 - \alpha})
\right]. $$ We will first maximize this quantity wrt $\alpha$. For that it suffices to minimize the following upper bound on the log-sum-exp function $$ \alpha + \log (1 + e^{f_1 -\alpha}) + \log (1 + e^{f_2 - \alpha}), $$ which is a convex function of $\alpha$. By taking the derivative wrt $\alpha$ and setting it to zero we obtain the stationary condition $$ \frac{e^{f_1 - \alpha}}{1 + e^{f_1 - \alpha}}
+ \frac{e^{f_2 - \alpha }}{1 + e^{f_2 - \alpha}} = 1. $$ Clearly, the value of $\alpha$ that satisfies the condition is $\alpha = \frac{f_1 + f_2}{2}$. Now if we substitute this value back into the initial bound we have $$ N_1 \frac{f_1 - f_2}{2} + N_2 \frac{f_2 - f_1}{2} - (N_1 + N_2) \left[ \log (1 + e^{\frac{f_1 - f_2}{2}}) + \log (1 + e^{\frac{f_2 - f_1}{2}})
\right] $$ which is concave wrt $f_1$ and $f_2$. Then, by taking derivatives wrt $f_1$ and $f_2$ we obtain the conditions $$ \frac{N_1 - N_2}{2} = \frac{(N_1 + N_2)}{2} \left[ \frac{ e^{\frac{f_1 - f_2}{2}}}{1 + e^{\frac{f_1 - f_2}{2}}} - \frac{ e^{\frac{f_2 - f_1}{2}}}{1 + e^{\frac{f_2 - f_1}{2}}}
\right] $$ $$ \frac{N_2 - N_1}{2} = \frac{(N_1 + N_2)}{2} \left[ \frac{ e^{\frac{f_2 - f_1}{2}}}{1 + e^{\frac{f_2 - f_1}{2}}} - \frac{ e^{\frac{f_1 - f_2}{2}}}{1 + e^{\frac{f_1 - f_2}{2}}}
\right] $$ Now we can observe that these conditions are satisfied by $f_1 = 2 \log N_1 + c$ and $f_2 = 2 \log N_2 + c$ which gives the global maximizer since $\mathcal{F}(f_1,f_2,\alpha)$ is concave. \end{proof}
\begin{small}
\end{small}
\end{document}
\begin{document}
\begin{nouppercase} \maketitle \end{nouppercase}
\begin{abstract} Let $A$ be an abelian variety defined over a number field $F$. Suppose its dual abelian variety $A'$ has good non-ordinary reduction at the primes above $p$. Let $F_{\infty}/F$ be a $\Zp$-extension, and for simplicity, assume that there is only one prime $\mfp$ of $F_{\infty}$ above $p$, and $F_{\infty, \mfp}/\Qp$ is totally ramified and abelian. (For example, we can take $F=\Q(\zeta_{p^N})$ for some $N$, and $F_{\infty}=\Q(\zeta_{p^{\infty}})$.) As Perrin-Riou did in \cite{Perrin-Riou-1}, we use Fontaine's theory (\cite{Fontaine}) of group schemes to construct series of points over each $F_{n, \mfp}$ which satisfy norm relations associated with the Dieudonne module of $A'$ (in the case of elliptic curves, simply the Euler factor at $\mfp$), and use these points to construct characteristic power series $\bfL_{\alpha} \in \Qp[[X]]$ analogous to Mazur's characteristic polynomials in the case of good ordinary reduction. By studying $\bfL_{\alpha}$, we obtain a weak bound for $\rank A(F_n)$.
In the second part, we establish a more robust Iwasawa Theory for elliptic curves, and find a better bound for their ranks under the following conditions: Take an elliptic curve $E$ over a number field $F$. The conditions for $F$ and $F_{\infty}$ are the same as above. Also as above, we assume $E$ has supersingular reduction at $\mfp$. We discover that we can construct series of local points which satisfy finer norm relations under some conditions related to the logarithm of $E/F_{\mfp}$. Then, we apply Sprung's (\cite{Sprung}) and Perrin-Riou's insights to construct \textit{integral} characteristic polynomials $\bfLalg^{\sharp}$ and $\bfLalg^{\flat}$. One of the consequences of this construction is that if $\bfLalg^{\sharp}$ and $\bfLalg^{\flat}$ are not divisible by a certain power of $p$, then $E(F_{\infty})$ has a finite rank modulo torsions. \end{abstract} \tableofcontents
\begin{section}{Introduction} A good place to start our discussion is Mazur's influential work on the rational points of abelian varieties over towers of number fields (\cite{Mazur}). Suppose $A$ is an abelian variety over a number field $F$, $A$ has good ordinary reduction at every prime above $p$, and $F_{\infty}$ is a $\Zp$-extension of $F$ (i.e., $\operatorname{Gal}(F_{\infty}/F) \cong \Zp$). First, he established the Control Theorem for $\operatorname{Sel}_p(A[p^{\infty}]/F_n)$'s (meaning he showed that the natural map $\operatorname{Sel}_p(A[p^{\infty}]/F_n)\to \operatorname{Sel}_p(A[p^{\infty}]/F_{\infty})^{\operatorname{Gal}(F_{\infty}/F_n)}$ has bounded kernel and cokernel as $n$ varies), and second, he demonstrated the existence of the characteristic polynomial $f(F_{\infty}/F, A)$ of $\operatorname{Sel}_p(A[p^{\infty}]/F_{\infty})$. (Any attempt to reduce his immense work to two sentences should be resisted, and readers should understand that the author is only trying to describe how his work has influenced this paper.)
It means that we can use powerful tools of Iwasawa Theory. For example, if $f(F_{\infty}/F, A)\not=0$ (which is true if $A(F_n)^{\chi_n}$ and the $\chi_n$-part of the Shafarevich-Tate group $\Sha(A/F_n)[p^{\infty}]^{\chi_n}$ are finite for any $n\geq 0$ and any character $\chi_n$ of $\operatorname{Gal}(F_n/F)$), then $A(F_{\infty})$ has a finite rank modulo torsions. (Torsions over $F_{\infty}$ are often finite.)
Regarding the rank of $A(F_{\infty})$, now we have a stronger result for elliptic curves over $\Q$ by Kato (\cite{Kato}). However, we want to emphasize that Mazur's work and Kato's work have different goals and strengths.
Can we establish a result analogous to Mazur's for abelian varieties with good \textit{non-ordinary} reduction at primes above $p$? (See Section~\ref{Reduction} for the discussion about reduction types. We will not treat bad reduction primes, which seem to require a very different approach except for multiplicative reduction primes.)
The answer is that it is not easy to do Mazur's work directly for non-ordinary reduction primes. The main problem seems to be that the local universal norms are trivial when the primes are non-ordinary.
One of the more successful strategies to overcome this difficulty is to construct a series of local points which satisfy certain norm relations associated with the Euler factor $X^2-a_p(E)X+p$. Rubin introduced the idea of $\pm$-Selmer groups of elliptic curves (\cite{Rubin}). His method was to use the Heegner points as local points. Perrin-Riou (\cite{Perrin-Riou-1}) invented a way to construct such local points purely locally using Fontaine's theory of formal group schemes (\cite{Fontaine}). Her brilliant idea was all but forgotten for a long time, but it has been getting more influential recently. (And this paper owes much to her work.)
More recently, Kobayashi (\cite{Kobayashi}) also constructed such local points of elliptic curves using a more explicit method, and demonstrated the potential that the theory for supersingular reduction primes can be as good as the theory for ordinary reduction primes.
Kobayashi assumed $a_p(E)=0$ for an elliptic curve $E$ defined over $\Q$ (which is automatically true by the Hasse inequality if $E$ has good supersingular reduction at $p$ and $p>3$). Sprung introduced a new idea, what he calls $\sharp/\flat$-Selmer groups for elliptic curves, which does not require $a_p(E)=0$ (\cite{Sprung}). His work has particular relevance to this paper because we are interested in abelian varieties and elliptic curves over ramified fields. Even when we assume $a_p(E)=0$ or an equivalent condition, the associated formal groups behave as if $a_p$ were not $0$ because the fields are ramified. We will make much use of his idea of the $\sharp/\flat$-decomposition in the second part.
Whereas our predecessors were concerned with abelian varieties over $\Q$ (and therefore formal groups defined over $\Qp$), we are concerned with abelian varieties defined over fields whose primes above $p$ are ramified, which present new difficulties.
First (Section~\ref{Case 1}), we take an abelian variety $A$ over a number field $F$, and let $A'$ be its dual abelian variety. For simplicity, we assume there is only one prime $\mfp$ of $F$ above $p$, and it is totally ramified over $F/\Q$. We assume $A'$ has good reduction at $\mfp$. Suppose $F_{\infty}$ is a $\Zp$-extension of $F$ such that $\mfp$ is totally ramified over $F_{\infty}/F$, and $F_{\infty, \mfp}/\Qp$ is abelian. For example, take $F =\Q(\zeta_{p^N})$ for some $N$, and $F_{\infty}=\Q (\zeta_{p^{\infty}})$.
Suppose $A'/F_{\mfp}$ has dimension $1$. (Generalizing to higher dimensions may not be very hard.) Let $H^{\vee}(X)=X^d+pb_1X^{d-1}+p^2b_2X^{d-2}+\cdots+p^db_d$ be the characteristic polynomial of the Verschiebung ${\bf V}$ acting on the Dieudonne module. For example, for an elliptic curve, that is simply $X^2-a_p(E)X+p$. Suppose that $A'(F_{\infty, \mfp})_{tor}$ is annihilated by some $M'>0$. Then, we construct points $Q(\pi_{N+n}) \in A'(F_{n, \mfp})$ such that we have
$$ \operatorname{Tr}_{F_{n, \mathfrak p}/F_{n-d, \mathfrak p}} Q(\pi_{N+n}) = \sum_{i=1}^d -p^i\cdot b_i \operatorname{Tr}_{F_{n-i, \mathfrak p}/F_{n-d, \mathfrak p}} Q(\pi_{N+n-i}). $$
Fontaine's theory of finite group schemes (\cite{Fontaine}) is instrumental in our construction, as it is in Perrin-Riou's work (\cite{Perrin-Riou-1}). As Perrin-Riou does, for each root $\alpha$ of $H^{\vee}(X)$ with $v_p(\alpha)<1$, we can construct a characteristic power series $\bfL_{\alpha}(X)\in \Qp[[X]]$ which is analogous to Mazur's characteristic polynomial $f(F_{\infty}/F, A)$ except that it is not an integral power series unless $v_p(\alpha)=0$.
Then, we can obtain the following bound for the coranks of the Selmer groups (and thus, for the ranks of $A(F_n)$):
\begin{theorem}[Proposition~\ref{ZeroGo}] Let $\lambda=v_p(\alpha)$.
\begin{enumerate} \item If $\bfL_{\alpha}\not=0$, then \[ \corank_{\Zp} \operatorname{Sel}_p(A[p^{\infty}]/F_n) \leq e(p-1) \times \left\{ p^{n-1}+p^{n-2}+ \cdots+ p^m \right\}+O(1)\] where $n-m = \lambda n +O(1)$.
\item If any root $\alpha$ of $H^{\vee}(X)$ has valuation 0 (i.e., if $A'$ has ``in-between'' reduction or ordinary reduction), then $\corank_{\Zp}(\operatorname{Sel}_p(A[p^{\infty}]/F_n))$ is bounded by the number of roots of $\bfL_{\alpha}$. \end{enumerate} \end{theorem}
We have $\bfL_{\alpha}\not=0$ if $\operatorname{Sel}_p(A[p^{\infty}]/F_n)^{\chi_n}$ is finite for any $n$ and any character $\chi_n$ of $\operatorname{Gal}(F_n/F)$. Also note that $\rank(A(F_n))$ is bounded by $\corank_{\Zp} \operatorname{Sel}_p(A[p^{\infty}]/F_n)$.
In addition, we construct similar local points over the extensions $F_{\mfp}(\sqrt[p^n]{\pi})$ ($n\geq 0$) for any uniformizer $\pi$ of $F_{\mfp}$ (Section~\ref{Kummer}). On one hand, this construction is fully general. On the other hand, since $\cup_n F_{\mfp}(\sqrt[p^n]{\pi})$ is not abelian over $F_{\mfp}$, it is not clear what we can do with it. (For instance, we cannot apply Iwasawa Theory to the points.)
Furthermore, assuming additional hypotheses, and with the crucial help of Sprung's insight, we can establish an Iwasawa Theory that is more closely aligned with Mazur's theory. In Section~\ref{Case 2}, we take an elliptic curve $E$ over $F$, and suppose $E$ has \textit{good supersingular reduction} at $\mfp$ (i.e., $a_{\mfp}(E)$ is not prime to $p$).
We choose a logarithm $\bfl$ of $E$ over $F_{\mfp}$ and a generator $\bfm$ of the Dieudonne module of $E$, and write
\[ \bfl=\alpha_1 \bfm+\alpha_2 {\bf F}\bfm\]
for some $\alpha_1, \alpha_2 \in F_{\mfp}$. We assume $p| \frac{\alpha_2}{\alpha_1}$ (Assumption~\ref{Assumption K}). Also we assume Assumption~\ref{Assumption L}, which is too technical to explain here, but is probably true in most cases.
One crucial step is that we modify our construction so that the resulting local points satisfy a finer norm relation (Proposition~\ref{Mark IV}). Another crucial step is that like Perrin-Riou, we construct $p$-adic characteristics, but this time, by applying an idea inspired by Sprung's insight of $\sharp/\flat$ (\cite{Sprung}), we construct integral $p$-adic characteristic polynomials $\bfLalg^{\sharp}(E), \bfLalg^{\flat}(E) \in \Lambda$. Since these are integral, they are more analogous to Mazur's characteristic $f(F_{\infty}/F, A)$, and it is likely that they have nice properties. They may not necessarily satisfy a control theorem in a literal sense, but nonetheless we manage to prove Proposition~\ref{DDT}, by which we can obtain the following.
\begin{theorem}[Theorem~\ref{DDR}] Suppose $a_p$ and $\alpha$ are divisible by $p^T$ for some $T$, and neither $\bfLalg^{\sharp}(E)$ nor $\bfLalg^{\flat}(E)$ is divisible by $p^S$ for some $S$ with $S+\frac{[F:\Q]\times p}{(p-1)^2}<T$. Then, $E(F_{\infty})$ has a finite rank modulo torsions, and $\Sha(E/F_n)[p^{\infty}]^{\chi_n}$ is finite for all sufficiently large $n$ and primitive characters $\chi_n$ of $\operatorname{Gal}(F_n/F)$. \end{theorem}
\end{section}
\begin{section}{Reduction Types} \Label{Reduction} In this short section, we discuss reduction types.
For elliptic curves, what good reduction, good ordinary reduction, and good supersingular reduction mean is clear. Suppose an elliptic curve $E$ is defined over a local field $K$. Then, we may suppose it has a minimal model over $\OO_K$. Let $\tilde E$ denote the reduced curve of the minimal model modulo $\mm_{\OO_K}$. We say $E$ has good reduction if $\tilde E$ is non-singular (i.e., smooth). Furthermore, we say $E$ has good ordinary reduction if $\tilde E$ is non-singular, and $\tilde E[p]$ is non-trivial, and has good supersingular reduction if $\tilde E$ is non-singular, and $\tilde E[p]$ is trivial. There are other equivalent definitions.
For general abelian varieties, it may be advantageous to use the Dieudonne modules to define reduction types. (There are other definitions, but the one using Dieudonne modules seems relatively simple.) Suppose $G$ is a formal group scheme over $\OO_K$ where $K$ is a local field. Let $G_{/k}$ be its reduction over the residue field $k$. If $G_{/k}$ is smooth, then we say $G$ has good reduction. Assume $G$ has good reduction, and let $M$ be its Dieudonne module $\hat{CW}(R_{G_{/k}})$ where $R_{G_{/k}}$ is the affine algebra that defines $G_{/k}$, and $\hat{CW}$ denotes the completion of the co-Witt vectors. (See \cite{Dieudonne-1}, \cite{Dieudonne-2}, \cite{Fontaine}, or Section~\ref{Fontaine}.) The Frobenius ${\bf F}$ and the Verschiebung ${\bf V}$ act on $M$ through $\hat{CW}$ with ${\bf F}{\bf V}={\bf V}{\bf F}=p$.
Let $H(X)$ be the characteristic polynomial of ${\bf F}$ as action on $M$, i.e., $H(X)=\det (X\cdot 1_M-{\bf F}|M)$. Write
\[ H(X)=X^d+a_{d-1}X^{d-1}+\cdots+a_0.\] Then, ${\bf F}$ is a topological nilpotent if and only if the roots of $H(X)$ are non-units.
Since ${\bf F}{\bf V}=p$, $$H^{\vee}(X) \stackrel{def}=X^d+p\frac{a_1}{a_0}X^{d-1}+p^2 \frac{a_2}{a_0}X^{d-2}+\cdots+p^{d-1} \frac{a_{d-1}}{a_0} X+p^d \frac 1{a_0}$$ is the characteristic polynomial of ${\bf V}$ as action on $M$. We define the following terminology we will use in this paper.
\begin{definition} \Label{Calais} Assume $G$ has good reduction. Also assume ${\bf F}$ is a topological nilpotent. Recall that ${\bf V}$ is a topological nilpotent if all the roots of $H^{\vee}(X)$ are non-units. \begin{enumerate} \item If all the roots of $H^{\vee}$ are units, then we say $G$ has ordinary reduction. \item If ${\bf V}$ is a topological nilpotent (i.e., all the roots of $H^{\vee}$ are non-units), then we say $G$ has supersingular reduction.
\item If some roots of $H^{\vee}$ are units and some are not, then we say $G$ has in-between reduction. \end{enumerate} \end{definition}
The last terminology is our own ad-hoc invention.
Definition~\ref{Calais} makes it clear that in this paper, we assume ${\bf F}$ is a topological nilpotent, but this condition is used only in a minor way, and when we use that assumption, we will mention it.
\end{section}
\begin{section}{Fontaine's functor for ramified extensions} \Label{Fontaine}
Our primary reference is \cite{Fontaine}~Chapter~4. We will keep his notation wherever possible. Fontaine's book is out of print, and not many libraries have a copy. So, we will explain his work briefly.
Let
\begin{enumerate}[(a)] \item $K'$: an extension over $\Qp$ (possibly ramified), \item $\mathcal O_{K'}$: its ring of integers, \item $\mathfrak m$: its maximal ideal, \item $e$: the ramification index of $K'$. \end{enumerate}
Let $k$ be the residue field of $\OO_{K'}$, and let $K$ be the fraction field of $W=W(k)$, the ring of Witt vectors of $k$. In other words, $K$ is the maximal unramified extension of $\Qp$ contained in $K'$. Then, there is the $p$-th Frobenius $\sigma$ on $K$. We let
\[ \mathbf D_k\stackrel{def}=W[{\bf F}, {\bf V}] \] where
\begin{enumerate}[(a)] \item ${\bf F}$ acts $\sigma$-linearly, and ${\bf V}$ acts $\sigma^{-1}$-linearly on $W$. In other words, ${\bf F} a=\sigma(a)$ and ${\bf V} a=\sigma^{-1}(a)$ where $a\in W$.
\item ${\bf F}{\bf V}={\bf V}{\bf F}=p$ \end{enumerate} If $K'$ is totally ramified so that $k=\mathbb F_p$, we drop $k$ from $\mathbf D_k$.
Suppose $G$ is a smooth finite-dimensional (commutative) formal group scheme over $\OO_{K'}$ such that $G_{/k}$ is smooth. Fontaine found a way to describe $G$ by linear algebra. More specifically, he can describe $G$ completely up to isogeny (or, up to isomorphism if $e<p-1$) by the Dieudonne module $M$, and the set $L$ of its ``logarithms'', and his description is given by expressing the points of $G$ by the linear algebra of $L$ and $M$. Together, $(L, M)$ is called the Honda system of $G$.
We briefly summarize Fontaine's work: Let $R$ be the affine algebra of $G$ (i.e., $G(g)\cong \operatorname{Hom}(R, g)$ for any algebra $g$ over $\OO_{K'}$ where $\operatorname{Hom}$ is the set of ring homomorphisms). Then, $R_k=R/\mm R$ is the affine algebra of the special fiber $G_{/k}$. Set
\[ M\stackrel{def}=\operatorname{Hom}(G_{/k}, \hat{CW}) \] where $\hat{CW}$ is the functor of completed co-Witt vectors. Then, $M=\operatorname{Hom}(G_{/k}, \hat{CW})\cong \hat{CW}(R_k)$. Since the Frobenius ${\bf F}$ and the Verschiebung ${\bf V}$ act on $CW$ by
\[ {\bf F}(\ldots, a_{-n},\ldots)=(\ldots, a_{-n}^p,\ldots),\] \[ {\bf V}(\ldots, a_{-2}, a_{-1}, a_0)=(\ldots, a_{-2}, a_{-1}),\] ${\bf F}$ and ${\bf V}$ also act on $M$ accordingly. For any algebra $A$ over $k$,
\[ G_{/k}(A)\cong \operatorname{Hom}_{\mathbf D_k}(M, A). \]
Suppose $N$ is a $\Dieudonne_k$-module. Let $N^{(j)}$ denote the $\mathbf D_k$-module with the same underlying set $N$ and action twisted by $\sigma^j$. In other words, for $n \in N^{(j)}$ and $\lambda \in W$,
\[ \lambda\circ n=\sigma^{-j}(\lambda)n.\]
We note that ${\bf F}$ induces a $\mathbf D_k$-linear isomorphism ${\bf F}: M^{(j)} \to M^{(j-1)}$, and ${\bf V}$ induces a $\mathbf D_k$-linear isomorphism ${\bf V}: M^{(j)} \to M^{(j+1)}$. So, we can define the following maps: \begin{enumerate}[(a)] \item $$\varphi_{i, j}:\mm^i\otimes_{\OO_{K'}} N^{(j)} \to \mm^{i-1}\otimes_{\OO_{K'}}N^{(j)}$$ is a natural map induced by the inclusion $\mathfrak m^i \to \mathfrak m^{i-1}$,
\item $$f_{i,j}: \mm^i \otimes_{\OO_{K'}} N^{(j)} \to \mm^i \otimes_{\OO_{K'}} N^{(j-1)}$$ induced by ${\bf F}: N^{(j)} \to N^{(j-1)}$, and
\item $$v_{i,j}: \mm^i \otimes_{\OO_{K'}} N^{(j)} \to \mm^{i-e} \otimes_{\OO_{K'}} N^{(j+1)}$$ given by $v_{i,j}(\lambda \otimes m)= p^{-1}\lambda \otimes {\bf V} m$. \end{enumerate}
For a subset $I$ of $\Z\times \Z$, we let $\mathcal D_I(N)$ denote the system of diagrams (in the category of $\OO_{K'}$-modules) of the objects $\mm^i\otimes N^{(j)}$ where $(i,j) \in I$ and the maps $\varphi_{i,j}, f_{i,j}, v_{i,j}$ between the objects of $\mathcal D_I(N)$. (See \cite{Fontaine}~p.189.)
We define
\[ I_0=\{ (i,j) \in \Z\times \Z \; : \; j\geq 0, \;\; i\geq 0 \text{ if }j=0, \;\; i \geq p^{j-1}-je \text{ if }j \geq 1\}, \] and let
$$N_{\OO_{K'}}\stackrel{def}=\varinjlim\mathcal D_{I_0}(N).$$
For $j'>0$, we also define
\[ I_{j'}=\{ (i,j) \in \Z\times \Z \; : \; j \geq j', \;\; i \geq p^{j-1}-je \}, \] and let
$$N_{\mathcal O_{K'}}[j']\stackrel{def}=\varinjlim\mathcal D_{I_{j'}}(N).$$
When $M$ is a $\mathbf D_k$-module without ${\bf F}$-torsion, it is well-known that $M_{\mathcal O_{K'}}[1]\to M_{\mathcal O_{K'}}$ is injective, and
\[ M/{\bf F} M\cong M_{\mathcal O_{K'}}/ M_{\mathcal O_{K'}}[1] \] (\cite{Fontaine}~5.2.5, Corollaire~1).
\begin{definition} \begin{enumerate}[(a)] \item For an algebra $g$ over $\OO_{K'}$, we can define
\begin{eqnarray*} \omega_g: \hat{CW}(g) &\to & \Qp\otimes g \\ (\cdots, a_{-n},\cdots, a_{-1},a_0) &\mapsto & \sum_{n=0}^{\infty} p^{-n}a_{-n}^{p^n}. \end{eqnarray*}
\item We define $P'(g)$ as the $\mathcal O_{K'}$-submodule of $\Qp\otimes g$ generated by $p^{-n} a^{p^n}$ for all $n\geq 0$ and all $a \in \mathfrak m \cdot g$. \end{enumerate}
We will drop $g$ from $\omega_g$ if it does not cause confusion. \end{definition}
The module $P'(g)$ is not too large. In fact, we have
\[ \mathfrak m \cdot g \subset P'(g) \subset \mathfrak m^{v} \cdot g \] where $v=\min_{n\geq 0}(p^n-ne)$ (in particular, if $e\leq p-1$, then $P'(g)=\mathfrak m \cdot g$). See \cite{Fontaine}~p.197.
It is easy to see $\omega_g$ naturally extends to
\[ \omega'_g: \mathcal O_{K'} \otimes \hat{CW}_k(g/\mm \cdot g) \to \Qp \otimes g/P'(g)\] by choosing a lifting $(\tilde a_{-n})\in \hat{CW}_k(g)$ of $(a_{-n})\in \hat{CW}_k(g/\mm \cdot g)$.
\begin{proposition}[\cite{Fontaine}~Proposition~2.5] \Label{Atlantic} Let $N$ be a $\Dieudonne_k$-module so that ${\bf V} N=N$. Then, the canonical map $\OO_{K'}\otimes N \to N_{\OO_{K'}}$ is surjective, and its kernel is $\sum_{j=1}^{\infty} \mm^{p^{j-1}}\otimes \operatorname{Ker} {\bf V}^j$.
\end{proposition}
\begin{proposition}[\cite{Fontaine}~Lemme~3.1] \Label{Pacific} The kernel of $\omega_g'$ contains $\sum_{j=1}^{\infty} \mm^{p^{j-1}}\otimes \operatorname{Ker} {\bf V}^j$. \end{proposition}
There is a natural map $\mathcal O_{K'} \otimes \hat{CW}_k(g/\mm g) \to \hat{CW}_{k}(g/\mm g)_{\OO_{K'}}$. Note ${\bf V} \hat{CW}_k(g/\mm g)= \hat{CW}_k(g/\mm g)$. Thus, by Propositions~\ref{Atlantic} and \ref{Pacific}, $\omega_g'$ factors through
\[ \omega_g: \hat{CW}_{k}(g/\mm g)_{\OO_{K'}} \to \Qp \otimes g/P'(g) \] (\cite{Fontaine}~p.197).
Recall that $R$ is the affine algebra of $G$. Then, there is the coproduct map $\delta: R\to R\hat\otimes_{\OO_{K'}} R$ which induces the group operation of $G$.
Let $P_R$ be the $R$-module generated by $a^{p^n}/p^n$ for every $a \in R$ and $n\geq 0$. Let $L$ be the set of $a \in P_R$ so that $a\otimes 1 - \delta(a)+1\otimes a=0$. In other words, $L$ is the set of logarithms. It naturally satisfies \[ L/\mathfrak m L \stackrel{\sim}\to M_{\mathcal O_{K'}}/ M_{\mathcal O_{K'}}[1] \cong M/{\bf F} M. \]
Fontaine defined the following functor $G(L, M)$:
\begin{definition}[\cite{Fontaine}~Section~4.4] \Label{Fontaine-Nara} For an algebra $g$ over $\OO_{K'}$ (i.e., $g$ is a ring containing $\OO_{K'}$), $G(L, M)(g)$ is the set of points $(\mathbf y, \mathbf x)$ with $\mathbf x \in \operatorname{Hom}_{\mathbf D_k}( M, CW_k(g/\mm \cdot g))$, and $\mathbf y \in \operatorname{Hom}_{\mathcal O_{K'}}(L, \Qp\otimes g)$ satisfying the following: $\bfx$ naturally induces a map
$$\bfx_{\OO_{K'}}: M_{\mathcal O_{K'}} \to CW_k(g/\mm \cdot g)_{\mathcal O_{K'}}.$$ Then, $(\bfy, \bfx)$ is a fiber product in the sense that $\bfx_{\OO_{K'}}$ and $\bfy$ are identical through
\begin{eqnarray*} \operatorname{Hom}_{\mathcal O_{K'} \otimes \Dieudonne_k}( M_{\mathcal O_{K'}}, CW_k(g/\mm g)_{\mathcal O_{K'}}) \to & \operatorname{Hom}_{\mathcal O_{K'}}( L, \Qp\otimes g/ P'(g))\\ &\uparrow\\ &\operatorname{Hom}_{\mathcal O_{K'}}(L, \Qp\otimes g), \end{eqnarray*} \end{definition}
There is a natural map $i_G:G\to G(L, M)$, and also we can find a map in the reverse direction $j_G:G(L, M)\to G$. These maps are not necessarily isomorphisms unless $e<p-1$. Rather, $i_G\circ j_G=p^t$, $j_G\circ i_G=p^t$ for some $t$ which depends on the ramification index $e$. \end{section}
\begin{section}{Perrin-Riou's insight, and weak bounds for ranks} \Label{Constructing}
In this section, we construct points of formal group schemes over local fields satisfying certain norm relations. The local points we construct are analogous to the points that Perrin-Riou constructed (\cite{Perrin-Riou-1}); indeed, this section is an effort to make her idea work for group schemes defined over ramified fields. As in her work, Fontaine's functor (\cite{Fontaine}, and also \cite{Dieudonne-1}, \cite{Dieudonne-2}) plays a central role, but we need a functor defined for group schemes over ramified fields; the previous section (Section~\ref{Fontaine}) contains a brief discussion of it. Then, again following Perrin-Riou, we construct power series analogous to Mazur's characteristic polynomials of the Selmer groups. Unlike Mazur's characteristic power series, ours are not integral and thus have limited utility. Nonetheless, they give a bound for the coranks of the Selmer groups (thus a bound for the ranks of the Mordell-Weil groups).
\begin{subsection}{Constructing the Perrin-Riou local points} \Label{Some special}
Suppose $k_{\infty}/\Qp$ is a totally ramified normal extension with $\operatorname{Gal}(k_{\infty}/\Qp)\cong \Z_p^{\times}$. By local class field theory, it is given by a Lubin-Tate group of height $1$ over $\Zp$. In other words, there is $\varphi(X)=X^p+\alpha_{p-1} X^{p-1}+\cdots+\alpha_1X \in \Zp[X]$ with $p|\alpha_i$, $v_p(\alpha_1)=1$ so that
\[ k_{\infty}=\cup_n \Qp(\pi_n) \] where $\varphi(\pi_n)=\pi_{n-1}$ ($\pi_n\not=0$ for $n > 0$, $\pi_0=0$).
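\begin{example} (We record the standard cyclotomic instance only for orientation.) Take
\[ \varphi(X)=(1+X)^p-1=X^p+\binom{p}{p-1}X^{p-1}+\cdots+\binom{p}{1}X, \]
whose coefficients $\binom{p}{i}$ ($0<i<p$) are divisible by $p$, with $\binom{p}{1}=p$, so the conditions above hold. Then we may take $\pi_n=\zeta_{p^n}-1$ for a compatible system $(\zeta_{p^n})$ of primitive $p^n$-th roots of unity (and $\pi_0=0$), and $k_{\infty}=\cup_n \Qp(\zeta_{p^n})=\Qp(\mu_{p^{\infty}})$. \end{example}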
\begin{remark} \Label{Edward the Confessor} We can also study a more general case where $k_{\infty}/\Qp$ is ``merely'' ramified (rather than totally ramified). It can certainly be done as the author did in a different context and for a different problem in \cite{Kim-1}. The notation will become much more complicated. \end{remark}
Suppose $K'=\Qp(\pi_N)$ for some $N>0$. Let $\mm=\mm_{\OO_{K'}}$, and let $k$ be $\OO_{K'}/\mm_{\OO_{K'}}$ (which is simply $\mathbb F_p$).
We let $G$ be a formal group scheme over $\mathcal O_{K'}$ such that its reduced group scheme $G_{/k}$ (i.e., the special fiber) is smooth (therefore, $G$ has good reduction).
As in section~\ref{Fontaine}, we set $M=\operatorname{Hom}(G_{/k}, \hat{CW})$, which is a $\mathbf D$-module, and define $L$ as we did in Section~\ref{Fontaine}. In addition, we assume
\begin{assumption} The dimension of $G$ is $1$ (i.e., $L$ is rank $1$ over $\OO_{K'}$). \end{assumption} This assumption will make our work much simpler.
\begin{remark} Even though the author has not thought much about it, the case where the dimension of $G$ is not $1$ may not be so difficult. We only need to consider multiple logarithms. \end{remark} Also, we assume
\begin{assumption} \Label{Assumption-1} Recall that we assume $G$ has good reduction. Also we assume $G$ does not have ordinary reduction. (See Definition~\ref{Calais}.) \end{assumption} Clearly, the case where $G$ has good ordinary reduction is covered well by Mazur's work (\cite{Mazur}).
Since we always assume that ${\bf F}$ acts on $M$ as a topological nilpotent, $M$ can be considered as a $\Zp[[{\bf F}]]$-module.
We set
\[ d=\rank_{\Zp} M. \] Since we assume $G_{/k}$ is of dimension $1$, $\dim_{\mathbb F_p} M/{\bf F} M=1$, thus we may choose $\bfm \in M$ so that it generates $M$ over $\Zp[[{\bf F}]]$. More specifically,
\[ \bfm, {\bf F}\bfm, \cdots, {\bf F}^{d-1} \bfm,\] are $\Zp$-linearly independent, and generate $M$ over $\Zp$.
\begin{remark} In fact, this seems to be the only place in this section where we use the condition that ${\bf F}$ is a topological nilpotent. \end{remark}
We may also choose an $\OO_{K'}$-generator $\bfl$ of $L$. Since $L \subset M_{\OO_{K'}}$, we may write
\begin{eqnarray*} \bfl &=& (\bfl_{ij})_{(i,j)\in I_0},\\ \bfl_{ij} &=& \sum_{k=0}^{d-1} \alpha_k^{(ij)} {\bf F}^k \bfm \in \mm^i \otimes M^{(j)} \end{eqnarray*} for some $\alpha_k^{(ij)} \in \mm^i$.
We set
\[ H(X)={\det}_{\Zp} (X\cdot 1_M-{\bf F}|M)=X^d+a_{d-1}X^{d-1}+\cdots+a_0 \in \Zp[X], \] then
\[ \bar H(X)\stackrel{def}= \displaystyle \frac{H(X)}{a_0} = 1+ \displaystyle \frac{a_1}{a_0}X+\cdots+\frac{a_{d-1}}{a_0}X^{d-1}+\frac1{a_0}X^d. \] We let
\[J(X)\stackrel{def}=\bar H(X) -1=b_1X+b_2X^{2}+\cdots+b_dX^{d} \] then formally we have
\[ \bar H(X)^{-1}=1-J(X)+J(X)^2-\cdots .\]
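\begin{example} (This example is only for orientation.) The case $d=2$ is the case of the formal group of an elliptic curve with good supersingular reduction, treated in Section~\ref{Case 2}; there $H(X)=X^2-a_pX+p$, so
\[ \bar H(X)=1-\frac{a_p}{p}X+\frac1p X^2, \qquad J(X)=-\frac{a_p}{p}X+\frac1p X^2, \]
that is, $b_1=-a_p/p$ and $b_2=1/p$. Note that $pb_1=-a_p$ and $p^2b_2=p$ both lie in $p\Zp$ because $p|a_p$; this is the integrality used in the proof of Proposition~\ref{Sino-Japanese War} below. Also, $\bfm, {\bf F}\bfm$ form a $\Zp$-basis of $M$, and ${\bf F}^2\bfm=a_p{\bf F}\bfm-p\bfm$. \end{example}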
\begin{notation} \begin{enumerate} \item Recall $H(X)=X^d+a_{d-1}X^{d-1}+\cdots+ a_0$, and $\varphi(X)=X^p+\alpha_{p-1} X^{p-1}+\cdots+\alpha_1X$. Define
\[ \epsilon\stackrel{def}=\displaystyle \frac{a_0 \alpha_{p-1}}{p\cdot (a_0+a_1+\cdots+a_{d-1}+1)}. \] (Note that $\epsilon \in p\Zp$: indeed $a_0, \alpha_{p-1} \in p\Zp$, while $a_0+a_1+\cdots+a_{d-1}+1 \in 1+p\Zp$.)
\item Let $\mathcal P$ be the $\Zp[[X]]$-submodule of $\Qp[[X]]$ which is generated by $\frac{X^{p^n}}{p^n}$ for $n=0,1,2,\cdots$. And, let $\bar{\mathcal P}=\mathcal P/p\Zp[[X]]$, which is isomorphic to $\hat{CW}(\mathbb F_p[[X]])$ through
\begin{eqnarray*} \omega: \hat{CW}(\mathbb F_p[[X]]) &\to & \bar{\mathcal P} \\ (\cdots, a_{-1}, a_0) &\mapsto & \sum_{n=0}^{\infty} \displaystyle \frac{\tilde a_{-n}^{p^n}}{p^n} \end{eqnarray*} ($\tilde a_{-n} \in \Zp[[X]]$ is a lifting of $a_{-n}$).
\item Let $\varphi$ be an operator on $\mathcal P$ given by
\[ \varphi(X^n):=\varphi(X)^n \] which is equivalent to ${\bf F}$ on $\bar{\mathcal P} \cong \hat {CW}(\mathbb F_p[[X]])$. (More precisely, for $a\in W$,
\[ \varphi(a)=\sigma(a) \] where $\sigma$ is the $p$-th Frobenius map on $W$, and thus $\varphi$ is a $\sigma$-linear operator. But, here we have $W=\Zp$, so we can safely ignore this.)
Then, we define \[ l(X)=\left[ 1-J(\varphi)+J(\varphi)^2-\cdots \right] \circ X. \]
\item Define $\bfx \in G_{/k}(\mathbb F_p[[X]]) \cong \operatorname{Hom}_{\Dieudonne}(M, \hat {CW}(\mathbb F_p[[X]])) \cong \operatorname{Hom}_{\Dieudonne}(M, \bar{\mathcal P})$ by
\[ \bfx(\bfm)= l(X) \pmod{p\Zp[[X]]}\] and extend $\Dieudonne$-linearly. (Note that $\operatorname{Hom}_{\Dieudonne}(M, \bar{\mathcal P}) \cong \operatorname{Hom}_{\Zp[{\bf F}]}(M, \bar{\mathcal P})$ by \cite{Perrin-Riou-1}~Section~3.1~p.261.) \end{enumerate} \end{notation}
\begin{proposition} \Label{Sino-Japanese War} $\bfx$ is well-defined. \end{proposition} \begin{proof} First, we need to show $l(X)$ is well-defined. Because $G$ has supersingular reduction, $p^i b_i \in p\Zp$ ($i=1,2,\cdots, d$). Thus, $l(X)$ is well-defined (i.e., the infinite summation which defines $l(X)$ converges). Then, we check \[ (1+J(\varphi))\circ \left\{ 1-J(\varphi)+J(\varphi)^2-\cdots \right\} \circ X=X.\]
Since ${\bf F}$ is a topological nilpotent, $p|a_0$, thus $H(\varphi)\circ l(X)=a_0X \in p\Zp[[X]]$, in other words, $H({\bf F})\circ l(X)=0 \in \hat CW(\mathbb F_p [[X]])$. Since $H(X)$ is irreducible, $\bfx$ extends to the entire $M$ $\Dieudonne$-linearly. \end{proof}
\begin{notation} \Label{Moscow} \begin{enumerate} \item Define a lifting $\tilde \bfx \in \operatorname{Hom}_{\Zp}(M, \mathcal P)$ of $\bfx$ by
\[ \tilde \bfx({\bf F}^k \bfm)=\epsilon+\varphi^k \circ l(X)=\epsilon+ l(\varphi^{(k)}(X)), \quad k=0,1,\cdots,d-1\] where $\varphi^{(k)}=\varphi(\varphi(\cdots(X)))$ ($k$-times).
\item Recall
\begin{eqnarray*} \bfl &=& (\bfl_{ij})_{(i,j)\in I_0},\\ \bfl_{ij} &=& \sum_{k=0}^{d-1} \alpha_k^{(ij)} {\bf F}^k \bfm \in \mm^i \otimes M^{(j)}. \end{eqnarray*} We can write
\[ {\bf F}^j \bfl_{ij} = \sum_{k=0}^{d-1} \beta_k^{(ij)} {\bf F}^k \bfm\] for some $\beta_k^{(ij)}\in \mm^i$.
\item Define $\bfy \in \operatorname{Hom}_{\OO_{K'}}(L, K'[[X]])$ explicitly as follows:
We set
\[ \bfy(\bfl)=\sum_{(i,j)\in I_0} \sum_{k=0}^{d-1} \beta_k^{(ij)} \tilde\bfx ({\bf F}^k \bfm) \] and extend to $L$ $\OO_{K'}$-linearly.
\item Then, we set $P=(\bfy, \bfx) \in G(L, M)(\OO_{K'}[[X]])$. \end{enumerate} \end{notation}
\begin{proposition} $P=(\bfy, \bfx)$ is well-defined. \end{proposition} \begin{proof} We need to show it is a fiber product in the sense of Definition~\ref{Fontaine-Nara}. We let $\bfx$ also denote the extended map $\bfx: M_{\OO_{K'}}\to \hat{CW}(\mathbb F_p[[X]])_{\OO_{K'}}$.
For each $\bfl_{ij}\in \mm^i \otimes M^{(j)}$,
$$\bfx(\bfl_{ij})=\bfx(\sum_{k=0}^{d-1} \alpha_k^{(ij)} {\bf F}^k \bfm)=\sum_{k=0}^{d-1} \alpha_k^{(ij)} \bfx({\bf F}^k \bfm) \in \mm^i \otimes \hat{CW}(\mathbb F_p[[X]])^{(j)}.$$
Because $\omega$ on $\hat{CW}(\mathbb F_p[[X]])_{\OO_{K'}}$ is deduced from $\omega: \OO_{K'}\otimes \hat{CW}(\mathbb F_p[[X]]) \to K'[[X]]/P'(\OO_{K'}[[X]])$ through $\OO_{K'}\otimes \hat{CW}(\mathbb F_p[[X]]) \to \hat{CW}(\mathbb F_p[[X]])_{\OO_{K'}}$, to evaluate $\omega$ on $\sum_{k=0}^{d-1} \alpha_k^{(ij)} \bfx({\bf F}^k \bfm) \in \mm^i \otimes \hat{CW}(\mathbb F_p[[X]])^{(j)}$, we need to send it to $p^j \cdot \mm^i \otimes \hat{CW}(\mathbb F_p[[X]])$ by ${\bf F}^j$, and obtain
\begin{eqnarray*} \omega(\bfx(\bfl_{ij})) &=& \omega \left( {\bf F}^j\sum_{k=0}^{d-1} \alpha_k^{(ij)} \bfx({\bf F}^k \bfm) \right) \\ &=& \omega \left( \sum_{k=0}^{d-1} \beta_k^{(ij)} \bfx({\bf F}^k \bfm) \right) \\ &=& \sum_{k=0}^{d-1} \beta_k^{(ij)} l(\varphi^k(X)) \pmod{P'(\OO_{K'}[[X]])}. \end{eqnarray*} Thus, $\omega(\bfx(\bfl))=\bfy(\bfl) \pmod{P'(\OO_{K'}[[X]])}$, and by extending $\OO_{K'}$-linearly, $\bfx=\bfy$ as elements of $\operatorname{Hom}_{\OO_{K'}}(L, K'[[X]]/P'(\OO_{K'}[[X]]))$, and our claim follows. \end{proof}
For simplicity, let $\operatorname{Tr}_{n/m}$ denote $\operatorname{Tr}_{K'(\pi_n)/K'(\pi_m)}$.
\begin{proposition} \Label{Despicable-Laundry-Machine} Modulo torsions, we have
\begin{eqnarray*} \operatorname{Tr}_{n/n-d} P(\pi_n) &=& -p\cdot b_1\cdot \operatorname{Tr}_{n-1/n-d} P(\pi_{n-1} ) - p^2 \cdot b_2 \cdot \operatorname{Tr}_{n-2/n-d} P(\pi_{n-2} ) \\ && - \quad \cdots \quad - p^d \cdot b_d\cdot P(\pi_{n-d} ) \end{eqnarray*} for every $n\geq N+d$. \end{proposition}
\begin{proof} Note that $(0, \mathbf z) \in G(L, M)(g)$ is a torsion point for any $\mathbf z \in G_{/k}(g/\mm g)$. Thus, we only need to show the identity of the $L$-parts.
First, we find
\begin{eqnarray*} \operatorname{Tr}_{n/n-1} l(\pi_n) &=& \operatorname{Tr}_{n/n-1} \left. \left[ 1-J(\varphi)+J(\varphi)^2-\cdots \right] \circ X \right|_{X=\pi_n} \\
&=&\operatorname{Tr}_{n/n-1} \pi_n - \operatorname{Tr}_{n/n-1} J(\varphi)\circ \left. \left[ 1-J(\varphi)+J(\varphi)^2-\cdots \right]\circ X \right|_{X=\pi_n} \\ &=& -\alpha_{p-1}-\operatorname{Tr}_{n/n-1} \left[ b_1l(\varphi(X))+\cdots+b_{d}l(\varphi^{(d)}(X))\right]_{X=\pi_n} \\ &=& -\alpha_{p-1}-p\cdot \left[b_1 l(\pi_{n-1})+\cdots+b_d l(\pi_{n-d}) \right]. \end{eqnarray*} Then, we can also find
\begin{eqnarray*} \operatorname{Tr}_{n/n-d} l(\pi_n) &=& -p^{d-1}\alpha_{p-1}-p\cdot b_1 \cdot \operatorname{Tr}_{n-1/n-d} l(\pi_{n-1}) \\ && -\quad \cdots \quad -p^{d-1}\cdot b_{d-1} \cdot \operatorname{Tr}_{n-d+1/n-d} l(\pi_{n-d+1})-p^d\cdot b_d \cdot l(\pi_{n-d}). \end{eqnarray*}
We recall that $b_1=\frac{a_1}{a_0},\cdots, b_{d-1}=\frac{a_{d-1}}{a_0}, b_d=\frac 1{a_0}$, thus from the definition of $\epsilon$, we have
\[ p^d\cdot \left( 1+\displaystyle \frac{a_1}{a_0}+\cdots+\frac{a_{d-1}}{a_0}+\frac1{a_0} \right) \cdot \epsilon=p^{d-1} \cdot \alpha_{p-1}. \] Thus, we have
\begin{multline} \Label{Tokyo} \operatorname{Tr}_{n/n-d}(\epsilon+l(\pi_n)) =-p\cdot \displaystyle \frac{a_1}{a_0} \cdot \operatorname{Tr}_{n-1/n-d} (\epsilon+l(\pi_{n-1})) \\ \qquad \qquad \qquad - \quad \cdots \quad - p^{d-1}\cdot \displaystyle \frac{a_{d-1}}{a_0} \cdot \operatorname{Tr}_{n-d+1/n-d} (\epsilon+ l(\pi_{n-d+1})) -p^d\cdot \frac1{a_0} \cdot (\epsilon+ l(\pi_{n-d})). \end{multline} Similarly, we check the following: For $0<i<d$,
\begin{eqnarray*} (\varphi^i\circ l)(\pi_n) &=& \varphi^{(i)}(\pi_n) -\varphi^i\circ J(\varphi)\circ [1-J(\varphi)+J(\varphi)^2-\cdots ] \circ X |_{\pi_n} \\ &=& \pi_{n-i} - \left[ b_1 l(\pi_{n-i-1})+\cdots+b_d l(\pi_{n-i-d}) \right]. \end{eqnarray*} Then, we have
\begin{eqnarray*} \operatorname{Tr}_{n/n-d} (\varphi^i\circ l) (\pi_n) &=& -p^{d-1}\alpha_{p-1}-p\cdot b_1 \cdot \operatorname{Tr}_{n-1/n-d} l(\pi_{n-i-1}) \\ && -\quad \cdots \quad -p^{d-1}\cdot b_{d-1} \cdot \operatorname{Tr}_{n-d+1/n-d} l(\pi_{n-i-d+1})-p^d\cdot b_d \cdot l(\pi_{n-i-d}) \\ &=& -p^{d-1}\alpha_{p-1}-p\cdot b_1 \cdot \operatorname{Tr}_{n-1/n-d} (\varphi^i\circ l)(\pi_{n-1}) \\ && -\quad \cdots \quad -p^{d-1}\cdot b_{d-1} \cdot \operatorname{Tr}_{n-d+1/n-d} (\varphi^i\circ l) (\pi_{n-d+1}) \\ &&-p^d\cdot b_d \cdot (\varphi^i\circ l) (\pi_{n-d}), \end{eqnarray*} and by repeating the argument used above, we obtain an identity analogous to (\ref{Tokyo}).
Recall
\[ \bfy(\bfl)=\sum_{(i,j)\in I_0} \sum_{k=0}^{d-1} \beta_k^{(ij)} \left(\epsilon+l(\varphi^{(k)}(X))\right) \] from Notation~\ref{Moscow}. By the above discussion, we have
\begin{eqnarray*} \operatorname{Tr}_{n/n-d} \bfy(\bfl)|_{X=\pi_n} &=& -p\cdot b_1\cdot \operatorname{Tr}_{n-1/n-d} \bfy(\bfl) |_{X=\pi_{n-1}} - p^2 \cdot b_2 \cdot \operatorname{Tr}_{n-2/n-d} \bfy(\bfl) |_{X=\pi_{n-2}} \\
&& - \quad \cdots \quad - p^d \cdot b_d\cdot \bfy(\bfl) |_{X=\pi_{n-d}} \end{eqnarray*} and by extending it to $L$ $\OO_{K'}$-linearly, we obtain our claim. \end{proof}
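\begin{remark} (This remark is only an illustration.) For $d=2$ and $H(X)=X^2-a_pX+p$ (the elliptic curve case of Section~\ref{Case 2}), so that $b_1=-a_p/p$ and $b_2=1/p$, Proposition~\ref{Despicable-Laundry-Machine} reads, modulo torsions,
\[ \operatorname{Tr}_{n/n-2} P(\pi_n)=a_p\cdot \operatorname{Tr}_{n-1/n-2} P(\pi_{n-1})-p\cdot P(\pi_{n-2}), \]
which has the same shape as the classical trace relation $\operatorname{Tr}_{\Q_{p,n}/\Q_{p,n-2}}P_n-a_p\operatorname{Tr}_{\Q_{p,n-1}/\Q_{p,n-2}}P_{n-1}+pP_{n-2}=0$ for supersingular local points over the cyclotomic tower, recalled at the end of Section~\ref{Case 1}. \end{remark}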
\end{subsection}
\begin{subsection}{Construction for Kummer Extensions} \Label{Kummer}
Now we define a slightly different operator $\varphi$ on $\Zp[[X]]$ by
\[ \varphi(X)=X^p, \quad \varphi(a)=\sigma(a), a \in \Zp \] where $\sigma$ is the $p$-th Frobenius map mentioned earlier (actually, $\sigma$ acts trivially on $\Zp$, so the action of $\varphi$ on $\Zp$ is purely symbolic.)
\begin{notation} \begin{enumerate} \item $K'$ is a totally ramified extension of $\Qp$, and $\zeta_p \not\in K'$. Let $\mm$ denote $\mm_{\OO_{K'}}$.
\item Set $e=[K':\Qp]$. Assume $e<p$.
\item Choose a uniformizer $\pi$ of $K'$, and choose $\pi_n$ for every $n \geq 0$ such that
\[ \pi_0=\pi,\quad \pi_{n+1}^p=\pi_n \quad \text{ for every }\quad n \geq 0.\]
\item For any $n\geq m\geq 0$, we let $\operatorname{Tr}_{n/m}$ denote $\operatorname{Tr}_{K'(\pi_n)/K'(\pi_m)}$. \end{enumerate} \end{notation}
Suppose $G$ is a formal group scheme of dimension $1$ over $\OO_{K'}$, its reduced scheme $G_{/k}$ over $k=\OO_{K'}/\mm$ is smooth (thus $G$ has good reduction), and $G$ has supersingular reduction. We recall from Section~\ref{Fontaine} that a Honda system $(M, L)$ is attached to $G$.
As in Section~\ref{Some special}, we choose an $\OO_{K'}$-generator $\bfl$ of $L$ and a $\Zp[{\bf F}]$-generator $\bfm$ of $M$. Then,
\begin{eqnarray*} \bfl = (\bfl_{ij})_{(i,j) \in I_0}, \quad \bfl_{ij}= \sum_{k=0}^{d-1} \alpha_k^{(ij)} {\bf F}^k \bfm \in \mm^i \otimes M^{(j)} \end{eqnarray*} for some $\alpha_k^{(ij)} \in K'$.
Again, similar to Section~\ref{Some special}, we define
\begin{notation}
\[ H(X)={\det}_{\Zp} (X\cdot 1_M-{\bf F}|M)=X^d+a_{d-1}X^{d-1}+\cdots+a_0 \in \Zp[X], \]
\[ \bar H(X)\stackrel{def}= \displaystyle \frac{H(X)}{a_0},\]
\[ J(X)\stackrel{def}=\bar H(X) -1=b_1X+b_2X^{2}+\cdots+b_dX^{d} \]
\[ l(X)\stackrel{def}= \left\{ 1-J(\varphi)+J(\varphi)^2-\cdots \right\} \circ X. \] \end{notation}
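For instance (again only as an illustration), if $d=2$ and $H(X)=X^2+p$, so that $b_1=0$ and $J(X)=\frac1p X^2$, then, since $\varphi(X)=X^p$ here,
\[ l(X)=X-\frac1p X^{p^2}+\frac1{p^2}X^{p^4}-\cdots=\sum_{k=0}^{\infty}\frac{(-1)^k}{p^k}X^{p^{2k}}, \]
a familiar shape for the logarithm of a supersingular formal group.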
\begin{proposition} Recall $G(k[[X]])\cong \operatorname{Hom}_{\Zp[{\bf F}]}(M, \bar{\mathcal P})$ (\cite{Perrin-Riou-1}~Section~3.1 p.261). We define $\bfx \in G(k[[X]])$ by
\[ \bfx(\bfm)=l(X) \pmod{p\Zp[[X]]},\] and extend $\Zp[{\bf F}]$-linearly. Then, $\bfx$ is well-defined. \end{proposition} \begin{proof} See Proposition~\ref{Sino-Japanese War}. \end{proof}
Now, we choose a lifting $\bfy \in \operatorname{Hom}_{\OO_{K'}}(L, K'[[X]])$ of $\bfx$ as follows:
\begin{notation}
\begin{enumerate} \item Define a lifting $\tilde \bfx \in \operatorname{Hom}_{\Zp}(M, \mathcal P)$ of $\bfx$ by
\[ \tilde \bfx({\bf F}^i \bfm)=\varphi^i \circ l(X)=l(X^{p^i}), \quad i=0,1,\cdots,d-1.\] Then, define $\bfy \in \operatorname{Hom}_{\OO_{K'}}(L, K'[[X]])$ explicitly as follows:
Write ${\bf F}^j \bfl_{ij}=\sum_{k=0}^{d-1} \beta_k^{(ij)} {\bf F}^k \bfm$ for some $\beta_k^{(ij)} \in K'$. We set
\[ \bfy(\bfl)=\sum_{(i,j) \in I_0} \sum_{k=0}^{d-1} \beta_k^{(ij)} \tilde\bfx ({\bf F}^k\bfm)= \sum_{(i,j) \in I_0} \sum_{k=0}^{d-1} \beta_k^{(ij)} l(X^{p^k}) \] and extend $\bfy$ $\OO_{K'}$-linearly.
\item Then, we set $P=(\bfy, \bfx) \in G(L, M)(\Zp[[X]] \otimes \OO_{K'})$.
\end{enumerate} \end{notation}
We note
\begin{eqnarray} \Label{Note} \operatorname{Tr}_{n/n-1} \pi_n^i =0 \quad \text{for all }n >0 \end{eqnarray} for $i\leq e$ because $e<p$.
\begin{proposition} \Label{Chicken-Burger} For $n>d$ and $i=1,2,\cdots, e$, modulo torsions, we have
\begin{eqnarray*} \operatorname{Tr}_{n/n-d} P(\pi_n^i) &=& -p\cdot b_1\cdot \operatorname{Tr}_{n-1/n-d} P(\pi_{n-1}^i) - p^2 \cdot b_2 \cdot \operatorname{Tr}_{n-2/n-d} P(\pi_{n-2}^i) \\ && - \quad \cdots \quad -b_d\cdot p^d \cdot P(\pi_{n-d}^i). \end{eqnarray*} \end{proposition}
\begin{proof} This is similar to Proposition~\ref{Despicable-Laundry-Machine} in Section~\ref{Some special}, so we will provide only a brief proof.
\begin{eqnarray*} \operatorname{Tr}_{n/n-1}l(\pi_n^i)&=& \operatorname{Tr}_{n/n-1}\left\{ \pi_n^i -[J(\varphi)-J(\varphi)^2+\cdots]\circ X|_{X=\pi_n^i} \right\} \\
&=& \operatorname{Tr}_{n/n-1} \left\{-[J(\varphi)-J(\varphi)^2+\cdots]\circ X|_{X=\pi_n^i} \right\} \\ &=& -p\cdot (J(\varphi)\circ l)(\pi_n^i). \end{eqnarray*} The last line is equal to
\begin{eqnarray*} -p\cdot (J(\varphi)\circ l)(\pi_n^i)&=& -p \left\{ b_1 \cdot l(\pi_n^{i\cdot p})+b_2 \cdot l(\pi_n^{i\cdot p^2})+\cdots+b_d \cdot l(\pi_n^{i\cdot p^d}) \right\} \\ &=& -p \left\{ b_1 \cdot l(\pi_{n-1}^i) + b_2 \cdot l(\pi_{n-2} ^i)+ \cdots + b_d \cdot l(\pi_{n-d}^i) \right\}. \end{eqnarray*} Thus by applying $\operatorname{Tr}_{n-1/n-d}$ to it, we have
\begin{eqnarray*} \operatorname{Tr}_{n/n-d} l(\pi_n^i) &=& -p \cdot b_1 \cdot \operatorname{Tr}_{n-1/n-d} l(\pi_{n-1}^i)-p^2 \cdot b_2 \cdot \operatorname{Tr}_{n-2/n-d}l(\pi_{n-2}^i) \\ &&-\quad \cdots\quad -p^d \cdot b_d \cdot l(\pi_{n-d}^i). \end{eqnarray*}
Also similar to Proposition~\ref{Despicable-Laundry-Machine}, for $j=1,\cdots, d-1$ we have
\begin{eqnarray*} \operatorname{Tr}_{n/n-d} l((\pi_n^i)^{p^j}) &=& -b_1 \cdot p \cdot \operatorname{Tr}_{n-1/n-d} l((\pi_{n-1}^i)^{p^j} )- b_2 \cdot p^2\cdot \operatorname{Tr}_{n-2/n-d} l((\pi_{n-2}^i)^{p^j}) \\ &&- \quad \cdots \quad - b_d \cdot p^d \cdot l((\pi_{n-d}^i)^{p^j}) . \end{eqnarray*}
Thus, we have
\begin{eqnarray*} \operatorname{Tr}_{n/n-d} \bfy (\bfl)|_{X=\pi_n^i} &=& -p \cdot b_1 \cdot \operatorname{Tr}_{n-1/n-d} \bfy (\bfl)|_{X=\pi_{n-1}^i} \\
&&-p^2 \cdot b_2 \cdot \operatorname{Tr}_{n-2/n-d} \bfy (\bfl)|_{X=\pi_{n-2}^i} \\ &&-\quad \cdots\quad \\
&&-p^d \cdot b_d \cdot \bfy (\bfl)|_{X=\pi_{n-d}^i}. \end{eqnarray*}
Similar to Proposition~\ref{Despicable-Laundry-Machine}, we obtain our claim. \end{proof}
The problem is that we do not know whether these points are useful or not. The extension $K'(\pi_{\infty})/K'$ is not even normal. Its normal closure $K'(\pi_{\infty}, \zeta_{p^{\infty}})/K'$ is not abelian. So, it seems impossible to use Iwasawa Theory, and the author cannot see any other use for them.
\end{subsection}
\begin{subsection}{The Perrin-Riou characteristics, and weak bounds for ranks} \Label{Case 1}
In this section, we apply the construction in Section~\ref{Some special}. As in that section, we suppose $k_{\infty}/\Qp$ is a totally ramified normal extension with $\operatorname{Gal}(k_{\infty}/\Qp)\cong \Z_p^{\times}$. By local class field theory, it is given by a Lubin-Tate group of height $1$ over $\Zp$. In other words, there is $\varphi(X)=X^p+\alpha_{p-1} X^{p-1}+\cdots+\alpha_1X \in \Zp[X]$ with $p|\alpha_i$, $v_p(\alpha_1)=1$ so that
\[ k_{\infty}=\cup_n \Qp(\pi_n) \] where $\varphi(\pi_n)=\pi_{n-1}$ ($\pi_n\not=0$ for $n > 0$, $\pi_0=0$).
We let $F$ be a number field, $F_{\infty}$ be a $\Zp$-extension of $F$ (i.e., $\operatorname{Gal}(F_{\infty}/F) \cong \Zp$), $A$ be an abelian variety over $F$, and $A'$ be its dual abelian variety over $F$, so that for every integer $n$ there is a Weil pairing $e_n: A[n]\times A'[n]\to \Z/n\Z$ which is non-degenerate and compatible with the action of $G_F$. Let $\mathbf T=T_pA$, and let $\mathbf A\stackrel{def}=\varinjlim \mathbf T/p^n \mathbf T$.
In this section, we suppose there is only one prime $\mathfrak p$ of $F$ above $p$, $\mathfrak p$ is totally ramified in $F/\Q$, $\mathfrak p$ is totally ramified in $F_{\infty}/F$, $F_{\infty, \mathfrak p}=k_{\infty}$, and $F_{\mathfrak p}=\Qp(\pi_N)$ for some $N\geq 1$.
Let $G/\OO_{F_{\mathfrak p}}$ denote the formal completion of $A'/F_{\mathfrak p}$. As in Section~\ref{Some special}, we assume $G$ has dimension $1$, which means that the group of its logarithms has rank $1$ over $\OO_{F_{\mathfrak p}}$.
\begin{example} An obvious example that satisfies all these conditions is an elliptic curve $E$ defined over $\rat(\zeta_{p^N})$ with good supersingular reduction at the unique prime $\mathfrak p$ above $p$. \end{example}
We recall the points $P(\pi_n) \in G(L, M)(\mm_{\Qp(\pi_n)})$ constructed in Section~\ref{Some special}.
\begin{assumption} There is $M'>0$ so that $M' \cdot G(\OO_{k_{\infty}})_{tors}=0$. \end{assumption}
This assumption is obviously true if $G(\OO_{k_{\infty}})_{tors}$ is finite.
\begin{definition}
\begin{enumerate}[(a)] \item Let $M$ be the Dieudonne module $\operatorname{Hom}(G_{/\mathbb F_p}, \hat{CW})$, and $L$ be the set of logarithms of $G$ as defined in Section~\ref{Fontaine}.
\item As in Section~\ref{Some special}, we set
\[ H(X)={\det}_{\Zp} (X\cdot 1_M-{\bf F}|M)=X^d+a_{d-1}X^{d-1}+\cdots+a_0 \in \Zp[X], \] and
\begin{eqnarray*} \bar H(X) &\stackrel{def}=& \displaystyle \frac{H(X)}{a_0} = 1+ \displaystyle \frac{a_1}{a_0}X+\cdots+\frac{a_{d-1}}{a_0}X^{d-1}+\frac1{a_0}X^d \\ &=&1+b_1X+b_2X^{2}+\cdots+b_dX^{d}. \end{eqnarray*}
\end{enumerate} \end{definition}
\begin{definition} From Section~\ref{Fontaine}, recall that there is a natural map $i_G:G\to G(L, M)$, and a map $j_G:G(L, M)\to G$ so that $i_G\circ j_G=p^t$, $j_G\circ i_G=p^t$ for some $t$ which depends on the ramification index $e$. Also, let $i':G\to A'$ be the natural injection from the formal group scheme $G$ to the abelian variety $A'$. We define
\begin{enumerate}[(a)] \item Where $e=[\Qp(\pi_N):\Qp]=[F_{\mfp}:\Qp]$, let
$$\{ \pi_{N,1},\cdots, \pi_{N, e}\}=\{ \pi_N^{\sigma} \}_{\sigma \in\operatorname{Gal}(\Qp(\pi_N)/\Qp)}.$$
\item Then, for every $n> N$ and for each $i=1,\cdots,e$, choose $\pi_{n,i}$ so that $\varphi(\pi_{n,i})=\pi_{n-1,i}$.
\item For $i=1,\cdots, e$, \[ Q(\pi_{N+n, i})=M'\cdot i'\circ j_G\left( P(\pi_{N+n, i}) \right) \in A'(F_{n, \mfp}). \] \end{enumerate} \end{definition}
\begin{proposition} \Label{Lunch} For every $n\geq d$, we have
\begin{eqnarray*} \operatorname{Tr}_{F_{n, \mathfrak p}/F_{n-d, \mathfrak p}} Q(\pi_{N+n, i}) &=& -p\cdot b_1\cdot \operatorname{Tr}_{F_{n-1, \mathfrak p}/F_{n-d, \mathfrak p}} Q(\pi_{N+n-1, i}) \\ &&- p^2 \cdot b_2 \cdot \operatorname{Tr}_{F_{n-2, \mathfrak p}/F_{n-d, \mathfrak p}} Q(\pi_{N+n-2, i}) \\ && - \quad \cdots \quad - p^d \cdot b_d\cdot Q(\pi_{N+n-d, i}). \end{eqnarray*} \end{proposition} \begin{proof} Note that $M'$ annihilates every torsion of $G(\OO_{F_{n-d, \mfp}})$. Thus, the claim follows immediately from Proposition~\ref{Despicable-Laundry-Machine}. \end{proof}
\begin{definition}[Relaxed Selmer groups] \Label{Relaxed Selmer} Where $L$ is a number field, \[ \Selr(\mathbf A/L)\stackrel{def}= \ker \left( H^1(L, \mathbf A)\to \prod_{v\nmid p} \displaystyle \frac{H^1(L_v, \mathbf A)}{H^1_f(L_v, \mathbf A)} \right)\] where
\[ H^1_f (L_v, \mathbf A)\stackrel{def}=H^1_{un}(L_v, \mathbf A)\stackrel{def}=H^1(L_v^{un}/L_v, \mathbf A^{G_{L_v^{un}}}).\] \end{definition} In fact, when ${G_{L_v^{un}}}$ acts trivially on $\mathbf A$ (i.e., good reduction at $v$), $H^1_{un}(L_v, \mathbf A)$ is the standard definition of the local condition $H^1_f(L_v, \mathbf A)$. (Local conditions at finitely many primes not above $p$ do not affect our result.)
Set
\[ \Gamma\stackrel{def}=\operatorname{Gal}(F_{\infty}/F),\] \[ \Lambda\stackrel{def}=\Zp[[\Gamma]]\cong \Zp[[X]] \] where the last isomorphism is (non-canonically) given by choosing a topological generator $\gamma$ of $\Gamma$ and sending $\gamma$ to $X+1$.
\begin{assumption} \Label{Fries} Let $M^{\vee}$ denote the Pontryagin dual $\operatorname{Hom}(M,\rat/\Z)$. We assume \[ \rank_{\Lambda} \Selr(\mathbf A/ F_{\infty})^{\vee}=[F_{\mfp}:\Qp]=e. \] \end{assumption} If $\dim G$ is not $1$, then we would probably need to replace $e$ by $e\cdot\dim G$ in Assumption~\ref{Fries}. We can show Assumption~\ref{Fries} is true if $\operatorname{Sel}(\mathbf A/F)$ or $\operatorname{Sel}(\mathbf A/F_n)^{\chi}$ for some primitive character $\chi$ of $\operatorname{Gal}(F_n/F)$ is finite. Although there are some notable counterexamples to this assumption (for instance, when $F_{\infty}$ is the anti-cyclotomic extension), for all intents and purposes, it is a safe assumption.
Let
\[ S_{tor}=\left( \Selr(\mathbf A/ F_{\infty})^{\vee} \right)_{\Lambda-torsion}.\] If we assume Assumption~\ref{Fries}, then there is a short exact sequence
\begin{eqnarray} \Label{Ice-Cream} 0 \to \Selr(\mathbf A/F_{\infty})^{\vee}/S_{tor} \to \Lambda^e \to C \to 0 \end{eqnarray} for a finite group $C$.
\begin{notation} \begin{enumerate} \item For each $n\geq 0$,
\[ \Gamma_n=\Gamma/\Gamma^{p^n}, \quad \Lambda_n=\Zp[\Gamma_n]. \] \item For a group $M$ on which $\Gamma$ acts,
\[ M_{/\Gamma^{p^n}}=M/\{ (1-a)\cdot m \; |\; a \in \Gamma^{p^n}, m \in M\}. \] Equivalently, where $\gamma$ is a topological generator of $\Gamma$,
\[ M_{/\Gamma^{p^n}}=M/(1-\gamma^{p^n})\cdot M. \] \end{enumerate} \end{notation}
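For instance (a standard example), under the identification $\gamma=X+1$ we have
\[ \Lambda_{/\Gamma^{p^n}}=\Lambda/\big((1+X)^{p^n}-1\big)\Lambda\cong \Zp[\Gamma_n]=\Lambda_n. \]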
\begin{lemma} Suppose there is an exact sequence of $\Lambda$-modules
\[ 0 \to A_1\to A_2 \to A_3 \to A_4 \to 0, \] and $A_1$ and $A_4$ are finite. Then, for every $n$, the orders of the kernel and cokernel of
\[ \left( A_2 \right)_{/\Gamma^{p^n}} \to \left( A_3 \right)_{/\Gamma^{p^n}} \]
are bounded by $|A_1|\cdot|A_4|$. \end{lemma}
\begin{proof} The exact sequence induces two short exact sequences
\[ 0 \to A_1 \to A_2 \to A_2/A_1 \to 0,\] \[ 0 \to A_2/A_1 \to A_3 \to A_4 \to 0, \] which in turn induce
\[ (A_1)_{/\Gamma^{p^n}} \to (A_2)_{/\Gamma^{p^n}} \to (A_2/A_1)_{/\Gamma^{p^n}} \to 0,\] \[ (A_4)^{\Gamma^{p^n}} \to (A_2/A_1)_{/\Gamma^{p^n}} \to (A_3)_{/\Gamma^{p^n}} \to (A_4)_{/\Gamma^{p^n}} \to 0.\] Our claim follows immediately. \end{proof}
It is not difficult to show $\Selr(\mathbf A/F_n) \to \Selr(\mathbf A/F_{\infty})^{\Gamma^{p^n}}$ has bounded kernel and cokernel for every $n$. For simplicity, we assume it is an isomorphism; this does not affect our argument.
The map in (\ref{Ice-Cream}) induces the following:
\[ \alpha_n: (\Selr(\mathbf A/F_{\infty})^{\vee}/S_{tor})_{/\Gamma^{p^n}} \to \Lambda_n^e \] which induces
\[ \alpha_n': \Selr(\mathbf A/F_n)^{\vee} \to \Lambda_n^e \] by the above assumption. We note that there is a map
\[ \beta_n: A'(F_{n, \mathfrak p}) \to \Selr(\mathbf A/F_n)^{\vee} \] given by the local Tate duality which states that $A'(F_{n, \mathfrak p})$ is the Pontryagin dual of $H^1(F_{n, \mathfrak p}, \mathbf A)/A(F_{n, \mathfrak p})\otimes \Qp/\Zp$.
\begin{definition} \begin{enumerate}[(a)] \item Let $R(\pi_{N+n, i}) \in \Lambda_n^e$ be the image of $Q(\pi_{N+n, i})$ under $\alpha_n' \circ \beta_n$.
\item Let $\Proj_n^m$ be the natural projection from $\Lambda_m$ to $\Lambda_n$ ($m\geq n$). \end{enumerate} \end{definition}
Let $H^{\vee}(X)= X^d+pb_1X^{d-1}+p^2b_2X^{d-2}+\cdots+p^db_d$. By Proposition~\ref{Lunch} we have
\begin{eqnarray} \Label{Smolensk} \Proj_{n-d}^n R(\pi_{N+n, i}) +\sum_{k=1}^d p^k b_k \Proj_{n-d}^{n-k} R(\pi_{N+n-k, i})=0 \end{eqnarray} for each $i$.
Here we recall Perrin-Riou's lemma: In the following, $\Lambda_{\alpha}$ is the set of power series $f(T) \in \overline{\Q}_p[[T]]$ satisfying $|f(x)| < C |1/\alpha^n|$ for some fixed $C>0$, for every $n \geq 1$ and $x \in \C_p$ with $|x| < |\sqrt[p^n]p|$.
\begin{lemma}[\cite{Perrin-Riou-1}~Lemme~5.3.] \Label{Perrin-Riou-Lemma} Let $R(T)=\sum a_kT^k$ be a monic polynomial of $\Zp[T]$ whose roots are simple, non-zero, and have $p$-adic valuation strictly less than $1$. Suppose $f^{(n)}$'s are elements of $\Lambda$ satisfying the recurrence relation
\[ \sum_k a_k f^{(n+k)} \equiv 0 \pmod {(T+1)^{p^n}-1}. \] Then, for every root $\alpha$ of $R(T)$, there is a unique $f_{\alpha} \in \Lambda_{\alpha}$ so that for some fixed constant $c$,
\[ f^{(n)}\equiv \sum_{\alpha} f_{\alpha} \alpha^{n+1} \pmod{c^{-1}((T+1)^{p^n}-1) \Lambda} \] for every $n$. \end{lemma}
\begin{proof} Simple linear algebra. See \cite{Perrin-Riou-1}. \end{proof}
Since we assume ${\bf F}$ is a topological nilpotent on $M$, all the roots of $H^{\vee}(X)=0$ have $p$-adic valuation less than $1$.
Thus, by Lemma~\ref{Perrin-Riou-Lemma} and (\ref{Smolensk}), for each root $\alpha$ of $H^{\vee}(X)$, there is $f_{\alpha, i} \in \Lambda_{\alpha}^e$ associated to $\{ R(\pi_{N+n, i}) \}_n$.
\begin{definition} Choose a generator $g_{tor}\in \Lambda$ of the characteristic ideal of $(\Selr(\mathbf A/F_{\infty})^{\vee})_{\Lambda-torsion}$. Then we let
\[ \bfL_{\alpha} \stackrel{def}= g_{tor}\times \det [f_{\alpha, 1}, \cdots, f_{\alpha, e}].\] \end{definition}
Suppose $\chi_n$ is a primitive character of $\operatorname{Gal}(F_n/F)$, and $\zeta_{p^n}=\chi_n (\gamma)$. Suppose $g_{tor}(\zeta_{p^n}-1)\not=0$ (true if $n$ is large enough). Then, we can see that
\begin{center} ``$\operatorname{Sel}_p(\mathbf A/F_n)^{\chi_n}$ is infinite $\leftrightarrow$ the $\chi_n$-part of the cokernel of $\alpha_n'\circ\beta_n$ is infinite
$\leftrightarrow$ $\{ R(\pi_{N+n,i})^{\chi_n} \}_{i=1,\cdots,e}$ generates a subgroup of $(\Lambda_n^e)^{\chi_n}$ of infinite index
$\longrightarrow$ $\left. \det [f_{\alpha, 1}, \cdots, f_{\alpha, e}] \right|_{\gamma=\zeta_{p^n}} =0$.'' \end{center}
And, in such a case, \begin{eqnarray} \Label{Three Go} \corank_{\Zp[\zeta_{p^n}]} \operatorname{Sel}_p(\mathbf A/F_n)^{\chi_n} \leq e. \end{eqnarray}
Consider the following lemma, also due to Perrin-Riou.
\begin{lemma}[\cite{Perrin-Riou-1}~Lemme~5.2.] Let $\lambda=v_p(\alpha)$. Suppose $f \in \Lambda_{\alpha}$. Let $s_m$ be the number of positive integers $n$ ($n\leq m$) such that $f(\zeta_{p^n}-1)=0$ for every primitive $p^n$-th root of unity $\zeta_{p^n}$. If $s_m -\lambda m \to \infty$ as $m\to\infty$, then $f=0$. \end{lemma} In other words, if $f \not=0$, then $s_m -\lambda m$ does not tend to infinity.
She assumed $0\leq \lambda <1$. In fact, since we assume ${\bf F}$ is a topological nilpotent, $\lambda<1$ (and $\lambda\geq 0$ because $H^{\vee}(X)$ is monic with $\Zp$-coefficients), so that condition is automatically satisfied in our setting.
We can modify Perrin-Riou's proof slightly, and obtain the following:
\begin{proposition} \Label{ZeroGo} \begin{enumerate}[(a)] \item If $\bfL_{\alpha}\not=0$, then for some fixed $C$
\[ \corank_{\Zp} \operatorname{Sel}_p(\mathbf A/F_n) \leq e(p-1) \times \left\{ p^{n-1}+p^{n-2}+ \cdots+ p^m \right\}+C\] where $n-m = \lambda n +O(1)$.
\item If some root $\alpha$ is a unit, then $\corank_{\Zp} \operatorname{Sel}_p(\mathbf A/F_n)$ is bounded by the number of roots of $\bfL_{\alpha}$ (counting multiplicity).
\end{enumerate} \end{proposition}
\begin{proof} Let $t_n$ be the number of primitive $p^n$-th roots of unity $\zeta_{p^n}$ such that $\left.\bfL_{\alpha}\right|_{\gamma=\zeta_{p^n}}=0$.
By applying Perrin-Riou's proof for the above lemma, we get
\[ \sum_{m\leq n} t_m < e\lambda n +O(1). \] Then we obtain our claim by (\ref{Three Go}).
If $\alpha$ is a unit, then $\bfL_{\alpha}$ is integral, so it has a finite number of roots. Thus, (b) is clear. \end{proof}
This is a rough bound unless $H^{\vee}(X)$ has a unit root (i.e., unless the abelian variety has good ``in-between'' reduction). It is probably possible to obtain a slightly better bound (ideally, something like ``$e(p-1)\times \{ p^{n-1}-p^{n-2}+\cdots \}$''), but not a substantially better one from $\bfL_{\alpha}$ alone, because a power series in $\Lambda_{\alpha}$ may have infinitely many roots (for example, R. Pollack's $\log_p^{\pm}$; see \cite{Pollack}).
Thus, we need a new tool, and perhaps a new Selmer group. There is precisely such a tool in Sprung's $\sharp/\flat$-decomposition theory (\cite{Sprung}), and we will present our result in that direction in the next section.
Lastly, we want to discuss how Perrin-Riou obtained the result that $\rank E(\Q(\mu_{p^{\infty}}))$ is finite. As stated above, it does not seem possible to obtain a finite bound from $\bfL_{\alpha}$ alone. However, she noted that her points $P_n \in E(\Q_{p, n})$ satisfy
\begin{eqnarray} \Label{PR-Relations} \operatorname{Tr}_{\Q_{p, n+1}/\Q_{p, n}}P_{n+1}-a_p P_n+P_{n-1}=0. \end{eqnarray} This is more sophisticated than the relation $\operatorname{Tr}_{\Q_{p, n}/\Q_{p, n-2}}P_{n}-a_p \operatorname{Tr}_{\Q_{p, n-1}/\Q_{p, n-2}}P_{n-1}+p P_{n-2}=0$. She used these relations skilfully to obtain her result. Indeed, with the benefit of hindsight, we now know that recognizing such relations is the first step of the $\pm$-Iwasawa Theory, the $\sharp/\flat$-Iwasawa Theory, and so on.
In fact, in the next section, we will construct points satisfying relations analogous to (\ref{PR-Relations}), and use them to find a finite bound for the rank of $E(F_{\infty})$ when $F$ is ramified, under some conditions. But, because the field is ramified, the relation will be given by matrices which vary depending on $n$. \end{subsection} \end{section}
\begin{section}{Refined Local Points, Sprung's $\sharp/\flat$-Decomposition, and Finiteness of Ranks} \Label{Case 2} In this section, we consider only elliptic curves for simplicity. Take an elliptic curve $E$ over a number field $F$. Apart from this, our setting is the same as in Section~\ref{Case 1}, but for the reader's convenience, we repeat our conditions and assumptions.
As in that section, we suppose
\begin{enumerate}
\item $k_{\infty}/\Qp$ is a totally ramified normal extension with $\operatorname{Gal}(k_{\infty}/\Qp)\cong \Z_p^{\times}$. By local class field theory, it is given by a Lubin-Tate group of height $1$ over $\Zp$. In other words, there is $\varphi(X)=X^p+\alpha_{p-1} X^{p-1}+\cdots+\alpha_1X \in \Zp[X]$ with $p|\alpha_i$, $v_p(\alpha_1)=1$ so that
\[ k_{\infty}=\cup_n \Qp(\pi_n) \] where $\varphi(\pi_n)=\pi_{n-1}$ ($\pi_n\not=0$ for $n > 0$, $\pi_0=0$).
\item We let $F$ be a number field, and $F_{\infty}$ be a $\Zp$-extension of $F$. Since $E$ is an elliptic curve, its dual abelian variety is itself. Let $\mathbf T=T_pE$, and let $\mathbf A\stackrel{def}=\cup_n E[p^n]$.
\item We suppose there is only one prime $\mathfrak p$ of $F$ above $p$, $\mathfrak p$ is totally ramified in $F_{\infty}/F$, $F_{\infty, \mathfrak p} = k_{\infty}$, and $F_{\mathfrak p}=\Qp(\pi_N)$ for some $N\geq 1$.
\item We set
\[ H(X)={\det}_{\Zp} (X\cdot 1_M-{\bf F}|M)=X^2-a_pX+p. \] (Then, $a_p=1+N\mfp - \# \tilde E(\OO_{F_{\mfp}}/\mm_{\OO_{F_{\mfp}}})$). And, we set
\begin{eqnarray*} \bar H(X) \stackrel{def}= \displaystyle \frac{H(X)}p &=& 1- \displaystyle \frac{a_p}p X+\frac1p X^2 \\ &=&1+b_1X+b_2X^{2}. \end{eqnarray*} \item We assume $E$ has \textbf{good supersingular reduction} at $\mathfrak p$.
\end{enumerate}
\begin{subsection}{Fontaine's functor (revisited), and our assumptions.}
Let $G$ be the formal group scheme given by the formal completion of $E/F_{\mfp}$, and let $M$ be the Dieudonne module of $G_{/\mathbb F_p}$, and $L$ be the set of logarithms of $G$. As in earlier sections, we choose a $\Zp[[{\bf F}]]$-generator $\bfm$ of $M$, and an $\OO_{F_{\mfp}}$-generator $\bfl$ of $L$.
Let $A'$ denote $\OO_{F_{\mfp}}$ (not to be confused with the dual abelian variety of Section~\ref{Case 1}; since $E$ is an elliptic curve, its dual is $E$ itself), and let $\mm$ denote its maximal ideal. Let $e$ be the ramification index of $F_{\mfp}$. (Since it is totally ramified, $e=[F_{\mfp}:\Qp]$.) Recall that $M_{A'}$ is the direct (i.e., injective) limit of
$$ \{ \mm^i \otimes M^{(j)} \}_{I_0}$$ where $I_0$ is the set of $(i,j) \in \Z\times \Z$ so that $j\geq 0$, and
$$ \left\{ \begin{array}{ll} i\geq 0 & \text{if } j=0,\\ i\geq p^{j-1}-je & \text{if }j\geq 1. \end{array} \right. $$ with maps $\varphi_{i,j}, f_{i,j}$, and $v_{i,j}$ between $\mm^i \otimes M^{(j)}$'s. Note that there is $s$ so that $p^s-(s+1)e \leq p^{j-1}-je$ for every $j\geq 1$.
\begin{proposition} \Label{CA} Let $E=p^s-(s+1)e$ (this integer is not to be confused with the elliptic curve $E$; the meaning will be clear from context). There is a map
\[ \iota: M_{A'} \to \mm^E \otimes M \] which is well-defined, and its cokernel is finite. If $M_{A'}$ is torsion-free, $\iota$ is injective. \end{proposition} \begin{proof} For each $\mm^i \otimes M^{(j)}$, we have a map $\mm^i \otimes M^{(j)}\to \mm^i \otimes M$ given by $f_{i,1}\circ f_{i,2}\circ \cdots \circ f_{i,j}$. Since $i \geq p^s-(s+1)e$, there is a map $\mm^i \otimes M\to \mm^{E} \otimes M$ given by $\varphi_{E+1,0} \circ \varphi_{E+2,0} \circ \cdots \circ \varphi_{i,0}$. The rest is clear. \end{proof}
Then we can write
\[ \iota( \bfl)=\alpha_1 \bfm+\alpha_2 {\bf F} \bfm \] for some $\alpha_1, \alpha_2 \in \mm^E$. We assume
\begin{assumption} \Label{Assumption K}
\[ p|\frac{\alpha_2}{\alpha_1}. \] \end{assumption} Undoubtedly, some formal groups associated to elliptic curves satisfy this condition, and many others do not. In fact, $\frac{\alpha_2}{\alpha_1}$ can have negative $p$-adic valuation, although it is bounded below by a bound depending on $e$.
Also we assume
\begin{assumption} \Label{Assumption Torsions} The torsion subgroup of $E(F_{\infty, \mfp})$ is finite. \end{assumption}
This is a reasonable assumption. In fact, we can often show that $E[p]$ is irreducible as a $G_{F_{\mfp}}$-module. \end{subsection}
\begin{subsection}{Finite bounds for ranks} \Label{AlphaGo} \begin{notation} \Label{BetaGo} \begin{enumerate}[(a)] \item Where $e=[\Qp(\pi_N):\Qp]$, let $\{ \pi_{N,1},\cdots, \pi_{N, e}\}=\{ \pi_N^{\sigma} \}_{\sigma \in\operatorname{Gal}(\Qp(\pi_N)/\Qp)}$.
\item Then, for every $n> N$ and for each $i=1,\cdots,e$, choose $\pi_{n,i}$ so that $\varphi(\pi_{n,i})=\pi_{n-1,i}$. (Then, $\pi_{n,i}$ is a uniformizer of $\Qp(\pi_n)$.)
\end{enumerate} \end{notation}
Similar to Section~\ref{Some special} but slightly differently, we define the following.
\begin{definition} \Label{MI} \begin{enumerate} \item \[ J(X)=\bar H(X)-1=b_1X+b_2X^2=-\displaystyle \frac {a_p}pX+\frac 1p X^2, \]
\item \[ \epsilon=\displaystyle \frac{\alpha_{p-1}}{p-a_p+1}, \]
\item \[ l(X)=[1-J(\varphi)+J(\varphi)^2-\cdots ]\circ X \] where $\varphi\circ X^n=\varphi(X)^n$,
\item Define $\tilde \bfx \in \operatorname{Hom}_{\Zp}(M, \mathcal P)$ given by
\[ \tilde \bfx(\bfm)=\epsilon+l(X)\] \[ \tilde \bfx({\bf F} \bfm)=\varphi \circ l(X).\] \end{enumerate} \end{definition}
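We note, as a consistency check included for the reader's convenience, that this $\epsilon$ is the $d=2$ specialization of the $\epsilon$ of Section~\ref{Some special}: here $a_0=p$ and $a_1=-a_p$, so
\[ \frac{a_0\,\alpha_{p-1}}{p\cdot(a_0+a_1+1)}=\frac{p\,\alpha_{p-1}}{p\,(p-a_p+1)}=\frac{\alpha_{p-1}}{p-a_p+1}. \]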
We define the following functor.
\begin{definition} We let $M'=\mm^E \otimes M$, and let $L'$ denote the maximal $A'$-submodule of $M'$ containing $\iota(L)$ such that $L'/\iota(L)$ is finite.
\begin{enumerate} \item For an $A'$-algebra $g$, we define $G'(L',M)(g)$ as the set of $(u_{L'}, u_M)$ where
\[ u_{L'} \in \operatorname{Hom}_{A'}(L', \Q \otimes g),\] \[ u_M \in \operatorname{Hom}_{\mathbf D} (M, CW(g/\mm g))\] which naturally induces $u_M': M'(=\mm^E \otimes M) \to \mm^E \otimes CW(g/\mm g)$, so that $u_{L'}$ and $u_M'$ are identical under
\begin{eqnarray*} \operatorname{Hom}_{A'\otimes \mathbf D}(M', \mm^E \otimes CW (g/\mm g))& \stackrel{\omega_g'}\longrightarrow & \operatorname{Hom}_{A'}(L', (\Q \otimes g)/\mm^E \cdot P'(g)) \\
& & \qquad \uparrow \\
& & \operatorname{Hom}_{A'} (L', \Q \otimes g). \end{eqnarray*}
\item Similarly, but slightly differently, define $G'(L', M)(A'[[X]])$ as the set of $(u_{L'}, u_M)$ where (in the following, $K'$ denotes $\operatorname{Frac}(A')$)
\[ u_{L'} \in \operatorname{Hom}_{A'}(L', K'[[X]]),\] \[ u_M \in \operatorname{Hom}_{\mathbf D} (M, CW(\mathbb F_p[[X]]))\] which naturally induces $u_M': M' \to \mm^E \otimes CW(\mathbb F_p[[X]])$, so that $u_{L'}$ and $u_M'$ are identical under
\begin{eqnarray*} \operatorname{Hom}_{A'\otimes \mathbf D}(M', \mm^E \otimes CW (\mathbb F_p[[X]]))& \longrightarrow & \operatorname{Hom}_{A'}(L', K'[[X]] /\mm^E \cdot P'(A'[[X]])) \\
& & \qquad \uparrow \\
& & \operatorname{Hom}_{A'} (L', K'[[X]]). \end{eqnarray*}
\item And, choose $M \in \Z_{\geq 0}$ such that $p^M \cdot \mm^E \subset A'$. \end{enumerate} \end{definition}
\begin{definition}
\begin{enumerate} \item Recall $\tilde\bfx$ from Definition~\ref{MI}. Modulo $p\Zp[[X]]$, it induces $\bfx \in \operatorname{Hom}_{\mathbf D}(M, \overline{\mathcal P})$ satisfying
\[ \bfx(\bfm)=l(X) \pmod{p\Zp[[X]]}.\] Then, define \begin{enumerate}
\item We can choose an $A'$-generator $\bfl'$ of $L'$, and write it as
\[ \bfl'=\beta_1 \bfm+\beta_2 {\bf F} \bfm \] where $\displaystyle \frac{\beta_2}{\beta_1}=\frac{\alpha_2}{\alpha_1}$. Then, define $\bfy \in \operatorname{Hom}_{A'}(L', K'[[X]])$ by
\[ \bfy(\bfl')= \beta_1 \tilde \bfx(\bfm)+\beta_2 \tilde \bfx({\bf F} \bfm)= \beta_1 \left( \epsilon + l(X) \right)+\beta_2 l(\varphi(X)) \] and extend $A'$-linearly.
\item Then, we set \[ P'=(\bfy, \bfx) \in G'(L', M)(A'[[X]]). \] \end{enumerate}
\item Then, for every $n \geq N$ and $i=1,2,\cdots, e$, we obtain points $P'(\pi_{n,i}) \in G'(L', M)(\Zp[\pi_n])$ by substituting $X=\pi_{n,i}$.
\end{enumerate} \end{definition}
We make the following assumption analogous to \cite{Kobayashi}~Proposition~8.12~(ii).
\begin{assumption} \Label{Assumption L} $\{ P'(\pi_{n,1}), \cdots, P'(\pi_{n,e}) , P'(\pi_{n-1,1}), \cdots, P'(\pi_{n-1,e}) \}$ generates $G'(L', M)(\Zp[\pi_n])$ over $\Zp[\operatorname{Gal}(\Qp(\pi_n)/\Qp(\pi_N))]$ modulo torsions for every $n > N$. \end{assumption} The proof of \cite{Kobayashi} certainly applies to establish this assumption in some cases; we hope that it does in most cases.
\begin{definition}We define a map $\xi:G'(L',M)\to G(L,M)$ as follows: \begin{enumerate}[(a)] \item First, recall that $G(L, M)(g)$ is the set of $(u_L, u_M)$ where $u_L:L\to \Qp\otimes g$ and the induced map $u_{M_{A'}}: M_{A'} \to CW_{k}(g/\mm g)_{A'}$ are identical through the diagram
\begin{eqnarray*} \operatorname{Hom}_{A'\otimes \mathbf D}(M_{A'}, CW_k (g/\mm g)_{A'} ) & \longrightarrow & \operatorname{Hom}_{A'}(L, (\Qp\otimes g) /P'(g)) \\
& & \qquad \uparrow \\
& & \operatorname{Hom}_{A'} (L, \Qp\otimes g). \end{eqnarray*} We also recall that $G'(L',M)(g)$ is the set of $(u_{L'}, u_M)$ where $u_{L'} \in \operatorname{Hom}_{A'}(L', \Qp\otimes g)$, and $u_M \in \operatorname{Hom}_{\mathbf D} (M, CW(g/\mm g))$, which naturally induces $u_M': M'(=\mm^E \otimes M) \to \mm^E \otimes CW(g/\mm g)$, such that $u_{L'}$ and $u_M'$ are identical through the diagram
\begin{eqnarray*} \operatorname{Hom}_{A'\otimes \mathbf D}(M', \mm^E \otimes CW (g/\mm g)) & \longrightarrow & \operatorname{Hom}_{A'}(L', (\Qp\otimes g)/\mm^E P'(g)) \\
& & \qquad \uparrow \\
& & \operatorname{Hom}_{A'} (L', \Qp\otimes g). \end{eqnarray*}
\item We recall that $\iota: M_{A'} \to M'(=\mm^E \otimes M)$ (which is identity on $M$) also induces $\iota:L\to L'(\supset \iota(L))$. Then, $p^M\cdot\iota^*$ induces
\begin{eqnarray*} p^M\cdot\iota^* &:& \operatorname{Hom}_{A'\otimes \mathbf D}(M', \mm^E \otimes CW (g/\mm g)) \to \operatorname{Hom}_{A'\otimes \mathbf D}(M_{A'}, CW_{k} (g/\mm g)_{A'} ), \\ p^M\cdot\iota^* &:& \operatorname{Hom}_{A'} (L', \Qp\otimes g) \to \operatorname{Hom}_{A'} (L, \Qp\otimes g), \\ p^M\cdot\iota^* &:& \operatorname{Hom}_{A'} (L', (\Qp\otimes g)/ \mm^E P'(g)) \to \operatorname{Hom}_{A'} (L, (\Qp\otimes g)/P'(g)), \end{eqnarray*} because $p^M\cdot \mm^E \subset A'$. \end{enumerate} Thus, $p^M\cdot \iota^*$ induces a map $\xi: G'(L', M)\to G(L, M)$.
\end{definition}
We also define:
\begin{definition} Recall the isogeny $j_G: G(L, M) \to G$, and an embedding $i:G \to E$. We define
\[ P(\pi_{n,i})=i \circ j_G \circ \xi (P'(\pi_{n,i})) \] for every $n \geq N$ and $i=1,2,\cdots, e$. \end{definition}
\begin{proposition} \Label{Mark IV} For now, let $\operatorname{Tr}_{n/m}$ denote $\operatorname{Tr}_{\Qp(\pi_n)/\Qp(\pi_m)}$. For $n>N$, we have
\[ \operatorname{Tr}_{n/n-1} \begin{bmatrix} P(\pi_{n, 1}) \\ \vdots \\ P(\pi_{n, e}) \end{bmatrix} = pA_{n-1} \begin{bmatrix} P(\pi_{n-1, 1}) \\ \vdots \\ P(\pi_{n-1, e}) \end{bmatrix}- A'_{n-1} \begin{bmatrix} P(\pi_{n-2, 1}) \\ \vdots \\ P(\pi_{n-2, e}) \end{bmatrix} \] where $A_{n-1}$ is an $e\times e$ matrix with entries in $\Zp[\operatorname{Gal}(\Qp(\pi_{n-1})/\Qp(\pi_N))]$, and $A'_{n-1}$ is an $e\times e$ matrix also with entries in $\Zp[\operatorname{Gal}(\Qp(\pi_{n-1})/\Qp(\pi_N))]$ so that
\[ A'_{n-1}\equiv I_e \pmod p. \] \end{proposition}
\begin{proof} As in the proof of Proposition~\ref{Despicable-Laundry-Machine}, \begin{eqnarray*} \operatorname{Tr}_{n/n-1} l(\pi_{n,i}) = -\alpha_{p-1}-p\cdot \left[b_1 l(\pi_{n-1, i})+b_2 l(\pi_{n-2, i}) \right] \end{eqnarray*} thus
\begin{eqnarray} \Label{Falcon} \operatorname{Tr}_{n/n-1} (\epsilon+l(\pi_{n, i}))-a_p(\epsilon+l(\pi_{n-1, i}))+(\epsilon+l(\pi_{n-2, i}))=0. \end{eqnarray}
On the other hand, again as in the proof of Proposition~\ref{Despicable-Laundry-Machine},
\[ l(\varphi(\pi_{n, i}))=\pi_{n-1, i}-\left( \displaystyle \frac {-a_p}p l(\pi_{n-2, i})+\frac 1p l(\pi_{n-3, i}) \right) \] thus
\begin{multline} \operatorname{Tr}_{n/n-1} l(\varphi(\pi_{n, i})) = p \pi_{n-1, i}+a_pl(\pi_{n-2, i})-l(\pi_{n-3, i}) \\ = \Label{Eagle} p\pi_{n-1, i} + a_p l(\varphi(\pi_{n-1, i}))-l(\varphi(\pi_{n-2, i})). \end{multline}
Since $\displaystyle \frac{\beta_2}{\beta_1} \pi_{n-1, i}$ is divisible by $p$, there is $d_{n-1,i} \in \mm_{\Zp[\pi_{n-1}]}$ so that
\[ (\epsilon+l(d_{n-1,i}))+\displaystyle \frac{\beta_2}{\beta_1} l(\varphi(d_{n-1,i}))=\frac{\beta_2}{\beta_1} \pi_{n-1, i}. \]
In other words, $\bfy(\bfl')(d_{n-1,i})=\beta_2 \pi_{n-1, i}$.
Let $D_{n-1, i}=P'(d_{n-1,i}) \in G'(L', M)(\Zp[\pi_{n-1}])$. By Assumption~\ref{Assumption L},
\begin{multline} \Label{ABCD} \begin{bmatrix} D_{n-1, 1}\\ \vdots \\ D_{n-1, e} \end{bmatrix}=
\begin{bmatrix} a_{n-1, 11} &\cdots & a_{n-1, 1e} \\ \vdots & \ddots & \vdots \\ a_{n-1, e1} &\cdots & a_{n-1, ee} \end{bmatrix} \cdot \begin{bmatrix} P'(\pi_{n-1, 1}) \\ \vdots \\ P'(\pi_{n-1, e}) \end{bmatrix} \\ + \begin{bmatrix} a_{n-1, 11}' &\cdots & a_{n-1, 1e}' \\ \vdots & \ddots & \vdots \\ a_{n-1, e1}' &\cdots & a_{n-1, ee}' \end{bmatrix} \cdot \begin{bmatrix} P'(\pi_{n-2, 1}) \\ \vdots \\ P'(\pi_{n-2, e}) \end{bmatrix} \end{multline} modulo torsions for some $a_{n-1, ij}, a_{n-1, ij}' \in \Zp[\operatorname{Gal}(\Qp(\pi_{n-1})/\Qp(\pi_N))]$.
For $Q=(y_Q, x_Q) \in G'(L', M)(g)$ with $y_Q \in \operatorname{Hom}_{A'}(L', \Qp\otimes g)$, we let $\bfl(Q)$ denote $y_Q(\bfl')$. For example, $\bfl(P'(\pi_{n,i}))=\beta_1(\epsilon+l(\pi_{n,i}))+\beta_2l(\varphi(\pi_{n,i}))$.
Then, by (\ref{Falcon}), (\ref{Eagle}), and (\ref{ABCD}),
\begin{eqnarray*} \operatorname{Tr}_{n/n-1} \begin{bmatrix} \bfl(P'(\pi_{n, 1})) \\ \vdots \\ \bfl(P'(\pi_{n, e})) \end{bmatrix} &=& a_p \begin{bmatrix} \bfl(P'(\pi_{n-1, 1})) \\ \vdots \\ \bfl(P'(\pi_{n-1, e})) \end{bmatrix}- \begin{bmatrix} \bfl(P'(\pi_{n-2, 1})) \\ \vdots \\ \bfl(P'(\pi_{n-2, e})) \end{bmatrix} \\ && +pB_{n-1}\cdot \begin{bmatrix} \bfl(P'(\pi_{n-1, 1})) \\ \vdots \\ \bfl(P'(\pi_{n-1, e})) \end{bmatrix}+ p B_{n-1}' \cdot \begin{bmatrix} \bfl(P'(\pi_{n-2, 1})) \\ \vdots \\ \bfl(P'(\pi_{n-2, e})) \end{bmatrix} \end{eqnarray*} where $B_{n-1}, B_{n-1}'$ are the matrices that appear in (\ref{ABCD}). Since $L'$ is one-dimensional, this implies an analogous identity for the $y$-part of $P'(\pi_{n,i})$'s, therefore an analogous identity for $P'(\pi_{n,i})$'s holds modulo torsions.
By taking $ i\circ j_G \circ (p^M\cdot \iota^*)$, we obtain our claim because $p^M$ annihilates the torsions. \end{proof}
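\begin{remark} (We record the following only to make the last step of the proof explicit.) Comparing the displayed identity with the statement of Proposition~\ref{Mark IV}, one may take
\[ A_{n-1}=\frac{a_p}{p}\,I_e+B_{n-1}, \qquad A'_{n-1}=I_e-pB'_{n-1}, \]
which have entries in $\Zp[\operatorname{Gal}(\Qp(\pi_{n-1})/\Qp(\pi_N))]$ because $p|a_p$, and clearly $A'_{n-1}\equiv I_e \pmod p$. \end{remark}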
This relation is finer than the one used in Section~\ref{Case 1}, and we will adopt Sprung's insight of $\sharp/\flat$-decomposition to produce the characteristics $\bfL^{\sharp}, \bfL^{\flat}$, which are integral power series; this integrality is the main difference between this section and the previous one.
Recall that $e=[F_{\mfp}:\Qp]=[F:\Q]$. As in Section~\ref{Case 1}, we assume
\[ \rank_{\Lambda} \Selr(E[p^{\infty}]/F_{\infty})^{\vee}=e. \]
It is not difficult to prove this assumption when $\operatorname{Sel}_p(E[p^{\infty}]/F_n)^{\chi_n}$ is finite for some $n$ and a character $\chi_n$.
Also as in Section~\ref{Case 1}, we let
\[ S_{tor}=\left( \Selr(E[p^{\infty}]/F_{\infty})^{\vee} \right)_{\Lambda-torsion}. \] As in Section~\ref{Case 1}, there is a short exact sequence
\begin{eqnarray} 0 \to \Selr(E[p^{\infty}]/F_{\infty})^{\vee}/S_{tor} \to \Lambda^e \to C \to 0 \end{eqnarray} for a finite group $C$.
It is often true that $\Selr(E[p^{\infty}] /F_n) \to \Selr(E[p^{\infty}] /F_{\infty})^{\Gamma^{p^n}}$ is an isomorphism, and even when it is not, its kernel and cokernel are bounded, so they are easy to deal with. In this section, for convenience, we assume it is an isomorphism for each $n$. The above short exact sequence induces
\[ \alpha_n': \Selr(E[p^{\infty}]/F_n)^{\vee} \to \left( \Selr(E[p^{\infty}]/F_{\infty})^{\vee}/S_{tor} \right)_{/\Gamma^{p^n}} \to \Lambda_n^e.\]
\begin{definition} \begin{enumerate}[(a)] \item
Recall $P(\pi_{N+n, i})$ is a point of $E(\Qp(\pi_{N+n}))=E(F_{n, \mfp})$.
Recall $\Lambda_n=\Zp[\Gamma_n]$. Let $R(\pi_{N+n, i})$ be the image of $P(\pi_{N+n, i})$ under the map
\[ E(F_{n,\mfp}) \to \Selr(E[p^{\infty}]/F_n)^{\vee} \stackrel{\alpha_n'}\to \Lambda_n^e. \]
\item Choose a lifting $\tilde R(\pi_{N+n, i}) \in \Lambda^e$ of $ R(\pi_{N+n, i})$ for each $n$. Our result will not depend on the choice of $\tilde R(\pi_{N+n, i})$. Let
\[ \mathbf R_{N+n}=\begin{bmatrix} \tilde R(\pi_{N+n, 1})^t \\ \vdots \\ \tilde R(\pi_{N+n, e})^t \end{bmatrix} \in M_{e}(\Lambda) .\]
\item Let $\Phi_n \in \Lambda$ be the minimal polynomial of $\zeta_{p^n}-1$, i.e., $\Phi_n=\displaystyle \frac{(1+X)^{p^n}-1}{(1+X)^{p^{n-1}}-1}$ if $n \geq 1$, and $\Phi_0=X$. And, let $\omega_n=(1+X)^{p^n}-1$. We consider $\Phi_n$ and $\omega_n$ as elements of $\Lambda$ under the identification $\Lambda=\Zp[[X]]$. \end{enumerate} \end{definition}
We note $\Phi_n=\sum _{\sigma \in \operatorname{Ker}(\Gamma_n \to \Gamma_{n-1})} \sigma \pmod{\omega_n}$.
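Indeed (we spell this out for the reader's convenience): under $\gamma=X+1$ we have
\[ \Phi_n=\frac{(1+X)^{p^n}-1}{(1+X)^{p^{n-1}}-1}=\sum_{j=0}^{p-1}\left((1+X)^{p^{n-1}}\right)^{j}, \]
while $\operatorname{Ker}(\Gamma_n\to\Gamma_{n-1})=\{\gamma^{jp^{n-1}}\}_{0\leq j\leq p-1}$, whose elements correspond to $(1+X)^{jp^{n-1}}$.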
\begin{proposition} \Label{Montana}
\[ \begin{bmatrix} \mathbf R_{N+n+1} \\ \mathbf{ R}_{N+n} \end{bmatrix} = \begin{bmatrix} pA_{N+n} & -A'_{N+n} \Phi_n \\ I_e & 0 \end{bmatrix} \cdot \begin{bmatrix} \mathbf R_{N+n} \\ \mathbf{ R}_{N+n-1} \end{bmatrix} \pmod{\omega_n}. \] \end{proposition} \begin{proof} This follows immediately from Proposition~\ref{Mark IV}. \end{proof}
\begin{definition} \Label{Mateo} We choose liftings $\tilde A_{N+n}, \tilde A'_{N+n} \in M_e(\Lambda)$ of $A_{N+n}, A'_{N+n}$ for every $n$. We set \begin{eqnarray*} \begin{bmatrix} \tilde \bfL^{\sharp}(E) \\ \tilde \bfL^{\flat}(E) \end{bmatrix} &\stackrel{def}=& \varprojlim_n \begin{bmatrix} p\tilde A_{N+1} & -\tilde A'_{N+1} \Phi_1 \\ I_e & 0 \end{bmatrix}^{-1} \cdot \begin{bmatrix} p \tilde A_{N+2} & -\tilde A'_{N+2} \Phi_2 \\ I_e & 0 \end{bmatrix} ^{-1} \cdot \\ && \cdots\quad \cdot
\begin{bmatrix} p \tilde A_{N+n} & -\tilde A'_{N+n} \Phi_n \\ I_e & 0 \end{bmatrix}^{-1} \cdot \begin{bmatrix} \mathbf R_{N+n+1} \\ \mathbf{ R}_{N+n} \end{bmatrix}. \\ \bfLalg^{\sharp}(E) &\stackrel{def}=& \det(\tilde \bfL^{\sharp}(E)), \\ \bfLalg^{\flat}(E) &\stackrel{def}=& \det(\tilde \bfL^{\flat}(E)). \end{eqnarray*} \end{definition}
\begin{proposition} \Label{GagConcert} (a) $\tilde \bfL^{\sharp}(E)$ and $\tilde \bfL^{\flat}(E)$ are well-defined (i.e., the projective limits exist), and (b) their entries are in $\Lambda$. \end{proposition}
\begin{proof} First, we show the following:
Let $c_{n+1}=\mathbf R_{N+n+1}, d_{n+1}=\mathbf R_{N+n}$, and
\[ \begin{bmatrix} c_i \\ d_i \end{bmatrix} =\begin{bmatrix} p \tilde A_{N+i} & -\tilde A'_{N+i} \Phi_i \\ I_e & 0 \end{bmatrix}^{-1} \cdots \begin{bmatrix} p \tilde A_{N+n} & -\tilde A'_{N+n} \Phi_n \\ I_e & 0 \end{bmatrix}^{-1} \begin{bmatrix} \mathbf R_{N+n+1} \\ \mathbf R_{N+n} \end{bmatrix} \] for every $1 \leq i \leq n$. We will show that
\begin{enumerate}[(1)] \item $c_i, d_i \in M_e(\Lambda)$, \item $c_i \equiv \mathbf R_{N+i} \pmod{\omega_i}, d_i \equiv \mathbf R_{N+i-1} \pmod{\omega_{i-1}}$ for every $1\leq i\leq n+1$. \end{enumerate} We prove it inductively as follows:
\emph{Step 1.} By the definition of $c_{n+1}$ and $d_{n+1}$, the claim is true for $i=n+1$.
\emph{Step 2.} Suppose the claim is true for $c_{i+1}, d_{i+1}$. Then,
\begin{eqnarray*} \begin{bmatrix} c_i \\ d_i \end{bmatrix} &=& \begin{bmatrix} p \tilde A_{N+i} & -\tilde A'_{N+i} \Phi_i \\ I_e & 0 \end{bmatrix}^{-1} \begin{bmatrix} c_{i+1} \\ d_{i+1} \end{bmatrix} \\ &=& \displaystyle \frac1{\Phi_i} (\tilde A_{N+i}')^{-1} \begin{bmatrix} 0& \tilde A_{N+i}' \Phi_i \\ -I_e & p\tilde A_{N+i} \end{bmatrix} \begin{bmatrix} c_{i+1} \\ d_{i+1} \end{bmatrix} \\ &=& \begin{bmatrix} d_{i+1} \\ \displaystyle \frac1{\Phi_i} (\tilde A_{N+i}')^{-1} ( -c_{i+1}+p \tilde A_{N+i} d_{i+1} ) \end{bmatrix} \end{eqnarray*} By the induction hypothesis and Proposition~\ref{Montana}, we have
\begin{eqnarray*} -c_{i+1}+p \tilde A_{N+i} d_{i+1} &\equiv& -\mathbf R_{N+i+1}+p\tilde A_{N+i} \mathbf R_{N+i} \pmod{\omega_i} \\ &\equiv& \tilde A_{N+i}' \Phi_i \mathbf R_{N+i-1} \pmod{\omega_i}. \end{eqnarray*} Thus,
\[ \displaystyle \frac1{\Phi_i} (\tilde A_{N+i}')^{-1} \left[ -c_{i+1}+p \tilde A_{N+i} d_{i+1} \right] \equiv \mathbf R_{N+i-1} \pmod{\omega_{i-1}} .\] Thus, $c_i=d_{i+1}\equiv \mathbf R_{N+i} \pmod{\omega_i}$, and $d_i\equiv \mathbf R_{N+i-1} \pmod{\omega_{i-1}}$, and $c_i, d_i \in M_e(\Lambda)$. Inductively, $c_1,d_1 \in M_e( \Lambda)$.
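For reference, the block inverse used in Step 2 (and again below) can be written out explicitly; a direct multiplication, carried out over $\Lambda[\Phi_i^{-1}]$, confirms that
\[ \begin{bmatrix} p \tilde A_{N+i} & -\tilde A'_{N+i} \Phi_i \\ I_e & 0 \end{bmatrix}^{-1} = \begin{bmatrix} 0 & I_e \\ -\Phi_i^{-1} (\tilde A'_{N+i})^{-1} & p\, \Phi_i^{-1} (\tilde A'_{N+i})^{-1} \tilde A_{N+i} \end{bmatrix}, \]
from which the expression for $\begin{bmatrix} c_i \\ d_i \end{bmatrix}$ above is immediate.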
Second, we show the following: By the above, for any $m \geq n$,
\[ \begin{bmatrix} p\tilde A_{N+n+1}& -\tilde A_{N+n+1}' \Phi_{n+1} \\ I_e & 0 \end{bmatrix}^{-1} \cdots \begin{bmatrix} p\tilde A_{N+m}& -\tilde A_{N+m}' \Phi_m \\ I_e & 0 \end{bmatrix}^{-1} \begin{bmatrix} \mathbf R_{N+m+1} \\ \mathbf R_{N+m} \end{bmatrix} = \begin{bmatrix} r_{N+n+1} \\ s_{N+n+1} \end{bmatrix} \] where $r_{N+n+1} \equiv \mathbf R_{N+n+1} \pmod {\omega_{n+1}}$, $s_{N+n+1} \equiv \mathbf R_{N+n} \pmod {\omega_{n}}$. Let
\[ \begin{bmatrix} e_{n+1} \\ e_n \end{bmatrix}= \begin{bmatrix} r_{N+n+1} \\ s_{N+n+1} \end{bmatrix} - \begin{bmatrix} \mathbf R_{N+n+1} \\ \mathbf R_{N+n} \end{bmatrix}, \] then $e_{n+1} \equiv 0 \pmod {\omega_{n+1}}$, $e_n \equiv 0 \pmod{\omega_n}$.
Let
\[ \begin{bmatrix} e_i \\ e_{i-1} \end{bmatrix} =\begin{bmatrix} p \tilde A_{N+i} & -\tilde A'_{N+i} \Phi_i \\ I_e & 0 \end{bmatrix}^{-1} \cdots \begin{bmatrix} p \tilde A_{N+n} & -\tilde A'_{N+n} \Phi_n \\ I_e & 0 \end{bmatrix}^{-1} \begin{bmatrix} e_{n+1} \\ e_n \end{bmatrix} \] for every $1 \leq i \leq n$.
For our immediate purpose, we devise the following way of counting the number of divisors of an element of $M_e(\Lambda)$. If $f=p$ or $f=\Phi_i$ for some $i$, and $f|a \in M_e(\Lambda)$, we call $f$ a divisor of $a$; any other irreducible polynomial dividing $a$ is ignored in this count. We count $p$ with multiplicity (for example, if $p^k|a$, then $p$ contributes $k$ to the number of divisors), but we count each $\Phi_i$ that divides $a$ only once (for example, if $\Phi_i^k|a$, then $\Phi_i$ contributes only $1$). For instance, if $p^3 (X^2+2)| a \in M_e(\Lambda)$, then $a$ has at least $3$ divisors ($p$ is counted $3$ times, and $X^2+2$ is not counted), and if $p\Phi_2^2 \Phi_3^2 |b$, then $b$ has at least $3$ divisors ($\Phi_2$ and $\Phi_3$ are each counted only once).
If $a=\sum a_i$ for some $a_i \in M_e(\Lambda)$ with each $a_i$ having at least $k$ divisors, we say $a$ is a sum of elements, each of which has at least $k$ divisors.
Suppose $e_{i+1}$ is a sum of elements, each of which has at least $n_{i+1}$ divisors, and suppose $e_i$ is a sum of elements, each of which has at least $n_{i}$ divisors. And, suppose $\omega_{i+1}|e_{i+1}$ and $\omega_i | e_i$. Then,
\[ e_{i-1}=\displaystyle \frac1{\Phi_i} \tilde A_{N+i}'^{-1} (-e_{i+1}+p\tilde A_{N+i} e_i) = \tilde A_{N+i}'^{-1}(- \displaystyle \frac 1{\Phi_i} e_{i+1}+\frac p{\Phi_i} \tilde A_{N+i} e_i)\] and $\frac 1{\Phi_i} e_{i+1}$ and $\frac p{\Phi_i} \tilde A_{N+i} e_i$ are respectively a sum of elements, each of which has at least $n_{i+1}-1$ divisors, and a sum of elements, each of which has at least $n_{i}$ divisors. Both are divisible by $\omega_{i-1}$. Thus, $e_{i-1}$ is a sum of elements, each of which has at least $\operatorname{min}(n_{i+1}-1, n_i)$ divisors, and is divisible by $\omega_{i-1}$.
Since $\omega_{n+1}|e_{n+1}$ and $\omega_n|e_n$, it is not difficult to see that $e_1$ and $e_0$ are sums of elements, each of which has at least $n/2$ divisors.
For $i\geq 1$, $\Phi_i\equiv 0 \pmod{(p, X^{p^{i-1}})}$, so when $0\leq \alpha_1 < \cdots < \alpha_{n'}$ for some $n'$,
\begin{eqnarray*} p^j \Phi_{\alpha_1}\cdots \Phi_{\alpha_{n'}} &\equiv & p^j p^{n'-i} * \pmod{X^{p^{i-1}}} \\ &=& p^{n'-i+j} * \pmod{X^{p^{i-1}}} \end{eqnarray*} ($*$ indicates any element). Thus, it follows that
\[ \begin{bmatrix} e_1 \\ e_0 \end{bmatrix}
\equiv \begin{bmatrix} 0\\ 0 \end{bmatrix}
\pmod{p^{n/2-i}, X^{p^{i-1}}}. \] In other words,
\begin{eqnarray*} \bfL_{n,m}& \stackrel{def}= & \begin{bmatrix} p\tilde A_{N+1}& -\tilde A_{N+1}' \Phi_{1} \\ I_e & 0 \end{bmatrix}^{-1} \cdots \begin{bmatrix} p\tilde A_{N+m}& -\tilde A_{N+m}' \Phi_m \\ I_e & 0 \end{bmatrix}^{-1} \begin{bmatrix} \mathbf R_{N+m+1} \\ \mathbf R_{N+m} \end{bmatrix} \\ && -\begin{bmatrix} p\tilde A_{N+1}& -\tilde A_{N+1}' \Phi_{1} \\ I_e & 0 \end{bmatrix}^{-1} \cdots \begin{bmatrix} p\tilde A_{N+n}& -\tilde A_{N+n}' \Phi_n \\ I_e & 0 \end{bmatrix}^{-1} \begin{bmatrix} \mathbf R_{N+n+1} \\ \mathbf R_{N+n} \end{bmatrix} \\ &=& \begin{bmatrix} 0 \\ 0 \end{bmatrix} \pmod{p^{n/2-i}, X^{p^{i-1}}}, \end{eqnarray*} so $\bfL_{n,m}$ converges to $0$ uniformly as $n,m \to \infty$.
Thus, we obtain our claim. \end{proof}
In the proof of Proposition~\ref{GagConcert}, we see that there are $\mathbf R_{N+n}^{(m)}, \mathbf R_{N+n-1}^{(m)} \in M_e(\Lambda)$ so that $\mathbf R_{N+n}^{(m)} \equiv \mathbf R_{N+n} \pmod{\omega_n}, \mathbf R_{N+n-1}^{(m)}\equiv \mathbf R_{N+n-1} \pmod{\omega_{n-1}}$, and
\[ \begin{bmatrix} \mathbf R_{N+n}^{(m)} \\ \mathbf R_{N+n-1}^{(m)} \end{bmatrix} =\begin{bmatrix} p \tilde A_{N+n} & -\tilde A'_{N+n} \Phi_n \\ I_e & 0 \end{bmatrix}^{-1} \cdots \begin{bmatrix} p \tilde A_{N+m} & -\tilde A'_{N+m} \Phi_m \\ I_e & 0 \end{bmatrix}^{-1} \begin{bmatrix} \mathbf R_{N+m+1} \\ \mathbf R_{N+m} \end{bmatrix}. \] From Definition~\ref{Mateo},
\begin{multline*} \begin{bmatrix} p \tilde A_{N+n-1} & -\tilde A'_{N+n-1} \Phi_{n-1} \\ I_e & 0 \end{bmatrix} \cdot \cdots \cdot \begin{bmatrix} p \tilde A_{N+2} & - \tilde A'_{N+2} \Phi_2 \\ I_e & 0 \end{bmatrix} \cdot \begin{bmatrix} p \tilde A_{N+1} & -\tilde A'_{N+1} \Phi_1 \\ I_e & 0 \end{bmatrix} \cdot \begin{bmatrix} \tilde \bfL^{\sharp}(E) \\ \tilde \bfL^{\flat}(E) \end{bmatrix} \\ = \varprojlim_m \begin{bmatrix} \mathbf R_{N+n}^{(m)} \\ \mathbf{ R}_{N+n-1}^{(m)} \end{bmatrix} , \end{multline*}
and for a primitive $p^n$-th root of unity $\zeta_{p^n}$, $\varprojlim_m \mathbf R_{N+n}^{(m)}|_{X=\zeta_{p^n}-1}= \mathbf R_{N+n}|_{X=\zeta_{p^n}-1}$.
Then, naturally we would hope for the following:
Let $\chi$ be a finite character of $\Gamma$ satisfying $\chi(\gamma)=\zeta_{p^{n}}$. We may also consider it as a character of $\operatorname{Gal}(F_n/F)$. It is not hard to see that, assuming $S_{tor}^{\chi}$ is finite, $\det(\mathbf R_{N+n}|_{X=\zeta_{p^n}-1})=0$ if and only if $\operatorname{Sel}_p(E[p^{\infty}]/F_n)^{\chi}$ is infinite. Since $\bfLalg^{\sharp}=\det(\tilde\bfL^{\sharp}(E))$ and $\bfLalg^{\flat}=\det(\tilde\bfL^{\flat}(E))$ are in $\Lambda$, and therefore have only finitely many roots, we would hope that this implies that $\operatorname{Sel}_p(E[p^{\infty}]/F_n)^{\chi}$ is infinite for only finitely many characters $\chi$. However, the author finds this difficult to show, because we may have $\det \mathbf R_{N+n}|_{X=\zeta_{p^n}-1}=0$ even when $\bfLalg^{\sharp}(\zeta_{p^n}-1)\not=0$ and $\bfLalg^{\flat}(\zeta_{p^n}-1)\not=0$.
Instead, we make a more modest claim:
\begin{proposition} \Label{DDT} Suppose $\bfLalg^{\sharp}$ and $\bfLalg^{\flat}$ are not 0, and $a_p$ and $\frac{\beta_2}{\beta_1}$ are divisible by $p^T$ for some $T>0$. Suppose $\chi$ is a primitive character of $\Gamma_n$ for sufficiently large $n$. Also, suppose that
\begin{enumerate}[(a)] \item if $n$ is odd, $p^S\nmid \bfLalg^{\sharp}$ for some $S$ with $S+\displaystyle \frac{ep}{(p-1)^2}<T$, or
\item if $n$ is even, $p^{S'} \nmid \bfLalg^{\flat}$ for some $S'$ with $S'+\displaystyle \frac{ep}{(p-1)^2}<T$. \end{enumerate} Then, $E(F_n)^{\chi}$ and $\Sha(E/F_n)[p^{\infty}]^{\chi}$ are finite. \end{proposition}
\begin{proof}
First, we note $p^{T-1}|B_{i}$ and $p^{T-1}|B_{i}'$ for each $i$ where $B_{i}, B_{i}'$ are the matrices in the proof of Proposition~\ref{Mark IV}, thus $p^{T-1}|A_{i}$ and $A_{i}'\equiv I_e \pmod{p^T}$. Then, we can choose $\tilde A_i, \tilde A_i'$ so that $p^{T-1}|\tilde A_i, \tilde A_i' \equiv I_e \pmod{p^T}$.
Thus, if $n$ is odd, for $\zeta_{p^n}=\chi(\gamma)$,
\begin{multline*} \left. \begin{bmatrix} p \tilde A_{N+n-1} & -\tilde A'_{N+n-1} \Phi_{n-1} \\ I_e & 0 \end{bmatrix} \cdot \cdots \cdot \begin{bmatrix} p \tilde A_{N+2} & -\tilde A'_{N+2} \Phi_2 \\ I_e & 0 \end{bmatrix} \cdot \begin{bmatrix} p \tilde A_{N+1} & -\tilde A'_{N+1} \Phi_1 \\
I_e & 0 \end{bmatrix} \right|_{X=\zeta_{p^n}-1}\\ \equiv \begin{bmatrix} 0 & -\Phi_{n-1}(\zeta_{p^n}-1) I_e \\ I_e&0 \end{bmatrix} \cdot \cdots \cdot \begin{bmatrix} 0 & -\Phi_1(\zeta_{p^n}-1) I_e \\ I_e&0 \end{bmatrix} = \begin{bmatrix} a I_e&0 \\ 0&bI_e \end{bmatrix} \pmod{p^T} \end{multline*} for some $a,b$ with $v_p(a), v_p(b) < p/(p-1)^2$, and if $n$ is even, it is congruent to $\begin{bmatrix} 0&a I_e \\ b I_e &0 \end{bmatrix}$.
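The valuation bound on $a$ and $b$ can be checked directly. For $1\le k<n$ we have $\Phi_k(\zeta_{p^n}-1)=\dfrac{\zeta_{p^{n-k}}-1}{\zeta_{p^{n-k+1}}-1}$, so
\[ v_p\bigl(\Phi_k(\zeta_{p^n}-1)\bigr)=\frac{1}{p^{n-k-1}(p-1)}-\frac{1}{p^{n-k}(p-1)}=\frac{1}{p^{n-k}} , \]
and, up to sign, $a$ and $b$ are products of the $\Phi_k(\zeta_{p^n}-1)$ over alternate indices $1\le k\le n-1$. Hence
\[ v_p(a),\,v_p(b)\ \le\ \sum_{j\ge 1}p^{-(2j-1)}=\frac{p}{p^2-1}<\frac{p}{(p-1)^2} . \]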
Then, in case (a), $a \tilde \bfL^{\sharp}(\zeta_{p^n}-1)\equiv \mathbf R_{N+n}(\zeta_{p^n}-1) \pmod {p^T}$, and in case (b), $a\tilde \bfL^{\flat}(\zeta_{p^n}-1)\equiv \mathbf R_{N+n}(\zeta_{p^n}-1) \pmod {p^T}$. If $n$ is sufficiently large, $v_p(a^e \bfLalg^{\sharp}(\zeta_{p^n}-1))<T$ and $v_p(a^e \bfLalg^{\flat}(\zeta_{p^n}-1))<T$ respectively by our assumption, thus $\det(\mathbf R_{N+n}(\zeta_{p^n}-1)) \not\equiv 0 \pmod{p^T}$, and also $S_{tor}^{\chi}$ is finite for a sufficiently large $n$. Thus our claim follows. \end{proof}
Then we immediately have:
\begin{theorem} \Label{DDR} Suppose
\begin{enumerate} \item $a_p$ and $\frac{\beta_2}{\beta_1}$ are divisible by $p^T$ for some $T>0$,
\item $p^S\nmid \bfLalg^{\sharp}, p^{S} \nmid \bfLalg^{\flat}$ for some $S$ with $S+\displaystyle \frac{ep}{(p-1)^2}<T$. \end{enumerate} Then, $E(F_{\infty})/E(F_{\infty})_{tor}$ is a group of finite rank, and $\Sha(E/F_n)[p^{\infty}]^{\chi}$ is finite for all sufficiently large $n$ and every primitive character $\chi$ of $\operatorname{Gal}(F_n/F)$. \end{theorem} We note that it is often relatively easy to show that $E(F_{\infty})$ has only finitely many $p$-power torsion points.
\end{subsection}
\begin{subsection}{Appendix: Sprung's $\sharp/\flat$-Selmer groups} \Label{Appendix}
Even though we do not use them in this paper, using the points constructed in Section~\ref{AlphaGo}, we can construct $\operatorname{Sel}_p^{\sharp}(E/F_{\infty})$ and $\operatorname{Sel}_p^{\flat}(E/F_{\infty})$ as Sprung did (\cite{Sprung}).
\begin{definition}[Perrin-Riou map] \begin{enumerate} \item Let $(\cdot, \cdot)_{N+n}$ denote the following pairing given by local class field theory:
\[ (\cdot, \cdot)_{N+n}: H^1(\Qp(\pi_{N+n}), T_p) \times H^1(\Qp(\pi_{N+n}), T_p) \to \Zp.\]
Recall $\Gamma_n=\operatorname{Gal}(F_n/F) \cong \operatorname{Gal}(\Qp(\pi_{N+n})/ \Qp(\pi_N))$, $\Gamma=\varprojlim \Gamma_n$, and $\Lambda=\Zp[[\Gamma]]\cong \Zp[[X]]$ (non-canonically). For $z \in H^1(\Qp(\pi_{N+n}), T_p)$ and $x = [x_1,\cdots, x_e]^t \in E(\Qp(\pi_{N+n}))^e$,
\[ \bfP_{N+n, x}(z)\stackrel{def}= \begin{bmatrix} \sum_{\sigma \in \Gamma_n} (z, x_1^{\sigma})_{N+n} \cdot \sigma \\ \sum_{\sigma \in \Gamma_n} (z, x_2^{\sigma})_{N+n} \cdot \sigma \\ \vdots \\ \sum_{\sigma \in \Gamma_n} (z, x_e^{\sigma})_{N+n} \cdot \sigma \end{bmatrix} \in \Zp[\Gamma_n]^e.\]
\item Also, let $\tilde \bfP_{N+n, x}(z)$ denote its lifting to $\Zp[\Gamma_{n+1}]^e$. \end{enumerate} \end{definition}
\begin{notation} \begin{enumerate} \item Let $x_{N+n}$ denote
\[ x_{N+n}=[ P(\pi_{N+n, 1}), \cdots, P(\pi_{N+n, e}) ]^t. \]
\item Let $\Proj_{n/m}$ denote the natural projection from $\Zp[\Gamma_n]$ to $\Zp[\Gamma_m]$. \end{enumerate} \end{notation}
By Proposition~\ref{Mark IV}, for any $z=(z_n) \in \varprojlim_{n \geq N} H^1(\Qp(\pi_n), T_p)$,
\[ \Proj_{n+1/n} \begin{bmatrix} \bfP_{N+n+1, x_{N+n+1}} (z_{N+n+1}) \\ \tilde \bfP_{N+n, x_{N+n}} (z_{N+n}) \end{bmatrix} = \begin{bmatrix} pA_{N+n} & -A'_{N+n} \Phi_n \\ I_e & 0 \end{bmatrix} \cdot \begin{bmatrix} \bfP_{N+n, x_{N+n}} (z_{N+n}) \\ \tilde\bfP_{N+n-1, x_{N+n-1}} (z_{N+n-1}) \end{bmatrix} \]
Following Sprung (\cite{Sprung}), we can define the following:
\begin{definition}
From the previous section we recall the liftings $\tilde A_{N+n}, \tilde A'_{N+n} \in M_e(\Lambda)$ of $A_{N+n}, A'_{N+n} \in M_e(\Lambda_n)$ for every $n$.
For $z=(z_n) \in \varprojlim_{n \geq N} H^1(\Qp(\pi_n), T_p)$, \begin{eqnarray*} \begin{bmatrix} \Col^{\sharp}(z) \\ \Col^{\flat}(z) \end{bmatrix} &\stackrel{def}=& \varprojlim_n
\begin{bmatrix} p \tilde A_{N+1} & -\tilde A'_{N+1} \Phi_1 \\ I_e & 0 \end{bmatrix}^{-1}
\cdot \begin{bmatrix} p \tilde A_{N+2} & -\tilde A'_{N+2} \Phi_2 \\ I_e & 0 \end{bmatrix}^{-1}
\cdot \\ && \cdots\quad \cdot \begin{bmatrix} p \tilde A_{N+n} & -\tilde A'_{N+n} \Phi_n \\ I_e & 0 \end{bmatrix}^{-1} \cdot \begin{bmatrix} \bfP_{N+n+1, x_{N+n+1}}(z_{N+n+1}) \\ \tilde \bfP_{N+n, x_{N+n}}(z_{N+n}) \end{bmatrix}. \end{eqnarray*} \end{definition}
Similar to Proposition~\ref{GagConcert}, we can show $\Col^{\sharp}(z), \Col^{\flat}(z) \in \Lambda^e$. We omit its proof.
\begin{definition} \Label{Sharp Distinction}
We recall the definition of the relaxed Selmer group $\Selr$ from Definition~\ref{Relaxed Selmer}. We define
\[ \operatorname{Sel}_p^{\sharp}(E[p^{\infty}] /F_{\infty}) \stackrel{def}= \ker\left( \Selr(E[p^{\infty}] /F_{\infty}) \to \displaystyle \frac {H^1(F_{\infty, \mfp}, E[p^{\infty}])}{\left( \ker \Col^{\sharp}\right)^{\perp}} \right) \] where $\left( \ker \Col^{\sharp}\right)^{\perp}$ denotes the orthogonal complement of $\ker \Col^{\sharp}$ with respect to the local pairing $\varprojlim_n H^1(\Qp(\pi_n), T_p) \times H^1(\Qp(\pi_{\infty}), E[p^{\infty}])\to \Qp/\Zp$.
Similarly, we define $\operatorname{Sel}_p^{\flat}(E/F_{\infty})$. \end{definition}
It seems likely that $\operatorname{Sel}_p^{\sharp}(E[p^{\infty}]/F_{\infty})$ and $\operatorname{Sel}_p^{\flat}(E[p^{\infty}]/F_{\infty})$ are $\Lambda$-cotorsion under suitable assumptions. In fact, one may speculate that
\begin{eqnarray} \Label{Speculation One} char(\operatorname{Sel}_p^{\sharp}(E[p^{\infty}] /F_{\infty})^{\vee})=char(S_{tor})\cdot (\bfLalg^{\sharp}), \end{eqnarray}
\begin{eqnarray} \Label{Speculation Two} (\text{resp.} \quad char(\operatorname{Sel}_p^{\flat}(E[p^{\infty}] /F_{\infty})^{\vee})=char(S_{tor}) \cdot (\bfLalg^{\flat}).) \end{eqnarray}
But, the ways that $\operatorname{Sel}_p^{\sharp/\flat}$ and $\bfLalg^{\sharp/\flat}$ are defined seem to be dual to each other. Thus, we suspect that to prove such an equality, we may need some kind of self-duality (similar to the Tate local duality) of the local conditions such as the one proven in \cite{Kim-1}. The author cannot say with certainty that such self-duality exists for the local conditions in Definition~\ref{Sharp Distinction}, but, an analogous result has been proven for a different but related Selmer group (\cite{Lei-Ponsinet}), and the author is hopeful that equalities such as (\ref{Speculation One}) and (\ref{Speculation Two}) will be proven soon.
\end{subsection}
\end{section}
\end{document}
\begin{document}
\title{Modeling with a Large Class of Unimodal Multivariate Distributions}
\author[1]{Marina Silva Paez} \author[2]{Stephen G. Walker} \small \affil[1]{Instituto de Matem\'atica, Universidade Federal do Rio de Janeiro, Brazil} \affil[2]{Department of Statistics and Data Sciences, University of Texas at Austin, U.S.A} \normalsize \renewcommand\Authands{ and } \date{} \maketitle
\begin{abstract} In this paper we introduce a new class of multivariate unimodal distributions, motivated by Khintchine's representation. We start by proposing a univariate model, whose support covers all the unimodal distributions on the real line. The proposed class of unimodal distributions can be naturally extended to higher dimensions, by using the multivariate Gaussian copula. Under both univariate and multivariate settings, we provide MCMC algorithms to perform inference about the model parameters and predictive densities. The methodology is illustrated with univariate and bivariate examples, and with variables taken from a real data-set. \end{abstract}
\noindent \emph{Keywords}: unimodal distribution, multivariate unimodality, mixture models, nonparametric Bayesian inference.
\section{Introduction} The clustering of data into groups is currently a major research topic. It is fundamental in many statistical problems, most notably in machine learning; see, for example, \cite{TehJor10}.
The traditional approach is to model the data via a mixture model; see \cite{Titterington85} for finite mixture distributions and, more recently, \cite{frauhwirth06}. The idea, which carries through to infinite mixture models, is that each component in the mixture is represented by a parametric density function, typically from the same family, which is usually the normal distribution. This assumption, often made for convenience, supposes that each cluster or group can be adequately modeled via a normal distribution. For if a group requires two such normals, then it is deemed that there are in fact two groups. Yet two normals could be needed even when there is a single group that happens, for example, to be skewed.
On the other hand, if we start by thinking about how a cluster could be represented in terms of probability density functions, then a unimodal density is the most obvious choice. Quite simply, in the absence of further knowledge, i.e. observing only the data, a bimodal density would indicate two clusters. This was the motivation behind \cite{rodriguezwalker14}, which relies heavily on the unimodal density models being defined on the real line, for which there are no obvious extensions to the multivariate setting.
There are representations of unimodal density functions on the real line (e.g. \cite{Khint38} and \cite{Feller71}) and it is not difficult to model such a density adequately. See, for example, \cite{brunner89}, \cite{quintana09} and \cite{rodriguezwalker14}. Clearly, the aim is to provide large classes of unimodal densities and hence infinite dimensional models are often used.
However, while there is an abundance of literature on unimodal density function modeling on the real line, there is a noticeable lack of work in two and higher dimensions. The reasons are quite straightforward: first, there is no general representation for unimodal distributions beyond the real line and, second, the current approaches to modeling unimodal densities on the real line do not naturally extend to higher dimensions.
The aim in this paper is to introduce and demonstrate a class of unimodal densities for a multivariate setting. While marginal density functions will be modeled using the \cite{Khint38} representation on the real line, the dependence structure will be developed using a Gaussian copula model. To elaborate, using an alternative form of representation of unimodal densities on the real line, we can write a unimodal density as \begin{equation}
f(y)=\int_0^1 f(y|x)\,{\rm d} x.\label{unim} \end{equation} This then naturally extends to a bivariate setting with marginals of the form (\ref{unim}) via the use of a copula density function $c(x_1,x_2)$;
$$f(y_1,y_2)=\int_0^1\int_0^1 f_1(y_1|x_1)\,f_2(y_2|x_2)\,c(x_1,x_2)\,{\rm d} x_1\,{\rm d} x_2.$$ Using a Gaussian copula, which has the ability to model pairwise dependence, we can then easily proceed upwards to the general multivariate unimodal density.
The layout of the paper is as follows: In section 2 we provide the key representation of unimodal densities on the real line, which we adapt in order to define continuous densities; otherwise there is an issue of continuity at the mode. With a full representation of unimodal densities, we need a novel MCMC algorithm, particularly, and perhaps surprisingly, to accommodate a non-zero mode. Section 3 then deals with the multivariate setting. There are peculiarities of the higher-dimensional case for the MCMC, and again it is the location of the mode parameter that needs special attention. Section 4 provides a real data analysis.
\section{Models for unimodal densities}
Our aim in this paper is to start with the representation of \cite{Khint38} for unimodal densities on the real line and extend it to higher dimensions in a natural way, using the multivariate Gaussian copula. The representation of \cite{Khint38} is given by $Y=X\,Z$, where $X$ is uniform on $[0,1]$ and $Z$, independent of $X$, has any distribution. Then $Y$ has a unimodal distribution with mode at $0$. To see this let us consider the situation of $y>0$. So $$P(Y\leq y)=\int_0^1 P(Z\leq y/x)\,{\rm d} x$$ and hence $$f(y)=\int_0^1 x^{-1}\,g(y/x)\,{\rm d} x,$$ where $g$ is used to represent the density function of $Z$. To see more clearly the unimodality at $0$, let us use the transform $s=y/x$, so \begin{equation} f(y)=\int_y^\infty s^{-1}\,g(s)\,{\rm d} s,\label{oldf} \end{equation} which is maximized at $y=0$.
This representation has been widely used and is usually presented as a mixture of uniform distributions; i.e. $$f(y)=\int s^{-1}{\bf 1}(0<y<s)\,{\rm d} G(s).$$ The prior for $G$, in a Bayesian nonparametric setting, is usually taken as a Dirichlet process (\cite{Ferg73}). To denote this we write $G \sim D(M,G_0)$, where $M > 0$ is a scale parameter, and $G_0$ a distribution function. In particular, $E(G)=G_0$.
Now \cite{brunner89} and \cite{quintana09}, among others, have worked with this specification and extended to the real line with arbitrary mode by using $U(y|\kappa-s,\kappa+s)$ for some $\kappa\geq 0$. This specification leads to a symmetric density.
Furthermore, \cite{rodriguezwalker14} develop the model to allow for asymmetry by adopting the idea of \cite{fernandezsteel98}; incorporating an asymmetry parameter $\lambda$ in the uniform kernel, \begin{eqnarray}
f(y|\lambda, \kappa, G) &=& \int U\left(y|\kappa- s e^{-\lambda}, \kappa + s e^{-\lambda}\right) \,{\rm d} G(s). \label{RW2012} \end{eqnarray} So (\ref{RW2012}) defines a unimodal density determined by $(\lambda, \kappa, G)$, where $\kappa$ is the location parameter, $\lambda$ defines the asymmetry, and the distribution $G$ defines characteristics such as variance, kurtosis, tails and higher moments. With this representation, the support of the model proposed by \cite{rodriguezwalker14} includes all symmetric unimodal densities and a large class of asymmetric ones.
However, when extending to the real line, and using a natural density for $g$ such as the normal, we encounter the situation of $f(0)=\infty$. In fact, on the real line, $g$ needs to be somewhat unusual to ensure that $f(0)\ne \infty$. See the form of density in (\ref{oldf}).
To resolve this problem, we instead work with $Y=X/Z$; so again looking at $y>0$, we have $$P(Y\leq y)=\int_0^1 P(Z\geq x/y)\,{\rm d} x.$$ Hence, \begin{equation} f(y)=\int_0^1 (x/y^2)\,g(x/y)\,{\rm d} x\label{origuni} \end{equation} and using the transform $s=x/y$ we obtain \begin{equation} f(y)=\int_0^{1/y} s\,g(s)\,{\rm d} s.\label{posuni} \end{equation} Thus we see the mode is again at $y=0$, and now $f(0)=\int_0^\infty s\,g(s)\,{\rm d} s$ will be finite subject to the reasonable assumption that $s\,g(s)$ is integrable, i.e. that $Z$ has a finite mean, rather than the more demanding assumption that $s^{-1}g(s)$ is integrable at $0$ required in the former setting.
\subsection{A class of unimodal densities} \label{unimoddens}
Here we further investigate the choice of $Z$ being a normal distribution with the representation $Y=X/Z$. We can easily extend (\ref{posuni}) to the whole real line; resulting in \begin{equation}
f(y)={\bf 1}(y>0)\,\int_0^{1/y}\,s\,g(s)\,{\rm d} s+{\bf 1}(y<0)\,\int_{1/y}^0 |s|\,g(s)\,\,{\rm d} s.\label{realuni} \end{equation} For a mode at $\kappa\ne 0$ we simply exchange $y$ for $y-\kappa$. If the mode is at $0$, then for $\lim_{y \downarrow 0} f(y) =\lim_{y \uparrow 0} f(y)$, we need $$
\int_0^\infty s g(s) {\rm d} s = \int_{-\infty}^0 |s| g(s) {\rm d} s. $$ Our aim is to take $g$ from a large class of density functions on the real line, and the full class is given by mixtures of normal distributions. Thus, it is expedient to compute (\ref{realuni}) for $g(s)$ normal with mean $\mu$ and variance $\sigma^2$. That is, for $\xi>0$
$$f(\xi)=\int_0^\xi s\,N({\rm d} s|\mu,\sigma^2),$$ which, after some algebra, results in $$f(\xi)=\mu\left[\Phi\left(\frac{\xi-\mu}{\sigma}\right)-\Phi\left(\frac{-\mu}{\sigma}\right)\right] +\sigma\left[\phi\left(\frac{-\mu}{\sigma}\right)-\phi\left(\frac{\xi-\mu}{\sigma}\right)\right],$$ where $\phi$ and $\Phi$ are the pdf and cdf, respectively, for the standard normal distribution. With $\xi<0$ we simply need to rearrange the signs, so \begin{equation}
f(\xi)=\left|\mu\left[\Phi\left(\frac{\xi-\mu}{\sigma}\right)-\Phi\left(\frac{-\mu}{\sigma}\right)\right]
+\sigma\left[\phi\left(\frac{-\mu}{\sigma}\right)-\phi\left(\frac{\xi-\mu}{\sigma}\right)\right] \right|.\label{moduni} \end{equation} Hence, this $f(\xi)$ holds for all $\xi \in (-\infty, +\infty)$.
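The algebra behind this expression is the decomposition $s=\mu+(s-\mu)$ together with the identity $\phi'(t)=-t\,\phi(t)$: for $\xi>0$,
\begin{eqnarray*}
\int_0^\xi s\,N({\rm d} s|\mu,\sigma^2) &=& \mu\int_0^\xi \frac1\sigma\,\phi\left(\frac{s-\mu}{\sigma}\right){\rm d} s+\int_0^\xi \frac{s-\mu}{\sigma}\,\phi\left(\frac{s-\mu}{\sigma}\right){\rm d} s \\
&=& \mu\left[\Phi\left(\frac{\xi-\mu}{\sigma}\right)-\Phi\left(\frac{-\mu}{\sigma}\right)\right]+\left[-\sigma\,\phi\left(\frac{s-\mu}{\sigma}\right)\right]_0^\xi ,
\end{eqnarray*}
which is the expression above; the case $\xi<0$ is identical up to sign.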
To move the mode to $\kappa$, rather than $0$, write $f(y|\mu,\sigma,\kappa)=f(1/(y-\kappa))$ with $f(\cdot)$ given by (\ref{moduni}). We could also maintain the representation (\ref{origuni}) in which case we can write $$
f(y|\mu,\sigma,\kappa)=\frac{1}{\sigma\sqrt{2\pi}}\int_0^1 \frac{x}{(y-\kappa)^2}\,\exp\left\{-\hbox{$1\over2$}\left(\frac{x/(y-\kappa)-\mu}{\sigma}\right)^2\right\}\,{\rm d} x. $$ The usefulness of this is that it provides a latent joint density function which will be helpful when it comes to model estimation, namely, \begin{eqnarray}
f(y,x|\mu,\sigma,\kappa)\propto \frac{x}{(y-\kappa)^2}\,\exp\left\{-\hbox{$1\over2$}\left(\frac{x/(y-\kappa)-\mu}{\sigma}\right)^2\right\}. \label{withx} \end{eqnarray} For the forthcoming mixture model, where we consider mixtures of normals for $Z$, we find it convenient, for identifiability reasons, to use the parametrization $(\mu,c)$, where $c=(\mu/\sigma)^2$.
\subsubsection{The mixture model}
The model proposed for the class of unimodal density functions on ${\rm I\!R}$ is a mixture of the unimodal densities with mode at $\kappa$ proposed in section \ref{unimoddens}, i.e. (\ref{withx}). This model can be written as \begin{eqnarray}
f(y) = \sum_{j=1}^\infty w_j f(y|\mu_j,c, \kappa). \label{inftysum} \end{eqnarray} The mode is therefore at $\kappa$, and we take the parameter $c$, which is a transformation of the coefficient of variation, to be fixed across the normal components. This is analogous to keeping the variance terms fixed, for identifiability reasons, in a mixture of normals model. The support of the model is not diminished by doing this.
Therefore, we can write \begin{equation}
f(y)=\int f(y|\mu,c,\kappa)\,{\rm d} G(\mu)\label{mixmod} \end{equation} where $G$ is a discrete distribution with mass $w_j$ at the point $\mu_j$. Hence, in a Bayesian context, the parameters to be estimated and first assigned prior distributions are $((\mu_j,w_j),c,\kappa)$, with the constraints being that the weights sum to 1, $c>0$, and the $(\mu_j)$ are distinct. As an illustration, Figure \ref{density} shows the density function of a mixture of two components with $\kappa = 10$, $c=1$, $\mu = (-5,20)$ and $w = (0.8,0.2)$.
The claim is that model (\ref{mixmod}) can be arbitrarily close to any unimodal density on the real line. The key is writing $Y=X/Z$ rather than $Y = XZ$, and with the former we can employ the full support abilities of a mixture of normals without the problem of a discontinuity at the mode.
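To make the construction concrete, the following short numerical sketch (ours, not part of the authors' implementation; all function and variable names are hypothetical) evaluates (\ref{moduni}) at $\xi=1/(y-\kappa)$ and the two-component mixture displayed in Figure \ref{density}.
\begin{verbatim}
# Sketch: evaluate the unimodal density (moduni) and a finite mixture of it.
from math import sqrt, pi, exp, erf

def std_norm_pdf(t):
    return exp(-0.5 * t * t) / sqrt(2.0 * pi)

def std_norm_cdf(t):
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

def unimodal_pdf(y, mu, c, kappa):
    """f(y | mu, c, kappa) with sigma^2 = mu^2 / c, evaluated via (moduni)
    at xi = 1 / (y - kappa); not defined at y = kappa itself."""
    sigma = abs(mu) / sqrt(c)
    xi = 1.0 / (y - kappa)
    term1 = mu * (std_norm_cdf((xi - mu) / sigma) - std_norm_cdf(-mu / sigma))
    term2 = sigma * (std_norm_pdf(-mu / sigma) - std_norm_pdf((xi - mu) / sigma))
    return abs(term1 + term2)

def mixture_pdf(y, mus, weights, c, kappa):
    return sum(w * unimodal_pdf(y, m, c, kappa) for m, w in zip(mus, weights))

# Parameters as in the two-component illustration above:
# kappa = 10, c = 1, mu = (-5, 20), w = (0.8, 0.2).
for y in (5.0, 9.0, 9.9, 10.1, 11.0, 15.0):
    print(y, mixture_pdf(y, (-5.0, 20.0), (0.8, 0.2), 1.0, 10.0))
\end{verbatim}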
\subsection{Prior specifications and model estimation} \label{priors}
\subsubsection{Prior specifications}
To complete the model described in the previous section, we now set the prior distributions for the unknown model parameters. Specific values are assigned in the illustration section.
\begin{itemize}
\item The prior distributions for the parameters $(\mu_j)$ are normal with zero mean and variance $\sigma_\mu^2$, while the prior for $\kappa$ is also normal with zero mean and variance $\sigma_\kappa^2$. The prior for $c$ is gamma with parameters $(\alpha_c,\beta_c)$.
\item The prior distribution for the weights $(w_j)$ has a stick-breaking construction (\cite{sethuraman94}), given by $$w_1 = v_1 \quad\mbox{and}\quad w_j = v_j \prod_{l < j}(1 - v_l)\quad\mbox{for }j>1,$$ with the $(v_j)$ being i.i.d. from a beta distribution with parameters $1$ and $M$. We assume a gamma, $ga(\alpha_M,\beta_M)$, as the hyper-prior for the parameter $M$.
\end{itemize}
As a consequence, the prior for $G$ in (\ref{mixmod}) is a Dirichlet process with mean distribution $N(0,\sigma_\mu^2)$ and scale parameter $M$.
Before detailing the MCMC algorithm for this model, we will now introduce the key latent variables which facilitate an ``as easy as possible'' implementable algorithm. First, to deal with the infinite sum of elements in (\ref{inftysum}), we introduce a variable $u$, such that we have the joint density with observation $y$, as \begin{eqnarray*}
f(y,u) = \sum_{j=1}^\infty {\bf 1}\{u < \xi_j\} (w_j/\xi_j)\,f(y|\mu_j,c, \kappa) \end{eqnarray*} for some decreasing deterministic sequence $(\xi_j)$. See \cite{Kalli11} for details and choices of $(\xi_j)$. In particular, we use $\xi_j = \exp(-\gamma j)$, with $\gamma = 0.01$, but other sequences can be used.
The key here is that, given $u$, the number of components is finite, and they correspond to the indices $A_u = \{j: \xi_j > u \}$, such that \begin{eqnarray*}
f(y|u) = \sum_{j \in A_u} (w_j/\xi_j)\,f(y|\mu_j,c, \kappa). \end{eqnarray*} Next we introduce the component indicator $d$ for each observation, which tells us which of the available components the observation came from, resulting in the joint density with $(y,u)$ as
$$f(y,u,d)={\bf 1}(u<\xi_d)\,(w_d/\xi_d)\,f(y|\mu_d,c,\kappa).$$ The joint likelihood function, including latent variables, is then given by \begin{eqnarray*}
\prod_{i=1}^n {\bf 1}\{u_i < \xi_{d_i}\} (w_{d_i}/\xi_{d_i}) \,f(y_i|\mu_{d_i},c, \kappa), \end{eqnarray*} and it will be convenient to define $D = \max\{d_i\}$.
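With the choice $\xi_j=\exp(-\gamma j)$, the truncation induced by the slice variables is available in closed form: almost surely,
\[ A_{u_i}=\{j\geq 1:\ u_i<\xi_j\}=\{1,\ldots,N_i\}, \qquad N_i=\lfloor -\gamma^{-1}\log u_i\rfloor , \]
so the number of components that has to be handled at any sweep of the algorithm below is finite and easily computed.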
\subsubsection{The MCMC algorithm}
It turns out that the most complicated parameter to deal with is $\kappa$; surprisingly, since the location parameter is often one of the easiest to sample in an MCMC routine. Thus, we will initially describe the algorithm assuming $\kappa$ is known. The unknown quantities that need to be sampled, under this scenario, are: $(\mu_j, v_j), j=1,2,\ldots$; $(d_i, u_i), i=1,\ldots,n$; $c$ and $M$. The sampling of these parameters and latent variables is as follows, in each case conditional on all other parameters and latent variables:
\begin{enumerate}
\item Sample $u_i$ from the uniform distribution on $(0,\xi_{d_i})$, for $i=1,\ldots,n$. A useful summary to record here is $N=\max_i N_i$, where $N_i=\max\{j: u_i<\xi_j\}$.
\item Sample $M$ following the approach in \cite{escobarwest95}. Now $M$ depends on the other model parameters only through the number of distinct $(d_i)$, and the sampling goes as follows, \begin{itemize} \item sample $\nu \sim B(\widetilde{M}, n)$, where $\widetilde{M}$ is the current value for $M$; \item sample $M$ from $ga(\alpha_M + k, \beta_M - \log\nu), $ \end{itemize} where $k$ is the number of distinct $(d_i)$.
\item Sample $(\mu_j)$ for $ j=1, \ldots, N$ via Metropolis-Hastings (M-H). If there is at least one $d_i = j$ then we use the M-H step, otherwise the $\mu_j$ comes from the prior. The proposal $\mu_{j}^*$ is taken to be normal, centered at the current value, i.e. $$ \mu_{j}^* = \mu_{j} + N(0,h_\mu). $$ The M-H acceptance criterion is given by: \begin{eqnarray*}
q=\frac{ \prod_{\{d_i = j\}} f(y_i|\mu_{j}^*,c,\kappa) \pi(\mu_{j}^*) } { \prod_{\{d_i = j\}} f(y_i|\mu_j,c,\kappa) \pi(\mu_j) }, \end{eqnarray*} where $\pi$ represents the prior distribution. We accept the proposal with probability $\min\{1,q\}$.
\item Sample $c$ via Metropolis-Hastings. The proposal $c^*$ is obtained from: \begin{eqnarray*} \log(c^*) = \log(c) + N(0,h_c). \end{eqnarray*} The M-H acceptance criterion is given by: \begin{eqnarray*}
q=\frac{ \prod_{i=1}^n f(y_i|\mu_{d_i},c^*,\kappa) \pi(c^*)Q(c^*;c) }{ \prod_{i=1}^n f(y_i|\mu_{d_i},c,\kappa) \pi(c)Q(c;c^*) }, \end{eqnarray*} where $Q(c^*;c)$ denotes a source density for a candidate draw $c^*$ given the current value $c$ in the sampled sequence. In this case, $Q(c^*;c) = 1/c^*$, and we accept the proposal with probability $\min\{1,q\}$.
\item Sampling from $d_i$ can be done directly since we have its probability mass function, up to normalization, \begin{eqnarray*}
P(d_i = j) \propto \frac{w_j}{\xi_j}\,f(y_i|\mu_j,c,\kappa) \quad \mbox{for}\quad j = 1, \ldots, N_i, \end{eqnarray*} where $N_i=\max\{j:u_i<\xi_j\}$.
\end{enumerate}
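As an illustration of the bookkeeping in steps 1 and 2, the following is a minimal sketch (ours, with hypothetical names; the gamma update is written with $ga(\cdot,\cdot)$ in the shape-rate convention used above):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

gamma_slice = 0.01                      # xi_j = exp(-gamma_slice * j)

def xi(j):
    return np.exp(-gamma_slice * np.asarray(j, dtype=float))

def step1_slice_variables(d):
    """u_i ~ U(0, xi_{d_i}); N is the largest index any slice set reaches."""
    u = rng.uniform(0.0, xi(d))
    N = int(np.floor(-np.log(u) / gamma_slice).max())
    return u, N

def step2_update_M(M_curr, d, alpha_M, beta_M):
    """Escobar-West-style update of M as described in the text."""
    n, k = len(d), len(np.unique(d))
    nu = rng.beta(M_curr, n)
    return rng.gamma(alpha_M + k, 1.0 / (beta_M - np.log(nu)))   # scale = 1/rate
\end{verbatim}
The remaining steps depend on the density $f(y|\mu,c,\kappa)$ of section \ref{unimoddens} and follow the same pattern.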
When the mode $\kappa$ is unknown, simply adding an extra step in the algorithm above is neither efficient nor viable. The reason for this is that $\kappa$ and the $(\mu_j)$ are highly correlated. Data values coming from cluster $j$ with $\mu_j > 0$ are typically larger than $\kappa$, while if $\mu_j < 0$, they are typically smaller. The consequence is that, for fixed values of $\mu_j$, any proposal for $\kappa$ is prone to be rejected.
The explanation for this is the following. Let $\{y_{(1)}, y_{(2)}, \ldots, y_{(n)}\}$ be the ordered data, and suppose the initial/current value of $\kappa$ in the MCMC algorithm, written as $\tilde{\kappa}$, lies between $y_{(l)}$ and $y_{(l+1)}$, for some $l = 1, \ldots, n-1$, as shown in Figure \ref{fig:toy}. It is likely that $\{y_{(1)},\ldots,y_{(l)}\}$ belong to clusters with negative $\mu_j$, and $\{y_{(l+1)},\ldots,y_{(n)}\}$ belong to clusters with positive $\mu_j$. This is particularly true for data close to $\kappa$, since from (\ref{withx}) we can see that they correspond to large values of $|\mu_j|$. These large values typically generate positive values of $x/z$ when $\mu_j > 0$, and negative values of $x/z$ when $\mu_j < 0$.
Suppose we start the algorithm assigning to the data clusters whose $\mu_{d_i}$ have a sign which is negative for $y_i$ smaller than $\kappa$, and positive otherwise. If we now propose moving the value of $\kappa$ to, say, somewhere between $y_{(l+1)}$ and $y_{(l+2)}$, it means that $y_{(l+1)}$ will be incompatible with its corresponding cluster. The same happens if we move $\kappa$ in the opposite direction. More drastic moves make the problem more acute.
A solution for this would be to test moves of $\kappa$ and $(\mu_{d_i})$ jointly in a Metropolis-Hastings step. That, however, would be very inefficient due to the possibly large number of $(\mu_j)$ involved. Our idea is instead to test a new proposal for $\kappa$ jointly with moving the affected $y_i$ to different clusters; the M-H step used to sample $\kappa$, and possibly some of the $d_i$, is now described.
To make the proposal, first sample auxiliary variable $s$ from a discrete uniform distribution in the interval $[-m,m]$, for a chosen $m \in {\rm I\!N}^+$. This sampled value will give the size and direction of the move we wish to propose for $\kappa$.
As a toy example, suppose we choose $m=2$. Therefore, $s$ will be either $-2, -1, 0, 1$ or $2$, with the same probability. If $\tilde{\kappa}$ is placed between order statistics $y_{(l)}$ and $y_{(l+1)}$ we have that: if $s = -2$, $\kappa^*$ will be sampled uniformly within the interval $[y_{(l-2)},y_{(l-1)}]$; if $s = -1$, $\kappa^*$ will be sampled uniformly within the interval $[y_{(l-1)},y_{(l)}]$, and so forth, as illustrated in Figure $\ref{fig:toy}$.
Note, however, that near the edges ($y_{(1)}$ and $y_{(n)}$) not every one of these movements can be made. For instance, in the example above, if $l = 2$, the movement proposed by $s = -2$ is not possible, since there is no order statistic $y_{(0)}$. In that case, $s$ is sampled uniformly among the admissible values, which in this case would be $s = -1, 0, 1$ and $2$.
Generalizing the notation, $\kappa^*$ will be sampled within the interval $[\kappa_a, \kappa_b]$, where $\kappa_a = \max \left\{ y_{(h+s)},y_{(1)} \right\}$ and $\kappa_b = \min \left\{y_{(h+s+1)},y_{(n)} \right\}$, with $h$ denoting the rank of the order statistic of $y$ immediately below $\tilde{\kappa}$. In our toy example, $h = l$.
If $s = 0$, only a new proposal for $\kappa$ will be made, $\kappa^* \sim U(\kappa_a, \kappa_b)$; otherwise we will also test moving the observations which fall between $\tilde{\kappa}$ and $\kappa^*$ to a different cluster. If $\kappa^* > \tilde{\kappa}$, we propose moving these observations to the same cluster as $y_{(h)}$. If $\kappa^* < \tilde{\kappa}$, we propose moving them to the same cluster as $y_{(h+1)}$. In our example, if $s = 2$, this means proposing that $d^{o}[h+1]$ and $d^{o}[h+2]$ are equal to $d^{o}[l]$, where $d^{o}[l]$ represents the cluster to which $y_{(l)}$ belongs. In the general case, we have:
\begin{itemize} \item if $s<0$, we propose that $d^{o}[h+s+1],\ldots,d^{o}[h]$ are equal to $d^{o}[h+1]$; \item if $s>0$, we propose that $d^{o}[h+1],\ldots,d^{o}[h+s]$ are equal to $d^{o}[h]$. \end{itemize}
The Metropolis rate is given by: \begin{eqnarray*}
q = \frac{p(\kappa^*,d^*|\cdot)}{p(\kappa,d|\cdot)} \propto \frac{\prod_{i=1}^n f(y_{(i)}|\mu_{d^*_i},c,\kappa^*) \pi(\kappa^*)Q(\kappa^*,d^*;\kappa,d)}{ \prod_{i=1}^n f(y_{(i)}|\mu_{d_i},c,\kappa) \pi(\kappa)Q(\kappa,d;\kappa^*,d^*)}, \end{eqnarray*} where \begin{eqnarray*} Q(\kappa^*,d^*;\kappa,d) = \frac{(\min\{n,h+m\} - \max\{0,h-m\})^{-1}}{\kappa_b - \kappa_a}, \end{eqnarray*} and, analogously, \begin{eqnarray*} Q(\kappa,d;\kappa^*,d^*) = \frac{(\min\{n,h+s+m\} - \max\{0,h+s-m\})^{-1}}{y_{(h+1)} - y_{(h)}}. \end{eqnarray*} We accept the proposal with probability $\min\{1,q\}$.
\subsection{Examples with simulated data}
To test our model and the proposed MCMC algorithm, we simulate two artificial data-sets and predict their densities. The idea here is to verify whether we can efficiently reconstruct the densities, which in this case are known. The simulated models are given below:
\begin{itemize}
\item[A.] $y$ is sampled from $N(100,100^2)$;
\item[B.] $y$ is sampled from $ga (3,10)$.
\end{itemize}
Data-sets of size $n=100$ were simulated from models A and B. Figure \ref{simuldata} shows the histograms of these data-sets and the densities of the distributions they were generated from. All figures included in this article were produced via the software R (\cite{R14}).
The two sets of observations are modeled through (\ref{inftysum}), with the following specifications for the parameters of the prior distributions: $\sigma_\mu^2 = 10$, $\sigma_\kappa^2=10000$, $\alpha_c = \beta_c = 0.1$, and $\alpha_M = \beta_M = 0.01.$
For each data-set, the algorithm proposed in section \ref{priors} was applied to sample from the posterior distribution of the model parameters. The MCMC was implemented in the software Ox version 7.0 (\cite{doornik08}), with $T=300000$ iterations and a burn-in of $10000$. Due to auto-correlation in the MCMC chains, the samples were recorded every $100$ iterations, leaving us with samples of size $2900$ to perform inference.
Figure \ref{histA} shows the histogram of the sample from the posterior of parameter $\kappa$ under data-sets A and B. Note that in both cases the real value of the parameter is well estimated, falling close to the mode of the estimated posterior density.
Figure \ref{predA} shows the histograms of the predictive distributions under both data-sets compared with their real density. It can be seen that in both cases the predictive distribution is close to the real density, showing the flexibility of the model and validating the methodology.
\section{Models for multivariate densities}
This section is divided into two sub-sections. First, in sub-section \ref{multuni}, we discuss different definitions of multivariate unimodality that can be found in the literature. Then, in sub-section \ref{multext}, we propose a class of multivariate unimodal densities, extending the univariate densities of section \ref{unimoddens}.
\subsection{Multivariate unimodality} \label{multuni}
The concept of unimodality is well established for univariate distributions, and, following Khintchine's representation (\cite{Khint38}), it can be defined in different but equivalent ways. It is not straightforward, however, to extend the notion of unimodality to higher dimensional distributions, as different definitions of unimodality are not equivalent when extended.
The first attempts to define multivariate unimodality were made for symmetric distributions only, by \cite{anderson55} and \cite{sherman55}. Again under symmetry, \cite{dharma76} introduced the notion of {\it convex unimodality} and {\it monotone unimodality}. The authors also defined {\it linear unimodality}, which can be applied to asymmetric distributions. A random vector $(Y_1, \ldots, Y_n)$ is {\it linear unimodal} about $0$ if every linear combination $\sum_{i=1}^ n{a_i Y_i}$ has a univariate unimodal distribution about $0$. This, however, is not a desirable definition, as pointed out by \cite{dharma76} themselves, since the density of such a distribution need not attain its maximum at the mode of univariate unimodality.
Further, \cite{olshen70} define {\it $\alpha$-unimodality} about $0$ for a random variable $Y$ in ${\rm I\!R}^n$ when, for all real, bounded, nonnegative Borel functions $g$ on ${\rm I\!R}^n$, the function $t^{\alpha}E(g(tY))$ is non-decreasing as $t$ increases in $[0,\infty)$, where $E$ denotes the expectation with respect to the density of $Y$. If $Y$ has a density function $f$ with respect to the Lebesgue measure $\mu_n$ on ${\rm I\!R}^n$, then $Y$ is {\it $\alpha$-unimodal} if and only if $t^{n-\alpha}f(ty)$ is decreasing in $t \geq 0$. The distribution of a random vector $(Y_1, \ldots, Y_n)$ is said to be {\it star unimodal} about $0$, according to \cite{dharma76}, if it is $n$-unimodal in the sense of \cite{olshen70}.
According to \cite{dharma76}, a {\it linear unimodal} distribution need not be {\it star unimodal}, and vice versa. Thus there is no implied relationship between {\it star} and {\it linear unimodality}.
Another useful definition of unimodality is given by \cite{devroye97}: A multivariate density $f$ on ${\rm I\!R}^d$ is {\it orthounimodal} with mode at $m = (m_1,\ldots,m_d)$ if for each $j$, $f(y_1,\ldots,y_d)$ is a decreasing function of $y_j$ as $y_j \rightarrow \infty$ for $y_j \geq m_j$, and as $y_j \rightarrow -\infty$ for $y_j \leq m_j$, when all other components are held fixed.
{\it Orthounimodal} densities are widely applicable. They form a robust class in the sense that all lower-dimensional marginals of {\it orthounimodal} densities are also {\it orthounimodal}. Also, they are {\it star unimodal}, according to \cite{devroye97}, and most bivariate densities are either {\it orthounimodal}, or {\it orthounimodal} after a linear transformation. Narrower notions than {\it orthounimodality} can also be explored, and they are discussed in \cite{dharma88}.
For the two-dimensional case only, \cite{shepp62} generalizes Khintchine's representation, stating that the distribution function of $(Y_1, Y_2)$ is unimodal if and only if it can be written as \begin{equation}
(Y_1, Y_2) = (X_1 Z_1, X_2 Z_2), \label{bikhin} \end{equation} where $(X_1, X_2)$ and $(Z_1, Z_2)$ are independent and $(X_1, X_2)$ is uniformly distributed in $[0,1]^2$. For higher dimensions, Khintchine's representation was generalized by \cite{kanter77} for the symmetric case. He defines symmetric multivariate unimodal distributions on ${\rm I\!R}^d$ as generalized mixtures (in the sense of integrating on a probability measure space) of uniform distributions on symmetric, compact and convex sets in ${\rm I\!R}^d$. In this paper we will use a form of (\ref{bikhin}) while guaranteeing orthounimodality.
Comprehensive reviews on multivariate unimodality can be found in \cite{dai82} and \cite{dharma76}. Also \cite{Kouvaras08} reviews the literature and introduces and discusses a new class of nonparametric prior distributions for multivariate multimodal distributions.
\subsection{Proposed model} \label{multext} To extend the univariate unimodality proposed in section \ref{unimoddens}, we start by defining marginal variables $Y_1, Y_2, \ldots, Y_d$ via $Y_l = X_l/Z_l$, where $X_l \sim U(0,1)$ and $Z_l \sim N(\mu_l,\mu_l^2/c_l)$, for $l=1,2,\ldots,d$. The dependence between the variables $Y = (Y_1, Y_2, \ldots, Y_d)$ can be obtained by imposing dependence on either $Z = (Z_1,\ldots,Z_d)$ or $X = (X_1,\ldots,X_d)$, or both.
Our first attempt was to allow dependence only in $Z$, through a multivariate normal distribution with a general variance-covariance matrix. In this way our construction would follow the generalization of Khintchine's representation made by \cite{shepp62} for the bivariate case. The dependence between the components of $Y$ obtained under that construction, however, is limited in the bivariate setting. Better results were obtained when imposing dependence in $X$ instead, through a Gaussian copula.
To demonstrate: Figure \ref{correlations} shows the approximate 95\% confidence intervals of the correlations between the components of $Y$ when varying the correlations between $Z$ (top) and $X$ (bottom), respectively, over the grid $\{-1,-0.9,-0.8,\ldots,0.9,1\}$. These confidence intervals were obtained by simulating $100$ samples of size $100$. The figure was constructed considering $\mu=(10,10)$ and $\mu=(-10,10)$. Similar results to the ones with $\mu = (10,10)$ were found for $\mu = (-10,-10)$. In the same way, $\mu=(10,-10)$ and $\mu=(-10,10)$ present similar results. The magnitude of $\mu$ did not seem to change this output significantly. It can be seen that when imposing dependence through $Z$, the correlations between the components of $Y$ do not vary much beyond the interval $[-0.2,0.2]$, whereas when imposing dependence through $X$, the correlation between the $(Y_1,\ldots,Y_d)$ was found to be similar to the correlation between the $(X_1,\ldots,X_d)$.
Due to the results just described, we opted to work with the second construction: i.e. $Y_l = X_l/Z_l$, $Z_l \stackrel{ind}\sim N(\mu_{lj},\mu_{lj}^2/c_l), l = 1,\ldots, d$, and $X = (X_1, X_2, \ldots, X_d)$ is modeled through the Gaussian copula with correlation matrix $\Sigma$. Once again, the mode at $\kappa = (0,\ldots,0)$ can be moved, substituting $(Y - \kappa)$ for $Y$, where $\kappa$, like $Y$, is now a $d$-dimensional vector.
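For intuition, the following minimal simulation sketch (ours, not the authors' code; parameter values are illustrative and a single normal component is used for each $Z_l$) draws from this construction in the bivariate case.
\begin{verbatim}
# Sketch: simulate Y_l = X_l / Z_l with X coupled through a Gaussian copula.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def simulate_y(n, mu, c, rho):
    mu = np.asarray(mu, dtype=float)
    sigma = np.abs(mu) / np.sqrt(np.asarray(c, dtype=float))  # sigma_l^2 = mu_l^2/c_l
    z = rng.standard_normal((n, 2))                  # copula on the normal scale
    z[:, 1] = rho * z[:, 0] + np.sqrt(1.0 - rho**2) * z[:, 1]
    x = norm.cdf(z)                                  # X has Gaussian-copula dependence
    Z = mu + sigma * rng.standard_normal((n, 2))     # Z_l ~ N(mu_l, mu_l^2/c_l)
    return x / Z                                     # mode at the origin

y = simulate_y(100, mu=(10.0, 10.0), c=(1.0, 1.0), rho=0.8)
print(np.corrcoef(y, rowvar=False)[0, 1])            # sample correlation of (Y_1, Y_2)
\end{verbatim}
Repeating such simulations over a grid of copula correlations is how comparisons like those in Figure \ref{correlations} can be produced.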
Given $x$, $\mu_j = (\mu_{1j},\ldots, \mu_{dj})$, $c = (c_1,\ldots, c_d)$ and $\kappa = (\kappa_1,\ldots, \kappa_d)$, the components of the observation $y = (y_1, y_2, \ldots, y_d)$ are conditionally independent, and $f(y|x,\mu_{j},c,\kappa)$ is given by $$
f(y|x,\mu_{j},c,\kappa) = \prod_{l=1}^d (x_l/y_l^2)g_l(x_l/y_l|\mu_{lj},c_l,\kappa_l). $$
The marginal model, $f(y|\mu_{j},c,\kappa)$, can be obtained through the integral \begin{equation}
f(y|\mu_{j},c,\kappa) = \int_{[0,1]^d} \prod_{l=1}^d (x_l/y_l^2)g_l(x_l/y_l|\mu_{lj},c_l,\kappa_l) {\rm d} c(x_1, \ldots, x_d|\Sigma), \label{intmult} \end{equation}
where $c(x_1, \ldots, x_d|\Sigma)$ represents the multivariate Gaussian copula with correlation matrix $\Sigma$: \begin{equation*}
c(x_1, \ldots, x_d|\Sigma) = \frac{1}{\sqrt{|\Sigma|}} \exp \left\{ -\frac{1}{2} \left( \begin{array}{c} \Phi^{-1}(x_1) \\ \vdots \\ \Phi^{-1}(x_d) \end{array} \right)^T (\Sigma^{-1} - I) \left( \begin{array}{c} \Phi^{-1}(x_1) \\ \vdots \\ \Phi^{-1}(x_d) \end{array} \right) \right\}. \end{equation*}
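For the bivariate case used later, writing $u_j=\Phi^{-1}(x_j)$ and $\Sigma=\bigl(\begin{smallmatrix}1&\rho\\ \rho&1\end{smallmatrix}\bigr)$, this reduces to
\begin{equation*}
c(x_1,x_2|\rho)=\frac{1}{\sqrt{1-\rho^2}}\exp\left\{-\frac{\rho^2(u_1^2+u_2^2)-2\rho\, u_1 u_2}{2(1-\rho^2)}\right\},
\end{equation*}
which is the form needed, for instance, in the Metropolis-Hastings update of $\rho$ in section \ref{bivariate}.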
Note, however, that the dependence created between the variables $Y$ is determined not only by the correlation matrix $\Sigma$, but also by the signs of the components of $\mu$. As an illustration, we sampled four bivariate data-sets and examined the scatter plots of $y_1$ against $y_2$. Figure \ref{example_multi_1} shows the scatter plots for data sampled with $\mu_1 = 3$, and either $\mu_2 = 10$ or $\mu_2 = -10$, and with $\rho = \Sigma[1,2] = 0.5$, $\rho = 0.8$ or $\rho = -0.8$. It can be seen that positive dependence is created when $(\mu_1 \times \mu_2 \times \rho)$ is positive, and negative dependence is created otherwise. To aid model identification, we assume that $\rho \in [0,1]$. This restriction does not reduce the flexibility of the model, since negative correlations between the variables can be captured by opposite signs in $\mu$.
Let us now define the mixture of these densities, which is the unimodal multivariate model of interest. For observation $Y = (Y_1, \ldots, Y_d)$ the model is: \begin{equation*}
f(y) = \sum_{j=1}^\infty{w_j f(y|\mu_{j},c,\kappa)}, \end{equation*}
where $f(y|\mu_{j},c,\kappa)$ is given in (\ref{intmult}). Note that, by construction, the proposed distribution is multivariate {\it orthounimodal}, and therefore also {\it star unimodal}. Unlike the univariate case, however, we cannot solve the integral in (\ref{intmult}) and obtain $f(y|\mu_{j},c,\kappa)$ analytically. Therefore, we must come up with a different algorithm to handle the sampling of $\kappa$. As a ``bridge'' to the multivariate case, this algorithm is first developed for the univariate case, and then extended to higher dimensions. The univariate ``bridge'' algorithm is presented in subsection \ref{bridge}.
\subsection{MCMC algorithm for univariate ``bridge''} \label{bridge}
To mimic the situation in which the latent variable $x$ cannot be integrated out, we present in this section an algorithm which samples $x = (x_1, \ldots, x_n)$, and samples the other parameters given $x$. We consider the same prior distributions specified in section \ref{priors}. For $x$ we assume a uniform prior on $[0,1]^n$. The algorithm goes as follows:
\begin{enumerate}
\item Sample $u_i$ and obtain $N$ as previously in section \ref{priors};
\item Sample $M$ as previously in section \ref{priors};
\item Sample $(\mu_j)$ for $ j=1, \ldots, N$ as before, but using $f(y_i|x_i,\mu_j,c,\kappa)$ (proportional to (\ref{withx})) instead of $f(y_i|\mu_j,c,\kappa)$ in the Metropolis ratio. This way, the M-H acceptance criterion is given by: \begin{eqnarray*}
q=\frac{ \prod_{\{d_i = j\}} f(y_i|x_i,\mu_{j}^*,c,\kappa) \pi(\mu_{j}^*) }{ \prod_{\{d_i = j\}} f(y_i|x_i,\mu_j,c,\kappa) \pi(\mu_j) }. \end{eqnarray*}
\item The full conditional distribution of $c$ in this case has a closed form, and it is updated via Gibbs sampling. Given $y$, $x$, $\mu$ and $\kappa$, we have: $$
c|x,\mu,\kappa,y \sim ga\left( \frac{n}{2} + \alpha_c, \frac{1}{2} \sum_{i=1}^n \frac{1}{\mu_{d_i}^2}\left( \frac{x_i}{y_i - \kappa} - \mu_{d_i} \right)^2 + \beta_c \right). $$
\item Sample $(x,d,\kappa|\mu, c, y)$ in two stages: first sample from $f(d, \kappa|\mu, c, y)$ and then from $f(x|d, \kappa, \mu, c, y)$. $d$ and $\kappa$ are sampled as before, and $(x_i)$, for $ i=1, \ldots, n$, is sampled via rejection sampling, as follows (a short code sketch is given after this list):
\begin{itemize}
\item sample a proposal $\tilde{x_i} \sim U[0,1]$;
\item compute $f(\tilde{x}_i|c,\mu_{d_i},y_i,\kappa) \propto \tilde{x}_i \mbox{exp} \left\{ -\frac{c}{2 \mu_{d_i}^2} \left( \frac{\tilde{x}_i}{y_i - \kappa} - \mu_{d_i} \right)^2 \right\} $; \item compute the value $\hat{x}$ which maximizes this function: \begin{equation*}
\hat{x} = \min \left\{ 1, \frac{ \mu_{d_i}(y_i - \kappa) }{2}+ \left| (y_i - \kappa) \sqrt{ \mu_{d_i}^2/c + 0.25 \mu_{d_i}^2 } \right| \right\}. \end{equation*}
\item accept $\tilde{x_i}$ with probability $\min \left\{ 1, \frac{f(\tilde{x_i}|c,\mu_{d_i},y_i,\kappa)}{f(\hat{x_i}|c,\mu_{d_i},y_i,\kappa)} \right\} $, or go back to the first step.
\end{itemize} \end{enumerate}
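The rejection step for $x_i$ can be sketched as follows (ours, not the authors' code; it uses our reading of the full conditional and of the maximizer given above).
\begin{verbatim}
# Sketch: uniform-proposal rejection sampler for x_i given y_i, mu_{d_i}, c, kappa.
import numpy as np
rng = np.random.default_rng(0)

def target_x(x, y, mu, c, kappa):
    """Unnormalized full conditional of x_i on (0, 1)."""
    return x * np.exp(-c / (2.0 * mu**2) * (x / (y - kappa) - mu) ** 2)

def argmax_x(y, mu, c, kappa):
    """Maximizer of target_x over (0, infinity), capped at 1."""
    a = y - kappa
    return min(1.0, a * mu / 2.0 + abs(a) * np.sqrt(mu**2 / c + 0.25 * mu**2))

def sample_x(y, mu, c, kappa, max_tries=10000):
    bound = target_x(argmax_x(y, mu, c, kappa), y, mu, c, kappa)
    for _ in range(max_tries):
        x_prop = rng.uniform()                       # proposal from U(0, 1)
        if rng.uniform() * bound <= target_x(x_prop, y, mu, c, kappa):
            return x_prop
    raise RuntimeError("rejection sampler failed to accept")
\end{verbatim}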
The results obtained under this algorithm were similar to those obtained from the previous one, although it is considerably slower. The new algorithm, however, can be easily extended to the multivariate case. In this paper we will discuss the algorithm and results obtained for the bivariate case, and leave higher dimensions for future work.
\subsection{MCMC algorithm for bivariate observations} \label{bivariate}
In this section we present the algorithm developed to handle bivariate observations ($d=2$). Under this scenario, the unknown quantities are: $(\mu_j = (\mu_{1j},\mu_{2j}), v_j), j=1,2,\ldots$; $(d_i, u_i), i=1,\ldots,n$; $c = (c_1, c_2)$; $M$; $\kappa = (\kappa_1, \kappa_2)$; $x_{li}, l=1,2, i=1,\ldots,n$; and also the correlation parameter in the bivariate Gaussian copula ($\rho$). As pointed out in sub-section \ref{multext}, without compromising the model flexibility, we can assume that $\rho \in [0,1]$. Therefore we assign a $U[0,1]$ prior to this parameter.
The algorithm proposed for the bivariate case is given below:
\begin{enumerate}
\item Sample $u_i$ and obtain $N$ as previously;
\item Sample $M$ as previously;
\item Sample $(\mu_{1j})$ and $(\mu_{2j})$, independently for $j = 1, \ldots, N$, following the same idea as in sub-section \ref{bridge};
\item Sample $c_1$ and $c_2$ via Gibbs sampling. Given $y$, $x$, $\mu$, and $\kappa$, we have: $$
c_l|x,\mu,\kappa,y \sim ga \left( \frac{n}{2} + \alpha_c, \frac{1}{2} \sum_{i=1}^n \frac{1}{\mu_{l,d_i}^2}\left( \frac{x_{li}}{y_{li} - \kappa_l} - \mu_{l,d_i} \right)^2 + \beta_c \right), l=1,2. $$
\item Sample $\kappa_1$ and $\kappa_2$ independently, using a similar algorithm as previously. Note that every time $\kappa_1$ and $\kappa_2$ are updated, some elements of $d$ might also be changed.
\item Sample $d_i$ directly from its probability mass function, \begin{eqnarray*}
P(d_i = j) \propto \frac{w_j}{\xi_j}\,f(y_{1i}|\mu_{1j},c_1,\kappa_1)f(y_{2i}|\mu_{2j},c_2,\kappa_2) \mbox{ for }j = 1, \ldots, N_i, \end{eqnarray*} where $N_i=\max\{j:u_i<\xi_j\}$.
\item Sample $(x_1, x_2)$ via rejection sampling. Here we consider two possible algorithms:
\begin{itemize}
\item[7.1.]
\item Sample from the copula $C(x^*_{1i},x^*_{2i})$: $$ (\Phi^{-1}(x^*_1),\Phi^{-1}(x^*_2))^T \sim N \left( \left( \begin{array}{c} 0 \\ 0 \end{array} \right), \left( \begin{array}{cc} 1 & \rho \\ \rho & 1 \end{array} \right) \right); $$
\item compute
$$f(x^*_{1i},x^*_{2i}|c,\mu_{d_{1i}},y_{1i},\mu_{d_{2i}},y_{2i},\kappa) \propto \prod_{j=1}^2{x^*_{ji} \mbox{exp} \left\{ -\frac{c_j}{2 \mu_{d_{ji}}^2} \left( \frac{x^*_{ji}}{y_{ji} - \kappa_j} - \mu_{d_{ji}} \right)^2 \right\}}; $$
\item compute the values $\hat{x}_1$ and $\hat{x}_2$ which maximize the function above: \begin{equation*}
\hat{x}_j = \min \left\{ 1, \frac{ \mu_{d_{ji}}(y_{ji} - \kappa_j) }{2}+ \left| (y_{ji} - \kappa_j) \sqrt{ \mu_{d_{ji}}^2/c_j + 0.25 \mu_{d_{ji}}^2 } \right| \right\}, j=1,2; \end{equation*}
\item accept $(x^*_{1i}, x^*_{2i})$ with probability $$\min \left\{ 1, \frac{f(x^*_{1i},x^*_{2i}|c,\mu_{d_{1i}},y_{1i},\mu_{d_{2i}},y_{2i},\kappa)}{f(\hat{x}_{1i},\hat{x}_{2i}|c,\mu_{d_{1i}},y_{1i},\mu_{d_{2i}},y_{2i},\kappa)} \right\},$$ or go back to the first step.
\item[7.2.]
\item Sample $x^*_{ji}$, $j=1,2$, independently from the function $$
f(x^*_{ji}|c_j,\mu_{d_{ji}},y_{ji},\kappa_j) \propto {x^*_{ji} \mbox{exp} \left\{ -\frac{c_j}{2 \mu_{d_{ji}}^2} \left( \frac{x^*_{ji}}{y_{ji} - \kappa_j} - \mu_{d_{ji}} \right)^2 \right\}}, $$ through adaptive rejection sampling (\cite{gilkswild91});
\item compute $c(x^*_{1i},x^*_{2i})$;
\item compute the values $\hat{x}_1$ and $\hat{x}_2$ which maximize the copula: $\hat{x}_1 = \rho x_2$ and $\hat{x}_2 = \rho x_1$;
\item accept $(x^*_{1i}, x^*_{2i})$ with probability $$\min \left\{ 1, \frac{c(x^*_{1i},x^*_{2i})}{c(\hat{x}_{1i},\hat{x}_{2i})} \right\},$$ or go back to the first step. \end{itemize}
The first algorithm proposed to sample from $x$ leads to faster convergence. However, it can be very slow at times, requiring a large number of proposals before acceptance. We combined the two algorithms in the following way: sample from 7.1 until it either accepts the proposal or reaches a pre-set maximum number of trials; if the latter occurs, sample $x$ through algorithm 7.2.
\item Sample $\rho$ via Metropolis-Hastings. The proposal $\rho^*$ is obtained from:
$$ \log \left( \frac{\rho^*}{1 - \rho^*} \right) = \log \left( \frac{\rho}{1 - \rho} \right) + N(0,h_\rho^2), $$ for a suitable choice of $h_\rho^2.$
The M-H acceptance criterion is given by: \begin{eqnarray*}
q=\frac{ f(\rho^*|x_1,x_2) Q(\rho^*;\rho) }{ f(\rho|x_1,x_2) Q(\rho;\rho^*) }, \end{eqnarray*} where $Q(\rho^*;\rho) = 1/(\rho^*(1- \rho^*))$. We accept the proposal with probability $\min\{1,q\}$.
\end{enumerate}
\subsection{Examples with simulated data}
In this section we present the results obtained for a simulated bivariate data-set of size $n=100$, from a bivariate Normal distribution: $y \sim N(\kappa, \Omega)$ with $\kappa = (30, 60)$, and $\Omega = 10 \left( \begin{array}{cc} 1 \quad \rho_y \\ \rho_y \quad 1 \end{array} \right)$, with $\rho_y = 0.5$.
The algorithm proposed in section \ref{bivariate} was applied to sample from the posterior distribution of the model parameters and to obtain samples from the predictive distributions. The algorithm was run for $T=75000$ iterations, with a burn-in of $5000$. Due to auto-correlation in the MCMC chains, samples were recorded every $50$ iterations, leaving a sample of size $1400$ to perform inference.
Figure \ref{k_bi_2} shows the histograms of the posterior densities of the components of parameter $\kappa$. It can be seen that this parameter was reasonably well estimated, with the true value being close to the posterior mode. Figure \ref{hist_bi_2} presents a comparison between the histograms of the simulated samples of $y_1$ and $y_2$, and the ones predicted through the proposed MCMC algorithm, showing that the predictions seem close to what was expected.
We also wish to verify whether the dependence between $y_1$ and $y_2$ was preserved in the predictions. To do this, we computed the correlation in blocks of $100$ consecutive samples of the predicted $(y_1,y_2)$. This time we used all $70000$ post burn-in sampled values, ending up with a sample of $700$ correlations. Figure \ref{cor_biB} compares the histogram of these correlations, which can be seen as a proxy for the posterior distribution of $\rho_y$, to the true value (in red), showing good predictions.
\section{Boston Housing Data}
As an application, we work with a part of the Boston Housing data-set created by \cite{harrison78} and taken from \cite{Lichman13}. This database comprises $506$ cases of $12$ variables concerning housing values in the suburbs of Boston, and it has been used in many applications, especially in regression and machine learning, such as \cite{belsley80} and \cite{quinlan93}. We chose to work with two variables of the database: nitric oxides concentration (parts per 10 million) at the household (NOX) and weighted distances to five Boston employment centers (DIS). Exploratory analysis points towards the unimodality of the joint density of NOX and DIS, as can be seen by the contour plot displayed in Figure \ref{realdata}. This contour plot also shows a negative, unusually shaped, dependence between these variables. Histograms of observations NOX and DIS are also presented in Figure \ref{realdata}, showing that both variables have a non-normal unimodal shape.
Our objective in this application is to illustrate the flexibility of our approach in capturing the form of this bivariate distribution and the oddly shaped dependence between NOX and DIS. We worked with a subset of $n=100$ observations and applied the proposed bivariate model with the same priors used for the toy examples. Again, the algorithm was run for $T=75000$ iterations, with a burn-in of $5000$, and samples were recorded every $50$ iterations, giving a sample of size $1400$ for inference.
Figure \ref{preddata} shows the histograms of the predictive density of NOX and DIS, and the contour plot made with $500$ points of the predictive distribution. We observe a high similarity between the real and the predicted histograms and dispersion plots, and can conclude that the proposed methodology was able to capture well the behavior of the data for this example.
Note that our approach also allows for predictions of one variable given the other after samples of both have been previously observed. As a second exercise, we analyze the predictive capacity of our model, compared to a more usual linear alternative. If the objective is to predict nitric oxides concentration based on weighted distances to employment centers, a regression model would probably be considered. As can be seen in Figure \ref{realdata}, however, the data needs transformation for the normal linear regression model assumptions to hold adequately. After exploratory analysis, we propose the following regression model: \begin{eqnarray*} \mbox{NOX}_i^{-1} &=& \alpha + \beta \mbox{log}(\mbox{DIS}_i) + e_i, \quad i = 1,\ldots,n, \\ e_i &\sim& N(0,\sigma_e^2), \end{eqnarray*} with vague Normal priors for $\alpha$ and $\beta$, and a vague inverse Wishart prior for $\sigma_e^2$.
Figure \ref{tranfnorm} shows the box-plot of the response variable NOX$^{-1}$ and the dispersion plot between NOX$^{-1}$ and log(DIS), showing a linear dependence between the two variables. The simple regression was fit using the OpenBugs 3.2.3 software (\cite{thomas06}).
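
For readers who prefer a quick non-Bayesian check of the transformed specification, the same regression can be examined by ordinary least squares. The following Python sketch is illustrative only (the arrays \texttt{nox} and \texttt{dis} holding the $n=100$ observations are assumed to be available) and is not the OpenBugs fit used for the results below.
\begin{verbatim}
import numpy as np

def fit_transformed_regression(nox, dis):
    # Least-squares fit of NOX^{-1} = alpha + beta * log(DIS) + e.
    y = 1.0 / np.asarray(nox, dtype=float)
    X = np.column_stack([np.ones_like(y),
                         np.log(np.asarray(dis, dtype=float))])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    sigma2 = resid @ resid / (len(y) - 2)  # residual variance estimate
    return coef[0], coef[1], sigma2        # alpha, beta, sigma_e^2
\end{verbatim}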
A comparison is then performed between predictions of $10$ values of NOX given DIS, after observing $n=100$ cases of both DIS and NOX, under both models. Figure \ref{preddata1} presents the 95\% credible intervals and posterior medians under the proposed nonparametric model and the simple regression model, compared with the real values. Figure \ref{preddata2} presents the histograms of the posterior distribution of the first three predicted values. As is typical, nonparametric methods produce larger credible intervals than their parametric counterparts, yet are typically centered on the true values. The parametric versions, however, can simply be wrong.
\section{Final remarks}
In this paper we have proposed a new class of unimodal densities whose support includes all the unimodal densities on the real line. Our motivation for this is that a mixture of the proposed univariate model can be adequate for clustering data, with each cluster being represented by a unimodal density.
One of our objectives was also to be able to create a class of unimodal densities that could be naturally extended to the multivariate case. Our models allow this through the use of the multivariate Gaussian copula. This way, modeling multivariate clusters can also benefit from the methodology developed in this paper.
The proposed models, however, cannot be dealt with analytically. We have also proposed an MCMC algorithm to obtain samples of the posterior distribution of the model parameters and to obtain predictive densities. The methodology was illustrated with univariate and bivariate examples, and with two variables taken from the Boston Housing Data (\cite{harrison78}).
Modeling multivariate unimodal densities with dimension higher than two can be easily done with a slight modification to the code presented in section \ref{bivariate}. In that case, instead of sampling from a single correlation parameter $\rho$, we must sample from a correlation matrix, which can be done following \cite{Wu14}. Preliminary results showed this to be effective for a three-variate model. For future work we must test the code extensively for three or more dimensions, and finally do clustering for an arbitrary dimension $d$, through the mixture of the densities proposed in this paper.
\section*{Acknowledgements} The first author acknowledges the support of a research grant from CNPq-Brazil.
\pagebreak
\begin{figure}
\caption{Unimodal density function (\ref{withx}) with a mixture of two components, $\kappa = 10$, $\mu = (-5,20)$, $w = (0.8,0.2)$.}
\label{density}
\end{figure}
\begin{figure}
\caption{Toy example of initial value for $\kappa$ and possible values of $s$ in the MCMC algorithm.}
\label{fig:toy}
\end{figure}
\begin{figure}
\caption{Histograms based on $n=100$ points sampled from models A and B. The black line represents their respective densities.}
\label{simuldata}
\end{figure}
\begin{figure}
\caption{Histograms based on $2900$ samples from the posterior of parameter $\kappa$ under observations from models $A$ and $B$. Red lines indicate the true values of the parameter.}
\label{histA}
\end{figure}
\begin{figure}
\caption{Histograms based on $2900$ samples from the predictive distribution under observations from models $A$ and $B$. Black line indicates the true density.}
\label{predA}
\end{figure}
\begin{figure}
\caption{95\% confidence intervals of the correlation obtained between $(Y_1,Y_2)$ when varying the correlation between $(Z_1,Z_2)$ (top) and $(X_1,X_2)$ (bottom) respectively, through the interval $\{-1,-0.9,-0.8,\ldots,0.9,1\}$.}
\label{correlations}
\end{figure}
\begin{figure}
\caption{Dispersion Plot of samples $y_1$ vs $y_2$ of size $1000$ from the simulated bivariate example.}
\label{example_multi_1}
\end{figure}
\begin{figure}
\caption{Histograms of the posterior of $\kappa$ under simulated bivariate model. Red lines indicate true values.}
\label{k_bi_2}
\end{figure}
\begin{figure}
\caption{Histograms comparing the original simulated samples (size 100) to the predicted samples under the bivariate model. Black lines represent the true density.}
\label{hist_bi_2}
\end{figure}
\begin{figure}
\caption{Histogram of the estimated correlation between samples of size $100$ of predicted $(y_1,y_2)$. The red line indicates the true correlation.}
\label{cor_biB}
\end{figure}
\begin{figure}
\caption{Histograms of variables DIS and NOX, and contour plot of DIS vs NOX from the Boston Housing database.}
\label{realdata}
\end{figure}
\begin{figure}
\caption{Predicted histograms of DIS and NOX, and predicted contour plot of DIS vs NOX under the proposed bivariate mixture model.}
\label{preddata}
\end{figure}
\begin{figure}
\caption{Histogram of NOX$^{-1}$ and dispersion plot of log(DIS) vs NOX$^{-1}$ with regression line.}
\label{tranfnorm}
\end{figure}
\begin{figure}
\caption{95\% credible intervals and posterior medians of 10 ``future'' values of NOX being compared to their observed values (red dots) under the proposed nonparametric model (top) and a regression model (bottom).}
\label{preddata1}
\end{figure}
\begin{figure}
\caption{Histograms of the posterior of 3 ``future'' values of NOX under the proposed nonparametric model (top) and a regression model (bottom). Real observed values are indicated by a red line.}
\label{preddata2}
\end{figure}
\end{document} |
\begin{document}
\title{A generalisation of de la Vall\'{e}e-Poussin procedure to multivariate approximations}
\author{
Nadezda Sukhorukova, \\ Swinburne University of Technology, John St, Hawthorn VIC 3122,\\ Australia and Federation University Australia,\\ Postal address: PO Box 663. Ballarat VIC 3353\\
{[email protected]} \and Julien Ugon, Federation University Australia,\\ Postal address: PO Box 663. Ballarat VIC 3353\\
{[email protected]}}
\maketitle
\abstract{The theory of Chebyshev approximation has been extensively studied. In most cases, the optimality conditions are based on the notion of alternance or alternating sequence (that is, maximal deviation points with alternating deviation signs). There are a number of approximation methods for polynomial and polynomial spline approximation. Some of them are based on the classical de la Vall\'{e}e-Poussin procedure. In this paper we demonstrate that under certain assumptions the classical de la Vall\'{e}e-Poussin procedure, developed for univariate polynomial approximation, can be extended to the case of multivariate approximation. The corresponding basis functions are not restricted to be monomials. }
{\bf Keywords:} {Multivariate polynomial, Chebyshev approximation, de la Vall\'{e}e-Poussin procedure}
{\bf Subclass:} {41A10 \and 41A50 \and 41N10}
\section{Introduction}\label{sec:introduction}
The theory of Chebyshev approximation for univariate functions was developed in the late nineteenth century (Chebyshev) and the twentieth century (to name just a few contributions, \cite{nurnberger,rice67,Schumaker68}). Many papers are dedicated to polynomial and polynomial spline approximations; however, other types of functions (for example, trigonometric polynomials) have also been used. In most cases, the optimality conditions are based on the notion of alternance (that is, maximal deviation points with alternating deviation signs).
There have been several attempts to extend this theory to the case of multivariate functions. One of them is \cite{rice63}. The main obstacle in extending these results to the case of multivariate functions is that it is not very easy to extend the notion of monotonicity to the case of several variables.
The main contribution of this paper is the extension of the classical de la Vall\'{e}e-Poussin procedure (originally developed for univariate polynomial approximation \cite{valleepoussin:1911}) to the case of multivariate approximation under certain assumptions. The corresponding basis functions are not restricted to be monomials (that is, the approximation need not be polynomial).
The paper is organised as follows. In section~\ref{sec:convexObjective} we demonstrate that the corresponding optimisation problems are convex. Then, in section~\ref{sec:VPprocedure} we extend the classical de la Vall\'{e}e-Poussin procedure to the case of multivariate approximation. Finally, section~\ref{sec:conclusion} highlights our future research directions.
\section{Convexity of the objective function}\label{sec:convexObjective}
Let us now formulate the objective function. Suppose that a continuous function $f(\mathbf{x})$ is to be approximated by a function
\begin{equation}\label{eq:model_function}
L(\mathbf{A},\mathbf{x})=a_0+\sum_{i=1}^{n}a_ig_i(\mathbf{x}),
\end{equation}
where $L(\mathbf{A},\mathbf{x})$ is a modelling function, $g_i(\mathbf{x}),~i=1,\dots,n$, are the basis functions and $\mathbf{A} = (a_0,a_1,\dots,a_n)$ is the vector of corresponding coefficients. In the case of polynomial approximation, the basis functions are monomials. In this paper, however, we do not restrict ourselves to polynomials. At a point \(\mathbf{x}\) the deviation between the function \(f\) (also referred to as the approximation function) and the approximation is:
\begin{equation}\label{eq:deviation}
d(\mathbf{A},\mathbf{x}) = |f(\mathbf{x}) - L(\mathbf{A},\mathbf{x})|. \end{equation}
Then we can define the uniform approximation error over the set \(Q\) by
\begin{equation}
\label{eq:uniformdeviation} \Psi(\mathbf{A})=\sup_{\mathbf{x}\in Q} \max\{f(\mathbf{x})-a_0-\sum_{i=1}^{n}a_ig_i(\mathbf{x}),a_0+\sum_{i=1}^{n}a_ig_i(\mathbf{x})-f(\mathbf{x})\}. \end{equation}
The approximation problem is
\begin{equation}\label{eq:obj_fun_con}
\mathrm{minimise~}\Psi(\mathbf{A}) \mathrm{~subject~to~} \mathbf{A}\in
\mathbb{R}^{n+1}.
\end{equation}
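
Before turning to the theory, we note that a discretised version of problem~(\ref{eq:obj_fun_con}) can be solved as a linear program, which is convenient for numerical experiments. The following Python sketch is an illustration only, under the assumption that the grid of points from $Q$, the corresponding values of $f$, and the basis function values are supplied by the user.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def chebyshev_fit(f_vals, G):
    # f_vals: length-m vector of f evaluated on a grid of points from Q;
    # G: m x n matrix with G[k, i] = g_i(x_k).
    # Decision variables are (a_0, ..., a_n, t); we minimise t subject to
    # |f(x_k) - a_0 - sum_i a_i g_i(x_k)| <= t for every grid point x_k.
    f_vals = np.asarray(f_vals, dtype=float)
    m, n = G.shape
    B = np.column_stack([np.ones(m), G])      # rows (1, g_1(x_k), ..., g_n(x_k))
    c = np.zeros(n + 2)
    c[-1] = 1.0                               # objective: minimise t
    A_ub = np.vstack([np.hstack([-B, -np.ones((m, 1))]),   # f - L <= t
                      np.hstack([ B, -np.ones((m, 1))])])  # L - f <= t
    b_ub = np.concatenate([-f_vals, f_vals])
    bounds = [(None, None)] * (n + 1) + [(0, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:-1], res.x[-1]              # coefficients A and deviation t
\end{verbatim}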
Since the function \(L(\mathbf{A},\mathbf{x})\) is linear in \(\mathbf{A}\), the approximation error function \(\Psi(\mathbf{A})\), as the supremum of affine functions, is convex. Furthermore, its subdifferential at a point~\(\mathbf{A}\) is trivially obtained using the gradients of the active affine functions in the supremum (see \cite{Zalinescu2002} for details):
\begin{equation}
\label{eq:subdifferentialObjective}
\partial \Psi(\mathbf{A}) = \mathrm{co}\left\{ \begin{pmatrix} 1\\ g_1(\mathbf{x})\\ g_2(\mathbf{x})\\ \vdots \\ g_n(\mathbf{x}) \end{pmatrix}: \mathbf{x} \in E^+(\mathbf{A}),-\begin{pmatrix} 1\\ g_1(\mathbf{x})\\ g_2(\mathbf{x})\\ \vdots \\ g_n(\mathbf{x}) \end{pmatrix}: \mathbf{x}\in E^-(\mathbf{A})\right\}, \end{equation}
where \(E^+(\mathbf{A})\) and \(E^-(\mathbf{A})\) are respectively the sets of points of maximal positive and negative deviation (extreme points): \begin{align*}
E^+(\mathbf{A}) &= \Big\{\mathbf{x}\in Q: f(\mathbf{x})-L(\mathbf{A},\mathbf{x}) = \max_{\mathbf{y}\in Q} d(\mathbf{A},\mathbf{y})\Big\},\\
E^-(\mathbf{A}) &= \Big\{\mathbf{x}\in Q: -f(\mathbf{x})+ L(\mathbf{A},\mathbf{x}) = \max_{\mathbf{y}\in Q} d(\mathbf{A},\mathbf{y})\Big\}.
\end{align*} Note that in the case of multivariate polynomial approximation, $g_i(\mathbf{x})$, $i=1,\dots,n$ are monomials.
Define by \(G^+\) and \(G^-\) the sets
\begin{align*}
G^+(\mathbf{A}) &= \Big\{(1,g_1(\mathbf{x}),\dots,g_n(\mathbf{x}))^T: \mathbf{x}\in E^+(\mathbf{A})\Big\}\\
G^-(\mathbf{A}) &= \Big\{(1,g_1(\mathbf{x}),\dots,g_n(\mathbf{x}))^T: \mathbf{x}\in E^-(\mathbf{A})\Big\}
\end{align*} Assume that \(\mathrm{card}(E^+(\mathbf{A})) + \mathrm{card}(E^-(\mathbf{A})) = n+2\).
The following theorem holds. We present the proof for completeness.
\begin{theorem}\label{thm:main}(\cite{matrix}) $\mathbf{A}^*$ is an optimal solution to problem~(\ref{eq:obj_fun_con}) if and only if the convex hulls of the sets \(G^+(\mathbf{A}^*)\) and \(G^-(\mathbf{A}^*)\) intersect. \end{theorem}
\begin{proof}
The vector \(\mathbf{A}^*\) is an optimal solution to the convex problem \eqref{eq:obj_fun_con} if and only if \[
\mathbf{0}_{n+1} \in \partial \Psi(\mathbf{A}^*), \] where $\Psi$ is defined in \eqref{eq:uniformdeviation}. Note that due to Carath\'eodory's theorem, $\mathbf{0}_{n+1}$ can be constructed as a convex combination of a finite number of points (one more than the dimension of the corresponding space). Since the dimension of the corresponding space is $n+1$, it can be done using at most $n+2$ points.
Assume that in this collection of $n+2$ points $k$ points ($h_i,~i=1,\dots,k$) are from~$G^+(\mathbf{A}^*)$ and $n+2-k$ ($h_i,~i=k+1,\dots,n+2$) points are from $-G^-(\mathbf{A}^*)$. Note that $0<k<n+2$, since the first coordinate is either~1 or $-1$ and therefore $\mathbf{0}_{n+1}$ can only be formed by using both sets ($G^+(\mathbf{A}^*)$ and $-G^-(\mathbf{A}^*)$). Then $$\mathbf{0}_{n+1}=\sum_{i=1}^{n+2}\alpha_ih_i,~0\leq\alpha_i\leq 1,~\sum_{i=1}^{n+2}\alpha_i=1.$$ Let $0<\gamma=\sum_{i=1}^{k}\alpha_i$, then $$\mathbf{0}_{n+1}=\sum_{i=1}^{n+2}\alpha_ih_i=\gamma\sum_{i=1}^{k}\frac{\alpha_i}{\gamma}h_i+(1-\gamma)\sum_{i=k+1}^{n+2}\frac{\alpha_i}{1-\gamma}h_i=\gamma h^+ +(1-\gamma)h^-,$$ where $h^+\in \mathrm{co}\,G^+(\mathbf{A}^*)$ and $h^-\in -\mathrm{co}\,G^-(\mathbf{A}^*)$. Therefore, it is enough to demonstrate that $\mathbf{0}_{n+1}$ is a convex combination of two vectors, one from the convex hull of $G^+(\mathbf{A}^*)$ and one from the convex hull of $-G^-(\mathbf{A}^*)$.
By the formulation of the subdifferential of \(\Psi\) given by \eqref{eq:subdifferentialObjective}, there exists a nonnegative number \(\gamma \leq 1\) and two vectors \[
g^+ \in \mathrm{co}\left\{ \begin{pmatrix} 1\\ g_1(\mathbf{x})\\ g_2(\mathbf{x})\\ \vdots \\ g_n(\mathbf{x}) \end{pmatrix}: \mathbf{x} \in E^+(\mathbf{A}^*)\right\}, \]
and \[
g^- \in \mathrm{co}\left\{ \begin{pmatrix} 1\\ g_1(\mathbf{x})\\ g_2(\mathbf{x})\\ \vdots \\ g_n(\mathbf{x}) \end{pmatrix}: \mathbf{x} \in E^-(\mathbf{A}^*)\right\} \] such that \(\mathbf{0} = \gamma g^+ - (1-\gamma) g^-\). Noticing that the first coordinates \(g^+_1 = g^-_1 = 1\), we see that \(\gamma = \frac{1}{2}\). This means that \(g^+ - g^- = 0\). This happens if and only if
\begin{equation}\label{eq:opt_main2}
\mathrm{co}\left\{ \left( \begin{matrix} 1\\ g_1(\mathbf{x})\\ g_2(\mathbf{x})\\ \vdots \\ g_n(\mathbf{x})\\ \end{matrix} \right): \mathbf{x} \in E^+(\mathbf{A}^*)
\right
\}\cap
\mathrm{co}\left\{ \left( \begin{matrix} 1\\ g_1(\mathbf{x})\\ g_2(\mathbf{x})\\ \vdots \\ g_n(\mathbf{x})\\ \end{matrix} \right): \mathbf{x} \in E^-(\mathbf{A}^*)
\right \}\ne\emptyset.
\end{equation}
As noted before, the first coordinates of all these vectors are the same, and therefore the theorem is true, since if $\gamma$ exceeds one, the solution where all the components are divided by $\gamma$ can be taken as the corresponding coefficients in the convex combination.
\end{proof}
\section{de la Vall\'{e}e-Poussin procedure for nonsingular basis}\label{sec:VPprocedure} \subsection{Definitions and existing results}
We start with necessary definitions from convex analysis.
\begin{definition} The relative interior of a set $S$ (denoted by $\textrm{relint} (S)$) is defined as its interior within the affine hull of $S$. That is, $$ \textrm{relint}(S)= \{\mathbf{x} \in S : \exists \varepsilon>0, B_\varepsilon(\mathbf{x})\cap \textrm{aff}(S)\subseteq S\},$$ where $B_\varepsilon(\mathbf{x})$ is the ball of radius $\varepsilon$ centred at $\mathbf{x}$ and $\textrm{aff}(S)$ is the affine hull of $S$. \end{definition}
A useful property of relative interiors of convex hulls of a finite number of points is formulated in the following lemma. \begin{lemma} Any point in the relative interior of the convex hull of a finite number of points can be represented as a convex combination of all these points with strictly positive coefficients, and vice versa. \end{lemma}
In the case of univariate polynomial approximation, a basis is an arbitrary collection of $n+2$ points, where $n$ is the number of monomials. What should be called a basis in the multivariate case? Based on the necessary and sufficient optimality conditions (Theorem~\ref{thm:main}), the convex hulls built over positive and negative maximal deviation points should intersect. Is it always possible to partition $n+2$ points into two subsets in such a way that the corresponding convex hulls intersect? The answer to this question is ``yes'' if $n\geq d$. The following theorem holds. \begin{theorem}(Radon \cite{Radon1921}) Any set of $d+2$ points in $\mathbb{R}^d$ can be partitioned into two disjoint sets whose convex hulls intersect. \end{theorem}
\begin{definition} A point in the intersection of these convex hulls is called a Radon point of the set. \end{definition} In the rest of the paper we assume that $n\geq d$.
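
Radon's theorem is constructive: a partition can be read off from any non-trivial affine dependence among the $d+2$ points. For illustration only, a short Python sketch of this standard construction is given below.
\begin{verbatim}
import numpy as np

def radon_partition(points):
    # points: array of shape (d+2, d).  Find a non-zero lambda with
    # sum(lambda) = 0 and sum(lambda_i * x_i) = 0, then split the indices
    # by the sign of lambda; the two convex hulls meet at the Radon point.
    P = np.asarray(points, dtype=float)
    A = np.vstack([P.T, np.ones(len(P))])     # (d+1) x (d+2) matrix
    _, _, vh = np.linalg.svd(A)
    lam = vh[-1]                              # vector in the null space of A
    pos, neg = lam > 1e-12, lam < -1e-12
    radon_point = (lam[pos] @ P[pos]) / lam[pos].sum()
    return np.where(pos)[0], np.where(neg)[0], radon_point
\end{verbatim}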
It will be demonstrated that it is not possible to extend the de la Vall\'{e}e-Poussin procedure to multivariate approximations without imposing additional assumptions (a non-singular basis). It may be possible that some (or all) of these assumptions can be removed if we restrict ourselves to a particular class of basis functions (for example, monomials). This research direction is outside the scope of this paper.
\begin{definition} Consider a set \(\mathcal{S}\) of \(n+2\) points partitioned into two sets: the set \(\mathcal{Y}\) of points with positive deviation and the set \(\mathcal{Z}\) of points with negative deviation. These points are said to form a \emph{basis} if the convex hulls of \(\mathcal{Y}\) and \(\mathcal{Z}\) intersect. Furthermore, if the relative interiors of the convex hulls intersect and any $(n+1)$-point subset of this basis forms an affinely independent system, then the basis is said to be \emph{non-singular}. \end{definition}
\subsection{de la Vall\'{e}e-Poussin procedure for multivariate approximations} \subsubsection{Classical univariate procedure} The classical univariate de la Vall\'{e}e-Poussin procedure contains three steps. \begin{enumerate} \item For any basis ($n+2$ points) there exists a unique polynomial such that the absolute deviation at the basis points is the same and the deviation sign is alternating. This polynomial is also called the Chebyshev interpolation polynomial. \item If there is a point (outside of the current basis) such that the absolute deviation at this point is higher than at the basis points, then this point can be included in the basis by removing one of the current basis points, in such a way that the deviation signs remain alternating. \item The absolute deviation of the new Chebyshev interpolating polynomial is at least as high as the absolute deviation for the original basis. \end{enumerate} In the rest of this section we extend the procedure to the case of a non-singular basis. \subsubsection{Step one extension}
We start with constructing Chebyshev interpolation polynomials. The following theorem holds. \begin{theorem} Assume that a system of points $\mathbf{y}_i,~i=1,\dots,N_+$ and $\mathbf{z}_i,~i=1,\dots,N_-$ forms a non-singular basis. Then there exists a unique polynomial deviating from $f$ at the points $\mathbf{y}_i,~i=1,\dots,N_+$ and $\mathbf{z}_i,~i=1,\dots,N_-$ by the same value and the deviation signs are opposite for $\mathbf{y}_i$ and $\mathbf{z}_i$. \end{theorem} \begin{proof} Consider the following linear system: \begin{equation}\label{eq:main_system} \left( \begin{tabular}{ccc} 1&{$g(\mathbf{y}_1)$}&1\\ 1&{$g(\mathbf{y}_2)$}&1\\ \vdots & \vdots & \vdots\\ 1&{$g(\mathbf{y}_{N_+})$}&1\\ 1&{$g(\mathbf{z}_1)$}&-1\\ 1&{$g(\mathbf{z}_2)$}&-1\\ \vdots & \vdots & \vdots\\ 1&{$g(\mathbf{z}_{N_-})$}&-1\\ \end{tabular} \right)\left( \begin{tabular}{c} $\mathbf{A}$\\ $\sigma$\\ \end{tabular} \right) =\left(\ \begin{tabular}{c} $f(\mathbf{y}_1)$\\ $f(\mathbf{y}_2)$\\ \vdots\\ $f(\mathbf{y}_{N_+})$\\ $f(\mathbf{z}_1)$\\ $f(\mathbf{z}_2)$\\ \vdots\\ $f(\mathbf{z}_{N_-})$\\ \end{tabular} \right), \end{equation} where $\mathbf{A}$ represents the parameters of the polynomial, while $\sigma$ is the deviation. If $\sigma=0$, there exists a polynomial passing through the chosen points (interpolation). Denote the system matrix in~(\ref{eq:main_system}) by $M$. Since the basis is non-singular, that is, the relative interiors of sets ${\cal{Y}}$ and ${\cal{Z}}$ are intersecting, there exist two sets of strictly positive coefficients $$\alpha_1,\dots,\alpha_{N_+}:~ \sum_{i=1}^{N_+}\alpha_i=1$$ and $$\beta_1,\dots,\alpha_{N_-}:~ \sum_{i=1}^{N_-}\beta_i=1,$$ such that \begin{equation}\label{eq:intersecting} \sum_{i=1}^{N_+}\alpha_ig(\mathbf{y}_i)=\sum_{i=1}^{N_-}\beta_i g(\mathbf{z}_i). \end{equation} Multiply the first row of $M$ by the convex coefficient $\alpha_1$ from~(\ref{eq:intersecting}). For each remaining row of $M$ one can apply the following update: \begin{itemize} \item multiply by the corresponding convex coefficient and add all the rows that correspond to the vertices with the same deviation sign as the first row; \item multiply by the corresponding convex coefficient and subtract all the rows that correspond to the vertices with the deviation sign opposite to the sign of the first row. \end{itemize} Then \begin{equation}\label{eq:det_tilde_M} \alpha_l\det(\tilde{M})=2(-1)^{l+2+i}\det(M^+_l),\;~l=1,\dots, N_{+}, \end{equation} where $M^+_i$ is obtained from~$\tilde{M}$ by removing the last column and the $i-$th row and $M^-_j$ is obtained from~$\tilde{M}$ by removing the last column and the $(N_{+}+j)$-th row. Also note that \begin{equation} \det(M^+_i)=2(-1)^{l+2+N_{+}+j+1}\det(M^-_j),~l=1,\dots, N_{+}. \end{equation}
If we now evaluate the determinant of~$M$ directly, then \begin{equation}\label{eq:directly_det_M} \det M=\sum_{i=1}^{N_+}(-1)^{l+2+i}\Delta_i+\sum_{j=N_{+}+1}^{N_{+}+N_{-}}(-1)^{l+2+j+1}\Delta_j. \end{equation} Based on~(\ref{eq:det_tilde_M}), each component in the right-hand side of~(\ref{eq:directly_det_M}) has the same sign.
Therefore, the linear system~(\ref{eq:main_system}) has a unique solution for any right-hand side of the system. \end{proof}
Note that the division into ``positive'' and ``negative'' basis points does not mean that the deviation sign is positive for ``positive'' basis points and negative for ``negative'' basis points. The actual deviation sign also depends on the sign of $\sigma$ from~(\ref{eq:main_system}).
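
Numerically, the interpolation conditions above amount to solving the square linear system~(\ref{eq:main_system}). The following Python sketch is purely illustrative; the basis functions and the two groups of basis points are assumed to be given.
\begin{verbatim}
import numpy as np

def chebyshev_interpolation(f, gs, Y, Z):
    # f: approximated function; gs: list of basis functions g_1,...,g_n;
    # Y, Z: "positive" and "negative" basis points (n+2 points in total).
    # Returns the coefficient vector A = (a_0,...,a_n) and the deviation sigma.
    rows, rhs = [], []
    for x, sign in [(y, +1.0) for y in Y] + [(z, -1.0) for z in Z]:
        rows.append([1.0] + [g(x) for g in gs] + [sign])
        rhs.append(f(x))
    sol = np.linalg.solve(np.array(rows), np.array(rhs))
    return sol[:-1], sol[-1]
\end{verbatim}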
Extending the notion of the Chebyshev interpolating polynomial to the case of multivariate approximation, and not restricting ourselves to polynomials, we define the following. \begin{definition} A modelling function $L(\mathbf{A},\mathbf{x})$ from~(\ref{eq:model_function}) is called a Chebyshev interpolation modelling function if it deviates from the approximation function at all basis points by the same absolute value, and the deviation signs are opposite for any two points selected from different basis subsets (positive and negative). \end{definition}
The additional requirement for a basis to be non-singular may be removed by \begin{itemize} \item restricting to some particular types of basis functions (for example, polynomials); \item allowing the system~(\ref{eq:main_system}) to have more than one solution. \end{itemize} These will be included in our future research directions. \subsubsection{Step two extension}
Our next step is to demonstrate the following. \begin{theorem} Consider two intersecting sets \(\mathcal{Y}\) and~\(\mathcal{Z}\) such that the points in \(\mathcal{Y}\) all have the same deviation and opposite deviation to all the points in \(\mathcal{Z}\) (\(g(\tilde{y}) = -g(\tilde{z}), \forall \tilde{y}\in \mathcal{Y}, \tilde{z}\in\mathcal{Z}\)). Assume now that $g(\mathbf{y}) = g(\tilde{y}), \forall \tilde{y} \in \mathcal{Y}$, and that the set $$\mathcal{K}=\textrm{relint}(\{{\cal{Y}}\cup g(\mathbf{y})\})\cap\textrm{relint}({\cal{Z}}) \neq \emptyset.$$ There exists a point in the combined collection of vertices of~${\cal{Y}}$ and ${\cal{Z}}$ that can be removed while~$\mathbf{y}$ is included in~${\cal{Y}}$, such that the updated sets~${\cal{\tilde{Y}}}$ and ${\cal{\tilde{Z}}}$ intersect. \end{theorem} \begin{proof} Since ${\textrm{relint}}({\cal{Y}})\cap\text{relint}({\cal{Z}})\ne \emptyset$, there exist strictly positive coefficients $$\alpha_i,~ i=1,\dots,N_+$$ and $$\beta_j,~j=1,\dots,N_-,$$ such that $\sum_{i=1}^{N_+}\alpha_i=1$ and $\sum_{j=1}^{N_-}\beta_j=1$.
Since $\mathcal{K}\ne\emptyset$, there exist strictly positive coefficients $$\alpha,~\tilde{\alpha}_i,~i=1,\dots,N_+$$ such that $\alpha+\sum_{i=1}^{N_+}\tilde{\alpha}_i=1$ and $\tilde{\beta}_j$, $j=1,\dots,N_-$, such that $$\sum_{j=1}^{N_-}\tilde{\beta}_j=1.$$
Find \begin{equation}\label{eq:gamma} \gamma=\min\left\{\min_{i=1,\dots,N_+}{\tilde{\alpha}_i\over \alpha_i},\min_{j=1,\dots,N_-}{\tilde{\beta}_j\over \beta_j}\right\}. \end{equation}
First, assume that $\gamma={\tilde{\alpha}_1\over\alpha_1}$. Note that $\alpha_1\ne 0$,
then~(\ref{eq:intersecting}) can be written as $$g(\mathbf{y}_1)={1\over\alpha_1}\left(\sum_{j=1}^{N_-}\beta_jg(\mathbf{z}_j)-\sum_{i=2}^{N_+}\alpha_i g(\mathbf{y}_i)\right).$$ Then, the convex hull condition with the new point $\mathbf{y}$ included becomes \begin{equation} \alpha g(\mathbf{y})+{\tilde{\alpha}_1\over\alpha_1}\left(\sum_{j=1}^{N_-}\beta_jg(\mathbf{z}_j)-\sum_{i=2}^{N_+}\alpha_ig(\mathbf{y}_i)\right)+\sum_{i=2}^{N_+}\tilde{\alpha}_i g(\mathbf{y}_i)=\sum_{j=1}^{N_-}\tilde{\beta}_jg(\mathbf{z}_j) \end{equation} and finally \begin{equation} \alpha g(\mathbf{y})+\sum_{i=2}^{N_+}\left(\tilde{\alpha}_i-{\tilde{\alpha}_1\over \alpha_1}\alpha_i\right)g(\mathbf{y}_i)=\sum_{j=1}^{N_-}\left(\tilde{\beta}_j-{\tilde{\alpha}_1\over \alpha_1}\beta_j\right)g(\mathbf{z}_j). \end{equation} Since $\alpha_i>0$, $i=1,\dots,N_+$, and by the definition of $\gamma$, one obtains that for any $i=2,\dots,N_+$ \begin{equation} \tilde{\alpha}_i-{\tilde{\alpha}_1\over \alpha_1}\alpha_i\geq \tilde{\alpha}_i-{\tilde{\alpha}_i\over \alpha_i}\alpha_i=0. \end{equation} Similarly, for any $j=1,\dots,N_-$, $$\tilde{\beta}_j-{\tilde{\alpha}_1\over\alpha_1}\beta_j\geq 0.$$ Note that \begin{equation} \sum_{j=1}^{N_-}(\tilde{\beta}_j-{\tilde{\alpha}_1\over \alpha_1}\beta_j)=1-{\tilde{\alpha}_1\over \alpha_1} \end{equation} and \begin{equation} \alpha+\sum_{i=2}^{N_+}\tilde{\alpha}_i-{\tilde{\alpha}_1\over\alpha_1}\sum_{i=2}^{N_+}\alpha_i=(1-\tilde{\alpha}_1)-{\tilde{\alpha}_1\over\alpha_1}(1-\alpha_1)=1-{\tilde{\alpha}_1\over\alpha_1}=1-\gamma. \end{equation} Since $\alpha$ is strictly positive, $\gamma<1$. Therefore, the new point can be included instead of $\mathbf{y}_1$ and the convex hulls of the updated sets are intersecting (and so are their relative interiors).
Second, assume that $\gamma={\tilde{\beta}_1\over\beta_1}$. Note that $\beta_1\ne 0$, otherwise $\mathbf{y}$ can be included instead of $\mathbf{z}_1$.
Similarly to part~1, obtain \begin{equation} \alpha g(\mathbf{y})+\sum_{i=1}^{N_+}(\tilde{\alpha}_i-{\tilde{\beta}_1\over\beta_1}\alpha_i)g(\mathbf{y}_i)=\sum_{j=2}^{N_-}(\tilde{\beta}_j-{\tilde{\beta}_1\over\beta_1}\beta_j)g(\mathbf{z}_j). \end{equation} Since $$\alpha+1-\alpha-{\tilde{\beta}_1\over\beta_1}=1-\tilde{\beta}_1-{\tilde{\beta}_1\over\beta_1}(1-\beta_1)=1-{\tilde{\beta}_1\over\beta_1}>0,$$ the convex hulls of the updated sets are intersecting. \end{proof}
Note that for the extension of this step we only need the assumption that the relative interiors are intersecting; moreover, if this is the case, the new basis preserves this property.
\subsubsection{Step three extension}
The final step is to show that the proposed exchange rule leads to a modelling function whose deviation at the new basis is strictly higher than the deviation at the points of the original basis.
\begin{theorem} Assume that a point with a higher absolute deviation is included in the basis instead of one of the points of the original basis (which is also non-singular). The absolute deviation of the Chebyshev interpolation modelling function that corresponds to the new basis is higher than that of the Chebyshev interpolation modelling function on the original basis. \end{theorem} \begin{proof} Denote the sets of positive and negative deviation points of the original basis by \[\mathcal{Y} = \{\mathbf{y}_i,~i=1,\dots,N_+\}\] and \[\mathcal{Z} = \{\mathbf{z}_j,~j=1,\dots,N_-\},\] respectively. Assume that \(\tilde{\mathcal{Y}} = (\mathcal{Y}\cup \{\mathbf{y}\})\setminus \{\mathbf{y}_1\}\) and \(\tilde{\mathcal{Z}} = \mathcal{Z}\) (when a point from the set \(\mathcal{Z}\) is removed instead, the proof is similar).
Since the convex hulls of positive and negative deviation points are intersecting, there exist nonnegative convex coefficients \begin{itemize} \item $\alpha_1,\dots,\alpha_{N_+}: \sum_{i=1}^{N_+}\alpha_i=1$ and $\beta_1,\dots,\beta_{N_-}: \sum_{j=1}^{N_-}\beta_j=1$ (original basis); \item $\alpha,~\tilde{\alpha}_2,\dots,\tilde{\alpha}_{N_+}: \alpha+\sum_{i=2}^{N_+}\tilde{\alpha}_i=1$ and $\tilde{\beta}_1,\dots,\tilde{\beta}_{N_-}: \sum_{j=1}^{N_-}\tilde{\beta}_j=1$ (new basis), \end{itemize} such that on the original basis \begin{equation}\label{eq:convex_hulls_original_basis} \sum_{i=1}^{N_+}\alpha_i\mathbf{y}_i-\sum_{j=1}^{N_-}\beta_j\mathbf{z}_j=\mathbf{0} \end{equation} and on the new basis \begin{equation}\label{eq:convex_hulls_new_basis} \alpha\mathbf{y}+\sum_{i=2}^{N_+}\tilde{\alpha}_i\mathbf{y}_i-\sum_{j=1}^{N_-}\tilde{\beta}_j\mathbf{z}_j=\mathbf{0}. \end{equation} System~(\ref{eq:convex_hulls_new_basis}) is equivalent to
\begin{equation} \left[\alpha,\tilde{\alpha}_2,\dots,\tilde{\alpha}_{N_+},\tilde{\beta}_1,\dots,\tilde{\beta}_{N_-}\right]\left[\begin{matrix} \mathbf{y}\\ \mathbf{y}_2\\ \vdots\\ \mathbf{y}_{N_+}\\ \mathbf{z}_1\\ \vdots\\ \mathbf{z}_{N_-} \end{matrix} \right]=\mathbf{0}. \end{equation}
Then
\begin{equation} \left[\alpha,\tilde{\alpha}_2,\dots,\tilde{\alpha}_{N_+},\tilde{\beta}_1,\dots,\tilde{\beta}_{N_-}\right]\left[\begin{matrix} 1&\mathbf{y}\\ 1&\mathbf{y}_2\\ \vdots&\vdots\\ 1&\mathbf{y}_{N_+}\\ 1&\mathbf{z}_1\\ \vdots&\vdots\\ 1&\mathbf{z}_{N_-} \end{matrix} \right]\mathbf{A}=\mathbf{0} \end{equation} for any $\mathbf{A}\in\mathbb{R}^{n+1}$. Let $\mathbf{A}_{o}$ and $\mathbf{A}_{new}$ be parameter coefficients of the Chebyshev interpolation modelling functions that correspond to the original and new basis respectively. Then \begin{equation}\label{eq:orig_cheb_inter_pol} \alpha P_n(\mathbf{A}_{o},\mathbf{y})+\sum_{i=2}^{N_+}\tilde{\alpha}_iP_n(\mathbf{A}_{o},\mathbf{y}_i)-\sum_{j=1}^{N_-}\tilde{\beta}_jP_n(\mathbf{A}_{o},\mathbf{z}_j)=0 \end{equation} and \begin{equation}\label{eq:new_cheb_inter_pol} \alpha P_n(\mathbf{A}_{new},\mathbf{y})+\sum_{i=2}^{N_+}\tilde{\alpha}_iP_n(\mathbf{A}_{new},\mathbf{y}_i)-\sum_{j=1}^{N_-}\tilde{\beta}_jP_n(\mathbf{A}_{new},\mathbf{z}_j)=0. \end{equation} Assume that \begin{equation} f(\mathbf{y}_1)-P_n(\mathbf{A}_{new},\mathbf{y}_1)=\sigma_{new}>0. \end{equation} Then \begin{equation} \sigma_{new}+P_n(\mathbf{A}_{new},\mathbf{y})=f(\mathbf{y}), \end{equation} \begin{equation} \sigma_{new}+P_n(\mathbf{A}_{new},\mathbf{y}_i)=f(\mathbf{y}_i),~i=2,\dots,N_+, \end{equation} and \begin{equation} -\sigma_{new}+P_n(\mathbf{A}_{new},\mathbf{z}_j)=f(\mathbf{z}_j),~j=2,\dots,N_-. \end{equation} Due to~(\ref{eq:orig_cheb_inter_pol})-(\ref{eq:new_cheb_inter_pol})
\begin{align*} 2\sigma_{new}&=\alpha\big(f(\mathbf{y})-P_n(\mathbf{A}_{o},\mathbf{y})\big)+\sum_{i=2}^{N_+}\tilde{\alpha}_i\big(f(\mathbf{y}_i)-P_n(\mathbf{A}_{o},\mathbf{y}_i)\big)-\sum_{j=1}^{N_-}\tilde{\beta}_j\big(f(\mathbf{z}_j)-P_n(\mathbf{A}_{o},\mathbf{z}_j)\big)\\
&>2\sigma_{o}.
\end{align*}
Therefore, $\sigma_{new}>\sigma_{o}.$
\end{proof}
Therefore, the notion of basis and the de la Vall\'{e}e-Poussin procedure have been extended to multivariate functions and to arbitrary basis functions (not only traditional polynomials). If the newly obtained basis is non-singular, one can make another step of the de la Vall\'{e}e-Poussin procedure.
\section{Further research directions}\label{sec:conclusion}
We will extend the results to the case when the basis is singular. In order to do this, we need to remove two assumptions. \begin{enumerate} \item Any $(n+1)$-point subset of the basis ($n+2$ points) forms an affinely independent system. \item The relative interiors of the convex hulls of positive and negative maximal deviation points (restricted to the basis) are intersecting. \end{enumerate}
The first assumption may not be removed for an arbitrary type of basis function. However, it may be possible to remove this assumption for some special types of functions (for example, polynomials). The removal of the second assumption may lead to dimension reduction. These will be included in our future research directions.
\end{document} |
\begin{document}
\title{Log-concavity for series in reciprocal gamma functions and applications}
\begin{center} \parbox{12cm}{ \small\textbf{Abstract.} Euler's gamma function $\Gamma(x)$ is logarithmically convex on $(0,\infty)$. Additivity of logarithmic convexity implies that the function $x\to\sum{f_k\Gamma(x+k)}$ is also log-convex (assuming convergence) if the coefficients are non-negative. In this paper we investigate the series $\sum{f_k\Gamma(x+k)^{-1}}$, where each term is clearly log-concave. Log-concavity is not preserved by addition, so that non-negativity of the coefficients is now insufficient to draw any conclusions about the sum. We demonstrate that the sum is log-concave if $kf_k^2\geq(k+1)f_{k-1}f_{k+1}$ and is discrete Wright log-concave if $f_k^2\geq{f_{k-1}f_{k+1}}$. We conjecture that the latter condition is in fact sufficient for the log-concavity of the sum. We exemplify our general theorems by deriving known and new inequalities for the modified Bessel, Kummer and generalized hypergeometric functions and their parameter derivatives.} \end{center}
Keywords: \emph{Gamma function, log-concavity, Tur\'{a}n inequality, hypergeometric functions, modified Bessel function, Kummer function}
MSC2010: 26A51, 33C20, 33C15, 33C05
\paragraph{1. Introduction.} A positive continuous function $f$ defined on a real interval $I\subseteq\mathbf{R}$ is said to be \textbf{Jensen log-concave} if \begin{equation}\label{eq:log-conc} f(\mu+h)^2\geq f(\mu)f(\mu+2h) \end{equation} for all $h>0$ and all $\mu$ such that $[\mu,\mu+2h]\subseteq{I}$. If inequality (\ref{eq:log-conc}) is strict the function $f$ is called strictly Jensen log-concave. If the sign of the inequality is reversed one talks about Jensen log-convexity (or strict Jensen log-convexity). For continuous functions Jensen log-concavity is equivalent to log-concavity, i.e. concavity of $\log(f)$ (but is weaker in general). It is also equivalent to the seemingly stronger inequality \begin{equation}\label{eq:wright-log-conc} \phi_{h,s}(\mu)=f(\mu+h)f(\mu+s)-f(\mu)f(\mu+h+s)\geq 0 ~\text{for all}~~h,s>0, \end{equation} which expresses the fact that $\mu\to f(\mu+h)/f(\mu)$ is non-increasing for any $h>0$. We tacitly assume here and below that all arguments lie in $I$. A function satisfying (\ref{eq:wright-log-conc}) is called \textbf{Wright log-concave} \cite[Chapter~I.4]{MPF}. For comparisons of these notions and their higher order analogues see also the recent paper \cite{NRW}.
One also frequently encounters the situation where (\ref{eq:log-conc}) or (\ref{eq:wright-log-conc}) only holds for integer values of $h$. We will express this fact by saying that $f$ is \textbf{discrete log-concave} or \textbf{discrete Wright log-concave}, respectively. In this case, however, we only have the implication (\ref{eq:wright-log-conc})$\Rightarrow$(\ref{eq:log-conc}), while the reverse implication is not true even for continuous functions. We note that $f$ is discrete Wright log-concave if and only if \begin{equation}\label{eq:disc-wright} \phi_{1,s}(\mu)=f(\mu+1)f(\mu+s)-f(\mu)f(\mu+s+1)\geq{0} ~\text{for all}~~s>0. \end{equation} Indeed, (\ref{eq:disc-wright}) says that $f(\mu+1)/f(\mu)$ is non-increasing, so that $f(\mu+2)/f(\mu+1)$ is again non-increasing, implying that their product $f(\mu+2)/f(\mu)$ is non-increasing, i.e. satisfies (\ref{eq:wright-log-conc}) with $h=2$. In a similar fashion (\ref{eq:wright-log-conc}) holds for all integer values of $h$, which is discrete Wright log-concavity by definition. Discrete Jensen log-concavity and log-convexity are also frequently referred to as ``Tur\'{a}n type inequalities'' following the classical result of Paul Tur\'{a}n for Legendre polynomials \cite{Turan}.
If $f:\mathbb{N}_0\to{\mathbf{R}_+}$ is a sequence, then discrete log-concavity reduces to the inequality $f_k^2\geq{f_{k-1}f_{k+1}}$, $k\in\mathbb{N}$. We additionally require that the sequence $\{f_k\}_{k=0}^{\infty}$ is non-trivial and has no internal zeros: $f_{N}=0~\Rightarrow~f_{N+i}=0$ for all $i\in\mathbb{N}_{0}$. Such sequences are also known as $PF_2$ (P\'{o}lya frequency sub two) or doubly positive \cite{Karlin}.
Clearly, if $f$ is (Jensen or Wright) log-concave then $1/f$ is (Jensen or Wright) log-convex. Notwithstanding the simplicity of this relation, several important properties of log-concavity and log-convexity differ. Particularly important is that log-convexity is additive while log-concavity is not. Further, log-convexity is a stronger property than convexity whereas log-concavity is weaker than concavity.
In \cite{KS} the second author and Sergei Sitnik initiated the investigation of the following problem: under what conditions on non-negative sequence $\{f_k\}$ and the numbers $a_i$, $b_j$ the function \begin{equation}\label{eq:problem1} \mu\to f(\mu;x):=\sum\limits_{k=0}^{\infty}f_k\frac{\prod_{i=1}^{n}\Gamma(a_i+\mu+\varepsilon_ik)} {\prod_{j=1}^{m}\Gamma(b_j+\mu+\varepsilon_{n+j}k)}x^k \end{equation} is (discrete) log-concave or log-convex? Here $\Gamma$ is Euler's gamma function and $\varepsilon_l$ can be $1$ or $0$. In \cite{KS} the authors treated the cases of (\ref{eq:problem1}) with $n=1$, $m=0$, $\varepsilon_1=1$; $n=m=1$, $\varepsilon_1=1$, $\varepsilon_2=0$; and $n=m=1$, $\varepsilon_1=0$, $\varepsilon_2=1$. Of course, the log-convexity cases are nearly trivial but the results of \cite{KS} go beyond log-convexity by showing that the function $\phi_{h,s}(\mu;x)$ on the left-hand side of (\ref{eq:wright-log-conc}) has non-positive Taylor coefficients in powers of $x$, so that $x\to-\phi_{h,s}(\mu;x)$ is absolutely monotonic. According to Hardy, Littlewood and P\'{o}lya theorem \cite[Proposition~2.3.3]{NP} this also implies that this function is multiplicatively concave: $\phi_{h,s}(\mu;x^{\lambda}y^{1-\lambda})\geq{\phi_{h,s}(\mu;x)^{\lambda}\phi_{h,s}(\mu;y)^{1-\lambda}}$ for $\lambda\in[0,1]$.
In this paper we treat the case $n=0$, $m=1$, $\varepsilon_1=1$. We get slightly different results depending on conditions imposed on the sequence $\{f_{k}\}$. If $f_{k}$ is log-concave without internal zeros (i.e. doubly positive) we prove discrete Wright log-concavity of the series (\ref{eq:problem1}). We conjecture that the true log-concavity holds but we were unable to demonstrate this. If $\{f_kk!\}$ is doubly positive we show that (\ref{eq:problem1}) is log-concave. We do so by establishing non-negativity of the Taylor coefficients of $\phi_{h,s}(\mu;x)$ in powers of $x$ (either for all or only for integer $h>0$). Again, by Hardy, Littlewood and P\'{o}lya theorem this implies that $x\to\phi_{h,s}(\mu;x)$ is multiplicatively convex.
The paper is organized as follows: in section~2 we collect several lemmas repeatedly used in the proofs; section~3 comprises two theorems constituting the main content of the paper; in section~4 we give applications to Bessel, Kummer and generalized hypergeometric functions and relate them to several previously known results.
\paragraph{2. Preliminaries.} We will need several lemmas which we prove in this section. First, we formulate an elementary inequality we will repeatedly use below. \begin{lemma}\label{lm:uvrs} Suppose $u,v,r,s>0$, $u=\max(u,v,r,s)$ and $uv>rs$. Then $u+v>r+s$. \end{lemma} \textbf{Proof.} Indeed, dividing by $u$ we can rewrite the required inequality as $r'+s'<1+v'$, where $r'=r/u\in(0,1)$, $s'=s/u\in(0,1)$, $v'=v/u\in(0,1)$. Since $r's'<v'$ by $rs<uv$, the required inequality follows from the elementary inequality $r'+s'<1+r's'$. ~~$\square$
Lemma~\ref{lm:uvrs} is a particular case of a much more general result on logarithmic majorization - see \cite[2.A.b]{MOA}.
In the next lemma we say that a sequence has no more than one change of sign if the pattern is $(--\cdots--00\cdots00++\cdots++)$, where zeros and minus signs may be omitted. \begin{lemma}\label{lm:sum} Suppose $\{f_k\}_{k=0}^{n}$ has no internal zeros and $f_k^2\geq{f_{k-1}f_{k+1}}$, $k=1,2,\ldots,n-1$. If the real sequence $A_0,A_1,\ldots,A_{[n/2]}$ satisfying $A_{[n/2]}>0$ and $\sum\limits_{0\leq{k}\leq{n/2}}\!\!\!\!A_k\geq{0}$ has no more than one change of sign, then \begin{equation}\label{eq:keysum} \sum\limits_{0\leq{k}\leq{n/2}}f_{k}f_{n-k}A_k\geq{0}. \end{equation} Equality is only attained if $f_k=f_0\alpha^k$, $\alpha>0$, and $\sum\limits_{0\leq{k}\leq{n/2}}\!\!\!\!A_k=0$. \end{lemma} \textbf{Proof.} Suppose $f_k>0$, $k=s,\ldots,p$, $s\geq{0}$, $p\leq{n}$. Log-concavity of $\{f_k\}_{k=0}^{n}$ clearly implies that $\{f_{k}/f_{k-1}\}_{k=s+1}^{p}$ is decreasing, so that for $s+1\leq{k}\leq{n-k+1}\leq{p+1}$ $$ \frac{f_k}{f_{k-1}}\geq\frac{f_{n-k+1}}{f_{n-k}}~\Leftrightarrow~f_{k}f_{n-k}\geq f_{k-1}f_{n-k+1}. $$ Since $k\leq{n-k+1}$ is true for all $k=1,2,\ldots,[n/2]$, the weights $f_{k}f_{n-k}$ assigned to negative $A_k$s in (\ref{eq:keysum}) are smaller than those assigned to positive $A_k$s leading to (\ref{eq:keysum}). The equality statement is obvious.~~~$\square$ \begin{lemma}\label{lm:gammasum} Suppose $m$ is non-negative integer. The inequality \begin{equation}\label{eq:gammasum} \sum\limits_{k=0}^{m}\frac{1}{k!(m-k)!} \left(\frac{1}{\Gamma(k+\mu+a)\Gamma(m-k+\mu+b)}-\frac{1}{\Gamma(m-k+\mu)\Gamma(k+\mu+a+b)}\right)\geq{0} \end{equation} holds for all $a,b\geq{0}$, $\mu\geq{-1}$ and such that $\mu+a\geq{0}$, $\mu+b\geq{0}$ . Equality is only attained if $ab=0$. \end{lemma} \textbf{Proof.} Using the easily verifiable relations $$ (c)_k=\frac{\Gamma(c+k)}{\Gamma(c)},~~(m-k)!=\frac{(-1)^km!}{(-m)_k}~~\text{and}~~(c)_{m-k}=\frac{(-1)^k(c)_m}{(1-c-m)_k} $$ we obtain \begin{multline*} \sum\limits_{k=0}^{m}\frac{1}{k!(m-k)!\Gamma(k+\alpha)\Gamma(m-k+\beta)}=\frac{1}{m!\Gamma(\alpha)\Gamma(\beta)} \sum\limits_{k=0}^{m}\frac{(-1)^k(-m)_k}{k!(\alpha)_k(\beta)_{m-k}} \\ =\frac{1}{m!\Gamma(\alpha)\Gamma(\beta)(\beta)_m}\sum\limits_{k=0}^{m}\frac{(-m)_k(1-\beta-m)_k}{k!(\alpha)_k} =\frac{(\alpha+\beta+m-1)_m}{m!\Gamma(\alpha)(\alpha)_m\Gamma(\beta)(\beta)_m} \\ =\frac{\Gamma(\alpha+\beta+2m-1)}{\Gamma(\alpha+m)\Gamma(\beta+m)\Gamma(\alpha+\beta+m-1)m!}, \end{multline*} where we have used the Chu-Vandermonde identity \cite[Corollary~2.2.3]{AAR} $$ \sum\limits_{k=0}^{m}\frac{(-m)_k(a)_k}{(c)_kk!}=\frac{(c-a)_m}{(c)_m}. $$
This leads to an explicit evaluation of the left hand side of (\ref{eq:gammasum}) as $$ \frac{\Gamma(2\mu+a+b+2m-1)}{\Gamma(2\mu+a+b+m-1)m!}\left( \frac{1}{\Gamma(m+\mu+a)\Gamma(m+\mu+b)}-\frac{1}{\Gamma(m+\mu)\Gamma(m+\mu+a+b)} \right) $$ For $a,b,\mu>0$, the required inequality reduces to log-convexity of $\Gamma(x)$ for $x>0$ \cite[Corollary~1.2.6]{AAR}. If $ab=0$ we clearly get the equality. If $a,b>0$, $\mu=m=0$, the second term in parentheses disappears and (\ref{eq:gammasum}) holds strictly. If $m=0$, $-1\leq\mu<0$ and $\mu+a\geq{0}$, $\mu+b\geq{0}$ then the term outside the parentheses reduces to 1 while the second term inside parentheses is strictly negative (since $\mu+a+b>0$), so that the sum is strictly positive. If $m\geq{1}$ then $\mu+m\geq{0}$ and we are back in the previous situation.~~$\square$
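
The closed-form evaluation obtained in this proof is easy to verify numerically; the following Python sketch (an illustration only) compares the finite sum with the derived expression for a few admissible parameter values.
\begin{verbatim}
from math import gamma, factorial

def lhs(m, alpha, beta):
    # finite sum over k of 1/(k!(m-k)! Gamma(k+alpha) Gamma(m-k+beta))
    return sum(1.0 / (factorial(k) * factorial(m - k)
                      * gamma(k + alpha) * gamma(m - k + beta))
               for k in range(m + 1))

def rhs(m, alpha, beta):
    # closed form obtained via the Chu-Vandermonde identity
    return gamma(alpha + beta + 2 * m - 1) / (
        gamma(alpha + m) * gamma(beta + m)
        * gamma(alpha + beta + m - 1) * factorial(m))

for m, alpha, beta in [(1, 1.0, 1.0), (3, 0.7, 2.5), (5, 1.3, 0.4)]:
    print(m, alpha, beta, lhs(m, alpha, beta), rhs(m, alpha, beta))
\end{verbatim}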
\begin{lemma}\label{lm:rec-gammas} Suppose $m\geq{0}$ is an integer. Then for all complex $\beta$ and $\mu$ \begin{multline}\label{eq:rec-gamma-sum} S_m(\mu,\beta):=\sum\limits_{k=0}^{m}\left\{\frac{1}{\Gamma(k+\mu+1)\Gamma(m-k+\mu+\beta)} -\frac{1}{\Gamma(k+\mu)\Gamma(m-k+\mu+\beta+1)}\right\} \\ =\frac{(\mu+\beta)_{m+1}-(\mu)_{m+1}}{\Gamma(\mu+m+1)\Gamma(\mu+\beta+m+1)}. \end{multline} \end{lemma} \textbf{Proof.} We have \begin{multline*} S_m(\mu,\beta)=\frac{1}{\Gamma(\mu+1)\Gamma(\mu+\beta)\Gamma(\mu)\Gamma(\mu+\beta+1)} \sum\limits_{k=0}^{m}\left\{\frac{\Gamma(\mu)\Gamma(\mu+\beta+1)}{(\mu+1)_k(\mu+\beta)_{m-k}} -\frac{\Gamma(\mu+1)\Gamma(\mu+\beta)}{(\mu)_{k}(\mu+\beta+1)_{m-k}}\right\} \\ =\frac{1}{\Gamma(\mu+1)\Gamma(\mu+\beta+1)} \sum\limits_{k=0}^{m}\left\{\frac{\mu+\beta}{(\mu+1)_k(\mu+\beta)_{m-k}} -\frac{\mu}{(\mu)_{k}(\mu+\beta+1)_{m-k}}\right\} \\ =\frac{1}{\Gamma(\mu+1)\Gamma(\mu+\beta+1)} \sum\limits_{k=0}^{m}\frac{1}{(\mu)_k(\mu+\beta)_{m-k}} \left\{\frac{\mu(\mu+\beta)}{\mu+k} -\frac{\mu(\mu+\beta)}{\mu+\beta+m-k}\right\} \\ =\frac{1}{\Gamma(\mu+1)\Gamma(\mu+\beta+1)} \sum\limits_{k=0}^{m}\frac{\beta+m-2k}{(\mu+1)_k(\mu+\beta+1)_{m-k}}. \end{multline*} Writing $$ u_k=\frac{1}{(\mu+1)_{k-1}(\mu+\beta+1)_{m-k}},~1\leq{k}\leq{m},~~ u_0=\frac{\mu}{(\mu+\beta+1)_{m}},~u_{m+1}=\frac{\mu+\beta}{(\mu+1)_{m}}, $$ we get a telescoping sum, since $$ u_{k+1}-u_{k}=\frac{1}{(\mu+1)_{k}(\mu+\beta+1)_{m-k-1}}-\frac{1}{(\mu+1)_{k-1}(\mu+\beta+1)_{m-k}} =\frac{\beta+m-2k}{(\mu+1)_k(\mu+\beta+1)_{m-k}} $$ for $k=0,1,\ldots,m$, so that $$ \sum\limits_{k=0}^{m}(u_{k+1}-u_{k})=u_{m+1}-u_{0}=\frac{\mu+\beta}{(\mu+1)_{m}}-\frac{\mu}{(\mu+\beta+1)_{m}} $$ and $$ \frac{1}{\Gamma(\mu+1)\Gamma(\mu+\beta+1)} \sum\limits_{k=0}^{m}\frac{\beta+m-2k}{(\mu+1)_k(\mu+\beta+1)_{m-k}}= \frac{(\mu+\beta)_{m+1}-(\mu)_{m+1}}{\Gamma(\mu+m+1)\Gamma(\mu+\beta+m+1)}.~\square $$ The following is a straightforward consequence of formula (\ref{eq:rec-gamma-sum}). \begin{corol}\label{cr:rec-gammas} If $\mu\geq{-1}$, $\beta\geq{0}$ and $\mu+\beta\geq{0}$ then $S_{m}(\mu,\beta)\geq{0}$. The inequality is strict unless $\beta=0$. \end{corol}
\paragraph{3. Main results.} In this section we prove two general theorems for series in reciprocal gamma functions. The power series expansions in this section are understood as formal, so that no questions of convergence are discussed. In applications the radii of convergence will usually be apparent. The results of this section are exemplified by concrete special functions in the subsequent section. \begin{theo}\label{th:gammadenom} Suppose $\{f_n\}_{n=0}^{\infty}$ is a non-trivial non-negative log-concave sequence without internal zeros. Then the function \begin{equation}\label{eq:func} \mu\mapsto f(\mu,x)=\sum\limits_{n=0}^{\infty}\frac{f_nx^n}{n!\Gamma(\mu+n)}, \end{equation} is strictly log-concave on $(0,\infty)$ for each fixed $x\geq{0}$. Moreover, the function $$ \varphi_{a,b,\mu}(x):=f(a+\mu,x)f(b+\mu,x)-f(a+b+\mu,x)f(\mu,x)=\sum\limits_{m=0}^{\infty}\varphi_mx^m $$ has positive power series coefficients $\varphi_m>0$ for $\mu\geq{-1}$, $a,b>0$ and $\mu+a\geq{0}$, $\mu+b\geq{0}$ so that $\varphi_{a,b,\mu}(x)$ is absolutely monotonic and multiplicatively convex on $(0,\infty)$. \end{theo}
\noindent\textbf{Proof.} Cauchy product yields $$ \varphi_m=\sum\limits_{k=0}^{m}\frac{f_kf_{m-k}}{k!(m-k)!}\left(\frac{1}{\Gamma(k+\mu+a)\Gamma(m-k+\mu+b)}-\frac{1}{\Gamma(m-k+\mu)\Gamma(k+\mu+a+b)}\right). $$
Further, by Gauss summation (the first term is combined with the last,
the second with the second last etc.) we can write $\varphi_m$ in the form \begin{equation}\label{eq:repr} \varphi_m=\sum\limits_{k=0}^{[m/2]}\frac{f_kf_{m-k}}{k!(m-k)!}M_k(a,b,\mu), \end{equation} where for $k<m/2$ \begin{multline*} M_k(a,b,\mu)=\underbrace{[\Gamma(k+\mu+a)\Gamma(m-k+\mu+b)]^{-1}}_{=u} +\underbrace{[\Gamma(k+\mu+b)\Gamma(m-k+\mu+a)]^{-1}}_{=v} \\[0pt] -\underbrace{[\Gamma(m-k+\mu)\Gamma(k+\mu+a+b)]^{-1}}_{=r} -\underbrace{[\Gamma(k+\mu)\Gamma(m-k+\mu+a+b)]^{-1}}_{=s} \end{multline*} and for $k=m/2$ $$ M_k=[\Gamma(m/2+\mu+a)\Gamma(m/2+\mu+b)]^{-1}-[\Gamma(m/2+\mu)\Gamma(m/2+\mu+a+b)]^{-1}. $$ Under assumptions on $\mu,a,b$ made in the theorem \begin{equation}\label{pos} \sum\limits_{k=0}^{[m/2]}\frac{M_k(a,b,\mu)}{k!(m-k)!}\geq{0} \end{equation} according to Lemma~\ref{lm:gammasum}. Write $M_k:=M_k(a,b,\mu)$ for brevity. We aim to show that the sequence $\{M_k\}_{k=0}^{[m/2]}$ has no more than one change of sign with $M_{[m/2]}>0$ in order to apply Lemma~\ref{lm:sum}. Due to log-convexity of $\Gamma(x)$ the ratio $x\mapsto \Gamma(x+\alpha)/\Gamma(x)$ is strictly increasing on $(0,\infty)$ when $\alpha>0$. This immediately implies $M_{[m/2]}>0$ and the following inequalities \begin{equation}\label{vu} v\geq u \Leftrightarrow \frac{\Gamma(m-k+\mu+b)}{\Gamma(m-k+\mu+a)}\geq \frac{\Gamma(k+\mu+b)}{\Gamma(k+\mu+a)}~-~\text{true for}~k\leq{m-k}, \end{equation} \begin{equation}\label{us} u > s \Leftrightarrow \frac{\Gamma(m-k+\mu+a+b)}{\Gamma(m-k+\mu+b)}> \frac{\Gamma(k+\mu+a)}{\Gamma(k+\mu)}~-~\text{true for}~k\leq{m-k}, \end{equation} \begin{equation}\label{vr} v\geq r \Leftrightarrow \frac{\Gamma(k+\mu+a+b)}{\Gamma(k+\mu+b)} \geq \frac{\Gamma(m-k+\mu+a)}{\Gamma(m-k+\mu)}~-~\text{true for}~(m-b)/2\leq k, \end{equation} \begin{equation}\label{rv} r\geq v \Leftrightarrow \frac{\Gamma(m-k+\mu+a)}{\Gamma(m-k+\mu)}\geq \frac{\Gamma(k+\mu+a+b)}{\Gamma(k+\mu+b)}~-~\text{true for}~k\leq(m-b)/2. \end{equation} If $(m-b)/2\leq k<m/2$ then the sum of (\ref{us}) and (\ref{vr}) yields $M_k=u+v-r-s>0$. If $k<(m-b)/2$ then it follows from (\ref{vu}), (\ref{us}) and (\ref{rv}) that $r>v>u>s$ (equality cannot be attained in (\ref{vu}) and (\ref{rv}) under this restriction on $k$). We will change notation to simplify writing: $\alpha = \mu, \beta = b + \mu, \delta = a$. According the hypothesis of the theorem we have $\beta>\alpha>0, \, \delta>0$. We will show now that if $M_k<0 \Leftrightarrow u+v<r+s$ for some $0<k<(m-b)/2$ then $M_{k-1}<0$ as well. Indeed, using $z\Gamma(z)=\Gamma(z+1)$ we can write $M_{k-1}$ in the following form $$ M_{k-1}(\delta)= \frac{k-1+\alpha+\delta}{m-k+\beta}u+\frac{k-1+\beta}{m-k+\alpha+\delta}v-\frac{k-1+\beta+\delta}{m-k+\alpha}r-\frac{k-1+\alpha}{m-k+\beta+\delta}s. $$ Treating $u, v, r, s$ as constants we see that $M_{k-1}(0)<0$ by forming a combination of $r>v$ and $r+s>v+u$ with positive coefficients: $(C_1+C_2)r+C_2s>(C_1+C_2)v+C_2u$, where $$ C_1+C_2=\frac{k-1+\beta}{m-k+\alpha}>C_2=\frac{k-1+\alpha}{m-k+\beta}. $$ Further, differentiating with respect to $\delta$, we get $$ M'_{k-1}(\delta)= \frac{1}{m-k+\beta}u-\frac{k-1+\beta}{(m-k+\alpha+\delta)^2}v-\frac{1}{m-k+\alpha}r+\frac{k-1+\alpha}{(m-k+\beta+\delta)^2}s, $$ which is obviously negative since $r>u$ (by (\ref{rv}) and (\ref{vu})) and $v>s$ (by (\ref{vu}) and (\ref{us})). This shows that $M_{k-1}< 0$ and hence $\{M_k\}_{k=0}^{[m/2]}$ has no more than one change of sign. Applying Lemma~\ref{lm:sum} with $A_k=M_k/(k!(m-k)!)$ we conclude that $\varphi_m>0$. 
Multiplicative convexity follows by Hardy, Littlewood and P\'{o}lya theorem \cite[Proposition~2.3.3]{NP}. ~~$\square$
\par\refstepcounter{theremark}\textbf{Remark \arabic{theremark}.} If $\{f_n\}_{n=0}^{\infty}$ is log-convex then $\varphi_m$ can take both signs.
\begin{corol}\label{cr:compl-mon} Assume the series in $(\ref{eq:func})$ converges for all $x\geq{0}$. Then $\varphi_{a,b,\mu}(1/y)$ is completely monotonic and log-convex on $[0,\infty)$, so that there exists a non-negative measure $\tau$ supported on $[0,\infty)$ such that \begin{equation}\label{eq:compl-mon} \varphi_{a,b,\mu}(x)=\int\limits_{[0,\infty)}e^{-t/x}d\tau(t). \end{equation} \end{corol} \textbf{Proof}. According to \cite[Theorem~3]{MS} a convergent series of completely monotonic functions with non-negative coefficients is again completely monotonic. This implies that $y\to\varphi_{a,b,\mu}(1/y)$ is completely monotonic, so that the above integral representation follows by Bernstein's theorem \cite[Theorem~1.4]{SSV}. Log-convexity follows from complete monotonicity according to \cite[Exercise 2.1(6)]{NP}.~~$\square$
In the next corollary we adopt the convention $\Gamma(-1)=-\infty$, $\Gamma(0)=+\infty$. \begin{corol}\label{cr:f-twosided} Under the hypotheses and notation of Theorem~\ref{th:gammadenom} \begin{equation}\label{eq:f-twosided} \frac{\Gamma(a+\mu)\Gamma(b+\mu)}{\Gamma(\mu)\Gamma(a+b+\mu)}<\frac{f(\mu,x)f(a+b+\mu,x)}{f(a+\mu,x)f(b+\mu,x)}<1~\text{for all}~x\geq{0}. \end{equation} If $\mu=0$ or $\mu=-1$ we additionally require that $x\ne0$; otherwise the left inequality becomes an equality. \end{corol} \textbf{Proof.} The estimate from above is a restatement of Theorem~\ref{th:gammadenom} since it is equivalent to $\varphi_{a,b,\mu}(x)>0$.
The estimate from below is obvious for $\mu=-1$, since we then have a zero or a negative number (if $a=1$ or $b=1$) on the left and a positive number on the right for $x>0$. The remaining proof will be divided into two cases: (I) $\mu\geq{0}$; and (II) $-1<\mu<0$, $\mu+a\geq{0}$, $\mu+b\geq{0}$ (recall that $a,b>0$ by the hypotheses of Theorem~\ref{th:gammadenom}).
In case (I) the left-hand inequality in (\ref{eq:f-twosided}) follows from strict log-convexity of $\mu\to\Gamma(\mu)f(\mu,x)$ which has been proved in \cite[Theorem~3]{KS} (where one has to take account of the formula $(\mu)_k=\Gamma(\mu+k)/\Gamma(\mu)$).
In case (II) $\Gamma(\mu)<0$ and the left-hand inequality in (\ref{eq:f-twosided}) reduces to $$ \Gamma(a+\mu)f(a+\mu,x)\Gamma(b+\mu)f(b+\mu,x)>\Gamma(\mu)f(\mu,x)\Gamma(a+b+\mu)f(a+b+\mu,x). $$ This inequality follows by observing that $\Gamma(\mu)f(\mu,x)=\sum_{n=0}^{\infty}f_nx^n(n!(\mu)_n)^{-1}$ and $$ \sum\limits_{k=0}^{m}\frac{f_{k}f_{m-k}}{k!(m-k)!} \left\{\frac{1}{(a+\mu)_k(b+\mu)_{m-k}}-\frac{1}{(\mu)_k(a+b+\mu)_{m-k}}\right\}>0, $$ since for $k=1,2,\ldots,m$ $(\mu)_k<0$ and for $k=0$ $(b+\mu)_{m}<(a+b+\mu)_{m}$.~~$\square$
\begin{corol}\label{cr:phi-below} Under hypotheses and notation of Theorem~\ref{th:gammadenom} and for all $x\geq{0}$ $$ f(a+\mu,x)f(b+\mu,x)-f(a+b+\mu,x)f(\mu,x)\geq f_0^2\left[\frac{1}{\Gamma(\mu+a)\Gamma(\mu+b)}-\frac{1}{\Gamma(\mu)\Gamma(\mu+a+b)}\right] $$ with equality only at $x=0$. \end{corol} \textbf{Proof.} Indeed, the claimed inequality is just $\varphi_{a,b,\mu}(x)\geq\varphi_{a,b,\mu}(0)$ which is true by Theorem~\ref{th:gammadenom}.~~$\square$
\par\refstepcounter{theremark}\textbf{Remark \arabic{theremark}.} Corollaries~\ref{cr:f-twosided} and \ref{cr:phi-below} imply, by an elementary calculation, the following bounds for the so-called ``generalized Turanian'' $\Delta_{\varepsilon}(\mu,x):=f(\mu,x)^2-f(\mu+\varepsilon,x)f(\mu-\varepsilon,x)$: \begin{equation}\label{eq:genTuranian} A_\varepsilon(\mu)f_0^2\leq\Delta_{\varepsilon}(\mu,x)\leq B_\varepsilon(\mu)f(\mu,x)^2, \end{equation} where $\mu-\varepsilon\geq{-1}$, $\mu\geq{0}$, $x\geq{0}$ and \begin{equation}\label{eq:AB} A_\varepsilon(\mu)=\frac{\Gamma(\mu-\varepsilon)\Gamma(\mu+\varepsilon)-\Gamma(\mu)^2} {\Gamma(\mu-\varepsilon)\Gamma(\mu+\varepsilon)\Gamma(\mu)^2},~~~~ B_\varepsilon(\mu)=\frac{\Gamma(\mu-\varepsilon)\Gamma(\mu+\varepsilon)-\Gamma(\mu)^2} {\Gamma(\mu-\varepsilon)\Gamma(\mu+\varepsilon)}. \end{equation} In particular, if $\varepsilon=1$ the bounds (\ref{eq:genTuranian}) simplify to ($\mu\geq{0}$, $x\geq{0}$) \begin{equation}\label{eq:Turanian} \frac{f_0^2}{\mu\Gamma(\mu)^2}\leq f(\mu,x)^2-f(\mu+1,x)f(\mu-1,x)\leq \frac{1}{\mu}f(\mu,x)^2,~~~x\geq{0},~\mu\geq{0}. \end{equation}
Theorem~\ref{th:gammadenom} can be reformulated in terms of the numbers $g_n:=f_n/n!$. The hypotheses of the theorem then require that these numbers satisfy $$ g_n^2\geq\frac{n+1}{n} g_{n-1}g_{n+1}, $$ a condition stronger than log-concavity. If we weaken it to log-concavity we are only able to prove discrete Wright log-concavity of $\mu\to\,f(\mu,x)$ in the next theorem. We conjecture below that the adjective ``discrete'' is actually redundant. \begin{theo}\label{th:gammadenom1} Suppose $\{g_n\}_{n=0}^{\infty}$ is a non-trivial non-negative log-concave sequence without internal zeros. Then the function \begin{equation}\label{eq:g-def} \mu\to g(\mu,x)=\sum\limits_{n=0}^{\infty}\frac{g_nx^n}{\Gamma(\mu+n)}, \end{equation} is strictly discrete Wright log-concave on $(0,\infty)$ for each fixed $x\geq{0}$. Moreover, the function $$ \lambda_{\beta,\mu}(x):=g(\mu+1,x)g(\mu+\beta,x)-g(\mu,x)g(\mu+\beta+1,x)=\sum\limits_{m=0}^{\infty}\lambda_mx^m $$ has positive power series coefficients $\lambda_m>0$ for each $\mu\geq{-1}$ and $\beta>0$ such that $\mu+\beta\geq{0}$. This implies that $x\to\lambda_{\beta,\mu}(x)$ is absolutely monotonic and multiplicatively convex on $(0,\infty)$. \end{theo} \textbf{Proof.} Pursuing the same line of argument as in Theorem~\ref{th:gammadenom} we have by the Cauchy product and the Gauss summation: $$ \lambda_m=\sum\limits_{k=0}^{[m/2]}g_kg_{m-k}M_k(1,\beta,\mu), $$ where the numbers $M_k$ are defined in the proof of Theorem~\ref{th:gammadenom}, below formula (\ref{eq:repr}). Under the assumptions on $\mu$ and $\beta$ made in the theorem we have $$ \sum\limits_{k=0}^{[m/2]}M_k(1,\beta,\mu)=S_m(\mu,\beta)>0 $$ according to Corollary~\ref{cr:rec-gammas}. Further, it has been shown in the course of the proof of Theorem~\ref{th:gammadenom} that the sequence $\{M_k\}_{k=0}^{[m/2]}$ has no more than one change of sign with $M_{[m/2]}>0$. Hence by Lemma~\ref{lm:sum} we conclude that $\lambda_m>0$, implying discrete Wright log-concavity of $\mu\to{g(\mu,x)}$ for each $x\geq{0}$ and absolute monotonicity of $x\to\lambda_{\beta,\mu}(x)$. Multiplicative convexity follows by the theorem of Hardy, Littlewood and P\'{o}lya \cite[Proposition~2.3.3]{NP}. ~~$\square$
\begin{corol}\label{cr:g-compl-mon} Assume the series in $(\ref{eq:g-def})$ converges for all $x\geq{0}$. Then $\lambda_{\beta,\mu}(1/y)$ is completely monotonic and log-convex on $[0,\infty)$, so that there exists a non-negative measure $\tau$ supported on $[0,\infty)$ such that \begin{equation}\label{eq:g-compl-mon} \lambda_{\beta,\mu}(x)=\int\limits_{[0,\infty)}e^{-t/x}d\tau(t). \end{equation} \end{corol}
\begin{corol}\label{cr:g-twosided} Under hypotheses and notation of Theorem~\ref{th:gammadenom1} \begin{equation}\label{eq:g-twosided} \frac{\mu}{\beta+\mu}<\frac{g(\mu,x)g(1+\beta+\mu,x)}{g(1+\mu,x)g(\beta+\mu,x)}<1~\text{for all}~x\geq{0}. \end{equation} \end{corol} \textbf{Proof.} The estimate from above is a restatement of Theorem~\ref{th:gammadenom1} since it is equivalent to $\lambda_{\beta,\mu}(x)>0$.
The estimate from below is obvious for $\mu=-1$, since we then have a negative number on the left and a positive number on the right. The rest of the proof is divided into two cases: (I) $\mu\geq{0}$; and (II) $-1<\mu<0$, $\mu+\beta\geq{0}$ (recall that $\beta>0$ by the hypotheses of Theorem~\ref{th:gammadenom1}).
In case (I) the left-hand inequality in (\ref{eq:g-twosided}) follows from strict log-convexity of $\mu\to\Gamma(\mu)g(\mu,x)$ which, in view of $(\mu)_k=\Gamma(\mu+k)/\Gamma(\mu)$, has been proved in \cite[Theorem~3]{KS}.
In case (II) the left-hand inequality in (\ref{eq:g-twosided}) can be rewritten as $$ \Gamma(1+\mu)g(1+\mu,x)\Gamma(\beta+\mu)g(\beta+\mu,x)>\Gamma(\mu)g(\mu,x)\Gamma(1+\beta+\mu)g(1+\beta+\mu,x). $$ This inequality follows by observing that $\Gamma(\mu)g(\mu,x)=\sum_{n=0}^{\infty}g_nx^n(\mu)_n^{-1}$ and $$ \sum\limits_{k=0}^{m}g_{k}g_{m-k} \left\{\frac{1}{(1+\mu)_k(\beta+\mu)_{m-k}}-\frac{1}{(\mu)_k(1+\beta+\mu)_{m-k}}\right\}>0, $$ since $(\mu)_k<0$ for $k=1,2,\ldots,m$ and $(\beta+\mu)_{m}<(1+\beta+\mu)_{m}$ for $k=0$.~~$\square$ \begin{corol}\label{cr:lambda-below} Under hypotheses and notation of Theorem~\ref{th:gammadenom1} and for all $x\geq{0}$ $$ g(\mu+1,x)g(\mu+\beta,x)-g(\mu,x)g(\mu+\beta+1,x)\geq \frac{g_0^2\beta}{\Gamma(\mu+1)\Gamma(\mu+\beta+1)} $$ with equality only at $x=0$. \end{corol}
\par\refstepcounter{theremark}\textbf{Remark \arabic{theremark}.} Since we have only proved discrete Wright log-concavity in Theorem~\ref{th:gammadenom1} we cannot make any statements about the ``generalized Turanian'' $g(\mu,x)^2-g(\mu+\varepsilon,x)g(\mu-\varepsilon,x)$. We can assert, however, that the standard Turanian satisfies the following bounds similar to those in (\ref{eq:Turanian}) \begin{equation}\label{eq:Turanian1} \frac{g_0^2}{\mu\Gamma(\mu)^2}\leq g(\mu,x)^2-g(\mu+1,x)g(\mu-1,x)\leq \frac{1}{\mu}g(\mu,x)^2,~~~x\geq{0},~\mu\geq{0}. \end{equation}
\textbf{Conjecture~1.} Under hypotheses of Theorem~\ref{th:gammadenom1} the function $\mu\to{g(\mu,x)}$ is log-concave on $(0,\infty)$ for each fixed $x\geq{0}$. Moreover, the function $$ x\to g(\mu+\alpha,x)g(\mu+\beta,x)-g(\mu,x)g(\mu+\alpha+\beta,x) $$ has positive power series coefficients for $\mu\geq{-1}$ and $\mu+\alpha\geq{0}$, $\mu+\beta\geq{0}$, where $\alpha,\beta>0$.
The above conjecture is equivalent to the assertion that \begin{equation*} \sum\limits_{k=0}^{m}\left\{\frac{1}{\Gamma(k+\mu+\alpha)\Gamma(m-k+\mu+\beta)} -\frac{1}{\Gamma(k+\mu)\Gamma(m-k+\mu+\alpha+\beta)}\right\}>0 \end{equation*} which extends Corollary~\ref{cr:rec-gammas}.
\paragraph{4. Applications and relation to other work.} We start with the well-studied case of the modified Bessel function. Even for this classical case we can add to the current knowledge.
\textbf{Example~1}. The modified Bessel function is defined by the series \cite[formula~(4.12.2)]{AAR} $$ I_{\nu}(u)=\sum\limits_{n\geq0}\frac{(u/2)^{2n+\nu}}{n!\Gamma(n+\nu+1)}. $$ Hence, if we set $f_n=1$ for all $n$, $x=(u/2)^{2}$ and $\mu=\nu+1$ in Theorem~\ref{th:gammadenom} and use $\partial^2_\nu\log(u/2)^{\nu}=0$ we immediately conclude that $\nu\to{I_{\nu}(u)}$ is log-concave on $(-1,\infty)$ for each fixed $u>0$. Moreover, for any $\nu\geq{-1}$ and $\nu-\varepsilon\geq{-2}$ the ``generalized Turanian'' \begin{equation}\label{eq:I1} u\to\Delta_{\varepsilon}(\nu,u):=(I_{\nu}(u))^2-I_{\nu+\varepsilon}(u)I_{\nu-\varepsilon}(u) \end{equation} has positive power series coefficients, is multiplicatively convex and according to (\ref{eq:genTuranian}) satisfies \begin{equation}\label{eq:I2} (u/2)^{2\nu}A_\varepsilon(\nu+1)\leq\Delta_{\varepsilon}(\nu,u)\leq B_\varepsilon(\nu+1)(I_{\nu}(u))^2,~~u\geq{0}, \end{equation} where $A_\varepsilon$ and $B_\varepsilon$ are defined in (\ref{eq:AB}). In particular, for $\varepsilon=1$ we get for $\nu\geq{-1}$: \begin{equation}\label{eq:I3} \frac{(u/2)^{2\nu}}{(\nu+1)\Gamma(\nu+1)^2}\leq(I_{\nu}(u))^2-I_{\nu+1}(u)I_{\nu-1}(u) \leq\frac{1}{\nu+1}(I_{\nu}(u))^2. \end{equation} Moreover, the function $(u/2)^{-2\nu}\Delta_{\varepsilon}(\nu,u)$ admits representation (\ref{eq:compl-mon}). Proofs of various forms of log-concavity of $I_{\nu}(u)$ have a long history. The discrete log-concavity, $I_{\nu-1}(u)I_{\nu+1}(u)\leq{(I_{\nu}(u))^2}$, and the right-hand side of (\ref{eq:I3}) for $\nu\geq{0}$ were probably first demonstrated in 1951 by Thiruvenkatachar and Nanjundiah \cite{TN}. In fact, our method here is an extension of their approach, which could already have been used to prove the log-concavity of $\nu\to I_{\nu}(u)$. The discrete log-concavity was rediscovered by Amos \cite{Am} in 1974 and later by Joshi and Bissu \cite{JB} in 1991 with different proofs. Their paper also gives a proof of the right-hand side of (\ref{eq:I3}) for $\nu\geq{0}$. Finally, Lorch in \cite{Lo} and later Baricz in \cite{Baricz10H} showed the log-concavity of $\nu\to I_{\nu}(u)$ on $(-1,\infty)$ and demonstrated the positivity of the function (\ref{eq:I1}) for $\nu>-1/2$ and small $\varepsilon$. Lorch also conjectured that the positivity remains true for $\nu>-1$ and $\varepsilon\in(0,1]$. Baricz \cite{Baricz10} proved Lorch's conjecture for $\varepsilon=1$ and extended the right-hand side of (\ref{eq:I3}) to $\nu>-1$. Our results here not only confirm Lorch's conjecture but also refine and strengthen it by proving (\ref{eq:I2}) and the positivity of the power series coefficients of (\ref{eq:I1}). Various extensions and related results can also be found in \cite{Baricz10,Baricz12,Segura}. We note that many proofs use special properties of the modified Bessel functions, such as differential-recurrence relations, zeros, etc. Theorem~\ref{th:gammadenom} and its corollaries show that it is in fact the structure of the power series that is responsible for the bounds (\ref{eq:I2}) and (\ref{eq:I3}).
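For readers who wish to experiment, the bounds (\ref{eq:I3}) are easy to probe numerically. The following sketch is an illustration only (it is not part of the proof); it assumes a Python environment with SciPy, whose routine \texttt{iv} evaluates $I_{\nu}(u)$, and the sample points are arbitrary.
\begin{verbatim}
# Numerical sanity check of the bounds (eq:I3):
#   (u/2)^(2 nu)/((nu+1) Gamma(nu+1)^2)
#     <= I_nu(u)^2 - I_{nu+1}(u) I_{nu-1}(u) <= I_nu(u)^2/(nu+1)
# at a few sample values of nu > -1 and u > 0.
from scipy.special import iv, gamma

for nu in [-0.5, 0.0, 0.7, 2.0, 5.0]:
    for u in [0.1, 1.0, 5.0, 20.0]:
        turanian = iv(nu, u)**2 - iv(nu + 1, u) * iv(nu - 1, u)
        lower = (u / 2)**(2 * nu) / ((nu + 1) * gamma(nu + 1)**2)
        upper = iv(nu, u)**2 / (nu + 1)
        assert lower <= turanian <= upper, (nu, u)
print("bounds (eq:I3) hold at all sampled points")
\end{verbatim}
The lower bound is nearly attained for small $u$, since it is the limiting value of the Turanian as $u\to0_+$ after division by $(u/2)^{2\nu}$.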
\textbf{Example~2}. In his 1993 preprint \cite{Sitnik} Sitnik, among other things, proved the inequality $$ R_{n}^2(x)>R_{n-1}(x)R_{n+1}(x), ~~x>0,~~n=1,2,\ldots, $$ where $$ R_{n}(x)=e^{x}-\sum\limits_{k=0}^{n}\frac{x^k}{k!}=\frac{x^{n+1}}{(n+1)!}{_1F_1}(1;n+2;x) $$ is the exponential remainder. We can generalize this function as follows $$ R_{\eta,\nu}(x)=\frac{x^{\nu+1}}{\Gamma(\nu+2)}{_1F_1}(\eta;\nu+2;x) =x^{\nu+1}\sum\limits_{k=0}^{\infty}\frac{(\eta)_kx^k}{\Gamma(\nu+2+k)k!}. $$ It is straightforward to check that the sequence $g_k=(\eta)_k/k!$ is log-concave iff $\eta\geq{1}$. Then according to Theorem~\ref{th:gammadenom1} the function $\nu\to R_{\eta,\nu}(x)$ is discrete Wright log-concave on $(-2,\infty)$ for each fixed $\eta\geq{1}$, $x>0$ and $$ x\to R_{\eta,\nu+1}(x)R_{\eta,\nu+\beta}(x)-R_{\eta,\nu}(x)R_{\eta,\nu+\beta+1}(x) $$ has positive power series coefficients for $\nu\geq-3$, $\nu+\beta\geq-2$, $\beta>0$. Moreover, $$ \frac{x^{2\nu+2}}{(\nu+2)\Gamma(\nu+2)^2}\leq R_{\eta,\nu}(x)^2-R_{\eta,\nu+1}(x)R_{\eta,\nu-1}(x)\leq \frac{1}{\nu+2}R_{\eta,\nu}(x)^2,~~~x\geq{0},~\nu\geq{-2}. $$
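As in Example~1, the two-sided bound above can be spot-checked numerically. The sketch below is an illustration only (Python with SciPy assumed); it evaluates $R_{\eta,\nu}$ through SciPy's \texttt{hyp1f1} and \texttt{gamma} at a few arbitrary sample points.
\begin{verbatim}
# Spot check of the Turan-type bound for the generalized exponential remainder:
#   x^(2 nu + 2)/((nu+2) Gamma(nu+2)^2)
#     <= R_{eta,nu}(x)^2 - R_{eta,nu+1}(x) R_{eta,nu-1}(x)
#     <= R_{eta,nu}(x)^2/(nu+2)         for eta >= 1.
from scipy.special import hyp1f1, gamma

def R(eta, nu, x):
    return x**(nu + 1) / gamma(nu + 2) * hyp1f1(eta, nu + 2, x)

for eta in [1.0, 2.0, 3.5]:
    for nu in [0.0, 1.0, 2.5]:
        for x in [0.5, 1.0, 5.0]:
            turanian = R(eta, nu, x)**2 - R(eta, nu + 1, x) * R(eta, nu - 1, x)
            lower = x**(2 * nu + 2) / ((nu + 2) * gamma(nu + 2)**2)
            upper = R(eta, nu, x)**2 / (nu + 2)
            assert lower <= turanian <= upper, (eta, nu, x)
print("bounds for R hold at all sampled points")
\end{verbatim}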
\textbf{Example~3}. In addition to the results for the Kummer function presented in Example~2 above, we can derive bounds for its logarithmic derivative. The logarithmic derivative of the Kummer function plays an important role in some probabilistic applications; see \cite{SK}. Let us use the abbreviated notation $F(a;b;x)={_1F_1}(a;b;x)$. The following contiguous relations are easy to check (recall that $F'(a;b;x)=(a/b)F(a+1;b+1;x)$): \begin{equation}\label{eq:con1} aF(a;b;x)-aF(a+1;b;x)+xF'(a;b;x)=0, \end{equation} \begin{equation}\label{eq:con2} abF(a+1;b;x)=b(a+x)F(a;b;x)-(b-a)xF(a;b+1;x), \end{equation} \begin{equation}\label{eq:con3} b(b-1)(F(a;b-1;x)-F(a;b;x))-axF(a+1;b+1;x)=0. \end{equation} Dividing (\ref{eq:con2}) by $b$, substituting $aF(a+1;b;x)$ into (\ref{eq:con1}), simplifying and dividing by $\Gamma(b+1)$, we get $$ \frac{1}{\Gamma(b+1)}F(a;b+1;x)=\frac{F(a;b;x)-F'(a;b;x)}{(b-a)\Gamma(b)} $$ From (\ref{eq:con3}) we obtain: $$ \frac{1}{\Gamma(b-1)}F(a;b-1;x)=\frac{1}{\Gamma(b)}(xF'(a;b;x)+(b-1)F(a;b;x)) $$ Thus we get the following expression for the Turanian: \begin{multline*} \frac{F(a;b;x)^2}{\Gamma(b)^2}-\frac{F(a;b+1;x)F(a;b-1;x)}{\Gamma(b+1)\Gamma(b-1)} \\ =\frac{1}{\Gamma(b)^2(b-a)}\left\{-(a-1)F(a;b;x)^2+xF'(a;b;x)^2+(b-x-1)F(a;b;x)F'(a;b;x)\right\} \end{multline*} Hence, inequality (\ref{eq:Turanian1}) becomes ($x>0$, $b>0$, $a\geq{1}$): $$ \frac{1}{b\Gamma(b)^2}< \frac{-(a-1)F(a;b;x)^2+xF'(a;b;x)^2+(b-x-1)F(a;b;x)F'(a;b;x)}{\Gamma(b)^2(b-a)} <\frac{1}{b\Gamma(b)^2}F(a;b;x)^2, $$ which leads to ($F\equiv{F(a;b;x)}$, $F'\equiv{F'(a;b;x)}$): $$ 0<\frac{-(a-1)+x(F'/F)^2+(b-x-1)(F'/F)}{(b-a)}<\frac{1}{b}. $$ Solving these quadratic inequalities we arrive at $$ \frac{x+1-b+\sqrt{(x+1-b)^2+4x(a-1)}}{2x}<\frac{F'(a;b;x)}{F(a;b;x)} <\frac{x+1-b+\sqrt{(x+1-b)^2+4xa(b-1)/b}}{2x} $$ for $x>0$ and $b>a\geq{1}$. The upper and lower bounds interchange if $x>0$, $a\geq{1}$ and $0<b<a$: $$ \frac{x+1-b+\sqrt{(x+1-b)^2+4xa(b-1)/b}}{2x}<\frac{F'(a;b;x)}{F(a;b;x)} <\frac{x+1-b+\sqrt{(x+1-b)^2+4x(a-1)}}{2x}. $$ These bounds are numerically quite precise, especially when $a$ and $b$ are close.
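The two-sided bound for $F'/F$ is also easy to test numerically. The following sketch is an illustration only (Python with SciPy assumed); it uses $F'(a;b;x)=(a/b)F(a+1;b+1;x)$ together with SciPy's \texttt{hyp1f1}, at a few arbitrary sample points with $b>a\geq1$.
\begin{verbatim}
# Spot check of the bounds on the logarithmic derivative F'(a;b;x)/F(a;b;x)
# for sample values with b > a >= 1 and moderate x > 0.
import math
from scipy.special import hyp1f1

def log_derivative(a, b, x):
    # F'(a;b;x) = (a/b) F(a+1;b+1;x)
    return (a / b) * hyp1f1(a + 1, b + 1, x) / hyp1f1(a, b, x)

for a, b in [(1.0, 2.0), (1.5, 3.0), (2.0, 5.0)]:
    for x in [0.5, 1.0, 2.0, 5.0]:
        lower = (x + 1 - b + math.sqrt((x + 1 - b)**2 + 4*x*(a - 1))) / (2*x)
        upper = (x + 1 - b + math.sqrt((x + 1 - b)**2 + 4*x*a*(b - 1)/b)) / (2*x)
        r = log_derivative(a, b, x)
        assert lower < r < upper, (a, b, x)
print("logarithmic-derivative bounds hold at all sampled points")
\end{verbatim}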
\textbf{Example~4}. The generalized hypergeometric function is defined by the series \begin{equation}\label{eq:pFqdefined} {_{p}F_q}\left(\left.\!\!\begin{array}{c} a_1,a_2,\ldots,a_p\\
b_1,b_2,\ldots,b_q\end{array}\right|z\!\right):=\sum\limits_{n=0}^{\infty}\frac{(a_1)_n(a_2)_n\cdots(a_{p})_n}{(b_1)_n(b_2)_n\cdots(b_q)_nn!}z^n, \end{equation} where $(a)_0=1$, $(a)_n=a(a+1)\cdots(a+n-1)$, $n\geq{1}$, denotes the rising factorial. The series (\ref{eq:pFqdefined}) converges in the entire complex plane if $p\leq{q}$ and in the unit disk if $p=q+1$. In the latter case its sum can be extended analytically to the whole complex plane cut along the ray $[1,\infty)$ \cite[Chapter~2]{AAR}. Applications of Theorems~\ref{th:gammadenom} and \ref{th:gammadenom1} to generalized hypergeometric functions are largely based on the following lemma. \begin{lemma}\label{lm:HVVKS} Denote by $e_k(x_1,\ldots,x_q)$ the $k$-th elementary symmetric polynomial, $$ e_0(x_1,\ldots,x_q)=1,~~~e_k(x_1,\ldots,x_q)=\!\!\!\!\!\!\!\!\sum\limits_{1\leq{j_1}<{j_2}\cdots<{j_k}\leq{q}} \!\!\!\!\!\!\!\!x_{j_1}x_{j_2}\cdots{x_{j_k}},~~k\geq{1}. $$ Suppose $q\geq{1}$ and $0\leq{r}\leq{q}$ are integers, $a_i>0$, $i=1,\ldots,q-r$, $b_i>0$, $i=1,\ldots,q$, and \begin{equation}\label{eq:symmetric-chain1} \frac{e_q(b_1,\ldots,b_q)}{e_{q-r}(a_1,\ldots,a_{q-r})}\leq \frac{e_{q-1}(b_1,\ldots,b_q)}{e_{q-r-1}(a_1,\ldots,a_{q-r})}\leq\cdots \leq\frac{e_{r+1}(b_1,\ldots,b_q)}{e_{1}(a_1,\ldots,a_{q-r})}\leq e_{r}(b_1,\ldots,b_q). \end{equation} Then the sequence of hypergeometric terms \emph{(}if $r=q$ the numerator is $1$\emph{)}, $$ f_n=\frac{(a_1)_n\cdots(a_{q-r})_n}{(b_1)_n\cdots(b_q)_n}, $$ is log-concave, i.e. $f_{n-1}f_{n+1}\leq{f_n^2}$, $n=1,2,\ldots$. It is strictly log-concave unless $r=0$ and $a_i=b_i$, $i=1,\ldots,q$. \end{lemma} The proof of this lemma for $r=0$ can be found in \cite[Theorem~4.4]{HVV} and \cite[Lemma~2]{KS}. The latter reference also explains how to extend the proof to general $r$ (see the last paragraph of \cite{KS}). This leads immediately to the following statements. \begin{theo} Let $0\leq{p}\leq{q}$ be integers. Denote $$ f(\nu,x):=\frac{1}{\Gamma(\nu)}{_pF_{q+1}}(a_{1},\ldots,a_{p};\nu,b_{1},\ldots,b_{q};x) $$ and suppose that the parameters $(a_1,\ldots,a_p)$, $(b_1,\ldots,b_q)$ satisfy \emph{(\ref{eq:symmetric-chain1})}. Then the function $f(\nu,x)$ satisfies Theorem~\ref{th:gammadenom} and Corollaries~\ref{cr:compl-mon}-\ref{cr:phi-below}. \end{theo} \begin{theo} Let $0\leq{p}\leq{q+1}$ be integers. Denote $$ g(\nu,x):=\frac{1}{\Gamma(\nu)}{_{p}F_{q+1}}(a_{1},\ldots,a_{p};\nu,b_{1},\ldots,b_{q};x) $$ and suppose that the parameters $(a_1,\ldots,a_p)$, $(1,b_1,\ldots,b_q)$ satisfy \emph{(\ref{eq:symmetric-chain1})}. Then the function $g(\nu,x)$ satisfies Theorem~\ref{th:gammadenom1} and Corollaries~\ref{cr:g-compl-mon}-\ref{cr:lambda-below}. \end{theo}
\textbf{Example~5}. Our last example is non-hypergeometric. Consider the parameter derivative of the regularized Kummer function: $$ \frac{\partial}{\partial{a}}\frac{1}{\Gamma(b)}{_1F_1}(a;b;x)= \sum\limits_{k=0}^{\infty}\frac{(\psi(a+k)-\psi(a))(a)_k}{\Gamma(b+k)k!}x^k $$ (this function cannot be expressed as a hypergeometric function of one variable). If $a\geq{1}$ then the sequence $$ h_k=\frac{(\psi(a+k)-\psi(a))(a)_k}{k!} $$ is log-concave, since \begin{multline*} h_k^2-h_{k-1}h_{k+1}=\frac{(a)_{k-1}(a)_{k}}{(k-1)!k!}\times \\ \left\{\frac{a+k-1}{k}(\psi(a+k)-\psi(a))^2-\frac{a+k}{k+1}(\psi(a+k-1)-\psi(a))(\psi(a+k+1)-\psi(a))\right\}>0. \end{multline*} The last inequality holds because $y\to{\psi(a+y)-\psi(a)}$ is concave according to the Gauss formula~\cite[Theorem~1.6.1]{AAR} $$ (\psi(a+y)-\psi(a))''_y=\psi''(a+y)=-\int\limits_{0}^{\infty}\frac{t^2e^{-t(a+y)}}{1-e^{-t}}dt<0 $$ and hence strictly log-concave, while $(a+k-1)/k\geq(a+k)/(k+1)$ for $a\geq{1}$. By Theorem~\ref{th:gammadenom1} this leads to \begin{theo} Suppose $a\geq{1}$. Then the function $$ h(\nu,x):=\frac{\partial}{\partial{a}}\frac{{_1F_1}(a;\nu;x)}{\Gamma(\nu)} $$ satisfies Theorem~\ref{th:gammadenom1} and Corollaries~\ref{cr:g-compl-mon}-\ref{cr:lambda-below}. \end{theo}
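The log-concavity of the sequence $h_k$, which drives this example, can also be checked numerically. The sketch below is an illustration only (Python with SciPy assumed); it uses SciPy's \texttt{digamma} and the Pochhammer routine \texttt{poch}, at arbitrary sample values.
\begin{verbatim}
# Check numerically that h_k = (psi(a+k) - psi(a)) (a)_k / k! is log-concave
# for sample values a >= 1 (h_0 = 0, so the k = 1 inequality is automatic).
import math
from scipy.special import digamma, poch

def h(a, k):
    return (digamma(a + k) - digamma(a)) * poch(a, k) / math.factorial(k)

for a in [1.0, 1.5, 3.0, 10.0]:
    for k in range(1, 30):
        assert h(a, k)**2 >= h(a, k - 1) * h(a, k + 1), (a, k)
print("h_k is log-concave at all sampled (a, k)")
\end{verbatim}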
\paragraph{5. Acknowledgements.} We acknowledge the financial support of the Russian Basic Research Fund (grant 11-01-00038-a), Far Eastern Federal University and the Far Eastern Branch of the Russian Academy of Sciences.
\end{document}
\begin{document}
\begin{abstract} We prove a formula for the orbifold Chow ring of semi-projective toric DM stacks, generalizing the orbifold Chow ring formula of projective toric DM stacks by Borisov-Chen-Smith. We also consider a special kind of semi-projective toric DM stacks, the Lawrence toric DM stacks. We prove that the orbifold Chow ring of a Lawrence toric DM stack is isomorphic to the orbifold Chow ring of its associated hypertoric DM stack studied in \cite{JT}. \end{abstract}
\maketitle
\section{Introduction}
The main goal of this paper is to generalize the orbifold Chow ring formula of Borisov-Chen-Smith for projective toric DM stacks to the case of semi-projective toric DM stacks.
In the paper \cite {BCS}, Borisov, Chen, and Smith developed the theory of toric DM stacks using stacky fans. A stacky fan is a triple $\mathbf{\Sigma}=(N,\Sigma,\beta)$, where $N$ is a finitely generated abelian group, $\Sigma$ is a simplicial fan in the lattice $\overline{N}:=N\slash \text{torsion}$ and $\beta: \mathbb{Z}^n\rightarrow N$ is a map given by a collection of vectors $\{b_1,\cdots,b_n\}\subset N$ such that the images $\{\overline{b}_{1},\cdots,\overline{b}_{n}\}$ generate the fan $\Sigma$. A toric DM stack $\mathcal{X}(\mathbf{\Sigma})$ is defined using $\mathbf{\Sigma}$; it is a quotient stack whose coarse moduli space is the toric variety $X(\Sigma)$ corresponding to the simplicial fan $\Sigma$.
The construction of toric DM stacks was slightly generalized later in \cite{Jiang}, in which the notion of extended stacky fans was introduced. This new notion is based on that of stacky fans plus some extra data. Extended stacky fans yield toric DM stacks in the same way as stacky fans do. The main point is that extended stacky fans provide presentations of toric DM stacks not available from stacky fans.
When $X(\Sigma)$ is projective, it is found in \cite{BCS} that the orbifold Chow ring (or Chen-Ruan cohomology ring) of $\mathcal{X}(\mathbf{\Sigma})$ is isomorphic to a deformation of the group ring of $N$. We call a toric DM stack $\mathcal{X}(\mathbf{\Sigma})$ {\em semi-projective} if its coarse moduli space $X(\Sigma)$ is semi-projective. Hausel and Sturmfels \cite{HS} computed the Chow ring of semi-projective toric varieties. Their answer is also known as the ``Stanley--Reisner'' ring of a fan. Using their result, we prove a formula for the orbifold Chow ring of semi-projective toric DM stacks.
Consider an extended stacky fan $\mathbf{\Sigma}=(N,\Sigma,\beta)$, where $\Sigma$ is the simplicial fan of the semi-projective toric variety $X(\Sigma)$. Let $N_{tor}$ be the torsion subgroup of $N$, then $N=\overline{N}\oplus N_{tor}$. Let $N_{\Sigma}:=|\Sigma|\oplus N_{tor}$. Note that $|\Sigma|$ is convex, so $|\Sigma|\oplus N_{tor}$ is a subgroup of $N$. Define the deformed ring $\mathbb{Q}[N_{\Sigma}]:=\bigoplus_{c\in N_{\Sigma}}\mathbb{Q}y^{c}$ with the product structure given by \begin{equation}\label{productA} y^{c_{1}}\cdot y^{c_{2}}:=\begin{cases}y^{c_{1}+c_{2}}&\text{if there is a cone}~ \sigma\in\Sigma ~\text{such that}~ \overline{c}_{1}\in\sigma, \overline{c}_{2}\in\sigma\,;\\ 0&\text{otherwise}\,.\end{cases} \end{equation} Note that if $\mathcal{X}(\mathbf{\Sigma})$ is projective, then $N_{\Sigma}=N$ and $\mathbb{Q}[N_{\Sigma}]$ is the deformed ring $\mathbb{Q}[N]^{\mathbf{\Sigma}}$ in \cite{BCS}. Let $A^{*}_{orb}(\mathcal{X}(\mathbf{\Sigma}))$ denote the orbifold Chow ring of the toric DM stack $\mathcal{X}(\mathbf{\Sigma})$.
\begin{thm}\label{main} Assume that $\mathcal{X}(\mathbf{\Sigma})$ is semi-projective. There is an isomorphism of rings $$A^{*}_{orb}(\mathcal{X}(\mathbf{\Sigma}))\cong \frac{\mathbb{Q}[N_{\Sigma}]}{\{\sum_{i=1}^{n}e(b_{i})y^{b_{i}}:e\in N^{\star}\}}.$$ \end{thm}
The strategy of proving Theorem \ref{main} is as follows. We use a formula in \cite{HS} for the ordinary Chow ring of semi-projective toric varieties. We prove that each twisted sector is also a semi-projective toric DM stack. With this, we use a method similar to that in \cite{BCS} and \cite{Jiang} to prove the isomorphism as modules. The argument to show the isomorphism as rings is the same as that in \cite{BCS}, except that we only take elements in the support of the fan.
An interesting class of examples of semi-projective toric DM stacks is given by the Lawrence toric DM stacks. We discuss the properties of such stacks. We prove that each 3-twisted sector or twisted sector is again a Lawrence toric DM stack. This allows us to draw connections to the hypertoric DM stacks studied in \cite{JT}. We prove that the orbifold Chow ring of a Lawrence toric DM stack is isomorphic to the orbifold Chow ring of its associated hypertoric DM stack. This is an analog of Theorem 1.1 in \cite{HS} for orbifold Chow rings.
The rest of this text is organized as follows. In Section \ref{semi-pro} we define semi-projective toric DM stacks and prove Theorem 1.1. Results on Lawrence toric DM stacks are discussed in Section \ref{Lawrence}.
\subsection*{Conventions} In this paper we work entirely algebraically over the field of complex numbers. Chow rings and orbifold Chow rings are taken with rational coefficients. By an orbifold we mean a smooth Deligne-Mumford stack with trivial generic stabilizer.
For a simplicial fan $\Sigma$, we use $|\Sigma|$ to denote the set of lattice points in the support of $\Sigma$. Note that if $\Sigma$ is convex, $|\Sigma|$ is a free abelian subgroup of $N$. We write $N^{\star}$ for $Hom_{\mathbb{Z}}(N,\mathbb{Z})$ and $N\to \overline{N}$ for the natural map modding out torsion. We refer to \cite{BCS} for the construction of the Gale dual $\beta^{\vee}: \mathbb{Z}^{m}\to DG(\beta)$ of $\beta: \mathbb{Z}^{m}\to N$.
\subsection*{Acknowledgments} We thank Kai Behrend and Nicholas Proudfoot for valuable discussions.
\section{Semi-projective toric DM stacks and their orbifold Chow rings}\label{semi-pro} In this section we define semi-projective toric DM stacks and discuss their properties.
\subsection{Semi-projective toric DM stacks}
\begin{defn}[\cite{HS}] A toric variety $X$ is called {\em semi-projective} if the natural map $$\pi: X\rightarrow X_{0}=\text{Spec}(H^{0}(X,\mathcal{O}_{X}))$$ is projective and $X$ has at least one torus-fixed point. \end{defn}
\begin{defn}[\cite{Jiang}]\label{stackyfan} An extended stacky fan $\mathbf{\Sigma}$ is a triple $(N,\Sigma,\beta)$, where $N$ is a finitely generated abelian group, $\Sigma$ is a simplicial fan in $N_{\mathbb{R}}$ and $\beta: \mathbb{Z}^{m}\to N$ is the map determined by the elements $\{b_{1},\cdots,b_{m}\}$ in $N$ such that $\{\overline{b}_{1},\cdots,\overline{b}_{n}\}$ generate the simplicial fan $\Sigma$ (here $m\geq n$). \end{defn} Given an extended stacky fan $\mathbf{\Sigma}=(N,\Sigma,\beta)$, we have the following exact sequences: \begin{equation}\label{exact1} 0\longrightarrow DG(\beta)^{\star}\longrightarrow \mathbb{Z}^{m}\stackrel{\beta}{\longrightarrow} N\longrightarrow Coker(\beta)\longrightarrow 0, \end{equation} \begin{equation}\label{exact2} 0\longrightarrow N^{\star}\longrightarrow \mathbb{Z}^{m}\stackrel{\beta^{\vee}}{\longrightarrow} DG(\beta)\longrightarrow Coker(\beta^{\vee})\longrightarrow 0, \end{equation} where $\beta^{\vee}$ is the Gale dual of $\beta$ (see \cite{BCS}). Applying $Hom_\mathbb{Z}(-,\mathbb{C}^{*})$ to (\ref{exact2}) yields \begin{equation}\label{exact3} 1\longrightarrow \mu\longrightarrow G\stackrel{\alpha}{\longrightarrow} (\mathbb{C}^{*})^{m}\longrightarrow (\mathbb{C}^{*})^{d}\longrightarrow 1. \ \end{equation} The toric DM stack $\mathcal{X}(\mathbf{\Sigma})$ is the quotient stack $[Z/G]$, where $Z:=(\mathbb{C}^{n}\setminus V(J_{\Sigma}))\times (\mathbb{C}^{*})^{m-n}$, $J_{\Sigma}$ is the irrelevant ideal of the fan $\Sigma$ and $G$ acts on $Z$ through the map $\alpha$ in (\ref{exact3}). The coarse moduli space of $\mathcal{X}(\mathbf{\Sigma})$ is the simplicial toric variety $X(\Sigma)$ corresponding to the simplicial fan $\Sigma$, see \cite{BCS} and \cite{Jiang}.
\begin{defn}\label{semi-toric} A toric DM stack $\mathcal{X}(\mathbf{\Sigma})$ is {\em semi-projective} if the coarse moduli space $X(\Sigma)$ is semi-projective. \end{defn}
\begin{thm}\label{semitor} The following notions are equivalent: \begin{enumerate} \item A semi-projective toric DM stack $\mathcal{X}(\mathbf{\Sigma})$; \item A toric DM stack $\mathcal{X}(\mathbf{\Sigma})$ such that the simplicial fan $\Sigma$ is a regular triangulation of $\mathcal{B}=\{\overline{b}_{1},\cdots,\overline{b}_{n}\}$ which spans the lattice $\overline{N}$. \end{enumerate} \end{thm}
\begin{pf} Since the toric DM stack is semi-projective if its coarse moduli space is semi-projective, the theorem follows from results in \cite{HS}. \end{pf}
\subsection{The inertia stack} Let $\mathbf{\Sigma}$ be an extended stacky fan and $\sigma\in\Sigma$ a cone. Define $link(\sigma):=\{\tau:\sigma+\tau\in \Sigma, \sigma\cap \tau=0\}$. Let $\{\widetilde{\rho}_{1},\ldots,\widetilde{\rho}_{l}\}$ be the rays in $link(\sigma)$. Consider the quotient extended stacky fan $\mathbf{\Sigma/\sigma}=(N(\sigma),\Sigma/\sigma,\beta(\sigma))$, with $\beta(\sigma): \mathbb{Z}^{l+m-n}\to N(\sigma)$ given by the images of $b_{1},\ldots,b_{l}$ and $b_{n+1},\ldots,b_{m}$ under $N\to N(\sigma)$. By the construction of toric Deligne-Mumford stacks, if $\sigma$ is contained in a top dimensional cone in $\Sigma$, we have $\mathcal{X}(\mathbf{\Sigma/\sigma}):=[Z(\sigma)/G(\sigma)]$, where $Z(\sigma)=(\mathbb{A}^{l}\setminus\mathbb{V}(J_{\Sigma/\sigma}))\times (\mathbb{C}^{*})^{m-n}$ and $G(\sigma)=Hom_{\mathbb{Z}}(DG(\beta(\sigma)),\mathbb{C}^{*})$.
\begin{lem} If $\mathcal{X}(\mathbf{\Sigma})$ is semi-projective, so is $\mathcal{X}(\mathbf{\Sigma/\sigma})$. \end{lem}
\begin{pf} Semi-projectivity of the stack $\mathcal{X}(\mathbf{\Sigma})$ means that the simplicial fan $\Sigma$ is a fan coming from a regular triangulation of $\mathcal{B}=\{\overline{b}_{1},\cdots,\overline{b}_{n}\}$ which spans the lattice $\overline{N}$. Let $pos(\mathcal{B})$ be the convex polyhedral cone generated by $\mathcal{B}$. Then from \cite{HS}, the triangulation is supported on $pos(\mathcal{B})$ and is determined by a simple polyhedron whose normal fan is $\Sigma$. So $\sigma$ is contained in a top-dimensional cone $\tau$ in $\Sigma$. The image $\widetilde{\tau}$ of $\tau$ under quotient by $\sigma$ is a top-dimensional cone in the quotient fan $\Sigma/\sigma$. So the toric variety $X(\Sigma/\sigma)$ is semi-projective by Theorem \ref{semitor}, and the stack $\mathcal{X}(\mathbf{\Sigma/\sigma})$ is semi-projective by definition. \end{pf}
Recall from \cite{BCS} that for each top-dimensional cone $\sigma$ in $\Sigma$, $Box(\sigma)$ is defined to be the set of elements $v\in N$ such that $\overline{v}=\sum_{\rho_{i}\subseteq \sigma}a_{i}\overline{b}_{i}$ for some $0\leq a_{i}<1$. Elements in $Box(\sigma)$ are in one-to-one correspondence with elements in the finite group $N(\sigma)=N/N_{\sigma}$, where $N(\sigma)$ is a local group of the stack $\mathcal{X}(\mathbf{\Sigma})$. In fact, we write $\overline{v}=\sum_{\rho_{i}\subseteq \sigma(\overline{v})}a_{i}\overline{b}_{i}$ for some $0<a_{i}<1$, where $\sigma(\overline{v})$ is the minimal cone containing $\overline{v}$. Denote by $Box(\mathbf{\Sigma})$ the union of $Box(\sigma)$ for all top-dimensional cones $\sigma$.
\begin{prop} The $r$-inertia stack is given by \begin{equation}\label{inertia} \mathcal{I}_{r}\left(\mathcal{X}(\mathbf{\Sigma})\right)=\coprod_{(v_{1},\cdots,v_{r})\in Box(\mathbf{\Sigma})^{r}} \mathcal{X}(\mathbf{\Sigma/\sigma}(\overline{v}_{1},\cdots,\overline{v}_{r})), \end{equation} where $\sigma(\overline{v}_{1},\cdots,\overline{v}_{r})$ is the minimal cone in $\Sigma$ containing $\overline{v}_{1},\cdots,\overline{v}_{r}$. \end{prop} \begin{pf} Since $G$ is an abelian group, we have $$\mathcal{I}_{r}\left(\mathcal{X}(\mathbf{\Sigma})\right)=[(\coprod_{(v_{1},\cdots,v_{r})\in (G)^{r}} Z^{(v_{1},\cdots,v_{r})})\slash G],$$ where $Z^{(v_{1},\cdots,v_{r})}\subset Z$ is the subvariety fixed by $v_{1},\cdots,v_{r}$. Since $\sigma(\overline{v}_{1},\cdots,\overline{v}_{r})$ is contained in a top-dimensional cone in $\Sigma$, we can use the same method as in Lemma 4.6 and Proposition 4.7 of \cite{BCS} to prove that $[Z^{(v_{1},\cdots,v_{r})}\slash G]\cong \mathcal{X}(\mathbf{\Sigma/\sigma}(\overline{v}_{1},\cdots,\overline{v}_{r}))$. \end{pf}
Note that in (\ref{inertia}) each component is semi-projective.
\subsection{The orbifold Chow ring}\label{ring} In this section we compute the orbifold Chow ring of semi-projective toric DM stacks and prove Theorem \ref{main}.
\subsubsection{The module structure}
Let $\mathbf{\Sigma}=(N,\Sigma,\beta)$ be an extended stacky fan such that the toric DM stack $\mathcal{X}(\mathbf{\Sigma})$ is semi-projective. Since the fan $\Sigma$ is convex, $|\Sigma|$ is an abelian subgroup of $N$. We put $N_{\Sigma}:=|\Sigma|\oplus N_{tor}$, where $N_{tor}$ is the torsion subgroup of $N$. Define the deformed ring $\mathbb{Q}[N_{\Sigma}]:=\bigoplus_{c\in N_{\Sigma}}\mathbb{Q}y^{c}$ with the product structure given by (\ref{productA}).
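To make the product rule (\ref{productA}) concrete, the toy sketch below (not taken from the paper; Python with NumPy assumed) implements it for the torsion-free fan in $\mathbb{Z}^{2}$ with rays $(1,0)$, $(0,1)$, $(-1,1)$ and maximal cones spanned by the first two and the last two rays, a simple non-complete semi-projective example.
\begin{verbatim}
# Toy implementation of the product (productA) on Q[N_Sigma] for a fan in Z^2.
import numpy as np

RAYS = {1: (1, 0), 2: (0, 1), 3: (-1, 1)}
MAX_CONES = [(1, 2), (2, 3)]   # every cone of Sigma is a face of one of these

def in_cone(c, cone):
    # a point lies in a simplicial cone iff its coordinates with respect to
    # the generators are non-negative
    A = np.array([RAYS[i] for i in cone], dtype=float).T
    return bool(np.all(np.linalg.solve(A, np.array(c, dtype=float)) >= -1e-9))

def product(c1, c2):
    # y^{c1} . y^{c2} = y^{c1+c2} if some cone contains both points, else 0
    for cone in MAX_CONES:
        if in_cone(c1, cone) and in_cone(c2, cone):
            return ('y', tuple(int(t) for t in np.add(c1, c2)))
    return 0

print(product((1, 0), (0, 1)))    # common cone (1,2):  y^{(1,1)}
print(product((1, 0), (-1, 1)))   # no common cone:     0
\end{verbatim}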
Let $\{\rho_{1},\ldots,\rho_{n}\}$ be the rays of $\Sigma$, then each $\rho_{i}$ corresponds to a line bundle $L_{i}$ over the toric Deligne-Mumford stack $\mathcal{X}(\mathbf{\Sigma})$ given by the trivial line bundle $\mathbb{C}\times Z$ over $Z$ with the $G$ action on $\mathbb{C}$ given by the $i$-th component $\alpha_{i}$ of $\alpha: G\to (\mathbb{C}^{*})^{m}$ in (\ref{exact3}). The first Chern classes of the line bundles $L_{i}$, which we identify with $y^{b_{i}}$, generate the cohomology ring of the simplicial toric variety $X(\Sigma)$.
Let $S_{\mathbf{\Sigma}}$ be the quotient ring $\frac{\mathbb{Q}[y^{b_{1}},\cdots,y^{b_{n}}]}{I_{\Sigma}}$, where $I_{\Sigma}$ is the square-free ideal of the fan $\Sigma$ generated by the monomials $$\{y^{b_{i_{1}}}\cdots y^{b_{i_{k}}}: \overline{b}_{i_{1}},\cdots, \overline{b}_{i_{k}} \text{ do not generate a cone in }\Sigma\}.$$ It is clear that $S_{\mathbf{\Sigma}}$ is a subring of the deformed ring $\mathbb{Q}[N_{\Sigma}]$.
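For the same toy fan as in the previous sketch (rays indexed $1,2,3$ and maximal cones $\{1,2\}$, $\{2,3\}$), the generators of $I_{\Sigma}$ can be listed by enumerating minimal non-faces. This is an illustration only (Python assumed), not part of the argument.
\begin{verbatim}
# List the square-free generators of I_Sigma as minimal subsets of rays
# that do not span a cone of Sigma.
from itertools import combinations

RAY_INDICES = [1, 2, 3]
MAX_CONES = [frozenset({1, 2}), frozenset({2, 3})]

def spans_cone(subset):
    return any(subset <= cone for cone in MAX_CONES)

non_faces = [frozenset(s) for r in range(2, len(RAY_INDICES) + 1)
             for s in combinations(RAY_INDICES, r)
             if not spans_cone(frozenset(s))]
minimal = [set(s) for s in non_faces if not any(t < s for t in non_faces)]
print(minimal)   # [{1, 3}]  ->  I_Sigma = ( y^{b_1} y^{b_3} )
\end{verbatim}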
\begin{lem} Let $A^{*}(\mathcal{X}(\mathbf{\Sigma}))$ be the ordinary Chow ring of a semi-projective toric DM stack $\mathcal{X}(\mathbf{\Sigma})$. Then there is a ring isomorphism: $$A^{*}(\mathcal{X}(\mathbf{\Sigma}))\cong \frac{S_{\mathbf{\Sigma}}} {\{\sum_{i=1}^{n}e(b_{i})y^{b_{i}}: e\in N^{\star}\}}.$$ \end{lem}
\begin{pf} The lemma follows easily from the fact that the Chow ring of a DM stack is isomorphic to the Chow ring of its coarse moduli space (\cite{V}), together with Proposition 2.11 in \cite{HS}. \end{pf}
Now we study the module structure on $A_{orb}^{*}\left(\mathcal{X}(\mathbf{\Sigma})\right)$. Because $\Sigma$ is a simplicial fan, we have:
\begin{lem}\label{smalllemma} For any $c\in N_{\Sigma}$, let $\sigma$ be the minimal cone in $\Sigma$ containing $\overline{c}$. Then there is a unique expression $c=v+\sum_{\rho_{i}\subset\sigma}m_{i}b_{i}$ where $m_{i}\in \mathbb{Z}_{\geq 0}$, and $v\in Box(\sigma)$. \end{lem}
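In the torsion-free case the decomposition of Lemma~\ref{smalllemma} amounts to taking integer and fractional parts of the coordinates of $\overline{c}$ with respect to the generators of $\sigma$. The following sketch is an illustration only (Python with NumPy assumed) and treats a single simplicial cone in $\mathbb{Z}^{2}$.
\begin{verbatim}
# Decompose c = v + sum_i m_i b_i with m_i nonnegative integers and v in
# Box(sigma), for a simplicial cone with generators b_i (torsion-free toy case).
import numpy as np

def box_decomposition(c, generators):
    B = np.array(generators, dtype=float).T          # columns are the b_i
    x = np.linalg.solve(B, np.array(c, dtype=float)) # c = sum_i x_i b_i, x_i >= 0
    m = np.floor(x).astype(int)                      # integer parts
    v = np.array(c) - np.array(generators).T @ m     # box (fractional) part
    return v, m

# cone spanned by b_1 = (1, 0), b_2 = (1, 2); the point c = (2, 1) decomposes
# as c = v + 1*b_1 with box element v = (1, 1) = (1/2) b_1 + (1/2) b_2
v, m = box_decomposition((2, 1), [(1, 0), (1, 2)])
print(v, m)   # [1 1] [1 0]
\end{verbatim}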
\begin{prop}\label{vectorspace} Let $\mathcal{X}(\mathbf{\Sigma})$ be a semi-projective toric DM stack associated to an extended stacky fan $\mathbf{\Sigma}$. We have an isomorphism of $A^{*}(\mathcal{X}(\mathbf{\Sigma}))$-modules: $$\bigoplus_{v\in Box(\mathbf{\Sigma})}A^{*}\left(\mathcal{X}(\mathbf{\Sigma/\sigma}(\overline{v}))\right)[deg(y^{v})]\cong \frac{\mathbb{Q}[N_{\Sigma}]}{\{\sum_{i=1}^{n}e(b_{i})y^{b_{i}}: e\in N^{\star}\}}.$$ \end{prop}
\begin{pf} From the definition of $\mathbb{Q}[N_{\Sigma}]$ and Lemma \ref{smalllemma}, we see that $\mathbb{Q}[N_{\Sigma}]=\bigoplus_{v\in Box(\mathbf{\Sigma})}y^{v}\cdot S_{\mathbf{\Sigma}}$. The rest is similar to the proof of Proposition 4.7 in \cite{Jiang}; we leave it to the reader. \end{pf}
\subsubsection{The Chen-Ruan product structure}
The orbifold cup product on a DM stack $\mathcal{X}$ is defined using genus zero, degree zero 3-pointed orbifold Gromov-Witten invariants on $\mathcal{X}$. The relevant moduli space is the disjoint union of all 3-twisted sectors (i.e. the double inertia stack). By (\ref{inertia}), the 3-twisted sectors of a semi-projective toric DM stack $\mathcal{X}(\mathbf{\Sigma})$ are \begin{equation}\label{3-sector} \coprod_{(v_{1},v_{2},v_{3})\in Box(\mathbf{\Sigma})^{3}, v_{1}v_{2}v_{3}=1} ~\mathcal{X}(\mathbf{\Sigma/\sigma}(\overline{v}_{1},\overline{v}_{2},\overline{v}_{3})). \end{equation}
Let $ev_{i}: \mathcal{X}(\mathbf{\Sigma/\sigma}(\overline{v}_{1},\overline{v}_{2},\overline{v}_{3}))\to \mathcal{X}(\mathbf{\Sigma/\sigma}(\overline{v}_{i}))$ be the evaluation maps. The obstruction bundle (see \cite{CR2}) $Ob_{(v_{1},v_{2},v_{3})}$ over the 3-twisted sector $\mathcal{X}(\mathbf{\Sigma/\sigma}(\overline{v}_{1},\overline{v}_{2},\overline{v}_{3}))$ is defined by \begin{equation}\label{obstruction} Ob_{(v_{1},v_{2},v_{3})}:=\left(e^{*}T\left(\mathcal{X}(\mathbf{\Sigma})\right)\otimes H^{1}(C,\mathcal{O}_{C})\right)^{H}, \end{equation} where $e: \mathcal{X}(\mathbf{\Sigma/\sigma}(\overline{v}_{1},\overline{v}_{2},\overline{v}_{3}))\to \mathcal{X}(\mathbf{\Sigma})$ is the embedding, $C\to \mathbb{P}^{1}$ is the $H$-covering branched over three marked points $\{0,1,\infty\}\subset \mathbb{P}^{1}$, and $H$ is the group generated by $v_{1},v_{2},v_{3}$.
A general result in \cite{CH} and \cite{JKK} about the obstruction bundle implies the following.
\begin{prop}\label{obstructionbdle} Let $\mathcal{X}(\mathbf{\Sigma/\sigma}(\overline{v}_{1},\overline{v}_{2},\overline{v}_{3}))$ be a 3-twisted sector of the stack $\mathcal{X}(\mathbf{\Sigma})$. Suppose $v_{1}+v_{2}+v_{3}=\sum_{\rho_{i}\subset \sigma(\overline{v}_{1},\overline{v}_{2},\overline{v}_{3})}a_{i}b_{i}$, $a_{i}=1$ or $2$. Then the Euler class of the obstruction bundle $Ob_{(v_{1},v_{2},v_{3})}$ on $\mathcal{X}(\mathbf{\Sigma/\sigma}(\overline{v}_{1},\overline{v}_{2},\overline{v}_{3}))$ is
$$\prod_{a_{i}=2}c_{1}(L_{i})|_{\mathcal{X}(\mathbf{\Sigma/\sigma}(\overline{v}_{1},\overline{v}_{2},\overline{v}_{3}))},$$ where $L_{i}$ is the line bundle over $\mathcal{X}(\mathbf{\Sigma})$ corresponding to the ray $\rho_{i}$. \end{prop}
Let $v\in Box(\mathbf{\Sigma})$, say $v\in N(\sigma)$ for some top-dimensional cone $\sigma$. Let $\check{v}\in Box(\mathbf{\Sigma})$ be the inverse of $v$ as an element in the group $N(\sigma)$. Equivalently, if $v=\sum_{\rho_{i}\subseteq \sigma(\overline{v})}\alpha_{i}b_{i}$ for $0<\alpha_{i}<1$, then $\check{v}=\sum_{\rho_{i}\subseteq \sigma(\overline{v})}(1-\alpha_{i})b_{i}$. Then for $\alpha_{1},\alpha_{2}\in A^{*}_{orb}(\mathcal{X}(\mathbf{\Sigma}))$, the orbifold cup product is defined by \begin{equation}\label{cupproduct} \alpha_{1}\cup_{orb}\alpha_{2}=\widehat{ev}_{3*}(ev_{1}^{*}\alpha_{1}\cup ev_{2}^{*}\alpha_{2}\cup e(Ob_{(v_{1},v_{2},v_{3})})), \end{equation} where $\widehat{ev}_{3}=I\circ ev_{3}$, and $I: \mathcal{I}\mathcal{X}(\mathbf{\Sigma}) \rightarrow \mathcal{I}\mathcal{X}(\mathbf{\Sigma})$ is the natural map given by $(x,g)\mapsto (x,g^{-1})$.
\subsubsection*{Proof of Theorem 1.1} By Proposition \ref{vectorspace}, it remains to consider the cup product. In this case, for any $v_{1},v_{2}\in Box(\mathbf{\Sigma})$, we also have $$v_{1}+v_{2}=\check{v}_{3}+\sum_{a_{i}=2}b_{i}+\sum_{i\in J}b_{i},$$ where $J$ is the set of indices $i$ such that $\rho_{i}$ belongs to $\sigma(\overline{v}_{1},\overline{v}_{2})$ but not to $\sigma(\overline{v}_{3})$. Then the proof is the same as the proof in \cite{BCS}. We omit the details.
\section{Lawrence Toric DM stacks}\label{Lawrence}
In this section we study a special type of semi-projective toric DM stacks called the Lawrence toric DM stacks. Their orbifold Chow rings are shown to be isomorphic to the orbifold Chow rings of their associated hypertoric DM stacks studied in \cite{JT}.
\subsection{Stacky hyperplane arrangements} Let $N$, $\{b_{1},\cdots,b_{m}\}\in N$, $\beta:\mathbb{Z}^{m}\to N$, and $\{\overline{b}_{1},\cdots,\overline{b}_{m}\}\subset \overline{N}$ be as in Definition \ref{stackyfan}. We assume that $\{b_{1},\cdots,b_{m}\}\in N$ are nontorsion integral vectors. We still have the exact sequences (\ref{exact1}) and (\ref{exact2}). The Gale dual map $\beta^{\vee}$ of $\beta$ is given by a collection of integral vectors $\beta^{\vee}=(a_1,\cdots,a_m)$. Choose a generic element $\theta\in DG(\beta)$ and let $\psi:=(r_{1},\cdots,r_{m})$ be a lifting of $\theta$ in $\mathbb{Z}^{m}$ such that $\theta=-\beta^{\vee}\psi$. Note that $\theta$ is generic if and only if it is not in any hyperplane of the configuration determined by $\beta^{\vee}$ in $DG(\beta)_{\mathbb{R}}$. Associated to $\theta$ there is a hyperplane arrangement $\mathcal{H}=\{H_{1},\cdots,H_{m}\}$ defined as follows: let $H_{i}$ be the hyperplane \begin{equation}\label{arrangement}
H_{i}:=\{v\in M_{\mathbb{R}}|<\overline{b}_{i},v>+r_{i}=0\}\subset M_{\mathbb{R}}. \end{equation} This determines a hyperplane arrangement in $M_{\mathbb{R}}$, up to translation. It is well-known that hyperplane arrangements determine the topology of hypertoric varieties (\cite{BD}). We call $\mathcal{A}:=(N,\beta,\theta)$ a {\em stacky hyperplane arrangement}.
The toric variety $X(\Sigma)$ is defined by the weighted polytope $\mathbf{\Gamma}:=\bigcap_{i=1}^{m}F_{i}$, where $F_{i}=\{v\in M_{\mathbb{R}}|<b_{i},v>+r_{i}\geq 0\}$. Suppose that $\mathbf{\Gamma}$ is bounded; then the fan $\Sigma$ is the normal fan of $\mathbf{\Gamma}$ in $\overline{N}_{\mathbb{R}}\cong\mathbb{R}^{d}$, with one-dimensional rays generated by $\overline{b}_{1},\cdots,\overline{b}_{n}$. By reordering, we may assume that $H_{1},\cdots,H_{n}$ are the hyperplanes that bound the polytope $\mathbf{\Gamma}$, and $H_{n+1},\cdots,H_{m}$ are the other hyperplanes. Then we have an extended stacky fan $\mathbf{\Sigma}=(N,\Sigma,\beta)$ as in Definition \ref{stackyfan}, with $\Sigma$ the normal fan of $\mathbf{\Gamma}$, $\beta:\mathbb{Z}^{m}\to N$ given by $\{b_{1},\cdots,b_{n},b_{n+1},\cdots,b_{m}\}\subset N$, and $\{b_{n+1},\cdots,b_{m}\}$ the extra data. We define the hypertoric DM stack $\mathcal{M}(\mathcal{A})$ using this $\mathcal{A}$; see \cite{JT} for more details.
\subsection{Lawrence toric DM stacks} Applying Gale dual to the map \begin{equation}\label{betaL} \mathbb{Z}^{m}\oplus \mathbb{Z}^{m}\to DG(\beta), \end{equation} given by $(\beta^{\vee},-\beta^{\vee})$, we obtain $$\beta_{L}: \mathbb{Z}^{m}\oplus \mathbb{Z}^{m}\longrightarrow N_{L},$$ which is given by integral vectors $\{b_{L,1},\cdots,b_{L,m},b'_{L,1},\cdots,b'_{L,m}\}$ in $N_{L}$. The natural images $\{\overline{b}_{L,1},\cdots,\overline{b}_{L,m},\overline{b}'_{L,1},\cdots,\overline{b}'_{L,m}\}\subset \overline{N}_{L}$ are called the Lawrence lifting of $\{\overline{b}_{1},\cdots,\overline{b}_{m}\}\subset \overline{N}$.
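In the simplest torsion-free situation the Gale dual and the Lawrence lifting can be computed from integer kernels: in that case $\beta^{\vee}$ is represented by a matrix whose rows span the integer kernel of $\beta$, and $\beta_{L}$ is obtained in the same way from $(\beta^{\vee},-\beta^{\vee})$. The sketch below is an illustration only (Python with SymPy assumed, and it ignores the torsion phenomena handled in the paper); it treats $\beta:\mathbb{Z}^{2}\to\mathbb{Z}$ given by $b_{1}=b_{2}=1$.
\begin{verbatim}
# Gale dual and Lawrence lifting in a torsion-free toy case.  SymPy's
# nullspace happens to return an integral basis for these small matrices.
from sympy import Matrix

B = Matrix([[1, 1]])                    # beta: Z^2 -> Z, b_1 = b_2 = 1
A = Matrix.hstack(*B.nullspace()).T     # Gale dual beta^vee (one row)
print("beta^vee      :", A.tolist())

AL = Matrix.hstack(A, -A)               # (beta^vee, -beta^vee): Z^4 -> DG(beta)
BL = Matrix.hstack(*AL.nullspace()).T   # beta_L: Z^4 -> N_L (rank 3 here)
print("beta_L (rows) :", BL.tolist())
print("check AL*BL^T :", (AL * BL.T).tolist())   # zero matrix, as it must be
\end{verbatim}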
Associated to the generic element $\theta$, let $\overline{\theta}$ be the natural image under the map $DG(\beta)\rightarrow \overline{DG(\beta)}$. Then the map $\overline{\beta}^{\vee}: \mathbb{Z}^{m}\rightarrow \overline{DG(\beta)}$ is given by $\overline{\beta}^{\vee}=(\overline{a}_1,\cdots,\overline{a}_m)$. For any column basis of the form $C=\{\overline{a}_{i_{1}},\cdots,\overline{a}_{i_{m-d}}\}$, there exist unique $\lambda_{1},\cdots,\lambda_{m-d}$ such that $$a_{i_{1}}\lambda_{1}+\cdots+a_{i_{m-d}}\lambda_{m-d}=\overline{\theta}.$$ Let $\mathbb{C}[z_{1},\cdots,z_{m},w_{1},\cdots,w_{m}]$ be the coordinate ring of $\mathbb{C}^{2m}$. Let
$\sigma(C,\theta)=\{\overline{b}_{L,i_{j}}~|~\lambda_{j}>0\}\sqcup\{\overline{b}'_{L,i_{j}}~|~\lambda_{j}<0\},$ and
$C(\theta)=\{z_{i_{j}}~|\lambda_{j}>0\}\sqcup\{w_{i_{j}}|~\lambda_{j}<0\}$. We set \begin{equation}\label{irrelevant}
\mathbf{\mathcal{I}}_{\theta}:=<\prod C(\theta)|~C~\text{is a column basis of}~\overline{\beta}^{\vee}>, \end{equation} and \begin{equation}\label{fan} \Sigma_{\theta}:=\{\overline{\sigma}(C,\theta):~C~\text{is a column basis of}~\overline{\beta}^{\vee}\}, \end{equation} where $\overline{\sigma}(C,\theta)= \{\overline{b}_{L,1},\cdots,\overline{b}_{L,m},\overline{b}'_{L,1},\cdots,\overline{b}'_{L,m}\}\setminus\sigma(C,\theta)$ is the complement of $\sigma(C,\theta)$; these complements generate the maximal cones of $\Sigma_{\theta}$. According to \cite{HS}, $\Sigma_{\theta}$ is the fan of the Lawrence toric variety $X(\Sigma_{\theta})$ corresponding to $\theta$ in the lattice $\overline{N}_{L}$. The ideal $\mathcal{I}_{\theta}$ is the irrelevant ideal of the fan $\Sigma_{\theta}$. Then we have the Lawrence stacky fan $\mathbf{\Sigma_{\theta}}=(N_{L},\Sigma_{\theta},\beta_{L})$ introduced in \cite{JT}.
Applying the functor $Hom_\mathbb{Z}(-,\mathbb{C}^{*})$ to (\ref{betaL}), we get \begin{equation}\label{Lawrencemap} \alpha_{h}: G\rightarrow (\mathbb{C}^{*})^{2m}. \end{equation} So $G$ acts on $\mathbb{C}^{2m}$ through $\alpha_{h}$. From Section 2, $\mathcal{X}(\mathbf{\Sigma_{\theta}})=[(\mathbb{C}^{2m}\setminus V(\mathcal{I}_{\theta}))\slash G]$, whose coarse moduli space is the Lawrence toric variety $X(\Sigma_{\theta})=(\mathbb{C}^{2m}\setminus V(\mathcal{I}_{\theta}))\slash G$. Let $Y\subset \mathbb{C}^{2m}\setminus V(\mathcal{I}_{\theta})$ be the subvariety defined by the ideal: \begin{equation}\label{ideal1}
I_{\beta^{\vee}}:=<\sum_{i=1}^{m}(\beta^{\vee})^{\star}(x)_{i}z_{i}w_{i}|\forall x\in DG(\beta)^{\star}>, \end{equation} where $(\beta^{\vee})^{\star}: DG(\beta)^{\star}\rightarrow \mathbb{Z}^{m}$ is the dual map of $\beta^{\vee}$ and $(\beta^{\vee})^{\star}(x)_{i}$ is the $i$-th component of the vector $(\beta^{\vee})^{\star}(x)$. From \cite{JT}, the hypertoric DM stack is $\mathcal{M}(\mathcal{A})=[Y/G]$, whose coarse moduli space is the hypertoric variety $Y(\beta^{\vee},\theta)=Y\slash G$.
\begin{defn}(\cite{JT}) The Lawrence toric DM stack is the toric DM stack $\mathcal{X}(\mathbf{\Sigma_{\theta}})$ corresponding to the Lawrence stacky fan $\mathbf{\Sigma_{\theta}}$. \end{defn}
By \cite{HS}, $X(\Sigma_{\theta})$ is semi-projective. So the Lawrence toric DM stack $\mathcal{X}(\mathbf{\Sigma_{\theta}})$ is semi-projective by definition.
\subsection{Comparison of inertia stacks} Next we compare the orbifold Chow ring of the hypertoric DM stack and the orbifold Chow ring of the Lawrence toric DM stack. First we compare the inertia stacks. Consider the map $\beta: \mathbb{Z}^{m}\rightarrow N$ given by the vectors $\{b_{1},\cdots,b_m\}$, and let $Cone(\beta)$ be the partially ordered finite set of cones generated by $\overline{b}_{1},\cdots,\overline{b}_{m}$. The partial order is defined by: $\sigma\prec\tau$ if $\sigma$ is a face of $\tau$, and we have the minimum element $\hat{0}$ which is the cone consisting of the origin. Let $Cone(\overline{N})$ be the set of all convex polyhedral cones in the lattice $\overline{N}$. Then we have a map $$C: Cone(\beta)\longrightarrow Cone(\overline{N}),$$ such that for any $\sigma\in Cone(\beta)$, $C(\sigma)$ is the corresponding cone in $\overline{N}$. Then $\Delta_{\mathbf{\beta}}:=(C,Cone(\beta))$ is a simplicial {\em multi-fan} in the sense of \cite{HM}.
For the multi-fan $\Delta_{\mathbf{\beta}}$, let $Box(\Delta_{\mathbf{\beta}})$ be the set of pairs $(v,\sigma)$, where $\sigma$ is a cone in $\Delta_{\mathbf{\beta}}$, $v\in N$ such that $\overline{v}=\sum_{\rho_{i}\subseteq \sigma}\alpha_{i}b_{i}$ for $0<\alpha_{i}<1$. (Note that $\sigma$ is the minimal cone in $\Delta_{\mathbf{\beta}}$ satisfying the above condition.) From \cite{JT}, an element $(v,\sigma)\in Box(\Delta_{\mathbf{\beta}})$ gives a component of the inertia stack $\mathcal{I}(\mathcal{M}(\mathcal{A}))$. Also consider the set $Box(\mathbf{\Sigma_{\theta}})$ associated to the stacky fan $\mathbf{\Sigma_{\theta}}$, see Section 2.2 for its definition. An element $v\in Box(\mathbf{\Sigma_{\theta}})$ gives a component of the inertia stack $\mathcal{I}(\mathcal{X}(\mathbf{\Sigma_{\theta}}))$.
By the Lawrence lifting property, a vector $\overline{b}_{i}$ in $\overline{N}$ lifts to two vectors $\overline{b}_{L,i},\overline{b}'_{L,i}$ in $\overline{N}_{L}$. Let $\{\overline{b}_{L,i_{1}},\cdots,\overline{b}_{L,i_{k}}, \overline{b}'_{L,i_{1}},\cdots,\overline{b}'_{L,i_{k}}\}$ be the Lawrence lifting of $\{\overline{b}_{i_{1}},\cdots,\overline{b}_{i_{k}}\}$.
\begin{lem}\label{conemulti} $\{\overline{b}_{i_{1}},\cdots,\overline{b}_{i_{k}}\}$ generate a cone $\sigma$ in $\Delta_{\mathbf{\beta}}$ if and only if $\{\overline{b}_{L,i_{1}},\cdots,\overline{b}_{L,i_{k}}, \overline{b}'_{L,i_{1}},\cdots,\overline{b}'_{L,i_{k}}\}$ generate a cone $\sigma_{\theta}$ in $\Sigma_{\theta}$. \end{lem}
\begin{pf} Suppose $\sigma$ is a cone in $\Delta_{\mathbf{\beta}}$ generated by $\{\overline{b}_{i_{1}},\cdots,\overline{b}_{i_{k}}\}$; then it is contained in a top-dimensional cone $\tau$. Assume that $\tau$ is generated by $\{\overline{b}_{i_{1}},\cdots,\overline{b}_{i_{k}}, \overline{b}_{i_{k+1}},\cdots,\overline{b}_{i_{d}}\}$. Let $C$ be the complement $\{\overline{b}_{1},\cdots,\overline{b}_{m}\}\setminus \tau$. Then $C$ corresponds to a column basis of $\overline{\beta}^{\vee}$ in the map $\overline{\beta}^{\vee}: \mathbb{Z}^{m}\rightarrow \overline{DG(\beta)}$. By the definition of $\Sigma_{\theta}$ in (\ref{fan}), $C$ corresponds to a maximal cone $\tau_{\theta}$ in $\Sigma_{\theta}$ which contains the rays generated by $\{\overline{b}_{L,i_{1}},\cdots,\overline{b}_{L,i_{k}}, \overline{b}'_{L,i_{1}},\cdots,\overline{b}'_{L,i_{k}}\}$. Thus these rays generate a cone $\sigma_{\theta}$ in $\Sigma_{\theta}$.
Conversely, suppose $\sigma_{\theta}$ is a cone in $\Sigma_{\theta}$ generated by $\{\overline{b}_{L,i_{1}},\cdots,\overline{b}_{L,i_{k}}, \overline{b}'_{L,i_{1}},\cdots,\overline{b}'_{L,i_{k}}\}$. By a similar argument, $\{\overline{b}_{i_{1}},\cdots,\overline{b}_{i_{k}}\}$ must be contained in a top-dimensional cone of $\Delta_{\mathbf{\beta}}$. So $\{\overline{b}_{i_{1}},\cdots,\overline{b}_{i_{k}}\}$ generate a cone $\sigma$ in $\Delta_{\mathbf{\beta}}$. \end{pf}
\begin{lem}\label{box} There is a one-to-one correspondence between the elements in $Box(\mathbf{\Sigma_{\theta}})$ and the elements in $Box(\Delta_{\mathbf{\beta}})$. Moreover, their degree shifting numbers coincide. \end{lem}
\begin{pf} First the torsion elements in $Box(\mathbf{\Sigma_{\theta}})$ and $Box(\Delta_{\mathbf{\beta}})$ are both
isomorphic to $\mu=ker(\alpha)=ker(\alpha_{h})$ in (\ref{exact3}) and (\ref{Lawrencemap}). Let $(v,\sigma)\in Box(\Delta_{\mathbf{\beta}})$ with $\overline{v}=\sum_{\rho_i\subseteq \sigma}\alpha_{i}\overline{b}_{i}$. Then $v$ may be identified with an element (which we ambiguously denote by) $v\in G:=Hom_\mathbb{Z}(DG(\beta),\mathbb{C}^*)$. Certainly $v$ fixes a point in $\mathbb{C}^{m}$. Consider the map $\alpha$ in (\ref{exact3}), put $\alpha(v)=(\alpha^{1}(v),\cdots,\alpha^{m}(v))$. Then $\alpha^{i}(v)\neq 1$ if $\rho_i\subseteq\sigma$, and $\alpha^{i}(v)= 1$ otherwise. By Lemma \ref{conemulti}, let $\{\overline{b}_{L,i},\overline{b}'_{L,i}:i=1,\cdots,|\sigma|\}$ be the Lawrence lifting of $\{\overline{b}_{i}\}_{\rho_i\subseteq\sigma}$. Since the action of $v$ on $\mathbb{C}^{2m}$ is given by $(v,v^{-1})$, $v$ fixes a point in $\mathbb{C}^{2m}$ and yields an element $v_\theta$ in $Box(\mathbf{\Sigma_\theta})$. From the map (\ref{Lawrencemap}), let \begin{equation}\label{vtheta} \alpha_{h}(v_{\theta})=(\alpha^{1}_{h}(v_{\theta}),\cdots,\alpha^{m}_{h}(v_{\theta}), \alpha^{m+1}_{h}(v_{\theta}),\cdots,\alpha^{2m}_{h}(v_{\theta})). \end{equation}
Then $\alpha_{h}^{i}(v_{\theta})\neq 1$ and $\alpha_{h}^{i+m}(v_{\theta})\neq 1$ if $\rho_i\subseteq\sigma$; $\alpha_{h}^{i}(v_{\theta})= \alpha_{h}^{i+m}(v_{\theta})= 1$ otherwise. So $\sigma_{\theta}(\overline{v}_{\theta})=\{\overline{b}_{L,i},\overline{b}'_{L,i}:i=1,\cdots,|\sigma|\}$ is the minimal cone in $\Sigma_{\theta}$ containing $\overline{v}_{\theta}$. Furthermore, $\overline{v}_{\theta}=\sum_{\rho_i\subseteq \sigma}\alpha_{i}\overline{b}_{L,i}+ \sum_{\rho_i\subseteq \sigma}(1-\alpha_{i})\overline{b}'_{L,i}$.
Conversely, given an element $v_{\theta}\in Box(\mathbf{\Sigma_{\theta}})$, let $\sigma_{\theta}(\overline{v}_{\theta})$ be the minimal cone in $\Sigma_{\theta}$ containing $\overline{v}_{\theta}$. Then from the action of $G$ on $\mathbb{C}^{2m}$ and (\ref{vtheta}), we have $\alpha^{i}_{h}(v_{\theta})=(\alpha^{i+m}_{h}(v_{\theta}))^{-1}$. If $\alpha^{i}_{h}(v_{\theta})\neq 1$, then $\alpha^{i+m}_{h}(v_{\theta})\neq 1$, which means that $\overline{b}_{L,i}, \overline{b}'_{L,i}\in \sigma_{\theta}(\overline{v}_{\theta})$. The cone $\sigma_{\theta}(\overline{v}_{\theta})$ is the cone in $\Sigma_{\theta}$ spanned by the rays $\overline{b}_{L,i}, \overline{b}'_{L,i}$ satisfying this condition. Then $\overline{v}_{\theta}=\sum_{i}(\alpha_{i}\overline{b}_{L,i}+(1-\alpha_{i})\overline{b}^{'}_{L,i})$. By Lemma \ref{conemulti}, $\sigma_{\theta}(\overline{v}_{\theta})$ is the Lawrence lifting of a cone $\sigma$ generated by the $\{\overline{b}_{i}\}$'s in $\Delta_{\mathbf{\beta}}$. Let $v=\sum_{\rho_i\subseteq\sigma}\alpha_{i}b_{i}$. So $v_{\theta}$ also determines an element $(v,\sigma)\in Box(\Delta_{\mathbf{\beta}})$. \end{pf}
For $(v_{1},\sigma_{1}),(v_{2},\sigma_{2}),(v_{3},\sigma_{3})\in Box(\Delta_{\mathbf{\beta}})$, let $\sigma(\overline{v}_{1}, \overline{v}_{2},\overline{v}_{3})$ be the minimal cone containing $\overline{v}_{1}, \overline{v}_{2},\overline{v}_{3}$ in $\Delta_{\mathbf{\beta}}$ such that $\overline{v}_{1}+\overline{v}_{2}+\overline{v}_{3}=\sum_{\rho_i\subseteq \sigma(\overline{v}_{1}, \overline{v}_{2},\overline{v}_{3})}a_{i}\overline{b}_{i}$ and $a_{i}=1,2$. Let $v_{\theta,1}, v_{\theta,2},v_{\theta,3}$ be the corresponding elements in $Box(\mathbf{\Sigma_{\theta}})$ and $\sigma(\overline{v}_{\theta,1}, \overline{v}_{\theta,2},\overline{v}_{\theta,3})$ the minimal cone containing $\overline{v}_{\theta,1}, \overline{v}_{\theta,2},\overline{v}_{\theta,3}$ in $\mathbf{\Sigma_{\theta}}$. Then by Lemmas \ref{conemulti} and \ref{box}, $\sigma(\overline{v}_{\theta,1}, \overline{v}_{\theta,2},\overline{v}_{\theta,3})$ is the Lawrence lifting of $\sigma(\overline{v}_{1}, \overline{v}_{2},\overline{v}_{3})$. Suppose that $\sigma$ is generated by $\{\overline{b}_{i_{1}},\cdots,\overline{b}_{i_{s}}\}$; then $\sigma(\overline{v}_{\theta,1}, \overline{v}_{\theta,2},\overline{v}_{\theta,3})$ is generated by $\{\overline{b}_{L,i_{1}},\cdots,\overline{b}_{L,i_{s}},\overline{b}^{'}_{L,i_{1}},\cdots,\overline{b}^{'}_{L,i_{s}}\}$, the Lawrence lifting of $\{\overline{b}_{i_{1}},\cdots,\overline{b}_{i_{s}}\}$. Let $\{\overline{b}_{j_{1}},\cdots,\overline{b}_{j_{m-l-s}}\}$ be the rays not in $\sigma\cup link(\sigma)$; we then have the Lawrence lifting $\{\overline{b}_{L,j_{1}},\cdots,\overline{b}_{L,j_{m-l-s}},\overline{b}^{'}_{L,j_{1}},\cdots,\overline{b}^{'}_{L,j_{m-l-s}}\}$. Then from the definition of the Lawrence fan $\Sigma_{\theta}$ in (\ref{fan}), we have the following lemma: \begin{lem}\label{keycone} There exist $m-l-s$ vectors in $\{\overline{b}_{L,j_{1}},\cdots,\overline{b}_{L,j_{m-l-s}},\overline{b}^{'}_{L,j_{1}},\cdots,\overline{b}^{'}_{L,j_{m-l-s}}\}$ such that the rays they generate plus the rays in $\sigma(\overline{v}_{\theta,1}, \overline{v}_{\theta,2},\overline{v}_{\theta,3})$ generate a cone $\sigma_{\theta}$ in $\Sigma_{\theta}$. $\square$ \end{lem}
\begin{prop}\label{3-twisted-sector} The stack $\mathcal{X}(\mathbf{\Sigma_{\theta}}/\sigma_{\theta})$ is also a Lawrence toric DM stack. \end{prop}
\begin{pf} For simplicity, put $\sigma:=\sigma(\overline{v}_{1}, \overline{v}_{2},\overline{v}_{3})$. Suppose there are
$l$ rays in the $link(\sigma)$. Then by Lemma \ref{conemulti} there are $2l$ rays in $link(\sigma_{\theta})$, the Lawrence lifting of $link(\sigma)$. Let $s:=|\sigma|$, then $2s+m-l-s=|\sigma_{\theta}|$. Applying Gale dual to the diagrams \[ \begin{CD} 0 @ >>>\mathbb{Z}^{s}@ >>> \mathbb{Z}^{l+s}@ >>> \mathbb{Z}^{l} @ >>> 0\\ && @VV{\beta_{\sigma}}V@VV{\widetilde{\beta}}V@VV{\beta(\sigma)}V \\ 0@ >>>N_{\sigma} @ >{}>>N@ >>> N(\sigma) @>>> 0, \end{CD} \] and \[ \begin{CD} 0 @ >>>\mathbb{Z}^{l+s}@ >>> \mathbb{Z}^{m}@ >>> \mathbb{Z}^{m-l-s} @ >>> 0\\ && @VV{\widetilde{\beta}}V@VV{\beta}V@VV{}V \\ 0@ >>>N @ >{\cong}>>N@ >>> 0 @>>> 0 \end{CD} \] yields \begin{equation}\label{3-sector2} \begin{CD} 0 @ >>>\mathbb{Z}^{l}@ >>> \mathbb{Z}^{l+s}@ >>> \mathbb{Z}^{s} @ >>> 0\\ && @VV{\beta(\sigma)^{\vee}}V@VV{\widetilde{\beta}^{\vee}}V@VV{\beta_{\sigma}^{\vee}}V \\ 0@ >>>DG(\beta(\sigma)) @ >{\varphi_{1}}>>DG(\widetilde{\beta})@ >>> DG(\beta_{\sigma}) @>>> 0, \end{CD} \end{equation} and \begin{equation}\label{3-sector22} \begin{CD} 0 @ >>>\mathbb{Z}^{m-l-s}@ >>> \mathbb{Z}^{m}@ >>> \mathbb{Z}^{l+s} @ >>> 0\\ && @VV{\cong}V@VV{\beta^{\vee}}V@VV{\widetilde{\beta}^{\vee}}V \\ 0@ >>>\mathbb{Z}^{m-l-s} @ >{}>>DG(\beta)@ >{\varphi_{2}}>> DG(\widetilde{\beta}) @>>> 0. \end{CD} \end{equation} Since $\mathbb{Z}^{s}\cong N_{\sigma}$, we have that $DG(\beta_{\sigma})=0$. We add two exact sequences $$0\longrightarrow \mathbb{Z}^{l}\longrightarrow\mathbb{Z}^{m}\longrightarrow\mathbb{Z}^{m-l}\longrightarrow 0,$$ and $$0\longrightarrow 0\longrightarrow\mathbb{Z}^{m}\longrightarrow\mathbb{Z}^{m}\longrightarrow 0,$$ on the rows of the diagrams (\ref{3-sector2}),(\ref{3-sector22}) and make suitable maps to the Gale duals we get \begin{equation}\label{3-sectors} \begin{CD} 0 @ >>>\mathbb{Z}^{2l}@ >>> \mathbb{Z}^{l+s+m}@ >>> \mathbb{Z}^{s+m-l} @ >>> 0\\ && @VV{(\beta(\sigma)^{\vee},-\beta(\sigma)^{\vee})}V@VV{(\widetilde{\beta}^{\vee},-\beta^{\vee})}V@VV{0}V \\ 0@ >>>DG(\beta(\sigma)) @ >{\cong}>>DG(\widetilde{\beta})@ >>> 0 @>>> 0, \end{CD} \end{equation} and \begin{equation}\label{3-sectors2} \begin{CD} 0 @ >>>\mathbb{Z}^{m-l-s}@ >>> \mathbb{Z}^{2m}@ >>> \mathbb{Z}^{l+s+m} @ >>> 0\\ && @VV{\cong}V@VV{(\beta^{\vee},-\beta^{\vee})}V@VV{(\widetilde{\beta}^{\vee},-\beta^{\vee})}V \\ 0@ >>>\mathbb{Z}^{m-l-s} @ >{}>>DG(\beta)@ >>> DG(\widetilde{\beta}) @>>> 0. \end{CD} \end{equation} Applying Gale dual to (\ref{3-sectors}), (\ref{3-sectors2}) we get \[ \begin{CD} 0 @ >>>\mathbb{Z}^{s+m-l}@ >>> \mathbb{Z}^{l+s+m}@ >>> \mathbb{Z}^{2l} @ >>> 0\\ && @VV{\cong}V@VV{\widetilde{\beta}_{L}}V@VV{\beta_{L}(\sigma_{\theta})}V \\ 0@ >>>\mathbb{Z}^{s+m-l} @ >{}>>\widetilde{N}_{L}@ >>> N_{L}(\sigma_{\theta}) @>>> 0, \end{CD} \]
and \[ \begin{CD} 0 @ >>>\mathbb{Z}^{l+s+m}@ >>> \mathbb{Z}^{2m}@ >>> \mathbb{Z}^{m-l-s} @ >>> 0\\ && @VV{\widetilde{\beta}_{L}}V@VV{\beta_{L}}V@VV{0}V \\ 0@ >>>\widetilde{N}_{L} @ >{\cong}>>N_{L}@ >>> 0 @>>> 0. \end{CD} \] For the generic element $\theta$, from the map $\varphi_{2}$ in (\ref{3-sector22}), $\theta$ induces $\widetilde{\theta}\in DG(\widetilde{\beta})$, and from the isomorphism $\varphi_{1}$ in (\ref{3-sector2}), $\widetilde{\theta}=\theta(\sigma)\in DG(\beta(\sigma))$. So we obtain a quotient stacky hyperplane arrangement $\mathcal{A}(\sigma)=(N(\sigma),\beta(\sigma),\theta(\sigma))$. From the above diagrams we see that the quotient fan $\Sigma_{\theta}/\sigma_{\theta}$ in $\overline{N}_{L}(\sigma_{\theta})$ also comes from a Lawrence construction of the map $\beta(\sigma)^{\vee}: \mathbb{Z}^{l}\rightarrow DG(\beta(\sigma))$. Let $X(\sigma)=\mathbb{C}^{2l}\setminus V(\mathcal{I}_{\theta(\sigma)})$, where $\mathcal{I}_{\theta(\sigma)}$ is the irrelevant ideal of the quotient fan $\Sigma_{\theta}\slash \sigma_{\theta}$. Let $G(\sigma)=Hom_\mathbb{Z}(DG(\beta(\sigma)),\mathbb{C}^{*})$. The stack $\mathcal{X}(\mathbf{\Sigma_{\theta}}/\sigma_{\theta})=[X(\sigma)/G(\sigma)]$ is a Lawrence toric Deligne-Mumford stack. \end{pf}
\begin{cor}\label{sectors} $\mathcal{M}(\mathcal{A}(\sigma(\overline{v}_{1}, \overline{v}_{2},\overline{v}_{3})))$ is the hypertoric DM stack associated to the quotient Lawrence toric DM stack $\mathcal{X}(\mathbf{\Sigma_{\theta}}/\sigma_{\theta})$. \end{cor}
\begin{pf} $\mathcal{M}(\mathcal{A}(\sigma(\overline{v}_{1}, \overline{v}_{2},\overline{v}_{3})))$ is constructed in \cite{JT} as a quotient stack $[Y(\sigma)/G(\sigma)]$, where $Y(\sigma)\subset X(\sigma)$ is defined by $I_{\beta(\sigma)^{\vee}}$, which is the ideal in (\ref{ideal1}) corresponding to the map $\beta(\sigma)^{\vee}$ in (\ref{3-sector2}). So the stack $\mathcal{M}(\mathcal{A}(\sigma(\overline{v}_{1}, \overline{v}_{2},\overline{v}_{3})))$ is the associated hypertoric DM stack in the Lawrence toric DM stack $\mathcal{X}(\mathbf{\Sigma_{\theta}}/\sigma_{\theta})$. \end{pf}
\begin{rmk} For any $v_{\theta}\in Box(\mathbf{\Sigma_{\theta}})$, let $v^{-1}_{\theta}$ be its inverse. We have the quotient Lawrence toric stack $\mathcal{X}(\mathbf{\Sigma_{\theta}}/\sigma_{\theta})$. Let $(v,\sigma)$ be the corresponding element in $Box(\Delta_{\mathbf{\beta}})$, then $$\mathcal{M}(\mathcal{A}(\sigma(\overline{v}, \overline{v}^{-1},1)))\cong \mathcal{M}(\mathcal{A}(\sigma)).$$ By Proposition \ref{3-twisted-sector} and Corollary \ref{sectors}, the twisted sector $\mathcal{M}(\mathcal{A}(\sigma))$ is the associated hypertoric DM stack of the Lawrence toric DM stack $\mathcal{X}(\mathbf{\Sigma_{\theta}}/\sigma_{\theta})$. \end{rmk}
\begin{rmk} From Lemma \ref{keycone}, the cone $\sigma_{\theta}$ is not the minimal cone $\sigma(\overline{v}_{\theta,1}, \overline{v}_{\theta,2},\overline{v}_{\theta,3})$ containing $\overline{v}_{\theta,1}, \overline{v}_{\theta,2},\overline{v}_{\theta,3}$ in $\Sigma_{\theta}$. So $\mathcal{X}(\Sigma_{\theta}/\sigma(\overline{v}_{\theta,1}, \overline{v}_{\theta,2},\overline{v}_{\theta,3}))$ is not a Lawrence toric DM stack. But from the construction of Lawrence toric DM stacks, the quotient stack $\mathcal{X}(\Sigma_{\theta}/\sigma(\overline{v}_{\theta,1}, \overline{v}_{\theta,2},\overline{v}_{\theta,3}))$ is homotopy equivalent to the quotient stack $\mathcal{X}(\Sigma_{\theta}/\sigma_{\theta})$. Since we do not need this for the comparison of the orbifold Chow rings, we omit the details. \end{rmk}
\subsection{Comparison of orbifold Chow rings}
Recall that $N_{L}=\overline{N}_{L}\oplus N_{L,tor}$, where $N_{L,tor}$ is the torsion subgroup of $N_{L}$. Let $N_{\Sigma_{\theta}}=N_{L,tor}\oplus |\Sigma_{\theta}|$. By Theorem 1.1, we have
\begin{prop}\label{orbifoldlawrence} The orbifold Chow ring $A^{*}_{orb}(\mathcal{X}(\mathbf{\Sigma}_\theta))$ of the Lawrence toric DM stack $\mathcal{X}(\mathbf{\Sigma_{\theta}})$ is isomorphic to the ring \begin{equation}\label{chowringlawrence} \frac{\mathbb{Q}[N_{\Sigma_{\theta}}]} {\{\sum_{i=1}^{m}e(b_{L,i})y^{b_{L,i}}+\sum_{i=1}^{m}e(b'_{L,i})y^{b'_{L,i}}:e\in N_{L}^{\star}\}}. \end{equation}~ $\square$ \end{prop}
Recall in \cite{JT} that for any $c\in N$, there is a cone $\sigma\in \Delta_\mathbf{\beta}$ such that $\overline{c}=\sum_{\rho_{i}\subseteq \sigma}\alpha_{i}\overline{b}_{i}$ where $\alpha_{i}>0$ are rational numbers. Let $N^{\Delta_\mathbf{\beta}}$ denote all the pairs $(c,\sigma)$. Then $N^{\Delta_\mathbf{\beta}}$ gives rise to a group ring $$\mathbb{Q}[\Delta_\mathbf{\beta}]=\bigoplus_{(c,\sigma)\in N^{\Delta_\mathbf{\beta}}}\mathbb{Q}\cdot y^{(c,\sigma)},$$ where $y$ is a formal variable. For any $(c,\sigma)\in N^{\Delta_\mathbf{\beta}}$, there exists a unique element $(v,\tau)\in Box(\Delta_\mathbf{\beta})$ such that $\tau\subseteq\sigma$ and $c=v+\sum_{\rho_{i}\subseteq \sigma}m_{i}b_{i}$, where $m_{i}$ are nonnegative integers. We call $(v,\tau)$ the fractional part of $(c,\sigma)$. We define the \emph{ceiling function} for fans. For $(c,\sigma)$ define the ceiling function $\lceil c \rceil_{\sigma}$ by $\lceil c \rceil_{\sigma}=\sum_{\rho_{i}\subseteq \tau}b_{i}+\sum_{\rho_{i}\subseteq \sigma}m_{i}b_{i}$. Note that if $\overline{v}=0$, $\lceil c \rceil_{\sigma}=\sum_{\rho_{i}\subseteq \sigma}m_{i}b_{i}$. For two pairs $(c_1,\sigma_1)$, $(c_2,\sigma_2)$, if $\sigma_1\cup\sigma_2$ is a cone in $\Delta_\mathbf{\beta}$, define $\epsilon(c_1,c_2):=\lceil c_1 \rceil_{\sigma_{1}}+\lceil c_2 \rceil_{\sigma_{2}}-\lceil c_1+c_2 \rceil_{\sigma_{1}\cup\sigma_2}$. Let $\sigma_{\epsilon}\subseteq\sigma_1\cup\sigma_2$ be the minimal cone in $\Delta_\mathbf{\beta}$ containing $\epsilon(c_1,c_2)$ so that $(\epsilon(c_1,c_2),\sigma_{\epsilon})\in N^{\Delta_\mathbf{\beta}}$. We define the grading on $\mathbb{Q}[\Delta_{\mathbf{\beta}}]$ as follows. For any $(c,\sigma)$, write $c=v+\sum_{\rho_{i}\subseteq \sigma}m_{i}b_{i}$, then
$deg(y^{(c,\sigma)})=|\tau|+\sum_{\rho_{i}\subseteq\sigma}m_{i}$, where
$|\tau|$ is the dimension of $\tau$. By abuse of notation, we write $y^{(b_{i},\rho_i)}$ as $y^{b_{i}}$. The multiplication is defined by \begin{equation}\label{product1} y^{(c_{1},\sigma_{1})}\cdot y^{(c_{2},\sigma_{2})}:= \begin{cases}
(-1)^{|\sigma_{\epsilon}|}y^{(c_{1}+c_{2}+\epsilon(c_1,c_2),\sigma_1\cup\sigma_2)}&\text{if $\sigma_{1}\cup\sigma_{2}$ is a cone in $\Delta_{\mathbf{\beta}}$}\,,\\ 0&\text{otherwise}\,. \end{cases} \end{equation} From the properties of the ceiling function we check that the multiplication is commutative and associative. So $\mathbb{Q}[\Delta_\mathbf{\beta}]$ is a unital associative commutative ring. In \cite{JT}, it is shown that \begin{equation}\label{chowringhyper} A^{*}_{orb}(\mathcal{M}(\mathcal{A}))\cong \frac{\mathbb{Q}[\Delta_{\mathbf{\beta}}]}{\{\sum_{i=1}^{m}e(b_{i})y^{b_{i}}: e\in N^{\star}\}}. \end{equation}
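As a purely formal illustration of the ceiling function, the grading and the product (\ref{product1}) (the data below are chosen only for illustration and are not one of the hypertoric stacks studied in this paper), take $N=\mathbb{Z}$, $m=1$ and $b_{1}=2$, so that $\Delta_{\mathbf{\beta}}$ consists of the zero cone and the ray $\rho_{1}$ spanned by $\overline{b}_{1}$, and the box elements are $(0,\{0\})$ and $(1,\rho_{1})$. For $c=7$ we have $7=1+3b_{1}$, so the fractional part of $(7,\rho_{1})$ is $(1,\rho_{1})$, $\lceil 7 \rceil_{\rho_{1}}=b_{1}+3b_{1}=8$ and $deg(y^{(7,\rho_{1})})=1+3=4$. Similarly $\lceil 1 \rceil_{\rho_{1}}=b_{1}$ and $\lceil 2 \rceil_{\rho_{1}}=b_{1}$, so $\epsilon(1,1)=b_{1}$, $\sigma_{\epsilon}=\rho_{1}$, and (\ref{product1}) gives $y^{(1,\rho_{1})}\cdot y^{(1,\rho_{1})}=-y^{(4,\rho_{1})}$ in $\mathbb{Q}[\Delta_{\mathbf{\beta}}]$, in accordance with the grading.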
Consider the map $\beta: \mathbb{Z}^{m}\rightarrow N$ which is given by the vectors $\{b_{1},\cdots,b_{m}\}$. We take $\{1,\cdots,m\}$ as the vertex set of the {\em matroid complex} $M_{\beta}$, defined from $\beta$ by requiring that $F\in M_{\beta}$ iff the vectors $\{\overline{b}_{i}\}_{i\in F}$ are linearly independent in $\overline{N}$. A face $F\in M_{\beta}$ corresponds to a cone in $\Delta_{\mathbf{\beta}}$ generated by $\{\overline{b}_{i}\}_{i\in F}$. By \cite{S}, the ``Stanley-Reisner'' ring of the matroid $M_{\beta}$ is $$\mathbb{Q}[M_{\beta}]=\frac{\mathbb{Q}[y^{b_{1}},\cdots,y^{b_{m}}]}{I_{M_{\beta}}},$$ where $I_{M_{\beta}}$ is the matroid ideal generated by the set of square-free monomials
$$\{y^{b_{i_{1}}}\cdots y^{b_{i_{k}}}| \overline{b}_{i_{1}},\cdots,\overline{b}_{i_{k}} ~\text{linearly dependent in}~\overline{N}\}.$$ It is proved in \cite{JT} that $$\mathbb{Q}[\Delta_{\mathbf{\beta}}]\cong\bigoplus_{(v,\sigma)\in Box(\Delta_{\mathbf{\beta}})}y^{(v,\sigma)}\cdot \mathbb{Q}[M_{\beta}].$$ For any $(v_{1},\sigma_{1}),(v_{2},\sigma_{2})\in Box(\Delta_{\mathbf{\beta}})$, let $(v_{3},\sigma_{3})$ be the unique element in $Box(\Delta_{\mathbf{\beta}})$ such that $v_1+v_2+v_3\equiv 0$ in the local group given by $\sigma_1\cup\sigma_2$, where $\equiv 0$ means that there exists a cone $\sigma(\overline{v}_{1}, \overline{v}_{2},\overline{v}_{3})$ in $\Delta_{\mathbf{\beta}}$ such that $\overline{v}_{1}+\overline{v}_{2}+\overline{v}_{3}=\sum_{\rho_{i}\subseteq\sigma(\overline{v}_{1}, \overline{v}_{2},\overline{v}_{3})}a_{i}\overline{b}_{i}$, where $a_{i}=1 ~\text{or}~ 2$. Let $\overline{v}_1=\sum_{\rho_j\subseteq\sigma_1}\alpha_{j}^{1}\overline{b}_{j}$, $\overline{v}_2=\sum_{\rho_j\subseteq\sigma_2}\alpha_{j}^{2}\overline{b}_{j}$, $\overline{v}_3=\sum_{\rho_j\subseteq\sigma_3}\alpha_{j}^{3}\overline{b}_{j}$ with $0<\alpha_{j}^{1},\alpha_{j}^{2},\alpha_{j}^{3}<1$. Let $I$ be the set of $i$ such that $a_{i}=1$ and $\alpha_{i}^{1},\alpha_{i}^{2},\alpha_{i}^{3}$ exist, and let $J$ be the set of $j$ such that $\rho_{j}$ belongs to $\sigma(\overline{v}_{1}, \overline{v}_{2},\overline{v}_{3})$ but not to $\sigma_{3}$. If $(v,\sigma)\in Box(\Delta_\mathbf{\beta})$, let $(\check{v},\sigma)$ be the inverse of $(v,\sigma)$. Except for torsion elements, this means that if $\overline{v}=\sum_{\rho_{i}\subseteq \sigma}\alpha_{i}\overline{b}_{i}$ for $0<\alpha_{i}<1$, then $\check{\overline{v}}=\sum_{\rho_{i}\subseteq \sigma}(1-\alpha_{i})\overline{b}_{i}$. By abuse of notation, we write $y^{(b_{i},\rho_i)}$ as $y^{b_{i}}$. We have that $v_{1}+v_{2}=\check{v}_{3}+\sum_{a_{i}=2}b_{i}+\sum_{j\in J}b_{j}$. From (\ref{product1}), Lemma 5.11 and Lemma 5.12 in \cite{JT}, if $\overline{v}_{1},\overline{v}_{2}\neq 0$, we have $$ \lceil v_1\rceil_{\sigma_1}+\lceil v_2\rceil_{\sigma_2}-\lceil v_1+v_2\rceil_{\sigma_1\cup\sigma_2}= \begin{cases} \sum_{i\in I}b_{i}+\sum_{j\in J}b_{j}&\text{if $\overline{v}_{1}\neq\check{\overline{v}}_{2}$}\,,\\ \sum_{j\in J}b_{j}&\text{if $\overline{v}_{1}=\check{\overline{v}}_{2}$}\,.\\ \end{cases} $$ So it is easy to check that the multiplication $y^{(v_{1},\sigma_{1})}\cdot y^{(v_{2},\sigma_{2})}$ can be written as \begin{equation}\label{product} \begin{cases}
(-1)^{|I|+|J|}y^{(\check{v}_{3},\sigma_{3})}\cdot\prod_{a_{i}=2} y^{b_{i}}\cdot\prod_{i\in I} y^{b_{i}}\cdot \prod_{j\in J}y^{2b_{j}}&\text{if $\overline{v}_{1},\overline{v}_{2}\in\sigma$ for $\sigma \in\Delta_{\mathbf{\beta}}$ and $\overline{v}_{1}\neq \check{\overline{v}}_{2}$}\,,\\
(-1)^{|J|} \prod_{j\in J}y^{2b_{j}}&\text{if $\overline{v}_{1},\overline{v}_{2}\in\sigma$ for $\sigma \in\Delta_{\mathbf{\beta}}$ and $\overline{v}_{1}=\check{\overline{v}}_{2}$}\,,\\ 0&\text{otherwise}\,. \end{cases} \end{equation}
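In the illustrative example above ($N=\mathbb{Z}$, $b_{1}=2$), the box elements $v_{1}=v_{2}=1$ satisfy $\overline{v}_{1}=\check{\overline{v}}_{2}$, the element $v_{3}$ is $0$ and $J=\{1\}$, so the second line of (\ref{product}) gives $y^{(1,\rho_{1})}\cdot y^{(1,\rho_{1})}=-y^{2b_{1}}=-y^{(4,\rho_{1})}$, agreeing with the computation via (\ref{product1}).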
The following is the main result of this Section.
\begin{thm}\label{main2} There is an isomorphism of orbifold Chow rings $A_{orb}^{*}(\mathcal{X}(\mathbf{\Sigma_{\theta}}))\cong A_{orb}^{*}(\mathcal{M}(\mathcal{A}))$. \end{thm}
\begin{pf} The ring $\mathbb{Q}[N_{\Sigma_{\theta}}]$ is generated by $\{y^{b_{L,i}},y^{b'_{L,i}}: i=1,\cdots,m\}$ and $y^{v_{\theta}}$ for $v_{\theta}\in Box(\mathbf{\Sigma_{\theta}})$ by the definition. By Lemma \ref{box}, define a morphism $$\phi: \mathbb{Q}[N_{\Sigma_{\theta}}]\rightarrow\mathbb{Q}[\Delta_{\mathbf{\beta}}] $$ by $y^{b_{L,i}}\mapsto y^{b_{i}}, y^{b'_{L,i}}\mapsto -y^{b_{i}}$ and $y^{v_{\theta}}\mapsto y^{(v,\sigma)}$. By \cite{HS}, the ideal $\mathcal{I}_{\theta}$ goes to the ideal $I_{M_{\beta}}$ and the relation $\{\sum_{i=1}^{m}e(b_{L,i})y^{b_{L,i}}+\sum_{i=1}^{m}e(b'_{L,i})y^{b'_{L,i}}:e\in N_{L}^{\star}\}$ goes to the relation $\{\sum_{i=1}^{m}e(b_{i})y^{b_{i}}:e\in N^{\star}\}$. Thus the two rings are isomorphic as modules.
It remains to check the multiplications. For any $y^{v_{\theta}}$ and $y^{b_{L,i}}$ or $y^{b'_{L,i}}$, let $y^{(v,\sigma)}$ be the corresponding element in $\mathbb{Q}[\Delta_{\mathbf{\beta}}]$. By the property of $v_{\theta}$ and Lemma \ref{box}, the minimal cone in $\Sigma_{\theta}$ containing $\overline{v}_{\theta}, \overline{b}_{L,i}$ must contain $\overline{b}'_{L,i}$. By Lemma \ref{conemulti}, there is a cone in $\Delta_{\mathbf{\beta}}$ containing $\overline{v}, \overline{b}_{i}$. In this way, $y^{v_{\theta}}\cdot y^{b_{L,i}}$ goes to $y^{(v,\sigma)}\cdot y^{b_{i}}$ and $y^{v_{\theta}}\cdot y^{b'_{L,i}}$ goes to $-y^{(v,\sigma)}\cdot y^{b_{i}}$. If there is no cone in $\Sigma_{\theta}$ containing $\overline{v}_{\theta}, \overline{b}_{L,i},\overline{b}'_{L,i}$, then by Lemma \ref{conemulti} there is no cone in $\Delta_{\mathbf{\beta}}$ containing $\overline{v}, \overline{b}_{i}$. So $y^{v_{\theta}}\cdot y^{b_{L,i}}=0$ goes to $y^{(v,\sigma)}\cdot y^{b_{i}}=0$ and $y^{v_{\theta}}\cdot y^{b'_{L,i}}=0$ goes to $-y^{(v,\sigma)}\cdot y^{b_{i}}=0$.
For any $y^{v_{\theta,1}},y^{v_{\theta,2}}$, let $y^{(v_{1},\sigma_{1})},y^{(v_{2},\sigma_{2})}$ be the corresponding elements in $\mathbb{Q}[\Delta_{\mathbf{\beta}}]$. If there is no cone in $\Sigma_{\theta}$ containing $\overline{v}_{\theta,1},\overline{v}_{\theta,2}$, then by Lemmas \ref{conemulti} and \ref{box}, there is no cone in $\Delta_{\mathbf{\beta}}$ containing $\overline{v}_{1},\overline{v}_{2}$. So $y^{v_{\theta,1}}\cdot y^{v_{\theta,2}}=0$ goes to $y^{(v_{1},\sigma_{1})}\cdot y^{(v_{2},\sigma_{2})}=0$. Suppose there is a cone containing $\overline{v}_{\theta,1},\overline{v}_{\theta,2}$, and let $v_{\theta,3}\in Box(\mathbf{\Sigma}_{\theta})$ be such that $v_{\theta,1}+v_{\theta,2}+v_{\theta,3}\equiv 0$. Let $\sigma(\overline{v}_{\theta,1}, \overline{v}_{\theta,2},\overline{v}_{\theta,3})$ be the minimal cone containing $\overline{v}_{\theta,1}, \overline{v}_{\theta,2},\overline{v}_{\theta,3}$ in $\Sigma_{\theta}$. Then by Lemmas \ref{conemulti} and \ref{box}, $\sigma(\overline{v}_{\theta,1}, \overline{v}_{\theta,2},\overline{v}_{\theta,3})$ is the Lawrence lifting of $\sigma(\overline{v}_{1}, \overline{v}_{2},\overline{v}_{3})$ for $(v_{1},\sigma_{1}),(v_{2},\sigma_{2}),(v_{3},\sigma_{3})\in Box(\Delta_{\mathbf{\beta}})$. So we may write $\overline{v}_{\theta,1}+\overline{v}_{\theta,2}+\overline{v}_{\theta,3}=\sum_{\rho_{i}\subseteq \sigma(\overline{v}_{1},\overline{v}_{2},\overline{v}_{3})}a_{i}\overline{b}_{L,i} +\sum_{\rho_{i}\subseteq \sigma(\overline{v}_{1},\overline{v}_{2},\overline{v}_{3})}a'_{i}\overline{b}'_{L,i}$. The corresponding $\overline{v}_{1}+\overline{v}_{2}+\overline{v}_{3}= \sum_{\rho_{i}\subseteq \sigma(\overline{v}_{1},\overline{v}_{2},\overline{v}_{3})}a_{i}\overline{b}_{i}$. Let $(\check{v},\sigma)$ be the inverse of $(v,\sigma)$ in $Box(\Delta_{\mathbf{\beta}})$, i.e. if $v$ is nontorsion and $\overline{v}=\sum_{\rho_{i}\subseteq \sigma}\alpha_{i}\overline{b}_{i}$ for $0<\alpha_{i}<1$, then $\check{\overline{v}}=\sum_{\rho_{i}\subseteq \sigma}(1-\alpha_{i})\overline{b}_{i}$. The element $\check{v}_{\theta}$ is defined similarly in $Box(\mathbf{\Sigma_{\theta}})$. The notation $J$ represents the set of $j$ such that $\rho_{j}$ belongs to $\sigma(\overline{v}_{1},\overline{v}_{2},\overline{v}_{3})$ but not to $\sigma_{3}$; the corresponding $\rho_{L,j},\rho'_{L,j}$ belong to $\sigma(\overline{v}_{\theta,1},\overline{v}_{\theta,2},\overline{v}_{\theta,3})$ but not to $\sigma(\overline{v}_{\theta,3})$.
If some $\overline{v}_{\theta,i}=0$, then $v_{\theta,i}$ is a torsion element, and from Lemma \ref{box} the corresponding $v$ is also a torsion element. In this case we know that the orbifold cup product $y^{v_{\theta,1}}\cdot y^{v_{\theta,2}}$ is the usual product, and under the map $\phi$ it is equal to $y^{(v_{1},\sigma_{1})}\cdot y^{(v_{2},\sigma_{2})}$.
If $\overline{v}_{\theta,1}=\check{\overline{v}}_{\theta,2}$, then $\overline{v}_{\theta,3}=0$ and the obstruction bundle over the corresponding 3-twisted sector is zero. The set $J$ is the set of $j$ such that $\rho_j$ belongs to $\sigma(\overline{v}_{\theta,1})$. So from \cite{BCS}, we have $$y^{v_{\theta,1}}\cdot y^{v_{\theta,2}}= \prod_{j\in J}y^{b_{L,j}}\cdot y^{b'_{L,j}}.$$ Under the map $\phi$ we see that $y^{(v_{1},\sigma_{1})}\cdot y^{(v_{2},\sigma_{2})}$ is equal to the second line in the product (\ref{product}).
If $\overline{v}_{\theta,1}\neq\check{\overline{v}}_{\theta,2}$, then $\overline{v}_{\theta,3}\neq 0$ and the obstruction bundle is given by Proposition \ref{obstructionbdle}. If all $\alpha_{j}^{1},\alpha_{j}^{2},\alpha_{j}^{3}$ exist, the coefficients $a_{i}$ and $a'_{i}$ satisfy that if $a_{i}=1$ then $a'_{i}=2$, and if $a_{i}=2$ then $a'_{i}=1$. So from \cite{BCS}, $$y^{v_{\theta,1}}\cdot y^{v_{\theta,2}}= y^{\check{v}_{\theta,3}}\cdot\prod_{a_{i}=2} y^{b_{L,i}}\cdot\prod_{i\in I} y^{b'_{L,i}}\cdot \prod_{j\in J}y^{b_{L,j}}\cdot y^{b'_{L,j}}.$$ Under the map $\phi$ we see that $y^{(v_{1},\sigma_{1})}\cdot y^{(v_{2},\sigma_{2})}$ is equal to the first line in the product (\ref{product}). By Lemma \ref{box}, the box elements have the same orbifold degrees. By Corollary \ref{sectors} and the definition of orbifold cup product in (\ref{cupproduct}), the products $y^{v_{\theta,1}}\cdot y^{v_{\theta,2}}$ and $y^{(v_{1},\sigma_{1})}\cdot y^{(v_{2},\sigma_{2})}$ have the same degrees in both Chow rings. So $\phi$ induces a ring isomorphism $A_{orb}^{*}(\mathcal{X}(\mathbf{\Sigma_{\theta}}))\cong A_{orb}^{*}(\mathcal{M}(\mathcal{A}))$. \end{pf}
\begin{rmk} The presentation (\ref{chowringhyper}) of the orbifold Chow ring depends only on the matroid complex corresponding to the map $\beta: \mathbb{Z}^{m}\rightarrow N$, not on $\theta$. Note that the presentation (\ref{chowringlawrence}) depends on the fan $\Sigma_{\theta}$. We cannot see explicitly from this presentation that the ring is independent of the choice of the generic element $\theta$. \end{rmk}
\end{document}
\begin{document}
\title{Robust spectral phase reconstruction of time-frequency entangled bi-photon states}
\author{Ilaria Gianani}\email{[email protected]} \affiliation{Dipartimento di Scienze, Universit\`a degli Studi Roma Tre, Via della Vasca Navale 84, 00146, Rome, Italy}
\begin{abstract} Exploitation of time-frequency properties of SPDC photon pairs has recently found application in many endeavors. Complete characterization and control over the states in this degree of freedom are of paramount importance for the development of optical quantum technologies. This is achieved by accessing information both on the joint spectral amplitude and the joint spectral phase. Here we propose a novel scheme based on the MICE algorithm, which aims at reconstructing the joint spectral phase by adopting a multi-shear approach, making the technique suitable for noisy environments. We report on simulations for the phase reconstruction and propose an experiment using a modified Franson interferometer.
\end{abstract}
\maketitle
Spectral-temporal properties are amongst the most reliable and robust choices for encoding information for photonic quantum technologies. Being an internal degree of freedom, it is suitable for long-distance communications, and allows for propagation through long-distance fibres without affecting the quantum state \cite{fibreprop}. Applications exploiting time-frequency encoding range from QKD protocols \cite{josh, weiner, rodiger, mower} to clock synchronization \cite{giov} and quantum communications \cite{wiseman}, all of which make use of frequency-correlated two-photon states. \\ \indent The most common techniques to generate such pairs are nonlinear processes, such as spontaneous parametric down conversion (SPDC) and four-wave mixing, in which the spectral-temporal properties are dictated by the shape of the pump as well as by the material through its phase-matching function. Tailoring the pump and choosing the appropriate dispersion grant diverse capabilities for shaping spectrally broad two-photon states \cite{vahid, raymer, mosley, carrasco}. Quantum technologies demand that the information carriers be prepared in fiducial states at the outset, as a key requirement for the correct operation of any protocol. The variegated structure of time-frequency modes is at the same time what grants their advantages and what poses some critical challenges for their characterization. Ultrafast pulsed modes, exactly like their classical counterparts, vary too quickly to be characterized in the time domain. To characterize them in the frequency domain, one needs to access both their spectral amplitude and their spectral phase, as both affect the time profile and can carry signatures of frequency correlations. Measuring the joint spectral amplitude is now a commonly addressed task \cite{brian2,xstine,clark}; however, the measurement of the joint spectral phase has only recently been tackled. This has been achieved by performing quantum state tomography on the biphoton state \cite{john}, extending what is normally applied to discrete systems, e.g. polarization. \\ \indent An alternative route relies on classical ultrafast metrology techniques, which have been extensively developed following the need to characterize femtosecond and attosecond pulses \cite{ian}. An approach in this direction has recently been proposed in \cite{brian}, where the self-referenced classical metrology technique SPIDER \cite{spider} has been adapted to the heralded measurement of photon-pair phases. SPIDER reconstructs the spectral phase by retrieving the interferometric phase between two frequency-sheared copies of an unknown pulse. The extraction algorithm is quite simple and is based on the integration of the interferometric phase. In the last few decades, many different implementations of SPIDER have been developed to address increasing degrees of pulse complexity \cite{spider2, spider3,spider4,spider5}. In particular, multi-shear techniques such as SEA-CAR SPIDER \cite{carspider1, carspider2,carspider3} have provided a very robust tool for the reconstruction of broadband pulses with high spectral complexity. In all its implementations SPIDER is a referenced technique, where the reference is either the pulse itself, or, with a slight modification, a known external field. More recently, a new algorithm, MICE \cite{MICE}, which also relies on multi-shear techniques, has been developed. Contrary to the standard SPIDER extraction algorithm, MICE allows for the mutual characterisation of multiple unknown fields at the same time.
Due to the redundancies achieved via the multi-shear arrangement, MICE performs extremely well even under very stringent noise conditions. This technique has proven to be extremely versatile in the classical regime and it has been employed for the reconstruction of spectral phases of complex pulses in the visible-near IR regime \cite{spice}, for wavefront reconstruction \cite{MICE}, for the spatial characterization of high harmonic sources \cite{mmm}, and for digital holography microscopy \cite{patrick}. \\ \indent Here we propose a technique to employ MICE in a setup taking into account the specific needs of quantum light detection. The measurement strategy relies on the use of a modified Franson interferometer \cite{cabello09, hardy11}, which allows one to observe genuine time-bin entanglement without relying on time-resolved detection. This is a necessary condition to obtain coincidences which are dependent on the biphoton spectral phase to be extracted. Simulations show that, due to the redundancy provided by the multishear approach, the technique works reliably even with moderate signal intensities.\\ \indent MICE is a classical metrology technique which uses an iterative algorithm to simultaneously reconstruct multiple unknown fields $E_i$ depending on a set of parameters $\gamma$ without the need for a known external reference. This is made possible by means of a multishear measurement strategy, in which multiple shears must be used to scan the fields along each parameter, and the number of fields to reconstruct must be lower than the number of shears used for each dimension. This is sufficient to guarantee enough redundancy, which makes the technique particularly robust against noise. Given two fields $E_1(\gamma)$ and $E_2(\gamma)$, MICE relies on the minimization of the error with respect to each field \cite{MICE}: \begin{equation} \mathcal{E}= \sum_{j,k} \vert AC^{meas}_{j,j-k} - E_1(\gamma_j)E_2^*(\gamma_j-\Gamma_{k}) \vert ^2 \label{error} \end{equation} where $AC^{meas}$ is the measured interferometric product between the two fields, obtained as the sideband of the Fourier transform of $I=\vert E_1(\gamma) + E_2(\gamma-\Gamma_{k}) \vert ^2$. Measuring the bi-photon spectral phase requires implementing interferometric schemes, which typically demand long accumulation times to achieve good signal levels. Given its robustness against noise, MICE offers a solution to this, becoming the preferable choice for such an endeavor. This is conditioned on properly choosing an arrangement whose measurement outcomes obey the behavior described above. \begin{figure}
\caption{{\it Proposed interferometric scheme.} A photon-pair produced via SPDC enters a modified Franson interferometer where each photon can undertake either a long (L) or short (S) path. When the signal (idler) photon passes through the short (long) path it is subject to a frequency shear given by the EOM. A frequency-resolved coincidence counting measurement is then performed.}
\label{setup}
\end{figure} Consider the modified version of the interferometric scheme proposed by Cabello et al. \cite{cabello09}, as depicted in Fig. \ref{setup}. The original motivation of this scheme lies in easing some technical requirements of Franson's original idea \cite{franson} for the generation of time-bin entanglement. A photon pair is generated via spontaneous parametric down conversion (SPDC); both the signal and idler photons can undertake either a short $\vert S \rangle$ or a long $\vert L \rangle$ path before being detected with a frequency-resolved measurement. This scheme has been proved to generate time-bin entanglement between the short and long paths without relying on time-resolved detection \cite{cabello09}. In order to make it suitable for our purposes, two further modifications need to be introduced: frequency-resolved detection is adopted, and independent frequency shears are inserted on the $S_s$ and $L_i$ paths: the signal will be sheared only when taking the short path, the idler only when taking the long one. Shearing can be performed by means of Electro Optic Modulators (EOMs) as proposed and demonstrated in \cite{eosi, brian}. This is preferable to non-linear optical shearing, as we work in the single-photon regime. Since we adopt a multishear approach, both the shears have to be scanned independently through multiple values, so that for each shear $(\Omega_{1,k})$ on $S_s$, the shear $(\Omega_{2,l})$ on $L_i$ scans along the idler dimension of the joint spectral wavefunction. In the most general case, MICE is not bound to work with fields having the same spectral support, if the shears are chosen so that the interferograms will completely cover the fields along every dimension. In fact, the phase will only be retrieved in the zones covered by the interference. At the same time the interferograms given by two subsequent shears need to overlap, otherwise the algorithm cannot make use of the redundancy. If the interfering fields have the same support, the sole purpose of the multiple shears is to grant the redundancy, hence they can be as small as allowed by the detection resolution. The state entering the interferometer will be given by \cite{eberly}: \begin{equation} \Psi(\omega_s,\omega_i)=\int d\omega_s \,d\omega_i A(\omega_s,\omega_i)a^{\dagger}_s(\omega_s)a^{\dagger}_i(\omega_i)\vert 0 \rangle\vert 0 \rangle, \end{equation} where $ A(\omega_s,\omega_i)$ is the wave function of the biphoton state. As the photon pair goes through the interferometer, the output state will be transformed into: \begin{equation} \begin{aligned} \Psi(\omega_s,\omega_i) = \int d\omega_s \,d\omega_i \big[ A(\omega_s,\omega_i)a^{\dagger}_s(\omega_s+\Omega_{1,k})a^{\dagger}_i(\omega_i)+\\ +A(\omega_s,\omega_i)a^{\dagger}_s(\omega_s)a^{\dagger}_i(\omega_i+\Omega_{2,l})e^{i(\omega_s+\omega_i+\Omega_{2,l})\tau}\big]\vert 0 \rangle\vert 0 \rangle, \end{aligned} \end{equation} where $\tau$ is the delay between the two paths and we have assumed that the shear on the $L_i$ path is performed after a length equal to that of the S paths. We notice however that due to the modified geometry, the detector will not always measure $\omega_s$ or $\omega_i$, and this is a fundamental requirement to ensure genuine time-bin entanglement between the two photons, as it allows one to automatically discard the $\vert S \rangle \vert L \rangle$ and $\vert L \rangle \vert S \rangle$ events.
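As a numerical aside on the extraction algorithm itself: one simple way to minimize the error of Eq.~\eqref{error} is by alternating closed-form updates of the two fields, in the spirit of the update equations given further below. The following minimal Python sketch illustrates this; the grid size, the integer-pixel shears and the Gaussian test fields are illustrative assumptions of the sketch only (the bi-photon case discussed below is the two-dimensional, two-shear analogue).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 64                    # grid points along the parameter gamma
shears = [1, 2, 3, 4]     # integer-pixel shears (redundant multi-shear set)

g = np.arange(n)
amp = np.exp(-0.5 * ((g - n / 2) / (n / 8)) ** 2)            # common amplitude
E1_true = amp * np.exp(1j * 0.02 * (g - n / 2) ** 2)         # chirped test field
E2_true = amp * np.exp(1j * 0.8 * np.sin(2 * np.pi * g / n)) # second test field

# AC[j, k] = E1(g_j) * conj(E2(g_j - s_k)); experimentally this is the Fourier
# sideband of the measured interferogram for shear s_k.  Add weak noise.
AC = np.stack([E1_true * np.conj(np.roll(E2_true, s)) for s in shears], axis=1)
AC += 0.01 * (rng.standard_normal(AC.shape) + 1j * rng.standard_normal(AC.shape))

E2 = np.ones(n, dtype=complex)          # structureless initial guess
for it in range(30):
    # closed-form minimizer of the error functional over E1, with E2 fixed
    num = sum(AC[:, k] * np.roll(E2, s) for k, s in enumerate(shears))
    den = sum(np.abs(np.roll(E2, s)) ** 2 for s in shears) + 1e-12
    E1 = num / den
    # closed-form minimizer of the error functional over E2, with E1 fixed
    num = sum(np.roll(E1, -s) * np.conj(np.roll(AC[:, k], -s))
              for k, s in enumerate(shears))
    den = sum(np.abs(np.roll(E1, -s)) ** 2 for s in shears) + 1e-12
    E2 = num / den
    resid = sum(np.sum(np.abs(AC[:, k] - E1 * np.conj(np.roll(E2, s))) ** 2)
                for k, s in enumerate(shears))
    print(f"iteration {it:2d}   residual {resid:.3e}")
# The residual decreases monotonically; the retrieved phases agree with the
# true ones up to the usual constant/linear ambiguities of shear methods, and
# the wrap-around of np.roll is harmless since the amplitude vanishes at the
# grid edges.
\end{verbatim}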
Hence, when the photons are measured, the destruction operators, as a function of the measured frequencies $\omega_A$ and $\omega_B$, are $ b_A(\omega_A) = a_s(\omega),\,\,\, b_B(\omega_B)=a_i(\omega) $, if the state measured is $\vert SS\rangle$, or $b_A(\omega_A) = a_i(\omega),\,\,\, b_B(\omega_B)=a_s(\omega)$ if the state measured is $\vert LL \rangle$. Hence, the coincidence probability reads: \begin{equation} \begin{aligned} &P(\omega_A,\omega_B)=\\ &\vert A(\omega_A - \Omega_1,\omega_B) + A(\omega_B, \omega_A - \Omega_2)e^{i(\omega_A+\omega_B)\tau} \vert^2. \end{aligned} \end{equation} We can now define $E_1(\omega_s,\omega_i) \equiv A(\omega_A - \Omega_1,\omega_B) = X_1(\omega_A - \Omega_1,\omega_B)e^{i\phi_1(\omega_A - \Omega_1,\omega_B)}$ and $E_2(\omega_s,\omega_i) \equiv A(\omega_B, \omega_A - \Omega_2) = X_2(\omega_B, \omega_A - \Omega_2) e^{i\left[\phi_2(\omega_B, \omega_A - \Omega_2)+(\omega_A+\omega_B)\tau\right]},$ where $X_i$ is the spectral amplitude of each electric field and $\phi_i$ is its phase, and we have introduced the ancillary field $E_2(\omega_s,\omega_i)$, which bears no additional physical meaning but is instrumental to the field retrieval. The probability becomes: \begin{equation} P(\omega_A,\omega_B)=\vert E_1(\omega_A - \Omega_1,\omega_B) + E_2(\omega_B,\omega_A-\Omega_2) \vert^2, \end{equation} which has the same structure as the interferogram $I$ between $E_1(\gamma)$ and $E_2(\gamma)$, where $\gamma = \omega_s,\omega_i$. As such, this can now be processed with the MICE algorithm, solving the following equations which have been obtained by minimizing the error in Eq. \eqref{error} with respect to both fields: \begin{equation} \begin{aligned} &E_1(\omega_i,\omega_j)=\frac{\sum_{k,l} AC^{meas}_{i -k,j+l}\cdot E_2^*(\omega_{i}+\Omega_{1,k},\omega_{j}-\Omega_{2,l})}{\sum_{k,l}E_2(\omega_{i}-\Omega_{1,k},\omega_{j}+\Omega_{2,l})}\\ &E_2^*(\omega_i,\omega_j)=\frac{\sum_{k,l}AC^{meas}_{i+k,j-l} \cdot E_1^*(\omega_{i}-\Omega_{1,k},\omega_{j}+\Omega_{2,l}) }{\sum_{k,l}E_1(\omega_{i}-\Omega_{1,k},\omega_{j}+\Omega_{2,l})}. \end{aligned} \end{equation} To solve this set of equations it is necessary to provide an initial guess for $E_2$, so as to obtain an initial value of $E_1$ which is then fed into the second equation. By iteration the two fields are retrieved. We remark that, as in any implementation based on SPIDER, MICE suffers from ambiguities in determining the amplitude $X(\omega_s,\omega_i)$: the phases retrieved for both fields will be accurate, but the amplitudes will not. In order to retrieve the JSA with the setup proposed, it would be sufficient to block the L arms, and perform the spectral measurement on the S arms alone. \begin{figure}
\caption{{\it Simulation results.} a) Joint spectral amplitude for $E_1$. b) Joint spectral phase for $E_1$. c-e) Interferograms obtained with 5, 20, and 5000 maximum peak coincidence counts. f-h) Retrieved joint spectral phase for the three signal intensities.}
\label{results}
\end{figure} In order to test the analysis routine, we perform a bi-photon phase reconstruction on simulated data. The JSA and JSP constituting $E_1$ are shown in panels (a) and (b) of Fig. \ref{results}, and are typical of those emitted by SPDC sources (e.g. those in \cite{brian, john}). The field is sampled on a $32\times32$ pixel grid, covering a spectral range of 10 nm centered at 820 nm along each dimension. $E_2$ is obtained as a permutation between the two dimensions of $E_1$. The shears are then applied to both fields. Both $\Omega_1$ and $\Omega_2$ can each assume 8 different values, which leads to 64 interferograms per reconstruction. As per Fig. \ref{results}, we choose a scenario in which the correlations are present only in the phase to be retrieved and not in the JSA, which results in both $E_1$ and $E_2$ sharing the same amplitude. Since the amplitude of the two fields is the same and the multiple shears are used only for the required redundancy, their value can be as small as dictated by the detection resolution, so in order to have eight different values, the shear on each arm will vary from -4 px to 3 px (where each pixel corresponds to $\Delta \lambda \sim 0.3$ nm). \\ \indent To test the robustness to noise in a realistic scenario, we perform different reconstructions by varying the signal intensity. In particular, the interferograms are normalized by setting the peak coincidence counts of the interferogram, $N_{max}$, from 5 to 8000 coincidences. Furthermore, accidental coincidences are added accordingly, given by $N_{acc}=(N_{max}/0.1)^2/(80\times10^{6})$, obtained considering a $10\%$ coincidence efficiency to determine the signal intensity and a repetition rate of 80 MHz. Note that $N_{acc}$ is calculated on the maximum coincidence value and is hence overestimated. The interferograms are then randomly generated with a Poissonian distribution centered at the value given by the normalization for each pixel, to which Poissonian-distributed accidental coincidences are added. The interferograms are then fed to the MICE algorithm set with 20 iterations. The interferograms for $N_{max} = 5$, $N_{max} = 20$, and $N_{max} = 5000$ coincidences are shown in Fig. \ref{results} panels c-e. Panels f-h show the reconstructed JSP of $E_1$ for each signal intensity. The phase of $E_2$, which is also reconstructed, is not shown as it does not add any meaningful information. \begin{figure}
\caption{{\it RMS error} between the original and retrieved phase vs. the interferogram's peak coincidence counts. The error saturates at 5000 peak coincidences at $4.5\cdot 10^{-3}$ rad.}
\label{rmse}
\end{figure} Each reconstruction is then repeated 30 times to accumulate statistics for calculating the RMS error, weighted with the field's intensity \cite{articolodiian}, between the original and retrieved phase of $E_1$. The results are shown in Fig. \ref{rmse}. Varying the signal intensity, the error converges to its minimum of $0.0045$ rad for $N_{max} = 5000$. However, even for 5 peak counts the intensity-weighted RMSE is 0.056 rad, which indicates a good agreement between the retrieved and original phase. In fact, even when the full span of the phase is not reconstructed, the low intensity does not affect the reconstruction in the portion with non-zero signal. This makes MICE an excellent tool for dealing with particularly low count rates and noisy scenarios. Shear, resolution, and signal intensity all contribute to achieving a correct reconstruction and have to be tailored to the measured state, taking into account its spectral amplitude and phase complexity; this is common to every reconstruction technique in the classical domain as well. Nonetheless, with the appropriate choice of parameters, the algorithm is capable of successfully reconstructing arbitrarily complex JSPs, as demonstrated by the reconstruction in Fig. \ref{rm3}, where the University of Roma Tre logo has been used as JSP. With respect to the reconstruction shown before, 32 shears were employed instead of 8, while the resolution and spectral intensity of the previous, more realistic, case were kept. In this example the redundancy given by the multiple shears compensates for the lack of resolution in the reconstruction of a highly structured phase, showing the flexibility given by the interplay among the many reconstruction parameters. \begin{figure}
\caption{{\it JSP reconstruction} of the University of Roma Tre logo.}
\label{rm3}
\end{figure}\\ \indent Concluding, we have proposed a new technique which is capable of characterizing the joint spectral phase of a biphoton state even in low-signal, noisy regimes. This takes advantage of the high redundancy granted by the multi-shear approach, which is implemented using a modified Franson interferometer. The robustness to noise is reflected in a rapid convergence of the RMS error to its minimum. The proposed setup presents its complexities, but it has already been successfully used in many endeavours. The lack of strict signal requirements and the robustness to noise make up for these complexities, posing this novel technique as a possible route to obtain a complete characterization of time-frequency states. \\ {\it Acknowledgements.} The author would like to thank M. Barbieri for his helpful advice, and G. Vallone for fruitful discussion.
\end{document}
\begin{document}
\setcounter{page}{1}
\title[Boutet de Monvel operators on singular manifolds]{Boutet de Monvel operators on singular manifolds \\
\\ Op\'erateurs de Boutet de Monvel pour des vari\'et\'es singuli\`eres}
\author[Karsten Bohlen]{Karsten Bohlen}
\address{$^{1}$ Leibniz University Hannover, Germany} \email{\textcolor[rgb]{0.00,0.00,0.84}{[email protected]}}
\subjclass[2000]{Primary 58J32; Secondary 58B34.}
\keywords{Boutet de Monvel's calculus, groupoids, Lie manifolds.}
\begin{abstracts} \abstractin{english} We construct a Boutet de Monvel calculus for general pseudodifferential boundary value problems defined on a broad class of non-compact manifolds, the class of so-called Lie manifolds with boundary. It is known that this class of non-compact manifolds can be used to model many classes of singular manifolds.
\abstractin{french} Nous construisons un calcul de type Boutet de Monvel pour des probl\`emes de valeurs au bord pseudodiff\'erentiels d\'efinis sur une large classe de vari\'et\'es non-compactes, les vari\'et\'es de Lie \`a bord. Il est bien connu que cette classe de vari\'et\'es non-compactes peut \^etre utilis\'ee pour mod\'eliser de nombreuses classes de vari\'et\'es singuli\`eres. \end{abstracts}
\maketitle
\section{Introduction}
The analysis on singular manifolds has a long history, and the subject is to a large degree motivated by the study of partial differential equations (with or without boundary conditions) and by the generalizations of index theory to the singular setting, e.g. Atiyah-Singer type index theorems. One particular approach is based on the observation first made by A. Connes (cf. \cite{C}, section II.5) that groupoids are good models for singular spaces. The pseudodifferential calculus on longitudinally smooth groupoids was developed by B. Monthubert, V. Nistor, A. Weinstein and P. Xu; see e.g. \cite{NWX}. Later a pseudodifferential calculus on a Lie manifold was constructed in \cite{ALN} via representations of pseudodifferential operators on a Lie groupoid. This representation also yields closedness under composition.
It is important for applications in the study of partial differential equations to pose boundary conditions and to construct a parametrix for general boundary value problems.
In our case we consider the following data: a Lie manifold $(X, \V)$ with boundary $Y$ which is an embedded, transversal hypersurface $Y \subset X$ and which is a Lie submanifold of $X$ (cf. \cite{ALN}, \cite{AIN}).
We will describe a general calculus with pseudodifferential boundary conditions on the Lie manifold with boundary $(X, Y, \V)$. Special cases of our setup have been considered by Schrohe and Schulze, cf. e.g. \cite{SS}. Debord and Skandalis study Boutet de Monvel operators using deformation groupoids, \cite{DS}.
\section{Boutet de Monvel's calculus}
Boutet de Monvel's calculus (e.g. \cite{BM}) was introduced in 1971. For a detailed account we refer the reader to the book \cite{G}. This calculus provides a convenient and general tool to study the classical boundary value problems (BVP's). Let $X$ be a smooth compact manifold with boundary and fix smooth vector bundles $E_i \to X, \ F_i \to \partial X, \ i = 1,2$. Denote by $P \in \Psi_{tr}^m(M)$ a pseudodifferential operator (\cite{G}, p. 20 (1.2.4)) with transmission property (\cite{G}, p. 23, (1.2.6)) defined on a suitable smooth neighborhood $M$ of $X$, where $P_{+} = r^{+} P e^{+}$ denotes the truncation of $P$. The transmission property (loc. cit.) ensures that $P_{+}$ maps functions smooth up to the boundary to functions which are smooth up to the boundary. Additionally, $G \colon C^{\infty}(X) \to C^{\infty}(X)$ is a singular Green operator (\cite{G}, p. 30), $K \colon C^{\infty}(\partial X) \to C^{\infty}(X)$ is a potential operator (\cite{G}, p. 29) and $T \colon C^{\infty}(X) \to C^{\infty}(\partial X)$ is a trace operator (\cite{G}, p. 27). We also have a pseudodifferential operator on the boundary $S \in \Psi^m(\partial X)$.
An operator of order $m \leq 0$ and type $0$ in Boutet de Monvel's calculus is a matrix
\[ A = \begin{pmatrix} P_{+} + G & K \\ T & S \end{pmatrix} \colon \begin{matrix} C^{\infty}(X, E_1) \\ \oplus \\ C^{\infty}(\partial X, F_1) \end{matrix} \to \begin{matrix} C^{\infty}(X, E_2) \\ \oplus \\ C^{\infty}(\partial X, F_2) \end{matrix} \in \B^{m,0}(X, \partial X). \]
\textbf{The calculus of Boutet de Monvel has the following \emph{features:}} \begin{itemize} \item If the bundles match, i.e. if $E_1 = E_2 = E, \ F_1 = F_2 = F$, the calculus is \textbf{\emph{closed under composition}}.
\item If $F_1 = 0, \ G = 0$ and $K, \ S$ are not present, we obtain a \textbf{\emph{classical BVP}}, e.g. the Dirichlet problem.
\item If $F_2 = 0$ and $T, \ S$ are not present, the calculus \textbf{\emph{contains inverses of classical BVP's}} whenever they exist. \end{itemize}
The proof that the composition of two Boutet de Monvel operators is again of this type is technical; see e.g. \cite{G}, chapter 2. Additionally, the symbolic structure of these operators involves more complicated behavior than that of pseudodifferential operators alone.
\section{Lie manifolds with boundary}
In this section we will consider the general setup for the analysis on singular and non-compact manifolds.
\begin{Exa} On a compact manifold with boundary $M$ we introduce a Riemannian metric which models a singular structure, where the manifold with boundary is viewed as a compactification of a non-compact manifold with cylindrical end. Precisely, the metric is a product metric $g = g_{\partial M} + dt^2$ in a tubular neighborhood of the boundary (or the far end of the cylinder). The cylindrical end is mapped to a tubular neighborhood of the boundary via the Kondratiev transform $r = e^t$ based on \cite{Kon}. Assume that we are given a tubular neighborhood of the form $[0, \epsilon) \times \partial M$ and let $(r, x') \in [0,\epsilon) \times \partial M$ be local coordinates. The $b$-\emph{differential operators} take the form for $n = \dim(M)$ \begin{align}
P &= \sum_{|\alpha| \leq m} a_{\alpha}(r, x') (r \partial_r)^{\alpha_1} \partial_{x_2'}^{\alpha_2} \cdots \partial_{x_n'}^{\alpha_n} = \sum_{|\alpha| \leq m} a_{\alpha} (r \partial_r)^{\alpha_1} \partial^{\alpha'}. \tag{$*$} \label{1} \end{align}
We observe that the vector fields, which are \emph{local generators}, in this example are $\{r \partial_r, \partial_{x_2}, \cdots, \partial_{x_n}\}$.
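For instance, assuming the product metric above and up to sign conventions, the Laplacian on the cylindrical end, $\Delta = \partial_t^{2} + \Delta_{\partial M}$, becomes under the Kondratiev transform $r = e^{t}$ (so that $\partial_t = r\partial_r$) the $b$-operator \[ \Delta = (r\partial_r)^{2} + \Delta_{\partial M}, \] which is of the form \eqref{1} with $m = 2$.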
We consider a locally finitely generated module of vector fields $\V_b$ that has local generators as defined in our example. These are the vector fields that are tangent to the boundary $\partial M$. An operator of order $m$ in the universal enveloping algebra $P \in \Diff_{\V_b}^m(M)$ is locally written as in \eqref{1}. Since $\V_b$ is a locally finitely generated and projective $C^{\infty}(M)$-module we obtain a vector bundle $\A_b \to M$ such that the smooth sections identify $\Gamma(\A_b) \cong \V_b$ by the Serre-Swan theorem. On $\A_b$ we have a structure of a \emph{Lie algebroid} with anchor $\varrho \colon \A_b \to TM$ (see \cite{ALN} for further details). \label{Exa:Kontradiev} \end{Exa}
There are Lie subalgebras of $\V_b$ constituting so-called \emph{Lie structures} which model different types of singular structures on a manifold; see also \cite{AIN}, \cite{ALN}, \cite{ALNV}. In this general setup $M$ is a compact manifold \emph{with corners} (generalizing Example \ref{Exa:Kontradiev}) viewed as the compactification endowed with a Riemannian metric. The Riemannian metric is of product type in a tubular neighborhood of the singular hyperfaces. The topological structure of $M$ is such that $M$ has a finite number of embedded (intersecting) codimension one hypersurfaces. Open subsets of $[-1,1]^k \times \Rr^{n-k}$, where $k$ is the codimension, are needed to model manifolds with corners. We can require the transition maps to be smooth and obtain a smooth structure on $M$. In our setup we consider such a Lie manifold $X$ with an additional hypersurface (denoted $Y$ below) which is \emph{transversal}. Hence $Y$ is allowed to intersect the singular strata (at infinity) of $X$ as long as this intersection does not occur in a corner (where two singular strata meet).
\begin{Def}[\cite{ALN}, Def. 1.1] A \emph{Lie manifold} $(X, \A^{\pm})$ consists of the following data.
\emph{i)} A compact manifold with corners $X$.
\emph{ii)} A Lie algebroid $(\A^{\pm}, \varrho_{\pm})$ with projection map $\pi_{\pm} \colon \A^{\pm} \to X$.
\emph{iii)} The module of vector fields $\V_{\pm} = \Gamma(\A^{\pm})$ is a locally finitely generated, projective $C^{\infty}(X)$-module. \end{Def}
\begin{Def}[\cite{AIN}, Def. 2.1, Def. 2.5] A \emph{Lie manifold with boundary} $(X, Y, \A^{\pm})$ consists of the following data.
\emph{i)} A Lie manifold $(X, \A^{\pm})$.
\emph{ii)} An embedded codimension one submanifold with corners $Y \hookrightarrow X$.
\emph{iii)} There is a Lie algebroid $(\A_{\partial}, \varrho_{\partial})$ on $Y$ with projection map $\pi_{\partial} \colon \A_{\partial} \to Y$ such that $\A_{\partial}$ is a Lie subalgebroid of $\A^{\pm}$, \cite{M}, Def. 4.3.14.
\emph{iv)} The submanifold $Y$ is \emph{transversal}, i.e. $\varrho_{\pm}(\A_y) + T_y Y = T_y X, \ y \in \partial Y$.
\emph{v)} The interior $(X_0, Y_0)$ is diffeomorphic to a smooth manifold with boundary. \end{Def}
\begin{Rem}
\emph{i)} In \cite{AIN} the authors define a Lie manifold with boundary $(X, Y, \V_{\pm})$ and also the \emph{double} of a given Lie manifold with boundary. We denote this double by $M = 2X$ which is a Lie manifold $(M, \V)$. The \emph{Lie structure} $\V$ is defined such that $\V_{\pm} = \{V_{|X_{\pm}} : V \in \V\}$. We obtain a Lie manifold $(M, \A)$ with the Lie algebroid $(\A, \varrho)$.
\emph{ii)} We set $\W = \Gamma(\A_{\partial})$ for the \emph{Lie structure} of $Y$. Using \emph{iii)} and \emph{iv)} of the definition we obtain (cf. \cite{M}, pp. 164-165) \begin{align*}
\W &= \{V \in \Gamma(Y, \A_{|Y}) : \varrho \circ V \in \Gamma(Y, TY)\} = \{V_{|Y} : V \in \V, \ V_{|Y} \ \text{tangent to} \ Y\}. \end{align*}
\label{Rem:double} \end{Rem}
\section{Quantization}
In this section we describe the quantization of H\"ormander symbols defined on the conormal bundles. We restrict ourselves to the case of trace operators. The other cases are defined analogously.
\textbf{\emph{We fix the following data:}} \begin{itemize} \item A Lie manifold with boundary $(X, Y, \V)$ and the double $M = 2X$ of $X$, endowed with Lie structure $2 \V$. Fix a Lie algebroid $(\pi \colon \A \to M, \ \varrho_M)$ such that $\Gamma(\A) = 2\V$. The hypersurface $Y$ is endowed with the Lie structure $\W$ as defined in \ref{Rem:double}. Furthermore, fix the vector bundle $(\pi_{\partial} \colon \A_{\partial} \to Y, \ \varrho_{\partial})$ with $\Gamma(\A_{\partial}) = \W$.
\item We fix the \emph{normal bundles} $\A_{|Y} / \A_{\partial} =: \N \to Y$ as well as\footnote{We denote by $\Delta_Y$ the diagonal in $Y \times Y$ being understood as a submanifold of $Y \times M, \ M \times Y$ and $M \times M$, groupoids $\G$, $\G_{\partial}$ and spaces $\X$, $\Xop$ as defined in \cite{B}.} $\N^{\X} \Delta_Y \to Y, \ \N^{\X^t} \Delta_Y \to Y, \ \N^{\G} \Delta_Y \to Y$ which are used to quantize pseudodifferential, trace, potential and singular Green operators respectively. \end{itemize}
The notation is reminiscent of the underlying geometry which is described using groupoids and groupoid correspondences \cite{B}. We will keep using this notation, though we remark that there are (non-canonical) isomorphisms \begin{align*}
& \N^{\X} \Delta_Y \cong \A_{\partial} \times \N, \ \N^{\Xop} \Delta_{Y} \cong \N \times \A_{\partial} \ \text{and} \ \N^{\G} \Delta_Y \cong \A_{|Y} \times \N. \end{align*}
\begin{Rem} \emph{i)} On the singular normal bundles we define the H\"ormander symbols spaces $S^m(\N^{\X} \Delta_Y^{\ast}) \subset C^{\infty}(\N^{\X} \Delta_Y^{\ast})$ as in \cite{HIII}, Thm 18.2.11.
\emph{ii)} Define the \emph{inverse fiberwise Fourier transform} \[ \Ff^{-1}(\varphi)(\zeta) = \int_{\overline{\pi}(\zeta) = \pi(\xi)} e^{i \scal{\xi}{\zeta}} \varphi(\xi) \,d\xi, \ \varphi \in S(\N^{\X} \Delta_Y^{\ast}). \]
Here we use the notation $S(\N^{\X} \Delta_Y^{\ast})$ for the space of rapidly decreasing functions on the conormal bundle, see also \cite{S}, Chapter 1.5.
The spaces of conormal distributions are defined as $I^{m}(\N^{\X} \Delta_Y, \Delta_Y) := \Ff^{-1} S^m(\N^{\X} \Delta_Y^{\ast})$ and $I^{m}(\N^{\Xop} \Delta_Y, Y), \ I^{m}(\N^{\G} \Delta_Y, Y)$ analogously. \label{Rem:fwise} \end{Rem}
On a Lie manifold the injectivity radius is positive, see \cite{ALN2}, Thm. 4.14.
Let $r$ be smaller than the injectivity radius and write $(\N^{\X} \Delta_Y)_r = \{v \in \N^{\X} \Delta_Y : \|v\| < r\}$ as well as $I_{(r)}^m(\N^{\X} \Delta_Y, \Delta_Y) = I^m((\N^{\X} \Delta_Y)_r, \Delta_Y).$
Fix the restriction $\R \colon I_{(r)}^m(\N^{\X} \Delta_Y, \Delta_Y) \to I_{(r)}^m(N^{Y_0 \times M_0} \Delta_{Y_0}, \Delta_{Y_0})$. Additionally, denote by $\J_{tr}$ the action of a conormal distribution (its induced linear operator). We denote by $\Psi$ the normal fibration of the inclusion $\Delta_{Y_0} \hookrightarrow Y_0 \times M_0$ such that $\Psi$ is the local diffeomorphism mapping an open neighborhood of the zero section $O_{Y_0} \subset V \subset N^{Y_0 \times M_0} \Delta_{Y_0}$ onto an open neighborhood $\Delta_{Y_0} \subset U \subset Y_0 \times M_0$ (cf. \cite{S}, Thm. 4.1.1). Then we have the induced map on conormal distributions $\Psi_{\ast} \colon I_{(r)}^m(N^{Y_0 \times M_0} \Delta_{Y_0}, \Delta_{Y_0}) \to I^m(Y_0 \times M_0, \Delta_{Y_0})$. Also let $\chi \in C_c^{\infty}(\N^{\X} \Delta_Y)$ be a cutoff function which acts by multiplication $I^m(\N^{\X} \Delta_Y, \Delta_Y) \to I_{(r)}^{m}(\N^{\X} \Delta_Y, \Delta_Y)$.
\begin{Def}[Quantization] Define $q_{T, \chi} \colon S^m(\N^{\X} \Delta_Y^{\ast}) \to \Trace^{m,0}(M, Y)$ such that for $t \in S^m(\N^{\X} \Delta_Y^{\ast})$ we have $q_{T, \chi}(t) = \J_{tr} \circ q_{\Psi, \chi}(t)$ where $q_{\Psi, \chi}(t) = \Psi_{\ast}(\R(\chi \Ff^{-1}(t)))$. \end{Def}
From the compactness of $M$ we can associate to each vector field in $2 \V$ a \emph{global flow} $2\V \ni V \mapsto \Phi_V \colon \Rr \times M \to M$. Then consider the diffeomorphism $\Phi(1, -) \colon M \to M$ evaluated at time $t = 1$ and fix the corresponding group actions on functions which we denote by $2\V \ni V \mapsto \varphi_V \colon C^{\infty}(M) \to C^{\infty}(M)$.
\begin{Def} The class of $\V$-trace operators is defined as $\Trace_{2\V}^{m,0}(M, Y) := \Trace^{m,0}(M, Y) + \Trace_{2\V}^{-\infty, 0}(M, Y)$. Here $\Trace^{m,0}(M, Y)$ consists of the extended operators from the previous definition. The residual class is defined as follows \begin{align*} & \Trace_{2\V}^{-\infty, 0}(M, Y) := \mathrm{span}\{q_{\chi, T}(t) \varphi_{V_1} \cdots \varphi_{V_k} : V_j \in 2\V, \ \chi \in C_c^{\infty}(\N^{\X} \Delta_Y), \ t \in S^{-\infty}(\N^{\X} \Delta_Y^{\ast})\}. \end{align*} \end{Def}
We henceforth denote by $\B_{2\V}^{m,0}(M, Y)$ the class of extended Boutet de Monvel operators which consist of matrices of operators $\begin{pmatrix} P + G & K \\ T & S \end{pmatrix}$. The components are given via the fibrations on the appropriate normal bundles.
\section{Compositions and Parametrices}
To prove closedness under composition we need to assume that a groupoid $\G$ which integrates the Lie structure on $M$ and a groupoid $\G_{\partial}$ which integrates the Lie structure on $Y$ are chosen in the following sense. Precisely, we construct for the given Lie structures on $M$ and $Y$ respectively integrating groupoids $\G$ and $\G_{\partial}$ as well as a morphism $\G \to \G_{\partial}$ and a morphism $\G_{\partial} \to \G$ in the category of Lie groupoids. These morphisms are described using \emph{correspondences} of groupoids; see \cite{MO}. \begin{Exa} Consider the example of the algebroids $\A = TM, \ \A_{\partial} = TY$, i.e. the Lie structures consisting of \emph{all vector fields}. The pair groupoids $M \times M \rightrightarrows M, \ Y \times Y \rightrightarrows Y$ and also the \emph{path groupoids} (see \cite{LN}, example 2.9) $\P_M \rightrightarrows M, \ \P_Y \rightrightarrows Y$ integrate these algebroids. \label{Exa:corr} \end{Exa} For several Lie structures, groupoids and correspondences with good geometry exist, e.g. the Lie structure of $b$-vector fields or the structure of fibered cusp vector fields \cite{B}. The following results hold for Lie structures of this type.
\begin{Thm} The class of extended Boutet de Monvel operators $\B_{2\V}^{0,0}(M, Y)$ is closed under composition and adjoint, hence $\B_{2\V}^{0,0}(M, Y)$ forms an associative $\ast$-algebra. \label{Thm:closed1} \end{Thm}
We define the class of \emph{truncated} Boutet de Monvel operators as follows. The restriction $r^+$ to the interior $\mathring{X}_0 := X_0 \setminus Y_0$ and the extension by zero operator $e^+$ are given on the manifold level by \[ \xymatrix{ L^2(M_0) \ar@/^1pc/[r]^{r^{+}} & \ar@/-0pc/[l]^-{e^{+}} L^2(\mathring{X}_0) } \]
with $r^{+} e^{+} = \id_{L^2(\mathring{X}_0)}$ and $e^{+} r^{+}$ being a projection onto a subspace of $L^2(M_0)$. We define \[ \End\begin{pmatrix} C_c^{\infty}(M_0) \\ \oplus \\ C_c^{\infty}(Y_0) \end{pmatrix} \supset \B_{2\V}^{m,0}(M, Y) \ni A = \begin{pmatrix} P + G & K \\ T & S \end{pmatrix} \mapsto \C(A) = \begin{pmatrix} r^{+} (P + G) e^{+} & r^{+} K \\ T e^{+} & S \end{pmatrix} \in \End\begin{pmatrix} C_c^{\infty}(X_0) \\ \oplus \\ C_c^{\infty}(Y_0) \end{pmatrix}. \]
\begin{Def} The class of \emph{truncated operators} is for $m \leq 0$ defined as $\B_{\V}^{m,0}(X, Y) := \C \circ \B_{2\V}^{m,0}(M, Y)$. \label{Def:truncated} \end{Def}
To show closedness under composition we use the longitudinally smooth structure of the integrating groupoids as well as the previous Theorem. This enables us to state the second main result.
\begin{Thm} The calculus $\B_{\V}^{0,0}(X, Y)$ is closed under composition and adjoint. \end{Thm}
A priori, the inverse of an invertible Boutet de Monvel operator will not be contained in our calculus due to the definition via compactly supported distributional kernels. We define a completion $\overline{\B}_{\V}^{-\infty,0}(X, Y)$ of the residual Boutet de Monvel operators with regard to the family of norms of operators $\L\left(\begin{matrix} H_{\V}^t(X) \\ \oplus \\ H_{\W}^t(Y) \end{matrix}, \ \begin{matrix} H_{\V}^r(X) \\ \oplus \\ H_{\W}^r(Y) \end{matrix}\right)$ on Sobolev spaces, cf. \cite{ALNV}. Define the completed algebra of Boutet de Monvel operators as \[ \overline{\B}_{\V}^{0,0}(X, Y) = \B_{\V}^{0,0}(X, Y) + \overline{\B}_{\V}^{-\infty, 0}(X, Y). \] The resulting algebra contains inverses and has favorable algebraic properties, e.g. it is spectrally invariant, \cite{B}. We obtain a parametrix construction after defining a notion of \emph{Shapiro-Lopatinski ellipticity}. The indicial symbol $\R_{F}$ of an operator $A$ on $X$ is an operator $\R_{F}(A)$ defined as the restriction to a singular hyperface $F \subset X$ (see \cite{ALN}). Note that if $F$ intersects the boundary $Y$ non-trivially we obtain in this way a non-trivial Boutet de Monvel operator $\R_F(A)$ defined on the Lie manifold $F$ with boundary $F \cap Y$.
\begin{Def} \emph{i)} We say that $A \in \overline{\B}_{\V}^{0,0}(X, Y)$ is \emph{$\V$-elliptic} if the principal symbol $\sigma(A)$ and the principal boundary symbol $\sigma_{\partial}(A)$ are both pointwise invertible.
\emph{ii)} A $\V$-elliptic operator $A$ is \emph{elliptic} if $\R_F(A)$ is pointwise invertible for each hyperface $F \subset X$. \label{Def:elliptic} \end{Def}
\begin{Thm} \emph{i)} Let $A \in \overline{\B}_{\V}^{0,0}(X, Y)$ be $\V$-elliptic. There is a parametrix $B \in \overline{\B}_{\V}^{0,0}(X, Y)$ of $A$, in the sense \[ I - AB \in \overline{\B}_{\V}^{-\infty, 0}(X, Y), \ I - BA \in \overline{\B}_{\V}^{-\infty, 0}(X, Y). \]
\emph{ii)} Let $A \in \overline{\B}_{\V}^{0,0}(X, Y)$ be elliptic. There is a parametrix $B \in \overline{\B}_{\V}^{0,0}(X, Y)$ of $A$ up to compact operators \[ I - AB \in \K\begin{pmatrix} L_{\V}^2(X) \\ \oplus \\ L_{\W}^2(Y) \end{pmatrix}, \ I - BA \in \K\begin{pmatrix} L_{\V}^2(X) \\ \oplus \\ L_{\W}^2(Y) \end{pmatrix}. \]
\label{Thm:parametrix} \end{Thm}
{ \small {
}}
\end{document}
\begin{document}
\pagestyle{plain}
\title{\large {\textbf{REAL HYPERSURFACES EQUIPPED WITH $\xi$-PARALLEL STRUCTURE JACOBI OPERATOR IN $\mathbb{C}P^{2}$ OR $\mathbb{C}H^{2}$}}}
\author{ \textbf{\normalsize{Konstantina Panagiotidou and Philippos J. Xenos}}\\ \small \emph{Mathematics Division-School of Technology, Aristotle University of Thessaloniki, Greece}\\ \small \emph{E-mail: [email protected], [email protected]}} \date{}
\maketitle \begin{flushleft} \small {\textsc{Abstract}. The $\xi$-parallelness condition of the structure Jacobi operator of real hypersurfaces has been studied in combination with additional conditions. In the present paper we study three-dimensional real hypersurfaces in $\mathbb{C}P^{2}$ or $\mathbb{C}H^{2}$ equipped with a $\xi$-parallel structure Jacobi operator. We prove that they are Hopf hypersurfaces and, if in addition $\eta(A\xi)\neq0$, we give their classification.} \end{flushleft} \begin{flushleft} \small{\emph{Keywords}: Real hypersurface, $\xi$-parallel structure Jacobi operator, Complex projective space, Complex hyperbolic space.\\} \end{flushleft} \begin{flushleft} \small{\emph{Mathematics Subject Classification }(2000): Primary 53B25; Secondary 53C15, 53D15.} \end{flushleft}
\section{Introduction}
A complex $n$-dimensional Kaehler manifold of constant holomorphic sectional curvature $c$ is called a complex space form, which is denoted by $M_{n}(c)$. A complete and simply connected complex space form is complex analytically isometric to a complex projective space $\mathbb{C}P^{n}$, a complex Euclidean space $\mathbb{C}^{n}$ or a complex hyperbolic space $\mathbb{C}H^{n}$ if $c>0$, $c=0$ or $c<0$ respectively.
The study of real hypersurfaces in a nonflat complex space form is a classical problem in Differential Geometry. Let $M$ be a real hypersurface in $M_{n}(c)$. Then $M$ has an almost contact metric structure $(\varphi,\xi,\eta,g)$. The structure vector field $\xi$ is called principal if $A\xi=\alpha\xi$ holds on $M$, where $A$ is the shape operator of $M$ in $M_{n}(c)$ and $\alpha$ is a smooth function. A real hypersurface is called a \textit{Hopf hypersurface} if $\xi$ is principal.
Takagi in \cite{T2} classified homogeneous real hypersurfaces in $\mathbb{C}P^{n}$ and Berndt in \cite{Ber} classified Hopf hypersurfaces with constant principal curvatures in $\mathbb{C}H^{n}$. Let $M$ be a real hypersurface in $M_{n}(c)$, $c\neq0$. Then we state the following theorems due to Okumura \cite{Ok} for $\mathbb{C}P^{n}$ and Montiel and Romero \cite{MR} for $\mathbb{C}H^{n}$ respectively.\\
\begin{theorem} Let $M$ be a real hypersurface of $M_{n}(c)$, $n\geq2$, $c\neq0$. If it satisfies $A\varphi-\varphi A=0$, then $M$ is locally congruent to one of the following hypersurfaces:
\begin{itemize}
\item In case $\mathbb{C}P^{n}$\\
$(A_{1})$ a geodesic hypersphere of radius $r$, where
$0<r<\frac{\pi}{2}$,\\
$(A_{2})$ a tube of radius $r$ over a totally geodesic
$\mathbb{C}P^{k}$,$(1\leq k\leq n-2)$, where $0<r<\frac{\pi}{2}.$
\item In case $\mathbb{C}H^{n}$\\
$(A_{0})$ a horosphere in $\mathbb{C}H^{n}$, i.e.\ a Montiel tube,\\
$(A_{1})$ a geodesic hypersphere or a tube over a hyperplane $\mathbb{C}H^{n-1}$,\\
$(A_{2}) $ a tube over a totally geodesic $\mathbb{C}H^{k}$ $(1\leq k\leq n-2)$.
\end{itemize} \end{theorem}
Since 2006 many authors have studied real hypersurfaces whose structure Jacobi operator is parallel ($\nabla l=0$). Ortega, Perez and Santos \cite{OPS} proved the nonexistence of real hypersurfaces in nonflat complex space forms with parallel structure Jacobi operator ($\nabla l=0$). Perez, Santos and Suh \cite{PSaSuh}, continuing the work of \cite{OPS}, considered a weaker condition ($\mathbb{D}$-parallelness), that is, $\nabla_{X}l=0$ for any vector field $X$ orthogonal to $\xi$. They proved the non-existence of such real hypersurfaces in $\mathbb{C}P^{m}$, $m\geq3$.
Kim and Ki in \cite{KK} classified real hypersurfaces satisfying $\nabla_{\xi}l=0$ and $S\varphi=\varphi S$. Ki and Liu \cite{KL} proved that real hypersurfaces satisfying $\nabla_{\xi}l=0$ and $lS=Sl$ are Hopf hypersurfaces, provided that the scalar curvature is non-negative. Ki et al.\ in \cite{KPSaSuh} classified real hypersurfaces satisfying $\nabla_{\xi}l=0$ and $\nabla_{\xi}S=0$. Kim et al.\ in \cite{KKK} studied the real hypersurfaces satisfying $g(\nabla_{\xi}\xi,\nabla_{\xi}\xi)=\mu^{2}=$const, $6\mu^{2}+\frac{c}{4}\neq0$ and classified those whose $l$ is $\xi$-parallel. Cho and Ki \cite{CK1} classified real hypersurfaces satisfying $Al=lA$ and $\nabla_{\xi}l=0$.
Recently, Ivey and Ryan \cite{IR} studied real hypersurfaces in $M_{2}(c)$.
Motivated by the above results, we study real hypersurfaces in $\mathbb{C}P^{2}$ or $\mathbb{C}H^{2}$ equipped with a $\xi$-parallel structure Jacobi operator, i.e.\ $\nabla_{\xi}l=0$. More precisely, the following relation holds for any vector field $X$ tangent to $M$: \begin{eqnarray} (\nabla_{\xi}l)X=0. \end{eqnarray} We prove the following theorem.
\begin{pro} Let $M$ be a connected real hypersurface in $\mathbb{C}P^{2}$ or $\mathbb{C}H^{2}$ with $\xi$-parallel structure Jacobi operator. Then $M$ is a Hopf hypersurface. Further, if $\eta(A\xi)\neq0$, then: \begin{itemize}
\item in the case of $\mathbb{C}P^{2}$, $M$ is locally congruent to\\
a geodesic sphere of radius $r$, where
$0<r<\frac{\pi}{2}$ and $r\neq\frac{\pi}{4}$,
\item in the case of $\mathbb{C}H^{2}$, $M$ is locally congruent \\
to a horosphere,\\
or to a geodesic sphere\\
or to a tube over the hyperplane $\mathbb{C}H^{1}$.\\ \end{itemize} \end{pro}
\section{Preliminaries} Throughout this paper all manifolds, vector fields, etc.\ are assumed to be of class $C^{\infty}$ and all manifolds are assumed to be connected. Furthermore, the real hypersurfaces are assumed to be oriented and without boundary.
Let $M$ be a real hypersurface immersed in a nonflat complex space form $(M_{n}(c),G)$ with almost complex structure $J$ and constant holomorphic sectional curvature $c$. Let $N$ be a unit normal vector field on $M$ and $\xi=-JN$. For a vector field $X$ tangent to $M$ we can write $JX=\varphi X+\eta(X)N$, where $\varphi X$ and $\eta(X)N$ are the tangential and the normal components of $JX$ respectively. The Riemannian connections $\overline{\nabla}$ in $M_{n}(c)$ and $\nabla$ in $M$ are related by $$\overline{\nabla}_{Y}X=\nabla_{Y}X+g(AY,X)N,$$ $$\overline{\nabla}_{X}N=-AX,$$ for any vector fields $X$, $Y$ on $M$, where $g$ is the Riemannian metric on $M$ induced from $G$ and $A$ is the shape operator of $M$ in $M_{n}(c)$. $M$ has an almost contact metric structure $(\varphi,\xi,\eta,g)$ induced from $J$, where $\varphi$ is a $(1,1)$ tensor field and $\eta$ a 1-form on $M$ such that (see \cite{Bl}) $$g(\varphi X,Y)=G(JX,Y),\hspace{20pt}\eta(X)=g(X,\xi)=G(JX,N).$$ Then we have \begin{eqnarray} \varphi^{2}X=-X+\eta(X)\xi,\hspace{20pt} \eta\circ\varphi=0,\hspace{20pt} \varphi\xi=0,\hspace{20pt} \eta(\xi)=1 \end{eqnarray} \begin{eqnarray}\hspace{20pt} g(\varphi X,\varphi Y)=g(X,Y)-\eta(X)\eta(Y),\hspace{10pt}g(X,\varphi Y)=-g(\varphi X,Y) \end{eqnarray} \begin{eqnarray} \nabla_{X}\xi=\varphi AX,\hspace{20pt}(\nabla_{X}\varphi)Y=\eta(Y)AX-g(AX,Y)\xi \end{eqnarray}
Since the ambient space is of constant holomorphic sectional curvature $c$, the equations of Gauss and Codazzi for any vector fields $X$, $Y$, $Z$ on $M$ are respectively given by \begin{eqnarray} R(X,Y)Z=\frac{c}{4}[g(Y,Z)X-g(X,Z)Y+g(\varphi Y ,Z)\varphi X\end{eqnarray} $$-g(\varphi X,Z)\varphi Y-2g(\varphi X,Y)\varphi Z]+g(AY,Z)AX-g(AX,Z)AY$$ \begin{eqnarray} \hspace{10pt} (\nabla_{X}A)Y-(\nabla_{Y}A)X=\frac{c}{4}[\eta(X)\varphi Y-\eta(Y)\varphi X-2g(\varphi X,Y)\xi] \end{eqnarray} where $R$ denotes the Riemannian curvature tensor on $M$.
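Recall that the structure Jacobi operator $l$ (with respect to $\xi$) is defined by $lX=R(X,\xi)\xi$ for every vector field $X$ tangent to $M$; this is the operator whose $\xi$-parallelness is studied in the sequel.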
For every point $P\in M$, the tangent space $T_{P}M$ can be decomposed as follows: $$T_{P}M=\mathrm{span}\{\xi\}\oplus\ker\eta,$$ where $\ker\eta=\{X\in T_{P}M:\eta(X)=0\}$.
Due to the above decomposition, the vector field $A\xi$ is decomposed as follows:
\begin{eqnarray}
A\xi=\alpha\xi+\beta U
\end{eqnarray}
where $\beta=|\varphi\nabla_{\xi}\xi|$ and
$U=-\frac{1}{\beta}\varphi\nabla_{\xi}\xi\in\ker\eta$, provided
that $\beta\neq0$.
\section{Auxiliary relations}
Let $M$ be a real hypersurface in $\mathbb{C}P^{2}$ or $\mathbb{C}H^{2}$, i.e. $M_{2}(c)$, $c\neq0$. We consider the open subset $\mathcal{N}$ of $M$ such that: $$\mathcal{N}=\{P\in M:\;\beta\neq0,\;\;\mbox{in a neighborhood of P}\}.$$ Furthermore, we consider $\mathcal{V}$, $\Omega$ open subsets of $\mathcal{N}$ such that: $$\mathcal{V}=\{P\in\mathcal{N}:\alpha=0,\;\;\mbox{in a neighborhood of P}\},$$ $$\Omega=\{P\in\mathcal{N}:\alpha\neq0,\;\;\mbox{in a neighborhood of P}\},$$ where $\mathcal{V}\cup\Omega$ is open and dense in the closure of $\mathcal{N}$.
\begin{lemma} Let M be a real hypersurface in $M_{2}(c)$, equipped with $\xi$-parallel structure Jacobi operator. Then $\mathcal{V}$ is empty. \end{lemma} \textbf{Proof:} Let $\{U,\varphi U,\xi\}$ be a local orthonormal basis on $\mathcal{V}$. The relation (2.6) takes the form $A\xi=\beta U$. The first relation of (2.3) for $X=\xi$, taking into account the latter, implies \begin{eqnarray} \nabla_{\xi}\xi=\beta\varphi U.\nonumber\ \end{eqnarray} Since $l\xi=R(\xi,\xi)\xi=0$ and, by the Gauss equation (2.4), $l\varphi U=\frac{c}{4}\varphi U$ on $\mathcal{V}$, relation (1.1) for $X=\xi$, because of the above relation, yields: \begin{eqnarray} \nabla_{\xi}(l\xi)=l\nabla_{\xi}\xi\Rightarrow \beta l\varphi U=0\Rightarrow \frac{c\beta}{4}\varphi U=0,\nonumber\ \end{eqnarray} which leads to a contradiction and this completes the proof of Lemma 3.1. {$
\Box$} \\
In what follows we work on $\Omega$, where $\alpha\neq0$ and $\beta\neq0$.
\begin{lemma} Let M be a real hypersurface in $M_{2}(c)$, equipped with $\xi$-parallel structure Jacobi operator. Then the following relations hold in $\Omega$: \begin{eqnarray} \hspace{-140pt}AU=(\frac{\beta^{2}}{\alpha}-\frac{c}{4\alpha}+\frac{\kappa}{\alpha})U+\beta\xi,\;\;\;\; A\varphi U=-\frac{c}{4\alpha}\varphi U \end{eqnarray} \begin{eqnarray} \hspace{-100pt}\nabla_{\xi}\xi=\beta\varphi U,\;\;\; \nabla_{U}\xi=(\frac{\beta^{2}}{\alpha}-\frac{c}{4\alpha}+\frac{\kappa}{\alpha})\varphi U,\;\;\; \nabla_{\varphi U}\xi=\frac{c}{4\alpha}U \end{eqnarray} \begin{eqnarray} \hspace{-100pt}\nabla_{\xi}U=\kappa_{1}\varphi U,\;\;\; \nabla_{U}U=\kappa_{2}\varphi U,\;\;\; \nabla_{\varphi U}U=\kappa_{3}\varphi U-\frac{c}{4\alpha}\xi \end{eqnarray} \begin{equation} \nabla_{\xi}\varphi U=-\kappa_{1}U-\beta\xi,\;\;\; \nabla_{U}\varphi U=-\kappa_{2}U-(\frac{\beta^{2}}{\alpha}-\frac{c}{4\alpha}+\frac{\kappa}{\alpha})\xi,\;\;\; \nabla_{\varphi U}\varphi U=-\kappa_{3}U \end{equation} \begin{eqnarray} \hspace{-180pt}\kappa\kappa_{1}=0,\;\;\; (\xi\kappa)=0, \end{eqnarray} where $\kappa,\kappa_{1},\kappa_{2},\kappa_{3}$ are smooth functions on M. \end{lemma} \textbf{Proof:} Let $\{U,\varphi U,\xi\}$ be a local orthonormal basis of $\Omega$.
The first relation of (2.3) for $X=\xi$ implies: $\nabla_{\xi}\xi=\beta\varphi U$ and so relation (1.1) for $X=\xi$ (since $l\xi=0$ and $\beta\neq0$), taking into account the latter, gives: \begin{eqnarray} l\varphi U=0. \end{eqnarray} Relation (2.4) for $X=\varphi U$ and $Y=Z=\xi$ gives: $l\varphi U=\frac{c}{4}\varphi U+\alpha A\varphi U$, which because of (3.6) implies the second of (3.1). Relation (2.4) for $X=U$ and $Y=Z=\xi$ yields: \begin{eqnarray} lU=\frac{c}{4}U+\alpha AU-\beta A\xi\ \end{eqnarray} The scalar products of (3.7) with $\varphi U$ and $U$, because of (2.6) and the second of (3.1), imply the first of (3.1), where $\kappa=g(lU,U)$.
The first relation of (2.3), for $X=U$ and $X=\varphi U$, taking into consideration relations (3.1), gives the rest of relations (3.2).
From the well-known relation $Xg(Y,Z)=g(\nabla_{X}Y,Z)+g(Y,\nabla_{X}Z)$ for $X,Y,Z\in\{\xi,U,\varphi U\}$ we obtain (3.3) and (3.4), where $\kappa_{1},\kappa_{2},\kappa_{3}$ are smooth functions in $\Omega$.
On the other hand
\begin{eqnarray} &&\xi\kappa=\xi g(lU,U)\nonumber\\ &&\Rightarrow \xi\kappa=g(\nabla_{\xi}(lU),U)+g(lU,\nabla_{\xi}U)\nonumber\\ &&\Rightarrow \xi\kappa=g((\nabla_{\xi}l)U+l(\nabla_{\xi}U),U)+g(lU,\nabla_{\xi}U)\nonumber\\ &&\Rightarrow \xi\kappa=g(l(\nabla_{\xi}U),U)+g(lU,\nabla_{\xi}U)\nonumber\ \end{eqnarray} The above relation because of (3.3), (3.6) and (3.7) yields: \begin{eqnarray} \xi\kappa=g(\kappa_{1}l\varphi U,U)+g(lU,\kappa_{1}\varphi U)\Rightarrow \xi\kappa=0\nonumber\ \end{eqnarray} On the other hand: \begin{eqnarray} &&\xi g(l\varphi U,U)=0\nonumber\\ &&\Rightarrow g(\nabla_{\xi}(l\varphi U),U)+g(l\varphi U,\nabla_{\xi}U)=0\nonumber\\ &&\Rightarrow g((\nabla_{\xi}l)\varphi U+l(\nabla_{\xi}\varphi U),U)+g(l\varphi U,\nabla_{\xi}U)=0\nonumber\ \end{eqnarray} From the above equation because of (1.1), (2.6), (3.4), (3.6) and $\kappa=g(lU,U)$ we obtain: \begin{eqnarray} &&g(l(-\kappa_{1}U-\beta\xi),U)=0\Rightarrow \kappa\kappa_{1}=0\nonumber\ \end{eqnarray} {$
\Box$}
Relation (2.5) for $X$ $\epsilon$ $\{U,\varphi U\}$ and $Y=\xi$, because of Lemma 3.2 yields: \begin{eqnarray} U\beta&=&\xi(\frac{\beta^{2}}{\alpha}-\frac{c}{4\alpha}+\frac{\kappa}{\alpha})\\ U\alpha&=&\xi\beta\\ \frac{\beta^{2}\kappa_{1}}{\alpha}&=&\kappa+\beta\kappa_{2}+\frac{c}{4\alpha}(\frac{\kappa}{\alpha}-\frac{c}{4\alpha}+\frac{\beta^{2}}{\alpha})\\ (\varphi U)\beta&=&\frac{\kappa_{1}\beta^{2}}{\alpha}+\beta^{2}+\frac{c}{4\alpha}(\frac{\beta^{2}}{\alpha}-\frac{c}{4\alpha}+\frac{\kappa}{\alpha})\\ \xi\alpha&=&\frac{4\alpha^{2}\kappa_{3}\beta}{c}\\ (\varphi U)\alpha&=&\beta(\kappa_{1}+\alpha+\frac{3c}{4\alpha}) \end{eqnarray} Furthermore, relation (2.5), for $X=U$ and $Y=\varphi U$, due to Lemma 3.2 and (3.10), implies: \begin{eqnarray} (\varphi U)\kappa&=&-\frac{c\beta\kappa_{1}}{4\alpha}+\kappa\beta+\kappa\kappa_{2}-c\beta\\ U\alpha&=&\frac{4\kappa_{3}\alpha}{c}(\beta^{2}+\kappa) \end{eqnarray} Using the relations (3.9)-(3.15) and Lemma 3.2 we obtain:
\begin{eqnarray} &&[U,\xi](\frac{c}{4\alpha})=(\nabla_{U}\xi-\nabla_{\xi}U)\frac{c}{4\alpha}\nonumber\\ &&\Rightarrow [U,\xi](\frac{c}{4\alpha}) =-\frac{c\beta}{4\alpha^{2}}(\frac{\beta^{2}}{\alpha}-\frac{c}{4\alpha}+\frac{\kappa}{\alpha}\frac{}{}-\kappa_{1})(\kappa_{1}+\alpha+\frac{3c}{4\alpha}) \end{eqnarray} \begin{eqnarray} && [U,\xi](\frac{c}{4\alpha})=(U(\xi\frac{c}{4\alpha}))-(\xi(U\frac{c}{4\alpha}))\nonumber\\ &&\Rightarrow [U,\xi](\frac{c}{4\alpha})=-\beta\kappa_{3}^{2}-\beta(U\kappa_{3})+(\frac{\beta^{2}}{\alpha}+\frac{\kappa}{\alpha})(\xi\kappa_{3}) \end{eqnarray} Similarly: \begin{eqnarray} &&[U,\varphi U](\frac{c}{4\alpha})=\kappa_{2}\kappa_{3}(\frac{\beta^{2}}{\alpha}+\frac{\kappa}{\alpha})+\beta\kappa_{3}(\frac{\kappa}{\alpha}-\frac{c}{2\alpha}+\frac{\beta^{2}}{\alpha})\nonumber\\ &&+\frac{c\beta\kappa_{3}}{4\alpha^{2}}(\kappa_{1}+\alpha+\frac{3c}{4\alpha}) \end{eqnarray} \begin{eqnarray} &&[U,\varphi U](\frac{c}{4\alpha})=\frac{2\kappa_{3}\beta^{3}\kappa_{1}}{\alpha^{2}}+\frac{\kappa_{3}\beta^{3}}{\alpha}+\frac{5c\kappa_{3}\beta}{4\alpha^{3}}(\beta^{2}+\kappa)-\frac{c\beta\kappa_{1}\kappa_{3}}{4\alpha^{2}}\nonumber\\ &&-\frac{c\beta\kappa_{3}}{4\alpha}-\frac{5c^{2}\beta\kappa_{3}}{16\alpha^{3}}-\frac{c\beta}{4\alpha^{2}}(U\kappa_{1})-\frac{\beta\kappa\kappa_{3}}{\alpha}+\frac{\kappa_{3}}{\alpha}((\varphi U)\kappa)\nonumber\\ &&+(\frac{\beta^{2}}{\alpha}+\frac{\kappa}{\alpha})((\varphi U)\kappa_{3}) \end{eqnarray} \begin{eqnarray} [\varphi U,\xi](\frac{c}{4\alpha})=-\kappa_{3}(\kappa_{1}+\frac{c}{4\alpha})(\frac{\beta^{2}}{\alpha}+\frac{\kappa}{\alpha})-\beta^{2}\kappa_{3} \end{eqnarray} \begin{eqnarray} &&[\varphi U,\xi](\frac{c}{4\alpha})=-\frac{2\kappa_{1}\kappa_{3}\beta^{2}}{\alpha}-\kappa_{3}\beta^{2}-\frac{7c\beta^{2}\kappa_{3}}{4\alpha^{2}}+\frac{c^{2}\kappa_{3}}{16\alpha^{2}}+\frac{c\kappa\kappa_{3}}{2\alpha^{2}}\nonumber\\ &&-\beta(\varphi U)\kappa_{3}+\kappa\kappa_{3}+\frac{c\beta}{4\alpha^{2}}(\xi\kappa_{1}). \end{eqnarray}
Due to the first relation of (3.5), we consider the open subset $\Omega_{1}$ of $\Omega$ such that: $$\Omega_{1}=\{P\in\Omega:\kappa_{1}\neq0,\;\;\mbox{in a neighborhood of P}\}.$$ So in $\Omega_{1}$, we have: $\kappa=0$.
In $\Omega_{1}$ relation (3.14), since $\kappa=0$, yields: \begin{eqnarray} \kappa_{1}=-4\alpha \end{eqnarray} and from relation (3.10), taking into account (3.22), we get: \begin{eqnarray} \kappa_{2}=-4\beta-\frac{c\beta}{4\alpha^{2}}+\frac{c^{2}}{16\alpha^{2}\beta} \end{eqnarray} From (3.20) and (3.21), using (3.12), (3.22) and (3.23) we obtain: \begin{eqnarray} \beta(\varphi U)\kappa_{3}=-\frac{3c\beta^{2}\kappa_{3}}{2\alpha^{2}}+\frac{c^{2}\kappa_{3}}{16\alpha^{2}} \end{eqnarray} From (3.18), (3.19), using (3.15), (3.22), (3.23) and (3.24), we obtain: \begin{eqnarray} \kappa_{3}(4\alpha^{2}-c)=0. \end{eqnarray} Because of (3.25), let $\Omega'_{1}$ be the open subset of $\Omega_{1}$ such that: $$\Omega'_{1}=\{P\;\;\epsilon\;\;\Omega_{1}:\kappa_{3}\neq0,\;\;in\;\;a\;\;neighborhood\;\;of\;\;P\}.$$ So in $\Omega'_{1}$ we obtain: $c=4\alpha^{2}$. Differentiation of the latter with respect to $\xi$, implies $\xi\alpha=0$ which because of (3.12) leads to $\kappa_{3}=0$, which is impossible. So $\Omega'_{1}$ is empty and $\kappa_{3}=0$ in $\Omega_{1}$.
\begin{lemma} Let M be a real hypersurface in $M_{2}(c)$, equipped with $\xi$-parallel structure Jacobi operator. Then $\Omega_{1}$ is empty. \end{lemma} \textbf{Proof:} Recall that in $\Omega_{1}$ we have: \begin{eqnarray} \kappa=\kappa_{3}=0 \end{eqnarray} and that relations (3.22), (3.23) and (3.24) hold.
Relations (3.8), (3.9), (3.12) and (3.15), because of (3.5) and (3.26), yield: \begin{eqnarray} U\alpha=U\beta=\xi\alpha=\xi\beta=0 \end{eqnarray} In $\Omega_{1}$, combining (3.16) and (3.17) and taking into account (3.22) and (3.26), we obtain: \begin{eqnarray} (\frac{c}{4\alpha}-\alpha)(\frac{\beta^{2}}{\alpha}-\frac{c}{4\alpha}+4\alpha)=0 \end{eqnarray} Owing to (3.28), let $\Omega_{11}$ be the open subset of $\Omega_{1}$, such that: $$\Omega_{11}=\{P\;\;\epsilon\;\;\Omega_{1}:c\neq4\alpha^{2},\;\;\mbox{in a neighborhood of P}\}.$$ From (3.28) in $\Omega_{11}$, we have: $4\alpha=-\frac{\beta^{2}}{\alpha}+\frac{c}{4\alpha}$. Differentiation of the latter along $\varphi U$, because of (3.11), (3.13), (3.22), (3.26) and the last relation yields $c=0$, which is impossible. Hence, $\Omega_{11}$ is empty.
So in $\Omega_{1}$ the relation $c=4\alpha^{2}$ holds. Due to the last relation and (3.22), the relation (3.11) becomes: \begin{eqnarray} (\varphi U)\beta=-(\alpha^{2}+2\beta^{2}). \end{eqnarray} From (3.27) we have $[U,\xi]\beta=U(\xi\beta)-\xi(U\beta)\Rightarrow [U,\xi]\beta=0$. On the other hand, from (3.2) and (3.3) we obtain
$[U,\xi]\beta=(\nabla_{U}\xi-\nabla_{\xi}U)\beta\Rightarrow [U,\xi]\beta=\frac{1}{\alpha}(3\alpha^{2}+\beta^{2})(\varphi U)\beta$. The
last two relations imply $(\varphi U)\beta=0$. Therefore, from (3.29) we obtain $\alpha^{2}+2\beta^{2}=0$, which is a contradiction. Hence, $\Omega_{1}$ is empty. {$
\Box$} \\
Since $\Omega_{1}$ is empty, in $\Omega$ we have $\kappa_{1}=0$. So from relations (3.20) and (3.21) we obtain: $$\beta(\varphi U)\kappa_{3}=\frac{\kappa_{3}}{16\alpha^{2}}[c^{2}-24c\beta^{2}+12c\kappa+16\alpha^{2}\kappa].$$ Furthermore, the combination of relations (3.18) and (3.19), using (3.10) and (3.14), implies: $$(\beta^{2}+\kappa)(\varphi U)\kappa_{3}=\frac{c\beta\kappa_{3}}{16\alpha^{2}}[16\alpha^{2}-24(\beta^{2}+\kappa)+9c].$$ From the last two relations we obtain: \begin{eqnarray} \kappa_{3}[c^{2}\kappa+12c\kappa^{2}+12c\beta^{2}\kappa+16\alpha^{2}\beta^{2}\kappa-16c\alpha^{2}\beta^{2}+16\alpha^{2}\kappa^{2}-8c^{2}\beta^{2}]=0\nonumber\ \end{eqnarray} Due to the above relation, we consider $\Omega_{2}$ the open subset of $\Omega$, such that: $$\Omega_{2}=\{P\;\;\epsilon\;\;\Omega:\kappa_{3}\neq0,\;\;\mbox{in a neighborhood of P}\},$$ so in $\Omega_{2}$ the following relation holds: \begin{eqnarray} c^{2}\kappa+12c\kappa^{2}+12c\beta^{2}\kappa+16\alpha^{2}\beta^{2}\kappa-16c\alpha^{2}\beta^{2}+16\alpha^{2}\kappa^{2}-8c^{2}\beta^{2}=0. \end{eqnarray} Differentiating (3.30) with respect to $\xi$ and using (3.5), (3.9), (3.12) and (3.15) we obtain: \begin{eqnarray} &&8\alpha^{2}\beta^{2}\kappa-8c\alpha^{2}\beta^{2}+8\alpha^{2}\kappa^{2}+3c\beta^{2}\kappa-2c^{2}\beta^{2}+3c\kappa^{2}-4c\alpha^{2}\kappa\nonumber\\ &&-2c^{2}\kappa=0 \end{eqnarray} From (3.30) and (3.31) we obtain: \begin{eqnarray} 5c\kappa+6\kappa^{2}+6\beta^{2}\kappa-4c\beta^{2}+8\alpha^{2}\kappa=0. \end{eqnarray} Differentiating (3.32) with respect to $\xi$ and using (3.5), (3.9), (3.12) and (3.15) we have: $4\kappa\alpha^{2}=(2c-3\kappa)(\beta^{2}+\kappa).$ The last relation with (3.32) imply: $\kappa=0$. Substituting the latter in (3.30) gives $c=-2\alpha^{2}$. Differentiation of the last relation with respect to $\varphi U$ and taking into account (3.13), $c=-2\alpha^{2}$ and $\kappa_{1}=0$ results in $\alpha=0$, which is impossible.
So $\Omega_{2}$ is empty and in $\Omega$ we get: $\kappa_{3}=0$.
\begin{lemma} Let M be a real hypersurface in $M_{2}(c)$, equipped with $\xi$-parallel structure Jacobi operator. Then $\Omega$ is empty. \end{lemma} \textbf{Proof:} We resume that in $\Omega$ the following relation holds: \begin{eqnarray} \kappa_{1}=\kappa_{3}=0. \end{eqnarray} Relations (3.8), (3.9), (3.12), (3.15), because of (3.5) and (3.33) yield: \begin{eqnarray} U\alpha=U\beta=\xi\alpha=\xi\beta=0 \end{eqnarray} In $\Omega$ the combination of (3.16), (3.17) and taking into account (3.33), implies: \begin{eqnarray} (4\alpha^{2}+3c)(\beta^{2}+\kappa-\frac{c}{4})=0 \end{eqnarray} Due to (3.35), we consider $\Omega_{3}$ the open subset of $\Omega$ such that: $$\Omega_{3}=\{P\;\;\epsilon\;\;\Omega:\beta^{2}+\kappa\neq\frac{c}{4},\;\;\mbox{in a neighborhood of P}\}.$$ So in $\Omega_{3}$ the following relation holds: \begin{eqnarray} c=-\frac{4\alpha^{2}}{3}. \end{eqnarray} Differentiation of (3.36) with respect to $\varphi U$ implies: \begin{eqnarray} (\varphi U)\alpha=0. \end{eqnarray} Because of (3.34) we have $[U,\xi]\beta=U(\xi\beta)-\xi(U\beta)\Rightarrow [U,\xi]\beta=0$. On the other hand due to (3.2), (3.3) and (3.33) we get $[U,\xi]\beta=(\nabla_{U}\xi-\nabla_{\xi}U)\beta\Rightarrow [U,\xi]\beta=(\frac{\beta^{2}}{\alpha}-\frac{c}{4\alpha}+\frac{\kappa}{\alpha})(\varphi U)\beta$. Combination of the last relations imply: \begin{eqnarray} (\varphi U)\beta=0 \end{eqnarray} From (3.11), owing to (3.33), (3.36) and (3.38) yields $2\beta^{2}=\kappa+\frac{\alpha^{2}}{3}$. Differentiation of the last relation with respect to $\varphi U$ and taking into account (3.37) and (3.38) imply $(\varphi U)\kappa=0$. So from (3.14), because of the latter and (3.33), we obtain $\kappa(\beta+\kappa_{2})=c\beta$. The combination of the latter with (3.10) and taking into account (3.33), (3.36) and $2\beta^{2}=\kappa+\frac{a^{2}}{3}$ imply: \begin{eqnarray} \alpha^{2}=18\beta^{2}\hspace{20pt}\kappa_{2}=5\beta\hspace{20pt}\kappa=-4\beta^{2} \end{eqnarray} The relations of Lemma 3.2 in $\Omega_{3}$, because of (3.36) and (3.39) become: \begin{eqnarray} &&AU=\frac{\alpha}{6}U+\beta\xi,\;\;\;A\varphi U=\frac{\alpha}{3}\varphi U\\ &&\nabla_{\xi}\xi=\beta\varphi U,\;\;\;\nabla_{U}\xi=\frac{\alpha}{6}\varphi U,\;\;\;\nabla_{\varphi U}\xi=-\frac{\alpha}{3}U,\\ &&\nabla_{\xi}U=0,\;\;\;\nabla_{U}U=5\beta\varphi U,\;\;\;\nabla_{\varphi U}U=\frac{\alpha}{3}\xi,\\ &&\nabla_{\xi}\varphi U=-\beta\xi,\;\;\;\nabla_{U}\varphi U=-5\beta U-\frac{\alpha}{6}\xi,\;\;\;\nabla_{\varphi U}\varphi U=0. \end{eqnarray}
The relation (2.4), because of (3.36), (3.39) and (3.40), implies: $R(U,\varphi U)U=23\beta^{2}\varphi U$. On the other hand, $R(X,Y)Z=\nabla_{X}\nabla_{Y}Z-\nabla_{Y}\nabla_{X}Z-\nabla_{[X,Y]}Z$, because of (3.34), (3.36), (3.39) and (3.41)-(3.43), yields: $R(U,\varphi U)U=26\beta^{2}\varphi U$. The combination of the last two relations implies $\beta=0$, which is impossible in $\Omega_{3}$.
So $\Omega_{3}$ is empty and in $\Omega$ the following relation holds \begin{eqnarray} \beta^{2}+\kappa=\frac{c}{4}. \end{eqnarray} In $\Omega$ (3.10) becomes: \begin{eqnarray} \kappa+\beta\kappa_{2}=0. \end{eqnarray}
Differentiating (3.44) with respect to $\varphi U$ and using (3.11), (3.14), (3.33), (3.44) and (3.45) we obtain: $\beta^{2}=-\frac{c}{4}$. Differentiation of the last relation along $\varphi U$ implies $(\varphi U)\beta=0$, which because of (3.11), (3.33) and (3.44) yields $\beta=0$, which is a contradiction. Therefore, $\Omega$ is empty and this completes the proof of Lemma 3.4. {$
\Box$} \\
From Lemmas 3.1 and 3.4, we conclude that $\mathcal{N}$ is empty and we are led to the following result: \begin{proposition} Every real hypersurface in $M_{2}(c)$, equipped with $\xi$-parallel structure Jacobi operator, is a Hopf hypersurface. \end{proposition}
\section{Proof of the Main Theorem} Since $M$ is a Hopf hypersurface, due to Theorem 2.1 of \cite{NR1} we have that $\alpha$ is constant. We suppose that $\alpha\neq0$. We consider a unit vector field $e\in\mathbb{D}$ such that $Ae=\lambda e$; then $A\varphi e=\nu\varphi e$ at some point $P\in M$, where $\{ e, \varphi e, \xi\}$ is a local orthonormal basis. Then the following relation holds on $M$ (Corollary 2.3 of \cite{NR1}): \begin{eqnarray} \lambda\nu=\frac{\alpha}{2}(\lambda+\nu)+\frac{c}{4}. \end{eqnarray} The first relation of (2.3) for $X=e$ implies: \begin{eqnarray} \nabla_{e}\xi=\lambda\varphi e. \end{eqnarray} Relation (2.4) for $X=e$ and $Y=Z=\xi$ yields: \begin{eqnarray} le=\frac{c}{4}e+\alpha Ae. \end{eqnarray} From relation (1.1) for $X=e$, we obtain: \begin{eqnarray} \nabla_{\xi}(le)=l\nabla_{\xi}e. \end{eqnarray} From (2.4) for $X=\nabla_{\xi}e$ and $Y=Z=\xi$, we get: \begin{eqnarray} l\nabla_{\xi}e=\frac{c}{4}\nabla_{\xi}e+\alpha A(\nabla_{\xi}e). \end{eqnarray} Substitution of (4.3) and (4.5) in (4.4) yields: \begin{eqnarray} (\nabla_{\xi}A)e=0. \end{eqnarray} The relation (2.5) for $X=\xi$ and $Y=e$, taking into account (4.6), gives: \begin{eqnarray} (\nabla_{e}A)\xi=-\frac{c}{4}\varphi e \end{eqnarray} Finally, the scalar product of (4.7) with $\varphi e$, taking into consideration (4.1), (4.2) and $A\varphi e=\nu\varphi e$, yields: \begin{eqnarray} &&g(\nabla_{e}(A\xi)-A\nabla_{e}\xi, \varphi e)=-\frac{c}{4}\nonumber\\ && \Rightarrow \alpha\lambda=-\frac{c}{4}+\lambda\nu\Rightarrow \lambda=\nu.\nonumber\ \end{eqnarray} Then $Ae=\lambda e$ and $A\varphi e=\lambda\varphi e$, therefore we obtain: $$(A\varphi-\varphi A)X=0,\;\;\forall\;X\in TM.$$ By the above relation, Theorem 1.1 applies. Since $\alpha\neq0$, we cannot have a geodesic sphere of radius $r=\frac{\pi}{4}$, and this completes the proof of the Main Theorem.
\end{document} |
\begin{document}
\title{Instance-based learning using the Half-Space Proximal Graph\footnote{Under review in Pattern Recognition Letters}} \begin{abstract} The primary example of instance-based learning is the $k$-nearest neighbor rule (kNN), praised for its simplicity and the capacity to adapt to new unseen data and toss away old data. The main disadvantages often mentioned are the classification complexity, which is $O(n)$, and the estimation of the parameter $k$, the number of nearest neighbors to be used. The use of indexes at classification time lifts the former disadvantage, while there is no conclusive method for the latter.
This paper presents a parameter-free instance-based learning algorithm using the {\em Half-Space Proximal} (HSP) graph. The HSP neighbors simultaneously possess proximity and variety with respect to the center node. To classify a given query, we compute its HSP neighbors and apply a simple majority rule over them. In our experiments, the resulting classifier outperformed kNN for any $k$ in a battery of datasets. This improvement persists even when applying weighted majority rules to both the kNN and HSP classifiers.
Surprisingly, when using a probabilistic index to approximate the HSP graph and consequently speeding up the classification task, our method could {\em improve} its accuracy, in stark contrast with the kNN classifier, which worsens with a probabilistic index. \end{abstract}
\section{Introduction}
One of the most popular classifiers is k-Nearest Neighbors (kNN), which was rated as one of the top 10 algorithms in data mining \cite{Settouti2016, Wu2008}. Trivially implemented, kNN is popular because of its simplicity. There is no training stage, and as soon as the data is acquired, the algorithm is ready to make predictions. Moreover, it works without prior knowledge of the data distribution. \par
A vanilla implementation of the kNN classifier consists of computing the distance between the query and every training sample to obtain a neighborhood of the closest $k$ samples to the query and assigning the majority's label in that neighborhood. The kNN classifier's performance depends crucially on the neighborhood's size determined by the value of $k$ \cite{Gou2019, Zhang2018} and the distance function used to measure the similarity between two specimens \cite{Weinberger2005, Xu2013}.
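For reference, a brute-force implementation of this rule takes only a few lines; the following sketch is our illustration (Euclidean distance, names of our choosing) and is not taken from any of the cited works.
\begin{verbatim}
# Minimal brute-force kNN sketch (illustration only; names are ours).
import numpy as np
from collections import Counter

def knn_classify(X_train, y_train, query, k):
    dists = np.linalg.norm(X_train - query, axis=1)  # distance to every training sample
    nearest = np.argsort(dists)[:k]                  # indices of the k closest samples
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]                # majority label
\end{verbatim}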
Choosing the optimal parameter $k$ for kNN is a challenging task. A large $k$ produces a neighborhood robust to noise, but it may include too many neighbors from other classes \cite{Gallego2018}, while a small value of \textit{k} often leads to an overfitted decision boundary, resulting in high noise sensitivity. Ideally, \textit{k} should be adapted to each dataset in particular.
On the other hand, regarding complexity or speed, a brute-force algorithm for finding the $k$-nearest neighbors has linear complexity, $O(n)$ with $n$ the database size. This complexity does not scale to large problems. Unlike databases made up of simple attribute data, recent data tends to be large and complex. For example, in multimedia data, the standard approach is to search not at the level of the actual multimedia objects but instead using so-called deep features extracted from these objects \cite{Zezula2005}. This technique produces high-dimensional vectors that are difficult to index, with provable hardness results \cite{r2018STOC}. The difficulty of finding the $k$ closest neighbors in large, high-dimensional databases has prompted approximate nearest neighbor (ANN) search algorithms. An ever-growing amount of research in ANN search aims at algorithms of high accuracy and low computational complexity. These methods are typically used as an offline stage in kNN to accelerate the classification task. \par For high-dimensional spaces, graph-based ANN methods take the lead in algorithm usage. One of the most efficient graphs for nearest neighbor search is the Small World graph (SW-graph), proposed in \cite{Malkov2014} as the NSW graph. Each insertion finds its approximate neighbors by a greedy search in the partial graph built so far. HNSW \cite{Malkov2020} is an extension of NSW. It incrementally builds a multi-layer structure consisting of a hierarchical set of proximity graphs (layers) for nested subsets of the stored elements. HNSW is one of the most efficient general-purpose algorithms \cite{Li2020} for this problem, and it is the index we selected to speed up near-neighbor searches in this work. \par
\subsection{Motivation}
Probabilistic indexes, such as HNSW, have lifted one of the main disadvantages of the kNN classifier: classification speed. On the other hand, deep features simplify the design of the distance used to compare instances in the classifier. One standing problem in kNN classification is the selection of the parameter $k$. Researchers have proposed many methods to discover a {\em good} value for $k$ (discussed in more detail in the next section) in a quest to lift this last limitation and obtain a proper black-box classification algorithm. This paper proposes a more general approach: designing an algorithm that naturally chooses a query neighborhood without parameters. This neighborhood should contain objects near the query while at the same time providing {\em geometric diversity} within the database. In other words, we want to eliminate redundancy in the neighborhood.
\subsection{Contribution}
We base our proposal on the Half-Space Proximal (HSP) graph introduced in \cite{Chavez2006}. The HSP construction extracts a low-degree spanner of the complete graph. Each node is associated with its nearest neighbor, and all redundant nodes lying in the half-space behind that neighbor (determined by the bisecting hyperplane) are cleared from the candidate set. We repeat with the remaining nodes until all of them are cleared. We discuss this in more detail in Section \ref{sec:HSP}. We fix our attention on the neighborhood of the query instead of the entire graph.
In this paper, we show that a majority classifier using the HSP neighborhood of the query systematically outperforms the kNN classifier for {\em any} $k$. Computing the HSP neighborhood of the query is as fast as computing the $k$-nearest neighbors, and it admits speeding-up using an index. Moreover, when using an index, the classification precision {\em increases}, unlike the kNN classifier. We tested our claims with a realistic experimental setup with a well-known benchmark.
\section{Related Work}
Research in kNN performance is abundant, with classical and recent approaches \cite{Gou2019} and many variations of the classifier \cite{Wu2008, Shi2018}. The efforts focus mostly on solving one or more of the issues present in kNN. \par In the vanilla kNN algorithm, the distance function used is Euclidean, with the same weight given to all features, yielding inaccurate results when irrelevant attributes are present, as in high-dimensional data \cite{Jiang2007}. The approaches for solving this problem include assigning different weights to each feature \cite{Biswas2018} or eliminating the least relevant features. Some other methods assign different weights to each neighbor, with the idea that closer neighbors should contribute more to assigning the class label to the query \cite{Tang2020}. Other approaches include the design of distance functions such as the Mahalanobis distance \cite{Xiang2008,Gautheron2020}, adaptive Euclidean distances \cite{Wang2007}, or the Value Difference Metric (VDM) \cite{Li2011,Li2014}. \par
In choosing the optimal $k$, the vanilla approach uses a fixed value of $k$ for every test sample. A popular choice is to use $k=\sqrt{n}$, proposed in \cite{Lall1996}. Another method, proposed in \cite{Zhu2011}, is tenfold cross-validation to find the optimal value for $k$. However, in \cite{Zhang2017} they show that a fixed value leads to a low prediction rate since it does not consider the distribution of the data. Alternatively, recent efforts focus on setting a different $k$ value for each test sample, giving better results \cite{Liu2010}. More efforts in this direction include evolutionary computation techniques \cite{Biswas2018}, probabilistic methods \cite{Ghosh2006}, and linear modeling \cite{Zhang2017}, among others. \par
In general, the idea behind these methods is to search for the optimal $k$ values and then perform a traditional kNN classification. The methods that use this approach require additional processing time during the classification, which increases the algorithm's overall complexity. According to \cite{ktree2018} these techniques have a time complexity of at least \(O(n^2)\) during the classification time, which is not suitable for large data repositories. Zhang et al. \cite{ktree2018} proposed the \textit{kTree} and \textit{k*Tree} methods, which introduce an offline training stage that, in addition to finding a proper size of the neighborhood, also focuses on reducing the online classification time. This approach also has quadratic complexity to find the neighborhood's optimal size, finding optimal $k$ values in an offline stage. With this modification, instead of having an online complexity of $O(n^2)$ during classification, the task can be achieved with a complexity of $O(log(d)+n)$, where $d$ is the dimensions of the features.
For each of the proposed methods and improvements of kNN, there is an additional time-consuming stage, online or offline, to estimate a proper \textit{k} value of each test sample. These procedures take away the simplicity of kNN, which is one of the characteristics that makes it so popular. Moreover, the accuracy of these methods is limited to the traditional kNN with optimal parameters. \par
\section{HSP graph} \label{sec:HSP} The HSP graph is a \textit{local proximity graph} \cite{Jaromczyk1992} that was originally proposed and designed for applications in \textit{ad-hoc networks}. Computationally, these networks are represented by Unit Disk Graphs (UDG), where the nodes represent the network components, called hosts. An edge connects two nodes if the Euclidean distance between the hosts is less than a given unit, where the unit represents the common transmission range of the hosts in the network. An edge indicates that the hosts can communicate with each other with a single transmission, called a hop.\par
\begin{figure}
\caption{Example of the transmission range of a host. In a UDG, all nodes within the range would be connected to the central node.}
\label{fig:hsp range}
\end{figure}
\begin{figure}
\caption{Example of an HSP graph.}
\label{fig:hsp example}
\end{figure}
The HSP test determines which neighbors are retained within each node's range for constructing a suitable geometric subgraph of the UDG. The resulting graph, referred to as the HSP graph, is a sparse directed or undirected subgraph of the UDG (see Figure \ref{fig:hsp example}). \par
Extracting a UDG subgraph reduces the complexity of the network, which is useful in many applications. Some examples include energy-efficient routing and power optimization. These applications often need the resulting graph to have some properties like having a small degree, being planar, or having the minimum spanning tree as its subgraph. \par
The HSP graph is a computationally simple algorithm that has many properties desirable for network applications. An important characteristic is that it uses only local computations for its construction. This property is essential because in \textit{ad-hoc networks} the topology of the whole network is usually not available. Besides, in dynamically changing networks, eventual changes should be detected and fixed without disturbing the entire network. \par
\subsection{Construction}
For the construction of the HSP graph, we assume the graph $G=(V,E)$ is a UDG with coordinates $(v_{x},v_{y})$ in the Euclidean plane for each node $v$, and a unique integer label for each vertex. The algorithm that chooses the neighbors of each node to construct the HSP graph is described in Algorithm \ref{algorithm:hsp} and illustrated in Figure \ref{fig:neighbor selection}.\par
\begin{algorithm} \SetAlgoLined \KwIn{a vertex $u$ of a geometric graph and a list $L_{1}$ of edges incident with $u$.} \KwOut{a list of directed edges $L_{2}$ which are retained for the HSP graph.}
Set the forbidden area $F(u)$ to be $\varnothing$\;
\While{$L_{1}$ is not empty}{
Remove from $L_{1}$ the shortest edge, say $(u,v)$, (any tie is broken by smaller end-vertex label) and insert in $L_{2}$ the directed edge $(u,v)$ with $u$ being the initial vertex\;
Add to $F(u)$ the open half-plane determined by the line perpendicular to the edge $(u,v)$ in the middle of the edge and containing the vertex $v$\;
Scan the list $L_{1}$ and remove from it any edge whose end-vertex is in $F(u)$\;
}
\caption{HSP}
\label{algorithm:hsp} \end{algorithm}
\begin{figure}
\caption{Zooming around the vicinity of a selected node}
\label{fig:neighbor selection}
\end{figure}
Following \cite{Chavez2006}, Figure \ref{fig:neighbor selection} illustrates the forbidden half-space, represented by a shaded area. Computationally, an edge $(u,z)$ is forbidden by an edge $(u,v)$ when the Euclidean distance from $z$ to $v$ is smaller than the Euclidean distance from $z$ to $u$. Additionally, there is no explicit use of the coordinates, and each node chooses its neighbors without parameters. \par
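To make the half-space test concrete, the following sketch (our illustration, not code from \cite{Chavez2006}; names are ours) computes the HSP neighbors of a single center point $u$ over a finite candidate set in Euclidean space of arbitrary dimension, applying exactly the rule above; the label-based tie-breaking is omitted for simplicity.
\begin{verbatim}
# Sketch (ours): HSP neighbors of a center point u over a candidate set.
import numpy as np

def hsp_neighbor_indices(u, candidates):
    """Indices (into `candidates`) of the HSP neighbors of the point u."""
    u = np.asarray(u, dtype=float)
    C = np.asarray(candidates, dtype=float)
    alive = np.arange(len(C))                       # candidates not yet forbidden
    out = []
    while alive.size > 0:
        d_u = np.linalg.norm(C[alive] - u, axis=1)  # d(c, u) for remaining candidates
        i = int(np.argmin(d_u))                     # closest remaining candidate v
        v_idx = alive[i]
        out.append(int(v_idx))
        d_v = np.linalg.norm(C[alive] - C[v_idx], axis=1)  # d(c, v)
        keep = d_u <= d_v                           # forbid c whenever d(c, v) < d(c, u)
        keep[i] = False                             # v itself leaves the candidate list
        alive = alive[keep]
    return out
\end{verbatim}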
\subsection{Properties} The HSP graph has many desirable properties for ad-hoc networks: it is a $t$-spanner with $t\le 2\pi+1$. The obtained spanner is invariant under similarity transformations and contains the minimum weight spanning tree.\par
The out-degree of the HSP graph depends on the data's intrinsic dimension, and it coincides with the kissing number in that dimension. For example, it is 2, 6, 12, and 24 for dimensions 1, 2, 3, and 4, respectively. In higher dimensions, only upper and lower bounds are known, with few exceptions. The relevant feature for the HSP in classification tasks is that it provides diversity and similarity in each node's neighborhood. The neighborhood of a node comprises the neighbors after eliminating the redundancy between them. Notably, the above can be achieved without tuning parameters. This last property is what makes it unique for classification applications. \par
The HSP graph is fully distributed and computationally simple to construct. The algorithm is executed by each node using only information of their neighborhood. Although initially formulated for two-dimensional data, where the vectors represent the physical coordinates that correspond to geographic locations, the HSP graph's algorithm has no explicit use of the coordinates, which means that it can be generalized to work in any metric space. Therefore, a generalized HSP algorithm can be used for other applications\cite{corral, aguilera}, as our proposal for classification tasks is the case. \par
\section{HSP for classification}
Our proposal is the {\em HSP classifier}, that is, using the neighborhood discovered by the HSP test as the instances to be compared with a query. Our initial proposal assumes each query can choose its neighbors from the entire database (i.e., all the training samples) instead of only within a range determined by a given unit, as in the UDG case; this corresponds to a UDG with infinite radius.\par
The selected neighbors have the property of being similar while diverse, which gives a representative vicinity for each query. Furthermore, besides being a computationally simple algorithm, this proposal's main advantage is that there is no parameter $k$ needed for selecting the neighbors; the selection happens naturally. \par
Our proposal sidesteps the problem of choosing the proper number $k$ of neighbors for the classification task. Once we obtain the neighborhood, we apply the majority rule over the selected neighbors' labels, as in the vanilla kNN classifier. \par The proposal can be trivially implemented in any metric space and any dimension. We present the pseudocode in Algorithm \ref{algorithm:hspclassifier}. \par
\begin{algorithm} \SetAlgoLined \KwIn{training samples X and test samples Y} \KwOut{class labels of Y}
\For{each $u\in Y$}{
$N \leftarrow \varnothing$\;
$C \leftarrow X$\;
\While{C is not empty}{
$v \leftarrow c\in C \mid d(u,c) \leq d(u,c'), \forall c'\in C$\;
$N.insert(v)$\;
\For{each $c\in C$}{
\If{$d(c,u) > d(c,v) $}{
$C.remove(c)$\;
}
}
}
$label(u) \leftarrow$ most repeated label in $N$\;
}
\caption{HSP classifier}
\label{algorithm:hspclassifier} \end{algorithm}
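The following short wrapper (again our illustration, with names of our choosing) mirrors Algorithm \ref{algorithm:hspclassifier}, reusing the \texttt{hsp\_neighbor\_indices} sketch from Section \ref{sec:HSP}: each query takes the whole training set as its candidate set and is labeled by a simple majority vote.
\begin{verbatim}
# Sketch (ours): HSP classifier over the whole training set.
from collections import Counter

def hsp_classify(X_train, y_train, X_test):
    labels = []
    for u in X_test:
        idx = hsp_neighbor_indices(u, X_train)      # parameter-free neighborhood of u
        votes = Counter(y_train[i] for i in idx)
        labels.append(votes.most_common(1)[0][0])   # majority rule
    return labels
\end{verbatim}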
\section{Experiments}
We performed experiments to assess the performance of the HSP classifier, contrasting it with kNN. We selected realistic high-dimensional data obtained by performing deep-feature extraction on popular datasets used for image classification tasks. We used the VGG16 model with weights pre-trained on ImageNet \cite{Deng2009}. Researchers often call the above process {\em deep-feature} extraction or knowledge transfer. In this case, we feed the VGG16 model with the corresponding image and take the output of the last layer just before classification. This is a widely used technique for image classification tasks \cite{Gallego2018,Wang2019}. \par
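For concreteness, a Keras-based sketch of this feature-extraction step is given below. It is our illustration of the described pipeline rather than the authors' code; in particular, flattening the last convolutional block and using a $32\times32$ input are assumptions on our part (for such inputs this yields 512-dimensional vectors, in agreement with Table \ref{table:datasets}).
\begin{verbatim}
# Illustrative sketch only (not the authors' code): deep features from a
# pre-trained VGG16.  Flattening the last convolutional block of a 32x32
# input yields 512-dimensional vectors; the exact pipeline is an assumption.
import numpy as np
import tensorflow as tf

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(32, 32, 3))
extractor = tf.keras.Sequential([base, tf.keras.layers.Flatten()])

def deep_features(images):
    """images: array of shape (n, 32, 32, 3) with pixel values in [0, 255]."""
    x = tf.keras.applications.vgg16.preprocess_input(
        np.asarray(images, dtype="float32"))
    return extractor.predict(x, verbose=0)
\end{verbatim}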
We selected five datasets with different characteristics; those datasets are among the most popular for benchmarking classification algorithms. In table \ref{table:datasets} we describe each of the selected datasets for the experiments.
\begin{table}[!ht] \centering \caption{Description of the datasets}
\begin{tabular}{|p{2.9cm}|p{1.4cm}|p{1.4cm}|p{1.4cm}|} \hline Name & \# samples & \# features & \# classes \\ \hline MNIST \cite{LeCun2010} & 70000 & 512 & 10 \\ Fashion MNIST \cite{Xiao2017} & 70000 & 512 & 10 \\ CIFAR10 \cite{Krizhevsky2009} & 60000 & 512 & 10 \\ CIFAR100 \cite{Krizhevsky2009} & 60000 & 512 & 100 \\ Mini-ImageNet \cite{Vinyals2016} & 60000 & 2048 & 100 \\ \hline \end{tabular} \label{table:datasets} \end{table}
The datasets are listed in increasing order of classification difficulty: MNIST, Fashion MNIST, CIFAR10, Mini-ImageNet, and CIFAR100. Each dataset has a feature dimension that depends on the deep-feature extraction and the original size of the images. Both CIFAR10 and CIFAR100 have an original size of (32x32). MNIST and Fashion MNIST have an original size of (28x28) but were converted to (32x32), since this is the minimum input size the model accepts. Thus, the size of the feature vectors for these four datasets is the same. \par In the case of Mini-ImageNet, the image size is (84x84). This dataset is interesting since it is a subset of ImageNet. The classification complexity of Mini-ImageNet is relatively high, but it requires fewer resources than the full ImageNet dataset. \par
In each of the datasets, we selected a random sample of 1000 images for testing. Since our focus is on designing a parameter-less instance-based classifier, we only used the Euclidean distance to measure the similarity between samples. \par
\begin{figure}
\caption{Mini-ImageNet - VGG16}
\label{fig:mini:maj}
\end{figure}
\begin{figure}
\caption{CIFAR10 - VGG16}
\label{fig:cif10:maj}
\end{figure}
\begin{figure}
\caption{CIFAR100 - VGG16}
\label{fig:cif100:maj}
\end{figure}
\begin{figure}
\caption{MNIST - VGG16}
\label{fig:mnist:maj}
\end{figure}
\begin{figure}
\caption{Fashion MNIST - VGG16}
\label{fig:fmnist:maj}
\end{figure}
\begin{figure}
\caption{Plot legends}
\label{fig:plotlegends}
\end{figure}
We plotted all experiments together to save space. We explain each of the results appearing in the figures, in the order presented in Figure \ref{fig:plotlegends}. First, kNN is the vanilla kNN classifier. Probabilistic kNN is the kNN classifier using a probabilistic index. Below we discuss a couple of HSP variants that speed up the computation of the HSP neighborhood of a query.
\paragraph{Asymptotic and Probabilistic Asymptotic HSP}
The described HSP classifier has linear time complexity, which does not scale to large high-dimensional data. One alternative is to compute the HSP inside a ball of a certain radius. A natural choice is the ball containing the $k$-nearest neighbors of the query. We call this the {\em Asymptotic HSP}. If we use a probabilistic index to compute the $k$-nearest neighbors of the query, we call the resulting neighborhood the {\em Probabilistic Asymptotic HSP}.
\section{Results}
We tried all values of $k$, from 1 to 300, for the vanilla kNN classifier, the asymptotic HSP, and the probabilistic versions. For the HSP classifier, there is only one value since it is parameter-free. The plots show the classification accuracy on the vertical axis. Please notice that any method to select a proper $k$ for kNN will correspond to one value between 1 and 300, dispensing with the need to compare against state-of-the-art kNN classifiers.
\begin{algorithm} \SetAlgoLined \KwIn{training samples X and test samples Y} \KwOut{class labels of Y}
\For{each $u\in Y$}{
$N \leftarrow \varnothing$\;
$C \leftarrow kNN(X,u,k)$\;
\While{C is not empty}{
$v \leftarrow c\in C \mid d(u,c) \leq d(u,c'), \forall c'\in C$\;
$N.insert(v)$\;
\For{each $c\in C$}{
\If{$d(c,u) > d(c,v) $}{
$C.remove(c)$\;
}
}
}
$label(u) \leftarrow$ most repeated label in $N$\;
}
\caption{Asymptotic probabilistic HSP classifier}
\label{algorithm:approximatehsp} \end{algorithm}
Please notice that the probabilistic asymptotic HSP has the same complexity as the probabilistic kNN classifier when using an index like the HNSW \cite{Malkov2020}. The pseudocode is in Algorithm \ref{algorithm:approximatehsp}.
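As an illustration of how the pieces fit together, the following sketch (ours, not the authors' implementation; the \texttt{hnswlib} package and the parameter values $M$, \texttt{ef\_construction} and $k$ are our assumptions) builds an HNSW index, retrieves an approximate $k$-NN candidate set, and applies the HSP test only inside it.
\begin{verbatim}
# Sketch (ours): probabilistic asymptotic HSP.  An HNSW index (here via the
# hnswlib package) returns an approximate k-NN candidate set; the HSP test
# from the previous sections is then applied inside that set only.
# X_train is a numpy array of shape (n, d).
import numpy as np
import hnswlib

def build_index(X_train, M=16, ef_construction=200):
    index = hnswlib.Index(space="l2", dim=X_train.shape[1])
    index.init_index(max_elements=len(X_train), M=M,
                     ef_construction=ef_construction)
    index.add_items(X_train, np.arange(len(X_train)))
    return index

def probabilistic_asymptotic_hsp(u, X_train, index, k=300):
    cand, _ = index.knn_query(np.asarray(u).reshape(1, -1), k=k)
    cand = cand[0].astype(int)                       # approximate k nearest neighbors
    local = hsp_neighbor_indices(u, X_train[cand])   # HSP test inside the candidate ball
    return cand[local]                               # indices into X_train
\end{verbatim}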
\begin{figure}
\caption{Mini-ImageNet - VGG16}
\label{fig:mini:dud}
\end{figure}
\begin{figure}
\caption{CIFAR10 - VGG16}
\label{fig:cif10:dud}
\end{figure}
\begin{figure}
\caption{CIFAR100 - VGG16}
\label{fig:cif100:dud}
\end{figure}
\begin{figure}
\caption{MNIST - VGG16}
\label{fig:mnist:dud}
\end{figure}
\begin{figure}
\caption{Fashion MNIST - VGG16}
\label{fig:fmnist:dud}
\end{figure}
\begin{table}[!ht] \centering \footnotesize \caption{Maximum accuracy percentage for each technique using: 1. Majority rule, 2. Dudani's weighting rule \cite{Dudani1976}, 3. Inverse distance weighting. }
\begin{tabular}{|p{.15cm}|p{1.15cm}|p{0.8cm}|p{0.8cm}|p{.9cm}|p{1cm}|p{1.1cm}|} \cline{3-7}
\multicolumn{2}{c|}{} & CIFAR \newline 10 & CIFAR \newline 100 & MNIST & Fashion \newline MNIST & Mini\newline ImageNet \\ \hline
\multirow{5}{*}{1} & kNN & 62.2 & 39 & 92.6 & 86.8 & 59.5\\ & P-kNN & 61 & 38.9 & 92.4 & 86.8 & 58.1\\ & HSP & 64 & \textbf{39.7} & 92.4 & 87 & 58.8\\ & A-HSP & 64.4 & \textbf{39.7} & \textbf{93.2} & \textbf{87.2} & \textbf{61.3}\\ & PA-HSP & \textbf{66.4} & \textbf{39.7} & \textbf{93.2} & 87 & 61.1 \\ \hline
\multirow{5}{*}{2} & kNN & 63.8 & 40 & \textbf{92.8} & 87.2 & 60.6\\ & P-kNN & 64 & 39.7 & \textbf{92.8} & 87 & 60.6\\ & HSP & 65.2 & \textbf{40.9} & 92 & 86.8 & 62.2\\ & A-HSP & 64.8 & \textbf{40.9} & \textbf{92.8} & \textbf{87.4} & 63.2\\ & PA-HSP & \textbf{67} & \textbf{40.9} & 92.6 & 87.2 & \textbf{63.3} \\ \hline
\multirow{5}{*}{3} & kNN & 62.6 & 38.8 & 92.6 & 86.8 & 59.5\\ & P-kNN & 61.8 & 38.9 & 92.4 & 86.8 & 58.1\\ & HSP & 63.6 & 40.3 & 92.6 & 86.8 & 59.7\\ & A-HSP & 64.8 & \textbf{40.5} & \textbf{93.2} & \textbf{87.4} & \textbf{61.3}\\ & PA-HSP & \textbf{66.8} & \textbf{40.5} & 92.8 & 86.8 & 61\\ \hline
\end{tabular} \label{table:results} \end{table}
In the first experiment, we used only the majority rule for the five classifiers. In this case (Figures \ref{fig:mini:maj} to \ref{fig:fmnist:maj}), the HSP outperforms both versions of kNN on CIFAR10, Fashion MNIST, and CIFAR100, being slightly worse on Mini-ImageNet and MNIST. However, the asymptotic versions of the HSP outperform kNN in the last two examples. Notice that the asymptotic versions of the HSP have a smoother behavior than kNN, which seems erratic as a function of $k$.
It is possible to improve kNN by giving closer neighbors more influence than farther ones. Dudani's work \cite{Dudani1976} is one such approach, where the weighting function varies with the distance between the query and the considered neighbor in such a manner that the weight decreases with increasing query-to-neighbor distance. He orders the $k$ neighbors so that $d_{k}$ corresponds to the distance of the farthest neighbor to the query and $d_{1}$ to the nearest one.
\par
\begin{equation*} w_{j} = \begin{cases} \frac{d_{k}-d_{j}}{d_{k}-d_{1}} & d_{k}\neq d_{1}\\ 1 & d_{k} = d_{1} \end{cases} \end{equation*}
Another fairly general technique is to use inverse distance weighting voting, where the neighbors get to vote on the class of the query with votes weighted by the inverse of their distance to the query.
\begin{equation*} w_{j} = \frac{1}{d_{j}} \end{equation*}
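Both rules plug directly into any of the neighborhoods above; the following sketch (ours) computes the two sets of weights and the weighted vote, guarding the inverse-distance rule against zero distances.
\begin{verbatim}
# Sketch (ours): weighted voting over a neighborhood.  `dists` holds the
# query-to-neighbor distances sorted so that dists[0] = d_1 (nearest) and
# dists[-1] = d_k (farthest); `labels` are the corresponding class labels.
import numpy as np
from collections import defaultdict

def dudani_weights(dists):
    dists = np.asarray(dists, dtype=float)
    d1, dk = dists[0], dists[-1]
    return np.ones_like(dists) if dk == d1 else (dk - dists) / (dk - d1)

def inverse_distance_weights(dists, eps=1e-12):
    dists = np.asarray(dists, dtype=float)
    return 1.0 / np.maximum(dists, eps)              # eps avoids division by zero

def weighted_vote(labels, weights):
    score = defaultdict(float)
    for lab, w in zip(labels, weights):
        score[lab] += w
    return max(score, key=score.get)
\end{verbatim}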
In the second experiment, we tried the two versions of distance weighting described above. Due to space restrictions, we only show the plots for Dudani's weighting algorithm. We can observe in Figures \ref{fig:mini:dud} to \ref{fig:fmnist:dud} that, similarly to the previous experiment, the HSP outperforms kNN on all datasets except MNIST and Fashion MNIST in this case. However, either the asymptotic versions of the HSP or the HSP itself outperform kNN in all cases.\par
The two experiments, plus the inverse distance weighting, are summarized in Table \ref{table:results}. In the case of MNIST, with Dudani's rule, kNN matches the performance of our classifier. Our classifier has the additional advantage of being parameter-free, or of having a smooth behavior as a function of the candidate-set size in the asymptotic versions.
\section{Conclusions and future work}
We presented three new instance-based classifiers: the HSP, the Asymptotic HSP, and the Probabilistic Asymptotic HSP. We compared our proposal with state-of-the-art kNN classifiers with optimal parameters (something unachievable in a production environment, because there is no ground truth to know the best $k$). Our approach is parameter-free from the point of view of accuracy; the only parameter, in the asymptotic versions, relates to the complexity. In all cases, the kNN classifiers at best matched our performance.
With our approach, it is possible to focus on finding a suitable distance function to compare instances when designing a classifier or a data mining task. A parameter-free instance-based classifier could be of help in many applications.
\end{document} |
\begin{document}
\title{Embedding large subgraphs into dense graphs}
\def\COMMENT#1{} \def\TASK#1{}
\def{\varepsilon}{{\varepsilon}} \newcommand{\mathbb{E}}{\mathbb{E}} \newcommand{\mathbb{P}}{\mathbb{P}} \newcommand{{\rm hcf}}{{\rm hcf}} \newcommand{\mathcal D}{\mathcal D}
\begin{abstract} \noindent What conditions ensure that a graph $G$ contains some given spanning subgraph $H$? The most famous examples of results of this kind are probably Dirac's theorem on Hamilton cycles and Tutte's theorem on perfect matchings. Perfect matchings are generalized by perfect $F$-packings, where instead of covering all the vertices of $G$ by disjoint edges, we want to cover $G$ by disjoint copies of a (small) graph $F$. It is unlikely that there is a characterization of all graphs $G$ which contain a perfect $F$-packing, so as in the case of Dirac's theorem it makes sense to study conditions on the minimum degree of~$G$ which guarantee a perfect $F$-packing.
The Regularity lemma of Szemer\'edi and the Blow-up lemma of Koml\'os, S\'ark\"ozy and Szemer\'edi have proved to be powerful tools in attacking such problems and quite recently, several long-standing problems and conjectures in the area have been solved using these. In this survey, we give an outline of recent progress (with our main emphasis on $F$-packings, Hamiltonicity problems and tree embeddings) and describe some of the methods involved. \end{abstract}
\section{Introduction, overview and basic notation}\label{intro} In this survey, we study the question of when a graph $G$ contains some given large or spanning graph $H$ as a subgraph. Many important problems can be phrased in this way: one example is Dirac's theorem, which states that every graph $G$ on $n \ge 3$ vertices with minimum degree at least $n/2$ contains a Hamilton cycle. Another example is Tutte's theorem on perfect matchings which gives a characterization of all those graphs which contain a perfect matching (so $H$ corresponds to a perfect matching in this case). A result which gives a complete characterization of all those graphs $G$ which contain $H$ (as in the case of Tutte's theorem) is of course much more desirable than a sufficient condition (as in the case of Dirac's theorem). However, for most $H$ that we consider, it is unlikely that such a characterization exists as the corresponding decision problems are usually NP-complete. So it is natural to seek simple sufficient conditions. Here we will focus mostly on degree conditions. This means that $G$ will usually be a dense graph and that we have to restrict $H$ to be rather sparse in order to get interesting results. We will survey the following topics: \begin{itemize} \item a generalization of the matching problem, which is called the \emph{$F$-packing} or \emph{$F$-tiling} problem (here the aim is to cover the vertices of $G$ with disjoint copies of a fixed graph $F$ instead of disjoint edges); \item Hamilton cycles (and generalizations) in graphs, directed graphs and hypergraphs; \item large subtrees of graphs; \item arbitrary subgraphs $H$ of bounded degree; \item Ramsey numbers of sparse graphs. \end{itemize} A large part of the progress in the above areas is due to the Regularity lemma of Szemer\'edi~\cite{reglem} and the Blow-up lemma of Koml\'os, S\'ark\"ozy and Szemer\'edi~\cite{KSSblowup}. Roughly speaking, the former states that one can decompose an arbitrary large dense graph into a bounded number of random-like graphs. The latter is a powerful tool for embedding spanning subgraphs $H$ into such random-like graphs. In the final section we give a formal statement of these results and describe in detail an application to a special case of the $F$-packing problem. We hope that readers who are unfamiliar with these tools will find this a useful guide to how they can be applied.
There are related surveys in the area by Koml\'os and Simonovits~\cite{KSi} (some minor updates were added later in~\cite{KSSS}) and by Koml\'os~\cite{JKblowup}. However, much has happened since these were written and the emphasis is different in each case. So we hope that the current survey will be a useful complement and update to these. In particular, as the title indicates, our focus is mainly on embedding large subgraphs and we will ignore other aspects of regularity/quasi-randomness. There is also a recent survey on $F$-packings (and so-called $F$-decompositions) by Yuster~\cite{yuster}, which is written from a computational perspective.
\section{Packing small subgraphs in graphs} \label{packings}
\subsection{$F$-packings in graphs of large minimum degree}
Given two graphs $F$ and $G$, an \emph{$F$-packing in $G$} is a collection of vertex-disjoint copies of $F$ in $G$. (Alternatively, this is often called an $F$-tiling.) $F$-packings are natural generalizations of graph matchings (which correspond to the case when $F$ consists of a single edge). An $F$-packing in $G$ is called \emph{perfect} if it covers all vertices of $G$. In this case, we also say that $G$ contains an \emph{$F$-factor} or a \emph{perfect $F$-matching}. If $F$ has a component which contains at least~3 vertices then the question whether $G$ has a perfect $F$-packing is difficult from both a structural and algorithmic point of view: Tutte's theorem characterizes those graphs which have a perfect $F$-packing if $F$ is an edge but for other connected graphs~$F$ no such characterization is known. Moreover, Hell and Kirkpatrick~\cite{HKsiam} showed that the decision problem of whether a graph $G$ has a perfect $F$-packing is NP-complete if and only if $F$ has a component which contains at least~3 vertices. So as mentioned earlier, this means that it makes sense to search for degree conditions which ensure the existence of a perfect $F$-packing. The fundamental result in the area is the Hajnal-Szemer\'edi theorem: \begin{theorem}{\bf (Hajnal and Szemer\'edi~\cite{HSz})} \label{hajnalsz} Every graph whose order~$n$ is divisible by~$r$ and whose minimum degree is at least $(1-1/r)n$ contains a perfect $K_r$-packing. \end{theorem} The minimum degree condition is easily seen to be best possible. (The case when $r=3$ was proved earlier by Corr\'adi and Hajnal~\cite{CH}.) The result is often phrased in terms of colourings: any graph~$G$ whose order is divisible by~$k$ and with $\Delta(G) \le k-1$ has an equitable $k$-colouring, i.e.~a colouring with colour classes of equal size. (So $k:=n/r$ here.) Theorem~\ref{hajnalsz} raises the question of what minimum degree condition forces a perfect $F$-packing for arbitrary graphs~$F$. The following result gives a general bound.
\begin{theorem}\label{KSS}{\bf (Koml\'os, S\'ark\"ozy and Szemer\'edi~\cite{KSSz01})} For every graph $F$
there exists a constant $C=C(F)$ such that every graph $G$ whose order $n$ is divisible by $|F|$ and whose minimum degree is at least $(1-1/\chi(F))n+C$ contains a perfect $F$-packing. \end{theorem}
This confirmed a conjecture of Alon and Yuster~\cite{AY96}, who had obtained the above result with an additional error term of~${\varepsilon} n$ in the minimum degree condition. As observed in~\cite{AY96}, there are graphs $F$ for which the above constant~$C$ cannot be omitted completely (e.g.~$F=K_{s,s}$ where $s\ge 3$ and $s$ is odd). Thus one might think that this settles the question of which minimum degree guarantees a perfect $F$-packing. However, we shall see that this is \emph{not} the case. There are graphs $F$ for which the bound on the minimum degree can be improved significantly: we can often replace $\chi(F)$ by a smaller parameter. For a detailed statement of this, we define the \emph{critical chromatic number} $\chi_{cr}(F)$ of a graph $F$ as $$
\chi_{cr}(F):=(\chi(F)-1)\frac{|F|}{|F|-\sigma(F)},$$ where $\sigma(F)$ denotes the minimum size of the smallest colour class in an optimal colouring of $F$. (We say that a colouring of $F$ is \emph{optimal} if it uses exactly $\chi(F)$ colours.) So for instance a $k$-cycle $C_k$ with $k$ odd has $\chi_{cr}(C_k)=2+2/(k-1)$. Note that $\chi_{cr}(F)$ always satisfies $\chi(F)-1 < \chi_{cr}(F) \le \chi(F)$ and equals $\chi(F)$ if and only if for every optimal colouring of $F$ all the colour classes have equal size. The critical chromatic number was introduced by Koml\'os~\cite{JKtiling}. He (and independently Alon and Fischer~\cite{AF99}) observed that for \emph{any} graph~$F$ it can be used to give a lower bound on the minimum degree that guarantees a perfect $F$-packing.
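Before stating this lower bound formally, let us record, for the reader's convenience, the short calculation behind the inequalities $\chi(F)-1<\chi_{cr}(F)\le\chi(F)$ mentioned above; it only uses that $1\le\sigma(F)\le|F|/\chi(F)$. Writing $k:=\chi(F)$, since $\sigma(F)\ge 1$ and $\sigma(F)\le |F|/k$ we obtain
$$\chi_{cr}(F)=(k-1)\,\frac{|F|}{|F|-\sigma(F)}\;>\;k-1
\qquad\text{and}\qquad
\chi_{cr}(F)\;\le\;(k-1)\,\frac{|F|}{|F|(1-1/k)}\;=\;k,$$
with equality in the second inequality if and only if $\sigma(F)=|F|/k$, i.e.~if and only if every optimal colouring of $F$ has colour classes of equal size.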
\begin{prop}\label{propKomlos}
For every graph $F$ and every integer $n$ that is divisible by $|F|$ there exists a graph $G$ of order $n$ and minimum degree $\lceil(1-1/\chi_{cr}(F))n\rceil-1$ which does not contain a perfect $F$-packing. \end{prop} Given a graph~$F$, the graph $G$ in the proposition is constructed as follows: write $k:=\chi(F)$ and $\ell:=n/|F|$.
$G$ is the complete $k$-partite graph with vertex classes $V_1,\dots,V_k$, where $|V_1|=\sigma(F)\ell-1$ and the sizes of $V_2,\dots,V_k$ are as equal as possible. Then any perfect $F$-packing would consist of $\ell$ copies of $F$. On the other hand, the vertex classes of $G$ induce an optimal colouring on each such copy, so each copy would contain at least $\sigma(F)$ vertices in $V_1$. Since $|V_1|<\sigma(F)\ell$, this is impossible.
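To see that this construction matches the bound in Proposition~\ref{propKomlos}, one can compute the minimum degree of $G$; the following calculation ignores the rounding of the class sizes $|V_2|,\dots,|V_k|$. The minimum degree of a complete $k$-partite graph is $n$ minus the size of a largest vertex class, so
$$\delta(G)\;=\;n-\max_i|V_i|\;\approx\;n-\frac{n-\sigma(F)\ell}{k-1}\;=\;n-\frac{n\,(|F|-\sigma(F))}{(k-1)|F|}\;=\;\Big(1-\frac{1}{\chi_{cr}(F)}\Big)n,$$
where we used $\ell=n/|F|$ and the definition of $\chi_{cr}(F)$; a short check of the rounding shows that one in fact obtains exactly the value $\lceil(1-1/\chi_{cr}(F))n\rceil-1$ stated in the proposition.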
Koml\'os also showed that the critical chromatic number is the parameter which governs the existence of \emph{almost} perfect packings in graphs of large minimum degree. (More generally, he also determined the minimum degree which ensures that a given fraction of vertices is covered.) \begin{theorem}\label{thmKomlos}{\bf (Koml\'os~\cite{JKtiling})} For every graph $F$ and every $\gamma>0$ there exists an integer $n_0=n_0(\gamma,F)$ such that every graph $G$ of order $n\ge n_0$ and minimum degree at least $(1-1/\chi_{cr}(F))n$ contains an $F$-packing which covers all but at most $\gamma n$ vertices of~$G$. \end{theorem} By making $V_1$ slightly smaller in the previous example, it is easy to see that the minimum degree bound in Theorem~\ref{thmKomlos} is also best possible. Confirming a conjecture of Koml\'os~\cite{JKtiling}, Shokoufandeh and Zhao~\cite{SZ,SZ3} subsequently proved that the number of uncovered vertices can be reduced to a constant depending only on~$F$.
We~\cite{KOmatch} proved that for any graph~$F$, either its critical chromatic number or its chromatic number is the relevant parameter which governs the existence of perfect packings in graphs of large minimum degree. The classification depends on a parameter which we call the \emph{highest common factor} of $F$.
This is defined as follows for non-bipartite graphs~$F$. Given an optimal colouring $c$ of~$F$, let $x_1\le x_2\le \dots\le x_{\ell}$ denote the sizes of the colour classes of~$c$. Put
$\mathcal D (c):= \{ x_{i+1}-x_i\,|\, i=1,\dots, \ell-1 \}.$ Let $\mathcal D(F)$ denote the union of all the sets $\mathcal D (c)$ taken over all optimal colourings $c$. We denote by ${\rm hcf}(F)$ the highest common factor of all integers in $\mathcal D(F)$. If $\mathcal D(F)=\{0\}$ we set ${\rm hcf}(F):=\infty$. Note that if all the optimal colourings of $F$ have the property that all colour classes have equal size, then $\mathcal D(F)=\{0\}$ and so ${\rm hcf}(F) \neq 1$ in this case. In particular, if $\chi_{cr}(F)=\chi(F)$, then ${\rm hcf}(F) \neq 1$. So for example, odd cycles of length at least 5 have ${\rm hcf}=1$ whereas complete graphs have ${\rm hcf} \neq 1$.
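As one further concrete illustration (an example included here for the reader's convenience), consider the complete tripartite graph $F=K_{1,1,3}$ on $5$ vertices. In any colouring of $F$ with $\chi(F)=3$ colours, each colour class is an independent set and therefore lies inside one of the three vertex classes of $F$; as each of the three vertex classes must contain at least one colour class, the colour classes are precisely the vertex classes. Hence every optimal colouring has class sizes $1\le 1\le 3$, so $\mathcal D(F)=\{0,2\}$ and ${\rm hcf}(F)=2\neq 1$, while
$$\chi_{cr}(F)=(3-1)\cdot\frac{5}{5-1}=\frac52<3=\chi(F).$$
So for $K_{1,1,3}$ it is the chromatic number, and not the critical chromatic number, which governs the minimum degree forcing a perfect packing (see Theorem~\ref{thmmaingeneral} below).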
The definition can be extended to bipartite graphs $F$. For connected bipartite graphs, we always have ${\rm hcf}(F) \neq 1$, but for disconnected bipartite graphs the definition also takes into account the relative sizes of the components of~$F$ (see~\cite{KOmatch}).
We proved that in Theorem~\ref{KSS} one can replace the chromatic number by the critical chromatic number if ${\rm hcf}(F)=1$. (A much simpler proof of a weaker result can be found in~\cite{KOSODA}.)
\begin{theorem}{\bf (K\"uhn and Osthus~\cite{KOmatch})} \label{thmmain} Suppose that $F$ is a graph with
${\rm hcf}(F)=1$. Then there exists a constant $C=C(F)$ such that every graph $G$ whose order $n$ is divisible by~$|F|$ and whose minimum degree is at least $(1-1/\chi_{cr}(F))n+C$ contains a perfect $F$-packing. \end{theorem}
Note that Proposition~\ref{propKomlos} shows that the result is best possible up to the value of the constant~$C$. A simple modification of the examples in~\cite{AF99,JKtiling} shows that there are graphs~$F$ for which the constant $C$ cannot be omitted entirely. Moreover, it turns out that Theorem~\ref{KSS} is already best possible up to the value of the constant~$C$ if ${\rm hcf}(F)\neq 1$. To see this, for simplicity assume that
$k:=\chi(F) \ge 3$ and $n=k \ell |F|$ for some $\ell \in \mathbb{N}$ and let $G$ be a complete $k$-partite graph with vertex classes $V_1,\dots,V_k$, where
$|V_1|:=\ell |F|-1$, $|V_2|:=\ell |F|+1$ and $|V_i|=\ell |F|$ for $i \ge 3$. Consider any $F$-packing $F_1,\dots,F_t$ in~$G$. Let $G_i$ be the graph obtained from $G$ by removing $F_1,\dots, F_{i}$. So $G=G_0$. Suppose that $q:={\rm hcf}(F) \neq 1$. The vertex classes $V_1,\dots,V_k$ induce an optimal colouring on every copy of~$F$ in~$G$, and $q$ divides all differences of class sizes in such a colouring. So the vertex classes $V_i^1$ of $G_1$ still have the property that
$|V_1^1|-|V_k^1| \not\equiv 0$ modulo $q$. More generally, this property is preserved for all $G_i$, so the original $F$-packing cannot cover all the vertices in $V_1 \cup V_k$.
One can now combine Theorems~\ref{KSS} and~\ref{thmmain} (and the corresponding lower bounds which are discussed in detail in~\cite{KOmatch}) to obtain a complete answer to the question of which minimum degree forces a perfect $F$-packing (up to an additive constant). For this, let $$\chi^*(F):= \begin{cases} \chi_{cr}(F) &\text{ if ${\rm hcf} (F)=1$};\\ \chi(F) &\text{ otherwise}. \end{cases} $$ Also let $\delta(F,n)$ denote the smallest integer $k$ such that every graph $G$
whose order $n$ is divisible by~$|F|$ and with $\delta(G)\ge k$ contains a perfect $F$-packing.
\begin{theorem}{\bf (K\"uhn and Osthus~\cite{KOmatch})}\label{thmmaingeneral} For every graph $F$ there exists a constant $C=C(F)$ such that $$\left( 1-\frac{1}{\chi^*(F)} \right)n-1\le \delta(F,n) \le \left(1-\frac{1}{\chi^*(F)} \right)n+C.$$ \end{theorem}
The constant~$C$ appearing in Theorems~\ref{thmmain} and~\ref{thmmaingeneral} is rather large since it is related to the number of partition classes (clusters) obtained by the Regularity lemma. It would be interesting to know whether one can take e.g.~$C=|F|$ (this holds for large~$n$ in Theorem~\ref{KSS}). Another open problem is to characterize all those graphs~$F$ for which $\delta(F,n)=\lceil(1-1/\chi^*(F))n\rceil$. This is known to be the case for complete graphs by Theorem~\ref{hajnalsz} and all graphs with at most $4$ vertices (see~Kawarabayashi~\cite{ken} for a proof of the case when $F$ is a $K_4$ minus an edge and a discussion of the other cases). If $n$ is large, this is also known to hold for cycles (this follows from Theorem~\ref{abbasi} below) and for the case when $F$ is a complete graph minus an edge~\cite{KOKlminus} (the latter was conjectured in~\cite{ken}).
\subsection{Ore-type degree conditions} Recently, a simple proof (based on an inductive argument) of the Hajnal-Szemer\'edi theorem was found by Kierstead and Kostochka~\cite{KK1}. Using similar methods, they subsequently strengthened this to an Ore-type condition~\cite{KK2}: \begin{theorem}{\bf (Kierstead and Kostochka~\cite{KK2})} Let $G$ be a graph whose order~$n$ is divisible by~$r$. If $d(x)+d(y) \ge 2(1-1/r) n-1$ for all pairs $x\neq y$ of nonadjacent vertices, then~$G$ has a perfect~$K_r$-packing. \end{theorem} Equivalently, if a graph~$G$ whose order is divisible by~$k$ satisfies $d(x)+d(y) \le 2k-1$ for every edge~$xy$, then~$G$ has an equitable~$k$-colouring. (So $k:=n/r$.) Recently, together with Treglown~\cite{KOTore}, we proved an Ore-type analogue of Theorem~\ref{thmmaingeneral} (but with a linear error term $\varepsilon n$ instead of the additive constant~$C$). The result in this case turns out to be genuinely different: again, there are some graphs $F$ for which the degree condition depends on $\chi(F)$ and some for which it depends on $\chi_{cr}(F)$. However, there are also graphs~$F$ for which it depends on a parameter which lies strictly between $\chi_{cr}(F)$ and $\chi(F)$. This parameter in turn depends on how many additional colours are necessary to extend colourings of neighbourhoods of certain vertices of~$F$ to a colouring of~$F$. It is an open question whether the linear error term in~\cite{KOTore} can be reduced to a constant one.
\subsection{$r$-partite versions} It is also natural to consider $r$-partite versions of the Hajnal-Szemer\'edi theorem. For this, given an~$r$-partite graph~$G$, let $\delta'(G)$ denote the minimum over all vertex classes~$W$ of~$G$ and all vertices~$x\notin W$ of the number of neighbours of~$x$ in~$W$. The obvious question is what value of $\delta'(G)$ ensures that~$G$ has a perfect~$K_r$-packing. The following (surprisingly difficult) conjecture is implicit in~\cite{MM}. Fischer~\cite{Fisher} originally made a stronger conjecture which did not include the `exceptional' graph~$\Gamma_{r,n}$ defined below. \begin{conj} \label{partite} Suppose that $r\ge 2$ and that~$G$ is an $r$-partite graph with vertex classes of size $n$. If $\delta'(G) \ge (1-1/r)n$, then $G$ has a perfect $K_r$-packing unless both $r$ and $n$ are odd and $G=\Gamma_{r,n}$. \end{conj} To define the graph $\Gamma_{r,n}$, we first construct a graph $\Gamma_r$: its vertices are labelled $g_{ij}$ with $1 \le i,j \le r$. We have an edge between $g_{ij}$ and $g_{i'j'}$ if $i \neq i'$, $j \neq j'$ and at least one of $j,j'$ is at most $r-2$. We also have an edge if $i \neq i'$ and we have either $j=j'=r-1$ or $j=j'=r$ (see Fig.~1). \begin{figure}
\caption{The graph $\Gamma_3=\Gamma_{3,1}$ in Conjecture~\ref{partite}}
\label{g33}
\end{figure} $\Gamma_{r,n}$ is then obtained from $\Gamma_r$ by replacing each vertex with an independent set of size $n/r$ and replacing each edge with a complete bipartite graph.
To see that $\Gamma_{r,n}$ has no perfect $K_r$-packing when both $r$ and $n$ are odd, let $W_\ell$ denote the set of vertices of $\Gamma_{r,n}$ which correspond to a vertex of $\Gamma_{r}$ with $j =\ell$. Note that every copy of~$K_r$ which covers a vertex in $W_1\cup\dots\cup W_{r-2}$ has to contain at least 2 vertices in~$W_{r-1}$ or at least~2 vertices in~$W_r$. So in order to cover all vertices in $W_1\cup\dots\cup W_{r-2}$
we can only use copies of~$K_r$ which contain exactly~2 vertices in~$W_{r-1}$ or exactly~2 vertices in~$W_r$. But since $|W_{r-1}|=|W_r|=n$ is odd this means that it is impossible to cover all vertices of $\Gamma_{r,n}$ with vertex-disjoint copies of~$K_r$. (Note that the argument only uses that~$n$ is odd; however, since $r$ divides $n$ in the definition of $\Gamma_{r,n}$, the case where $n$ is odd and $r$ is even does not occur, which is why the conjecture only excludes $\Gamma_{r,n}$ when both $r$ and $n$ are odd.)
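For completeness, here is the counting behind the preceding argument (a routine count, included for the reader's convenience). The graph $\Gamma_{r,n}$ has $rn$ vertices, so a perfect $K_r$-packing would consist of exactly $n$ copies of $K_r$. No copy of $K_r$ meets both $W_{r-1}$ and $W_r$, since there are no edges between these two sets, and each copy has at most one vertex in each of $W_1,\dots,W_{r-2}$ (each $W_j$ with $j\le r-2$ is an independent set). Hence
$$(r-2)n=|W_1\cup\dots\cup W_{r-2}|=\sum_{\text{copies }K}|V(K)\cap(W_1\cup\dots\cup W_{r-2})|\le n(r-2),$$
so every copy contains exactly $r-2$ vertices of $W_1\cup\dots\cup W_{r-2}$, and thus exactly $2$ vertices of $W_{r-1}$ or exactly $2$ vertices of $W_r$. The copies of the first kind would then cover $W_{r-1}$ in pairs, which is impossible as $|W_{r-1}|=n$ is odd.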
A much simpler example which works for all~$r$ and~$n$ but which gives a weaker bound when $r$ and $n$ are odd is obtained as follows: choose a set $A$ which has less than $(1-1/r)n$ vertices in each vertex class and include all edges which have at least one endpoint in~$A$. For large $n$, the case $r=3$ of Conjecture~\ref{partite} was solved by Magyar and Martin~\cite{MM} and the case $r=4$ by Martin and Szemer\'edi~\cite{MS}, both using the Regularity lemma (the case $r=2$ is elementary). Johansson~\cite{johansson} had earlier proved an approximate version of the case~$r=3$. Csaba and Mydlarz~\cite{csabamulti} proved a result which implies that Conjecture~\ref{partite} holds approximately when~$r$ is large (and $n$ large compared to~$r$). Generalizations to packings of arbitrary graphs were considered in~\cite{hladkyschacht,zhaomartin,zhaobip}. A variant of the problem (where one considers usual minimum degree $\delta(G)$) was considered by Johansson, Johansson and Markstr\"om~\cite{JJM}. They solved the case $r=3$ and gave bounds for the case $r>3$. This problem is related to bounding the so-called `strong chromatic number'.
\subsection{Hypergraphs} \label{hyperpack} (Perfect) $F$-packings have also been investigated for the case when~$F$ is a uniform hypergraph. Unsurprisingly, the hypergraph problem turns out to be much more difficult than the graph problem. There are two natural notions of (minimum) degree of the `dense' hypergraph~$G$. Firstly, one can consider the vertex degree. Secondly, given an $r$-uniform hypergraph~$G$ and an $(r-1)$-tuple~$W$ of vertices in $G$, the degree of $W$ is defined to be the number of hyperedges which contain~$W$. This notion of degree is called \emph{collective degree} or \emph{co-degree}. In contrast to the graph case, even the minimum collective degree which ensures a perfect matching (i.e.~when~$F$ consists of a single edge) is not easy to determine. R\"odl, Ruci\'nski and Szemer\'edi~\cite{RRS} gave a precise solution to this problem; the answer turns out to be close to $n/2$. This improved bounds of~\cite{KOhypermatch,RRSapprox}. An $r$-partite version (which is best possible for infinitely many values of~$n$) was proved by Aharoni, Georgakopoulos and Spr\"ussel~\cite{AGS}. The minimum vertex degree which forces the existence of a perfect matching is unknown. It is natural to make the following conjecture (a related $r$-partite version is conjectured in~\cite{AGS}). \begin{conj} \label{hypermatch} For all integers $r$ and all ${\varepsilon}>0$ there is an integer $n_0=n_0(r,{\varepsilon})$ so that the following holds for all $n\ge n_0$ which are divisible by~$r$: if~$G$ is an $r$-uniform hypergraph on $n$ vertices whose minimum vertex degree is at least $$(1-(1-1/r)^{r-1}+{\varepsilon})\binom{n}{r-1},$$ then $G$ has a perfect matching. \end{conj} The following construction gives a corresponding lower bound: let $V$ be a set of $n$ vertices, let $A \subseteq V$ be a set of fewer than $n/r$ vertices, and include as hyperedges all $r$-tuples with at least one vertex in $A$. The case $r=3$ of the conjecture was proved recently by Han, Person and Schacht~\cite{HPS}.
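For completeness, here is the calculation behind the lower bound construction above (a routine computation; take $|A|=\lceil n/r\rceil-1$, say). The hypergraph has no perfect matching: a perfect matching would consist of $n/r$ disjoint hyperedges, each of which contains a vertex of $A$, which is impossible since $|A|<n/r$. On the other hand, the minimum vertex degree is attained by the vertices $v\notin A$, whose hyperedges are exactly those $r$-sets containing $v$ which meet $A$, so it equals
$$\binom{n-1}{r-1}-\binom{n-1-|A|}{r-1}
=\left(1-\Big(1-\frac{|A|}{n}\Big)^{r-1}+o(1)\right)\binom{n}{r-1}
=\left(1-\Big(1-\frac1r\Big)^{r-1}+o(1)\right)\binom{n}{r-1}$$
for fixed $r$ as $n\to\infty$, which matches the bound in Conjecture~\ref{hypermatch} up to the error term ${\varepsilon}\binom{n}{r-1}$.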
A hypergraph analogue of Theorem~\ref{thmmaingeneral} currently seems out of reach. So far, the only hypergraph~$F$ (apart from the single edge) for which the approximate minimum collective degree which forces a perfect $F$-packing has been determined is the $3$-uniform hypergraph with 4 vertices and 2 edges~\cite{KOloose}. Pikhurko~\cite{pikhurko} gave bounds on the minimum collective degree which forces a perfect packing of the complete $3$-uniform hypergraph on $4$ vertices. In the same paper, he also shows that if $\ell \ge r/2$ and $G$ is an $r$-uniform hypergraph where every $\ell$-tuple of vertices is contained in at least $(1/2+o(1))\binom{n}{r-\ell}$ hyperedges, then $G$ has a perfect matching, which is best possible up to the $o(1)$-term. This result is rather surprising in view of the fact that Conjecture~\ref{hypermatch} (which corresponds to the case when $\ell=1$) has a rather different form. Further results on this question are also proved in~\cite{HPS}.
\section{Trees}
One of the earliest applications of the Blow-up lemma was the solution by Koml\'os, S\'ark\"ozy and Szemer\'edi~\cite{KSStrees1} of a conjecture of Bollob\'as on the existence of given bounded degree spanning trees. The authors later relaxed the condition of bounded degree to obtain the following result.
\begin{theorem} {\bf (Koml\'os, S\'ark\"ozy and Szemer\'edi~\cite{KSSztrees})} For any $ \gamma > 0 $ there exist constants $c>0$ and $n_0$ with the following properties. If $ n \geq n_0 $, $T$ is a tree of order $n$ with $ \Delta (T) \leq { cn /\log n}$, and $G$ is a graph of order $n$ with $ { \delta (G)} \geq (1/2+ { \gamma })n $, then $T$ is a subgraph of $G$. \end{theorem} The condition $ \Delta (T) \leq { cn /\log n}$ is best possible up to the value of $c$. (The example given in~\cite{KSSztrees} to show this is a random graph $G$ with edge probability $0.9$ and a tree of depth $2$ whose root has degree close to $\log n$.)
It is an easy exercise to see that every graph of minimum degree at least $k$ contains any tree with $k$ edges. The following classical conjecture would imply that we can replace the minimum degree condition by one on the average degree. \begin{conj} {\bf (Erd\H{o}s and S\'os~\cite{ErdosSos})} \label{erdossos} Every graph of average degree greater than $k-1$ contains any tree with $k$ edges. \end{conj} This is trivially true for stars. (On the other hand, stars also show that the bound is best possible in general.) It is also trivial if one assumes an extra factor of~2 in the average degree. It has been proved for some special classes of trees, most notably those of diameter at most~4~\cite{mclennan}.
The conjecture is also true for `locally sparse' graphs -- see Sudakov and Vondrak~\cite{vondraksudakov} for a discussion of this.
The following result proves (for large $n$) a related conjecture of Loebl. An approximate version was proved earlier by Ajtai, Koml\'os and Szemer\'edi~\cite{AKSloebl}. \begin{theorem}{\bf (Zhao~\cite{zhao})} \label{zhao} There is an integer $n_0$ so that every graph $G$ on $n \ge n_0$ vertices which has at least $n/2$ vertices of degree at least $n/2$ contains all trees with at most $n/2$ edges. \end{theorem} This would be generalized by the following conjecture. \begin{conj} {\bf (Koml\'os and S\'os)} \label{KomlosSos} Every graph $G$ on $n$ vertices which has at least $n/2$ vertices of degree at least $k$ contains all trees with $k$ edges. \end{conj} Again, the conjecture is trivially true (and best possible) for stars. Piguet and Stein~\cite{PS} proved an approximate version for the case when $k$ is linear in $n$ and $n$ is large. Cooley~\cite{cooley} as well as Hladk\'y and Piguet~\cite{hladkypiguet} proved an exact version for this case. All of these proofs are based on the Regularity lemma. As with Conjecture~\ref{erdossos}, there are several results on special cases which are not based on the Regularity lemma. For instance, Piguet and Stein proved it for trees of diameter at most~5~\cite{PSdiam}.
\section{Hamilton cycles}
\subsection{Classical results for graphs and digraphs} As mentioned in the introduction, the decision problem of whether a graph has a Hamilton cycle is NP-complete, so it makes sense to ask for degree conditions which ensure that a graph has a Hamilton cycle. One such result is the classical theorem of Dirac. \begin{theorem}{\bf (Dirac~\cite{dirac})} \label{dirac} Every graph on $n\ge 3$ vertices with minimum degree at least $n/2$ contains a Hamilton cycle. \end{theorem} For an analogue in directed graphs it is natural to consider the \emph{minimum semidegree~$\delta^0(G)$} of a digraph $G$, which is the minimum of its minimum outdegree~$\delta^+(G)$ and its minimum indegree~$\delta^-(G)$. (Here a directed graph may have two edges between a pair of vertices, but in this case their directions must be opposite.) The corresponding result is a theorem of Ghouila-Houri~\cite{gh}. \begin{theorem}{\bf (Ghouila-Houri~\cite{gh})} \label{hamGH} Every digraph on $n$ vertices with minimum semidegree at least $n/2$ contains a Hamilton cycle. \end{theorem} In fact, Ghouila-Houri proved the stronger result that every strongly connected digraph of order~$n$ where every vertex has total degree at least~$n$ has a Hamilton cycle. (When referring to paths and cycles in directed graphs we always mean that these are directed, without mentioning this explicitly.) All of the above degree conditions are best possible. Theorems~\ref{dirac} and~\ref{hamGH} were generalized to a degree condition on pairs of vertices for graphs as well as digraphs: \begin{theorem}{\bf (Ore~\cite{ore})} Suppose that $G$ is a graph with $n \ge 3$ vertices such that every pair $x\neq y$ of nonadjacent vertices satisfies $d(x)+d(y) \ge n$. Then $G$ has a Hamilton cycle. \end{theorem} \begin{theorem}{\bf (Woodall~\cite{woodall})} \label{woodall} Let $G$ be a strongly connected digraph on $n \ge 2$ vertices. If $d^+(x)+d^-(y)\ge n$ for every pair $x\neq y$ of vertices for which there is no edge from $x$ to $y$, then $G$ has a Hamilton cycle. \end{theorem}
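To recall why Theorem~\ref{dirac} is best possible (a standard example, included here for completeness), consider the complete bipartite graph $G=K_{\lfloor (n-1)/2\rfloor,\lceil (n+1)/2\rceil}$. Its minimum degree is
$$\Big\lfloor\frac{n-1}{2}\Big\rfloor=\Big\lceil\frac{n}{2}\Big\rceil-1,$$
but it has no Hamilton cycle: any cycle in a bipartite graph alternates between the two vertex classes and so uses the same number of vertices from each class, which is impossible here as the classes have different sizes. The analogous complete bipartite digraph shows the same for Theorem~\ref{hamGH}.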
There are many generalizations of these results. The survey~\cite{gould} gives an overview for undirected graphs and the monograph~\cite{digraphsbook} gives a discussion of directed versions. Below, we describe some recent progress on degree conditions for Hamilton cycles, much of which is based on the Regularity lemma.
\subsection{Hamilton cycles in oriented graphs} Thomassen~\cite{thomassen_79} raised the natural question of determining the minimum semidegree that forces a Hamilton cycle in an \emph{oriented graph} (i.e.~in a directed graph that can be obtained from a simple undirected graph by orienting its edges). Thomassen initially believed that the correct minimum semidegree bound should be $n/3$ (this bound is obtained by considering a `blow-up' of an oriented triangle). However, H\"aggkvist~\cite{HaggkvistHamilton} later gave the following construction which gives a lower bound of $\lceil (3n-4)/8 \rceil -1$ (see Fig.~2). \begin{figure}
\caption{An extremal example for Theorem~\ref{main}}
\label{extremal2}
\end{figure}
For $n$ of the form $n=4m+3$ where $m$ is odd, we construct~$G$ on $n$ vertices as follows. Partition the vertices into $4$ parts $A,B,C,D$, with $|A|=|C|=m$, $|B|=m+1$ and $|D|=m+2$. Each of $A$ and $C$ spans a regular tournament, and $B$ and $D$ are joined by a bipartite tournament
(i.e.~an orientation of the complete bipartite graph) which is as regular as possible. We also add all edges from $A$ to $B$, from $B$ to $C$, from $C$ to $D$ and from $D$ to $A$. Since every path which joins two vertices in~$D$ has to pass through~$B$, it follows that every cycle contains at least as many vertices from~$B$ as it contains from~$D$. As $|D|>|B|$ this means that one cannot cover all the vertices of~$G$ by disjoint cycles.
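For the reader's convenience, here is a quick check of the minimum semidegree of this graph for one type of vertex (the remaining cases are similar and use that the bipartite tournament between $B$ and $D$ is as regular as possible). A vertex $a\in A$ has as out-neighbours its $\frac{m-1}{2}$ out-neighbours inside the regular tournament on $A$ together with all of $B$, so
$$d^+(a)=\frac{m-1}{2}+(m+1)=\frac{3m+1}{2}=\Big\lceil\frac{3n-4}{8}\Big\rceil-1,$$
where the last equality uses that $n=4m+3$ and $m$ is odd; this is in agreement with the lower bound $\lceil(3n-4)/8\rceil-1$ mentioned above.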
This construction can be extended to arbitrary~$n$ (see~\cite{KKOexact}). The following result exactly matches this bound and improves earlier ones of several authors, e.g.~\cite{HaggkvistHamilton,HaggkvistThomasonHamilton,kellyKO}.
\begin{theorem}{\bf (Keevash, K\"uhn and Osthus~\cite{KKOexact})} \label{main} There exists an integer $n_0$ so that any oriented graph $G$ on $n \ge n_0$ vertices with minimum semidegree $\delta^0(G) \ge \frac{3n-4}{8}$ contains a Hamilton cycle. \end{theorem} The proof of this result is based on some ideas in~\cite{kellyKO}. H\"aggkvist~\cite{HaggkvistHamilton} also made the following conjecture which is closely related to Theorem~\ref{main}. Given an oriented graph~$G$, let~$\delta(G)$ denote the minimum degree of~$G$ (i.e.~the minimum number of edges incident to a vertex) and set $\delta^*(G):=\delta(G)+\delta^+(G)+\delta^-(G)$. \begin{conj}{\bf (H\"aggkvist~\cite{HaggkvistHamilton})} \label{haggconj} Every oriented graph~$G$ on $n$ vertices with $\delta^*(G)>(3n-3)/2$ contains a Hamilton cycle. \end{conj} (Note that this conjecture does not quite imply Theorem~\ref{main} as it results in a marginally greater minimum semidegree condition.) In~\cite{kellyKO}, Conjecture~\ref{haggconj} was verified approximately, i.e.~if $\delta^*(G) \ge (3/2+o(1))n$, then $G$ has a Hamilton cycle (note this implies an approximate version of Theorem~\ref{main}). The same methods also yield an approximate version of Theorem~\ref{woodall} for oriented graphs. \begin{theorem}{\bf (Kelly, K\"uhn and Osthus~\cite{kellyKO})}\label{thm:Ore} For every $\alpha>0$ there exists an integer $n_0=n_0(\alpha)$ such that every oriented graph~$G$ of order $n\geq n_0$ with $d^+(x)+d^-(y)\ge (3/4+\alpha)n$ whenever $G$ does not contain an edge from~$x$ to~$y$ contains a Hamilton cycle. \end{theorem} The above construction of H\"aggkvist shows that the bound is best possible up to the term $\alpha n$. It would be interesting to obtain an exact version of this result.
Note that Theorem~\ref{main} implies that every sufficiently large regular tournament on~$n$ vertices contains at least $n/8$ edge-disjoint Hamilton cycles. (To verify this, note that in a regular tournament, all in- and outdegrees are equal to $(n-1)/2$. We can then greedily remove Hamilton cycles as long as the degrees satisfy the condition in Theorem~\ref{main}.) It is the best bound so far towards the following conjecture of Kelly~(see e.g.~\cite{digraphsbook}). \begin{conj}{\bf (Kelly)} Every regular tournament on~$n$ vertices can be partitioned into~$(n-1)/2$ edge-disjoint Hamilton cycles. \end{conj} A result of Frieze and Krivelevich~\cite{FKhampack} states that every dense ${\varepsilon}$-regular digraph contains a collection of edge-disjoint Hamilton cycles which covers almost all of its edges. This implies that the same holds for almost every tournament. Together with a lower bound by McKay~\cite{McKay} on the number of regular tournaments, it is easy to see that the above result in~\cite{FKhampack} also implies that almost every regular tournament contains a collection of edge-disjoint Hamilton cycles which covers almost all of its edges. Thomassen made the following conjecture which replaces the assumption of regularity by high connectivity. \begin{conj}{\bf (Thomassen~\cite{thomassenconj})} For every $k \ge 2$ there is an integer $f(k)$ so that every strongly $f(k)$-connected tournament has $k$ edge-disjoint Hamilton cycles. \end{conj} The following conjecture of Jackson is also closely related to Theorem~\ref{main} -- it would imply a much better degree condition for regular oriented graphs. \begin{conj}{\bf (Jackson~\cite{jacksonconj})} \label{jacksonconj} For $d>2$, every $d$-regular oriented graph $G$ on $n \le 4d+1$ vertices is Hamiltonian. \end{conj} The disjoint union of two regular tournaments on $n/2$ vertices shows that this would be best possible. An undirected analogue of Conjecture~\ref{jacksonconj} was proved by Jackson~\cite{jacksonreg}. It is easy to see that every tournament on~$n$ vertices with minimum semidegree at least $n/4$ has a Hamilton cycle. In fact, for tournaments $T$ of large order~$n$ with minimum semidegree at least $n/4+{\varepsilon} n$, Bollob\'as and H\"aggkvist~\cite{BHpower} proved the stronger result that (for fixed $k$) $T$ even contains the $k$th power of a Hamilton cycle. It would be interesting to find corresponding degree conditions which ensure this for arbitrary digraphs and for oriented graphs.
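For completeness, the greedy argument giving $n/8$ edge-disjoint Hamilton cycles at the beginning of the previous paragraph comes down to the following calculation: after removing $t$ edge-disjoint Hamilton cycles from a regular tournament on $n$ vertices, every in- and outdegree equals $\frac{n-1}{2}-t$, and Theorem~\ref{main} can be applied to the resulting oriented graph as long as
$$\frac{n-1}{2}-t\ \ge\ \frac{3n-4}{8},\qquad\text{that is,}\qquad t\ \le\ \frac{n}{8}.$$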
\subsection{Degree sequences forcing Hamilton cycles in directed graphs}
For undirected graphs, Dirac's theorem is generalized by Chv\'atal's theorem~\cite{chvatal} that characterizes all those degree sequences which ensure the existence of a Hamilton cycle in a graph: suppose that the degrees of the graph are $d_1\le \dots \le d_n$. If $n \geq 3$ and $d_i \geq i+1$ or $d_{n-i} \geq n-i$ for all $i <n/2$ then $G$ is Hamiltonian. This condition on the degree sequence is best possible in the sense that for any degree sequence violating this condition there is a corresponding graph with no Hamilton cycle. Nash-Williams~\cite{nw} raised the question of a digraph analogue of Chv\'atal's theorem quite soon after the latter was proved: for a digraph~$G$ it is natural to consider both its outdegree sequence $d^+ _1,\dots , d^+ _n$ and its indegree sequence $d^- _1,\dots , d^- _n$. Throughout, we take the convention that $d^+ _1\le \dots \le d^+ _n$ and $d^- _1 \le \dots \le d^- _n$ without mentioning this explicitly. Note that the terms $d^+ _i$ and $d^- _i$ do not necessarily correspond to the degree of the same vertex of~$G$. \begin{conj}[Nash-Williams~\cite{nw}]\label{nw} Suppose that $G$ is a strongly connected digraph on $n \geq 3$ vertices such that for all $i < n/2$ \begin{itemize} \item[{\rm (i)}] $d^+ _i \geq i+1 $ or $ d^- _{n-i} \geq n-i $, \item[{\rm (ii)}] $ d^- _i \geq i+1$ or $ d^+ _{n-i} \geq n-i.$ \end{itemize} Then $G$ contains a Hamilton cycle. \end{conj} It is even an open problem whether the conditions imply the existence of a cycle through any pair of given vertices (see~\cite{bt}). It is easy to see that one cannot omit the condition that~$G$ is strongly connected. The following example (which is a straightforward generalization of the corresponding undirected example) shows that the degree condition in Conjecture~\ref{nw} would be best possible in the sense that for all $n\ge 3$ and all $k<n/2$ there is a non-Hamiltonian strongly connected digraph~$G$ on~$n$ vertices which satisfies the degree conditions except that $d^+_k,d^-_k\ge k+1$ are replaced by $d^+_k,d^-_k\ge k$ in the $k$th pair of conditions. To see this, take an independent set~$I$ of size $k<n/2$ and a complete digraph~$K$ of order~$n-k$. Pick a set~$X$ of~$k$ vertices of~$K$ and add all possible edges (in both directions) between~$I$ and~$X$. The digraph~$G$ thus obtained is strongly connected, not Hamiltonian and $$\underbrace{k, \dots ,k}_{k \text{ times}}, \underbrace{n-1-k, \dots , n-1-k}_{n-2k \text{ times}}, \underbrace{n-1, \dots , n-1}_{k \text{ times}}$$ is both the out- and indegree sequence of~$G$. In contrast to the undirected case there exist examples with a similar degree sequence to the above but whose structure is quite different (see~\cite{KOTchvatal}). In~\cite{KOTchvatal}, the following approximate version of Conjecture~\ref{nw} for large digraphs was proved.
\begin{theorem}[K\"uhn, Osthus and Treglown \cite{KOTchvatal}]\label{approxnw} For every $\alpha >0$ there exists an integer $n_0 =n_0 (\alpha)$ such that the following holds. Suppose $G$ is a digraph on $n \geq n_0$ vertices such that for all $i < n/2$ \begin{itemize} \item $ d^+ _i \geq i+ \alpha n $ or $ d^- _{n-i- \alpha n} \geq n-i $, \item $ d^- _i \geq i+ \alpha n $ or $ d^+ _{n-i- \alpha n} \geq n-i .$ \end{itemize}Then $G$ contains a Hamilton cycle. \end{theorem} Theorem~\ref{approxnw} was derived from a result in~\cite{KKOexact} on the existence of a Hamilton cycle in an oriented graph satisfying a `robust' expansion property.
The following weakening of Conjecture~\ref{nw} was posed earlier by Nash-Williams \cite{ch2}. It would yield a digraph analogue of P\'osa's theorem which states that a graph~$G$ on~$n\ge 3$ vertices has a Hamilton cycle if its degree sequence $d_1\le \dots \le d_n$ satisfies $d_i \geq i+1$ for all $i<(n-1)/2$ and if additionally $d_{\lceil n/2\rceil} \geq \lceil n/2\rceil$ when~$n$ is odd~\cite{posa}. Note that P\'osa's theorem is much stronger than Dirac's theorem but is a special case of Chv\'atal's theorem. \begin{conj}[Nash-Williams~\cite{ch2}]\label{nw2} Let $G$ be a digraph on $n \geq 3$ vertices such that $d^+ _i,d^-_i \geq i+1 $ for all $i <(n-1)/2$ and such that additionally $d^+_{\lceil n/2\rceil},d^-_{\lceil n/2\rceil} \geq \lceil n/2\rceil$ when~$n$ is odd. Then~$G$ contains a Hamilton cycle. \end{conj} The previous example shows the degree condition would be best possible in the same sense as described there. The assumption of strong connectivity is not necessary in Conjecture~\ref{nw2}, as it follows from the degree conditions. Theorem~\ref{approxnw} immediately implies an approximate version of Conjecture~\ref{nw2}.
It turns out that the conditions of Theorem~\ref{approxnw} even guarantee the digraph~$G$ to be \emph{pancyclic}, i.e.~$G$ contains a cycle of length~$t$ for all $t=2,\dots,n$. Thomassen~\cite{tom} as well as H\"aggkvist and Thomassen~\cite{haggtom} gave degree conditions which imply that every digraph with minimum semidegree $>n/2$ is pancyclic. The latter bound can also be deduced directly from Theorem~\ref{hamGH}. The complete bipartite digraph whose vertex class sizes are as equal as possible shows that the bound is best possible. For oriented graphs the minimum semidegree threshold which guarantees pancyclicity turns out to be $(3n-4)/8$ (see~\cite{kellyKOpan}).
\subsection{Powers of Hamilton cycles in graphs}
The following result is a common extension (for large $n$) of Dirac's theorem and the Hajnal-Szemer\'edi theorem. It was originally conjectured (for all $n$) by Seymour. \begin{theorem} \label{KSSpowers} {\bf (Koml\'os, S\'ark\"ozy and Szemer\'edi~\cite{KSSz98})} For every $k\ge 1$ there is an integer $n_0$ so that every graph $G$ on $n \ge n_0$ vertices and with $\delta(G) \ge \frac{k}{k+1} n$ contains the $k$th power of a Hamilton cycle. \end{theorem} Complete $(k+1)$-partite graphs whose vertex classes have almost (but not exactly) equal size show that the minimum degree bound is best possible. Prior to this, a large number of partial results had been proved (see~e.g.~\cite{LSS} for a history of the problem). Very recently, Levitt, S\'ark\"ozy and Szemer\'edi~\cite{LSS} gave a proof of the case $k=2$ which avoids the use of the Regularity lemma, resulting in a much better bound on~$n_0$. Their proof is based on a technique introduced by R\"odl, Ruci\'nski and Szemer\'edi~\cite{RRSDirac} for hypergraphs. The idea of this method (as applied in~\cite{LSS}) is first to find an `absorbing' path $P^2$: roughly, $P^2$ is the second power of a path $P$ which, given any vertex $x$, has the property that $x$ can be inserted into $P$ so that $P \cup \{x\}$ still induces the second power of a path. The proof of the existence of $P^2$ is heavily based on probabilistic arguments. Then one finds the second power $Q^2$ of a path which is almost spanning in $G-P^2$. One can achieve this by repeated applications of the Erd\H{o}s-Stone theorem. One then connects up~$Q^2$ and~$P^2$ into the second power of a cycle and finally uses the absorbing property of $P^2$ to incorporate the vertices left over so far.
\subsection{Hamilton cycles in hypergraphs} \label{hypercycle}
It is natural to ask whether one can generalize Dirac's theorem to uniform hypergraphs. There are several possible notions of a hypergraph cycle. One generalization of the definition of a cycle in a graph is the following one. An $r$-uniform hypergraph $C$ is a \emph{cycle of order $n$} if there exists a cyclic ordering $v_1,\dots,v_n$ of its~$n$ vertices such that every consecutive pair $v_iv_{i+1}$ lies in a hyperedge of $C$ and such that every hyperedge of $C$ consists of consecutive vertices. Thus the cyclic ordering of the vertices of $C$ induces a cyclic ordering of its hyperedges. A cycle is \emph{tight} if every $r$ consecutive vertices form a hyperedge. A cycle of order $n$ is \emph{loose} if all pairs of consecutive edges (except possibly one pair) have exactly one vertex in common. (So every tight cycle contains a spanning loose cycle but a cycle might not necessarily contain a spanning loose cycle.) There is also the even more general notion of a \emph{Berge-cycle}, which consists of a sequence of vertices where each pair of consecutive vertices is contained in a common hyperedge. \noindent \begin{center} \psfrag{tight cycle}[][]{\small{tight cycle}} \includegraphics[scale=0.3]{hcycle11} \end{center} \begin{center} \psfrag{cycle}[][]{\small{cycle}} \includegraphics[scale=0.3]{hcycle21} \end{center} \begin{center} \psfrag{loose cycle of even order}[][]{\small{loose cycle}} \includegraphics[scale=0.3]{hcycle31} \end{center} A \emph{Hamilton cycle} of a uniform hypergraph $G$ is a subhypergraph of $G$ which is a cycle containing all its vertices. Theorem~\ref{tightcycle} gives an analogue of Dirac's theorem for tight hypergraph cycles, while Theorem~\ref{loosecycle} gives an analogue for loose cycles. \begin{theorem}{\bf (R\"odl, Ruci\'nski and Szemer\'edi~\cite{RRSDirac})} \label{tightcycle} For all $r \in \mathbb{N}$ and $\alpha>0$ there is an integer $n_0=n_0(r,\alpha)$ such that every $r$-uniform hypergraph $G$ with $n\ge n_0$ vertices and minimum collective degree at least $n/2+\alpha n$ contains a tight Hamilton cycle. \end{theorem}\begin{theorem}{\bf (Han and Schacht~\cite{HScycle}; Keevash, K\"uhn, Mycroft and Osthus~\cite{rcycle})} \label{loosecycle} For all $r \in \mathbb{N}$ and $\alpha>0$ there is an integer $n_0=n_0(r,\alpha)$ such that every $r$-uniform hypergraph $G$ with $n\ge n_0$ vertices and minimum collective degree at least $n/(2r-2)+\alpha n$ contains a loose Hamilton cycle. \end{theorem} Both results are best possible up to the error term $\alpha n$. In fact, if the minimum collective degree is less than $\lceil n/(2r-2) \rceil$, then we cannot even guarantee \emph{any} Hamilton cycle in an $r$-uniform hypergraph. The case $r=3$ of Theorems~\ref{tightcycle} and~\ref{loosecycle} was proved earlier in~\cite{RRSDirac3} and~\cite{KOloose} respectively. The result in~\cite{HScycle} also covers the notion of an $r$-uniform $\ell$-cycle for $\ell<r/2$ (here we ask for consecutive edges to intersect in precisely $\ell$ vertices). Hamiltonian Berge-cycles were considered by Bermond et al.~\cite{bermond}.
\section{Bounded degree spanning subgraphs}
Bollob\'as and Eldridge~\cite{BE78} as well as Catlin~\cite{catlin} made the following very general conjecture on embedding graphs. If true, this conjecture would be a far-reaching generalization of the Hajnal-Szemer\'edi theorem (Theorem~\ref{hajnalsz}). \begin{conj}[Bollob\'as and Eldridge~\cite{BE78}, Catlin~\cite{catlin}]\label{conjBE} If $G$ is a graph on $n$ vertices with $\delta(G) \ge \frac{\Delta n-1}{\Delta+1}$, then $G$ contains any graph $H$ on $n$ vertices with maximum degree at most $\Delta$. \end{conj} The conjecture has been proved for graphs $H$ of maximum degree at most 2~\cite{AB93,AF96} and for large graphs of maximum degree at most 3~\cite{CSS03}. Recently, Csaba~\cite{Csababipartite} proved it for bipartite graphs $H$ of arbitrary maximum degree $\Delta$, provided the order of~$H$ is sufficiently large compared to $\Delta$. In many applications of the Blow-up lemma, the graph~$H$ is embedded into~$G$ by splitting~$H$ up into several suitable parts and applying the Blow-up lemma to each of these parts (see e.g.~the example in Section~\ref{sample}). It is not clear how to achieve this for~$H$ as in Conjecture~\ref{conjBE}, as $H$ may be an `expander'. So the proofs in~\cite{Csababipartite,CSS03} rely on a variant of the Blow-up lemma which is suitable for embedding such `expander graphs'. Also, Kaul, Kostochka and Yu~\cite{KKY} showed (without using the Regularity lemma) that the conjecture holds if we increase the minimum degree condition to $\frac{\Delta n+2n/5-1}{\Delta+1}$.
Theorem~\ref{KSS} suggests that one might replace $\Delta$ in Conjecture~\ref{conjBE} with $\chi(H)-1$, resulting in a smaller minimum degree bound for some graphs $H$. This is far from being true in general (e.g.~let $H$ be a $3$-regular bipartite expander and let~$G$ be the union of two cliques which have equal size and are almost disjoint). However, Bollob\'as and Koml\'os conjectured that this does turn out to be true if we restrict our attention to a certain class of `non-expanding' graphs. This conjecture was recently confirmed in~\cite{bandwidth}. The bipartite case was proved earlier by Abbasi~\cite{Abbasi}. \begin{theorem}{\bf (B\"ottcher, Schacht and Taraz~\cite{bandwidth})} \label{bandwidth} For every $\gamma>0$ and all integers $r \ge 2$ and $\Delta$, there exist $\beta>0$ and $n_0$ with the following property. Every graph $G$ of order $n \ge n_0$ and minimum degree at least $(1-1/r+\gamma)n$ contains every $r$-chromatic graph $H$ of order $n$, maximum degree at most $\Delta$ and bandwidth at most $\beta n$ as a subgraph. \end{theorem} Here the \emph{bandwidth} of a graph $H$ is the smallest integer
$b$ for which there exists an enumeration $v_1,\dots,v_{|H|}$ of the vertices of $H$ such that every edge $v_iv_j$ of $H$ satisfies $|i-j|\le b$. Note that $k$th powers of cycles have bandwidth $2k$, so Theorem~\ref{bandwidth} implies an approximate version of Theorem~\ref{KSSpowers}. (Actually, this is only the case if $n$ is a multiple of $k+1$, as otherwise the $k$th power of a Hamilton cycle fails to be $(k+1)$-colourable. But~\cite{bandwidth} contains a more general result which allows for a small number of vertices of colour $k+2$.) A further class of graphs having small bandwidth and bounded degree is that of planar graphs with bounded degree~\cite{bandwidthplanar}. (See~\cite{KOtriang,KOTplanar} for further results on embedding planar graphs in graphs of large minimum degree.) Note that the discussion in Section~\ref{packings} implies that the minimum degree bound in Theorem~\ref{bandwidth} is approximately best possible for certain graphs $H$ but not for all graphs. Abbasi~\cite{abbasiband} showed that there are graphs $H$ for which the linear error term $\gamma n$ in Theorem~\ref{bandwidth} is necessary. One might think that one could reduce the error term to a constant for graphs of bounded bandwidth. However, this turns out to be incorrect. (We are grateful to Peter Allen for pointing this out to us.)
Alternatively, one can try to replace the bandwidth assumption in Theorem~\ref{bandwidth} with a less restrictive parameter. For instance, Csaba~\cite{csabasep} gave a minimum degree condition on $G$ which guarantees a copy of a `well-separated' graph $H$ in $G$. Here a graph $H$ with $n$ vertices is \emph{$\alpha$-separable} if there is a set $S$ of vertices of size at most $\alpha n$ so that all components of $H-S$ have size at most $\alpha n$. It is easy to see that every graph with $n$ vertices and bandwidth at most $\beta n$ is $\sqrt{\beta}$-separable. (Moreover, for every fixed $\alpha>0$, all sufficiently large trees are $\alpha$-separable but need not have small bandwidth, so separability is a less restrictive notion than bandwidth.)
Here is another common generalization of Dirac's theorem and the triangle case of Theorem~\ref{hajnalsz} (i.e.~the Corr\'adi-Hajnal theorem). It proves a conjecture by El-Zahar (actually, El-Zahar made the conjecture for all values of $n$; this remains open). \begin{theorem} {\bf (Abbasi~\cite{Abbasi})} \label{abbasi} There exists an integer $n_0$ so that the following holds. Suppose that $G$ is a graph on $n \ge n_0$ vertices and $n_1,\dots, n_k\ge 3$ are so that $$\sum_{i=1}^k n_i=n \qquad \text{and} \qquad \delta(G) \ge \sum_{i=1}^k \lceil n_i/2 \rceil. $$ Then $G$ has $k$ vertex-disjoint cycles whose lengths are $n_1,\dots,n_k$. \end{theorem}
Note that $\sum_{i=1}^k \lceil n_i/2 \rceil=\sum_{i=1}^k(1-1/\chi_{cr}(C_i))n_i$, where $C_i$ denotes a cycle of length $n_i$. This suggests the following more general question (which was raised by Koml\'os \cite{JKtiling}): Given $t\in\mathbb{N}$, does there exist an $n_0=n_0(t)$ such that whenever $H_1,\dots,H_k$ are graphs which each have at most $t$ vertices and which together have $n\ge n_0$ vertices and whenever~$G$ is a graph on $n$
vertices with minimum degree at least $\sum_i (1-1/\chi_{cr}(H_i))|H_i|$, then there is a set of vertex-disjoint copies of $H_1,\dots,H_k$ in $G$? In this form, the question has a negative answer by (the lower bound in) Theorem~\ref{thmmaingeneral}, but it would be interesting to find a common generalization of Theorems~\ref{thmmaingeneral} and~\ref{abbasi}.
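For completeness, here is the short computation behind the identity $\lceil n_i/2\rceil=(1-1/\chi_{cr}(C_i))n_i$ used at the beginning of this paragraph. For an odd cycle $C_{n_i}$ we have $\chi(C_{n_i})=3$ and $\sigma(C_{n_i})=1$, so
$$\chi_{cr}(C_{n_i})=\frac{2n_i}{n_i-1}
\qquad\text{and hence}\qquad
\Big(1-\frac{1}{\chi_{cr}(C_{n_i})}\Big)n_i=\Big(1-\frac{n_i-1}{2n_i}\Big)n_i=\frac{n_i+1}{2}=\Big\lceil\frac{n_i}{2}\Big\rceil,$$
while for an even cycle $\chi_{cr}(C_{n_i})=\chi(C_{n_i})=2$ and so $(1-1/\chi_{cr}(C_{n_i}))n_i=n_i/2=\lceil n_i/2\rceil$.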
It is also natural to ask corresponding questions for oriented and directed graphs. As in the case of Hamilton cycles, the questions appear much harder than in the undirected case and again much less is known. Keevash and Sudakov~\cite{keevashsudakov} recently obtained the following result which can be viewed as an oriented version of the $\Delta=2$ case of Conjecture~\ref{conjBE}. \begin{theorem} {\bf (Keevash and Sudakov~\cite{keevashsudakov})} There exist constants $c,C$ and an integer~$n_0$ so that whenever $G$ is an oriented graph on $n\ge n_0$ vertices with minimum semidegree at least $(1/2-c)n$ and whenever $n_1,\dots,n_t$ are so that $\sum_{i=1}^t n_i \le n-C$, then $G$ contains disjoint cycles of length $n_1,\dots,n_t$. \end{theorem} In the case of triangles (i.e.~when all the $n_i=3$), they show that one can choose $C=3$ (one cannot take $C=0$). \cite{keevashsudakov} also contains a discussion of related open questions for tournaments and directed graphs. Similar questions were also raised earlier by Song~\cite{song}. For instance, given $t$, what is the smallest integer $f(t)$ so that all but a finite number of $f(t)$-connected tournaments $T$ satisfy the following: Let $n$ be the number of vertices of $T$ and let $\sum_{i=1}^t n_i = n$. Then $T$ contains disjoint cycles of length $n_1,\dots,n_t$.
\section{Ramsey Theory} The Regularity lemma can often be used to show that the Ramsey numbers of sparse graphs $H$ are small. (The \emph{Ramsey number $R(H)$} of~$H$ is the smallest $N \in \mathbb{N}$ such that for every $2$-colouring of the complete graph on $N$ vertices one can find a monochromatic copy of~$H$.) In fact, the first result which demonstrated the use of the Regularity lemma in extremal graph theory was the following result of Chv\'atal, R\"odl, Szemer\'edi and Trotter~\cite{CRST}, which states that graphs of bounded degree have linear Ramsey numbers: \begin{theorem}{\bf (Chv\'atal, R\"odl, Szemer\'edi and Trotter~\cite{CRST})} \label{CRST} For all $\Delta\in\mathbb{N}$ there is a constant $C=C(\Delta)$ so that every graph $H$ with maximum degree $\Delta(H) \le \Delta$ and $n$ vertices satisfies $R(H)\le Cn$. \end{theorem} The constant $C$ arising from the original proof (based on the Regularity lemma) is quite large. The bound was improved in a series of papers. Recently, Fox and Sudakov~\cite{FoxSudakov} showed that $R(H)\le 2^{4\chi(H)\Delta}n$ (the bipartite case was also proved independently by Conlon~\cite{ConlonRamsey}). For bipartite graphs, a construction from~\cite{GRR} shows that this bound is best possible apart from the value of the absolute constant~$4\cdot 2$ appearing in the exponent.
Theorem~\ref{CRST} was recently generalized to hypergraphs~\cite{CFKO3,CFKOk,NORS,Ishigami} using hypergraph versions of the Regularity lemma. Subsequently, Conlon, Fox and Sudakov~\cite{CFSRamsey} gave a shorter proof which gives a better constant and does not rely on the Regularity lemma.
One of the most famous conjectures in Ramsey theory is the Burr-Erd\H{o}s conjecture on $d$-degenerate graphs, which generalizes Theorem~\ref{CRST}. Here a graph~$G$ is \emph{$d$-degenerate} if every subgraph has a vertex of degree at most~$d$. In other words, $G$ has no `dense' subgraphs. \begin{conj}{\bf (Burr and Erd\H{o}s~\cite{BERamsey})}\label{burrerdos} For every $d$ there is a constant $C=C(d)$ so that every $d$-degenerate graph $H$ on $n$ vertices satisfies $R(H)\le Cn$. \end{conj} It has been proved in many special cases (see e.g.~the introduction of~\cite{FoxSudakovbip} for a recent overview). Also, Kostochka and Sudakov~\cite{KSRamsey} proved that it is `approximately' true: \begin{theorem}{\bf (Kostochka and Sudakov~\cite{KSRamsey})} \label{approxburrerdos} For every $d$ there is a constant $C=C(d)$ so that every $d$-degenerate graph $H$ on $n$ vertices satisfies $R(H)\le 2^{C (\log n)^{2d/(2d+1)}} n $. \end{theorem} The exponent `$2d/(2d+1)$' of the logarithm was improved to `1/2' in~\cite{FoxSudakovbip}. All the results in~\cite{ConlonRamsey,FoxSudakov,FoxSudakovbip,KSRamsey} rely on variants of the same probabilistic argument, which was first applied to special cases of Conjecture~\ref{burrerdos} in~\cite{KoRodl}. To give an idea of this beautiful argument, we use a simple version to give a proof of the following density result (which is implicit in several of the above papers): it implies that bipartite graphs $H$ whose maximum degree is logarithmic in their order have polynomial Ramsey numbers. (The logarithms in the statement and the proof are binary.) \begin{theorem} \label{random}
Suppose that $H=(A',B',E')$ is a bipartite graph on $n \ge 2$ vertices and $\Delta(H) \le \log n$. Suppose that $m \ge n^8$. Then every bipartite graph $G=(A,B,E)$ with $|A|=|B|=m$ and at least $m^2/8$ edges contains a copy of $H$. In particular, $R(H) \le 2n^8$. \end{theorem} An immediate corollary is that the Ramsey number of a $d$-dimensional cube $Q_d$ is polynomial in its number $n=2^d$ of vertices (this fact was first observed in~\cite{Shi} based on an argument similar to that in~\cite{KoRodl}). The best current bound of $R(Q_d) \le d 2^{2d+5}$ is given in~\cite{FoxSudakov}. Burr and Erd\H{o}s~\cite{BERamsey} conjectured that the bound should actually be linear in~$n=2^d$. \removelastskip\penalty55
\noindent{\bf Proof. } Write $\Delta:=\log n$. Let $b_1,\dots,b_s$ be a sequence of $s:=2\Delta$ not necessarily distinct vertices of $B$, chosen uniformly and independently at random and write $S:=\{b_1,\dots,b_s \}$. Let $N(S)$ denote the set of common neighbours of vertices in~$S$. Clearly, $S \subseteq N(a)$ for every $a \in N(S)$. So Jensen's inequality implies that \begin{align*}
\mathbb{E} ( |N(S)| ) & = \sum_{a \in A} \mathbb{P} ( a \in N(S) )
= \sum_{a \in A} \left( \frac{|N(a)|}{m} \right)^s = \frac{\sum_{a \in A} ( d(a) )^s}{m^s} \\ & \ge \frac{m \left( \frac{\sum_{a \in A} d(a) }{m} \right)^s }{m^s} \ge \frac{m \left( (m^2/8)/m \right)^s }{m^s} = \frac{m}{8^s} \ge \frac{n^8}{n^6} =n^2. \end{align*}
We say that a subset $W \subseteq A$ is \emph{bad} if it has size $\Delta$ and its common neighbourhood $N(W)$ satisfies $|N(W)| <n$. Now let $Z$ denote the number of bad subsets $W$ of $N(S)$. Note that the probability that a given set $W \subseteq A$ lies in $N(S)$ equals
$(|N(W)|/m)^s$ (since the probability that it lies in the neighbourhood of a fixed
vertex $b\in B$ is $|N(W)|/m$). So $$ \mathbb{E} Z= \sum_{W \text{bad}} \mathbb{P} (W \subseteq N(S) ) \le \binom{m}{\Delta} \left( \frac{ n}{m} \right)^s \le m^\Delta \left( \frac{ n}{m} \right)^s = \left( \frac{n^2}{m} \right)^\Delta \le (1/2)^\Delta <1. $$
So $\mathbb{E} ( |N(S)| -Z ) \ge n^2-1 \ge n$ and hence there is a choice of $S$ with $|N(S)| -Z \ge n$. By definition, we can delete a vertex from every bad $W$ contained in $N(S)$ to obtain a set $T \subseteq N(S)$
with $|T| \ge n$
so that every subset $W \subseteq T$ with $|W|=\Delta$ satisfies $|N(W)| \ge n$. Clearly we can now embed $H$: first embed $A'$ arbitrarily (but injectively) into $T$, which is possible since $|A'|\le n\le |T|$, and then embed the vertices of $B'$ one by one into $B$. Indeed, each $b\in B'$ has at most $\Delta$ neighbours in $A'$, whose images lie in some subset $W\subseteq T$ with $|W|=\Delta$; these images therefore have at least $|N(W)|\ge n>|B'|$ common neighbours in $B$, at least one of which is still unused.
The bound on $R(H)$ can be derived as follows: consider any $2$-colouring of the complete graph on $2n^8$ vertices. Partition its vertices arbitrarily into two sets $A$ and $B$ of size $n^8$ and then apply the main statement to the bipartite graph between $A$ and $B$ formed by the colour class containing the most edges between $A$ and $B$ (this colour class contains at least $n^{16}/2\ge (n^8)^2/8$ such edges). \noproof
Note that the proof immediately shows that the bound on the maximum degree of $H$ can be relaxed: all we need is the property that every subgraph of $H$ has a vertex $b \in B'$ of low degree. In the proof of (the bipartite case of) Theorem~\ref{approxburrerdos}, this was exploited as follows: roughly speaking, one carries out the above argument twice (of course with different parameters from the above). The first time we consider a random subset $S \subseteq B$ and the second time we consider a smaller random subset $S' \subseteq T$.
For some types of sparse graphs $H$, one can give even more precise estimates for $R(H)$ than the ones which follow from the above results. For instance, Theorem~\ref{zhao} has an immediate application to the Ramsey number of trees. \begin{cor} \label{ramseytrees} There is an integer $n_0$ so that if $T_n$ is a tree on $n \ge n_0$ vertices then $R(T_n) \le 2n-2$. \end{cor} Indeed, to derive Corollary~\ref{ramseytrees} from Theorem~\ref{zhao}, consider a $2$-colouring of a complete graph $K_{2n-2}$ on $2n-2$ vertices, yielding a red graph $G_r$ and a blue graph $G_b$. Order the vertices $x_i$ according to their degree (in ascending order) in $G_r$. If $x_{n-1}$ has degree at least $n-1$ in $G_r$, then we can apply Theorem~\ref{zhao} to find a red copy of $T$ in $G_r$. If not, we can apply it to find a blue copy of $T$ in $G_b$. For even $n$, the bound is best possible (let $T$ be a star and let $G_b$ and $G_r$ be regular of the same degree) and proves a conjecture of Burr and Erd\H{o}s~\cite{BEtrees}. For odd $n$, they conjectured that the answer is $2n-3$. Similarly, the Koml\'os-S\'os conjecture (Conjecture~\ref{KomlosSos}) would imply that $R(T_n,T_m) \le n+m-2$, where $T_n$ and $T_m$ are trees on $n$ and $m$ vertices respectively. Of course, Corollary~\ref{ramseytrees} is not best possible for every tree. For instance, in the case when the tree is a path, Gerencs\'er and Gyarfas~\cite{GGpath} showed that $R(P_n,P_n)= \lfloor (3n-2)/2 \rfloor$. Further recent results on Ramsey numbers of paths and cycles (many of which rely on the Regularity lemma) can be found e.g.~in~\cite{3colourpath,FG}. Hypergraph versions (i.e.~Ramsey numbers of tight cycles, loose cycles and Berge-cycles) were considered e.g.~in~\cite{ramseyhypercycle1,ramseyhypercycle2, bergeramsey}.
\section{A sample application of the Regularity and Blow-up lemma} \label{sample}
In order to illustrate the details of the Regularity method for those not familiar with it, we now prove Theorem~\ref{KSS} for the case when $H:=C_4$ and when we replace the constant~$C$ in the minimum degree condition with a linear error term.
\begin{theorem}\label{thm:C4} For every $0<\eta<1/2$ there exists an integer~$n_0$ such that every graph~$G$ whose order $n\ge n_0$ is divisible by~$4$ and whose minimum degree is at least~$n/2+\eta n$ contains a perfect $C_4$-packing. \end{theorem}
(Note that Theorem~\ref{thm:C4} also follows from Theorems~\ref{bandwidth} and~\ref{abbasi}.) We start with the formal definition of ${\varepsilon}$-regularity. The \emph{density} of a bipartite graph $G=(A,B)$ with vertex classes~$A$ and~$B$
is $$d_G(A,B):=\frac{e_G(A,B)}{|A||B|}.$$ We also write $d(A,B)$ if this is unambiguous. Given ${\varepsilon}>0$, we say that~$G$ is \emph{${\varepsilon}$-regular} if for all sets $X\subseteq A$ and $Y\subseteq B$ with $|X|\ge {\varepsilon} |A|$ and
$|Y|\ge {\varepsilon} |B|$ we have $|d(A,B)-d(X,Y)|<{\varepsilon}$. Given $d\in[0,1)$, we say that $G$ is \emph{$({\varepsilon},d)$-superregular} if all sets $X\subseteq A$ and $Y\subseteq B$ with $|X|\ge {\varepsilon} |A|$ and
$|Y|\ge {\varepsilon} |B|$ satisfy $d(X,Y)>d$ and, furthermore, if $d_G(a)>d|B|$
for all $a\in A$ and $d_G(b)> d|A|$ for all $b\in B$. Moreover, we will denote the neighbourhood of a vertex $x$ in a graph $G$ by $N_G(x)$. Given disjoint sets~$A$ and~$B$ of vertices of~$G$, we write $(A,B)_G$ for the bipartite subgraph of~$G$ whose vertex classes are~$A$ and~$B$ and whose edges are all the edges of~$G$ between~$A$ and~$B$.
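As a simple illustration of how the definition of ${\varepsilon}$-regularity is typically used (this observation is standard and is included here only for the reader's convenience): if $(A,B)_G$ is ${\varepsilon}$-regular of density $d$, then all but fewer than ${\varepsilon}|A|$ vertices $a\in A$ satisfy $|N_G(a)\cap B|>(d-{\varepsilon})|B|$. Indeed, if the set $X$ of vertices of $A$ violating this had size at least ${\varepsilon}|A|$, then
$$d(X,B)=\frac{e_G(X,B)}{|X||B|}\le d-{\varepsilon},$$
contradicting the ${\varepsilon}$-regularity of $(A,B)_G$ (applied with $Y:=B$).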
Szemer\'edi's Regularity lemma~\cite{reglem} states that one can partition the vertices of every large graph into a bounded number of `clusters' so that most of the pairs of clusters induce ${\varepsilon}$-regular bipartite graphs. Proofs are also included in~\cite{BGraphTh} and~\cite{Diestel}. Algorithmic proofs of the Regularity lemma were given in~\cite{algRL,FK}. There are also several versions for hypergraphs (in fact, all the results in Section~\ref{hypercycle} are based on some hypergraph version of the Regularity lemma). The first so-called `strong' versions for $r$-uniform hypergraphs were proved in~\cite{gowers} and~\cite{Count,RSkok}. \begin{lemma}[Szemer\'edi~\cite{reglem}] \label{reglem} For all ${\varepsilon}>0$ and all integers $k_0$ there is an $N=N({\varepsilon},k_0)$ such that for every graph~$G$ on~$n\ge N$ vertices there exists a partition of $V(G)$ into $V_0,V_1,\dots,V_k$ such that the following holds: \begin{enumerate}
\item[$\bullet$] $k_0\le k\le N$ and $|V_0|\le {\varepsilon} n$,
\item[$\bullet$] $|V_1|=\dots=|V_k|=:m$, \item[$\bullet$] for all but ${\varepsilon} k^2$ pairs $1\le i<j\le k$ the graph $(V_i,V_j)_{G}$ is ${\varepsilon}$-regular. \end{enumerate} \end{lemma} Unfortunately, the constant $N$ appearing in the lemma is very large: Gowers~\cite{gowerstower} showed that it has at least a tower-type dependency on ${\varepsilon}$. We will use the following degree form of Szemer\'edi's Regularity lemma, which can easily be derived from Lemma~\ref{reglem}.
\begin{lemma}[Degree form of the Regularity lemma]\label{deg-reglemma} For all ${\varepsilon}>0$ and all integers $k_0$ there is an $N=N({\varepsilon},k_0)$ such that for every number $d\in [0,1)$ and for every graph~$G$ on~$n\ge N$ vertices there exist a partition of $V(G)$ into $V_0,V_1,\dots,V_k$ and a spanning subgraph $G'$ of $G$ such that the following holds: \begin{enumerate}
\item[$\bullet$] $k_0\le k\le N$ and $|V_0|\le {\varepsilon} n$,
\item[$\bullet$] $|V_1|=\dots=|V_k|=:m$, \item[$\bullet$] $d_{G'}(x)>d_G(x)-(d+{\varepsilon})n$ for all vertices $x\in G$, \item[$\bullet$] for all $i\ge 1$ the graph $G'[V_i]$ is empty, \item[$\bullet$] for all $1\le i<j\le k$ the graph $(V_i,V_j)_{G'}$ is ${\varepsilon}$-regular and has density either $0$ or $>d$. \end{enumerate} \end{lemma} The sets $V_i$ ($i\ge 1$) are called \emph{clusters}, $V_0$ is called the \emph{exceptional set} and~$G'$ is called the \emph{pure graph}.
\noindent {\bf Sketch of proof of Lemma~\ref{deg-reglemma}\ } To obtain a partition as in Lemma~\ref{deg-reglemma}, apply Lemma~\ref{reglem} with parameters $d,{\varepsilon}', k_0'$ satisfying $1/k_0',{\varepsilon}' \ll {\varepsilon},d,1/k_0$ to obtain clusters $V'_1,\dots,V'_{k'}$ and an exceptional set~$V'_0$. (Here $a \ll b < 1$ means that there is an increasing function $f$ such that all the calculations in the argument work as long as $a \le f(b)$.)
Let $m':=|V'_1|=\dots=|V'_{k'}|$. Now delete all edges between pairs of clusters which are not ${\varepsilon}'$-regular and move any vertices into $V'_0$ which were incident to at least ${\varepsilon} n/10$ (say) of these deleted edges. Secondly, delete all (remaining) edges between pairs of clusters whose density is at most $d+{\varepsilon}'$. Consider such a pair $(V'_i,V'_j)$ of clusters. For every vertex $x\in V'_i$ which has more than $(d+2{\varepsilon}')m'$ neighbours in~$V'_j$ mark all but $(d+2{\varepsilon}')m'$ edges between~$x$ and~$V'_j$. Do the same for the vertices in~$V'_j$ and more generally for all pairs of clusters of density at most $d+{\varepsilon}'$. It is easy to check that in total this yields at most ${\varepsilon}'n^2$ marked edges. Move all vertices into $V'_0$ which are incident to at least ${\varepsilon} n/10$ of the marked edges. Thirdly, delete any edges within the clusters. Finally, we need to make sure that the clusters have equal size again (as we may have lost this property during the deletion process). This can be done by splitting up the clusters into smaller subclusters (which contain almost all the vertices and have equal size) and moving a small number of further vertices into $V'_0$. A straightforward calculation shows that the new exceptional set $V_0$ has size at most ${\varepsilon} n$ as required. \noproof
The \emph{reduced graph~$R$} is the graph whose vertices are $1,\dots,k$ and in which~$i$ is joined to~$j$ whenever the bipartite subgraph $(V_i,V_j)_{G'}$ of~$G'$ induced by~$V_i$ and~$V_j$ is ${\varepsilon}$-regular and has density~$>d$. Thus~$ij$ is an edge of~$R$ if and only if~$G'$ has an edge between~$V_i$ and~$V_j$. Roughly speaking, the following result states that~$R$ almost `inherits' the minimum degree of~$G$.
\begin{prop}\label{prop:mindegR}
If $0<2{\varepsilon}\le d\le c/2$ and $\delta(G)\ge cn$ then $\delta(R)\ge (c-2d)|R|$. \end{prop} \removelastskip\penalty55
\noindent{\bf Proof. } Consider any vertex~$i$ of~$R$ and pick $x\in V_i$. Then every neighbour of~$x$ in~$G'$ lies in $V_0\cup \bigcup_{j\in N_R(i)} V_j$. Thus $(c-(d+{\varepsilon}))n\le d_{G'}(x)\le d_R(i)m +{\varepsilon} n$
and so $d_R(i)\ge (c-2d)n/m\ge (c-2d)|R|$ as required. \noproof
The proof of Proposition~\ref{prop:mindegR} is a point where it is important that~$R$ was defined using the graph~$G'$ obtained from Lemma~\ref{deg-reglemma} and not using the partition given by Lemma~\ref{reglem}.
In our proof of Theorem~\ref{thm:C4} the reduced graph~$R$ will contain a Hamilton path~$P$. Recall that every edge~$ij$ of~$P\subseteq R$ corresponds to the ${\varepsilon}$-regular bipartite subgraph $(V_i,V_j)_{G'}$ of~$G'$ having density~$>d$. The next result shows that by removing a small number of vertices from each cluster (which will be added to the exceptional set~$V_0$) we can guarantee that the edges of~$P$ even correspond to superregular pairs.
\begin{prop}\label{prop:superreg} Suppose that $4{\varepsilon} <d\le 1$ and that~$P$ is a Hamilton path in~$R$. Then every cluster~$V_i$ contains a subcluster $V'_i\subseteq V_i$ of size~$m-2{\varepsilon} m$ such that $(V'_i,V'_j)_{G'}$ is $(2{\varepsilon},d-3{\varepsilon})$-superregular for every edge $ij\in P$. \end{prop}
\removelastskip\penalty55
\noindent{\bf Proof. } We may assume that $P=1\dots k$. Given any~$i< k$, the definition of regularity implies that there are at most ${\varepsilon} m$ vertices $x\in V_i$ such that $|N_{G'}(x)\cap V_{i+1}|\le (d-{\varepsilon})m$. Similarly, for each $i>1$ there are at most ${\varepsilon} m$ vertices $x\in V_i$ such that
$|N_{G'}(x)\cap V_{i-1}|\le (d-{\varepsilon})m$. Let~$V'_i$ be a subset of size $m-2{\varepsilon} m$ of~$V_i$ which contains none of the above vertices (for all $i=1,\dots,k$). Then $V'_1,\dots, V'_k$ are as required. \noproof
Of course, in Proposition~\ref{prop:superreg} it is not important that~$P$ is a Hamilton path. One can prove an analogue whenever~$P$ is a subgraph of~$R$ of bounded maximum degree.
We will also use the following special case of the Blow-up lemma of Koml\'os, S\'ark\"ozy and Szemer\'edi~\cite{KSSblowup}. It implies that dense superregular pairs behave like complete bipartite graphs with respect to containing bounded degree graphs as subgraphs, i.e.~if the superregular pair has vertex classes~$V_i$ and~$V_j$ then any bounded degree bipartite graph on these vertex classes is a subgraph of this superregular pair. An algorithmic version of the Blow-up lemma was proved by the same authors in~\cite{algblowup}. A hypergraph version was recently proved by Keevash~\cite{keevash}.
\begin{lemma}[Blow-up lemma, bipartite case]\label{blowup} Given $d>0$ and $\Delta\in \mathbb{N}$, there is a positive constant ${\varepsilon}_0={\varepsilon}_0(d,\Delta)$ such that the following holds for every ${\varepsilon}<{\varepsilon}_0$. Given $m\in \mathbb{N}$, let~$G^*$ be an $({\varepsilon},d)$-superregular bipartite graph with vertex classes of size~$m$. Then~$G^*$ contains a copy of every subgraph~$H$ of~$K_{m,m}$ with $\Delta(H)\le \Delta$. \end{lemma}
\noindent {\bf Proof of Theorem~\ref{thm:C4}\ } We choose further positive constants~${\varepsilon}$ and~$d$ as well as $n_0\in\mathbb{N}$ such that $$1/n_0\ll {\varepsilon}\ll d\ll\eta<1/2.$$ (In order to simplify the exposition we will not determine these constants explicitly.) We start by applying the degree form of the Regularity lemma (Lemma~\ref{deg-reglemma}) with parameters~${\varepsilon}$, $d$ and $k_0:=1/{\varepsilon}$ to~$G$ to obtain clusters $V_1,\dots,V_k$, an exceptional set~$V_0$, a pure graph~$G'$ and a reduced graph~$R$.
Thus $k:=|R|$ and \begin{equation}\label{eq:minR} \delta(R)\ge (1/2+\eta -2d)k\ge (1+\eta)k/2 \end{equation} by Proposition~\ref{prop:mindegR}. So~$R$ contains a Hamilton path~$P$ (this follows e.g.~from Dirac's theorem). By relabelling if necessary we may assume that $P=1\dots k$. Apply Proposition~\ref{prop:superreg} to obtain subclusters~$V'_i\subseteq V_i$ of size $m-2{\varepsilon} m=:m'$ such that for every edge $i(i+1)\in P$ the bipartite subgraph $(V'_i,V'_{i+1})_{G'}$ of~$G'$ induced by~$V'_i$ and~$V'_{i+1}$ is $(2{\varepsilon},d/2)$-superregular. Note that the definition of ${\varepsilon}$-regularity implies that $(V'_i,V'_{j})_{G'}$ is still $2{\varepsilon}$-regular of density at least $d-{\varepsilon}\ge d/2$ whenever $ij$ is an edge of~$R$. We add all those vertices of~$G$ that are not contained in some~$V'_i$ to the exceptional set~$V_0$. Moreover, if~$k$ is odd then we also add all the vertices in~$V'_k$ to~$V_0$. We still denote the reduced graph by~$R$, its number of vertices by~$k$ and the exceptional set by~$V_0$. Thus
\COMMENT{Here we need that $m\le {\varepsilon} n$, ie $n/k_0\le {\varepsilon} n$. Ok if $k_0\ge 1/{\varepsilon}$.}
$$ |V_0|\le {\varepsilon} n+2{\varepsilon} n+m \le 4{\varepsilon} n. $$ Let~$M$ denote the perfect matching in~$P$. So~$M$ consists of the edges $12,34,\dots, (k-1)k$. The Blow-up lemma would imply that for every odd~$i$ the bipartite graph $(V'_i,V'_{i+1})_{G'}$ contains a perfect $C_4$-packing, provided that $2$ divides~$m'$. So we have already proved that~$G$ contains a $C_4$-packing covering \emph{almost} all of its vertices (this can also be easily proved without the Regularity lemma). In order to obtain a perfect $C_4$-packing, we have to incorporate the exceptional vertices.
To make it simpler to deal with divisibility issues later on, for every odd~$i$ we will now choose a set~$X_i$ of~7 vertices of~$G$ which we can put in any of~$V'_i$ and~$V'_{i+1}$ without destroying the superregularity of $(V'_i,V'_{i+1})_{G'}$. More precisely, (\ref{eq:minR}) implies that the vertices~$i$ and~$i+1$ of~$R$ have a common neighbour, $j$ say. Recall that both $(V'_i,V'_j)_{G'}$ and $(V'_{i+1},V'_j)_{G'}$ are $2{\varepsilon}$-regular and have density at least $d/2$. So almost all vertices in~$V'_j$ have at least $(d/2-2{\varepsilon})m'$ neighbours in both~$V'_i$ and~$V'_{i+1}$. Let~$X_i\subseteq V'_j$ be a set of~7 such vertices. Clearly, we may choose the sets $X_i$ disjoint for distinct odd~$i$. Remove all the vertices in $X_1\cup X_3\cup\dots\cup X_{k-1}=:X$ from the clusters they belong to. By removing at most
$|X|k\le 7k^2$ further vertices and adding them to the exceptional set we may assume that the subclusters $V''_i\subseteq V'_i$ thus obtained satisfy
$|V''_1|=\dots =|V''_k|=:m''$. (The vertices in~$X$ are not added to~$V_0$.) Note that we now have $$
|V_0|\le 4{\varepsilon} n+7k^2\le 5{\varepsilon} n. $$ Consider any vertex $x\in V_0$. Call an odd~$i$ \emph{good for~$x$} if~$x$ has at least~$\eta^2 m''$ neighbours in both~$V''_i$ and~$V''_{i+1}$ (in the graph~$G'$). Then the number~$g_x$ of good indices satisfies
$$ (1/2+\eta/2)n\le d_{G'}(x)-|V_0|-|X|\le 2g_x m''+(k/2-g_x)(1+\eta^2) m''\le 2g_x m''+ (1+\eta^2)n/2, $$ which shows that
\COMMENT{Here we need that $\eta<1/2$}
$g_x\ge \eta k/8=\eta |M|/4$. Since $|V_0|/(\sqrt{{\varepsilon}} m'')\le \eta |M|/4$, this implies that we can assign each $x\in V_0$ to an odd index~$i$ which is good for~$x$ in such a way that to each odd~$i$ we assign at most $\sqrt{{\varepsilon}} m''$ exceptional vertices. Now consider any matching edge $i(i+1)\in M$. Add each exceptional vertex assigned to~$i$ to~$V'_i$ or~$V'_{i+1}$ so that the sizes of the sets $V^*_i\supseteq V''_i$ and $V^*_{i+1}\supseteq V''_{i+1}$ obtained in this way differ by at most~1. It is easy to check that the bipartite subgraph $(V^*_i,V^*_{i+1})_{G'}$ of~$G'$ is still $(2\sqrt{{\varepsilon}}, d/8)$-superregular.
Since the vertices in~$X_i$ can be added to any of~$V^*_i$ and~$V^*_{i+1}$ without destroying the superregularity of $(V^*_i,V^*_{i+1})_{G'}$, we could now apply the Blow-up lemma to find a $C_4$-packing of $G'[V^*_i\cup V^*_{i+1}\cup X_i]$ which covers all but at most 3 vertices (and so altogether these packings would form a
$C_4$-packing of $G$ covering all but at most $3k$ vertices of~$G$). To ensure the existence of a perfect $C_4$-packing, we need to make $|V^*_i\cup V^*_{i+1}\cup X_i|$ divisible by~4 for every odd~$i$. We will do this for every $i=1,3,\dots,k-1$ in turn by shifting the remainders $\mod 4$
along the path~$P$. More precisely, suppose that $|V^*_1\cup V^*_{2}\cup X_1|\equiv a\mod 4$ where $0\le a<4$. Choose $a$ disjoint copies of~$C_4$, each having~1 vertex in~$V^*_2$, 2 vertices in~$V^*_3$ and~1 vertex in~$V^*_4$. Remove the vertices in these copies from the clusters they belong to and still denote the subclusters thus obtained by~$V^*_i$. (Each such copy of~$C_4$ can be found greedily using that both $(V^*_2,V^*_3)_{G'}$ and $(V^*_3,V^*_4)_{G'}$
are still $2\sqrt{{\varepsilon}}$-regular and have density at least $d/8$. Indeed, to find the first copy, pick any vertex $x\in V^*_2$ having at least $(d/8-2\sqrt{{\varepsilon}})|V^*_3|$ neighbours in~$V^*_3$. The regularity of $(V^*_2,V^*_3)_{G'}$ implies that almost all vertices in~$V^*_2$ can play the role of~$x$. The regularity of $(V^*_3,V^*_4)_{G'}$ now implies that its bipartite subgraph induced by the neighbourhood of~$x$ in~$V^*_3$ and by~$V^*_4$ has density at least $d/8-2\sqrt{{\varepsilon}}$. So there are many vertices $y\in V^*_4$ which have at least~2 neighbours in $N_{G'}(x)\cap V^*_3$. Then~$x$ and~$y$ together with~2 such neighbours form a copy of~$C_4$.)
Now $|V^*_1\cup V^*_2\cup X_1|$ is divisible by~4. Similarly, by removing at most~3 further copies of~$C_4$, each having~1 vertex in~$V^*_4$, 2 vertices in~$V^*_5$ and~1 vertex in~$V^*_6$ we can achieve that $|V^*_3\cup V^*_4\cup X_3|$ is divisible by~4. Since $n=|G|$ is divisible by~$4$ we can continue in this way to achieve that
$|V^*_i\cup V^*_{i+1}\cup X_i|$ is divisible by~4 for every odd~$i$.
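In computational terms, this shifting of remainders can be summarised by the following toy bookkeeping sketch (an illustration added here, not part of the proof; it only tracks block sizes): removing $a$ copies of~$C_4$ as above takes $a$ vertices from the current block and $3a$ vertices from the next one, so divisibility of $n$ by $4$ forces the final block to come out divisible by~$4$ as well.
\begin{verbatim}
def shift_remainders(block_sizes):
    """Toy bookkeeping for the mod-4 shifting step.  block_sizes[j] is
    |V*_{2j+1} u V*_{2j+2} u X_{2j+1}|; their total is assumed divisible by 4.
    Fixing block j removes a_j vertices from it (one per C_4-copy, taken from
    V*_{2j+2}) and 3*a_j vertices from block j+1 (two in V*_{2j+3}, one in
    V*_{2j+4}).  Returns the list of the a_j."""
    assert sum(block_sizes) % 4 == 0
    sizes = list(block_sizes)
    removed = []
    for j in range(len(sizes) - 1):
        a = sizes[j] % 4
        sizes[j] -= a
        sizes[j + 1] -= 3 * a
        removed.append(a)
    assert all(s % 4 == 0 for s in sizes)  # the last block is forced as well
    return removed
\end{verbatim}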
Recall that before we took out all these copies of~$C_4$, for every odd~$i$ the sizes of~$V^*_i$ and~$V^*_{i+1}$ differed by at most~1. Thus now these sizes differ (crudely) by at most~7. But every vertex $x\in X_i$ can be added to both~$V^*_i$ and~$V^*_{i+1}$ without destroying the superregularity. Add the vertices from~$X_i$ to~$V^*_i$ and~$V^*_{i+1}$ in such a way that the sets $V^\diamond_i\supseteq V^*_i$ and $V^\diamond_{i+1}\supseteq V^*_{i+1}$
thus obtained have equal size. (This size must be even since $|V^*_i\cup V^*_{i+1}\cup X_i|$ is divisible by~4.) It is easy to check that $(V^\diamond_i,V^\diamond_{i+1})_{G'}$ is still $(3\sqrt{{\varepsilon}},d/9)$-superregular. Thus we can apply the Blow-up lemma (Lemma~\ref{blowup}) to obtain a perfect $C_4$-packing in $(V^\diamond_i,V^\diamond_{i+1})_{G'}$. The union of all these packings (over all odd~$i$) together with the~$C_4$'s we have chosen before form a perfect $C_4$-packing of~$G$. \noproof
\section{Acknowledgment} We would like to thank Demetres Christofides, Nikolaos Fountoulakis and Andrew Treglown for their comments on an earlier version of this manuscript.
{\footnotesize
\obeylines\parindent=0pt Daniela K\"uhn \& Deryk Osthus School of Mathematics Birmingham University Edgbaston Birmingham B15 2TT UK {\it E-mail addresses}: {\tt \{kuehn,osthus\}@maths.bham.ac.uk} }
\end{document} |
\begin{document}
\title{Matching of observations} \author{Th\'eophile Caby,\\
LAMIA, Université des Antilles, Fouillole, Guadeloupe\\
[email protected]} \date{}
\maketitle \begin{abstract} We study the statistical distribution of the closest encounter between observations computed along different trajectories of a mixing dynamical system. At the limit of large trajectories, the distribution is of Gumbel type and depends on the length of the trajectories and on the Generalized Dimensions of the image measure. It is also modulated by an Extremal Index, for which we give a formula in the case of expanding maps of the interval and regular observations. We discuss the implications of these results for the study of physical systems. \end{abstract}
\section{Introduction} Recently, the problem of the shortest distance between orbits of a dynamical system has gained interest. Barros, Liao and Rousseau give the asymptotic behavior of the shortest distance between two orbits, and show its connection with sequence matching problems \cite{short}. In a later publication, these results were generalized to multiple orbits and observed orbits \cite{encoded}. Meanwhile, in a series of papers, the distributions of the times of first synchronization and of the closest encounter between several trajectories were also studied, using tools of Extreme Value Theory \cite{synchro,d2,dq}. This approach provides numerical methods to compute fractal dimensions and hyperbolicity indices associated with the system, but it requires working with actual trajectories, while physicists often only have access to observations collected along the trajectories of the studied system. In the present paper, we generalize this approach to such observed trajectories, and show that for observations with large enough rank, one can recover information on the underlying system.
\section{The general approach} Let us consider the dynamical system $(M,T,\mu)$. We take $T:M \to M$ to be a discrete transformation (it could be a discretized version of a flow) that leaves the probability measure $\mu$ invariant. To model the process of measurement, we consider the $C^1$ function $f:M \to J$ that we call the {\em observation}. Both the phase space $M$ and the observation space $J$ are compact metric spaces endowed with distances that we will both denote by $d$ to simplify notation. We generally take $J\subset\mathbb{R}^m$, as observational data are usually a collection of real numbers that can be arranged into vectors. Because we are interested in the statistical properties of observations, we need a measure that is supported in the observation space: the image measure $\mu_f$, defined by $$\mu_{f}(A)=\mu(f^{-1}(A)),$$ for all $A \subset J$ such that $f^{-1}(A)$ is $\mu$-measurable.\\
For our purpose, we define the following process:
$$ Y_i=-\log(\max_{j=2,\dots,q}d(f(T^ix_1),f(T^ix_j))),
$$
where $x_1,\dots,x_q \in M$ are $q$ starting points drawn independently from the invariant probability measure $\mu$.\\
Let $s\in \mathbb{R}$. To follow the usual procedure in Extreme Value Theory, we consider a sequence of thresholds $u_n(s)$ such that
\begin{equation}\label{tau} \mu_q(Y_0 > u_n(s)) \sim \frac{e^{-s}}{n}, \end{equation}
where $\mu_q$ is the product measure with support in $M^q$.\\
We notice that, since the $q$ trajectories are independent, we also have:
\begin{equation}\label{dqf} \begin{aligned} \mu_q(Y_0> u_n(s)) &=\int_J \mu_f (B(y,e^{-u_n}))^{q-1}d \mu_f(y)\\
& \sim e^{-u_nD^f_q(q-1)}, \end{aligned} \end{equation}
where $B(y,r)$ is the ball centered at $y\in J$ of radius $r$ and $D^f_q$ is the generalized dimension of order $q$ of the image measure, defined for $q\neq 1$ by
\begin{equation} D_q^f=\underset{r\to 0}{\lim} \frac{\log \int_{J} \mu_f(B(x,r))^{q-1}d\mu_f(x)}{(q-1)\log r}. \end{equation}
We restrict ourselves to physical situations where the limit defining this quantity exists.\\
To satisfy both scalings \ref{tau} and \ref{dqf}, we take
$$u_n(s)=\frac{\log n}{D_q^f(q-1)}+\frac{s}{D_q^f(q-1)}.$$
We now consider the variable
$$M_n(x_1,...,x_q)=\max \{Y_0,\dots,Y_{n-1}\}.$$
The distribution of this maximum gives us the hitting time statistics in the target set
$$S^q_n=\{(s_1,...,s_q)\in M^q, \max_{j=2,\dots,q}d(f(s_1),f(s_j)) < e^{-u_n}\},$$
that is when all the observations lie in the same ball of radius $e^{-u_n}$ for the first time. We now apply techniques from Extreme Value Theory, in particular the spectral theory of Keller and Liverani \cite{kl,k}, which yields the asymptotic distribution of the maximum for a large class of exponentially-mixing systems and for {\em regular} observations. The cumulative distribution of the maximum
$${F_n}(u_n) = \mu_q(\{(x_1,...,x_q) \in M^q \mbox{ s.t. } M_n(x_1,...,x_q) \leq u_n \})$$
converges, in the sense that
\begin{equation}
|F_n(u_n(s)) -\exp(-\theta^f_q e^{-s})| \underset{n\to\infty}\to 0. \end{equation} The term $\theta^f_q$ is called the extremal index, and is a number between $0$ and $1$ that quantifies the tendency of the process $(Y_i)$ to form clusters of high values. The spectral theory requires that the system be rapidly mixing and that the measure of the target sets $S^q_n$ goes to zero in a regular fashion. More detailed presentations of the theory and its domain of application can be found in various publications \cite{kl,k,synchro,d2,dq}. The theory is proven to be particularly applicable to expanding maps of the interval \cite{synchro} and certain well-behaved 2-dimensional systems \cite{bakersandro}. It is not our goal to give conditions of existence of the extreme value law that are more adapted to the present case, since these are in practice difficult to check in dimension more than 1 or 2. We will however provide numerical evidence of the convergence to the extreme value law. We will now discuss the values of the different parameters of the limit law, which can acquire physical meaning.
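Before turning to these parameters, the following Python sketch illustrates the numerical procedure used throughout the paper (an illustration added here; the map, the observation and all run lengths are arbitrary choices): it generates two independent observed orbits ($q=2$), forms the process $(Y_i)$ and collects block maxima of $M_n$, whose empirical distribution can then be fitted against the Gumbel-type law above.
\begin{verbatim}
import numpy as np

def observed_Y(T, f, x1, x2, n):
    """The process Y_i = -log d(f(T^i x1), f(T^i x2)) for q = 2 orbits."""
    Y = np.empty(n)
    for i in range(n):
        Y[i] = -np.log(abs(f(x1) - f(x2)) + 1e-300)   # guard against log(0)
        x1, x2 = T(x1), T(x2)
    return Y

# illustrative choices: a ternary shift and a smooth scalar observation
T = lambda x: (3.0 * x) % 1.0
f = lambda x: np.sin(2.0 * np.pi * x)

rng = np.random.default_rng(0)
block, n_blocks = 10_000, 100
maxima = np.empty(n_blocks)
for b in range(n_blocks):
    x1, x2 = rng.random(), rng.random()
    maxima[b] = observed_Y(T, f, x1, x2, block).max()

# M_n concentrates around (log n)/((q-1) D_q^f) with Gumbel-type fluctuations,
# so fitting `maxima` to a Gumbel law gives access to D_2^f and theta_2^f
print(maxima.mean(), maxima.std())
\end{verbatim}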
\section{The Generalized Dimensions of the image measure $D_q^f$}
We have seen in the preceding section that the $D_q^f$ spectrum modulates the synchronization properties of the observations. In fact, these quantities play a central role in many of the statistical properties of the system. We can show that if the $D_q^f$ spectrum is well defined, it also influences the behavior of diverse local dynamical quantities associated with the observations. It is well known that both return and hitting times of a chaotic system in small balls (in fact a re-scaled version of these quantities) have large deviations that are governed by the spectrum of generalized dimensions of the invariant measure \cite{dq,ldr}. A similar relation also holds for (finite-resolution) local dimensions (see \cite{ldr,dq} and \cite{these} for discussion). These large deviation relations are known to hold for real trajectories, but they also apply to the recurrence times of observations (and to the local dimension of the image measure). The rate function is now entirely determined by the spectrum of generalized dimensions of the image measure. To see this, one can perform a direct adaptation of the proofs in \cite{ldr} and \cite{dq} to the case of observations. In particular, hypothesis {\bf A-1} in \cite{dq} is satisfied for exponentially-mixing systems also when considering observed trajectories \cite{obsrec,jerobs}. Hypothesis {\bf A-2} is satisfied for ergodic systems for which the local dimensions are well defined and for typical observations. Indeed, in that case, Young's theorem \cite{young} ensures the exact dimensionality of the underlying system, and then Theorem 4.1 in \cite{hk} ensures the exact dimensionality of the image measure. Hypothesis {\bf A-4} concerns the existence and analyticity of the generalized dimensions of the image measure. We now investigate this matter.\\
In \cite{hk}, Hunt and Kaloshin give results concerning the effect of projection on the generalized dimensions for $1\le q \le 2$. In this range, they show that if $M$ is a compact subset of $\mathbb{R}^n$ and $J=\mathbb{R}^m$, and if the generalized dimension of order $q$, $D_q$($=D_q^{Id}$) of the invariant measure exists, then
\begin{equation}\label{hk} D_q^f=\min(D_q,m) \end{equation}
for a prevalent set of $C^1$ observables. See \cite{prev} for a review of prevalence, which is a notion of genericity for infinite dimensional spaces. Very little is known about the behavior of $D_q^f$ outside this range, although, still in \cite{hk}, the authors show that no analogous general result holds when $q >2$.\\
A first application of the theory is the computation of the correlation dimension ($D_2$) of a physical system using observations, which can be performed by fitting the empirical distribution of $M_n$ and extracting the desired parameter (methods of this type for the computation of fractal dimensions have been widely used in climate science lately \cite{nature,messori,d2,dq}). Indeed, from result \ref{hk}, $D_2^f=D_2$ whenever the observable $f$ is typical and has a large enough rank (larger than $D_2$). The latter can be obtained by recording the system simultaneously at different locations in space (by using gridded observables) or by considering the delay-coordinate observables used in embedding techniques \cite{takens}. Notice that it is here enough to take $m\ge D_2$ delay coordinates to access the correlation dimension $D_2$, and not the $\lceil2D_0\rceil$ required to reconstruct the attractor \cite{takens}.\\
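For concreteness, a delay-coordinate observable of rank $m$ can be assembled from a single scalar record as in the following sketch (an illustration added here; the time series is a mere placeholder).
\begin{verbatim}
import numpy as np

def delay_observable(series, m, lag=1):
    """Build m-dimensional delay-coordinate observations
    f(x_i) = (s_i, s_{i+lag}, ..., s_{i+(m-1)lag}) from a scalar record s."""
    n = len(series) - (m - 1) * lag
    return np.column_stack([series[j * lag : j * lag + n] for j in range(m)])

# toy usage: a placeholder scalar record observed along one orbit
s = np.random.default_rng(0).random(10_000)
obs = delay_observable(s, m=3)    # three delay coordinates
print(obs.shape)                  # (9998, 3)
\end{verbatim}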
To investigate the value of $D_q^f$ for $q>2$, we compare in figure \ref{fig:dqf} the numerical estimates of $D_q^f$ for different observations $f$ and the generalized dimensions associated with a motion on a Sierpinski gasket, for which explicit formulas are known (see \cite{dq} for the presentation of the map and explicit formulae). The estimates are obtained by fitting the empirical distribution of the maximum value taken by the process $(Y_i)$ over blocks of size $5\cdot10^4$. This procedure will also allow us to confirm the convergence of the distribution. The results are averaged over 10 runs, using trajectories of length $2\cdot10^8$. The error bars represent the standard deviations of the results. Functions $f_1$, $f_2$ are diffeomorphisms, which are known to preserve the generalized dimensions. Indeed, for these two functions, good agreement is found, so that the two curves are hardly distinguishable visually in the figure. These results suggest that this method of computation of $D_q$ can be completed and even improved by introducing a diffeomorphism computed along the orbit of the system, which may, if well chosen, speed up the convergence of the method and provide better estimates. Function $f_3$ is a highly oscillatory function, which gives points in the observation space many preimages, and this alters significantly the fine structure of the image measure. We do not know whether the disagreement with the $D_q$ spectrum is due to the method not being at convergence, or if it is a sign that the spectrum is not preserved under the action of $f_3$. However, the small disagreement for $q=2$ seems to imply that the method may not be at convergence, since the correlation dimension is preserved by typical observations. $f_4$ is not a diffeomorphism either, but has a simpler structure. For this function, the generalized dimensions seem to be preserved. $f_5$ is a degenerate function yielding values close to 1.\\
\begin{figure}
\caption{Numerical estimates of $D_q^f$ for different observations: $f_1=Id$, $f_2(x,y)=(2x+y,2y)$, $f_3(x,y)=(\sin(\frac1x),\cos(\frac1y))$, $f_4(x,y)=((x-0.5)^2,2y)$ and $f_5=(1,y^2+x)$. In dashed lines is the $D_q$ spectrum of the underlying system. Estimates are computed as described in the text.}
\label{fig:dqf}
\end{figure}
In \cite{obsrec}, we show that for the baker's map, a typical linear one-dimensional projection gives $D_q^f=1$ for all $q$. Overall, this result, along with our numerical computations, suggests that Hunt and Kaloshin's results may extend to $q>2$ for a certain class of measures and for some smooth observations.
\section{The Extremal Index $\theta^f_q$}
When we work with real trajectories (i.e. when $f=Id$), the Extremal Index $\theta_q$, and more specifically the quantity $$H_q=\frac{\log(1-\theta_q)}{1-q},$$ encodes the hyperbolic properties of the system (see \cite{dq} for a detailed review). In particular, $H_q$ as a function of $q$ is constant for maps with constant Jacobian and is close to the metric entropy of the system (its Lyapunov exponent in dimension 1). When we introduce an observation $f$, the use of the Extremal Index to quantify the rate at which nearby trajectories diverge becomes less relevant, partly due to the fact that two nearby points in observation space may have preimages that are far apart in the actual phase space of the system. Let us investigate this matter in more detail.\\
Keller and Liverani \cite{kl} provide a general formula for the Extremal Index of time series originating from dynamical systems. Applied to the present situation, and if the limits defining the different quantities exist, we have that
\begin{equation}\label{deftheta} \theta^f_q=1-\sum_{k=0}^{\infty}p_{k,q}, \end{equation}
where
\begin{equation}\label{p0} p_{0,q}=\lim_{n\to\infty}\frac{\mu_q(S^q_n \cap T^{-1} S^q_n)}{\mu_q(S^q_n)} \end{equation}
and for $k\ge 1$,
\begin{equation}\label{pk} p_{k,q}=\lim_{n\to\infty}\frac{\mu_q(S^q_n \cap \bigcap_{i=1}^k T^{-i}(S^q_n)^c \cap T^{-k-1} S^q_n)}{\mu_q(S^q_n)}. \end{equation}
In this general setting, obtaining a formula for $\theta^f_q$ is challenging, so let us restrict ourselves to the simpler case of expanding maps of the unit interval $I=[0,1]$. We define the following sets for $x\in I$: $$A_0(x)=\{y \in I,\ f(y)=f(x) \text{ and } f(Ty)=f(Tx)\}$$ and $$A_k(x)=\{y \in I,\ f(y)=f(x),\ f(T^iy)\neq f(T^ix) \text{ for } i=1,\dots,k, \text{ and } f(T^{k+1}y)=f(T^{k+1}x)\}.$$
\begin{proposition} Let $T$ be a piecewise $C^1$ expanding map of the unit interval $I=[0,1]$ admitting an absolutely continuous invariant measure $d\mu(x)=h(x)dx$. Let $f : I \to J \subset \mathbb{R}$ be piecewise $C^1$, finite-to-one and such that $f' \neq 0$ on $I$. Then, if
\begin{equation}\label{h1} \mu(\{x \in I,A_0(x)=\{x\} \})=1 \end{equation}
and, for all $k\ge 1$,
\begin{equation}\label{h2} \mu(\{x\in I, A_k(x)= \emptyset \})=1, \end{equation}
we have that
\begin{equation}\label{thetaq}
\theta^f_q=1-\frac{\int_I \frac{h(x)^q}{\max(|f'(x)|,|(f\circ T)'(x)|)^{q-1}}dx}{\int_I \sum_{(y_1,...y_{q-1})\in (f^{-1}\{f(x)\})^{q-1}} \prod_{i=1}^{q-1}\frac{h(y_i)}{|f'(y_i)|}h(x)dx}. \end{equation} \end{proposition}
For a given map $T$, assumptions \ref{h1} and \ref{h2} should be satisfied for a generic observation $f$; they fail only when $T$ and $f$ share particular symmetries or structural similarities. For example, $\mu(\{x \in I,\ A_0(x) = \{x\}\})\neq 1$ if the graphs of both $T$ and $f$ are symmetric with respect to the line $x=1/2$.\\
\begin{proof} We will write it for $q=2$. Following the lines of the proof in \cite{synchro} (where the case $f=Id$ is treated), and making use of the mean value theorem, we get: \begin{equation}\label{1} \begin{aligned}
\mu_2(S^2_n)&\sim \int_I \sum_{y\in f^{-1}\{f(x)\}} \mu(B(y,\frac{e^{-u_n}}{|f'(y)|})) d\mu(x)\\
&\sim 2e^{-u_n} \int_I \sum_{y\in f^{-1}\{f(x)\}} \frac{h(y)}{|f'(y)|} h(x)dx. \end{aligned} \end{equation}
We also have
\begin{equation}\label{2} \begin{aligned}
\mu_2(S^2_n \cap T^{-1} S^2_n) &\sim \int_I \sum_{y\in A_0(x)} \mu(\{z\in I, z\in B(y,\frac{e^{-u_n}}{|f'(y)|})\cap Tz \in B(Ty,\frac{e^{-u_n}}{|f'(Ty)|}\})d\mu(x)\\
&\sim \int_I \sum_{y\in A_0(x)} \mu(\{z\in I, |z-y| \le \frac{e^{-u_n}}{|f'(y)|} \cap T'(y)|y-z| \le \frac{e^{-u_n}}{|f'(Ty)|}\}) h(x)dx.\\
&= \int_I \sum_{y\in A_0(x)} \mu(\{z\in I, |z-y| \le \min(\frac{e^{-u_n}}{|f'(y)|},\frac{e^{-u_n}}{|T'(y)f'(Ty)|})\}) h(x)dx.\\
&\sim 2e^{-u_n} \int_I \sum_{y\in A_0(x)} \frac{h(y)h(x)}{\max(|f'(y)|,|(f\circ T)'(y)|)} dx.\\ \end{aligned} \end{equation}
By a similar reasoning, we get that for $k \ge 1$,
\begin{equation}\label{3}
\mu_2(S^2_n \cap \bigcap_{i=1}^k T^{-i}(S^2_n)^c \cap T^{-k-1} S^2_n) \sim 2e^{-u_n} \int_I \sum_{y\in A_k(x)} \frac{h(x)h(y)}{\max(|f'(y)|,|(f\circ T^{k+1})'(y)|)}dx.\\
Finally, combining eqs. \ref{deftheta},\ref{1}, \ref{2} and \ref{3}, we obtain
\begin{equation}
\theta^f_2=1 - \sum_{k=0}^{+\infty} \frac{\int_I \sum_{y\in A_k(x)} \frac{h(x)h(y)}{\max(|f'(y)|,|(f\circ T^{k+1})'(y)|)}dx}{\int_I \sum_{y\in f^{-1}\{f(x)\}} \frac{h(y)}{|f'(y)|} h(x)dx}. \end{equation}
This formula is still difficult to handle, but under condition \ref{h2}, we have that $p_{k,2}=0$ for $k>0$, and if moreover condition \ref{h1} holds, we obtain
\begin{equation}\label{thetafin} \begin{aligned} \theta^f_2&=1-p_{0,2}\\
&=1-\frac{\int_I \frac{h(x)^2}{\max(|f'(x)|,|(f\circ T)'(x)|)} dx}{\int_I \sum_{y\in f^{-1}\{f(x)\}} \frac{h(y)h(x)}{|f'(y)|}dx}. \end{aligned} \end{equation}
The same computation generalizes to any $q\ge 2$, which yields the desired result. \end{proof}
\begin{example} Let us take $Tx=2x \mod 1 $ and $$f(x)=\left\{
\begin{array}{ll}
2x & \mbox{ if } 0\le x\le 1/2 \\
3/2-x & \mbox{ if } 1/2<x\le 1.\\
\end{array} \right.$$
The function $f$ is not injective, satisfies conditions \ref{h1} and \ref{h2}, and computations are worked out quite easily, which constitutes a good test for our results. Applying formula \ref{thetaq}, we get $$\theta^f_q=1-p_{0,q}=1-\frac{2+2^{2-q}}{1+3^q}.$$ This result is confirmed by numerical experiments (see figure \ref{fig:thetaq}). We used the estimate $\hat{\theta}_{5}$ introduced in \cite{ei}. This estimate consists of evaluating, by means of Birkhoff sums, the first five $p_{k,q}$ terms introduced in formula \ref{pk} and subtracting them from 1. It requires fixing a high threshold $u$ (here we take $u$ equal to the 0.99999-quantile of the $Y_i$ distribution). As expected, we find that all the $p_{k,q}$ are 0 or very close to 0 for $k\ge1$. The result is displayed in figure \ref{fig:thetaq}. Our results are averaged over 10 runs, with trajectories of length $2\cdot10^7$. We see easily that in this example, $D_q^f=1$ for all $q$.\\
\end{example}
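A minimal numerical check of this example can be written in a few lines of Python (an illustration added here, using a simplified version of the estimator, namely $\hat\theta\approx 1-\hat p_{0,2}$; the run length, the threshold and the small perturbation of the doubling map are arbitrary choices, whereas the figure relies on the full estimate $\hat\theta_5$ with longer, averaged runs and a higher quantile).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def T(x):
    # doubling map; the tiny noise avoids the floating-point collapse of 2x mod 1
    return (2.0 * x + rng.uniform(0.0, 1e-12)) % 1.0

def f(x):
    return 2.0 * x if x <= 0.5 else 1.5 - x

n = 1_000_000
x1, x2 = rng.random(), rng.random()
Y = np.empty(n)
for i in range(n):
    Y[i] = -np.log(abs(f(x1) - f(x2)) + 1e-300)
    x1, x2 = T(x1), T(x2)

u = np.quantile(Y, 0.999)          # high threshold (the paper uses a higher one)
exc = Y > u
p0_hat = np.mean(exc[:-1] & exc[1:]) / np.mean(exc)   # estimate of p_{0,2}
print(1.0 - p0_hat)                # the formula predicts 1 - 3/10 = 0.7 for q = 2
\end{verbatim}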
\begin{figure}
\caption{Comparison between theory and computation for the $\theta_q^f$ spectrum of the system described in the text.}
\label{fig:thetaq}
\end{figure}
A general formula for higher-dimensional systems is beyond the scope of this paper, but we expect that, under conditions of `non-compatibility' between the dynamics and the observation analogous to conditions \ref{h1} and \ref{h2}, all the $p_{k,q}$ terms vanish for $k\ge1$. This hypothesis is verified by several numerical experiments, as we will see. The vanishing of the $p_k$ terms is particularly welcome when it comes to proving the existence of the extreme value law using more classical approaches in Extreme Value Theory \cite{book,freitascond1,freitascond2}.\\%as we shall see in the second part of this paper.
The presence of the derivative of the observation in formula \ref{thetaq} renders the interpretation of $\theta_q^f$ more difficult than in the case $f=Id$; however, we note two facts:
\begin{itemize}
\item For a given observation $f$, the larger the values of $|T'|$ over phase space, the larger the values of $\theta_q^f$, so this index can still, to some extent, quantify the hyperbolic properties of $T$. \item For a given map $T$, the more preimages under $f$ the points in the observation space have, the larger the denominator in equation \ref{thetaq}, and the larger $\theta_q^f$: oscillatory observations give higher values of the extremal index. \end{itemize}
We expect analogous properties to hold for higher dimensional systems. We now test that statement.\\
In the left panel of figure \ref{thetf}, we compare the estimates of $\theta_q^f$ for the 2-dimensional H\'enon system, defined by $T(x,y)=(1-ax^2+y,bx)$, with $a=1.4$ and different values of $b$ such that the system admits a strange attractor. The observation we take is $f(x,y)=\frac{x+y}{2}$. The Jacobian determinant has absolute value $b$. Indeed, we find that for this fixed choice of observation, the more the original system tends to separate trajectories (the higher the parameter $b$), the higher the values of $\theta_q^f$, even for lower dimensional projections. The estimates $\hat{p}_{k,q}$ of the $p_{k,q}$ terms, for $k>0$, are all zero or close to zero for all the observations that we considered, as conjectured before.\\
In the right panel of figure \ref{thetf}, we plot the estimates of the extremal index for the 2-dimensional H\'enon system (with the usual parameters $a=1.4$, $b=0.3$) and different observations. We observe that for one-to-one observations ($f_1$, $f_2$ and $f_3$), the $\theta_q^f$ spectrum remains low, although the form of the Jacobian can significantly impact the values of $\theta_q^f$. When the observation ceases to be one-to-one, the spectrum of extremal indices increases significantly (see the curve for $f_4$). This effect is even more pronounced for the highly oscillatory function $f_5$.
\begin{figure}
\caption{Left: Estimates of the $\theta^f_q$ spectrum computed for a H\'enon system with different parameters $b$ and for the observation $f(x,y)=\frac{x+y}{2}$. Right: Estimates of the $\theta^f_q$ spectrum computed for the H\'enon system ($b=0.3$) and different observations: $f_1=Id$, $f_2(x,y)=(100x+y,100y)$, $f_3(x,y)=(x,100y)$, $f_4(x,y)=(x^2,y^2)$, $f_5(x,y)=(\sin(1/x),\cos(1/y))$. For both panels, we used the estimate $\hat{\theta}_5$, with trajectories of length $10^6$ and a threshold value equal to the $0.999$ quantile of the $Y_i$ distribution. The error bars represent the standard deviation of the results over 10 runs.}
\label{thetf}
\end{figure}
\end{document} |
\begin{document}
\title{Non-malleable encryption of quantum information}
\author{Andris Ambainis} \affiliation{Department of Computer Science, University of Latvia, Raina bulv. 19, Riga, LV-1586, Latvia} \affiliation{Department of Combinatorics and Optimization \&{} Institute for Quantum Computing, University of Waterloo}
\author{Jan Bouda} \affiliation{Faculty of Informatics, Masaryk University, Botanick\'{a} 68a, 602\,00 Brno, Czech Republic}
\author{Andreas Winter} \affiliation{Department of Mathematics, University of Bristol, Bristol BS8 1TW, U.K.} \affiliation{Centre for Quantum Technologies, National University of Singapore,
3 Science Drive 2, Singapore 117543}
\date{3 February 2009}
\begin{abstract} We introduce the notion of \emph{non-malleability} of a quantum state encryption scheme (in dimension $d$): in addition to the requirement that an adversary cannot learn information about the state, here we demand that no controlled modification of the encrypted state can be effected.
We show that such a scheme is equivalent to a \emph{unitary 2-design} [Dankert \emph{et al.}], as opposed to normal encryption, which is a unitary 1-design. Our other main results include a new proof of the lower bound of $(d^2-1)^2+1$ on the number of unitaries in a 2-design [Gross \emph{et al.}], which lends itself to a generalization to approximate 2-designs.
Furthermore, while in prime power dimension there is a unitary 2-design with $\leq d^5$ elements, we show that there are always approximate 2-designs with $O(\epsilon^{-2} d^4 \log d)$ elements. \end{abstract}
\maketitle
\section*{INTRODUCTION}
The ordinary (and in terms of secret key length, optimal) encryption of quantum states on $n$ qubits is by applying a randomly chosen tensor product of Pauli operators (including the identity). This requires $2n$ bits of shared secret randomness, corresponding to the $4^n$ Pauli operators. (More generally, for states on a $d$-dimensional system, one can use the elements of the discrete Weyl group -- up to global phases -- of which there are $d^2$.) This is perfectly secure in the sense that the state the adversary can intercept is, without her knowing the key, always the maximally mixed state. For perfectly secure encryption with random unitaries, it was shown in~\cite{Ambainis+Mosca...-Priva_quant_chann:2000} that $2n$ bits of secret key are also necessary for $n$ qubits. The lower bound of $2$ bits of key per qubit continues to hold even for $\epsilon$-approximate encryption (up to expressions in $\epsilon$), but there it becomes relevant how the approximation is defined --- whether it randomizes entangled states or not [see Eq.~\eqref{eq:enc-scheme-2-correct} and \eqref{eq:enc-scheme-2-naive} below]. In~\cite{Hayden+Leung...-Rando_quant_state:2003} it was shown that in the latter case one gets away with $n+o(n)$ key bits for arbitrary $n$-qubit states; their construction was derandomized later in~\cite{Ambainis.Smith-Smallpseudo-randomfamilies-2004} and~\cite{Dickinson.Nayak-ApproximateRandomizationof-2006}.
However, even perfectly secure encryption allows for a different sort of intervention by the adversary: she can, without ever attempting to learn the message, change the plaintext by effecting certain dynamics on the encrypted state. Consider briefly the classical one-time pad, i.e. an $n$-bit message XORed with a random $n$-bit string: by flipping a bit of the ciphertext, an adversary can effectively flip any bit of the recovered plaintext. In the quantum case, due to the (anti-)commutation relations of the Pauli operators, by applying to the ciphertext (encrypted state) some Pauli, she forces that the decrypted state is the plaintext modified by that Pauli: for an $n$-qubit state $\ket{\varphi}$, any adversary's Pauli operator $Q$ and secret key Pauli $P_k$, the decrypted state is \[
P_k^\dagger Q P_k \ket{\varphi} = \zeta Q\ket{\varphi}, \] with some (unimportant) global phase $\zeta = \zeta(P,Q)$.
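The following NumPy sketch (a single-qubit toy illustration added here, not taken from the literature) makes both points explicit: averaging the ciphertext over the four Pauli keys yields the maximally mixed state, while an adversary who applies a fixed Pauli $Q$ to every ciphertext changes the decrypted state into $Q\ket{\varphi}$ (up to a phase), whatever the key.
\begin{verbatim}
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
paulis = [I2, X, Y, Z]

psi = np.array([1, 1j]) / np.sqrt(2)          # arbitrary plaintext qubit
rho = np.outer(psi, psi.conj())

# encryption: a uniformly random Pauli key; the adversary sees only the average
ciphertext_avg = sum(P @ rho @ P.conj().T for P in paulis) / 4
print(np.allclose(ciphertext_avg, I2 / 2))    # True: perfectly secure

# malleability: the adversary applies Q = X to the ciphertext; after decryption
# with the correct key the effective channel is conjugation by Q
Q = X
decrypted = sum(P.conj().T @ (Q @ (P @ rho @ P.conj().T) @ Q.conj().T) @ P
                for P in paulis) / 4
print(np.allclose(decrypted, Q @ rho @ Q.conj().T))   # True: plaintext flipped
\end{verbatim}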
This is evidently an undesirable property of an encryption scheme, and can be classically addressed e.g. by authenticating the message as well as encrypting it. Interestingly, in the above quantum message case, it was shown in~\cite{Barnum+Crepeau...-Authenticatio_of_q_mes:2002} that authenticating quantum messages is at least as expensive as encrypting them (it actually encrypts the message as well): one needs $2$ bits of shared secret key for each qubit authenticated, even in the approximate setting considered in~\cite{Barnum+Crepeau...-Authenticatio_of_q_mes:2002}. Classical non-malleable cryptosystems include both symmetric and asymmetric encryption schemes, bit commitment, zero knowledge proofs and others~\cite{DDN01}.
Here we will introduce a formal definition of perfect non-malleability of a quantum state encryption scheme (NMES), i.e. resistance against predictable modification of the plaintext, as well as of two notions of approximate encryption with approximate non-malleability. We show that a unitary non-malleable channel is equivalent to a unitary $2$-design in the sense of Dankert \emph{et al.}~\cite{Dankert.Cleve.ea-ExactandApproximate-2006}. We use this fact to design an exact ideal non-malleable encryption scheme requiring $5\log d$ bits of key. Also, the lower bound of Gross \emph{et al.}~\cite{Gross.Audenaert.ea:Evenlydistributedunitaries:-2007} for unitary $2$-designs applies to perfect NMES; we give a new proof of their result that at least $(d^2-1)^2+1$ unitaries are required, which also yields a more general lower bound of $(4-O(\epsilon)) \log d$ on the \emph{entropy} of an approximate unitary $2$-design. Finally, we demonstrate that approximate NMES (unitary 2-designs) exist which require only $4\log d+\log\log d+ O(\log 1/\epsilon)$ bits of key.
\section{General Model of Encryption}
Suppose Alice wants to send a secret quantum message to Bob, say an arbitrary state $\rho \in \cB(\cH)$, where $\cH$ is a Hilbert space of dimension $d$. For this purpose they will use an encryption scheme with a pre-shared secret key $K$ as follows. $K$ is distributed according to some probability distribution $p_K(k)$ and for each $k$ there is a pair of c.p.t.p.~(completely positive and trace preserving) maps \[
E_k:{\cal B}({\cal H}) \longrightarrow {\cal B}({\cal H}')
\text{ and }
D_k:{\cal B}({\cal H}') \longrightarrow {\cal B}({\cal H}) \] for encryption and decryption. The combined effect of en- and decryption, averaged over all keys, is described by a c.p.t.p. map (noisy quantum channel) $R:{\cal B}({\cal H}) \longrightarrow {\cal B}({\cal H})$, acting on operators on ${\cal H}$ as \[
R(\rho) = \sum_k p_K(k) D_k\bigl( E_k(\rho) \bigr). \] Similarly, for an adversary who intercepts the encrypted state but doesn't know the secret key, we have an average channel $R':{\cal B}({\cal H}) \longrightarrow {\cal B}({\cal H}')$, \[
R'(\rho) = \sum_k p_K(k) E_k(\rho). \] Loosely speaking, the quality of the scheme is described by two parameters: first, the reliability, i.e.~how close $R$ is to the ideal channel; secondly, the secrecy, i.e.~how close $R'$ is to a constant (meaning a map taking all input states to a fixed output state). In an ideal scheme, $R=\id$ and $R'=\text{const.}$, i.e.~there is a state $\xi_0$ on ${\cal H}'$, such that \begin{align}
\label{eq:ideal-enc-scheme-1}
\forall \rho &\quad R(\rho) = \rho, \\
\label{eq:ideal-enc-scheme-2}
\forall \rho &\quad R'(\rho) = \xi_0. \end{align}
The issue of approximate performance is a little bit tricky: whereas for the reliability of communication there is essentially one notion, namely, for $\delta > 0$, \begin{equation}
\label{eq:enc-scheme-1}
\forall \rho \quad \norm{\rho - R(\rho)}_1 \le \delta,
\tag{1'} \end{equation} there are two asymptotically radically different notions of secrecy. One is the ``naive'' one \begin{equation}
\label{eq:enc-scheme-2-naive}
\forall \rho \quad \bigl\| R'(\rho) - \xi_0 \bigr\|_1 \leq \epsilon
\tag{2'} \end{equation} that does not randomize entangled states when applied locally.
The ``correct'' (composable!) definition takes into account the possibility to apply $R'$ to part of an entangled state: \begin{equation}
\label{eq:enc-scheme-2-correct}
\forall \rho_{12} \quad
\bigl\| (R'\ox\id)\rho_{12} - \xi_0\ox\rho_2 \bigr\|_1 \leq \epsilon.
\tag{2''} \end{equation} We note that the two conditions coincide in the ideal case $\epsilon=0$.
The minimal key length required for (approximate) encryption reflects whether Eq.~\eqref{eq:enc-scheme-2-naive} or Eq.~\eqref{eq:enc-scheme-2-correct} is used. In the former case $\log d$ bits of key are necessary, and $\log d+o(\log d)$ bits of key are sufficient~\cite{Hayden+Leung...-Rando_quant_state:2003,Ambainis.Smith-Smallpseudo-randomfamilies-2004} to randomize quantum system of dimension $d$, while in the latter case the key length essentially coincides with the exact encryption case and equals $(2-O(\epsilon))\log d$~\cite{Ambainis+Mosca...-Priva_quant_chann:2000}.
\section{Non-malleability}
There is, of course, a simple scheme of encryption that implements an ideal scheme: on $n$ qubits, use a key of length $2n$ and apply an independent random Pauli operator to each qubit. (More generally, in dimension $d$, the key identifies one of the $d^2$ discrete Weyl operators made up of the basis shift and phase shift operators.) The adversary evidently cannot see any information about the plaintext state, but she can use the ciphertext in another way: by modulating the ciphertext with an arbitrary Pauli operation, she can effectively implement this Pauli transformation on the plaintext state.
We shall show that this is not at all a necessary feature of any encryption scheme. There are, however, always two possible actions for the adversary (and their arbitrary convex combination). Namely, not to interfere at all, resulting in correct decryption of the state $\rho$ sent; or interception of the ciphertext and its replacement by a state $\eta_0$ on ${\cal H}'$, resulting in Bob always decrypting the constant state $\rho_0 = \sum_k p_K(k) D_k(\eta_0)$. In other words, assuming the adversary implements an arbitrary quantum channel, i.e.~a completely positive and trace non-increasing ({c.p.t.$\leq${}}) map $\Lambda:{\cal B}({\cal H}') \longrightarrow {\cal B}({\cal H}')$, the class of \emph{effective channels} on the plaintext she can realize, namely all channels \begin{equation*}\begin{split}
{\widetilde\Lambda}: {\cal B}({\cal H}) &\longrightarrow {\cal B}({\cal H}) \text{ s.t.}\\
\rho &\longmapsto \sum_k p_K(k) D_k\Bigl( \Lambda\bigl( E_k(\rho) \bigr) \Bigr), \end{split}\end{equation*} will include all convex combinations of the identity (up to approximation as specified by $\epsilon$) and the completely forgetful channels $\langle \rho_0 \rangle$ mapping all inputs to the state $\rho_0 = \sum_k p_K(k) D_k(\eta_0)$, with arbitrary $\eta_0$.
We call an encryption scheme \emph{(perfectly) non-malleable}, if these are the only effective channels the adversary can realize, i.e.~if for every $\Lambda$, ${\widetilde\Lambda}$ is in the semi-linear span of $\id$ and the $\langle \rho_0 \rangle$, \begin{equation}
\label{eq:ideal-tres-3}
{\widetilde\Lambda} \in {\cal C} :=
{\operatorname{semi-lin}\,}\left( \{\id\} \cup
\left\{ \langle \rho_0 \rangle : \rho \mapsto \rho_0
= \sum_k p_K(k) D_k(\eta_0) \right\}
\right), \end{equation} with ${\operatorname{semi-lin}\,}$ being the semi-linear hull, i.e. with any family of elements it also contains all their linear combinations, subject to complete positivity of the resulting operator. [Clearly, in the above the convex hull can be realized by an adversary; however, in general the full semi-linear hull is accessible; e.g.~for the Haar measure on the unitary group -- and infinite key -- the only constant channel is $\langle \tau \rangle$, with the maximally mixed state $\tau = \frac{1}{d}\1$, cf.~the beginning of the next section, in particular eqs.~(\ref{eq:channels})--(\ref{eq:semilinear-example}). On the other hand, any traceless unitary applied by the adversary results in the effective channel ${\widetilde\Lambda}(\rho) = \frac{1}{d^2-1}\left(d^2\tau-\rho\right)$.]
Also, a word on why we demand this for all {c.p.t.$\leq${}}\ maps, which is a strictly larger class than c.p.t.p.: note that the adversary could implement an \emph{instrument}~\cite{DaviesLewis:operational}, which is a resolution of a c.p.t.p.~map into {c.p.t.$\leq${}}\ ones. One of them will act randomly, but the adversary can learn which one, so could effectively correlate herself with the effective channel ${\widetilde\Lambda}$.
As before, this is to be understood up to approximations: for every effective channel ${\widetilde\Lambda}$ there is $\Theta \in {\cal C}$ such that \begin{equation}
\label{eq:tres-3-naive}
\forall \rho \quad
\bigl\| {\widetilde\Lambda}(\rho) - \Theta(\rho) \bigr\|_1 \leq \theta.
\tag{3'} \end{equation} However, again the ``correct'' (composable) definition has to take into account the possibility of applying the effective channels to part of an entangled state: \begin{equation}
\label{eq:tres-3-correct}
\forall \rho_{12} \quad
\bigl\| ({\widetilde\Lambda}\ox\id)\rho_{12}
- (\Theta\ox\id)\rho_{12} \bigr\|_1 \leq \theta.
\tag{3''} \end{equation} We call the scheme \emph{strictly non-malleable} if Eq.~(\ref{eq:ideal-tres-3}) or (\ref{eq:tres-3-naive}) or (\ref{eq:tres-3-correct}) holds for some set ${\cal C}' = {\operatorname{semi-lin}\,}\bigl\{ \id, \langle\rho_0\rangle \bigr\}$ instead of ${\cal C}$. (In other words, there is essentially only one constant channel in ${\cal C}$, independent of $\eta_0$.) Perfect non-malleability then corresponds to $\theta = 0$, in either Eq.~(\ref{eq:tres-3-naive}) or (\ref{eq:tres-3-correct}).
\section{Main Results}
In this paper we restrict ourselves to the ``minimal'' case, when ${\cal H}' = {\cal H}$ is a $d$-dimensional Hilbert space, and to perfect transmission, i.e.~Eq.~(\ref{eq:ideal-enc-scheme-1}). This entails that $E_k$ is conjugation by a unitary $U_k$, while $D_k$ is simply the inverse, i.e.~conjugation by $U_k^\dagger$: \[
E_k(\rho) = U_k \rho U_k^\dagger,\quad
D_k(\sigma) = U_k^\dagger \sigma U_k. \]
Since convex combinations of unitary conjugation channels are unital, in an encryption scheme all input states are encrypted as the maximally mixed state $\xi_0 = \tau := \frac{1}{d}\1$ in Eqs.~(\ref{eq:ideal-enc-scheme-2}), (\ref{eq:enc-scheme-2-naive}) and (\ref{eq:enc-scheme-2-correct}). (For a more general discussion see~\cite{BoudaZiman:Optimalityofprivate-2007}.) This means that the adversary can always implement channels \begin{equation}
\label{eq:channels}
\Theta \in \cC' = {\operatorname{semi-lin}\,}\{ \id, \langle \tau \rangle \}, \end{equation} where $\langle \tau \rangle$ is the completely depolarizing channel. Conversely, we demand that these are the only ones she can achieve: for every {c.p.t.$\leq${}}\ map $\Lambda$, we demand that the effective channel ${\widetilde\Lambda} \in \cC'$, with \[
{\widetilde\Lambda}(\rho) = \sum_k p_K(k) U_k^\dagger \bigl( \Lambda(U_k \rho U_k^\dagger) \bigr) U_k. \]
This can be conveniently re-expressed using the Choi-Jamio\l{}kowski operators~\cite{Choi:matrix,Jamiolkowski-Lineartransformationswhich-1972}: for the maximally entangled state $\Phi_d = \frac{1}{d}\sum_{i,j=0}^{d-1}\ket{ii}\!\bra{jj}$ on two systems labelled $1$ and $2$, let $\omega = J_\Lambda := (\Lambda \otimes \id)\Phi_d$. Note that $\tr J_\Lambda \leq 1$ and that $\Lambda$ can be recovered from the Choi-Jamio\l{}kowski operator as follows: \begin{equation}
\label{eq:CJ-inverse}
\Lambda(\rho) = d \tr_2\bigl( (\1\otimes\rho^\top)J_\Lambda \bigr), \end{equation} where $\rho^\top$ is the transpose operator of $\rho$ with respect to the basis $\{ \ket{i} \}_{i=0}^{d-1}$. The image of the set $\cC'$ under the Choi-Jamio\l{}kowski isomorphism is the set of bipartite positive operators \begin{equation}
\label{eq:semilinear-example}
(\cC' \otimes \id)\Phi_d = {\operatorname{semi-lin}\,}\{ \Phi_d, \tau\otimes\tau \}
= \RR_{\geq 0}\Phi_d + \RR_{\geq 0}(\1-\Phi_d) =: \cI, \end{equation} which are (up to normalization) just the so-called \emph{isotropic states}. Note that these are exactly the (semidefinite) operators invariant under conjugation with $U\ox\cocon{U}$, and that integration over the Haar measure ${\rm d}U$ implements the projection into $\cI$: for every operator $X$, \begin{equation}
\label{eq:iso-twirl}
\int {\rm d}U (U\ox\cocon{U}) X (U\ox\cocon{U})^\dagger = \alpha\Phi_d + \beta(\1-\Phi_d),
\text{ with }
\alpha = \tr X\Phi_d,\ \beta=\frac{1}{d^2-1}\tr X(\1-\Phi_d). \end{equation} The c.p.t.p.~mapping from $X$ to the above average is known as the $U\ox\cocon{U}$-twirl, denoted $\cT_{U\ox\cocon{U}}$.
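The projection property in Eq.~(\ref{eq:iso-twirl}) is easy to check numerically; the sketch below (an illustration added here, with an arbitrary dimension, test operator and sample size) approximates the $U\ox\cocon{U}$-twirl by Monte Carlo over Haar-random unitaries and compares the result with $\alpha\Phi_d + \beta(\1-\Phi_d)$.
\begin{verbatim}
import numpy as np

def haar_unitary(d, rng):
    """Haar-random unitary from the QR decomposition of a Ginibre matrix."""
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix the phases of the R factor

d, n_samples = 3, 20000
rng = np.random.default_rng(0)

phi = np.zeros((d * d, 1), dtype=complex)
for i in range(d):
    phi[i * d + i, 0] = 1.0 / np.sqrt(d)
Phi = phi @ phi.conj().T                           # maximally entangled state Phi_d

X = rng.standard_normal((d * d, d * d)) + 1j * rng.standard_normal((d * d, d * d))
X = X @ X.conj().T
X = X / np.trace(X).real                           # arbitrary normalised test operator

twirl = np.zeros_like(X)
for _ in range(n_samples):
    U = haar_unitary(d, rng)
    W = np.kron(U, U.conj())
    twirl += W @ X @ W.conj().T
twirl /= n_samples

alpha = np.trace(X @ Phi).real
beta = np.trace(X @ (np.eye(d * d) - Phi)).real / (d * d - 1)
target = alpha * Phi + beta * (np.eye(d * d) - Phi)
print(np.linalg.norm(twirl - target))              # small, up to Monte Carlo error
\end{verbatim}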
On the other hand, exploiting the symmetry $\Phi_d = (U\ox\cocon{U})\Phi_d(U\ox\cocon{U})^\dagger$, we can write the Choi-Jamio\l{}kowski operator of the effective channel, \[\begin{split}
\widetilde\omega &= ({\widetilde\Lambda} \otimes \id)\Phi_d \\
&= \sum_k p_K(k) (U_k\ox\1)^\dagger
\Bigl[ (\Lambda\ox\id)
\bigl( (U\ox\1)\Phi_d(U\ox\1)^\dagger\bigr) \Bigr]
(U_k\ox\1) \\
&= \sum_k p_K(k) (U_k\ox\1)^\dagger
\Bigl[ (\Lambda\ox\id)
\bigl( (\1\ox U_k^\top)\Phi_d(\1\ox U_k^\top)^\dagger\bigr) \Bigr]
(U_k\ox\1) \\
&= \sum_k p_K(k) (U_k\ox\cocon{U_k})^\dagger
\bigl[ (\Lambda\ox\id) \Phi_d \bigr]
(U_k\ox\cocon{U_k}) \\
&= \sum_k p_K(k) (U_k\ox\cocon{U_k})^\dagger \omega (U_k\ox\cocon{U_k})
=: \cT(\omega), \end{split}\] where $\cT$ is manifestly a c.p.t.p.~map. The condition that $\{ p_K(k), U_k \}$ forms a perfect NMES is now concisely expressed as $\cT = \cT_{U\ox\cocon{U}}$.
This is precisely the condition for a so-called \emph{unitary 2-design} \cite{Dankert.Cleve.ea-ExactandApproximate-2006}, see also \cite{Gross.Audenaert.ea:Evenlydistributedunitaries:-2007}. Note that modulo a partial transpose, the $U\ox\cocon{U}$-twirl is equivalent to the more familiar $U\ox U$-twirl \[
\cT_{U\ox U}(X) = \int {\rm d}U (U\ox U) X (U\ox U)^\dagger
= \alpha F + \beta (\1-F), \] with the swap (or flip) operator $F = \sum_{i,j=0}^{d-1} \ket{ij}\!\bra{ji}$, mapping density operators to \emph{Werner states}~\cite{Werner-QuantumStateswith-1989}.
Thus we have proved, \begin{theorem}
\label{thm:TRES-is-2design}
Every perfect non-malleable encryption scheme is a unitary $2$-design.
\qed \end{theorem}
\begin{corollary}
\label{cor:TRES-implies-encryption}
Any perfect non-malleable encryption scheme, i.e., an ensemble of
unitaries $\{ p_K(k), U_k \}$ satisfying ${\widetilde\Lambda} \in \cC'$, is automatically
an ideal encryption scheme, i.e.~Eq.~(\ref{eq:ideal-enc-scheme-2})
holds. \end{corollary} \begin{proof}
By Theorem~\ref{thm:TRES-is-2design} a perfect NMES is a unitary
$2$-design. But then it is automatically a unitary $1$-design,
meaning that for all $\rho$, $\sum_k p_K(k) U_k \rho U_k^\dagger = \tau$,
which is precisely Eq.~(\ref{eq:ideal-enc-scheme-2}). \end{proof}
\begin{theorem}
\label{thm:lowerbounds}
Every perfect non-malleable encryption scheme $\{ p_K(k), U_k \}$
requires at least $(d^2-1)^2+1$ unitaries. Furthermore, every
$\theta$--NMES as in Eq.~(\ref{eq:tres-3-correct}) with $\theta \leq 1/e$
satisfies
\[
H(p_K) \geq H_2\left(\frac{1}{d^2}\right) + 2\left(1-\frac{1}{d^2}\right)\log(d^2-1)
- 4\theta\log d - H_2(\theta)
\geq (4-O(\theta)) \log d,
\]
where $H_2(x) = -x\log x - (1-x)\log(1-x)$ is the binary entropy. \end{theorem}
\begin{remark}
In the light of Theorem~\ref{thm:TRES-is-2design}, the first part amounts to
a demonstration that $2$-designs have to have at least $(d^2-1)^2+1$ unitaries;
this was proved by Gross
\emph{et al.}~\cite{Gross.Audenaert.ea:Evenlydistributedunitaries:-2007},
but we give a different, direct, proof below.
It has been conjectured that in fact the better lower
bound $d^2(d^2-1)$ holds in general -- which is true for
so-called ``Clifford twirls'', and tight in some
dimensions~\cite{Chau:UnconditionallySecureKey-2005,Gross.Audenaert.ea:Evenlydistributedunitaries:-2007}. \end{remark}
\begin{proof} Consider the Choi-Jamio\l{}kowski operator of $\cT$, labeling the systems $1$, $2$, $1'$ and $2'$, and with the maximally entangled state understood between systems $12$ and $1'2'$: \[
\Omega_{U\ox\cocon{U}} :=
(\cT_{U\ox\cocon{U}}^{12} \ox \id^{1'2'})\Phi_{d^2}
= \frac{1}{d^2}\Phi_d^{12} \ox \Phi_d^{1'2'}
+ \frac{1}{d^2(d^2-1)} (\1-\Phi_d)^{12} \ox (\1-\Phi_d)^{1'2'}. \] On the other hand, for the first part of the theorem this has to be equal to \[
\Omega :=
(\cT^{12}\ox\id^{1'2'})\Phi_{d^2}
= \sum_{k=1}^N p_K(k) (U_k^1 \ox \cocon{U}_k^2 \ox \1^{1'2'})
\Phi_{d^2}
(U_k^1 \ox \cocon{U}_k^2 \ox \1^{1'2'})^\dagger. \] Comparing ranks of the two right-hand side expressions reveals the claim immediately: $\Omega$ is a sum of $N$ rank-one operators, while $\Omega_{U\ox\cocon{U}}$ has rank $1+(d^2-1)^2$, hence $N \geq (d^2-1)^2+1$.
For the entropy statement in the approximate case, we note that by Eq.~(\ref{eq:tres-3-correct}),
$\| \Omega - \Omega_{U\ox\cocon{U}} \|_1 \leq \theta$, so by Fannes' inequality~\cite{Fannes:inequality} and Schur concavity of the entropy~\cite{Wehrl:review}, \[
H(p_K) \geq S(\Omega) \geq S(\Omega_{U\ox\cocon{U}}) - \theta\log d^4 - H_2(\theta), \] and we are done. \end{proof}
\begin{theorem}[Chau~\cite{Chau:UnconditionallySecureKey-2005},
Gross \emph{et al.}~\cite{Gross.Audenaert.ea:Evenlydistributedunitaries:-2007}]
\label{thm:d5}
If $d=p^n$ is a prime power, then
there exists a perfect non-malleable encryption scheme with
$d^5-d^3$ unitaries, meaning that the key length is $\leq 5 \log d$.
In fact, such a scheme is obtained as the uniform ensemble
over a particular subgroup of the Clifford group (i.e.\ the
normalizer) of the $n$-fold tensor power ${\cal P}_p^{\ox n}$ of the
Heisenberg-Weyl (aka generalised Pauli) group, where
${\cal P}_p$ is the group generated by the discrete Weyl operators
\[
X_p = \sum_{j=0}^{p-1} \ket{j\!+\!1 \mod p}\!\bra{j},\quad
Z_p = \sum_{k=0}^{p-1} e^{2\pi i k/p}\proj{k}.
\] \end{theorem} \begin{proof}
Apart from Chau~\cite{Chau:UnconditionallySecureKey-2005} see
Gross \emph{et al.}~\cite{Gross.Audenaert.ea:Evenlydistributedunitaries:-2007},
as well as the crisp presentation of Grassl~\cite{Grassl:6-SIC}. \end{proof}
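For concreteness (an illustrative sketch added here, not part of the cited constructions), the discrete Weyl operators can be written down directly; the script below builds $X_p$ and $Z_p$ for a prime $p$ and checks the standard commutation relation $Z_pX_p = e^{2\pi i/p}\,X_pZ_p$ together with $X_p^p=Z_p^p=\1$.
\begin{verbatim}
import numpy as np

def weyl_ops(p):
    """Discrete Weyl (generalised Pauli) operators on C^p."""
    X = np.zeros((p, p), dtype=complex)
    for j in range(p):
        X[(j + 1) % p, j] = 1.0          # X|j> = |j+1 mod p>
    omega = np.exp(2j * np.pi / p)
    Z = np.diag([omega**k for k in range(p)])   # Z|k> = omega^k |k>
    return X, Z

p = 5
X, Z = weyl_ops(p)
omega = np.exp(2j * np.pi / p)
print(np.allclose(Z @ X, omega * X @ Z))                    # Weyl relation
print(np.allclose(np.linalg.matrix_power(X, p), np.eye(p)))  # X^p = 1
print(np.allclose(np.linalg.matrix_power(Z, p), np.eye(p)))  # Z^p = 1
\end{verbatim}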
\begin{remark}
We note that in even prime power dimension, the cardinality
of the subgroup can be reduced to $(d^5-d^3)/8$.
Furthermore, Chau~\cite{Chau:UnconditionallySecureKey-2005} showed
that for several small dimensions the minimum $d^4-d^2$ is attainable;
see also Gross \emph{et al.}~\cite{Gross.Audenaert.ea:Evenlydistributedunitaries:-2007}
for another example of $2(d^4-d^2)$. \end{remark}
\begin{theorem}
\label{thm:approx-2-design}
For $0< \theta \leq 1/2$ there exists a $\theta$-NMES
with $O(\theta^{-2}d^4\log d)$ unitaries, i.e.~with key
requirement of $4 \log d + \log\log d + O\left(\log\frac{1}{\theta}\right)$
bits.
In fact, Eq.~(\ref{eq:tres-3-correct}) holds in the stronger form
\begin{equation}
\label{eq:tres-3-strong}
(1-\theta) \Theta \leq {\widetilde\Lambda} \leq (1+\theta) \Theta. \tag{$3^*$}
\end{equation} \end{theorem} \begin{proof} Start from any exact unitary $2$-design, such as the unitary group with Haar measure, or the Clifford group or one of its admissible subgroups. We shall select $U_1,\ldots,U_N$ independently at random from that chosen $2$-design, and show that Eq.~(\ref{eq:tres-3-strong}) is true with high probability as soon as $N \gg \theta^{-2}d^4\log d$; this of course implies that there exists a particular selection of an ensemble $\{ 1/N, U_k \}_{k=1}^N$ satisfying (\ref{eq:tres-3-strong}).
In fact, it is sufficient to show that for $\cT(\omega) = \frac{1}{N} \sum_{k=1}^N (U_k\ox\cocon{U}_k) \omega (U_k\ox\cocon{U}_k)^\dagger$, \[
(1-\theta)\cT_{U\ox\cocon{U}} \leq \cT \leq (1+\theta)\cT_{U\ox\cocon{U}}, \] which in turn is equivalent to the corresponding statement for the Choi-Jamio\l{}kowski states -- compare Eq.~(\ref{eq:CJ-inverse}): \[
(1-\theta)\Omega_{U\ox\cocon{U}} \leq \Omega \leq (1+\theta)\Omega_{U\ox\cocon{U}}, \] where \begin{align*}
\Omega_{U\ox\cocon{U}} &= (\cT_{U\ox\cocon{U}}^{12} \ox \id^{1'2'})\Phi_{d^2}
= \frac{1}{d^2}\Phi_d^{12} \ox \Phi_d^{1'2'}
+ \frac{1}{d^2(d^2-1)} (\1-\Phi_d)^{12} \ox (\1-\Phi_d)^{1'2'}, \\
\Omega &= (\cT^{12}\ox\id^{1'2'})\Phi_{d^2}
= \frac{1}{N} \sum_{k=1}^N (U_k^1 \ox \cocon{U}_k^2 \ox \1^{1'2'})
\Phi_{d^2}
(U_k^1 \ox \cocon{U}_k^2 \ox \1^{1'2'})^\dagger. \end{align*}
Now $\Omega$ is a random variable, in fact an average of $N$ independent, identically distributed terms \(
X_k := (U_k^1 \ox \cocon{U}_k^2 \ox \1^{1'2'}) \Phi_{d^2} (U_k^1 \ox \cocon{U}_k^2 \ox \1^{1'2'})^\dagger \) with expectation $\EE X_k = \EE \Omega = \Omega_{U\ox\cocon{U}}$. All $X_k$ are bounded between $0$ and $\1$, so the technical result from~\cite{AhlswedeWinter-ID} applies, the \emph{operator Chernoff bound}, yielding (with a universal constant $c>0$) \[
\Pr\bigl\{ (1-\theta)\Omega_{U\ox\cocon{U}} \leq \Omega \leq (1+\theta)\Omega_{U\ox\cocon{U}} \bigr\}
\geq 1 - 2d^4 e^{-c \theta^2 N/d^4}, \] which implies the claim. \end{proof}
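The probabilistic argument can be illustrated numerically (a rough sketch added here, not the proof): for small $d$ one can sample $N$ Haar-random unitaries, form $\Omega$ as above, and watch $\|\Omega-\Omega_{U\ox\cocon{U}}\|_\infty$ decay at the Monte Carlo rate $O(N^{-1/2})$. The theorem's two-sided operator inequality is a multiplicative refinement of this, relative to the smallest nonzero eigenvalue $1/(d^2(d^2-1))$ of $\Omega_{U\ox\cocon{U}}$.
\begin{verbatim}
import numpy as np
from scipy.stats import unitary_group

d = 2
D = d * d                      # dimension of systems 12 and 1'2'

def max_ent(dim):
    """Projector onto the maximally entangled state on dim x dim."""
    v = np.zeros(dim * dim, dtype=complex)
    v[[i * dim + i for i in range(dim)]] = 1.0 / np.sqrt(dim)
    return np.outer(v, v.conj())

Phi_d  = max_ent(d)            # on systems 12
Phi_dd = max_ent(D)            # on systems (12)(1'2')
Id = np.eye(D)

Omega_twirl = (np.kron(Phi_d, Phi_d) / D
               + np.kron(Id - Phi_d, Id - Phi_d) / (D * (D - 1)))

for N in (100, 1000, 10000):
    Omega = np.zeros((D * D, D * D), dtype=complex)
    for _ in range(N):
        U = unitary_group.rvs(d)
        W = np.kron(np.kron(U, U.conj()), np.eye(D))   # U x conj(U) x 1
        Omega += W @ Phi_dd @ W.conj().T
    Omega /= N
    print(N, np.linalg.norm(Omega - Omega_twirl, 2))   # ~ N**(-1/2)
\end{verbatim}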
\section{Discussion} We have introduced the cryptographic primitive of a non-malleable quantum state encryption scheme. While many questions remain open, we have shown that every such scheme based on random unitaries is a unitary 2-design, which in particular implies that every such scheme must use $4\log d$ bits of key, as opposed to the well-known $2\log d$ bits that are necessary and sufficient for quantum state encryption~\cite{Ambainis+Mosca...-Priva_quant_chann:2000}.
This situation essentially persists even if we relax the non-malleability to being approximate. On the other hand, there exists an exact construction based on the Jacobi subgroup of the Clifford group in prime power dimension $d$, which requires $5\log d$ bits of key, and we show a new randomized construction requiring only $(4+o(1))\log d$ bits of key. We leave open the question of finding an explicit description of such a scheme, as well as that of finding an exact unitary 2-design with only $O(d^4)$ elements.
What we also leave open is the perhaps more pressing problem of relaxing the condition that encryption is done by unitaries. Giving up this restriction results in an advantage in key size; see the work of Barnum \emph{et al.}~\cite{Barnum+Crepeau...-Authenticatio_of_q_mes:2002}. More precisely, these authors show how using $2n+O(s)$ bits of secret key to encrypt $n-s$ qubits into $n$ qubits results in a $\theta$-NMES with $\theta = 2^{-O(s)}$. In our setting this can be understood as only using $d_0 < d$ of the Hilbert space dimensions for quantum information. Then, to transmit a state in the $d_0$-dimensional space ${\cal H}_0 \subset {\cal H}$, first $s$ key bits are used to specify a unitary rotation $V_\ell$ of ${\cal H}$, and then the familiar further $2\log d$ bits of key are used to encrypt ${\cal H}$. If the $V_\ell$ ($\ell=1,\ldots,2^s$) are ``sufficiently random'' and $2^s \geq d/d_0$, then it can be shown that while the adversary can implement certain effective channels on ${\cal H}$, for most $\ell$ this will map the state significantly outside of ${\cal H}_\ell := V_\ell {\cal H}_0$.
\acknowledgments JB and AW thank the Perimeter Institute for Theoretical Physics for its hospitality during a visit in 2006, where the present work was conceived.
JB acknowledges support of the Hertha Firnberg ARC stipend program, and grant projects GA\v{C}R 201/06/P338, GA\v{C}R 201/07/0603 and MSM0021622419.
AW received support from the European Commission (project ``QAP''), from the U.K.~EPSRC through the ``QIP IRC'' and an Advanced Research Fellowship, and through a Wolfson Research Merit Award of the Royal Society. The Centre for Quantum Technologies is funded by the Singapore Ministry of Education and the National Research Foundation as part of the Research Centres of Excellence programme.
\end{document}
\begin{document}
\allowdisplaybreaks
\def\ff{\frac} \def\gg{\gamma} \def\ll{\lambda}
\def\E{\mathbb E} \def\P{\mathbb P} \def\F{\scr F}
\def\<{\langle} \def\>{\rangle} \def\beq{\begin{equation}}
\title{\bf Well-Posedness for McKean-Vlasov SDEs with Distribution Dependent Stable Noises}
\begin{abstract} The well-posedness is established for McKean-Vlasov SDEs driven by $\alpha$-stable noises ($1<\alpha<2$). In this model, the drift is H\"{o}lder continuous in the space variable and Lipschitz continuous in the distribution variable with respect to the sum of the Wasserstein and weighted variation distances, while the noise coefficient is Lipschitz continuous in the distribution variable with respect to the sum of two Wasserstein distances. The proof relies on Zvonkin's transform, a time-change technique and a two-step fixed point argument. \end{abstract} \noindent
AMS subject Classification: 60G52, 60H10. \\ \noindent
Keywords: McKean-Vlasov SDEs, distribution dependent noise, $\alpha$-stable process,
subordinator, weighted variation distance.
\vskip 2cm
\section{Introduction}
Distribution dependent SDEs, also called McKean-Vlasov SDEs (see e.g.\ \cite{McKean}), can be used to characterize nonlinear Fokker-Planck-Kolmogorov equations. Compared with classical (distribution independent) SDEs, the study of McKean-Vlasov SDEs is much more difficult, due to the dependence of the coefficients (especially the noise coefficients) on the distribution of the solution. One of the basic issues in the investigation of McKean-Vlasov SDEs is the well-posedness (existence and uniqueness of solutions).
If the drift coefficients are Lipschitz continuous in distribution variable with respect to the $L^\theta$-Wasserstein distance ($\theta\geq1$), the authors obtain in \cite{HW19} the well-posedness for McKean-Vlasov SDEs driven by distribution dependent Brownian noises. When the drift is Lipschitz continuous under a weighted variation distance, the well-posedness is established in \cite{RZ,W21a} only for the case that the diffusion coefficient is distribution free. It seems that the technique (Girsanov's transform) used in \cite{RZ,W21a} is unavailable in the distribution dependent noise case. To overcome the difficulty, \cite{HWJMAA} adopts a parametrix method to derive the well-posedness and regularity estimates. One can refer to \cite{CF} for more details on the parametrix method.
In recent years, McKean-Vlasov SDEs with pure jump noises have also attracted great interest. In \cite{HY}, the authors investigate the well-posedness for McKean-Vlasov SDEs with additive $\alpha$-stable noise ($1<\alpha<2$), where the drift is assumed to be $C_b^\beta$ with $\beta\in(1-\alpha/2,1)$ in space variable, and Lipschitz continuous in distribution variable with
respect to the $L^\theta$-Wasserstein distance ($1<\theta<\alpha$). We refer the readers to \cite{JMW} for related results
on L\'{e}vy-driven McKean-Vlasov SDEs without drift. The aim of this paper is to make some progress on the well-posedness
for McKean-Vlasov SDEs of the following form \begin{align}\label{E1} \text{\rm{d}} X_t=b_t(X_t,\scr L_{X_t})\,\text{\rm{d}} t+\sigma_t(\scr L_{X_t})\,\text{\rm{d}} Z_{t},\quad t\in[0,T], \end{align} where $T>0$ is a fixed constant, $(Z_t)_{t \ge 0}$ is an $m$-dimensional rotationally invariant $\alpha$-stable L\'evy process (with infinitesimal generator $-\frac12(-\triangle)^{\alpha/2}$) on a complete filtration probability space $(\Omega,\{\scr F_t\}_{t\in[0,T]},\P)$, $\scr L_{X_t}$ is the law of $X_t$, and for the space $\scr P$ of all probability measures on $\mathbb R^d$ equipped with the weak topology, $$
b:[0,T]\times\mathbb R^d\times\scr P\rightarrow\mathbb R^d,\quad \sigma:[0,T]\times\scr P\rightarrow\mathbb R^d\otimes\mathbb R^m $$ are measurable.
To characterize the dependence of the coefficients on the distribution variable, we introduce some distances on $\scr P$ or its subspaces. The total variation distance $\|\cdot\|_{var}$ is given by
$$\|\gamma-\tilde{\gamma}\|_{var} := \sup_{|f|\le 1}
\big|\gamma(f)-\tilde{\gamma}(f)\big|,\quad\gamma,\tilde{\gamma}\in \scr P.$$ For $\kappa>0$, let
$$\scr P_\kappa=\big\{\gg\in \scr P\,;\, \gg(|\cdot|^\kappa)<\infty\big\}.$$ Recall the $L^\kappa$-Wasserstein distance $\mathbb W_\kappa$:
$$\mathbb W_\kappa(\gamma,\tilde{\gamma}):= \inf_{\pi\in \scr C(\gamma,\tilde{\gamma})} \left(\int_{\mathbb R^d\times\mathbb R^d} |x-y|^\kappa
\,\pi(\text{\rm{d}} x,\text{\rm{d}} y)\right)^{1/(1\vee \kappa)},\quad \gamma,\tilde{\gamma}\in \scr P_\kappa,$$
where $\scr C(\gamma,\tilde{\gamma})$ is the set of all couplings of $\gamma$ and $\tilde{\gamma}$.
For $\kappa>0$, $\scr P_\kappa$ is a complete metric space under the weighted variation distance
$$\|\gamma-\tilde{\gamma}\|_{\kappa,var} := \sup_{|f|\le 1+|\cdot|^\kappa}
\big|\gamma(f)-\tilde{\gamma}(f)\big|,\quad\gamma,\tilde{\gamma}\in \scr P_\kappa.$$
It is clear that $\|\gamma-\tilde{\gamma}\|_{var}\leq\|\gamma-\tilde{\gamma}\|_{\kappa,var}$ for $\kappa>0$ and
$\gamma,\tilde{\gamma}\in \scr P_\kappa$. If $\kappa\in(0,1]$,
the following duality formula (see e.g.\ \cite[Theorem 5.10]{Chen04}) holds for
$\gamma,\tilde{\gamma}\in \scr P_\kappa$:
$$\mathbb W_\kappa(\gamma,\tilde{\gamma})=\sup_{[f]_\kappa\leq 1}|\gamma(f)-\tilde{\gamma}(f)|
=\sup_{[f]_\kappa\leq 1,\,f(0)=0}|\gamma(f)-\tilde{\gamma}(f)|,$$
where $[f]_\kappa$ denotes the H\"{o}lder seminorm (of exponent $\kappa$) of $f:\mathbb R^d\rightarrow\mathbb R$ defined by $[f]_\kappa:=\sup_{x\neq y}\frac{|f(x)-f(y)|}{|x-y|^\kappa}$. It is easy to see that for $\gamma,\tilde{\gamma}\in \scr P_\kappa$, $$
\mathbb W_\kappa(\gamma,\tilde{\gamma})\leq \|\gamma-\tilde{\gamma}\|_{\kappa,var},\quad \kappa\in(0,1]. $$
To derive the well-posedness for \eqref{E1}, we make the following assumptions. \begin{enumerate} \item[$(A1)$] $\alpha\in(1,2)$. \item[$(A2)$] There exist $\beta\in(0,1)$ satisfying $2\beta+\alpha>2$, $K_1>0$ and $k\in[1,\alpha)$ such that $$
|b_t(x,\gamma)-b_t(y,\tilde{\gamma})|\leq K_1(\|\gamma-\tilde{\gamma}\|_{k,var}+\mathbb W_{k}(\gamma,\tilde{\gamma})+|x-y|^\beta) $$ for all $t\in[0,T]$, $x,y\in\mathbb R^d$ and $\gamma,\tilde{\gamma}\in\scr P_k$. Moreover,
$$\|b\|_\infty:=\sup_{t\in[0,T], x\in\mathbb R^d,\gamma\in\scr P_k}|b_t(x,\gamma)|<\infty.$$ \item[$(A3)$] There exist constants $K_2\geq1$ and $\eta\in(0,1)$ such that $$K_2^{-1}I\leq (\sigma_t\sigma^\ast_t)(\gamma)\le K_2I,\quad\gamma\in\scr P_k $$ and
$$\|\sigma_t(\gamma)-\sigma_t(\tilde{\gamma})\|\leq K_2\big(\mathbb W_\eta(\gamma,\tilde{\gamma})+\mathbb W_k(\gamma,\tilde{\gamma}) \big),\quad t\in[0,T],\,\gamma,\tilde{\gamma}\in\scr P_k,$$ where $k\in[1,\alpha)$ is the constant appearing in $(A2)$. \end{enumerate}
Inspired by \cite[Example 3]{ZG}, we first present a counterexample showing that we cannot expect uniqueness of the solution to \eqref{E1} if the noise coefficient $\sigma$ is only assumed to be Lipschitz continuous in the distribution variable with respect to the total variation distance. This means that, to guarantee the well-posedness, it is impossible to replace the Wasserstein distance $\mathbb W_\eta$
by the weighted variation distance $\|\cdot\|_{k,var}$ in $(A3)$.
\begin{exa}
Let $d=m=1$ and $Z_t$ be an $\alpha$-stable process on $\mathbb R$ \textup{(}$1<\alpha<2$\textup{)}.
By the asymptotic formula for the heat kernel of stable
processes \textup{(}cf.\ \cite[Corollary 2.1\,a)]{DS19}\textup{)},
it is not hard to verify that
$$
\lim_{x\rightarrow\infty}\frac{\P(Z_1\geq2x)}{\P(x\leq Z_1<2x)}=\frac{1}{2^\alpha-1}<1.
$$
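Indeed (a short computation added for the reader's convenience), the stated limit follows from the one-sided tail asymptotic $\P(Z_1\geq x)\sim c\,x^{-\alpha}$ as $x\rightarrow\infty$, which is a consequence of the cited heat kernel estimate:
\begin{align*}
\lim_{x\rightarrow\infty}\frac{\P(Z_1\geq2x)}{\P(x\leq Z_1<2x)}
=\lim_{x\rightarrow\infty}\frac{\P(Z_1\geq 2x)/\P(Z_1\geq x)}{1-\P(Z_1\geq 2x)/\P(Z_1\geq x)}
=\frac{2^{-\alpha}}{1-2^{-\alpha}}=\frac{1}{2^\alpha-1}.
\end{align*}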
Then we can pick large enough $M>1$ such that $\P(Z_1\geq 2M)<\P(M\leq Z_1<2M)$. Set
\begin{align*}
a&:=\frac{\P(M\leq Z_1<2M)-\P(Z_1\geq 2M)}{\P(M\leq Z_1<2M)}\,\in\,(0,1),\\
b&:=\frac{1}{\P(M\leq Z_1<2M)}\,\in\,(1,\infty).
\end{align*}
Let
$$
\sigma_t(\gamma):=a+b\gamma\left([2Mt^{1/\alpha},\infty)\right),
\quad t\in[0,T],\,\gamma\in\scr P.
$$
It is easy to see that for all $t\in[0,T]$ and $\gamma,\tilde{\gamma}\in\scr P$,
$$
0<a\leq \sigma_t(\gamma)\leq a+b,\quad\text{and}\quad
|\sigma_t(\gamma)-\sigma_t(\tilde{\gamma})|\leq b\|\gamma-\tilde{\gamma}\|_{var}.
$$
Consider the McKean-Vlasov SDE \textup{(}without drift\textup{)} on $\mathbb R$:
\begin{equation}\label{e22hs}
\text{\rm{d}} X_t=\sigma_t(\scr L_{X_t})\,\text{\rm{d}} Z_{t},\quad t\in[0,T].
\end{equation}
Since the distribution of $Z_t$ coincides with that of $t^{1/\alpha}Z_1$, we have
\begin{align*}
\sigma_t(\scr L_{Z_t})&=a+b\P(Z_t\geq 2Mt^{1/\alpha})=a+b\P(Z_1\geq2M)=1,\\
\sigma_t(\scr L_{2Z_t})&=a+b\P(2Z_t\geq 2Mt^{1/\alpha})=a+b\P(Z_1\geq M)=2.
\end{align*}
This implies that the SDE \eqref{e22hs} with initial value $X_0=0$ has at least two strong
solutions: $Z_t$ and $2Z_t$. \end{exa}
Denote by $C([0,T];\scr P_k)$ the set of all continuous maps from $[0,T]$ to $\scr P_k$ under the metric $\mathbb W_k$. Throughout the paper, $C$
denotes a positive constant which may depend on $T,d,m,\alpha,\beta,k,K_1,K_2,\|b\|_\infty$; its value may change from line to line without further notice.
Our main result is the following theorem: \begin{thm}\label{EUS} Assume $(A1)$-$(A3)$. Then \eqref{E1} is well-posed in $\scr P_{k}$, and the solution satisfies $\scr L_{X_\cdot}\in C([0,T];\scr P_k)$ and $$
\E\left[\sup_{t\in[0,T]}|X_t|^k\right]<C\left(
1+\E\big[|X_0|^k\big]\right). $$ \end{thm}
The remainder of the paper is organized as follows: In Section 2, we use a fixed point argument to establish the well-posedness for McKean-Vlasov SDEs with distribution free drifts, which will be used in the proof of Theorem \ref{EUS}. The proof of our main result is presented in Section 3. Finally, we give in the Appendix two limits (concerning stable subordinators), which have been used in Section 2 and might be interesting on their own.
\section{Well-posedness for SDEs with distribution free drifts}
This section is devoted to the well-posedness for a particular class of McKean-Vlasov SDEs, where the drift coefficient does not depend on the distribution variable. For $\mu\in C([0,T];\scr P_k)$ and $\gg\in \scr P_k$, consider the following McKean-Vlasov SDE with initial distribution $\scr L_{X_{0}^{\gg,\mu}}=\gg$:
\beq\label{ED}
\text{\rm{d}} X_{t}^{\gg,\mu}= b_t(X_{t}^{\gg,\mu}, \mu_t)\,\text{\rm{d}} t+\sigma_t(\scr L_{X_{t}^{\gg,\mu}})\,\text{\rm{d}} Z_{t},\quad t\in [0,T].
\end{equation}
\begin{prp}\label{PW} Assume $(A1)$-$(A3)$.
For any $\mu\in C([0,T]; \scr P_k)$ and $\gg\in \scr P_k$, $\eqref{ED}$ is well-posed in $\scr P_k$, and the unique solution satisfies $\scr L_{X_{\cdot}^{\gg,\mu}} \in C([0,T]; \scr P_k)$. Furthermore, for any $\mu^1,\mu^2\in C([0,T];\scr P_k)$, $\gg\in \scr P_k$ and
$\delta>0$ large enough, \begin{align*}
\sup_{t\in[0,T] }\text{\rm{e}}^{-\delta t}&\big[\mathbb W_\eta(\scr L_{X_{t}^{\gg,\mu^1}},\scr L_{X_{t}^{\gg,\mu^2}})+ \mathbb W_k(\scr L_{X_{t}^{\gg,\mu^1}},\scr L_{X_{t}^{\gg,\mu^2}})\big]\\
&\le C\gamma(1+|\cdot|^k)[\delta^{1/\alpha-1}+\delta^{-1}]
\sup_{t\in[0,T]}\text{\rm{e}}^{-\delta t}\left[
\|\mu^1_t-\mu^2_t\|_{k,var}+\mathbb W_{k}(\mu^1_t,\mu^2_t)\right]. \end{align*} \end{prp}
\subsection{A useful estimate for classical SDEs}
Let $\gamma\in\scr P_k$, $X_{0}^\gg$ be $\F_0$-measurable with $\scr L_{X_{0}^\gg}=\gg$, and
$\mu,\nu\in C([0,T];\scr P_k)$. Consider the following (distribution independent) SDE \beq\label{ED00}
\text{\rm{d}} X_{s,t}^{\gg,\mu,\nu}= b_t(X_{s,t}^{\gg,\mu,\nu},\mu_t)\,\text{\rm{d}} t+\sigma_t(\nu_t)\,\text{\rm{d}} Z_{t},
\quad 0\leq s\leq t\leq T \end{equation} with $X_{s,s}^{\gg,\mu,\nu}=X_{0}^\gg$. According to \cite{P}, \eqref{ED00} is well-posed under the assumptions $(A2)$ and $(A3)$. For simplicity, we denote $X_{t}^{\gg,\mu,\nu}=X_{0,t}^{\gg,\mu,\nu}$. Moreover, if $\gg=\delta_x$ is the Dirac measure concentrated at $x\in\mathbb R^d$, we write $X_{s,t}^{x,\mu,\nu}=X_{s,t}^{\delta_x,\mu,\nu}$ and $X_{t}^{x,\mu,\nu}=X_{0,t}^{\delta_x,\mu,\nu}$ for $0\leq s\leq t\leq T$.
\begin{lem}\label{PW0} Assume $(A2)$ and $(A3)$. Let $\gamma\in\scr P_k$ and $\mu^i,\nu^i\in C([0,T];\scr P_k)$, $i=1,2$. Then for any $\delta>0$,
\begin{align*}
\sup_{t\in[0,T] }\text{\rm{e}}^{-\delta t}\mathbb W_k(\scr L_{X_{t}^{\gg,\mu^1,\nu^1}},\scr L_{X_{t}^{\gg,\mu^2,\nu^2}})
&\leq \frac{C}{\delta}\sup_{t\in[0,T]}\text{\rm{e}}^{-\delta t}[\|\mu^1_t-\mu^2_t\|_{k,var}+\mathbb W_{k}(\mu^1_t,\mu^2_t)]\\
&\quad+\frac{C}{\sqrt{\delta}}\sup_{t\in[0,T]}\text{\rm{e}}^{-\delta t}[\mathbb W_\eta(\nu^1_t,\nu^2_t)+\mathbb W_k(\nu^1_t,\nu^2_t)].
\end{align*} \end{lem}
\begin{proof} For $\ll>0$ and $\mu,\nu\in C([0,T];\scr P_k)$, consider the following PDE for $u^{\ll,\mu,\nu}:[0,T]\times\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d\rightarrow}\def\l{\ell\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d$: \begin{equation}\label{A10} \partial_tu^{\ll,\mu,\nu}_t(\cdot)+\scr A} \def\Lip{{\rm Lip}_t^{\nu} u^{\ll,\mu,\nu}_t(\cdot)+\nabla u^{\ll,\mu,\nu}_t(\cdot) b_t(\cdot,\mu_t)+ b_t(\cdot,\mu_t)=\ll u^{\ll,\mu,\nu}_t(\cdot),\quad u^{\ll,\mu,\nu}_T(\cdot)=0, \end{equation} where \begin} \def\beq{\begin{equation}} \def\F{\scr F{equation}\label{L34} \scr A} \def\Lip{{\rm Lip}^{\nu}_t f(\cdot):=\int_{\mathbb{R}^{m}\backslash\{0\}}\big[f(\cdot+\sigma_t(\nu_t)y)-f(\cdot)
-\langle \sigma_t(\nu_t)y,\nabla f(\cdot)\rangle\mathds{1}_{\{ |y|\leq 1\}} \big]\,\Pi(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y), \end{equation} and $$
\Pi(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y):=\frac{\alpha\Gamma(\frac{m+\alpha}{2})}
{2^{2-\alpha}\pi^{m/2}\Gamma(1-\frac\alpha2)}\,\frac{\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y}{|y|^{m+\alpha}} $$ is the L\'{e}vy measure of $Z_t$. According to \cite[Theorem 3.4]{P}, there exists (large enough) $\lambda>0$ such that \eqref{A10} has a unique solution $u^{\ll,\mu,\nu} \in C^1\big([0,T];C_{b}^{\alpha+\beta}\left(\mathbb{R}^{d};\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d\right)\big)$ with \begin{equation}\label{A20}
\sup_{\mu,\nu\in C([0,T];\scr P_k),t\in[0,T]}\|\nabla} \def\pp{\partial} \def\E{\mathbb E u_t^{\ll,\mu,\nu}(\cdot)\|_{\infty}\def\I{1}\def\U{\scr U}\le \ff{1}{2}, \end{equation} and \begin} \def\beq{\begin{equation}} \def\F{\scr F{equation}\label{uuu}
\sup_{\mu,\nu\in C([0,T];\scr P_k),t\in[0,T]}\|u_t^{\ll,\mu,\nu}(\cdot)\|_{\infty}+\sup_{\mu,\nu\in C([0,T];\scr P_k),t\in[0,T]}\|\nabla u^{\ll,\mu,\nu}_t(\cdot)\|_{\alpha+\beta-1}<\infty. \end{equation} Here $C_b^{\alpha+\beta}\left(\mathbb{R}^{d};\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d\right)\big)$ denotes the space of all bounded functions $g:\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d\rightarrow\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d$ with continuous derivatives up to order $\lfloor\alpha+\beta\rfloor$ (the integer part of $\alpha+\beta$) with the norm
$$\|g\|_{\alpha+\beta}:=\sum_{i=0}^{\lfloor\alpha+\beta\rfloor}\|\nabla^i g\|_\infty+\sup_{x\neq y}\frac{|\nabla^{\lfloor\alpha+\beta\rfloor} g(x)-\nabla^{\lfloor\alpha+\beta\rfloor} g(y)|}{|x-y|^{\alpha+\beta-\lfloor\alpha+\beta\rfloor}}.$$
Next, let $\gamma\in\scr P_k$, $\mu^i,\nu^i\in C([0,T];\scr P_k)$, $i=1,2$,
and $\theta^{\ll,\mu^1,\nu^1}_t(x)=x+u^{\ll,\mu^1,\nu^1}_t(x)$. Take $\F_0$-measurable random variable
$X_{0}^{\gg}$ such that $\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{0}^{\gg}}=\gg$.
Recall that $X_{t}^{\gg,\mu^i,\nu^i}$ solves \eqref{ED00} with $s,\mu,\nu$ replaced by $0,\mu^i,\nu^i$, respectively.
For simplicity, we denote $X_t=X_{t}^{\gg,\mu^1,\nu^1}$ and $Y_t=X_{t}^{\gg,\mu^2,\nu^2}$. Let $\tilde{N}$ be a Poisson random measure with
compensator $\Pi(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t$.
Applying It\^{o}'s formula (see e.g.\ \cite[Lemma 4.2]{P}) and \eqref{A10}, we have \begin{equation*} \begin{split} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D \theta^{\ll,\mu^1,\nu^1}_t(X_t)&=\ll u^{\ll,\mu^1,\nu^1}_t(X_t)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t +\int_{\mathbb{R}^{m}\backslash\{0\}} \left[ \theta^{\ll,\mu^1,\nu^1}_t\left(X_{t-}+\sigma_t(\nu^1_t)x \right) -\theta^{\ll,\mu^1,\nu^1}_t\left(X_{t-} \right) \right]\,\tilde{N}(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t),\\ \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D \theta^{\ll,\mu^1,\nu^1}_t(Y_t)
&=\ll u^{\ll,\mu^1,\nu^1}_t(Y_t)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t
+\int_{\mathbb{R}^{m}\backslash\{0\}} \left[ \theta^{\ll,\mu^1,\nu^1}_t\left(Y_{t-}+\sigma_t(\nu^2_t)x \right)-\theta^{\ll,\mu^1,\nu^1}_t\left(Y_{t-}\right) \right]\,\tilde{N}(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t)\\ &\quad+\nabla} \def\pp{\partial} \def\E{\mathbb E\theta^{\ll,\mu^1,\nu^1}_t(Y_t) \big[b_t(Y_t,\mu^2_t)-b_t(Y_t,\mu^1_t)\big]\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t+(\scr A} \def\Lip{{\rm Lip}^{\nu^2}_t -\scr A} \def\Lip{{\rm Lip}^{\nu^1}_t) \theta^{\ll,\mu^1,\nu^1}_t(Y_t)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t.
\end{split} \end{equation*} By \eqref{A20} we get $$
\frac{1}{2}|X_t-Y_t|\leq |\theta^{\lambda,\mu^1,\nu^1}_t(X_t)-\theta^{\lambda,\mu^1,\nu^1}_t(Y_t)|\leq \sum_{i=1}^5\Lambda_{i}(t), $$ where \begin} \def\beq{\begin{equation}} \def\F{\scr F{align*}
\Lambda_{1}(t)&:=\left|\int_{0}^{t}\int_{|x|>1}\big[\Gamma_r^{\ll,\mu^1,\nu^1}(X_{r-} ,\sigma_r(\nu^1_r)x) -\Gamma_r^{\ll,\mu^1,\nu^1}(Y_{r-} ,\sigma_r(\nu^2_r)x) \big]\,\tilde{N}(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r)\right|,\\
\Lambda_{2}(t)&:=\left|\int_{0}^{t}\int_{0<|x|\leq 1}\big[\Gamma_r^{\ll,\mu^1,\nu^1}(X_{r-} ,\sigma_r(\nu^1_r)x) -\Gamma_r^{\ll,\mu^1,\nu^1}(Y_{r-} ,\sigma_r(\nu^2_r)x)\big]\,\tilde{N}(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r)\right|,\\
\Lambda_{3}(t)&:=\int_{0}^{t}\lambda |u^{\ll,\mu^1,\nu^1}_r(X_r)-u^{\ll,\mu^1,\nu^1}_r(Y_r)|\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r,\\
\Lambda_{4}(t)&:=\int_{0}^{t}\big|\nabla} \def\pp{\partial} \def\E{\mathbb E\theta^{\ll,\mu^1,\nu^1}_r(Y_r)
\big[b_r(Y_r,\mu^2_r)-b_r(Y_r,\mu^1_r)\big]\big|\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r,\\
\Lambda_{5}(t)&:=\int_{0}^{t}|(\scr A} \def\Lip{{\rm Lip}^{\nu^2}_r -\scr A} \def\Lip{{\rm Lip}^{\nu^1}_r) \theta^{\ll,\mu^1,\nu^1}_r(Y_r)|\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r, \end{align*} and $$
\Gamma_r^{\ll,\mu^1,\nu^1}(z,y):=\theta^{\ll,\mu^1,\nu^1}_r\left(z+ y \right)-\theta^{\ll,\mu^1,\nu^1}_r\left(z \right). $$ Then we obtain $$
|X_t-Y_t|^k\leq 2^k\left(\sum_{i=1}^5\Lambda_{i}(t)\right)^k\leq 2^k\cdot5^{k-1}
\sum_{i=1}^5\Lambda^k_{i}(t), $$ and thus \begin{equation}\label{upperbound}
\mathbb{E}\left[\sup_{s\in[0,t]}|X_s-Y_s|^k\right]\leq
C\sum_{i=1}^5\mathbb{E}\left[\sup_{s\in[0,t]}\Lambda_{i}^{k}(s)
\right]. \end{equation} First, by $(A2)$ and \eqref{A20}, we obtain $$ \mathbb{E}\left[\sup_{s\in[0,t]}\Lambda_{4}^{k}(s)\right]
\leq C\left(\int_0^t[\|\mu_r^1-\mu_r^2\|_{k,var}+\mathbb W_{k}(\mu_r^1,\mu_r^2)]\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\right)^k. $$ By \eqref{A20}, we have $$ \mathbb{E}\left[\sup_{s\in[0,t]}\Lambda_{3}^{k}(s)\right]
\leq C\int_0^t\mathbb{E}\left[\sup_{s\in[0,r]}|X_s-Y_s|^{k}\right]\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r. $$ Moreover, \eqref{A20} and $(A3)$ imply that \begin{align*}
&|\Gamma_r^{\ll,\mu^1,\nu^1}(X_{r-} ,\sigma_r(\nu^1_r)x) -\Gamma_r^{\ll,\mu^1,\nu^1}(Y_{r-} ,\sigma_r(\nu^2_r)x)|\\
&\qquad\leq|\theta_r^{\ll,\mu^1,\nu^1}(X_{r-}+\sigma_r(\nu^1_r)x)
-\theta_r^{\ll,\mu^1,\nu^1}(Y_{r-}+\sigma_r(\nu^2_r)x)|
+|\theta_r^{\ll,\mu^1,\nu^1}(X_{r-})-\theta_r^{\ll,\mu^1,\nu^1}(Y_{r-})|\\
&\qquad\leq C|X_{r-}-Y_{r-}|+C|\sigma_r(\nu^1_r)x-\sigma_r(\nu^2_r)x|\\
&\qquad\leq C|X_{r-}-Y_{r-}|+C|x|[\mathbb W_\eta(\nu^1_r,\nu^2_r)+\mathbb W_k(\nu^1_r,\nu^2_r)]. \end{align*} This combined with the Burkholder-Davis-Gundy inequality for jump processes from \cite{Nov} (see also \cite[Theorem 3.1(i)]{KS}), implies that \begin{align*}
\mathbb{E}\left[\sup_{s\in[0,t]}\Lambda_{1}^{k}(s)\right]&\leq C\int_0^t\int_{|x|>1}\left\{\E[|X_{r-}-Y_{r-}|^k]+|x|^k[\mathbb W_\eta(\nu^1_r,\nu^2_r)+\mathbb W_k(\nu^1_r,\nu^2_r)]^k \right\}\,\Pi(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\\
&\leq C\int_0^t\E[|X_{r-}-Y_{r-}|^k]\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r+C\int_0^t[\mathbb W_\eta(\nu^1_r,\nu^2_r)+\mathbb W_k(\nu^1_r,\nu^2_r)]^k\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\\
&\leq C\int_0^t\E\left[\sup_{s\in[0,r]}|X_{s}-Y_{s}|^k\right]\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r+C\left(\int_0^t [\mathbb W_\eta(\nu^1_r,\nu^2_r)+\mathbb W_k(\nu^1_r,\nu^2_r)]^2\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\right)^{k/2}. \end{align*} Next, by \cite[Lemma 4.1]{P}, \eqref{A20}, \eqref{uuu} and $(A3)$, we get
\begin{align*}&|\Gamma_r^{\ll,\mu^1,\nu^1}(X_{r-} ,\sigma_r(\nu^1_r)x) -\Gamma_r^{\ll,\mu^1,\nu^1}(Y_{r-} ,\sigma_r(\nu^2_r)x)|\\
&\qquad\qquad\qquad\quad\leq |\Gamma_r^{\ll,\mu^1,\nu^1}(X_{r-} ,\sigma_r(\nu^1_r)x) -\Gamma_r^{\ll,\mu^1,\nu^1}(Y_{r-} ,\sigma_r(\nu^1_r)x)|\\
&\qquad\qquad\qquad\qquad+|\Gamma_r^{\ll,\mu^1,\nu^1}(Y_{r-} ,\sigma_r(\nu^1_r)x) -\Gamma_r^{\ll,\mu^1,\nu^1}(Y_{r-} ,\sigma_r(\nu^2_r)x)|\\
&\qquad\qquad\qquad\quad\leq C|x|^{\alpha+\beta-1}|X_{r-}-Y_{r-}|+C|x|[\mathbb W_\eta(\nu^1_r,\nu^2_r)+\mathbb W_k(\nu^1_r,\nu^2_r)]. \end{align*} Since $2\alpha+2\beta-2=\alpha+(\alpha+2\beta-2)>\alpha$ and
\begin{align}\label{sma}\int_{0<|x|\leq 1}|x|^\epsilon\,\Pi(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x)<\infty\quad \text{for all $\epsilon>\alpha$}, \end{align} it holds that \begin{align*} &C\mathbb{E}\left[\sup_{s\in[0,t]}\Lambda_{2}^{k}(s)\right]\\
&\leq C\E \left(\int_{0}^{t}\int_{0<|x|\leq 1}|\Gamma_r^{\ll,\mu^1,\nu^1}(X_{r-} ,\sigma_r(\nu^1_r)x) -\Gamma_r^{\ll,\mu^1,\nu^1}(Y_{r-} ,\sigma_r(\nu^1_r)x)|^2\,\Pi(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\right)^{k/2}\\
&\leq C\E \left(\int_{0}^{t}\int_{0<|x|\leq 1}\left[|x|^{2\alpha+2\beta-2}|X_{r-}-Y_{r-}|^2+|x|^2[\mathbb W_\eta(\nu^1_r,\nu^2_r)+\mathbb W_k(\nu^1_r,\nu^2_r)]^2\right]\,\Pi(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\right)^{k/2}\\ &\leq C\E \left(
\int_0^t|X_{r}-Y_{r}|^2\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r \right)^{k/2}+C\left(\int_0^t[\mathbb W_\eta(\nu^1_r,\nu^2_r)+\mathbb W_k(\nu^1_r,\nu^2_r)]^2\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\right)^{k/2}\\
&\leq \frac{1}{2}\,\E\left[\sup_{s\in[0,t]}|X_{s}-Y_{s}|^k\right]+C\int_0^t\E\left[\sup_{s\in[0,r]}|X_{s}-Y_{s}|^k\right]\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\\ &\quad +C\left(\int_0^t[\mathbb W_\eta(\nu^1_r,\nu^2_r)+\mathbb W_k(\nu^1_r,\nu^2_r)]^2\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\right)^{k/2}. \end{align*} Here in the last inequality we have used the fact that if $f:[0,T]\rightarrow[0,\infty)$ is a right continuous function having left limits, then \begin{align*}
C\left(\int_0^tf(r)^2\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\right)^{k/2}
&\leq \sup_{s\in[0,t]}f(s)^{k/2}\times C\left(\int_0^tf(r)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\right)^{k/2}\\
&\leq\frac12\,\sup_{s\in[0,t]}f(s)^{k}+\frac12\,C^2\left(\int_0^tf(r)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\right)^{k}\\
&\leq\frac12\,\sup_{s\in[0,t]}f(s)^{k}+C\int_0^tf(r)^k\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\\
&\leq\frac12\,\sup_{s\in[0,t]}f(s)^{k}+C\int_0^t\sup_{s\in[0,r]}f(s)^k\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r. \end{align*} Finally, it follows from $(A3)$, \eqref{A20}, \eqref{uuu} and \eqref{sma} that \begin{align*} &\mathbb{E}\left[\sup_{s\in[0,t]}\Lambda_{5}^{k}(s)\right]\\
&\leq\E\bigg(\int_0^t\int_{\mathbb{R}^{m}\setminus\{0\}}\Big|\theta^{\ll,\mu^1,\nu^1}_r(Y_r+\sigma_r(\nu^1_r)y) -\theta^{\ll,\mu^1,\nu^1}_r(Y_r+\sigma_r(\nu^2_r)y)\\ &\qquad\qquad\qquad\quad-\langle \sigma_r(\nu^1_r)y-\sigma_r(\nu^2_r)y,\nabla \theta^{\ll,\mu^1,\nu^1}_r(Y_r)
\rangle\mathds{1}_{\{ |y|\leq 1\}} \Big|\,\Pi(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\bigg)^k\\
&\leq C\left(\int_{|y|>1}|y|\,\Pi(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y)+\int_{0<|y|\leq 1}|y|^{\alpha+\beta}\,\Pi(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y)\right)^k\left(\int_{0}^t[\mathbb W_\eta(\nu^1_r,\nu^2_r)+\mathbb W_k(\nu^1_r,\nu^2_r)]\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\right)^k\\ &\leq C\left(\int_{0}^t[\mathbb W_\eta(\nu^1_r,\nu^2_r)+\mathbb W_k(\nu^1_r,\nu^2_r)]\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\right)^k\\ &\leq C\left(\int_{0}^t[\mathbb W_\eta(\nu^1_r,\nu^2_r)+\mathbb W_k(\nu^1_r,\nu^2_r)]^2\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\right)^{k/2}. \end{align*} Combining \eqref{upperbound} and the above estimates, we get \begin{align*}
\mathbb{E}\left[\sup_{s\in[0,t]}|X_s-Y_s|^k\right]
&\leq C\int_0^t\mathbb{E}\left[\sup_{s\in[0,r]}|X_s-Y_s|^{k}\right]\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\\
&\quad+C\left(\int_0^t[\|\mu_r^1-\mu_r^2\|_{k,var}+\mathbb W_{k}(\mu_r^1,\mu_r^2)]\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\right)^k\\
&\quad+C\left(\int_{0}^t[\mathbb W_\eta(\nu^1_r,\nu^2_r)+\mathbb W_k(\nu^1_r,\nu^2_r)]^2\,\text{\rm{d}} r\right)^{k/2}. \end{align*} This, together with Gronwall's inequality, implies \begin{align*}
\mathbb W_k(\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_t},\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{Y_t})^k&\leq\mathbb{E}\left[\sup_{s\in[0,t]}|X_s-Y_s|^k\right]\\
&\leq
C\left(\int_0^t[\|\mu_r^1-\mu_r^2\|_{k,var}+\mathbb W_{k}(\mu_r^1,\mu_r^2)]\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\right)^k\\
&\quad +C\left(\int_{0}^t[\mathbb W_\eta(\nu^1_r,\nu^2_r)+\mathbb W_k(\nu^1_r,\nu^2_r)]^2\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\right)^{k/2}. \end{align*} Then for any $\delta>0$ and $t\in[0,T]$, \begin{align*}
\text{\rm{e}}^{-\delta t}\mathbb W_k(\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_t},\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{Y_t})
&\leq C\text{\rm{e}}^{-\delta t}\int_0^t\text{\rm{e}}^{-\delta r}[\|\mu_r^1-\mu_r^2\|_{k,var}+\mathbb W_{k}(\mu_r^1,\mu_r^2)]\cdot\text{\rm{e}}^{\delta r}\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\\
&\quad+C\text{\rm{e}}^{-\delta t}\left(\int_{0}^t\text{\rm{e}}^{-2\delta r}[\mathbb W_\eta(\nu^1_r,\nu^2_r)+\mathbb W_k(\nu^1_r,\nu^2_r)]^2\cdot\text{\rm{e}}^{2\delta r}\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\right)^{1/2}\\
&\leq C\sup_{r\in[0,T]}\text{\rm{e}}^{-\delta r}[\|\mu_r^1-\mu_r^2\|_{k,var}+\mathbb W_{k}(\mu_r^1,\mu_r^2)]
\times\int_0^t\text{\rm{e}}^{-\delta (t-s)}\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s\\
&\quad+C\sup_{r\in[0,T]}\text{\rm{e}}^{-\delta r}[\mathbb W_\eta(\nu^1_r,\nu^2_r)+\mathbb W_k(\nu^1_r,\nu^2_r)]
\times\left(\int_{0}^t\text{\rm{e}}^{-2\delta (t-s)}\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s\right)^{1/2}\\
&\leq \frac{C}{\delta}\sup_{r\in[0,T]}\text{\rm{e}}^{-\delta r}[\|\mu_r^1-\mu_r^2\|_{k,var}+\mathbb W_{k}(\mu_r^1,\mu_r^2)]\\
&\quad+\frac{C}{\sqrt{2\delta}}\sup_{r\in[0,T]}\text{\rm{e}}^{-\delta r}[\mathbb W_\eta(\nu^1_r,\nu^2_r)+\mathbb W_k(\nu^1_r,\nu^2_r)], \end{align*}
which completes the proof.
\end{proof}
\subsection{The method of time-change}
It is well-known that an $m$-dimensional rotationally symmetric $\alpha$-stable L\'evy process $Z_t$ can be represented as subordinated Brownian motion, see for instance \cite{SSZ12}. More precisely, let $S_t$ be an $\frac{\alpha}{2}$-stable subordinator, i.e.\ $S_t$ is a $[0,\infty)$-valued L\'evy process with the following Laplace transform: $$
\E\left[\text{\rm{e}}^{-rS_t}\right] = \text{\rm{e}}^{-2^{-1}t (2r)^{\alpha/2}},\quad r>0,\,t\geq 0, $$
and let $W_t$ be an $m$-dimensional standard Brownian motion, which is independent of $S_t$. The time-changed process $Z_{t}:=W_{S_{t}}$ is an $m$-dimensional rotationally symmetric $\alpha$-stable L\'evy process such that $\E\,\text{\rm{e}}^{\text{\rm{i}}\langle\xi, Z_t\rangle}=\text{\rm{e}}^{-t|\xi|^\alpha/2}$ for $\xi\in\mathbb R^m$. Using the subordination representation, \eqref{E1} can be written in the following form $$ \text{\rm{d}} X_t=b_t(X_t,\scr L_{X_t})\,\text{\rm{d}} t+\sigma_t(\scr L_{X_t})\,\text{\rm{d}} W_{S_t},\quad t\in[0,T]. $$
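To make the time-change concrete (a simulation sketch added here, not taken from the paper), one can draw the subordinator increments with the classical Kanter/Chambers--Mallows--Stuck sampler for one-sided stable laws and run the Brownian increments at this random clock; the rescaling constant below is chosen so that the increments match the Laplace transform displayed above, and all function names are ours.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def positive_stable(a, size, rng):
    """Kanter/CMS sampler: S > 0 with E[exp(-r*S)] = exp(-r**a), 0 < a < 1."""
    Th = rng.uniform(0.0, np.pi, size)
    E = rng.exponential(1.0, size)
    return (np.sin(a * Th) / np.sin(Th) ** (1.0 / a)
            * (np.sin((1.0 - a) * Th) / E) ** ((1.0 - a) / a))

def simulate_Z(alpha, m, T, n, rng):
    """Grid approximation of Z_t = W_{S_t}, with
    E[exp(-r*S_t)] = exp(-t * 2**(alpha/2 - 1) * r**(alpha/2))."""
    a = alpha / 2.0
    dt = T / n
    c = (dt * 2.0 ** (a - 1.0)) ** (1.0 / a)     # scale matching the Laplace transform
    dS = c * positive_stable(a, n, rng)          # i.i.d. subordinator increments
    dW = rng.normal(size=(n, m)) * np.sqrt(dS)[:, None]  # Brownian steps at the random clock
    Z = np.vstack([np.zeros(m), np.cumsum(dW, axis=0)])
    return np.linspace(0.0, T, n + 1), Z

t, Z = simulate_Z(alpha=1.5, m=1, T=1.0, n=2000, rng=rng)
# Sanity check against E exp(i*xi*Z_T) = exp(-T*|xi|**alpha/2) at T = 1:
xi = 1.0
samples = np.array([simulate_Z(1.5, 1, 1.0, 200, rng)[1][-1, 0] for _ in range(4000)])
print(np.mean(np.cos(xi * samples)), np.exp(-abs(xi) ** 1.5 / 2))
\end{verbatim}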
For $ 0\leq s< t\leq T$, $x\in\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d$, and $\nu\in C([0,T];\scr P_k)$, let $q_{s,t}^{\nu}(x,\cdot)$ be the density function of the random variable $$
Y_{s,t}^{x,\nu}:= x + \int_s^t \sigma} \def\ess{\text{\rm{ess}}_r(\nu_r)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W_{S_r}. $$ As before, we write $q_{t}^{\nu}(x,\cdot)=q_{0,t}^{\nu}(x,\cdot)$ for $t>0$. Since $W_t$ is independent of $S_t$, one has \beq\label{ES6'}
q_{s,t}^{\nu}(x,y)=\E\,q_{s,t}^{\nu,S}(x,y), \end{equation} where $$
q_{s,t}^{\nu,S}(x,y):=\ff{\exp\left[-\ff 1 {2} \left\langle} \def\>{\rangle} \def\GG{\Gamma} \def\gg{\gamma(a_{s,t}^{\nu,S})^{-1}(y-x), y-x\right\>\right]}{(2\pi )^{d/2}
({\rm det} \{a_{s,t}^{\nu,S}\})^{1/2}}\quad \text{and}\quad
a_{s,t}^{\nu,S} := \int_s^t(\sigma} \def\ess{\text{\rm{ess}}_r\sigma} \def\ess{\text{\rm{ess}}_r^*)( \nu_r)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D S_r. $$ Obviously, $(A3)$ implies \begin{equation}\label{mv}
\|a_{s,t}^{\nu^1,S}-a_{s,t}^{\nu^2,S}\|\le 2K_2^{3/2}\int_s^t[\mathbb W_k(\nu^1_r,\nu^2_r)+\mathbb W_\eta(\nu^1_r,\nu^2_r)]\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D S_r,
\quad \nu^1,\nu^2\in C([0,T];\scr P_k), \end{equation} \begin{equation}\label{mv2}
\frac{1}{K_2(S_t-S_s)}\leq \|[a_{s,t}^{\nu,S}]^{-1}\|\leq \frac{K_2}{S_t-S_s},
\quad \nu\in C([0,T];\scr P_k). \end{equation}
For $0\leq s< t\leq T$ and $x,y\in \mathbb R} \def\ff{\frac} \def\ss{\sqrt^d$, let \begin{align*} q_{s,t}^{\nu,S}(x,y):=\ff{\exp\left[-\ff 1 {2} \left\langle} \def\>{\rangle} \def\GG{\Gamma} \def\gg{\gamma(a_{s,t}^{\nu,S})^{-1}(y-x), y-x\right\>\right]}{(2\pi )^{d/2} ({\rm det} \{a_{s,t}^{\nu,S}\})^{1/2}}\quad \text{and}\quad
\tilde{q}_{s,t}^{S}(x,y):=\ff{\exp\left[-\frac{|y-x|^2}{4K_2(S_t-S_s)}\right]}{(4K_2\pi(S_t-S_s))^{d/2}}. \end{align*} By \eqref{mv} and \eqref{mv2}, one can apply the argument used in the proof of \cite[Lemma 3.1]{HWJMAA} to get the following lemma. To save space, we omit the proof.
\begin} \def\beq{\begin{equation}} \def\F{\scr F{lem}\label{L20} Assume $(A3)$. For any $0\leq s<t\le T$, $x,y\in \mathbb R} \def\ff{\frac} \def\ss{\sqrt^d$ and $ \nu^1,\nu^2,\nu\in C([0,T];\scr P_k)$, \begin{align*}
&|q_{s,t}^{\nu^1,S}(x,y)-q_{s,t}^{\nu^2,S}(x,y)|\leq C \tilde{q}^{S}_{s,t}(x,y)
(S_t-S_s)^{-1}\int_s^t[\mathbb W_\eta(\nu^1_r,\nu^2_r)+\mathbb W_k(\nu^1_r,\nu^2_r)]\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D S_r,\\
&|\nabla q_{s,t}^{\nu,S}(\cdot,y)(x)|\leq C \tilde{q}^{S}_{s,t}(x,y)(S_t-S_s)^{-1/2},\\
&|\nabla q_{s,t}^{\nu^1,S}(\cdot,y)(x)-\nabla q_{s,t}^{\nu^2,S}(\cdot,y)(x)|\leq
C\tilde{q}^{S}_{s,t}(x,y)(S_t-S_s)^{-3/2}
\int_s^t[\mathbb W_\eta(\nu^1_r,\nu^2_r)+\mathbb W_k(\nu^1_r,\nu^2_r)]\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D S_r. \end{align*} \end{lem}
\begin} \def\beq{\begin{equation}} \def\F{\scr F{lem}\label{L2} Assume $(A3)$. For any $0\leq s<t\le T$, $x\in \mathbb R} \def\ff{\frac} \def\ss{\sqrt^d$, $ \nu^1,\nu^2,\nu\in C([0,T];\scr P_k)$ and $\epsilon\in[0,\alpha)$, \begin{equation}\label{g0'} \begin{aligned}
&\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}|q_{s,t}^{\nu^1}(x,y)-q_{s,t}^{\nu^2}(x,y)||y-x|^\epsilon\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y\\ &\qquad\qquad\;\leq C \E\left[(S_t-S_s)^{-1+\epsilon/2}\int_s^t[\mathbb W_\eta(\nu^1_r,\nu^2_r)+\mathbb W_k(\nu^1_r,\nu^2_r)]\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D S_r \right], \end{aligned} \end{equation} \begin{equation}\label{g1'} \begin{aligned}
\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}|\nabla q_{s,t}^{\nu}(\cdot,y)(x)||y-x|^\epsilon\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y&\leq C (t-s)^{(-1+\epsilon)/\alpha}, \end{aligned} \end{equation} \begin{equation}\label{g2'} \begin{aligned}
&\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}|\nabla q_{s,t}^{\nu^1}(\cdot,y)(x)-\nabla q_{s,t}^{\nu^2}(\cdot,y)(x)||y-x|^\epsilon\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y\\ &\qquad\qquad\leq
C \E\left[(S_t-S_s)^{(-3+\epsilon)/2}
\int_s^t[\mathbb W_\eta(\nu^1_r,\nu^2_r)+\mathbb W_k(\nu^1_r,\nu^2_r)]\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D S_r
\right]. \end{aligned} \end{equation} \end{lem} \begin{proof} It follows from \eqref{ES6'} and the first inequality in Lemma \ref{L20} that for all $\epsilon\in[0,\alpha)$, \begin{align*}
&\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}| q_{s,t}^{\nu^1}(x,y)- q_{s,t}^{\nu^2}(x,y)||y-x|^\epsilon\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y\\ &\leq C\E\left[(S_t-S_s)^{-1}\int_s^t[\mathbb W_\eta(\nu^1_r,\nu^2_r)+\mathbb W_k(\nu^1_r,\nu^2_r)]\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D S_r
\times\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d} \tilde{q}^{S}_{s,t} (x,y) |y-x|^\epsilon\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y\right]\\ &=C\E\left[ (S_t-S_s)^{-1+\epsilon/2}\int_s^t[\mathbb W_\eta(\nu^1_r,\nu^2_r)+\mathbb W_k(\nu^1_r,\nu^2_r)]\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D S_r
\right]\times\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d} |y|^\epsilon\ff{\exp\left[-\frac{|y|^2}{4K_2}\right]}{(4K_2\pi)^{d/2}}\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y\\ &\leq C\E\left[ (S_t-S_s)^{-1+\epsilon/2}\int_s^t[\mathbb W_\eta(\nu^1_r,\nu^2_r)+\mathbb W_k(\nu^1_r,\nu^2_r)]\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D S_r \right]. \end{align*} This is what we have claimed in \eqref{g0'}. We can prove \eqref{g2'} in a similar way. To prove \eqref{g1'}, we use \eqref{ES6'} and the second inequality in Lemma \ref{L20} to get \begin{align*}
\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}|\nabla q_{s,t}^{\nu}(\cdot,y)(x)||y-x|^\epsilon\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y
&\leq C\E\left[(S_t-S_s)^{-1/2}\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d} \tilde{q}^{S}_{s,t} (x,y) |y-x|^\epsilon\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y\right]\\
&=C\E\left[(S_t-S_s)^{(-1+\epsilon)/2}\right]\times\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d} |y|^\epsilon\ff{\exp\left[-\frac{|y|^2}{4K_2}\right]} {(4K_2\pi)^{d/2}}\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y\\ &\leq C\E\left[S_{t-s}^{(-1+\epsilon)/2}\right]. \end{align*} Since by \cite[Lemma 4.1]{DS19}, $$
\E\left[S_{t-s}^{(-1+\epsilon)/2}\right]
=\frac{\Gamma\left(1-\frac{-1+\epsilon}{\alpha}\right)}{\Gamma\left(\frac{3-\epsilon}{2}\right)}\,(t-s)^{(-1+\epsilon)/\alpha}
\leq C(t-s)^{(-1+\epsilon)/\alpha}
\quad \text{for all $\epsilon\in[0,\alpha)$}, $$ this completes the proof. \end{proof}
Recall that $$
X_{s,t}^{x,\mu,\nu}=x+\int_s^tb_r(X_{s,r}^{x,\mu,\nu},\mu_r)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r
+\int_s^t\sigma_r(\nu_r)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W_{S_r},\quad x\in\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d,\,0\leq s< t\leq T. $$ Since $b$ and $\sigma$ are bounded due to $(A2)$ and $(A3)$, it follows from $\E [S_T^{k/2}]=T^{k/\alpha}\E [S_1^{k/2}]<\infty$ that \begin} \def\beq{\begin{equation}} \def\F{\scr F{equation}\begin{split}\label{gr4e23}
\E\left[\sup_{t\in[s,T]}|X_{s,t}^{x,\mu,\nu}|^k\right]
&\leq C|x|^k+C+C\E\left[\sup_{t\in[s,T]}
\left|
\int_s^t\sigma_r(\nu_r)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W_{S_r}
\right|^k
\right]\\
&\leq C|x|^k+C+C\E [S_T^{k/2}]\\
&\leq C(1+|x|^k). \end{split} \end{equation} This implies \begin{equation}\label{NNT}
\E\left[1+|X_{s,t}^{x,\mu,\nu}|^k\right]
\leq C (1+|x|^k),\quad \mu,\nu\in C([0,T];\scr P_k). \end{equation}
We denote by $p_{s,t}^{\mu,\nu}(x,\cdot)$ the density function of $X_{s,t}^{x,\mu,\nu}$. Denote by $P_{s,t}^{\mu,\nu}$ and $Q_{s,t}^\nu$ the inhomogeneous Markov semigroups associated with $X_{s,t}^{x,\mu,\nu}$ and $Y_{s,t}^{x,\nu}$, respectively, i.e.\ for $f\in \scr B_b(\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d)$, \begin{align*}
P_{s,t}^{\mu,\nu}f(x)&=\E f(X_{s,t}^{x,\mu,\nu})=\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}p_{s,t}^{\mu,\nu}(x,y)f(y)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y,\\ Q_{s,t}^\nu f(x)&=\E f(Y_{s,t}^{x,\nu})=\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}q_{s,t}^{\nu}(x,y)f(y)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y.
\end{align*} As before, write $p_{t}^{\mu,\nu}(x,\cdot)=p_{0,t}^{\mu,\nu}(x,\cdot)$, $P_{t}^{\mu,\nu}=P_{0,t}^{\mu,\nu}$ and $Q_{t}^\nu=Q_{0,t}^\nu$ for $t>0$.
\begin{lem}\label{dunh4} Assume $(A2)$ and $(A3)$. Then for any $0\leq s<t\leq T$, $\mu,\nu\in C([0,T];\scr P_k)$, and $f\in\scr B_b(\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d)$, $$ P_{s,t}^{\mu,\nu}f = Q_{s,t}^{\nu}f + \int_s^t P_{s,r}^{\mu,\nu} \left\<b_r(\cdot, \mu_r),\nabla Q_{r,t}^{\nu}f\right\>\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r. $$ \end{lem} \begin{proof} By a standard approximation argument, it suffices to prove the desired assertion for $f\in C_b^2(\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d)$. By the backward Kolmogorov equation, it holds that $$ \frac{\partial Q_{r,t}^{\nu}f}{\partial r} =-\scr A} \def\Lip{{\rm Lip}_r^{\nu}(Q_{r,t}^{\nu}f),\quad 0\leq r< t\leq T, $$ where $\scr A} \def\Lip{{\rm Lip}^{\nu}_r f$ is given by \eqref{L34}. Similarly, we have the forward Kolmogorov equation $$\frac{\partial P_{s,r}^{\mu,\nu}f}{\partial r} =P_{s,r}^{\mu,\nu} [\scr A} \def\Lip{{\rm Lip}_{r}^{\mu,\nu}f],\quad 0\leq s< r\leq T,$$ where $$ \scr A} \def\Lip{{\rm Lip}^{\mu,\nu}_r f(x):=\<b_t(x,\mu_r),\nabla f(x)\>+\scr A} \def\Lip{{\rm Lip}^{\nu}_r f(x). $$ Hence, we have \begin{align*}
P_{s,t}^{\mu,\nu}f -Q_{s,t}^{\nu}f
&=\int_s^t
\frac{\partial}{\partial r}[P_{s,r}^{\mu,\nu}Q^{\nu}_{r,t}f]\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\\
&=\int_s^t
P_{s,r}^{\mu,\nu}\{[\scr A} \def\Lip{{\rm Lip}_{r}^{\mu,\nu}-\scr A} \def\Lip{{\rm Lip}_{r}^{\nu}]Q^{\nu}_{r,t}f\}
\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\\
&=\int_s^t P_{s,r}^{\mu,\nu} \left\<b_r(\cdot, \mu_r),\nabla Q_{r,t}^{\nu}f\right\>\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r, \end{align*} and this completes the proof. \end{proof}
\subsection{An auxiliary lemma}
\begin{lem}\label{vdh22s} Assume $(A1)$-$(A3)$. Let $\gg\in \scr P_k$ and $\mu^i,\nu^i\in C([0,T];\scr P_k)$, $i=1,2$. Then for large enough $\delta>0$, \begin{align*}
&\sup_{t\in[0,T]}\text{\rm{e}}^{-\delta t}\big\|\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{X}_{t}^{\gg,\mu^1,\nu^1}}-\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{X}_{t}^{\gg,\mu^2,\nu^2}}\big\|_{k,var}
\leq C\gamma(1+|\cdot|^k)\sup_{t\in[0,T]}\text{\rm{e}}^{-\delta t}[\mathbb W_\eta(\nu^1_t,\nu^2_t)+\mathbb W_k(\nu^1_t,\nu^2_t)]\\
&\quad\qquad\qquad\qquad\qquad\qquad+C\gamma(1+|\cdot|^k)\delta^{1/\alpha-1}
\sup_{t\in[0,T]}\text{\rm{e}}^{-\delta t}\left[
\|\mu^1_t-\mu^2_t\|_{k,var}+\mathbb W_{k}(\mu^1_t,\mu^2_t)
\right],\\
&\sup_{t\in[0,T]}\text{\rm{e}}^{-\delta t}\mathbb W_\eta\big(\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{X}_{t}^{\gg,\mu^1,\nu^1}},\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{X}_{t}^{\gg,\mu^2,\nu^2}}\big)
\leq \frac14\sup_{t\in[0,T]}\text{\rm{e}}^{-\delta t}[\mathbb W_\eta(\nu^1_t,\nu^2_t)+\mathbb W_k(\nu^1_t,\nu^2_t)]\\
&\quad\qquad\qquad\qquad\qquad\qquad+C\gamma(1+|\cdot|^k)\delta^{1/\alpha-1}
\sup_{t\in[0,T]}\text{\rm{e}}^{-\delta t}\left[
\|\mu^1_t-\mu^2_t\|_{k,var}+\mathbb W_{k}(\mu^1_t,\mu^2_t)
\right]. \end{align*} \end{lem}
\begin{proof} By Lemma \ref{dunh4} with $s=0$, we obtain that for $t>0$ and $f\in\scr B_b(\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d)$, \begin{align*} &\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\big[
P_{t}^{\mu^1,\nu^1}f(x)-P_{t}^{\mu^2,\nu^2}f(x)\big]\,\gg(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x)\\ &=\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d} \big[Q_{t}^{\nu^1}f(x)-Q_{t}^{\nu^2}f(x)\big] \,\gg(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x)\\ &\quad+\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\gg(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x)\int_0^t \big[ P_{r}^{\mu^1,\nu^1} \left\<b_r(\cdot, \mu^1_r),\nabla Q_{r,t}^{\nu^1}f\right\>(x)-P_{r}^{\mu^2,\nu^2} \left\<b_r(\cdot, \mu^2_r),\nabla Q_{r,t}^{\nu^2}f\right\>(x)\big]\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\\ &=\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d} \big[Q_{t}^{\nu^1}f(x)-Q_{t}^{\nu^2}f(x)\big] \,\gg(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x) \\ &\quad+\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\gg(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x)\int_0^t \big[ P_{r}^{\mu^1,\nu^1} \left\<b_r(\cdot, \mu^1_r),\nabla Q_{r,t}^{\nu^1}f\right\>(x)-P_{r}^{\mu^2,\nu^2} \left\<b_r(\cdot, \mu^1_r),\nabla Q_{r,t}^{\nu^1}f\right\>(x)\big]\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\\ &\quad+\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\gg(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x)\int_0^t P_{r}^{\mu^2,\nu^2} \left\<b_r(\cdot, \mu^1_r)-b_r(\cdot, \mu^2_r), \nabla Q_{r,t}^{\nu^1}f\right\>(x)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\\ &\quad+\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\gg(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x)\int_0^t P_{r}^{\mu^2,\nu^2} \left\<b_r(\cdot, \mu^2_r),\nabla Q_{r,t}^{\nu^1}f-\nabla Q_{r,t}^{\nu^2}f \right\>(x)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\\ &=:\mathsf{J}_1+\mathsf{J}_2+\mathsf{J}_3+\mathsf{J}_4. \end{align*} Note that \begin{align*}
\big\|\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{X}_{t}^{\gg,\mu^1,\nu^1}}-\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{X}_{t}^{\gg,\mu^2,\nu^2}}\big\|_{k,var}
&=\sup_{f\in\scr B_b(\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d),|f|\leq 1+|\cdot|^k}\left|
\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\big[
P_{t}^{\mu^1,\nu^1}f(x)-P_{t}^{\mu^2,\nu^2}f(x)\big]\,\gg(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x)\right|\\
&\leq\sum_{i=1}^4\sup_{f\in\scr B_b(\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d),|f|\leq 1+|\cdot|^k}|\mathsf{J}_i|. \end{align*}
We will estimate the terms $|\mathsf{J}_i|$, $i=1,\dots,4$ separately. First, it follows from \eqref{g0'}
with $\epsilon=k$ and $\epsilon=0$ that for all $t\in(0,T]$, $\delta>0$ and $f\in\scr B_b(\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d)$ satisfying $|f|\leq 1+|\cdot|^k$, \begin{align*}
|\mathsf{J}_1|&=\left|\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\gg(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x)
\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\big[q_{t}^{\nu^1}(x,y)-q_{t}^{\nu^2}(x,y)\big]f(y)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y
\right|\\
&\leq \int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\gg(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x)\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\big|q_{t}^{\nu^1}(x,y)-q_{t}^{\nu^2}(x,y)\big|(1+|y|^k)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y\\
&\leq C\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\gg(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x)\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\big|q_{t}^{\nu^1}(x,y)-q_{t}^{\nu^2}(x,y)\big|(1+|x|^k+|y-x|^k)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y\\
&\leq C\gamma(1+|\cdot|^k)\E\left[
\big(S_t^{-1}+S_t^{-1+k/2}\big)\int_0^t[\mathbb W_\eta(\nu^1_r,\nu^2_r)+\mathbb W_k(\nu^1_r,\nu^2_r)]\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D S_r
\right]\\
&\leq C\gamma(1+|\cdot|^k)
\sup_{s\in[0,t]}[\mathbb W_\eta(\nu^1_s,\nu^2_s)+\mathbb W_k(\nu^1_s,\nu^2_s)]
\times\E\left[
\big(S_t^{-1}+S_t^{-1+k/2}\big)S_t
\right]\\
&\leq C\gamma(1+|\cdot|^k)\text{\rm{e}}^{\delta t}
\sup_{s\in[0,t]}\text{\rm{e}}^{-\delta s}[\mathbb W_\eta(\nu^1_s,\nu^2_s)+\mathbb W_k(\nu^1_s,\nu^2_s)]
\times\left(
1+\E\big[S_T^{k/2}\big]
\right)\\
&\leq C\gamma(1+|\cdot|^k)\text{\rm{e}}^{\delta t}
\sup_{s\in[0,T]}\text{\rm{e}}^{-\delta s}[\mathbb W_\eta(\nu^1_s,\nu^2_s)+\mathbb W_k(\nu^1_s,\nu^2_s)]. \end{align*}
Now we turn to $|\mathsf{J}_2|$. Note that \begin{align*}
|\mathsf{J}_2|
&=\left|\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\gg(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x)\int_0^t\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r
\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d} \big[p_{r}^{\mu^1,\nu^1}(x,y) -p_{r}^{\mu^2,\nu^2}(x,y) \big]
\left\<b_r(y, \mu^1_r),
\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\nabla q_{r,t}^{\nu^1}(\cdot,z)(y)f(z)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D z\right\>\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y\right|\\
&\leq\int_0^t\left|
\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\gg(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x)\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d} \big[p_{r}^{\mu^1,\nu^1}(x,y) -p_{r}^{\mu^2,\nu^2}(x,y) \big]
\left\<b_r(y, \mu^1_r),
\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\nabla q_{r,t}^{\nu^1}(\cdot,z)(y)f(z)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D z\right\>\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y
\right|\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r. \end{align*} Since it follows from $(A2)$ and \eqref{g1'} with $\epsilon=k$ and $\epsilon=0$ that \begin{align*}
\left|\left\<b_r(y, \mu^1_r),
\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\nabla q_{r,t}^{\nu^1}(\cdot,z)(y)f(z)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D z\right\>\right|
&\leq \|b\|_\infty\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\big|\nabla q_{r,t}^{\nu^1}(\cdot,z)(y)\big|(1+|z|^k)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D z\\
&\leq C(t-r)^{-1/\alpha}(1+|y|^k), \end{align*}
we have for all $t\in(0,T]$, $\delta>0$ and $f\in\scr B_b(\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d)$ satisfying $|f|\leq 1+|\cdot|^k$, \begin{align*}
|\mathsf{J}_2|
&\leq C\int_0^t(t-r)^{-1/\alpha}
\big\|\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{X}_{r}^{\gg,\mu^1,\nu^1}}-\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{X}_{r}^{\gg,\mu^2,\nu^2}}\big\|_{k,var}
\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\\
&=C\text{\rm{e}}^{\delta t}\int_0^t\text{\rm{e}}^{-\delta r}
\big\|\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{X}_{r}^{\gg,\mu^1,\nu^1}}-\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{X}_{r}^{\gg,\mu^2,\nu^2}}\big\|_{k,var}\cdot(t-r)^{-1/\alpha}
\text{\rm{e}}^{-\delta (t-r)}
\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\\
&\leq C\text{\rm{e}}^{\delta t}\sup_{s\in[0,T]}\text{\rm{e}}^{-\delta s}\big\|\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{X}_{s}^{\gg,\mu^1,\nu^1}}-\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{X}_{s}^{\gg,\mu^2,\nu^2}}\big\|_{k,var}
\times\int_0^t
(t-r)^{-1/\alpha}
\text{\rm{e}}^{-\delta (t-r)}
\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\\
&\leq C\delta^{1/\alpha-1}\text{\rm{e}}^{\delta t}\sup_{s\in[0,T]}\text{\rm{e}}^{-\delta s}\big\|\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{X}_{s}^{\gg,\mu^1,\nu^1}}-\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{X}_{s}^{\gg,\mu^2,\nu^2}}\big\|_{k,var}, \end{align*} where in the last inequality we have used the following fact \begin{equation}\label{expint}
\sup_{t\in[0,T]}
\int_0^t(t-r)^{-1/\alpha}\text{\rm{e}}^{-\delta(t-r)}\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r
\leq\int_0^\infty r^{-1/\alpha}\text{\rm{e}}^{-\delta r}\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r
=\Gamma\left(1-\frac{1}{\alpha}\right)\delta^{1/\alpha-1} \end{equation}
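For the reader's convenience, the last identity in \eqref{expint} follows from the change of variables $u=\delta r$; note that the integral is finite precisely because $1/\alpha<1$:
$$\int_0^\infty r^{-1/\alpha}\text{\rm{e}}^{-\delta r}\,\text{\rm{d}} r
=\delta^{1/\alpha-1}\int_0^\infty u^{-1/\alpha}\text{\rm{e}}^{-u}\,\text{\rm{d}} u
=\Gamma\left(1-\frac{1}{\alpha}\right)\delta^{1/\alpha-1}.$$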
By $(A2)$, \eqref{g1'} with $\epsilon=k$ and $\epsilon=0$, \eqref{NNT} and \eqref{expint}, we derive for all $t\in(0,T]$, $\delta>0$ and $f\in\scr B_b(\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d)$ satisfying $|f|\leq 1+|\cdot|^k$, \begin{align*}
|\mathsf{J}_3|&=\left|\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\gg(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x)\int_0^t\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r
\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}
p_{r}^{\mu^2,\nu^2} (x,y)\left\<b_r(y, \mu^1_r)-b_r(y, \mu^2_r),
\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\nabla q_{r,t}^{\nu^1}(\cdot,z)(y)f(z)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D z\right\>\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y
\right|\\
&\leq\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\gg(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x)\int_0^t\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}p_{r}^{\mu^2,\nu^2} (x,y)
|b_r(y, \mu^1_r)-b_r(y, \mu^2_r)|\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y
\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\big|
\nabla q_{r,t}^{\nu^1}(\cdot,z)(y)\big|(1+|z|^k)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D z\\
&\leq C\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\gg(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x)\int_0^t(t-r)^{-1/\alpha}\left[
\|\mu^1_r-\mu^2_r\|_{k,var}+\mathbb W_{k}(\mu^1_r,\mu^2_r)
\right]\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r
\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}p_{r}^{\mu^2,\nu^2} (x,y)(1+|y|^k)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y\\
&\leq C\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}(1+|x|^k)\,\gg(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x)\int_0^t(t-r)^{-1/\alpha}\left[
\|\mu^1_r-\mu^2_r\|_{k,var}+\mathbb W_{k}(\mu^1_r,\mu^2_r)
\right]\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\\
&\leq C\gamma(1+|\cdot|^k)\text{\rm{e}}^{\delta t}\sup_{s\in[0,T]}\text{\rm{e}}^{-\delta s}\left[
\|\mu^1_s-\mu^2_s\|_{k,var}+\mathbb W_{k}(\mu^1_s,\mu^2_s)
\right]
\times\int_0^t(t-r)^{-1/\alpha}\text{\rm{e}}^{-\delta(t-r)}\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\\
&\leq C\gamma(1+|\cdot|^k)\delta^{1/\alpha-1}\text{\rm{e}}^{\delta t}\sup_{s\in[0,T]}\text{\rm{e}}^{-\delta s}\left[
\|\mu^1_s-\mu^2_s\|_{k,var}+\mathbb W_{k}(\mu^1_s,\mu^2_s)
\right]. \end{align*}
It follows from $(A2)$, \eqref{g2'} with $\epsilon=k$ and $\epsilon=0$, and \eqref{NNT} that for all $t\in(0,T]$, $\delta>0$ and $f\in\scr B_b(\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d)$ satisfying $|f|\leq 1+|\cdot|^k$, \begin{align*}
|\mathsf{J}_4|&=\left|\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\gg(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x)\int_0^t\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r
\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}p_{r}^{\mu^2,\nu^2} (x,y)
\left\<b_r(y, \mu^2_r),
\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\big[\nabla q_{r,t}^{\nu^1}(\cdot,z)(y)-\nabla q_{r,t}^{\nu^2}(\cdot,z)(y)\big]
f(z)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D z\right\>\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y
\right|\\
&\leq\|b\|_\infty\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\gg(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x)\int_0^t\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}p_{r}^{\mu^2,\nu^2} (x,y)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y
\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\big|\nabla q_{r,t}^{\nu^1}(\cdot,z)(y)-\nabla q_{r,t}^{\nu^2}(\cdot,z)(y)\big|
(1+|z|^k)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D z\\
&\leq C\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\gg(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x)\int_0^t\E\left[
\left(
(S_t-S_r)^{(k-3)/2}+(S_t-S_r)^{-3/2}
\right)
\int_r^t[\mathbb W_\eta(\nu^1_s,\nu^2_s)+\mathbb W_k(\nu^1_s,\nu^2_s)]\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D S_s
\right]\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\\
&\quad\times
\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}p_{r}^{\mu^2,\nu^2} (x,y)(1+|y|^k)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y\\
&\leq C\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}(1+|x|^k)\,\gg(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x)\\
&\quad\times\int_0^t\E\left[
\left(
(S_t-S_r)^{(k-3)/2}+(S_t-S_r)^{-3/2}
\right)
\int_r^t\text{\rm{e}}^{-\delta s}[\mathbb W_\eta(\nu^1_s,\nu^2_s)+\mathbb W_k(\nu^1_s,\nu^2_s)]\cdot\text{\rm{e}}^{\delta s}\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D S_s
\right]\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\\
&\leq C\gamma(1+|\cdot|^k)\sup_{s\in[0,T]}\text{\rm{e}}^{-\delta s}[\mathbb W_\eta(\nu^1_s,\nu^2_s)+\mathbb W_k(\nu^1_s,\nu^2_s)]\\
&\quad\times
\int_0^t\E\left[
\left(
(S_t-S_r)^{(k-3)/2}+(S_t-S_r)^{-3/2}
\right)
\int_r^t\text{\rm{e}}^{\delta \tau}\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D S_\tau
\right]\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r. \end{align*}
Combining the bounds for $|\mathsf{J}_i|$, $i=1,2,3,4$, we obtain for any $\delta>0$, \begin{align*}
&\sup_{t\in[0,T]}\text{\rm{e}}^{-\delta t}\big\|\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{X}_{t}^{\gg,\mu^1,\nu^1}}-\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{X}_{t}^{\gg,\mu^2,\nu^2}}\big\|_{k,var}
\leq\sum_{i=1}^4\sup_{t\in(0,T]}\sup_{f\in\scr B_b(\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d),|f|\leq 1+|\cdot|^k}
\text{\rm{e}}^{-\delta t}|\mathsf{J}_i|\\
&\leq C\gamma(1+|\cdot|^k)\sup_{s\in[0,T]}\text{\rm{e}}^{-\delta s}[\mathbb W_\eta(\nu^1_s,\nu^2_s)+\mathbb W_k(\nu^1_s,\nu^2_s)]\\
&\quad+C\delta^{1/\alpha-1}
\sup_{s\in[0,T]}\text{\rm{e}}^{-\delta s}\big\|\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{X}_{s}^{\gg,\mu^1,\nu^1}}-\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{X}_{s}^{\gg,\mu^2,\nu^2}}\big\|_{k,var}\\
&\quad+C\gamma(1+|\cdot|^k)\delta^{1/\alpha-1}\sup_{s\in[0,T]}\text{\rm{e}}^{-\delta s}\left[
\|\mu^1_s-\mu^2_s\|_{k,var}+\mathbb W_{k}(\mu^1_s,\mu^2_s)
\right]\\
&\quad+C\gamma(1+|\cdot|^k)\sup_{s\in[0,T]}\text{\rm{e}}^{-\delta s}[\mathbb W_\eta(\nu^1_s,\nu^2_s)+\mathbb W_k(\nu^1_s,\nu^2_s)]\\
&\qquad\times
\sup_{t\in(0,T]}\text{\rm{e}}^{-\delta t}\int_0^t\E\left[
\left(
(S_t-S_r)^{(k-3)/2}+(S_t-S_r)^{-3/2}
\right)
\int_r^t\text{\rm{e}}^{\delta \tau}\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D S_\tau
\right]\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r. \end{align*} Taking $\delta>0$ large enough and using Lemma \ref{slimit}\,ii) (with $\kappa=k/2$ and $\kappa=0$) below, we derive the first assertion.
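To spell out this last step: since $1/\alpha<1$ (as already used in \eqref{expint}), $\delta^{1/\alpha-1}\rightarrow 0$ as $\delta\rightarrow\infty$, and by Lemma \ref{slimit}\,ii) (with $\kappa=k/2$ and $\kappa=0$) the expectation factor in the last term also tends to $0$ as $\delta\rightarrow\infty$, uniformly in $t\in(0,T]$. Choosing $\delta$ so large that $C\delta^{1/\alpha-1}\leq\frac12$ and that this factor is bounded by $1$, the second summand can be absorbed into the left-hand side (the supremum there is finite, since all the measures involved lie in $\scr P_k$), and the remaining terms give the first inequality of the lemma with a new constant $C$.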
To prove the second inequality, we note that \begin{align*}
\mathbb W_\eta\big(\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{X}_{t}^{\gg,\mu^1,\nu^1}},\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{X}_{t}^{\gg,\mu^2,\nu^2}}\big)
&=\sup_{f\in\scr B_b(\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d),[f]_\eta\leq 1,f(0)=0}\left|
\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\big[
P_{t}^{\mu^1,\nu^1}f(x)-P_{t}^{\mu^2,\nu^2}f(x)\big]\,\gg(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x)\right|\\
&\leq\sum_{i=1}^4\sup_{f\in\scr B_b(\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d),[f]_\eta\leq 1,f(0)=0}|\mathsf{J}_i|\\
&\leq\sup_{f\in\scr B_b(\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d),[f]_\eta\leq 1}|\mathsf{J}_1|
+\sum_{i=2}^4\sup_{f\in\scr B_b(\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d),|f|\leq1+|\cdot|^k}|\mathsf{J}_i|. \end{align*} It follows from \eqref{g0'} with $\epsilon=\eta$ that for all $t\in(0,T]$, $\delta>0$ and $f\in\scr B_b(\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d)$ satisfying $[f]_\eta\leq 1$, \begin{align*}
|\mathsf{J}_1|&=\left|\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\gg(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x)
\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\big[q_{t}^{\nu^1}(x,y)-q_{t}^{\nu^2}(x,y)\big]\big(f(y)-f(x)\big)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y
\right|\\
&\leq \int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\gg(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x)\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d}\big|q_{t}^{\nu^1}(x,y)-q_{t}^{\nu^2}(x,y)\big||y-x|^\eta\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y\\
&\leq C\E\left[
S_t^{-1+\eta/2}\int_0^t[\mathbb W_\eta(\nu^1_r,\nu^2_r)+\mathbb W_k(\nu^1_r,\nu^2_r)]\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D S_r
\right]\\
&\leq C\sup_{s\in[0,t]}\text{\rm{e}}^{-\delta s}[\mathbb W_\eta(\nu^1_s,\nu^2_s)+\mathbb W_k(\nu^1_s,\nu^2_s)]
\times\E\left[
S_t^{-1+\eta/2}\int_0^t\text{\rm{e}}^{\delta r}\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D S_r
\right]. \end{align*}
This, together with the above bounds for $|\mathsf{J}_i|$, $i=2,3,4$, implies that for all $\delta>0$ \begin{align*}
&\sup_{t\in[0,T]}\text{\rm{e}}^{-\delta t}\mathbb W_\eta\big(\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{X}_{t}^{\gg,\mu^1,\nu^1}},\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{X}_{t}^{\gg,\mu^2,\nu^2}}\big)\\
&\qquad\leq\sup_{t\in(0,T]}\sup_{f\in\scr B_b(\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d),[f]_\eta\leq 1}\text{\rm{e}}^{-\delta t}|\mathsf{J}_1|
+\sum_{i=2}^4\sup_{t\in(0,T]}\sup_{f\in\scr B_b(\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d),|f|\leq1+|\cdot|^k}\text{\rm{e}}^{-\delta t}|\mathsf{J}_i|\\
&\qquad\leq C\sup_{s\in[0,T]}\text{\rm{e}}^{-\delta s}[\mathbb W_\eta(\nu^1_s,\nu^2_s)+\mathbb W_k(\nu^1_s,\nu^2_s)]
\times\sup_{t\in(0,T]}\text{\rm{e}}^{-\delta t}\E\left[
S_t^{-1+\eta/2}\int_0^t\text{\rm{e}}^{\delta r}\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D S_r
\right]\\
&\qquad\quad+C\delta^{1/\alpha-1}
\sup_{s\in[0,T]}\text{\rm{e}}^{-\delta s}\big\|\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{X}_{s}^{\gg,\mu^1,\nu^1}}-\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{X}_{s}^{\gg,\mu^2,\nu^2}}\big\|_{k,var}\\
&\qquad\quad+C\gamma(1+|\cdot|^k)\delta^{1/\alpha-1}\sup_{s\in[0,T]}\text{\rm{e}}^{-\delta s}\left[
\|\mu^1_s-\mu^2_s\|_{k,var}+\mathbb W_{k}(\mu^1_s,\mu^2_s)
\right]\\
&\qquad\quad+C\gamma(1+|\cdot|^k)\sup_{s\in[0,T]}\text{\rm{e}}^{-\delta s}[\mathbb W_\eta(\nu^1_s,\nu^2_s)+\mathbb W_k(\nu^1_s,\nu^2_s)]\\
&\qquad\qquad\times
\sup_{t\in(0,T]}\text{\rm{e}}^{-\delta t}\int_0^t\E\left[
\left(
(S_t-S_r)^{(k-3)/2}+(S_t-S_r)^{-3/2}
\right)
\int_r^t\text{\rm{e}}^{\delta \tau}\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D S_\tau
\right]\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r. \end{align*}
Combining this with the upper bound for $\sup_{t\in[0,T]}\text{\rm{e}}^{-\delta t}\big\|\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{X}_{t}^{\gg,\mu^1,\nu^1}}-\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{X}_{t}^{\gg,\mu^2,\nu^2}}\big\|_{k,var}$ in the first assertion and using Lemma \ref{slimit} below, we obtain the desired inequality and the proof is now finished. \end{proof}
\subsection{Proof of Proposition \ref{PW}}
\begin{proof}[Proof of Proposition \ref{PW}] By Lemma \ref{PW0} and the second inequality in Lemma \ref{vdh22s}, we know that for all $\gamma\in\scr P_k$, $\mu^i,\nu^i\in C([0,T],\scr P_k)$, $i=1,2$, and $\delta>0$ large enough, \begin{equation}\label{wke}
\begin{aligned}
&\sup_{t\in[0,T] }\text{\rm{e}}^{-\delta t}\big[\mathbb W_\eta(\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu^1,\nu^1}},\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu^2,\nu^2}})+ \mathbb W_k(\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu^1,\nu^1}},\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu^2,\nu^2}})\big]\\
&\qquad\qquad\leq \sup_{t\in[0,T] }\text{\rm{e}}^{-\delta t}\mathbb W_\eta(\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu^1,\nu^1}},\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu^2,\nu^2}})
+\sup_{t\in[0,T] }\text{\rm{e}}^{-\delta t}\mathbb W_k(\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu^1,\nu^1}},\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu^2,\nu^2}})\\
&\qquad\qquad\le \frac{1}{2}\sup_{t\in[0,T]}\text{\rm{e}}^{-\delta t}[\mathbb W_\eta(\nu^1_t,\nu^2_t)+\mathbb W_k(\nu^1_t,\nu^2_t)]\\
&\qquad\qquad\quad+C\gamma(1+|\cdot|^k)[\delta^{1/\alpha-1}+\delta^{-1}]
\sup_{t\in[0,T]}\text{\rm{e}}^{-\delta t}\left[
\|\mu^1_t-\mu^2_t\|_{k,var}+\mathbb W_{k}(\mu^1_t,\mu^2_t)\right]. \end{aligned} \end{equation} Taking $\mu^1=\mu^2=\mu\in C([0,T],\scr P_k)$, it holds that for $\delta>0$ large enough, \begin{align*}
\sup_{t\in[0,T] }\text{\rm{e}}^{-\delta t}\big[\mathbb W_\eta(\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu,\nu^1}},&\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu,\nu^2}})+ \mathbb W_k(\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu,\nu^1}},\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu,\nu^2}})\big]\\
&\qquad\leq\frac{1}{2}\sup_{t\in[0,T]}\text{\rm{e}}^{-\delta t}[\mathbb W_\eta(\nu^1_t,\nu^2_t)+\mathbb W_k(\nu^1_t,\nu^2_t)]. \end{align*} This means that, for $\delta>0$ large enough, the map $$\nu\mapsto \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{\cdot}^{\gg,\mu,\nu}}$$ is strictly contractive in $C([0,T];\scr P_k)$ under the complete metric $$\sup_{t\in[0,T] }\text{\rm{e}}^{-\delta t}[\mathbb W_\eta(\nu^1_t,\nu^2_t)+\mathbb W_k(\nu^1_t,\nu^2_t)]$$ for $\nu^1,\nu^2\in C([0,T];\scr P_k)$. Then it has a unique fixed point $\nu^\ast=\nu^\ast(\gg,\mu)\in C([0,T];\scr P_k)$, i.e.\ $\nu=\nu^\ast$ is the unique solution to the equation $$
\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu,\nu}}=\nu_t,\quad t\in[0,T]. $$ Therefore, $X_{t}^{\gg,\mu}=X_{t}^{\gg,\mu,\nu^\ast}$ solves \eqref{ED}, and this proves the first assertion.
Note that $X_{\cdot}^{\gg,\mu,\nu}=X_{\cdot}^{\gg,\mu}$ for $\nu=\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{\cdot}^{\gg,\mu}}$. To prove the second claim, we take $\nu^1=\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{\cdot}^{\gg,\mu^1}}$ and $\nu^2=\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{\cdot}^{\gg,\mu^2}}$ in \eqref{wke} to get that for $\delta>0$ large enough, \begin{align*}
\sup_{t\in[0,T] }\text{\rm{e}}^{-\delta t}&\big[\mathbb W_\eta(\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu^1}},\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu^2}})+ \mathbb W_k(\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu^1}},\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu^2}})\big]\\
&\le \frac{1}{2}\sup_{t\in[0,T]}\text{\rm{e}}^{-\delta t}\big[\mathbb W_\eta(\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu^1}},\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu^2}})
+\mathbb W_k(\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu^1}},\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu^2}})\big]\\
&\quad+C\gamma(1+|\cdot|^k)[\delta^{1/\alpha-1}+\delta^{-1}]
\sup_{t\in[0,T]}\text{\rm{e}}^{-\delta t}\left[
\|\mu^1_t-\mu^2_t\|_{k,var}+\mathbb W_{k}(\mu^1_t,\mu^2_t)\right], \end{align*} which immediately yields the desired estimate. \end{proof}
\section{Proof of Theorem \ref{EUS}}
\begin{proof}[Proof of Theorem \ref{EUS}] Taking $\nu^1=\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{\cdot}^{\gg,\mu^1}}$ and $\nu^2=\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{\cdot}^{\gg,\mu^2}}$ in the first inequality in Lemma \ref{vdh22s}, and using the estimate in Proposition \ref{PW}, we obtain that for $\delta>0$ large enough, \begin{align*}
&\sup_{t\in[0,T]}\text{\rm{e}}^{-\delta t}\big[
\|\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu^1}}-\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu^2}}\|_{k,var}+\mathbb W_{k}(\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu^1}},\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu^2}})\big]\\
&\qquad\qquad\qquad\leq
\sup_{t\in[0,T]}\text{\rm{e}}^{-\delta t}\|\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu^1}}-\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu^2}}\|_{k,var}
+\sup_{t\in[0,T]}\text{\rm{e}}^{-\delta t}\mathbb W_{k}(\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu^1}},\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu^2}})\\
&\qquad\qquad\qquad\leq C\gamma(1+|\cdot|^k)
\sup_{t\in[0,T]}\text{\rm{e}}^{-\delta t}\big[
\mathbb W_\eta(\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu^1}},\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu^2}})+
\mathbb W_k(\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu^1}},\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{t}^{\gg,\mu^2}})
\big]\\
&\qquad\qquad\qquad\quad+C\gamma(1+|\cdot|^k)
\delta^{1/\alpha-1}\sup_{t\in[0,T]}\text{\rm{e}}^{-\delta t}
\left[
\|\mu^1_t-\mu^2_t\|_{k,var}+\mathbb W_{k}(\mu^1_t,\mu^2_t)
\right]\\
&\qquad\qquad\qquad\leq C\gamma(1+|\cdot|^k)^2\left[
\delta^{1/\alpha-1}+\delta^{-1}
\right]\sup_{t\in[0,T]}\text{\rm{e}}^{-\delta t}
\left[
\|\mu^1_t-\mu^2_t\|_{k,var}+\mathbb W_{k}(\mu^1_t,\mu^2_t)
\right]. \end{align*} We can now conclude that, for $\delta>0$ large enough, the map $$
\mu\mapsto \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{\cdot}^{\gg,\mu}} $$ is strictly contractive in $C([0,T];\scr P_k)$ under the complete metric $$\sup_{t\in[0,T]}\text{\rm{e}}^{-\delta t}\left[
\|\mu^1_t-\mu^2_t\|_{k,var}+\mathbb W_{k}(\mu^1_t,\mu^2_t)\right]$$ for $\mu^1,\mu^2\in C([0,T];\scr P_k)$. Then it has a unique fixed point $\mu^\ast=\mu^\ast(\gg)\in C([0,T];\scr P_k)$ such that $\mu^\ast=\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{\cdot}^{\gg,\mu^\ast}}$, and $X_t=X_{t}^{\gg,\mu^\ast}$ is the unique solution to \eqref{E1} with $\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_0}=\gamma\in\scr P_k$.
Recall that $$
X_t=X_0+\int_0^tb_r(X_r,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_r})\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r+\int_0^t\sigma_r(\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_r})\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W_{S_r}, $$ where $\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_0}\in\scr P_k$. Since the coefficients $b$ and $\sigma$ are bounded, as in \eqref{gr4e23} it is easy to get $$
\E\left[\sup_{t\in[0,T]}|X_t|^k\right]
\leq C\E\big[|X_0|^k\big]+C. $$ This completes the proof. \end{proof}
\section{Appendix}\label{app}
\begin{lem}\label{slimit}
Let $T>0$ and $S_t$ be an $\frac{\alpha}{2}$-stable subordinator \textup{(}$0<\alpha<2$\textup{)}.
\noindent\textup{i)}
If $0<\kappa<\alpha/2$, then
$$
\lim_{\delta\rightarrow\infty}\sup_{t\in(0,T]}\text{\rm{e}}^{-\delta t}\E\left[
S_t^{\kappa-1}\int_0^t\text{\rm{e}}^{\delta r}\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D S_r
\right]=0.
$$
\noindent\textup{ii)}
If $(1-\alpha)/2<\kappa<(1+\alpha)/2$, then
$$
\lim_{\delta\rightarrow\infty}\sup_{t\in(0,T]}\text{\rm{e}}^{-\delta t}
\int_0^t \E\left[(S_t-S_r)^{\kappa-3/2}
\int_r^t\text{\rm{e}}^{\delta \tau}\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D S_\tau
\right]\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r
=0.
$$ \end{lem}
\begin{proof}
i) Since $r\mapsto S_r$ is nondecreasing, we have for any $t>0$, $\delta>0$ and $\epsilon\in(0,1)$,
\begin{align*}
S_t^{\kappa-1}\int_0^t\text{\rm{e}}^{\delta r}\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D S_r
&=S_t^{\kappa-1}\left(
\int_0^{(1-\epsilon)t}+\int_{(1-\epsilon)t}^t
\right)\text{\rm{e}}^{\delta r}\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D S_r\\
&\leq \text{\rm{e}}^{\delta(1-\epsilon)t}S_t^{\kappa-1}S_{(1-\epsilon)t}
+\text{\rm{e}}^{\delta t}S_t^{\kappa-1}\left(S_t-S_{(1-\epsilon)t}\right)\\
&\leq \text{\rm{e}}^{\delta(1-\epsilon)t}S_t^\kappa+\text{\rm{e}}^{\delta t}\left(S_t-S_{(1-\epsilon)t}\right)^\kappa.
\end{align*}
Since a subordinator has stationary increments, it follows that
\begin{align*}
\text{\rm{e}}^{-\delta t}\E\left[
S_t^{\kappa-1}\int_0^t\text{\rm{e}}^{\delta r}\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D S_r
\right]
&\leq\text{\rm{e}}^{-\delta\epsilon t}\E\left[S_t^\kappa\right]
+\E\left[\left(S_t-S_{(1-\epsilon)t}\right)^\kappa\right]\\
&=\text{\rm{e}}^{-\delta\epsilon t}\E\left[S_t^\kappa\right]
+\E\left[S_{\epsilon t}^\kappa\right]\\
&=\text{\rm{e}}^{-\delta\epsilon t}t^{2\kappa/\alpha}
\E\left[S_1^\kappa\right]
+(\epsilon t)^{2\kappa/\alpha}
\E\left[S_1^\kappa\right],
\end{align*}
which implies that for any $\delta>0$ and $\epsilon\in(0,1)$,
\begin{align*}
\sup_{t\in(0,T]}\text{\rm{e}}^{-\delta t}\E\left[
S_t^{\kappa-1}\int_0^t\text{\rm{e}}^{\delta r}\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D S_r
\right]
&\leq\E\left[S_1^\kappa\right]\left(
\sup_{t\in(0,T]}\text{\rm{e}}^{-\delta\epsilon t}t^{2\kappa/\alpha}
+\sup_{t\in(0,T]}(\epsilon t)^{2\kappa/\alpha}
\right)\\
&\leq\E\left[S_1^\kappa\right]\left(
\left(
\frac{2\kappa}{\alpha\text{\rm{e}}\epsilon\delta}
\right)^{2\kappa/\alpha}
+(\epsilon T)^{2\kappa/\alpha}
\right).
\end{align*}
By letting first $\delta\rightarrow\infty$ then $\epsilon\downarrow0$, we get the desired claim.
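In the two displays above we used two elementary facts, recorded here for the reader's convenience: by self-similarity of the $\frac{\alpha}{2}$-stable subordinator, $S_{ct}\overset{d}{=}c^{2/\alpha}S_t$ for every $c>0$, so that $\E\left[S_{ct}^\kappa\right]=c^{2\kappa/\alpha}\E\left[S_t^\kappa\right]$ (which is finite since $0<\kappa<\alpha/2$); and for $\beta,c>0$ one has $\sup_{t>0}t^{\beta}\text{\rm{e}}^{-ct}=\left(\frac{\beta}{c\text{\rm{e}}}\right)^{\beta}$, attained at $t=\beta/c$, applied here with $\beta=2\kappa/\alpha$ and $c=\delta\epsilon$.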
\noindent
ii) Pick $\theta\in\mathbb R} \def\ff{\frac} \def\ss{\sqrt$ such that
$$
1-\frac\alpha2<\theta<1\wedge\left(\frac32-\kappa\right).
$$
Since $r\mapsto S_r$ is nondecreasing, for any $0\leq r< t$, $\delta>0$ and $\epsilon\in(0,1)$,
\begin{align*}
&(S_t-S_r)^{\kappa-3/2}\int_r^t\text{\rm{e}}^{\delta \tau}\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D S_\tau
=(S_t-S_r)^{\kappa-3/2}\left(
\int_r^{t-\epsilon(t-r)}+\int_{t-\epsilon(t-r)}^t
\right)\text{\rm{e}}^{\delta \tau}\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D S_\tau\\
&\leq\text{\rm{e}}^{\delta\left(t-\epsilon(t-r)\right)}
(S_t-S_r)^{\kappa-3/2}
\left(S_{t-\epsilon(t-r)}-S_r\right)
+\text{\rm{e}}^{\delta t}(S_t-S_r)^{\theta+\kappa-3/2}
(S_t-S_r)^{-\theta}
\left(
S_t-S_{t-\epsilon(t-r)}
\right)\\
&\leq\text{\rm{e}}^{\delta\left(t-\epsilon(t-r)\right)}
(S_t-S_r)^{\kappa-1/2}
+\text{\rm{e}}^{\delta t}\left(S_{t-\epsilon(t-r)}-S_r\right)^{\theta+\kappa-3/2}
\left(
S_t-S_{t-\epsilon(t-r)}
\right)^{1-\theta}.
\end{align*}
Because of the independent and stationary increments property
of a subordinator, we obtain
\begin{align*}
&\E\left[(S_t-S_r)^{\kappa-3/2}
\int_r^t\text{\rm{e}}^{\delta \tau}\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D S_\tau
\right]\\
&\qquad\leq\text{\rm{e}}^{\delta\left(t-\epsilon(t-r)\right)}
\E\left[
\left(S_t-S_r\right)^{\kappa-1/2}
\right]
+\text{\rm{e}}^{\delta t}\E\left[
\left(S_{t-\epsilon(t-r)}-S_r\right)^{\theta+\kappa-3/2}
\right]
\E\left[
\left(
S_t-S_{t-\epsilon(t-r)}
\right)^{1-\theta}
\right]\\
&\qquad=\text{\rm{e}}^{\delta\left(t-\epsilon(t-r)\right)}
\E\left[
S_{t-r}^{\kappa-1/2}
\right]
+\text{\rm{e}}^{\delta t}\E\left[
S_{(1-\epsilon)(t-r)}
^{\theta+\kappa-3/2}
\right]
\E\left[
S_{\epsilon(t-r)}^{1-\theta}
\right]\\
&\qquad=\text{\rm{e}}^{\delta\left(t-\epsilon(t-r)\right)}
(t-r)^{(2\kappa-1)/\alpha}
\E\left[
S_1^{\kappa-1/2}
\right]\\
&\qquad\quad+\text{\rm{e}}^{\delta t}
\epsilon^{2(1-\theta)/\alpha}(1-\epsilon)^{(2\theta+2\kappa-3)/\alpha}
(t-r)^{(2\kappa-1)/\alpha}
\E\left[
S_1^{\theta+\kappa-3/2}
\right]
\E\left[S_1^{1-\theta}
\right].
\end{align*}
This yields that for any $\delta>0$ and $\epsilon\in(0,1)$,
\begin{align*}
&\sup_{t\in(0,T]}\text{\rm{e}}^{-\delta t}
\int_0^t \E\left[(S_t-S_r)^{\kappa-3/2}
\int_r^t\text{\rm{e}}^{\delta \tau}\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D S_\tau
\right]\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\\
&\quad\qquad\qquad\leq\E\left[
S_1^{\kappa-1/2}
\right]\sup_{t\in(0,T]}\int_0^t
\text{\rm{e}}^{-\delta\epsilon(t-r)}
(t-r)^{(2\kappa-1)/\alpha}\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\\
&\quad\qquad\qquad\quad+\epsilon^{2(1-\theta)/\alpha}(1-\epsilon)^{(2\theta+2\kappa-3)/\alpha}
\E\left[
S_1^{\theta+\kappa-3/2}
\right]
\E\left[S_1^{1-\theta}
\right]\sup_{t\in(0,T]}\int_0^t
(t-r)^{(2\kappa-1)/\alpha}
\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\\
&\quad\qquad\qquad=\E\left[
S_1^{\kappa-1/2}
\right]\int_0^T
\text{\rm{e}}^{-\delta\epsilon r}
r^{(2\kappa-1)/\alpha}\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\\
&\quad\qquad\qquad\quad+\epsilon^{2(1-\theta)/\alpha}(1-\epsilon)^{(2\theta+2\kappa-3)/\alpha}
\E\left[
S_1^{\theta+\kappa-3/2}
\right]
\E\left[S_1^{1-\theta}
\right]
\int_0^Tr^{(2\kappa-1)/\alpha}
\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r.
\end{align*}
It remains to let first $\delta\rightarrow\infty$ then $\epsilon\downarrow0$
to finish the proof. \end{proof}
\noindent \textbf{Acknowledgement.} C.-S.\ Deng is supported by Natural Science Foundation of Hubei Province of China (2022CFB129). X.\ Huang is supported by National Natural Science Foundation of China (12271398).
\end{document}
\begin{document}
\title{Irreducibility of wave-front sets for \RamiC{depth zero cuspidal} representations}
\author{Avraham Aizenbud} \address{Avraham Aizenbud, Faculty of Mathematics and Computer Science, Weizmann Institute of Science, POB 26, Rehovot 76100, Israel } \email{[email protected]} \urladdr{http://www.aizenbud.org}
\author{Dmitry Gourevitch} \address{Dmitry Gourevitch, Faculty of Mathematics and Computer Science, Weizmann Institute of Science, POB 26, Rehovot 76100, Israel } \email{[email protected]} \urladdr{http://www.wisdom.weizmann.ac.il/~dimagur}
\author{Eitan Sayag} \address{Eitan Sayag,
Department of Mathematics, Ben Gurion University of the Negev, P.O.B. 653, Be'er Sheva 84105, ISRAEL}
\email{[email protected]}
\keywords{\DimaB{Representation, reductive group, algebraic group, nilpotent orbit, wave-front set, character, non-commutative harmonic analysis, generalized Gelfand-Graev models.}} \subjclass[2010]{\DimaB{20G05, 20G25, 22E35, 22E46, 20C33}}
\date{\today}
\maketitle \begin{abstract} \RamiB{We show that the results of \cite{BM, DebPar,Oka,Lus,AAu,Tay} imply a special case of the Moeglin-Waldspurger conjecture on wave-front sets. Namely, we deduce that for large enough residue characteristic, the Zariski closure of the wave-front set of any depth zero irreducible cuspidal representation of any reductive group over a non-Archimedean local field is \RamiC{an irreducible variety}.
In more detail, we use \cite{BM, DebPar,Oka} to reduce the statement to an analogous statement for finite groups of Lie type, which is proven in \cite{Lus,AAu,Tay}.}
\end{abstract}
\tableofcontents
\section{Introduction}
In this paper we prove the following theorem. \begin{introthm}\label{thm:main} For any $n\in \mathbb{N}$ there exists $T\in \mathbb{N}$ such that for any
\begin{itemize}
\item prime $p>T$,
\item local field $F$ of residue characteristic $p$ such that $val_F(p)<n$,
\item reductive group $\bf G$ defined over $F$ such that $\dim \mathbf{G}<n$,
\item cuspidal irreducible representation $\pi$ of depth zero of $\mathbf{G}(F)$,
\end{itemize}
the Zariski closure of the wave-front set $\operatorname{WF}(\pi)$ in $\fg^*$ is irreducible, where $\fg$ denotes the Lie algebra of $\mathbf{G}$.
\end{introthm}
In \S \ref{sec:form} we formulate a more explicit version of this theorem.
\subsection{Idea of the proof}
The natural analogue of Theorem \ref{thm:main} for finite groups of Lie type was proven in \cite{Lus,AAu,Tay}. Barbasch and Moy \cite{BM} \RamiA{provide} a method to describe the wave-front set of a depth zero representation of $\mathbf{G}(F)$ in terms of certain \Dima{representations} of certain finite groups of Lie type. In general, the description is quite complicated, but for cuspidal representations we \RamiA{make} this description very explicit (see Corollary \ref{cor:cusp.BM} below). This explicit description together with the result of \cite{Lus,AAu,Tay} implies Theorem \ref{thm:main}.
\subsection{Related results} The irreducibility of the wave-front set of irreducible representations of finite groups of Lie type was conjectured (in a different language) in \cite{Kaw}. This conjecture was proven in \cite{Lus,AAu,Tay}.
For irreducible representations of p-adic groups \Rami{the irreducibility of the wave-front set was suggested} in \cite{MW87} and proven for some cases, including all irreducible representations of $\operatorname{GL}_n$\Rami{, see} \cite[Chapter II]{MW87}. \Rami{In \cite{Wal18, Wal20}, the irreducibility of the wave-front set} was proven for \Rami{many cases including} anti-tempered and tempered unipotent representations of groups in the inner class of the split form of $\mathrm{SO}(2n+1)$. Recently, it was proven for irreducible Iwahori-spherical depth zero representations, \RamiA{see} \cite[Theorem 1.3.1]{CMBO}. \RamiA{
\section{Formulation of the main result}\label{sec:form} Throughout the paper we fix a reductive algebraic group $\mathbf{G}$ defined over a local non-Archimedean \RamiA{field} $F$. \begin{notation}We denote: \begin{itemize} \item $k$ -- the residue field of $F$, \item $\fg$ -- the Lie algebra of $\mathbf{G}$, \item $\mathbf{G}'$ -- the derived group of $\mathbf{G}$, \item $\fg'$ -- the Lie algebra of $\mathbf{G}'$, \item $G=\mathbf{G}(F)$, \item $BT(G)$ -- the Bruhat-Tits building of $\mathbf{G}$, \item $\operatorname{irr}(G)$ -- the set of (isomorphism classes of) irreducible smooth representations of $G$. \end{itemize} \RamiA{For} any $x\in BT(G)$ and $r\in \mathbb{R}$ we \RamiA{further denote:} \begin{itemize} \item $G_{x,r}$ and $G_{x,r^+}$ -- the Moy-Prasad subgroups (See \Rami{\cite[\S2.6]{MP94} \RamiA{where} \Dima{they are denoted by $\mathcal P_{x,r}, \mathcal P_{x,r^+}$}}). \item $M_x:=G_{x,0}/G_{x,0^+}$. \item $\mathbf{M}_{x}$ -- \RamiA{the} natural reductive group defined over $k$ s.t. $M_x\cong \mathbf{M}_{x}(k)$ (See \Rami{\cite[\S3.2]{MP94}}) \item $Q_{x}$ -- the normalizer of $G_{x,0}$ inside $G$. \item $G_{r}:=\bigcup_{x\in BT(G)} G_{x,r}$ \item $G_{r^+}:=\bigcup_{x\in BT(G)} G_{x,r^+}$ \item $\fg_{x,r}, \fg_{x,r^+}, \fm_x, \fg_{r}, \fg_{r^+}$ -- the Lie algebra versions of the above \Rami{(See \cite[\S3]{MP94})}.
\end{itemize} \end{notation}
\begin{defn}\label{def:acc}
We say that
\RamiA{the pair $(F,\mathbf{G})$}
is \emph{acceptable}, if the following conditions are satisfied:
\begin{enumerate}
\item \label{it:HCH} The pair $(F,\mathbf{G})$ satisfies the Hales-Moy-Prasad conjecture \Rami{for depth $0$ representations}, {\it i.e.} for any depth $0$ representation $\rho\in \operatorname{irr}(G)$, the Harish-Chandra-Howe character expansion for $\rho$ is satisfied on the set $G_{0^+}$ of topologically unipotent elements in $G$.
\item \label{it:exp} The series defining the exponential map $\fg'\to \mathbf{G}$ given by the adjoint representation converge on $\fg_{0+} \cap \fg'$.
\item \label{it:Cox} For any $x\in BT(G)$, we have $\operatorname{char} k>3(h_x -1)$, where $h_x$ is the Coxeter number of ${\bf M}_x$. Note that in particular this implies that $p$ is a good prime for ${\bf M}_x$.
\end{enumerate} \end{defn}
\begin{prop} For any $n\in \mathbb{N}$ there exists $T\in \mathbb{N}$ such that for any
\begin{itemize}
\item prime $p>T$
\item local field $F$ of residue characteristic $p$ such that $val_F(p)<n$ \item reductive group $\bf G$ defined over $F$ such that $\dim \mathbf{G}<n$,
\end{itemize} the pair $(F,\mathbf{G})$ is acceptable. \end{prop} \begin{proof} In order to satisfy condition \eqref{it:Cox} we take $T>3n$. In order to satisfy condition \eqref{it:exp} we take $T>n^2$; this suffices by \cite[Lemma 3.2]{BM}. Finally, one can choose $T$ such that \eqref{it:HCH} is satisfied by \cite[\S 2.2, \S3.4, Theorem 3.5.2]{DebHom} and condition \eqref{it:exp}. \end{proof}
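As a purely illustrative reading of this proof: if $n=10$, then conditions \eqref{it:Cox} and \eqref{it:exp} hold for any $T>\max(3n,n^2)=100$, and the final value of $T$ is obtained by enlarging this bound, if necessary, so that condition \eqref{it:HCH} holds as well.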
\begin{defn} For $\pi\in \operatorname{irr}(G)$ denote by $\operatorname{WF}(\pi)$ the wave-front set of the character of $\pi$ over the point $1\in G$. It also equals the union of the closures of all the orbits $O$ that appear with non-zero coefficients in the Harish-Chandra-Howe expansion. \end{defn}
The following is a more explicit version of Theorem \ref{thm:main}.
\begin{thm}\label{thm:main.exp} Assume that $(F, \mathbf{G})$ is an acceptable pair. Let $\pi$ be a cuspidal irreducible representation of depth zero of $\mathbf{G}(F)$. Then the Zariski closure of the wave-front set $\operatorname{WF}(\pi)$ in $\fg^*$ is irreducible, where $\fg$ denotes the Lie algebra of $\mathbf{G}$. \end{thm}
From now till the end of the paper we will assume that $(F, \mathbf{G})$ is an acceptable pair. The rest of the paper is dedicated to the proof of this theorem.
\section{Wave-front sets and generalized Gelfand-Graev models for finite groups of Lie type}
Let $k$ be a finite field, $\bf M$ be a reductive group defined over $k$, and $M:={\bf M}(k)$ be its group of $k$-points. We assume that $\operatorname{char} k>3(h -1)$, where $h$ is the Coxeter number of $\mathbf{M}$. In particular, this implies that $\operatorname{char} k$ is a good prime for $\bf M$. Let $\fm$ denote the Lie algebra of $\bf M$.
\RamiA{ \begin{defn}$ $
\begin{itemize}
\item For every nilpotent element $N\in \fm(k)$, let $\Gamma_{N}$ denote the \emph{generalized Gelfand-Graev model} attached to $N$, as in \cite[\S 2.2]{BM}. Since the isomorphism class of $\Gamma_N$ only depends on the orbit of $N$ \Dima{under the adjoint action of $M$}, we will also use the notation $\Gamma_O$ for every nilpotent orbit $O\subset \fm(k)$. \item Let $\sigma$ be a finite-dimensional representation of $M$. Let $GG(\sigma)$ denote the union of all nilpotent $M$-orbits $O\sub \fm(k)$ satisfying $\langle \sigma, \Gamma_O\rangle\neq 0,$ where $\langle \sigma, \Gamma_O\rangle$ denotes the intertwining number. \item Let $\operatorname{WF}(\sigma)$ denote the Zariski closure of $\mathbf{M}\cdot GG(\sigma)$ in $\fm$. \end{itemize} \end{defn} }
\begin{thm}[{\cite[Theorem 14.10]{Tay}}]\label{thm:lus} If $\operatorname{char} k$ is a good prime for $\bf M$ then for every irreducible representation $\sigma$ of $M$, the algebraic variety $\operatorname{WF}(\sigma)$ is irreducible. \end{thm}
\section{The results of Barbasch-Moy} In this section we describe the results of \cite{BM}\RamiB{, as refined in \cite{DebPar,Oka},} on the relation between wave-front sets of depth zero representations of $G$ and of representations of the \RamiA{finite} groups $M_x$ for various $x\in BT(G)$.
The results in \cite{BM} require certain assumptions, described in \cite[4.4]{BM}. Assumptions (2) and (3) in \cite[4.4]{BM} coincide with assumptions \eqref{it:exp} and \eqref{it:Cox} in Definition \ref{def:acc}. Assumption (1) in \cite[4.4]{BM} can be replaced by assumption \eqref{it:HCH} of Definition \ref{def:acc}. Indeed, this assumption \RamiA{is} only used in \cite{BM} in order to deduce the statement of assumption \eqref{it:HCH} of Definition \ref{def:acc}. Therefore all the results of \cite{BM} are valid for the acceptable pair $(F,\mathbf{G})$.
\begin{defn} Let \begin{enumerate}[(i)] \item $\mathcal N_{o}(G)$ denote the set of nilpotent $G$-orbits in $\fg(F)$. \item $\mathcal I_o(G)=\{(x,O)\, \vert x\in BT(G),\, O \text{ is a nilpotent \RamiB{$M_x-$}orbit in }\fm_x\}.$ \item $F^{un}$ be the unramified closure of $F$.
\item We define a pre-order on $\mathcal N_{o}(G)$ in the following way: $$\Omega\geq \Omega'\quad \text{iff} \quad \overline{G(F^{un})\cdot \Omega}\supset \Omega',$$ where $\overline{G(F^{un})\cdot \Omega}$ denotes the closure in the local topology on $\fg(F^{un})$.
\item \RamiA{We define a \RamiB{pre-}order on \RamiB{$\mathcal I_o(G)$} in the following way:
$$(x',O')\leq(x,O) \text{ iff } x=x' \text{ and } \overline{\RamiB{\mathbf{M}_x \cdot} O}^{Zar} \supset O',$$ where $ \overline{O}^{Zar}$ denotes the Zariski closure.}
\end{enumerate} \end{defn} \RamiB{ \begin{thm}[{\cite[Lemma 5.3.3.]{DebPar}}]
For any $(x,O)\in \mathcal I_o(G)$ there exists a unique $\Omega \in \mathcal N_{o}(G)$ such that there exists an ${\mathfrak{sl}}_2$-triple $e,h,f\in \fg_{x,0}$ satisfying:
\begin{itemize}
\item $e\in \Omega$
\item the projections $\bar e, \bar h, \bar f$ form an ${\mathfrak{sl}}_2$-triple in $\fm_x(k)=\fg_{x,0}/\fg_{x,0^+}$
\item $\bar e\in O$. \end{itemize} \end{thm} \begin{notn}
We will denote: $$\cL(x,O):=\Omega$$ \end{notn}
\begin{thm}[{\cite{BM,DebPar}}]\label{thm:cor}$\,$
The map $\cL:\mathcal I_o(G) \to \mathcal N_{o}(G)$ is:
\begin{enumerate}[(i)]
\item surjective cf. \cite[Theorem 5.6.1.]{DebPar},
\item pre-order preserving, cf. \cite[Proposition 3.16]{BM}.
\end{enumerate} \end{thm}
}
\begin{notation} For $x\in BT(G)$ and an \RamiB{$M_x$}-stable subset $\Xi\subset \fm_x$ denote \RamiB{$$\cL_x(\Xi)=\bigcup_{O\in \Xi/M_x} \cL(x,O).$$} \end{notation}
\begin{defn} For $\pi\in \operatorname{irr}(G)$ and $x\in BT(G)$, define $\pi_x:=\pi^{G_{x,0^+}}$ considered as a representation of $M_x=G_{x,0}/G_{x,0^+}$. \end{defn}
\begin{thm}[\RamiB{{\cite[Corollary 1.2]{Oka}}\DimaA{, cf. \cite[\S 5]{BM}}}]\label{thm:BMCor} Let $\pi$ be a representation of $G$ of depth zero. Then we have $$\overline{\operatorname{WF}(\pi)}^{Zar}=\overline{\bigcup_{x\in BT(G)}\RamiB{\cL_x(\operatorname{WF}(\pi_x)(k))}}^{Zar},$$
\Dima{where by $\overline{\operatorname{WF}(\pi)}^{Zar}$ we mean the closure in the Zariski topology.}
\end{thm}
\section{Proof of Theorem \ref{thm:main}}
\RamiA{We will give an explicit version of the results of \cite{BM} for cuspidal representations of depth 0. For this we f}irst recall a construction from \RamiA{\cite{MP96}} that exhausts all depth zero irreducible cuspidal representations of $G$:
\begin{theorem}[{\cite[Propositions 6.6 and 6.8]{MP96}}]\label{thm:cusp.dep.0}
Let $x\in BT(G)$ s.t. $G_{x,0}$ is a maximal parahoric subgroup of $G$, and let $\tau_0\in \operatorname{irr}(Q_x/G_{x,0^+})$ s.t. $\tau_0|_{M_x}$ is a cuspidal representation. Let $\tau$ be the lift of $\tau_0$ to $Q_x$. Then $\pi:=\operatorname{ind}_{Q_x}^G\tau$ is a depth zero cuspidal irreducible representation of $G$. Moreover, any depth zero cuspidal irreducible representation of $G$ can be obtained in this way. \end{theorem}
\begin{prop}\label{prop:allOr0}
Let $x\in BT(G),\tau_0 \in \operatorname{irr}(Q_x/G_{x,0^+}),$ its lift $\tau \in \operatorname{irr}(Q_x)$ and $\pi=\operatorname{ind}_{Q_{x}}^G\tau\in \operatorname{irr}(G)$ be as in Theorem \ref{thm:cusp.dep.0},
and let $y\in BT(G)$. Then
\begin{enumerate}[(i)]
\item \label{it:0}If $\pi_y\neq 0$ then there exists $g\in G$ such that $gG_{x,0}g^{-1}=G_{y,0}$.
\item \label{it:Norm}$\pi_x\simeq (\tau_0)|_{M_x}.$
\end{enumerate} \end{prop}
For the proof we will need the following lemma.
\begin{lem}[{cf. the proof of \cite[Proposition 6.6]{MP96}}] \label{lem:BT}
Let $x,y\in BT(G)$, and let $F_x$ and $F_y$ denote the minimal faces that include them. If $F_x\neq F_y$ then the image of $G_{x,0}\cap G_{y,0^+}$ in $G_{x,0}\slash G_{x,0^+}$ includes the unipotent radical of a proper parabolic subgroup of ${\bf M}_x(k)$. \end{lem}
\begin{proof}[Proof of Proposition \ref{prop:allOr0}] We have the following isomorphisms of vector spaces.
\begin{multline*}\pi_y\simeq\bigoplus_{[g]\in Q_x\backslash G \slash G_{y,0^+}} (\operatorname{Ind}^{G_{y,0^+}}_{G_{y,0^+}\cap g^{-1}Q_x g}(\tau|_{gG_{y,0^+}g^{-1}\cap Q_x})^g)^{G_{y,0^+}}\simeq\\
\bigoplus_{[g]\in Q_x\backslash G \slash G_{y,0^+}} (\operatorname{Ind}^{G_{y,0^+}}_{G_{y,0^+}\cap Q_{g^{-1}x}}(\tau|_{G_{gy,0^+}\cap Q_x})^g)^{G_{y,0^+}}\simeq
\bigoplus_{[g]\in Q_x\backslash G \slash G_{y,0^+}} ((\tau|_{G_{gy,0^+}\cap Q_x})^g)^{G_{y,0^+}\cap Q_{g^{-1}x}}\simeq\\ \bigoplus_{[g]\in Q_x\backslash G \slash G_{y,0^+}} \tau^{G_{gy,0^+}\cap Q_x} \end{multline*} By Lemma \ref{lem:BT} and the cuspidality of $\tau_0$ we obtain $$\pi_y\simeq \bigoplus_{[g]\in Q_x\backslash G \slash G_{y,0^+} \text{ s.t. }F_{gy}=F_x} \tau^{G_{x,0^+}}\simeq\bigoplus_{[g]\in Q_x\backslash G \slash G_{y,0^+} \text{ s.t. }F_{gy}=F_x} \tau_0.$$ This proves \eqref{it:0}. To prove \eqref{it:Norm} we use the following isomorphism of representations of $G_{x,0}$.
\begin{multline}\label{=pix}
\pi_x\simeq\bigoplus_{[g]\in Q_x\backslash G \slash G_{x,0}} (\operatorname{Ind}^{G_{x,0}}_{G_{x,0}\cap g^{-1}Q_xg}(\tau|_{gG_{x,0}g^{-1}\cap Q_x})^g)^{G_{x,0^+}}\cong \\
\bigoplus_{[g]\in Q_x\backslash G \slash G_{x,0}} (\operatorname{Ind}^{G_{x,0}}_{G_{x,0}\cap Q_{g^{-1}x}}(\tau|_{G_{gx,0}\cap Q_x})^g)^{G_{x,0^+}} \end{multline} For any $g\in G$ we have a vector space isomorphism:
\begin{multline*}(\operatorname{Ind}^{G_{x,0}}_{G_{x,0}\cap Q_{g^{-1}x}}(\tau|_{G_{gx,0}\cap Q_x})^g)^{G_{x,0^+}}\cong
\bigoplus_{[h]\in Q_x\backslash Q_xgG_{x,0} \slash G_{x,0^+}} (\operatorname{Ind}^{G_{x,0^+}}_{G_{x,0^+}\cap Q_{h^{-1}x}}(\tau|_{G_{hx,0^+}\cap Q_{x}})^h)^{G_{x,0^+}}\cong\\
\bigoplus_{[h]\in Q_x\backslash Q_xgG_{x,0} \slash G_{x,0^+}} ((\tau|_{G_{hx,0^+}\cap Q_{x}})^{h})^{G_{x,0^+}\cap Q_{h^{-1}x}} \cong \bigoplus_{[h]\in Q_x\backslash Q_xgG_{x,0} \slash G_{x,0^+}} \tau^{G_{hx,0^+}\cap Q_{x}} \end{multline*}
By Lemma \ref{lem:BT} and the cuspidality of $\tau$, if the space above does not vanish then for some $h\in Q_{x}gG_{x,0}$ we have $F_x=F_{hx}$. In other words $ Q_{x}gG_{x,0}$ intersects $Q_{x}$, and thus $g\in Q_x$. To sum up, if $(\operatorname{Ind}^{G_{x,0}}_{G_{x,0}\cap Q_{g^{-1}x}}(\tau|_{G_{gx,0}\cap Q_x})^g)^{G_{x,0^+}}\neq 0$ then $g\in Q_x$. Using \eqref{=pix} we obtain
$$\pi_x\simeq (\tau|_{G_{x,0}})^{G_{x,0^+}}=(\tau_0)|_{M_x}$$ \end{proof}
Proposition \ref{prop:allOr0} and Theorem \ref{thm:BMCor} imply the following corollary. \begin{cor} \label{cor:cusp.BM}
Let $x\in BT(G),\tau_0 \in \operatorname{irr}(Q_x/G_{x,0^+}),$ its lift $\tau \in \operatorname{irr}(Q_x)$ and $\pi=\operatorname{ind}_{Q_{x}}^G\tau\in \operatorname{irr}(G)$ be as in Theorem \ref{thm:cusp.dep.0}.
Then $$\overline{\operatorname{WF}(\pi)}^{Zar}=\overline{\RamiB{\cL_x}(\operatorname{WF}(\tau_0))}^{Zar}.$$ \end{cor}
In view of
Theorem \ref{thm:cusp.dep.0},
this corollary, together with Theorem \ref{thm:cor} and Theorem \ref{thm:lus}, implies Theorem \ref{thm:main.exp}.
\end{document}
\begin{document}
\title[Naturally reductive pseudo Riemannian 2-step nilpotent Lie groups] {Naturally reductive pseudo Riemannian \\ 2-step nilpotent Lie groups}
\author{Gabriela P. Ovando $^1$} \address{G. P. Ovando: CONICET and ECEN-FCEIA, Universidad Nacional de Rosario \\Pellegrini 250, 2000 Rosario, Santa Fe, Argentina} \footnote{ Currently: Abteilung f\"ur Reine Mathematik, Albert-Ludwigs Universit\"at Freiburg, Eckerstr.1, 79104 Freiburg, Germany.} \email{[email protected]}
\begin{abstract} This paper deals with naturally reductive pseudo Riemannian 2-step nilpotent Lie groups $(N, \la \,,\,\ra_N)$. In the cases under consideration they are related to bi-invariant metrics. On the one hand, whenever $\la \,,\, \ra_N$ restricts to a metric on the center, it is proved that the simply connected Lie group $N$ arises from a Lie algebra $\ggo$ and a representation of it. The Lie algebra $\ggo$ carries an ad-invariant metric and its corresponding Lie group acts as a group of isometries of $(N, \la \,,\,\ra)$ fixing the identity element. On the other hand, a bi-invariant metric $\la\,,\,\ra$ on $N$ provides another family of examples of naturally reductive spaces, namely those of the form $(N/\Gamma, \la\,,\,\ra)$ where $\Gamma\subset N$ is a lattice; these are also investigated. \end{abstract}
\thanks{{\it (2010) Mathematics Subject Classification}: 53C50 22E25 53B30 53C30. }
\maketitle
\section{Introduction}
The 2-step nilpotent Lie groups are nonabelian and, from the algebraic point of view, as close as possible to being abelian, and they display a rich geometry when equipped with a metric tensor. While they have been extensively investigated
in the Riemannian situation, in the case of indefinite metrics there
are significant advances, as shown in \cite{Bo, C-P1, C-P2, Ge, J-P-L,J-P,J-P-P,Pa}, but several problems remain open. A first obstacle appears when trying to translate the left invariant metric to the Lie algebra level. So far all attempts in this direction take the Riemannian model as a starting point. Among these pseudo Riemannian spaces, the {\em naturally reductive} ones enjoy simple algebraic and geometric properties. Examples of them are provided by 2-step nilpotent Lie groups carrying a bi-invariant metric.
Important studies concerning the structure of a naturally reductive Riemannian Lie group $G$ when $G$ is compact and simple or when $G$ is non compact and semisimple were given by D'Atri-Ziller \cite{DA-Z} and Gordon \cite{Go} respectively. Gordon showed that every naturally reductive Riemannian manifold may be realized as a homogeneous space $G/H$ with Lie group $G$ of the form $G=G_{nc}G_cN$ where $G_{nc}$ is a non compact semisimple normal subgroup, $G_c$ is compact semisimple and $N$ is the nilradical of $G$. Furthermore $N\cap H=\{0\}$ and the induced metrics on each of $G_{nc}/(G_{nc}\cap H)$, $G_c/(G_c \cap H)$ and $N (=N/(N\cap H))$ are naturally reductive so that the study of naturally reductive metrics is partially reduced to the cases in which $G$ is semisimple either of compact or non compact type or $G$ is nilpotent. In the last case Gordon proved that $G$ must be at most 2-step nilpotent.
Lauret \cite{La} exploited this result to obtain a classification of naturally reductive connected simply connected Riemannian nilmanifolds. According to Wilson \cite{Wi}, such a manifold can be realized as a 2-step nilpotent Lie group equipped with a left invariant metric.
Later Tricerri and Vanhecke \cite{T-V2} proved that a Riemannian manifold is a naturally reductive homogeneous space if and only if there exists a homogeneous structure $T$ satisfying $T_x x=0$ for all tangent vectors $x$, offering in this way an infinitesimal description of these reductive manifolds. The notion of {\em homogeneous structure} was introduced by Ambrose and Singer \cite{A-S} to characterize connected simply connected and complete homogeneous Riemannian manifolds. In the Riemannian case every homogeneous manifold is complete and reductive. More recently Gadea and Oubi\~na \cite{G-O1} proved that a connected simply connected and complete pseudo Riemannian manifold admits a homogeneous pseudo-Riemannian structure if and only if it is reductive homogeneous. While Tricerri and Vanhecke \cite{T-V1} achieved the classification of homogeneous Riemannian structures, in the pseudo Riemannian case a complete classification is still pending.
However, Calvaruso and Marinosci \cite{Ca,C-M1, C-M2} studied homogeneous structures in dimension three, obtaining from their results the naturally reductive Lie groups with a left invariant Lorent\-zian metric. In particular the Heisenberg Lie group admits two naturally reductive left invariant Lorent\-zian metrics (for which the center is non degenerate).
In this paper we provide constructions for naturally reductive pseudo Riemannian 2-step nilpotent Lie groups. By following an approach similar to that of Gordon, one obtains necessary and sufficient conditions for a pseudo Riemannian 2-step nilpotent Lie group with non degenerate center to be naturally reductive -Theorem \ref{t1}-. This makes it possible to attach this kind of Lie groups to Lie algebras endowed with an ad-invariant metric and to a certain kind of representations of them:
\vskip 2pt
{\bf Theorem \ref{t2}} {\em Let $\ggo$ denote a Lie algebra carrying an ad-invariant metric $\la\,,\,\ra_{\ggo}$ and let $(\pi, \vv)$ be a real faithful representation of $\ggo$ without trivial subrepresentations and
such that the metric on $\vv$, $\la\,,\,\ra_{\vv}$ is $\pi(\ggo)$-invariant. Let $\nn$ denote the Lie algebra
$\nn=\ggo \oplus \vv$ whose Lie bracket is given by
$$\begin{array}{rcl}
[\ggo,\ggo]_{\nn}=[\ggo, \vv]_{\nn} =0 & [\vv, \vv] \subseteq \ggo \\ \\ \la [u,v], x\ra_{\ggo} = \la \pi(x) u, v\ra_{\vv}& \mbox{ for all } x\in \ggo, \, \forall u, v\in \vv, \end{array} $$ equip $\nn$ with the metric
$\la\,,\,\ra$
$$ \la\,,\,\ra_{\ggo\times \ggo}= \la\,,\,\ra_{\ggo}\qquad
\la\,,\,\ra_{\vv\times \vv}= \la\,,\,\ra_{\vv}\qquad \la \ggo, \vv\ra=0$$
then the corresponding simply connected 2-step nilpotent Lie group $(N, \la\,,\,\ra)$, where $\la\,,\,\ra$ also denotes the left invariant metric induced by the metric defined above, is a naturally reductive pseudo Riemannian space.
The converse holds whenever the center of $\nn$ is non degenerate and $j$ (defined as in (\ref{br})) is faithful. }
\vskip 2pt
The previous result furthers the understanding of some geometrical features such as the isometry group -Proposition 3.5- and it enables the construction of new examples, in particular by describing some naturally reductive metrics on the Heisenberg Lie group $H_{2n+1}$.
We also bring into consideration 2-step nilpotent Lie groups furnished with a bi-invariant metric, in order to compare the geometric and algebraic structure of metrics for which the center is degenerate with those for which it is non degenerate. In fact, bi-invariant metrics offer examples of flat pseudo Riemannian metrics for which the isometry group contains the group of orthogonal automorphisms as a proper subgroup. Another application of bi-invariant metrics is the construction of compact naturally reductive pseudo Riemannian spaces.
\section{On 2-step nilpotent Lie groups with a left invariant pseudo Riemannian
metric}
In this section we show suitable decompositions of the Lie algebra corresponding to a 2-step nilpotent
Lie group equipped with a left invariant pseudo Riemannian metric. We are
mainly interested here in those metrics for which the center is non degenerate, a fact that determines unambiguously the decomposition.
A {\em metric} on a real vector space $\vv$ is a non
degenerate symmetric bilinear form $\la\,,\,\ra:\vv \times \vv \to \RR$.
Whenever $\vv$ is the Lie algebra of a given Lie group $G$, by identifying
$\vv$ with the set of left invariant vector fields on $G$, the metric
induces by mean of the left translations, a pseudo
Riemannian metric tensor on
the corresponding Lie group. Conversely any left invariant pseudo
Riemannian metric on $G$ is completely determined by its value at the identity
tangent space $T_eG$.
Let $(N, \la\,,\,\ra)$ denote a 2-step nilpotent Lie group endowed with a left invariant pseudo Riemannian metric. There exist several ways to describe the structure of the corresponding Lie algebra $\nn$. The main difficulty lies in the possible existence of degenerate subspaces, as one can see below.
\vskip 3pt
{\bf (a)} If the center is degenerate, the null subspace is defined uniquely as $$\uu=\{x\in \zz \, \mbox{ such that }\, \la x, z\ra=0 \quad \forall z\in \zz\}$$ and therefore the center of $\nn$ decomposes as a direct sum of vector subspaces $$\zz =\uu \oplus \tilde{\zz}$$ where $\tilde{\zz}$ is a complementary subspace of $\uu$ in $\zz$, and it is easy to prove that the restriction of the metric to $\tilde{\zz}$ is non degenerate. Moreover it is possible to find an isotropic subspace $\vv\subset \nn$ such that $\vv \cap \zz=\{0\}$ and the metric on $\uu\oplus \vv$ is non degenerate. This subspace $\vv$ is not well defined invariantly but once $\vv$ is fixed, one can take $\tilde{\zz}$ as the portion of the center in $(\uu\oplus\vv)^{\perp}$ and complete the decomposition of $\nn$ as an orthogonal direct sum \begin{equation}\label{ortsum} \nn=(\uu \oplus \vv) \oplus (\tilde{\zz}\oplus \tilde{\vv}) \end{equation} in such a way that $(\uu\oplus \vv)^{\perp}=\tilde{\zz}\oplus \tilde{\vv}$ and (\ref{ortsum}) is a Witt decomposition. Note that $\tilde{\vv}$ is non degenerate. Moreover it is possible to define a linear map $j:\zz \to \End(\vv \oplus \tilde{\vv})$ which plays a role similar to the one in the Riemannian case (see \cite{C-P1} for details).
In the last section of the present work, we show similar results for the case of bi-invariant metrics.
\vskip 3pt
{\bf (b)} Let $e_1, \hdots, e_p$ denote a basis of $\zz$. For any $u, v \in \nn$, the Lie bracket can be written
$$[u, v] = \sum_{i=1}^p \la J_i u, v\ra e_i, $$ where $J_i:\nn \to \nn$ are skew adjoint endomorphisms with respect to $\la \,,\,\ra$ and $\zz=\cap_{i=1}^p \ker J_i$. In fact, $[u,v]=\sum \omega_i(u,v) e_i$ where $\omega_i:\nn \times \nn \to \RR$, for $i=1, \hdots, p$, is a family of skew symmetric bilinear 2-forms which represent the coordinates of $[u,v]$ with respect to the fixed basis. Since the metric on $\nn$ is non degenerate, for every $i$ there exists an endomorphism $J_i:\nn \to \nn$ such that $\omega_i(u,v)=\la J_iu,v\ra$.
The endomorphisms $J_i$ are thus called the {\em structure endomorphisms}
associated to $e_1, \hdots, e_p$ (see \cite{Bo}).
\vskip 3pt
Examples of pseudo Riemannian 2-step nilpotent Lie groups $N$ arise by considering the simply connected Lie groups whose Lie algebra can be constructed as follows. Let $(\zz, \la \,,\,\ra_{\zz})$ and $(\vv, \la \,,\,\ra_{\vv})$ denote vector spaces endowed with (not necessarily definite) metrics. Let $\nn$ denote the
direct sum as vector spaces \begin{equation}\label{des2} \nn=\zz \oplus \vv \qquad \quad\mbox{ direct sum } \end{equation} and let $\la\,,\,\ra$ denote the metric given by \begin{equation}\label{met}
\la \,,\,\ra_{|_{\zz \times \zz}}=\la \,,\,\ra_{\zz}\qquad
\la \,,\,\ra_{|_{\vv \times \vv}}=\la \,,\,\ra_{\vv} \qquad \la \zz, \vv \ra=0. \end{equation}
Let $j:\zz \to \End(\vv)$ be a linear map such that $j(z)$ is skew adjoint with respect to $\la \,,\,\ra_{\vv}$ for all $z\in \zz$. Then $\nn$
becomes a 2-step nilpotent Lie algebra if one defines a Lie bracket by \begin{equation}\label{br} \begin{array}{rcl} [x,y] & = & 0 \quad \mbox{ for all }x\in \zz, y\in \nn\\ \la [u,v], x\ra & = & \la j(x) u,v\ra \qquad \mbox{ for } x\in \zz, u,v\in \vv. \end{array} \end{equation}
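A short verification that (\ref{br}) indeed defines a 2-step nilpotent Lie bracket may be worth recording here: skew adjointness of each $j(x)$ is exactly what makes the bracket antisymmetric, since $$\la [u,v],x\ra=\la j(x)u,v\ra=-\la u,j(x)v\ra=-\la j(x)v,u\ra=-\la [v,u],x\ra \qquad \mbox{ for } u,v\in\vv,\ x\in\zz,$$ while the Jacobi identity holds trivially because $[\nn,\nn]\subseteq \zz$ is central and $[\zz,\nn]=0$.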
Conversely, let $\nn$ denote a 2-step nilpotent Lie algebra furnished with a metric for which the center is non degenerate. Then $\nn$ can be decomposed into an orthogonal direct sum as in (\ref{des2}) with $\vv:=\zz^{\perp}$, and the Lie bracket on $\nn$ induces skew adjoint linear maps $j(x)$, for $x\in \zz$, given by (\ref{br}).
\begin{prop} \label{p1} Let $(N,\la\,,\,\ra)$ denote a simply connected 2-step nilpotent Lie group equipped with a left invariant pseudo Riemannian metric. If the center of $N$ is non degenerate then its Lie algebra $\nn$ admits an orthogonal decomposition as in (\ref{des2}) and the corresponding Lie bracket can be obtained by (\ref{br}). \end{prop}
This includes the Riemannian case, that is, when the metric $\la\,,\,\ra$ is positive definite. In this situation, the inner product $\la\,,\,\ra_+$ produces a decomposition of the center of the Lie algebra $\nn$ as an orthogonal direct sum of vector spaces $$\zz=\ker j \oplus C(\nn)$$ and moreover $j$ is injective if and only if there is no Euclidean factor in the de Rham decomposition of the simply connected Lie group $(N, \la\,,\,\ra_+)$ (see \cite{Go}). This does not necessarily hold in the pseudo Riemannian case.
Below we show an example of a Lorentzian metric on a 2-step nilpotent Lie algebra $\nn$, where the center is non degenerate and such that $\ker j=[\nn,\nn]$, so that a splitting as above is not possible.
\begin{exa} \label{exa1} Let $\RR \times \hh_3$ be the 2-step nilpotent Lie algebra spanned by the vectors $e_1,e_2, e_3, e_4$ with the Lie bracket $[e_1, e_2]=e_3$. Define a metric whose non trivial relations are $$\la e_1, e_1\ra=\la e_2, e_2\ra=\la e_3, e_4\ra=1.$$ Using (\ref{br}) one can verify that $j(e_3)\equiv 0$, while $$j(e_4)=\left(\begin{matrix} 0 & -1\\ 1 & 0 \end{matrix} \right). $$ Notice that $e_4\notin C(\RR \times \hh_3)$ and $\ker j=\RR e_3=C(\RR\times \hh_3)$, that is $\ker j=C(\nn)$. \end{exa}
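The matrix of $j(e_4)$ above can be checked directly from (\ref{br}): here $\vv=\zz^{\perp}={\rm span}\{e_1,e_2\}$ and $$\la j(e_4)e_1, e_2\ra=\la [e_1,e_2], e_4\ra=\la e_3,e_4\ra=1, \qquad \la j(e_4)e_2, e_1\ra=\la [e_2,e_1], e_4\ra=-1,$$ so that $j(e_4)e_1=e_2$ and $j(e_4)e_2=-e_1$ in the basis $\{e_1,e_2\}$, while $j(e_3)=0$ since $\la e_3,e_3\ra=0$.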
Let $\Or(\vv, \la\,,\,\ra_{\vv})$ denote the group of linear maps on $\vv$ which are isometries for
$\la\,,\,\ra_{\vv}$ and whose Lie algebra $\sso(\vv,\la\,,\,\ra_{\vv})$ is the
set of linear maps on $\vv$ that are skew adjoint with respect to
$\la \,,\,\ra_{\vv}$. The next goal is to describe the group of isometries
which plays an important role in the next section. Start with the next
result proved in \cite{C-P1}.
\begin{prop} \label{icp} Let $N$ denote a 2-step nilpotent Lie group endowed with a left invariant pseudo Riemannian metric, with respect to which the center is non degenerate. Then the group of isometries fixing the identity coincides with the group of orthogonal automorphisms of $N$. \end{prop}
Denote by $H$ the group of orthogonal automorphisms and by $N$ also the subgroup of isometries consisting of left translations by elements of $N$.
Consider the isometries of the form $h n$ where $h\in H$ and $n\in N$, and denote this set by $I_a(N)$. Then $N$ is a normal subgroup of $I_a(N)$, $N\cap H=\{e\}$ and therefore, by (\ref{icp}), one has $$I(N) = I_a(N)= H \ltimes N.$$
Whenever $(N, \la\,,\,\ra)$ is simply connected, we do not distinguish between the group of automorphisms of $N$ and of $\nn$. Thus one obtains that the group $H$ is given by \begin{equation}\label{oa} H=\{(\phi, T)\in \Or(\zz, \la\,,\,\ra_{\zz}) \times \Or(\vv, \la\,,\,\ra_{\vv}): Tj(x)T^{-1}=j(\phi x), \quad x\in \zz\} \end{equation} while its Lie algebra $\hh=\Der(\nn)\cap \sso(\nn,\la\,,\,\ra)$ is \begin{equation}\label{sd} \hh=\{(A,B)\in \sso(\zz,\la\,,\,\ra_{\zz}) \times \sso(\vv,\la\,,\,\ra_{\vv}): [B,j(x)]=j(Ax),\quad x\in \zz\}. \end{equation} In fact, let $\psi$ denote an orthogonal automorphism of $(\nn, \la\,,\,\ra)$. As automorphism $\psi(\zz)\subseteq \zz$ and since the decomposition $$ \nn = \zz \oplus \vv$$
is orthogonal then $\psi(\vv)\subseteq \vv$. Set $\phi:=\psi_{|_{\zz}}$ and
$T:=\psi_{|_{\vv}}$, thus $(\phi, T)\in \Or(\zz, \la\,,\,\ra_{\zz})\times \Or(\vv,\la\,,\,\ra_{\vv})$ and, for $u,v\in\vv$ and $x\in\zz$, $$\la j(x)u,v\ra=\la [u,v], x\ra = \la \psi[u,v], \psi x\ra=\la [Tu,Tv],\phi x \ra = \la j(\phi x)Tu, Tv\ra,$$ which, since $T$ is an isometry of $\la\,,\,\ra_{\vv}$, implies (\ref{oa}). By differentiating (\ref{oa}) one gets (\ref{sd}).
\begin{prop} Let $N$ denote a simply connected 2-step nilpotent Lie group endowed with a left invariant pseudo Riemannian metric, with respect to which the center is non degenerate. Then the group of isometries is $$I(N) = H \ltimes N,$$ where $N$ denotes the set of left translations by elements of $N$ and the isotropy subgroup $H$ is given by (\ref{oa}), with Lie algebra as in (\ref{sd}). \end{prop}
\begin{exa} \label{change} Let $\nn$ be a 2-step nilpotent Lie algebra equipped with an inner product and denote it by $\la \,,\,\ra_+$. Let $J_z\in \sso(\vv, \la\,,\,\ra_+)$ denote the maps in (\ref{br}) for the inner product.
We shall consider a non definite metric $\la\,,\, \ra$ on $\nn$ by changing the sign of the metric on the center ${\zz}$; thus the metric on $\vv$ remains invariant and we take $$\la z_i, z_j\ra=-\la z_i, z_j\ra_+\qquad \mbox{ for } z_i, z_j\in \zz \qquad \mbox{ and } \qquad \la \zz, \vv\ra=0.$$
By (\ref{br}) the maps $j(z)$ for the metric $\la\,,\,\ra$ on $\nn$ are $$-\la z, [u,v]\ra_+= -\la J(z) u,v\ra_+ =\la j(z)u, v\ra=\la z, [u,v]\ra,\quad \mbox{ for } z\in \zz$$ that is $j(z)=-J(z)$ for every $z\in \zz$.
We work out an example on the Heisenberg Lie group $H_3$. This is the simply connected Lie group whose Lie algebra is $\hh_3$, which is spanned by the vectors $e_1, e_2, e_3$ with the non trivial Lie bracket relation $[e_1, e_2]=e_3$. The canonical left invariant metric $\la\,,\,\ra_+$ is the one obtained by declaring the basis above to be orthonormal, and the map $J(e_3)$ for $\la\,,\,\ra_+$ is $$\left( \begin{matrix} 0 & -1 \\ 1 & 0 \end{matrix} \right). $$
A Lorentzian metric $\la\,,\,\ra$ is obtained on $H_3$ by changing the sign of the canonical metric on the center. Kaplan showed that $(H_3,\la\,,\,\ra_+)$ is naturally reductive (\cite{Ka}). In the next sections we shall see that $(H_3,\la\,,\,\ra)$ and generalizations of it are also naturally reductive.
By (\ref{oa}) the isotropy group for either of these metrics is $$H=\{(\lambda, A)\in \Or(\zz)\times O(2): Aj(e_3)A^{-1}=\lambda\, j(e_3)\}=\{(\det A, A): A\in O(2)\}\simeq O(2),$$ acting by $(\lambda, A)\cdot (z+v)=\lambda z + Av$ for $z\in \zz$ and $v\in \vv = {\rm span}\{e_1,e_2\}$; hence the isometry group is isomorphic to $O(2)\ltimes H_3$.
\end{exa}
\begin{defn} A homogeneous manifold $M$ is said to be {\em naturally reductive} if there is a transitive Lie group of isometries $G$ with Lie algebra $\ggo$ and there exists a subspace $\mm\subseteq \ggo$ complementary to $\hh$, the Lie algebra of the isotropy group $H$, in $\ggo$ such that $$\Ad(H)\mm \subseteq \mm \qquad \mbox{ and }\qquad \la [x,y]_{\mm}, z\ra + \la y, [x,z]_{\mm} \ra=0 \qquad \mbox{ for all } x, y,
z\in \mm.$$ \end{defn} Frequently we will say that a metric on a homogeneous space $M$ is naturally reductive even though it may fail to be naturally reductive with respect to a particular transitive group of isometries (see Lemma 2.3 in \cite{Go}).
For naturally reductive metrics the geodesics passing through $m\in M$ are of the form $$\gamma(t)=\exp(t x) \cdot m\qquad \quad \mbox{ for some }x\in \mm.$$
A point $p$ of a pseudo Riemannian manifold is called a {\em pole} provided the exponential map $\exp_p$ is a diffeomorphism. Furthermore, if $o$ is a pole of the naturally reductive pseudo Riemannian manifold $G/H$, then the map $(x, h) \to \exp(x) h$ is a diffeomorphism of $\mm \times H$ onto $G$ (\cite{ON}, Ch.~11).
Indeed pseudo Riemannian
symmetric spaces are naturally reductive. Examples of naturally
reductive spaces arise from Lie groups equipped with a bi-invariant metric,
which may exist for nilpotent Lie groups. In the Riemannian case, if
a nilmanifold $N$ admits a naturally reductive metric, then $N$ is at most
2-step nilpotent \cite{Go}.
\section{Naturally reductive metrics with non degenerate center: a characterization}
In this section we achieve a characterization of naturally reductive pseudo Riemannian simply connected 2-step nilpotent Lie groups with non degenerate center
by studying the set of maps $j(z)$, $z\in \zz$, defined in (\ref{br}), showing that they form a subalgebra of the Lie algebra of the isotropy group $H$.
\begin{lem} \label{l1} Let $(\nn, \la\,,\,\ra)$ denote a 2-step nilpotent Lie
algebra equipped with a metric for which its center $\zz$ is non degenerate
and assume $j$ is injective. Let
$\hh=\sso(\nn,\la\,,\,\ra) \cap \Der(\nn)$ denote the Lie subalgebra of the group of
isometries
fixing the identity element in the corresponding simply connected Lie group
$N$. Then
\vskip 3pt
{\rm i)} $\hh$ leaves each of $\zz$ and $\vv$ invariant,
\vskip 2pt
{\rm ii)} For $\phi\in \hh$,
$$\phi_{|_{\zz}} = j^{-1} \circ \ad_{\sso(\vv)}\phi_{|_{\vv}}\circ j.$$
In particular $\phi \to \phi_{|_{\vv}}$ is an isomorphism of $\hh$ onto a
subalgebra of $\sso(\vv,\la\,,\,\ra_{\vv})$.
\vskip 2pt
{\rm iii)} Let $\phi \in \sso(\vv,\la\,,\,\ra_{\vv})$. Then $\phi$ extends to an
element of $\hh$
if and only if $[\phi, j(\zz)]\subseteq j(\zz)$ and
$j^{-1} \circ \ad_{\sso(\vv)}\phi\circ j \in \sso(\zz,\la\,,\,\ra_{\zz})$.
\end{lem} \begin{proof} i) is easy to prove. We shall show (ii) and (iii). Let $A\in \sso(\zz,\la\,,\,\ra_{\zz})$ and $B\in \sso(\vv,\la\,,\,\ra_{\vv})$; the linear map $\phi$ which agrees with $A$ on $\zz$ and with $B$ on $\vv$ lies in $\hh$ if and only
if
$$\la j(Ax)u, v\ra=\la (B j(x)-j(x)B)u, v\ra \qquad \mbox{ for }x\in \zz,
\,u,v\in \vv$$
which is equivalent to $j(Ax)=[B, j(x)]$, where the bracket on the right-hand side denotes the Lie
bracket in $\sso(\vv,\la\,,\,\ra_{\vv})$; since $j$ was assumed injective one gets
$A=j^{-1}\circ\ad_{\sso(\vv)}(B) \circ j$. \end{proof}
The proof of the next theorem coincides with the one given by C. Gordon in \cite{Go}. For the sake of completeness we include it here. However, the consequences are quite different from the Riemannian situation.
\begin{thm} \label{t1} Let $(N, \la\,,\,\ra)$ denote a simply connected 2-step nilpotent Lie group equipped with a left invariant pseudo Riemannian metric such that the center is non degenerate, and assume $j$ is injective. Then the metric is naturally reductive with respect to $G=H\ltimes N$, where $H$ is the group of orthogonal automorphisms, if and only if \vskip 3pt
{\rm (i)} $j(\zz)$ is a Lie subalgebra of $\sso(\vv,\la\,,\,\ra_{\vv})$ and
\vskip 2pt
{\rm (ii)} $[j(x),j(y)] = j(\tau_x y)$ where $\tau_x\in \sso(\zz,\la\,,\,\ra_{\zz})$ for any $x\in \zz$. \end{thm}
\begin{proof} Let $\ggo=\hh \ltimes \nn$ be the Lie algebra of $G=H\ltimes N$
and assume $N$ is naturally reductive with respect to $\ggo=\hh \oplus \mm$.
Since $\mm$ is a complement of $\hh$ in $\ggo$, there is a linear map $\pi:\nn \to \hh$ such that
$$\mm=\{ x + \pi(x): x\in \nn\}.$$
The condition for natural reductivity says
$$ \la [x+\pi(x),y+\pi(y)]_{\mm}, z+\pi(z)\ra_{\mm} =- \la y+\pi(y), [x+\pi(x),z+\pi(z)]_{\mm} \ra_{\mm}$$ where $\la\,,\,\ra$ is the pseudo Riemannian metric on $\mm$, so that the previous equality can be interpreted on $\nn$ as \begin{equation}\label{onn} \la [x,y]+\pi(x)y-\pi(y)x, z\ra=-\la y, [x,z]+\pi(x) z-\pi(z)x\ra, \end{equation} where $\pi(x)$ is viewed as a linear operator on $\nn$ and one writes $\pi(x)y$ for $[\pi(x),y]$ when $x, y\in \nn$. Since $\pi(x)\in \sso(\nn, \la\,,\,\ra)$ the terms involving $\pi(x)$ cancel and (\ref{onn}) yields \begin{equation}\label{e11} \ad(y)^*z+\ad(z)^*y = \pi(y) z+\pi(z) y\quad \mbox{ for all }y, z\in \nn. \end{equation} Since $[\hh, \nn]\subseteq \nn$ and $[\hh, \mm]\subseteq \mm$, one has $$[\pi(x), y+\pi(y)]=\pi(x) y +[\pi(x),\pi(y)]\in \mm$$ and therefore \begin{equation}\label{e12} \pi(\pi(x)y)=[\pi(x), \pi(y)] \quad \mbox{ for all } x,y \in \nn. \end{equation} If $z\in \zz$ and $y\in \vv$, then $\ad(z)^*y=0$ and (\ref{e11}) says \begin{equation}\label{e13} j(z)y=\pi(y)z+\pi(z)y. \end{equation}
But $\pi(y)z\in \zz$ and $\pi(z)y\in \vv$, so (\ref{e13}) implies
$$\pi(z)_{|_{\vv}}= j(z)\in \sso(\vv,\la\,,\,\ra_{\vv})\quad \mbox{ for every } z\in \zz.$$ It then follows that $$[j(x), j(\zz)]\subset j(\zz) \quad \mbox{ and } \quad [j(x), j(y)]=j(\tau_x y)\quad \mbox{ for } \tau_x\in \sso(\zz,\la\,,\,\ra_{\zz}),\quad x,y\in \zz.$$
Conversely, if (i) and (ii) hold, extend $j(x)$, for $x\in\zz$, to an element $\pi(x)$ of $\hh$ whose restriction to $\zz$ is the map $\tau_x$ appearing in (ii). Extend $\pi$ to a linear map on $\nn$ by declaring
$\pi_{|_{\vv}}\equiv 0$. We claim that (\ref{e12}) holds for all $x, y\in \nn$. In fact it is easy to verify when at least one of $x,y$ lies in $\vv$. Assume $x,y \in \zz$; then
$$\pi(\pi(x)y)_{|_{\vv}}=j(j^{-1}[j(x),j(y)])=[j(x),j(y)]$$ and therefore (\ref{e12}) holds by Lemma \ref{l1} ii). Define $$\mathfrak l=\pi(\nn), \quad \mm=\{ x + \pi(x) : x \in \nn\}, \quad \mbox{ and } \quad \kk=\mathfrak l \oplus \mm.$$ By (\ref{e12}) $\mathfrak l$ is a Lie subalgebra of $\hh$ and $[\mathfrak l, \mm]\subseteq \mm$; since $\kk=\mathfrak l \oplus \nn$ as a vector space, $\kk$ is a Lie subalgebra of $\ggo$.
We assert that (\ref{e11}) is valid. This can be easily checked whenever at least one of $x, y$ lies in $\vv$. If both $x, y\in \zz$, the left-hand side of (\ref{e11}) is zero, while the right-hand side lies in $\zz$ and is annihilated by $j$, since $j(\tau_x y+\tau_y x)=[j(x),j(y)]+[j(y),j(x)]=0$; as $j$ is injective, the right-hand side vanishes as well, which proves (\ref{e11}). By following the argument preceding (\ref{e11}) backwards, one can see that $N$ is naturally reductive with respect to $\kk$. \end{proof}
If $\hh$ is a Lie subalgebra of $\End(\vv)$ such that $\hh\subseteq \sso(\vv, \la\,,\,\ra_{\vv})$ then we call $\la\,,\,\ra_{\vv}$ an {\em $\hh$-invariant metric}.
Under the conditions of Theorem \ref{t1}, it follows that if $(N, \la\,,\,\ra)$ is naturally reductive then the bilinear map $\tau$ defines a Lie algebra structure on $\zz$ and the map $j:\zz \to \sso(\vv,\la\,,\,\ra_{\vv})$ becomes a real representation of the Lie algebra $(\zz, \tau)$. Furthermore the metric on $\vv$ is $j(\zz)$-invariant and, since $\tau_x\in \sso(\zz, \la\,,\,\ra_{\zz})$, the metric on $\zz$ is ad($\zz$)-invariant, where $\ad$ denotes the adjoint representation of $(\zz, \tau)$.
Conversely let $\ggo$ be a real Lie algebra endowed with an ad($\ggo$)-invariant metric $\la \,,\, \ra_{\ggo}$ and let $(\pi, \vv)$ be a faithful representation of $\ggo$ endowed with a $\pi(\ggo)$-invariant metric $\la\,,\,\ra_{\vv}$ and without trivial subrepresentations, that is, $\bigcap_{x\in \ggo}ker \pi(x)=\{0\}$. Define a 2-step nilpotent Lie algebra structure on the vector space underlying $\nn=\ggo \oplus \vv$ by the following bracket \begin{equation}\label{brac} \begin{array}{ll} [\ggo,\ggo]_{\nn}=[\ggo, \vv]_{\nn} =0 & [\vv, \vv] \subseteq \ggo \\ \\ \la [u,v], x\ra_{\ggo} = \la \pi(x) u, v\ra_{\vv}& \forall x\in \ggo, u, v\in \vv. \end{array} \end{equation} and equip $\nn$ with the metric obtained as the product metric \begin{equation}\label{metric}
\la \,,\,\ra_{|_{\ggo \times \ggo}}=\la \,,\, \ra_{\ggo}\qquad \la
\,,\,\ra_{|_{\vv \times \vv}}=\la \,,\, \ra_{\vv} \qquad \la \ggo, \vv\ra=0. \end{equation}
Take $N$ the simply connected 2-step nilpotent Lie group with Lie algebra $\nn$ and endow it with the left invariant metric determined by $\la \,,\,\ra$.
Since $(\pi, \vv)$ has no trivial subrepresentations, the center of $\nn$ coincides with $\ggo$. Moreover $\vv$ is its orthogonal complement and the transformation $j(x)$ defined as in (\ref{br}) is precisely $\pi(x)$ for all $x\in \ggo$. Since $(\pi, \vv)$ is faithful, the commutator of $\nn$ is $\ggo$:
$C(\nn)=\ggo$. Since the set $\{\pi(x)\}_{x\in \ggo}$ is a Lie subalgebra of
$\sso(\vv,\la\,,\,\ra_{\vv})$, and $[\pi(x),\pi(y)]=\pi([x,y])$ with $\ad(x)\in \sso(\ggo,\la\,,\,\ra_{\ggo})$ for all $x,y\in \ggo$, conditions (i) and (ii) of Theorem \ref{t1} hold and we conclude that $(N, \la\,,\,\ra)$ is
naturally reductive.
\begin{thm}\label{t2} Let $\ggo$ denote a Lie algebra equipped with an ad-invariant metric $\la\,,\,\ra_{\ggo}$ and let $(\pi, \vv)$ be a real faithful representation of $\ggo$ without trivial subrepresentations and endowed with a $\pi(\ggo)$-invariant metric $\la\,,\,\ra_{\vv}$. Let $\nn$ be the Lie algebra $\nn=\ggo \oplus \vv$ direct sum of vector spaces, together with the Lie bracket given by (\ref{br}) and furnished with the metric
$\la\,,\,\ra$ as in (\ref{metric}). Then the corresponding simply connected
2-step nilpotent Lie group
$(N,\la \,,\,\ra)$, where $\la\,,\,\ra$ denotes the induced left invariant metric, is a naturally reductive pseudo Riemannian
space.
The converse holds whenever the center of $N$ is non degenerate and $j$ is
faithful. \end{thm}
\begin{rem} Suppose the representation $(\pi, \vv)$ of $\ggo$ is not faithful. Thus $$z\in \ker \pi \Longleftrightarrow \la z, [u,v]\ra=0 \quad \forall u,v \in \vv $$ $\Longrightarrow z \in C(\nn)^{\perp}$. Since the metric on the center $\ggo$ is non definite, $\ker \pi \cap C(\nn)$ could be non trivial, so that the sum of vector spaces $\ker \pi+C(\nn)$ is not necessarily direct.
When $\pi$ has some trivial subrepresentation, $$u\in \cap_{x\in \ggo} \ker \pi(x) \Longleftrightarrow \la \pi(x)u, v\ra=0 \quad \forall v\in \vv,$$ $\Longrightarrow\la x, [u,v]\ra=0$ for all $x\in \ggo$, thus $[u,v]=0$ for all $v\in \vv$, which says $u\in \zz(\nn)$. Hence $\ggo \subsetneq \zz(\nn)$. \end{rem}
\begin{rem} While in the Riemannian case the condition that the metric be positive definite forces $\ggo$ to be compact, in the pseudo Riemannian case the statement above only imposes on $\ggo$ the restriction of carrying an ad-invariant metric. See the next example. \end{rem}
\begin{exa} \label{exad} The Killing form on any semisimple Lie algebra is an ad-invariant metric.
Any Lie algebra $\ggo$ can be embedded into a Lie algebra which admits an ad-invariant metric. In fact, the cotangent Lie algebra $\ct^*\ggo=\ggo \ltimes_{coad}\ggo^*$, where $coad$ denotes the coadjoint representation, admits a neutral ad-invariant metric which is given by $$\la (x_1, \varphi_1),(x_2,\varphi_2)\ra=\varphi_1(x_2)+\varphi_2(x_1)\qquad \qquad x_1, x_2\in \ggo,\quad \varphi_1, \varphi_2\in \ggo^*.$$ Notice that both $\ggo$ and $\ggo^*$ are isotropic subspaces. \end{exa}
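The ad-invariance of this metric can be checked directly; recall that the bracket of the semidirect product is $[(x_1,\varphi_1),(x_2,\varphi_2)]=([x_1,x_2],\, coad(x_1)\varphi_2-coad(x_2)\varphi_1)$ with $coad(x)\varphi=-\varphi\circ\ad(x)$, so that $$\la [(x_1,\varphi_1),(x_2,\varphi_2)],(x_3,\varphi_3)\ra=\varphi_3([x_1,x_2])-\varphi_2([x_1,x_3])+\varphi_1([x_2,x_3]),$$ an expression which is alternating in the three arguments; hence $\la [a,b],c\ra+\la b,[a,c]\ra=0$ for all $a,b,c\in \ct^*\ggo$.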
\vskip 3pt
A {\em data set} $(\ggo, \vv, \la\,,\,\ra)$ consists of
(i) a Lie algebra $\ggo$ equipped with an ad-invariant metric $\la\,,\,\ra_{\ggo}$,
(ii) a real faithful representation of $\ggo$: $(\pi, \vv)$, without trivial subrepresentations,
(iii) $\la\,,\,\ra$ is a $\ggo$-invariant metric on $\nn=\ggo\oplus
\vv$, i.e. $\la\,,\,\ra_{|_{\ggo\times \ggo}}=\la\,,\,\ra_{\ggo}$ is ad($\ggo$)-invariant and
$\la\,,\,\ra_{|_{\vv\times \vv}}$ is $\pi(\ggo)$-invariant and $\la \ggo,\vv\ra=0$.
A data set $(\ggo, \vv, \la\,,\,\ra)$ determines a 2-step nilpotent Lie group denoted by $N(\ggo, \vv)$ whose Lie algebra is the underlying vector space $\nn=\ggo \oplus \vv$ with the Lie bracket defined by (\ref{brac}). Extend the metric on $\nn$ by left translations after identifying $\nn\simeq T_eN(\ggo, \vv)$, so that $N(\ggo, \vv)$ becomes a naturally reductive pseudo Riemannian 2-step nilpotent Lie group (Theorem \ref{t2}).
We study the isometry group in this case. Let $\hh$ denote the Lie algebra of the group of isometries fixing the identity element; by (\ref{sd}) an element $D\in \hh$ is a skew adjoint derivation which can be written as $D=(A,B)\in \sso(\ggo,\la\,,\,\ra_{\ggo}) \times \sso(\vv,\la\,,\,\ra_{\vv})$ such that $$ B\pi(x) -\pi(x) B =\pi(Ax),\quad \forall x\in \ggo.$$ Denote by $[\,,\,]_{\nn}$ the Lie bracket on $\nn$ and by $[\,,\,]$ the Lie brackets on $\ggo$ and $\End(\vv)$. Then $$\begin{array}{rcl} \pi(A[x,y]) & = & B\pi([x,y])-\pi([x,y])B = B[\pi(x),\pi(y)]-[\pi(x),\pi(y)]B \\ & = & [B, [\pi(x),\pi(y)]]=[[B,\pi(x)],\pi(y)]+[\pi(x),[B,\pi(y)]]\\ & = & [\pi(Ax),\pi(y)]+[\pi(x),\pi(Ay)]=\pi([Ax,y]+[x,Ay]). \end{array} $$ Since $\pi$ is faithful, $$A[x,y]=[Ax,y]+[x,Ay]\qquad \quad\mbox{ for all } x,y \in \ggo, $$ that is, $A\in \Der(\ggo) \cap \sso(\ggo,\la\,,\,\ra_{\ggo})$.
\begin{prop} \label{pi} The group of isometries fixing the identity on a naturally reductive pseudo Riemannian 2-step nilpotent Lie group $N(\ggo, \vv)$ as in (\ref{t2}) has Lie algebra $$\hh=\{(A,B)\in (\Der(\ggo)\cap \sso(\ggo, \la\,,\,\ra_{\ggo}))\times \sso(\vv,\la\,,\,\ra_{\vv})\,:\, [B,\pi(x)]=\pi(Ax)\quad \forall x\in \ggo\}.$$ \end{prop}
Whenever $\ggo$ is semisimple, the Killing form provides an ad-invariant metric on $\ggo$ and every derivation of $\ggo$ is inner; therefore any skew adjoint derivation of $\ggo$ is of the form $\ad(x)$ for some $x\in \ggo$. In this case one can consider $\ggo \subset \hh$, where the action is given as $$ x \cdot (z + v) = \ad(x) z + \pi(x) v\qquad x\in \ggo, \,z+ v\in \nn,$$ $\ad(x)$ being the adjoint map of the semisimple Lie algebra $\ggo$. Thus an element $D=(A,B)\in \hh$ is of the form $$(A,B)=(\ad(x), \pi(x))+(0,B')\qquad x\in \ggo,$$ with $B'=B-\pi(x)\in \End_{\ggo}(\vv)\cap \sso(\vv, \la\,,\,\ra_{\vv})=\ee_{\ggo}$, where $\End_{\ggo}(\vv)$ denotes the set of intertwining operators of the representation $(\pi,\vv)$ of $\ggo$. Since $\ggo$ and $\ee_{\ggo}$ commute, $\hh=\ggo\oplus \ee_{\ggo}$ is a direct sum of Lie algebras; here we identify $\ggo$ with the set $\{(\ad(x), \pi(x)): x\in \ggo\}\subseteq \hh$. This proves the following result.
\begin{cor} \label{coro} In the conditions of (\ref{pi}), with data set $(\ggo,\vv, \la\,,\,\ra)$ and $\ggo$ semisimple, the group of isometries fixing the identity element is $$H=G \times U\qquad \qquad U=\End_{\ggo}(\vv) \cap \Or(\vv, \la\,,\,\ra_{\vv}).$$ \end{cor} \begin{proof} By (\ref{oa}) we have that $$H=\{(\phi, T)\in \Or(\ggo, \la\,,\,\ra_{\ggo}) \times \Or(\vv, \la\,,\,\ra_{\vv}): T\pi(x)T^{-1}=\pi(\phi x), \quad x\in \ggo\}.$$ Hence $\phi=\pi^{-1}\circ \Ad(T)\circ \pi\in \Aut(\ggo)$. Since $\ggo$ is semisimple any automorphism of $\ggo$ is an inner automorphism, thus there exists $g\in G$ such that $\phi=\Ad(g)$. By the paragraph above,
$(\Ad(g),\pi(g))\in H$ and therefore $\pi(g)^{-1} T\in U$. Hence $$(\phi,T)=(\Ad(g), \pi(g)) \cdot (I, \pi(g)^{-1} T),$$ which says $H=G\times U$. \end{proof}
\begin{rem} Compare with \cite{La}. \end{rem}
\section{Geometry and Examples of naturally reductive 2-step nilmanifolds with non degenerate center}
The aim of this section is twofold. In the first part we write down explicitly some geometric features which enter the proof of (\ref{icp}), while in the second part we exhibit examples of naturally reductive pseudo Riemannian 2-step nilpotent Lie groups with non degenerate center.
Recall that a 2-step nilpotent Lie algebra $\nn$ is said to be {\em non singular} if
$\ad(x)$ maps $\nn$ onto $\zz$ for every $x\in \nn-\zz$. Suppose $\nn$ is equipped with a metric as
in (\ref{des2}); then $\nn$ is non singular if and only if $j(x)$ is non
singular for every $x\in \zz-\{0\}$. We shall say that a Lie group is non singular if its corresponding Lie algebra is non singular.
Whenever $N$ is simply connected 2-step nilpotent the exponential map $\exp:\nn \to N$ produces global coordinates. In terms of this map the product on $N$ can be obtained by $$\exp(z_1+v_1) \exp(z_2+v_2)= \exp(z_1+z_2+\frac12 [v_1,v_2] + v_1+v_2) \quad\mbox{ for } z_1, z_2\in \zz, \, v_1, v_2\in \vv.$$
We shall study the geometry of 2-step nilpotent Lie groups when they are endowed with a left invariant (pseudo Riemannian) metric $\la\,,\,\ra$ with respect to which the center is non degenerate. In the Riemannian case a deep study of the geometry can be found in the works of P. Eberlein \cite{Eb1, Eb2}.
The covariant derivative $\nabla$ is left invariant, hence one can view $\nabla$ as a bilinear map on $\nn$, obtaining the formula \begin{equation}\label{nabla}
\nabla_x y= \frac12([x,y]-\ad(x)^*y -\ad(y)^*x) \qquad \quad \mbox{ for }
x,y \in \nn, \end{equation} where $\ad(x)^*$ denotes the adjoint of $\ad(x)$. By writing this explicitly one obtains \begin{equation}\label{nablaex} \begin{array}{rcll} \nabla_x y & = & \frac 12[x,y] & \mbox{ for all } x,y \in \vv\\ \nabla_x y= \nabla _y x & = & -\frac12 j(y) x & \mbox{ for all } x\in \vv, y\in \zz \\ \nabla_x y & = & 0 & \mbox{ for all } x, y\in \zz \end{array} \end{equation}
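For instance, the middle case follows from (\ref{nabla}): if $x\in \vv$ and $y\in \zz$ then $[x,y]=0$, $\ad(y)^*x=0$ since $y$ is central, and $\la \ad(x)^*y,u\ra=\la y,[x,u]\ra=\la j(y)x,u\ra$ for every $u\in\vv$, so that $\ad(x)^*y=j(y)x$ and hence $\nabla_xy=-\frac12 j(y)x$.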
Since translations on the left are isometries, to describe the geodesics of $(N, \la\,,\,\ra)$ it suffices to describe those
geodesics that begin at the identity $e$ of $N$. Let $\gamma(t)$ be a curve
with $\gamma(0) = e$, and let $\gamma'(0) = z_0+v_0 \in \nn$, where $z_0\in
\zz$ and $v_0\in \vv$. In exponential coordinates we write
$$\gamma(t)=\exp(z(t)+v(t)), \quad \mbox{ where } z(t)\in \zz, \,
v(t)\in \vv\quad \mbox{ for all $t$ and } z'(0)=z_0,\, v'(0)=v_0.$$
The curve $\gamma(t)$ is a geodesic if and only if the following equations are satisfied:
\begin{eqnarray} \label{egeo}
v''(t) & = & j(z_0) v'(t) \mbox{ for all }t \in \RR \\
z_0 & \equiv & z'(t) + \frac12 [v'(t), v(t)] \mbox{ for all }t \in \RR
\end{eqnarray}
These equations were derived by A. Kaplan in \cite{Ka} to study
2-step nilpotent groups N of Heisenberg type, but the proof is valid in
general for 2-step nilpotent Lie groups equipped with a left invariant
pseudo Riemannian metric where the center is non degenerate as noted in
\cite{Ge} and \cite{Bo}.
Let $\gamma(t)$ be a geodesic of $N$ with $\gamma(0) = e$. Write
$\gamma'(0) = z_0+v_0$, where $z_0\in \zz$ and $v_0\in \vv$ and identify
$\nn=T_eN$. Then
\begin{equation}\label{geo}
\gamma'(t) = dL_{\gamma(t)}(e^{tj(z_0)} v_0 + z_0) \qquad \mbox{ for all }
t \in \RR
\end{equation} where $e^{tj(z_0)}= \sum_{n=0}^{\infty} \frac{t^n}{n!} j(z_0)^n$. In fact, write $\gamma(t)=\exp (z(t)+v(t))$, where $z(t)$ and $v(t)$ lie in $\zz$ and $\vv$ respectively for all $t \in \RR$. By using the previous equations (\ref{egeo}) one has $$\begin{array}{rcl} \gamma'(t) &=& d\exp_{z(t)+v(t)}(z'(t)+v'(t)) \\ & = & dL_{\gamma(t)}(z'(t) + \frac12 [v'(t), v(t)]+ v'(t))\\ & = & dL_{\gamma(t)}(z_0+ v'(t)). \end{array} $$ Now by integrating the first equation of (\ref{egeo}) one gets $v'(t)=e^{tj(z_0)} v_0$, which proves (\ref{geo}).
For $x,y$ elements in $\nn$ the curvature tensor is defined by $$R(x,y)=[\nabla_x, \nabla_y]-\nabla_{[x,y]}.$$ Using (\ref{nablaex}) one gets \begin{equation}\label{cu} R(x,y)z = \left\{ \begin{array}{ll}
\frac12 j([x,y])z -\frac14j([y,z])x+\frac14 j([x,z])y& \mbox{ for } x,y,z \in \vv, \\ \\
-\frac14 [x,j(y)z] & \mbox{ for } x, y \in \vv, \, z\in \zz,\\ \\
-\frac14 [x,j(z) y]+\frac14 [y,j(z) x] & \mbox{ for } x, z \in \vv, \, y\in \zz,\\ \\
-\frac14 j(y)j(z) x & \mbox{ for } x\in \vv, y, z \in \zz,\\ \\
\frac14 [j(x),j(y)]z & \mbox{ for } x,y \in \zz, z\in \vv,\\\\
0 & \mbox{ for } x,y, z\in \zz. \end{array} \right. \end{equation}
Let $\Pi\subseteq \nn$ denote a non degenerate plane and let $Q$ be given by $$Q(x,y)=\la x,x \ra \la y,y\ra-\la x,y \ra^2.$$
The non degeneracy property is equivalent to asking that $Q(v,w)\neq 0$ for one (hence every) basis $\{v,w\}$ of $\Pi$ \cite{ON}. The sectional curvature of $\Pi$ is the number $K(x,y):=\la R(x,y)y, x\ra /Q(x,y)$, which is independent of the choice of the basis. Now take an orthonormal basis of $\Pi$, that is, a linearly independent set $\{x,y\}$ such that $\la x,y\ra=0$ and $\la x, x \ra =\pm 1$ and $\la y, y \ra=\pm 1$.
From (\ref{cu}) one obtains \begin{equation}\label{sect} K(x,y) = \left\{ \begin{array}{ll} -\frac{3 \varepsilon_1 \varepsilon_2}4 \la [x,y], [x,y]\ra & \mbox{ for }x,y\in \vv \\ \\
\frac{\varepsilon_1 \varepsilon_2}4 \la j(y)x, j(y)x \ra & \mbox{ for }x\in \vv, y \in \zz,\\ \\ 0 & \mbox{ for }x,y \in \zz \end{array} \right. \end{equation} where $\varepsilon_1:=\la x,x \ra$ and $\varepsilon_2:= \la y, y\ra$.
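For instance, for $x,y\in \vv$ orthonormal, (\ref{cu}) gives $R(x,y)y=\frac34 j([x,y])y$, hence $$\la R(x,y)y,x\ra=\tfrac34\la j([x,y])y,x\ra=\tfrac34\la [y,x],[x,y]\ra=-\tfrac34\la [x,y],[x,y]\ra,$$ and dividing by $Q(x,y)=\varepsilon_1\varepsilon_2$ one gets the first line of (\ref{sect}), since $\varepsilon_i^2=1$.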
\vskip 3pt
The Ricci tensor is given by $Ric(x,y)={\rm trace}(z \to R(z,x)y), z\in \nn$ for arbitrary elements $x,y\in \nn$.
\begin{prop}\label{p2} Let $\{z_i\}$ denote an orthonormal basis of $\zz$ and $\{v_j\}$ an orthonormal basis of $\vv$. It holds that $$Ric(x,y)=\left\{ \begin{array}{ll} 0 & \mbox{ for }x\in \vv, y\in \zz\\ \\ \frac12 \sum_i \varepsilon_i \la j(z_i)^2 x, y\ra & \mbox{ for } x, y\in \vv,\, \varepsilon_i=\la z_i, z_i\ra\\ \\ -\frac14 \sum_j {\varepsilon}_j \la j(x) j(y) v_j, v_j\ra & \mbox{ for } x,y \in \zz,\, {\varepsilon}_j=\la v_j, v_j\ra. \end{array} \right. $$ \end{prop}
Due to symmetries of the curvature tensor, the Ricci tensor is a symmetric bilinear form on $\nn$ and hence there exists a symmetric linear transformation $T:\nn \to \nn$ such that $Ric(x,y)=\la Tx,y\ra$ for all $x,y \in \nn$; $T$ is called the Ricci transformation. Let $\{e_k\}$ denote an orthonormal basis of $\nn$; it holds that $$Ric(x,y)=\sum_k \varepsilon_k \la R(e_k,x)y, e_k \ra =\la -\sum_k \varepsilon_k R(e_k,x) e_k, y\ra$$ which implies \begin{equation} \label{rt} T(x)= -\sum_k \varepsilon_k R(e_k,x) e_k, \qquad \mbox{ where } \varepsilon_k=\la e_k, e_k\ra. \end{equation} According to the results in (\ref{p2}) we have that $\zz$ and $\vv$ are $T$-invariant subspaces and $$ T(x) = \left\{ \begin{array}{ll}
\frac12 \sum_i \varepsilon_i \, j(z_i)^2 x & x\in \vv, \quad\varepsilon_i=\la z_i, z_i\ra \\ \\
\frac14 \sum_j \varepsilon_j [v_j, j(x)v_j] & x\in \zz, \qquad \varepsilon_j=\la v_j, v_j \ra, \end{array} \right. $$ where $\{z_i\}$ and $\{v_j\}$ are orthonormal bases of $\zz$ and $\vv$ respectively.
\begin{rem} The formulas above were used in \cite{C-P1} to prove (\ref{icp}).
\end{rem}
\begin{rem} For naturally reductive metrics, in the formulas
above replace the maps $j$ by the corresponding representation $\pi:\ggo
\to \sso(\vv, \la\,,\,\ra_{\vv})$.
\end{rem}
Below we exhibit examples of naturally reductive metrics on 2-step
nilpotent Lie groups. This is achieved by translating the data at the Lie
algebra level to the corresponding simply connected Lie group by
following the key results provided in
Theorem \ref{t2}. We shall make use of
Euclidean and semisimple Lie algebras in order to obtain ad-invariant
metrics. For further details on Lie algebras with ad-invariant metrics see
for instance \cite{F-S,M-R}. Concerning isometries between pseudo Riemannian
2-step nilpotent Lie groups, notice that orthogonal isomorphisms give rise to isometries between the corresponding Lie groups (*).
\vskip 3pt
{\rm (i)} \,{\it Riemannian examples.} Naturally reductive Riemannian nilmanifolds arise by considering a data set with $\ggo$ compact. Recall that if $\ggo$ is compact then $\ggo=\kk \oplus \cc$ where $\kk=[\ggo, \ggo]$ is a compact semisimple Lie algebra and $\cc$ is the center (see \cite{Wa}). In \cite{La} they were studied extensively.
In the Riemannian case the converse of (*) above holds \cite{Wi}.
\vskip 2pt
{\rm (ii)} \,{\it Modified Riemannian.} Take any of those data sets corresponding to the positive definite case and follow the ideas in (\ref{change}). Clearly all requirements in (\ref{t2}) apply and so one can produce naturally reductive pseudo Riemannian metrics of signature $(\dim \ggo, \dim \vv)$.
Let $N(\ggo,\vv)$ denote a Riemannian naturally reductive nilmanifold obtained from a data set $(\ggo, \vv, \la\,,\,\ra)$. Let $\tilde{N}(\ggo,\vv)$ denote the pseudo Riemannian 2-step nilpotent Lie group obtained by changing the sign of the metric on $\ggo$. Therefore by \cite{Wi} $$N(\ggo,\vv)\simeq N'(\ggo',\vv') \qquad \Longleftrightarrow \qquad \nn(\ggo,\vv)\simeq \nn'(\ggo', \vv')$$ and this occurs if and only if there is an isometric isomorphism $\phi:(\ggo, \la\,,\,\ra_+) \to (\ggo', \la\,,\,\ra_+')$ and an isometry $T:(\vv, \la\,,\,\ra_+) \to (\vv',\la\,,\,\ra_+')$ such that $$T\pi(x)T^{-1}=\pi'(\phi x) \qquad \mbox{ for all }x\in \ggo.$$ Clearly $\phi:(\ggo, -\la\,,\,\ra_+) \to (\ggo', -\la\,,\,\ra_+')$ is also an isometric isomorphism, so that the corresponding simply connected Lie groups are isometric. Thus one has what follows.
\begin{prop} If $N(\ggo,\vv)\simeq N'(\ggo',\vv')$ then $\tilde{N}(\ggo,\vv)\simeq \tilde{N}'(\ggo',\vv')$. \end{prop}
In \cite{La} detailed conditions to get the isometries $N(\ggo,\vv)\simeq N'(\ggo',\vv')$ were obtained.
\vskip 2pt
{\rm (iii)} \, {\it Abelian center.} Let $\RR^{2n}$ be equipped with a metric $B$, that is, $B$ is determined by a non singular symmetric linear map $b$ such that $$B(x,y)=\la b x, y\ra, \qquad \la \,,\,\ra \mbox{ the canonical inner product on } \RR^{2n}.$$
Let $t\in \sso(\RR^{2n}, B)$, that is, $t$ satisfies $t^*=-b\,t\,b^{-1}$, where $t^*$ denotes the adjoint with respect to the canonical inner product on $\RR^{2n}$.
Any non singular $t\in \sso(\RR^{2n}, B)$ gives rise to a faithful
representation of $\RR$ to $(\RR^{2n},B)$ without trivial
subrepresentations. Let $\nn$ be the vector space direct sum
$\RR z \oplus \RR^{2n}$ equipped with a metric $\la\,,\,\ra$ such that
$$\la z, \RR^{2n}\ra=0\qquad \la z, z \ra= \lambda\in \RR-\{0\} \qquad
\la\,,\,\ra_{\RR^{2n}}=B.$$
Define a Lie bracket on $\nn$ by
$$[z, y]=0\qquad \forall y\in \nn\qquad \mbox{ and }\quad \la [u,v], z\ra= B(t u,v) \quad u,v\in \RR^{2n}.$$
According to (\ref{t2}) this Lie bracket makes $\nn$ into a 2-step nilpotent Lie algebra and the given metric is naturally reductive; note that the center $\RR z$ is non degenerate since $\lambda\neq 0$. This Lie algebra is isomorphic to the Heisenberg Lie algebra $\hh_{2n+1}$.
Furthermore, the group of isometries fixing the identity element has Lie algebra
\begin{equation} \label{ih} \hh=\mathcal Z_{\sso(\RR^{2n}, B)} (t)
\end{equation}
where $\mathcal Z_{\sso(\RR^{2n}, B)} (t)$ denotes the centralizer of $t$
in $\sso(\RR^{2n},B)$, which can be verified by applying Proposition (\ref{pi}).
In this way one gets naturally reductive metrics on the Heisenberg
Lie group of dimension $2n+1$. The converse also holds.
\begin{prop} Any left invariant pseudo Riemannian metric on the Heisenberg Lie group $H_{2n+1}$ for which the center is non degenerate is naturally reductive.
The isotropy group has Lie algebra $\hh$ as in (\ref{ih}). \end{prop} \begin{proof} Let $\hh_{2n+1}$ denote the Lie algebra of $H_{2n+1}$ and decompose it as an orthogonal direct sum $\hh_{2n+1}=\RR z \oplus \vv$. Then the restriction of the metric to $\vv$ defines a metric $B$ of signature $(k,m)$. The map $j$ defined in (\ref{br}) is skew adjoint with respect to
$B:=\la\,,\,\ra_{|_{\vv\times\vv}}$ and it generates a subalgebra of
$\sso(\vv, B)$. Thus $z \to j(z)$ defines a faithful representation
without trivial subrepresentations since, setting $t:=j(z)$, one has
$$tu=0 \Longleftrightarrow B( tu, v)=0 \quad \forall v\in \vv \Longleftrightarrow \la z,[u,v]\ra=0 \quad \forall v\in \vv.$$
But since the center is non degenerate, the last condition gives $[u,v]=0$ for all $v\in \vv$, which implies $u=0$.
Indeed any non degenerate metric on $\RR z$ is ad-invariant. Hence the hypotheses of Theorem \ref{t2} are satisfied and the metric on $\hh_{2n+1}$ is naturally reductive. \end{proof}
\begin{exa} Let $\hh_3$ denote the Heisenberg Lie algebra of dimension three with basis $e_1, e_2, e_3$ satisfying the Lie bracket relation $[e_1, e_2]=e_3$.
Lorentzian metrics on $\hh_3$ with non degenerate center can be defined by
$$\begin{array}{lrclcl}
(1) & -\la e_3, e_3\ra & = & 1 & = & \la e_1, e_1\ra=\la e_2, e_2\ra \\
(2) & \la e_3, e_3\ra & = & 1 & = & -\la e_1, e_1\ra=\la e_2, e_2\ra
\end{array}
$$
Thus in the basis $e_1,
e_2$ the map $j_1(e_3)$ for the metric in (1) is represented by the matrix
$$\left( \begin{matrix} 0 & 1 \\ -1 & 0 \end{matrix} \right)$$
(compare with (\ref{change})), while for the metric in (2) the map $j_2(e_3)$ is represented by
$$\left( \begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix} \right).$$
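The entries can be read off from (\ref{br}); for instance, for the metric in (2), $$\la j_2(e_3)e_1,e_2\ra=\la [e_1,e_2],e_3\ra=\la e_3,e_3\ra=1 \qquad \mbox{ and }\qquad \la j_2(e_3)e_2,e_1\ra=\la [e_2,e_1],e_3\ra=-1,$$ which, since $\la e_1,e_1\ra=-1$ and $\la e_2,e_2\ra=1$, gives $j_2(e_3)e_1=e_2$ and $j_2(e_3)e_2=e_1$; the matrix is symmetric even though $j_2(e_3)$ is skew adjoint, because the metric on $\vv$ is indefinite.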
See \cite{Ge} for more results concerning Lorentzian metrics.
\end{exa}
The construction on the Heisenberg Lie algebra can be extended in the following way. Let $B$ be a non degenerate symmetric bilinear form on $\RR^k$ and let $t_1, \hdots, t_l$ be linearly independent commuting linear maps in $\sso(\RR^k, B)$ such that $\bigcap_i \ker(t_i)=\{0\}$.
Set $\nn=\RR^l \oplus \RR^k$, direct sum of vector spaces; equip $\RR^l$ with any metric and $\nn$ with the product metric, so that $\la \RR^l, \RR^k\ra=0$.
The triple $(\RR^l, \RR^k, \la\,,\,\ra)$ is a data set which induces a naturally reductive metric on the corresponding simply connected 2-step nilpotent Lie group with Lie algebra $\nn$.
\vskip 2pt {\rm (iv)} \, {\it Semisimple center.} Let $\RR^{p,q}$ denote the real vector space $\RR^{p+q}$ endowed with a metric $\la\,,\ra_{p,q}$ of signature $(p,q)$. Let $\sso(p,q)$ denote the set of skew adjoint transformations for $\la\,,\ra_{p,q}$. This is a semisimple Lie algebra and the Killing form $K$ is a natural ad-invariant metric on $\sso(p,q)$. Indeed $\sso(p,q)$ acts on $\RR^{p,q}$ just by evaluation. Take the direct sum of vector spaces $\nn=\sso(p,q)\oplus \RR^{p,q}$ and equip it with the product metric $\la\,,\,\ra_{\nn}$ such that $\la\,,\,\ra_{\sso(p,q)\times \sso(p,q)}=K$, $\la\,,\,\ra_{\RR^{p,q}\times \RR^{p,q}}=\la\,,\,\ra_{p,q}$ and $\la\sso(p,q), \RR^{p,q}\ra=0$. Thus a Lie bracket can be defined on $\nn$ by $$K( [u,v], A )=\la A u, v\ra_{p,q} \qquad \quad \mbox{ for all }u,v \in \RR^{p,q}, A\in \sso(p,q).$$ The corresponding 2-step nilpotent Lie group, equipped with the left invariant metric induced by the metric above, is a naturally reductive pseudo Riemannian space (Theorem \ref{t2}).
A similar construction can be done by restricting the evaluation action to a subalgebra of $\sso(p,q)$ on which the Killing form is non degenerate.
\vskip 2pt
{\rm (v)} \, {\it Modified tangent semisimple.} The Killing form $K$ is an ad-invariant metric on any semisimple Lie algebra $\ggo$. As usual the tangent Lie algebra $\ct \ggo$ is the semidirect product $\ggo \ltimes \ggo$ via the adjoint representation. We shall modify the algebraic structure on $\ct \ggo$ in order to get a naturally reductive pseudo Riemannian 2-step nilpotent Lie group.
Take the Lie algebra $\ggo$ together with the Killing form and let $\vv$ denote the underlying vector space of $\ggo$, endowed also with the Killing form metric. To this pair $(\ggo, \vv)$ attach
- the metric given by $\la\,,\,\ra_{\ggo}=\la\,,\,\ra_{\vv}=K$ and $\la\ggo,\vv\ra=0$;
- the adjoint representation $\ad:\ggo \to \sso(\vv, K)$.
The adjoint representation is faithful and there are no trivial subrepresentations, so that $(\ggo, \vv, K+K)$ constitutes a data set for a 2-step nilpotent Lie group $N(\ggo, \vv)$ and by (\ref{t2}) it is naturally reductive pseudo Riemannian. Clearly the signature of this metric is twice the signature of $K$, and the isometry group can be computed with (\ref{coro}).
Notice that whenever $\ggo$ is compact the procedure above is a case of the construction for naturally reductive Riemannian nilmanifolds (see (i)).
In the next section we shall see that the 2-step nilpotent Lie algebra above, together with another metric gives rise to a Lie algebra carrying an ad-invariant metric.
\vskip 2pt
\section{Other examples of naturally reductive metrics}
In this section we study 2-step nilpotent Lie algebras with ad-invariant metrics. The corresponding Lie group carries a bi-invariant metric for which the center is degenerate.
An {\em ad-invariant metric} on a Lie algebra $\ggo$ is a non degenerate symmetric
bilinear map $\la\,,\,\ra:\ggo \times \ggo \to \RR$ such that \begin{equation} \la [x,y], z\ra + \la y, [x,z]\ra =0 \qquad \mbox{ for all } x,y, z \in \ggo. \end{equation} Recall that on a connected Lie group $G$ furnished with a left invariant pseudo Riemannian metric $\la\,,\,\ra$, the following statements are equivalent (see \cite{ON} Ch. 11):
\begin{enumerate} \item $\la\,,\,\ra$ is right invariant, hence bi-invariant; \item $\la\,,\,\ra$ is $\Ad(G)$-invariant; \item the inversion map $g\to g^{-1}$ is an isometry of $G$; \item $\la [x,y], z\ra + \la y, [x,z]\ra =0$ for all $x,y, z \in \ggo$; \item $\nabla_xy =\frac12 [x,y]$ for all $x,y\in \ggo$, where $\nabla$ denotes the Levi Civita connection; \item the geodesics of $G$ starting at $e$ are the one parameter subgroups of $G$. \end{enumerate}
Clearly $(G, \la\,,\,\ra)$ is naturally reductive and, by (3), it is a symmetric space. Furthermore one has
\begin{itemize}
\item the Levi-Civita connection is given by $$\nabla_x y=\frac 12 [x,y] \qquad \mbox{ for all } x,y \in \ggo,$$
\item the curvature tensor is $$R(x,y)=\frac14 \ad([x,y])\qquad \mbox{ for }x,y \in \ggo.$$ \end{itemize}
Hence any {\em simply connected 2-step nilpotent Lie group equipped with a bi-invariant metric is flat}.
The set of nilpotent Lie groups carrying a bi-invariant pseudo Riemannian metric is non empty. An element of this set is, for instance, the simply connected Lie group whose Lie algebra is the free 3-step nilpotent Lie algebra on two generators: in fact, the Lie algebra $\nn$ spanned as a vector space by $e_1, e_2, e_3, e_4, e_5$ with the non zero Lie brackets $$[e_1, e_2]=e_3\qquad [e_1, e_3]=e_4\qquad [e_2, e_3]=e_5, $$
carries the ad-invariant metric defined by the non vanishing symmetric relations $$\la e_3, e_3\ra=1=\la e_1, e_5\ra=-\la e_2, e_4\ra.$$
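One can verify the ad-invariance of this metric on the basis; for instance $$\la [e_1,e_2],e_3\ra + \la e_2,[e_1,e_3]\ra=\la e_3,e_3\ra+\la e_2,e_4\ra=1-1=0,$$ and the remaining identities are checked in the same way.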
In contrast, in the Riemannian case a naturally reductive nilpotent Lie group is at most 2-step nilpotent \cite{Go}.
\vskip 3pt
Let $\nn$ denote a 2-step nilpotent Lie algebra with an
ad-invariant metric $\la\,,\,\ra$. It is not hard to prove that $\zz^{\perp}=C(\nn)$ and
therefore the commutator $C(\nn)$ is always an isotropic ideal. Moreover $\nn$ decomposes as an
orthogonal product
\begin{equation}\label{desad} \nn=\tilde{\zz} \times \tilde{\nn}
\end{equation}
where $\tilde{\zz}$ is a non degenerate central ideal and $\tilde{\nn}$ is a 2-step
nilpotent ideal of corank zero, the corank of $\nn$ being the integer $k:=\dim
\zz -\dim C(\nn)$. This follows essentially from the fact that the ad-invariant metric is non
degenerate on any complementary subspace of $C(\nn)$ in $\zz$. Thus by choosing
such a complement $\tilde{\zz}$, so that $\zz=\tilde{\zz}\oplus C(\nn)$, and
its orthogonal complement $\tilde{\zz}^{\perp}$ in $\nn$, so that $\nn=\tilde{\zz}\oplus \tilde{\zz}^{\perp}$, one gets a
decomposition as above (\ref{desad}) with $\tilde{\nn}:=\tilde{\zz}^{\perp}$.
Assume now the corank of $(\nn, \la\,,\,\ra)$ vanishes, so that
$\zz^{\perp}=C(\nn)=\zz$. One can produce an isotropic subspace $\vv_1$ such that
the ad-invariant metric on $\zz\oplus \vv_1$ is non degenerate. Hence one
obtains an orthogonal decomposition of vector spaces
$$\nn=(\zz \oplus \vv_1)\oplus \vv_2, \qquad\mbox{ where }\vv_2=(\zz\oplus
\vv_1)^{\perp}.$$ We claim $\vv_2=0$. In fact for all $x\in \vv_2$ one has $\la x, [u,v]\ra=0$ for all $u,v\in \nn$, implying that $x\in C(\nn)^{\perp}\cap \vv_2=\{0\}$. Hence, writing $\vv:=\vv_1$, there is a splitting of $\nn$ as a direct sum of the isotropic subspaces $\zz$ and $\vv$, so that the metric on $\nn$ is neutral: $$\nn=\zz \oplus \vv.$$
Among other possible constructions, 2-step nilpotent Lie algebras admitting an ad-invariant metric can be obtained as follows. Let $(\vv, \la \,,\, \ra_+)$ denote a real vector space equipped with an inner product and let $\rho:\vv \to \sso(\vv)$ be an injective linear map satisfying \begin{equation}\label{jad} \rho(u)u=0\qquad\mbox{ for all }u\in \vv. \end{equation}
Consider the vector space $\nn:=\vv^*\oplus \vv$ furnished with the canonical neutral metric $\la\,,\,\ra$ and define a Lie bracket on $\nn$ by \begin{equation}\label{brad} \begin{array}{rcl} [x,y] & = & 0 \quad \mbox{ for }x\in \vv^*, y\in \nn\quad \mbox{ and }\quad [\nn,\nn]\subseteq \vv^*\\ \la [u,v], w\ra & = & \la \rho(w) u,v\ra_+ \qquad \mbox{ for all } u,v,w\in \vv. \end{array} \end{equation}
Then $\nn$ becomes a 2-step nilpotent Lie algebra of corank zero for which the metric $\la\,,\,\ra$ is ad-invariant. This construction was called the {\em modified cotangent}, since $\nn$ is linearly isomorphic to the cotangent of $\vv$. Notice that the commutator coincides with the center and it equals $\vv^*$. This allows one to construct 2-step nilpotent Lie algebras of null corank which carry an ad-invariant metric. Furthermore this is basically the way to obtain such Lie algebras, see \cite{Ov}:
\begin{thm} \label{mod} Let $(\nn, \la\,,\,\ra)$ denote a 2-step nilpotent Lie algebra of corank $m$ endowed with an ad-invariant metric. Then $(\nn, \la\,,\,\ra)$ is isometrically isomorphic to an orthogonal direct product of the abelian Lie algebra $\RR^m$ and a modified cotangent. \end{thm}
One can get 2-step nilpotent examples by proceeding as follows. Let $(\ggo, B)$ denote a compact semisimple Lie algebra, $B$ being its Killing form. Since $B$ is negative definite on $\ggo$, $-B$ determines an inner product on $\ggo$. The adjoint map $\ad:\ggo \to \sso(\ggo, B)$ satisfies (\ref{jad}), therefore the vector space $\ggo^* \oplus \ggo$ equipped with the Lie bracket defined in (\ref{brad}) becomes a 2-step nilpotent Lie algebra which carries an ad-invariant metric, the usual neutral metric on $\ggo^*\oplus \ggo$.
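A concrete low dimensional instance, under the standard identification $\sso(3)\simeq \RR^3$, is $\ggo=\sso(3)$ with $\ad(u)v=u\times v$, the cross product, which is skew symmetric for $-B$ and satisfies $\ad(u)u=0$; the resulting Lie algebra on $\RR^{3*}\oplus \RR^3$ has $\la [u,v],w\ra$ proportional to $\det(u,v,w)$ for $u,v,w\in \RR^3$ and is, as a Lie algebra, the six dimensional example treated at the end of this section.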
From (\ref{jad}) it is clear that non singular Lie algebras cannot carry an ad-invariant metric. Note that if a 2-step nilpotent Lie algebra admits an ad-invariant metric, then $\dim \nn -\dim \zz=\dim C(\nn)$. This condition is however not sufficient.
A skew adjoint derivation $\phi$ of such a Lie algebra of zero corank has the form $$\begin{array}{rcll} \phi(z)& = &-A^*z\in \vv^*& \mbox{ for } z\in \vv^*\\
\phi(v)& = & B v + A v & \mbox{ where }Bv\in \vv^*, \, Av\in \vv, \,\mbox{ for } v\in \vv, \end{array} $$ where $A^*$ denotes the dual map of $A$: $A^*\varphi= \varphi \circ A$ for $\varphi \in \vv^*$. On the other hand, according to the results in \cite{Mu}, the isotropy group of isometries fixing the identity element on the corresponding 2-step nilpotent Lie group consists of the self adjoint transformations with respect to $\la\,,\,\ra$. Thus $$I_a(N)\subseteq I(N).$$
Examples of 2-step nilpotent Lie algebras with ad-invariant metrics arise
by taking $\ct^*\nn$, the cotangent of any 2-step nilpotent Lie algebra $\nn$, together with the canonical neutral metric (see (\ref{exad})). Let $\nn=\zz\oplus \vv$ denote a 2-step nilpotent Lie algebra, where $\vv$ is any complementary subspace of $\zz$ in $\nn$. Let $z_1, \hdots, z_m$ be a basis of the center $\zz$ and let $v_1, \hdots, v_n$ be a basis of the vector space $\vv$. Thus $$[v_i, v_j]=\sum_{s=1}^m c_{ij}^s z_s \qquad \quad i, j=1, \hdots n.$$ Let $\ct^*\nn=\nn\ltimes \nn^*$ denote the cotangent Lie algebra obtained via the coadjoint representation, and let
$z^1, \hdots, z^m, v^1, \hdots, v^n$ denote the dual basis of the basis above, adapted to the decomposition $\nn^*=\zz^*\oplus \vv^*$. The non trivial Lie bracket relations coming from the coadjoint action are $$ [v_i, z^j]=\sum_{s=1}^n d_{ij}^s v^s \qquad \quad \mbox{ for } i=1, \hdots n, \ j=1, \hdots m. $$ Thus $[v_i, z^j](v_k)= d_{ij}^k$ and by the definition of the coadjoint action $$ [v_i, z^j](v_k) =-z^j\Big(\sum_{s=1}^m c_{ik}^s z_s\Big)= - c_{ik}^j \qquad \quad i,k=1, \hdots n, \ j=1, \hdots m. $$ Therefore $d_{ij}^k= - c_{ik}^j$ for $i,k=1, \hdots n$, $j=1, \hdots m$.
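For instance, for $\nn=\hh_3$ with $[v_1,v_2]=z_1$ (so that $c_{12}^1=1=-c_{21}^1$ and all other structure constants vanish), the formula $d_{ij}^k=-c_{ik}^j$ gives $$[v_1,z^1]=-v^2, \qquad [v_2,z^1]=v^1,$$ and $\ct^*\hh_3$ is, up to relabelling the basis, the six dimensional Lie algebra of the example below.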
It is clear that if for some basis of $\nn$ the structure constants are rational numbers, then by choosing the union of this basis and its dual basis on $\ct^*\nn$ one gets rational structure constants for $\ct^*\nn$. Thus by the Mal'cev criterion $N$ and its cotangent $\ct^*N$, the simply connected Lie group with Lie algebra $\ct^*\nn$, admit lattices which induce compact quotients (see \cite{O-V,Ra} for instance).
Let $\Gamma\subset \ct^*N$ denote a cocompact lattice of $\ct^*N$. Then $\ct^*N$ acts on the compact nilmanifold $(\ct^*N)/\Gamma$ by left translation isometries once the quotient is endowed with the metric induced by the bi-invariant metric corresponding to the neutral canonical one on $\ct^*\nn$. The tangent space at the class of $e$ can be identified with $\ct^*\nn \simeq T_e((\ct^*N)/\Gamma)$, so that $\ct^*\nn=\{0\}\oplus \ct^*\nn$ and clearly $\Ad(\Gamma)\ct^*\nn\subseteq \ct^*\nn$, which says that $(\ct^*N)/\Gamma$ is homogeneous reductive. Moreover
the induced metric on the quotient satisfies $$\la[x,y],z\ra+ \la [x,z], y\ra=0\qquad \quad \forall x,y, z\in \ct^*\nn.$$
\begin{prop} Let $N$ denote a 2-step nilpotent Lie group. If it admits a cocompact lattice then the cotangent Lie group $\ct^*N$ admits a cocompact lattice $\Gamma$ such that $(\ct^*N)/\Gamma$ is pseudo Riemannian naturally reductive. \end{prop}
\begin{exa} The 2-step nilpotent Lie group $N$ of lowest dimension admitting an ad-invariant metric occurs in dimension six.
This Lie algebra can also be described as the cotangent of the Heisenberg Lie algebra, $\ct^*\hh_3$. Explicitly, let $e_1,e_2,e_3, e_4, e_5, e_6$ be a basis of $\nn$; the Lie brackets are $$[e_4,e_5]=e_1 \qquad [e_4,e_6]=e_2 \qquad [e_5,e_6]=e_3$$ and the ad-invariant metric is defined by the non zero symmetric relations $$1 = \la e_1, e_6\ra =-\la e_2, e_5\ra =\la e_3, e_4\ra.$$
The corresponding simply connected six dimensional Lie group $N$ can be modelled on $\RR^6$ with the group multiplication given by $$\begin{array}{rcl} (x_1,x_2,x_3, x_4,x_5,x_6) \cdot (y_1, y_2,y_3,y_4,y_5,y_6) & = & (x_1 + y_1 + \frac12(x_4y_5-x_5y_4), \\ && x_2+y_2+\frac12(x_4y_6-x_6y_4),\\ & & x_3+y_3+\frac12(x_5y_6-x_6y_5),\\ && x_4+y_4, x_5+y_5, x_6+y_6). \end{array} $$ By the Mal'cev criterion $N$ admits a cocompact lattice $\Gamma$.
By inducing the bi-invariant metric of $N$ to $N/\Gamma$ one gets an
invariant metric on $N/\Gamma$, and in this way $N/\Gamma$ is
a pseudo Riemannian naturally reductive compact nilmanifold.
For instance the subgroup of $N$ given by $$\Gamma=\{(k_1,k_2,k_3, 2k_4, k_5, 2k_6)\,:\, k_i\in \ZZ, \ i=1,\hdots,6\}$$ is a cocompact lattice of $N$, so that $N/\Gamma$ is a compact homogeneous manifold.
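That $\Gamma$ is indeed a subgroup can be seen from the multiplication law above; for instance, the first coordinate of the product of two such elements is $$k_1+l_1+\tfrac12\big(2k_4\, l_5-k_5\, 2l_4\big)=k_1+l_1+(k_4l_5-k_5l_4)\in\ZZ,$$ and the second and third coordinates are handled in the same way, while discreteness and cocompactness are clear.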
\end{exa}
\
{\sc Acknowledgements.} The author expresses her deep gratitude to Victor Bangert for his support during her stay at the Universit\"at Freiburg, where this work was done.
The author is partially supported by DAAD, CONICET, ANPCyT and Secyt - Universidad Nacional de C\'ordoba.
\end{document}
\begin{document}
\begin{abstract}
We derive eigenvalue estimates for concentration operators associated with the discrete Fourier transform and two concentration domains satisfying certain regularity conditions. These conditions are met, for example, when the discrete domain, contained in a lattice, is obtained by discretization of a suitably regular domain in the Euclidean space. As a limit, we obtain eigenvalue estimates for Fourier concentration operators associated with two suitably regular domains in the Euclidean space.
Our results cover for the first time non-convex and non-symmetric concentration models in the spatial and frequency domains, as demanded by numerous applications that exploit the expected approximate low dimensionality of the modeled phenomena. The proofs build on Israel's work on one dimensional intervals [arXiv: 1502.04404v1]. The new ingredients are the use of redundant wave-packet expansions and a dyadic decomposition argument to obtain Schatten norm estimates for Hankel operators. \end{abstract}
\maketitle
\section{Introduction and results} Fourier concentration operators act by incorporating a spatial cut-off and a subsequent frequency cut-off into the Fourier inversion formula. The chief example concerns the Fourier transform on the Euclidean space $\mathcal{F}: L^2(\mathbb{R}^d) \to L^2(\mathbb{R}^d)$; the cut-offs are then given by the indicator functions of two compact domains $E,F \subseteq \mathbb{R}^d$, and the concentration operator is \begin{align}\label{eq_S}
S f=\chi_F \mathcal{F}^{-1}\chi_E \mathcal{F} \chi_F f, \qquad f \in L^2(\mathbb{R}^d). \end{align} These operators, and their analogues defined with respect to the discrete Fourier transform $L^2([-1/2,1/2]^d) \to \ell^2(\mathbb{Z}^d)$, play a crucial role in many analysis problems and fields of application where the shapes of $E,F$ are dictated by various physical constraints or measurement characteristics \cite{MR140732,MR140733,MR147686,MR2883827}.
The basic intuition is that the concentration operator \eqref{eq_S} is approximately a projection with rank $\tr(S) = |E| \cdot |F|$. The error of such heuristic is encoded by the so-called \emph{plunge region} \begin{align}\label{eq_Ms} M_\varepsilon(S)= \{ \lambda \in \sigma(S): \varepsilon < \lambda < 1- \varepsilon\}, \qquad \varepsilon \in (0,1/2), \end{align} consisting of intermediate eigenvalues. Asymptotics for the cardinality of $M_\varepsilon(S)$ go back to Landau and Widom \cite{MR593228,MR487600} for the case of one dimensional intervals $E=[-a,a]$, $F=[-b,b]$ and read \begin{align}\label{eqc} \# M_\varepsilon(S) = c \cdot \log(ab) \cdot \log(\tfrac{1-\varepsilon}{\varepsilon}) + o(\log(ab)), \qquad \mbox{as } ab \to \infty, \end{align} for an explicit constant $c$ that depends on the normalization of the Fourier transform. The modern spectral theory of Wiener-Hopf operators gives similar asymptotics for concentration operators associated to rather general multi-dimensional domains subject to increasing isotropic dilations.
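The trace heuristic can be made explicit: $S$ acts as the integral operator $$ Sf(x)=\chi_F(x)\int_{F} \big(\mathcal{F}^{-1}\chi_E\big)(x-y)\, f(y)\,dy, \qquad f \in L^2(\mathbb{R}^d),$$ whose kernel is continuous, so that $$\tr(S)=\int_F \big(\mathcal{F}^{-1}\chi_E\big)(0)\,dx = |E|\cdot|F|,$$ since, with the normalization of the Fourier transform used below, $\big(\mathcal{F}^{-1}\chi_E\big)(0)=|E|$.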
While \eqref{eqc} precisely describes the cardinality of the set $M_\varepsilon(S)$ in the limit $ab \to \infty$, the asymptotic is often insufficient for many purposes because of the quality of the error terms. Indeed, the error term in \eqref{eqc} depends in an unspecified way on the spectral threshold $\varepsilon$, which precludes applications where $\varepsilon$ is allowed to vary with the domains $E,F$. Such limitations have motivated a great amount of work aimed at deriving \emph{upper bounds} for $\# M_\varepsilon(S)$ that are \emph{threshold robust}, that is, bounds that are effective for concrete concentration domains and explicit in their dependence on the spectral threshold \cite{is15,KaRoDa,MR3887338,MR3926960,BoJaKa,Os}, significantly improving on more classical results in this spirit \cite{MR169015}.
With the exception of \cite{is15}, the mentioned articles on threshold-robust spectral bounds for Fourier concentration operators concern only the one dimensional case, because they exploit a connection with a Sturm–Liouville equation which is specific to that setting. Such methods can be applied to some extent to higher dimensional domains enjoying special symmetries \cite{MR181766, MR673519}. On the other hand, while \cite{is15} studies Fourier concentration operators associated with one dimensional intervals, the technique introduced by Israel is very general, as it relies on an explicit almost diagonalization of the concentration operator. In fact, as we were finishing this work, the preprint \cite{isma23} provided an extension of \cite{is15} to higher dimensions (see Sections \ref{sec_e} and \ref{sec_r}).
In this article we derive upper bounds for the number of intermediate eigenvalues \eqref{eq_Ms} associated with either the continuous or discrete Fourier transforms. In contrast to other results in the literature, we obtain estimates that apply to two suitably regular multi-dimensional spatial and frequency domains, which do not need to exhibit special symmetries. In this way, our results cover for the first time many setups of practical relevance, see Section \ref{sec_sig}.
Our proofs build on Israel's technique \cite{is15} and incorporate novel arguments to treat non-convex domains and their discrete counterparts. First, instead of the orthonormal wave-packet basis from \cite{is15}, we use more versatile redundant expansions (frames). Second, we introduce a dyadic decomposition method implemented by means of Schatten norm estimates for Hankel operators, see Section \ref{sec_r}.
\subsection{The Euclidean space}\label{sec_e}
Given two compact sets $E,F \subseteq \mathbb{R}^d$, the \emph{Fourier concentration operator} $S: L^2(\mathbb{R}^d) \to L^2(\mathbb{R}^d)$ is defined by \eqref{eq_S} where $\mathcal{F}$ denotes the Fourier transform
\begin{align}\label{eq_cft}
\mathcal{F} f(\xi) = \int_{\mathbb{R}^d} f(x) e^{-2\pi i x \xi} \,dx.
\end{align}
A set $E \subseteq \mathbb{R}^d$ is said to have a \emph{maximally Ahlfors regular boundary} if
there exists a constant $\kappa_{\partial E}>0$ such that
\begin{align*}
\mathcal{H}^{d-1}\big(\partial E \cap B_{r}(x) \big)\geq \kappa_{\partial E} \cdot r^{d-1}, \qquad 0 < r \leq \mathcal{H}^{d-1}(\partial E)^{1/(d-1)}, \quad x \in \partial E.
\end{align*}
Here, $\mathcal{H}^{d-1}$ denotes the $(d-1)$-dimensional Hausdorff measure. The term maximal in the definition refers to the range of $r$ for which the estimate is required to hold. See Section \ref{sec:pre} for more context on Ahlfors regularity. In what follows, we denote for short $|\partial E| = \mathcal{H}^{d-1}\big(\partial E)$.
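For instance (an elementary example, not needed in the sequel), the boundary of the unit square $E=[0,1]^2\subseteq\mathbb{R}^2$ is maximally Ahlfors regular with $\kappa_{\partial E}=1$: since arc-length along $\partial E$ dominates Euclidean distance, for every $x\in\partial E$ and $0<r\leq \mathcal{H}^{1}(\partial E)=4$ the set $\partial E\cap B_r(x)$ contains all boundary points at arc-length distance at most $r$ from $x$, so that $\mathcal{H}^{1}\big(\partial E\cap B_r(x)\big)\geq\min\{2r,4\}\geq r$.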
In this article we prove the following.
\begin{theorem}\label{th1}
Let $E,F\subseteq\mathbb{R}^d$, $d\geq 2$, be compact domains with maximally Ahlfors regular boundaries, with constants $\kappa_{\partial E},\kappa_{\partial F}$ respectively, and assume that $|\partial E||\partial F|\ge 1$. Consider the concentration operator \eqref{eq_S} and its eigenvalues $\{\lambda_n: n \in \mathbb{N}\}$.
Then for every $\alpha\in(0,1/2)$, there exists $A_{\alpha,d}\geq 1$ such that for $\varepsilon\in(0,1/2)$: \begin{multline}\label{eq_taa}
\#\big\{n\in\mathbb{N}:\ \lambda_n \in(\varepsilon,1-\varepsilon)\big\}
\leq A_{\alpha,d} \cdot \frac{|\partial E|}{\kappa_{\partial E} } \cdot \frac{|\partial F|}{\kappa_{\partial F} } \cdot \log\left( \frac{|\partial E||\partial F|}{ \kappa_{\partial E}\ \varepsilon}\right)^{2d(1+\alpha)+1}. \end{multline}
\end{theorem} The strength of Theorem \ref{th1} lies in the fact that the right-hand side of \eqref{eq_taa} depends only mildly on $\varepsilon$, in agreement with the Landau-Widom asymptotic formula for one dimensional intervals \eqref{eqc}. In contrast, cruder estimates based on computing first and second moments of concentration operators, as is often done in sampling theory \cite{la67}, lead to error bounds of the order $O(1/\varepsilon)$.
A result closely related to Theorem \ref{th1} is presented in the recent preprint \cite{isma23}. For $F=[0,1]^d$ and $E=r K$, where $r\geq1$ is a dilation parameter and $K \subset B_1(0) \subset \mathbb{R}^d$ is a \emph{convex, coordinate symmetric domain}, \cite[Theorem 1.1]{isma23} gives the following bound for $\varepsilon \in (0,1/2)$: \begin{align}\label{eq_d} \#\big\{n\in\mathbb{N}:\ \lambda_n \in(\varepsilon,1-\varepsilon)\big\} \leq C_d \cdot \max\{ r^{d-1} \log(r/\varepsilon)^{5/2}, \log(r/\varepsilon)^{5d/2}\}. \end{align} For large $r$, the right-hand side of \eqref{eq_d} becomes $O_{d}\big(r^{d-1} \log(r/\varepsilon)^{5/2}\big)$ while Theorem \ref{th1} gives the weaker bound $O_{\alpha,d}\big(r^{d-1} \log(r/\varepsilon)^{2d(\alpha+1)+1}\big)$. On the other hand, Theorem \ref{th1} applies to possibly non-convex, non-coordinate-symmetric and non-dilated domains $E$, and other regular domains $F$ besides cubes.\footnote{As pointed out in \cite{isma23}, when $E$ and $F$ are both cubes, even slightly stronger estimates hold, cf. \cite[Theorem 1.2]{isma23}.}
Our work is in great part motivated by applications where concentration domains may be non-convex, such as the complement of a disk within a two dimensional square; see Section \ref{sec_sig}. Such a domain $E$ is allowed by Theorem \ref{th1} (and Theorems \ref{th2} and \ref{th3} below) and has moreover a favorable regularity constant $\kappa_{\partial E}$.
\subsection{Discretization of continuous domains}
Theorem \ref{th1} is obtained as a limiting case of a more precise result concerning a discrete setting, which is our main focus.
We consider a \emph{resolution parameter} $L>0$ and define the \emph{discrete Fourier transform} $\mathcal{F}_L: L^2((-L/2,L/2)^d)\to \ell^2(L^{-1}\mathbb{Z}^d)$ by
\begin{align}\label{eq_dft}
\mathcal{F}_L f(k/L)=\int_{(-L/2,L/2)^d} f(x) e^{-2\pi i x k/L} dx, \qquad k \in \mathbb{Z}^d.
\end{align}
We think of $L$ as a discretization parameter for an underlying continuous problem.
Let us define the \emph{discretization at resolution} $L> 0$ of a domain $E\subseteq\mathbb{R}^d$ by
\begin{align}\label{eq_eL}
E_L=L^{-1}\mathbb{Z}^d\cap E.
\end{align}
Given two compact domains $E \subseteq \mathbb{R}^d$ and $F\subseteq (-L/2,L/2)^d$, consider the
\emph{discretized concentration operator} $T:L^2(F)\to L^2(F)$ given by
\begin{align}\label{eq_T2}
T=\chi_F \mathcal{F}_L^{-1}\chi_{E_L}\mathcal{F}_L.
\end{align}
Our second result reads as follows.
\begin{theorem}\label{th2}
Let $E,F\subseteq\mathbb{R}^d$, $d\geq 2$, be compact domains with maximally Ahlfors regular boundaries, with constants $\kappa_{\partial E},\kappa_{\partial F}$ respectively, and assume that $|\partial E||\partial F|\ge 1$.
Fix a discretization resolution $L\geq |\partial E|^{-1/(d-1)}$ such that $F\subseteq (-L/2,L/2)^d$ and consider the discretized concentration operator \eqref{eq_T2} and its eigenvalues $\{\lambda_n: n \in \mathbb{N}\}$.
Then for every $\alpha\in(0,1/2)$ there exists $A_{\alpha,d}\geq 1$ such that for $\varepsilon\in(0,1/2)$:
\begin{multline}\label{eq_i7}
\#\big\{n\in\mathbb{N}:\ \lambda_n\in(\varepsilon,1-\varepsilon)\big\}
\leq A_{\alpha,d} \cdot \frac{|\partial E|}{\kappa_{\partial E} } \cdot \frac{|\partial F|}{\kappa_{\partial F} } \cdot \log\left( \frac{|\partial E||\partial F|}{ \kappa_{\partial E}\ \varepsilon}\right)^{2d(1+\alpha)+1}.
\end{multline}
\end{theorem}
The operator \eqref{eq_T2} has finite rank, so the eigenvalue sequence on the left-hand side of \eqref{eq_i7} is finitely supported (with a bound that depends on the resolution parameter $L$). In contrast, the right-hand side of \eqref{eq_i7} is independent of $L$. In applications, this helps capture the transition between analog models and their finite computational counterparts, rigorously showing that the latter remain faithful to the former.
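In fact, the nonzero spectrum of \eqref{eq_T2} can be read off a finite matrix; we record this observation for orientation, as it is not needed in the proofs. Since $T=\chi_F P_{E,L}$ with $P_{E,L}=\mathcal{F}_L^{-1}\chi_{E_L}\mathcal{F}_L$, the operator $T$ and $P_{E,L}\chi_F P_{E,L}$ share their nonzero eigenvalues (see also Section \ref{sec_step2}), and, in the Fourier representation, the latter acts on sequences supported on $E_L$ through the Hermitian matrix
\begin{align*}
\Big( L^{-d}\, \widehat{\chi_F}\big(\tfrac{k-j}{L}\big) \Big)_{k/L,\,j/L \in E_L}, \qquad \widehat{\chi_F}(\xi)=\int_F e^{-2\pi i x \xi}\, dx,
\end{align*}
whose size is $\#E_L$. Small instances of Theorem \ref{th2} can thus be explored numerically by diagonalizing this matrix.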
\subsection{The discrete Fourier transform}
Finally, we consider a discrete concentration problem associated with the usual \emph{discrete Fourier transform}, denoted
\[\mathcal{F}_1: L^2((-1/2,1/2)^d)\to \ell^2(\mathbb{Z}^d)\]
for consistency with \eqref{eq_dft}.
Given a finite set $\Omega \subseteq \mathbb{Z}^d$ and $F \subseteq (-1/2,1/2)^d$, the \emph{discrete Fourier concentration operator} $T:L^2(F)\to L^2(F)$ is defined as
\begin{align}\label{eq_T}
T = \chi_F \mathcal{F}_1^{-1}\chi_{\Omega}\mathcal{F}_1 .
\end{align}
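As with \eqref{eq_T2}, the nonzero eigenvalues of \eqref{eq_T} coincide with those of the $\#\Omega\times\#\Omega$ Hermitian matrix $\big(\widehat{\chi_F}(k-j)\big)_{k,j\in\Omega}$, where $\widehat{\chi_F}(\xi)=\int_F e^{-2\pi i x \xi}\,dx$. The following minimal sketch (an illustration only, with hypothetical parameter choices; it assumes that $F$ is a centered rectangle so that $\widehat{\chi_F}$ is explicit) counts the eigenvalues lying in the plunge region $(\varepsilon,1-\varepsilon)$:
\begin{verbatim}
# Illustration only: plunge-region count for the discrete Fourier
# concentration operator, with F a centered rectangle in (-1/2,1/2)^d.
import numpy as np

def plunge_count(omega, widths, eps):
    # omega: (N, d) integer array listing Omega; widths: side lengths of F.
    omega = np.asarray(omega, dtype=float)
    diff = omega[:, None, :] - omega[None, :, :]        # all differences k - j
    # Fourier transform of the indicator of the rectangle:
    # prod_i w_i * sinc(w_i * xi_i), with numpy's sinc(x) = sin(pi x)/(pi x).
    entries = np.prod([w * np.sinc(w * diff[..., i])
                       for i, w in enumerate(widths)], axis=0)
    eigs = np.linalg.eigvalsh(entries)     # real; theoretically within [0, 1]
    return int(np.sum((eigs > eps) & (eigs < 1 - eps)))

# Example: Omega = frequencies in a 33 x 33 square, F = a square of side 1/2.
grid = range(-16, 17)
omega = [(a, b) for a in grid for b in grid]
print(plunge_count(omega, widths=(0.5, 0.5), eps=1e-3))
\end{verbatim}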
The discrete boundary of a set $\Omega\subseteq \mathbb{Z}^d$ is given by
\begin{align}\label{eq_b}
\partial \Omega = \{k \in \Omega: \min\{|j-k|: j \in \mathbb{Z}^d \smallsetminus \Omega\} = 1\}.
\end{align}
We say that $\Omega\subseteq \mathbb{Z}^d$ has a \emph{maximally Ahlfors regular boundary} if there exists a constant $\kappa_{\partial \Omega}>0$ such that
\begin{align*}
\#\big(\partial \Omega\cap (k+ [-n/2,n/2)^d)\big)\geq \kappa_{\partial \Omega} \cdot n^{d-1},\quad 1\leq n\leq (\#\partial \Omega)^{1/(d-1)},\quad k\in \partial \Omega.
\end{align*}
(Note the slight notational abuse: though $\Omega \subseteq \mathbb{Z}^d \subseteq \mathbb{R}^d$, the notions of boundary and boundary regularity are to be understood in the discrete sense.)
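To make the discrete notions concrete, the following sketch (an illustration only; the specific sizes are hypothetical) builds the lattice domain from item (c) of Section \ref{sec_sig}, namely the grid points of a square lying outside a central disk, computes its discrete boundary \eqref{eq_b}, and evaluates the ratio appearing in the regularity condition for a few windows:
\begin{verbatim}
# Illustration only: discrete boundary (eq_b) and empirical Ahlfors ratios
# for a square of grid points with a central disk removed (d = 2).
import numpy as np

def discrete_boundary(points):
    # Points of the set having a lattice neighbour at Euclidean distance 1
    # outside the set, as in (eq_b).
    pts = set(map(tuple, points))
    nbrs = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    return np.array([p for p in pts
                     if any((p[0] + dx, p[1] + dy) not in pts
                            for dx, dy in nbrs)])

R = 40
grid = [(a, b) for a in range(-R, R + 1) for b in range(-R, R + 1)]
omega = np.array([p for p in grid if np.hypot(*p) > R / 2])
bdry = discrete_boundary(omega)

# Ratio #(boundary within k + [-n/2, n/2)^2) / n, which the regularity
# condition bounds from below by kappa, for one base point and a few n.
k0 = bdry[0]
for n in (4, 8, 16):
    inside = np.all((bdry - k0 >= -n / 2) & (bdry - k0 < n / 2), axis=1)
    print(n, inside.sum() / n)
\end{verbatim}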
Our last result reads as follows.
\begin{theorem}\label{th3}
Let $d\ge 2$ and let $\Omega\subseteq \mathbb{Z}^d$ be a finite set with maximally Ahlfors regular boundary and constant $\kappa_{\partial \Omega}$. Let $F\subseteq (-1/2,1/2)^d$ be compact with maximally Ahlfors regular boundary and constant $\kappa_{\partial F}$. Assume that $\#\partial\Omega \cdot |\partial F|\ge 1$, and consider the concentration operator \eqref{eq_T} and its eigenvalues $\{\lambda_n: n \in \mathbb{N}\}$.
Then for every $\alpha\in(0,1/2)$ there exists $A_{\alpha,d}\geq 1$ such that for $\varepsilon\in(0,1/2)$:
\begin{multline*}
\#\big\{n\in\mathbb{N}:\ \lambda_n\in(\varepsilon,1-\varepsilon)\big\}
\leq A_{\alpha,d} \cdot \frac{\#\partial \Omega}{\kappa_{\partial \Omega}} \cdot \frac{|\partial F|}{\kappa_{\partial F} } \cdot \log\left( \frac{\#\partial \Omega \cdot |\partial F|}{ \kappa_{\partial \Omega}\ \varepsilon}\right)^{2d(1+\alpha)+1}.
\end{multline*}
\end{theorem}
\subsection{One sided estimates}
We remark that bounds on the number of intermediate eigenvalues, as in Theorems \ref{th1}, \ref{th2} and \ref{th3}, can be equivalently formulated in terms of the distribution function
\begin{align*}
N_\varepsilon(S) := \{ n \in \mathbb{N}: \lambda_n > \varepsilon\}, \qquad \varepsilon\in(0,1).
\end{align*}
\begin{rem}\label{rem_N}
For example, for $\varepsilon\in(0,1)$ under the assumptions of Theorem \ref{th1} we have
\begin{align}\label{eq_N}
\big|\#N_\varepsilon(S)-|E|\cdot |F|\big|\leq C_{\alpha,d} \cdot \frac{|\partial E|}{\kappa_{\partial E} } \cdot \frac{|\partial F|}{\kappa_{\partial F} } \cdot \log\left( \frac{|\partial E||\partial F|}{ \kappa_{\partial E}\ \min\{\varepsilon, 1-\varepsilon\}}\right)^{2d(1+\alpha)+1}.
\end{align}
\end{rem}
See Section \ref{sec_rem} for details.
\subsection{Significance}\label{sec_sig}
Fourier concentration operators arise in problems where functions are assumed to be supported on a certain domain $F$ and their Fourier transforms are known or measured on a second domain $E$. The insight that the class of such functions is approximately a vector space of dimension $|E| \cdot |F|$ is at the core of many classical and modern physical and signal models, and measurement and estimation methods \cite{MR2883827} \cite[``A Historical View'']{grunbaum2022serendipity}.
While in classical applications, such as telecommunications, the concentration domains are rectangles or unions thereof, the increasingly complex geometric nature of data and physical models has sparked great interest in spatial and frequency concentration domains with possibly intricate shapes. To name a few: (a) In geophysics and astronomy the power spectrum of various quantities of interest is often assumed to be bandlimited to a disk or annulus and needs to be estimated from measurements taken on a domain as irregular as a geological continent \cite{Simons+2003a,Han+2008a,MR2812375,simons2000isostatic}; (b) The Fourier extension algorithm approximates a function on an arbitrary domain by a Fourier series on an enclosing box and crucially exploits the expected moderate size of the plunge region \eqref{eq_Ms} \cite{MR3803283,MR3989238}; (c) Noise statistics are often estimated from those pixels of a square image located outside a central disk, which is assumed to contain the signal of interest --- thus, the need to sample pure noise leads one to consider the complement of a disk within a two dimensional square as concentration domain (or, more realistically, a set of grid points within that domain) \cite{bhzhsi16, MR3634987, ansi17, anro20}.
The expected low dimensionality of physical and signal models based on spatio-temporal constraints is often exploited without direct computation of eigendecompositions of Fourier concentration operators (which may in fact be ill-posed in the absence of symmetries). Rather, the expected asymptotic spectral profile of such operators informs strategies based on randomized linear algebra and information theory. The quest to analyze such models and methods has motivated a great amount of recent research \cite{is15,KaRoDa,MR3887338,MR3926960,BoJaKa,Os, isma23}, which has led to deep and far-reaching improvements over more classical non-asymptotic results \cite{MR169015}. However, the mentioned literature covers only rectangular or other convex and symmetric domains, which precludes applications involving complex geometries. Due to this limitation, signal and measurement models with a complex geometric nature are often analyzed based on estimates much cruder than \eqref{eq_taa}, which have error factors of the order $1/\varepsilon$ instead of $\log(1/\varepsilon)^\alpha$ \cite{MR2812375,MR3803283, anro20}, and thus poorly reflect the remarkable practical effectiveness of low dimensional models. Estimates on measurement/reconstruction complexity, estimation confidence, or approximation/stability trade-offs are as a consequence orders of magnitude too conservative. Our work bridges this theory-to-practice gap by providing the first spectral deviation estimates for Fourier concentration operators that are valid without simplifying symmetry or convexity assumptions and that match the precision of what is rigorously known for intervals \cite{slepo61,sle78, is15,KaRoDa,MR3887338,MR3926960,BoJaKa,Os, isma23}. In addition, while Euclidean Fourier concentration operators help analyze
computational schemes in their asymptotic continuous limit, our results for the discrete Fourier transform apply, more quantitatively, to finite settings, as they occur in many applications.
\subsection{Methods and related literature}\label{sec_r}
We work for the most part with the discrete Fourier transform and then obtain consequences for the continuous one by a limiting argument. Theorem \ref{th2} is thus a more precise and quantitative version of Theorem \ref{th1}, and is proved in two steps. We first revisit Israel's argument \cite{is15} and adapt it to prove eigenvalue estimates when one of the domains is a rectangle and the other one is a general multi-dimensional discrete domain (Theorem \ref{th:cube} below). These estimates are slightly stronger than those in Theorem \ref{th2}, and the extra precision is exploited in the subsequent step. We follow the method of almost diagonalization with wave-packets, which we achieve, unlike \cite{is15}, through a redundant system (frame) instead of an orthonormal basis. The versatility of redundant expansions helps us avoid requiring symmetries from the concentration domain.
The second step is a decomposition, rescaling, and dyadic approximation argument, implemented by means of $p$-Schatten norm estimates for certain Hankel operators, and especially by quantifying those estimates as a function of $p$, as $p \to 0^+$. In this way we reduce the problem to the case where one of the domains is a rectangle, while relying on the refined estimates derived in the first step.
Our intermediate result, Theorem \ref{th:cube}, is close in spirit to Theorem 1.1 in \cite{isma23} (which appeared as we were finishing this article). The estimates in \cite{isma23}, formulated in the context of the continuous Fourier transform and concerning dilated convex domains, are stronger than what follows from Theorem \ref{th:cube} in that regime, as \cite[Theorem 1.1]{isma23} involves smaller powers of a certain logarithmic factor (see also Section \ref{sec_e} and \eqref{eq_d}). On the other hand, Theorem \ref{th:cube} concerns a sufficiently regular possibly non-convex and non-symmetric domain, and covers the discrete Fourier transform (while Theorem \ref{th2} concerns two such domains).
We also mention our recent work on concentration operators for the short-time Fourier transform \cite{maro21}, that also makes use of Ahlfors regularity and Schatten norm estimates. Though the goals and results are philosophically similar to those in the present article, the settings are rather different from the technical point of view. Indeed, the arguments used in \cite{maro21} rely on the rapid off-diagonal decay of the reproducing kernel of the range of the short-time Fourier transform, and do not seem to be applicable to Fourier concentration operators.
The remainder of the article is organized as follows. Section \ref{sec:pre} sets up the notation and provides background on boundary regularity. Section \ref{sec_i} revisits and suitably adapts the technique from \cite{is15}. This is used in Section~\ref{sec_gc} to prove Theorem \ref{th:cube}. Theorem \ref{th2} is proved in Section \ref{sec_step2}, Theorem \ref{th1} is proved in Section \ref{sec:con}, and Theorem \ref{th3} is proved in Section \ref{sec_dis}. Remark \ref{rem_N} is proved in Section \ref{sec_rem}.
\section{Preliminaries}\label{sec:pre}
\subsection{Notation}
We shall focus on Theorem \ref{th2} and set up the notation accordingly. Theorems \ref{th1} and \ref{th3} will be obtained afterwards as an application of Theorem \ref{th2}.
We denote cubes by $Q_a=[-a/2,a/2)^d$. The Euclidean norm on $\mathbb{R}^d$ is denoted $|\cdot|$.
For two non-negative functions $f,g$ we write $f \lesssim g$ if there exists a constant $C$ such that $f(x) \leq C g(x)$, and write $f \asymp g$ if $f \lesssim g$ and $g \lesssim f$. The implied constant is allowed to depend on the dimension $d$ and the parameter $\alpha$ from Theorems \ref{th1}, \ref{th2} and \ref{th3}, but not on other parameters.
We enumerate the eigenvalues of a compact, positive, self-adjoint operator $L: \mathcal{H} \to \mathcal{H}$ acting on a Hilbert space $\mathcal{H}$
as follows:
\begin{align}
\label{eqord}
\lambda_k=\inf\{\|L-S\| : \ S\in \mathcal{L}(\mathcal{H}),\,\dim(\mathrm{Range} (S))<k\},
\qquad k \geq 1.
\end{align}
Then $\{ \lambda_k: k \geq 1 \} \smallsetminus \{0\} = \sigma(L) \smallsetminus \{0\}$ as sets with multiplicities --- see, e.g., \cite[Lemma 4.3]{dj}.
Recall that the \emph{discretization at resolution} $L> 0$ of a set $E \subseteq \mathbb{R}^d$ is defined by \eqref{eq_eL}. We also write \[E_L^c= L^{-1}\mathbb{Z}^d\smallsetminus E_L\] and $\partial E_L$ for the points in $E_L$ which are at distance $L^{-1}$ from $E_L^c$:
\begin{align*}
\partial E_L = \big\{k/L \in E_L: \min\{|k/L-j/L|: j/L \in E_L^c\} = L^{-1}\big\}.
\end{align*}
For $L=1$ this is consistent with \eqref{eq_b}.
We will work with the discrete Fourier transform $\mathcal{F}_L: L^2((-L/2,L/2)^d)\to \ell^2(L^{-1}\mathbb{Z}^d)$ given by \eqref{eq_dft} and reserve the notation $\mathcal{F}f$ or $\widehat{f}$ for the continuous Fourier transform \eqref{eq_cft}. Note that if $\supp (f)\subseteq (-L/2,L/2)^d$, then $\mathcal{F}_L f(k/L)=\mathcal{F} f(k/L)$ for every $k\in \mathbb{Z}^d$.
We also write $P_{E,L}=\mathcal{F}_L^{-1}\chi_{E_L}\mathcal{F}_L.$ For $F\subseteq (-L/2,L/2)^d$ we define the operator $T=T_{E,F,L}:L^2(F)\to L^2(F)$ by
\begin{align*}
T=T_{E,F,L}=\chi_F P_{E,L}
\end{align*}
and let $\lambda_n=\lambda_n(T)$ denote its eigenvalues as in \eqref{eqord}. An easy computation shows that
\[T_{t^{-1}E,tF,tL}=\mathcal{M}_{t^{-1}}T_{E,F,L}\mathcal{M}_t, \qquad t>0,\]
where $\mathcal{M}_t$ denotes the \emph{dilation operator}
\begin{align*}
\mathcal{M}_t f(x)=f(tx).
\end{align*}
In particular,
\begin{align}
\label{eq:stretch}
\lambda_n(T_{t^{-1}E,tF,tL})=\lambda_n(T_{E,F,L}), \qquad n\in \mathbb{N}.
\end{align}
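For the reader's convenience, we sketch the computation behind the intertwining above: for $g$ supported in $(-L/2,L/2)^d$ and $m\in (tL)^{-1}\mathbb{Z}^d$ one has $\mathcal{F}_{tL}(\mathcal{M}_{t^{-1}}g)(m)=t^{d}\,\mathcal{F}_{L}g(tm)$, while $(t^{-1}E)_{tL}=t^{-1}E_L$ and $\mathcal{M}_{t^{-1}}\chi_F\mathcal{M}_t=\chi_{tF}$. Conjugating each factor of $T_{E,F,L}=\chi_F\mathcal{F}_L^{-1}\chi_{E_L}\mathcal{F}_L$ accordingly yields $\mathcal{M}_{t^{-1}}T_{E,F,L}\mathcal{M}_t=\chi_{tF}\mathcal{F}_{tL}^{-1}\chi_{(t^{-1}E)_{tL}}\mathcal{F}_{tL}=T_{t^{-1}E,tF,tL}$, and \eqref{eq:stretch} follows.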
\subsection{Boundary regularity}
Let us introduce regularity of sets in more generality and discuss a few properties.
An $\mathcal{H}^{d-1}$-measurable set $X \subseteq \mathbb{R}^d$ is said to be {\em lower Ahlfors $(d-1)$-regular} (regular for short) at scale $\eta_X>0$ if there exists a constant $\kappa_X>0$ such that
\begin{align*}
\mathcal{H}^{d-1}\big(X \cap B_{r}(x) \big)\geq \kappa_X \cdot r^{d-1}, \qquad 0 < r \leq \eta_X, \quad x \in X.
\end{align*}
Note that if $X \subseteq \mathbb{R}^d$ is regular at scale $\eta_X>0$ with constant $\kappa_X>0$ and $t>0$, then $t X \subseteq \mathbb{R}^d$ is regular at scale $\eta_{tX}=t \eta_X$ with constant $\kappa_{tX}=\kappa_X$.
By differentiation around a point of positive $\mathcal{H}^{d-1}$-density,
\begin{align}\label{eqkappa}
\kappa_X\le c_d,
\end{align}
for any regular $X$ of finite $\mathcal{H}^{d-1}$-measure. We also mention that if $X$ is regular with parameters $\eta_X$ and $\kappa_X$, then choosing an arbitrary $x\in X$ gives
\begin{align}\label{eqone}
\mathcal{H}^{d-1}\big(X \big)\ge \mathcal{H}^{d-1}\big(X \cap B_{\eta_X}(x) \big)\geq \kappa_X \cdot \eta_X^{d-1}.
\end{align}
We shall use the following basic result, derived from \cite{Ca}.
\begin{lemma}\label{lem:coar}
There exists a universal constant $C_d>0$ such that for every compact set $X \subseteq \mathbb{R}^{d}$ that is regular at scale $\eta_X>0$ with constant $\kappa_X$ and every $s>0$,
\begin{align*}
|X+B_s(0)| \leq \frac{C_d}{\kappa_X} \cdot \mathcal{H}^{d-1}(X) \cdot s \cdot \Big(1+\frac{s^{d-1}}{\eta^{d-1}_X}\Big).
\end{align*}
\end{lemma}
\begin{proof}
From \cite[Theorems 5 and 6]{Ca} it follows that
\begin{align*}
\mathcal{H}^{d-1} \big(\{x \in \mathbb{R}^{d}:\ d(x,X)=r\} \big) \leq \frac{C_d}{\kappa_X} \cdot \mathcal{H}^{d-1}(X) \cdot \Big(1+\frac{r^{d-1}}{\eta_X^{d-1}}\Big),
\end{align*}
for almost every $r>0$, and in addition, $| \nabla d(x,X) | = 1$, for almost every $x \in \mathbb{R}^{d}$. From this and the coarea formula --- see, e.g., \cite[Theorem 3.11]{evga92} ---
it follows that
\begin{align*}
|X+B_s(0)|&= \int_{\mathbb{R}^d} \chi_{[0,s)}(d(x,X)) dx
= \int_{\mathbb{R}^d} \chi_{[0,s)}(d(x,X)) | \nabla d(x,X) | dx
\\ &= \int_0^s \mathcal{H}^{d-1}\big(\{x:\ d(x,X)=r\} \big) dr
\leq \frac{C_d}{\kappa_X} \mathcal{H}^{d-1}(X) \int_0^s \Big(1+\frac{r^{d-1}}{\eta_X^{d-1}}\Big) dr
\\ &\leq \frac{C_d}{\kappa_X} \mathcal{H}^{d-1}(X) s\Big(1+\frac{s^{d-1}}{\eta_X^{d-1}}\Big). \qedhere
\end{align*}
\end{proof}
\begin{corollary}\label{cor:coar}
For $E\subseteq\mathbb{R}^d$ a compact domain with regular boundary at scale $\eta_{\partial E}\ge 1$ with constant $\kappa_{\partial E}$ and a discretization resolution $L\ge 1$, we have
\begin{align*}
L^{-d}\#E_L\lesssim |E|+\frac{|\partial E|}{\kappa_{\partial E} L}.
\end{align*}
In particular, for $d\geq 2$,
\begin{align*} L^{-d}\#E_L\lesssim \frac{\max\{|\partial E|^{d/(d-1)},1\}}{\kappa_{\partial E}}.
\end{align*}
\end{corollary}
\begin{proof}
Recall that $Q_{L^{-1}}=L^{-1}[-1/2,1/2)^d$ and define
$E_L'=\{m\in E_L :\ m+Q_{L^{-1}}\subseteq E\}$.
From Lemma~\ref{lem:coar}, we get
\begin{align*}
L^{-d}\#E_L &= \Big|\bigcup_{m\in E_L'} m+Q_{L^{-1}}\Big|+ \Big|\bigcup_{m\in E_L\smallsetminus E_L'} m+Q_{L^{-1}}\Big|\le |E|+|\partial E+B_{L^{-1}\sqrt{d}}(0)|
\\ &\lesssim |E|+ \frac{|\partial E|}{\kappa_{\partial E} L}.
\end{align*}
Finally, the second inequality follows from the isoperimetric inequality $|E|\lesssim|\partial E|^{d/(d-1)}$ and \eqref{eqkappa}.
\end{proof}
\section{Israel's argument revisited}\label{sec_i}
We now revisit the core argument of \cite{is15} and suitably adapt it so as to treat multi-dimensional and discrete domains.
\subsection{Israel's lemma}
We need a slight generalization of Lemma~1 in \cite{is15}, phrasing it in terms of frames rather than orthonormal bases. We include a proof for the sake of completeness.
Recall that a frame for a Hilbert space $\mathcal{H}$ is a subset of vectors $\{\phi_i\}_{i\in\mathcal{I}}$ for which there exist constants $0<A,B<\infty$ ---
called lower and upper frame bounds --- such that
\begin{align*}
A \| f \|^2 \leq \sum_{i \in \mathcal{I}} |\langle f, \phi_i \rangle|^2 \leq B \|f\|^2, \qquad f \in \mathcal{H}.
\end{align*}
If, moreover, $A=B$, we say that the frame is \emph{tight}.
\begin{lemma}\label{lem:main}
Let $T:\mathcal{H}\to \mathcal{H}$ be a positive, compact, self-adjoint operator
on a Hilbert space $\mathcal{H}$ with $\|T\|\leq 1$ and eigendecomposition
$T=\sum_{n\ge 1}\lambda_n\langle \cdot ,f_n\rangle f_n$.
Let $\{\phi_i\}_{i\in\mathcal{I}}$ be a frame of unit norm vectors for $\mathcal{H}$ with lower frame bound $A$. If $\mathcal{I}= \mathcal{I}_1\cup\mathcal{I}_2\cup\mathcal{I}_3$, and
\begin{equation}\label{eq:eps-cubed}
\sum_{i\in\mathcal{I}_1}\|T\phi_i\|^2+\sum_{i\in\mathcal{I}_3}\|(I-T)\phi_i\|^2\leq \frac{A}{2}\varepsilon^2,
\end{equation}
then $\# M_\varepsilon(T)\leq \frac{2}{A}\#\mathcal{I}_2,$ where $M_\varepsilon(T)$ is defined as in \eqref{eq_Ms}.
\end{lemma}
\proof Let $S_\varepsilon=\text{span}\{f_n:\ \lambda_n\in(\varepsilon,1-\varepsilon)\}$ and let $P_\varepsilon:\mathcal{H}\to S_\varepsilon$ denote the orthogonal projection onto $S_\varepsilon$. For $f\in S_\varepsilon$, expanding in the eigenbasis gives $\|Tf\|\leq (1-\varepsilon)\|f\|$ and $\|(I-T)f\|\leq (1-\varepsilon)\|f\|$; combined with $\|f\|\leq\|Tf\|+\|(I-T)f\|$, this shows
$$
\varepsilon\|f\|\leq \|Tf\|,\quad\text{and}\quad\varepsilon\|f\|\leq \|(I-T)f\|,\quad f\in S_\varepsilon.
$$
Note that $T$ and $P_\varepsilon$ commute since $S_\varepsilon$ is spanned by a collection of eigenvectors of $T$.
Therefore, by \eqref{eq:eps-cubed} we obtain
$$
\sum_{i\in\mathcal{I}_1\cup\mathcal{I}_3}\varepsilon^2\|P_\varepsilon \phi_i\|^2\leq \sum_{i\in\mathcal{I}_1}\|TP_\varepsilon \phi_i\|^2+\sum_{i\in \mathcal{I}_3}\|(I-T)P_\varepsilon \phi_i\|^2\leq \frac{A}{2}\varepsilon^2,
$$
which implies
\begin{equation}\label{eq:A/2}
\sum_{i\in\mathcal{I}_1\cup\mathcal{I}_3}\|P_\varepsilon \phi_i\|^2\leq \frac{A}{2}.
\end{equation}
Using the frame property we get for $f \in S_\varepsilon$:
$$
A\|f\|^2\leq \sum_{i\in\mathcal{I}}|\langle f,\phi_i\rangle |^2=\sum_{i\in\mathcal{I}}|\langle f,P_\varepsilon \phi_i\rangle|^2.
$$
Now assume that $\text{dim}(S_\varepsilon)\geq 1$ (otherwise the result is trivial), take an orthonormal basis $\{\psi_k\}_{k=1}^{\text{dim}(S_\varepsilon)}$ of $S_\varepsilon$, and sum the inequality above over all basis elements to derive
\begin{align*}
A \cdot \# M_\varepsilon(T)&=A \cdot \text{dim}(S_\varepsilon)= A\sum_{k=1}^{\text{dim}(S_\varepsilon)}\|\psi_k\|^2\leq \sum_{k=1}^{\text{dim}(S_\varepsilon)}\sum_{i\in\mathcal{I}}|\langle \psi_k,\phi_i\rangle|^2
\\
&= \sum_{i\in\mathcal{I}}\|P_\varepsilon\phi_i\|^2\leq \sum_{i\in\mathcal{I}_2}\|\phi_i\|^2+\frac{A}{2}= \#\mathcal{I}_2 +\frac{A}{2}\le \#\mathcal{I}_2 +\frac{A}{2}\# M_\varepsilon(T),
\end{align*}
where in the second line we used \eqref{eq:A/2}.
This shows that $\# M_\varepsilon(T)\leq \frac{2}{A}\#\mathcal{I}_2.$
$\Box$\\
\subsection{Local trigonometric frames}
In this section, we construct a tight frame that allows us to apply Lemma~\ref{lem:main}.
Let $\alpha>0$, and $\theta\in\mathcal{C}^\infty(\mathbb{R})$ be such that
\begin{enumerate}[label=(\roman*)]
\item\label{theti} $\theta(x)=1,$ for $x\geq 1$, and $\theta(x)= 0,$ for $x\leq -1$,
\item\label{thetii} $\theta(-x)^2+\theta(x)^2=1$, for every $x\in\mathbb{R}$,
\item\label{thetiii} $|D^k\theta(x)|\leq C_\alpha^k k^{(1+\alpha)k}$, for all $k\in\mathbb{N}_0$, all $x\in\mathbb{R}$, and a constant $C_\alpha>0$.
\end{enumerate}
See, for example, \cite[Proposition~1]{is15} or \cite[Chapter 1]{MR1408902} for the existence of such a function.
Let $W>0$. We decompose the interval $\big(\hspace{-3pt}-\frac{W}{2},\frac{W}{2}\big)$ into disjoint intervals
$$
I_j=x_j+\frac{W}{3 \cdot 2^{|j|+1}}[-1,1),\quad j\in\mathbb{Z},
$$
where
\[x_j=\frac{\sign(j) W}{2}\Big(1-\frac{1}{2^{|j|}}\Big).\]
Note that $|I_j|=|I_{|j|}|=2|I_{|j|+1}|$ for every $j\in\mathbb{Z}$. We will also denote $D_j=I_j\cup I_{j+1}$.
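For instance (a quick check of the definitions), $I_0=\big[-\tfrac{W}{6},\tfrac{W}{6}\big)$, $I_1=\big[\tfrac{W}{6},\tfrac{W}{3}\big)$ and $I_{-1}=\big[-\tfrac{W}{3},-\tfrac{W}{6}\big)$, and the lengths of the intervals halve as they approach the endpoints $\pm\tfrac{W}{2}$.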
Now define
$$
\theta_j(x)=\theta\left(\frac{2(x-x_{j})}{|I_{j}|}\right)\theta\left(-\frac{2(x-x_{j+1})}{|I_{j+1}|}\right).
$$
We have that $\theta_j(x)=0$ for $x \notin D_j $, and furthermore by properties \ref{theti} and \ref{thetii}
\begin{align*}
\|\theta_j\|_2^2&= \int_{I_j}\theta\left(\frac{2(x-x_{j})}{|I_{j}|}\right)^2 dx+\int_{I_{j+1}}\theta\left(-\frac{2(x-x_{j+1})}{|I_{j+1}|}\right)^2 dx
\\&= \frac{|I_{j}|}{2} \int_{-1}^1 \theta(x)^2 dx + \frac{|I_{j+1}|}{2} \int_{-1}^1 \theta(x)^2 dx=\frac{|D_{j}|}{2}.
\end{align*}
We define the set of vectors
\begin{equation}\label{def:Phi}
\phi_{j,k}(x)=\sqrt{\frac{2}{|D_{j}|}}\cdot \theta_j(x)\cdot\exp\left(2\pi i\frac{xk}{|D_j|}\right),\quad j,k\in\mathbb{Z},
\end{equation}
and note that $\|\phi_{j,k}\|_2=1$.
\begin{lemma}
The family $\{\phi_{j,k}\}_{j,k\in\mathbb{Z}}$ defined in \eqref{def:Phi} forms a tight frame for $L^2(-W/2,W/2)$ with frame constants $A=B=2$.
\end{lemma}
\begin{proof}
Let $f \in L^2(-W/2,W/2)$ and set $f_j:=f \chi_{I_j}$ so that $f=\sum_{j\in \mathbb{Z}}f_j$. Since $\text{supp}(\theta_j)\subseteq D_j=I_j\cup I_{j+1}$, we observe
\begin{align*}
\sum_{j,k\in\mathbb{Z}}|\langle f,\phi_{j,k}\rangle|^2&=\sum_{j,k\in\mathbb{Z}}\left|\langle f_j+f_{j+1},\phi_{j,k}\rangle\right|^2.
\end{align*}
As $\left\{ |D_j|^{-1/2}\exp\left(2\pi ikx/|D_j|\right)\right\}_{k\in\mathbb{Z}}$ is an orthonormal basis for $L^2(D_j)$, we find that
$$
\sum_{k\in\mathbb{Z}}|\langle f_j+f_{j+1},\phi_{j,k}\rangle|^2=2 \| (f_j+f_{j+1})\theta_j \|^2_2=2\|f_j\theta_j \|^2_2+2\| f_{j+1}\theta_j \|^2_2.
$$
Combining both identities and using property \ref{thetii}, we conclude
\begin{align*}
\sum_{j,k\in\mathbb{Z}}|\langle f,\phi_{j,k}\rangle|^2&=2\sum_{j\in\mathbb{Z}}\Big(\|f_j\theta_j \|^2_2+\| f_{j+1}\theta_j \|^2_2\Big)
=2 \sum_{j\in\mathbb{Z}}\Big(\|f_j\theta_j \|^2_2+\| f_j\theta_{j-1} \|^2_2\Big)
\\ &= 2 \sum_{j\in\mathbb{Z}} \int_{I_j}|f(x)|^2 \left(\theta\left(\frac{2(x-x_{j})}{|I_{j}|}\right)^2+\theta\left(-\frac{2(x-x_{j})}{|I_{j}|}\right)^2\right) dx
\\&=2 \sum_{j\in\mathbb{Z}}\|f_j \|^2_2=2\|f\|_2^2. \qedhere
\end{align*}
\end{proof}
Let $0<W_i\le L,\ i=1,...,d,$ and consider the rectangle $\prod_{i=1}^d(-W_i/2,W_i/2)$. Set also
\begin{align*}
W_{\max} := \max_{i=1,...,d} W_i.
\end{align*}
We define a frame for $L^2\big(\prod_{i=1}^d(-W_i/2,W_i/2)\big)$ via the tensor product
\begin{equation*}
\Phi_{j,k}(x)=\Phi_{j_1,...,j_d,k_1,...k_d}(x_1,...,x_d)=\phi_{j_1,k_1}(x_1)\cdot\ldots\cdot \phi_{j_d,k_d}(x_d),
\end{equation*}
where each family $\{\phi_{j_i,k_i}(x_i)\}_{j_i,k_i\in\mathbb{Z}}$ is the frame for $L^2(-W_i/2,W_i/2)$ given by \eqref{def:Phi}.
This construction also yields a tight frame with frame bounds equal to $2^d$.
\subsection{Energy estimates}
Consider
$$
\psi_j(x)=\theta_j\Big(|D_j|x+x_j-\frac{W}{3 \cdot 2^{|j|+1}}\Big), \quad x\in \mathbb{R},\ j\in\mathbb{Z}.
$$
A straightforward computation shows that $\psi_j$ is supported on $[ 0,1]$ and satisfies $|D^k \psi_j(x)|\leq \widetilde{C_\alpha}^k k^{(1+\alpha)k}$ by property \ref{thetiii}. As shown in \cite{MR1658223} or \cite[Lemma~4]{is15} it thus follows that $|\widehat{\psi_j}(\xi)|\leq A_\alpha \cdot \exp\left(-a_\alpha|\xi|^{(1+\alpha)^{-1}}\right)$. Since $1-\alpha\leq (1+\alpha)^{-1}$, we derive that $t^{(1+\alpha)^{-1}}\geq t^{1-\alpha}-1$ for $t\geq 0$. Adjusting the constant $A_\alpha$, we therefore get
\begin{equation}\label{eq:F(theta)}
|\widehat{\theta_j}(\xi)|\leq A_\alpha \cdot |D_j| \cdot \exp\left(-a_\alpha(|D_j|\cdot|\xi|)^{1-\alpha }\right),\quad \xi\in\mathbb{R}.
\end{equation}
With this at hand, we estimate the decay of
$$
\mathcal{F}(\Phi_{j,k})(\xi)=2^{d/2} \prod_{i=1}^d| D_{j_i}|^{-1/2}\cdot \widehat{\theta_{j_i}}\left(\xi_i-\frac{k_i}{| D_{j_i}|}\right),\quad \xi\in\mathbb{R}^d.
$$
Define
\[M_j=\diag(|D_{j_1}|,...,|D_{j_d}|)\in\mathbb{R}^{d\times d}.\]
By \eqref{eq:F(theta)} (possibly enlarging $A_\alpha$) and $|\xi|^{1-\alpha}\leq \sum_{i=1}^d|\xi_i|^{1-\alpha}$, it follows
\begin{align}
|\mathcal{F}(\Phi_{j,k})(\xi)|&\leq A_\alpha^d\prod_{i=1}^d | D_{j_i}|^{1/2}\cdot \exp\left(-a_\alpha\big||D_{j_i}|\xi_i-k_i\big|^{1-\alpha }\right)\nonumber
\\
&\le A_\alpha^d \cdot \text{det}(M_j)^{1/2}\cdot \exp\left(-a_\alpha\big|M_j(\xi-\xi_{j,k})\big|^{1-\alpha }\right)\label{eq:decay},
\end{align}
where $(\xi_{j,k})_i=k_i| D_{j_i}|^{-1}$.
Consider now a compact domain $E \subseteq \mathbb{R}^d$.
Let $s\ge 1$ be a parameter that will be determined later.
For $j\in\mathbb{Z}^d$ fixed, we cover the index set $\mathbb{Z}^d$ with three subsets as follows
\begin{align}\label{eq_set1}
\begin{aligned}
\mathcal{L}_j^{\text{low}}&:=\big\{k\in\mathbb{Z}^d:\ \text{dist}(k,M_j E_L^c)\geq s \big\};
\\
\mathcal{L}_j^{\text{med}}&:=\big\{k\in\mathbb{Z}^d:\ \text{dist}(k,M_j E_L)< s \text{, and } \text{dist}(k,M_j E_L^c)< s\};
\\
\mathcal{L}_j^{\text{high}}&:=\big\{k\in\mathbb{Z}^d:\ \text{dist}(k,M_j E_L)\geq s \big\}.
\end{aligned}
\end{align}
(Here, $\text{dist}$ is associated with the usual Euclidean distance.) We claim that
\begin{align}
\label{eq:lj}
\mathcal{L}_j^{\text{med}}\subseteq\big\{k\in\mathbb{Z}^d:\ \text{dist}(k,M_j\partial E_L)< s\big\};
\end{align}
let us briefly sketch an argument.
Fix $k\in\mathcal{L}_j^{\text{med}}$ and let $k_0\in M_j L^{-1}\mathbb{Z}^d$ minimize the distance to $k$. For any point $x\in M_j L^{-1}\mathbb{Z}^d$ with $|k-x|<s$ we can construct a path of adjacent points in $M_j L^{-1}\mathbb{Z}^d$ from $x$ to $k_0$ whose distance to $k$ is decreasing. In particular, choosing $x_1\in M_j E_L$ and $x_2\in M_j E_L^c$ at distance less than $s$ from $k$, we can connect $x_1$ and $x_2$ through a path of adjacent points in $M_jL^{-1}\mathbb{Z}^d$ that stays at distance less than $s$ from $k$. Necessarily, one of the points in the path must belong to $M_j\partial E_L$, which proves \eqref{eq:lj}.
The indices $(j,k)$ with $k\in \mathcal{L}_j^{\text{med}}$, and $j$ satisfying a certain condition specified below (see \eqref{eq_set2}), will play the role of $\mathcal{I}_2$ in Lemma \ref{lem:main}, so we need to estimate $\#(\mathcal{L}_j^{\text{med}})$.
\begin{lemma}\label{lem:L}
Let $E\subseteq \mathbb{R}^d$ be a compact domain with regular boundary at scale $\eta_{\partial E}\ge 1$ and constant $\kappa_{\partial E}$.
Let $L\ge W_{\max}$ and $s\ge 1$. Then for all $j\in\mathbb{Z}^d$ we have
\begin{align*}
\#\big\{k\in\mathbb{Z}^d:\ \emph{dist}(k,M_j\partial E_L)< s\big\}\lesssim \max\{W_{\max},1/\eta_{\partial E}\}^{d-1}\cdot \frac{|\partial E|}{\kappa_{\partial E} } \cdot s^d.
\end{align*} \end{lemma} \begin{proof}
Since for every $x\in \mathbb{R}^d$ the cube $x+Q_1$ contains one point in $\mathbb{Z}^d$, \begin{align}\label{eqdis}
\#\big\{k\in\mathbb{Z}^d:\ \text{dist}(k,M_j\partial E_L)< s\}&\lesssim s^d\cdot\#\big\{k\in\mathbb{Z}^d:\ k\in M_j\partial E_L+Q_{1}\}.
\end{align} Second, we claim that for $0<a\leq 1$ and a set $X \subset \mathbb{R}$, \begin{align}\label{eq_p} \#\big\{k\in\mathbb{Z}:\ k\in aX+[-1/2,1/2)\}\le 2 \, \#\big\{k\in\mathbb{Z}:\ k\in X+[-1/2,1/2)\}. \end{align}
To see this, consider a map that sends $k=ax+t$, with $x \in X$ and $t \in [-1/2,1/2)$ to the unique integer in $x+[-1/2,1/2)$, denoted $k_*$ (the choice of $x$ and $t$ may be non-unique). Whenever $k_*=k'_*$, it follows that $|k-k'| \leq 1$. Hence, at most two $k's$ are mapped into the same $k_*$.
From \eqref{eqdis}, if we apply \eqref{eq_p} componentwise (noting that $(M_j)_{i,i}\leq W_{\max}$), we obtain for $s\geq 1$
\begin{align*}
\#\big\{k\in\mathbb{Z}^d:\ \text{dist}(k,M_j\partial E_L)< s\}
&\lesssim s^d\cdot\#\big\{k\in\mathbb{Z}^d:\ k\in W_{\max}\partial E_L+Q_{1} \}.
\end{align*}
Since for every $x\in \partial E_L$ there exists $x'\in \partial E$ such that $|x-x'|\leq L^{-1}$, and $W_{\max}/L\leq 1$, it follows that
\begin{align}
\#\big\{k\in\mathbb{Z}^d:\ \text{dist}(k,M_j\partial E_L)< s\}&\lesssim s^d\cdot\#\big\{k\in\mathbb{Z}^d:\ k\in W_{\max}\partial E+Q_{3} \} \notag \\
&=: s^d\cdot\# (\mathcal{K}_{W_{\max}}).\label{eq:pl}
\end{align}
Now let $k\in \mathcal{K}_{W_{\max}}$. There exists at least one point $x_k\in \partial E $ such that $k\in W_{\max}x_k+Q_{3}$. In particular, we have that $x_k\in W_{\max}^{-1}k+Q_{4/W_{\max}}$. Therefore, for every $k\in \mathcal{K}_{W_{\max}}$ we get by regularity of $\partial E$
\begin{align*}
\kappa_{\partial E}\cdot \min\{W_{\max}^{-1},\eta_{\partial E}\}^{d-1} &\le \mathcal{H}^{d-1}\big( \partial E\cap B_{1/W_{\max}}(x_k)\big)
\\
&\le \mathcal{H}^{d-1}\big( \partial E\cap x_k+Q_{2 /W_{\max}} \big)
\\ &\le \mathcal{H}^{d-1}\big( \partial E\cap W_{\max}^{-1}k+Q_{6 /W_{\max}} \big). \notag
\end{align*} So,
\begin{align*}
\kappa_{\partial E}\cdot \min\left\{W_{\max}^{-1},\eta_{\partial E}\right\}^{d-1}\cdot\#(\mathcal{K}_{W_{\max}})&\leq
\sum_{k\in\mathcal{K}_{W_{\max}}} \mathcal{H}^{d-1}\big( \partial E\cap W_{\max}^{-1}k+Q_{6 /W_{\max}} \big)
\\&\lesssim \sum_{k\in\mathbb{Z}^d} \mathcal{H}^{d-1}\big( \partial E\cap W_{\max}^{-1}k+Q_{1 /W_{\max}} \big)\\ &=\mathcal{H}^{d-1}( \partial E).
\end{align*}
Plugging this estimate into \eqref{eq:pl} completes the proof. \end{proof}
Next, for a compact domain $E \subseteq \mathbb{R}^d$ and a parameter $s \geq 1$ we recall the sets \eqref{eq_set1}, introduce a second auxiliary parameter $0<\delta<1$, and define the following covering of $\mathbb{Z}^{2d}$: \begin{align}\label{eq_set2} \begin{aligned}
\Gamma^{\text{low}}&:=\big\{(j,k):\ \min_{i }| D_{j_i}|\geq \delta,\ k\in\mathcal{L}_j^{\text{low}}\big\};
\\
\Gamma^{\text{med}}&:=\big\{(j,k):\ \min_{i }| D_{j_i}|\geq \delta,\ k\in\mathcal{L}_j^{\text{med}}\big\};
\\
\Gamma^{\text{high}}&:=\big\{(j,k):\ \min_{i }| D_{j_i}|\geq \delta,\ k\in\mathcal{L}_j^{\text{high}}\big\}\cup \big\{(j,k):\ \min_{i }| D_{j_i}|< \delta,\ k\in\mathbb{Z}^d\big\}. \end{aligned} \end{align}
\begin{lemma}\label{lem:gamma-med}
Under the conditions of Lemma~\ref{lem:L}, let $0<\delta<1$ and consider the set $\Gamma^{\emph{med}}$ from \eqref{eq_set2}. Then
\[\#(\Gamma^{\emph{med}})\lesssim \max\{W_{\max},1/\eta_{\partial E}\}^{d-1} \cdot \frac{|\partial E|}{\kappa_{\partial E}} \cdot
\max\{\log(W_{\max}/\delta),1\}^d
\cdot s^d .\] \end{lemma} \begin{proof}
By \eqref{eq:lj} and Lemma~\ref{lem:L} it follows
$$
\#(\Gamma^{\text{med}})= \sum_{\substack{j\in\mathbb{Z}^d\\ \min| D_{j_i}|\geq \delta}}\#(\mathcal{L}_j^{\text{med}})\lesssim \sum_{\substack{j\in\mathbb{Z}^d\\ \min| D_{j_i}|\geq \delta}}\max\{W_{\max},1/\eta_{\partial E}\}^{d-1} \frac{|\partial E|}{\kappa_{\partial E}} s^d.
$$
In each coordinate, we have that the number of intervals $ D_{j_i}$ for which $| D_{j_i}|\geq \delta $ is bounded by
$C\max\{\log(W_{\max}/\delta),1\}$. Hence, we arrive at
\[
\#(\Gamma^{\text{med}})\lesssim \max\{W_{\max},1/\eta_{\partial E}\}^{d-1} \frac{|\partial E|}{\kappa_{\partial E}}
\max\{\log(W_{\max}/\delta),1\}^d
s^d.\qedhere\] \end{proof}
\begin{lemma} \label{lem:est-gamma}
Let $d\geq 2$, $L\ge \max\{W_{\max},1\},$ and $E\subseteq \mathbb{R}^d$ be a compact domain with regular boundary at scale $\eta_{\partial E}\geq 1$ with constant $\kappa_{\partial E}$ and such that $|\partial E|\ge 1$. Let $s\ge 1$ and $\delta \in (0,1)$ be parameters and consider the sets from \eqref{eq_set2}. Then
there exists a constant $c=c_\alpha>0$ such that
\begin{align}\label{eq:est-gamma-low}
\sum_{(j,k)\in\Gamma^{\emph{low}}} L^{-d} &\sum_{m\in E^c_L}|\mathcal{F}(\Phi_{j,k})(m)|^2
\notag \\ &\lesssim \max\{W_{\max},1/\eta_{\partial E}\}^{d-1} \cdot\frac{|\partial E| }{\kappa_{\partial E}}\cdot\exp\big(-cs^{1-\alpha}\big)
\max\{\log(W_{\max}/\delta),1\}^d,
\end{align}
and
\begin{align}\label{eq:est-gamma-high}
\sum_{(j,k)\in\Gamma^{\emph{high}}} L^{-d} \sum_{m\in E_L}|\mathcal{F}(\Phi_{j,k})(m)|^2 \lesssim& \frac{\max\{W_{\max}, 1/\eta_{\partial E}\}^{d-1}}{\kappa_{\partial E}}\cdot\Big(|\partial E|^{\frac{d}{d-1}} \cdot\delta \notag
\\
& + |\partial E| \cdot \exp\big(-cs^{1-\alpha}\big)
\cdot\max\{\log(W_{\max}/\delta),1\}^d
\Big).
\end{align} \end{lemma} \proof For $j\in\mathbb{Z}^d $ and $l\in\mathbb{N}_0$ we set \begin{equation*}
\mathcal{L}_{j,l}^{\text{low}}=\big\{k\in\mathbb{Z}^d:\ \text{dist}(k,M_j E_L^c)\in [s2^l,s2^{l+1}) \big\}, \end{equation*} and \begin{equation*}
\mathcal{L}_{j,l}^{\text{high}}=\big\{k\in\mathbb{Z}^d:\ \text{dist}(k,M_j E_L)\in [s2^l,s2^{l+1}) \big\}. \end{equation*} Notice that \begin{align*}
\mathcal{L}_{j,l}^{\text{low}}\cup \mathcal{L}_{j,l}^{\text{high}}&\subseteq \big\{k\in\mathbb{Z}^d:\ \text{dist}(k,M_j E_L)< s2^{l+1} \text{, and } \text{dist}(k,M_j E_L^c)< s2^{l+1}\}
\\ &\subseteq\big\{k\in\mathbb{Z}^d:\ \text{dist}(k,M_j\partial E_L)< s2^{l+1}\}, \end{align*} where the last step follows as in \eqref{eq:lj}. From Lemma \ref{lem:L} we get \begin{equation}\label{eq:counting}
\#(\mathcal{L}_{j,l}^{\text{low}})
,\#(\mathcal{L}_{j,l}^{\text{high}})\lesssim \max\{W_{\max},1/\eta_{\partial E}\}^{d-1} \frac{|\partial E| }{\kappa_{\partial E} } s^d2^{dl}. \end{equation} From \eqref{eq:decay} it follows that if $k\in\mathcal{L}_{j,l}^{\text{low}}$ \begin{align}
L^{-d}\sum_{m\in E_L^c}|\mathcal{F}(\Phi_{j,k})(m)|^2&\leq L^{-d} \sum_{m\in E_L^c} A_\alpha^{2d}\det(M_j)\exp\big(-2a_\alpha|M_j(m-\xi_{j,k})|^{1-\alpha}\big)\nonumber
\\
&\leq A_\alpha^{2d} L^{-d}\det(M_j) \sum_{m'\in M_j E_L^c } \exp\big(-2a_\alpha|m'-k|^{1-\alpha}\big)\nonumber
\\
&\lesssim \int_{\{|x|\ge s2^l\}} \exp\big(-2a_\alpha|x|^{1-\alpha}\big)dx\nonumber
\\
&\lesssim \exp\big(-c(s2^l)^{1-\alpha}\big),\label{eq:est-L-low} \end{align} where $c$ can for example be chosen as $a_\alpha$. A similar argument also shows that for $k\in \mathcal{L}_{j,l}^{\text{high}}$, \begin{equation*}\label{eq:est-L-high}
L^{-d} \sum_{m\in E_L}|\mathcal{F}(\Phi_{j,k})(m)|^2\lesssim \exp\big(-c(s2^l)^{1-\alpha}\big). \end{equation*} As $\mathcal{L}_j^{\text{low}}=\bigcup_{l\in\mathbb{N}_0} \mathcal{L}_{j,l}^{\text{low}}$, it follows from \eqref{eq:counting} and \eqref{eq:est-L-low} that \begin{multline*}
\sum_{(j,k)\in\Gamma^{\text{low}}}L^{-d}\sum_{m\in E_L^c}|\mathcal{F}(\Phi_{j,k})(m)|^2= \sum_{\substack{j\in\mathbb{Z}^d\\ \min| D_{j_i}|\geq \delta}}\sum_{l\in\mathbb{N}_0}\sum_{k\in\mathcal{L}_{j,l}^{\text{low}}}L^{-d}\sum_{m\in E_L^c}|\mathcal{F}(\Phi_{j,k})(m)|^2
\\
\begin{aligned}
&\lesssim \max\{W_{\max},1/\eta_{\partial E}\}^{d-1} \frac{|\partial E| }{\kappa_{\partial E} } \sum_{\substack{j\in\mathbb{Z}^d\\ \min| D_{j_i}|\geq \delta}}\sum_{l\in\mathbb{N}_0}
(s 2^{l})^d \exp\big(-c(s2^l)^{1-\alpha}\big)
\\
&\lesssim \max\{W_{\max},1/\eta_{\partial E}\}^{d-1} \frac{|\partial E| }{\kappa_{\partial E} }\sum_{\substack{j\in\mathbb{Z}^d\\ \min| D_{j_i}|\geq \delta}}\exp\big(-c's^{1-\alpha}\big)
\\
&\lesssim \max\{W_{\max},1/\eta_{\partial E}\}^{d-1} \frac{|\partial E| }{\kappa_{\partial E} }\exp\big(-c's^{1-\alpha}\big)\max\{\log(W_{\max}/\delta),1\}^d,
\end{aligned} \end{multline*} which completes the proof of \eqref{eq:est-gamma-low}. Again, we can use an analogous reasoning to show \begin{multline*}
\sum_{\substack{j\in\mathbb{Z}^d\\ \min| D_{j_i}|\geq \delta}} \sum_{k\in\mathcal{L}_{j}^{\text{high}}}L^{-d}\sum_{m\in E_L}|\mathcal{F}(\Phi_{j,k})(m)|^2
\\
\lesssim \max\{W_{\max},1/\eta_{\partial E}\}^{d-1} \frac{|\partial E| }{\kappa_{\partial E} }\exp\big(-c's^{1-\alpha}\big)\max\{\log(W_{\max}/\delta),1\}^d. \end{multline*}
Now suppose that $j\in\mathbb{Z}^d$ is such that $\min_{1\le i \le d}| D_{j_i}|<\delta$. For every $m\in\mathbb{Z}^d$ we can uniformly bound the following series $$
\sum_{k\in\mathbb{Z}^d}\exp\big(-2a_\alpha |M_j(m-\xi_{j,k})|^{1-\alpha}\big)=\sum_{k\in\mathbb{Z}^d}\exp\big(-2a_\alpha |M_jm-k|^{1-\alpha}\big)\leq C. $$
Since $\det(M_j)=|D_j|$, where $D_j=D_{j_1}\times...\times D_{j_d}$, we thus get by \eqref{eq:decay} \begin{align*}
\sum_{\substack{j\in\mathbb{Z}^d\\ \min| D_{j_i}|< \delta}}\sum_{k\in\mathbb{Z}^d} L^{-d}\sum_{m\in E_L} \left|\mathcal{F}(\Phi_{j,k})(m)\right|^2
&\leq C \sum_{\substack{j\in\mathbb{Z}^d\\ \min| D_{j_i}|< \delta}} L^{-d}\sum_{m\in E_L} \text{det}(M_j)
\\
&\leq C L^{-d}\#E_L\sum_{\substack{j\in\mathbb{Z}^d\\ \min| D_{j_i}|< \delta}} |D_j|
\\
&\lesssim \frac{|\partial E|^{d/(d-1)}}{\kappa_{\partial E}} \sum_{\substack{j\in\mathbb{Z}^d\\ \min| D_{j_i}|< \delta}} |D_j|, \end{align*} where in the last inequality we used Corollary \ref{cor:coar}. Finally, \begin{align*}
\sum_{\substack{j\in\mathbb{Z}^d\\ \min| D_{j_i}|< \delta}} |D_j|&\le \sum_{i=1}^d \sum_{\substack{j\in\mathbb{Z}^d\\ | D_{j_i}|< \delta}} |D_j|
\le \sum_{i=1}^d
W_{\max}^{d-1} 4\delta
\\ &\lesssim \max\{W_{\max},1/\eta_{\partial E}\}^{d-1}\delta, \end{align*}
where we used that each interval $D_{j_i}$ with $|D_{j_i}|<\delta$ lies within distance $2\delta$ of the boundary of $(-W_i/2,W_i/2)$, and consequently the union of all such intervals is contained in the complement of a rectangle of side-lengths $W_i-4\delta$. This concludes the proof of~\eqref{eq:est-gamma-high}.
$\Box$\\
\section{General domain vs. rectangle}\label{sec_gc}
In this section, we prove the following variant of Theorem \ref{th2} for $F$ a rectangle.
\begin{theorem}\label{th:cube}
Let $L\ge 1$ be a discretization resolution, $d\geq 2$, and $E\subseteq\mathbb{R}^d$ be a compact domain with regular boundary at scale $\eta_{\partial E}\ge 1$ with constant $\kappa_{\partial E}$ and such that $|\partial E|\ge 1$. For $0<W_i\le L$, $i=1,...,d$, take $F=\prod_{i=1}^d(-W_i/2,W_i/2)$ and denote $W_{\max}= \max_i W_i$.
Then for every $\alpha\in(0,1/2)$ there exists $A_{\alpha,d}\geq 1$ such that for $\varepsilon\in(0,1/2)$:
\begin{multline*}
\#\big\{n\in\mathbb{N}:\ \lambda_n\in(\varepsilon,1-\varepsilon)\big\}
\\ \leq A_{\alpha,d} \cdot \max\{W_{\max},1/\eta_{\partial E}\}^{d-1} \cdot \frac{|\partial E|}{\kappa_{\partial E} } \cdot \log\left(\frac{ \max\{W_{\max},1/\eta_{\partial E}\}^{d-1}|\partial E|}{\kappa_{\partial E}\ \varepsilon} \right)^{2d(1+\alpha)}.
\end{multline*} \end{theorem}
\begin{proof}
We adopt all the notation of Section \ref{sec_i}. Fix parameters $s \geq 1$, $\delta \in (0,1)$ and consider the sets from \eqref{eq_set2}.
Observe that for $f\in L^2(F)$ one has
$$
\|Tf\|_2^2=\|\chi_{F}P_{E,L} f\|_2^2\leq \|P_{E,L} f\|_2^2=L^{-d}\sum_{m\in E_L}\big|\widehat{f}(m)\big|^2,
$$
and
$$
\|f-Tf\|_2^2=\|\chi_{F}f-\chi_{F} P_{E,L} f\|_2^2\leq \|(I-P_{E,L}) f\|_2^2=L^{-d}\sum_{m\in E_L^c}\big|\widehat f(m)\big|^2.
$$
By Lemma~\ref{lem:est-gamma} it thus follows
\begin{align}
\label{eqeps1}
\sum_{(j,k)\in\Gamma^{\text{low}}}&\|(I-T)\Phi_{j,k}\|_2^2+\sum_{(j,k)\in\Gamma^{\text{high}}}\|T\Phi_{j,k}\|_2^2
\\
&\leq C \frac{\max\{W_{\max},1/\eta_{\partial E}\}^{d-1}}{\kappa_{\partial E}} \Big(|\partial E| \exp\big(-cs^{1-\alpha}\big)\max\{\log(W_{\max}/\delta),1\}^d \notag
\\
&\hspace{6cm}+ |\partial E|^{d/(d-1)}\delta \Big), \notag
\end{align}
where the constants depend only on $\alpha$ and $d$, and $C$ can be taken $\geq 2$.
At last, we can now specify the parameters $\delta$ and $s$ in order for the sets $\Gamma^{\text{low}},\Gamma^{\text{med}}$ and $\Gamma^{\text{high}}$ to play the role of $\mathcal{I}_1,\mathcal{I}_2$ and $\mathcal{I}_3$ in Lemma \ref{lem:main}. We take
\begin{align*}
\delta =\frac{\kappa_{\partial E}\ \varepsilon^2 }{C\ \max\{W_{\max},1/\eta_{\partial E}\}^{d-1} |\partial E|^{d/(d-1)}}.
\end{align*}
This ensures that
\begin{align}
\label{eqeps2}
\frac{C \max\{W_{\max},1/\eta_{\partial E}\}^{d-1} |\partial E|^{d/(d-1)}}{\kappa_{\partial E}}\delta \leq \varepsilon^2,
\end{align}
while \eqref{eqone} shows that $\delta \in (0,1)$.
In addition, we shall select $s$ such that
\begin{align}
\label{eqeps3}
C \max\{W_{\max},1/\eta_{\partial E}\}^{d-1} \frac{|\partial E| }{\kappa_{\partial E} }\exp\big(-cs^{1-\alpha}\big)\max\{\log(W_{\max}/\delta),1\}^d\leq \varepsilon^2.
\end{align}
This condition on $s$ is equivalent to
\begin{align}\label{eqs}
s\hspace{-1pt}\geq \hspace{-1pt}\left(\hspace{-1pt}\frac{1}{c}\log\hspace{-2pt}\left(\frac{C \max\{W_{\max},1/\eta_{\partial E}\}^{d-1} |\partial E| \max\{\log(W_{\max}/\delta),1\}^d}{\kappa_{\partial E}\ \varepsilon^2}\right)\hspace{-2pt}\right)^{\hspace{-2pt}1/(1-\alpha)}.
\end{align}
Denote
\[u=\max\{W_{\max},1/\eta_{\partial E}\}^{d-1}|\partial E|/\kappa_{\partial E},\]
which satisfies $u\ge 1$ by \eqref{eqone}, and note that \begin{align}
\label{eqdel}
\log(W_{\max}/\delta) \le \log \Big(\frac{C \kappa_{\partial E}^{1/(d-1)} u^{d/(d-1)} }{ \varepsilon^2} \Big) \lesssim \log (u/\varepsilon), \end{align} where in the last step we used \eqref{eqkappa}. So, we can bound the right-hand side of \eqref{eqs} by \[\Big(\frac{1}{c}\log\Big( \frac{C' u \log(u/\varepsilon)^d}{\varepsilon^2}\Big)\Big)^{1/(1-\alpha)}\lesssim \log(u/\varepsilon)^{1/(1-\alpha)}.\] In particular, \eqref{eqs} is satisfied if
\begin{align}\label{eq_ps}
s= A_{\alpha,d} \log\left(\frac{ \max\{W_{\max},1/\eta_{\partial E}\}^{d-1}|\partial E|}{\kappa_{\partial E} \ \varepsilon} \right)^{1/(1-\alpha)},
\end{align}
for an adequate constant $A_{\alpha,d}$. Moreover, we can guarantee that $s\ge 1$, since, by \eqref{eqone}, the term inside the logarithm in \eqref{eq_ps} is $\ge 2$. From \eqref{eqeps1}, \eqref{eqeps2}, \eqref{eqeps3}, Lemma \ref{lem:main} and Lemma~\ref{lem:gamma-med},
\begin{align*}
\# M_\varepsilon(T)&\leq 2^{1-d}\#(\Gamma^{\text{med}}) \\
&\lesssim \max\{W_{\max},1/\eta_{\partial E}\}^{d-1} \frac{|\partial E|}{\kappa_{\partial E} } \max\{\log(W_{\max}/\delta),1\}^d s^d
\\
&\lesssim \max\{W_{\max},1/\eta_{\partial E}\}^{d-1} \frac{|\partial E|}{\kappa_{\partial E} } \log\left(\frac{ \max\{W_{\max},1/\eta_{\partial E}\}^{d-1}|\partial E|}{\kappa_{\partial E} \ \varepsilon} \right)^{d/(1-\alpha)+d}
\\
&\lesssim \max\{W_{\max},1/\eta_{\partial E}\}^{d-1} \frac{|\partial E|}{\kappa_{\partial E} } \log\left(\frac{ \max\{W_{\max},1/\eta_{\partial E}\}^{d-1}|\partial E|}{\kappa_{\partial E} \ \varepsilon} \right)^{2d(1+\alpha)},
\end{align*}
where in the third step we used \eqref{eqdel} and the definition of $s$. \end{proof}
\section{Eigenvalue estimates for two domains}\label{sec_step2}
\subsection{Schatten quasi-norm estimates}
For $0<p\le 1$, and $\varepsilon>0$, define the auxiliary function $g=g_{p,\varepsilon}:[0,1]\to\mathbb{R}$ given by \[g(t)=\Big(\frac{t-t^2}{\varepsilon-\varepsilon^2}\Big)^p.\] Note that since $\chi_{(\varepsilon,1-\varepsilon)}\le g$, for a positive operator $0\le S\le 1$, \begin{align}\label{eq_lp}
\# M_\varepsilon(S)=\tr(\chi_{(\varepsilon,1-\varepsilon)} (S))\le \tr(g(S))=\frac{\|S-S^2\|_p^p}{(\varepsilon-\varepsilon^2)^p}, \end{align}
where $\|\cdot\|_p$, $0<p\le 1$, denotes the Schatten quasi-norm. The next lemma shows that upper bounds for the left-hand side of the last inequality can be transferred to the right-hand side without much loss.
\begin{lemma}
\label{lem:schp}
Suppose that for a positive operator $0\le S\le 1$ there are constants $C,D,a>0$ such that for every $\varepsilon\in (0,1/2)$,
\[\# M_\varepsilon(S)\le C\big(D+\log(\varepsilon^{-1})\big)^a.\]
Then, for every $0<p\le 1$ there is a constant $C_a>0$ such that
\[\|S-S^2\|_p^p\le C_a C\big(D+p^{-1}\big)^a.\] \end{lemma} \begin{proof}
By the symmetry of the function $h(x)=x-x^2$ around $1/2$, for $0\le x \le 1$,
\begin{align*}
h(x)^p&= \int_0^{\min\{x,1-x\}} (h^p)'(t) dt = \int_0^{1/2} \chi_{(t,1-t)}(x) ph^{p-1}(t) h'(t) dt
\\& \le \int_0^{1/2} \chi_{(t,1-t)}(x) p t^{p-1} dt.
\end{align*}
By a monotone convergence argument we get
\begin{align*}
\|S-S^2\|_p^p &\le \int_0^{1/2} \# M_t(S) p t^{p-1} dt \le C \int_0^{1} (D+\log(t^{-1}))^a p t^{p-1} dt
\\ &= C \int_0^{\infty} (D+u/p)^a e^{-u} du
\\ &\le C (D+1/p)^a + C \int_1^{\infty} (D+u/p)^a e^{-u} du
\\ &\le C (D+1/p)^a + C (D+1/p)^a \int_1^{\infty} u^a e^{-u} du
\\ &\le(1+\Gamma(a+1)) C (D+1/p)^a. \qedhere
\end{align*} \end{proof}
\subsection{Decomposition of the domain and Hankel operators}
In what follows, we let $F\subseteq (-L/2,L/2)^d$ be a compact domain with regular boundary at scale $\eta_{\partial F}=|\partial F|^{1/(d-1)}\ge 1$ with constant $\kappa_{\partial F}$. We also assume that $\mathcal{H}^d(\partial F)=0$ (as otherwise the bounds that we shall derive are trivial). We construct two auxiliary sets $F^-\subseteq F \subseteq F^+$ which will be dyadic approximations of $F$ from the inside and from the outside by cubes of side-length at least $1$. More precisely, let \[F=\bigcup_{k\in \mathbb{Z}} \bigcup_{j\in J_k} Q_{k,j}, \qquad \mbox{(up to a null measure set)} \] be a dyadic decomposition of $F$ in pairwise disjoint cubes of the form $Q_{k,j}=Q_{2^{k}}+2^kj$ with $k\in\mathbb{Z}$ and $j\in J_k\subseteq \mathbb{Z}^d$, that are maximal (i.e., they are not contained in a larger dyadic cube included in $F$). We define \[F^-= \bigcup_{k\ge 0} \bigcup_{j\in J_k} Q_{k,j}.\] For $F^+$ we add cubes of length $1$ to fully cover $F$ and intersect them with $(-L/2,L/2)^d$. The result is a covering of $F$ that combines the cubes from $F^-$ with rectangles of maximal side-length $\le 1$. More precisely, define \[V=\{v\in \mathbb{Z}^d: \ (F\smallsetminus F^-)\cap (Q_1+v)\neq \varnothing\},\] and \[F^+=F^- \cup \bigcup_{v\in V}\Big(( Q_1+v)\cap (-L/2,L/2)^d\Big)=: F^- \cup \bigcup_{v\in V} R_v.\] We write $T^\pm$ for $T_{E,F^\pm,L}$.
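For orientation (an illustrative case only): if $F$ is a closed disk in the plane of radius at least $2$, then $F^-$ is the union of the maximal dyadic squares of side-length at least $1$ contained in the disk, while $F^+$ in addition covers the remaining strip along the circle with unit squares (intersected with $(-L/2,L/2)^2$).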
We shall invoke Theorem \ref{th:cube} and apply it to each rectangle in the decomposition of $F^-$ and $F^+$. This is allowed because translating the rectangles, or removing their null $\mathcal{H}^d$-measure boundaries, does not affect the eigenvalue estimates in question.
For a set $K\subseteq (-L/2,L/2)^d$ define the \emph{Hankel operator} on $L^2((-L/2,L/2)^d)$ by \[H_K=(I-P_{E,L})\chi_{K}P_{E,L}\] and write $H^\pm=H_{F^\pm}$. Note that \[(H_K)^*H_K= P_{E,L}\chi_{K}P_{E,L} -P_{E,L}\chi_{K}P_{E,L}\chi_{K}P_{E,L}= P_{E,L}\chi_{K}P_{E,L}-(P_{E,L}\chi_{K}P_{E,L})^2.\] Since $P_{E,L}\chi_{K}P_{E,L}$ and $T_K$ share the same non-zero eigenvalues, for $p>0$,
\[\|T_K-(T_K)^2\|_p^p=\|H_K\|_{2p}^{2p}.\] {Recall that for two operators $S_1,S_2$ in the $p$-Schatten class, $0<p\le 1$, one has
\[ \|{S_1}+{ S_2}\|_{p}^{p}\leq \| { S_1}\|_{p}^{p}+ \| { S_2}\|_{p}^{p}.\] }
\begin{lemma}
\label{lem:pm}
{Let $L\ge 1$, $d\geq 2$, and $E,F\subseteq \mathbb{R}^d$ be compact domains with regular boundaries at scales $\eta_{\partial E}\ge 1$, $\eta_{\partial F}={ |\partial F|^{1/(d-1)} }\ge1$, with constants $\kappa_{\partial E}$, $\kappa_{\partial F}$ respectively.} Assume also that $|\partial E|\ge 1$ and $F\subseteq (-L/2,L/2)^d$.
Then for $\varepsilon\in(0,1/2)$, we have
\[\# M_\varepsilon(T^\pm)\lesssim \frac{|\partial E|}{\kappa_{\partial E} } \cdot \frac{|\partial F|}{\kappa_{\partial F} } \cdot \log\left(\frac{ |\partial E||\partial F|}{\kappa_{\partial E} \ \varepsilon}\right)^{2d(1+\alpha)+1} .\] \end{lemma} \begin{proof}
If $k\in\mathbb{Z}$ is such that $J_k\neq \emptyset$, then there is a cube of length $2^k$ included in $F$. In particular, the projection of $\partial F$ onto the hyperplane $\{x_1=0\}$ contains a $(d-1)$-dimensional cube of length $2^k$ and therefore $2^{k(d-1)}\leq |\partial F|$. The maximality of the dyadic decomposition of $F$ implies that $Q_{k,j}\subseteq \partial F+ B_{\sqrt{d}2^{k+1}}(0)$ for $j\in J_k$. From Lemma \ref{lem:coar} and the fact that $\eta_{\partial F}={|\partial F|^{1/(d-1)}}$, we thus derive
\begin{align}\label{ecard1}
2^{dk} \#J_k \le |\partial F+ B_{\sqrt{d}2^{k+1}}(0)| \lesssim 2^k \frac{|\partial F|}{\kappa_{\partial F} }\left(1+\frac{2^{k(d-1)}}{|\partial F|}\right) \lesssim 2^k \frac{|\partial F|}{\kappa_{\partial F} }.
\end{align}
Similarly,
\begin{align}\label{ecard2}
\# V \le |\partial F+ B_{\sqrt{d}}(0)| \lesssim \frac{|\partial F|}{\kappa_{\partial F} }.
\end{align}
For {$0<2p\le 1$}, and $\varepsilon\in(0,1/2)$, by \eqref{eq_lp}, we get
\begin{align*}
\# M_\varepsilon(T^+)&\le \frac{\|T^+-(T^+)^2\|_p^p}{(\varepsilon-\varepsilon^2)^p}= \frac{\|H^+\|_{2p}^{2p}}{(\varepsilon-\varepsilon^2)^p}
\\ &\le \left( {2}/{ \varepsilon}\right)^{p} \sum_{k\ge 0} \sum_{j\in J_k} \|H_{Q_{k,j}}\|_{2p}^{2p} + \left( {2}/{ \varepsilon}\right)^{p} \sum_{v\in V} \|H_{R_v}\|_{2p}^{2p}
\\ &\lesssim \varepsilon^{-p} \sum_{k\ge 0} \sum_{j\in J_k} \|T_{Q_{k,j}}-T_{Q_{k,j}}^2\|_{p}^{p} + \varepsilon^{-p} \sum_{v\in V} \|T_{R_v}-T_{R_v}^2\|_{p}^{p}.
\end{align*}
We invoke Theorem \ref{th:cube} to obtain spectral deviation estimates for each operator $T_{Q_{k,j}}$. These imply that we can apply Lemma \ref{lem:schp} to $T_{Q_{k,j}}$ with $$C\lesssim 2^{k(d-1)}\frac{|\partial E|}{\kappa_{\partial E}},\quad\text{ and }\quad D=\log\left(2^{k(d-1)}\frac{|\partial E|}{\kappa_{\partial E}}\right).$$ Similarly, the same holds for $T_{R_v}$ with $k=1$. Choosing $p=\log(2)\big(2\log (\varepsilon^{-1})\big)^{-1}$ {(which ensures that $2p\le 1$ for every $\varepsilon\in(0,1/2)$)} thus yields
\begin{align*}
\# M_\varepsilon(T^+)&\hspace{-1pt}\lesssim \hspace{-1pt} \frac{|\partial E|}{\kappa_{\partial E} }\hspace{-3pt} \left( \sum_{k\ge 0} \sum_{j\in J_k} 2^{k(d-1)}\hspace{-1pt} \log\hspace{-2pt}\left(\hspace{-1pt}\frac{ |\partial E|2^{k(d-1)}}{\kappa_{\partial E} \ \varepsilon}\hspace{-1pt}\right)^{2d(1+\alpha)} \hspace{-5pt} + \hspace{-1pt} \# V\hspace{-3pt} \cdot\hspace{-2pt} \log\hspace{-2pt}\left(\hspace{-1pt}\frac{ |\partial E|}{\kappa_{\partial E} \ \varepsilon}\hspace{-1pt}\right)^{2d(1+\alpha)}\hspace{-1pt} \right)
\\ &\hspace{-1pt}\lesssim \hspace{-1pt} \frac{|\partial E|}{\kappa_{\partial E} } \frac{|\partial F|}{\kappa_{\partial F} }
\sum_{\substack{k\ge 0 \\ 2^{k(d-1)}\le |\partial F|}} \log\left(\frac{ |\partial E|2^{k(d-1)}}{\kappa_{\partial E} \ \varepsilon}\right)^{2d(1+\alpha)}
\\ &\hspace{-1pt}= \hspace{-1pt} \frac{|\partial E|}{\kappa_{\partial E} } \frac{|\partial F|}{\kappa_{\partial F} }
\sum_{0\le k\leq \left\lfloor \frac{\log(|\partial F|)}{\log(2)(d-1)}\right\rfloor}\left( \log\left(\frac{ |\partial E| }{\kappa_{\partial E} \ \varepsilon}\right)+(d-1)\log(2)k\right)^{2d(1+\alpha)},
\end{align*}
where in the second-to-last step we used \eqref{ecard1}, \eqref{ecard2}, and the fact that $2^{k(d-1)}\leq |\partial F|$ whenever $J_k\neq \emptyset$. Finally, noting that for $C,D,a>0$,
\[\sum_{k=0}^N (C+Dk)^a\le\int_0^{N+1} (C+Dx)^a dx\le\frac{( C+D(N+1))^{a+1}}{{D}(a+1)},\]
we get,
\begin{align*}
\# M_\varepsilon(T^+)\lesssim \frac{|\partial E|}{\kappa_{\partial E} } \frac{|\partial F|}{\kappa_{\partial F} } \log\left(\frac{ |\partial E| |\partial F|}{\kappa_{\partial E} \ \varepsilon}\right)^{2d(1+\alpha)+1}.
\end{align*}
The same argument applies to $ \# M_\varepsilon(T^-)$, and yields the desired conclusion. \end{proof}
\subsection{The transition index} The following estimate is part of the proof of \cite[Theorem~1.5]{AbPeRo2} (see also \cite[Lemma~4.3]{maro21}) and allows us to find the index where eigenvalues cross the $1/2$ threshold. We include a proof for the sake of completeness. \begin{lemma} \label{lem:half}
For any trace class operator $0\le S\le 1$,
\begin{enumerate}[label=(\roman*)]
\item\label{onehalf1} $
\lambda_n\leq \frac{1}{2},$ for every $n\geq \lceil\tr(S)\rceil+ \max\{2\tr(S-S^2),1\}$;
\item\label{onehalf2} $\lambda_n\geq \frac{1}{2},$ for every $1\leq n\leq \lceil \tr(S)\rceil- \max\{2\tr(S-S^2),1\}$.
\end{enumerate} \end{lemma} \begin{proof} First notice that if $S$ is an orthogonal projection, then the result holds trivially, so we can assume otherwise. In particular, we have that $\tr(S-S^2)>0$.
Set $K=\lceil\tr(S)\rceil$ and write
\begin{align*}
\tr (S)-\tr(S^2)&=\sum_{n=1}^\infty \lambda_n(1-\lambda_n)
\\
&=\sum_{n=1}^{K}\lambda_n(1-\lambda_n)+\sum_{n=K+1}^\infty\lambda_n(1-\lambda_n)
\\
&\geq \lambda_{K}\sum_{n=1}^{K}(1-\lambda_n)+(1-\lambda_{K})\sum_{n=K+1}^\infty\lambda_n
\\
&=\lambda_K K-\lambda_K\sum_{n=1}^{K}\lambda_n+(1-\lambda_{K})\Big(\tr(S)-\sum_{n=1}^{K}\lambda_n\Big)
\\
&=\lambda_K K+(1-\lambda_{K})\tr(S)-\sum_{n=1}^{K}\lambda_n
\\
&=\tr(S)-\sum_{n=1}^{K}\lambda_n+\lambda_K(K-\tr(S)).
\end{align*}
Hence
\begin{equation}\label{eq:sum1}
\sum_{n=K+1}^\infty\lambda_n=\tr(S)-\sum_{n=1}^{K}\lambda_n\leq \tr (S)-\tr(S^2),
\end{equation}
and
\begin{align}\label{eq:sum2}
\sum_{n=1}^{K-1}(1-\lambda_n)&=\tr(S)-\sum_{n=1}^{K}\lambda_n+\lambda_K(K-\tr(S)) -(1-\lambda_K)(1+\tr(S)-K)\notag
\\
&\leq\tr(S)-\sum_{n=1}^{K}\lambda_n+\lambda_K(K-\tr(S)) \leq \tr (S)-\tr(S^2).
\end{align}
Now let $j\in\mathbb{N}$ be such that $j\ge 2(\tr(S)-\tr(S^2))$ and consider $k=K+j$. It follows from \eqref{eq:sum1} that
$$
2(\tr(S)-\tr(S^2))\cdot\lambda_k\leq j\cdot\lambda_{K+j}\leq \sum_{n=K+1}^\infty\lambda_n\leq \tr (S)-\tr(S^2),
$$
which shows $\lambda_k\leq 1/2$ as $0<\tr(S)-\tr(S^2)<\infty$; this proves part \ref{onehalf1}.
For part \ref{onehalf2}, if $1\leq k=K-j\leq K-2(\tr(S)-\tr(S^2))$ for $j\in\mathbb{N}$, then \eqref{eq:sum2} implies
$$
2(\tr(S)-\tr(S^2))\cdot(1-\lambda_k)\leq j\cdot (1-\lambda_{K-j})\leq \sum_{n=1}^{K-1}(1-\lambda_n)\leq \tr(S)-\tr(S^2),
$$
yielding $\lambda_k\geq 1/2$. This completes the proof. \end{proof}
\subsection{Proof of the main result}
With all the preparatory work at hand, we are ready to prove the main result.
\begin{proof}[Proof of Theorem~\ref{th2}]
Recall from \eqref{eq:stretch} that the eigenvalues of the concentration operator remain the same if we replace $E,F$ and $L$ with $t^{-1}E,tF$ and $tL$ respectively. We choose $t=|\partial E|^{1/(d-1)}$ and notice that $t^{-1}E$ satisfies $|\partial t^{-1}E|=1,$ $\eta_{ \partial t^{-1}E}=t^{-1}\eta_{\partial E}=1$, and $\kappa_{ \partial t^{-1}E}=\kappa_{\partial E}$. Furthermore, we also have that $tF$ has regular boundary at scale $\eta_{ \partial tF}= t\eta_{\partial F}=(|\partial E||\partial F|)^{1/(d-1)}\geq 1$ with constant $\kappa_{\partial tF}=\kappa_{\partial F}$, and $tL\ge 1$ by assumption on $L$.
Note that for $F'\subseteq (-tL/2,tL/2)^d$, the operator $T$ has integral kernel
\[K(x,y)=\chi_{F'}(x)\chi_{F'}(y)\frac{1}{(tL)^d}\sum_{k\in(t^{-1}E)_{tL}} e^{-2\pi i k (x-y)}.\]
Thus,
\[\tr(T)=\int K(x,x)dx=\int_{F'}\frac{1}{(tL)^d}\sum_{k\in (t^{-1}E)_{tL}}1 dx=\frac{\#(t ^{-1}E)_{tL}}{(tL)^d}|F'|.\]
On the other hand, from Lemmas \ref{lem:schp} and \ref{lem:pm} we have that
\begin{align*}
\tr\big(T^\pm-(T^\pm)^2\big)&\leq C_{\alpha,d} \frac{|\partial t^{-1}E|}{\kappa_{\partial t^{-1} E} } \frac{|\partial tF|}{\kappa_{\partial t F} } \log\left(\frac{ e |\partial t^{-1} E| |\partial tF|}{\kappa_{\partial t^{-1}E} }\right)^{2d(1+\alpha)+1}
\\
&= C_{\alpha,d} \frac{|\partial E|}{\kappa_{\partial E} } \frac{|\partial F|}{\kappa_{\partial F} } \log\left(\frac{e |\partial E| |\partial F|}{\kappa_{\partial E} }\right)^{2d(1+\alpha)+1} =:C_{E,F}.
\end{align*}
So from Lemma \ref{lem:half},
\begin{align*}
\lambda_n(T^+) &\leq \frac{1}{2}, & n &\geq \Big\lceil\frac{\#(t^{-1}E)_{tL}}{(tL)^d} |(tF)^+|\Big\rceil+ 2C_{E,F};
\\ \lambda_n(T^-) &\geq \frac{1}{2}, & n&\leq \Big\lceil \frac{\#(t^{-1}E)_{tL}}{(tL)^d} |(tF)^-|\Big\rceil- 2C_{E,F}.
\end{align*}
By Corollary \ref{cor:coar} and $|\partial t^{-1}E|=1$,
\begin{align*}
\#\{n\in \mathbb{N}: \ \lambda_n(T^-)<1/2,\ \lambda_n(T^+)>1/2\} &\lesssim \frac{1}{\kappa_{\partial E}}|(tF)^+\smallsetminus (tF)^-| + C_{E,F}
\\ &\le \frac{1}{\kappa_{\partial E}}|\partial tF + B_{\sqrt{d}}(0)| + C_{E,F}
\\ &\lesssim \frac{1}{\kappa_{\partial E}} \frac{t^{d-1}|\partial F|}{\kappa_{\partial F}} + C_{E,F}
\lesssim C_{E,F},
\end{align*}
where in the second to last step we used Lemma \ref{lem:coar}. Since $\lambda_n(T^-)\le \lambda_n(T)\le \lambda_n(T^+)$ for every $n\in \mathbb{N}$, again by Lemma \ref{lem:pm},
\begin{align*}
\# M_\varepsilon(T)\leq& \#\{n\in \mathbb{N}: \ 1/2\leq\lambda_n(T^-)<1-\varepsilon\}+\#\{n\in \mathbb{N}: \ \varepsilon<\lambda_n(T^+)\leq 1/2\}
\\ &+\#\{n\in \mathbb{N}: \ \lambda_n(T^-)<1/2,\ \lambda_n(T^+)>1/2\}
\\ \lesssim& \# M_\varepsilon(T^-)+\# M_\varepsilon(T^+) + \#\{n\in \mathbb{N}: \ \lambda_n(T^-)<1/2, \lambda_n(T^+)>1/2\}
\\\lesssim& \frac{|\partial E|}{\kappa_{\partial E} } \frac{|\partial F|}{\kappa_{\partial F} } \log\left(\frac{ |\partial E| |\partial F|}{\kappa_{\partial E} \ \varepsilon}\right)^{2d(1+\alpha)+1}. \qedhere
\end{align*} \end{proof}
\section{The continuous Fourier transform}\label{sec:con} In this section we deduce Theorem \ref{th1} from Theorem \ref{th2} by letting $L\to \infty$.
\begin{proof}[Proof of Theorem \ref{th1}]
Fix $E$ and $F$ as in the statement of Theorem \ref{th1}. We consider $L$ sufficiently large so that $L\geq |\partial E|^{-1/(d-1)}$ and $F\subseteq (-L/2,L/2)^d$.
Let $S_L:L^2(\mathbb{R}^d)\to L^2(\mathbb{R}^d)$ be the operator given by \[S_L f=T_{E,F,L} (\chi_F f)=\chi_F \mathcal{F}_L^{-1}\chi_{E_L} \mathcal{F}_L \chi_F f, \qquad f\in L^2(\mathbb{R}^d).\] Note that $S_L$ and $T_{E,F,L}$ share the same non-zero eigenvalues, and recall the operator $S$ from \eqref{eq_S}.
\noindent {\bf Step 1}. We show that \begin{align}\label{eq_a}
\lim_{L\to \infty} \|S_L-S\|=0. \end{align}
Recall that $Q_{L^{-1}}=L^{-1}[-1/2,1/2)^d$ and define the auxiliary set \begin{align*}
\Gamma_L = \bigcup_{m\in E_L} m+Q_{L^{-1}}. \end{align*} Note that the symmetric difference $E\Delta\Gamma_L$ is included in $\partial E+B_{L^{-1}\sqrt{d}}(0)$. From Lemma \ref{lem:coar}, \begin{align*}
|E\Delta\Gamma_L| \le |\partial E+B_{L^{-1}\sqrt{d}}(0)|
\lesssim \frac{|\partial E|}{\kappa_{\partial E} L}\big(1+(L\eta_{\partial E})^{-(d-1)}\big)\xrightarrow{L\to\infty} 0. \end{align*} Using this and setting $R_L=\chi_F \mathcal{F}^{-1}\chi_{\Gamma_L} \mathcal{F} \chi_F$, for $f\in L^2(\mathbb{R}^d)$ we have \begin{align*}
\|(R_L-S)f\|_2^2 &\le \|(\chi_{\Gamma_L}-\chi_E) \mathcal{F}(\chi_F f) \|_2^2
\le |E\Delta\Gamma_L| \|\mathcal{F}(\chi_F f)\|_\infty^2
\\ &\le |E\Delta\Gamma_L| \|\chi_F f\|_1^2
\le |E\Delta\Gamma_L| |F| \| f\|_2^2\xrightarrow{L\to\infty} 0. \end{align*} To prove \eqref{eq_a}, it only remains to show that \begin{align}\label{eqb}
\|R_L-S_L\|\xrightarrow{L\to\infty} 0. \end{align} To this end, let $f\in L^2(\mathbb{R}^d)$ and estimate \begin{align*}
\|R_Lf-S_Lf\|_2^2&= \int_F \Big| \int_{\Gamma_L} \mathcal{F}(\chi_F f)(w)e^{2\pi i wx}dw -L^{-d}\sum_{m\in E_L}\mathcal{F}(\chi_F f)(m)e^{2\pi i mx} \Big|^2 dx
\\&= \int_F \Big|\sum_{m\in E_L} \int_{m+Q_{L^{-1}}} \mathcal{F}(\chi_F f)(w)e^{2\pi i wx}-\mathcal{F}(\chi_F f)(m)e^{2\pi i mx} dw \Big|^2 dx
\\&= \int_F \Big|\sum_{m\in E_L} \int_{m+Q_{L^{-1}}} \int_F f(t)\big(e^{2\pi i w(x-t)}-e^{2\pi i m(x-t)}\big)dt dw \Big|^2 dx
\\&\lesssim \int_F \Big(\sum_{m\in E_L} \int_{m+Q_{L^{-1}}} \int_F |f(t)||w-m||x-t| dt dw \Big)^2 dx
\\&\lesssim \int_F \Big(\sum_{m\in E_L} L^{-(d+1)} \int_F |f(t)||x-t| dt \Big)^2 dx
\\&\lesssim (\#E_L)^2 L^{-2(d+1)} \int_F \|f\|_2^2 \int_F |x-t|^2 dt dx
\\&\lesssim L^{-2} \frac{\max\{|\partial E|^{2d/(d-1)},1\}}{\kappa_{\partial E}^2}|F|^2 \diam(F)^2 \|f\|_2^2, \end{align*} where in the last estimate we used Corollary \ref{cor:coar}. Hence \eqref{eqb} holds.
\noindent {\bf Step 2}. Since $S_L$ and $T_{E,F,L}$ share the same non-zero eigenvalues, the estimates in Theorem \ref{th2} apply also to $S_L$ for all sufficiently large $L$. By the Courant-Fischer min-max formula, operator convergence of positive compact operators implies convergence of their eigenvalues. Hence, by \eqref{eq_a}, the estimate satisfied by the spectrum of $S_L$ extends to the spectrum of $S$. \end{proof}
\section{The discrete Fourier transform}\label{sec_dis}
\begin{proof}[Proof of Theorem \ref{th3}] Let us define $E := \Omega+\overline{Q_1}$. Then $\Omega=E_L$ for $L=1$. Let us apply Theorem \ref{th2} with $L=1$ to $E$, $F$. We check the relevant hypotheses.
We first note that $\partial E$ is an almost disjoint union of faces of cubes (by almost disjoint we mean that
the intersection of any two faces has zero $\mathcal{H}^{d-1}$-measure). Moreover, each such face is contained in $k+\overline{Q_1}$ for exactly one $k\in \Omega$, which must belong to $\partial \Omega$. In particular, \[\partial E \subset \bigcup_{k \in \partial \Omega} k + \partial Q_1.\] Conversely, for each point $k\in \partial \Omega$ at least one face of the cube $k+\overline{Q_1}$ lies in $\partial E$. Thus, \begin{equation}\label{eq:disc-vs-cont-dom2}
\#\partial\Omega\leq \big|\partial E\big|\leq 2d\cdot\#\partial\Omega, \end{equation} and consequently \begin{align*}
|\partial E|| \partial F|\geq \#\partial\Omega \cdot | \partial F|\geq 1. \end{align*}
Moreover, \eqref{eq:disc-vs-cont-dom2} shows that the choice $L=1$ satisfies $L\ge |\partial E|^{-1/(d-1)}$ as $\partial\Omega$ contains at least one point.
Now fix $0<r \le \sqrt{d}\cdot(\#\partial\Omega)^{1/(d-1)}$ and let us show that $\partial E$ is regular at this maximal scale. If $r\le 2\sqrt{d}$ and $x\in \partial E$, we clearly have $\mathcal{H}^{d-1}\big(\partial E\cap B_r(x)\big)\gtrsim r^{d-1}$, as $E$ is a union of cubes of side length $1$.
If $r> 2\sqrt{d}$, set $n=\lfloor r/\sqrt{d} \rfloor$ and let $x\in\partial E$. There exists $k_x\in \partial \Omega$ such that $|k_x-x|\le \sqrt{d}/2$. Note that for $y\in k_x+\overline{Q_{n+1}}$,
\[|y-x|\le \frac{\sqrt{d} (n+1)}{2}+\frac{\sqrt{d}}{2}\le \frac{r}{2}+\sqrt{d}<r.\] Hence, $k_x+\overline{Q_{n+1}}\subseteq B_r(x)$. This together with the fact that for $k\in \partial \Omega$ at least one face of the cube $k+\overline{Q_1}$ lies in $\partial E$ gives \begin{align*} \mathcal{H}^{d-1}\big(\partial E\cap B_r(x)\big)&\ge \mathcal{H}^{d-1}\big(\partial E\cap (k_x+\overline{Q_{n+1}})\big)\ge \mathcal{H}^{d-1}\Big(\partial E\cap \bigcup_{k\in \partial\Omega \cap (k_x+Q_{n})} k+\partial Q_1\Big) \\ &\ge \# \big( \partial\Omega \cap (k_x+Q_{n})\big) \ge \kappa_{\partial\Omega} n^{d-1} \gtrsim \kappa_{\partial\Omega} r^{d-1}. \end{align*} This shows that $\partial E$ is regular at scale $\sqrt{d}\cdot(\#\partial\Omega)^{1/(d-1)}$ with constant $C_d \cdot\kappa_{\partial\Omega}$. Note that if a set $X$ is regular at scale $\eta_X$ and constant $\kappa_X$, then it is also regular at scale $\alpha\eta_X$ and constant $\min\{1,\alpha^{1-d}\}\kappa_X$, for every $\alpha>0.$ By \eqref{eq:disc-vs-cont-dom2} we therefore see that $\partial E$ is regular at scale
$\eta_{\partial E}=\big|\partial E\big|^{1/(d-1)}$ and constant $\kappa_{\partial E}\asymp \kappa_{\partial\Omega}$.
The desired estimates now follow by applying Theorem~\ref{th2} to $E$ and $F$ with $L=1$, together with \eqref{eq:disc-vs-cont-dom2}. \end{proof}
\section{Proof of Remark \ref{rem_N}}\label{sec_rem} First we combine Lemma~\ref{lem:half}, Lemma~\ref{lem:schp} (for $p=1$) and Theorem~\ref{th1} to conclude that there exists a constant $C=C_{\alpha,d}>0$ such that if $$
n\geq \lceil |E|\cdot |F|\rceil + C\frac{|\partial E|}{\kappa_{\partial E} }\frac{|\partial F|}{\kappa_{\partial F} } \cdot \log\left( \frac{e|\partial E||\partial F|}{ \kappa_{\partial E} }\right)^{2d(1+\alpha)+1} =:C_1, $$ then $\lambda_n\leq 1/2$, and if $$
n\leq \lceil |E|\cdot |F|\rceil - C\frac{|\partial E|}{\kappa_{\partial E} }\frac{|\partial F|}{\kappa_{\partial F} } \cdot \log\left( \frac{e|\partial E||\partial F|}{ \kappa_{\partial E} }\right)^{2d(1+\alpha)+1}=:C_2, $$ then $\lambda_n\geq 1/2$.
For $\varepsilon\in(0,1)$, define $\varepsilon_0:=\min\{\varepsilon, 1-\varepsilon\}\le 1/2$ and let $0<\tau<\varepsilon_0$. Observe that $$ \{1,...,\lfloor C_2\rfloor\}\smallsetminus M_\tau(S)
\subseteq N_{1-\varepsilon_0}(S) \subseteq N_\varepsilon(S) \subseteq N_{\varepsilon_0}(S)\subseteq \{1,...,\lceil C_1\rceil\}\cup M_\tau(S), $$ where we understand $\{1,...,\lfloor C_2\rfloor\}$ to be $\varnothing$ if $C_2<1$. Consequently, \[ C_2-1-\#M_\tau(S) \leq \#N_\varepsilon(S)\leq C_1 +1+\#M_\tau(S).\] Rearranging the last expression and using Theorem~\ref{th1} for $\tau$ gives
\[\big|N_\varepsilon(S)-|E|\cdot |F|\big|\lesssim \frac{|\partial E|}{\kappa_{\partial E} } \cdot \frac{|\partial F|}{\kappa_{\partial F} } \cdot \log\left( \frac{|\partial E||\partial F|}{ \kappa_{\partial E}\ \tau}\right)^{2d(1+\alpha)+1}. \] Letting $\tau \nearrow \varepsilon_0$ yields \eqref{eq_N}.
\end{document}
\begin{document}
\title{Dual selection games}
\author{Steven Clontz} \address{Department of Mathematics and Statistics, The University of South Alabama, Mobile, AL 36688} \email{[email protected]}
\keywords{Selection principle, selection game, limited information strategies}
\subjclass[2010]{54C30, 54D20, 54D45, 91A44}
\begin{abstract}
Often, a given selection game studied in the literature has
a known dual game. In dual games, a winning
strategy for a player in either game may be used to create
a winning strategy for the opponent in the dual.
For example, the Rothberger selection game involving open covers
is dual to the point-open game. This extends to a general
theorem: if \(\{\ran{f}:f\in\mathbf C(\mc R)\}\) is coinitial in \(\mc A\)
with respect to \(\subseteq\),
where \(\mathbf C(\mc R)=\{f\in(\bigcup\mc R)^{\mc R}:R\in\mc R\Rightarrow f(R)\in R\}\)
collects the choice functions on the set \(\mc R\),
then \(G_1(\mc A,\mc B)\) and \(G_1(\mc R,\neg\mc B)\)
are dual selection games. \end{abstract}
\maketitle
\section{Introduction}
\begin{definition}
An \term{\(\omega\)-length game} is a pair \(G=\<M,W\>\) such that
\(W\subseteq M^\omega\). The set \(M\) is the \term{moveset} of the game,
and the set \(W\) is the \term{payoff set} for the second player. \end{definition}
In such a game \(G\), players \(\plI\) and \(\plII\) alternate making choices \(a_n\in M\) and \(b_n\in M\) during each round \(n<\omega\), and \(\plII\) wins the game if and only if \(\<a_0,b_0,a_1,b_1,\dots\>\in W\).
Often when defining games, \(\plI\) and \(\plII\) are restricted to choosing from different movesets \(A,B\). Of course, this can be modeled with \(\<M,W\>\) by simply letting \(M=A\cup B\) and adding/removing sequences from \(W\) whenever player \(\plI\)/\(\plII\) makes the first ``illegal'' move.
A class of such games heavily studied in the literature (see \cite{MR1378387} and its many sequels) is the class of selection games.
\begin{definition}
The \term{selection game} \(\schStrongSelGame{\mc A}{\mc B}\)
is an \(\omega\)-length game involving Players \(\plI\) and \(\plII\).
During round \(n\), \(\plI\) chooses
\(A_n\in\mc A\), followed by \(\plII\) choosing \(B_n\in A_n\).
Player \(\plII\) wins in the case that \(\{B_n:n<\omega\}\in\mc B\),
and Player \(\plI\) wins otherwise. \end{definition}
For brevity, let
\[
\schStrongSelGame{\mc A}{\neg \mc B}
=
\schStrongSelGame{\mc A}{\mc P\left(\bigcup \mc A\right)\setminus \mc B}
.\]
That is, \(\plII\) wins in the case that \(\{B_n:n<\omega\}\not\in\mc B\),
and \(\plI\) wins otherwise.
\begin{definition}
For a set \(X\), let \(\mathbf C(X)=\{f\in(\bigcup X)^X:x\in X\Rightarrow f(x)\in x\}\)
be the collection of all choice functions on \(X\). \end{definition}
\begin{definition}
Write \(X\preceq Y\) if \(X\) is coinitial in \(Y\) with respect to \(\subseteq\);
that is, \(X\subseteq Y\), and for all \(y\in Y\), there exists \(x\in X\) such that
\(x\subseteq y\).
In the context of selection games, we will say \(\mc A'\) is a \term{selection basis}
for \(\mc A\) when \(\mc A'\preceq \mc A\). \end{definition}
\begin{definition}
The set \(\mc R\) is said to be a \term{reflection} of the set \(\mc A\)
if \[\{\ran f:f\in\mathbf C(\mc R)\}\] is a selection basis for \(\mc A\). \end{definition}
Put another way, \(\mc R\) is a reflection of \(\mc A\) if for every \(A\in\mc A\), there exists \(f\in\mathbf C(\mc R)\) such that \(\ran f\in\mc A\) and \(\ran f\subseteq A\).
As we will see, reflections of selection sets are used frequently (but implicitly) throughout the literature to define dual selection games.
We use the following conventions to describe strategies for playing games.
\begin{definition}
For \(f\in B^A\) and \(X\subseteq A\), let \(f\rest X\) be the restriction of \(f\)
to \(X\). In particular, for \(f\in B^\omega\) and \(n<\omega\), \(f\rest n\)
describes the first \(n\) terms of the sequence \(f\). \end{definition}
\begin{definition}
A \term{strategy} for the first player \(\plI\) (resp. second player \(\plII\))
in a game \(G\) with moveset \(M\) is a function
\(\sigma:M^{<\omega}\to M\). This strategy is said to be \term{winning} if
for all possible \term{attacks} \(\alpha\in M^\omega\) by their opponent,
where \(\alpha(n)\) is played by the opponent during round \(n\),
the player wins the game by playing \(\sigma(\alpha\rest n)\)
(resp. \(\sigma(\alpha\rest n+1)\)) during round \(n\). \end{definition}
That is, a strategy is a rule that determines the moves of a player based upon all previous moves of the opponent. (It could also rely on all previous moves of the player using the strategy, since these can be reconstructed from the previous moves of the opponent and the strategy itself.)
\begin{definition}
A \term{predetermined strategy} for the first player \(\plI\)
in a game \(G\) with moveset \(M\) is a function
\(\sigma:\omega\to M\). This strategy is said to be winning if
for all possible attacks \(\alpha\in M^\omega\) by their opponent,
the first player wins the game by playing \(\sigma(n)\)
during round \(n\). \end{definition}
So a predetermined strategy ignores all moves of the opponent during the game (all moves were decided before the game began).
\begin{definition}
A \term{Markov strategy} for the second player \(\plII\)
in a game \(G\) with moveset \(M\) is a function
\(\sigma:M\times\omega\to M\). This strategy is said to be winning if
for all possible attacks \(\alpha\in M^\omega\) by their opponent,
the second player wins the game by playing \(\sigma(\alpha(n),n)\)
during round \(n\). \end{definition}
So a Markov strategy may only consider the most recent move of the opponent, and the current round number. Note that unlike perfect-information or predetermined strategies, a Markov strategy cannot use knowledge of moves used previously by the player (since they depend on previous moves of the opponent that have been ``forgotten'').
\begin{definition}
Write \(\plI\win G\) (resp. \(\plI\prewin G\)) if player \(\plI\) has a winning
strategy (resp. winning predetermined strategy) for the game \(G\). Similarly,
write \(\plII\win G\) (resp. \(\plII\markwin G\)) if player \(\plII\) has a winning
strategy (resp. winning Markov strategy) for the game \(G\). \end{definition}
Of course, \(\plII\markwin G\Rightarrow \plII\win G\Rightarrow \plI\notwin G\Rightarrow \plI\notprewin G\). In general, none of these implications (not even the second \cite{MR0054922}) can be reversed.
It's worth noting that \(\plI\notprewin \schStrongSelGame{\mc A}{\mc B}\) is equivalent to the selection principle often denoted \(\schStrongSelProp{\mc A}{\mc B}\) in the literature.
The goal of this paper is to characterize when two games are ``dual'' in the following senses.
\begin{definition}
A pair of games \(G(X),H(X)\) defined for a topological space \(X\)
are \term{Markov information dual} if both
of the following hold.
\begin{itemize}
\item \(\plI\prewin G(X)\) if and only if \(\plII\markwin H(X)\).
\item \(\plII\markwin G(X)\) if and only if \(\plI\prewin H(X)\).
\end{itemize} \end{definition}
\begin{definition}
A pair of games \(G(X),H(X)\) defined for a topological space \(X\)
are \term{perfect information dual} if both
of the following hold.
\begin{itemize}
\item \(\plI\win G(X)\) if and only if \(\plII\win H(X)\).
\item \(\plII\win G(X)\) if and only if \(\plI\win H(X)\).
\end{itemize} \end{definition}
\section{Main Results}
The following four theorems demonstrate that reflections characterize dual selection games for both perfect information strategies and certain limited information strategies.
The duality of the Rothberger game \(\schStrongSelGame{\mc O_X}{\mc O_X}\) and the point-open game on \(X\) for perfect information strategies was first noted by Galvin in \cite{MR0493925}, and for Markov-information strategies by Clontz and Holshouser in \cite{2018arXiv180606001C}. These proofs may be generalized as follows.
\begin{theorem}
Let \(\mc R\) be a reflection of \(\mc A\).
Then
\(\plI\prewin\schStrongSelGame{\mc A}{\mc B}\) if and only if
\(\plII\markwin\schStrongSelGame{\mc R}{\neg\mc B}\). \end{theorem}
\begin{proof}
Let \(\sigma\) witness
\(\plI\prewin\schStrongSelGame{\mc A}{\mc B}\).
Since \(\sigma(n)\in\mc A\),
\(\ran{f_n}\subseteq\sigma(n)\)
for some \(f_n\in\mathbf C(\mc R)\). So let
\(\tau(R,n)=f_n(R)\) for all \(R\in \mc R\) and \(n<\omega\).
Suppose \(R_n\in \mc R\) for all \(n<\omega\).
Note that since \(\sigma\) is winning and
\(\tau(R_n,n)=f_n(R_n)\in\ran{f_n}\subseteq\sigma(n)\),
\(\{\tau(R_n,n):n<\omega\}\not\in\mc B\). Thus \(\tau\) witnesses
\(\plII\markwin\schStrongSelGame{\mc R}{\neg\mc B}\).
Now let \(\sigma\) witness
\(\plII\markwin\schStrongSelGame{\mc R}{\neg\mc B}\).
Let \(f_n\in\mathbf C(\mc R)\) be defined by \(f_n(R)=\sigma(R,n)\),
and let \(\tau(n)=\ran{f_n}\in\mc A\).
Suppose that \(B_n\in\tau(n)=\ran{f_n}\) for
all \(n<\omega\). Choose \(R_n\in\mc R\) such that
\(B_n=f_n(R_n)=\sigma(R_n,n)\). Since \(\sigma\) is winning,
\(\{B_n:n<\omega\}\not\in\mc B\). Thus \(\tau\) witnesses
\(\plI\prewin\schStrongSelGame{\mc A}{\mc B}\). \end{proof}
\begin{theorem}
Let \(\mc R\) be a reflection of \(\mc A\).
Then
\(\plII\markwin\schStrongSelGame{\mc A}{\mc B}\) if and only if
\(\plI\prewin\schStrongSelGame{\mc R}{\neg\mc B}\). \end{theorem}
\begin{proof}
Let \(\sigma\) witness
\(\plII\markwin\schStrongSelGame{\mc A}{\mc B}\).
Let \(n<\omega\). Suppose that for each \(R\in\mc R\),
there was \(g(R)\in R\) such that for all \(A\in \mc A\),
\(\sigma(A,n)\not=g(R)\). Then \(g\in\mathbf C(\mc R)\)
and \(\ran g\in\mc A\),
thus \(\sigma(\ran g,n)\not=g(R)\) for all \(R\in\mc R\),
a contradiction.
So choose \(\tau(n)\in\mc R\) such that for all \(r\in \tau(n)\)
there exists \(A_{r,n}\in\mc A\) such that \(\sigma(A_{r,n},n)=r\).
It follows that when \(r_n\in\tau(n)\) for \(n<\omega\),
\(\{r_n:n<\omega\}=\{\sigma(A_{r_n,n},n):n<\omega\}\in\mc B\),
so \(\tau\) witnesses
\(\plI\prewin\schStrongSelGame{\mc R}{\neg\mc B}\).
Now let \(\sigma\) witness
\(\plI\prewin\schStrongSelGame{\mc R}{\neg\mc B}\).
Then \(\sigma(n)\in\mc R\), so for \(A\in\mc A\), let
\(f_A\in\mathbf C(\mc R)\) satisfy \(\ran{f_A}\subseteq A\),
and let \(\tau(A,n)=f_A(\sigma(n))\in A\cap\sigma(n)\).
Then if \(A_n\in\mc A\) for \(n<\omega\), \(\tau(A_n,n)\in\sigma(n)\),
so \(\{\tau(A_n,n):n<\omega\}\in\mc B\).
Thus \(\tau\) witnesses
\(\plII\markwin\schStrongSelGame{\mc A}{\mc B}\). \end{proof}
\begin{theorem}
Let \(\mc R\) be a reflection of \(\mc A\).
Then
\(\plI\win\schStrongSelGame{\mc A}{\mc B}\) if and only if
\(\plII\win\schStrongSelGame{\mc R}{\neg\mc B}\). \end{theorem}
\begin{proof}
Let \(\sigma\) witness
\(\plI\win\schStrongSelGame{\mc A}{\mc B}\).
Let \(c(\emptyset)=\emptyset\). Suppose
\(c(s)\in(\bigcup\mc A)^{<\omega}\)
is defined for \(s\in\mc R^{<\omega}\). Since \(\sigma(c(s))\in\mc A\),
let \(f_s\in\mathbf C(\mc R)\) satisfy \(\ran{f_s}\subseteq\sigma(c(s))\),
and let \(c(s\concat\<R\>)=c(s)\concat\<f_s(R)\>\).
Then let \(c(\alpha)=\bigcup\{c(\alpha\rest n):n<\omega\}\)
for \(\alpha\in\mc R^\omega\), so
\[
c(\alpha)(n)
=
f_{\alpha\rest n}(\alpha(n))
\in
\ran{f_{\alpha\rest n}}
\subseteq
\sigma(c(\alpha\rest n))
\]
demonstrating that \(c(\alpha)\) is a legal attack against \(\sigma\).
Let \(\tau(s\concat\<R\>)=f_s(R)\). Consider the attack \(\alpha\in\mc R^\omega\)
against \(\tau\). Then since \(\sigma\) is winning and
\(
\tau(\alpha\rest n+1)=f_{\alpha\rest n}(\alpha(n))\in
\ran{f_{\alpha\rest n}}\subseteq\sigma(c(\alpha\rest n))
\), it follows that \(\{\tau(\alpha\rest n+1):n<\omega\}\not\in\mc B\).
Thus \(\tau\) witnesses
\(\plII\win\schStrongSelGame{\mc R}{\neg\mc B}\).
Now let \(\sigma\) witness
\(\plII\win\schStrongSelGame{\mc R}{\neg\mc B}\).
For \(s\in \mc R^{<\omega}\), define \(f_s\in\mathbf C(\mc R)\)
by \(f_s(R)=\sigma(s\concat\<R\>)\). Let \(\tau(\emptyset)=\ran{f_\emptyset}\in\mc A\),
and for \(x\in\tau(\emptyset)\), choose \(R_{\<x\>}\in\mc R\) such that
\(x=f_{\emptyset}(R_{\<x\>})\) (for other \(x\in\bigcup\mc A\), choose \(R_{\<x\>}\)
arbitrarily as it won't be used). Now let \(s\in(\bigcup\mc A)^{<\omega}\),
and suppose \(R_{s\rest n\concat\<x\>}\in\mc R\) has been defined for
\(n\leq|s|\) and \(x\in\bigcup\mc A\).
Then let \(\tau(s\concat\<x\>)=\ran{f_{\<R_{s\rest 1},\dots,R_s,R_{s\concat\<x\>}\>}}\)
and for \(y\in\tau(s\concat\<x\>)\) choose \(R_{s\concat\<x,y\>}\) such that
\(y=f_{\<R_{s\rest 1},\dots,R_s,R_{s\concat\<x\>}\>}(R_{s\concat\<x,y\>})\) (and again,
choose \(R_{s\concat\<x,y\>}\) arbitrarily for other \(y\in\bigcup\mc A\) as it won't be used).
Then let \(\alpha\) attack \(\tau\), so
\(\alpha(n)\in\tau(\alpha\rest n)\) and thus
\(\alpha(n)=f_{\<R_{\alpha\rest 1},\dots,R_{\alpha\rest n}\>}(R_{\alpha\rest n+1})
=\sigma(\<R_{\alpha\rest 1},\dots,R_{\alpha\rest n+1}\>)\). Since \(\sigma\) is winning,
\(\{\sigma(\<R_{\alpha\rest 1},\dots,R_{\alpha\rest n+1}\>):n<\omega\}
=\{\alpha(n):n<\omega\}\not\in\mc B\).
Thus \(\tau\) witnesses
\(\plI\win\schStrongSelGame{\mc A}{\mc B}\). \end{proof}
\begin{theorem}
Let \(\mc R\) be a reflection of \(\mc A\).
Then
\(\plII\win\schStrongSelGame{\mc A}{\mc B}\) if and only if
\(\plI\win\schStrongSelGame{\mc R}{\neg\mc B}\). \end{theorem}
\begin{proof}
Let \(\sigma\) witness
\(\plII\win\schStrongSelGame{\mc A}{\mc B}\).
Let \(s\in(\bigcup\mc A)^{<\omega}\) and assume \(a(s)\in\mc A^{|s|}\) is defined
(of course, \(a(\emptyset)=\emptyset\)).
Suppose for all \(R\in\mc R\) there existed \(f(R)\in R\) such that for all
\(A\in\mc A\), \(\sigma(a(s)\concat\<A\>)\not=f(R)\). Then
\(f\in\mathbf C(\mc R)\) and \(\ran{f}\in\mc A\), and thus
\(\sigma(a(s)\concat\<\ran{f}\>)\not=f(R)\) for all \(R\in\mc R\), a contradiction.
So let \(\tau(s)\in\mc R\) satisfy for all \(x\in\tau(s)\) there exists
\(a(s\concat\<x\>)\in\mc A^{|s|+1}\) extending \(a(s)\) such that
\(x=\sigma(a(s\concat\<x\>))\).
If \(\tau\) is attacked by \(\alpha\in(\bigcup\mc R)^\omega\), then
\(\alpha(n)\in\tau(\alpha\rest n)\). So \(\alpha(n)=\sigma(a(\alpha\rest n+1))\),
and since \(\sigma\) is winning,
\(\{\sigma(a(\alpha\rest n+1)):n<\omega\}=\{\alpha(n):n<\omega\}\in\mc B\).
Therefore \(\tau\) witnesses
\(\plI\win\schStrongSelGame{\mc R}{\neg\mc B}\).
Now let \(\sigma\) witness
\(\plI\win\schStrongSelGame{\mc R}{\neg\mc B}\).
Let \(s\in\mc A^{<\omega}\), and suppose \(r(s)\in(\bigcup \mc R)^{|s|}\) is defined
(again, \(r(\emptyset)=\emptyset\)). For \(A\in\mc A\) choose \(f_A\in\mathbf C(\mc R)\)
where \(\ran{f_A}\subseteq A\), and let \(\tau(s\concat\<A\>)=f_A(\sigma(r(s)))\),
and let \(r(s\concat\<A\>)\) extend \(r(s)\) by letting
\(r(s\concat\<A\>)(|s|)=\tau(s\concat\<A\>)\).
If \(\tau\) is attacked by \(\alpha\in\mc A^\omega\),
then since \(\tau(\alpha\rest n+1)=f_{\alpha(n)}(\sigma(r(\alpha\rest n)))\in\alpha(n)\cap\sigma(r(\alpha\rest n))\)
and \(\sigma\) is winning, we conclude that \(\tau\) is a legal strategy and
\(\{\tau(\alpha\rest n+1):n<\omega\}\in\mc B\).
Therefore \(\tau\) witnesses
\(\plII\win\schStrongSelGame{\mc A}{\mc B}\). \end{proof}
\begin{corollary}
If \(\mc R\) is a reflection of \(\mc A\),
then \(\schStrongSelGame{\mc A}{\mc B}\) and \(\schStrongSelGame{\mc R}{\neg\mc B}\)
are both perfect information dual and Markov information dual. \end{corollary}
\section{Applications of Reflections}
\begin{definition}\label{selectionSets}
Let \(X\) be a topological space and \(\mc T_X\) be a chosen basis of nonempty sets for its topology.
\begin{itemize}
\item Let \(\mc T_{X,x} = \{U\in\mc T_X : x\in U\}\) be the local point-base at \(x\in X\).
\item Let \(\Omega_{X,x} = \{Y\subseteq X: \forall U\in\mc T_{X,x}(U\cap Y\not=\emptyset)\}\) be the fan at \(x\in X\).
\item Let \(\mc T_{X,F} = \{U\in\mc T_X : F\subseteq U\}\) be the local finite-base at \(F\in[X]^{<\aleph_0}\).
\item Let \(\mc O_X = \{\mc U\subseteq\mc T_X : \bigcup \mc U=X\}\) be the collection
of basic open covers of \(X\).
\item Let \(\mc P_X = \{\mc T_{X,x} : x\in X\}\) be the collection of local point-bases of \(X\).
\item Let \(\Omega_X = \{\mc U\subseteq\mc T_X : \forall F\in[X]^{<\aleph_0}\exists U\in\mc U(F\subseteq U)\}\)
be the collection of basic \(\omega\)-covers of \(X\).
\item Let \(\mc F_X = \{\mc T_{X,F} : F\in [X]^{<\aleph_0}\}\) be the collection of local finite-bases of \(X\).
\item Let \(\mc D_X = \{Y\subseteq X: \forall U\in\mc T_X(U\cap Y\not=\emptyset)\}\) be the collection of dense subsets of \(X\).
\item Let \(\Gamma_{X,x} = \{Y\subseteq X: \forall U\in\mc T_{X,x}(Y\setminus U\in[X]^{<\aleph_0})\}\) be the collection
of converging fans at \(x\in X\). (When intersected with \([X]^{\aleph_0}\), these are the non-trivial
sequences of \(X\) converging to \(x\).)
\end{itemize} \end{definition}
While these notions were defined in terms of a particular basis, the reader may verify the following.
\begin{proposition}
Let \(\mc A'\) be a selection basis for \(\mc A\).
\begin{itemize}
\item \(\plI\win\schStrongSelGame{\mc A}{\mc B}\Leftrightarrow\plI\win\schStrongSelGame{\mc A'}{\mc B}\).
\item \(\plI\prewin\schStrongSelGame{\mc A}{\mc B}\Leftrightarrow\plI\prewin\schStrongSelGame{\mc A'}{\mc B}\).
\item \(\plII\win\schStrongSelGame{\mc A}{\mc B}\Leftrightarrow\plII\win\schStrongSelGame{\mc A'}{\mc B}\).
\item \(\plII\markwin\schStrongSelGame{\mc A}{\mc B}\Leftrightarrow\plII\markwin\schStrongSelGame{\mc A'}{\mc B}\).
\end{itemize} \end{proposition}
\begin{proposition}
Each selection set in Definition \ref{selectionSets} is a selection basis for the set
defined by replacing \(\mc T_X\) with the set of all nonempty open sets in \(X\). \end{proposition}
As such, the choice of topological basis is irrelevant when playing selection games using these sets.
We may now establish (or re-establish) the following dual games.
\begin{proposition}
\(\mc P_X\) is a reflection of \(\mc O_X\). \end{proposition} \begin{proof}
For every open cover \(\mc U\), the corresponding choice function \(f\in\mathbf C(\mc P_X)\) is simply
the witness that \(x\in f(\mc T_{X,x})\in\mc U\). \end{proof}
\begin{corollary}
\(\schStrongSelGame{\mc O_X}{\mc B}\) and \(\schStrongSelGame{\mc P_X}{\neg\mc B}\) are perfect-information
and Markov-information dual. \end{corollary}
In the case that \(\mc B=\mc O_X\), \(\schStrongSelGame{\mc O_X}{\mc O_X}\) is the well-known Rothberger game, and \(\schStrongSelGame{\mc P_X}{\neg\mc O_X}\) is isomorphic to the point-open game \(PO(X)\): \(\plI\) chooses points of \(X\), \(\plII\) chooses an open neighborhood of each chosen point, and \(\plI\) wins if \(\plII\)'s choices are a cover. So this was simply the classic result that the Rothberger game and point-open game are perfect-information dual \cite{MR0493925}, and the more recent result that these games are Markov-information dual \cite{2018arXiv180606001C}.
\begin{proposition}
\(\mc F_X\) is a reflection of \(\Omega_X\). \end{proposition} \begin{proof}
For every \(\omega\)-cover \(\mc U\), the corresponding choice function \(f\in\mathbf C(\mc F_X)\) is simply
the witness that \(F\subseteq f(\mc T_{X,F})\in\mc U\). \end{proof}
\begin{corollary}
\(\schStrongSelGame{\Omega_X}{\mc B}\) and \(\schStrongSelGame{\mc F_X}{\neg\mc B}\) are perfect-information
and Markov-information dual. \end{corollary}
Note that in the case that \(\mc B=\Omega_X\), \(\schStrongSelGame{\Omega_X}{\Omega_X}\) is the Rothberger game played with \(\omega\)-covers, and \(\schStrongSelGame{\mc F_X}{\neg\Omega_X}\) is isomorphic to the \(\Omega\)-finite-open game \(\Omega FO(X)\): \(\plI\) chooses finite subsets of \(X\), \(\plII\) chooses an open neighborhood of each chosen finite set, and \(\plI\) wins if \(\plII\)'s choices are an \(\omega\)-cover. These games were shown to be dual in \cite{2018arXiv180606001C}.
\begin{proposition}
\(\mc T_X\) is a reflection of \(\mc D_X\). \end{proposition} \begin{proof}
For every dense \(D\), the corresponding choice function \(f\in\mathbf C(\mc T_X)\) is simply
the witness that \(f(U)\in U\cap D\). \end{proof}
\begin{corollary}
\(\schStrongSelGame{\mc D_X}{\mc B}\) and \(\schStrongSelGame{\mc T_X}{\neg\mc B}\) are perfect-information
and Markov-information dual. \end{corollary}
In the case that \(\mc B=\Omega_{X,x}\) for some \(x\in X\), \(\schStrongSelGame{\mc D_X}{\Omega_{X,x}}\) is the strong countable dense fan-tightness game at \(x\), see e.g. \cite{MR2678950}. \(\schStrongSelGame{\mc T_X}{\neg\Omega_{X,x}}\) is the game \(CL(X,x)\) first studied by Tkachuk in \cite{tkachukTwoPointGame}. Tkachuk showed in that paper that these games are perfect-information dual; Clontz and Holshouser previously showed these were Markov-information dual in the case that \(X=C_p(Y)\) \cite{2018arXiv180606001C}.
In the case that \(\mc B=\mc D_X\), \(\schStrongSelGame{\mc D_X}{\mc D_X}\) is the strong selective separability game introduced in \cite{MR1711901}, and \(\schStrongSelGame{\mc T_X}{\neg\mc D_X}\) is the point-picking game of Berner and Juh\'asz defined in \cite{MR775687}. Scheepers showed that these were perfect-information dual in his paper.
\begin{proposition}
\(\mc T_{X,x}\) is a reflection of \(\Omega_{X,x}\). \end{proposition} \begin{proof}
For every set \(Y\) with limit point \(x\), the corresponding choice function \(f\in\mathbf C(\mc T_{X,x})\) is simply
the witness that \(f(U)\in U\cap Y\). \end{proof}
\begin{corollary}
\(\schStrongSelGame{\Omega_{X,x}}{\mc B}\) and \(\schStrongSelGame{\mc T_{X,x}}{\neg\mc B}\) are perfect-information
and Markov-information dual. \end{corollary}
In the case that \(\mc B=\Gamma_{X,x}\) for some \(x\in X\), \(\schStrongSelGame{\mc T_{X,x}}{\neg\Gamma_{X,x}}\) is Gruenhage's \(W\) game \cite{MR0413049}. Its dual \(\schStrongSelGame{\Omega_{X,x}}{\Gamma_{X,x}}\) characterizes the strong Fr\'echet-Urysohn property \(\plI\notprewin\schStrongSelGame{\Omega_{X,x}}{\Gamma_{X,x}}\) at \(x\), which is now seen to be equivalent to \(\plII\notmarkwin\schStrongSelGame{\mc T_{X,x}}{\neg\Gamma_{X,x}}\). This allows us to obtain the following result.
\begin{corollary}
\(\plI\notprewin\schStrongSelGame{\Omega_{X,x}}{\Gamma_{X,x}}\)
if and only if
\(\plI\notwin\schStrongSelGame{\Omega_{X,x}}{\Gamma_{X,x}}\). \end{corollary} \begin{proof}
As shown in \cite{MR510910},
a space is \(w\) at \(x\), that is, \(\plII\notwin\schStrongSelGame{\mc T_{X,x}}{\neg\Gamma_{X,x}}\)
if and only if \(\plI\notprewin\schStrongSelGame{\Omega_{X,x}}{\Gamma_{X,x}}\) for all \(x\in X\). \end{proof}
For \(\mc B=\Omega_{X,x}\), \(\schStrongSelGame{\mc T_{X,x}}{\neg\Omega_{X,x}}\) is the variant of Gruenhage's \(W\) game for clustering. This game is now seen to be dual to the strong countable fan tightness game \(\schStrongSelGame{\Omega_{X,x}}{\Omega_{X,x}}\) at \(x\).
\section{Open Questions}
\begin{question}
Does there exist a natural reflection for \(\Gamma_{X,x}\)
or \(\Gamma_X=\{\mc U\subseteq\mc T_X:\forall x\in X(\mc U\setminus\mc T_{X,x}\in[\mc T_X]^{<\aleph_0})\}\)? \end{question}
\begin{question}
Can these results be extended to \(\schSelGame{\mc A}{\mc B}\)? \end{question}
\end{document}
\begin{document}
\title{\begin{LARGE}\textbf{Quantum stabilizer codes from Abelian and non-Abelian groups association schemes
}\end{LARGE}} \date{20 July 2014} \author{\textbf{A. Naghipour$^{1,2}$}\footnote{{\it Electronic addresses:} [email protected] and a\[email protected]} \textbf{
M. A. Jafarizadeh$^{3}$} \footnote{{\it Electronic addresses:} [email protected] and [email protected]}
\textbf{
S. Shahmorad$^{2}$} \footnote{{\it Electronic address:} [email protected]}\\
[5pt] {\it $^{1}$Department of Computer Engineering, University College of Nabi Akram,}\\ {\it No. 1283 Rah Ahan Street,
Tabriz, Iran}\\
[2mm]
{\it $^{2}$Department of Applied Mathematics, Faculty of Mathematical Sciences, University of Tabriz,}\\ {\it 29 Bahman Boulevard, Tabriz, Iran }\\ [2mm]
{\it $^{3}$Department of Theoretical Physics and Astrophysics, Faculty of Physics, University of Tabriz,}\\ {\it 29 Bahman Boulevard, Tabriz, Iran }}
\date{27 September 2014}
\maketitle
\leftskip=0pt \hrule\vskip 8pt \begin{small} \hspace{-.8cm} {\bfseries Abstract}\\\\ A new method for the construction of binary quantum stabilizer codes is provided, where the construction is based on Abelian and non-Abelian group association schemes. The association schemes based on non-Abelian groups are constructed from bases for the regular representation of the $U_{6n}$, $T_{4n}$, $V_{8n}$ and dihedral $D_{2n}$ groups. By using Abelian group association schemes arising from cyclic groups, as well as non-Abelian group association schemes, a list of binary stabilizer codes up to $40$ qubits is given in tables $4$, $5$, and $10$. Moreover, several binary stabilizer codes of distances $5$ and $7$ with good quantum parameters are presented. The advantage of this method, especially for Abelian group association schemes, is that one can construct binary quantum stabilizer codes with any distance by using the commutative structure of association schemes. \\
\\ {\bf Keywords:} Stabilizer codes; Association schemes; Adjacency matrices; Cyclic groups; Quantum Hamming bound; Optimal stabilizer codes \parindent 1em \end{small} \vskip 10pt\hrule
\section{\hspace*{-.5cm}Introduction}
An important class of quantum codes is the class of stabilizer codes, first introduced by Gottesman [1]. These codes are useful for building quantum fault-tolerant circuits. The stabilizer formalism encompasses a large class of well-known quantum codes, including the Shor $9$-qubit code [6], CSS codes [7], and the toric code [3]. For stabilizer codes, the error syndrome is identified by measuring the generators of the stabilizer group. Several methods for constructing good families of quantum codes have been proposed by numerous authors over recent years. In [8]-[12] many binary quantum codes have been constructed by using classical error-correcting codes, such as Reed-Solomon codes, Reed-Muller codes, and algebraic-geometric codes. The theory was later extended to the nonbinary case, and the authors of [13]-[15] introduced nonbinary quantum codes for fault-tolerant quantum computation. Several new families of quantum codes, such as convolutional quantum codes and subsystem quantum codes, have been studied through algebraic and geometric tools, and the stabilizer method has been extended to these variations of quantum codes [16], [17]. \\ \hspace*{0.5cm} Wang et al. [21] studied the construction of nonadditive AQCs as well as constructions of asymptotically good AQCs derived from algebraic-geometry codes. Wang and Zhu [22] presented the construction of optimal AQCs. Ezerman et al. [23] presented so-called CSS-like constructions based on pairs of nested subfield linear codes. They also employed nested codes (such as BCH codes, circulant codes, etc.) over $\mathbb{F}_{4}$ to construct AQCs in their earlier work [24]. Asymmetry was introduced into topological quantum codes in [25]. Leslie [26] presented a new type of sparse CSS quantum error-correcting code based on the homology of hypermaps. The authors of [27] studied the construction of AQCs using a combination of BCH and finite geometry LDPC codes. Various constructions of new AQCs have been studied in [28], [29]. The dominant underlying theme of this work is the construction of good binary quantum stabilizer codes of distance $3$ and higher, i.e., codes with good quantum parameters, based on Abelian and non-Abelian group association schemes. Using Abelian and non-Abelian group association schemes, we obtain many binary quantum stabilizer codes. \\ \hspace*{0.5cm} An association scheme is a combinatorial object with useful algebraic properties (see [30] for an accessible introduction). These algebraic properties enable one to employ association schemes in algorithmic applications such as the shifted quadratic character problem [31]. A $d$-class symmetric association scheme ($d$ is called the diameter of the scheme) has $d+1$ symmetric relations $R_i$ which satisfy some particular conditions. Each non-diagonal relation $R_i$ can be thought of as the network $(V,R_i)$, which we will refer to as the underlying graph of the association scheme ($V$ is the vertex set of the association scheme, which is considered as the vertex set of the underlying graph). In [32], [33] algebraic properties of association schemes have been employed in order to evaluate the effective resistances in finite resistor networks, where the relations of the corresponding schemes define the kinds of resistances or conductances between any two nodes of the networks.
In [34], a dynamical system with $d$ different couplings has been investigated in which the relationships between the dynamical elements (couplings) are given by the relations between the vertices according to the corresponding association schemes. Indeed, according to the relations $R_i$, the so-called adjacency matrices $A_i$ are defined, which form a commutative algebra known as the Bose-Mesner (BM) algebra. Group association schemes are particular schemes in which the vertices belong to a finite group and the relations are defined based on the conjugacy classes of the corresponding group. Working with these schemes is relatively easy, since almost all of the needed information about the scheme can be obtained from the corresponding group. We will employ the commutative structure of association schemes in order to construct binary quantum stabilizer codes, in terms of the parameters of the corresponding association schemes such as the valencies of the adjacency matrices $A_i$ for $i=1,...,d$. As will be explained in Section 3, in order to construct a binary quantum stabilizer code, one needs a binary matrix $A=(A_1 \vert A_2)$ such that, by removing an arbitrary row or rows from $A$, one can obtain $n-k$ independent generators. After finding the code distance from the $n-k$ independent generators, one can then determine the parameters of the associated code. \\
\hspace*{0.5cm} The organization of the paper is as follows. In
section 2, we give preliminaries such as quantum stabilizer codes,
association schemes, group association schemes, finite Abelian
groups and finite non-Abelian groups. Section 3 is devoted to the construction of binary quantum
stabilizer codes based on Abelian group association schemes. In section 4,
we construct the binary quantum stabilizer codes based on non-Abelian group association schemes.
The paper ends with a brief conclusion.
\\
\section{\hspace*{-.5cm} Preliminaries } In this section, we give some preliminaries such as quantum codes and association schemes used through the paper. \\
\subsection{\hspace*{-.5cm} Quantum stabilizer codes} We recall the notion of quantum stabilizer codes. For material not covered in this subsection, as well as more detailed information about quantum error correcting codes, please refer to [20], [36]. We employ binary quantum error correcting codes (QECCs) defined on the complex Hilbert space $\mathcal{H}_{2}^{\otimes n}$, where $\mathcal{H}_{2}$ is the complex Hilbert space of a single qubit $\alpha \vert 0 \rangle + \beta \vert 1 \rangle$ with $\alpha , \beta \in \mathbb{C}$ and $ \vert \alpha \vert^{2} + \vert \beta \vert^{2}=1$. The fundamental element of the stabilizer formalism is the Pauli group $\mathcal{G}_n$ on $n$ qubits. The Pauli group for one qubit is defined to consist of all Pauli matrices, together with the multiplicative factors $\pm 1$, $\pm i$: \\ \begin{equation}\label{adm1} \mathcal{G}_1 = \{\pm I, \pm iI, \pm X, \pm iX, \pm Y, \pm iY, \pm Z, \pm iZ\} \end{equation} \\ where $X , Y$ and $Z$ are the usual Pauli matrices and $I$ is the identity matrix. The set of matrices $\mathcal{G}_1$ forms a group under the operation of matrix multiplication. In general, the group $\mathcal{G}_n$ consists of all tensor products of Pauli matrices on $n$ qubits, again with multiplicative factors $\pm 1$, $\pm i$. \\ \hspace*{0.5cm} Suppose $S$ is a subgroup of $\mathcal{G}_n$ and define $V_S$ to be the set of $n$-qubit states which are fixed by every element of $S$. Then $V_S$ is the vector space \textit{stabilized} by $S$, and $S$ is said to be the \textit{stabilizer} of the space $V_S$. \\ Consider the stabilizer $ S=\langle g_1 , ... , g_l \rangle$. The check matrix corresponding to $S$ is an $l \times 2n$ matrix whose rows correspond to the generators $g_1$ through $g_l$; the left-hand side of the matrix contains $1$s to indicate which generators contain $X$s, and the right-hand side contains $1$s to indicate which generators contain $Z$s; the presence of a $1$ on both sides indicates a $Y$ in the generator. The $i$-th row of the check matrix is constructed as follows: If $g_i$ contains $I$ on the $j$-th qubit then the matrix contains 0 in the $j$-th and $(n+j)$-th columns. If $g_i$ contains an $X$ on the $j$-th qubit then the element in the $j$-th column is 1 and that in the $(n+j)$-th column is 0. If it contains $Z$ on the $j$-th qubit then the $j$-th column contains 0 and the $(n+j)$-th column contains 1. Finally, if $g_i$ contains the operator $Y$ on the $j$-th qubit then both the $j$-th and $(n+j)$-th columns contain 1. \\ The check matrix does not contain any information about the overall multiplicative factor of $g_i$. We denote by $r(g)$ the row vector representation of an operator $g$ from the check matrix, which contains $2n$ binary elements. Define $\Lambda$ as: \\ \begin{equation} \Lambda = \left[
\begin{array}{cc}
0 & I \\
I & 0 \\
\end{array}
\right]_{2n\times 2n} \hspace*{3cm} \end{equation} \\ where the matrices $I$ on the off-diagonals are $n \times n$. Elements $g$ and $g'$ of the Pauli group are easily seen to commute if and only if \hspace*{1mm}$r(g) \Lambda r(g')^{T}=0$ (mod $2$). Therefore the generators of a stabilizer $S=\langle g_1, ... ,g_l\rangle $ with corresponding check matrix $M$ commute if and only if $M \Lambda M^{T}=0$ (mod $2$). Let $S=\langle g_1, ... ,g_l\rangle $ be such that $-I$ is not an element of $S$. Then the generators $g_i$, $i \in \{1, ... ,l\}$, are independent if and only if the rows of the corresponding check matrix are linearly independent.\\ Suppose $C(S)$ is a stabilizer code with stabilizer $S$. We denote by $N(S)$ the subset of $\mathcal{G}_n$ consisting of all elements $E \in \mathcal{G}_n$ such that $EgE^{\dag}\in S$ for all $g \in S$. The following theorem specifies the correction power of $C(S)$. \\ \\ \textbf{Theorem 2.1.} Let $S$ be the stabilizer for a stabilizer code $C(S)$. Suppose $\{E_j\}$ is a set of operators in $\mathcal{G}_n$ such that $E_{j}^{\dag} E_k \notin N(S) - S$ for all $j$ and $k$. Then $\{E_j\}$ is a correctable set of errors for the code $C(S)$. \\ \\ \textit{Proof}. See [36]. \\ \\ \hspace*{0.5cm} Theorem 2.1 motivates the definition of a notion of \textit{distance} for a quantum code in analogy to the distance for a classical code. The \textit{weight} of an error $E\in\mathcal{G}_n$ is defined to be the number of terms in the tensor product which are not equal to the identity. For example, the weight of $X_1 Z_4 Y_8$ is three. The distance of a stabilizer code $C(S)$ is given by the minimum weight of an element of $N(S)-S$. In terms of binary vector pairs $\textbf{(a,b)}$, this is equivalent to the minimum weight of the bitwise OR of $\textbf{a}$ and $\textbf{b}$ over all pairs satisfying the symplectic orthogonality condition, \\ \begin{equation}
A_1 \textbf{b} + A_2 \textbf{a}=0,
\hspace*{3cm} \end{equation} \\ which are not linear combinations of the rows of the binary check matrix $ A= ( A_1 \vert A_2 )$. \\ \hspace*{0.5cm} A $2$-ary quantum stabilizer code $\mathcal{Q}$, denoted by $[[n,k,d]]_{2}$, is a $2^{k}$-dimensional subspace of the Hilbert space $\mathcal{H}^{\otimes n}_{2}$ stabilized by an Abelian stabilizer group $\mathcal{S}$, which does not contain the operator $-I$ [6], and can correct all errors of weight up to $\lfloor\frac{d-1}{2}\rfloor$. Explicitly, \\ \begin{equation}
\mathcal{Q}=\{\vert \psi \rangle: s \vert \psi \rangle = \vert \psi \rangle, \hspace*{1mm}\forall s \in \mathcal{S}\}.
\hspace*{3cm} \end{equation} \\ This code encodes $k$ logical qubits into $n$ physical qubits. The rate of such a code is $\frac{k}{n}$. Since the code space has dimension $2^{k}$, we can encode $k$ qubits into it. The stabilizer $\mathcal{S}$ has a minimal representation in terms of $n-k$ independent generators $\{ g_1, ... , g_{n-k}\hspace*{1mm} \vert \hspace*{1mm}\forall i \in \{1, ... , n-k\}, \hspace*{1mm}g_i \in \mathcal{S}\}$. The generators are independent in the sense that none of them is a product of the others (up to a global phase). \\ \\ \hspace*{0.5cm} As in classical coding theory, there are two bounds which have been established as necessary conditions for quantum codes. \\ \\ \textbf{Lemma 2.2} (quantum Hamming bound for the binary case). For any pure quantum stabilizer code $[[n,k,d]]_{2}$, we have the following inequality \begin{equation} \sum_{j=0}^{[\frac{d-1}{2}]}\binom{n}{j}3^j2^k\leq2^n. \end{equation} \\ \textit{Proof}. See [5]. \\ \\ For any pure quantum stabilizer code with distance $3$, the quantum Hamming bound can be written as \begin{equation}
n-k\geq \lceil \log_{2}(3n+1)\rceil. \end{equation} It is also satisfied for degenerate codes of distances $3$ and $5$ [1]. \\ \\ \textbf{Lemma 2.3} (quantum Knill-Laflamme bound). For any quantum stabilizer code $[[n,k,d]]_{q}$, we have \begin{equation} n\geq k+2d-2. \end{equation} \\ \textit{Proof}. See [2]. \\ \\ A quantum stabilizer code is optimal in the sense that its $k$, for fixed $n$ and $d$, is the largest possible.
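\hspace*{0.5cm} As a small illustration of the check-matrix formalism recalled above (this sketch is ours and is not part of the construction developed in the later sections; the function and variable names, and the choice of the well-known five-qubit code, are purely illustrative), the following Python fragment builds the binary check matrix $A=(A_1\vert A_2)$ from a list of stabilizer generators, verifies the symplectic commutation condition $M\Lambda M^{T}=0$ (mod $2$), and checks the quantum Hamming bound of Lemma 2.2 together with its distance-$3$ form for the $[[5,1,3]]_{2}$ parameters.
\begin{verbatim}
import numpy as np
from math import comb, ceil, log2

def check_matrix(generators):
    # Binary check matrix (A1 | A2): A1 marks X-components, A2 marks Z-components.
    rows = []
    for g in generators:
        x = [1 if p in "XY" else 0 for p in g]   # X or Y -> 1 in the left block
        z = [1 if p in "ZY" else 0 for p in g]   # Z or Y -> 1 in the right block
        rows.append(x + z)
    return np.array(rows, dtype=int)

# Stabilizer generators of the five-qubit code [[5,1,3]]_2 (cyclic shifts of XZZXI).
gens = ["XZZXI", "IXZZX", "XIXZZ", "ZXIXZ"]
M = check_matrix(gens)
n, k, d = 5, 1, 3

# Symplectic form Lambda; the generators commute iff M Lambda M^T = 0 (mod 2).
Z0, I0 = np.zeros((n, n), dtype=int), np.eye(n, dtype=int)
Lam = np.block([[Z0, I0], [I0, Z0]])
assert not ((M @ Lam @ M.T) % 2).any()

# Quantum Hamming bound and its distance-3 specialization n - k >= ceil(log2(3n+1)).
assert sum(comb(n, j) * 3**j for j in range((d - 1) // 2 + 1)) * 2**k <= 2**n
assert n - k >= ceil(log2(3 * n + 1))
\end{verbatim}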
\subsection{\hspace*{-.5cm} Association schemes} The theory of association schemes has its origin in the design of statistical experiments [18] and in the study of groups acting on finite sets [35]. Besides, association schemes are used in coding theory [19], design theory and graph theory. One of the important advantages of association schemes is their useful algebraic structure, which enables one to find the spectrum of the adjacency matrices relatively easily; then, for different physical purposes, one can define particular spin Hamiltonians which can be written in terms of the adjacency matrices of an association scheme so that the corresponding spectra can be determined easily. The reader is referred to [4] for further information on association schemes. \\ \\ \textbf{Definition \hspace*{1mm}2.2.1.} A $d$-class association scheme $\Omega$ on a finite set $V$ is an ordered set $\{R_0,R_1, ... ,R_d\}$ of relations on the set $V$ which satisfies the following axioms: \\ \\ (1)\hspace*{1mm}$\{R_0,R_1, ... ,R_d\}$ is a partition of $V\times V$. \\ \\ (2) $R_0$ is the identity relation, i.e., $(x,y)\in R_0$ if and only if $x=y$, whenever $x,y \in V$. \\ \\ (3) Every relation $R_i$ is symmetric, i.e., if $(x,y) \in R_i$ then also $(y,x) \in R_i$, for every $x,y \in V$. \\ \\ (4) Let $0\leq i,j,l \leq d$ and let $x,y \in V$ be such that $(x,y) \in R_l$; then the number \\ $$ p_{ij}^{l}= \vert \{z \in V : (x,z) \in R_i \hspace*{1mm}\textrm{and}\hspace*{1mm} (z,y) \in R_j \}\vert$$ \\ only depends on $i,j$ and $l$. \\ \\ The relations $R_0,R_1, ... ,R_d$ are called the associate classes of the scheme; two elements $x,y \in V$ are $i$-th associates if $(x,y) \in R_i$. The numbers $p^{l}_{ij}$ are called the \textit{intersection numbers} of $\Omega$. If \\ \\ $(3)^{'} \hspace*{3mm} R_i^{t}=R_i \hspace*{3mm}\textrm{for} \hspace*{3mm}0\leq i\leq d,\hspace*{3mm} \textrm{where}\hspace*{3mm} R_{i}^{t}=\{(\beta,\alpha): (\alpha,\beta) \in R_i\}$ \\ \\ then the corresponding association scheme is called symmetric. Further, if $p^{l}_{ij}=p^{l}_{ji}$ for all $ 0\leq i,j,l \leq d$, then $\Omega =(V,\{R_i\}_{0\leq i \leq d})$ is called commutative. Let $\Omega$ be a commutative symmetric association scheme of class $d$; then the matrices $A_0,A_1,...,A_d$ defined by \\ \begin{equation} (A_{i})_{\alpha , \beta}= \left\{
\begin{array}{ll}
1 & \hbox{if}\hspace*{2mm}(\alpha , \beta) \in R_i, \\
0 & \hbox{otherwise}
\end{array}
\right. \hspace*{4cm} \end{equation} \\ are the adjacency matrices of $\Omega$ and are such that \\ \begin{equation} A_i A_j = \sum_{l=0}^{d}p_{ij}^{l}A_l. \hspace*{4cm} \end{equation} \\ From (2.9), it is seen that the adjacency matrices $A_0,A_1, ... ,A_d$ form a basis for a commutative algebra $\mathbf{A}$ known as the Bose-Mesner algebra of $\Omega$. This algebra has a second basis $E_0, ... ,E_d$ of primitive idempotents, \\ \begin{equation} E_0 =\frac{1}{\nu}J,\hspace*{3mm}E_i E_j=\delta_{ij} E_i,\hspace*{3mm}\sum_{i=0}^{d}E_i = I,
\hspace*{3cm} \end{equation} \\ where $\nu =\vert V\vert$ and $J$ is the $\nu \times \nu$ all-one matrix in $\mathbf{A}$. In terms of the adjacency matrices $A_0,A_1, ... ,A_d$ the four defining axioms of a $d$-class association scheme translate to the following four statements [39]: \\ \begin{equation} \sum_{l=0}^{d}A_l=J,\hspace*{3mm}A_0=I,\hspace*{3mm}A_i =A_i^{T} \hspace*{3mm}\textrm{and} \hspace*{3mm}A_i A_j =\sum_{l=0}^{d}p_{ij}^{l}A_l.
\hspace*{3cm} \end{equation} \\ \\ with $0\leq i,j \leq d$ and where $I$ denotes the $\nu \times \nu$ identity matrix and $A^{T}$ is the transpose of $A$. Denote the cycle graph with $\nu$ vertices by $C_\nu$. It can be easily seen that, for an even number of vertices $\nu=2m$, the adjacency matrices are given by \\ \begin{equation} A_i = S^{i}+ S^{-i},\hspace*{3mm} i=1,2, ... ,m-1, \hspace*{3mm}A_m=S^{m},
\hspace*{3cm} \end{equation} \\ where $S$ is a $\nu \times \nu$ circulant matrix with period $\nu$ ($S^{\nu}= I_\nu$) defined as follows: \\ \begin{equation} S=\left(
\begin{array}{ccccccc}
0 & 0 & 0 & \ldots & 0 & 0 & 1 \\
1 & 0 & 0 & \ldots & 0 & 0 & 0 \\
0 & 1 & 0 & \ldots & 0 & 0 & 0 \\
\vdots\\
0 & 0 & 0 & \ldots & 1 & 0 & 0 \\
0 & 0 & 0 & \ldots & 0 & 1 & 0 \\
\end{array}
\right). \hspace*{3cm} \end{equation} \\ For an odd number of vertices $\nu =2m+1$, we have \\ \begin{equation} A_i = S^{i}+ S^{-i},\hspace*{3mm} i=1,2, ... ,m-1,m. \hspace*{3cm} \end{equation} \\ One can easily check that the adjacency matrices in (2.12) together with $A_0=I_{2m}$ (and also the adjacency matrices in (2.14) together with $A_0=I_{2m+1}$) form a commutative algebra. \\
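\hspace*{0.5cm} As a small numerical illustration (again a sketch of ours, not part of the text; the function and variable names are arbitrary), the following Python fragment builds the circulant shift $S$ and the adjacency matrices in (2.12) of the cycle graph $C_\nu$ for an even number of vertices, and verifies numerically that the relations partition $V\times V$, that each $A_i$ is symmetric, and that the resulting Bose-Mesner algebra is commutative.
\begin{verbatim}
import numpy as np

def cycle_scheme(nu):
    # Adjacency matrices A_0, ..., A_m of the cycle graph C_nu for even nu = 2m.
    m = nu // 2
    S = np.roll(np.eye(nu, dtype=int), 1, axis=0)         # circulant shift, S^nu = I
    A = [np.eye(nu, dtype=int)]                            # A_0 = I
    A += [np.linalg.matrix_power(S, i) + np.linalg.matrix_power(S, nu - i)
          for i in range(1, m)]                            # A_i = S^i + S^{-i}
    A.append(np.linalg.matrix_power(S, m))                 # A_m = S^m
    return A

A = cycle_scheme(8)
J = np.ones((8, 8), dtype=int)
assert np.array_equal(sum(A), J)                           # the relations partition V x V
assert all(np.array_equal(Ai, Ai.T) for Ai in A)           # each A_i is symmetric
assert all(np.array_equal(Ai @ Aj, Aj @ Ai)                # the algebra is commutative
           for Ai in A for Aj in A)
\end{verbatim}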
\subsection{\hspace*{-.3cm}Group association schemes} In order to construct quantum stabilizer codes, we need to study the group association schemes. Group association schemes are particular association schemes for which the vertex set contains elements of a finite group $G$ and the relations $R_i$ are defined by \begin{equation} R_i=\{(\alpha,\beta):\alpha \beta^{-1} \in C_i\},
\hspace*{3cm} \end{equation} \\ where $C_{0}=\{e\},C_{1},\ldots,C_{d} $ are the conjugacy classes of $G$. Then $\Omega = (G,\{R_i\}_{0\leq i \leq d})$ becomes a commutative association scheme and it is called the group association scheme of the finite group $G$. It is easy to show that the $\textit{i}$\hspace*{0.3mm}th adjacency matrix is a summation over the elements of the $\textit{i}$\hspace*{0.3mm}th conjugacy class in the regular representation. In fact, by the action of $\bar{C}_{i}:=\Sigma_{g\in C_i}{g}$ ($\bar{C}_i$ is called the $\textit{i}$\hspace*{0.3mm}th $\textit{class sum}$) on group elements in the regular representation we observe that $\forall \alpha , \beta , (\bar{C}_i)_{\alpha\beta}= (A_i)_{\alpha\beta}$, so \\ \begin{equation} A_i =\bar{C}_i = \sum_{g \in C_i}g,
\hspace*{3cm} \end{equation} \\ Thus due to (2.9), \begin{equation} \bar{C}_i \bar{C}_j= \sum^{d}_{l=0}p^{l}_{ij} \bar{C}_l,
\hspace*{3cm} \end{equation} \\ Moreover, the intersection numbers $p^{l}_{ij}$, $i,j,l =0,1,...,d$, are given by [38] \\ \begin{equation} p^{l}_{ij}= \frac{\vert C_i \vert \vert C_j \vert}{\vert G \vert} \sum^{d}_{m=0} \frac{\chi_m (g_i)\chi_m (g_j)\overline{\chi_m(g_l)}}{\chi_m (1)},
\hspace*{3cm} \end{equation} where $n:= \vert G \vert $ is the total number of vertices. \\
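\hspace*{0.5cm} For a concrete non-Abelian example (once more a sketch of ours, not part of the constructions of Sections 3 and 4; all names are illustrative), the following Python fragment builds the left regular representation of the symmetric group $S_3$, forms the class sums $\bar{C}_i$, and verifies that they commute and that each product $\bar{C}_i\bar{C}_j$ decomposes over the class sums with nonnegative integer coefficients, recovering the intersection numbers $p^{l}_{ij}$ along the way.
\begin{verbatim}
import itertools
import numpy as np

# Elements of S_3 as permutations of {0,1,2}; composition (p*q)(x) = p(q(x)).
elems = list(itertools.permutations(range(3)))
idx = {g: i for i, g in enumerate(elems)}
compose = lambda p, q: tuple(p[q[x]] for x in range(3))

def reg(g):
    # Left regular representation: reg(g) maps the basis vector e_h to e_{gh}.
    M = np.zeros((6, 6), dtype=int)
    for h in elems:
        M[idx[compose(g, h)], idx[h]] = 1
    return M

# Conjugacy classes of S_3, grouped by number of fixed points
# (identity, the three transpositions, the two 3-cycles).
fixed = lambda g: sum(g[x] == x for x in range(3))
classes = [[g for g in elems if fixed(g) == f] for f in (3, 1, 0)]

# Class sums = adjacency matrices A_i of the group association scheme.
A = [sum(reg(g) for g in C) for C in classes]
e = tuple(range(3))                                   # identity element

for i, Ai in enumerate(A):
    for j, Aj in enumerate(A):
        P = Ai @ Aj
        assert np.array_equal(P, Aj @ Ai)             # class sums commute
        p = [P[idx[C[0]], idx[e]] for C in classes]   # intersection numbers p_ij^l
        assert np.array_equal(P, sum(p[l] * A[l] for l in range(3)))
\end{verbatim}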
\subsection{\hspace*{-.3cm}Finite Abelian groups} The classification of finite groups is extremely difficult, but the classification of finite Abelian groups is not so difficult. It turns out that a finite Abelian group is isomorphic to a product of cyclic groups, and there's a certain uniqueness to this representation. \\
\subsubsection{\hspace*{-.3cm}Cyclic groups and subgroups} Let $G$ be a group and $a\in G$. The subset \begin{equation} \langle a \rangle = \{a^{n}\vert n \in \mathbb{Z}\} \end{equation} \\ is a subgroup of $G$. It is called a $\textit{cyclic subgroup}$ of $G$, or the subgroup $\textit{generated}$ by $a$. If $G=\langle a \rangle$ for some $a\in G$ then we call $G$ a cyclic group. \\ \\ The \textit{order} of an element $a$ in a group is the least positive integer $n$ such that $a^{n}=1$. It is denoted ord $a$. \\ \\ We will often denote the abstract cyclic group of order $n$ by $C_n = \{1,a,a^{2}, ... , a^{n-1}\}$ when the operation is written multiplicatively. It is isomorphic to the underlying additive group of the ring $\mathbb{Z}_n$, where an isomorphism $f:\mathbb{Z}_n \rightarrow C_n$ is defined by $f(k)=a^{k}$. \\ \\ $\hspace*{3mm}$Note that cyclic groups are all Abelian, since $a^{n} a^{m}=a^{m+n}= a^{m}a^{n}$. The integers $\mathbb{Z}$ under addition form an infinite cyclic group, while $\mathbb{Z}_n$, the integers modulo $n$, is a finite cyclic group of order $n$. Every cyclic group is isomorphic either to $\mathbb{Z}$ or to $\mathbb{Z}_n$ for some $n$. \\
\subsubsection{\hspace*{-.3cm}Product of groups} Using multiplicative notation, if $G$ and $H$ are two groups then $G \times H$ is a group where the product operation $(a,b)(c,d)$ is defined by $(ac,bd)$, for all $a,c\in G$ and all $b,d\in H$. \\ \\ The product of two Abelian groups is also called their \textit{direct sum}, denoted $G \oplus H$. Every cyclic group of order $n$ is given by the modular integers $\mathbb{Z}_n$ under addition mod $n$. Hence, to illustrate, an Abelian group of order 1200 may actually be isomorphic to, say, the group $\mathbb{Z}_{40}\times \mathbb{Z}_{6}\times \mathbb{Z}_{5}$. Furthermore, the Chinese remainder theorem, as we will see, says that if $m$ and $n$ are relatively prime, then $\mathbb{Z}_{_{mn}} \cong \mathbb{Z}_{_{m}} \times \mathbb{Z}_{_{n}}$. In the preceding example, we may then replace $\mathbb{Z}_{_{40}}$ by $\mathbb{Z}_{_{2^{3}}}\times \mathbb{Z}_{5}$, and $\mathbb{Z}_{6}$ by $\mathbb{Z}_{2} \times \mathbb{Z}_{3}$. Therefore, we will state the fundamental theorem like this: every finite Abelian group is the product of cyclic groups of prime power orders. The collection of these cyclic groups is determined uniquely by the group $G$.
\\ \\ \textbf{Theorem 2.4} (Chinese remainder theorem for groups). Suppose that $n=km$ where $k$ and $m$ are relatively prime. Then the cyclic group $C_n$ is isomorphic to $C_k \times C_m$. More generally, if $n$ is the product $k_1 \cdots k_r$ where the factors are pairwise relatively prime, then \begin{equation} C_n \cong C_{k_{1}} \times ... \times C_{k_{r}}= \prod^{r}_{i=1}C_{k_{_{i}}}.
\hspace*{3cm} \end{equation} \\ In particular, if $n= p^{e_{1}}_{1} ... p^{e_{r}}_{r}$, then the cyclic group $C_n$ factors as the product of the cyclic groups $C_{p_{i}^{e_{i}}}$, that is \\ \begin{equation} C_n \cong \prod^{r}_{i=1} C_{p_{i}^{e_{i}}}.
\hspace*{3cm} \end{equation} \\ \\ \textbf{Theorem 2.5} (Fundamental theorem of finite Abelian groups). Every finite Abelian group is isomorphic to the direct product of a unique collection of cyclic groups, each having a prime power order. \\ \\ \textit{Proof}. See [37]. \\ \\ \hspace*{3mm}For the determination of the number of distinct Abelian groups of order $n$ we need to study the partition function. In number theory, a partition of a positive integer $n$ is a way of writing $n$ as a sum of positive integers. The number of different partitions of $n$ is given by the partition function $p(n)$ [40]. \\ \\ For instance $p(5)=7$; the seven ways of partitioning 5 are
\\ \\ \begin{equation} \begin{array}{c} 5=5 \\ \hspace*{7mm}5=4+1 \\ \hspace*{7mm}5=3+2 \\ \hspace*{14mm}5=3+1+1 \\ \hspace*{14mm}5=2+2+1 \\ \hspace*{21mm}5=2+1+1+1\\ \hspace*{28mm}5=1+1+1+1+1 \end{array}
\hspace*{3cm} \end{equation} \\ \\ So, there are seven Abelian groups of order 32, i.e., \\ \\ \begin{equation} \begin{array}{c}
G_1=\mathbb{Z}_{2^{5}}\\
\hspace*{9mm} G_2=\mathbb{Z}_{2^{4}}\times \mathbb{Z}_{2}\\
\hspace*{10.5mm} G_3=\mathbb{Z}_{2^{3}}\times \mathbb{Z}_{2^{2}} \\
\hspace*{17.7mm}G_4=\mathbb{Z}_{2^{3}}\times \mathbb{Z}_{2}\times \mathbb{Z}_{2} \\
\hspace*{19.2mm}G_5=\mathbb{Z}_{2^{2}}\times \mathbb{Z}_{2^{2}}\times \mathbb{Z}_{2} \\
\hspace*{26.7mm}G_6=\mathbb{Z}_{2^{2}}\times \mathbb{Z}_{2}\times \mathbb{Z}_{2}\times \mathbb{Z}_{2} \\
\hspace*{34mm}G_7=\mathbb{Z}_{2}\times \mathbb{Z}_{2}\times \mathbb{Z}_{2}\times \mathbb{Z}_{2}\times \mathbb{Z}_{2} \end{array} \hspace*{2.2cm} \end{equation} \\ \\ The partition function enables us to express the number of distinct Abelian groups of a given order, as follows. \\ \\
\textbf{Theorem 2.6.} Let $n$ denote a positive integer which factors into distinct prime powers, written $n=\prod p_{k}^{e_{k}}$. Then there are exactly $\prod p(e_k)$ distinct Abelian groups of order $n$. \\ \\ \hspace*{3mm}In particular, when $n$ is square-free, i.e., all $e_k=1$, there is a unique Abelian group of order $n$, given by $\mathbb{Z}_{p_{1}} \times \mathbb{Z}_{p_{2}}\times ... \times \mathbb{Z}_{p_{k}}$, which is just the cyclic group $\mathbb{Z}_{n}$ by the Chinese remainder theorem. \\ \\
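As a small worked example of Theorem 2.6 (ours, for illustration), take $n=72=2^{3}\cdot 3^{2}$: the theorem gives $p(3)\,p(2)=3\cdot 2=6$ distinct Abelian groups of order 72, namely
\[
\mathbb{Z}_{8}\times\mathbb{Z}_{9},\quad \mathbb{Z}_{8}\times\mathbb{Z}_{3}\times\mathbb{Z}_{3},\quad \mathbb{Z}_{4}\times\mathbb{Z}_{2}\times\mathbb{Z}_{9},\quad \mathbb{Z}_{4}\times\mathbb{Z}_{2}\times\mathbb{Z}_{3}\times\mathbb{Z}_{3},\quad \mathbb{Z}_{2}\times\mathbb{Z}_{2}\times\mathbb{Z}_{2}\times\mathbb{Z}_{9},\quad \mathbb{Z}_{2}\times\mathbb{Z}_{2}\times\mathbb{Z}_{2}\times\mathbb{Z}_{3}\times\mathbb{Z}_{3}.
\] \\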
\subsection{\hspace*{-.3cm}Finite non-Abelian groups} A non-Abelian group, also sometimes called a non-commutative group, is a group $(G,*)$ in which there are at least two elements $a$ and $b$ of $G$ such that $a*b\neq b*a$. Non-Abelian groups are pervasive in mathematics and physics. Both discrete groups and continuous groups may be non-Abelian. Most of the interesting Lie groups are non-Abelian, and these play an important role in gauge theory.
\subsubsection{\hspace*{-.3cm}Non-Abelian group $U_{6n}$} The group $U_{6n}$, where $n\geq 1$, is generated by two generators $a$ and $b$ with the following relations: \begin{equation} U_{6n}=\{a,b: a^{2n}=b^3=1, a^{-1}ba=b^{-1}\}.
\hspace*{3cm} \end{equation} The group $U_{6n}$ has $3n$ conjugacy classes, which are given by, for $0\leq r\leq n-1$, \begin{equation} \{a^{2r}\}, \{ba^{2r}, b^2a^{2r}\}, \{a^{2r+1}, ba^{2r+1}, b^2a^{2r+1}\}.
\hspace*{3cm} \end{equation} The number of group elements of $U_{6n}$ is $6n$ and the matrix representations of $[a]$ and $[b]$ with respect to the basis $\mathcal{B}=\{a^j, ba^j, b^2a^j\}$, for $0\leq j\leq 2n-1$, are given by \begin{equation} [a]=\left(
\begin{array}{ccc}
S & 0 & 0 \\
0 & 0 & S \\
0 & S & 0 \\
\end{array}
\right), [b]=\left(
\begin{array}{ccc}
0 & I & 0 \\
0 & 0 & I \\
I & 0 & 0 \\
\end{array}
\right) \end{equation} where $I$ is the $2n \times 2n$ identity matrix and $S$ is a $2n \times 2n$ circulant matrix with period $2n$ ($S^{2n}= I_{2n}$). The adjacency matrices $A_0$,$A_1$,...,$A_{3n-1}$ of this group are given by \begin{align} A_r &=[a]^{2r}, \qquad r=0,1,...,n-1 \nonumber \\ A_{n+r} &=[b][a]^{2r}+[b]^2[a]^{2r}, \qquad r=0,1,...,n-1 \\ A_{2n+r} &=[a]^{2r+1}+[b][a]^{2r+1}+[b]^2[a]^{2r+1}, \qquad r=0,1,...,n-1 \nonumber.
\hspace*{3cm} \end{align} One can easily check that the adjacency matrices in (2.27) form a commutative algebra [4]. \subsubsection{\hspace*{-.3cm}Non-Abelian group $T_{4n}$} The group $T_{4n}$, where $n\geq 1$, is generated by two generators $a$ and $b$ obeying the following relations: \begin{equation} T_{4n}=\{a,b: a^{2n}=1, a^{n}=b^2, b^{-1}ab=a^{-1}\}.
\hspace*{3cm} \end{equation} The group $T_{4n}$ has $n+3$ conjugacy classes, which are given by \begin{equation} \{1\}, \{a^{n}\}, \{a^r, a^{-r}\}(1\leq r\leq n-1), \{ba^{2j}: 0\leq j\leq n-1\}, \{ba^{2j+1}: 0\leq j\leq n-1\}.
\hspace*{3cm} \end{equation} The number of group elements of $T_{4n}$ is $4n$ and the matrix representations of $[a]$ and $[b]$ with respect to the basis $\mathcal{B}=\{a^j, ba^j, b^2a^j, b^3a^j\}$, for $0\leq j\leq n-1$, are given by \begin{equation} [a]=\left(
\begin{array}{cc}
S & 0 \\
0 & S^{-1} \\
\end{array}
\right), [b]=\left(
\begin{array}{cccc}
0 & 0 & I & 0 \\
0 & 0 & 0 & I \\
0 & I & 0 & 0 \\
I & 0 & 0 & 0 \\
\end{array}
\right) \end{equation} where $I$ is the $n \times n$ identity matrix and $S$ is a $2n \times 2n$ circulant matrix with period $2n$ ($S^{2n}= I_{2n}$). The adjacency matrices $A_0$,$A_1$,...,$A_{n+2}$ of this group are given by \begin{align} A_{0} &=I_{4n}, \qquad n=2,3,... \nonumber \\ A_{1} &=[a]^n, \qquad n=2,3,... \nonumber \\ A_{j+1} &=[a]^{j}+[b]^2[a]^{n-j}, \qquad j=1,...,n-1 \nonumber \\ A_{n+1} &=\sum_{j=0}^{\lceil \frac{n}{2}\rceil-1}([b][a]^{2j}+[b]^3[a]^{2j}), \qquad 2j<n \\ A_{n+2} &=\sum_{j=0}^{\lceil \frac{n-1}{2}\rceil-1}([b][a]^{2j+1}+[b]^3[a]^{2j+1}), \qquad 2j+1<n. \nonumber
\hspace*{3cm} \end{align} One can easily prove that the adjacency matrices in (2.31) form a commutative algebra [4]. \subsubsection{\hspace*{-.3cm}Non-Abelian group $V_{8n}$} The group $V_{8n}$, where $n$ is an odd integer [38], is generated by two generators $a$ and $b$ with the following relations: \begin{equation} V_{8n}=\{a,b: a^{2n}=b^4=1, ba=a^{-1}b^{-1}, b^{-1}a=a^{-1}b\}.
\hspace*{3cm} \end{equation} The group $V_{8n}$ has $2n+3$ conjugacy classes, which are given by \begin{align} \{1\}, \{b^2\}, \{a^{2r+1}, b^2a^{-2r-1}\}(0\leq r\leq n-1),\nonumber \\ \{a^{2s}, a^{-2s}\}, \{b^2a^{2s}, b^2a^{-2s}\}(1\leq s\leq \frac{n-1}{2}), \\ \{b^ka^j: j\; \text{even} ,\; k=1,3\}, \; \text{and} \; \{b^ka^j: j\; \text{odd}, \;k=1,3\}\nonumber. \end{align} The number of group elements of $V_{8n}$ is $8n$ and the matrix representations of $[a]$ and $[b]$ with respect to the basis $\mathcal{B}=\{a^j, ba^j, b^2a^j, b^3a^j\}$, for $0\leq j\leq 2n-1$, are given by \begin{equation} [a]=\left(
\begin{array}{cccc}
S & 0 & 0 & 0 \\
0 & 0 & 0 & S^{-1} \\
0 & 0 & S & 0 \\
0 & S^{-1} & 0 & 0 \\
\end{array}
\right), [b]=\left(
\begin{array}{cccc}
0 & I & 0 & 0 \\
0 & 0 & I & 0 \\
0 & 0 & 0 & I \\
I & 0 & 0 & 0 \\
\end{array}
\right) \end{equation} where $I$ is the $2n \times 2n$ identity matrix and $S$ is a $2n \times 2n$ circulant matrix with period $2n$ ($S^{2n}= I_{2n}$). The adjacency matrices $A_0$,$A_1$,...,$A_{2n+2}$ of this group are given by \begin{align} A_0 &=I_{8n}, \nonumber \\ A_{1} &=[b]^2, \nonumber \\ A_{2+j} &=[a]^{2j+1}+[b]^2[a]^{2n-2j-1}, \qquad j=0,1,...,n-1 \nonumber\\ A_{n+1+j} &=[a]^{2j}+[a]^{2n-2j}, \qquad j=1,2,...,\frac{n-1}{2} \nonumber\\ A_{n+1+\frac{n-1}{2}+j} &=[b]^2[a]^{2j}+[b]^2[a]^{2n-2j}, \qquad j=1,2,...,\frac{n-1}{2}\\ A_{2n+1} &=\sum_{j=0}^{n-1}([b][a]^{2j}+[b]^3[a]^{2j}), \nonumber\\ A_{2n+2} &=\sum_{j=0}^{n-1}([b][a]^{2j+1}+[b]^3[a]^{2j+1}). \nonumber
\hspace*{3cm} \end{align} One can easily prove that the adjacency matrices in (2.35) form a commutative algebra [4]. \subsubsection{\hspace*{-.3cm}The dihedral group $D_{2n}$} The dihedral group $G=D_{2n}$ is generated by two generators $a$ and $b$ with the following relations: \begin{equation} D_{2n}=\{a,b: a^{n}=b^2=1, b^{-1}ab=a^{-1}\}
\hspace*{3cm} \end{equation} We consider the case of $n=2m$; the case of odd $n$ can be treated similarly. The dihedral group $G=D_{2n}$ with $n=2m$ has $m+3$ conjugacy classes, which are given by \begin{align} \{1\}, \{a^{r}, a^{-r}\}(1\leq r\leq m-1),\nonumber \\ \{a^{m}\}, \{a^{2j}b\}(0\leq j\leq m-1), \\ \{a^{2j+1}b\}(0\leq j\leq m-1).\nonumber \end{align} The adjacency matrices $A_0$,$A_1$,...,$A_{m+2}$ of this group with $n=2m$ are given by \begin{align} A_{0} &=I_{2n}, \nonumber \\ A_{j} &=I_{2}\otimes (S^{j}+S^{-j}), \qquad j=1,2,...,m-1 \nonumber \\ A_{m} &=I_{2}\otimes S^{m}, \nonumber \\ A_{m+1} &=\sigma_{x}\otimes(\sum_{j=0}^{m-1}S^{2j}), \\ A_{m+2} &=\sigma_{x}\otimes(\sum_{j=0}^{m-1}S^{2j+1}). \nonumber
\hspace*{3cm} \end{align} where $S$ is an $n \times n$ circulant matrix with period $n$ ($S^{n}= I_{n}$) and $\sigma_{x}$ is the Pauli $X$ matrix. Also, the adjacency matrices of this group with $n=2m+1$ are given by \begin{align} A_{0} &=I_{2n}, \nonumber \\ A_{j} &=I_{2}\otimes (S^{j}+(S^{-1})^j), \qquad j=1,2,...,m \\ A_{m+1} &=\sigma_{x}\otimes J_n. \nonumber
\hspace*{3cm} \end{align} where $S$ is an $n \times n$ circulant matrix with period $n$ ($S^{n}= I_{n}$) and $J_{n}$ is the $n \times n$ all-one matrix. One can easily prove that the adjacency matrices in (2.38) and (2.39) form a commutative algebra [4].
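To make the Kronecker-product structure of (2.38) concrete, the following Python/NumPy sketch (our own check; the choice $m=3$, i.e.\ $D_{12}$, is only an example) builds these adjacency matrices and verifies that they commute and sum to the all-one matrix:
\begin{verbatim}
# Minimal sketch of (2.38): adjacency matrices of the group association
# scheme of D_{2n} with n = 2m, built from Kronecker products.
import numpy as np

def shift_matrix(n):
    S = np.zeros((n, n), dtype=int)
    for j in range(n):
        S[(j + 1) % n, j] = 1                          # S has period n (S^n = I)
    return S

def dihedral_adjacency_matrices(m):
    n = 2 * m
    S = shift_matrix(n)
    Sp = lambda k: np.linalg.matrix_power(S, k % n)
    I2 = np.eye(2, dtype=int)
    sx = np.array([[0, 1], [1, 0]])                    # Pauli X
    A = [np.kron(I2, np.eye(n, dtype=int))]                          # A_0
    A += [np.kron(I2, Sp(j) + Sp(n - j)) for j in range(1, m)]       # A_1..A_{m-1}
    A.append(np.kron(I2, Sp(m)))                                     # A_m
    A.append(np.kron(sx, sum(Sp(2 * j) for j in range(m))))          # A_{m+1}
    A.append(np.kron(sx, sum(Sp(2 * j + 1) for j in range(m))))      # A_{m+2}
    return A

A = dihedral_adjacency_matrices(3)                     # D_{12}: n = 6, m = 3
J = np.ones_like(A[0])
assert (sum(A) == J).all()                             # the A_i sum to the all-one matrix
assert all((Ai @ Aj == Aj @ Ai).all() for Ai in A for Aj in A)   # commutative algebra
\end{verbatim}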
\section{\hspace*{-.3cm}Construction of stabilizer codes from Abelian group association schemes} To construct a quantum stabilizer code of length $n$ based on the Abelian group association schemes we need a binary matrix $A=(A_1 \vert A_2)$ which has $2n$ columns and two sets of rows, making up two $n \times n$ binary matrices $A_1$ and $A_2$, such that by removing an arbitrary row or rows from $A$ we can obtain $n-k$ independent generators. After finding the code distance from the $n-k$ independent generators we can then determine the parameters of the associated code. The parameters $[[ n,k,d ]]_{2}$ of the associated quantum stabilizer code are its length $n$, its dimension $k$, and its minimum distance $d$. \\ \\
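The small cases below are worked out by hand; they can also be reproduced mechanically. The following Python sketch (our own illustration, not part of the original construction) checks the commutation condition $B_1B_2^{T}+B_2B_1^{T}=0 \pmod 2$ and evaluates the distance criterion used below (cf.\ (3.5)): $d$ is the minimum weight of the bitwise OR of $(\textbf{a},\textbf{b})$ over all nonzero pairs with $B_1\textbf{b}+B_2\textbf{a}=0 \pmod 2$. The usage lines at the end reproduce the $C_5$ example discussed next.
\begin{verbatim}
# Illustrative sketch: symplectic check and brute-force distance criterion.
import itertools
import numpy as np

def symplectic_ok(B1, B2):
    """All generators of (B1|B2) mutually commute iff B1 B2^T + B2 B1^T = 0 (mod 2)."""
    return not ((B1 @ B2.T + B2 @ B1.T) % 2).any()

def code_distance(B1, B2):
    """Minimum weight of the bitwise OR of (a, b) over all nonzero pairs
    satisfying B1 b + B2 a = 0 (mod 2); brute force, feasible only for small n."""
    n = B1.shape[1]
    best = None
    for bits in itertools.product((0, 1), repeat=2 * n):
        if not any(bits):
            continue
        a, b = np.array(bits[:n]), np.array(bits[n:])
        if ((B1 @ b + B2 @ a) % 2).any():
            continue                                   # does not commute with all generators
        w = int(np.count_nonzero(a | b))
        best = w if best is None else min(best, w)
    return best

# Usage: the C_5 case with B1 = S + S^{-1}, B2 = S^2 + S^{-2}.
S = np.roll(np.eye(5, dtype=int), 1, axis=0)           # 5x5 cyclic shift, S^5 = I
P = np.linalg.matrix_power
B1, B2 = (S + P(S, 4)) % 2, (P(S, 2) + P(S, 3)) % 2
assert symplectic_ok(B1, B2)
print(code_distance(B1, B2))                           # prints 3, matching the [[5,1,3]] code
\end{verbatim}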
Consider the cycle graph $C_\nu$ with $\nu$ vertices, as presented in section 2.2. By setting $m=2$ in view of (2.14), we have \\ \begin{equation} A_0 = I_5,\hspace*{4mm} A_1 = S+S^{-1},\hspace*{4mm} A_2=S^{2} +S^{-2} \hspace*{3cm} \end{equation} \\ where $S$ is a $5\times5$ circulant matrix with period $5$ ($S^5=I_5$) defined as follows: \\ \begin{equation} S=\left(
\begin{array}{ccccc}
0 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
\end{array}
\right) \end{equation} \\ One can see that $A_i$ for $i=1,2\hspace*{1mm}$ are symmetric and $\sum_{i=0}^{2}A_i =J_5$. Also it can be verified that $\{A_i,\hspace*{2mm}i=1,2\}$ is closed under multiplication and therefore the set of matrices $A_0, A_1$ and $A_2$ forms a symmetric association scheme. \\ \\ In view of $A_0, A_1$ and $A_2$ we can write the following cases: \\ \begin{equation} A_0,\hspace*{2mm}A_1,\hspace*{2mm}A_2,\hspace*{2mm}A_0 +A_1,\hspace*{2mm}A_0 +A_2,\hspace*{2mm}A_1 +A_2,\hspace*{2mm}A_0 +A_1 +A_2 \hspace*{3cm} \end{equation} \\ By examining the combinations of 2 cases selected from the set of the above 7 distinct cases, and choosing $B_1=S + S^{-1}$ and $B_2=S^{2} + S^{-2}$, the binary matrix $B=(B_1 \vert B_2)$ is written as \\ \begin{equation} B= \left(
\begin{array}{ccccccccccc}
0 & 1 & 0 & 0 & 1 & | & 0 & 0 & 1 & 1 & 0 \\
1 & 0 & 1 & 0 & 0 & | & 0 & 0 & 0 & 1 & 1 \\
0 & 1 & 0 & 1 & 0 & | & 1 & 0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 & 1 & | & 1 & 1 & 0 & 0 & 0 \\
1 & 0 & 0 & 1 & 0 & | & 0 & 1 & 1 & 0 & 0 \\
\end{array}
\right) \hspace*{3cm} \end{equation} \\ By removing the last row from the binary matrix $B$ we can obtain $n-k=4$ independent generators. The distance $d$ of the quantum code is given by the minimum weight of the bitwise OR of $(\textbf{a},\textbf{b})$ over all pairs satisfying the symplectic orthogonality condition, \\ \begin{equation} B_1 \textbf{b} + B_2 \textbf{a}=0 \hspace*{3cm} \end{equation} \\ Let $\textbf{a}=(x_1,x_2,x_3,x_4,x_5)$ and $\textbf{b}=(y_1,y_2,y_3,y_4,y_5)$. Then by using (3.5), we have \\ \begin{equation} \left\{
\begin{array}{ll}
x_3 + x_4 +y_2 +y_5 =0 \\
x_4 + x_5 +y_1 +y_3 =0 \\
x_1 + x_5 +y_2 +y_4 =0 \\
x_1 + x_2 +y_3 +y_5 =0
\end{array} \right. \hspace*{3cm} \end{equation} \\ By using (3.6) we find that the code distance $d$ equals $3$. Since the number of independent generators is $n-k=4$, we have $k=1$, and thus the optimal $[[ 5,1,3 ]]_{2}$ quantum stabilizer code is constructed. It encodes $k=1$ logical qubit into $n=5$ physical qubits and protects against an arbitrary single-qubit error. Its stabilizer is generated by the $n-k=4$ Pauli operators in table 1. \\ $$
\begin{tabular}{|c|c|}
\hline
Name & Operator\\
\hline
$g_1$ &I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}X \\
$g_2$ & X\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Z\\
$g_3$ & Z\hspace*{2mm}X\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z \\
$g_4$ & Z\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}I\hspace*{2mm}X \\
\hline \end{tabular} $$ \begin{table}[htb] \caption{\small{Stabilizer generators for the $[[5,1,3]]_{2}$ code.}} \label{table:2} \newcommand{\hphantom{$-$}}{\hphantom{$-$}} \newcommand{\cc}[1]{\multicolumn{1}{c}{#1}} \renewcommand{0pc}{0pc} \renewcommand{1.}{1.} \end{table} \\ \\ Similarly to the case $m=2$, we obtain quantum stabilizer codes from $C_\nu$, $\nu=6,7,\ldots$. In the case of $m=3$, from $C_6$ we can write \\ \begin{equation} A_0=I_6,\hspace*{4mm}A_1=S^{1}+S^{-1},\hspace*{4mm}A_2=S^{2}+S^{-2},\hspace*{4mm}A_3=S^{3}
\hspace*{3cm} \end{equation} \\ It can be easily seen that $A_i$ for $i=1,2,3$ are symmetric and $\sum_{i=0}^{3} A_i=J_6$. By choosing $B_1=A_2 +A_3$ and $B_2=A_0 +A_1+A_2$ the binary matrix $B =(B_1 \vert B_2)$ will be in the form \\ \begin{equation} B=\left(
\begin{array}{ccccccccccccc}
0 & 0 & 1 & 1 & 1 & 0 & | & 1 & 1 & 1 & 0 & 1 & 1 \\
0 & 0 & 0 & 1 & 1 & 1 & | & 1 & 1 & 1 & 1 & 0 & 1 \\
1 & 0 & 0 & 0 & 1 & 1 & | & 1 & 1 & 1 & 1 & 1 & 0 \\
1 & 1 & 0 & 0 & 0 & 1 & | & 0 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 0 & 0 & 0 & | & 1 & 0 & 1 & 1 & 1 & 1 \\
0 & 1 & 1 & 1 & 0 & 0 & | & 1 & 1 & 0 & 1 & 1 & 1 \\
\end{array}
\right) \hspace*{3cm} \end{equation} \\ By removing the last row from $B$ and setting up the system of linear equations analogous to the previous case, we obtain $d=3$. Since the number of independent generators is $n-k=5$, the optimal quantum stabilizer code has length $6$ and encodes $k=1$ logical qubit, i.e., $[[6,1,3]]_{2}$ is constructed. This code is generated by the $n-k=5$ independent generators in table $2$. \\ \\ $$
\begin{tabular}{|c|c|}
\hline
Name & Operator\\
\hline
$g_1$ &Z\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}Z \\
$g_2$ & Z\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Y\\
$g_3$ & Y\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}X \\
$g_4$ & X\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}Y \\
$g_5$ & Y\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}Z \\
\hline \end{tabular} $$ \begin{table}[htb] \caption{\small{Stabilizer generators for the $[[ 6,1,3 ]]_{2}$ code.}} \label{table:2} \newcommand{\hphantom{$-$}}{\hphantom{$-$}} \newcommand{\cc}[1]{\multicolumn{1}{c}{#1}} \renewcommand{0pc}{0pc} \renewcommand{1.}{1.} \end{table} \\ \\ To construct a quantum stabilizer code from $C_7$ by using $(2.14)$, we have \\ \begin{equation} A_0=I_7,\hspace*{4mm}A_1=S+S^{-1},\hspace*{4mm}A_2=S^{2}+S^{-2},\hspace*{4mm}A_3=S^{3}+S^{-3}
\hspace*{2cm} \end{equation} \\ One can see that $A_i$ for $i=1,2,3$ are symmetric and $\sum_{i=0}^{3}A_i=J_7$. Also it can be easily shown that $\{A_i,\hspace*{1mm}i=1,2,3\}$ is closed under multiplication and therefore the set of matrices $A_0, ... ,A_3$ forms a symmetric association scheme. By choosing $B_1$ and $B_2$ as follows: \\ \begin{equation} B_1=A_1,\hspace*{4mm}B_2=A_2+A_3
\hspace*{3cm} \end{equation} \\ It can be seen that $B_1 B_2^{T}+ B_2 B_1^{T}=0$, so all generators commute. On the other hand, since \\ \begin{equation} B= \left(
\begin{array}{ccccccccccccccc}
0 & 1 & 0 & 0 & 0 & 0 & 1 & | & 0 & 0 & 1 & 1 & 1 & 1 & 0 \\
1 & 0 & 1 & 0 & 0 & 0 & 0 & | & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\
0 & 1 & 0 & 1 & 0 & 0 & 0 & | & 1 & 0 & 0 & 0 & 1 & 1 & 1 \\
0 & 0 & 1 & 0 & 1 & 0 & 0 & | & 1 & 1 & 0 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 1 & 0 & 1 & 0 & | & 1 & 1 & 1 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 1 & 0 & 1 & | & 1 & 1 & 1 & 1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 1 & 0 & | & 0 & 1 & 1 & 1 & 1 & 0 & 0 \\
\end{array} \right)
\hspace*{3cm} \end{equation} \\ By removing the last row from $B$, and using $(3.5)$, the code distance is $d=3$. Since the number of independent generators is $n-k=6$, we obtain the $[[7,1,3]]_{2}$ quantum stabilizer code. This code is generated by the $6$ independent generators in table $3$. \\ $$
\begin{tabular}{|c|c|}
\hline
Name & Operator\\
\hline
$g_1$ & I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}X\\
$g_2$ & X\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}Z\\
$g_3$ & Z\hspace*{2mm}X\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}Z\\
$g_4$ & Z\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Z\\
$g_5$ & Z\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z\\
$g_6$ & Z\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}I\hspace*{2mm}X\\
\hline \end{tabular} $$ \begin{table}[htb] \caption{\small{Stabilizer generators for the $[[ 7,1,3 ]]_{2}$ code.}} \label{table:2} \newcommand{\hphantom{$-$}}{\hphantom{$-$}} \newcommand{\cc}[1]{\multicolumn{1}{c}{#1}} \renewcommand{0pc}{0pc} \renewcommand{1.}{1.} \end{table} \\ Applying (2.12) and (2.14), we can obtain quantum stabilizer codes from $C_\nu ({\nu}=8,9,...)$. \\ \\ \textbf{Remark.} A list of binary quantum stabilizer codes from $C_\nu ({\nu}=8,9,...)$ is given in tables 4 and 5. The first column shows cyclic groups. The second column shows $B_1$ and $B_2$ in terms of $A_i$, $i=0,1,...,m$. The third column shows the length $n$ of the quantum stabilizer code. The fourth column shows the value of $n-k$. The fifth column shows a list of the quantum stabilizer codes. In these tables $I_{n}$ is the $n\times n$ unit matrix and $X$ is a Pauli matrix. Also, we will sometimes use notation where we omit the tensor signs. For example $A_1I_2I_2$ is shorthand for $A_1\otimes I_2\otimes I_2$. All the optimal quantum stabilizer codes constructed in table $4$, i.e., codes with the largest possible $k$ for fixed $n$ and $d$, have their lengths labeled by $l$; these codes have the best known parameters. The highest rate $\frac{k}{n}$ of $[[n,k,d]]_{2}$ quantum stabilizer codes with minimum distance $d$ is labeled by $u$ in the tables below. \\ \\ $$
\begin{tabular}{|c|p{10.5cm}|c|c|l|}
\hline \hline
Cyclic group & $B_i(i=1,2)$ & $n$ & $n-k$ & $[[ n,k,d ]]_{2}$\\
\hline
$C_{8}$ & $B_1=A_3+A_4$,\hspace*{2mm}$B_2=A_2+A_3$ & $8$ & $6$ & $[[ 8,2,3 ]]_{2}$ \\
$C_{2}\times C_{4} $ & $B_1=I_{2}A_2+XA_1$, \hspace*{2mm}$B_2=I_{2}A_1+XA_1+XA_2$ & $8$ & $6$ & $[[ 8,2,3]]_{2}$ \\
$C_{2}\times C_{2}\times C_{2}$ &
$B_1=I_2I_2X+XI_2I_2+XI_2X+XXX$,\hspace*{2mm}$B_2=I_2I_2X+I_2XI_2+XXI_2+XXX$ & $^{l}8$ & $5$ & $[[ 8,3,3 ]]_{2}$ \\
$C_{9}$ & $B_1=A_1+A_2$,\hspace*{2mm}$B_2=A_2+A_4$ & $9$ & $6$ & $[[ 9,3,3 ]]_{2}$ \\
$C_{3}\times C_{3}$ & $B_1=I_3A_1+SS+S^2S^2$, \hspace*{2mm}$B_2=I_3A_1+SS^2+S^2S$ & $9$ & $6$ & $[[ 9,3,3 ]]_{2}$ \\
$C_{10}$ & $B_1=A_2+A_4+A_5$,\hspace*{2mm}$B_2=A_0+A_2+A_3$ & $10$ & $6$ & $[[ 10,4,3 ]]_{2}$ \\
$C_{10}$ & $B_1=A_4$,\hspace*{2mm}$B_2=A_0+A_3+A_5$ & $10$ & $9$ & $[[ 10,1,4 ]]_{2}$ \\
$C_{11}$ & $B_1=A_1+A_3+A_4+A_5$,\hspace*{2mm}$B_2=A_2+A_5$ & $11$ & $7$ & $[[ 11,4,3 ]]_{2}$ \\
$C_{11}$ & $B_1=A_1+A_4+A_5$,\hspace*{2mm}$B_2=A_2+A_5$ & $11$ & $10$ & $[[ 11,1,5 ]]_{2}$ \\
$C_{12}$ & $B_1=A_2+A_4+A_5+A_6$,\hspace*{2mm}$B_2=A_2+A_3+A_5$ & $^{l}12$ & $6$ & $[[ 12,6,3 ]]_{2}$ \\
$C_{12}$ & $B_1=A_2+A_4+A_5+A_6$,\hspace*{2mm}$B_2=A_2+A_3+A_5+A_6$ & $12$ & $7$ & $[[ 12,5,3 ]]_{2}$ \\
$C_{3}\times C_{4} $ & $B_1=I_{12}+I_3A_1+A_1I_4$, \hspace*{2mm}$B_2=A_1A_1+A_1I_4$ & $12$ & $10$ & $[[ 12,2,3 ]]_{2}$ \\
$C_{3}\times C_{2}\times C_{2}$ & $B_1=A_1I_2I_2+A_1I_2X+A_1XX$, \hspace*{2mm}$B_2=I_3XI_2+I_3XX+A_1I_2X$ & $12$ & $8$ & $[[ 12,4,3 ]]_{2}$ \\
$C_{13}$ & $B_1=A_1+A_3+A_4+A_5$,\hspace*{2mm}$B_2=A_2+A_3+A_5$ & $13$ & $8$ & $[[ 13,5,3 ]]_{2}$ \\
$C_{13}$ & $B_1=A_1+A_3+A_4+A_5$,\hspace*{2mm}$B_2=A_2+A_3+A_5$ & $13$ & $12$ & $[[ 13,1,5 ]]_{2}$ \\
$C_{14}$ & $B_1=A_0+A_3+A_4+A_6+A_7$,\hspace*{2mm}$B_2=A_2+A_3+A_5$ & $14$ & $8$ & $[[ 14,6,3 ]]_{2}$ \\
$C_{14}$ & $B_1=A_0+A_3+A_4+A_6+A_7$,\hspace*{2mm}$B_2=A_2+A_3+A_5$ & $14$ & $11$ & $[[ 14,3,4 ]]_{2}$ \\
$C_{15}$ & $B_1=A_3+A_4+A_6+A_7$,\hspace*{2mm}$B_2=A_1+A_2+A_3+A_5$ & $15$ & $9$ & $[[ 15,6,3 ]]_{2}$ \\
$C_{16}$ & $B_1=A_3+A_4+A_6$,\hspace*{2mm}$B_2=A_2+A_3+A_5$ & $16$ & $11$ & $[[ 16,5,3 ]]_{2}$ \\
$C_{16}$ & $B_1=A_0+A_3+A_4+A_8$,\hspace*{2mm}$B_2=A_0+A_1+A_2+A_5$ & $16$ & $8$ & $[[ 16,8,3 ]]_{2}$ \\
$C_{2}\times C_{8}$ & $B_1=I_2A_2+XA_2+XA_4+I_2A_3+I_2A_4+XA_1$,
\hspace*{2mm}$B_2=I_2A_2+XA_3+I_2A_1+I_2A_3+I_2I_8$ & $16$ & $7$ & $[[ 16,9,3 ]]_{2}$ \\
$C_{2}\times C_{2}\times C_{4}$ & $B_1=I_2I_2A_2+A_1I_2A_1+A_1A_1I_4+I_2A_1I_4+I_2A_1A_1+A_1A_1A_1$,
\hspace*{0.1mm}$B_2=I_2I_2A_2+A_1I_2A_2+I_2I_2A_1+I_2A_1A_2+I_2A_1I_4+I_2A_1A_1+A_1A_1A_1$ & $16$ & $8$ & $[[ 16,8,3 ]]_{2}$ \\
$C_{4}\times C_{4}$ & $B_1=I_4A_1+A_1A_1+A_1A_2+A_2A_2$,
\hspace*{2mm}$B_2=I_4A_2+A_1I_4+A_1A_2+A_2I_4+A_2A_1$ & $16$ & $12$ & $[[ 16,4,3 ]]_{2}$ \\
$C_{2}\times C_{2}\times C_{2}\times C_{2}$ & $B_1=XI_2I_2X+XI_2XX+XXXX+I_2XXX$,
\hspace*{2mm}$B_2=I_2I_2I_2X+I_2I_2XI_2+I_2I_2XX+I_2XI_2X+I_2XXI_2+XI_2I_2X+XXI_2I_2+XXXI_2$ & $16$ & $9$ & $[[ 16,7,3 ]]_{2}$ \\
$C_{17}$ & $B_1=A_3+A_4+A_6+A_7+A_8$,\hspace*{2mm}$B_2=A_2+A_3+A_5$ & $17$ & $10$ & $[[ 17,7,3 ]]_{2}$ \\
$C_{17}$ & $B_1=A_3+A_4+A_6+A_7+A_8$,\hspace*{2mm}$B_2=A_2+A_3+A_5$ & $17$ & $14$ & $[[ 17,3,4 ]]_{2}$ \\
$C_{18}$ & $B_1=A_0+A_3+A_4+A_5+A_6$,\hspace*{0.5mm}$B_2=A_3+A_5+A_6+A_7+A_8+A_9$ & $18$ & $10$ & $[[ 18,8,3 ]]_{2}$ \\
$C_{19}$ & $B_1=A_3+A_4+A_6+A_9$,\hspace*{0.5mm}$B_2=A_3+A_5+A_6+A_7$ & $19$ & $10$ & $[[ 19,9,3 ]]_{2}$ \\
$C_{20}$ & $B_1=A_3+A_4+A_6+A_9+A_{10}$,\hspace*{0.5mm}$B_2=A_3+A_5+A_6+A_7+A_8$ & $20$ & $8$ & $[[ 20,12,3 ]]_{2}$ \\
$C_{21}$ & $B_1=A_3+A_4+A_6+A_9+A_{10}$,\hspace*{0.5mm}$B_2=A_3+A_5+A_6+A_7+A_8$ & $21$ & $8$ & $[[ 21,13,3 ]]_{2}$ \\
$C_{21}$ & $B_1=A_3+A_4+A_6+A_9+A_{10}$,\hspace*{0.5mm}$B_2=A_3+A_5+A_6+A_7+A_8$ & $21$ & $11$ & $[[ 21,10,4 ]]_{2}$ \\
$C_{21}$ & $B_1=A_3+A_4+A_6+A_9+A_{10}$,\hspace*{0.5mm}$B_2=A_3+A_5+A_6+A_7+A_8$ & $21$ & $12$ & $[[ 21,9,5 ]]_{2}$ \\
$C_{21}$ & $B_1=A_3+A_4+A_6+A_9+A_{10}$,\hspace*{0.5mm}$B_2=A_3+A_5+A_6+A_7+A_8$ & $^{l}21$ & $16$ & $[[ 21,5,7 ]]_{2}$ \\ \hline \hline \end{tabular} $$ \begin{table}[htb] \caption{\small{Quantum stabilizer codes $[[ n,k,d ]]_{2}$.}} \label{table:2} \newcommand{\hphantom{$-$}}{\hphantom{$-$}} \newcommand{\cc}[1]{\multicolumn{1}{c}{#1}} \renewcommand{0pc}{0pc} \renewcommand{1.}{1.} \end{table} \\ \\ $$
\begin{tabular}{|c|p{10.5cm}|c|c|l|} \hline \hline
Cyclic group & $B_i(i=1,2)$ & $n$ & $n-k$ & $[[ n,k,d ]]_{2}$\\
\hline
$C_{25}$ & $B_1=A_3+A_4+A_6+A_9+A_{10}$,\hspace*{0.5mm}$B_2=A_3+A_5+A_6+A_7+A_8$ & $25$ & $8$ & $[[ 25,17,3 ]]_{2}$ \\
$C_{25}$ & $B_1=A_3+A_4+A_6+A_9+A_{10}$,\hspace*{0.5mm}$B_2=A_3+A_5+A_6+A_7+A_8$ & $25$ & $12$ & $[[ 25,13,4 ]]_{2}$ \\
$C_{30}$ & $B_1=A_3+A_4+A_6+A_9+A_{10}+A_{14}$,\hspace*{0.5mm}$B_2=A_3+A_5+A_6+A_7+A_8+A_{13}+A_{15}$
& $30$ & $8$ & $[[ 30,22,3 ]]_{2}$ \\
$C_{30}$ & $B_1=A_3+A_4+A_6+A_9+A_{10}+A_{14}$,\hspace*{0.5mm}$B_2=A_3+A_5+A_6+A_7+A_8+A_{13}+A_{15}$
& $30$ & $18$ & $[[ 30,12,5 ]]_{2}$ \\
$C_{40}$ & $B_1=A_3+A_4+A_6+A_9+A_{10}+A_{12}+A_{14}+A_{15}+A_{16}+A_{18}+A_{19}$,
\hspace*{0.5mm}$B_2=A_3+A_5+A_6+A_7+A_8+A_{12}+A_{15}+A_{16}+A_{17}+A_{18}+A_{20}$
& $40$ & $10$ & $^{u}[[ 40,30,3 ]]_{2}$ \\
$C_{40}$ & $B_1=A_3+A_4+A_6+A_9+A_{10}+A_{12}+A_{14}+A_{15}+A_{16}+A_{18}+A_{19}$,
\hspace*{0.5mm}$B_2=A_3+A_5+A_6+A_7+A_8+A_{12}+A_{15}+A_{16}+A_{17}+A_{18}+A_{20}$
& $40$ & $14$ & $^{u}[[ 40,26,5 ]]_{2}$ \\
$C_{40}$ & $B_1=A_3+A_4+A_6+A_9+A_{10}+A_{12}+A_{14}+A_{15}+A_{16}+A_{18}+A_{19}$,
\hspace*{0.5mm}$B_2=A_3+A_5+A_6+A_7+A_8+A_{12}+A_{15}+A_{16}+A_{17}+A_{18}+A_{20}$
& $^{l}40$ & $19$ & $^{u}[[ 40,21,7 ]]_{2}$ \\
\hline \hline \end{tabular} $$ \begin{table}[htb] \caption{\small{Quantum stabilizer codes $[[ n,k,d ]]_{2}$.}} \label{table:2} \newcommand{\hphantom{$-$}}{\hphantom{$-$}} \newcommand{\cc}[1]{\multicolumn{1}{c}{#1}} \renewcommand{0pc}{0pc} \renewcommand{1.}{1.} \end{table} \subsection{\hspace*{-.3cm}Construction of quantum stabilizer codes of distances five and seven from Abelian group association schemes} We can extend the stabilizer construction of section $3$ to obtain codes of distances five and seven. The parameters of these codes with $d=5,7$ will be $[[ n,k,d ]]_{2}$. In the case of $m=5$ from $C_{11}$ we can write \\ \begin{equation} A_0=I_{11},\hspace*{0.5mm}A_1=S^{1}+S^{-1},\hspace*{0.5mm}A_2=S^{2}+S^{-2},\hspace*{0.5mm}A_3=S^{3}+S^{-3},\hspace*{0.5mm} A_4=S^{4}+S^{-4},\hspace*{0.5mm}A_5=S^{5}+S^{-5} \end{equation} \\ where $S$ is an $11\times 11$ circulant matrix with period $11$ $(S^{11}=I_{11})$. One can easily see that the above adjacency matrices for $i=1,...,5$ are symmetric and $\sum_{i=0}^{5}A_i=J_{11}$. Also, the set of matrices $A_0, ... ,A_5$ forms a symmetric association scheme. By choosing $B_1=A_1+A_4+A_5$ and $B_2=A_2 +A_5$ the binary matrix $B =(B_1 \vert B_2)$ will be in the form \\ \begin{equation} B=\left(
\begin{array}{ccccccccccccccccccccccc}
0 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & | & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 0\\
1 & 0 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & | & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1\\
0 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & | & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0\\
0 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & | & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0\\
1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 1 & | & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1\\
1 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & | & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 1\\
1 & 1 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & | & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0\\
1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & | & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0\\
0 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & | & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 1\\
0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 1 & | & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0\\
1 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & | & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0\\
\end{array}
\right) \end{equation} \\ By removing the last row from $B$ and considering $\textbf{a}=(x_{01},...,x_{11})$ and $\textbf{b}=(y_{01},...,y_{11})$, in view of (3.5) we obtain $d=5$. \\ Since the number of independent generators is $n-k=10$, the quantum stabilizer code has length $11$ and encodes $k=1$ logical qubit, i.e., $[[11,1,5]]_{2}$ is constructed. This code is generated by the $n-k=10$ independent generators in table $6$. \\ \\ $$
\begin{tabular}{|c|c|}
\hline
Name & Operator\\
\hline
$g_1$ &I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}I\hspace*{2mm}Z\hspace*{2mm}X\\
$g_2$ &X\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}I\hspace*{2mm}Z\\
$g_3$ &Z\hspace*{2mm}X\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}I\\
$g_4$ &I\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}Y\hspace*{2mm}X\\
$g_5$ &X\hspace*{2mm}I\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}Y\\
$g_6$ &Y\hspace*{2mm}X\hspace*{2mm}I\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Y\\
$g_8$ &X\hspace*{2mm}Y\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}I\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}I\\
$g_9$ &I\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}I\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z\\
$g_{10}$ &Z\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}I\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}I\hspace*{2mm}X\\
\hline \end{tabular} $$ \begin{table}[htb] \caption{\small{Stabilizer generators for the $[[ 11,1,5 ]]_{2}$ code.}} \label{table:2} \newcommand{\hphantom{$-$}}{\hphantom{$-$}} \newcommand{\cc}[1]{\multicolumn{1}{c}{#1}} \renewcommand{0pc}{0pc} \renewcommand{1.}{1.} \end{table} \\ \\ For the construction of a distance-five quantum stabilizer code from $C_{13}$, using $(2.14)$, we have \\ \\ \begin{equation} A_0=I_{13},A_1=S+S^{-1},A_2=S^{2}+S^{-2},A_3=S^{3}+S^{-3},
A_4=S^{4}+S^{-4},A_5=S^{5}+S^{-5},A_6=S^{6}+S^{-6} \end{equation} \\ One can see that $A_i$ for $i=1,...,6$ are symmetric and $\sum_{i=0}^{6}A_i=J_{13}$. Also it can be easily shown that, $\{A_i,\hspace*{1mm}i=1,...,6\}$ is closed under multiplication and therefore, the set of matrices $A_0, ... ,A_6$ form a symmetric association scheme. By choosing $B_1$ and $B_2$ as follows: \\ \begin{equation} B_1=A_1+A_3+A_4+A_5,\hspace*{4mm}B_2=A_2+A_3+A_5
\hspace*{3cm} \end{equation} \\ It can be seen that $B_1 B_2^{T}+ B_2 B_1^{T}=0$, so all generators commute. On the other hand, since \\ \begin{equation} B=\left(
\begin{array}{ccccccccccccccccccccccccccc}
0 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 0 & 1 \hspace*{2mm} |\hspace*{2mm} 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 0\\
1 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 0 \hspace*{2mm} |\hspace*{2mm} 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1\\
0 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 1 \hspace*{2mm} |\hspace*{2mm} 1 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1\\
1 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 1 & 1 \hspace*{2mm} |\hspace*{2mm} 1 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 0\\
1 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 1 \hspace*{2mm} |\hspace*{2mm} 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1\\
1 & 1 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 0 \hspace*{2mm} |\hspace*{2mm} 1 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0\\
0 & 1 & 1 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 0 \hspace*{2mm} |\hspace*{2mm} 0 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0\\
0 & 0 & 1 & 1 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 1 \hspace*{2mm} |\hspace*{2mm} 0 & 0 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 1\\
1 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 1 \hspace*{2mm} |\hspace*{2mm} 1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 0\\
1 & 1 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \hspace*{2mm} |\hspace*{2mm} 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1\\
1 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 0 & 1 & 0 \hspace*{2mm} |\hspace*{2mm} 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 1\\
0 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 0 & 1 \hspace*{2mm} |\hspace*{2mm} 1 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 0 & 0\\
1 & 0 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 0 \hspace*{2mm} |\hspace*{2mm} 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 0\\
\end{array}
\right) \end{equation} \\ By removing the last row from $B$ and considering $\textbf{a}=(x_{01},...,x_{13})$ and $\textbf{b}=(y_{01},...,y_{13})$, in view of (3.5) we obtain $d=5$. \\ Since the number of independent generators is $n-k=12$, the quantum stabilizer code has length $13$ and encodes $k=1$ logical qubit, i.e., $[[13,1,5]]_{2}$ is constructed. This code is generated by the $n-k=12$ independent generators in table $7$. \\ $$
\begin{tabular}{|c|c|}
\hline
Name & Operator\\
\hline
$g_1$ &I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}X\\
$g_2$ &X\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}Z\\
$g_3$ &Z\hspace*{2mm}X\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Y\\
$g_4$ &Y\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}Y\hspace*{2mm}X\\
$g_5$ &X\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}Y\\
$g_6$ &Y\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}I\hspace*{2mm}I\\
$g_7$ &I\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}I\\
$g_8$ &I\hspace*{2mm}I\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Y\\
$g_9$ &Y\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}X\\
$g_{10}$ &X\hspace*{2mm}Y\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Y\\
$g_{11}$ &Y\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z\\
$g_{12}$ &Z\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}I\hspace*{2mm}X\\
\hline \end{tabular} $$ \begin{table}[htb] \caption{\small{Stabilizer generators for the $[[ 13,1,5 ]]_{2}$ code.}} \label{table:2} \newcommand{\hphantom{$-$}}{\hphantom{$-$}} \newcommand{\cc}[1]{\multicolumn{1}{c}{#1}} \renewcommand{0pc}{0pc} \renewcommand{1.}{1.} \end{table} \\ For the construction of a distance-seven quantum stabilizer code from $C_{21}$ we choose $B_1$ and $B_2$ as follows: \\ \begin{equation} B_1=A_3+A_4+A_6+A_9+A_{10},\hspace*{4mm}B_2=A_3+A_5+A_6+A_7+A_8 \end{equation} \\ It can be seen that $B_1 B_2^{T}+ B_2 B_1^{T}=0$, so all generators commute. By removing the last nine rows from $B=(B_1 \vert B_2)$ and considering $\textbf{a}=(x_{01},...,x_{21})$ and $\textbf{b}=(y_{01},...,y_{21})$, in view of (3.5) we obtain $d=7$. \\ Since the number of independent generators is $n-k=16$, the optimal quantum stabilizer code has length $21$ and encodes $k=5$ logical qubits, i.e., $[[21,5,7]]_{2}$ is constructed. This code is generated by the $n-k=16$ independent generators in table $8$. The rate $\frac{k}{n}$ of the $[[21,5,7]]_{2}$ code is $0.238$. \\ $$
\begin{tabular}{|c|c|}
\hline
Name & Operator\\
\hline
$g_1$ &I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}I\hspace*{2mm}I\\
$g_2$ &I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}I\\
$g_3$ &I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}Y\\
$g_4$ &Y\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}X\\
$g_5$ &X\hspace*{2mm}Y\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}Z\\
$g_6$ &Z\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}Y\\
$g_7$ &Y\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Z\\
$g_8$ &Z\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}Z\\
$g_9$ &Z\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}X\\
$g_{10}$ &X\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}X\\
$g_{11}$ &X\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}X\\
$g_{12}$ &X\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}X\\
$g_{13}$ &X\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}Z\\
$g_{14}$ &Z\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}Z\\
$g_{15}$ &Z\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Y\\
$g_{16}$ &Y\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}Y\hspace*{2mm}Z\hspace*{2mm}X\hspace*{2mm}Y\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}Y\hspace*{2mm}X\hspace*{2mm}Z\\
\hline \end{tabular} $$ \begin{table}[htb] \caption{\small{Stabilizer generators for the $[[ 21,5,7 ]]_{2}$ code.}} \label{table:2} \newcommand{\hphantom{$-$}}{\hphantom{$-$}} \newcommand{\cc}[1]{\multicolumn{1}{c}{#1}} \renewcommand{0pc}{0pc} \renewcommand{1.}{1.} \end{table} \\ \\
\section{\hspace*{-.3cm}Construction of stabilizer codes from non-Abelian group association schemes} The construction of binary quantum stabilizer codes based on non-Abelian group association schemes proceeds as in the case of Abelian group association schemes. To do so, we choose a binary matrix $A=(A_1 \vert A_2)$, such that by removing an arbitrary row or rows from $A$ we can obtain $n-k$ independent generators. After finding the code distance from the $n-k$ independent generators we can then determine the parameters of the associated code. \\ \\
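As in section 3, the by-hand checks below can also be reproduced numerically. The following Python sketch (our own illustration; the choice $n=2$, $B_1=A_2$, $B_2=A_3+A_5$ matches the $U_{12}$ example discussed next) builds the representation matrices of $U_{6n}$ given in section 2.5, forms the adjacency matrices of (2.27), and verifies both the scheme properties and the commutation condition of the resulting generators:
\begin{verbatim}
# Minimal sketch for U_{6n} with n = 2 (the group U_{12}): representation
# matrices [a], [b], the adjacency matrices of (2.27), and the symplectic
# check for the choice B1 = A_2, B2 = A_3 + A_5.
import numpy as np

n = 2
S = np.roll(np.eye(2 * n, dtype=int), 1, axis=0)       # 2n x 2n shift, S^{2n} = I
I, Z = np.eye(2 * n, dtype=int), np.zeros((2 * n, 2 * n), dtype=int)
a = np.block([[S, Z, Z], [Z, Z, S], [Z, S, Z]])        # [a]
b = np.block([[Z, I, Z], [Z, Z, I], [I, Z, Z]])        # [b]

P = np.linalg.matrix_power
A = [np.eye(6 * n, dtype=int),                         # A_0
     P(a, 2),                                          # A_1
     b + P(b, 2),                                      # A_2
     b @ P(a, 2) + P(b, 2) @ P(a, 2),                  # A_3
     a + b @ a + P(b, 2) @ a,                          # A_4
     P(a, 3) + b @ P(a, 3) + P(b, 2) @ P(a, 3)]        # A_5

J = np.ones((6 * n, 6 * n), dtype=int)
assert (sum(A) == J).all()                             # class sums cover the group once
assert all((Ai @ Aj == Aj @ Ai).all() for Ai in A for Aj in A)   # commutative algebra

B1, B2 = A[2] % 2, (A[3] + A[5]) % 2                   # choice leading to [[12,4,3]]
assert not ((B1 @ B2.T + B2 @ B1.T) % 2).any()         # all generators commute
\end{verbatim}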
Consider the group $U_{6n}$, as is presented in section 2.5. By setting $n=2$ in view of (2.27), we have \\ \begin{align} A_0 &=I_{12},\nonumber\\ A_1 &=[a]^2,\nonumber\\ A_2 &=[b]+[b]^2, \\ A_3 &=[b][a]^2+[b]^2[a]^2,\nonumber\\ A_4 &=[a]+[b][a]+[b]^2[a],\nonumber\\ A_{5} &=[a]^3+[b][a]^3+[b]^2[a]^3 \nonumber
\hspace*{3cm} \end{align} \\ \\ One can see that $\sum_{i=0}^{5}A_i =J_{12}$, $A_i^{T}\in \{A_0,A_1,...,A_5\}$ for $0\leq i \leq 5$, and $A_iA_j$ is a linear combination of $A_0,A_1,...,A_5$ for $0\leq i,j\leq 5$. Also it can be verified that $\{A_i,\hspace*{2mm}i=1,...,5\}$ is closed under multiplication and therefore the set of matrices $A_0,A_1,...,A_5$ forms an association scheme with $5$ classes. \\ \\ By examining the combinations of 2 cases selected from a set of 63 distinct cases, and choosing $B_1=A_2$ and $B_2=A_3+A_5$, the binary matrix $B=(B_1 \vert B_2)$ is written as \\ \begin{equation} B=\left(
\begin{array}{cccccccccccccccccccccccc}
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \hspace*{2mm} |\hspace*{2mm} 0 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \hspace*{2mm} |\hspace*{2mm} 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 \hspace*{2mm} |\hspace*{2mm} 0 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \hspace*{2mm} |\hspace*{2mm} 1 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \hspace*{2mm} |\hspace*{2mm} 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \hspace*{2mm} |\hspace*{2mm} 0 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \hspace*{2mm} |\hspace*{2mm} 1 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 1 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \hspace*{2mm} |\hspace*{2mm} 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 0 \\
1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \hspace*{2mm} |\hspace*{2mm} 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \hspace*{2mm} |\hspace*{2mm} 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \hspace*{2mm} |\hspace*{2mm} 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \hspace*{2mm} |\hspace*{2mm} 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 \\
\end{array}
\right) \end{equation} \\ By removing the last four rows from the binary matrix $B$ we can obtain $n-k=8$ independent generators. The distance $d$ of the quantum code is given by the minimum weight of the bitwise OR of $(\textbf{a},\textbf{b})$ over all pairs satisfying the symplectic orthogonality condition, \\ \begin{equation} B_1 \textbf{b} + B_2 \textbf{a}=0 \hspace*{3cm} \end{equation} \\ Let $\textbf{a}=(x_{01},...,x_{12})$ and $\textbf{b}=(y_{01},...,y_{12})$. Then by using (4.3), we have \\ \begin{equation} \left\{
\begin{array}{ll}
x_{02} + x_{06} + x_{07} + x_{10} + x_{11} + y_{05} + y_{09} =0 \\
x_{03} + x_{07} + x_{08} + x_{11} + x_{12} + y_{06} + y_{10} =0 \\
x_{04} + x_{05} + x_{08} + x_{09} + x_{12} + y_{07} + y_{11} =0 \\
x_{01} + x_{05} + x_{06} + x_{09} + x_{10} + y_{08} + y_{12} =0 \\
x_{02} + x_{03} + x_{06} + x_{10} + x_{11} + y_{01} + y_{09} =0 \\
x_{03} + x_{04} + x_{07} + x_{11} + x_{12} + y_{02} + y_{10} =0 \\
x_{01} + x_{04} + x_{08} + x_{09} + x_{12} + y_{03} + y_{11} =0 \\
x_{01} + x_{02} + x_{05} + x_{09} + x_{10} + y_{04} + y_{12} =0 \end{array} \right.
\hspace*{3cm} \end{equation} \\ By using (4.4) we find that the code distance $d$ equals $3$. Since the number of independent generators is $n-k=8$, the quantum stabilizer code has length $12$ and encodes $k=4$ logical qubits, i.e., $[[ 12,4,3 ]]_{2}$ is constructed. This code is generated by the $n-k=8$ independent generators in table $9$. \\ $$
\begin{tabular}{|c|c|}
\hline
Name & Operator\\
\hline
$g_1$ &I\hspace*{2mm}Z\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}I \\
$g_2$ &I\hspace*{2mm}I\hspace*{2mm}Z\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Z \\
$g_3$ &I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z \\
$g_4$ &Z\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}I\hspace*{2mm}X \\
$g_5$ &X\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}Z\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}I \\
$g_6$ &I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}Z\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}Z \\
$g_7$ &Z\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z \\
$g_8$ &Z\hspace*{2mm}Z\hspace*{2mm}I\hspace*{2mm}X\hspace*{2mm}Z\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}I\hspace*{2mm}Z\hspace*{2mm}Z\hspace*{2mm}I\hspace*{2mm}X \\
\hline \end{tabular} $$ \begin{table}[htb] \caption{\small{Stabilizer generators for the $[[12,4,3]]_{2}$ code.}} \label{table:1} \newcommand{\hphantom{$-$}}{\hphantom{$-$}} \newcommand{\cc}[1]{\multicolumn{1}{c}{#1}} \renewcommand{0pc}{0pc} \renewcommand{1.}{1.} \end{table} \\ Applying (2.27), (2.31), (2.35), (2.38) and (2.39) we can obtain quantum stabilizer codes from the $U_{6n}$, $T_{4n}$, $V_{8n}$ and dihedral $D_{2n}$ groups. A list of quantum stabilizer codes is given in table $10$. \\ \\ \textbf{Remark.} Table $10$ is a list of quantum stabilizer codes from the $U_{6n}$, $T_{4n}$, $V_{8n}$ and dihedral $D_{2n}$ groups. The first column shows non-Abelian groups. The second column shows $B_1$ and $B_2$ in terms of $A_i$, $i=0,1,...,m$, where the $A_i$ are indexed by the conjugacy classes of the group $G$. The third column shows the length $n$ of the quantum stabilizer code. The fourth column shows the value of $n-k$. The fifth column shows a list of the quantum stabilizer codes. \\
$$
\begin{tabular}{|c|p{10.5cm}|c|c|l|}
\hline \hline
Group & $B_i(i=1,2)$ & $n$ & $n-k$ & $[[ n,k,d ]]_{2}$\\
\hline
$U_{12}$ & $B_1=A_1+A_2+A_4$,\hspace*{2mm}$B_2=A_3$ & $12$ & $8$ & $[[ 12,4,3 ]]_{2}$ \\
$U_{12}$ & $B_1=A_1+A_2+A_5$,\hspace*{2mm}$B_2=A_0+A_4$ & $12$ & $8$ & $[[ 12,4,3 ]]_{2}$ \\
$U_{12}$ & $B_1=A_2$,\hspace*{2mm}$B_2=A_3+A_5$ & $12$ & $8$ & $[[ 12,4,3 ]]_{2}$ \\
$U_{12}$ & $B_1=A_1+A_2+A_5$,\hspace*{2mm}$B_2=A_0+A_4$ & $12$ & $11$ & $[[ 12,1,4 ]]_{2}$ \\
$U_{18}$ & $B_1=A_1+A_2+A_3+A_7+A_8$, \hspace*{2mm}$B_2=A_0+A_1+A_2+A_4+A_5$ & $18$ & $12$ & $[[ 18,6,3]]_{2}$ \\ $U_{18}$ & $B_1=A_1+A_2+A_3+A_7$, \hspace*{2mm}$B_2=A_0+A_1+A_2+A_4$ & $18$ & $13$ & $[[ 18,5,3]]_{2}$ \\ $U_{18}$ & $B_1=A_1+A_2+A_3+A_7$, \hspace*{2mm}$B_2=A_0+A_1+A_2+A_4$ & $18$ & $16$ & $[[ 18,2,4]]_{2}$ \\
$U_{24}$ & $B_1=A_0+A_1+A_2+A_3+A_4+A_8+A_{10}$,\hspace*{2mm}$B_2=A_0+A_3+A_5+A_6+A_{11}$ & $24$ & $12$ & $[[ 24,12,3 ]]_{2}$ \\
$U_{24}$ & $B_1=A_0+A_1+A_2+A_3+A_4+A_8+A_{10}$,\hspace*{2mm}$B_2=A_0+A_3+A_5+A_6+A_{11}$ & $24$ & $16$ & $[[ 24,8,5 ]]_{2}$ \\
$T_{12}$ & $B_1=A_2+A_4$,\hspace*{2mm}$B_2=A_0+A_5$ & $12$ & $9$ & $[[ 12,3,3 ]]_{2}$ \\
$T_{12}$ & $B_1=A_0+A_4$, \hspace*{2mm}$B_2=A_1+A_2+A_5$ & $12$ & $10$ & $[[ 12,2,3 ]]_{2}$ \\
$T_{16}$ & $B_1=A_0+A_1+A_2+A_6$,\hspace*{2mm}$B_2=A_0+A_2+A_3$ & $16$ & $14$ & $[[ 16,2,3 ]]_{2}$ \\
$V_{24}$ & $B_1=A_0+A_3+A_6+A_7$,\hspace*{2mm}$B_2=A_0+A_2+A_4$ & $24$ & $20$ & $[[ 24,4,3 ]]_{2}$ \\
$D_{12}$ & $B_1=A_3+A_5$,\hspace*{2mm}$B_2=A_2+A_3+A_5$ & $12$ & $10$ & $[[ 12,2,3 ]]_{2}$ \\ \hline \hline \end{tabular} $$ \begin{table}[htb] \caption{\small{Quantum stabilizer codes $[[ n,k,d ]]_{2}$.}} \label{table:2} \newcommand{\hphantom{$-$}}{\hphantom{$-$}} \newcommand{\cc}[1]{\multicolumn{1}{c}{#1}} \renewcommand{0pc}{0pc} \renewcommand{1.}{1.} \end{table} \\ \\ \section{\hspace*{-.5cm}\ Conclusion} We have developed a new method of constructing binary quantum stabilizer codes from Abelian and non-Abelian group association schemes. Using this method, we have constructed good binary quantum stabilizer codes of distances $3$, $4$, $5$, and $7$ and lengths up to $40$. Furthermore, binary quantum stabilizer codes of large length $n$ with high distance can be constructed. We can see from tables 4 and 5 that the Abelian group association scheme procedure for the construction of binary quantum stabilizer codes is superior to the non-Abelian one. Although we focused specifically on Abelian and non-Abelian group association schemes, we expect that the introduced method can be applied to other association schemes, such as the association scheme defined over the coset space $G/H$ (where $H$ is a normal subgroup of a finite group $G$ with prime index), strongly regular graphs, distance-regular graphs, etc. These association schemes are under investigation. \\
\end{document} |
\begin{document}
\def\spacingset#1{\renewcommand{\baselinestretch} {#1}\small\normalsize} \spacingset{1}
\if00 {
\title{\bf Prediction properties of optimum response surface designs}
\author{Heloisa M. de Oliveira\thanks{
The authors gratefully acknowledge financial support from Coordena\c{c}\~ao de Aperfei\c{c}oamento de Pessoal de N\'ivel Superior (PNPD/CAPES, Brazil Government) and from FAPESP grant numbers 2013/09282-9 and 2014/01818-0.}\hspace{.2cm}\\
Universidade Federal de Santa Catarina, Curitibanos, SC, 89520-000, Brazil\\
C\'esar B. A. de Oliveira\\
Instituto Tecnol\'ogico de Aeron\'autica, S\~ao Jos\'e dos Campos, SP, 12228-900, Brazil\\
Steven G. Gilmour\\
King's College London, London, WC2R 2LS, UK\\
and \\
Luzia A. Trinca\\
Universidade Estadual Paulista, Botucatu, SP, 18618-689, Brazil}
\maketitle } \fi
\if10 {
\begin{center}
{\LARGE\bf On prediction properties of optimum response surface designs} \end{center}
} \fi
\begin{abstract} Prediction capability is considered an important issue in response surface methodology. Following the line of argument that a design should have several desirable properties we have extended an existing compound design criterion to include prediction properties. Prediction of responses and of differences in response are considered. Point and interval predictions are allowed for. Extensions of existing graphical tools for inspecting prediction performances of the designs in the whole region of experimentation are also introduced. The methods are illustrated with two examples. \end{abstract}
\noindent {\it Keywords:} compound criteria, dispersion graphs, $(DP)$-optimality, FDS, $I$-optimality, pure error.
\spacingset{1.45} \section{Introduction} \label{sec:intro}
Experiments provide important information for discoveries in many research areas. Careful planning of an experiment is very important in order to obtain informative answers to the questions of the research problem at hand. The planning phase can be quite involved and methods for finding optimum designs are very useful when there are several quantitative factors related to the response variables of interest and when there are practical restrictions. Work in this area started by considering the optimization of single design-criterion functions aimed at maximizing the precision of the model parameter estimates or prediction of responses. Computational algorithms are well developed mainly for $D$- and $I$-efficiency \citep{CookNachtsheim89, jonesgoos2012}. Designs obtained by such methods are the best or very close to the best (as they are based on heuristics), given the assumed model, for the property being optimized. However, for practical purposes, an experiment should answer several research questions and so requires a good design with respect to many properties as advocated by \cite{Box-Draper:1975}. Fortunately, in the last decade or so, design methodologies seem to be moving in this direction through the application of compound criteria and multiple objective approaches \citep{goosetal2005, jonesnash2011, luetal11, smucker2012, gilmourtrinca2012, smucker2015, borrotti2016, daSilvaGilmourTrinca2017, trincagilmour2017}.
While the use of compound criteria or multiple objective procedures allow the consideration of a set of one-dimensional properties for constructing the design, graphical techniques add information to illustrate the prediction properties of the designs. The study of design prediction capabilities through graphs advanced with \cite{g-j&m1989} and \cite{myers1992} when they introduced variance dispersion graphs. These graphs were followed by the quantile plots of \cite{khuri96}, the difference variance dispersion graphs of \cite{trincagilmour1999} and the fraction of design space plots of \cite{zahran2003} and \cite{jang2012}. Such techniques are of great value for choosing a final design among many options.
In this paper we consider a flexible compound criterion for optimization of parameter estimation properties as well as prediction. The paper introduces several new methods, namely: (i) difference fraction of design space plots, which show variances of differences in response; (ii) variance dispersion graphs and fraction of design space plots for interval predictions, for both responses and differences in response; (iii) the $I_D$ criterion, for point estimation of differences in response; (iv) the $(IP)$ and $(I_DP)$ criteria for interval estimation of responses and differences in response; (v) using standard errors, rather than variances, in the plots; (vi) using relative volume in the plots. These methods can be considered extensions for prediction, motivated by the difference variance dispersion graphs of \cite{trincagilmour1999} and the adjusted criteria of \cite{gilmourtrinca2012}. The designs constructed are further evaluated according to their performances with respect to prediction capabilities using the graphs described and extensions incorporating the new measures. In Section \ref{sec:DC} we review the literature and propose extensions to the usual design criteria. In Section \ref{sec:PC} we discuss graphical methods for prediction evaluation and propose two extensions, and in Section \ref{sec:Ex} we illustrate these methods and compare several designs for two examples. Motivated by these results, we note in Section \ref{sec:CCD} some situations in which central composite designs are optimal. Finally, a discussion is presented in Section \ref{sec:disc}.
\section{Design criteria}\label{sec:DC} Data from experiments with $q$ continuous quantitative factors are routinely analyzed by fitting low order polynomials. These are used as approximations to the unknown true function relating the response variable $Y$ and the treatments. A treatment $\mathbf{x}$ is defined by a specific combination of levels of the $q$ factors $X_1,~X_2,~\ldots,~X_q$. The full model for a completely randomized design with $n$ experimental units (runs) is \begin{equation} \mathbf{Y} = \mbox{\boldmath{$\mu$}}(\mathbf{x})+\mbox{\boldmath{$\varepsilon$}}, \label{eq:full} \end{equation} where $\mathbf{Y}$ is the column vector of random variables of dimension $n$, $\mbox{\boldmath{$\mu$}}(\mathbf{x})$ is the mean vector of $\mathbf{Y}$, depending on $\mathbf{x}$, and $\mbox{\boldmath{$\varepsilon$}}$ is the error term random vector satisfying $E(\mbox{\boldmath{$\varepsilon$}})=\mathbf{0}$ and $Var(\mbox{\boldmath{$\varepsilon$}})=\sigma^2\mathbf{I}$. The full model may be further approximated by \begin{equation} \mbox{\boldmath{$\mu$}}(\mathbf{x})\approx\mathbf{X}\mbox{\boldmath{$\beta$}}, \label{eq:poly} \end{equation} where, using standard notation, $\mbox{\boldmath{$\beta$}}$ is the $p$-dimensional vector of unknown parameters and $\mathbf{X}$ is the $\big(n \times p \big)$ model matrix whose rows, denoted by $\mathbf{f}(\mathbf{x})^\prime$, are expansions of levels of the factors in order to accommodate the desired polynomial.
Since the matrix $\mathbf{X}$ is defined by the design and the model approximation, for notational simplicity we will refer to the design as $\mathbf{X}$. As discussed in \cite{gilmourtrinca2012}, fitting the full model (\ref{eq:full}) allows unbiased estimation of $\sigma^2$ if degrees of freedom from treatment replications are available, while fitting model (\ref{eq:poly}) allows simplification and also lack-of-fit checking if there are spare treatment degrees of freedom. In order to construct optimum designs that allow unbiased estimation of error variance, \cite{gilmourtrinca2012} proposed adjustments to the usual alphabetical design criteria, based on the appropriate quantiles of the $F$ distribution, e.g.\ the $(DP)_S(\alpha)$ and $(AP)_S(\alpha)$ criteria. Following their logic, Goos, in the discussion of \cite{gilmourtrinca2012}, proposed the same type of adjustment for the $I$-optimality criterion.
\subsection{Prediction of responses}
For any point $\mathbf{x} \in \mathcal{X}$, $\mathcal{X}$ being the region which the experimenter desires to explore, the variance of $\hat{y}(\mathbf{x})$, the estimated response from the fitted polynomial, is $\text{var}(\hat{y}(\mathbf{x}))=\sigma^2 \mathbf{f}(\mathbf{x})^\prime(\mathbf{X}^\prime\mathbf{X})^{-1}\mathbf{f}(\mathbf{x})$. An $I$-optimum design $\mathbf{X}$ is such that the average variance of predictions over the whole experimental region $\mathcal{X}$ is minimized. Let $\Psi=\int_{\mathbf{x}\in\mathcal{X}}d\mathbf{x}$ be the volume of the region $\mathcal{X}$. The average prediction variance is defined as \begin{equation} \text{average}~\text{variance}=\Psi^{-1}\int_{\mathbf{x}\in\mathcal{X}}\text{var}(\hat{y}(\mathbf{x}))d\mathbf{x}\propto\int_{\mathbf{x}\in\mathcal{X}}\mathbf{f}(\mathbf{x})^\prime(\mathbf{X}^\prime\mathbf{X})^{-1}\mathbf{f}(\mathbf{x})d\mathbf{x}.\label{eq:vary}\end{equation} As the integrand in (\ref{eq:vary}) is a scalar, and using properties of the trace of matrix products, it is easily shown that \begin{equation} \text{average}~\text{variance}\propto\text{trace}\left[\mathbf{\mathcal{M}}(\mathbf{X}^\prime\mathbf{X})^{-1}\right],\label{eq:I}\end{equation}where $\mathbf{\mathcal{M}}=\int_{\mathbf{x}\in\mathcal{X}}\mathbf{f}(\mathbf{x})\mathbf{f}(\mathbf{x})^\prime d\mathbf{x}$ is the so called moment matrix of the region. For regular spherical and cubic regions and polynomial models, the matrix $\mathcal{M}$ obeys known patterns, given explicitly, for the full second order model, in \cite{hardin1991sphere} and \cite{hardin1991cube} for example.
Considering that interest is in evaluating the performance of the design for interval predictions, the $I$ criterion may be modified to minimize the average, over the design region $\mathcal{X}$, of the width of pointwise confidence intervals for the mean response. This gives the criterion function \begin{equation} \text{trace}\left[\mathbf{\mathcal{M}}(\mathbf{X}^\prime\mathbf{X})^{-1}\right]F_{1,d;1-\alpha_3},\label{eq:IP}\end{equation} the $(IP)(\alpha_3)$ criterion, where $d$ is the number of pure error degrees of freedom of the design $\mathbf{X}$, $1-\alpha_3$ is the confidence level for pointwise intervals for $E(y(\mathbf{x}))$ and $F_{1,d;1-\alpha_3}$ is the relevant quantile from the $F$ distribution. According to several researchers, prediction is a key point for planning response surface experiments \citep{g-j&m1989, hardin1993, trincagilmour1999, zahran2003, goosjones2011book, jonesgoos2012, borrotti2016}.
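As a concrete illustration of (\ref{eq:I}) and (\ref{eq:IP}), the moment matrix $\mathcal{M}$ and the resulting criterion values can be approximated numerically with a large uniform sample from $\mathcal{X}$. The sketch below, in base R, assumes a cuboidal region $[-1,1]^q$ and a full second-order model; the helper names (\verb"expand_f", \verb"i_criterion", \verb"ip_criterion") are ours and not taken from any package.
\begin{verbatim}
## Sketch: Monte Carlo approximation of the I and (IP) criteria for a
## second-order model over the cube [-1,1]^q (illustrative only).
expand_f <- function(x) {                      # f(x): 1, linear, quadratic,
  q <- length(x)                               # and two-factor interactions
  inter <- if (q > 1) combn(q, 2, function(id) x[id[1]] * x[id[2]]) else numeric(0)
  c(1, x, x^2, inter)
}
moment_matrix <- function(q, N = 1e5) {        # M = E[f(x) f(x)'] over the cube
  pts <- matrix(runif(N * q, -1, 1), ncol = q)
  Fm  <- t(apply(pts, 1, expand_f))
  crossprod(Fm) / N
}
i_criterion <- function(X, M) {                # average prediction variance,
  sum(diag(M %*% solve(crossprod(X))))         # up to a constant: tr{M (X'X)^-1}
}
ip_criterion <- function(X, M, d, alpha3 = 0.05) {
  i_criterion(X, M) * qf(1 - alpha3, 1, d)     # (IP): rescale by the F quantile
}
\end{verbatim}
Here \verb"X" denotes the $n\times p$ model matrix with rows $\mathbf{f}(\mathbf{x}_i)^\prime$ and \verb"d" the number of pure error degrees of freedom of the design.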
\subsection{Prediction of differences in response}
In \cite{trincagilmour1999} it was argued that, rather than the response level itself, prediction of differences in response would be of more interest. In particular, we are often interested in differences between the estimated response at the expected optimum or standard operating conditions and the estimated response at other locations, i.e.\ $y(\mathbf{x})-y(\mathbf{x}_0)$, where $\mathbf{x}_0$ denotes standard conditions or the prior expected optimum combination. We code the factors so that $\mathbf{x}_0=\mathbf{0}$, which implies that the focus should be on estimating $y(\mathbf{x})-\beta_0$. There are both theoretical and practical reasons why predicting differences in response makes more sense than predicting responses themselves.
First, the randomization of the experiment ensures that least squares estimators of the parameters are unbiased, except for the estimate of $\beta_0$, which requires the further assumption that the experimental units are a random sample from a population of possible units; see, for example, \cite{coxreid}, pp.\ 32--36, or Chapter 5 of \cite{hinkelmann}. In response surface studies the runs are almost never a random sample and even treating them as a representative sample is usually implausible. Therefore predictions of responses made from the experiment cannot reasonably be applied to the process over time, but predictions of differences in response can.
Secondly, important aspects of the interpretation of fitted response surfaces, such as estimating the location of the stationary point and estimating the location of ridges, do not depend on the intercept. For example, the stationary point is located at $-\mathbf{B}^{-1}\mathbf{b}/2$, where $\mathbf{b}$ and $\mathbf{B}$ contain respectively the first and second order parameters. Similarly, canonical analysis depends on the same vector and matrix. Thus important aspects of response surface interpretation, which are difficult to build directly into design optimality criteria, should be better represented by optimizing the prediction of differences in response than by optimizing predictions of responses.
Finally, if $\mathbf{x}_0$ represents standard operating conditions of the process, we should already have a much better estimate of $E[y(\mathbf{x}_0)]$ from the historical running of the process than we can expect to get from a fairly small experiment. Using the factor coding, we can treat this historical estimate as being the true $\beta_0$. Then the best prediction from the experiment of the response at some $\mathbf{x}$ is not $\hat{y}(\mathbf{x})$, but \begin{equation} \label{eq:ytilde} \tilde{y}(\mathbf{x}) = \beta_0 +\hat{y}(\mathbf{x}) -\hat{\beta}_0. \end{equation} Then the variance of a prediction using this method is \[ \text{var}[\tilde{y}(\mathbf{x})] = \text{var}[\hat{y}(\mathbf{x}) -\hat{\beta}_0] = \text{var}[\hat{y}(\mathbf{x})-\hat{y}(\mathbf{x}_0)]. \] Hence, even if predictions of responses are of interest, the design should be chosen to minimize variances of differences in response.
Based on this argument, we define the $I_D$ criterion which minimizes the average difference variance, \begin{eqnarray}\text{average}~\text{difference}~\text{variance}&=&\Psi^{-1}\int_{\mathbf{x}\in\mathcal{X}}\text{var}[\hat{y}(\mathbf{x})-\hat{y}(\mathbf{x}_0)]d\mathbf{x} \nonumber\\ &\propto&\int_{\mathbf{x}\in\mathcal{X}}[\mathbf{f}(\mathbf{x})-\mathbf{f}(\mathbf{x}_0)]^\prime(\mathbf{X}^\prime\mathbf{X})^{-1}[\mathbf{f}(\mathbf{x})-\mathbf{f}(\mathbf{x}_0)]d\mathbf{x}.\end{eqnarray} For coded factors $\mathbf{x}_0=\mathbf{0}$ and analogously to (\ref{eq:I}) we have \begin{equation}\text{average}~\text{difference}~\text{variance}\propto\text{trace}\left[\mathbf{\mathcal{M}}_0(\mathbf{X}^\prime\mathbf{X})^{-1}\right],\end{equation} where $\mathbf{\mathcal{M}}_0=\int_{\mathbf{x}\in\mathcal{X}}[\mathbf{f}(\mathbf{x})-\mathbf{f}(\mathbf{0})][\mathbf{f}(\mathbf{x})-\mathbf{f}(\mathbf{0})]^\prime d\mathbf{x}$ such that $\mathbf{\mathcal{M}}_0$ is the $\mathbf{\mathcal{M}}$ matrix with first row and first column set to zero. Similarly to the $(IP)(\alpha_3)$ criterion we may now define the $(I_DP)(\alpha_{4})$ criterion that searches for $\mathbf{X}$ which minimizes \begin{equation} \text{trace}\left[\mathbf{\mathcal{M}}_0(\mathbf{X}^\prime\mathbf{X})^{-1}\right]F_{1,d;1-\alpha_{4}},\label{eq:I_DP}\end{equation} where $(1-\alpha_{4})$ is the confidence level for pointwise intervals for expected response differences and $F_{1,d;1-\alpha_{4}}$ is the appropriate $F$ distribution quantile. This minimizes the average, over the design region $\mathcal{X}$, of the width of pointwise confidence intervals for the mean response if we use equation (\ref{eq:ytilde}) for the predictions.
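Under the same conventions as the sketch above, $\mathbf{\mathcal{M}}_0$ is simply $\mathbf{\mathcal{M}}$ with its first row and column set to zero, so the $I_D$ and $(I_DP)(\alpha_4)$ values follow directly (again an illustrative sketch in base R, reusing \verb"expand_f" and \verb"moment_matrix"; not package code):
\begin{verbatim}
id_criterion <- function(X, M) {               # average variance of yhat(x)-yhat(0):
  M0 <- M                                      # tr{M_0 (X'X)^-1}, where M_0 is M
  M0[1, ] <- 0                                 # with first row and column zeroed
  M0[, 1] <- 0
  sum(diag(M0 %*% solve(crossprod(X))))
}
idp_criterion <- function(X, M, d, alpha4 = 0.05) {
  id_criterion(X, M) * qf(1 - alpha4, 1, d)    # (I_DP): interval version
}
\end{verbatim}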
\subsection{Compound criteria}
\cite{hardin1993} and \cite{jonesgoos2012} showed that $I$-optimum designs have smaller losses in efficiency for parameter estimates than $D$-optimum designs have in terms of prediction efficiency. Whereas these authors preferred $I$-optimality on this basis, it is more desirable to build both parameter estimation and prediction into the optimality criterion. This, together with the commonly accepted view that a design should have several good properties, suggests investigating a compound criterion for prediction as well as estimation. To that end we extend the compound criteria of \cite{gilmourtrinca2012} in order to take into account predictions of the response as well as expected differences in the response with respect to the experimental region center. Thus we simply divide \cite{gilmourtrinca2012}'s equation (5) by \begin{equation}
{F^{\kappa_6}_{1,d;1-\alpha_{3}}} {F^{\kappa_8}_{1,d;1-\alpha_{4}}}\text{tr}\left\{\mathbf{\mathcal{M}}(\mathbf{X}^\prime\mathbf{X})^{-1}\right\}^{\kappa_5+\kappa_6}\text{tr}\left\{\mathbf{\mathcal{M}}_0(\mathbf{X}^\prime\mathbf{X})^{-1}\right\}^{\kappa_7+\kappa_8},
\end{equation} where $\kappa_5,~\kappa_6,~\kappa_7~\text{and}~\kappa_8$ are the priority weights for point response prediction, interval response prediction, point response difference prediction and interval response difference prediction, respectively, leading to the more general compound criteria, after ignoring constant terms, given by \begin{equation}
\frac{F^{-\kappa_1}_{p-1,d;1-\alpha_1}{F^{-\kappa_2}_{1,d;1-\alpha_2}}|\mathbf{X}_0^\prime\mathbf{Q}\mathbf{X}_0|^\frac{\kappa_0+\kappa_1}{p-1} (n-d)^{\kappa_4}{F^{-\kappa_6}_{1,d;1-\alpha_3}} {F^{-\kappa_8}_{1,d;1-\alpha_4}}}{ \text{tr}\{\mathbf{W}(\mathbf{X}^\prime\mathbf{X})^{-1}\}^{\kappa_2+\kappa_3}\text{tr}\left\{\mathbf{\mathcal{M}}(\mathbf{X}^\prime\mathbf{X})^{-1}\right\}^{\kappa_5+\kappa_6}\text{tr}\left\{\mathbf{\mathcal{M}}_0(\mathbf{X}^\prime\mathbf{X})^{-1}\right\}^{\kappa_7+\kappa_8}}, \label{eq:CP} \end{equation} where $\sum_{i=0}^8\kappa_i=1$ and $\mathbf{X}_0$ is the $n\times(p-1)$ matrix equal to the $\mathbf{X}$ matrix except that the column of 1's corresponding to the intercept is removed and $\mathbf{Q}=\mathbf{I}-\mathbf{1}\mathbf{1}^\prime/n$ is of dimension $n\times n$. Note that we have included in the formula the $D_S$ criterion. By allowing $\kappa_0>0$ we can use the $D_S$ property to reflect parameter point estimation if desired. Note that the formula allows $L$ type criteria, the $A$ criterion being a particular case. For second-order polynomials we recommend the use of weights through the $\mathbf{W}$ matrix in order to adjust the scale for the different types of parameter in the polynomial, i.e.\ linear, quadratic and interaction parameters.
To find a compromise design by maximizing (\ref{eq:CP}) we can use any algorithm proposed in the literature for factorial designs, such as point- or coordinate-exchange type algorithms.
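As one possible illustration of such a search, the skeleton below performs a basic point exchange over a candidate set of treatments, maximizing a simplified two-term version of (\ref{eq:CP}) with only $\kappa_1$ and $\kappa_7$ nonzero (the setting used for design 8 in Example 1 below). It reuses the helpers sketched earlier; the function names and the simplifications are ours, and this is not the authors' implementation.
\begin{verbatim}
## Sketch: point-exchange search for a simplified compound criterion
## (weights kappa_1 on (DP)_S and kappa_7 on I_D only; illustrative).
pe_df <- function(design) {                    # pure error df: runs minus
  nrow(design) - nrow(unique(design))          # distinct treatments
}
compound_score <- function(design, M, k1 = 0.5, k7 = 0.5, alpha1 = 0.05) {
  X <- t(apply(design, 1, expand_f))
  d <- pe_df(design)
  if (d == 0) return(-Inf)                     # no pure error df: criterion undefined
  p  <- ncol(X)
  X0 <- X[, -1, drop = FALSE]
  Q  <- diag(nrow(X)) - 1 / nrow(X)            # Q = I - 11'/n
  dps <- determinant(t(X0) %*% Q %*% X0)$modulus / (p - 1) -
         log(qf(1 - alpha1, p - 1, d))         # log-scale (DP)_S part
  k1 * dps - k7 * log(id_criterion(X, M))      # to be maximized
}
point_exchange <- function(design, cand, M, n_pass = 10) {
  best <- compound_score(design, M)
  for (pass in 1:n_pass) {
    improved <- FALSE
    for (i in 1:nrow(design)) for (j in 1:nrow(cand)) {
      trial <- design; trial[i, ] <- cand[j, ]
      sc <- compound_score(trial, M)
      if (sc > best) { design <- trial; best <- sc; improved <- TRUE }
    }
    if (!improved) break
  }
  design
}
\end{verbatim}
Here \verb"design" and \verb"cand" are matrices of factor levels (treatments in rows); the exchange accepts any swap of a design point for a candidate point that improves the score and stops when a full pass gives no improvement.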
\section{Design prediction capability}\label{sec:PC} Many of the measures proposed for design construction and evaluation, e.g.\ those of the type presented in Section \ref{sec:DC}, are global measures that try to convey in a single number all the information available in the design (see the discussion in \cite{anderson-cookmontg2009}). Depending on the objectives of the experiment, inspection of only these global measures may not suffice for design choice. This is particularly true for prediction since a design may show a reasonable performance globally by performing extremely well in one portion of the region but badly in another portion that could perhaps be of more interest. Thus, for inspection of design capabilities with respect to prediction, several valuable graphical approaches have been proposed. \cite{g-j&m1989} proposed the variance dispersion graphs (VDGs) that plot the maximum, mean and minimum variances for predictions of the response calculated over various spheres within the region of interest. For a scaled region so that the maximum point is at distance 1 from the center, the radius $r$ varies from 0 to 1. From \cite{g-j&m1989}, for the sphere $U_r$ $(U_r = \{\mathbf{x}: \sum_{i=1}^qx_i^2=r^2\}, 0\le r\le 1)$, the mean, or integrated, variance of predictions is the spherical variance defined by \begin{equation} V^r\propto\Psi^{-1}_r\int_{\mathbf{x}\in U_r}\mathbf{f}(\mathbf{x})^\prime(\mathbf{X}^\prime\mathbf{X})^{-1}\mathbf{f}(\mathbf{x})d\mathbf{x}=\text{tr}\{\mathcal{M}_r(\mathbf{X}^\prime\mathbf{X})^{-1}\}, \end{equation} where $\Psi_r = \int_{\mathbf{x}\in U_r}d\mathbf{x}$ and $\mathcal{M}_r$ is the matrix of moments for the region $U_r$. \cite{vining1993} gave Fortran code to calculate and plot the maximum, minimum and average variances, for given radius, against the distance from the center. VDGs allow visualization of prediction stability over the region and prediction performance of the design in a more informative way than single-valued measures. For cuboidal regions, average variances are not calculated and the maximum and minimum variances are searched over restricted hyperspheres when their radii extend beyond the hypercube. The VDG methodology was extended for inspection of variances of response differences by the introduction of difference variance dispersion graphs (DVDGs) by \cite{trincagilmour1999}. For the sphere $U_r$, the mean or integrated variance of differences between predictions at two points, $\mathbf{x}\in \mathcal{X}$ and the design center, is defined by \begin{equation} DV^r\propto\Psi^{-1}_r\int_{\mathbf{x}\in U_r}(\mathbf{f}(\mathbf{x})-\mathbf{f}(\mathbf{0}))^\prime(\mathbf{X}^\prime\mathbf{X})^{-1}(\mathbf{f}(\mathbf{x})-\mathbf{f}(\mathbf{0}))d\mathbf{x}=\text{tr}\{\mathcal{M}_{0r}(\mathbf{X}^\prime\mathbf{X})^{-1}\}, \end{equation} where $\mathcal{M}_{0r}$ is the matrix $\mathcal{M}_r$ with first row and first column set to zero.
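The spherical summaries used in a VDG can be approximated by evaluating the prediction variance at many random directions on each sphere $U_r$. The sketch below (our own helper, reusing \verb"expand_f"; it is not the \verb"dispersion" package) returns the minimum, mean and maximum for a grid of radii; replacing $\mathbf{f}(\mathbf{x})$ by $\mathbf{f}(\mathbf{x})-\mathbf{f}(\mathbf{0})$ gives the DVDG version.
\begin{verbatim}
## Sketch: VDG summaries (min/mean/max prediction variance) on spheres U_r.
vdg_values <- function(X, q, radii = seq(0, 1, by = 0.05), N = 5000) {
  XtX_inv <- solve(crossprod(X))
  t(sapply(radii, function(r) {
    U <- matrix(rnorm(N * q), ncol = q)
    U <- U / sqrt(rowSums(U^2)) * r            # N random points on the sphere U_r
    v <- apply(U, 1, function(x) {
      f <- expand_f(x)                         # for the DVDG use f - expand_f(rep(0, q))
      drop(t(f) %*% XtX_inv %*% f)             # prediction variance / sigma^2
    })
    c(radius = r, min = min(v), mean = mean(v), max = max(v))
  }))
}
\end{verbatim}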
Because, for each design, the VDG and DVDG present three (spherical region) or two (cuboidal region) lines, it is difficult to compare more than a very few designs in the same plot. Another drawback of these graphs is that they ignore the relative volume associated with the sphere $U_r$ and may lead to misleading interpretations. The situation is more serious for $q\ge4$. A more recently preferred display is the fraction of design space (FDS) plot proposed by \cite{zahran2003}. The FDS plot shows the variance against the relative volume of the region that has prediction variance at or below a given value.
The FDS plot can be easily extended to difference fraction of design space (DFDS) plots, that is, the fraction of design space for variances of the estimated differences between $\hat{y}(\mathbf{x})$ and $\hat{y}(\mathbf{x}_0)$. The usual method to obtain the information for these graphs is the one outlined in \cite{goosjones2011book}, and we use it to obtain FDS and DFDS plots. A very large sample, of size $N$ points, is taken randomly from $\mathcal{X}$ and $v_j=\mathbf{f}(\mathbf{x}_j)^\prime (\mathbf{X}^\prime\mathbf{X})^{-1}\mathbf{f}(\mathbf{x}_j)$ for FDS or $vd_j=(\mathbf{f}(\mathbf{x}_j)-\mathbf{f}(\mathbf{x}_0))^\prime (\mathbf{X}^\prime\mathbf{X})^{-1}(\mathbf{f}(\mathbf{x}_j)-\mathbf{f}(\mathbf{x}_0))$ for DFDS are calculated for $j=1,~2,~\ldots,~N$ ($\mathbf{x}_0$ is fixed at the desired treatment; here we use, as before, $\mathbf{x}_0=\mathbf{0}$). Then these values are sorted such that $v_{(j)}$ (or $vd_{(j)}$) is in the $j^{th}$ position. The graph is simply the plot of $v_{(j)}$ (or $vd_{(j)}$) against $j/(N+1)$.
We suggest and use an alternative to the VDG and DVDG in which the radius, or distance from the design center, is replaced by the relative volume, with respect to the whole design region, of the region inside the hypersphere of that radius. This is particularly useful because it adds information that the FDS does not show, namely in which parts of the region the design has which properties.
The calculation of the values for constructing VDG, DVDG, FDS and DFDS plots is available in the R package \verb"dispersion" \citep{oliveira_cesar2014}. Versions of these graphs to explore interval prediction properties are easily obtained by multiplying $v_{(j)}$ or $vd_{(j)}$ by $F_{1,d;1-\alpha}$ for some suitable choice of $\alpha$.
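Following the recipe just described, a bare-bones FDS/DFDS computation looks as follows (an illustrative sketch reusing \verb"expand_f"; the \verb"dispersion" package provides a fuller implementation). The interval versions are obtained by the indicated rescaling, and plotting square roots of the sorted values gives the standard-error versions used in the examples below.
\begin{verbatim}
## Sketch: FDS (and DFDS) values for a design X over the cube [-1,1]^q.
fds_values <- function(X, q, N = 1e4, difference = FALSE,
                       interval = FALSE, d = NULL, alpha = 0.05) {
  XtX_inv <- solve(crossprod(X))
  f0 <- expand_f(rep(0, q))
  v <- replicate(N, {
    f <- expand_f(runif(q, -1, 1))
    if (difference) f <- f - f0                # DFDS: variance of yhat(x)-yhat(0)
    drop(t(f) %*% XtX_inv %*% f)
  })
  if (interval) v <- v * qf(1 - alpha, 1, d)   # interval version
  list(fraction = (1:N) / (N + 1), value = sort(v))
}
## e.g.: out <- fds_values(X, q = 3, interval = TRUE, d = 12)
##       plot(out$fraction, sqrt(out$value), type = "l")   # s.e. scale
\end{verbatim}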
\section{Examples}\label{sec:Ex} In this section we explore the potential of the proposed compound criteria for constructing designs for two experiments. We focus on $D_S$, $(DP)_S$ and prediction efficiencies for constructing the designs. For interval estimation criteria we used $\alpha=0.05$ throughout. The search procedure uses a point exchange algorithm. We further evaluate the prediction capabilities of the designs using several versions of the graphs described in Section \ref{sec:PC}. In the displays we use the standard error (s.e.) instead of the variance scale in order to discriminate better between designs, since most variances are less than 1. The newly proposed plots are presented in the paper, while slight variations of the old ones are included in the Supplementary Material.
\subsection{Example 1: Cassava bread recipe} \cite{esc} performed experiments in order to gain knowledge for a gluten-free bread recipe using cassava flour for people with coeliac disease. One of the experiments used $n=26$ experimental units to study the effects of $q=3$ factors: the amount of powder albumen ($X_1$), the amount of yeast ($X_2$) and the amount of cassava flour ($X_3$). Other ingredients and factors associated with the mixing and baking process were kept constant. The experimental region was the cube defined by $10\le x_1\le 30$\,g, $5\le x_2\le 15$\,g and $45\le x_3\le 65$\,g, and the experimenter decided to use a modified central composite design (CCD) with four center runs and the factorial part duplicated. One objective was to estimate optimum quantities of the ingredients based on some organoleptic characteristics and the primary model considered was the second-order polynomial with $p=10$ regression parameters. Note that the full three-level factorial would use 27 runs and would allow no pure error degrees of freedom. Alternative designs for this experiment were given by \cite{gilmourtrinca2012}, using the inference based and compound criteria, and in \cite{borrotti2016}, using the multi-objective algorithm MS-TPLS, for the two sets of properties $D$, $A$ and $I$ and $D_S$, $A_S$ and $I_D$. \begin{table}[H]
\caption{\label{tab:desEx1_1} Alternative designs for Example 1 ($n=26$, $q=3$, $p=10$ in cubic region)}
\centering
\renewcommand{.7}{.7}
\begin{tabular}{rrrcrrrcrrrcrrrcrrr}
\toprule
\multicolumn{19}{c}{Design}\\ \multicolumn{3}{c}{4}&&\multicolumn{3}{c}{5}&&\multicolumn{3}{c}{6}&&\multicolumn{3}{c}{7} &&\multicolumn{3}{c}{8} \\
\multicolumn{3}{c}{{$I$} }&&\multicolumn{3}{c}{{$(IP)$}}&&\multicolumn{3}{c}{$I_D$}&&\multicolumn{3}{c}{$(I_DP)$}&&
\multicolumn{3}{c}{$\kappa_1=\kappa_7=.5$} \\
\cmidrule(lr){1-3}\cmidrule(lr){5-7}\cmidrule(lr){9-11}\cmidrule(lr){13-15}\cmidrule(lr){17-19}
$X_1$&$X_2$&$X_3$ &&$X_1$&$X_2$&$X_3$&&$X_1$&$X_2$&$X_3$ &&$X_1$&$X_2$&$X_3$ && $X_1$&$X_2$&$X_3$ \\ \cmidrule(lr){1-3}\cmidrule(lr){5-7}\cmidrule(lr){9-11}\cmidrule(lr){13-15}\cmidrule(lr){17-19}
-1 & -1 & -1 && -1 & -1 & -1&&-1 & -1 & -1 && -1 & -1 & -1 && -1 & -1 & -1 \\
-1 & -1 & 1 && -1 & -1 & 1 &&-1 & -1 & -1 && -1 & -1 & -1 && -1 & -1 & -1 \\
-1 & 1 & -1 && -1 & 1 & -1&&-1 & -1 & 1 && -1 & -1 & 1 && -1 & -1 & 1 \\
-1 & 1 & 1 && -1 & 1 & 1 &&-1 & 1 & -1 && -1 & 1 & -1 && -1 & -1 & 1 \\
1 & -1 & -1 && 1 & -1 & -1&&-1 & 1 & 1 && -1 & 1 & 1 && -1 & 1 & -1 \\
1 & -1 & 1 && 1 & -1 & 1 && 1 & -1 & -1 && -1 & 1 & 1 && -1 & 1 & -1 \\
1 & 1 & -1 && 1 & 1 & -1&& 1 & -1 & 1 && 1 & -1 & -1 && -1 & 1 & 1 \\
1 & 1 & 1 && 1 & 1 & 1 && 1 & 1 & -1 && 1 & -1 & -1 && -1 & 1 & 1 \\
-1 & -1 & 0 && -1 & 0 & 0 && 1 & 1 & 1 && 1 & -1 & 1 && 1 & -1 & -1 \\
-1 & 1 & 0 && -1 & 0 & 0 &&-1 & -1 & 0 && 1 & -1 & 1 && 1 & -1 & -1 \\
1 & -1 & 0 && -1 & 0 & 0 &&-1 & 1 & 0 && 1 & 1 & -1 && 1 & -1 & 1 \\
1 & 1 & 0 && 1 & 0 & 0 && 1 & -1 & 0 & & 1 & 1 & -1 && 1 & -1 & 1 \\
-1 & 0 & -1 && 1 & 0 & 0 && 1 & 1 & 0 & & 1 & 1 & 1 && 1 & 1 & -1 \\
-1 & 0 & 1 && 1 & 0 & 0 &&-1 & 0 & -1 & & 1 & 1 & 1 && 1 & 1 & -1 \\
1 & 0 & -1 && 0 & -1 & 0 &&-1 & 0 & 1 & & -1 & 0 & 0 && 1 & 1 & 1 \\
1 & 0 & 1 && 0 & -1 & 0 && 1 & 0 & -1 && -1 & 0 & 0 && 1 & 1 & 1 \\
0 & -1 & -1 && 0 & -1 & 0 && 1 & 0 & 1 && 1 & 0 & 0 && -1 & 0 & 0 \\
0 & -1 & 1 && 0 & 1 & 0 && 0 & -1 & -1 && 1 & 0 & 0 && -1 & 0 & 0 \\
0 & 1 & -1 && 0 & 1 & 0 && 0 & -1 & 1 && 0 & -1 & 0 && 1 & 0 & 0 \\
0 & 1 & 1 && 0 & 1 & 0 && 0 & 1 & -1 && 0 & -1 & 0 && 0 & -1 & 0 \\
0 & 0 & 0 && 0 & 0 & -1&& 0 & 1 & 1 && 0 & 1 & 0 && 0 & -1 & 0 \\
0 & 0 & 0 && 0 & 0 & -1&& 0 & 0 & 0 && 0 & 1 & 0 && 0 & 1 & 0 \\
0 & 0 & 0 && 0 & 0 & -1&& 0 & 0 & 0 && 0 & 0 & -1 && 0 & 0 & -1 \\
0 & 0 & 0 && 0 & 0 & 1 && 0 & 0 & 0 && 0 & 0 & -1 && 0 & 0 & -1 \\
0 & 0 & 0 && 0 & 0 & 1 && 0 & 0 & 0 && 0 & 0 & 1 && 0 & 0 & 1 \\
0 & 0 & 0 && 0 & 0 & 1 && 0 & 0 & 0 && 0 & 0 & 1 && 0 & 0 & 1 \\
\toprule
\end{tabular} \end{table} \begin{table}[h] \scalefont{.8} \caption{\label{tab:effEx1}Efficiencies of alternative designs for Example 1 ($n=26$, $q=3$, $p=10$ in cubic region)}
\centering
\renewcommand{0.03cm}{0.1cm} \begin{tabular}{cccrrrrrrrrrrrrrrrrrrrrrrrrrrrr} \toprule & & &\multicolumn{8}{c}{Efficiency}\\\cline{4-11} Design&Criterion
&\multicolumn{1}{c}{df(PE,~ LoF)$^{\dagger}$}&\multicolumn{1}{c}{$D_S$}&\multicolumn{1}{c}{$(DP)_S$}
&\multicolumn{1}{c}{$A_S$}&\multicolumn{1}{c}{$(AP)_S$}&\multicolumn{1}{c}{$I$}
&\multicolumn{1}{c}{$(IP)$}&\multicolumn{1}{c}{$I_D$}&\multicolumn{1}{c}{$(I_DP)$} \\
\midrule 1&{{$D_S$, $A_S$}} &(~9,~~7)& 100.00& 86.77& 100.00& 95.50& 75.80& 72.32& 91.93& 87.00\\ 2&{{$(DP)_S$}} & (15,~~1)& 93.81& 100.00& 87.12& 93.72& 69.62& 74.82& 83.47& 88.98\\ 3&{{$(AP)_S$}} & (12,~~4)& 98.79& 97.45& 97.13& 100.00& 72.30& 74.36& 89.23& 91.02\\ 4&{{$I$}} & (~5,~11)& 90.71& 52.42& 87.71& 64.87& 100.00& 73.88& 99.87& 73.19\\ 5&{{$(IP)$}} & (12,~~4)& 79.79& 78.70& 72.80& 74.95& 97.23& 100.00& 87.47& 89.23\\ 6&{{$I_D$}} & (~5,~11)& 93.36& 53.96& 90.67& 67.06& 97.22& 71.83& 100.00& 73.28\\ 7&{{$(I_DP)$}} & (12,~~4)& 95.29& 93.99& 92.11& 94.82& 92.00& 94.63& 98.03& 100.00\\ 8&$\kappa_1=\kappa_7=0.5$ &(12,~4) & 98.68& 97.34& 96.96& 99.82& 84.34& 86.74& 96.77& 98.71\\ 9&{{$I_D,D_S,A_S$-$sym$}} & (~5,~11)& 98.13& 56.71& 96.83& 71.62& 85.89& 63.46& 97.01& 71.09\\ \bottomrule \multicolumn{10}{l}{$ \dagger$df(PE,~LoF): degrees of freedom for pure error, degrees of freedom for lack of fit.} \\
\end{tabular} \end{table}
Here we explore the prediction performances of some of the previously published designs and construct a few other alternatives based on estimation and prediction properties. The new designs are presented in Table \ref{tab:desEx1_1}. In Table \ref{tab:effEx1} we show the properties of the designs in terms of the usual single-valued criteria and the new criteria introduced in Section \ref{sec:DC}. Designs 1 to 3 were presented in \cite{gilmourtrinca2012}; design 9 is the best design that \cite{borrotti2016} found for the properties $D_S$, $A_S$ and $I_D$, which they called the $I_D,D_S,A_S$-$symmetrical$ design. Designs 4 to 8 are the new designs, the first four each based on a single prediction property ($I$, $(IP)$, $I_D$ and $(I_DP)$) and design 8 constructed by using a compound criterion with $\kappa_1=\kappa_7=0.5$ in equation (\ref{eq:CP}), that is, giving equal priority to $(DP)_S$ and point predictions of differences in response.
We note that, as the number of runs is not too small for the model specified, all designs allow for pure error degrees of freedom, with designs 4, 6 and 9 ($I$, $I_D$ and $I_D,D_S,A_S$-$sym$) being the least attractive in this respect. Comparisons between designs 1 and 4 confirm the observation of \cite{jonesgoos2012} that the losses of $I$-optimum designs in terms of efficiencies for estimation, with respect to the $D_S$ and $A_S$ criteria, are smaller than the losses of efficiencies in terms of prediction of $D_S$- or $A_S$-optimum designs. Similar lessons can be drawn when we compare designs 2 and 5 ($(DP)_S$- and $(IP)$-optimum designs) but now the differences are smaller. However, the results contradict the suggestion of Goos, in the discussion of \cite{gilmourtrinca2012}, that $I$-optimal designs usually have more replicates than $D$-optimal designs.
In general, all designs based on a single property have low performance on at least one property, except the $(I_DP)$-optimum design, which has a minimum efficiency of 92\%. However, if we are interested in inferences for the parameters and predictions of differences in response, design 8 (obtained by the compound criterion, considering equal weights for $(DP)_S$ and $I_D$) has very high efficiencies for all properties. Surprisingly, design 8 outperforms design 9, the $I_D,D_S,A_S-sym$ multiple objective design from \cite{borrotti2016}, except for the $I$ and $I_D$ properties, although the maximum difference between them in these two properties is only about 1.5\%. For properties like $(DP)_S$, $(AP)_S$, $(I_DP)$ and $(IP)$ the advantage in using design 8 is overwhelming, with efficiency gains of 40.63, 28.20, 27.62 and 23.28\%, respectively. It is interesting to note that design 8 is very close to the $(AP)_S$-optimum design (design 3) in terms of pure error and parameter estimation properties but it is considerably superior in terms of overall predictions.
\begin{figure}
\caption{Standard error dispersion graphs (SEDG) of response predictions (interval), for designs in Example 1. Left: distance. Right: relative volume.}
\label{graph:vdgpeEx1}
\end{figure} \begin{figure}
\caption{Standard error dispersion graphs of differences (DSEDG) in response predictions (interval), for designs for Example 1. Left: distance. Right: relative volume.}
\label{graph:dvdgpeEx1}
\end{figure}
\begin{figure}
\caption{FDS plots, in terms of s.e., for designs in Example 1. Left: response interval predictions. Right: difference interval predictions.}
\label{graph:fdspeEx1}
\end{figure}
Figures \ref{graph:vdgpeEx1}-\ref{graph:fdspeEx1} (and Figures A-C in the Suppl.) show the prediction performances of the designs over the unit cube using standard error dispersion graphs (SEDGs). For the dispersion graphs (Figure A, left), the usual pattern is observed, i.e.\ the $(AP)_S$-, $D_S$- and $(DP)_S$-optimum designs have the highest s.e.\ at the center in order to control the precision in the corners. Several designs show two spikes around the relative distances of points in the cube face ($\approx 0.58$) and of points in the edges ($\approx 0.82$), with those of the $(DP)_S$-optimum design being most prominent. Note, however, that this design has the smallest minimum s.e.\ further from the center. On the other hand, the $I$-, $(IP)$- and $I_D$-optimum designs have the smallest s.e.'s in the middle but the s.e.'s are high for the portion away from the center. Our compound criterion design ($\kappa_1=\kappa_7=0.5$) does compromise and has similar performances to the $I_D,D_S,A_S-sym$ design. Note, however, its superiority when interval prediction of responses is considered (Figure \ref{graph:vdgpeEx1}). The graph on the right-hand side of Figure \ref{graph:vdgpeEx1} presents the same information, but plotted against the relative volume contained within a radius, rather than its distance from the center. This variation of the plot seems more useful since it discriminates better between the designs.
The ordering of the designs in terms of response predictions is better summarized through the FDS graphs in Figures C (right) and \ref{graph:fdspeEx1} (left). It is interesting to note that the performance of the $(DP)_S$-optimum design is not as bad as suspected before. For interval predictions it outperforms design $I_D,D_S,A_S-sym$ in almost the whole region and outperforms the $D_S$-, $(AP)_S$- and $I_D$-optimum designs in about $60\%$ of the region. Again, our compound criterion design compromises while the $(IP)$- and $(I_DP)$-optimum designs show the best performances overall.
The designs for Example 1 are quite homogeneous in terms of predictions of differences in the responses (see Figures \ref{graph:dvdgpeEx1}, \ref{graph:fdspeEx1} (right) and C (Supp) and the last two columns of Table \ref{tab:effEx1}). But we can still detect the superiority of our compound design and the $(I_DP)$-optimum design in the whole region.
\subsection{Example 2: $q=5$ factors in spherical region}
\begin{table}[hp]
\scalefont{0.8}
\caption{\label{tab:desEx2_1} Alternative designs for Example 2 ($n=30$, $q=5$, $p=21$ in spherical region)}
\centering
\renewcommand{.7}{.7}
\renewcommand{0.03cm}{0.1cm}
\begin{tabular}{rrrrrccrrrrrccrrrrr}
\toprule
\multicolumn{19}{c}{Design}\\ \multicolumn{5}{c}{1}&&&\multicolumn{5}{c}{2}&&&\multicolumn{5}{c}{3} \\
\multicolumn{5}{c}{$D_S/I $}&&&\multicolumn{5}{c}{$(DP)_S$} &&&\multicolumn{5}{c}{$A_S$} \\
\cmidrule(lr){1-5}\cmidrule(lr){8-12}\cmidrule(lr){15-19}
$X_1$ & $X_2$ & $X_3$ & $X_4$ & $X_5$ &&& $X_1$ & $X_2$ & $X_3$ & $X_4$ & $X_5$ &&& $X_1$ & $X_2$ & $X_3$ & $X_4$ & $X_5$ \\
\cmidrule(lr){1-5}\cmidrule(lr){8-12}\cmidrule(lr){15-19}
-1.12 & 1.12 & -1.12 & 0 & -1.12 &&& -1.29 & -1.29 & 0 & 1.29 & 0 &&& 0 & -1.12 & 1.12 & -1.12 & -1.12 \\
-1.12 & 1.12 & 1.12 & 0 & -1.12 &&& -1.29 & -1.29 & 0 & -1.29 & 0 &&& 0 & -1.12 & 1.12 & 1.12 & -1.12 \\
1.12 & 1.12 & -1.12 & 0 & -1.12 &&& -1.29 & 1.29 & 0 & -1.29 & 0 &&& 0 & 1.12 & 1.12 & -1.12 & -1.12 \\
1.12 & 1.12 & 1.12 & 0 & -1.12 &&& -1.29 & 1.29 & 0 & 1.29 & 0 &&& 0 & 1.12 & 1.12 & 1.12 & -1.12 \\ -1.29 & 1.29 & 0 & 0 & 1.29 &&& -1.29 & 1.29 & 0 & 1.29 & 0 &&& 1.29 & -1.29 & -1.29 & 0 & 0 \\ 1.29 & 1.29 & 0 & 0 & 1.29 &&& 1.29 & -1.29 & -1.29 & 0 & 0 &&& 1.29 & -1.29 & 1.29 & 0 & 0 \\ -1.29 & 0 & -1.29 & 1.29 & 0 &&& 1.29 & 1.29 & -1.29 & 0 & 0 &&& 1.29 & 1.29 & -1.29 & 0 & 0 \\ -1.29 & 0 & 1.29 & 1.29 & 0 &&& 1.29 & 1.29 & -1.29 & 0 & 0 &&& 1.29 & 1.29 & 1.29 & 0 & 0 \\ 1.29 & 0 & -1.29 & 1.29 & 0 &&& 1.29 & 0 & 1.29 & 1.29 & 0 &&& -1.29 & -1.29 & 0 & -1.29 & 0 \\ 1.29 & 0 & 1.29 & 1.29 & 0 &&& 1.29 & 0 & 1.29 & -1.29 & 0 &&& -1.29 & -1.29 & 0 & 1.29 & 0 \\ -1.29 & 0 & 0 & -1.29 & -1.29 &&& 1.29 & 0 & 1.29 & 1.29 & 0 &&& -1.29 & 1.29 & 0 & -1.29 & 0 \\ -1.29 & 0 & 0 & -1.29 & 1.29 &&& 0 & -1.29 & 1.29 & 0 & -1.29 &&& -1.29 & 1.29 & 0 & 1.29 & 0 \\ 1.29 & 0 & 0 & -1.29 & -1.29 &&& 0 & -1.29 & 1.29 & 0 & 1.29 &&& -1.29 & 0 & -1.29 & 0 & -1.29 \\ 1.29 & 0 & 0 & -1.29 & 1.29 &&& 0 & -1.29 & 1.29 & 0 & 1.29 &&& -1.29 & 0 & -1.29 & 0 & 1.29 \\ 0 & -1.29 & -1.29 & -1.29 & 0 &&& 0 & 1.29 & 1.29 & 0 & -1.29 &&& -1.29 & 0 & 1.29 & 0 & -1.29 \\ 0 & -1.29 & 1.29 & -1.29 & 0 &&& 0 & 1.29 & 1.29 & 0 & 1.29 &&& -1.29 & 0 & 1.29 & 0 & 1.29 \\ 0 & 1.29 & -1.29 & -1.29 & 0 &&& 0 & 1.29 & 1.29 & 0 & 1.29 &&& 1.29 & 0 & 0 & -1.29 & -1.29 \\ 0 & 1.29 & 1.29 & -1.29 & 0 &&& 0 & 0 & -1.29 & -1.29 & -1.29 &&& 1.29 & 0 & 0 & -1.29 & 1.29 \\ 0 & -1.29 & -1.29 & 0 & -1.29 &&& 0 & 0 & -1.29 & -1.29 & 1.29 &&& 1.29 & 0 & 0 & 1.29 & -1.29 \\ 0 & -1.29 & 1.29 & 0 & -1.29 &&& 0 & 0 & -1.29 & -1.29 & -1.29 &&& 1.29 & 0 & 0 & 1.29 & 1.29 \\ -1.58 & -1.58 & 0 & 0 & 0 &&& 0 & 0 & -1.29 & 1.29 & 1.29 &&& 0 & -1.29 & -1.29 & 0 & -1.29 \\ 1.58 & -1.58 & 0 & 0 & 0 &&& 0 & 0 & -1.29 & 1.29 & -1.29 &&& 0 & 1.29 & -1.29 & 0 & -1.29 \\ 0 & -1.58 & 0 & 1.58 & 0 &&& 0 & 0 & -1.29 & -1.29 & 1.29 &&& 0 & 0 & 1.29 & -1.29 & 1.29 \\ 0 & 1.58 & 0 & 1.58 & 0 &&& -1.58 & 0 & -1.58 & 0 & 0 &&& 0 & 0 & 1.29 & 1.29 & 1.29 \\ 0 & -1.58 & 0 & 0 & 1.58 &&& -1.58 & 0 & 1.58 & 0 & 0 &&& 0 & -1.58 & 0 & 0 & 1.58 \\ 0 & 0 & -1.58 & 0 & 1.58 &&& -1.58 & 0 & 1.58 & 0 & 0 &&& 0 & 1.58 & 0 & 0 & 1.58 \\ 0 & 0 & 1.58 & 0 & 1.58 &&& 1.58 & 0 & 0 & 0 & -1.58 &&& 0 & 0 & -1.58 & -1.58 & 0 \\ 0 & 0 & 0 & 1.58 & -1.58 &&& 1.58 & 0 & 0 & 0 & 1.58 &&& 0 & 0 & -1.58 & 1.58 & 0 \\ 0 & 0 & 0 & 1.58 & 1.58 &&& 1.58 & 0 & 0 & 0 & 1.58 &&& 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 &&& 0 & 0 & 0 & 0 & 0 &&& 0 & 0 & 0 & 0 & 0 \\ \toprule \end{tabular} \end{table}
\begin{table}[hp]
\scalefont{0.8}
\caption{\label{tab:desEx2_2} Alternative designs for Example 2 ($n=30$, $q=5$, $p=21$ in spherical region) }
\centering \renewcommand{.7}{.7} \renewcommand{0.03cm}{0.1cm} \begin{tabular}{rrrrrccrrrrrccrrrrr}
\toprule
\multicolumn{19}{c}{Design}\\ \multicolumn{5}{c}{4}&&&\multicolumn{5}{c}{5}&&&\multicolumn{5}{c}{7} \\
\multicolumn{5}{c}{$(AP)_S$}&&&\multicolumn{5}{c}{$(IP)$} &&&\multicolumn{5}{c}{$(I_DP)$} \\ \cmidrule(lr){1-5}\cmidrule(lr){8-12}\cmidrule(lr){15-19} $X_1$ & $X_2$ & $X_3$ & $X_4$ & $X_5$ &&& $X_1$ & $X_2$ & $X_3$ & $X_4$ & $X_5$ &&& $X_1$ & $X_2$ & $X_3$ & $X_4$ & $X_5$\\
\cmidrule(lr){1-5}\cmidrule(lr){8-12}\cmidrule(lr){15-19} -1&-1&-1&-1&-1&&&-1&-1&-1&-1&1&&&-1&-1&-1&-1&1\\ -1&-1&-1&1&1&&&-1&-1&-1&-1&1&&&-1&-1&-1&1&-1\\ -1&-1&1&-1&1&&&-1&-1&-1&1&-1&&&-1&-1&-1&1&-1\\ -1&-1&1&-1&1&&&-1&-1&1&-1&-1&&&-1&-1&1&-1&-1\\ -1&-1&1&1&-1&&&-1&-1&1&-1&-1&&&-1&-1&1&1&1\\ -1&1&-1&-1&1&&&-1&-1&1&1&1&&&-1&1&-1&-1&-1\\ -1&1&-1&1&-1&&&-1&1&-1&1&1&&&-1&1&-1&1&1\\ -1&1&1&-1&-1&&&-1&1&1&-1&1&&&-1&1&1&-1&1\\ -1&1&1&1&1&&&-1&1&1&1&-1&&&-1&1&1&1&-1\\ 1&-1&-1&-1&1&&&1&-1&-1&1&1&&&-1&1&1&1&-1\\ 1&-1&-1&1&-1&&&1&-1&1&-1&1&&&1&-1&-1&-1&-1\\ 1&-1&-1&1&-1&&&1&-1&1&-1&1&&&1&-1&-1&1&1\\ 1&-1&1&-1&-1&&&1&-1&1&1&-1&&&1&-1&1&-1&1\\ 1&-1&1&1&1&&&1&1&-1&-1&1&&&1&-1&1&-1&1\\ 1&1&-1&1&1&&&1&1&-1&1&-1&&&1&-1&1&1&-1\\ 1&1&-1&1&1&&&1&1&-1&1&-1&&&1&1&-1&-1&1\\ 1&1&1&-1&1&&&1&1&1&-1&-1&&&1&1&1&-1&-1\\ 1&1&1&1&-1&&&1&1&1&-1&-1&&&1&1&1&1&1\\ 1&1&1&1&-1&&&1&1&1&1&1&&&0&1.12&-1.12&1.12&-1.12\\ 1.12&1.12&-1.12&0&-1.12&&&1&1&1&1&1&&&0&1.12&-1.12&1.12&-1.12\\ 1.12&1.12&-1.12&0&-1.12&&&-1.12&1.12&-1.12&0&-1.12&&&2.24&0&0&0&0\\ 2.24&0&0&0&0&&&-1.12&1.12&-1.12&0&-1.12&&&0&-2.24&0&0&0\\ 0&-2.24&0&0&0&&&0&-1.12&-1.12&-1.12&-1.12&&&0&0&2.24&0&0\\ 0&0&2.24&0&0&&&0&-1.12&-1.12&-1.12&-1.12&&&0&0&0&2.24&0\\ 0&0&0&-2.24&0&&&2.24&0&0&0&0&&&0&0&0&0&2.24\\ 0&0&0&0&2.24&&&0&2.24&0&0&0&&&0&0&0&0&0\\ 0&0&0&0&0&&&0&0&-2.24&0&0&&&0&0&0&0&0\\ 0&0&0&0&0&&&0&0&0&-2.24&0&&&0&0&0&0&0\\ 0&0&0&0&0&&&0&0&0&0&2.24&&&0&0&0&0&0\\ 0&0&0&0&0&&&0&0&0&0&0&&&0&0&0&0&0\\
\toprule
\end{tabular} \end{table}
\begin{table}[hp]
\scalefont{.8}
\caption{\label{tab:desEx2_3} Alternative designs for Example 2 ($n=30$, $q=5$, $p=21$ in spherical region)}
\centering
\renewcommand{.7}{.7}
\renewcommand{0.03cm}{0.1cm} \begin{tabular}{rrrrrccrrrrrccrrrrr}
\toprule
\multicolumn{19}{c}{Design}\\ \multicolumn{5}{c}{8}&&&\multicolumn{5}{c}{9}&&&\multicolumn{5}{c}{10} \\
\multicolumn{5}{c}{$\kappa_1=.3;~\kappa_7=.7$}&&&\multicolumn{5}{c}{$\kappa_1=.1;~\kappa_7=.9$} &&&\multicolumn{5}{c}{$\kappa_0=.9;~\kappa_8=.1$
} \\
\cmidrule(lr){1-5}\cmidrule(lr){8-12}\cmidrule(lr){15-19}
$X_1$ & $X_2$ & $X_3$ & $X_4$ & $X_5$ &&& $X_1$ & $X_2$ & $X_3$ & $X_4$ & $X_5$ &&& $X_1$ & $X_2$ & $X_3$ & $X_4$ & $X_5$ \\ \cmidrule(lr){1-5}\cmidrule(lr){8-12}\cmidrule(lr){15-19} 1.12&-1.12&1.12&1.12&0&&&-1&-1&-1&1&-1&&&-1.29&-1.29&-1.29&0&0\\ 1.12&1.12&1.12&1.12&0&&&-1&-1&1&1&1&&&-1.29&-1.29&1.29&0&0\\ 1.12&-1.12&-1.12&0&1.12&&&-1&1&-1&1&1&&&-1.29&1.29&-1.29&0&0\\ 1.12&1.12&-1.12&0&1.12&&&-1&1&1&1&-1&&&-1.29&1.29&1.29&0&0\\ -1.29&-1.29&-1.29&0&0&&&-1&1&1&-1&1&&&1.29&-1.29&0&0&-1.29\\ -1.29&-1.29&1.29&0&0&&&1&-1&-1&-1&-1&&&1.29&-1.29&0&0&-1.29\\ -1.29&1.29&-1.29&0&0&&&1&-1&1&-1&1&&&1.29&-1.29&0&0&1.29\\ -1.29&1.29&1.29&0&0&&&1&1&-1&-1&1&&&1.29&-1.29&0&0&1.29\\ -1.29&0&0&-1.29&-1.29&&&1&1&1&1&1&&&1.29&1.29&0&0&-1.29\\ -1.29&0&0&-1.29&1.29&&&1&1&1&-1&-1&&&1.29&1.29&0&0&1.29\\ -1.29&0&0&1.29&-1.29&&&-1.29&-1.29&0&-1.29&0&&&-1.29&0&0&-1.29&1.29\\ -1.29&0&0&1.29&1.29&&&1.29&-1.29&0&1.29&0&&&-1.29&0&0&1.29&-1.29\\ -1.29&0&0&1.29&1.29&&&-1.29&0&-1.29&-1.29&0&&&-1.29&0&0&1.29&1.29\\ 1.29&0&-1.29&0&-1.29&&&1.29&0&-1.29&1.29&0&&&-1.29&0&0&-1.29&-1.29\\ 1.29&0&1.29&-1.29&0&&&-1.29&0&0&-1.29&-1.29&&&1.29&0&-1.29&-1.29&0\\ 0&-1.58&0&-1.58&0&&&1.29&0&0&1.29&-1.29&&&1.29&0&-1.29&1.29&0\\ 0&1.58&0&-1.58&0&&&0&-1.29&-1.29&0&1.29&&&1.29&0&1.29&-1.29&0\\ 0&-1.58&0&0&-1.58&&&0&-1.29&1.29&0&-1.29&&&1.29&0&1.29&-1.29&0\\ 0&1.58&0&0&-1.58&&&0&1.29&-1.29&0&-1.29&&&1.29&0&1.29&1.29&0\\ 0&0&-1.58&-1.58&0&&&-2.24&0&0&0&0&&&0&-1.58&0&-1.58&0\\ 0&0&-1.58&1.58&0&&&0&2.24&0&0&0&&&0&-1.58&0&1.58&0\\ 0&0&-1.58&1.58&0&&&0&0&2.24&0&0&&&0&1.58&0&-1.58&0\\ 0&0&1.58&0&-1.58&&&0&0&0&2.24&0&&&0&1.58&0&1.58&0\\ 0&0&1.58&0&1.58&&&0&0&0&0&2.24&&&0&0&-1.58&0&-1.58\\ 0&0&1.58&0&1.58&&&0&0&0&0&0&&&0&0&-1.58&0&1.58\\ 0&0&0&0&0&&&0&0&0&0&0&&&0&0&1.58&0&-1.58\\ 0&0&0&0&0&&&0&0&0&0&0&&&0&0&1.58&0&1.58\\ 0&0&0&0&0&&&0&0&0&0&0&&&0&0&0&0&0\\ 0&0&0&0&0&&&0&0&0&0&0&&&0&0&0&0&0\\ 0&0&0&0&0&&&0&0&0&0&0&&&0&0&0&0&0\\
\toprule
\end{tabular} \end{table}
\cite{jang2012} compared a few classical designs (CCD, Box-Behnken design) for five factors in a spherical region considering several run sizes. Here we constructed several optimum designs for $n=30$ and the second order model ($p=21$), and we compare them with the resolution-V half fraction CCD ($\alpha = \sqrt{5} \approx 2.236$) with four center runs (a construction sketch for this CCD is given after Table \ref{d11}). The designs are shown in Tables \ref{tab:desEx2_1}, \ref{tab:desEx2_2} and \ref{tab:desEx2_3}. Interestingly, we found that the $I_D$-optimum design is the resolution-V CCD, which is very unusual for an optimum design chosen from such a large candidate set. We found other equivalences among designs, for example the $D_S$-optimum design is also $I$-optimum, although, since we are using heuristics, we have no absolute guarantee that the true optimum designs for these criteria are equivalent or unique. Design 11 is also similar to a CCD, except that four factorial points are duplicated (see Table \ref{d11}), the center point is replicated four times, the axial pair is included for only one factor ($X_3$), and for each of the other factors only one axial point is included. \begin{table}
\scalefont{0.8}
\caption{\label{d11} Points from the $2^5$ factorial that are duplicated in Design 11 (Table \ref{tab:effEx2})}
\centering
\renewcommand{.7}{.7}
\renewcommand{0.03cm}{0.1cm}
\begin{tabular}{rrrrr}
\toprule
$X_1$ & $X_2$ & $X_3$ & $X_4$ & $X_5$ \\
\toprule
-1 & -1 & 1 & 1 & 1\\
-1 & 1 & -1 & 1 & 1\\
-1 & 1 & 1 & 1 & -1\\
1 & -1 & 1 & -1 & 1\\
\toprule \end{tabular} \end{table}
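For reference, the resolution-V half-fraction CCD with four center runs that reappears above as the $I_D$-optimum design can be written down directly. A minimal construction in base R is sketched below; the defining relation $X_5=X_1X_2X_3X_4$ is one of the two possible choices, and the object names are ours.
\begin{verbatim}
## Sketch: 30-run resolution-V half-fraction CCD for q = 5 factors,
## axial distance sqrt(5), four center runs (illustrative construction).
half_frac <- as.matrix(expand.grid(rep(list(c(-1, 1)), 4)))   # 2^(5-1) factorial part
half_frac <- cbind(half_frac, apply(half_frac, 1, prod))      # X5 = X1*X2*X3*X4
axial     <- rbind(diag(5), -diag(5)) * sqrt(5)               # 10 axial points
centre    <- matrix(0, nrow = 4, ncol = 5)                    # 4 center runs
ccd       <- rbind(half_frac, axial, centre)                  # 16 + 10 + 4 = 30 runs
colnames(ccd) <- paste0("X", 1:5)
\end{verbatim}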
\begin{table}[ht]
\scalefont{.75}
\caption{\label{tab:effEx2}Efficiencies of alternative designs for Example 2 ($n=30$, $q=5$, $p=21$ in spherical region)}
\centering
\renewcommand{.7}{.7}
\renewcommand{0.03cm}{0.03cm}
\begin{tabular}{cccrrrrrrrr}
\toprule
& & &\multicolumn{8}{c}{Efficiency}\\\cline{4-11}
Design&Criterion
&\multicolumn{1}{c}{df(PE,~ LoF)$^{\dagger}$}&\multicolumn{1}{c}{$D_S$}&\multicolumn{1}{c}{$(DP)_S$}
&\multicolumn{1}{c}{$A_S$}&\multicolumn{1}{c}{$(AP)_S$}&\multicolumn{1}{c}{$I$}
&\multicolumn{1}{c}{$(IP)$}&\multicolumn{1}{c}{$I_D$}&\multicolumn{1}{c}{$(I_DP)$} \\
\midrule
1&{{$D_S$, $I$}} & (0,~9)& 100.00& 0.00& 94.02& 0.00& 100.00& 0.00& 60.31& 0.00\\
2&{{$(DP)_S$}} & (9,~0)& 86.30 & 100.00& 74.33& 90.36& 74.73 & 97.81 & 52.80& 65.56\\
3&{{ $A_S$}} &(1,~8)& 98.16 & 1.35& 100.00& 3.85& 92.86& 3.85& 81.20& 3.10\\
4&{{$(AP)_S$}} &(8,~1) & 87.39 & 94.39 &85.48&100.00 &74.34&93.64&84.84 &98.28\\
5&{{$(IP)$}} & (8,~1)& 88.84& 95.95& 79.04& 92.47& 79.39& 100.00& 54.37& 62.99\\
6&{{CCD, $I_D$}} & (3,~6)& 96.96& 38.09& 95.25& 58.51& 91.82& 60.73& 100.00& 60.82\\
7&{$(I_DP)$}&(8,~1)&85.37&92.20&83.63&97.83&72.21&90.95&86.32&100.00\\
8&$\kappa_1=0.3;~\kappa_7=0.7$&(7,~2)&85.74&84.69&82.89&92.22&73.35&87.87&87.46&96.35\\
9&$\kappa_1=0.1;~\kappa_7=0.9$&(5,~4)&86.71&64.73&85.61&80.60&76.58&77.62&93.34&87.02\\
10&$\kappa_0=0.9;~\kappa_8=0.1$&(5,~4)&93.49& 69.79 & 91.88& 86.50& 84.56 & 85.72& 87.32& 81.40\\
11&$\kappa_0=\kappa_1=.2;~\kappa_3=\kappa_6=.3$&(7,~2)&90.35&89.25&88.94&98.95&78.50&94.03&88.57& 97.58\\
\bottomrule
\multicolumn{10}{l}{$ \dagger$df(PE,~LoF): degrees of freedom for pure error, degrees of freedom for lack of fit.} \\
\end{tabular} \end{table}
The efficiencies of several designs are shown in Table \ref{tab:effEx2}. The optimum designs from the usual criteria do not allow pure error estimation ($D_S/I$) or provide very few treatment replications ($A_S$ and CCD/$I_D$) and thus, efficiencies of these designs with respect to the modified criteria are zero or small. We note that the $(AP)_S$- and $(I_DP)$-optimum designs are similar and have reasonably high efficiencies generally, providing 8 degrees of freedom for error estimation but only one spare degree of freedom for adding a higher order term to the model in case the experimental results show lack of fit of the quadratic model. Design 11 behaves similarly but has the advantage of allowing two degrees of freedom for lack of fit. We tried many weight patterns for this example to obtain compromise designs, but many returned designs equivalent to those from some of the single-property criteria, and so we present results for only four of them, designs 8, 9, 10 and 11. From these we see that designs 9 and 10 balance the degrees of freedom between pure error and lack of fit better. Design 10, which focuses on parameter estimation through the $D_S$ criterion and interval estimation of differences in response, has reasonably high efficiencies overall.
\begin{figure}
\caption{SEDGs (interval) for designs in Example 2. Left: distance. Right: relative volume.}
\label{graph:vdgpeEx2}
\end{figure} \begin{figure}
\caption{DSEDGs (interval) for designs in Example 2. Left: distance. Right: relative volume.}
\label{graph:dvdgpeEx2}
\end{figure}
In Figures \ref{graph:vdgpeEx2}-\ref{graph:fdspeEx2} (and Figures D-F in the Suppl.) we show the prediction performances of the designs over the unit hypersphere. The $D_S/I$- and $A_S$-optimum designs are not shown in the graphs referring to interval predictions because they provide too few pure error degrees of freedom. Again we see that plotting the information against relative volume discriminates better between the designs. For response point prediction the $I_D$-, $(AP)_S$-, $(I_DP)$- and compound optimum designs (8, 9, 10 and 11) have much smaller s.e.'s at the design center. However, most of these designs become quite unstable away from the center. Of these, the $I_D$-optimum design is the most stable, followed by design 10 (left hand-side of Figure D). Similar behavior is observed for interval response prediction (left hand-side of Figure \ref{graph:vdgpeEx2}), although $I_D$ has poorer performance than before due to its few pure error degrees of freedom. The $(DP)_S$- and $(IP)$-optimum designs have very similar behavior in both graphs, with poor performances at the center of the region. Perhaps fairer comparisons are obtained from Figures \ref{graph:vdgpeEx2} and D, both right hand-side. In these graphs we can see that the advantages of designs $I_D$, 8, 9 and 11 are not so impressive since they are superior for only about $10\%$ of the region. Still, for point response predictions, their minimum values are smaller for about $30\%$ ($I_D$) and about $50\%$ (compound designs) but, because of their instability, we resort to Figure F (left hand-side) where we see lines crossing. The $D_S$/$I$-optimum design has the smallest slope but, in order to achieve that, it has higher s.e.'s than other designs such as $A_S$, $I_D$ and 10 in about $50\%$ of the region. For interval response predictions (Figure \ref{graph:vdgpeEx2}) the $A_S$-optimum design (not shown in the graph) and the $I_D$-optimum design are clearly no longer competitive. The $(DP)_S$- and $(IP)$-optimum designs have the smallest slopes but have higher s.e.'s than several other designs in about $40\%$ of the region. The $(AP)_S$- and $(I_DP)$-optimum designs perform quite well, followed by design 8 (Figure \ref{graph:fdspeEx2}, left).
\begin{figure}
\caption{FDS plots, in terms of s.e., for designs in Example 2.
Left: response interval prediction. Right: difference interval prediction.}
\label{graph:fdspeEx2}
\end{figure}
For point predictions of response differences (Figure E, left) we can identify the $(DP)_S$-, $(IP)$- and $D_S/I$-optimum designs as having high s.e.'s, even at their minimum, with the last being very stable. All other designs show smaller minimum s.e.'s. Again the $I_D$- and $A_S$-optimum designs are quite stable but perform badly for interval predictions (Figure \ref{graph:dvdgpeEx2}, left). The compound design 10 is perhaps attractive due to its smaller maximum s.e.'s. Once more the patterns are much clearer in Figures \ref{graph:dvdgpeEx2} and E, both right, which separate the designs better. The overall performances are summarized in Figures F and \ref{graph:fdspeEx2} (right). In Figure F (right) we clearly see two groups, with the $(DP)_S$-, $(IP)$- and $D_S$-optimum designs having the worst performances over the whole region. The $I_D$-optimum design has the best performance throughout, showing that the single criterion $I_D$ summarizes very well the point prediction capabilities in the whole region. We note, however, that there are other designs with similar performances, mainly those obtained by compound criteria, although the $A_S$-, $(AP)_S$- and $(I_DP)$-optimum designs follow closely. Now, considering interval predictions of differences (Figure \ref{graph:fdspeEx2}, right), there are three designs with very close to the best performances, namely the $(AP)_S$-, $(I_DP)$- and the compound design 8 (with weights $\kappa_1=0.3$ and $\kappa_7=0.7$, compromising between $(DP)_S$ and $I_D$). The other three compound designs are also close to these.
\section{Central composite designs which are $I_D$-optimal}\label{sec:CCD} The classical approach to designing response surface experiments, most commonly using CCDs, and the optimal design approach, most commonly using $D$-optimality, are often contrasted as having quite different philosophies. It is therefore intriguing that the CCD for five factors in 30 runs, based on a resolution-V half-replicate factorial portion, with four center points, in a spherical region, is optimal under the new $I_D$ criterion. It is natural to ask whether this is true for other run sizes and for other numbers of factors.
This was explored by running our exchange algorithm for various numbers of factors and run sizes in spherical regions. Subject to there being a very small chance that the algorithm has failed to find the true optimum, we found the following. \begin{itemize} \item For three factors, the CCD is $I_D$-optimal for $17 \leq n \leq 20$, i.e.\ 3 to 6 center points. \item For four factors, the CCD is $I_D$-optimal for $28 \leq n \leq 32$, i.e.\ 4 to 8 center points. \item For five factors, the CCD, with a half-replicate of the factorial points, is $I_D$-optimal for $30 \leq n \leq 33$, i.e.\ 4 to 7 center points. \item For six factors, the CCD, with a half-replicate of the factorial points, is $I_D$-optimal for $50 \leq n \leq 55$, i.e.\ 6 to 11 center points. \end{itemize}
We did not explore more than six factors. For other run sizes, the CCD is suboptimal. However, for run sizes just outside the range given, the optimal design is similar to a CCD, e.g.\ having one axial point replaced by a center point for small run sizes, or repeating one factorial point for larger run sizes.
Note that these CCDs are optimal only among designs chosen from the candidate set based on the full $3^q$ design, expanded to have points on the surface of the sphere. Nonetheless, we believe this is the first time CCDs have been shown to be optimal among such a large class of designs. The result nicely links the fields of classical and optimal design.
\section{Discussion}\label{sec:disc} We have extended the compound criterion function of \cite{gilmourtrinca2012} to allow for efficient designs in terms of predictions. We focused on two properties, prediction of responses and prediction of differences in the response. Point and interval estimation were considered for both responses and differences.
We also proposed the use of several graphs for depicting the prediction performances of the designs. We have extended the usual graphs such as VDG, DVDG, FDS and DFDS to take into account interval estimation. We have illustrated the methods with two examples, one for a cuboidal and the other for a spherical experimental region of interest. The illustrations showed that the graphs add relevant information mainly if one is interested in predicting the response.
Along with many other authors, we argue that a design should have several good properties and it is important to compare several designs, under a wide range of properties, in order to choose the most appropriate one for the problem at hand. This is good practice even under a single objective optimization since usually there are many designs that are almost equivalent. Evaluating them for several other properties is of great help for discriminating between them.
The usefulness of compound criteria is that a design can be developed according to the objectives of the research. We have illustrated compound optimum designs by combining only two properties at a time, but of course many properties can be studied together. Even though only two properties were combined in our examples, the resulting compound designs were quite competitive overall. We have compared a compound design with the one obtained by the multiple objective algorithm of \cite{borrotti2016}. The multiple objective design did not consider inference and thus our compound design showed advantages. We believe that by using compound criteria we can handle many properties of interest more easily than the multiple objective approach. The graphs proposed are helpful to depict detailed pictures of prediction capabilities of the designs. We recommend the use of the proposed variations of VDG and DVDG plots that use the relative volume instead of distance for both point and interval predictions, since these graphs discriminate better between the different designs. All varieties of FDS and DFDS plots are good summaries that will always be useful for making a final choice of design.
\begin{center} {\large\bf SUPPLEMENTARY MATERIAL} \end{center}
\noindent \textbf{SuppMatPrediction.pdf}: a pdf file containing additional graphs for the examples discussed in the paper and a small simulation study to evaluate the performances of the designs in Example 1 with respect to mean and difference response bias predictions.\\ \textbf{codePrediction.rar}: a zipped folder containing R code to obtain designs by optimizing the compound criteria proposed in the article.
\end{document} |
\begin{document}
\title{Formality of spaces with Lusternik-Schnirelmann category 1}
\begin{abstract}
It is a well-known fact that formal dg-algebras admit no non-trivial Massey products,
while the converse fails. We prove that by restricting to dg-algebras whose induced
product on cohomology is trivial, we do in fact obtain the converse. This allows us to
prove that spaces of Lusternik-Schnirelmann category $1$ are formal spaces. \end{abstract}
\section{Introduction}
The notion of formal dg-algebras, being algebras that are quasi-isomorphic to their cohomology algebra, was introduced in \cite{DGMS} to solve problems in rational homotopy theory. In that paper, the authors remark that having no non-vanishing Massey $n$-products is a weaker property than being formal, meaning that Massey $n$-products serve as obstructions to formality. They claim that formality is equivalent to a ``uniform vanishing'' – a stronger condition than mere vanishing. Since then, the study of dg-algebras has been taken further using the theory of $A_\infty$-algebras. Any dg-algebra $A$ can be viewed as a ``trivial'' $A_\infty$-algebra, which enables us to think about general $A_\infty$-algebras as homotopy theoretic versions of dg-algebras. In \cite{kadeishvili} Kadeishvili proved that the cohomology algebra $H(A)$ of a dg-algebra $A$ naturally admits an $A_\infty$-structure in such a way that there is a quasi-isomorphism of $A_\infty$-algebras $H(A)\longrightarrow A$. The higher products of this $A_\infty$-structure are often claimed to be the Massey products, but this is not always true, as the $n$-ary product of the $A_\infty$-structure on $H(A)$ is not always a representative of the Massey $n$-product \cite{detection}. It is, however, the case that the vanishing of these higher arity products on $H(A)$ is a stronger condition than the vanishing of the Massey $n$-products \cite{AHO} – in fact, if all the higher products vanish, then the dg-algebra is formal. Hence the vanishing of the higher products of the $A_\infty$-structure on $H(A)$ is equivalent to $A$ being formal. Using the equivalent definition of formality from \cite{keller}, this is true almost by definition.
In \cite{detection} the authors prove that even though the higher products on $H(A)$ might not be the Massey products, they are so up to a sum of lower degree products. We use this to prove that the vanishing of the Massey products is equivalent to formality in the case where the cohomology algebra has trivial products. Hence vanishing Massey products on spaces with vanishing cup products are always uniformly vanishing in the sense of \cite{DGMS}.
There is a property of a space that gives an upper bound for its cup-length, namely the Lusternik-Schnirelmann category of the space. This is an integer describing how a space can be glued together from cones. If we limit our study to spaces with Lusternik-Schnirelmann category $1$, we know that the cup-length is $0$, meaning that the products on reduced cohomology vanish. Using the above result we then prove that spaces with Lusternik-Schnirelmann category $1$ are formal. This generalizes the formality of suspensions, proven in \cite{FHT}.
The result is most likely already known to specialists, at least in the rational case, but the method of proof presented here seems to be new. Alternatively, one can use the fact that any space $X$ with $\text{cat}_{LS}(X)=1$ is a co-H-space \cite{hess}, and then that any co-H-space is a wedge of spheres \cite{co-H-space}. Formality is preserved under wedge sums \cite{hess}, and since spheres are formal, we know that any co-H-space, and thus any space $X$ with $\text{cat}_{LS}(X)=1$, is a formal space.
\section{\texorpdfstring{$A_\infty$}{A}-algebras}
We provide a quick overview of the most important parts of the theory of $A_\infty$-algebras. For a more in-depth treatment, see \cite{keller}. Let $K$ be a field of characteristic $0$.
\begin{definition}
An $A_\infty$-algebra $(A, m)$ over $K$ is a $\ensuremath{\mathbb{Z}}$-graded vector
space $A=\bigoplus_{i\in \ensuremath{\mathbb{Z}}}A_i$ together with a family of $K$-linear maps
$m_n : A^{\otimes n}\longrightarrow A$ of degree $2-n$, such that the identities
$$\sum_{r+s+t = n}(-1)^{r+st}m_{r+1+t} (Id^{\otimes r}\otimes m_s \otimes Id^{\otimes t}) = 0$$
hold for all $n, s\geq 1$ and $r, t\geq 0$. \end{definition}
These relations are called the coherence relations, or the Stasheff identities, in $A$. For $n=1$ the coherence relation simply becomes $$0 = (-1)^{0+0}m_1 (m_1) = m_1^2 .$$ This means that $A$ is a cochain complex with differential $d=m_1$, as $m_1$ is a degree $1$ map. For $n=2$ we get $$0 = (-1)^{1}m_2(Id\otimes m_1)+(-1)^{0}m_1 (m_2)+(-1)^{1}m_2 (m_1\otimes Id),$$ which reduces to $m_1 m_2 = m_2(m_1\otimes Id + Id\otimes m_1)$. This means that $m_1$ is a derivation with respect to the product $m_2$, usually stated as saying that $m_1$ satisfies the Leibniz rule. The standard Leibniz rule, with its signs, comes out of this formula when applying it to elements and using the Koszul sign rule.
The third relation tells us that $m_2$ is not necessarily associative, and that the associator is given by $m_3$. This is usually referred to as $m_2$ being associative up to homotopy, which gives $A_\infty$-algebras their other name, strong homotopy associative algebras or sha-algebras for short. If $m_3=0$ (or $m_1=0$) this relation reduces to the associator being zero, which means that $m_2$ is an associative product.
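For reference, expanding the definition with the sign convention above, the $n=3$ coherence relation reads $$m_1 m_3 + m_2(m_2\otimes Id) - m_2(Id\otimes m_2) + m_3(m_1\otimes Id\otimes Id + Id\otimes m_1\otimes Id + Id\otimes Id\otimes m_1) = 0,$$ so the associator $m_2(Id\otimes m_2)-m_2(m_2\otimes Id)$ is expressed in terms of $m_3$ and the differential $m_1$; this makes precise the statement that $m_2$ is associative up to the homotopy $m_3$.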
\begin{definition}
A dg-algebra is an $A_\infty$-algebra $(A, m)$ where $m_i = 0$ for
$i \geq 3$. For simplicity we usually just denote it by $A$. \end{definition}
By the equations above describing the Stasheff identities, this definition is equivalent to the classical definition, i.e. a $\ensuremath{\mathbb{Z}}$-graded vector space with an associative product and a differential satisfying the Leibniz rule.
\begin{definition}
Let $(A, m^A)$ and $(B, m^B)$ be $A_\infty$-algebras. A morphism of
$A_\infty$-algebras $f:A\longrightarrow B$, also called $A_\infty$-morphism, is a
family of linear maps $f_n:A^{\otimes n}\longrightarrow B$ of degree $1-n$, such that
$$\sum_{n = r+s+t}(-1)^{r+st}f_{r+1+t}(id^{\otimes r}\otimes m_s^A \otimes id^{\otimes t}) = \sum_{k=1}^{n}\sum_{n=i_1+\cdots+i_k}(-1)^{u_k} m_k^B(f_{i_1}\otimes \cdots \otimes f_{i_k})$$
where $u_k=\displaystyle \sum_{t=1}^{k-1}t(i_{k-t}-1)$. \end{definition}
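For example, for $n=1$ the identity above reduces to $f_1 m_1^A = m_1^B f_1$, so $f_1$ is in particular a map of cochain complexes.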
We call $f$ an $A_\infty$-isomorphism, or an isomorphism of $A_\infty$-algebras, if $f_1$ is an isomorphism of chain complexes.
Since any $A_\infty$-algebra $(A, m)$ has a map $m_1$ such that $m_1^2=0$, we can also form its cohomology, denoted $H(A)$. The cohomology of a dg-algebra is a graded associative algebra with the product induced from $A$, which we can treat as a dg-algebra by giving it the trivial differential.
Let $f:A\longrightarrow B$ be an $A_\infty$-morphism. We call $f$ an $A_\infty$-quasi-isomorphism, or a quasi-isomorphism of $A_\infty$-algebras, if $f_1$ is a quasi-isomorphism of chain complexes, i.e. it induces an isomorphism on cohomology.
Notice that if $f_j=0$ for $j\geq 2$ and $m^A_i = 0 = m^B_i$ for $i\geq 3$, i.e. $A$ and $B$ are dg-algebras, then this definition reduces to the standard quasi-isomorphisms of dg-algebras.
\begin{definition}
Let $A$ be a dg-algebra. We say $A$ is formal if it is
$A_\infty$-quasi-isomorphic to a dg-algebra with trivial differential. \end{definition}
If $A$ is formal, the dg-algebra with trivial differential in the definition can be taken to be $H(A)$ itself, so we can equivalently define a dg-algebra to be formal if it is $A_\infty$-quasi-isomorphic to its cohomology algebra. We also remark that this is not the most classical definition, which uses a zig-zag of dg-quasi-isomorphisms instead of an $A_\infty$-quasi-isomorphism; the two definitions are equivalent. See \cite{AHO} and \cite{keller} for further details.
Formal dg-algebras are very nice algebras that, as mentioned, have their historical origin in rational homotopy theory. Examples include the Sullivan algebras of Kähler manifolds \cite{DGMS}.
\section{Massey products}
Massey products are partially defined higher order cohomology operations on dg-algebras introduced by Massey in \cite{Massey}. Since they are only partially defined, we need a convenient way to package this information. This is done through defining systems. Let $A$
be a dg-algebra and denote $\bar{x} = (-1)^{|x|}x$.
\begin{definition}
A defining system for a set of cohomology classes $x_1, \ldots, x_n$
in $H(A)$ is a collection $\{ a_{i,j}\}$ of cochains in $A$ such that
\begin{enumerate}
\item $[a_{i-1, i}] = x_i$
\item $d(a_{i, j}) = \sum_{i<k<j}\overline{a_{i, k}}a_{k, j}$
\end{enumerate}
for all pairs $(i,j)\neq (0,n)$ where $i\leq j$. \end{definition}
\begin{definition}
The Massey $n$-product of $n$ cohomology classes $x_1, \ldots, x_n$,
denoted $\langle x_1, \ldots, x_n\rangle$ is defined to be the set of all $[a_{0,n}]$,
where $$a_{0,n} = \sum_{0<k<n}\overline{a_{0, k}}a_{k, n}$$ such that $\{ a_{i,j} \}$
is a defining system. \end{definition}
For $n=2$ this is just the induced product on cohomology, up to a sign. For $n=3$ this is the classical triple Massey product. When we use the phrase ``all Massey $n$-products'', we mean all Massey $n$-products for $n\geq 3$.
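To make the definition concrete in the first non-trivial case $n=3$: a defining system consists of cocycles $a_{0,1}, a_{1,2}, a_{2,3}$ representing $x_1, x_2, x_3$ together with cochains $a_{0,2}$ and $a_{1,3}$ satisfying $d(a_{0,2})=\overline{a_{0,1}}a_{1,2}$ and $d(a_{1,3})=\overline{a_{1,2}}a_{2,3}$. Such cochains exist precisely when $x_1x_2=0=x_2x_3$ in $H(A)$, and the triple Massey product is then the set of classes $$\langle x_1,x_2,x_3\rangle=\left\{\,\left[\overline{a_{0,1}}a_{1,3}+\overline{a_{0,2}}a_{2,3}\right] : \{a_{i,j}\}\ \text{a defining system}\,\right\}.$$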
The Massey $n$-product is thus in general a set of cohomology classes: it may be empty, when no defining system exists, and it may contain several classes. If this set contains just a single class, then we say the Massey product is uniquely defined. What matters for us is when these products are ``trivial''.
\begin{definition}
We say that the Massey $n$-product vanishes if it contains zero as an
element, i.e. $0\in \langle x_1, \ldots, x_n\rangle$. \end{definition}
\section{Results on formality}
One of the reasons we are interested in Massey products is that they serve as an obstruction to formality. Intuitively, if non-vanishing Massey products exist on the algebra of cochains $C(X)$ of a topological space $X$, this means that $C(X)$ contains more information about our space than the cohomology ring of $X$. If a non-trivial Massey product exists on $C(X)$, this means that there will always exist another space $Y$, not homeomorphic to $X$, but with the same cohomology ring as $X$. The most famous example of this comes from the Borromean rings: three rings, any two of which are unlinked, while the three together are linked. The complement of the Borromean rings has the same cohomology ring as the complement of three unlinked circles, but it admits a non-vanishing Massey product, while the complement of the unlinked circles does not. Hence the algebra of forms on the complement of the Borromean rings cannot be formal. This means that Massey products detect non-formality. This is summarized in the following theorem.
\begin{theorem}[\cite{DGMS}]
Let $(A, m)$ be a formal dg-algebra. Then all Massey $n$-products vanish. \end{theorem}
Unfortunately, knowing that a dg-algebra has no non-vanishing Massey products is not enough to determine that it is formal. But using the full $A_\infty$-structure we can get closer to a version of this being true.
\begin{theorem}[\cite{kadeishvili}]
\label{thm:Kadeishvili}
Let $(A,m)$ be a dg-algebra. Then there exists an (up to $A_\infty$-isomorphism)
unique $A_\infty$-algebra structure on its cohomology algebra $H(A)$ with $m_1=0$,
$m_2$ the induced product from $A$, and a quasi-isomorphism of $A_\infty$-algebras
$H(A)\longrightarrow A$. \end{theorem}
Note that this does not mean that all dg-algebras are formal, as $H(A)$ equipped with this structure is not necessarily just a dg-algebra anymore. We can think of this higher structure on $H(A)$ as measuring how far away $A$ is from being formal. Since $m_1=0$, the third Stasheff identity forces the product $m_2$ to be associative, even though the operations $m_k$ for $k\geq 3$ need not vanish. This means that these higher products are no longer interpreted as homotopies, but instead as something more like Massey products. Hence we call them the ``higher products'' on $H(A)$.
We said that the $A_\infty$-structure measures how far away $A$ is from being formal. We noted earlier that an $A_\infty$-algebra with $m_k=0$ for $k\geq 3$ is a dg-algebra. This means that if the $A_\infty$-structure on $H(A)$ has $m_k=0$ for $k\geq 3$, then $A$ is formal, as we then have an $A_\infty$-quasi-isomorphism between two dg-algebras. This is rather important so we state it as a theorem.
\begin{theorem}[\cite{AHO}]
\label{thm:AHO_formal}
Let $(A, m)$ be a dg-algebra. Then $A$ is formal if and only if all the higher products
on $H(A)$ vanish. \end{theorem}
One direction of the proof is by definition, as described above. The other direction follows from the fact that the $A_\infty$-structure on $H(A)$ is unique up to isomorphism of $A_\infty$-algebras.
In \cite{DGMS} the authors say that formality is equivalent to a uniform vanishing of the Massey products. Intuitively, if we can choose the zero element in all the Massey products in such a uniform way that these choices assemble into an $A_\infty$-structure, then we have formality, as above.
We want to use this to get an idea of ``how close'' the ordinary Massey products are to being sufficient obstructions. What we mean by this is that we want to find a situation in which vanishing Massey products imply that we have a formal dg-algebra. To get to such a result we first need a theorem that connects the higher products to the ordinary Massey products.
\begin{theorem}[\cite{detection}]
Let $A$ be a dg-algebra and $x\in \langle x_1, \ldots, x_n\rangle$ with $n\geq 3$.
Then for any $A_\infty$-structure on $H(A)$ we have
$$\epsilon m_n(x_1, \ldots, x_n) = x+\Gamma$$ where
$\Gamma \in \sum_{j=1}^{n-1}\text{Im}(m_j)$ and
$\epsilon = (-1)^{\sum_{j=1}^{n-1} (n-j)|x_j|}$. \end{theorem}
\begin{corollary}[\cite{detection}]
\label{cor:detection_unique}
Let $A$ be a dg-algebra and $m$ an $A_\infty$-structure on its cohomology $H(A)$ such
that $m_k = 0$ for all $1 \leq k \leq n-1$. Then the Massey $n$-product
$\langle x_1, \ldots, x_n \rangle$ is uniquely defined for any set of cohomology
classes $x_1, \ldots, x_n$. Furthermore the unique element in the Massey product is
recovered by $m_n$ up to a sign, i.e.
$\langle x_1, \ldots, x_n \rangle = \epsilon m_n(x_1, \ldots, x_n)$, where again
$\epsilon = (-1)^{\sum_{j=1}^{n-1} (n-j)|x_j|}$. \end{corollary}
By this we get our first result.
\begin{theorem}
\label{thm:1}
Let $A$ be a dg-algebra and $H(A)$ its cohomology algebra. If the induced product on
$H(A)$ is trivial and all Massey $n$-products on $A$ vanish, then $A$ is formal. \end{theorem}
\begin{proof}
By \cref{thm:Kadeishvili} we know that $H(A)$ can be equipped with the structure
$\{m_i\}$ of an $A_\infty$-algebra such that $m_1=0$ and $m_2$ is the product induced
from $A$, which is assumed to be trivial. We claim that $m_k = 0$ for all $k\geq 3$
as well, and hence that $A$ is formal by \cref{thm:AHO_formal}. We prove this
claim by induction.
Since $m_2=0$ we know that all Massey triple products are defined, and by the induction argument below all the higher Massey products will be defined as well. Since $m_2=0$, the base case of the induction already holds.
Assume $m_k = 0$ for $1\leq k\leq n-1$. By \cref{cor:detection_unique} we know
that $\langle x_1, \ldots, x_n \rangle$ consists of a unique element for all choices of
classes $x_1, \ldots, x_n$. This element is by assumption the zero class, as we assumed
all Massey products to be vanishing. This class is recovered up to a sign by $m_n$,
which means $m_n(x_1,\ldots, x_n)=0$ for all choices of $x_1, \ldots, x_n$. Hence
$m_n=0$ and we are done. \end{proof}
This proof shows that when we have a trivial induced product on $H(A)$, the vanishing Massey products neatly form a trivial $A_\infty$-structure on $H(A)$, which we earlier said was the way of interpreting the uniform vanishing.
It is tempting to think that having a trivial product in cohomology also makes it impossible to produce a non-vanishing Massey product, but there are examples showing that this is not the case. One example is the free loop space of an even-dimensional sphere. Its cohomology algebra has trivial product, and it is shown in \cite{nonformal_loop} to have non-zero Massey products. Hence it cannot be formal. The complement of the Borromean rings, mentioned earlier, provides another example.
\begin{question} Is there a more general procedure for choosing elements in all the Massey products in such a way that they form an $A_\infty$-structure? \end{question}
\section{The Lusternik-Schnirelmann category}
For topological spaces the criterion of having trivial cup products is quite strong. Say we have a path-connected topological space $X$. It has zeroth cohomology $H^0(X;K)\cong K$, and the unit $1\in H^0(X;K)$ acts as the identity, so requiring all cup products to be trivial reduces to requiring $H^i(X;K)=0$ for $i>0$, which is very restrictive. One solution to this is to look at reduced cohomology.
\begin{definition}
Let $X$ be a topological space and $C^\ast(X;K)$ its cochain complex (treated here as a
dg-algebra). We define its augmented cochain dg-algebra, denoted
$\widetilde{C}^\ast(X;K)$, by placing a copy of the ground field $K$ in degree $-1$, mapping injectively into $C^0(X;K)$ via the map $\epsilon$, i.e.
\begin{equation*}
\cdots \longrightarrow 0 \longrightarrow K \overset{\epsilon}\longrightarrow
C^0(X;K)\longrightarrow C^1(X;K) \longrightarrow \cdots \longrightarrow C^n(X;K)
\longrightarrow \cdots
\end{equation*} \end{definition}
The cohomology algebra of the augmented cochain complex is called the reduced cohomology algebra of $X$ and is denoted $\widetilde{H}^\ast(X;K)$. If the space $X$ is connected, then $\widetilde{H}^0(X;K)=0$, meaning that we have completely removed the problem described above.
Spaces with trivial cup products are not in abundance, even in reduced cohomology, but examples include the spheres and, more generally, suspended spaces. There are ways to ensure that a space has trivial cup products, and one of these is via the Lusternik-Schnirelmann category.
\begin{definition}
Let $X$ be a topological space. The Lusternik-Schnirelmann category of $X$, denoted
$\text{cat}_{LS}(X)$, is the least integer $n$ such that there is a covering of $X$
by $n+1$ open subsets $U_i$ that are all contractible in $X$, i.e. their inclusion
into $X$ is null-homotopic. \end{definition}
This invariant was originally developed in \cite{lscat} as an invariant of manifolds, giving a lower bound for the number of critical points that any smooth real-valued function on the manifold must have. It has since become a useful, though very difficult to calculate, invariant of topological spaces.
Recall that the cup-length of a topological space $X$, denoted $\text{cl}(X)$, is the largest integer $n$ such that some product $[x_1]\cup \cdots \cup [x_n]$ of cohomology classes with $\deg x_i\geq 1$ is non-zero. We have the following fundamental relation between the cup-length and the Lusternik-Schnirelmann category of $X$.
\begin{lemma}[\cite{lscategorybook}]
Let $X$ be a topological space. Then the cup length of $X$ is a lower bound for its
Lusternik-Schnirelmann category, i.e. $\text{cl}(X)\leq \text{cat}_{LS}(X)$. \end{lemma}
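For instance, for the $n$-sphere with $n\geq 1$ we have $\text{cl}(S^n)=1=\text{cat}_{LS}(S^n)$, since $S^n$ is covered by two contractible open sets, while for the $n$-torus both the cup-length and the Lusternik-Schnirelmann category are equal to $n$.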
Thus, for spaces with $\text{cat}_{LS}(X) = 1$ we have trivial products on reduced cohomology. Examples of such spaces are again the suspended spaces, as they are the union of two cones. The last thing we need to know is whether spaces with Lusternik-Schnirelmann category $1$ admit any non-vanishing Massey $n$-products. Luckily, by a result of Rudyak we know that they do not.
\begin{theorem}[\cite{Rudyak}]
\label{thm:rudyak}
Let $X$ be a topological space with $\text{cat}_{LS}(X)\leq 1$ and
$x_1, \ldots, x_n \in \widetilde{H}^\ast(X;K)$. If the Massey $n$-product
$\langle x_1, \ldots, x_n\rangle$ is defined then
$\langle x_1, \ldots, x_n\rangle = 0$. \end{theorem}
We call spaces with formal reduced cochain algebras reduced formal spaces. This means now that spaces with Lusternik-Schnirelmann category $1$ are reduced formal by \cref{thm:rudyak} and \cref{thm:1}.
\begin{corollary}
\label{cor:reduced_formal}
Let $X$ be a topological space with $\text{cat}_{LS}(X)\leq 1$. Then
$\widetilde{C}^\ast(X;K)$ is a formal dg-algebra. \end{corollary}
We still have to deal with the degree $0$ cochains in order to say that a space $X$ with $\text{cat}_{LS}(X)\leq 1$ is formal and not just reduced formal. To do this we must construct the necessary span of dg-quasi-isomorphisms.
\begin{theorem}
\label{thm:reduced_formal_formal}
Any reduced formal topological space $X$ is formal. \end{theorem}
\begin{proof}
Since $X$ is reduced formal we know that there is a span of dg-quasi-isomorphisms
$$\widetilde{H}^\ast(X;K)\longleftarrow B \longrightarrow \widetilde{C}^\ast(X;K)$$ for some
dg-algebra $B$:
\begin{center}
\begin{tikzpicture}
\node (1) {$K$};
\node (2) [node distance=3cm, right of=1] {$C^0(X;K)$};
\node (3) [node distance=3cm, right of=2] {$C^1(X;K)$};
\node (4) [node distance=3cm, right of=3] {$C^2(X;K)$};
\node (5) [node distance=1.6cm, below of=1] {$B^{-1}$};
\node (6) [node distance=3cm, right of=5] {$B^0$};
\node (7) [node distance=3cm, right of=6] {$B^1$};
\node (8) [node distance=3cm, right of=7] {$B^2$};
\node (9) [node distance=1.6cm, below of=5] {$0$};
\node (10) [node distance=3cm, right of=9] {$0$};
\node (11) [node distance=3cm, right of=10] {$H^1(X;K)$};
\node (12) [node distance=3cm, right of=11] {$H^2(X;K)$};
\node (13) [node distance=2cm, left of=1] {$\cdots$};
\node (14) [node distance=2cm, left of=5] {$\cdots$};
\node (15) [node distance=2cm, left of=9] {$\cdots$};
\node (16) [node distance=2cm, right of=4] {$\cdots$};
\node (17) [node distance=2cm, right of=8] {$\cdots$};
\node (18) [node distance=2cm, right of=12] {$\cdots$};
\draw [-to] (13) to node {} (1);
\draw [-to] (14) to node {} (5);
\draw [-to] (15) to node {} (9);
\draw [-to] (4) to node {} (16);
\draw [-to] (8) to node {} (17);
\draw [-to] (12) to node {} (18);
\draw [-to] (1) to node {$\epsilon$} (2);
\draw [-to] (2) to node {$d^0$} (3);
\draw [-to] (3) to node {$d^1$} (4);
\draw [-to] (5) to node {$d^{-1}_B$} (6);
\draw [-to] (6) to node {$d^0_B$} (7);
\draw [-to] (7) to node {$d^1_B$} (8);
\draw [-to] (9) to node {} (10);
\draw [-to] (10) to node {} (11);
\draw [-to] (11) to node {$0$} (12);
\draw [-to] (5) to node {$q^{-1}$} (1);
\draw [-to] (5) to node {} (9);
\draw [-to] (6) to node {$q^0$} (2);
\draw [-to] (6) to node [swap]{$p^0$} (10);
\draw [-to] (7) to node {$q^1$} (3);
\draw [-to] (7) to node [swap]{$p^1$} (11);
\draw [-to] (8) to node {$q^2$} (4);
\draw [-to] (8) to node [swap]{$p^2$} (12);
\end{tikzpicture}
\end{center}
By removing the copy of $K$ from the augmented cochain complex, we can insert a copy of $K$ as $H^0(X;K)$ and form a new span of dg-quasi-isomorphisms:
\begin{center}
\begin{tikzpicture}
\node (1) {$0$};
\node (2) [node distance=3cm, right of=1] {$C^0(X;K)$};
\node (3) [node distance=3cm, right of=2] {$C^1(X;K)$};
\node (4) [node distance=3cm, right of=3] {$C^2(X;K)$};
\node (5) [node distance=1.6cm, below of=1] {$0$};
\node (6) [node distance=3cm, right of=5] {$B^0\oplus B^{-1}$};
\node (7) [node distance=3cm, right of=6] {$B^1$};
\node (8) [node distance=3cm, right of=7] {$B^2$};
\node (9) [node distance=1.6cm, below of=5] {$0$};
\node (10) [node distance=3cm, right of=9] {$K$};
\node (11) [node distance=3cm, right of=10] {$H^1(X;K)$};
\node (12) [node distance=3cm, right of=11] {$H^2(X;K)$};
\node (13) [node distance=2cm, left of=1] {$\cdots$};
\node (14) [node distance=2cm, left of=5] {$\cdots$};
\node (15) [node distance=2cm, left of=9] {$\cdots$};
\node (16) [node distance=2cm, right of=4] {$\cdots$};
\node (17) [node distance=2cm, right of=8] {$\cdots$};
\node (18) [node distance=2cm, right of=12] {$\cdots$};
\draw [-to] (13) to node {} (1);
\draw [-to] (14) to node {} (5);
\draw [-to] (15) to node {} (9);
\draw [-to] (4) to node {} (16);
\draw [-to] (8) to node {} (17);
\draw [-to] (12) to node {} (18);
\draw [-to] (1) to node {} (2);
\draw [-to] (2) to node {$d^0$} (3);
\draw [-to] (3) to node {$d^1$} (4);
\draw [-to] (5) to node {} (6);
\draw [-to] (6) to node {$[d^0_B, 0]$} (7);
\draw [-to] (7) to node {$d^1_B$} (8);
\draw [-to] (9) to node {} (10);
\draw [-to] (10) to node {$0$} (11);
\draw [-to] (11) to node {$0$} (12);
\draw [-to] (5) to node {} (1);
\draw [-to] (5) to node {} (9);
\draw [-to] (6) to node {$[q^0, 0]$} (2);
\draw [-to] (6) to node [swap]{$[0, q^{-1}]$} (10);
\draw [-to] (7) to node {$q^1$} (3);
\draw [-to] (7) to node [swap]{$p^1$} (11);
\draw [-to] (8) to node {$q^2$} (4);
\draw [-to] (8) to node [swap]{$p^2$} (12);
\end{tikzpicture}
\end{center}
The squares in the new diagram commute because the original squares commute, and the cohomologies agree by construction. Writing $B'$ for the dg-algebra in the middle row of the new diagram, we thus have a span of
dg-quasi-isomorphisms $$H^\ast(X;K)\longleftarrow B'\longrightarrow C^\ast(X;K),$$
which means $X$ is formal. \end{proof}
We are now ready to conclude with our main result, which follows by combining \cref{cor:reduced_formal} with \cref{thm:reduced_formal_formal}. \begin{theorem}
Let $X$ be a space with $\text{cat}_{LS}(X)\leq 1$. Then $X$ is formal. \end{theorem}
\begin{question}
Are there any formal spaces with trivial cup product and Lusternik-Schnirelmann
category greater than $1$? \end{question}
\end{document} |
\begin{document}
\title[On amicable tuples] {On amicable tuples} \author[Y. Suzuki]{Yuta Suzuki} \date{}
\subjclass[2010]{Primary 11A25, Secondary 11J25} \keywords{Amicable pairs; Amicable tuples; Finiteness theorem.}
\begin{abstract} For an integer $k\ge2$, a tuple of $k$ positive integers $(M_i)_{i=1}^{k}$ is called an \textit{amicable $k$-tuple} if the equation \[ \sigma(M_1)=\cdots=\sigma(M_k)=M_1+\cdots+M_k \] holds. This is a generalization of amicable pairs. An \textit{amicable pair} is a pair of distinct positive integers each of which is the sum of the proper divisors of the other. Gmelin~(1917) conjectured that there are no relatively prime amicable pairs, and Artjuhov~(1975) and Borho~(1974) proved that for any fixed positive integer $K$, there are only finitely many relatively prime amicable pairs $(M,N)$ with $\omega(MN)=K$. Recently, Pollack~(2015) obtained an upper bound \[ MN<(2K)^{2^{K^2}} \] for such amicable pairs. In this paper, we improve this upper bound to \[ MN<\frac{\pi^2}{6}2^{4^K-2\cdot 2^K} \] and generalize this bound to a certain class of general amicable tuples. \end{abstract} \maketitle
\section{Introduction}
For an integer $k\ge2$, a tuple of $k$ positive integers \[ (M_1,\,\ldots\,,M_k) \] is called an \textit{amicable $k$-tuple}~\cite{Dickson_triple} if the equation \begin{equation} \label{EQ:definition_amicable} \sigma(M_1)=\cdots=\sigma(M_k)=M_1+\cdots+M_k \end{equation} holds, where $\sigma(n)$ is the usual sum-of-divisors function. This is a generalization of amicable pairs. An \textit{amicable pair} is a pair of distinct positive integers each of which is the sum of the proper divisors of the other. For a pair of distinct positive integers~$(M,N)$, this definition of amicable pair can be rephrased as \[ \sigma(M)-M=N\quad\text{and}\quad\sigma(N)-N=M, \] which is equivalent to \[ \sigma(M)=\sigma(N)=M+N, \] so $(M,N)$ is an amicable pair if and only if $(M,N)$ is an amicable $2$-tuple.
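The smallest example is the classical pair $(220,284)$: since $220=2^2\cdot5\cdot11$ and $284=2^2\cdot71$, we have $\sigma(220)=7\cdot6\cdot12=504$ and $\sigma(284)=7\cdot72=504$, and indeed $220+284=504$.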
Although the history of amicable pairs can be traced back more than 2000 years to the Pythagoreans, their nature is still shrouded in mystery. In 1917, Gmelin~\cite{Gmelin} noted that there were no amicable pairs $(M,N)$ on the list known at that time for which $M$ and $N$ are relatively prime. Based on this observation, he conjectured that there are no relatively prime amicable pairs.
As for this problem, Artjuhov~\cite{Artjuhov} and Borho~\cite{Borho} proved that for any fixed integer $K\ge1$, there are only finitely many relatively prime amicable pairs $(M,N)$ with $\omega(MN)=K$, where $\omega(n)$ denotes the number of distinct prime factors of $n$. Recently, Pollack~\cite{Pollack_bound} obtained an explicit upper bound \begin{equation} \label{EQ:Pollack} MN<(2K)^{2^{K^2}} \end{equation} for relatively prime amicable pairs $(M,N)$, where $K=\omega(MN)$. The main aim of this paper is to improve and generalize this result of Pollack.
Actually, not only for amicable pairs as in Gmelin's conjecture: the members of each amicable tuple seem to share relatively many common factors. Although there are not so many examples of large amicable $k$-tuples for $k\ge4$, it further seems that there is no amicable tuple whose greatest common divisor is $1$. For somewhat technical and artificial reasons, we consider a much stronger condition on amicable tuples. We define this condition not only for amicable tuples but for general tuples. \begin{definition} For an integer $k\ge2$, we say a tuple of distinct positive integers $(M_i)_{i=1}^k$ is \textit{anarchy} if \[ (M_i,M_j\sigma(M_j))=1 \] for all distinct $i,j$, where $(a,b)$ denotes the greatest common divisor of $a$ and $b$. \end{definition} We also introduce a new class of tuples of integers, which was essentially introduced by Kozek, Luca, Pollack and Pomerance~\cite{Harmonious}. \begin{definition} For an integer $k\ge2$, we say a tuple of positive integers $(M_i)_{i=1}^k$ is a \textit{harmonious $k$-tuple} if the equation \begin{equation} \label{EQ:fundamental_eq} \frac{M_1}{\sigma(M_1)}+\cdots+\frac{M_k}{\sigma(M_k)}=1 \end{equation} holds. \end{definition} In this paper, we generalize and improve Pollack's upper bound \eqref{EQ:Pollack} as follows. \begin{theorem} \label{Thm:main} For any anarchy harmonious tuple $(M_i)_{i=1}^{k}$, we have \[ M_1\cdots M_k<\frac{\pi^2}{6}\,2^{4^K-2\cdot2^K}, \] where $K=\omega(M_1\cdots M_k)$. \end{theorem}
Note that if $(M_i)_{i=1}^{k}$ is an amicable tuple, then \[ 1 =\frac{M_1}{M_1+\cdots+M_k}+\cdots+\frac{M_k}{M_1+\cdots+M_k} =\frac{M_1}{\sigma(M_1)}+\cdots+\frac{M_k}{\sigma(M_k)} \] so every amicable tuple is harmonious. Also, for any amicable tuple $(M_i)_{i=1}^{k}$, \[ (M_i,M_j\sigma(M_j))=(M_i,M_j(M_1+\cdots+M_k)) \] so an amicable tuple $(M_i)_{i=1}^{k}$ is anarchy if and only if the $M_i$ are pairwise coprime and $M_1\cdots M_k$ is coprime to $M_1+\cdots+M_k$. Moreover, an amicable pair $(M,N)$ is anarchy if and only if $M$ and $N$ are relatively prime since \[ (M(M+N),N)=1\ \Longleftrightarrow\ (M,N)=1\ \Longleftrightarrow\ (M,N(M+N))=1. \] Thus Theorem \ref{Thm:main} leads to the following corollaries. \begin{theorem} \label{Thm:amicable} For any relatively prime amicable pair $(M,N)$, we have \[ MN<\frac{\pi^2}{6}\,2^{4^K-2\cdot2^K}, \] where $K=\omega(MN)$. \end{theorem}
\begin{theorem} \label{Thm:amicable_tuple} For any amicable tuple $(M_i)_{i=1}^{k}$ satisfying \begin{equation} \label{EQ:tuple_cond} M_1,\ldots,M_k\text{ are pairwise coprime and }\, (M_1\cdots M_k,M_1+\cdots+M_k)=1, \end{equation} we have \[ M_1\cdots M_k<\frac{\pi^2}{6}\,2^{4^K-2\cdot2^K}, \] where $K=\omega(M_1\cdots M_k)$. \end{theorem} \noindent Therefore, Theorem \ref{Thm:amicable} improves the upper bound \eqref{EQ:Pollack}.
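Note that the classical pair $(220,284)$ mentioned above is not relatively prime, since $(220,284)=4$, so it lies outside the scope of Theorem \ref{Thm:amicable}; the theorem concerns the coprime pairs, which by Gmelin's conjecture should not exist at all.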
Even without divisibility conditions like relative primality or anarchy, Borho~\cite{Borho2} proved an upper bound for amicable pairs in terms of $\Omega(n)$, the number of prime factors of $n$ counted with multiplicity. He also dealt with \textit{unitary amicable pairs}. A unitary amicable pair is a pair of positive integers $(M,N)$ satisfying the equation \[ \sigma^\ast(M)=\sigma^\ast(N)=M+N,\quad \sigma^\ast(n):=\sum_{d\parallel n}d, \]
where $d\parallel n$ means $d\mid n$ and $(d,n/d)=1$. Borho's upper bound is \[ MN<L^{2^{L}} \] for any amicable or unitary amicable pair $(M,N)$, with $L=\Omega(MN)$ for amicable pairs and $L=\omega(M)+\omega(N)$ for unitary amicable pairs. Indeed, a prototype of our Lemmas \ref{Lem:HB_ineq1} and \ref{Lem:HB_ineq2} on Diophantine equations can be seen in Satz 1 of Borho~\cite{Borho2}. By introducing our lemmas into Borho's argument, we can improve and generalize his theorem. As Kozek, Luca, Pollack and Pomerance~\cite{Harmonious} remarked, what we can deal with are not only amicable tuples but also harmonious tuples. Thus we first introduce the unitary analogue of harmonious tuples. \begin{definition} For an integer $k\ge2$, we say a tuple of positive integers $(M_i)_{i=1}^k$ is a \textit{unitary harmonious tuple} if the equation \begin{equation} \label{EQ:fundamental_eq_unitary} \frac{M_1}{\sigma^\ast(M_1)}+\cdots+\frac{M_k}{\sigma^\ast(M_k)}=1 \end{equation} holds. \end{definition}
Then our theorem of the Borho-type is the following. \begin{theorem} \label{Thm:Omega} For any harmonious tuple $(M_i)_{i=1}^{k}$, we have \[ M_1\cdots M_k\le k^{-k}(2^{2^{L}}-2^{2^{L-1}}), \] where $L=\Omega(M_1\cdots M_k)$. For any unitary harmonious tuple $(M_i)_{i=1}^{k}$, we again have the same upper bound but $L$ is replaced by $L=\omega(M_1)+\cdots+\omega(M_k)$. \end{theorem} We prove Theorem~\ref{Thm:Omega} in Section~\ref{Section:Omega}.
The above problems on the upper bound of harmonious or amicable tuples are analogues of a similar problem in the context of odd perfect numbers, which has been studied for a long time. A positive integer $N$ is called a \textit{perfect number} if the sum of all its proper divisors is equal to $N$ itself, i.e.~if $\sigma(N)=2N$. It is also a long-standing mystery whether or not there is an odd perfect number. A finiteness theorem analogous to the Artjuhov--Borho theorem was proved for odd perfect numbers by Dickson~\cite{Dickson_bound} in 1913. However, it took more than 60 years to get an explicit upper bound. The first explicit upper bound for odd perfect numbers was achieved by Pomerance~\cite{Pomerance_bound} in 1977. Pomerance obtained an upper bound \[ N<(4K)^{(4K)^{2^{K^2}}} \] for an odd perfect number with $K=\omega(N)$. Note that by modifying the method of Pomerance slightly, we may improve this upper bound to \[ N<(4K)^{2^{K^2}} \] as remarked in Lemma 2 and Remark of \cite{Pollack_bound}. Further improvement on this upper bound was given by Heath-Brown~\cite{HB_OPN} by using a new method. Heath-Brown's upper bound is \[ N<4^{4^{K}}. \] Heath-Brown's method has been further developed by several authors. The list of record upper bounds based on Heath-Brown's method is as follows: \[ \begin{array}{ll} C^{4^{K}}&(\text{Cook~\cite{Cook}}),\quad(C=(195)^{1/7}=2.123\ldots\,),\\ 2^{4^{K}}&(\text{Nielsen~\cite{Nielsen}}),\\ 2^{4^{K}-2^K}&(\text{Chen and Tang~\cite{Chen_Tang}}),\\ 2^{(2^K-1)^2}&(\text{Nielsen~\cite{Nielsen_New}}), \end{array} \] where the last result of Nielsen is the current best result.
The method of Pollack~\cite{Pollack_bound} was mainly based on the method of Pomerance, and Pollack~\cite[p.~38]{Pollack_bound} suggested that it would be interesting to determine whether the method of Heath-Brown can be applied in the context of amicable pairs. Indeed, the method we use in this paper mainly follows a version of Heath-Brown's method given by Nielsen~\cite{Nielsen_New}, and Theorem \ref{Thm:amicable} corresponds to Nielsen's latest bound on odd perfect numbers. Thus, this paper gives one possible answer to Pollack's suggestion.
Also, Pollack~\cite[p.~680]{Pollack_kin} asked for a suitable condition under which one obtains a finiteness theorem for amicable tuples or sociable numbers. Our condition \eqref{EQ:tuple_cond} in Theorem~\ref{Thm:amicable_tuple}, though it seems too strong, can be regarded as a partial answer to his question. However, the current author has no idea how to treat the same problem for sociable numbers.
It is interesting to note that there are many examples of harmonious pairs which are relatively prime but not anarchy. In order to list such pairs, we used a \texttt{C} program, which is based on a program provided by Yuki Yoshida~\cite{yos}. By using this program, we can list all $2566$ relatively prime harmonious pairs among all $49929$ harmonious pairs $(M,N)$ up to $10^8$, in the sense $M\le N\le10^8$, and we find that none of these examples is anarchy. For interested readers, we list all $30$ relatively prime harmonious pairs up to $10^5$ in Table~\ref{Table:PHP1} and the number of harmonious and relatively prime harmonious pairs in several ranges in Table~\ref{Table:PHP2}. This numerical search shows that Theorem~\ref{Thm:main} captures more pairs in its scope than Gmelin's conjecture. Therefore, it is natural to ask:~are there anarchy harmonious tuples? Surprisingly, by continuing the numerical search, we find an \textit{anarchy in the harmony}, i.e.~an anarchy harmonious pair \[ (M,N)=(64,173369889), \] which is the only anarchy harmonious pair with $M\le N\le10^9$. Note that \[ 64=2^6,\quad 173369889=3^4\times7^2\times11^2\times19^2, \] \[ \sigma(64)=127,\quad \sigma(173369889)=3^2\times7\times11^2\times19^2\times127. \] The next natural problem may be: how many anarchy harmonious pairs are there? Also, the above observations indicate that it may be possible to improve Theorem \ref{Thm:main} by introducing a new condition stronger than anarchy.
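As a check of the example above, note that, by the given factorizations, \[ \frac{64}{\sigma(64)}+\frac{173369889}{\sigma(173369889)} =\frac{64}{127}+\frac{3^2\times7}{127} =\frac{64+63}{127}=1, \] and that the anarchy condition holds because $173369889$ and $\sigma(173369889)$ are odd while $64$ is a power of $2$, and because $173369889$ is divisible by neither $2$ nor $127=\sigma(64)$.

For illustration only, and not as a substitute for the \texttt{C} program mentioned above, the following minimal Python sketch (the function names are ours) enumerates harmonious pairs for small bounds by grouping the integers up to the bound according to the exact fraction $M/\sigma(M)$; it is far too memory-hungry for bounds like $10^8$, but it suffices for small experiments.
\begin{verbatim}
from fractions import Fraction
from math import gcd

def sigma(n):
    # Sum of divisors of n, via the prime factorization
    # obtained by trial division.
    total, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            pk_sum, pk = 1, 1
            while n % d == 0:
                n //= d
                pk *= d
                pk_sum += pk
            total *= pk_sum
        d += 1
    if n > 1:
        total *= n + 1
    return total

def harmonious_pairs(limit):
    # All pairs (M, N) with M <= N <= limit and
    # M/sigma(M) + N/sigma(N) == 1.
    by_ratio = {}
    for m in range(1, limit + 1):
        by_ratio.setdefault(Fraction(m, sigma(m)), []).append(m)
    pairs = []
    for r, ms in by_ratio.items():
        partners = by_ratio.get(1 - r, [])
        for m in ms:
            for n in partners:
                if m <= n:
                    pairs.append((m, n))
    return sorted(pairs)

def is_anarchy_pair(m, n):
    # The anarchy condition specialized to pairs:
    # (m, n*sigma(n)) = 1 and (n, m*sigma(m)) = 1.
    return gcd(m, n * sigma(n)) == 1 and gcd(n, m * sigma(m)) == 1

# Example: harmonious pairs up to 300 and which of them are anarchy.
for m, n in harmonious_pairs(300):
    print(m, n, is_anarchy_pair(m, n))
\end{verbatim}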
\begin{table}[htb] \renewcommand{\arraystretch}{1.06} \centering \caption{All coprime harmonious pairs $(M,N)$ with $M\le N\le 10^5$} \label{Table:PHP1}
\begin{tabular}{|cc|cc||cc|}\hline
$M$ & $N$ & \multicolumn{2}{c||}{Factorization of $M$ and $N$} & $(M,\sigma(N))$ & $(\sigma(M),N)$\\ \hline \hline $135$ & $3472$ & $3^3 \times 5$ & $2^4 \times 7 \times 31$ & $1$ & $16$ \\ $135$ & $56896$ & $3^3 \times 5$ & $2^6 \times 7 \times 127$ & $1$ & $16$ \\ $285$ & $45136$ & $3 \times 5 \times 19$ & $2^4 \times 7 \times 13 \times 31$ & $1$ & $16$ \\ $315$ & $51088$ & $3^2 \times 5 \times 7$ & $2^4 \times 31 \times 103$ & $1$ & $16$ \\ $345$ & $38192$ & $3 \times 5 \times 23$ & $2^4 \times 7 \times 11 \times 31$ & $3$ & $16$ \\ $868$ & $1485$ & $2^2 \times 7 \times 31$ & $3^3 \times 5 \times 11$ & $4$ & $1$ \\ $1204$ & $4455$ & $2^2 \times 7 \times 43$ & $3^4 \times 5 \times 11$ & $4$ & $11$ \\ $1683$ & $3500$ & $3^2 \times 11 \times 17$ & $2^2 \times 5^3 \times 7$ & $3$ & $4$ \\ $1683$ & $62000$ & $3^2 \times 11 \times 17$ & $2^4 \times 5^3 \times 31$ & $3$ & $8$ \\ $2324$ & $9945$ & $2^2 \times 7 \times 83$ & $3^2 \times 5 \times 13 \times 17$ & $28$ & $3$ \\ $3556$ & $63855$ & $2^2 \times 7 \times 127$ & $3^3 \times 5 \times 11 \times 43$ & $4$ & $1$ \\ $4455$ & $21328$ & $3^4 \times 5 \times 11$ & $2^4 \times 31 \times 43$ & $11$ & $8$ \\ $4845$ & $7084$ & $3 \times 5 \times 17 \times 19$ & $2^2 \times 7 \times 11 \times 23$ & $3$ & $4$ \\ $5049$ & $65968$ & $3^3 \times 11 \times 17$ & $2^4 \times 7 \times 19 \times 31$ & $1$ & $16$ \\ $6244$ & $43875$ & $2^2 \times 7 \times 223$ & $3^3 \times 5^3 \times 13$ & $28$ & $1$ \\ $6244$ & $90675$ & $2^2 \times 7 \times 223$ & $3^2 \times 5^2 \times 13 \times 31$ & $28$ & $1$ \\ $6675$ & $33488$ & $3 \times 5^2 \times 89$ & $2^4 \times 7 \times 13 \times 23$ & $3$ & $8$ \\ $7155$ & $13244$ & $3^3 \times 5 \times 53$ & $2^2 \times 7 \times 11 \times 43$ & $3$ & $4$ \\ $9945$ & $41168$ & $3^2 \times 5 \times 13 \times 17$ & $2^4 \times 31 \times 83$ & $3$ & $8$ \\ $12124$ & $84825$ & $2^2 \times 7 \times 433$ & $3^2 \times 5^2 \times 13 \times 29$ & $28$ & $1$ \\ $13275$ & $81424$ & $3^2 \times 5^2 \times 59$ & $2^4 \times 7 \times 727$ & $1$ & $4$ \\ $13965$ & $23312$ & $3 \times 5 \times 7^2 \times 19$ & $2^4 \times 31 \times 47$ & $3$ & $16$ \\ $24327$ & $75460$ & $3^3 \times 17 \times 53$ & $2^2 \times 5 \times 7^3 \times 11$ & $9$ & $20$ \\ $31724$ & $61335$ & $2^2 \times 7 \times 11 \times 103$ & $3^2 \times 5 \times 29 \times 47$ & $4$ & $3$ \\ $32835$ & $92456$ & $3 \times 5 \times 11 \times 199$ & $2^3 \times 7 \times 13 \times 127$ & $15$ & $8$ \\ $34485$ & $37492$ & $3 \times 5 \times 11^2 \times 19$ & $2^2 \times 7 \times 13 \times 103$ & $1$ & $28$ \\ $52700$ & $68211$ & $2^2 \times 5^2 \times 17 \times 31$ & $3^2 \times 11 \times 13 \times 53$ & $4$ & $9$ \\ $55341$ & $58900$ & $3^2 \times 11 \times 13 \times 43$ & $2^2 \times 5^2 \times 19 \times 31$ & $1$ & $4$ \\ $60515$ & $78864$ & $5 \times 7^2 \times 13 \times 19$ & $2^4 \times 3 \times 31 \times 53$ & $1$ & $48$ \\ $62992$ & $63855$ & $2^4 \times 31 \times 127$ & $3^3 \times 5 \times 11 \times 43$ & $16$ & $1$ \\ \hline \end{tabular} \end{table}
\begin{table}[htb] \renewcommand{\arraystretch}{1.06} \centering \setlength{\tabcolsep}{4pt} \caption{Number of harmonious pairs $(M,N)$ with $M\le N\le 10^k$} \label{Table:PHP2}
\begin{tabular}{|c||ccccccccc|}\hline
& $10$ & $\phantom{^2}10^2$ & $\phantom{^3}10^3$ & $\phantom{^4}10^4$ &
$\phantom{^5}10^5$ & $\phantom{^6}10^6$ & $\phantom{^7}10^7$ & $\phantom{^8}10^8$ & $\phantom{^9}10^9$\\ \hline \hline Harmonious & $1$ & $10$ & $55$ & $252$ & $983$ & $3666$ & $13602$ & $49929$ & $176453$\\ Coprime harmonious & $0$ & $0$ & $0$ & $6$ & $30$ & $133$ & $631$ & $2566$ & $10013$\\ \hline \end{tabular} \end{table}
\section{Notation} We denote the greatest common divisor of positive integers $a$ and $b$ by $(a,b)$, which we may distinguish from the notation for a pair of integers $(M,N)$ by the context. For positive integers $d$ and $n$, we write $d\parallel n$ if $d\mid n$ and $(d,n/d)=1$.
For any finite set $\mathcal{S}$ of integers, we let \[ \Pi(\mathcal{S})=\prod_{m\in \mathcal{S}}m,\quad \Phi(\mathcal{S})=\prod_{m\in \mathcal{S}}(m-1),\quad \Psi(\mathcal{S})=\Pi(\mathcal{S})\Phi(\mathcal{S}). \] Following the notation of Nielsen \cite{Nielsen_New}, we let \[ F_r(x)=x^{2^{r}}-x^{2^{r-1}} \] for an integer $r\ge1$ and a real number $x\ge1$, and we let $F_0(x)=x-1$ for the case $r=0$ and $x\ge1$. Actually, we will not use the full power of the functions $F_r(x)$ in the proof of Theorem \ref{Thm:main}, but we introduce them in an attempt to obtain better bounds in the lemmas on Diophantine inequalities. We prepare two lemmas on $F_r(x)$.
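For example, $F_1(x)=x^2-x$ and $F_2(x)=x^4-x^2$, and $F_L(2)=2^{2^{L}}-2^{2^{L-1}}$ is exactly the quantity appearing in Theorem \ref{Thm:Omega}.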
\begin{lemma} \label{Lem:F_increasing} For any integer $r\ge0$, $F_r(x)$ is increasing as a function of $x\ge1$. \end{lemma} \begin{proof} This is obvious for $r=0$ and also obvious for $r\ge1$ from the factorization \begin{equation} \label{EQ:F_factorization} F_r(x)=x^{2^{r-1}}(x^{2^{r-1}}-1) \end{equation} since both of $x^{2^{r-1}}$ and $(x^{2^{r-1}}-1)$ are increasing and non-negative for $x\ge1$. \end{proof}
\begin{lemma} \label{Lem:F_scaling} For any integer $r\ge1$ and real numbers $\alpha,x\ge1$, we have \[ F_r(x)\le\alpha^{-2^{r-1}}F_r(\alpha x)\le\alpha^{-1}F_r(\alpha x). \] \end{lemma} \begin{proof} By \eqref{EQ:F_factorization}, we have \[ \alpha^{-2^{r-1}}F_r(\alpha x) = x^{2^{r-1}}\left((\alpha x)^{2^{r-1}}-1\right) \ge x^{2^{r-1}}\left(x^{2^{r-1}}-1\right) = F_r(x). \] This completes the proof. \end{proof}
\section{Lemmas on Diophantine inequalities} \label{Section:Diophantine_inequality} In this section, we prove variants of Heath-Brown's lemma~\cite[Lemma 1]{HB_OPN} on Diophantine inequalities related to the equation \eqref{EQ:fundamental_eq}. We need to introduce some modifications suitable for applications to amicable tuples. Our proof of Theorem \ref{Thm:main} heavily relies on the equation \eqref{EQ:fundamental_eq}, or its generalization \begin{equation} \label{EQ:fundamental_eq_amicable} \frac{b_1}{a_1}\frac{M_1}{\sigma(M_1)}+\cdots+\frac{b_k}{a_k}\frac{M_k}{\sigma(M_k)}=1, \end{equation} where $a_i,b_i\ge1$ are integers. This equation is not as flexible as the equation \begin{equation} \label{EQ:fundamental_eq_OPN} \frac{\sigma(M)}{M}=\frac{a}{b}, \end{equation} which is used in the context of perfect numbers.
Actually, in the induction steps of Heath-Brown's method, there are two points where we use such a Diophantine equation or the corresponding inequalities. At the first point, we use the Diophantine inequality in its original form. On the other hand, at the second point, we need to take the ``reciprocal'' of the same Diophantine inequality. For odd perfect numbers, we can take the reciprocal of \eqref{EQ:fundamental_eq_OPN} without any big change of its shape. However, for amicable pairs, we need to take the reciprocal of each term in \eqref{EQ:fundamental_eq_amicable}, which transforms the equation into a slightly different shape. Thus, we prepare two different lemmas.
We start with Lemma 2 of Cook~\cite{Cook} in its refined form. This refinement was given by Goto~\cite[Lemma 2.4]{Goto}. Nielsen~\cite[Lemma 1.2]{Nielsen_New} also proved this refinement in an even stronger form, which allows some equalities among the $x_i$. We also need a variant of Cook's lemma given by Goto~\cite[Lemma 2.5]{Goto}. For completeness, we give a proof of these lemmas following the argument of Nielsen~\cite{Nielsen_New}.
\begin{lemma} \label{Lem:pre_Cook} For real numbers $0<x_1\le x_2$ and $0<\alpha<1$, we have \[ \left(1-\frac{1}{x_1}\right)\left(1-\frac{1}{x_2}\right) >\left(1-\frac{1}{x_1\alpha}\right)\left(1-\frac{1}{x_2\alpha^{-1}}\right) \] and \[ \left(1+\frac{1}{x_1}\right)\left(1+\frac{1}{x_2}\right) <\left(1+\frac{1}{x_1\alpha}\right)\left(1+\frac{1}{x_2\alpha^{-1}}\right). \] \end{lemma} \begin{proof} By expanding both sides of the inequalities, we find that it suffices to prove \[ \frac{1}{x_1}+\frac{1}{x_2}<\frac{1}{x_1\alpha}+\frac{1}{x_2\alpha^{-1}}. \] This is equivalent to \[ \left(\alpha-\frac{x_2}{x_1}\right)\left(\alpha-1\right)>0. \] Since $\alpha<1\le x_2/x_1$, the last inequality holds. This completes the proof. \end{proof}
\begin{lemma} \label{Lem:Cook} Let \begin{equation} \label{EQ:xy_Cook_condition1} 1<x_1\le x_2\le\cdots\le x_k,\quad 1<y_1\le y_2\le\cdots\le y_k \end{equation} be sequences of real numbers satisfying \begin{equation} \label{EQ:xy_Cook_condition2} \prod_{i=1}^{s}x_i\le\prod_{i=1}^{s}y_i \end{equation} for every $s$ with $1\le s\le k$. Then we have \[ \prod_{i=1}^{k}\left(1-\frac{1}{x_i}\right)\le\prod_{i=1}^{k}\left(1-\frac{1}{y_i}\right),\quad \prod_{i=1}^{k}\left(1+\frac{1}{x_i}\right)\ge\prod_{i=1}^{k}\left(1+\frac{1}{y_i}\right), \] where each of two equalities holds if and only if $x_i=y_i$ for every $i\ge1$. \end{lemma} \begin{proof} We first fix the tuple $\mathbf{x}=(x_i)$. Let us identify the tuple $\mathbf{y}=(y_i)$ with a point in the Euclidean space $\mathbb{R}^{k}$ and let \[
\mathcal{R}=\Set{\mathbf{y}\in\mathbb{R}^k| y_1\le\cdots\le y_k\ \text{and}\ \prod_{i=1}^{s}x_i\le\prod_{i=1}^{s}y_i\ \text{for all $1\le s\le k$}}, \] \[ G(\mathbf{y})=\prod_{i=1}^{k}\left(1-\frac{1}{y_i}\right),\quad H(\mathbf{y})=\prod_{i=1}^{k}\left(1+\frac{1}{y_i}\right). \] Note that the condition $y_1>1$ in \eqref{EQ:xy_Cook_condition1} is assured by the case $s=1$ of \eqref{EQ:xy_Cook_condition2} since $y_1\ge x_1>1$. Thus what we have to prove is that the minimum value of $G(\mathbf{y})$ and the maximum value of $H(\mathbf{y})$ for $\mathbf{y}\in\mathcal{R}$ are attained only at $\mathbf{y}=\mathbf{x}$.
Note that $G(\mathbf{y})$ is increasing in every variable $y_i$, that $H(\mathbf{y})$ is decreasing in every variable $y_i$, and that if $\mathbf{y}\in\mathcal{R}$, then \[ \left(\min\left(y_1,\prod_{i=1}^{k}x_i\right),\ldots,\min\left(y_k,\prod_{i=1}^{k}x_i\right)\right)\in\mathcal{R}. \] Thus, the minimum value of $G(\mathbf{y})$ and the maximum value of $H(\mathbf{y})$ for $\mathbf{y}\in\mathcal{R}$ exist and are attained in the compact set $\mathcal{R}\cap[1,\prod_{i=1}^{k}x_i]^k$.
Take $\mathbf{y}\in\mathcal{R}$ with $\mathbf{y}\neq\mathbf{x}$ arbitrarily. Since we proved the existence of the minimum and maximum values of $G(\mathbf{y})$ and $H(\mathbf{y})$, it suffices to prove that we can modify $\mathbf{y}$ to $\tilde{\mathbf{y}}\in\mathcal{R}$ such that $G(\mathbf{y})>G(\tilde{\mathbf{y}})$ and $H(\mathbf{y})<H(\tilde{\mathbf{y}})$. Take the smallest index $t$ with $x_t\neq y_t$ and $1\le t\le k$. Then we have \begin{equation} \label{EQ:xy_equal} x_i=y_i\ \ \text{for all $1\le i<t$} \end{equation} so that \begin{equation} \label{EQ:xy_product_equal} \prod_{i=1}^{t-1}x_i=\prod_{i=1}^{t-1}y_i. \end{equation} By \eqref{EQ:xy_Cook_condition2}, \eqref{EQ:xy_product_equal} and $x_t\neq y_t$, we have \begin{equation} \label{EQ:xy_phase_transition} x_t<y_t,\quad\text{so}\quad\prod_{i=1}^{t}x_i<\prod_{i=1}^{t}y_i. \end{equation} If $t$ is the last index, i.e.~$t=k$, then by \eqref{EQ:xy_equal} and \eqref{EQ:xy_phase_transition} we have \begin{align*} G(\mathbf{y}) =\prod_{i=1}^{k-1}\left(1-\frac{1}{x_i}\right)\left(1-\frac{1}{y_k}\right) >\prod_{i=1}^{k-1}\left(1-\frac{1}{x_i}\right)\left(1-\frac{1}{x_k}\right) =G(\mathbf{x}), \end{align*} \begin{align*} H(\mathbf{y}) =\prod_{i=1}^{k-1}\left(1+\frac{1}{x_i}\right)\left(1+\frac{1}{y_k}\right) <\prod_{i=1}^{k-1}\left(1+\frac{1}{x_i}\right)\left(1+\frac{1}{x_k}\right) =H(\mathbf{x}). \end{align*} Thus $\tilde{\mathbf{y}}=\mathbf{x}$ satisfies the conditions. Hence we may assume $1\le t\le k-1$.
We next take the smallest index $v$ with $y_v=y_{t+1}$ and $t<v\le k$. Then we have \begin{equation} \label{EQ:v_cond} y_{t+1}=y_{t+2}=\cdots=y_{v}<y_{v+1}, \end{equation} where we use a convention $y_{k+1}=2y_k$. Now we prove \begin{equation} \label{EQ:critical_range_ineq} \prod_{i=1}^{s}x_i<\prod_{i=1}^{s}y_i\ \ \text{for all $t\le s<v$}. \end{equation} Assume to the contrary that there is $s$ with $t\le s<v$ and \[ \prod_{i=1}^{s}x_i\ge\prod_{i=1}^{s}y_i. \] By \eqref{EQ:xy_Cook_condition2}, we find that \begin{equation} \label{EQ:s_product_equal} \prod_{i=1}^{s}x_i=\prod_{i=1}^{s}y_i. \end{equation} Thus, by the second inequality in \eqref{EQ:xy_phase_transition}, \[ \prod_{i=t+1}^{s}x_i = \left(\prod_{i=1}^{t}x_i\right)^{-1}\prod_{i=1}^{s}x_i > \left(\prod_{i=1}^{t}y_i\right)^{-1}\prod_{i=1}^{s}y_i = \prod_{i=t+1}^{s}y_i. \] Combined with \eqref{EQ:xy_Cook_condition1} and \eqref{EQ:v_cond}, this gives \[ x_{s+1}\ge x_s \ge\left(\prod_{i=t+1}^{s}x_i\right)^{1/(s-t)} >\left(\prod_{i=t+1}^{s}y_i\right)^{1/(s-t)}=y_{s}=y_{s+1} \] since $s+1\le v$. By multiplying both sides by \eqref{EQ:s_product_equal}, we obtain \[ \prod_{i=1}^{s+1}x_i>\prod_{i=1}^{s+1}y_i, \] which contradicts to the assumption \eqref{EQ:xy_Cook_condition2}. Thus we obtain \eqref{EQ:critical_range_ineq}.
By $x_t<y_t$ , $y_v<y_{v+1}$ and \eqref{EQ:critical_range_ineq}, we find that \[ 0\le\max\left(x_t/y_t,\ y_{v}/y_{v+1},\ \prod_{i=1}^{t}x_i/y_i,\ \ldots,\ \prod_{i=1}^{v-1}x_i/y_i\right)<1 \] so we can take a positive real number $\alpha$ with \begin{equation} \label{EQ:alpha_choice} \max\left(x_t/y_t,\ y_{v}/y_{v+1},\ \prod_{i=1}^{t}x_i/y_i,\ \ldots,\ \prod_{i=1}^{v-1}x_i/y_i\right)<\alpha<1. \end{equation} We now define $\tilde{\mathbf{y}}=(\tilde{y}_i)\in\mathbb{R}^k$ by \begin{equation} \label{EQ:tilde_choice} \tilde{y}_i= \left\{ \begin{array}{ll} y_i&(\text{for $i\neq t,v$}),\\ y_i\alpha&(\text{for $i=t$}),\\ y_i\alpha^{-1}&(\text{for $i=v$}),\\ \end{array} \right. \end{equation} and check that this $\tilde{\mathbf{y}}$ satisfies the desired conditions.
First, we check \begin{equation} \label{EQ:tilde_Cook_condition1} \tilde{y}_1\le\cdots\le\tilde{y}_k,\quad\text{i.e.}\quad \tilde{y}_i\le\tilde{y}_{i+1}\ \ \text{for all $1\le i<k$}. \end{equation} This is obvious for $i\not\in\{t-1,t,v-1,v\}$ since in this case, $\tilde{y}_i$ and $\tilde{y}_{i+1}$ coincide with $y_i$ and $y_{i+1}$ respectively. For the case $i\in\{t-1,t,v-1,v\}$ and $1\le i<k$, we use \eqref{EQ:alpha_choice} to check \[ \tilde{y}_{t-1}=y_{t-1}=x_{t-1}\le x_t=y_t\cdot(x_t/y_t)<y_t\alpha=\tilde{y}_t, \] \[ \tilde{y}_{t}=y_t\alpha<y_t\le y_{t+1}\le\tilde{y}_{t+1},\quad \tilde{y}_{v-1}\le y_{v-1}\le y_v<y_v\alpha^{-1}=\tilde{y}_v, \] \[ \tilde{y}_v=y_v\alpha^{-1}<y_v\cdot(y_v/y_{v+1})^{-1}=y_{v+1}=\tilde{y}_{v+1}. \] Thus the condition \eqref{EQ:tilde_Cook_condition1} holds.
Second, we check \begin{equation} \label{EQ:tilde_Cook_condition2} \prod_{i=1}^{s}x_i\le\prod_{i=1}^{s}\tilde{y}_i\ \ \text{for all $1\le s\le k$}. \end{equation} This is obvious for $1\le s<t$ and $v\le s\le k$ since in these cases, we have \[ \prod_{i=1}^{s}\tilde{y}_i=\prod_{i=1}^{s}y_i \] by our choice \eqref{EQ:tilde_choice}. For $t\le s<v$, we see that \[ \prod_{i=1}^{s}x_i =\left(\prod_{i=1}^{s}x_i/y_i\right)\left(\prod_{i=1}^{s}y_i\right) <\alpha\prod_{i=1}^{s}y_i =\prod_{i=1}^{s}\tilde{y}_i \] by \eqref{EQ:alpha_choice}. Thus the condition \eqref{EQ:tilde_Cook_condition2} also holds, i.e. $\tilde{\mathbf{y}}\in\mathcal{R}$.
Finally, we check that $G(\mathbf{y})>G(\tilde{\mathbf{y}})$ and $H(\mathbf{y})<H(\tilde{\mathbf{y}})$. Since $0<\alpha<1$ and $y_t\le y_{t+1}\le y_v$, by recalling \eqref{EQ:tilde_choice}, we can apply Lemma \ref{Lem:pre_Cook} to obtain \begin{align*} G(\mathbf{y}) &= \prod_{\substack{i=1\\i\neq t,v}}^{k}\left(1-\frac{1}{\tilde{y}_i}\right) \left(1-\frac{1}{y_t}\right)\left(1-\frac{1}{y_v}\right)\\ &>\prod_{\substack{i=1\\i\neq t,v}}^{k}\left(1-\frac{1}{\tilde{y}_i}\right) \left(1-\frac{1}{y_t\alpha}\right)\left(1-\frac{1}{y_v\alpha^{-1}}\right) = G(\tilde{\mathbf{y}}), \end{align*} \begin{align*} H(\mathbf{y}) &= \prod_{\substack{i=1\\i\neq t,v}}^{k}\left(1+\frac{1}{\tilde{y}_i}\right) \left(1+\frac{1}{y_t}\right)\left(1+\frac{1}{y_v}\right)\\ &<\prod_{\substack{i=1\\i\neq t,v}}^{k}\left(1+\frac{1}{\tilde{y}_i}\right) \left(1+\frac{1}{y_t\alpha}\right)\left(1+\frac{1}{y_v\alpha^{-1}}\right) = H(\tilde{\mathbf{y}}). \end{align*} Thus our $\tilde{\mathbf{y}}$ satisfies the desired conditions. This completes the proof. \end{proof}
We next prove the first lemma on Diophantine inequalities. The Diophantine inequality in the next lemma seems to be a natural generalization of the Diophantine inequality (2) of \cite{HB_OPN} to a linear form with several summands. \begin{lemma} \label{Lem:HB_ineq1} Let $k\ge1$ be an integer. For $R\ge1$, consider a sequence of integers \[ \mathcal{M}=(m_j)_{j=1}^{R},\quad1<m_1\le m_2\le\cdots\le m_R, \] a decomposition of the index set \[ J=\{1,\,\ldots,\,R\},\quad J=\bigcup_{i=1}^{k}J_i,\quad J_1,\,\ldots,\,J_k\colon\text{disjoint} \] and a tuple of integers \[a_1,\,\ldots,\,a_k,\,b_1,\,\ldots,\,b_k\ge1\] satisfying $a_i\ge b_i$ for all $i$. If the pair of inequalities \begin{equation} \label{EQ:HB_assump_A1} \sum_{i=1}^{k}\frac{b_i}{a_i}\prod_{\substack{j=1\\j\in J_i}}^{R}\left(1-\frac{1}{m_j}\right)\le1 \end{equation} \begin{equation} \label{EQ:HB_assump_B1} \sum_{i=1}^{k}\frac{b_i}{a_i}\prod_{\substack{j=1\\j\in J_i}}^{R-1}\left(1-\frac{1}{m_j}\right)>1 \end{equation} holds, then we have \begin{equation} \label{EQ:M_bound1} a\prod_{j=1}^{R}m_j\le F_{R}(a+1), \end{equation} where $a=a_1\cdots a_k$. \end{lemma}
\begin{proof} We first give a preliminary remark. By \eqref{EQ:HB_assump_B1}, we always have \begin{equation} \label{EQ:abcd_ineq_pre1} \sum_{i=1}^{k}\frac{b_i}{a_i}>1 \end{equation} since each of the products $\prod(1-1/m_j)$ is $\le1$, even when it is an empty product. Since the left-hand side of \eqref{EQ:abcd_ineq_pre1} is a fraction whose denominator divides $a$, \begin{equation} \label{EQ:abcd_ineq1} \sum_{i=1}^{k}\frac{b_i}{a_i}\ge\frac{a+1}{a}. \end{equation} We use this inequality several times below.
We use induction on $R$. If $R=1$, by symmetry we may assume without loss of generality that $J_1=\cdots=J_{k-1}=\emptyset$ and $J_{k}=\{1\}$. Then \eqref{EQ:HB_assump_A1} implies \[ \sum_{i=1}^{k}\frac{b_i}{a_i}-\frac{b_k}{a_k}\cdot\frac{1}{m_1}\le1. \] By \eqref{EQ:abcd_ineq1}, we find that \[ \frac{b_k}{a_k}\cdot\frac{1}{m_1}\ge\sum_{i=1}^{k}\frac{b_i}{a_i}-1\ge\frac{1}{a} \] so that \[ am_1\le a^2\le a(a+1)=F_1(a+1) \] since $a_k\ge b_k$. This completes the proof of the case $R=1$.
We next assume that the assertion holds for any sequence $\mathcal{M}$ of length $\le R-1$ and prove the assertion for the case in which $\mathcal{M}$ has the length $R$. We use a special sequence \[ 1<x_{1}<\cdots<x_{R}, \] which is defined by \[ x_{j}= \left\{ \begin{array}{ll} (a+1)^{2^{j-1}}+1&(\text{for $1\le j< R$})\\ (a+1)^{2^{R-1}}&(\text{for $j=R$}). \end{array} \right. \] We first consider the case \[ \prod_{j=1}^{r}x_{j}>\prod_{j=1}^{r}m_{j}, \] for some $1\le r<R$. Then we have \begin{equation} \label{EQ:acm_small_m1} am_{1}\cdots m_{r} <ax_{1}\cdots x_{r} = (a+1)^{2^{r}}-1. \end{equation} By using notations \[ a'_i=a_i\prod_{\substack{j=1\\j\in J_i}}^{r}m_{j},\quad b'_i=b_i\prod_{\substack{j=1\\j\in J_i}}^{r}(m_{j}-1),\quad a'=a'_1\cdots a'_k, \] we can rewrite \eqref{EQ:HB_assump_A1} and \eqref{EQ:HB_assump_B1} as \[ \sum_{i=1}^{k}\frac{b'_i}{a'_i}\prod_{\substack{j=r+1\\j\in J_i}}^{R}\left(1-\frac{1}{m_j}\right)\le1,\quad \sum_{i=1}^{k}\frac{b'_i}{a'_i}\prod_{\substack{j=r+1\\j\in J_i}}^{R-1}\left(1-\frac{1}{m_j}\right)>1. \] Note that the condition $r<R$ is necessary for rewriting the inequality \eqref{EQ:HB_assump_B1} as above. By the induction hypothesis, we obtain \[ a\prod_{j=1}^{R}m_j = a'\prod_{j=r+1}^{R}m_j \le F_{R-r}(a'+1) = F_{R-r}(am_{1}\cdots m_{r}+1) \] By \eqref{EQ:acm_small_m1} and Lemma \ref{Lem:F_increasing}, this implies \[ a\prod_{j=1}^{R}m_j \le F_{R-r}\left((a+1)^{2^{r}}\right) = F_{R}(a+1) \] so the assertion follows. Thus we may assume \begin{equation} \label{EQ:partial_product_small1} \prod_{j=1}^{r}x_j\le\prod_{j=1}^{r}m_j \end{equation} for all $1\le r<R$. We may also assume \eqref{EQ:partial_product_small1} for the case $r=R$ since otherwise \begin{equation} \label{EQ:xy_calculation1} a\prod_{j=1}^{R}m_j\le a\prod_{j=1}^{R}x_j=F_R(a+1) \end{equation} and the assertion follows. Thus, for the remaining case, we have \eqref{EQ:partial_product_small1} for every $1\le r\le R$. Then we can apply Lemma \ref{Lem:Cook} to obtain \begin{equation} \label{EQ:Cook_M1} \prod_{j=1}^{R}\left(1-\frac{1}{m_{j}}\right) \ge\prod_{j=1}^{R}\left(1-\frac{1}{x_{j}}\right) =\frac{a}{a+1} \end{equation} Then by \eqref{EQ:HB_assump_A1} and \eqref{EQ:abcd_ineq1}, we have \[ 1 \ge \sum_{i=1}^{k}\frac{b_i}{a_i}\prod_{\substack{j=1\\j\in J_i}}^{R}\left(1-\frac{1}{m_{j}}\right) \ge \prod_{j=1}^{R}\left(1-\frac{1}{m_{j}}\right)\sum_{i=1}^{k}\frac{b_i}{a_i} \ge \frac{a}{a+1}\sum_{i=1}^{k}\frac{b_i}{a_i}\ge1. \] Thus we must have the equality in \eqref{EQ:Cook_M1}. By Lemma \ref{Lem:Cook}, we find that \[ m_{1}=x_{1},\ \ \ldots,\ \ m_{R}=x_{R}. \] By using \eqref{EQ:xy_calculation1}, we have the assertion again. This completes the proof. \end{proof}
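For the reader's convenience, we record the elementary computation behind \eqref{EQ:Cook_M1}. The telescoping identity \[ (y-1)\prod_{j=1}^{m}\left(y^{2^{j-1}}+1\right)=y^{2^{m}}-1, \] applied with $y=a+1$ and $m=R-1$, gives $\prod_{j=1}^{R-1}x_j=\bigl((a+1)^{2^{R-1}}-1\bigr)/a$, while $\prod_{j=1}^{R-1}(x_j-1)=(a+1)^{2^{R-1}-1}$ and $x_R=(a+1)^{2^{R-1}}$. Hence, for $R\ge2$, \[ \prod_{j=1}^{R}\left(1-\frac{1}{x_{j}}\right) =\frac{\prod_{j=1}^{R-1}(x_j-1)}{\prod_{j=1}^{R-1}x_j}\cdot\frac{x_R-1}{x_R} =\frac{a(a+1)^{2^{R-1}-1}}{(a+1)^{2^{R-1}}} =\frac{a}{a+1}; \] the case $R=1$ is immediate. The same identity with $y=a$ yields \eqref{EQ:Cook_M2} in the proof of Lemma \ref{Lem:HB_ineq2} below.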
We next prove the second lemma on Diophantine inequalities. \begin{lemma} \label{Lem:HB_ineq2} Let $k\ge1$ be an integer. For $R\ge1$, consider a sequence of integers \[ \mathcal{M}=(m_j)_{j=1}^{R},\quad1<m_1\le m_2\le\cdots\le m_R, \] a decomposition of the index set \[ J=\{1,\,\ldots,\,R\},\quad J=\bigcup_{i=1}^{k}J_i,\quad J_1,\,\ldots,\,J_k\colon\text{disjoint} \] and a tuple of integers \[a_1,\,\ldots,\,a_k,\,b_1,\,\ldots,\,b_k\ge1.\] If the pair of inequalities \begin{equation} \label{EQ:HB_assump_A2} \sum_{i=1}^{k}\frac{b_i}{a_i}\prod_{\substack{j=1\\j\in J_i}}^{R}\left(1-\frac{1}{m_j}\right)^{-1}\ge1 \end{equation} \begin{equation} \label{EQ:HB_assump_B2} \sum_{i=1}^{k}\frac{b_i}{a_i}\prod_{\substack{j=1\\j\in J_i}}^{R-1}\left(1-\frac{1}{m_j}\right)^{-1}<1 \end{equation} holds, then we have \begin{equation} \label{EQ:M_bound2} a\prod_{j=1}^{R}(m_j-1)\le F_{R}(a), \end{equation} where $a=a_1\cdots a_k$. \end{lemma}
\begin{remark} \label{Rem:abcd_cond} We assumed $a_i\ge b_i$ in Lemma \ref{Lem:HB_ineq1}, but we do not assume this condition in Lemma \ref{Lem:HB_ineq2} above. Actually, by \eqref{EQ:HB_assump_B2}, we automatically obtain $a_i>b_i$. \end{remark}
\begin{proof} By \eqref{EQ:HB_assump_B2}, we always have \begin{equation} \label{EQ:abcd_ineq2} \frac{1}{a}\le\sum_{i=1}^{k}\frac{b_i}{a_i}\le1-\frac{1}{a} \end{equation} as in the proof of \eqref{EQ:abcd_ineq1}. Again this is a key in the argument below.
We use induction on $R$. If $R=1$, by symmetry we may assume without loss of generality that $J_1=\cdots=J_{k-1}=\emptyset$ and $J_{k}=\{1\}$. Then \eqref{EQ:HB_assump_A2} implies \[ 1 \le \sum_{i=1}^{k-1}\frac{b_i}{a_i}+\frac{b_k}{a_k}\left(1-\frac{1}{m_1}\right)^{-1} = \sum_{i=1}^{k}\frac{b_i}{a_i}+\frac{b_k}{a_k}\cdot\frac{1}{m_1-1}. \] By \eqref{EQ:abcd_ineq2}, we find that \[ \frac{b_k}{a_k}\cdot\frac{1}{m_1-1}\ge1-\sum_{i=1}^{k}\frac{b_i}{a_i}\ge\frac{1}{a} \] so that \[ a(m_1-1)\le a(a-1)=F_1(a) \] since $a_k>b_k$. This completes the proof of the case $R=1$.
We next assume that the assertion holds for any sequence $\mathcal{M}$ of length $\le R-1$ and prove the assertion for the case in which $\mathcal{M}$ has the length $R$. We use a special sequence \[ 1<x_{1}<\cdots<x_{R}, \] which is defined by \[ x_{j}= \left\{ \begin{array}{ll} a^{2^{j-1}}+1&(\text{for $1\le j< R$})\\ a^{2^{R-1}}&(\text{for $j=R$}). \end{array} \right. \] We first consider the case \[ \prod_{j=1}^{r}(x_{j}-1)>\prod_{j=1}^{r}(m_{j}-1), \] for some $1\le r<R$. Then we have \begin{equation} \label{EQ:acm_small_m2} a(m_{1}-1)\cdots (m_{r}-1) <a(x_{1}-1)\cdots (x_{r}-1) = a^{2^{r}}. \end{equation} By using notations \[ a''_i=a_i\prod_{\substack{j=1\\j\in J_i}}^{r}(m_{j}-1),\quad b''_i=b_i\prod_{\substack{j=1\\j\in J_i}}^{r}m_{j},\quad a''=a''_1\cdots a''_k, \] we can rewrite \eqref{EQ:HB_assump_A2} and \eqref{EQ:HB_assump_B2} as \[ \sum_{i=1}^{k}\frac{b''_i}{a''_i} \prod_{\substack{j=r+1\\j\in J_i}}^{R}\left(1-\frac{1}{m_j}\right)^{-1}\ge1,\quad \sum_{i=1}^{k}\frac{b''_i}{a''_i} \prod_{\substack{j=r+1\\j\in J_i}}^{R-1}\left(1-\frac{1}{m_j}\right)^{-1}<1. \] By the induction hypothesis and \eqref{EQ:acm_small_m2}, we obtain \begin{align*} a\prod_{j=1}^{R}(m_j-1) &= a''\prod_{j=r+1}^{R}(m_j-1) \le F_{R-r}(a'') \le F_{R}(a) \end{align*} so the assertion follows. Thus we may assume \begin{equation} \label{EQ:partial_product_small2} \prod_{j=1}^{r}(x_j-1)\le\prod_{j=1}^{r}(m_j-1) \end{equation} for all $1\le r<R$. We may also assume \eqref{EQ:partial_product_small2} for the case $r=R$ since otherwise \begin{equation} \label{EQ:xy_calculation2} a\prod_{j=1}^{R}(m_j-1)\le a\prod_{j=1}^{R}(x_j-1)=F_R(a) \end{equation} and the assertion follows. Thus, for the remaining case, we have \eqref{EQ:partial_product_small2} for every $1\le r\le R$. Note that by \eqref{EQ:abcd_ineq2}, \[ 1<a=(x_1-1). \] Thus we can apply Lemma \ref{Lem:Cook} to obtain \begin{equation} \label{EQ:Cook_M2} \prod_{j=1}^{R}\left(1-\frac{1}{m_{j}}\right)^{-1} = \prod_{j=1}^{R}\left(1+\frac{1}{m_{j}-1}\right) \le \prod_{j=1}^{R}\left(1+\frac{1}{x_{j}-1}\right) =\frac{a}{a-1}. \end{equation} Then by \eqref{EQ:HB_assump_A2} and \eqref{EQ:abcd_ineq2}, we have \[ 1 \le \sum_{i=1}^{k}\frac{b_i}{a_i}\prod_{\substack{j=1\\j\in J_i}}^{R}\left(1-\frac{1}{m_{j}}\right)^{-1} \le \prod_{j=1}^{R}\left(1-\frac{1}{m_{j}}\right)^{-1}\sum_{i=1}^{k}\frac{b_i}{a_i} \le \frac{a}{a-1}\sum_{i=1}^{k}\frac{b_i}{a_i} \le1. \] Thus we must have the equality in \eqref{EQ:Cook_M2}. By Lemma \ref{Lem:Cook}, we find that \[ m_{1}=x_{1},\ \ \ldots,\ \ m_{R}=x_{R}. \] By using \eqref{EQ:xy_calculation2}, we have the assertion again. This completes the proof. \end{proof}
\section{Upper bounds \`a la Borho} \label{Section:Omega} In this section, we prove Theorem \ref{Thm:Omega}, which gives upper bounds of Borho type. \begin{proof}[Proof of Theorem \ref{Thm:Omega}] We first consider a harmonious tuple $(M_{i})_{i=1}^{k}$. Note that \[ \frac{M}{\sigma(M)} = \prod_{p^e\parallel M}\frac{p^e}{1+\cdots+p^e} = \prod_{p^e\parallel M}\prod_{f=1}^{e}\left(1-\frac{1}{1+\cdots+p^f}\right), \] as mentioned in the proof of Satz 3 of \cite{Borho2}. Thus, \eqref{EQ:fundamental_eq} can be rewritten as \[ \sum_{i=1}^{k}\prod_{p^e\parallel M_i}\prod_{f=1}^{e}\left(1-\frac{1}{1+\cdots+p^f}\right)=1. \] If we remove any factor from any summand, then the left-hand side becomes larger. Thus we can apply Lemma \ref{Lem:HB_ineq1} and obtain \begin{equation} \label{EQ:Omega_after_lemma} \sigma(M_1)\cdots\sigma(M_k) \le \prod_{i=1}^{k}\prod_{p^e\parallel M_i}\prod_{f=1}^{e}(1+\cdots+p^f) \le F_L(2)=2^{2^{L}}-2^{2^{L-1}}, \end{equation} where $L$ is the number of factors in the product above, so \[ L=\sum_{i=1}^{k}\sum_{p^e\parallel M_i}e=\sum_{i=1}^{k}\Omega(M_i)=\Omega(M_1\cdots M_k). \] Note that the $M_i$ may share common factors since we do not assume anything about divisibility; however, this does not affect the above arguments. Now, by using the inequality of the arithmetic and geometric means in \eqref{EQ:fundamental_eq}, we find that \[ 1 =\frac{M_1}{\sigma(M_1)}+\cdots+\frac{M_k}{\sigma(M_k)} \ge k\left(\frac{M_1\cdots M_k}{\sigma(M_1)\cdots\sigma(M_k)}\right)^{1/k}, \] so \[ \sigma(M_1)\cdots\sigma(M_k) = \left(\frac{\sigma(M_1)\cdots\sigma(M_k)}{M_1\cdots M_k}\right)M_1\cdots M_k \ge k^kM_1\cdots M_k. \] On inserting this into \eqref{EQ:Omega_after_lemma}, we arrive at the assertion for harmonious tuples.
We next consider a unitary harmonious tuple $(M_{i})_{i=1}^{k}$. By using \[ \frac{M}{\sigma^\ast(M)}=\prod_{p^e\parallel M}\frac{p^e}{1+p^e}=\prod_{p^e\parallel M}\left(1-\frac{1}{1+p^e}\right), \] we can rewrite \eqref{EQ:fundamental_eq_unitary} as \[ \sum_{i=1}^{k}\prod_{p^e\parallel M_i}\left(1-\frac{1}{1+p^e}\right)=1. \] Applying Lemma \ref{Lem:HB_ineq1} as above, we see that \begin{equation} \label{EQ:unitary_after_lemma} \sigma^\ast(M_1)\cdots\sigma^\ast(M_k) = \prod_{i=1}^{k}\prod_{p^e\parallel M_i}(1+p^e) \le F_L(2)=2^{2^{L}}-2^{2^{L-1}}, \end{equation} where $L$ is given by \[ L=\sum_{i=1}^{k}\sum_{p^e\parallel M_i}1=\omega(M_1)+\cdots+\omega(M_k). \] By using the inequality of the arithmetic and geometric means in \eqref{EQ:fundamental_eq_unitary}, we find \[ \sigma^\ast(M_1)\cdots\sigma^\ast(M_k) = \left(\frac{\sigma^\ast(M_1)\cdots\sigma^\ast(M_k)}{M_1\cdots M_k}\right)M_1\cdots M_k \ge k^kM_1\cdots M_k. \] On inserting this into \eqref{EQ:unitary_after_lemma}, we obtain the assertion for unitary harmonious tuples. \end{proof}
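For a quick numerical illustration (not part of the proof), the two inequalities above can be checked on the classical amicable pair $(220,284)$: since $\sigma(220)=\sigma(284)=504=220+284$, the identity $\sum_i M_i/\sigma(M_i)=1$, i.e.~\eqref{EQ:fundamental_eq}, holds for this pair. The following Python sketch is a minimal sanity check only; the helper \texttt{F} transcribes $F_R(a)=a^{2^R}-a^{2^{R-1}}$ from the displays above.
\begin{verbatim}
from fractions import Fraction
from sympy import divisor_sigma, factorint

def F(R, a):
    # F_R(a) = a^(2^R) - a^(2^(R-1)), as in the displays above
    return a**(2**R) - a**(2**(R-1))

M = (220, 284)                      # classical amicable pair
sigma = [int(divisor_sigma(m)) for m in M]

# fundamental equation: sum_i M_i / sigma(M_i) = 1
assert sum(Fraction(m, s) for m, s in zip(M, sigma)) == 1

# L = Omega(M_1 M_2); here L = 7
L = sum(sum(factorint(m).values()) for m in M)

# sigma(M_1) sigma(M_2) <= F_L(2)  and  k^k M_1 M_2 <= F_L(2)  with k = 2
assert sigma[0] * sigma[1] <= F(L, 2)
assert len(M)**len(M) * M[0] * M[1] <= F(L, 2)
\end{verbatim}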
\section{The induction lemma} \label{Section:induction} In this section, we prove an induction lemma. We start with a divisibility lemma, a special case of which is also used in the proof of Lemma 4 of \cite{Pollack_bound}.
\begin{lemma} \label{Lem:divisibility_lemma} Let $k\ge2$ be an integer, $(M_i)_{i=1}^{k}$ be an anarchy harmonious tuple and suppose that a tuple of decompositions \[ M_i=U_iV_i,\quad (U_i,V_i)=1,\quad U:=U_{1}\cdots U_{k}>1 \] is given. Then we have \[ \sum_{i=1}^{k} \frac{V_i}{\sigma(V_i)} \prod_{\substack{p\mid U_i\\p\in\mathcal{S}}}\left(1-\frac{1}{p}\right) \neq1 \] for any set $\mathcal{S}$ of prime factors of $U$. \end{lemma}
\begin{proof} Assume to the contrary that \begin{equation} \label{EQ:assumption_divisibility_lemma} \sum_{i=1}^{k} \frac{V_i}{\sigma(V_i)} \prod_{\substack{p\mid U_i\\p\in\mathcal{S}}}\left(1-\frac{1}{p}\right) =1 \end{equation} for some set $\mathcal{S}$ of prime factors of $U$. We first claim that $\mathcal{S}$ is non-empty. Since $U=U_{1}\cdots U_{k}>1$, we have $U_{i}/\sigma(U_{i})<1$ for some $i$. Thus, by \eqref{EQ:fundamental_eq}, \[ 1=\sum_{i=1}^{k}\frac{U_i}{\sigma(U_i)}\frac{V_i}{\sigma(V_i)}<\sum_{i=1}^{k}\frac{V_i}{\sigma(V_i)}. \] Comparing this with \eqref{EQ:assumption_divisibility_lemma}, we see that $\mathcal{S}$ must be non-empty. By multiplying \eqref{EQ:assumption_divisibility_lemma} by \[ \prod_{i=1}^{k}\sigma(V_i)\prod_{p\in\mathcal{S}}p, \] we have \begin{equation} \label{EQ:after_multiplying} \sum_{i=1}^{k} V_i\prod_{\substack{j=1\\i\neq j}}^{k}\sigma(V_j) \prod_{\substack{p\mid U_i\\p\in\mathcal{S}}}(p-1) \prod_{\substack{p\nmid U_i\\p\in\mathcal{S}}}p = \prod_{i=1}^{k}\sigma(V_i)\prod_{p\in\mathcal{S}}p. \end{equation} Let $P$ be the largest prime in $\mathcal{S}$, which exists since $\mathcal{S}$ is non-empty. By symmetry, we may assume $P\mid U_1$. Then in \eqref{EQ:after_multiplying}, the right-hand side and all terms on the left-hand side except the one with $i=1$ are divisible by $P$, since $(M_i)_{i=1}^{k}$ is anarchy and hence $U_{1},\ldots,U_{k}$ are pairwise coprime. This implies \begin{equation} \label{EQ:P_divides} P\mid V_1\prod_{j=2}^{k}\sigma(V_j) \prod_{\substack{p\mid U_1\\p\in\mathcal{S}}}(p-1) \prod_{\substack{p\nmid U_1\\p\in\mathcal{S}}}p. \end{equation} Since $P$ is the largest prime in $\mathcal{S}$ and $(M_i)_{i=1}^{k}$ is anarchy, we find \[ P\nmid \prod_{j=2}^{k}\sigma(V_j) \prod_{\substack{p\mid U_1\\p\in\mathcal{S}}}(p-1)\prod_{\substack{p\nmid U_1\\p\in\mathcal{S}}}p. \] Thus by \eqref{EQ:P_divides}, $P\mid V_1$, which contradicts $P\mid U_1$ and $(U_1,V_1)=1$. \end{proof}
By using Lemma \ref{Lem:divisibility_lemma}, we can now prove the following variant of Heath-Brown's induction lemma, which corresponds to Lemma 1.5 of \cite{Nielsen_New}. \begin{lemma} \label{Lem:HB_induction} Let $k\ge2$ be an integer, $(M_i)_{i=1}^{k}$ be an anarchy harmonious tuple and suppose that a tuple of decompositions \[ M_i=U_iV_i,\quad (U_i,V_i)=1,\quad U:=U_{1}\cdots U_{k}>1,\quad V:=V_{1}\cdots V_{k}, \] and a set $\mathcal{S}$ of prime factors of $U$ are given. Then there is a tuple of decompositions \[ M_i=U'_iV'_i,\quad (U'_i,V'_i)=1,\quad U':=U'_{1}\cdots U'_{k},\quad V':=V'_{1}\cdots V'_{k},\quad V\parallel V', \] and a set $\mathcal{S}'$ of prime factors of $U'$ with the following conditions\,{\upshape:} \begin{enumerate} \renewcommand{\labelenumi}{{\upshape(\roman{enumi})}}
\item $v:=|\mathcal{P}'|\ge1$, where $\mathcal{P}':=\{p\colon\text{prime}\mid p\mid V',p\nmid V\}$, \item we have \[ \sigma(V')\Pi(\mathcal{S}')\Psi(\mathcal{P}') \le F_{v+w}(\sigma(V)\Pi(\mathcal{S})+1), \]
where $w:=v+|\mathcal{S}'|-|\mathcal{S}|$, \item if $w=0$, then the inequality in {\upshape(ii)} can be improved to \[ \sigma(V')\Pi(\mathcal{S}')\Psi(\mathcal{P}') \le F_{v+w}(\sigma(V)\Pi(\mathcal{S})).\hphantom{{}+{}1} \] \end{enumerate} \end{lemma}
\begin{proof} We first show that there is a set $\mathcal{T}$ of prime factors of $U$ satisfying \begin{equation} \label{EQ:T_disjoint_cond} \mathcal{S}\cap\mathcal{T}=\emptyset, \end{equation} \begin{equation} \label{EQ:T_size_cond} \sum_{i=1}^{k} \frac{V_i}{\sigma(V_i)} \prod_{\substack{p\mid U_i\\p\in\mathcal{S}\cup\mathcal{T}}}\left(1-\frac{1}{p}\right)<1 \end{equation} and \begin{equation} \label{EQ:T_bound}
\sigma(V)\Pi(\mathcal{S})\Pi(\mathcal{T})\le F_{w}(\sigma(V)\Pi(\mathcal{S})+1),\quad w:=|\mathcal{T}|. \end{equation} This $w$ will be the same quantity as in the condition (ii). By Lemma \ref{Lem:divisibility_lemma}, the quantity \begin{equation} \label{EQ:S_form} H:= \sum_{i=1}^{k}\frac{V_i}{\sigma(V_i)}\prod_{\substack{p\mid U_i\\p\in\mathcal{S}}}\left(1-\frac{1}{p}\right) \end{equation} never equals $1$. We consider two cases separately according to the size of $H$.
If $H<1$, we just take $\mathcal{T}=\emptyset$ so that $w=0$. This choice obviously satisfies the conditions \eqref{EQ:T_disjoint_cond}, \eqref{EQ:T_size_cond} and \eqref{EQ:T_bound}. Thus the case $H<1$ is done.
We next consider the case $H>1$. Since $U>1$, we have \[ \frac{U_i}{\sigma(U_i)}>\prod_{p\mid U_i}\left(1-\frac{1}{p}\right) \] for some $i$. Thus, by \eqref{EQ:fundamental_eq}, we see that \begin{equation} \label{EQ:beyond} \sum_{i=1}^{k}\frac{V_i}{\sigma(V_i)}\prod_{p\mid U_i}\left(1-\frac{1}{p}\right) <1. \end{equation} Using the notation \[ a_i=\sigma(V_i)\prod_{\substack{p\mid U_i\\p\in\mathcal{S}}}p,\quad b_i=V_i\prod_{\substack{p\mid U_i\\p\in\mathcal{S}}}(p-1), \] we can rewrite \eqref{EQ:beyond} as \[ \sum_{i=1}^{k} \frac{b_i}{a_i}\prod_{\substack{p\mid U_i\\p\not\in\mathcal{S}}}\left(1-\frac{1}{p}\right) <1. \] Note that $a_i\ge b_i$ for all $i$. Thus, by comparing this inequality with $H>1$ and examining the prime factors of $U$ outside $\mathcal{S}$ starting from the smallest one, we can find a non-empty set $\mathcal{T}=\{p_1,\ldots,p_w\}$ of prime factors of $U$ with $p_1<\cdots<p_w$, which satisfies $\mathcal{S}\cap\mathcal{T}=\emptyset$ and the two inequalities \begin{equation} \label{EQ:HB_assump_T} \sum_{i=1}^{k} \frac{b_i}{a_i}\prod_{\substack{j=1\\p_j\mid U_i}}^{w}\left(1-\frac{1}{p_j}\right) \le1,\quad \sum_{i=1}^{k} \frac{b_i}{a_i}\prod_{\substack{j=1\\p_j\mid U_i}}^{w-1}\left(1-\frac{1}{p_j}\right) >1. \end{equation} By applying Lemma \ref{Lem:HB_ineq1} to this pair of inequalities, we find that \[ a\Pi(\mathcal{T})=a\prod_{j=1}^{w}p_j\le F_w(a+1),\quad a=a_1\cdots a_k=\sigma(V)\Pi(\mathcal{S}), \] i.e.~\eqref{EQ:T_bound} holds. For \eqref{EQ:T_size_cond}, we expand the definition of $a_i$ and $b_i$ in \eqref{EQ:HB_assump_T} to obtain \[ \sum_{i=1}^{k} \frac{V_i}{\sigma(V_i)}\prod_{\substack{p\mid U_i\\p\in\mathcal{S}\cup\mathcal{T}}}\left(1-\frac{1}{p}\right) \le1. \] By Lemma \ref{Lem:divisibility_lemma}, equality cannot hold here, so the strict inequality \eqref{EQ:T_size_cond} holds. Therefore, in either case, we have found a set $\mathcal{T}$ satisfying the desired conditions.
We next show that there is a non-empty subset $\mathcal{P}'$ of $\mathcal{S}\cup\mathcal{T}$ which satisfies \begin{equation} \label{EQ:P_bound} \sigma(V)\Pi(\mathcal{S})\Pi(\mathcal{T}) \prod_{\substack{p^e\parallel U\\p\in\mathcal{P}'}}(p^{e+1}-1)
\le F_{v}(\sigma(V)\Pi(\mathcal{S})\Pi(\mathcal{T})),\quad v=|\mathcal{P}'|. \end{equation} These $\mathcal{P}'$ and $v$ will be the same objects as in the condition (i). Since $n/\sigma(n)\le 1$ for any positive integer $n$, \eqref{EQ:fundamental_eq} implies \begin{equation} \label{EQ:beyond2} \sum_{i=1}^{k} \frac{V_i}{\sigma(V_i)} \prod_{\substack{p^e\parallel U_i\\p\in\mathcal{S}\cup\mathcal{T}}}\frac{1-1/p}{1-1/p^{e+1}} \ge1 \end{equation} By using notations \[ a'_i=\sigma(V_i)\!\!\!\prod_{\substack{p\mid U_i\\p\in\mathcal{S}\cup\mathcal{T}}}\!\!\!p,\quad b'_i=V_i\!\!\!\prod_{\substack{p\mid U_i\\p\in\mathcal{S}\cup\mathcal{T}}}\!\!\!(p-1), \] we can rewrite \eqref{EQ:beyond2} as \begin{equation} \label{EQ:beyond3} \sum_{i=1}^{k} \frac{b'_i}{a'_i} \prod_{\substack{p^e\parallel U_i\\p\in\mathcal{S}\cup\mathcal{T}}} \left(1-\frac{1}{p^{e+1}}\right)^{-1} \ge1. \end{equation} Similarly, \eqref{EQ:T_size_cond} can be rewritten as \begin{equation} \label{EQ:T_size_cond2} \sum_{i=1}^{k} \frac{b'_i}{a'_i}<1. \end{equation} By comparing \eqref{EQ:beyond3} and \eqref{EQ:T_size_cond2} and examining from the smallest values of \[ p^{e+1},\quad p\in\mathcal{S}\cup\mathcal{T},\ p^e\parallel U, \] we can find a non-empty subset $\mathcal{P}'=\{P_1,\ldots,P_v\}$ of $\mathcal{S}\cup\mathcal{T}$ with \[ P_1^{e_1+1}<\cdots<P_v^{e_v+1},\quad P_j^{e_j}\parallel U \] satisfying two inequalities \begin{equation} \label{EQ:HB_assump_P} \sum_{i=1}^{k}
\frac{b'_i}{a'_i}\prod_{\substack{j=1\\P_j|U_i}}^{v}\left(1-\frac{1}{P_j^{e_j+1}}\right)^{-1} \ge1,\quad \sum_{i=1}^{k}
\frac{b'_i}{a'_i}\prod_{\substack{j=1\\P_j|U_i}}^{v-1}\left(1-\frac{1}{P_j^{e_j+1}}\right)^{-1} <1. \end{equation} Then by applying Lemma \ref{Lem:HB_ineq2} to \eqref{EQ:HB_assump_P}, we find that \[ a' \prod_{j=1}^{v}(P_j^{e_j+1}-1) \le F_{v}(a'),\quad a'=a'_1\cdots a'_k=\sigma(V)\Pi(\mathcal{S})\Pi(\mathcal{T}), \] i.e.~\eqref{EQ:P_bound} holds. Thus our $\mathcal{P}'$ satisfies the desired condition.
Finally, we choose \[ \mathcal{S}'=(\mathcal{S}\cup\mathcal{T})\setminus\mathcal{P}',\quad V'_i=V_i\prod_{\substack{p^e\parallel U_i\\p\in\mathcal{P}'}}p^e,\quad M_{i}=U'_{i}V'_{i}. \] Then it is clear that $(U'_{i},V'_{i})=1$, $V\parallel V'$. The set $\mathcal{S}'$ consists of some prime factors of $U'$ since the prime factors of $U'$ are those of $U$ outside the set $\mathcal{P}'$.
The remaining task is to check the conditions (i), (ii) and (iii). Note that the notation for $\mathcal{P}'$ and $v$ used here is consistent with that in the statement of the lemma. Since $\mathcal{P}'$ is non-empty, condition (i) of the lemma is satisfied. For the consistency of $w$, it suffices to note that \[
v+|\mathcal{S}'|-|\mathcal{S}| =
|\mathcal{P}'|+|\mathcal{S}\cup\mathcal{T}|-|\mathcal{P}'|-|\mathcal{S}| =
|\mathcal{S}\cup\mathcal{T}|-|\mathcal{S}| =
|\mathcal{T}|. \] We prove the inequality in (ii) and (iii). By our choice of $V'_i$ and $\mathcal{S}'$, \begin{align*} \sigma(V')\Pi(\mathcal{S}')\Pi(\mathcal{P}')\Phi(\mathcal{P}') &= \sigma(V)\sigma(\prod_{\substack{p^e\parallel U\\p\in\mathcal{P}'}}p^e) \Pi(\mathcal{S})\Pi(\mathcal{T})\Phi(\mathcal{P}')\\ &= \sigma(V)\Pi(\mathcal{S})\Pi(\mathcal{T}) \prod_{\substack{p^e\parallel U\\p\in\mathcal{P}'}}(p^{e+1}-1) \end{align*} By \eqref{EQ:P_bound} and the definition of $\Psi$, this implies \[ \sigma(V')\Pi(\mathcal{S}')\Psi(\mathcal{P}') \le F_{v}(\sigma(V)\Pi(\mathcal{S})\Pi(\mathcal{T})). \] If $w=0$, then $\mathcal{T}=\emptyset$ so that this inequality already gives the inequality in (iii). We substitute \eqref{EQ:T_bound} here. Then we arrive at \begin{align*} \sigma(V')\Pi(\mathcal{S}')\Psi(\mathcal{P}') &\le F_{v}(F_{w}(\sigma(V)\Pi(\mathcal{S})+1))\\ &\le F_{v}((\sigma(V)\Pi(\mathcal{S})+1)^{2^{w}})\\ &= F_{v+w}(\sigma(V)\Pi(\mathcal{S})+1). \end{align*} Thus the inequality in (ii) also holds. This completes the proof. \end{proof}
\section{Completion of the proof of Theorem \ref{Thm:main}}
We start by carrying out the induction given in Section \ref{Section:induction}.
\begin{lemma} \label{Lem:carry_out_induction} For any anarchy harmonious tuple $(M_{i})_{i=1}^{k}$ with $\omega(M_1\cdots M_k)=K$, \[ \sigma(M_1\cdots M_k)\frac{\Phi(\mathcal{P})}{\Pi(\mathcal{P})}\le F_{2K}(2)\Pi(\mathcal{P})^{-2}, \] where $\mathcal{P}$ is the set of all prime factors of $M_1\cdots M_k$. \end{lemma} \begin{proof} Let $(M_{i})_{i=1}^{k}$ be an anarchy harmonious tuple with $K=\omega(M_1\cdots M_k)$. We apply Lemma \ref{Lem:HB_induction} inductively to construct tuples of decompositions \[ M_{i}(\nu)=U_{i}(\nu)V_{i}(\nu),\quad(U_{i}(\nu),V_{i}(\nu))=1,\quad(\nu=0,1,2,\ldots) \] and sets of primes \[ \mathcal{S}(\nu),\ \mathcal{P}(\nu),\quad(\nu=0,1,2,\ldots), \] where $\mathcal{S}(\nu)$ is a set of prime factors of $U(\nu):=U_1(\nu)\cdots U_k(\nu)$. We start with \[ U_{i}(0)=M_i,\ V_{i}(0)=1,\quad\mathcal{S}(0)=\emptyset,\ \mathcal{P}(0)=\emptyset. \] In general steps, we apply Lemma \ref{Lem:HB_induction} to the $\nu$-th term with \[ U_{i}=U_{i}(\nu),\ V_{i}=V_{i}(\nu),\quad\mathcal{S}=\mathcal{S}(\nu) \] and define the $(\nu+1)$-th term by \[ U_{i}(\nu+1)=U'_{i},\ V_{i}(\nu+1)=V'_i,\quad \mathcal{S}(\nu+1)=\mathcal{S}',\ \mathcal{P}(\nu+1)=\mathcal{P}'. \] Then we can continue this induction step as long as $U(\nu)>1$. Let \[
v(\nu)=|\mathcal{P}(\nu)|,\quad w(\nu)=v(\nu)+|\mathcal{S}(\nu)|-|\mathcal{S}(\nu-1)| \] for $\nu\ge1$ as long as the induction step is available. By the definition of $\mathcal{P}(\nu)$, \begin{equation} \label{EQ:P_partition} \mathcal{P}(\nu)=\{p\colon\text{prime}\mid p\mid V(\nu)\}\setminus \{p\colon\text{prime}\mid p\mid V(\nu-1)\}, \end{equation} where $V(\nu):=V_1(\nu)\cdots V_k(\nu)$, so \[ v(\nu)=\omega(V(\nu))-\omega(V(\nu-1)) \] since $V(\nu-1)\parallel V(\nu)$. Thus, by (i) of Lemma~\ref{Lem:HB_induction}, \begin{equation} \label{EQ:v_sum} s\le v(1)+\cdots+v(s)=\omega(V(s))\le \omega(M_1\cdots M_k)=K \end{equation} for any $s\ge1$ if the induction step is available until the $(s-1)$-th step. Thus the induction step stops in finitely many steps. Suppose that the induction step stops at the $n$-th step to produce \[ U_{i}(n)=1,\ V_{i}(n)=M_i,\quad\mathcal{S}(n)=\emptyset. \] Thus by (iii) of Lemma~\ref{Lem:HB_induction}, for $2\le s\le n$ satisfying $w(s)=0$, we have \begin{align*} \sigma(V(s))\Pi(\mathcal{S}(s))\Psi(\mathcal{P}(s)) \le F_{v(s)+w(s)}\left(\sigma(V(s-1))\Pi(\mathcal{S}(s-1))\right) \end{align*} Since $v(s)+w(s)=v(s)\ge1$, Lemma \ref{Lem:F_scaling} gives \begin{equation} \label{EQ:induction_bound} \begin{gathered} \sigma(V(s))\Pi(\mathcal{S}(s))\Psi(\mathcal{P}(s))\\ \le \Psi(\mathcal{P}(s-1))^{-1} F_{v(s)+w(s)}\left(\sigma(V(s-1))\Pi(\mathcal{S}(s-1))\Psi(\mathcal{P}(s-1))\right). \end{gathered} \end{equation} For remaining $2\le s\le n$ with $w(s)\ge1$, we use (ii) of Lemma~\ref{Lem:HB_induction} to obtain \begin{align*} \sigma(V(s))\Pi(\mathcal{S}(s))\Pi(\mathcal{P}(s))\Phi(\mathcal{P}(s)) &\le F_{v(s)+w(s)}\left(\sigma(V(s-1))\Pi(\mathcal{S}(s-1))+1\right)\\ &\le F_{v(s)+w(s)}\left(\frac{4}{3}\sigma(V(s-1))\Pi(\mathcal{S}(s-1))\right) \end{align*} since $\sigma(V(s-1))\ge 3$ for $s\ge2$. Also, since $\mathcal{P}(s-1)\neq\emptyset$ for $s\ge2$, we find that \[ \Psi(\mathcal{P}(s-1))=\Pi(\mathcal{P}(s-1))\Phi(\mathcal{P}(s-1))\ge2\ge\left(\frac{4}{3}\right)^2. \] Therefore, by Lemma \ref{Lem:F_scaling}, \begin{gather*} \sigma(V(s))\Pi(\mathcal{S}(s))\Psi(\mathcal{P}(s))\\ \le \Psi(\mathcal{P}(s-1))^{-2^{v(s)+w(s)-2}} F_{v(s)+w(s)}\left(\sigma(V(s-1))\Pi(\mathcal{S}(s-1))\Psi(\mathcal{P}(s-1))\right) \end{gather*} so by using $v(s)+w(s)\ge v(s)+1\ge2$, we again arrive at \eqref{EQ:induction_bound}. Thus the estimate \eqref{EQ:induction_bound} holds for every $2\le s\le n$. Then by using \eqref{EQ:induction_bound} inductively, \begin{align*} &\sigma(V(n))\Pi(\mathcal{S}(n))\Psi(\mathcal{P}(n))\\ {}\le{}& \Psi(\mathcal{P}(n-1))^{-1} F_{v(n)+w(n)}\left(\sigma(V(n-1))\Pi(\mathcal{S}(n-1))\Psi(\mathcal{P}(n-1))\right)\\ {}\le{}& \Psi(\mathcal{P}(n-1))^{-1}\Psi(\mathcal{P}(n-2))^{-2^{v(n)+w(n)}}\\ &\quad\times F_{v(n)+w(n)+v(n-1)+w(n-1)}\left(\sigma(V(n-2))\Pi(\mathcal{S}(n-2))\Psi(\mathcal{P}(n-2))\right)\\ {}\le{}& \Psi(\mathcal{P}(n-1))^{-1}\Psi(\mathcal{P}(n-2))^{-1}\\ &\quad\times F_{v(n)+w(n)+v(n-1)+w(n-1)} \left(\sigma(V(n-2))\Pi(\mathcal{S}(n-2))\Psi(\mathcal{P}(n-2))\right)\\ {}\le{}&\cdots\\ {}\le{}& \Psi\left(\bigsqcup_{\nu=1}^{n-1}\mathcal{P}(\nu)\right)^{-1} F_{v(n)+w(n)+\cdots+v(2)+w(2)}\left(\sigma(V(1))\Pi(\mathcal{S}(1))\Psi(\mathcal{P}(1))\right). 
\end{align*} By using (ii) of Lemma \ref{Lem:HB_induction} once more and recalling \eqref{EQ:P_partition}, \begin{align*} \sigma(M_1\cdots M_k) &= \Psi(\mathcal{P}(n))^{-1}\sigma(V(n))\Pi(\mathcal{S}(n))\Psi(\mathcal{P}(n))\\ &\le \Psi\left(\mathcal{P}\right)^{-1} F_{\sum_{\nu=1}^{n}(v(\nu)+w(\nu))}(\sigma(V(0))\Pi(\mathcal{S}(0))+1)\\ &= \Psi\left(\mathcal{P}\right)^{-1}F_{\sum_{\nu=1}^{n}(v(\nu)+w(\nu))}(2). \end{align*} By definition of $w(\nu)$ and $\mathcal{S}(0)=\mathcal{S}(n)=\emptyset$, we have \begin{align*} \sum_{\nu=1}^{n}(v(\nu)+w(\nu))
&=2\sum_{\nu=1}^{n}v(\nu)+|\mathcal{S}(n)|-|\mathcal{S}(0)|\\ &=2\sum_{\nu=1}^{n}v(\nu)=2\omega(V(n))=2\omega(M_1\cdots M_k)=2K. \end{align*} Thus the lemma follows. \end{proof}
We next prove the following auxiliary lemma, which is an amicable number analogue of the lemma used in the clever trick of Chen and Tang~\cite[Lemma~2.3]{Chen_Tang} or of Kobayashi (see the remark before Corollary 1.7 of \cite{Nielsen_New}). \begin{lemma} \label{Lem:Chen_Tang} For any anarchy harmonious tuple $(M_{i})_{i=1}^{k}$ with $\omega(M_1\cdots M_k)=K$, \[ \sigma(M_1\cdots M_k)\frac{\Phi(\mathcal{P})}{\Pi(\mathcal{P})}\le F_K(\Pi(\mathcal{P}))\Pi(\mathcal{P})^{-2}, \] where $\mathcal{P}$ is the set of all prime factors of $M_1\cdots M_k$. \end{lemma} \begin{proof} Since $(M_{i})_{i=1}^{k}$ is a harmonious tuple, the identity \eqref{EQ:fundamental_eq} holds. Then by using \[
a_i=\prod_{p|M_i}p,\quad b_i=\prod_{p|M_i}(p-1),\quad a=a_1\cdots a_k, \] we can rewrite the identity \eqref{EQ:fundamental_eq} as \[ \sum_{i=1}^{k} \frac{b_i}{a_i}\prod_{p^e\parallel M_i}\left(1-\frac{1}{p^{e+1}}\right)^{-1}=1. \] Also, if we remove some prime factor of $M_1\cdots M_k$ from this identity, then the left-hand side becomes smaller. Thus, we can apply Lemma \ref{Lem:HB_ineq2} to obtain \[ a\prod_{p^e\parallel M_1\cdots M_k}(p^{e+1}-1)\le F_{K}(a). \] Since $a=\Pi(\mathcal{P})$, this gives \[ \sigma(M_1\cdots M_k)\Pi(\mathcal{P})\Phi(\mathcal{P}) \le F_{K}(\Pi(\mathcal{P})). \] This completes the proof. \end{proof}
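As a sanity check on this rewriting of \eqref{EQ:fundamental_eq} (illustrative only, not part of the argument), the following Python sketch verifies for the amicable pair $(220,284)$ that $\sum_{i}\frac{b_i}{a_i}\prod_{p^e\parallel M_i}\left(1-p^{-(e+1)}\right)^{-1}=1$ with $a_i$ and $b_i$ as defined above.
\begin{verbatim}
from fractions import Fraction
from sympy import factorint

M = (220, 284)        # amicable pair; satisfies sum_i M_i/sigma(M_i) = 1
total = Fraction(0)
for m in M:
    a_i = 1            # a_i = prod_{p | M_i} p
    b_i = 1            # b_i = prod_{p | M_i} (p - 1)
    term = Fraction(1)
    for p, e in factorint(m).items():
        a_i *= p
        b_i *= p - 1
        term *= Fraction(p**(e + 1), p**(e + 1) - 1)   # (1 - p^{-(e+1)})^{-1}
    total += Fraction(b_i, a_i) * term
assert total == 1
\end{verbatim}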
\begin{proof}[Proof of Theorem \ref{Thm:main}] If $\Pi(\mathcal{P})>2^{2^{K}}$, then we use Lemma \ref{Lem:carry_out_induction} to obtain \[ \sigma(M_1\cdots M_k)\frac{\Phi(\mathcal{P})}{\Pi(\mathcal{P})} \le F_{2K}(2)\Pi(\mathcal{P})^{-2} < F_{2K}(2)2^{-2\cdot 2^{K}}. \] On the other hand, if $\Pi(\mathcal{P})\le2^{2^{K}}$, then we use Lemma \ref{Lem:Chen_Tang} to obtain \begin{align*} \sigma(M_1\cdots M_k)\frac{\Phi(\mathcal{P})}{\Pi(\mathcal{P})} &\le F_K(\Pi(\mathcal{P})) \Pi(\mathcal{P})^{-2}\\ &= \Pi(\mathcal{P})^{2^{K-1}-2}\left(\Pi(\mathcal{P})^{2^{K-1}}-1\right) \le F_{2K}(2)2^{-2\cdot 2^{K}}. \end{align*} Thus in any case we have \begin{equation} \label{EQ:pre_final} \sigma(M_1\cdots M_k)\frac{\Phi(\mathcal{P})}{\Pi(\mathcal{P})}\le F_{2K}(2)2^{-2\cdot 2^{K}}. \end{equation} Note that \[ \sigma(M_1\cdots M_k)\frac{\Phi(\mathcal{P})}{\Pi(\mathcal{P})} = M_1\cdots M_k \prod_{p^e\parallel M_1\cdots M_k}\left(1-\frac{1}{p^{e+1}}\right) \] Combining this identity with \eqref{EQ:pre_final}, we obtain \[ M_1\cdots M_k \le\prod_{p^e\parallel M_1\cdots M_k}\left(1-\frac{1}{p^{e+1}}\right)^{-1}F_{2K}(2)2^{-2\cdot 2^{K}}. \] By using \[ \prod_{p^e\parallel M_1\cdots M_k}\left(1-\frac{1}{p^{e+1}}\right)^{-1} < \prod_{p}\left(1-\frac{1}{p^{2}}\right)^{-1} = \frac{\pi^2}{6},\quad F_{2K}(2)<2^{4^{K}}, \] we finally arrive at \[ M_1\cdots M_k<\frac{\pi^2}{6}\,F_{2K}(2)2^{-2\cdot2^K}<\frac{\pi^2}{6}\,2^{4^{K}-2\cdot2^K}. \] This completes the proof. \end{proof}
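The elementary numerical estimates used in the last step can also be checked directly; the following Python sketch (illustrative only) confirms $F_{2K}(2)<2^{4^{K}}$ for small $K$ and that truncated Euler products stay below $\prod_{p}(1-p^{-2})^{-1}=\pi^2/6$.
\begin{verbatim}
from math import pi
from sympy import primerange

def F(R, a):
    # F_R(a) = a^(2^R) - a^(2^(R-1))
    return a**(2**R) - a**(2**(R-1))

# F_{2K}(2) < 2^{4^K}, as used in the last display
for K in range(1, 7):
    assert F(2 * K, 2) < 2**(4**K)

# truncated Euler products increase towards prod_p (1 - p^{-2})^{-1} = pi^2/6
partial = 1.0
for p in primerange(2, 10**4):
    partial /= 1.0 - 1.0 / p**2
assert partial < pi**2 / 6
\end{verbatim}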
\begin{center} \textbf{Acknowledgements.} \end{center}
The author would like to express his gratitude to Prof.~Kohji Matsumoto for his useful comments and encouragement. In particular, the discussions with him enabled the author to find a gap in the original manuscript and to generalize the result to amicable tuples. The author also would like to thank Mr.~Yuki Yoshida for kindly providing his program for searching for harmonious pairs and for giving useful advice on the search program. This work was supported by Grant-in-Aid for JSPS Research Fellow (Grant Number: JP16J00906).
\begin{flushleft} {\small {\sc Graduate School of Mathematics, Nagoya University,\\ Chikusa-ku, Nagoya 464-8602, Japan. }
{\it E-mail address}: {\tt [email protected]} } \end{flushleft}
\end{document}
\begin{document}
\title{On a two-phase Hele-Shaw problem with a time-dependent gap and distributions of sinks and sources} \author{T.V.~Savina, L.~Akinyemi, and A.~Savin}
\maketitle
\begin{abstract} A two-phase Hele-Shaw problem with a time-dependent gap describes the evolution of the interface separating two fluids of different viscosities sandwiched between two plates.
In addition to the change in the gap width of the Hele-Shaw cell, the interface is driven by
the presence of some special distributions of sinks and sources located in both the interior and exterior domains. The effect of surface tension is neglected. Using the Schwarz function approach, we give examples of exact solutions when the interface belongs to a certain family of algebraic curves and the curves do not form cusps. The family of curves is defined by the initial shape of the free boundary. \end{abstract}
\noindent\textit{Keywords:} Muskat problem, generalized Hele-Shaw flow, Schwarz function, mother body.
\section{Introduction}
Free boundary problems have been a significant part of modern mathematics for more than a century, since the celebrated Stefan problem, which describes solidification, that is, the evolution of the moving front between liquid and solid phases. Free boundary problems also appear in fluid dynamics, geometry, finance, and many other applications (see \cite{chen} for a detailed discussion). Recently, they have started to play an important role in the modeling of biological processes involving moving fronts of populations or tumors \cite{fri2015}. These processes include cancer, biofilms, wound healing, granulomas, and atherosclerosis \cite{fri2015}. Biofilms are communities of microorganisms, typically bacteria, that are attached to a surface. Biofilms motivated Friedman et al.\ \cite{fri2014} to consider a two-phase free boundary problem, where one phase is an incompressible viscous fluid, and the other phase is a mixture of two incompressible fluids representing the viscous fluid and the polymeric network (with bacteria attached to it) associated with a biofilm. Free boundary problems are also used in modeling tumor growth, with one phase being the tumor region and the other the normal tissue surrounding the tumor \cite{fri2013}.
The Muskat problem is a free boundary problem related to the theory of flows in porous media \cite{mus}. It describes the evolution of an interface between two immiscible fluids, `oil' and `water', in a Hele-Shaw cell or in a porous medium. Here we study a two-phase Hele-Shaw flow assuming that the upper plate moves uniformly up or down, changing the gap width of the Hele-Shaw cell.
Hele-Shaw free boundary problems have been extensively studied over the last century (see \cite{Vas2009}, \cite{Vas2015} and references therein). There are two classical formulations of the Hele-Shaw problems: the one-phase problem, when one of the fluids is assumed to be viscous while the other is effectively inviscid (the pressure there is constant), and the two-phase (or Muskat) problem. A statement of the problem with a time-dependent gap between the plates was mentioned in \cite{EEK} among other generalized Hele-Shaw flows. The one-phase (interior) version of this problem was considered in \cite{tian}, where conditions of existence, uniqueness, and regularity of solutions were established under the assumption that surface tension effects on the free boundary are negligible; some exact solutions were constructed as well. An interior problem with a time-dependent gap and a non-zero surface tension was considered in \cite{jpA2015}, where asymptotic solutions were obtained for the case when the initial shape of the droplet is a weakly distorted circle. Note also that the mathematical formulation of the interior problem with a time-dependent gap is similar to the problem of evaporation of a thin film \cite{agam}. When the surface tension is negligible, the pressure in both formulations can be obtained as a solution to Poisson's equation in a bounded domain with homogeneous Dirichlet data on the free boundary.
Much less progress has been made for the Muskat problem. Regarding the problem with a constant gap width, we should mention works \cite{howison2000}-\cite{contExact}. Specifically, Howison \cite{howison2000} has obtained several simple solutions including the traveling-wave solutions and the stagnation point flow. In \cite{howison2000}, an idea of a method for solving some two-phase problems was proposed and used to reappraise the Jacquard-S\'eguier solution \cite{JS}. Global existence of solutions to some specific two-phase problems was considered in \cite{FT}-\cite{YT}. Crowdy \cite{crowdy2006} presented an exact solution to the Muskat problem for the elliptical initial interface between two fluids of different viscosity. In \cite{crowdy2006}, it was shown that an elliptical inclusion of one fluid remains elliptical when placed in a linear ambient flow of another fluid.
In \cite{contExact}, new exact solutions to the Muskat problem were constructed, extending the results obtained in \cite{crowdy2006} to other types of inclusions. This paper is concerned with a two-phase Hele-Shaw problem with a variable gap width in the presence of sinks and sources.
Let $\Omega _2 (t) \subset {\mathbb R}^2$ with a boundary $\Gamma (t)$ at time $t$ be a simply-connected bounded domain occupied by a fluid with a constant viscosity $\nu _2$, and let $\Omega _1 (t)$ be the region ${\mathbb R}^2\setminus {\bar \Omega}_2(t)$ occupied by a different fluid of viscosity $\nu _1$. To consider a two-phase Hele-Shaw flow forced by a time-dependent gap, we start with Darcy's law \begin{equation}\label{1} {\bf v}_j=-k_j\nabla p_j
\quad\mbox{in}\quad \Omega _j (t), \qquad j=1,2,
\end{equation} where ${\bf v}_j$ and $p_j$ are a two-dimensional gap-averaged velocity vector and a pressure of fluid $j$ respectively,
$k_j=h^2(t)/(12\nu _j)$, and $h(t)$ is the gap width of the Hele-Shaw cell. Equation (\ref{1}) is complemented by the volume conservation, $A(t)h(t)=A(0)h(0)$ for any time $t$, where $A(t)$ and $A(0)$ are the areas of $\Omega _2 (t)$ and $\Omega _2(0)$ respectively. The conservation of volume for a time-dependent gap may be written as a modification of the usual incompressibility condition $$ \nabla\cdot {\bf V_2} =0, $$ where ${\bf V_2}=(u,v,w)$ is the three-dimensional velocity vector of the fluid occupying the domain $\Omega _2 (t)$. Indeed, averaging the three-dimensional incompressibility condition across the gap gives \cite{tian}: $$ 0=\int\limits _{0}^{h(t)} (u_x+v_y+w_z)dz/h(t)=u_x^{av}+v_y^{av}+(w(h(t))-w(0))/h(t)=u_x^{av}+v_y^{av}+\frac{\dot h(t)}{h(t)}. $$ Here $z=0$ corresponds to the lower plate, $z=h(t)$ corresponds to the upper plate, and $h(t)$ and $\dot h(t)$ are assumed to be small enough to avoid any inertial effects as well as to keep the large aspect ratio. The latter implies \cite{tian} \begin{equation}\label{2111} \nabla\cdot {\bf v_2}=-\frac{\dot h(t)}{h(t)} \quad\mbox{in}\quad \Omega _2(t). \end{equation} Note that a similar consideration may be applied to any finite part of the region $\Omega _1(t)$. Thus, equations (\ref{1}) and (\ref{2111}) suggest formulating the problem in terms of the pressure $p_j$ as a solution to Poisson's equation, \begin{equation}\label{01} \Delta p_j=\frac{1}{k_j}\frac{\dot h(t)}{h(t)}, \end{equation}
almost everywhere in the region $\Omega _j (t)$, satisfying boundary conditions \begin{eqnarray}\label{2} p_1(x,y,t)=p_2(x,y,t) \quad \mbox{on} \quad \Gamma (t),\\ -k_1\pd{p_1}{n}=-k_2\pd{p_2}{n} =v_n \quad \mbox{on} \quad \Gamma(t).\label{3} \end{eqnarray}
We remark that when sinks and sources are present in $\Omega _j (t)$, equation (\ref{01}) has an additional term, $\Delta p_j=\frac{1}{k_j}\frac{\dot h(t)}{h(t)}+\mu_j$, describing the corresponding distribution. Equation (\ref{2}) states the continuity of the pressure under the assumption of negligible
surface tension.
Equation (\ref{3}) means that the normal velocity of the boundary itself coincides with the normal velocity of the fluid
at the boundary.
The free boundary $\Gamma (t)$ moves due to a change of the gap width as well as the presence of sinks and sources located in both regions. The supports of the sinks and sources, specified in Section \ref{prelim}, are either points or lines/curves. The presence of sinks and sources obviously changes the dynamics of the evolution of the interface between the fluids, which is shown for an elliptical interface in Section \ref{sec:ex}.
For what follows, it is convenient to reformulate the problem in terms of harmonic functions $\tilde p_j$, where \begin{equation} p_j(x,y,t)=\tilde p_j(x,y,t)+\frac{1}{4k_j}\frac{\dot h(t)}{h(t)}(x^2+y^2). \end{equation} Then the problem \eqref{01}-\eqref{3} reduces to \begin{equation}\label{01t} \Delta \tilde p_j=\chi _j\mu _j \quad \mbox{in} \quad \Omega _j (t), \end{equation} where $\chi _j=0$ or $\chi _j=1$ in the absence or presence of sinks and sources in $\Omega _j (t)$ respectively, \begin{eqnarray}\label{2t} \tilde p_1(x,y,t)=\tilde p_2(x,y,t)+ \frac{k_1-k_2}{4k_1k_2}\,\frac{\dot h(t)}{h(t)}(x^2+y^2) \quad \mbox{on} \quad \Gamma (t),\\ -k_1\pd{\tilde p_1}{n} =-k_2\pd{\tilde p_2}{n} = v_n +\frac{1}{4}\frac{\dot h(t)}{h(t)}\pd{}{n}(x^2+y^2) \quad \mbox{on} \quad \Gamma(t).\label{3t} \end{eqnarray}
The main difficulty of two-phase problems is the fact that the pressure on the interface is unknown. However, if we assume that the free boundary remains within the family of curves specified by the initial shape of the interface separating the fluids (which is feasible if the surface tension is negligible), the problem is drastically simplified.
In this paper, using a reformulation of the Muskat problem with a time-dependent gap in terms of the Schwarz function equation, we describe a method for constructing exact solutions, and we use this method to consider examples in the presence and in the absence of additional sinks and sources.
The structure of the paper is as follows. In Section \ref{prelim} we describe the method
of finding exact solutions.
Examples of the exact solutions are given in Section \ref{sec:ex}, and concluding remarks are given in Section \ref{sec:concl}.
\section{The method of finding exact solutions for a Muskat problem with a time-dependent gap}\label{prelim}
Consider the problem \begin{equation}\label{01tm} \Delta \tilde p_j=\chi _j\mu _j \quad \mbox{in} \quad \Omega _j (t), \end{equation} \begin{eqnarray}\label{2tm} \tilde p_1(x,y,t)+\Psi _1(x,y,t)=\tilde p_2(x,y,t)+ \Psi _2(x,y,t) \quad \mbox{on} \quad \Gamma (t),\\ -k_1\pd{\tilde p_1}{n} =-k_2\pd{\tilde p_2}{n} = v_n +\Phi (x,y,t) \quad \mbox{on} \quad \Gamma(t).\label{3tm} \end{eqnarray} In the case when \begin{equation}\label{Psi} \Psi _j=\frac{1}{4k_j}\,\frac{\dot h(t)}{h(t)}(x^2+y^2), \qquad j=1,2, \end{equation} \begin{equation}\label{Phi} \Phi =\frac{1}{4}\,\frac{\dot h(t)}{h(t)}\pd{}{n}(x^2+y^2), \end{equation} the problem \eqref{01tm}-\eqref{3tm} coincides with \eqref{01t}-\eqref{3t}.
As stated before, the evolution of the interface separating the fluids is forced by the change in the gap width and the presence of sinks and sources. In the absence of the surface tension, there is a possibility to control the interface by keeping $\Gamma(t)$ within a family of curves defined by $\Gamma (0)$. For what follows, it is convenient to reformulate problem (\ref{01tm})--(\ref{3tm}) in terms of the Schwarz function $S(z,t)$ of the curve $\Gamma (t)$ \cite{davis}--\cite{shapiro}.
This function for
a real-analytic curve $\Gamma :=\{ g(x,\,y,\,t)=0\}$ is defined as a solution to the equation
$g\left ((z+\bar z)/2, \, (z-\bar z)/2i,\, t \right ) =0$
with respect to $\bar z$.
This (regular) solution exists in some neighborhood $U_{\Gamma}$ of the curve $\Gamma$, if the assumptions of the implicit function theorem are satisfied \cite{davis}. Note that if $g$ is a polynomial, then the Schwarz function is continuable into $\Omega _j$, generally as a multiple-valued analytic function with a finite number of algebraic singularities (and poles). In $U_{\Gamma}$, the normal velocity, $v_n$, of $\Gamma (t)$ can be written in terms of the Schwarz function \cite{howison92},
$v_n=-i\dot S(z,t)/\sqrt{4\partial _z S(z,t)}$.
Let $\tau$ be the arclength along $\Gamma (t)$, let $\psi _j$ be a stream function, and let $W_j=\tilde p_j-i\psi _j$ be the complex potential, which is defined on $\Gamma (t)$ and in $\Omega _j (t)\cap U_{\Gamma}$, $j=1,\,2$. Following \cite{cummings}-\cite{mcdonald},
taking into account the Cauchy-Riemann conditions in the $(n,\tau )$ coordinates, for the derivative of $W_j(z,t)$ with respect to $z$ on $\Gamma (t)$ we have \begin{equation}\label{main1} \partial _z {W_j}=\frac{\partial _{\tau}W_j}{\partial _{\tau}z}=
\frac{\partial _{\tau}\tilde p_j+i\partial _{n}\tilde p_j}{\partial _{\tau}z}= \frac{\partial _{\tau}\tilde p_j-i(v_n+\Phi)/k_j}{\partial _{\tau}z}. \end{equation} Expressing $\partial _{\tau}z$ in terms of the Schwarz function, $\partial _{\tau}z=(\partial _z S(z,t))^{-1/2}$, we obtain \begin{equation}\label{main1m} \partial _z {W_j}=\partial _{\tau}\tilde p_j\sqrt{\partial _z S} -\frac{\dot {S}}{2k_j}-\frac{i\Phi}{k_j}\sqrt{\partial _z S}. \end{equation} Here $\partial _z {W}_j\equiv \pd{W_j}{z}$, $\partial _{\tau} {}\equiv \pd{}{\tau}$.
Equation \eqref{2tm} implies that $\tilde p_1+\Psi_1=\tilde p_2+\Psi _2=f$ on $\Gamma (t)$, where $f$ is an unknown function.
To keep $\Gamma (t)$ in a certain family of curves defined by $\Gamma (0)$, for example, in a family of ellipses, we assume that $f$ on $\Gamma (t)$ is a function of time only. This possibility is shown in Section \ref{sec:ex}, where specific examples are discussed. In that case the problem is simplified drastically, and on $\Gamma (t)$ we have \begin{equation}\label{main2m} \partial _z {W_j}= -\frac{\dot {S}}{2k_j}-\partial _z \Psi _j(z,S(z,t))-\frac{i\Phi}{k_j}\sqrt{\partial _z S},\qquad j=1,2. \end{equation} For the special case when $\Psi _j$ and $\Phi$ are given by \eqref{Psi} and \eqref{Phi}, the last equation reduces to \begin{equation}\label{main2mg} \partial _z {W_j}= -\frac{1}{2k_j}\Bigl(\dot {S}+\frac{\dot h}{h} S\Bigr),\qquad j=1,2. \end{equation}
Note that each equation \eqref{main2mg} can be continued off $\Gamma$ into the corresponding $\Omega _j$, where $W_j$ is a multiple-valued analytic function. Equations \eqref{main2m} and \eqref{main2mg} imply that the singularities of $W_1$ and $W_2$ are linked to the singularities of the Schwarz function. As such, the singularities of the Schwarz function play a crucial role in the construction of the solutions in question.
To find the exact solutions, suppose that at $t=0$ the interface is an algebraic curve, $\sum _{k=0}^n a_k(0)\, x^{k}y^{n-k}=0$, with Schwarz function $S(z,a_k^0)$. Assume that during the course of the evolution the Schwarz function of the interface, $S(z,a_k(t))\equiv S(z,t)$, is such that $S(z,a_k(0))=S(z,a_k^0)$. This leads us to the following seven-step method: \newline 1) Compute $\dot S (z, t)$, locate its singularities, and determine their type. \newline 2) Using equation \eqref{main2mg}, find preliminary expressions for $\partial _z W_j$. \newline 3) By putting restrictions on the coefficients $a_k(t)$ in the preliminary expressions for $\partial _z W_j$, eliminate the terms involving undesirable singularities (if possible). \newline 4) Integrate \eqref{main2mg} with respect to $z$ in order to find $W_j$ up to an arbitrary function of time. \newline 5) Take the real part of $W_j$ in order to obtain $p_j$ up to an arbitrary function of time. \newline 6) Evaluate $p_j$ on the interface to determine the $z$-independent function of integration from steps 4 and 5. \newline 7) Locate the supports and compute the distributions of sinks and sources.
Before describing how to locate the supports, we remark that the distributions in step 7 are related to the two-phase mother body \cite{contExact}. The notion of a mother body arises from the potential theory \cite{Gu1}-\cite{gardiner}
and was adopted to the one-phase Hele-Shaw problem in \cite{external}.
As mentioned above, generally, the complex potentials $W_j$ are multiple-valued functions in $\Omega _j$. For instance, if $\Gamma (t)$ is an algebraic curve, then the singularities of $W_j$ are either poles or algebraic singularities. To choose a branch of $W_j$, one has to introduce the cuts, $\gamma _j(t)$, that serve as supports for the distributions of sinks and sources, $\mu _j(t)$, $j=1,2$. Thus, each cut originates from an algebraic singularity $z_a(t)$ of the potential $W_j$.
The supports consist of those cuts and/or points and do not bound any two-dimensional subdomains in $\Omega _j(t)$, $j=1,2$. Each cut included in the support of $\mu _j(t)$ is contained in the domain $\Omega _j (t)$, and the limiting values of the pressure on each side of the cut are equal. The value of the density of sinks and sources located
on the cut is equal to the jump of the normal derivative $\partial _n p_j$ of the pressure $p_j$. In order for the total flux through the sinks and sources to be finite, all of the singularities of the function $W_j$ must have no more than the logarithmic growth.
The location of $z_a(t)$, as well as the directions of the cuts emanating from $z_a(t)$, are determined by the Schwarz function via \eqref{main2mg}. In the examples considered below, the Schwarz function has the following two representations near its singular points. The first representation being the square root (general position) \begin{equation}\label{eq6} S^g\left( z,t\right) =\xi^g\left( z,t\right)\,\sqrt{z-z_a(t)} +\zeta^g\left( z,t\right), \end{equation} where $z_a(t)$ is a non-stationary singularity, that is $\dot z_a\ne 0$. The second being the reciprocal square root \begin{equation}\label{eq6r} S^r\left( z,t\right) =\frac{\xi^r\left( z,t\right)}{\sqrt{z-z_a(0)}} +\zeta^r\left( z,t\right), \end{equation} where $z_a(0)$ is a stationary singularity, that is $\dot z_a= 0$. Here $\xi^{g,r}\left( z,t\right) $ and $\zeta^{g,r}\left( z,t\right) $ are regular functions of $z$ in a neighborhood of the point $z_a(t)$, and $\xi^{g,r}\left( z_a(t),t\right)\ne 0$.
By plugging \eqref{eq6} and \eqref{eq6r} into \eqref{main2mg}, in a small neighborhood of $z_a(t)$ we have \begin{equation}\label{eq6w} W_j^g\left( z,t\right) =\frac{1}{2k_j}\dot z_a\xi^g\left( z_a(t),t\right)\,\sqrt{z-z_a(t)} +\dots , \end{equation} \begin{equation}\label{eq6wr} W_j^r\left( z,t\right) =\frac{1}{k_j}C_0(t)\,\sqrt{z-z_a(0)} +\dots , \end{equation} where the dots correspond to the smaller and regular terms that do not affect the computation of the directions of the cuts. The quantity $C_0(t)$ is defined by $$ C_0(t)=\dot\xi^r\left( z_a(0),t\right )+\frac{\dot h(t)}{h(t)}\xi^r\left( z_a(0),t\right). $$ Formulas \eqref{eq6w} and \eqref{eq6wr} along with the substitutions $z=z_a+\rho \exp{(i\varphi ^{g,r})}$ (with small $\rho$), imply that \begin{equation}\label{eq6p} p_j^g\left( z,t\right) =\frac{\sqrt{\rho}}{2k_j}\Re [\dot z_a\xi^g\left( z_a(t),t\right)\, \exp{(\frac{i\varphi ^{g}}{2})}] +\dots , \end{equation} \begin{equation}\label{eq6pr} p_j^r\left( z,t\right) =-\frac{\sqrt{\rho}}{k_j}\Re [C_0(t)\exp{(\frac{i\varphi ^{r}}{2})}] +\dots . \end{equation} Computing the zero level of a variation of $p_j$ along a small loop surrounding the singular point, we finally
obtain the following directions of the cuts: for the general position \begin{equation}\label{dir} \varphi ^g=\pi-2(\arg [\xi^g\left( z_a(t),t\right)]+ \arg [\dot z_a]) +2\pi k,\quad k=0,\pm1, \pm 2 ... . \end{equation} and for the reciprocal square root \begin{equation}\label{dirr} \varphi ^r=\pi-2\arg [C_0(t)] +2\pi k,\quad k=0,\pm1, \pm 2 ... . \end{equation}
In the next section, we use the described method to construct exact solutions to the Muskat problem. In the considered examples, the evolution of the interface is driven by the change in the gap width of the Hele-Shaw cell. The examples include
the elliptical shape with and without sinks and sources in the finite domain as well as the Cassini's oval in the presence of sinks and sources.
\section{ Examples of specific initial interfaces \label{sec:ex}}
\subsection{Circle }
To illustrate the method, we start with the simplest example for which the solution is known. Suppose that the initial shape of the interface is a circle with the equation $x^2+y^2=a^2(0)$, and during the evolution the boundary remains circular,
$x^2+y^2=a^2(t)$. The corresponding Schwarz function is $S=a^2(t)/z$. Taking into account the volume conservation, equation \eqref{main2mg} in this case reads as $\partial _z W_j=0$, which implies that $\tilde p_j$ is a function depending on $t$ only,
\begin{equation} \tilde p_j=-\frac{a_0^2h_0\dot h}{4k_jh^2}+f(t), \end{equation} therefore, \begin{equation}\label{Pcircle} p_j(x,y,t)=\frac{1}{4k_j}\frac{\dot h(t)}{h(t)}\Bigl ( x^2+y^2-\frac{a_0^2h_0}{h(t)}\Bigr )+f(t) \end{equation} and $a(t)=a_0\sqrt{h_0/h(t)}$.
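The computation above is simple enough to be verified symbolically. The following SymPy sketch (illustrative only) checks that the pressure \eqref{Pcircle} satisfies Poisson's equation \eqref{01} for an arbitrary gap $h(t)$, and that $a(t)=a_0\sqrt{h_0/h(t)}$ is consistent with the volume conservation $a^2(t)h(t)=a_0^2h_0$.
\begin{verbatim}
import sympy as sp

x, y, t, k, a0, h0 = sp.symbols('x y t k a0 h0', positive=True)
h = sp.Function('h')(t)
f = sp.Function('f')(t)

# pressure (Pcircle) for a circular interface (either phase, permeability k)
p = sp.diff(h, t) / (4 * k * h) * (x**2 + y**2 - a0**2 * h0 / h) + f

# Poisson's equation (01): Laplacian(p) = (1/k) * h'(t)/h(t)
assert sp.simplify(sp.diff(p, x, 2) + sp.diff(p, y, 2)
                   - sp.diff(h, t) / (k * h)) == 0

# a(t) = a0*sqrt(h0/h(t)) conserves the volume: a(t)^2 h(t) = a0^2 h0
a = a0 * sp.sqrt(h0 / h)
assert sp.simplify(a**2 * h - a0**2 * h0) == 0
\end{verbatim}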
\subsection{Ellipse}
Consider a two-phase problem with an elliptical interface, $\Gamma (0)=\left\{ \frac{x^2}{a(0)^2}+\frac{y^2}{b(0)^2}=1\right\}$,
where $a(0)$ and $b(0)$ are given and $a(0)>b(0)$.
The Schwarz function of an elliptical interface with semi-axes $a(t)$ and $b(t)$ is $$ S\left( z,t\right) =\Bigl ( \bigl ( a(t)^2+b(t)^2 \bigr ) z-2a(t)b(t)\sqrt{z^2-d(t)^2}\Bigr )/d(t)^2, $$ where $d(t)=\sqrt{a(t)^2-b(t)^2}\,$ is the half of the inter-focal distance. Assuming that the interface remains elliptical during the course of the evolution, we use equation \eqref{main2mg} \begin{align}\notag
\partial_zW_j=-\frac{1}{2k_j}\left(\partial_t{S}+\frac{\dot{h}}{h}{S}\right). \end{align} Due to the volume conservation of the fluid occupying $\Omega _2(t)$, the product of the functions $a(t)$ and $b(t)$ is linked to the gap width $h(t)$ via the equation $h(t)=a_0b_0h_0/(a(t)b(t)),$ where $a_0=a(0)$, $b_0=b(0),$ and $h_0=h(0)$. Therefore, $\dot{h}(t)/{h(t)}=-\partial_t(ab)/(ab)$, and equation \eqref{main2mg} can be rewritten as \begin{align}\label{27.91}
\partial_zW_j=-\frac{1}{2k_j}\left(\partial_t{S}-\frac{\partial_t(ab)}{ab}{S}\right), \end{align} which results in
\begin{eqnarray}\label{wzellipse} \partial _z{W_j}=-\frac{z}{2k_j}\Bigl \{\pd{}{t}\Bigl ( \frac{a^2+b^2}{d^2} \Bigr ) -\frac{(a^2+b^2)}{a\,b\, d^2}\pd{}{t}(ab)\Bigr \}
\notag \\
-\frac{(2z^2-d^2)}{ \sqrt{z^2-d^2}}\, \frac{ab}{2k_jd^4}\pd{}{t}\Bigl ( d^2 \Bigr ) \end{eqnarray} and \begin{eqnarray}\label{wzellipse1} W_j=-\frac{z^2}{4k_j}\Bigl \{\pd{}{t}\Bigl ( \frac{a^2+b^2}{d^2} \Bigr ) -\frac{(a^2+b^2)}{a\,b\, d^2}\pd{}{t}(ab)\Bigr \}
\notag \\ -\frac{a\,b\,z}{2k_jd^4}\sqrt{z^2-d^2}\,\pd{}{t}(d^2) +C_j(t),\label{well} \end{eqnarray} where $C_j(t)$ is an arbitrary function of time.
\begin{figure}
\caption{Squeezing of an ellipse: $a_0=2$, $b_0=1$, $h_0=0.1$, $h(t)=h_0-t$; $t=0$, $t=0.05$, $t=0.07$, $t=0.09$: (a) $d^2=const$, (b) $d^2(t)=d_0^2\exp{(25t)}$, (c) $d^2(t)=d_0^2\exp{(-25t)}$. }
\label{fgElli}
\end{figure}
{\it (a) Evolution with constant inter-focal distance}.
To obtain an exact solution in the absence of sinks and sources in the finite part of the plane, we set $d(t)=d(0)$. Then, the second term in the formula (\ref{wzellipse1}) vanishes, which implies the following expression for the pressure \begin{equation} \tilde p_j =\Re [W_j]=\frac{1}{4k_j}\Bigl ( (x^2-y^2)\frac{\dot a\, d_0^2}{a(a^2-d_0^2)}+2\dot a a
\Bigr )+f(t), \end{equation} therefore, \begin{equation} p_j =\frac{\dot a}{2k_j a (a^2-d^2_0)}\Bigl ( d^2_0 x^2-a^2(x^2+y^2)+a^2(a^2-d^2_0)
\Bigr )+f(t) \end{equation} is the solution to the problem \eqref{01}-\eqref{3}. Note that when $d_0=0$, this formula coincides with formula \eqref{Pcircle} related to the circular interface.
Hence, $\Gamma (t)$ is a family of co-focal ellipses, $$ \frac{x^2}{a^2(t)}+\frac{y^2}{b^2(t)}= 1, $$ controlled by one of the functions $a(t)$, $b(t)$ or $h(t)$. If $h(t)$ is given, then \begin{eqnarray} & a^2(t)=\frac{1}{2}\Bigl ( a_0^2-b_0^2+\sqrt{(a_0^2-b_0^2)^2+4a_0^2b_0^2h_0^2/h^2(t)} \Bigr ),\\ & b^2(t)=\frac{1}{2}\Bigl ( b_0^2-a_0^2+\sqrt{(a_0^2-b_0^2)^2+4a_0^2b_0^2h_0^2/h^2(t)} \Bigr ). \end{eqnarray} An example of such an evolution with a linear function $h(t)$ is shown in Fig.~1(a).
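As a numerical check of the formulas above (illustrative only, with the parameter values of Fig.~1), the following NumPy sketch confirms that the resulting ellipses are co-focal, $a^2(t)-b^2(t)=a_0^2-b_0^2$, and conserve the sandwiched volume, $a(t)b(t)h(t)=a_0b_0h_0$.
\begin{verbatim}
import numpy as np

a0, b0, h0 = 2.0, 1.0, 0.1         # parameters of Fig. 1
t = np.linspace(0.0, 0.09, 50)
h = h0 - t                         # linear squeezing h(t) = h0 - t

d2 = a0**2 - b0**2                 # (half inter-focal distance)^2, constant
root = np.sqrt(d2**2 + 4.0 * a0**2 * b0**2 * h0**2 / h**2)
a2 = 0.5 * ( d2 + root)            # a(t)^2
b2 = 0.5 * (-d2 + root)            # b(t)^2

assert np.allclose(a2 - b2, d2)                         # co-focal ellipses
assert np.allclose(np.sqrt(a2 * b2) * h, a0 * b0 * h0)  # volume conservation
\end{verbatim}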
{\it (b) Evolution with variable inter-focal distance}.
If we admit solutions with variable inter-focal distance by keeping all terms in (\ref{wzellipse1}), we must allow, in addition to the gap change, some sinks/sources located in $\Omega _2$. In that case, the pressure is \begin{eqnarray} \tilde p_j=-\frac{(x^2-y^2)}{4k_j}\Bigl \{\pd{}{t}\Bigl ( \frac{a^2+b^2}{d^2} \Bigr ) -\frac{(a^2+b^2)}{a\,b\, d^2}\pd{}{t}(ab)\Bigr \}
\notag \\ -\frac{a\,b}{2k_jd^4}\,\pd{}{t}(d^2) \frac{x\,(\alpha ^2-y^2)}{\alpha} -\frac{ab\,(\dot ab-a\dot b)}{2k_jd^2} +f(t), \end{eqnarray} where $$ \alpha ^2=\Bigl ( x^2-y^2-d^2 +\sqrt{(x^2-y^2-d^2)^2+4x^2y^2} \Bigr ) /2, $$ and therefore \begin{eqnarray}
p_j=-\frac{(x^2-y^2)}{4k_j}\Bigl \{\pd{}{t}\Bigl ( \frac{a^2+b^2}{d^2} \Bigr ) -\frac{(a^2+b^2)}{a\,b\, d^2}\pd{}{t}(ab)\Bigr \} -\frac{ab\,(\dot ab-a\dot b)}{2k_jd^2} \notag \\ -\frac{a\,b}{2k_jd^4}\,\pd{}{t}(d^2) \frac{x\,(\alpha ^2-y^2)}{\alpha}
-\frac{\partial _t(ab)}{4k_j ab}(x^2+y^2) +f(t).\label{well1} \end{eqnarray} Equation (\ref{wzellipse1}) implies that there are two singular points in the interior domain $\Omega _2$, $z=\pm d$. The Schwarz function near those points has the square root representation \eqref{eq6} with $$ \xi ^g=-\frac{2ab}{d^2}\sqrt{z\pm d}. $$ The direction of the cut at each point is defined by formula \eqref{dir},
which implies that at the point $z_a=d$, the angle is $\varphi ^g =\pi +2\pi k$ and at the point $z_a=-d$, the angle is $\varphi ^g=2\pi k$, $k=0,\pm 1,\pm 2,\dots$. Thus, the cut $\gamma _2(t)$ is located along the inter-focal segment $[-d,d]$. The density of the distribution of sinks and sources along that segment is given by the formula $$ \mu _2= \frac{ab\,\partial _t (d^2)}{k_2d^4}\,\,\frac{(2x^2-d^2)}{\sqrt{d^2-x^2}}. $$
Such a density changes its sign along the inter-focal segment, so its presence does not affect the area of the ellipse,
$\dot A=\int _{-d}^{d} k_2 \mu _2(x,t)\, dx=0$. Fig.~1 shows how the sinks and sources change the evolution of the interface with increasing (see Fig.~1 (b)) and decreasing (see Fig.~1 (c)) inter-focal distances.
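The vanishing of the total flux can also be confirmed symbolically; the short SymPy sketch below (illustrative only) evaluates $\int_{-d}^{d}(2x^2-d^2)/\sqrt{d^2-x^2}\,dx=0$, which is the $x$-dependent part of $\mu_2$.
\begin{verbatim}
import sympy as sp

x, d = sp.symbols('x d', positive=True)
# x-dependent part of mu_2 on the inter-focal segment [-d, d]
density = (2 * x**2 - d**2) / sp.sqrt(d**2 - x**2)
assert sp.integrate(density, (x, -d, d)) == 0
\end{verbatim}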
\subsection{The Cassini's oval}
Similar to the previous examples, assume that $\Gamma (t)$ remains in the specific family of curves, the Cassini's ovals, given by the equation $$ \left( x^2+y^2\right) ^2-2b(t)^2\left( x^2-y^2\right) =a(t)^4-b(t)^4, $$
where $a(t)$ and $b(t)$ are unknown positive functions of time.
The curve consists of a single closed loop if $a(t)>b(t)$ (see Fig.~\ref{fgCass}) and of two closed loops otherwise. Assume that $a(0)>b(0)$ at $t=0$. \begin{figure}
\caption{Squeezing of the Cassini's ovals
for $b(t)=b_0=1$, $a_0=1.1$, $h_0=0.1$, $h(t)=h_0-t$: (a) $t=0$,
(b) $t=0.05$.
}
\label{fgCass}
\end{figure}
The Schwarz function of Cassini's oval, $$ S\left( z,t \right) =\sqrt{b^2z^2+a^4-b^4}\, /\sqrt{z^2-b^2}, $$
has two singularities in $\Omega _1(t)$, $z=\pm i\sqrt{(a^4-b^4)/b^2}$, and two singularities in $\Omega _2(t)$, $z=\pm b$. The corresponding complex velocities have singularities at the same points, \begin{equation}\label{I3} \partial _z W_j=-\frac{1}{2k_j}\Bigl ( \frac{B_1z^2+B_2}{\sqrt{(b^2z^2+a^4-b^4)(z^2-b^2)}}+
\frac{b\dot b \sqrt{b^2z^2+a^4-b^4}}{\sqrt{(z^2-b^2)^3}}
\Bigr ). \end{equation} Here $$ B_1=b\dot b +b^2\dot h /h,\qquad B_2 =2a^3\dot a-2b^3\dot b + (a^4-b^4)\dot h/h, $$ and $\dot h/h=-\dot A/A$ due to volume conservation.
The area of Cassini's oval can be computed in polar coordinates, $A=a^2E(\pi,\frac{b^2}{a^2})=2a^2E(\frac{b^2}{a^2})$, where $E(\phi, k)=\int\limits _0^\phi\sqrt{1-k^2\sin ^2 t }\,dt$ and $E(k)=E(\pi/2,k)$, resulting in \begin{equation} \frac{\dot A}{A}=\frac{2\dot a}{a}+\frac{\partial _t E(\pi,\frac{b^2}{a^2})}{E(\pi,\frac{b^2}{a^2})}. \end{equation} Taking into account (\cite{prudnikov}, p. 772), $$ \pd{E(\phi,k)}{k}=\frac{1}{k}\Bigl ( E(\phi ,k)-F(\phi, k)\Bigr ), $$ where \begin{equation}\label{ellipticF} F(\phi, k)=\int\limits _0^\phi\frac{1}{\sqrt{1-k^2\sin ^2 t }}\,dt, \end{equation} $F(\pi/2,k)=K(k)$, and $ \partial _t E(\pi,\frac{b^2}{a^2})=\Bigl (E(\pi,\frac{b^2}{a^2})-F(\pi,\frac{b^2}{a^2})\Bigr ) \frac{2a\dot b-2b\dot a}{ab}, $ we have \begin{equation} B_1(t)=\frac{b}{aE(\pi,\frac{b^2}{a^2})} \Bigl (-a\dot b E(\pi,\frac{b^2}{a^2})+2(a\dot b-\dot a b)F(\pi,\frac{b^2}{a^2})\Bigr ), \end{equation} \begin{equation} B_2(t)=\frac{2(\dot ab-a\dot b)}{abE(\pi,\frac{b^2}{a^2})} \Bigl (a^4 E(\pi,\frac{b^2}{a^2})-(a^4- b^4)F(\pi,\frac{b^2}{a^2})\Bigr ), \end{equation} and \begin{equation}
W_j=-\frac{1}{2k_j}\Bigl ( B_1 I_1+B_2I_2+ b\dot b I_3
\Bigr ). \end{equation} Here $$ I_1 =\frac{b^2}{a^2}F \Bigl (\cos ^{-1} \bigl ( \frac{b}{z}\bigr ),\, \frac{\sqrt{a^4-b^4}}{a^2}\, \Bigr ) -\frac{a^2}{b^2}E \Bigl (\cos ^{-1} \bigl ( \frac{b}{z}\bigr ),\, \frac{\sqrt{a^4-b^4}}{a^2}\, \Bigr ) +\frac{\sqrt{(z^2b^2+a^4-b^4)(z^2-b^2)}}{zb^2}, $$ \begin{eqnarray}\notag & I_2 =\frac{1}{a^2}F \Bigl (\cos ^{-1} \bigl ( \frac{b}{z}\bigr ),\, \frac{\sqrt{a^4-b^4}}{a^2}\, \Bigr )=\frac{1}{a^2}\int\limits _0 ^{\sqrt{1-b^2/z^2}} \frac{dt}{\sqrt{1-\frac{a^4-b^4}{a^4}t^2}\,\sqrt{1-t^2}}= \\ & \frac{1}{a^2}\int\limits _0 ^{\cos ^{-1} (b/z )}\frac{dt}{\sqrt{1-\frac{a^4-b^4}{a^4}\sin ^2 t}}, \notag \end{eqnarray} and the integral $I_3$ corresponds to the last term in \eqref{I3}.
To ensure that the singularities of the complex potential have at most logarithmic growth, we eliminate this term by setting $\dot b$ to zero. Thus, we have $$ \dot S\left( z\right) =\frac{2a^3\dot a}{\sqrt{b^2z^2+a^4-b^4}\sqrt{z^2-b^2}}, $$
and the equation \eqref{main2mg} implies \begin{eqnarray}\label{mainCas} & {W_j} =-\frac{a\dot a}{k_jE(\frac{b^2}{a^2})}\Bigl[\Bigl (E(\frac{b^2}{a^2})-K(\frac{b^2}{a^2})\Bigr ) \,F \Bigl (\xi,\, \frac{\sqrt{a^4-b^4}}{a^2}\,\Bigr )\\ & +K(\frac{b^2}{a^2})\,E \Bigl (\xi,\, \frac{\sqrt{a^4-b^4}}{a^2}\,\Bigr ) -\frac{K(\frac{b^2}{a^2})\,\sqrt{(z^2b^2+a^4-b^4)(z^2-b^2)}}{a^2z}\Bigr ] +C(t), \notag \end{eqnarray}
where $\xi=\cos ^{-1} \bigl (\frac{b}{z} \bigr)$ and $F(\alpha\,,\beta)$ is the incomplete elliptic integral of the first kind \eqref{ellipticF}, $$ F \Bigl (\cos ^{-1} \bigl ( \frac{b}{z}\bigr ),\, \frac{\sqrt{a^4-b^4}}{a^2}\, \Bigr )=\int\limits _0 ^{\sqrt{1-b^2/z^2}} \frac{dt}{\sqrt{1-\frac{a^4-b^4}{a^4}t^2}\,\sqrt{1-t^2}}= \int\limits _0 ^{\cos ^{-1} (b/z )}\frac{dt}{\sqrt{1-\frac{a^4-b^4}{a^4}\sin ^2 t}}. $$ Since $p_j=\Re \,[W_j]$, we need to compute the real parts for each term in \eqref{mainCas}. Using the property $\overline {F(\alpha\,,\beta)}=F(\overline{\alpha}\,,\beta)$ and the summation formula for the elliptic integrals of the first kind \cite{BE}, we have $$ \frac{1}{2} \Bigl [F \Bigl (\xi,\, \frac{\sqrt{a^4-b^4}}{a^2}\,\Bigr )+\overline{F \Bigl (\xi,\, \frac{\sqrt{a^4-b^4}}{a^2}}\,\Bigr )\Bigr]=\frac{1}{2}F \Bigl (\alpha ,\, \frac{\sqrt{a^4-b^4}}{a^2}\,\Bigr ), $$ where \begin{equation}\label{alpha1} \alpha=\sin ^{-1}\frac{ \cos \overline{\xi}\sin \xi \sqrt{1-\frac{{a^4-b^4}}{a^4}\sin ^2\overline{\xi}}+ \cos \xi\sin \overline{\xi} \sqrt{1-\frac{{a^4-b^4}}{a^4}\sin ^2{\xi}} } { 1-\frac{{a^4-b^4}}{a^4}\sin ^2 \xi \sin ^2 \overline{\xi} } \end{equation} or \begin{equation}\label{alpha3} \alpha=\sin ^{-1}\frac{ a^2z\sqrt{z^2-b^2}\sqrt{b^2\bar z^2+a^4-b^4}+ a^2\bar z \sqrt{\bar z ^2 -b^2} \sqrt{b^2z^2+a^4-b^4} } { b^2z^2\bar z ^2+(a^4-b^4)(z^2+\bar z ^2-b^2) }. \end{equation} Similarly, using the property $\overline {E(\alpha\,,\beta)}=E(\overline{\alpha}\,,\beta)$ and the summation formula for the elliptic integrals of the second kind \cite{BE}, we have $$ \frac{1}{2} \Bigl [E \Bigl (\xi,\, \frac{\sqrt{a^4-b^4}}{a^2}\,\Bigr )+\overline{E \Bigl (\xi,\, \frac{\sqrt{a^4-b^4}}{a^2}}\,\Bigr )\Bigr]=\frac{1}{2}E \Bigl (\alpha ,\, \frac{\sqrt{a^4-b^4}}{a^2}\,\Bigr )+ \frac{(a^4-b^4)\sqrt{(z^2-b^2)(\bar z^2-b^2)}}{2a^4z\bar z}\sin\alpha . $$ Consequently, the pressure is determined by \begin{eqnarray}\label{mainCasP} {\tilde p_j} = & -\frac{a\dot a}{2k_jE(\frac{b^2}{a^2})}\Bigl[\Bigl (E(\frac{b^2}{a^2})-K(\frac{b^2}{a^2})\Bigr ) \,F \Bigl (\alpha,\, \frac{\sqrt{a^4-b^4}}{a^2}\,\Bigr )
+K(\frac{b^2}{a^2})\,E \Bigl (\alpha,\, \frac{\sqrt{a^4-b^4}}{a^2}\,\Bigr )\\ & +K(\frac{b^2}{a^2})\,\frac{(a^4-b^4)\sqrt{(z^2-b^2)(\bar z ^2-b^2)}}{a^4z\bar z} -\frac{2K(\frac{b^2}{a^2})}{a^2}\,\Re\{\frac{\sqrt{(z^2b^2+a^4-b^4)(z^2-b^2)}}{z}\}\Bigr ] +C_j(t). \notag \end{eqnarray} Here $$ \Re\{\frac{\sqrt{(z^2b^2+a^4-b^4)(z^2-b^2)}}{z}\}= \frac{x(\alpha _1^2\alpha _2 ^2-x^2y^2b^2+y^2(\alpha _1^2b^2+\alpha _2^2))} {(x^2+y^2)\,\alpha _1\alpha _2}, $$ where $$ \alpha _1 ^2 = (x^2-y^2-b^2+\sqrt{(x^2-y^2-b^2)^2+4x^2y^2}\,)/2 $$ and $$ \alpha _2 ^2 = ((x^2-y^2)b^2+a^4-b^4+\sqrt{((x^2-y^2)b^2+a^4-b^4)^2+4x^2y^2b^2}\,)/2. $$ Taking into account the boundary condition to determine $C_j(t)$, we have
\begin{eqnarray}\label{mainCasP1} & {\tilde p_j} = -\frac{a\dot a}{2k_jE(\frac{b^2}{a^2})}\Bigl[\Bigl (E(\frac{b^2}{a^2})-K(\frac{b^2}{a^2})\Bigr ) \,F \Bigl (\alpha,\, \frac{\sqrt{a^4-b^4}}{a^2}\,\Bigr ) \notag
+K(\frac{b^2}{a^2})\,E \Bigl (\alpha,\, \frac{\sqrt{a^4-b^4}}{a^2}\,\Bigr )\\ & + K(\frac{b^2}{a^2})\,\frac{(a^4-b^4)\sqrt{(z^2-b^2)(\bar z ^2-b^2)}}{a^4z\bar z}\notag -\frac{2K(\frac{b^2}{a^2})}{a^2}\,\Re\{\frac{\sqrt{(z^2b^2+a^4-b^4)(z^2-b^2)}}{z}\}\\ & -\Bigl (E(\frac{b^2}{a^2})-K(\frac{b^2}{a^2})\Bigr ) \,K \Bigl (\frac{\sqrt{a^4-b^4}}{a^2}\,\Bigr )- K(\frac{b^2}{a^2})\,E \Bigl (\frac{\sqrt{a^4-b^4}}{a^2}\,\Bigr ) \Bigr ] +f(t) \end{eqnarray} or \begin{eqnarray}\label{mainCasP2} & {\tilde p_j} = -\frac{a\dot a}{2k_jE(\frac{b^2}{a^2})}\Bigl[\Bigl (E(\frac{b^2}{a^2})-K(\frac{b^2}{a^2})\Bigr ) \,F \Bigl (\alpha,\, \frac{\sqrt{a^4-b^4}}{a^2}\,\Bigr ) \notag
+K(\frac{b^2}{a^2})\,E \Bigl (\alpha,\, \frac{\sqrt{a^4-b^4}}{a^2}\,\Bigr )\\ & + K(\frac{b^2}{a^2})\,\frac{(a^4-b^4)\sqrt{(x^2+y^2)^2-2b^2(x^2-y^2)+b^4}}{a^4(x^2+y^2)}\\ \notag & -\frac{2K(\frac{b^2}{a^2})}{a^2}\,\frac{x(\alpha _1^2\alpha _2 ^2-x^2y^2b^2+y^2(\alpha _1^2b^2+\alpha _2^2))} {(x^2+y^2)\,\alpha _1\alpha _2}
-\frac{\pi}{2}\Bigr ] +f(t). \notag \end{eqnarray} Hence, $$ p_j=\tilde p_j- \frac{\dot a K(\frac{b^2}{a^2})}{2k_j aE(\frac{b^2}{a^2})}(x^2+y^2). $$
To find the location of sinks and sources in the interior domain $\Omega _2$, note that the Schwarz function near its singular points $z=\pm b$ has the reciprocal square root representation \eqref{eq6r} with $\xi^r(z,t)=\sqrt{b^2z^2+a^4-b^4}/\sqrt{z\pm b}$. Formula \eqref{dirr} implies that $\varphi ^r(b)=\pi$ and $\varphi ^r(-b)=0$. Taking into account the symmetry of the problem, this identifies the segment $x\in [-b,b]$ as the location of sinks and sources. The corresponding density is $$ \mu _2 =\frac{B_1x^2+B_2}{k_2\sqrt{(b^2x^2+a^4-b^4)(b^2-x^2)}}. $$ Note that $\int\limits _{-b}^{b}\mu _2 (x)\, dx=0$, which is consistent with volume conservation.
To determine the location of the sinks and sources in domain $\Omega _1$, we start with singular points $z_a(t)= \pm i\sqrt{(a^4-b^4)}\,/b$. The Schwarz function near these points has the square root representation \eqref{eq6}, and the directions of the cuts are defined by formula \eqref{dir}.
In the neighborhood of the point $z_a(t)= i\sqrt{(a^4-b^4)}\,/b$, we have $\arg [\dot z _a]=\pi /2 +2\pi k$ and $\arg [\xi ^g\left( z_a(t),t\right)]=-\pi /4+\pi k$. Thus, according to \eqref{dir} the direction of the cut is $\varphi ^g=\pi /2+2\pi k$, $k=0,\pm 1,\pm2, \dots$.
Similarly, at the point $z_a(t)= -i\sqrt{(a^4-b^4)}\,/b$, $\arg [\dot z _a]=-\pi /2 +2\pi k$, $\arg [\xi ^g\left( z_a(t),t\right)]=-3\pi /4+\pi k$. Therefore, the direction of the cut is $\varphi ^g=-\pi /2+2\pi k$.
Taking into consideration symmetry with respect to the $x$-axis, we conclude that the
support of $\mu _1$ consists of two rays starting at the branch points and going to infinity (see the dashed lines in Fig.~2). The density of sinks and sources is defined by $$ \mu _1 =\frac{B_1y^2-B_2}{k_1\sqrt{(b^2y^2-a^4+b^4)(b^2+y^2)}}. $$
The evolution of the oval is controlled by a single function $h(t)$, where $b$ is constant and the parameter $a(t)$ is defined by the equation: $$ \frac{\dot h}{h}=-\frac{\dot a}{a}\frac{K(b^2/a^2)}{E(b^2/a^2)}. $$ Fig.~2 shows the evolution of the Cassini oval under squeezing with $h(t)=h_0-t$ at $t=0$ (see Fig.~2 a) and $t=0.05$ (see Fig.~2 b). The dots correspond to the singular points $z_a$, and the dashed lines correspond to the cuts.
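The evolution law above can also be integrated numerically for a prescribed squeezing law. The following Python sketch is purely illustrative (it is not the computation used for Fig.~2): the values of $b$, $h_0$ and $a(0)$ are arbitrary, and we assume that the argument $b^2/a^2$ of $K$ and $E$ is the parameter of the complete elliptic integrals in \texttt{scipy.special}.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import ellipe, ellipk

# Illustrative parameters (not from the paper): b fixed, h(t) = h0 - t.
b, h0, a0 = 1.0, 1.0, 1.5

def a_dot(t, a):
    # da/dt = -a * (h'/h) * E(b^2/a^2) / K(b^2/a^2); here h'/h = -1/(h0 - t).
    # NOTE: we assume b^2/a^2 is the parameter m of scipy's ellipk/ellipe;
    # if the modulus convention is meant, replace m by k**2.
    m = b**2 / a[0]**2
    return [a[0] * (1.0 / (h0 - t)) * ellipe(m) / ellipk(m)]

sol = solve_ivp(a_dot, (0.0, 0.05), [a0], rtol=1e-10)
print("a(0) =", a0, " a(0.05) =", sol.y[0, -1])
\end{verbatim}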
\section{Concluding remarks}\label{sec:concl}
We have studied a Muskat problem with negligible surface tension and a time-dependent gap width. This study extends the results reported in \cite{tian}, \cite{jpA2015}, and \cite{contExact}. We suggested a method for finding exact solutions and applied it to obtain new exact solutions for an initial elliptical shape and for a Cassini oval. The idea of the method was to keep the interface within a certain family of curves defined by its initial shape.
For the elliptical shape, we found two types of solutions: without sinks and sources in the interior domain, and with the presence of a special distribution of sinks and sources along the inter-focal distance. In the former solution, the inter-focal distance remains constant, while in the latter, it changes.
For the Cassini oval, we found a solution to the problem in which both a change of the gap and special distributions of sinks and sources in the interior and exterior domains are present.
Our mathematical model included an assumption that the volume of the bounded domain $\Omega _2(t)$ is conserved. To show other conserved quantities, we follow Richardson \cite{rich}, \cite{EEK}, \cite{EE} deriving the moment dynamics equation, \begin{equation}\label{moment} \frac{d}{dt}\Bigl [h(t)\int\limits _{\Omega _2(t)} u(x,y) dxdy \Bigr ]=-\chi _2k_2(t)h(t)\int\limits _{\gamma _2(t)} u(s)\mu _2 (s,t) \, ds, \end{equation} where $u(x,y)$ is a harmonic function in a domain $\Omega \supset \Omega _2(t)$. The latter follows from the chain of equalities: $$ \frac{d}{dt}\Bigl [\int\limits _{\Omega _2(t)} u\, dxdy \Bigr ]=\int\limits _{\Gamma(t)} u\,v_n ds =-k_2\int\limits _{\Gamma(t)} u\,\pd{p_2}{n} d\tau $$ $$ =-\frac{\dot h}{h}\int\limits _{\Omega _2(t)} u\, dxdy -\chi _2k_2h\int\limits _{\gamma _2(t)} u\, \mu_2 ds-k_2f(t) \int\limits _{\Gamma(t)} \pd{u}{n} d\tau . $$ By setting $f(t)$ to zero and rearranging the terms, we arrive at \eqref{moment}.
Equation \eqref{moment} implies that in the absence of sinks and sources, $\chi_2=0$, the quantity $h\int _{\Omega _2(t)} u\, dxdy$ is conserved for any harmonic function $u(x,y)$ defined in $\Omega$. The special choice $u(x,y)\equiv 1$, for $\chi_2=0, 1$, corresponds to volume conservation: in that case, the integral on the right-hand side is zero.
Note that in the Saffman-Taylor formulation of the problem, where a viscous fluid occupying the gap between two plates is displaced by a less viscous fluid forced into the gap, unstable fingers form. Similarly, a basic instability, a version of the Saffman-Taylor instability, was identified in \cite{tian} when a viscous circular bubble was surrounded by air and the upper plate was being lifted.
Unstable fingers are subject to tip splitting and exhibit singularities in finite time. In the present paper we considered neither the formation of singularities nor ways of achieving a regularization. The aim of this study was, in contrast, to avoid the formation of singularities by means of a special choice of sinks and sources. Note that linear stability results for the interior problem \cite{tian} indicate that a circular bubble is stable when the plate is moving down. The latter, together with the stability results for the Saffman-Taylor formulation in a radial flow geometry \cite{miranda}, suggests
that the circular interface for the problem in question is expected to be linearly stable in two situations: (i) when a more viscous fluid occupies the interior domain and the upper plate is moving down or (ii) when a less viscous fluid is surrounded by a more viscous fluid and the upper plate is moving up.
\end{document}
\begin{document}
\title{Redundancy implies robustness for bang-bang strategies}
\begin{abstract} We develop in this paper a method ensuring robustness properties for bang-bang strategies, for general nonlinear control systems. Our main idea is to add bang arcs in the form of needle-like variations of the control. With such bang-bang controls having additional degrees of freedom, steering the control system to some given target amounts to solving an overdetermined nonlinear shooting problem, which we do by developing a least-squares approach. In turn, we design a criterion, based on the singular values of the end-point mapping, to measure the robustness of the bang-bang strategy, and we optimize it. Our approach thus shows that redundancy implies robustness, and we show how to achieve some compromises in practice by applying it to the attitude control of a three-dimensional rigid body. \end{abstract}
\section{Introduction}
\subsection{Overview of the method} \label{overview} To introduce the subject, we explain our approach on the control problem consisting of steering the finite-dimensional nonlinear control system \begin{equation}\label{sys_intro} \dot x(t) = f(t,x(t),u(t)), \end{equation} from a given $x(0)=x_0$ to the target point $x(t_f)=x_f$, with a scalar control $u$ that can only switch between two values, say $0$ and $1$. The general method, as well as all assumptions, will be described in detail in a later section.
Let $E(x_0,t_f,u) = x(t_f)$ be the end-point mapping, where $x(\cdot)$ is the solution of \eqref{sys_intro} starting at $x(0)=x_0$ and associated with the control $u$. One aims at finding a bang-bang control $u$, defined on $[0,t_f]$ for some final time $t_f>0$, such that $E(x_0,t_f,u) = x_f$.
Many problems require implementing only bang-bang controls, i.e., controls saturating the constraints and not taking any intermediate value. These are problems where only on/off external actions can be applied to the system.
Of course, such bang-bang controls can usually be designed by using optimal control theory (see \cite{lee1967foundations,pontryagin1987,Trelat1}). For instance, solving a minimal time control problem, or a minimal $L^1$ norm as in \cite{Caponigro2015}, is in general a good way to design bang-bang control strategies. However, due to their optimality status, such controls often suffer from a lack of robustness with respect to uncertainties, model errors, deviations from the target. Moreover, when the Pontryagin maximum principle yields bang-bang controls, such controls have in general a minimal number of switchings: in dimension $3$ for instance, it is proved in \cite{Krener,Kupka,Schattler} (see also \cite{BonnardChyba,Bonnard2005,Trelat2012} for more details on this issue) that, locally, minimal time trajectories of single-input control-affine systems have generically two switchings. Taking into account the free final time, this makes three degrees of freedom, which is the minimal number to generically make the trajectory reach a target point in $\R^3$, i.e., to solve three (nonlinear) equations.
In these conditions, a natural idea is to add redundancy to such bang-bang strategies, by enforcing the control to switch more times than necessary. These additional switching times are introduced by \emph{needle-like variations}, as in the classical proof of the Pontryagin maximum principle (see \cite{lee1967foundations,pontryagin1987}).
We recall that a needle-like variation $\pi_1=(t_1,\delta t_1,u_1)$ of a given control $u$ is the perturbation $u_{\pi_1}$ of the control $u$ given by \begin{equation} \label{eq_needle} u_{\pi_1}(t) = \left\{ \begin{array}{rcl} u_1 & \textrm{if} & t\in [t_1,t_1+\delta t_1], \\
u(t) & \textrm{otherwise,}& \end{array}\right. \end{equation} where $t_1\in [0,t_f]$ is the time at which the spike variation is introduced, $\delta t_1$ is a real number of small absolute value that stands for the duration of the variation, and $u_1 \in [0,1]$ is some arbitrary element of the set of values of controls. When $\delta t_1<0$, one replaces the interval $[t_1,t_1+\delta t_1]$ with $[t_1+\delta t_1,t_1]$ in \eqref{eq_needle}. It is well known that, if $\vert\delta t_1\vert$ is small enough, the control $u_{\pi_1}$ is admissible (that is, the associated trajectory solution of \eqref{sys_intro} is well-defined on $[0,t_f]$) and generates a trajectory $x_{\pi_1}(\cdot)$, which can be viewed as a perturbation of the nominal trajectory $x(\cdot)$ associated with the control $u$, and which steers the control system to the final point \begin{equation}\label{eq_var} E(x_0,t_f,u_{\pi_1}) = E(x_0,t_f,u) + \vert \delta t_1 \vert\, v_{\pi_1}(t_f) + o(\delta t_1), \end{equation} where the so-called variation vector $v_{\pi_1}(\cdot)$ is the solution of some Cauchy problem related to a linearized system along $x(\cdot)$ (see \cite{lee1967foundations,pontryagin1987,Silva2010} and Proposition \ref{prop_differentiability_epm}).
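For concreteness, the following Python sketch builds the perturbed control $u_{\pi_1}$ of \eqref{eq_needle} from a nominal control; the nominal control and the variation parameters below are arbitrary illustrative choices, not taken from the paper.
\begin{verbatim}
def needle_variation(u, t1, dt1, u1):
    """Perturbed control u_{pi1}: value u1 on the spike interval, u elsewhere."""
    lo, hi = (t1, t1 + dt1) if dt1 >= 0 else (t1 + dt1, t1)
    return lambda t: u1 if lo <= t <= hi else u(t)

# Example: a nominal control switching from 0 to 1 at t = 1, with a spike of
# value 1 added on [0.4, 0.45].
u_nominal = lambda t: 0.0 if t < 1.0 else 1.0
u_perturbed = needle_variation(u_nominal, t1=0.4, dt1=0.05, u1=1.0)
print(u_perturbed(0.42), u_perturbed(0.6))   # -> 1.0 0.0
\end{verbatim}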
Recall that the \emph{first Pontryagin cone} $K(t_f)$ is the smallest closed convex cone containing all variation vectors $v_{\pi_1}(t_f)$; it serves as a local convex estimate of the set of reachable points at time $t_f$ (with initial point $x_0$).
\begin{figure}
\caption{Changing the switching times induces a displacement at the final time.}
\label{fig_diff_epm}
\end{figure}
Assume that the nominal control $u$, which steers the system from $x_0$ to the target point $x_f$, is bang-bang and switches $N$ times between the extreme values $0$ and $1$ over the time interval $[0,t_f]$. We denote by $\mathcal{T} = (t_1, \ldots, t_N)$ the vector consisting of its switching times $0<t_1<\cdots<t_N<t_f$. Then the control $u$ can equivalently be represented by the vector $\mathcal{T}$, provided one makes precise the value of $u(t)$ for $t\in(0,t_1)$. One can also add new switching times: for instance if $u(t)=0$ for $t\in(0,t_1)$, given any $s_1\in(0,t_1)$, the needle-like variation $\pi_1=(s_1,\delta s_1,1)$ (with $\vert\delta s_1\vert$ small enough) is a bang-bang control having two new switching times at $s_1$ and $s_1+\delta s_1$.
In what follows, we designate a bang-bang control either by $u$ or by the set $\mathcal{T}=(t_1, \ldots, t_N)$ of its switching times. This is a slight abuse because we should also specify the value of $u$ along the first bang arc; we will be more precise, rigorous and general in a later section. The end-point mapping is then reduced to the switching times, and one has $E(x_0, t_f, \mathcal{T}) = x_f$. A variation $\delta\mathcal{T} = (\delta t_1, \ldots, \delta t_N)$ of the switching times generates $N$ variation vectors $(v_{1}(t_f), \ldots, v_{N}(t_f))$, and the corresponding bang-bang trajectory reaches at time $t_f$ the point (see Figure \ref{fig_diff_epm}, where two variation vectors are displayed, for two switching times $t_1$ and $t_2$) \[
E\left(x_0, t_f, \mathcal{T} + \delta\mathcal{T}\right) = x_f + \delta t_1\cdot v_{1}(t_f) + \cdots + \delta t_N \cdot v_{N}(t_f) + \mathrm{o}(\| \delta\mathcal{T} \|) . \] Therefore the end-point mapping $E$ is differentiable with respect to $\mathcal{T}$, and \begin{equation}\label{eq_diff_epm_intro} \frac{\partial E}{\partial\mathcal{T}}(x_0, t_f, \mathcal{T})\cdot \delta\mathcal{T} = \delta t_1\cdot v_{1}(t_f) + \cdots + \delta t_N \cdot v_{N}(t_f) . \end{equation} Notice that compared to \eqref{eq_var}, the absolute values disappear. We will prove this result in detail later in the paper. In particular, the range of this differential is the first Pontryagin cone $K(t_f)$ (see also \cite{Silva2010}). Obviously, the more switching times (i.e., degrees of freedom), the more accurate the approximation of the reachable set.
We now add \emph{redundant} switching times $(s_1, \ldots, s_\ell)$ for some $\ell \in \textrm{I\kern-0.21emN}$ in order to generate more degrees of freedom to solve the control problem $$ E\left(x_0,t_f,(t_1, \ldots, t_N, s_1, \ldots, s_\ell)\right) = x_f . $$ We order the times in the increasing order and we still denote by $\mathcal{T}$ the vector of all switching times.
\paragraph{Redundancy creates robustness.} We will see further that these redundant switching times help make the trajectory robust to external disturbances or model uncertainties, and we will develop a method to tune the switching times in order to absorb these perturbations and steer the system to the desired target $x_f \in \mathbb{R}^n$.
Here, in this still informal introduction, we show how to use the additional switching times to make the system reach targets $x_f+\delta x_f$ in a neighborhood of $x_f$. The idea is to solve the nonlinear system of equations \begin{equation*} E(x_0,t_f,\mathcal{T}+\delta\mathcal{T}) = x_f +\delta x_f . \end{equation*} Using \eqref{eq_diff_epm_intro}, we propose to solve, at the first order, \begin{equation}\label{eq_rec} \frac{\partial E}{\partial\mathcal{T}}(x_0, t_f, \mathcal{T})\cdot \delta\mathcal{T} = \delta x_f , \end{equation} which makes $n$ equations with $N+\ell$ degrees of freedom. We assume that $N+\ell$ is (possibly much) larger than $n$ and that the matrix in \eqref{eq_rec} is surjective. Then one can solve \eqref{eq_rec} by using the \emph{Moore-Penrose pseudo-inverse} $\left(\frac{\partial E}{\partial\mathcal{T}}\right)^{\dagger}$ of $\frac{\partial E}{\partial\mathcal{T}}$ (see \cite{Golub}, or see \cite{Beutler1,Beutler2} for a theory in infinite dimension), which yields the solution of minimal Euclidean norm \begin{equation*} \delta\mathcal{T} = \left(\frac{\partial E}{\partial\mathcal{T}}\right)^{\dagger} \cdot \delta x_f , \end{equation*} and we have \begin{equation}\label{eq_estimate_simple}
\left\| \delta\mathcal{T} \right\|_2 \leqslant \frac{\left\| \delta x_f \right\|_2}{\sigma_{min}} , \end{equation} where $\sigma_{min}$ is the smallest positive singular value of $\frac{\partial E}{\partial\mathcal{T}}$. This estimate gives a natural measure for robustness, that we will generalize.
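As a simple numerical illustration of \eqref{eq_rec} and \eqref{eq_estimate_simple}, the following Python sketch computes the minimal-norm correction with a pseudo-inverse and checks the singular-value bound; the Jacobian used here is a random stand-in for $\partial E/\partial\mathcal{T}$, which in practice is assembled from the variation vectors.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, n_switch = 3, 7                      # state dimension, number of switching times
dE = rng.standard_normal((n, n_switch)) # stand-in for dE/dT (assumed surjective)
dxf = 1e-2 * rng.standard_normal(n)     # small displacement of the target

dT = np.linalg.pinv(dE) @ dxf           # minimal Euclidean norm solution of dE dT = dxf
sigma_min = np.linalg.svd(dE, compute_uv=False).min()
print(np.linalg.norm(dT) <= np.linalg.norm(dxf) / sigma_min)   # -> True
\end{verbatim}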
The two main contributions of this paper are: \begin{itemize} \item the idea of adding redundant switching times in order to make a nominal bang-bang control more robust, while keeping it as being bang-bang; \item the design of a practical tracking algorithm, consisting of solving an overdetermined nonlinear system by least-squares, thus identifying a robustness criterion that we optimize. \end{itemize} They are developed in a rigorous and general context in the core of the paper.
\subsection{State of the art on robust control design} There is an immense literature on robust control theory, with many existing methods to efficiently control a system subjected to uncertainties and disturbances. While there are many papers on $\mathcal{H}_2$ and $\mathcal{H}_{\infty}$ methods, apart from a few contributions in specific contexts we are not aware of any general theory allowing one to tackle perturbations using only bang-bang controls. This is the focus of this paper.
Let us however shortly report on robustness methods when one is not bound to design bang-bang controls. In \cite{Koh:1999}, a path-tracking algorithm with bang-bang controls is studied, for a double integrator and a wheeled robot. The technique relies heavily on the expression of the equations and does not apply to more general systems. In \cite{singh1994}, the authors build a robust minimal time control for spacecraft's attitude maneuvers by canceling the poles of some transfer function. A remarkable fact is that the robustified control presents more switchings than the minimal time control. In this case, the robustness is evaluated as the maximum amplitude on a Bode diagram (see also \cite{liu1992} and \cite{liu1993} for similar works). In \cite{You2000}, the authors observe that bang-bang controls are intrinsically not robust, and use pieces of singular trajectories (hence, not bang-bang) to overcome this issue.
In the $\mathcal{H}_2$ and $\mathcal{H}_{\infty}$ theories, control systems are often written in the frequency domain using the Laplace transform. For a transfer matrix $G(s)$, the two classical measures for performance are (see \cite{Doyle1989,zhou1996robust}) the $\mathcal{H}_2$ norm and the $\mathcal{H}_{\infty}$ norm respectively: \begin{equation*}
\left\|G\right\|_{2} = \left(\frac{1}{2 \pi} \int^{+ \infty}_{- \infty}{\text{Trace}(G(j \omega) G(j \omega)^*) d\omega} \right)^{1/2}\qquad\textrm{and}\qquad
\left\|G\right\|_{\infty} = \sup_{\omega \in \R} \overline{\sigma}(G(j \omega)), \end{equation*} where $\overline{\sigma}(G)$ is the largest singular value of $G$.
In the linear quadratic theory, the question of optimal tracking has been widely addressed: given a reference trajectory $\xi(\cdot)$, we track it with a solution of some control system $\dot{x}(t) =f(x(t),u(t))$, minimizing a cost of the form \begin{equation*} \int_0^{t_f} \left( \Vert x(t)-\xi(t)\Vert_W^2 + \Vert u(t)\Vert_U^2 \right)\, dt + \Vert x(t_f)-\xi(t_f)\Vert_Q^2 , \end{equation*} with weighted norms (see \cite{Anderson,Kwakernaak,Trelat1}). The first term in the integral measures how close one is to the reference trajectory, the second one measures a $L^2$ norm of the control (energy), and the third one accounts for the distance at final time between the reference trajectory $\xi(\cdot)$ and $x(\cdot)$. Then, the control can be expressed as a feedback function of the error $x(t)-\xi(t)$, involving the solution of some Riccati equation. In \cite{AndreaNovel2013,Khalil}, the authors investigate the question of stabilizing around a slowly time-varying trajectory. They also introduce uncertainties on the model and study the sensitivity of the system to those uncertainties. In the case of the existence of a delay on the input, a feedback law is proposed. In \cite{Lin2007,Tan2009}, uncertainties $p$ are introduced in a linear system $\dot{x}(t) = A(p)x(t) + B u(t)$, and a tracking algorithm is suggested, under matching conditions on the uncertainties or not (see also \cite{Abdallah1991} for a survey on robust control for rigid robots).
In the late 1970's, $\mathcal{H}_{\infty}$ control theory developed. The control system is often described by a plant $G$ and a controller $K$. Then, the dependency of the error $z$ (to be minimized) on the input $v$ can be written as $z = F(G,K) v$. The $\mathcal{H}_{\infty}$ control problem consists of finding the best controller $K$ such that the $\mathcal{H}_{\infty}$ norm of the matrix $F(G,K)$ is minimized: $\left\|F(G,K) \right\|_{\infty} = \sup_{\omega \in \R} \overline{\sigma}(F(G,K)(j \omega))$. It can be interpreted as the maximum gain from the input $v$ to the output $z$. This criterion was introduced in order to deal with uncertainties on the model (on the plant $G$). In \cite{Zames1981}, the author introduced the notion and highlighted the connection with robustness. In \cite{Doyle1989}, a link is shown between the existence of such a controller and conditions on the solutions of two Riccati equations. Following a notion introduced in \cite{Gahinet1992}, the linear matrix inequality (LMI) approach was introduced in \cite{Gahinet1994}, and used in \cite{apkarian2004,noll2006} to solve the $\mathcal{H}_{\infty}$ synthesis. The Riccati equations are replaced with Riccati inequalities, whose set of solutions parameterizes the $\mathcal{H}_{\infty}$ controllers (see also \cite{boyd1994} for the use of LMIs in control theory). The papers \cite{Doyle1981,McFarlane1992,Xie1992} present design procedures in this context to elaborate the feedback controller $K$. In \cite{Ge1996}, the theory is extended to systems with parameters uncertainties and state delays, as well as in \cite{Xu2006}, with stochastic uncertainty.
In many optimal control problems, the application of the Pontryagin maximum principle leads to bang-bang control strategies, and the classical $\mathcal{H}_{2}$ and $\mathcal{H}_{\infty}$ theories were not designed for such a purpose. But the optimal trajectories are in general not robust. Adding needle-like variations is therefore a way to improve robustness, and is the main motivation of this paper. Of course, the method applies to any bang-bang control strategy, not necessarily optimal.
The approach that we suggest in this paper combines an off-line treatment of the control strategies, with a feedback algorithm based on the structure of the control. We emphasize here that this algorithm preserves the bang-bang structure of the control. It consists of applying a nominal control strategy (that needs to be computed \emph{a priori}), and adjusting it in real time, allowing one to track a nominal trajectory. The off-line method takes a solution of the control problem and makes it more robust by adding additional switching times (i.e., \emph{redundancy}), which can be seen as additional degrees of freedom. Note that our analysis is done in the state space, without needing to consider the frequency domain. A key ingredient to the method is the use of needle-like variations.
\subsection{Structure of the paper}
The paper is organized as follows. In Section \ref{sec2}, we develop an algorithm to steer a perturbed system to the desired final point. The method is similar to the one presented in Section \ref{overview}, except that we need to consider a backward problem. Indeed, the final point is fixed, and perturbations appear all along the trajectory. Besides, our measure for robustness comes out naturally in view of \eqref{eq_estimate_simple}. Having identified the robustness criterion, we show in Section \ref{sec3} how to add redundant switching times, leading one to solve a finite-dimensional nonlinear optimization problem. In Section \ref{sec4}, we provide some numerical illustrations on the attitude control problem of a 3-dimensional rigid body.
\section{Tracking algorithm} \label{sec2} \paragraph{Setting.} In this paper, we consider the control system \begin{equation} \dot{x}(t) = f(t,x(t),u(t)), \label{dynamics} \end{equation} where $f$ is a smooth function $\R \times \R^n \times \R^m \rightarrow \R^n$, the state $x(\cdot) \in \R^n$, the control $u(\cdot) \in L^{\infty}([0,t_f];\Omega)$, and $\Omega$ is the subset $[a_1,b_1] \times \cdots \times [a_m,b_m]$ of $\R^m$. We make two additional hypotheses: the controls we consider are ``bang-bang'', with a finite number of switching times: \begin{center} \begin{tabular}{cc} $(H_1)$ & $\forall i \in \llbracket1,m \rrbracket$, $u_i(t) \in \left\{a_i,b_i\right\}$, a.e. \\ $(H_2)$ & $\forall i \in \llbracket1,m \rrbracket$, $u_i$ does not chatter. \\ \end{tabular} \end{center} A control is chattering when it switches infinitely many times over a compact time interval (see \cite{zhu2016,fuller1963}). Therefore, our method does not apply to such controls. However, when the solution of an optimal control problem chatters, one could, when possible, consider a sub-optimal solution with only a finite number of switching times.
In the context of optimal control, we consider a cost of the form \begin{equation} C(u) = \int^{t_f}_0{f^0(t,x(t),u(t))\,dt}. \label{cost} \end{equation}
We recalled in the introduction the (classical) definitions of the end-point mapping, of a needle-like variation (\ref{eq_needle}) and the expansion of the end-point mapping subject to a needle-like variation (\ref{eq_var}).
\subsection{Reduced end-point mapping} \label{reduced_epm}
In this subsection, we give the definition of the reduced end-point mapping, and show a differentiability property.
Let us consider a bang-bang control $u(\cdot)$, and its associated trajectory $x(\cdot)$. For the sake of simplicity, we make the additional assumption that for every switching time $t_j$, one and only one component of the control commutes. Therefore, provided we specify the initial value of each component, the control $u$ is entirely characterized by the switching times of its components and can be represented by a vector: \begin{equation*} \left((u_{10}, \ldots,u_{m0}),\left(t_1,i_1\right), \ldots, \left(t_N,i_N\right), t_f \right) \in \Omega \times \R^{2N+1}, \end{equation*} where $u_{i0} \in \{a_i,b_i\}$ is the initial value for the control $u_i(\cdot)$ ($i \in \llbracket1,m\rrbracket)$, $N$ is the total number of switching times, $t_f$ is the final time, and $i_j$ is the component of the control that switches at time $t_j$. As this representation entirely characterizes the control, we will use indistinctly the notation $u$ and $\left((u_{10}, \ldots,u_{m0}),\left(t_1,i_1\right), \ldots, \left(t_N,i_N\right), t_f \right)$ to speak about the control whose components switch at the times $t_j$. In the literature, $\left(\left(t_1,i_1\right), \ldots, \left(t_N,i_N\right)\right)$ is often called a switching sequence.
\begin{remark} Had we wanted to allow simultaneous switching of multiple components, we would need to consider controls represented by: \begin{equation*} \left((u_{10}, \ldots,u_{m0}),\left(t_1,\mathcal{I}_1\right), \ldots, \left(t_N,\mathcal{I}_N\right), t_f \right), \end{equation*} where $\mathcal{I}_j \subset \llbracket1,m \rrbracket$ represents the set of components that switch at time $t_j$. \end{remark}
\begin{definition}[Reduced end-point mapping] We define the \emph{reduced end-point mapping} by \begin{equation*} E(x_0,(u_{10}, \ldots,u_{m0}),\left(t_1,i_1\right), \ldots, \left(t_N,i_N\right),t_f) = x_u(x_0,t_f), \end{equation*} where $u$ is the control represented by $\left((u_{10}, \ldots,u_{m0}),\left(t_1,i_1\right), \ldots, \left(t_N,i_N\right), t_f \right)$, and $x_u(x_0,t_f)$ is the associated state at time $t_f$, starting at $x_0$. \end{definition} Note that in \cite{Maurer2005,Maurer2004}, the authors also reduce a bang-bang control to its switching points, in order to formulate an optimization problem in finite-dimension.
In the following, when writing this reduced end-point mapping, we may consider that the initial point $x_0$ is fixed, as well as the way the components of the control switch (i.e., we consider that the N-tuple $(i_1, \ldots, i_N)$ is fixed), the initial values $u_{i0}$ and the final time $t_f$. In this context, we may forget them in the notations, and denote the reduced end-point mapping by \begin{equation*} E(t_1, \ldots, t_N) = x_u(t_f). \end{equation*}
A remarkable fact is that the reduced end-point-mapping is differentiable. Compared to the expansion \eqref{eq_var} with respect to a needle-like variation, the sign of $\delta t$ does not matter. For the sake of completeness, we give the proof in appendix. \begin{proposition} \label{prop_differentiability_epm} The reduced end-point mapping is differentiable, and \begin{equation*} dE(t_1 , \ldots, t_N) = \begin{pmatrix} v_1(t_f) & \cdots & v_N(t_f) \end{pmatrix} \in \mathcal{M}_{n,N}(\R), \end{equation*} where $v_j(\cdot)$ ($j \in \llbracket1,N \rrbracket$) is the solution of the Cauchy problem, defined for $t \geqslant t_j$: \begin{align*} \dot{v}_j(t) &= \frac{\partial f}{\partial x}(t,x(t),u(t))v_j(t) \\ v_j(t_j) &= \left\{ \begin{tabular}{rl} $f(t_j,x(t_j),(\ldots, a_{i_j}, \ldots)) - f(t_j,x(t_j),u(t_j^+))$ & if $u_{i_j}$ switches from $a_{i_j}$ to $b_{i_j}$. \\ $f(t_j,x(t_j),(\ldots, b_{i_j}, \ldots)) - f(t_j,x(t_j),u(t_j^+))$ & if $u_{i_j}$ switches from $b_{i_j}$ to $a_{i_j}$. \end{tabular} \right. \end{align*} The notation $(\ldots, a_{i_j}, \ldots)$ (resp. $(\ldots, b_{i_j}, \ldots)$) is used to show a difference with $u(t_j^+)$ (resp. $u(t_j^-)$) on the $i_j$-th component only. \end{proposition}
\begin{remark} \label{rq_affine} In the special (and important in practice) case of a control-affine system \[ \dot{x}(t) = f_0(x(t)) + \sum_{k=1}^m{u_k(t) f_k(x(t))}, \] the initial condition on $v_j$ can be written much more easily: \begin{equation*} v_j(t_j) = \left\{ \begin{tabular}{rl} $ (a_{i_j} - b_{i_j}) f_{i_j}(x(t_j))$ & if $u_{i_j}$ switches from $a_{i_j}$ to $b_{i_j}$. \\ $(b_{i_j} - a_{i_j}) f_{i_j}(x(t_j))$ & if $u_{i_j}$ switches from $b_{i_j}$ to $a_{i_j}$. \end{tabular} \right. \end{equation*} \end{remark}
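The proposition above translates directly into a numerical procedure. The following sketch is illustrative only: \texttt{x\_of\_t} (the nominal trajectory), \texttt{dfdx} (the Jacobian of $f$ with respect to $x$) and the initial vectors \texttt{v0s} given by the proposition (or by Remark \ref{rq_affine} in the control-affine case) are problem-specific inputs that we assume available. It integrates each variation vector from its switching time to $t_f$ and stacks them as the columns of $dE$.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def variation_vector(t_j, v0_j, t_f, x_of_t, dfdx):
    # Integrate v' = (df/dx)(t, x(t)) v from t_j to t_f (Cauchy problem above).
    ode = lambda t, v: dfdx(t, x_of_t(t)) @ v
    return solve_ivp(ode, (t_j, t_f), v0_j, rtol=1e-9).y[:, -1]

def reduced_jacobian(switching_times, v0s, t_f, x_of_t, dfdx):
    # Columns of dE(t_1, ..., t_N) are the variation vectors v_j(t_f).
    cols = [variation_vector(tj, v0, t_f, x_of_t, dfdx)
            for tj, v0 in zip(switching_times, v0s)]
    return np.column_stack(cols)
\end{verbatim}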
\subsection{Absorbing perturbations} \label{backward_epm}
As explained in the introduction, we present in this paper a closed-loop method to actually steer the system towards a point $x_f$, with bang-bang controls, even in the presence of perturbations.
First, for the sake of simplicity, we will explain how to control the system to some point $x_f + \delta x_f$. We will see that this idea can be adapted for our purpose of controlling a perturbed trajectory, by simply reversing the time.
\paragraph{Perturbations on the final point.}
We briefly generalize the problem introduced in the introduction. Let \[ \overline{u} = \left((u_{10}, \ldots,u_{m0}),\left(\overline{t}_1,i_1\right), \ldots, \left(\overline{t}_N,i_N\right), t_f \right) \in \Omega \times \R^{2N+1} \] be a control such that $x_{\overline{u}}(t_f) = x_f$. That is, using the definition of Subsection \ref{reduced_epm}, we have that \begin{equation*} E(x_0,(u_{10}, \ldots,u_{m0}),\left(\overline{t}_1,i_1\right), \ldots, \left(\overline{t}_N,i_N\right), t_f) = x_f. \end{equation*} Or, considering that the final time $t_f$, the initial point $x_0$, the components $(i_1, \ldots,i_N)$ and the initial values $(u_{10}, \ldots,u_{m0})$ are fixed, \begin{equation*} E(\overline{t}_1, \ldots, \overline{t}_N) = x_f. \end{equation*} Let $\delta x_f$ be some perturbation of the final point $x_f$. We look for a vector $\delta \mathcal{T} = (\delta t_1, \ldots, \delta t_N)$ so that the system reaches the target point $x_f + \delta x_f$: \begin{equation*} E(\overline{t}_1 + \delta t_1, \ldots, \overline{t}_N + \delta t_N) = x_f + \delta x_f. \end{equation*} As we have shown in Proposition \ref{prop_differentiability_epm} the differentiability of the reduced end-point mapping, we can write \begin{equation*}
E(\overline{t}_1 + \delta t_1, \ldots, \overline{t}_N + \delta t_N) = E(\overline{t}_1, \ldots,\overline{t}_N) + dE(\overline{t}_1, \ldots,\overline{t}_N) \cdot \delta \mathcal{T} + o(\| \delta \mathcal{T}\|). \end{equation*} At first order, the correction is given by the solution of the linear equation \begin{equation*} dE(\overline{t}_1, \ldots,\overline{t}_N) \cdot \delta \mathcal{T} = \delta x_f. \end{equation*} It is natural to target the final point $x_f +\delta x_f$ while shifting the switching times as little as possible. That is, we look for the solution of minimal Euclidean norm of the previous equation, which is given by $\delta \mathcal{T} = dE(\overline{t}_1, \ldots, \overline{t}_N)^{\dagger} \cdot \delta x_f$.
Therefore, we have shown how to compute, at order one, the correction to apply to control the system to some point $x_f + \delta x_f$: it boils down to solving a least-squares problem. Let us keep in mind that our ultimate goal is to steer systems that are perturbed all along their trajectory to a fixed final point $x_f$. In other words, from a perturbed point $x(t) + \delta x(t)$ at some time $t \in [0,t_f)$, we want to absorb the perturbation $\delta x(t)$ and still reach the final point $x_f$. Even though this is a slightly different setting, we show that the same idea applies if we consider a \emph{backward problem}.
\paragraph{Absorbing a perturbation at time $t$.} Let $(\overline{x}(\cdot),\overline{u}(\cdot))$ be a nominal solution of the control system \eqref{dynamics}. We assume that when applying in practice the control $\overline{u} = \overline{\mathcal{T}}$, because of model uncertainties and perturbations, we observe a perturbed trajectory $x_{per}(t) = \overline{x}(t) + \delta x(t)$.
Let $t \in [0,t_f]$. Starting from the perturbed point $\overline{x}(t) + \delta x(t)$, which stands as a new initial point, we want to reach the final point $x_f$ in time $t_f-t$. Hence, we look for a control $\overline{u} + \delta u$ such that \[ E(\overline{x}(t) + \delta x(t),\overline{u}+\delta u,t_f-t) = x_f. \] Assume for a moment that the perturbation of the control $\delta u$ is small in $L^{\infty}$ norm. Then, at least formally, one can write \[
E(\overline{x}(t),\overline{u},t_f-t) + \frac{\partial E}{\partial x_0}(\overline{x}(t),\overline{u},t_f-t)\cdot \delta x(t) + \frac{\partial E}{\partial u}(\overline{x}(t),\overline{u},t_f-t)\cdot \delta u + o(\|\delta x(t)\|+\|\delta u\|)= x_f. \] Therefore, at order one, we look for a solution of the (linear) equation \begin{equation} \frac{\partial E}{\partial x_0}(\overline{x}(t),\overline{u},t_f-t)\cdot \delta x(t) + \frac{\partial E}{\partial u}(\overline{x}(t),\overline{u},t_f-t)\cdot \delta u = 0. \label{eq_perturbations} \end{equation}
However, we do not want, in this paper, to apply small perturbations in the $L^{\infty}$ norm, as they would not result in bang-bang controls (this is, however, similar to what is done when performing a Riccati procedure to stabilize a system or track a reference trajectory). Nevertheless, reducing the end-point mapping to the switching times enables us to preserve the bang-bang structure: in the formalism previously introduced, we need to solve the nonlinear system of equations \begin{equation*} E(\overline{x}(t) + \delta x(t), \overline{\mathcal{T}} + \delta \mathcal{T}, t_f - t) = x_f. \end{equation*}
The equation \eqref{eq_perturbations} becomes \begin{equation} \frac{\partial E}{\partial \mathcal{T}}(\overline{x}(t),\overline{\mathcal{T}},t_f-t)\cdot \delta \mathcal{T} = - \frac{\partial E}{\partial x_0}(\overline{x}(t),\overline{\mathcal{T}},t_f-t)\cdot \delta x(t), \label{eq_perturbations_1} \end{equation} where the expression $\partial E / \partial \mathcal{T}$ is given by Proposition \ref{prop_differentiability_epm}.
\paragraph{A backward problem.} Solving this equation requires the computation of the partial differential $\partial E/\partial x_0$ at the initial point $\bar{x}(t)$. We will now see that this computation can be avoided by introducing a backward problem. Of course, the two formulations are equivalent.
\begin{definition}[Backward end-point mapping] Let $u = (t_1, \ldots, t_N)$ be a bang-bang control, and $t \in [0,t_f]$. We define the backward end-point mapping by \begin{equation*} \tilde{E}(t, t_1, \ldots, t_N) = \tilde{x}(t_f - t), \end{equation*} where $\tilde{x}(\cdot)$ is the solution to the Cauchy problem \begin{align*} \dot{\tilde{x}}(t) &= -f(t_f - t, \tilde{x}(t),u(t_f - t)), \\ \tilde{x}(0) &= x_f. \end{align*} \end{definition}
Note that for the nominal trajectory $(\overline{x}(\cdot),\overline{u}(\cdot))$, we have that \begin{equation*} \tilde{E}(t, \overline{t}_1, \ldots, \overline{t}_N) = \overline{x}(t). \end{equation*} Indeed, we have in this case that $\overline{x}(t) = \tilde{x}(t_f - t)$: if we integrate the nominal system backward, starting from the point $x_f$ during a time period $t_f - t$, we end up at point $\overline{x}(t)$.
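Numerically, evaluating the backward end-point mapping amounts to integrating the time-reversed dynamics from $x_f$. The following sketch is illustrative only: \texttt{f} (assumed to return a NumPy array) and \texttt{u\_of\_t} (the bang-bang control reconstructed from the vector of switching times) are problem-specific helpers assumed given.
\begin{verbatim}
from scipy.integrate import solve_ivp

def backward_endpoint(t, T, x_f, t_f, f, u_of_t):
    # tilde{E}(t, t_1, ..., t_N): integrate x' = -f(t_f - s, x, u(t_f - s))
    # from x(0) = x_f over a horizon of length t_f - t.
    ode = lambda s, x: -f(t_f - s, x, u_of_t(t_f - s, T))
    return solve_ivp(ode, (0.0, t_f - t), x_f, rtol=1e-9).y[:, -1]
\end{verbatim}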
\begin{remark} Let $t \in [0,t_f]$, and $j$ be the smallest index such that $\overline{t}_j > t$ (with the convention that $j = N+1$ if $t>t_N$). Then, note that $\overline{t}_1, \ldots, \overline{t}_{j-1}$ do not play any role in the computation of $\tilde{E}(t, \overline{t}_1, \ldots, \overline{t}_N)$. The differential of $\tilde{E}$ can be computed with the Proposition \ref{prop_differentiability_epm}. It is a matrix of size $n \times (N - j + 1)$. \end{remark}
In this context, the problem of adjusting the system back towards $x_f$ writes: at time $t$, find $(t_j, \ldots, t_N)$ such that \begin{equation} \tilde{E}(t,{t}_1, \ldots, {t}_N) = x_{per}(t). \label{prob_backward} \end{equation}
We see that reversing the time, we place ourselves in the setting previously described of aiming at a perturbed final point. Therefore, we have the following proposition.
\begin{proposition} At order one in $\delta x$, the solution of minimal norm of the problem \eqref{prob_backward} is given by $\overline{\mathcal{T}} + \delta \mathcal{T}$, with \begin{equation} \delta \mathcal{T} = d\tilde{E}(t,\overline{\mathcal{T}})^{\dagger}\cdot \delta x(t), \label{eq_recalage} \end{equation} where $d\tilde{E}(t,\overline{\mathcal{T}})^{\dagger}$ denotes the pseudo-inverse of $d\tilde{E}(t,\overline{\mathcal{T}})$. Moreover, we have the estimate \begin{equation}
\left\| \delta \mathcal{T} \right\|_2 \leqslant \frac{1}{\sigma_{min}(t)} \left\| \delta x(t) \right\|_2, \label{estimate_deltat} \end{equation} where $\sigma_{min}(t)$ is the smallest positive singular value of $d\tilde{E}(t,\overline{\mathcal{T}})$. \end{proposition}
\begin{proof} The outline of the proof has already been given earlier in the paper; however, we write it out in full here. Let $\delta \mathcal{T} = \mathcal{T} - \overline{\mathcal{T}}$. The problem reads \begin{equation*} \tilde{E}(t,\overline{\mathcal{T}} + \delta \mathcal{T}) = x_{per}(t). \end{equation*} According to Proposition \ref{prop_differentiability_epm}, the backward end-point mapping is differentiable (and we also know how to compute its derivative), so \begin{align*}
\tilde{E}(t,\overline{\mathcal{T}} + \delta \mathcal{T}) &= \tilde{E}(t,\overline{\mathcal{T}}) + d\tilde{E}(t,\overline{\mathcal{T}})\cdot \delta \mathcal{T} + o(\| \delta \mathcal{T} \|) \\
&= \overline{x}(t) + d\tilde{E}(t,\overline{\mathcal{T}})\cdot \delta \mathcal{T} + o(\| \delta \mathcal{T}\|). \end{align*} So, at order one, the problem writes \begin{equation} d\tilde{E}(t,\overline{\mathcal{T}})\cdot \delta \mathcal{T} = \delta x(t). \label{eq_perturbations_2} \end{equation}
It is well known (see \cite{allaire2002algebre} for instance), that the solution of minimal norm of this equation is $\delta \mathcal{T} = d\tilde{E}(t,\overline{\mathcal{T}})^{\dagger}\cdot \delta x(t)$. Besides, let $\sigma_{max}(t) > \cdots > \sigma_{min}(t) > 0$ denote the positive singular values of $d\tilde{E}(t,\overline{\mathcal{T}})$. We have that $\left\| d\tilde{E}(t,\overline{\mathcal{T}})^{\dagger} \right\|_2 = {1}/{\sigma_{min}(t)}$ ($\left\| \cdot \right\|_2$ for a matrix denotes the induced norm corresponding to the euclidean norm), so that \begin{align*}
\left\| \delta \mathcal{T} \right\|_2&= \left\| d\tilde{E}(t,\overline{\mathcal{T}})^{\dagger}\cdot \delta x (t)\right\|_2 \\
& \leqslant \left\| d\tilde{E}(t,\overline{\mathcal{T}})^{\dagger} \right\|_2 \cdot \left\| \delta x (t)\right\|_2 \\
& \leqslant \frac{\left\| \delta x (t)\right\|_2}{\sigma_{min}(t)}, \end{align*} which concludes the proof. \end{proof}
\begin{remark} We have the relation that, for every vector of switching times $\mathcal{T}$ \begin{equation*} E(\tilde{E}(t,\mathcal{T}),\mathcal{T},t_f-t) = x_f. \end{equation*} Differentiating this equality with respect to $\mathcal{T}$, we have that, for all $\delta \mathcal{T}$ \begin{equation*} \frac{\partial E}{\partial x_0}(\tilde{E}(t,\mathcal{T}),\mathcal{T},t_f-t)\cdot d\tilde{E}(t,\mathcal{T})\cdot \delta \mathcal{T} + \frac{\partial E}{\partial \mathcal{T}}(\tilde{E}(t,\mathcal{T}),\mathcal{T},t_f-t)\cdot \delta \mathcal{T} = 0. \end{equation*} Replacing the second term by its value in \eqref{eq_perturbations_1}, it follows that \begin{equation*} \frac{\partial E}{\partial x_0}(\tilde{E}(t,\mathcal{T}),\mathcal{T},t_f-t)\cdot d\tilde{E}(t,\mathcal{T})\cdot \delta \mathcal{T} = \frac{\partial E}{\partial x_0}(\tilde{E}(t,\mathcal{T}),\mathcal{T},t_f-t)\cdot \delta x(t). \end{equation*}
It is easy to show that $\partial E/\partial x_0$ can be expressed as the resolvent of a linearized system. Therefore, the matrix $\partial E/\partial x_0 $ is invertible, and the equations \eqref{eq_perturbations_1} and \eqref{eq_perturbations_2} are equivalent. But solving \eqref{eq_perturbations_2} only requires computing the derivative of $\tilde{E}$. This is what we do in the following. \end{remark}
\begin{remark} \label{rq_leastsquares} Note that it might not always be possible to find a solution to the equation $d\tilde{E}(t,\overline{\mathcal{T}})\cdot \delta \mathcal{T} = \delta x(t)$. This may happen for instance if $t > t_{N-n+1}$, i.e., we do not have enough degrees of freedom left to absorb the perturbation $\delta x(t) \in \R^n$. However, we can still give a meaning to the equation $d\tilde{E} \cdot \delta \mathcal{T} = \delta x(t)$. We look for a solution of the least-squares problem: \begin{equation*}
\min_{\delta \mathcal{T} \in \R^N} \left\| d\tilde{E}(t,\overline{t}_1, \ldots, \overline{t}_N) \cdot \delta \mathcal{T} - \delta x (t)\right\|^2_2, \end{equation*} for which $\delta \mathcal{T} = d\tilde{E}(t,\overline{t}_1, \ldots, \overline{t}_N)^{\dagger} \cdot \delta x(t)$ is still the solution of minimal norm (see \cite{allaire2002algebre}). We see here emerging the idea that the number of switching times (i.e., degrees of freedom) left at time $t$ will be an important factor in tracking the system back towards the final point $x_f$. \end{remark}
\paragraph{Numerical algorithm.}
At time $t$, Equation \eqref{eq_recalage} provides us with a formula to adjust the control so that the perturbed trajectory eventually reaches $x_f$. But it certainly does not enable us to handle perturbations that occur after time $t$. In order to absorb perturbations all along the trajectory, we suggest the following algorithm: Let $\mathcal{T}$ be an initial control. Given an integer $s$ and a subdivision $0 < \tau_1 < \cdots < \tau_s < t_f$ of the interval $[0,t_f]$, we adjust the control at each $\tau_i$ for all $i \in \llbracket 1,s\rrbracket$. That is, for each $i \in \llbracket 1,s\rrbracket$, we measure the drift $\delta x (\tau_i)= x_{per}(\tau_i) - x_{ref}(\tau_i)$, and compute the differential of the backward end-point mapping $d\tilde{E}(\tau_i,\overline{t}_1, \ldots, \overline{t}_N)$. We deduce from \eqref{eq_recalage} that the correction to apply is then $\delta \mathcal{T} = d\tilde{E}(\tau_i,\overline{t}_1, \ldots, \overline{t}_N)^{\dagger} \cdot \delta x(\tau_i)$. We then update the control by considering the new vector of switching times $\mathcal{T} + \delta \mathcal{T}$.
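A minimal Python sketch of this closed-loop procedure is given below; \texttt{measure\_drift} and \texttt{backward\_jacobian} are problem-specific placeholders (the measurement of $\delta x(\tau_i)$ and the matrix $d\tilde{E}(\tau_i,\mathcal{T})$, here assumed to be returned with one column per switching time, the columns of already-passed switching times being zero).
\begin{verbatim}
import numpy as np

def track(T, taus, measure_drift, backward_jacobian):
    T = np.array(T, dtype=float)
    for tau in taus:                     # adjustment instants 0 < tau_1 < ... < tau_s < t_f
        dx = measure_drift(tau, T)       # delta x(tau) = x_per(tau) - x_ref(tau)
        dEt = backward_jacobian(tau, T)  # d tilde{E}(tau, t_1, ..., t_N)
        dT = np.linalg.pinv(dEt) @ dx    # minimal-norm correction, Eq. (eq_recalage)
        T_new = T + dT
        if np.all(np.diff(T_new) > 0):   # reject corrections that disorder the switchings
            T = T_new
    return T
\end{verbatim}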
\begin{remark} \label{rq_interchanging} When computing the correction $\mathcal{T} + \delta \mathcal{T}$, it may happen that the new switching times are not ordered, i.e., there exists some integer $ j \in \llbracket1,N-1\rrbracket$ such that $t_{j+1} < t_j$. In this case, we consider that the correction is not physically acceptable, and we reject it. (Note that in some cases, we may want to continue the integration of the system even if two switching times are not ordered. In that case, we can always use the last admissible control, where all the switching times are ordered.) \end{remark}
\begin{remark} The computation of the differential $d\tilde{E}(t,\overline{t}_1, \ldots, \overline{t}_N)$ is done via the integration of a system of ordinary differential equations, which can be done efficiently and quickly using numerical integrators. However, the size of the system (as well as the time required to compute the pseudo-inverse) directly depends on the number of switching times $N$ and on the state dimension $n$. \end{remark}
\section{Promoting robustness} \label{sec3}
Intuitively, we want to say that a control is robust whenever the correction $\delta \mathcal{T}$ required to absorb the perturbation $\delta x(t)$ is small. Since we have shown the estimate $\left\| \delta \mathcal{T} \right\|_2 \leqslant \left\| \delta x(t) \right\|_2 / \sigma_{min}(t)$, a robust trajectory is then one for which the values of $1/\sigma_{min}(t)$ remain small along the trajectory.
\begin{definition} We define the following cost, that we will use to characterize the robustness of a trajectory \begin{equation} C_r(t_1, \ldots, t_N) = \int^{t_N}_0{\frac{1}{\sigma_{min}(t)^2}\, dt}. \label{cout_robu} \end{equation} \end{definition}
\begin{remark} In the previous definition, the upper bound in the integral is $t_N$, because for $t > t_N$, the backward end-point mapping derivative $d\tilde{E}(t,t_1, \ldots,t_N)$ is not defined, and neither is $\sigma_{min}(t)$. In some situations, we may want robustness only up to some time $t^{\star} < t_N$. The previous definition would then become $\int^{t^{\star}}_0{{1}/{\sigma_{min}(t)^2}\,dt}$. \end{remark}
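The cost \eqref{cout_robu} can be approximated by sampling $t$; the sketch below is illustrative, reusing a \texttt{backward\_jacobian} helper as in the tracking sketch above and a simple left Riemann sum.
\begin{verbatim}
import numpy as np

def robustness_cost(T, backward_jacobian, n_samples=200):
    # Approximate int_0^{t_N} dt / sigma_min(t)^2 (T is assumed sorted).
    ts = np.linspace(0.0, T[-1], n_samples, endpoint=False)
    dt = T[-1] / n_samples
    total = 0.0
    for t in ts:
        sv = np.linalg.svd(backward_jacobian(t, T), compute_uv=False)
        total += dt / sv[sv > 1e-12].min() ** 2   # 1 / sigma_min(t)^2
    return total
\end{verbatim}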
In this section, we show how the switching times of a trajectory can be chosen to build one that is more robust. We also suggest a new way to design a trajectory, by adding redundant switching times that give us more degrees of freedom. Note also that we will start from a solution of an \emph{optimal} control problem, because it is of high importance in practice, but the method generally applies when starting from any control, as long as it satisfies the hypotheses $(H_1)$ and $(H_2)$. Starting from an initial control such that $E(t_1, \ldots, t_N) = x_f$, we look for \emph{redundant} switching times $(s_1, \ldots,s_l)$ such that $E(t_1, \ldots, t_N,s_1, \ldots,s_l) = x_f$, while minimizing the cost \eqref{cout_robu} that accounts for robustness: \[ C_r(t_1, \ldots, t_N,s_1, \ldots,s_l). \]
\subsection{An auxiliary optimization problem}
Let us consider a bang-bang trajectory (satisfying the hypothesis ($H_1$) and ($H_2$)) of the control system \eqref{dynamics}, optimal for the cost \eqref{cost}. That is, $\overline{u} = ((u_{10}, \ldots,u_{m0}),\left(\overline{t}_1,i_1\right), \ldots, \left(\overline{t}_N,i_N\right),t_f)$ is an optimal solution of the optimization problem \begin{equation} \begin{array}{ccc} \min_{(i_1, \ldots, i_N)} & \min_{(t_1, \ldots,t_N)} & C(t_1, \ldots,t_N). \\
& \text{s.t. } E(t_1, \ldots, t_N) = x_f \\ \end{array} \label{prob_ini} \end{equation} Let us emphasize the fact that reducing the control to its switching times enables us to reduce a problem in infinite dimension \begin{equation*} \begin{array}{cc} \min_{u \in L^{\infty}([0,t_f];\Omega)} & C(u) \\
\text{s.t. } E(u) = x_f \\ \end{array} \end{equation*} to a finite number of non-linear problems under non-linear constraints in finite dimension, provided we fix $N$, since we have set aside chattering trajectories.
In order to make the control more robust, we suggest solving the following problem. We fix the components of the control $(i_1, \ldots, i_N)$, and we introduce the cost that accounts for the robustness of a trajectory: \begin{equation*} \begin{array}{cc} \min_{(t_1, \ldots,t_N)} & \lambda_1 C(t_1, \ldots,t_N) + \lambda_2 C_r(t_1, \ldots,t_N), \\
\text{s.t. } E(t_1, \ldots, t_N) = x_f \\ \end{array} \end{equation*} where $\lambda_1$ and $\lambda_2$ are two parameters, chosen to give more or less importance to the different costs. For instance, if $\lambda_1 \gg \lambda_2$, the solution is close to the initial one $(\overline{t}_1, \ldots,\overline{t}_N)$.
\subsection{Redundancy creates robustness}
Let us consider a control $u = ((u_{10}, \ldots,u_{m0}),\left({t}_1,i_1\right), \ldots, \left({t}_N,i_N\right),t_f)$. In order to reduce the optimization space, we will consider in the following subsection that the initial control values $(u_{10}, \ldots,u_{m0})$, the components $(i_1, \ldots, i_N)$ and the final time $t_f$ are fixed, so we will forget them in the notations.
We propose here to go further in order to improve the robustness of the corresponding trajectory. We do so by adding needles to some components of the control. By needle, we mean a short impulse on one of the control components. Let us denote by $l$ the number of needles we are willing to add. This means that we look for additional switching times $[(s_1,s_2), \ldots,(s_{2l-1},s_{2l})]$ and components of the control $(j_1, \ldots, j_l)$, so that for all $i \in \llbracket 1,l \rrbracket$, $(s_{2i-1},s_{2i})$ are switching times for the $j_i$-th component of the control (see Figure \ref{fig_redundancy}). This gives us more degrees of freedom when trying to absorb perturbations $\delta x$ by moving the switching times $(\mathcal{T},\mathcal{S}) = (t_1, \ldots, t_N, (s_1,s_2), \ldots,(s_{2l-1},s_{2l}))$. Thus, we are solving the optimization problem \begin{equation} \begin{array}{ccc} \min_{(j_1, \ldots,j_l)} & \min_{(\mathcal{T},\mathcal{S})} & \lambda_1 C(\mathcal{T},\mathcal{S}) + \lambda_2 C_r(\mathcal{T},\mathcal{S}). \\
& \text{s.t. } E(\mathcal{T},\mathcal{S}) = x_f \\ \end{array} \label{prob_add_needles} \end{equation}
\begin{remark} If the original bang-bang control strategy $\bar{u}$ does not come from an optimization process, that is there is no cost $C$ associated with it, we can still consider problem \eqref{prob_add_needles} but with $\lambda_1 = 0$. \end{remark}
\begin{figure}
\caption{Principle of adding needles.}
\label{fig_redundancy}
\end{figure}
Let us denote by $\overline{\mathcal{T}}$ the solution of problem \eqref{prob_ini}, and by $(\mathcal{T}^{\star},\mathcal{S}^{\star})$ the solution of problem \eqref{prob_add_needles}. Then, we have that \begin{equation*} C(\overline{\mathcal{T}}) \leqslant C(\mathcal{T}^{\star}, \mathcal{S}^{\star}). \end{equation*} It means that the solution $(\mathcal{T}^{\star}, \mathcal{S}^{\star})$ is sub-optimal with respect to the initial cost $C$. However, this sub-optimality comes with a gain in terms of robustness. Besides, the loss of optimality (and therefore gain in robustness) can be controlled by the choice of the coefficients $\lambda_1$ and $\lambda_2$.
This problem is a mixed problem, with integer variables (the components $\left(j_1, \ldots, j_l\right)$), and continuous variables (the switching times $\left(t_1, \ldots, t_N, (s_1,s_2), \ldots,(s_{2l-1},s_{2l})\right)$). However, if the components are fixed, we only have to solve a non-linear problem subject to non-linear constraints in finite dimension \begin{equation} \begin{array}{cc} \min_{(\mathcal{T},\mathcal{S})} & \lambda_1 C(\mathcal{T},\mathcal{S}) + \lambda_2 C_r(\mathcal{T},\mathcal{S}). \\
\text{s.t. } E(\mathcal{T},\mathcal{S}) = x_f \\ \end{array} \label{prob_fixed_modes} \end{equation} We used an interior-point algorithm to solve \eqref{prob_fixed_modes}. In \cite{Antsaklis2000,Zhu2015}, gradient-based algorithms are shown to be effective to solve such problems, when the sequence of indices $\left(j_1, \ldots, j_l\right)$ is fixed. Therefore a ``na\"ive'' way to proceed, if $m$ denotes the number of components of the control, is to solve $m^l$ optimization problems, which is extremely costly if $m$ or $l$ is big. A compromise has to be found between the potential benefit in robustness and the computational cost. Such a compromise will however depend on the particular problem at hand, so we do not elaborate too much on this issue and give an example in Section \ref{sec4}. Let us cite \cite{Chyba2008,Chyba2009}, where the authors parametrize an optimal control problem (for the time-minimal and $L^1$ problem) with the switching times of the controls. They simplify its complex structure by fixing the number of switching times, and wonder how many switching times are required to obtain a cost close to the optimal one : the result is striking as 2 or 3 may be enough. However, they know from an \emph{a priori} study the value of the optimal $L^1$ or time-minimal cost, and therefore can stop adding switching times when reaching a given percentage of this optimal value of the criterion. In our problem, we do not know what is the optimal value of the criterion we identified to quantify the robustness of a trajectory. It becomes necessary to find another way to decide how many needles to add.
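For a fixed placement of the needles, problem \eqref{prob_fixed_modes}, together with a minimum-gap constraint between consecutive switching times (introduced below), can be passed to an off-the-shelf nonlinear programming solver. The following Python sketch is illustrative only and is not the implementation used for our experiments; \texttt{E\_map}, \texttt{cost\_C} and \texttt{cost\_Cr} are problem-specific callables, and SciPy's interior-point-type method \texttt{trust-constr} is used.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def robustify(TS0, E_map, cost_C, cost_Cr, x_f, lam1=1.0, lam2=1.0, eta=1e-3):
    objective = lambda TS: lam1 * cost_C(TS) + lam2 * cost_Cr(TS)
    reach = NonlinearConstraint(lambda TS: E_map(TS) - x_f, 0.0, 0.0)  # E(T,S) = x_f
    # Minimum gap eta between consecutive (re-ordered) switching times; the
    # sort is a simplification, the ordering could also be maintained explicitly.
    gap = NonlinearConstraint(lambda TS: np.diff(np.sort(TS)), eta, np.inf)
    res = minimize(objective, np.asarray(TS0, dtype=float),
                   constraints=[reach, gap], method="trust-constr")
    return res.x
\end{verbatim}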
One could consider tackling Problem \ref{prob_add_needles} directly, a combinatorial optimization problem (a class of problems known to be hard to solve). Recent years have seen the development of advanced numerical procedures to deal with the combinatorial nature of those problems at a reasonable computational cost. We give more details on this issue at the end of this section.
\begin{remark} Let us make here a remark on the ordering of the switching times. In the vector $(\mathcal{T},\mathcal{S})$ are stored the switching times ${t}_{i}$ and $s_i$ that represent the control ${u}$. Those switching times are not necessarily ordered during or after the optimization process, so let $\mathbb{T} = (\tau_1, \ldots,\tau_{N+2l})$ denote the ordered equivalent to $(\mathcal{T},\mathcal{S})$. So far, we have made the implicit assumption that when we perform the numerical integration of the system, the switching times are ordered: $\tau_{i+1} - \tau_{i} \geqslant 0$ for all $i \in \llbracket 0,N+2l-1 \rrbracket$. We recall that our goal is to absorb perturbations $\delta x$. As explained in Subsection \ref{backward_epm}, we compute at order one the correction to apply $\delta \mathbb{T} = d{E}(\mathbb{T})^{\dagger}\cdot \delta x$. At this point, we could have that $\mathbb{T} + \delta \mathbb{T}$ does not satisfy this ordering property. Then, we consider that $\mathbb{T} + \delta \mathbb{T}$ is not admissible, and an estimate like \eqref{estimate_deltat} would not hold. \label{rq_gap} \end{remark}
In the following, in order to guarantee that we do not have an interchanging of the switching times (at least for small perturbations), we add an additional constraint whilst elaborating the robustified trajectory $({u}(\cdot),{x}(\cdot))$ at \eqref{prob_add_needles}: \begin{equation} \begin{array}{ccc} {\tau}_{i+1} - {\tau}_i \geqslant \eta & \text{for all} & i \in \llbracket 0,N+2l-1 \rrbracket,\\ \end{array} \label{eq_gap} \end{equation} for some $\eta > 0$, where $\mathbb{T} = (\tau_1, \ldots, \tau_{N+2l})$ denotes the re-ordering of the vector $(\mathcal{T},\mathcal{S})$. In that way, we ensure that two consecutive switching times ($\mathcal{T}$ and $\mathcal{S}$ combined) are at least distant of $\eta$. Thus, if $\delta x$ is small enough, the elements of the vector $\mathbb{T} + d{E}(\mathbb{T})^{\dagger}\cdot \delta x $ remain in ascending order. Besides, such a constraint is often highly justified in practice, for instance if a physical system has to spend some minimum time $\eta$ before it switches to another mode. For example, in Section \ref{sec4}, the attitude control of a rigid body is studied. In real life, because of robustness issues and mechanical constraints, nozzles on a space launcher have indeed a minimum activation time.
\begin{remark} Let $t_f$ denote the final time. If $\eta$ is the minimal time between two switchings in \eqref{eq_gap}, then the total number of switchings $N + 2l$ has an upper bound of $\lfloor t_f/\eta \rfloor$. \end{remark}
The elaboration of a robust trajectory in \eqref{prob_add_needles} can be seen as an optimal control problem for a switched-mode dynamical system. A recent survey on switched systems can be found in \cite{Zhu2015}. This theory deals with control systems where the dynamics can only take a finite number of modes. To determine the control law, one has to determine the switching times, as well as the different modes of the system. If the modes are fixed (in our case, it means that the components $(i_1, \ldots, i_N, j_1, \ldots, j_l)$ are fixed), it is often called a timing-optimization problem; if not, a scheduling optimization problem. In \cite{Piccoli1999,Sussmann2000}, necessary conditions are derived for trajectories of hybrid systems considering a fixed sequence of modes of finite length (in our setting, it corresponds to Problem \eqref{prob_fixed_modes}). In \cite{Ali2014, Wardi2012}, the authors develop numerical algorithms to solve both the timing and the scheduling problems. Their techniques rely heavily on gradient-like methods. However, the latter problem is much more complex because of its discrete nature: indeed the procedure needs to account for both continuous and discrete control variables, and can therefore be seen as a combinatorial optimization problem. Note that the paper \cite{Ali2014} deals with dwell time constraints. It consists in imposing a threshold $\eta$ between two consecutive switching times, which is the constraint we introduced in \eqref{eq_gap}. Let us also mention other techniques to solve scheduling optimization problems, like zoning algorithms \cite{Shaikh2005}, or relaxation methods, where discrete variables are temporarily relaxed into continuous variables \cite{Bengea2005}.
\section{Numerical results} \label{sec4}
In order to illustrate the results of Sections \ref{sec2} and \ref{sec3}, we consider the problem of the attitude control of a rigid body. Let $\omega = (\omega_1,\omega_2, \omega_3)$ be the angular velocity of the body with respect to a frame fixed on the body. Introducing the inertia matrix $I$, Euler's equation for a rigid body subjected to torques $(b^1, \ldots, b^m)$ reads: \begin{equation*} I \dot{\omega} = I \omega \wedge \omega + \sum_{k=1}^m {b^k}. \end{equation*}
In the case when the axes of the body frame are the axes of inertia of the body, the matrix $I$ is diagonal: $I = \diag(I_1,I_2,I_3)$. The controlled Euler's equations can then be reduced to
\begin{equation*} \dot{\omega}(t) = f\left(\omega(t),u(t)\right), \end{equation*} where for $1 \leqslant k \leqslant m$, $u_k(t) \in \{0,1\}$ almost everywhere, and the function $f$ describing the dynamics reads: \begin{equation} f(\omega_1,\omega_2,\omega_3,u_1,u_2,u_3,u_4) = \left\{ \begin{array}{l} \alpha_1 \omega_2 \omega_3 + \sum^m_{k=1}{b^k_1 u_k} \\ \alpha_2 \omega_1 \omega_3 + \sum^m_{k=1}{b^k_2 u_k} ~,\\ \alpha_3 \omega_1 \omega_2 + \sum^m_{k=1}{b^k_3 u_k} \\ \end{array}\right. \label{eq_euler} \end{equation} with $\alpha_1 = (I_2 - I_3)/I_1$, $\alpha_2 = (I_3 - I_1)/I_2$ and $\alpha_3 = (I_1 - I_2)/I_3$. With a slight abuse of notation, we still denote by $b^k$ the normalized vector $(b^k_1/I_1,b^k_2/I_2, b^k_3/I_3)$.
The controllability of such a system has been studied in \cite{bonnard2006mecanique}. Let us also mention the papers \cite{Krstic1999,Outbib1992,Windeknecht1963}, which, in the special case of a rigid spacecraft, implement methods to stabilize the spacecraft towards the point $(0,0,0)$; once again, however, the controls used are not bang-bang. Note that \eqref{eq_euler} is a control-affine system, and therefore Remark \ref{rq_affine} applies.
In the following, we consider the numerical values $\alpha_1 = 1$, $\alpha_2 = -1$, $\alpha_3 = 1$, $b^1 = [2,1,0.3]$, $b^2 = [-2,-1,-0.3]$, $b^3 = [0,0,1]$ and $b^4 = [0,0,-1]$, and initial and final conditions $x_0 = (0,0,0) $ and $x_f = (0.4,-0.3,0.4)$.
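For reference, the controlled dynamics \eqref{eq_euler} with the numerical values above can be coded as follows; this is a minimal Python sketch (the function name and interface are ours, for illustration only).
\begin{verbatim}
import numpy as np

# Numerical values used in this section.
alpha = (1.0, -1.0, 1.0)
b = [np.array([2.0, 1.0, 0.3]), np.array([-2.0, -1.0, -0.3]),
     np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0])]

def euler_dynamics(omega, u, alpha=alpha, b=b):
    """Right-hand side f(omega, u) of the controlled Euler equations,
    with normalized torque vectors b^k and on/off controls u_k."""
    w1, w2, w3 = omega
    gyro = np.array([alpha[0] * w2 * w3,
                     alpha[1] * w1 * w3,
                     alpha[2] * w1 * w2])
    torque = sum(uk * bk for uk, bk in zip(u, b))
    return gyro + torque

# Example: all four controls on (the torques then cancel out).
print(euler_dynamics(np.array([0.1, 0.2, 0.3]), [1, 1, 1, 1]))
\end{verbatim}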
We start by building an optimal trajectory for the $L^1$ cost $\int^{t_f}_0{\sum_{j=1}^4 {|u_j(t)|dt}} + t_f$ (the presence of $t_f$ prevents us from obtaining a trajectory with infinite final time). The resolution of such a problem with an $L^1$ cost can be numerically challenging. Numerical methods in optimal control are often divided into two categories: direct methods and indirect methods. Whereas direct methods consist in a total discretization of the state and control spaces, indirect methods exploit the Pontryagin maximum principle (see \cite{Trelat2012} for a survey on numerical methods in optimal control). The aim of the following subsection is to explain briefly the principle of a continuation method.
\subsection{Computing the nominal trajectory}
The nominal trajectory, optimal for the $L^1$ cost, is computed with a continuation procedure. The idea of such a procedure is to first solve an ``easier'' problem, and to deform it step by step into the targeted problem. We introduce the continuation parameter $\lambda \in [0,1]$, and we consider the optimal control problem $(\mathcal{P}_{\lambda})$ of steering the system \eqref{eq_euler} from $x_0$ to $x_f$, by minimizing the cost \begin{equation*}
\lambda \int^{t_f}_0{\sum^4_{j=1}{|u_j(t)|^2\, dt}} + (1-\lambda) \int^{t_f}_0{\sum^4_{j=1}{|u_j(t)|\, dt}} + t_f. \end{equation*}
When $\lambda = 0$, we recover our original problem. For a given $\lambda \in [0,1]$, solving problem $(\mathcal{P}_{\lambda})$ amounts to finding the zeros of a shooting function that results from the application of the Pontryagin maximum principle. Solving a shooting problem is done with Newton-like methods. Such methods are highly sensitive to their initialization, which can be very difficult to obtain, especially in the case of the minimization of the $L^1$ norm $\int^{t_f}_0{|u(t)|dt}$. The continuation procedure is introduced to overcome this difficulty.
For $\lambda=1$, the cost is strictly convex in the controls, and reads \[
\int^{t_f}_0{\sum^4_{j=1}{|u_j(t)|^2\, dt}} + t_f, \] for which the initialization of the induced shooting method is much easier. Therefore, we solve a sequence of optimal control problems, for values of $\lambda$ decreasing from 1 to 0. The solution of the shooting problem for some $\lambda \in ]0,1]$ serves as the initialization of the next problem with $\lambda' < \lambda$.
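Schematically, the continuation procedure can be summarized by the following Python sketch, where \texttt{solve\_shooting} stands for a Newton-type solver of the shooting equations of $(\mathcal{P}_\lambda)$; it is a placeholder and not an actual routine of our implementation.
\begin{verbatim}
import numpy as np

def continuation(solve_shooting, z_init, lambdas=np.linspace(1.0, 0.0, 21)):
    """Solve a sequence of shooting problems P_lambda, reusing each solution
    as the initial guess for the next, smaller value of lambda."""
    z = z_init                    # unknowns of the shooting problem
    for lam in lambdas:
        z, converged = solve_shooting(lam, z)
        if not converged:
            raise RuntimeError("shooting failed at lambda = %.2f" % lam)
    return z                      # solution of the target L^1 problem (lambda = 0)
\end{verbatim}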
\subsection{Robustifying the nominal trajectory}
From this $L^1$-minimal trajectory, represented on Figure \ref{fig_add_needles}, with three switching times that we denote $(t_1,t_2,t_3)$, we build a new trajectory by solving problem \eqref{prob_add_needles} with 3 needles (i.e., $l=3$), $\lambda_1 = \lambda_2 = 1$, and $\eta = 0.05$ in Equation \eqref{eq_gap}. As explained in Remark \ref{rq_leastsquares}, it is worthwhile to have the additional switching times available as long as possible. That is, we force the additional switchings to occur after $t_3$. Keeping in mind Equation \eqref{eq_gap}, this constraint can be written: \[ \begin{array}{ccc} t_{i+1} - t_i \geqslant \eta~~ (\forall i \in \llbracket1,3\rrbracket) ,& s_{1}-t_{3} \geqslant \eta ,& s_{i+1} - s_i \geqslant \eta~~ (\forall i \in \llbracket1,6\rrbracket). \end{array} \] We find that the optimal triplet is $(j_1,j_2,j_3) = (1,4,2)$, for which we have $C = 0.77$ and $C_{r} = 2.22$. We found this optimal triplet by exploring the $4^3 = 64$ possibilities. We then used the heuristic that this solution would be a good starting point when looking for the solution with 4 needles (as it would have been too costly to examine the $4^4 = 256$ possibilities). However, we could not make the cost decrease significantly (the best cost we found was $C_r = 2.07$). This heuristic is very similar to what is used in branch-and-bound methods. For comparison, the optimal couple when adding only two needles is $(j_1,j_2) = (1,4)$, for which $C_r = 4.25$, and the optimal solution when adding only one needle is $j_1 = 2$, for which $C_r = 30.28$. Thus, we notice a substantial improvement when increasing the number of needles from 1 to 2 and from 2 to 3, whereas it seems less profitable to add a fourth one. We therefore stopped at 3 needles. The controls are displayed on Figure \ref{fig_add_needles}, and the components 1, 2 and 4, on which needles have been added, are represented in red.
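The selection of the needle components described above is a small combinatorial search. The sketch below illustrates the exhaustive enumeration over the $4^3$ triplets; \texttt{robust\_cost} is a hypothetical routine returning the cost $C_r$ obtained by solving \eqref{prob_add_needles} for a given choice of components.
\begin{verbatim}
from itertools import product

def best_needle_components(robust_cost, n_controls=4, n_needles=3):
    """Enumerate all tuples (j_1, ..., j_l) of control components and keep
    the one with the smallest robustness cost C_r."""
    best_tuple, best_cost = None, float("inf")
    for js in product(range(1, n_controls + 1), repeat=n_needles):
        cost = robust_cost(js)   # solve the robustified problem with needles on js
        if cost < best_cost:
            best_tuple, best_cost = js, cost
    return best_tuple, best_cost
\end{verbatim}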
\begin{figure}
\caption{Improving the robustness of a trajectory by adding needles.}
\label{fig_add_needles}
\end{figure}
In order to model perturbations, we allow the principal moments of inertia to vary, which causes the coefficients $\alpha_1$, $\alpha_2$ and $\alpha_3$ to vary as well. We thus consider the perturbed dynamics \begin{equation} f_{per}(t,\omega_1,\omega_2,\omega_3,u_1,u_2,u_3,u_4) = \left\{ \begin{array}{l} \alpha^{per,\varepsilon}_1(t) \omega_2 \omega_3 + \sum^m_{k=1}{b^k_1 u_k} \\ \alpha^{per,\varepsilon}_2(t) \omega_1 \omega_3 + \sum^m_{k=1}{b^k_2 u_k}~, \\ \alpha^{per,\varepsilon}_3(t) \omega_1 \omega_2 + \sum^m_{k=1}{b^k_3 u_k} \\ \end{array}\right. \label{eq_euler_per} \end{equation}
so that $\varepsilon$ models the size of the perturbation. More precisely, we take $\alpha_i^{per,\varepsilon}(t) = \alpha_i + \varepsilon h_i(t)$, where $h_i(\cdot)$ is some periodic function satisfying $\left\| h_i \right\|_{\infty} \leqslant 1$ (note that the exact expression of $h_i$ is not relevant here, as it is supposed to model any perturbation of the $\alpha_i$). We denote by $x_{per}$ the solution of the Cauchy problem \begin{align*} \dot{x}(t) &= f_{per}(t,x(t),u(t)), \\ x(0) &= x_0. \end{align*}
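As an illustration, the perturbed coefficients and dynamics \eqref{eq_euler_per} can be sketched as follows in Python; the particular periodic functions $h_i$ below are an arbitrary bounded choice, made only for the sake of the example.
\begin{verbatim}
import numpy as np

def perturbed_alpha(t, eps, alpha=(1.0, -1.0, 1.0)):
    """alpha_i + eps * h_i(t), with arbitrary bounded periodic h_i."""
    h = np.array([np.sin(2 * t), np.cos(3 * t), np.sin(5 * t)])
    return np.asarray(alpha) + eps * h

def perturbed_dynamics(t, omega, u, eps, b):
    """Right-hand side f_per of the perturbed Euler equations."""
    a1, a2, a3 = perturbed_alpha(t, eps)
    w1, w2, w3 = omega
    gyro = np.array([a1 * w2 * w3, a2 * w1 * w3, a3 * w1 * w2])
    return gyro + sum(uk * np.asarray(bk) for uk, bk in zip(u, b))
\end{verbatim}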
We denote by $x_{cor}$ the corrected trajectory computed with our algorithm. We show, on Figure \ref{fig_tracking}, the three trajectories, for $\varepsilon = 0.78$ and a cost $C_{r} = 2.22$. We can see the perturbed trajectory $x_{per}$ drifting away from the reference trajectory $x_{ref}$ and away from the final point $x_f$, whereas the corrected trajectory $x_{cor}$ eventually reaches a point very close to $x_f$. Actually, for the trajectories represented on Figure \ref{fig_tracking}, we have that $\left\| x_{cor}(t_f) - x_f \right\| / \left\|x_f\right\| = 5.5 \times 10^{-3}$, whereas $\left\| x_{per}(t_f) - x_f \right\| / \left\|x_f\right\| = 1.3 \times 10^{-1}$. Our algorithm has indeed been able to adjust the perturbed trajectory back towards $x_f$.
One may wonder how this method behaves with respect to the choice of $\varepsilon$. As explained in Remark \ref{rq_interchanging}, we stop if two switching times are interchanged, that is, if $\delta \mathbb{T}$ is too big, since the initial vector of switching times satisfies the gap property \eqref{eq_gap}. Strictly speaking, this is not always the case, as we could have a ``big'' correction that does not change the ascending order of the switching times, for instance if we shift all the switching times in the same direction. However, we observe experimentally that the cost $C_{r}$ \emph{has an impact on the size of the perturbation we are able to absorb}.
We build several trajectories, to which we apply our algorithm for increasing values of $\varepsilon$, until the algorithm fails as explained in Remark \ref{rq_interchanging}, at some value $\varepsilon_{\max}$. We plot on Figure \ref{fig_size_perturbations} the value of $\varepsilon_{\max}$ with respect to the cost $C_{r}$ (that is, for a given cost $C_r$, $\varepsilon_{\max}$ is the smallest value of $\varepsilon$ for which an interchanging of switching times occurs). Even though the curve is not monotonically decreasing (for the reason explained above), we can see that \emph{having a low cost $C_{r}$ enables us to absorb bigger perturbations}.
\begin{figure}
\caption{Size of the maximal perturbation absorbed with respect to the robustness of a trajectory}
\label{fig_size_perturbations}
\end{figure}
\begin{figure}
\caption{Reference, perturbed and corrected trajectories for $\varepsilon = 0.78$, $C_{r} = 2.22$.}
\label{fig_tracking}
\end{figure}
On Figure \ref{fig_tracking_results}, we show the relative error $\|x(t_f)-x_f\|/\|x_f\|$ for the perturbed trajectory $x_{per}$ and the corrected trajectory $x_{cor}$, for several values of $\varepsilon$. Since we apply first-order corrections, our method performs best for small values of $\varepsilon$, but it also gives very satisfactory results for larger values of $\varepsilon$.
\begin{figure}
\caption{Tracking results for several values of $\varepsilon$.}
\label{fig_tracking_results}
\end{figure}
\section{Conclusion}
Starting from the expansion of the end-point mapping with respect to a needle-like variation, we have shown in this paper how redundant switching times can be added in order to make a control more robust, for general control systems of the form $\dot{x}(t) = f(t,x(t),u(t))$. These additional switching times can be seen as extra degrees of freedom meant to help us absorb perturbations. A potential application is to start from a bang-bang solution of an optimal control problem, which is usually not robust, and make it more robust. The gain in robustness then compensates for the loss in optimality.
In the presence of a perturbation $\delta x$, the correction to apply to the switching times is the solution of an equation $dE \cdot \delta \mathcal{T} = \delta x$. It is natural to solve this equation while shifting the switching times as little as possible. The least-squares formulation is then the appropriate setting to find the solution of minimal (Euclidean) norm of the previous equation; it is given by $\delta \mathcal{T} = dE^{\dagger} \cdot \delta x$, for which we have the norm estimate $\left\| \delta \mathcal{T} \right\|_2 \leqslant \left\| \delta x \right\|_2/\sigma_{min}$. This enabled us to identify the following measure of robustness: \begin{equation*} \int{\frac{1}{\sigma_{min}(t)^2}\, dt}. \end{equation*}
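As an illustration of this correction and of the robustness measure, the following Python sketch relies on the Moore--Penrose pseudo-inverse and a singular value decomposition; the interfaces are ours and purely indicative.
\begin{verbatim}
import numpy as np

def minimal_norm_correction(dE, dx):
    """Minimal Euclidean-norm solution of dE . dT = dx."""
    return np.linalg.pinv(dE) @ dx   # satisfies ||dT|| <= ||dx|| / sigma_min

def robustness_measure(dE_of_t, times):
    """Approximate the integral of 1 / sigma_min(t)^2 on a time grid."""
    sig = np.array([np.linalg.svd(dE_of_t(t), compute_uv=False).min()
                    for t in times])
    return np.trapz(1.0 / sig**2, times)
\end{verbatim}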
The numerical example studied in Section \ref{sec4} is academic, and was used to legitimize the theoretical ideas explained previously. In future work, we aim at applying the method to the complete (and more complex) attitude control system of a three-dimensional rigid body, for which we wish to control the angular velocity as well as the orientation with respect to a fixed reference frame. Three angles parametrizing the orientation of the body will then be added to the three velocity variables. Thus, a challenge will come from the dimension of the state space (six), as well as from the potentially larger number of needle-like variations required to robustify a trajectory.
\appendix \section{Proof of Proposition \ref{prop_differentiability_epm}}
In order to prove the differentiability of the end-point mapping, we start with the differentiability with respect to one component. The proof relies heavily on the expansion \eqref{eq_var}, which we recall first.
\begin{lemma} \label{lemme_aiguilles} Let $t_1 \in [0,t_f[$, and let $u_{\pi_1}(\cdot)$ be a needle-like variation of $u(\cdot)$, with $\pi_1=(t_1,\delta t_1,u_1)$. Then \begin{equation*}
x_{\pi_1}(t_f) = {x}(t_f) + \left| \delta t_1 \right| v_{\pi_1}(t_f) + \mathrm{o}(\delta t_1), \end{equation*} where $v_{\pi_1}(\cdot)$ is the solution of a Cauchy problem on $[t_1,t_f]$ \begin{align*} \dot{v}_{\pi_1}(t) & = \frac{\partial f}{\partial x}(t, {x}(t), {u}(t))v_{\pi_1}(t),\\ v_{\pi_1}(t_1) & = f(t_1, {x}(t_1),u_1) - f(t_1, {x}(t_1), {u}(t_1)). \end{align*} \end{lemma}
\begin{proposition} We denote by $u$ the control $(t_1, \ldots, t_N,t_f)$ and by $x(\cdot)$ the associated trajectory of the control system. Let $\delta t_1 \in \R$ be small enough. Then \begin{equation*} E(t_1+\delta t_1, t_2, \ldots, t_N, t_f) = E(t_1, \ldots, t_N, t_f) + \delta t_1 \cdot v_1(t_f) + o(\delta t_1), \end{equation*} where $v_1(\cdot)$ is the solution of the Cauchy problem on $[t_1,t_f]$: \begin{align*} \dot{v}_1(t) &= \frac{\partial f}{\partial x}(t,x(t),u(t))v_1(t), \\ v_1(t_1) &= \left\{ \begin{tabular}{rl} $f(t_1,x(t_1),(\ldots, a_{i_1}, \ldots)) - f(t_1,x(t_1),u(t_1^+))$ & if $u_{i_1}$ switches from $a_{i_1}$ to $b_{i_1}$, \\ $f(t_1,x(t_1),(\ldots, b_{i_1}, \ldots)) - f(t_1,x(t_1),u(t_1^-))$ & if $u_{i_1}$ switches from $b_{i_1}$ to $a_{i_1}$. \end{tabular} \right. \end{align*} \end{proposition}
\begin{proof}
\begin{figure}
\caption{Shifting an opening time is equivalent to adding a needle.}
\label{aiguille_demo}
\end{figure} Assume that at time $t_1$ the control $u_{i_1}$ switches from $a_{i_1}$ to $b_{i_1}$, and that $\delta t_1 > 0$. Let us define the needle-like variation $\pi = (t_1,\delta t_1, a_{i_1})$ for the $i_1$-th component of the control. Then, the control $u_{\pi}$ is represented by the vector $(t_1+\delta t_1, \ldots, t_N, t_f)$ (Figure \ref{aiguille_demo}): adding the needle-like variation $\pi$ to the $i_1$-th component, with value $a_{i_1}$ and length $\delta t_1$, is equivalent to shifting the opening time to $t_1+\delta t_1$. Thus, we have $u(t_1^+)_{i_1} = b_{i_1}$ and $u_{\pi}(t_1^+)_{i_1} = a_{i_1}$. Hence, according to Lemma \ref{lemme_aiguilles}, we obtain \begin{equation} \label{var_pos} x_{\pi}(t_f) = x(t_f) + \delta t_1 \cdot v_1(t_f) + o(\delta t_1), \end{equation} where $v_1(\cdot)$ is the solution of the Cauchy problem: \begin{align*} \dot{v}_1(t) &= \frac{\partial f}{\partial x}(t,x(t),u(t))v_1(t), \\ v_1(t_1) &= f(t_1,x(t_1),u_{\pi}(t_1^+)) - f(t_1,x(t_1),u(t_1^+)) \\
&= f(t_1,x(t_1), (\ldots, a_{i_1}, \ldots)) - f(t_1,x(t_1), (\ldots, b_{i_1}, \ldots)). \end{align*} (Between $u_{\pi}(t_1^+)$ and $u(t_1^+)$, only the $i_1$-th component differs.)
If $\delta t_1 <0$, define the variation $\pi = (t_1,\delta t_1, b_{i_1})$ for the $i_1$-th component of the control. Then again, the control $u_{\pi}$ is represented by the vector $(t_1+\delta t_1, \ldots, t_N, t_f)$ (Figure \ref{aiguille_demo}). Thus, we have $u(t_1^-)_{i_1} = a_{i_1}$ and $u_{\pi}(t_1^-)_{i_1} = b_{i_1}$. Thanks to Lemma \ref{lemme_aiguilles}, we obtain \begin{equation} \label{var_neg} x_{\pi}(t_f) = x(t_f) - \delta t_1 \cdot w_1(t_f) + o(\delta t_1), \end{equation} where $w_1(\cdot)$ is the solution of the Cauchy problem: \begin{align*} \dot{w}_1(t) &= \frac{\partial f}{\partial x}(t,x(t),u(t))w_1(t), \\ w_1(t_1) &= f(t_1,x(t_1),u_{\pi}(t_1^-)) - f(t_1,x(t_1),u(t_1^-)) \\
&= f(t_1,x(t_1), (\ldots, b_{i_1}, \ldots)) - f(t_1,x(t_1), (\ldots, a_{i_1}, \ldots)) \\
&= -v_1(t_1). \end{align*} Thus, by uniqueness we have $w_1 = -v_1$, and from \eqref{var_pos} and \eqref{var_neg}, we obtain: \begin{equation*} x_{\pi}(t_f) = x(t_f) + \delta t_1 \cdot v_1(t_f) + o(\delta t_1). \end{equation*}
We can proceed in the exact same way if, at $t_1$, the control $u_{i_1}$ switches from $b_{i_1}$ to $a_{i_1}$. \end{proof}
The general result of Proposition \ref{prop_differentiability_epm} follows by an immediate iteration.
\renewcommand{\abstractname}{Acknowledgements} \begin{abstract}
This study has been performed in the framework of the CNES Launchers Research \& Technology program. \end{abstract}
\end{document}
\begin{document}
\title[Uniformity in association schemes]{Uniformity in association schemes and coherent configurations: cometric Q-antipodal schemes and linked systems}
\author[Edwin R. Van Dam]{Edwin R. van Dam} \address{Department of Econometrics and Operations Research, Tilburg University, PO Box 90153, 5000 LE Tilburg, The Netherlands} \email{[email protected]}
\author{William J. Martin} \address{Department of Mathematical Sciences and Department of Computer Science, Wor\-ces\-ter Polytechnic Institute, 100 Institute Rd, Worcester, MA 01609, USA} \email{[email protected]} \thanks{The second author was supported in part by NSA grant number H98230-07-1-0025.\\ \indent This version is published in Journal of Combinatorial Theory, Series A 120 (2013), 1401--1439.}
\author{Mikhail Muzychuk} \address{Department of Mathematics, Netanya Academic College, University St. 1, Netanya 42365, Israel} \email{[email protected]}
\subjclass[2010]{Primary 05E30, Secondary 05B25, 05C50, 51E12}
\dedicatory{Dedicated to the memory of Donald G. Higman}
\keywords{cometric association scheme, imprimitivity, Q-antipodal association scheme, uniform association scheme, linked system, coherent configuration, strongly regular graph decomposition.}
\maketitle
\begin{abstract}
Inspired by some intriguing examples, we study
uniform association schemes and uniform coherent configurations, including cometric Q-antipodal association
schemes. After a review of imprimitivity, we show that an imprimitive
association scheme is uniform if
and only if it is dismantlable, and we cast these schemes in
the broader context of certain --- uniform --- coherent
configurations. We also give a third characterization of uniform
schemes in terms of the Krein parameters, and derive
information on the primitive idempotents of such a scheme.
In the second half of the paper, we apply these results to
cometric association schemes.
We show that each such scheme is uniform if and only if it
is Q-antipodal, and derive results on the parameters of the
subschemes and dismantled schemes of cometric Q-antipodal schemes. We revisit
the correspondence between uniform indecomposable three-class
schemes and linked systems of symmetric designs, and show that
these are cometric Q-antipodal. We obtain a characterization of
cometric Q-antipodal four-class schemes in terms of only a few
parameters, and show that any strongly
regular graph with a (``non-exceptional") strongly regular decomposition gives rise
to such a scheme. Hemisystems in generalized quadrangles provide
interesting examples of such decompositions. We finish with a
short discussion of five-class schemes as well as a
list of all feasible parameter sets for cometric Q-antipodal
four-class schemes with at most six fibres and fibre size at
most 2000, and describe the known examples.
Most of these examples are related to groups, codes, and
geometries.
\end{abstract}
\section{Introduction}
Motivated by the search for cometric (Q-polynomial) association schemes, we study uniform association schemes. Cometric association schemes are the ``dual version'' of distance-regular graphs (metric schemes), and the latter are well-studied objects, cf. \cite{bcn, dkt12}. Classical metric schemes such as Hamming schemes and Johnson schemes are in fact also cometric. Bannai and Ito \cite[p.\ 312]{banito} conjectured that for large enough $d$, a primitive $d$-class scheme is metric if and only if it is cometric. Partly because of this conjecture, the topic of cometric association schemes was studied mainly in connection to distance-regular graphs, at least until the end of last century. An exception to this is the work of Delsarte \cite{del} (and others building on this) who showed the importance of cometric schemes in design theory.
This slowly changed when De Caen and Godsil raised the challenging problem of constructing cometric schemes that are not metric or duals of metric schemes (cf. \cite[p.\ 234]{godsil}, \cite[Acknowledgments]{mmw}). Around the same time, Suzuki derived fundamental results on imprimitive cometric schemes \cite{suzimprim} and on cometric schemes with multiple Q-polynomial orderings \cite{suztwoq}, but examples of the above type were still missing. In the last few years, however, there has been considerable activity in the area, with the first new constructions of cometric (but not metric) schemes given by Martin, Muzychuk, and Williford \cite{mmw}. For a recent overview of results on cometric schemes we refer to the survey on association schemes by Martin and Tanaka \cite{mtanaka}. Very recent is the work of Kurihara and Nozaki \cite{Kur2011T, KN2012JCTA}, Penttila and Williford \cite{penwil}, and Suda \cite{suda3, suda1, suda2, Suda2012JCTA, Suda2012pre}.
Meanwhile, in \cite{HigmanCA}--\cite{Huninform}, Higman obtained numerous results on imprimitive association schemes and coherent configurations. In his paper on four-class schemes and triality \cite{Htriality} and also in an unpublished manuscript \cite{Huninform}, he introduced the concept of uniformity of an imprimitive scheme, and he mentioned several examples of such uniform schemes. It turns out that many of these examples are cometric Q-antipodal. Inspired by this, we work out the concept of uniformity, and apply it to cometric Q-antipodal schemes.
This paper is organized as follows. We finish this introduction with an intriguing introductory example: the linked system of partial $\lambda$-geometries that is related to the Hoffman-Singleton graph. This example gives rise to a cometric Q-antipodal association scheme, and illustrates many of the interesting features we will consider in the paper. In Section \ref{Sec:background}, we remind the reader of basic background material on association schemes, focusing in particular on the natural subschemes and quotient schemes of an imprimitive association scheme. The main results for the first half of the paper are to be found in Sections \ref{sec:uniform} and \ref{Sec:coco}. We first show in Section \ref{Subsec:dismunif} that the dismantlability property introduced in \cite{mmw} is implied by Higman's uniformity property \cite{Huninform}. In order to establish the reverse implication, we need to consider a fission of our uniform association scheme whose adjacency algebra is necessarily non-commutative. So we introduce coherent configurations at this point to draw out the deeper structure that occurs here. Only at the level of this more detailed structure do we see the full equivalence of the dismantlable and uniform properties in Theorem \ref{dismunif}. We finish the first half of the paper with another characterization of the same phenomenon in Section \ref{Subsec:QHigman}, this time cast in terms of Krein parameters only. We introduce Q-Higman schemes and show that these, too, are equivalent to uniform schemes. To place the main concepts discussed here in perspective, we summarize them in the Venn diagram of Figure \ref{venn}.
The second half of the paper returns to the cometric case and explores the implications of the results discussed above for cometric Q-antipodal schemes. In Section \ref{sec:cometricQ}, as in Sections \ref{Sec:background} and \ref{sec:uniform}, we strive to make the paper fairly self-contained; we include all definitions that are not available in the standard literature. We show that each cometric scheme is uniform if and only if it is Q-antipodal, and derive results on the parameters of the subschemes and dismantled schemes of cometric Q-antipodal schemes. This general discussion of cometric Q-antipodal schemes is followed by three more detailed sections focusing on such association schemes with a small number of classes. In Section \ref{sec:threeclass}, we show that uniform indecomposable three-class schemes are always cometric Q-antipodal, and that these correspond naturally to linked systems of symmetric designs. In Section \ref{sec:four}, we study the more complicated case of four-class schemes. We obtain a characterization of cometric Q-antipodal four-class schemes in terms of just a few of their parameters, and show that any strongly regular graph with a (``non-exceptional") strongly regular decomposition gives rise to such a scheme. An exciting special case of recent interest is that of hemisystems in generalized quadrangles. To facilitate future work on such problems, we generate a list of all feasible parameter sets for cometric Q-antipodal four-class schemes with at most six fibres and fibre size at most 2000, and describe the known examples from this table. In the short Section \ref{sec:five}, we mention some examples of five-class schemes that are cometric Q-antipodal. The final section, Section \ref{Sec:misc}, collects some miscellaneous remarks.
As background we refer to Cameron \cite{cameronbook} and Higman \cite{HigmanCA} for coherent configurations, and to Bannai and Ito \cite{banito}, Brouwer, Cohen, and Neumaier \cite{bcn}, Godsil \cite{godsil}, and Martin and Tanaka \cite{mtanaka} for association schemes.
\begin{figure}
\caption{Venn diagram of relevant types of association schemes}
\label{venn}
\end{figure}
\subsection{A linked system of partial $\lambda$-geometries related to the Hoffman-Singleton graph: an instructive example}\label{HOSI}
The maximum size of a coclique in the Hoffman-Singleton graph is 15. There are 100 cocliques of this size, and it is known that one can define a bipartite cometric distance-regular graph $\Gamma$ with diameter four and valency 15 on these 100 cocliques by calling two cocliques adjacent whenever they intersect in eight vertices, cf. \cite[p.\ 393]{bcn}. Miraculously, the distance-four graph $\Gamma_4$ of this graph forms a Hoffman-Singleton graph on each part of the bipartition. Moreover, the union of $\Gamma$ and $\Gamma_4$ is the so-called Higman-Sims graph. In fact, in this way it is clear that the Higman-Sims graph can be decomposed into two Hoffman-Singleton graphs; here we have a strongly regular decomposition of a strongly regular graph, in the sense of Haemers and Higman \cite{HH}. The incidence structure that $\Gamma$ induces between the two parts of the bipartition is a so-called strongly regular design as defined by Higman \cite{Hsrd}, and more specifically a partial $\lambda$-geometry as defined by Cameron and Drake \cite{camdrake}. Building on a description of the Hoffman-Singleton graph by Haemers \cite{Hthesis}, Neumaier \cite{neumaier} describes this partial $\lambda$-geometry --- and hence the graph $\Gamma$ --- using the points, lines, and planes of $PG(3,2)$. So far, so good.
Neumaier goes on to describe how $\Gamma$ can be constructed in the Leech lattice. Using the group $2\cdot U_3(5) \cdot S_3$, he finds three types of 50 vectors each, and between each two types of 50 vectors the above partial $\lambda$-geometry. Moreover, these geometries are linked: we have a linked system of partial $\lambda$-geometries.
What is going on combinatorially is that one can extend the distance-regular graph $\Gamma$ by the 50 vertices of the Hoffman-Singleton graph, by calling a coclique adjacent to a vertex whenever the coclique contains the vertex. This gives a 30-regular graph on 150 vertices, and it generates a uniform imprimitive four-class association scheme. This association scheme turns out to be cometric too (but it is not metric); in fact it is Q-antipodal with three fibres of size 50. Here (again) one of the relations forms a Hoffman-Singleton graph on each fibre, and between each pair of fibres is the incidence structure of a partial $\lambda$-geometry (strongly regular design).
One natural question is whether you can throw in another 50 vertices, and get yet another cometric association scheme. We address this specific case in Section \ref{smallcases}, and give a general bound on the number of fibres in Section \ref{Subsec:absbound}.
Higman also gives the above example in his paper on four-class imprimitive schemes \cite{Htriality}, and in his unpublished manuscript on uniform schemes \cite{Huninform}. This fairly small example illustrates most of the central features considered in this paper and, in our view, the attractive interplay of combinatorial subjects that one sees in the study of cometric Q-antipodal association schemes.
\section{Association schemes} \label{Sec:background}
Our goal in this section is to review briefly the basic definitions from the theory of association schemes that we will need and to summarize some necessary material from the theory of imprimitive schemes. We defer our review of coherent configurations to Section \ref{Sec:coco} since their role will become clear at that point in the narrative.
\subsection{Definitions}
A (symmetric) $d$-class association scheme $(X,\mathcal{R})$ consists of a finite set $X$ of size $v$ and a set $\mathcal{R}$ of relations on $X$ satisfying \begin{itemize} \item $\mathcal{R}=\{R_0,\ldots, R_d\}$ is a partition of $X \times X$;
\item $R_0=\Delta_X:=\{(x,x)| x \in X\}$ is the identity relation;
\item $R_i^\top = R_i$ for each $i$, where $R_i^\top:=\{(x,y) | (y,x) \in R_i\}$; \item there exist integers $p_{ij}^h$ such that
$$ \left| \left\{ z \in X \, | \, (x,z) \in R_i \ {\mbox {\rm and}} \ (z,y) \in R_j\right\}
\right| = p_{ij}^h$$ whenever $(x,y) \in R_h$, for each $i,j,h \in \{0,\ldots , d\}$. \end{itemize} The integers $p_{ij}^h$ are called the intersection numbers of the scheme.
The adjacency matrix $A_R$ of a relation $R$ on $X$ is a $v\times v$ $(0,1)$-matrix defined by $(A_R)_{xy}=1$ if $(x,y)\in R$, and zero otherwise. In this case, we abbreviate by $A_i:=A_{R_i}$ the adjacency matrix of relation $R_i$
and consider $\mathcal{A}:=\langle A_i|i=0,\dots,d \rangle$. Then this vector space is a $(d+1)$-dimensional commutative algebra of symmetric matrices; this is called the Bose-Mesner algebra of the association scheme. Such an algebra admits a basis of pairwise orthogonal primitive idempotents (a nonzero idempotent $E$ of $\mathcal{A}$ is called primitive if $AE$ is proportional to $E$ for each $A\in\mathcal{A}$). We denote these by $E_0, E_1, \ldots, E_d$ with the convention that $E_0 = \frac{1}{v}J$ where $J= \sum_i A_i$ is the all-ones matrix. The first and second eigenmatrices of the scheme are denoted by $P$ and $Q$, respectively, and are defined by the change-of-basis equations $$ A_i = \sum_j P_{ji} E_j \qquad \text{and} \qquad E_j = \frac{1}{v} \sum_i Q_{ij} A_i.$$
The algebra $\mathcal{A}$ is also closed under entrywise (Schur-Hadamard) multiplication $\circ$ of matrices because $A_i \circ A_j = \delta_{ij} A_i$. (We call the $(0,1)$-matrices $A_i$ the primitive Schur idempotents of $\mathcal{A}$.) The (nonnegative) Krein parameters (or dual intersection numbers) $q_{ij}^h$ are the structure constants for this multiplication with respect to the basis of primitive idempotents: $$ E_i \circ E_j = \frac{1}{v} \sum_h q_{ij}^h E_h . $$ We abbreviate $v_i := P_{0i}=p^0_{ii}$ and call this the $i^{\rm th}$ valency; likewise, $m_j := Q_{0j}=q^0_{jj}$ is called the $j^{\rm th}$ multiplicity of the scheme.
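Before turning to metric and cometric schemes, we note that the intersection numbers of a concrete scheme are easily recovered from its adjacency matrices, since the $A_h$ have disjoint supports. The following Python sketch (illustrative only, and not tied to any particular software package) does this for the two-class scheme of the pentagon.
\begin{verbatim}
import numpy as np
from itertools import product

def intersection_numbers(A):
    """Recover p_{ij}^h from A_i A_j = sum_h p_{ij}^h A_h for the
    adjacency matrices A = [A_0, ..., A_d] of a symmetric scheme."""
    d = len(A) - 1
    p = np.zeros((d + 1, d + 1, d + 1), dtype=int)
    for i, j in product(range(d + 1), repeat=2):
        prod_ij = A[i] @ A[j]
        for h in range(d + 1):
            entries = prod_ij[A[h] == 1]
            assert len(set(entries.tolist())) == 1  # constant on relation R_h
            p[i, j, h] = entries[0]
    return p

# The pentagon: R_1 is adjacency in the 5-cycle, R_2 is its complement.
C5 = np.array([[0,1,0,0,1],[1,0,1,0,0],[0,1,0,1,0],[0,0,1,0,1],[1,0,0,1,0]])
A = [np.eye(5, dtype=int), C5,
     np.ones((5, 5), dtype=int) - np.eye(5, dtype=int) - C5]
print(intersection_numbers(A)[1, 1])   # p_{11}^h = (2, 0, 1)
\end{verbatim}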
\subsection{Metric schemes and cometric schemes}
The association scheme $(X,\mathcal{R})$ is called metric (or ``P-polynomial'') if there exists an ordering $R_0,R_1,\ldots,R_d$ of the relations for which \begin{itemize}
\item $p_{ij}^h = 0$ whenever $0 \le h < |i-j|$ or $i + j < h $, and \item $p_{ij}^{i+j} > 0$ whenever $p_{ij}^{i+j}$ is defined. \end{itemize} An ordering with respect to which these properties hold is called a P-polynomial ordering. In this case, $R_i$ can be interpreted as the distance-$i$ relation in the simple graph $(X,R_1)$ which is necessarily distance-regular. Metric schemes with given P-polynomial orderings are in one-to-one correspondence with distance-regular graphs.
The association scheme $(X,\mathcal{R})$ is called cometric (or ``Q-polynomial'') if there exists an ordering of the primitive idempotents $E_0,E_1,\ldots,E_d$ for which \begin{itemize}
\item $q_{ij}^h = 0$ whenever $0 \le h < |i-j|$ or $i + j < h$, and \item $q_{ij}^{i+j} > 0$ whenever $q_{ij}^{i+j}$ is defined. \end{itemize} It is well known (cf. \cite[Prop. 2.7.1]{bcn}) that to check that a scheme is cometric it suffices to check these properties for $i=1$. An ordering with respect to which these hold is called a Q-polynomial ordering, and $E_1$ is called a Q-polynomial generator. There is no known simple combinatorial or geometric interpretation of the cometric property. Suzuki \cite{suztwoq} showed that, while it is possible to have two distinct Q-polynomial orderings, there can be no more than two such orderings for a given association scheme, with the exception of the cycles. Several important families of association schemes, such as the Hamming schemes and Johnson schemes, are both cometric and metric. But our study here does not assume the metric property at all.
Let $c_i^*:=q^i_{1,i-1}, a_i^*:=q^i_{1i}$, and $b_i^*:=q^i_{1,i+1}$. Then $c_i^* + a_i^* + b_i^* = q^0_{11}$ and the Krein array of the cometric association scheme is defined as $$\{b_0^*,b_1^*,\dots,b_{d-1}^*;c_1^*,c_2^*,\dots,c_d^*\}.$$ Using the Krein array, we define a sequence of orthogonal polynomials $q_j$, $j=0,1,\dots,d+1$ by $q_0(x)=1$, $q_1(x)=x$, and the three-term recurrence $xq_j(x)=c_{j+1}^*q_{j+1}(x)+a_j^*q_j(x)+b_{j-1}^*q_{j-1}(x)$, where we let $c_{d+1}^*:=1$. It follows that $vE_j=q_j(vE_1)$, $j=0,1,\dots,d$ where matrix multiplication is entrywise (and hence the empty product is $J$). Moreover, because $v E_1 \circ E_d = b_{d-1}^* E_{d-1} + a_d^* E_d$, we have that the roots of $q_{d+1}(x)$ are precisely $Q_{i1}$ for $i=0,\ldots,d$. It is now easy to see that, whenever $E_1$ is a Q-polynomial generator for the Bose-Mesner algebra, column one of the matrix $Q$ has $d+1$ distinct entries.
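To make the three-term recurrence explicit, the following Python sketch (illustrative only) builds the polynomials $q_j$ from a given Krein array, using $a_j^* = q^0_{11} - b_j^* - c_j^*$ with the conventions $c_0^* = 0$ and $b_d^* = 0$.
\begin{verbatim}
import numpy as np

def krein_polynomials(bstar, cstar):
    """Polynomials q_0, ..., q_{d+1} from the Krein array
    {b_0*, ..., b_{d-1}*; c_1*, ..., c_d*}, with c_{d+1}* := 1."""
    d = len(cstar)
    m1 = bstar[0]                                     # q_{11}^0 = m_1 = b_0*
    astar = [m1 - (bstar[j] if j < d else 0) - (cstar[j - 1] if j > 0 else 0)
             for j in range(d + 1)]
    cfull = list(cstar) + [1.0]                       # cfull[j] = c_{j+1}*
    x = np.poly1d([1.0, 0.0])
    q = [np.poly1d([1.0]), x]                         # q_0 = 1, q_1 = x
    for j in range(1, d + 1):
        q.append((x * q[j] - astar[j] * q[j] - bstar[j - 1] * q[j - 1]) / cfull[j])
    # The roots of q_{d+1} are the d+1 distinct entries Q_{i1} of column one
    # of Q, and q_j(m_1) equals the multiplicity m_j.
    return q
\end{verbatim}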
\subsection{Imprimitive schemes} \label{Subsec:imprim}
The association scheme $(X,\mathcal{R})$ with Bose-Mesner algebra $\mathcal{A}$, adjacency matrices $A_i, i=0,1,\dots,d$, and primitive idempotents $E_j, j=0,1,\dots,d$ is called imprimitive if at least one of its nontrivial relations is disconnected (as a graph). It was first shown by Cameron, Goethals, and Seidel \cite{cgs} (and not hard to verify, cf. \cite[Thm.~9.3, Thm.~4.6]{banito}) that imprimitivity is equivalent to each of the following properties: \begin{itemize} \item there is a set $\mathcal{I}$ with $\{0\} \subsetneq \mathcal{I}
\subsetneq \{0,1,\dots,d\}$ such that $\langle A_i | i
\in \mathcal{I} \rangle$ is a matrix subalgebra of $\mathcal{A}$; \item there is a set $\mathcal{J}$ with $\{0\} \subsetneq \mathcal{J}
\subsetneq \{0,1,\dots,d\}$ such that $\langle E_j | j\in
\mathcal{J} \rangle$ is a $\circ$-subalgebra of $\mathcal{A}$; \item there is a matrix\footnote{This matrix is given by Equation \eqref{Eimprim}.} $E \in \mathcal{A}$, not $0$, $I$, or $J$,
such that $E^2=n E$ and $E \circ E = E$ for some $n$; \item the matrix $E_j$ has repeated columns for some $j>0$. \end{itemize}
\noindent For an imprimitive scheme, the sets $\mathcal{I}$ and $\mathcal{J}$ may not be unique; however, the various index sets $\mathcal{I}$ and $\mathcal{J}$ are paired by the following equation: \begin{equation} \label{Eimprim} \sum_{i \in \mathcal{I}} A_i=n\sum_{j \in \mathcal{J}} E_j=I_{w}\otimes J_n \end{equation} for some choice of ordering of the vertices. Thus the $v$ vertices are partitioned into $w$ fibres of size $n$. Like $\mathcal{I}$ and $\mathcal{J}$, this partitioning $\mathcal{F}$ into fibres --- the so-called imprimitivity system --- may not be unique, but each of $\mathcal{I}$, $\mathcal{J}$, $\mathcal{F}$ is well-defined given any other one of the three. In the remainder of the paper, however, we will always assume that $\mathcal{I}$, $\mathcal{J}$, and the imprimitivity system are fixed and given, unless mentioned otherwise. Of the equivalent statements of imprimitivity, the last one could be explained as ``dual imprimitivity''. In fact, in this case each of the matrices $E_j$, $j \in \mathcal{J}$ is constant on each fibre $U \in \mathcal{F}$ (i.e., columns $x$ and $y$ of $E_j$ are identical when $x,y\in U$). This is analogous to the fact that each relation $R_i, i \in \mathcal{I}$ is disconnected.
It easily follows that on each fibre $U\in \mathcal{F}$, there is an association scheme --- a so-called subscheme --- induced by the relations indexed by $\mathcal{I}$. In fact, the intersection numbers $\tilde{p}^h_{ij}$ of the subscheme are the same as the corresponding ones in the original scheme, i.e., $$\tilde{p}^h_{ij}=p^h_{ij}, i,j,h \in \mathcal{I}.$$ To put things differently,
$\mathcal{B}:=\langle A_i | i \in \mathcal{I} \rangle$ is a Bose-Mesner subalgebra of the Bose-Mesner algebra $\mathcal{A}$ (i.e., $\mathcal{B}$ is a subalgebra under both ordinary and entrywise multiplication). For later purpose, we define a linear (projection) operator $\pi:\mathcal{A} \rightarrow \mathcal{A}$ by \begin{equation} \label{Epi} \pi(A)=A \circ (I_w \otimes J_n) \end{equation} for $A \in \mathcal{A}$. It is clear that $\pi(A \circ A')=\pi(A) \circ \pi(A')$ for all $A,A' \in \mathcal{A}$. Because the map $\pi$ sends $A = \sum_{i=0}^d c_i A_i$ to $\sum_{i\in \mathcal{I}} c_i A_i$, it is also clear that $\pi(\mathcal{A})=\mathcal{B}$. Note also that $\mathcal{B}$ is a $\circ$-ideal in $\mathcal{A}$, because if $A \in \mathcal{A}$ and $B\in \mathcal{B}$, then $A \circ B =A \circ \pi(B)=\pi(A) \circ B \in \mathcal{B}$.
Each imprimitivity system also gives us a quotient association scheme. Dual to $\mathcal{B}$, consider
$$\mathcal{C}:=\langle E_j | j \in \mathcal{J} \rangle =\{A(I_w \otimes J_n) | A \in \mathcal{A}\};$$ this is also a Bose-Mesner subalgebra of $\mathcal{A}$. It is the image of $\mathcal{A}$ under the projection $\pi^\ast$ which sends $A = \sum_{j=0}^d c_j E_j$ to $\pi^\ast(A):=\frac{1}{n}A(I_w \otimes J_n) = \sum_{j\in \mathcal{J}} c_j E_j$. Each Schur idempotent of $\mathcal{C}$ must be a sum of certain $A_i$ and if $A=\sum_{i\in \mathcal{H}} A_i$ satisfies $A = \frac{1}{n}A(I_w \otimes J_n)$, then $A_{xy} = A_{x'y'}$ whenever $x$ is in the same fibre as $x'$, and $y$ is in the same fibre as $y'$. So, for each $C \in \mathcal{C}$, there exists a well-defined $w\times w$ matrix $\iota(C)$ satisfying $$C = \iota(C) \otimes J_n . $$
It is not hard to verify that the set $\left\{ \iota(C)|C \in \mathcal{C} \right\}$ is a Bose-Mesner algebra also; this gives an association scheme --- the so-called quotient scheme --- on the set of fibres. In this case, the Krein parameters of this quotient scheme are the same as the corresponding ones in the original scheme (cf. \cite[Sec.~2.4]{bcn}). For completeness we mention that Rao, Ray-Chaudhuri, and Singhi \cite{rao} obtained results on the composition factors of imprimitive schemes.
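In computational terms, an imprimitivity system and its quotient can be extracted directly from the adjacency matrices: the fibres are the connected components of the union of the relations indexed by $\mathcal{I}$, and each matrix in $\mathcal{C}$ collapses to a $w\times w$ matrix. The sketch below (Python with SciPy, illustrative only) shows this; fibres are represented as arrays of vertex indices.
\begin{verbatim}
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def imprimitivity_fibres(A_closed):
    """Fibres of the imprimitivity system, as the connected components of
    the union of the relations A_i, i in I (with A_0 = I included)."""
    union = sum(A_closed)
    _, labels = connected_components(csr_matrix(union), directed=False)
    return [np.flatnonzero(labels == c) for c in range(labels.max() + 1)]

def quotient_matrix(C, fibres):
    """For C constant on fibre-blocks (C = iota(C) kron J_n after a suitable
    reordering of the vertices), return the w x w matrix iota(C)."""
    w = len(fibres)
    return np.array([[C[fibres[a][0], fibres[b][0]] for b in range(w)]
                     for a in range(w)])
\end{verbatim}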
For the topic of this paper --- uniform schemes and, later, cometric Q-antipodal schemes --- our main interest is in the relation between the scheme and its subschemes. The corresponding quotient scheme is in this case trivial, that is, a one-class scheme corresponding to a complete graph. The relationship between the scheme and its subschemes and quotient schemes is essentially worked out by Bannai and Ito \cite[Thm.~II.9.9]{banito} (see also \cite[Sec.~2.4]{bcn} for some information on the relation between the parameters). However, to get a better understanding of what is going on, we include some of their arguments and results (and those of others) applied to subschemes here. (Moreover, Bannai and Ito treated the dual case, which, even though it is analogous, may sometimes be confusing.) By doing this, we derive in Lemma~\ref{subkrein} another (and new, as far as we know) relation between the parameters.
Following Bannai and Ito, we define the relation $\sim^*$ on the index set $\{0,1,\dots,d\}$ (indexing the primitive idempotents) by $$i \sim^* j :\Leftrightarrow q^h_{ij} \neq 0 \text{~for some~}h \in \mathcal{J}.$$
\begin{lemma} The relation $\sim^*$ is an equivalence relation. \end{lemma}
\begin{proof} If $i \sim^* j \sim^* l$, say $q^h_{ij} \neq 0$ and $q^{h'}_{jl} \neq 0$ with $h,h' \in \mathcal{J}$, then by using a standard identity (cf. \cite[Prop. II.3.7(vii)]{banito}, \cite[Lem.~2.3.1(vi)]{bcn}) and the fact that $q^{h''}_{hh'}=0$ if $h'' \notin \mathcal{J}$, we obtain that $$\sum_{h'' \in \mathcal{J}}q^l_{ih''}q^{h''}_{hh'}=\sum_{h''=0}^d q^l_{ih''}q^{h''}_{hh'}=\sum_{j'=0}^d q^{j'}_{ih}q^{l}_{j'h'}\geq q^{j}_{ih}q^{l}_{jh'}>0,$$ and it follows that for some $h'' \in \mathcal{J}$ we have $q^{h''}_{il} \neq 0$, i.e., $i \sim^* l$. \end{proof}
\noindent One of the equivalence classes of this relation must be $\mathcal{J}=:\mathcal{J}_0$, and we label the others by $\mathcal{J}_1,\dots,\mathcal{J}_e$.
\begin{example} In the linked system of geometries described in the introduction, we obtained a four-class imprimitive association scheme on $150$ vertices. In that example, if we use the $Q$-polynomial ordering of the eigenspaces, the relation $\sim^*$ has equivalence classes $\mathcal{J}_0 = \{0,4\}$, $\mathcal{J}_1 = \{1,3\}$ and $\mathcal{J}_2 = \{2\}$, which, as we shall later see, is indicative of the Q-antipodal case. \end{example}
\noindent Now we claim that the idempotents $$F_j:=\sum_{j' \in \mathcal{J}_j}E_{j'},\qquad j=0,1,\dots,e$$ are the primitive idempotents of $\mathcal{B}$, and hence by restricting these to a fibre we obtain the primitive idempotents of the subscheme on that fibre. To prove this claim, and to obtain a useful relation between the Krein parameters of $\mathcal{A}$ and $\mathcal{B}$, we define the nonnegative parameter \begin{equation} \label{Erhodef} \rho^i_j:=\sum_{h\in \mathcal{J}}q^i_{jh}, \end{equation} and note that $\rho^i_j \neq 0$ if and only if $i \sim^* j$. We abbreviate $\rho^j_j=:\rho_j$.
\begin{lemma} \label{idempotents} The primitive idempotents of $\mathcal{B}$ are $F_j$,
$j=0,1,\dots,e$, so $\mathcal{B}$ has dimension $|\mathcal{I}|=e+1$.
Moreover, if $j' \in \mathcal{J}_{j}$, then $\pi(E_{j'})=\frac{\rho_{j'}}{w}F_{j}$. \end{lemma}
\begin{proof} We first note that each primitive idempotent of $\mathcal{B}$ is a sum of primitive idempotents of $\mathcal{A}$, and because $\sum_{j=0}^{d}E_j=I \in \mathcal{B}$, each $E_j$ appears in exactly one such sum. Then for each $j=0,1,\dots,d$, we use \eqref{Epi}, \eqref{Eimprim}, and \eqref{Erhodef} to find $$\pi(E_j)=n \sum_{h \in \mathcal{J}} E_j \circ E_h =\frac{1}{w}\sum_{i=0}^d \rho^i_j E_i.$$ Thus, if $\mathcal{H} \subseteq \{0,\ldots,d\}$ and $F:=\sum_{j \in \mathcal{H}}E_j$ is any idempotent of $\mathcal{B}$, then $$F=\pi(F)= \sum_{j \in \mathcal{H}} \pi(E_j)=\frac{1}{w} \sum_{i=0}^d \sum_{j \in \mathcal{H}} \rho^i_j E_i.$$ This implies that if $i \notin \mathcal{H}$, then $\sum_{j \in \mathcal{H}} \rho^i_j =0$, i.e., if $i \notin \mathcal{H}$ and $j \in \mathcal{H}$, then $i \nsim^* j$, which proves that $\mathcal{H}$ is a union of equivalence classes of $\sim^*$.
On the other hand, take any $0\le j\le d$ and consider the primitive idempotent $F:=\sum_{h \in \mathcal{H}}E_h$ for which $j \in \mathcal{H}$. Because $\frac{1}{w}\sum_{i=0}^d \rho^i_j E_i = \pi(E_j)\in \mathcal{B}$, it is a linear combination of primitive idempotents of $\mathcal{B}$ with a nonzero coefficient for $F$ because $\rho^j_j>0$. So, if $h \in \mathcal{H}$, then $\rho^h_j>0$, which shows that $h$ and $j$ are in the same equivalence class. We may therefore conclude that $\mathcal{H}$ is an equivalence class of $\sim^*$.
Thus, the primitive idempotents of $\mathcal{B}$ are $F_j$, $j=0,1,\dots,e$. For $j' \in \mathcal{J}_{j}$, it then also follows that $\pi(E_{j'})=\frac{1}{w}\sum_{i \sim^*j'} \rho^i_{j'} E_i$ is a multiple of one of these idempotents. So $\rho^i_{j'}=\rho_{j'}$ for all $i \sim^*j'$, and $\pi(E_{j'})=\frac{\rho_{j'}}{w}F_{j}$. \end{proof}
\noindent By working out the products $F_i \circ F_j$, the Krein parameters $\tilde{q}^h_{ij}$ of the subscheme can now be easily expressed in terms of those of the original scheme as $$\tilde{q}^h_{ij}=\frac{1}{w} \sum_{i' \in \mathcal{J}_i, j' \in \mathcal{J}_j} q^{h'}_{i'j'},$$ for each $h' \in \mathcal{J}_h$. Moreover, it follows that the eigenmatrices $\tilde{P}$ and $\tilde{Q}$ of the subscheme are given by
\begin{gather}\label{EPQtilde} \begin{aligned} &\tilde{P}_{ji} = P_{j'i}, \qquad i \in \mathcal{I}, j' \in \mathcal{J}_j, j=0,1,\dots,e; \\ &\tilde{Q}_{ij} = \frac{1}{w}\sum_{j' \in \mathcal{J}_j}Q_{ij'}, \qquad i \in \mathcal{I}, j=0,1,\dots,e. \end{aligned} \end{gather} However, the second part of Lemma \ref{idempotents} can be used to get another useful expression of the Krein parameters of the subschemes.
\begin{lemma}\label{subkrein} If $i' \in \mathcal{J}_i$, $j' \in \mathcal{J}_j$, then $$\tilde{q}^h_{ij}=\frac{1}{\rho_{i'} \rho_{j'}} \sum_{h' \in \mathcal{J}_h} \rho_{h'} q^{h'}_{i'j'} .$$ \end{lemma}
\begin{proof} Let $i' \in \mathcal{J}_i$, $j' \in \mathcal{J}_j$, then \begin{equation*} \rho_{i'} \rho_{j'}F_i \circ F_j = w^2 \pi(E_{i'} \circ E_{j'}) = \frac{w^2}{v} \sum_{h'=0}^d q^{h'}_{i'j'}\pi(E_{h'})=\frac{1}{n} \sum_{h=0}^e \sum_{h' \in \mathcal{J}_h} \rho_{h'} q^{h'}_{i'j'} F_h, \end{equation*} which was to be proven. \end{proof}
\section{Uniform imprimitive schemes} \label{sec:uniform}
So far, we have given a selective review of imprimitive association schemes, focusing on the eigenspaces and the Krein parameters of subschemes. Exploring imprimitivity further, the main goal of this section is to reconcile the concept of dismantlable association scheme introduced in \cite{mmw} with the concept of uniform association scheme introduced earlier in \cite{Huninform}.
\subsection{Dismantlability and uniformity} \label{Subsec:dismunif}
Besides the usual subschemes on each fibre, it was proven in \cite[Thm. 4.7]{mmw} that a cometric Q-antipodal scheme has so-called dismantled schemes on each union of fibres. To generalize this result, and to obtain more information on these dismantled schemes in the subsequent sections, we first define the following.
For a subset $Y$ of the vertices, let $I^Y$ be the $v \times v$ diagonal $(0,1)$-matrix with $(I^Y)_{xx}=1$ if and only if $x \in Y$. For a matrix $M$, we let $$M^{YZ}:=I^Y M I^Z$$ for subsets $Y$ and $Z$. Put differently, $M^{YZ}$ is the $v \times v$ matrix containing the submatrix $M_{YZ}$, and that is zero everywhere else. Algebraically, in most of the following it turns out to be more convenient to work with the matrices $M^{YZ}$ than with the usual submatrices $M_{YZ}$, although essentially they are the same. For a relation $R$ we define related notation $$R^{YZ}:=R \cap (Y \times Z).$$ In case $Y=Z$, we often use shorthand notation $M^{Y}:=M^{YY}$ and $R^{Y}:=R^{YY}$. For a set
$\mathcal{M}$ of matrices, we let $\mathcal{M}^Y:=\{M^Y | M \in \mathcal{M}\}$ and for a set
$\mathcal{R}$ of relations, we let $\mathcal{R}^Y:=\{R^Y \neq \emptyset | R \in \mathcal{R}\}$.
\begin{definition}An imprimitive association scheme $(X,\mathcal{R})$ is called dismantlable if $(Y, \mathcal{R}^Y)$ is an association scheme for each union $Y$ of fibres. In this case, the association scheme $(Y, \mathcal{R}^Y)$ is called a dismantled scheme on $Y$, if $Y$ is the union of at least two fibres. \end{definition}
\noindent This definition first appears in \cite{mmw} where the structure of cometric Q-antipodal association schemes is considered. We shall see in Corollary \ref{Cdismantledallsame} that two dismantled schemes $(Y, \mathcal{R}^Y)$ and $(Y', \mathcal{R}^{Y'})$
of $(X,\mathcal{R})$ with $|Y|=|Y'|$ always have the same parameters.
Bipartite schemes, i.e., imprimitive schemes with two fibres, are trivially dismantlable. Other examples of dismantlable schemes are the so-called uniform association schemes, as defined by Higman in his paper on four-class imprimitive schemes \cite{Htriality} and more generally in an unpublished manuscript \cite{Huninform}. Informally speaking, an imprimitive scheme is uniform if the intersection numbers are divided uniformly over the fibres whereas, in the general case, only the valencies enjoy this property.
To define uniform schemes precisely, we first introduce a bit of notation. Consider an imprimitive scheme with a trivial quotient scheme, i.e., where the quotient is a complete graph. As in Equation \eqref{Eimprim}, let $\mathcal{I}$ denote the indices of relations that occur in the subschemes. For fibres $U$ and $V$, we denote by $\mathcal{I}(U,V)$ the index set of relations that occur between $U$ and $V$; so $A_i^{UV}$ is nonzero precisely if $i \in \mathcal{I}(U,V)$. Because we are assuming that the quotient is a complete graph, $\mathcal{I}(U,V)$ equals $\mathcal{I}$ if $U=V$, and $\mathcal{I}(U,V)=\overline{\mathcal{I}}$ (the complement of $\mathcal{I}$) if $U \neq V$.
\begin{definition} \label{Dunifscheme} An imprimitive association scheme is called uniform if its quotient scheme is trivial, and if there are integers $a_{ij}^h$ such that for all fibres $U,V,$ and $W,$ and $i \in \mathcal{I}(U,V)$, $j \in \mathcal{I}(V,W)$, we have \begin{equation}\label{coh_conf_prod} A_i^{UV} A_j^{VW} = \sum_h a_{ij}^h A_h^{UW}. \end{equation} \end{definition}
\noindent It is easily seen that in this case $p_{ij}^h = a_{ij}^h$ if $i \in \mathcal{I}$ or $j \in \mathcal{I}$, $p_{ij}^h = (w-1) a_{ij}^h$ if $i,j \notin \mathcal{I}$ and $h \in \mathcal{I}$, and $p_{ij}^h= (w-2) a_{ij}^h$ if $i,j,h \notin \mathcal{I}$, i.e., the intersection numbers are divided uniformly over the relevant fibres. Note that bipartite schemes are trivially uniform. But in general, an
antipodal distance-regular cover of a complete graph, while
having a complete quotient, is not uniform; for example, two
adjacent vertices in the icosahedron have only two common
neighbors so $p_{11}^1$ is not divisible by $w-2=4$. However, any imprimitive $d$-class association scheme with only one relation across fibres (a complete multipartite graph) is uniform. Such a scheme can easily be constructed as a wreath product scheme \cite[p.\ 44]{weis}, \cite[p.\ 69]{bailey} of a trivial scheme and an arbitrary scheme. Also the tensor product \cite[p.\ 44]{weis} of a one-class scheme and an arbitrary scheme is uniform. (This is also called the ``direct product'' \cite[p.\ 62]{bailey}.) In this paper, we call a scheme decomposable if it has the same parameters as a wreath product or tensor product scheme.
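Definition \ref{Dunifscheme} can be tested mechanically on a concrete scheme: one restricts each adjacency matrix to the relevant fibre-blocks and checks that the structure constants of the block products do not depend on the fibres chosen. The following Python sketch (illustrative only, assuming a trivial quotient, with fibres given as arrays of vertex indices) does exactly this.
\begin{verbatim}
import numpy as np
from itertools import product

def restrict(M, Y, Z):
    """M^{YZ}: the matrix M with all entries outside the block Y x Z set to 0."""
    out = np.zeros_like(M)
    out[np.ix_(Y, Z)] = M[np.ix_(Y, Z)]
    return out

def is_uniform(A, fibres):
    """Check that A_i^{UV} A_j^{VW} = sum_h a_{ij}^h A_h^{UW} with
    coefficients a_{ij}^h independent of the fibres U, V, W."""
    d, coeffs = len(A) - 1, {}
    for U, V, W in product(fibres, repeat=3):
        for i, j in product(range(d + 1), repeat=2):
            AiUV, AjVW = restrict(A[i], U, V), restrict(A[j], V, W)
            if not AiUV.any() or not AjVW.any():
                continue
            P = AiUV @ AjVW
            for h in range(d + 1):
                vals = P[restrict(A[h], U, W) == 1]
                if len(vals) == 0:
                    continue
                if len(set(vals.tolist())) > 1:
                    return False          # product not in the span of the A_h^{UW}
                if coeffs.setdefault((i, j, h), vals[0]) != vals[0]:
                    return False          # a_{ij}^h depends on the fibres
    return True
\end{verbatim}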
\begin{theorem} \label{uniformdismantle} A uniform scheme is dismantlable. Any dismantled scheme of a uniform scheme is also uniform. \end{theorem}
\begin{proof} These claims follow in a straightforward way from the definition of a uniform scheme. \end{proof}
\noindent In Section \ref{sec:uniformcoco} we will show the converse of this proposition, namely that every dismantlable scheme is uniform.
\subsection{Linked systems and triality}
In Section \ref{HOSI} we described what we (and Neumaier \cite{neumaier}) called a linked system of partial $\lambda$-geometries. This linked system is in fact a uniform association scheme with three fibres of size 50. The term linked system was coined by Cameron \cite{cameronlinked} for linked systems of symmetric designs (see also Section \ref{sec:threeclass}).
\begin{example} There are three non-isomorphic $(16,6,2)$ symmetric block designs. Each incidence structure gives us a three-class bipartite association scheme with two fibres of size sixteen. But only one of these can be extended to a linked system of symmetric designs with eight fibres of size sixteen. This is a uniform cometric scheme on $128$ vertices and is the first example in an infinite family which arises from the Kerdock codes \cite{cs} (see also \cite{noda,mmw}). \end{example}
\noindent Following Neumaier, and also Cameron and Van Lint \cite{cvl} (see Section \ref{sec:vls}), we will use the term linked system informally for the combinatorial structure underlying a uniform association scheme. Note also that Higman \cite{Htriality} mentions the term ``system of uniformly linked strongly regular designs''. We will now describe an infinite family of such systems, which we refer to as Higman's ``triality schemes''.
\begin{example} \label{Ex-triality} The dual polar graph $D_4(q)$ is a cometric bipartite distance-regular graph with diameter four defined on the $2(q+1)(q^2+1)(q^3+1)$ maximal isotropic (four-dimensional) subspaces in $GF(q)^8$ with a quadratic form of Witt index 4. One can extend this graph by a third fibre containing the $(q+1)(q^2+1)(q^3+1)$ isotropic one-dimensional subspaces, where a four-dimensional subspace is adjacent to the one-dimensional subspaces that it contains. This extended graph generates a uniform four-class association scheme that is cometric Q-antipodal. Higman \cite{Htriality} explains how this scheme is obtained from classical triality related to the group $O^+_8(q)$, and also how some other sporadic examples, such as the one in Section \ref{HOSI}, have a triality related to some group. Higman also mentions that related to these examples are certain coherent configurations. \end{example}
\section{Coherent configurations and uniformity} \label{Sec:coco}
To understand uniformity better, we will need to recall certain combinatorial structures that are more general than association schemes. As we will see, a (symmetric) $d$-class association scheme can be viewed as a homogeneous coherent configuration of rank $d+1$ in which all relations are symmetric.
\subsection{Definitions and algebraic automorphisms} \label{Subsec:coco}
A coherent configuration is a pair $(X,\mathcal{S})$ consisting of a finite set $X$ of size $v$ and a set $\mathcal{S}$ of binary relations on $X$ such that \begin{itemize} \item $\mathcal{S}$ is a partition of $X \times X$; \item the diagonal relation $\Delta_X$ is the union of some relations in
$\mathcal{S}$; \item for each $R\in\mathcal{S}$ it holds that $R^\top \in \mathcal{S}$; \item there exist integers $p_{ST}^R$ such that
$$ \left| \left\{ z \in X | (x,z) \in S \ {\mbox {\rm and}} \ (z,y) \in T\right\}
\right| = p_{ST}^R$$ whenever $(x,y) \in R$, for each $R,S,T\in\mathcal{S}$. \end{itemize} The relations of $\mathcal{S}$ are called basic relations of the configuration. A basic relation $R$ is called a diagonal relation if $R\subseteq \Delta_X$. Each diagonal relation is of the form $\Delta_U$ for some $U\subseteq X$. Because the relations of $\mathcal{S}$ form a partition of $X \times X$, the diagonal relations of $\mathcal{S}$ form a partition of $\Delta_X$. Thus there exists a uniquely determined partition of $X$ into a set $\mathcal{F}_{\mathcal{S}}$ of $w$ fibres such that $\Delta_{U}
\in \mathcal{S}$ for each $U \in \mathcal{F}_{\mathcal{S}}$. The numbers $v=|X|$ and
$|\mathcal{S}|$ are called the order and the rank of the configuration, respectively.
Given $R\in\mathcal{S}$ and $x\in X$ we define $R(x):=\{y\in X\,|(x,y)\in R\}$. For any basic relation $R$ we define its projections onto the first and second coordinates as
${\sf pr}_1(R):=\{x\in X\,|\,R(x)\neq\emptyset\}$ and ${\sf pr}_2(R):={\sf pr}_1(R^\top)$. One can show that these projections are fibres. So, each basic relation $R$ is contained in
${\sf pr}_1(R) \times {\sf pr}_2(R)$. We write $\mathcal{S}^{UV}$ for the set of all basic relations $R\in\mathcal{S}$ with ${\sf pr}_1(R)=U, {\sf pr}_2(R)=V$, and $r_{UV}:=|\mathcal{S}^{UV}|$. Note that $r_{UV}=r_{VU}$ and
$|\mathcal{S}|=\sum_{U,V}r_{UV}$. The $w\times w$ integer symmetric matrix $(r_{UV})$ is called the type of the configuration.
The last axiom of the definition of coherent configuration implies that $$A_SA_T=\sum_Rp_{ST}^RA_R.$$ It thus follows that the vector subspace of $M_X(\mathbb{C})$ spanned by the adjacency matrices $A_R$, $R\in\mathcal{S}$ is a subalgebra of the full matrix algebra $M_X(\mathbb{C})$. It also explains why the intersection numbers $p_{ST}^R$ are sometimes called structure constants. The subalgebra is called the adjacency algebra of $\mathcal{S}$ and will be denoted by $\mathbb{C}[\mathcal{S}]$. This algebra has the following properties: \begin{itemize} \item it is closed with respect to (ordinary) matrix multiplication; \item it is closed with respect to entrywise (Schur-Hadamard) multiplication $\circ$; \item it is closed with respect to transposition ${}^\top$; \item it contains the identity matrix $I$ and the all-ones
matrix $J$. \end{itemize} Any subspace of $M_X(\mathbb{C})$ which satisfies these conditions is called a coherent algebra. There is a one-to-one correspondence between coherent configurations on $X$ and coherent algebras in $M_X(\mathbb{C})$, i.e., each coherent algebra is the adjacency algebra of a uniquely determined coherent configuration.
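To see the product formula at work in the small example above, note that $$A_{U\times V}A_{V\times U}=A_{\Delta_U}+A_{R'},$$ so that $p_{U\times V,\,V\times U}^{\Delta_U}=p_{U\times V,\,V\times U}^{R'}=1$.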
An algebraic automorphism of $\mathcal{S}$ is a permutation $\sigma \in {\sf Sym}(\mathcal{S})$ which preserves the structure constants, that is, $p_{ST}^R = p_{\sigma(S)\sigma(T)}^{\sigma(R)}$ for all $R,S,T\in\mathcal{S}$ (an algebraic automorphism of an association scheme is also called a pseudo-automorphism, cf. \cite{ikuta}). One can extend such a $\sigma$ to a linear map from $\mathbb{C}[\mathcal{S}]$ into itself by setting $\sigma(\sum_{R\in\mathcal{S}}\alpha_R A_R):=\sum_{R\in\mathcal{S}}\alpha_RA_{\sigma(R)}$. This yields an automorphism of the adjacency algebra; the linear map defined in this way preserves the ordinary matrix product, Schur-Hadamard product, and matrix transposition, i.e., $\sigma(AB)=\sigma(A)\sigma(B), \sigma(A \circ B)=\sigma(A) \circ \sigma(B)$, and $\sigma(A^{\top})=\sigma(A)^{\top}$ for all $A,B \in \mathbb{C}[\mathcal{S}]$. Vice versa, each permutation $\sigma$ which preserves these three operations is an algebraic automorphism of $\mathcal{S}$.
The algebraic automorphisms of $\mathcal{S}$ form a group (which is a subgroup of ${\sf Sym}(\mathcal{S})$), which will be denoted by ${\sf AAut}(\mathcal{S})$. Any subgroup $G\leq{\sf AAut}(\mathcal{S})$ gives rise to a fusion configuration $\fuse{\mathcal{S}}{G}$ whose basic relations are $\cup_{R \in O}R$, $O \in \Omega$, where $\Omega$ is the set of orbits of $\mathcal{S}$ under the action of $G$. The adjacency algebra of $\fuse{\mathcal{S}}{G}$ can be characterized as the subspace of $\mathbb{C}[\mathcal{S}]$ consisting of all $G$-invariant elements of $\mathbb{C}[\mathcal{S}]$.
The matrices $I^{U}, U \in \mathcal{F}_{\mathcal{S}}$ are the only matrices of the standard basis $\{ A_R | R \in \mathcal{S}\}$ of $(X,\mathcal{S})$ that are idempotent with respect to ordinary matrix multiplication. Therefore any algebraic automorphism $\sigma$ of $\mathcal{S}$ permutes these diagonal matrices, thereby also inducing a permutation $U \mapsto \sigma(U)$ on the set of fibres. So, instead of $\sigma(I^{U})$, we could also write $I^{{\sigma(U)}}$.
If $G\leq{\sf AAut}(\mathcal{S})$ acts transitively on the set of fibres, then $\fuse{\mathcal{S}}{G}$ is homogeneous, that is, it is a coherent configuration with one fibre, or in other words, a --- possibly nonsymmetric --- association scheme.
\subsection{Uniformity in coherent configurations} \label{sec:uniformcoco}
We now make a fundamental observation about uniform association schemes. Consider such a scheme $(X,\mathcal{R})$, with related (generic) notation as above. It follows immediately from \eqref{coh_conf_prod} that the set of relations
$\mathcal{S}:=\{R_i^{UV}| i \in \mathcal{I}(U,V); \, U,V \in \mathcal{F}\}$ forms a coherent configuration, with the same fibres as those of the association scheme, i.e., $\mathcal{F}_{\mathcal{S}}=\mathcal{F}$. Moreover, any $\sigma \in {\sf Sym}(\mathcal{F}_{\mathcal{S}})$ acts as a permutation on $\mathcal{S}$ by $\sigma(R^{UV}_i):=R^{\sigma(U)\sigma(V)}_{i}$, for $i\in \mathcal{I}(U,V)=\mathcal{I}(\sigma(U),\sigma(V))$. In this way, $\sigma$ is an algebraic automorphism of $\mathcal{S}$, because if $i\in \mathcal{I}(U,V)$, $j\in \mathcal{I}(V,W)$, $h\in \mathcal{I}(U,W)$, then $$p_{R^{UV}_i R^{VW}_j}^{R^{UW}_h} = a^h_{ij}= p_{R^{\sigma(U)\sigma(V)}_i R^{\sigma(V)\sigma(W)}_j}^{R^{\sigma(U)\sigma(W)}_h}=p_{\sigma(R^{UV}_i)\sigma( R^{VW}_j)}^{\sigma(R^{UW}_h)}.$$ Moreover, the fusion scheme $\fuse{\mathcal{S}}{{\sf Sym}(\mathcal{F}_{\mathcal{S}})}$ is the association scheme that we started from. These observations are the motivation for the definition of a uniform coherent configuration. But first we need a little more terminology. We say that two triples $(U,V,W)$ and $(U',V',W')$ of fibres have the ``same type'' if and only if there is a permutation $\sigma$ of the fibres such that $\sigma((U,V,W))=(U',V',W')$.
\begin{definition} A coherent configuration $(X,\mathcal{S})$ with at least two fibres is called uniform if there are complementary sets of indices $\mathcal{I}_{\mathcal{S}} \ni 0$ and $\overline{\mathcal{I}_{\mathcal{S}}}$ of sizes $e_{\mathcal{S}}+1$ and $\ell_{\mathcal{S}}$ (say), respectively, such that the basic relations $R\in \mathcal{S}$ can be relabeled as $R=S^{UV}_i$ ($U={\sf pr}_1(R)$, $V={\sf pr}_2(R)$, $i\in \mathcal{I}_\mathcal{S} \cup \overline{\mathcal{I}_{\mathcal{S}}}$) such that \begin{itemize} \item $S^{UU}_0 = \Delta_U$ for each fibre $U$;
\item $\mathcal{S}^{UU}=\{S^{UU}_i | i\in \mathcal{I}_{\mathcal{S}}\}$ for each
fibre $U$ and $\mathcal{S}^{UV}=\{S^{UV}_i | i\in
\overline{\mathcal{I}_{\mathcal{S}}}\}$ for all
fibres $U\neq V$; \item $(S^{UV}_i)^\top = S^{VU}_i$ for all fibres $U\neq
V$; \item for any two triples $(U,V,W)$ and $(U',V',W')$ of the same
type and any $i\in \mathcal{I}_{\mathcal{S}}(U,V)$, $j\in \mathcal{I}_{\mathcal{S}}(V,W)$,
$h\in \mathcal{I}_{\mathcal{S}}(U,W)$, it holds that \begin{equation}\label{coco_uniform}
p_{S^{UV}_i S^{VW}_j}^{S^{UW}_h} = p_{S^{U'V'}_i S^{V'W'}_j}^{S^{U'W'}_h}. \end{equation} \end{itemize} \end{definition}
\noindent In this definition $\mathcal{I}_{\mathcal{S}}(U,V)$ is defined in the same way as before: it equals $\mathcal{I}_{\mathcal{S}}$ if $U=V$, and $\overline{\mathcal{I}_{\mathcal{S}}}$ otherwise. Without loss of generality we will assume that $\mathcal{I}_{\mathcal{S}} \cup \overline{\mathcal{I}_{\mathcal{S}}} = \{0,\dots,e_{\mathcal{S}}+\ell_{\mathcal{S}}\}$.
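By way of illustration (a small example that is not needed later), let $U$ and $V$ be disjoint sets of size $n\geq 2$, let $X=U\cup V$, and let $$\mathcal{S}=\bigl\{\Delta_U,\ \Delta_V,\ (U\times U)\setminus\Delta_U,\ (V\times V)\setminus\Delta_V,\ U\times V,\ V\times U\bigr\}.$$ These are the orbitals of ${\sf Sym}(U)\times{\sf Sym}(V)$ acting on $X$, so $(X,\mathcal{S})$ is a coherent configuration. Relabeling $S^{UU}_0:=\Delta_U$, $S^{UU}_1:=(U\times U)\setminus\Delta_U$, and $S^{UV}_2:=U\times V$ (and similarly for $V$), the conditions of the definition hold with $\mathcal{I}_{\mathcal{S}}=\{0,1\}$ and $\overline{\mathcal{I}_{\mathcal{S}}}=\{2\}$; in particular \eqref{coco_uniform} holds because interchanging the two fibres preserves all intersection numbers. The corresponding two-class association scheme (cf. Proposition \ref{one-one} below) is the imprimitive scheme of the disjoint union of two complete graphs $K_n$.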
It is clear from the above observations that from a uniform association scheme one obtains a uniform coherent configuration with $\mathcal{F}_{\mathcal{S}}=\mathcal{F}$, $\mathcal{I}_{\mathcal{S}}=\mathcal{I}$, $\overline{\mathcal{I}_{\mathcal{S}}}=\overline{\mathcal{I}}$, $e_{\mathcal{S}}=e$, $\ell_{\mathcal{S}}=d-e$, and $S_i^{UV}=R_i^{UV}$.
Conversely, given a uniform coherent configuration, any permutation $\sigma$ of the fibres acts --- just as in Section \ref{Subsec:coco} --- as an algebraic automorphism of $\mathcal{S}$, by \eqref{coco_uniform}. Thus, the relations $R_i:=\cup_{U,V} S_i^{UV}$ are the relations of the $(e_{\mathcal{S}}+\ell_{\mathcal{S}})$-class association scheme $\fuse{\mathcal{S}}{{\sf Sym}(\mathcal{F}_{\mathcal{S}})}$. It is clear that this scheme is imprimitive with $\mathcal{F}=\mathcal{F}_{\mathcal{S}}$ and $\mathcal{I}=\mathcal{I}_{\mathcal{S}}$, and that its quotient scheme is trivial. Because \eqref{coh_conf_prod} follows from \eqref{coco_uniform}, this scheme is uniform. We have thus shown a one-to-one correspondence between uniform association schemes and uniform coherent configurations.
\begin{proposition}\label{one-one} If $(X,\mathcal{R})$ is a uniform association scheme, then $(X,\{R_i^{UV}| i \in \mathcal{I}(U,V); U,V \in \mathcal{F}\})$ is a uniform coherent configuration. Conversely, if $(X,\mathcal{S})$ is a uniform coherent configuration, then after relabeling
$(X,\{\cup_{U,V} S_i^{UV}| i = 0,\dots,e_{\mathcal{S}}+\ell_{\mathcal{S}}\})$ is a uniform association scheme on $X$. \end{proposition}
\noindent We will now use this one-to-one correspondence to show that every dismantlable association scheme is uniform.
\begin{theorem} \label{dismunif} An association scheme is dismantlable if and only if it is uniform. \end{theorem}
\begin{proof} One direction has already been shown in Theorem \ref{uniformdismantle}.
Let $(X,\mathcal{R})$ be a dismantlable association scheme. Because bipartite schemes are uniform, we may assume that $w \geq 3$. We must first check that the quotient scheme is trivial. To see this, it suffices to show that, for any three distinct fibres $U$, $V$ and $W$, $\mathcal{I}(U,V)=\mathcal{I}(V,W)$. But this is clear since the dismantled scheme on vertex set $Y=U \cup V \cup W$ is still imprimitive and there is only one choice for its quotient: the trivial scheme on three vertices. So $\mathcal{I}(U,V) = \mathcal{I}(V,W)=\overline{\mathcal{I}}$.
Next, we claim that $\mathcal{S}:=\{R_i^{UV}| i \in \mathcal{I}(U,V); U,V \in \mathcal{F}\}$ forms a coherent configuration on $X$. In order to do this, we will have to consider the intersection numbers of the dismantled schemes $(Y,\mathcal{R}^Y)$ where $Y$ is a union of fibres, which we denote by $p_{ij}^h(Y)$. To establish the claim, we first observe that the non-empty relations $R_i^{UV}$ form a partition of $X \times X$ and that $(R_i^{UV})^\top=R_i^{VU}$.
Now pick an arbitrary triple of relations $R_i^{UV},R_j^{VW},R_h^{UW},$ with $i\in \mathcal{I}(U,V) ,j\in \mathcal{I}(V,W),h\in \mathcal{I}(U,W)$. We have to show that the number $$
\lambda_{ijh}^{UVW}(u,x):= |R_i^{UV}(u)\cap (R_j^{VW})^{\top}(x)| = |R_i(u)\cap R_j(x)\cap V| $$ does not depend on the pair $(u,x)\in R_h^{UW}$.
If $U=V$ then $i\in \mathcal{I}$, implying $R_i(u)\cap V = R_i(u)$. Therefore $ \lambda_{ijh}^{UVW}(u,x) = |R_i(u)\cap R_j(x)|=p_{ij}^h.$ Analogously, $ \lambda_{ijh}^{UVW}(u,x) =p_{ij}^h $ if $V=W$.
Next, we consider the case that $U\neq V$ and $U=W$. In this case $h\in\mathcal{I}$, while $i,j\in\ovr{\mathcal{I}}$. Consider the scheme $(Y,\mathcal{R}^Y)$, where $Y=U\cup V$. For $u,x \in U=W$, we have \begin{equation}\label{pijhY}
p_{ij}^h(Y)=|R_i^{Y}(u)\cap (R_j^{Y})^\top(x)| = |R_i(u)\cap R_j(x)\cap Y|. \end{equation} Because $i,j\in\ovr{\mathcal{I}}$, the intersections $R_i(u)\cap U$ and $R_j(x)\cap U$ are empty. Therefore $R_i(u)\cap R_j(x)\cap Y = R_i(u)\cap R_j(x)\cap V$, and hence $\lambda_{ijh}^{UVW}(u,x) = p_{ij}^h(U\cup V).$
The last case is the one in which the fibres $U,V,W$ are pairwise distinct. Then $i,j,h\in\ovr{\mathcal{I}}$. Consider the scheme $(Y,\mathcal{R}^Y)$, where $Y=U\cup V\cup W$. As before, we have \eqref{pijhY} for $u \in U, x \in W$. Because $i,j\in\ovr{\mathcal{I}}$, we obtain $R_i(u)\cap Y \subseteq V\cup W$ and $R_j(x)\cap Y\subseteq U \cup V$. This implies that $R_i(u)\cap R_j(x)\cap Y = R_i(u)\cap R_j(x)\cap V$, and therefore $\lambda_{ijh}^{UVW}(u,x) = p_{ij}^h(U\cup V\cup W)$.
Thus we proved that the relations in $\mathcal{S}$ form a coherent configuration, with intersection numbers \begin{equation}\label{str_const} p_{R^{UV}_i R^{VW}_j}^{R^{UW}_h}=\lambda_{ijh}^{UVW}= \begin{cases} p_{ij}^h & \mbox{ if }U=V \mbox{ or } V=W;\\ p_{ij}^h(U\cup V) & \mbox{ if }U\neq V, U=W;\\ p_{ij}^h(U\cup V\cup W) & \mbox{ if }U\neq V,V\neq W,W\neq U. \end{cases} \end{equation}
Finally, we shall show that the coherent configuration is uniform; by the above one-to-one correspondence between uniform association schemes and uniform coherent configurations this will prove the theorem. To show that the configuration is uniform, we have to prove that $\displaystyle{\lambda_{ijh}^{UVW} } = \displaystyle{\lambda_{ijh}^{U'V'W'}}$ whenever the triples $(U,V,W)$ and $(U',V',W')$ have the same type.
For $U=V$ (and, therefore, $U'=V'$), or if $V=W$, this is clear. If $U\neq V, U=W$ and $U'\neq V',U'=W'$, then $i,j\in\ovr{\mathcal{I}},h\in\mathcal{I}$. In this case we have to show that $p_{ij}^h(U\cup V) = p_{ij}^h(U'\cup V')$. To prove this, it is sufficient to show that $p_{ij}^h(U\cup V) = p_{ij}^h(V\cup W)$ holds for any triple $(U,V,W)$ of pairwise distinct fibres. So, consider a scheme $(Y,\mathcal{R}^Y)$, where $Y=U\cup V\cup W$. Because $h\in\mathcal{I}$, $R_h^{Y} = R_h^{U}\cup R_h^{V}\cup R_h^{W}$. Pick an arbitrary pair $(u,u')\in R_h^{U}$, that is, $(u,u')\in R_h$ and $u,u'\in U$. Because $i,j\in\ovr{\mathcal{I}}$, we have that $$
p_{ij}^h(Y)=|R_i(u)\cap R_j(u')\cap Y| = |R_i(u)\cap R_j(u')\cap V| + |R_i(u)\cap R_j(u')\cap W|= $$ $$
|R_i(u)\cap R_j(u')\cap (U\cup V)| + |R_i(u)\cap R_j(u')\cap(U\cup W)| = p_{ij}^h(U\cup V)+p_{ij}^h(U\cup W). $$ The same argument with $(x,x')\in R_h^{W}$ shows that $p_{ij}^h(W\cup V)+p_{ij}^h(W\cup U) = p_{ij}^h(Y)$, and hence $p_{ij}^h(U\cup V) = p_{ij}^h(V\cup W)$.
Consider now the remaining case where the triples $(U,V,W)$ and $(U',V',W')$ consist of pairwise distinct fibres. In this case $i,j,h\in\ovr{\mathcal{I}}$ and we have to show that $p_{ij}^h(U\cup V \cup W) = p_{ij}^h(U'\cup V' \cup W')$. If $w = 3$, then there is nothing to prove, so we may assume that $w\geq 4$. In this case it is sufficient to show that $p_{ij}^h(U\cup V \cup W) = p_{ij}^h(V\cup W \cup Z)$ holds for each quadruple $U,V,W,Z$ of pairwise distinct fibres. The arguments for this are similar as in the previous case. Consider the scheme $(Y,\mathcal{R}^Y)$, where $Y=U\cup V\cup W\cup Z$. Then it follows from considering pairs $(u,y)\in R_h^{UV}$ and $(y,z)\in R_h^{VZ}$ that $$p_{ij}^h(U\cup V \cup W)+ p_{ij}^h(U\cup V \cup Z) =p_{ij}^h(Y)= p_{ij}^h(V\cup Z \cup W)+p_{ij}^h(V\cup Z \cup U),$$ which finishes the proof. \end{proof}
\noindent As an immediate consequence, we obtain important structural information about dismantled schemes.
\begin{corollary} \label{Cdismantledallsame} Let $(X,\mathcal{R})$ be a dismantlable association scheme with $w$ fibres. If $2\le w' \le w$ and each of $Y, Y' \subseteq X$ is expressible as a union of $w'$ fibres, then the dismantled schemes $(Y,\mathcal{R}^Y)$ and $(Y',\mathcal{R}^{Y'})$ have the same parameters (i.e., the same eigenmatrices $P$ and $Q$ and the same intersection numbers and Krein parameters, with appropriate orderings of their relations and idempotents). \end{corollary}
\begin{proof} It follows from Definition \ref{Dunifscheme} that the parameters of the dismantled scheme $(Y,\mathcal{R}^Y)$ depend only on $w'$ and the parameters $a_{ij}^h$ and not on the choice of $Y$ itself. \end{proof}
\subsection{Q-Higman schemes} \label{Subsec:QHigman}
In the previous section, we have seen that uniformity of a scheme is equivalent to dismantlability. In this section, we give a characterization of uniform schemes in terms of the Krein parameters (through so-called Q-Higman schemes) and study the idempotents of uniform schemes.
\subsubsection{Krein parameters of Q-Higman schemes}\label{uniformschemes}
With cometric Q-antipodal association schemes in mind, we consider an imprimitive association scheme with $\mathcal{J}=\{0,d\}$, $\mathcal{J}_j=\{j,d-j\}$ for $j=0,1,\dots,\ell-1<\frac{d}{2}$, and $\mathcal{J}_j=\{j\}$ for $j=\ell,\dots,d-\ell$ (for some $\ell$).
For such a scheme we consider the dual intersection matrix $L^*_d$ with entries $(L^*_d)_{ij}:=q^i_{dj}$. First note that $\rho_j=1+q^j_{dj}$. If $j < \ell$ or $j > d-\ell$, then from $E_j \circ (E_0+E_d) = \frac{1}{n} \pi(E_j) =\frac{1}{v}(1+q^j_{dj})(E_j+E_{d-j})$, we find that $E_j \circ E_d = \frac{1}{v}(q^j_{dj}E_j + (1+q^j_{dj})E_{d-j})$, and hence that $q^{d-j}_{dj}=1+q^j_{dj}$, and $q^i_{dj}=0$ for $i \neq j,d-j$.
For $\ell \leq j \leq d-\ell$, we find from $E_j \circ (E_0+E_d) = \frac{1}{v}(1+q^j_{dj})E_j$ that $q^i_{dj}=0$ for $i \neq j$, and hence $q^j_{dj}=w-1$. In other words, the only nonzero entries of $L^*_d$ are on the diagonal and the antidiagonal.
For $j < \ell$ or $j > d-\ell$, we may combine the facts $q^{d-j}_{dj}=1+q^j_{dj}$ and $q^{j}_{dj}+q^{j}_{d,d-j}=m_d=w-1$ to find $(1+q^j_{dj})m_{d-j}=(w-1-q^j_{dj})m_j$. This implies that $m_{d-j} \leq (w-1)m_j$ with equality if and only if $q^j_{dj}=0$. We thus obtain the following:
\begin{lemma} \label{cosetssize2krein} Consider an imprimitive association scheme with $\mathcal{J}=\{0,d\}$, $\mathcal{J}_j=\{j,d-j\}$ for $j=0,1,\dots,\ell-1<\frac{d}{2}$, and $\mathcal{J}_j=\{j\}$ for $j=\ell,\dots,d-\ell$. If $\ell \leq j \leq d-\ell$, then $q^i_{dj}=0$ for $i \neq j$ and $\rho_j =q^j_{dj}+1=w$. If $j < \ell$ or $j > d-\ell$, then $\rho_j=q^{d-j}_{dj}=1+q^j_{dj}$ and $q^i_{dj}=0$ for $i \neq j,d-j$, and moreover, $m_{d-j} \leq (w-1)m_j$ with equality if and only if $q^j_{dj}=0$. \end{lemma}
\noindent The case of equality is one of the motivations for the following definition.
\begin{definition} \label{def:Qanti} An imprimitive association scheme is called Q-Higman if for some $\ell$ such that $1 \leq \ell < \frac{d}{2}+1$ and for some ordering of the primitive idempotents, we have that $\mathcal{J}=\{0,d\}$, $\mathcal{J}_j=\{j,d-j\}$ for $j=0,1,\dots,\ell-1$, $\mathcal{J}_j=\{j\}$ for $j=\ell,\dots,d-\ell$, and $q^d_{jj}=0$ (or equivalently $m_{d-j}=(w-1)m_j$) for $j=0,1,\dots,\ell-1$. \end{definition}
\noindent It is important to note that this Q-Higman property is formulated entirely in terms of the Krein parameters, in particular in terms of the dual intersection matrix $L^*_d$.
\begin{proposition}\label{Qantikrein} An association scheme is Q-Higman if and only if for some $\ell$ such that $1 \leq \ell < \frac{d}{2}+1$, for some $w$, and some ordering of the idempotents it holds that $q^{j}_{d,d-j}=w-1$ for $j < \ell$, $q^{j}_{d,d-j}=1$ and $q^{j}_{dj}=w-2$ for $j > d-\ell$, $q^{j}_{dj}=w-1$ for $\ell \leq j \leq d-\ell$, and $q^i_{dj}=0$ for all other values of $i$ and $j$. Moreover, if this is the case, then $\rho_j=1$ if $j < \ell$, $\rho_{j}=w$ if $\ell \leq j \leq d-\ell$, and $\rho_j=w-1$ if $j > d-\ell$. \end{proposition}
\begin{proof} If the scheme is Q-Higman, then the stated properties follow from the above considerations. On the other hand, suppose that these properties hold. Then it follows that $v(E_0+E_d) \circ (E_0+E_d)=w(E_0+E_d)$ and that $\langle E_0,E_d \rangle$ is a $\circ$-subalgebra. This means that the scheme is imprimitive with $\mathcal{J}=\{0,d\}$ and fibres of size $\frac{v}{w}$. The equivalence classes of $\sim^*$ then easily follow, and so does the conclusion that the scheme is Q-Higman. \end{proof}
\noindent We note that the standard relations between the Krein parameters of a scheme (e.g., see \cite[Lemma 2.3.1]{bcn}) give some more specific information on those of Q-Higman schemes. It can for example be derived (from \cite[Lemma 2.3.1]{bcn} or directly by working out the product $E_i \circ E_j \circ E_d$ in different ways) that if $j<\ell$ and $i$ is arbitrary, then $q^h_{i,d-j}=(w-1)q^{d-h}_{ij}$ for $h < \ell$, $q^h_{i,d-j}=(w-1)q^{h}_{ij}$ for $\ell \leq h \leq d-\ell$, and $q^h_{i,d-j}=q^{d-h}_{ij}+(w-2)q^h_{ij}$ for $h > d-\ell$. It also follows that $q^h_{ij}=q^{d-h}_{ij}$ for all $i$, $\ell \leq j\leq d-\ell$ and $h < \ell$. In the cometric Q-antipodal case, we include these observations in Lemma \ref{Lqijk2} below.
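By way of illustration, for a Q-Higman scheme with $d=3$ and $\ell=2$ (so that $\mathcal{J}_1=\{1,2\}$; this is the situation of Section \ref{sec:threeclass} below), Proposition \ref{Qantikrein} gives $$L^*_3=(q^i_{3j})_{i,j=0,\dots,3}=\begin{bmatrix} 0&0&0&w-1\\ 0&0&w-1&0\\ 0&1&w-2&0\\ 1&0&0&w-2\end{bmatrix},$$ with all row sums equal to $m_3=w-1$, and $(\rho_0,\rho_1,\rho_2,\rho_3)=(1,1,w-1,w-1)$.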
\subsubsection{The idempotents of uniform schemes}
In this section we shall show one of our main results, i.e., that Q-Higman schemes and uniform schemes are the same. For this we will again use the correspondence to uniform coherent configurations.
We remind the reader that $\mathcal{A}=\langle A_i\,|\,i
=0,\dots,d\rangle$ is the Bose-Mesner algebra of the association scheme under consideration, and that $\mathcal{B} =\langle A_i\,|\,i\in \mathcal{I}\rangle$ is the Bose-Mesner subalgebra on the fibres. Moreover, we let
$$\mathcal{D}:=\langle A_i\,|\,i\not\in \mathcal{I}\rangle.$$ In order to show that a uniform scheme is Q-Higman, and to find relations with its dismantled schemes, we study its idempotents. We start off with the case of bipartite schemes, i.e., imprimitive schemes with two fibres.
\begin{lemma}\label{bipartite} A bipartite scheme is Q-Higman. Each primitive idempotent of $\mathcal{B}$ that is not a primitive idempotent of $\mathcal{A}$ is of the form $E+E'$, where $E$ and $E'$ are primitive idempotents of $\mathcal{A}$, and $E-E' \in \mathcal{D}$. \end{lemma}
\begin{proof} Consider a bipartite scheme with fibres $U$ and $V$. Because all relations $R_i, i \notin \mathcal{I}$ are bipartite, it follows that $E=\mtrx{E_{UU}}{E_{UV}}{E_{VU}}{E_{VV}}$ is a primitive idempotent if and only if $E'=\mtrx{E_{UU}}{-E_{UV}}{-E_{VU}}{E_{VV}}$ is a primitive idempotent. Moreover, if $E$ and $E'$ are indeed primitive idempotents of $\mathcal{A}$ and $E_{UV} \neq 0$, or equivalently, $E \notin \mathcal{B}$, then $E+E'$ is a primitive idempotent of the Bose-Mesner subalgebra $\mathcal{B}$, and $E-E'\in \mathcal{D}$. This implies that the primitive idempotents of $\mathcal{B}$ that are not primitive idempotents of $\mathcal{A}$ are of the form $E+E'$, where $E$ and $E'$ are primitive idempotents of $\mathcal{A}$, and $E-E'\in \mathcal{D}$. Thus, all sets $\mathcal{J}_j$ have size at most two. Moreover, the multiplicities of the idempotents $E$ and $E'$ are equal, because $\text{trace}(E)=\text{trace}(E')$. Thus, the scheme is Q-Higman. \end{proof}
\begin{lemma}\label{p:matrixD} Consider a uniform association scheme. Let $F\in\mathcal{B}$ be a primitive idempotent of $\mathcal{B}$. Then $F$ is a primitive idempotent of $\mathcal{A}$ if and only if $F\mathcal{D} = \{0\}$. Let $Y$ be a union of at least two fibres. Then $F^Y$ is a primitive idempotent of $\mathcal{B}^Y$. Moreover, $F^Y$ is a primitive idempotent of $\mathcal{A}^Y$ if and only if $F$ is a primitive idempotent of $\mathcal{A}$. \end{lemma}
\begin{proof} An idempotent $F$ of $\mathcal{A}$ is primitive if and only if $FA$ is proportional to $F$ for each $A\in\mathcal{A}$. Because $F$ is a primitive idempotent of $\mathcal{B}$, $FA$ is proportional to $F$ for each $A\in\mathcal{B}$. Therefore $F$ is a primitive idempotent of $\mathcal{A}$ if and only if $FA$ is proportional to $F$ for each $A\in \mathcal{D}$. So consider $A\in \mathcal{D}$. Because $F$ is block-diagonal and $A^{U}=0$ for $U\in\mathcal{F}$, we obtain $(FA)^{U}=0$. Therefore $FA$ is proportional to $F$ if and only if $FA=0$.
Because of the block-diagonal structure of $\mathcal{B}$, $F^Y$ is clearly a primitive idempotent of $\mathcal{B}^Y$, and $(F\mathcal{D})^Y = F^Y\mathcal{D}^Y$. Because the linear map $A\mapsto A^Y$ is a bijection between $\mathcal{A}$ and $\mathcal{A}^Y$, it follows that $F\mathcal{D}= \{0\}$ if and only if $F^Y\mathcal{D}^Y= \{0\},$ hereby proving the final statement of the lemma. \end{proof}
\begin{theorem}\label{uniformidempotents} Consider a uniform association scheme. Let $F_0,\dots,F_e\in\mathcal{B}$ be a complete set of primitive idempotents of $\mathcal{B}$, ordered such that $F_0,\dots,F_{\ell-1}$ are not primitive in $\mathcal{A}$, and $F_{\ell},\dots,F_e$ are primitive in $\mathcal{A}$. Then for each $j=0,\dots,\ell-1$ there exists a matrix $D_j\in\mathcal{D}$ such that for each union $Y$ of $w' \geq 2$ fibres, the matrices $\frac{1}{w'}(F_j^Y+ D_j^Y)$ and $F_j^Y-\frac{1}{w'}(F_j^Y+ D_j^Y), j=0,\dots,\ell-1$, and $F_{\ell}^Y,\dots,F_e^Y$ are the primitive idempotents of $\mathcal{A}^Y$. \end{theorem} \begin{proof} First of all it follows from Lemma \ref{p:matrixD} that the matrices $F_{\ell}^Y,\dots,F_e^Y$ are primitive idempotents of $\mathcal{A}^Y$. Secondly, we fix $j \in \{0,\dots,\ell-1\}$ for the moment, and let $F:=F_j$. We then claim that there is a matrix $D\in\mathcal{D}$, which is unique up to sign, such that for any two distinct fibres $U, V$, we have \begin{align} F^{UU} D^{UV}&=D^{UV} F^{VV} = D^{UV}, \notag \\ D^{UV}D^{VU}&=F^{UU}, \label{matrixD} \\ D^{VU}D^{UV}&=F^{VV}. \notag \end{align} To prove this claim, we first fix two fibres $U$ and $V$, let $Z:=U\cup V$, and consider the bipartite dismantled scheme on $Z$. By Lemma \ref{p:matrixD} we have that $F^Z$ is a primitive idempotent of $\mathcal{B}^Z$ which is not a primitive idempotent of $\mathcal{A}^Z$. From Lemma \ref{bipartite} we obtain that $F^Z = E + E'$, where $E$ and $E'$ are primitive idempotents of $\mathcal{A}^Z$ such that $E-E' \in \mathcal{D}^Z$. Because the map $D\mapsto D^Z$ is a bijection between $\mathcal{D}$ and $\mathcal{D}^Z$, there is a matrix $D \in \mathcal{D}$ such that $D^Z=E-E'$. Because $E$ and $E'$ are orthogonal, this matrix $D$ satisfies $F^ZD^Z=D^ZF^Z=D^Z$ and $(D^Z)^2=F^Z$. It then follows that $D$ satisfies \eqref{matrixD} for the fixed fibres $U$ and $V$. Now we use the fact that ${\sf Sym}(\mathcal{F})$ acts doubly transitively on $\mathcal{F}$: by applying algebraic automorphisms $\sigma \in{\sf Sym}(\mathcal{F})$ to these equations, we find that they hold for all fibres $U,V$.
It remains to prove uniqueness of $D$. Let $M\in\mathcal{D}$ be a matrix satisfying~\eqref{matrixD}, i.e., $F^ZM^Z=M^ZF^Z=M^Z$ and $(M^Z)^2=F^Z$. Because $F^Z=E+E'$ and $E,E'$ are primitive idempotents of $\mathcal{A}^Z$, there exist four solutions of the equation $(M^Z)^2=F^Z$ with $M^Z\in\mathcal{A}^Z$, namely $\pm E \pm E'$ (this easily follows by writing $M^Z$ as a linear combination of primitive idempotents of $\mathcal{A}^Z$). On the other hand, the matrices $\pm F^Z,\pm D^Z$ satisfy this equation. Therefore $M^Z=\pm D^Z$. Again, because the map $D\mapsto D^Z$ is a bijection between $\mathcal{D}$ and $\mathcal{D}^Z$, we obtain that $M=\pm D$, and the claim is proven.
The above considerations show the existence of $D\in\mathcal{D}$ such that $FD=D$ and $D^2 = (w-1)F+(w-2)D$ for the case $w=2$. Now let us assume that $w \geq 3$. Fix three arbitrary but distinct fibres, say $U,V,W$, and consider the product $D^{UV}D^{VW}$. Because of uniformity this product belongs to $\mathcal{A}^{UW}$. Therefore there exists a $G\in\mathcal{D}$ such that $G^{UW} = D^{UV}D^{VW}$. It follows from~\eqref{matrixD} that $F^{UU} G^{UW} = G^{UW} F^{WW} = G^{UW}$, $G^{UW}G^{WU}=F^{UU}$, and $G^{WU}G^{UW}=F^{WW}$. From the above claim it then follows that $G=\varepsilon D$, where $\varepsilon=\pm 1$. Thus $D^{UV} D^{VW} =\varepsilon D^{UW}$, and after replacing $D$ by $\varepsilon D$ this becomes $D^{UV} D^{VW} =D^{UW}$. Applying --- as before --- algebraic automorphisms $\sigma \in{\sf Sym}(\mathcal{F})$ to this equality we obtain that $D^{U'V'} D^{V'W'} = D^{U'W'}$ for any triple of pairwise distinct fibres $U',V',W'$.
If $Y$ is a union of $w' \geq 2$ fibres, then a routine calculation shows that $(D^Y)^2 = (w'-1)F^Y+(w'-2)D^Y$. Letting $j$ vary again, and writing $F_j$ and $D_j$ for the corresponding matrices, we thus obtain that \begin{equation}\label{eq:ED} F_j^YD_j^Y=D_j^Y \text{~and~} (D_j^Y)^2 = (w'-1)F^Y_j+(w'-2) D^Y_j. \end{equation} For fixed $Y$, it remains to show that the matrices $E_j:=\frac{1}{w'}(F_j^Y+ D_j^Y)$ and $E_j':=F_j^Y-\frac{1}{w'}(F_j^Y+ D_j^Y), j=0,\dots,\ell-1$, and $F_{\ell}^Y,\dots,F_e^Y$ are the primitive idempotents of $\mathcal{A}^Y$. It follows from~\eqref{eq:ED} that $E_j,E_j'$ are pairwise orthogonal idempotents. To show that $E_j, E_j'$ are orthogonal to $E_h,E_h'$ for $h\neq j$, and to $F_h^Y$ for $h \geq \ell$, it is sufficient to check that $F^Y_j D^Y_h = F^Y_h D^Y_j = D^Y_j D^Y_h =0$. These equations hold because $F^Y_h D^Y_j = F^Y_h F^Y_j D^Y_j =0$ and $D^Y_j D^Y_h = F^Y_j D^Y_j F^Y_h D^Y_h = F^Y_j F^Y_h D^Y_j D^Y_h = 0$.
Thus we have $2\ell+e+1-\ell=e+1+\ell$ pairwise orthogonal idempotents of $\mathcal{A}^Y$. It remains to show that $d+1=e+1+\ell$. Because $d,e,\ell$ do not depend on $w'$ (for $w'\geq 2$; for $\ell$ this follows from Lemma \ref{p:matrixD}), it is enough to check this equality for $w'=2$. But in the case of $w'=2$ each primitive idempotent of $\mathcal{B}$ is either primitive in $\mathcal{A}$ or splits into a sum of two primitive idempotents of $\mathcal{A}$, as we saw in Lemma \ref{bipartite}. This implies that $d+1=e+1+\ell$. \end{proof}
\begin{corollary}\label{coruniformQ} A uniform association scheme is Q-Higman. \end{corollary}
\begin{proof} Consider a uniform association scheme. Apply Theorem \ref{uniformidempotents} with $Y=X$ and $w'=w$ to see that the sets $\mathcal{J}_j$ have size at most two, i.e., its primitive idempotents that are not primitive idempotents of $\mathcal{B}$ come in pairs $E_j, E_j'$. The corresponding multiplicities satisfy $m_j'=\text{trace}(E_j')=\frac{w-1}{w}\text{trace}(F_j)=(w-1)\text{trace}(E_j)=(w-1)m_j$, which concludes the proof. \end{proof}
\noindent Note that implicitly we have shown that $\ell \leq e + 1$ in a uniform association scheme (because this is equivalent to $\ell<\frac d2+1$). In other words, the number of relations between two fibres is at most the number of relations within each fibre. This also follows from a more general result for coherent configurations with commutative schemes on the fibres. Indeed, according to Higman \cite[p.\ 227]{HigmanCA} and Weisfeiler \cite[Cor.~14, p.\ 87]{weis}, the number of relations between two fibres is at most the number of relations in each of these two fibres. The special case where the number of relations between two fibres and within each fibre is the same (for all fibres) comprises the so-called balanced coherent configurations, and these have been studied by Hirasaka and Sharafdini \cite{HR}.
The next result also follows easily from Theorem \ref{uniformidempotents}.
\begin{corollary}\label{dismantledidempotents} Consider a uniform association scheme, with primitive idempotents $E_j, j=0,\dots,d$ (ordered as in Definition \ref{def:Qanti}), and let $Y$ be a union of $w' \geq 2$ fibres. Then the primitive idempotents of the dismantled scheme on $Y$ are $\overline{E}_j:=\frac{w}{w'}E^Y_j$ and $\overline{E}_{d-j}:=E^Y_{d-j}+E^Y_j- \frac{w}{w'}E^Y_j$, $j=0,\dots,\ell-1$, and $\overline{E}_j:=E_j^Y$, $j=\ell,\dots,d-\ell$. \end{corollary}
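\noindent Indeed, spelling out the idempotents of Theorem \ref{uniformidempotents}: for $j<\ell$ we have $F_j=E_j+E_{d-j}$ and one may take $D_j=wE_j-F_j=(w-1)E_j-E_{d-j}$, so that $$\tfrac{1}{w'}(F_j^Y+D_j^Y)=\tfrac{w}{w'}E_j^Y=\overline{E}_j \qquad\text{and}\qquad F_j^Y-\tfrac{1}{w'}(F_j^Y+D_j^Y)=E^Y_{d-j}+E^Y_j-\tfrac{w}{w'}E^Y_j=\overline{E}_{d-j},$$ while for $\ell\le j\le d-\ell$ the idempotent $E_j$ lies in $\mathcal{B}$ and $\overline{E}_j=E_j^Y$.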
\noindent To show the converse of Corollary \ref{coruniformQ}, i.e., that a Q-Higman scheme is uniform, we use the following lemma, whose proof is similar to the dismantlability proof of a cometric Q-antipodal scheme in \cite[Thm.~4.7]{mmw}.
\begin{lemma}\label{p_1} Consider a Q-Higman scheme. Then for each fibre $U$, $$ E_j I^U E_h = \begin{cases} w^{-1} E_j & {\rm if\ } h = j {\rm\ and\ } j =0,\dots,\ell-1;\\ E_j I^U - w^{-1} E_j & {\rm if\ } h = d-j {\rm\ and\ } j =0,\dots,\ell-1;\\ I^U E_{d-j} - w^{-1} E_{d-j} & {\rm if\ } h = d-j {\rm\ and\ } j=d-\ell+1,\dots,d;\\ E_j I^U - I^U E_{d-j} + w^{-1} E_{d-j} & {\rm if\ } h = j {\rm\ and\ } j=d-\ell+1,\dots,d;\\ E_j I^U& {\rm if\ } h =j {\rm\ and\ } j =\ell,\dots,d-\ell;\\ 0 & {\rm otherwise.} \end{cases} $$ \end{lemma} \begin{proof} Similar as in the proof of \cite[Thm.~4.7]{mmw}, it follows from \cite[p.\ 61, Eq. 9]{bcn} that \begin{equation}\label{eq_main} \parallel v E_j I^U E_h - n\delta_{jh} E_j\parallel^2 = q_{jh}^dn^2(w-1). \end{equation} To start with the bottom line of the expression for $E_j I^U E_h$: if $h\nsim^* j$ then $h \neq j$ and $q_{jh}^d=0$, and we obtain from \eqref{eq_main} that $E_j I^U E_h = 0$.
If $h = j$ with $j =0,\dots,\ell-1$, then $q_{jh}^d =0$ and so $E_j I^U E_j = w^{-1} E_j$.
If $h = d-j$ with $j =0,\dots,\ell-1$, then $$ E_jI^U E_{d-j} = E_j I^U (I-\sum_{i\neq d-j} E_i) = E_jI^U - E_jI^U E_{j} = E_j I^U - w^{-1} E_j. $$
For $j=d-\ell+1,\dots,d$, we have that $0\le d-j \le \ell-1$, hence from the above it follows that $E_{d-j}I^U E_j = E_{d-j} I^U - w^{-1} E_{d-j}$. By transposing this expression we obtain that $E_jI^U E_{d-j} = I^U E_{d-j} - w^{-1} E_{d-j}$.
Also for $j=d-\ell+1,\dots,d$ we have that $$ E_jI^U E_j = E_j I^U (I-\sum_{i\neq j} E_i) = E_j I^U - E_j I^U E_{d-j} = E_j I^U - I^U E_{d-j} + w^{-1} E_{d-j}. $$ For $j =\ell,\dots,d-\ell$, the idempotent $E_j$ is block-diagonal, implying that $E_jI^U E_j = E_j I^U$. \end{proof}
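\noindent As a quick consistency check of the six cases, summing over all fibres and using $\sum_{U\in\mathcal{F}}I^U=I$ gives $$\sum_{U\in\mathcal{F}}E_jI^UE_h=E_jE_h=\delta_{jh}E_j$$ in each case; for instance, for $h=j$ with $j=0,\dots,\ell-1$ the $w$ terms $w^{-1}E_j$ indeed sum to $E_j$.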
\begin{theorem}\label{p_coco} Consider a Q-Higman association scheme. Then
$$\mathfrak M:=\left\langle E_j^{UV}\,|\, j =0,\dots,d-\ell \mbox{ and }U,V\in \mathcal{F}\right\rangle$$ is a coherent algebra corresponding to a uniform coherent configuration. \end{theorem} \begin{proof} We shall show that $\mathfrak M$ is closed with respect to transposition, ordinary matrix multiplication, and entrywise multiplication, and contains $I$ and $J$, thus proving it is a coherent algebra.
First however, we claim that $E_j^{UV} \in \mathfrak M$ also for $j=d-\ell+1,\dots,d$. Indeed, in this case $0 \le d-j \le \ell-1$ and $v^{-1}E_j = E_d\circ E_{d-j}$. Therefore \begin{align*} E_{j}^{UV} = v E_d^{UV} \circ E_{d-j}^{UV} &= \begin{cases} -J^{UV} \circ E_{d-j}^{UV} & \mbox{ if } U\neq V \\ (w-1) J^{UV} \circ E_{d-j}^{UV} & \mbox{ if } U=V \end{cases} \\ &= \begin{cases} -E_{d-j}^{UV} & \mbox{ if } U\neq V \\ (w-1) E_{d-j}^{UV} & \mbox{ if } U=V \end{cases} \in \mathfrak M. \end{align*} Hence $E_j^{UV}\in \mathfrak M$ for each $j,U,V$. This implies that $E_j \in \mathfrak M$ for each $j$, and hence $I,J\in \mathfrak M$.
Concerning the closure properties, note that closure with respect to transposition is evident. Closure with respect to matrix multiplication follows from Lemma \ref{p_1}, because it implies that \begin{equation}\label{eq_product} E_i^{UV}E_j^{WZ} =\delta_{VW}\,\delta_{ij}\, \lambda \,E_i^{UZ}\in \mathfrak M, \end{equation} where $\lambda = w^{-1}$ for $i=0,\dots,\ell-1$ and $\lambda=\delta_{WZ}$ for $i=\ell,\dots,d-\ell$ (here $\delta$ is the Kronecker delta). Closure with respect to entrywise multiplication follows from $$ E_j^{UV}\circ E_h^{UV} = (E_j \circ E_h)^{UV} = v^{-1}\sum_{i=0}^d q_{jh}^i E_i^{UV} \in \mathfrak M.$$
It remains to show uniformity. Note that it is clear from the above that $\mathfrak M$ contains all the matrices $A_i^{UV}$; the nonzero matrices among these form a basis of Schur idempotents for the corresponding coherent configuration. Because $A_i^{UV}$ can be expressed as a linear combination of the $E_j^{UV}, j=0,\dots,d-\ell$, it follows from \eqref{eq_product} that the coherent configuration is uniform. \end{proof}
\begin{corollary}\label{Qantipodaluniform} A Q-Higman scheme is uniform. Any dismantled scheme of such a scheme is also Q-Higman. \end{corollary}
\begin{proof} The first statement follows from Theorem \ref{p_coco} and the correspondence between uniform coherent configurations and uniform schemes (Proposition \ref{one-one}). The second statement follows from dismantlability (Proposition \ref{uniformdismantle}) and the converse of the first part (Corollary \ref{coruniformQ}). \end{proof}
\noindent We thus have proven the following.
\begin{theorem} \label{TuniformiffQHigman} An association scheme is uniform if and only if it is Q-Higman. \end{theorem}
\section{Cometric Q-antipodal schemes}\label{sec:cometricQ}
A cometric association scheme (with a Q-polynomial ordering $E_0, E_1,\dots, E_d$) is called Q-antipodal if it is imprimitive with $\mathcal{J}=\{0,d\}$. It is called Q-bipartite if it is imprimitive with $\mathcal{J}=\{0,2,4,\dots\}$, or equivalently if $a_i^*=0$ for all $i$, cf. \cite{suzimprim}.
It was shown by Suzuki \cite{suzimprim} that an imprimitive cometric $d$-class association scheme is Q-antipodal, Q-bipartite, or both, unless possibly when $d=4$ or $d=6$. The exceptional cases for $d=4$ and $d=6$ were later ruled out by Cerzo and Suzuki \cite{cerzo} and Tanaka and Tanaka \cite{TT2011EJC}, respectively. Here we will consider the Q-antipodal case.
\subsection{Uniformity}
Consider a cometric Q-antipodal association scheme. In this case, it follows that the equivalence classes of the relation $\sim^*$ are $\mathcal{J}_j=\{j,d-j\}, j=0,1,\dots,\lfloor \frac{d}{2} \rfloor$. So the primitive idempotents of the Bose-Mesner subalgebra $\mathcal{B}$ are $F_j=E_j+E_{d-j}, j< \frac{d}{2}$, and $F_{\frac{d}{2}}=E_{\frac{d}{2}}$ for $d$ even. Note also that $q^j_{dj}=0$ for $j<\frac{d}{2}$ (because $q^h_{ij}=0$ whenever $h<|i-j|$ in a cometric scheme), hence a cometric Q-antipodal scheme is Q-Higman (with $\ell=\lceil \frac{d}{2} \rceil$), and therefore it is also uniform, and dismantlable. On the other hand, we will now show that a uniform cometric scheme is Q-antipodal.
\begin{theorem}\label{cometricuniformQantipodal} A cometric association scheme is uniform if and only if it is Q-antipodal. \end{theorem}
\begin{proof} One direction is clear from the above. Consider now a cometric scheme that is uniform with imprimitivity system $\mathcal{F}$. So the scheme is Q-Higman, and let us assume that the idempotents are ordered as in Definition \ref{def:Qanti}; in particular we have $\mathcal{J}=\{0,d\}$. In order to show that the scheme is cometric Q-antipodal, it suffices to show that $E_d$ is last in a Q-polynomial ordering too. In the case $d=3$, however, a somewhat degenerate case also arises where $E_d$ is second in the Q-polynomial ordering, but in this ordering $E_1$ is last and there is a second imprimitivity system $\mathcal{F}'$ with subscheme corresponding to $\mathcal{J}' = \{0,1\}$.
We first note that it is clear that $E_d$ cannot be a Q-polynomial generator, and that this proves the case $d=2$.
Next, consider the case $d > 3$. Then $E_d$ must take the last position in any Q-polynomial ordering as $E_i \circ E_d \in \langle E_i, E_{d-i} \rangle$ eliminates positions from three up to $d-1$ (taking $E_i$ to be the Q-polynomial generator) and position two (taking $i=d$ and some $E_j$, $j\in \{1,2,\dots,d-1\}$, in position four).
For the case $d=3$, we apply several properties of the Krein parameters from Proposition \ref{Qantikrein}. Consider a Q-polynomial ordering, and assume that $E_3$ is not in its last position. Because $q^3_{11}=0$, this ordering cannot be $E_0,E_1,E_3,E_2$, hence it must be $E_0,E_2,E_3,E_1$. In this latter case, the scheme is cometric Q-bipartite, hence $q^i_{2i}=0$ for all $i$. Because $q^2_{32}=w-2$, it follows that $m_2=\sum_j q^2_{2j}=w-1$, which in turn shows that $m_1=1$. Thus $\{E_0,E_1\}$ induces another imprimitivity system $\mathcal{F}'$ with $\mathcal{J}'=\{0,1\}$. Because $E_1$ is last in the Q-polynomial ordering under consideration, this implies that also in this case the scheme is cometric Q-antipodal. \end{proof}
\noindent An interesting consequence of Theorem \ref{cometricuniformQantipodal} is that among the cometric association schemes, the Q-antipodal ones can be recognized combinatorially.
The exceptional case in the above proof is realized only by the rectangular scheme $R(w,2), w>2$ (the direct product of the trivial schemes on $w$ and on $2$ vertices). Note that this cometric Q-antipodal Q-bipartite scheme has one Q-polynomial ordering, but two ``uniform'' imprimitivity systems; for one of these systems there is a uniform ordering of the idempotents (as in Definition \ref{def:Qanti}) that matches the Q-polynomial ordering, but not for the other. The proof of Theorem \ref{cometricuniformQantipodal} thus implies the following.
\begin{corollary}\label{orderfree} Consider a uniform $d$-class association scheme with $\mathcal{J}=\{0,d\}$. If the scheme is cometric then $E_d$ is in the last position in any cometric ordering, unless possibly when $d=3$ and the scheme is isomorphic to the rectangular scheme $R(w,2), w>2$. \end{corollary}
\noindent We next obtain some (known) results for the parameters of cometric Q-antipodal schemes. These are used, for example, to show that the dismantled schemes are also cometric.
\begin{lemma}\label{cometricparameters} A cometric Q-antipodal scheme has $b_j^*=c_{d-j}^*$ for all $j \neq \lfloor \frac{d}{2} \rfloor$, $a_j^*=a_{d-j}^*$ for all $j \neq \frac{d-1}{2}, \frac{d+1}{2}$, and $m_{d-j}=(w-1)m_j$ for $j<\frac{d}{2}$. Moreover, for $j = \lfloor \frac{d}{2} \rfloor$, it holds that $b_j^*=(w-1)c^*_{d-j}$. \end{lemma}
\begin{proof} From the fact that $E_1 \circ F_j \in \mathcal{B}$, it follows that this matrix is a linear combination of the $F_i$. From the expressions of $E_1 \circ E_j$ and $E_1 \circ E_{d-j}$ in terms of Krein parameters and idempotents, we then find that $b_j^*=c_{d-j}^*$ and $a_j^*=a_{d-j}^*$ for all $j \neq \lfloor \frac{d}{2} \rfloor$. It follows from Lemma \ref{cosetssize2krein} that $m_{d-j}=(w-1)m_j$ for $j<\frac{d}{2}$.
For odd $d$, and $j=\frac{d-1}{2}$, we have that $b_j^*=\frac{m_{j+1}}{m_j}c^*_{j+1}=(w-1)c^*_{d-j}$. For even $d$, and $j=\frac{d}{2}$, we have $b_j^*=\frac{m_{j+1}}{m_j}c^*_{j+1}=(w-1)\frac{m_{j-1}}{m_j}b^*_{j-1}=(w-1)c^*_{j}=(w-1)c^*_{d-j}$. \end{proof}
\noindent Before we compute the Krein parameters of the subscheme, we determine the dual intersection matrix $L^*_d$ and the values of $\rho_j$. These follow immediately from Proposition \ref{Qantikrein}.
\begin{lemma} \label{Lqijk} The Krein parameters of a cometric Q-antipodal scheme satisfy the following properties: \begin{enumerate} \item[(i)] $q^{j}_{d,d-j}=w-1$ for $j \leq \frac{d}{2}$; \item[(ii)] $q^{j}_{d,d-j}=1$ and $q^{j}_{dj}=w-2$ for $j > \frac{d}{2}$; \item[(iii)] $q^i_{dj}=0$ for all other values of $i$ and $j$. \end{enumerate} Moreover, $\rho_j=1$ if $j< \frac{d}{2}$, $\rho_{\frac{d}{2}}=w$, and $\rho_j=w-1$ if $j> \frac{d}{2}$. \end{lemma}
\noindent For convenient reference, we also collect here a few equations involving the remaining Krein parameters that were obtained in Section \ref{uniformschemes} above.
\begin{lemma} \label{Lqijk2} The Krein parameters of a cometric Q-antipodal scheme satisfy the following properties: if $0\le j< \frac{d}{2}$ and $0\le i\le d$, then \begin{enumerate} \item[(i)] $q^h_{i,d-j}=(w-1)q^{d-h}_{ij}$ for $h \le \frac{d}{2}$; \item[(ii)] $q^h_{i,d-j}=q^{d-h}_{ij}+(w-2)q^h_{ij}$ for $h > \frac{d}{2}$; and \item[(iii)] $q^h_{i,\frac{d}{2}}=q^{d-h}_{i,\frac{d}{2}}$ for all $h$ when
$d$ is even. \end{enumerate} \end{lemma}
\subsection{Subschemes}
\noindent Lemma \ref{subkrein} can now be used to show that the subschemes are cometric (which is analogous to a result on the folded graph of an antipodal distance-regular graph, see \cite[Prop. 4.2.2.ii]{bcn}).
\begin{proposition}\label{Kreinarray} Let $(X,\mathcal{R})$ be a cometric Q-antipodal association scheme with $w$ fibres, and Krein array $\{b_0^*,b_1^*,\dots,b_{d-1}^*;c_1^*,c_2^*,\dots,c_d^*\}$, where $d \geq 3$. Then the subschemes induced on the fibres are cometric with Krein array $$\{b_0^*,b_1^*,\dots,b_{\frac{d-1}{2}-1}^*;c_1^*,c_2^*,\dots,c_{\frac{d-1}{2}}^*\}$$ for $d$ odd, and Krein array $$\{b_0^*,b_1^*,\dots,b_{\frac{d}{2}-1}^*;c_1^*,c_2^*,\dots,c_{\frac{d}{2}-1}^*,wc_{\frac{d}{2}}^*\}$$ for $d$ even. \end{proposition}
\begin{proof} We use Lemma \ref{subkrein} with $i'=i=1$ and $j'=j \leq \frac{d}{2}$ and $h \leq \frac{d}{2}$.
First, let $h < \frac{d}{2}-1$. Then we have that $q^{d-h}_{1j}=0$ because the scheme is cometric, and hence $\tilde{q}^{h}_{1j}=0$ if $j<h-1$ or $j>h+1$. Moreover, $\tilde{q}^{h}_{1,h-1}=q^{h}_{1,h-1}=c^*_h$, and $\tilde{q}^{h}_{1,h+1}=q^{h}_{1,h+1}=b^*_h$. Similarly, it follows for $h \geq \frac{d}{2}-1$ that $\tilde{q}^{h}_{1j}=0$ if $j<h-1$.
For $h=\frac{d}{2}-1$, we have $\tilde{q}^{h}_{1,h-1}=q^{h}_{1,h-1}=c^*_h$, and $\tilde{q}^{h}_{1,h+1}=\frac{1}{w}(q^{h}_{1,h+1}+(w-1)q^{d-h}_{1,h+1})=\frac{1}{w}(b^*_h+(w-1)c^*_{d-h})=b^*_h$.
For $h=\frac{d}{2}$, we obtain that $\tilde{q}^{h}_{1,h-1}=wq^{h}_{1,h-1}=wc^*_h$, and finally, for $h=\frac{d-1}{2}$, we obtain that $\tilde{q}^{h}_{1,h-1}=q^{h}_{1,h-1}=c^*_h$. Thus, it follows that the scheme is cometric, and the Krein array follows. \end{proof}
\noindent Note that it follows from the proof that the Q-polynomial ordering of idempotents is $F_0,F_1,\dots,F_{\lfloor \frac{d}{2} \rfloor}$. The multiplicities $\tilde{m}_j = \text{~rank~}F_j$ of a subscheme can, for example, be computed as follows: $\tilde{m}_j=\tilde{q}^0_{jj}=\frac{1}{w}(q^0_{jj}+q^0_{d-j,d-j})=m_j$ for $j \neq \frac{d}{2}$, and $\tilde{m}_{\frac{d}{2}}=\frac{1}{w}m_{\frac{d}{2}}$.
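For example, for $d=4$ the proposition gives the Krein array $$\{b_0^*,\,b_1^*;\ c_1^*,\,wc_2^*\},$$ so the subscheme on each fibre is that of a strongly regular graph; compare the four-class schemes in Section \ref{sec:four} below.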
\subsection{Dismantled schemes}
Proposition \ref{Kreinarray} is a well-known result. In \cite[Thm. 4.7]{mmw} it was shown that a cometric Q-antipodal scheme is dismantlable, with its dismantled schemes being cometric Q-antipodal too. The proof of the latter is not complete however, because incorrect idempotents are suggested there. The fact that such a dismantled scheme is Q-Higman is clear from Corollary \ref{Qantipodaluniform}. That it is cometric Q-antipodal can be shown as follows using Corollary \ref{dismantledidempotents}.
\begin{theorem}\label{Kreinarraydismantled} Let $(X,\mathcal{R})$ be a cometric Q-antipodal association scheme with $w$ fibres, and Krein array $\{b_0^*,b_1^*,\dots,b_{d-1}^*;c_1^*,c_2^*,\dots,c_d^*\}$, where $d \geq 3$, and let $\ell=\lceil \frac{d}{2} \rceil$. Then the dismantled scheme induced on a union $Y$ of $w' \geq 2$ fibres is cometric Q-antipodal with Krein array $\{\overline{b}_0^*,\overline{b}_1^*,\dots,\overline{b}_{d-1}^*;\overline{c}_1^*,\overline{c}_2^*,\dots,\overline{c}_d^*\},$ where $$\overline{c}_{j}^* = c_{j}^* \text{~for~} j \neq \ell \text{,~and~} \overline{c}_{\ell}^* = \frac{w}{w'}c_{\ell}^*~,$$ $$\overline{b}_{j}^* = b_{j}^* \text{~for~} j \neq d-\ell \text{,~and~} \overline{b}_{d-\ell}^*=\frac{w}{w'}\frac{w'-1}{w-1}b_{d-\ell}^*~.$$ \end{theorem}
\begin{proof} The stated result follows from working out the products $\overline{E}_1 \circ \overline{E}_j$ for all $j$, where we use the expressions for $\overline{E}_j$ in Corollary \ref{dismantledidempotents}, and the expressions for the dual intersection numbers $b_j^*, a_j^*$, and $c_j^*$ in Lemma \ref{cometricparameters}. For most cases this is rather straightforward; for readability we will therefore only give the details of one of the more complicated cases, i.e., that of $d$ even and $j=\ell+1$. In this case, with $v'=w'n=\frac{w'}{w}v$ being the number of vertices in $Y$, we have that \begin{align*} v'\overline{E}_1 \circ \overline{E}_{\ell+1} & = vE_1^Y \circ(E^Y_{\ell+1}+ \frac{w'-w}{w'}E^Y_{\ell-1})\\
& = b_{\ell}^*E_{\ell}^Y+a_{\ell+1}^*E_{\ell+1}^Y+c_{\ell+2}^*E_{\ell+2}^Y\\ &\quad + \frac{w'-w}{w'}(b_{\ell-2}^*E_{\ell-2}^Y+a_{\ell-1}^*E_{\ell-1}^Y+c_{\ell}^*E_{\ell}^Y)\\
& = (b_{\ell}^*+\frac{w'-w}{w'}c_{\ell}^*)E_{\ell}^Y + a_{\ell+1}^*(E^Y_{\ell+1}+ \frac{w'-w}{w'}E^Y_{\ell-1})\\ &\quad +c_{\ell+2}^*(E_{\ell+2}^Y+ \frac{w'-w}{w'}E^Y_{\ell-2})\\
& = \frac{w}{w'}\frac{w'-1}{w-1}b_{\ell}^*\overline{E}_{\ell} + a_{\ell+1}^*\overline{E}_{\ell+1} + c_{\ell+2}^*\overline{E}_{\ell+2}. \end{align*} Because $\ell=d-\ell$, it thus follows that $\overline{b}_{d-\ell}^*=\frac{w}{w'}\frac{w'-1}{w-1}b_{d-\ell}^*$, $\overline{a}_{\ell+1}^* = a_{\ell+1}^*$, and $\overline{c}_{\ell+2}^* = c_{\ell+2}^*$. The other parameters follow similarly, and prove the statement. \end{proof}
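\noindent As a consistency check, for $w'=w$ these formulas return the original Krein array, whereas for $w'=2$ they give $\overline{c}^*_{\ell}=\frac{w}{2}c^*_{\ell}$ and $\overline{b}^*_{d-\ell}=\frac{w}{2(w-1)}b^*_{d-\ell}$ for the bipartite (two-fibre) dismantled schemes.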
\begin{corollary} Let $(X,\mathcal{R})$ be a cometric Q-antipodal $d$-class association scheme with $w \geq 3$ fibres, with $d$ odd and $\ell=\frac{d+1}{2}$. Then $a_{\ell}^* \neq 0$. Moreover, if $Y$ is a union of $w'$ fibres, where $w>w' \geq 2$, then $\overline{a}_{\ell-1}^* \neq 0$. \end{corollary}
\begin{proof} If $w>w' \geq 2$, then $\overline{a}_{\ell}^*=\overline{b}_{0}^*-\overline{c}_{\ell}^*-\overline{b}_{\ell}^*= b_0^*-\frac{w}{w'}c_{\ell}^*-b_{\ell}^*<a_{\ell}^*$, and similarly $\overline{a}_{\ell-1}^*>a_{\ell-1}^*$. The result follows from these inequalities. \end{proof}
\noindent So, if $d$ is odd, and the scheme is cometric Q-antipodal Q-bipartite, then $w=2$. Moreover, it cannot be a dismantled scheme of a cometric Q-antipodal scheme with more fibres.
\subsection{The natural ordering of relations}\label{natural ordering}
For a cometric scheme, we define the natural ordering of relations as the one satisfying $Q_{01}>Q_{11}> \cdots >Q_{d1}$. Recall that $Q_{ij}A_i=vE_j \circ A_i$. Because $\sum_{i \in \mathcal{I}} A_i=n(E_0+E_d)=I_{w}\otimes J_n$ for Q-antipodal schemes, it follows that in this case $Q_{id}$ equals $w-1$ if $i \in \mathcal{I}$, and $-1$ otherwise.
The orthogonal polynomials $q_j, j=0,1,\dots,d+1$ associated with the cometric scheme have the property that $Q_{ij}=q_j(Q_{i1}), j=0,1,\dots,d$ and $q_{d+1}(Q_{i1})=0$. Because the roots of $q_j$ and $q_{j+1}$ interlace (a standard and easily proven property of orthogonal polynomials, cf. \cite[Thm. 5.3]{orthog}), it follows that the values of $Q_{id}$ alternate in sign. Thus for cometric Q-antipodal schemes it follows that $\mathcal{I}=\{0,2,4,\dots\}$.
\section{Three-class uniform schemes; linked systems of symmetric designs}\label{sec:threeclass}
Every two-class imprimitive association scheme is uniform and cometric. It has one (nontrivial) relation within the fibres and one across the fibres (it is a wreath product of two trivial schemes), and may thus be seen as a linked system of complete designs. Likewise, an imprimitive three-class scheme with one relation across the fibres is uniform (and decomposable), but such a scheme clearly cannot be cometric.
It is well-known that (homogeneous) linked systems of symmetric designs give three-class association schemes, and in fact, these are uniform, almost by definition, and cometric Q-antipodal (for information on such linked systems we refer to \cite{vandam}, \cite{mmw}, and the references therein). In \cite[Thm. 5.8]{vandam} it was conversely shown (in a different context though) that imprimitive indecomposable three-class schemes with one extra condition on the multiplicities must come from such linked systems. We can derive this easily now from the results in the previous sections.
Indeed, let us consider a three-class imprimitive association scheme that is indecomposable. Such a scheme must have two relations across the fibres and have a trivial quotient scheme. Thus we may assume that $\mathcal{J}=\{0,3\}$, $\mathcal{J}_1=\{1,2\}$, and $\mathcal{I}=\{0,2\}$. Moreover we may assume that $m_2 \geq m_1$. It then follows that the scheme is uniform (Q-Higman) if and only if $m_2=(w-1)m_1$ (which is the case if and only if $m_1=n-1$). It is clear (straight from the definition) that such a uniform scheme corresponds to a linked system of symmetric designs. We thus obtain the same result as in \cite[Thm. 5.8]{vandam}. The eigenmatrices of a three-class uniform scheme can be written as
\begin{equation*}\label{P-matrix3} P = \begin{bmatrix} 1 & (w-1)k_1 & n-1 & (w-1)(n-k_1) \\ 1 & P_{11} & -1 & -P_{11} \\ 1 & -\frac{1}{w-1}P_{11} & -1 & \frac{1}{w-1}P_{11} \\ 1 &-k_1& n-1 & -(n-k_1) \end{bmatrix} \end{equation*} and \begin{equation*}\label{Q-matrix3} Q = \begin{bmatrix} 1 & n-1 & (w-1)(n-1) & w-1 \\ 1 & Q_{11} & -Q_{11} & -1 \\ 1 & -1 & -(w-1) & w-1 \\ 1 & -\frac{k_1}{n-k_1}Q_{11} & \frac{k_1}{n-k_1}Q_{11} & -1 \end{bmatrix}, \end{equation*} where $k_1$ is the block size of the symmetric designs in the corresponding linked system. If we order the relations such that $P_{11}>0$, then $P_{11}=(w-1)\sqrt{\frac{k_1(n-k_1)}{n-1}}$ and $Q_{11}=\sqrt{\frac{(n-1)(n-k_1)}{k_1}}$. We remark that Noda \cite[Prop. 0]{noda} showed that $\frac{k_1(n-k_1)}{n-1}$ is a square (integer) if $w \geq 3$.
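To illustrate these expressions numerically, consider the parameters of the symmetric $2$-$(16,6,2)$ designs occurring in the classical linked systems related to Kerdock codes \cite{cs}, so $n=16$ and $k_1=6$. Then $$\frac{k_1(n-k_1)}{n-1}=\frac{6\cdot 10}{15}=4, \qquad P_{11}=2(w-1), \qquad Q_{11}=\sqrt{\frac{15\cdot 10}{6}}=5,$$ in accordance with Noda's result.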
Because the equality $m_2=(w-1)m_1$ is equivalent to $q^3_{11}=0$, it follows that such a uniform scheme is cometric except possibly when $k_1=1$ (note that $q^{3}_{12}>0$ because $1 \sim^* 2$; and $q^2_{11}>0$ follows except when $k_1=1$; we omit the derivation). In case $k_1=1$ however, the scheme is decomposable: it is a rectangular scheme $R(w,n)$ (the direct product of two trivial schemes), which is cometric (and metric) if and only if exactly one of $w$ and $n$ equals $2$. We thus conclude the following.
\begin{proposition}\label{Qpoly3} Consider an imprimitive three-class association scheme that is indecomposable, and assume without loss of generality that $\mathcal{J}=\{0,3\}$, $\mathcal{I}=\{0,2\}$, and $m_2 \geq m_1$. Then it is uniform if and only if $m_2=(w-1)m_1$. If so, then it is cometric Q-antipodal and corresponds to a linked system of symmetric designs. \end{proposition}
\noindent Davis, Martin, and Polhill \cite{DMP} recently constructed new linked systems of symmetric designs by using difference sets. These have the same parameters as the classical ones arising from Kerdock codes \cite{cs}. A recent construction by Holzmann, Kharaghani, and Orrick \cite[Thm. 2.7]{hadi} of real unbiased Hadamard matrices can be used to construct linked systems of symmetric designs with new parameters. More precisely, starting from an arbitrary Hadamard matrix of order $2u$ and an arbitrary set of $w-1$ mutually orthogonal Latin squares of side $2u$, they construct $w-1$ mutually unbiased regular Hadamard matrices of order $4u^2$. Because these Hadamard matrices are regular, they correspond to symmetric $2$-$(4u^2,2u^2-u,u^2-u)$ designs, and one obtains a linked system of $w-1$ such designs, and hence a uniform scheme with $w$ fibres of size $4u^2$.
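To illustrate the numerology of this construction, take for instance $u=6$: starting from a Hadamard matrix of order $12$ and mutually orthogonal Latin squares of side $12$, one obtains linked systems of symmetric $2$-$(144,66,30)$ designs, and hence uniform schemes with fibres of size $144$; this parameter set does not occur among the classical systems on $2^{2m}$ points arising from Kerdock codes.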
\section{Four-class cometric Q-antipodal association schemes}\label{sec:four}
We next consider the four-class schemes, comparing the ``class I'' imprimitive schemes of Higman with the cometric Q-antipodal schemes.
\subsection{A linked system of Van Lint-Schrijver partial geometries}\label{sec:vls}
Uniform association schemes with three classes and more than one relation across fibres thus turn out to be cometric. For four classes this is not the case. There are several examples with just two fibres that are not cometric, such as those (non-cometric) schemes generated by bipartite distance-regular graphs with diameter four. The following example of a system of linked partial geometries by Cameron and Van Lint \cite{cvl} is perhaps more interesting because it has three fibres.
\begin{example} Consider the ternary repetition code $C$ of length $6$. The vertices of the association scheme are the $243$ cosets of $C$ in $GF(3)^6$, and these can be partitioned into three fibres according to the sum of the coordinates of any vector in the coset. Consider the graph where two cosets in different fibres are adjacent if one can be obtained from the other by adding a vector of weight one. This defines one of the two relations across fibres, and it generates the entire four-class scheme. The incidence structure between two fibres is a partial geometry that is isomorphic to the one constructed by Van Lint and Schrijver \cite{LS} (with parameters $pg(5+1,5+1,2)$), which has as a point graph (and line graph) a strongly regular graph with parameters $(81,30,9,12)$; this gives the two (nontrivial) relations on the fibres. The scheme is not cometric because $q^1_{13} \neq 0$. \end{example}
\subsection{Higman's imprimitive four-class schemes}
Higman \cite{Htriality} studied imprimitive four-class association schemes, and classified these according to the dimensions of the subalgebras $\mathcal{B}$ and $\mathcal{C}$ associated to a fixed imprimitivity system (or ``parabolic'') as outlined in Section \ref{Subsec:imprim}
above. Since we showed that $\mathcal{B}$ has dimension $|\mathcal{I}|$ and $\mathcal{C}$
has dimension $|\mathcal{J}|$, we may say that a four-class scheme falls into Higman's ``class I'' (relative to a given imprimitivity system)
if it has $|\mathcal{I}|=3$ and $|\mathcal{J}|=2$. It is known that the cometric Q-antipodal four-class association schemes fall into this ``class I". In the next section we shall characterize the cometric schemes in this class.
Let us consider a ``class I" scheme. Although Higman ordered relations and idempotents differently, we will assume (without loss of generality) that $\mathcal{J}=\{0,4\}$ and $\mathcal{I}=\{0,2,4\}$. Then, using Lemma \ref{idempotents}, we may assume that $\mathcal{J}_1=\{1,3\}$ and $\mathcal{J}_2=\{2\}$. So the subscheme on each fibre is a strongly regular graph, on $n$ vertices with valency $k$, say. Let $r$ and $s$ denote the nontrivial eigenvalues of this graph and let $f$ and $g$ denote the multiplicities of $r$ and $s$, respectively. The eigenmatrices $\tilde{P}$ and $\tilde{Q}$ for this strongly regular graph are related to the eigenmatrices of this four-class scheme by Equation \eqref{EPQtilde}. Using this, we claim (and Higman \cite{Htriality} obtained the same) that the eigenmatrices for a ``class I'' scheme can be written as
\begin{equation}\label{P-matrix} P = \begin{bmatrix} 1 & (w-1)k_1 & k & (w-1)(n-k_1) & n-1-k \\ 1 & P_{11} & r & -P_{11} & -1-r \\ 1 & 0 & s & 0 & -1-s \\ 1 & -\frac{m_1}{m_3}P_{11} & r & \frac{m_1}{m_3}P_{11} & -1-r \\ 1 &-k_1& k & -(n-k_1)& n-1-k \end{bmatrix} \end{equation} and \begin{equation}\label{Q-matrix} Q = \begin{bmatrix} 1 & m_1 & wg & m_3 & w-1 \\ 1 & Q_{11} & 0 & -Q_{11} & -1 \\ 1 & \frac{m_1}{k}r & \frac{wg}{k}s & \frac{m_3}{k}r & w-1 \\ 1 & -\frac{k_1}{n-k_1}Q_{11} & 0 & \frac{k_1}{n-k_1}Q_{11} & -1 \\ 1 & -\frac{m_1}{n-1-k}(1+r) & -\frac{wg}{n-1-k}(1+s) & -\frac{m_3}{n-1-k}(1+r) & w-1 \end{bmatrix}, \end{equation} where $k_1:=1+p^1_{12}+p^1_{14}$ and the remaining unknowns are related by $$ m_1+m_3 = wf, \qquad P_{11}m_1 = Q_{11}v_1, \qquad v_1=(w-1)k_1.$$ Indeed, for a given vertex $x$ and a fibre $U$ not containing $x$, $k_1$ equals the number of 1-neighbors of $x$ in $U$. So the incidence structure between any two fibres induced by relation $R_1$ is a square 1-design with block size $k_1$. Thus the total number of 1-neighbors of $x$ equals $v_1=(w-1)k_1$. We also have $Q_{12}=Q_{32}=0$ because $\mathcal{J}_2=\{2\}$ forces $E_2 \in \mathcal{B}$. The remaining simplifications in \eqref{P-matrix} and \eqref{Q-matrix} can easily be checked using the orthogonality relations $Q_{ij}=P_{ji}\frac{m_j}{v_i}$ and (column zero of) $PQ=QP=vI$.
It will benefit us to make the expressions \eqref{P-matrix} and \eqref{Q-matrix} as unambiguous as possible. Let us agree to order the idempotents $E_1$ and $E_3$ by $m_1 \leq m_3$. Unless otherwise noted, we will order the relations $R_2$ and $R_4$ by assuming that $r \geq 0$, and the relations $R_1$ and $R_3$ by assuming $P_{11} \geq 0$. We now verify that, if such a scheme is cometric, then $E_0,E_1,E_2,E_3,E_4$ must be the Q-polynomial ordering, except possibly when $w=2$.
Since columns two and four of $Q$ have repeated entries, neither $E_2$ nor $E_4$ can be a Q-polynomial generator. In fact, $E_4$ must take the last position in any Q-polynomial ordering by the same argument as that in the proof of Theorem \ref{cometricuniformQantipodal}. Finally, $E_2$ cannot take position three because $q_{13}^4 > 0$ follows from $1\sim^* 3$. The last two possibilities for our Q-polynomial ordering are $E_0,E_1,E_2,E_3,E_4$ and $E_0,E_3,E_2,E_1,E_4$. But $q_{43}^3=0$ then gives $m_1=(w-1)m_3$ in the second case (by Lemma \ref{cosetssize2krein}) and, with our conventions above, this can only happen if $w=2$. In fact, when $w=2$, we find that either one of these orderings -- or both of them -- can be Q-polynomial orderings. But in the case where $E_3$ is the Q-polynomial generator, the natural ordering of relations described in Section \ref{natural ordering} is instead $R_0,R_3,R_2,R_1,R_4$.
From the 13-entry of the equation $PQ=vI$ and the 11-entry from the similar equation for the subscheme, we find that $$P_{11}= \sqrt{\frac{m_3(w-1)k_1(n-k_1)}{m_1f}}.$$ By using the expression \cite[Thm.~II.3.6(i)]{banito} \begin{equation} \label{Eqijk} q^h_{ij}=\frac{m_im_j}{v}\sum_l \frac{P_{il}P_{jl}P_{hl}}{v_l^2}, \end{equation} and the similar expression $$\tilde{q}_{11}^1 = \frac{f^2}{n} \left( 1 + \frac{ r^3 }{k^2} - \frac{ (1+r)^3}{(n-1-k)^2} \right)$$ for the subscheme we then derive that $$q^1_{13}=\frac{m_1m_3}{wf^2}\left(\tilde{q}^1_{11}-\sqrt{\frac{m_3}{m_1(w-1)}}\frac{(n-2k_1)\sqrt{f}}{\sqrt{k_1(n-k_1)}}\right),$$ which, of course, must vanish when the scheme is cometric with respect to the ordering $E_0,E_1,E_2,E_3$, $E_4$.
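The vanishing of $q^1_{13}$ is easy to test for a candidate parameter set. The following minimal Python sketch (again purely illustrative) evaluates the two terms in the last display for the parameters used above, srg(50,42,35,36) with $w=2$, $k_1=15$ and $m_1=m_3=f=21$; both terms equal $4$, so $q^1_{13}=0$ and this parameter set passes the cometric test.
\begin{verbatim}
from math import sqrt

n, k, r, f = 50, 42, 2, 21
w, k1, m1, m3 = 2, 15, 21, 21

q11_sub = (f**2 / n) * (1 + r**3 / k**2 - (1 + r)**3 / (n - 1 - k)**2)          # = 4.0
rhs = sqrt(m3 / (m1 * (w - 1))) * (n - 2 * k1) * sqrt(f) / sqrt(k1 * (n - k1))  # = 4.0
q113 = (m1 * m3 / (w * f**2)) * (q11_sub - rhs)
print(q11_sub, rhs, abs(q113) < 1e-9)        # 4.0 4.0 True
\end{verbatim}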
\subsection{Linked systems of strongly regular designs}\label{sec:linkedsrd}
Let us proceed with the expressions of the previous section. From Lemma \ref{cosetssize2krein}, we know that $q^1_{14}=0$ if and only if $m_3=m_1(w-1)$. By Definition \ref{def:Qanti} and Theorem \ref{TuniformiffQHigman}, this happens if and only if the scheme is uniform. In this case, the incidence structure between two fibres is a so-called strongly regular design as defined by Higman \cite{Hsrd}, and the scheme corresponds to a linked system of strongly regular designs. Cameron and Van Lint \cite{cvl} constructed such an example, as we saw, and also the example in Section \ref{HOSI} is a linked system of strongly regular designs.
\begin{proposition}\label{Qpoly4} An imprimitive four-class association scheme of Higman's ``class I'' is cometric (and therefore Q-antipodal) if and only if $r \neq k$, $m_3=(w-1)m_1$, and \begin{equation}\label{q_condition2} \tilde{q}_{11}^1 = \frac{(n-2k_1)\sqrt{f}}{\sqrt{k_1(n-k_1)}}, \end{equation} possibly after reordering the idempotents $E_1$ and $E_3$ and the relations $R_1$ and $R_3$ in the case $w=2$. \end{proposition}
\begin{proof} We address the case $w\ge3$. The same ideas work in the case where $w=2$, but an extra case argument is involved.
First recall that a cometric Q-antipodal scheme is uniform and we have just shown that uniformity, the vanishing of $q_{14}^1$, and the equation $m_3=(w-1)m_1$ are all equivalent.
We know from above that the scheme is cometric if and only if $E_0,E_1, E_2,E_3,E_4$ is a Q-polynomial ordering. So we need $$ q^1_{14}=0 , \quad q^1_{13}=0, \quad q^2_{11} > 0, \quad q^3_{12}>0, \quad q^4_{13}>0 . $$ Observing that $q^2_{11}=\frac{m_1^2}{wf^2}\tilde{q}^2_{11}$ and $q^2_{13}=\frac{m_1m_3}{wf^2}\tilde{q}^2_{11}$, and that $\tilde{q}^2_{11}=0$ if and only if the strongly regular graphs on the fibres are imprimitive with $r=k$, one easily works out the remaining implications in both directions. \end{proof}
\noindent Thus, for a cometric Q-antipodal four-class association scheme, all parameters can be expressed in terms of the number of fibres, $w$, and the parameters of the strongly regular graph.
\begin{corollary} \label{CQantipparams} If $(X,\mathcal{R})$ is a cometric Q-antipodal four-class association scheme with $w$ fibres, then there exists a strongly regular graph with $n=v/w$ vertices, eigenvalues $k$, $r$, and $s$ having multiplicities $1$, $f$, and $g$ respectively, such that the eigenmatrices for $(X,\mathcal{R})$ are given by Equations \eqref{P-matrix} and \eqref{Q-matrix} where $m_1=f$, $m_3=(w-1)f$, $$P_{11}= (w-1)\sqrt{k_1(n-k_1)/f}, \qquad Q_{11}= \sqrt{ f(n-k_1)/k_1 },$$ and \begin{equation}\label{eqS} k_1=\frac{n}{2}\left(1-\frac{\tilde{q}_{11}^1}{\sqrt{4f+(\tilde{q}_{11}^1)^2}}\right). \end{equation} \end{corollary}
\begin{proof} The expression for $k_1$ follows from \eqref{q_condition2}. \end{proof}
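As a quick numerical illustration of Corollary \ref{CQantipparams} (a Python sketch, not part of the proof): for the fibre graph srg(56,45,36,36) of the $n=56$ family in the appendix one has $r=3$ and $f=20$, and Equation \eqref{eqS} returns $k_1=20$, in agreement with the listed partition $112=1+20+45+36+10$.
\begin{verbatim}
from math import sqrt

n, k, r, f = 56, 45, 3, 20                   # srg(56,45,36,36): eigenvalues 45, 3, -3
q11_sub = (f**2 / n) * (1 + r**3 / k**2 - (1 + r)**3 / (n - 1 - k)**2)   # = 8/3
k1 = (n / 2) * (1 - q11_sub / sqrt(4 * f + q11_sub**2))
print(q11_sub, k1)                           # 2.666..., 20.0
\end{verbatim}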
\noindent Moreover, because $\tilde{q}_{11}^1$ is always a rational number (even in the case when some entry of $\tilde{P}$ is irrational), we obtain that $\sqrt{k_1(n-k_1)/f}\in\mathbb{Q}$ for a cometric Q-antipodal scheme with four classes, except perhaps when $n=2k_1$ (equivalently, $\tilde{q}_{11}^1=0$). On the other hand, $(w-1)\sqrt{k_1(n-k_1)/f} = P_{11}$ is an algebraic integer. Therefore $P_{11}$ is a rational integer if $n \neq 2k_1$. Because a Q-antipodal cometric scheme is dismantlable, we can take $w=2$ and consider $P_{11}$ for the dismantled scheme; now we see that $k_1(n-k_1)/f$ is a perfect square provided $n \neq 2k_1$.
It also follows from \eqref{eqS} that if $\tilde{q}_{11}^1 \neq 0$, then $4f+(\tilde{q}_{11}^1)^2$ is a square of a rational number. This immediately implies the following result, which we will use in Section \ref{sec:srd}.
\begin{proposition}\label{conference} For a cometric Q-antipodal four-class association scheme, the strongly regular graph on a fibre cannot be a conference graph. \end{proposition} \begin{proof} Assume the contrary. Then $n=2k+1$, $f = k > 0$, $\tilde{q}_{11}^1=(k-2)/2$, $k$ is even, and $$ 4f+(\tilde{q}_{11}^1)^2 = 4k + \left(\frac{k-2}{2}\right)^2. $$ Because $n$ is odd, $\tilde{q}_{11}^1 \neq 0$. Therefore $k^2+12k+4$ is the square of an integer. But $k=-12$ and $k=0$ are the only even integers for which the expression $k^2+12k+4$ is a perfect square: writing $k^2+12k+4=(k+6)^2-32$ and factoring the resulting difference of two squares leaves only $k\in\{-15,-12,0,3\}$. \end{proof}
\noindent The rationality condition that follows from \eqref{eqS} turns out to be quite a strong one. It is possible to show, for example, that the lattice graphs also cannot occur as the strongly regular graph on the fibres, and probably many more graphs can be excluded in this way. We will also employ this condition in the next section.
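This exclusion can be automated. The following Python sketch (using exact rational arithmetic; the helper names are ad hoc, and the test is only the necessary condition discussed above, not a complete feasibility check) decides whether $4f+(\tilde{q}_{11}^1)^2$ is the square of a rational number. The fibre graph srg(50,42,35,36) of the $n=50$ family passes, while a conference graph such as the Paley graph on $13$ vertices (for which $f=k=6$ and $\tilde{q}_{11}^1=(k-2)/2$, as in the proof above) and the $4\times 4$ lattice graph srg(16,6,2,2) fail.
\begin{verbatim}
from fractions import Fraction as F
from math import isqrt

def is_rational_square(x):
    return isqrt(x.numerator)**2 == x.numerator and isqrt(x.denominator)**2 == x.denominator

def q11_sub(n, k, r, f):
    # \tilde q_{11}^1 of a strongly regular graph with rational eigenvalue r
    return F(f * f, n) * (1 + F(r**3, k**2) - F((1 + r)**3, (n - 1 - k)**2))

def rationality_test(f, q):
    # necessary condition from (eqS): if q != 0 then 4f + q^2 must be a rational square
    return q == 0 or is_rational_square(4 * f + F(q)**2)

print(rationality_test(21, q11_sub(50, 42, 2, 21)))   # True:  srg(50,42,35,36)
print(rationality_test(6, F(6 - 2, 2)))               # False: Paley(13), a conference graph
print(rationality_test(6, q11_sub(16, 6, 2, 6)))      # False: the 4x4 lattice graph
\end{verbatim}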
\subsection{Four-class cometric Q-antipodal Q-bipartite association schemes; linked systems of Hadamard symmetric nets}
Recently, four-class cometric Q-antipodal Q-bipartite association schemes were shown to be equivalent to so-called real mutually unbiased bases, and a connection to Hadamard matrices was found in \cite{mub}. We also refer to \cite{abs} for connections between real mutually unbiased bases and association schemes. Here we shall derive the connection to Hadamard matrices, and see cometric Q-antipodal Q-bipartite four-class schemes as linked systems of Hadamard symmetric nets.
So, let us consider a cometric Q-antipodal Q-bipartite four-class association scheme, and its eigenmatrix $Q$ in \eqref{Q-matrix} with $m_3=(w-1)m_1=(w-1)f$ and $Q_{11}= \sqrt{ f(n-k_1)/k_1 }$ from Corollary \ref{CQantipparams}. Since the scheme is cometric Q-bipartite, the column of $Q$ corresponding to a Q-polynomial generator has its $d+1$ distinct values symmetric about zero when ordered naturally \cite[Cor. 4.2]{mmw}. In our case, this is either column one or column three, and in both cases it follows that $r=0$, $n=k+2$, and $n=2k_1$. This implies that $s=-2$, $f=\frac{n}{2}$, and the strongly regular graphs on the fibres are cocktail party graphs (complements of matchings). Now restrict to any dismantled scheme on $w'=2$ fibres; straightforward calculations show that this must correspond to a so-called Hadamard graph, an antipodal bipartite distance-regular graph of diameter four, cf. \cite[p.\ 19, 425]{bcn}. Such graphs correspond to Hadamard matrices; more precisely, the incidence structure between a pair of fibres is a Hadamard symmetric net (that is, a symmetric $(m,\mu)$-net with $m=2$). We thus obtain that cometric Q-antipodal Q-bipartite four-class association schemes are linked systems of Hadamard symmetric nets. Interesting examples of these are given by the extended Q-bipartite doubles of the three-class uniform schemes corresponding to the known linked systems of symmetric designs of Section \ref{sec:threeclass}. We expect that the schemes that arise in this way from the construction by Holzmann, Kharaghani, and Orrick \cite[Thm. 2.7]{hadi} of real unbiased Hadamard matrices are the same as those coming from the mutually unbiased bases constructed by Wocjan and Beth \cite{wb}, but we have not checked the details. For more on the correspondence to real mutually unbiased bases, and bounds on $w$, we refer to \cite{mub}.
On the other hand, we can characterize the cometric Q-antipodal Q-bipartite four-class association schemes as follows.
\begin{proposition}\label{Qbipartite} Consider a cometric Q-antipodal four-class association scheme, such that the strongly regular graph on a fibre is imprimitive. Then it is Q-bipartite. \end{proposition} \begin{proof} By Proposition \ref{Qpoly4}, we have $r\neq k$. So $r=0$, and the strongly regular graph on a fibre must be a complete multipartite graph, say a $t$-partite graph with parts of size $\frac{n}{t}$ each. For such a graph $s=-\frac{n}{t}$, $f=n-t$, and $\tilde{q}_{11}^1 = n-2t$. So, if $\tilde{q}_{11}^1 \neq 0$ (which is equivalent to $s \neq -2$), then $t \leq \frac{n}{3}$, and $4f+(\tilde{q}_{11}^1)^2$ is a perfect square (as before, by \eqref{eqS}). However, for $2 \leq t \leq \frac{n}{3}$ we have $(n-2t+2)^2<(n-2t+2)^2+4t-4=4f+(\tilde{q}_{11}^1)^2<(n-2t+4)^2$, and since $4f+(\tilde{q}_{11}^1)^2\equiv(n-2t)^2 \pmod{4}$, its square root would have to have the same parity as $n-2t$; hence $4f+(\tilde{q}_{11}^1)^2$ cannot be a square. Thus, $\tilde{q}_{11}^1 = 0$ and $t =\frac{n}{2}$, so the strongly regular graph on a fibre is a cocktail party graph, and therefore $n=k+2$ and $n=2k_1$. From the expression for the eigenmatrix $P$ in \eqref{P-matrix} and Equation \eqref{Eqijk}, one can now derive that the Krein parameters $a_i^*=q^{i}_{1i}$ are zero for all $i$. Thus the scheme is Q-bipartite. Note that, in this case, not only column one but also column three of $Q$ is symmetric about zero. \end{proof}
\noindent The same result may be derived by using the fact that there are two different imprimitivity systems and Suzuki's results on imprimitive cometric schemes \cite{suzimprim} and cometric schemes with multiple Q-polynomial orderings \cite{suztwoq}. It would be interesting to work this out more generally, that is, for any cometric scheme with multiple imprimitivity systems, but we leave this to the interested reader.
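The key identity and inequality used in the proof of Proposition \ref{Qbipartite} can also be checked symbolically; the following sketch (SymPy assumed) verifies that $(n-2t+2)^2+4t-4$ equals $4f+(\tilde{q}_{11}^1)^2=4(n-t)+(n-2t)^2$ and that the gap to $(n-2t+4)^2$ is $4(n-3t+4)$, which is positive whenever $t\le n/3$.
\begin{verbatim}
import sympy as sp

n, t = sp.symbols('n t', positive=True)
f, q = n - t, n - 2*t                 # complete multipartite fibre: f = n - t, q~ = n - 2t
mid = 4*f + q**2
print(sp.simplify((n - 2*t + 2)**2 + 4*t - 4 - mid))   # 0
print(sp.factor((n - 2*t + 4)**2 - mid))               # 4*(n - 3*t + 4)
\end{verbatim}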
\subsection{Strongly regular graphs with a strongly regular decomposition}\label{sec:srd}
One of the interesting features of the example in Section \ref{HOSI} is that there is a decomposition of the Higman-Sims graph into two Hoffman-Singleton graphs; thus a strongly regular graph decomposes into two strongly regular graphs. Such strongly regular graphs with a strongly regular decomposition were studied by Haemers and Higman \cite{HH} and Noda \cite{noda2}, and they occur in more examples of four-class cometric Q-antipodal association schemes, as we shall see.
Let $\Gamma_0 = (X,E)$ be a primitive strongly regular graph with adjacency matrix $M$, parameter set $(v,k_0,\lambda_0,\mu_0)$, and distinct eigenvalues $k_0>r_0>s_0$. A strongly regular decomposition of $\Gamma_0$ is a partition of $X$ into two sets $U_1$ and $U_2$ such that the induced subgraphs $\Gamma_i:=\Gamma_{U_i}, i=1,2$ are strongly regular.
For our purpose, the sets $U_1$ and $U_2$ will play the role of the $w=2$ fibres of an imprimitive (bipartite) association scheme, and the disjoint union of the graphs $\Gamma_1$ and $\Gamma_2$ is one of the two relations in $\mathcal{I}$. Thus we will only consider the case that the sets $U_1$ and $U_2$ are of equal size, and the parameter sets of $\Gamma_1$ and $\Gamma_2$ are the same, say $(n,k,\lambda,\mu)$. The eigenvalues of both graphs will be denoted by $k \geq r > s$.
To make the connection between a strongly regular graph with a strongly regular decomposition and our four-class association schemes more precise, write $M=\mtrx{M_1}{C}{C^\top}{M_2}$, where the blocks correspond to our partition of $X$. We then define relations by the following adjacency matrices: \begin{equation} \label{Esrdecomp} \begin{split} A_0:=\mtrx{I}{0}{0}{I}, \quad A_1:=&\mtrx{0}{C}{C^\top}{0}, \quad A_2:=\mtrx{M_1}{0}{0}{M_2}, \\ A_3:=\mtrx{0}{J-C}{J-C^\top}{0}, \quad & A_4:=\mtrx{J-M_1-I}{0}{0}{J-M_2-I}. \end{split} \end{equation} We shall determine when these relations form an association scheme, and if they do, we shall see that the scheme is cometric Q-antipodal. But first we make some more observations.
By taking the complements of $\Gamma_i$, $i=0,1,2$, we obtain another strongly regular graph with a strongly regular decomposition; we call this the complementary decomposition. Note that this complementary decomposition determines the same relations, i.e., the same $A_i$, $i=0,\dots,4$, but ordered differently. In case these relations form an association scheme, it is not a priori clear which ordering corresponds to the one in the eigenmatrix $P$ in \eqref{P-matrix}. We make the straightforward choice of considering the decomposition for which the eigenvalues $k,r,s$ of $\Gamma_i$, $i=1,2$, correspond to the $k,r,s$ in the eigenmatrix $P$ (in the case of hemisystems in the next section, however, we make an exception).
A strongly regular decomposition is called exceptional if $r_0\neq r$ and $s_0\neq s$. It was shown by Haemers and Higman \cite[Thm. 2.7]{HH} (and it also follows from \cite[Thm. 1]{noda2}) that in this exceptional case the graphs $\Gamma_1$ and $\Gamma_2$ are conference graphs. Thus, Proposition~\ref{conference} implies that such an exceptional decomposition does not correspond to a cometric scheme. An example of an exceptional decomposition is that of the Petersen graph into two pentagons.
Note that when the relations defined by \eqref{Esrdecomp} do form an association scheme, then it has a fusion scheme $\{A_0,A_1+A_2,A_3+A_4\}$. In that case it follows from the expression \eqref{P-matrix} for the eigenmatrix $P$ that the strongly regular graph $\Gamma_0$ with adjacency matrix $M=A_1+A_2$ has an eigenvalue $0+s$, hence $s_0=s$, and the decomposition is not exceptional. Note that in \eqref{P-matrix} the roles of $A_1$ and $A_3$ may be swapped, but this has no influence on the observation. Thus, in the case of an exceptional decomposition, \eqref{Esrdecomp} does not yield an association scheme.
We shall now show that if the relations defined by \eqref{Esrdecomp} form a scheme, then this scheme is cometric Q-antipodal. This follows from the following proposition, where we consider Higman's ``class I" schemes with two fibres, i.e., $w=2$; since the scheme is bipartite, it is Q-Higman and Lemma \ref{cosetssize2krein} applies, giving us $m_4=1$, $m_1=m_3=f$ and $q^j_{4,4-j}=1$. Thus, $q^1_{41}=q^3_{43}=0$, and the conditions from Proposition \ref{Qpoly4} for the scheme to be cometric reduce to $r \neq k$ and $q^3_{11}=0$ or $q^1_{33}=0$.
\begin{proposition}\label{srd} Consider an imprimitive four-class association scheme in Higman's ``class I" with two fibres. Suppose that the scheme has a primitive two-class fusion scheme, and that $r \neq k$. Then the scheme is cometric Q-antipodal. \end{proposition} \begin{proof} From the form of the eigenmatrix $P$ in \eqref{P-matrix} it follows that the only way to obtain a primitive two-class fusion (i.e., one where both nontrivial relations correspond to connected strongly regular graphs) is to fuse relation $R_1$ with either $R_2$ or $R_4$, and to fuse the remaining two nontrivial relations. But then there exists a corresponding partition $\{T_1,T_2\}$ of $\{1,2,3,4\}$ such that $E_0$, $E_{T_1}:=\sum_{j\in T_1}E_j$ and $E_{T_2}:=\sum_{j\in T_2}E_j$ are the primitive idempotents of the fusion scheme. Depending on the fusion of relations, one of the fused relations has eigenvalue $\pm P_{11}+r$ corresponding to idempotent $E_1$, and eigenvalue $\mp P_{11}+r$ corresponding to idempotent $E_3$, these two eigenvalues differing by $2P_{11}$ in either case. In any case it follows that $1$ and $3$ are not in the same set $T_i$.
Now assume first that one of $T_1,T_2$ is a one-element set, say $T_1=\{i\}$. From the above it follows that $i \neq 2,4$. If $i=1$, then $E_1\circ E_1$ is a linear combination of $E_0,E_1,E_2+E_3+E_4$. But $q_{11}^4=0$. Therefore $E_1\circ E_1\in\langle E_0,E_1\rangle$ implying that the fusion is imprimitive, which is a contradiction. The case $i=3$ can be settled analogously.
Thus $|T_1|=|T_2|=2$. Without loss of generality $T_1=\{i,4\}$ for $i=1$, or $i=3$ (the case $i=2$ is eliminated by the above considerations). Assume without loss of generality that $i=1$; then $$ v(E_1 + E_4)\circ (E_1+E_4) = (m_1+1)E_0 + x (E_1+E_4)+ y (E_2+E_3) $$ for some non-negative reals $x,y$. Because $q_{11}^4=0$, $q_{14}^4=0$, and $q_{44}^4=0$ (by Proposition \ref{Qantikrein} and using $w=2$), $E_4$ does not appear in the left-hand side. Therefore $x=0$, implying $E_1\circ E_1\in \langle E_0,E_2,E_3\rangle$. Together with $E_3\circ E_3 = E_1\circ E_1$ (which follows from the equations $vE_4\circ E_4 = E_0$ and $vE_3\circ E_4 = E_1$) we obtain $E_3\circ E_3\in \langle E_0,E_2,E_3\rangle$. So $q^1_{33}=0$, which yields the claim. \end{proof}
\noindent It remains to show that a decomposition that is not exceptional yields an association scheme; this leads to the following result.
\begin{proposition}\label{srd_Qpoly} Consider a primitive strongly regular graph with a strongly regular decomposition into parts with the same parameters. Then the above-mentioned relations form an association scheme if and only if the decomposition is not exceptional. If so, then for $r \neq k$, the scheme is cometric Q-antipodal. \end{proposition}
\begin{proof} We showed before that an exceptional decomposition does not correspond to an association scheme. So suppose that the decomposition is not exceptional. From the parameters of the strongly regular graphs it follows that \begin{align*} &M^2 = (r_0+s_0)M -r_0s_0 I +(k_0+r_0s_0)J, \qquad MJ = k_0J,\\ &M_i^2 = (r+s)M_i -rs I +(k+rs)J, \qquad \text{~and~} \qquad M_iJ = kJ, \quad i=1,2. \end{align*} By working out the first equation, it follows that \begin{gather}\label{eq_Lemma21} \begin{aligned} &CJ = C^\top J = (k_0 -k) J, \\ &M_1 C + CM_2 = (r_0+s_0) C +(k_0+r_0s_0) J,\\ &CC^\top = (r_0+s_0-r-s)M_1 - (r_0s_0-rs) I + (k_0 + r_0s_0-k-rs)J,\\ &C^\top C = (r_0+s_0-r-s)M_2 - (r_0s_0-rs) I + (k_0 + r_0s_0-k-rs)J, \end{aligned} \end{gather} and this implies that $(r_0+s_0-r-s)(M_1C-CM_2)=0$. If $r_0+s_0=r+s$, then it follows from a result of Noda \cite[Thm. 1]{noda2} that the decomposition is exceptional, hence we must have that $M_1C=CM_2$. From \eqref{eq_Lemma21} it then follows that $M_1 C = C M_2 = \frac{r_0 +s_0}{2} C+\frac{k_0+r_0s_0}{2}J$. Now a routine check shows that the matrices $A_i, i=0,\dots,4$ form an association scheme, and by Proposition~\ref{srd} this scheme is cometric Q-antipodal. \end{proof}
\noindent For the non-exceptional case, Noda \cite{noda2} found that all parameters of the decomposition can be expressed in terms of $r_0$ and $s_0$. In our case, we have that $s=s_0$, which is the complementary case to the one considered in \cite[Thm. 1]{noda2}. From this result, it follows for example that $r=\frac{r_0+s_0}{2}$. Note that this also follows by considering the eigenvalues of the fusion scheme using \eqref{P-matrix}: indeed, we have $s_0=0+s=-P_{11}+r$, and $r_0=P_{11}+r$.
Haemers and Higman \cite{HH} give a list of parameter sets of non-exceptional decompositions on at most 300 vertices. The smallest example is the Clebsch graph that decomposes into two perfect matchings on 8 vertices. The association scheme corresponding to this decomposition (consider the complementary one for the parameters) is the four-class binary Hamming scheme $H(4,2)$ (which is (co-)metric, (Q-)bipartite, (Q-)antipodal). Note that this is a dismantled scheme of the cometric Q-bipartite Q-antipodal scheme (with $w=3$) related to the so-called $24$-cell. The next example is the Higman-Sims graph decomposing into two Hoffman-Singleton graphs, and there are two more examples: on 112 vertices and 162 vertices. The one on 112 vertices is a decomposition of a generalized quadrangle into two Gewirtz graphs, and it is part of an infinite family of decompositions coming from hemisystems.
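For the Clebsch example just mentioned, the construction \eqref{Esrdecomp} can be verified directly by computer. The sketch below (NumPy assumed) uses the standard model of srg(16,5,0,2) on the binary 4-tuples, two of them being adjacent when they differ in exactly one or in all four coordinates; the bipartition into even- and odd-weight tuples induces a perfect matching on each part, and the five relations are checked to span a matrix algebra, i.e., to form an association scheme (indeed $H(4,2)$, with the relations given by Hamming distances $0,1,4,3,2$).
\begin{verbatim}
import itertools
import numpy as np

V = list(itertools.product([0, 1], repeat=4))
dist = lambda x, y: sum(a != b for a, b in zip(x, y))
M = np.array([[1 if dist(x, y) in (1, 4) else 0 for y in V] for x in V])   # srg(16,5,0,2)

part = np.array([sum(v) % 2 for v in V])                # the two fibres: even / odd weight
same = (part[:, None] == part[None, :]).astype(int)     # 1 when two vertices share a fibre
I, J = np.eye(16, dtype=int), np.ones((16, 16), dtype=int)

# A_0, ..., A_4 as in (Esrdecomp)
A = [I, M * (1 - same), M * same, (J - M) * (1 - same), (J - M - I) * same]
assert (sum(A) == J).all()                              # the relations partition X x X

for Ai in A:
    for Aj in A:
        Pm = Ai @ Aj
        # Bose-Mesner closure: each product is constant on the support of every A_k
        assert all(len(set(Pm[Ak == 1])) <= 1 for Ak in A)
print("the five relations form an association scheme")
\end{verbatim}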
\subsubsection{Hemisystems of generalized quadrangles}\label{Sec:hemi}
Segre \cite{segre} introduced the concept of a hemisystem on the Hermitian surface $H$ in $PG(3,q^2)$: a set of lines of $H$ such that every point of $H$ lies on exactly $(q+1)/2$ of these lines. This point-line geometry, denoted $H(3,q^2)$, gives an important classical family of generalized quadrangles, called the Hermitian generalized quadrangles. It is now well-known \cite{cdg} that the lines of such a hemisystem, with two lines adjacent when they have a point in common, induce a strongly regular subgraph of the line graph of the geometry. Thus we obtain a strongly regular decomposition of the (strongly regular) line graph of this generalized quadrangle. In fact, this holds for any hemisystem in a generalized quadrangle $GQ(t^2,t)$.
Let $(\mathcal{P},\mathcal{L})$ be the point-line incidence structure of a generalized quadrangle $GQ(t^2,t)$ with $t$ odd. Let $\Gamma_0$ be the line graph: its vertex set is $X=\mathcal{L}$ with two vertices adjacent if the lines have a point in common. This is a strongly regular graph with parameters $( (t^3+1)(t+1), t(t^2+1), \ t-1, t^2+1)$ and with eigenvalues $k_0=t(t^2+1)$, $r_0= t-1$, and $s_0=-1-t^2$.
A hemisystem in $(\mathcal{P},\mathcal{L})$ is a subset $U_1 \subseteq \mathcal{L}$ with the property that every point in $\mathcal{P}$ lies on exactly $(t+1)/2$ lines in $U_1$ and $(t+1)/2$ lines in $U_2=X-U_1$. Cameron, Delsarte, and Goethals \cite{cdg} showed that any hemisystem in a generalized quadrangle of order $(t^2,t)$ corresponds to a strongly regular decomposition of the line graph of the corresponding generalized quadrangle. Because the complementary set $U_2$ of lines of a hemisystem is also a hemisystem, this decomposition $X=U_1 \cup U_2$ has equally sized parts. Moreover, the parameters of the parts are the same: each $U_i$ induces a subgraph $\Gamma_i$ which is strongly regular with parameters $$(n,k,\lambda,\mu)= \left( \frac{1}{2}(t^3+1)(t+1), \frac{1}{2}(t^2+1)(t-1), \ \frac{1}{2}(t-3), \frac{1}{2}(t-1)^2 \right) $$ and eigenvalues $k=\frac{1}{2}(t^2+1)(t-1)$, $r=t-1$, and $s=-\frac{1}{2}(t^2-t+2)$. The decomposition is clearly not exceptional (note though that here we have the complementary setting as in the previous section because $r=r_0$), so by Proposition \ref{srd_Qpoly}, we have
\begin{corollary}
\label{Chemisystems} Let $(\mathcal{P},\mathcal{L})$ be a generalized quadrangle $GQ(t^2,t)$ with $t$ odd and let $\mathcal{C}$ denote the set of all ordered pairs of distinct intersecting lines from $\mathcal{L}$. Suppose $\mathcal{L} = U_1 \cup U_2$ is a partition of the lines into hemisystems. Then the relations $R_0 = \{ (\ell,\ell)| \ell\in \mathcal{L}\}$, $R_1 = \mathcal{C} \cap ( U_1 \times U_2 \cup U_2 \times U_1)$, $R_2 = \mathcal{C} \cap ( U_1 \times U_1 \cup U_2 \times U_2)$, $R_3 = ( U_1 \times U_2 \cup U_2 \times U_1) - R_1$, $R_4 = ( U_1 \times U_1 \cup U_2 \times U_2) - R_0 -R_2$ give a cometric Q-antipodal association scheme on $X=\mathcal{L}$. This scheme has Krein array $ \left\{ (t^2+1)(t-1), (t^2-t+1)^2/t, (t^2-t+1)(t-1)/t, 1 \right.$; $1, (t^2-t+1)(t-1)/t, (t^2-t+1)^2/t,$ $\left. (t^2+1)(t-1) \right\}.$ \end{corollary}
\noindent Segre \cite{segre} constructed a hemisystem in $H(3,q^2)$ (a $GQ(q^2,q)$) for $q=3$; it corresponds to the above-mentioned example on 112 vertices with a decomposition into Gewirtz graphs. A breakthrough was made by Cossidente and Penttila \cite{Penttila}, who constructed hemisystems in $H(3,q^2)$ for all odd prime powers $q$. Bamberg, De Clerck, and Durante \cite{Bamberg} constructed a hemisystem for a nonclassical generalized quadrangle of order $(25,5)$ (which has the same parameters as $H(3,25)$), and Bamberg, Giudici, and Royle \cite{flock} showed that every flock generalized quadrangle has a hemisystem. The latter authors \cite{Bamcomp} recently also classified by computer the hemisystems for the two known generalized quadrangles of order $(25,5)$, and obtained several examples for other small generalized quadrangles.
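For $t=3$ the Krein array of Corollary \ref{Chemisystems} specializes to $\{20,49/3,14/3,1;1,14/3,49/3,20\}$, which is precisely the $n=56$, $w=2$ entry in the appendix. A quick check (a Python sketch using exact fractions):
\begin{verbatim}
from fractions import Fraction as F

def krein_array(t):
    a, b = (t*t + 1) * (t - 1), t*t - t + 1
    return [a, F(b*b, t), F(b * (t - 1), t), 1, 1, F(b * (t - 1), t), F(b*b, t), a]

print(krein_array(3))   # the entries 20, 49/3, 14/3, 1; 1, 14/3, 49/3, 20 (as Fractions)
\end{verbatim}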
\subsection{Classification, parameter sets, and examples}
We saw in Section \ref{sec:linkedsrd} that the parameters of a four-class cometric Q-antipodal scheme are completely determined by those of the strongly regular graph on the fibres, together with the number of fibres $w$. We used this to generate ``feasible" parameter sets for four-class cometric Q-antipodal schemes that are not Q-bipartite, and that have $n \leq 2000$ and $w \leq 6$. These parameter sets are listed in the appendix. Standard conditions such as integrality of parameters $p_{ij}^h$ and nonnegativity of the Krein parameters $q_{ij}^h$ were checked. Once a parameter set failed, we did not search for the corresponding parameter set with larger $w$ (because dismantlability would exclude such a parameter set). We also checked one of the so-called absolute bounds on multiplicities, i.e., the one in Proposition \ref{absolutebound} in the next section.
\subsubsection{Absolute bound on the number of fibres} \label{Subsec:absbound}
By the absolute bound we obtain the following bound for $w$.
\begin{proposition}\label{absolutebound} For a four-class cometric Q-antipodal scheme with $a_1^* \neq 0$, we have $w \leq (f+1)(f-2)/(2g)$. \end{proposition}
\begin{proof} By the absolute bound (cf. \cite[Thm. 2.3.3]{bcn}) and because $a_1^* \neq 0$, we have $$f(f+1)/2 \geq \rank(E_1\circ E_1) =\rank(E_0) +\rank(E_1) + \rank(E_2) = 1+f+wg,$$ and the result follows. \end{proof}
\noindent For example, for the parameter sets with $n=81$ in the appendix, we obtain that $w \leq 3$ from $f=20$ and $g=60$. In general, the bound does not appear to be very good though.
\subsubsection{The small examples} \label{smallcases}
The first family of parameter sets in the appendix ($n=50$) corresponds to the examples (with $w=2$ and $w=3$) related to the Hoffman-Singleton graph in Section \ref{HOSI}. The case $w=2$ corresponds to a distance-regular graph that is uniquely determined by the parameters, cf. \cite[p.\ 393]{bcn}. Now consider more generally an association scheme with $w$ fibres $V_i, i=1,\dots,w$ (in this family of parameter sets). Because also the Hoffman-Singleton graph is determined by its parameters, relation $R_4$ is such a graph on each fibre. Let us call two vertices in distinct fibres incident if they are related by relation $R_1$. Because $p^1_{14}=0$ for all $w$, it follows that if we take a vertex $x \in V_i, i>1$, then the 15 vertices in $V_1$ incident to $x$ will form a coclique in the Hoffman-Singleton graph on $V_1$. Because distinct $x$ are incident to distinct cocliques, and there are exactly 100 distinct cocliques of size 15 in the Hoffman-Singleton graph, it follows that $w \leq 3$. Moreover, because the scheme with $w=2$ is uniquely determined by its parameters, and is a dismantled scheme of a scheme with $w=3$, this implies that the latter scheme is also uniquely determined by its parameters.
For the second family of parameter sets in the appendix ($n=56$) a construction is known for $w=3$. Higman \cite[Ex. 3]{Htriality} for example mentions it can be constructed on the set of ovals in the projective plane of order $4$. The fibres are the three orbits of ovals under the action of the group $L_3(4)$. The case $w=2$ corresponds to a hemisystem of the generalized quadrangle of order $(9,3)$, or equivalently, to a strongly regular decomposition of the point graph of $GQ(3,9)$ into two Gewirtz graphs. It is known that such a decomposition, and hence the corresponding scheme, is unique (the uniqueness of the hemisystem in the generalized quadrangle is proven by Hirschfeld \cite[Thm. 19.3.18]{hirschfeld}, and the uniqueness of the point graph as a strongly regular graph was proven by Cameron, Goethals, and Seidel \cite{cgssrg}). As in the first family of parameter sets, we can show here that $w \leq 3$, and that the scheme with $w=3$ is unique. In this case, the intersection number $p^1_{14}$ equals one (for all $w$), which implies that the set of 20 neighbors in $V_1$ of any vertex $x \notin V_1$ must be an induced matching $10K_2$ in the Gewirtz graph induced on $V_1$. Brouwer and Haemers \cite[p.\ 405]{BH} mention that there are exactly 112 such induced subgraphs in the Gewirtz graph, which implies that $w \leq 3$ as well as the uniqueness of the scheme with $w=3$.
The case $n=64$ has $w \leq 2$. Dismantlability implies that the schemes with $n=64$ and $w>2$ do not exist (a scheme with $w=3$ does not occur because, for example, the intersection number $p_{11}^1=4.5$ is not an integer). The case $w=2$ corresponds to the distance-regular folded 8-cube, which is uniquely determined by its parameters.
For the family of parameter sets with $n=81$, the absolute bound implies that $w \leq 3$. Goethals and Seidel \cite[p.\ 156]{GS} give a decomposition of the strongly regular graph on 243 vertices from the ternary Golay code (also known as the Delsarte graph) into three strongly regular graphs on 81 vertices. This gives a scheme with $w=3$ and $n=81$. According to Brouwer \cite{AEBweb}, the decomposition of the unique strongly regular graph with valency 56 on 162 vertices into two strongly regular graphs on 81 vertices is unique, hence the association scheme with $w=2$ is unique as well. We also expect the scheme with $w=3$ to be unique.
Besides the above examples, and the examples related to triality or hemisystems, there occurs one more family of examples in the appendix. These are related to the Leech lattice, cf. \cite[Ex. 4]{Htriality}, and have $n=1408$ and $w \leq 3$.
Curiously, the Krein array $\{176, 135, 24, 1; 1, 24, 135, 176 \}$ is formally dual to the intersection array of a known graph, a cometric antipodal distance-regular double cover on 1344 vertices found by Meixner \cite{meixner}. Likewise, the Krein array $\{56,45,16,1;1,8,45,56 \}$ is formally dual to the intersection array of an antipodal distance-regular triple cover found by Soicher \cite{soicher} which is not cometric.
\section{Five-class cometric Q-antipodal association schemes}\label{sec:five}
In \cite{Hsrd2}, Higman introduced so-called strongly regular designs of the second kind and showed that these are equivalent to coherent configurations of type [3 3;~ 3]. Because such a coherent configuration is balanced --- a concept defined by Hirasaka and Sharafdini \cite{HR} --- its two fibres necessarily have the same size (this was also observed by Higman \cite{Hsrd2}). If in addition the design has self-dual parameters, then it gives rise to a five-class uniform scheme.
A trivial way to obtain such a scheme is by taking the bipartite double of a strongly regular graph (Higman calls the corresponding strongly regular design of the second kind trivial). Though trivial in this sense, this construction does yield some cometric (and also metric) schemes, such as the ones obtained from the Clebsch graph, the Schl\"{a}fli graph, the Higman-Sims graph, and the McLaughlin graph and both its subconstituents. These strongly regular graphs have in common that $q^1_{11}=0$ and $q^2_{12} \neq 0$. It was in fact claimed by Bannai and Ito \cite[p.\ 314]{banito} that the bipartite double of a scheme is cometric if and only if the (original) scheme is cometric with $q^i_{1i}=0$ for $i \neq d$ and $q^d_{id} \neq 0$.
To obtain less trivial examples of cometric schemes, we checked the examples and table of parameter sets for nontrivial strongly regular designs of the second kind in \cite{Hsrd2}. Four parameter sets in the table there turn out to give cometric schemes. One with $n=162$ (Higman's Example 4.4) is related to $U_4(3)$, and has Krein array $\{21,20,9,3,1;1,3,9,20,21\}$. The second one (Higman's Example 4.5) has $n=176$, and can be described using the Steiner 3-design on 22 points. It has Krein array $\{21,19.36,11,2.64,1;1,2.64,11,19.36,21\}$. The parameter set with $n=243$ can be realized as a dismantled scheme on two of the three fibres of a cometric scheme that is the dual of a metric scheme corresponding to the coset graph of the shortened extended ternary Golay code (cf. \cite[p.\ 365]{bcn}). Its Krein array is $\{22,20,13.5,2,1;1,2,13.5,20,22\}$. The last cometric example from the table has $n=256$ (second such parameter set in Higman's table) and corresponds to the distance-regular folded 10-cube.
Higman also mentions (in his Example 4.3) the strongly regular designs of the second kind related to the family of bipartite cometric distance-regular dual polar graphs $D_5(q)$. We have not checked all other examples mentioned by Higman \cite{Hsrd2} in detail, but we expect no further cometric examples among them.
\section{Miscellaneous} \label{Sec:misc}
In his book on permutation groups, Cameron \cite[p.\ 79]{cameronbook} describes how to use the computer package GAP to construct the strongly regular decomposition of the Higman-Sims graph into two Hoffman-Singleton graphs. This description can easily be extended to get the linked system of partial $\lambda$-geometries of Section \ref{HOSI}.
We checked whether any of the remaining examples mentioned in Higman's unpublished paper on uniform schemes \cite{Huninform} gives rise to a cometric scheme. We found no cometric schemes among these examples, although we should mention that one of them (Example 6) is unclear to us.
Many of the examples mentioned in this paper, and also examples of other cometric association schemes, are listed on the website \cite{Billwebsite}. Included there are all parameters of the examples.\\
\noindent {\bf Acknowledgements} The authors thank Peter Cameron, Bill Kantor, and Tim Penttila for inspiring discussions on the topic of this paper, and the referees for comments and corrections on an earlier version.
\section*{Appendix}
Below are putative parameter sets of four-class cometric Q-antipodal association schemes with fibre size $n \leq 2000$ and $w \leq 6$, and that are not Q-bipartite. The parameter sets are grouped according to the parameters of the strongly regular graph which would appear as the subscheme on the fibres. These ``srg'' parameters are given at the beginning of each group. An exclamation mark (!) means that the strongly regular graph is unique and a plus sign (+) indicates existence. Most of this information is obtained from the online tables of strongly regular graphs by Brouwer \cite{Brouwersrgtables}. For each group of parameter sets, we give the absolute bound of Proposition \ref{absolutebound} (if relevant).
Each remaining line contains information on one parameter set. Again, an exclamation mark (!) means that the scheme is unique, a plus sign (+) indicates existence, and a minus sign (-) non-existence. Next to this, the Krein array is given, then $w$, the partition $v =1+ v_1+ v_2+ v_3+ v_4$, and the spectrum of $R_1$. At the end, some miscellaneous information is given. The notation {\tt (P)} indicates that the scheme is (or, would be) also metric. The examples listed here appear in the body of the paper as follows.
\begin{itemize} \item {\tt Hoff-Singleton} -- the linked system of partial $\lambda$-geometries related to the Hoffman-Singleton graph (Section \ref{HOSI}) \item {\tt hemisystem} -- schemes arising from hemisystems (Corollary \ref{Chemisystems}) \item {\tt ovals of PG(2,4)} -- Higman's scheme defined on the ovals of $PG(2,4)$ (Section \ref{smallcases}) \item {\tt folded 8-cube} -- (Section \ref{smallcases}) \item {\tt ternary Golay code} -- the decomposition of Goethals and Seidel in \cite{GS} (Section \ref{smallcases}) \item {\tt D\_4(q)} and {\tt O+(8,q), triality} -- Higman's triality schemes and their dismantled schemes (Example
\ref{Ex-triality}) \item {\tt Leech lattice} -- Higman's Leech lattice example
\cite[Ex. 4]{Htriality} (Section \ref{smallcases})\\ \end{itemize}
{\tiny \begin{verbatim}
--------------------
!srg(50,42,35,36) w <= 7 !{21, 16, 6, 1; 1, 6, 16, 21} 2 100=1+ 15+ 42+ 35+ 7 15 5 0 -5 -15 Hoff-Singleton (P) !{21, 16, 8, 1; 1, 4, 16, 21} 3 150=1+ 30+ 42+ 70+ 7 30 10 0 -5 -15 Hoff-Singleton -{21, 16, 9, 1; 1, 3, 16, 21} 4 200=1+ 45+ 42+ 105+ 7 45 15 0 -5 -15 -{21, 16, 9.6,1; 1, 2.4,16, 21} 5 250=1+ 60+ 42+ 140+ 7 60 20 0 -5 -15 -{21, 16, 10, 1; 1, 2, 16, 21} 6 300=1+ 75+ 42+ 175+ 7 75 25 0 -5 -15 --------------------
!srg(56,45,36,36) w <= 5 !{20, 16.333, 4.667, 1; 1, 4.667, 16.333, 20} 2 112=1+ 20+ 45+ 36+ 10 20 6 0 -6 -20 hemisystem !{20, 16.333, 6.222, 1; 1, 3.111, 16.333, 20} 3 168=1+ 40+ 45+ 72+ 10 40 12 0 -6 -20 ovals of PG(2,4) -{20, 16.333, 7, 1; 1, 2.333, 16.333, 20} 4 224=1+ 60+ 45+ 108+ 10 60 18 0 -6 -20 -{20, 16.333, 7.467, 1; 1, 1.867, 16.333, 20} 5 280=1+ 80+ 45+ 144+ 10 80 24 0 -6 -20 --------------------
+srg(64,28,12,12) !{28, 15, 6, 1; 1, 6, 15, 28} 2 128=1+ 8+ 28+ 56+ 35 8 4 0 -4 -8 folded 8-cube (P) --------------------
!srg(81,60,45,42) w <= 3 !{20, 18, 3, 1; 1, 3, 18, 20} 2 162=1+ 36+ 60+ 45+ 20 36 9 0 -9 -36 ternary Golay code +{20, 18, 4, 1; 1, 2, 18, 20} 3 243=1+ 72+ 60+ 90+ 20 72 18 0 -9 -36 ternary Golay code ---------------------
+srg(135,70,37,35) w <= 14 +{50, 31.5, 9.375, 1; 1, 9.375, 31.5, 50} 2 270=1+ 15+ 70+ 120+ 64 15 6 0 -6 -15 D_4(2) (P) +{50, 31.5, 12.5, 1; 1, 6.25, 31.5, 50} 3 405=1+ 30+ 70+ 240+ 64 30 12 0 -6 -15 O+(8,2), triality
{50, 31.5, 14.0625,1; 1, 4.6875,31.5, 50} 4 540=1+ 45+ 70+ 360+ 64 45 18 0 -6 -15
{50, 31.5, 15, 1; 1, 3.75, 31.5, 50} 5 675=1+ 60+ 70+ 480+ 64 60 24 0 -6 -15
{50, 31.5, 15.625, 1; 1, 3.125, 31.5, 50} 6 810=1+ 75+ 70+ 600+ 64 75 30 0 -6 -15 ------------------------
srg(162,140,121,120) w <= 14
{56, 45, 12, 1; 1, 12, 45, 56} 2 324=1+ 36+140+ 126+ 21 36 9 0 -9 -36 (P)
{56, 45, 16, 1; 1, 8, 45, 56} 3 486=1+ 72+140+ 252+ 21 72 18 0 -9 -36
{56, 45, 18, 1; 1, 6, 45, 56} 4 648=1+ 108+140+ 378+ 21 108 27 0 -9 -36
{56, 45, 19.2,1; 1, 4.8,45, 56} 5 810=1+ 144+140+ 504+ 21 144 36 0 -9 -36
{56, 45, 20, 1; 1, 4, 45, 56} 6 972=1+ 180+140+ 630+ 21 180 45 0 -9 -36 ------------------------
srg(196,150,116,110) w <= 6
{45, 40, 6, 1; 1, 6, 40, 45} 2 392=1+ 70+150+ 126+ 45 70 14 0 -14 -70
{45, 40, 8, 1; 1, 4, 40, 45} 3 588=1+ 140+150+ 252+ 45 140 28 0 -14 -70
{45, 40, 9, 1; 1, 3, 40, 45} 4 784=1+ 210+150+ 378+ 45 210 42 0 -14 -70
{45, 40, 9.6,1; 1, 2.4,40, 45} 5 980=1+ 280+150+ 504+ 45 280 56 0 -14 -70
{45, 40, 10, 1; 1, 2, 40, 45} 6 1176=1+ 350+150+ 630+ 45 350 70 0 -14 -70 ------------------------
srg(243,176,130,120) w <= 4
{44, 40.5, 4.5, 1; 1, 4.5, 40.5, 44} 2 486=1+ 99+176+ 144+ 66 99 18 0 -18 -99
{44, 40.5, 6, 1; 1, 3, 40.5, 44} 3 729=1+ 198+176+ 288+ 66 198 36 0 -18 -99
{44, 40.5, 6.75,1; 1, 2.25,40.5, 44} 4 972=1+ 297+176+ 432+ 66 297 54 0 -18 -99 ------------------------
srg(320,220,156,140) w <= 3
{44, 41.667, 3.333, 1; 1, 3.333, 41.667, 44} 2 640=1+ 144+220+ 176+ 99 144 24 0 -24 -144
{44, 41.667, 4.444, 1; 1, 2.222, 41.667, 44} 3 960=1+ 288+220+ 352+ 99 288 48 0 -24 -144 ------------------------
+srg(378,325,280,275) w <= 19 +{104, 88.2, 16.8, 1; 1, 16.8,88.2, 104} 2 756=1+ 78+325+ 300+ 52 78 15 0 -15 -78 hemisystem
{104, 88.2, 22.4, 1; 1, 11.2,88.2, 104} 3 1134=1+ 156+325+ 600+ 52 156 30 0 -15 -78
{104, 88.2, 25.2, 1; 1, 8.4, 88.2, 104} 4 1512=1+ 234+325+ 900+ 52 234 45 0 -15 -78
{104, 88.2, 26.88,1; 1, 6.72,88.2, 104} 5 1890=1+ 312+325+1200+ 52 312 60 0 -15 -78
{104, 88.2, 28, 1; 1, 5.6, 88.2, 104} 6 2268=1+ 390+325+1500+ 52 390 75 0 -15 -78 ------------------------
srg(392,345,304,300) w <= 23
{115, 96, 20, 1; 1, 20, 96, 115} 2 784=1+ 70+345+ 322+ 46 70 14 0 -14 -70 (P)
{115, 96, 26.667,1; 1, 13.333,96, 115} 3 1176=1+ 140+345+ 644+ 46 140 28 0 -14 -70
{115, 96, 30, 1; 1, 10, 96, 115} 4 1568=1+ 210+345+ 966+ 46 210 42 0 -14 -70
{115, 96, 32, 1; 1, 8, 96, 115} 5 1960=1+ 280+345+1288+ 46 280 56 0 -14 -70
{115, 96, 33.333,1; 1, 6.667,96, 115} 6 2352=1+ 350+345+1610+ 46 350 70 0 -14 -70 ------------------------
srg(400,315,250,240) w <= 11
{84, 75, 10, 1; 1, 10, 75, 84} 2 800=1+ 120+315+ 280+ 84 120 20 0 -20 -120
{84, 75, 13.333,1; 1, 6.667,75, 84} 3 1200=1+ 240+315+ 560+ 84 240 40 0 -20 -120
{84, 75, 15, 1; 1, 5, 75, 84} 4 1600=1+ 360+315+ 840+ 84 360 60 0 -20 -120
{84, 75, 16, 1; 1, 4, 75, 84} 5 2000=1+ 480+315+1120+ 84 480 80 0 -20 -120
{84, 75, 16.667,1; 1, 3.333,75, 84} 6 2400=1+ 600+315+1400+ 84 600 100 0 -20 -120 ------------------------
srg(540,385,280,260) w <= 6
{77, 72, 6, 1; 1, 6, 72, 77} 2 1080=1+ 210+385+ 330+154 210 30 0 -30 -210
{77, 72, 8, 1; 1, 4, 72, 77} 3 1620=1+ 420+385+ 660+154 420 60 0 -30 -210
{77, 72, 9, 1; 1, 3, 72, 77} 4 2160=1+ 630+385+ 990+154 630 90 0 -30 -210
{77, 72, 9.6,1; 1, 2.4,72, 77} 5 2700=1+ 840+385+1320+154 840 120 0 -30 -210
{77, 72, 10, 1; 1, 2, 72, 77} 6 3240=1+1050+385+1650+154 1050 150 0 -30 -210 ------------------------
srg(672,440,292,280)
{176, 135, 24, 1; 1, 24, 135, 176} 2 1344=1+ 56+440+ 616+231 56 14 0 -14 -56 (P) ------------------------
srg(704,475,330,300) w <= 4
{76, 72.6, 4.4, 1; 1, 4.4, 72.6, 76} 2 1408=1+ 304+475+ 400+228 304 40 0 -40 -304
{76, 72.6, 5.867,1; 1, 2.933,72.6, 76} 3 2112=1+ 608+475+ 800+228 608 80 0 -40 -304
{76, 72.6, 6.6, 1; 1, 2.2, 72.6, 76} 4 2816=1+ 912+475+1200+228 912 120 0 -40 -304 ------------------------
srg(729,588,477,462) w <= 16
{140, 126, 15, 1; 1,15, 126, 140} 2 1458=1+ 189+ 588+ 540+140 189 27 0 -27 -189
{140, 126, 20, 1; 1,10, 126, 140} 3 2187=1+ 378+ 588+1080+140 378 54 0 -27 -189
{140, 126, 22.5,1; 1, 7.5,126, 140} 4 2916=1+ 567+ 588+1620+140 567 81 0 -27 -189
{140, 126, 24, 1; 1, 6, 126, 140} 5 3645=1+ 756+ 588+2160+140 756 108 0 -27 -189
{140, 126, 25, 1; 1, 5, 126, 140} 6 4374=1+ 945+ 588+2700+140 945 135 0 -27 -189 ------------------------
srg(760,594,468,450) w <= 13
{132, 120.333,12.667,1;1,12.667,120.333,132} 2 1520=1+ 220+ 594+ 540+165 220 30 0 -30 -220
{132, 120.333,16.889,1;1, 8.444,120.333,132} 3 2280=1+ 440+ 594+1080+165 440 60 0 -30 -220
{132, 120.333,19, 1;1, 6.333,120.333,132} 4 3040=1+ 660+ 594+1620+165 660 90 0 -30 -220
{132, 120.333,20.267,1;1, 5.067,120.333,132} 5 3800=1+ 880+ 594+2160+165 880 120 0 -30 -220
{132, 120.333,21.111,1;1, 4.222,120.333,132} 6 4560=1+1100+ 594+2700+165 1100 150 0 -30 -220 ------------------------
srg(800,714,638,630) w <= 34
{204, 175, 30, 1; 1, 30, 175, 204} 2 1600=1+ 120+ 714+ 680+ 85 120 20 0 -20 -120 (P)
{204, 175, 40, 1; 1, 20, 175, 204} 3 2400=1+ 240+ 714+1360+ 85 240 40 0 -20 -120
{204, 175, 45, 1; 1, 15, 175, 204} 4 3200=1+ 360+ 714+2040+ 85 360 60 0 -20 -120
{204, 175, 48, 1; 1, 12, 175, 204} 5 4000=1+ 480+ 714+2720+ 85 480 80 0 -20 -120
{204, 175, 50, 1; 1, 10, 175, 204} 6 4800=1+ 600+ 714+3400+ 85 600 100 0 -20 -120 ------------------------
srg(875,570,385,345) w <= 3
{76, 73.5, 3.5, 1; 1, 3.5, 73.5, 76} 2 1750=1+ 400+ 570+ 475+304 400 50 0 -50 -400
{76, 73.5, 4.667, 1; 1, 2.333,73.5, 76} 3 2625=1+ 800+ 570+ 950+304 800 100 0 -50 -400 -------------------------
+srg(1120,390,146,130) w <= 54 +{300, 212.333,38.889,1;1,38.889,212.333,300} 2 2240=1+ 40+ 390+1080+729 40 12 0 -12 -40 D_4(3) (P) +{300, 212.333,51.852,1;1,25.926,212.333,300} 3 3360=1+ 80+ 390+2160+729 80 24 0 -12 -40 O+(8,3), triality
{300, 212.333,58.333,1;1,19.444,212.333,300} 4 4480=1+ 120+ 390+3240+729 120 36 0 -12 -40
{300, 212.333,62.222,1;1,15.556,212.333,300} 5 5600=1+ 160+ 390+4320+729 160 48 0 -12 -40
{300, 212.333,64.815,1;1,12.963,212.333,300} 6 6720=1+ 200+ 390+5400+729 200 60 0 -12 -40 -------------------------
srg(1210,819,568,525) w <= 6
{117, 112, 6, 1; 1, 6, 112, 117} 2 2420=1+ 495+ 819+ 715+390 495 55 0 -55 -495
{117, 112, 8, 1; 1, 4, 112, 117} 3 3630=1+ 990+ 819+1430+390 990 110 0 -55 -495
{117, 112, 9, 1; 1, 3, 112, 117} 4 4840=1+1485+ 819+2145+390 1485 165 0 -55 -495
{117, 112, 9.6,1; 1, 2.4,112, 117} 5 6050=1+1980+ 819+2860+390 1980 220 0 -55 -495
{117, 112,10, 1; 1, 2, 112, 117} 6 7260=1+2475+ 819+3575+390 2475 275 0 -55 -495 --------------------------
srg(1225,1008,833,812) w <= 23
{216, 196, 21, 1; 1,21, 196, 216} 2 2450=1+ 280+1008+ 945+216 280 35 0 -35 -280
{216, 196, 28, 1; 1,14, 196, 216} 3 3675=1+ 560+1008+1890+216 560 70 0 -35 -280
{216, 196, 31.5,1; 1,10.5,196, 216} 4 4900=1+ 840+1008+2835+216 840 105 0 -35 -280
{216, 196, 33.6,1; 1, 8.4,196, 216} 5 6125=1+1120+1008+3780+216 1120 140 0 -35 -280
{216, 196, 35, 1; 1, 7, 196, 216} 6 7350=1+1400+1008+4725+216 1400 175 0 -35 -280 ----------------------------
+srg(1376,1225,1092,1078) w <= 41 +{300, 264.143,36.857,1;1,36.857,264.143,300} 2 2752=1+ 200+1225+1176+150 200 28 0 -28 -200 hemisystem
{300, 264.143,49.143,1;1,24.571,264.143,300} 3 4128=1+ 400+1225+2352+150 400 56 0 -28 -200
{300, 264.143,55.286,1;1,18.429,264.143,300} 4 5504=1+ 600+1225+3528+150 600 84 0 -28 -200
{300, 264.143,58.971,1;1,14.743,264.143,300} 5 6880=1+ 800+1225+4704+150 800 112 0 -28 -200
{300, 264.143,61.429,1;1,12.286,264.143,300} 6 8256=1+1000+1225+5880+150 1000 140 0 -28 -200 -------------------------
+srg(1408,567,246,216) w <= 27 +{252, 201.667,22, 1;1,22, 201.667,252} 2 2816=1+ 112+ 567+1296+840 112 24 0 -24 -112 Leech lattice +{252, 201.667,29.333,1;1,14.667,201.667,252} 3 4224=1+ 224+ 567+2592+840 224 48 0 -24 -112 Leech lattice
{252, 201.667,33, 1;1,11, 201.667,252} 4 5632=1+ 336+ 567+3888+840 336 72 0 -24 -112
{252, 201.667,35.2, 1;1, 8.8, 201.667,252} 5 7040=1+ 448+ 567+5184+840 448 96 0 -24 -112
{252, 201.667,36.667,1;1, 7.333,201.667,252} 6 8448=1+ 560+ 567+6480+840 560 120 0 -24 -112 ----------------------------
srg(1458,1316,1189,1176) w <= 47
{329, 288, 42, 1; 1, 42, 288, 329} 2 2916=1+ 189+1316+1269+141 189 27 0 -27 -189 (P)
{329, 288, 56, 1; 1, 28, 288, 329} 3 4374=1+ 378+1316+2538+141 378 54 0 -27 -189
{329, 288, 63, 1; 1, 21, 288, 329} 4 5832=1+ 567+1316+3807+141 567 81 0 -27 -189
{329, 288, 67.2,1; 1, 16.8,288, 329} 5 7290=1+ 756+1316+5076+141 756 108 0 -27 -189
{329, 288, 70, 1; 1, 14, 288, 329} 6 8748=1+ 945+1316+6345+141 945 135 0 -27 -189 --------------------------
srg(1625,1044,693,630) w <= 4
{116, 112.667, 4.333,1;1,4.333,112.667, 116} 2 3250=1+ 725+1044+ 900+580 725 75 0 -75 -725
{116, 112.667, 5.778,1;1,2.889,112.667, 116} 3 4875=1+1450+1044+1800+580 1450 150 0 -75 -725
{116, 112.667, 6.5, 1;1,2.167,112.667, 116} 4 6500=1+2175+1044+2700+580 2175 225 0 -75 -725 --------------------------
srg(1701,1190,847,798) w <= 9
{170, 162, 9, 1; 1, 9, 162, 170} 2 3402=1+ 630+1190+1071+510 630 63 0 -63 -630
{170, 162, 12, 1; 1, 6, 162, 170} 3 5103=1+1260+1190+2142+510 1260 126 0 -63 -630
{170, 162, 13.5,1; 1, 4.5,162, 170} 4 6804=1+1890+1190+3213+510 1890 189 0 -63 -630
{170, 162, 14.4,1; 1, 3.6,162, 170} 5 8505=1+2520+1190+4284+510 2520 252 0 -63 -630
{170, 162, 15, 1; 1, 3, 162, 170} 6 10206=1+3150+1190+5355+510 3150 315 0 -63 -630 ----------------------------
srg(1936,1620,1360,1332) w <= 30
{315, 288, 28, 1; 1, 28, 288, 315} 2 3872=1+ 396+1620+1540+315 396 44 0 -44 -396
{315, 288, 37.333,1; 1, 18.667,288, 315} 3 5808=1+ 792+1620+3080+315 792 88 0 -44 -396
{315, 288, 42, 1; 1, 14, 288, 315} 4 7744=1+1188+1620+4620+315 1188 132 0 -44 -396
{315, 288, 44.8, 1; 1, 11.2, 288, 315} 5 9680=1+1584+1620+6160+315 1584 176 0 -44 -396
{315, 288, 46.667,1; 1, 9.333,288, 315} 6 11616=1+1980+1620+7700+315 1980 220 0 -44 -396 --------------------------
srg(1944,1218,792,714) w <= 3
{116, 113.4, 3.6, 1; 1, 3.6, 113.4, 116} 2 3888=1+ 900+1218+1044+725 900 90 0 -90 -900
{116, 113.4, 4.8, 1; 1, 2.4, 113.4, 116} 3 5832=1+1800+1218+2088+725 1800 180 0 -90 -900 -------------------------- \end{verbatim} }
\end{document} |
\begin{document}
\title{From Operads to Dendroidal Sets}
\author{Ittay Weiss}
\address{Mathematical Institute, Utrecht University, Budapestlaan 6, 3584 CD, Utrecht, The Netherlands} \email{[email protected]}
\subjclass [2010]{Primary 55P48, 55U10, 55U35; Secondary 18D50, 18D10, 18G20} \date{January 1, 1994 and, in revised form, June 22, 1994.}
\keywords{Operads, Homotopy Theory, Weak Algebras}
\begin{abstract} Dendroidal sets offer a formalism for the study of $\infty$-operads akin to the formalism of $\infty$-categories by means of simplicial sets. We present here an account of the current state of the theory while placing it in the context of the ideas that led to the conception of dendroidal sets. We briefly illustrate how the added flexibility embodied in $\infty$-operads can be used in the study of $A_{\infty}$-spaces and weak $n$-categories in a way that cannot be realized using strict operads. \end{abstract}
\maketitle
\section{Introduction }
This work aims to be a conceptually self-contained introduction to the theory and applications of dendroidal sets, surveying the current state of the theory and weaving together ideas and results in topology to form a guided tour that starts with Stasheff's work \cite{H spaces} on $H$-spaces, goes on to Boardman and Vogt's work \cite{BV book} on homotopy invariant algebraic structures followed by the generalization \cite{ax hom th ope,BV res ope,res colour ope} of their work by Berger and Moerdijk, arrives at the birth of dendroidal sets \cite{den set,inn Kan in dSet} and ends with the establishment, by Cisinski and Moerdijk in \cite{dSet model hom op,den Seg sp,dSet and simp ope}, of dendroidal sets as models for homotopy operads. With this aim in mind we adopt the convention of at most pointing out core arguments of proofs rather than detailed proofs that can be found elsewhere.
We assume basic familiarity with the language of category theory and mostly follow \cite{Mac Lane CWM}. Regarding enriched category theory we assume little more than familiarity with the definition of a category enriched in a symmetric monoidal category, as can be found in \cite{Kelly Enr. Cat. }. The elementary results on presheaf categories that we use can be found in \cite{Mac Moer Sheaves}. Some comfort in working with simplicial sets is needed, for which the first chapter of \cite{Goers Jardin} suffices. Some elementary understanding of Quillen model categories is desirable; standard references are \cite{model cat localiza,Hovey Model Cat}.
Operads arose in algebraic topology and have since found applications across a wide range of fields including Algebra, Theoretical Physics, and Computer Science. The reason for the success of operads is that they offer a computationally effective formalism for treating algebraic structures of enormous complexity, usually involving some notion of (abstract) homotopy. As such, the first operads to be introduced were topological operads and most of the other variants are similarly enriched in other categories. However, the presentation we give here of operads treats them as a rather straightforward generalization of the notion of category. It is that viewpoint that quite naturally leads to defining dendroidal sets to serve as the codomain category for a nerve construction for operads, extending the usual nerve of categories.
The path we follow is the following one. We first examine the expressive power of non-enriched symmetric operads. We find that by considering operad maps it is possible to classify a wide range of strict algebraic structures such as associative and commutative monoids and to show that operads carry a closed monoidal structure that, via the internal hom, internalizes algebraic structures. We show that the rather trivial fact that algebraic structures can be transferred along isomorphisms, which we call the isomorphism invariance property, is a result of symmetric operads supporting a Quillen model structure compatible with the monoidal structure. We then turn to the much more challenging homotopy invariance property for algebraic structures in the presence of homotopy notions. We show how the theory of operads is used to adequately handle this more subtle situation, however at a cost. The internalization of these so-called weak algebras, via an internal hom construction, is lost. The sequel can be seen as a presentation of the successful attempt to develop a formalism for weak algebraic structures in which the internalization of algebras is restored. This formalism is given by dendroidal sets and a suitable Quillen model structure which can be used to give a proof of the homotopy invariance property (which is completely analogous to the case of non-enriched operads). The consequences and applicability of the added flexibility of dendroidal sets is portrayed by considering $n$-fold $A_{\infty}$-spaces and weak $n$-categories.
Section 2 introduces in the first half non-enriched symmetric operads and presents their basic theory. The second half is concerned with enriched operads and the Berger-Moerdijk generalization of the Boardman-Vogt $W$-construction. Section 3 is a parallel development of the ideas in Section 2. The first half introduces dendroidal sets and presents their basic theory while the second half is concerned with the homotopy coherent nerve construction with applications to $A_{\infty}$-spaces and weak $n$-categories. Section 4 is devoted to the Cisinski-Moerdijk model structure on dendroidal sets and the way it is used to prove the homotopy invariance property. Section 5 closes this work with a brief presentation of a planar dendroidal Dold-Kan correspondence and discusses the yet unsolved problem of obtaining a satisfactory geometric realization for dendroidal sets. \begin{rem*} Below we work in a convenient category of topological spaces $Top$. In some places it is important that this category be closed monoidal, in which case the category of compactly generated Hausdorff spaces would suffice. We will not remark about such issues further. \end{rem*}
\section{Operads and algebraic structures} \begin{rem*} The reader already familiar with operads who reads this section just to familiarize herself with the notation is strongly advised to look at Remark \ref{rem:To-distinguish-between} and Fact \ref{fac:iso invar} below. For her convenience the opening paragraph below recounts the contents of the entire section. \end{rem*} Our journey starts with non-enriched symmetric operads, also known as symmetric multicategories (originating in Lambek's study of deductive systems in logic \cite{Lambek multi cat}) or symmetric coloured operads (e.g., \cite{res colour ope,Leinster higher}). In the literature on operads these structures are underrepresented probably due to the fact that the first operads, introduced by May in \cite{May GILS}, were enriched in topological spaces and many of the most important uses of operads require enrichment. The point of view of operads we adopt is that operads generalize categories. Consequently, just as a study of categories starts with non-enriched categories, with enrichment usually treated at some later stage, we first present non-enriched symmetric operads. The operadically versed reader will immediately recognize that our definition of algebra differs slightly from the standard one. We define the Boardman-Vogt tensor product of symmetric operads and the notion of natural transformations for symmetric operads that endows the category of symmetric operads with the structure of a symmetric closed monoidal category. We then address the isomorphism invariance property and treat it in the context of a suitable Quillen model structure on symmetric operads. We then turn to the much more subtle and interesting case of the homotopy invariance property and give an expository treatment of the theory developed by Berger and Moerdijk relevant for the rest of the presentation.
\subsection{Trees}
Symmetric (also called 'non-planar') rooted trees are useful in the study of symmetric operads. There is no standard definition of 'tree' that is commonly used (Ginzburg and Kapranov in \cite{GK Kozul duality ope} use a topological definition while Leinster in \cite{Leinster higher} uses a combinatorial one) but all approaches are essentially the same. More recently, Joachim Kock in \cite{Kock poly func} established a close connection between trees and polynomial functors, offering yet another formalism of trees while shedding a different light on the symbiosis between operads and trees.
We present here the formalism of trees we use and introduce terminology for commonly occurring trees as well as grafting of trees. We end the section by presenting a generalization of posets that shows trees to be analogues of finite linear orders.
\subsubsection{Symmetric rooted trees} \begin{defn} A \emph{tree} (short for symmetric rooted tree) is a finite poset $(T,\le)$ which has a smallest element and such that for each $e\in T$ the set $\{y\in T\mid y\le e\}$ is linearly ordered. The elements of $T$ are called \emph{edges }and the unique smallest edge is called the \emph{root}. Part of the information of a tree is a subset $L=L(T)$ of maximal elements, which are called \emph{leaves}. An edge is \emph{outer} if it is either the root or it belongs to $L$, otherwise it is called \emph{inner}. \end{defn} Given edges $e,e'\in T$ we write $e/e'$ if $e'<e$ and if for any $x\in T$ for which $e'\le x\le e$ holds that either $x=e'$ or $x=e$ . For a non-leaf edge $e$ the set $in(e)=\{t\in T\mid t/e\}$ is called the set of \emph{incoming edges} into $e$. For such an edge $e$ the set $v=\{e\}\cup in(e)$ is called the \emph{vertex} above $e$ and we define $in(v)=in(e)$ and $out(v)=e$ which are called, respectively, the set of \emph{incoming edges} and the \emph{outgoing edge} associated to $v$. The \emph{valence }of
$v$ is equal to $|in(v)|$ and could be $0$. Note that there is no vertex associated to a leaf. We will draw trees as the graph duals of their Hasse diagrams, with the root at the bottom, and will use a $\bullet$ for vertices. For example, in the tree\[ \xymatrix{*{\,}\ar@{-}[dr]_{e} & & *{\,}\ar@{-}[dl]^{f}\\
\,\ar@{}[r]|{\,\,\,\,\,\,\,\,\,\,\,\,\,\, v} & *{\bullet}\ar@{-}[dr]_{b} & & *{\,}\ar@{-}[dl]_{c}\ar@{}[r]|{\,\,\,\,\,\,\,\,\,\,\,\, w} & *{\bullet}\ar@{-}[dll]^{d}\\
& & *{\bullet}\ar@{-}[d]_{r} & \,\ar@{}[l]^{u\,\,\,\,\,\,\,\,\,\,\,}\\
& & *{\,}} \] there are three vertices of valence 2, 3, and 0 and three leaves $L=\{e,f,c\}$. The outer edges are $e,f,c$, and $r$, where $r$ is the root. The inner edges are then $b$ and $d$.
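For concreteness (this unpacking is ours), the poset underlying the tree just drawn is given by\[ r<b<e,\qquad r<b<f,\qquad r<c,\qquad r<d,\] with $L=\{e,f,c\}$. Accordingly $e/b$, $f/b$, $b/r$, $c/r$, and $d/r$, so that $in(b)=\{e,f\}$, $in(r)=\{b,c,d\}$, and $in(d)=\emptyset$, recovering the vertices $v=\{b,e,f\}$, $u=\{r,b,c,d\}$, and $w=\{d\}$ of valence 2, 3, and 0 respectively.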
\subsubsection{Some common trees}
The following types of trees appear often enough in the theory of dendroidal sets to merit their own notation. \begin{defn} For each $n\ge 0$, a tree $L_{n}$ of the form\[ \xymatrix{*{}\ar@{-}[d]\\ *{\bullet}\ar@{..}[d]\\ *{\bullet}\ar@{-}[d]\\ *{\bullet}\ar@{-}[d]\\ *{}} \] with one leaf and $n$ vertices, all unary (i.e., each vertex has valence equal to $1$), will be called a \emph{linear tree of order $n$}. \end{defn} The special case of the tree $L_{0}$\[ \xymatrix{*{}\ar@{-}[d]\\ *{}} \] consisting of just one edge and no vertices is called the \emph{unit} tree. We denote this tree by $\eta$, or $\eta_{e}$ if we wish to explicitly name its unique edge. In this tree, the only edge is both the root and a leaf. \begin{defn} For each $n\ge 0$, a tree $C_{n}$ of the form \[
\xymatrix{*{}\ar@{-}[dr] & *{}\ar@{}[d]|{\cdots} & *{}\ar@{-}[dl]\\
& *{\bullet}\ar@{-}[d]\\
& *{}} \] that has just one vertex and $n$ leaves will be called an \emph{$n$-corolla}. Note that the case $n=0$ results in a tree different from $\eta$. \end{defn}
\subsubsection{Grafting} \begin{defn} Let $T$ and $S$ be two trees whose only common edge is the root $r$ of $S$ which is also one of the leaves of $T$. The \emph{grafting}, $T\circ S$, of $S$ on $T$ along $r$ is the poset $T\cup S$ with the obvious poset structure and set of leaves equal to $(L(S)\cup L(T))-\{r\}$. \end{defn} Pictorially, the grafted tree $T\circ S$ is obtained by putting the tree $S$ on top of the tree $T$ by identifying the output edge of $S$ with the input edge $r$ of $T$. By repeatedly grafting, one can define a full grafting operation $T\circ(S_{1},\cdots,S_{n})$ in the obvious way.
We now state a useful decomposition of trees that allows for inductive proofs on trees. The proof is trivial. \begin{prop} Let $T$ be a tree. Suppose $T$ has root $r$ and $\{r,e_{1},\cdots,e_{n}\}$ is the vertex above $r$. Let $T_{e_{i}}$ be the tree that contains the edge $e_{i}$ as root and everything above it in $T$. Then \[ T=T_{root}\circ(T_{e_{1}},\cdots,T_{e_{n}})\] where $T_{root}$ is the $n$-corolla consisting of $r$ as root and $\{e_{1},\cdots,e_{n}\}$ as the set of leaves. \end{prop}
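To illustrate the decomposition (our unpacking, in the notation of the tree drawn at the beginning of this section): there the vertex above the root $r$ is $\{r,b,c,d\}$ and the decomposition reads\[ T=T_{root}\circ(T_{b},T_{c},T_{d}),\] where $T_{root}$ is the $3$-corolla with root $r$ and leaves $b,c,d$, $T_{b}$ is the $2$-corolla with root $b$ and leaves $e,f$ (the vertex $v$), $T_{c}$ is the unit tree $\eta_{c}$, and $T_{d}$ is the $0$-corolla with root $d$ (the vertex $w$).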
\subsubsection{Trees and dendroidally ordered sets}
The trees we defined above are going to be the objects of the category $\Omega$ whose presheaf category $Set_{\Omega}$ is the category of dendroidal sets. Recall that the simplicial category $\Delta$ (whose presheaf category is the category of simplicial sets) can be defined as (a skeleton of) the category of totally ordered finite sets with order preserving maps. In this section we present an extension of the notion of totally ordered finite sets closely related to trees. The content of this section is not used anywhere in the sequel and is presented for the sake of completeness. Consequently we give no proofs and refer the reader to \cite{thesis} for more details.
First we extend the notion of a relation and that of a poset to what we call broad relation and broad poset. For a set $A$ we denote by $A^{+}=(A^{+},+,0)$ the free commutative monoid on $A$. A \emph{broad relation} is a pair $(A,R)$ where $A$ is a set and $R$ is a subset of $A\times A^{+}$. As is common with ordinary relations, we use the notation $aR(a_{1}+\cdots+a_{n})$ instead of $(a,(a_{1}+\cdots+a_{n}))\in R$. \begin{defn} A \emph{broad poset} is a broad relation $(A,R)$ satisfying:\end{defn} \begin{enumerate} \item Reflexivity: $aRa$ holds for any $a\in A$. \item Transitivity: For all $a_{0},\cdots,a_{n}\in A$ and $b_{1},\cdots,b_{n}\in A^{+}$ such that $a_{i}Rb_{i}$ for $1\le i\le n$, it holds that if $a_{0}R(a_{1}+\cdots+a_{n})$ then $a_{0}R(b_{1}+\cdots+b_{n})$. \item Anti-symmetry: For all $a_{1},a_{2}\in A$ and $b_{1},b_{2}\in A^{+}$ if $a_{1}R(a_{2}+b_{2})$ and $a_{2}R(a_{1}+b_{1})$ then $a_{1}=a_{2}$. \end{enumerate} When $(A,R)$ is a broad poset we denote $R$ by $\le$. The meaning of $<$ is then defined in the usual way.
A \emph{map} of broad posets $f:A\rightarrow B$ is a set function preserving the broad poset structure, that is, if $a\le(a_{1}+\cdots+a_{n})$ then $f(a)\le(f(a_{1})+\cdots+f(a_{n}))$. \begin{defn} We denote by $BrdPoset$ the category of all broad posets and their maps. \end{defn} Let $\star$ be a singleton set $\{*\}$ with the broad poset structure given by $*\le*$. Note that $\star$ is not a terminal object in $BrdPoset$. \begin{lem} (Slicing lemma for broad posets) There is an isomorphism of categories between $BrdPoset/\star$ and the category $Poset$ of posets and order preserving maps. Moreover, along this isomorphism one obtains a functor $k_{!}:Poset\to BrdPoset$ which has a right adjoint $k^{*}:BrdPoset\to Poset$ which itself has a right adjoint $k_{*}:Poset\to BrdPoset$. \end{lem} As motivation for the following definition recall that a finite ordinary poset $A$ is linearly ordered if, and only if, it has a smallest element and for every $a\in A$ the set $a_{\uparrow}=\{x\in A\mid a<x\}$ is either empty or has a smallest element. \begin{defn} A finite broad poset $A$ is called \emph{dendroidally ordered} if \end{defn} \begin{enumerate} \item There is an element $r\in A$ such that for every $a\in A$ there is $b\in A^{+}$ such that $r\le a+b$. \item For every $a\in A$ the set $a_{\uparrow}=\{b\in A^{+}\mid a<b\}$ is either empty or it contains an element $s(a)=a_{1}+\cdots+a_{n}$ such that every $b\in a_{\uparrow}$ can be written as $b=b_{1}+\cdots+b_{n}$ with $a_{i}\le b_{i}$ for all $1\le i\le n$. \item For every $a_{0},\cdots,a_{n}\in A$, if $a_{0}\le a_{1}+\cdots+a_{n}$ then $a_{i}\ne a_{j}$ whenever $i\ne j$. \end{enumerate} Trees are related to finite dendroidally ordered sets as follows. Given a tree $T$ define a broad relation on the set $E(T)$ of edges by declaring $e\le e_{1}+\cdots+e_{n}$ precisely when there is a vertex $v$ such that $in(v)=\{e_{1},\cdots,e_{n}\}$ (without repetitions) and $out(v)=e$. The transitive closure of this broad relation is then a dendroidally ordered set. This construction can be used to give an equivalence of categories between the full subcategory $DenOrd$ of $BrdPoset$ spanned by the dendroidally ordered sets and the dendroidal category $\Omega$ defined below. It is easily seen that $DenOrd$, upon slicing over $\star$, is isomorphic to the category of all finite linearly ordered sets and order preserving maps.
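As a small illustration (ours) of the last construction, consider again the example tree from the beginning of this section, with edges $E(T)=\{r,b,c,d,e,f\}$. The generating broad relations are\[ r\le b+c+d,\qquad b\le e+f,\qquad d\le0,\] coming from the vertices $u$, $v$, and $w$ respectively (with $0$ the empty sum). Closing up under reflexivity and transitivity adds, for instance, $r\le e+f+c+d$ and $r\le b+c$, and the resulting broad poset is dendroidally ordered with $s(r)=b+c+d$ and $s(b)=e+f$.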
\subsection{Operads and algebras}
We now present symmetric operads viewed as a generalization of categories where arrows are allowed to have domains of arity $n$ for any $n\in\mathbb{N}$. We then define the notion of $\mathcal{P}$-algebras for a symmetric operad $\mathcal{P}$, which are often referred to as the raison d'\^etre of operads. We deviate here from the common definition of algebras, noting that our definition encompasses the standard one. We define an algebra to simply be a morphism between symmetric operads, the difference being purely syntactic. The assertion that symmetric operads exist in order to define algebras thus agrees with the idea that in any category the objects' raison d'\^etre is to serve as domains and codomains of arrows. \begin{defn} A \emph{planar operad} $\mathcal{P}$ consists of a class $\mathcal{P}_{0}$ whose elements are called the \emph{objects} of $\mathcal{P}$ and, for each sequence $P_{0},\cdots,P_{n}\in\mathcal{P}_{0}$, a set $\mathcal{P}(P_{1},\cdots,P_{n};P_{0})$ whose elements are called \emph{arrows} depicted by $\psi:(P_{1},\cdots,P_{n})\rightarrow P_{0}$. With this notation $(P_{1},\cdots,P_{n})$ is the \emph{domain} of $\psi$, $P_{0}$ its \emph{codomain}, and $n$ its \emph{arity} (which is allowed to be $0$). The domain and codomain are assumed to be uniquely determined by $\psi$. There is for each object $P\in\mathcal{P}_{0}$ a chosen arrow $id_{P}:P\rightarrow P$ called the \emph{identity }at $P$. There is a specified composition rule: Given $\psi_{i}:(P_{1}^{i},\cdots,P_{m_{i}}^{i})\rightarrow P_{i}$, $1\le i\le n$, and an arrow $\psi:(P_{1},\cdots,P_{n})\rightarrow P_{0}$ their composition is denoted by $\psi\circ(\psi_{1},\cdots,\psi_{n})$ and has domain $(P_{1}^{1},\cdots,P_{m_{1}}^{1},P_{1}^{2},\cdots,P_{m_{2}}^{2},\cdots,P_{1}^{n},\cdots,P_{m_{n}}^{n})$ and codomain $P_{0}$. The composition is to obey the following unit and associativity laws:\end{defn} \begin{itemize} \item Left unit axiom: $id_{P_{0}}\circ\psi=\psi$ \item Right unit axiom: $\psi\circ(id_{P_{1}},\cdots,id_{P_{n}})=\psi$ \item Associativity axiom: the composition \[ \psi\circ(\psi_{1}\circ(\psi_{1}^{1},\cdots,\psi_{m_{1}}^{1}),\cdots,\psi_{n}\circ(\psi_{1}^{n},\cdots,\psi_{m_{n}}^{n}))\] is equal to\[ (\psi\circ(\psi_{1},\cdots,\psi_{n}))\circ(\psi_{1}^{1},\cdots,\psi_{m_{1}}^{1},\cdots,\psi_{1}^{n},\cdots,\psi_{m_{n}}^{n}).\]
\end{itemize} The morphisms of planar operads are the obvious structure preserving maps. A map of operads will also be referred to as a \emph{functor}. \begin{defn} A \emph{symmetric operad} is a planar operad $\mathcal{P}$ together with actions of the symmetric groups in the following sense: for each $n\in\mathbb{N}$, objects $P_{0},\cdots,P_{n}\in\mathcal{P}_{0}$, and a permutation $\sigma\in\Sigma_{n}$ there is a function $\sigma^{*}:\mathcal{P}(P_{1},\cdots,P_{n};P_{0})\to\mathcal{P}(P_{\sigma(1)},\cdots,P_{\sigma(n)};P_{0})$. We write $\sigma^{*}(\psi)$ for the value of the action of $\sigma$ on $\psi:(P_{1},\cdots,P_{n})\to P_{0}$ and demand that for any two permutations $\sigma,\tau\in\Sigma_{n}$ there holds $(\sigma\tau)^{*}(\psi)=\tau^{*}\sigma^{*}(\psi)$. Moreover, these actions of the permutation groups are to be compatible with compositions in the obvious sense (see \cite{Leinster higher,May GILS} for more details). \emph{Functors} of symmetric operads $\mathcal{P}\to\mathcal{Q}$ are functors of the underlying planar operads that respect the actions of the symmetric groups. \end{defn} When dealing with operads we make a distinction between small and large ones according to whether the class of objects is, respectively, a set or a proper class. If more care is needed and size issues become important we implicitly assume working in the formalism of Grothendieck universes (\cite{Groth univers}) similarly to the way such issues are avoided in category theory. We now obtain the category $Ope_{\pi}$ of small planar operads and their functors as well as the category $Ope$ of small symmetric operads and their functors. There is clearly a forgetful functor $Ope\to Ope_{\pi}$ which has an easily constructed left adjoint $S:Ope_{\pi}\to Ope$ called the \emph{symmetrization }functor. \begin{rem} We note that our symmetric operads are also called symmetric multicategories (see e.g., \cite{Leinster higher}) as well as symmetric coloured operads. The composition as given above is sometimes called full $\circ$ composition. Using the identities in an operad we can then define what is known as the $\circ_{i}$ composition as follows. Given an arrow $\psi$ of arity $n$ and $1\le i\le n$ one can compose an arrow $\varphi$ onto the $i$-th place of the domain of $\psi$, provided the object at the $i$-th place is equal to the codomain of $\varphi$, by means of $\psi\circ_{i}\varphi=\psi\circ(id,\cdots,id,\varphi,id,\cdots,id)$ with $\varphi$ appearing in the $i$-th place. Some authors consider operads defined in terms of the $\circ_{i}$ operations rather than the full $\circ$ composition. In the presence of identities there is no essential difference but if identities are not assumed then one obtains a slightly weaker structure called a \emph{pseudo-operad }(see \cite{MSS book}). The operads we consider always have identities so that the full $\circ$ and partial $\circ_{i}$ compositions differ only cosmetically and will be used interchangeably as convenient. \end{rem} Operads are closely related to categories. Indeed, one trivially sees that a category is an operad where each arrow has arity equal to $1$.
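\begin{example} (The standard motivating example from the literature, recalled here in our notation.) For a set $X$ let $\mathcal{E}nd_{X}$ be the symmetric operad with a single object $X$ and whose set of arrows of arity $n$ is the set of all functions $X^{n}\to X$. Composition is substitution of functions, the identity is the identity function, and $\sigma^{*}\psi$ permutes the arguments of $\psi$. This is a special case of Lemma \ref{lem:mon cat as ope} below, applied to the symmetric monoidal category $(Set,\times,\{*\})$ and restricted to the single object $X$. \end{example}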
A slightly less trivial and more useful fact is the following. Call a symmetric operad \emph{reduced} if it has no $0$-ary operations. We denote by $\star$ an operad with one object and only the identity arrow on it. \begin{lem} (Slicing lemma for symmetric operads) There is an isomorphism between the category $Cat$ of small categories and the slice category $Ope/\star$. Moreover, there are functors $j_{!}:Cat\to Ope$ and $j^{*}:Ope\to Cat$ such that $j^{*}$ is right adjoint to $j_{!}$. The functor $j^{*}$ does not preserve pushouts and thus does not have a right adjoint. However, the restriction of $j^{*}$ to the subcategory of reduced operads does have a right adjoint. Under the isomorphism $Cat\cong Ope/\star$ the functor $j_{!}$ is the forgetful functor $Cat=Ope/\star\to Ope$. \end{lem} \begin{proof} We explicitly describe the functors, omitting the details. Given a category $\mathcal{C}$ the operad $j_{!}(\mathcal{C})$ has $j_{!}(\mathcal{C})_{0}=\mathcal{C}_{0}$ (here $\mathcal{C}_{0}$ stands for the class of objects of the category $\mathcal{C}$) and the arrows in $j_{!}(\mathcal{C})$ are given for $P_{0},\cdots,P_{n}\in j_{!}(\mathcal{C})_{0}$ as follows:\[ j_{!}(\mathcal{C})(P_{1},\cdots,P_{n};P_{0})=\begin{cases} \mathcal{C}(P_{1},P_{0}) & \mbox{if }n=1\\ \emptyset & \mbox{if }n\ne1\end{cases}\] The composition is the same as in $\mathcal{C}$. The right adjoint $j^{*}$ is given for an operad $\mathcal{P}$ as follows. $j^{*}(\mathcal{P})_{0}=\mathcal{P}_{0}$ and the arrows in $j^{*}(\mathcal{P})$ are given for $C,D\in j^{*}(\mathcal{P})_{0}$ by:\[ j^{*}(\mathcal{P})(C,D)=\mathcal{P}(C;D).\] The composition is the same as in $\mathcal{P}$. Finally, the functor $j_{*}$, right adjoint to the restriction of $j^{*}$ to reduced operads, is defined for a category $\mathcal{C}$ as follows. $j_{*}(\mathcal{C})_{0}=\mathcal{C}_{0}$ and the arrows in $j_{*}(\mathcal{C})$ are given for $P_{0},\cdots,P_{n}\in j_{*}(\mathcal{C})_{0}$ as follows:\[ j_{*}(\mathcal{C})(P_{1},\cdots,P_{n};P_{0})=\begin{cases} \mathcal{C}(P_{1},P_{0}) & \mbox{if }n=1\\ \{(P_{1},\cdots,P_{n};P_{0})\} & \text{if }n\ne1\end{cases}\] Composition of unary arrows is given as in $\mathcal{C}$. Composition of two arrows at least one of which is not unary is uniquely determined, since the hom-set in which the composite is to be found consists of just one element. It is therefore automatic that the composition so defined is associative. \end{proof} \begin{rem} \label{rem:Slicing}The construction of the three functors above follows from general abstract nonsense and is related to locally cartesian closed categories. Indeed, if $\mathcal{C}$ is a category with a terminal object $*$, then for any object $A\in\mathcal{C}_{0}$ the unique arrow $A\to*$ induces a functor between the slice categories $F_{!}:\mathcal{C}/A\to\mathcal{C}/*$. It is then a general result that $F_{!}$ has a right adjoint $F^{*}$ if, and only if, $\mathcal{C}$ admits products with $A$. Moreover, $F^{*}$ has a right adjoint $F_{*}$ if, and only if, $A$ is exponentiable in $\mathcal{C}$. The case we had at hand is when $\mathcal{C}$ is the category of symmetric operads, or its subcategory of reduced symmetric operads, and $A=\star$. \end{rem} Due to this intimate connection between symmetric operads and categories we will employ category theoretic terminology in the context of symmetric operads.
For example, we will refer to morphisms of operads as functors, and feel free to use category theoretic terminology within the 'category part' $j^{*}(\mathcal{P})$ of an operad $\mathcal{P}$. So the notion of a unary arrow $f$ in $\mathcal{P}$ being, for instance, an isomorphism, a monomorphism, or a split idempotent simply means that $f$ has the same property in the category $j^{*}(\mathcal{P})$. In this spirit we give the following definition of equivalence of operads. \begin{defn} Let $\mathcal{P}$ and $\mathcal{Q}$ be symmetric operads and $F:\mathcal{P}\to\mathcal{Q}$ a functor. We say that $F$ is an \emph{equivalence of operads }if $F$ is fully faithful (which means that it is bijective on each hom-set) and essentially surjective (which means that $j^{*}(F)$ is an essentially surjective functor of categories). \end{defn} \begin{rem} We make a few remarks to emphasize differences and similarities between the categories $Ope$ and $Cat$:\end{rem} \begin{itemize} \item $Ope$ is small complete and small cocomplete. \item There is a unique initial operad which is, of course, equal to $j_{!}(\emptyset)$. \item For the operad $\star$ above and a terminal category $*$ there holds that $\star\cong j_{!}(*)$ and $*\cong j^{*}(\star)$. \item $\star$ is not terminal but is exponentiable in the category of reduced symmetric operads. \item The terminal object in $Ope$ is the operad $Comm=j_{*}(*)$ consisting of one object and one $n$-ary operation for every $n\in\mathbb{N}$. \item The subobjects of the terminal operad $Comm$ are all of the following form. An operad with one object and for every $n\ge0$ at most one arrow of arity $n$ such that if an arrow of arity $m$ and an arrow of arity $k$ exist then there is also an arrow of arity $m+k-1$. \end{itemize} A typical example of category is obtained by fixing some mathematical object and considering the totality of those objects and their naturally occurring morphisms. In many cases these objects also have a notion of 'morphism of several variables' in which case the totality of objects and their multivariable arrows will actually form an operad. One case in which this is guaranteed is the following. \begin{lem} \label{lem:mon cat as ope}Let $(\mathcal{E},\otimes,I)$ be a symmetric monoidal category and consider for every $x_{0},\cdots,x_{n}\in\mathcal{E}_{0}$ the set $\hat{\mathcal{E}}(x_{1},\cdots,x_{n};x_{0})=\mathcal{E}(x_{1}\otimes\cdots\otimes x_{n},x_{0})$. With the obvious definitions of composition and identities this construction defines a symmetric operad $\hat{\mathcal{E}}$ with $(\hat{\mathcal{E}})_{0}=\mathcal{E}_{0}$.\end{lem} \begin{proof} The associativity of the composition in $\hat{\mathcal{E}}$ is a result of the coherence in $\mathcal{E}$.\end{proof} \begin{rem} Certainly not every symmetric operad is obtained in that way from a symmetric monoidal category (e.g., any of the proper subobjects of $Comm$ or any operad of the form $j_{!}(\mathcal{C})$ for a category $\mathcal{C}$). It is possible to internally characterize those symmetric operads that do arise in that way from symmetric monoidal categories, as is explained in detail in \cite{Hermida rep mult cat} and indicated in \cite{Leinster higher}. \end{rem} Another type of category that arises naturally is one that encodes some properties of arrows abstractly. For example, the free-living isomorphism $0\leftrightarrows 1$ is a category with two distinct objects and, except for the two identities, two other arrows between the objects, each of which is the inverse of the other. 
A functor from the free-living isomorphism to any category $\mathcal{C}$ corresponds exactly to a choice of an isomorphism in $\mathcal{C}$ and can be seen as the abstract free-living isomorphism becoming concrete in the category $\mathcal{C}$. A similar phenomenon is true in operads, where one readily sees the much greater expressive power of operads compared to categories. Consider for example the terminal operad $Comm$, for which it is straightforward to prove that any functor of operads $Comm\to\hat{\mathcal{E}}$ is the same as a commutative monoid in $\mathcal{E}$, for any symmetric monoidal category $\mathcal{E}$. There is no category $\mathcal{C}$ with the property that functors $\mathcal{C}\to\mathcal{E}$ correspond to commutative monoids in $\mathcal{E}$. \begin{rem} \label{rem:To-distinguish-between}To distinguish between symmetric operads such as $Comm$ thought of as encoding properties of arrows and symmetric operads such as $\hat{\mathcal{E}}$ thought of as environments where operads $\mathcal{P}$ are interpreted concretely we will use letters near $\mathcal{P}$ for abstract symmetric operads and letters near $\mathcal{E}$ for symmetric operads as environments (whether they come from a symmetric monoidal category or not). We will also call symmetric operads $\mathcal{E}$ \emph{environment operads}. The distinction is purely syntactic. \end{rem} The utility of operads is in their ability to codify quite a wide range of algebraic structures in the way described above. The usual terminology one uses is that of an \emph{algebra }of an operad. The following definition of algebra is more general than the usual one (e.g., \cite{MSS book,May GILS}). \begin{defn} Let $\mathcal{P}$ and $\mathcal{E}$ be symmetric operads and consider a functor $F:\mathcal{P}\to\mathcal{E}$. If $F_{0}:\mathcal{P}_{0}\to\mathcal{E}_{0}$ is the object part of the functor $F$ we say that $F$ is a $\mathcal{P}$-\emph{algebra} structure on the collection of objects $\{F_{0}(P)\}_{P\in\mathcal{P}_{0}}$ in the environment operad $\mathcal{E}$. \end{defn} Many basic properties of $\mathcal{P}$-algebras are captured efficiently by the introduction of a closed monoidal structure on $Ope$. The appropriate tensor product of symmetric operads is the Boardman-Vogt tensor product which was first introduced in \cite{BV book} for (certain structures that are essentially equivalent to) symmetric operads enriched in topological spaces. The construction is general enough that it can be performed for operads enriched in other monoidal categories and certainly also in the non-enriched case, which is the version we give now. \begin{defn} Let $\mathcal{P}$ and $\mathcal{Q}$ be two symmetric operads. Their \emph{Boardman-Vogt tensor product} is the symmetric operad $\mathcal{P}\otimes\mathcal{Q}$ with $(\mathcal{P}\otimes\mathcal{Q})_{0}=\mathcal{P}_{0}\times\mathcal{Q}_{0}$ given in terms of generators and relations as follows. For each $Q\in\mathcal{Q}_{0}$ and each operation $\psi\in\mathcal{P}(P_{1},\cdots,P_{n};P)$ there is a generator $\psi\otimes Q$ with domain $(P_{1},Q),\cdots,(P_{n},Q)$ and codomain $(P,Q)$. For each $P\in\mathcal{P}_{0}$ and an operation $\varphi\in\mathcal{Q}(Q_{1},\cdots,Q_{m};Q)$ there is a generator $P\otimes\varphi$ with domain $(P,Q_{1}),\cdots,(P,Q{}_{m})$ and codomain $(P,Q)$. There are five types of relations among the arrows ($\sigma$ and $\tau$ below are permutations whose roles are explained below):
1) $(\psi\otimes Q)\circ((\psi_{1}\otimes Q),\cdots,(\psi_{n}\otimes Q))=(\psi\circ(\psi_{1},\cdots,\psi_{n}))\otimes Q$
2) $\sigma^{*}(\psi\otimes Q)=(\sigma^{*}\psi)\otimes Q$
3) $(P\otimes\varphi)\circ((P\otimes\varphi_{1}),\cdots,(P\otimes\varphi_{m}))=P\otimes(\varphi\circ(\varphi_{1},\cdots,\varphi_{m}))$
4) $\sigma^{*}(P\otimes\varphi)=P\otimes(\sigma^{*}\varphi)$
5) $(\psi\otimes Q)\circ((P_{1}\otimes\varphi),\cdots,(P_{n}\otimes\varphi))=\tau^{*}((P\otimes\varphi)\circ((\psi\otimes Q_{1}),\cdots,(\psi\otimes Q_{m})))$ \end{defn} By the relations above we mean every possible choice of arrows $\psi,\varphi,\psi_{i},\varphi_{j}$ for which the compositions are defined. The relations of type 1 and 2 ensure that for any $Q\in\mathcal{Q}_{0}$, the map $P\mapsto(P,Q)$ naturally extends to a functor $\mathcal{P}\rightarrow\mathcal{P}\otimes\mathcal{Q}$. Similarly, the relations of type 3 and 4 guarantee that for each $P\in\mathcal{P}_{0}$, the map $Q\mapsto(P,Q)$ naturally extends to a functor $\mathcal{Q}\rightarrow\mathcal{P}\otimes\mathcal{Q}$. The relation of type 5 can be visualized as follows. The left hand side can be drawn as\[ \xymatrix{*{}\ar@{-}[dr]_{(P_{1},Q_{1})} & & *{}\ar@{-}[dl]^{(P_{1},Q_{m})} & & *{}\ar@{-}[dr]_{(P_{n},Q_{1})} & & *{}\ar@{-}[dl]^{(P_{n},Q_{m})}\\
\ar@{}[r]|{P_{1}\otimes\varphi} & *{\bullet}\ar@{-}[drr]_{(P_{1},Q)} & & & \ar@{}[r]|{P_{n}\otimes\varphi} & *{\bullet}\ar@{-}[dll]^{(P_{n},Q)}\\
& & \ar@{}[r]|{\psi\otimes Q} & *{\bullet}\ar@{-}[d]^{(P,Q)}\\
& & & *{}} \] while the right hand side can be drawn as \[ \xymatrix{*{}\ar@{-}[dr]_{(P_{1},Q_{1})} & & *{}\ar@{-}[dl]^{(P_{n},Q_{1})} & & *{}\ar@{-}[dr]_{(P_{1},Q_{m})} & & *{}\ar@{-}[dl]^{(P_{n},Q_{m})}\\
\ar@{}[r]|{\psi\otimes Q_{1}} & *{\bullet}\ar@{-}[drr]_{(P,Q_{1})} & & & \ar@{}[r]|{\psi\otimes Q_{m}\,\,\,\,} & *{\bullet}\ar@{-}[dll]^{(P,Q_{m})}\\
& & \ar@{}[r]|{P\otimes\varphi} & *{\bullet}\ar@{-}[d]^{(P,Q)}\\
& & & *{}} \] As given, the operations cannot be equated since their domains do not agree. There is however an evident permutation $\tau$ that equates the domains and it is that permutation $\tau$ that is used in the equation of type $5$ above. \begin{thm} The category $(Ope,\otimes,\star)$ is a symmetric closed monoidal category. \end{thm} \begin{proof} The internal hom operad $[\mathcal{P},\mathcal{Q}]$ has as objects all morphisms of operads $F:\mathcal{P}\to\mathcal{Q}$ and the arrows with domain $F_{1},\cdots,F_{n}$ and codomain $F_{0}$ are analogues of natural transformations as follows. A \emph{natural transformation} $\alpha$ from $(F_{1},\cdots,F_{n})$ to $F_{0}$ is a family $\{\alpha_{P}\}_{P\in\mathcal{P}_{0}}$, with $\alpha_{P}\in\mathcal{Q}(F_{1}(P),\cdots,F_{n}(P);F_{0}(P))$, satisfying the following property. Given any operation $\psi\in\mathcal{P}(P_{1},\cdots,P_{m};P)$ consider the following diagrams in $\mathcal{Q}$: \[
\xymatrix{*{}\ar@{-}[rd]_{F_{1}(P_{1})} & \ar@{}[d]|<<<{\cdots} & *{}\ar@{-}[dl]^{F_{n}(P_{1})} & & *{}\ar@{-}[dr]_{F_{1}(P_{m})} & \ar@{}[d]|<<<{\cdots} & *{}\ar@{-}[dl]^{F_{n}(P_{m})}\\
\ar@{}[r]|{\,\,\,\,\,\alpha_{P_{1}}} & *{\bullet}\ar@{-}[drr]_{F_{0}(P_{1})} & & \ar@{}[d]|<<<{\cdots} & & *{\bullet}\ar@{-}[dll]^{F_{0}(P_{m})} & \ar@{}[l]|{\alpha_{P_{m}}\,\,\,\,\,}\\
& & & *{\bullet}\ar@{-}[d]_{F_{0}(P)}\ar@{}[r]|{F_{0}(\psi)} & *{}\\
& & & *{}} \]
and \[
\xymatrix{*{}\ar@{-}[rd]_{F_{1}(P_{1})} & \ar@{}[d]|<<<{\cdots} & *{}\ar@{-}[dl]^{F_{1}(P_{m})} & & *{}\ar@{-}[dr]_{F_{n}(P_{1})} & \ar@{}[d]|<<<{\cdots} & *{}\ar@{-}[dl]^{F_{n}(P_{m})}\\
\ar@{}[r]|{\,\, F_{1}(\psi)} & *{\bullet}\ar@{-}[drr]_{F_{1}(P)} & & \ar@{}[d]|<<<{\cdots} & & *{\bullet}\ar@{-}[dll]^{F_{n}(P)} & \ar@{}[l]|{F_{n}(\psi)}\\
& & & *{\bullet}\ar@{-}[d]_{F_{0}(P)}\ar@{}[r]|{\alpha_{P}\,\,\,\,\,} & *{}\\
& & & *{}} \] and let $\varphi_{1}$ and $\varphi_{2}$ be their respective compositions. Then $\varphi_{2}=\sigma^{*}(\varphi_{1})$, where $\sigma$ is the evident permutation equating the domain of $\varphi_{1}$ with that of $\varphi_{2}$. The interested reader is referred to \cite{thesis} for more details on horizontal and vertical compositions of natural transformations leading to the construction of the strict $2$-category of small operads in which the strict $2$-category of small categories embeds. \end{proof} We now return to our general notion of $\mathcal{P}$-algebras in $\mathcal{E}$ and notice the very simple result: \begin{lem} Let $\mathcal{P}$ and $\mathcal{E}$ be symmetric operads. The internal hom $[\mathcal{P},\mathcal{E}]$ may rightfully be called the operad of $\mathcal{P}$-algebras in $\mathcal{E}$ in the sense that the objects of $[\mathcal{P},\mathcal{E}]$ are the $\mathcal{P}$-algebras in $\mathcal{E}$, the unary arrows are the morphisms of such algebras, and the $n$-ary arrows are 'multivariable' morphisms of algebras (with $0$-ary morphisms thought of as constants). \end{lem} It is trivial to verify, for example, that $[Comm,Set]$ is isomorphic to the operad obtained from the symmetric monoidal category $CommMon(Set)$ of commutative monoids in $Set$ by means of the construction given in Lemma \ref{lem:mon cat as ope}. Here $Set$ can be replaced by any symmetric monoidal category. This motivates the following definition. \begin{defn} Let $\mathcal{E}$ be a symmetric operad and $S$ some notion of an algebraic structure on objects of $\mathcal{E}$ together with a notion of (perhaps multivariable) morphisms between such structures. We call a symmetric operad $\mathcal{P}$ a \emph{classifying }operad for $S$ (in $\mathcal{E}$) if the operad $[\mathcal{P},\mathcal{E}]$ satisfies that $[\mathcal{P},\mathcal{E}]_{0}$ is precisely the set of $S$-structures in $\mathcal{E}$ and the arrows in $[\mathcal{P},\mathcal{E}]$ correspond precisely to the notion of morphisms between such structures. \end{defn} \begin{example} \label{exa:classifying categories over A}The symmetric operad $Comm$ is a classifying operad for commutative monoids in a symmetric operad $\mathcal{E}$. There is a symmetric operad $As$ that classifies monoids in $\mathcal{E}$ (i.e., an object with an associative binary operation with a unit) which the reader is invited to find. A \emph{magma} is a set together with a binary operation, not necessarily associative, and there is a symmetric operad that classifies magmas. There is also a symmetric operad that classifies non-unital monoids as well as one that classifies non-unital commutative monoids. It is a rather unfortunate fact that there is no symmetric operad that classifies all small categories. However, given a fixed set $A$ consider the category $Cat_{A}$ of \emph{categories over $A$}, in which the objects are categories having $A$ as set of objects and where the arrows are functors between such categories whose object part is the identity. Then there is a symmetric operad $C_{A}$ that classifies categories over $A$. Similarly, with the obvious definition, there is a symmetric operad $O_{A}$ that classifies symmetric operads over $A$. \end{example} \begin{rem} In general, there can be two non-equivalent operads $\mathcal{P}$ and $\mathcal{Q}$ that classify the same algebraic structure. We will not get into the question of detecting when two symmetric operads have equivalent operads of algebras.
\end{rem} A well-known phenomenon in category theory is the interchangeability of repeated structures. Thus, for example, a category object in $Grp$ is the same as a group object in $Cat$. With the formalism of symmetric operads developed thus far we can easily prove a whole class of such cases (but in fact not the case just mentioned, since group objects cannot be classified by symmetric operads). \begin{lem} Let $\mathcal{P}_{1}$ and $\mathcal{P}_{2}$ be two symmetric operads and let $\mathcal{E}$ be an environment operad. Then $\mathcal{P}_{1}$-algebras in $\mathcal{P}_{2}$-algebras in $\mathcal{E}$ are the same as $\mathcal{P}_{2}$-algebras in $\mathcal{P}_{1}$-algebras in $\mathcal{E}$. \end{lem} \begin{proof} The precise formulation of the lemma is that there is an isomorphism of symmetric operads $[\mathcal{P}_{1},[\mathcal{P}_{2},\mathcal{E}]]\cong[\mathcal{P}_{2},[\mathcal{P}_{1},\mathcal{E}]]$. The proof is trivial from the symmetry of the Boardman-Vogt tensor product. \end{proof} Consider the operad $As$ that classifies monoids: it has just one object and its set of arrows of arity $n$ is the set $\Sigma_{n}$ of permutations on $n$ symbols. It is not hard to show that $As\otimes As\cong Comm$, which is essentially the Eckmann-Hilton argument proving that associative monoids in associative monoids are commutative monoids, except that it is done at the level of classifying operads rather than algebras.
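To connect this with the classical statement (the following unpacking, at the level of algebras in $Set$, is ours): an $As\otimes As$-algebra in $Set$ gives, in particular, a set $M$ with two unital binary operations $\mu$ and $\nu$ satisfying the interchange law $\mu(\nu(a,b),\nu(c,d))=\nu(\mu(a,c),\mu(b,d))$, an instance of the relation of type 5. Writing $1_{\mu}$ and $1_{\nu}$ for the two units, one computes\[ 1_{\nu}=\nu(\mu(1_{\mu},1_{\nu}),\mu(1_{\nu},1_{\mu}))=\mu(\nu(1_{\mu},1_{\nu}),\nu(1_{\nu},1_{\mu}))=\mu(1_{\mu},1_{\mu})=1_{\mu},\] so the units coincide. Writing $1$ for the common unit,\[ \mu(a,d)=\mu(\nu(a,1),\nu(1,d))=\nu(\mu(a,1),\mu(1,d))=\nu(a,d)\] and\[ \mu(b,c)=\mu(\nu(1,b),\nu(c,1))=\nu(\mu(1,c),\mu(b,1))=\nu(c,b),\] so the two operations agree and are commutative.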
We conclude our review of the basics of operad theory by noting that in the same way that categories can be enriched in a symmetric monoidal category $\mathcal{E}$ (see \cite{Kelly Enr. Cat. }) so can operads be so enriched. With the evident definitions one then obtains the category $Ope(\mathcal{E})$ of all small operads enriched in $\mathcal{E}$. \begin{rem} In the presence of coproducts in $\mathcal{E}$ any non-enriched symmetric operad $\mathcal{P}$ gives rise to an operad $Dis(\mathcal{P})$ enriched in $\mathcal{E}$ in which each hom-object is a coproduct, indexed by the corresponding hom-set in $\mathcal{P}$, of the unit $I$ of $\mathcal{E}$. We will usually refer to $Dis(\mathcal{P})$ as the corresponding discrete operad in $\mathcal{E}$ and call it again $\mathcal{P}$. \end{rem} Our main interest in symmetric operads is in their use in the theory of homotopy invariant algebraic structures where enrichment plays a vital role. However, before we embark on the subtleties of homotopy invariance we briefly treat the isomorphism invariance property for non-enriched symmetric operads.
\subsection{The isomorphism invariance property }
It is a triviality that an algebraic structure can be transferred, uniquely, along an isomorphism. To be more precise and to formulate this in the language of operads, let $\mathcal{P}$ and $\mathcal{E}$ be symmetric operads and $F:\mathcal{P}\to\mathcal{E}$ an algebra structure on $\{F_{0}(P)\}_{P\in\mathcal{P}_{0}}$. Assume that we are given a family $\{f_{P}:F_{0}(P)\to G_{0}(P)\}_{P\in\mathcal{P}_{0}}$ of isomorphisms in $\mathcal{E}$. Then there exists a unique $\mathcal{P}$-algebra structure $G:\mathcal{P}\to\mathcal{E}$ on $\{G_{0}(P)\}_{P\in\mathcal{P}_{0}}$ for which the family $\{f_{P}\}_{P\in\mathcal{P}_{0}}$ forms a natural isomorphism from $F$ to $G$ and thus an isomorphism between the algebras. We call this the \emph{isomorphism invariance property }of algebras.
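Spelled out (the explicit formula below is our unpacking), the transferred algebra is given on an arrow $\psi\in\mathcal{P}(P_{1},\cdots,P_{n};P)$ by the usual transport-of-structure formula\[ G(\psi)=f_{P}\circ\bigl(F(\psi)\circ(f_{P_{1}}^{-1},\cdots,f_{P_{n}}^{-1})\bigr),\] and one checks directly, using the associativity and unit axioms, that this is the unique choice for which $\{f_{P}\}_{P\in\mathcal{P}_{0}}$ is a natural isomorphism from $F$ to $G$.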
We can reformulate this property diagrammatically as follows. Let $0$ be a one-object symmetric operad with the identity arrow only, and $0\to(0\leftrightarrows1)$ the inclusion $0\mapsto0$ into the free-living isomorphism. Then a choice of functor $F:\mathcal{P}\to\mathcal{E}$ is the same as a functor $0\to[\mathcal{P},\mathcal{E}]$ while a functor $(0\leftrightarrows1)\to[\mathcal{P},\mathcal{E}]$ can be identified with two functors $\xymatrix{\mathcal{P}\ar@<2pt>[r]\ar@<-2pt>[r] & \mathcal{E}} $ and a natural isomorphism between them. The set of objects $\mathcal{P}_{0}$ seen as a category with only identity arrows can be seen as a symmetric operad. One then has the evident inclusion functor $\mathcal{P}_{0}\to\mathcal{P}$ which induces a functor $[\mathcal{P},\mathcal{E}]\to[\mathcal{P}_{0},\mathcal{E}]$. The isomorphism invariance property for $\mathcal{P}$-algebras in $\mathcal{E}$ is then the statement that in the following diagram:\[ \xymatrix{0\ar[d]\ar[rrr]^{\forall F} & & & [\mathcal{P},\mathcal{E}]\ar[d]\\ 0\leftrightarrows1\ar[rrr]_{\forall\{F_{0}(P)\to G_{0}(P)\}_{P\in\mathcal{P}_{0}}}\ar@{..>}[rrru]^{\exists\alpha} & & & [\mathcal{P}_{0},\mathcal{E}]} \] the diagonal filler exists (and is unique) for any functor $F:\mathcal{P}\to \mathcal{E}$ and any family of isomorphisms $\{F_{0}(P)\to G_{0}(P)\}_{P\in\mathcal{P}_{0}}$.
In the formalism of Quillen model structures there is a conceptual way to see why a lift in the diagram above exists. To present it we recall that a functor $F:\mathcal{C}\to\mathcal{D}$ of categories is an \emph{isofibration} if it has the right lifting property with respect to the inclusion $0\to(0\leftrightarrows1)$. Similarly, a functor $F:\mathcal{P}\to\mathcal{Q}$ of symmetric operads is an \emph{isofibration} of symmetric operads if it has the right lifting property with respect to the same inclusion $0\to(0\leftrightarrows1)$ with each category seen as an operad. Equivalently, $F:\mathcal{P}\to\mathcal{Q}$ is an isofibration (of operads) if, and only if, $j^{*}(F)$ is an isofibration of categories. We now recall the operadic Quillen model structure on symmetric operads. \begin{thm} The category $Ope$ of symmetric operads with the Boardman-Vogt tensor product admits a cofibrantly generated closed monoidal model structure in which the weak equivalences are the operadic equivalences, the cofibrations are those functors $F:\mathcal{P}\to\mathcal{Q}$ such that the object part of $F$ is injective, and the fibrations are the isofibrations. All operads are fibrant and cofibrant. The Quillen model structure induced on $Cat\cong Ope/\star$ is the categorical one (also known as the 'folk' or 'natural' model structure).\end{thm} \begin{proof} A direct verification of the axioms of a model category is not difficult and not too tedious. Further details can be found in \cite{thesis}. \end{proof} Now, in the diagram above the left vertical arrow is a trivial cofibration and the right vertical arrow is, by the monoidal model structure axiom, a fibration and hence the lift exists. We summarize the above discussion: \begin{fact} \label{fac:iso invar}The notion of algebras of operads is internalized to the category $Ope$ by it being closed monoidal with respect to the Boardman-Vogt tensor product. The isomorphism invariance property of algebras is captured by the operadic Quillen model structure and its compatibility with the Boardman-Vogt tensor product. \end{fact}
\subsection{The homotopy invariance property}
In the presence of homotopy in $\mathcal{E}$ one can ask if a stronger property than the isomorphism invariance property holds. Namely, if one merely asks for the arrows $F_{0}(P)\to G_{0}(P)$ to be weak equivalences instead of isomorphisms, is it still possible to transfer the algebra structure? A simple example is when one considers a topological monoid $X$ and a topological space $Y$ together with continuous mappings $f:X\to Y$ and $g:Y\to X$ such that $f\circ g$ and $g\circ f$ are homotopic to the respective identities. It is evident that if $f$ and $g$ are not actual inverses of each other then the monoid structure on $X$ will not, in general, induce a monoid structure on $Y$. The question as to what kind of structure is induced goes back to Stasheff's study of $H$-spaces and his famous associahedra that are used to describe the kind of structure that arises \cite{H spaces}. The more general problem for algebraic structures on topological spaces can be addressed by using enriched symmetric operads as is done by Boardman and Vogt in \cite{BV book}. Their techniques and results were generalized by Berger and Moerdijk in a series of three papers \cite{ax hom th ope,BV res ope,res colour ope} and below we present an expository account of the constructions we will need.
First we give a slightly vague definition of the homotopy invariance property. In the context of dendroidal sets below we will give a precise definition that is completely analogous to the definition of the isomorphism invariance property. \begin{defn} Let $\mathcal{E}$ be a symmetric monoidal model category and $\mathcal{Q}$ a symmetric operad enriched in $\mathcal{E}$. We say that $\mathcal{Q}$-algebras have the \emph{homotopy invariance property }if given an algebra $F:\mathcal{Q}\to\hat{\mathcal{E}}$ on $\{F_{0}(Q)\}_{Q\in\mathcal{Q}_{0}}$ and a family $\{f_{Q}:F_{0}(Q)\to G_{0}(Q)\}{}_{Q\in\mathcal{Q}_{0}}$ of weak equivalences in $\mathcal{E}$ (with perhaps some extra conditions) there exists an essentially unique $\mathcal{Q}$-algebra structure $G:\mathcal{Q}\to\hat{\mathcal{E}}$ on $\{G_{0}(Q)\}_{Q\in\mathcal{Q}_{0}}$. \end{defn} It is evident that an arbitrary symmetric operad $\mathcal{P}$ need not have the homotopy invariance property and the problem of sensibly replacing $\mathcal{P}$ by another operad $\mathcal{Q}$ that does have this property is referred to as the problem of finding the up-to-homotopy version of the algebraic structure classified by $\mathcal{P}$. To make this notion precise we recall that in \cite{ax hom th ope,BV res ope,res colour ope} Berger and Moerdijk establish the following result. \begin{thm} Let $\mathcal{E}$ be a cofibrantly generated symmetric monoidal model category. Under mild conditions the category $Ope(\mathcal{E})_{A}$ of symmetric operads enriched in $\mathcal{E}$ with fixed set of objects equal to $A$ and whose functors are the identity on all objects admits a Quillen model structure in which the weak equivalences are hom-wise weak equivalences and the fibrations are hom-wise fibrations. \end{thm} We refer to this model structure as the Berger-Moerdijk model structure on $Ope(\mathcal{E})_{A}$. \begin{rem} The Berger-Moerdijk model structure on symmetric operads over a singleton $A=\{*\}$ (given in \cite{ax hom th ope}) settles one of the open problems listed by Hovey in \cite{Hovey Model Cat}. \end{rem} Among the consequences of the model structure Berger and Moerdijk prove the following. \begin{thm} If $\mathcal{Q}$ is cofibrant in the Berger-Moerdijk model structure on $Ope(\mathcal{E})_{A}$ then $\mathcal{Q}$-algebras in $\mathcal{E}$ have, under mild conditions, the homotopy invariance property. \end{thm} \begin{proof} See Theorem 3.5 in \cite{ax hom th ope} for more details. \end{proof} Thus, the problem of finding the up-to-homotopy version of the algebraic structure classified by a symmetric operad $\mathcal{P}$ enriched in $\mathcal{E}$ reduces to finding a cofibrant replacement $\mathcal{Q}$ of $\mathcal{P}$ in the Berger-Moerdijk model structure on $Ope(\mathcal{E})_{\mathcal{P}_{0}}$. Of course, a cofibrant replacement always exists just by the presence of the Quillen model structure. However, in order to actually compute with it one needs an efficient construction of it, and this is the aim of the $W$-construction.
\subsubsection{The original Boardman-Vogt W-construction for topological operads}
The $W$-construction is a functor $W:Ope(Top)\rightarrow Ope(Top)$ equipped with a natural transformation (an augmentation) $W\rightarrow id$. A detailed account (albeit in a slightly different language from that of operads) can be found in \cite{BV book} where it first appeared. We give here an expository presentation aimed at explaining the ideas important to us.
For simplicity let us describe the planar version of the $W$-construction, that is, we describe a functor taking a planar operad enriched in $Top$ to another such planar operad. We now fix a topological planar operad $\mathcal{P}$ and describe the operad $W\mathcal{P}$. The objects of $W\mathcal{P}$ are the same as those of $\mathcal{P}$. To describe the arrow spaces we consider standard planar trees (a tree is planar when it comes with an orientation of the edges at each vertex, and \emph{standard} means that a choice was made of a single planar tree from each isomorphism class of planar trees) whose edges are labelled by objects of $\mathcal{P}$ and whose vertices are labelled by arrows of $\mathcal{P}$ according to the rule that the objects labelling the input edges of a vertex are equal (in their natural order) to the input of the operation labelling that vertex. Similarly, the object labelling the output of the vertex is the output object of the operation at the vertex. Moreover, each inner edge in such a tree is given a length $0\le t\le1$. For objects $P_{0},\cdots,P_{n}\in (W\mathcal{P})_{0}$ let $A(P_{1},\cdots,P_{n};P_{0})$ be the topological space whose underlying set is the set of all such planar labelled trees $\bar{T}$ for which the leaves of $\bar{T}$ are labelled by $P_{1},\cdots,P_{n}$ (in that order) and the root of $\bar{T}$ is labelled by $P_{0}$. The topology on $A(P_{1},\cdots,P_{n};P_{0})$ is the evident one induced by the topology of the arrow spaces in $\mathcal{P}$ and the standard topology on the unit interval $[0,1]$.
The space $W\mathcal{P}(P_{1},\cdots,P_{n};P_{0})$ is the quotient of $A(P_{1},\cdots,P_{n};P_{0})$ obtained by the following identifications. If $\bar{T}\in A(P_{1},\cdots,P_{n};P_{0})$ has an inner edge $e$ whose length is $0$ then we identify it with the tree $\bar{T}/e$ obtained from $\bar{T}$ by contracting the edge $e$ and labelling the newly formed vertex by the corresponding $\circ_{i}$-composition of the operations labelling the vertices at the two sides of $e$ (the other labels are as in $\bar{T}$). Thus pictorially we have that locally in the tree a configuration \[ \xymatrix{*{\,}\ar@{-}[dr] & & *{\,}\ar@{-}[dl]\\
*{\,}\ar@{-}[dr] & *{\bullet}\ar@{-}[d]_{0}^{c} & *{\,}\ar@{-}[dl]\ar@{}[l]|{\psi\,\,\,\,\,\,\,\,\,\,}\\
& *{\bullet}\ar@{-}[d] & \ar@{}[l]|{\varphi\,\,\,\,\,\,\,\,\,\,}\\
& *{\,}} \] is identified with the configuration \[ \xymatrix{*{\,}\ar@{-}[drr] & *{\,}\ar@{-}[dr] & & *{\,}\ar@{-}[dl] & *{\,}\ar@{-}[dll]\\
& & *{\bullet}\ar@{-}[d] & \ar@{}[l]|{\psi\circ_{i}\varphi}\\
Another identification concerns the case of a tree $\bar{S}$ with a unary vertex $v$ labelled by an identity. We identify such a tree with the tree $\bar{R}$ obtained by removing the vertex $v$ and identifying its input edge with its output edge. The length assigned to the new edge is determined as follows. If it is an outer edge then it has no length. If it is an inner edge then it is assigned $\max\{s,t\}$, where $t$ and $s$ are the lengths of the input and output edges of $v$ as in the picture below (an outer edge, which carries no length, is treated here as having length $0$). The labelling is as in $\bar{S}$ (notice that the label of the newly formed edge is unique since $v$ was labelled by an identity which means that its input and output were labelled by the same object). Pictorially, this identification identifies the labelled tree \[ \xymatrix{*{\,}\ar@{-}[dr] & & *{\,}\ar@{-}[dl]\\
& *{\bullet}\ar@{-}[d]_{t}\\
*{\,}\ar@{-}[dr] & *{\bullet}\ar@{-}[d]_{s} & *{\,}\ar@{-}[dl]\ar@{}[l]|{id_{P}\,\,\,\,}\\
& *{\bullet}\ar@{-}[d]\\
& *{\,}\\ \\} \]
with the tree \[ \xymatrix{*{\,}\ar@{-}[dr] & & *{\,}\ar@{-}[dl]\\
& *{\bullet}\ar@{-}[dd]\\ *{\,}\ar@{-}[dr] & & *{\,}\ar@{-}[dl]\ar@{}[l]_{\,\,\,\,\,\,\,\,\,\,\,\,\, max\{s,t\}}\\
& *{\bullet}\ar@{-}[d]\\
& *{\,}} \] The composition in $W\mathcal{P}$ is given by grafting such labelled trees, giving the newly formed inner edge length $1$. The augmentation $W\mathcal{P}\to\mathcal{P}$ is the identity on objects and sends an arrow represented by such a labelled tree to the operation obtained by contracting all lengths of internal edges to $0$ and composing in $\mathcal{P}$. We leave the adaptations needed for obtaining the symmetric version of the $W$-construction to the reader. \begin{example} Let $\mathcal{P}$ be the planar operad with a single object and a single $n$-ary operation in each arity $n\ge 1$ and no arrows of arity $0$. We consider $\mathcal{P}$ to be a discrete operad in $Top$. It is easily seen that a functor $\mathcal{P}\rightarrow Top$ corresponds to a non-unital topological monoid (we treat this case for simplicity). Let us now calculate the first few arrow spaces in $W\mathcal{P}$. Firstly, $W\mathcal{P}$ too has just one object. We thus use the notation of classical operads, namely $W\mathcal{P}(n)$ for the space of operations of arity $n$. Clearly $W\mathcal{P}(0)$ is just the empty space. The space $W\mathcal{P}(1)$ consists of labelled trees with one input. Since in such a tree the only possible label at a vertex is the identity, the identification regarding identities implies that $W\mathcal{P}(1)$ is again just a one-point space. In general, since every unary vertex in a labelled tree in $W\mathcal{P}(n)$ can only be labelled by the identity, and those are then identified with trees not containing unary vertices, it suffices to only consider reduced trees, namely trees with no unary vertices. To calculate $W\mathcal{P}(2)$ we need to consider all reduced trees with two inputs, but there is just one such tree, the $2$-corolla, and it has no inner edges, thus $W\mathcal{P}(2)$ is also a one-point space. Things become more interesting when we calculate $W\mathcal{P}(3)$. We need to consider reduced trees with three inputs. There are three such trees, namely\[ \begin{array}{ccc} \xymatrix{*{}\ar@{-}[dr] & & *{}\ar@{-}[dl]\\
& *{\bullet}\ar@{-}[dr] & & *{}\ar@{-}[dl]\\
& & *{\bullet}\ar@{-}[d]\\
& & *{}}
& \quad\xymatrix{*{}\ar@{-}[dr] & *{}\ar@{-}[d] & *{}\ar@{-}[dl]\\
& *{\bullet}\ar@{-}[d]\\
& *{}} \quad & \xymatrix{ & *{}\ar@{-}[dr] & & *{}\ar@{-}[dl]\\ *{}\ar@{-}[dr] & & *{\bullet}\ar@{-}[dl]\\
& *{\bullet}\ar@{-}[d]\\
& *{}} \end{array}\] The middle tree contributes a point to the space $W\mathcal{P}(3)$. Each of the other trees has one inner edge and thus contributes the interval $[0,1]$ to the space. The only identification to be made is when the length of one of those inner edges is $0$, in which case it is identified with the point corresponding to the middle tree. The space $W\mathcal{P}(3)$ is thus the gluing of two copies of the interval $[0,1]$ where we identify both ends named $0$ to a single point. The result is then just a closed interval, $[-1,1]$. However, it is convenient to keep in mind the trees corresponding to each point of this interval. Namely, the tree corresponding to the middle point, $0$, is the middle tree. To a point $0<t\le1$ corresponds the tree on the right with the length of its inner edge equal to $t$, and to a point $-1\le-t<0$ corresponds the tree on the left with its inner edge given the length $t$. In this way one can calculate the entire operad $W\mathcal{P}$. It can then be shown that the spaces $\{W\mathcal{P}(n)\}_{n=0}^{\infty}$ reproduce, up to homeomorphism, the Stasheff associahedra. An $A_{\infty}$-space is then an algebra over $W\mathcal{P}$ and $W\mathcal{P}$ classifies $A_{\infty}$-spaces and their \emph{strong} morphisms. \end{example}
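The count of reduced trees by their number of leaves reappears as $tr(n)$ in the categorical example below. The following small computational sketch is ours and is not part of the formal development; it assumes that the relevant trees are exactly the planar rooted trees all of whose vertices have valence at least $2$ (unary vertices are excluded by reducedness, and nullary vertices cannot occur since $\mathcal{P}$ has no arrows of arity $0$), and it counts them by decomposing a tree at its root vertex, as in the grafting decomposition stated earlier.
\begin{verbatim}
# Counting reduced standard planar trees with n leaves: every vertex has
# valence >= 2, so a tree is either the unit tree eta (n = 1) or a root
# vertex of valence k >= 2 with k such trees grafted on top of it.
from functools import lru_cache

@lru_cache(maxsize=None)
def compositions_count(n, k):
    # sum of tr(n_1) * ... * tr(n_k) over n_1 + ... + n_k = n with n_i >= 1
    if k == 1:
        return tr(n)
    return sum(tr(i) * compositions_count(n - i, k - 1)
               for i in range(1, n - k + 2))

@lru_cache(maxsize=None)
def tr(n):
    if n == 1:
        return 1  # only the unit tree
    return sum(compositions_count(n, k) for k in range(2, n + 1))

print([tr(n) for n in range(1, 7)])  # [1, 1, 3, 11, 45, 197]
\end{verbatim}
In particular $tr(3)=3$, matching the three trees drawn above.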
\subsubsection{The Berger-Moerdijk generalization of the W-construction to operads enriched in a homotopy environment}
Observe that in the $W$-construction given above one can construct the space $W\mathcal{P}(P_{1},\cdots,P_{n};P_{0})$ as follows. For each labelled planar tree $\bar{T}$ as above let $H^{\bar{T}}$ be $H^{k}$ where $k$ is the number of inner edges in $\bar{T}$ and $H=[0,1]$, the unit interval. Further, for each vertex $v$ of $\bar{T}$ let $\mathcal{P}(v)=\mathcal{P}(x_{1},\cdots,x_{n};x_{0})$ where $x_{1},\cdots,x_{n}$ are (in that order) the inputs of $v$ and $x_{0}$ its output. Finally, let $\mathcal{P}(\bar{T})$ be the product of $\mathcal{P}(v)$ where $v$ ranges over the vertices of $\bar{T}$. Now, the space $A(P_{1},\cdots,P_{n};P_{0})$ constructed above is homeomorphic to $\coprod_{\bar{T}}(H^{\bar{T}}\times\mathcal{P}(\bar{T}))$ where $\bar{T}$ varies over all labelled standard planar trees $\bar{T}$ whose leaves are labelled by $P_{1},\cdots,P_{n}$ and whose root is labelled by $P_{0}$. The identifications that are then made to construct the space $W\mathcal{P}(P_{1},\cdots,P_{n};P_{0})$ are completely determined by the combinatorics of the various trees $\bar{T}$. This observation is the key to generalizing the $W$-construction to symmetric operads in monoidal model categories $\mathcal{E}$ other than $Top$ and is carried out in \cite{BV res ope,res colour ope}. What is needed is a suitable replacement for the unit interval $[0,1]$ used above to assign lengths to the inner edges of the trees. Such a replacement is the notion of an interval object in a monoidal model category $\mathcal{E}$ given in \cite{BV res ope}. \begin{defn} Let $\mathcal{E}$ be a symmetric monoidal model category with unit $I$. An\emph{ interval }object in $\mathcal{E}$ (see Definition 4.1 in \cite{BV res ope}) is a factorization of the codiagonal $I\coprod I\to I$ into a cofibration $I\coprod I\to H$ followed by a weak equivalence $\epsilon:H\to I$ together with an associative operation $\vee:H\otimes H\to H$ which has a neutral element, an absorbing element, and for which $\epsilon$ is a counit. For convenience, when an interval object is chosen we will refer to $(\mathcal{E},H)$ as a \emph{homotopy environment}. \end{defn} Examples relevant to our presentation are the ordinary unit interval in $Top$ with the standard model structure (with $x\vee y=\max\{x,y\}$) and the free-living isomorphism $0\leftrightarrows1$ in $Cat$ with the categorical model structure.
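To spell out the second example (our reading of the 'obvious structure maps' used in the example below): in $Cat$ the unit is the terminal category $*$, the functor $*\coprod *\to H=(0\leftrightarrows1)$ picks out the two objects (a cofibration, being injective on objects), $\epsilon:H\to *$ is the unique functor (a weak equivalence, since $H$ is equivalent to $*$), and $\vee:H\times H\to H$ is determined on objects by $x\vee y=\max\{x,y\}$, with $0$ as neutral element and $1$ as absorbing element, in complete analogy with the interval $[0,1]$ in $Top$.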
In such a setting the topological $W$-construction can be mimicked by gluing together objects $H^{\otimes k}$ instead of cubes $[0,1]^{k}$. This is done in detail in \cite{BV res ope}, to which the interested reader is referred. We thus obtain a functor $W_{H}:Ope(\mathcal{E})\rightarrow Ope(\mathcal{E})$ for any homotopy environment $\mathcal{E}$. Usually we will just write $W$ instead of $W_{H}$, which is quite a harmless convention since Proposition 6.5 in \cite{BV res ope} guarantees that under mild conditions a different choice of interval object yields essentially equivalent $W$-constructions. \begin{example} Consider the category $Cat$ with the categorical model structure. In this monoidal model category we can choose the category $H$ to be the free-living isomorphism $0\leftrightarrows1$ as interval object, with the obvious structure maps. Let us again consider the planar operad $\mathcal{P}$ classifying non-unital associative monoids, this time as a discrete operad in $Cat$. To calculate $W\mathcal{P}(n)$ we should again consider labelled standard planar trees with lengths. The same argument as above implies that we should only consider reduced trees, and a similar calculation shows that $W\mathcal{P}(n)$ is a one-point category for $n=1,2$. Now, to calculate $W\mathcal{P}(3)$ we again consider the three trees as given above. This time the middle tree contributes the category $H^{0}=I$. Each of the other trees contributes the category $H$. The identifications identify the object named $0$ in each copy of $H$ with the unique object of $I$. The result is a contractible category with three objects. In general, the category $W\mathcal{P}(n)$ is a contractible category with $tr(n)$ objects, where $tr(n)$ denotes the number of reduced standard planar trees with $n$ leaves. The composition in $W\mathcal{P}$ is given by grafting of such trees. The operad $W\mathcal{P}$ classifies unbiased monoidal categories and strict monoidal functors (an unbiased monoidal category is a category with an $n$-ary multiplication functor for each $n\ge0$ together with some coherence conditions; see \cite{Leinster higher} for more details as well as a discussion about the equivalence of such categories and ordinary weak monoidal categories). \end{example} The generalized Boardman-Vogt $W$-construction thus provides a computationally tractable way to classify weak algebras for a wide variety of structures in a homotopy environment. However, $W\mathcal{P}$ tends to classify weak $\mathcal{P}$-algebras with their strong morphisms and not with their weak morphisms. Indeed, for some fixed homotopy environment $\mathcal{E}$ assume that $Ope(\mathcal{E})$ is closed monoidal with respect to a Boardman-Vogt type tensor product. If we now consider for a symmetric operad $\mathcal{P}\in Ope(\mathcal{E})_{0}$ the internal hom $[W\mathcal{P},\mathcal{E}]$ then the elements of $[W\mathcal{P},\mathcal{E}]_{0}$ are precisely the weak $\mathcal{P}$-algebras in $\mathcal{E}$. However, a unary arrow in $[W\mathcal{P},\mathcal{E}]$ corresponds to a map of symmetric operads $(W\mathcal{P})\otimes[1]\to\mathcal{E}$ (where $[1]$ is the operad $0\to1$ considered as a discrete operad in $\mathcal{E}$). This already shows that the notion one gets is of strong (because $W$ does not act on $[1]$) morphisms between weak (because $W$ does act on $\mathcal{P}$) $\mathcal{P}$-algebras.
\subsection{Weak maps between weak algebras}
Luckily, to arrive at the right notion of weak morphisms between weak algebras no extra work is needed. Following the observation above we make the following definition. \begin{defn} Let $\mathcal{P}$ be an operad in $Set$ and $\mathcal{E}$ a homotopy environment. A \emph{weak $\mathcal{P}$-algebra} in $\mathcal{E}$ is a functor of symmetric $\mathcal{E}$-enriched operads $W(\mathcal{P})\to\hat{\mathcal{E}}$. A \emph{weak map} between weak $\mathcal{P}$-algebras in $\mathcal{E}$ is a functor of symmetric $\mathcal{E}$-enriched operads $W(\mathcal{P}\otimes[1])\to\hat{\mathcal{E}}.$ \end{defn} An obvious question now is whether the collection of all weak $\mathcal{P}$-algebras and their weak maps forms a category. The answer is that it usually does not. A simple example is provided by $A_{\infty}$-spaces where it is known that weak $A_{\infty}$-maps do not compose associatively. The theory so far already suggests a solution to that problem. We denote by $[n]$ the operad $0\to1\to\cdots\to n$ seen as a discrete operad in $\mathcal{E}$. For a symmetric operad $\mathcal{P}$ in $Set$ consider the symmetric operad $\mathcal{P}\otimes[n]$. An algebra for such a symmetric operad is easily seen to be a sequence $X_{0},\cdots,X_{n}$ of $\mathcal{P}$-algebras together with weak $\mathcal{P}$-algebra maps:\[ X_{0}\rightarrow X_{1}\rightarrow\cdots\rightarrow X_{n}\] and all their possible compositions. \begin{prop} Let $\mathcal{P}$ be a symmetric operad in $Set$ and $\mathcal{E}$ a homotopy environment. For each $n\ge0$ let $X_{n}$ be the set of maps \[ W(\mathcal{P}\otimes[n])\to\hat{\mathcal{E}}\] of symmetric operads enriched in $\mathcal{E}$. Then the collection $X=\{X_{n}\}_{n=0}^{\infty}$ can be canonically made into a simplicial set. \end{prop} \begin{proof} The proof follows easily by noting that the sequence $\{\mathcal{P}\otimes[n]\}_{n=0}^{\infty}$ is a cosimplicial object in $Ope$. \end{proof} \begin{defn} \label{def:SimpSetOfWeakAlg}We refer to the simplicial set constructed above as the \emph{simplicial set of weak $\mathcal{P}$-algebras} in $\mathcal{E}$ and denote it by $wAlg[\mathcal{P},\mathcal{E}]$. \end{defn} Recall that for strict algebras one could easily iterate structures simply by considering $[\mathcal{P},[\mathcal{P},\mathcal{E}]]$ which are classified by $\mathcal{P}\otimes\mathcal{P}$. Our journey into weak algebras in a homotopy environment $\mathcal{E}$ led us to the formation of the simplicial set $wAlg[\mathcal{P},\mathcal{E}]$ with the immediate drawback that we cannot, at least not in any straightforward manner, iterate. This problem disappears in the dendroidal setting, as we will see below, and is one of the technical advantages of dendroidal sets over enriched operads in the study of weak algebraic structures.
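\begin{rem} (Our unpacking of the proof above.) Explicitly, the simplicial structure maps of $wAlg[\mathcal{P},\mathcal{E}]$ are given by precomposition: for a coface $\delta^{i}:[n-1]\to[n]$ and a codegeneracy $\sigma^{i}:[n+1]\to[n]$ in $\Delta$ one sets\[ d_{i}(X)=X\circ W(\mathcal{P}\otimes\delta^{i})\qquad\text{and}\qquad s_{i}(X)=X\circ W(\mathcal{P}\otimes\sigma^{i})\] for $X:W(\mathcal{P}\otimes[n])\to\hat{\mathcal{E}}$; the simplicial identities follow from the cosimplicial identities in $\Delta$ together with the functoriality of $\mathcal{P}\otimes-$ and of $W$. \end{rem}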
\section{Dendroidal sets - a formalism for weak algebras}
We now return to non-enriched symmetric operads and introduce the category of dendroidal sets, which is the natural category in which to define nerves of symmetric operads. The category of dendroidal sets is a presheaf category on the dendroidal category $\Omega$ and as such one might expect it to be adequate only for the study of non-enriched symmetric operads. However, we will see that it is in fact versatile enough to treat enriched operads quite efficiently by means of the homotopy coherent nerve construction. We do mention that for weak algebraic structures in certain homotopy environments (such as differentially graded vector spaces) dendroidal sets are inappropriate. One might then consider dendroidal objects instead of dendroidal sets as is explained in \cite{den set}, which also contains all of the results below.
\subsection{The dendroidal category $\Omega$}
To define the dendroidal category $\Omega$ recall the definition of symmetric rooted trees given above. It is evident that any such tree $T$ can be thought of as a picture of a symmetric operad $\Omega(T)$: The objects of $\Omega(T)$ are the edges of $T$ and the arrows are freely generated by the vertices of $T$. In more detail, consider the tree $T$ given by \[ \xymatrix{*{\,}\ar@{-}[dr]_{e} & & *{\,}\ar@{-}[dl]^{f}\\
\,\ar@{}[r]|{\,\,\,\,\,\,\,\,\,\,\,\,\,\, v} & *{\bullet}\ar@{-}[dr]_{b} & & *{\,}\ar@{-}[dl]_{c}\ar@{}[r]|{\,\,\,\,\,\,\,\,\,\,\,\, w} & *{\bullet}\ar@{-}[dll]^{d}\\
& & *{\bullet}\ar@{-}[d]_{a} & \,\ar@{}[l]^{r\,\,\,\,\,\,\,\,\,\,\,}\\
& & *{\,}} \] then $\Omega(T)$ has six objects, $a,b,\cdots,f$ and the following generating operations: \[ r\in\Omega(T)(b,c,d;a),\] \[ w\in\Omega(T)(-;d)\]
and \[ v\in\Omega(T)(e,f;b).\] The other operations are units (such as $1_{b}\in\Omega(T)(b;b)$), arrows obtained freely by the $\Sigma_{n}$ actions, and formal compositions of such arrows. For instance, composing $v$ into the first input of $r$ yields an operation in $\Omega(T)(e,f,c,d;a)$. \begin{defn} Fix a countable set $X$. The \emph{dendroidal category} $\Omega$ has as objects all symmetric rooted trees $T$ whose edges $E(T)$ satisfy $E(T)\subseteq X$. The arrows $S\to T$ in $\Omega$ are arrows $\Omega(S)\to\Omega(T)$ of symmetric operads. \end{defn} \begin{rem} The set $X$ above plays a role analogous to that of variables in predicate calculus. The edges are only there to be carriers of symbols, and countably many such carriers will always be enough. Of course, another choice of $X$ would result in an isomorphic category. Note that the dendroidal category $\Omega$ is thus small (in fact, it is itself countable). \end{rem} Recall the linear trees $L_{n}$ and that the simplicial category $\Delta$ is a skeleton of the category of finite linearly ordered sets and order preserving maps. The subcategory of $\Omega$ spanned by all trees of the form $L_{n}$ with $n\ge0$ is easily seen to be equivalent to the simplicial category $\Delta$. In fact, $\Delta$ can be recovered from $\Omega$ in a more useful way. \begin{prop} (Slicing lemma for the dendroidal category) The simplicial category $\Delta$ is obtained (up to equivalence) from the dendroidal category $\Omega$ by slicing over the linear tree with one edge and no vertices: $\Delta\cong\Omega/L_{0}$. \end{prop} We now describe several types of arrows that generate all of the arrows in $\Omega$. Let $T$ be a tree and $v$ a vertex of valence 1 with $in(v)=e$ and $out(v)=e'$. Consider the tree $T/v$, obtained from $T$ by deleting the vertex $v$ and the edge $e'$, pictured locally as\[ \begin{array}{ccc} \xymatrix{*{\,}\ar@{-}[dr] & & *{\,}\ar@{-}[dl]\\
& *{\bullet}\ar@{-}[dr]_{e} & & *{\,}\ar@{-}[dr] & & *{\,}\ar@{-}[dl]\\
& & *{\bullet}\ar@{-}[dr]_{e'}\ar@{}|{\,\,\,\,\,\,\,\,\,\, v} & & *{\bullet}\ar@{-}[dl]\\
& & & *{\bullet}\ar@{-}[d]\\
& & & *{\,}}
& \xymatrix{\\\\\ar[r]^{\sigma_{v}} & *{}}
& \xymatrix{*{\,}\ar@{-}[dr] & & *{\,}\ar@{-}[dl]\\
& *{\bullet}\ar@{-}[ddrr] & & *{\,}\ar@{-}[dr] & & *{\,}\ar@{-}[dl]\\
& \,\ar@{}[r]|{\,\,\,\, e} & & & *{\bullet}\ar@{-}[dl]\\
& & & *{\bullet}\ar@{-}[d]\\
& & & *{\,}} \end{array}\] There is then a map in $\Omega$, denoted by $\sigma_{v}:T\rightarrow T/v$, which sends $e$ and $e'$ in $T$ to $e$ in $T/v$. An arrow in $\Omega$ of this kind is called a \emph{degeneracy}.
Consider now a tree $T$ and a vertex $v$ in $T$ with exactly one inner edge attached to it. One can obtain a new tree $T/v$ by deleting $v$ and all the outer edges attached to it; the inclusion of edges then defines an arrow $\partial_{v}:T/v\to T$ in $\Omega$, called an \emph{outer face}. For example, \[ \begin{array}{ccc}
\xymatrix{\\*{\,}\ar@{-}[dr]_{b} & *{\,}\ar@{-}[d]^{c}\ar@{}[r]|{\,\,\,\,\,\,\,\,\,\,\,\, w} & *{\bullet}\ar@{-}[dl]^{d}\\
\,\ar@{}[r]|{\,\,\,\,\,\,\,\,\,\,\,\, r} & *{\bullet}\ar@{-}[d]_{a}\\
& *{\,}}
& \xymatrix{\\\\\ar[r]^{\partial_{v}} & *{}}
& \xymatrix{*{\,}\ar@{-}[dr]_{e} & & *{\,}\ar@{-}[dl]^{f}\\
\,\ar@{}[r]|{\,\,\,\,\,\,\,\,\,\,\, v} & *{\bullet}\ar@{-}[dr]_{b} & & *{\,}\ar@{-}[dl]_{c}\ar@{}[r]|{\,\,\,\,\,\,\,\,\,\,\,\, w} & *{\bullet}\ar@{-}[dll]^{d}\\
& & *{\bullet}\ar@{-}[d]_{a} & \,\ar@{}[l]|{r\,\,\,\,\,\,\,\,\,\,\,}\\
& & *{\,}} \end{array}\]
and (to emphasize that it is sometimes possible to remove the root of the tree $T$)\[ \begin{array}{ccc} \xymatrix{\\*{\,}\ar@{-}[dr]_{e} & & *{}\ar@{-}[dl]^{f}\\
\,\ar@{}[r]|{\,\,\,\,\,\,\,\,\,\,\,\, v} & *{\bullet}\ar@{-}[d]_{b}\\
& *{\,}}
& \xymatrix{\\\\\ar[r]^{\partial_{r}} & *{}}
& \xymatrix{*{\,}\ar@{-}[dr]_{e} & & *{\,}\ar@{-}[dl]^{f}\\
\,\ar@{}[r]|{\,\,\,\,\,\,\,\,\,\,\, v} & *{\bullet}\ar@{-}[dr]_{b} & & *{\,}\ar@{-}[dl]_{c} & *{}\ar@{-}[dll]^{d}\\
& & *{\bullet}\ar@{-}[d]_{a} & \,\ar@{}[l]|{r\,\,\,\,\,\,\,\,\,\,\,}\\
& & *{\,}} \end{array}\] are both outer faces.
Given a tree $T$ and an inner edge $e$ in $T$, one can obtain a new tree $T/e$ by contracting the edge $e$. The inclusion of edges then defines a map $\partial_{e}:T/e\rightarrow T$ in $\Omega$, called an \emph{inner face}. For example,
\[ \begin{array}{ccc}
\xymatrix{*{\,}\ar@{-}[rrd]_{e} & *{\,}\ar@{-}[rd]^{f} & & *{\,}\ar@{-}[dl]_{c}\ar@{}[r]|{\,\,\,\,\,\,\,\,\, w} & *{\bullet}\ar@{-}[lld]^{d}\\
& \,\ar@{}[r]_{\,\,\,\,\,\,\,\,\,\,\, u} & *{\bullet}\ar@{-}[d]^{a}\\
& & *{\,}}
& \xymatrix{\\\ar[r]^{\partial_{b}} & *{}}
& \xymatrix{*{\,}\ar@{-}[dr]_{e} & & *{\,}\ar@{-}[dl]^{f}\\
\,\ar@{}[r]|{\,\,\,\,\,\,\,\,\,\,\,\,\,\, v} & *{\bullet}\ar@{-}[dr]_{b} & & *{\,}\ar@{-}[dl]_{c}\ar@{}[r]|{\,\,\,\,\,\,\,\,\,\,\,\, w} & *{\bullet}\ar@{-}[dll]^{d}\\
& & *{\bullet}\ar@{-}[d]_{a} & \,\ar@{}[l]^{r\,\,\,\,\,\,\,\,\,\,\,}\\
& & *{\,}} \end{array}\]
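To compare with the simplicial case, consider a linear tree $L_{n}$ with $n\ge2$, say with vertices $v_{1},\dots,v_{n}$ (from the root upwards) and edges $e_{0},\dots,e_{n}$, where $e_{0}$ is the root edge and $e_{n}$ the leaf. Its inner faces are the maps
\[ \partial_{e_{i}}:L_{n}/e_{i}\longrightarrow L_{n},\qquad 0<i<n, \]
and, under the identification of linear trees with objects of $\Delta$, they correspond to the inner faces of $\Delta[n]$; the two outer faces, obtained by removing the top vertex $v_{n}$ or the root vertex $v_{1}$, correspond to the two outer faces $d^{0}$ and $d^{n}$ of $\Delta[n]$.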
\begin{thm} Any map $\xymatrix{T\ar[r]^{f} & T'} $ in $\Omega$ factors uniquely as $f=\varphi\pi\delta$, where $\delta$ is a composition of degeneracy maps, $\pi$ is an isomorphism, and $\varphi$ is a composition of (inner and outer) face maps. \end{thm} This result generalizes the familiar simplicial relations in the definition of a simplicial set.
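For linear trees the theorem specializes to the classical fact that every map in $\Delta$ factors, uniquely, as a surjection followed by an injection (a composite of degeneracies followed by a composite of faces). For instance, the monotone map $f:[2]\to[2]$ with $f(0)=0$ and $f(1)=f(2)=2$ factors as
\[ f=d^{1}\circ s^{1}, \]
where $s^{1}:[2]\to[1]$ identifies $1$ and $2$ and $d^{1}:[1]\to[2]$ skips $1$; here the isomorphism appearing in the factorization is an identity.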
\subsubsection{The category of dendroidal sets} \begin{defn} The category of \emph{dendroidal sets }is the presheaf category $dSet=Set_{\Omega}$. Thus a dendroidal set $X$ consists of a collection of sets $\{X_{T}\}_{T\in\Omega_{0}}$ together with various maps between them. An element $x\in X_{T}$ is called a \emph{dendrex of shape $T$, }or a $T$-\emph{dendrex}. \end{defn} For each tree $T\in\Omega_{0}$ there is associated the \emph{representable dendroidal set }$\Omega[T]=\Omega(-,T)$ which, by the Yoneda Lemma, serves to classify $T$ dendrices in $X$ via the natural bijection $X_{T}\cong dSet(\Omega[T],X)$. The functor $\Omega\to Ope$ which sends $T$ to $\Omega(T)$ induces an adjunction $\xymatrix{dSet\ar@<2pt>[r]^{\tau_{d}} & Ope\ar@<2pt>[l]^{N_{d}}} $, of which $N_{d}$, called the \emph{dendroidal nerve functor}, is given explicitly, for a symmetric operad $\mathcal{P}$, by \[ N_{d}(\mathcal{P})_{T}=Ope(\Omega(T),\mathcal{P}).\] For linear trees $L_{n}$ we write somewhat ambiguously $X_{n}$ instead of $X_{L_{n}}$. This is a harmless convention since for any two linear trees $L_{n}$ and $L_{n}^{'}$ there is a \emph{unique} isomorphism $L_{n}\to L_{n}^{'}$ in $\Omega$. Consider the dendroidal set $\star=\Omega[L_{0}]$. \begin{lem} (Slicing lemma for dendroidal sets) There is an equivalence of categories $dSet/\star\cong sSet$. If we identify $sSet$ as a subcategory of $dSet$ then the forgetful functor $i_{!}:sSet\to dSet$ has a right adjoint $i^{*}$ which itself has a right adjoint $i_{*}$. \end{lem} \begin{proof} We omit the details and just remark that the adjunctions mentioned can be obtained (equivalently) in one of two ways. The first is to consider $\Delta$ as a subcategory of $\Omega$ via an embedding functor $i:\Delta\to\Omega$. This functor $i$ then induces a functor $i^{*}:dSet\to sSet$ which, from the general theory of presheaf categories (see e.g., \cite{Mac Moer Sheaves}), has a left adjoint $i_{!}$ and a right adjoint $i_{*}$. The second way to obtain the adjunctions is to use Remark \ref{rem:Slicing}, with $\mathcal{C}=dSet$ and $A=\Omega[L_{0}]$. \end{proof} \begin{prop} Slicing the adjunction $\xymatrix{dSet\ar@<2pt>[r]^{\tau_{d}} & Ope\ar@<2pt>[l]^{N_{d}}} $ over $\star$ gives the usual adjunction $\xymatrix{sSet\ar@<2pt>[r]^{\tau} & Cat\ar@<2pt>[l]^{N}} $ with $N$ the nerve functor and $\tau$ the fundamental category functor.\end{prop} \begin{proof} The precise meaning of the statement is that denoting a one-object operad with just the identity arrow again by $\star$ and for the dendroidal set $\star=\Omega[L_{0}]$ one has, by slight abuse of notation, that $N_{d}(\star)=\star$ and $\tau_{d}(\star)=\star$; thus the functors $N_{d}$ and $\tau_{d}$ restrict to the respective slices $dSet/\star$ and $Ope/\star$. Then under the identifications $sSet\cong dSet/\star$ and $Cat\cong Ope/\star$ these restrictions give the nerve functor $N:Cat\to sSet$ and its left adjoint $\tau$. \end{proof} A general rule of thumb is that any definition or theorem of dendroidal sets will yield, by slicing over $\star=\Omega[L_{0}]$, a corresponding definition or theorem of simplicial sets. A similar principle is true for operads and categories. We will loosely refer to this process as 'slicing' and say, in the example above for instance, that the usual nerve functor of categories is obtained by slicing the dendroidal nerve functor. \begin{defn} Let $X$ and $Y$ be two dendroidal sets. 
Their tensor product is given by the colimit\[ X\otimes Y=\mathop{\mathrm{colim}}_{\Omega[T]\rightarrow X,\Omega[S]\to Y}N_{d}(\Omega(T)\otimes\Omega(S)).\] Here we use the canonical expression of a presheaf as a colimit of representables. \end{defn}
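Unwinding this formula for representable dendroidal sets, where the indexing category of the colimit has a terminal object given by the identity maps, gives in particular
\[ \Omega[S]\otimes\Omega[T]\cong N_{d}(\Omega(S)\otimes\Omega(T)), \]
so that on representables the tensor product of dendroidal sets is computed by the Boardman-Vogt tensor product of the corresponding operads.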
As an example of our convention about slicing we mention that slicing the tensor product of dendroidal sets yields the cartesian product of simplicial sets. Note however, that the tensor product in $dSet$ is not the cartesian product. \begin{thm} The category $dSet$ with the tensor product defined above is a closed monoidal category. \end{thm} \begin{proof} This follows by general abstract nonsense. The internal hom is given for two dendroidal sets $X$ and $Y$ by \[ [X,Y]_{T}=dSet(X\otimes\Omega[T],Y).\]
\end{proof} Slicing this theorem proves that $sSet$ is cartesian closed with the usual formula for the internal hom. \begin{thm} In the diagram\[ \xymatrix{Cat\ar@<2pt>[r]^{j_{!}\,\,\,}\ar@<2pt>[d]^{N} & Ope\ar@<2pt>[l]^{j^{*}\,\,\,}\ar@<2pt>[d]^{N_{d}}\\ sSet\ar@<2pt>[r]^{i_{!}}\ar@<2pt>[u]^{\tau} & dSet\ar@<2pt>[l]^{i^{*}}\ar@<2pt>[u]^{\tau_{d}}} \] all pairs of functors are adjunctions with the left adjoint on top or to the left. Furthermore, the following canonical commutativity relations hold: \begin{eqnarray*}
& & \tau N\cong id\\
& & \tau_{d}N_{d}\cong id\\
& & i^{*}i_{!}\cong id\\
& & j^{*}j_{!}\cong id\\
& & j_{!}\tau\cong\tau_{d}i_{!}\\
& & Nj^{*}\cong i^{*}N_{d}\\
& & i_{!}N\cong N_{d}j_{!}.\end{eqnarray*} If we consider the cartesian structures on $Cat$ and $sSet$, the Boardman-Vogt tensor product on $Ope$, and the tensor product of dendroidal sets then the four categories are symmetric closed monoidal categories and the functors $i_{!},N,\tau,j_{!}$ and $\tau_{d}$ are strong monoidal.\end{thm} \begin{rem} The dendroidal nerve functor $N_{d}$ is not monoidal, a fact that plays a vital role in the applicability of dendroidal sets to iterated weak algebraic structures, as we will see below. \end{rem} We do have the following property. \begin{prop} \label{pro:tau of tensor}For symmetric operads $\mathcal{P}$ and $\mathcal{Q}$ there is a natural isomorphism \[ \tau_{d}(N_{d}(\mathcal{P})\otimes N_{d}(\mathcal{Q}))\cong\mathcal{P}\otimes\mathcal{Q}.\] \end{prop} \begin{lem} The dendroidal nerve functor commutes with internal Homs in the sense that for any two operads $\mathcal{P}$ and $\mathcal{Q}$ we have\[ N_{d}([\mathcal{P},\mathcal{Q}])\cong[N_{d}(\mathcal{P}),N_{d}(\mathcal{Q})].\] Moreover, for simplicial sets $X$ and $Y$ we have\[ [i_{!}(X),i_{!}(Y)]\cong i_{!}([X,Y]).\]
\end{lem} The proofs of these results are not hard.
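We also note that, $dSet$ being closed monoidal by the theorem above, the hom-tensor adjunction holds in its internal form: there are natural isomorphisms
\[ [X\otimes Y,Z]\cong[X,[Y,Z]] \]
for dendroidal sets $X$, $Y$ and $Z$. It is this simple identity that allows algebra structures in $dSet$ to be iterated, as we will do below.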
\subsection{Algebras in the category of dendroidal sets}
We again introduce a syntactic difference between dendroidal sets thought of as encoding structure and dendroidal sets as environments to interpret structures in. \begin{defn} Let $E$ and $X$ be dendroidal sets. The dendroidal set $[X,E]$ is called the dendroidal set of $X$-\emph{algebras} in $E$. An element in $[X,E]_{L_{0}}$ is called an $X$-\emph{algebra} in $E$. An element of $[X,E]_{L_{1}}$ is called a \emph{map of $X$-algebras in $E$.} \end{defn} Let us first note that this definition extends the notion of $\mathcal{P}$-algebras in $\mathcal{E}$ for symmetric operads in the sense that for symmetric operads $\mathcal{P}$ and $\mathcal{E}$ there is a natural isomorphism \[ [N_{d}(\mathcal{P}),N_{d}(\mathcal{E})]\cong N_{d}([\mathcal{P},\mathcal{E}]).\] Indeed, this is just the statement that $N_{d}$ commutes with internal Homs.
We thus see that the dendroidal nerve functor embeds $Ope$ in $dSet$ in such a way that the notion of algebras is retained and in both cases is internalized in the form of an internal Hom with respect to a suitable tensor product. We now wish to study homotopy invariance of algebra structures in a dendroidal set $E$. The first step is to specify those arrows along which such algebras are to be invariant.
Recall that for symmetric operads the diagram\[ \xymatrix{0\to1\ar[r]^{\,\,\,\,\, f}\ar[d] & \mathcal{P}\\ 0\leftrightarrows1\ar@{..>}[ru]_{(f,g)}} \] (where the vertical arrow is the inclusion of the free-living arrow into the free-living isomorphism) admits a lift precisely when the arrow $f$ admits an inverse $g$. Taking the dendroidal nerve of this diagram and replacing in it $N_{d}(\mathcal{P})$ by an arbitrary dendroidal set $X$ we arrive at the following definition. \begin{defn} Let $X$ be a dendroidal set. An \emph{equivalence }is a dendrex $x:\Omega[L_{1}]\to X$ such that in the diagram\[ \xymatrix{\Omega[L_{1}]\ar[r]^{x}\ar[d] & X\\ N_{d}(0\leftrightarrows1)\ar@{..>}[ru]_{\hat{x}}} \] a lift $\hat{x}$ exists. \end{defn} Note, that since $\Omega[L_{1}]\cong i_{!}(\Delta[1])$ and $N_{d}(0\leftrightarrows1)=i_{!}(N(0\leftrightarrows1))$, by adjunction the dendrex $x:\Omega[L_{1}]\to X$ is an equivalence if, and only if, in the corresponding diagram of simplicial sets\[ \xymatrix{\Delta[1]\ar[d]\ar[r]^{x} & i^{*}(X)\\ N(0\leftrightarrows1)\ar@{..>}[ru]} \] a lift exists. Thus, being a weak equivalence in the dendroidal set $X$ is actually a property of the simplicial set $i^{*}(X)$. \begin{rem} Note that $N(0\leftrightarrows1)$ is the simplicial infinite dimensional sphere $S^{\infty}$ and thus a lift $S^{\infty}\to i^{*}(X)$ is a rather complicated object. Intuitively, it is a coherent choice of a homotopy inverse of the simplex $x:\Delta[1]\to i^{*}(X)$, together with coherent choices of homotopies, homotopies between homotopies, etc. See \cite{quasi cat} for more details. Note moreover, that an equivalence in $X$ is in some sense as weak as $X$ would allow it to be. If $X=N_{d}(\mathcal{P})$ then a dendrex $\Omega[L_{1}]\to X$ is an equivalence if, and only if, the corresponding unary arrow in $\mathcal{P}$ is an isomorphism. We will see below a more refined nerve construction in which equivalences correspond to a notion weaker than isomorphism. \end{rem} We can now formulate the homotopy invariance property in the language of dendroidal sets. Let $X$ and $E$ be dendroidal sets. We identify, somewhat ambiguously, the set $X_{\eta}$ with the dendroidal set $\coprod_{x\in X_{\eta}}\Omega[\eta]$. Then there is a map of dendroidal sets $X_{\eta}\to X$ which induces a mapping $[X,E]\to[X_{\eta},E]$. Consider now a family $\{f_{x}\}_{x\in X_{\eta}}$ where each $f_{x}$ is an equivalence in $X$. Then this family can be extended (usually in many different ways) to give a map $\hat{f}:N_{d}(0\leftrightarrows1)\to[X_{\eta},E]$. \begin{defn} \label{def:homotop inv proper}Let $X$ and $E$ be a dendroidal sets. We say that $X$-algebras in $E$ have the \emph{homotopy invariance property} if for every $X$-algebra in $E$, given by $F:X\to E$, and any family $\{f_{x}\}_{x\in X_{\eta}}$ and any extension of it to $\hat{f}$ as above that fit into the commutative diagram\[ \xymatrix{\Omega[\eta]\ar[d]\ar[rrr]^{\forall F} & & & [X,E]\ar[d]\\ N_{d}(0\leftrightarrows1)\ar[rrr]^{\hat{f}}\ar@{..>}[rrru]^{\exists\alpha} & & & [X_{\eta},E]} \] a lift $\alpha$ exists. \end{defn} Intuitively, the lift $\alpha$ consists of two $X$-algebras in $E$, the first being $F$ and the second one being obtained by transferring the $X$-algebra structure given by $F$ along the equivalences $\{f_{x}\}_{x\in X_{\eta}}$.
\subsection{The homotopy coherent nerve and weak algebras}
We now show how dendroidal sets enter the picture in the context of operads enriched in a symmetric closed monoidal model category $\mathcal{E}$ with a chosen interval object (which we call a homotopy environment). Recall that the Berger-Moerdijk generalization of the Boardman-Vogt $W$-construction sends a symmetric operad $\mathcal{P}$ enriched in $\mathcal{E}$ to a cofibrant replacement $W\mathcal{P}$. Recall as well that any non-enriched symmetric operad can be seen as a discrete symmetric operad enriched in $\mathcal{E}$. \begin{defn} Fix a homotopy environment $\mathcal{E}$. Given a symmetric operad $\mathcal{P}$ enriched in $\mathcal{E}$, its \emph{homotopy coherent dendroidal nerve} is the dendroidal set whose set of $T$-dendrices is the set \[ hcN_{d}(\mathcal{P})_{T}=Ope(\mathcal{E})(W(\Omega(T)),\mathcal{P})\] of $\mathcal{E}$-enriched functors between $\mathcal{E}$-enriched operads, where $\Omega(T)$ is seen as a discrete operad enriched in $\mathcal{E}$. \end{defn} The homotopy coherent dendroidal nerve construction, together with the closed monoidal structure on $dSet$ given above, allows for the internalization of the notion of weak algebras. We illustrate this: \begin{defn} Let $\mathcal{P}$ be a non-enriched symmetric operad and $\mathcal{E}$ a homotopy environment. The dendroidal set $[N_{d}(\mathcal{P}),hcN_{d}(\hat{\mathcal{E}})]$ is called the dendroidal set of \emph{weak $\mathcal{P}$-algebras in $\mathcal{E}$}. Here we view $\mathcal{E}$ as an operad enriched in itself (since $\mathcal{E}$ is assumed closed), so that $\hat{\mathcal{E}}$ is a well-defined symmetric operad enriched in $\mathcal{E}$. \end{defn} It can be shown that the $L_{0}$ dendrices in $[N_{d}(\mathcal{P}),hcN_{d}(\hat{\mathcal{E}})]$ correspond to symmetric operad maps $W(\mathcal{P})\to\hat{\mathcal{E}}$ and thus are weak $\mathcal{P}$-algebras. Moreover, the $L_{1}$ dendrices can be seen to correspond to symmetric operad maps $W(\mathcal{P}\otimes[1])\to\hat{\mathcal{E}}$ and thus are weak maps of weak $\mathcal{P}$-algebras. We have thus recovered an internalization of weak algebras and their weak maps and can now consider iterated weak algebraic structures completely analogously to the way this can be done in the context of non-enriched symmetric operads. We illustrate how this works in two examples below.
\subsection{Application to the study of $A_{\infty}$-spaces and weak $n$-categories}
Recall that an $A_{\infty}$-space is an algebra for the topologically enriched operad $W(As)$, where $As$ is the non-enriched symmetric operad that classifies monoids. Let $A=N_{d}(As)$; then, by definition, $[A,hcN_{d}(Top)]$ is the dendroidal set of $A_{\infty}$-spaces and their weak (multivariable) mappings. In the classical definition of $A_{\infty}$-spaces it is not at all clear how to define $n$-fold $A_{\infty}$-spaces. However, we now have a perfectly natural such definition. \begin{defn} The dendroidal set $nA_{\infty}$ of \emph{$n$-fold $A_{\infty}$-spaces} is defined recursively as follows. For $n=1$ we set $1A_{\infty}=[A,hcN_{d}(Top)]$ and for $n\ge 1$: $(n+1)A_{\infty}=[A,nA_{\infty}]$. \end{defn} Thus, we obtain at once notions of weak multivariable mappings of $n$-fold $A_{\infty}$-spaces. And, since $dSet$ is closed monoidal, we can immediately classify $n$-fold $A_{\infty}$-spaces. \begin{prop} For any $n\ge1$ the dendroidal set $A^{\otimes n}$ classifies $n$-fold $A_{\infty}$-spaces. \end{prop} It is at this point not known exactly how $n$-fold $A_{\infty}$-spaces relate to $n$-fold loop spaces. However, the recent work \cite{Fiedor} of Fiedorowicz and Vogt on interchanging $A_\infty$ and $E_{n}$ structures is a first step towards a full comparison of the dendroidal and classical approaches. \begin{rem} Note that were the dendroidal nerve functor monoidal our definition of $n$-fold $A_{\infty}$-spaces would stabilize at $n=2$. Indeed, we would then have $A^{\otimes n}=N_{d}(As)^{\otimes n}=N_{d}(As^{\otimes n})=N_{d}(Comm)$. \end{rem} A similar application, but technically slightly more complicated, is to obtain an iterative definition of weak $n$-categories. First notice that the fact that categories, as well as symmetric operads, can be enriched in a symmetric monoidal category $\mathcal{E}$ is a consequence of the ability to in fact enrich in an arbitrary symmetric operad. We leave the details of defining what a category (or operad) enriched in a symmetric operad $\mathcal{E}$ is to the reader and only mention that this is related to the idea of enriching in an $fc$-multicategory (see \cite{Leins gen enrich}). We now show how in fact categories (and operads) can be enriched in a dendroidal set. Recall from Example \ref{exa:classifying categories over A} that for any set $A$ there is a symmetric operad $C_{A}$ that classifies categories over $A$.
Once more, the ability to easily iterate within the category of dendroidal sets naturally leads to a definition of weak $n$-categories enriched in a dendroidal set $X$ as follows. \begin{defn} Let $X$ be a dendroidal set. The dendroidal set $[N_{d}(\mathcal{C}_{A}),X]$ is called the dendroidal set of \emph{categories over $A$ enriched in $X$} and is denoted by $Cat(X)_{A}.$ \end{defn} It can easily be verified that enriching in the dendroidal nerve of $\hat{\mathcal{E}}$ for $\mathcal{E}$ a symmetric monoidal category agrees with the notion of enrichment in the usual sense.
At this point we would like to collate the various dendroidal sets $Cat(X)_{A}$ into a single dendroidal set. There is a technical difficulty here, and so as not to interrupt the flow of the presentation we refer the reader to Section 4.1 of \cite{den set} for the details of the construction. One then obtains the dendroidal set $Cat(X)$ of categories enriched in $X$. Similarly, using the operad $O_{A}$ classifying symmetric operads over $A$, we can obtain the dendroidal set $Ope(X)$ of symmetric operads enriched in $X$. \begin{defn} Let $X$ be a dendroidal set. Let $_{0}Cat(X)=X$ and define recursively $_{n+1}Cat(X)=Cat(_{n}Cat(X))$ for each $n\ge0$. We call $_{n}Cat(X)$ the dendroidal set of \emph{$n$-categories enriched in $X$}. \end{defn} In particular, considering the category $Cat$ with its categorical model structure and taking $X=hcN_{d}(Cat)$ we obtain for each $n\ge0$ the dendroidal set $_{n}Cat={}_{n}Cat(X)$, which we call the dendroidal set of weak $n$-categories. In \cite{Lukacs thesis} the dendrices in $_{2}Cat(X)_{\eta}$ and $_{3}Cat(X)_{\eta}$ are compared with other definitions of weak $2$-categories and weak $3$-categories to show that the notions are in fact equivalent. The complexity of such comparisons increases rapidly with $n$, and the question is currently not settled, as is the case for many definitions of weak $n$-categories (see \cite{Leinster survey} for a survey). We mention that we can also consider categories weakly enriched in other dendroidal sets such as $hcN_{d}(Top)$ or $hcN_{d}(sSet)$, where again a full comparison with existing structures is yet to be completed. Of course, we can also consider weak $n$-operads of various sorts.
We conclude this section by considering the Baez-Breen-Dolan stabilization hypothesis for weak $n$-categories as defined above. With every reasonable definition of weak $n$-categories there is usually associated a notion of $k$-monoidal $n$-categories for every $k\ge0$. These are weak $(n+k)$-categories having trivial information in all dimensions up to and including $k$. The stabilization hypothesis is that for fixed $n$ the complexity of these structures stabilizes at $k=n+2$. Given a concrete definition of weak $n$-categories this hypothesis can be made exact and it becomes a conjecture. In our case we proceed as follows. \begin{defn} Let $n\ge0$ be fixed. For $k\ge0$ we define recursively the dendroidal set $wCat_{k}^{n}$ of \emph{weak $k$-monoidal $n$-categories} as follows. For $k=0$ we set $wCat_{0}^{n}=\ _{n}Cat$ and for $k>0$ we define $wCat_{k}^{n}=[A,wCat_{k-1}^{n}]$, where $A=N_{d}(As)$. A dendrex of shape $\eta$ in $wCat_{k}^{n}$ is called a \emph{$k$-monoidal $n$-category}.\end{defn} \begin{conjecture} \label{con:(The-Baez-Breen-Dolan-stabilization}(The Baez-Breen-Dolan stabilization hypothesis for our notion of $n$-categories) For a fixed $n\ge0$, there is an isomorphism of dendroidal sets between $wCat_{k}^{n}$ and $wCat_{n+2}^{n}$ for any $k\ge n+2$. \end{conjecture}
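Using the isomorphism $[X\otimes Y,Z]\cong[X,[Y,Z]]$ coming from the closed monoidal structure on $dSet$, the recursion defining $wCat_{k}^{n}$ unwinds to
\[ wCat_{k}^{n}\cong[A^{\otimes k},{}_{n}Cat],\qquad k\ge0, \]
and it is in this form that the conjecture will be approached below.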
\section{Dendroidal sets - models for $\infty$-operads}
In this section we show how dendroidal sets are used to model $\infty$-operads. As a very brief motivation for the concepts to follow we first discuss $\infty$-categories. We then present the part of the theory of dendroidal sets needed to define the Cisinski-Moerdijk model structure on dendroidal sets, which establishes dendroidal sets as models for homotopy operads, and illustrate some of its consequences. The proofs of the results below can be found in \cite{dSet model hom op,inn Kan in dSet}.
\subsection{$\infty$-categories briefly}
We have seen above that in general, weak $\mathcal{P}$-algebras in a homotopy environment $\mathcal{E}$ and their weak maps fail to form a category and that in fact one is immediately led to define the simplicial set of weak algebras $wAlg[\mathcal{P},\mathcal{E}]$. The failure of this simplicial set to be the nerve of a category is a reflection of composition not being associative. However, the composition of weak maps is associative up to coherent homotopies, a fact which induces some extra structure on the simplicial set $wAlg[\mathcal{P},\mathcal{E}]$. Boardman and Vogt in \cite{BV book} formulated this extra structure by means of a condition called the restricted Kan condition. To define it recall that a horn in a simplicial set $X$ is a mapping $\Lambda^{k}[n]\to X$, where $\Lambda^{k}[n]$ is the union in $\Delta[n]$ of all faces except the one opposite the vertex $k$. A horn is called \emph{inner }when $0<k<n$. Boardman and Vogt in \cite{BV book}, page 102, define \begin{quote} A simplicial set $X$ is said to satisfy the restricted Kan condition if every inner horn $\Lambda^{k}[n]\to X$ has a filler. \end{quote} and so $\infty$-categories were born. They consequently prove that in the context of topological operads the simplicial set of weak algebras satisfies the restricted Kan condition. Simplicial sets satisfying the restricted Kan condition are extensively studied by Joyal (in \cite{quasi cat,quasi cat book} under the name 'quasicategories') and by Lurie (in e.g., \cite{Lurie} under the name '$(\infty,1)$-categories' or more simply '$\infty$-categories').
There are several ways to model $\infty$-categories, of which the above restricted Kan condition is one. Three other models are complete Segal spaces, Segal categories, and simplicial categories. For each of these models there is an appropriate Quillen model structure rendering the four different models Quillen equivalent (see \cite{surv infty cat} for a detailed survey). By considering dendroidal sets instead of simplicial sets Cisinski and Moerdijk in \cite{den Seg sp,dSet and simp ope} introduce the analogous dendroidal notions: complete dendroidal Segal spaces, Segal operads, and simplicial operads. Moreover, they establish Quillen model structures for each of these notions, proving they are all Quillen equivalent to a Quillen model structure on dendroidal sets they establish in \cite{dSet model hom op}. All of these model structures and equivalences, upon slicing over a suitable object, reduce to the equivalence of the simplicial based structures mentioned above.
There is yet another approach to $\infty$-operads, taken by Lurie \cite{high top th}, which defines an $\infty$-operad to be a simplicial set with extra structure. In Lurie's approach the highly developed theory of simplicial sets and quasicategories is readily available to provide a rich theory of $\infty$-operads. However, the extra structure that makes a simplicial set into an $\infty$-operad is quite complicated, which makes working with explicit examples of $\infty$-operads difficult. The approach via dendroidal sets replaces the relatively simple combinatorics of linear trees by the more involved combinatorics of general trees; this renders existing simplicial theory unusable as it stands, but it offers very many explicit examples of $\infty$-operads. We believe that this trade-off in complexity will result in the two approaches mutually enriching each other as future comparisons unfold.
Below, following \cite{dSet model hom op}, we give a short presentation of the approach to $\infty$-operads embodied in the Cisinski-Moerdijk model structure on $dSet$, which slices to the Joyal model structure on $sSet$, and we use this model structure to prove a homotopy invariance property for algebras in $dSet$. From this point on, $\infty$-category means quasicategory.
\subsection{Horns in $dSet$}
We first introduce some concepts needed for the definition, referring the reader to \cite{den set,inn Kan in dSet} for more details. \begin{defn} Let $T$ be a tree and $\alpha:S\rightarrow T$ a face map in $\Omega$. The \emph{$\alpha$-face} of $\Omega[T]$, denoted by $\partial_{\alpha}\Omega[T]$, is the dendroidal subset of $\Omega[T]$ which is the image of the map $\Omega[\alpha]:\Omega[S]\rightarrow\Omega[T]$. Thus we have that \[ \partial_{\alpha}\Omega[T]_{R}=\{\xymatrix{R\ar[r] & S\ar[r]^{\alpha} & T} \mid R\rightarrow S\in\Omega[S]_{R}\}.\] When $\alpha$ is obtained by contracting an inner edge $e$ in $T$, we denote $\partial_{\alpha}$ by $\partial_{e}$.
Let $T$ be a tree. The \emph{boundary} of $\Omega[T]$ is the dendroidal subset $\partial\Omega[T]$ of $\Omega[T]$ obtained as the union of all the faces of $\Omega[T]$:
\[ \partial\Omega[T]=\bigcup_{\alpha\in\Phi_{1}(T)}\partial_{\alpha}\Omega[T],\] where $\Phi_{1}(T)$ is the set of all faces of $T$. \end{defn}
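For linear trees this recovers the simplicial boundary: the faces of $L_{n}$ correspond exactly to the $n+1$ faces of $\Delta[n]$, and under the embedding $i_{!}$ one checks that
\[ \partial\Omega[L_{n}]\cong i_{!}(\partial\Delta[n]) \]
inside $\Omega[L_{n}]=i_{!}(\Delta[n])$.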
\begin{defn} Let $T$ be a tree and $\alpha\in\Phi_{1}(T)$ a face of $T$. The \emph{$\alpha$-horn} in $\Omega[T]$ is the dendroidal subset $\Lambda^{\alpha}[T]$ of $\Omega[T]$ which is the union of all the faces of $T$ except $\partial_{\alpha}\Omega[T]$:\[ \Lambda^{\alpha}[T]=\bigcup_{\beta\ne\alpha\in\Phi_{1}(T)}\partial_{\beta}\Omega[T].\]
The horn is called an \emph{inner horn} if $\alpha$ is an inner face, otherwise it is called an \emph{outer horn}. We will denote an inner horn $\Lambda^{\alpha}[T]$ by $\Lambda^{e}[T]$, where $e$ is the contracted inner edge in $T$ that defines the inner face $\alpha=\partial_{e}:\Omega[T/e]\rightarrow\Omega[T]$. A horn in a dendroidal set $X$ is a map of dendroidal sets $\Lambda^{\alpha}[T]\rightarrow X$. It is inner (respectively outer) if the horn $\Lambda^{\alpha}[T]$ is inner (respectively outer). \end{defn} \begin{rem} It is trivial to verify that these notions for dendroidal sets extend the common ones for simplicial sets in the sense, for example, that for the simplicial horn $\Lambda^{k}[n]\subseteq\Delta[n]$, the dendroidal set \[ i_{!}(\Lambda^{k}[n])\subseteq i_{!}(\Delta[n])=\Omega[L_{n}]\] is a horn in the dendroidal sense. Furthermore, the horn $\Lambda^{k}[n]$ is inner (i.e., $0<k<n$) if, and only if, the horn $i_{!}(\Lambda^{k}[n])$ is inner. \end{rem} Both the boundary $\partial\Omega[T]$ and the horns $\Lambda^{\alpha}[T]$ in $\Omega[T]$ can be described as colimits as follows. \begin{defn} Let $T_{1}\rightarrow T_{2}\rightarrow\cdots\rightarrow T_{n}$ be a sequence of $n$ face maps in $\Omega$. We call the composition of these maps a \emph{subface} of $T_{n}$ of \emph{codimension} $n$. \end{defn} \begin{prop} \label{pro:Sub-faceOfASub-Face}Let $S\rightarrow T$ be a subface of $T$ of codimension $2$. The map $S\rightarrow T$ decomposes in precisely two different ways as a composition of faces. \end{prop} Let $\Phi_{i}(T)$ be the set of all subfaces of $T$ of codimension $i$. The proposition implies that for each $\beta:S\rightarrow T\in\Phi_{2}(T)$ there are precisely two face maps $\beta_{1}:S\rightarrow T_{1}$ and $\beta_{2}:S\rightarrow T_{2}$ that factor $\beta$ as a composition of face maps. Using these maps we can form two maps $\gamma_{1}$ and $\gamma_{2}$\[ \coprod_{S\rightarrow T\in\Phi_{2}(T)}\Omega[S]\rightrightarrows\coprod_{R\rightarrow T\in\Phi_{1}(T)}\Omega[R]\] where $\gamma_{i}$ ($i=1,2$) has component$\xymatrix{\Omega[S]\ar[r]^{\Omega[\beta_{i}]} & \Omega[T_{i}]\ar[r] & \coprod\Omega[R]} $ for each $\beta:S\rightarrow T\in\Phi_{2}(T)$. \begin{lem} Let $T$ be a tree in $\Omega$. With notation as above we have that the boundary $\partial\Omega[T]$ is a coequalizer\[ \coprod_{S\rightarrow T\in\Phi_{2}(T)}\Omega[S]\rightrightarrows\coprod_{R\rightarrow T\in\Phi_{1}(T)}\Omega[R]\rightarrow\partial\Omega[T]\] of the two maps $\gamma_{1},\gamma_{2}$ constructed above. \end{lem} \begin{cor} A map of dendroidal sets $\partial\Omega[T]\rightarrow X$ corresponds exactly to a sequence $\{x_{R}\}_{R\rightarrow T\in\Phi_{1}(T)}$ of dendrices whose faces match, in the sense that for each subface $\beta:S\rightarrow T$ of codimension $2$ we have $\beta_{1}^{*}(x_{T_{1}})=\beta_{2}^{*}(x_{T_{2}})$. \end{cor} A similar presentation for horns holds as well. For a fixed face $\alpha:S\rightarrow T\in\Phi_{1}(T)$ consider the parallel arrows defined by making the following diagram commute\[ \xymatrix{\Omega[S]\ar[d]\ar[r]^{\beta_{1}} & \Omega[T_{1}]\ar[d]\\ \coprod_{\beta:S\rightarrow T\in\Phi_{2}(T)}\Omega[S]\ar@<2pt>[r]\ar@<-2pt>[r] & \coprod_{R\rightarrow T\ne\alpha\in\Phi_{1}(T)}\Omega[R]\\ \Omega[S]\ar[u]\ar[r]^{\beta_{2}} & \Omega[T_{2}]\ar[u]} \] where the vertical arrows are the canonical injections into the coproduct and where we use the same notation as above. \begin{lem} Let $T$ be a tree in $\Omega$ and $\alpha$ a face of $T$. 
In the diagram \[ \coprod_{S\rightarrow T\in\Phi_{2}(T)}\Omega[S]\rightrightarrows\coprod_{R\rightarrow T\ne\alpha\in\Phi_{1}(T)}\Omega[R]\rightarrow\Lambda^{\alpha}[T]\]
the dendroidal set $\Lambda^{\alpha}[T]$ is the coequalizer of the two maps constructed above. \end{lem} \begin{cor} A horn $\Lambda^{\alpha}[T]\rightarrow X$ in $X$ corresponds exactly to a sequence $\{x_{R}\}_{R\rightarrow T\ne\alpha\in\Phi_{1}(T)}$ of dendrices that agree on common faces in the sense that if $\beta:S\rightarrow T$ is a subface of codimension $2$ which factors as\[ \xymatrix{ & R_{1}\ar[rd]^{\alpha_{1}}\\ S\ar[rr]^{\beta}\ar[ru]^{\beta_{1}}\ar[rd]_{\beta_{2}} & & T\\
& R_{2}\ar[ru]_{\alpha_{2}}} \] then $\beta_{1}^{*}(x_{R_{1}})=\beta_{2}^{*}(x_{R_{2}}).$\end{cor} \begin{rem} In the special case where the tree $T$ is linear we obtain the equivalent results for simplicial sets. Namely, the presentation of the boundary $\partial\Delta[n]$ and of the horn $\Lambda^{k}[n]$ as colimits of standard simplices, and the description of a horn $\Lambda^{k}[n]\rightarrow X$ in a simplicial set $X$ (see \cite{Goers Jardin}). \end{rem} We are now able to define the dendroidal sets that model $\infty$-operads. \begin{defn} A dendroidal set $X$ is an $\infty$-\emph{operad }if every inner horn $h:\Lambda^{e}[T]\to X$ has a filler $\hat{h}:\Omega[T]\to X$ making the diagram\[ \xymatrix{\Lambda^{e}[T]\ar[d]\ar[r]^{h} & X\\ \Omega[T]\ar[ur]_{\hat{h}}} \] commute. \end{defn} The following relation between $\infty$-categories and $\infty$-operads is trivial to prove: \begin{prop} If $X$ is an $\infty$-category then $i_{!}(X)$ is an $\infty$-operad. If $Y$ is an $\infty$-operad then $i^{*}(Y)$ is an $\infty$-category. \end{prop} It is not hard to see that given any symmetric operad $\mathcal{P}$ its dendroidal nerve $N_{d}(\mathcal{P})$ is an $\infty$-operad. In fact we can characterize those dendroidal sets occurring as nerves of operads as follows. \begin{defn} An $\infty$-operad $X$ is called \emph{strict }if any inner horn in $X$ as above has a unique filler.\end{defn} \begin{lem} \label{lem:strict iff operad}A dendroidal set $X$ is a strict $\infty$-operad if, and only if, there is an operad $\mathcal{P}$ such that $N_{d}(\mathcal{P})\cong X$. \end{lem} A family of examples of paramount importance of $\infty$-operads are given by the following. Recall that when $\mathcal{E}$ is a symmetric monoidal model category a symmetric operad $\mathcal{P}$ enriched in $\mathcal{E}$ is called \emph{locally fibrant} if each hom-object in $\mathcal{P}$ is fibrant in $\mathcal{E}$. \begin{thm} Let $\mathcal{P}$ be a locally fibrant symmetric operad in $\mathcal{E}$, where $\mathcal{E}$ is a homotopy environment. The homotopy coherent nerve $hcN_{d}(\mathcal{P})$ is an $\infty$-operad. \end{thm}
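For instance, in the homotopy environment $Top$ every object is fibrant, so every topologically enriched symmetric operad is locally fibrant; in particular the dendroidal set $hcN_{d}(Top)$, which appeared above in the discussion of $A_{\infty}$-spaces, is an $\infty$-operad.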
\subsection{The Cisinski-Moerdijk model category structure on $dSet$}
The objective of this section is to present the Cisinski-Moerdijk model structure on $dSet$. All of the material in this section is taken from \cite{dSet model hom op}, to which the reader is referred for more information and the proofs. In this model structure the $\infty$-operads are the fibrant objects, and it is closely related to the operadic model structure on $Ope$ and to the Joyal model structure on $sSet$.
We note immediately that the Cisinski-Moerdijk model structure is not a Cisinski model structure (i.e., a model structure on a presheaf category such that the cofibrations are precisely the monomorphisms) due to a technical complication that prevents the direct application of the techniques developed in \cite{Cisinski his model stru}. Indeed, the cofibrations in the model structure are the so-called \emph{normal monomorphisms}. \begin{defn} A monomorphism of dendroidal sets $f:X\rightarrow Y$ is \emph{normal} if for every dendrex $t\in Y_{T}$ that does not factor through $f$ the only isomorphism of $T$ that fixes $t$ is the identity. \end{defn} An important property of dendroidal sets, proved in \cite{inn Kan in dSet}, is the following. \begin{thm} \label{thm:exp}Let $X$ be a normal dendroidal set (i.e., $\emptyset\to X$ is normal) and $Y$ an $\infty$-operad. The dendroidal set $[X,Y]$ is again an $\infty$-operad. \end{thm} \begin{proof} The proof uses the technique of anodyne extensions, as commonly used in the theory of simplicial sets (e.g., \cite{Gab Zisman,Goers Jardin}), suitably adapted to dendroidal sets. Technically though, the dendroidal case is much more difficult. For simplicial sets there is a rather simple description of the non-degenerate simplices of $\Delta[n]\times\Delta[k]$. But for trees $S$ and $T$ a similar description of the non-degenerate dendrices of $\Omega[S]\otimes\Omega[T]$ is given by the so called poset of percolation trees associated with $S$ and $T$. Complete details can be found in \cite{inn Kan in dSet} and we just briefly illustrate the construction for the trees $S$ and $T$: \[ \begin{array}{ccc} \xymatrix{ & *{\,}\ar@{-}[dr] & & *{\,}\ar@{-}[dl]\\
& & *{\circ}\ar@{-}[d]_{e}\\ S= & & *{\circ}\ar@{-}[d]\\
& & *{\,}}
& \quad\quad & \xymatrix{ & *{\,}\ar@{-}[d]_{3} & & *{\,}\ar@{-}[d]_{5}\\
& *{\bullet}\ar@{-}[dr]_{2} & & *{\bullet}\ar@{-}[dl]_{4}\\ T= & & *{\bullet}\ar@{-}[d]_{1}\\
& & *{\,}} \end{array}\]
The presentation of $\Omega[T]\otimes\Omega[S]$ is given by the 14 trees $T_{1},\cdots,T_{14}$:
$\xyR{5pt}\xyC{5pt}\xymatrix{*{}\ar@{-}[d] & & *{}\ar@{-}[d] & & *{}\ar@{-}[d] & & *{}\ar@{-}[d]\\ *{\bullet}\ar@{-}[dr] & & *{\bullet}\ar@{-}[dl] & & *{\bullet}\ar@{-}[dr] & & *{\bullet}\ar@{-}[dl]\\
& *{\bullet}\ar@{-}[drr] & & & & *{\bullet}\ar@{-}[dll]\\
& & & *{\circ}\ar@{-}[d]_{e_{1}}\\
& & & *{\circ}\ar@{-}[d]\\
& & & *{}\\
& & & T_{1}} $~~~~~$\xyR{5pt}\xyC{5pt}\xymatrix{*{}\ar@{-}[d] & & *{}\ar@{-}[d] & & *{}\ar@{-}[d] & & *{}\ar@{-}[d]\\ *{\bullet}\ar@{-}[dr] & & *{\bullet}\ar@{-}[dl] & & *{\bullet}\ar@{-}[dr] & & *{\bullet}\ar@{-}[dl]\\
& *{\circ}\ar@{-}[drr]_{e_{2}} & & & & *{\circ}\ar@{-}[dll]^{e_{4}}\\
& & & *{\bullet}\ar@{-}[d]_{e_{1}}\\
& & & *{\circ}\ar@{-}[d]\\
& & & *{}\ar@{-}[]\\
& & & T_{2}} $~~~~~$\xyR{5pt}\xyC{5pt}\xymatrix{*{}\ar@{-}[d] & & *{}\ar@{-}[d] & & *{}\ar@{-}[dr] & & *{}\ar@{-}[dl]\\ *{\bullet}\ar@{-}[dr] & & *{\bullet}\ar@{-}[dl] & & & *{\circ}\ar@{-}[d]_{e_{5}}\\
& *{\circ}\ar@{-}[drr]_{e_{2}} & & & & *{\bullet}\ar@{-}[dll]^{e_{4}}\\
& & & *{\bullet}\ar@{-}[d]_{e_{1}}\\
& & & *{\circ}\ar@{-}[d]\\
& & & *{}\\
& & & T_{3}} $
$\xyR{5pt}\xyC{5pt}\xymatrix{*{}\ar@{-}[dr] & & *{}\ar@{-}[dl] & & *{}\ar@{-}[d] & & *{}\ar@{-}[d]\\
& *{\circ}\ar@{-}[d]_{e_{3}} & & & *{\bullet}\ar@{-}[dr] & & *{\bullet}\ar@{-}[dl]\\
& *{\bullet}\ar@{-}[drr]_{e_{2}} & & & & *{\circ}\ar@{-}[dll]^{e_{4}}\\
& & & *{\bullet}\ar@{-}[d]_{e_{1}}\\
& & & *{\circ}\ar@{-}[d]\\
& & & *{}\\
& & & T_{4}} $~~~~~$\xyR{5pt}\xyC{5pt}\xymatrix{*{}\ar@{-}[dr] & & *{}\ar@{-}[dl] & & *{}\ar@{-}[dr] & & *{}\ar@{-}[dl]\\
& *{\circ}\ar@{-}[d]_{e_{3}} & & & & *{\circ}\ar@{-}[d]^{e_{5}}\\
& *{\bullet}\ar@{-}[drr]_{e_{2}} & & & & *{\bullet}\ar@{-}[dll]^{e_{4}}\\
& & & *{\bullet}\ar@{-}[d]_{e_{1}}\\
& & & *{\circ}\ar@{-}[d]\\
& & & *{}\\
& & & T_{5}} $~~~~~$\xyR{5pt}\xyC{5pt}\xymatrix{*{}\ar@{-}[d] & & *{}\ar@{-}[d] & & *{}\ar@{-}[d] & & *{}\ar@{-}[d]\\ *{\bullet}\ar@{-}[dr] & & *{\bullet}\ar@{-}[dl] & & *{\bullet}\ar@{-}[dr] & & *{\bullet}\ar@{-}[dl]\\
& *{\circ}\ar@{-}[d]_{e_{2}} & & & & *{\circ}\ar@{-}[d]_{e_{4}}\\
& *{\circ}\ar@{-}[drr] & & & & *{\circ}\ar@{-}[dll]\\
& & & *{\bullet}\ar@{-}[d]\\
& & & *{}\\
& & & T_{6}} $
$\xyR{5pt}\xyC{5pt}\xymatrix{*{}\ar@{-}[d] & & *{}\ar@{-}[d] & & *{}\ar@{-}[dr] & & *{}\ar@{-}[dl]\\ *{\bullet}\ar@{-}[dr] & & *{\bullet}\ar@{-}[dl] & & & *{\circ}\ar@{-}[d]_{e_{5}}\\
& *{\circ}\ar@{-}[d]_{e_{2}} & & & & *{\bullet}\ar@{-}[d]\\
& *{\circ}\ar@{-}[drr] & & & & *{\circ}\ar@{-}[dll]\\
& & & *{\bullet}\ar@{-}[d]\\
& & & *{}\\
& & & T_{7}} $~~~~~$\xyR{5pt}\xyC{5pt}\xymatrix{*{}\ar@{-}[d] & & *{}\ar@{-}[d] & & *{}\ar@{-}[dr] & & *{}\ar@{-}[dl]\\ *{\bullet}\ar@{-}[dr] & & *{\bullet}\ar@{-}[dl] & & & *{\circ}\ar@{-}[d]_{e_{5}}\\
& *{\circ}\ar@{-}[d]_{e_{2}} & & & & *{\circ}\ar@{-}[d]\\
& *{\circ}\ar@{-}[drr] & & & & *{\bullet}\ar@{-}[dll]\\
& & & *{\bullet}\ar@{-}[d]\\
& & & *{}\\
& & & T_{8}} $~~~~~$\xyC{5pt}\xyR{5pt}\xymatrix{*{}\ar@{-}[dr] & & *{}\ar@{-}[dl] & & *{}\ar@{-}[d] & & *{}\ar@{-}[d]\\
& *{\circ}\ar@{-}[d] & & & *{\bullet}\ar@{-}[dr] & & *{\bullet}\ar@{-}[dl]\\
& *{\bullet}\ar@{-}[d] & & & & *{\circ}\ar@{-}[d]\\
& *{\circ}\ar@{-}[drr] & & & & *{\circ}\ar@{-}[dll]\\
& & & *{\bullet}\ar@{-}[d]\\
& & & *{}\\
& & & T_{9}} $
$\xyC{5pt}\xyR{5pt}\xymatrix{*{}\ar@{-}[dr] & & *{}\ar@{-}[dl] & & *{}\ar@{-}[dr] & & *{}\ar@{-}[dl]\\
& *{\circ}\ar@{-}[d] & & & & *{\circ}\ar@{-}[d]\\
& *{\bullet}\ar@{-}[d] & & & & *{\bullet}\ar@{-}[d]\\
& *{\circ}\ar@{-}[drr] & & & & *{\circ}\ar@{-}[dll]\\
& & & *{\bullet}\ar@{-}[d]\\
& & & *{}\\
& & & T_{10}} $~~~~~$\xyR{5pt}\xyC{5pt}\xymatrix{*{}\ar@{-}[dr] & & *{}\ar@{-}[dl] & & *{}\ar@{-}[dr] & & *{}\ar@{-}[dl]\\
& *{\circ}\ar@{-}[d]_{e_{3}} & & & & *{\circ}\ar@{-}[d]_{e_{5}}\\
& *{\bullet}\ar@{-}[d]_{e_{2}} & & & & *{\circ}\ar@{-}[d]\\
& *{\circ}\ar@{-}[drr] & & & & *{\bullet}\ar@{-}[dll]\\
& & & *{\bullet}\ar@{-}[d]\\
& & & *{}\\
& & & T_{11}} $~~~~~$\xyC{5pt}\xyR{5pt}\xymatrix{*{}\ar@{-}[dr] & & *{}\ar@{-}[dl] & & *{}\ar@{-}[d] & & *{}\ar@{-}[d]\\
& *{\circ}\ar@{-}[d]_{e_{3}} & & & *{\bullet}\ar@{-}[dr] & & *{\bullet}\ar@{-}[dl]\\
& *{\circ}\ar@{-}[d] & & & & *{\circ}\ar@{-}[d]_{e_{4}}\\
& *{\bullet}\ar@{-}[drr] & & & & *{\circ}\ar@{-}[dll]\\
& & & *{\bullet}\ar@{-}[d]\\
& & & *{}\\
& & & T_{12}} $
$\xyC{5pt}\xyR{5pt}\xymatrix{*{}\ar@{-}[dr] & & *{}\ar@{-}[dl] & & *{}\ar@{-}[dr] & & *{}\ar@{-}[dl]\\
& *{\circ}\ar@{-}[d]_{e_{3}} & & & & *{\circ}\ar@{-}[d]_{e_{5}}\\
& *{\circ}\ar@{-}[d] & & & & *{\bullet}\ar@{-}[d]_{e_{4}}\\
& *{\bullet}\ar@{-}[drr] & & & & *{\circ}\ar@{-}[dll]\\
& & & *{\bullet}\ar@{-}[d]\\
& & & *{}\\
& & & T_{13}} $~~~~~$\xyC{5pt}\xyR{5pt}\xymatrix{*{}\ar@{-}[dr] & & *{}\ar@{-}[dl] & & *{}\ar@{-}[dr] & & *{}\ar@{-}[dl]\\
& *{\circ}\ar@{-}[d] & & & & *{\circ}\ar@{-}[d]\\
& *{\circ}\ar@{-}[d] & & & & *{\circ}\ar@{-}[d]\\
& *{\bullet}\ar@{-}[drr] & & & & *{\bullet}\ar@{-}[dll]\\
& & & *{\bullet}\ar@{-}[d]\\
& & & *{}\\
& & & T_{14}} $ \\
The poset structure on these trees is \[ \xyC{10pt}\xyR{10pt}\xymatrix{ & & T_{1}\ar@{-}[dd]\\ \\ & & T_{2}\ar@{-}[ddrr]\ar@{-}[dd]\ar@{-}[ddll]\\ \\T_{3}\ar@{-}[dd]\ar@{-}[ddrr] & & T_{6}\ar@{-}'[dl][ddll]\ar@{-}'[dr][ddrr] & & T_{4}\ar@{-}[dd]\ar@{-}[ddll]\\
& \, & & \,\\ T_{7}\ar@{-}[ddrr]\ar@{-}[dd] & & T_{5}\ar@{-}[dd] & & T_{9}\ar@{-}[ddll]\ar@{-}[dd]\\
& \,\\ T_{8}\ar@{-}[ddr] & & T_{10}\ar@{-}[ddr]\ar@{-}[ddl] & & T_{12}\ar@{-}[ddl]\\ \\ & T_{11}\ar@{-}[ddr] & & T_{13}\ar@{-}[ddl]\\ \\ & & T_{14}} \]
\end{proof} As a special case of this result we may now recover Boardman and Vogt's result that the simplicial set $wAlg[\mathcal{P},\mathcal{E}]$ of weak $\mathcal{P}$-algebras in $\mathcal{E}$ is an $\infty$-category, as follows. \begin{thm} Let $\mathcal{P}$ be a symmetric operad and $\mathcal{E}$ a homotopy environment. If the dendroidal set $N_{d}(\mathcal{P})$ is normal and $hcN_{d}(\hat{\mathcal{E}})$ is an $\infty$-operad then $wAlg[\mathcal{P},\mathcal{E}]$ is an $\infty$-category.\end{thm} \begin{proof} The proof follows by noticing that $i^{*}([N_{d}(\mathcal{P}),hcN_{d}(\hat{\mathcal{E}})])\cong wAlg[\mathcal{P},\mathcal{E}]$. \end{proof} As we have seen above, local fibrancy of $\hat{\mathcal{E}}$ assures that $hcN_{d}(\hat{\mathcal{E}})$ is an $\infty$-operad. See below for a condition on $\mathcal{P}$ sufficient to assure $N_{d}(\mathcal{P})$ is normal. We now turn to the Cisinski-Moerdijk model structure. \begin{thm} The category $dSet$ of dendroidal sets admits a Quillen model structure where the cofibrations are the normal monomorphisms, the fibrant objects are the $\infty$-operads, and the fibrations between $\infty$-operads are the inner Kan fibrations whose image under $\tau_{d}$ is an operadic fibration. The class $\mathcal{W}$ of weak equivalences can be characterized as the smallest class of arrows which contains all inner anodyne extensions, all trivial fibrations between $\infty$-operads and satisfies the 2 out of 3 property. Furthermore, with the tensor product of dendroidal sets, this model structure is a monoidal model category. Slicing this model structure recovers the Joyal model structure on $sSet$ and in the diagram\[ \xyC{25pt}\xyR{25pt}\xymatrix{Cat\ar@<2pt>[r]^{j_{!}\,\,\,}\ar@<2pt>[d]^{N} & Ope\ar@<2pt>[l]^{j^{*}\,\,\,}\ar@<2pt>[d]^{N_{d}}\\ sSet\ar@<2pt>[r]^{i_{!}}\ar@<2pt>[u]^{\tau} & dSet\ar@<2pt>[l]^{i^{*}}\ar@<2pt>[u]^{\tau_{d}}} \] where the categories are endowed (respectively starting from the top-left going clock-wise) with the categorical, operadic, Cisinski-Moerdijk, and Joyal model structures all adjunctions are Quillen adjunction (and none is a Quillen equivalence). \end{thm} \begin{proof} The proof of the model structure is quite intricate and is established in \cite{dSet model hom op}. \end{proof}
\subsection{Homotopy invariance property for algebras in $dSet$}
As a consequence of the Cisinski-Moerdijk model structure on dendroidal sets we obtain the following. \begin{thm} Let $X$ be a normal dendroidal set and $E$ an $\infty$-operad. Then $X$-algebras in $E$ have the homotopy invariance property. \end{thm} \begin{proof} In the diagram defining the homotopy invariance property in Definition \ref{def:homotop inv proper} above the left vertical arrow is a trivial cofibration and the right vertical arrow is a fibration in the Cisinski-Moerdijk model structure and thus the required lift exists. \end{proof} We now recall Fact \ref{fac:iso invar} regarding the internalization of strict algebras by the closed monoidal structure on $Ope$ given by the Boardman-Vogt tensor product and the isomorphism invariance property for such algebras as captured by the operadic monoidal model structure on $Ope$. We recall that a similar such correspondence for weak algebras and their homotopy invariance property does not seem possible within the confines of enriched operads. We are now in a position to summarize the results recounted above in a form completely analogous to the situation of strict algebras. \begin{fact} The notion of algebras of dendroidal sets is internalized to the category $dSet$ by it being closed monoidal with respect to the tensor product of dendroidal sets. The homotopy invariance property of $X$-algebras in an $\infty$-operad, for a normal $X$, holds and is captured by the fact that $dSet$ supports the Cisinski-Moerdijk model structure which is compatible with the tensor product. The notion of a weak $\mathcal{P}$-algebra in a homotopy environment $\mathcal{E}$, where $\mathcal{P}$ is discrete, is subsumed by the notion of algebras in $dSet$ by means of the dendroidal set $[N_{d}(\mathcal{P}),hcN_{d}(\mathcal{E})]$ of weak $\mathcal{P}$-algebras in $\mathcal{E}$. \end{fact}
\subsection{\label{sub:Revisiting-applications}Revisiting applications}
Recall the iterative construction of the dendroidal set $nA_{\infty}$ of $n$-fold $A_{\infty}$-spaces and $_{n}Cat$ of weak $n$-categories as well as $wCat_{k}^{n}$ of $k$-monoidal $n$-categories. It follows from the general theory of dendroidal sets that these are all $\infty$-operads. To see this, one uses Theorem \ref{thm:exp} together with the fact that for a $\Sigma$-free symmetric operad $\mathcal{P}$ (for instance, if $\mathcal{P}$ is obtained from a planar operad $\mathcal{Q}$ by the symmetrization functor) the dendroidal set $N_{d}(\mathcal{P})$ is normal. Thus, weak maps of $n$-fold $A_{\infty}$-spaces can be coherently composed, and similarly so can weak functors between weak $n$-categories. As for $k$-monoidal $n$-categories, we note that $wCat_{k}^{n}$ is, for similar reasons, an $\infty$-operad as well.
We may now reduce the Baez-Breen-Dolan stabilization conjecture as follows. \begin{prop} If for any $n\ge0$ the dendroidal set $wCat_{n}^{n}$ is a strict $\infty$-operad then the Baez-Breen-Dolan stabilization conjecture is true. \end{prop} \begin{proof} Recall that $wCat_{n}^{n}=[A^{\otimes n},wCat^{n}]$ and assume it is a strict $\infty$-operad. We wish to prove that $[A^{\otimes n+j},wCat^{n}]\cong[A^{\otimes n+2},wCat^{n}]$ for any fixed $j>2$. By Lemma \ref{lem:strict iff operad} there is an operad $\mathcal{P}$ such that $[A^{\otimes n},wCat^{n}]=N_{d}(\mathcal{P})$. We now have\[ [A^{\otimes n+j},wCat^{n}]=[A^{\otimes j},[A^{\otimes n},wCat^{n}]]=[A^{\otimes j},N_{d}(\mathcal{P})]\] which by adjunction is isomorphic to $N_{d}([\tau_{d}(A^{\otimes j}),\mathcal{P}]).$ However, $A$ is actually the dendroidal nerve of the symmetric operad $As$ classifying associative monoids. By Proposition \ref{pro:tau of tensor} we have\[ \tau_{d}(A^{\otimes j})=\tau_{d}(N_{d}(As)^{\otimes j})\cong As^{\otimes j}\cong Comm.\] and the result follows. \end{proof}
\section{Dendroidal sets - combinatorial models of unknown spaces}
All of the theory of dendroidal sets that is directly or indirectly concerned with algebras (we include the Cisinski-Moerdijk model structure here as well) is very operadic in nature and is closely related to the theory of $\infty$-categories modeled by quasicategories. Simplicial sets are, however, also models for topological homotopy theory. Indeed, simplicial sets were introduced in the context of algebraic topology as combinatorial models of topological spaces. The appropriate equivalence is established in \cite{Quillen hom alg} and was the reason for introducing Quillen model categories. To state the main result, recall the singular functor $Sing:Top\to sSet$ and its left adjoint $|-|:sSet\to Top$ given by geometric realization. \begin{thm} The category $Top$ supports a Quillen model structure in which the weak equivalences are the weak homotopy equivalences and the fibrations are the Serre fibrations. The category $sSet$ supports a Quillen model structure in which the weak equivalences are those maps $f:X\to Y$
for which the geometric realization $|f|$ is a weak homotopy equivalence, and the fibrations are the Kan fibrations. With these model structures the adjunction above is a Quillen equivalence. \end{thm} Simplicial sets thus support a topologically flavoured Quillen model structure as well as the categorically flavoured Joyal model structure, and so they play two rather different roles. The close connection between dendroidal sets and simplicial sets raises the question of whether there is a topologically flavoured interpretation of dendroidal sets as well. This problem is open for debate and interpretation and is certainly far from settled.
We remark first that there is some indication that suggests dendroidal sets do carry topological meaning. Recall the Dold-Kan correspondence that establishes an equivalence of categories between the category $sAb$ of simplicial abelian groups and the category $Ch$ of non-negatively graded chain complexes. This correspondence is useful in the calculation of homotopy groups of simplicial sets and in the definition of Eilenberg-Mac Lane spaces $K(G,n)$ for $n>1$. In \cite{den Dold-Kan} it is shown that there is a planar dendroidal version (where one considers a planar version of $\Omega$ whose objects are planar trees) of the Dold-Kan correspondence. The equivalence is between the category $dAb$ of planar dendroidal abelian groups and the category $dCh$ of planar dendroidal chain complexes. The definition of the latter requires that for each face map $\partial_{\alpha}$ between planar trees there is associated a sign $sgn(\partial_{\alpha})\in\{\pm1\}$ such that the following holds. In the planar version of $\Omega$ it is still true that a face $S\to T$ of codimension $2$ decomposes in precisely two ways as the composition of two faces (see Proposition \ref{pro:Sub-faceOfASub-Face} above). Thus we can write $S\to T$ as $\partial_{\alpha}\circ\partial_{\beta}$ as well as $\partial_{\gamma}\circ\partial_{\delta}$ and we require that $sgn(\partial_{\alpha})\cdot sgn(\partial_{\beta})=-sgn(\partial_{\gamma})\cdot sgn(\partial_{\delta})$. One may now wonder whether these dendroidal chain complexes give rise to some sort of generalized Eilenberg-Mac Lane spaces. A first step towards answering this question should be a clearer specification of goals in a broad context, which is the aim of the rest of this section.
As inspiration we consider the Quillen equivalence between topological spaces and simplicial sets mentioned above. The geometric realization plays there a prominent role and thus a significant aspect of understanding the homotopy behind dendroidal sets is to find a category $dTop$ together with functors $Sing_{d}:dTop\to dSet$ and $|-|_{d}:dSet\to dTop$. The category $dTop$ of course has to be chosen with care so that it will rightfully be considered to be related to topology. We thus expect that there is a fully faithful functor $h_{!}:Top\to dTop$ with a right adjoint $h^{*}:dTop\to Top$ that should be defined 'purely topologically'. We thus expect $dTop$ to be a category of some generalized topological spaces in which ordinary topological spaces embed via $h_{!}$. To allow sufficient flexibility for working with these objects we expect that $dTop$ be small complete and small cocomplete. Moreover, the functor
$|-|_{d}:dSet\to dTop$ should send a dendroidal set $X$ to some generalized space $|X|_{d}$ in such a way that the combinatorial information in $X$ is not lost. We thus expect of any such functor $|-|_{d}$
that if for some $f:X\to Y$ the map $|f|_{d}$ is an isomorphism then $f$ was already an isomorphism. In other words we expect $|-|_{d}$ to be conservative.
The term 'purely topologically' above is of course vague and open to discussion. In an attempt to formalize it recall the various slicing lemmas we have seen above: Slicing symmetric operads over $\star$ gives categories, slicing dendroidal sets over $\Omega[\eta]=N_{d}(\star)$ gives simplicial sets, and slicing $\Omega$ over $\eta$ gives $\Delta$. We thus expect that there is an object $\star\in dTop$ such that slicing $dTop$ over $\star$ gives a category equivalent to $Top$ and that in fact the embedding $h_{!}:Top\to dTop$ is essentially the forgetful functor $dTop/\star\to dTop$. Moreover, noting that the `correct' tensor product of dendroidal sets is not the cartesian one, we expect $dTop$ to possess a monoidal structure different from the cartesian product. And, just as the tensor product of dendroidal sets slices to the cartesian product of simplicial sets, we expect the monoidal structure on $dTop$ to slice to the cartesian product of topological spaces. Lastly, an important property of the ordinary geometric realization functor is that it commutes with finite products. We expect the dendroidal geometric realization functor $dSet\to dTop$ to be monoidal with respect to the non-cartesian monoidal structure on each category.
We summarize our expectations in the following formulation. \begin{problem}
Find a category $dTop$ together with a functor $Sing_{d}:dTop\to dSet$, a left adjoint $|-|_{d}:dSet\to dTop$, and an object $\star\in dTop_{0}$ such that:\end{problem} \begin{enumerate} \item $dTop$ is small complete and small cocomplete. \item (Slicing lemma) $dTop/\star$ is equivalent to $Top$. \item The forgetful functor $h_{!}:Top\to dTop$ is an embedding.
\item Slicing $Sing_{d}$ gives $Sing$ and slicing $|-|_{d}$ gives $|-|$.
\item $|-|_{d}$ is conservative. \item $dTop$ admits a non-cartesian monoidal structure that slices over $\star$ to the cartesian product in $Top$ (along $h_{!}$).
\item The functor $|-|_{d}$ is monoidal with respect to the tensor structures on $dSet$ and $dTop$. \end{enumerate} We would thus obtain the diagram\[
\xymatrix{sSet\ar@<2pt>[r]^{|-|}\ar@<-2pt>[d]_{i_{!}} & Top\ar@<2pt>[l]^{Sing}\ar@<-2pt>[d]_{h_{!}}\\
dSet\ar@<-2pt>[u]_{i^{*}}\ar@<2pt>[r]^{|-|_{d}} & dTop\ar@<2pt>[l]^{Sing_{d}}\ar@<-2pt>[u]_{h^{*}}} \] where both squares commute.
The quest will be complete with the establishment of Quillen model structures on $dSet$ and $dTop$ that slice respectively to the standard (topological) ones on $sSet$ and $Top$ and such that in the square above all adjunctions are Quillen adjunctions with both horizontal ones Quillen equivalences.
\end{document} |
\begin{document}
\title{\bf The Hamiltonian BVMs (HBVMs) Homepage\thanks{
Work developed within the project {\em Numerical Methods and Software for
Differential Equations}.}}
\chapter*{Preface} {\em Hamiltonian Boundary Value Methods} (in short, {\em HBVMs}) is a new class of numerical methods for the efficient numerical solution of canonical Hamiltonian systems. In particular, their main feature is that of exactly preserving, for the numerical solution, the value of the Hamiltonian function, when the latter is a polynomial of arbitrarily high degree.
Clearly, this fact implies a practical conservation of any analytic Hamiltonian function.
In these notes, we collect the introductory material on HBVMs contained in the {\em HBVMs Homepage}, available at the url:
\centerline{\tt http://web.math.unifi.it/users/brugnano/HBVM/index.html}
The notes are organized as follows:
\begin{itemize} \item Chapter 1: Basic Facts about HBVMs
\item Chapter 2: Numerical Tests
\item Chapter 3: Infinity HBVMs
\item Chapter 4: Isospectral Property of HBVMs and their connections with Runge-Kutta collocation methods
\item Chapter 5: Blended HBVMs
\item Chapter 6: Notes and References
\item Bibliography
\end{itemize}
\chapter{Basic Facts about HBVMs}\label{chap1}
We consider Hamiltonian problems in the form \begin{equation}\label{hamilode} \dot y(t) = J\nabla H(y(t)), \qquad y(t_0) = y_0\in\mathbb R^{2m}, \end{equation}
\noindent where $J$ is a skew-symmetric constant matrix, and the Hamiltonian $H(y)$ is assumed to be sufficiently differentiable. Usually, $$J = \left( \begin{array} {rr} & I_m\\ -I_m \end{array} \right) , \qquad y = \left( \begin{array} {c}q\\p \end{array} \right) , \quad q,p\in\mathbb R^m,$$ so that (\ref{hamilode}) assumes the form $$\dot q = \nabla_p H(q,p), \qquad \dot p = -\nabla_q H(q,p).$$
\noindent The induced dynamical system is characterized by the presence of invariants of motion, among which the Hamiltonian itself: $$\dot H(y(t)) = \nabla H(y(t))^T \dot y(t) = \nabla H(y(t))^TJ\nabla H(y(t)) = 0,$$
\noindent due to the fact that $J$ is skew-symmetric. Such a property is usually lost when numerically solving problem (\ref{hamilode}). This drawback can be overcome by using Hamiltonian BVMs (hereafter, HBVMs).
The key formula on which HBVMs rely is the {\em line integral} and the related property of conservative vector fields: \begin{equation}\label{Hy} H(y_1) - H(y_0) = h\int_0^1 {\dot\sigma}(t_0+\tau h)^T\nabla H(\sigma(t_0+\tau h))\mathrm{d}\tau, \end{equation}
\noindent for any $y_1 \in \mathbb R^{2m}$, where $\sigma$ is any smooth function such that \begin{equation} \label{sigma}\sigma(t_0) = y_0, \qquad\sigma(t_0+h) = y_1. \end{equation}
\noindent Here we consider the case where $\sigma(t)$ is a polynomial of degree $s$, yielding an approximation to the true solution $y(t)$ in the time interval $[t_0,t_0+h]$. The numerical approximation for the subsequent time-step, $y_1$, is then defined by (\ref{sigma}). After introducing a set of $s$ distinct abscissae \begin{equation}\label{ci}0<c_{1},\ldots ,c_{s}\le1,\end{equation}
\noindent we set \begin{equation}\label{Yi}Y_i=\sigma(t_0+c_i h), \qquad i=1,\dots,s,\end{equation}
\noindent so that $\sigma(t)$ may be thought of as an interpolation polynomial, interpolating the {\em fundamental stages} $Y_i$, $i=1,\dots,s$. We observe that, due to (\ref{sigma}), $\sigma(t)$ also interpolates the initial condition $y_0$.
\begin{rem}\label{c0} Sometimes, the interpolation at $t_0$ is explicitly required. In such a case, the extra abscissa $c_0=0$ is formally added to (\ref{ci}). This is the case, for example, of a Lobatto distribution of the abscissae \cite{brugnano09bit}.\end{rem}
Let us consider the following expansions of $\dot \sigma(t)$ and $\sigma(t)$ for $t\in [t_0,t_0+h]$: \begin{equation} \label{expan} \dot \sigma(t_0+\tau h) = \sum_{j=1}^{s} \gamma_j P_j(\tau), \qquad \sigma(t_0+\tau h) = y_0 + h\sum_{j=1}^{s} \gamma_j \int_{0}^\tau P_j(x)\,\mathrm{d} x, \end{equation}
\noindent where $\{P_j(t)\}$ is a suitable basis of the vector space of polynomials of degree at most $s-1$ and the (vector) coefficients $\{\gamma_j\}$ are to be determined. Because of the arguments in \cite{brugnano09bit,BIT09,BIT10}, we shall consider an {\bf orthonormal basis} of polynomials on the interval $[0,1]$, i.e.: \begin{equation}\label{orto}\int_0^1 P_i(t)P_j(t)\mathrm{d} t = \delta_{ij}, \qquad i,j=1,\dots,s,\end{equation}
\noindent where $\delta_{ij}$ is the Kronecker symbol, and $P_i(t)$ has degree $i-1$. Such a basis can be readily obtained as \begin{equation}\label{orto1}P_i(t) = \sqrt{2i-1}\,\hat P_{i-1}(t), \qquad i=1,\dots,s,\end{equation} with $\hat P_{i-1}(t)$ the shifted Legendre polynomial, of degree $i-1$, on the interval $[0,1]$.
\begin{rem}\label{recur}
From the properties of shifted Legendre polynomials (see, e.g., \cite{AS} or the Appendix in \cite{brugnano09bit}), one readily obtains that the polynomials $\{P_j(t)\}$ satisfy the three-term recurrence: \begin{eqnarray*} P_1(t)&\equiv& 1, \qquad P_2(t) = \sqrt{3}(2t-1),\\ P_{j+2}(t) &=& (2t-1)\frac{2j+1}{j+1} \sqrt{\frac{2j+3}{2j+1}} P_{j+1}(t) -\frac{j}{j+1}\sqrt{\frac{2j+3}{2j-1}} P_j(t), \quad j\ge1. \end{eqnarray*} \end{rem}
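For the reader's convenience, the following short Python sketch (with hypothetical names; it is only meant to illustrate the computation and is not part of the original formulation) evaluates the orthonormal basis $\{P_j\}$ at a given set of points by means of the above recurrence, and checks the orthonormality conditions (\ref{orto}) numerically:
\begin{verbatim}
import numpy as np

def legendre_basis(t, s):
    """Evaluate P_1,...,P_s (the orthonormal shifted Legendre basis on
    [0,1]) at the points t, via the three-term recurrence above.
    Returns an array of shape (len(t), s) with entries P_j(t_i)."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    P = np.zeros((t.size, s))
    P[:, 0] = 1.0                                 # P_1(t) = 1
    if s > 1:
        P[:, 1] = np.sqrt(3.0) * (2.0 * t - 1.0)  # P_2(t) = sqrt(3)(2t-1)
    for j in range(1, s - 1):                     # builds P_{j+2}
        P[:, j + 1] = ((2.0 * t - 1.0) * (2*j + 1) / (j + 1)
                       * np.sqrt((2*j + 3) / (2*j + 1)) * P[:, j]
                       - j / (j + 1) * np.sqrt((2*j + 3) / (2*j - 1)) * P[:, j - 1])
    return P

# quick check of the orthonormality conditions on [0,1]:
x, w = np.polynomial.legendre.leggauss(20)        # Gauss rule on [-1,1]
tau, omega = (x + 1) / 2, w / 2                   # shifted to [0,1]
G = legendre_basis(tau, 4).T @ np.diag(omega) @ legendre_basis(tau, 4)
# G is numerically the 4 x 4 identity matrix
\end{verbatim}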
We shall also assume that $H(y)$ is a polynomial, which implies that the integrand in \eqref{Hy} is also a polynomial so that the line integral can be exactly computed by means of a suitable quadrature formula. In general, however, due to the high degree of the integrand function, such quadrature formula cannot be solely based upon the available abscissae $\{c_i\}$: one needs to introduce an additional set of abscissae $\{\hat c_1, \dots,\hat c_r\}$, distinct from the nodes $\{c_i\}$, in order to make the quadrature formula exact: \begin{eqnarray} \label{discr_lin} \displaystyle \lefteqn{\int_0^1 {\dot\sigma}(t_0+\tau h)^T\nabla H(\sigma(t_0+\tau h))\mathrm{d}\tau =}\\ && \sum_{i=1}^s \beta_i {\dot\sigma}(t_0+c_i h)^T\nabla H(\sigma(t_0+c_i h)) + \sum_{i=1}^r \hat \beta_i {\dot\sigma}(t_0+\hat c_i h)^T\nabla H(\sigma(t_0+\hat c_i h)), \nonumber \end{eqnarray}
\noindent where $\beta_i$, $i=1,\dots,s$, and $\hat \beta_i$, $i=1,\dots,r$, denote the weights of the quadrature formula corresponding to the abscissae $\{c_i\}$ and $\{\hat c_i\}$, respectively, i.e., \begin{eqnarray}\nonumber \beta_i &=& \int_0^1\left(\prod_{ j=1,j\ne i}^s \frac{t-c_j}{c_i-c_j}\right)\left(\prod_{j=1}^r \frac{t-\hat c_j}{c_i-\hat c_j}\right)\mathrm{d} t, \qquad i = 1,\dots,s,\\ \label{betai}\\ \nonumber \hat\beta_i &=& \int_0^1\left(\prod_{ j=1}^s \frac{t-c_j}{\hat c_i-c_j}\right)\left(\prod_{ j=1,j\ne i}^r \frac{t-\hat c_j}{\hat c_i-\hat c_j}\right)\mathrm{d} t, \qquad i = 1,\dots,r. \end{eqnarray}
\begin{rem}\label{c01} In the case considered in the previous Remark~\ref{c0}, i.e. when $c_0=0$ is formally considered together with the abscissae (\ref{ci}), the first product in each formula in (\ref{betai}) ranges from $j=0$ to $s$. Moreover, the range of $\{\beta_i\}$ also becomes $i=0,1,\dots,s$. However, for the sake of simplicity, we shall not consider this case further. \end{rem}
According to \cite{IT2}, the right-hand side of \eqref{discr_lin} is called \textit{discrete line integral}, while the vectors \begin{equation}\label{hYi} \hat Y_i = \sigma(t_0+\hat c_i h), \qquad i=1,\dots,r, \end{equation}
\noindent are called \textit{silent stages}: they just serve to increase, as much as one likes, the degree of precision of the quadrature formula, but they are not to be regarded as unknowns since, from \eqref{expan}, they can be expressed in terms of linear combinations of the \textit{fundamental stages} (\ref{Yi}).
\begin{defn}\label{defhbvmks} The method defined by substituting the quantities in \eqref{expan} into the right-hand side of \eqref{discr_lin}, and by choosing the unknown coefficients $\{\gamma_j\}$ in order that the resulting expression vanishes, is called {\em Hamiltonian Boundary Value Method with $k$ steps and degree $s$}, in short {\em HBVM($k$,$s$)}, where $k=s+r$ \, \cite{brugnano09bit}.\end{defn}
In such a way, one easily obtains, from (\ref{Hy})--(\ref{sigma}), $$H(\sigma(t_0+h)) = H(y_0),$$
\noindent that is, the value of the Hamiltonian is {\em exactly} preserved at the subsequent approximation, provided by $\sigma(t_0+h)$.
In the sequel, we shall see that HBVMs may be expressed through different, though equivalent, formulations: some of them can be directly implemented in a computer program, the others being of more theoretical interest.
Because of the equality \eqref{discr_lin}, we can apply the procedure directly to the original line integral appearing in the left-hand side. With this premise, by considering the first expansion in \eqref{expan}, the conservation property reads \begin{equation} \label{conservation} \sum_{j=1}^{s} \gamma_j^T \int_0^1 P_j(\tau) \nabla H(\sigma(t_0+\tau h))\mathrm{d}\tau=0, \end{equation}
\noindent which, as is easily checked, is certainly satisfied if we impose the following set of orthogonality conditions \begin{equation} \label{orth} \gamma_j = \int_0^1 P_j(\tau) J \nabla H(\sigma(t_0+\tau h))\mathrm{d}\tau, \qquad j=1,\dots,s. \end{equation}
\noindent Then, from the second relation of \eqref{expan} we obtain, by introducing the operator \begin{eqnarray}\label{Lf}\lefteqn{L(f;h)\sigma(t_0+ch) =}\\ \nonumber && \sigma(t_0)+h\sum_{j=1}^s \int_0^c P_j(x) \mathrm{d} x \, \int_0^1 P_j(\tau)f(\sigma(t_0+\tau h))\mathrm{d}\tau,\qquad c\in[0,1],\end{eqnarray}
\noindent that $\sigma$ is the eigenfunction of $L(J\nabla H;h)$ relative to the eigenvalue $\lambda=1$: \begin{equation}\label{L}\sigma = L(J\nabla H;h)\sigma.\end{equation}
\begin{defn} Equation (\ref{L}) is the {\em Master Functional Equation} defining $\sigma$ ~\cite{BIT09}.\end{defn}
\begin{rem}\label{MFE}
From the previous arguments, one readily obtains that the Master Functional Equation (\ref{L}) characterizes HBVM$(k,s)$ methods, for all $k\ge1$. Indeed, such methods are uniquely defined by the polynomial $\sigma$, of degree $s$, the number of steps $k$ being only required to obtain an exact quadrature formula (see (\ref{discr_lin})).\end{rem}
To practically compute $\sigma$, we set (see (\ref{Yi}) and (\ref{expan})) \begin{equation} \label{y_i} Y_i= \sigma(t_0+c_i h) = y_0+ h\sum_{j=1}^{s} a_{ij} \gamma_j, \qquad i=1,\dots,s, \end{equation}
\noindent where \begin{equation}\label{aij} a_{ij}=\int_{0}^{c_i} P_j(x) \mathrm{d} x, \qquad i,j=1,\dots,s.\end{equation}
\noindent Inserting \eqref{orth} into \eqref{y_i} yields the final formulae which define the HBVMs class based upon the orthonormal basis $\{P_j\}$: \begin{equation} \label{hbvm_int} Y_i=y_0+h\sum_{j=1}^s a_{ij} \int_0^1 P_j(\tau) J \nabla H(\sigma(t_0+\tau h))\mathrm{d}\tau, \qquad i=1,\dots,s. \end{equation}
For sake of completeness, we report the nonlinear system associated with the HBVM$(k,s)$ method, in terms of the fundamental stages $\{Y_i\}$ and the silent stages $\{\hat Y_i\}$ (see (\ref{hYi})), by using the notation \begin{equation}\label{fy} f(y) = J \nabla H(y). \end{equation}
\noindent In this context, it represents the discrete counterpart of \eqref{hbvm_int}, and may be directly retrieved by evaluating, for example, the integrals in \eqref{hbvm_int} by means of the (exact) quadrature formula introduced in \eqref{discr_lin}: \begin{eqnarray}\label{hbvm_sys} \lefteqn{ Y_i =}\\ && y_0+h\sum_{j=1}^s a_{ij}\left( \sum_{l=1}^s \beta_l P_j(c_l)f(Y_l) + \sum_{l=1}^r\hat \beta_l P_j(\hat c_l) f(\widehat Y_l) \right),\quad i=1,\dots,s.\nonumber \end{eqnarray}
\noindent From the above discussion it is clear that, in the non-polynomial case, provided the abscissae $\{\hat c_i\}$ are chosen so that the sums in (\ref{hbvm_sys}) converge to an integral as $r=k-s\rightarrow\infty$, the resulting formula is \eqref{hbvm_int}. This implies that HBVMs may be applied in the non-polynomial case as well since, in finite precision arithmetic, HBVMs are indistinguishable from their limit formulae \eqref{hbvm_int}, when a sufficient number of silent stages is introduced. The aspect of having a {\em practical} exact integral, for $k$ large enough, was already stressed in \cite{BIS, brugnano09bit, BIT09, IP1, IT2}.
We emphasize that, in the non-polynomial case, \eqref{hbvm_int} becomes an operative method only after a suitable strategy to approximate the integral is specified. In the present case, if one discretizes the {\em Master Functional Equation} (\ref{Lf})--(\ref{L}), HBVM$(k,s)$ are then obtained, essentially by extending the discrete problem (\ref{hbvm_sys}) also to the silent stages (\ref{hYi}). In order to simplify the exposition, we shall use (\ref{fy}) and introduce the following notation: \begin{eqnarray}\nonumber \{\tau_i\} = \{c_i\} \cup \{\hat{c}_i\}, && \{\omega_i\}=\{\beta_i\}\cup\{\hat\beta_i\},\\ \label{tiyi}\\ \nonumber y_i = \sigma(t_0+\tau_ih), && f_i = f(\sigma(t_0+\tau_ih)), \qquad i=1,\dots,k. \end{eqnarray}
\noindent The discrete problem defining the HBVM$(k,s)$ then becomes, \begin{equation}\label{hbvmks} y_i = y_0 + h\sum_{j=1}^s \int_0^{\tau_i} P_j(x)\mathrm{d} x \sum_{\ell=1}^k \omega_\ell P_j(\tau_\ell)f_\ell, \qquad i=1,\dots,k. \end{equation}
\begin{rem}\label{ecc} We also observe that, from (\ref{orth}) and the first relation in (\ref{expan}), one obtains the equations \begin{equation}\label{ecceq} \dot\sigma(t_0+\tau_ih) = \sum_{j=1}^s
P_j(\tau_i) \int_0^1 P_j(\tau)J\nabla H(\sigma(t_0+\tau h))\mathrm{d}\tau, \qquad i=1,\dots,k, \end{equation}
\noindent which may be viewed as {\em extended collocation conditions} according to \cite[Section\,2]{IT2}, where the integrals are (exactly) replaced by discrete sums. \end{rem}
By introducing the vectors $$\boldsymbol{y} = (y_1^T,\dots,y_k^T)^T, \qquad e=(1,\dots,1)^T\in\mathbb R^k,$$ and the matrices \begin{equation}\label{OIP}\Omega={\rm diag}(\omega_1,\dots,\omega_k), \qquad {\cal I}_s,~{\cal P}_s\in\mathbb R^{k\times s},\end{equation}
\noindent whose $(i,j)$th entries are given by \begin{equation}\label{IDPO} ({\cal I}_s)_{ij} = \int_0^{\tau_i} P_j(x)\mathrm{d} x, \qquad ({\cal P}_s)_{ij}=P_j(\tau_i), \end{equation}
\noindent we can cast the set of equations (\ref{hbvmks}) in vector form as \begin{equation}\label{rk0} \boldsymbol{y} = e\otimes y_0 + h({\cal I}_s {\cal P}_s^T\Omega)\otimes I_{2m}\, f(\boldsymbol{y}),\end{equation}
\noindent with an obvious meaning of $f(\boldsymbol{y})$. Consequently, the method can be seen as a Runge-Kutta method with the following Butcher tableau: \begin{equation}\label{rk}
\begin{array}{c|c}\begin{array}{c} \tau_1\\ \vdots\\ \tau_k\end{array} & {\cal I}_s {\cal P}_s^T\Omega\\
\hline &\omega_1\, \dots~ \omega_k
\end{array}\end{equation}
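As an illustration, the Butcher matrix $A={\cal I}_s{\cal P}_s^T\Omega$ in (\ref{rk}) can be assembled numerically as in the following sketch (hypothetical names, not part of the original formulation; it reuses the {\tt legendre\_basis} helper sketched above and chooses, for definiteness, the $k$ Gauss-Legendre abscissae on $[0,1]$):
\begin{verbatim}
import numpy as np

def hbvm_tableau(k, s):
    """Sketch of the Butcher data (tau, omega, A), A = I_s P_s^T Omega,
    of HBVM(k,s) with the k Gauss-Legendre abscissae on [0,1] (any set of
    abscissae whose quadrature satisfies B(2s) would do as well)."""
    x, w = np.polynomial.legendre.leggauss(k)
    tau, omega = (x + 1) / 2, w / 2               # nodes/weights on [0,1]
    P = legendre_basis(tau, s)                    # (P_s)_{ij} = P_j(tau_i)
    # entries of I_s: int_0^{tau_i} P_j(x) dx, via an s-point Gauss rule
    xg, wg = np.polynomial.legendre.leggauss(s)
    I = np.array([(ti / 2) * (wg @ legendre_basis(ti * (xg + 1) / 2, s))
                  for ti in tau])
    A = I @ P.T @ np.diag(omega)
    return tau, omega, A
\end{verbatim}
In particular, for $k=s$ the sketch returns the tableau of the Gauss-Legendre method of order $2s$, in accordance with Example~\ref{gaussex} below.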
\begin{rem}\label{ascisse} We observe that, because of the use of an orthonormal basis, the role of the abscissae $\{c_i\}$ and of the silent abscissae $\{\hat c_i\}$ is interchangeable, within the set $\{\tau_i\}$. This is due to the fact that all the matrices ${\cal I}_s$, ${\cal P}_s$, and $\Omega$ depend on all the abscissae $\{\tau_i\}$, and not on a subset of them and, moreover, they are invariant with respect to the choice of the fundamental abscissae $\{c_i\}$. \end{rem}
The following result then holds true.
\begin{theo}\label{ordine} Provided that the quadrature defined by the weights $\{\omega_i\}$ has order at least $2s$ (i.e., it is exact for polynomials of degree at least $2s-1$), HBVM($k$,$s$) has order $p=2 s\equiv 2\deg(\sigma)$, whatever the choice of the abscissae $c_1,\dots,c_s$. \end{theo}
\begin{proof} From the classical result of Butcher (see, e.g., \cite[Theorem\,7.4]{HNW}), the claim follows if the usual simplifying assumptions $C(s)$, $B(p)$, $p\ge 2s$, and $D(s-1)$ are satisfied for the Runge-Kutta method defined by the tableau (\ref{rk}). By looking at the method (\ref{rk0})--(\ref{rk}), one has that the first two (i.e., $C(s)$ and $B(p)$, $p\ge 2s$) are obviously fulfilled: the former by the definition of the method, the latter by hypothesis. The proof is then completed if we prove $D(s-1)$. Such a condition can be cast in matrix form, by introducing the vector $\bar{e}=(1,\dots,1)^T\in\mathbb R^{s-1}$, and the matrices $$Q={\rm diag}(1,\dots,s-1),\qquad D={\rm diag}(\tau_1,\dots,\tau_k),\qquad V=(\tau_i^{j-1})\in\mathbb R^{k\times s-1},$$
\noindent (see also (\ref{IDPO})) as $$Q V^T\Omega\left({\cal I}_s{\cal P}_s^T\Omega\right) = \left(\bar{e}\,e^T -V^TD\right)\Omega,$$ i.e., \begin{equation}\label{finito} {\cal P}_s{\cal I}_s^T\Omega V Q = \left(e\,\bar{e}^T -DV\right). \end{equation}
\noindent Since the quadrature is exact for polynomials of degree $2s-1$, one has \begin{eqnarray*} \left({\cal I}_s^T\Omega VQ\right)_{ij} &=& \left( \sum_{\ell=1}^k \omega_\ell \int_0^{\tau_\ell} P_i(x)\mathrm{d} x\,(j \tau_\ell^{j-1}) \right) = \left(\int_0^1 \, \int_0^t P_i(x)\mathrm{d} x (jt^{j-1})\mathrm{d} t\right) \\&=& \left( \delta_{i1}-\int_0^1P_i(x)x^j\mathrm{d} x\right), \qquad i = 1,\dots,s,\quad j=1,\dots,s-1,\end{eqnarray*}
\noindent where the last equality is obtained by integrating by parts, with $\delta_{i1}$ the Kronecker symbol. Consequently, \begin{eqnarray*}\left({\cal P}_s{\cal I}_s^T \Omega V Q\right)_{ij} &=& \left(1 - \sum_{\ell=1}^s P_\ell(\tau_i)\int_0^1 P_\ell(x) x^j\mathrm{d} x \right)\\ &=& (1-\tau_i^j), \qquad i=1,\dots,k,\quad j=1,\dots,s-1,\end{eqnarray*}
\noindent that is, (\ref{finito}), where the last equality follows from the fact that $$\sum_{\ell=1}^s P_\ell(\tau)\int_0^1 P_\ell(x) x^j\mathrm{d} x = \tau^j, \qquad j=1,\dots,s-1.~\mbox{$\Box$}$$ \end{proof}
Concerning the stability of the methods, the following result holds true.
\begin{theo}\label{stab} For all $k$ such that the quadrature formula has order at least $2s\equiv 2\deg(\sigma)$, HBVM($k$,$s$) is perfectly $A$-stable,\footnote{That is, its region of Absolute stability precisely coincides with the left-half complex plane, $\mathbb C^-$.} whatever the choice of the abscissae $c_1,\dots,c_s$. \end{theo}
\begin{proof} As it has been previously observed, a HBVM$(k,s)$ is fully characterized by the corresponding polynomial $\sigma$ which, for $k$ sufficiently large (i.e., assuming that (\ref{discr_lin}) holds true), satisfies the {\em Master Functional Equation} (\ref{Lf})--(\ref{L}), which is independent of the choice of the nodes $c_1,\dots,c_s$ (since we consider an orthonormal basis). When, in place of $f(y)=J\nabla H(y)$, we put the test equation $f(y)=\lambda y$, we have that the collocation polynomial of the Gauss-Legendre method of order $2s$, say $\sigma_s$, satisfies the {\em Master Functional Equation}, since the integrands appearing in it are polynomials of degree at most $2s-1$, so that $\sigma=\sigma_s$. The proof is completed by observing that Gauss-Legendre methods are perfectly $A$-stable.~\mbox{$\Box$} \end{proof}
\begin{ex}\label{lobex} As an example, for the methods studied in \cite{brugnano09bit}, based on a Lobatto distribution of the nodes $\{c_0=0,c_1,\dots,c_s\}\cup\{\hat{c}_1,\dots,\hat{c}_{k-s}\}$, one has that $\deg(\sigma)=s$, so that the order of HBVM($k$,$s$) turns out to be $2s$, with a quadrature satisfying $B(2k)$. Finally, we observe that, with such a choice of the abscissae, HBVM$(s,s)$ reduces to the Lobatto IIIA method of order $2s$.\end{ex}
\begin{ex}\label{gaussex} For the same reason, when one considers a Gauss distribution for the abscissae $\{c_1,\dots,c_s\}\cup\{\hat{c}_1,\dots,\hat{c}_{k-s}\}$, as done in \cite{BIT09}, one also obtains a method of order $2s$ with a quadrature satisfying $B(2k)$. Similarly as in the previous example, HBVM$(s,s)$ now reduces to the Gauss-Legendre method of order $2s$. \end{ex}
\begin{rem}\label{symrem} A number of remarks are in order, to emphasize relevant features of HBVM$(k,s)$: \begin{itemize}
\item From Remark~\ref{ascisse}, HBVM($k$,$s$) are {\em symmetric methods} according to the {\em time reversal symmetry condition} defined in \cite[p.\,218]{BT} (see also \cite{BT09}), provided that the abscissae $\{\tau_i\}$ (see (\ref{tiyi})) are symmetrically distributed ~\cite{brugnano09bit}.
\item By virtue of Theorems~\ref{ordine} and \ref{stab}, all methods in Examples~\ref{lobex} and \ref{gaussex} are symmetric, perfectly $A$-stable, and of order $2s$. In particular such HBVM$(k,s)$ are exact for polynomial Hamiltonian functions of degree $\nu$, provided that \begin{equation}\label{knu} k\ge \frac{\nu s}2.\end{equation}
\item For all $k$ sufficiently large so that (\ref{discr_lin}) holds, HBVM$(k,s)$ based on the $k$ Gauss-Legendre abscissae in $[0,1]$ are {\em equivalent} to HBVM$(k,s)$ based on $k+1$ Lobatto abscissae in $[0,1]$ (see \cite{BIT09}), since both methods define the same polynomial $\sigma$ of degree $s$ (i.e., they satisfy the same Master Functional Equation (\ref{L})--(\ref{Lf})).
\end{itemize} \end{rem}
\chapter{Numerical Tests}\label{chap2}
We here collect a few numerical tests, in order to highlight the potential of HBVMs \cite{BIS1,brugnano09bit,BIT09}.
\section*{Test problem 1} Let us consider the problem characterized by the polynomial Hamiltonian (4.1) in \cite{Faou}, \begin{equation}\label{fhp} H(p,q) = \frac{p^3}3 -\frac{p}2 +\frac{q^6}{30} +\frac{q^4}4 -\frac{q^3}3 +\frac{1}6, \end{equation}
\noindent having degree $\nu=6$, starting at the initial point $y_0\equiv (q(0),p(0))^T=(0,1)^T$, so that $H(y_0)=0$. For such a problem, a numerical drift in the discrete Hamiltonian has been observed in \cite{Faou} when using the fourth-order Lobatto IIIA method with stepsize $h=0.16$, as confirmed by the plot in Figure~\ref{faoufig0}. When using the fourth-order Gauss-Legendre method the drift disappears, even though the Hamiltonian is not exactly preserved along the discrete solution, as is confirmed by the plot in Figure~\ref{faoufig}. On the other hand, by using the fourth-order HBVM(6,2) with the same stepsize, the Hamiltonian turns out to be preserved up to machine precision, as shown in Figure~\ref{faoufig1}, since such a method exactly preserves polynomial Hamiltonians of degree up to 6. In such a case, according to the last item in Remark~\ref{symrem}, the numerical solutions obtained by using the Lobatto nodes $\{c_0=0,c_1,\dots,c_6=1\}$ or the Gauss-Legendre nodes $\{c_1,\dots,c_6\}$ are the same. The fourth-order convergence of the method is numerically verified by the results listed in Table~\ref{tp1}.
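For illustration purposes only, the following Python sketch (hypothetical names, reusing the {\tt hbvm\_tableau} helper sketched in Chapter~\ref{chap1}) applies HBVM(6,2) to problem (\ref{fhp}) by solving the Runge-Kutta form (\ref{rk0}) with a plain fixed-point iteration; under these assumptions, one expects the error in the Hamiltonian to remain of the order of the machine precision:
\begin{verbatim}
import numpy as np

# Hamiltonian of Test problem 1, with y = (q, p)
H = lambda y: y[1]**3/3 - y[1]/2 + y[0]**6/30 + y[0]**4/4 - y[0]**3/3 + 1/6
grad_H = lambda y: np.array([y[0]**5/5 + y[0]**3 - y[0]**2,   # dH/dq
                             y[1]**2 - 0.5])                  # dH/dp
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
f = lambda y: J @ grad_H(y)

def hbvm_step(y0, h, tau, omega, A, iters=100):
    """One step of the RK form y = e otimes y0 + h (A otimes I) f(y),
    solved by a plain fixed-point iteration (adequate for this stepsize)."""
    Y = np.tile(y0, (tau.size, 1))
    for _ in range(iters):
        Y = y0 + h * (A @ np.array([f(Yi) for Yi in Y]))
    return y0 + h * (omega @ np.array([f(Yi) for Yi in Y]))

tau, omega, A = hbvm_tableau(6, 2)   # HBVM(6,2): exact up to degree 6
y, h = np.array([0.0, 1.0]), 0.16    # H(y0) = 0
for n in range(200):
    y = hbvm_step(y, h, tau, omega, A)
print(abs(H(y)))                     # expected of the order of machine precision
\end{verbatim}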
\begin{figure}
\caption{\protect Fourth-order Lobatto IIIA method, $h=0.16$, problem (\ref{fhp}): drift in the Hamiltonian.}
\caption{\protect Fourth-order Gauss-Legendre method, $h=0.16$, problem (\ref{fhp}): $H\approx 10^{-6}$.}
\caption{\protect Fourth-order HBVM(6,2) method, $h=0.16$, problem (\ref{fhp}): $H\approx 10^{-16}$.}
\label{faoufig0}
\label{faoufig}
\label{faoufig1}
\end{figure}
\section*{Test problem 2} The second test problem, having a highly oscillating solution, is the Fermi-Pasta-Ulam problem (see \cite[Section\,I.5.1]{hairer06gni}), modelling a chain of $2m$ mass points connected with alternating soft nonlinear and stiff linear springs, and fixed at the end points. The variables $q_1,\dots,q_{2m}$ stand for the displacements of the mass points, and $p_i = \dot q_i$ for their velocities. The corresponding Hamiltonian, representing the total energy, is \begin{equation}\label{fpu} H(p,q) = \frac{1}2\sum_{i=1}^m\left(p_{2i-1}^2+p_{2i}^2\right) +\frac{\omega^2}4\sum_{i=1}^m\left(q_{2i}-q_{2i-1}\right)^2 +\sum_{i=0}^m\left(q_{2i+1}-q_{2i}\right)^4, \end{equation}
\noindent with $q_0=q_{2m+1}=0$. In our simulation we have used the following values: $m=3$, $\omega=50$, and starting vector $$p_i=0, \quad q_i = (i-1)/10, \qquad i=1,\dots,6.$$
\noindent In such a case, the Hamiltonian function is a polynomial of degree 4, so that the fourth-order HBVM(4,2) method, either when using the Lobatto nodes or the Gauss-Legendre nodes, is able to exactly preserve the Hamiltonian, as confirmed by the plot in Figure~\ref{fpufig1}, obtained with stepsize $h=0.05$. Conversely, by using the same stepsize, both the fourth-order Lobatto IIIA and Gauss-Legendre methods provide only an approximate conservation of the Hamiltonian, as shown in the plots in Figures~\ref{fpufig0} and \ref{fpufig}, respectively. The fourth-order convergence of the HBVM(4,2) method is numerically verified by the results listed in Table~\ref{tp2}.
\begin{figure}
\caption{\protect Fourth-order Lobatto IIIA method,
$h=0.05$, problem (\ref{fpu}): $|H-H_0|\approx 10^{-3}$.}
\caption{\protect Fourth-order Gauss-Legendre method, $h=0.05$, problem (\ref{fpu}): $|H-H_0|\approx 10^{-3}$.}
\caption{\protect Fourth-order HBVM(4,2) method,
$h=0.05$, problem (\ref{fpu}): $|H-H_0|\approx 10^{-14}$.}
\label{fpufig0}
\label{fpufig}
\label{fpufig1}
\end{figure}
\section*{Test problem 3 (non-polynomial Hamiltonian)} In the previous examples, the Hamiltonian function was a polynomial. Nevertheless, as observed above, HBVM($k$,$s$) are also expected in this case to produce a {\em practical} conservation of the energy, when applied to systems defined by a non-polynomial Hamiltonian function that can be locally well approximated by a polynomial. As an example, we consider the motion of a charged particle in a magnetic field with Biot-Savart potential.\footnote{\,This kind of motion causes the well-known phenomenon of {\em aurora borealis}.} It is defined by the Hamiltonian \cite{brugnano09bit} \begin{eqnarray}\label{biot} \lefteqn{H(x,y,z,\dot{x},\dot{y},\dot{z}) = }\\&&\frac{1}{2m} \left[ \left(\dot{x}-\alpha\frac{x}{\varrho^2}\right)^2 + \left(\dot{y}-\alpha\frac{y}{\varrho^2}\right)^2 + \left(\dot{z}+\alpha\log(\varrho)\right)^2\right],\nonumber \end{eqnarray}
\noindent with $\varrho=\sqrt{x^2+y^2}$ and $\alpha= e \,B_0$, where $m$ is the particle mass, $e$ is its charge, and $B_0$ is the magnetic field intensity. We have used the values $$m=1, \qquad e=-1, \qquad B_0=1,$$with starting point $$x = 0.5, \quad y = 10, \quad z = 0, \quad \dot{x} = -0.1, \quad \dot{y} = -0.3, \quad \dot{z} = 0.$$
\noindent By using the fourth-order Lobatto IIIA method, with stepsize $h=0.1$, a drift is again experienced in the numerical solution, as is shown in Figure~\ref{biotfig0}. By using the fourth-order Gauss-Legendre method with the same stepsize, the drift disappears even though, as shown in Figure~\ref{biotfig1}, the value of the Hamiltonian is preserved within an error of the order of $10^{-3}$. On the other hand, when using the HBVM(6,2) method with the same stepsize, the error in the Hamiltonian decreases to an order of $10^{-15}$ (see Figure~\ref{biotfig2}), thus giving a practical conservation. Finally, in Table~\ref{tab1} we list the maximum absolute difference between the numerical solutions over $10^3$ integration steps, computed by the HBVM$(k,2)$ methods based on Lobatto abscissae and on Gauss-Legendre abscissae, as $k$ grows, with stepsize $h=0.1$. We observe that the difference tends to 0, as $k$ increases. Moreover, also in this case, one verifies a fourth-order convergence, as the results listed in Table~\ref{tp3} show.
\begin{figure}
\caption{\protect Fourth-order Lobatto IIIA method, $h=0.1$, problem (\ref{biot}): drift in the Hamiltonian.}
\caption{\protect Fourth-order Gauss-Legendre method, $h=0.1$, problem (\ref{biot}): $|H-H_0|\approx 10^{-3}$.}
\caption{\protect Fourth-order HBVM(6,2) method,
$h=0.1$, problem (\ref{biot}): $|H-H_0|\approx 10^{-15}$.}
\label{biotfig0}
\label{biotfig1}
\label{biotfig2}
\end{figure}
\begin{table}[hp] \caption{\protect\label{tp1} Numerical order of convergence for the HBVM(6,2) method, problem (\ref{fhp}).}
\begin{tabular}{|c|lllll|} \hline $h$ & 0.32 & 0.16 & 0.08 & 0.04 & 0.02 \\ \hline error &$2.288\cdot10^{-2}$ &$1.487\cdot10^{-3}$ &$9.398\cdot10^{-5}$ &$5.890\cdot10^{-6}$ &$3.684\cdot10^{-7}$\\ \hline order & -- & 3.94 &3.98 &4.00 &4.00\\ \hline \end{tabular}
\caption{\protect\label{tp2} Numerical order of convergence for the HBVM(4,2) method, problem (\ref{fpu}).}
\begin{tabular}{|c|lllll|} \hline $h$ & $1.6\cdot10^{-2}$ &$8\cdot10^{-3}$ & $4\cdot10^{-3}$ & $2\cdot10^{-3}$ & $10^{-3}$ \\ \hline error & $3.030$ &$1.967\cdot10^{-1}$ &$1.240\cdot10^{-2}$ &$7.761\cdot10^{-4}$ &$4.853\cdot10^{-5}$ \\ \hline order & -- & 3.97& 3.99 &4.00 &4.00\\ \hline \end{tabular}
\caption{\protect\label{tp3} Numerical order of convergence for the HBVM(6,2) method, problem (\ref{biot}).}
\begin{tabular}{|c|lllll|} \hline $h$ & $3.2\cdot10^{-2}$ &$1.6\cdot10^{-2}$ &$8\cdot10^{-3}$ & $4\cdot10^{-3}$ & $2\cdot10^{-3}$ \\ \hline error & $3.944\cdot10^{-6}$ &$2.635\cdot10^{-7}$ &$1.729\cdot10^{-8}$ &$1.094\cdot10^{-9}$ &$6.838\cdot10^{-11}$ \\ \hline order & -- & 3.90& 3.93 &3.98 &4.00\\ \hline \end{tabular}
\caption{\protect\label{tab1} Maximum difference between the numerical solutions obtained through the fourth-order HBVM$(k,2)$
methods based on Lobatto abscissae and Gauss-Legendre abscissae for increasing values of $k$, problem (\ref{biot}), $10^3$ steps with stepsize $h=0.1$.}
\centerline{\begin{tabular}{|r|l|} \hline $k$ & $h=0.1$ \\ \hline 2 & $3.97 \cdot 10^{-1}$ \\ 4 & $2.29 \cdot 10^{-3}$\\ 6 & $2.01 \cdot 10^{-8}$\\ 8 & $1.37 \cdot 10^{-11}$\\ 10 & $5.88 \cdot 10^{-13}$\\ \hline \end{tabular}} \end{table}
\section*{Test problem 4 (Sitnikov problem)}
The main problem in Celestial Mechanics is the so-called $N$-body problem, i.e., to describe the motion of $N$ point particles of positive mass moving under Newton's law of gravitation when we know their positions and velocities at a given time. This problem is described by the Hamiltonian function: \begin{equation} \label{kepler} H(\boldsymbol{q},\boldsymbol{p})=\frac{1}{2} \sum_{i=1}^N
\frac{||p_i||_2^2}{m_i} - G \sum_{i=1}^N m_i
\sum_{j=1}^{i-1}\frac{m_j}{||q_i-q_j||_2}, \end{equation}
\noindent where $q_i$ is the position of the $i$th particle, with mass $m_i$, and $p_i$ is its momentum.
The Sitnikov problem is a particular configuration of the $3$-body dynamics (see, e.g., \cite{James}). In this problem two bodies of equal mass (primaries) revolve about their center of mass, here assumed at the origin, in elliptic orbits in the $xy$-plane. A third, and much smaller body (planetoid), is placed on the $z$-axis with initial velocity parallel to this axis as well.
The third body is small enough that the two-body dynamics of the primaries is not destroyed. Then, the motion of the third body will be restricted to the $z$-axis, oscillating around the origin, though not necessarily periodically. In fact, this problem has been shown to exhibit a chaotic behavior when the eccentricity of the orbits of the primaries exceeds a critical value that, for the data set we have used, is $\bar e \simeq 0.725$ (see Figure \ref{sit_fig1}).
\begin{figure}
\caption{The upper picture displays the configuration of the $3$ bodies in the Sitnikov problem. To an eccentricity $e=0.75$ of the orbits of the primaries there correspond bounded chaotic oscillations of the planetoid, as can be seen from the space-time diagram in the lower picture.}
\label{sit_fig1}
\end{figure}
We have solved the problem defined by the Hamiltonian function \eqref{kepler} by the Gauss method of order 4 (i.e., HBVM(2,2) at 2 Gaussian nodes) and by HBVM(18,2) at 18 Gaussian nodes (order 4, $2$ fundamental and $16$ silent stages), with the following set of parameters in (\ref{kepler}):
\begin{center} \begin{tabular}{ccccccccc} $N$ & $G$ & $m_1$ & $m_2$ & $m_3$ & $e$ & $d$ & $h$ & $t_{\mbox{max}}$ \\[.1cm] \hline \\[-.4cm] $3$ & $1$ & $1$ & $1$ & $10^{-5}$ & $0.75$ & $5$ & $0.5$ & $1500$ \end{tabular} \end{center}
\noindent where $e$ is the eccentricity, $d$ is the distance of the apocentres of the primaries (points at which the two bodies are the furthest), $h$ is the used time-step, and $[0,\,t_{\mbox{max}}]$ is the time integration interval. The eccentricity $e$ and the distance $d$ may be used to define the initial condition $[\boldsymbol{q}_0,\boldsymbol{p}_0]$ (see \cite{James} for the details): $$ \begin{array}{l} \boldsymbol{q}_0 = [-\frac{5}{2},~ 0,~ 0,~ \frac{5}{2},~ 0,~ 0,~ 0,~ 0,~10^{-9}]^T,\\[.1cm] \boldsymbol{p}_0 =[0,~ -\frac{1}{20}\sqrt{10},~ 0,~ 0,~ \frac{1}{20}\sqrt{10},~ 0,~ 0,~ 0,~ \frac{1}{2}]^T. \end{array} $$
First of all, we consider the two pictures in Figure \ref{sit_fig5} reporting the relative errors in the Hamiltonian function and in the angular momentum evaluated along the numerical solutions computed by the two methods. We know that the HBVM(18,2) precisely conserves polynomial Hamiltonian functions of degree at most $18$. This accuracy is high enough to guarantee that the nonlinear Hamiltonian function \eqref{kepler} is conserved up to machine precision as well (see the upper picture): from a geometrical point of view this means that a local approximation of the level curves of \eqref{kepler} by a polynomial of degree $18$ leads to a negligible error. The Gauss method exhibits a certain error in the Hamiltonian function while, since this formula is symplectic, it precisely conserves the angular momentum, as is confirmed by looking at the lower picture of Figure \ref{sit_fig5}. The error in the numerical angular momentum associated with the HBVM(18,2) undergoes some bounded periodic-like oscillations.
\begin{figure}
\caption{Upper picture: relative error $|H(y_n)-H(y_0)|/|H(y_0)|$
of the Hamiltonian function evaluated along the numerical solution of the HBVM($18$,$2$) and the Gauss method. Lower picture: relative error $|M(y_n)-M(y_0)|/|M(y_0)|$ of the angular momentum evaluated along the numerical solution of the HBVM($18$,$2$) and the Gauss method.}
\label{sit_fig5}
\end{figure}
Figures \ref{sit_fig2} and \ref{sit_fig3} show the numerical solution computed by the Gauss method and HBVM(18,2), respectively. Since the methods leave the $xy$-plane invariant for the motion of the primaries and the $z$-axis invariant for the motion of the planetoid, we have just reported the motion of the primaries in the $xy$-phase plane (upper pictures) and the space-time diagram of the planetoid (lower pictures).
\begin{figure}
\caption{The Sitnikov problem solved by the Gauss method of order 4, with stepsize $h=0.5$, in the time interval $[0,1500]$. The trajectories of the primaries in the $xy$-plane (upper picture) exhibit a very irregular behavior which causes the planetoid to eventually escape the system, as illustrated by the space-time diagram in the lower picture.}
\label{sit_fig2}
\end{figure}
We observe that, for the Gauss method, the orbits of the primaries are irregular in character, so that the third body, after performing some oscillations around the origin, eventually escapes the system (see the lower picture of Figure \ref{sit_fig2}). On the contrary (see the upper picture of Figure \ref{sit_fig3}), the HBVM(18,2) method generates a quite regular phase portrait. Due to the large stepsize $h$ used, a spurious rotation of the $xy$-plane appears which, however, does not destroy the global symmetry of the dynamics, as witnessed by the bounded oscillations of the planetoid (lower picture of Figure \ref{sit_fig3}), which look very similar to the reference ones in Figure \ref{sit_fig1}. This aspect is also confirmed by the pictures in Figure \ref{sit_fig4}, displaying the distance of the primaries as a function of time. We see that the distance of the apocentres (corresponding to the maxima in the plots), as the two bodies wheel around the origin, is preserved by the HBVM(18,2) (lower picture), while the same is not true for the Gauss method (upper picture).
\begin{figure}
\caption{The Sitnikov problem solved by the HBVM(18,2) method (order 4), with stepsize $h=0.5$, in the time interval $[0,1500]$. Upper picture: the trajectories of the primaries are ellipse-shaped. The discretization introduces a fictitious uniform rotation of the $xy$-plane which, however, does not alter the global symmetry of the system. Lower picture: the space-time diagram of the planetoid on the $z$-axis, displayed (for clarity) on the time interval $[0, 350]$, shows that, although a large value of the stepsize $h$ has been used, the overall behavior of the dynamics is well reproduced (compare with the lower picture in Figure \ref{sit_fig1}).}
\label{sit_fig3}
\end{figure}
\begin{figure}
\caption{Distance between the two primaries as a function of time, related to the numerical solutions generated by the Gauss method (upper picture) and HBVM(18,2) (lower picture). The maxima correspond to the distance of the apocentres. These are conserved by HBVM(18,2), while the Gauss method introduces spurious oscillations that destroy the overall symmetry of the system.}
\label{sit_fig4}
\end{figure}
\chapter{Infinity HBVMs}\label{chap3}
From the previous arguments, it is clear that the orthogonality conditions (\ref{orth}), i.e., the fulfillment of the {\em Master Functional Equation} (\ref{L}), are in principle only sufficient for the conservation property (\ref{conservation}) to hold, when a generic polynomial basis $\{P_j\}$ is considered. They also become necessary when the basis is orthonormal.
\begin{theo}\label{ortbas} Let $\{P_j\}$ be an orthonormal basis on the interval $[0,1]$. Then, assuming $H(y)$ to be analytic, (\ref{conservation}) implies that each term in the sum has to vanish. \end{theo}
\begin{proof} Let us consider the expansion$$g(\tau) \equiv \nabla H(\sigma(t_0+\tau h)) = \sum_{\ell\ge 1} \rho_\ell P_\ell(\tau), \qquad \rho_\ell =(P_\ell,g), \qquad \ell\ge1,$$
\noindent where, in general, $$(f,g) = \int_0^1f(\tau)g(\tau) \mathrm{d}\tau.$$
\noindent Substituting into (\ref{conservation}), yields $$\sum_{j=1}^{s} \gamma_j^T (P_j,g)=\sum_{j=1}^{s} \gamma_j^T \left(P_j,\sum_{\ell\ge 1} \rho_\ell P_\ell\right)=\sum_{j=1}^s \gamma_j^T\rho_j=0.$$
\noindent Since this has to hold whatever the choice of the function $H(y)$, one concludes that \begin{equation}\label{ortoj}\gamma_j^T\rho_j=0, \qquad j=1,\dots,s.~\mbox{$\Box$} \end{equation} \end{proof}
\begin{rem}\label{J} In the case where $\{P_j\}$ is an orthonormal basis, from (\ref{ortoj}) one then derives that $$\gamma_j = S \rho_j, \qquad j=1,\dots,s,$$ where $S$ is any nonsingular skew-symmetric matrix. The natural choice $S=J$ then leads to (\ref{orth}). \end{rem}
Moreover, we observe that, if the Hamiltonian $H(y)$ is a polynomial, the integral appearing on the right-hand side of (\ref{hbvm_int}) is exactly computed by a quadrature formula, thus resulting in an HBVM($k$,$s$) method with a sufficient number of silent stages. As already stressed in Chapter~\ref{chap1}, in the non-polynomial case such formulae represent the limit of the sequence HBVM($k$,$s$), as $k \rightarrow \infty$.
\begin{defn} For general Hamiltonians, we call the limit formula (\ref{hbvm_int}) {\em Infinity Hamiltonian Boundary Value Method of degree $s$} (in short, {\em $\infty$-HBVM of degree $s$} or {\em HBVM$(\infty,s)$}) ~\cite{BIT09}. \end{defn}
More precisely, due to the choice of the orthonormal basis (\ref{orto1}), $$ \mbox{HBVM}(\infty,s) = \lim_{k\rightarrow\infty} \mbox{HBVM}(k,s),$$ whatever the choice of the fundamental abscissae $\{c_i\}$.
A worthwhile consequence of Theorems~\ref{ordine} and \ref{stab} is that one can transfer to HBVM$(\infty,s)$ all those properties of HBVM($k$,$s$) which hold for all $k \ge k_0$, for some $k_0$: for example, the order and stability properties.
\begin{cor} \label{ordineinf} Whatever the choice of the abscissae $c_1,\dots,c_s$, HBVM$(\infty,s)$ \eqref{hbvm_int} has order $2s$ and is perfectly $A$-stable. \end{cor}
\chapter[HBVMs and collocation methods] {Isospectral Property of HBVMs and their connections with Runge-Kutta collocation methods}\label{chap4}
When applied to initial value problems, HBVMs may be viewed as a special subclass of Runge-Kutta (RK) methods of collocation type. In Chapter \ref{chap1} (see also \cite{brugnano09bit,BIT09}) the RK formulation turned out to be useful in stating results pertaining to the order of the new formulae. Here, the RK notation will be exploited to derive the isospectral property of HBVMs and elucidate the existing connections between HBVMs and RK collocation methods \cite{BIT10_1}. In doing this, our aim is twofold:
\begin{enumerate} \item to better elucidate the close link between the new formulae and the classical collocation Runge-Kutta methods;
\item to make the handling of the new formulae more familiar to the scientific community working in the context of RK methods. \end{enumerate}
In fact, we think that HBVMs (and consequently their RK formulation) may be of interest beyond their application to Hamiltonian systems. Each HBVM($k$,$s$) becomes a classical collocation method when $k=s$, while, for $k>s$, it retains all the features of the generating collocation formula, including the order (which may even be improved, eventually reaching order $p=2s$) and the dimension of the associated nonlinear system.
Let us then consider the matrix appearing in the Butcher tableau (\ref{rk}), corresponding to HBVM$(k,s)$, i.e., the matrix \begin{equation}\label{AMAT}A = {\cal I}_s {\cal P}_s^T\Omega\in\mathbb R^{k\times k}, \qquad k\ge s,\end{equation}
\noindent whose rank is $s$ (see (\ref{OIP})--(\ref{IDPO})). Consequently it has a $(k-s)$-fold zero eigenvalue. To begin with, we are going to discuss the location of the remaining $s$ eigenvalues of that matrix.
Before that, we state the following preliminary result, whose proof can be found in \cite[Theorem\,5.6 on page\,83]{HW}.
\begin{lem}\label{gauss} The eigenvalues of the matrix \begin{equation}\label{Xs} X_s = \left( \begin{array} {cccc} \frac{1}2 & -\xi_1 &&\\ \xi_1 &0 &\ddots&\\
&\ddots &\ddots &-\xi_{s-1}\\
& &\xi_{s-1} &0\\ \end{array} \right) , \end{equation}
\noindent with \begin{equation}\label{xij}\xi_j=\frac{1}{2\sqrt{(2j+1)(2j-1)}}, \qquad j=1,\dots,s-1,\end{equation}
\noindent coincide with those of the matrix in the Butcher tableau of the Gauss-Legendre method of order $2s$.\end{lem}
We also need the following preliminary result, whose proof derives from the properties of shifted-Legendre polynomials (see, e.g., \cite{AS} or the Appendix in \cite{brugnano09bit}).
\begin{lem}\label{intleg} With reference to the matrices in (\ref{OIP})--(\ref{IDPO}), one has \begin{equation}\label{IPG}{\cal I}_s = {\cal P}_{s+1}\hat{X}_s,\end{equation} where \begin{equation}\label{G} \hat{X}_s = \left( \begin{array} {cccc} \frac{1}2 & -\xi_1 &&\\ \xi_1 &0 &\ddots&\\
&\ddots &\ddots &-\xi_{s-1}\\
& &\xi_{s-1} &0\\ \hline &&&\xi_s \end{array} \right) , \end{equation} with the $\xi_j$ defined by (\ref{xij}). \end{lem}
The following result then holds true \cite{BIT10}.
\begin{theo}[\bf Isospectral Property of HBVMs]\label{mainres} For all $k\ge s$ and for any choice of the abscissae $\{\tau_i\}$ such that $B(2s)$ holds true, the nonzero eigenvalues of the matrix $A$ in (\ref{AMAT}) coincide with those of the matrix of the Gauss-Legendre method of order $2s$. \end{theo}
\begin{proof} For $k=s$, the abscissae $\{\tau_i\}$ have to be the $s$ Gauss-Legendre nodes on $[0,1]$, so that HBVM$(s,s)$ reduces to the Gauss-Legendre method of order $2s$, as already observed in Example~\ref{gaussex}.
When $k>s$, from the orthonormality of the basis, see (\ref{orto}), and considering that the quadrature with weights $\{\omega_i\}$ is exact for polynomials of degree (at least) $2s-1$, one easily obtains that $${\cal P}_s^T\Omega{\cal P}_{s+1} = \left( I_s ~ \boldsymbol{0}\right),$$
\noindent since, for all $i=1,\dots,s$, ~and~ $j=1,\dots,s+1$: $$\left({\cal P}_s^T\Omega{\cal P}_{s+1}\right)_{ij} = \sum_{\ell=1}^k \omega_\ell P_i(\tau_\ell)P_j(\tau_\ell) =\int_0^1 P_i(t)P_j(t)\mathrm{d} t=\delta_{ij}.$$
\noindent By taking into account the result of Lemma~\ref{intleg}, one then obtains: \begin{eqnarray}\nonumber A{\cal P}_{s+1} &=& {\cal I}_s {\cal P}_s^T\Omega{\cal P}_{s+1} = {\cal I}_s \left(I_s~\boldsymbol{0}\right) ={\cal P}_{s+1} \hat{X}_s \left(I_s~\boldsymbol{0}\right) = {\cal P}_{s+1}\left(\hat{X}_s~\boldsymbol{0}\right)\\
&=& {\cal P}_{s+1}
\left( \begin{array} {cccc|c} \frac{1}2 & -\xi_1 && &0\\ \xi_1 &0 &\ddots& &\vdots\\
&\ddots &\ddots &-\xi_{s-1}&\vdots\\
& &\xi_{s-1} &0&0\\ \hline &&&\xi_s&0 \end{array} \right) \equiv {\cal P}_{s+1} \widetilde X_s,\label{tXs} \end{eqnarray}
\noindent with the $\{\xi_j\}$ defined according to (\ref{xij}). Consequently, one obtains that the columns of ${\cal P}_{s+1}$ constitute a basis of an invariant (right) subspace of matrix $A$, so that the eigenvalues of $\widetilde X_s$ are eigenvalues of $A$. In more detail, the eigenvalues of $\widetilde X_s$ are those of $X_s$ (see (\ref{Xs})) and the zero eigenvalue. Then, also in this case, the nonzero eigenvalues of $A$ coincide with those of $X_s$, i.e., with the eigenvalues of the matrix defining the Gauss-Legendre method of order $2s$.~\mbox{$\Box$} \end{proof}
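The above statement is also easily checked numerically; the following sketch (hypothetical names, reusing the {\tt hbvm\_tableau} helper sketched in Chapter~\ref{chap1}) compares the nonzero eigenvalues of $A$ with those of $X_s$ in (\ref{Xs}) for a particular choice of $k$ and $s$:
\begin{verbatim}
import numpy as np

k, s = 8, 3
tau, omega, A = hbvm_tableau(k, s)     # Butcher matrix A = I_s P_s^T Omega
# build X_s from the xi_j defined above
X = np.zeros((s, s))
X[0, 0] = 0.5
for j in range(1, s):
    xi = 1.0 / (2.0 * np.sqrt((2*j + 1) * (2*j - 1)))
    X[j, j - 1], X[j - 1, j] = xi, -xi
eigA = np.linalg.eigvals(A)
eigA = np.sort_complex(eigA[np.abs(eigA) > 1e-8])  # drop the (k-s)-fold zero
eigX = np.sort_complex(np.linalg.eigvals(X))
print(np.max(np.abs(eigA - eigX)))                 # expected close to round-off
\end{verbatim}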
\section{HBVMs and collocation methods}
By using the previous result and notation, we now elucidate the existing connections between HBVMs and RK collocation methods. We shall continue to use an orthonormal basis $\{P_j\}$, along which the underlying {\em extended collocation} polynomial $\sigma(t)$ is expanded, even though the arguments could be generalized to more general bases, as sketched below. On the other hand, the distribution of the internal abscissae can be arbitrary.
Our starting point is a generic collocation method with $k$ stages, defined by the tableau \begin{equation} \label{collocation_rk}
\begin{array}{c|c}\begin{array}{c} \tau_1\\ \vdots\\ \tau_k\end{array} & \mathcal A \\
\hline &\omega_1\, \ldots ~ \omega_k \end{array} \end{equation} where, for $i,j=1,\dots,k$, $\mathcal A= \left(\alpha_{ij}\right)\equiv\left(\int_0^{\tau_i} \ell_j(\tau) \mathrm{d}\tau \right)$ and $\omega_j=\int_0^{1} \ell_j(\tau) \mathrm{d}\tau$, $\ell_j(t)$ being the $j$th Lagrange polynomial of degree $k-1$ defined on the set of abscissae $\{\tau_i\}$.
Given a positive integer $s\le k$, we can consider a basis $\{p_1(\tau), \dots, p_s(\tau)\}$ of the vector space of polynomials of degree at most $s-1$, and we set \begin{equation} \label{P} \hat{\cal P}_s = \left( \begin{array} {cccc} p_1(\tau_1) & p_2(\tau_1) & \cdots & p_s(\tau_1) \\ p_1(\tau_2) & p_2(\tau_2) & \cdots & p_s(\tau_2) \\ \vdots & \vdots & & \vdots \\ p_1(\tau_k) & p_2(\tau_k) & \cdots & p_s(\tau_k) \end{array} \right) _{k \times s} \end{equation}
\noindent (note that $\hat{\cal P}_s$ is full rank since the nodes are distinct). The class of RK methods we are interested in is defined by the tableau \begin{equation} \label{hbvm_rk}
\begin{array}{c|c}\begin{array}{c} \tau_1\\ \vdots\\ \tau_k\end{array} & A \equiv \mathcal A \hat{\cal P}_s \Lambda_s \hat{\cal P}_s^T \Omega\\
\hline &\omega_1\, \ldots \ldots ~ \omega_k \end{array} \end{equation}
\noindent where $\Omega={\rm diag}(\omega_1,\dots,\omega_k)$ and $\Lambda_s={\rm diag}(\eta_1,\dots,\eta_s)$; the coefficients $\eta_j$, $j=1,\dots,s$, have to be selected by imposing suitable consistency conditions on the stages $\{Y_i\}$ \cite{BIT09}. In particular, when the basis is orthonormal, as we shall assume hereafter, then matrix $\hat{\cal P}_s$ reduces to matrix ${\cal P}_s$ in (\ref{OIP})--(\ref{IDPO}), $\Lambda_s = I_s$, and consequently (\ref{hbvm_rk}) becomes \begin{equation} \label{hbvm_rk1}
\begin{array}{c|c}\begin{array}{c} \tau_1\\ \vdots\\ \tau_k\end{array} & A \equiv \mathcal A {\cal P}_s {\cal P}_s^T \Omega\\
\hline &\omega_1\, \ldots \ldots ~ \omega_k \end{array} \end{equation}
We note that the Butcher array $A$ has rank at most $s$, because it is defined by {\em filtering} $\mathcal A$ through the rank-$s$ matrix ${\cal P}_s {\cal P}_s^T \Omega$.
The following result then holds true, which clarifies the existing connections between classical RK collocation methods and HBVMs.
\begin{theo}\label{collhbvm} Provided that the quadrature formula defined by the weights $\{\omega_i\}$ is exact for polynomials of degree at least $2s-1$ (i.e., the RK method defined by the tableau (\ref{hbvm_rk1}) satisfies the usual simplifying assumption $B(2s)$), the tableau (\ref{hbvm_rk1}) defines a HBVM$(k,s)$ method based on the abscissae $\{\tau_i\}$. \end{theo}
\underline{Proof}\quad Let us expand the basis $\{P_1(\tau),\dots,P_s(\tau)\}$ along the Lagrange basis $\{\ell_j(\tau)\}$, $j=1,\dots,k$, defined over the nodes $\tau_i$, $i=1,\dots,k$: $$ P_j(\tau)=\sum_{r=1}^k P_j(\tau_r) \ell_r(\tau),
\qquad j=1,\dots,s.$$
\noindent It follows that, for $i=1,\dots,k$ and $j=1,\dots,s$: $$\int_0^{\tau_i} P_j(x) \mathrm{d}x = \sum_{r=1}^k P_j(\tau_r) \int_0^{\tau_i} \ell_r(x) \mathrm{d}x = \sum_{r=1}^k P_j(\tau_r) \alpha_{ir},$$
\noindent that is (see (\ref{OIP})--(\ref{IDPO}) and (\ref{collocation_rk})), \begin{equation}\label{APeqI} {\cal I}_s = \mathcal A {\cal P}_s. \end{equation}
\noindent By substituting (\ref{APeqI}) into (\ref{hbvm_rk1}), one retrieves that tableau (\ref{rk}), which defines the method HBVM$(k,s)$. This completes the proof.~\mbox{$\Box$}
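The identity (\ref{APeqI}), and hence the fact that the filtered tableau (\ref{hbvm_rk1}) coincides with (\ref{rk}), can also be verified numerically, as in the following sketch (hypothetical names, reusing the helpers sketched in Chapter~\ref{chap1}):
\begin{verbatim}
import numpy as np

def collocation_matrix(tau):
    """Sketch of the collocation matrix with entries
    alpha_ij = int_0^{tau_i} l_j(x) dx, l_j being the Lagrange polynomials
    on the nodes tau; the integrals are computed by Gauss quadrature."""
    k = tau.size
    xg, wg = np.polynomial.legendre.leggauss(k)   # exact for degree <= 2k-1
    def lagrange(j, x):
        out = np.ones_like(x)
        for r in range(k):
            if r != j:
                out *= (x - tau[r]) / (tau[j] - tau[r])
        return out
    Acal = np.zeros((k, k))
    for i, ti in enumerate(tau):
        xs = ti * (xg + 1) / 2
        for j in range(k):
            Acal[i, j] = (ti / 2) * (wg @ lagrange(j, xs))
    return Acal

k, s = 6, 2
tau, omega, A = hbvm_tableau(k, s)       # HBVM(k,s) tableau, Gauss abscissae
Acal = collocation_matrix(tau)           # Gauss collocation method of order 2k
P = legendre_basis(tau, s)
print(np.max(np.abs(Acal @ P @ P.T @ np.diag(omega) - A)))   # ~ round-off
\end{verbatim}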
The resulting Runge-Kutta method \eqref{hbvm_rk1} is then energy conserving if applied to polynomial Hamiltonian systems \eqref{hamilode} when the degree of $H(y)$ is lower than or equal to a quantity, say $\nu$, depending on $k$ and $s$. As an example, when a Gaussian distribution of the nodes $\{\tau_i\}$ is considered, one obtains (\ref{knu}).
\begin{rem}[{\bf About Symplecticity}]\label{symplectic} The choice of the abscissae $\{\tau_1,\dots,\tau_k\}$ at the Gaussian points in $[0,1]$ has another important consequence since, in such a case, the collocation method (\ref{collocation_rk}) is the Gauss method of order $2k$ which, as is well known, is a {\em symplectic method}. The result of Theorem~\ref{collhbvm} then states that, for any $s\le k$, the HBVM$(k,s)$ method is related to the Gauss method of order $2k$ by the relation: $$A = {\cal A} ({\cal P}_s{\cal P}_s^T\Omega),$$
\noindent where the {\em filtering matrix} $({\cal P}_s{\cal P}_s^T\Omega)$ essentially makes the Gauss method of order $2k$ ``work'' in a suitable subspace. \end{rem}
It might seem that the price paid to achieve such conservation properties is a lowering of the order of the new method with respect to the original one \eqref{collocation_rk}. Actually, this is not the case, because a fair comparison would relate method \eqref{rk}--\eqref{hbvm_rk1} to a collocation method constructed on $s$ rather than on $k$ stages. This fact will be fully elucidated in Chapter~\ref{chap5}.
\subsection{An alternative proof for the order of HBVMs}
We conclude this chapter by observing that the order $2s$ of an HBVM$(k,s)$ method, under the hypothesis that \eqref{collocation_rk} satisfies the usual simplifying assumption $B(2s)$, i.e., the quadrature defined by the weights $\{\omega_i\}$ is exact for polynomials of degree at least $2s-1$, may be stated by using an alternative, though equivalent, procedure to that used in the proof of Theorem~\ref{ordine}.
Let us then define the $k \times k$ matrix ${\cal P}\equiv {\cal P}_k$ (see (\ref{OIP})--(\ref{IDPO})) obtained by ``enlarging'' the matrix ${\cal P}_s$ with $k-s$ columns defined by the normalized shifted Legendre polynomials $P_j(\tau)$, $j=s+1,\dots,k$, evaluated at $\{\tau_i\}$, i.e., $${\cal P}= \left( \begin{array} {ccc} P_1(\tau_1) & \dots &P_k(\tau_1)\\ \vdots & &\vdots\\ P_1(\tau_k) & \dots &P_k(\tau_k) \end{array} \right) .$$
\noindent By virtue of property $B(2s)$ for the quadrature formula defined by the weights $\{\omega_i\}$, it satisfies $$ {\cal P}^T \Omega {\cal P} = \left( \begin{array} {ll} I_s & O \\ O & R \end{array} \right) , \qquad R\in\mathbb R^{k-s\times k-s}. $$
\noindent This implies that ${\cal P}$ satisfies the property $T(s,s)$ in \cite[Definition\,5.10 on page 86]{HW}, for the quadrature formula $(\omega_i,\tau_i)_{i=1}^k$. Therefore, for the matrix $A$ appearing in \eqref{hbvm_rk1} (i.e., (\ref{rk}), by virtue of Theorem~\ref{collhbvm}), one obtains: \begin{equation} \label{rk_leg1} {\cal P}^{-1} A {\cal P} = {\cal P}^{-1} \mathcal A {\cal P} \left( \begin{array} {ll} I_s \\ & O \end{array} \right) = \left( \begin{array} {ll} \widetilde{X}_s \\ & O \end{array} \right) , \end{equation}
\noindent where $\widetilde X_s$ is the matrix defined in (\ref{tXs}). Relation \eqref{rk_leg1} and \cite[Theorem\,5.11 on page 86]{HW} prove that method (\ref{hbvm_rk1}) (i.e., HBVM$(k,s)$) satisfies $C(s)$ and $D(s-1)$ and, hence, its order is $2s$.
\begin{rem}[{\bf Invariance of the order}] From the previous result
we deduce the invariance of the superconvergence property
of HBVM($k$,$s$) with respect to the distribution of the abscissae $\tau_i$, $i=1,\dots,k$, the only assumption needed to get order $2s$ being that the underlying quadrature formula has degree of precision $2s-1$. Such an exceptional circumstance is likely to have interesting applications beyond the purposes presented here. \end{rem}
\chapter{Blended HBVMs}\label{chap5}
We shall now consider some computational aspects concerning HBVM$(k,s)$. In more detail, we show how its cost depends essentially on $s$, rather than on $k$, in the sense that the nonlinear system to be solved, for obtaining the discrete solution, has (block) dimension $s$ \cite{BIS,brugnano09bit,BIT10}.
This could be inferred from the fact that the silent stages (\ref{hYi}) depend on the fundamental stages: let us see the details. In order to simplify the notation, we shall fix the fundamental stages at $\tau_1,\dots,\tau_s$, since we have already seen that, due to the use of an orthonormal basis, they could in principle be chosen arbitrarily among the abscissae $\{\tau_i\}$. With this premise, we have, from (\ref{discr_lin}), (\ref{aij})--(\ref{hbvm_int}), and by using the notation (\ref{tiyi}), \begin{equation}\label{ys} y_i = y_0 + h\sum_{j=1}^s a_{ij} \sum_{\ell = 1}^k\omega_\ell P_j(\tau_\ell)f_\ell, \qquad i = 1,\dots,s.\end{equation}
This equation is now coupled with that defining the silent stages, i.e., from (\ref{expan}) and (\ref{hYi}), \begin{equation}\label{hy} y_i = y_0 + h\sum_{j=1}^s \gamma_j \int_0^{\tau_i}P_j(t) \mathrm{d} t, \qquad i = s+1,\dots,k. \end{equation}
Let us now partition the matrices ${\cal I}_s,{\cal P}_s\in\mathbb R^{k\times s}$ in (\ref{OIP})--(\ref{IDPO}) into $${\cal I}_{s1},{\cal P}_{s1}\in\mathbb R^{s\times s},\qquad {\cal I}_{s2},{\cal P}_{s2}\in\mathbb R^{k-s\times s},$$
\noindent containing the entries defined by the fundamental abscissae and the silent abscissae, respectively. Similarly, we partition the vector $\boldsymbol{y}$ into $\boldsymbol{y}_1$, containing the fundamental stages, and $\boldsymbol{y}_2$ containing the silent stages and, accordingly, let $$\Omega_1\in\mathbb R^{s\times s}, \qquad \Omega_2\in\mathbb R^{k-s\times k-s},$$
\noindent be the diagonal matrices containing the corresponding entries in matrix $\Omega$. Finally, let us define the vectors $$\boldsymbol{\gamma} = (\gamma_1,\dots,\gamma_s)^T, \qquad e=(1,\dots,1)^T\in\mathbb R^s, \qquad u = (1,\dots,1)^T\in\mathbb R^{k-s}.$$
\noindent Consequently, we can rewrite (\ref{ys}) and (\ref{hy}), as \begin{eqnarray}\label{y1} \boldsymbol{y}_1 &=& e\otimes y_0 + h{\cal I}_{s1} \left( {\cal P}_{s1}^T~ {\cal P}_{s2}^T\right) \left( \begin{array} {cc}\Omega_1 \\ &\Omega_2 \end{array} \right) \otimes I_{2m} \left( \begin{array} {c} f(\boldsymbol{y}_1)\\ f(\boldsymbol{y}_2) \end{array} \right) ,\\ \boldsymbol{y}_2 &=& u\otimes y_0 +h {\cal I}_{s2}\otimes I_{2m} \boldsymbol{\gamma}, \label{y2}\end{eqnarray}
\noindent respectively. The vector $\boldsymbol{\gamma}$ can be obtained by the identity (see (\ref{y_i})) $$\boldsymbol{y}_1 = e\otimes y_0 + h{\cal I}_{s1}\otimes I_{2m} \boldsymbol{\gamma},$$
\noindent thus giving \begin{eqnarray}\nonumber \boldsymbol{y}_2 &=& \left(u-{\cal I}_{s2}{\cal I}_{s1}^{-1}e\right)\otimes y_0 +{\cal I}_{s2}{\cal I}_{s1}^{-1}\otimes I_{2m} \boldsymbol{y}_1\\ &\equiv& \hat u\otimes y_0 +A_1\otimes I_{2m}\boldsymbol{y}_1,\label{y2_1}\end{eqnarray}
\noindent in place of (\ref{y2}), where, evidently, \begin{equation}\label{A1}\hat u = \left(u-{\cal I}_{s2}{\cal I}_{s1}^{-1}e\right)\in \mathbb R^{k-s}, \qquad A_1={\cal I}_{s2}{\cal I}_{s1}^{-1}\in\mathbb R^{k-s\times s}.\end{equation}
\noindent By setting \begin{equation}\label{B1B2} B_1 = {\cal I}_{s1} {\cal P}_{s1}^T\Omega_1 \in\mathbb R^{s\times s}, \qquad B_2 = {\cal I}_{s1} {\cal P}_{s2}^T \Omega_2\in\mathbb R^{s\times k-s},\end{equation}
\noindent substitution of (\ref{y2_1}) into (\ref{y1}) then provides, at last, the system of block size $s$ to be actually solved: \begin{eqnarray}\label{onlys} F(\boldsymbol{y}_1) &\equiv& \boldsymbol{y}_1 - e\otimes y_0 - h\left[ B_1 \otimes I_{2m} f(\boldsymbol{y}_1) + \right.\\ &&\left. B_2 \otimes I_{2m} f\left(\hat u\otimes y_0 +A_1\otimes I_{2m} \boldsymbol{y}_1\right)\right] = \bf0.\nonumber\end{eqnarray}
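For illustration, the following minimal Python sketch (all function and variable names are ours; the matrices ${\cal I}_{s1}$, ${\cal I}_{s2}$, ${\cal P}_{s1}$, ${\cal P}_{s2}$, $\Omega_1$, $\Omega_2$ and the vector field $f$ are assumed to be given) assembles $\hat u$, $A_1$, $B_1$, $B_2$ as in (\ref{A1})--(\ref{B1B2}) and evaluates the residual $F(\boldsymbol{y}_1)$ of (\ref{onlys}):
\begin{verbatim}
import numpy as np

def hbvm_residual(y1, y0, h, f, Is1, Is2, Ps1, Ps2, Om1, Om2):
    # Assemble A1, B1, B2 and evaluate the residual F(y1) of the reduced system.
    s, r, m = Is1.shape[0], Is2.shape[0], y0.size
    A1 = Is2 @ np.linalg.inv(Is1)            # A1 = I_{s2} I_{s1}^{-1}
    B1 = Is1 @ Ps1.T @ Om1                   # B1 = I_{s1} P_{s1}^T Omega_1
    B2 = Is1 @ Ps2.T @ Om2                   # B2 = I_{s1} P_{s2}^T Omega_2
    e, u = np.ones(s), np.ones(r)
    uhat = u - A1 @ e
    # silent stages recovered from the fundamental ones
    y2 = np.kron(uhat, y0) + np.kron(A1, np.eye(m)) @ y1
    f1 = np.concatenate([f(y1[i*m:(i+1)*m]) for i in range(s)])
    f2 = np.concatenate([f(y2[i*m:(i+1)*m]) for i in range(r)])
    return y1 - np.kron(e, y0) - h * (np.kron(B1, np.eye(m)) @ f1
                                      + np.kron(B2, np.eye(m)) @ f2)
\end{verbatim}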
By using the simplified Newton method for solving (\ref{onlys}), and setting \begin{equation}\label{C} C=B_1+B_2A_1 \in\mathbb R^{s\times s},\end{equation}
\noindent one obtains the iteration: \begin{eqnarray}\label{Newt}
\left( I_s\otimes I_{2m} - hC\otimes J_0 \right) \boldsymbol{\delta}^{(n)} &=& -F(\boldsymbol{y}_1^{(n)}) \equiv \boldsymbol{\psi}_1^{(n)},\\ \boldsymbol{y}_1^{(n+1)} &=& \boldsymbol{y}_1^{(n)} + \boldsymbol{\delta}^{(n)}, \qquad n=0,1,\dots, \nonumber \end{eqnarray}
\noindent where $J_0$ is the Jacobian of $f(y)$ evaluated at $y_0$. Because of the result of Theorem~\ref{mainres}, the following property of matrix $C$ holds true ~\cite{BIT10}.
\begin{theo}\label{isoC} The eigenvalues of matrix $C$ in (\ref{C}) coincide with those of matrix (\ref{Xs}), i.e., with the eigenvalues of the matrix of the Butcher array of the Gauss-Legendre method of order $2s$. \end{theo}
\underline{Proof}\quad Assuming, as usual for simplicity, that the fundamental stages are the first $s$ ones, one has that the discrete problem $$\boldsymbol{y} = \left( \begin{array} {c}e\\ u \end{array} \right) \otimes y_0 +h A\otimes I_{2m} f(\boldsymbol{y}),$$
\noindent which defines the Runge-Kutta formulation of the method, is equivalent, by virtue of (\ref{y1}), (\ref{y2_1}), (\ref{A1}), (\ref{B1B2}), to \begin{eqnarray*}\lefteqn{ \left( \begin{array} {cc} I_s & O_{s\times r}\\ -A_1 & I_r \end{array} \right) \otimes I_{2m} \left( \begin{array} {c} \boldsymbol{y}_1\\ \boldsymbol{y}_2 \end{array} \right) =}\\&& \left( \begin{array} {c} e\\ \hat u \end{array} \right) \otimes y_0 +h \left( \begin{array} {cc} B_1 & B_2\\ O_{r\times s} & O_{r\times r} \end{array} \right) \otimes I_{2m} \left( \begin{array} {c} f(\boldsymbol{y}_1)\\ f(\boldsymbol{y}_2) \end{array} \right) ,\end{eqnarray*}
\noindent where, as usual, $r=k-s$. Consequently, the eigenvalues of the matrix $A$ defined in (\ref{AMAT}) coincide with those of the pencil \begin{equation}\label{pencil}\left(~ \left( \begin{array} {cc} I_s &O_{s\times r}\\ -A_1 & I_r \end{array} \right) ,~ \left( \begin{array} {cc} B_1 &B_2\\ O_{r\times s} & O_{r\times r} \end{array} \right) ~\right).\end{equation}
\noindent That is, $$\mu\in\sigma(A) ~~\Leftrightarrow~~ \mu \left( \begin{array} {cc} I_s &O_{s\times r}\\ -A_1 & I_r \end{array} \right) \left( \begin{array} {c} \boldsymbol{u}\\ \boldsymbol{v} \end{array} \right) = \left( \begin{array} {cc} B_1 &B_2\\ O_{r\times s} & O_{r\times r} \end{array} \right) \left( \begin{array} {c} \boldsymbol{u}\\ \boldsymbol{v} \end{array} \right) ,$$
\noindent for some nonzero vector $(\boldsymbol{u}^T,\boldsymbol{v}^T)^T$. By setting $\boldsymbol{u}=\boldsymbol{0}$, one obtains the $r$ zero eigenvalues of the pencil. For the remaining $s$ (nonzero) ones, it must be $\boldsymbol{v}=A_1\boldsymbol{u}$, so that: $$\mu \boldsymbol{u} = \left( B_1\boldsymbol{u} + B_2\boldsymbol{v} \right) = \left( B_1\boldsymbol{u} + B_2A_1\boldsymbol{u} \right) = C\boldsymbol{u} ~~\Leftrightarrow~~ \mu\in\sigma(C).~\mbox{$\Box$}$$
\begin{rem}\label{algo}
From the result of Theorem~\ref{isoC}, it follows that the spectrum of $C$ does not depend on the choice of the $s$ fundamental abscissae among the nodes $\{\tau_i\}$. On the contrary, its condition number does: the latter appears to be minimized when the fundamental abscissae are symmetrically distributed and approximately evenly spaced in the interval $[0,1]$. As a practical ``{\em rule of thumb}'', the following algorithm appears to be almost optimal: \begin{enumerate} \item let the $k$ abscissae $\{\tau_i\}$ be chosen according to a Gauss-Legendre distribution of $k$ nodes;
\item then, let us consider $s$ equidistributed nodes in $(0,1)$, say $\{\hat \tau_1, \dots, \hat\tau_s\}$;
\item select, as the fundamental abscissae, those nodes among the $\{\tau_i\}$ which are the closest ones to the $\{\hat \tau_j\}$;
\item define matrix $C$ in (\ref{C}) accordingly. \end{enumerate}
\noindent Clearly, for the above algorithm to provide a unique solution (resulting in a symmetric choice of the fundamental abscissae), the difference $k-s$ has to be even, which, however, can be easily accomplished. A minimal computational sketch of this selection strategy is given right after this remark. \end{rem}
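The selection strategy of Remark~\ref{algo} can be sketched in a few lines of Python (all names are ours; we use NumPy's {\tt leggauss} for the Gauss-Legendre nodes and take $\hat\tau_i=i/(s+1)$, $i=1,\dots,s$, as the equidistributed reference nodes):
\begin{verbatim}
import numpy as np

def fundamental_abscissae(k, s):
    # step 1: k Gauss-Legendre nodes, shifted from [-1,1] to [0,1]
    x, _ = np.polynomial.legendre.leggauss(k)
    tau = (x + 1) / 2
    # step 2: s equidistributed reference nodes in (0,1)
    tau_hat = np.arange(1, s + 1) / (s + 1)
    # step 3: indices of the abscissae closest to the reference nodes
    idx = sorted({int(np.argmin(np.abs(tau - t))) for t in tau_hat})
    return tau, idx          # tau[idx] are the fundamental abscissae

tau, idx = fundamental_abscissae(10, 3)
\end{verbatim}
The selected indices then determine the partitioning of ${\cal I}_s$, ${\cal P}_s$, and $\Omega$ used in (\ref{B1B2}).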
In order to give evidence of the effectiveness of the above algorithm, in Figure~\ref{condC} we plot the condition number of the matrix $C=C(k,s)$, for $s=2,\dots,5$ and $k\ge s$. As one can see, the condition number of $C(k,s)$ remains nicely bounded for increasing values of $k$, which makes the implementation (to be analyzed in the next section) effective also when finite precision arithmetic is used. For comparison, Figure~\ref{condC1} shows the same plot, obtained by fixing the fundamental abscissae as the first $s$ ones. In such a case, the condition number of $C(k,s)$ grows very quickly as $k$ is increased.
\begin{figure}
\caption{\protect Condition number of the matrix $C=C(k,s)$, for $s=2,3,4,5$ and $k=s,s+1,\dots,100$, with the fundamental abscissae chosen according to the algorithm sketched in Remark~\ref{algo}.}
\label{condC}
\end{figure}
\begin{figure}
\caption{\protect Condition number of the matrix $C=C(k,s)$, for $s=2,3,4,5$ and $k=s,s+1,\dots,100$, with the fundamental abscissae chosen as the first $s$ ones.}
\label{condC1}
\end{figure}
\section{Blended implementation} We observe that, since $C$ is nonsingular, we can recast problem (\ref{Newt}) in the {\em equivalent form} \begin{equation}\label{Newt2}
\gamma\left( C^{-1}\otimes I_{2m} - hI_s\otimes J_0 \right) \boldsymbol{\delta}^{(n)} = -\gamma C^{-1}\otimes I_{2m}\, F(\boldsymbol{y}_1^{(n)}) \equiv \boldsymbol{\psi}_2^{(n)}, \end{equation}
\noindent where $\gamma>0$ is a free parameter to be chosen later. Let us now introduce the {\em weight (matrix) function} \begin{equation}\label{teta}
\theta = I_s\otimes \Phi^{-1}, \qquad \Phi = I_{2m} -h\gamma J_0\in\mathbb R^{2m\times 2m}, \end{equation}
\noindent and the {\em blended formulation} of the system to be solved, \begin{eqnarray}\nonumber
M\boldsymbol{\delta}^{(n)} &\equiv& \left[ \theta \left(I_s\otimes I_{2m}-hC\otimes J_0\right) + \right.\\ \nonumber &&\left. (I-\theta)\gamma\left( C^{-1}\otimes I_{2m}- h I_s\otimes J_0 \right)\right] \boldsymbol{\delta}^{(n)} \\ &=& \theta \boldsymbol{\psi}_1^{(n)}+(I-\theta)\boldsymbol{\psi}_2^{(n)}\equiv \boldsymbol{\psi}^{(n)}.\label{blendNewt} \end{eqnarray}
\noindent The latter system has again the same solution as the previous ones, since it is obtained as the {\em blending}, with weights $\theta$ and $(I-\theta)$, of the two equivalent forms (\ref{Newt}) and (\ref{Newt2}). For iteratively solving (\ref{blendNewt}), we use the corresponding {\em blended iteration}, formally given by \cite{B00,BM02,BM04,BM07,BM08,BM09,BM09a,BMM06,BT2,M04,BIMcode}: \begin{equation}\label{blendit}\boldsymbol{\delta}^{(n,\ell+1)} = \boldsymbol{\delta}^{(n,\ell)} -\theta\left( M\boldsymbol{\delta}^{(n,\ell)}-\boldsymbol{\psi}^{(n)}\right), \qquad \ell=0,1,\dots.\end{equation}
\begin{rem}\label{nonlin} A nonlinear variant of the iteration (\ref{blendit}) can be obtained, by starting at $\boldsymbol{\delta}^{(n,0)}=\bf 0$ and updating $\boldsymbol{\psi}^{(n)}$ as soon as a new approximation is available. This results in the following iteration: \begin{equation}\label{blendit1}\boldsymbol{y}^{(n+1)} = \boldsymbol{y}^{(n)} +\theta\boldsymbol{\psi}^{(n)}, \qquad n=0,1,\dots.\end{equation}\end{rem}
\begin{rem}\label{only1} We observe that, for actually performing the iteration (\ref{teta})--(\ref{blendit}), as well as (\ref{blendit1}), one has to factor {\em only} the matrix $\Phi$ in (\ref{teta}), which has the same size as that of the continuous problem.\end{rem}
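To fix ideas, one sweep of the blended iteration (\ref{blendit}) for system (\ref{blendNewt}) may be sketched as follows (a minimal Python illustration with our own naming, using SciPy's LU routines; the matrices $C$, $C^{-1}$, $J_0$ and a precomputed factorization of $\Phi$ are assumed given, and, in agreement with Remark~\ref{only1}, only solves with $\Phi$ are needed):
\begin{verbatim}
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def blended_sweep(delta, psi, h, gamma, C, Cinv, J0, lu_Phi):
    # One sweep of the blended iteration: delta <- delta - theta*(M*delta - psi).
    # delta, psi: (s, m) arrays, one row per block component.
    # lu_Phi: LU factorization of Phi = I - h*gamma*J0, e.g.
    #         lu_Phi = lu_factor(np.eye(len(J0)) - h*gamma*J0)
    theta = lambda X: np.array([lu_solve(lu_Phi, row) for row in X])
    r1 = delta - h * (C @ delta) @ J0.T              # (I - h C x J0) delta
    r2 = gamma * (Cinv @ delta - h * delta @ J0.T)   # gamma (C^{-1} x I - h I x J0) delta
    M_delta = theta(r1) + r2 - theta(r2)             # blended matrix applied to delta
    return delta - theta(M_delta - psi)
\end{verbatim}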
We end this section by observing that the above iterations (\ref{blendit}) and (\ref{blendit1}) depend on a free parameter $\gamma$. It will be chosen in order to optimize the convergence properties of the iteration, according to a linear analysis of convergence, which is sketched in the next section.
\section{Linear analysis of convergence}\label{linear}
The linear analysis of convergence for the iteration (\ref{blendit}) is carried out by considering the usual scalar test equation (see, e.g., \cite{BM09} and the references therein), $$y' = \lambda y, \qquad \Re(\lambda)<0.$$
\noindent By setting, as usual $q=h\lambda$, the two equivalent formulations (\ref{Newt}) and (\ref{Newt2}) become, respectively (omitting, for sake of brevity, the upper index $n$), $$(I_s-q C) \boldsymbol{\delta} = \boldsymbol{\psi}_1, \qquad \gamma( C^{-1} -q I_s) \boldsymbol{\delta} = \boldsymbol{\psi}_2.$$
\noindent Moreover, \begin{equation}\label{tetaq}\theta = \theta(q) = (1-\gamma q)^{-1} I_s,\end{equation}
\noindent and the blended iteration (\ref{blendit}) becomes \begin{equation}\label{blendq} \boldsymbol{\delta}^{(\ell+1)} = (I_s -\theta(q)M(q))\boldsymbol{\delta}^{(\ell)} + \theta(q)\boldsymbol{\psi}(q),\end{equation}
\noindent with \begin{eqnarray}\label{Mq} M(q) &=& \theta(q)\left(I_s-q C\right) +(I_s-\theta(q))\gamma\left( C^{-1}-q I_s\right), \\ \boldsymbol{\psi}(q) &=& \theta(q)\boldsymbol{\psi}_1+(I_s-\theta(q))\boldsymbol{\psi}_2.\nonumber\end{eqnarray}
\noindent Consequently, the iteration will be convergent if and only if the spectral radius $\rho(q)$ of the iteration matrix, \begin{equation}\label{Zq}Z(q) = I_s-\theta(q)M(q),\end{equation}
\noindent is less than 1. The set $$\Gamma = \left\{ q\in\mathbb C \,:\, \rho(q)<1 \right\}$$
\noindent is the {\em region of convergence of the iteration}. The iteration is said to be:\begin{itemize} \item {$A$-convergent}, ~ if $\mathbb C^-\subseteq \Gamma$;
\item {$L$-convergent}, ~ if it is $A$-convergent and, moreover, ~$\rho(q)\rightarrow 0$,~ as ~$q\rightarrow\infty$. \end{itemize}
\begin{table}[t]
\caption{\protect\label{params} optimal values (\ref{gammaopt}), and corresponding maximum amplification factors (\ref{rostarmin}), for various values of $s$.} \centerline{\begin{tabular}{|r|r|r|} \hline $s$ & $\gamma$ & $\rho^*$\\ \hline 2 &0.2887 &0.1340\\ 3 &0.1967 &0.2765\\ 4 &0.1475 &0.3793\\ 5 &0.1173 &0.4544\\ 6 &0.0971 &0.5114\\ 7 &0.0827 &0.5561\\ 8 &0.0718 &0.5921\\ 9 &0.0635 &0.6218\\ 10 &0.0568 &0.6467\\ \hline \end{tabular}} \end{table}
\noindent For the iteration (\ref{blendq}) one verifies that (see (\ref{tetaq}), (\ref{Mq}), and (\ref{Zq})) \begin{equation}\label{Zq1} Z(q) = \frac{q}{(1-\gamma q)^2}C^{-1}\left(C-\gamma I_s\right)^2, \end{equation}
\noindent which is the null matrix at $q=0$ and at $\infty$. Consequently, the iteration will be $A$-convergent (and, therefore, $L$-convergent), provided that the {\em maximum amplification factor} satisfies \begin{equation}\label{rostar} \rho^* \equiv \max_{\Re(q)=0} \rho(q) ~\le 1. \end{equation}
\noindent From (\ref{Zq1}) one has that, denoting hereafter by $\sigma(C)$ the spectrum of the matrix $C$, $$\mu\in\sigma(C) ~\Leftrightarrow~\frac{q(\mu-\gamma)^2}{\mu(1-\gamma q)^2}\in\sigma(Z(q)).$$
\noindent By taking into account that
$$\max_{\Re(q)=0}\frac{|q|}{|(1-\gamma q)^2|} = \frac{1}{2\gamma},$$
\noindent one then obtains that $$\rho^* = \max_{\mu\in\sigma(C)}
\frac{|\mu-\gamma|^2}{2\gamma|\mu|}.$$
\noindent For Gauss-Legendre methods (and, hence, for any matrix $C$ having the same spectrum), it can be shown that (see \cite{BM02,BMM06}) the choice
\begin{equation}\label{gammaopt} \gamma = |\mu_{\min}|\equiv
\min_{\mu\in\sigma(C)}|\mu|,\end{equation}
\noindent minimizes $\rho^*$, which turns out to be given by \begin{equation}\label{rostarmin} \rho^* = 1 -\cos \varphi_{\min} ~<1, \qquad \varphi_{\min}={\rm Arg}(\mu_{\min}). \end{equation}
In Table~\ref{params}, we list the optimal value of the parameter $\gamma$, along with the corresponding maximum amplification factor $\rho^*$, for various values of $s$, which confirm that the iteration (\ref{blendq}) is $L$-convergent.
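The entries of Table~\ref{params} can be reproduced numerically. The following Python sketch (our own naming) builds the Butcher matrix of the $s$-stage Gauss-Legendre method, selects $\gamma$ as in (\ref{gammaopt}), and evaluates $\rho^*$ as in (\ref{rostarmin}):
\begin{verbatim}
import numpy as np

def gauss_butcher(s):
    # Butcher matrix of the s-stage Gauss-Legendre method (order 2s)
    x, _ = np.polynomial.legendre.leggauss(s)
    c = (x + 1) / 2                           # collocation nodes on [0,1]
    A = np.zeros((s, s))
    for j in range(s):
        lj = np.poly1d([1.0])                 # Lagrange basis polynomial l_j
        for i in range(s):
            if i != j:
                lj = lj * np.poly1d([1.0, -c[i]]) / (c[j] - c[i])
        A[:, j] = lj.integ()(c)               # a_{ij} = int_0^{c_i} l_j(t) dt
    return A

for s in range(2, 11):
    mu = np.linalg.eigvals(gauss_butcher(s))
    mu_min = mu[np.argmin(np.abs(mu))]
    gamma = np.abs(mu_min)                    # optimal gamma
    rho_star = 1 - np.cos(np.angle(mu_min))   # maximum amplification factor
    print(s, round(gamma, 4), round(rho_star, 4))
\end{verbatim}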
\begin{rem} We then conclude that the {\em blended iteration} (\ref{blendit}) turns out to be $L$-convergent, for any HBVM$(k,s)$ method, for all $s\ge1$ and $k\ge s$. \end{rem}
We end this chapter by emphasizing that the property of $L$-convergence has proved to be computationally very effective, as witnessed by the successful implementation of the codes {\tt BiM} and {\tt BiMD} \cite{M04,BIMcode}. We then expect good performance also for the {\em blended implementation} of HBVM$(k,s)$.
\chapter{Notes and References}\label{chap6}
The approach of using discrete line integrals has been used, at first, by Iavernaro and Trigiante, in connection with the study of the properties of the trapezoidal rule \cite{IT0,IT1,IT2}.
It has been then extended by Iavernaro and Pace \cite{IP1}, thus providing the first example of conservative methods, basically an extension of the trapezoidal rule, named {\em $s$-stage trapezoidal methods}: this is a family of energy-preserving methods of order 2, able to preserve polynomial Hamiltonian functions of arbitrarily high degree.
Later generalizations allowed Iavernaro and Pace \cite{IP2}, and then Iavernaro and Trigiante \cite{IT3}, to derive energy preserving methods of higher order.
The general approach, involving the shifted Legendre polynomial basis, which has allowed a complete analysis of HBVMs, was introduced in \cite{brugnano09bit} (see also \cite{BIT0}) and subsequently developed in \cite{BIT09}.
The Runge-Kutta formulation of HBVMs, along with their connections with collocation methods, has been studied in \cite{BIT10_1}.
The isospectral property of HBVMs has also been studied in \cite{BIT10}, where the {\em blended} implementation of the methods has also been introduced.
Computational aspects, concerning both the computational cost and the efficient numerical implementation of HBVMs, have been studied in \cite{BIS} and \cite{BIT10}.
Relevant examples have been collected in \cite{BIS1}, where the potentialities of HBVMs are clearly outlined, also demonstrating their effectiveness with respect to standard symmetric and symplectic methods.
Blended implicit methods have been studied in a series of papers \cite{B00,BM02,BM04,BM07,BM08,BM09,BM09a,BMM06,M04} and have been implemented in the two computational codes {\tt BiM} and {\tt BiMD} \cite{BIMcode}.
\end{document} |
\begin{document}
\begin{CJK*}{GBK}{song} \renewcommand{\abovewithdelims}[2]{ \genfrac{[}{]}{0pt}{}{#1}{#2}}
\title{\bf The full automorphism group of the power (di)graph of a finite group}
\author{Min Feng\quad Xuanlong Ma\quad Kaishun Wang\footnote{Corresponding author. \newline {\em E-mail address:} fgmn\[email protected] (M. Feng), [email protected] (X. Ma), [email protected] (K. Wang).}\\ {\footnotesize \em Sch. Math. Sci. {\rm \&} Lab. Math. Com. Sys., Beijing Normal University, Beijing, 100875, China} }
\date{}
\maketitle
\begin{abstract}
We describe the full automorphism group of the power (di)graph of a finite group. As an application, we settle a conjecture proposed by Doostabadi, Erfanian and Jafarzadeh in 2013.
\noindent {\em Key words:} power graph; power digraph; automorphism group.
\noindent {\em 2010 MSC:} 05C25, 20B25. \end{abstract}
\section{Introduction} We always use $G$ to denote a finite group. The {\em power digraph} $\overrightarrow{\mathcal P}_G$ has
$G$ as its vertex set, where there is an arc from $x$ to $y$ if $x\neq y$ and $y$ is a power of $x$. The {\em power graph} $\mathcal P_G$ is the underlying graph of $\overrightarrow{\mathcal P}_G$, which is obtained from $\overrightarrow{\mathcal P}_G$ by suppressing the orientation of each arc and replacing multiple edges by one edge. Kelarev and Quinn \cite{kel1,kel2} introduced the power digraph of a semigroup and called it directed power graph. Chakrabarty, Ghosh and Sen \cite{cha} defined
power graphs of semigroups. Recently, power (di)graphs have been investigated by researchers, see \cite{came,mir,mog,tam,kel3}. A detailed list of results and open questions can be found in \cite{aba}.
In 2013, Doostabadi, Erfanian and Jafarzadeh conjectured that the full automorphism group of the power graph of the cyclic group $Z_n$ is isomorphic to a direct product of symmetric groups.
\noindent{\bf Conjecture} \cite{doo} For every positive integer $n$, \begin{equation*}
{\rm Aut}(\mathcal P_{Z_n})\cong S_{\varphi(n)+1}\times\prod_{d\in D(n)\setminus\{1,n\}} S_{\varphi(d)}, \end{equation*} where $D(n)$ is the set of positive divisors of $n$, and $\varphi$ is Euler's totient function.
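For instance, for $n=12$ we have $D(12)=\{1,2,3,4,6,12\}$ and $\varphi(12)=4$, so the conjectured isomorphism reads $$ {\rm Aut}(\mathcal P_{Z_{12}})\cong S_{5}\times S_{\varphi(2)}\times S_{\varphi(3)}\times S_{\varphi(4)}\times S_{\varphi(6)}= S_5\times S_1\times S_2\times S_2\times S_2, $$ a group of order $120\cdot 1\cdot 2\cdot 2\cdot 2=960$.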
In fact, if $n$ is a prime power, then $\mathcal P_{Z_n}$ is a complete graph by \cite[Theorem 2.12]{cha}, which implies that ${\rm Aut}(\mathcal P_{Z_n})\cong S_n$. Hence, the conjecture does not hold if $n=p^m$ for a prime $p$ and an integer $m\geq 2$. The motivation of this paper is to show that this conjecture holds in all remaining cases.
In this paper we describe the full automorphism group of the power (di)graph of an arbitrary finite group. As an application, we show that this conjecture is valid whenever $n$ is not a prime power.
\section{Main results}
Denote by $\mathcal C(G)$ the set of all cyclic subgroups of $G$. For $C\in\mathcal C(G)$, let $[C]$ denote the set of all generators of $C$. Write $$ \mathcal C(G)=\{C_1,\ldots,C_k\}\textup{ and }[C_i]=\{[C_i]_1,\ldots,[C_i]_{s_i}\}. $$
Define $P(G)$ as the set of permutations $\sigma$ on $\mathcal C(G)$ preserving order, inclusion and noninclusion, i.e., $|C_i^\sigma|=|C_i|$ for each $i\in\{1,\ldots,k\}$, and $C_i\subseteq C_j$ if and only if $C_i^\sigma\subseteq C_j^\sigma$. Note that $P(G)$ is a permutation group on $\mathcal C(G)$. This group induces the faithful action on the set $G$: \begin{equation}\label{p(G)} G\times P(G)\longrightarrow G,\quad([C_i]_j,\sigma)\longmapsto [C_i^\sigma]_j. \end{equation}
For $\Omega\subseteq G$, let $S_{\Omega}$ denote the symmetric group on $\Omega$. Since $G$ is the disjoint union of $[C_1],\ldots,[C_k]$, we get the faithful group action on the set $G$: \begin{equation}\label{s1} G\times\prod_{i=1}^kS_{[C_i]}\longrightarrow G,\quad([C_i]_j,(\xi_1,\ldots,\xi_k))\longmapsto ([C_i]_j)^{\xi_i}. \end{equation}
\begin{thm}\label{mainthm1}
Let $G$ be a finite group. Then
\begin{eqnarray*}
{\rm Aut}(\overrightarrow{\mathcal P}_G)=(\prod_{i=1}^k S_{[C_i]})\rtimes P(G),
\end{eqnarray*}
where $P(G)$ and $\prod_{i=1}^k S_{[C_i]}$ act on $G$ as in {\rm (\ref{p(G)})} and {\rm (\ref{s1})}, respectively. \end{thm}
In the power graph $\mathcal P_G$, the {\em closed neighborhood} of a vertex $x$, denoted by $N[x]$, is the set of its neighbors and itself. For $x,y\in G$, define $x\equiv y$ if $N[x]=N[y]$. Observe that $\equiv$ is an equivalence relation. Let $\overline{x}$ denote the equivalence class containing $x$. Write \begin{equation*} \mathcal U(G)=\{\overline x\mid x\in G\}=\{\overline{u_1},\ldots,\overline{u_l}\}. \end{equation*} Since $G$ is the disjoint union of $\overline{u_1},\ldots,\overline{u_l}$, the following is a faithful group action on the set $G$: \begin{equation}\label{s2} G\times\prod_{i=1}^lS_{\overline{u_i}}\longrightarrow G,\quad(x,(\tau_1,\ldots,\tau_l))\longmapsto x^{\tau_i},\quad\textup{where }x\in\overline{u_i}. \end{equation}
\begin{thm}\label{mainthm2}
Let $G$ be a finite group. Then
\begin{eqnarray*}
{\rm Aut}(\mathcal P_G)=(\prod_{i=1}^l S_{\overline{u_i}})\rtimes P(G),
\end{eqnarray*}
where $P(G)$ and $\prod_{i=1}^l S_{\overline{u_i}}$ act on $G$ as in {\rm (\ref{p(G)})} and {\rm (\ref{s2})}, respectively. \end{thm}
The rest of this paper is organized as follows. In Section 3,
the induced action of ${\rm Aut} ({\mathcal P}_G)$ on $\mathcal U(G)$ is discussed.
In Section 4, we prove Theorems~\ref{mainthm1} and \ref{mainthm2}. In Section 5, we determine ${\rm Aut}(\overrightarrow{\mathcal P}_G)$ and ${\rm Aut}(\mathcal P_G)$ when $G$ is cyclic, elementary abelian, dihedral or generalized quaternion.
\section{The induced action of ${\rm Aut} ({\mathcal P}_G)$ on $\mathcal U(G)$}
In \cite{cam}, Cameron proved that each element of $\mathcal U(G)$ is a disjoint union of some $[x]$'s, where $[x]$ denotes the set of generators of $\langle x\rangle$.
\begin{prop}\label{identity}{\rm\cite[Proposition 4]{cam}} Let $e$ be the identity of $G$.
{\rm(i)} If $G=\langle x\rangle$, then $\overline e$ is $G$ or $[e]\cup[x]$ according to whether $|x|$ is a prime power or not.
{\rm(ii)} If $G$ is a generalized quaternion $2$-group, then $\overline e=[e]\cup[x]$, where $x$ is the unique involution in $G$.
{\rm(iii)} If $G$ is neither a cyclic group nor a generalized quaternion $2$-group, then $\overline e=[e]$. \end{prop}
\begin{prop}\label{closed}{\rm\cite[Proposition 5]{cam}} Let $x$ be an element of $G$. Suppose $\overline x\neq\overline e$. Then one of the following holds.
{\rm(i)} $\overline x=[x]$.
{\rm(ii)} There exist distinct elements $x_1,x_2,\ldots,x_r$ in $G$ such that $$ \overline x=[x_1]\cup[x_2]\cup\cdots\cup[x_r],\quad r\geq 2, $$ where $\langle x_1\rangle\subseteq\langle x_2\rangle\subseteq\cdots\subseteq\langle x_r\rangle$,
$|x_i|=p^{s+i}$ for some prime $p$ and integer $s\geq 0$. \end{prop}
The equivalence class $\overline e$ is said to be of {\em type} I. An equivalence class that does not contain $e$ is said to be of {\em type} II or III according to whether Proposition~\ref{closed} (i) or (ii) holds. Furthermore, if $\overline x$ is of type III, with reference to Proposition~\ref{closed} (ii), the numbers $p, r, s$ are uniquely determined by $\overline x$. We call $(p,r,s)$ its {\em parameters}.
For each $x\in G$ and $\pi\in{\rm Aut}(\mathcal P_G)$, we have $\overline x^\pi=\overline{x^\pi}.$ Hence,
${\rm Aut}(\mathcal P_G)$ induces an action on $\mathcal U(G)$ as follows: $$ \mathcal U(G)\times {\rm Aut}(\mathcal P_G)\longrightarrow\mathcal U(G),\quad (\overline x,\pi)\longmapsto\overline{x^\pi}. $$
Next we shall show that each orbit of ${\rm Aut}(\mathcal P_G)$ on $\mathcal U(G)$ consists of some equivalence classes of the same type.
Note that $\overline e$ consists of vertices whose closed neighborhoods in $\mathcal P_G$ are $G$. Hence, one gets the following result.
\begin{lemma}\label{typeI}
Each automorphism of $\mathcal P_G$ fixes $\overline e$. \end{lemma}
\begin{lemma}\label{un3}
If $\overline x$ is an equivalence class of type III with parameters $(p,r,s)$, then $|\overline x|=p^s(p^r-1)$. \end{lemma} {\it Proof.\quad} With reference to Proposition~\ref{closed} (ii), we have $$
|[x_i]|=\varphi(p^{s+i})=p^{s+i-1}(p-1), $$
which implies that
$$
|\overline x|=\sum_{i=1}^rp^{s+i-1}(p-1)=p^s(p^r-1),
$$
as desired. $
\Box
$
\begin{lemma}\label{un4}
Suppose $\overline x$ and $\overline y$ are two distinct equivalence classes of type II or III. If $\langle x\rangle\subset\langle y\rangle$, then $|\overline x|\leq|\overline y|$, with equality if and only if the following hold.
{\rm(i)} Both $\overline x$ and $\overline y$ are of type II.
{\rm(ii)} $|y|=2|x|$ and $|x|$ is odd and at least $3$. \end{lemma} {\it Proof.\quad} We divide the proof into three cases:
{\em Case 1.} $\overline x$ is of type III with parameters $(p,r,s)$.
Pick elements $x_1$ and $x_r$ in $\overline x$ of order $p^{s+1}$ and $p^{s+r}$, respectively. Then $\langle x_1\rangle\subseteq\langle x\rangle\subseteq\langle x_r\rangle$. Since $\langle x\rangle\subset\langle y\rangle$, we have $y\in N[x]=N[x_r]$. Note that any element $z$ satisfying $\langle x\rangle\subseteq\langle z\rangle\subseteq\langle x_r\rangle$ belongs to $\overline x$. Then $\langle x_r\rangle\subset\langle y\rangle$. Since $\frac{|y|}{p^{s+r}}$ is more than $1$, it has a prime divisor $p'$. Pick an element $z_0$ in $\langle y\rangle$ of order $p^{s+1}p'$. Then $z_0\in N[x_1]=N[x_r]$. Hence, one of $p^{s+1}p'$ and $p^{s+r}$ is divided by the other. In view of $r\geq 2$, we get $p'=p$. It follows that $p^{s+r+1}$ divides $|y|$, and so $|[y]|\geq p^{s+r}(p-1)\geq p^{s+r}$. Lemma~\ref{un3} implies that $|\overline x|<p^{s+r}$. Because $[y]\subseteq\overline y$, one has $|\overline x|<|\overline y|$.
{\em Case 2.} $\overline y$ is of type III with parameters $(q,t,j)$.
Pick an element $y_1$ in $\overline y$ of order $q^{j+1}$. Since any element $z$ satisfying
$\langle y_1\rangle\subseteq\langle z\rangle\subseteq \langle y\rangle$ belongs to $\overline y$, one gets $\langle x\rangle\subset \langle y_1\rangle$. Pick $y_0\in\langle y_1\rangle$ of order $q^j$. Then $\overline x\subseteq\langle y_0\rangle\setminus\{e\}$. Hence $|\overline x|\leq q^j-1<q^j(q^t-1)$. According to Lemma~\ref{un3}, we have $|\overline x|<|\overline y|$.
{\em Case 3.} $\overline x$ and $\overline y$ are of type II.
Then $|\overline x|=\varphi(|x|)$ and $|\overline y|=\varphi(|y|)$. Since $|x|$ divides $|y|$, it follows that $|\overline x|$ divides $|\overline y|$, and so $|\overline x|\leq|\overline y|$. Note that $|x|\neq|y|$ and $x\neq e$. Then $|\overline y|=|\overline x|$ if and only if $|y|=2|x|$ and $|x|$ is odd and at least $3$.
Combining all these cases, we get the desired result. $
\Box
$
\begin{lemma}\label{un1}
Suppose $\overline x$ and $\overline y$ are two distinct equivalence classes of type II or III. If $\langle x\rangle\subset\langle y\rangle$, then $\langle x^\pi\rangle\subset\langle y^\pi\rangle$ for every automorphism $\pi\in{\rm Aut}(\mathcal P_G)$. \end{lemma}
{\it Proof.\quad} Denote by $E_G$ the edge set of $\mathcal P_G$. Since $\{x,y\}\in E_G$, one gets $\{x^\pi,y^\pi\}\in E_G$. Because $\overline x^\pi\neq\overline y^\pi$, we have $\langle x^\pi\rangle\subset\langle y^\pi\rangle$ or $\langle y^\pi\rangle\subset\langle x^\pi\rangle$. Suppose for the contrary that $\langle y^\pi\rangle\subset\langle x^\pi\rangle$. By Lemma~\ref{un4}, we have $|\overline x|\leq|\overline y|$ and $|\overline y^\pi|\leq|\overline x^\pi|$. The fact that $\pi$ is a bijection implies that $|\overline x|=|\overline x^\pi|=|\overline y|=|\overline y^\pi|$. By Lemma~\ref{un4} again, the following hold:
a) For each $u\in\{x,y,x^\pi,y^\pi\}$, $\overline u$ is of type II.
b) $|y|=|x^\pi|=2|x|=2|y^\pi|$ and $|y^\pi|$ is odd and at least $3$.
Pick an element $z$ of order $2$ in $\langle y\rangle$. Then $\{z,y\}\in E_G$ and $\{z,x\}\not\in E_G$, which imply that $\{z^\pi,y^\pi\}\in E_G$ and $\{z^\pi,x^\pi\}\not\in E_G$, and hence $\langle y^\pi\rangle\subset\langle z^\pi\rangle$. Consequently, $$
|\overline z^\pi|=|\overline{z^\pi}|\geq|[z^\pi]|=\varphi(|z^\pi|)\geq\varphi(|y^\pi|)\geq 2. $$
Since $\overline z$ is of type II, we get $|\overline z|=\varphi(2)=1$, a contradiction. $
\Box
$
\begin{lemma}\label{un2}
Suppose $\overline x$ is of type II or III. If $|x|$ is a power of a prime $p$, then $|x^\pi|$ is also a power of $p$ for any $\pi\in{\rm Aut}(\mathcal P_G)$. \end{lemma}
{\it Proof.\quad} Pick any prime divisor $q$ of $|x^\pi|$. It suffices to prove $q=p$. We only need to consider the case where $G$ is not a $p$-group. Let $z^\pi$ be an element of order $q$ in $\langle x^\pi\rangle$. Proposition~\ref{identity} implies that $\overline z^\pi$ is of type II or III. It follows from Lemma~\ref{typeI} that $\overline z$ is of type II or III.
{\em Claim 1. $|\overline z^\pi|=q^r-1$ for some positive integer $r$.}
If $\overline z^\pi$ is of type II, then $|\overline z^\pi|=\varphi(q)=q-1$. If $\overline z^\pi$ is of type III, then its parameters are $(q,r,0)$, which implies that $|\overline z^\pi|=q^r-1$ by Lemma~\ref{un3}.
{\em Claim 2. $|\overline z|=p^j-1$ for some positive integer $j$.}
Since $\{z^\pi,x^\pi\}\in E_G$, one gets $\langle z\rangle\subseteq\langle x\rangle$ or $\langle x\rangle\subseteq\langle z\rangle$. The fact that $z\neq e$ implies that $p$ divides $|z|$. Pick an element $y\in\langle z\rangle$ of order $p$. Note that $\overline y$ is of type II or III. Similar to the proof of Claim 1, we get $|\overline y|=p^j-1$ for some positive integer $j$. It suffices to show that $\overline y=\overline{z}$. Suppose for the contrary that $\overline y\neq\overline{z}$. Then $\langle y\rangle\subset\langle z\rangle$. It follows from Lemma~\ref{un1} that $\langle y^{\pi}\rangle\subset\langle z^\pi\rangle$. Since $|z^\pi|$ is a prime, one has $y^{\pi}=e$. It follows that $\overline y^{\pi}$ is of type I, contrary to Lemma~\ref{typeI}.
Combining Claims 1 and 2, we get $q^r-1=p^j-1$, and so $q=p$, as desired. $
\Box
$
\begin{prop}\label{un5}
Let $\overline x\in\mathcal U(G)$ and $\pi\in{\rm Aut}(\mathcal P_G)$. Then $\overline x$ and $\overline x^\pi$ are of the same type. Moreover, if $\overline x$ is of type III, then $\overline x^\pi$ and $\overline x$ have the same parameters. \end{prop}
{\it Proof.\quad} Suppose that $\overline x$ and $\overline x^\pi$ are of distinct types. From Lemma~\ref{typeI}, we may assume that $\overline x$ is of type II and $\overline x^\pi$ is of type III with parameters $(p,r,s)$. Then $|\overline x^\pi|=p^s(p^r-1)$ by Lemma~\ref{un3}. Since $|x^\pi|$ is a power of $p$, it follows from Lemma~\ref{un2} that $|x|=p^m$ for some positive integer $m$. Then $|\overline x|=|[x]|=\varphi(p^m)=p^{m-1}(p-1)$. Consequently, we get $p^s(p^r-1)=p^{m-1}(p-1)$, and so $r=1$, a contradiction. Therefore $\overline x$ and $\overline x^\pi$ are of the same type.
Suppose $\overline x$ and $\overline x^\pi$ are of type III with parameters $(p_1,r_1,s_1)$ and $(p_2,r_2,s_2)$, respectively. According to Lemmas~\ref{un3} and \ref{un2}, we get $p_1^{s_1}(p_1^{r_1}-1)=p_2^{s_2}(p_2^{r_2}-1)$ and $p_1=p_2$, and so $(p_1,r_1,s_1)=(p_2,r_2,s_2)$, as desired. $
\Box
$
\section{Proof of main results} In this section we present the proof of Theorems~\ref{mainthm1} and \ref{mainthm2}. The following is an immediate result from (\ref{p(G)}), (\ref{s1}) and (\ref{s2}).
\begin{lemma} \label{lemma1} Let $\pi$ be a permutation on the set $G$.
{\rm(i)} If $\pi\in P(G)$, then $\langle x\rangle^\pi=\langle x^\pi\rangle$ for each $x\in G$.
{\rm(ii)} We have $\pi\in\prod_{i=1}^kS_{[C_i]}$ if and only if $[x]^\pi=[x]$ for each $x\in G$.
{\rm(iii)} We have $\pi\in\prod_{i=1}^lS_{\overline{u_i}}$ if and only if $\overline x^\pi=\overline x$ for each $x\in G$. \end{lemma}
\begin{lemma}\label{subgroups} {\rm(i)} $P(G)$ and $\prod_{i=1}^kS_{[C_i]}$ are subgroups of ${\rm Aut}(\overrightarrow{\mathcal P}_G)$.
{\rm(ii)} $P(G)$ and $\prod_{i=1}^lS_{\overline{u_i}}$ are subgroups of ${\rm Aut}(\mathcal P_G)$.
\end{lemma} {\it Proof.\quad} (i) Pick $\sigma\in P(G)$ and $\xi\in\prod_{i=1}^kS_{[C_i]}$. In order to prove $\{\sigma,\xi\}\subseteq {\rm Aut}(\overrightarrow{\mathcal P}_G)$, by (\ref{p(G)}) and (\ref{s1}), we only need to show that $(x,y)\in A_G$ implies $(x^\sigma,y^\sigma)\in A_G$ and $(x^\xi,y^\xi)\in A_G$, where $A_G$ is the arc set of $\overrightarrow{\mathcal P}_G$. Suppose $(x,y)\in A_G$. Then $\langle y\rangle\subseteq\langle x\rangle$. It follows from Lemma~\ref{lemma1} that $\langle y^\sigma\rangle\subseteq\langle x^\sigma\rangle$ and $\langle y^\xi\rangle\subseteq\langle x^\xi\rangle$. Therefore $(x^\sigma,y^\sigma)\in A_G$ and $(x^\xi,y^\xi)\in A_G$.
(ii) Note that ${\rm Aut}(\overrightarrow{\mathcal P}_G)\subseteq{\rm Aut}(\mathcal P_G)$. By (i), we have $P(G)\subseteq{\rm Aut}(\mathcal P_G)$. Pick $\tau\in\prod_{i=1}^lS_{\overline{u_i}}$ and $\{x,y\}\in E_G$. By Lemma~\ref{lemma1}, we have $x^\tau\in\overline x$ and $y^\tau\in\overline y$. If $\overline x=\overline y$, since $\overline x$ is a clique in $\mathcal P_G$, one has $\{x^\tau,y^\tau\}\in E_G$. If $\overline x\neq\overline y$, then each vertex in $\overline x$ and each vertex in $\overline y$ are adjacent in $\mathcal P_G$, which implies that $\{x^\tau, y^\tau\}\in E_G$. $
\Box
$
Write $\mathcal C'(G)=\{[x]\mid x\in G\}$. For each $[x]\in\mathcal C'(G)$ and $\pi\in{\rm Aut}(\overrightarrow{\mathcal P}_G)$, since $[x]=\{x\}\cup\{y\mid \{(x,y),(y,x)\}\subseteq A_G\}$, we have $$ [x]^\pi=\{x^\pi\}\cup\{y^\pi\mid\{(x^\pi,y^\pi),(y^\pi,x^\pi)\}\subseteq A_G\}=[x^\pi]. $$ Hence, ${\rm Aut}(\overrightarrow{\mathcal P}_G)$ induces an action on $\mathcal C'(G)$: $$ \mathcal C'(G)\times {\rm Aut}(\overrightarrow{\mathcal P}_G)\longrightarrow\mathcal C'(G),\quad ([x],\pi)\longmapsto[x^\pi]. $$
\begin{lemma}\label{normal} {\rm(i)} $P(G)$ is a subgroup of the normalizer of $\prod_{i=1}^kS_{[C_i]}$ in ${\rm Aut}(\overrightarrow{\mathcal P}_G)$.
{\rm(ii)} $P(G)$ is a subgroup of the normalizer of $\prod_{i=1}^lS_{\overline{u_i}}$ in ${\rm Aut}(\mathcal P_G)$. \end{lemma} {\it Proof.\quad} (i) Let $\sigma\in P(G)$ and $\xi\in\prod_{i=1}^kS_{[C_i]}$. For any $x\in G$, combining Lemmas~\ref{lemma1} and~\ref{subgroups}, we have $$ [x]^{\sigma^{-1}\xi\sigma}=[x^{\sigma^{-1}}]^{\xi\sigma} =[x^{\sigma^{-1}}]^\sigma=[x]. $$
It follows that $\sigma^{-1}\xi\sigma\in\prod_{i=1}^kS_{[C_i]}$, and so (i) holds.
(ii) The proof is similar to (i). $
\Box
$
For each $\overline{u_i}\in\mathcal U(G)$, by Propositions~\ref{identity} and \ref{closed}, there exist pairwise distinct $C_{i_1},\ldots,C_{i_t}\in\mathcal C(G)$ such that $\overline{u_i}=\bigcup_{j=1}^t[C_{i_j}]$. Hence, we get the following result.
\begin{lemma}\label{subgroup}
$\prod_{i=1}^kS_{[C_i]}$ is a subgroup of $\prod_{i=1}^lS_{\overline{u_i}}$. \end{lemma}
\begin{lemma}\label{intersection trivially}
$|P(G)\cap(\prod_{i=1}^kS_{[C_i]})|=1$ and $|P(G)\cap(\prod_{i=1}^lS_{\overline{u_i}})|=1$.
\end{lemma}
{\it Proof.\quad} By Lemma~\ref{subgroup}, it is enough to prove $|P(G)\cap(\prod_{i=1}^lS_{\overline{u_i}})|=1$. Pick any $\pi\in P(G)\cap(\prod_{i=1}^lS_{\overline{u_i}})$ and $x\in G$. Write $x=[C_i]_j$. Then $x^\pi=[C_i^\pi]_j$ by $\pi\in P(G)$. Since $\pi\in \prod_{i=1}^lS_{\overline{u_i}}$, one gets $x^\pi\in\overline x$. Note that $\overline x$ is a clique in $\mathcal P_G$. Then $x^\pi\in N[x]$, and so $C_i\subseteq C_i^\pi$ or $C_i^\pi\subseteq C_i$. Since $|C_i^\pi|=|C_i|$, we have $C_i^\pi=C_i$, which implies that $x^\pi=x$, as desired.
$
\Box
$
For $x\in G$, we have $\langle x\rangle=\{x\}\cup\{y\mid (x,y)\in A_G\}$. Hence, for each $\pi\in{\rm Aut}(\overrightarrow{\mathcal P}_G)$,
$$ \langle x\rangle^\pi=\{x^\pi\}\cup\{y^\pi\mid (x^\pi,y^\pi)\in A_G\}=\langle x^\pi\rangle. $$ Therefore, ${\rm Aut}(\overrightarrow{\mathcal P}_G)$ induces an action on $\mathcal C(G)$: $$ \mathcal C(G)\times {\rm Aut}(\overrightarrow{\mathcal P}_G)\longrightarrow\mathcal C(G),\quad (\langle x\rangle,\pi)\longmapsto\langle x^\pi\rangle. $$ It is routine to verify that this group action preserves order, inclusion and noninclusion. Hence, the following result holds.
\begin{lemma}\label{same}
For any $\pi\in{\rm Aut}(\overrightarrow{\mathcal P}_G)$, there exists an element $\sigma\in P(G)$ such that $\langle x\rangle^\pi=\langle x\rangle^\sigma$ for every $x\in G$. \end{lemma}
\noindent{\em Proof of Theorem~\ref{mainthm1}:} It is apparent from Lemmas~\ref{normal} and \ref{intersection trivially} that $(\prod_{i=1}^k S_{[C_i]})\rtimes P(G)$ is a subgroup of ${\rm Aut}(\overrightarrow{\mathcal P}_G)$. Pick any $\pi\in{\rm Aut}(\overrightarrow{\mathcal P}_G)$. By Lemma~\ref{same} there exists an element $\sigma\in P(G)$ such that, for any $x\in G,$ $$ \langle x^{\pi\sigma^{-1}}\rangle=\langle x\rangle^{\pi\sigma^{-1}}=\langle x\rangle, $$ which implies that $x^{\pi\sigma^{-1}}\in[x]$. Then $\pi\sigma^{-1}\in\prod_{i=1}^k S_{[C_i]}$ and $\pi\in(\prod_{i=1}^k S_{[C_i]})(P(G))$. Hence, the desired result follows. $
\Box
$
\begin{prop}\label{un6}
For any $\pi\in{\rm Aut}(\mathcal P_G)$, there exists an element $\tau\in\prod_{i=1}^lS_{\overline{u_i}}$ such that $\tau\pi\in{\rm Aut}(\overrightarrow{\mathcal P}_G)$. \end{prop} {\it Proof.\quad} Without loss of generality, assume that $\overline{u_1}$ is of type I, $\overline{u_i}$ is of type II for $2\leq i\leq d$, and $\overline{u_{j}}$ is of type III with parameters $(p_j,r_j,s_j)$ for $d+1\leq j\leq l$. According to Proposition~\ref{un5} each $\overline{u_j}^\pi$ is of type III with parameters $(p_j,r_j,s_j)$.
For each $t\in\{1,\ldots,r_j\}$, let $\{x_{t1}^{(j)},\ldots,x_{tm_{jt}}^{(j)}\}$ and $\{y_{t1}^{(j)},\ldots,y_{tm_{jt}}^{(j)}\}$ be the sets of elements of order $p_j^{s_j+t}$ in $\overline{u_j}$ and $\overline{u_j}^\pi$, respectively. Then $\tau_j: x_{tm}^{(j)}\longmapsto (y_{tm}^{(j)})^{\pi^{-1}}$
is a permutation on $\overline{u_j}$, where $1\leq t\leq r_j$ and $1\leq m\leq m_{jt}$. Write $\tau=(\tau_1,\ldots,\tau_l)$, where $\tau_1$ is the inverse of the restriction of $\pi$ to $\overline{u_1}$, and $\tau_{i}$ is the identity of $S_{\overline{u_i}}$ for $2\leq i\leq d$. Hence $\tau\in\prod_{i=1}^l S_{\overline{u_i}}$.
We claim that, for each $x\in G$, the equality $|x^{\tau\pi}|=|x|$ holds. We divide our proof into three cases.
{\em Case 1.} $\overline x$ is of type I.
Then $x^{\tau\pi}=(x^{\tau_1})^\pi=x$, and so $|x^{\tau\pi}|=|x|$.
{\em Case 2.} $\overline x$ is of type II.
Then $x^{\tau\pi}=x^\pi$. According to Proposition~\ref{un5} we obtain that $\overline x^\pi$ is of type II, which implies that $[x^\pi]=\overline x^\pi=[x]^\pi$. Hence, one has
$$
\varphi(|x^\pi|)=|[x^\pi]|=|[x]|=\varphi(|x|). $$
Suppose $|x^\pi|\neq|x|$. Without loss of generality, assume that $|x^\pi|<|x|$. Then $|x|=2|x^\pi|$ and $|x^\pi|$ is odd. Pick an element $z\in\langle x\rangle$ of order $2$. It is clear that $\overline z$ is of type II and $\overline z\neq\overline x$. From Lemma~\ref{un1} we get $\langle z^\pi\rangle\subset\langle x^\pi\rangle$. Since $\overline{z}^\pi$ is of type II, we infer that $|z^\pi|$ is odd and at least $3$. Hence $|\overline z^\pi|\geq\varphi(3)=2$, contrary to $|\overline z|=1$. Therefore $|x^\pi|=|x|$.
{\em Case 3.} $\overline x$ is of type III with parameters $(p_j,r_j,s_j)$.
Then $x=x_{tm}^{(j)}$ for some indices $t$ and $m$. Since $x^{\tau\pi}=y_{tm}^{(j)}$, we have $|x^{\tau\pi}|=|x|$.
Consequently, our claim is valid.
Finally, we show that $\tau\pi\in{\rm Aut}(\overrightarrow{\mathcal P}_G)$. Suppose $(u,v)\in A_G$. Then $\{u,v\}\in E_G$. It follows from Lemma~\ref{subgroups} that $\tau\pi\in{\rm Aut}(\mathcal P_G)$. Hence $\langle u^{\tau\pi}\rangle\subseteq\langle v^{\tau\pi}\rangle$ or $\langle v^{\tau\pi}\rangle\subseteq\langle u^{\tau\pi}\rangle$.
Since $\langle v\rangle\subseteq\langle u\rangle$, by the claim, $|v^{\tau\pi}|$ divides $|u^{\tau\pi}|$, which implies that $\langle v^{\tau\pi}\rangle\subseteq\langle u^{\tau\pi}\rangle$. So $(u^{\tau\pi},v^{\tau\pi})\in A_G$, as desired. $
\Box
$
Combining Theorem~\ref{mainthm1}, Lemmas~\ref{normal}, \ref{subgroup}, \ref{intersection trivially} and Proposition~\ref{un6}, we complete the proof of Theorem~\ref{mainthm2}.
\section{Examples} In this section we shall compute ${\rm Aut}(\overrightarrow{\mathcal P}_G)$ and ${\rm Aut}(\mathcal P_G)$ if $G$ is cyclic, elementary abelian, dihedral or generalized quaternion. We begin with cyclic groups.
\begin{example}\label{zn}
Let $n$ be a positive integer. Then
{\rm(i)} ${\rm Aut}(\overrightarrow{\mathcal P}_{Z_n})\cong\prod_{d\in D(n)}S_{\varphi(d)}.$
{\rm(ii)} ${\rm Aut}(\mathcal P_{Z_n})\cong\left\{
\begin{array}{ll}
S_n,& \textup{if $n$ is a prime power},\\ S_{\varphi(n)+1}\times\prod_{d\in D(n)\setminus\{1,n\}}S_{\varphi(d)},&\textup{otherwise}.
\end{array}\right.$ \end{example} {\it Proof.\quad} For any $d\in D(n)$, denote by $A_d$ the unique cyclic subgroup of order $d$ in $Z_n$. Note that $P(Z_n)=\mathbf 1_{\{A_d\mid d\in D(n)\}}$ and $S_{[A_d]}\cong S_{\varphi(d)}$, where $\mathbf{1}_{\Omega}$ denotes the identity map on the set $\Omega$. It follows from Theorem~\ref{mainthm1} that (i) holds. If $n$ is a prime power, then ${\rm Aut}(\mathcal P_G)\cong S_n$ by \cite[Theorem 2.12]{cha}. If $n$ is not a prime power, by \cite[Proposition 3.6]{fmw}, $$ \mathcal U(Z_n)=\{[A_d]\mid d\in D(n)\setminus\{1,n\}\}\cup\{[A_1]\cup[A_n]\}. $$ Hence (ii) holds by Theorem~\ref{mainthm2}. $
\Box
$
Example~\ref{zn} shows that the conjecture proposed by Doostabadi, Erfanian and Jafarzadeh holds if $n$ is not a prime power.
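For very small values of $n$, the statement can also be checked by brute force. The following Python sketch (our own naming; its cost grows factorially in $n$) builds the power graph of $Z_n$ and counts its automorphisms directly:
\begin{verbatim}
from itertools import permutations

def power_graph_edges(n):
    # undirected power graph of Z_n (written additively):
    # x ~ y iff x != y and one of them lies in the cyclic subgroup generated by the other
    gen = lambda x: {(k * x) % n for k in range(n)}   # cyclic subgroup <x>
    return {frozenset((x, y)) for x in range(n) for y in range(n)
            if x != y and (y in gen(x) or x in gen(y))}

def aut_order(n):
    E = power_graph_edges(n)
    is_aut = lambda p: all(frozenset((p[x], p[y])) in E
                           for x, y in map(tuple, E))
    return sum(1 for p in permutations(range(n)) if is_aut(p))

print(aut_order(6))   # 12 = |S_3 x S_1 x S_2|, as predicted by the conjecture
\end{verbatim}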
Combining Theorems~\ref{mainthm1} and ~\ref{mainthm2}, we get the following result.
\begin{prop}\label{sn}
${\rm Aut}(\overrightarrow{\mathcal P}_G)={\rm Aut}(\mathcal P_G)$ if and only if $\overline x=[x]$ for each $x\in G$. \end{prop}
Let $H$ be a group and $K$ be a permutation group on a set $Y$. The wreath product $H\wr K$ is the semidirect product $N\rtimes K$, where
$N$ is the direct product of $|Y|$ copies of $H$ (indexed by $Y$), and $K$ acts on $N$ by permuting the factors in the same way as it permutes the elements of $Y$. If $H$ is a permutation group on a set $X$, then $H\wr K$ has a natural action on $X\times Y$: $$
(X\times Y)\times (H\wr K)\longrightarrow X\times Y,\quad ((x,y_i),(h_{y_1},\ldots,h_{y_{|Y|}};k))\longmapsto(x^{h_{y_i}},y_i^k), $$
where $\{y_1,\ldots,y_{|Y|}\}=Y$.
For a prime $p$ and a positive integer $n$, let $Z^n_p$ denote the elementary abelian $p$-group, i.e., the direct product of $n$ copies of $Z_p$.
\begin{example}
Let $n\geq 2$. Then
$$
{\rm Aut}(\mathcal P_{Z_p^n})={\rm Aut}(\overrightarrow{\mathcal P}_{Z_p^n})\cong S_{p-1}\wr S_m,
$$
where $m=\frac{p^n-1}{p-1}$. \end{example}
{\it Proof.\quad} Write $\mathcal C(Z^n_p)=\{\langle e\rangle, A_1,\ldots,A_m\}$. Then each $A_i$ is isomorphic to $Z_p$ and $|A_i\cap A_j|=1$ for $i\neq j$. Hence, one has $P(Z^n_p)=\mathbf 1_{\{\langle e\rangle\}}S_{\{A_i\mid 1\leq i\leq m\}}$. Combining Theorems~\ref{mainthm1}, \ref{mainthm2} and Proposition~\ref{sn}, we get the desired result. $
\Box
$
\begin{figure}
\caption{The partition of $D_{2n}$ }
\label{D2n}
\end{figure}
\begin{example}
For $n\geq 3$, let $D_{2n}$ denote the dihedral group of order $2n$. Then
{\rm(i)} ${\rm Aut}(\overrightarrow{\mathcal P}_{D_{2n}})\cong \prod_{d\in D(n)}S_{\varphi(d)}\times S_n.$
{\rm(ii)} ${\rm Aut}(\mathcal P_{D_{2n}})\cong \left\{
\begin{array}{ll}
S_{n-1}\times S_n,&\textup{if $n$ is a prime power},\\
\prod_{d\in D(n)}S_{\varphi(d)}\times S_n,&\textup{otherwise}.
\end{array}\right.$ \end{example}
{\it Proof.\quad} Pick $a, b\in D_{2n}$ with $|a|=n$ and $|b|=2$. Then $$ \begin{array}{l} D_{2n}=\{e,a,\ldots,a^{n-1}\}\cup\{b,ab,\ldots,a^{n-1}b\},\\ \mathcal C(D_{2n})=\mathcal C(\langle a\rangle)\cup\{\langle a^ib\rangle\mid 0\leq i\leq n-1\}, \end{array} $$
as shown in Figure~\ref{D2n}. Note that $|a^ib|=2$, $|\langle a^ib\rangle \cap\langle a\rangle|=1$ and $|\langle a^ib\rangle\cap\langle a^jb\rangle|=1$ for $i\neq j$. Hence we have \begin{equation}\label{d} P(D_{2n})=\mathbf{1}_{\mathcal C(\langle a\rangle)} S_{\{\langle a^ib\rangle\mid 0\leq i\leq n-1\}}. \end{equation} By Theorem~\ref{mainthm1}, one has $$ {\rm Aut}(\overrightarrow{\mathcal P}_{D_{2n}})=(\prod_{A\in\mathcal C(\langle a\rangle)}S_{[A]}\rtimes \mathbf{1}_{\langle a\rangle})\times (\prod_{i=0}^{n-1}S_{[\langle a^ib\rangle]}\rtimes S_{\{ a^ib\mid 0\leq i\leq n-1\}}), $$
which implies (i).
Suppose $n$ is a prime power. Then $$\mathcal U(D_{2n})=\{\{e\},\{a^1,\ldots,a^{n-1}\}\}\cup\{\{a^ib\}\mid 0\leq i\leq n-1\}.$$ Theorem~\ref{mainthm2} and (\ref{d}) imply that ${\rm Aut}(\mathcal P_{D_{2n}})\cong S_{n-1}\times S_n$.
Suppose $n$ is not a prime power. Then $\overline{a^i}=[a^i]$. By Proposition~\ref{sn}, our desired result follows. $
\Box
$
Let $Q_{4n}$ denote the generalized quaternion group of order $4n$, i.e., \begin{equation}\label{q4n} Q_{4n}=\langle x,y\mid x^{2n}=e,x^n=y^2, y^{-1}xy=x^{-1}\rangle. \end{equation} The power digraph $\overrightarrow{\mathcal P}_{Q_8}$ is shown in Figure~\ref{Q8}. Observe that
$$
{\rm Aut}(\overrightarrow{\mathcal P}_{Q_8})\cong S_2\wr S_3,\quad {\rm Aut}(\mathcal P_{Q_8})\cong S_2\times (S_2\wr S_3).
$$
\begin{figure}
\caption{The power digraph of $Q_8$}
\label{Q8}
\end{figure}
\begin{figure}
\caption{The partition of $Q_{4n}$}
\label{Q4n}
\end{figure}
\begin{example}
Let $n\geq 3$. Then
{\rm(i)} ${\rm Aut}(\overrightarrow{\mathcal P}_{Q_{4n}})\cong\prod_{d\in D(2n)}S_{\varphi(d)}\times(S_2\wr S_n).$
{\rm(ii)} ${\rm Aut}(\mathcal P_{Q_{4n}})\cong
\begin{cases}
S_2\times S_{2n-2}\times (S_2\wr S_n),&\textup{if $n$ is a power of 2},\\
\prod_{d\in D(2n)}S_{\varphi(d)}\times(S_2\wr S_n),&\textup{otherwise}.
\end{cases}
$ \end{example} {\it Proof.\quad} With reference to $(\ref{q4n})$, $y^{-1}=x^ny$ and $(x^iy)^{-1}=x^{2n-i}y$ for $i\in\{1,\ldots,n-1\}$. So we have $$ \begin{array}{l} Q_{4n}=\{e,x,\ldots,x^{2n-1}\}\cup\{y,x^ny\}\cup\bigcup_{i=1}^{n-1}\{x^iy,x^{2n-i}y\},\\ \mathcal C(Q_{4n})=\mathcal C(\langle x\rangle)\cup\{\langle x^jy\rangle\mid 0\leq j\leq n-1\}, \end{array} $$ as shown in Figure~\ref{Q4n}. Then \begin{equation}\label{q} P(Q_{4n})=\mathbf 1_{\mathcal C(\langle x\rangle)} S_{\{\langle x^jy\rangle\mid 0\leq j\leq n-1\}}. \end{equation} Thus (i) holds from Theorem~\ref{mainthm1}.
Suppose $n$ is a power of $2$. Then $$ \mathcal U(Q_{4n})=\{\{e,x^n\},\langle x\rangle\setminus\{e, x^n\},\{y,x^ny\}\}\cup\{\{x^iy,x^{2n-i}y\}\mid 1\leq i\leq n-1\}. $$ Theorem~\ref{mainthm2} and (\ref{q}) imply (ii) holds.
Suppose $n$ is not a power of $2$. Then $\overline{x^i}=[x^i]$ for each $i\in\{0,1,\ldots,2n-1\}$. From Proposition~\ref{sn} we get the desired result. $
\Box
$
\end{CJK*}
\end{document} |
\begin{document}
\title{Unsaturated deformable porous media flow \\ with phase transition\footnote{The financial supports of the FP7-IDEAS-ERC-StG \#256872 (EntroPhase), of the project Fondazione Cariplo-Regione Lombardia MEGAsTAR ``Matematica d'Eccellenza in biologia ed ingegneria come accelleratore di una nuona strateGia per l'ATtRattivit\`a dell'ateneo pavese'', and of the GA\v CR Grant GA15-12227S and RVO: 67985840 are gratefully acknowledged. The paper also benefited from the support the GNAMPA (Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`a e le loro Applicazioni) of INdAM (Istituto Nazionale di Alta Matematica) for ER.}}
\author{Pavel Krej\v c\'{\i} \thanks{Institute of Mathematics, Czech Academy of Sciences, \v Zitn\'a~25, CZ-11567~Praha 1, Czech Republic, E-mail {\tt [email protected]}.}\,\,, Elisabetta Rocca \thanks{Dipartimento di Matematica, Universit\`a degli Studi di Pavia and IMATI-C.N.R., Via Ferrata 5, I-27100 Pavia, Italy, E-mail {\tt [email protected]}.} \,, and J\"urgen Sprekels \thanks{Weierstrass Institute for Applied Analysis and Stochastics, Mohrenstrasse~39, D-10117 Berlin, Germany, E-mail {\tt [email protected]}, and Department of Mathematics, Humboldt-Universit\"at zu Berlin, Unter den Linden 6, D-10099 Berlin, Germany.} }
\maketitle
\begin{abstract}\noindent In the present paper, a continuum model is introduced for fluid flow in a deformable porous medium, where the fluid may undergo phase transitions. Typically, such problems arise in modeling liquid-solid phase transformations in groundwater flows. The system of equations is derived here from the conservation principles for mass, momentum, and energy and from the Clausius-Duhem inequality for entropy. It couples the evolution of the displacement in the matrix material, of the capillary pressure, of the absolute temperature, and of the phase fraction. Mathematical results are proved under the additional hypothesis that inertia effects and shear stresses can be neglected. For the resulting highly nonlinear system of two PDEs, one ODE and one ordinary differential inclusion with natural initial and boundary conditions, existence of global in time solutions is proved by means of cut-off techniques and suitable Moser-type estimates. \end{abstract}
\section*{Introduction}\label{int}
A model for fluid flow in partially saturated porous media with thermomechanical interaction was proposed and analyzed in \cite{ak,dkr1, dkr2}. Here, we extend the model by including the effects of freezing and melting of the fluid in the pores. Typical examples, in which such situations arise, are related to groundwater flows and to the freezing-melting cycles of water sucked into the pores of concrete. Notice that the latter process forms one of the main reasons for the degradation of concrete in buildings, bridges, and roads. However, many of the governing effects in concrete like the multi-component microstructure, the breaking of pores, chemical reactions, the hysteresis of the saturation-pressure curves, and the occurrence of shear stresses, are still neglected in our model.
While often in continuum models for three-component and multi-component porous media the intention is to describe the propagation of sound waves in these media (e.g., \cite{albers,wil}), we investigate in the present paper -- instead of using partial balance equations for each component -- a continuum model combining the principles of conservation of mass and momentum with the first and the second principles of thermodynamics. In, e.~g., \cite{albers}, flow in porous media is described in Eulerian coordinates in order to incorporate, for example, the effects of fast convection. Here, instead, we assume that slow diffusion is dominant, and choose the Lagrangian description as in \cite{ak, dkr1, wil}. The resulting system of coupled ODEs and PDEs then appears to be a nonlinear extension of the linear model in \cite{show}, referred to as a simplified Biot system, to the case when also the occurrence of temperature changes and phase transitions is taken into account. In addition to the model studied in \cite{dkr1}, we include here the effects of freezing and melting. The idea is the following. The pores in the matrix material contain a mixture of H\!${}_2$\!O and gas, and H\!${}_2$\!O itself is a mixture of the liquid (water) and the solid phase (ice). That is, in addition to the other physical quantities like capillary pressure, displacement, and absolute temperature, we need to consider the evolution of a phase parameter $\chi$ representing the relative proportion of water in the H\!${}_2$\!O part and its influence on pressure changes due to the different mass densities of water and ice. Unlike in \cite{ak,dkr1,dkr2,ss}, we do not consider hysteresis in the model. We believe that the mathematical results can be extended to the case of capillary hysteresis as in \cite{ak,dkr1,dkr2}. In our model without shear stresses, elastoplastic hysteresis effects as in \cite{ak,dkr1,ss} cannot occur.
As it will be detailed in Section~\ref{mod}, we assume that the deformations are small, so that $\mathrm{\,div\,} u$ is the relative local volume change, where $u$ represents the displacement vector. Moreover, we assume that the volume of the matrix material does not change during the process, and thus the volume and mass balance equations with Darcy's law for the water flux lead to a nonlinear degenerate parabolic equation for the capillary pressure, see~\eqref{e4b}. In the equation of motion, we take into account the pressure components due to phase transition and temperature changes, and we further simplify the system in order to make it mathematically tractable by assuming that the process is quasistatic and the shear stresses are negligible. The problem of existence of solutions for the coupled system without this assumption is open and, in our opinion, very challenging. Finally, we use the balance of internal energy and the entropy inequality to derive the dynamics for absolute temperature and phases; they turn out to be, respectively, a parabolic equation for the temperature with highly nonlinear right-hand side (quadratic in the derivatives) and an ordinary differential inclusion for the phase parameter $\chi$.
Finally, let us note that -- in order to model the freezing and melting phenomena in the pores -- we have borrowed here some ideas from our earlier publications on freezing and melting in containers filled with water with rigid, elastic, or elastoplastic boundaries (cf. \cite{mb,kr,krsbottle,krsgrav,krsWil}). It was shown there how important it is to account for the difference in specific volumes of water and of ice.
There is an abundant classical mathematical literature on phase transition processes, see, e.g., the monographs \cite{bs}, \cite{fremond}, \cite{visintin}, and the references therein. It seems, however, that only few publications take into account that the mass densities and specific volumes of the phases differ. In \cite{fr1}, the authors proposed to interpret a phase transition process in terms of a balance equation for macroscopic motions, and to include the possibility of voids. Well-posedness of an initial-boundary value problem associated with the resulting PDE system was proved there, and the case of two different densities $\varrho_1$ and $\varrho_2$ for the two substances undergoing phase transitions was pursued in \cite{fr2}.
Let us also mention the papers \cite{rr1, rr2, RocRos12, RocRos14} dealing with macroscopic stresses in phase transitions models, where the different properties of the viscous (liquid) and elastic (solid) phases were taken into account and the coexisting viscous and elastic properties of the system were given a distinguished role, under the working assumption that they indeed influence the phase transition process. The model studied there includes inertia, viscous, and shear viscosity effects (depending on the phases). This is reflected in the analytical expressions of the associated PDEs for the strain $u$ and the phase parameter $\chi$: the $\chi$-dependence, e.~g., in the stress-strain relation, leads to the possible degeneracy of the elliptic operator therein. Finally, we can quote in this framework the model analyzed in~\cite{kss2} and \cite{kss}, which pertains to nonlinear thermoviscoplasticity: in the spatially one-dimensional case, the authors prove the global well-posedness of a PDE system that both incorporates hysteresis effects and models phase change but, however, does not display a degenerating character.
Another coupled system for temperature, displacement, and phase parameter has been derived in order to model the full thermomechanical behavior of shape memory alloys. A long list of references for further developments can be found in the monographs \cite{fremond} and \cite{visintin}.
The paper is organized as follows: in the next Section~\ref{mod}, we derive the model in full generality from the basic principles of continuum thermodynamics. In Section~\ref{mat}, we state the mathematical problem, the main assumptions on the data, and the main Theorem \ref{t1}, the proof of which is split into Sections~\ref{cut}, \ref{apr}, and \ref{proo}. The steps of the proof are as follows: we first cut off some of the pressure and temperature dependent terms in the system in Section~\ref{cut} by means of a cut-off parameter $R$ and solve the related problem employing a special Galerkin approximation scheme. Then, in Section~\ref{apr}, we first prove the positivity of the temperature by means of a maximum principle technique, and then we perform the -- independent of $R$ -- estimates on the system. They mainly consist of: the energy estimate, the so-called Dafermos estimate (with negative small powers of the temperature), Moser-type and then higher-order estimates for the capillary pressure and for the temperature. This allows us in Section~\ref{proo} to pass to the limit in the cut-off system as $R\to\infty$, which will conclude the proof of the existence result.
\section{The model}\label{mod} We consider a connected domain $\Omega\subset \mathbb{R}^3$ filled by a deformable matrix material with pores containing a mixture of H\!${}_2$\!O and gas, where we assume that H\!${}_2$\!O may appear in one of the two phases: water or ice. We also assume that the volume of the solid matrix remains constant during the process, and let $c_s \in (0,1)$ be the relative proportion of solid in the total reference volume. We denote, for $x\in \Omega$ and time $t\in [0,T]$, \begin{description} \item $W(x,t)\in [0,1]$ ... relative proportion of H\!${}_2$\!O in the total pore volume; \item $A(x,t)\in [0,1]$ ... relative proportion of gas in the total pore volume; \item $\chi(x,t)\in [0,1]$ ... relative proportion of water in the H\!${}_2$\!O part; \item $\xi(x,t)$ ... mass flux vector; \item $p(x,t)$ ... capillary pressure; \item $u(x,t)$ ... displacement vector; \item $\sigma(x,t)$ ... stress tensor; \item $\theta(x,t)$ ... absolute temperature. \end{description} Then $\chi W$ represents the relative proportion of water in the total pore volume, and $(1-\chi) W$ represents the relative proportion of ice in the total pore volume.
We assume that the deformations are small, so that $\mathrm{\,div\,} u$ is the relative local volume change. By hypothesis, the volume of the matrix material does not change, so that the volume balance reads \begin{equation}\label{e1} W(x,t) + A(x,t) + c_s = 1+\mathrm{\,div\,} u(x,t). \end{equation}
For $A$, we assume the functional relation \begin{equation}\label{e2} A = 1-c_s - \varphi(p)\,, \end{equation} where $\varphi$ is an increasing function that satisfies $\varphi(-\infty) = \varphi^\flat \in (0,1)$ and $\varphi(\infty) = 1-c_s$, $\varphi^\flat + c_s < 1$. This means that the porous medium cannot be made completely dry by thermomechanical processes alone. Combining \eqref{e1} with \eqref{e2}, we obtain that \begin{equation}\label{e1a} W = \varphi(p) + \mathrm{\,div\,} u\,. \end{equation}
\subsection{Mass balance}\label{mass}
Consider an arbitrary control volume $V \subset \Omega$. The water content in $V$ is given by the integral $\int_V \rho_L \chi W \,\mathrm{d} x$, where $\rho_L$ is the water mass density, and the ice content is $\int_V \rho_S (1-\chi) W \,\mathrm{d} x$, where $\rho_S$ is the ice mass density. The mass conservation principle then reads \begin{equation}\label{e3} \frac{\,\mathrm{d}}{\,\mathrm{d} t} \int_V \rho_L \chi W \,\mathrm{d} x + \int_{\partial V} \xi\cdot n\,\mathrm{d} s(x) = -\frac{\,\mathrm{d}}{\,\mathrm{d} t} \int_V \rho_S (1-\chi) W \,\mathrm{d} x\,, \end{equation} where $n$ is the unit outward normal vector to $\partial V$. In differential form, we obtain \begin{equation}\label{e4} \rho_L (\chi W)_t + \mathrm{\,div\,} \xi = -\rho_S ((1-\chi) W)_t\,. \end{equation} The right-hand side of \eqref{e4} is the positive or negative liquid water source due to the solidification or melting of the ice. We assume the water flux in the form of the Darcy law \begin{equation}\label{e5} \xi = - \mu(p) \nabla p, \end{equation} with a proportionality factor $\mu(p)>0$. This, together with \eqref{e1a} and \eqref{e4}, yields the equation \begin{equation}\label{e4b} \big((\chi{+}\rho^*(1{-}\chi))(\varphi(p) + \mathrm{\,div\,} u)\big)_t - \frac{1}{\rho_L}\mathrm{\,div\,} (\mu(p)\nabla p) = 0, \end{equation} with $\rho^* = \rho_S/\rho_L \in (0,1)$.
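For the reader's convenience, we indicate the elementary computation behind \eqref{e4b}: since $\rho^* = \rho_S/\rho_L$, the two time derivatives in \eqref{e4} combine into
$$
\rho_L (\chi W)_t + \rho_S ((1-\chi) W)_t \,=\, \rho_L \big((\chi{+}\rho^*(1{-}\chi)) W\big)_t\,,
$$
so that dividing \eqref{e4} by $\rho_L$ and inserting \eqref{e1a} for $W$ and the Darcy law \eqref{e5} for $\xi$ yields \eqref{e4b}.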
\subsection{Equation of motion}\label{moti}
The equation of motion is considered in the form \begin{equation}\label{mo1} \rho_M u_{tt} - \mathrm{\,div\,} \sigma = g\,, \end{equation} where $\rho_M$ is the mass density of the matrix material, $\sigma$ is the stress tensor, and $g$ is a volume force acting on the body (e.~g., gravity). For $\sigma$, we prescribe the constitutive equation \begin{equation}\label{mo2} \sigma = B\varepsilon_t + A\varepsilon + \big((\chi{+}\rho^*(1{-}\chi))(\lambda \mathrm{\,div\,} u - p) - \beta(\theta-\theta_c)\big)\delta\,, \end{equation} where $\varepsilon = \nabla_s u :=\frac12 (\nabla u + \nabla u^T)$ is the small strain tensor, $\delta$ is the Kronecker tensor, $B$ is a symmetric positive definite viscosity tensor, $A$ is the symmetric positive definite elasticity tensor of the matrix material, $\lambda>0$ is the bulk elasticity modulus of water, $\theta>0$ is the absolute temperature, $\theta_c>0$ is a fixed referential temperature, and $\beta \in \mathbb{R}$ is the relative solid-liquid thermal expansion coefficient. The term $\,(\chi{+}\rho^*(1{-}\chi))(\lambda \mathrm{\,div\,} u - p)\,$ accounts for the pressure component due to the phase transition.
\subsection{Energy and entropy balance}\label{ener}
We have to derive formulas for the densities of internal energy $U$ and entropy $S$ such that the energy balance equation and the Clausius--Duhem inequality hold for all processes. Let $q$ be the heat flux vector, and let $V \subset \Omega$ be again an arbitrary control volume. The total internal energy in $V$ is $\int_V U \,\mathrm{d} x$, and the total mechanical power $Q(V)$ supplied to $V$ equals $$ Q(V) = \int_V \sigma:\varepsilon_t \,\mathrm{d} x - \int_{\partial V} \frac{1}{\rho_L} p\, \xi\cdot n \,\mathrm{d} s(x)\,, $$ where $\xi$ is the fluid mass flux \eqref{e5}. We thus have that \begin{equation}\label{e11} \frac{\,\mathrm{d}}{\,\mathrm{d} t} \int_V U\,\mathrm{d} x + \int_{\partial V} q\cdot n \,\mathrm{d} s(x) =
\int_V \sigma:\varepsilon_t \,\mathrm{d} x - \int_{\partial V} \frac{1}{\rho_L} p\,\xi \cdot n \,\mathrm{d} s(x)\,. \end{equation} Again, by the Gauss formula, we obtain the energy balance equation in differential form, namely \begin{equation}\label{e12} U_t + \mathrm{\,div\,} q = \sigma:\varepsilon_t - \frac{1}{\rho_L} \mathrm{\,div\,}( p\xi)\,. \end{equation} The internal energy and entropy densities $U$ and $S$, as well as the heat flux vector $q$, have to be chosen in order to satisfy, for all processes, the Clausius--Duhem inequality \begin{equation}\label{e13} S_t + \mathrm{\,div\,}\big(\frac{q}{\theta}\big) \ge 0, \end{equation} or, taking into account the energy balance \eqref{e12}, \begin{equation}\label{e14} U_t - \theta S_t + \frac{q\cdot\nabla \theta}{\theta} \le \sigma:\varepsilon_t - \frac{1}{\rho_L} \mathrm{\,div\,}( p\xi)\,. \end{equation} We consider $\varepsilon, \chi, p, \theta$ as state variables and $U, S$ as state functions, independent of $\nabla\theta$. Hence, as a consequence of \eqref{e14}, two inequalities have to hold separately for all processes, namely \begin{equation}\label{e15} q\cdot\nabla \theta \le 0\,, \quad U_t - \theta S_t \le \sigma:\varepsilon_t - \frac{1}{\rho_L} \mathrm{\,div\,}( p\xi)\,. \end{equation} For simplicity, we assume Fourier's law for the heat flux, \begin{equation}\label{e16} q = -\kappa(\theta) \nabla\theta, \end{equation} with the heat conductivity coefficient $\kappa = \kappa(\theta) >0$. We further introduce the free energy $F$ by the formula $F = U - \theta S$, so that, in terms of $F$, the second inequality in \eqref{e15} takes the form \begin{equation}\label{e17} \quad F_t + \theta_t S \le \sigma:\varepsilon_t + \frac{1}{\rho_L} \mathrm{\,div\,}( p\mu(p)\nabla p)\,. \end{equation} We claim that the right choice of $F$ for \eqref{e17} to hold is given by \begin{eqnarray} \nonumber F &=& \frac12 A\varepsilon:\varepsilon + (\chi{+}\rho^*(1{-}\chi))\left(V(p)+ \frac{\lambda}{2} (\mathrm{\,div\,} u)^2\right)\\ \label{e18f}
&& +\, L\chi \left(1 - \frac{\theta}{\theta_c}\right)
-\beta(\theta - \theta_c)\mathrm{\,div\,} u + F_0(\theta) + I(\chi),\\ \label{e18s} S &=& -\frac{\partial F}{\partial\theta} = \frac{L}{\theta_c}\chi + \beta \mathrm{\,div\,} u - F'_0(\theta), \end{eqnarray} where \begin{equation}\label{e18a} V(p) = p \varphi(p) - \Phi(p)\,, \quad \Phi(p) = \int_0^p \varphi(\tau) \,\mathrm{d} \tau\,, \end{equation} $F_0(\theta)$ is a purely caloric component of $F$, $L>0$ is the latent heat, and $I$ is the indicator function of the interval $[0,1]$. It remains to check that if we choose the phase dynamics equation in the form \begin{equation}\label{e6} \gamma(\theta) \chi_t+ \partial I(\chi) \ni (1-\rho^*)\left(\Phi(p)+ p\mathrm{\,div\,} u - \frac{\lambda}{2}(\mathrm{\,div\,} u)^2\right) + L\left(\frac{\theta}{\theta_c}-1\right) \end{equation} with a coefficient $\gamma(\theta)>0$, then \eqref{e17} holds for all processes. Indeed, by \eqref{e18f}--\eqref{e18s} and \eqref{e4b} we have that \begin{eqnarray*} F_t + \theta_t S &=& A\varepsilon: \varepsilon_t + (\chi{+}\rho^*(1{-}\chi))(V'(p)p_t + \lambda \mathrm{\,div\,} u \mathrm{\,div\,} u_t)\\ &&+\, (1 - \rho^*)\chi_t \left(V(p)+ \frac{\lambda}{2} (\mathrm{\,div\,} u)^2\right) + L\chi_t \left(1 - \frac{\theta}{\theta_c}\right)- \beta(\theta-\theta_c)\mathrm{\,div\,} u_t\,,\\ \sigma:\varepsilon_t &=& B \varepsilon_t:\varepsilon_t + A\varepsilon: \varepsilon_t + (\chi{+}\rho^*(1{-}\chi))(\lambda \mathrm{\,div\,} u \mathrm{\,div\,} u_t - p \mathrm{\,div\,} u_t)\\ && -\, \beta(\theta-\theta_c)\mathrm{\,div\,} u_t\,,\\
\frac{1}{\rho_L}\mathrm{\,div\,} (p \mu(p)\nabla p) &=& \frac{1}{\rho_L} \mu(p)|\nabla p|^2 + p (\chi{+}\rho^*(1{-}\chi))(\varphi'(p)p_t + \mathrm{\,div\,} u_t)\\ && +\, p(1 - \rho^*)\chi_t (\varphi(p) + \mathrm{\,div\,} u)\,. \end{eqnarray*} Hence (note that $V(p) - p\varphi(p) = -\Phi(p)$), \begin{eqnarray} \nonumber F_t + \theta_t S - \sigma:\varepsilon_t - \frac{1}{\rho_L}\mathrm{\,div\,} (p \mu(p)\nabla p) &=&
- B \varepsilon_t:\varepsilon_t - \frac{1}{\rho_L} \mu(p)|\nabla p|^2 \\ \nonumber
&& \hspace{-25mm} +\, \chi_t \left(L\left(1 - \frac{\theta}{\theta_c}\right)
+ (1 - \rho^*)\left( \frac{\lambda}{2} (\mathrm{\,div\,} u)^2 - \Phi(p) - p \mathrm{\,div\,} u\right)\right)\\ \label{xx}
&=& - B \varepsilon_t:\varepsilon_t - \frac{1}{\rho_L} \mu(p)|\nabla p|^2 - \gamma(\theta)\chi_t^2 \le 0, \end{eqnarray} by virtue of \eqref{e6}, so that \eqref{e17} holds.
Now observe that \begin{eqnarray} \nonumber U &=& F + \theta S\\ \nonumber &=& \frac12 A\varepsilon:\varepsilon + (\chi{+}\rho^*(1{-}\chi))\left(V(p)+ \frac{\lambda}{2} (\mathrm{\,div\,} u)^2\right)\\ \label{e20} &&+\, L\chi + \beta\theta_c\mathrm{\,div\,} u + F_0(\theta)-\theta F_0'(\theta) + I(\chi)\,. \end{eqnarray} The derivative of the purely caloric component $F_0(\theta) - \theta F'_0(\theta)$ is the specific heat capacity $c(\theta) = -\theta F''(\theta)$. Assuming that $c(\theta) = c_0$ is a positive constant, we obtain that $F_0(\theta) = - c_0\theta \log(\theta/\theta_c)$ up to a linear function, and \begin{equation}\label{e20a} U = \frac12 A\varepsilon:\varepsilon + (\chi{+}\rho^*(1{-}\chi))\left(V(p)+ \frac{\lambda}{2} (\mathrm{\,div\,} u)^2\right) + L\chi + \beta\theta_c\mathrm{\,div\,} u + c_0 \theta + I(\chi)\,. \end{equation} We now rewrite Eq.~\eqref{e12} in a more suitable form, using \eqref{xx}. We have \begin{eqnarray} \nonumber 0 &=& U_t + \mathrm{\,div\,} q - \sigma:\varepsilon_t - \frac{1}{\rho_L} \mathrm{\,div\,}( p\mu(p)\nabla p)\\ \nonumber &=& (F+\theta S)_t + \mathrm{\,div\,} q - \sigma:\varepsilon_t - \frac{1}{\rho_L} \mathrm{\,div\,}( p\mu(p)\nabla p)\\ \label{ene4}
&=& - B \varepsilon_t:\varepsilon_t - \frac{1}{\rho_L} \mu(p)|\nabla p|^2 - \gamma(\theta)\chi_t^2 + \theta S_t + \mathrm{\,div\,} q\,, \end{eqnarray} which yields the identity \begin{equation}\label{ene5} c_0 \theta_t - \mathrm{\,div\,}(\kappa(\theta)\nabla\theta) = B \varepsilon_t:\varepsilon_t
+\frac{1}{\rho_L} \mu(p)|\nabla p|^2 + \gamma(\theta)\chi_t^2 - \frac{L}{\theta_c}\theta\chi_t - \beta \theta \mathrm{\,div\,} u_t\,. \end{equation}
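For the reader's convenience, we detail the last step: by \eqref{e18s} and the choice $c(\theta) = -\theta F_0''(\theta) = c_0$, we have
$$
\theta S_t \,=\, \theta\left(\frac{L}{\theta_c}\,\chi_t + \beta \mathrm{\,div\,} u_t - F_0''(\theta)\,\theta_t\right) \,=\, \frac{L}{\theta_c}\,\theta\chi_t + \beta\,\theta \mathrm{\,div\,} u_t + c_0\theta_t\,,
$$
so that \eqref{ene5} follows from \eqref{ene4} combined with Fourier's law \eqref{e16}.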
\section{The mathematical problem}\label{mat}
We consider the system \begin{align}\label{ae1} &\big((\chi{+}\rho^*(1{-}\chi))(\varphi(p) + \mathrm{\,div\,} u)\big)_t \,=\, \frac{1}{\rho_L}\mathrm{\,div\,} (\mu(p)\nabla p)\,,\\ \label{ae2} &\,\rho_M u_{tt} \,=\, \mathrm{\,div\,} \sigma + g\,,\\[2mm] &\hspace*{8.7mm}\sigma \,=\, B\nabla_s u_t + A\nabla_s u + ((\chi{+}\rho^*(1{-}\chi))(\lambda \mathrm{\,div\,} u - p) \,-\, \beta(\theta-\theta_c))\delta\,,\label{ae3} \end{align} \begin{align} \label{ae4} &\hspace*{8.5mm}\gamma(\theta) \chi_t+ \partial I(\chi) \,\ni\, (1-\rho^*)\left(\Phi(p)+ p\mathrm{\,div\,} u - \frac{\lambda}{2}(\mathrm{\,div\,} u)^2\right) \,+\, L\left(\frac{\theta}{\theta_c}-1\right),\\[2mm] &\label{ae5} c_0 \theta_t - \mathrm{\,div\,}(\kappa(\theta)\nabla\theta)\,=\, B \nabla_s u_t:\nabla_s u_t
+\frac{1}{\rho_L} \mu(p)|\nabla p|^2 + \gamma(\theta)\chi_t^2 \,-\, \frac{L}{\theta_c}\theta\chi_t - \beta \theta \mathrm{\,div\,} u_t\,, \end{align} for the unknown functions $p,u,\chi,\theta$, coupled with the boundary conditions \begin{eqnarray}\label{be1} u &\!\!=\!\!& 0\,,\\ \label{be2} \xi\cdot n &\!\!=\!\!& \alpha(x) (p - p^*)\,,\\ \label{be3} q \cdot n &\!\!=\!\!& \omega(x) (\theta - \theta^*)\,, \end{eqnarray} on $\partial\Omega$, where $p^*$ is a given outer pressure, $\theta^*$ is a given outer temperature, $\alpha(x) \ge 0$ is the permeability of the boundary, and $\omega(x) \ge 0$ is the heat conductivity of the boundary.
We can also simplify the problem by assuming that water is incompressible. This corresponds to the choice $\lambda = 0$, whence the system becomes \begin{eqnarray}\label{le1} \big((\chi{+}\rho^*(1{-}\chi))(\varphi(p) + \mathrm{\,div\,} u)\big)_t &\!\!=\!\!& \frac{1}{\rho_L}\mathrm{\,div\,} (\mu(p)\nabla p)\,, \\ \label{le2} \rho_M u_{tt} &\!\!=\!\!& \mathrm{\,div\,} \sigma + g\,,\\ \label{le3} \sigma &\!\!=\!\!& B\nabla_s u_t + A\nabla_s u - (p (\chi{+}\rho^*(1{-}\chi)) - \beta(\theta-\theta_c))\delta\,,\qquad\\ \label{le4} \gamma(\theta) \chi_t+ \partial I(\chi) &\!\!\ni\!\!& (1-\rho^*)\left(\Phi(p)+ p\mathrm{\,div\,} u\right) + L\left(\frac{\theta}{\theta_c}-1\right),\\ \nonumber c_0 \theta_t - \mathrm{\,div\,}(\kappa(\theta)\nabla\theta) &\!\!=\!\!& B \nabla_s u_t:\nabla_s u_t
+\frac{1}{\rho_L} \mu(p)|\nabla p|^2 + \gamma(\theta)\chi_t^2\\ \label{le5} &&-\, \frac{L}{\theta_c}\theta\chi_t - \beta \theta \mathrm{\,div\,} u_t\,. \end{eqnarray} We further simplify the system by assuming that the process is quasistatic and that the shear stresses are negligible. Then \eqref{le2}--\eqref{le3} can be reduced to \begin{eqnarray} \label{le2a} 0 &\!\!=\!\!& \mathrm{\,div\,} \sigma + g\,,\\ \label{le3a} \sigma &\!\!=\!\!& (\nu \mathrm{\,div\,} u_t + \lambda_M \mathrm{\,div\,} u - p (\chi{+}\rho^*(1{-}\chi)) - \beta(\theta-\theta_c))\delta\,. \end{eqnarray} Assuming that the force $g$ admits a potential $G$, that is, $g = \nabla G$, this yields \begin{equation}\label{le6} \nu \mathrm{\,div\,} u_t + \lambda_M \mathrm{\,div\,} u - p (\chi{+}\rho^*(1{-}\chi)) - \beta(\theta-\theta_c) \,=\, -G + H(t)\,, \end{equation} where $H(t)$ is an ``integration constant'', $\nu$ is the bulk viscosity coefficient, and $\lambda_M$ is the bulk elasticity modulus of the matrix material. In view of the boundary condition \eqref{be1}, we have that \begin{equation}\label{le7}
H(t)\, =\, - \frac{1}{|\Omega|}\int_{\Omega} (p (\chi{+}\rho^*(1{-}\chi)) + \beta(\theta-\theta_c) -G)(x,t)\,\mathrm{d} x\,. \end{equation} With the new unknown function $w = \mathrm{\,div\,} u$, which represents the {\em relative volume change\/}, the system \eqref{le1}--\eqref{le5} then becomes
\begin{eqnarray}\label{lu1} \big((\chi{+}\rho^*(1{-}\chi))(\varphi(p) + w)\big)_t &\!\!=\!\!& \frac{1}{\rho_L}\mathrm{\,div\,} (\mu(p)\nabla p)\,, \\[1mm] \label{lu2} \nu w_t + \lambda_M w &\!\!=\!\!& p (\chi{+}\rho^*(1{-}\chi)) + \beta(\theta-\theta_c) - G + H(t)\,,\\[1mm] \label{lu4} \gamma(\theta) \chi_t+ \partial I(\chi) &\!\!\ni\!\!& (1-\rho^*)\left(\Phi(p)+ p w\right) + L\left(\frac{\theta}{\theta_c}-1\right),\\[1mm] \label{lu5} c_0 \theta_t - \mathrm{\,div\,}(\kappa(\theta)\nabla\theta) &\!\!=\!\!& \nu w_t^2
+\frac{1}{\rho_L} \mu(p)|\nabla p|^2 + \gamma(\theta)\chi_t^2 - \frac{L}{\theta_c}\theta\chi_t - \beta \theta w_t\,. \end{eqnarray} We prescribe the initial conditions \begin{align} \label{inip} p(x,0) &\,=\, p^0(x)\,,\\ \label{iniu} w(x,0) &\,=\, w^0(x)\,,\\ \label{inich} \chi(x,0) &\,=\, \chi^0(x)\,,\\ \label{init} \theta(x,0) &\,=\, \theta^0(x)\,. \end{align} The weak formulation of Problem \eqref{lu1}--\eqref{lu5} reads as follows: \begin{eqnarray}\label{wu1} \int_{\Omega}\left(((\chi{+}\rho^*(1{-}\chi)) (\varphi(p)+w))_t \eta +\frac{1}{\rho_L}\mu(p)\nabla p\cdot\nabla \eta\right)\,\mathrm{d} x &\!\!=\!\!& \int_{\partial\Omega} \alpha(x)(p^*-p)\eta \,\mathrm{d} s(x),\qquad \\ \label{wu2} \nu w_t + \lambda_M w - p (\chi{+}\rho^*(1{-}\chi)) - \beta(\theta-\theta_c) &\!\!=\!\!& - G + H(t)\quad\mbox{a.~e.},\\ \label{wu4}
\gamma(\theta) \chi_t + \partial I(\chi) - (1-\rho^*)(\Phi(p)+pw) &\!\!\ni\!\!& L\left(\frac{\theta}{\theta_c}-1\right) \quad\mbox{a.~e.},\\ \label{wu5} \int_{\Omega}\left(c_0\theta_t - \gamma(\theta)\chi_t^2 + \frac{L}{\theta_c}\theta\chi_t - \nu w_t^2 + \beta \theta w_t\right) \zeta\,\mathrm{d} x && \\ \nonumber
+\int_{\Omega}\left(- \frac{1}{\rho_L}\mu(p)|\nabla p|^2 \zeta + \kappa(\theta) \nabla \theta\cdot\nabla \zeta\right)\,\mathrm{d} x &=& \int_{\partial\Omega} \omega(x)(\theta^*-\theta)\zeta \,\mathrm{d} s(x)\,, \end{eqnarray} almost everywhere in $(0,T)$ and for all test functions $\eta\in W^{1,2}(\Omega)$ and $\zeta \in W^{1,q^*}(\Omega)$, with some $q^* > 1$ that will be specified below in Theorem \ref{t1}.
\vspace*{3mm} \begin{hypothesis}\label{h1} {\rm We fix a time interval $[0,T]$ and assume that the data of Problem \eqref{wu1}--\eqref{wu5} have the following properties:} \begin{itemize} \item[{\rm (i)}] $\gamma : [0,\infty) \to [0,\infty)$ {\rm is continuous; $\exists 0 < c_\gamma < C_\gamma: c_\gamma(1+\theta) \le \gamma(\theta) \le C_\gamma(1+\theta)$ for all $\theta \ge 0$;} \item[{\rm (ii)}] $\kappa : [0,\infty) \to [0,\infty)$ {\rm is continuous; $\exists 0<c_\kappa< C_\kappa$, $0< a < 1$, $a< \hat a < \frac{16}{5} + \frac65 a: c_\kappa (1+ \theta^{1+a}) \le \kappa(\theta) \le C_\kappa (1+ \theta^{1+\hat a})$ for all $\theta\ge 0$;} \item[{\rm (iii)}] {\rm $\theta^0 \in W^{1,2}(\Omega) \cap L^\infty(\Omega)$, $\theta^*\in L^\infty(\partial\Omega\times (0,T))$, $\theta^*_t\in L^2(\partial\Omega\times (0,T))$, $\exists \bar \theta>0: \theta^0(x) \ge \bar\theta$, $\theta^*(x,t) \ge \bar \theta$;} \item[{\rm (iv)}] $\exists\, 0 < \hat\delta \le \delta < 1/4,\ \exists\, 0<c_\varphi< C_\varphi$ {\rm such that for all
$p\in \mathbb{R}$ we have that\\$c_\varphi \max\{1,|p|\}^{-1-\delta} \le \varphi'(p) \le C_\varphi \max\{1,|p|\}^{-1-\hat\delta}$;} \item[{\rm (v)}] {\rm $\exists 0 < c_\mu < C_\mu: c_\mu \le \mu(p) \le C_\mu$ for all $p \in \mathbb{R}$;} \item[{\rm (vi)}] {\rm $p^0 \in W^{1,2}(\Omega) \cap L^\infty(\Omega)$, $p^*\in L^\infty(\partial\Omega\times (0,T)) \cap L^2(0,T; W^{1,2}(\partial\Omega))$, $p^*_t\in L^2(\partial\Omega\times (0,T))$;} \item[{\rm (vii)}] {\rm $w^0, \chi^0\in L^\infty(\Omega)$, $\chi^0(x) \in [0,1]$ a.\,e., $\int_{\Omega} w^0(x)\,\mathrm{d} x = 0$; \item[{\rm (viii)}] $G\in L^\infty(\Omega\times (0,T))$, $G_t\in L^2(\Omega\times (0,T))$;} \item[{\rm (ix)}] {\rm $\Omega \subset \mathbb{R}^3$ is a bounded connected set of class $C^{1,1}$, $\alpha: \partial\Omega \to [0,\infty)$ is Lipschitz continuous, $\omega \in L^\infty(\partial\Omega)$, $\omega(x) \ge 0$ a.\,e., $\int_{\partial\Omega} \alpha(x)\,\mathrm{d} s(x) >0$, $\int_{\partial\Omega} \omega(x)\,\mathrm{d} s(x) >0$.} \end{itemize} \end{hypothesis}
It is worth noting that it follows from \eqref{lu2} and (vii), using the definition of the functions $G$ and $H$, that \begin{equation} \label{mean} \int_{\Omega} w(x,t)\,\mathrm{d} x\,=\,\int_{\Omega} w^0(x)\,\mathrm{d} x\,=\,0 \quad\forall\, t\in [0,T]. \end{equation}
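For the reader's convenience, we sketch the argument: integrating \eqref{lu2} over $\Omega$ and using the definition \eqref{le7} of $H(t)$, we see that $m(t) := \int_{\Omega} w(x,t)\,\mathrm{d} x$ satisfies
$$
\nu \dot m(t) + \lambda_M m(t) \,=\, \int_{\Omega}\big(p\, (\chi{+}\rho^*(1{-}\chi)) + \beta(\theta-\theta_c) - G\big)(x,t)\,\mathrm{d} x + |\Omega|\, H(t) \,=\, 0\,,
$$
and $m(0) = \int_{\Omega} w^0(x)\,\mathrm{d} x = 0$ by Hypothesis \ref{h1}\,(vii), whence $m \equiv 0$ on $[0,T]$.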
The main result of this paper is the following existence theorem.
\begin{theorem}\label{t1} Let Hypothesis \ref{h1} hold true. Then there exists a solution $(p,w,\chi,\theta)$ to the system \eqref{inip}--\eqref{wu5}, \eqref{le7}, with the regularity \begin{align} \label{regu1} &p \in L^\infty(\Omega\times (0,T)), \quad p_t, \nabla\theta \in L^2(\Omega\times (0,T)), \quad \nabla p \in L^\infty(0,T;L^2(\Omega)),\\[1mm] \label{regu2} &\theta, w_t \in L^{\bar p}(\Omega\times (0,T)), \quad w,\chi_t \in L^\infty(0,T;L^{\bar p}(\Omega)) \,\mbox{ for \,$\bar p < 8+a$},\\[1mm] \label{regu3} &\theta_t \in L^2(0,T;W^{-1,q^*}(\Omega)) \mbox{ with \,$q^*>1$\, given by } (\ref{qstar}). \end{align} \end{theorem}
The proof of Theorem \ref{t1} is divided into several steps, each of which constitutes a separate section of this paper.
\section{Cut-off system}\label{cut}
The strategy for solving Problem \eqref{wu1}--\eqref{wu5} and proving Theorem \ref{t1} is the following: we choose a parameter $R>0$ and first solve a cut-off system with the intention to let $R$ tend to infinity. More precisely, for $R>0$ and $z \in \mathbb{R}$ we denote by $$Q_R(z) = \max\{-R, \min\{z, R\}\}$$ the projection onto $[-R,R]$, and set $$P_R(z) = z - Q_R(z).$$ We further denote \begin{equation}\label{ce1} \varphi_R(p) = \varphi(p) + P_R(p)\,, \quad \Phi_R(p) = \int_0^p \varphi_R(\tau) \,\mathrm{d} \tau\,, \quad V_R(p) = p \varphi_R(p) - \Phi_R(p)\,, \end{equation} and \begin{equation}\label{ce2} \gamma_R(p,\theta) = \left(1+ (p^2 - R^2)^+\right)\gamma(Q_R(\theta^+)), \end{equation}
for $p,\theta \in \mathbb{R}$, and replace \eqref{wu1}--\eqref{wu5} by the cut-off system \begin{eqnarray}\label{cu1} \int_{\Omega}\left(((\chi{+}\rho^*(1{-}\chi)) (\varphi_R(p)+w))_t \eta +\frac{1}{\rho_L}\mu(p)\nabla p\cdot\nabla \eta\right)\,\mathrm{d} x &\!\!=\!\!& \int_{\partial\Omega} \alpha(x)(p^*-p)\eta \,\mathrm{d} s(x),\qquad \\[2mm] \label{cu2} \nu w_t + \lambda_M w - p (\chi{+}\rho^*(1{-}\chi)) - \beta(Q_R(\theta^+)-\theta_c) &\!\!=\!\!& - G + H_R(t)\quad\mbox{a.~e.},\\[2mm]
\label{cu4}
\gamma_R(p,\theta) \chi_t + \partial I(\chi) - (1-\rho^*)(\Phi_R(p)+pw) &\!\!\ni\!\!& L\left(\frac{Q_R(\theta^+)}{\theta_c}-1\right)\quad\mbox{a.~e.}, \end{eqnarray}
\vspace*{-9mm} \begin{align} &\int_{\Omega}\left(c_0\theta_t \zeta + \kappa(Q_R(\theta^+)) \nabla \theta\cdot\nabla \zeta\right)\,\mathrm{d} x \,
- \int_{\Omega}\left(\frac{1}{\rho_L}\mu(p)Q_R(|\nabla p|^2) + \gamma_R(p,\theta)\chi_t^2 + \nu w_t^2\right) \zeta \,\mathrm{d} x \nonumber\\ \label{cu5} &\quad+\int_{\Omega} Q_R(\theta^+)\left(\frac{L}{\theta_c}\chi_t + \beta w_t\right) \zeta\,\mathrm{d} x \,\,=\,\, \int_{\partial\Omega} \omega(x)(\theta^*-\theta)\zeta \,\mathrm{d} s(x), \end{align} for all test functions $\eta,\zeta \in W^{1,2}(\Omega)$, with \begin{equation}\label{le7a}
H_R(t) = - \frac{1}{|\Omega|}\int_{\Omega} (p (\chi{+}\rho^*(1{-}\chi)) + \beta(Q_R(\theta^+)-\theta_c) -G)\,\mathrm{d} x\,. \end{equation}
For the system \eqref{cu1}--\eqref{le7a}, we prove the following result.
\begin{proposition}\label{t2} Let Hypothesis \ref{h1} hold and let $R>0$ be given. Then there exists a solution $(p,w,\chi,\theta)$ to \eqref{cu1}--\eqref{le7a}, \eqref{inip}--\eqref{init} with the regularity $p,w,\chi,\theta,w_t \in L^q(\Omega; C[0,T])$ for $1\le q < 3$, $p_t,\theta_t \in L^2(\Omega\times (0,T))$, and $\nabla p, \nabla\theta, \chi_t \in L^\infty(0,T;L^2(\Omega))$. \end{proposition}
\begin{pf}{Proof of Proposition \ref{t2}}\ Let $$M(p) := \int_0^p \mu(\tau)\,\mathrm{d} \tau , \quad K_R(\theta) := \int_0^\theta \kappa(Q_R(\tau^+))\,\mathrm{d} \tau,$$ and set $v = M(p)$, $z = K_R(\theta)$. Then \eqref{cu1}--\eqref{cu5} is transformed into the system \begin{eqnarray}\label{ku1} \int_{\Omega}\left(((\chi{+}\rho^*(1{-}\chi))(\varphi_R(p)+w))_t \eta + \frac{1}{\rho_L}\nabla v\cdot\nabla \eta\right)\,\mathrm{d} x &\!\!=\!\!& \int_{\partial\Omega} \alpha(x)(p^*-p)\eta \,\mathrm{d} s(x),\qquad \\[2mm] \label{ku2} \nu w_t + \lambda_M w - p (\chi{+}\rho^*(1{-}\chi)) - \beta(Q_R(\theta^+)-\theta_c) &\!\!=\!\!& - G + H_R(t)\quad\mbox{a.~e.},\\ \label{ku4}
\gamma_R(p,\theta) \chi_t + \partial I(\chi) - (1-\rho^*)(\Phi_R(p)+pw) &\!\!\ni\!\!& L\left(\frac{Q_R(\theta^+)}{\theta_c}-1\right)\quad\mbox{a.~e.}, \end{eqnarray}
\vspace*{-9mm} \begin{align} \nonumber &\int_{\Omega}\left(c_0\theta_t \zeta + \nabla z\cdot\nabla \zeta\right)\,\mathrm{d} x \,-
\int_{\Omega}\left(\frac{1}{\rho_L}\mu(p)Q_R(|\nabla p|^2) + \gamma_R(p,\theta)\chi_t^2 + \nu w_t^2\right) \zeta \,\mathrm{d} x \\[1mm] \label{ku5} &\quad+\int_{\Omega} Q_R(\theta^+)\left(\frac{L}{\theta_c}\chi_t + \beta w_t\right) \zeta\,\mathrm{d} x \,\,=\,\, \int_{\partial\Omega} \omega(x)(\theta^*-\theta)\zeta \,\mathrm{d} s(x), \end{align} which we solve by Galerkin approximations. To this end, let $\{e_k; k=0,1,\dots\}$ denote the complete orthonormal system of eigenfunctions of the problem \begin{equation}\label{eigen} -\Delta e_k = \lambda_k e_k \ \mbox{ in\ } \Omega \,, \quad \nabla e_k \cdot n = 0 \ \mbox{ on\ } \partial\Omega\,. \end{equation} We approximate $v$ and $z$ by the finite sums \begin{equation}\label{ge1} v^{(n)}(x,t) = \sum_{k=0}^n v_k(t) e_k(x)\,, \quad z^{(n)}(x,t) = \sum_{k=0}^n z_k(t) e_k(x)\,, \end{equation} where $v_k, z_k, w^{(n)}, \chi^{(n)}$ satisfy the system \begin{eqnarray} \nonumber &&\hspace{-16mm}\int_{\Omega}\left(((\chi^{(n)}{+}\rho^*(1{-}\chi^{(n)}))(\varphi_R(p^{(n)})+w^{(n)}))_t e_k + \frac{1}{\rho_L}\nabla v^{(n)}\cdot\nabla e_k\right)\,\mathrm{d} x \\ \label{gu1} &=& \int_{\partial\Omega} \alpha(x)(p^*-p^{(n)}) e_k \,\mathrm{d} s(x), \quad k=0,1, \dots, n,\\[2mm] \nonumber &&\hspace{-16mm}\nu w^{(n)}_t + \lambda_M w^{(n)} - p^{(n)}(\chi^{(n)} {+} \rho^*(1{-}\chi^{(n)})) - \beta(Q_R((\theta^{(n)})^+)-\theta_c) \\[1mm] \label{gu2} &=& - G + H^{(n)}_R(t)\quad\mbox{a.~e.},\\[2mm] \nonumber &&\hspace{-16mm} \gamma_R(p^{(n)},\theta^{(n)}) \chi^{(n)}_t + \partial I(\chi^{(n)}) - (1-\rho^*)(\Phi_R(p^{(n)})+p^{(n)} w^{(n)})\\[1mm] \label{gu4} &\ni& L\left(\frac{Q_R((\theta^{(n)})^+)}{\theta_c}-1\right)\quad\mbox{a.~e.},\\[2mm] \nonumber &&\hspace{-16mm}\int_{\Omega}\left(c_0\theta^{(n)}_t e_k + \nabla z^{(n)}\cdot\nabla e_k\right) + Q_R((\theta^{(n)})^+)\left(\frac{L}{\theta_c}\chi^{(n)}_t + \beta w^{(n)}_t\right) e_k\,\mathrm{d} x \\ \nonumber
&&- \int_{\Omega}\left(\frac{1}{\rho_L}\mu(p^{(n)})Q_R(|\nabla p^{(n)}|^2) + \gamma_R(p^{(n)},\theta^{(n)})(\chi^{(n)}_t)^2 + \nu (w^{(n)}_t)^2\right) e_k \,\mathrm{d} x \\
H^{(n)}_R(t) := - \frac{1}{|\Omega|}\int_{\Omega} (p^{(n)}(\chi^{(n)} + \rho^*(1{-}\chi^{(n)})) + \beta(Q_R((\theta^{(n)})^+)-\theta_c) -G)\,\mathrm{d} x\,, \end{equation} and with the initial conditions \begin{align} \label{inipn} v_k(0) &= \int_{\Omega} M(p^0(x)) e_k(x)\,\mathrm{d} x\,,\\ \label{initn} z_k(0) &= \int_{\Omega} K_R(\theta^0(x)) e_k(x)\,\mathrm{d} x\,,\\ \label{iniun} w^{(n)}(x,0) &= w^0(x)\,,\\ \label{inichn} \chi^{(n)}(x,0) &= \chi^0(x)\,. \end{align} This is an easy ODE system that admits a unique solution on some interval $[0,T_n) \subset [0,T]$. Moreover, the solution $w^{(n)}$ of \eqref{gu1} enjoys the explicit representation \begin{eqnarray}\nonumber && w^{(n)}(x,t) \,=\, \hbox{\rm e}^{-(\lambda_M/\nu)t} w^0(x)+\frac{1}{\nu}\int_0^t \hbox{\rm e}^{(\lambda_M/\nu)(t'-t)}(-G + H^{(n)}_R)(x,t')\,\mathrm{d} t'\\ \label{n1} &&\hspace{5mm} +\, \frac{1}{\nu} \int_0^t \hbox{\rm e}^{(\lambda_M/\nu)(t'-t)} \left(p^{(n)}(\chi^{(n)} {+} \rho^*(1{-}\chi^{(n)})) + \beta(Q_R((\theta^{(n)})^+)-\theta_c)\right)(x,t')\,\mathrm{d} t'.\qquad \end{eqnarray} Also \eqref{gu4} is of a standard form, namely, \begin{equation}\label{n2} \chi^{(n)}_t + \partial I(\chi^{(n)}) \ni F^{(n)}, \end{equation} with \begin{equation}\label{n3} F^{(n)} = (1-\rho^*) \frac{\Phi_R(p^{(n)})+p^{(n)} w^{(n)}}{\gamma_R(p^{(n)},\theta^{(n)})} + \frac{L(Q_R((\theta^{(n)})^+) -\theta_c)}{\theta_c\gamma_R(p^{(n)},\theta^{(n)})}, \end{equation} or, equivalently, \begin{equation}\label{n3a} \chi^{(n)} \in [0,1]\,, \quad (F^{(n)} - \chi^{(n)}_t)(\chi^{(n)} - \tilde\chi) \ge 0 \ \mbox{ a.\,e. }\ \forall \tilde\chi \in [0,1]. \end{equation} By virtue of \eqref{n1}--\eqref{n3}, we have for all $(x,t) \in \Omega\times (0,T_n)$ the inequalities \begin{equation}\label{ge3} \left. \begin{array}{rcl}
|w^{(n)}(x,t)|+|\chi^{(n)}_t(x,t)| &\le& C_R \Big(1+\int_0^t|p^{(n)}(x,t')|\,\mathrm{d} t'
+ \int_0^t\int_{\Omega}|p^{(n)}(x',t')|\,\mathrm{d} x'\,\mathrm{d} t'\Big)\\[2mm]
|w^{(n)}_t(x,t)| &\le& C_R \Big(1+|p^{(n)}(x,t)| + \int_0^t|p^{(n)}(x,t')|\,\mathrm{d} t'\\
&&+\, \int_{\Omega} |p^{(n)}(x',t)|\,\mathrm{d} x' + \int_0^t\int_{\Omega}|p^{(n)}(x',t')|\,\mathrm{d} x'\,\mathrm{d} t'\Big) \end{array} \right\} \end{equation} where, here and in the following, $C_R>0$ denotes possibly different constants which may depend on $R$ and on the data, but not on $n$.
We now derive some a priori estimates for the solutions to the Galerkin system. To begin with, we test \eqref{gu1} by $v_k(t)$ and sum over $k=0,1, \dots, n$ to obtain the identity (note that $v^{(n)} = M(p^{(n)})$, by definition) \begin{eqnarray}\nonumber &&\hspace{-12mm} (1-\rho^*) \int_{\Omega} \chi^{(n)}_t(\varphi_R(p^{(n)}) + w^{(n)}) M(p^{(n)})\,\mathrm{d} x + \int_{\Omega} (\chi^{(n)}{+}\rho^*(1{-}\chi^{(n)}))\varphi_R(p^{(n)})_t M(p^{(n)})\,\mathrm{d} x \\ \nonumber &&+\,\int_{\Omega}(\chi^{(n)}{+}\rho^*(1{-}\chi^{(n)})) w^{(n)}_t M(p^{(n)})\,\mathrm{d} x
+ \frac{1}{\rho_L}\int_{\Omega}|\nabla v^{(n)}|^2 \,\mathrm{d} x\\ \label{gn1} &&+\, \int_{\partial\Omega} \alpha(x)(p^{(n)}-p^*)M(p^{(n)})\,\mathrm{d} s(x) = 0. \end{eqnarray} We rewrite the first term of \eqref{gn1}, using the identity \begin{eqnarray}\nonumber \int_{\Omega} \chi^{(n)}_t(\varphi_R(p^{(n)}) + w^{(n)}) M(p^{(n)})\,\mathrm{d} x &=& \int_{\Omega} \chi^{(n)}_t(\Phi_R(p^{(n)}) + p^{(n)} w^{(n)}) \frac{M(p^{(n)})}{p^{(n)}}\,\mathrm{d} x\\ \label{gn2} &&+\, \int_{\Omega} \chi^{(n)}_t V_R(p^{(n)})\frac{M(p^{(n)})}{p^{(n)}} \,\mathrm{d} x. \end{eqnarray} From \eqref{gu4} it follows that for a.\,e. $(x,t) \in \Omega\times (0,T)$ we have, by Young's inequality, \begin{eqnarray}\nonumber &&\hspace{-12mm}(1-\rho^*)\chi^{(n)}_t(\Phi_R(p^{(n)}) + p^{(n)} w^{(n)}) \frac{M(p^{(n)})}{p^{(n)}}\\ \nonumber &=& \frac{M(p^{(n)})}{p^{(n)}}\left(
\gamma_R(p^{(n)},\theta^{(n)}) \bigl|\chi^{(n)}_t\bigr|^2 + \frac{L}{\theta_c}(\theta_c - Q_R((\theta^{(n)})^+)) \chi_t^{(n)}\right)\\ \label{gn3}
&\ge& \frac{c_{\mu}}{2} \gamma_R(p^{(n)},\theta^{(n)}) \bigl|\chi^{(n)}_t\bigr|^2 - C_R\,. \end{eqnarray} The second term in \eqref{gn1} can be rewritten as \begin{eqnarray}\nonumber \int_{\Omega} (\chi^{(n)}{+}\rho^*(1{-}\chi^{(n)}))\varphi_R(p^{(n)})_t M(p^{(n)})\,\mathrm{d} x &=& \frac{\,\mathrm{d}}{\,\mathrm{d} t} \int_{\Omega} (\chi^{(n)}{+}\rho^*(1{-}\chi^{(n)})) V_{R,M}(p^{(n)})\,\mathrm{d} x\\ \label{gn4} &&-\, (1-\rho^*)\int_{\Omega} \chi^{(n)}_t V_{R,M}(p^{(n)})\,\mathrm{d} x, \end{eqnarray} where we denote \begin{equation}\label{gn5} V_{R,M}(p) = \int_0^p \varphi_R'(\tau) M(\tau)\,\mathrm{d} \tau\,. \end{equation} We see, in particular, that there exist constants $ 0 < c_{R,\mu} < C_{R,\mu}$ such that $$ c_{R,\mu} p^2 \le V_{R,M}(p) \le C_{R,\mu} p^2 $$ for all $p \in \mathbb{R}$. Combining \eqref{gn2}--\eqref{gn5} with \eqref{gn1}, we obtain that \begin{eqnarray}\nonumber &&\hspace{-12mm}\frac{\,\mathrm{d}}{\,\mathrm{d} t} \int_{\Omega} (\chi^{(n)}{+}\rho^*(1{-}\chi^{(n)})) V_{R,M}(p^{(n)})\,\mathrm{d} x
+ \frac{c_{\mu}}{2} \int_{\Omega}\gamma_R(p^{(n)},\theta^{(n)}) \bigl|\chi^{(n)}_t\bigr|^2 \,\mathrm{d} x
+ \frac{1}{\rho_L}\int_{\Omega}|\nabla v^{(n)}|^2 \,\mathrm{d} x\\ \nonumber &&+\, \int_{\partial\Omega} \alpha(x)(p^{(n)}-p^*)M(p^{(n)})\,\mathrm{d} s(x)\\ \nonumber &\le& C_R \,- \int_{\Omega}(\chi^{(n)}{+}\rho^*(1{-}\chi^{(n)})) w^{(n)}_t M(p^{(n)})\,\mathrm{d} x\\ \label{gn6} &&+\, (1-\rho^*)\int_{\Omega} \chi^{(n)}_t \left(V_{R,M}(p^{(n)}) - V_R(p^{(n)})\frac{M(p^{(n)})}{p^{(n)}}\right)\,\mathrm{d} x. \end{eqnarray} By \eqref{ge3} and H\"older's inequality, we have that \begin{equation}\label{gn6a}
\left|\int_{\Omega}(\chi^{(n)}{+}\rho^*(1{-}\chi^{(n)})) w^{(n)}_t M(p^{(n)})(x,t)\,\mathrm{d} x\right|
\le C_R \int_{\Omega} \left(|p^{(n)}|^2(x,t) + \int_0^t |p^{(n)}|^2(x,t') \,\mathrm{d} t'\right)\,\mathrm{d} x, \end{equation} and, similarly, \begin{eqnarray} \nonumber
&&\hspace{-12mm}\left|\int_{\Omega} \chi^{(n)}_t \left(V_{R,M}(p^{(n)}) - V_R(p^{(n)})\frac{M(p^{(n)})}{p^{(n)}}\right)\,\mathrm{d} x\right|
\le C_R \int_{\Omega} |\chi^{(n)}_t| |p^{(n)}|^2 \,\mathrm{d} x \\[1mm] \nonumber
&\le& \frac{c_{\mu}}{4} \int_{\Omega}\gamma_R(p^{(n)},\theta^{(n)}) \bigl|\chi^{(n)}_t\bigr|^2 \,\mathrm{d} x
+ C_R \int_{\Omega} \frac{|p^{(n)}|^4}{\gamma_R(p^{(n)},\theta^{(n)})}\,\mathrm{d} x \\ [1mm]\label{gn7}
&\le& \frac{c_{\mu}}{4} \int_{\Omega}\gamma_R(p^{(n)},\theta^{(n)}) \bigl|\chi^{(n)}_t\bigr|^2 \,\mathrm{d} x
+ C_R \int_{\Omega} \left(1+|p^{(n)}|^2\right)\,\mathrm{d} x. \end{eqnarray} Let $[0,T_n)$ denote the maximal interval of existence of our solution. Using \eqref{gn6}--\eqref{gn7}, and Gronwall's lemma, we thus can infer that \begin{equation}\label{gn8}
\mathop{{\rm sup\,ess}\,}_{t\in (0,T_n)}\int_{\Omega} |p^{(n)}|^2(x,t)\,\mathrm{d} x +\int_0^{T_n}\int_{\Omega} |\nabla p^{(n)}|^2\,\mathrm{d} x\,\mathrm{d} t
+ \int_0^{T_n}\int_{\partial\Omega} \alpha(x)|p^{(n)}|^2 \,\mathrm{d} s(x)\,\mathrm{d} t \le C_R\,. \end{equation} In particular, the Galerkin solution exists globally, and for every $n \in \mathbb{N}$ we have $T_n=T$.
In what follows, we denote by $|\cdot|_p$ the norm in $L^p(\Omega)$, by $\|\cdot\|_p$ the norm in $L^p(\Omega\times (0,T))$, by $\|\cdot\|_{\partial\Omega, p}$ the norm in $L^p(\partial\Omega\times (0,T))$, and by $\|\cdot\|_{W^{\ell,p}(\Omega)}$ the norm in $W^{\ell,p}(\Omega)$ for $\ell \in \mathbb{N}$ and $1 \le p \le \infty$.
Let us recall the Gagliardo-Nirenberg inequality \begin{equation}\label{gn}
|u|_q \le C\left(|u|_s + |u|_s^{1-\rho}|\nabla u|_p^\rho\right), \end{equation} with $$ \rho = \frac{\frac 1s - \frac1q}{\frac 1s + \frac 1N - \frac 1p}\,\,, $$ which is valid for all $1\le s<q$, $1/q > (1/p) - (1/N)$, every bounded open set $\Omega \subset \mathbb{R}^N$ with Lipschitzian boundary, and every function $u \in W^{1,p}(\Omega)$. For $t\in (0,T)$, $N=3$, $s = p = 2$, and $q=4$, we have, in particular, $$
|p^{(n)}(t)|_4 \le C\left(|p^{(n)}(t)|_2 + |p^{(n)}(t)|_2^{1/4} |\nabla p^{(n)}(t)|_2^{3/4}\right). $$ Hence, by \eqref{gn8}, \begin{equation}\label{gn9}
\int_0^T \left(\int_{\Omega} |p^{(n)}(x,t)|^4 \,\mathrm{d} x\right)^{2/3}\,\mathrm{d} t \le C_R\,, \end{equation} independently of $n$.
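For the reader's convenience, we indicate how \eqref{gn9} is obtained: raising the above inequality to the power $8/3$ gives
$$
\left(\int_{\Omega} |p^{(n)}(x,t)|^4 \,\mathrm{d} x\right)^{2/3} = |p^{(n)}(t)|_4^{8/3} \,\le\, C\left(|p^{(n)}(t)|_2^{8/3} + |p^{(n)}(t)|_2^{2/3}\,|\nabla p^{(n)}(t)|_2^{2}\right),
$$
and after integration in time the right-hand side is bounded independently of $n$, thanks to \eqref{gn8}.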
Next, we test \eqref{gu1} by $\dot v_k(t)$ and sum over $k=0,1,\dots n$ to obtain the identity \begin{eqnarray}\nonumber
&&\hspace{-16mm}\frac{\,\mathrm{d}}{\,\mathrm{d} t} \left(\frac1{2\rho_L}\int_{\Omega} |\nabla v^{(n)}|^2\,\mathrm{d} x + \int_{\partial\Omega} \alpha(x)(\hat M(v^{(n)})-p^* v^{(n)})\,\mathrm{d} s(x)\right)\\ \nonumber && +\int_{\Omega}\left(((\chi^{(n)}{+}\rho^*(1{-}\chi^{(n)}))(\varphi_R(p^{(n)})+w^{(n)}))_t v_t^{(n)}\right)\,\mathrm{d} x\\ \label{gd0} &=& - \int_{\partial\Omega} \alpha(x) p^*_t v^{(n)} \,\mathrm{d} s(x), \end{eqnarray} where $\hat M' = M^{-1}$. We have the pointwise lower bound $$
\varphi_R(p^{(n)})_t v^{(n)}_t \ge C_R |v^{(n)}_t|^2, $$ and thus, by \eqref{gd0} and H\"older's inequality, we have for all $t \in [0,T]$ that \begin{eqnarray}\nonumber
&&\hspace{-16mm}\int_0^t\int_{\Omega} |v^{(n)}_t|^2\,\mathrm{d} x\,\mathrm{d} t' + \left(\int_{\Omega} |\nabla v^{(n)}|^2(x,t) \,\mathrm{d} x
+ \int_{\partial\Omega} \alpha(x)|v^{(n)}|^2(x,t)\,\mathrm{d} s(x)\right) \\ \label{gd1}
&\le& C_R \left(1+ \int_0^t\int_{\Omega} (|w^{(n)}_t|^2 + |\chi^{(n)}_t|^2 |w^{(n)}|^2)\,\mathrm{d} x\,\mathrm{d} t'
+\int_0^t\int_{\partial\Omega} \alpha(x)|v^{(n)}|^2\,\mathrm{d} s(x)\,\mathrm{d} t'\right).\quad
\end{eqnarray} A bound for $\|w^{(n)}_t\|_2^2$ follows from \eqref{ge3} and \eqref{gn8}. Moreover, owing to \eqref{ge3}, we have for $t\in (0,T)$ that $$
\int_{\Omega} |\chi^{(n)}_t|^2 |w^{(n)}|^2 (x,t)\,\mathrm{d} x \le C_R\left(1 + \int_{\Omega} \left(\int_0^t |p^{(n)}(x,t')|\,\mathrm{d} t'\right)^4\,\mathrm{d} x\right). $$ We use the Minkowski inequality in the form $$
\left(\int_{\Omega} \left(\int_0^t |p^{(n)}(x,t')|\,\mathrm{d} t'\right)^4\,\mathrm{d} x\right)^{1/4} \le
\int_0^t \left(\int_{\Omega} |p^{(n)}(x,t')|^4\,\mathrm{d} x\right)^{1/4} \,\mathrm{d} t' $$ to check that $$
\int_{\Omega} |\chi^{(n)}_t|^2 |w^{(n)}|^2 (x,t)\,\mathrm{d} x
\le C_R\left(1 + \left(\int_0^t \left(\int_{\Omega} |p^{(n)}(x,t')|^4\,\mathrm{d} x\right)^{1/4} \,\mathrm{d} t'\right)^4\right) \le C_R, $$ by virtue of \eqref{gn9}. Then \eqref{gd1} and the Gronwall argument imply that \begin{equation}\label{ge2}
\|v^{(n)}(t)\|_{W^{1,2}(\Omega)}^2 + \int_0^t |v^{(n)}_t(t')|_2^2\,\mathrm{d} t' \le C_R, \end{equation} whence also \begin{equation}\label{ge2a}
\|p^{(n)}(t)\|_{W^{1,2}(\Omega)}^2 + \int_0^t |p^{(n)}_t(t')|_2^2\,\mathrm{d} t' \le C_R \end{equation} for $t \in (0,T)$.
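Note that, by Hypothesis \ref{h1}\,(v), the bounds for $v^{(n)} = M(p^{(n)})$ carry over to $p^{(n)}$: indeed,
$$
c_\mu\, |p^{(n)}| \,\le\, |v^{(n)}| \,\le\, C_\mu\, |p^{(n)}|\,, \qquad \nabla v^{(n)} = \mu(p^{(n)})\nabla p^{(n)}\,, \qquad v^{(n)}_t = \mu(p^{(n)})\, p^{(n)}_t\,,
$$
which is how \eqref{ge2a} follows from \eqref{ge2}.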
We continue by testing \eqref{gu5} by $\dot z_k(t)$ and summing over $k=0,1,\dots n$. Note that, thanks to \eqref{n2}--\eqref{n3}, \eqref{ge3}, and \eqref{gn8}, we have that $$ \gamma_R(p^{(n)},\theta^{(n)})(\chi^{(n)}_t(x,t))^2 + \nu (w^{(n)}_t(x,t))^2 \le
C_R \left(1+|p^{(n)}(x,t)| + \int_0^t|p^{(n)}(x,t')|\,\mathrm{d} t'\right)^3 $$ for a.\,e. $(x,t) \in \Omega \times (0,T)$. This yields the inequality \begin{eqnarray*}
&&\hspace{-16mm}\frac{\,\mathrm{d}}{\,\mathrm{d} t} \left(\frac12\int_{\Omega} |\nabla z^{(n)}|^2\,\mathrm{d} x + \int_{\partial\Omega} \omega(x)(\hat K_R(z^{(n)})-\theta^* z^{(n)})\,\mathrm{d} s(x)\right) + \frac12 \int_{\Omega} c_0\theta^{(n)}_t z^{(n)}_t \,\mathrm{d} x\\
&\le& \int_{\partial\Omega} \omega(x) |\theta^*_t| |z^{(n)}| \,\mathrm{d} s(x)
+ C_R\int_{\Omega} \left(1+|p^{(n)}(x,t)| + \int_0^t|p^{(n)}(x,t')|\,\mathrm{d} t'\right)^6\,\mathrm{d} x, \end{eqnarray*} where $\hat K_R' = K_R^{-1}$. Using \eqref{ge2a} and the Sobolev embedding theorem, we obtain, as before, that \begin{equation}\label{ge2b}
\|\theta^{(n)}(t)\|_{W^{1,2}(\Omega)}^2 + \int_0^t |\theta^{(n)}_t(t')|_2^2\,\mathrm{d} t' \le C_R \end{equation} for $t \in (0,T)$. Hence, there exist a subsequence of $\{(p^{(n)}, \theta^{(n)}): n \in \mathbb{N}\}$, which is again indexed by $n$, and functions $p, \theta$, such that \begin{eqnarray*} p^{(n)}_t \to p_t, \ \theta^{(n)}_t \to \theta_t, && \mbox{weakly in } \ L^2(\Omega\times (0,T)),\\ \nabla p^{(n)} \to \nabla p, \ \nabla \theta^{(n)} \to \nabla \theta, && \mbox{weakly-star in } \ L^\infty(0,T;L^2(\Omega)),\\ p^{(n)} \to p, \ \theta^{(n)} \to \theta, && \mbox{strongly in } \ L^q(\Omega; C[0,T]) \ \hbox{\ for}\ \ 1\le q < 3,\\ \end{eqnarray*} where we used the compact embedding $W^{1,2}(\Omega\times (0,T))\hookrightarrow\hookrightarrow L^q(\Omega;C[0,T])$ for $1\le q<3$, see \cite{bin}.
We now check that the sequences $\{w^{(n)}\}, \{\chi^{(n)}\}, \{w^{(n)}_t\}, \{\chi^{(n)}_t\}$ converge strongly in appropriate function spaces and that the limit functions satisfy the system \eqref{cu1}--\eqref{le7a}. Passing again to a subsequence if necessary, we may fix a set $\Omega' \subset \Omega$ with meas$(\Omega\setminus\Omega') = 0$ such that \begin{equation}\label{n4}
\lim_{n\to \infty} \sup_{t\in [0,T]}|p^{(n)}(x,t) - p(x,t)| = 0\,, \quad \lim_{n\to \infty} \sup_{t\in [0,T]}|\theta^{(n)}(x,t) - \theta(x,t)| = 0, \quad \forall\, x \in \Omega', \end{equation} and such that the functions $t \mapsto p(x,t)$ and $t \mapsto \theta(x,t)$ belong to $C[0,T]$ for all $x \in \Omega'$. In particular, we can define the real numbers $$
\widetilde p(x) := \sup_{t\in [0,T]}|p(x,t)|\,, \quad \widetilde \theta(x): = \sup_{t\in [0,T]}|\theta(x,t)|,
\ \hbox{\ for}\ x \in \Omega'\,. $$ Let $x \in \Omega'$ be arbitrarily fixed now. Then there is some $n_0(x) \in \mathbb{N}$ such that for $n > n_0(x)$ we have
$|p^{(n)}(x,t)| \le 2\widetilde p(x)$ and $|\theta^{(n)}(x,t)| \le 2\widetilde \theta(x)$, for all $t \in [0,T]$ and $x \in \Omega'$.
For $n,m \in \mathbb{N}$, $n,m > n_0(x)$, we have by \eqref{n1} for $t \in [0,T]$ and $x \in \Omega'$ that \begin{equation}\label{n5}
|w^{(n)}(x,t) - w^{(m)}(x,t)| \le C_R(1+\widetilde p(x))\int_0^t (|\chi^{(n)} - \chi^{(m)}| + |p^{(n)} - p^{(m)}| + |\theta^{(n)} - \theta^{(m)}|)(x,t')\,\mathrm{d} t'. \end{equation} Hence, with the notation of \eqref{n3}, \begin{eqnarray}\nonumber
&&\hspace{-16mm}\int_0^t|F^{(n)}(x,t') - F^{(m)}(x,t')|\,\mathrm{d} t'\\ \label{n6}
&\le& C_R(1+\widetilde p(x))^2\int_0^t (|\chi^{(n)} - \chi^{(m)}| + |p^{(n)} - p^{(m)}| + |\theta^{(n)} - \theta^{(m)}|)(x,t')\,\mathrm{d} t'. \end{eqnarray} The well-known $L^1$-Lipschitz continuity result for variational inequalities (see, e.\,g., \cite[Theorem 1.12]{cmuc}) tells us that \begin{equation}\label{n7}
\int_0^t |\chi^{(n)}_t - \chi^{(m)}_t|(x,t')\,\mathrm{d} t' \le 2 \int_0^t|F^{(n)}(x,t') - F^{(m)}(x,t')|\,\mathrm{d} t'. \end{equation} Since $\{p^{(n)}(x,\cdot)\}$ and $\{\theta^{(n)}(x,\cdot)\}$ converge uniformly for each $x \in \Omega'$, we may apply the Gronwall argument to conclude that $\{\chi^{(n)}(x,\cdot)\}$, $\{w^{(n)}(x,\cdot)\}$, $\{w_t^{(n)}(x,\cdot)\}$ are Cauchy sequences in $C[0,T]$ and that $\{\chi_t^{(n)}(x,\cdot)\}$ is a Cauchy sequence in $L^{1}(0,T)$, for every $x \in \Omega'$. Hence, there exist functions $\chi, w : \Omega'\times (0,T) \to \mathbb{R}$ such that, as $n\to\infty$, \begin{equation}\label{n8}
\sup_{t \in [0,T]}|w^{(n)}(x,t) - w(x,t)| \to 0\,, \ \ \sup_{t \in [0,T]}|\chi^{(n)}(x,t) - \chi(x,t)| \to 0\,, \ \
\sup_{t \in [0,T]}|w_t^{(n)}(x,t) - w_t(x,t)| \to 0 \end{equation} and \begin{equation}\label{n9}
\int_0^T|\chi^{(n)}_t(x,t) - \chi_t(x,t)| \,\mathrm{d} t \to 0,
\end{equation} for all $x \in \Omega'$. Since $|\chi^{(n)}|, |w^{(n)}|, |\chi^{(n)}_t|, |w^{(n)}_t|$ admit a pointwise upper bound \eqref{ge3} in terms of convergent sequences in $L^q(\Omega; C[0,T])$ for $1\le q < 3$, we can use the Lebesgue Dominated Convergence Theorem to conclude that \begin{equation}\label{n10} w^{(n)} \to w\,, \quad w_t^{(n)} \to w_t\,, \quad \chi^{(n)} \to \chi, \quad \mbox{ strongly in } \ L^q(\Omega; C[0,T])\,, \end{equation} and \begin{equation}\label{n11} \chi^{(n)}_t \to \chi_t \quad \mbox{ strongly in } \ L^1(\Omega\times (0,T)). \end{equation} Moreover, from H\"older's inequality we obtain that \begin{eqnarray*}
&&\hspace{-12mm}\int_0^T\int_{\Omega} |\chi^{(n)}_t - \chi_t|^2\,\mathrm{d} x \,\mathrm{d} t = \int_0^T\int_{\Omega} |\chi^{(n)}_t - \chi_t|^{1/3} |\chi^{(n)}_t - \chi_t|^{5/3}\,\mathrm{d} x \,\mathrm{d} t\\
&& \le\, \left(\int_0^T\int_{\Omega} |\chi^{(n)}_t - \chi_t|\,\mathrm{d} x \,\mathrm{d} t\right)^{1/3} \left(\int_0^T\int_{\Omega} |\chi^{(n)}_t - \chi_t|^{5/2}\,\mathrm{d} x \,\mathrm{d} t\right)^{2/3} \to 0, \end{eqnarray*} by virtue of \eqref{n11} and \eqref{ge3}. We can therefore pass to the limit in \eqref{gu1}--\eqref{le7c}, where \eqref{gu4} is interpreted as the variational inequality \eqref{n3a}, and check that its limit is the desired solution to \eqref{cu1}--\eqref{le7a}. \end{pf}
\section{Estimates independent of $R$}\label{apr} In this section, we derive estimates for the solutions to \eqref{cu1}--\eqref{cu5} which are independent of the cut-off parameter $R$. In the entire section, we denote by $C$ positive constants which may depend on the data of the problem, but not on $R$.
\subsection{Positivity of temperature}\label{pos}
For every nonnegative test function $\zeta \in W^{1,2}(\Omega)$ we have, by virtue of \eqref{cu5}, and using the fact that $\gamma_R(p,\theta) \ge c_\gamma>0$, \begin{equation}\label{pe25} \int_{\Omega}\left(c_0\theta_t \zeta + \kappa(Q_R(\theta^+)) \nabla \theta\cdot\nabla \zeta\right)\,\mathrm{d} x + \int_{\partial\Omega} \omega(x)(\theta-\theta^*) \zeta \,\mathrm{d} s(x) \ge - C\int_{\Omega} (Q_R(\theta^+))^2 \zeta \,\mathrm{d} x \end{equation} with a constant $C$ depending only on the constants $L, \theta_c, \beta, \nu, c_\gamma$. Let $\psi$ be the solution of the equation \begin{equation}\label{epsi} c_0 \dot\psi(t) + C\psi^2(t) = 0\,, \quad \psi(0) = \bar\theta\,. \end{equation} Then \begin{equation}\label{epsi2} \psi(t) = \frac{\bar\theta c_0}{c_0 + \bar\theta C t}\,\,, \end{equation} and we have \begin{eqnarray} \nonumber &&\hspace{-16mm}\int_{\Omega}\left(c_0(\psi- \theta)_t \zeta + \kappa(Q_R(\theta^+)) \nabla(\psi- \theta)\cdot\nabla \zeta\right)\,\mathrm{d} x\, - \int_{\partial\Omega} \omega(x)(\theta-\theta^*)\zeta \,\mathrm{d} s(x)\\ \label{pe25b} &\le& C\int_{\Omega} ((Q_R(\theta^+))^2 - \psi^2) \zeta \,\mathrm{d} x \end{eqnarray} for every nonnegative test function $\zeta \in W^{1,2}(\Omega)$. In particular, for $\zeta(x,t) = (\psi(t) - \theta(x,t))^+$, we obtain that \begin{equation}\label{pe25c} \frac{\,\mathrm{d}}{\,\mathrm{d} t}\frac{c_0}{2}\int_{\Omega}((\psi- \theta)^+)^2\,\mathrm{d} x + \int_{\partial\Omega} \omega(x)(\theta^*-\theta)(\psi- \theta)^+ \,\mathrm{d} s(x) \le C\int_{\Omega} ((Q_R(\theta^+))^2 - \psi^2)(\psi- \theta)^+ \,\mathrm{d} x\,. \end{equation} From Hypothesis 2.1\,(iii), we obtain for all values of $x$ and $t$ that $$ (\theta^*-\theta)(\psi- \theta)^+ \ge 0\,, \quad ((Q_R(\theta^+))^2 - \psi^2)(\psi- \theta)^+ = (Q_R(\theta^+) - \psi)(Q_R(\theta^+) + \psi)(\psi- \theta)^+ \le 0\,, $$ and from \begin{equation}\label{pe25d} \frac{\,\mathrm{d}}{\,\mathrm{d} t}\frac{c_0}{2}\int_{\Omega}((\psi- \theta)^+)^2\,\mathrm{d} x \le 0\,, \quad (\psi- \theta)^+(x,0) = 0, \end{equation} we conclude that, independently of $R>0$, \begin{equation}\label{pos0} \theta(x,t) \ge \psi(t) \ge \frac{\bar\theta c_0}{c_0 + \bar\theta C T} > 0 \quad \mbox{ for all }\ x \ \mbox{ and } \ t. \end{equation}
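Note that \eqref{epsi2} indeed solves \eqref{epsi}: a direct computation gives
$$
\dot\psi(t) \,=\, -\frac{\bar\theta^2 c_0 C}{(c_0 + \bar\theta C t)^2} \,=\, -\frac{C}{c_0}\,\psi^2(t)\,, \qquad \psi(0) = \bar\theta\,,
$$
and $\psi$ is positive and nonincreasing on $[0,T]$, which gives the explicit lower bound in \eqref{pos0}.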
\subsection{Energy estimate}\label{enes}
We test \eqref{cu1} by $\eta=p$, \eqref{cu5} by $\zeta = 1$, and sum up. With the notation \eqref{ce1}, we use the identities \begin{eqnarray}\label{en5} \int_{\Omega}((\chi{+}\rho^*(1{-}\chi))\varphi_R(p))_t p \,\mathrm{d} x &=& \frac{\,\mathrm{d}}{\,\mathrm{d} t}\int_{\Omega} (\chi{+}\rho^*(1{-}\chi)) V_R(p)\,\mathrm{d} x + \int_{\Omega}(1-\rho^*)\Phi_R(p)\chi_t \,\mathrm{d} x\,,\qquad\\ \label{en5a} \int_{\Omega}((\chi{+}\rho^*(1{-}\chi)) w)_t p \,\mathrm{d} x &=& \int_{\Omega} (\chi{+}\rho^*(1{-}\chi)) w_t p \,\mathrm{d} x + \int_{\Omega}(1-\rho^*)wp\chi_t \,\mathrm{d} x\,,\qquad\\ \label{en6} \chi_t \left(\gamma_R(p,\theta)\chi_t - L\frac{Q_R(\theta)}{\theta_c}\right) &=& -L\chi_t + (1-\rho^*)(\Phi_R(p)+pw)\chi_t, \end{eqnarray} which follow from \eqref{cu4}, and we obtain that \begin{eqnarray} \nonumber &&\frac{\,\mathrm{d}}{\,\mathrm{d} t}\int_{\Omega} \left(c_0\theta + L\chi + (\chi{+}\rho^*(1{-}\chi)) V_R(p)+ \frac{\lambda_M}{2} w^2 + \beta\theta_c w\right)\,\mathrm{d} x \\ \label{en1} && \qquad + \int_{\partial\Omega} \left(\omega(x)(\theta-\theta^*) + \alpha(x)(p-p^*)p\right)\,\mathrm{d} s(x) \le \int_{\Omega} w_t(H_R(t) - G)\,\mathrm{d} x. \end{eqnarray} Note that $V_R'(p) = p \varphi_R'(p)$, $V_R(0) = 0$, so that $V_R(p) > 0$ for all $p \ne 0$. Furthermore, $$ \int_{\Omega} w_t H_R(t)\,\mathrm{d} x = H_R(t) \int_{\Omega} w_t \,\mathrm{d} x = 0, $$ so that \eqref{en1} can be written as \begin{eqnarray} \nonumber &&\frac{\,\mathrm{d}}{\,\mathrm{d} t}\int_{\Omega} \left(c_0\theta + L\chi + (\chi{+}\rho^*(1{-}\chi)) V_R(p)+ \frac{\lambda_M}{2} w^2 + (\beta\theta_c+G) w\right )\,\mathrm{d} x \\ \label{en1a} && \qquad + \int_{\partial\Omega} \left(\omega(x)(\theta-\theta^*) + \alpha(x)(p-p^*)p\right)\,\mathrm{d} s(x) \le \int_{\Omega} G_t w\,\mathrm{d} x. \end{eqnarray} By Gronwall's argument and Hypothesis \ref{h1}, we thus have \begin{equation}\label{en0} \mathop{{\rm sup\,ess}\,}_{t\in (0,T)} \int_{\Omega} (\theta + V_R(p) + w^2)\,\mathrm{d} x + \int_0^T\int_{\partial\Omega}(\omega(x)\theta + \alpha(x) p^2)\,\mathrm{d} s(x)\,\mathrm{d} t \le C\,. \end{equation}
\subsection{The Dafermos estimate}\label{dafe}
We denote $\hat\theta = Q_R(\theta) = Q_R(\theta^+)$ and rewrite \eqref{cu5} in the form \begin{align} &\int_{\Omega}\left(c_0\theta_t \zeta + \kappa(\hat\theta) \nabla \theta\cdot\nabla \zeta\right)\,\mathrm{d} x\,
- \int_{\Omega}\left(\frac{1}{\rho_L}\mu(p)Q_R(|\nabla p|^2) + \gamma_R(p,\theta)\chi_t^2 + \nu w_t^2\right) \zeta \,\mathrm{d} x \nonumber\\ \label{cu5a} &+\int_{\Omega} \hat\theta\left(\frac{L}{\theta_c}\chi_t + \beta w_t\right) \zeta\,\mathrm{d} x \,=\, \int_{\partial\Omega} \omega(x)(\theta^*-\theta)\zeta \,\mathrm{d} s(x), \end{align} for every $\zeta \in W^{1,2}(\Omega)$. We test \eqref{cu5a} by $\zeta = -\hat\theta^{-a}$. This yields the identity \begin{eqnarray} \nonumber
&&\hspace{-12mm}\int_{\Omega}\frac{a\kappa(\hat\theta)}{\hat\theta^{1+a}} |\nabla \hat\theta|^2\,\mathrm{d} x +
\int_{\Omega}\hat\theta^{-a}\left(\frac{1}{\rho_L}\mu(p)Q_R(|\nabla p|^2) + \gamma_R(p,\theta)\chi_t^2 + \nu w_t^2\right) \,\mathrm{d} x \\ \label{cu5b} &=&\int_{\Omega} \hat\theta^{1-a}\left(\frac{L}{\theta_c}\chi_t + \beta w_t\right) \,\mathrm{d} x +\int_{\partial\Omega} \omega(x)(\theta-\theta^*)\hat\theta^{-a} \,\mathrm{d} s(x) + \frac{c_0}{1-a}\frac{\,\mathrm{d}}{\,\mathrm{d} t}\int_{\Omega}\hat\theta^{1-a}\,\mathrm{d} x.\qquad \end{eqnarray} By Hypothesis \ref{h1}\,(ii), we have $\frac{\kappa(\hat\theta)}{\hat\theta^{1+a}} \ge c_\kappa$. Furthermore, H\"older's and Young's inequalities give the estimate \begin{equation}\label{de0}
\int_{\Omega} \hat\theta^{1-a}\left(|\chi_t| + |w_t|\right) \,\mathrm{d} x \le \frac C\tau \int_{\Omega} \hat\theta^{2-a}\,\mathrm{d} x + \tau \int_{\Omega} \hat\theta^{-a}\left(\chi_t^2 + w_t^2\right) \,\mathrm{d} x \end{equation} for every $\tau > 0$. This and \eqref{en0} yield the estimate \begin{equation}\label{de1}
\int_0^T\int_{\Omega}|\nabla\hat\theta(t)|^2\,\mathrm{d} x\,\mathrm{d} t \le C\left(1 + \int_0^T\int_{\Omega} \hat\theta^{2-a}\,\mathrm{d} x\,\mathrm{d} t\right)\,. \end{equation} {}From the Gagliardo-Nirenberg inequality \eqref{gn} with $s=1$, $p=2$, and $N=3$, we obtain that \begin{equation}\label{de1a}
|\hat\theta(t)|_q \le C\left(1+ |\nabla\hat\theta(t)|_2^\rho\right), \end{equation} with $\rho = \frac{6}{5}\left(1 - \frac{1}{q}\right)$, where we used \eqref{en0} once more. In particular, for every $q \le 8/3$, we have by \eqref{de1} and \eqref{de1a} that \begin{equation}\label{de2a}
\int_0^T|\hat\theta(t)|_q^{5q/(3(q-1))}\,\mathrm{d} t \le C\left(1+ \int_0^T|\nabla\hat\theta(t)|_2^2\,\mathrm{d} t\right)
\le C\left(1 + \int_0^T|\hat\theta(t)|_{2-a}^{2-a}\,\mathrm{d} t\right)\,. \end{equation} Moreover, using \eqref{de2a} first for $q = 2-a$ and then for $q=8/3$, we obtain that \begin{equation}\label{de2}
\int_0^T|\hat\theta(t)|_{8/3}^{8/3}\,\mathrm{d} t + \int_0^T|\nabla\hat\theta(t)|_2^2\,\mathrm{d} t \le C, \end{equation} independently of $R$.
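For the reader's convenience, we sketch the bootstrap leading to \eqref{de2}: choosing $q = 2-a$ in \eqref{de2a}, the exponent on the left-hand side equals $\frac{5(2-a)}{3(1-a)} > 2-a$, so that Young's inequality allows us to absorb the right-hand side and to conclude that $\int_0^T|\hat\theta(t)|_{2-a}^{2-a}\,\mathrm{d} t \le C$. By \eqref{de1}, this yields $\int_0^T|\nabla\hat\theta(t)|_2^{2}\,\mathrm{d} t \le C$, and applying \eqref{de2a} once more with $q = 8/3$, for which the exponent equals $8/3$, we arrive at \eqref{de2}.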
\subsection{Estimates for the capillary pressure}\label{capi}
We choose an even function $b:\mathbb{R} \to (0,\infty)$ such that the functions $b$ and $p \mapsto p b(p)$ are Lipschitz continuous and such that $b'(p) \ge 0$ for $p > 0$. Then, owing to \eqref{ge2a}, $\eta = p b(p)$ is an admissible test function in \eqref{cu1}, and the term under the time derivative has the form \begin{eqnarray} \nonumber &&\int_{\Omega} ((\chi{+}\rho^*(1{-}\chi)) (\varphi_R(p)+w))_t p b(p) \,\mathrm{d} x \,= \int_{\Omega} (\chi{+}\rho^*(1{-}\chi)) \varphi_R(p)_t p b(p) \,\mathrm{d} x\\ \label{bbe1} && \quad +\,\int_{\Omega} (\chi{+}\rho^*(1{-}\chi)) w_t p b(p) \,\mathrm{d} x + (1-\rho^*)\int_{\Omega} \chi_t \left(\varphi_R(p) + w\right)p b(p) \,\mathrm{d} x. \end{eqnarray} For $p \in \mathbb{R}$, we put \begin{align*} V_b(p) &\,:=\, \int_0^p \varphi'(\tau)\, \tau\, b(\tau) \,\mathrm{d} \tau,\quad \hat P_{R,b}(p) \,:=\, \int_0^p P'_R(\tau)\, \tau\, b(\tau) \,\mathrm{d}\tau,\\ \Psi_{R,b}(p) &\,:=\, \varphi_R(p) p b(p) - \hat P_{R,b}(p) - V_b(p) - \Phi_R(p) b(p) \,=\, \int_0^p V_R(\tau) \,\tau\, b'(\tau) \,\mathrm{d}\tau\,. \end{align*} Then $V_b(p) > 0$ and $\Psi_{R,b}(p) \ge 0$ for all $p\ne 0$, and \eqref{bbe1} can be rewritten as \begin{eqnarray} \nonumber &&\int_{\Omega} ((\chi{+}\rho^*(1{-}\chi)) (\varphi_R(p)+w))_t p b(p) \,\mathrm{d} x = \frac{\,\mathrm{d}}{\,\mathrm{d} t} \int_{\Omega} (\chi{+}\rho^*(1{-}\chi)) (\hat P_{R,b}(p) + V_b(p)) \,\mathrm{d} x\\ \label{bbe2} && \quad +\,\int_{\Omega} (\chi{+}\rho^*(1{-}\chi)) w_t p b(p) \,\mathrm{d} x + (1-\rho^*)\int_{\Omega} \chi_t \left((\Phi_R(p) + wp)b(p) + \Psi_{R,b}(p)\right)\,\mathrm{d} x.\qquad \end{eqnarray} Owing to \eqref{en6}, we have, with the notation from Subsection \ref{dafe}, that $$ (1-\rho^*)\chi_t (\Phi_R(p) + wp)\, =\, \gamma_R(p,\theta) \chi_t^2 \,+\, \frac{L}{\theta_c} (\theta_c - \hat\theta) \chi_t\, \ge\, \frac12 \gamma_R(p,\theta) \chi_t^2 - C (1+ \hat\theta) $$ with a constant $C>0$ independent of $R$. Similarly, $$
\left|\int_{\Omega} \chi_t \Psi_{R,b}(p)\,\mathrm{d} x \right| \le \frac14 \int_{\Omega} \gamma_R(p,\theta) \chi_t^2 b(p) \,\mathrm{d} x + C \int_{\Omega} \frac{\Psi_{R,b}^2(p)}{b(p)\gamma_R(p,\theta)}\,\mathrm{d} x. $$ We have, by definition, that $\Psi_{R,b}(p) \le V_R(p) b(p)$, hence $$ \frac{\Psi_{R,b}^2(p)}{b(p)\gamma_R(p,\theta)} \,\le\, C\frac{V_{R}^2(p) b(p)}{\gamma_R(p,\theta)}\, \le\, C b(p)\frac{(V(p) + \frac12(p^2 - R^2)^+)^2}{1 + (p^2 - R^2)^+} \,\le\, C p^2 b(p) $$ independently of $R$. We conclude that \begin{eqnarray} \nonumber &&\int_{\Omega} ((\chi{+}\rho^*(1{-}\chi)) (\varphi_R(p)+w))_t p b(p) \,\mathrm{d} x \ge \frac{\,\mathrm{d}}{\,\mathrm{d} t} \int_{\Omega} (\chi{+}\rho^*(1{-}\chi)) (\hat P_{R,b}(p) + V_b(p)) \,\mathrm{d} x\\ \label{bbe3} && \quad +\,\frac14 \int_{\Omega} \gamma_R(p,\theta) \chi_t^2 b(p) \,\mathrm{d} x
- C \int_{\Omega} \left(1+ |w_t| + |p| + \hat\theta\right) |p| b(p) \,\mathrm{d} x.\qquad \end{eqnarray} {}From \eqref{cu1}, with $\eta = p b(p)$, we thus obtain, in particular, that \begin{eqnarray} \nonumber &&\hspace{-10mm}
\frac{\,\mathrm{d}}{\,\mathrm{d} t} \int_{\Omega} (\chi{+}\rho^*(1{-}\chi)) (\hat P_{R,b}(p)+V_b(p)) \,\mathrm{d} x + \int_{\Omega} \mu(p)(p b'(p) + b(p))|\nabla p|^2 \,\mathrm{d} x \\ \label{bbe4} && +\, \int_{\partial\Omega} \alpha(x) (p-p^*) p b(p)\,\mathrm{d} s(x)
\le C \int_{\Omega} \left(1{+}|w_t|{+}|p|{+}\hat\theta\right) |p| b(p) \,\mathrm{d} x\,,
\end{eqnarray} with a constant $C>0$ which is independent of both $b$ and $R$. To estimate the right-hand side of \eqref{bbe4}, we first notice that, by \eqref{le7a} and the energy estimate \eqref{en0}, $|H_R(t)| \le C(1 + \int_{\Omega} |p|\,\mathrm{d} x)$, and from \eqref{cu2} and Hypothesis \ref{h1}\,(viii) we obtain the pointwise bounds \begin{eqnarray} \label{c3}
|w(x,t)| &\le& C\left(1 + \int_0^t(|p(x,{t'})| + \hat\theta(x,{t'}))\,\mathrm{d}{t'}
+ \int_0^t\int_{\Omega}|p(x',{t'})|\,\mathrm{d} x'\,\mathrm{d}{t'}\right),\\ \label{c3a}
|w_t(x,t)| &\le& |w(x,t)|+ C\left(1 + |p(x,t)| + \hat\theta(x,t)+ \int_{\Omega}|p(x',t)|\,\mathrm{d} x' \right)\,. \end{eqnarray} In particular, for $b(p) \equiv 1$ we have $\Psi_{R,b}(p) = 0$ and $V_b=V,$ and it follows from \eqref{bbe4}--\eqref{c3} that \begin{eqnarray}\label{esti} \nonumber
&&\hspace{-16mm} \int_{\Omega} ((\chi{+}\rho^*(1{-}\chi)) V(p))(x,t)\,\mathrm{d} x + \int_0^t\int_{\Omega} \mu(p)|\nabla p|^2(x,t')\,\mathrm{d} x\,\mathrm{d}{t'}
+ \int_0^t\int_{\partial\Omega} \alpha(x) |p|^{2}(x,t')\,\mathrm{d} s(x)\,\mathrm{d}{t'}\\ \label{c4}
&\le& C \left(1+ \int_0^t\int_{\Omega} \left( \hat\theta |p| + |p|^2\right)\,\mathrm{d} x\,\mathrm{d}{t'}\right).
\end{eqnarray} We have, by Hypothesis \ref{h1}\,(iv), that $V(p) \ge c_{\varphi}(|p|^{1-\delta} - \delta)/(1-\delta)$. The energy estimate \eqref{en0} then yields that \begin{equation}\label{c4a}
\int_{\Omega} |p|^{1-\delta}(x,t)\,\mathrm{d} x \le C\,. \end{equation} Moreover, by \eqref{de2}, $\hat\theta$ is bounded in $L^{8/3}(\Omega\times (0,T))$. We thus obtain from \eqref{c4} that \begin{eqnarray} \nonumber
&&\hspace{-16mm} \int_{\Omega} ((\chi{+}\rho^*(1{-}\chi)) V(p))(x,t)\,\mathrm{d} x + \int_0^t\int_{\Omega} \mu(p)|\nabla p|^2\,\mathrm{d} x\,\mathrm{d}{t'}
+ \int_0^t\int_{\partial\Omega} \alpha(x) |p|^{2}\,\mathrm{d} s(x)\,\mathrm{d}{t'}\\ \label{c5}
&\le& C \left(1+ \int_0^t\int_{\Omega} |p|^2\,\mathrm{d} x\,\mathrm{d}{t'}\right)\,. \end{eqnarray} Furthermore, by Hypothesis \ref{h1}\,(ix), $\Omega$ is connected, and $\int_{\partial\Omega}\alpha(x)\,\mathrm{d} s(x)>0$. This implies that there exists a constant $C_\Omega>0$, which depends only on $\Omega$, such that, a.~e. in $(0,T)$, \begin{equation}\label{c6}
C_\Omega\,\|p\|_{W^{1,2}(\Omega)}^2\,\le\,\int_{\partial\Omega} \alpha(x) |p|^{2}\,\mathrm{d} s(x) + \int_{\Omega} \mu(p)|\nabla p|^2\,\mathrm{d} x\,. \end{equation} Moreover, we infer from H\"older's inequality that \begin{equation}\label{c6a}
\int_{\Omega} |p|^2\,\mathrm{d} x = \int_{\Omega} |p|^{(1-\delta)/2}|p|^{(3+\delta)/2}\,\mathrm{d} x \le \left(\int_{\Omega} |p|^{1-\delta}\,\mathrm{d} x\right)^{1/2}
\left(\int_{\Omega} |p|^{3+\delta}\,\mathrm{d} x\right)^{1/2}\,. \end{equation} Hence, by \eqref{c4a}, \begin{align} \label{c6b}
&\int_0^t\int_{\Omega} |p|^2\,\mathrm{d} x\,\mathrm{d}{t'} \le C\int_0^t\left(\int_{\Omega} |p|^{3+\delta}\,\mathrm{d} x\right)^{1/2}\,\mathrm{d}{t'}
\le C\left(\int_0^t\left(\int_{\Omega} |p|^{3+\delta}\,\mathrm{d} x\right)^{2/(3+\delta)}\,\mathrm{d}{t'} \right)^{(3+\delta)/4} \nonumber \\
&\quad\le\,C\left(\int_0^t |p(t')|_{3+\delta}^2\,\mathrm{d} t'\right)^{(3+\delta)/4}\,. \end{align} Since $\delta<1$, we have the embedding inequality $$
|p(t)|_{3+\delta}^2 \le C\,\|p(t)\|_{W^{1,2}(\Omega)}^2, $$ so that from \eqref{c6b} it follows that \begin{equation}\label{c7}
\int_0^t\int_{\Omega}|p|^2\,\mathrm{d} x\,\mathrm{d} {t'} \,\le\, C\left(\int_0^t\|p({t'})\|_{W^{1,2}(\Omega)}^2\,\mathrm{d} {t'}\right)^{(3+\delta)/4}. \end{equation} Employing Young's inequality, we therefore conclude from \eqref{esti} and \eqref{c6} that \begin{equation}\label{c7a}
\|p\|_{L^2(0,T;W^{1,2}(\Omega))} \le C. \end{equation} Moreover, for $(x,t) \in \Omega\times (0,T)$, $q\ge 1$ and $s > 1$, we have $$
|p(x,t)|^q = |p(x,t)|^{(1-\delta)/s} |p(x,t)|^{q-(1-\delta)/s}, $$ whence, by H\"older's inequality with $s' = s/(s-1)$, $$
|p(t)|_q^q = \left(\int_{\Omega}|p(x,t)|^{1-\delta}\,\mathrm{d} x\right)^{1/s} \left(\int_{\Omega}|p(x,t)|^{(q-(1-\delta)/s)s'}\,\mathrm{d} x\right)^{1/s'}. $$ We thus obtain from \eqref{c4a} and \eqref{c7a} that \begin{equation}\label{c8}
\int_0^T |p(t)|_q^q \,\mathrm{d} t \le C, \end{equation} provided that $(q-(1-\delta)/s)s' \le 6$ and $s' \ge 3$. In other words, $$ q \le \frac{1-\delta}{s} + \frac{6}{s'} = \frac{5+\delta}{s'} + 1-\delta \le \frac{5+\delta}{3} + 1-\delta, $$ and the maximal admissible value for $q$ in \eqref{c8} is given by \begin{equation}\label{c9} q = \frac{8 - 2\delta}{3}\,. \end{equation} Let now the function $b$ in \eqref{bbe4} be arbitrary. For $p\in \mathbb{R}$, we put $\hat b(p): = \int_0^p \tau\,b(\tau) \,\mathrm{d} \tau$. Then $\hat b$ is convex, and we have the inequality \begin{equation}\label{hatb} \hat b(p) - \hat b(p^*) \le (p-p^*) \hat b'(p) = (p-p^*) p b(p). \end{equation} {}From \eqref{bbe4}, \eqref{de2}, \eqref{c3}, \eqref{c8}, \eqref{c9}, and \eqref{hatb} it follows that there exists a function $h\in L^q(\Omega\times (0,T))$ such that $$
\|h\|_q \le C, $$ with a constant $C>0$ independent of $b$ and $R$, as well as \begin{eqnarray} \nonumber &&\hspace{-10mm}
\frac{\,\mathrm{d}}{\,\mathrm{d} t} \int_{\Omega} (\chi{+}\rho^*(1{-}\chi))(\hat P_{R,b}(p)+ V_b(p)) \,\mathrm{d} x + c_{\mu} \int_{\Omega} b(p)|\nabla p|^2 \,\mathrm{d} x +\int_{\partial\Omega}\alpha(x)\hat b(p)\,\mathrm{d} s(x)\\ \label{bbe5}
&\le& \int_{\partial\Omega}\alpha(x)\hat b(p^*)\,\mathrm{d} s(x) + \int_{\Omega} h |p| b(p) \,\mathrm{d} x. \end{eqnarray}
Integration of \eqref{bbe5} in time, using the fact that $\chi+\rho^*(1-\chi)\ge\rho^*>0$, yields the estimate \begin{align} \nonumber &
\int_{\Omega} V_b(p)(x,t) \,\mathrm{d} x + \int_0^T\int_{\Omega} b(p)|\nabla p|^2 \,\mathrm{d} x \,\mathrm{d} {t'} +\int_0^T\int_{\partial\Omega}\alpha(x)\hat b(p)\,\mathrm{d} s(x)\,\mathrm{d} {t'}\\ \label{bbe6} &\le\, C\left(\int_{\Omega} V_b(p)(x,0) \,\mathrm{d} x + \int_0^T\int_{\partial\Omega}\alpha(x)\hat b(p^*)\,\mathrm{d} s(x)\,\mathrm{d}{t'}
+ \left(\int_0^T\int_{\Omega} (|p| b(p))^{q'} \,\mathrm{d} x \,\mathrm{d} {t'}\right)^{1/q'}\right), \end{align} for all $t \in [0,T]$, with $q' = \frac{q}{q-1} = \frac{8-2\delta}{5-2\delta}$.
Now let $k>0$ be given, and let $\{b_n\}_{n \in \mathbb{N}}$ be a sequence of even, smooth, bounded functions which are nondecreasing in $(0,\infty)$ and such that $b_n(p) \nearrow |p|^{2k}$ locally uniformly in $\mathbb{R}$. Then $(|p| b_n(p))^{q'} \nearrow |p|^{(1+2k)q'}$ locally uniformly. From \eqref{c8} we know that $p\in L^q(\Omega\times (0,T))$, where $q$ is given by \eqref{c9}. Hence, the integral on the right-hand side of \eqref{bbe6} is meaningful if $(1+2k)q'\le q$, that is, if $3k \le 1 - \delta$. In particular, thanks to Hypothesis \ref{h1}\,(iv), $k=\delta$ is an admissible choice.
We continue by induction. To this end, assume that \begin{equation}\label{bbe7}
\int_0^T \int_{\Omega} |p|^{(1+2k)q'} \,\mathrm{d} x \,\mathrm{d} t =: J_k < \infty \end{equation} holds true for some $k \ge \delta$. Using the notation $$V_{b_n}(p):=\int_0^p\varphi'(\tau)\,\tau\,b_n(\tau)\,\mathrm{d}\tau,\quad \hat b_n(p):=\int_0^p\tau\,b_n(\tau)\,\mathrm{d}\tau \quad \mbox{for $\,n\in\mathbb{N}$}, $$ we can estimate the terms occurring on the right-hand side of \eqref{bbe6} for $n\in\mathbb{N}$ as follows: \begin{eqnarray*}
\int_{\Omega} V_{b_n}(p)(x,0) \,\mathrm{d} x &\le& C|p^0|_\infty^{2k + 1 - \hat\delta}\,,\\
\int_0^T\int_{\partial\Omega}\alpha(x)\hat b_n(p^*)\,\mathrm{d} s(x)\,\mathrm{d}{t'} &\le& C\|p^*\|_{\partial\Omega,\infty}^{2k+2}\,.
\end{eqnarray*} Put $E = \max\{1, |p^0|_\infty, \|p^*\|_{\partial\Omega,\infty}\}$. Then \eqref{bbe6} can for $n\in\mathbb{N}$ be rewritten as \begin{equation}\label{bbe8}
\int_{\Omega} V_{b_n}(p)(x,t) \,\mathrm{d} x + \int_0^T\!\!\!\int_{\Omega} b_n(p)|\nabla p|^2 \,\mathrm{d} x \,\mathrm{d} {t'} +\int_0^T\!\!\!\int_{\partial\Omega}\!\!\alpha(x)\hat b_n(p)\,\mathrm{d} s(x)\,\mathrm{d} {t'} \le C\max\{E^{2k+2}, J_k^{1/q'}\},
\end{equation} with a constant $C>0$ independent of $k$, $R$, and $n$. By virtue of Fatou's lemma, we can take the limit as $n \to \infty$ to obtain that \eqref{bbe8} holds true for $b(p) = |p|^{2k}$. Using the estimate \begin{equation}\label{bbe9}
V_b(p) \ge \frac{c_{\varphi}}{2k+1-\delta}\left(|p|^{2k + 1 -\delta} - \frac{1+\delta}{2k+2}\right), \end{equation} we thus have shown that \begin{eqnarray} \nonumber
&&\hspace{-12mm}\frac{1}{2k+1}\int_{\Omega} |p|^{2k + 1 -\delta}(x,t) \,\mathrm{d} x
+ \int_0^T\int_{\Omega} |p|^{2k} |\nabla p|^2 \,\mathrm{d} x \,\mathrm{d} {t'}
+\frac{1}{2k+2}\int_0^T\int_{\partial\Omega}\alpha(x)|p|^{2k+2}\,\mathrm{d} s(x)\,\mathrm{d} {t'}\\ \label{bbe10} &\le& C\max\{E^{2k+2}, J_k^{1/q'}\}. \end{eqnarray}
\subsection{Moser iterations} \label{mose}
We first recall a technical lemma proved in \cite[Lemma~3.1]{kg}.
\begin{lemma}\label{l1} Let $\Omega \subset \mathbb{R}^N$ be a bounded Lipschitzian domain, $N\ge 2$. Moreover, let $q_0 = (N+2)/2$, $q_0' = (N+2)/N$, and suppose that the real numbers
$s, r$ satisfy the inequalities \begin{equation}\label{m1} \frac12 \le s \le r \le \frac{N+2s}{N+2} \le 1\,. \end{equation} Furthermore, assume that a function $v \in L^2(0,T;W^{1,2}(\Omega))$ satisfies for a.~e. $t \in (0,T)$ the inequality \begin{equation}\label{m2}
|v(t)|_{2s}^{2s} + \int_0^T |v({t'})|^2_{W^{1,2}(\Omega)}\,\mathrm{d} {t'}
\le A \max\left\{B, \|v\|_{2rq'}\right\}^{2r}, \end{equation} for some $q' < q_0'$, $A\ge 1$, and $B\ge 1$. Then there exists a constant $C\ge 1$, which is independent of the choice of $v$, $B$, and $A$, such that \begin{equation}\label{m3}
\|v\|_{2rq_0'} \le C A^{1/(2r)} \max \left\{B,\|v\|_{2rq'}\right\}. \end{equation} \end{lemma}
We now apply Lemma \ref{l1} to the inequality \eqref{bbe10} with $q$
given by \eqref{c9}. Put $v_k := p |p|^{k}$. Then \eqref{bbe10} can be rewritten, using H\"older's inequality, as \begin{equation}\label{m12}
|v_k(t)|_{2s}^{2s} + \int_0^T |v_k({t'})|^2_{W^{1,2}(\Omega)}\,\mathrm{d} {t'}
\le (k+1)^2 A \max\left\{E^{k+1}, \|v_k\|_{2rq'}\right\}^{2r} \end{equation} with $$ 2s = \frac{2k + 1 - \delta}{k+1}\,, \quad 2r = \frac{2k + 1}{k+1}, \quad q' = \frac{q}{q-1}\,, $$ and with a constant $A\ge 1$ depending only on the initial and boundary data. We see that the hypothesis \eqref{m1} of Lemma \ref{l1} is fulfilled whenever $k \ge \delta$. The assertion of Lemma \ref{l1} then ensures that \begin{equation}\label{m14}
\|v_k\|_{2rq_0'} \le C ((k+1)^2 A)^{1/(2r)} \max \left\{E^{k+1},\|v_k\|_{2rq'}\right\}, \end{equation} which entails that \begin{equation}\label{m15}
\max\left\{E,\|p\|_{(2k+1)q_0'}\right\} \le C^{1/(k+1)} ((k+1)^2 A)^{1/(2k+1)}\max\left\{E,\|p\|_{(2k+1)q'}\right\}.
\end{equation} By induction, we check that the choice $b(p) = |p|^{2k}$ is justified for every $k \ge \delta$. Moreover, we set $\widetilde \nu := (q_0'/q') - 1 > 0$ and define the sequence $\{k_j\}_{j\ge 0}$ by the formula \begin{equation}\label{m16} 2k_j + 1 = (2\delta +1)(1+\widetilde\nu)^j.
\end{equation} Set $D_j := \max\left\{E,\|p\|_{(2k_j+1)q_0'}\right\}$. Then \eqref{m15} takes the form \begin{equation}\label{m17} D_j \le C^{1/(k_j+1)} ((k_j+1)^2 A)^{1/(2k_j+1)} D_{j-1} \ \hbox{\ for}\ \ j\in \mathbb{N}, \end{equation} and therefore, \begin{equation}\label{m18} \log D_j - \log D_{j-1} \le \frac{1}{k_j+1} \log C + \frac{1}{2k_j+1} \log((k_j+1)^2 A). \end{equation} We have $k_0 = \delta$ and $D_0 \le C$, by \eqref{c8}--\eqref{c9} and the condition $\delta<1/4$ in Hypothesis \ref{h1}\,(iv). Summing \eqref{m18} over $j$, the resulting series on the right-hand side is convergent, and we thus have $$ D_j \le D_0 \prod_{i=1}^\infty C^{1/(k_i+1)}((k_i+1)^2 A)^{1/(2k_i+1)} \le C^* $$ with a constant $C^*$ independent of $j$, which enables us to conclude that \begin{equation}\label{m19}
\|p\|_\infty \le C^*. \end{equation}
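The finiteness of $C^*$ rests only on the geometric growth of the exponents $2k_j+1$ in \eqref{m16}, which makes the sum of the right-hand sides of \eqref{m18} converge. The following short script is a purely illustrative numerical check of this mechanism, with arbitrarily chosen sample values for $\delta$, $A$, and the generic constant $C$ (these values are not taken from the preceding analysis):
\begin{verbatim}
import math

# Illustration (sample constants only): summing the right-hand side of (m18)
# over j converges, because 2*k_j + 1 = (2*delta+1)*(1+nu)^j grows geometrically.
delta, A, C = 0.1, 10.0, 10.0        # delta < 1/4 as required
q   = (8 - 2 * delta) / 3            # integrability exponent from (c9)
qp  = q / (q - 1)                    # q'
q0p = 5.0 / 3.0                      # q_0' = (N+2)/N for N = 3
nu  = q0p / qp - 1.0                 # > 0 precisely because delta < 1/4

S = 0.0
for j in range(1, 5001):
    kj = ((2 * delta + 1) * (1 + nu) ** j - 1) / 2
    S += math.log(C) / (kj + 1) + math.log((kj + 1) ** 2 * A) / (2 * kj + 1)
    if j in (10, 100, 1000, 5000):
        print(j, round(S, 4))        # partial sums stabilize, so D_j <= D_0*exp(S)
\end{verbatim}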
\subsection{Higher order estimates for the capillary pressure}\label{high}
We aim at taking the limit as $R\nearrow \infty$ in \eqref{cu1}--\eqref{le7a}. Hence, we can restrict ourselves to parameter values $R > C^*$ with $C^*$ from \eqref{m19} and rewrite \eqref{cu1}--\eqref{le7a} in the form \begin{eqnarray}\label{hu1} \int_{\Omega}\left(((\chi{+}\rho^*(1{-}\chi)) (\varphi(p)+w))_t \eta +\frac{1}{\rho_L}\mu(p)\nabla p\cdot\nabla \eta\right)\,\mathrm{d} x &=& \int_{\partial\Omega} \alpha(x)(p^*-p)\eta \,\mathrm{d} s(x),\qquad \\ \label{hu2} \nu w_t + \lambda_M w - p(\chi + \rho^*(1{-}\chi)) - \beta(\hat\theta-\theta_c) &=& - G + H_R(t)\quad a.~e.,\\ \label{hu4}
\gamma(\hat\theta) \chi_t + \partial I(\chi) - (1-\rho^*)(\Phi(p)+pw) &\ni& L\left(\frac{\hat\theta}{\theta_c}-1\right)\quad a.~e., \\ \nonumber \int_{\Omega}\left(c_0\theta_t \zeta + \kappa(\hat\theta) \nabla \theta\cdot\nabla \zeta\right)\,\mathrm{d} x + \int_{\partial\Omega} \omega(x)(\theta-\theta^*)\zeta \,\mathrm{d} s(x)
&=& \frac{1}{\rho_L} \int_{\Omega} \mu(p)Q_R(|\nabla p|^2)\zeta\,\mathrm{d} x\\ \label{hu5} &&\hspace{-110mm} +\,\int_{\Omega}\Big(\chi_t\big((1{-}\rho^*)(\Phi(p) + pw)-L\big) + w_t((\chi{+}\rho^*(1{-}\chi)) p {-} \lambda_M w {-} \beta\theta_c {-} G {+} H_R(t))\Big)\zeta\,\mathrm{d} x\quad \end{eqnarray} for all test functions $\eta,\zeta \in W^{1,2}(\Omega)$, with $\hat\theta = Q_R(\theta)$ and \begin{equation}\label{hu6}
H_R(t) = - \frac{1}{|\Omega|}\int_{\Omega} (p (\chi{+}\rho^*(1{-}\chi)) + \beta(\hat\theta -\theta_c) -G)(x,t)\,\mathrm{d} x\,. \end{equation} We test \eqref{hu1} by $\eta = \mu(p) p_t$, which is an admissible choice by \eqref{ge2a}. Then \begin{eqnarray}\nonumber
&&\hspace{-12mm}\int_{\Omega} (\chi{+}\rho^*(1{-}\chi)) \varphi'(p)\mu(p)|p_t|^2 \,\mathrm{d} x
+\frac{1}{2\rho_L}\frac{\,\mathrm{d}}{\,\mathrm{d} t} \int_{\Omega}\mu^2(p)|\nabla p|^2\,\mathrm{d} x + \int_{\partial\Omega} \alpha(x)(p-p^*)\mu(p) p_t \,\mathrm{d} s(x)\\ \label{he1} &=& \int_{\Omega} \left((1-\rho^*)\chi_t w + (\chi{+}\rho^*(1{-}\chi)) w_t\right)\mu(p) p_t\,\mathrm{d} x. \end{eqnarray} Note that by Hypothesis \ref{h1}\,(iv) and \eqref{m19}, we have $$ \varphi'(p) \ge \frac{c_{\varphi}}{\max\{1, C^*\}^{1+\delta}}. $$ We set \begin{equation}\label{mu} \hat\mu(p) = \int_0^p \tau \mu(\tau) \,\mathrm{d} \tau, \quad M(p) = \int_0^p \mu(\tau)\,\mathrm{d}\tau, \end{equation} and integrate \eqref{he1} in time to obtain the estimate \begin{eqnarray}\nonumber
&&\hspace{-12mm}\int_0^{t}\int_{\Omega} |p_t|^2 \,\mathrm{d} x\,\mathrm{d} {t'} +
\int_{\Omega}|\nabla p|^2(x,t)\,\mathrm{d} x + \int_{\partial\Omega} \alpha(x)\hat\mu(p)(x,t) \,\mathrm{d} s(x)\\ \nonumber
&\le& C\Bigg({1\,+}\int_{\partial\Omega}\alpha(x)M(p)|p^*|(x,t) \,\mathrm{d} s(x)
+ \int_0^{t} \int_{\partial\Omega}\alpha(x)M(p)|p^*_t|(x,t') \,\mathrm{d} s(x)\,\mathrm{d} t'\\ \nonumber
&&+\, \int_0^{t}\int_{\Omega} (|\chi_t w| + |w_t|)|p_t|\,\mathrm{d} x \,\mathrm{d}{t'}\Bigg)\\ \label{he2}
&\le& C\left(1 + \int_0^{t}\int_{\Omega} (|\chi_t w| + |w_t|)|p_t|\,\mathrm{d} x \,\mathrm{d}{t'}\right) \end{eqnarray} for all $t \in [0,T]$, whence we infer that \begin{eqnarray}\nonumber
&&\hspace{-12mm}\int_0^{t}\int_{\Omega} |p_t|^2 \,\mathrm{d} x\,\mathrm{d} {t'} +
\int_{\Omega}|\nabla p|^2(x,t)\,\mathrm{d} x + \int_{\partial\Omega} \alpha(x)\hat\mu(p)(x,t) \,\mathrm{d} s(x)\\ \label{he3}
&\le& C\left(1 + \int_0^{t}\int_{\Omega} (|\chi_t w|^2 + |w_t|^2)\,\mathrm{d} x \,\mathrm{d}{t'}\right). \end{eqnarray} By virtue of \eqref{c3}--\eqref{c3a}, \eqref{hu4}, and \eqref{m19}, we have the pointwise bounds \begin{eqnarray} \label{he4}
|w(x,t)| &\le& C \left(1+ \int_0^t \hat\theta(x,{t'})\,\mathrm{d}{t'}\right),\\ \label{he5}
|\chi_t(x,t)| &\le& C(1+ |w(x,t)|) \,\le\, C \left(1+ \int_0^t \hat\theta(x,{t'})\,\mathrm{d}{t'}\right),\\ \label{he6}
|w_t(x,t)| &\le& C \left(1+ \hat\theta(x,t) + \int_0^t \hat\theta(x,{t'})\,\mathrm{d}{t'}\right). \end{eqnarray} By \eqref{de2} and the Sobolev embedding theorem, we know that $\hat\theta$ is bounded in $L^{8/3}(\Omega\times (0,T)) \cap L^2(0,T; L^6(\Omega))$. Let us recall again the Minkowski inequality $$ \left(\int_{\Omega} \left(\int_0^t \hat\theta (x,{t'}) \,\mathrm{d}{t'}\right)^6\,\mathrm{d} x\right)^{1/6} \le \int_0^t \left(\int_{\Omega} \hat\theta^6(x,{t'})\,\mathrm{d} x\right)^{1/6} \,\mathrm{d}{t'}\,, $$ which implies that \begin{eqnarray} \label{he4a}
\int_{\Omega} \left(|w(x,t)|^6 + |\chi_t(x,t)|^6\right) \,\mathrm{d} x &\le& C \quad \mbox{ for a.\,e. } t \in (0,T),\\ \label{he5a}
\|w_t\|_{8/3} &\le& C. \end{eqnarray} Hence, the right-hand side of \eqref{he3} is bounded independently of $R$, and we have for all $t \in [0,T]$ that \begin{equation}\label{he7}
\int_0^t\int_{\Omega} |p_t|^2 \,\mathrm{d} x\,\mathrm{d} {t'} +
\int_{\Omega}|\nabla p|^2(x,t)\,\mathrm{d} x + \int_{\partial\Omega} \alpha(x)\hat\mu(p)(x,t) \,\mathrm{d} s(x) \le C\,. \end{equation} Now let $M(p)$ be as in \eqref{mu}. By \eqref{he4a}--\eqref{he7}, and by comparison in \eqref{hu1}, the term $\Delta M(p)$ is bounded in $L^{2}(\Omega\times (0,T))$, independently of $R$. In terms of the new variable $\tilde p = M(p)$, the boundary condition \eqref{be2} is nonlinear, and the $W^{2,2}$-regularity of $M(p)$ follows from considerations similar to those used in the proof of \cite[Theorem 4.1]{kpa}, inspired by \cite{jn}. We may thus employ the Gagliardo-Nirenberg inequality in the form \begin{equation}\label{gnm}
|\nabla M(p)(t)|_q \le C\left(|\nabla M(p)(t)|_2+|\nabla M(p)(t)|_2^{1-\rho}|\Delta M(p)(t)|_2^\rho\right) \end{equation} with $\rho = 3(\frac12 - \frac{1}{q})$. Together with \eqref{he7}, we conclude that \begin{equation}\label{he8a}
\int_0^T|\nabla p(t)|_q^s \,\mathrm{d} t \le C \quad \hbox{\ for}\ q \in (2,6] \ \mbox{ and }\ \frac{1}{q} + \frac{2}{3s} = \frac12. \end{equation} In particular, for $s=4$ and $s=q$ we obtain, respectively, \begin{equation}\label{he8}
\int_0^T|\nabla p(t)|_3^4 \,\mathrm{d} t \le C\,, \quad \|\nabla p\|_{10/3} \le C. \end{equation}
\subsection{Higher order estimates for the temperature}\label{temp}
The previous estimates \eqref{he4a}--\eqref{he5a} and \eqref{he8} entail that \eqref{hu5} has the form \begin{equation}\label{te1} \int_{\Omega}\left(c_0\theta_t \zeta + \kappa(\hat\theta) \nabla \theta\cdot\nabla \zeta\right)\,\mathrm{d} x + \int_{\partial\Omega} \omega(x)(\theta-\theta^*)\zeta \,\mathrm{d} s(x) = \int_{\Omega} \tilde F \zeta\,\mathrm{d} x \end{equation} for every $\zeta \in W^{1,2}(\Omega)$, with a function $\tilde F$ such that \begin{equation}\label{te2}
\|\tilde F\|_{5/3} \le C\,, \quad \int_0^T|\tilde F(t)|_{3/2}^2 \,\mathrm{d} t \le C\,, \end{equation} independently of $R$.
Assume now that for some $p_0 \ge 8/3$ we have \begin{equation}\label{te3}
\|\hat\theta\|_{p_0} \le C. \end{equation} We know that this is true for $p_0 = 8/3$ by virtue of \eqref{de2}. Set $r_0 = 2p_0/5$. Then we may put $\zeta = \hat\theta^{r_0}$ in \eqref{te1} and obtain, using Hypothesis \ref{h1}\,(ii), that \begin{equation}\label{te4}
\frac{1}{r_0+1}\int_{\Omega} \hat\theta^{r_0 + 1}(x,t)\,\mathrm{d} x + r_0 \int_0^t\int_{\Omega} \hat\theta^{r_0 + a} |\nabla\hat\theta|^2\,\mathrm{d} x\,\mathrm{d}{t'} \le C\,. \end{equation} We now denote $$ v = \hat\theta^p\,, \quad p = 1 + \frac{r_0+a}{2}\,, \quad s = \frac{r_0 + 1}{p}\,, $$ and rewrite \eqref{te4} as \begin{equation}\label{te5}
\int_{\Omega} |v|^{s}(x,t)\,\mathrm{d} x + \int_0^t\int_{\Omega} |\nabla v|^2\,\mathrm{d} x\,\mathrm{d}{t'} \le C(r_0 + 1)\,.
\end{equation} By the Gagliardo-Nirenberg inequality \eqref{gn}, we have $\|v\|_q \le C(r_0 + 1)$ for $q = 2 + \frac{2s}{3}$. Hence, \begin{equation}\label{te6}
\|\hat\theta\|_{p_1} \le C(r_0 + 1) \quad \hbox{\ for}\ \ p_1 = pq = \frac{2p_0}{3} + \frac83 + a\,. \end{equation} We now proceed by induction according to the recipe $ p_{j+1} = \frac{2p_j}{3} + \frac83 + a\,,\ r_j = \frac{2p_j}{5}\,. $ We have $\lim_{j\to \infty} p_j = 8 + 3a$. After finitely many steps, we may stop the algorithm and put $\bar p := p_j < 8 + 3a$ with \begin{equation}\label{te7}
\|\hat\theta\|_{\bar p} + \mathop{{\rm sup\,ess}\,} |\hat\theta(t)|_{\bar r+ 1} \le C\,,\quad \bar r = \frac{2\bar p}{5} > \hat a, \end{equation} with the constant $\hat a$ introduced in Hypothesis \ref{h1}\,(ii). By Proposition \ref{t2}, we may test \eqref{te1} by $\theta$, which yields \begin{equation}\label{te8}
\int_{\Omega} \theta^{2}(x,t)\,\mathrm{d} x + \int_0^t\int_{\Omega} \kappa(\hat\theta) |\nabla\theta|^2\,\mathrm{d} x\,\mathrm{d}{t'}
+ \int_0^t\int_{\partial\Omega} \omega(x) \theta^2 \,\mathrm{d} s(x)\,\mathrm{d}{t'} \le C \|\theta\|_{5/2}\,. \end{equation} Using the Gagliardo-Nirenberg inequality again, for instance, we conclude that \begin{equation}\label{te9}
\int_{\Omega} \theta^{2}(x,t)\,\mathrm{d} x + \int_0^t\int_{\Omega} \kappa(\hat\theta) |\nabla\theta|^2\,\mathrm{d} x\,\mathrm{d}{t'} + \int_0^t\int_{\partial\Omega} \omega(x) \theta^2 \,\mathrm{d} s(x)\,\mathrm{d}{t'} \le C\,. \end{equation} This enables us to derive an upper bound for the integral $\int_{\Omega} \kappa(\hat\theta) \nabla\theta\cdot\nabla\zeta \,\mathrm{d} x$, which we need in order to estimate $\theta_t$ by comparison in the equation \eqref{te1}. We have, by H\"older's inequality and Hypothesis \ref{h1}\,(ii), that \begin{eqnarray} \nonumber
\int_{\Omega} |\kappa(\hat\theta) \nabla\theta\cdot\nabla\zeta| \,\mathrm{d} x &=&
\int_{\Omega} |\kappa^{1/2}(\hat\theta) \nabla\theta\cdot\kappa^{1/2}(\hat\theta)\nabla\zeta| \,\mathrm{d} x\\ \label{e715}
&\le& C\left(\int_{\Omega}\kappa(\hat\theta) |\nabla\theta|^2\,\mathrm{d} x\right)^{1/2}
\left(\int_{\Omega} \hat\theta^{1+\hat a}|\nabla\zeta|^2\,\mathrm{d} x\right)^{1/2}. \end{eqnarray} We now choose $\hat q > 1$ such that $(1+\hat a)\hat q = 1+\bar r$, where $\bar r$ is defined in \eqref{te7}. Choosing now \begin{equation}\label{qstar} q^* = \frac{2 \hat q}{\hat q - 1}, \end{equation} we obtain from H\"older's inequality that \begin{equation}\label{e714}
\int_{\Omega} \hat\theta^{1+\hat a}|\nabla\zeta|^2\,\mathrm{d} x \le \left(\int_{\Omega} \hat\theta^{1+\bar r} \,\mathrm{d} x\right)^{1/\hat q}
\left(\int_{\Omega} |\nabla\zeta|^{q^*}\,\mathrm{d} x\right)^{2/q^*}
\le C \left(\int_{\Omega} |\nabla\zeta|^{q^*}\,\mathrm{d} x\right)^{2/q^*}\,, \end{equation} by virtue of \eqref{te7}. Eq.~\eqref{e715} then yields the bound \begin{equation}\label{e716}
\int_{\Omega} |\kappa(\hat\theta) \nabla\theta\cdot\nabla\zeta| \,\mathrm{d} x
\le C\left(\int_{\Omega}\kappa(\hat\theta) |\nabla\theta|^2\,\mathrm{d} x\right)^{1/2}
\left(\int_{\Omega} |\nabla\zeta|^{q^*}\,\mathrm{d} x\right)^{1/q^*}. \end{equation} Hence, by \eqref{te9}, \begin{equation}\label{e717}
\int_0^T\int_{\Omega} |\kappa(\hat\theta) \nabla\theta\cdot\nabla\zeta| \,\mathrm{d} x \,\mathrm{d} t
\le C \|\zeta\|_{L^2(0,T;W^{1,q^*}(\Omega))}\,. \end{equation} {}From \eqref{te2} it follows that testing with $\zeta \in L^2(0,T;W^{1,q^*}(\Omega))$ is admissible. We thus obtain from \eqref{te1} that \begin{equation}\label{e718}
\int_0^T\int_{\Omega} \theta_t \zeta \,\mathrm{d} x \,\mathrm{d} t \le C \|\zeta\|_{L^2(0,T;W^{1,q^*}(\Omega))}\,. \end{equation}
\section{Proof of Theorem \ref{t1}} \label{proo}
Let $R_i \nearrow \infty$ be a sequence such that $R_1 > C^*$ with $C^*$ as in \eqref{m19}, and let $(p,w,\chi,\theta) = (p^{(i)}, w^{(i)}, \chi^{(i)}, \theta^{(i)})$ be solutions of \eqref{hu1}--\eqref{hu6} corresponding to $R = R_i$, with $\hat\theta = \hat\theta^{(i)} = Q_{R_i}(\theta^{(i)})$ and test functions $\eta,\zeta \in W^{1,2}(\Omega)$. Our aim is to check that at least a subsequence converges as $i \to \infty$ to a solution of \eqref{wu1}--\eqref{wu5}, \eqref{le7}, with test functions $\eta\in W^{1,2}(\Omega)$, $\zeta \in W^{1,q^*}(\Omega)$ with $q^*$ as in Theorem \ref{t1}.
First, for the capillary pressure $p = p^{(i)}$ we have the estimates \eqref{m19}, \eqref{he7}, \eqref{he8}, which imply that, passing to a subsequence if necessary, \begin{eqnarray*} p^{(i)} \to p && \mbox{strongly in } L^r(\Omega\times (0,T)) \ \mbox{ for every } \ r\ge 1\,,\\ p_t^{(i)} \to p_t && \mbox{weakly in } L^2(\Omega\times (0,T))\,,\\ \nabla p^{(i)} \to \nabla p && \mbox{strongly in } L^r(\Omega\times (0,T)) \ \mbox{ for every } \ 1 \le r < \frac{10}{3}\,. \end{eqnarray*} We easily show that \begin{equation}\label{pr2}
Q_{R_i}\bigl(|\nabla p^{(i)}|^2\bigr) \to |\nabla p|^2 \ \mbox{ strongly in } L^r(\Omega\times (0,T)) \ \mbox{ for every } \ 1 \le r < \frac{5}{3}\,. \end{equation} Indeed, let $\Omega^{(i)}_T \subset \Omega\times (0,T)$ be the set of all $(x,t) \in \Omega\times (0,T)$
such that $|\nabla p^{(i)}(x,t)|^2 > R_i$. By \eqref{he8}, we have $$
C \ge \int_0^T\int_{\Omega} |\nabla p^{(i)}(x,t)|^{10/3} \,\mathrm{d} x\,\mathrm{d} t \ge \iint_{\Omega^{(i)}_T}|\nabla p^{(i)}(x,t)|^{10/3} \,\mathrm{d} x\,\mathrm{d} t
\ge |\Omega^{(i)}_T| R_i^{5/3}\,, $$
hence $|\Omega^{(i)}_T| \le C R_i^{-5/3}$. For $r < \frac{5}{3}$, we use H\"older's inequality to get the estimate \begin{eqnarray*}
&&\hspace{-12mm}\int_0^T\int_{\Omega} \left|Q_{R_i}(|\nabla p^{(i)}|^2) - |\nabla p^{(i)}|^2\right|^r\,\mathrm{d} x\,\mathrm{d} t
= \iint_{\Omega^{(i)}_T}\left|R_i - |\nabla p^{(i)}|^2\right|^r\,\mathrm{d} x\,\mathrm{d} t
\le \iint_{\Omega^{(i)}_T} |\nabla p^{(i)}|^{2r}\,\mathrm{d} x\,\mathrm{d} t \\
&\le& \left(\iint_{\Omega^{(i)}_T} |\nabla p^{(i)}|^{10/3}\,\mathrm{d} x\,\mathrm{d} t\right)^{3r/5} |\Omega^{(i)}_T|^{1-(3r/5)}\,, \end{eqnarray*} and \eqref{pr2} follows.
For the temperature $\theta = \theta^{(i)}$, we proceed in a similar way. From the compactness result in \cite[Theorem 5.1]{li}, it follows that, for a subsequence, $$ \theta^{(i)} \to \theta \ \mbox{ strongly in } L^2(\Omega\times (0,T)). $$ Furthermore, by \eqref{te7}, $\hat\theta^{(i)}$ are uniformly bounded in $L^{r}(\Omega\times (0,T))$ for every $r < 8+3a$. A similar argument as above yields that $$ \hat\theta^{(i)} \to \theta \ \mbox{ strongly in } L^r(\Omega\times (0,T)) \ \mbox{ for every } \ 1 \le r < 8+3a\,. $$ Indeed, by \eqref{te9} and \eqref{e718}, \begin{eqnarray*} \theta_t^{(i)} \to \theta_t && \mbox{weakly in } L^2(0,T; W^{-1,q^*}(\Omega))\,,\\ \nabla \theta^{(i)} \to \nabla \theta && \mbox{weakly in } L^2(\Omega\times (0,T))\,. \end{eqnarray*} The strong convergences of $w^{(i)} \to w$, $w^{(i)}_t \to w_t$, $\chi^{(i)} \to \chi$, $\chi^{(i)}_t \to \chi_t$ are handled using the estimates \eqref{he4}--\eqref{he6} similarly as in the proof of Proposition \ref{t2} at the end of Section \ref{cut}. This enables us to pass to the limit as $R\nearrow \infty$ in the system \eqref{hu1}--\eqref{hu6} and thus to complete the proof of Theorem \ref{t1}.
\end{document}
\begin{document}
\title{The quantum query complexity of learning multilinear polynomials}
\author{Ashley Montanaro\footnote{Centre for Quantum Information and Foundations, Department of Applied Mathematics and Theoretical Physics, University of Cambridge, UK; {\tt [email protected]}.}}
\maketitle
\begin{abstract} In this note we study the number of quantum queries required to identify an unknown multilinear polynomial of degree $d$ in $n$ variables over a finite field $\mathbb{F}_q$. Any bounded-error classical algorithm for this task requires $\Omega(n^d)$ queries to the polynomial. We give an exact quantum algorithm that uses $O(n^{d-1})$ queries for constant $d$, which is optimal. In the case $q=2$, this gives a quantum algorithm that uses $O(n^{d-1})$ queries to identify a codeword picked from the binary Reed-Muller code of order $d$. \end{abstract}
\section{Introduction}
A central problem in computational learning theory is to determine the complexity of identifying an unknown function of a certain type, given access to that function via an oracle. We say that a class $\mathcal{F}$ of functions can be {\em learned} using $t$ queries if any function $f \in \mathcal{F}$ can be identified with $t$ uses of $f$ (perhaps allowing some probability of error). It is known that some classes of functions can be learned more efficiently by quantum algorithms than is possible classically. In particular, one of the earliest results in the field of quantum computation is that the class of linear functions $\mathbb{F}_2^n\rightarrow \mathbb{F}_2$ (also known as Hadamard codewords) can be learned using a single quantum query~\cite{bernstein97}, whereas $\Omega(n)$ queries are required classically. Here we generalise this result to quantum learning of {\em multilinear} functions over general finite fields.
Let $\mathbb{F}_q$ denote the finite field with $q=p^r$ elements for some prime $p$. Every function $f:\mathbb{F}_q^n \rightarrow \mathbb{F}_q$ can be represented as a polynomial in $n$ variables over $\mathbb{F}_q$. $f$ is said to be a degree $d$ polynomial if it can be written as a polynomial whose every term is of total degree at most $d$. For example, the function $f:\mathbb{F}_5^3 \rightarrow \mathbb{F}_5$ defined by $f(x) = 2 x_1 + 4 x_1^2 x_2 + x_1 x_2 x_3$ is a degree 3 polynomial. The set of polynomials of degree $d$ in $n$ variables over $\mathbb{F}_q$ is known as the (generalised) Reed-Muller code of order $d$ over $\mathbb{F}_q$.
We say that a degree $d$ polynomial $f:\mathbb{F}_q^n \rightarrow \mathbb{F}_q$ is multilinear if it can be written as
\[ f(x) = \sum_{S \subseteq [n],|S|\le d} \alpha_S \prod_{i \in S} x_i \]
for some coefficients $\alpha_S \in \mathbb{F}_q$, where $[n]$ denotes the set $\{1,\dots,n\}$. Note that in the case $S = \emptyset$ we define $\prod_{i \in S} x_i = 1$. For example, any multilinear polynomial of degree 3 can be written as
\[ f(x) = \alpha_{\emptyset} + \sum_{i} \alpha_{\{i\}} x_i + \sum_{i<j} \alpha_{\{i,j\}} x_i x_j + \sum_{i<j<k} \alpha_{\{i,j,k\}} x_i x_j x_k. \]
Technically, such functions are {\em multiaffine} rather than multilinear, as they are affine in each variable; however, we use the ``multilinear'' terminology for consistency with prior work. In particular, note that in this terminology, linear functions $f:\mathbb{F}_q^n \rightarrow \mathbb{F}_q$ (i.e.\ functions such that $f(x+y) = f(x) + f(y)$) are equivalent to degree 1 multilinear polynomials with no constant term. In the important special case $q=2$ (Boolean functions), every function $f:\mathbb{F}_2^n \rightarrow \mathbb{F}_2$ is multilinear.
Given the ability to query a multilinear degree $d$ polynomial $f$ on arbitrary $x \in \mathbb{F}_q^n$, we would like to determine ({\em learn}) $f$ using the smallest possible number of queries. A straightforward classical algorithm can solve this problem by querying $f(x)$ for all strings $x \in \mathbb{F}_q^n$ that contain only 0 and 1, and such that $|x| \le d$. (We write $|x|$ for the Hamming weight of $x\in \mathbb{F}_q^n$, i.e.\ the number of non-zero components.) To see this, first consider the special case where for some $k$, $\alpha_S = 0$ for all $S$ such that $|S| < k$. Then knowing $f(x)$ for all $x$ of the above form such that $|x|=k$ is sufficient to determine all of the degree $k$ coefficients of $f$ (note that this relies on $f$ being multilinear). More generally, let $f_k$ denote the degree $k$ part of $f$, i.e.
\[ f_k(x) = \sum_{S \subseteq [n],|S|=k} \alpha_S \prod_{i \in S} x_i.\]
For any $k$, once $f_{\ell}$ is known for all $\ell \le k$, the degree $k+1$ coefficients can be determined from the inputs of Hamming weight $k+1$: whenever $f$ is queried on $x$, subtract $\sum_{\ell=0}^k f_{\ell}(x)$ from the result to simulate that $\alpha_S=0$ for all $S$ such that $|S|\le k$. The algorithm can therefore learn $f$ with certainty using $1 + n + \binom{n}{2}+\binom{n}{3}+\dots+\binom{n}{d}$ queries, which is $O(n^d)$ for constant $d$. In the special case of functions $f:\mathbb{F}_2^n \rightarrow \mathbb{F}_2$, all polynomials are multilinear. This implies that the class of all degree $d$ polynomials $f:\mathbb{F}_2^n \rightarrow \mathbb{F}_2$ can be learned using $O(n^d)$ queries.
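The procedure just described is elementary to implement. The following sketch (our own illustrative code, not part of the argument) learns a random multilinear polynomial over a prime field by querying it only on $0$--$1$ inputs of Hamming weight at most $d$ and peeling off the coefficients level by level:
\begin{verbatim}
import itertools, math, random

# Classical learning of a multilinear degree-d polynomial over F_p (p prime):
# on the 0-1 input 1_T we have f(1_T) = sum over subsets S of T of alpha_S, so
# once all coefficients of degree < |T| are known, alpha_T can be read off.
def learn_multilinear(query, n, d, p):
    alpha = {}
    for k in range(d + 1):
        for T in itertools.combinations(range(n), k):
            x = [1 if i in T else 0 for i in range(n)]
            lower = sum(alpha[S] for S in alpha if set(S) < set(T))
            alpha[T] = (query(x) - lower) % p
    return alpha

# quick self-test with a random degree-2 multilinear polynomial over F_5
n, d, p = 4, 2, 5
true = {S: random.randrange(p) for k in range(d + 1)
        for S in itertools.combinations(range(n), k)}
f = lambda x: sum(c * math.prod(x[i] for i in S) for S, c in true.items()) % p
assert learn_multilinear(f, n, d, p) == true
\end{verbatim}
The number of queries made is exactly $1 + n + \binom{n}{2}+\dots+\binom{n}{d}$, matching the count above.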
It is also easy to see that the above algorithm is exactly optimal in an information-theoretic sense. As the number of distinct multilinear degree $d$ polynomials of $n$ variables over $\mathbb{F}_q$ is equal to
\[ q^{1 + n + \binom{n}{2} + \binom{n}{3} + \dots + \binom{n}{d}}, \]
and as a classical query to $f$ only provides $\log_2 q$ bits of information, any classical algorithm must make $1 + n+\binom{n}{2}+\binom{n}{3}+\dots+\binom{n}{d} = \Omega(n^d)$ queries to $f$ in order to identify it with certainty. A similar bound can be proven for bounded-error algorithms. Indeed, let $f$ be picked uniformly at random, and consider an algorithm (without loss of generality deterministic) that makes at most $c$ queries to $f$ before it outputs an answer. Such an algorithm can output the correct answer for at most $q^c$ functions $f$, and hence succeeds with probability at most $q^{c-(1 + n + \binom{n}{2} + \binom{n}{3} + \dots + \binom{n}{d})}$.
Using similar techniques, one can find a lower bound for {\em quantum} query algorithms~\cite{hoyer05}. In the standard quantum query model, the algorithm accesses $f$ via the unitary operation $O_f \ket{x}\ket{y} = \ket{x}\ket{y + f(x)}$, where $x \in \mathbb{F}_q^n$, $y \in \mathbb{F}_q$. We formalise a lower bound on the number of queries required to identify $f$ in this model as the following proposition\footnote{In the case $q = 2$, this lower bound can also be obtained from independent results of Farhi et al~\cite{farhi99} and Servedio and Gortler~\cite{servedio01}.}.
\begin{prop} Let $f:\mathbb{F}_q^n \rightarrow \mathbb{F}_q$ be a multilinear degree $d$ polynomial over $\mathbb{F}_q$. Then any quantum query algorithm which learns $f$ with bounded error must make $\Omega(n^{d-1})$ queries to $f$. \end{prop}
\begin{proof} Each query can be seen as a round of a communication process, where in each round the algorithm sends the registers $\ket{x}$ and $\ket{y}$ to the oracle, using $(n + 1) \log_2 q$ qubits of communication; the oracle then performs the map $\ket{x}\ket{y} \mapsto \ket{x}\ket{y + f(x)}$ and returns the registers to the algorithm. Let $f$ be picked uniformly at random from the set of degree $d$ multilinear polynomials, let $X$ be the corresponding random variable, and let $Y$ be the random variable corresponding to the function which is output by the algorithm. By Holevo's theorem \cite{holevo73} (see also~\cite{cleve98}), after $r$ rounds of communication, the mutual information between $X$ and $Y$ satisfies the upper bound
\[ I(X:Y) \le 2r (n+1) \log_2 q. \]
On the other hand, Fano's inequality~\cite{cover06} states that the probability $P_e$ of identifying $f$ incorrectly satisfies the lower bound
\[ P_e \ge 1 - \frac{I(X:Y)+1}{\log_2\left(q^{1 + n+\binom{n}{2}+\binom{n}{3}+\dots+\binom{n}{d}}\right)}, \]
which thus implies that
\[ P_e \ge 1 - \frac{2r (n+1) + 1/\log_2 q}{1 + n+\binom{n}{2}+\binom{n}{3}+\dots+\binom{n}{d}}. \]
For the error probability to be bounded above by a constant smaller than $1$, we must have $r = \Omega(n^{d-1})$.
\end{proof}
The main result of this note is that this asymptotic scaling can actually be achieved.
\begin{thm} \label{thm:main} Let $f:\mathbb{F}_q^n \rightarrow \mathbb{F}_q$ be a multilinear degree $d$ polynomial over $\mathbb{F}_q$. Then there is an exact quantum algorithm which learns $f$ with certainty using $1 + \sum_{i=1}^d 2^{i-1} \binom{n}{i-1}$ queries to $f$, which is $O(n^{d-1})$ for constant $d$. \end{thm}
The case $d=1$, $q=2$ of this result was previously proven by Bernstein and Vazirani~\cite{bernstein97}, while a bounded-error quantum algorithm using $O(n)$ queries for the case $d=2$, $q=2$ was more recently given by R\"otteler~\cite{roetteler09}; by contrast, the algorithm given here is exact and works for all $d$ and all fields $\mathbb{F}_q$. In related work, a quantum algorithm for estimating quadratic forms over the reals using $O(n)$ queries had previously been given by Jordan \cite[Appendix D]{jordan08}.
\section{Proof of Theorem \ref{thm:main}}
The only quantum ingredient we will need to prove Theorem \ref{thm:main} is the following lemma, which is implicit in \cite{beaudrap02,vandam06} and is a simple extension of the Bernstein-Vazirani algorithm \cite{bernstein97} for identifying linear functions over $\mathbb{F}_2$.
\newcounter{linlem}\setcounter{linlem}{\value{thm}}
\begin{lem}[\cite{beaudrap02,vandam06}] \label{lem:linear} Let $f:\mathbb{F}_q^n \rightarrow \mathbb{F}_q$ be linear, and let $g:\mathbb{F}_q^n \rightarrow \mathbb{F}_q$ be the function $g(x) = f(x) + \beta$ for some constant $\beta \in \mathbb{F}_q$. Then $f$ can be determined exactly using one quantum query to $g$. \end{lem}
For completeness, we give a full proof of Lemma \ref{lem:linear} in Appendix \ref{sec:linproof}.
We will derive a quantum algorithm to learn an unknown multilinear degree $d$ polynomial $f$ by introducing a {\em linear} function $f_S$ of $n$ variables which can be produced using a relatively small number of queries to $f$, and from which $f$ can be determined using Lemma \ref{lem:linear}. This technique is somewhat similar to the approach used to learn quadratic polynomials with bounded error in the work~\cite{roetteler09}. A related function was previously used by Kaufman and Ron \cite{kaufman06} to produce an efficient classical {\em tester} for low-degree polynomials over finite fields.
For any $k$-subset $S \subseteq [n]$, let $S_j$ denote the $j$'th element of $S$, where $S$ is considered as an increasing sequence of integers. For $i \in [n]$, let $e_i$ denote the $i$'th element in the standard basis for the vector space $\mathbb{F}_q^n$. For any $f:\mathbb{F}_q^n \rightarrow \mathbb{F}_q$ and any subset $S \subseteq [n]$, define the function $f_S:\mathbb{F}_q^n \rightarrow \mathbb{F}_q$ as follows:
\[ f_S(x) = \sum_{\beta_1,\dots,\beta_k \in \{0,1\}} (-1)^{k - \sum_{i=1}^{k} \beta_i}\,f\left(x + \sum_{j=1}^k \beta_j e_{S_j}\right), \]
where the inner sum is over $\mathbb{F}_q^n$ and the outer sum is over $\mathbb{F}_q$. For example, for $S = \{1,2\}$, $f_S(x) = f(x) - f(x + e_1) - f(x + e_2) + f(x + e_1 + e_2)$. When $q=2$, $f_S(x)$ sums $f$ over the affine subspace of $\mathbb{F}_2^n$ positioned at $x$ and spanned by $\{e_i:i\in S\}$. It is clear that a query to $f_S$ can be simulated using $2^k$ queries to $f$. One way of understanding $f_S$ is in terms of {\em discrete derivative} operators. If we define the discrete derivative of $f$ in direction $i \in [n]$ as $(\Delta_i f)(x) = f(x + e_i) - f(x)$, then $f_S(x) = (\Delta_{S_1} \Delta_{S_2} \dots \Delta_{S_k} f)(x)$. In other words, $f_S$ is the function obtained by taking the derivative of $f$ with respect to all of the variables in $S$.
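To make this definition concrete, the following sketch (our own illustrative code, not taken from this note) evaluates $f_S$ via the alternating sum above and checks, for a random function on a small prime field, that it coincides with the iterated discrete derivative $(\Delta_{S_1} \Delta_{S_2} \dots \Delta_{S_k} f)$:
\begin{verbatim}
import itertools, random

# Check that the alternating sum defining f_S equals the iterated discrete
# derivative Delta_{S_1} ... Delta_{S_k} f (random table-based f over F_5).
p, n = 5, 3
table = {x: random.randrange(p) for x in itertools.product(range(p), repeat=n)}
f = lambda x: table[tuple(v % p for v in x)]

def f_S(f, x, S):
    k, total = len(S), 0
    for beta in itertools.product((0, 1), repeat=k):
        shifted = list(x)
        for j, i in enumerate(S):
            shifted[i] += beta[j]
        total += (-1) ** (k - sum(beta)) * f(shifted)
    return total % p

def delta(g, i):   # discrete derivative in direction i
    return lambda x: (g([v + (1 if j == i else 0) for j, v in enumerate(x)]) - g(x)) % p

for S in [(0,), (0, 2), (0, 1, 2)]:
    g = f
    for i in S:
        g = delta(g, i)
    assert all(f_S(f, list(x), S) == g(list(x))
               for x in itertools.product(range(p), repeat=n))
\end{verbatim}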
We will be interested in querying $f_S$ for sets $S$ of size $d-1$. In this case, we have the following characterisation for multilinear polynomials $f$.
\newcounter{fslem}\setcounter{fslem}{\value{thm}}
\begin{lem} \label{lem:fs} Let $f:\mathbb{F}_q^n \rightarrow \mathbb{F}_q$ be a multilinear polynomial of degree $d$ with expansion
\[ f(x) = \sum_{T \subseteq [n],|T| \le d} \alpha_T \prod_{i \in T} x_i. \]
Then, for any $S$ such that $|S|=d-1$,
\[ f_S(x) = \alpha_S + \sum_{k \notin S} \alpha_{S \cup \{k\}} x_k. \] \end{lem}
Lemma \ref{lem:fs} follows easily from expressing $f_S$ in terms of discrete derivatives; we also give a simple direct proof in Appendix \ref{sec:fsproof}. We are now ready to describe a quantum algorithm which uses $f_S$ to learn the degree $d$ component of $f$.
\begin{algorithm}[H] \label{alg:learntop}
\ForEach{$S \subseteq [n]$ such that $|S|=d-1$}{ Use one query to $f_S$ to learn the coefficients $\alpha_{S \cup \{k\}}$, for all $k \notin S$\; }
Output the function $f_d$ defined by $f_d(x) = \sum_{S \subseteq [n],|S|=d} \alpha_S \prod_{i \in S} x_i$\; \caption{Learning the degree $d$ component of $f$} \end{algorithm}
Correctness of this algorithm follows from Lemmas \ref{lem:linear} and \ref{lem:fs}. By Lemma \ref{lem:fs}, for any $S$ such that $|S|=d-1$, knowledge of the degree 1 component of $f_S$ is sufficient to determine $\alpha_{S \cup \{k\}}$ for all $k \notin S$. Therefore, knowing the degree 1 part of $f_S$ for all $S \subseteq [n]$ such that $|S|=d-1$ is sufficient to completely determine all degree $d$ coefficients of $f$. By Lemma \ref{lem:linear}, for any $S$ with $|S|=d-1$, the degree 1 component of $f_S$ can be determined with one quantum query to $f_S$. This implies that Algorithm \ref{alg:learntop} completely determines the degree $d$ component of $f$ using $\binom{n}{d-1}$ queries to $f_S$, each of which uses $2^{d-1}$ queries to $f$.
Once the degree $d$ component of $f$ has been learned, $f$ can be reduced to a degree $d-1$ polynomial by crossing out the degree $d$ part whenever the oracle for $f$ is called. That is, whenever the oracle is called on $x$, we subtract $f_d(x)$ from the result (recall $f_d$ is the degree $d$ part of $f$), at no extra query cost. Inductively, $f$ can be determined completely using
\[ 2^{d-1} \binom{n}{d-1} + 2^{d-2} \binom{n}{d-2} + \dots + 2n + 1 + 1 \]
queries; the last query is to determine the constant term $\alpha_{\emptyset}$, which can be achieved by classically querying $f(0^n)$. The number of queries used is therefore $O(n^{d-1})$ for constant $d$, completing the proof of Theorem \ref{thm:main}.
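To put these counts into perspective (an illustrative calculation only): for $d=3$ and $n=100$ the algorithm uses $1 + \binom{100}{0} + 2\binom{100}{1} + 4\binom{100}{2} = 20002$ quantum queries, whereas the classical procedure from the introduction uses $1 + 100 + \binom{100}{2} + \binom{100}{3} = 166751$ queries.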
\section{Quantum learning of linear functions} \label{sec:linproof}
In order to prove Lemma \ref{lem:linear}, we will use the quantum Fourier transform (QFT) over general finite fields. This was originally defined by de Beaudrap, Cleve and Watrous \cite{beaudrap02} and independently by van Dam, Hallgren and Ip \cite{vandam06}. The QFT over $\mathbb{F}_q$ is defined as the unitary operation
\[ Q_q \ket{x} = \frac{1}{\sqrt{q}} \sum_{y \in \mathbb{F}_q} \omega^{\Tr(xy)} \ket{y}, \]
where $\omega = e^{2 \pi i/p}$ (recall $q=p^r$) and the trace function $\Tr:\mathbb{F}_q \rightarrow \mathbb{F}_p$ is defined by $\Tr(x) := x + x^p + x^{p^2} + \dots + x^{p^{r-1}}$. If $q$ is prime (i.e.\ $r=1$), then of course $\Tr(x) = x$. The trace is linear: $\Tr(x + y) = \Tr(x) + \Tr(y)$ (see \cite{lidl97} for the proof of this and other standard facts about finite fields). This allows the $n$-fold tensor product of QFTs to be written concisely as
\[ Q_q^{\otimes n} \ket{x} = \frac{1}{q^{n/2}} \sum_{y \in \mathbb{F}_q^n} \omega^{\Tr (x \cdot y)} \ket{y}, \]
where $x \cdot y = \sum_{i=1}^n x_i y_i$, the sum being taken over $\mathbb{F}_q$.
For any function $f:\mathbb{F}_q \rightarrow \mathbb{F}_q$, let $U_f$ be the unitary operator that maps $\ket{x} \mapsto \omega^{\Tr(f(x))} \ket{x}$. Given access to $f$, $U_f$ can be implemented using a standard phase kickback trick as follows.
\begin{lem}[\cite{beaudrap02,vandam06}] \label{lem:kickback} $U_f$ can be implemented using one query to $f$. \end{lem}
\begin{proof} To implement $U_f$, append an ancilla register $\ket{y}$, $y \in \mathbb{F}_q$, in the initial state $\ket{1}$. Apply $Q_q^{-1}$ to this register to produce
\[ \frac{1}{\sqrt{q}} \sum_{y \in \mathbb{F}_q} \omega^{-\Tr(y)} \ket{y}, \]
then apply $O_f$ to both registers (recall $O_f \ket{x}\ket{y} = \ket{x}\ket{y + f(x)}$). For any $x \in \mathbb{F}_q$, the initial state $\ket{x}\ket{1}$ is mapped to
\[ \frac{1}{\sqrt{q}} \ket{x} \sum_{y \in \mathbb{F}_q} \omega^{-\Tr(y)} \ket{y + f(x)} = \frac{1}{\sqrt{q}} \ket{x} \sum_{y \in \mathbb{F}_q} \omega^{-\Tr (y-f(x))} \ket{y} = \omega^{\Tr f(x)} \ket{x} \frac{1}{\sqrt{q}} \sum_{y \in \mathbb{F}_q} \omega^{-\Tr(y)} \ket{y}, \]
where we use the linearity of the trace function. As the second register is left unchanged by $O_f$, it can be ignored. \end{proof}
We are now ready to prove Lemma \ref{lem:linear}.
\setcounter{thm}{\value{linlem}}
\begin{lem}[\cite{beaudrap02,vandam06}] Let $f:\mathbb{F}_q^n \rightarrow \mathbb{F}_q$ be linear, and let $g:\mathbb{F}_q^n \rightarrow \mathbb{F}_q$ be the function $g(x) = f(x) + \beta$ for some constant $\beta \in \mathbb{F}_q$. Then $f$ can be determined exactly using one quantum query to $g$. \end{lem}
\begin{proof} First observe that $f$ will be linear if and only if $f(x) = a \cdot x = \sum_{i=1}^n a_i x_i$ for some $a \in \mathbb{F}_q^n$. Create the state
\[ \ket{\psi_g} := \frac{1}{q^{n/2}} \sum_{x \in \mathbb{F}_q^n} \omega^{\Tr (a \cdot x + \beta)} \ket{x} \]
via the technique of Lemma \ref{lem:kickback}, using one query to $g$. Now apply the $n$-fold tensor product of the inverse quantum Fourier transform to produce
\[ (Q_q^{-1})^{\otimes n} \ket{\psi_g} = \frac{1}{q^n} \sum_{x \in \mathbb{F}_q^n} \omega^{\Tr (a \cdot x + \beta)} \sum_{y \in \mathbb{F}_q^n} \omega^{-\Tr(x \cdot y)} \ket{y} = \frac{1}{q^n} \omega^{\Tr(\beta)} \sum_{y \in \mathbb{F}_q^n} \left( \sum_{x \in \mathbb{F}_q^n} \omega^{\Tr ((a-y) \cdot x)} \right) \ket{y}. \]
Note that $\beta$ has been relegated to an unobservable global phase, and the sum over $x$ will be zero unless $y=a$, in which case it will equal $q^n$. A measurement in the computational basis therefore yields $a$ with certainty, which suffices to determine $f$. \end{proof}
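For prime $q$, where $\Tr(x)=x$, the argument above is easy to check numerically. The following sketch (purely illustrative code of ours, not part of the proof) prepares $\ket{\psi_g}$ for random $a$ and $\beta$, applies $(Q_q^{-1})^{\otimes n}$ as an explicit matrix, and confirms that all of the probability mass lies on $\ket{a}$:
\begin{verbatim}
import numpy as np

# Toy simulation of the learning step for q prime (so Tr(xy) = xy):
# prepare |psi_g>, apply the inverse QFT on every register, measure.
q, n = 5, 3
rng = np.random.default_rng(0)
a = rng.integers(q, size=n)
beta = int(rng.integers(q))
omega = np.exp(2j * np.pi / q)

# amplitudes omega^{a.x + beta} / q^{n/2}, first register most significant
xs = np.array(np.meshgrid(*[range(q)] * n, indexing="ij")).reshape(n, -1).T
psi = omega ** ((xs @ a + beta) % q) / q ** (n / 2)

Qinv = np.array([[omega ** (-(x * y)) for x in range(q)] for y in range(q)]) / np.sqrt(q)
U = Qinv
for _ in range(n - 1):
    U = np.kron(U, Qinv)
out = U @ psi

probs = np.abs(out) ** 2
idx = int(np.argmax(probs))
recovered = [(idx // q ** (n - 1 - i)) % q for i in range(n)]
assert recovered == a.tolist() and abs(probs[idx] - 1.0) < 1e-9
print(recovered, a.tolist())
\end{verbatim}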
\section{Proof of Lemma \ref{lem:fs}} \label{sec:fsproof}
We finally prove Lemma \ref{lem:fs}, which we restate for convenience.
\setcounter{thm}{\value{fslem}}
\begin{lem} Let $f:\mathbb{F}_q^n \rightarrow \mathbb{F}_q$ be a multilinear polynomial of degree $d$ with expansion
\[ f(x) = \sum_{T \subseteq [n],|T| \le d} \alpha_T \prod_{i \in T} x_i. \]
Then, for any $S$ such that $|S|=d-1$,
\[ f_S(x) = \alpha_S + \sum_{k \notin S} \alpha_{S \cup \{k\}} x_k. \] \end{lem}
\begin{proof}
For brevity, write $|\beta| = \sum_{i=1}^{d-1} \beta_i$. Let $\delta_{xy}$ be the Dirac delta function ($\delta_{xy}=1$ if $x=y$, and $\delta_{xy}=0$ otherwise). By the definition of $f_S$, for any $x \in \mathbb{F}_q^n$ we have
\begin{eqnarray*} f_S(x) &=& \sum_{\beta_1,\dots,\beta_{d-1} \in \{0,1\}} (-1)^{d-1-|\beta|} \sum_{T \subseteq [n],|T| \le d} \alpha_T \prod_{i \in T} \left(x_i + \sum_{j=1}^{d-1} \beta_j (e_{S_j})_i\right) \\
&=& (-1)^{d-1} \sum_{T \subseteq [n],|T| \le d} \alpha_T \sum_{\beta_1,\dots,\beta_{d-1} \in \{0,1\}} (-1)^{|\beta|} \prod_{i \in T} \left(x_i + \sum_{j=1}^{d-1} \beta_j \delta_{S_ji}\right). \end{eqnarray*}
Now note that for all $T$ such that $S \nsubseteq T$, the sum over $\beta_1,\dots,\beta_{d-1}$ will equal 0. This is because in this case there must exist an index $j \in [d-1]$ such that $S_j \notin T$, so for this $j$, $\beta_j$ does not appear in the product over $T$. So, after summing over the $\beta_i$ such that $i \neq j$, we are left with the sum $\sum_{\beta_j\in\{0,1\}} (-1)^{\beta_j} K_T$ for some constant $K_T$; this evaluates to 0 for any $K_T$. As $|S|=d-1$ and $|T| \le d$, this implies that we can rewrite $f_S(x)$ as
\begin{eqnarray*} f_S(x) &=& (-1)^{d-1}\alpha_S \sum_{\beta_1,\dots,\beta_{d-1} \in \{0,1\}} (-1)^{|\beta|} \prod_{i \in S} \left(x_i + \sum_{j=1}^{d-1} \beta_j \delta_{S_ji}\right)\\
&+& (-1)^{d-1} \sum_{k \notin S} \alpha_{S \cup \{k\}} \sum_{\beta_1,\dots,\beta_{d-1} \in \{0,1\}} (-1)^{|\beta|} \prod_{i \in S \cup \{k\}} \left(x_i + \sum_{j=1}^{d-1} \beta_j \delta_{S_ji}\right)\\
&=& (-1)^{d-1} \alpha_S \sum_{\beta_1,\dots,\beta_{d-1} \in \{0,1\}} (-1)^{|\beta|} \prod_{i=1}^{d-1} \left(x_{S_i} + \beta_i \right)\\
&+& (-1)^{d-1} \sum_{k \notin S} \alpha_{S \cup \{k\}} \sum_{\beta_1,\dots,\beta_{d-1} \in \{0,1\}} (-1)^{|\beta|} x_k \prod_{i=1}^{d-1} \left(x_{S_i} + \beta_i \right)\\ &=& (-1)^{d-1} \left(\prod_{i=1}^{d-1} \left( \sum_{\beta_i\in \{0,1\}} (-1)^{\beta_i} (x_{S_i} + \beta_i) \right) \right) \left( \alpha_S + \sum_{k \notin S} \alpha_{S \cup \{k\}}x_k \right)\\ &=& \alpha_S + \sum_{k \notin S} \alpha_{S \cup \{k\}}x_k \end{eqnarray*}
as claimed. \end{proof}
\end{document}
\begin{document}
\title{The elastic trefoil is the twice covered circle}
\begin{abstract} We investigate the elastic behavior of knotted loops of springy wire. To this end we minimize the classic bending energy~$\ensuremath{E_{\mathrm{bend}}}=\int\ensuremath{\varkappa}^2$ together with a small multiple of ropelength~$\ensuremath{\mathcal R}=\textnormal{length}/\textnormal{thickness}$ in order to penalize selfintersection. Our main objective is to characterize {\it elastic knots}, i.e., all limit configurations of energy minimizers of the total energy
$\ensuremath{E_\ensuremath{\vartheta}}:=\ensuremath{E_{\mathrm{bend}}}+\ensuremath{\vartheta}\ensuremath{\mathcal R}$ as $\vartheta$ tends to zero. The elastic unknot
turns out to be the round circle
with bending energy $(2\pi)^2$. For all
(non-trivial) knot
classes for which the natural lower bound $(4\pi)^2$ for the bending energy
is sharp, the respective elastic knot is the twice covered circle. The
knot classes for which $(4\pi)^2$ is sharp are precisely the
$(2,b)$-torus knots for odd $b$ with $|b|\ge 3$
(containing the trefoil). In particular,
the elastic trefoil is the twice covered circle. \end{abstract}
\paragraph{Keywords:} Knots, torus knots, bending energy, ropelength, energy minimizers.
\paragraph{AMS Subject Classification:} 49Q10, 53A04, 57M25, 74B05
\section{Introduction}
The central issue addressed in this paper is the following: {\rm Knotted loops made of elastic wire spring into some (not necessarily unique) stable configurations when released. Can one characterize these configurations?}
There are (at least) three beautiful toy models of such springy knots designed by J.~Langer; see the images in Figure \ref{fig:springy-knots}. And one may ask: why isn't there a springy trefoil? Simply experimenting with an elastic wire with a hinge reveals the answer: the final shape of the elastic trefoil would be too boring to play with, forming two circular flat loops stacked on top of each other; see the image on the bottom right of Figure \ref{fig:springy-knots}.
\begin{figure}
\caption{Springy knots: figure-eight knot, mathematician's loop, and Chinese
button knot.
Wire models manufactured by {\sc why knots}, Box 635, Aptos, CA 95003,
in 1980;
coloured photographs by B. Bollwerk, Aachen.}
\label{fig:springy-knots}
\end{figure}
Mathematically, the classification of elastic knots is a fascinating problem, and our aim is to justify the behaviour of the trefoil and of more general torus knots by means of the simplest possible model in elasticity. Ignoring all effects of extension and shear, the wire is represented by a sufficiently smooth closed curve $\gamma:\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}}\to\ensuremath{\mathbb{R}}^3$ of unit length and parametrized by arclength (referred to as {\it unit loop}). We follow Bernoulli's approach and consider the {\it bending energy} \begin{equation}\label{bending} \ensuremath{E_{\mathrm{bend}}}(\gamma):=\int_\gamma\ensuremath{\varkappa}^2\ensuremath{\,\mathrm{d}} s \end{equation}
as the only intrinsic elastic energy---neglecting any additional torsional effects, and we also exclude external forces and friction that might be present in Langer's toy models. Here, $\ensuremath{\varkappa}=|\gamma''|$ is the classic local curvature of the curve. To respect a given knot class when minimizing the bending energy we have to preclude self-crossings. In principle we could add any self-repulsive {\it knot energy} for that matter, imposing infinite energy barriers between different knot classes; see, e.g., the recent surveys \cite{blatt-reiter-proc,blatt-reiter-mbmb,randy-project_2013,isaac_2014} on such energies and their impact on geometric knot theory. But a solid (albeit thin) wire motivates a steric constraint in form of a fixed (small) thickness of all curves in competition. This, and the geometric rigidity it imposes on the curves lead us to adding a small amount of {\it ropelength} $\ensuremath{\mathcal R}$ to form the \emph{total energy} \begin{equation}\label{total-energy} \ensuremath{E_\ensuremath{\vartheta}} := \ensuremath{E_{\mathrm{bend}}}+\ensuremath{\vartheta}\ensuremath{\mathcal R},\qquad\ensuremath{\vartheta} >0, \end{equation} to be minimized within a prescribed tame\footnote{A
knot class is called \emph{tame} if it contains polygons, i.e., piecewise (affine) linear loops. Any knot class containing smooth curves is tame, see Crowell and Fox~\cite[App.~I]{crowell-fox}, and vice versa, any tame knot class contains smooth representatives. Consequently, $\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})\ne\emptyset$ if and only if $\ensuremath{\mathcal{K}}$ is tame.} knot class $\ensuremath{\mathcal{K}}$, that is, on the class $\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$ of all unit loops representing $\ensuremath{\mathcal{K}}$. As ropelength is defined as the quotient of length and thickness it boils down for unit loops to $\ensuremath{\mathcal R}(\gamma)=1/\triangle[\gamma]$. Following Gonzalez and Maddocks \cite{gm} the thickness $\triangle[\cdot]$ may be expressed as \begin{equation}\label{thickness} \triangle[\gamma]:=\inf_{u,v,w\in\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}}\atop u\not= v\not=w\not= u}R(\gamma(u), \gamma(v),\gamma(w)), \end{equation} where $R(x,y,z)$ denotes the unique radius of the (possibly degenerate) circle passing through $x,y,z\in\ensuremath{\mathbb{R}}^3.$
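The quantity \eqref{thickness} is straightforward to approximate on a discretized curve by minimizing the circumradius over all triples of sample points. The following sketch (a crude numerical illustration of the definition, written by us and not used anywhere in the proofs below) recovers the value $\triangle=r$ for a round circle of radius $r$, i.e.\ ropelength $2\pi$ for the unit-length circle:
\begin{verbatim}
import numpy as np
from itertools import combinations

def circumradius(x, y, z):
    a, b, c = np.linalg.norm(y - z), np.linalg.norm(x - z), np.linalg.norm(x - y)
    area = 0.5 * np.linalg.norm(np.cross(y - x, z - x))
    return np.inf if area == 0.0 else a * b * c / (4.0 * area)

def thickness(points):   # minimal circumradius over all triples of sample points
    return min(circumradius(points[i], points[j], points[k])
               for i, j, k in combinations(range(len(points)), 3))

t = np.linspace(0.0, 1.0, 60, endpoint=False)
r = 1.0 / (2.0 * np.pi)  # radius of the circle of unit length
circle = np.stack([r * np.cos(2 * np.pi * t),
                   r * np.sin(2 * np.pi * t),
                   np.zeros_like(t)], axis=1)
print(thickness(circle), r)   # both approximately 1/(2*pi) = 0.1591...
\end{verbatim}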
By means of the direct method in the calculus of variations we show that in every given (tame) knot class $\ensuremath{\mathcal{K}}$ and for every $\ensuremath{\vartheta}>0$ there is indeed a unit loop $\gamma_\ensuremath{\vartheta}\in\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$ minimizing the total energy $\ensuremath{E_\ensuremath{\vartheta}}$ within $\ensuremath{\mathcal{K}}$; see Theorem \ref{thm:existence-total} in Section \ref{sect:mini}.
To understand the behaviour of very thin springy knots we investigate the limit $\ensuremath{\vartheta}\to 0$. More precisely, we consider arbitrary sequences $(\gamma_\ensuremath{\vartheta})_\ensuremath{\vartheta} $ of minimizers in a fixed knot class $\ensuremath{\mathcal{K}}$ and look at their possible limit curves $\gamma_0$ as $\ensuremath{\vartheta}\to 0$. We call any such limit curve an {\it elastic knot} for $\ensuremath{\mathcal{K}}$. None
of these elastic knots is embedded (as we would expect in view of the
self-contact present in the
wire models in Figure \ref{fig:springy-knots})---unless $\ensuremath{\mathcal{K}}$ is the unknot, in which case $\gamma_0$ is the once-covered circle; see Proposition \ref{prop:nonembedded}. However, it turns out that each elastic knot $\gamma_0$ lies in the $C^1$-closure of unit loops representing ${\ensuremath{\mathcal{K}}}$, and non-trivial elastic knots can be shown to have strictly smaller bending energy $\ensuremath{E_{\mathrm{bend}}}$ than any unit loop in $\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$ (Theorem \ref{thm:existence-limit}). This minimizing property of elastic knots is particularly interesting for those
non-trivial knot classes $\ensuremath{\mathcal{K}}$ permitting representatives with bending energy arbitrarily close to the smallest possible lower bound $(4\pi)^2$ (due to F\'ary's and Milnor's lower bound $4\pi$ on total curvature
\cite{fary,milnor}): We can show that for those knot classes
the {\it only} possible shape of any elastic knot is that of the twice-covered circle. This naturally leads to the question:
{\it For which knot classes $\ensuremath{\mathcal{K}}$ do we have $\inf_{\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})}\ensuremath{E_{\mathrm{bend}}}=(4\pi)^2$?}
We are going to show that this is true {\it exactly} for the class $\ensuremath{\mathcal{T}}(2,b)$
of $(2,b)$-torus knots for any odd integer $b$ with $|b|\ge 3.$ Any other non-trivial knot class has a strictly larger infimum of bending energy. These facts and several other characterizations of $\ensuremath{\mathcal{T}}(2,b)$ are contained in our Main Theorem \ref{thm:elastic-shapes}, from which we can extract the following complete description of elastic $(2,b)$-torus knots (including the trefoil):
\begin{theorem}[Elastic $(2,b)$-torus knots]\label{thm:main} For any odd integer $\abs b\ge 3$ the unique elastic $(2,b)$-torus knot is the twice-covered circle. \end{theorem}
This result confirms our mechanical and numerical experiments (see Figure~\ref{fig:low_trefoil_tgA} on the left and Figure~\ref{fig:trefoil_sim}), as well as the heuristics and the Metropolis Monte Carlo simulations of Gallotti and Pierre-Louis~\cite{gallotti-pierre-louis_2006}, and the numerical gradient-descent results by Avvakumov and Sossinsky, see~\cite{sossinsky} and references therein.
Our results especially affect knot classes with bridge number two (see below for the precise definition) which in the majority of cases appear in applications, e.g., DNA knots, see Sumners~\cite[p.~338]{sumners:dna}. The Main Theorem \ref{thm:elastic-shapes}, however, implies also that for knots \emph{different} from the $(2,b)$-torus knots, the respective elastic knot is definitely \emph{not} the twice-covered circle. Similar shapes as in Figure \ref{fig:trefoil_sim} have been obtained numerically by Buck and Rawdon~\cite{BR} for a related but different variational problem: they minimize ropelength with a prescribed curvature bound (using a variant of the Metropolis Monte Carlo procedure), see~\cite[Fig.~8]{BR}.
\begin{figure}\label{fig:trefoil_sim}
\end{figure}
The idea of studying $\gamma_0$ as a limit configuration of minimizers of the mildly penalized bending energy goes back to earlier work of the third author \cite{vdM:meek}, except that there, ropelength in \eqref{total-energy} is replaced by a self-repulsive potential such as the M\"obius energy introduced by O'Hara \cite{oha:en}. By means of a Li-Yau-type inequality for general loops \cite[Theorem 4.4]{vdM:meek} it was shown there that for elastic $(2,b)$-torus knots, the maximal multiplicity of double points is three \cite[p.~51]{vdM:meek}. Theorem \ref{thm:main} clearly shows that this multiplicity bound is not sharp: the twice-covered circle has infinitely many double points, all of which have multiplicity two. Lin and Schwetlick~\cite{lin-schwetlick_2010} consider the gradient flow of
the elastic energy plus the M\"obius energy scaled by a certain parameter. However, there is no analysis of the equilibrium shapes, and they do not consider the limit case of sending the prefactor of the M\"obius term to zero.
Directly analyzing the shape or even only the regularity of the $\ensuremath{E_\ensuremath{\vartheta}}$-minimizers $\gamma_\ensuremath{\vartheta}$ for positive $\ensuremath{\vartheta}$ without going to the limit $\ensuremath{\vartheta}\to 0$ seems much harder because of the a~priori unknown (and possibly complicated) regions of self-contact that are determined by the minimizers $\gamma_\ensuremath{\vartheta}$ themselves. Necessary conditions were derived by a Clarke-gradient approach in
\cite{heiko2} for nonlinearly elastic rods,
and for
an alternative elastic self-obstacle formulation
regularity results were established in
\cite{vdM:eke3} depending on the geometry of contact. If one replaces ropelength in \eqref{total-energy} by a self-repulsive potential like in \cite{vdM:meek}
one can prove $C^\infty$-smoothness of $\gamma_\ensuremath{\vartheta}$ with the deep analytical methods developed by He~\cite{he:elghf} and by the second author in various collaborations \cite{reiter:atme,reiter:rkepdc,blatt-reiter2,blatt-reiter-schikorra_2012}. But the corresponding Euler-Lagrange equations for $\ensuremath{E_\ensuremath{\vartheta}}$, which involve complicated non-local terms, do not seem to give immediate access to determining the shape of $\gamma_\ensuremath{\vartheta}.$ Notice that directly minimizing the bending energy $\ensuremath{E_{\mathrm{bend}}}$ in the $C^1$-closure of $\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$ generally leads to a much larger number of minimizers, the majority of which seem to correspond to quite unstable configurations in physical experiments. Our approach of penalizing the bending energy by $\ensuremath{\vartheta}$ times ropelength and approximating zero thickness by letting $\ensuremath{\vartheta}\to 0$ may be viewed as selecting those $\ensuremath{E_{\mathrm{bend}}}$-minimizers that correspond to physically reasonable springy knots with sufficiently small thickness.
Recall that our simple model neglects any effects of torsion. Twisting the wire in the experiments before closing it at the hinge (without releasing the twist) leads to completely different stable configurations; see Figure \ref{fig:low_trefoil_tgA} on the right. So, in that case a more general Lagrangian taking into account also these torsional effects would need to be considered, and the question of classifying elastic knots with torsion is wide open.
\begin{figure}
\caption{Mechanical experiments. Left: The springy trefoil knot is close to the twice-covered circle. Right: Adding a twist leads to a stable flat trefoil configuration close to a planar figure-eight. Wire models by courtesy of John~H.~Maddocks, Lausanne.}
\label{fig:low_trefoil_tgA}
\end{figure}
Our strategy is as follows:
In Section~\ref{sect:mini} we establish first the existence of minimizers $\gamma_\ensuremath{\vartheta}$ of the total energy $\ensuremath{E_\ensuremath{\vartheta}}$ for each positive $\ensuremath{\vartheta}$ (Theorem \ref{thm:existence-total}). Then we pass to the limit $\ensuremath{\vartheta}\to 0$ to obtain a limit configuration $\gamma_0$ in the $C^1$-closure of $\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$ whose bending energy $\ensuremath{E_{\mathrm{bend}}}(\gamma_0)$ serves as a lower bound on $\ensuremath{E_{\mathrm{bend}}}$ in $\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$; see Theorem \ref{thm:existence-limit}. By means of the classic uniqueness result of Langer and Singer \cite{LS:cs} on stable elasticae in $\ensuremath{\mathbb{R}}^3$ we identify in Proposition \ref{prop:nonembedded} the round circle as the unique elastic unknot. Elastic knots for knot classes with $(4\pi)^2$ as sharp lower bound on the bending energy turn out to have constant curvature $4\pi$; see Proposition \ref{prop:constcurv}. By a Schur-type result (Proposition \ref{prop:minarc}) we can use this curvature information to establish a preliminary classification of such elastic knots as {\it tangential pairs of circles}, i.e., as pairs of round circles each with radius $1/(4\pi)$ with (at least) one point of tangential intersection; see Figure \ref{fig:intermediate} and Corollary \ref{cor:tg8}. The key here to proving constant curvature is an extension of the classic F\'ary--Milnor theorem on total curvature to the $C^1$-closure of $\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$; see Theorem \ref{thm:fm} in the Appendix. Our argument for that extension crucially relies on Denne's result on the existence of alternating quadrisecants~\cite{denne}. The fact that the elastic knot $\gamma_0$ for $\mathcal{K}$ is a tangential pair of circles implies by means of Proposition \ref{prop:braids} that
$\mathcal{K}$ is actually the class $\ensuremath{\mathcal{T}}(2,b)$ for odd $b$ with $|b|\ge 3$.
In order to extract the doubly-covered circle from the one-parameter family of tangential pairs of circles as the only possible elastic knot for $\ensuremath{\mathcal{T}}(2,b)$
we use in Section~\ref{sect:torus} explicit $(2,b)$-torus knots as suitable
comparison curves. Estimating their
bending energies and thickness values allows us to establish improved
growth estimates for the total energy and ropelength of the $\ensuremath{E_\ensuremath{\vartheta}}$-minimizers
$\gamma_\ensuremath{\vartheta}$; see Proposition \ref{prop:estimate-torus}.
In his seminal article~\cite{milnor}
Milnor
derived
the lower bound for the total curvature
by studying the \emph{crookedness} of a curve and relating it to
the total curvature.
For a regular curve $\gamma:\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}}\to\ensuremath{\mathbb{R}}^3$,
its crookedness is the infimum over all $\nu\in\ensuremath{\mathbb{S}}^{2}$ of
\begin{equation}\label{mu}
\mu(\gamma,\nu) := \#\sett{t_{0}\in\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}}}{t_{0} \textnormal{ is a local maximizer of }
t\mapsto\sp{\gamma(t),\nu}_{\ensuremath{\mathbb{R}}^{3}}}.
\end{equation} For any curve $\gamma$ close to a tangential pair of circles that is \emph{not} the doubly covered circle we show in Lemma \ref{lem:crook} that the set of directions $\nu\in\ensuremath{\mathbb{S}}^2$ for which $\mu(\gamma,\nu)\ge 3$ has measure bounded from below by some multiple of the thickness $\triangle[\gamma]$. Assuming finally that $\gamma_\ensuremath{\vartheta}$ converges for $\ensuremath{\vartheta}\to 0$ to such a limiting tangential pair of circles different from the doubly covered circle, we use this crookedness estimate to obtain a contradiction to the total energy growth rate proved in Proposition~\ref{prop:estimate-torus}. Therefore, the only possible limit configuration $\gamma_0$, i.e., the only possible elastic knot in the class of $(2,b)$-torus knots, is the doubly covered circle.
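
As a side remark, for a concrete (discretized) loop the quantity \eqref{mu} is easy to evaluate numerically by counting strict local maxima of the height function. The following short sketch (in Python; purely illustrative and not used in any of our arguments) does this for the doubly covered circle, for which a generic direction $\nu$ yields $\mu(\gamma,\nu)=2$.
\begin{verbatim}
# Numerical sketch (not part of the proofs): evaluate mu(gamma,nu) of a
# sampled closed curve by counting strict local maxima of the height
# function t -> <gamma(t),nu> over a discretisation of R/Z.
import numpy as np

def mu(points, nu):
    """Count strict local maxima of <gamma(t),nu> on a closed polygonal
    approximation given by 'points' of shape (N,3)."""
    h = points @ nu
    left, right = np.roll(h, 1), np.roll(h, -1)
    return int(np.sum((h > left) & (h > right)))

t = np.linspace(0.0, 1.0, 2000, endpoint=False)
# doubly covered circle of length 1 (radius 1/(4*pi)) in the x-y plane:
doubled = np.stack([np.cos(4*np.pi*t), np.sin(4*np.pi*t),
                    np.zeros_like(t)], axis=1) / (4*np.pi)
rng = np.random.default_rng(0)
nu = rng.normal(size=3); nu /= np.linalg.norm(nu)
print(mu(doubled, nu))   # generically 2: one maximum per cover
\end{verbatim}
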
As pointed out above, the heart of our argument consists of two bounds on the bending energy, the lower one, $(4\pi)^2$, imposed by the F\'ary--Milnor inequality, the upper one given by comparison curves. The latter ones are constructed by considering a suitable $(2,b)$-torus knot lying on a (standard) torus and then shrinking the width of the torus to zero, such that the bending energy of the torus knot tends to the lower bound $(4\pi)^2$. This indicates that a more general result should be valid if these bounds can be extended to other knot classes.
Milnor~\cite{milnor} proved that the lower bound on the total curvature is in fact $2\pi\bri\ensuremath{\mathcal{K}}$, where $\bri\ensuremath{\mathcal{K}}$ denotes the \emph{bridge index}, i.e., the minimum of crookedness\footnote{In fact, the bridge index is defined as the minimum of the bridge number over the knot class; the latter coincides with crookedness for tame loops, see Rolfsen~\cite[p.~115]{rolfsen}.} over the knot class~$\ensuremath{\mathcal{K}}$. So we should ask which knot classes $\ensuremath{\mathcal{K}}$ can be represented by a curve made of a number of strands, say $a$ strands, running inside a (full) torus essentially in the direction of its central core. The minimum value of $a$ with respect to the knot class~$\ensuremath{\mathcal{K}}$ is referred to as the \emph{braid index}, $\bra\ensuremath{\mathcal{K}}$. Thus we are led to believe that the following assertion, which has already been stated by Gallotti and Pierre-Louis~\cite{gallotti-pierre-louis_2006}, holds true.
\begin{conjecture}[{Circular elastic knots}]
The $a$-times covered circle is the (unique) elastic knot for the
(tame) knot class $\ensuremath{\mathcal{K}}$ if
$\bra\ensuremath{\mathcal{K}}=\bri\ensuremath{\mathcal{K}}=a$. \end{conjecture}
The shape of elastic knots for more general knot classes is one of the topics of
ongoing research. Here we only mention a conjecture that was communicated personally to the third author by Urs Lang in 1997.
\begin{conjecture}[Spherical elastic knots]\label{conj:lang}
Any elastic prime knot is a spherical or planar curve. \end{conjecture}
Our numerical experiments (see Figures~\ref{fig:trefoil_sim} and \ref{fig:general_knot_classes}) as well as the simulations performed by Gallotti and Pierre-Louis~\cite[Figs.\@ 6 \& 7]{gallotti-pierre-louis_2006} seem to support this conjecture.
\begin{figure}
\caption{Simulated annealing experiments for other knot classes.
Left: The figure-eight knot ($4_{1}$) (resembling
the blue springy figure-eight in
Figure \ref{fig:springy-knots}) seems to lie on a sphere. Right:
A planar
Chinese button knot ($9_{40}$) (in contrast to the spherical red springy wire
in Figure \ref{fig:springy-knots}, indicating that tying a more
complex wire knot may impose some amount of physical
torsion in addition to pure bending).}
\label{fig:general_knot_classes}
\end{figure}
\paragraph{Acknowledgements.}
The second author was partially supported by DFG Trans\-regional Collaborative Research Centre SFB TR 71. We gratefully acknowledge stimulating discussions with Elizabeth Denne, Sebastian Scholtes, and John Sullivan. We would like to thank Thomas~El~Khatib for bringing reference~\cite{gallotti-pierre-louis_2006} to our attention.
\section{Existence of elastic knots}\label{sect:mini}
To ease notation we shall simply denote the intrinsic distance on $\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}}$ by
$|x-y|$, that is, $$
|x-y|\equiv |x-y|_{\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}}}:=\min\{|x-y|,1-|x-y|\}. $$ For any knot class $\ensuremath{\mathcal{K}}$ we define the class $\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$ of {\it unit loops representing $\ensuremath{\mathcal{K}}$} as $$
\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}}):=\{\gamma\in H^2(\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}},\ensuremath{\mathbb{R}}^3):\gamma(0)=0, |\gamma'|\equiv 1\,{\textnormal{on\,\,}}\,\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}},\textnormal{\,$\gamma$ is of knot type $\ensuremath{\mathcal{K}}$}\}, $$ where $H^2(\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}},\ensuremath{\mathbb{R}}^3)$ denotes the class of $1$-periodic Sobolev functions whose second weak derivatives are square-integrable. The bending energy \eqref{bending} on the space of curves $\gamma\in H^{2}(\ensuremath{\mathbb{R}}/L\ensuremath{\mathbb{Z}},\ensuremath{\mathbb{R}}^{3})$, $L>0$, reads as \begin{equation}\label{bending2} \ensuremath{E_{\mathrm{bend}}}(\gamma)=\int_{\gamma}\ensuremath{\varkappa}^{2}\ensuremath{\,\mathrm{d}} s =\int_{\ensuremath{\mathbb{R}}/L\ensuremath{\mathbb{Z}}}\ensuremath{\varkappa}^{2}\abs{\gamma'}\ensuremath{\,\mathrm{d}} t =\int_0^L\frac{\abs{\gamma''\wedge\gamma'}^2}{\abs{\gamma'}^{5}}\ensuremath{\,\mathrm{d}} t, \end{equation}
which reduces to the squared $L^2$-norm $\|\gamma''\|^2_{L^2}$ of the (weak) second derivative $\gamma''$ if $\gamma$ is parametrized by arclength. The ropelength functional defined as the quotient of length and thickness simplifies on $\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$ to \begin{equation}\label{rope} \ensuremath{\mathcal R}(\gamma)=\frac{1}{\triangle[\gamma]}, \end{equation} where thickness $\triangle[\gamma]$ can be expressed as in \eqref{thickness}. For $\ensuremath{\vartheta} >0$ we want to minimize the \emph{total energy} $\ensuremath{E_\ensuremath{\vartheta}}$ as given in \eqref{total-energy} on the class $\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$ of unit loops representing $\ensuremath{\mathcal{K}}$. Note that, in contrast to the bending energy, a unit loop in $\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$ has finite total energy $\ensuremath{E_\ensuremath{\vartheta}}$ if and only if it belongs to $C^{1,1}$, see~\cite[Lemma~2]{gmsm} and \cite[Theorem 1 (iii)]{SvdM}. \begin{theorem}[Minimizing the total energy]\label{thm:existence-total} For any fixed tame knot class $\ensuremath{\mathcal{K}}$ and for each $\ensuremath{\vartheta} >0$ there exists a unit loop $\gamma_\ensuremath{\vartheta}\in\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$ such that \begin{equation}\label{mini-total} \ensuremath{E_\ensuremath{\vartheta}}(\gamma_\ensuremath{\vartheta})=\inf_{\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})}\ensuremath{E_\ensuremath{\vartheta}}(\cdot). \end{equation} \end{theorem}
\begin{proof} The total energy is obviously nonnegative, and $\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$ is not empty since one may scale any smooth representative of $\ensuremath{\mathcal{K}}$ (which exists due to tameness) down to length one and reparametrize to arclength, so that the infimum in \eqref{mini-total} is finite. Taking a minimizing sequence $(\gamma_k)_k\subset\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$ with $\ensuremath{E_\ensuremath{\vartheta}}(\gamma_k)\to\inf_{\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})}\ensuremath{E_\ensuremath{\vartheta}}$ as $k\to\infty$ we get the uniform bound $$
\|\gamma_k''\|_{L^2}^{2}=\ensuremath{E_{\mathrm{bend}}}(\gamma_k)\le\ensuremath{E_\ensuremath{\vartheta}}(\gamma_k)\le 1+\inf_{\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})}\ensuremath{E_\ensuremath{\vartheta}}<\infty\quad{\textnormal{for all\,\,}} k\gg 1, $$
so that with $\gamma_k(0)=0$ and $|\gamma_k'|\equiv 1$ for all $k\in\ensuremath{\mathbb{N}}$ we have a uniform bound on the full $H^2$-norm of the $\gamma_k$ independent of $k$ for $k$ sufficiently large. Since $H^2(\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}},\ensuremath{\mathbb{R}}^3)$ as a Hilbert space is reflexive and $H^2(\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}},\ensuremath{\mathbb{R}}^3)$ is compactly embedded in $C^1(\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}},\ensuremath{\mathbb{R}}^3)$ this implies the existence of some $\gamma_\ensuremath{\vartheta}\in H^2(\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}},\ensuremath{\mathbb{R}}^3)$ and a subsequence $(\gamma_{k_i})_i\subset (\gamma_k)_k$ converging weakly in $H^2$ and strongly in $C^1$ to $\gamma_\ensuremath{\vartheta}$ as $i\to\infty.$ Thus we
obtain $\gamma_\ensuremath{\vartheta}(0)=0$ and $|\gamma_\ensuremath{\vartheta}'|\equiv 1$ on $\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}}.$ Since thickness is upper semicontinuous \cite[Lemma 4]{gmsm} and the bending energy lower semicontinuous with respect to this type of convergence we arrive at \begin{equation}\label{direct-method} \ensuremath{E_\ensuremath{\vartheta}}(\gamma_\ensuremath{\vartheta})\le\liminf_{i\to\infty}\ensuremath{E_\ensuremath{\vartheta}}(\gamma_{k_i})=\inf_{\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})}\ensuremath{E_\ensuremath{\vartheta}}(\cdot)<\infty. \end{equation} In particular by definition of $\ensuremath{E_\ensuremath{\vartheta}}$, one has $\ensuremath{\mathcal R}(\gamma_\ensuremath{\vartheta})<\infty$ or $\triangle[\gamma_\ensuremath{\vartheta}]>0$, which implies by \cite[Lemma 1]{gmsm} that $\gamma_\ensuremath{\vartheta}$ is embedded. As all closed curves in a $C^1$-neighbourhood of a given embedded curve are isotopic, as shown by Diao, Ernst, and Janse van Rensburg~\cite[Lemma~3.2]{dej}, see also~\cite{blatt:isot,reiter:isot}, we find that $\gamma_\ensuremath{\vartheta}$ is of knot type $\ensuremath{\mathcal{K}}$; hence $\gamma_\ensuremath{\vartheta}\in\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}}).$ This gives $\inf_{\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})}\ensuremath{E_\ensuremath{\vartheta}}(\cdot)\le\ensuremath{E_\ensuremath{\vartheta}}(\gamma_\ensuremath{\vartheta})$, which in combination with \eqref{direct-method} concludes the proof. \end{proof}
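
As a simple consistency check of formula \eqref{bending2} (purely illustrative; not used in the sequel), one may evaluate its right-hand side numerically for a non-arclength parametrization of the round circle of length one and compare with the exact value $(2\pi)^2\approx 39.478$.
\begin{verbatim}
# Sanity check of (bending2), illustration only: bending energy of the
# round circle of length one (radius 1/(2*pi)), computed from a
# non-arclength parametrisation; the exact value is (2*pi)^2.
import numpy as np

n = 20000
t = np.linspace(0.0, 1.0, n, endpoint=False)
r = 1.0 / (2.0*np.pi)
theta   = 2*np.pi*(t + 0.1*np.sin(2*np.pi*t))          # reparametrised angle
dtheta  = 2*np.pi*(1 + 0.2*np.pi*np.cos(2*np.pi*t))    # theta'
ddtheta = -0.8*np.pi**3*np.sin(2*np.pi*t)              # theta''

g1 = r*np.stack([-np.sin(theta)*dtheta, np.cos(theta)*dtheta, 0*t], 1)
g2 = r*np.stack([-np.cos(theta)*dtheta**2 - np.sin(theta)*ddtheta,
                 -np.sin(theta)*dtheta**2 + np.cos(theta)*ddtheta, 0*t], 1)
integrand = np.linalg.norm(np.cross(g2, g1), axis=1)**2 \
            / np.linalg.norm(g1, axis=1)**5
print(np.mean(integrand), (2*np.pi)**2)   # both ~ 39.478
\end{verbatim}
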
Since finite ropelength, i.e., positive thickness, implies $C^{1,1}$-regularity we know that $\gamma_\ensuremath{\vartheta}\in C^{1,1}(\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}},\ensuremath{\mathbb{R}}^3)$. However, we are not going to exploit this improved regularity, since we investigate the limit $\ensuremath{\vartheta}\to 0$ and the corresponding limit configurations, the elastic knots $\gamma_0$ for the given knot class $\ensuremath{\mathcal{K}}$.
\begin{theorem}[Existence of elastic knots]\label{thm:existence-limit} Let $\ensuremath{\mathcal{K}}$ be any fixed tame knot class, $\ensuremath{\vartheta}_i\to 0$ and $(\gamma_{\ensuremath{\vartheta}_i})_{i}\subset\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$, such that $E_{\ensuremath{\vartheta}_i}(\gamma_{\ensuremath{\vartheta}_i})=\inf_{\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})}E_{\ensuremath{\vartheta}_i}(\cdot)$ for each $i\in\ensuremath{\mathbb{N}}.$ Then there exists $\gamma_0\in H^2(\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}},\ensuremath{\mathbb{R}}^3)$ and a subsequence $(\gamma_{\ensuremath{\vartheta}_{i_k}})_k\subset (\gamma_{\ensuremath{\vartheta}_i})_{i}$ such that the $\gamma_{\ensuremath{\vartheta}_{i_k}}$ converge weakly in $H^2$ and strongly in $C^1$ to $\gamma_0$ as $k\to\infty$. Moreover, \begin{equation}\label{bendminimizing} \ensuremath{E_{\mathrm{bend}}}(\gamma_0)\le\ensuremath{E_{\mathrm{bend}}}(\beta)\quad{\textnormal{for all\,\,}} \beta\in\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}}). \end{equation} The estimate~\eqref{bendminimizing} is strict unless $\ensuremath{\mathcal{K}}$ is the unknot class. \end{theorem}
\begin{definition}[Elastic knots]\label{def:elasticknot} Any curve $\gamma_0$ as in Theorem \ref{thm:existence-limit} is called an \emph{elastic knot for $\ensuremath{\mathcal{K}}$}. \end{definition}
\begin{proof}[Theorem~\ref{thm:existence-limit}] For any $\beta\in\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$ and any $\ensuremath{\vartheta} >0$ we can estimate \begin{equation}\label{bending-estimate} \ensuremath{E_{\mathrm{bend}}}(\gamma_\ensuremath{\vartheta})\le\ensuremath{E_\ensuremath{\vartheta}}(\gamma_\ensuremath{\vartheta})\le\ensuremath{E_\ensuremath{\vartheta}}(\beta), \end{equation} where $\gamma_\ensuremath{\vartheta}\in\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$ is a global minimizer of $\ensuremath{E_\ensuremath{\vartheta}}$ within $\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$ whose existence is guaranteed by Theorem \ref{thm:existence-total}. Now we restrict to $\beta\in\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})\cap C^{1,1}(\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}},\ensuremath{\mathbb{R}}^3)$, which implies by means of \cite[Theorem 1 (iii)]{SvdM} that the right-hand side of~\eqref{bending-estimate} is finite. Now the right-hand side tends to $\ensuremath{E_{\mathrm{bend}}}(\beta)<\infty$ as $\ensuremath{\vartheta}\to 0$ and we find a constant $C$ independent of $\ensuremath{\vartheta}$ such that \begin{equation}\label{unifH2}
\|\gamma_\ensuremath{\vartheta}\|_{H^2}\le C\quad{\textnormal{for all\,\,}} \ensuremath{\vartheta}\in (0,1). \end{equation} In particular, this uniform estimate holds for the $\gamma_{\ensuremath{\vartheta}_i} \in\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$ so that there is $\gamma_0\in H^2(\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}},\ensuremath{\mathbb{R}}^3)$ and a subsequence $(\gamma_{\ensuremath{\vartheta}_{i_k}})_k\subset (\gamma_{\ensuremath{\vartheta}_i})_i$ with $\gamma_{\ensuremath{\vartheta}_{i_k}}\rightharpoonup\gamma_0$ in $H^2$ and $\gamma_{\ensuremath{\vartheta}_{i_k}}\to\gamma_0$ in $C^1$ as $k\to\infty.$ The bending energy $\ensuremath{E_{\mathrm{bend}}}$ is lower semicontinuous with respect to weak convergence in $H^2$ which implies $$ \ensuremath{E_{\mathrm{bend}}}(\gamma_0)\le\liminf_{k\to\infty}\ensuremath{E_{\mathrm{bend}}}(\gamma_{\ensuremath{\vartheta}_{i_k}})\overset{\eqref{bending-estimate}}{\le}\liminf_{k\to\infty}E_{\ensuremath{\vartheta}_{i_k}}(\beta)=\ensuremath{E_{\mathrm{bend}}}(\beta). $$ Thus we have established~\eqref{bendminimizing} for any $\beta\in\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})\cap C^{1,1}(\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}},\ensuremath{\mathbb{R}}^3)$. In order to extend it to the full domain, we approximate an arbitrary $\beta\in\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$ by a sequence of functions $\seqn\beta\subset C^{\infty}\ensuremath{(\R/\ensuremath{\mathbb{Z}},\R^3)}$ with respect to the $H^{2}$-norm. As $H^{2}$ embeds into $C^{1,1/2}$, the tangent vectors $\beta_{k}'$ uniformly converge to $\beta'$, so $\abs{\beta_{k}'}\ge c>0$ for all $k\gg1$.
Furthermore, the $\beta_{k}$ are injective since for $\beta$ there are positive constants $c$ and $\ensuremath{\varepsilon}$ depending only on $\beta$ such that $$
|\beta(s)-\beta(t)|\ge c|s-t|_{\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}}}\quad{\textnormal{for all\,\,}} |s-t|_{\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}}}<\ensuremath{\varepsilon}, $$
because $|\beta'|\equiv 1$, and in consequence, there is another constant $\delta=\delta(\beta)\in (0,c\ensuremath{\varepsilon}]$ such that $$
|\beta(s)-\beta(t)|\ge \delta\quad{\textnormal{for all\,\,}} |s-t|_{\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}}}\ge\ensuremath{\varepsilon}, $$ for $\beta$ is injective. Consequently, for given distinct parameters $s,t\in [0,1)$ we can estimate $$
|\beta_k(s)-\beta_k(t)| \ge
|\beta(s)-\beta(t)|-2\|\beta_k'-\beta'\|_{L^\infty}|s-t|_{\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}}}, $$
which is positive for $k\gg 1$ independent of the intrinsic distance $|s-t|_{\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}}}$. In addition, the $\beta_k$ represent the same knot class $\ensuremath{\mathcal{K}}$ as $\beta$ does for $k\gg 1$, since isotopy is stable under $C^1$-convergence \cite{dej,blatt:isot,reiter:isot}. According to~\cite[Thm.~A.1]{reiter:rkepdc} the sequence $\seqn{\tilde\beta}$ of smooth curves, where $\tilde\beta_{k}$ is obtained from $\beta_{k}$ (after omitting finitely many $\beta_k$) by rescaling to unit length and then reparametrizing to arc-length, converges to $\tilde\beta=\beta$ with respect to the $H^{2}$-norm, and, of course, $\tilde\beta_{k}\in\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$ for all $k$. We conclude \[ \ensuremath{E_{\mathrm{bend}}}(\gamma_0)\le\ensuremath{E_{\mathrm{bend}}}(\tilde\beta_{k}) = \lnorm{\tilde\beta_{k}''}^{2} \to\lnorm{\beta''}^{2} = \ensuremath{E_{\mathrm{bend}}}(\beta). \] Assume now $\ensuremath{E_{\mathrm{bend}}}(\gamma_{0})=\ensuremath{E_{\mathrm{bend}}}(\beta)$ for some $\beta\in\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$ where $\ensuremath{\mathcal{K}}$ is non-trivial. In this case, $\beta$ would be a local minimizer, and therefore a stable closed elastica as there are no restrictions for variations. According to the result of J.~Langer and D.~A.~Singer~\cite{LS:cs}\footnote{Recall that \emph{elasticae} are the critical curves for the bending energy. Notice that Langer and Singer work on the tangent indicatrices of arclength parametrized curves with a fixed point, i.e., on {\it balanced curves} on $\ensuremath{\mathbb{S}}^2$ through a fixed point, so that their variational arguments can be applied to the tangent vectors of curves in $\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$; see \cite[p. 78]{LS:cs}.}, $\beta$ turns out to be the round circle, hence $\ensuremath{\mathcal{K}}$ is the unknot, contradiction. \end{proof}
\section{The elastic unknot and tangential pairs of circles} \label{sec:shape} The springy knotted wires strongly suggest that elastic knots generally exhibit self-intersections, and this is indeed the case unless the knot class is trivial. In fact, the elastic unknot is the round circle of length one. \begin{proposition}[Non-trivial elastic knots are not embedded]\label{prop:nonembedded} The round circle of length one is the unique elastic unknot. If there exists an embedded elastic knot $\gamma_0$ for a given knot class $\ensuremath{\mathcal{K}}$ then $\ensuremath{\mathcal{K}}$ is the unknot (so that $\gamma_0$ is the round circle of length one). In particular, if $\ensuremath{\mathcal{K}}$ is a non-trivial knot class, every elastic knot for $\ensuremath{\mathcal{K}}$ must have double points. \end{proposition}
\begin{proof} The round circle of length one uniquely minimizes $\ensuremath{E_{\mathrm{bend}}}$ and $\ensuremath{\mathcal R}$ simultaneously in the class of all arclength parametrized curves in $H^2(\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}},\ensuremath{\mathbb{R}}^3).$ For the bending energy $\ensuremath{E_{\mathrm{bend}}}$ this is true since the round once-covered circle is the only stable closed elastica in $\ensuremath{\mathbb{R}}^3$ according to the work of J.~Langer and D.~A.~Singer \cite{LS:cs}, and for ropelength this follows, e.g., from the more general uniqueness result in \cite[Lemma 7]{SvdM07} for the functionals $$ \ensuremath{\mathcal{U}}_p(\gamma):=\br{\int_\gamma\sup_{v,w\in\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}}\atop u\not=v\not=w\not=u}R^{-p}(\gamma(u),\gamma(v),\gamma(w))\ensuremath{\,\mathrm{d}} u}^{1/p}, \quad p\ge 1. $$ For closed rectifiable curves $\gamma$ different from the round circle with $\ensuremath{\mathcal R}(\gamma)<\infty$ one has indeed $$ 2\pi=\ensuremath{\mathcal R}(\textnormal{circle})=\ensuremath{\mathcal{U}}_p(\textnormal{circle})\overset{\text{\cite[Lem.~7]{SvdM07}}}{<}\ensuremath{\mathcal{U}}_p(\gamma)\le\ensuremath{\mathcal R}(\gamma). $$
Hence the round circle uniquely minimizes also the total energy $\ensuremath{E_\ensuremath{\vartheta}}$ for each $\ensuremath{\vartheta}>0$ in $\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$ when $\ensuremath{\mathcal{K}}$ is the unknot. Thus any elastic unknot as the $C^1$-limit of $E_{\ensuremath{\vartheta}_i}$-minimizers as $\ensuremath{\vartheta}_i\to 0$ is also the round circle of length one.
If $\gamma_0$ is embedded then according to the stability of isotopy classes under $C^1$-perturbations (see \cite{dej,blatt:isot,reiter:isot}) we find $\gamma_0\in\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$ since $\gamma_0$ was obtained as the weak $H^2$-limit of $E_{\ensuremath{\vartheta}_i}$-minimizers $\gamma_{\ensuremath{\vartheta}_i}\in\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}}).$ Thus, by~\eqref{bendminimizing}, the curve $\gamma_0$ is a local minimizer of $\ensuremath{E_{\mathrm{bend}}}$ in $\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$, and therefore it is a stable elastica. Thus, again by the stability result of Langer and Singer, $\gamma_0$ is the round circle of length one. Consequently, $\ensuremath{\mathcal{K}}$ is the unknot, which proves the proposition. \end{proof}
In the appendix we extend the F\'ary--Milnor theorem to the $C^1$-closure of $\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$, where $\ensuremath{\mathcal{K}}$ is a non-trivial knot class; see Theorem \ref{thm:fm}. This allows us in a first step to show that elastic knots for $\ensuremath{\mathcal{K}}$ have constant curvature if one can get arbitrarily close with the bending energy in $\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$ to the natural lower bound $(4\pi)^2$ induced by F\'ary and Milnor. \begin{proposition}[Elastic knots of constant curvature]\label{prop:constcurv} For any knot class $\ensuremath{\mathcal{K}}$ with \begin{equation}\label{infimaequal} \inf_{\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})}\ensuremath{E_{\mathrm{bend}}}=(4\pi)^2 \end{equation} one has $\ensuremath{\varkappa}_{\gamma_0} = 4\pi$ a.e.\@ on $\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}}$ for each elastic knot $\gamma_0$ for $\ensuremath{\mathcal{K}}$. In particular, $\gamma_0\in C^{1,1}(\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}},\ensuremath{\mathbb{R}}^3).$ \end{proposition}
\begin{proof} According to our generalized F\'ary--Milnor Theorem, Theorem \ref{thm:fm} in the Appendix applied to $\gamma_0\in\overline{\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})}^{C^1}$, we estimate by means of H\"older's inequality \[ (4\pi)^2\overset{\eqref{eq:fm}}{\le} \left(\int_{\gamma_0}\ensuremath{\varkappa}_{\gamma_0}\right)^2 \le\ensuremath{E_{\mathrm{bend}}}(\gamma_0)\stackrel{\eqref{bendminimizing}} \le \inf_{\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})}\ensuremath{E_{\mathrm{bend}}}\stackrel{\eqref{infimaequal}}=(4\pi)^2, \] hence equality everywhere. In particular, equality in H\"older's inequality implies a constant integrand, which we calculate to be $\ensuremath{\varkappa}_{\gamma_0}=4\pi$ a.e.\@ on $\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}}$. \end{proof}
Of course, there are many closed curves of constant curvature, see e.g.\@ Fenchel~\cite{fenchel1930}, even in every knot class, see McAtee \cite{mcatee_2007}. Closed space curves of constant curvature may, e.g., be constructed by joining suitable arcs of helices, see Koch and Engelhardt~\cite{koch-engelhardt} for an explicit construction and examples. But in order to identify the possible shape of elastic knots for knot classes that satisfy assumption~\eqref{infimaequal} recall that any minimizer in a non-trivial knot class has at least one double point; see Proposition \ref{prop:nonembedded}. In addition, the length is fixed to one, so that we can reduce the possible shapes of elastic knots considerably by means of a Schur-type argument that connects length and constant curvature; see the following Proposition~\ref{prop:minarc}. This will in particular lead to the proof of the classification result, Theorem~\ref{thm:elastic-shapes}.
\begin{proposition}[Shortest arc of constant curvature]\label{prop:minarc}
Let $\gamma\in H^{2}([0,L],\ensuremath{\mathbb{R}}^{3})$, $L>0$, be parametrized by
arc-length with constant curvature $\ensuremath{\varkappa} = \abs{\gamma''} =4\pi$ a.e.\@
and coinciding endpoints $\gamma(0) = \gamma(L)$.
Then $L\ge\frac12$ with equality if and only if $\gamma$ is a circle
with radius $\frac1{4\pi}$.
\end{proposition}
Before proving this rigidity result let us note an immediate consequence.
\begin{corollary}[Tangential pairs of circles]\label{cor:tg8} Every closed arclength parametrized curve $\gamma\in H^2(\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}},\ensuremath{\mathbb{R}}^3)$ with at least
one double point and with constant curvature $\ensuremath{\varkappa}_\gamma=|\gamma''| = 4\pi$ a.e.\@ on $\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}}$, is a \emph{tangential pair of circles}. That is, $\gamma $ belongs, up to isometry and parametrization, to
the one-parameter family of tangentially intersecting circles
\begin{equation}\label{eq:tg8}
\tge_\varphi:t\mapsto\tfrac1{4\pi}
\begin{cases}
\uni1(1-\cos(4\pi t)) - \uni2\sin(4\pi t), & t\in [0,\tfrac12], \\
(\uni1\cos\varphi + \uni3\sin\varphi)(1-\cos(4\pi t)) - \uni2\sin(4\pi t), & t\in [\tfrac12,1],
\end{cases}
\end{equation}
where $\varphi\in [0,\pi]$.
Here $\uni k$ denotes the $k$-th unit vector in $\ensuremath{\mathbb{R}}^{3}$, $k=1,2,3$. \end{corollary} Note that $\tge_0$ is a doubly covered circle and $\tge_\pi$ a tangentially intersecting planar figure eight, both located in the plane spanned by $\uni1$ and $\uni2$. For intermediate values $\varphi\in (0,\pi)$ one obtains a configuration as shown in Figure~\ref{fig:intermediate}.
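
A quick numerical check of the family \eqref{eq:tg8} (a sketch for illustration only, not needed for the proofs): for any opening angle $\varphi$ the curve $\tge_\varphi$ is closed, has the double point $\tge_\varphi(0)=\tge_\varphi(\tfrac12)=0$, and is parametrized by arclength with curvature $4\pi$ away from the two points of tangential intersection.
\begin{verbatim}
# Illustration only: evaluate tg_phi from (eq:tg8) and verify unit speed,
# curvature 4*pi away from t = 0, 1/2, and the double point at the origin.
import numpy as np

def tg(phi, t):
    t = np.asarray(t, dtype=float)
    c, s = 1 - np.cos(4*np.pi*t), np.sin(4*np.pi*t)
    e = np.where(t[:, None] <= 0.5,
                 np.array([1.0, 0.0, 0.0]),
                 np.array([np.cos(phi), 0.0, np.sin(phi)]))
    return (e*c[:, None] + np.array([0.0, -1.0, 0.0])*s[:, None])/(4*np.pi)

phi, h = 0.7, 1e-5
t = np.linspace(0.02, 0.48, 500)            # stay away from t = 0 and 1/2
d1 = (tg(phi, t+h) - tg(phi, t-h)) / (2*h)                 # gamma'
d2 = (tg(phi, t+h) - 2*tg(phi, t) + tg(phi, t-h)) / h**2   # gamma''
print(np.max(np.abs(np.linalg.norm(d1, axis=1) - 1)))         # ~ 0
print(np.max(np.abs(np.linalg.norm(d2, axis=1) - 4*np.pi)))   # ~ 0
print(tg(phi, np.array([0.0, 0.5])))                           # both ~ 0
\end{verbatim}
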
\begin{figure}\label{fig:intermediate}
\end{figure}
\begin{proof}[Corollary~\ref{cor:tg8}] Applying Proposition~\ref{prop:minarc}
we find that the length of any subarc of $\gamma$ joining a double point to itself
amounts to at least $\tfrac12$.
As $\ensuremath{\mathscr{L}}(\gamma)=1$, each double point splits $\gamma$ into precisely two arcs of
length~$\tfrac12$,
which again by Proposition~\ref{prop:minarc} implies that both these
connecting arcs are
circles of radius $\tfrac1{4\pi}$.
They have to meet tangentially due to the embedding
$H^{2}\hookrightarrow
C^{1,1/2}$.
Thus $\gamma=\tge_{\varphi}$ for some
$\varphi\in [0,\pi]$. \end{proof}
\begin{proof}[Proposition~\ref{prop:minarc}] Obviously, the statement is equivalent to minimizing $L>0$ over $f = \gamma' \in H^{1}([0,L],\mathbb S^{2})$ with $\abs{f'} = 4\pi$ a.e.\@ and $\int_{0}^{L}f=0$.
As $f\not\equiv0$ there is some $T\in(0,L)$ maximizing $t\mapsto \abs{\int_{0}^{t}f(\theta)\ensuremath{\,\mathrm{d}}\theta}^{2}$ which leads to
\[ 0 = 2\sp{\int_{0}^{T}f(\theta)\ensuremath{\,\mathrm{d}}\theta,f(T)}
= -2\sp{\int_{T}^{L}f(\theta)\ensuremath{\,\mathrm{d}}\theta,f(T)}. \]
We consider
\[ g : t\mapsto\sign(t-T)\sp{\int_{T}^{t}f(\theta)\ensuremath{\,\mathrm{d}}\theta,f(T)}
= \sign(t-T)\int_{T}^{t}\sp{f(\theta),f(T)}\ensuremath{\,\mathrm{d}}\theta \]
for $t\in[0,L]$. By assumption we have $g(0)=g(L)=0$.
As $f$ takes its values in the sphere $\mathbb S^{2}$
and moves with constant speed, we can estimate
\begin{equation}\label{eq:angle}
\angle\br{f(\theta),f(T)} \le \ensuremath{\mathscr{L}}(f|_{[\theta,T]})=
\int_{\min(\theta,T)}^{\max(\theta,T)}|f'(\tau)|\ensuremath{\,\mathrm{d}}\tau = 4\pi\abs{\theta-T}.
\end{equation}
As long as $\theta \in [T-\frac14,T+\frac14]$ we obtain by monotonicity of
the cosine function
\begin{equation}\label{eq:cosle}
\cos(4\pi\abs{\theta-T}) \le \cos\angle\br{f(\theta),f(T)}
= \sp{f(\theta),f(T)}.
\end{equation} So, for $t\in [T-\frac14,T+\frac14]$, as cosine is even,
\begin{equation}\label{eq:gge}
\begin{split}
g(t) &\ge \sign(t-T)\int_{T}^{t}\cos(4\pi\br{\theta-T})\ensuremath{\,\mathrm{d}}\theta
= \left.\tfrac1{4\pi}{\sign(t-T)}\sin(4\pi\br{\theta-T})\right|_{T}^{t} \\
&= \tfrac{1}{4\pi}\sin(4\pi\abs{t-T}).
\end{split}
\end{equation}
The right-hand side is positive for $t\in(T-\frac14,T)\cup(T,T+\tfrac14)$
and vanishes if $t\in\set{T,T\pm\frac14}$.
Now $g(0)=g(L)=0$ together with $0<T<L$ yields
\[ 0,L \notin (T-\tfrac14,T+\tfrac14), \]
and one has $0\le T-\frac14<T+\frac14\le L$,
and therefore $T\ge \tfrac14$ as well as
\[ L \ge T+\tfrac14 \ge \tfrac12. \]
The case $L=\frac12$ enforces $T=\frac14$, and inserting $t=L=\frac12$ in~\eqref{eq:gge} we arrive at
\[ 0 = \int_{1/4}^{1/2}\sp{f(\theta),f(\tfrac14)}\ensuremath{\,\mathrm{d}}\theta
\ge \int_{1/4}^{1/2}\cos\br{4\pi\br{\theta-\tfrac14}}\ensuremath{\,\mathrm{d}}\theta = 0, \]
thus
\[ \int_{1/4}^{1/2}\sq{\sp{f(\theta),f(\tfrac14)}-
\cos\br{4\pi\br{\theta-\tfrac14}}}\ensuremath{\,\mathrm{d}}\theta = 0. \]
The integrand is non-negative by~\eqref{eq:cosle}, so it vanishes a.e. on
$[\frac14,\frac12]$.
By continuity we obtain equality in~\eqref{eq:cosle} for any $\theta\in[\frac14,
\frac12]$.
Similarly, inserting $t=0$ in \eqref{eq:gge} we obtain
equality in~\eqref{eq:cosle}
also for $\theta\in[0,\tfrac14]$, whence for all $\theta\in [0,\frac12]$,
and subsequently equality in~\eqref{eq:angle}
for all $\theta\in [0,\frac12].$
In particular, we find
$\angle\br{f(0),f(\frac14)} = \angle\br{f(\frac14),f(\frac12)} = \pi$.
So $f$ joins two antipodal points by an arc of length $\pi$.
Therefore both $f|_{[0,1/4]}$ and $f|_{[1/4,1/2]}$ are great semicircles connecting $f(0)$ and $f(\frac14)$ resp.\@ $f(\frac14)$ and $f(\frac12)$, and if these two great semicircles did not belong to the same great circle then
$\int_0^Lf\not=0$, contradiction.
Note that $t\mapsto \int_{0}^{t}f(\theta)\ensuremath{\,\mathrm{d}}\theta$ then indeed parametrizes a circle of radius $\tfrac1{4\pi}$,
as $f$ parametrizes a great circle on $\ensuremath{\mathbb{S}}^2$ with constant speed $4\pi$ a.e. \end{proof}
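
Purely as an illustration of the equality case just discussed (and not needed for the argument), one may check numerically that for the circle of radius $\tfrac1{4\pi}$ the tangent indicatrix $f=\gamma'$ satisfies $\int_0^{1/2}f=0$ and that the lower bound in \eqref{eq:gge} is attained with equality for $T=\tfrac14$.
\begin{verbatim}
# Sketch only: equality case L = 1/2 in Proposition (minarc) for the
# circle of radius 1/(4*pi); its tangent f traces a great circle with
# speed 4*pi, one may take T = 1/4, and (eq:gge) holds with equality.
import numpy as np

L, T, n = 0.5, 0.25, 200000
s = L*(np.arange(n) + 0.5)/n                        # midpoints of [0,L]
f = np.stack([np.cos(4*np.pi*s), np.sin(4*np.pi*s), 0*s], axis=1)
print(L*f.mean(axis=0))                             # int_0^L f  ~  (0,0,0)

fT = np.array([-1.0, 0.0, 0.0])                     # f(T) = f(1/4)

def g(t, m=20000):
    u = T + (t - T)*(np.arange(m) + 0.5)/m          # midpoints between T and t
    fu = np.stack([np.cos(4*np.pi*u), np.sin(4*np.pi*u), 0*u], axis=1)
    return abs(t - T)*np.mean(fu @ fT)              # sign(t-T)*int_T^t <f,f(T)>

for t in (0.1, 0.2, 0.3, 0.45):
    print(g(t) - np.sin(4*np.pi*abs(t - T))/(4*np.pi))   # ~ 0
\end{verbatim}
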
\section{Tangential pairs of circles identify $(2,b)$-torus knots}\label{sec:class}
We have seen in the previous section that elastic knots are restricted to the one-parameter family \eqref{eq:tg8} of tangential pairs of circles, if $(4\pi)^2$ is the sharp lower bound for the bending energy on $\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$; see \eqref{infimaequal}.
Since elastic knots by definition lie in the $C^1$-closure of $\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$, one can ask whether the existence of tangential pairs of circles in that $C^1$-closure determines the knot class $\ensuremath{\mathcal{K}}$ in any way. This is indeed the case: $\ensuremath{\mathcal{K}}$ turns out to be either unknotted or a $(2,b)$-torus knot for odd $b$ with $|b|\ge 3$, as will be shown in Proposition \ref{prop:braids}. The following preliminary construction of sufficiently small cylinders containing the self-intersection of a given tangential pair of circles in Lemmata \ref{lem:lgr} and \ref{lem:zylinder}, as well as the explicit isotopy constructed in Lemma \ref{lem:isohandle}, not only prepare the proof of Proposition \ref{prop:braids} but also provide the foundation for the argument to single out the twice-covered circle as the only possible shape of an elastic knot in $\ensuremath{\mathcal{T}}(2,b)$ in Section \ref{sec:5}.
We intend to characterize the situation of a curve $\gamma$ being close to $\tge_{\varphi}$ for $\varphi\in(0,\pi]$ with respect to the $C^{1}$-topology, i.e., \[ \norm{\gamma-\tge_{\varphi}}_{C^{1}}\le\delta \] where $\delta>0$ will depend on some $\ensuremath{\varepsilon}>0$.
To this end, we consider a cylinder $\mathcal Z$ around the intersection point of the two circles of $\tge_{\varphi}$ (which is the origin), see Figure~\ref{fig:zylinder}. Its axis will be parallel to the tangent line (containing $\mathbf e_{2}$) and centered at the origin. More precisely, \begin{equation}\label{grundzylinder}
\mathcal Z := \sett{(x_{1},x_{2},x_{3})^{\top}\in\ensuremath{\mathbb{R}}^{3}}{x_{1}^{2}+x_{3}^{2}\le\zeta^{2},\abs{x_{2}}\le\eta}, \end{equation} where \[ \eta:=\sqrt{\nfrac\zeta{({8\pi)}}}
\qquad\text{for some }\zeta\in\left(0,\tfrac1{32\pi}\right]. \] This will put the knot $\gamma$ into a shape resembling the closure of a braid representation. (See, e.g., Burde and Zieschang~\cite[Chap.~2~D, 10]{BZ} for information on braids and their closures.) Outside $\mathcal Z$ the curve $\gamma$ consists of two ``unlinked'' handles which both connect the opposite caps of $\mathcal Z$. Inside $\mathcal Z$ it consists of two sub-arcs $\tilde\gamma_{1}$ and $\tilde\gamma_{2}$ of $\gamma$ which can be reparametrized as graphs over the $\mathbf e_{2}$-axis (thus also entering and leaving at the caps).
In the following, the knot type of $\gamma$ will be analyzed by studying the over- and under-crossings inside $\mathcal Z$ viewed under a suitable projection. Due to the graph representation, each fibre \[ \mathcal Z_{\xi}:=\mathcal Z\cap\br{\xi+\mathbf e_{2}^{\perp}} = \sett{(x_{1},\xi,x_{3})^{\top}\in\ensuremath{\mathbb{R}}^{3}}{x_{1}^{2}+x_{3}^{2}\le\zeta^{2}}, \qquad \xi\in\sq{-\tfrac1{16\pi},\tfrac1{16\pi}}, \] is met transversally in precisely one point of each arc, $\tilde\gamma_{1}(\xi)$ and $\tilde\gamma_{2}(\xi)$, respectively. By embeddedness of $\gamma$, this defines a vector \begin{equation}\label{eq:a-xi}
a_{\xi} := \tilde\gamma_{1}(\xi)-\tilde\gamma_{2}(\xi)\in\mathbf e_{2}^{\perp} \end{equation} of positive length. Set \begin{equation}\label{eq:nu} \nu:=\mathbf e_{1}\cos\nfrac\varphi2+\mathbf e_{3}\sin\nfrac\varphi2. \end{equation} Since both, $\nu$ and $a_\xi$ for every $\xi\in [-\eta,\eta]$, are contained in the plane $\mathbf e_2^\perp$, we may write \begin{equation}\label{betadef} a_\xi= \left(\begin{array}{ccc} \cos\beta(\xi) & 0 & -\sin\beta(\xi)\\ 0 & 1 & 0 \\ \sin\beta(\xi) & 0 & \cos\beta(\xi)\end{array}\right)\nu =\mathbf e_{1}\cos\br{\nfrac\varphi2+\beta(\xi)}+\mathbf e_{3}\sin\br{\nfrac\varphi2+\beta(\xi)}, \end{equation} which defines a continuous function $\beta\in C^0([-\eta,\eta])$ measuring the angle between $\nu$ and $a_\xi$ for $\xi\in [-\eta,\eta].$ This function is uniquely defined if we additionally set $\beta(-\eta)\in [0,2\pi)$, and it traces possible multiple rotations of the vector $a_\xi$ about the $\mathbf e_2$-axis as $\xi$ traverses the parameter range from $-\eta$ to $+\eta$. So $\beta(\xi)$ should {\it not} be considered an element of $\ensuremath{\mathbb{R}}/2\pi\ensuremath{\mathbb{Z}}$.
Choosing $\delta$ sufficiently small, it will turn out that the difference of the $\beta$-values at the caps of the cylinder $\mathcal{Z}$, i.e., \[ \Delta_{\beta}\equiv \Delta_{\beta,\eta} := \beta(\eta)-\beta(-\eta) \] captures the essential topological information of the knot type of $\gamma$. (Since $\eta$ will be fixed later on we may as well suppress the dependence of $\Delta_\beta$ on $\eta.$)
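
The following toy computation (a sketch with hypothetical strand data, unrelated to any minimizer) illustrates how $\beta$ and $\Delta_\beta$ can be extracted from two sampled strands written as graphs over the $\mathbf e_{2}$-axis: one tracks the angle of $a_\xi$ in the plane $\mathbf e_2^\perp$ and unwraps it across the fibres.
\begin{verbatim}
# Illustration only, with made-up strands: strand 1 winds twice around
# strand 2 inside the cylinder, so the unwrapped angle of
# a_xi = tilde_gamma_1(xi) - tilde_gamma_2(xi) changes by Delta_beta ~ 4*pi.
import numpy as np

eta = 0.02
xi = np.linspace(-eta, eta, 4001)
strand2 = np.stack([0*xi, -xi, 0*xi], axis=1)               # straight core
ang = 2*np.pi*2*(xi + eta)/(2*eta)                           # two full turns
strand1 = strand2 + 0.005*np.stack([np.cos(ang), 0*xi, np.sin(ang)], axis=1)

a = strand1 - strand2                                        # a_xi in e_2-perp
beta = np.unwrap(np.arctan2(a[:, 2], a[:, 0]))               # continuous angle
print(beta[-1] - beta[0], 4*np.pi)                           # both ~ 12.566
\end{verbatim}
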
In order to make these ideas more precise, we first introduce a local graph representation of $\gamma\cap\mathcal Z$ in Lemma~\ref{lem:lgr} below. Then we employ an isotopy that maps $\gamma$ outside $\mathcal Z$ to $\tge_{\varphi}$, see Lemma~\ref{lem:isohandle}. Proposition~\ref{prop:braids} will characterize the knot type of $\gamma$ assuming that the opening angle $\varphi$ of the tangential pair of circles $\tge_\varphi$ in \eqref{eq:tg8} is different from zero, and in Corollary~\ref{cor:braids} we state the corresponding result for $\varphi=0$.
The reparametrization can be written explicitly for $\tge_{\varphi}$, $\varphi\in[0,\pi]$. In fact, letting $\widetilde{\tge}_{\varphi,1}(\xi) := \tge_{\varphi}\br{\frac{\arcsin(4\pi\xi)}{4\pi}}$ and $\widetilde{\tge}_{\varphi,2}(\xi) := \tge_{\varphi}\br{\frac{\arcsin(4\pi\xi)}{4\pi}+\frac12}$ we arrive at $\widetilde{\tge}_{\varphi,j}(\xi)\in-\xi\mathbf e_{2}+\mathbf e_{2}^{\perp}$ for all $\xi\in[-\frac{1}{4\pi},\frac{1}{4\pi}]$ and $j=1,2$.
\begin{lemma}[Local graph representation]\label{lem:lgr}
Let $\varphi\in[0,\pi]$, $\ensuremath{\varepsilon}>0$, and $\gamma\in C^{1}\ensuremath{(\R/\ensuremath{\mathbb{Z}},\R^3)}$ with
\begin{equation}\label{eq:lgr}
\norm{\gamma-\tge_{\varphi}}_{C^{1}}
\le\delta=\delta_{\ensuremath{\varepsilon}}:=\min\br{\tfrac{\ensuremath{\varepsilon}}{42},\tfrac1{16\pi}}.
\end{equation}
Then there are two diffeomorphisms $\phi_{1},\phi_{2}\in C^{1}([-\frac{1}{16\pi},\frac{1}{16\pi}])$
such that, for $j=1,2$,
\[ \tilde\gamma_{j}(\xi):=\gamma(\phi_{j}(\xi))\in-\xi\mathbf e_{2}+\mathbf e_{2}^{\perp}\qquad\text{for all } \xi\in[-\tfrac{1}{16\pi},\tfrac{1}{16\pi}] \]
and
\begin{equation*}
\norm{\tilde\gamma_{j}-\widetilde{\tge}_{\varphi,j}}_{C^{1}([-\frac{1}{16\pi},\frac{1}{16\pi}])} \le \ensuremath{\varepsilon}.
\end{equation*} \end{lemma}
\begin{proof}
From~\eqref{eq:tg8} we infer for $t\in[-\tfrac1{24},\tfrac1{24}]
\cup[\tfrac{11}{24},\tfrac{13}{24}]$
\[ \sp{\tge_{\varphi}'(t),-\mathbf e_{2}} = \cos(4\pi t) \ge \
\cos\tfrac\pi6 = \tfrac12\sqrt3, \]
so, for $\norm{\gamma'-\tge_{\varphi}'}_{C^{0}}\le\frac1{2}\br{\sqrt3-1}$,
\[ \sp{\gamma'(t),-\mathbf e_{2}} \ge
\sp{\tge_{\varphi}'(t),-\mathbf e_{2}} - \norm{\gamma'-\tge_{\varphi}'}_{C^{0}}
\ge \tfrac12. \]
Thus the first derivative of the $C^{1}$-mapping
$t\mapsto\sp{\gamma(t),-\mathbf e_{2}}$
is strictly positive on $[-\frac1{24},\frac1{24}]$ and $[\frac{11}{24},\frac{13}{24}]$.
Consequently it is invertible, and its inverse is also $C^{1}$.
Since $\norm{\gamma-\tge_{\varphi}}_{C^{0}} \le \delta \le \frac1{16\pi}$,
its image contains the interval $[-\frac1{16\pi},\frac1{16\pi}]$
since $\sp{\tge_{\varphi}\br{[-\frac1{24},\frac1{24}]},-\mathbf e_{2}}=[-\frac{1}{8\pi},\frac{1}{8\pi}]$.
We denote the
respective inverse functions, restricted to $[-\frac1{16\pi},\frac1{16\pi}]$,
by $\phi_{1},\phi_{2}$.
In order to estimate the distance between $x:=\phi_{1}(\xi)$ and
$y:=\frac{\arcsin(4\pi\xi)}{4\pi}$ for $\xi\in[-\frac1{16\pi},\frac1{16\pi}]$
we first remark that, by construction, $x\in[-\frac1{24},\frac1{24}]$.
We obtain
\begin{align*}
\abs{\gamma(x)-\tge_{\varphi}(x)}
&\ge\dist\br{-\xi\mathbf e_{2}+\mathbf e_{2}^{\perp},\tge_{\varphi}(x)} \\
&=\abs{\xi-\sp{\tge_{\varphi}(x),-\mathbf e_{2}}} \\
&=\tfrac1{4\pi}\abs{4\pi\xi-\sin(4\pi x)} \\
&=\tfrac1{4\pi}\abs{\sin(4\pi y)-\sin(4\pi x)} \\
&=\abs{\int_{x}^{y}\cos(4\pi s)\ensuremath{\,\mathrm{d}} s} \\
&\ge \min\br{\cos(4\pi y),\cos(4\pi x)}\abs{x-y} \\
&\ge \min\br{\tfrac14\sqrt{15},\cos\tfrac\pi6}\abs{x-y} \\
&=\tfrac12\sqrt3\abs{x-y},
\end{align*} where we used the fact that $x$ and $y$ are so small that we are in the strictly concave region of the cosine near zero.
Using $\norm{\gamma-\tge_{\varphi}}_{C^{1}}\le\delta\le\tfrac1{16\pi}$,
we arrive at
\[ \abs{\phi_{1}(\xi)-\tfrac{\arcsin(4\pi\xi)}{4\pi}} = \abs{x-y}\le\tfrac23\sqrt3\delta
\qquad\text{for }\xi\in[-\tfrac1{16\pi},\tfrac1{16\pi}]. \]
For the derivatives, we infer
\begin{align*}
&\sp{\tge_{\varphi}\br{\tfrac{\arcsin(4\pi\xi)}{4\pi}},-\mathbf e_{2}} = \xi,
&&\sp{\tge_{\varphi}'\br{\tfrac{\arcsin(4\pi\xi)}{4\pi}},-\mathbf e_{2}} = \sqrt{1-(4\pi\xi)^{2}},
\end{align*}
thus
\begin{align*}
&\abs{\phi_{1}'(\xi)-\tfrac{\ensuremath{\,\mathrm{d}}}{\ensuremath{\,\mathrm{d}}\xi}\tfrac{\arcsin(4\pi\xi)}{4\pi}}
= \abs{\frac1{{\sp{\gamma'(\phi_{1}(\xi)),-\mathbf e_{2}}}}-\frac1{\sqrt{1-(4\pi\xi)^{2}}}} \\
&=\abs{\frac{\sp{\tge_{\varphi}'(\tfrac{\arcsin(4\pi\xi)}{4\pi}),-\mathbf e_{2}}-\sp{\gamma'(\phi_{1}(\xi)),-\mathbf e_{2}}}{\sqrt{1-(4\pi\xi)^{2}}\sp{\gamma'(\phi_{1}(\xi)),-\mathbf e_{2}}}} \\
&\le\abs{\frac{\sp{\tge_{\varphi}'(\tfrac{\arcsin(4\pi\xi)}{4\pi})-\gamma'(\phi_{1}(\xi)),-\mathbf e_{2}}}{\frac14\sqrt{15}\br{\sqrt{1-(4\pi\phi_{1}(\xi))^{2}}-\sp{\tge_{\varphi}'(\phi_{1}(\xi))-\gamma'(\phi_{1}(\xi)),-\mathbf e_{2}}}}} \\
&\le\frac{\abs{\tge_{\varphi}'(\tfrac{\arcsin(4\pi\xi)}{4\pi})-\gamma'(\phi_{1}(\xi))}}{\frac14\sqrt{15}\br{\sqrt{1-(\tfrac\pi6)^{2}}-\delta}} \\
&\le\frac{4\pi\abs{\tfrac{\arcsin(4\pi\xi)}{4\pi}-\phi_{1}(\xi)}
+\abs{\tge_{\varphi}'(\phi_{1}(\xi))-\gamma'(\phi_{1}(\xi))}}{\frac14\sqrt{15}\br{\sqrt{1-(\tfrac\pi6)^{2}}-\frac1{16\pi}}} \\
&\le\frac{\frac{8\pi}{3}\sqrt3+1}{\frac14\sqrt{15}\br{\sqrt{1-(\tfrac\pi6)^{2}}-\frac1{16\pi}}}\cdot\delta \\
&\le 20\delta \qquad\text{for }\xi\in[-\tfrac1{16\pi},\tfrac1{16\pi}].
\end{align*}
We arrive at
\begin{align*}
\norm{\tilde\gamma_{1}-\widetilde{\tge}_{\varphi,1}}_{C^{0}([-\frac{1}{16\pi},\frac{1}{16\pi}])}
&=\norm{\gamma\circ\phi_{1}-\tge_{\varphi}(\tfrac{\arcsin(4\pi\cdot)}{4\pi})}_{C^{0}} \\
&\le\norm{\gamma\circ\phi_{1}-\tge_{\varphi}\circ\phi_{1}}_{C^{0}}
+\norm{\tge_{\varphi}\circ\phi_{1}-\tge_{\varphi}(\tfrac{\arcsin(4\pi\cdot)}{4\pi})}_{C^{0}} \\
&\le\br{1+\tfrac23\sqrt3}\delta \\
&\le3\delta, \\
\norm{\tilde\gamma_{1}'-\widetilde{\tge}_{\varphi,1}'}_{C^{0}([-\frac{1}{16\pi},\frac{1}{16\pi}])}
&=\norm{\br{\gamma'\circ\phi_{1}}\phi_{1}'-\tge_{\varphi}'\br{\tfrac{\arcsin(4\pi\cdot)}{4\pi}}\frac1{\sqrt{1-(4\pi\cdot)^{2}}}}_{C^{0}} \\
&\le
\norm{\br{\gamma'\circ\phi_{1}}\br{\phi_{1}'-\frac1{\sqrt{1-(4\pi\cdot)^{2}}}}}_{C^{0}}
+\tfrac43\norm{\gamma'\circ\phi_{1}-\tge_{\varphi}'\circ\phi_{1}}_{C^{0}} \\
&\qquad{} +\tfrac43\norm{\tge_{\varphi}'\circ\phi_{1}-\tge_{\varphi}'\br{\tfrac{\arcsin(4\pi\cdot)}{4\pi}}}_{C^{0}} \\
&\le 20\delta\norm{\gamma'}_{C^{0}} + \tfrac43\delta + \tfrac43\cdot4\pi\cdot\tfrac23\sqrt3\delta \\
&\le\br{20+20\delta+21}\delta \\
&\le 42\delta \le\ensuremath{\varepsilon}
\end{align*}
by our choice of $\delta$. The case $j=2$ is symmetric. \end{proof}
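
For the reader's convenience, the two explicit numerical constants used in the last two chains of estimates can be confirmed by a short computation (illustration only).
\begin{verbatim}
# Quick numerical verification of the constants at the end of the proof
# of Lemma (lgr); illustration only.
import numpy as np

delta_max = 1/(16*np.pi)          # delta <= min(eps/42, 1/(16*pi))

c1 = ((8*np.pi/3)*np.sqrt(3) + 1) / \
     (0.25*np.sqrt(15)*(np.sqrt(1 - (np.pi/6)**2) - 1/(16*np.pi)))
print(c1, c1 <= 20)               # ~ 19.3, True

c2 = 20 + 20*delta_max + 21
print(c2, c2 <= 42)               # ~ 41.4, True
\end{verbatim}
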
Now we can show that the intersection of $\gamma$ with the cylinder has in fact the form claimed above.
\begin{lemma}[Two strands in a cylinder]\label{lem:zylinder}
Let $\varphi\in[0,\pi]$, $\zeta\in\left(0,\tfrac1{96\pi}\right]$, and $\gamma\in C^{1}\ensuremath{(\R/\ensuremath{\mathbb{Z}},\R^3)}$ with
\begin{equation}
\norm{\gamma-\tge_{\varphi}}_{C^{1}} \le \delta\equiv\delta_{\zeta/2}
\end{equation}
where $\delta_{\ensuremath{\varepsilon}}>0$ is the constant from~\eqref{eq:lgr}.
Then the intersection of $\gamma$ with the cylinder
$\mathcal Z$ defined in \eqref{grundzylinder}
consists of two (connected) arcs $\tilde\gamma_{1}$,
$\tilde\gamma_{2}$
which enter and leave at the caps of $\mathcal Z$. These arcs
can be written as graphs over the $\mathbf e_{2}$-axis, so
each fibre
$\mathcal Z_{\xi}$
is met by both $\tilde\gamma_{1}$ and $\tilde\gamma_{2}$ transversally in precisely one point
respectively. \end{lemma}
Note that the images of $\tilde\gamma_{1}$ and $\tilde\gamma_{2}$ might possibly intersect.
\begin{proof}
Applying Lemma~\ref{lem:lgr} we merely have to show that
$\mathcal Z$ is not too narrow. We compute for $\abs\xi\le\eta$, $j=1,2$,
\begin{align*}
\abs{\widetilde{\tge}_{\varphi,j}(\xi)+\xi\mathbf e_{2}}
&=\tfrac1{4\pi}\br{1-\cos\arcsin(4\pi\xi)} \\
&=\tfrac1{4\pi} \frac{(4\pi\xi)^{2}}{1+\sqrt{1-(4\pi\xi)^{2}}}\le4\pi\xi^{2}\le\tfrac\zeta2, \\
\abs{\tilde\gamma_{j}(\xi)+\xi\mathbf e_{2}}&\le \tfrac\zeta2+\tfrac\zeta2=\zeta.
\end{align*}
Furthermore we have to show that there exist no other intersection
points of $\gamma$ with $\mathcal Z$.
In the neighborhood of the caps of $\mathcal Z$
there are no such points apart from those belonging to $\tilde\gamma_{1},\tilde\gamma_{2}$
since the tangents of $\gamma$ transversally meet
the normal disks of $\tge_{\varphi}$ as follows.
As $\abs{\gamma(t)-\tge_{\varphi}(t)}\le\delta\le\frac{\ensuremath{\varepsilon}}{42}$,
all points belong to the $\delta$-neighborhood of $\tge_{\varphi}$,
and any point $\gamma(t)$ belongs to a normal disk
centered at $\tge_{\varphi}(\tilde t)$ with
$\abs{\gamma(t)-\tge_{\varphi}(\tilde t)}\le\delta$,
so
\begin{equation}\label{eq:t-tilde1}
\abs{\tge_{\varphi}(\tilde t)-\tge_{\varphi}(t)}\le2\delta.
\end{equation}
Furthermore, as $\tge_{\varphi}$ parametrizes a circle on $[0,\tfrac12]$
and $[\tfrac12,1]$, we arrive at
\begin{equation}\label{eq:t-tilde2}
\sin\br{2\pi\abs{\tilde t-t}} = 2\pi\abs{\tge_{\varphi}(\tilde t)-\tge_{\varphi}(t)} \qquad\text{for either }t,\tilde t\in[0,\tfrac12]\text{ or }t,\tilde t\in[\tfrac12,1].
\end{equation}
Therefore, the angle between
$\tge_{\varphi}'(\tilde t)$ and $\tge_{\varphi}'(t)$
amounts to at most
\begin{align*}
\arccos\sp{\tge_{\varphi}'(\tilde t),\tge_{\varphi}'(t)}
&=\arccos\br{1+\sp{\tge_{\varphi}'(\tilde t)-\tge_{\varphi}'(t),\tge_{\varphi}'(t)}} \\
&\le\arccos\br{1-\abs{\tge_{\varphi}'(\tilde t)-\tge_{\varphi}'(t)}} \\
&\le\arccos\br{1-4\pi\abs{\tilde t-t}} \\
&\refeq{t-tilde2}=\arccos\br{1-2\arcsin\br{2\pi\abs{\tge_{\varphi}(\tilde t)-\tge_{\varphi}(t)}}} \\
&\refeq{t-tilde1}\le\arccos\br{1-2\arcsin\br{4\pi\delta}} \\
&\le\arccos\br{1-2\arcsin{\tfrac14}}
<1.1.
\end{align*}
From $\cos x\le1-\tfrac{x^{2}}\pi$ for $x\in[-\tfrac\pi2,\tfrac\pi2]$
we infer
\begin{equation}\label{eq:arccos-x}
x\ge\arccos\br{1-\tfrac{x^{2}}\pi},
\end{equation}
so the angle between $\gamma'(t)$ and $\tge_{\varphi}'(t)$
is bounded above by
\begin{equation}\label{eq:arccos}
\begin{split}
\arccos\frac{\sp{\gamma'(t),\tge_{\varphi}'(t)}}{\abs{\gamma'(t)}}
&\le\arccos\br{\frac{1+\sp{\gamma'(t)-\tge_{\varphi}'(t),\tge_{\varphi}'(t)}}{1+\delta}} \\
&\le\arccos\frac{1-\delta}{1+\delta}
\overset{\eqref{eq:arccos-x}}{\le}\sqrt{\tfrac{2\pi\delta}{1+\delta}}
\le\sqrt{2\pi\delta}\le\tfrac{\sqrt2}4<0.4.
\end{split}
\end{equation}
Thus transversality is established by
\begin{equation}\label{eq:tpc-transversal}
\angle\br{\gamma'(t),\tge_{\varphi}'(\tilde t)}
< 1.5<\tfrac\pi2.
\end{equation}
Now we want to determine which points of $\gamma$ actually lie in the cylinder $\mathcal{Z}$. Such points satisfy
$\abs{\sp{\gamma(t),-\mathbf e_{2}}}\le\eta$
which implies
\[ \abs{\sp{\tge_{\varphi}(t),-\mathbf e_{2}}}\le\eta+\delta
\le\tfrac1{16\pi}+\tfrac1{16\pi}=\tfrac1{8\pi}. \]
This, by definition of $\tge_{\varphi}$ in \eqref{eq:tg8},
leads to $\abs{\sin(4\pi t)}\le\frac12$
which defines four connected arcs in $\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}}$.
Two of them, namely $[-\tfrac1{24},\tfrac1{24}]$ and $[\tfrac{11}{24},\tfrac{13}{24}]$, (partially) belong to $\mathcal Z$;
for the other two, $[\tfrac{5}{24},\tfrac{7}{24}]$ and $[\tfrac{17}{24},\tfrac{19}{24}]$, we obtain
\begin{align*}
\dist\br{\tge_{\varphi}(t),\ensuremath{\mathbb{R}}\mathbf e_{2}}
&=\frac{1-\cos(4\pi t)}{4\pi}\ge\frac{1-\frac12\sqrt2}{4\pi}, \\ \textnormal{hence}\qquad \dist\br{\gamma(t),\ensuremath{\mathbb{R}}\mathbf e_{2}}
&\ge\frac{1-\frac12\sqrt2}{4\pi}-\delta
\ge\frac{4-2\sqrt2-1}{16\pi}
>\frac1{96\pi}\ge\zeta.
\end{align*} \end{proof}
We have just seen that $\mathcal Z$ only contains two sub-arcs of $\gamma$ which are parametrized over the $\mathbf e_{2}$-axis. Thus $\gamma\setminus\mathcal Z$ consists of two arcs which both join (different) caps of $\mathcal Z$. In fact, they have the form of two ``handles''. We intend to map them onto $\tge_{\varphi}$ by a suitable isotopy.
The actual construction is a little bit delicate as we construct an isotopy on normal disks of $\tge_{\varphi}$ which is not consistent with the fibres $\mathcal Z_{\xi}$ of the cylinder. Therefore, we cannot claim to leave the entire cylinder $\mathcal Z$ pointwise invariant. Instead, we consider the two middle quarters of $\mathcal Z$, more precisely
\[ \mathcal Z' := \sett{(x_{1},x_{2},x_{3})^{\top}\in\ensuremath{\mathbb{R}}^{3}}{x_{1}^{2}+x_{3}^{2}\le\zeta^{2},\abs{x_{2}}\le\nfrac\eta2}, \] which will contain the entire ``linking'' of the two arcs provided $\delta$ has been chosen accordingly. In fact we will construct an isotopy on $B_{2\ensuremath{\varepsilon}}\br{\tge_{\varphi}}\setminus\mathcal Z'$ which leaves $\mathcal Z'$ pointwise invariant and maps $\gamma\setminus\mathcal Z$ to $\tge_{\varphi}\setminus\mathcal Z$.
The idea is first to map $\tilde\gamma_{j}$ to $\widetilde{\tge}_{\varphi,j}$ on $\abs\xi\in[\nfrac\eta2+\ensuremath{\varepsilon},\eta-\ensuremath{\varepsilon}]$ and to straight lines connecting $\tilde\gamma_{j}(\nfrac\eta2)$ to $\widetilde{\tge}_{\varphi,j}(\nfrac\eta2+\ensuremath{\varepsilon})$ and $\widetilde{\tge}_{\varphi,j}(\eta-\ensuremath{\varepsilon})$ to $\tilde\gamma_{j}(\eta)$ on $\abs\xi\in[\nfrac\eta2,\nfrac\eta2+\ensuremath{\varepsilon}]\cup[\eta-\ensuremath{\varepsilon},\eta]$. A second isotopy on normal disks of $\tge_{\varphi}$ maps $\gamma\setminus\mathcal Z$ and the straight line inside $\mathcal Z$ on $\abs\xi\in[\eta-\ensuremath{\varepsilon},\eta]$ to $\tge_{\varphi}$.
\begin{figure}
\caption{The handle isotopy on $\mathcal Z$: an intermediate stage, where one connects the part of $\tge_{\varphi}$ in $\mathcal{Z}\setminus\mathcal{Z'}$ to $\gamma$ by short straight segments.}
\label{fig:zylinder}
\end{figure}
\begin{lemma}[Handle isotopy]\label{lem:isohandle}
Let $\varphi\in(0,\pi]$, $\zeta\in\left(0,\tfrac1{96\pi}\right]$, and $\gamma\in C^{1}\ensuremath{(\R/\ensuremath{\mathbb{Z}},\R^3)}$ with
\begin{equation*}
\norm{\gamma-\tge_{\varphi}}_{C^{1}} \le \delta\equiv
\delta_{\ensuremath{\varepsilon}}
\end{equation*}
where $\delta_{\ensuremath{\varepsilon}}>0$ is the constant from~\eqref{eq:lgr} and
\begin{equation}\label{eq:epsilon}
\ensuremath{\varepsilon}\equiv\ensuremath{\varepsilon}_{\zeta}:=\min\br{\nfrac\zeta{64}\sin\nfrac\varphi2,\tfrac1{20}\sqrt{\nfrac{\zeta}{8\pi}}}.
\end{equation}
Then there is an isotopy of $B_{2\ensuremath{\varepsilon}}\br{\tge_{\varphi}}$
which
\begin{itemize}
\item leaves $\mathcal Z'$ and $\partial B_{2\ensuremath{\varepsilon}}\br{\tge_{\varphi}}$
pointwise fixed,
\item deforms $\gamma\setminus\mathcal Z$ to $\tge_{\varphi}\setminus\mathcal Z$, and
\item moves points by at most $4\ensuremath{\varepsilon}$.
\end{itemize} \end{lemma} \begin{remark}\label{rem:iso} From the proof it will become clear that the isotopy actually deforms all of $\gamma$ outside a small $\varepsilon$-neighbourhood $B_\varepsilon( \mathcal{Z}')$ of the smaller cylinder $\mathcal{Z}'$ to $\tge_\varphi.$ In addition, in the small region $B_\varepsilon(\mathcal{Z}')\setminus \mathcal{Z}'$ the curve $\gamma$ is deformed into a straight segment that lies in the $2\varepsilon$-neighbourhood of $\tge_{\varphi}$. \end{remark}
Notice for the following proof that the $\varepsilon$-neighbourhood of $\tge_\varphi$ coincides with the union of all $\varepsilon$-normal disks of $\tge_\varphi$.
\begin{proof}
On circular fibres we may employ an isotopy adapted from
Crowell and Fox~\cite[App.~I, p.~151]{crowell-fox}.
For an arbitrary closed circular planar disk $D$
and given \emph{interior} points $p_{0},p_{1}\in D$ we may define
a homeomorphism $g_{D,p_{0},p_{1}}:D\to D$
by mapping any ray joining $p_{0}$ to a point $q$ on the boundary of $D$
linearly onto the ray joining $p_{1}$ to $q$ so that
$p_{0}\mapsto p_{1}$ and $q\mapsto q$.
This leaves the boundary $\partial D$ pointwise invariant.
Furthermore, $g_{D,p_{0},p_{1}}$ is continuous in $p_{0}$, $p_{1}$ and $D$ (thus especially in
the center and the radius of $D$).
Of course, as $g_{D,p_{0},p_{1}}$ maps $D$ onto itself,
any point is moved by at most the diameter of~$D$.
The isotopy is now provided by
the homeomorphism
\begin{equation}\label{eq:isotopy}
H:[0,1]\times D\to[0,1]\times D, \qquad
(\lambda,x)\mapsto(\lambda,g_{D,p_{0},(1-\lambda) p_{0}+\lambda p_{1}}(x))
\end{equation}
which analogously works for the ellipsoid.
Now we apply this isotopy to any fibre $\mathcal Z_{\xi}$
of
\[ \mathcal Z\setminus\mathcal Z' = \bigcup_{\abs\xi\in[\eta/2,\eta]}\mathcal Z_{\xi}. \]
Here, for $j=1,2$ and any $\xi$ satisfying
$\abs\xi\in\sq{\nfrac\eta2,\eta}$,
we let $\set{p_{0,j}}=\mathcal Z_{\xi}\cap
\tilde\gamma_{j}$ and $D_{j}$ be the $2\ensuremath{\varepsilon}$-ball
centered at $p_{1,j}:=\widetilde{\tge}_{\varphi,j}(\xi)$.
As $\ensuremath{\varepsilon}<\nfrac\zeta4$ we have $D_{j}\subset \mathcal Z_{\xi}$.
By our choice of $\ensuremath{\varepsilon}$,
the disks $D_{1}$ and $D_{2}$ in each fibre are
disjoint. To see this, we compute using
$1-\cos\varphi=2\sin^{2}\nfrac\varphi2$
\begin{align}\label{eq:disjoint-disks}
\begin{split}
\abs{\widetilde{\tge}_{\varphi,1}(\xi)-\widetilde{\tge}_{\varphi,2}(\xi)}
&=\tfrac1{4\pi}\br{1-\cos\arcsin(4\pi\xi)}\sqrt{2-2\cos\varphi} \\
&=\tfrac1{4\pi}\br{1-\sqrt{1-(4\pi\xi)^{2}}}\cdot2\sin\nfrac\varphi2 \\
&=\frac{(4\pi\xi)^{2}}{1+\sqrt{1-(4\pi\xi)^{2}}}\cdot\frac{\sin\tfrac\varphi2}{2\pi}
\ge4\pi\xi^{2}\sin\nfrac\varphi2 \\
&\ge\pi\eta^{2}\sin\nfrac\varphi2=\nfrac\zeta8\sin\nfrac\varphi2
\ge8\ensuremath{\varepsilon}
\end{split}
\end{align}
for any $\xi\in\sq{\nfrac\eta2,\eta}$,
so $\dist(D_{1},D_{2})
\ge8\ensuremath{\varepsilon}-2\cdot2\ensuremath{\varepsilon}=4\ensuremath{\varepsilon}>0$.
Now we construct an isotopy on the fibres
contained in $\mathcal Z\setminus\mathcal Z'$,
see Figure~\ref{fig:zylinder}.
We will always employ the isotopy~\eqref{eq:isotopy}
on the fibres with $p_{0,j}:=\tilde\gamma_j(\xi)$.
For $\xi \in \sq{\nfrac\eta2,\nfrac\eta2+\ensuremath{\varepsilon}}$
we let $p_{1,j}$ be the intersection
of $\mathcal Z_{\xi}$ with the straight line
joining $\tilde\gamma_{j}(\nfrac\eta2)$ and $\widetilde{\tge}_{\varphi,j}(\nfrac\eta2+\ensuremath{\varepsilon})$.
To this end we have to ensure that
this line belongs to the $2\ensuremath{\varepsilon}$-neighborhood
of $\tge_{\varphi}$.
(In fact, its interior points belong to $\mathcal Z\setminus\mathcal Z'$ since any cylinder is convex.)
As $\tilde\gamma_{j}(\nfrac\eta2)$
belongs to the $\ensuremath{\varepsilon}$-neighborhood
of $\tge_{\varphi}$,
it is sufficient to apply Lemma~\ref{lem:litn} below.
To this end, we let $f = \mathbb P\,\widetilde{\tge}_{\varphi,j}$
where $\mathbb P$ denotes the projection to $\mathbf e_{2}^{\perp}$, and
\[ f'(\xi) = \frac{\mathbb P\tge_{\varphi}'\br{\frac{\arcsin(4\pi\xi)}{4\pi}}}{\sqrt{1-(4\pi\xi)^{2}}},
\quad
f''(\xi) = \frac{\mathbb P\tge_{\varphi}''\br{\frac{\arcsin(4\pi\xi)}{4\pi}}}{{1-(4\pi\xi)^{2}}}
+ (4\pi)^{2}\xi\frac{\mathbb P\tge_{\varphi}'\br{\frac{\arcsin(4\pi\xi)}{4\pi}}}{\br{1-(4\pi\xi)^{2}}^{3/2}}. \]
As $\abs{\tge_{\varphi}'}\equiv1$, $\abs{\tge_{\varphi}''}\equiv4\pi$
(up to the points $t=0$, $t=\tfrac12$ of tangential intersection), we arrive at
$\abs{f''}\le6\pi$
for any $\xi\in\sq{-\frac1{16\pi},\frac1{16\pi}}$.
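For completeness, this bound follows from $\abs{\mathbb P v}\le\abs v$ together with $1-(4\pi\xi)^{2}\ge\tfrac{15}{16}$ on this range:
\[ \abs{f''(\xi)} \le \tfrac{16}{15}\cdot4\pi + (4\pi)^{2}\cdot\tfrac1{16\pi}\cdot\br{\tfrac{16}{15}}^{3/2}
= \br{\tfrac{64}{15}+\br{\tfrac{16}{15}}^{3/2}}\pi < 5.4\,\pi \le 6\pi. \]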
Then, applying Lemma~\ref{lem:litn} to
$f(\cdot-\nfrac\eta2)$ with $\ell=\ensuremath{\varepsilon}$, $K=6\pi$, and
$y=\tilde\gamma_{j}(\nfrac\eta2)$, the distance of any point of
the straight line to $f$ is bounded by
$\sqrt2\cdot 6\pi\ensuremath{\varepsilon}^{2}+\ensuremath{\varepsilon}<2\ensuremath{\varepsilon}$
for $\ensuremath{\varepsilon}<\frac1{6\pi\sqrt2}$.
For future reference we remark that,
by $\abs{\widetilde{\tge}'_{\varphi,j}}\le\nfrac4{\sqrt{15}}$, the angle between
the straight line and the $\mathbf e_{2}$-axis
is bounded above by
\begin{equation}\label{eq:straight-line-angle}
\arctan\frac{\abs{\tilde\gamma_{j}(\nfrac\eta2)-\widetilde{\tge}_{\varphi,j}(\nfrac\eta2)}
+\abs{\widetilde{\tge}_{\varphi,j}(\nfrac\eta2)-\widetilde{\tge}_{\varphi,j}(\nfrac\eta2+\ensuremath{\varepsilon})}}{\br{\nfrac\eta2+\ensuremath{\varepsilon}}-\nfrac\eta2}
\le\arctan\tfrac{\ensuremath{\varepsilon}+4\ensuremath{\varepsilon}/\sqrt{15}}{\ensuremath{\varepsilon}}<\tfrac25\pi.
\end{equation}
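Numerically, $\arctan\br{1+\nfrac4{\sqrt{15}}}\approx0.354\,\pi$, so this bound holds with some room to spare.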
For $\xi \in \sq{\nfrac\eta2+\ensuremath{\varepsilon},\eta-\ensuremath{\varepsilon}}$
we let $p_{1,j} = \widetilde{\tge}_{\varphi,j}(\xi)$.
For $\xi \in \sq{\eta-\ensuremath{\varepsilon},\eta}$
we let $p_{1,j}$ be the intersection
of $\mathcal Z_{\xi}$ with the straight line
joining $\tilde\gamma_{j}(\eta)$ and $\widetilde{\tge}_{\varphi,j}(\eta-\ensuremath{\varepsilon})$.
We argue as before using Lemma~\ref{lem:litn}.
This particular straight line ends at one cap of $\mathcal Z$
and will be moved to $\tge_{\varphi}$ by the second isotopy.
The same construction can be applied for the corresponding negative
values of $\xi$.
Now we have obtained the situation sketched in Figure~\ref{fig:zylinder}.
We consider the $2\ensuremath{\varepsilon}$-neighborhood of $\tge_{\varphi}$.
It consists of normal disks centered at the points of $\tge_{\varphi}$.
We restrict to a neighborhood containing those normal disks
that do not intersect $\mathcal Z'$.
They cover the straight lines in $\mathcal Z$ at $\abs\xi \in \sq{\eta-\ensuremath{\varepsilon},\eta}$, for otherwise there would be some normal disk (of radius $2\ensuremath{\varepsilon}$) intersecting both the straight line (at $\abs\xi\in[\eta-\ensuremath{\varepsilon},\eta]$) and $\mathcal Z'$ (at $\xi\in[-\nfrac\eta2,\nfrac\eta2]$).
However, by construction, $(\eta-\ensuremath{\varepsilon})-\nfrac\eta2=\nfrac\eta2-\ensuremath{\varepsilon}
=\tfrac12\sqrt{\nfrac\zeta{8\pi}}-\ensuremath{\varepsilon}\refeq{epsilon}\ge9\ensuremath{\varepsilon}$,
a contradiction.
In order to apply the isotopy which moves the points of $\gamma$ to $\tge_{\varphi}$,
we have to show
that all points of $\gamma$ (and the straight lines as well) belong to
the $2\ensuremath{\varepsilon}$-neighborhood of $\tge_{\varphi}$ and transversally meet the
corresponding normal disk.
For the straight lines we have already seen (using
Lemma~\ref{lem:litn}) that they lie inside
the $2\ensuremath{\varepsilon}$-neighborhood of $\tge_{\varphi}$.
On $\abs\xi\le\eta\le\frac1{\sqrt3\cdot16\pi}$
the angle $\alpha$ between the $\mathbf e_{2}$-axis
and the tangent line to $\widetilde{\tge}_{\varphi,j}$ amounts at most
to
\begin{equation}\label{eq:axis-angle}
\arcsin\frac{\frac1{16\pi\sqrt3}}{\frac1{4\pi}}=\arcsin\tfrac1{4\sqrt3}<\tfrac\pi{10};
\end{equation}
see Figure~\ref{fig:tangent}.
\begin{figure}\label{fig:tangent}
\end{figure}
Therefore, using~\eqref{eq:straight-line-angle},
the angle between the straight line
and the tangent line to $\widetilde{\tge}_{\varphi,j}$
is strictly smaller than $\frac\pi2$,
which implies transversality.
As calculated above, the distance between $\mathcal Z_{\eta}$
and $\mathcal Z_{\eta/2}$ amounts to
at least $10\ensuremath{\varepsilon}$, so the straight line is covered
by normal disks which do not intersect $\mathcal Z'$.
For the other points of $\gamma$ outside $\mathcal Z$ we argue as
in~\eqref{eq:tpc-transversal}. \end{proof}
\begin{lemma}[Lines inside a graphical tubular neighborhood]\label{lem:litn}
Let $f\in C^{2}([0,\ell],\ensuremath{\mathbb{R}}^{d})$, $\abs{f''}\le K$, $y\in\ensuremath{\mathbb{R}}^{d}$.
Then, $g(x):=f(x) - \br{f(0)+\frac x\ell(y-f(0))}$ satisfies
\[ \abs{g(x)} \le \sqrt dK\ell^{2} + \abs{y-f(\ell)}
\qquad\text{for any }x\in[0,\ell]. \] \end{lemma}
\begin{proof}
We compute
\[ g(x) = \int_{0}^{x} \br{f'(\xi)-\frac{f(\ell)-f(0)}{\ell}-\frac{y-f(\ell)}{\ell}}\ensuremath{\,\mathrm{d}}\xi. \]
By the mean value theorem, there are $\sigma_{1},\dots,\sigma_{d}\in(0,\ell)$
with $f'_{j}(\sigma_{j})=\frac{f_{j}(\ell)-f_{j}(0)}{\ell}$, $j=1,\dots,d$, so
\begin{align*}
\abs{g(x)} &\le \int_{0}^{x} \sqrt{\abs{f'_{1}(\xi)-f'_{1}(\sigma_{1})}^{2}
+ \cdots + \abs{f'_{d}(\xi)-f'_{d}(\sigma_{d})}^{2}}\ensuremath{\,\mathrm{d}}\xi + \abs{y-f(\ell)} \\
&\le\sqrt dK\ell^{2}+\abs{y-f(\ell)}.
\end{align*} \end{proof}
Recall the definition of the continuous accumulating angle function $\beta:[-\eta,\eta]\to\ensuremath{\mathbb{R}}$ in \eqref{betadef}, measuring the direction of the segment $a_\xi$ connecting $\tilde\gamma_1(\xi)$ and $\tilde\gamma_2(\xi)$ (see \eqref{eq:a-xi}), set $\Delta_{\beta} = \beta(\eta)-\beta(-\eta)$, and recall that $\nu$ was given by \eqref{eq:nu}.
\begin{proposition}[Only $(2,b)$-torus knots are $C^1$-close to $\tge_{\varphi}$]\label{prop:braids}
Let $\varphi\in(0,\pi]$, $\zeta\in\left(0,\tfrac1{96\pi}\right]$, and $\gamma\in C^{1}\ensuremath{(\R/\ensuremath{\mathbb{Z}},\R^3)}$ be embedded with
\begin{equation*}
\norm{\gamma-\tge_{\varphi}}_{C^{1}} \le \delta
\end{equation*}
where $\delta\equiv\delta_{\ensuremath{\varepsilon}}>0$ and $\ensuremath{\varepsilon}\equiv\ensuremath{\varepsilon}_{\zeta}>0$
are defined in~\eqref{eq:lgr} and~\eqref{eq:epsilon}.
Let $b\in\ensuremath{\mathbb{Z}}$ denote the rounded value of
$\Delta_{\beta}/\pi$.
Then $b$ is an odd integer, and $\gamma$
is unknotted if $b=\pm1$ and belongs to $\tkc$ if $|b|\ge 3$. \end{proposition}
\begin{figure}
\caption{The maximal possible angle between the lines
defined by $a_{\pm\xi}$ and $\nu$.}
\label{fig:a-angle}
\end{figure}
\begin{proof}
For each $\xi\in [\eta/2,\eta]$ (in fact,
for any $\xi\in(0,\eta]$) the line $\ensuremath{\mathbb{R}}\nu$
is perpendicular to the vectors
$\chi_{\pm\xi}:=\widetilde{\tge}_{\varphi,1}(\pm\xi)- \widetilde{\tge}_{\varphi,2}(\pm\xi)$, and notice that $\chi_\xi=-\chi_{-\xi}$.
The endpoints $\tilde\gamma_{1}(\pm\xi)$ and $\tilde\gamma_{2}(\pm\xi)$ of the vectors $a_{\pm\xi}$ are contained in disks of radius $2\ensuremath{\varepsilon}$
inside $\mathcal Z_{\pm\xi}$ centered at $\widetilde{\tge}_{\varphi,j}(\pm\xi)$,
$j=1,2$, for $\xi\in [\eta/2,\eta]$.
According to~\eqref{eq:disjoint-disks},
these disks are disjoint, having distance at least $4\ensuremath{\varepsilon}$, so that the usual (unoriented) angle between $\chi_{\pm\xi}$ and $a_{\pm\xi}$
is bounded by $\arcsin\frac{2\ensuremath{\varepsilon}}{4\ensuremath{\varepsilon}}=\frac\pi6$, see Figure~\ref{fig:a-angle}. Consequently, $$ \angle (\nu,a_\xi)\in [\tfrac\pi2-\tfrac\pi6,\tfrac\pi2+\tfrac\pi6]\quad{\textnormal{for all\,\,}}
|\xi|\in [\tfrac{\eta}2,\eta]; $$ hence, according to~\eqref{betadef}, $\beta(-\eta)\in[\tfrac{3\pi}2-\tfrac\pi6,-\tfrac{3\pi}2+\tfrac\pi6]$. For the range $\xi\in [\eta/2,\eta]$ such an explicit statement about $\beta(\xi)$ (taking values in $\ensuremath{\mathbb{R}}$) cannot be made, since the vector $a_\xi$ may have rotated several times while $\xi$ traverses the interval $[-\eta/2,\eta/2]$, but the vector $a_\xi$ points roughly into the direction of $\chi_\xi$ for each $\xi\in [\eta/2,\eta]$, which implies that there is an integer $m\in\ensuremath{\mathbb{Z}}$ (counting those rotations) such that $$ (2m+1)\pi-\tfrac{\pi}3\le \beta(\xi)-\beta(-\xi) \le (2m+1)\pi+\tfrac{\pi}3\quad {\textnormal{for all\,\,}} \xi\in [\eta/2,\eta]. $$ This in turn yields that the rounded value of $(\beta(\xi)-\beta(-\xi))/\pi$ for all $\xi\in [\eta/2,\eta)$, and hence also $b$ equals $(2m+1)$, an odd number.
Therefore, in order to determine the knot type of $\gamma$, we may apply Lemma~\ref{lem:isohandle} to deform $\gamma$ into an ambient isotopic curve $\gamma_{*}$ and analyze that curve instead. By Remark \ref{rem:iso} the previous arguments apply to $\gamma_{*}$ as well, in particular the crucial angle-estimate based on
Figure~\ref{fig:a-angle}. So the rounded value $b_*$ of $\Delta_{\beta_{*}}/\pi$, where the angle $\beta_{*}(\xi)$ is defined by $\mathbf e_{1}\cos\br{\nfrac\varphi2+\beta_{*}(\xi)}+\mathbf e_{3}\sin\br{\nfrac\varphi2+\beta_{*}(\xi)}=\frac{a^*_{\xi}}{\abs{a^*_{\xi}}}$ with $a^*_{\xi}:=\tilde{\gamma}_{*1}(\xi)-\tilde{\gamma}_{*2}(\xi)$, coincides with $b$, since $\gamma_*$ coincides with $\gamma$ in the subcylinder $\mathcal{Z}'$. (By construction we know (see Lemma~\ref{lem:isohandle} and Remark \ref{rem:iso}) that $\gamma_*\cap\mathcal{Z}$ consists of two sub-arcs of $\gamma_{*}$
transversally meeting each fibre $\mathcal{Z}_\xi$, $\xi\in [-\eta,\eta]$, of the cylinder $\mathcal Z$, so the vectors $a^*_\xi$ are well-defined.)
We may now consider the $C^{1}$-mapping
$\mathfrak n:[-\nfrac\eta2,\nfrac\eta2]\to\ensuremath{\mathbb{S}}^{1}$,
$\xi\mapsto\frac{a^*_{\xi}}{\abs{a^*_{\xi}}}\in {\textnormal{span\,}}\{\mathbf e_1,\mathbf e_3\}$.
By Sard's theorem, almost any direction $\tilde\nu\in\ensuremath{\mathbb{S}}^{1}$
is a regular value of $\mathfrak n$, i.e.,
its preimage consists of isolated (thus, by compactness,
finitely many) points
$-\nfrac\eta2\le\xi_{1}<\cdots<\xi_{k}\le\nfrac\eta2$
(so-called regular points)
at which the derivative of $\mathfrak n$ does not vanish.
At these points we face a self-intersection
of the two strands inside $\mathcal Z$
when projecting onto $\tilde\nu^{\perp}$.
Due to $\mathfrak n'(\xi_{j})\ne0$, $j=1,\dots,k$,
each of these points can be identified as either an \emph{overcrossing} ($\overcrossing$) or
an \emph{undercrossing} ($\undercrossing$).
We choose some $\psi\in\ensuremath{\mathbb{R}}/2\pi\ensuremath{\mathbb{Z}}$ arbitrarily close
to $\nfrac\varphi2$, such that
\[ \tilde\nu = \mathbf e_{1}\cos\psi+\mathbf e_{3}\sin\psi \]
is a regular value of $\mathfrak n$.
As $\gamma_{*}$ coincides with $\tge_{\varphi}$ outside $\mathcal Z$, and since $a^*_{\xi}/\abs{a^*_{\xi}}$ is bounded away from the projection line $\ensuremath{\mathbb{R}}\tilde\nu$ for all $\abs\xi\in[\nfrac\eta2,\eta]$, i.e., $\angle(a^*_\xi,\nu)\in [\nfrac\pi2-\nfrac\pi6,\nfrac\pi2+\nfrac\pi6]$, there are no crossings outside $\mathcal Z'$.
In fact, the projection provides a two-braid presentation of the knot
as we see two strands transversally passing through the fibres of a narrow
cylinder in the same direction.
This is a (two-) \emph{braid}.
The fact that the strands' end-points
on one cap of the cylinder are connected to the
end-points on the opposite cap by two ``unlinked'' arcs (outside $\mathcal Z'$)
provides a \emph{closure} of the braid.
The isotopy class of any braid
consisting of $n$ strands is uniquely characterized by
a \emph{braid word}, i.e., an element of the
group $\mathcal B_{n}$ which
is given by $n-1$ generators $\sigma_{1},\dots,\sigma_{n-1}$
and the relations
\begin{equation}\label{eq:braid-rel}
\begin{split}
\sigma_{j}\sigma_{j+1}\sigma_{j}=\sigma_{j+1}\sigma_{j}\sigma_{j+1}
&\quad\text{for }j=1,\dots,n-2, \\
\text{and}\qquad \sigma_{j}\sigma_{k}=\sigma_{k}\sigma_{j}
&\quad\text{for }1\le j<k-1\le n-2,
\end{split}
\end{equation}
see Burde and Zieschang~\cite[Prop.~10.2, 10.3]{BZ}.
Closures of braids (resulting in knots or links)
are ambient isotopic
if and only if the braids are \emph{Markov equivalent}.
The latter means that the braids are connected by
a finite sequence of braids where two consecutive
braids are either conjugate or
related by a \emph{Markov move}, see~\cite[Def.~10.21, Thm.~10.22]{BZ}.
The latter replaces $\mathfrak z\in\mathcal B_{n-1}$
by $\mathfrak z\sigma_{n-1}^{\pm1}$.
In the special case of two braids this condition simplifies
as follows.
As $\mathcal B_{2}$ has only one generator,
namely $\overcrossing$ with $(\overcrossing)^{-1}=\undercrossing$
(in fact $\mathcal B_{2}\cong\ensuremath{\mathbb{Z}}$),
conjugate braids are identical.
As $\mathcal B_{1}=\set1$,
a Markov move can only be applied to the word $1$,
thus proving that the closed one-braid, i.e., the round circle,
is ambient isotopic to the closures of both $\overcrossing$
and $\undercrossing$, which settles the case $b=b_*=\pm 1.$
Assume now $|b|\ge 3$.
The braid represented by $\tilde\gamma_{1}$ and $\tilde\gamma_{2}$
is characterized by the braid word $(\overcrossing)^{k}$
for some $k\in\ensuremath{\mathbb{Z}}$.
If $k$ were even we would arrive at a two-component link
which is impossible.
Any overcrossing $\overcrossing$
is equivalent to half a rotation of $a_{\xi}$ in positive
direction (with respect to the $\mathbf e_{1}$-$\mathbf e_{3}$-plane).
This gives $k= b$.
On the other hand,
one easily checks from~\eqref{eq:torus-knot} that
a $(2,b)$-torus knot
has the braid word $\sigma_{1}^{-b}$, $b\in1+2\ensuremath{\mathbb{Z}}$, $b\ne\pm1$
(cf.~Artin~\cite[p.~56]{artin}). \end{proof}
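The parity observation used above -- an even power of $\overcrossing$ closes up to a two-component link -- can be illustrated by the following minimal Python sketch (not part of the proof), which counts the components of the closure of the two-braid word $(\overcrossing)^{k}$ via the permutation induced on the two strand endpoints.
\begin{verbatim}
# The closure of the two-braid sigma_1^k is a knot (one component)
# precisely for odd k; a purely illustrative sanity check.
def components_of_two_braid_closure(k: int) -> int:
    # sigma_1 swaps the two strands, so sigma_1^k induces the identity
    # permutation for even k and the transposition for odd k.
    perm = (0, 1) if k % 2 == 0 else (1, 0)
    seen, cycles = set(), 0
    for start in range(2):
        if start not in seen:
            cycles += 1
            i = start
            while i not in seen:
                seen.add(i)
                i = perm[i]
    return cycles

assert components_of_two_braid_closure(3) == 1  # closure of sigma_1^3: a knot
assert components_of_two_braid_closure(2) == 2  # closure of sigma_1^2: a two-component link
\end{verbatim}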
\begin{corollary}[Torus knots]\label{cor:braids}
Any embedded $\gamma\in C^{1}\ensuremath{(\R/\ensuremath{\mathbb{Z}},\R^3)}$ with
$\norm{\gamma-\tge_{0}}_{C^{1}} \le \tfrac1{100}$
is either unknotted or belongs to $\tkc$ for some odd $b\ne\pm1$. \end{corollary}
\begin{sketch}
We argue similarly to the preceding argument.
The image of $\tge_{0}$ coincides with the circle of radius $\nfrac1{4\pi}$
in the $\mathbf e_{1}$-$\mathbf e_{2}$-plane centered at $\nfrac{\mathbf e_{1}}{8\pi}$.
Consider the $\frac1{100}$-neighborhood of $\tge_{0}$
fibred by the normal disks of this circle.
Any of these normal disks is transversally met by $\gamma$ in precisely two points.
Consider the Gau{\ss} map $\mathfrak n:\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}}\times\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}}\to\ensuremath{\mathbb{S}}^{2}$,
$(s,t)\mapsto\frac{\gamma(s)-\gamma(t)}{\abs{\gamma(s)-\gamma(t)}}$.
Off the diagonal, this map is well-defined and $C^{1}$.
Sard's lemma gives the existence of some $\nu_{0}\in\ensuremath{\mathbb{S}}^{2}$
arbitrarily close to $\mathbf e_{3}$,
such that any crossing of $\gamma$ in the projection onto $\nu_{0}^{\perp}$
is either an over- or an undercrossing.
Here we face the situation of a deformed cylinder with its caps
glued together.
By stretching and deforming, we arrive at a usual braid representation.
We conclude as before. \end{sketch}
\section{Comparison $(2,b)$-torus knots and energy estimates}\label{sect:torus}
Let $a,b\in\ensuremath{\mathbb{Z}}\setminus\set{-1,0,1}$ be coprime, i.e.,\@ $\gcd(\abs a,\abs b)=1$. The \emph{$(a,b)$-torus knot class} $\tkc[a,b]$ contains the one-parameter family of curves \begin{equation}\label{eq:torus-knot}
\tau_\ensuremath{\varrho}:t\mapsto
\begin{pmatrix}
(1+\ensuremath{\varrho}\cos(bt)) \cos(at) \\
(1+\ensuremath{\varrho}\cos(bt)) \sin(at) \\
\ensuremath{\varrho} \sin(bt)
\end{pmatrix}, \quad t\in\ensuremath{\mathbb{R}}/2\pi\ensuremath{\mathbb{Z}}, \end{equation} where the parameter $\ensuremath{\varrho}\in(0,1)$ can be chosen arbitrarily. For information on torus knots we refer to Burde and Zieschang~\cite[Chapters~3~E, 6]{BZ}.
As $\tkc[-a,b]=\tkc[a,-b]$ \cite[Prop.~3.27]{BZ}, it suffices to consider $a>1$. Since $\tkc[a,-b]$ contains the mirror images of $\tkc[a,b]$ we may also, keeping in mind this symmetry, pass to $b>1$. Note, however, that the latter classes are in fact disjoint, i.e., torus knots are not amphicheiral~\cite[Thm.~3.29]{BZ}.
We will later restrict to $a=2$; in this case $\gcd(2,b)=1$
holds for any odd $b$ with $|b|\ge 3$.
The two (mirror-symmetric) trefoil knot classes coincide with the $(2,\pm3)$-torus knot classes. \begin{figure}
\caption{Plot of $\tau_{1/3}$ for the trefoil knot class $\tkc[2,3]$}
\end{figure}
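For illustration only, the curve shown in the figure can be sampled directly from~\eqref{eq:torus-knot}; the following minimal Python sketch uses the parameter choices $a=2$, $b=3$, $\ensuremath{\varrho}=1/3$ of the caption and produces a point cloud suitable for plotting.
\begin{verbatim}
# Sample the comparison torus knot tau_rho with a = 2, b = 3, rho = 1/3.
import numpy as np

a, b, rho = 2, 3, 1.0 / 3.0
t = np.linspace(0.0, 2.0 * np.pi, 1000)
r = 1.0 + rho * np.cos(b * t)
curve = np.stack([r * np.cos(a * t),      # x-component
                  r * np.sin(a * t),      # y-component
                  rho * np.sin(b * t)],   # z-component
                 axis=1)                  # array of shape (1000, 3)
\end{verbatim}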
The \emph{total curvature} of a given curve $\gamma\in H^2(\ensuremath{\mathbb{R}}/L\ensuremath{\mathbb{Z}},\ensuremath{\mathbb{R}}^3)$, $L>0$, is given by \begin{equation}\label{eq:tc} \TC(\gamma)=\int_{\gamma}\ensuremath{\varkappa}\ensuremath{\,\mathrm{d}} s = \int_{\ensuremath{\mathbb{R}}/L\ensuremath{\mathbb{Z}}} {\ensuremath{\varkappa}} \abs{\gamma'}\ensuremath{\,\mathrm{d}} t = \int_0^{L} \frac{\abs{\gamma''\wedge\gamma'}}{\abs{\gamma'}^2}\ensuremath{\,\mathrm{d}} t
\stackrel{\abs{\gamma'}\equiv1}=
\int_0^{L} {\abs{\gamma''}}\ensuremath{\,\mathrm{d}} t. \end{equation}
The torus knots $\tau_\ensuremath{\varrho}$ introduced in~\eqref{eq:torus-knot} lead to a family of comparison curves that approximate the $a$-times covered circle $\tau_{0}$ with respect to the $C^{k}$-norm for any $k\in\ensuremath{\mathbb{N}}$ as $\ensuremath{\varrho}\searrow0$. Rescaling and reparametrizing to arc-length we obtain $\tilde\tau_\ensuremath{\varrho}\in\ensuremath{\mathscr{C}}({{\tkc[a,b]}})$. As this does not destroy $H^{2}$-convergence~\cite[Thm.~A.1]{reiter:rkepdc}, we find that the arclength parametrization $\tilde\tau_{0}$ of the $a$-times covered circle lies in the (strong) $H^2$-closure of $\ensuremath{\mathscr{C}}({{\tkc[a,b]}})$.
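As a simple consistency check for the limit value in the next lemma, note that the $a$-times covered circle $\tau_{0}$ has $\abs{\tau_{0}'}\equiv a$, curvature $\ensuremath{\varkappa}\equiv1$, and length $\ensuremath{\mathscr{L}}(\tau_{0})=2\pi a$, whence
\[ \ensuremath{E_{\mathrm{bend}}}(\tau_{0})=\int_{\tau_{0}}\ensuremath{\varkappa}^{2}\ensuremath{\,\mathrm{d}} s=2\pi a
\qquad\text{and}\qquad
\ensuremath{E_{\mathrm{bend}}}(\tilde\tau_{0})=\ensuremath{\mathscr{L}}(\tau_{0})\,\ensuremath{E_{\mathrm{bend}}}(\tau_{0})=(2\pi a)^{2}, \]
using the scaling behaviour $\ensuremath{E_{\mathrm{bend}}}(r\gamma)=r^{-1}\ensuremath{E_{\mathrm{bend}}}(\gamma)$ recalled in the proof of Lemma~\ref{lem:tau-estimate} below.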
\begin{lemma}[Bending energy estimate for comparison torus knots]\label{lem:tau-estimate}
There is a constant $C=C(a,b)$ such that
\begin{equation}\label{eq:tau-estimate}
\ensuremath{E_{\mathrm{bend}}}(\tilde\tau_\ensuremath{\varrho})\le (2\pi a)^2 + C\ensuremath{\varrho}^2 \qquad\text{for all }\ensuremath{\varrho}\in\sq{0,\tfrac{a}{4\sqrt{a^2+b^2}}}\,.
\end{equation} \end{lemma}
\begin{proof}
We begin with computing the first derivatives of $\tau_\ensuremath{\varrho}$,
\begin{align}
\tau_\ensuremath{\varrho}'(t)
&=
\begin{pmatrix}
-b\ensuremath{\varrho}\sin(bt)\cos(at)-a(1+\ensuremath{\varrho}\cos(bt))\sin(at) \\
-b\ensuremath{\varrho}\sin(bt)\sin(at)+a(1+\ensuremath{\varrho}\cos(bt))\cos(at) \\
b\ensuremath{\varrho}\cos(bt)
\end{pmatrix}, \notag\\
\tau_\ensuremath{\varrho}''(t)
&= -
\begin{pmatrix}
\br{a^2+(a^2+b^2)\ensuremath{\varrho}\cos(bt)} \cos(at) - 2ab\ensuremath{\varrho}\sin(bt)\sin(at) \\
\br{a^2+(a^2+b^2)\ensuremath{\varrho}\cos(bt)} \sin(at) + 2ab\ensuremath{\varrho}\sin(bt)\cos(at) \\
b^2\ensuremath{\varrho}\sin(bt)
\end{pmatrix}, \notag\\
\abs{\tau_\ensuremath{\varrho}'(t)}^2
&=
b^2\ensuremath{\varrho}^2 + a^2\br{1+\ensuremath{\varrho}\cos(bt)}^2\notag \\
&= a^2 + 2a^2\cos(bt)\ensuremath{\varrho} + \br{b^2+a^2\cos^2(bt)}\ensuremath{\varrho}^2, \label{first-deriv-taurho}\\
\abs{\tau_\ensuremath{\varrho}''(t)}^2
&=
a^4 + 2a^2(a^2+b^2)\cos(bt)\ensuremath{\varrho} + \sq{(4a^2+b^2)b^{2} + a^{2}(a^2-2b^2)\cos^{2}(bt)}\ensuremath{\varrho}^2,\notag \\
\sp{\tau_\ensuremath{\varrho}''(t),\tau_\ensuremath{\varrho}'(t)}
&=
-a^2b\ensuremath{\varrho}\sin(bt)\br{1+\ensuremath{\varrho}\cos(bt)},\notag
\end{align}
which by means of the Lagrange identity implies
\begin{align*}
\ensuremath{E_{\mathrm{bend}}}(\tau_\ensuremath{\varrho})
&=
\int_0^{2\pi}\frac{\abs{\tau_\ensuremath{\varrho}''(t)\wedge\tau_\ensuremath{\varrho}'(t)}^2}{\abs{\tau_\ensuremath{\varrho}'(t)}^5}\ensuremath{\,\mathrm{d}} t
=\int_0^{2\pi}\frac{\abs{\tau_\ensuremath{\varrho}''(t)}^2\abs{\tau_\ensuremath{\varrho}'(t)}^2-\sp{\tau_\ensuremath{\varrho}''(t),\tau_\ensuremath{\varrho}'(t)}^2}{\abs{\tau_\ensuremath{\varrho}'(t)}^5}\ensuremath{\,\mathrm{d}} t \\
&=\int_0^{2\pi}\br{\frac{\abs{\tau_\ensuremath{\varrho}''(t)}^2}{\abs{\tau_\ensuremath{\varrho}'(t)}^3}-\frac{\sp{\tau_\ensuremath{\varrho}''(t),\tau_\ensuremath{\varrho}'(t)}^2}{\abs{\tau_\ensuremath{\varrho}'(t)}^5}}\ensuremath{\,\mathrm{d}} t.
\end{align*}
As \begin{equation}\label{abl-reg} \abs{\tau_\ensuremath{\varrho}'(t)}^2 \ge \tfrac{7}{16}a^2\quad\textnormal{ because $(a^{2}+b^{2})\ensuremath{\varrho}^{2}\le \tfrac1{16}a^2$ and $2a^{2}\ensuremath{\varrho}\le \tfrac12a^{2}$} \end{equation}
the subtrahend in $\ensuremath{E_{\mathrm{bend}}}(\tau_\ensuremath{\varrho})$
and the third term in the expression for $|\tau_\ensuremath{\varrho}''|^2$ are bounded by $C\ensuremath{\varrho}^2$ uniformly in~$t$.
Expanding $(1+z)^{-3/2} = 1-\tfrac32z+\mathcal{O}(z^2)$ as $z\to 0$
we derive
\[ |\tau_\ensuremath{\varrho}'(t)|^{-3}
= {\br{b^2\ensuremath{\varrho}^2 + a^2\br{1+\ensuremath{\varrho}\cos(bt)}^2}^{-3/2}}
= \frac1{a^{3}} - \frac{3}{a^{3}}\cos\br{bt}\ensuremath{\varrho} + \mathcal O(\ensuremath{\varrho}^{2}), \]
which yields
\begin{align*}
\ensuremath{E_{\mathrm{bend}}}(\tau_\ensuremath{\varrho}) = &\int_0^{2\pi}\br{a^4 + 2a^2(a^2+b^2)\cos(bt)\ensuremath{\varrho}}\br{\frac1{a^{3}} - \frac{3}{a^{3}}\cos\br{bt}\ensuremath{\varrho} + \mathcal O(\ensuremath{\varrho}^{2})}\ensuremath{\,\mathrm{d}} t + \mathcal{O}(\ensuremath{\varrho}^2)\\
= &\int_0^{2\pi}\br{a + \tfrac{2b^2-a^2}{a}\cos\br{bt}\ensuremath{\varrho}}\ensuremath{\,\mathrm{d}} t + \mathcal{O}(\ensuremath{\varrho}^2)=2\pi a+ \mathcal{O}(\ensuremath{\varrho}^2)\quad\textnormal{as $\ensuremath{\varrho}\searrow 0$.}
\end{align*}
Finally, in order to pass to $\tilde\tau_\ensuremath{\varrho}$, we recall that $\ensuremath{E_{\mathrm{bend}}}$ is invariant under reparametrization
and $\ensuremath{E_{\mathrm{bend}}}(r\gamma)=r^{-1}\ensuremath{E_{\mathrm{bend}}}(\gamma)$ for $r>0$. So the claim follows for $r:=\ensuremath{\mathscr{L}}( \tau_\ensuremath{\varrho})^{-1}$ by
\begin{align*}
\ensuremath{\mathscr{L}}(\tau_\ensuremath{\varrho})
&= \int_0^{2\pi} \abs{\tau_\ensuremath{\varrho}'(t)}\ensuremath{\,\mathrm{d}} t
= \int_0^{2\pi} \sqrt{b^2\ensuremath{\varrho}^2 + a^2\br{1+\ensuremath{\varrho}\cos(bt)}^2}\ensuremath{\,\mathrm{d}} t \\
&= \int_0^{2\pi} \br{a + a\cos(bt)\ensuremath{\varrho} + \mathcal O(\ensuremath{\varrho}^2)}\ensuremath{\,\mathrm{d}} t
= 2\pi a + \mathcal O(\ensuremath{\varrho}^2)\quad\textnormal{as $\ensuremath{\varrho}\to 0$.}
\end{align*} \end{proof}
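The closed-form expressions for $\abs{\tau_\ensuremath{\varrho}'}^{2}$, $\sp{\tau_\ensuremath{\varrho}'',\tau_\ensuremath{\varrho}'}$, and $\abs{\tau_\ensuremath{\varrho}''}^{2}$ used in the preceding proof can be double-checked symbolically. The following minimal Python sketch (assuming the library \texttt{sympy} is available) is purely a sanity check and not part of the argument.
\begin{verbatim}
# Symbolic verification of the derivative identities from the proof above.
import sympy as sp

t, rho = sp.symbols('t rho', real=True)
a, b = sp.symbols('a b', positive=True)
tau = sp.Matrix([(1 + rho*sp.cos(b*t))*sp.cos(a*t),
                 (1 + rho*sp.cos(b*t))*sp.sin(a*t),
                 rho*sp.sin(b*t)])
d1, d2 = tau.diff(t), tau.diff(t, 2)

claim_d1 = b**2*rho**2 + a**2*(1 + rho*sp.cos(b*t))**2
claim_sp = -a**2*b*rho*sp.sin(b*t)*(1 + rho*sp.cos(b*t))
claim_d2 = (a**4 + 2*a**2*(a**2 + b**2)*rho*sp.cos(b*t)
            + ((4*a**2 + b**2)*b**2
               + a**2*(a**2 - 2*b**2)*sp.cos(b*t)**2)*rho**2)

assert sp.simplify(sp.expand(d1.dot(d1) - claim_d1)) == 0
assert sp.simplify(sp.expand(d2.dot(d1) - claim_sp)) == 0
assert sp.simplify(sp.expand(d2.dot(d2) - claim_d2)) == 0
\end{verbatim}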
\begin{proposition}[Ropelength estimate for comparison torus knots]\label{prop:torus-rate} We have \begin{equation*}
\ensuremath{\mathcal R}(\tilde\tau_\ensuremath{\varrho}) \le \frac C\ensuremath{\varrho} \qquad
\text{for }0<\ensuremath{\varrho}\ll1, \end{equation*} where $C$ is a constant depending only on $a$ and $b$. \end{proposition}
\begin{proof}
The invariance of $\ensuremath{\mathcal R}$ under reparametrization, scaling, and translation
implies $\ensuremath{\mathcal R}(\tilde\tau_\ensuremath{\varrho})=\ensuremath{\mathcal R}(\tau_\ensuremath{\varrho})$.
According to the computations in the proof of Lemma~\ref{lem:tau-estimate} above, the
squared curvature of~$\tau_{\ensuremath{\varrho}}$
satisfies
\[ \frac{\abs{\tau_\ensuremath{\varrho}''(t)\wedge\tau_\ensuremath{\varrho}'(t)}^{2}}{\abs{\tau_\ensuremath{\varrho}'(t)}^6}
\le \frac{\abs{\tau_\ensuremath{\varrho}''(t)}^2}{\abs{\tau_\ensuremath{\varrho}'(t)}^4} \]
which is uniformly bounded, independently of $\ensuremath{\varrho}$ and $t$, as long as $\ensuremath{\varrho}$ is in the range required in~\eqref{eq:tau-estimate}; we denote by $\ensuremath{\varkappa}_{0}>0$ a corresponding uniform bound on the curvature of $\tau_{\ensuremath{\varrho}}$ (combine \eqref{first-deriv-taurho} with \eqref{abl-reg}).
By $\ensuremath{\mathscr{L}}(\tau_{\ensuremath{\varrho}}|_{[s,t]})$ we denote the length
of the shorter subarc of~$\tau_{\ensuremath{\varrho}}$ connecting the points
$\tau_{\ensuremath{\varrho}}(s)$
and $\tau_{\ensuremath{\varrho}}(t)$.
As $\abs{\tau_{\ensuremath{\varrho}}'}$ uniformly converges to $a$ as $\ensuremath{\varrho}\to 0$ (see \eqref{first-deriv-taurho}),
we may find some $\ensuremath{\varrho}_{0}\in\left(0,\tfrac{a}{4\sqrt{a^2+b^2}}\right]$ such that
$a\abs{s-t}_{\ensuremath{\mathbb{R}}/2\pi\ensuremath{\mathbb{Z}}}\ge\frac12\ensuremath{\mathscr{L}}(\tau_{\ensuremath{\varrho}}|_{[s,t]})$
for any $\ensuremath{\varrho}\in(0,\ensuremath{\varrho}_{0}]$.
The estimate
\begin{equation}\label{eq:lower-dist-bound}
\abs{\tau_{\ensuremath{\varrho}}(s)-\tau_{\ensuremath{\varrho}}(t)} \ge c\ensuremath{\varrho}
\quad\text{for any }s,t\in\ensuremath{\mathbb{R}}/2\pi\ensuremath{\mathbb{Z}} \text{ with }
\abs{s-t}_{\ensuremath{\mathbb{R}}/2\pi\ensuremath{\mathbb{Z}}}\ge\tfrac1{20a\ensuremath{\varkappa}_{0}}
\text{ and }\ensuremath{\varrho}\in(0,\ensuremath{\varrho}_{0}]
\end{equation}
which will be proven below for some uniform $c>0$ now implies
\[ \abs{\tau_{\ensuremath{\varrho}}(s)-\tau_{\ensuremath{\varrho}}(t)} \ge c\ensuremath{\varrho}
\quad\text{for any }s,t\in\ensuremath{\mathbb{R}}/2\pi\ensuremath{\mathbb{Z}} \text{ with }
\mathscr L(\tau_\ensuremath{\varrho}|_{[s,t]})\ge\tfrac1{10\ensuremath{\varkappa}_0}
\text{ and }\ensuremath{\varrho}\in(0,\ensuremath{\varrho}_{0}]. \]
As $\ensuremath{\mathscr{L}}(\tau_{\ensuremath{\varrho}})$ is also uniformly bounded, we may apply
Lemma~\ref{lem:qtb} below, which yields the desired estimate.
It remains to verify~\eqref{eq:lower-dist-bound}.
Recall that the limit curve $\tau_{0}$ is an $a$-times covered circle
parametrized by uniform speed $a$. Fix $t\in\ensuremath{\mathbb{R}}/2\pi\ensuremath{\mathbb{Z}}$ and
consider the image of~$\tau_{\ensuremath{\varrho}}(s)$ restricted to
$s\in\ensuremath{\mathbb{R}}/2\pi\ensuremath{\mathbb{Z}}$ with
$\abs{s-t}_{\ensuremath{\mathbb{R}}/\frac{2\pi}a\ensuremath{\mathbb{Z}}}<\frac\pi{2ab}$,
i.e.\@
$s\in t+\frac{2\pi}a\ensuremath{\mathbb{Z}}+(-\frac\pi{2ab},\frac\pi{2ab})$,
see Figure~\ref{fig:torus-reg}.
It consists of $a$ disjoint arcs (on the $\ensuremath{\varrho}$-torus,
in some neighborhood of $\tau_{0}(t)$).
The associated angle function~$s\mapsto bs$ (see~\eqref{eq:torus-knot})
sweeps out \emph{disjoint} regions of length $\frac\pi a$ on $\ensuremath{\mathbb{R}}/2\pi\ensuremath{\mathbb{Z}}$, one for each of
those arcs.
These regions have angular distance at least $\frac\pi{a}$ from one another,
which on the surface of the $\ensuremath{\varrho}$-torus
leads to a uniform positive lower bound on $\frac1\ensuremath{\varrho}\abs{\tau_{\ensuremath{\varrho}}(s)-\tau_{\ensuremath{\varrho}}(t)}$
for $s\in\ensuremath{\mathbb{R}}/2\pi\ensuremath{\mathbb{Z}}$ where
$\abs{s-t}_{\ensuremath{\mathbb{R}}/\frac{2\pi}a\ensuremath{\mathbb{Z}}}<\frac\pi{2ab}$ but
$\abs{s-t}_{\ensuremath{\mathbb{R}}/2\pi\ensuremath{\mathbb{Z}}}>\frac\pi{2ab}$.
The latter restriction reflects the fact that each arc has zero
distance to itself.
\begin{figure}\label{fig:torus-reg}
\end{figure}
If, on the other hand,
$\abs{s-t}_{\ensuremath{\mathbb{R}}/\frac{2\pi}a\ensuremath{\mathbb{Z}}}\ge\lambda:=\min\br{\frac\pi{2ab},\frac1{20a\ensuremath{\varkappa}_{0}}}$,
we may derive
\begin{align*}
\abs{\tau_{\ensuremath{\varrho}}(s)-\tau_{\ensuremath{\varrho}}(t)}
\ge\abs{\tau_{0}(s)-\tau_{0}(t)} - 2\ensuremath{\varrho}
&\ge2\sin\br{\tfrac{a}{2}\abs{s-t}_{\ensuremath{\mathbb{R}}/2\pi\ensuremath{\mathbb{Z}}}}-2\ensuremath{\varrho}_{0} \\
&\ge2\sin\br{\tfrac{a}{2}\abs{s-t}_{\ensuremath{\mathbb{R}}/\frac{2\pi}a\ensuremath{\mathbb{Z}}}}-2\ensuremath{\varrho}_{0}
\ge2\sin{\tfrac{a\lambda}{2}}-2\ensuremath{\varrho}_{0}.
\end{align*}
Diminishing $\ensuremath{\varrho}_{0}$ if necessary, the right-hand side is bounded below by a positive constant for all
$\ensuremath{\varrho}\in(0,\ensuremath{\varrho}_{0}]$; since $\ensuremath{\varrho}<1$, this in particular yields~\eqref{eq:lower-dist-bound} for a suitable uniform $c>0$. \end{proof}
\begin{lemma}[Quantitative thickness bound]\label{lem:qtb}
Let $\gamma\in C^{2}\ensuremath{(\R/\ensuremath{\mathbb{Z}},\R^3)}$ be a regular curve with
uniformly bounded curvature $\ensuremath{\varkappa}\le\ensuremath{\varkappa}_{0}$, $\ensuremath{\varkappa}_{0}>0$, and
assume that
\[ \delta := \inf\sett{\abs{\gamma(s)-\gamma(t)}}{s,t\in\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}},\mathscr L(\gamma|_{[s,t]})\ge\tfrac1{10\ensuremath{\varkappa}_0}} \]
is positive
where $\ensuremath{\mathscr{L}}(\gamma|_{[s,t]})$ denotes the length of the shorter sub-arc of~$\gamma$
joining the points $\gamma(s)$ and $\gamma(t)$.
Then $\triangle[\gamma]\ge\min\br{\tfrac\delta2,\frac1{\ensuremath{\varkappa}_{0}}}>0$, thus
\[ \ensuremath{\mathcal R}(\gamma)\le{\max\br{\tfrac2\delta,\ensuremath{\varkappa}_0}}{\mathscr L(\gamma)}<\infty. \] \end{lemma}
\begin{proof}
As the quantities in the statement do not depend on the actual
parametrization, and since distances, thickness, and the reciprocal of the
curvature are positively homogeneous of degree one under rescaling,
there is no loss of generality in assuming arc-length parametrization.
According to~\cite[Thm.~1]{lsdr}, the thickness equals
the minimum of $\nfrac1{\max\ensuremath{\varkappa}}\ge\nfrac1{\ensuremath{\varkappa}_{0}}$
and one half of the \emph{doubly critical self-distance}, that is,
the infimum over all distances $\abs{\gamma(s)-\gamma(t)}$
where $s,t\in\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}}$ satisfy $\gamma'(s)\perp\gamma(s)-\gamma(t)\perp\gamma'(t)$.
By our assumption we only need to show that the doubly critical self-distance is not attained on the parameter range where
$\ensuremath{\mathscr{L}}(\gamma|_{[s,t]})=\abs{s-t}_{\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}}}<\tfrac1{10\ensuremath{\varkappa}_0}$.
To this end we show that any angle between
$\gamma'(t)$ and $\gamma(s)-\gamma(t)$
is smaller than $\frac\pi6<\frac\pi2$ if $\abs{s-t}_{\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}}}\le\tfrac1{10\ensuremath{\varkappa}_0}$.
We obtain for $w:=\abs{s-t}_{\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}}}\in\sq{0,\frac1{10\ensuremath{\varkappa}_{0}}}$
(note that $\frac1{10\ensuremath{\varkappa}_{0}}<\frac12$ by Fenchel's theorem)
\begin{align*}
\abs{\sp{\gamma(s)-\gamma(t),\gamma'(t)}}
&= \abs{(s-t) + (s-t)^{2}\int_{0}^{1}(1-\ensuremath{\vartheta})\sp{\gamma''(t+\ensuremath{\vartheta}(s-t)),\gamma'(t)}\ensuremath{\,\mathrm{d}}\ensuremath{\vartheta}} \\
&\ge w\br{1-\ensuremath{\varkappa}_{0}w} \ge \tfrac9{10}w.
\end{align*}
Using the Lipschitz continuity of~$\gamma$, this yields
for the angle $\ensuremath{\alpha}\in[0,\frac\pi2]$ between the lines parallel to
$\gamma(s)-\gamma(t)$ and $\gamma'(t)$
\[ \cos\ensuremath{\alpha}
= \frac{\abs{\sp{\gamma(s)-\gamma(t),\gamma'(t)}}}{\abs{\gamma(s)-\gamma(t)}}
\ge \tfrac9{10} > \tfrac12\sqrt3=\cos\tfrac\pi6
\qquad\Longrightarrow\qquad\ensuremath{\alpha}<\tfrac\pi6. \] \end{proof}
Combining Lemma~\ref{lem:tau-estimate} with the previous ropelength estimate we can use the comparison torus knots to obtain non-trivial growth estimates on the total energy $\ensuremath{E_\ensuremath{\vartheta}}$ and on the ropelength $\ensuremath{\mathcal R}$ of minimizers $\gamma_\ensuremath{\vartheta}$.
\begin{proposition}[Total energy growth rate for minimizers]\label{prop:estimate-torus}
For $a,b\in\ensuremath{\mathbb{Z}}\setminus\set{-1,0,1}$ with $\gcd(\abs a,\abs b)=1$ there is a positive constant $C=C(a,b)$ such that
any sequence $\seq[\ensuremath{\vartheta}>0]\ensuremath{\g_\ensuremath{\vartheta}}$ of $E_\ensuremath{\vartheta}$-minimizers in $\ensuremath{\mathscr{C}}(\mathcal{T}(a,b))$ satisfies
\begin{equation}\label{eq:energy-growth-rate}
\ensuremath{E_\ensuremath{\vartheta}}(\ensuremath{\g_\ensuremath{\vartheta}}) \le (2a\pi)^{2}+C{\ensuremath{\vartheta}^{2/3}}
\end{equation}
and if $a=2$
\begin{equation}\label{eq:ropelength-growth-rate}
\ensuremath{\mathcal R}(\ensuremath{\g_\ensuremath{\vartheta}}) \le C{\ensuremath{\vartheta}^{-1/3}}.
\end{equation}
\end{proposition}
\begin{proof}
The first claim immediately follows by $\ensuremath{E_\ensuremath{\vartheta}}(\ensuremath{\g_\ensuremath{\vartheta}})\le\ensuremath{E_\ensuremath{\vartheta}}(\tilde{\tau}_{\ensuremath{\vartheta}^{1/3}})$ from Lemma~\ref{lem:tau-estimate} and
Proposition~\ref{prop:torus-rate}.
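Spelling this out, and recalling that the total energy is $\ensuremath{E_\ensuremath{\vartheta}}=\ensuremath{E_{\mathrm{bend}}}+\ensuremath{\vartheta}\ensuremath{\mathcal R}$, we have for $0<\ensuremath{\vartheta}\ll1$
\[ \ensuremath{E_\ensuremath{\vartheta}}(\ensuremath{\g_\ensuremath{\vartheta}})\le\ensuremath{E_\ensuremath{\vartheta}}(\tilde\tau_{\ensuremath{\vartheta}^{1/3}})
=\ensuremath{E_{\mathrm{bend}}}(\tilde\tau_{\ensuremath{\vartheta}^{1/3}})+\ensuremath{\vartheta}\,\ensuremath{\mathcal R}(\tilde\tau_{\ensuremath{\vartheta}^{1/3}})
\le(2\pi a)^{2}+C\ensuremath{\vartheta}^{2/3}+C\ensuremath{\vartheta}\cdot\ensuremath{\vartheta}^{-1/3}, \]
which gives~\eqref{eq:energy-growth-rate} after renaming the constant.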
The classic F\'ary--Milnor theorem applied to the knotted curve $\gamma_\ensuremath{\vartheta}$, combined with the Cauchy--Schwarz inequality $\ensuremath{E_{\mathrm{bend}}}(\ensuremath{\g_\ensuremath{\vartheta}})\ge\TC(\ensuremath{\g_\ensuremath{\vartheta}})^{2}$, gives
$(4\pi)^{2} + \ensuremath{\vartheta}\ensuremath{\mathcal R}(\ensuremath{\g_\ensuremath{\vartheta}}) \le \ensuremath{E_\ensuremath{\vartheta}}(\ensuremath{\g_\ensuremath{\vartheta}})$.
Now the first estimate for $a=2$ implies the second one. \end{proof}
\section{Crookedness estimate and the elastic $(2,b)$-torus knot\\ Proofs of the main theorems}\label{sec:5}
In his seminal article~\cite{milnor} on the F\'ary--Milnor theorem, Milnor derived the lower bound for the total curvature of knotted curves by studying the \emph{crookedness} of a curve and relating it to the total curvature. For a \emph{regular} curve $\gamma\in C^{0,1}\ensuremath{(\R/\ensuremath{\mathbb{Z}},\R^3)}$, i.e., a Lipschitz-continuous mapping $\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}}\to\ensuremath{\mathbb{R}}^{3}$ which is not constant on any open subset of $\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}}$, the crookedness of~$\gamma$ is the infimum over all $\nu\in\ensuremath{\mathbb{S}}^{2}$ of \[ \mu(\gamma,\nu) := \#\sett{t_{0}\in\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}}}{t_{0} \textnormal{ is a local maximizer of } t\mapsto\sp{\gamma(t),\nu}_{\ensuremath{\mathbb{R}}^{3}}}. \] We briefly cite the main properties; proofs can be found in~\cite[Sect.~3]{milnor}.
\begin{proposition}[Crookedness]\label{prop:crook}
Let $\gamma\in C^{0,1}\ensuremath{(\R/\ensuremath{\mathbb{Z}},\R^3)}$ be a regular curve. Then
\[ \TC(\gamma) = \tfrac12\int_{\ensuremath{\mathbb{S}}^{2}}\mu(\gamma,\nu)\ensuremath{\,\mathrm{d}}\area(\nu). \]
Any partition of $\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}}$ gives rise to an inscribed regular polygon $p\in C^{0,1}\ensuremath{(\R/\ensuremath{\mathbb{Z}},\R^3)}$
with $\mu(p,\nu)\le\mu(\gamma,\nu)$ for all directions $\nu\in\ensuremath{\mathbb{S}}^{2}$ that are
not perpendicular to some edge of~$p$. \end{proposition}
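For instance, for a planar convex curve $\gamma$ one has $\mu(\gamma,\nu)=1$ for $\area$-a.e.\@ $\nu\in\ensuremath{\mathbb{S}}^{2}$, so the first formula yields $\TC(\gamma)=\tfrac12\area(\ensuremath{\mathbb{S}}^{2})=2\pi$, the equality case in Fenchel's theorem.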
The following statement is the heart of the argument for Theorem~\ref{thm:main}.
\begin{lemma}[Crookedness estimate]\label{lem:crook}
Let $\varphi\in(0,\pi]$,
\[ \zeta := \tfrac12\min\br{\tfrac{1}{8\pi}\sin\nfrac\varphi4,\tfrac1{96\pi}}, \]
and $\gamma\in C^{1,1}\ensuremath{(\R/\ensuremath{\mathbb{Z}},\R^3)}$ be
a non-trivially knotted, arclength parametrized curve
with
\begin{equation}\label{eq:g-phi}
\norm{\gamma-\tge_{\varphi}}_{C^{1}}\le\delta
\end{equation}
where $\delta\equiv\delta_{\ensuremath{\varepsilon}}>0$ and $\ensuremath{\varepsilon}\equiv\ensuremath{\varepsilon}_{\zeta}>0$
are defined in~\eqref{eq:lgr} and~\eqref{eq:epsilon}.
Then the set
\[ \mathcal B(\gamma) := \sett{\nu\in\ensuremath{\mathbb{S}}^{2}}{\mu(\gamma,\nu)\ge3} \]
is measurable with respect to the two-dimensional Hausdorff measure $\ensuremath{\mathscr{H}}^2$ on~$\ensuremath{\mathbb{S}}^{2}$ and satisfies
\[ \area{(\mathcal B(\gamma))} \ge
\tfrac\pi{16}\cdot\frac{\varphi}{\ensuremath{\mathcal R}(\gamma)}. \] \end{lemma}
\begin{proof}
As to the measurability of $\mathcal B(\gamma)$,
we may consider the two-dimensional Hausdorff measure $\area$
on $\ensuremath{\mathbb{S}}^{2}\setminus\mathscr N$
where $\mathscr N$ denotes the set of measure zero where $\mu(\gamma,\cdot)$
is infinite.
From the fact
that $\mu(\gamma,\cdot)$ is $\area$-a.e.\@ lower semi-continuous on $\ensuremath{\mathbb{S}}^{2}
\setminus\mathscr N$
if $\gamma\in C^{1}$,
we infer that $\sett{\nu\in\ensuremath{\mathbb{S}}^{2}\setminus\mathscr N}{\mu(\gamma,\nu)\le2}$ is closed,
thus measurable.
Therefore, its complement (which coincides with $\mathcal B(\gamma)$
up to a set of measure zero) is also measurable.
As $\gamma$ is embedded and $C^{1,1}$, its thickness
$\triangle[\gamma]=\nfrac1{\ensuremath{\mathcal R}(\gamma)}$ is positive.
We consider the diffeomorphism
$\Phi:(0,\pi)\times\ensuremath{\mathbb{R}}/2\pi\ensuremath{\mathbb{Z}}\to\ensuremath{\mathbb{S}}^{2}\setminus\set{\pm\mathbf e_{2}}$,
\[ (\ensuremath{\vartheta},\psi)\mapsto
\begin{pmatrix}-\sin\ensuremath{\vartheta}\sin\psi\\\cos\ensuremath{\vartheta}\\\sin\ensuremath{\vartheta}\cos\psi \end{pmatrix}. \]
We aim at showing that, for a.e.\@ $\psi\in[\nfrac\varphi4,\nfrac{3\varphi}4]$,
there is some sub-interval $J_{\psi}\subset(\tfrac\pi6,\tfrac{5\pi}6)$
with $\abs{J_{\psi}}\ge\tfrac\pi4\triangle[\gamma]$ and
$\Phi(\ensuremath{\vartheta},\psi)\in\mathcal B(\gamma)$ for any $\ensuremath{\vartheta}\in J_{\psi}$.
In this case we have
\[ \area(\mathcal B(\gamma))
= \iint_{\ensuremath{\mathbb{S}}^{2}\setminus\set{\pm\mathbf e_{2}}}\chi_{\mathcal B(\gamma)}\ensuremath{\,\mathrm{d}}\area
=\int_{0}^{\pi}\int_{\ensuremath{\mathbb{R}}/2\pi\ensuremath{\mathbb{Z}}}\chi_{\Phi^{-1}(\mathcal B(\gamma))}\sin\ensuremath{\vartheta}\ensuremath{\,\mathrm{d}}\psi\ensuremath{\,\mathrm{d}}\ensuremath{\vartheta}
\ge\tfrac\pi{16}\varphi\triangle[\gamma]. \]
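In more detail, since $\sin\ensuremath{\vartheta}\ge\tfrac12$ for $\ensuremath{\vartheta}\in(\tfrac\pi6,\tfrac{5\pi}6)$, the claimed properties of $J_{\psi}$ give
\[ \int_{\nfrac\varphi4}^{\nfrac{3\varphi}4}\int_{J_{\psi}}\sin\ensuremath{\vartheta}\ensuremath{\,\mathrm{d}}\ensuremath{\vartheta}\ensuremath{\,\mathrm{d}}\psi
\ge\frac\varphi2\cdot\frac\pi4\triangle[\gamma]\cdot\frac12
=\frac\pi{16}\,\varphi\,\triangle[\gamma]. \]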
For given $\varphi\in (0,\pi]$ let
\[ \tilde{\zeta} := 2\zeta = \min\br{\tfrac{1}{8\pi}\sin\nfrac\varphi4,\tfrac1{96\pi}} \]
and denote the corresponding cylinder by $\widetilde{\mathcal Z}$,
cf.\@ Section~\ref{sec:class}.
Fix an arbitrary $\psi\in[\nfrac\varphi4,\nfrac{3\varphi}4]$.
The orthogonal projection onto $\nu_{\psi}^{\perp}$ for
\[ \nu_{\psi} := \cos\psi\mathbf e_{1}+\sin\psi\mathbf e_{3}
= \Phi(\tfrac\pi2,\psi-\tfrac\pi2) \]
will be denoted by $\P$.
Note that $\nu_{\psi}\perp\Phi(\ensuremath{\vartheta},\psi)$ for any $\ensuremath{\vartheta}\in(0,\pi)$.
First we note that the half circles $h$ of $\tge_{\varphi}$ away from the cylinder $\widetilde{\mathcal Z}$,
namely $\tge_{\varphi}$ restricted to $[\tfrac18,\tfrac38]$ and $[\tfrac58,\tfrac78]$, respectively,
do not interfere with $\widetilde{\mathcal Z}$ when projected onto $\nu_{\psi}^{\perp}$;
more precisely, $\P h\cap\P\widetilde{\mathcal Z}=\emptyset$.
To see this, consider the projection $\Pi$ onto $\mathbf e_{2}^{\perp}$,
see Figure~\ref{fig:e2}.
The two segments connecting $\tge_{\varphi}(\tfrac18)$
to $\tge_{\varphi}(\tfrac38)$, and $\tge_{\varphi}(\tfrac58)$
to $\tge_{\varphi}(\tfrac78)$, respectively, are mapped onto
points $P,P'\in\mathbf e_2^\perp$ under the projection $\Pi$,
and $P$ and $P'$ separate $\Pi(h)$ and $\Pi(\widetilde{\mathcal Z})$.
From Figure~\ref{fig:e2} we read off that no projection line of $\P$ meets both $\widetilde{\mathcal Z}$ and $h$, since otherwise such a projection line (parallel to $\nu_\psi$ and orthogonal to $\mathbf e_{2}$ by definition), projected onto $\mathbf e_2^\perp$ under $\Pi$, would intersect both $\Pi(\widetilde{\mathcal Z})$ and $\Pi(h)$, which is impossible.
\begin{figure}
\caption{Under the projection $\Pi$ onto $\mathbf e_{2}^{\perp}$,
the cylinder $\widetilde{\mathcal Z}$ is mapped to a disk.
Each line enclosing an angle smaller than $\beta_{0}$
with $\nu$ cannot meet both this disk and (the line) $\Pi(h)$.
Note that $\beta_{0}=\nfrac\varphi2-\alpha$
and $\sin\ensuremath{\alpha}=8\pi\zeta\le\tfrac12\sin\nfrac\varphi4<\sin\nfrac\varphi4$,
so $\ensuremath{\alpha}<\nfrac\varphi4$ and $\beta_{0}>\nfrac\varphi4$.}
\label{fig:e2}
\end{figure}
\begin{figure}
\caption{Projection onto $\nu_{\psi}^{\perp}$.}
\label{fig:ellipses}
\end{figure}
Now we consider the projection $\P$, see Figure~\ref{fig:ellipses}.
By construction there is (in the projection) one ellipse-shaped component
on each side of the line $\ensuremath{\mathbb{R}}\mathbf e_{2}$.
As shown in~\eqref{eq:axis-angle},
the angle between $\tge_{\varphi}'$ and $\mathbf e_{2}$
is bounded by~$\tfrac\pi{10}$ on $|\xi|\le \tfrac1{16\sqrt{3}\pi}$.
Proceeding as in~\eqref{eq:arccos}
and using~\eqref{eq:g-phi}, we arrive at
\begin{align*}
\angle\br{\tilde\gamma_{j}',\widetilde{\tge}_{\varphi,j}'}
&\le \arccos\frac{\sp{\tilde\gamma_{j}',\widetilde{\tge}_{\varphi,j}'}}{\abs{\tilde\gamma_{j}'}\abs{\widetilde{\tge}_{\varphi,j}'}}
\le \arccos\frac{\abs{\widetilde{\tge}_{\varphi,j}'}-\ensuremath{\varepsilon}}{\abs{\widetilde{\tge}_{\varphi,j}'}+\ensuremath{\varepsilon}}
\;\refeq{arccos-x}\le\sqrt{\frac{2\pi\ensuremath{\varepsilon}}{\abs{\widetilde{\tge}_{\varphi,j}'}+\ensuremath{\varepsilon}}} \\
&\le\sqrt{2\pi\ensuremath{\varepsilon}}\;\refeq{epsilon}
\le\sqrt{\tfrac1{10}\sqrt{\tfrac1{8\cdot96}}}
< 0.1
< \tfrac\pi{10}, \qquad j=1,2.
\end{align*}
Therefore the secant defined by two points of $\gamma$
inside $\widetilde{\mathcal Z}$,
either both on $\tilde\gamma_{1}$, or both on $\tilde\gamma_{2}$,
always encloses with $\mathbf e_{2}$ an angle of at most $\tfrac\pi5$,
and the same holds true for the projection onto $\nu_{\psi}^{\perp}$.
Now we pass to the cylinder $\mathcal Z$
corresponding to $\zeta = \nfrac{\tilde{\zeta}}2$.
Note that the distance of $\partial\mathcal Z$ to $\partial\widetilde{\mathcal Z}$
is bounded below by $\min\br{\zeta,\br{\sqrt2-1}\sqrt{\nfrac\zeta{8\pi}}}
\ge\zeta$
and that
\begin{equation}\label{eq:zeta-thick}
\zeta\ge\triangle[\gamma]
\end{equation}
for otherwise the two strands of $\gamma$, which pass through $\mathcal Z$ by Lemma~\ref{lem:zylinder},
would not fit into $\mathcal Z$.
Applying Proposition~\ref{prop:braids} (using Sard's theorem) and assuming $b\ge3$ (the case $b\le-3$
being symmetric; recall that $b=\pm1$ leads to the unknot)
there are for a.e.\@ $\psi\in[\nfrac\varphi4,\nfrac{3\varphi}4]$ points
\[ -\eta<\xi_{A}<\xi_{E}<\xi_{B}<\xi_{F}<\xi_{C}<\eta \]
such that
$a_{\xi_X}$ is parallel to $\nu_{\psi}$ (thus $\P a_{\xi_X}=0$)
for $X\in\set{A,B,C}$ while $a_{\xi_X}$ is perpendicular
to $\nu_{\psi}$ for $X\in\set{E,F}$ and
\[ \beta(\xi_{A}) + \pi = \beta(\xi_{B}) = \beta(\xi_{C}) - \pi. \]
We claim
\begin{equation}\label{eq:thickness-estimate}
\abs{a_{\xi_{E}}},\abs{a_{\xi_{F}}}\ge2\triangle[\gamma].
\end{equation}
To see this, consider the map $[-\eta,\eta]^{2}\to(0,1)$,
$(\xi_{1},\xi_{2})\mapsto\abs{\tilde\gamma_{1}(\xi_{1})-\tilde\gamma_{2}(\xi_{2})}$.
There is at least one global minimizer $(\tilde\xi_{1},\tilde\xi_{2})$,
and consequently we have $\tilde\gamma_{1}'(\tilde\xi_{1})
\perp \tilde\gamma_{1}(\tilde\xi_{1})-\tilde\gamma_{2}(\tilde\xi_{2}) \perp \tilde\gamma_{2}'(\tilde\xi_{2})$.
For any $\tilde\ensuremath{\varepsilon}>0$ we may choose some $\tilde\xi_{\tilde\ensuremath{\varepsilon}}\in[-\eta,\eta]$
close to $\tilde\xi_{1}$ such that the radius $\tilde\ensuremath{\varrho}_{\tilde\ensuremath{\varepsilon}}$ of the circle
passing through $\tilde\gamma_{1}(\tilde\xi_{1})$,
$\tilde\gamma_{2}(\tilde\xi_{2})$, $\tilde\gamma_{1}(\tilde\xi_{\tilde\ensuremath{\varepsilon}})$
satisfies
$\abs{\tilde\gamma_{1}(\tilde\xi_{1})-\tilde\gamma_{2}(\tilde\xi_{2})}
\ge2\tilde\ensuremath{\varrho}_{\tilde\ensuremath{\varepsilon}}-\tilde\ensuremath{\varepsilon}\ge2\triangle[\gamma]-\tilde\ensuremath{\varepsilon}$
(cf.\@ Litherland et al.~\cite[Proof of Thm.~3]{lsdr}).
We let
\begin{align*}
A&:=\P\tilde\gamma_{1}(\xi_{A})=\P\tilde\gamma_{2}(\xi_{A}), &
E&:=\P\tilde\gamma_{1}(\xi_{E}), &
E'&:=\P\tilde\gamma_{2}(\xi_{E}), \\
B&:=\P\tilde\gamma_{1}(\xi_{B})=\P\tilde\gamma_{2}(\xi_{B}), &
F&:=\P\tilde\gamma_{1}(\xi_{F}), &
F'&:=\P\tilde\gamma_{2}(\xi_{F}), \\
C&:=\P\tilde\gamma_{1}(\xi_{C})=\P\tilde\gamma_{2}(\xi_{C}),
\end{align*}
and denote the secant through $A$ and $C$ by $g$
which defines two half planes, $G$ and $G'$.
As shown before, this line encloses an angle of
at most $\nfrac\pi5$ with the $\mathbf e_{2}$-axis.
Therefore, the line orthogonal to and bisecting $\overline{AC}$
is spanned by $\Phi(\ensuremath{\vartheta}_{0},\psi)\in\nu_{\psi}^{\perp}\cap\ensuremath{\mathbb{S}}^{2}$ for
some $\ensuremath{\vartheta}_{0}\in[\nfrac{3\pi}{10},\nfrac{7\pi}{10}]$ and meets $\P\gamma$
outside $\P\mathcal Z$ in two points, $D\in G$, $D'\in G'$.
We arrange the labels of these half planes so that $AEFCDAE'F'CD'A$ is a closed
polygon inscribed in $\P\gamma$.
We aim at showing that, for some inscribed (sub-) polygon~$p$,
there is some sub-interval $J_{\psi}\subset(\tfrac\pi6,\tfrac{5\pi}6)$
with $\abs{J_{\psi}}\ge\tfrac\pi4\triangle[\gamma]$ and
\[ 3\le\mu(p,\Phi(\ensuremath{\vartheta},\psi))<\infty \quad\text{for all }
\ensuremath{\vartheta}\in J_{\psi}. \]
The definition of $\mu(\cdot,\cdot)$ implies
that the corresponding polygon inscribed in the (non-projected)
curve $\gamma_{*}$ also satisfies the latter estimate.
We construct~$p$ as follows.
We distinguish three cases which are depicted in Figure~\ref{fig:cases}.
\begin{figure}
\caption{Three cases in the drawing plane $\nu_\psi^\perp$
discussed in the proof of Lemma~\ref{lem:crook}. Intersecting the blue
shaded regions leads to the angular region $J_\psi$ in each case.}
\label{fig:cases}
\end{figure}
If
\begin{equation}\label{eq:crookcase}
\dist(E,g),\dist(F',g)\ge\tfrac{\triangle[\gamma]}2\qquad\text{and}\qquad E,F'\in G
\end{equation}
then, using~\eqref{eq:zeta-thick}, the polygon
$AECDAF'CD'A$ has three local maxima at $E$, $D$, and $F'$
when projected onto any $\Phi(\ensuremath{\vartheta},\psi)\in\ensuremath{\mathbb{S}}^{2}$ with
$\abs{\ensuremath{\vartheta}-\ensuremath{\vartheta}_{0}}\le\arctan {\triangle[\gamma]}$. As $\arctan x\ge\tfrac\pi4x$ for $x\in[0,1]$,
the set $J_{\psi}:=(\ensuremath{\vartheta}_{0}-\tfrac\pi4\triangle[\gamma],\ensuremath{\vartheta}_{0}+\tfrac\pi4\triangle[\gamma])$ is contained in $(\tfrac\pi6,\tfrac{5\pi}6)$
due to $\tfrac\pi4\triangle[\gamma]\refeq{zeta-thick}\le\tfrac\pi4\zeta
\le\tfrac1{8\cdot96}<0.0014$
and satisfies $\abs{J_{\psi}}=\tfrac\pi2\triangle[\gamma]$.
Now assume that \emph{neither} $E$ nor $F'$ meets the (first) conditions in~\eqref{eq:crookcase}; then, by~\eqref{eq:thickness-estimate},
\[ \dist(F,g),\dist(E',g)\ge\tfrac{\triangle[\gamma]}2\qquad\text{and}\qquad E',F\in G'. \]
But this is symmetric to~\eqref{eq:crookcase}
since the polygon $AFCDAE'CD'A$
has three local \emph{minima} at $E'$, $D'$, and $F$
when projected onto any $\Phi(\ensuremath{\vartheta},\psi)\in\ensuremath{\mathbb{S}}^{2}$ with
$\abs{\ensuremath{\vartheta}-\ensuremath{\vartheta}_{0}}\le\arctan\triangle[\gamma]$.
Employing the same argument as for~\eqref{eq:crookcase},
this yields the desired estimate, since for a closed polygon the number of local maxima and the number of local minima in a fixed direction agree.
Finally we have to deal with the ``mixed case''.
Without loss of generality we may assume
\[ \dist(E,g),\dist(F,g)\ge\tfrac{\triangle[\gamma]}2\qquad\text{and}\qquad E\in G,F\in G'. \]
But now
the polygon
$EFDACD'E$ has three local maxima at $E$, $D$, and $C$
when projected onto any $\Phi(\ensuremath{\vartheta},\psi)\in\ensuremath{\mathbb{S}}^{2}$ with
$\abs{\ensuremath{\vartheta}-\ensuremath{\vartheta}_{0}}\le\arctan\triangle[\gamma]$
and $\sp{\Phi(\ensuremath{\vartheta},\psi),C-A}>0$,
which completes the proof with $\abs{J_{\psi}}=\tfrac\pi4\triangle[\gamma]$. \end{proof}
\begin{proof}[Theorem~\ref{thm:main}] Let us now assume that $\tge_{\varphi}$, $\varphi\in (0,\pi]$, is the $C^1$-limit of $\ensuremath{E_\ensuremath{\vartheta}}$-minimizers~$\br{\ensuremath{\g_\ensuremath{\vartheta}}}_{\ensuremath{\vartheta}>0}$ as $\ensuremath{\vartheta}\to 0$. In light of Proposition~\ref{prop:estimate-torus} for $a=2$, Proposition~\ref{prop:crook}, and Lemma~\ref{lem:crook} we arrive at \begin{align*}
C\ensuremath{\vartheta}^{2/3}\;&\refeq{energy-growth-rate}{\ge}\; \ensuremath{E_{\mathrm{bend}}}(\ensuremath{\g_\ensuremath{\vartheta}})-(4\pi)^{2} = \int_{\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}}}\abs{\ensuremath{\ddg_\ensuremath{\vartheta}}}^{2}\ensuremath{\,\mathrm{d}} t-(4\pi)^{2}
\ge \br{\int_{\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}}}\abs{\ensuremath{\ddg_\ensuremath{\vartheta}}}\ensuremath{\,\mathrm{d}} t}^{2}-(4\pi)^{2} \\
&=\TC(\ensuremath{\g_\ensuremath{\vartheta}})^{2}-(4\pi)^{2}\ge 4\pi\br{\TC(\ensuremath{\g_\ensuremath{\vartheta}})-4\pi} = 2
\pi\int_{\ensuremath{\mathbb{S}}^{2}}\br{\mu\br{\ensuremath{\g_\ensuremath{\vartheta}},\nu}-2}\ensuremath{\,\mathrm{d}} A(\nu) \\
&\ge 2\pi\area\br{\mathcal B(\ensuremath{\g_\ensuremath{\vartheta}})}\ge \frac {\varphi\pi^2}{8\ensuremath{\mathcal R}(\ensuremath{\g_\ensuremath{\vartheta}})} \overset{\eqref{eq:ropelength-growth-rate}}{\ge} c\ensuremath{\vartheta}^{1/3} \end{align*} for some positive constant $c$ depending on $\varphi$ and $\ensuremath{\vartheta}>0$ sufficiently small, which is a contradiction as~$\ensuremath{\vartheta}\searrow0$. Thus we have proven that the limit curve $\gamma_0\in\scl[\tkc]$ is isometric to the twice covered circle~$\tge_0$. \end{proof}
Our Main Theorem now reads as follows.
\begin{theorem}[Two bridge torus knot classes]\label{thm:elastic-shapes} For any knot class $\ensuremath{\mathcal{K}}$ the following statements are equivalent. \begin{enumerate} \item\label{item:class} $\ensuremath{\mathcal{K}}$ is the $(2,b)$-torus knot class for some odd integer $b\in\ensuremath{\mathbb{Z}}$ where $\abs b\ge3$; \item\label{item:inf}
$\inf_{\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})}\ensuremath{E_{\mathrm{bend}}}=(4\pi)^2$; \item\label{item:C1} $\tge_{\varphi}$ belongs to the $C^{1}$-closure of $\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$ for some $\varphi\in[0,\pi]$ and $\ensuremath{\mathcal{K}}$ is not trivial; \item\label{item:some-elastic} for some $\varphi\in[0,\pi]$ the pair of tangentially intersecting circles $\tge_{\varphi}$ is an elastic knot for $\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$; \item\label{item:0-elastic} the unique elastic knot for $\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$ is the twice covered circle $\tge_{0}$. \end{enumerate} \end{theorem}
\begin{proof}[Theorem~\ref{thm:elastic-shapes}] \ref{item:class} $\Rightarrow$ \ref{item:inf} follows from Theorem~\ref{thm:fm} and the estimate in Lemma~\ref{lem:tau-estimate} for the comparison torus curves defined in~\eqref{eq:torus-knot}.
\ref{item:inf} $\Rightarrow$ \ref{item:C1}: As the bending energy of the circle amounts to $(2\pi)^{2}$, the knot class cannot be trivial. From Proposition~\ref{prop:constcurv} we infer that any elastic knot (which exists and belongs to the $C^1$-closure of $\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$ according to Theorem~\ref{thm:existence-limit}) has constant curvature $4\pi$ a.e. Proposition \ref{prop:nonembedded} guarantees that any such elastic knot must have double points, which permits us to apply Corollary~\ref{cor:tg8}. Thus, such an elastic knot coincides (up to isometry) with a tangential pair of circles $\tge_\varphi$ for some $\varphi\in [0,\pi]$, which therefore lies in the $C^1$-closure of $\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$ as desired.
\ref{item:C1} $\Rightarrow$ \ref{item:class} is an immediate consequence of Proposition~\ref{prop:braids} and Corollary~\ref{cor:braids}.
Hence, so far we have shown the equivalence of the first three items.
\begin{figure}\label{fig:hinge}
\end{figure}
\ref{item:class} $\Rightarrow$ \ref{item:0-elastic} is just the assertion of Theorem~\ref{thm:main}.
\ref{item:0-elastic} $\Rightarrow$ \ref{item:some-elastic} is immediate.
\ref{item:some-elastic} $\Rightarrow$ \ref{item:class}: Since elastic knots for $\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$ lie in the $C^1$-closure of $\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$ by Theorem \ref{thm:existence-limit} we find by Proposition~\ref{prop:braids} for $\varphi\in (0,\pi]$, and by Corollary~\ref{cor:braids} for $\varphi =0$ that $\ensuremath{\mathcal{K}}=\ensuremath{\mathcal{T}}(2,b)$ for some odd $b\in\ensuremath{\mathbb{Z}}$, $\abs b\ge3$, or that $\ensuremath{\mathcal{K}}$ is trivial. The latter can be excluded since the unique elastic unknot is the once-covered circle by Proposition~\ref{prop:nonembedded}. \end{proof} \begin{remark}[Strong $C^k$-closure] Using an explicit construction like the one indicated in Figure \ref{fig:hinge} one can show as well that each item above is also equivalent to the following:
\emph{For every $\varphi\in[0,\pi]$ the corresponding tangential pair of circles $\tge_{\varphi}$ belongs to the $C^{k}$-closure of $\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$ for any $k\in\ensuremath{\mathbb{N}}$, and $\ensuremath{\mathcal{K}}$ is not trivial.} \end{remark}
\begin{appendix}
\section{The F\'ary--Milnor theorem for the $C^1$-closure of the knot class}
\begin{theorem}[Extending the F\'ary--Milnor theorem to the $C^1$-closure of $\mathcal C(\ensuremath{\mathcal{K}})$]\label{thm:fm}
Let $\ensuremath{\mathcal{K}}$ be a non-trivial knot class and let
$\gamma$ belong to the closure of $\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}})$ with respect to the $C^1$-norm.
Then the total curvature~\eqref{eq:tc} satisfies
\begin{equation}\label{eq:fm}
\TC(\gamma)\ge4\pi.
\end{equation} \end{theorem}
We begin with two auxiliary tools and abbreviate, for any two vectors $X,Y\in\ensuremath{\mathbb{R}}^{3}$, \[ \overrightarrow{XY} := Y-X. \]
\begin{lemma}[Approximating tangents]\label{lem:apprtang} Let
$\seqn{\gamma}\subset C^1(\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}},\ensuremath{\mathbb{R}}^3)$ be a sequence of
embedded arclength parametrized closed curves
converging with respect to the $C^1$-norm to some limit curve~$\gamma $.
Assume that there are sequences
$\seqn x,\seqn y\subset [0,1)$ of parameters
satisfying $x_k<y_k$ for all $k\in\ensuremath{\mathbb{N}}$ and
$x_k,y_k\to z\in [0,1)$ as $k\to\infty$. Then one has for
$X_k:=\gamma_k(x_k)$, $Y_k:=\gamma_k(y_k)$
\begin{equation}\label{eq:direconv}
\uvector{X_kY_k}=\frac{Y_k-X_k}{|Y_k-X_k|}
\xrightarrow{k\to\infty} \gamma'(z)\in\mathbb S^2.
\end{equation} \end{lemma}
\begin{proof}
Since all $\gamma_k$ are injective we find that $X_k\not= Y_k$ for $k\in\ensuremath{\mathbb{N}}$, so that the unit vectors $\uvector{X_kY_k}$ are well-defined, and a subsequence (still denoted by $\uvector{X_kY_k}$) converges to some limit unit vector $d\in\ensuremath{\mathbb{S}}^2$ because $\ensuremath{\mathbb{S}}^2$ is compact. It suffices to show that $d=\gamma'(z)$ to conclude that the \emph{whole} sequence converges to $\gamma'(z),$ since any other subsequence of the $\uvector{X_kY_k}$ with limit $\tilde{d}\in\ensuremath{\mathbb{S}}^2$ satisfies $\tilde{d}=\gamma'(z)$ as well.
We compute
\begin{align}\label{deviation}
\begin{split}
&{\uvector{X_kY_k}-\frac{\gamma(y_k)-\gamma(x_k)}{y_k-x_k}}
={\frac{\gamma_k(y_k)-\gamma_k(x_k)}{\abs{\gamma_k(y_k)-\gamma_k(x_k)}}-\frac{\gamma(y_k)-\gamma(x_k)}{y_k-x_k}} \\
&=\frac{\int_0^1\gamma'_k(x_k+\theta(y_k-x_k))\ensuremath{\,\mathrm{d}}\theta}{\abs{\int_0^1\gamma'_k(x_k+\theta(y_k-x_k))\ensuremath{\,\mathrm{d}}\theta}}-\int_0^1\gamma'(x_k+\theta(y_k-x_k))\ensuremath{\,\mathrm{d}}\theta \\
&=\br{\frac1{\abs{\int_0^1\gamma'_k(x_k+\theta(y_k-x_k))\ensuremath{\,\mathrm{d}}\theta}}-1}{\int_0^1\gamma'_k(x_k+\theta(y_k-x_k))\ensuremath{\,\mathrm{d}}\theta} + {} \\
&\qquad{}+\int_0^1(\gamma'_k-\gamma')(x_k+\theta(y_k-x_k))\ensuremath{\,\mathrm{d}}\theta \\
&=\frac{1-\abs{\int_0^1\gamma'_k(x_k+\theta(y_k-x_k))\ensuremath{\,\mathrm{d}}\theta}}{\abs{\int_0^1\gamma'_k(x_k+\theta(y_k-x_k))\ensuremath{\,\mathrm{d}}\theta}}{\int_0^1\gamma'_k(x_k+\theta(y_k-x_k))\ensuremath{\,\mathrm{d}}\theta} + {} \\
&\qquad{}+\int_0^1(\gamma'_k-\gamma')(x_k+\theta(y_k-x_k))\ensuremath{\,\mathrm{d}}\theta.
\end{split}
\end{align}
From the assumptions we infer
\begin{align*}
1&\ge\abs{\int_0^1\gamma'_k(x_k+\theta(y_k-x_k))\ensuremath{\,\mathrm{d}}\theta} \\
&\ge
1-2\|\gamma_k'-\gamma'\|_{L^\infty}-\abs{\int_0^1(\gamma'(x_k+\theta(y_k-x_k))-\gamma'(x_k))
\ensuremath{\,\mathrm{d}}\theta}\\
&\ge 1-o(1)\quad\textnormal{as $k\to\infty$,}
\end{align*}
so that in particular $\big|\int_0^1\gamma'_k(x_k+\theta(y_k-x_k))\ensuremath{\,\mathrm{d}}\theta\big|\ge 1/2$
for $k\gg 1$. Inserting these two facts into \eqref{deviation}
yields
\begin{align*}
&\abs{\uvector{X_kY_k}-\frac{\gamma(y_k)-\gamma(x_k)}{y_k-x_k}}
\le 2\abs{1-\Big|\int_0^1\gamma'_k(x_k+\theta(y_k-x_k))\ensuremath{\,\mathrm{d}}\theta\Big|} + \lnorm[\infty]{\gamma'_k-\gamma'}=o(1)
\end{align*}
as $k\to\infty$. Since $\frac{\gamma(y_k)-\gamma(x_k)}{y_k-x_k}=\int_0^1\gamma'(x_k+\theta(y_k-x_k))\ensuremath{\,\mathrm{d}}\theta\to\gamma'(z)$ by the continuity of $\gamma'$ and $x_k,y_k\to z$, we conclude $d=\gamma'(z)$. \end{proof}
\begin{lemma}[Regular curves cannot stop]\label{lem:stop}
Let $\gamma\in C^1\ensuremath{(\R/\ensuremath{\mathbb{Z}},\R^3)}$ be regular, i.e., $|\gamma'|>0$ on $\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}}$, and let
$0\le x<y<1$ such that $\gamma(x)=\gamma(y)$.
Then there is some $z\in(x,y)$ such that $\gamma(x)\ne\gamma(z)\ne\gamma(y)$. \end{lemma}
\begin{proof}
Assuming the contrary leads to a situation where $\gamma$ is constant on an interval of positive measure,
i.e.\@ $\gamma'$ vanishes there, contradicting $\abs{\gamma'} >0$. \end{proof}
\begin{proof}[Theorem~\ref{thm:fm}]
Let $\seqn{\gamma}$ be a sequence of knots in~$\ensuremath{\mathscr{C}}(\ensuremath{\mathcal{K}}) $ converging with respect to the $C^1$-norm to some limit curve~$\gamma$.
By Denne's theorem~\cite[Main Theorem, p. 6]{denne},
each knot $\gamma_k$, $k\in\ensuremath{\mathbb{N}}$, has an alternating quadrisecant,
i.e., there are numbers
$0\le a_k<b_k<c_k<d_k<1$ such that the points $A_k:=\gamma_k(a_k)$, $B_k:=\gamma_k(b_k)$, $C_k:=\gamma_k(c_k)$, $D_k:=\gamma_k(d_k)$ are collinear
and appear in the order $A$, $C$, $B$, $D$ on the quadrisecant line, see Figure~\ref{fig:fm1}.
(Note that our labeling of these points differs from that in~\cite{denne}.)
\begin{figure}
\caption{An alternating quadrisecant}
\label{fig:fm1}
\end{figure}
Without loss of generality we may assume $a_k\equiv0$. By compactness, we may pass to a subsequence (without relabelling)
such that $(a_k,b_k,c_k,d_k)$ converges to $(a,b,c,d)$ as $k\to\infty$. Of course,
\begin{equation*}
0=a\le b\le c\le d\le 1.
\end{equation*}
Note that some or all of the corresponding points $A,B,C,D\in\ensuremath{\mathbb{R}}^3$, which are still collinear, may coincide and that $1\equiv0$ in $\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}}$.
By Milnor~\cite[Theorem 2.2]{milnor} the total curvature of~$\gamma$ is bounded from below by the total curvature of any inscribed (closed) polygon.
For each possible location of the points $A,B,C,D$ we estimate the total curvature of $\gamma$ by means of suitably chosen inscribed polygons.
\renewcommand{\labelenumi}{\arabic{enumi}.}
\renewcommand{\labelenumii}{\arabic{enumi}.\arabic{enumii}.}
\renewcommand{\labelenumiii}{\arabic{enumi}.\arabic{enumii}.\arabic{enumiii}.}
\renewcommand{\labelenumiv}{\arabic{enumi}.\arabic{enumii}.\arabic{enumiii}.\arabic{enumiv}.}
\renewcommand{\theenumi}{\arabic{enumi}}
\renewcommand{\theenumii}{\arabic{enumii}}
\renewcommand{\theenumiii}{\arabic{enumiii}}
\renewcommand{\theenumiv}{\arabic{enumiv}}
\makeatletter\renewcommand\p@enumiii{\arabic{enumi}.\arabic{enumii}.}\makeatother
\begin{enumerate}[font=\fontseries \bfdefault \selectfont \boldmath, align=left, leftmargin=0pt, labelindent=\parindent, listparindent=\parindent, labelwidth=0pt, itemindent=1 ex]
\item If $\abs{BC}>0$ the polygon $ABCDA$ inscribed in $\gamma$ is non-degenerate in the sense that $0=a<b<c<d<1$.
All exterior angles of $ABCDA$ equal $\pi$ and sum up to~$4\pi$; hence
\eqref{eq:fm} holds.
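Indeed, since the four points lie on the quadrisecant line in the order $A$, $C$, $B$, $D$, consecutive edges of the polygon $ABCDA$ point in opposite directions along that line,
\begin{equation*}
\uvector{AB}=-\uvector{BC}=\uvector{CD}=-\uvector{DA},
\end{equation*}
so each of the four exterior angles equals $\pi$.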
\item $\abs{BC}=0$, i.e.\@ $B=C$
\begin{enumerate}[font=\fontseries \bfdefault \selectfont \boldmath, align=left, leftmargin=0pt, labelindent=\parindent, listparindent=\parindent, labelwidth=0pt, itemindent=1 ex]
\item $\min\br{\abs{AC},\abs{BD}}>0$
\begin{enumerate}[font=\fontseries \bfdefault \selectfont \boldmath, align=left, leftmargin=0pt, labelindent=\parindent, listparindent=\parindent, labelwidth=0pt, itemindent=1 ex]
\item\label{item:approx} If $b=c$, i.e.\@ there is no loop of $\gamma$ between~$B$ and~$C$, we may consider the polygon
$AB_\ensuremath{\varepsilon} C_\ensuremath{\varepsilon} DA$ with $B_\ensuremath{\varepsilon}:=\gamma(b-\ensuremath{\varepsilon})$ and $C_\ensuremath{\varepsilon}:=\gamma(c+\ensuremath{\varepsilon})$.
Note that, in general, the point $B=C$ does not belong to the polygon, see Figure~\ref{fig:fm2}.
By arc-length parametrization, $B_\ensuremath{\varepsilon}$ and $C_\ensuremath{\varepsilon}$ cannot coincide for $0<\ensuremath{\varepsilon}\ll1$.
\begin{figure}
\caption{(left) Situation~\ref{item:approx},
(right) Situation~\ref{item:auxpoint}.}
\label{fig:fm2}
\label{fig:fm3}
\end{figure}
We deduce that $\uvector{B_\ensuremath{\varepsilon} C_\ensuremath{\varepsilon}}$
converges to $\gamma'(b)$ as $\ensuremath{\varepsilon}\to 0$. From Lemma~\ref{lem:apprtang} we infer that $\gamma'(b)$ points towards~$A$
(due to $\uvector{B_k A_k}=\uvector{B_k C_k}\to\gamma'(b)$ and the order $A_kC_kB_kD_k$ of the approximating points on the respective quadrisecant).
Therefore, since $B_\ensuremath{\varepsilon}\to B$ and $C_\ensuremath{\varepsilon}\to C$ as $\ensuremath{\varepsilon}\to 0$, for given $\delta>0$ we obtain some $\ensuremath{\varepsilon}_\delta>0$ such that
$\angle\br{\overrightarrow{AB_\ensuremath{\varepsilon}},\overrightarrow{B_\ensuremath{\varepsilon} C_\ensuremath{\varepsilon}}}>\pi-\delta$,
$\angle\br{\overrightarrow{B_\ensuremath{\varepsilon} C_\ensuremath{\varepsilon}},\overrightarrow{C_\ensuremath{\varepsilon} D}}>\pi-\delta$, $\angle\br{\overrightarrow{C_\ensuremath{\varepsilon} D},\overrightarrow{D A}}>\pi-\delta$, and $\angle\br{\overrightarrow{D A},\overrightarrow{A B_\ensuremath{\varepsilon}}}>\pi-\delta$
for all $\ensuremath{\varepsilon}\in(0,\ensuremath{\varepsilon}_\delta]$. We arrive at a lower estimate of $4\pi-4\delta$ for the total curvature of the polygon $AB_\ensuremath{\varepsilon} C_\ensuremath{\varepsilon} DA$
which is a lower bound for the total curvature of~$\gamma$. Letting $\delta\searrow0$ yields the desired estimate \eqref{eq:fm}.
\item\label{item:auxpoint} If $b<c$ there is a loop of $\gamma$ between~$B$ and~$C$ according to Lemma \ref{lem:stop} such that we may choose some $m_{bc}\in(b,c)$ with
$B=C\ne M_{BC}:=\gamma(m_{bc})$. The total curvature of $\gamma$ is bounded below by the total curvature of the polygon $ABM_{BC}CDA$.
Let $\mu:=\angle\br{\overrightarrow{AB},\overrightarrow{BM_{BC}}}$.
We obtain
\begin{align*}
\angle\br{\overrightarrow{AB},\overrightarrow{BM_{BC}}} &= \mu, &
\angle\br{\overrightarrow{BM_{BC}},\overrightarrow{M_{BC}C}} &= \pi, &
\angle\br{\overrightarrow{M_{BC}C},\overrightarrow{CD}} &= \pi-\mu, \\
\angle\br{\overrightarrow{CD},\overrightarrow{DA}} &= \pi, &
\angle\br{\overrightarrow{DA},\overrightarrow{AB}} &= \pi.
\end{align*}
These exterior angles sum to $\mu+\pi+(\pi-\mu)+\pi+\pi=4\pi$, so \eqref{eq:fm} holds in this case as well.
\end{enumerate}
\item $\abs{AC}=0$, $\abs{BD}>0$, i.e.\@ $A=B=C\ne D$
\begin{enumerate}[font=\fontseries \bfdefault \selectfont \boldmath, align=left, leftmargin=0pt, labelindent=\parindent, listparindent=\parindent, labelwidth=0pt, itemindent=1 ex]
\item\label{item:contrad}The situation $a=b=c$ cannot occur as applying Lemma~\ref{lem:apprtang} twice
(and recalling the order $A_kC_kB_kD_k$ of the approximating points on the respective quadrisecant) yields $\gamma'(b)=-\gamma'(b)$
contradicting $\abs{\gamma'(b)}=1$.
\item\label{item:approx-auxpoint} If $a=b<c$ we may simultaneously apply the techniques from~\ref{item:approx}
and~\ref{item:auxpoint} considering the polygon $A_\ensuremath{\varepsilon} BB_\ensuremath{\varepsilon} M_{BC}CDA_\ensuremath{\varepsilon}$ with
$A_\ensuremath{\varepsilon}:=\gamma(a-\ensuremath{\varepsilon})$ and $B_\ensuremath{\varepsilon}:=\gamma(b+\ensuremath{\varepsilon})$.
Note that, in contrast to~\ref{item:approx}, we inserted the point $B=C=A$ between $A_\ensuremath{\varepsilon}$ and $B_\ensuremath{\varepsilon}$
which is admissible due to $a-\ensuremath{\varepsilon}<a=b<b+\ensuremath{\varepsilon}$.
We obtain the triangles $BB_\ensuremath{\varepsilon} M_{BC}$ and $CDA_\ensuremath{\varepsilon}$. We label the \emph{inner} angles (starting from $B=C$ in each of the two triangles)
by $\zeta_\ensuremath{\varepsilon}$, $\eta_\ensuremath{\varepsilon}$, $\theta_\ensuremath{\varepsilon}$ and $\lambda_\ensuremath{\varepsilon}$, $\mu_\ensuremath{\varepsilon}$, $\nu_\ensuremath{\varepsilon}$ and define
\begin{align*}
\sigma_\ensuremath{\varepsilon} &:= \angle\br{\overrightarrow{A_\ensuremath{\varepsilon} B},\overrightarrow{BB_\ensuremath{\varepsilon}}}, &
\tau &:= \angle\br{\overrightarrow{M_{BC}C},\overrightarrow{CD}},
\end{align*}
see Figure~\ref{fig:fm4}.
\begin{figure}
\caption{(left) Situation~\ref{item:approx-auxpoint}. Note that, in general, the triangles $BB_\ensuremath{\varepsilon} M_{BC}$ and $CDA_\ensuremath{\varepsilon}$
are not coplanar and the angles $\sigma_\ensuremath{\varepsilon}$ and $\tau$ do not belong to any of the planes defined by the triangles.
(right) Situations~\ref{item:spherical} and~\ref{item:3loops}. For the latter one has to replace~$D$ by~$M_{DA}$.}
\label{fig:fm4}
\label{fig:fm5}
\end{figure}
Therefore, the total curvature of the polygon amounts to the sum of \emph{exterior} angles
\begin{equation*}
\sigma_\ensuremath{\varepsilon} + \br{\pi-\eta_\ensuremath{\varepsilon}} + \br{\pi-\theta_\ensuremath{\varepsilon}} + \tau + \br{\pi-\mu_\ensuremath{\varepsilon}} + \br{\pi-\nu_\ensuremath{\varepsilon}}
= \sigma_\ensuremath{\varepsilon} + \tau + 2\pi + \zeta_\ensuremath{\varepsilon} + \lambda_\ensuremath{\varepsilon}.
\end{equation*}
Since $\uvector{A_\ensuremath{\varepsilon} B}$ and $\uvector{BB_\ensuremath{\varepsilon}}$ approximate $\gamma'(b)$,
which is a positive multiple of $\overrightarrow{CD}$ (because $c<d$ for $C\not= D$ which implies $\uvector{C_k D_k}=\uvector{A_k C_k}\to\gamma'(b)=\uvector{C D}=\uvector{B D}$ by Lemma \ref{lem:apprtang}),
we deduce $\sigma_\ensuremath{\varepsilon}\to 0$, $\zeta_\ensuremath{\varepsilon}\to\pi-\tau$ and $\lambda_\ensuremath{\varepsilon}\nearrow\pi$ as $\ensuremath{\varepsilon}\searrow0$
from Lemma~\ref{lem:apprtang}. Consequently, $\sigma_\ensuremath{\varepsilon}+\tau+2\pi+\zeta_\ensuremath{\varepsilon}+\lambda_\ensuremath{\varepsilon}\to\tau+2\pi+(\pi-\tau)+\pi=4\pi$, and \eqref{eq:fm} follows.
\item For the case $a<b=c$ we consider the polygon $AM_{AB}B_\ensuremath{\varepsilon} CC_\ensuremath{\varepsilon} DA$ for $B_\ensuremath{\varepsilon}:=\gamma(b-\ensuremath{\varepsilon})$ and $C_\ensuremath{\varepsilon}:=\gamma(c+\ensuremath{\varepsilon})$
which may be treated similarly to~\ref{item:approx-auxpoint}.
\item\label{item:spherical} If $a<b<c$ we consider the polygon $AM_{AB}BM_{BC}CDA$ applying the technique
from~\ref{item:auxpoint} twice. Defining
\begin{align*}
\xi&:=\angle\br{\overrightarrow{BM_{AB}},\overrightarrow{BM_{BC}}}, &
\eta&:=\angle\br{\overrightarrow{CM_{BC}},\overrightarrow{CD}}, &
\zeta&:=\angle\br{\overrightarrow{AD},\overrightarrow{AM_{AB}}}
\end{align*}
as indicated in Figure~\ref{fig:fm5} we arrive at
\begin{align*}
\angle\br{\overrightarrow{AM_{AB}},\overrightarrow{M_{AB}B}} &= \pi, &
\angle\br{\overrightarrow{M_{AB}B},\overrightarrow{BM_{BC}}} &= \pi-\xi, &
\angle\br{\overrightarrow{BM_{BC}},\overrightarrow{M_{BC}C}} &= \pi, \\
\angle\br{\overrightarrow{M_{BC}C},\overrightarrow{CD}} &= \pi-\eta, &
\angle\br{\overrightarrow{CD},\overrightarrow{DA}} &= \pi, &
\angle\br{\overrightarrow{DA},\overrightarrow{AM_{AB}}} &= \pi-\zeta
\end{align*}
so the total curvature of the polygon amounts to~$6\pi-\xi-\eta-\zeta$.
Consider the unit vectors $u:=\uvector{AM_{AB}}$, $v:=\uvector{AM_{BC}}$, and $w:=\uvector{AD}$ (recall $A=B=C$), so that $\xi=\angle\br{u,v}$, $\eta=\angle\br{v,w}$, and $\zeta=\angle\br{w,u}$.
The spherical triangle inequality, applied to the antipode $-w$, gives
$\angle\br{u,v}\le\angle\br{u,-w}+\angle\br{-w,v}=\br{\pi-\angle\br{u,w}}+\br{\pi-\angle\br{w,v}}$,
that is, we obtain the estimate $\xi+\eta+\zeta\le 2\pi$. Therefore the total curvature of~$\gamma$ is at least $6\pi-\br{\xi+\eta+\zeta}\ge4\pi$, which proves \eqref{eq:fm} in this situation.
\end{enumerate}
\item $\abs{AC}>0$, $\abs{BD}=0$, i.e.\@ $A\ne B=C=D$. This case is symmetric to the preceding one.
\item $\abs{AC}=\abs{BD}=0$, i.e.\@ $A=B=C=D$.
For $\Box_1,\Box_2,\Box_3,\Box_4\in\set{{=},{<}}$ we abbreviate
\begin{equation*}
(\Box_1,\Box_2,\Box_3,\Box_4) := \br{0\equiv a\Box_1b\Box_2c\Box_3d\Box_41\equiv0}.
\end{equation*}
Obviously there are $16$ cases.
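These $16$ cases fall into four groups, treated in turn below: $9$ patterns containing two cyclically neighboring equality signs, $2$ alternating patterns, $4$ patterns with exactly one equality sign, and $1$ pattern without any equality sign; indeed, $9+2+4+1=16$.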
\begin{enumerate}[font=\fontseries \bfdefault \selectfont \boldmath, align=left, leftmargin=0pt, labelindent=\parindent, listparindent=\parindent, labelwidth=0pt, itemindent=1 ex]
\item Impossible cases: as shown in~\ref{item:contrad}, two neighboring equality signs (read cyclically, since $1\equiv0$) cannot arise.
This excludes the following nine situations:
$({=},{=},{=},{=})$,
$({=},{=},{=},{<})$,
$({=},{=},{<},{=})$,
$({=},{=},{<},{<})$,
$({=},{<},{=},{=})$,
$({=},{<},{<},{=})$,
$({<},{=},{=},{=})$,
$({<},{=},{=},{<})$,
$({<},{<},{=},{=})$.
\item\label{item:2loops} Two loops: for the situation $({=},{<},{=},{<})$ we consider the polygon
\begin{equation*}
A_\ensuremath{\varepsilon} BB_\ensuremath{\varepsilon} M_{BC}C_\ensuremath{\varepsilon} DD_\ensuremath{\varepsilon} M_{DA}A_\ensuremath{\varepsilon}
\end{equation*}
with $A_\ensuremath{\varepsilon}:=\gamma(a-\ensuremath{\varepsilon})$, $B_\ensuremath{\varepsilon}:=\gamma(b+\ensuremath{\varepsilon})$, $C_\ensuremath{\varepsilon}:=\gamma(c-\ensuremath{\varepsilon})$, and $D_\ensuremath{\varepsilon}:=\gamma(d+\ensuremath{\varepsilon})$.
We obtain the (in general non-planar) quadrilaterals $BB_\ensuremath{\varepsilon} M_{BC}C_\ensuremath{\varepsilon}$
and $DD_\ensuremath{\varepsilon} M_{DA}A_\ensuremath{\varepsilon}$. Proceeding similarly to~\ref{item:approx-auxpoint},
we label the \emph{inner} angles (starting from $B$)
by $\zeta_\ensuremath{\varepsilon}$, $\eta_\ensuremath{\varepsilon}$, $\theta_\ensuremath{\varepsilon}$, $\iota_\ensuremath{\varepsilon}$ and $\lambda_\ensuremath{\varepsilon}$, $\mu_\ensuremath{\varepsilon}$, $\nu_\ensuremath{\varepsilon}$, $\xi_\ensuremath{\varepsilon}$ and define
\begin{align*}
\sigma_\ensuremath{\varepsilon} &:= \angle\br{\overrightarrow{A_\ensuremath{\varepsilon} B},\overrightarrow{BB_\ensuremath{\varepsilon}}}, &
\tau_\ensuremath{\varepsilon} &:= \angle\br{\overrightarrow{C_\ensuremath{\varepsilon} D},\overrightarrow{DD_\ensuremath{\varepsilon}}},
\end{align*}
see Figure~\ref{fig:fm6}.
\begin{figure}
\caption{(left) Situation~\ref{item:2loops}. Note that both quadrilaterals $BB_\ensuremath{\varepsilon} M_{BC}C_\ensuremath{\varepsilon}$
and $DD_\ensuremath{\varepsilon} M_{DA}A_\ensuremath{\varepsilon}$ are, in general, non-planar.
(right) Situation~\ref{item:4loops}.}
\label{fig:fm6}
\label{fig:fm7}
\end{figure}
For the angular sum in a non-planar quadrilateral we obtain
\begin{align*}
\zeta_\ensuremath{\varepsilon}+\eta_\ensuremath{\varepsilon}+\theta_\ensuremath{\varepsilon}+\iota_\ensuremath{\varepsilon}&\le2\pi, & \lambda_\ensuremath{\varepsilon}+\mu_\ensuremath{\varepsilon}+\nu_\ensuremath{\varepsilon}+\xi_\ensuremath{\varepsilon}&\le2\pi.
\end{align*}
The total curvature of the polygon amounts to the sum of \emph{exterior} angles
\begin{align*}
&\sigma_\ensuremath{\varepsilon} + \br{\pi-\eta_\ensuremath{\varepsilon}} + \br{\pi-\theta_\ensuremath{\varepsilon}} + \br{\pi-\iota_\ensuremath{\varepsilon}}
+ \tau_\ensuremath{\varepsilon} + \br{\pi-\mu_\ensuremath{\varepsilon}} + \br{\pi-\nu_\ensuremath{\varepsilon}} + \br{\pi-\xi_\ensuremath{\varepsilon}} \\
&\ge \sigma_\ensuremath{\varepsilon} + \tau_\ensuremath{\varepsilon} + 2\pi + \zeta_\ensuremath{\varepsilon} + \lambda_\ensuremath{\varepsilon}.
\end{align*}
From Lemma~\ref{lem:apprtang} we infer $\zeta_\ensuremath{\varepsilon},\lambda_\ensuremath{\varepsilon}\nearrow\pi$, so the total curvature of~$\gamma$ is at least $\lim_{\ensuremath{\varepsilon}\searrow0}\br{2\pi+\zeta_\ensuremath{\varepsilon}+\lambda_\ensuremath{\varepsilon}}=4\pi$, and \eqref{eq:fm} follows.
The case $({<},{=},{<},{=})$ is the same argument shifted by one position.
\item\label{item:3loops} Three loops: the case $({<},{<},{<},{=})$ leads to the polygon
\begin{equation*}
AM_{AB}BM_{BC}CM_{CD}D
\end{equation*}
which is treated similarly
to~\ref{item:spherical}; here $\overrightarrow{CM_{CD}}$ plays the r\^ole of $\overrightarrow{CD}$ in~\ref{item:spherical}.
The shifted cases $({=},{<},{<},{<})$, $({<},{=},{<},{<})$, $({<},{<},{=},{<})$ are symmetric.
\item\label{item:4loops} Four loops: As~\ref{item:3loops} in fact works for $({<},{<},{<},{\le})$
it also covers the situation $({<},{<},{<},{<})$. Alternatively we consider the polygon
\begin{equation*}
AM_{AB}BM_{BC}CM_{CD}DM_{DA}A
\end{equation*}
as drawn in Figure~\ref{fig:fm7} with
\begin{align*}
&\angle\br{\overrightarrow{AM_{AB}},\overrightarrow{M_{AB}B}}
=\angle\br{\overrightarrow{BM_{BC}},\overrightarrow{M_{BC}C}}
=\angle\br{\overrightarrow{CM_{CD}},\overrightarrow{M_{CD}D}}\\
&=\angle\br{\overrightarrow{DM_{DA}},\overrightarrow{M_{DA}A}}
=\pi.
\end{align*}
so these four exterior angles alone already contribute $4\pi$ to the total curvature of the inscribed polygon, and \eqref{eq:fm} follows.
\end{enumerate}
\end{enumerate}
\end{enumerate} \end{proof}
\end{appendix}
\end{document}
\begin{document}
\title{$S_n$-Equivariant Sheaves and Koszul Cohomology} \begin{abstract} We give a new interpretation of Koszul cohomology, which is equivalent under the Bridgeland-King-Reid equivalence to Voisin's Hilbert scheme interpretation in dimensions 1 and 2, but is different in higher dimensions. As an application, we prove that the dimension of $K_{p,q}(B,L)$ is a polynomial in $d$ for $L=dA+P$ with $A$ ample and $d$ large enough. \end{abstract} \section{Introduction} The Koszul cohomology of a line bundle $L$ on an algebraic variety $X$ was introduced by Green in \cite{Green}. Koszul cohomology is very closely related to the syzygies of the embedding defined by $L$ (if $L$ is very ample) and is thus related to a host of classical questions. We assume that the reader is already aware of these relations and the main theorems on Koszul cohomology; for a detailed discussion, the reader is referred to \cite{Green}.
Recently, in \cite{EL}, it was realized that a conjectural uniform asymptotic picture emerges as $L$ becomes more positive. We give a new interpretation of Koszul cohomology which we believe will clarify this picture. As an application, we prove that $K_{p,q}(B,dA+P)$ grows polynomially in $d$. In particular, this gives a partial answer to Problem 7.2 from \cite{EL}. More precisely, we establish
\begin{theorem} Let $A$ and $P$ be ample line bundles on a smooth projective variety $X$. Let $B$ be a locally free sheaf on $X$. Then there is a polynomial that equals $\operatorname{dim} K_{p,q}(B,dA+P)$ for $d$ sufficiently large. \end{theorem}
The inspiration for our new interpretation came from Voisin's papers \cite{Voisin1}, \cite{Voisin2} on generic Green's conjecture. To prove generic Green's conjecture, Voisin writes Koszul cohomology for $X$ a surface in terms of the sheaf cohomology of various sheaves on a Hilbert scheme of points of $X$. While this allows for the use of the geometry of Hilbert schemes in analyzing Koszul cohomology, this interpretation has multiple downsides.
The main downside is that it does not generalize well to higher dimensions, as the Hilbert scheme of points of $X$ is not necessarily smooth (or even irreducible) unless $X$ has dimension at most 2. It is unclear to us if this difficulty can be circumvented through uniform use of the curvilinear Hilbert scheme, but even if it can, the loss of properness creates various technical difficulties.
Our construction can be thought of as replacing the Hilbert scheme with a noncommutative resolution of singularities (though the word noncommutative need not ever be mentioned). Specifically, we replace the cohomology of sheaves on Hilbert schemes with the $S_n$-invariants of the cohomology of $S_n$-equivariant sheaves on $n$-fold products. This is motivated by the Bridgeland-King-Reid equivalence \cite{BKR}, which implies that for a curve or surface $X$, the derived category of coherent sheaves on the Hilbert scheme is equivalent to the derived category of $S_n$-equivariant coherent sheaves on $X^n.$ We note that this had been previously used to compute the cohomology of sheaves on the Hilbert scheme, for instance in \cite{Scala}.
We would also like to note two other results related to ours. If one chooses to work with ordinary sheaves, as opposed to $S_n$-equivariant sheaves, one recovers in effect a theorem implicit in \cite{Green2}, which was first stated explicitly in \cite{Inamdar}. We refer to the discussion after Theorem \ref{interpretation} for more detail. Finally, we note that \cite{gonality} proves a result which implies Theorem \ref{polynomialicity} for curves. The proof methods are very similar, and we are hopeful that Theorem \ref{interpretation} will help in extending their results to higher dimensions.
Section 2 of this paper is a short introduction to Koszul cohomology. The main goal is to describe the conjectural asymptotic story of \cite{EL}. We also describe (but do not prove) Voisin's interpretation of Koszul cohomology in terms of Hilbert schemes.
Section 3 contains our new interpretation. Section 4 then uses it to analyze the asymptotics of $K_{p,q}(B,dA+P)$. All the proofs in both sections are quite short.
This research was supported by an NSF funded REU at Emory University under the mentorship of David Zureick-Brown. We would also like to thank Ken Ono and Evan O'Dorney for advice and helpful conversations. Finally, we would like to thank Robert Lazarsfeld for his encouragement and correspondence. Both he and the referees gave many useful suggestions.
\section{Koszul Cohomology} Our discussion of Koszul cohomology will be quite terse. For details and motivation, see \cite{Green} and \cite{EL}.
Let $X$ be a smooth projective algebraic variety of dimension $n$ over an algebraically closed field $k$ of characteristic zero and let $L$ be a line bundle on $X$. Form the graded ring $S=\operatorname{Sym}^{\bullet}H^0(L)$. For any coherent sheaf $B$ on $X$, we have a natural graded $S$-module structure on $M=\oplus_{n\geq 0}H^0(B+nL).$ From this we can construct the bigraded vector space $\operatorname{Tor}^{\bullet\bullet}_S(M,k).$ The $(p,q)$th Koszul cohomology of $(B,L)$ is the $(p,p+q)$-bigraded part of this vector space. We will call its dimension the $(p,q)$ Betti number, and the two-dimensional table of these the Betti table.
In this paper, we will always take $B$ to be locally free.
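To fix ideas, we recall the simplest example (it is not needed in what follows). Let $X=\mathbb{P}^1$, $B=\mathcal{O}_X$, and $L=\mathcal{O}_{\mathbb{P}^1}(2)$. Then $S=\operatorname{Sym}^{\bullet}H^0(L)\cong k[x_0,x_1,x_2]$, and $M=\oplus_{n\geq 0}H^0(\mathcal{O}_{\mathbb{P}^1}(2n))$ is the homogeneous coordinate ring of the conic $\{x_0x_2=x_1^2\}\subset\mathbb{P}^2$, with minimal free resolution $$0\rightarrow S(-2)\rightarrow S\rightarrow M\rightarrow 0.$$ Hence $K_{0,0}(B,L)\cong k$, $K_{1,1}(B,L)\cong k$ (coming from the single quadratic relation, which sits in homogeneous degree $2=1+1$), and all other Koszul cohomology groups vanish.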
The first theorem we describe is a duality theorem for Koszul cohomology, proven in Green's original paper \cite{Green}. (In fact, Green proves a slightly stronger result.)
\begin{theorem} \label{duality} Assume that $L$ is base-point-free, and $H^i(B\otimes(q-i)L)=H^i(B\otimes(q-i-1)L)=0$ for $i=1,2,\ldots, n-1.$ Then we have a natural isomorphism between $K_{p,q}(B,L)^*$ and $K_{h^0(L)-n-p,n+1-q}(B^*\otimes K_X,L)$. \end{theorem}
To compute the Koszul cohomology, we can use either a free resolution of $M$ or a free resolution of $k$. Taking the minimal free resolution of $M$ relates the Koszul cohomology to the degrees in which the syzygies of $M$ lie. In particular, this description shows that $K_{p,q}$ is trivial for $q < 0.$ Combining this with Theorem \ref{duality} and some elementary facts on Castelnuovo-Mumford regularity, it is possible to show that $K_{p,q}$ is trivial for $q > n+1$ and is nontrivial for $q=n+1$ only when $p$ is within a constant (depending only on $B$ and $X$) of $h^0(L)$.
Extending this, in \cite{EL}, Ein and Lazarsfeld study when the groups $K_{p,q}$ vanish for $L$ very positive and $1\leq q\leq n$. More precisely, let $L$ be of the form $P+dA$, where $P$ and $A$ are fixed divisors with $A$ ample. Then as $d$ changes, Ein and Lazarsfeld prove that $K_{p,q}$ is nonzero if $p>O(d^{q-1})$ and $p<h^0(L)-O(d^{n-1})$. They conjecture that, on the other hand, $K_{p,q}$ is trivial for $p<O(d^{q-1}).$ It is known that if $q \geq 2$, then $K_{p,q}$ is trivial for $p < O(d).$ In particular, Theorem \ref{polynomialicity} only has content when $q$ is $0$ or $1$.
For the purposes of computation, the syzygies of $M$ are often hard to work with directly, so instead we will compute using a free resolution of $k$. The natural free resolution is the Koszul complex $$\cdots\rightarrow\wedge^2H^0(L)\otimes S\rightarrow H^0(L)\otimes S\rightarrow S\rightarrow k\rightarrow 0.$$ Tensoring by $M$, we immediately arrive at the following lemma: \begin{lemma} \label{Koszul} If $q \geq 1$, $K_{p,q}(B,L)$ is equal to the cohomology of the three term complex \begin{multline*} H^0(B+(q-1)L)\otimes\wedge^{p+1}H^0(L)\rightarrow H^0(B+qL)\otimes\wedge^pH^0(L)\\ \rightarrow H^0(B+(q+1)L)\otimes\wedge^{p-1}H^0(L). \end{multline*} On the other hand, for $q=0$, $K_{p,q}(B,L)$ is equal to the kernel of the map $$H^0(B)\otimes\wedge^pH^0(L)\rightarrow H^0(B+L)\otimes\wedge^{p-1}H^0(L).$$ \end{lemma}
Finally, we say a few words on Voisin's Hilbert scheme approach to Koszul cohomology (see \cite{Voisin1}, \cite{Voisin2}). Let $X$ be a smooth curve or surface. Then denote by $\operatorname{Hilb}^n(X)$ the $n$th Hilbert scheme of points of $X.$ We have a natural incidence subscheme $I_n$ in $X\times\operatorname{Hilb}^n(X).$ Let $p\colon I_n\rightarrow X$ and $q\colon I_n\rightarrow\operatorname{Hilb}^n(X)$ be the two projections. \begin{theorem} \label{Voisin} Let $L^{[n]}$ be $\wedge^n{\operatorname{pr}}_{2*}p^*L.$ Then for $q\geq 1$, we have
$$K_{p,q}(B,L)\cong\operatorname{coker}(H^0(B+(q-1)L)\otimes H^0(L^{[p+1]})\rightarrow H^0((B+(q-1)L)\boxtimes L^{[p+1]}|_{I_{p+1}})).$$ Furthermore,
$$K_{p,0}\cong\operatorname{ker}(H^0(B)\otimes H^0(L^{[p+1]})\rightarrow H^0(B\boxtimes L^{[p+1]}|_{I_{p+1}})).$$ \end{theorem}
In fact, we have $H^0(L^{[n]})=\wedge^nH^0(L)$ and $H^0((B+(q-1)L)\boxtimes L^{[n]}|_{I_n})=\operatorname{ker}(H^0(B+qL)\otimes H^0(L^{[n-1]})\rightarrow H^0(B+(q+1)L)\otimes H^0(L^{[n-2]})).$ We thus see that Theorem \ref{Voisin} is a ``geometrized'' version of Theorem \ref{Koszul}.
We would like to comment that one of our original motivations was to rewrite this in terms of the $q$th or $(q-1)$th sheaf cohomology of some sheaf. This is accomplished by the formalism of the next section combined with the results of \cite{Scala}.
\section{$S_n$-Equivariant Sheaves on $X^n$}In this section, we prove an analogue of Theorem \ref{Voisin}, but with the category of coherent sheaves on the Hilbert scheme replaced by the category of $S_n$-equivariant sheaves on $X^n$. Here we take $X$ to be smooth projective, $X^n$ to be its $n$th Cartesian power, and $B$ to be a locally free sheaf on $X$.
An $S_n$-equivariant sheaf on $X^n$ is a sheaf on $X^n$ together with $S_n$-linearization in the sense of \cite[Chapter 5]{chrissG:geometricRepresentationTheory}. Note that for any map $X^n\rightarrow Y$ that is fixed under the $S_n$ action on $X^n$, we have an ``$S_n$-invariant pushforward" functor from the category of $S_n$-equivariant quasicoherent sheaves on $X^n$ to the category of quasicoherent sheaves on $Y$. The usual pushforward is equipped with a natural $S_n$ action, and the $S_n$-invariant pushforward is defined just to be the invariants under this action. The $S_n$-invariant cohomology $H^i_{S_n}$ is defined to be the derived functor of $S_n$-invariant pushforward to $\operatorname{Spec} k$.
We start by defining $L^{[n]}$ to be the sheaf $L\boxtimes \cdots \boxtimes L$ on $X^n$ with $S_n$ acting via the alternating action. By definition, we have $H^{\bullet}_{S_n}(L^{[n]})=\wedge^n H^{\bullet}(L).$ In the next section, we will also use the sheaf $D_L$, defined to be the sheaf $L\boxtimes\cdots \boxtimes L$ with $S_n$ acting via the trivial action. From the definitions, it is clear that $L^{[n]}=\mathcal{O}^{[n]}\otimes D_L.$
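For instance, in degree zero the K\"unneth formula identifies $H^0(X^n,L\boxtimes\cdots\boxtimes L)$ with $H^0(L)^{\otimes n}$; taking $S_n$-invariants with respect to the trivial linearization gives $H^0_{S_n}(D_L)\cong\operatorname{Sym}^nH^0(L)$, while the alternating linearization gives $H^0_{S_n}(L^{[n]})\cong\wedge^nH^0(L)$, in agreement with the formula above.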
Let $\Delta_i$ be the subvariety of $X\times X^n$ defined as the set of points $(x,x_1,\ldots, x_n)$ with $x=x_i.$ Let $Z$ be the scheme theoretic union of the $\Delta_i$. Then $Z$ is equipped with two projections $p\colon Z\rightarrow X$ and $q\colon Z\rightarrow X^n.$ We will make crucial use of a long exact sequence $$0\rightarrow\mathcal{O}_Z\rightarrow\oplus_{i_1}\mathcal{O}_{\Delta_i}\rightarrow\oplus_{i_1<i_2}\mathcal{O}_{\Delta_{i_1,i_2}}\rightarrow\cdots\rightarrow\mathcal{O}_{\Delta_{1,2,\ldots,n}}.$$ We will prove the following theorem. \begin{theorem} \label{interpretation} Assume that $H^i(L)=H^i(B+mL)=0$ for all $i,m>0.$ Then for $q > 1$, $K_{p,q}(B,L)\cong H^{q-1}_{S_{p+q}}(p^*B\otimes q^*L^{[p+q]}).$ We also have an exact sequence \begin{multline*} 0\rightarrow K_{p+1,0}(B,L)\rightarrow H^0(B)\otimes H^0_{S_{p+1}}(L^{[p+1]})\rightarrow H^0_{S_{p+1}}(p^*B\otimes q^*L^{[p+1]})\\ \rightarrow K_{p,1}(B,L)\rightarrow 0. \end{multline*} \end{theorem} Note that when $B$ has no higher cohomology, we can write this more aesthetically as $K_{p,q}(B,L)=H^q_{S_{p+q}}((B\boxtimes L^{[p+q]})\otimes \mathcal{I}_Z),$ where $\mathcal{I}_Z$ is the ideal sheaf of $Z$.
Before proving the theorem, we need an exact sequence. Define $\Delta_{i_1,i_2,\ldots,i_m}$ to be the subvariety of $X\times X^n$ such that $(x,x_1,\ldots,x_n)$ is a point of $\Delta_{i_1,i_2,\ldots,i_m}$ if and only if $x=x_{i_1}=\cdots=x_{i_m}.$ Now note that we have a long exact sequence of $S_n$-equivariant sheaves on $X\times X^n$ $$0\rightarrow\mathcal{O}_Z\rightarrow\oplus_{i_1}\mathcal{O}_{\Delta_i}\rightarrow\oplus_{i_1<i_2}\mathcal{O}_{\Delta_{i_1,i_2}}\rightarrow\cdots\rightarrow\mathcal{O}_{\Delta_{1,2,\ldots,n}}.$$
We define the map from $\mathcal{O}_{\Delta_{i_1,i_2,\cdots, i_m}}$ to $\mathcal{O}_{\Delta_{j_1,j_2,\cdots j_{m+1}}}$ to be nonzero if and only if $\Delta_{j_1,\cdots, j_{m+1}}$ is a subvariety of $\Delta_{i_1,i_2,\cdots, i_m}$, in which case we define it to be the natural map induced by the inclusion, up to sign. If the difference between $\{i_1,\cdots i_m\}$ and $\{j_1,\cdots j_{m+1}\}$ is $j_k$, then we modify this map by a factor of $(-1)^{k-1}.$
We need to be careful to define our $S_n$-equivariant structure on our complex in a way compatible with this modification. For any element $\sigma\in S_n,$ we have $\sigma_*\mathcal{O}_{\Delta_{i_1,i_2,\cdots, i_m}}\cong\mathcal{O}_{\Delta_{h_1,h_2,\cdots,h_m}},$ where $h_1 < h_2 <\cdots < h_m$ and $\{h_1,\cdots,h_m\}=\{\sigma(i_1),\cdots, \sigma(i_m)\}$. This gives each term of our complex a natural $S_n$-action. To make our maps $S_n$-equivariant, we modify the action by the sign of the permutation sending $h_1,h_2,\cdots, h_m$ to $\sigma(i_1),\cdots, \sigma(i_m).$
After these definitions, it is quite easy to check that we have indeed defined an $S_n$-equivariant complex. Exactness is harder; a proof can be found in Appendix A of \cite{Scala}.
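To illustrate these conventions in the simplest case, take $n=2$. Then $Z=\Delta_1\cup\Delta_2$, $\Delta_{1,2}=\Delta_1\cap\Delta_2$, and the complex reads $$0\rightarrow\mathcal{O}_Z\rightarrow\mathcal{O}_{\Delta_1}\oplus\mathcal{O}_{\Delta_2}\rightarrow\mathcal{O}_{\Delta_{1,2}}\rightarrow 0,$$ where the first map is restriction, $s\mapsto(s|_{\Delta_1},s|_{\Delta_2})$, and the second map, following the sign rule above, is $(s_1,s_2)\mapsto s_2|_{\Delta_{1,2}}-s_1|_{\Delta_{1,2}}$. The transposition in $S_2$ exchanges $\Delta_1$ and $\Delta_2$, hence the two middle summands, and acts on $\mathcal{O}_{\Delta_{1,2}}$ by $-1$ times the natural action; this sign is exactly what makes the difference map equivariant.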
We now prove Theorem \ref{interpretation}. \begin{proof}
Tensor our exact sequence by $B\boxtimes L^{[n]}$. Each term of the complex is of the form $\oplus_{i_1<i_2<\cdots<i_m}B\boxtimes L^{[n]}|_{\Delta_{i_1,i_2,\ldots,i_m}}.$ We compute the $S_n$-invariant cohomology of this sheaf. This is the same as the $S_m\times S_{n-m}$ invariant cohomology of $B\boxtimes L^{[n]}|_{\Delta_{1,2,\ldots,m}}.$ Now note that there is a natural $S_{n-m}$-equivariant isomorphism between $\Delta_{1,2,\ldots, m}$ and $X \times X^{n-m}.$ This sends $B\boxtimes L^{[n]}|_{\Delta_{1,2,\ldots, m}}$ to $(B+mL)\boxtimes L^{[n-m]}.$ The action of $S_m$ on $H^0(B+mL)$ is trivial, as the alternating actions coming from our modified $S_n$-equivariance and $L^{[n]}$ cancel, and thus we can simply consider the $S_{n-m}$-invariant cohomology. By our assumptions and the fact that $H^{\bullet}_{S_n}(L^{[n]})=\wedge^nH^{\bullet}(L),$ we see that the higher $S_{n-m}$-invariant cohomology of $B\boxtimes L ^{[n]}|_{\Delta_{1,2,\ldots,m}}$ vanishes and that $H^0_{S_{n-m}}(B\boxtimes L^{[n]}|_{\Delta_{1,2,\ldots,m}})\cong H^0(B+mL)\otimes\wedge^{n-m}H^0(L).$
Our long exact sequence now immediately shows that the $S_n$-invariant cohomology of $B\boxtimes L^{[n]}|_Z$ is given by the cohomology of the complex $$0\rightarrow H^0(B+L)\otimes\wedge^{n-1}H^0(L)\rightarrow H^0(B+2L)\otimes\wedge^{n-2}H^0(L)\rightarrow\cdots$$ and, setting $n=p+q$, our result immediately follows from Lemma \ref{Koszul}. \end{proof}
We note that in particular, if $H^{q-1}(p^*B\otimes q^*L^{[p+q]})=0$ (note that this is non-invariant cohomology!), then $K_{p,q}(B,L)$ vanishes. A very similar statement was first claimed in the proof of Theorem 3.2 of \cite{Green2}, with a minor mistake corrected by Inamdar in \cite{Inamdar}. Since then, this statement has been used in various places (e.g., \cite{HwangTo}, \cite{LazarsfeldPareschiPopa}).
\section{Polynomial Growth of $K_{p,q}(B,L_d)$} In this section, we will let $L_d=dA+P,$ where $A$ and $P$ are line bundles with $A$ ample. We allow $B$ to be any locally free sheaf on $X$. Our main theorem is the following. \begin{theorem} \label{polynomialicity} For $d$ sufficiently large, $\operatorname{dim} K_{p,q}(B,L_d)$ is a polynomial in $d.$ \end{theorem} \begin{proof} We start by showing that $K_{p,q}(B,L_d)$ is trivial for $q \geq 2$ and $d$ large enough. This has been known at least since \cite{EL2}, but we will reprove it here for the sake of self-containedness. Let $\mathcal{F}$ be the sheaf $q_*(p^*B)\otimes P^{[p+q]}.$ As the map $q$ is finite, pushforward along it is exact. Therefore, by Theorem \ref{interpretation} and the projection formula, we know that $$K_{p,q}(B,L_d)\cong H^{q-1}_{S_{p+q}}(\mathcal{F}\otimes dD_{A})$$ (see the beginning of the previous section for the definition of $D_A$). But $D_{A}$ is ample, so this is zero for large enough $d.$
By Theorem \ref{interpretation}, it suffices to show that the dimensions of $K_{p+1,0}(B,L_d), H^0(B)\otimes H^0_{S_{p+1}}(L_d^{[p+1]}),$ and $H^0_{S_{p+1}}(p^*B\otimes q^*L_d^{[p+1]})$ all grow polynomially in $d.$ Using that $H^{\bullet}_{S_{n}}(L_d^{[n]})\cong\wedge^nH^{\bullet}(L_d),$ we immediately see that $\operatorname{dim} H^0(B)\otimes H^0_{S_{p+1}}(L_d^{[p+1]})$ grows polynomially in $d.$
It follows immediately from the second part of Theorem \ref{interpretation} that $K_{p+1,0}(B,L_d)\cong H^0_{S_{p+1}}(B\boxtimes L_d^{[p+1]}\otimes\mathcal{I}_Z).$ Letting $\mathcal{G}$ denote the ($d$-independent) sheaf ${\operatorname{pr}}_{2*}(B\boxtimes P^{[p+1]}\otimes\mathcal{I}_Z),$ we see, again by the projection formula, that $$K_{p+1,0}\cong H^0_{S_{p+1}}(\mathcal{G}\otimes dD_{A}).$$
To proceed, we take the $S_{p+1}$-invariant pushforward of this to $\operatorname{Sym}^{p+1}(X).$ Let the $S_{p+1}$-invariant pushforward of $\mathcal{G}$ be $\mathcal{H}.$ Note that $D_{A}$ is the pullback (with the natural $S_{p+1}$-action) of an ample line bundle $A'$ on $\operatorname{Sym}^{p+1}(X).$ We thus see that $K_{p+1,0}(B,L_d) =H^0(\mathcal{H}\otimes dA'),$ and by the existence of the Hilbert polynomial, its dimension is a polynomial for large enough $d.$ An identical argument shows that $\operatorname{dim}H^0_{S_{p+1}}(p^*B\otimes q^*L_d^{[p+1]})$ is a polynomial for large enough $d.$ \end{proof}
Note that this theorem answers the first part of Problem 7.2 in \cite{EL} in the affirmative. Their specific question asks if the triviality of a specific Koszul cohomology group is independent of $d$ if $d$ is large. As any nonzero polynomial has only finitely many roots, Theorem \ref{polynomialicity} implies that the answer is yes.
\end{document}
\begin{document}
\begin{abstract} We analyze the dichotomy between {\em sectional-Axiom A flows} (c.f. \cite{memo}) and flows with points accumulated by periodic orbits of different indices. Indeed, this is proved for $C^1$ generic flows whose singularities accumulated by periodic orbits have codimension one. Our result improves \cite{mp1}. \end{abstract}
\maketitle
\section{Introduction}
\noindent Ma\~n\'e asked in his breakthrough work \cite{M} whether the {\em star property}, i.e., the property of being far away from systems with non-hyperbolic periodic orbits, is sufficient to guarantee that a system be Axiom A. Although this is true for diffeomorphisms \cite{h0}, it is not for flows, as the geometric Lorenz attractor \cite{abs}, \cite{gu}, \cite{GW} shows. On the other hand, if singularities are not allowed, then the answer turns out to be positive by \cite{gw}. Previously, Ma\~n\'e had connected the star property with the nowadays called {\em Newhouse phenomenon}, at least for surfaces. In fact, he proved that a $C^1$-generic surface diffeomorphism either is Axiom A or displays infinitely many sinks or sources \cite{m}. Extending this work on surfaces, \cite{mp1} obtained the following results about $C^1$-generic flows on closed 3-manifolds: Any $C^1$-generic star flow is singular-Axiom A and, consequently, any $C^1$-generic flow is singular-Axiom A or displays infinitely many sinks or sources. The notion of {\em singular-Axiom A} was introduced in \cite{mpp}, inspired by the dynamical properties of both Axiom A flows and the geometric Lorenz attractor. It is then natural to investigate such generic phenomena in higher dimensions, and the natural challenges are: Is a $C^1$-generic star flow on a closed $n$-manifold singular-Axiom A? Is a $C^1$-generic vector field on a closed $n$-manifold singular-Axiom A, or does it have infinitely many sinks or sources? Unfortunately, what we know is that the second question has a negative answer for $n\geq 5$, as counterexamples can be obtained by suspending the diffeomorphisms in Theorem C of \cite{bv} (but for $n=4$ the answer may be positive). A new light comes from the {\em sectional-Axiom A flows} introduced in \cite{memo}. Indeed, the first author replaced the term singular-Axiom A by sectional-Axiom A above in order to formulate the following conjecture for $n\geq 3$ (improving the one on p. 947 of \cite{gwz}):
\begin{conjecture}
\label{conj0} $C^1$-generic star flows on closed $n$-manifolds are sectional-Axiom A. \end{conjecture}
Analogously, we can ask whether a $C^1$-generic vector field on a closed $n$-manifold is sectional-Axiom A or displays infinitely many sinks or sources. But now the answer is negative not only for $n=5$, by the suspension of \cite{bv} as above, but also for $n=4$ by \cite{st} and the suspension of certain diffeomorphisms \cite{mane}. Nevertheless, in all these counterexamples, it is possible to observe the existence of {\em points accumulated by hyperbolic periodic orbits of different Morse indices}. Since such a phenomenon can be observed also in a number of well-known examples of non-hyperbolic systems and since, in dimension three, that phenomenon implies the existence of infinitely many sinks or sources, it is possible to formulate the following dichotomy (which, in virtue of Proposition \ref{p1}, follows from Conjecture \ref{conj0}):
\begin{conjecture} \label{conj1} $C^1$-generic vector fields $X$ satisfy (only) one of the following properties: \begin{enumerate}
\item $X$ has a point accumulated by hyperbolic periodic orbits of different Morse indices; \item $X$ is sectional-Axiom A. \end{enumerate} \end{conjecture}
In this paper we prove Conjecture \ref{conj1} but in a case very close to the three-dimensional one, namely, when the {\em singularities accumulated by periodic orbits have codimension one} (i.e. Morse index $1$ or $n-1$). Observe that our result implies the dichotomy in \cite{mp1} since the assumption about the singularities is automatic for $n=3$. It also implies Conjecture \ref{conj1} in large classes of vector fields as, for instance, those whose singularities (if any) have codimension one. As an application we prove Conjecture \ref{conj0} for star flows with spectral decomposition as soon as the singularities accumulated by periodic orbits have codimension one. Let us state our results in a precise way.
In what follows $M$ is a compact connected boundaryless Riemannian manifold of dimension $n\geq 3$ (a {\em closed $n$-manifold} for short). If $X$ is a $C^1$ vector field in $M$ we will denote by $X_t$ the flow generated by $X$ in $M$. A subset $\Lambda\subset M$ is {\em invariant} if $X_t(\Lambda)=\Lambda$ for all $t\in I \!\! R$. By a {\em closed orbit} we mean a periodic orbit or a singularity. We define the {\em omega-limit set} of $p\in M$ by $$ \omega(p)=\left\{x\in M: x=\lim_{n\to\infty}X_{t_n}(p) \mbox{ for some sequence }t_n\to\infty\right\} $$ and call $\Lambda$ {\em transitive} if $\Lambda=\omega(p)$ for some $p\in \Lambda$. Clearly every transitive set is compact invariant. As customary we call $\Lambda$ {\em nontrivial} if it does not reduce to a single orbit.
Denote by $\|\cdot\|$ and $m(\cdot)$ the norm and the minimal norm induced by the Riemannian metric and by $Det(\cdot)$ the determinant operation. A compact invariant set $\Lambda$ is {\em hyperbolic} if there are a continuous invariant tangent bundle decomposition $$ T_\Lambda M=\hat{E}^s_\Lambda\oplus E^X_\Lambda\oplus \hat{E}^u_\Lambda $$ and positive constants $K,\lambda$ such that $E^X_\Lambda$ is the subbundle generated by $X$, $$
\|DX_t(x)/\hat{E}^s_x\|\leq Ke^{-\lambda t} \quad \mbox{ and }\quad m(DX_t(x)/\hat{E}^u_x)\geq K^{-1}e^{\lambda t}, $$ for all $x\in \Lambda$ and $t\geq 0$. Sometimes we write $\hat{E}^{s,X}_x$, $\hat{E}^{u,X}_x$ to indicate dependence on $X$.
A closed orbit $O$ is hyperbolic if it is hyperbolic as a compact invariant set. In such a case we define its {\em Morse index} $I(O)=dim(\hat{E}^s_O)$, where $dim(\cdot)$ stands for the dimension operation. If $O$ reduces to a singularity $\sigma$, then we write $I(\sigma)$ instead of $I(\{\sigma\})$ and say that $\sigma$ has {\em codimension one} if $I(\sigma)=1$ or $I(\sigma)=n-1$. It is customary to call a hyperbolic closed orbit of maximal (resp. minimal) Morse index a {\em sink} (resp. a {\em source}).
On the other hand, an invariant splitting $T_\Lambda M=E_\Lambda\oplus F_\Lambda$ over $\Lambda$ is {\em dominated} (we also say that $E_\Lambda$ {\em dominates} $F_\Lambda$) if there are positive constants $K,\lambda$ such that $$
\frac{\|DX_t(x)/E_x\|}{m(DX_t(x)/F_x)}\leq Ke^{-\lambda t}, \quad\quad\forall x\in \Lambda \mbox{ and }t\geq 0. $$
In this work we agree to call a compact invariant set $\Lambda$ {\em partially hyperbolic} if there is a dominated splitting $T_\Lambda M=E^s_\Lambda\oplus E^c_\Lambda$ with {\em contracting} dominating subbundle $E^s_\Lambda$, namely, $$
\|DX_t(x)/E^s_x\|\leq Ke^{-\lambda t}, \quad\quad\forall x\in \Lambda \mbox{ and }t\geq 0. $$ We stress however that this is not a standard usage (specially due to the lack of symmetry in this definition). Anyway, in such a case, we say that $\Lambda$ has {\em contracting dimension $d$} if $dim(E^s_x)=d$ for all $x\in \Lambda$. Moreover, we say that the central subbundle $E^c_\Lambda$ is {\em sectionally expanding} if $$ dim(E^c_x)\geq 2 \quad\mbox{ and }\quad
|Det(DX_t(x)/L_x)|\geq K^{-1}e^{\lambda t}, \quad\quad\forall x\in \Lambda \mbox{ and }t\geq 0 $$ and all two-dimensional subspace $L_x$ of $E^c_x$.
A {\em sectional-hyperbolic set} is a partially hyperbolic set whose singularities (if any) are hyperbolic and whose central subbundle is sectionally expanding (\footnote{Some authors use the term {\em singular-hyperbolic} instead.}).
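For instance, the eigenvalue configuration at the singularity of the geometric Lorenz attractor gives a simple illustration: consider the linear vector field $X(x)=(\lambda_1x_1,\lambda_2x_2,\lambda_3x_3)$ on $I \!\! R^3$ with $\lambda_2<\lambda_3<0<\lambda_1$ and $\lambda_1+\lambda_3>0$, take $\Lambda=\{0\}$, and let $E^s_\Lambda$ be the $x_2$-axis and $E^c_\Lambda$ the $x_1x_3$-plane. Since $\lambda_2<\lambda_3$, the splitting $T_\Lambda M=E^s_\Lambda\oplus E^c_\Lambda$ is dominated with contracting subbundle $E^s_\Lambda$, and $E^c_\Lambda$ is sectionally expanding because $|Det(DX_t(0)/E^c_0)|=e^{(\lambda_1+\lambda_3)t}$, although $E^c_\Lambda$ itself is not expanding (it contains the contracted direction corresponding to $\lambda_3$).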
Now we recall the concept of sectional-Axiom A flow \cite{memo}. Call a point $p\in M$ {\em nonwandering} if for every neighborhood $U$ of $p$ and every $T>0$ there is $t>T$ such that $X_t(U)\cap U\neq\emptyset$. We denote by $\Omega(X)$ the set of nonwandering points of $X$ (which is clearly a compact invariant set). We say that $X$ is an {\em Axiom A flow} if $\Omega(X)$ is both hyperbolic and the closure of the closed orbits. The so-called {\em Spectral Decomposition Theorem} \cite{hk} asserts that the nonwandering set of an Axiom A flow $X$ splits into finitely many disjoint transitive sets {\em with dense closed orbits} (i.e. with a dense subset of closed orbits) which are hyperbolic for $X$. This motivates the following definition:
\begin{definition} A $C^1$ vector field $X$ in $M$ is called {\em sectional-Axiom A flow} if there is a finite disjoint decomposition $ \Omega(X)=\Omega_1\cup \cdots \cup \Omega_k $ formed by transitive sets with dense periodic orbits $\Omega_1,\cdots, \Omega_k$ such that, for all $1\leq i\leq k$, $\Omega_i$ is either a hyperbolic set for $X$ or a sectional-hyperbolic set for $X$ or a sectional-hyperbolic set for $-X$. \end{definition}
Let $\mathcal{X}^1$ denote the space of $C^1$ vector fields $X$ in $M$. Notice that it is a Baire space if equipped with the standard $C^1$ topology. The expression {\em $C^1$-generic vector field} will mean a vector field in a certain residual subset of $\mathcal{X}^1$. We say that a point is {\em accumulated by periodic orbits}, if it lies in the closure of the union of the periodic orbits, and {\em accumulated by hyperbolic periodic orbits of different Morse index} if it lies simultaneously in the closure of the hyperbolic periodic orbits of Morse index $i$ and $j$ with $i\neq j$. With these definitions we can state our main result settling a special case of Conjecture \ref{conj1}.
\begin{main1} A $C^1$-generic vector field $X$ for which the singularities accumulated by periodic orbits have codimension one satisfies (only) one of the following properties: \begin{enumerate}
\item $X$ has a point accumulated by hyperbolic periodic orbits of different Morse indices; \item $X$ is sectional-Axiom A. \end{enumerate} \end{main1}
Standard $C^1$-generic results \cite{cmp} imply that the sectional-Axiom A flows in the second alternative above also satisfy the no-cycle condition.
The proof of our result follows that of Theorem A in \cite{mp1}. However, we need a more direct approach bypassing Conjecture \ref{conj0}. Indeed, we shall use some methods in \cite{mp1} together with a combination of results \cite{glw}, \cite{gwz}, \cite{memo} for nontrivial transitive sets (originally proved for robustly transitive sets).
\begin{definition}[\cite{a}] We say that $X$ has {\em spectral decomposition} if there is a finite partition $\Omega(X)=\Lambda_1\cup\cdots\cup\Lambda_l$ formed by transitive sets $\Lambda_1,\cdots, \Lambda_l$. \end{definition}
Theorem A will imply the following approach to Conjecture \ref{conj0}.
\begin{main2} \label{the-coro} A $C^1$-generic star flow with spectral decomposition and for which the singularities accumulated by periodic orbits have codimension one is sectional-Axiom A. \end{main2}
\section{Proofs} \label{sec2}
\noindent Hereafter we fix a closed $n$-manifold $M$, $n\geq 3$, $X\in \mathcal{X}^1$ and a compact invariant set $\Lambda$ of $X$. Denote by $Sing(X,\Lambda)$ the set of singularities of $X$ in $\Lambda$. We shall use the following concept from \cite{glw}.
\begin{definition} We say that $\Lambda$ has a definite index $0\leq Ind(\Lambda)\leq n-1$ if there are a neighborhood $\mathcal{U}$ of $X$ in $\mathcal{X}^1$ and a neighborhood $U$ of $\Lambda$ in $M$ such that $I(O)=Ind(\Lambda)$ for every hyperbolic periodic orbit $O\subset U$ of every vector field $Y\in \mathcal{U}$. In such a case we say that $\Lambda$ is {\em strongly homogeneous (of index $Ind(\Lambda)$)}. \end{definition}
It turns out that the strongly homogeneous property imposes certain constraints on the Morse indices of the singularities \cite{gwz}. To explain this we use the concept of {\em saddle value} of a hyperbolic singularity $\sigma$ of $X$ defined by $$ \Delta(\sigma)=Re(\lambda)+Re(\gamma) $$ where $\lambda$ (resp. $\gamma$) is the stable (resp. unstable) eigenvalue with maximal (resp. minimal) real part (c.f. \cite{sstc} p. 725). Indeed, based on Hayashi's connecting lemma \cite{h} and well-known results about unfolding of homoclinic loops \cite{sstc}, Lemma 4.3 in \cite{gwz} proves that, if $\Lambda$ is a robustly transitive set which is strongly homogeneous with hyperbolic singularities, then $\Delta(\sigma)\neq 0$ and, furthermore, $I(\sigma)=Ind(\Lambda)$ or $Ind(\Lambda)+1$ depending on whether $\Delta(\sigma)<0$ or $\Delta(\sigma)>0$, $\forall \sigma\in Sing(X,\Lambda)$. However, we can observe that the same is true for nontrivial transitive sets (instead of robustly transitive sets) since the proof in \cite{gwz} uses the connecting lemma only once. In this way we obtain the following lemma.
\begin{lemma} \label{43} Let $\Lambda$ be a nontrivial transitive set which is strongly homogeneous with singularities (all hyperbolic) of $X$. Then, every $\sigma\in Sing(X,\Lambda)$ satisfies $\Delta(\sigma)\neq 0$ and one of the properties below: \begin{itemize} \item If $\Delta(\sigma)<0$, then $I(\sigma)=Ind(\Lambda)$. \item If $\Delta(\sigma)>0$, then $I(\sigma)=Ind(\Lambda)+1$. \end{itemize} \end{lemma}
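For instance, for the geometric Lorenz attractor $\Lambda$ (here $n=3$) the periodic orbits have Morse index $1$, so $Ind(\Lambda)=1$, while the singularity $\sigma$ has eigenvalues $\lambda_2<\lambda_3<0<\lambda_1$ with $\lambda_1+\lambda_3>0$; hence $\Delta(\sigma)=\lambda_3+\lambda_1>0$ and $I(\sigma)=2=Ind(\Lambda)+1$, in accordance with Lemma \ref{43}.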
On the other hand, the following inequalities for strongly homogeneous sets $\Lambda$ were introduced in \cite{glw}: \begin{equation} \label{eq1} I(\sigma)>Ind(\Lambda), \quad\quad\forall \sigma\in Sing(X,\Lambda). \end{equation} \begin{equation}
\label{eq11} I(\sigma)\leq Ind(\Lambda), \quad\quad\forall \sigma\in Sing(X,\Lambda). \end{equation}
We shall use the above lemma to present a special case where one of these inequalities can be proved.
\begin{proposition} \label{thCcc} Let $\Lambda$ be a nontrivial transitive set which is strongly homogeneous with singularities (all hyperbolic of codimension one) of $X$. If $n\geq 4$ and $1\leq Ind(\Lambda)\leq n-2$, then $\Lambda$ satisfies either (\ref{eq1}) or (\ref{eq11}). \end{proposition}
\begin{proof} Otherwise there are $\sigma_0,\sigma_1\in Sing(X,\Lambda)$ satisfying $I(\sigma_0)\leq Ind(\Lambda)<I(\sigma_1)$. Since both $\sigma_0$ and $\sigma_1$ have codimension one and $1\leq Ind(\Lambda)\leq n-2$ we obtain $I(\sigma_0)=1$ and $I(\sigma_1)=n-1$. If $\Delta(\sigma_0)\geq 0$ then $I(\sigma_0)=Ind(\Lambda)+1$ by Lemma \ref{43} so $Ind(\Lambda)=0$ which contradicts $1\leq Ind(\Lambda)$. Then $\Delta(\sigma_0)<0$ and so $Ind(\Lambda)=I(\sigma_0)=1$ by Lemma \ref{43}. On the other hand, if $\Delta(\sigma_1)<0$ then $Ind(\Lambda)=I(\sigma_1)=n-1$ by Lemma \ref{43}. As $Ind(\Lambda)=1$ we get $n=2$ contradicting $n\geq 4$. Then $\Delta(\sigma_1)\geq0$ so $I(\sigma_1)=Ind(\Lambda)+1$ by Lemma \ref{43} thus $n=3$ contradicting $n\geq 4$. The proof follows. \end{proof}
The importance of (\ref{eq1}) and (\ref{eq11}) relies on the following result proved in \cite{glw}, \cite{gwz}, \cite{memo}: A $C^1$ robustly transitive set $\Lambda$ with singularities (all hyperbolic) which is strongly homogeneous satisfying (\ref{eq1}) (resp. (\ref{eq11})) is sectional-hyperbolic for $X$ (resp. $-X$). However, we can observe that the same is true for nontrivial transitive sets (instead of robustly transitive sets) as soon as $1\leq Ind(\Lambda)\leq n-2$. The proof is similar to that in \cite{glw},\cite{gwz}, \cite{memo} but using the so-called {\em preperiodic set} \cite{w} instead of the natural continuation of a robustly transitive set. Combining this with Proposition \ref{thCcc} we obtain the following corollary in which the expression {\em up to flow-reversing} means either for $X$ or $-X$.
\begin{corollary} \label{thCc} Let $\Lambda$ be a nontrivial transitive set which is strongly homogeneous with singularities (all hyperbolic of codimension one) of $X$. If $n\geq 4$ and $1\leq Ind(\Lambda)\leq n-2$, then $\Lambda$ is sectional-hyperbolic up to flow-reversing. \end{corollary}
A direct application of this corollary is as follows. We say that $\Lambda$ is {\em Lyapunov stable} for $X$ if for every neighborhood $U$ of it there is a neighborhood $W\subset U$ of it such that $X_t(p)\in U$ for every $t\ge0$ and $p\in W$.
It was proved in Theorem C of \cite{mp1} that, for $C^1$ generic three-dimensional star flows, every nontrivial Lyapunov stable set with singularities is singular-hyperbolic. We will need a similar result for higher dimensional flows, but with the term singular-hyperbolic replaced by sectional-hyperbolic. The following will supply such a result.
\begin{corollary}
\label{thC} Let $\Lambda$ be a nontrivial transitive set which is strongly homogeneous with singularities (all hyperbolic of codimension one) of $X$. If $n\geq 4$, $1\leq Ind(\Lambda)\leq n-2$ and $\Lambda$ is Lyapunov stable, then $\Lambda$ is sectional-hyperbolic for $X$. \end{corollary}
\begin{proof} By Corollary \ref{thCc} it suffices to prove that $\Lambda$ cannot be sectional-hyperbolic for $-X$. Assume by contradiction that it does. Then, by integrating the corresponding contracting subbundle, we obtain a strong stable manifold $W^{ss}_{-X}(x)$, $\forall x\in \Lambda$. But $\Lambda$ is Lyapunov stable for $X$ so $W^{ss}_{-X}(x)\subset \Lambda$, $\forall x\in \Lambda$, contradicting p. 556 in \cite{momo}. Then, $\Lambda$ cannot be sectional-hyperbolic for $-X$ and we are done. \end{proof}
We also use Lemma \ref{43} to prove the following proposition.
\begin{proposition} \label{c1} Every nontrivial transitive sectional-hyperbolic set $\Lambda$ of a vector field $X$ in a closed $n$-manifold, $n\geq 3$, is strongly homogeneous and satisfies $I(\sigma)=Ind(\Lambda)+1$, $\forall \sigma\in Sing(X,\Lambda)$. \end{proposition}
\begin{proof} Since transitiveness implies connectedness we have that the strong stable subbundle $E^s_\Lambda$ of $\Lambda$ has constant dimension. From this and the persistence of the sectional-hyperbolic splitting we obtain that $\Lambda$ is strongly homogeneous of index $Ind(\Lambda)=dim(E^s_x)$, for $x\in \Lambda$. Now fix a singularity $\sigma$. To prove $I(\sigma)=Ind(\Lambda)+1$ we only need to prove that $\Delta(\sigma)>0$ (c.f. Lemma \ref{43}).
Suppose by contradiction that $\Delta(\sigma)\leq 0$. Then, $\Delta(\sigma)<0$ and $I(\sigma)=Ind(\Lambda)$ by Lemma \ref{43}. Therefore, $dim(E^s_\sigma)=dim(\hat{E}^s_\sigma)$ where $T_\sigma M=\hat{E}^s_\sigma\oplus \hat{E}^u_\sigma$ is the hyperbolic splitting of $\sigma$ (as hyperbolic singularity of $X$). Now, let $W^s(\sigma)$ be the stable manifold of $\sigma$ and $W^{ss}(\sigma)$ be the strong stable manifold of $\sigma$ obtained by integrating the strong stable subbundle $E^s_\Lambda$ (c.f. \cite{hps}). Notice that $W^{ss}(\sigma)\subset W^s(\sigma)$. As $dim(W^{ss}(\sigma))=dim(E^s_\sigma)=dim(\hat{E}_\sigma^s)=dim(W^s(\sigma))$ we get $W^{ss}(\sigma)=W^s(\sigma)$. But $\Lambda$ is nontrivial transitive so the dense orbit will accumulate at some point in $W^s(\sigma)\setminus \{\sigma\}$. As $W^{ss}(\sigma)=W^{s}(\sigma)$ such a point must belong to $(\Lambda\cap W^{ss}(\sigma))\setminus \{\sigma\}$. On the other hand, it is well known that $\Lambda\cap W^{ss}(\sigma)=\{\sigma\}$ (c.f. \cite{mp1}) so we obtain a contradiction which proves the result.
We say that $\Lambda$ is an {\em attracting set} if there is a neighborhood $U$ of it such that $$ \Lambda=\bigcap_{t>0}X_t(U). $$ On the other hand, a {\em sectional-hyperbolic attractor} is a transitive attracting set which is also a sectional-hyperbolic set. An {\em unstable branch} of a hyperbolic singularity $\sigma$ of a vector field is an orbit in $W^u(\sigma)\setminus\{\sigma\}$. We say that $\Lambda$ has {\em dense singular unstable branches} if every unstable branch of every hyperbolic singularity on it is dense in $\Lambda$.
The following is a straightforward extension of Theorem D in \cite{mp1} to higher dimensions (with similar proof).
\begin{proposition} \label{thD} Let $\Lambda$ be a Lyapunov stable sectional-hyperbolic set of a vector field $X$ in a closed $n$-manifold, $n\geq 3$. If $\Lambda$ has both singularities, all of Morse index $n-1$, and dense singular unstable branches, then $\Lambda$ is a sectional-hyperbolic attractor of $X$. \end{proposition}
Now we recall the star flow's terminology from \cite{w}.
\begin{definition} \label{star-flow} A {\em star flow} is a $C^1$ vector field which cannot be $C^1$-approximated by ones exhibiting non-hyperbolic closed orbits. \end{definition}
Corollary \ref{thC} together with propositions \ref{c1} and \ref{thD} implies the key result below.
\begin{proposition} \label{p1} A $C^1$-generic vector field $X$ on a closed $n$-manifold, $n\geq 3$, without points accumulated by hyperbolic periodic orbits of different Morse indices is a star flow. If, in addition, $n\geq 4$, then the codimension one singularities of $X$ accumulated by periodic orbits belong to a sectional-hyperbolic attractor up to flow-reversing. \end{proposition}
\begin{proof} We will use the following notation. Given $Z\in \mathcal{X}^1$ and $0\leq i\leq n-1$ we denote by $Per_i(Z)$ the union of the hyperbolic periodic orbits of Morse index $i$. The closure operation will be denoted by $Cl(\cdot)$.
Since $X$ has no point accumulated by hyperbolic periodic orbits of different Morse indices one has \begin{equation}
\label{separa} Cl(Per_i(X))\cap Cl(Per_j(X))=\emptyset, \quad\quad \forall i,j\in \{0,\cdots, n-1\}, \quad i\neq j. \end{equation} Then, since $X$ is $C^1$-generic, standard lower-semicontinuous arguments (c.f. \cite{cmp}) imply that there are a neighborhood $\mathcal{U}$ of $X$ in $\mathcal{X}^1$ and a pairwise disjoint collection of neighborhoods $\{U_i: 0\leq i\leq n-1\}$ such that $Cl(Per_i(Y))\subset U_i$ for all $0\leq i\leq n-1$ and $Y\in \mathcal{U}$.
Let us prove that $X$ is a star flow. When necessary we use the notation $I_X(O)$ to indicate dependence on $X$. By contradiction assume that $X$ is not a star flow. Then, there is a vector field $Y\in \mathcal{U}$ exhibiting a non-hyperbolic closed orbit $O$. Since $X$ is generic we can assume by the Kupka-Smale Theorem \cite{hk} that $O$ is a periodic orbit. Unfolding the eigenvalues of $O$ in a suitable way we would obtain two vector fields $Z_1,Z_2\in \mathcal{U}$ for which $O$ is a hyperbolic periodic orbit with $I_{Z_1}(O)\neq I_{Z_2}(O)$, $1\leq I_{Z_1}(O)\leq n-1$ and $1\leq I_{Z_2}(O)\leq n-1$. Consequently, $O\subset U_i\cap U_j$ where $i=I_{Z_1}(O)$ and $j=I_{Z_2}(O)$ which contradicts that the collection $\{U_i: 0\leq i\leq n-1\}$ is pairwise disjoint. Therefore, $X$ is a star flow.
Next we prove that $Cl(Per_i(X))$ is a strongly homogeneous set of index $i$, $\forall 0\leq i\leq n-1$. Take $Y\in \mathcal{U}$ and a hyperbolic periodic orbit $O\subset U_i$ of Morse index $I_Y(O)=j$. Then, $O\subset Cl(Per_j(Y))$ and so $O\subset U_j$ from which we get $O\subset U_i\cap U_j$. As the collection $\{U_i: 0\leq i\leq n-1\}$ is disjoint we conclude that $i=j$ and so every hyperbolic periodic orbit $O\subset U_i$ of every vector field $Y\in \mathcal{U}$ has Morse index $I_Y(O)=i$. Therefore, $Cl(Per_i(X))$ is a strongly homogeneous set of index $i$.
Now, we prove that every codimension one singularity $\sigma$ accumulated by periodic orbits belongs to a sectional-hyperbolic attractor up to flow-reversing. More precisely, we prove that if $I(\sigma)=n-1$ (resp. $I(\sigma)=1$), then $\sigma$ belongs to a sectional-hyperbolic attractor of $X$ (resp. of $-X$). We only consider the case $I(\sigma)=n-1$ for the case $I(\sigma)=1$ can be handled analogously by just replacing $X$ by $-X$.
Since $I(\sigma)=n-1$ one has $dim(W^u(\sigma))=1$ and, since $X$ is generic, we can assume that both $Cl(W^u(\sigma))$ and $\omega(q)$ (for $q\in W^u(\sigma)\setminus\{\sigma\}$) are Lyapunov stable sets of $X$ (c.f. \cite{cmp'}). As $\sigma$ is accumulated by periodic orbits we obtain from Lemma 4.2 in \cite{mp1} that $Cl(W^u(\sigma))$ is a transitive set.
We claim that $Cl(W^u(\sigma))$ is strongly homogeneous. Indeed, since $X$ is generic the General Density Theorem \cite{p} implies $\Omega(X)=Cl(Per(X)\cup Sing(X))$. Denote by $Sing^*(X)$ the set of singularities accumulated by periodic orbits. Then, there is a decomposition $$ \Omega(X)=\left(\bigcup_{0\leq i\leq n-1} Cl(Per_i(X))\right)\cup\left(\bigcup_{\sigma'\in Sing(X)\setminus Sing^*(X)}\{\sigma'\}\right) $$ which is disjoint by (\ref{separa}). In addition, $Cl(W^u(\sigma))$ is transitive and so it is connected and contained in $\Omega(X)$. As $\sigma\in Sing^*(X)$ by hypothesis we conclude that $Cl(W^u(\sigma)) \subset Cl(Per_{i_0}(X))$ for some $0\leq i_0\leq n-1$. But we have proved above that $Cl(Per_{i_0}(X))$ is a strongly homogeneous set of index $i_0$, so, $Cl(W^u(\sigma))$ is also a strongly homogeneous set of index $i_0$. The claim follows.
On the other hand, $X$ is a star flow and so it has finitely many sinks and sources \cite{li}, \cite{pl}. From this we obtain $1\leq i_0\leq n-2$ and so $1\leq Ind(Cl(W^u(\sigma)))\leq n-2$. Summarizing, we have proved that $Cl(W^u(\sigma))$ is a transitive set with singularities, all of them of codimension one, which is a Lyapunov stable strongly homogeneous set of index $1\leq Ind(Cl(W^u(\sigma)))\leq n-2$. Since $Cl(W^u(\sigma))$ is certainly nontrivial, Corollary \ref{thC} applied to $\Lambda=Cl(W^u(\sigma))$ implies that $Cl(W^u(\sigma))$ is sectional-hyperbolic.
Once we have proved that $Cl(W^u(\sigma))$ is sectional-hyperbolic we apply Proposition \ref{c1} to $\Lambda=Cl(W^u(\sigma))$ yielding $I(\sigma')=i_0+1$, $\forall\sigma'\in Sing(X,Cl(W^u(\sigma)))$. But $\sigma\in Cl(W^u(\sigma))$ and $I(\sigma)=n-1$ so $i_0=n-2$ by taking $\sigma'=\sigma$ above. Consequently, $I(\sigma')=n-1$ and so $dim(W^u(\sigma'))=1$, $\forall\sigma'\in Cl(W^u(\sigma))$. This implies two things. Firstly that every singularity in $Cl(W^u(\sigma))$ has Morse index $n-1$ and, secondly, since $X$ is generic, we can assume that $Cl(W^u(\sigma))$ has dense unstable branches (c.f. Lemma 4.1 in \cite{mp1}). So, $Cl(W^u(\sigma))$ is a sectional-hyperbolic attractor by Proposition \ref{thD} applied to $\Lambda=Cl(W^u(\sigma))$. Since $\sigma\in Cl(W^u(\sigma))$ we obtain the result. \end{proof}
The last ingredient is the proposition below whose proof follows from Theorem B of \cite{gw} as in the proof of Theorem B p. 1582 of \cite{mp1}.
\begin{proposition} \label{star=>sec-axa} If $n\geq 3$, every $C^1$-generic star flow whose singularities accumulated by periodic orbits belong to a sectional-hyperbolic attractor up to flow-reversing is sectional-Axiom A. \end{proposition}
\begin{proof}[Proof of Theorem A] Let $X$ be a $C^1$-generic vector field on a closed $n$-manifold, $n\geq 3$, all of whose singularities accumulated by periodic orbits have codimension one. Suppose in addition that there is no point accumulated by hyperbolic periodic orbits of different Morse indices. Since $X$ is $C^1$-generic we have by Proposition \ref{p1} that $X$ is a star flow.
If $n=3$ then, since $X$ is generic, Theorem B in \cite{mp1} implies that $X$ is sectional-Axiom A.
If $n\geq 4$ then, by Proposition \ref{p1}, since the singularities accumulated by periodic orbits have codimension one, we have that all such singularities belong to a sectional-hyperbolic attractor up to flow-reversing. Then, $X$ is sectional-Axiom A by Proposition \ref{star=>sec-axa}. \end{proof}
Now we move to the proof of Theorem B.
Hereafter we denote by $W^s_X(\cdot)$ and $W^u_X(\cdot)$ the stable and unstable manifold operations \cite{hps} with emphasis on $X$. Notation $O(p)$ (or $O_X(p)$ to emphasize $X$) will indicate the orbit of $p$ with respect to $X$. By a {\em periodic point} we mean a point belonging to a periodic orbit of $X$. As usual the notation $\pitchfork$ will indicate the transversal intersection operation.
\begin{lemma} \label{l1} There exists a residual subset ${\mathcal R}\subset \mathcal{X}^1$ with the following property: if $X\in {\mathcal R}$ has two periodic points $q_0$ and $p_0$ such that for any neighborhood ${\mathcal U}$ of $X$ there exists $Y\in {\mathcal U}$ for which the continuations $q(Y)$ and $p(Y)$ of $q_0$ and $p_0$ respectively are defined and satisfy $W^s_Y(O(q(Y)))\pitchfork W^u_Y(O(p(Y)))\neq \emptyset$, then $X$ satisfies $$W^s_X(O(q_0))\pitchfork W^u_X(O(p_0))\neq \emptyset.$$ \end{lemma}
\begin{proof} Indeed, let $\{U_n\}$ be a countable basis of the topology of $M$. Now, we define the set $A_{n,m}$ as the set of vector fields such that there exist a periodic point $p$ in $U_n$ and a periodic point $q$ in $U_m$ such that $W^s(O(p))\pitchfork W^u(O(q))\neq \emptyset$. Observe that $A_{n,m}$ is an open set. Define $B_{n,m}=\mathcal{X}^1\setminus Cl(A_{n,m})$. Thus the set $${\mathcal R}=\bigcap_{n,m=0}^{\infty} (A_{n,m}\cup B_{n,m})$$ is residual. If $X$ belongs to ${\mathcal R}$ and satisfies the hypothesis then there exist $n$ and $m$ such that $p_0\in U_n$ and $q_0\in U_m$. Moreover, the hypothesis implies that $X\notin B_{n,m}$. Thus $X\in A_{n,m}$ and the proof follows. \end{proof}
We use this lemma to prove the following one.
\begin{lemma} \label{l3} A $C^1$ generic star flow with spectral decomposition has no points accumulated by hyperbolic periodic orbits of different Morse indices. \end{lemma}
\begin{proof} Let ${\mathcal R}$ be the residual subset in Lemma \ref{l1}. Suppose, by contradiction, that the star flow $X\in {\mathcal R}$ has spectral decomposition but has a point accumulated by hyperbolic periodic orbits of different Morse indices. Then, there exists $i\neq j$ such that $Cl(Per_i(X))\cap Cl(Per_j(X))\neq \emptyset$. Without loss of generality we can assume $i<j$. Take $x\in Cl(Per_i(X))\cap Cl(Per_j(X))$ so there are periodic orbits $O(p_0)$ (of index $i$) and $O(q_0)$ (of index $j$) arbitrarily close to $x$. Clearly $x\in\Omega(X)$ and so there is a basic set $\Lambda$ in the spectral decomposition of $X$ such that $x\in \Lambda$. As the basic sets in the spectral decomposition are disjoint and the orbits $O(p_0)$, $O(q_0)$ are close to $x$ (and belong to $\Omega(X)$) we conclude that $O(p_0)\cup O(q_0)\subset \Lambda$.
Since $\Lambda$ is transitive, the connecting lemma \cite{h} implies that there exists $Y$ arbitrarily close to $X$ such that $W^s_Y(O(q(Y))) \cap W^u_Y(O(p(Y)))\neq\emptyset$. On the other hand, $j-i>0$ since $j>i$. Moreover, $\operatorname{dim} (W^s_Y(O(q(Y))))=j+1$ and $\operatorname{dim} (W^u_Y(O(p(Y))))=n-i$ since $ind(O(q(Y)))=j$ and $ind(O(p(Y)))=i$ (resp.). Then, $\operatorname{dim} (W^s_Y(O(q(Y))))+\operatorname{dim} (W^u_Y(O(p(Y))))=j+1+n-i>n$ and so with another perturbation we can assume that the above intersection is transversal. Since $X\in {\mathcal R}$ we conclude that $$W^s_X(O(q_0))\pitchfork W^u_X(O(p_0))\neq \emptyset.$$ Now, using the connecting lemma again, there exists $Y$ close to $X$ with a heterodimensional cycle. But this contradicts the non-existence of heteroclinic cycles for star flows (c.f. Theorem 4.1 in \cite{gw}). The proof follows. \end{proof}
\begin{proof}[Proof of Theorem B] Apply Theorem A and Lemma \ref{l3}. \end{proof}
\end{document} |
\begin{document}
\title{Unified Approach to Universal Cloning and Phase-Covariant Cloning}
\author{Jia-Zhong Hu}
\email{[email protected]}
\affiliation{Department of Physics, Tsinghua University, Beijing 100084, China } \author{Zong-Wen Yu}
\email{[email protected]}
\affiliation{Department of Mathematical Sciences, Tsinghua University, Beijing 100084, China } \author{Xiang-Bin Wang}
\email{[email protected]}
\affiliation{Department of Physics, Tsinghua University, Beijing 100084, China}
\begin{abstract} We analyze the problem of approximate quantum cloning when the quantum state lies between two latitudes on the Bloch sphere. We present an analytical formula for the optimized 1-to-2 cloning. The formula unifies the universal quantum cloning machine (UQCM)
and the phase covariant quantum cloning. \end{abstract}
\pacs{03.67.-a}
\maketitle
\section{Introduction}\label{sec:secIntro} Recent developments in quantum information have given rise to an increasing number of applications, for instance, quantum teleportation, quantum dense coding, quantum cryptography, quantum logic gates, and quantum algorithms~\cite{Galindo2002,Gisin and Thew,Gisin and Ribordy,rev}. Many tasks in quantum information processing (QIP) have properties different from their classical counterparts; quantum cloning is one example. Classically, we can duplicate (copy) any bit perfectly. In the quantum case, as shown by Wootters and Zurek~\cite{Wootters1982}, it is impossible to design a general machine that clones every state on the Bloch sphere perfectly. This is called the no-cloning theorem. However, the no-cloning theorem~\cite{Wootters1982} only forbids perfect cloning. As shown by Bu\v{z}ek and Hillery, approximate cloning of an unknown quantum state is possible. They proposed the Universal Quantum Copying Machine~\cite{Buzek1996} (UQCM), which clones all states on the Bloch sphere with the same optimal fidelity~\cite{Gisin1997,Brub1998,Gisin1998}. Subsequently, several works have extended the UQCM to $N$ inputs and $M$ outputs and to $d$-level systems~\cite{Gisin1997,Werner1998,Keyl1999}. Furthermore, quantum cloning with prior information about the unknown state has also been studied~\cite{Brub2000,phase1,phase2,buc,Fiu2003,Cerf2002,Fan2003,Cerf and Durt2002,Durt2003}; an example is the cloning of phase-covariant states~\cite{Brub2000}, or unknown equatorial states~\cite{Fan2003}, given by $$\ket{\psi}=\frac{1}{\sqrt{2}}\left(\ket{0}+e^{i\phi}\ket{1}\right).$$ It has been proven that the above state can be cloned with the optimal fidelity $F=\frac{1}{2}[1+\frac{1}{\sqrt{2}}]$~\cite{Fan2001}, which is higher than that of the UQCM. That is to say, if we already have some prior information about the unknown state, we can design a better copying machine for it. This result~\cite{Brub2000,phase1,phase2,Fan2003} was subsequently extended to more general cases~\cite{Karimipour2002,phase4} and demonstrated experimentally \cite{Fiu2003,phase4,phase5}.
The UQCM and the phase-covariant cloning do not subsume each other, because neither can be regarded as a special case of the other. In real applications of quantum information systems, we sometimes have access only to pure states distributed on a specific region of the Bloch sphere. In this article, we study such a general situation, in which the states are distributed between two latitudes on the Bloch sphere. Our result unifies the prior results pertaining to the UQCM and phase-covariant cloning: in particular, one can move the two latitudes to the poles for the UQCM, or bring the two latitudes together for phase-covariant cloning.
To this end, we consider the following state: \begin{equation}
\ket{\psi}=\cos{\frac{\theta}{2}}\ket{0}+
\sin{\frac{\theta}{2}}e^{i\phi}\ket{1} \end{equation} where $\phi\in[0,2\pi]$ and $\theta_{1}\leq\theta\leq\theta_2$. The states we consider here are uniformly distributed between two latitudes on the Bloch sphere. When $\theta_{1}=0$ and $\theta_{2}=\pi$, we recover the situation of the UQCM. When $\theta_{1}=\theta_{2}=\frac{\pi}{2}$, it is the phase covariant cloning. In this way, results of the UQCM and the phase covariant cloning can be unified: they are recovered as special cases of our QCM. Contrary to the general perception that restricting the input states always allows a better QCM, we point out that this is not always the case.
This paper is arranged as follows: after introducing some results concerning the UQCM~\cite{Buzek1996} and phase covariant cloning~\cite{Brub2000,Fan2003,Fan2001,Karimipour2002}, we formulate our problem in section \uppercase\expandafter{\romannumeral2} and present analytical results for this situation. In section \uppercase\expandafter{\romannumeral3}, we discuss our $1\to2$ QCM in detail and give a qualitative discussion of the $1\to N$ and $M\to N$ situations. We end the paper with some concluding remarks.
For an arbitrary quantum state on the Bloch's sphere, we can use the following unitary transformation to get the optimal result for the cloning: \begin{eqnarray}
U: &&
\ket{0}_{a}\ket{0}_{b}\ket{\uparrow}_{x}\rightarrow
\sqrt{\frac{2}{3}}\ket{0}_{a}\ket{0}_{b}\ket{\uparrow}_{x}+
\sqrt{\frac{1}{6}}\left(\ket{0}_{a}\ket{1}_{b}+
\ket{1}_{a}\ket{0}_{b}\right)\ket{\downarrow}_{x}\nonumber \\
&&
\ket{1}_{a}\ket{0}_{b}\ket{\uparrow}_{x}\rightarrow
\sqrt{\frac{2}{3}}\ket{1}_{a}\ket{1}_{b}\ket{\downarrow}_{x}+
\sqrt{\frac{1}{6}}\left(\ket{0}_{a}\ket{1}_{b}+
\ket{1}_{a}\ket{0}_{b}\right)\ket{\uparrow}_{x}. \end{eqnarray}
For the state $\ket{\psi}=\alpha\ket{0}+\beta\ket{1}$, after applying the cloning operation $U$, we obtain the density matrices $\rho_{a}$ and $\rho_{b}$ by taking partial traces. We then define the cloning fidelity $F=\bra{\psi}\rho_{a}\ket{\psi}$. For the case of the $1\to2$ UQCM, it can be proved that $F=\frac{5}{6}$~\cite{Buzek1996}.
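As an illustration, this value can be checked numerically by applying the transformation above to a generic input, tracing out the second copy and the ancilla, and evaluating the overlap with the input state. The following minimal Python/NumPy sketch (the basis ordering and variable names are our own choices) prints $5/6$ up to rounding for any input qubit.
\begin{verbatim}
import numpy as np

# Basis ordering |a b x>, with "up" = 0 and "down" = 1; index = 4*a + 2*b + x.
def ket(a, b, x):
    v = np.zeros(8, dtype=complex)
    v[4*a + 2*b + x] = 1.0
    return v

# Images of |0 0 up> and |1 0 up> under the UQCM transformation U above.
out0 = np.sqrt(2/3)*ket(0, 0, 0) + np.sqrt(1/6)*(ket(0, 1, 1) + ket(1, 0, 1))
out1 = np.sqrt(2/3)*ket(1, 1, 1) + np.sqrt(1/6)*(ket(0, 1, 0) + ket(1, 0, 0))

# A random normalized input qubit alpha|0> + beta|1>.
rng = np.random.default_rng(1)
alpha, beta = rng.normal(size=2) + 1j*rng.normal(size=2)
norm = np.sqrt(abs(alpha)**2 + abs(beta)**2)
alpha, beta = alpha/norm, beta/norm

psi_out = alpha*out0 + beta*out1             # joint state of (a, b, ancilla)
rho = np.outer(psi_out, psi_out.conj())      # 8 x 8 density matrix

# Partial trace over copy b and the ancilla x to get the reduced state of a.
rho_a = np.einsum('ikjk->ij', rho.reshape(2, 4, 2, 4))

psi_in = np.array([alpha, beta])
F = np.real(psi_in.conj() @ rho_a @ psi_in)
print(F)                                     # 0.8333... = 5/6 for any input
\end{verbatim}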
For the phase covariant cloning there already exists a method of adjusting a parameter in the UQCM to get a better cloning fidelity~\cite{Karimipour2002}.
\section{Quantum cloning machine for a qubit between two latitudes on the Bloch sphere}\label{sec:secQCM}
The state we wish to clone can be written as \begin{equation}\ket{\psi}=\cos{\frac{\theta}{2}}\ket{0}+ \sin{\frac{\theta}{2}}e^{i\phi}\ket{1}\end{equation} where $\phi\in[0,2\pi]$ and \begin{equation}\theta_1\leq\theta\leq\theta_2.\end{equation} This is to say, the states we considered here are uniformly distributed in a belt between two latitudes on the Bloch sphere. We assume the following unitary transformation for our QCM: \begin{eqnarray}\label{eq:eqGQCM}
U: &&
\ket{0}_{a}\ket{0}_{b}\ket{\uparrow}_{x}\rightarrow
\cos{\alpha}\ket{0}_{a}\ket{0}_{b}\ket{\uparrow}_{x}+
\sin{\alpha}\ket{\xi^{+}}_{ab}\ket{\downarrow}_{x}\nonumber \\
&&
\ket{1}_{a}\ket{0}_{b}\ket{\uparrow}_{x}\rightarrow
\cos{\beta}\ket{1}_{a}\ket{1}_{b}\ket{\downarrow}_{x}+
\sin{\beta}\ket{\xi^{+}}_{ab}\ket{\uparrow}_{x} \end{eqnarray} where $\ket{\xi^{+}}$ is defined as $\ket{\xi^{+}}=\frac{1}{\sqrt{2}}\left(\ket{01}_{ab}+ \ket{10}_{ab}\right)$, with $\alpha$ and $\beta$ being parameters to be determined. We restrict ourselves to a 'symmetric' transformation and prove its optimality below.
After transformation by the unitary operation U, we can get the following state: \begin{eqnarray}
\ket{\psi_{a}}\ket{0}_{b}\ket{\uparrow}_{x}
& \rightarrow &
\cos{\frac{\theta}{2}}\cos{\alpha}\ket{00}_{ab}\ket{\uparrow}_{x}+
\sin{\alpha}\cos{\frac{\theta}{2}}\ket{\xi^{+}}_{ab}\ket{\downarrow}_{x}
\nonumber \\
& &
+\sin{\frac{\theta}{2}}\cos{\beta}e^{i\phi}\ket{11}_{ab}\ket{\downarrow}_{x}+
\sin{\frac{\theta}{2}}\sin{\beta}e^{i\phi}\ket{\xi^{+}}_{ab}\ket{\uparrow}_{x}. \end{eqnarray} By taking partial trace, we can calculate the reduced density matrices $\rho_{a}$ and $\rho_{b}$ of particle a and b respectively. \begin{eqnarray}
\rho_{a}=\rho_{b}
&=&
\left(\frac{1}{\sqrt{2}}\sin{\beta}\sin{\frac{\theta}{2}}\right)^2
\oprod{0}{0}+\left(\frac{1}{\sqrt{2}}\sin{\alpha}\cos{\frac{\theta}{2}}\right)^2
\oprod{1}{1} \nonumber \\
& &
+ \left(\cos{\frac{\theta}{2}}\cos{\alpha}\ket{0}+
\frac{1}{\sqrt{2}}\sin{\frac{\theta}{2}}\sin{\beta}e^{i\phi}\ket{1}\right)
\left(\cos{\frac{\theta}{2}}\cos{\alpha}\bra{0}+
\frac{1}{\sqrt{2}}\sin{\frac{\theta}{2}}\sin{\beta}e^{-i\phi}\bra{1}\right)
\nonumber \\
& &
+\left(\sin\frac{\theta}{2}\cos{\beta}e^{i\phi}\ket{1}+
\frac{1}{\sqrt{2}}\cos{\frac{\theta}{2}}\sin{\alpha}\ket{0}\right)
\left(\sin\frac{\theta}{2}\cos{\beta}e^{-i\phi}\bra{1}+
\frac{1}{\sqrt{2}}\cos{\frac{\theta}{2}}\sin{\alpha}\bra{0}\right). \end{eqnarray} With the density matrices of the subsystem, we can get the fidelity \begin{eqnarray}
F
&=&
\bra{\psi}\rho_{a}\ket{\psi} \nonumber \\
&=&
\cos^{4}{\frac{\theta}{2}}\left(\frac{1}{2}+\frac{1}{2}\cos^{2}{\alpha}\right)
+\sin^{4}{\frac{\theta}{2}}\left(\frac{1}{2}+\frac{1}{2}\cos^{2}{\beta}\right)
+\frac{1}{8}\sin^{2}{\theta}\left(\sin^{2}{\alpha}+\sin^{2}{\beta}\right)
+\frac{\sqrt{2}}{4}\sin^{2}{\theta}\sin(\alpha+\beta). \end{eqnarray} Averaging the fidelity over all possible angles $\theta$, we have~\cite{Gisin1997} \begin{eqnarray}
\bar{F}
&=&
\frac{\int_{\theta_{1}}^{\theta_{2}}{F\sin{\theta}d{\theta}}}
{\int_{\theta_{1}}^{\theta_{2}}{\sin{\theta}d{\theta}}}
\nonumber\\
&=&
\frac{1}{2}+\frac{1}{6}K-P\sin(\alpha+\beta)-Q\sin^{2}{\alpha}-R\sin^{2}{\beta} \end{eqnarray} where \begin{equation}\label{eq:eqPara}
\left\{
\begin{array}{l}
K=\cos^{2}{\theta_{1}}+\cos{\theta_1}\cos{\theta_2}+\cos^{2}{\theta_2}
\\
P=\frac{\sqrt{2}}{12}K-\frac{\sqrt{2}}{4} \\
Q=\frac{1}{12}K+\frac{1}{8}\left(\cos{\theta_1}+\cos{\theta_2}\right)
\\
R=\frac{1}{12}K-\frac{1}{8}\left(\cos{\theta_1}+\cos{\theta_2}\right)
\end{array}
\right. \end{equation} and $K,P,Q,R$ are constants determined by the given $\theta_1$ and $\theta_2$. In order to find the maximum of $\bar{F}$, we differentiate $\bar{F}$ partially with respect to $\alpha$ and $\beta$. For $\bar{F}$ to be maximal, the parameters $\alpha$ and $\beta$ of the optimal QCM must satisfy the following equations: \begin{equation}\label{eq:eqQCM}
\left\{
\begin{array}{l}
P\cos(\alpha+\beta)+Q\sin(2\alpha)=0\\
P\cos(\alpha+\beta)+R\sin(2\beta)=0.
\end{array}
\right. \end{equation}
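For completeness, the first of these conditions follows from
\begin{equation*}
\frac{\partial \bar{F}}{\partial\alpha}
=-P\cos(\alpha+\beta)-2Q\sin{\alpha}\cos{\alpha}
=-\left[P\cos(\alpha+\beta)+Q\sin(2\alpha)\right]=0,
\end{equation*}
and the second one is obtained in the same way by differentiating with respect to $\beta$.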
\subsection{Solution of $\alpha$ and $\beta$ with maximum $\bar{F}$.}
With the formulation above, we can now seek the values of $\alpha$ and $\beta$ that maximize $\bar{F}$. Consider first the situation in which the states cover the whole Bloch sphere. In this case $\theta_{1}=0$ and $\theta_2=\pi$, so $K=1$, $P=-\frac{\sqrt{2}}{6}$ and $Q=R=\frac{1}{12}$. Solving the equations~\eqref{eq:eqQCM} gives $\cos{\alpha}=\cos{\beta}=\sqrt{\frac{2}{3}}$. This is the well-known result for the UQCM.
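Indeed, substituting $\cos^{2}{\alpha}=\cos^{2}{\beta}=\frac{2}{3}$ (so that $\sin^{2}{\alpha}=\sin^{2}{\beta}=\frac{1}{3}$ and $\sin(\alpha+\beta)=\frac{2\sqrt{2}}{3}$) together with $K=1$, $P=-\frac{\sqrt{2}}{6}$ and $Q=R=\frac{1}{12}$ into the expression for $\bar{F}$ gives
\begin{equation*}
\bar{F}=\frac{1}{2}+\frac{1}{6}+\frac{\sqrt{2}}{6}\cdot\frac{2\sqrt{2}}{3}
-\frac{1}{12}\cdot\frac{1}{3}-\frac{1}{12}\cdot\frac{1}{3}
=\frac{1}{2}+\frac{1}{6}+\frac{2}{9}-\frac{1}{18}=\frac{5}{6},
\end{equation*}
the optimal UQCM fidelity. Similarly, for $\theta_{1}=\theta_{2}=\frac{\pi}{2}$ one finds $K=0$, $Q=R=0$ and $P=-\frac{\sqrt{2}}{4}$, the conditions~\eqref{eq:eqQCM} reduce to $\cos(\alpha+\beta)=0$, and the maximal mean fidelity is $\bar{F}=\frac{1}{2}+\frac{\sqrt{2}}{4}=\frac{1}{2}\left(1+\frac{1}{\sqrt{2}}\right)$, the phase-covariant value.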
Turning now to the general situation, we can easily derive \begin{equation}
\cos(\alpha+\beta)\left[2QR\sin(\alpha-\beta)+P(R-Q)\right]=0. \end{equation}
If the condition \begin{equation}\label{eq:eqcond}
\left|\frac{P(Q-R)}{2QR}\right|\leq 1 \end{equation} is satisfied, we have \begin{equation}
\sin{(\alpha-\beta)}=\frac{P(Q-R)}{2QR}. \end{equation} Then we can get the following solution \begin{equation}\label{eq:eqsolution}
2\alpha=\arcsin\left[\frac{P(Q+R)}{S}\right]+
\arcsin\left[\frac{P(Q-R)}{2QR}\right], \quad
2\beta=\arcsin\left[\frac{P(Q+R)}{S}\right]-
\arcsin\left[\frac{P(Q-R)}{2QR}\right] \end{equation} where $S=-\sqrt{4QRP^2+4Q^{2}R^{2}}$. If
$\left|\frac{P(Q-R)}{2QR}\right|>1$, we can get $\cos(\alpha+\beta)=0$ and $\sin{2\alpha}=0, \sin{2\beta}=0$. Since $P\leq 0$, there are only two cases \begin{enumerate}
\item If $|\theta_{1}-\frac{\pi}{2}|\geq
|\theta_{2}-\frac{\pi}{2}|$, we have
$\alpha=0,\beta=\frac{\pi}{2}$ and
$\bar{F}=\frac{1}{2}+\frac{1}{6}K-P-R$;
\item if $|\theta_{1}-\frac{\pi}{2}|<
|\theta_{2}-\frac{\pi}{2}|$, we have
$\alpha=\frac{\pi}{2},\beta=0$ and
$\bar{F}=\frac{1}{2}+\frac{1}{6}K-P-Q$. \end{enumerate}
In summary, the mean fidelity of our QCM is: \begin{equation}\label{eq:eqMF}
\bar{F}=\left\{
\begin{array}{ll}
\frac{1}{2}+\frac{1}{6}K-P\sin(\alpha+\beta)-
Q\sin^{2}{\alpha}-R\sin^{2}{\beta}, & |T|\leq 1; \\
\frac{1}{2}+\frac{1}{6}K-P-R, & |T|>1 \, \textrm{and} \,
|\theta_{1}-\frac{\pi}{2}|\geq |\theta_{2}-\frac{\pi}{2}|; \\
\frac{1}{2}+\frac{1}{6}K-P-Q, & |T|>1 \, \textrm{and} \,
|\theta_{1}-\frac{\pi}{2}|< |\theta_{2}-\frac{\pi}{2}|.
\end{array}
\right. \end{equation} where $T=\frac{P(Q-R)}{2QR}$ and $\alpha, \beta$ are given in Eq.~\eqref{eq:eqsolution}.
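As a numerical sanity check of Eq.~\eqref{eq:eqMF}, one can maximize $\bar{F}(\alpha,\beta)$ by brute force on a grid and compare with the special cases above. The following Python/NumPy sketch (function and variable names are ours, and the grid size is an arbitrary choice) recovers $5/6$ for $(\theta_1,\theta_2)=(0,\pi)$ and $\frac{1}{2}(1+\frac{1}{\sqrt{2}})\approx0.854$ for $\theta_1=\theta_2=\frac{\pi}{2}$.
\begin{verbatim}
import numpy as np

def mean_fidelity(theta1, theta2, num=600):
    # Constants K, P, Q, R as defined in the text.
    c1, c2 = np.cos(theta1), np.cos(theta2)
    K = c1**2 + c1*c2 + c2**2
    P = np.sqrt(2)/12*K - np.sqrt(2)/4
    Q = K/12 + (c1 + c2)/8
    R = K/12 - (c1 + c2)/8
    # Brute-force maximization of Fbar over a grid of (alpha, beta).
    a = np.linspace(0.0, np.pi, num)
    A, B = np.meshgrid(a, a)
    F = 0.5 + K/6 - P*np.sin(A + B) - Q*np.sin(A)**2 - R*np.sin(B)**2
    return F.max()

print(mean_fidelity(0.0, np.pi))          # approx 5/6 (UQCM)
print(mean_fidelity(np.pi/2, np.pi/2))    # approx 0.8536 (phase covariant)
print(mean_fidelity(np.pi/4, 3*np.pi/4))  # an intermediate belt
\end{verbatim}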
\subsection{Optimization}
Our QCM discussed above has a 'symmetric' form defined by Eq.~\eqref{eq:eqGQCM}. In this section, we will show that it suffices to consider this symmetric form. We consider all possible forms of quantum cloning. If the transformation $U$ of the bases is not of such a symmetric form, the reduced density matrices of the particles a and b are in general not equal to each other. We then get different fidelities for particles a and b $$F_{a}=\bra{\psi}\rho_{a}\ket{\psi}, \quad\quad F_{b}=\bra{\psi}\rho_{b}\ket{\psi}.$$ For the purpose of cloning, we define the fidelity as \begin{equation}
F_{U}=\min(F_{a},F_{b}) \end{equation} Then we can construct another quantum cloning machine Q to satisfy: \begin{equation}
\rho_{a}^{'}=\rho_{b}, \quad \quad \rho_{b}^{'}=\rho_{a} \end{equation} where $\rho_{a}$ and $\rho_{b}$ are the reduced density matrices of two particles a and b.
If we use U and Q with probability $\frac{1}{2}$ each to copy the state, the reduced density matrices of the two particles will both equal $\frac{1}{2}(\rho_{a}+\rho_{b})$. In this situation, the cloning fidelity is $F_{Q}=\bra{\psi}\frac{1}{2}(\rho_{a}+\rho_{b})\ket{\psi}=\frac{1}{2}(F_{a}+F_{b})\geq\min(F_{a},F_{b})=F_{U}$, and the combined machine is symmetric. In other words, for every possible cloning transformation we can find a 'symmetric' one that performs at least as well, so we need only consider the form given by Eq.~\eqref{eq:eqGQCM}. After solving this symmetric case, we obtain the optimal QCM.
\section{Some Discussion about our QCM }
Given $\theta_{1}$ and $\theta_{2}$, we can calculate the optimal fidelity using Eq.~\eqref{eq:eqMF}. Fig.~1 presents all situations with states uniformly distributed in a belt on the Bloch sphere. From Fig.~1 and a simple derivation, we arrive at the following results: \begin{enumerate}
\item If $\theta_1=0,\theta_2=\pi$, it is the situation of the UQCM
and the optimal fidelity is $F=\frac{5}{6}$ which corresponds to
points B$_{1}$ or B$_{2}$ in Fig.1;
\item If $\theta_{1}=\theta_{2}=\frac{\pi}{2}$, we encounter the
situation of Phase-covariant QCM and the optimal fidelity is
$F=\frac{1}{2}(1+\frac{1}{\sqrt{2}})$ which corresponds to the
point C in Fig.1;
	\item If one latitude of the belt is fixed, say $\theta_1$ is held
	constant (without losing any generality), the minimum fidelity
	is attained at the point with $\theta_2=\pi-\theta_1$. For
	example, fixing $\theta_1=\frac{\pi}{4}$, Fig.~2 draws the optimal
	fidelity for $\theta_2\in[\frac{\pi}{4},\pi]$. The minimum
	optimal fidelity is obtained when
	$\theta_{2}=\pi-\theta_{1}=\frac{3\pi}{4}$. Contrary to one's
	intuition, the cloning fidelity can rise as the region in which
	the unknown input state may lie grows. \end{enumerate}
\begin{figure}
\caption{The optimal fidelity of 1 to 2 cloning for states between any
two latitudes of the Bloch sphere. Points B$_{1}$ and B$_{2}$ correspond to the
situation of UQCM. Point C corresponds to the situation of
Phase-covariant QCM. The bottom line corresponds to the situation
of $\theta_{1}+\theta_{2}=\pi$.}
\label{fig:figTotal}
\end{figure}
\begin{figure}
\caption{The optimal fidelity with $\theta_{1}=\frac{\pi}{4},
\theta_{2}\in [\frac{\pi}{4},\pi]$. Point D corresponds to the
minimum optimal fidelity with $\theta_{2}=\frac{3\pi}{4}$.}
\label{fig:figMin}
\end{figure}
In general, we sometimes encounter the problem of multiple cloning, i.e., $1\to N$ and $M\to N$. Here we qualitatively extend the discussion of $1\to 2$ cloning to the cases of $1\rightarrow N$ and $M\rightarrow N$.
Consider again the states $\ket{\psi}=\cos{\frac{\theta}{2}}\ket{\uparrow}+ \sin{\frac{\theta}{2}}e^{i\phi}\ket{\downarrow}$, where $\phi\in[0,2\pi]$ and $\theta_1\leq\theta\leq\theta_2$.
We can assume~\cite{Fan2001} \begin{eqnarray}
U_{1,N}\ket{\uparrow}\otimes R
&=&
\sum_{j=0}^{N-1}{a_{j}\ket{(N-j)\uparrow,j\downarrow}\otimes
R^{j}} \nonumber \\
U_{1,N}\ket{\downarrow}\otimes R
&=&
\sum_{j=0}^{N-1}{b_{N-1-j}\ket{(N-1-j)\uparrow,(j+1)\downarrow}\otimes
R^{j}} \nonumber \end{eqnarray} where $R$ and $R^{j}$ denote states of the auxiliary quantum system.
We know that the parameters are not completely independent. The reduced density matrices of the $N$ output particles must have the same form in order to achieve optimality, so we can assume that the cloning transformation of the bases takes a symmetric form.
Using the same method and defining the fidelity as $F=\bra{\psi}\rho_{a}\ket{\psi}$, we get \begin{equation}
\bar{F}=\frac{\int_{\theta_1}^{\theta_2}{F\sin{\theta}d{\theta}}}
{\int_{\theta_1}^{\theta_2}{\sin{\theta}d{\theta}}}. \end{equation} We then calculate the partial derivatives with respect to the free parameters $a_{j}$ and $b_{j}$ and set them to zero, obtaining the following equations: \begin{equation}
\frac{\partial{\bar{F}}}{\partial{a_j}}=0,\quad
\frac{\partial{\bar{F}}}{\partial{b_j}}=0. \quad \quad
(j=0,1,\cdots,N-1) \end{equation} These are high-order multivariate equations and, in general, they have no analytical solution. A similar result holds for the $M\rightarrow N$ situation.
\section{Concluding remark} In summary, we have presented the quantum cloning machine for qubits uniformly distributed on a belt between two latitudes of the Bloch sphere. Previous results regarding $1\rightarrow 2$ cloning, for both universal cloning and phase-covariant cloning, are unified into a single formalism, so that they are recovered as special cases of the current results.
{\bf Acknowledgement:} This work was supported in part by the National Basic Research Program of China grant No. 2007CB907900 and 2007CB807901, NSFC grant No. 60725416 and China Hi-Tech program grant No. 2006AA01Z420.
\end{document} |
\begin{document}
\begin{abstract} For maps of one complex variable, $f$, given as the sum of a degree $n$ power map and a degree $d$ polynomial, we provide necessary and sufficient conditions that the geometric limit as $n$ approaches infinity of the set of points that remain bounded under iteration by $f$ is the closed unit disk or the unit circle. We also provide a general description, for many cases, of the limiting set. \end{abstract}
\title{Geometric limits of Julia sets for sums of power maps and polynomials}
\section{Introduction}
Let $q$ be a degree $d$ polynomial; define $f_{n}\colon\mathbb C\rightarrow\mathbb C$ by \[f_{n}(z)\ =\ z^n+q(z),\] and note that $f_{n}$ is the sum of a power map (whose power we increase in the limit) and a fixed degree $d$ polynomial, $q$.
For a map $f\colon\mathbb C\rightarrow\mathbb C$, the filled Julia set for $f$, $K(f)$, is the set of points that remain bounded under iteration by $f$. We use the notation $S_0=\{z\in\mathbb C\colon|z|=1\}$ for the unit circle and $\overline{\mathbb D}=\{z\in\mathbb C\colon|z|\leq1\}$ for the closed unit disk. The purpose of this study is to describe the limit of $K(f_{n})$ in the Hausdorff topology as $n\rightarrow\infty$.
This work was inspired the 2012 study by Boyd and Schulz \cite{boyd} that included a result for the family $f_{n}$ with $\deg q=0$; that is, $q(z)=c\in\mathbb C$. Among many other things, they proved \begin{theorem}[Boyd-Shulz, 2012] \label{THM:BS} If $q(z)=c$, then under the Hausdorff metric, \begin{align*}
\mbox{for any $|c|<1$, }&\lim_{n\rightarrow\infty}K(f_{n})=\overline{\mathbb D};\\
\mbox{for any $|c|>1$, }&\lim_{n\rightarrow\infty}K(f_{n})=S_0. \end{align*} \end{theorem}
It comes as little surprise that this phenomenon is easily disrupted. It was shown in \cite{krs} that when $q(z)=c$ with $|c|=1$, the limiting behavior of $K(f_{n})$ depends on number-theoretic properties of $c$ and the limit almost always fails to exist. Another study by Alves \cite{alves} has shown that for maps of the form $f_{n,c}(z)=z^n+cz^k$ for a fixed positive integer $k$, if $|c|<1$, then the limit of $K(f_{n,c})$ as $n\rightarrow\infty$ is $S_0$.
Returning to the more general case in which $q$ is any polynomial, the limiting behavior of $K(f_n)$ is substantially more interesting. See Figure \ref{FIG:PICS} for examples of filled Julia sets for $f_{n}$, where $q(z)=z^2+c$ and $|c|<1$, that very clearly fail to limit to either the closed unit disk or the unit circle. The color gradation in the pictures indicates the number of iterates required to exceed a fixed bound for modulus.
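To give an idea of how such pictures can be generated, the following Python/NumPy sketch implements a basic escape-time computation for $f_{n}(z)=z^{n}+z^{2}+c$; the grid, escape bound, and iteration cap are illustrative choices, not necessarily those used for the figures.
\begin{verbatim}
import numpy as np

def escape_time(n, c, xs, ys, max_iter=60, bound=4.0):
    # Iterate f_n(z) = z^n + q(z) with q(z) = z^2 + c on a grid of points.
    X, Y = np.meshgrid(xs, ys)
    z = X + 1j*Y
    count = np.full(z.shape, max_iter)      # iterations needed to escape
    alive = np.ones(z.shape, dtype=bool)    # points that have not escaped yet
    for k in range(max_iter):
        z[alive] = z[alive]**n + z[alive]**2 + c
        escaped = alive & (np.abs(z) > bound)
        count[escaped] = k
        alive &= ~escaped                    # escaped points are frozen
    return count   # points with count == max_iter approximate K(f_n)

xs = np.linspace(-1.3, 1.3, 400)
grid = escape_time(200, 0.25 + 0.25j, xs, xs)   # q_1 of Figure FIG:PICS
\end{verbatim}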
Some results from the $\deg q=0$ cases still hold. If $|z|>1$, we can still expect the image of $z$ under $f_{n}$ to have large modulus for large enough $n$. Guided by this intuition, we find the following generalization of a lemma from \cite{boyd}. Its proof, which is similar to the one in \cite{boyd}, is given in Section \ref{SEC:PROOF}. We adopt the notation
\[\mathbb D_r=\{z\in\mathbb C\colon|z|<r\}\mbox{ and }\overline{\mathbb D}_r=\{z\in\mathbb C\colon|z|\leq r\}.\] \begin{lemma} \label{LEM:DISKBOUND} For any polynomial $q$ and any $\epsilon>0$, there is an $N\geq 2$ such that for all $n\geq N$, \[K(f_{n})\subset \mathbb D_{1+\epsilon}.\] \end{lemma}
\begin{figure}
\caption{$K(f_{200})$ with $q_1(z)=z^2+0.25+0.25i$ (left) and $q_2(z)=z^2+0.45+0.25i$ (right)}
\label{FIG:PICS}
\end{figure}
Some of the dependence of the limiting behavior of $K(f_{n})$ on $q$ is obvious; by Lemma \ref{LEM:DISKBOUND}, one should expect any point whose orbit by $q$ leaves the unit disk to not be in $K(f_{n})$ for all $n$ sufficiently large. Thus, one might expect $\uplim_{n\rightarrow\infty}K(f_{n})$ to be contained in the closure of the set
\[\{z \in\mathbb C\colon|q^k(z)|< 1\mbox{ for all }k\}.\] However, this is not quite the case. One can prove, as in the $\deg q=0$ cases, that $S_0$ is always a subset of $\lowlim_{n\rightarrow\infty}K(f_{n})$.
Evidence for this fact (proved in Section \ref{SEC:PROOF}) comes from noting that when $n$ is much larger than the degree of $q$, the $n$ fixed points of $f_{n}$ are roughly equidistributed around the unit circle. This result is connected to the work of Erd\H{o}s, Tur\'an, et al.~\cite{erdos,hughes,iz} on the distribution of zeros of sequences of complex polynomials.
By invariance properties of the filled Julia set, one should then expect the preimages of $S_0$ by $q$ (that still have modulus less than or equal to one) to be contained in $\lowlim_{n\rightarrow\infty}K(f_{n})$. The basins of the fixed points accumulating on $S_0$ and their preimages (appearing as small black spots) can be seen in Figure \ref{FIG:PICS}. These ideas and the preceding lemmas lead to the next definition and theorem.
\begin{definition} Let \begin{equation*} K_{\infty}:=K_q\cup\bigcup_{j\geq0}S_j, \end{equation*}
where $K_q:=\{z\in\mathbb C\colon |q^k(z)|<1\mbox{ for all }k\}$, $S_0$ is the unit circle $\{z\in\mathbb C\colon|z|=1\}$, and for any integer $j\geq0$,
\[S_{j}:=\{z\in\mathbb C\colon |q^j(z)|=1\mbox{ and }|q^i(z)|<1\mbox{ for all }0\leq i<j\}.\] \end{definition} Thus $K_{\infty}$ is the union of the set of points whose orbits by $q$ remain in $\mathbb D$ (the set $K_q$), the unit circle ($S_0$), and the parts of the iterated preimages of $S_0$ that remain in $\mathbb D$ at each step before reaching $S_0$ (the sets $S_j$ for $j\geq1$). See Figure \ref{FIG:KINF} for an example of $K(f_{n})$ with $q(z)=z^2-0.1+0.75i$ and several different values of $n$ compared to a sketch of $K_{\infty}$ for this polynomial $q$.
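For a simple illustration, consider the constant case $q(z)=c$ of Theorem \ref{THM:BS}: if $|c|<1$, then $K_q=\mathbb D$ and every $S_j$ with $j\geq1$ is empty, so $K_{\infty}=\overline{\mathbb D}$; if instead $|c|>1$, then $K_q=\emptyset$ and $K_{\infty}=S_0$. This is consistent with the limits in Theorem \ref{THM:BS}.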
\begin{figure}
\caption{Top, left to right: $K(f_{n})$ with $q(z)=z^2-0.1+0.75i$ and $n=6,12,25,50$. Bottom left: $K(f_{1800})$. Bottom right: Sketch of $K_{\infty}$, where $K_q$ is green, $S_0$ is red, and the sets $S_j$ are magenta.}
\label{FIG:KINF}
\end{figure}
By the construction of $K_{\infty}$, any point bounded a definite distance away from $K_{\infty}$ will eventually be mapped a definite distance outside the unit disk. Then Lemma \ref{LEM:DISKBOUND} implies such a point must be in the basin of infinity for all $f_n$ with large enough $n$, so we have the following theorem. \begin{theorem} \label{THM:MAIN} For any polynomial $q$, \[K_{\infty}:=K_q\cup\bigcup_{j=0}^{\infty}S_j\supset\uplim_{n\rightarrow\infty}K(f_n)\supset\lowlim_{n\rightarrow\infty}K(f_n)\supset\partial K_q\cup\bigcup_{j=0}^{\infty}S_j.\] \end{theorem}
What is happening here heuristically is that as long as the orbit of $z$ remains in $\mathbb D$, the polynomial $q(z)$ dominates the dynamics; if the orbit of $z$ leaves $\overline{\mathbb D}$, then the power map $z^n$ dominates. When the orbit hits $S_0$, it is not clear whether $q(z)$ or $z^n$ should win, so you get a point in the Julia set. Now simple conditions that describe precisely when we can expect the closed unit disk, $\overline{\mathbb D}$, or the unit circle, $S_0$, as a limit follow from Theorem \ref{THM:MAIN}: \begin{corollary} \label{COR:MAIN} Suppose $\deg q\geq2$ and $q$ has no fixed points in $S_0$. Under the Hausdorff metric, \begin{enumerate} \item $\displaystyle\lim_{n\rightarrow\infty}K(f_{n})=\overline{\mathbb D}$ if and only if $q(\overline{\mathbb D})\subset\overline{\mathbb D}$, and \item $\displaystyle\lim_{n\rightarrow\infty}K(f_{n})=S_0$ if and only if $q(\overline{\mathbb D})\cap\mathbb D=\emptyset$. \end{enumerate} \end{corollary}
With a couple additional assumptions on $q$, we also have the following stronger result. \begin{theorem} \label{THM:HYPERBOLIC} If $\deg q\geq2$ and $q$ is hyperbolic with no attracting periodic points on $S_0$, then under the Hausdorff metric \[\lim_{n\rightarrow\infty}K(f_{n})=K_{\infty}.\] \end{theorem}
We are left with the following open question. \begin{question} In what ways can the hypotheses from Theorem \ref{THM:HYPERBOLIC} on polynomial $q$ be relaxed and still have $\lim_{n\rightarrow\infty} K(f_{n}) = K_\infty$? \end{question}
Following a brief tour of background information and examples in Section \ref{SEC:BG}, we present the proof of Theorem \ref{THM:MAIN} in Section \ref{SEC:PROOF}. Lastly, Section \ref{SEC:HYPERBOLIC} is devoted to the proofs of theorems that require specific hypotheses on $q$.
The authors are grateful to Roland Roeder at Indiana University Purdue University Indianapolis for his very helpful advice and the Butler University Mathematics Research Camp, where this project began. We are also very grateful to the referee for their insightful and helpful suggestions. All images were created with the Dynamics Explorer \cite{DE} program.
\section{Background and Examples}\label{SEC:BG}
\subsection{Notation and Terminology}
The main results in this note rely on the convergence of sets in the Riemann sphere, $\hat{\mathbb C}$, where the convergence is with respect to the Hausdorff metric. Given two sets $A,B$ in a metric space $(X,d)$, the Hausdorff distance $d_{\mathcal H}(A,B)$ between the sets is defined as \begin{eqnarray*} d_{\mathcal H}(A,B)&=&\max\left\{\sup_{a\in A}d(a,B),\sup_{b\in B}d(b,A)\right\}\\ &=&\max\left\{\sup_{a\in A}\inf_{b\in B}d(a,b),\sup_{b\in B}\inf_{a\in A}d(a,b)\right\}. \end{eqnarray*} The distance from each point in $A$ to $B$ has a least upper bound, and the same is true for each point from $B$ to $A$. The Hausdorff distance is the larger of these two suprema. As an example, consider a regular hexagon $A$ with sides of length $r$ inscribed in a circle $B$ of radius $r$. In this case, $d_{\mathcal H}(A,B)=r(1-\sqrt3/2)$, the shortest distance from the circle to the midpoint of any of the sides of the hexagon.
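To see where this value comes from, note that the midpoints of the sides are the points of the hexagon closest to the center, at distance (the apothem) $\frac{\sqrt{3}}{2}r$. A side midpoint therefore lies at distance $r-\frac{\sqrt{3}}{2}r$ from the circle, and the point of the circle radially beyond it lies at the same distance from the hexagon, so both suprema in the definition equal $r\left(1-\frac{\sqrt{3}}{2}\right)$.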
Filled Julia sets $K(f_{n})$ are compact \cite{beardon}, bounded, and contained in the compact space $\hat{\mathbb C}$. Moreover, with the Hausdorff metric $d_{\mathcal H}$, the space of all subsets of $\hat{\mathbb C}$ is complete \cite{henrikson}. Suppose $S_n$ and $S$ are compact subsets of $\mathbb C$. We say $S_n$ converges to $S$ and write $\lim_{n\rightarrow\infty}S_n=S$ if for all $\epsilon>0$, there is $N>0$ such that for all $n\geq N$, we have $d_{\mathcal H}(S_n, S)<\epsilon$.
We've also made use of Painlev\'e-Kuratowski set convergence \cite{ROCKAFELLAR}. For a sequence of sets, $S_n$, we have \begin{eqnarray*} \lowlim_{n\rightarrow\infty}S_n&=&\{z\in\mathbb C\colon\uplim_{n\rightarrow\infty}d(z,S_n)=0\},\\ \uplim_{n\rightarrow\infty}S_n&=&\{z\in\mathbb C\colon\lowlim_{n\rightarrow\infty}d(z,S_n)=0\}. \end{eqnarray*} It follows immediately that $\lowlim_{n\rightarrow\infty}S_n\subset\uplim_{n\rightarrow\infty}S_n$. We say $S_n$ converges to a set $S$ in the sense of Painlev\'e-Kuratowski if $\lowlim_{n\rightarrow\infty}S_n=\uplim_{n\rightarrow\infty}S_n=S$, or equivalently, $\uplim_{n\rightarrow\infty}S_n\subseteq\lowlim_{n\rightarrow\infty}S_n=S$. It is shown in \cite{DONTCHEV} that for sequences of bounded sets, the notion of Painlev\'e-Kuratowski set convergence agrees with the notion of convergence with Hausdorff distance.
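For a simple illustration (our example): if $S_n=\{0\}$ for even $n$ and $S_n=\{1\}$ for odd $n$, then $\lowlim_{n\rightarrow\infty}S_n=\emptyset$ while $\uplim_{n\rightarrow\infty}S_n=\{0,1\}$, so the sequence does not converge in either sense.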
\subsection{Complex Dynamics}
We provide here only the fine details relevant to this paper. Thorough explorations of this subject and proof of all the facts below can be found in \cite{milnor,beardon,carleson}. The Fatou set of rational map $f\colon\hat{\mathbb C}\rightarrow\hat{\mathbb C}$, denoted $\mathcal F(f)$, is the set of points for which the iterates of $f$ form a normal family; the Julia set of $f$, denoted $J(f)$, is the complement of $\mathcal F(f)$ in $\hat{\mathbb C}$. When $f$ is a polynomial map, the Julia set of $f$ is the boundary of the filled Julia set; that is, $J(f)=\partial K(f)$.
We say a point $z\in\hat{\mathbb C}$ is periodic for $f$ with period $k$ if $f^k(z)=z$ and the points $z,f(z),\dots,f^{k-1}(z)$ are all distinct. The multiplier $\lambda$ of a periodic point $z_0$ of period $k$ is defined as \[\lambda=(f^k)'(z_0)=\prod_{i=0}^{k-1}f'(f^i(z_0)).\]
If $|\lambda|<1$, then $z_0$ is attracting; if $|\lambda|>1$, $z_0$ is repelling; if $|\lambda|=1$, $z_0$ is indifferent. Repelling periodic points are contained in $J(f)$; in fact, repelling periodic points are dense in $J(f)$. Attracting periodic points, on the other hand, are contained in $\mathcal F(f)$. Moreover, for every attracting periodic point $z_0$ of period $k$, there is an open neighborhood $B(z_0,\epsilon)$ such that $f^k(B(z_0,\epsilon))\subset B(z_0,\epsilon)$ and the orbit by $f^k$ of any point in $B(z_0,\epsilon)$ converges to $z_0$. The set of all points whose orbits by $f^k$ converge to $z_0$ is called the basin of attraction for $z_0$. Finally, a rational map is called hyperbolic if every point in $\mathcal F(f)$ converges to an attracting periodic cycle.
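For a concrete example of these notions, take $q(z)=z^{2}$: its fixed points are $0$ and $1$, with multipliers $q'(0)=0$ and $q'(1)=2$, so $0$ is an attracting (indeed superattracting) fixed point lying in $\mathcal F(q)$, while $1$ is repelling and lies in $J(q)=S_0$.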
\subsection{Examples}\label{SEC:EX}
\begin{example} Let $q(z)=0.75z^2+c_j$ with $c_1=0.21+0.017i$, $c_2=0.41+0.047i$, and $c_3=1.41+1.17i$. For $c_1$, we have $q(\overline{\mathbb D})\subset\mathbb D$, so in this case, $K_{\infty}=\overline{\mathbb D}$. For $c_3$, we have $q(\overline{\mathbb D})\cap\mathbb D=\emptyset$, so in this case, $K_{\infty}=S_0$. Both of these cases follow from Corollary \ref{COR:MAIN}. The more interesting case is $c_2$ in which $q(\mathbb D)\backslash\mathbb D\neq\emptyset$. See Figure \ref{FIG:EX}. The limit, should it exist, of $K(f_{n})$ is the set $K_{\infty}$, which is now significantly more complicated, neither the closed unit disk nor the unit circle. \begin{figure}
\caption{Each row, left to right: $q(\overline{\mathbb D})$ for $q(z)=0.75z^2+c_j$, $K(q)$, and $K(f_{1800})$. The first row is for $c_1=0.21+0.017i$, the second row is $c_2=0.41+0.047i$, and the third row is $c_3=1.41+1.17i$. Scale is the same in pictures in the last two columns.}
\label{FIG:EX}
\end{figure} \end{example}
\section{Proof of Main Results}\label{SEC:PROOF}
\begin{proof}[Proof of Lemma \ref{LEM:DISKBOUND}]
Let $z\in \mathbb C\setminus\overline{\mathbb D}_{1+\epsilon}$. We prove $|f^m_{n,q}(z)|\geq B^m$ for all $m\geq 1$ by induction. Let $a_i$ be the coefficients of $q$, and pick $M>(d+1)\max_i|a_i|$. Then for any $|z|>1$,
we have $|q(z)|\leq M|z|^d$.
Choose $B>\max{\{1,M\}}$ and $N>d$ large enough that $|z|^N>\max\{4B,2M|z|^d\}$. Let $n\geq N$. Observe that \begin{eqnarray*}
|f_{n,q}(z)|
\ \geq\ |z|^n-|q(z)|\
\geq\ |z|^n-M|z|^d
\ \geq\ |z|^n-\frac{1}{2}|z|^n\ \geq\ 2B\ >\ B. \end{eqnarray*}
Now suppose for some $m\geq 1$, we know $|f_{n,q}^m(z)|\geq B^m$. Let $z_m=f^m_{n,q}(z)$, and note that
$|q(z_m)|\leq M|z_m|^d
<|z_m|^N$. Then for any $n\geq N$, \begin{align*}
\left|f^{m+1}_{n,q}(z)\right|\
\geq\ \left|z_m^n\right|-\left|q(z_m)\right|\
&\geq\ \left|z_m\right|^n-M\left|z_m\right|^d\\
& \geq B^{mn}-B^{md}B\
=\ B^{m+1}\left(B^{mn-m-1}-B^{md-m}\right)\ \geq\ B^{m+1}. \end{align*}
It follows that $|f_{n,q}^m(z)|\geq B^m$ for all $m\geq 1$. Since $B>1$, the orbit of $z$ under $f_{n,q}$ escapes to infinity. Thus, $z\notin K(f_{n,q})$. \end{proof}
Before proving Theorem \ref{THM:HYPERBOLIC}, we need a couple more lemmas.
\begin{lemma} \label{LEM:BOUNDED}
If $\{q^i(z_0)\}_{i=0}^{k-1}\subset\mathbb D$ and $|q^k(z_0)|=1$, then for any positive integer $m<k$, there is an $N$ such that for all $n\geq N$, \[\{f_{n,q}^i(z_0)\}_{i=0}^m\subset\mathbb D.\] Moreover, for all $\epsilon>0$ and any positive integer $m\leq k$, there is an $N$ such that for all $n\geq N$,
\[\max_{0\leq i\leq m}|f_{n,q}^i(z_0)-q^i(z_0)|<\epsilon.\] \end{lemma}
\begin{proof} This proof follows by continuity and is left to the reader. \end{proof}
\begin{lemma} \label{LEM:CIRCPREIM} \[\lowlim_{n\rightarrow\infty}K(f_n)\supset\bigcup_{j=0}^{\infty}S_j\] \end{lemma}
There is a body of work on the distribution of polynomial roots begun by Erd\"os and Tur\'an in \cite{erdos}. Specific results in \cite{hughes,iz} dealing with the accumulation of polynomial roots around the unit circle could be applied to the polynomials $f_{n}(z)-z$ to find fixed points. However, the case here is simpler because $n-d-1$ of the coefficients of $f_{n}$ are all zero. Thus, we have a concise argument using the following potential theory lemma. \begin{lemma} \label{LEM:POT} For any fixed degree $d$ nonzero polynomial, $q$, the zeros of the polynomial $f_{n,q}(z)=z^n+q(z)$ cluster uniformly around the unit circle as $n\rightarrow\infty$. More specifically, for each $n$, let \[\mu_n=\frac{1}{n}\sum_{f_{n,q}(z)=0}\delta_z,\] where $\delta_z$ is a point mass at $z$, and the roots of $f_{n,q}$ are counted with multiplicity. Then $\mu_n\rightarrow\mu$ weakly as $n\rightarrow\infty$, where $\mu$ is normalized Lebesgue measure on $S_0$. \end{lemma}
\begin{proof}[Proof of Lemma \ref{LEM:POT}] Note that
\[\mu=dd^c\log_{+}|z|,\qquad\mbox{where}\qquad\log_{+}|z|=\left\{\begin{array}{rl}
\log|z|,&\mbox{ if }|z|\geq1\\
0,&\mbox{ if }|z|<1.\end{array}\right.\]
Let $Z_q$ be the zero set of $q$, let $K$ be a compact subset of $\mathbb C\backslash(S_0\cup Z_q)$, and let $A$ be the maximum of $|q(z)|$ on $K$. Then there is an $\epsilon>0$ such that for any $z\in K$, we have $|q(z)|>\epsilon$ and either $|z|\geq1+\epsilon$ or $|z|\leq1-\epsilon$.
If $|z|\geq1+\epsilon$, then \begin{eqnarray} \label{EQN:POT}
\frac{1}{n}\log|f_{n}(z)|&=&\frac{1}{n}\log\left|z^n\left(1+\frac{q(z)}{z^n}\right)\right|
\ =\ \frac{1}{n}\log|z^n|+\frac{1}{n}\log\left|1+\frac{q(z)}{z^n}\right|\nonumber
\end{eqnarray}
Since $|q(z)/z^n|\leq A/(1+\epsilon)^{n}\leq\frac{1}{2}$ for all $n$ sufficiently large, using this with $C=\log 2$ we have, for all such $n$, \begin{eqnarray} \label{EQN:POT3}
\log_{+}|z|-\frac{C}{n}&\leq&\frac{1}{n}\log|f_{n}(z)|\ \leq\ \log_{+}|z|+\frac{C}{n}. \end{eqnarray}
If $|z|\leq1-\epsilon$, then there is an $N$ such that for all $n\geq N$, we have $|z|^n\leq\epsilon/2$. Then \begin{eqnarray*}
\frac{\epsilon}{2}\leq|q(z)|-|z^n|\ \leq\ |f_{n}(z)|&\leq&|z^n|+|q(z)|\leq2A. \end{eqnarray*}
Noting that $\log_{+}|z|=0$ when $|z|\leq1-\epsilon$, we have for all $|z|\leq1-\epsilon$ that \begin{eqnarray} \label{EQN:POT2}
\log_{+}|z|+\frac{\log(\epsilon/2)}{n}\ \leq\ \frac{1}{n}\log\left(\frac{\epsilon}{2}\right)\ \leq\ \frac{1}{n}\log|f_{n}(z)|&\leq&
\frac{1}{n}\log(2A)\ =\ \log_{+}|z|+\frac{\log(2A)}{n}. \end{eqnarray} Using Equations (\ref{EQN:POT3}) and (\ref{EQN:POT2}), we have
$\frac{1}{n}\log|f_{n}(z)|\rightarrow\log_{+}|z|$ uniformly on $K$ as $n\rightarrow\infty$; by the compactness theorem for families of subharmonic functions \cite[Theorem 4.1.9]{hormander},
it follows that $\frac{1}{n}\log|f_{n}(z)|\rightarrow\log_{+}|z|$ in $L^1_{loc}(\mathbb C)$. Note that $dd^c\log_{+}|z|=\mu$, and we have from the Poincar\'e-Lelong formula \cite{gh} that $\frac{1}{n}dd^c\log|f_{n}(z)|=\mu_n$. Thus, we have
\[\mu_n=\frac{1}{n}dd^c\log|f_{n}(z)|\rightarrow dd^c\log_{+}|z|=\mu\] weakly as $n\rightarrow\infty$. \end{proof}
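The convergence in Lemma \ref{LEM:POT} is easy to observe numerically. The following Python/NumPy sketch (the polynomial $q$ and the width of the annulus are illustrative choices) computes the roots of $f_{n}(z)=z^{n}+q(z)$ and prints the fraction of roots whose modulus lies within $0.05$ of the unit circle; this fraction approaches $1$ as $n$ grows.
\begin{verbatim}
import numpy as np

def root_moduli(n, q_coeffs):
    # Coefficients of f_n(z) = z^n + q(z), highest degree first, for numpy.roots.
    d = len(q_coeffs) - 1
    coeffs = np.zeros(n + 1, dtype=complex)
    coeffs[0] = 1.0                         # the z^n term
    coeffs[n - d:] += np.asarray(q_coeffs)  # add the coefficients of q
    return np.abs(np.roots(coeffs))

# q(z) = z^2 - 0.1 + 0.75i, the polynomial used in Figure FIG:KINF.
q_coeffs = [1.0, 0.0, -0.1 + 0.75j]
for n in (25, 100, 400):
    moduli = root_moduli(n, q_coeffs)
    print(n, np.mean(np.abs(moduli - 1.0) < 0.05))
\end{verbatim}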
\begin{proof}[Proof of Lemma \ref{LEM:CIRCPREIM}] We first deal only with the unit circle by showing that $S_0\subset\lowlim_{n\rightarrow\infty}K(f_{n})$. Let $z\in S_0$ and $\epsilon>0$. Define \[g_{n}(z):=f_{n}(z)-z,\] so the zeros of $g_{n}$ are fixed points of $f_{n}$. By Lemma \ref{LEM:POT}, the fixed points of $f_{n}$ cluster uniformly near the unit circle. If any of the fixed points are repelling, then they are contained in $J(f_{n})$ \cite{milnor}. Otherwise, they are attracting or indifferent, in which case they must be $\epsilon$ close to $J(f_{n})$ because $K(f_{n})\subset\mathbb D_{1+{\epsilon}}$. It follows that $S_0\subset\lowlim_{n\rightarrow\infty}K(f_{n})$.
Since filled Julia sets are backward invariant, the preimages by $f_{n}$ of the fixed points clustering on $S_0$ will also be in $K(f_{n})$. We now show that these preimages cluster on the sets $S_j$ (which are contained in the preimages of $S_0$). Specifically, we must show that for large enough $n$, any point in $S_j$ for any $j$ is close to a preimage of one of the fixed points on $S_0$. This task is made easier by the fact that by construction,
the nonempty $S_j$ accumulate (when there are an infinite number of them) on the boundary of $K_q$. To see this, define
\[\mathcal K_{J}=\bigcap_{i=0}^J\{z\in\mathbb C\colon|q^i(z)|<1\},\] and note that for any $j>J$, we have from construction that $S_j\subset\mathcal K_{J}$ and $\lim_{J\rightarrow\infty}\mathcal K_{J}=K_q$. For any $J$, the set $\mathcal K_J\backslash K_q$ has compact closure, so for any $\epsilon>0$, there is a $J$ and points $\{z_1,\dots,z_{\ell}\}\subset\bigcup_{j<J}S_j$ such that \[\bigcup_{j=1}^{\infty}S_j\subset\bigcup_{i=1}^{\ell}B(z_{i},\epsilon/2).\]
That is, we have a finite open cover of the set of all the $S_j$ sets: for all $z\in\bigcup_{j=1}^{\infty}S_j$, there is an integer $1\leq i\leq\ell$ such that $|z-z_{i}|<\epsilon/2$.
With this finite open cover, it is now straightforward to use Lemmas~\ref{LEM:BOUNDED} and \ref{LEM:POT} to show that for all $n$ sufficiently large, each $z_i$ (a center of one of the balls in the open cover) is close to some $w_{i,n}$, a preimage of a fixed point of $f_{n}$ on $S_0$. Since each $w_{i,n}\in K(f_n)$, we can choose $n$ large enough so that for all $z_i$ we have $d(z_i,K(f_{n}))<\epsilon/2$. Thus, for any $z\in S_j$ for any $j$, we have $d(z,K(f_{n}))<\epsilon$. \end{proof}
It is intuitive by the construction of $K_{\infty}$ that points bounded a definite distance away from $K_{\infty}$ will not be in $K(f_{n})$ for all sufficiently large $n$. We nevertheless provide the formal statement and proof of this fact in the following lemma.
\begin{lemma}\label{LEM:HALFHAUS} For any $\epsilon>0$, there is an $N$ such that for any $n\geq N$ \[d(z_0,K_{\infty})\geq\epsilon\mbox{\ \ implies\ \ } z_0\notin K(f_{n}).\] \end{lemma} \begin{proof}
First we consider the case in which $|z_0|>1+\epsilon$. By Lemma \ref{LEM:DISKBOUND} there is a large enough $N$ such that for all $n\geq N$, we have $z_0\notin K(f_{n})$.
To attend to the remaining points, suppose $z_0\in\overline{\mathbb D}_{1+\epsilon}$ and $d(z_0,K_{\infty})\geq \epsilon$. Since $S_0\subset K_{\infty}$, we only need to consider $z_0\in\overline{\mathbb D}_{1-\epsilon}$ with $d(z_0,K_{\infty})\geq \epsilon$. Note that $\{z\in\mathbb C\colon d(z,K_{\infty})\geq\epsilon\}\cap\overline{\mathbb D}_{1-\epsilon}$ is a compact set, so there is some $j$ such that for all $z_0\in\{z\in\mathbb C\colon d(z,K_{\infty})\geq\epsilon\}\cap\overline{\mathbb D}_{1-\epsilon}$, we have $|q^j(z_0)|>1+\epsilon$. By Lemma \ref{LEM:BOUNDED}, $|f^j_{n,q}(z_0)|>1+\epsilon$ for all sufficiently large $n$. Again, by Lemma \ref{LEM:DISKBOUND}, we have $z_0\notin K(f_{n})$ in this case as well. \end{proof}
\begin{lemma}\label{LEM:ATTRACT} For any periodic orbit $\{z_i\}_{i=0}^{k-1}$ of $q$ contained in $\mathbb D$ and any $\epsilon>0$, there is an $N$ such that for all $n\geq N$, $f_{n}$ has a periodic orbit $\{z_{i,n}\}_{i=0}^{k-1}$ also contained in $\mathbb D$ such that
\[\max_{0\leq i\leq k-1}|z_i-z_{i,n}|<\epsilon.\] Moreover, if $\{z_i\}_{i=0}^{k-1}$ is attracting (repelling) for $q$, then each cycle $\{z_{i,n}\}_{i=0}^{k-1}$ is attracting (repelling) for each corresponding $f_{n}$. \end{lemma}
While zeros of non-constant polynomials depend continuously on the coefficients of the polynomial, the set of polynomials $\{f_{n}\}$ is discrete. Nevertheless, Lemma \ref{LEM:ATTRACT} still follows quickly from Rouch\'e's theorem and the fact that on any compact subset $K$ of $\mathbb D$, we have $f_{n}|_K\rightarrow q$ uniformly and $f'_{n,q}|_K\rightarrow q'$ uniformly, so we omit the proof.
\begin{proof}[Proof of Theorem \ref{THM:MAIN}] From Lemma \ref{LEM:HALFHAUS}, it follows that for all $z$, $d(z,K_{\infty})\leq\lowlim_{n\rightarrow\infty}d(z,K(f_{n}))$. From this and Lemma \ref{LEM:CIRCPREIM}, we have \[K_q\cup\bigcup_{j=0}^{\infty}S_j\supset\uplim_{n\rightarrow\infty}K(f_n) \supset\lowlim_{n\rightarrow\infty}K(f_{n})\supset\bigcup_{j=0}^{\infty}S_j,\]
so it remains only to show that $\lowlim_{n\rightarrow\infty}K(f_{n})\supset\partial K_q$. A point $z\in\partial K_q$ must be an accumulation point of $\bigcup_{j=0}^{\infty}S_j$ or in $\partial K(q)\cap\mathbb D=J(q)\cap\mathbb D$. In the former case, we have $z\in\lowlim_{n\rightarrow\infty}K(f_{n})$, so suppose the latter: $z\in J(q)\cap\mathbb D$, and let $\epsilon>0$. Since repelling periodic points are dense in $J(q)$ \cite{milnor}, there is a repelling periodic point $z_0\in J(q)\cap\mathbb D$ such that $|z-z_0|<\epsilon/2$.
By Lemma \ref{LEM:ATTRACT}, there is an $N$ such that for all $n\geq N$, there is a repelling periodic point $z_n\in J(f_{n})=\partial K(f_{n})$ such that $|z_0-z_n|<\epsilon/2$. Thus, $d(z,K(f_{n}))\leq|z-z_n|<\epsilon$. \end{proof}
\section{Hyperbolic and Other More Specific Maps} \label{SEC:HYPERBOLIC}
We turn our attention now to the more specific cases in which $q$ is hyperbolic and has no periodic points on $S_0$.
\begin{proof}[Proof of Theorem \ref{THM:HYPERBOLIC}] By Theorem \ref{THM:MAIN}, it remains only to show that $K_q^{\circ}$, the interior of $K_q$, is a subset of $\lowlim_{n\rightarrow\infty}K(f_{n})$. That is, we need only now show that any point of $K_q^{\circ}$ is close to $K(f_{n})$ for all sufficiently large $n$. If $K_q^{\circ}$ is empty, we are done, so we proceed with the assumption that $K_q^{\circ}$ is nonempty.
Since $q$ is hyperbolic, we know that the orbit of every point in $K_q^{\circ}\subset\mathcal F(q)\cap\mathbb D$ converges to an attracting periodic cycle for $q$. Suppose $\{z_i\}_{i=0}^{k-1}$ is such an attracting cycle. We will show that for large enough $n$, every $f_{n}$ has an attracting periodic cycle near $\{z_i\}_{i=0}^{k-1}$. Then every point in $K_q^{\circ}$ will have some forward image near an attracting cycle of $f_{n}$, ensuring these points are in $K(f_{n})$.
For each $z_i$ in the attracting cycle for $q$, there is a neighborhood $B(z_i,r_i)$ such that $q^k(B(z_i,r_i))\subset B(z_i,r_i)\subset\mathbb D$. Let $r=\min_{i}r_i$. By Lemma \ref{LEM:ATTRACT}, there is an $N_0$ such that for all $n\geq N_0$, $f_{n}$ has an attracting periodic orbit $\{z_{i,n}\}_{i=0}^{k-1}$ where $z_{i,n}\in B(z_i,r)$. By Lemma \ref{LEM:BOUNDED}, there is an $N_1$ such that for all $n\geq N_1$ we also have $f_{n}^k(B(z_i,r))\subset B(z_i,r)$. Thus, we have constructed a neighborhood that is forward invariant for all $f_{n}^k$ with $n\geq N_1$, so this neighborhood must be contained in each $K(f_{n})$.
We now show that compact subsets of $K_q^{\circ}$ are eventually mapped by $f_{n}$ into the forward invariant neighborhood we just constructed. Let $\epsilon>0$ and $\mathcal K_{\epsilon}$ be a compact subset of $K_q^{\circ}$ such that for all $z\in\mathcal K_{\epsilon}$, there is some $z_0\in\partial K_q$ such that $d(z,z_0)<\epsilon$. Then there is an $m$ such that \[q^m(\mathcal K_{\epsilon})\subset\bigcup_{i=0}^{k-1}B(z_i,r).\] Again by Lemma \ref{LEM:BOUNDED}, there is an $N_2$ such that for all $m\geq N_2$ we also have \[f_{n}^m(\mathcal K_{\epsilon})\subset\bigcup_{i=0}^{k-1}B(z_i,r).\] Let $N=\max_{0\leq i\leq2}N_i$. For all $n\geq N$, it follows that the orbit by $f_{n}$ of any point $z\in\mathcal K_{\epsilon}$ converges to an attracting periodic cycle of $f_{n}$ contained in $\mathbb D$; that is, $\mathcal K_{\epsilon}\subset K(f_{n})$. Then for any point $z\in K_q^{\circ}$, there is a $\hat z\in\mathcal K_{\epsilon}\subset K(f_{n})$ such that $d(z,\hat z)<\epsilon$. \end{proof}
\begin{proof}[Proof of Corollary \ref{COR:MAIN}]
We prove the first part of the corollary. Suppose first that the image of $\overline{\mathbb D}$ under $q$ is contained in $\overline{\mathbb D}$ and let $0<\epsilon<1$. By the open mapping theorem, we know $q(\mathbb D)$ is an open set in $\overline{\mathbb D}$, so $q(\mathbb D)\subset\mathbb D$. Since $\deg q\geq2$, we have from the Denjoy-Wolff Theorem \cite{milnor} that $q$ has a fixed point $z_0$ in $\overline{\mathbb D}$ to which orbits of all points in any compact subset of $\mathbb D$ converge. We have assumed $q$ has no fixed points in $S_0$, so $z_0\in\mathbb D$, and in this case, we also have from the Denjoy-Wolff Theorem that $z_0$ is the unique fixed point in $\mathbb D$. From Lemma \ref{LEM:ATTRACT}, we have that for all $\epsilon>0$, there is an $N$ such that for all $n\geq N$, $f_{n}$ has an attracting fixed point $z_n$ with $|z_0-z_n|<\epsilon$ and no other fixed points in $\overline{\mathbb D}_{1-\epsilon}$. Thus, we have $\mathbb D_{1-\epsilon}\subset K(f_{n})$. Combining this with Lemma \ref{LEM:DISKBOUND}, for any $\epsilon>0$, we may choose $N$ large enough such that \[\mathbb D_{1-\epsilon}\subset K(f_{n})\subset\mathbb D_{1+\epsilon}.\]
Now suppose the image of $\overline{\mathbb D}$ under $q$ is not contained in $\overline{\mathbb D}$, so $q(\overline{\mathbb D})\backslash\overline{\mathbb D}$ is nonempty. Since $\overline{\mathbb D}$ is compact, $q(\overline{\mathbb D})$ is also compact, and since $\mathbb C\backslash\overline{\mathbb D}$ is open and $q(\overline{\mathbb D})\backslash\mathbb D$ is nonempty,
there is some $z_0\in\mathbb D$ and $r>0$ such that $B(z_0,r)\subset\mathbb D$ and for any $z\in B(z_0,r)$, we have $|q(z)|>1$. Then one can pick $N$ large enough that for any $n\geq N$ and any $z\in B(z_0,r)$, we have $|f_{n}(z)|>1$. Then for all $n\geq N$, $z_0\notin K(f_n)$.
For the second part of the corollary, assume the image of $\overline{\mathbb D}$ under $q$ does not intersect $\mathbb D$, so $q(\overline{\mathbb D})\subset(\mathbb C\backslash\mathbb D)$. That is, for all $z_0\in\overline{\mathbb D}$, we have $|q(z_0)|\geq1$. Let
\[s=\min_{z\in\overline{\mathbb D}_{1-\epsilon}}\{|q(z)|\},\]
so $s>1$ and $(s-1)/2>0$. Since $\overline{\mathbb D}_{1-\epsilon}$ is compact, we may choose $N$ so that for all $n\geq N$ and any $z\in\mathbb D_{1-\epsilon}$, we have $|z|^n<(s-1)/2$. Then for any $z\in\mathbb D_{1-\epsilon}$ and $n\geq N$, we also have \begin{align*}
\left|f_{n}(z)\right|
& \geq\big| |z|^n-|q(z)|\big|\\
& =|q(z)|-|z|^n\\
& > s-(s-1)/2\\
& =\ 1+(s-1)/2. \end{align*} By Lemma \ref{LEM:DISKBOUND}, it follows that $\mathbb D_{1-\epsilon}$ is in the basin of infinity of $f_{n}$ for all $n\geq N$. The result then follows from this fact and Lemma \ref{LEM:CIRCPREIM}.
Lastly, suppose the image of $\overline{\mathbb D}$ under $q$ does intersect $\mathbb D$. Then by Theorem \ref{THM:HYPERBOLIC}, if the limit exists, $\bigcup_{j>1}S_j\subset\mathbb D$ is nonempty, so the limit cannot be $S_0$. \end{proof}
\end{document} |
\begin{document}
\title{Breaking of ensemble equivalence for\\ perturbed Erd\H{o}s-R\'enyi random graphs}
\author{\renewcommand{\arabic{footnote}}{\arabic{footnote}} F. den Hollander\footnotemark[1]\,, \renewcommand{\arabic{footnote}}{\arabic{footnote}} M. Mandjes\footnotemark[2]\,, \renewcommand{\arabic{footnote}}{\arabic{footnote}} A. Roccaverde\footnotemark[3]\,, \renewcommand{\arabic{footnote}}{\arabic{footnote}} N.J. Starreveld\footnotemark[4] }
\footnotetext[1]{Mathematical Institute, Leiden University, P.O. Box 9512, 2300 RA Leiden, The Netherlands\\ {\tt \small [email protected]}}
\footnotetext[2]{Korteweg de-Vries Institute, University of Amsterdam, P.O. Box 94248, 1090 GE Amsterdam, The Netherlands\\ {\tt \small [email protected]}}
\footnotetext[3]{Mathematical Institute, Leiden University, P.O. Box 9512, 2300 RA Leiden, The Netherlands\\ {\tt \small [email protected] }}
\footnotetext[4]{Korteweg de-Vries Institute, University of Amsterdam, P.O. Box 94248, 1090 GE Amsterdam, The Netherlands\\ {\tt \small [email protected]}}
\date{\today}
\maketitle
\begin{abstract}
In \cite{dHMRS18} we analysed a simple undirected random graph subject to constraints on the densities of edges and triangles, considering the dense regime in which the number of edges per vertex is proportional to the number of vertices. We computed the specific relative entropy of the \emph{microcanonical ensemble} with respect to the \emph{canonical ensemble}, i.e., the relative entropy per edge in the limit as the number of vertices tends to infinity. We showed that as soon as the constraints are \emph{frustrated}, i.e., do not lie on the Erd\H{o}s-R\'enyi line (where the density of triangles is the third power of the density of edges), there is \emph{breaking of ensemble equivalence}, meaning that the specific relative entropy is strictly positive. In the present paper we analyse what happens near this line. It turns out that the way in which the specific relative entropy tends to zero critically depends on whether the line is approached from above or from below. We identify what the constrained random graph looks like in the microcanonical ensemble in the limit as the number of vertices tends to infinity.
\noindent \emph{MSC 2010:} 05C80, 60K35, 82B20.\\ \emph{Key words:} Erd\H{o}s-R\'enyi random graph, Gibbs ensembles, breaking of ensemble equivalence, relative entropy, graphons, large deviation principle, variational representation.\\ \emph{Acknowledgement:} The research in this paper was supported through NWO Gravitation Grant NETWORKS 024.002.003. The authors are grateful to V.\ Patel and H.\ Touchette for helpful discussions. FdH, AR and NJS are grateful for hospitality at the International Centre for Theoretical Sciences in Bangalore, India, as participants of the program on \emph{Large Deviation Theory in Statistical Physics: Recent Advances and Future Challenges} running in the Fall of 2017.
\end{abstract}
\section{Introduction} \label{S1.1}
\subsection{Background}
In this paper we analyse random graphs that are subject to \emph{constraints}. Statistical physics prescribes what probability distribution on the set of graphs we should choose when we want to model a given type of constraint \cite{G02}. Two important choices are: \begin{itemize} \item[(1)] The \emph{microcanonical ensemble}, where the constraints are \emph{hard} (i.e., are satisfied by each individual graph). \item[(2)] The \emph{canonical ensemble}, where the constraints are \emph{soft} (i.e., hold as ensemble averages, while individual graphs may violate the constraints). \end{itemize} For random graphs that are large but finite, the two ensembles are obviously different and, in fact, represent different empirical situations. Each ensemble represents the unique probability distribution with \emph{maximal entropy} respecting the constraints. In the limit as the size of the graph diverges, the two ensembles are traditionally \emph{assumed} to become equivalent as a result of the vanishing fluctuations in the soft constraints, i.e., the soft constraints are assumed to behave asymptotically like hard constraints. This assumption of \emph{ensemble equivalence} is one of the cornerstones of statistical physics, but it does \emph{not} hold in general. We refer to \cite{T14} for more background on this phenomenon.
In a series of papers we investigated the possible breaking of ensemble equivalence for various choices of the constraints, including the degree sequence and the total number of edges, wedges, triangles, etc. Both the \emph{sparse regime} (where the number of edges per vertex remains bounded) and the \emph{dense regime} (where the number of edges per vertex is of the order of the number of vertices) were considered. The effect of \emph{community structure} on ensemble equivalence was investigated as well. Relevant references are \cite{GHR17,GHR18,dHMRS18,GS18,SdMdHG15}.
In \cite{dHMRS18}, for the dense regime, we considered constraints on the densities of \emph{finitely many arbitrary subgraphs}. Our main result was a \emph{variational formula} for $s_\infty = \lim_{n\to\infty} n^{-2} s_n$, where $n$ is the number of vertices and $s_n$ is the relative entropy of the microcanonical ensemble with respect to the canonical ensemble. Our analysis relied on the \emph{large deviation principle for graphons} derived in \cite{Ch15,CV11}. For the case where the constraints were on the densities of edges and triangles we found that $s_\infty>0$ when the constraints are \emph{frustrated}.
In the sequel we will say that we are on the {\em Erd\H{o}s-R\'enyi line} (abbreviated to ER line) when the density of triangles is equal to the third power of the density of edges, which is typical for the Erd\H{o}s-R\'enyi random graph. We analyse the behaviour of $s_\infty$ when the constraints are close to but different from the ER line. Moreover, we identify what the constrained random graph looks like asymptotically in the microcanonical ensemble. It turns out that the behaviour drastically changes when the density of triangles is slightly larger, respectively, slightly smaller than the value on the ER line. Although the microcanonical ensemble is generally harder to analyse than the canonical ensemble, below the ER line it is the canonical ensemble for which we do not know what the constrained random graph looks like asymptotically.
\subsection{Literature}
While breaking of ensemble equivalence is a relatively new concept in the theory of random graphs, there are many studies on the asymptotic structure of random graphs. In the pioneering work \cite{CV11}, followed by \cite{LZ15}, a large deviation principle for dense Erd\H{o}s-R\'enyi random graphs was established and the asymptotic structure of constrained Erd\H{o}s-R\'enyi random graphs was described as the solution of a variational problem. In the past few years, significant progress was made regarding sparse random graphs as well \cite{CDS11,DL18,LZ16,Z17}. Two further random graph models that were studied extensively are the exponential random graph model and the constrained exponential random graph model. Exponential random graphs, which are related to the canonical ensemble considered in the present paper, were analysed in \cite{BBS11,CD13}: \cite{BBS11} studies mixing times of Glauber spin-flip dynamics, while \cite{CD13} uses large deviation theory to derive asymptotic expressions for the partition function. In subsequent works \cite{RS13,RY13,Y13} the behaviour of exponential random graphs was analysed in further detail, while in \cite{YZ17} the focus was on sparse exponential random graphs. In \cite{EG18} exponential random graphs were studied in both the dense and the sparse regime, and the main conclusion was that they behave essentially like mixtures of random graphs with independent edges. In \cite{KY17,Y15} constrained exponential random graphs were analysed, while in \cite{AZ11,AZ18} the additional feature of directed edges was added. In \cite{DS19} large deviations were used to study random graphs constrained on the degree sequence in the dense regime.
In \cite{KRRS17,KRRS172,MY18,RS15}, the asymptotic structure of graphs drawn from the microcanonical ensemble was investigated for various choices of constraints on the densities of edges and triangles. The focus of \cite{RS15} was on the behaviour of random graphs for values of the edge and triangle densities close to the ER line, and a rough scaling of the graph was found via a bound on the entropy function. In the present paper we extend these results by determining the precise scaling. A similar question was addressed in \cite{MY18} for a constraint on the edge and triangle density close to the lower boundary curve of the admissibility region (see Fig.~\ref{fig-scallopy} below). In \cite{KRRS172}, through extensive numerics, the regions were determined where phase transitions in the structure of the constrained random graph occur as the densities of edges and triangles are varied. Our results rely on these numerics near the ER line and make them mathematically precise.
\subsection{Outline}
The remainder of our paper is organised as follows. In Section~\ref{S1.2} we define the two ensembles, give the definition of equivalence of ensembles in the dense regime, and recall some basic facts about graphons. An important role is played by the \emph{variational representation} of $s_\infty$, derived in \cite{dHMRS18} when the constraints are on the total numbers of subgraphs drawn from a finite collection of subgraphs. We also recall the analysis of $s_\infty$ in \cite{dHMRS18} for the special case where the subgraphs are the edges and the triangles. In Section~\ref{S1.4} we state our main theorems and propositions on the behaviour around the ER line. Proofs are given in Sections~\ref{S2 Chapter 5} and \ref{Optimal per}.
\section{Definitions and preliminaries} \label{S1.2}
In this section, which is largely lifted from \cite{dHMRS18}, we present the definitions of the main concepts to be used in the sequel, together with some key results from prior work. Section \ref{S1.2.1} presents the formal definition of the two ensembles we are interested in and gives our definition of ensemble equivalence in the dense regime. Section \ref{S1.2.2} recalls some basic facts about {graphons}, while Section \ref{S1.2.3} recalls some basic properties of the canonical ensemble. Section \ref{S1.3} recalls the variational characterisation of ensemble equivalence when the constraint is on a finite number of subgraph densities, proven in \cite{dHMRS18}. Section \ref{Sedtr} recalls the main results in \cite{dHMRS18} for the case where the constraints are on the densities of edges and triangles.
\subsection{Microcanonical ensemble, canonical ensemble, relative entropy} \label{S1.2.1}
For $n \in \mathbb{N}$, let $\mathcal{G}_n$ denote the set of all $2^{{n\choose 2}}$ simple undirected graphs with vertex set $\{1,\dots, n\}$. Any graph $G\in\mathcal{G}_n$ can be represented by a symmetric $n \times n$ matrix with elements \begin{equation} h^G(i,j) := \begin{cases} 1\qquad \mbox{if there is an edge between vertex } i \mbox{ and vertex } j,\\ 0 \qquad \mbox{otherwise.} \end{cases} \end{equation} Let $\vec{C}$ denote a vector-valued function on $\mathcal{G}_n$. We choose a specific vector $\vec{C}^*$, which we assume to be \emph{graphical}, i.e., realisable by at least one graph in $\mathcal{G}_n$. Given $\vec{C}^*$, the \emph{microcanonical ensemble} is the probability distribution $\mathrm{P}_{\mathrm{mic}}$ on $\mathcal{G}_n$ with \emph{hard constraint} $\vec{C}^*$ defined as \begin{equation} \mathrm{P}_{\mathrm{mic}}(G) := \left\{ \begin{array}{ll} 1/\Omega_{\vec{C}^*}, \quad & \text{if } \vec{C}(G) = \vec{C}^*, \\ 0, & \text{otherwise}, \end{array} \right. \qquad G\in \mathcal{G}_n, \label{eq:PM} \end{equation} where \begin{equation}
\Omega_{\vec{C}^*} := | \{G \in \mathcal{G}_n\colon\, \vec{C}(G) = \vec{C}^* \} | \end{equation} is the number of graphs that realise $\vec{C}^*$. The \emph{canonical ensemble} $\mathrm{P}_{\mathrm{can}}$ is the unique probability distribution on $\mathcal{G}_n$ that maximises the \emph{entropy} \begin{equation} S_n({\rm P}) := - \sum_{G \in \mathcal{G}_n}{\rm P}(G) \log {\rm P}(G) \end{equation} subject to the \emph{soft constraint} $\langle \vec{C} \rangle = \vec{C}^*$, where \begin{equation} \label{softconstr} \langle \vec{C} \rangle := \sum_{G \in \mathcal{G}_n} \vec{C}(G)\,{\rm P}(G). \end{equation} This gives the formula \cite{J57} \begin{equation} \mathrm{P}_{\mathrm{can}}(G) := \frac{1}{Z(\vec{\theta}^*)}\,\mathrm{e}^{H(\vec{\theta}^*,\vec{C}(G))}, \qquad G \in \mathcal{G}_n, \label{eq:PC} \end{equation} with \begin{equation} H(\vec{\theta}^*,\vec{C}(G)) := \vec{\theta}^* \cdot \vec{C}(G), \qquad Z(\vec{\theta}^*\,) := \sum_{G \in \mathcal{G}_n} \mathrm{e}^{\vec{\theta}^* \cdot\hspace{2pt} \vec{C}(G)}, \label{eq:Ham} \end{equation} denoting the \emph{Hamiltonian} and the \emph{partition function}, respectively. In \eqref{eq:PC}--\eqref{eq:Ham} the parameter $\vec{\theta}^*$, which is a real-valued vector whose dimension is equal to the number of constraints, must be set to the unique value that realises $\langle \vec{C} \rangle = \vec{C}^*$. As a Lagrange multiplier, $\vec{\theta}^*$ always exists, but uniqueness is non-trivial. In the sequel we will only consider examples where the gradients of the constraints are \emph{linearly independent} vectors. Consequently, the Hessian matrix of the covariances in the canonical ensemble is a positive-definite matrix, which implies uniqueness.
The \emph{relative entropy} of $\mathrm{P}_{\mathrm{mic}}$ with respect to $\mathrm{P}_{\mathrm{can}}$ is defined as \begin{equation} S_n(\mathrm{P}_{\mathrm{mic}} \mid \mathrm{P}_{\mathrm{can}}) := \sum_{G \in \mathcal{G}_n} \mathrm{P}_{\mathrm{mic}}(G) \log \frac{\mathrm{P}_{\mathrm{mic}}(G)}{\mathrm{P}_{\mathrm{can}}(G)}. \label{eq:KL1} \end{equation} For any $G_1,G_2\in\mathcal{G}_n$, $\mathrm{P}_{\mathrm{can}}(G_1)=\mathrm{P}_{\mathrm{can}}(G_2)$ whenever $\vec{C}(G_1)=\vec{C}(G_2)$, i.e., the canonical probability is the same for all graphs with the same value of the constraint. We may therefore rewrite \eqref{eq:KL1} as \begin{equation} S_n(\mathrm{P}_{\mathrm{mic}} \mid \mathrm{P}_{\mathrm{can}}) = \log \frac{\mathrm{P}_{\mathrm{mic}}(G^*)}{\mathrm{P}_{\mathrm{can}}(G^*)}, \label{eq:KL2} \end{equation} where $G^*$ is \emph{any} graph in $\mathcal{G}_n$ such that $\vec{C}(G^*) =\vec{C}^*$ (recall that we assumed that $\vec{C}^*$ is realisable by at least one graph in $\mathcal{G}_n$). The removal of the sum over $\mathcal{G}_n$ constitutes a major simplification.
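To make the two ensembles and the relative entropy in \eqref{eq:KL1}--\eqref{eq:KL2} concrete, the following Python sketch (purely illustrative and not part of the formal development; all helper names are ours) enumerates the graphs on $n=4$ vertices, imposes a single hard constraint on the number of edges, fits the Lagrange multiplier of the canonical ensemble by bisection so that the soft constraint holds, and evaluates \eqref{eq:KL2} at one representative graph.
\begin{verbatim}
# Illustrative sketch: microcanonical vs canonical ensemble on graphs with
# n = 4 vertices, hard/soft constraint on the number of edges, and the
# relative entropy evaluated via (eq:KL2).
import itertools, math

n = 4
pairs = list(itertools.combinations(range(n), 2))             # the 6 possible edges
graphs = list(itertools.product([0, 1], repeat=len(pairs)))   # all 2^6 graphs

def num_edges(g):
    return sum(g)

C_star = 3   # hard constraint: exactly 3 edges

# Microcanonical ensemble: uniform on the graphs with C(G) = C*.
Omega = sum(1 for g in graphs if num_edges(g) == C_star)
P_mic = {g: (1.0 / Omega if num_edges(g) == C_star else 0.0) for g in graphs}

# Canonical ensemble: P_can(G) = exp(theta * C(G)) / Z(theta), with theta
# tuned by bisection so that the soft constraint <C> = C* holds.
def mean_edges(theta):
    Z = sum(math.exp(theta * num_edges(g)) for g in graphs)
    return sum(num_edges(g) * math.exp(theta * num_edges(g)) for g in graphs) / Z

lo, hi = -20.0, 20.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mean_edges(mid) < C_star else (lo, mid)
theta_star = 0.5 * (lo + hi)

Z = sum(math.exp(theta_star * num_edges(g)) for g in graphs)
P_can = {g: math.exp(theta_star * num_edges(g)) / Z for g in graphs}

# Relative entropy via (eq:KL2): evaluating at one graph G* with C(G*) = C*
# suffices, because P_can is constant on the set {C(G) = C*}.
G_star = next(g for g in graphs if num_edges(g) == C_star)
S_n = math.log(P_mic[G_star] / P_can[G_star])
print("theta* =", round(theta_star, 6), " S_n(P_mic | P_can) =", round(S_n, 6))
\end{verbatim}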
All the quantities above depend on $n$. In order not to burden the notation, we exhibit this $n$-dependence only in the symbols $\mathcal{G}_n$ and $S_n(\mathrm{P}_{\mathrm{mic}} \mid \mathrm{P}_{\mathrm{can}})$. When we pass to the limit $n\to\infty$, we need to specify how $\vec{C}(G)$, $\vec{C}^*$ are chosen to depend on $n$. We refer the reader to \cite[Appendix A]{dHMRS18}, where this issue was discussed in detail.
\begin{definition} \label{def: sinfty} {\rm In the dense regime, if \begin{equation} \label{eq: def relative entropy limit} s_{\infty} := \lim_{n\to\infty}\frac{1}{n^2}S_n(\mathrm{P}_{\mathrm{mic}} \mid \mathrm{P}_{\mathrm{can}})=0, \end{equation} then $\mathrm{P}_{\mathrm{mic}}$ and $\mathrm{P}_{\mathrm{can}}$ are said to be {\em equivalent}.} \end{definition}
\begin{remark} {\rm In \cite{SdMdHG15}, which was concerned with the \emph{sparse regime}, the relative entropy was divided by $n$ (the number of vertices). In the \emph{dense regime}, however, it is appropriate to divide by $n^2$ (the order of the number of edges). } \end{remark}
\subsection{Graphons} \label{S1.2.2}
There is a natural way to embed a simple graph on $n$ vertices in a space of functions called \emph{graphons}. Let $W$ be the space of functions $h\colon\,[0,1]^2 \to [0,1]$ such that $h(x,y) = h(y,x)$ for all $(x,y) \in [0,1]^2$. A finite simple graph $G$ on $n$ vertices can be represented as a graphon $h^{G} \in W$ in a natural way as (see Figure~\ref{fig-graphon}) \begin{equation} \label{graphondef} h^{G}(x,y) := \left\{ \begin{array}{ll} 1 &\mbox{if there is an edge between vertex } \lceil{nx}\rceil \mbox{ and vertex } \lceil{ny}\rceil,\\ 0 &\mbox{otherwise}, \end{array} \right. \end{equation} which is referred to as the empirical graphon associated with $G$.
\begin{figure}
\caption{\small An example of a graph $G$ and its graphon representation $h^G$.}
\label{fig-graphon}
\end{figure}
\noindent The space of graphons $W$ is endowed with the \emph{cut distance} \begin{equation} d_{\square} (h_1,h_2) := \sup_{S,T\subset [0,1]}
\left|\int_{S\times T} \mathrm{d} x\,\mathrm{d} y\,[h_1(x,y) - h_2(x,y)]\right|, \qquad h_1,h_2 \in W. \end{equation} On $W$ there is a natural equivalence relation $\equiv$. Let $\Sigma$ be the space of measure-preserving bijections $\sigma\colon\, [0,1] \to [0,1]$. Then $h_1(x,y)\equiv h_2(x,y)$ if $h_1(x,y) = h_2(\sigma x, \sigma y)$ for some $\sigma\in\Sigma$. This equivalence relation yields the quotient space $(\tilde{W},\delta_{\square})$, where $\delta_{\square}$ is the metric defined by \begin{equation} \label{deltam} \delta_{\square}(\tilde{h}_1,\tilde{h}_2) := \inf _{\sigma_1,\sigma_2 \in \Sigma} d_{\square}(h_1^{\sigma_1}, h_2^{\sigma_2}), \qquad \tilde{h}_1,\tilde{h}_2 \in \tilde{W}. \end{equation} For a more detailed description of the structure of the space $(\tilde{W},\delta_{\square})$ we refer to \cite{BCLSV08,BCLSV12,DGKR15}. In the sequel we will deal with constraints on the edge and triangle density. In the space $W$ the edge and triangle densities of a graphon $h$ are defined by \begin{equation} \label{eq: densityalt} T_1(h) := \int_{[0,1]^2} \mathrm{d} x_1 \mathrm{d} x_2\, h(x_1,x_2), \quad T_2(h):= \int_{[0,1]^3} \mathrm{d} x_1 \mathrm{d} x_2\mathrm{d} x_3\, h(x_1,x_2)h(x_2,x_3)h(x_3,x_1). \end{equation} For an element $\tilde{h}$ of the quotient space $\tilde{W}$ we define the edge and triangle density by \begin{equation} T_1(\tilde{h}) = T_1(h), \qquad T_2(\tilde{h}) = T_2(h), \end{equation} where $h$ is any representative element of the equivalence class $\tilde{h}$.
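As a numerical illustration of \eqref{graphondef} and \eqref{eq: densityalt} (a sketch of ours, with hypothetical helper names), the following code builds the empirical graphon of a small graph and approximates $T_1(h)$ and $T_2(h)$ by Riemann sums on a grid; for an empirical graphon these agree with the exact values $\sum_{i,j}A_{ij}/n^2$ and $\mathrm{tr}(A^3)/n^3$ whenever the grid refines the vertex blocks.
\begin{verbatim}
# Illustrative sketch: empirical graphon h^G of a small graph and grid
# approximation of the edge density T_1(h) and triangle density T_2(h).
import numpy as np

# Adjacency matrix of a graph on 5 vertices containing two triangles.
A = np.array([[0,1,1,1,1],
              [1,0,1,0,0],
              [1,1,0,1,0],
              [1,0,1,0,0],
              [1,0,0,0,0]], dtype=float)
n = A.shape[0]

def empirical_graphon(A):
    """h^G(x, y) = A[ceil(nx)-1, ceil(ny)-1]."""
    def h(x, y):
        i = min(int(np.ceil(n * x)) - 1, n - 1) if x > 0 else 0
        j = min(int(np.ceil(n * y)) - 1, n - 1) if y > 0 else 0
        return A[i, j]
    return h

def densities_on_grid(h, m=100):
    """Riemann-sum approximation of T_1(h) and T_2(h) on an m x m grid."""
    xs = (np.arange(m) + 0.5) / m
    H = np.array([[h(x, y) for y in xs] for x in xs])
    T1 = H.mean()
    # T_2(h) ~ (1/m^3) sum_{x,y,z} H[x,y] H[y,z] H[z,x] = trace(H^3)/m^3
    T2 = np.trace(H @ H @ H) / m**3
    return T1, T2

h = empirical_graphon(A)
print("grid estimate (T1, T2):", densities_on_grid(h))
# Exact values for the empirical graphon:
print("exact         (T1, T2):", A.sum() / n**2,
      np.trace(np.linalg.matrix_power(A, 3)) / n**3)
\end{verbatim}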
\subsection{Subgraph counts} \label{S1.2.3}
We can label the simple graphs in any order: $\{F_k\}_{k\in\mathbb{N}}$. Let $C_k(G)$ denote the number of subgraphs $F_k$ in $G$. In the dense regime, $C_k(G)$ grows like $n^{V_k}$ as $n\to\infty$, where
$V_k:=|V(F_k)|$ is the number of vertices in $F_k$. For $m \in \mathbb{N}$, consider the following \emph{scaled vector-valued function} on $\mathcal{G}_n$: \begin{equation} \vec{C}(G) := \left(\frac{p(F_k)\,C_{k}(G)}{n^{V_k-2}}\right)_{k=1}^m = n^2\left(\frac{p(F_k)\,C_{k}(G)}{n^{V_k}}\right)_{k=1}^m. \end{equation} The term $p(F_k)$ counts the edge-preserving permutations of the vertices of $F_k$ (e.g.\ $2$ for an edge, $2$ for a wedge, $6$ for a triangle). The term $C_k(G)/n^{V_k}$ represents the density of $F_k$ in $G$. The additional $n^2$ guarantees that the full vector scales like $n^2$, in line with the scaling of the large deviation principle for graphons in the Erd\H{o}s-R\'enyi random graph derived in \cite{CV11}. For a simple graph $F_k$, let $\text{hom}(F_k,G)$ be the number of homomorphisms from $F_k$ to $G$, and define the \emph{homomorphism density} as \begin{equation} t(F_k,G) := \frac{\text{hom}(F_k,G)}{n^{V_k}} = \frac{p(F_k)\,C_k(G)}{n^{V_k}}, \end{equation} which does not distinguish between permutations of the vertices. In terms of this quantity, the Hamiltonian becomes \begin{equation} \label{eq:HF} H(\vec{\theta}, \vec{T}(G))=n^2 \sum_{k=1}^m \theta_k \,t(F_k,G) = n^2 (\vec{\theta}\cdot\vec{T}(G)), \qquad G \in \mathcal{G}_n, \end{equation} where \begin{equation} \label{operator} \vec{T}(G) := \left(t(F_k, G)\right)_{k=1}^m. \end{equation} The canonical ensemble with parameter $\vec{\theta}$ thus takes the form \begin{equation} \label{eq:CPD} \mathrm{P}_{\mathrm{can}}(G \mid \vec{\theta}\,) := \mathrm{e}^{n^2\big[\vec{\theta}\cdot\vec{T}(G)-\psi_n(\vec{\theta}\,)\big]}, \qquad G \in \mathcal{G}_n, \end{equation} where $\psi_n$ replaces the \emph{partition function} $Z(\vec{\theta})$: \begin{equation} \label{eq:PF} \psi_n(\vec{\theta}) := \frac{1}{n^2}\log\sum_{G\in\mathcal{G}_n} \mathrm{e}^{n^2 (\vec{\theta}\hspace{2pt}\cdot\hspace{2pt}\vec{T}(G))} = \frac{1}{n^2}\log Z(\vec{\theta}). \end{equation} In the sequel we take $\vec{\theta}$ equal to a specific value $\vec{\theta}^*$ so as to meet the soft constraint, i.e., \begin{equation} \label{softconstraint2} \langle \vec{T} \rangle = \sum_{G\in\mathcal{G}_n}\vec{T}(G)\,\mathrm{P}_{\mathrm{can}}(G) = \vec{T}^*. \end{equation} With this choice, the canonical probability becomes \begin{equation} \label{canonicprob} \mathrm{P}_{\mathrm{can}}(G) = \mathrm{P}_{\mathrm{can}}(G\mid \vec{\theta^*}). \end{equation}
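The relation between homomorphism densities and subgraph counts can be checked on small graphs: it is exact for the edge and the triangle, and it holds up to a lower-order correction (coming from non-injective homomorphisms) for the wedge. The short sketch below (ours, purely illustrative) makes this explicit.
\begin{verbatim}
# Illustrative check of t(F,G) = hom(F,G)/n^{V(F)} versus p(F) C_F(G)/n^{V(F)}
# for the edge, the wedge and the triangle on a small graph.
import numpy as np
from itertools import combinations

A = np.array([[0,1,1,1],
              [1,0,1,0],
              [1,1,0,1],
              [1,0,1,0]], dtype=int)
n = A.shape[0]
deg = A.sum(axis=1)

# Homomorphism counts (maps need not be injective):
hom_edge = int(A.sum())                                       # = 2 * #edges
hom_wedge = int((deg ** 2).sum())                             # walks of length 2
hom_triangle = int(np.trace(np.linalg.matrix_power(A, 3)))    # = 6 * #triangles

# Subgraph counts C_F(G):
edges = int(A.sum()) // 2
wedges = int(sum(d * (d - 1) // 2 for d in deg))
triangles = sum(1 for i, j, k in combinations(range(n), 3)
                if A[i, j] and A[j, k] and A[i, k])

print("edge    :", hom_edge / n**2, "vs", 2 * edges / n**2)          # equal
print("triangle:", hom_triangle / n**3, "vs", 6 * triangles / n**3)  # equal
print("wedge   :", hom_wedge / n**3, "vs", 2 * wedges / n**3)
# For the wedge the two values differ by 2*#edges/n^3 (walks whose endpoints
# coincide), a lower-order term that vanishes as n -> infinity.
\end{verbatim}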
Both the constraint $\vec{T}^*$ and the \emph{Lagrange multiplier} $\vec{\theta}^*$ in general depend on $n$, i.e., $\vec{T}^*=\vec{T}^*_n$ and $\vec{\theta}^* = \vec{\theta}^*_n$. We consider constraints that converge when we pass to the limit $n\to\infty$, i.e., \begin{equation} \label{eq: Assumption T} \lim_{n\to\infty} \vec{T}^*_n = \vec{T}^*_\infty. \end{equation} Consequently, we expect that \begin{equation} \label{eq:Assumption} \lim_{n\to\infty} \vec{\theta}^*_n = \vec{\theta}^*_\infty. \end{equation} In the remainder of this paper we \emph{assume} that \eqref{eq:Assumption} holds.\ If convergence fails, then we may still consider subsequential convergence. The subtleties concerning \eqref{eq:Assumption} were discussed in detail in \cite[Appendix A]{dHMRS18}.
\subsection{Variational characterisation of ensemble equivalence} \label{S1.3}
For a graphon $h \in W$ and a simple graph $F$ with vertex set $V(F)$ and edge set $E(F)$, define \begin{equation}
t(F,h) := \int_{[0,1]^{|V(F)|}} \mathrm{d} x_1 \dots \mathrm{d} x_{|V(F)|} \prod_{\{i,j\} \in E(F)} h(x_i,x_j). \end{equation} Then $t(F,G) = t(F,h^G)$, and so the expression in \eqref{eq:HF} can be written as \begin{equation} \label{eq:HFG} H(\vec{\theta}, \vec{T}(G))=n^2 \sum_{k=1}^m \theta_k \,t(F_k,h^G). \end{equation} The constraint $\vec{T}^*_\infty$ defines a subspace of the quotient space $\tilde{W}$, \begin{equation} \tilde{W}^* := \{\tilde{h}\in \tilde{W}\colon\,\vec{T}(\tilde{h}) = \vec{T}^*_\infty\}, \end{equation} consisting of all graphons that meet the constraint.
In order to characterise the asymptotic behavior of the two ensembles, the entropy function of a Bernoulli random variable is essential. For $u\in [0,1]$, let \begin{equation} \label{Idef1} I(u) := \tfrac{1}{2}u\log u +\tfrac{1}{2}(1-u)\log(1-u). \end{equation} Extend the domain of this function to the graphon space $W$ by defining \begin{equation} \label{Idef} I(h) := \int_{[0,1]^2} \mathrm{d} x\, \mathrm{d} y\,\,I(h(x,y)) \end{equation} (with the convention that $0\log0:=0$). On the quotient space $(\tilde{W},\delta_{\square})$, define $I(\tilde{h}) = I(h)$, where $h$ is any element of the equivalence class $\tilde{h}$. (In order to keep the notation minimal, we use $I(\cdot)$ for both \eqref{Idef1} and \eqref{Idef}. Below it will always be clear which of the two is being considered.) The function $\tilde{h} \mapsto I(\tilde{h})$ is well defined on the quotient space $\tilde{W}$ (see \cite[Lemma 2.1]{CV11}). Moreover, $\tilde{h} \mapsto I(\tilde{h}) + \tfrac12\log 2$ is the rate function of the large deviation principle in \cite{CV11}.
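For later use we note that $I'(u)=\tfrac12\log\tfrac{u}{1-u}$ and $I''(u)=\tfrac{1}{2u(1-u)}$; these derivatives enter all expansions below. The short sketch (ours, illustrative only) checks them by finite differences.
\begin{verbatim}
# Illustrative sketch: the Bernoulli entropy function I(u) of (Idef1) and a
# finite-difference check of I'(u) = (1/2) log(u/(1-u)) and I''(u) = 1/(2u(1-u)).
import math

def I(u):
    if u in (0.0, 1.0):   # convention 0 log 0 := 0
        return 0.0
    return 0.5 * u * math.log(u) + 0.5 * (1 - u) * math.log(1 - u)

u, d = 0.3, 1e-5
num_first = (I(u + d) - I(u - d)) / (2 * d)
num_second = (I(u + d) - 2 * I(u) + I(u - d)) / d**2
print(num_first, 0.5 * math.log(u / (1 - u)))    # both ~ -0.4236
print(num_second, 1 / (2 * u * (1 - u)))         # both ~  2.3810
\end{verbatim}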
The key result in \cite{dHMRS18} is the following variational formula for $s_\infty$ ($\cdot$ denotes the inner product for vectors).
\begin{theorem} {\rm \cite{dHMRS18}} \label{th:Limit} Subject to \eqref{softconstraint2}, \eqref{eq: Assumption T} and \eqref{eq:Assumption}, \begin{equation} \label{sinftydef} \lim_{n\to\infty} n^{-2} S_n(\mathrm{P}_{\mathrm{mic}} \mid \mathrm{P}_{\mathrm{can}}) =: s_\infty \end{equation} with \begin{equation} \label{varreprsinfty} s_\infty = \sup_{\tilde{h}\in \tilde{W}} \left[\vec{\theta}^*_\infty\cdot\vec{T}(\tilde{h})-I(\tilde{h})\right] -\sup_{\tilde{h}\in \tilde{W}^*} \left[\vec{\theta}^*_\infty\cdot\vec{T}(\tilde{h}) - I(\tilde{h})\right]. \end{equation} \end{theorem}
\noindent Theorem~\ref{th:Limit} and the compactness of $\tilde{W}^*$ give us a \emph{variational characterisation} of ensemble equivalence: $s_\infty = 0$ if and only if at least one of the maximisers of $\vec{\theta}^*_\infty\cdot\vec{T}(\tilde{h})-I(\tilde{h})$ in $\tilde{W}$ also lies in $\tilde{W}^* \subset \tilde{W}$, i.e., satisfies the constraint $T^*_\infty$.
\subsection{Edges and triangles} \label{Sedtr}
Theorem~\ref{th:Limit} allows us to identify examples where ensemble equivalence holds ($s_\infty=0$) or is broken ($s_\infty>0$). In \cite{dHMRS18} a detailed analysis was given for the special case where the constraint is on the densities of edges and triangles.
\begin{theorem} {\rm \cite{dHMRS18}} \label{thm:equivalence} For the edge-triangle model, $s_\infty=0$ when \begin{itemize} \item [$\circ$] $T_2^* = T_1^{*{3}}$, \item [$\circ$] $0<T_1^* \leq \frac{1}{2}$ and $T_2^* = 0$, \end{itemize} while $s_\infty>0$ when \begin{itemize} \item [$\circ$] $T_2^* \neq T_1^{*{3}}$ and $T_2^* \geq \tfrac18$, \item [$\circ$] $T_2^*\neq T_1^{*{3}}$, $0<T_1^*\leq\tfrac12$ and $0<T_2^*<\frac{1}{8}$, \item [$\circ$] $(T_1^*,T_2^*)$ lies on the scallopy curve $($the lower blue curve in Figure~{\rm \ref{fig-scallopy}}$)$. \end{itemize} \end{theorem}
\noindent Here, $T_1^*,T_2^*$ are in fact the limits $T_{1,\infty}^*,T_{2,\infty}^*$ in \eqref{eq: Assumption T}, but in order to keep the notation light we suppress the index $\infty$.
\begin{figure}
\caption{\small The admissible edge-triangle density region is the region on and between the blue curves \cite{RS15}.}
\label{fig-scallopy}
\end{figure}
\noindent Theorem~\ref{thm:equivalence} is illustrated in Figure~\ref{fig-scallopy}. The region on and between the blue curves, called the \emph{admissible region}, corresponds to the choices of $(T_1^*,T_2^*)$ that are \emph{graphical}, i.e., there exists a graph with edge density $T_1^*$ and triangle density $T_2^*$. The red curves represent ensemble equivalence, while the blue curves and the grey region represent breaking of ensemble equivalence. In the white region between the red curve and the lower blue curve we do not know whether there is breaking of ensemble equivalence or not (although we expect that there is). Breaking of ensemble equivalence arises from \emph{frustration} between the values of $T_1^*$ and $T_2^*$. In line with \cite{dHMRS18}, by frustration we mean that these two densities do not lie on the Erd\H{o}s-R\'enyi line.
The lower blue curve, called the \emph{scallopy curve}, consists of infinitely many pieces labelled by $\ell \in \mathbb{N} \setminus \{1\}$. The $\ell$-th piece corresponds to $T_1^* \in (\frac{\ell-1}{\ell}, \frac{\ell}{\ell+1}]$ and a value $T_2^*$ that is a computable but non-trivial function of $T_1^*$, explained in detail in \cite{PR12,RS13,RS15}. The structure of the graphs drawn from the microcanonical ensemble was determined in \cite{PR12,RS15}. On the $\ell$-th piece: \begin{itemize} \item The vertex set can be partitioned into $\ell$ subsets. The first $\ell-1$ subsets have size $\lfloor c_{\ell}n\rfloor$, the last subset has size between $\lfloor c_\ell n\rfloor$ and $2\lfloor c_\ell n\rfloor$, with \begin{equation} \label{eq: for c} c_{\ell} := \tfrac{1}{\ell+1}\left[1+ \sqrt{1-\tfrac{\ell+1}{\ell}\,T_1^*}\,\right] \in [\tfrac{1}{\ell+1},\tfrac{1}{\ell}). \end{equation} \item The graph has the form of a complete $\ell$-partite graph, with some additional edges on the last subset that create no triangles within that last subset. \item The optimal graphons have the form \small \begin{equation} \label{eq: g*ell} g^*_{\ell}(x,y) := \left\{ \begin{array}{ll} 1, &\exists\,1 \leq k < \ell\colon\,x<kc_\ell<y \mbox{ or } y<kc_\ell<x,\\ p_\ell, &(\ell-1)c_\ell<x<\tfrac12[1+(\ell-1)c_\ell]<y \mbox{ or } (\ell-1)c_\ell<y<\tfrac12[1+(\ell-1)c_\ell]<x,\\ 0, &\mbox{otherwise}, \end{array} \right. \end{equation} \normalsize where \begin{equation} \label{eq: for p} p_\ell = \frac{4c_\ell(1-\ell c_\ell)}{(1-(\ell-1)c_\ell)^2} \in (0,1]. \end{equation} \end{itemize} Figure~\ref{fig: cl pl T1} plots the coefficients $c_{\ell}$ and $p_{\ell}$ as a function of $T_1^*$ for $\ell\in\mathbb{N}$. Figure \ref{fig: graphon ell} plots the graphon $g^*_{\ell}$ for $\ell\in\mathbb{N}$.
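For concreteness, the coefficients $c_\ell$ and $p_\ell$ in \eqref{eq: for c}--\eqref{eq: for p} are easily evaluated; the sketch below (ours, purely illustrative) does so for a few points on the scallopy curve and checks the stated ranges.
\begin{verbatim}
# Illustrative sketch: the coefficients c_ell and p_ell of the scallopy curve,
# cf. (eq: for c) and (eq: for p).
import math

def c_ell(T1, ell):
    return (1.0 / (ell + 1)) * (1 + math.sqrt(1 - (ell + 1) / ell * T1))

def p_ell(T1, ell):
    c = c_ell(T1, ell)
    return 4 * c * (1 - ell * c) / (1 - (ell - 1) * c) ** 2

for ell, T1 in [(2, 0.6), (3, 0.7), (4, 0.78)]:
    assert (ell - 1) / ell < T1 <= ell / (ell + 1)        # the ell-th piece
    c, p = c_ell(T1, ell), p_ell(T1, ell)
    assert 1 / (ell + 1) <= c < 1 / ell and 0 < p <= 1
    print(f"ell={ell}, T1*={T1}: c_ell={c:.4f}, p_ell={p:.4f}")
\end{verbatim}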
\begin{figure}
\caption{\small $c_{\ell}$ (left) and $p_{\ell}$ (right) as a function of $T_1^*$.}
\label{fig: cl pl T1}
\end{figure}
\begin{figure}
\caption{\small $g_{\ell}^*$ for $\ell\in\mathbb{N}$ and $T_1^*\in(\frac{\ell-1}{\ell},\frac{\ell}{\ell+1}]$.}
\label{fig: graphon ell}
\end{figure}
\section{Main results} \label{S1.4}
In Section~\ref{ss.ass} we state two assumptions. In Section~\ref{ss.scal} we identify the scaling behaviour of $s_\infty$ for fixed $T_1^*$ and $T_2^* \downarrow T_1^{*3}$, respectively, $T_2^* \uparrow T_1^{*3}$. It turns out that the way in which $s_\infty$ tends to zero differs in these two cases. In Section~\ref{ss.opt} we characterise the asymptotic structure of random graphs drawn from the microcanonical ensemble for fixed $T_1^*$ and $T_2^* \downarrow T_1^{*3}$, respectively, $T_2^* \uparrow T_1^{*3}$. It turns out that the structure differs in these two cases.
\subsection{Assumptions} \label{ss.ass}
Throughout the sequel we make the following two assumptions:
\begin{assumption} \label{assumption} {\rm Fix the edge density $T_1^*\in(0,1)$ and consider the triangle density $T_1^{*3}+\epsilon$ for some $\epsilon$, either positive or negative. Consider the associated Lagrange multipliers $\vec{\theta}^*_{\infty} (\epsilon):=(\theta^*_1(\epsilon),\theta^*_2(\epsilon))$. Then, for $\epsilon$ sufficiently small, we have the representation \begin{equation} \sup_{\tilde{h}\in\tilde{W}}\left[\theta_1^*(\epsilon)T_1(\tilde{h}) + \theta_2^*(\epsilon)T_2(\tilde{h}) - I(\tilde{h})\right] = [\theta_1T_1^* - I(T_1^*)] + (\gamma_1T_1^* + \gamma_2T_1^{*3})\epsilon +O(\epsilon^2), \end{equation} where $\theta_1 :=\theta^*_1(0), \gamma_1 = \theta^{*'}_1(0)$ and $\gamma_2 = \theta^{*'}_2(0)$. (It turns out that $\theta_2 :=\theta^*_2(0)=0$.)} \end{assumption}
\begin{assumption} \label{assumption 2} {\rm Fix the edge density $T_1^*\hspace{-1pt}\in\hspace{-1pt}(0,1)$ and consider the triangle density $T_1^{*3}+\epsilon$ for some $\epsilon$, either positive or negative. Consider the microcanonical entropy \begin{equation} \label{eq: micro ass} -J(\epsilon) := \sup\{-I(\tilde{h}):~\tilde{h}\in\tilde{W},~T_1(\tilde{h}) = T_1^*, ~T_2(\tilde{h}) = T_1^{*3}+\epsilon\}. \end{equation} Then, for $\epsilon$ sufficiently small, the maximizer $h_{\epsilon}^*$ of \eqref{eq: micro ass} has the form \begin{equation} \label{eq: opt graphon} h_{\epsilon}^* = T_1^* + g_{\epsilon}, \qquad g_{\epsilon} = g_{11}1_{I\times I} + g_{12}1_{(I\times J)\cup( J\times I)} + g_{22}1_{J\times J}, \end{equation} for some $g_{11},g_{12},g_{22}\in[-T_1^*,1-T_1^*]$ and $I,J\subset[0,1]$. } \end{assumption}
\begin{remark} {\rm Assumption 1 is needed to prove Theorems~\ref{thm:perturbationdown}--\ref{thm: perturbationup scallop}. In Section \ref{S2.1 Chapter 5} we show that Assumption~\ref{assumption} is true when $T_1^*\in [\frac{1}{2},1)$, which implies the scalings in \eqref{eq: el 0} and \eqref{eq: el 2} below. We can prove the scalings in \eqref{eq: el 0} and \eqref{eq: el 1} below without Assumption~\ref{assumption}, but at the cost of replacing `$=$' by `$\geq$' and replacing $\lim$ by $\limsup$. If Assumption \ref{assumption} fails, then these scalings hold with strict inequality.} \end{remark}
\begin{remark} {\rm Assumption 2 is needed to prove Propositions~\ref{thm:optimal perturbation2}--\ref{thm:optimal perturbation3}. Importantly, the validity of Assumption \ref{assumption 2} is firmly backed by the extensive numerical experiments performed in \cite{KRRS172}, showing that close to the ER line the optimizing graphon has the form \eqref{eq: opt graphon}. The assumption reflects the following informal argument. Suppose that we want to maximise the microcanonical entropy among block graphons. Then we expect the entropy to decrease when we add more structure to the graphon. An $m \times m$ block graphon corresponds to a random graph where the vertices are divided into $m$ groups, and {\em within} each group we have an Erd\H{o}s-R\'enyi random graph with a certain retention probability. We expect that the microcanonical entropy decreases as $m$ increases. } \end{remark}
\subsection{Scaling of the specific relative entropy} \label{ss.scal}
The variational problem $J(\epsilon)$ in \eqref{eq: micro ass} was solved in \cite{KRRS18} for the case $\epsilon>0$, while the case $\epsilon<0$ remained unsolved. We consider small $\epsilon$ only, which is simpler and yields more intuition about the way the constraint is achieved. In \cite{KRRS17} the maximisers of the microcanonical entropy are identified numerically. The optimal graphons obtained agree with the optimal graphons that we derive rigorously.
\begin{theorem} \label{thm:perturbationdown} For $T_1^* \in (0,1)$ and $T_1^*\neq \frac{1}{2}$, \begin{equation} \label{eq: el 0} \lim_{\epsilon \downarrow 0}\, \epsilon^{-1} s_\infty(T_1^*,T_1^{*3} +3T_1^*\epsilon) = A(T_1^*) := - \frac{1}{1-2T_1^*}\log\frac{T_1^*}{1-T_1^*} \in (0,\infty). \end{equation} \end{theorem}
\begin{theorem} \label{thm:perturbationup} For $T_1^* \in (0,\frac{1}{2}]$, \begin{equation} \label{eq: el 1} \lim_{\epsilon \downarrow 0}\, \epsilon^{-2/3} s_\infty(T_1^*,T_1^{*3} - T_1^{*3}\epsilon) = B(T^*_1) := \frac{1}{4}\frac{T_1^*}{1-T_1^*} \in (0,\infty). \end{equation} \end{theorem}
\begin{theorem} \label{thm: perturbationup scallop} For $T_1^*\in(\frac{1}{2},1)$, \begin{equation} \label{eq: el 2} \lim_{\epsilon\downarrow0}\epsilon^{-2/3} s_{\infty}(T_1^*,T_1^{*3}-T_1^{*3}\epsilon) = f(T_1^*,y^*)\in(0,\infty), \end{equation} where $y^* = y^*(T^*_1) \in (-T_1^*,0)$ is the unique point where the function $x\mapsto f(T_1^*,x)$ defined by \begin{equation} \label{def: f function} f(T_1^*,x):= T_1^{*2}\frac{I(T_1^*+x)-I(T_1^*) - I'(T_1^*)x}{x^2}, \quad x\in(-T^*_1,0), \end{equation} attains its global minimum. \end{theorem}
The main implication of Theorems~\ref{thm:perturbationdown}--\ref{thm: perturbationup scallop} is that, for a fixed value of the edge density, it is less costly in terms of relative entropy to increase the density of triangles than to decrease the density of triangles. Above the ER line the cost is linear in the distance, below the ER line the cost is proportional to the $\tfrac23$-power of the distance.
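The constants $A(T_1^*)$, $B(T_1^*)$ and $f(T_1^*,y^*)$ in Theorems~\ref{thm:perturbationdown}--\ref{thm: perturbationup scallop} are easy to evaluate numerically. The sketch below (ours, using {\tt scipy} for the one-dimensional minimisation) does so. Note that $B(T_1^*)=\tfrac12 T_1^{*2}I''(T_1^*)=\lim_{x\uparrow 0}f(T_1^*,x)$, so for $T_1^*\le\tfrac12$ the numerical minimum sits at the right end of the search interval, whereas for $T_1^*>\tfrac12$ it is attained at an interior point $y^*$.
\begin{verbatim}
# Illustrative sketch: the constants A(T1*), B(T1*) and min_x f(T1*,x)
# appearing in the three theorems above, evaluated numerically.
import math
from scipy.optimize import minimize_scalar

def I(u):
    return 0.5 * u * math.log(u) + 0.5 * (1 - u) * math.log(1 - u)

def Iprime(u):
    return 0.5 * math.log(u / (1 - u))

def A(T1):
    return -math.log(T1 / (1 - T1)) / (1 - 2 * T1)

def B(T1):
    return 0.25 * T1 / (1 - T1)

def f(T1, x):
    return T1**2 * (I(T1 + x) - I(T1) - Iprime(T1) * x) / x**2

for T1 in [0.3, 0.6, 0.7]:
    # stay away from x = 0 to avoid catastrophic cancellation in f
    res = minimize_scalar(lambda x: f(T1, x),
                          bounds=(-T1 + 1e-6, -1e-3), method="bounded")
    print(f"T1*={T1}: A={A(T1):.4f}, B={B(T1):.4f}, "
          f"min_x f={res.fun:.4f} at x={res.x:.4f}")
\end{verbatim}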
For the case $T_1^* \in [\frac{1}{2},1)$ the above results are illustrated in Figure \ref{fig: scaling entropy}. In the left panel we plot $\epsilon^{2/3} f(T_1^*,y^*)$, in the right panel we plot $\epsilon A(T^*_1)$. In both panels we take $\epsilon$ sufficiently small and pick four different values of $T_1^*$.
\begin{figure}
\caption{\small Limit of scaled $s_{\infty}$ as a function of $\epsilon$ for $\epsilon$ sufficiently small.}
\label{fig: scaling entropy}
\end{figure}
\subsection{Scaling of the optimal graphon} \label{ss.opt}
In Propositions~\ref{thm:optimal perturbation1}--\ref{thm:optimal perturbation3} below we identify the structure of the optimal graphons corresponding to the perturbed constraints in the microcanonical ensemble in the limit as $n\to\infty$.
\begin{proposition} \label{thm:optimal perturbation1} For the pair of constraints $(T_1^*,T_1^{*3}+3T_1^*\epsilon)$ with $\epsilon>0$ sufficiently small, the unique optimiser $h_{\epsilon}^*$ of the microcanonical entropy is given by \begin{equation} \label{eq: optimal g*} h_{\epsilon}^*(x,y) = \left\{ \begin{array}{ll} \hspace{2pt}h_{11} +o(\epsilon), &(x,y) \in [0,\lambda\epsilon]^2 ,\\[0.2cm] 1-T_1^*+h_1\epsilon + o(\epsilon), &(x,y) \in [0,\lambda\epsilon]\times (\lambda\epsilon,1] \cup (\lambda\epsilon,1]\times [0,\lambda\epsilon],\\[0.2cm] T_1^*+h_2\epsilon + o(\epsilon), &(x,y) \in (\lambda\epsilon,1]^2,\\ \end{array} \right. \end{equation} where \begin{equation} \lambda :=\frac{1}{(1-2T_1^*)^2}, \qquad h_{1}:=\frac{1}{2}h_{2}, \qquad \qquad h_{2}:=-\frac{2}{1-2T_1^*}, \end{equation} and $h_{11}$ solves the equation $I'(h_{11})=3I'(1-T_1^*)$. \end{proposition}
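Since $I'(u)=\tfrac12\log\tfrac{u}{1-u}$, the equation $I'(h_{11})=3I'(1-T_1^*)$ in Proposition~\ref{thm:optimal perturbation1} can be solved in closed form, namely $h_{11}=(1-T_1^*)^3/\big(T_1^{*3}+(1-T_1^*)^3\big)$. The sketch below (ours, purely illustrative) evaluates the constants and checks the defining equation.
\begin{verbatim}
# Illustrative sketch: the constants lambda, h_1, h_2 and h_11 of the optimal
# graphon above the ER line, for the pair of constraints (T1*, T1*^3 + 3 T1* eps).
import math

def Iprime(u):
    return 0.5 * math.log(u / (1 - u))

def constants(T1):
    lam = 1.0 / (1 - 2 * T1) ** 2
    h2 = -2.0 / (1 - 2 * T1)
    h1 = 0.5 * h2
    # I'(h11) = 3 I'(1 - T1)  <=>  h11/(1 - h11) = ((1 - T1)/T1)^3
    r = ((1 - T1) / T1) ** 3
    h11 = r / (1 + r)
    return lam, h1, h2, h11

for T1 in [0.3, 0.7]:
    lam, h1, h2, h11 = constants(T1)
    assert abs(Iprime(h11) - 3 * Iprime(1 - T1)) < 1e-12
    print(f"T1*={T1}: lambda={lam:.3f}, h1={h1:.3f}, h2={h2:.3f}, h11={h11:.4f}")
\end{verbatim}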
\begin{proposition} \label{thm:optimal perturbation2} For $T_1^*\in(0,\frac{1}{2}]$ and $T_2^* = T_1^{*3}-T_1^{*3}\epsilon$, $\epsilon \downarrow 0$, the optimal graphon is given by \begin{equation} \label{hpert2*} h^*_{\epsilon} = T_1^* + \epsilon^{1/3}{g}^* + o(\epsilon^{2/3}) \qquad \text{\em (global perturbation)} \end{equation} with $g^*$ defined by \begin{equation} \label{eq: optimal pert g* 2} g^*(x,y) = \left\{ \begin{array}{ll} \hspace{-2pt}-T_1^*, &(x,y) \in [0,\frac{1}{2}]^2 ,\\[0.2cm] \hspace{5pt}T_1^*, &(x,y) \in [0,\frac{1}{2}]\times (\frac{1}{2},1] \cup (\frac{1}{2},1]\times [0,\frac{1}{2}],\\[0.2cm] \hspace{-2pt}-T_1^*, &(x,y) \in (\frac{1}{2},1]^2.\\ \end{array} \right. \end{equation} \end{proposition}
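The constraints in Proposition~\ref{thm:optimal perturbation2} can be verified directly: for the pure block function $T_1^*+\epsilon^{1/3}g^*$ (ignoring the $o(\epsilon^{2/3})$ correction) one has $T_1=T_1^*$ and $T_2=T_1^{*3}-T_1^{*3}\epsilon$ exactly, because the terms linear and quadratic in $g^*$ vanish. The sketch below (ours, illustrative only) confirms this numerically.
\begin{verbatim}
# Illustrative sketch: for h = T1 + eps^{1/3} g* with the two-block g* above,
# check numerically that T1(h) = T1 and T2(h) = T1^3 - T1^3 eps.
import numpy as np

def densities(H):
    m = H.shape[0]
    return H.mean(), np.trace(H @ H @ H) / m**3

T1, eps, m = 0.4, 1e-3, 200
s = np.where(np.arange(m) < m // 2, -1.0, 1.0)   # -1 on [0,1/2), +1 on [1/2,1]
g_star = -T1 * np.outer(s, s)                    # -T1 on the diagonal blocks, +T1 off them
H = T1 + eps ** (1 / 3) * g_star

T1_h, T2_h = densities(H)
print("T1(h) =", T1_h, " (target", T1, ")")
print("T2(h) =", T2_h, " (target", T1**3 - T1**3 * eps, ")")
\end{verbatim}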
\begin{proposition} \label{thm:optimal perturbation3} For $T_1^*\in(\frac{1}{2},1)$ and $T_2^* = T_1^{*3}-T_1^{*3}\epsilon$, $\epsilon\downarrow 0$, the optimal graphon is given by \begin{equation} \label{hpert2* scallop} h^*_{\epsilon} = T_1^*+g_{\epsilon}^* \qquad \text{\em (local perturbation)} \end{equation} with $g_{\epsilon}^*$ defined by \begin{equation} g_{\epsilon}^*(x,y) := \begin{cases}
~\frac{T_1^{*2}}{y^*}\epsilon^{2/3},~&\qquad (x,y)\in[0,1-\frac{T_1^*}{|y^*|}\epsilon^{1/3}]^2,\\[0.2cm]
~T_1^*\epsilon^{1/3}, ~&\qquad (x,y)\in [0,1-\frac{T_1^*}{|y^*|}\epsilon^{1/3}]\times[1-\frac{T_1^*}{|y^*|}\epsilon^{1/3},1],\\[0.2cm]
&\qquad \text{ {\rm or} } (x,y)\in[1-\frac{T_1^*}{|y^*|}\epsilon^{1/3},1]\times[0,1-\frac{T_1^*}{|y^*|}\epsilon^{1/3}],\\[0.2cm]
~y^*,~&\qquad (x,y)\in[1-\frac{T_1^*}{|y^*|}\epsilon^{1/3},1]^2, \end{cases} \end{equation} where $y^*\in(-T_1^*,0)$ is as in Theorem~\ref{thm: perturbationup scallop}. \end{proposition}
The main implication of Propositions~\ref{thm:optimal perturbation1}--\ref{thm:optimal perturbation3} is that the optimal perturbation of the Erd\H{o}s-R\'enyi graphon is \emph{global} above the ER line and also below the ER line when the edge density is less than or equal to $\tfrac{1}{2}$, whereas it is \emph{local} below the ER line when the edge density is larger than $\tfrac{1}{2}$.
\section{Proofs of Theorems \ref{thm:perturbationdown}--\ref{thm: perturbationup scallop}} \label{S2 Chapter 5}
In Sections \ref{S2.1 Chapter 5}--\ref{S2.3} we prove Theorems \ref{thm:perturbationdown}--\ref{thm: perturbationup scallop}, respectively. Along the way we use Propositions \ref{thm:optimal perturbation1}--\ref{thm:optimal perturbation3}, which we prove in Section \ref{Optimal per}.
\subsection{Proof of Theorem~\ref{thm:perturbationdown}} \label{S2.1 Chapter 5}
Let \begin{equation} T_1(\epsilon) = T_1^*, \qquad T_2(\epsilon) = T_1^{*3} + 3T_1^*\epsilon. \end{equation} The factor $3T_1^*$ appearing in front of the $\epsilon$ is put in for convenience. We know that for every pair of graphical constraints $(T_1(\epsilon),T_2(\epsilon))$ there exists a unique pair of Lagrange multipliers $(\theta_1(\epsilon),\theta_2(\epsilon))$ corresponding to these constraints (to ease the notation we suppress $*$). For an elaborate discussion on existence and uniqueness we refer the reader to \cite{dHMRS18}. By considering the Taylor expansion of the Lagrange multipliers $(\theta_1(\epsilon), \theta_2(\epsilon))$ around $\epsilon=0$, we obtain \begin{equation} \theta_1(\epsilon) = \theta_1 + \gamma_1 \epsilon + \tfrac{1}{2}\Gamma_1\epsilon^2+O(\epsilon^3), \qquad \theta_2(\epsilon) = \gamma_2\epsilon+\tfrac{1}{2}\Gamma_2\epsilon^2 + O(\epsilon^3), \end{equation} where \begin{equation} \theta_1(0)=\theta_1 = I'(T_1^*), \:~ \gamma_1 = \theta'_1(0), \:~ \Gamma_1 = \theta''_1(0), \:~ \theta_2(0)=0, \:~ \gamma_2 = \theta'_2(0), \:~ \Gamma_2 = \theta''_2(0). \end{equation} These equalities follow from \cite[Lemma 5.3]{dHMRS18}. For $\epsilon = 0$ we have $T_2^*(0) = T_1^{*3}$, which shows that the constraints correspond to those of the Erd\H{o}s-R\'enyi random graph. We denote the two terms in the expression for $s_\infty$ in \eqref{varreprsinfty} by $I_1,I_2$, i.e., $s_\infty = I_1-I_2$ with \begin{equation} I_1:=\sup_{\tilde{h}\in \tilde{W}} \big[\vec{\theta}_\infty\cdot\vec{T}(\tilde{h})-I(\tilde{h})\big], \:\:\:I_2:=\sup_{\tilde{h}\in \tilde{W}^*} \big[\vec{\theta}_\infty\cdot\vec{T}(\tilde{h}) - I(\tilde{h})\big], \end{equation} and let $s_{\infty}(\epsilon)$ denote the relative entropy corresponding to the perturbed constraints. We distinguish between the cases $T_1^*\in[\frac{1}{2},1)$ and $T_1^*\in(0,\frac{1}{2})$.
\paragraph{Case I $T_1^*\in[\frac{1}{2},1)$:} According to \cite[Section 5]{dHMRS18}, if $T_1^*\in[\frac{1}{2},1)$ and $T_2^*\in[\frac{1}{8},1)$, then the corresponding Lagrange multipliers $(\theta_1,\theta_2)$ are both non-negative. Hence by \cite[Theorem 4.1]{CD13} we have that \begin{equation} I_1 :=\sup_{\tilde{h}\in\tilde{W}}\left[\theta_1(\epsilon)T_1(\tilde{h}) + \theta_2(\epsilon) T_2(\tilde{h}) - I(\tilde{h})\right] = \sup_{0\leq u\leq 1}\left[\theta_1(\epsilon)u+\theta_2(\epsilon)u^3-I(u)\right], \end{equation} and, consequently, \begin{equation} \label{eq: I1 CAseI} I_1 = \theta_1(\epsilon)u^*(\epsilon) + \theta_2(\epsilon)u^*(\epsilon)^3-I(u^*(\epsilon)). \end{equation} The optimiser $u^*(\epsilon) \in (0,1)$ corresponding to the perturbed multipliers $\theta_1(\epsilon)$ and $\theta_2(\epsilon)$ is analytic in $\epsilon$, as shown in \cite{RY13}. Therefore, a Taylor expansion around $\epsilon=0$ gives \begin{equation} \label{eq: expansion} u^*(\epsilon) = T_1^* + \delta \epsilon + \tfrac12\Delta \epsilon^2 +O(\epsilon^3), \end{equation} where $\delta = {u^{*}}'(0)$ and $\Delta = {u^{*}}''(0)$. Hence $I_1$ can be written as \begin{equation} \label{eq: expansion I} I_1 = \theta_1T_1^* -I(T_1^*)+(\gamma_1 T_1^*+\gamma_2 T_1^{*3})\epsilon + O(\epsilon^2). \end{equation} Moreover, \begin{equation} \begin{aligned} I_2 &= \big[\theta_1 + \gamma_1 \epsilon + \tfrac{1}{2}\Gamma_1\epsilon^2 +O(\epsilon^3)\big]T_1^* + \big[\gamma_2\epsilon +\tfrac{1}{2}\Gamma_2\epsilon^2 + O(\epsilon^3)\big](T_1^{*3} + 3T_1^*\epsilon) - \inf_{\tilde{h}\in\tilde{W}_{\epsilon}^*} I(\tilde{h}) \\ &=\theta_1 T_1^* + \gamma_1 T_1^*\epsilon +\tfrac{1}{2}\Gamma_1 T_1^*\epsilon^2 + T_1^{*3} \gamma_2\epsilon + \tfrac{1}{2}\Gamma_2 T_1^{*3}\epsilon^2 +3T_1^* \gamma_2 \epsilon^2 - J^{\downarrow}(\epsilon) + O(\epsilon^3), \end{aligned} \end{equation} where \begin{equation} J^{\downarrow}(\epsilon) :=\inf_{\tilde{h}\in\tilde{W}_{\epsilon}^*} I(\tilde{h}), \qquad \tilde{W}_{\epsilon}^*:=\{\tilde{h}\in \tilde{W}\colon\, T_1(\tilde{h})=T_1^*,\ T_2(\tilde{h}) = T_1^{*3} +3T_1^*\epsilon\}. \end{equation} Consequently, \begin{equation} \label{eq: canonical entropy inf} s_{\infty}(T_1^*,T_1^{*3}+3T_1^{*3}\epsilon) = J^{\downarrow}(\epsilon) - I(T_1^*)+O(\epsilon^2). \end{equation} A straightforward computation of the entropy of $h_{\epsilon}^*$ in \eqref{eq: optimal g*} shows that \begin{equation} \begin{aligned} J^{\downarrow}(\epsilon) = I(T_1^*)+I'(T_1^*)h_2\epsilon+o(\epsilon) = I(T_1^*) - \frac{1}{1-2T_1^*}\log\frac{T_1^*}{1-T_1^*}\epsilon + o(\epsilon). \end{aligned} \end{equation} Hence we obtain that \begin{equation} s_{\infty}(T_1^*,T_1^{*3}+3T_1^{*3}\epsilon) = - \frac{1}{1-2T_1^*}\log\frac{T_1^*}{1-T_1^*}\,\epsilon +o(\epsilon). \end{equation}
\paragraph{Case II $T_1^* \in(0,\frac{1}{2})$:} Consider the term \begin{equation} I_1 :=\sup_{\tilde{h}\in\tilde{W}}\left[\theta_1(\epsilon)T_1(\tilde{h}) + \theta_2(\epsilon) T_2(\tilde{h}) - I(\tilde{h})\right], \end{equation} as above. If Assumption \ref{assumption} applies, then this case can be treated in the same way as Case I. Otherwise, consider the lower bound \begin{equation} \sup_{\tilde{h}\in\tilde{W}}\left[\theta_1(\epsilon)T_1(\tilde{h}) + \theta_2(\epsilon) T_2(\tilde{h}) - I(\tilde{h})\right]\geq \sup_{0\leq u\leq 1}\left[\theta_1(\epsilon)u+\theta_2(\epsilon)u^3-I(u)\right]. \end{equation} The arguments for Case I following \eqref{eq: I1 CAseI} still apply, and \eqref{eq: canonical entropy inf} is obtained with an inequality instead of an equality.
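Before turning to the next theorem, we note that the scalar variational problem appearing in Case I is easy to explore numerically. The following Python sketch (ours, purely illustrative) evaluates $\sup_{0\le u\le 1}[\theta_1 u+\theta_2 u^3-I(u)]$; with $\theta_2=0$ and $\theta_1=I'(T_1^*)$ the maximiser is $u^*=T_1^*$, the unperturbed Erd\H{o}s-R\'enyi value.
\begin{verbatim}
# Illustrative sketch: the scalar variational problem
#   sup_{0 <= u <= 1} [theta1*u + theta2*u^3 - I(u)]
# to which I_1 reduces when both multipliers are non-negative.
import math
from scipy.optimize import minimize_scalar

def I(u):
    if u <= 0.0 or u >= 1.0:
        return 0.0
    return 0.5 * u * math.log(u) + 0.5 * (1 - u) * math.log(1 - u)

def I1(theta1, theta2):
    res = minimize_scalar(lambda u: -(theta1 * u + theta2 * u**3 - I(u)),
                          bounds=(1e-9, 1 - 1e-9), method="bounded")
    return -res.fun, res.x

# Unperturbed case: theta2 = 0 and theta1 = I'(T1*) make u* = T1*.
T1 = 0.6
theta1 = 0.5 * math.log(T1 / (1 - T1))
val, u_star = I1(theta1, 0.0)
print("u* =", round(u_star, 4), " (should be close to", T1, ")")
print("I_1 =", round(val, 6), " (should be close to",
      round(theta1 * T1 - I(T1), 6), ")")
\end{verbatim}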
\subsection{Proof of Theorem~\ref{thm:perturbationup}} \label{S2.2}
In this section we omit the computations that are similar to those in the proof of Theorem~\ref{thm:perturbationdown}, as provided in Section~\ref{S2.1 Chapter 5}. Let \begin{equation} T_1(\epsilon) = T_1^*, \qquad T_2(\epsilon) = T_1^{*3} - T_1^{*3}\epsilon. \end{equation} The factor $T_1^{*3}$ appearing in front of $\epsilon$ is put in for convenience (without loss of generality). The perturbed Lagrange multipliers are \begin{equation} \theta_1(\epsilon) = \theta_1 + \gamma_1 \epsilon + \tfrac{1}{2}\Gamma_1\epsilon^2+O(\epsilon^3), \qquad \theta_2(\epsilon) = \gamma_2\epsilon+\tfrac{1}{2}\Gamma_2\epsilon^2 + O(\epsilon^3), \end{equation} where \begin{equation} \theta_1 = I'(T_1^*), \qquad \gamma_1 = \theta'_1(0), \qquad \Gamma_1 = \theta''_1(0) \qquad \gamma_2 = \theta'_2(0), \qquad \Gamma_2 = \theta''_2(0). \end{equation} We denote the two terms in the expression for $s_\infty$ in \eqref{varreprsinfty} by $I_1,I_2$, i.e., $s_{\infty} = I_1-I_2$, and let $s_{\infty}(\epsilon)$ denote the perturbed relative entropy. The computations for $I_1$ are similar as before, because the exact form of the constraint does not affect the expansions in \eqref{eq: expansion} and \eqref{eq: expansion I}. For $I_2$, on the other hand, we have \begin{equation} \begin{aligned} I_2 &= \theta_1T_1^*+\gamma_1T_1^*\epsilon + \tfrac{1}{2}\Gamma_1 T_1^*\epsilon^2 +T_1^{*3}\gamma_2\epsilon+\tfrac{1}{2}\Gamma_2 T_1^{*3}\epsilon^2-T_1^{*3}\gamma_2 \epsilon^2 - J_1^{\uparrow}(\epsilon) +\,O(\epsilon^3) \\ &= \theta_1T_1^*+\gamma_1T_1^*\epsilon +T_1^{*3}\gamma_2\epsilon - J_1^{\uparrow}(\epsilon) +O(\epsilon^2), \end{aligned} \end{equation} where \begin{equation}\label{eq:Jar} J_1^{\uparrow}(\epsilon) :=\inf_{\tilde{h}\in\tilde{W}_{\epsilon}^*} I(\tilde{h}), \qquad \tilde{W}_{\epsilon}^* := \big\{\tilde{h}\in \tilde{W}\colon\, T_1(\tilde{h})=T_1^*,\ T_2(\tilde{h}) = T_1^{*3} -T_1^{*3}~\epsilon\big\}. \end{equation} Consequently, \begin{equation} \label{eq: entropy upwards 22} s_{\infty}(T_1^*,T_1^*-T_1^{*3}\epsilon) = J_1^{\uparrow}(\epsilon) - I(T_1^*)+O(\epsilon^2). \end{equation} Denote by $\tilde{h}_{\epsilon}^*$ the optimiser of the variational problem $J_1^{\uparrow}(\epsilon)$, as defined in \eqref{eq:Jar}. From Proposition \ref{thm:optimal perturbation2} we know that, for $T_1^*\in (0,\frac{1}{2}]$, any optimal graphon in the equivalence class $\tilde{h}_{\epsilon}^*$, denoted by $h_{\epsilon}^*$ for simplicity in the notation, has the form \begin{equation} \label{hpert2* b} h_{\epsilon}^* = T_1^* + \epsilon^{1/3}{g}^* + o(\epsilon^{2/3}) \end{equation} with $g^*$ given by \begin{equation} \label{eq: optimal pert g* 2 b} g^*(x,y) = \left\{ \begin{array}{ll} \hspace{-2pt}-T_1^*, &(x,y) \in [0,\frac{1}{2}]^2 ,\\[0.2cm] \hspace{5pt}T_1^*, &(x,y) \in [0,\frac{1}{2}]\times (\frac{1}{2},1] \cup (\frac{1}{2},1]\times [0,\frac{1}{2}],\\[0.2cm] \hspace{-2pt}-T_1^*, &(x,y) \in (\frac{1}{2},1]^2.\\ \end{array} \right. \end{equation} Hence \begin{equation} J_1^{\uparrow}(\epsilon) = I(T_1^*) +\frac{1}{2}T_1^{*2}I''(T_1^*)\epsilon^{2/3} + o(\epsilon^{2/3}) = I(T_1^*) + \frac{1}{4}\frac{T_1^*}{1-T_1^*}\epsilon^{2/3} + o(\epsilon^{2/3}), \end{equation} which gives \begin{equation} s_{\infty}(T_1^*,T_1^*-T_1^{*3}\epsilon) = \frac{1}{4}\frac{T_1^*}{1-T_1^*} \epsilon^{2/3} + o(\epsilon^{2/3}). \end{equation}
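The leading-order behaviour of $J_1^{\uparrow}(\epsilon)$ can also be checked numerically: for the pure block function $T_1^*+\epsilon^{1/3}g^*$ the entropy $I(\cdot)$ is an explicit average of two Bernoulli entropies. The sketch below (ours, illustrative only) compares it with $I(T_1^*)+\tfrac14\tfrac{T_1^*}{1-T_1^*}\epsilon^{2/3}$.
\begin{verbatim}
# Illustrative sketch: evaluate I(h*_eps) for the two-block perturbation
# h*_eps = T1 + eps^{1/3} g* and compare with I(T1) + (1/4) T1/(1-T1) eps^{2/3}.
import math

def I(u):
    return 0.5 * u * math.log(u) + 0.5 * (1 - u) * math.log(1 - u)

T1 = 0.4
for eps in [1e-2, 1e-3, 1e-4]:
    c = eps ** (1 / 3)
    # g* equals -T1 on half of [0,1]^2 (the two diagonal blocks) and +T1 on
    # the other half (the two off-diagonal blocks).
    I_h = 0.5 * I(T1 - c * T1) + 0.5 * I(T1 + c * T1)
    predicted = I(T1) + 0.25 * T1 / (1 - T1) * eps ** (2 / 3)
    print(f"eps={eps:.0e}:  I(h*_eps)={I_h:.8f}  prediction={predicted:.8f}")
\end{verbatim}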
\subsection{Proof of Theorem~\ref{thm: perturbationup scallop}} \label{S2.3}
The computations leading to the expression for the relative entropy in the right-hand side of \eqref{eq: el 2} are similar to those in Section \ref{S2.2}, and we omit them. Hence we have \begin{equation} \label{eq: entropy upwards 2ell} s_{\infty}(T_1^*,T_1^*-T_1^{*3}\epsilon)= J_{2}^{\uparrow}(\epsilon) - I(T_1^*)+O(\epsilon^2), \end{equation} where, for $T_1^*\in(\frac{1}{2},1)$, \begin{equation}\label{eq:Jar2} J_{2}^{\uparrow}(\epsilon) :=\inf_{\tilde{h}\in\tilde{W}_{\epsilon}^*} I(\tilde{h}), \qquad \tilde{W}_{\epsilon}^* := \big\{\tilde{h}\in \tilde{W}: T_1(\tilde{h})=T_1^*,\ T_2(\tilde{h}) = T_1^{*3} -T_1^{*3}~\epsilon\big\}. \end{equation} Denote by $\tilde{h}_{\epsilon}^*$ the optimiser of the variational problem $J_2^{\uparrow}(\epsilon)$, as defined in \eqref{eq:Jar2}. From Proposition \ref{thm:optimal perturbation3} we know that, for $T_1^*\in (\frac{1}{2},1)$, any optimal graphon in the equivalence class $\tilde{h}_{\epsilon}^*$, denoted by $h_{\epsilon}^*$ for simplicity in the notation, has the form \begin{equation} h_{\epsilon}^* = T_1^* + g_{\epsilon}^* \end{equation} with $g_{\epsilon}^*$ given by \begin{equation} g_{\epsilon}^*(x,y) := \begin{cases} ~{\displaystyle \tfrac{T_1^{*2}}{y^*}}\epsilon^{2/3},
~&\qquad (x,y)\in[0,1-{\displaystyle \tfrac{T_1^*}{|y^*|}}\epsilon^{1/3}]\times [0,1-{\displaystyle \tfrac{T_1^*}{|y^*|}}\epsilon^{1/3}],\\
&\\
~T_1^*\epsilon^{1/3}, ~&\qquad (x,y)\in [0,1-{\displaystyle \tfrac{T_1^*}{|y^*|}}\epsilon^{1/3}]
\times[1-{\displaystyle \tfrac{T_1^*}{|y^*|}}\epsilon^{1/3},1]\\
&\\
&\qquad ~ \text{ {\rm or} } (x,y)\in[1-{\displaystyle \tfrac{T_1^*}{|y^*|}}\epsilon^{1/3},1]
\times[0,1-{\displaystyle \tfrac{T_1^*}{|y^*|}}\epsilon^{1/3}],\\
&\\
~y^*,~&\qquad (x,y)\in[1-{\displaystyle \tfrac{T_1^*}{|y^*|}}\epsilon^{1/3},1]
\times[1-{\displaystyle \tfrac{T_1^*}{|y^*|}}\epsilon^{1/3},1]. \end{cases} \end{equation} Hence we have \begin{equation} s_{\infty}(T_1^*,T_1^*-T_1^{*3}\epsilon) = f(T_1^*,y^*)\epsilon^{2/3} + o(\epsilon^{2/3}), \end{equation} where $y^*\in(-T_1^*,0)$ minimizes the function $x\mapsto f(T_1^*,x)$ defined by \begin{equation} \label{fdef} f(T_1^*,x):= T_1^{*2}\frac{I(T_1^*+x)-I(T_1^*) - I'(T_1^*)x}{x^2}, \quad x\in(-T_1^*,0). \end{equation} We proceed by showing that $x\mapsto f(T_1^*,x)$ has a unique minimizer $y^*\in(-T_1^*,0).$
\noindent $\circ$~First, we show that $f(T_1^*,x) >0$ for every $T_1^*\in(0,1)$ and every $x\in(-T_1^*,0)$, or equivalently \begin{equation} I(T_1^*+x) - I(T_1^*) - I'(T_1^*)x >0. \end{equation} From the mean-value theorem we have that, for any given $x\in(-T_1^*,0)$, there exists $\xi\in(T_1^*+x,T_1^*)$ such that $I(T_1^*+x) - I(T_1^*) = I'(\xi)x$. Because $I'$ is an increasing function, this implies that \begin{equation} I(T_1^*+x) - I(T_1^*) - I'(T_1^*)x = (I'(\xi)-I'(T_1^*))x >0, \end{equation} recalling that $x\in(-T_1^*,0)$ and $\xi\in(T_1^*+x,T_1^*)$.
\noindent $\circ$~Second, we show that the function $x\mapsto f(T_1^*,x)$ attains a unique minimum at some point $y^*\in(-T_1^*, 0)$. A straightforward computation shows that the derivative of $f(T_1^*, \cdot)$ is equal to \begin{equation} \label{eq: derivative f} f'(T_1^*, x) = T_1^{*2}\frac{\left(I'(T_1^*+x)-I'(T_1^*)\right)x^2 - 2x\left(I(T_1^*+x)-I(T_1^*)-I'(T_1^*)x\right)}{x^4}. \end{equation} Letting $x\uparrow 0$ we observe that $\lim_{x\uparrow 0}f'(T_1^*, x) = \frac{1}{6}(T_1^*)^2\,I^{(3)}(T_1^*)$, which is positive because $T_1^*>\tfrac{1}{2}$, and by taking the limit $x\rightarrow -T_1^*$ we observe that \begin{equation} \lim_{x\rightarrow-T_1^*}f'(T_1^*, x) = -\infty. \end{equation} Hence the function $f(T_1^*, \cdot)$ is decreasing in a neighborhood of $-T_1^*$, while it is increasing in a neighborhood of zero. Consequently, there must be at least one point in $(-T_1^*,0)$ where the derivative is zero.
\noindent $\circ$~It remains to show that there is a unique point $y^*$ in $(-T_1^*,0)$ where the derivative is zero. Suppose that $y^*$ is such a point where $f'(T_1^*, y^*) = 0$. Then from \eqref{eq: derivative f} we know that \begin{equation} \label{eq: derivative f2} \left(I'(T_1^*+y^*)-I'(T_1^*)\right)y^* - 2\left(I(T_1^*+y^*)-I(T_1^*)-I'(T_1^*)y^*\right) = 0. \end{equation} From the mean-value theorem we know that there exist $\xi_1\in(T_1^*+y^*, T_1^*)$ and $\xi_2\in(T_1^*+y^*, T_1^*)$ such that $I''(\xi_1)y^* = I'(T_1^*+y^*)-I'(T_1^*)$ and $I'(\xi_2)y^* = I(T_1^*+y^*)-I(T_1^*)$. Moreover, $\xi_2$ is unique since $I$ is a convex function. This is not necessarily true for $\xi_1$. Hence \eqref{eq: derivative f2} becomes \begin{equation} \label{eq: derivative f3} I''(\xi_1)y^* - 2(I'(\xi_2)-I'(T_1^*)) = 0. \end{equation} Applying again the mean-value theorem, we get that there exists a, not necessarily unique, $\xi_3\in(\xi_2, T_1^*)$ such that $I'(\xi_2)-I'(T_1^*) = I''(\xi_3)(\xi_2-T_1^*)$. Substituting this into \eqref{eq: derivative f3}, we obtain \begin{equation} \label{eq: derivative f4} I''(\xi_1)y^* - 2I''(\xi_3)(\xi_2 - T_1^*)= 0. \end{equation} Hence any possible solution $y^*$ satisfies the equation \begin{equation} \label{eq: derivative f5} y^* =-\frac{2I''(\xi_3)(T_1^*-\xi_2)}{I''(\xi_1)} = -\frac{2\xi_1(1-\xi_1)(T_1^*-\xi_2)}{\xi_3(1-\xi_3)}. \end{equation} Multiple solutions may arise due to the non-uniqueness of $\xi_1$ and $\xi_3$. Notice that if the function $I'$ attains a given slope at some $\xi^*$, it does so as well at $1-\xi^*$ (and nowhere else); use that $I''(u)$ is given by $1/(u(1-u))$. Therefore, other possible solutions may occur when we replace $\xi_1$ by $1-\xi_1$ and/or $\xi_3$ by $1-\xi_3$. However, it is directly seen that the right-hand side of \eqref{eq: derivative f5} is invariant under this substitution. Hence the point $y^*$ where the derivative of $f(T_1^*, \cdot)$ is equal to zero is unique. We finalize the proof of Proposition \ref{thm:optimal perturbation3} in the next section.
\section{Proofs of Propositions \ref{thm:optimal perturbation1}--\ref{thm:optimal perturbation3}} \label{Optimal per}
In Section \ref{S5.1} we state two lemmas (Lemmas \ref{lem: perturbation up optimal 1}--\ref{lem: perturbation up optimal 2} below) about variational formulas encountered in Section \ref{S2 Chapter 5}, and use them to prove Propositions \ref{thm:optimal perturbation1}--\ref{thm:optimal perturbation3}. In Section \ref{appA3} we provide the proof of these two lemmas, which requires an additional lemma about a certain function related to \eqref{fdef} (Lemma~\ref{lemma: function T1} below), whose proof is deferred to Section~\ref{end}.
\subsection{Key lemmas} \label{S5.1}
In Section \ref{S2 Chapter 5} the following variational problems were encountered: \begin{itemize} \item[(1)] For $T_1^*\in(0,1)$, \begin{equation} \label{eq: VP1} J^{\downarrow}(\epsilon) = \inf\big\{ I(\tilde{h})\colon\,\tilde{h}\in\tilde{W},\, T_1(\tilde{h})= T_1^*,\,T_2(\tilde{h}) = T_1^{*3} + 3T_1^*\epsilon\big\}. \end{equation} \item[(2)] For $T_1^*\in (0,\frac{1}{2}]$, \begin{equation} \label{eq: VP2} J_1^{\uparrow}(\epsilon) = \inf\big\{ I(\tilde{h})\colon\,\tilde{h}\in \tilde{W},\, T_1(\tilde{h})= T_1^*,\,T_2(\tilde{h}) = T_1^{*3} -T_1^{*3}\epsilon\big\}. \end{equation} \item[(3)] For $T_1^*\in (\frac{1}{2},1)$, \begin{equation} \label{eq: VP3} J_{2}^{\uparrow}(\epsilon) = \inf\big\{ I(\tilde{h})\colon\,\tilde{h}\in \tilde{W},\, T_1(\tilde{h}) = T_1^*,\,T_2(\tilde{h}) = T_1^{*3} -T_1^{*3}\epsilon\big\}. \end{equation} \end{itemize} The difference between the variational problems in \eqref{eq: VP2} and \eqref{eq: VP3} lies only in the possible values $T_1^*$ can take. We give them separate displays in order to easily refer to them later on.
In order to prove Propositions \ref{thm:optimal perturbation1}--\ref{thm:optimal perturbation3}, we need to analyse the above three variational problems for $\epsilon$ sufficiently small. The variational formula in \eqref{eq: VP1} was already analysed in \cite{KRRS18}. Solving the equations given in \cite[Theorem 1.1]{KRRS18} for the case of triangle density equal to $T_1^{*3} + 3T_1^*\epsilon$ with $\epsilon$ sufficiently small, we obtain the graphon given in \eqref{eq: optimal g*}. In what follows we concentrate on the variational formulas in \eqref{eq: VP2} and \eqref{eq: VP3}. We analyse these with the help of a perturbation argument. In particular, we show that the optimal perturbations are those given in \eqref{hpert2*} and \eqref{hpert2* scallop}, respectively. The claims in Propositions \ref{thm:optimal perturbation2} and \ref{thm:optimal perturbation3} follow directly from the following two lemmas.
We remind the reader that Assumption \ref{assumption 2} is in force, i.e., we look for the optimal graphon in the class of two-step graphons.
\begin{lemma} \label{lem: perturbation up optimal 1} Let $T_1^*\in(0,\frac{1}{2}]$. For $\epsilon>0$ sufficiently small, \begin{equation} \label{eq: VP2 solved} J_1^{\uparrow}(\epsilon) = I(T_1^*) + \frac{1}{4}\frac{T_1^*}{1-T_1^*}\epsilon^{2/3}+o(\epsilon^{2/3}). \end{equation} \end{lemma}
\begin{lemma} \label{lem: perturbation up optimal 2} Let $T_1^*\in(\frac{1}{2},1)$. For $\epsilon>0$ sufficiently small, \begin{equation} \label{eq: VP3 solved} J_{2}^{\uparrow}(\epsilon) = I(T_1^*) + f(T_1^*,y^*)\epsilon^{2/3} + o(\epsilon^{2/3}), \end{equation} where $f(T_1^*,x)$, $x\in(-T_1^*,0)$, and $y^*$ are as defined in Theorem {\rm \ref{thm: perturbationup scallop}}. \end{lemma}
In what follows we use the notation $f(\epsilon)\asymp g(\epsilon)$ when ${f(\epsilon)}/{g(\epsilon)}$ converges to a positive finite constant as $\epsilon\downarrow0$, and $f(\epsilon)=\omega(g(\epsilon))$ when ${f(\epsilon)}/{g(\epsilon)}$ diverges as $\epsilon\downarrow0$.
\subsection{Proof of Lemmas \ref{lem: perturbation up optimal 1}--\ref{lem: perturbation up optimal 2}} \label{appA3}
\begin{proof} Instead of the variational problems ${J}_1^{\uparrow}(\epsilon)$ and ${J}_2^{\uparrow}(\epsilon)$ we will consider for $\epsilon>0$ the variational problem \begin{equation} \label{eq: variational problem down} {J}^{\uparrow}(\epsilon) = \inf\{I(\tilde{h}): \tilde{h}\in\tilde{W}, T_1(\tilde{h}) = T_1^*, T_2(\tilde{h})=T_1^{*3}-T_1^{*3}\epsilon\}. \end{equation} Below we provide the technical details leading to the optimal perturbation corresponding to \eqref{eq: variational problem down}. At some point we will distinguish between the two cases $T_1^*\in(0,\frac{1}{2})$ and $T_1^*\in(\frac{1}{2},1)$, yielding the optimisers for ${J}_1^{\uparrow}(\epsilon)$ and ${J}_2^{\uparrow}(\epsilon)$. We denote the optimiser of \eqref{eq: variational problem down} by $\tilde{h}_{\epsilon}^{*\uparrow}$ (in order to keep the notation light, we denote a representative element by $h_{\epsilon}^{*\uparrow}$).
We start by writing the optimiser in the form $h_{\epsilon}^{*\uparrow} = T_1^* + \Delta H_{\epsilon}$ for some perturbation term $\Delta H_{\epsilon}$, which must be a bounded symmetric function on the unit square $[0,1]^2$ taking values in $\mathbb{R}$. The optimiser $h_{\epsilon}^{*\uparrow}$ has to meet the two constraints \begin{equation} T_1(h_{\epsilon}^{*\uparrow}) = T_1^*, \qquad \qquad T_2(h_{\epsilon}^{*\uparrow}) = T_1^{*3} - T_1^{*3}\epsilon. \end{equation} Consequently, $\Delta H_{\epsilon}$ has to meet the two constraints \begin{eqnarray} \label{constraints DH1 D} (K_1): \hspace{1.5cm} \mathcal{K}_1 &:=& \int_{[0,1]^2} \mathrm{d} x\,\mathrm{d} y ~\Delta H_{\epsilon}(x,y) = 0,\\ \label{constraints DH2 D} (K_2): \hspace{0.65cm} \mathcal{K}_{2}+\mathcal{K}_3 &:=& 3T_1^*\int_{[0,1]^3} \mathrm{d} x\,\mathrm{d} y\,\mathrm{d} z\, ~\Delta H_{\epsilon}(x,y)\Delta H_{\epsilon}(y,z)\\ \nonumber &&+ \int_{[0,1]^3} \mathrm{d} x\,\mathrm{d} y\,\mathrm{d} z\, ~\Delta H_{\epsilon}(x,y)\Delta H_{\epsilon}(y,z) \Delta H_{\epsilon}(z,x) =-T_1^{*3}\epsilon. \end{eqnarray} Observe that $\mathcal{K}_2 \geq 0$. By Assumption \ref{assumption 2}, we restrict to graphons of the form $T_1^*+\Delta H_{\epsilon}$ with \begin{equation} \label{eq: perturbation DHe} \Delta H_{\epsilon} = g_{11}1_{I\times I} + g_{12}1_{(I\times J)\cup (J\times I)} + g_{22} 1_{J\times J}, \end{equation} where $g_{11},g_{12},g_{22}\in(-T_1^*,1-T_1^*)$ and $I\subset[0,1]$, $J=I^c$. From \eqref{constraints DH1 D} we have \begin{equation} \label{eq: DDH1} \mathcal{K}_1=\lambda(I)^2g_{11} + 2\lambda(I)(1-\lambda(I))g_{12} +(1-\lambda(I))^2g_{22} =0, \end{equation} which yields \begin{equation} \label{eq: g12 upp} g_{12} = -\frac{1}{2}\left(\frac{\lambda(I)}{1-\lambda(I)}g_{11} + \frac{1-\lambda(I)}{\lambda(I)}g_{22}\right). \end{equation} A straighforward computation shows that \begin{equation} \label{eq: second order integral} \mathcal{K}_2=3T_1^*\left(\lambda(I)^3g_{11}^2+\left(1-\lambda(I)\right)^3g_{22}^2+2\lambda(I)(1-\lambda(I))g_{12}\left(\lambda(I)g_{11}+(1-\lambda(I))g_{22}+\frac{1}{2}g_{12}\right)\right). \end{equation} Using \eqref{eq: g12 upp}, we obtain \begin{equation} \label{eq: g12 upp 2} \mathcal{K}_2=3T_1^*\frac{1}{4}\lambda(I)(1-\lambda(I))\left(\frac{\lambda(I)}{(1-\lambda(I))}g_{11} - \frac{1-\lambda(I)}{\lambda(I)}g_{22}\right)^2. \end{equation} With a similar reasoning we also obtain that \begin{equation} \label{eq: third order integral} \mathcal{K}_3=\lambda(I)^3g_{11}^3+(1-\lambda(I))^3g_{22}^3+3g_{12}^2\lambda(I)(1-\lambda(I))\Big(\lambda(I)g_{11}+g_{22}(1-\lambda(I))\Big). \end{equation} We claim that in order to find the optimal graphon corresponding to the microcanonical entropy, it suffices to solve the following equations: \begin{equation} \label{eq: three integral equations} \mathcal{K}_1 =0, \qquad \mathcal{K}_2 = 0, \qquad \mathcal{K}_3 = -T_1^{*3}\epsilon. \end{equation} We prove this claim in Appendix~\ref{app}. The idea is that for $\epsilon$ sufficiently small, if $\mathcal{K}_2>0$, then we cannot have $\mathcal{K}_2+\mathcal{K}_3<0$. From the second equation in \eqref{eq: three integral equations} we obtain \begin{equation} \label{eq: g11} g_{11} = \frac{(1-\lambda(I))^2}{\lambda(I)^2}g_{22}, \end{equation} and substitution into \eqref{eq: g12 upp} gives \begin{equation} \label{eq: g12} g_{12} = -\frac{1-\lambda(I)}{\lambda(I)}g_{22}. 
\end{equation} The third equation in \eqref{eq: three integral equations} now yields \begin{equation} \label{eq: triple} g_{22}\frac{1-\lambda(I)}{\lambda(I)} =-T_1^* \epsilon^{1/3}. \end{equation} Substituting this equation into \eqref{eq: g11} and \eqref{eq: g12}, we obtain the two relations \begin{equation} \label{eq: g11 and g12 2} g_{11} = -\frac{1-\lambda(I)}{\lambda(I)}T_1^*\epsilon^{1/3}, \qquad g_{12} = T_1^* \epsilon^{1/3}. \end{equation} In what follows we need to distinguish between the following three cases: \begin{itemize} \item[\bf (I)] $\lambda(I)$ is constant and independent of $\epsilon$; \item[\bf (II)] $\lambda(I) \asymp \epsilon^{1/3}$; \item[\bf (III)] $\lambda(I)=\omega(\epsilon^{1/3})$. \end{itemize} The case $\lambda(I)=o(\epsilon^{1/3})$ can be excluded, since it yields via \eqref{eq: g11 and g12 2} that $g_{11}$ diverges as $\epsilon \downarrow 0$, while from \eqref{eq: perturbation DHe} we argued that $g_{11}\in(-T_1^*,1-T_1^*)$. Hence the above three cases are exhaustive. We treat them separately by computing their corresponding microcanonical entropies. Afterwards, by comparing the three entropies we identify the optimal graphon. A straightforward computation yields \begin{align*} I(h_{\epsilon}^*) &= \lambda(I)^2 I\left(T_1^*- T_1^*\frac{1-\lambda(I)}{\lambda(I)}\epsilon^{1/3}\right) + 2\lambda(I)(1-\lambda(I))I\left(T_1^*+T_1^*\epsilon^{1/3}\right) \\ &+ (1-\lambda(I))^2I\left(T_1^* - T_1^*\frac{\lambda(I)}{1-\lambda(I)}\epsilon^{1/3}\right). \numberthis{\label{eq:Taylor cases}} \end{align*}
\paragraph{Case (I).} A Taylor expansion in $\epsilon$ of all three terms in \eqref{eq:Taylor cases} yields \begin{equation} \label{eq: Micro 11} I(h_{\epsilon}^*) = I(T_1^*) + \frac{1}{2}I''(T_1^*)T_1^{*2}\epsilon^{2/3} - \frac{1}{6}I^{(3)}(T_1^*)T_1^{*3}\left(\frac{(1-2\lambda(I))^2}{\lambda(I)(1-\lambda(I))}\right)\epsilon + o(\epsilon). \end{equation}
\paragraph{Case (II).} We have $\lambda(I)=c\epsilon^{1/3}+o(\epsilon^{1/3})$ for some constant $c>0$. Substituting the expressions for $g_{11}$, $g_{12}$, $g_{22}$ from \eqref{eq: triple} and \eqref{eq: g11 and g12 2} into \eqref{eq:Taylor cases}, and looking only at the leading order term, we obtain \begin{equation} \label{eq: Second entropy} I(h_{\epsilon}^*) = I(T_1^*) + \Bigg(c^2\left(I\left(T_1^*-\frac{T_1^*}{c}\right)-I(T_1^*)\right) +cT_1^*I'(T_1^*)\Bigg)\epsilon^{2/3}+ O(\epsilon). \end{equation}
\paragraph{Case (III).} A Taylor expansion in $\epsilon$ of all three terms in \eqref{eq:Taylor cases} yields \begin{equation} \label{eq: Third entropy} I(h_{\epsilon}^*) = I(T_1^*) +\frac{1}{2}T_1^{*2}I''(T_1^*)\omega(\epsilon^{2/3}) + \omega(\epsilon). \end{equation}
In order to determine the optimal graphon we need to compare the expressions in \eqref{eq: Micro 11}--\eqref{eq: Third entropy}. The leading order term in \eqref{eq: Third entropy} is $\omega(\epsilon^{2/3})$, which entails that for $\epsilon$ sufficiently small this term is larger than the corresponding second order terms in \eqref{eq: Micro 11} and \eqref{eq: Second entropy}. Hence it suffices to compare the second order terms in \eqref{eq: Micro 11} and \eqref{eq: Second entropy}. For this we use the following lemma.
\begin{lemma} \label{lemma: function T1} Consider, for $T_1^*\in(0,1)$ fixed and $x>0$, the function \begin{equation} f(T_1^*,x) := x^2\left(I\left(T_1^*-\frac{T_1^*}{x}\right)-I(T_1^*)\right)+xT_1^*I'(T_1^*). \end{equation} Then the following properties hold: \begin{itemize} \item[$\circ$] If $T_1^*\in(0,\frac{1}{2})$, then $f(T_1^*,x)>\frac{1}{2}T_1^{*2}I''(T_1^*)$ for all $x>0$. \item[$\circ$] If $T_1^*\in(\frac{1}{2},1)$, then there exists $x^*>0$ such that $f(T_1^*,x^*)<\frac{1}{2} T_1^{*2}I''(T_1^*)$. \end{itemize} \end{lemma}
The proof is given in Section~\ref{end}. Using Lemma~\ref{lemma: function T1}, we find that, for $T_1^*\in(0,\frac{1}{2})$ and all $c>0$, the microcanonical entropy in \eqref{eq: Micro 11} is smaller than the microcanonical entropy in \eqref{eq: Second entropy}, while for $T_1^*\in(\frac{1}{2},1)$ there exists $c^*>0$ such that the microcanonical entropy in \eqref{eq: Micro 11} is larger than the microcanonical entropy in \eqref{eq: Second entropy}. To complete the proof we determine the optimal graphons.
\paragraph{Optimal graphon for $T_1^*\in(0,\frac{1}{2})$.}
The microcanonical entropy computed in \eqref{eq: Micro 11} is equal to \begin{equation} I(h_{\epsilon}^*) = I(T_1^*) + \frac{1}{2}I''(T_1^*)T_1^{*2}\epsilon^{2/3} - \frac{1}{6}I^{(3)}(T_1^*)T_1^{*3}\left(\frac{(1-2\lambda(I))^2}{\lambda(I)(1-\lambda(I))}\right)\epsilon + O(\epsilon). \end{equation} Since $T_1^*\in(0,\frac{1}{2})$, we have $I^{(3)}(T_1^*)<0$. Hence, in order to find the optimal graphon we need to minimise \begin{equation} \frac{(1-2\lambda(I))^2}{\lambda(I)(1-\lambda(I))}. \end{equation} The minimum is achieved for $\lambda(I)=\frac{1}{2}$, which yields the graphon in Proposition \ref{thm:optimal perturbation2}.
\paragraph{Optimal graphon for $T_1^*\in(\frac{1}{2},1)$.}
The microcanonical entropy computed in \eqref{eq: Second entropy} is equal to \begin{equation} I(h_{\epsilon}^*) = I(T_1^*) + \Bigg(c^2\left(I\left(T_1^*-\frac{T_1^*}{c}\right)-I(T_1^*)\right) +cT_1^*I'(T_1^*)\Bigg)\epsilon^{2/3}+ O(\epsilon). \end{equation} Hence, in order to find the optimal graphon we need to minimise the second order term. This is done in Lemma \ref{lemma: function T1} and yields the graphon in Proposition \ref{thm:optimal perturbation3}. \end{proof}
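As a sanity check (not part of the original argument), the explicit perturbation obtained in \eqref{eq: triple} and \eqref{eq: g11 and g12 2} can be verified symbolically against the system \eqref{eq: three integral equations}. The following sketch, with hypothetical symbol names and SymPy as the assumed tool, confirms that $\mathcal{K}_1=0$, $\mathcal{K}_2=0$ and $\mathcal{K}_3=-T_1^{*3}\epsilon$ for any $\lambda(I)\in(0,1)$:
\begin{verbatim}
import sympy as sp

lam, T1, eps = sp.symbols('lam T1 eps', positive=True)

# Perturbation values from the displays above (hypothetical symbol names).
g11 = -(1 - lam)/lam * T1 * eps**sp.Rational(1, 3)
g12 = T1 * eps**sp.Rational(1, 3)
g22 = -lam/(1 - lam) * T1 * eps**sp.Rational(1, 3)

K1 = lam**2*g11 + 2*lam*(1 - lam)*g12 + (1 - lam)**2*g22
K2 = 3*T1*sp.Rational(1, 4)*lam*(1 - lam) \
     * (lam/(1 - lam)*g11 - (1 - lam)/lam*g22)**2
K3 = lam**3*g11**3 + (1 - lam)**3*g22**3 \
     + 3*g12**2*lam*(1 - lam)*(lam*g11 + (1 - lam)*g22)

assert sp.simplify(K1) == 0
assert sp.simplify(K2) == 0
assert sp.simplify(K3 + T1**3*eps) == 0
\end{verbatim}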
\subsection{Proof of Lemma \ref{lemma: function T1}} \label{end}
\begin{proof} Via the substitution $y:={T_1^*}/{x}$ we see that we can equivalently work with the function, defined for $y\in(0,T_1^*)$, \begin{equation} \label{fff} f\left(T_1^*,\frac{T_1^*}{y}\right)- \frac{1}{2}T_1^{*2}I''(T_1^*) = T_1^{*2}\frac{I(T_1^*-y)-I(T_1^*)+yI'(T_1^*)-\frac{1}{2}y^2I''(T_1^*)}{y^2}, \end{equation} which we write for simplicity as $(T_1^*/y)^2\,\check f(T_1^*,y)$, with $\check f(T_1^*,y)$ the numerator on the right-hand side of \eqref{fff}. It suffices to prove that: (i)~$\check f(T_1^*,y)>0$ for $T_1^*\in(0,\frac{1}{2})$ and all $y>0$; (ii)~$\check f(T_1^*,y)<0$ for $T_1^*\in(\frac{1}{2},1)$ and some $y>0$.
Our next observation is that \begin{equation} \label{Tay} \check f(T_1^*,y) =\sum_{k=0}^\infty I^{(k)}(T_1^*)\frac{(-y)^k}{k!}- I(T_1^*) +yI'(T_1^*)-\frac{1}{2}y^2I''(T_1^*)=\sum_{k=3}^\infty I^{(k)}(T_1^*)\frac{(-y)^k}{k!}. \end{equation} An elementary computation shows that, for $t\in(0,1)$ and $k\in\mathbb{N}\setminus\{1\}$, \begin{equation} I^{(k)}(t) = \frac{(k-2)!}{2}\left(\frac{(-1)^k}{t^{k-1}}+\frac{1}{(1-t)^{k-1}}\right). \end{equation} For $k$ even, $I^{(k)}(t)>0$ for all $t\in(0,1)$. For $k$ odd, $I^{(k)}(t)<0$ for $t\in(0,\frac{1}{2})$ and $I^{(k)}(t)>0$ for $t\in(\frac{1}{2},1)$. The above properties imply that, for all $T_1^*\in(0,\frac{1}{2})$ and all $y>0$, $I^{(k)}(T_1^*)\,{(-y)^k}>0$, so that \eqref{Tay} immediately implies claim (i). Claim (ii) follows from \eqref{Tay} in combination with \begin{equation} \check f(T_1^*,0) = \check f^{(1)}(T_1^*,0) = \check f^{(2)}(T_1^*,0) =0,\:\:\: \check f^{(3)}(T_1^*,0)=-I^{(3)}(T_1^*)<0, \end{equation} where $\check f^{(k)}(T_1^*,y)$ denotes the $k$-th derivative of $\check f(T_1^*,y)$ with respect to $y$. \end{proof}
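The closed-form expression for $I^{(k)}$ used in the proof can likewise be checked symbolically. The sketch below assumes the concrete form $I(t)=\tfrac{1}{2}\bigl(t\log t+(1-t)\log(1-t)\bigr)$, which is consistent with the second derivative above; any affine term that $I$ may additionally carry drops out of derivatives of order $k\geq 2$, so this assumption is harmless for the identity being tested:
\begin{verbatim}
import sympy as sp

t = sp.symbols('t', positive=True)
# Assumed concrete form of the rate function; affine terms are irrelevant
# for derivatives of order k >= 2.
I = sp.Rational(1, 2)*(t*sp.log(t) + (1 - t)*sp.log(1 - t))

for k in range(2, 8):
    claimed = sp.factorial(k - 2)/2 * ((-1)**k/t**(k - 1) + 1/(1 - t)**(k - 1))
    assert sp.simplify(sp.diff(I, t, k) - claimed) == 0
\end{verbatim}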
\appendix
\section{Appendix} \label{app}
In this section we prove the claim made in \eqref{eq: three integral equations}. Let \begin{equation} \begin{aligned} \mathcal{K}_1 &= \lambda(I)^2g_{11}+2\lambda(I)(1-\lambda(I))g_{12}+(1-\lambda(I))^2g_{22},\\ \mathcal{K}_{2} &= 3T_1^*\left(\frac{1}{4}\lambda(I)(1-\lambda(I))\left(\frac{\lambda(I)}{1-\lambda(I)}g_{11} -\frac{1-\lambda(I)}{\lambda(I)}g_{22}\right)^2\right),\\ \mathcal{K}_3 &= \lambda(I)^3g_{11}^3+(1-\lambda(I))^3g_{22}^3 +3g_{12}^2\lambda(I)(1-\lambda(I))\Big(\lambda(I)g_{11}+(1-\lambda(I))g_{22}\Big). \end{aligned} \end{equation} From the constraints on the perturbation we know that \begin{equation} \mathcal{K}_1 =0, \qquad \mathcal{K}_2+\mathcal{K}_3 = -T_1^{*3}\epsilon. \end{equation} We will show that it suffices to solve $\mathcal{K}_2=0$ and $\mathcal{K}_3=-T_1^{*3}\epsilon$. The argument we use is similar to the one used in Section \ref{appA3} to find the optimal graphon. Since $\mathcal{K}_2 + \mathcal{K}_3 = -T_1^{*3}\epsilon$, and $\mathcal{K}_2\geq0$ independently of $\epsilon$, it must be that $\mathcal{K}_3 = -c\epsilon$ for some constant $c>0$. Using \eqref{eq: g12 upp}, after some straightforward computations we obtain \begin{equation} \begin{aligned} \mathcal{K}_3 &=\lambda(I)^3g_{11}^3+(1-\lambda(I))^3g_{22}^3 +\frac{3}{4}\frac{\lambda(I)^4}{1-\lambda(I)}g_{11}^3+\frac{3}{4}\frac{(1-\lambda(I))^4}{\lambda(I)}g_{22}^3\\ &\qquad +\frac{3}{2}\lambda(I)^2(1-\lambda(I))g_{11}^2g_{22}+\frac{3}{2}(1-\lambda(I))^2\lambda(I)g_{11}g_{22}^2 +\frac{3}{4}(1-\lambda(I))^3g_{11}g_{22}^2+\frac{3}{4}\lambda(I)^3g_{11}^2g_{22}. \end{aligned} \end{equation} We need to consider the following four cases: \begin{itemize} \item[(1)] $\lambda(I)^3g_{11}^3\asymp -\epsilon$. \item[(2)] $\frac{1}{\lambda(I)}g_{22}^3\asymp-\epsilon$. \item[(3)] $\lambda(I)^2g_{11}^2g_{22}\asymp-\epsilon$. \item[(4)] $\lambda(I)g_{11}g_{22}^2\asymp-\epsilon$. \end{itemize} These cases are exhaustive because they cover all possible ways a term in $\mathcal{K}_3$ can be asymptotically of the order $-\epsilon$. We first observe that, because of the symmetry in $\lambda(I)$, $g_{11}$ and $g_{22}$, the term $(1-\lambda(I))^3g_{22}^3$ can be dealt with in a similar way as in case (1). We show that in all cases if $\mathcal{K}_2>0$, then $\mathcal{K}_2=\omega(\epsilon)$, which implies that the constraint $\mathcal{K}_2 +\mathcal{K}_3=-T_1^{*3}\epsilon$ cannot be satisfied. We only treat case (1) in detail, because cases (2)--(4) follow from similar computations.
For case (1) we need to consider three sub-cases: \begin{itemize} \item[(1a)] $\lambda(I)$ is constant and $g_{11}\asymp -\epsilon^{1/3}$. \item[(1b)] $\lambda(I)^3 = \omega(\epsilon)$, $g_{11}^3=\omega(\epsilon)$ and $\lambda(I)^3g_{11}^3\asymp-\epsilon$. \item[(1c)] $\lambda(I)^3\asymp\epsilon$ and $g_{11}^3$ is constant and negative. \end{itemize} The case $\lambda(I)^3 = o(\epsilon)$ can be excluded, since this would imply that $g_{11}^3\asymp -\frac{\epsilon}{o(\epsilon)}$, which tends to $-\infty$, a property that is not allowed because $g_{11}\in(-T_1^*,1-T_1^*)$. For each of the three sub-cases we study the asymptotic behavior of $\mathcal{K}_2$ as $\epsilon\downarrow0$.
\paragraph{Case (1a).}
Since $\lambda(I)$ is constant, it suffices to analyse the square appearing in $\mathcal{K}_2$, i.e., \begin{equation} \left(\frac{\lambda(I)}{1-\lambda(I)}g_{11}-\frac{1-\lambda(I)}{\lambda(I)}g_{22}\right)^2. \end{equation} After straightforward computations we see that if $\mathcal{K}_2>0$, then \begin{equation} \label{eq: big big big 2} \left(\frac{\lambda(I)}{1-\lambda(I)}g_{11}-\frac{1-\lambda(I)}{\lambda(I)}g_{22}\right)^2\asymp \epsilon^{2/3}, \end{equation} which yields that $\mathcal{K}_2+\mathcal{K}_3 \asymp \epsilon^{2/3}$ instead of $-\epsilon$. Hence this case is not possible.
\paragraph{Case (1b).}
We have $\lambda(I)=\omega(\epsilon^{1/3})$, $g_{11}=\omega(\epsilon^{1/3})$ and $\lambda(I)^3g_{11}^3 \asymp-\epsilon$, and again obtain that $\lambda(I)^2g_{11}^2\asymp\epsilon^{2/3}$, which yields a result similar to the one in \eqref{eq: big big big 2}.
\paragraph{Case (1c).}
After straightforward computations we again obtain $\lambda(I)^2g_{11}^2\asymp \epsilon^{2/3}$.
Performing similar computations for cases (2)--(4), we arrive at the same contradiction and can exclude those as well. This verifies the claim made at the beginning, namely, that in order to find the optimal graphon corresponding to the constraints $\mathcal{K}_1=0$ and $\mathcal{K}_2+\mathcal{K}_3=-T_1^{*3}\epsilon$ it suffices to consider the constraints $\mathcal{K}_1=0$, $\mathcal{K}_2=0$ and $\mathcal{K}_3=-T_1^{*3}\epsilon$.
\end{document} |
\begin{document}
\vspace*{15mm} \noindent{\bf QUANTUM NONLOCALITY AND INSEPARABILITY} \\[12mm] \hspace*{15mm} Asher Peres \\[5mm] \hspace*{15mm} {\it Department of Physics\\ \hspace*{15mm} Technion---Israel Institute of Technology\\ \hspace*{15mm} 32\,000 Haifa, Israel}\\[8mm]
\noindent A quantum system consisting of two subsystems is {\it separable\/} if its density matrix can be written as $\rho=\sum w_K\,\rho_K'\otimes \rho_K''$, where $\rho_K'$ and $\rho_K''$ are density matrices for the two subsystems, and the positive weights $w_K$ satisfy $\sum w_K=1$. A necessary condition for separability is derived and is shown to be more sensitive than Bell's inequality for detecting quantum inseparability. Moreover, {\it collective\/} tests of Bell's inequality (namely, tests that involve several composite systems simultaneously) may sometimes lead to a violation of Bell's inequality, even if the latter is satisfied when each composite system is tested separately.\\[7mm]
\noindent{\bf 1. INTRODUCTION}
From the early days of quantum mechanics, the question has often been raised whether an underlying ``subquantum'' theory, that would be deterministic or even stochastic, was viable. Such a theory would presumably involve additional ``hidden'' variables, and the statistical predictions of quantum theory would be reproduced by performing suitable averages over these hidden variables.
A fundamental theorem was proved by Bell~\cite{Bell}, who showed that if the constraint of {\it locality\/} was imposed on the hidden variables (namely, if the hidden variables of two distant quantum systems were themselves separable into two distinct subsets), then there was an upper bound to the correlations of results of measurements that could be performed on the two distant systems. That upper bound, mathematically expressed by Bell's inequality~\cite{Bell}, is violated by some states in quantum mechanics, for example the singlet state of two \mbox{spin-$1\over2$} particles.
A variant of Bell's inequality, more general and more useful for experimental tests, was later derived by Clauser, Horne, Shimony, and Holt (CHSH)~\cite{chsh}. It can be written
\begin{equation} |\langle{AB}\rangle+\langle{AB'}\rangle+\langle{A'B}\rangle
-\langle{A'B'}\rangle|\leq 2. \label{CHSH}\end{equation} On the left hand side, $A$ and $A'$ are two operators that can be measured by an observer, conventionally called Alice. These operators do not commute (so that Alice has to choose whether to measure $A$ or $A'$) and each one is normalized to unit norm (the norm of an operator is defined as the largest absolute value of any of its eigenvalues). Likewise, $B$ and $B'$ are two normalized noncommuting operators, any one of which can be measured by another, distant observer (Bob). Note that each one of the {\it expectation\/} values in Eq.~(\ref{CHSH}) can be calculated by means of quantum theory, if the quantum state is known, and is also experimentally observable, by repeating the measurements sufficiently many times, starting each time with identically prepared pairs of quantum systems. The validity of the CHSH inequality, for {\it all\/} combinations of measurements independently performed on both systems, is a necessary condition for the possible existence of a local hidden variable (LHV) model for the results of these measurements. It is not in general a sufficient condition, as will be shown below.
Note that, in order to test Bell's inequality, the two distant observers independently {\it measure\/} subsytems of a composite quantum system, and then {\it report\/} their results to a common site where that information is analyzed~\cite{qt}. A related, but essentially different, issue is whether a composite quantum system can be {\it prepared\/} in a prescribed state by two distant observers who receive {\it instructions\/} from a common source. For this to be possible, the density matrix $\rho$ has to be separable into a sum of direct products, \begin{equation} \rho=\sum_K w_K\,\rho_K'\otimes\rho_K'', \label{sep}\end{equation} where the positive weights $w_K$ satisfy $\sum w_K=1$, and where $\rho_K'$ and $\rho_K''$ are density matrices for the two subsystems. A separable system always satisfies Bell's inequality, but the converse is not necessarily true~[4--7]. I shall derive below a simple algebraic test, which is a necessary condition for the existence of the decomposition (\ref{sep}). I shall then give some examples showing that this criterion is more restrictive than Bell's inequality, or than the $\alpha$-entropy inequality~\cite{H3a}.\\[7mm]
\noindent{\bf 2. SEPARABILITY OF DENSITY MATRICES}
The derivation of the separability condition is easiest when the density matrix elements are written explicitly, with all their indices~\cite{qt}. For example, Eq.~(\ref{sep}) becomes \begin{equation} \rho_{m\mu,n\nu}=
\sum_K w_K\,(\rho'_K)_{mn}\,(\rho''_K)_{\mu\nu}. \end{equation} Latin indices refer to the first subsystem, Greek indices to the second one (the sub\-systems may have different dimensions). Note that this equation can always be satisfied if we replace the quantum density matrices by classical Liouville functions (and the discrete indices are replaced by canonical variables, {\bf p} and {\bf q}). The reason is that the only constraint that a Liouville function has to satisfy is being non-negative. On the other hand, we want quantum density matrices to have non-negative {\it eigenvalues\/}, rather than non-negative elements, and the eigenvalue condition is more difficult to satisfy.
Let us now define a new matrix, \begin{equation} \sigma_{m\mu,n\nu}\equiv\rho_{n\mu,m\nu}. \end{equation} The Latin indices of $\rho$ have been transposed, but not the Greek ones. This is not a unitary transformation but, nevertheless, the $\sigma$ matrix is Hermitian. When Eq.~(\ref{sep}) is valid, we have \begin{equation} \sigma=\sum_K w_K\,(\rho_K')^T\otimes\rho_K''. \label{sig}\end{equation} Since the transposed matrices $(\rho'_K)^T\equiv(\rho'_K)^*$ are non-negative matrices with unit trace, they are also legitimate density matrices. It follows that {\it none of the eigenvalues of $\sigma$ is negative\/}. This is a necessary condition for Eq.~(\ref{sep}) to hold~\cite{PRL}.
Note that the eigenvalues of $\sigma$ are invariant under separate unitary transformations, $U'$ and $U''$, of the bases used by the two observers. In such a case, $\rho$ transforms as \begin{equation} \rho\to (U'\otimes U'')\,\rho\,(U'\otimes U'')^\dagger, \end{equation} and we then have \begin{equation} \sigma\to
(U'^*\otimes U'')\,\sigma\,(U'^*\otimes U'')^\dagger, \end{equation} which also is a unitary transformation, leaving the eigenvalues of $\sigma$ invariant.
As an example, consider a pair of spin-$1\over2$ particles in an impure singlet state, consisting of a singlet fraction $x$ and a random fraction $(1-x)$~\cite{Exner}. Note that the ``random fraction'' $(1-x)$ also includes singlets, mixed in equal proportions with the three triplet components. We have \begin{equation} \rho_{m\mu,n\nu}=x\,S_{m\mu,n\nu}+
(1-x)\,\delta_{mn}\,\delta_{\mu\nu}\,/4, \label{x}\end{equation} where the density matrix for a pure singlet is given by \begin{equation} S_{01,01}=S_{10,10}=-S_{01,10}=-S_{10,01}=\mbox{$1\over2$},
\label{singlet} \end{equation} and all the other components of $S$ vanish. (The indices 0 and 1 refer to any two ortho\-gonal states, such as ``up'' and ``down.'') A straightforward calculation shows that $\sigma$ has three eigenvalues equal to $(1+x)/4$, and the fourth eigenvalue is $(1-3x)/4$. This lowest eigenvalue is positive if $x<{1\over3}$, and the separability criterion is then fulfilled. This result may be compared with other criteria: Bell's inequality holds for $x<1/\sqrt{2}$, and the $\alpha$-entropic inequality~\cite{H3a} for $x<1/\sqrt{3}$. These are therefore much weaker tests for detecting inseparability than the condition that was derived here.
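As a purely numerical illustration (not part of the derivation above), the eigenvalues just quoted are easy to reproduce: build the density matrix of Eq.~(\ref{x}), transpose the Latin indices, and diagonalize. The following NumPy sketch, with hypothetical function names, recovers the threshold $x<{1\over3}$:
\begin{verbatim}
import numpy as np

def werner(x):
    # Impure singlet: singlet fraction x plus a random fraction 1-x.
    S = np.zeros((4, 4))           # basis ordering |00>, |01>, |10>, |11>
    S[1, 1] = S[2, 2] = 0.5
    S[1, 2] = S[2, 1] = -0.5
    return x*S + (1 - x)*np.eye(4)/4

def partial_transpose(rho):
    # sigma_{m mu, n nu} = rho_{n mu, m nu}: transpose the Latin indices only.
    return rho.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)

for x in (0.2, 1/3, 0.5):
    sigma = partial_transpose(werner(x))
    print(x, np.linalg.eigvalsh(sigma))   # lowest eigenvalue is (1-3x)/4
\end{verbatim}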
In this particular case, it happens that this necessary condition is also a sufficient one. It is indeed known that if $x<{1\over3}$ it is possible to write $\rho$ as a mixture of unentangled product states~\cite{BBPSSW}. This suggests that the necessary condition derived above ($\sigma$ has no negative eigenvalue) might also be sufficient for any $\rho$. A proof of this conjecture was indeed recently obtained~\cite{H3b} for composite systems having dimensions $2\times2$ and $2\times3$. However, for higher dimensions, the present necessary condition was shown {\it not\/} to be a sufficient one.
As a second example, consider a mixed state consisting of a fraction $x$ of the pure state $a|01\rangle+b|10\rangle$ (with
$|a|^2+|b|^2=1$), and fractions $(1-x)/2$ of the pure states
$|00\rangle$ and $|11\rangle$. We have
\begin{equation} \rho_{00,00}=\rho_{11,11}=(1-x)/2,\end{equation}
\begin{equation} \rho_{01,01}=x|a|^2, \end{equation}
\begin{equation} \rho_{10,10}=x|b|^2, \end{equation} \begin{equation} \rho_{01,10}=\rho_{10,01}^*=xab^*, \end{equation} and the other elements of $\rho$ vanish. It is easily seen that the $\sigma$ matrix has a negative determinant, and thus a negative eigenvalue, when
\begin{equation} x>(1+2|ab|)^{-1}. \end{equation} This threshold is lower than the one for a violation of Bell's inequality, which requires~\cite{Gisin}
\begin{equation} x>[1+2|ab|(\sqrt{2}-1)]^{-1}. \end{equation}
An even more striking example is the mixture of a singlet and a maximally polarized pair: \begin{equation} \rho_{m\mu,n\nu}=x\,S_{m\mu,n\nu}+
(1-x)\,\delta_{m0}\,\delta_{n0}\,\delta_{\mu0}\,\delta_{\nu0}.\end{equation} For any positive $x$, however small, this state is inseparable, because $\sigma$ has a negative eigenvalue (its determinant, equal to $-x^4/16$, is negative). On the other hand, the Horodecki criterion~\cite{H3c} gives a very generous domain of validity for Bell's inequality: $x\leq 0.8$.\\[7mm]
\noindent{\bf 3. COLLECTIVE TESTS FOR NONLOCALITY}
The weakness of Bell's inequality as a test for inseparability is due to the fact that the only use made of the density matrix $\rho$ is for computing the probabilities of the various outcomes of tests that may be performed on the subsystems of a {\it single\/} composite system. On the other hand, an experimental verification of that inequality necessitates the use of {\it many\/} composite systems, all prepared in the same way. However, if many such systems are actually available, we may also test them collectively, for example two by two, or three by three, etc., rather than one by one. If we do that, we must use, instead of $\rho$ (the density matrix of a single system), a {\it new\/} density matrix, which is $\rho\otimes\rho$, or $\rho\otimes\rho\otimes\rho$, in a higher dimensional space. It will now be shown that there are some density matrices $\rho$ that satisfy Bell's inequality, but for which $\rho\otimes\rho$, or $\rho\otimes\rho\otimes\rho$, etc., violate that inequality~\cite{PRA}.
The example that will be discussed is that of the Werner states~\cite{Werner} defined by Eq.~(\ref{x}). Let us consider $n$ Werner pairs. Each one of the two observers has $n$ particles (one from each pair). They proceed as follows. First, they subject their $n$-particle systems to suitably chosen local unitary transformations, $U$, for Alice, and $V$, for Bob. Then, they test whether each one of the particles labelled 2, 3, \ldots, $n$, has spin up (for simplicity, it is assumed that all the particles are distinguishable, and can be labelled unambiguously). Note that any other test that they can perform is unitarily equivalent to the one for spins up, as this involves only a redefinition of the matrices $U$ and $V$. If any one of the $2(n-1)$ particles tested by Alice and Bob shows spin down, the experiment is considered to have failed, and the two observers must start again with $n$ new Werner pairs.
A similar elimination of ``bad'' samples is also inherent to any experimental procedure where a failure of one of the detectors to fire is handled by discarding the results registered by all the other detectors: only when {\it all\/} the detectors fire are their results included in the statistics. This obviously requires an exchange of {\it classical\/} information between the observers. (There is a controversy on whether a violation of Bell's inequality with post\-selected data~\cite{postselect} is a valid test for non\-locality~\cite{Santos}. I shall not discuss this issue here; I only examine whether or not Bell's inequality is violated by the post\-selected data.)
The calculations shown below will refer to the case $n=3$, for definiteness. The generalization to any other value of $n$ is straightforward. Spinor indices, for a single \mbox{spin-$1\over2$} particle, will take the values 0 (for the ``up'' component of spin) and 1 (for the ``down'' component). The 16 components of the density matrix of a Werner pair, consisting of a singlet fraction $x$ and a random fraction $(1-x)$, are, in the standard direct product basis: \begin{equation} \rho_{mn,st}=x\,S_{mn,st}+(1-x)\,\delta_{ms}\,\delta_{nt}\,/4,\end{equation} where I am now using only Latin indices, contrary to what I did in Eq.(\ref{x}); this is because Greek indices will be needed for another purpose, as will be seen soon. Thus, now, the indices $m$ and $s$ refer to Alice's particle, and $n$ and $t$ to Bob's particle.
When there are three Werner pairs, their combined density matrix is a direct product $\rho\otimes\rho'\otimes\rho''$, or explicitly, $\rho_{mn,st}\,\rho_{m'n',s't'}\,\rho_{m''n'',s''t''}$. The result of the unitary transformations $U$ and $V$ is \begin{equation} \rho\otimes\rho'\otimes\rho''\to
(U\otimes V)\,(\rho\otimes\rho'\otimes\rho'')\,
(U^\dagger\otimes V^\dagger). \label{newrho} \end{equation} Explicitly, with all its indices, the $U$ matrix satisfies the unitarity relation \begin{equation} \sum_{mm'm''}U_{\mu\mu'\mu'',mm'm''}\;
U^*_{\lambda\lambda'\lambda'',mm'm''}=
\delta_{\mu\lambda}\;\delta_{\mu'\lambda'}\;\delta_{\mu''\lambda''}.
\label{unitary}\end{equation} In order to avoid any possible ambiguity, Greek indices (whose values are also 0 and 1) are now used to label spinor components {\it after\/} the unitary transformations. Note that the indices without primes refer to the two particles of the first Werner pair (the only ones that are not tested for spin up) and the primed indices refer to all the other particles (that are tested for spin up). The $V_{\nu\nu'\nu'',nn'n''}$ matrix elements of Bob's unitary transformation satisfy a relationship similar to (\theequation). The generalization to a larger number of Werner pairs is obvious.
After the execution of the unitary transformation (\ref{newrho}), Alice and Bob have to test that all the particles, except those labelled by the first (unprimed) indices, have their spin up. They discard any set of $n$ Werner pairs where that test fails, even once. The density matrix for the remaining ``successful'' cases is thus obtained by retaining, on the right hand side of Eq.~(\ref{newrho}), only the terms whose primed components are zeros, and then renormalizing the resulting matrix to unit trace. This means that only two of the $2^n$ rows of the $U$ matrix, namely those with indices 000\ldots\ and 100\ldots, are relevant (and likewise for the $V$ matrix). The elimination of all the other rows greatly simplifies the problem of optimizing these matrices. We shall thus write, for brevity, \begin{equation} U_{\mu 00,mm'm''}\to U_{\mu,mm'm''}, \end{equation} where $\mu=0,1$. Then, on the left hand side of Eq.~(\ref{unitary}), we effectively have two unknown row vectors, $U_0$ and $U_1$, each one with $2^n$ components (labelled by Latin indices $mm'm''$). These vectors have unit norm and are mutually orthogonal. Likewise, Bob has two vectors, $V_0$ and $V_1$. The problem is to optimize these four vectors so as to make the expectation value of the Bell operator~\cite{BMR}, \begin{equation} C:=AB+AB'+A'B-A'B', \end{equation} as large as possible.
The optimization proceeds as follows. The new density matrix, for the pairs of spin-\mbox{$1\over2$} particles that were {\it not\/} tested by Alice and Bob for spin up (that is, for the first pair in each set of $n$ pairs), is\\[4mm] $ (\rho_{\rm new})_{\mu\nu,\sigma\tau}=$
\nopagebreak \begin{equation} N\, U_{\mu,mm'm''}\,V_{\nu,nn'n''}\;\rho_{mn,st}\;\rho_{m'n',s't'}\;
\rho_{m''n'',s''t''}\,U^*_{\sigma,ss's''}\,V^*_{\tau,tt't''},\end{equation} where $N$ is a normalization constant, needed to obtain unit trace ($N^{-1}$ is the probability that all the ``spin up'' tests were successful). We then have~\cite{H3c}, for fixed $\rho_{\rm new}$ and all possible choices of $C$, \begin{equation} \max\,[{\rm Tr}\,(C\rho_{\rm new})]=2\sqrt{M}, \label{M}\end{equation} where $M$ is the sum of the two largest eigenvalues of the real symmetric matrix $T^\dagger T$, defined by \begin{equation} T_{pq}:={\rm Tr}\,[(\sigma_p\otimes\sigma_q)\,\rho_{\rm new}].
\label{T} \end{equation} (In the last equation, $\sigma_p$ and $\sigma_q$ are the Pauli spin matrices.) Our problem is to find the vectors $U_\mu$ and $V_\nu$ that maximize $M$.
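For a single pair ($n=1$, so that $\rho_{\rm new}$ is just the Werner matrix itself), Eqs.~(\ref{M}) and~(\ref{T}) can be evaluated directly. The following sketch, again only an illustration with hypothetical names and not part of the optimization over $U_\mu$ and $V_\nu$, reproduces the threshold $x>1/\sqrt{2}$ for a CHSH violation quoted in Section 2:
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
pauli = [sx, sy, sz]

def max_bell(rho):
    # 2*sqrt(M), with M the sum of the two largest eigenvalues of T^T T.
    T = np.array([[np.real(np.trace(np.kron(sp_, sq_) @ rho))
                   for sq_ in pauli] for sp_ in pauli])
    eig = np.sort(np.linalg.eigvalsh(T.T @ T))
    return 2*np.sqrt(eig[-1] + eig[-2])

S = np.zeros((4, 4)); S[1, 1] = S[2, 2] = 0.5; S[1, 2] = S[2, 1] = -0.5
for x in (0.5, 1/np.sqrt(2), 0.8):
    rho = x*S + (1 - x)*np.eye(4)/4
    print(x, max_bell(rho))        # equals 2*sqrt(2)*x for the Werner state
\end{verbatim}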
At this point, some simplifying assumptions are helpful. Since all matrix elements $\rho_{mn,st}$ are real, we can restrict the search to vectors $U_\mu$ and $V_\nu$ that have only real components. Furthermore, the situations seen by Alice and Bob are completely symmetric, except for the opposite signs in the standard expression for the singlet state: \begin{equation} \textstyle{\psi=
\left[{1\choose0}{0\choose1}-{0\choose1}{1\choose0}\right]
\;/\sqrt{2}.} \end{equation} These signs can be made the same by redefining the basis, for example by representing the ``down'' state of Bob's particle by the symbol ${0\choose-1}$, {\it without\/} changing the basis used for Alice's particle. This unilateral change of basis is equivalent to a substitution \begin{equation} V_{\nu,nn'n''}\to(-1)^{\nu+n+n'+n''}\,V_{\nu,nn'n''},\end{equation} on Bob's side. The minus signs in Eq.~(\ref{singlet}) also disappear, and there is complete symmetry for the two observers. It is then plausible that, with the new basis, the optimal $U_\nu$ and $V_\nu$ are the same. Therefore, when we return to the original basis and notations, they satisfy \begin{equation} V_{\nu,nn'n''}=(-1)^{\nu+n+n'+n''}\,U_{\nu,nn'n''}.\end{equation} We shall henceforth restrict our search to pairs of vectors that satisfy this relation.
After all the above simplifications, the problem that has to be solved is the following: find two mutually orthogonal unit vectors, $U_0$ and $U_1$, each one with $2^n$ real components, that maximize the value of $M(U)$ defined by Eqs.~(\ref{M}) and~(\ref{T}). This is a standard optimization problem which can be solved numerically. Since the function $M(U)$ is bounded, it has at least one maximum. It may, however, have more than one: there may be several distinct local maxima with different values. A numerical search leads to one of these maxima, but not necessarily to the largest one. The outcome may depend on the initial point of the search. It is therefore imperative to start from numerous randomly chosen points in order to ascertain, with reasonable confidence, that the largest maximum has indeed been found.\\[7mm]
\noindent{\bf 4. NUMERICAL RESULTS}
In all the cases that were examined, $M(U)$ turned out to have a local maximum for the following simple choice: \begin{equation} U_{0,00\ldots}=U_{1,11\ldots}=1, \label{xor}\end{equation} and all the other components of $U_0$ and $U_1$ vanish. Recall that the ``vectors'' $U_0$ and $U_1$ actually are two rows, $U_{000\ldots}$ and $U_{100\ldots}$, of a unitary matrix of order $2^n$ (the other rows are irrelevant because of the elimination of all the experiments in which a particle failed the spin-up test). In the case $n=2$, one of the unitary matrices having the property (\ref{xor}) is a simple permutation matrix that can be implemented by a ``controlled-{\sc not}'' quantum gate~\cite{cnot}. The corresponding Boolean operation is known as {\sc xor} (exclusive {\sc or}). For larger values of $n$, matrices that satisfy Eq.~(\ref{xor}) will also be called {\sc xor}-transformations.
It was found, by numerical calculations, that {\sc xor}-transformations always are the optimal ones for $n=2$. They are also optimal for $n=3$ when the singlet fraction $x$ is less than 0.57, and for $n=4$ when $x<0.52$. For larger values of $x$, more complicated forms of $U_0$ and $U_1$ give better results. The existence of two different sets of maxima may be seen in Fig.~1: there are discontinuities in the slopes of the graphs for $n=3$ and~4, that occur at the values of $x$ where the largest value of $\langle{C}\rangle$ jumps from one local maximum to another one.
For $n=5$, a complete determination of $U_0$ and $U_1$ requires the optimization of 64 parameters subject to 3 constraints, more than my workstation could handle. I therefore considered only {\sc xor}-transformations, which are likely to be optimal for $x\, \mbox{\raisebox{-2pt}{{\scriptsize $\stackrel{\textstyle <}{\sim}$}}}\, 0.5$. In particular, for $x=0.5$ (the value that was used in Werner's original work \cite{Werner}), the result is $\langle C\rangle=2.0087$, and the CHSH inequality is violated. This violation occurs in spite of the existence of an explicit LHV model that gives correct results if the Werner pairs are tested one by one.
These results prompt a new question: can we get stronger {\it inseparability\/} criteria by considering $\rho\otimes\rho$, or higher tensor products? It is easily seen that no further progress can be achieved in this way. If $\rho$ is separable as in Eq.~(\ref{sep}), so is $\rho\otimes\rho$. Moreover, the partly transposed matrix corresponding to $\rho\otimes\rho$ simply is $\sigma\otimes\sigma$, so that if no eigenvalue of $\sigma$ is negative, then $\sigma\otimes\sigma$ too has no negative eigenvalue.\\[7mm]
\noindent{\bf ACKNOWLEDGMENT}
This work was supported by the Gerard Swope Fund, and the Fund for Encouragement of Research.
\parindent 0mm {\bf Caption of figure}
FIG. 1. \ Maximal expectation value of the Bell operator, versus the singlet fraction in the Werner state, for collective tests performed on several Werner pairs (from bottom to top of the figure, 1, 2, 3, and 4 pairs, respectively). The CHSH inequality is violated when $\langle{C}\rangle>2$.
\end{document} |
\begin{document}
\begin{frontmatter}
\title{Coupling Staggered-Grid and MPFA Finite Volume Methods for Free Flow/Porous-Medium Flow Problems}
\author[addressIWS]{Martin Schneider\corref{mycorrespondingauthor}} \ead{[email protected]} \author[addressIWS]{Kilian Weishaupt} \ead{[email protected]} \author[addressIWS]{Dennis Gl\"aser} \ead{[email protected]} \author[addressIWS]{Wietse M. Boon} \ead{[email protected]} \author[addressIWS]{Rainer Helmig} \ead{[email protected]}
\cortext[mycorrespondingauthor]{Corresponding author}
\address[addressIWS]{Institute for Modelling Hydraulic and Environmental Systems,
University of Stuttgart,
Pfaffenwaldring 61,
70569 Stuttgart, Germany}
\begin{abstract}
A discretization is proposed for models coupling free flow with anisotropic porous medium flow. Our approach
employs a staggered grid finite volume method for the Navier-Stokes equations in the free flow
subdomain and a MPFA finite volume method to solve Darcy flow in the porous medium. After
appropriate spatial refinement in the free flow domain, the degrees of freedom are conveniently
located to allow for a natural coupling of the two discretization schemes. In turn, we
automatically obtain a more accurate description of the flow field surrounding the porous medium.
Numerical experiments highlight the stability and applicability of the scheme in the presence of
anisotropy and show good agreement with existing methods, verifying our approach. \end{abstract}
\begin{keyword} free flow \sep porous medium \sep coupling \sep multi-point flux approximation \end{keyword}
\end{frontmatter}
\section{Introduction} \label{sec:introduction} Coupled systems of free flow and flow through a porous medium can be found ubiquitously in various kinds of natural and industrial contexts, including soil water evaporation \cite{vanderborght2017a}, fuel cell water management \cite{gurau2009a}, food processing \cite{verboven2006a}, and evaporative cooling for turbomachinery \cite{dahmen2014a}. Despite ever-growing computational capacities, discrete numerical modeling of these kinds of systems, including the porous geometry, is only feasible for small-scale problems. For larger domain sizes, averaging techniques, involving the concept of a representative elementary volume (REV) \cite{bear1972a}, are used in order to yield upscaled models for the description of the porous domain \cite{whitaker1999a}. These models can then be coupled to the free-flow region using either a single-domain or a two-domain approach. For the former, both the porous medium and the free flow are described by a single set of equations, as first introduced by Brinkman \cite{brinkman1949a}. The two domains are then distinguished by a spatial variation of material parameters. On the other hand, the two-domain approach decomposes the problem into two disjoint subdomains. The free-flow region is then governed by the Navier-Stokes equations while Darcy's or Forchheimer's law is used in the porous medium subdomain \cite{ochoa1995a, layton2002, jamet2009a, mosthaf2011a}. In order to maintain thermodynamic consistency, appropriate coupling conditions have to be formulated which enforce the conservation of mass, momentum and energy across the interface between the two domains \cite{hassanizadeh1989a}. The aim of this work is to couple two different discretization schemes, and we will focus on the two-domain approach since it provides this flexibility more readily. We will consider laminar single-phase flow in the following, but the extension to compositional multi-phase flow in the porous domain \cite{mosthaf2011a} and the use of turbulence models in the free flow part \cite{fetzer2017a} are possible.
The coupling of (Navier-) Stokes and Darcy/Forchheimer flow is an active area of research, and various mathematical and numerical models have been developed during recent years. Examples using the same spatial discretization scheme for both domains range from the staggered-grid finite volume method \cite{iliev2004a, harlow1965a} and the finite element method \cite{discacciati2009a} to the box-scheme, a vertex-centered finite volume method \cite{mosthaf2011a, huber2000a}. Furthermore, combinations of colocated and staggered-grid schemes have been developed \cite{rybak2015a, masson2016a, fetzer2017a}. For more discretization approaches
to the modeling of coupled free flow and porous medium flow, we refer the reader to \cite{discacciati2009a} and references therein.
In this work, the free flow equations are discretized using the staggered-grid method, which forms a stable numerical method for such problems. In turn, no additional stabilization techniques are necessary, in contrast to colocated schemes \cite{versteeg2007a}, for example. In the porous medium, we employ a multi-point-flux-approximation method (MPFA) \cite{aavatsmark2002a}. This method was developed to overcome the shortcomings of the classical two-point-flux approximation. In particular, the MPFA scheme does not require the grid to be $\mathbf{K}$-orthogonal, meaning that the grid cells need not be in line with the principal directions of the permeability tensor $\mathbf{K}$. This is especially important for skewed and unstructured grids or in case the principal directions of the permeability tensor are inclined, in the case of layered porous structures or faults, for example. The MPFA method has been applied previously for solving Brinkman's equation \cite{iliev2014a, brinkman1949a} for a coupled system of free flow and porous medium. The referenced works thus use the same discretization scheme for both subdomains. The novelty of this work is that we employ a staggered-grid method for the free-flow model (Navier-Stokes), and couple it with a MPFA method for the porous medium model (Darcy).
The paper is organized as follows. Section~\ref{sec:equations} introduces the coupled model by presenting the governing equations in the free flow and porous medium flow subdomains, respectively, as well as the coupling conditions at the interface. The discretization schemes are introduced in Section~\ref{sec:discretization}, including the newly proposed coupling at the interface between the two subdomains. Numerical results are presented in Section~\ref{sec:numresults} with the use of two test cases. Finally, Section~\ref{sec:conclusions} focuses on the conclusions.
\section{Governing Equations} \label{sec:equations}
In order to present the governing equations for the coupled model, we first introduce the assumptions on the geometry and the notational conventions. After these preliminary definitions, we continue with the equations governing free flow and porous medium flow, followed by the coupling conditions.
Let $\Omega\subset\mathbb{R}^d$, $d\in \{2, 3\}$, be an open, connected Lipschitz domain with boundary $\partial\Omega$ and $d$-dimensional measure $\meas{\Omega}$. Furthermore, let $\Omega^\mathrm{pm}$ and $\Omega^\mathrm{ff}$ be a disjoint partition of $\Omega$ representing the porous medium and free flow subdomains, respectively. The subdomain boundaries are given by the interface $\Gamma^\mathrm{if} := \partial \Omega^\mathrm{ff} \cap \partial \Omega^\mathrm{pm}$ as well as the remainders $\Gamma^\mathrm{pm} := \partial \Omega^\mathrm{pm} \setminus \Gamma^\mathrm{if}$ and $\Gamma^\mathrm{ff} := \partial \Omega^\mathrm{ff} \setminus \Gamma^\mathrm{if}$. For brevity, the superscripts are often omitted when no ambiguity arises.
The external boundary $\partial \Omega = \Gamma^\mathrm{ff} \cup \Gamma^\mathrm{pm}$ is further decomposed such that $\Gamma^\mathrm{ff} = \Gamma_\mathrm{v}^\mathrm{ff} \cup \Gamma_\mathrm{p}^\mathrm{ff}$ and $\Gamma^\mathrm{pm} = \Gamma_\mathrm{v}^\mathrm{pm} \cup \Gamma_\mathrm{p}^\mathrm{pm}$ disjointly. Here, the subscript denotes whether the velocity or the pressure is prescribed as a boundary condition. To ensure unique solvability of the resulting system, we assume that $\meas{\Gamma_\mathrm{p}^\mathrm{pm} \cup \Gamma_\mathrm{p}^\mathrm{ff}} > 0$, i.e. that a pressure boundary condition is imposed on a subset of the boundary $\partial \Omega$ with positive measure.
Let $\mathbf{n}$ denote the unit normal vector on $\partial \Omega$ oriented outward with respect to $\Omega$. We abuse notation and let $\mathbf{n}$ moreover denote the unit normal vector on $\Gamma^\mathrm{if}$ oriented outward with respect to $\Omega^\mathrm{ff}$.
\subsection{Free Flow} \label{sub:free_flow}
In our model, the Navier-Stokes equations govern the free flow in subdomain $\Omega^\mathrm{ff}$. These equations are given by: \begin{subequations}
\label{eq:navierstokes}
\begin{align}
\frac{\partial \varrho}{\partial t} + \nabla \cdot (\varrho \mathbf{v}) &= q, \\
\frac{\partial (\varrho \mathbf{v})}{\partial t} + \nabla \cdot \left(\varrho \mathbf{v} \mathbf{v}^{\mathrm{T}}
- \mu (\nabla \mathbf{v} + (\nabla \mathbf{v})^{\mathrm{T}})
+ p \mathbf{I} \right)
&= \varrho \textbf{g}, &&\mathrm{in} \, \Omega^\mathrm{ff}, \\
\mathbf{v} &= \mathbf{v}_\Gamma, &&\mathrm{on} \, \Gamma^\mathrm{ff}_\mathrm{v}, \\
p &= p_\Gamma, &&\mathrm{on} \, \Gamma^\mathrm{ff}_\mathrm{p}.
\end{align} \end{subequations}
The unknown variables are the velocity $\mathbf{v}$ and the pressure $p$. Here, $\varrho$ and $\mu$ denote the potentially pressure-dependent density and viscosity, respectively, while $q$ is a source (or sink) term, and $\mathbf{g}$ describes the influence of gravity. $\mathbf{I}$ is the identity tensor in $\mathbb{R}^{d \times d}$. The boundary conditions are given by known quantities $\mathbf{v}_\Gamma$ and $p_\Gamma$, representing the velocity or pressure at the corresponding boundary.
\subsection{Porous Medium Flow} \label{sub:porous_medium_flow}
The equations governing single-phase flow in the porous-medium $\Omega^\mathrm{pm}$ are given by \begin{subequations}
\label{eq:darcy}
\begin{align}
\frac{\partial \varrho}{\partial t} + \nabla \cdot \left( \varrho \mathbf{v} \right) &= q, \\
\mathbf{v} + \frac{1}{\mu} \mathbf{K} \left( \nabla p - \varrho \mathbf{g}\right) &= 0, &&\mathrm{in} \, \Omega^\mathrm{pm}, \label{eq: darcys law}\\
\mathbf{v} \cdot \mathbf{n} &= v_\Gamma, &&\mathrm{on} \, \Gamma^\mathrm{pm}_\mathrm{v}, \\
p &= p_\Gamma, &&\mathrm{on} \, \Gamma^\mathrm{pm}_\mathrm{p}.
\end{align} \end{subequations} Equation \eqref{eq: darcys law} states that the momentum balance in the porous medium is given by Darcy's law, i.e. the Darcy velocity is calculated as $\mathbf{v} = -\frac{1}{\mu} \mathbf{K} \left( \nabla p - \varrho \mathbf{g}\right)$, with $\mathbf{K}$ being the permeability tensor. Similar to the free-flow equations \eqref{eq:navierstokes}, $q$ denotes a source or sink term. Finally $v_\Gamma$ and $p_\Gamma$ are known quantities representing the normal flux and pressure on the corresponding boundaries, respectively.
\subsection{Coupling Conditions} \label{sub:coupling}
In order to derive a thermodynamically consistent formulation of the coupled problem,
conservation of mass and momentum has to be guaranteed at the interface between the porous
medium and the free-flow domain. We therefore impose the following interface conditions:
\begin{subequations}\label{eq:interfaceConditions}
\begin{align}
\label{eq:interfaceConditionsFlux}
\mathbf{v}^\mathrm{ff} \cdot \mathbf{n} &= - \mathbf{v}^\mathrm{pm} \cdot \mathbf{n} , \\
\label{eq:interfaceConditionsMomentum}
\left(\varrho \mathbf{v} \mathbf{v}^\mathrm{T} - \mu (\nabla \mathbf{v} + (\nabla \mathbf{v})^\mathrm{T}) + p \mathbf{I} \right)^\mathrm{ff} \mathbf{n} &=
p^\mathrm{pm} \mathbf{n}, \\
\label{eq:interfaceConditionsBJS}
\left( - \frac{\sqrt{\mathbf{t} \cdot \mathbf{K} \mathbf{t}}}{\alpha_{\mathrm{BF}}} (\nabla \mathbf{v}) \mathbf{n} - \mathbf{v} \right)^\mathrm{ff} \cdot \mathbf{t} &= 0, && \text{on} \, \Gamma^\mathrm{if}.
\end{align} \end{subequations}
The momentum transfer normal to the interface is given by \eqref{eq:interfaceConditionsMomentum} \cite{layton2002}. Condition \eqref{eq:interfaceConditionsBJS} is the commonly used Beavers-Joseph-Saffman slip condition \cite{beavers1967a, saffman1971a}. Here, $\mathbf{t}$ denotes any unit vector from the tangent bundle of $\Gamma^\mathrm{if}$ and $\alpha_{\mathrm{BF}}$ is a parameter to be determined experimentally. We remark that this condition is technically a boundary condition for the free flow, not a coupling condition between the two flow regimes. Furthermore, it has been developed for free flow strictly parallel to the interface and might lose its validity for other flow configurations.
\section{Discretization} \label{sec:discretization}
This section is devoted to giving an outline of the numerical schemes used in the individual subdomains
and the incorporation of the interface conditions \eqref{eq:interfaceConditions}. However, we first introduce some notational conventions concerning the partition of $\Omega$ in the following definition.
\begin{definition}[Grid discretization] \label{def:griddisc} The tuple \mbox{$\mathcal{D} := (\mathcal{T},\mathcal{E},\mathcal{P},\mathcal{V})$} denotes the grid discretization, in which
\begin{enumerate}[label=(\roman*)]
\item $\mathcal{T}$ is the set of control volumes (cells) such that
\mbox{$\overline{\Omega}= \cup_{K \in \mathcal{T}} \overline{K}$}.
For each cell $K\in\mathcal{T}$, $\meas{K}>0$ denotes the cell volume.
\item
$\mathcal{E}$ is the set of faces such that each face $\sigma$ is a $(d-1)$-dimensional hyperplane
with measure
\mbox{$\meas{\sigma}>0$}.
For each cell $K \in \mathcal{T}$, $\mathcal{E}_K$ is the subset of
$\mathcal{E}$ such that \mbox{$\partial K = \cup_{\sigma \in\mathcal{E}_K}{\sigma}$}. Furthermore, $\mathbf{x}_\sigma$ denotes
the face evaluation points and $\mathbf{n}_{K,\sigma}$ the unit vector that is normal to
$\sigma$ and outward to $K$.
\item
$\mathcal{P} := \lbrace \mathbf{x}_K\rbrace_{K \in \mathcal{T}}$ is the set of \emph{cell centers} (not required to be the barycenters) such that
$\mathbf{x}_K\in K$ and $K$ is star-shaped with respect to $\mathbf{x}_K$.
For all $K\in\mathcal{T}$ and $\sigma\in\mathcal{E}_K$, let $d_{K,\sigma}$ denote the Euclidean distance between $\mathbf{x}_K$ and $\sigma$.
\item
$\mathcal{V}$ is the set of vertices of the grid, corresponding to the corners of the cells.
\end{enumerate} \end{definition}
For ease of exposition, we assume $d = 2$; however, this is not a limitation, and the model can be readily extended to three dimensions.
\subsection{Staggered grid scheme} \label{sub:staggered}
A staggered-grid finite volume scheme, also known as MAC scheme \cite{harlow1965a}, is used in the free-flow subdomain. Here, scalar quantities including pressure and density are stored on the cell centers $\mathcal{P}$ while the velocity degrees of freedom are located on the primary control volumes' faces $\mathcal{E}$. The resulting scheme is stable, hence oscillation-free solutions are guaranteed without the need for additional stabilization techniques. This is in contrast with colocated schemes, in which all unknowns are defined at the same location \cite{versteeg2007a}. $\mathcal{D}$ is a uniform Cartesian grid with mesh size $h$ in both directions.
We start with the momentum balance equations. For each face in $\mathcal{E}$, we construct a secondary control volume $K^*$ with boundary $\partial K^*$, as depicted in Figure~\ref{pc:staggeredgrid}. On each $K^*$, we integrate the momentum balance equation and apply the Gauss divergence theorem to obtain \begin{equation}
\begin{aligned}
\int_{K^*} \frac{\partial (\varrho \mathbf{v})}{\partial t} \, \mathrm{d} x + \int_{\partial K^*} (\varrho \mathbf{v} \mathbf{v}^{\mathrm{T}}) \cdot \mathbf{n} \, \mathrm{d} \Gamma
&- \int_{\partial K^*} (\mu (\nabla \mathbf{v} + (\nabla \mathbf{v})^{\mathrm{T}})) \cdot \mathbf{n} \, \mathrm{d} \Gamma
+ \int_{\partial K^*} p \mathbf{n} \, \mathrm{d} \Gamma
= \int_{K^*} \varrho \mathbf{g} \, \mathrm{d} x.
\end{aligned} \end{equation}
The first and second components of this vector equation are considered separately. Due to the different locations at which the degrees of freedom are defined, we require interpolation operators in order to continue. Let us introduce the average ($\{ \cdot \}$) and jump quantities ($\llbracket \cdot \rrbracket$) on cell centers ($\mathcal{P}$) and vertices ($\mathcal{V}$) of the primal grid for $\mathbf{v} = [v_x, v_y]^{\text{T}}$ (see Figure \ref{pc:staggeredgrid}): \begin{equation} \begin{aligned}
\{ \mathbf{v} \}|_{\mathcal{P}} &= \frac{1}{2}
\begin{bmatrix}
v_x^E + v_x^W \\ v_y^N + v_y^S
\end{bmatrix}
& \quad
\{ \mathbf{v} \}|_{\mathcal{V}} &= \frac{1}{2}
\begin{bmatrix}
v_x^N + v_x^S \\ v_y^E + v_y^W
\end{bmatrix} \\
\llbracket \mathbf{v} \rrbracket|_{\mathcal{P}} &= \frac{2}{h}
\begin{bmatrix}
v_x^E - v_x^W \\ v_y^N - v_y^S
\end{bmatrix}
& \quad
\llbracket \mathbf{v} \rrbracket|_{\mathcal{V}} &= \frac{1}{h}
\begin{bmatrix}
v_x^N - v_x^S + v_y^E - v_y^W \\
v_x^N - v_x^S + v_y^E - v_y^W
\end{bmatrix} \end{aligned} \end{equation} Here, the superscript $\{E, N, W, S\}$ refers to the closest degree of freedom East, North, West, or South of the evaluation point, see Figure \ref{pc:staggeredgrid}. \begin{figure}
\caption{Grid and notations used for the staggered-grid discretization. $K_x^*, K_y^*$ denote the dual cells. The picture on the left illustrates the situation where $\mathbf{x}_{\sigma^*}$ coincides with a cell center, whereas the picture on the right shows the case where the center $\mathbf{x}_{\sigma^*}$ of a dual face $\sigma^*$ coincides with a vertex of the primary grid.}
\label{pc:staggeredgrid}
\end{figure} In the following, the superscript ``up'' denotes the upwind quantity relative to the velocity $\mathbf{v}$. Moreover, we introduce $\mu^\mathrm{avg}$ such that in all cell centers $\mathcal{P}$, it denotes the corresponding viscosity whereas in each vertex $\mathcal{V}$, it is the viscosity averaged over the adjacent cells.
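For concreteness, a minimal sketch of the interpolation operators just defined is given below; the argument names are hypothetical and a uniform mesh size $h$ is assumed, as above:
\begin{verbatim}
import numpy as np

def avg_jump_at_cell_center(vxE, vxW, vyN, vyS, h):
    # {v} and [[v]] evaluated at a cell center of the primal grid.
    avg = 0.5*np.array([vxE + vxW, vyN + vyS])
    jump = (2.0/h)*np.array([vxE - vxW, vyN - vyS])
    return avg, jump

def avg_jump_at_vertex(vxN, vxS, vyE, vyW, h):
    # {v} and [[v]] evaluated at a vertex of the primal grid.
    avg = 0.5*np.array([vxN + vxS, vyE + vyW])
    s = (vxN - vxS + vyE - vyW)/h
    return avg, np.array([s, s])
\end{verbatim}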
With the operators defined, we discretize the momentum balance equation for each secondary control volume $K_i^*$, in which the subscript $i \in \{ x, y \}$ denotes whether the control volume surrounds a vertical or horizontal face, respectively. The discretized equation for component $i$ is then given by \begin{equation}
\begin{aligned}
\int_{K_i^*} \frac{\partial (\varrho v_i)}{\partial t} \, \mathrm{d} x
+ \int_{\partial K_i^*} (\{ \mathbf{v} \} (\varrho v_i)^\mathrm{up}) \cdot \mathbf{n} \, \mathrm{d} \Gamma
- \int_{\partial K_i^*} \mu^\mathrm{avg} \llbracket \mathbf{v} \rrbracket \cdot \mathbf{n} \, \mathrm{d} \Gamma
&+ \int_{\partial K_i^*} p n_i \, \mathrm{d} \Gamma
= \int_{K^*} \varrho g_i \, \mathrm{d} x.
\end{aligned} \end{equation}
We emphasize that the boundary integrals are computed numerically using the following rules for a scalar-valued function $f$ and a vector-valued function $\mathbf{f}$: \begin{subequations}
\begin{align}
\int_{\partial K^*} f \, \mathrm{d} \Gamma
&= \sum_{\sigma^* \in \mathcal{E}_{K^*}} \meas{\sigma^*} f(\mathbf{x}_{\sigma^*})
= h \sum_{\sigma^* \in \mathcal{E}_{K^*}} f(\mathbf{x}_{\sigma^*}), \\
\int_{\partial K^*} \mathbf{f} \cdot \mathbf{n} \, \mathrm{d} \Gamma
&= \sum_{\sigma^* \in \mathcal{E}_{K^*}} \meas{\sigma^*}\mathbf{f}(\mathbf{x}_{\sigma^*}) \cdot \mathbf{n}_{K^*, \sigma^*}
= h \sum_{\sigma^* \in \mathcal{E}_{K^*}} \mathbf{f}(\mathbf{x}_{\sigma^*}) \cdot \mathbf{n}_{K^*, \sigma^*}.
\end{align} \end{subequations} By definition, each $\mathbf{x}_{\sigma^*}$ will either be a cell center ($\mathcal{P}$) or a vertex ($\mathcal{V}$) of the grid.
Finally, the mass balance is evaluated on each cell of the grid, i.e. we compute for each $K \in \mathcal{T}$: \begin{equation}
\int_{K} \frac{\partial \varrho}{\partial t} \, \mathrm{d} x
+ \int_{\partial K} \varrho^\mathrm{up} \mathbf{v} \cdot \mathbf{n} \, \mathrm{d} \Gamma
=
\int_{K} q \, \mathrm{d} x. \end{equation}
The above equations fully define the staggered-grid discretization scheme for all internal faces of the grid $\mathcal{D}$. For incorporation of boundary conditions, we refer the reader to \cite{versteeg2007a}.
\subsection{Cell-centered finite volume scheme} \label{sub:ccfvm}
A cell-centered finite volume scheme is employed in the Darcy subdomain, i.e. the
grid elements are used as control-volumes and the degrees of freedom are associated
with the cell centers. Typically,
the finite volume formulation is obtained by integrating the first equation of
\eqref{eq:darcy} over a control volume $K \in \mathcal{T}$ and by applying the Gauss divergence theorem:
\begin{equation}
\int_K \frac{\partial \varrho}{\partial t} \, \mathrm{d}x + \sum_{\sigma \in \mathcal{E}_K }\int_{\sigma} \frac{\varrho^\mathrm{up}}{\mu^\mathrm{up}}\left( - \mathbf{K} \left( \nabla p - \varrho \mathbf{g}\right) \right) \cdot \mathbf{n} \, \mathrm{d} \Gamma = \int_K q \, \mathrm{d}x.
\label{eq:darcyIntegrated}
\end{equation}
Replacing the exact fluxes by an approximation, i.e.\ \mbox{$F_{K, \sigma} \approx \int_{\sigma} \left( - \mathbf{K}_K \left( \nabla p - \varrho \mathbf{g}\right) \right) \cdot \mathbf{n} \, \mathrm{d} \Gamma$}
(here $\mathbf{K}_K$ is the value of $\mathbf{K}$ associated with control volume $K$), yields
\begin{equation}
\int_K \frac{\partial \varrho}{\partial t} \, \mathrm{d}x + \sum_{\sigma \in \mathcal{E}_K} \frac{\varrho^\mathrm{up}}{\mu^\mathrm{up}} F_{K, \sigma} = Q_K, \quad \forall \, {K \in \mathcal{T}},
\label{eq:darcyCCdiscrete} \end{equation} where $F_{K, \sigma}$ is the discrete flux through face $\sigma$ flowing out of cell
$K$, $Q_K := \int_K q \, \mathrm{d}x$ is the integrated source/sink term, and $(\cdot)^\mathrm{up}$ denotes upwinding with respect to the sign of the flux $F_{K,\sigma}$.
Finite volume schemes primarily differ in the approximation of the term $(\mathbf{\mathbf{K}}_K \nabla p) \cdot \mathbf{n}$ (i.e. the choice of the fluxes $F_{K, \sigma}$). The widely used linear two-point flux approximation (TPFA), for example, is a simple but robust scheme. However, it is well-known that it is inconsistent on grids that are not $\mathbf{K}$-orthogonal (see e.g.\ \cite{edwards1998finite}). In this work we consider anisotropic permeability tensors in the porous medium and $\mathbf{K}$-orthogonality of the grid can thus not be guaranteed. Therefore, we employ a multi-point flux approximation (MPFA) scheme for the formulation of the discrete fluxes, which has been presented in \cite{aavatsmark2002a}. This particular scheme is termed MPFA-O and
is only one among many methods that fall into the family of MPFA schemes (\cite{aavatsmark2008compact, edwards2010mpfafps}).
Please note that we will omit the suffix ``-O'' throughout this document whenever this improves readability.
\begin{figure}
\caption{Interaction region for the MPFA-O method. The parameter $\xi$, $0 \le \xi < 1$ is used to define the location of the intermediate
face pressure unknowns $p_{\sigma_i^v}$. Here, the situation for $\xi = 0.5$ is illustrated.}
\label{pc:interactionRegion_interior}
\end{figure}
For the computation of the fluxes, a dual grid is created
by connecting the barycenters of the cells with the barycenters of the faces ($d=2$)
or the barycenters of the faces and edges ($d=3$). This divides each cell into sub-control
volumes $K^v$ with $K \in \mathcal{T}$ and $v \in \mathcal{V}$. Analogously, each face is sub-divided into sub-control volume faces $\sigma^v$,
see Figure \ref{pc:interactionRegion_interior}. Expressions for the face fluxes $F_{K, \sigma^v}$ are obtained by introducing the face pressures $p_{\sigma^v}$. The location of these face pressures along the sub-control volume faces $\sigma^v$ is parameterized by $\xi$, $0 \le \xi < 1$: it coincides with the center of the original face $\sigma$ for $\xi = 0$ and would coincide with the position of the vertex $v$ for $\xi = 1$. These face pressures are then eliminated by enforcing the continuity of fluxes across each sub-control volume face. That is, for each face $\sigma^v$ between $K^v$ and $L^v$, we impose:
\begin{subequations}
\begin{align}
&F_{K, \sigma^v} + F_{L, \sigma^v} = 0.
\label{eq:sigmaConditions}
\end{align} \end{subequations}
We allow for piecewise constant $\mathbf{K}$
(denoted as $\mathbf{K}_K$ for each cell $K$) and construct discrete gradients
$\nabla_\mathcal{D}^{K^v} p$, per sub-control volume $K^v$, depending on its two embedded
sub-control volume faces.
Let us consider $K^v$ in Figure~\ref{pc:interactionRegion_interior} with faces $\sigma_1^v$ and $\sigma_3^v$. Here, the discrete
gradients are constructed to be consistent such that the following holds for $i \in \{1, 3\}$: \begin{equation}
\nabla_\mathcal{D}^{K^v} p \cdot (\mathbf{x}_{\sigma^v_i}- \mathbf{x}_{K}) =
p_{\sigma^v_i} - p_K. \label{eq:piecewiselin} \end{equation}
Thus, a discrete gradient (for sub-control volume $K^v$) that fulfills the two conditions \eqref{eq:piecewiselin} is defined by \begin{equation}
\nabla_\mathcal{D}^{K^v} p = \mathbb{D}^{-T}_{K^v}
\begin{bmatrix}
p_{\sigma^v_1} - p_K \\
p_{\sigma^v_3} - p_K
\end{bmatrix}, \qquad \text{ with }\; \mathbb{D}_{K^v} :=
\begin{bmatrix}
\mathbf{x}_{\sigma^v_1}- \mathbf{x}_K & \mathbf{x}_{\sigma^v_3} - \mathbf{x}_K
\end{bmatrix}.
\label{eq:MPFAGradientRecons} \end{equation}
This enables us to write the discrete flux across $\sigma^v$ between $K^v$ and $L^v$ as follows:
\begin{equation}
F_{K, \sigma^v} := - |\sigma^v| \mathbf{n}_{K, \sigma^v}^T \mathbf{\mathbf{K}}_K \nabla_\mathcal{D}^{K^v} p + \gamma_{K, \sigma^v},
\label{eq:discreteFlux} \end{equation}
where we introduced $\gamma_{K, \sigma^v} = \varrho_{\sigma^v} \meas{\sigma^v} \mathbf{n}_{K, \sigma^v}^T \mathbf{\mathbf{K}}_K \mathbf{g}$, with $\varrho_{\sigma^v} = \frac{\varrho_K+\varrho_L}{2}$, to incorporate the effect of gravity.
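For illustration, the gradient reconstruction \eqref{eq:MPFAGradientRecons} and the flux evaluation \eqref{eq:discreteFlux} translate into a few lines of Python/NumPy. The sketch below is purely expository (function name, argument names, and signature are ours, not those of the actual implementation) and assumes that the gravity contribution $\gamma_{K,\sigma^v}$ has been computed beforehand.
\begin{verbatim}
import numpy as np

def scv_face_flux(x_K, x_s1, x_s3, p_K, p_s1, p_s3, K_K, n, area, gamma):
    """Discrete flux F_{K,sigma^v} across one sub-control volume face.

    The gradient solves D^T grad = [p_s1 - p_K, p_s3 - p_K] with
    D = [x_s1 - x_K | x_s3 - x_K]; the flux is -area * n^T K_K grad + gamma.
    """
    D = np.column_stack((x_s1 - x_K, x_s3 - x_K))
    grad = np.linalg.solve(D.T, np.array([p_s1 - p_K, p_s3 - p_K]))
    return -area * n @ (K_K @ grad) + gamma
\end{verbatim}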
To deduce a cell-centered scheme, the face pressures $p_{\sigma^v_i}$
are eliminated. This is done by enforcing flux continuity \eqref{eq:sigmaConditions}
within each interaction
volume and by solving a local system of equations.
We rewrite these conditions in matrix form, and introduce the sans serif font to denote the corresponding matrices and vectors. All local
face pressures of an interaction region are collected in the vector $\mathsf{p_{\sigma}}$, cell pressures in the vector
$\mathsf{p_K}$, and all terms related to gravity in the vector $\mathsf{g}$. Flux continuity then allows us to rewrite the face pressures in terms of the cell pressures:
\begin{equation}
\mathsf{A} \, \mathsf{p_{\sigma}} = \mathsf{B} \, \mathsf{p_K} + \Delta \mathsf{g}.
\label{eq:ivLocalSystem} \end{equation}
Here, the $\Delta$ represents the difference in contributions due to gravity over each face.
Let $\mathsf{f}$ denote the vector of all fluxes across the sub-control volume faces of the interaction
region. These can be expressed in matrix form using equation \eqref{eq:discreteFlux}:
\begin{equation}
\mathsf{f} = \mathsf{C} \, \mathsf{p_{\sigma}} + \mathsf{D} \, \mathsf{p_K} + \mathsf{g}.
\label{eq:ivLocalFluxes} \end{equation}
With these matrices, the final expressions for the local
sub-control volume face fluxes of the interaction region read:
\begin{equation}
\mathsf{f} = \underbrace{\left( \mathsf{C} \mathsf{A}^{-1} \mathsf{B}
+ \mathsf{D} \right)}_{=: \mathsf{T}} \, \mathsf{p_K}
+ \mathsf{C} \mathsf{A}^{-1} \left( \Delta \mathsf{g} \right) + \mathsf{g}.
\label{eq:ivLocalFluxesTrans} \end{equation}
The entries of the matrix $\mathsf{T}$ are often referred to as the transmissibilities.
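The local elimination leading to \eqref{eq:ivLocalFluxesTrans} is illustrated by the following Python/NumPy sketch, which assumes that the matrices $\mathsf{A}$, $\mathsf{B}$, $\mathsf{C}$, $\mathsf{D}$ and the gravity vectors of one interaction region have already been assembled; the function is expository only and does not mirror the actual code.
\begin{verbatim}
import numpy as np

def interaction_region_fluxes(A, B, C, D, delta_g, g, p_K):
    """Sub-control volume face fluxes of one interaction region.

    Eliminates the face pressures via A p_sigma = B p_K + delta_g and
    evaluates f = C p_sigma + D p_K + g, i.e. f = T p_K + C A^{-1} delta_g + g
    with the transmissibility matrix T = C A^{-1} B + D.
    """
    p_sigma = np.linalg.solve(A, B @ p_K + delta_g)  # face pressures
    T = C @ np.linalg.solve(A, B) + D                # transmissibilities
    f = C @ p_sigma + D @ p_K + g
    return f, T
\end{verbatim}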
\subsection{Coupling} \label{sub:discrete_coupling}
In this section, we consider the realization of the coupling conditions \eqref{eq:interfaceConditions}. As depicted in Figure~\ref{pc:interactionRegion_interface}, the grids are chosen to be non-matching at the interface such that each sub-control volume face coincides with a face $\sigma \in \mathcal{E}^\mathrm{ff}$ of the free-flow domain. In turn, a natural coupling arises between the staggered grid discretization and the MPFA method, due to the coinciding degrees of freedom (for $\xi = 0.5$, see Figure \ref{pc:interactionRegion_interface}).
We emphasize that, in the two-dimensional setup considered here, each porous-medium cell at the interface has two neighboring cells in the free-flow subdomain.
We start with the flux continuity condition \eqref{eq:interfaceConditionsFlux}. As depicted in Figure~\ref{pc:interactionRegion_interface}, let us consider a sub-control volume $K^v$ located at the interface such that $\sigma_3^v$ is located on $\Gamma^\mathrm{if}$. We then let the velocity from the free-flow
domain determine the flux over $\sigma_3^v$: \begin{equation} \begin{aligned}
F_{K, \sigma_3^v} &= - \mu^\mathrm{up} \meas{\sigma_3^v} \veli{y, \sigma_3^v}^\mathrm{ff}.
\label{eq:interactionRegionConditions} \end{aligned} \end{equation}
Collecting the right-hand side of \eqref{eq:interactionRegionConditions} in the matrix-vector product
$\mathsf{Wv_\mathrm{ff}}$, we can rewrite \eqref{eq:ivLocalSystem} for interaction regions
that are located at the interface to the free-flow domain:
\begin{equation}
\mathsf{A} \, \mathsf{p_{\sigma}} = \mathsf{B} \, \mathsf{p_K} + \mathsf{Wv_\mathrm{ff}} + \Delta \mathsf{g}.
\label{eq:ivLocalSystemInterface} \end{equation}
In the situation shown in Figure~\ref{pc:interactionRegion_interface}, the face pressures $p^{\mathrm{pm}}_{\sigma^v_1}, p^{\mathrm{pm}}_{\sigma^v_2}, p^{\mathrm{pm}}_{\sigma^v_3}$ are thus dependent on the primary unknowns $p^\mathrm{pm}_K, p^\mathrm{pm}_L$ of the porous-medium domain and the face velocities $v^\mathrm{ff}_{\sigma^v_2},v^\mathrm{ff}_{\sigma^v_3}$ of the free-flow domain (i.e. $p^{\mathrm{pm}}_{\sigma^v_i} = p^{\mathrm{pm}}_{\sigma^v_i}(p^\mathrm{pm}_K, p^\mathrm{pm}_L,v^\mathrm{ff}_{\sigma^v_2},v^\mathrm{ff}_{\sigma^v_3})$). Insertion of \eqref{eq:ivLocalSystemInterface} in \eqref{eq:ivLocalFluxes} leads to the expression
\begin{equation}
\mathsf{f} = \mathsf{T} \, \mathsf{p_K}
+ \mathsf{C} \mathsf{A}^{-1} \left( \mathsf{Wv_\mathrm{ff}} + \Delta \mathsf{g} \right) + \mathsf{g}
\label{eq:ivLocalFluxesInterface} \end{equation}
for the fluxes across sub-control volume faces within interaction regions located at the interface to the
free-flow domain.
\begin{figure}
\caption{Interaction region for the MPFA-O method at the interface to the free-flow subdomain. The graphic illustrates the
non-matching grids at the interface and the choice of $\xi=0.5$ for the MPFA scheme such that the degrees of freedom
for the face velocities in the free-flow domain coincide with the intermediate face pressure unknowns introduced on the interface.}
\label{pc:interactionRegion_interface}
\end{figure}
The remaining coupling conditions are imposed as follows. The momentum balance \eqref{eq:interfaceConditionsMomentum} is enforced using the reconstructed face pressures from \eqref{eq:ivLocalSystemInterface}. On the other hand, the Beavers-Joseph-Saffman condition \eqref{eq:interfaceConditionsBJS} is technically a boundary condition for the free-flow problem, as previously noted in Section~\ref{sec:equations}, and is implemented accordingly.
Finally, we remark that in the case of compressible fluids, the density and viscosity are pressure-dependent. The upwind terms $\mu^{\mathrm{pm},\mathrm{up}}, \varrho^{\mathrm{pm},\mathrm{up}}, \varrho^{\mathrm{ff},\mathrm{up}}$ are then evaluated using the cell-pressure unknowns, i.e. for a face $\sigma$ between porous-medium cell $K$ and free-flow cell $L$, these terms therefore depend on the pressures $p^\mathrm{pm}_K$ and $p^\mathrm{ff}_L$.
\section{Numerical results} \label{sec:numresults} All simulations are performed using the open-source simulator {Du\-Mu$^\text{x}$ } \cite{dumux}, which comes in the form of an additional DUNE module \citep{blatt.ea:2016}. We employ a monolithic approach, where both sub-problems are assembled into one system of equations, and use an implicit Euler method for the time discretization. Newton's method is used to resolve the non-linearities in the resulting system of equations. For all test cases, the compressible fluid ``air'' (see the {Du\-Mu$^\text{x}$ } documentation \cite{dumux}) is used. We consider two-dimensional setups; however, the implementation is also able to handle three-dimensional domains.
\subsection{Test case 1}
The first test case is similar to the one that has been presented in \cite{iliev2004a}. However, the authors of \cite{iliev2004a} used a Navier-Stokes-Brinkman-type system for both domains and discretized it with a staggered-grid scheme throughout. For anisotropic permeability tensors, this requires the interpolation of all velocity components at grid faces.
In this work, we use a different approach, where a staggered-grid scheme is used to discretize the free-flow system and a cell-centered finite volume scheme to discretize the porous-medium system. Thus, no additional velocity degrees of freedom are needed for the porous-medium domain. However, for anisotropic permeability tensors or unstructured grids this requires more sophisticated cell-centered finite volume schemes, as for example the MPFA scheme that has been presented in Section \ref{sub:ccfvm}.
Air is flowing through a two-dimensional channel which is partially blocked by a rectangular porous medium as shown in Figure \ref{pc:settingCaseOne}. The first test case involves small Reynolds numbers ($Re \ll 1$) with respect to the average velocity in the narrow section in the channel above the porous medium. In this case, only the stationary solution (which is reached after a few time steps, starting from a resting fluid) is investigated. The top and bottom of the domain are considered as rigid, impermeable walls with $\mathbf{v} = 0$ (including the wall part below the porous box). Flow is driven by a pressure difference between the left and the right boundary which is set to $\Delta p = \unit[10^{-6}]{Pa}$.
\begin{figure}
\caption{Setting for the first test case.}
\label{pc:settingCaseOne}
\end{figure}
The permeability tensor in the porous medium is given as \begin{equation} \mathbf{K} = \mathbf{R}(\alpha) \begin{pmatrix} \frac{1}{\beta}k & 0 \\ 0 & k \\ \end{pmatrix} \mathbf{R}^{-1}(\alpha), \quad \text{with} \quad \mathbf{R}(\alpha) = \begin{pmatrix} \cos \alpha & -\sin \alpha \\ \sin \alpha & \cos \alpha \\ \end{pmatrix}, \label{eq:RotatedPermTensor} \end{equation} where $\alpha$ is the rotation angle, $\beta \geq 1$ the anisotropy ratio, and $\unit[k=10^{-6}]{m^2}$. In the following, the influence of the anisotropy on the total mass fluxes crossing the boundaries $\Gamma^\mathrm{pm}_\mathrm{in}, \Gamma^\mathrm{pm}_\mathrm{out}, \Gamma^\mathrm{pm}_\mathrm{top}$ is investigated for $\alpha \in \lbrace -45^\circ, -30^\circ, 0^\circ, 30^\circ, 45^\circ \rbrace$ and $\beta \in \lbrace 10 ,100 \rbrace$. In the free-flow domain the same grid is used for the MPFA and the TPFA schemes, whereas in the porous medium the grid used for the MPFA scheme is coarsened such that we obtain a non-matching interface, as shown in Figure \ref{pc:interactionRegion_interface}.
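For reference, the tensor \eqref{eq:RotatedPermTensor} can be generated with the following small Python/NumPy helper; it is purely illustrative and not part of the simulation code.
\begin{verbatim}
import numpy as np

def permeability(alpha_deg, beta, k=1.0e-6):
    """Rotated anisotropic permeability tensor K = R diag(k/beta, k) R^{-1}."""
    a = np.radians(alpha_deg)
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    return R @ np.diag([k / beta, k]) @ R.T  # R^{-1} = R^T for rotations
\end{verbatim}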
Figures \ref{pc:testOneVelMpfa} and \ref{pc:testOneVelTpfa} show the velocity magnitude and the pressure profile of the MPFA and TPFA schemes for $\beta = 100$ and $\alpha \in \lbrace -45^\circ, 45^\circ\rbrace$. For the TPFA scheme, the results are only shown for $\alpha = 45^\circ$ because the results for $\alpha = - 45^\circ$ are identical. Due to the flow resistance imposed by the porous medium, most of the air passes this obstacle through the constricted section above the block, thus leading to the highest flow velocities there. While the gas passes the porous block virtually parallel to the channel for both $\alpha = 45^\circ$ and $\alpha = 0^\circ$ when applying the TPFA method, the effect of anisotropy is clearly visible in the MPFA results. Here, the flow follows the inclined principal direction of the permeability tensor, exiting or entering the porous domain at the top. Small regions of local recirculation can be found within the obstacle, which are caused by the medium's anisotropy and the closed wall at the bottom. For $\alpha = - 45^\circ$, the upward flow in the box creates a recirculation at the right part of the bottom, where a small amount of gas is actually pulled from the free-flow channel into the porous medium. Analogously for $\alpha = 45^\circ$, the downward flow causes a recirculation at the bottom left, where the gas cannot exit through the solid wall and thus leaves the domain towards the left, in opposition to the general flow field.
Due to the small pressure differences, we subtract a reference value of $\unit[p_\mathrm{ref}=10^{5}]{Pa}$ for improved visibility, as shown on the right side of Figures \ref{pc:testOneVelMpfa} and \ref{pc:testOneVelTpfa}. Almost the entire pressure drop along the channel is observed across the porous domain. Again, the influence of anisotropy is clearly visible for the MPFA results in terms of an inclined pressure profile, which also reflects the closed bottom of the domain. None of these effects can be captured by the TPFA method.
\begin{figure}\label{pc:testOneVelMpfa}
\end{figure}
\begin{figure}\label{pc:testOneVelTpfa}
\end{figure}
Figure \ref{pc:massFluxesTestOne} shows the total mass fluxes over the porous-medium free-flow interfaces. For $\alpha = 0$, the TPFA and MPFA schemes result in almost the same solutions (small differences occur because a finer mesh is used for the TPFA scheme in the porous medium). For this angle, the TPFA scheme is also consistent and produces the correct results. However, it is observed that for all other angles the total mass fluxes differ significantly. The off-diagonal terms of $\mathbf{K}$ are not considered in the TPFA transmissibilities, which explains why the TPFA results are independent of the direction of rotation. Furthermore, the total fluxes over $\Gamma^\mathrm{pm}_\mathrm{top}$ are small for the TPFA scheme, in contrast to the MPFA scheme, where the total mass fluxes increase with increasing rotation angle due to the contribution from the non-parallel porous-medium flow as mentioned above.
\begin{figure}
\caption{Total mass fluxes of MPFA and TPFA schemes over the porous-medium boundaries $\Gamma^\mathrm{pm}_\mathrm{in}, \Gamma^\mathrm{pm}_\mathrm{out}$ (left column) and $\Gamma^\mathrm{pm}_\mathrm{top}$ (right column) for an anisotropy ratio of $\beta = 10$ (upper row) and $\beta = 100$ (lower row). In the left pictures, the dashed lines correspond to the mass fluxes at the inlet boundary $\Gamma^\mathrm{pm}_\mathrm{in}$, whereas the solid lines correspond to the fluxes at the outlet boundary $\Gamma^\mathrm{pm}_\mathrm{out}$. Positive fluxes mean that fluid flows into the porous medium.}
\label{pc:massFluxesTestOne}
\end{figure}
\subsection{Test case 2} The next test case investigates the solution behavior for higher Reynolds numbers. It uses a setting similar to the previous one, with the difference that the channel is elongated in the $x$-direction. The computational domains are given by $\Omega = [0,2.5] \times [0,0.25]$ m and $\Omega^\mathrm{pm} = [0.4,0.6] \times [0,0.2]$ m such that $\Omega^\mathrm{ff} = \Omega \setminus \Omega^\mathrm{pm}$. A pressure difference of $\Delta p = \unit[2 \cdot 10^{-3}]{Pa}$ between the left and the right boundary results in $Re \approx 130$ in the channel right atop the porous block.
\begin{figure}
\caption{Velocity profiles of the MPFA scheme for an anisotropy ratio of $\beta = 100$ and angle $\alpha = 45^\circ$ for times $t \in \lbrace \unit[20]{s}, \unit[40]{s}, \unit[80]{s}, \unit[200]{s}, \unit[1000]{s} \rbrace$. The domain is scaled by a factor of 2 in $y$-direction for a better visualization. The porous-medium boundary is represented by the black lines. }
\label{pc:testHighReVel}
\end{figure}
\begin{figure}
\caption{Velocity profile of the TPFA scheme for an anisotropy ratio of $\beta = 100$ and angle $\alpha = 45^\circ$ at time $t = \unit[1000]{s}$. The domain is scaled by a factor of 2 in $y$-direction for a better visualization. The porous-medium boundary is represented by the black lines. }
\label{pc:testHighReVelTPFA}
\end{figure}
For this test case, it takes much longer until a stationary solution is reached, which is why the solutions are investigated at different times. In the following, we focus on the discussion of the cases $\alpha = 0^\circ$ and $\alpha = 45^\circ$ with $\beta = 100$. Figure \ref{pc:testHighReVel} shows the resulting velocity fields of the MPFA method at different times for $\alpha = 45^\circ$. As before, the gas hits the front of the porous block and is forced mainly through the narrow channel section above it, which acts as a duct through which a jet of high-velocity fluid streams into the open channel on the right side. As the velocity gradually increases over time, the formation of vortex structures within the free-flow channel can be observed after around $\unit[20]{s}$. Having reached equilibrium at around $\unit[1000]{s}$, the system features two larger, stable countercurrent vortices downstream of the porous obstacle and one small recirculation zone in front of the block. Again, the flow within the obstacle is clearly influenced by the anisotropy, a feature not reproduced by the TPFA method, as seen in Figure \ref{pc:testHighReVelTPFA}. Here, the flow within the block does not follow the orientation of $\mathbf{K}$. Instead, the fluid immediately moves towards the top of the obstacle once it has entered it, which is due to the strongly increased vertical permeability ($\beta = 100$). Again, the TPFA scheme is not able to capture this relevant anisotropy effect as it only considers the main diagonal entries of $\mathbf{K}$.

Figure \ref{pc:massFluxesTestTwo} depicts the cumulative mass fluxes per unit depth across the porous medium's boundaries (top row) and across the plane at the center of the narrow channel section ($x = \unit[0.5]{m}$, bottom row) for a case with $\alpha = 0^\circ$ (left column) and one with $\alpha = 45^\circ$ (right column). For the former, the results of TPFA and MPFA are very similar. With increasing time, the total mass flux entering the obstacle's front (red line) increases in accordance with the global velocity field (see Figure \ref{pc:testHighReVel}) until it approaches a constant value after approximately $\unit[100]{s}$. The same applies to the fluxes leaving the obstacle's top (blue line), which approach the same value as the incoming fluxes but with a different sign. This indicates that no mass flux occurs over the obstacle's back, which can also be seen from the black line. This line (and thus, the fluxes over the obstacle's back) only deviates from zero at the very beginning of the simulation, where it has to balance out the initial disequilibrium between the red and blue curves. Due to the imposed anisotropy ratio of $\beta = 100$, the horizontal permeability is 100 times lower than the vertical one and fluid is immediately pushed towards the top, where it exits the domain again. Even though the fluid's density is actually pressure-dependent, significant compressibility effects could not be observed for the given setup. The differences between TPFA and MPFA with respect to the red and blue curves (front and top of the obstacle) are most likely due to the different discretization widths in the porous domain, as explained before. In total, less gas seems to enter the porous block for the MPFA method. The plot on the lower left of Figure \ref{pc:massFluxesTestTwo} shows the temporal evolution of the total mass flux through the channel above the obstacle.
Both the TPFA and the MPFA scheme converge to the same result, as the grid resolution of the free-flow domain is identical for both cases. Note that even with a constant flux through the constriction after $\unit[100]{s}$, the global velocity field downstream of the obstacle still changes, including the formation of vortices as described before.
\begin{figure}
\caption{Total mass fluxes of MPFA and TPFA schemes over the porous-medium boundaries $\Gamma^\mathrm{pm}_\mathrm{in}, \Gamma^\mathrm{pm}_\mathrm{top}, \Gamma^\mathrm{pm}_\mathrm{out}$ (upper row) and at the center of the constricted section of the free-flow channel (over the line segment connecting the points $(\unit[0.5]{m},\unit[0.2]{m})^T$ and $(\unit[0.5]{m},\unit[0.25]{m})^T$) (lower row) for an anisotropy ratio of $\beta = 100$ and the rotation angles $\alpha = 0^\circ$ (left column) and $\alpha = 45^\circ$ (right column). Positive fluxes mean that fluid flows into the porous medium.}
\label{pc:massFluxesTestTwo}
\end{figure}
Considering the right column of Figure \ref{pc:massFluxesTestTwo} ($\alpha = 45^\circ$), the differences between the two schemes are significantly higher. For the MPFA method, we observe a considerable inflow across the obstacle's top boundary, coming from the constricted free-flow channel and drawn by the downwards inclined flow field within the porous medium. At the same time, there is only a very limited inflow through the obstacle's front, which is due to the obstacle's anisotropy.
This is in strong contrast to the TPFA method's result, where air is still leaving the obstacle's top, as also described before and seen in Figure \ref{pc:testHighReVelTPFA}. The MPFA method results in a higher flux through the constricted channel (see bottom right of Figure \ref{pc:massFluxesTestTwo}), since less fluid enters the porous obstacle, as explained above.
\subsection{Test case 3}
Another advantage of using MPFA in the porous medium domain is the ability to use unstructured grids while maintaining
consistency of the scheme. The test case presented in this section considers a porous medium domain in which geometrical
constraints favor the use of triangles (unstructured) over quadrilaterals for its discretization. This situation can arise,
for example, in environmental applications considering the exchange processes between the atmosphere and the subsurface, where
the latter can be composed of complex shaped geological layers. An illustration of the setup of this test case is given in
Figure \ref{pc:settingCaseTwo}, while a detailed view on the discretization of the two compartments is depicted in Figure \ref{pc:testTwoGrid}.
\begin{figure}
\caption{Setting for the third test case.}
\label{pc:settingCaseTwo}
\end{figure}
\begin{figure}
\caption{Detailed view on the grid used in test case three.}
\label{pc:testTwoGrid}
\end{figure}
Two sets of simulations were performed using the rotation angles $\alpha = 45^\circ$ and $\alpha = -45^\circ$, together with $\beta = 10$,
$\unit[k=10^{-6}]{m^2}$ and $\Delta p = \unit[1 \cdot 10^{-6}]{Pa}$. The results for these two angles, using both MPFA and TPFA in the porous medium domain, are shown
in Figures \ref{pc:testTwoPMVelocity_-45} and \ref{pc:testTwoPMVelocity_45}, respectively. Please note that, unlike before, the velocity vectors are now scaled by magnitude.
As expected, the solutions using TPFA exhibit
a more distorted velocity field, which originates from the scheme being inconsistent both on unstructured grids and for anisotropic tensors.
On the other hand, the solutions using MPFA provide velocity fields that follow the geometrical features of the porous medium.
For $\alpha = -45^\circ$, two main regions in which the flux from the porous medium into the free-flow domain is concentrated,
and where the maximum velocities occur, can be observed (see Figure \ref{pc:testTwoPMVelocity_-45}). The direction of these
maximum velocities coincides with the direction of highest permeability. In contrast to that, the inflow from the free-flow domain into the porous
medium occurs in the direction of the lowest permeability and thus at smaller velocities.
In the case of $\alpha = 45^\circ$, the highest velocities are observed in the regions where an inflow from the free-flow domain into the porous
medium occurs, again following the direction of the highest permeability (see Figure \ref{pc:testTwoPMVelocity_45}). With TPFA being used in the porous
medium domain, the differences between the velocity fields obtained from the two angles turn out to be smaller in comparison to the solutions obtained with MPFA.
Furthermore, the low-velocity regions seem to be generally overestimated. This effect also shows in the integrated transfer flux across the interface, which
amounts to $\unit[1.3 \cdot 10^{-8}]{kg/s}$ for MPFA and $\unit[2.2 \cdot 10^{-8}]{kg/s}$ for TPFA, i.e.\ around $\unit[70]{\%}$ higher for the latter.
\begin{figure}
\caption{Velocity distribution in the porous medium for the third test case with $\beta = 10$ and $\alpha = -45^\circ$.
The upper and lower pictures depict the result using MPFA and TPFA in the porous medium domain, respectively.}
\label{pc:testTwoPMVelocity_-45}
\end{figure}
\begin{figure}
\caption{Velocity distribution in the porous medium for the third test case with $\beta = 10$ and $\alpha = 45^\circ$.
The upper and lower pictures depict the result using MPFA and TPFA in the porous medium domain, respectively.}
\label{pc:testTwoPMVelocity_45}
\end{figure}
\section{Conclusions} \label{sec:conclusions}
In this work, a discretization method is proposed for problems concerning free flow coupled to porous medium flow. The method combines the stability of the staggered grid finite volume method for the free-flow equations with the consistency of MPFA finite volume methods for flows in anisotropic porous media. We have shown how appropriate alignment of the grids results in a natural coupling between the two discretization schemes due to coinciding degrees of freedom.
The stability and consistency of the method are demonstrated by means of numerical experiments. Especially in the presence of anisotropy in the porous medium, a significant difference is observed with respect to the widely used, but inconsistent, TPFA finite volume method. We moreover demonstrated the use of unstructured grids in the porous medium, allowing for computations on more general geometries. The use of unstructured grids in the free-flow domain \cite{eymard2014a} is a topic for future investigation.
Future work will moreover address the extension of the presented model to multi-phase flow simulations including compositional and non-isothermal effects,
with a special focus on evaporative processes at the interface between the porous medium and the free-flow domain. This extension
would make the model applicable to a large variety of applications, such as the drying of soil due to evaporation \cite{mosthaf2011a} and subsequent soil
salinization \cite{jambhekar2015a}. This drying process can lead to the creation of fractures within the porous medium. Therefore, we want to employ
a discrete fracture model for the description of the porous medium domain, for which there is a preexisting implementation available
in {Du\-Mu$^\text{x}$ } (see \cite{glaser2017discrete}). To increase efficiency, we will also consider other finite volume schemes for the porous domain, such as those recently presented in \cite{Schneider.ea:2018}.
\end{document}
\begin{document}
\title{On Non-degenerate Chaos Processes}
\begin{abstract}
We consider a process $\{X_t\}_{0\leq t\leq 1}$ in a fixed Wiener chaos $\mathcal{H}_n$. We establish some non-degenerate properties and related results for $\{X_t\}_{0\leq t\leq 1}$. As an application, we show that the solution to an SDE driven by $\{X_t\}_{0\leq t\leq 1}$ admits a density. Our approach relies on an interplay between Malliavin calculus and analysis on Wiener space.
\end{abstract}
\tableofcontents
\section{Introduction}
Over the last two decades, tremendous progress has been achieved in the study of differential equations driven by Gaussian rough paths (\cite{MR2680405} \cite{MR3112937} \cite{MR2485431} \cite{MR3161525}\cite{MR3531675} \cite{MR3229800} \cite{MR3298472}). Central to this direction is the non-degenerate property of Gaussian processes, which first appeared in \cite{MR2485431} in the following form: we say that non-degeneracy holds for a Gaussian process $\{X_t\}_{0\leq t\leq 1}$ if for any $g\in C^\infty([0,1])$, we have
\begin{equation}
\label{Non-degeneracy for Gaussian}
\left\{\int_{0}^{1}g_sdh_s=0, \forall h\in\mathcal{H} \right\}\Rightarrow \left\{ g\equiv 0 \right\},
\end{equation}
where $\mathcal{H}$ is the Cameron-Martin space associated with $\{X_t\}_{0\leq t\leq 1}$. Condition \eqref{Non-degeneracy for Gaussian} has many interesting consequences. Most notably, \eqref{Non-degeneracy for Gaussian} implies any Gaussian variable of the form
\begin{equation}
\label{Goal 2}
\int_{0}^{1}g_sdX_s,\ g\in C^\infty([0,1]),\ g\not\equiv 0
\end{equation}
has positive variance (hence a density). In fact, \eqref{Non-degeneracy for Gaussian} also implies the existence of densities for solutions of Gaussian rough differential equations (see \cite{MR2680405}).
It is well known that Gaussian processes belong to the larger family of processes that live in a fixed Wiener chaos (also known as chaos processes). More specifically, they all live in the first Wiener chaos. The study of chaos vectors and processes has seen a lot of progress in recent years (for instance, \cite{MR3003367} \cite{MR3035750} \cite{MR4165649} \cite{MR3430861} and references therein). It is a bit surprising that the non-degenerate property of chaos processes has received little attention during this period. It is natural to ask to what extent we can generalize the characterizations \eqref{Non-degeneracy for Gaussian} and \eqref{Goal 2} from Gaussian processes to chaos processes.
The goal of this paper is to prove the abovementioned two characterizations of non-degeneracy for chaos processes. More precisely, we will define
\[X_t=I_n(f_t),\ \forall t\in [0,1], \]
where $I_n$ is the $n$-th multiple Wiener integral, and look for a class of such processes that exhibit similar non-degenerate behaviors \eqref{Non-degeneracy for Gaussian} and \eqref{Goal 2}. In this endeavor,
we first need to clarify what would be the proper generalization of \eqref{Non-degeneracy for Gaussian} to a general chaos process. Indeed, the role of a non-Gaussian $\{X_t\}_{0\leq t\leq 1}$ in \eqref{Non-degeneracy for Gaussian} is not clear. Moreover, for an abstract Wiener space, which will be our setting, elements from $\mathcal{H}$ might not be processes, and the integral in \eqref{Non-degeneracy for Gaussian} is not well defined. To answer this question, let us observe that we can rewrite \eqref{Non-degeneracy for Gaussian} as
\begin{equation*}
\left\{\int_{0}^{1}g_sd\langle DX_s, h\rangle_{\mathcal{H}}=\langle \int_{0}^{1}g_sd DX_s, h\rangle_{\mathcal{H}}=0, \forall h\in\mathcal{H} \right\}\Rightarrow \left\{ g\equiv 0 \right\},
\end{equation*}
where $DX_s$ is the Malliavin derivative of $X_s$. Thus, a reasonable generalization of \eqref{Non-degeneracy for Gaussian} should be of the form
\begin{equation*}
\left\{\int_{0}^{1}g_sd DX_s=0 \right\}\Rightarrow \left\{ g= 0 \right\}.
\end{equation*}
More precisely, we will prove that
\begin{equation}
\label{Goal 1''}
\mathbb{P}\left\{\int_{0}^{1}g_tdDX_t=0,\ g_t\neq 0 \right\}=0.
\end{equation}
To sum up, our goal is to prove \eqref{Goal 1''} and the existence of a density for variables of the form given by \eqref{Goal 2}.
We expect that many arguments can be directly borrowed from previous studies of Gaussian processes, but there are two fundamental difficulties that one must overcome.
\begin{enumerate}
\item Unlike a Gaussian process, whose distribution is completely determined by the first two moments, the distribution of a general chaos process cannot be characterized by its moments up to a finite order. As a result, one must find a reasonable assumption that gives enough control over the process. We will see in the next section that one such option is to look into the finer structures generated by partially integrating the kernels.
\item The Malliavin derivative of a Gaussian process is deterministic. The study of \eqref{Non-degeneracy for Gaussian} is a pure analysis question and many analysis tools, such as the Hardy–Littlewood inequality, can thus be used. However, a chaos process living in the $n$th-homogeneous chaos has Malliavin derivatives in terms of the $(n-1)$th-homogeneous chaos, which in general are random. This is much more than just a small nuisance, as one will have to find a condition that can be applied to all sample paths of $\{DX_t\}_{0\leq t\leq 1}$ in a uniform way.
\end{enumerate}
The rest of the paper is organized as follows: Section 2 is a detailed discussion of our assumptions with their motivations, followed by the statements of our main results. Section 3 provides some necessary preliminary materials. Section 4 is devoted to the study of the non-degenerate property of chaos processes, while Section 5 proves the existence of densities for differential equations driven by chaos processes.
\section{Statements of assumptions and main theorems}
Our first assumption provides regularity for the sample paths of $\{X_t\}_{0\leq t\leq 1}$. Thanks to the fact that $\{X_t\}_{0\leq t\leq 1}$ belongs to a fixed chaos, we will see that the Malliavin derivative of $\{X_t\}_{0\leq t\leq 1}$ has the same regularity.
\begin{assumptionnew}
\label{Regularity}
Let $\{f_t\}_{0\leq t\leq 1}\subset \mathcal{H}^{\otimes n}$ be the kernels of $X_t$. We assume $f_0=0$ and that we can find constants $C>0$ and $\theta>1$, such that for any $0\leq s<t\leq 1$
\begin{equation*}
0<\norm{f_t-f_s}_{\mathcal{H}^{\otimes n}}\leq C \abs{t-s}^\frac{\theta}{2}.
\end{equation*}
\end{assumptionnew}
By the moment equivalence of Wiener chaos, for any $p>1$
\begin{equation*}
\mathbb{E}\abs{X_t-X_s}^p\leq C_{2,p}\left(\mathbb{E}\abs{X_t-X_s}^{2}\right)^{\frac{p}{2}}\leq C_{2,p,n}\abs{t-s}^{\frac{p\theta}{2}}.
\end{equation*}
It is then a consequence of the Kolmogorov continuity theorem that the sample paths of $\{X_t\}_{0\leq t\leq 1}$ are $\rho$-H\"older continuous, for any $\rho<\theta/2$, almost surely. Moreover, we have by Meyer's inequality (see proposition \ref{Meyer})
\begin{align*}
\mathbb{E}(\norm{DX_t-DX_s}^p_{\mathcal{H}})\leq C_p \mathbb{E}\abs{X_t-X_s}^p\leq C'_{2,p,n}\abs{t-s}^{\frac{p\theta}{2}}.
\end{align*}
Another application of the Kolmogorov continuity theorem gives $\{DX_t\}_{0\leq t\leq 1}$ the same regularity as $\{X_t\}_{0\leq t\leq 1}$.
\begin{remark}
As a rather straightforward consequence of assumption \ref{Regularity}, integrals like
\begin{equation*}
\int_{0}^{t}X_sdX_s,\ \int_{0}^{t}X_sdDX_s
\end{equation*}
are well defined as Young integrals. In the language of rough path theory, we are now in the regular case.
\end{remark}
In order to state and explain our second and most important assumption as transparently as possible, let us first briefly discuss its motivation. We know that it is standard, when studying integrals like
\[\int_{0}^{t}Y_sdX_s, \]
to consider its discrete Riemann sum approximation. The latter is nothing but a linear combination of increments of $\{X_t\}_{0\leq t\leq 1}$, and is therefore governed by its finite-dimensional distributions. If $\{X_t\}_{0\leq t\leq 1}$ is centered Gaussian, its distribution is completely determined by its covariance function. In other words, assumption \ref{Regularity} is sufficient for us, should $\{X_t\}_{0\leq t\leq 1}$ be Gaussian. But as we explained in the previous section, the situation is drastically different when $\{X_t\}_{0\leq t\leq 1}$ lives in a general chaos. We need an assumption that can give us enough control on the finite-dimensional distributions of $\{X_t\}_{0\leq t\leq 1}$.
It helps to look at the Gaussian case for hints. It is well known that if $\{X_t\}_{0\leq t\leq 1}$ is Gaussian, then its finite-dimensional distributions are non-degenerate Gaussian if and only if any finite collection of $\{f_t\}_{0\leq t\leq 1}$ is linearly independent. Since our goal is to make $X_t=I_n(f_t)$ as non-degenerate as possible, we should try to formulate a similar but stronger version of linear independence in terms of $\{f_t\}_{0\leq t\leq 1}$. It turns out that one such formulation can be carried out as follows.
For each $f_t$, let us consider
\begin{equation*}
F_t=\overline{Span}\left\{\xi=\langle f_t, e_{i_1}\otimes e_{i_2}\otimes \cdots \otimes e_{i_{n-1}} \rangle_{\mathcal{H}^{\otimes (n-1)}},\ i_1,i_2,\cdots, i_{n-1}\geq 1 \right\},
\end{equation*}
where $\{e_{i} \}_{i\geq 1}$ is an orthonormal basis of $\mathcal{H}$. We define $F_{s,t}$ in the same way with $f_t$ replaced by $f_t-f_s$. One can extract plenty of information about $I_n(f_t)$ from $F_t$. For instance, $I_n(f_t)$ and $I_n(f_s)$ are independent if and only if $F_t \perp F_s$ (see \cite{MR1048936}, \cite{MR1106271}). We will also see later that the Malliavin derivative of $X_t$ lives in the subspace $F_t$ (lemma \ref{DX_t and F_t}). Intuitively, $F_t$ as a subspace of $\mathcal{H}$ contains all the ``building blocks'' for $f_t$.
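To fix ideas, consider the (purely illustrative) case $n=2$: if $f_t=\sum_{j,k\geq 1} a_{jk}(t)\, e_j\otimes e_k$ with a symmetric, square-summable coefficient array $(a_{jk}(t))$, then $\langle f_t, e_i\rangle_{\mathcal{H}}=\sum_{k\geq 1} a_{ik}(t)\, e_k$, so that $F_t$ is the closed linear span of the ``rows'' of this array.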
Our previous discussions motivate our next assumption.
\begin{assumptionnew}
\label{Block form}
We assume that there exists a constant $0<\alpha<1$ such that for any $m\in \mathbb{N}^+, k,l\in\mathbb{N}$, any nonzero vector $(a_1, \cdots , a_m)$, any $\{t_1<t_2< \cdots< t_m\}\subset [0,1]$, any $\{s_1,\cdots ,s_k\}\subset [0,t_1)$ and any $\{r_1, r_2, \cdots, r_l \}\subset (t_m,1]$, we have
\begin{equation}
\label{Projection bound}
\norm{(a_1\xi_{t_1, t_2}+\cdots +a_m\xi_{t_{m-1},t_m})-P_{[s_1,s_k]\cup[r_1,r_l]}(a_1\xi_{t_1, t_2}+\cdots +a_m\xi_{t_{m-1},t_m})}^2_{\mathcal{H}}
> \alpha \norm{a_1\xi_{t_1, t_2}+\cdots +a_m\xi_{t_{m-1},t_m}}^2_{\mathcal{H}},
\end{equation}
where $\xi_{t_i,t_{i+1}}\in F_{t_i,t_{i+1}}$ and $P_{[s_1,s_k]\cup[r_1,r_l]}$ is the projection onto
\[\overline{Span}\{F_{0,s_1},\cdots, F_{s_k,t_1}, F_{t_m, r_{1}}, \cdots , F_{r_{l-1},r_l} \}. \]
\end{assumptionnew}
\begin{remark}
We notice that, given $f_0=0$, \eqref{Projection bound} is nothing but a quantitative way of saying that any finite collection of $\xi_{t}$ (hence of $f_{t}$) is linearly independent. It is essentially a non-determinism condition, reminiscent of condition 2 of \cite{MR3298472}. The main difference is that we take projections of linear combinations of $\xi_{s,t}$ rather than of just a single term. This is needed for controlling the finite-dimensional distributions of $\{X_t\}_{0\leq t\leq1}$, as we discussed before.
\end{remark}
\begin{remark}
It is worth mentioning that there is another way to see why considering linear combinations of $\xi_{s,t}$ is necessary in our setting. One often needs to use conditioning in the study of Gaussian processes. For Gaussian random variables, the conditional variance has a natural algebraic expression given by the Schur complement (see \cite{MR3298472}, section 6). However, it is very difficult, if not impossible, to get such explicit conditioning formulas for general chaos random variables (see \cite{MR999370}).
\end{remark} \begin{remark}
Assumption \ref{Block form} also relates to the local non-determinism and the locally approximately independent increments properties introduced in \cite{MR1001520}. We instead formulate our assumption in a global manner. \end{remark}
Although we do not need the next two assumptions for our main results, they are more closely related to the assumptions used for Gaussian processes. Moreover, they enable us to apply techniques that will give a special version of theorem \ref{Main 1} when the integrand is deterministic, which also highlights some unique properties of chaos processes.
Our next assumption is just assumption \ref{Block form} at the kernel level.
\begin{assumptionnew}
\label{Kernel form}
We can find $0<\beta<1$ such that for any $m\in \mathbb{N}^+, k,l\in\mathbb{N}$, any nonzero vector $(a_1, \cdots , a_m)$, any $\{t_1<t_2< \cdots< t_m\}\subset [0,1]$, any $\{s_1,\cdots ,s_k\}\subset [0,t_1)$ and any $\{r_1, r_2, \cdots, r_l \}\subset (t_m,1]$, we have
\begin{equation}
\norm{(a_1f_{t_1, t_2}+\cdots +a_mf_{t_{m-1},t_m})-P_{[s_1,s_k]\cup[r_1,r_l]}(a_1f_{t_1, t_2}+\cdots +a_mf_{t_{m-1},t_m})}^2_{\mathcal{H}^{\otimes n}}
> \beta \norm{a_1f_{t_1, t_2}+\cdots +a_mf_{t_{m-1},t_m}}^2_{\mathcal{H}^{\otimes n}},
\end{equation}
where $P_{[s_1,s_k]\cup[r_1,r_l]}$ is the projection onto
\[\overline{Span}\{f_{0,s_1},\cdots, f_{s_{k},t_1}, f_{t_m, r_1}, \cdots , f_{r_{l-1},r_l} \}. \]
\end{assumptionnew}
Finally, we state our last assumption.
\begin{assumptionnew}
\label{Non-negative row sum}
We assume that for any $[u,v]\subset[s,t]\subset [0,1]$, we have
\begin{equation*}
\langle f_v-f_u, f_t-f_s \rangle_{\mathcal{H}^{\otimes n}}\geq 0.
\end{equation*}
\end{assumptionnew}
This assumption functions in exactly the same way as condition 3 of \cite{MR3298472}; they both ensure that the covariance matrices of $X_t$ have non-negative row sums.
\subsection{Main results}
Now we are ready to state our main results.
\begin{theorem}
\label{Main 1}
Let $\{X_t\}_{0\leq t\leq 1}=I_n(f_t)$ be a continuous process in the $n$-th homogeneous Wiener chaos. If $\{X_t\}_{0\leq t\leq 1}$ satisfies assumptions \ref{Regularity} and \ref{Block form}, then for any process $\{g_t\}_{0\leq t\leq 1}$ whose sample paths are $\tau$-H\"older continuous almost surely, with $\tau+\rho>1$, we have
\begin{equation*}
\mathbb{P}\left\{\int_{0}^{1}g_tdDX_t=0,\ g_t\neq 0 \right\}=0.
\end{equation*}
\end{theorem}
As an application, we have
\begin{theorem}
\label{Main 2}
Let $\{X_t\}_{0\leq t\leq 1}=I_n(f_t)$ be a continuous process in the $n$-th homogeneous Wiener chaos satisfying assumptions \ref{Regularity} and \ref{Block form}. Consider the following SDE
\begin{equation*}
dY_t=\sum_{i=1}^{d}V_i(Y_t)dX^i_t+V_0(Y_t)dt,\ Y_0=y_0\in\mathbb{R}^d.
\end{equation*}
If $\{V_i\}_{0\leq i \leq d}\subset C^\infty_b(\mathbb{R}^d)$ and $\{V_i\}_{1\leq i\leq d}$ form an elliptic system, then, for any $0<t\leq 1$, $Y_t$ has a density with respect to the Lebesgue measure on $\mathbb{R}^d$.
\end{theorem}
\section{Preliminaries} \subsection{Wiener chaos} Let $\mathcal{H}$ be a real separable Hilbert space. We say that $W=\{W(h):\ h\in\mathcal{H} \}$ is an isonormal Gaussian process over $\mathcal{H}$ if $W$ is a family of centered Gaussian random variables defined on some complete probability space $(\Omega, \mathcal{F}, \mathbb{P})$ such that \begin{equation*}
\mathbb{E}(W(h)W(g))=\langle h,g \rangle_{\mathcal{H}}. \end{equation*} We will further assume that $\mathcal{F}$ is generated by $W$.
For every $k\geq 1$, we denote by $\mathcal{H}_k$ the $k$-th homogeneous Wiener chaos of $W$ defined as the closed subspace of $L^2(\Omega)$ generated by the family of random variables $\{H_k(W(h)):\ h\in\mathcal{H} \}$ where $H_k$ is the $k$-th Hermite polynomial given by \begin{equation*}
H_k(x)=(-1)^ke^{\frac{x^2}{2}}\frac{d^k}{dx^k}\left(e^{-\frac{x^2}{2}} \right). \end{equation*} $\mathcal{H}_0$ is by convention defined to be $\mathbb{R}$.
For any $k\geq 1$, we denote by $\mathcal{H}^{\otimes k}$ the $k$-th tensor product of $\mathcal{H}$. If $\phi_1, \phi_2,\cdots ,\phi_n\in\mathcal{H}$, we define the symmetrization of $\phi_1\otimes \cdots \otimes \phi_n$ by \[\phi_1\hat{\otimes}\cdots \hat{\otimes} \phi_n=\frac{1}{n!}\sum_{\sigma\in\Sigma_n}\phi_{\sigma(1)}\otimes \cdots \otimes \phi_{\sigma(n)},\]
where $\Sigma_n$ is the symmetric group of $\{1,2,\cdots,n\}$. The symmetrization of $\mathcal{H}^{\otimes k}$ is denoted by $\mathcal{H}^{\hat{\otimes} k}$. We consider $f\in\mathcal{H}^{\hat{\otimes} n}$ of the form
\[f=e_{j_1}^{\hat{\otimes} k_1}\hat{\otimes} e_{j_2}^{\hat{\otimes} k_2} \hat{\otimes} \cdots \hat{\otimes} e_{j_m}^{\hat{\otimes} k_{m}}, \]
where $\{e_i\}_{i\geq 1}$ is an orthonormal basis of $\mathcal{H}$, the indices $j_1,\dots,j_m$ are distinct, and $k_1+\cdots +k_m=n$. The multiple Wiener-It\^o integral of $f$ is defined as
\[I_n(f)=H_{k_1}(W(e_{j_1})) \cdots H_{k_m}(W(e_{j_m})). \] If $f,g\in \mathcal{H}^{\hat{\otimes} n}$ are of the above form, we have the following isometry
\[\mathbb{E}(I_n(f)I_n(g))=n!\langle f,g \rangle_{\mathcal{H}^{\otimes n}}. \]
For general elements in $ \mathcal{H}^{\hat{\otimes} n}$, the multiple Wiener-It\^o integrals are defined by $L^2$ convergence with the previous isometry equality.
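For illustration (and assuming the above conventions), for a unit vector $e_1\in\mathcal{H}$ one has
\begin{equation*}
I_2(e_1\otimes e_1)=H_2(W(e_1))=W(e_1)^2-1,\qquad \mathbb{E}\big(I_2(e_1\otimes e_1)^2\big)=2!\,\norm{e_1\otimes e_1}^2_{\mathcal{H}^{\otimes 2}}=2,
\end{equation*}
which is consistent with a direct computation using $W(e_1)\sim N(0,1)$.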
Let $\mathcal{G}$ be the $\sigma$-algebra generated by $\{W(h), h\in\mathcal{H}\}$, then any random variable $F\in L^2(\Omega, \mathcal{G}, \mathbb{P})$ admits an orthogonal decomposition (Wiener chaos decomposition) of the form
\begin{equation*}
F=\sum_{k=0}^{\infty}I_k(f_k),
\end{equation*} where $f_0=\mathbb{E}(F)$ and $f_k\in\mathcal{H}^{\hat{\otimes}k}$ are uniquely determined by $F$.
We end this subsection with a useful lemma. \begin{lemma}
\label{Multiple integral density}
For $n\geq 1$, let $f\in \mathcal{H}^{\otimes n}$ be such that $\norm{f}_{\mathcal{H}^{\otimes n}}>0$. Then $I_n(f)$ has a density with respect to the Lebesgue measure on $\mathbb{R}$. \end{lemma} \subsection{Malliavin calculus}
Let $\mathcal{FC}^\infty$ denote the set of cylindrical random variables of the form \[ F=f(W(h_1), \cdots ,W(h_n )), \] where $n\geq 1$, $h_i\in \mathcal{H}$ and $f\in C^{\infty}_b(\mathbb{R}^n)$; that is, $f$ is a smooth function on $\mathbb{R}^n$ which is bounded together with all its derivatives. The Malliavin derivative of $F$ is an $\mathcal{H}$-valued random variable defined as \begin{equation*}
DF=\sum_{i=1}^{n}\frac{\partial f}{\partial x_i}(W(h_1), \cdots ,W(h_n ))h_i. \end{equation*} By iteration, one can define the $k$-th Malliavin derivative of $F$ as an $\mathcal{H}^{\otimes k}$-valued random variable. For $m,p\geq 1$, we denote by $\mathbb{D}^{m,p}$ the closure of $\mathcal{FC}^\infty$ with respect to the norm \[ \norm{F}^p_{m,p}=\mathbb{E}(\abs{F}^p)+\sum_{k=1}^{m}\mathbb{E}\left(\norm{D^kF }^p_{\mathcal{H}^{\otimes k}} \right).\]
For any $F\in L^2(\Omega)$ with Wiener chaos decomposition $F=\sum_{n=0}^{\infty}I_n(f_n)$, we define the operator $C=-\sqrt{-L}$ by \[ CF= -\sum_{n=0}^{\infty}\sqrt{n}I_n(f_n), \] provided the series is convergent. Here $L$ is the Ornstein-Uhlenbeck operator. Now we can state Meyer's inequality
\begin{proposition}
\label{Meyer}
For any $p>1$ and any integer $k\geq 1$ there exist positive constants $C_{p,k}, C'_{p,k}$ such that for any $F\in\mathcal{FC}^\infty$
\begin{equation*}
C_{p,k}\mathbb{E}\norm{D^kF}^p_{\mathcal{H}^{\otimes k}}\leq \mathbb{E}\abs{C^kF}^p\leq C'_{p,k} \left\{ \mathbb{E}\abs{F}^p+\mathbb{E}\norm{D^kF}^p_{\mathcal{H}^{\otimes k}} \right\}.
\end{equation*} \end{proposition}
For a random vector $F=(F_1,\cdots,F_n)$ such that $F_i\in\mathbb{D}^{1,2}$, we denote by $C(F)$ the Malliavin matrix of $F$, which is a non-negative definite matrix defined by \[C(F)_{ij}=\langle DF_i, DF_j \rangle_{\mathcal{H}}. \] If $\det(C(F))>0$ almost surely, then the law of $F$ admits a density with respect to the Lebesgue measure on $\mathbb{R}^n$. This is a basic result that we will use in later sections.
\subsection{SDEs in the Young setting} Suppose that $\{X_t\}_{0\leq t\leq 1}$ is a process whose sample paths are $\beta$-H\"older continuous almost surely for some $\beta\in(1/2 , 1)$. We consider the following stochastic differential equation \begin{equation}
\label{Young SDE}
Y_t=y_0+\sum_{i=1}^{d}\int_{0}^{t}V_i(Y_s)dX^i_s+\int_{0}^{t}V_0(Y_s)ds,\ y_0\in\mathbb{R}^d. \end{equation} It is well known that if $\{V_i\}_{0\leq i\leq d}\subset C^2_b(\mathbb{R}^d)$, then \eqref{Young SDE} admits a unique solution. A standard way to prove this fact is to regard $\{Y_t\}_{0\leq t\leq 1}$ as a fixed point of the map \begin{align*}
\mathcal{M}: C^{\beta}([0,1])&\rightarrow C^{\beta}([0,1])\\
Y&\mapsto y_0+\sum_{i=1}^{d}\int_{0}^{t}V_i(Y_s)dX^i_s+\int_{0}^{t}V_0(Y_s)ds, \end{align*} where the integrals are understood as Young's integrals. By Young's maximal inequality, one has the estimate \[ \norm{Y}_{\beta}\leq C \norm{V}_{C^2_b}\norm{X}_{\beta}, \] where the constant $C$ only depends on $\beta$. (There is a subtlety in the actual proof of these results; one usually needs to replace $\beta$ by $\beta'\in (1/2, \beta)$. We refer to \cite{MR3289027} chapter 8 for more details.)
The solution of the differential equation \eqref{Young SDE} depends smoothly on the initial point. We can define the Jacobian process of $Y_t$ as \[J_{t\leftarrow 0}=\frac{\partial Y_t}{\partial y_0}. \] By differentiating both sides of \eqref{Young SDE} we can obtain a non-autonomous linear differential equation governing $J_{t\leftarrow 0}$: \begin{equation}
\label{Jacobian}
dJ_{t\leftarrow 0}=\sum_{i=1}^{d}DV_i(Y_t)J_{t\leftarrow 0}dX^i_t+DV_0(Y_t)J_{t\leftarrow 0}dt,\ J_{0\leftarrow 0}=I_{d\times d}. \end{equation} The inverse $J^{-1}_{t\leftarrow 0}:=J_{0\leftarrow t}$ of the Jacobian process can be found by solving the SDE
\begin{equation}
\label{Jacobian inverse}
dJ_{0\leftarrow t}=-\sum_{i=1}^{d}J_{0\leftarrow t}DV_i(Y_t)dX^i_t-J_{0\leftarrow t}DV_0(Y_t)dt,\ J_{0\leftarrow 0}=I_{d\times d}. \end{equation}
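As a minimal illustration (assuming $d=1$, a single driving signal, and scalar vector fields $V_0, V_1$), the linear equation \eqref{Jacobian} can be solved explicitly: since Young integrals obey the classical chain rule, one finds
\begin{equation*}
J_{t\leftarrow 0}=\exp\left(\int_0^t V_1'(Y_s)\,dX_s+\int_0^t V_0'(Y_s)\,ds\right),
\end{equation*}
which in particular shows that $J_{t\leftarrow 0}>0$ and that $J_{0\leftarrow t}$ is simply its reciprocal.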
\section{Non-degenerate properties of $X_t$}
\subsection{Deterministic integrand}
Let us first consider the case where $\{g_t\}_{0\leq t\leq 1}$ is deterministic. In the rest of the paper we will often use $X_{s,t}$ as a short notation for $X_t-X_s$.
\begin{proposition}
\label{Interpolation type inequality}
Let $\{X_t\}_{0\leq t\leq 1}=I_n(f_t)$ be a continuous process in the $n$-th homogeneous Wiener chaos. Let $g_t$ be a $\tau$-H\"older continuous function with $\tau+\rho>1$ and H\"older norm $\norm{g}_\tau\neq 0$. Then under assumptions \ref{Regularity}, \ref{Kernel form} and \ref{Non-negative row sum}, one of the following two inequalities is always true:
\begin{equation}
\label{Case 1}
\mathbb{E}\left(\int_{0}^{1}g_tdX_t \right)^2\geq \frac{\beta }{4}(\sup_{r\in[0,1]}\abs{g_r})^2 \mathbb{E}(X_{1})^2,
\end{equation}
or,
\begin{equation}
\label{Case 2}
\mathbb{E}\left(\int_{0}^{1}g_tdX_t \right)^2\geq \frac{\beta }{4}(\sup_{r\in[0,1]}\abs{g_r})^2 \mathbb{E}(X_{a,b})^2,
\end{equation}
for a certain interval $[a,b]\subset [0,1]$ such that
\begin{equation*}
\left(\frac{\sup_{r\in[0,1]}\abs{g_r}}{2\norm{g}_\tau } \right)^{\frac{1}{\tau}} \leq \abs{b-a}.
\end{equation*}
\end{proposition}
\begin{proof}
We have
\begin{equation*}
\mathbb{E}\left(\int_{0}^{1}g_tdX_t \right)^2=\int_{0}^{1}\int_{0}^{1}g_sg_t d\mathbb{E} (X_s X_t)= n! \int_{0}^{1}\int_{0}^{1}g_sg_t d \langle f_s, f_t \rangle_{\mathcal{H}^{\otimes n}}.
\end{equation*}
Since $g$ is continuous, we can always find an interval $[a,b]\subset [0,1]$ such that
\[\inf_{s\in[a,b]} \abs{g_s}\geq \frac{1}{2} \sup_{s\in[0,1]}\abs{g_s}. \]
Consider the dyadic partitions $\{D_k \}_{k\geq 1}$ of $[0,1]$. For each $k\geq 1$, we write
\[D_k=\{t_0, t_1 ,\cdots, t_{2^k} \}. \]
The discrete approximation of the above double integral along $D_k$ is given by
\[ \int_{D_k\times D_k}g_sg_t d\mathbb{E} (X_s X_t)=\mathbb{E}\left(g_{t_0}X_{t_0,t_1}+ g_{t_1}X_{t_1,t_2}+\cdots + g_{t_{2^k-1}}X_{t_{2^k-1},t_{2^k}} \right)^2. \]
Since $\{X_t\}_{t\in[0,1]}$ belongs to the $n$-th homogeneous Wiener chaos, we have
\begin{align*}
\int_{D_k\times D_k}g_sg_t d\mathbb{E} (X_s X_t)&=n!\norm{g_{t_0}f_{t_0,t_1}+ g_{t_1}f_{t_1,t_2}+\cdots + g_{t_{2^k-1}}f_{t_{2^k-1},t_{2^k}} }^2_{\mathcal{H}^{\otimes n}}\\
&=n!\norm{\sum_{(t_i,t_{i+1})\subset[a,b]}g_{t_i}f_{t_i,t_{i+1}}+\sum_{(t_l,t_{l+1})\not\subset [a,b]}g_{t_l}f_{t_l,t_{l+1}}}^2_{\mathcal{H}^{\otimes n}}.
\end{align*}
Define
\begin{align*}
\Gamma_k&=\sum_{(t_i,t_{i+1})\subset[a,b]}g_{t_i}f_{t_i,t_{i+1}},\\
\mathcal{S}_k&=\overline{Span}\left\{f_{t_l,t_{l+1}}:\ {(t_l,t_{l+1})\not\subset [a,b]} \right\}.
\end{align*}
We use $P_{\mathcal{S}_k}$ to denote the projection onto $\mathcal{S}_k$, then by assumption \ref{Kernel form} we have
\begin{align}
&\int_{D_k\times D_k}g_sg_t d\mathbb{E} (X_s X_t)\nonumber \\
&=n!\norm{\left(\Gamma_k-P_{\mathcal{S}_k}(\Gamma_k)\right)+P_{\mathcal{S}_k}(\Gamma_k)+\sum_{(t_l,t_{l+1})\in [0,a)\cup (b,1]}g_{t_l}f_{t_l,t_{l+1}}}^2_{\mathcal{H}^{\otimes n}}\nonumber \\
&\geq n! \norm{\left(\Gamma_k-P_{\mathcal{S}_k}(\Gamma_k)\right)}^2_{\mathcal{H}^{\otimes n}}\geq \beta n! \norm{\Gamma_k}^2_{\mathcal{H}^{\otimes n}}. \label{Interpolation lower bound}
\end{align}
On the other hand, we have
\begin{equation*}
\norm{\Gamma_k}^2_{\mathcal{H}^{\otimes n}}=( g_{t_{k_1}}, g_{t_{k_2}},\cdots)\cdot \langle f_{t_{k_i}, t_{k_i+1}}, f_{t_{k_j}, t_{k_j+1}} \rangle_{\mathcal{H}^{\otimes n}}\cdot ( g_{t_{k_1}}, g_{t_{k_2}},\cdots)^T,
\end{equation*}
where $\{t_{k_i} \}\subset [a,b] \cap D_k$. Since the last expression is quadratic in $( g_{t_{k_1}}, g_{t_{k_2}},\cdots)$ and $g_s$ does not change sign in $[a,b]$, we may assume without loss of generality that
\[g_s\geq \frac{1}{2} \sup_{r\in[0,1]}\abs{g_r},\ \forall s\in [a,b]. \]
Bounding $\norm{\Gamma_k}^2_{\mathcal{H}^{\otimes n}}$ from below is equivalent to solving the optimization problem
\begin{equation}
\label{Infimum}
\inf_{x\in \mathcal{C}^k} x^T \cdot M^k \cdot x,
\end{equation}
where
\[ \mathcal{C}^k=\{x\in \mathbb{R}^{d_k}: x_i\geq \frac{1}{2} \sup_{r\in[0,1]}\abs{g_r} , 1\leq i\leq d_k \}, \]
\begin{equation*}
M^k_{ij}= \langle f_{t_{k_i}, t_{k_i+1}}, f_{t_{k_j}, t_{k_j+1}} \rangle_{\mathcal{H}^{\otimes n}},
\end{equation*}
and $d_k$ is the number of rows of $M^k$.
By assumption \ref{Non-negative row sum}, for any $k\geq 1$ the matrix $M^k$ has non-negative row sums. Thus, by lemma 6.2 of \cite{MR3298472}, the infimum of \eqref{Infimum} is achieved when all components of $x$ equal $\frac{1}{2} \sup_{r\in[0,1]}\abs{g_r}$ (indeed, the quadratic form is convex and its gradient $2M^kx$ has non-negative entries at this corner of $\mathcal{C}^k$, so it cannot decrease in any feasible direction). We therefore deduce
\begin{equation*}
\norm{\Gamma_k}^2_{\mathcal{H}^{\otimes n}}\geq \frac{1}{4}(\sup_{r\in[0,1]}\abs{g_r})^2 \norm{f_{t_{k_1},t_{k_{d_k}}}}^2_{\mathcal{H}^{\otimes n}}= \frac{1}{4n!}(\sup_{r\in[0,1]}\abs{g_r})^2 \mathbb{E}(X_{t_{k_1},t_{k_{d_k}}})^2.
\end{equation*}
Sending $k$ to infinity, we infer from \eqref{Interpolation lower bound} that
\begin{equation}
\label{Expected Malliavin lower bound}
\int_{0}^{1}\int_{0}^{1}g_sg_t d\mathbb{E} (X_s X_t)\geq \frac{\beta }{4}(\sup_{r\in[0,1]}\abs{g_r})^2 \mathbb{E}(X_{a,b})^2.
\end{equation}
Finally, let us estimate the length of the interval $[a,b]$. There are two possibilities which are mutually exclusive. In the first case, we have
\[ \abs{g(x)}>\frac{1}{2}\sup_{r\in[0,1]}\abs{g_r},\ \forall x\in[0,1]. \]
We can thus choose $[a,b]=[0,1]$, and deduce \eqref{Case 1} from \eqref{Expected Malliavin lower bound}. In the second case, there exists $x\in [0,1]$ such that $\abs{g(x)}=1/2\sup_{r\in[0,1]}\abs{g_r}$. Then, we can choose $b$ such that $\abs{g(b)}=\sup_{r\in[0,1]}\abs{g_r}$ and define
\[ a=\sup\{x: \abs{g(x)}\leq \frac{1}{2}\sup_{r\in[0,1]}\abs{g_r},\ 0\leq x<b \}. \]
By H\"older continuity of $g_s$, we have
\begin{equation*}
\frac{1}{2} \sup_{r\in[0,1]}\abs{g_r} \leq \norm{g}_\tau \abs{b-a}^\tau.
\end{equation*}
Inequality \eqref{Case 2} now follows from \eqref{Expected Malliavin lower bound}. Our proof is now complete.
\end{proof}
If one had a H\"older type lower bound on the covariance function (as often the case with Gaussian processes), we may incorporate the information on the interval $[a,b]$ into the main inequality.
\begin{corollary}
\label{Interpolation 2}
Under the assumptions of proposition \ref{Interpolation type inequality}, if, in addition, we have for some $\eta, C>0$ and any $0\leq s<t\leq 1$ that
\begin{equation*}
\mathbb{E}(X_{s,t})^2\geq C \abs{t-s}^\eta,
\end{equation*}
then
\begin{equation*}
\sup_{r\in[0,1]}\abs{g_r}\leq \max\left\{\frac{2}{\sqrt{\beta}}\left( \mathbb{E}(X_1)^2\right)^{-\frac{1}{2}}\left(\mathbb{E}\left(\int_{0}^{1}g_tdX_t \right)^2 \right)^{\frac{1}{2}}, \frac{2^{\frac{2\tau-\eta}{2\tau+\eta}}}{(\beta C)^{\frac{\tau}{2\tau+\eta}}} \left(\mathbb{E}\left(\int_{0}^{1}g_tdX_t \right)^2 \right)^{\frac{\tau}{2\tau+\eta}}\norm{g}_{\tau}^{\frac{\eta}{2\tau+\eta}} \right\}.
\end{equation*}
\end{corollary}
\begin{remark}
The previous inequality is of the same form as lemma A.3 from \cite{MR2814425} and corollary 6.10 from \cite{MR3298472}.
\end{remark}
With a little more effort, we have the following
\begin{corollary}
\label{Interpolation 3}
Under the assumptions of corollary \ref{Interpolation 2}, we have
\begin{equation}
\label{Weaker Goal 1''}
\sup_{r\in[0,1]}\abs{g_r}\leq \max\left\{\frac{2}{n\sqrt{\beta}}\left( \mathbb{E}(X_1)^2\right)^{-\frac{1}{2}}\left(\mathbb{E}\norm{\int_{0}^{1}g_tdDX_t}^2_{\mathcal{H}}\right)^{\frac{1}{2}}, \frac{2^{\frac{2\tau-\eta}{2\tau+\eta}}}{n(\beta C)^{\frac{\tau}{2\tau+\eta}}} \left(\mathbb{E}\norm{\int_{0}^{1}g_tdDX_t}^2_{\mathcal{H}} \right)^{\frac{\tau}{2\tau+\eta}}\norm{g}_{\tau}^{\frac{\eta}{2\tau+\eta}} \right\}.
\end{equation}
\end{corollary}
\begin{proof}
This is a consequence of
\begin{align*}
\mathbb{E}\left(\int_{0}^{1}g_tdX_t \right)^2 =\int_{0}^{1}\int_{0}^{1}g_tg_sd\mathbb{E}(X_sX_t)= \frac{1}{n}\int_{0}^{1}\int_{0}^{1}g_tg_sd\mathbb{E}\langle DX_s, DX_t \rangle_{\mathcal{H}}=\frac{1}{n}\mathbb{E}\norm{\int_{0}^{1}g_tdDX_t}^2_{\mathcal{H}}.
\end{align*}
\end{proof}
\begin{remark}
Corollary \ref{Interpolation 3} is a quantitative version of
\begin{equation*}
\left\{\int_{0}^{1}g_sd DX_s=0,\ \forall\omega\in\Omega \right\}\Rightarrow \left\{ g\equiv 0 \right\}.
\end{equation*}
This implication is weaker than \eqref{Goal 1''} with $g_t$ deterministic, but the inequality \eqref{Weaker Goal 1''} is quite interesting in its own right.
\end{remark}
Now we move on to prove the existence of a density for the variables defined in \eqref{Goal 2} under assumptions \ref{Regularity}, \ref{Kernel form} and \ref{Non-negative row sum}.
\begin{proposition}
Let $\{X_t\}_{0\leq t\leq 1}=I_n(f_t)$ be a continuous process in the $n$-th homogeneous Wiener chaos. Let $g_t$ be any $\tau$-H\"older continuous function with $\tau+\rho>1$. If $g_t$ is not identically zero, then under assumptions \ref{Regularity}, \ref{Kernel form} and \ref{Non-negative row sum}, the random variable
\begin{equation*}
Y=\int_{0}^{1}g_tdX_t
\end{equation*}
has a density with respect to the Lebesgue measure on $\mathbb{R}$.
\end{proposition}
\begin{proof}
Since $Y$ is $\mathbb{R}$-valued, by definition, the Malliavin matrix of $Y$ is just
\[C(Y)=\langle DY, DY \rangle_\mathcal{H}. \] Thanks to $g_t$ being deterministic, $Y$ also belongs to the $n$-th homogeneous Wiener chaos. By theorem 3.1 of \cite{MR3035750}, it suffices to show that
\begin{equation}
\label{Chaos unique}
\mathbb{E}(\det C(Y))=\mathbb{E}(\langle DY, DY \rangle_\mathcal{H})> 0.
\end{equation}
A standard computation gives
\begin{equation*}
DY=\int_{0}^{1}g_tdDX_t,
\end{equation*}
where the integral is understood as a Young integral. Thus,
\begin{align}
\mathbb{E}(\langle DY, DY \rangle_\mathcal{H})&=\int_{0}^{1}\int_{0}^{1}g_sg_td\mathbb{E}\langle DX_s ,DX_t \rangle_{\mathcal{H}}\nonumber \\
&=n\int_{0}^{1}\int_{0}^{1}g_sg_td\mathbb{E} (X_s X_t). \label{ Malliavin matrix: determinstic integrand}
\end{align}
By proposition \ref{Interpolation type inequality} we have
\begin{equation}
\label{Lower bound deterministic case}
\int_{0}^{1}\int_{0}^{1}g_sg_td\mathbb{E} (X_s X_t)=\mathbb{E}\left(\int_{0}^{1}g_tdX_t \right)^2\geq \frac{\beta }{4}(\sup_{r\in[0,1]}\abs{g_r})^2 \{ \mathbb{E}(X_{a,b})^2 \wedge \mathbb{E}(X_1)^2 \}.
\end{equation}
Since $g_t$ is not identically zero, we have
\[\sup_{r\in[0,1]}\abs{g_r}>0,\ \text{and}\ a<b. \]
Combining \eqref{ Malliavin matrix: determinstic integrand}, \eqref{Lower bound deterministic case} and the lower bound from assumption \ref{Regularity}, we conclude that
\begin{equation*}
\mathbb{E}(\det C(Y))=\mathbb{E}(\langle DY, DY \rangle_\mathcal{H})> 0.
\end{equation*}
The proof is now complete.
\end{proof}
\begin{remark}
Notice we used in \eqref{Chaos unique} the fact that if $F$ is in a fixed Wiener chaos then
\begin{equation*}
\mathbb{P}\{ DF=0 \}=0 \Longleftrightarrow \mathbb{E}\langle DF, DF \rangle_{\mathcal{H}}>0.
\end{equation*}
We can thus verify the pathwise condition on $DF$ by computing an expectation, as we did above. This equivalence fails in general when $F$ is not in a finite Wiener chaos (in our setting, when $g_t$ is not deterministic).
\end{remark}
\subsection{Random integrand}
Now we turn to the more general case where $\{g_t\}_{0\leq t\leq 1}$ is random. We first prepare a lemma that reveals the relationship between the kernel $f_t$ and the subspace $F_t$ used in assumption \ref{Block form}.
\begin{lemma}
\label{DX_t and F_t}
For every $t\in [0,1]$, we have $ DX_t\in F_t$ almost surely.
\end{lemma}
\begin{proof}
Let $\{e_i\}_{i\geq 1}$ be an orthonormal basis of $\mathcal{H}$. Then for each kernel $f_t$, we may write
\begin{equation*}
f_t=\sum_{i_1,i_2,\cdots, i_n\geq 1} a_{i_1,i_2,\cdots, i_n} e_{i_1}\otimes e_{i_2}\otimes\cdots \otimes e_{i_n}.
\end{equation*}
For any choice of $e_{k_1}, \cdots, e_{k_{n-1}}$, we have by definition
\begin{equation*}
\langle f_t, e_{k_1}\otimes e_{k_2}\otimes\cdots \otimes e_{k_{n-1}} \rangle_{\mathcal{H}^{\otimes (n-1)}}=\sum_{i_1\geq 1} a_{i_1, k_1, k_2,\cdots ,k_{n-1}} e_{i_1} \in F_t.
\end{equation*}
Let $\{\xi^i_t\}_{i\geq 1}$ be an orthonormal basis of $F_t$; then we may write
\begin{equation*}
\sum_{i_1\geq 1} a_{i_1, k_1, k_2,\cdots ,k_{n-1}} e_{i_1}=\sum_{i_1\geq 1}b_{i_1,k_1, k_2,\cdots ,k_{n-1} } \xi^{i_1}_t.
\end{equation*}
We thus infer that
\begin{equation*}
f_t=\sum_{i_1,k_1,k_2,\cdots ,k_{n-1}\geq 1 }b_{i_1,k_1, k_2,\cdots ,k_{n-1} }\xi^{i_1}_t\otimes e_{k_1}\otimes \cdots \otimes e_{k_{n-1}}.
\end{equation*}
We can repeat this process and get
\begin{equation*}
f_t=\sum_{i_1,i_2,\cdots, i_n\geq 1} c_{i_1,i_2,\cdots, i_n} \xi_t^{i_1}\otimes \xi_t^{i_2}\otimes \cdots \otimes \xi_t^{i_n}.
\end{equation*}
As a result, we have $f_t\in F_t^{\otimes n}$ and
\begin{equation*}
DI_n(f_t)=nI_{n-1}(f_t)\in F_t.
\end{equation*}
\end{proof}
\begin{proposition}
\label{Uniform bound for integrals}
Let $\{X_t\}_{0\leq t\leq 1}=I_n(f_t)$ be a continuous process in the $n$-th homogeneous Wiener chaos. Let $g_t$ be any process whose sample paths are $\tau$-H\"older continuous almost surely, with $\tau+\rho>1$. Then under assumptions \ref{Regularity} and \ref{Block form}, we have
\begin{equation*}
\norm{\int_{0}^{1}g_rdDX_r}_{\mathcal{H}}\geq \sqrt{\alpha}\cdot \sup_{t\in[0,1]} \norm{\int_{0}^{t}g_rdDX_r}_{\mathcal{H}}.
\end{equation*}
\end{proposition}
\begin{proof}
Let $\{D_k \}_{k\geq 1}$ be an increasing sequence of partitions of $[0,1]$. It suffices to prove that for any $t\in [0,1]$,
\begin{equation*}
\norm{\int_{D_k\cap [0,1]}g_rdDX_r}_{\mathcal{H}}\geq \sqrt{\alpha} \norm{\int_{D_k\cap [0,t]}g_rdDX_r}_{\mathcal{H}}.
\end{equation*}
We have
\begin{equation*}
\int_{D_k\cap [0,1]}g_rdDX_r=\int_{D_k\cap [0,t]}g_rdDX_r+\int_{D_k\cap (t,1]}g_rdDX_r.
\end{equation*}
Define
\[G^k_{t}= Span\left\{ (DX_{r_{i+1}}-DX_{r_{i}}),\ \forall r_i,r_{i+1}\in D_k\cap (t,1] \right\}. \]
By the previous lemma, we have
\[ \int_{D_k\cap (t,1]}g_rdDX_r\in G^k_{t}\subset \overline{Span}\left\{F_{r_{i},r_{i+1}} ,\ \forall r_i,r_{i+1}\in D_k\cap (t,1] \right\}. \]
To ease notation, we denote the orthogonal projection onto $\overline{Span}\left\{F_{r_{i},r_{i+1}} ,\ \forall r_i,r_{i+1}\in D_k\cap (t,1] \right\}$ by $P$. Now we can deduce from assumption \ref{Block form} that
\begin{align*}
& \norm{\int_{D_k\cap [0,1]}g_rdDX_r}^2_{\mathcal{H}}=\norm{\int_{D_k\cap [0,t]}g_rdDX_r+\int_{D_k\cap (t,1]}g_rdDX_r}^2_{\mathcal{H}}\\
&=\norm{\int_{D_k\cap [0,t]}g_rdDX_r-P\left(\int_{D_k\cap [0,t]}g_rdDX_r \right)}^2_{\mathcal{H}}+\norm{\int_{D_k\cap (t,1]}g_rdDX_r+P\left(\int_{D_k\cap [0,t]}g_rdDX_r\right)}^2_{\mathcal{H}}\\
&\geq \norm{\int_{D_k\cap [0,t]}g_rdDX_r-P\left(\int_{D_k\cap [0,t]}g_rdDX_r \right)}^2_{\mathcal{H}}\\
&\geq \alpha \norm{\int_{D_k\cap [0,t]}g_rdDX_r}^2_{\mathcal{H}}.
\end{align*}
\end{proof}
\begin{remark}
In fact, with the same argument, we can show
\begin{equation*}
\norm{\int_{0}^{1}g_rdDX_r}_{\mathcal{H}}\geq \sqrt{\alpha}\cdot \sup_{[s,t]\subset[0,1]} \norm{\int_{s}^{t}g_rdDX_r}_{\mathcal{H}}.
\end{equation*}
\end{remark}
Now we are ready for the proof of theorem \ref{Main 1}.
\begin{proof}[Proof of theorem \ref{Main 1}]
By the maximal inequality for Young integrals, we can find $C(\rho,\tau)>0$ independent of $g(\omega)$ such that for any $[a,a+\epsilon]\subset[0,1]$,
\begin{equation}
\label{Young's maximal inequality}
\norm{\int_{a}^{a+\epsilon}g_rdDX_r-g_a\cdot(DX_{a+\epsilon}-DX_a)}_\mathcal{H}\leq C \norm{g_r(\omega)}_\tau \norm{DX_r(\omega)}_{\rho} \epsilon^{\tau+\rho}.
\end{equation}
From proposition \ref{Uniform bound for integrals}, we know that
\begin{equation*}
\left\{\int_{0}^{1}g_tdDX_t=0 \right\} \Rightarrow \left\{ \int_{a}^{a+\epsilon}g_rdDX_r=0 \right\}.
\end{equation*}
Combining this with \eqref{Young's maximal inequality} gives
\begin{equation}
\label{Important step}
\left\{\int_{0}^{1}g_tdDX_t=0 \right\} \Rightarrow \left\{ \norm{g_a\cdot(DX_{a+\epsilon}-DX_a)}_\mathcal{H}\leq C \norm{g_r(\omega)}_\tau \norm{DX_r(\omega)}_{\rho} \epsilon^{\tau+\rho} \right\}.
\end{equation}
If $g_t(\omega)\not\equiv 0$, by continuity we can find $[l(\omega),u(\omega)]\subset [0,1]$ such that $\abs{g_r(\omega)}>c(\omega)>0$ on $[l,u]$.
Hence, we have for any $[a,a+\epsilon]\subset[l,u]$
\begin{equation}
\label{Holder exponent greater than 1}
\left\{\int_{0}^{1}g_tdDX_t=0,\ g_t\neq 0 \right\} \Rightarrow\left\{ c(\omega)\cdot\norm{DX_{a+\epsilon}-DX_a}_\mathcal{H}\leq C \norm{g_t(\omega)}_\tau \norm{DX_r(\omega)}_{\rho} \epsilon^{\tau+\rho} \right\}.
\end{equation}
The right-hand side of the above implication says that $DX_t$ is $(\tau+\rho)$-H\"older continuous on $[l,u]$. Moreover, since $\tau+\rho>1$, $DX_t$ must remain constant on $[l,u]$, which means that $DX_t-DX_s=0$ for any $[s,t]\subset [l,u]$. However, since $DX_s\in F_s$, $DX_t\in F_t$, assumptions \ref{Regularity} and \ref{Block form} imply that $DX_s, DX_t$ are linearly independent unless they are both zero. Hence, we infer that $DX_s=DX_t=0$.
Gathering everything we have proved so far gives
\begin{equation}
\label{Crucial implication}
\left\{\int_{0}^{1}g_tdDX_t=0,\ g_t\neq 0 \right\}\Rightarrow \left\{DX_t=0,\ \forall t\in [l(\omega), u(\omega)] \right\}.
\end{equation}
The event on the right-hand side depends on the sample path $\omega$. We need the following uniform implication.
\begin{align}
\left\{DX_t=0,\ \forall t\in [l(\omega), u(\omega)] \right\}&\Rightarrow \left\{\exists t\in \mathbb{Q}\cap [0,1],\ DX_t=0 \right\}\nonumber \\
&\Rightarrow \bigcup_{t\in \mathbb{Q}\cap[0,1]}\left\{ DX_t=0 \right\}. \label{Uniform implication}
\end{align}
Now we resort to lemma \ref{Multiple integral density}. Since $\norm{f_t}_{\mathcal{H}^{\otimes n}}>0$, it is always possible to find $h\in\mathcal{H}$ such that $\norm{\langle f_t, h \rangle_{\mathcal{H}}}_{\mathcal{H}^{\otimes (n-1)} }>0$. As a result,
\begin{align*}
\left\{ DX_t=0 \right\}\Rightarrow \left\{ \langle DX_t, h \rangle_{\mathcal{H}}=0 \right\} =\left\{ nI_{n-1}(\langle f_t, h\rangle_{\mathcal{H}})=0 \right\}.
\end{align*}
By lemma \ref{Multiple integral density}, $I_{n-1}(\langle f_t, h\rangle)$ has a density. So, we have
\begin{equation*}
\mathbb{P}\{DX_t=0 \}\leq\mathbb{P}\{I_{n-1}(\langle f_t, h\rangle)=0 \}=0,
\end{equation*}
which immediately gives
\begin{equation}
\label{Bound for the collection}
\mathbb{P}\left\{\bigcup_{t\in \mathbb{Q}\cap[0,1]}\left\{DX_t=0 \right\} \right\}=0.
\end{equation}
Combining \eqref{Crucial implication}, \eqref{Uniform implication} and \eqref{Bound for the collection} finishes the proof.
\end{proof}
\begin{remark}
Let $\nu\in(0,1)$ be a positive constant. If $\{DX_t\}_{0\leq t\leq 1}$ has the so-called $\nu$-H\"older roughness property (see definition 6.7 of \cite{MR3289027}), then for any $s\in[0,1]$ and $\epsilon\in(0, 1/2]$, we can find $L_\nu(DX)(\omega)>0$ and $t\in[0,1]$ such that $0<\abs{t-s}<\epsilon$ and
\[ \norm{DX_t-DX_s}_{\mathcal{H}}\geq L_\nu(DX) \epsilon^\nu. \]
Then \eqref{Young's maximal inequality} gives
\begin{equation*}
\abs{g_a}\cdot L_\nu(DX) \epsilon^\nu\leq 2 \sup_{t\in[0,1]}\norm{\int_{0}^{t}g_rdDX_r}_{\mathcal{H}}+ C \norm{g_r(\omega)}_\tau \norm{DX_r(\omega)}_{\rho} \epsilon^{\tau+\rho}.
\end{equation*}
We can recast it as
\begin{equation*}
\abs{g_a}\leq \frac{C'}{L_\nu(DX)}\left(\sup_{t\in[0,1]}\norm{\int_{0}^{t}g_rdDX_r}_{\mathcal{H}}\epsilon^{-\nu}+ \norm{g_r(\omega)}_\tau \norm{DX_r(\omega)}_{\rho} \epsilon^{\tau+\rho-\nu} \right).
\end{equation*}
Optimizing the right-hand side with respect to $\epsilon$ and taking the supremum over $a$ on the left-hand side, one verifies
\begin{equation}
\label{Norris}
\sup_{r\in[0,1]} \abs{g_r} \leq \frac{C''}{L_\nu(DX)} \left\{\sup_{t\in[0,1]}\norm{\int_{0}^{t}g_rdDX_r}_{\mathcal{H}}^{1-\frac{\nu}{\tau+\rho}}\norm{g_r(\omega)}^{\frac{\nu}{\tau+\rho}}_\tau \norm{DX_r(\omega)}^{\frac{\nu}{\tau+\rho}}_{\rho} \right\}.
\end{equation}
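Concretely, writing $M=\sup_{t\in[0,1]}\norm{\int_{0}^{t}g_rdDX_r}_{\mathcal{H}}$ and $G=\norm{g_r(\omega)}_\tau \norm{DX_r(\omega)}_{\rho}$, the two terms in the upper bound for $\abs{g_a}$ above are balanced (as a sketch, ignoring constants and the constraint $\epsilon\leq 1/2$) by the choice
\begin{equation*}
\epsilon=\left(\frac{M}{G}\right)^{\frac{1}{\tau+\rho}}, \qquad\text{for which}\qquad M\epsilon^{-\nu}=G\epsilon^{\tau+\rho-\nu}=M^{1-\frac{\nu}{\tau+\rho}}G^{\frac{\nu}{\tau+\rho}},
\end{equation*}
which is the quantity appearing in \eqref{Norris}.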
Inequality \eqref{Norris} can be regarded as a Norris' type lemma for $\{DX_t\}_{0\leq t\leq 1}$, which (taking proposition \ref{Uniform bound for integrals} into account) is another quantitative version of \eqref{Goal 1''}. The H\"older roughness of $\{X_t\}_{0\leq t\leq 1}$ will be an interesting topic to investigate.
\end{remark}
A straightforward consequence of theorem \ref{Main 1} is the following zero-one law.
\begin{corollary}
Let $g_t$ be any $\tau$-H\"older continuous function with $\tau+\rho>1$. Then under assumptions \ref{Regularity} and \ref{Block form}, we have
\begin{equation*}
\mathbb{P}\left\{ \int_{0}^{1}g_tdDX_t=0 \right\}= 1\ \text{or}\ 0.
\end{equation*}
\end{corollary}
\begin{proof}
We can regard the deterministic function $g$ as a (constant in $\omega$) random process; then by theorem \ref{Main 1},
\begin{align}
\mathbb{P}\left\{ \int_{0}^{1}g_tdDX_t=0 \right\}&=\mathbb{P}\left\{ \int_{0}^{1}g_tdDX_t=0,\ g_t\neq 0 \right\}+\mathbb{P}\left\{ \int_{0}^{1}g_tdDX_t=0,\ g_t=0 \right\} \nonumber\\
&=\mathbb{P}\left\{ \int_{0}^{1}g_tdDX_t=0,\ g_t=0 \right\}\nonumber\\
&=\mathbb{P}\left\{ \ g_t=0 \right\}. \label{Zero-one}
\end{align}
We conclude by noting that $g_t$ is deterministic, so the last probability is either $0$ or $1$.
\end{proof}
We can now prove the existence of a density for the variables defined in \eqref{Goal 2} under assumptions \ref{Regularity} and \ref{Block form}.
\begin{proposition}
Let $\{X_t\}_{0\leq t\leq 1}=I_n(f_t)$ be a continuous process in the $n$-th homogeneous Wiener chaos. Let $g_t$ be any $\tau$-H\"older continuous function with $\tau+\rho>1$. If $g_t$ is not identically zero, then under assumptions \ref{Regularity} and \ref{Block form}, the random variable
\begin{equation*}
Y=\int_{0}^{1}g_tdX_t
\end{equation*}
has a density with respect to the Lebesgue measure on $\mathbb{R}$.
\end{proposition}
\begin{proof}
Note that by \eqref{Zero-one}
\begin{equation*}
\mathbb{P}\left\{ DY=0 \right\}=\mathbb{P}\left\{ \int_{0}^{1}g_tdDX_t=0 \right\}=\mathbb{P}\left\{ g_t=0 \right\}=0.
\end{equation*}
Hence $\norm{DY}_{\mathcal{H}}>0$ almost surely, and the existence of a density follows from the Bouleau--Hirsch criterion.
\end{proof}
\section{Application to SDE}
In this section, we use $\{X_t\}_{0\leq t\leq 1}$ to denote a $d$-dimensional chaos process for a fixed positive integer $d$. More explicitly, for $1\leq i\leq d$, $\{X^i_t\}_{0\leq t\leq 1}$ is an independent copy of the process $\{X_t\}_{0\leq t\leq 1}$ defined in the previous sections. We consider the following SDE
\begin{equation*}
dY_t=\sum_{i=1}^{d}V_i(Y_t)dX^i_t+V_0(Y_t)dt,\ Y_0=y_0\in\mathbb{R}^d.
\end{equation*}
Since $\{X_t\}_{0\leq t\leq 1}$ is $\rho$-H\"older continuous for some $\rho>1/2$, the above SDE is understood in Young's sense. By Duhamel's principle (see \cite{friz2010multidimensional} chapter 4), we have
\begin{equation*}
\langle DY_t,h\rangle=\sum_{i=1}^{d}\int_{0}^{t}J_{t\leftarrow s}V_i(Y_s)d\langle DX^i_s,h\rangle,
\end{equation*}
where $J_{t\leftarrow s}$ is the Jacobian process defined in \eqref{Jacobian}. Note that in the multi-dimensional case, we have
\begin{equation*}
DX^i_t=(D^1X^i_t, D^2X^i_t,\cdots, D^dX^i_t)^T
\end{equation*}
where $D^jX^i_t$ is the Malliavin derivative of $X^i_t$ with respect to the underlying Brownian motion of $X^j_t$. Thanks to the fact that different components of $\{X_t\}_{0\leq t\leq 1}$ are independent, $D^jX^i_t=\delta_{ij}\cdot D^iX^i_t$, where $\delta_{ij}$ is the Kronecker delta function.
\begin{proof}[Proof of theorem \ref{Main 2}]
We have by definition
\begin{equation*}
D^jY_t=\sum_{i=1}^{d}\int_{0}^{t}J_{t\leftarrow s}V_i(Y_s)d D^jX^i_s=\int_{0}^{t}J_{t\leftarrow s}V_j(Y_s)d D^jX^j_s.
\end{equation*}
The Malliavin matrix of $Y_t$ is given by
\begin{equation*}
C^{ij}_t=\langle DY^i_t, DY^j_t \rangle_{\mathcal{H}^d}.
\end{equation*}
We can write
\begin{align*}
\mathbb{P}\{\det(C_t)=0 \}&=\mathbb{P}\left\{v^T\cdot C_t \cdot v=0,\ \text{for some vector}\ v\neq 0 \right\}.
\end{align*}
Since
\begin{equation*}
v^T\cdot C_t \cdot v=\norm{\sum_{i=1}^{d}v_i DY^i_t}^2_{\mathcal{H}^d},
\end{equation*}
we can infer that
\begin{align}
&\mathbb{P}\left\{v^T\cdot C_t \cdot v=0,\ \text{for some vector}\ v\neq 0 \right\}\nonumber\\
&=\mathbb{P}\left\{v^TD^jY_t=0,\ 1\leq j\leq d,\ \text{for some vector}\ v\neq 0 \right\}\nonumber\\
&=\mathbb{P}\left\{\int_{0}^{t}v^TJ_{t\leftarrow s}V_j(Y_s)d D^jX^j_s=0,\ 1\leq j\leq d,\ \text{for some vector}\ v\neq 0 \right\}.\label{Last step}
\end{align}
However, by theorem \ref{Main 1}, we have
\begin{equation*}
\left\{\int_{0}^{t}v^TJ_{t\leftarrow s}V_j(Y_s)d D^jX^j_s=0,\ 1\leq j\leq d. \right\}\Rightarrow \left\{ v^TJ_{t\leftarrow s}V_j(Y_s)=0,\ 1\leq j\leq d \right\}\Rightarrow\left\{v^TJ_{t\leftarrow s}V(Y_s)=0 \right\},
\end{equation*}
where $V\in \mathbb{R}^{d\times d}$ is the matrix whose columns are given by $\{V_i\}_{1\leq i\leq d}$. Since $V(Y_s)$ is elliptic by our assumption and $J_{t\leftarrow s}$ is invertible almost surely with inverse given by \eqref{Jacobian inverse}, we see that
\begin{equation*}
\left\{v^TJ_{t\leftarrow s}V(Y_s)=0 \right\}\Rightarrow\{v=0 \}.
\end{equation*}
Plugging this back to \eqref{Last step} gives
\begin{equation*}
\mathbb{P}\left\{v^T\cdot C_t \cdot v=0,\ \text{for some vector}\ v\neq 0 \right\}=\mathbb{P}\{v=0,\ \text{for some vector}\ v\neq 0 \}=0.
\end{equation*}
Our proof is complete.
\end{proof}
\begin{remark}
If $\{X_t\}_{0\leq t\leq 1}$ is H\"older rough, we can apply the Norris's lemma \eqref{Norris} and prove the existence of density with a weaker parabolic H\"ormander's condition, instead of ellipticity, on the vector fields $\{V_i\}_{0\leq i\leq d}$.
\end{remark}
\end{document} |
\begin{document}
\baselineskip=17pt
\title{Estimates for Character Sums with Various Convolutions}
\author[Brandon Hanson]{Brandon Hanson} \address{Pennsylvania State University\\ University Park, PA} \email{[email protected]}
\date{} \maketitle
\begin{abstract}
We provide estimates for sums of the form \[\left|\sum_{a\in A}\sum_{b\in B}\sum_{c\in C}\chi(a+b+c)\right|\] and
\[\left|\sum_{a\in A}\sum_{b\in B}\sum_{c\in C}\sum_{d\in D}\chi(a+b+cd)\right|\] when $A,B,C,D\subset \FF_p$, the field with $p$ elements and $\chi$ is a non-trivial multiplicative character modulo $p$. \end{abstract} \section{Introduction} \sloppy In analytic number theory, one is often concerned with estimating a bilinear sum of the form \begin{equation}\label{bilinear}S=\sum_{\substack{1\leq m\leq M\\
1\leq n\leq N}}a_mb_nc_{m,n}\end{equation} where $a_m,\ b_n$ and $c_{m,n}$ are complex numbers. The standard way to handle this sum is to apply the Cauchy-Schwarz inequality so that \begin{align*}|S|^2&\leq \lr{\sum_{1\leq m\leq M}|a_m|\left|\sum_{1\leq n\leq N}b_nc_{m,n}\right|}^2\\&\leq\lr{\sum_{1\leq m\leq M}|a_m|^2}\lr{\sum_{1\leq n_1,n_2\leq N}b_{n_1}\bar{b_{n_2}}\sum_{1\leq m\leq M}c_{m,n_1}\bar{c_{m,n_2}}}.\end{align*} \noindent One usually has that \[\sum_{1\leq m\leq M}c_{m,n_1}\bar{c_{m,n_2}}\] is small when $n_1\neq n_2$, so that the second factor is essentially dominated by the \emph{diagonal terms} where $n_1=n_2$.
For instance, suppose $p$ is a prime number and denote by $\FF_p$ the field with $p$ elements. We write $e_p(u)=e^{2\pi i u/p}$ and we denote by $\chi$ a multiplicative (or Dirichlet) character modulo $p$. Two well-known sums of the form (\ref{bilinear}) are \begin{equation}\label{Paley}S_\chi(A,B)=\sum_{a\in A}\sum_{b\in B}\chi(a+b)\end{equation} and \begin{equation}\label{exponential}T_x(A,B)=\sum_{a\in A}\sum_{b\in B}e_p(xab)\end{equation} where $A$ and $B$ are subsets of $\FF_p$.
By the triangle inequality, each of these sums are at most $|A||B|$, but we expect an upper bound of the form $|A||B|p^{-\eps}$ for some positive $\eps$. Indeed, using the Cauchy-Schwarz inequality as above, and orthogonality of characters, one can prove that the sums (\ref{Paley}) and
(\ref{exponential}) are at most $(p|A||B|)^{1/2}$. Such an estimate is better than the trivial estimate when $|A||B|>p$.
For the second sum, (\ref{exponential}), the bound $(p|A||B|)^{1/2}$ is quite sharp. Indeed, if $A=B=\{n:1\leq n\leq \delta p^{1/2}\}$ for a small number
$\delta>0$, then products $ab$ with $a,b\in A$ are at most $\delta^2p$ (here we are identifying residues mod $p$ with integers between $0$ and $p-1$). It follows that $|e_p(ab)-1|\ll \delta^2$, so the summands in (\ref{exponential}) are essentially constant and there is little cancellation. On the other hand, it is conjectured that the first sum, (\ref{Paley}), should exhibit cancellation even for small sets $A$ and $B$. From now on, we will call
(\ref{Paley}) the \emph{Paley sum}. The problem of obtaining good estimates for it beyond the range $|A||B|>p$ appears to be quite hard.
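As a purely illustrative aside, and not part of any argument in this paper, the following short Python sketch computes the Paley sum $S_\chi(A,B)$ for the Legendre symbol modulo a small prime and compares it with the bound $(p|A||B|)^{1/2}$; the prime and the test sets are arbitrary choices.
\begin{verbatim}
# Illustrative sketch: Paley sum for the Legendre symbol modulo p,
# compared with the square-root bound (p|A||B|)^(1/2).
p = 101                          # a small prime (arbitrary choice)

def legendre(a, p):
    # Legendre symbol via Euler's criterion, with chi(0) = 0
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

A = range(3, 40)                 # arbitrary test sets in F_p
B = range(50, 90)
S = sum(legendre(a + b, p) for a in A for b in B)
bound = (p * len(A) * len(B)) ** 0.5
print(abs(S), bound)             # |S_chi(A,B)| versus (p|A||B|)^(1/2)
\end{verbatim}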
In this article we investigate character sums which are related to the Paley sum. First, we motivate its study with the following question of S\'ark\"ozy:
\begin{Problem}[S\'ark\"ozy]\label{Sarkozy} Are the quadratic residues modulo $p$ a sumset? That is, do there exist sets $A, B\subset \FF_p$ each of size at least two, and with the set $A+B$ equal to the set of quadratic residues? \end{Problem}
One expects that the answer to the above question is no. Heuristically, if $B$ contains two elements $b$ and $b'$, we would require that $A+b$ and $A+b'$ are both subsets of the quadratic residues. But we expect that $a+b$ is a quadratic residue half of the time, and that $a+b'$ is also a residue half of the time, \emph{independently} of whether or not $a+b$ is a quadratic residue. So if $A+B$ consisted entirely of quadratic residues, then many unlikely events must have occurred. For $A+B$ to consist of all the quadratic residues would be shocking. The difficulty in this problem is establishing the aforementioned independence.
In \cite{Sh}, Shkredov showed that the quadratic residues are never of the form $A+A$. In more recent work \cite{Sh2}, he also ruled out the case that $Q=A+B$ when $A$ is a multiplicative subgroup. By way of character sum estimates, Shparlinski, building on work of S\'ark\"ozy \cite{Sar}, proved the following:
\begin{UnnumberedTheorem}[S\'ark\"ozy, Shparlinski] If $A, B\subset \FF_p$, each of size at least two with the set $A+B$ equal to
the set of quadratic residues, then $|A|$ and $|B|$ are within a constant factor of $\sqrt p$. \end{UnnumberedTheorem} \noindent As a consequence of this theorem and a combinatorial theorem of Ruzsa, one can deduce that the quadratic residues are not of the form $A+B+C$ with each set of size at least two.
S\'ark\"ozy's question is settled by improved bounds for the Paley sum. Since each sum $a+b$ with $a\in A$ and $b\in B$ is a quadratic residue we have \[|A||B|=\sum_{a\in A}\sum_{b\in B}\leg{a+b}{p}\leq (p|A||B|)^{1/2}.\] So
$|A||B|\leq p$ and this estimate just fails to resolve S\'ark\"ozy's problem. So even improving upon the bound
$S_{\leg{\cdot}{p}}(A,B)\leq (p|A||B|)^{1/2}$ by a constant factor would be worthwhile.
Breaking past this barrier, often called the \emph{square-root barrier}, is hard. In practice, the usual way we estimate character sums is via the method of completion. One way of doing so was outlined at the beginning of this article. With this method, we replace a short sum over a subset $A\subset \FF_p$ with a complete sum over the whole of $\FF_p$, which lets us use orthogonality. However, some terms, the diagonal terms, exhibit no cancellation at all and must be accounted for. By completing the sum we create more diagonal terms, and the resulting loss becomes worse than trivial when the set $A$ is too small. One can dampen the loss from completion by using a higher moment (using H\"older's inequality as opposed to Cauchy-Schwarz). This was the idea used by Burgess in his work on character sums in \cite{Bu1} and \cite{Bu2}, and it is still one of the only manoeuvres we have for pushing past the square-root barrier. Still, with higher moments the off-diagonal terms become more complicated and we must settle for worse orthogonality estimates, which can be limiting.
In the case of the Paley sum, the square-root barrier is more than just a consequence of our methods. Suppose $q=p^2$ so that $\FF_p$ is a subfield of
$\FF_q$ and each element in $\FF_p$ is the square of an element in $\FF_q$. Since $\FF_p$ is closed under addition, any sum $a+b$ with $a,b\in\FF_p$ is also a square in $\FF_q$. So, if we take $A=B=\FF_p$ and $\chi$ the quadratic character on $\FF_q$, then there is no cancellation in $S_\chi(A,B)$. This shows that, for the Paley sum over $\FF_q$, the bound $|S_\chi(A,B)|\leq
(q|A||B|)^{1/2}$ is essentially best possible. In order to improve the bound for the Paley sum past the square-root barrier, we need to use an argument which is sensitive to the fact that $\FF_p$ has no subfields. Such arguments are hard to come by and this is perhaps the greatest source of difficulty in the problem.
There have been improvements to estimates for the Paley sum when the sets $A$ and $B$ have a particularly nice structure. In \cite{FI}, Friedlander and Iwaniec improved the range in which one can obtain non-trivial estimates when the set $A$ is an interval. This constraint was weakened by Mei-Chu Chang in \cite{C1}
to the case where $|A+A|$ is very small:
\begin{UnnumberedTheorem}[Chang]\label{Chang}
Suppose $A,B\subset \FF_p$ with $|A|,|B|\geq p^\alpha$ for some
$\alpha>\frac{4}{9}$ and such that $|A+A|\leq K|A|$. Then there is a constant
$\tau=\tau(K,\alpha)$ such that for $p$ sufficiently large and any non-trivial character $\chi$, we have \[|S_\chi(A,B)|\leq
|A||B|p^{-\tau}.\] \end{UnnumberedTheorem}
\noindent We remark that in light of Freiman's Theorem, which we will recall shortly, the condition that $|A+A|$ has to be so small is still very restrictive.
Often problems involving a sum of two variables, called \emph{binary additive problems}, are hard. Introducing a third variable gives rise to a \emph{ternary additive problem}, which may be tractable. In this paper we establish non-trivial bounds beyond the square-root barrier for character sums with more than two variables. These results are different from those mentioned above since they hold for all sets which are sufficiently large; there are no further assumptions made about their structure. Our first theorem is the following.
\begin{Theorem}\label{TripleSum}
Given subsets $A,B,C\subset \FF_p$, each of size $|A|,|B|,|C|\geq \delta\sqrt p$ for some $\delta>0$, and a non-trivial character $\chi$, we have
\[\left|\sum_{a\in A}\sum_{b\in B}\sum_{c\in C}\chi(a+b+c)\right|=o_\delta(|A||B||C|).\] \end{Theorem}
There are analogous results for exponential sums. We mentioned above that the sum $T_x(A,B)$ in (\ref{exponential}) also obeys the bound $|T_x(A,B)|\leq (p|A||B|)^{1/2}$. While this bound may be sharp, Bourgain \cite{Bou} proved that with more variables one can extend the range in which the estimate is non-trivial.
\begin{UnnumberedTheorem}[Bourgain] There is a constant $C$ such that the following holds. Suppose $\delta>0$ and
$k\geq C\delta^{-1}$, then for $A_1,\ldots,A_k\subset \FF_p$ with $|A_i|\geq p^\delta$ and $x\in\FF_p^\times$, we have \[\left|\sum_{a_1\in A_1}\cdots\sum_{a_k\in A_k}e_p(xa_1\cdots a_k)\right|<|A_1|\cdots|A_k|p^{-\tau}\] where $\tau>C^{-k}$. \end{UnnumberedTheorem}
We cannot prove results of this strength. The reason is that one can play the additive and multiplicative structures of the frequencies appearing in such exponential sums against each other and then leverage the Sum-Product Phenomenon to deduce some cancellation. The structure of multiplicative characters is not so amenable, and we rely on Burgess' method instead.
In Theorem \ref{TripleSum}, we would prefer a bound of the form
$|S_\chi(A,B,C)|\leq |A||B||C|p^{-\tau}$ for some positive $\tau$. However, the proof of Theorem \ref{TripleSum} relies on Chang's Theorem, which only allows one to estimate $S_\chi(A,B)$ past the square-root barrier under the hypothesis that
$|A+A|\leq K|A|$ for some constant $K$. This hypothesis plays a crucial part in the proof of her theorem because it allows for the use of Freiman's Classification Theorem:
\begin{UnnumberedTheorem}[Freiman]
Suppose $A$ is a finite set of integers such that $|A+A|\leq K|A|$. Then there is a generalized arithmetic progression $P$ containing
$A$ and such that $P$ is of dimension at most $K$ and $\log(|P|/|A|)\ll K^{c}$ for some absolute constant $c$. \end{UnnumberedTheorem}
Using this classification theorem, one can make a change of variables $a\mapsto a+bc$, which is the first step in a Burgess type argument. Freiman's Theorem is unable to accommodate the situation $|A+A|\leq |A|^{1+\delta}$, even for small values of $\delta>0$, which is what is needed in order to get a power saving in our bound for ternary character sums. To circumvent the use of Freiman's Theorem, we can replace triple sums with sums of four variables. By incorporating both additive and multiplicative convolutions we arrive at sums of the form \[H_\chi(A,B,C,D)=\sum_{a\in A}\sum_{b\in B}\sum_{c\in C}\sum_{d\in D}\chi(a+b+cd).\] In this way we have essentially \emph{forced} a scenario where we can make use of the Burgess argument. By introducing both arithmetic operations, we are able to weigh the additive structure in one of the variables against the multiplicative structure of that variable in order to use a Sum-Product estimate. Our second result is:
\begin{Theorem}\label{MixedSum}
Suppose $A,B,C,D\subset \FF_p$ are sets with $|A|,|B|,|C|,|D|>p^\delta$,
$|C|<\sqrt p$ and $|D|^4|A|^{56}|B|^{28}|C|^{33}\geq p^{60+\eps}$ for some $\delta, \eps>0$. There is a constant $\tau>0$ depending only on $\delta$ and $\epsilon$ such that
\[|H_\chi(A,B,C,D)|\ll |A||B||C||D|p^{-\tau}.\] In the case that $|A|,|B|,|D|>p^\delta$,
$|C|\geq \sqrt p$ and $|D|^8|A|^{112}|B|^{56}\geq p^{87+\eps}$ then there is a constant $\tau>0$ depending only on $\delta$ and $\epsilon$ such that \[|H_\chi(A,B,C,D)|\ll |A||B||C||D|p^{-\tau}.\] \end{Theorem}
Theorem \ref{MixedSum} is simplified greatly when all sets in question are assumed to have roughly the same size:
\begin{Corollary}\label{MixedSum2}
Suppose $A,B,C,D\subset \FF_p$ are sets with $|A|,|B|,|C|,|D|>p^\delta$ with $\delta>\frac{1}{2}-\frac{1}{176}$. Then $|H_\chi(A,B,C,D)|\leq |A||B||C||D|p^{-\eps}$ for some $\eps>0$ depending only on $\delta$. \end{Corollary}
\section{Background} Here we recall facts concerning multiplicative characters over finite fields and additive combinatorics. For details concerning character sums, we refer to Chapters 11 and 12 of \cite{IK}. The reference \cite{TV} is extremely helpful for all things additive combinatorial.
Multiplicative characters are the characters $\chi$ of the group $\FF_q^\times$ which are extended to $\FF_q$ by setting $\chi(0)=0$. In order to carry out the proof of a Burgess-type estimate, we shall need Weil's bound for character sums with polynomial arguments.
\begin{Theorem}[Weil] Let $f\in\FF_p[x]$ be a polynomial with $r$ distinct roots over $\bar{\FF_p}$. Then if $\chi$ has order $l$ and provided $f$ is not an $l$'th power over $\bar{\FF_p}[x]$ we have
\[\left|\sum_{x\in\FF_p}\chi(f(x))\right|\leq r\sqrt p.\] \end{Theorem}
\begin{Lemma}\label{MomentBound}
Let $k$ be a positive integer and $\chi$ a non-trivial multiplicative character. Then for any subset $A\subset\FF_p$ we have \[\sum_{x\in\FF_p}\left|\sum_{a\in A}\chi(a+x)\right|^{2k}\leq |A|^{2k}2k\sqrt p+(2k|A|)^kp.\] \end{Lemma} \begin{proof} Expanding the $2k$'th power and using that $\bar\chi(y)=\chi(y^{p-2})$, we have \begin{align*} &\sum_{a_1,\ldots,a_{2k}\in A}\sum_x\chi((x-a_1)\cdots(x-a_k)(x-a_{k+1})^{p-2}\cdots(x-a_{2k})^{p-2})\\ &=\sum_{\aa\in A^{2k}}\sum_x\chi(f_{\aa}(x)). \end{align*} Here $f_{\aa}$ is the polynomial \[f_{\aa}(X)=(X-a_1)\cdots(X-a_k)(X-a_{k+1})^{p-2}\cdots(X-a_{2k})^{p-2}.\] By Weil's theorem, $\left|\sum_x\chi(f_{\aa}(x))\right|\leq 2k\sqrt p$ unless $f_{\aa}$ is an $l$'th power, where $l$ is the order of $\chi$. If any of the roots $a_i$ of $f_{\aa}$ is distinct from all other $a_j$ then it occurs in the above expression with multiplicity 1 or $p-2$. Both $1$ and $p-2$ are prime to $l$ since $l$ divides $p-1$. Hence $f_{\aa}$ is an $l$'th power only provided all of its roots can be grouped into pairs. So, for all but at most
$\frac{(2k)!}{2^k k!}|A|^k\leq (2k|A|)^k$ vectors $\aa\in A^{2k}$, we have the estimate $2k\sqrt p$ for the inner sum. For the remaining $\aa$ we bound the inner sum trivially by $p$. Hence the upper bound \[\sum_{x\in\FF_p}\left|\sum_{a\in A}\chi(a+x)\right|^{2k}\leq |A|^{2k}2k\sqrt p+(2k|A|)^kp.\] \end{proof}
We now turn to results from additive combinatorics. Let $A$ and $B$ be finite subsets of an abelian group $G$. The \emph{additive energy} between $A$ and $B$
is the quantity \[E_+(A,B)=\left|\{(a,a',b,b')\in A\times A\times B\times B:a+b=a'+b'\}\right|.\] One of the fundamental results on additive energy is the Balog-Szemer\'edi-Gowers Theorem, which we use in the following form.
\begin{Theorem}[Balog-Szemer\'edi-Gowers]\label{BSG} Suppose $A$ is a finite subset of an abelian group $G$ and \[E_+(A,A)\geq
\frac{|A|^3}{K}.\] Then there is a subset
$A'\subset A$ of size $|A'|\gg\frac{|A|}{K(\log(e|A|))^2}$ with \[|A'-A'|\ll K^4\frac{|A'|^3(\log (|A|))^8}{|A|^2}.\] The implied constants are absolute. \end{Theorem}
This version of the Balog-Szemer\'edi-Gowers Theorem has very good explicit bounds, and is due to Bourgain and Garaev. The proof is essentially a combination of Lemmas 2.2 and 2.4 from \cite{BG}. It was communicated to us by O. Roche-Newton. Since we prefer to work with sumsets rather than difference sets, we have the following lemma, which is a well-known application of Ruzsa's Triangle Inequality.
\begin{Lemma}\label{sumset}
Suppose $A$ is a finite subset of an abelian group $G$. Then \[|A-A|\leq
\lr{\frac{|A+A|}{|A|}}^2|A|.\] \end{Lemma}
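For the reader's convenience, here is a one-line sketch: applying Ruzsa's triangle inequality $|A-C||B|\leq |A-B||B-C|$ with $B=-A$ and $C=A$ gives
\[|A-A||A|=|A-A||-A|\leq |A-(-A)||(-A)-A|=|A+A|^2,\]
which is the stated bound.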
We will prefer to work with the energy between a set and itself rather than between distinct sets, so we need the following fact, which is a simple consequence of the Cauchy-Schwarz inequality. \begin{Lemma}\label{triangle} For sets $A$ and $B$ we have \[E_+(A,B)^2\leq E_+(A,A)E_+(B,B)\] \end{Lemma}
We now record a general version of Burgess' argument, which is an application of H\"older's inequality and Weil's bound. This proof is distilled from the proof of Burgess's estimate in Chapter 12 of \cite{IK}.
\begin{Lemma} \label{BasicBurgess}
Let $A,B,C\subset \FF_p$ and suppose $\chi$ is a non-trivial multiplicative character. Define \[r(x)=|\{(a,b)\in A\times B:ab=x\}|.\] Then for any positive integer $k$, we have the estimate \begin{align*}
\sum_{x\in\FF_p}r(x)\left|\sum_{c\in C}\chi(x+c)\right|&\leq(|A||B|)^{1-1/k}E_\times(A,A)^{1/4k}E_\times(B,B)^{1/4k}\cdot\\
&\cdot\lr{|C|^{2k}2k\sqrt p+(2k|C|)^kp}^{1/2k}. \end{align*} \end{Lemma} \begin{proof} Call the left hand side above $S$. Applying H\"older's inequality \begin{align*}
|S|&\leq\lr{\sum_{x\in\FF_p}r(x)}^{1-1/k}\lr{\sum_{x\in\FF_p}r(x)^2}^{1/2k}\lr{\sum_{x\in\FF_p}\left|\sum_{c\in C}\chi(x+c)\right|^{2k}}^{1/2k}\\ &=T_1^{1-1/k}T_2^{1/2k}T_3^{1/2k}. \end{align*}
Now $T_1$ is precisely $|A||B|$ and $T_2$ is the multiplicative energy $E_\times(A,B)$. By the multiplicative analogue of Lemma \ref{triangle}, we have \[E_\times(A,B)\leq\sqrt{E_\times(A,A)E_\times(B,B)}.\] The estimate for $T_3$ is immediate from Lemma \ref{MomentBound}. \end{proof}
The last ingredient in our proof is the most crucial. Sum-Product estimates are sensitive to prime fields and allow us to break the square-root barrier. We record the following estimate of Rudnev.
\begin{Theorem}[Rudnev] \label{EnergyEstimate}
Let $A\subset \FF_p$ satisfy $|A|<\sqrt p$. Then \[E_\times(A,A)\ll
|A||A+A|^\frac{7}{4}\log |A|.\] \end{Theorem}
This is not the state of the art for Sum-Product theory in $\FF_p$, which at the time of this writing is found in \cite{RNRS}, but the above estimate is more readily applied to our situation. Moreover, the strength of the Sum-Product estimates is not the bottleneck for proving non-trivial character sum estimates in a wider range (avoiding completion is).
\section{Ternary sums}\label{Triple}
We begin this section by giving a simple estimate which is non-trivial past the square-root barrier provided we can control certain additive energy.
\begin{Lemma}\label{energy} Given subsets $A,B,C\subset \FF_p$ and a non-trivial character $\chi$ we have
\[|S_\chi(A,B,C)|\leq\sqrt{p|A|E_+(B,C)}.\] \end{Lemma} \begin{proof} Let $r(x)$ be the number of ways in which $x\in\FF_p$ is a sum $x=b+c$ with $b\in B$ and $c\in C$. Then
\begin{align*}|S_\chi(A,B,C)|&\leq\sum_{x\in\FF_p}r(x)\left|\sum_{a\in A}\chi(a+x)\right|\\ &\leq
\lr{\sum_{x\in\FF_p}r(x)^2}^{1/2}\lr{\sum_{x\in\FF_p}\left|\sum_{a\in A}\chi(a+x)\right|^2}^{1/2}.\end{align*} It is straightforward to check that the first factor above is $\lr{E_+(B,C)}^{1/2}$ and as before, the second factor is
$\lr{p|A|}^{1/2}$. \end{proof}
\begin{Lemma}\label{arg}
Let $z_1,\ldots,z_n$ be complex numbers with $|\arg z_1-\arg z_j|\leq \delta$. Then \[|z_1+\ldots +z_n|\geq (1-\delta)(|z_1|+\ldots+|z_n|).\] \end{Lemma} \begin{proof} We have
\begin{align*}|z_1|+\ldots+|z_n|&=\theta_1z_1+\ldots+\theta_nz_n\\&=\theta_1(z_1+\ldots+z_n)+(\theta_2-\theta_1)z_2+\ldots+(\theta_n-\theta_1)z_n\end{align*}
for some complex numbers $\theta_k$ of modulus 1 with $|\theta_1-\theta_j|\leq
\delta$. Thus by the triangle inequality \[|z_1|+\ldots+|z_n|\leq
|z_1+\ldots+z_n|+\delta(|z_2|+\ldots+|z_n|)\] and the result follows. \end{proof}
We are now able to prove Theorem \ref{TripleSum}. Ignoring technical details for the moment, either we are in a situation where Lemma \ref{energy} improves upon the trivial estimate, or else we can appeal to the Balog-Szemer\'edi-Gowers Theorem and deduce that $A$ has a subset with small sumset. In the latter case we can make use of Chang's Theorem and also arrive at a non-trivial estimate, even saving a power of $p$. Unfortunately, this second scenario does not come into play until one of the sets has a lot of additive energy. This means that the saving from Lemma \ref{energy} will become quite poor before we are rescued by Chang's estimate. We proceed with the proof proper.
\begin{proof}[Proof of Theorem \ref{TripleSum}] Suppose, by way of contradiction, that the theorem does not hold. This means that there is some positive constant $\eps>0$ such that for $p$ arbitrarily large, we have sets
$A,B,C\subset\FF_p$ with $|A|,|B|,|C|\geq \delta\sqrt{p}$, and a non-trivial character $\chi$ of $\FF_p^\times$ satisfying
\[|S_\chi(A,B,C)|\geq\eps|A||B||C|.\] It follows that
\[\eps|A||B||C|\leq\sum_{a\in A}|S_\chi(B,a+C)|.\] If we let \[A'=\{a\in A:|S_\chi(B,a+C)|\geq \frac{\eps}{2}|B||C|\}\] then
\[\frac{\eps}{2}|A||B||C|\leq \sum_{a\in A'}|S_\chi(B,a+C)|\] and $|A'|\geq
|A|\eps/2$. Now by the same argument as in the proof of Lemma \ref{energy}, we must have \[\frac{\eps^2}{4}|A|^2|B|^2|C|^2\leq p|C|E_+(A',B)\leq p|C|E_+(A',A')^{1/2}E_+(B,B)^{1/2},\] the last inequality being a consequence of Lemma \ref{triangle}. So, using that $|A|,|B|,|C|\geq \delta\sqrt p$ and
$E_+(B,B)\leq |B|^3$, we have \[E_+(A',A')\geq \frac{\eps^4\delta^4}{16}|A'|^3\] and so by Theorem \ref{BSG} and Lemma \ref{sumset} we can find a subset $A''\subset A'$, with size at least $(\eps\delta)^{t}\sqrt p$ and such that
$|A''+A''|\leq (\eps\delta)^{-t}|A''|$ for some $t=O(1)$. Now since $A''\subset A'$, we have \[\frac{\eps}{2}|A''||B||C|\leq \sum_{a\in A''}|S_\chi(B,a+C)|.\] By the pigeon-hole principle, after passing to a subset $A'''$ of $A''$ of size at least
$|A''|/16$, we can assume that the complex numbers $S_\chi(B,a+C)$ all have argument within
$\frac{1}{2}$ of each other. Thus, by Lemma \ref{arg}, we have
\[\frac{\eps}{4}|A'''||B||C|\leq \left|S_\chi(A''',B,C)\right|,\] we have
$|A'''|\geq (\eps\delta)^{t}\sqrt p/16$, and we have \[|A'''+A'''|\leq|A''+A''|\leq (\eps\delta)^{-t}|A''|\leq 16(\eps\delta)^{-t}|A'''|.\] However, by the triangle inequality, this implies that \[\frac{\eps}{4}|A'''||B|\leq \max_{c\in C}\left|S_\chi(A''',B+c)\right|.\] This is in clear violation of Theorem \ref{Chang} provided $p$ is sufficiently large in terms of $\delta$ and $\eps$. Thus we have arrived at the desired contradiction. \end{proof}
\section{Mixed quaternary sums}\label{Mixed}
We now turn to the estimation of the sums $H_\chi(A,B,C,D)$. First we consider an auxiliary ternary character sum with a multiplicative convolution. \[M_\chi(A,B,C)=\sum_{a\in A}\sum_{b\in B}\sum_{c\in C}\chi(a+bc).\] We can bound $M_\chi$ in terms of the \emph{multiplicative energy}
\[E_\times(X,Y)=|\{(x_1,x_2,y_1,y_2)\in X\times X\times Y\times Y:x_1y_1=x_2y_2\}|.\] As before, this satisfies the bound \[E_\times(X,Y)^2\leq E_\times(X,X)E_\times(Y,Y).\]
Now, using Sum-Product estimates, if the sets had enough additive structure, we could bound the multiplicative energy non-trivially and make an improvement. This is essentially Burgess' argument, though he did not use Sum-Product theory; rather, since he was working with arithmetic progressions, the multiplicative energy could be bounded directly.
By fixing one element in the sum $H_\chi(A,B,C,D)$, we can view it as a ternary sum in two different ways. First, \[H_\chi(A,B,C,D)=\sum_{d\in D}S_\chi(A,B,d\cdot C)\] where $d\cdot C$ is the dilate of $C$ by $d$. We can use Lemma \ref{energy} to bound this sum non-trivially whenever we can bound $E_+(C,C)$ non-trivially. If not, we can write \[H_\chi(A,B,C,D)=\sum_{a\in A}M_\chi(a+B,C,D)\] instead and try to bound this non-trivially using Lemma \ref{BasicBurgess}, which we can do if $E_\times(C,C)$
is smaller than $|C|^3$. By making some simple manipulations to $H_\chi$ and using a sum-product estimate, we will be able to guarantee one of these facts holds.
Before presenting our proof, we mention that A. Balog has communicated to us a forthcoming result with T. Wooley which asserts:
\begin{UnnumberedTheorem}
There is a positive $\delta$ such that any $X\subset \FF_p$ can be decomposed as $X=Y\cup Z$ with $E_+(Y,Y)\leq |Y|^{3-\delta}$ and $E_\times(Z,Z)\leq |Z|^{3-\delta}$. \end{UnnumberedTheorem}
The proof of this result uses ideas similar to those in our proof of Theorem \ref{MixedSum}, and implies a non-trivial estimate for $H_\chi$. Indeed, decomposing $C=Y\cup Z$ as in the theorem,
\[|H_\chi(A,B,C,D)|\leq |H_\chi(A,B,Y,D)|+|H_\chi(A,B,Z,D)|.\] Estimating each of these sums as we mentioned above gives a non-trivial bound for $|H_\chi(A,B,C,D)|$.
Now we proceed to our proof of Theorem \ref{MixedSum}.
\begin{proof}[Proof of Theorem \ref{MixedSum}]
Let $2\leq k \ll \log p$ be a (large) parameter. First we handle the case $|C|<\sqrt p$. Let us write \[|H_\chi(A,B,C,D)|=\Delta|A||B||C||D|\] so that our purpose is to estimate $\Delta$. Let \[C_1=\left\{c\in C:|S_\chi(A,B,c\cdot D)|\geq\frac{\Delta|A||B||D|}{2}\right\}.\] We have that for any $C_2\subset C_1$
\[\frac{|C_2|}{2|C|}|H_\chi(A,B,C,D)|= |C_2|\frac{\Delta|A||B||D|}{2}\leq \sum_{c\in C_2}|S_\chi(A,B,c\cdot D)|,\] and using that the inner quantities are at most $|A||B||D|$, we also have \[|C_1|\geq
\frac{\Delta}{2}|C|.\] Now, passing to a subset $C_2$ of $C_1$ of size at least \[|C_2|\geq \frac{|C_1|}{16}\geq \frac{\Delta}{32}|C|,\] we can assume that the complex numbers $S_\chi(A,B,c\cdot D)$ with $c\in C_2$ all have arguments within $\frac{1}{2}$ of each other, so that by Lemma \ref{arg} we have \begin{equation}\label{lowerBound}
\frac{|C_3|}{4|C|}|H_\chi(A,B,C,D)|\leq \left|\sum_{c\in C_3}S_\chi(A,B,c\cdot D)\right|=|H_\chi(A,B,C_3,D)| \end{equation} whenever $C_3$ is a subset of $C_2$. In particular, if $C_3=C_2$ we have
\[\frac{\Delta^2}{128}|A||B||C||D|\leq\frac{|C_2|}{4|C|}|H_\chi(A,B,C,D)|\leq
\sum_{d\in D}|S_\chi(A,B,d\cdot C_2)|.\] Now in view of Lemma \ref{energy}, we see that \begin{align*}\frac{\Delta^2}{128}|A||B||C||D|&\leq|D|\max_{d\in D}\sqrt{p|A|E_+(B,d\cdot C_2)}\\&\leq
\sqrt{p}|D||A|^{1/2}|B|^{3/4}E_+(C_2,C_2)^{1/4},\end{align*} having bounded $E_+(B,B)$ trivially by $|B|^3$. Thus
\[E_+(C_2,C_2)\geq \frac{\Delta^8}{128^4}|A|^2|B||C|^4p^{-2}\geq
\lr{\frac{\Delta^8}{128^4}|A|^2|B||C|p^{-2}}|C_2|^3.
\] For convenience, write $K^{-1}=\frac{\Delta^8}{128^4}|A|^2|B||C|p^{-2}$. By Theorem \ref{BSG} there is a subset $C_3\subset C_2$ of size at least
$\frac{|C_2|}{K(\log p)^2}$ and such that \[|C_3-C_3|\ll K^4\frac{|C_3|^2(\log p)^8}{|C_2|^2}|C_3|.\] In particular, by Theorem \ref{EnergyEstimate} we have \begin{align*} E_\times(C_3,C_3)&\ll
|C_3|K^7\lr{\frac{|C_3|^2(\log p)^8}{|C_2|^2}}^{7/4}|C_3|^{7/4}\log p\\&=K^7|C_3|^{25/4}|C_2|^{-7/2}(\log p)^{15}. \end{align*} Inserting this into equation (\ref{lowerBound}), we get
\begin{align*}\frac{\Delta}{4}|A||B||C_3||D|&=\frac{|C_3|}{4|C|}|H_\chi(A,B,C,D)|\\&\leq
|H_\chi(A,B,C_3,D)|\\&\leq\sum_{a\in A}|M_\chi(a+B,C_3,D)|.\end{align*} Next we apply Lemma \ref{BasicBurgess} to obtain that
\begin{multline*}\frac{\Delta}{4}|A||B||C_3||D|\ll
|A|(|D||C_3|)^{1-\frac{1}{k}}(E_\times(D,D)E_\times(C_3,C_3))^{1/4k}\times\\\times\lr{|B|^{2k}2k\sqrt p+(2k|B|)^kp}^{1/2k}\end{multline*} which implies (after bounding $E_\times(D,D)$
trivially by $|D|^3$) \[\Delta^{4k} \ll|D|^{-1}|C_3|^{-4}
E_\times(C_3,C_3)\lr{2k\sqrt p+(2k|B|^{-1})^kp}^2.\]
Since $2\leq k\ll \log p$ and $|B|\geq p^\delta$, the final factor is at most $O(p(\log p)^{2k})$ as long as $k>\frac{1}{2\delta}$, and after inserting the upper bound for $E_\times(C_3,C_3)$ we have \[\Delta
^{4k}\ll|D|^{-1}
K^7|C_3|^{9/4}|C_2|^{-7/2}(\log p)^{2k+15}p.\]
Now we substitute $K^{-1}=\frac{\Delta^8}{128^4}|A|^2|B||C|p^{-2}$ and see
\[\Delta^{4k+56}\ll|D|^{-1}|A|^{-14}|B|^{-7}|C|^{-7}|C_3|^{9/4}|C_2|^{-7/2}(\log p)^{2k+15}p^{15}.\]
Bounding $|C_3|\leq |C_2|$ and $|C_2|\gg\Delta|C|$ we get
\[\Delta^{4k+\frac{229}{4}}\ll|D|^{-1}|A|^{-14}|B|^{-7}|C|^{-\frac{33}{4}}(\log p)^{2k+15}p^{15}.\] Upon taking $4k$'th roots we have \[\Delta^{1+229/16k}\ll
\lr{|D|^{-1}|A|^{-14}|B|^{-7}|C|^{-\frac{33}{4}}p^{15}}^{1/4k}(\log p)^{1/2+15/4k}.\] Since
\[|D|^4|A|^{56}|B|^{28}|C|^{33}\geq p^{60+\eps},\] the quantity in brackets on the right is at most $p^{-\eps/4}$. This shows that we must have $\Delta<p^{-\tau}$ for some $\tau >0$ depending only on $\eps$ and $\delta$. This is because we only needed $k$ to be sufficiently large in terms of $\delta$.
If $|C|>\sqrt p$ then we can break $C$ into a disjoint union of $m\approx \frac{|C|}{\sqrt p}$ sets
$C_1,\ldots, C_m$ of size at most $\sqrt p$. Then \[|H_\chi(A,B,C,D)|\leq\sum_{j}|H_\chi(A,B,C_j,D)|.\] We obtain a savings of $p^{-\tau}$ for each $H_\chi(A,B,C_j,D)$ and hence for $H_\chi(A,B,C,D)$ provided
\[|D|^4|A|^{56}|B|^{28}|C_j|^{33}\gg|D|^4|A|^{56}|B|^{28}p^{33/2}\geq p^{60+\eps}\] which is guaranteed by hypothesis (with $2\eps$ in place of $\eps$). \end{proof}
\end{document} |
\begin{document}
\title[Minimal cubature rules on unbounded domain] {Minimal cubature rules on an unbounded domain}
\author{Yuan Xu} \address{Department of Mathematics\\ University of Oregon\\
Eugene, Oregon 97403-1222.}\email{[email protected]} \thanks{The work of the author was supported in part by NSF Grant DMS-1106113.}
\date{\today} \keywords{Minimal cubature rules, orthogonal polynomials, unbounded domain} \subjclass[2000]{41A05, 65D05, 65D32}
\begin{abstract} A family of minimal cubature rules is established on an unbounded domain, which is the first such family known on unbounded domains. The nodes of such cubature rules are common zeros of certain orthogonal polynomials on the unbounded domain, which are also constructed. \end{abstract}
\maketitle
\section{Introduction} \setcounter{equation}{0}
In two or more variables, few families of minimal cubature rules are known in the literature, none on unbounded domains. The purpose of this note is to record a family of minimal cubature rules on an unbounded domain.
The precision of a cubature rule is usually measured by the degrees of polynomials that can be evaluated exactly. For a nonnegative integer $m$, we denote by $\Pi_m^2$ the space of polynomials of degree at most $m$. Let $\Omega$ be a domain in ${\mathbb R}^2$ and let $W$ be a non-negative weight function on $\Omega$. A cubature rule of precision $2n-1$ with respect to $W$ is a finite sum that satisfies \begin{equation} \label{cuba-generic}
\int_\Omega f(x,y) W(x,y) dxdy = \sum_{k=1}^N {\lambda}_k f(x_k,y_k),
\qquad \forall f\in \Pi_{2n-1}^2, \end{equation} and there exists at least one function $f^*$ in $\Pi_{2n}^2$ for which the equation \eqref{cuba-generic} does not hold. A cubature rule of the form \eqref{cuba-generic} is called {\it minimal} if its number of nodes is the smallest among all cubature rules of the same precision for the same integral.
It is well known that the number of nodes, $N$, of \eqref{cuba-generic} satisfies (cf. \cite{My, St}), \begin{equation}\label{lbdGaussian}
N \ge \dim \Pi_{n-1}^2 = \frac{n (n+1)}{2}. \end{equation} A cubature rule of degree $2n-1$ with $N = \dim \Pi_{n-1}^2 $ is called Gaussian. In contrast to the Gaussian quadrature of one variable, Gaussian cubature rules rarely exist and only two families of examples are known \cite{LSX, SX}, both on bounded domains. It is known that they do not exist if $W$ is centrally symmetric, which means that $W(x) = W(-x)$ and $- x \in \Omega$ whenever $x \in \Omega$. In fact, in the centrally symmetric case, the number of nodes of \eqref{cuba-generic} satisfies a better lower bound \cite{M},
N \ge \dim \Pi_{n-1}^2 + \left \lfloor \frac{n}{2} \right \rfloor
= \frac{n(n+1)}{2} + \left \lfloor \frac{n}{2} \right \rfloor. \end{equation}
A cubature rule whose number of nodes attains a known lower bound is necessarily minimal. It turns out that a family of weight functions ${\mathcal W}_{{\alpha},{\beta}, \pm \f12}$ defined by \begin{equation} \label{CWab}
{\mathcal W}_{{\alpha},{\beta},\pm \frac12}(x,y) : = |x+y|^{2 {\alpha}+1} |x- y|^{2 \beta +1}
(1-x^2)^{\pm \frac12}(1-y^2)^{\pm \frac12}, \quad {\alpha},{\beta} > -1, \end{equation} on $[-1,1]^2$ admits minimal cubature rules of degree $4n-1$ on the square $[-1,1]^2$, which was established in \cite{MP} for ${\alpha} ={\beta} = 0$ (see also \cite{BP, X94}) and in \cite{X12a} for $({\alpha},{\beta}) \ne (0,0)$. There are not many other cases for which minimal cubature rules are known to exist for all $n$, and none are known in the literature for unbounded domains.
To establish the minimal cubature rules for ${\mathcal W}_{{\alpha},{\beta},\pm \frac12}$, the starting point in \cite{X12a} is the Gaussian cubature rules in \cite{SX}, and the argument amounts to a series of changes of variables from the product Jacobi weight function on the square $[-1,1]^2$ to the weight function ${\mathcal W}_{{\alpha},{\beta}, \pm \frac12}$. The procedure works for general product weight functions on the square. Moreover, as we shall show in this note, the procedure also works for an unbounded domain, which leads to our main results. The minimal cubature rules are known to be closely related to orthogonal polynomials, as their nodes are necessarily zeros of certain orthogonal polynomials. We will discuss this connection and construct an explicit orthogonal basis on our unbounded domain in Section 3.
\section{Gaussian Cubature rules on an unbounded domain} \setcounter{equation}{0}
Let $W$ be a nonnegative weight function on a domain $\Omega \subset {\mathbb R}^2$. A polynomial $P \in \Pi_n^2$ is an orthogonal polynomial of degree $n$ with respect to $W$ if $$
\int_{\Omega} P(x,y) q(x,y) W(x,y) dxdy = 0, \qquad \forall q \in \Pi_{n-1}^2. $$ Let ${\mathcal V}_n^2$ be the space of orthogonal polynomials of degree exactly $n$. Then $\dim {\mathcal V}_n^2 = n +1$. The Gaussian cubature rules can be characterized in terms of the common zeros of elements in ${\mathcal V}_n^2$ (\cite{My,St}).
\begin{thm} Let $\{P_{k,n}: 0 \le k \le n\}$ be a basis of ${\mathcal V}_n^2$. Then a Gaussian cubature rule of degree $2n-1$ for the integral against $W$ exists if and only if its nodes are the common zeros of $P_{k,n}$, $0 \le k \le n$. \end{thm}
We now describe a family of Gaussian cubature rules on an unbounded domain. Let $w(x)$ be a nonnegative weight function defined on the unbounded domain $[1, \infty)$ and let $c_w$ denote its normalization constant defined by $c_w \int_{1}^\infty w(x) dx =1$. Let $p_n(w;x)$ be the orthogonal polynomial of degree $n$ with respect to $w$ and let $x_{1,n}, x_{2,n},\ldots, x_{n,n}$ be the zeros of $p_n(w;x)$. It is well known that $x_{k,n}$ are real and distinct points in $[1,\infty)$. The Gaussian quadrature rule for the integral against $w$ is given by \begin{equation} \label{Gauss-quad}
c_w \int_{1}^\infty f(x) w(x) dx = \sum_{k=1}^n {\lambda}_{k,n} f(x_{k,n}), \quad f \in \Pi_{2n-1}, \end{equation} where $\Pi_{2n-1}$ denotes the space of polynomials of degree $2n-1$ in one variable and the weights ${\lambda}_{k,n}$ are known to be all positive.
A typical example of $w$ is the shifted Laguerre weight function $$
w_{\alpha}(x) := (x-1)^{\alpha} e^{-x+1}, \quad {\alpha} > -1, $$ for which the orthogonal polynomial $p_n(w_{\alpha};x)$ is the Laguerre polynomial $L_n^{\alpha}(x-1)$ with argument $x-1$.
The unbounded domain on which our Gaussian cubature rules live is given by \begin{equation} \label{Omega}
\Omega: = \{(u,v): 0 < u-1 < v < u^2/4\}, \end{equation} which is the region bounded by the parabola $v = u^2/4$ and the line $v = u-1$; it is the shaded area depicted in Figure \ref{figure:region}. \begin{figure}
\caption{Domain $\Omega$}
\label{figure:region}
\end{figure}
The function $w(x)w(y)$ is evidently a symmetric function in $x$ and $y$. For ${\gamma} > -1$, we define the weight \begin{equation} \label{Wgamma}
W_\gamma(u,v) := w(x)w(y) |u^2 - 4 v|^{\gamma}, \quad {\gamma} > -1, \end{equation} where the variables $(x,y)$ and $(u,v)$ are related by \begin{equation} \label{u-v}
u = x+y, \quad v= xy. \end{equation} Since $w(x)w(y)$ is symmetric in $x$ and $y$, it can be written as a function of $(u,v)$ for $(x,y)$ in the domain $$
\triangle: = \{(x,y): 1 < x < y < \infty\}. $$
The function $W_\gamma(u,v)$ is the image of $w(x)w(y)|x-y|^{2{\gamma}+1}$ under the change of
variables $u =x+y$ and $v = xy$, which has Jacobian $|x-y|$, and $|x-y| = \sqrt{u^2 - 4 v}$. When $w = w_{\alpha}$ is the shifted Laguerre weight, we denote $W_{\gamma}$ by $W_{{\alpha},{\gamma}}$, which is given explicitly by \begin{equation*} W_{{\alpha},{\gamma}}(u,v) := (v - u +1)^{\alpha} e^{-u +2} (u^2 - 4 v)^{\gamma}. \end{equation*}
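Indeed, in the variables \eqref{u-v} the shifted Laguerre weights satisfy
\begin{equation*}
(x-1)^{\alpha}(y-1)^{\alpha} = (xy-x-y+1)^{\alpha} = (v-u+1)^{\alpha}, \qquad
e^{-x+1}\,e^{-y+1} = e^{-(x+y)+2} = e^{-u+2}.
\end{equation*}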
The change of variables \eqref{u-v} immediately leads to the relation \begin{align} \label{Para-Square}
\int_{\Omega} f(u,v) W_{\gamma}(u,v) dudv & =
\int_{\triangle} f(x+y, x y) w(x)w(y)|x-y|^{2{\gamma}+1} dxdy \\
& = \f12 \int_{[1,\infty)^2} f(x+y, x y) w(x)w(y)|x-y|^{2{\gamma}+1} dxdy, \notag \end{align} where the second equation follows from symmetry. Now, recall that $x_{k,n}$ denote the zeros of $p_n(w;x)$. We define $$
u_{k,j} = x_{k,n}+x_{j,n}, \qquad v_{k,j} = x_{k,n} x_{j,n}, \qquad 1 \le j \le k \le n. $$
\begin{thm} \label{thm:Gaussian} For $W_{- \frac12}$ on $\Omega$, the Gaussian cubature rule of degree $2n-1$ is \begin{equation} \label{GaussCuba-}
c_w^2 \int_{\Omega} f(u, v) W_{-\frac12}(u,v) du dv = \sum_{k=1}^n \mathop{ {\sum}' }_{j=1}^k
{\lambda}_k {\lambda}_j f(u_{j,k}, v_{j,k}), \quad f \in \Pi_{2n-1}^2, \end{equation} where ${\sum}'$ means that the term for $j = k$ is divided by 2. For $W_{\frac12}$ on $\Omega$, the Gaussian cubature rule of degree $2n-3$ is \begin{equation} \label{GaussCuba+}
c_w^2 \int_{\Omega} f(u, v) W_{\frac12}(u,v) du dv = \sum_{k=2}^n \sum_{j=1}^{k-1}
{\lambda}_{j,k} f(u_{j,k}, v_{j,k}), \quad f \in \Pi_{2n-3}^2, \end{equation} where $\lambda_{j,k} = {\lambda}_j {\lambda}_k (x_{j,n} - x_{k,n})^2$. \end{thm}
The proof follows almost verbatim from the proof of Theorem 3.1 in \cite{X12a}. In fact, by \eqref{Para-Square}, the cubature rule \eqref{GaussCuba-} is equivalent to the cubature rule for $w(x) w(y) dxdy$ on $[1,\infty)^2$ for polynomials $f(x+y,xy)$ with $f \in \Pi_{2n-1}^2$, which are symmetric polynomials in $x$ and $y$. By Sobolev's theorem on invariant cubature rules \cite{Sobolev}, this cubature rule is equivalent to the product Gaussian cubature rule for $w(x)w(y)$ on $[1,\infty)^2$, which is the product of \eqref{Gauss-quad}. Thus, the cubature rule \eqref{GaussCuba-} follows from the product Gaussian cubature rule under the change of variables \eqref{u-v}.
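As a purely numerical illustration, and not part of the proof, the following Python sketch (assuming NumPy and SciPy are available) builds the nodes and weights of \eqref{GaussCuba-} from the shifted Gauss--Laguerre rule with ${\alpha}=0$ and checks the rule on one test polynomial of degree at most $2n-1$; the reference value is obtained from the right-hand side of \eqref{Para-Square} by numerical integration, and the parameters and the test polynomial are arbitrary choices.
\begin{verbatim}
# Numerical sketch of the Gaussian cubature rule for W_{-1/2},
# built from the shifted Gauss-Laguerre rule (alpha = 0).
import numpy as np
from math import gamma
from scipy.special import roots_genlaguerre
from scipy.integrate import dblquad

alpha, n = 0.0, 4
x, w = roots_genlaguerre(n, alpha)   # rule for x^alpha e^{-x} on [0, infty)
x = x + 1.0                          # shift to (x-1)^alpha e^{-(x-1)} on [1, infty)
lam = w / gamma(alpha + 1.0)         # normalized weights, c_w = 1/Gamma(alpha+1)

def f(u, v):                         # a test polynomial of degree 5 <= 2n-1
    return u**3 * v**2 + 2.0 * v - 1.0

# cubature: sum over j <= k, halving the diagonal terms
cub = sum((0.5 if j == k else 1.0) * lam[j] * lam[k]
          * f(x[j] + x[k], x[j] * x[k])
          for k in range(n) for j in range(k + 1))

# reference: (1/2) c_w^2 int f(x+y, xy) w(x) w(y) dx dy over [1, infty)^2
ref = 0.5 * dblquad(lambda y, s: f(s + y, s * y)
                    * np.exp(-(s - 1.0) - (y - 1.0)),
                    1.0, np.inf, 1.0, np.inf)[0]
print(cub, ref)                      # the two values should agree
\end{verbatim}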
The existence of these cubature rules also follows from counting common zeros of the orthogonal polynomials with respect to $W_{\pm \f12}$. Indeed, a mutually orthogonal basis with respect to $W_{-\frac12}$ on $\Omega$ is given by \begin{equation} \label{OP-1/2}
P_{k,n}^{(-\frac12)} (u,v) = p_n(x) p_k(y) + p_n(y) p_k(x), \qquad 0 \le k \le n, \end{equation} and a mutually orthogonal basis with respect to $W_{\frac12}$ is given by \begin{equation} \label{OP+1/2}
P_{k,n}^{(\frac12)} (u,v) = \frac{p_{n+1}(x) p_k(y) - p_{n+1} (y) p_k(x)}{x-y}, \quad 0 \le k \le n, \end{equation} where both families are defined via the mapping \eqref{u-v}. This was established in \cite{K74} for the domain $[-1,1]^2$, but the proof can be easily extended to our $\Omega$. It is easy to see that the elements of $\{(u_{j,k}, v_{j,k}): 1 \le j \le k \le n\}$ are common zeros of $P_{k,n}^{(-\f12)}$, $0 \le k \le n$, and the cardinality of this set is $\dim \Pi_{n-1}^2$, which implies, by Theorem 2.1, that the Gaussian cubature rule for $W_{-\f12}$ exists. The proof for $W_{\frac 12}$ works similarly.
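To make the construction concrete, the following is a minimal numerical sketch (not part of the original argument) of the rule \eqref{GaussCuba-} for the shifted Laguerre weight with ${\alpha}=0$. It works with the unnormalized Gauss weights, so the constant $c_w$ is omitted on both sides of the identity; the routine \texttt{roots\_genlaguerre} from SciPy is assumed to be available, and the function names are ours.
\begin{verbatim}
import numpy as np
from scipy.special import roots_genlaguerre

def shifted_laguerre_rule(n, alpha=0.0):
    # Gauss rule for w_alpha(x) = (x-1)^alpha exp(-(x-1)) on [1, inf):
    # shift the generalized Gauss-Laguerre nodes by 1, keep the weights.
    t, W = roots_genlaguerre(n, alpha)
    return 1.0 + t, W

def cubature_gamma_minus_half(n, alpha=0.0):
    # Nodes (u_{j,k}, v_{j,k}) and weights of (GaussCuba-), unnormalized.
    x, W = shifted_laguerre_rule(n, alpha)
    nodes, weights = [], []
    for k in range(n):
        for j in range(k + 1):
            nodes.append((x[j] + x[k], x[j] * x[k]))
            weights.append(W[j] * W[k] * (0.5 if j == k else 1.0))
    return np.array(nodes), np.array(weights)

f = lambda u, v: u**2 * v          # test polynomial of total degree 3

nodes, mu = cubature_gamma_minus_half(3)
cub = np.sum(mu * f(nodes[:, 0], nodes[:, 1]))

# Reference value from (Para-Square): (1/2) * product Gauss rule on [1,inf)^2.
x, W = shifted_laguerre_rule(20)
X, Y = np.meshgrid(x, x)
ref = 0.5 * np.sum(np.outer(W, W) * f(X + Y, X * Y))
print(cub, ref)
\end{verbatim}
The two printed values agree to machine precision, since the $3$-point rule is exact for the test polynomial $f(u,v)=u^2v$ of total degree $3\le 2n-1$.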
\section{Minimal cubature rules on an unbounded domain} \setcounter{equation}{0}
We are looking for cubature rules of degree $2n-1$ that satisfy the lower bound \eqref{lwbd}, which are necessarily minimal cubature rules. Such cubature rules are characterized by common zeros of a subspace of ${\mathcal V}_n^2$ (\cite{M}).
\begin{thm}\label{thm:minimalCuba} A cubature rule whose number of nodes attains the lower bound \eqref{lwbd} exists if, and only if, its nodes are common zeros of $\lfloor \frac{n+1}{2} \rfloor +1$ orthogonal polynomials of degree $n$. \end{thm}
Let $w$ be the weight function defined on $[1,\infty)$ and let $W_{\gamma}$ be the corresponding weight function in \eqref{Wgamma} defined on $\Omega$ in \eqref{Omega}. We define a family of new weight functions by \begin{equation}\label{CWgamma}
{\mathcal W}_\gamma (x,y): = W_{\gamma} (2 x y, x^2+y^2 -1)|x^2-y^2|, \qquad (x,y) \in G, \end{equation} where the domain $G$ is centrally symmetric and defined by $$
G := (-\infty, -1]^2 \cup [1,\infty)^2. $$ Thus, the weight function ${\mathcal W}_\gamma$ is centrally symmetric on $G$.
That ${\mathcal W}_\gamma$ is well-defined on $G$ is established in the next lemma. Recall that $\triangle = \{ (x,y): 1 < x < y < \infty\}$.
\begin{lem} \label{Int-Para-cube} The mapping $(x,y) \mapsto (2 x y , x^2+y^2 -1)$ is a bijection from $\triangle$ onto $\Omega$. Furthermore, \begin{align} \label{Int-P-Q}
\int_{\Omega} f(u,v) W_{{\gamma}} (u,v) du dv = \int_{G} f(2xy, x^2+y^2 -1) {\mathcal W}_{{\gamma}}(x,y) dx dy. \end{align} \end{lem}
\begin{proof} Recall that if $(x,y) \in \triangle$, then $(x+y,xy) \in \Omega$ and the mapping is one-to-one. For $(x,y) \in [1,\infty)^2$, let us write $x = \cosh {\theta}$ and $y = \cosh \phi$, ${\theta}, \phi \ge 0$. Then it is easy to verify that \begin{equation} \label{theta-phi}
2 x y = \cosh ({\theta} - \phi) + \cosh ({\theta} + \phi), \quad x^2+y^2 -1 = \cosh ({\theta} - \phi) \cosh ({\theta} + \phi), \end{equation} from which it follows readily that $(2xy, x^2+y^2 -1) \in \Omega$ whenever $(x,y) \in \triangle$.
The Jacobian of the change of variables $u= 2xy$ and $v = x^2+y^2 -1$ is $4 |x^2-y^2|$, so that the mapping is a bijection. Since $dudv = 4 |x^2-y^2 |dxdy$ and $G$ consists of four copies of $\triangle$, the formula \eqref{Int-P-Q} follows from the change of variables, the integral \eqref{Para-Square} and the fact that $f(2xy, x^2+y^2 -1)$ is centrally symmetric on $G$. \end{proof}
In the case of $W_{\gamma} = W_{{\alpha},{\gamma}}$, we denote the weight function ${\mathcal W}_{\gamma}$ by ${\mathcal W}_{{\alpha},{\gamma}}$, which is given explicitly by \begin{equation} \label{CW_a,g}
{\mathcal W}_{{\alpha},{\gamma}} (x,y) := 4^{\gamma} |x-y|^{2 {\alpha}} (x^2-1)^{\gamma} (y^2-1)^{\gamma} |x^2-y^2| e^{-2xy+2}. \end{equation}
For the weight function ${\mathcal W}_{\gamma}$ on the unbounded domain $G$, minimal cubature rules of degree $4n-1$ exist, as shown in the next theorem. To state the theorem, we need some notation. For $n \in {\mathbb N}_0$, let $x_{k,n}$ be the zeros of the orthogonal polynomial $p_n(w;x)$, which lie in the support $[1,\infty)$ of the weight function $w$. We define $\theta_{k,n}$ by $$
x_{k,n} = \cosh {\theta}_{k,n}, \qquad 1 \le k \le n. $$ Since $x_{k,n} > 1$, it is evident that ${\theta}_{k,n}$ are well defined. We then define \begin{align} \label{stjk}
s_{j,k}: = \cosh \tfrac{{\theta}_{j,n}-{\theta}_{k,n}}{2} \quad\hbox{and}\quad
t_{j,k} := \cosh \tfrac{{\theta}_{j,n} + {\theta}_{k,n}}{2}. \end{align}
\begin{thm} \label{thm:cubaCW} For ${\mathcal W}_{-\frac12}$ on $G$, we have the minimal cubature rule of degree $4n-1$ with $\dim \Pi_{2n-1}^2 + n$ nodes, \begin{align} \label{MinimalCuba2-}
c_w^2 \int_{G} f(x, y) {\mathcal W}_{-\frac12}(x,y) dx dy = & \frac14 \sum_{k=1}^n \mathop{ {\sum}' }_{j=1}^k
{\lambda}_{j,n} {\lambda}_{k,n} \left[ f( s_{j,k}, t_{j,k})+ f( t_{j,k}, s_{j,k}) \right . \\
& \quad + \left. f( - s_{j,k}, - t_{j,k})+ f(- t_{j,k},- s_{j,k}) \right]. \notag \end{align} For ${\mathcal W}_{\frac12}$ on $G$, we have the minimal cubature rule of degree $4n-3$ with $\dim \Pi_{2n-3}^2 + n$ nodes, \begin{align} \label{MinimalCuba2+}
c_w^2 \int_{G} f(x, y) {\mathcal W}_{\frac12}(x,y) dx dy = & \frac14 \sum_{k=2}^n \sum_{j=1}^{k-1}
{\lambda}_{j,k} \left[ f( s_{j,k}, t_{j,k})+ f( t_{j,k}, s_{j,k}) \right . \\
& \quad + \left. f( - s_{j,k}, - t_{j,k})+ f(- t_{j,k},- s_{j,k}) \right], \notag \end{align} where $\lambda_{j,k} = {\lambda}_{j,n} {\lambda}_{k,n} (\cosh {\theta}_{j,n} - \cosh {\theta}_{k,n})^2$. \end{thm}
\begin{proof} We prove only the case of ${\mathcal W}_{-\f12}$; the case of ${\mathcal W}_{\f12}$ is similar. Our starting point is the Gaussian cubature rule in \eqref{GaussCuba-}, which gives, by \eqref{Int-P-Q}, \begin{equation*}
c_w^2 \int_{G} f(2xy, x^2+y^2-1) {\mathcal W}_{-\frac12}(x,y) dx dy = \sum_{k=1}^n \mathop{ {\sum}' }_{j=1}^k
{\lambda}_{j,n} {\lambda}_{k,n} f(u_{j,k}, v_{j,k}), \quad f \in \Pi_{2n-1}^2. \end{equation*} By \eqref{theta-phi} or by direct verification, \begin{align*}
\cosh {\theta}_{j,n} + \cosh {\theta}_{k,n} & = 2 \cosh \tfrac{{\theta}_{j,n}-{\theta}_{k,n}}{2}
\cosh \tfrac{{\theta}_{j,n} + {\theta}_{k,n}}{2} \\
\cosh {\theta}_{j,n} \cosh {\theta}_{k,n} & = \cosh^2 \tfrac{{\theta}_{j,n}-{\theta}_{k,n}}{2} +
\cosh^2 \tfrac{{\theta}_{j,n} + {\theta}_{k,n}}{2} -1 \end{align*} which implies that $$
u_{j,k} = x_{j,n} + x_{k,n} = 2 s_{j,k}t_{j,k} \quad\hbox{and}\quad v_{j,k} = x_{j,n} x_{k,n} = s_{j,k}^2+t_{j,k}^2 -1. $$ Consequently, the above cubature rule can be written as \begin{equation*}
c_w^2 \int_{G} f(2xy, x^2+y^2-1) {\mathcal W}_{-\frac12}(x,y) dx dy = \sum_{k=1}^n \mathop{ {\sum}' }_{j=1}^k
{\lambda}_{j,n} {\lambda}_{k,n} f(2 s_{j,k}t_{j,k}, s_{j,k}^2 + t_{j,k}^2 -1) \end{equation*} for all $f \in \Pi_{2n-1}^2$. For $f \in \Pi_{2n-1}^2$, the polynomial $f(2xy, x^2+y^2-1)$ is of degree at most $4n-1$. Since the polynomials $f(2xy, x^2+y^2-1)$ are symmetric polynomials and all symmetric polynomials in $\Pi_{4n-1}^2$ can be written in this way, we have established \eqref{MinimalCuba2-} for symmetric polynomials. By Sobolev's theorem on invariant cubature rules, this establishes \eqref{MinimalCuba2-} for all polynomials in $\Pi_{4n-1}^2$. \end{proof}
The number of nodes of the cubature rule in \eqref{MinimalCuba2-} is precisely $$
N = \dim \Pi_{2n-1}^2+n = \dim \Pi_{2n-1}^2 + \left \lfloor \frac{2n}2 \right \rfloor $$ which is the lower bound of \eqref{lwbd} with $n$ replaced by $2n$. Thus, \eqref{MinimalCuba2-} attains the lower bound \eqref{lwbd}. Similarly, \eqref{MinimalCuba2+} attains the lower bound \eqref{lwbd} with $n$ replaced by $2n-1$.
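As a small sanity check (again a sketch for ${\alpha}=0$, independent of the normalization), one can generate the nodes $(\pm s_{j,k},\pm t_{j,k})$ of \eqref{MinimalCuba2-} from the quadrature nodes via \eqref{stjk}, verify the identities $u_{j,k}=2s_{j,k}t_{j,k}$ and $v_{j,k}=s_{j,k}^2+t_{j,k}^2-1$, and confirm the node count $\dim\Pi_{2n-1}^2+n$; SciPy is assumed to be available.
\begin{verbatim}
import numpy as np
from scipy.special import roots_genlaguerre

n = 4
x = 1.0 + roots_genlaguerre(n, 0.0)[0]   # zeros x_{k,n} of p_n(w_0; x) in [1, inf)
theta = np.arccosh(x)                    # x_{k,n} = cosh(theta_{k,n})

nodes = set()
for k in range(n):
    for j in range(k + 1):
        s = np.cosh((theta[j] - theta[k]) / 2)
        t = np.cosh((theta[j] + theta[k]) / 2)
        assert np.isclose(x[j] + x[k], 2 * s * t)        # u_{j,k} = 2 s t
        assert np.isclose(x[j] * x[k], s**2 + t**2 - 1)  # v_{j,k} = s^2 + t^2 - 1
        nodes.update({(s, t), (t, s), (-s, -t), (-t, -s)})

print(len(nodes), n * (2 * n + 1) + n)   # both equal dim Pi^2_{2n-1} + n
\end{verbatim}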
It turns out that a basis of orthogonal polynomials with respect to ${\mathcal W}_{{\gamma}}$ can be given explicitly. We need to define three more weight functions associated with $W_{\gamma}$. \begin{align}\label{Wij} \begin{split}
W_{\gamma}^{(1,1)}(u,v) & := (1-u+v)(1+u+v)W_{\gamma}(u,v), \\
W_{\gamma}^{(1,0)}(u,v) & := (1-u+v)W_{\gamma}(u,v), \qquad \qquad (u,v) \in \Omega,\\
W_{\gamma}^{(0,1)}(u,v) & := (1+u+v)W_{\gamma}(u,v). \end{split} \end{align} Under the change of variables $u = x+y$ and $v = xy$, $1-u+v = (x-1)(y-1)$ and $1+u+v = (x+1)(y+1)$. The three weight functions in \eqref{Wij} are evidently of the same type as $W_{\gamma}$. We denote by $\{P_{k,n}^{({\gamma})}: 0 \le k \le n\}$ an orthonormal basis of ${\mathcal V}_n(W_{\gamma})$ under $ {\langle} \cdot, \cdot {\rangle}_{W_{\gamma}}$. For $0 \le k \le n$, we further denote by $P_{k,n}^{({\gamma}),1,1}$, $P_{k,n}^{({\gamma}),1,0}$, $P_{k,n}^{({\gamma}),0,1}$ the orthonormal polynomials of degree $n$ with respect to ${\langle} f, g {\rangle}_W$ for $W = W_{\gamma}^{(1,1)}$, $W_{\gamma}^{(1,0)}$, $W_{\gamma}^{(0,1)}$, respectively.
\begin{thm} \label{thm:Qop} For $n = 0,1,\ldots$, a mutually orthogonal basis of ${\mathcal V}_{2n}({\mathcal W}_{{\gamma}})$ is given by \begin{align} \label{Qeven2} \begin{split}
{}_1Q_{k,2n}^{({\gamma})}(x,y):= & P_{k,n}^{({\gamma})}(2xy, x^2+y^2 -1), \qquad 0 \le k \le n, \\
{}_2Q_{k,2n}^{({\gamma})}(x,y) := & (x^2-y^2) P_{k,n-1}^{({\gamma}),1,1}(2xy, x^2+y^2 -1), \quad 0 \le k \le n-1,
\end{split} \end{align} and a mutually orthogonal basis of ${\mathcal V}_{2n+1}({\mathcal W}_{{\gamma}})$ is given by \begin{align} \label{Qodd2}
\begin{split}
& {}_1Q_{k,2n+1}^{({\gamma})}(x,y):= (x+ y) P_{k,n}^{({\gamma}),0,1}(2xy, x^2+y^2 -1), \qquad 0 \le k \le n, \\
& {}_2Q_{k,2n+1}^{({\gamma})}(x,y):= (x-y) P_{k,n}^{({\gamma}),1,0}(2xy, x^2+y^2 -1), \qquad 0 \le k \le n.
\end{split} \end{align} \end{thm}
Under the substitution $(u,v) = (2xy, x^2+y^2-1)$, it is easy to see that $W_{\gamma}^{(1,1)}(u,v)$ becomes $(x^2-y^2)^2 {\mathcal W}_{\gamma}(x,y)$, $W_{\gamma}^{(1,0)}(u,v)$ becomes $(x-y)^2 {\mathcal W}_{\gamma}(x,y)$, and $W_{\gamma}^{(0,1)}(u,v)$ becomes $(x+y)^2 {\mathcal W}_{\gamma}(x,y)$. Hence, using Lemma \ref{Int-Para-cube}, the proof can be deduced from the orthogonality of $P_{k,n}^{({\gamma}), i,j}$ and symmetry of the integrals against $W_{\gamma}^{(i,j)}$, similar to the proof of Theorem 3.4 in \cite{X12b}.
Combining with \eqref{OP-1/2} and \eqref{OP+1/2}, we can express orthogonal polynomials ${}_iQ_{k,n}^{(\pm \f12)}$ in terms of orthogonal polynomials in one variable. For example, in the case of ${\mathcal W}_{{\alpha},{\gamma}}$ in \eqref{CW_a,g}, we can express even degree orthogonal polynomials in terms of the Laguerre polynomials.
\begin{prop} Let ${\alpha} > -1$. A mutually orthogonal basis of ${\mathcal V}_{2n}({\mathcal W}_{{\alpha},-\frac12})$ is given by, for $0 \le k \le n$ and $0 \le k \le n-1$, respectively, \begin{align*}
& {}_1Q_{k,2n}^{({\alpha},-\frac12)}(\cosh{\theta}+1,\cosh \phi +1) = L_n^{({\alpha})} (\cosh ({\theta} - \phi)) L_k^{({\alpha})}(\cosh ({\theta}+\phi)) \\
& \qquad\qquad \qquad\qquad \qquad\qquad \qquad
+ L_k^{({\alpha})} (\cosh ({\theta} - \phi)) L_n^{({\alpha})}(\cosh ({\theta}+\phi)), \\
& {}_2Q_{k,2n}^{({\alpha},-\frac12)}(\cosh{\theta} +1,\cosh \phi+1) =
(x^2-y^2) \left[ L_{n-1}^{({\alpha}+1)} (\cosh ({\theta} - \phi)) L_k^{({\alpha}+1)} (\cosh ({\theta}+\phi)) \right. \\
& \qquad\qquad \qquad\qquad \qquad\qquad \qquad
\left. + L_k^{({\alpha}+1)} (\cosh ({\theta} - \phi)) L_{n-1}^{({\alpha}+1)} (\cosh ({\theta}+\phi)) \right]. \end{align*} \end{prop}
In these formulas we used $x = \cosh {\theta} +1$ and $y = \cosh \phi +1$, so that $L_n^{\alpha}(x-1) = L_n^{\alpha} (\cosh {\theta})$ and $L_n^{\alpha}(y-1) = L_n^{\alpha} (\cosh \phi)$, and we can then use \eqref{theta-phi}. Note, however, that we cannot express the odd degree ones in terms of Laguerre polynomials. In fact, for ${}_2Q_{k, 2n+1}$, we need orthogonal polynomials of one variable with respect to $w(x) = (x+1)(x-1)^{\alpha} e^{- (x-1)}$, which is not a shift of the Laguerre weight function.
\begin{cor} For the weight function ${\mathcal W}_{-\f12}$, the nodes of the minimal cubature formulas \eqref{MinimalCuba2-} are common zeros of orthogonal polynomials $\{ {}_1Q_{k,2n}^{(-\frac12)}: 0 \le k \le n\}$. \end{cor}
An analogous result can be stated for the minimal cubature formulas \eqref{MinimalCuba2+}.
\end{document} |
\begin{document}
\title{A goodness of fit test for two component two parameter Weibull mixtures}
\maketitle \begin{center}
{\large Richard A. Lockhart}
{\small{\it{Department of Statistics and Actuarial Science, Simon Fraser University, Burnaby, B.C. V5A 1S6, Canada}}}
{\large Chandanie W. Navaratna}
{\small{\it{Department of Mathematics, The Open University of Sri Lanka, Nawala, Nugegoda, Sri Lanka}}}
\end{center}
\begin{abstract} Fitting mixture distributions is needed in applications where data belong to inhomogeneous populations comprising homogeneous subpopulations. The mixing proportions of the subpopulations are in general unknown and need to be estimated as well. A goodness of fit test based on the empirical distribution function is proposed for assessing model fits comprising two components, each distributed as a two parameter Weibull. The applicability of the proposed test procedure was empirically established using a Monte Carlo simulation study. The proposed test procedure can be easily altered to handle two component mixtures with different component distributions.
\end{abstract}
\section{Introduction}
Fitting mixture distributions is needed in applications where data belongs to inhomogeneous populations comprising homogeneous subpopulations. The mixing proportions of the sub populations are in general unknown and need to be estimated as well. A goodness of fit test based on the empirical distribution function is proposed for assessing the goodness of fit in mixtures comprising two components, each distributed as two parameter Weibull.
The rest of the article is organized as follows. Section 2 describes the mathematical formulation of the problem. Section 3 illustrates the computation of the test statistic. Section 4 offers the asymptotic distribution of the proposed test statistic. Section 5 outlines a procedure for computing p-values based on the proposed test. Section 6 presents the results of a Monte Carlo simulation study that provides empirical evidence for the applicability of the proposed test procedure. Section 7 offers concluding remarks along with a discussion.
\section{Two parameter Weibull mixture model and testing goodness of fit}
A random variable or vector $X$ is said to follow a finite mixture distribution if the probability density function (or probability mass function in the case of discrete $X$), $f(x),$ can be represented by a function of the form $f(x)=p_{1}f_{1}(x,\vec{\theta}_{1})+p_{2}f_{2}(x,\vec{\theta}_{2})+\cdots +p_{k}f_{k}(x,\vec{\theta}_{k}),$ where $p_{i}\ge 0,$ for $i=1,2,\cdots,k,$ are the mixing proportions such that $\sum_{i=1}^{k} p_{i}=1$ and the $f_{i}(\cdot) \ge 0$ are the density (or mass) functions of the components in the mixture such that $\int_{\Omega} f_{i}(x) dx =1$ (or, in the discrete case, $\sum_{x \in \Omega} f_{i}(x)=1$); here $\vec{\theta}_{i}$ denotes the vector of parameters of the $i^{th}$ component density. We assume that the mixture density is identifiable, so that two representations $\sum_{i} p_{i} f_{i}(x,\vec{\theta}_{i})$ and $\sum_{j} p_{j}f_{j}(x,\vec{\theta}_{j})$ define the same mixture if and only if they have the same mixing proportions and the same component densities, up to a permutation of the component indices. In this work, we confine ourselves to identifiable mixtures with two components so that $k=2$ and each component density is a two-parameter Weibull density given by
\[ f_{i}(x,\alpha_{i},\beta_{i}) =\frac{\alpha_{i}}{\beta_{i}} \left( \frac{x}{\beta_{i}}\right )^{\alpha_{i}-1} \exp \left(-\left( \frac{x}{\beta_{i}}\right)^{\alpha_{i}} \right) \]
The parameters $\alpha_{1}, \alpha_{2}$ are the shape parameters, $\beta_{1}, \beta_{2}$ are the scale parameters and $\vec{\theta}_{i}=(\alpha_{i},\beta_{i})^{T}$ for $i=1,2.$ This model assumes the location parameters of the two component densities to be the same.
In this two component model, let $p_{1}=p$ so that $p_{2}=1-p.$ Let $F(x,\vec{\theta})$ denote the mixture distribution function, where $\vec{\theta}=(\alpha_{1},\alpha_{2},\beta_{1},\beta_{2},p)^{T}$. Given a random sample of $n$ observations from the distribution $F(x,\vec{\theta}),$ the goodness of fit problem can be stated as a test of the null hypothesis that the distribution of the data is a two parameter Weibull mixture with parameter vector $\vec{\theta}$ that needs to be estimated in general.
In the recent past, Weibull mixture models have been extensively used in modeling wind data (\cite{akdag}, \cite{kollu}, \cite{sult}). In many of these studies, the goodness of the fitted models is examined based on the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), the chi-squared test, the Root Mean Squared Error (RMSE) and the Kolmogorov-Smirnov test (K-S test). Sultan {\em et al.}\ \cite{sult} report what they refer to as a correlation goodness of fit test for testing goodness of fit in mixtures of two Weibull distributions. In this work, we suggest a procedure for computing approximate p-values for testing goodness of fit of two component two parameter Weibull mixtures based on the Cramer-von Mises statistic.
\section{Computation of the test statistic}
Let $F_{n}(x)$ denote the empirical distribution function of the data, defined by $F_{n}(x)=\frac{1}{n} \sum_{i=1}^{n} I(x_{i} \le x)$, $-\infty <x< \infty$, where the indicator $I(x_{i} \le x)$ equals 1 if $x_{i} \le x$ and 0 otherwise. Since $F_{n}(x)$ is the proportion of observations less than or equal to $x$, if $F(x)$ is the true distribution of $X$ we expect $F_{n}(x)$ to be close to $F(x).$ The closeness of $F_{n}(x)$ to $F(x)$ is assessed by the Cramer-von Mises statistic defined by \[ W_{n}^{2}=n \int_{-\infty}^{\infty} (F_{n}(x)-F(x))^{2}dF(x). \] A computationally more convenient formula can be obtained by considering the probability integral transformation $z=F(x,\vec{\theta}).$ Let $x_{1},x_{2},\ldots,x_{n}$ be the order statistics of the original sample; then the probability integral transforms $z_{1},z_{2},\ldots, z_{n}$ obtained as $z_{i}=F(x_{i},\vec{\theta})$ form an ordered sample of independent uniform$[0,1]$ variables. If $\vec{\theta}$ is known, the test statistic can therefore be computed as (see Stephens, Anderson[ ]) \[ W_{n}^{2}=\sum_{i=1}^{n} \left (z_{i}-\frac{2i-1}{2n}\right )^{2}+\frac{1}{12n}. \]
If $\vec{\theta}$ is not completely specified, and the null hypothesis is that the distribution is a member of the two parameter Weibull mixture family $F(x,\vec{\theta}),$ the same formula can be used to compute $W_{n}^{2}$ by using $z_{i}=F(x_{i},\hat{\vec{\theta}}),$ where $\hat{\vec{\theta}}$ is an asymptotically efficient estimate for $\vec{\theta}.$ In this work, we estimated $\vec{\theta}$ by the method of maximum likelihood.
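For illustration, a short Python sketch of this computation is given below; NumPy is assumed to be available, and the parameter ordering $(\alpha_1,\alpha_2,\beta_1,\beta_2,p)$ and the function names are ours, not part of the proposed procedure.
\begin{verbatim}
import numpy as np

def weibull_mixture_cdf(x, a1, a2, b1, b2, p):
    # F(x, theta) = p F_1(x) + (1-p) F_2(x), two two-parameter Weibull components
    F1 = 1.0 - np.exp(-(x / b1) ** a1)
    F2 = 1.0 - np.exp(-(x / b2) ** a2)
    return p * F1 + (1.0 - p) * F2

def cramer_von_mises(sample, theta_hat):
    # W_n^2 from the probability integral transforms of the ordered sample
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    z = weibull_mixture_cdf(x, *theta_hat)
    i = np.arange(1, n + 1)
    return np.sum((z - (2 * i - 1) / (2.0 * n)) ** 2) + 1.0 / (12.0 * n)

# example with theta_hat = (alpha1, alpha2, beta1, beta2, p):
# cramer_von_mises(data, (2.0, 3.0, 3.0, 0.9, 0.5))
\end{verbatim}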
\section{Limiting Distribution of the proposed statistic} The literature reveals (see Cramer[ ], Durbin[ ]) that under suitable regularity conditions, the limiting distribution of $W_{n}^{2}$ for testing the null hypothesis that $X$ is distributed as $F(x)$ is that of $W^{2} = \sum_{j=1}^{\infty} \lambda_{j} z_{j}^{2},$ where the $z_{j}$ are independent $N(0,1)$ variables and the $\lambda_{j}$ are the eigenvalues of the covariance kernel $\rho$, namely the solutions of the eigenvalue equation $\int_{0}^{1} \rho(s,t)f(t)dt = \lambda f(s)$. It remains to discuss the computation of the eigenvalues of the covariance kernel. We present this separately for the two cases of simple hypotheses and composite hypotheses.
\subsection{ Simple Hypotheses}
Durbin and Knott [ ] have proved that for simple null hypotheses, $\rho(s,t)$ is given by $\rho(s,t) = \min(s,t) - st.$ In this case the eigenvalues can be computed in closed form as $\lambda_{j}=\frac{1}{\pi^{2}j^{2}},$ $j=1,2,\cdots,$ and the corresponding eigenfunctions are $\sqrt{2}\sin(\pi j s),$ for $j=1,2,\cdots.$
\subsection{Composite Hypothesis} \label{eigQ} In the case of a composite hypothesis, $\rho(s,t)$ can be estimated by $\hat{\rho}(s,t) = \min(s,t) -st - \Psi(s)^{T} I^{-1} \Psi(t),$ where $\Psi(s) = \frac{\partial F}{\partial \vec{\theta}} \left( F^{-1}(s,\hat{\vec{\theta}}),\hat{\vec{\theta}}\right),$ $\hat{\vec{\theta}}$ is an asymptotically efficient estimate for $\vec{\theta},$ and $I$ is the information matrix of a single observation.
Computation of the information matrix and inversion of the mixture distribution function are tedious for the Weibull mixture model at hand. We propose estimating the information matrix $I$ of a single observation by $-H/n,$ where $H$ is the Hessian matrix, that is, the matrix of second derivatives of the log-likelihood function, evaluated at the maximum likelihood estimate $\hat{\vec{\theta}};$ the inverse $(-H/n)^{-1}$ then serves as the estimate of $I^{-1}.$ In passing we note that for the normal and exponential distributions, this estimate gives the exactly correct form for the covariance kernel $\rho.$
\subsubsection*{Inverse of the mixture distribution function} We propose computing the inverse of the mixture distribution function pointwise numerically. The procedure we used is described next.
Given $t,$ we need to find $x$ such that $F(x,\vec{\theta})=t.$ This is equivalent to finding zeros of $g(x) = F(x, \vec{\theta})-t.$
We used the secant method, which gives the iteration scheme \[ x_{n+1} = \frac{x_{n-1}g(x_{n}) - x_{n} g(x_{n-1})}{g(x_{n})-g(x_{n-1})}; \]
here $g(x) = p \left( 1 - \exp \left( - \left(\frac{x}{\beta_{1}}\right)^{\alpha_{1}} \right) \right) + (1-p ) \left( 1 - \exp \left( - \left(\frac{x}{\beta_{2}}\right)^{\alpha_{2}} \right) \right)-t.$
The initial values needed to use this iterative scheme can be found by considering the boundary conditions for $p=0$ and $p=1.$
When $p=1,$ the condition $g(x)=0$ gives $ \left( 1 - \exp \left( - \left(\frac{x}{\beta_{1}}\right)^{\alpha_{1}} \right) \right) =t.$
Similarly, when $p=0,$ the condition $g(x)=0$ gives $ \left( 1 - \exp \left( - \left(\frac{x}{\beta_{2}}\right)^{\alpha_{2}} \right) \right) =t.$
Thus, $x_{1}=\beta_{1} \left|\log (1-t) \right|^{1/\alpha_{1}}$ and $x_{2}=\beta_{2} \left|\log (1-t) \right|^{1/\alpha_{2}}$ can be used as initial values. We note that since $t>0,$ $ \log(1-t)<0$ and hence it is essential to take the absolute value.
The iteration scheme can be carried out until desired convergence. We iterated until the difference between two consecutive points is less than a small number $\epsilon (>0),$ which we chose to be $5 \times 10^{-6}.$
In all the examples we tried, the initial values $x_{1}$ and $x_{2}$ obtained were on opposite sides of the root and the iterative scheme worked satisfactorily.
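A sketch of this inversion step is given below; the tolerance, iteration cap and function name are illustrative choices, and NumPy is assumed to be available.
\begin{verbatim}
import numpy as np

def mixture_cdf_inverse(t, a1, a2, b1, b2, p, eps=5e-6, max_iter=100):
    # Solve F(x, theta) = t by the secant method; the two starting values
    # come from the boundary cases p = 1 and p = 0 described above.
    g = lambda x: (p * (1 - np.exp(-(x / b1) ** a1))
                   + (1 - p) * (1 - np.exp(-(x / b2) ** a2)) - t)
    x_prev = b1 * abs(np.log(1 - t)) ** (1 / a1)
    x_curr = b2 * abs(np.log(1 - t)) ** (1 / a2)
    for _ in range(max_iter):
        g_prev, g_curr = g(x_prev), g(x_curr)
        x_next = (x_prev * g_curr - x_curr * g_prev) / (g_curr - g_prev)
        if abs(x_next - x_curr) < eps:
            return x_next
        x_prev, x_curr = x_curr, x_next
    return x_curr
\end{verbatim}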
To evaluate $\Psi(s),$ we need the derivatives $\frac{\partial F}{\partial \vec{\theta}}$ given by \begin{eqnarray*} \frac{\partial F}{\partial \alpha_{i}} &= & p_{i} \left(\frac{x}{\beta_{i}}\right)^{\alpha_{i}} \log \left(\frac{x}{\beta_{i}}\right) \exp \left(-\left(\frac{x}{\beta_{i}}\right)^{\alpha_{i}} \right) \\ \frac{\partial F}{\partial \beta_{i}} &= &- p_{i} \frac{\alpha_{i}}{\beta_{i}} \left(\frac{x}{\beta_{i}}\right)^{\alpha_{i}} \exp \left(-\left(\frac{x}{\beta_{i}}\right)^{\alpha_{i}} \right) \quad \textrm{for} \quad i=1,2 \quad \textrm{and} \\ \frac{\partial F}{\partial p} &= & \exp \left(-\left(\frac{x}{\beta_{2}}\right)^{\alpha_{2}} \right) - \exp \left(-\left(\frac{x}{\beta_{1}}\right)^{\alpha_{1}} \right). \\ \end{eqnarray*}
These derivatives have to be evaluated at $x=F^{-1}(s,\hat{\vec{\theta}})$ and at $\vec{\theta}=\hat{\vec{\theta}},$ where $\hat{\vec{\theta}}$ is the maximum likelihood estimate.
Thus, at any point $(s,t)$ we can evaluate $\hat{\rho}(s,t). $ It remains to show how to calculate estimates for the eigenvalues of $\hat{\rho}(s,t). $ The eigenvalues of $\hat{\rho}(s,t)$ cannot be found in closed form and have to be estimated numerically.
\subsubsection*{Computation of estimates for the eigenvalues of the covariance kernel}
The difficulty associated with finding a closed form for the information matrix and inverting the mixture distribution function limits the application of methods proposed in the literature (see Stephens [5] and Stephens [6]) that hinge on the expansion of $\Psi(s)^{T}I^{-1}\Psi(t)$ in a Fourier series in the eigenfunctions of $\rho(s,t)$. We propose a brute force approach for computing the eigenvalues that proceeds as follows.
If $\lambda$ is an eigenvalue of $\rho(s,t)$ and $f(s)$ is an eigenfunction corresponding to $\lambda$, then $\lambda f(s) = \int_{0}^{1} \rho(s,t) f(t) dt. $
Divide the interval [0,1] into $(m+1)$ sub-intervals, each of which is of length $1/(m+1)$. Then,
\begin{eqnarray*} \lambda f(i/(m+1)) & = & \int_{0}^{1} \rho (i/(m+1),t) f(t) dt \\ & \approx & \frac{1}{m} \sum_{j=1}^{m} \rho(\frac{i}{m+1},\frac{j}{m+1})f(\frac{j}{m+1}), \quad \textrm{for sufficiently large $m$} \\ \end{eqnarray*}
Let $V$ be the column vector with $i$th element equal to $f(i/(m+1))$ and $Q$ be the $m \times m$ matrix whose $(i,j)$th element is $Q_{ij} = \frac{1}{m}\rho \left (\frac{i}{(m+1)},\frac{j}{(m+1)} \right).$
The above equation can be written as $ \lambda V = Q V.$ Hence, finding the eigenvalues of $\rho$ reduces to the discretised problem of finding the eigenvalues of the matrix $Q.$
We developed software to create the matrix $Q$ using the estimate for $\rho(s,t)$ proposed in Section \ref{eigQ}. Eigenvalues of $Q$ were then used as estimates for $\lambda.$
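The following sketch illustrates this discretisation; as a sanity check it is applied to the simple-hypothesis kernel $\rho(s,t)=\min(s,t)-st$, whose eigenvalues $1/(\pi^{2}j^{2})$ are known in closed form and are approximately recovered. The function name and grid size are illustrative, and NumPy is assumed to be available.
\begin{verbatim}
import numpy as np

def kernel_eigenvalues(rho_hat, m=100):
    # Eigenvalues of the m x m matrix Q with
    # Q[i, j] = (1/m) * rho_hat(i/(m+1), j/(m+1)), i, j = 1, ..., m.
    grid = np.arange(1, m + 1) / (m + 1.0)
    S, T = np.meshgrid(grid, grid, indexing="ij")
    Q = rho_hat(S, T) / m
    return np.sort(np.linalg.eigvalsh(Q))[::-1]   # kernel is symmetric

# sanity check: the simple-hypothesis kernel min(s,t) - st
lam = kernel_eigenvalues(lambda s, t: np.minimum(s, t) - s * t, m=400)
print(lam[:3])
print(1.0 / (np.pi**2 * np.arange(1, 4) ** 2))
\end{verbatim}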
\section{Computation of p-values} \label{pval}
Having noted that the asymptotic distribution of $W_{n}^{2}$ is that of a weighted chi-squared distribution with eigenvalues of the covariance kernel as weights, approximate p-values can be computed as the probability $P(\sum_{i=1}^{\infty} \hat{\lambda}_{i} \chi_{1}^{2} \ge t), $ where $t$ is the value of the test statistic and $\hat{\lambda}_{i}$ are the estimates for the eigenvalues of $\hat{\rho}(s,t)$. We used Imhof's method \cite{imhof} to compute the approximate p-values.
Below we summarise the procedure for computing the suggested approximate p-values.
\begin{enumerate} \item Find an asymptotically efficient estimate $\hat{\vec{\theta}}$ of $\vec{\theta}=(\alpha_{1},\alpha_{2},\beta_{1},\beta_{2},p)^{T}.$ \item Compute the probability integral transforms $z_{i}=F(x_{i},\hat{\vec{\theta}}).$ \item Compute the value of the test statistic, $W_{n}^{2} = \sum_{i=1}^{n} \left( z_{i}-\frac{2i-1}{2n}\right)^{2}+\frac{1}{12n}.$ \label{step3} \item Compute $\hat{\Psi}(s)=\frac{\partial F}{\partial \vec{\theta}}\left(F^{-1}(s,\hat{\vec{\theta}}),\hat{\vec{\theta}}\right)$ at a desired grid of points $s$ in $[0,1]$. \item Estimate $I$ by $-H/n,$ where $H$ is the Hessian matrix of the log-likelihood evaluated at $\hat{\vec{\theta}},$ and estimate $I^{-1}$ by $(-H/n)^{-1}.$ \item Create the matrix $Q$ with $(s,t)$th element given by $\hat{\rho}(s,t)= \min(s,t)-st- \hat{\Psi}^{T}(s)(-H/n)^{-1}\hat{\Psi}(t),$ for the grid points $s,t$ of the interval $[0,1]$. \item Find the eigenvalues $\hat{\lambda}$ of $Q.$ \item Compute the approximate p-value as the probability that the linear combination $\sum_{i=1}^{k}\hat{\lambda}_{i} \chi_{1}^{2}$ exceeds the test statistic value $W_{n}^{2}$ computed in Step \ref{step3}. \end{enumerate}
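The last step can be carried out, for example, by Imhof-type numerical inversion of the characteristic function of the weighted chi-squared sum. The sketch below is one such implementation for central chi-squared terms with one degree of freedom; it is not necessarily the exact routine used in our computations, and SciPy is assumed to be available.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def imhof_pvalue(w, lam):
    # Approximate P( sum_j lam_j chi^2_1 >= w ) by numerical inversion of the
    # characteristic function (Imhof's formula, central case, one df per term).
    lam = np.asarray(lam, dtype=float)

    def integrand(u):
        if u == 0.0:
            return 0.5 * (np.sum(lam) - w)      # limit of the integrand at u = 0
        theta = 0.5 * np.sum(np.arctan(lam * u)) - 0.5 * w * u
        rho = np.prod((1.0 + (lam * u) ** 2) ** 0.25)
        return np.sin(theta) / (u * rho)

    integral, _ = quad(integrand, 0.0, np.inf, limit=200)
    return 0.5 + integral / np.pi

# example: approximate p-value for an observed statistic W_n^2 = 0.3
# imhof_pvalue(0.3, lam)
\end{verbatim}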
\section{Empirical Justification for the proposed approximate p-value} \label{simulation}
Weibull mixture populations for the simulation study were chosen to cover a range from poorly separated mixture components to well separated mixture components. Table \ref{results} presents the parameters of the chosen mixture components. The results presented in Table \ref{results} are based on 10000 simulations for each population. Approximate p-values for testing the composite hypothesis that the distribution is a member of the two component two parameter Weibull family were computed using the procedure described in Section \ref{pval}. If the procedure for computing approximate p-values is justifiable, the computed approximate p-values should be uniformly distributed. The Anderson-Darling test was used to examine the uniformity of the p-values. The last two columns of Table \ref{results} give the values of the Anderson-Darling statistic and the p-value for testing uniformity.
\begin{center} \begin{table}[ht] \begin{tabular}{cccccccc} population & $\alpha_{1}$ & $\alpha_{2}$ & $\beta_{1}$ & $\beta_{2}$ & $ p$ & Test statistic & pvalue \\ \hline
1 & 2 & 3 & 3 & 0.9 & 0.5 & 0.78 & 0.49 \\
2 & 1.5 & 3 & 2 & 4 & 0.5 & 1.28 & 0.24 \\ 3 & 1 & 3 & 2 & 4 & 0.5 & 0.95 & 0.38 \\ 4 & 2 & 4 & 0.5 & 3 & 0.5 & 1.47 & 0.18 \\ 5 & 2 & 8 & 1 & 4 & 0.5 & 2.43& 0.05 \\ \hline \end{tabular} \caption{Results of the simulation study} \label{results} \end{table} \end{center}
None of the results presented in Table \ref{results} provide evidence against the assumption that the resulting p-values are uniformly distributed. This can be taken as empirical evidence for the validity of the proposed test procedure for testing goodness of fit in two component two parameter Weibull mixtures.
\section{Concluding Remarks and Discussion}
In this paper, we presented a goodness of fit test for two parameter Weibull mixture models. Results of a Monte Carlo simulation study provided empirical evidence for the applicability of the suggested goodness of fit test. More simulation results are presented in Perera (\cite {perera}). Literature revealed applications of tests based on the Akaike Information Criteria and Bayesian Information Criteria (\cite {song}) as well as Root Mean Squared Error (RMSE), Chi Square tests, Kolmogorov-Smirnov test (\cite {kollu}) in order to assess the goodness of fit in such mixture model fits. We expect the proposed test in this paper to be superior in terms of power; however, this needs to be established using power studies against suitable alternative distributions. This is left as further work.
We also note that likelihood surfaces of Weibull mixture distributions appear to be flat over a wide range of the parameter space. This gives rise to difficulties in calculating maximum likelihood estimates using simple procedures such as the Newton-Raphson method. Also, likelihood functions for samples from Weibull mixture densities whose components are not well separated sometimes have more than one maximum; it is hard to find the global maximum with certainty. In such cases, we found that several very different roots can give equally good fits with similar likelihood values.
\end{document} |
\begin{document}
\begin{center} { {\sc Stability, NIP, and NSOP; Model Theoretic Properties of Formulas \\ via Topological Properties of Function Spaces }}
{ {\sc Karim Khanaki}}
{\footnotesize
Department of science, Arak University of Technology,
\\ P.O. Box 38135-1177, Arak, Iran; e-mail: [email protected] \\
School of Mathematics, Institute for Research in Fundamental Sciences (IPM), \\ P.O. Box 19395-5746, Tehran, Iran;
e-mail: [email protected]}
\end{center}
{\sc Abstract.}
{\small We study and characterize stability, NIP and NSOP in terms of topological and measure theoretical properties of classes of functions. We study a measure theoretic property, `Talagrand's stability', and explain the relationship between this property and NIP in continuous logic. Using a result of Bourgain, Fremlin and Talagrand, we prove the `almost definability' and `Baire~1 definability' of coheirs assuming NIP. We show that a formula $\phi(x,y)$ has the strict order property if and only if there is a convergent sequence of continuous functions on the space of $\phi$-types such that its limit is not continuous. We deduce from this a theorem of Shelah and point out the correspondence between this theorem and the Eberlein-\v{S}mulian theorem.}
{\small{\sc Keywords}: Talagrand's stability, independence property, coheir, strict order property, continuous~logic, relative weak compactness, angelic space.}
AMS subject classification: 03C45, 03C90, 46E15, 46A50
\noindent\hrulefill
{\small \tableofcontents
\noindent\hrulefill
\section{Introduction} \label{1} In \cite{Ros} Rosenthal introduced the independence property for families of real-valued functions and used this property for proving his celebrated $l^1$ theorem: a Banach space is either `good' (every bounded sequence has a weak-Cauchy subsequence) or `bad' (contains an isomorphic copy of $l^1$). After this and another work of Rosenthal \cite{Ros2},
Bourgain, Fremlin and Talagrand \cite{BFT} found some topological and measure theoretical criteria for the independence property and proved that the space of functions of the first Baire class on a Polish space is angelic; a topological notion for which the terminology was introduced by Fremlin. This theorem asserts that a set of continuous functions on a Polish space is either `good' (its closure is precisely the set of limits of its sequences) or `bad' (its closure contains non-measurable functions). In fact these dichotomies correspond to the NIP/IP dichotomy in continuous logic; see Fact~\ref{BFT} below.
In this paper we propose a generalization of Shelah's dividing lines for classification of first order theories which deals with real-valued formulas instead of 0-1~valued formulas. The principal aim of this paper is to study and characterize some model theoretic properties of formulas, such as OP, IP and SOP, in terms of topological and measure theoretical properties of function spaces. This study enables us to obtain new results and to reach a better understanding of the known results.
Let us give the background and our own point of view. In Shelah's stability theory, the set-theoretic criteria lead to ranks or combinatorial properties of a particular formula. There are known interactions between some of these combinatorial properties and some topological properties of function spaces. As an example, a formula $\phi(x,y)$ has the order property (OP) if there exist $a_ib_i,i<\omega$ such that $\phi(a_i,b_j)$ holds if and only if $i<j$. One can assume that $\phi$ is a 0-1 valued function, such that $\phi(a,b)=1$ iff $\phi(a,b)$ holds. Then $\phi$ has the order property iff there exist $a_i,b_j$ such that $\lim_i\lim_j\phi(a_i,b_j)=1\neq 0=\lim_j\lim_i\phi(a_i,b_j)$. Thus failure of the order property, or stability, is equivalent to the requirement that the double limits $\lim_i\lim_j\phi$ and $\lim_j\lim_i\phi$ be the same. Using a crucial result due to Eberlein and Grothendieck, the latter is a topological property of a family of functions; see Fact~\ref{Fact2} below. Similarly, using the result of Bourgain, Fremlin and Talagrand mentioned above, one can obtain some topological and measure theoretical characterizations of NIP formulas. Therefore, it seems reasonable that one studies real-valued formulas and hopes to obtain new classes of functions (formulas) and develop a sharper stability theory by making use of topological properties of function spaces instead of only combinatorial properties of formulas.
In this paper (except in Section 4) we work in continuous logic which is an extension of classical first order logic; thus our results hold in the latter case.
The following is a summary of the main results of this paper:
Propositions \ref{SCP->NSOP}, \ref{NSOP=SCP}, \ref{Shelah=Eberlein}, Theorems \ref{NIP-compactness} and \ref{almost-dfn} are new results. Also, Definitions \ref{NIP-formula} and \ref{universal dfn} are new. Propositions \ref{NIP-almost}, \ref{NIP-dfn}, \ref{Keisler-NIP} and Theorem \ref{Baire-dfn} have not previously been published but are essentially just translations from functional analysis. Section~3 focuses on NIP in the framework of continuous logic and Section~4 focuses on SOP in classical model theory.
This is not the end of the story if one defines a notion of non-forking extension in NIP theories such that it satisfies symmetry and transitivity. Moreover, one can study sensitive families of functions, dynamical systems and chaotic maps and their connections with stability theory. We will study them in a future work.
It is worth recalling another line of research. After the preparation of the first version of this paper, we came to know that simultaneously in \cite{Iba14} and \cite{S2} the relationship between NIP and Rosenthal's dichotomy was noticed in the contexts of $\aleph_0$-categorical structures in continuous logic and classical first order setting, respectively. Independently, the relationship between NIP in integral logic and Talagrand's stability was studied in \cite{K}.
This paper is organized as follows: In the second section, we briefly review continuous logic and stability. In the third section, we study Talagrand's stability and its relationship with NIP in logic, and give some characterizations of NIP in terms of measure and topology. The result of Bourgain, Fremlin and Talagrand is used in this section for proving the definability of coheirs in NIP theories. In the fourth section, we study the SOP and point out the correspondence between Shelah's theorem and the Eberlein-\v{S}mulian theorem.
\noindent {\bf Acknowledgements.} I am very much indebted to Professor David H. Fremlin for his kindness and his helpful comments. I am grateful to M\'{a}rton Elekes for valuable comments and observations, particularly Example~\ref{exa} below. I thank the anonymous referees for their detailed suggestions and corrections; they helped to improve significantly the exposition of this paper.
I would like to thank the Institute for Basic Sciences (IPM), Tehran, Iran. Research partially supported by IPM grant 93030032.}
\noindent\hrulefill
\section{Continuous Logic} \label{2} In this section we give a brief review of continuous logic from \cite{BU} and \cite{BBHU}. Results stated without proof can be found there. The reader who is familiar with continuous logic can skip this section.
\subsection{Syntax and semantics} \label{syntax} A {\em language} is a set $L$ consisting of constant symbols and function/relation symbols of various arities. To each relation symbol $R$ is assigned a bound $\flat_R\in[0,\infty)$ and we assume that its interpretation is bounded by $\flat_R$. It is always assumed that $L$ contains the metric symbol $d$ and that $\flat_d=1$. We use $\mathbb{R}$ as the value space and its common operations $+,\times$ and scalar products as connectives. Moreover, to each relation symbol $R$ (function symbol $F$) is assigned a modulus of uniform continuity $\Delta_R$ ($\Delta_F$). We also use the symbols `sup' and `inf' as quantifiers.
Let $L$ be a language. {\em $L$-terms} and their bounds are inductively defined as follows:
\begin{itemize}
\item Constant symbols and variables are terms.
\item If $F$ is an $n$-ary function symbol and $t_1, \ldots,t_n$ are terms, then $F(t_1,\ldots, t_n)$ is a term.
All $L$-terms are constructed in this way.
\end{itemize}
\begin{dfn} $L$-formulas and their bounds are inductively defined as follows: \begin{itemize}
\item Every $r\in\mathbb{R}$ is an atomic formula with bound $|r|$.
\item If $R$ is an $n$-ary relation symbol and $t_1,\ldots,t_n$ are terms, $R(t_1,\ldots,t_n)$ is an atomic formula with bound $\flat_R$. \item If $\phi,\psi$ are formulas and $r\in\mathbb{R}$ then
$\phi+\psi,\phi\times\psi$ and $r\phi$ are formulas with bounds $\flat_\phi+\flat_\psi$, $\flat_\phi\flat_\psi$, $|r|\flat_\phi$, respectively. \item If $\phi$ is a formula and $x$ is a variable, $\sup_x\phi$ and $\inf_x\phi$ are formulas with the same
bound as $\phi$. \end{itemize} \end{dfn}
\begin{dfn} A {\em prestructure} in $L$ is a pseudo-metric space $(M, d)$ equipped with: \begin{itemize}
\item for each constant symbol $c\in L$, an element $c^M\in M$
\item for each $n$-ary function symbol $F$ a function $F^M : M^n\to M$ such that
$$d_n^M(\bar x,\bar y)\leqslant\Delta_F(\epsilon) \Longrightarrow d^M(F^M(\bar x),F^M(\bar y))\leqslant\epsilon$$
\item for each $n$-ary relation symbol $R$ a function $R^M : M^n\to [-\flat_R,\flat_R]$ such that
$$d_n^M(\bar x,\bar y)\leqslant\Delta_R(\epsilon) \Longrightarrow |R^M(\bar x)-R^M(\bar y)|\leqslant\epsilon.$$ \end{itemize} \end{dfn}
If $M$ is a prestructure, for each formula $\phi(\bar x)$ and $\bar a\in M$, $\phi^M(\bar a)$ is defined inductively starting from atomic formulas. In particular, $(\sup_y \phi)^M(\bar a)=\sup_{b\in M}\phi^M(\bar a, b)$. Similarly for $\inf_y\phi$.
\begin{proposition} Let $M$ be an $L$-prestructure and $\phi(\bar x)$ a formula with $|\bar x|=n$. Then
$\phi^M(\bar x)$ is a real-valued function on $M^n$ with a modulus of uniform continuity $\Delta_\phi$ and $|\phi^M(\bar a)|\leqslant\flat_\phi$ for every $\bar a$. \end{proposition}
Interesting prestructures are those which are {\em complete} metric spaces. They are called {\em $L$-structures}. Every prestructure can be easily transformed to a complete $L$-structure by first taking the quotient metric and then completing the resulting metric space. By uniform continuity, interpretations of function and relation symbols induce well-defined function and relations on the resulting metric space.
\subsection{Compactness, types, stability} \label{compactness} Let $L$ be a language. An expression of the form $\phi\leqslant\psi$, where $\phi,\psi$ are formulas, is called a {\em condition}. An equality $\phi=\psi$ is likewise called a condition. These conditions are called closed if $\phi,\psi$ are sentences. A {\em theory} is a set of closed conditions. The notion $M\models T$ is defined in the obvious way; $M$ is then called a model of $T$. A theory is {\em satisfiable} if it has a model.
An ultraproduct construction can be defined. The most important application of this construction in logic is to prove the \L o\'{s} theorem and to deduce the compactness theorem.
\begin{thm}[Compactness Theorem] Let $T$ be an $L$-theory and $\mathcal{C}$ a class of $L$-structures. Suppose that $T$ is finitely satisfiable in $\mathcal{C}$. Then there exists an ultraproduct of structures from $\mathcal{C}$ that is a model of $T$. \end{thm}
There are intrinsic connections between some concepts from functional analysis and continuous logic. For example, types correspond to well-known mathematical objects, namely {\em Riesz homomorphisms}. To illustrate this, there are two options: the Gelfand representation of $C^*$-algebras and the Kakutani representation of $M$-spaces. We work in a real-valued logic, so we use the latter.
Suppose that $L$ is an arbitrary language. Let $M$ be an $L$-structure, $A\subseteq M$ and $T_A=Th({M}, a)_{a\in A}$. Let $p(x)$ be a set of $L(A)$-conditions in free variable $x$. We shall say that $p(x)$ is a {\em type over} $A$ if $p(x)\cup T_A$ is satisfiable. A {\em complete type over} $A$ is a maximal type over $A$. The collection of all such types over $A$ is denoted by $S^{M}(A)$, or simply by $S(A)$ if the context makes the theory $T_A$ clear. The {\em type of $a$ in $M$ over $A$}, denoted by $\text{tp}^{M}(a/A)$, is the set of all $L(A)$-conditions satisfied in $M$ by $a$. If $\phi(x,y)$ is a formula, a {\em $\phi$-type} over $M$ is a maximal consistent set of formulas of the form $\phi(x,a)\geqslant r$, for $a\in M$ and $r\in\mathbb{R}$. The set of $\phi$-types over $M$ is denoted by $S_\phi(M)$. The definition of a $\phi$-type over a set $A$ which is not a model needs a few more steps (see Definition~6.6 in \cite{BU}).
We now give a characterization of complete types in terms of functional analysis. Let $\mathcal{L}_A$ be the family of all interpretations $\phi^{M}$ in $M$ where $\phi$ is an $L(A)$-formula with a free variable $x$. Then $\mathcal{L}_A$ is an Archimedean Riesz space of measurable functions on $M$ (see \cite{Fremlin3}). Let $\sigma_A({M})$ be the set of Riesz homomorphisms $I: {\mathcal L}_A\to \mathbb{R}$ such that $I(\textbf{1}) = 1$, where $\textbf{1}$ is the constant $1$ function on $M$. The set $\sigma_A({M})$ is called the {\em spectrum} of $T_A$. Note that $\sigma_A({M})$ is a weak* compact subset of the dual space $\mathcal{L}_A^*$ of $\mathcal{L}_A$. The next proposition shows that a complete type can be coded by a Riesz homomorphism and gives a characterization of complete types. In fact, by the Kakutani representation theorem, the map $S^{M}(A)\to\sigma_A({M})$, defined by $p\mapsto I_p$ where $I_p(\phi^M)=r$ if $\phi(x) = r$ is in $p$, is a bijection. By adapting the proof of Proposition~5.6 of \cite{K}, one can show that:
\begin{proposition} \label{key} Suppose that $M$, $A$ and $T_A$ are as above. \begin{itemize}
\item [{\em (i)}] The map $S^{M}(A)\to\sigma_A({M})$ defined by $p\mapsto I_p$ is bijective.
\item [{\em (ii)}] A set $p$ of $L(A)$-conditions is an element of $S^{M}(A)$ if and only if there is an elementary extension $N$ of $M$ and $a\in N$ such that $p=\text{tp}^{N}(a/A)$. \end{itemize} \end{proposition}
We equip $S^{M}(A)=\sigma_A({M})$ with the relative topology induced from the weak* topology of $\mathcal{L}_A^*$. Therefore, $S^{M}(A)$ is a compact Hausdorff space. For any complete type $p$ and formula $\phi$, we let $\phi(p)=I_p(\phi^{M})$. It is easy to verify that the topology on $S^{M}(A)$ is the weakest topology in which all the functions $p\mapsto \phi(p)$ are continuous. This topology is sometimes called the {\em logic topology}. The same statements hold for $S_\phi(M)$.
\begin{dfn} \label{stab2} A formula $\phi(x,y)$ is called {\em stable in a structure} $M$ if there are no $\epsilon>0$ and infinite sequences
$a_n, b_n \in M$ such that for all $i<j$: $|\phi(a_i,b_j) -
\phi(a_j,b_i)| \geqslant \epsilon$. A formula $\phi$ is {\em stable in a theory} $T$ if it is stable in every model of $T$. If $\phi$ is not stable in $M$ we say that it has the {\em order property} (or short the OP). Similarly, $\phi$ has the OP in $T$ if it is not stable in some model of $T$. \end{dfn}
It is easy to verify that $\phi(x,y)$ is stable in $M$ if whenever $a_n,b_m\in M$ form two sequences we have $$\lim_n\lim_m\phi(a_n,b_m)=\lim_m\lim_n\phi(a_n,b_m),$$ provided both limits exist.
\begin{lem} \label{stable formula}
Let $\phi(x,y)$ be a formula.
Then the following are equivalent:
\begin{itemize}
\item [{\em (i)}] The formula $\phi$ is stable.
\item [{\em (ii)}] There are no distinct real numbers $r,s$ and
infinite sequence $(a_ib_i\colon i < \omega)$ such that
$\phi(a_i,b_j)=r$ for $i<j$ and $\phi(a_i,b_j)=s$ for $i\geq j$.
\end{itemize} \end{lem}
By the following result, stability of a formula $\phi(x,y)$ is equivalent to relative weak compactness of the associated family of functions (see Corollary~\ref{Fact2} below). In everything that follows, if $X$ is a topological space then $C_b(X)$ denotes the Banach space of bounded continuous real-valued functions on $X$, equipped with the supremum norm. A subset $A\subseteq C_b(X)$ is relatively weakly compact if it has compact closure in the weak topology on $C_b(X)$. If $X$ is a compact space, then we write $C(X)$ instead of $C_b(X)$.
\begin{fct}[\cite{Fremlin4}, Proposition~462E] \label{Grothendieck-lemma} Let $X$ be a compact topological space, and $A$ a subset of $C(X)$. Then $A$ is weakly compact in $C(X)$ iff it is norm-bounded and pointwise compact. \end{fct}
In \cite{Gro}, Grothendieck says that the following is based on an idea of Eberlein. (In \cite{Pillay-Grothendieck}, Pillay pointed this out.)
\begin{fct}[Eberlein-Grothendieck criterion, \cite{Gro}, Th\'{e}or\`{e}me~6] \label{Criterion}
Let $X$ be an arbitrary topological space, $X_0\subseteq X$ a dense subset. Then the following are equivalent for a subset $A\subseteq C_b(X)$: \begin{itemize}
\item [{\em (i)}] The set $A$ is relatively weakly compact in $C_b(X)$.
\item [{\em (ii)}] The set $A$ is bounded, and for any sequences $\{f_n\}_{1}^\infty\subseteq A$ and $\{x_n\}_{1}^\infty\subseteq X_0$, we have $$\lim_n \lim_m f_n(x_m) =\lim_m \lim_n f_n(x_m),$$ whenever both limits exist. \end{itemize} \end{fct}
The following is a model-theoretic version of the Eberlein-Grothendieck criterion, as pointed out by Ben~Yaacov in \cite{Ben-Gro} (see Fact 2, the discussion before Theorem 3 and Theorem 5 therein).
\begin{cor} \label{Fact2} Let $M$ be a structure and $\phi(x,y)$ a formula. Then the following are equivalent: \begin{itemize}
\item [{\em (i)}] $\phi(x,y)$ is stable in $M$.
\item [{\em (ii)}] The set $A=\{\phi(x,b):S_x(M)\to \mathbb{R}~|b\in M\}$ is relatively weakly compact in $C(S_x(M))$.
\end{itemize} \end{cor}
\noindent\hrulefill \section{NIP} \label{4}
In this section we study Talagrand's stability and its relationship to NIP in continuous logic. Then, we give some characterizations of NIP in terms of topology and measure, and deduce various forms of definability of coheirs for NIP models.
\subsection{Independent family of functions} In \cite{Ros} Rosenthal introduced the independence property for families of real-valued functions and used it for proving his dichotomy. As we will see shortly, this notion corresponds to a generalization of the IP for real-valued formulas.
\begin{dfn}[\cite{GM}, Definition~2.8] \label{NIP-family}
A family $F$ of real-valued functions on a set $X$ is said to be {\em independent}
(or has the {\em independence property}, short IP) if there exist real numbers $s<r$ and a sequence $f_n\in F$ such that for each $k\geqslant 1$ and for each $I\subseteq\{1,\ldots,k\}$, there is $x\in X$ with $f_i(x)\leqslant s$ for $i\in I$ and $f_i(x)\geqslant r$ for $i\notin I$. In this case, sometimes we say that every finite subset of the sequence $f_n$ is shattered by $X$. If $F$ does not have the independence property then we say that it has the {\em dependent property} (or the NIP). \end{dfn}
We have the following remarkable topological characterizations of this property. More details and several equivalent presentations can be found in \cite{GM}.
\begin{fct}[\cite{GM}, Theorem~2.11] \label{NIP-convergence}
Let $X$ be a compact space and $F\subseteq C(X)$ a bounded subset. The following conditions are equivalent: \begin{itemize}
\item [{\em (i)}] $F$ does not contain an independent sequence.
\item [{\em (ii)}] Each sequence in $F$ has a pointwise convergent subsequence in $\mathbb{R}^X$.
\end{itemize} \end{fct}
\begin{dfn} \label{RSC}
We say that a (bounded) family $F$ of real-valued functions on a set $X$ has {\em relative sequential compactness in ${\mathbb{R}}^X$} (short RSC) if every sequence in $F$ has a pointwise convergent subsequence in ${\mathbb{R}}^X$. \end{dfn}
As we will see shortly, the following statement is a generalization of a model theoretic fact, i.e. IP implies OP.
\begin{fct} \label{IP->OP} Let $X$ be a compact space and $F\subseteq C(X)$ a bounded subset.
If $F$ is relatively weakly compact in $C(X)$, then $F$ has the RSC. \end{fct} \begin{proof} Suppose that $F$ is relatively weakly compact in $C(X)$. (Note that, by Fact~\ref{Grothendieck-lemma} above, the weak topology and pointwise topology are the same.) By the Eberlein-\v{S}mulian theorem, each sequence in $F$ has a subsequence converging to an element of $C(X)$. So, in particular, $F$ has the RSC. \end{proof}
\subsection{Talagrand's stability and almost NIP} Historically, Talagrand's stability (see Definition~\ref{Talagrand-stable} below), which we call the almost dependence property, arose naturally when Talagrand and Fremlin were studying pointwise compact sets of measurable functions; they found that in many cases a set of functions was relatively pointwise compact because it was almost dependent (see Fact~\ref{almost-NIP} below). Later it appeared that the concept was connected with Glivenko-Cantelli classes in the theory of empirical measures, as explained in \cite{Talagrand}. In this subsection we study this property and show that it is the `correct' counterpart of NIP in integral logic (see \cite{K}). Then, we point out the connection between NIP in continuous logic and this property.
\begin{dfn}[Talagrand's stability, \cite{Fremlin4}, 465B] \label{Talagrand-stable}
Let $A\subseteq C(X)$ be a pointwise bounded family of real-valued continuous functions on $X$. Suppose that $\mu$ is a measure on $X$. We say that $A$ is {\em $\mu$-stable} if $A$ is a stable set of functions in the sense of Definition~465B in \cite{Fremlin4}, that is, whenever $E\subseteq X$ is measurable, $\mu(E)>0$ and $s<r$ in $\mathbb{R}$, there is some $k\geqslant 1$ such that $(\mu^{2k})^*D_k(A, E,s,r)<(\mu E)^{2k}$ where \begin{align*} D_k(A, E,s,r) = \bigcup_{f\in A}\big\{w\in & E^{2k}:f(w_{2i})\leqslant s, ~f(w_{2i+1})\geqslant r \textrm{ for } i<k\big\}. \end{align*}
\end{dfn}
We now invoke the first result involving this notion. First, we need a notion and a notation. If $X$ is any set and $A$ a subset of ${\Bbb R}^X$, then the topology of {\em pointwise convergence} on $A$ is that inherited from the usual product topology of ${\Bbb R}^X$; that is, the coarsest topology on $A$ for which the map $f\mapsto f(x) : A\to {\Bbb R}$ is continuous for every $x\in X$.
We will denote the pointwise closure of $A$ in ${\Bbb R}^X$ by $cl_p(A)$.
\begin{fct}[{\cite[465D]{Fremlin4}}] \label{almost-NIP} Let $X$ be a compact Hausdorff space and $A\subseteq C(X)$ be a pointwise bounded family of real-valued continuous functions on $X$. Suppose that $\mu$ is a Radon measure on $X$. If $A$ is $\mu$-stable, then $cl_p(A)$ is $\mu$-stable and every element in $cl_p(A)$ is $\mu$-measurable. \end{fct}
In \cite{Fremlin75} Fremlin obtained a remarkable result, which has become known as Fremlin's dichotomy: a set of measurable functions on a perfect measure space is either `good' (relatively countably compact for the pointwise topology and relatively compact for the topology of convergence in measure) or `bad' (with neither property). We recall that a subset $A$ of a topological space $X$ is {\em relatively countably compact} if every sequence of $A$ has a cluster point in $X$.
\begin{fct}[Fremlin's dichotomy, \cite{Fremlin4}, 463J] Let $(X,\Sigma,\mu)$ be a perfect $\sigma$-finite measure space, and $\{f_n\}$ a sequence of real-valued measurable functions on $X$. Then \begin{enumerate} \item[] either $\{f_n\}$ has a subsequence which is convergent almost everywhere \item[] or $\{f_n\}$ has a subsequence with no measurable cluster point in $\mathbb{R}^X$. \end{enumerate} \end{fct}
We now define the notion of $\mu$-almost NIP and we will see shortly the connection between this notion and NIP.
\begin{dfn}[$\mu$-almost NIP] Let $A\subseteq C(X)$ be a pointwise bounded family of real-valued continuous functions on $X$. Suppose that $\mu$ is a measure on $X$. We say that $A$ has the {\em $\mu$-almost NIP}, if every sequence in $A$ has a subsequence which is convergent $\mu$-almost everywhere. \end{dfn}
Let $(X,\Sigma,\mu)$ be a finite Radon measure space with $X$ compact, and $\mathcal{L}^0$ the set of all real-valued measurable functions on $X$. Let $A\subseteq \mathcal{L}^0$ be a bounded family. Then we say that $A$ satisfies condition (M) if for all $s<r$ and all $k$, the set $D_k(A,X,s,r)$ is measurable (this applies, in particular, if $A$ is countable).
\begin{proposition} \label{NIP-Fremlin} Let $(X,\Sigma,\mu)$ be a finite Radon measure space with $X$ compact, and $A\subseteq \mathcal{L}^0$ a bounded family of real-valued measurable functions on $X$. Consider the following statements. \begin{itemize}
\item [{\em (i)}] $A$ is $\mu$-stable.
\item [{\em (ii)}] There do not exist a measurable set $E$ with $\mu(E)>0$ and $s<r$ in $\mathbb{R}$ such that for each $n$ and almost all $w\in E^n$,
for each subset $I$ of $\{1,\ldots,n\}$, there is $f\in A$ with $$f(w_i)<s \text{ if } i\in I \text{ and } f(w_i)>r \text{ if } i\notin I.$$
\item [{\em (iii)}] $A$ has the $\mu$-almost NIP. \end{itemize} Then (i)~$\Rightarrow$~(ii). If $A$ satisfies condition (M), then (ii)~$\Rightarrow$~(i). (i)~$\Rightarrow$~(iii), but (iii)~$\nRightarrow$~(i) and (iii)~$\nRightarrow$~(ii). \end{proposition} \begin{proof} (i)~$\Rightarrow$~(ii) is evident.
(M)$\wedge$(ii)~$\Rightarrow$~(i) is Proposition~4 in \cite{Talagrand}.
(i)~$\Rightarrow$~(iii): Let $\{f_n\}$ be any sequence in $A$, and take an arbitrary subsequence of it (still denoted by $\{f_n\}$). Let $\mathcal{D}$ be a non-principal ultrafilter on $\mathbb{N}$, and then define
$f(x)=\lim_{\mathcal{D}} f_i(x)$ for all $x\in X$. (By the assumption, there is a real number $r$ such that $|h|\leqslant r$ for each $h\in A$, and therefore $f$ is well defined.) Since $A$
is $\mu$-stable and $f\in \text{cl}_p(\{f_n\})$, the function $f$ is measurable (see Fact~\ref{almost-NIP}). So every subsequence of $\{f_n\}$ has a measurable cluster point. Fremlin's dichotomy now tells us that $\{f_n\}$ has a subsequence which is convergent almost everywhere.
(iii)~$\nRightarrow$~(i)$\vee$(ii): In \cite{SF} Shelah and Fremlin found that in a model of set theory there is a separable pointwise compact set $A$ of real-valued Lebesgue measurable functions on the unit interval which is not $\mu$-stable. Thus we see that (iii)~$\nRightarrow$~(i). Since the set $A$ is separable, it satisfies condition (M) and therefore (ii) fails. \end{proof}
Professor Fremlin kindly pointed out to us that Shelah's model, described in their paper \cite{SF}, in fact deals with the point that there is a countable set of \emph{continuous} functions which is relatively pointwise compact in ${\mathcal L}^0(\mu)$ for a Radon measure $\mu$, but is not $\mu$-stable. Of course, in some cases, there are still some things to say (see Theorem~\ref{NIP-compactness} below).
For a Hausdorff space $X$, $\textbf{M}_r(X)$ will be the space of universally measurable functions, i.e. a function $f$ is an element of $\textbf{M}_r(X)$ iff $f$ is $\mu$-measurable for every Radon measure $\mu$ on $X$.
\begin{fct}[BFT Criterion, \cite{BFT}, Theorem~2F] \label{BFT}
Let $X$ be a compact Hausdorff space, and $F\subseteq C(X)$ be bounded. Then the following are equivalent. \begin{itemize}
\item [{\em (i)}] $F$ has the NIP (see Definition~\ref{NIP-family} above).
\item [{\em (ii)}] $F$ is relatively compact in $\textbf{M}_r(X)$ for the topology of pointwise convergence.
\item [{\em (iii)}] $F$ has the RSC (see Definition~\ref{RSC} above).
\item [{\em (iv)}] Each sequence in $F$ has a subsequence which is convergent $\mu$-almost everywhere for every Radon measure $\mu$ on $X$.
\item [{\em (v)}] For each Radon measure $\mu$ on $X$, each sequence in $F$ has a subsequence which is convergent $\mu$-almost everywhere. \end{itemize} \end{fct} \begin{proof} The equivalence (i)--(iii) is the equivalence (ii)~$\Leftrightarrow$~(vi)~$\Leftrightarrow$~(iv) of Theorem~2F of \cite{BFT}. (See also Fact~\ref{NIP-convergence} above.)
Fremlin's dichotomy and the equivalence (v)~$\Leftrightarrow$~(vi)~$\Leftrightarrow$~(iv) of Theorem~2F of \cite{BFT} imply (v)~$\Leftrightarrow$~(i)~$\Leftrightarrow$~(iv).
\end{proof}
We will see that the BFT criterion in NIP theories plays a role similar to the role played by the Eberlein-Grothendieck criterion in stable theories.
\subsection{NIP in a model} In \cite{Sh} Shelah introduced the independence property (IP) for 0-1 valued formulas; a formula $\phi(x,y)$ has the IP if for each $n$ there exist $b_1,\ldots,b_n$ in the monster model such that each nontrivial Boolean combination of
$\phi(x,b_1),\ldots,\phi(x,b_n)$ is satisfiable. By some set-theoretic considerations, a formula $\phi(x,y)$ has IP if and only if $\sup\{|S_\phi(A)|:A\text{ of size }\kappa\}=2^\kappa$ for some infinite cardinal $\kappa$. Although this property was introduced for counting types, its negation (NIP) is a successful extension of local stability and also an active domain of research in classical first order logic and other areas of mathematics. The following generalization of NIP (in the framework of continuous logic) also has a natural topological presentation.
\begin{dfn} \label{NIP-formula}
Let $M$ be a structure, and $\phi(x,y)$ a formula. We say that $\phi(x,y)$ is NIP on $M\times M$ (or on $M$) if for each sequence $(a_n)\subseteq M$, and
$r>s$, there are some {\em finite} disjoint subsets $E,F$ of ${\Bbb N}$ such that $$\Big\{b\in M:\big( \bigwedge_{n\in E}\phi^{\sf M}(a_n,b)\leqslant
s\big)\wedge\big(\bigwedge_{n\in F}\phi^{\sf M}(a_n,b)\geqslant r\big)\Big\}=\emptyset.$$ \end{dfn}
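To fix ideas, here is a standard illustration (our own, not taken from the text) in the classical $\{0,1\}$-valued setting, where the only nontrivial choice of $r>s$ is $0\leqslant s<r\leqslant 1$, so that $\phi\leqslant s$ reads $\neg\phi$ and $\phi\geqslant r$ reads $\phi$. Take $M=(\mathbb{Q},<)$ and $\phi(x,y)=$ ``$x<y$''. Given any sequence $(a_n)\subseteq M$, choose indices $i\neq j$ with $a_i\leqslant a_j$ and put $E=\{i\}$, $F=\{j\}$; then $$\Big\{b\in M:\neg(a_i<b)\wedge(a_j<b)\Big\}=\Big\{b\in M: b\leqslant a_i\ \wedge\ a_j<b\Big\}=\emptyset,$$ so $\phi$ is NIP on $M$. By contrast, the edge relation of the random graph fails this condition on the random graph itself: by the extension property, the corresponding set is nonempty for every sequence of distinct vertices and all finite disjoint $E,F$.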
\begin{lem} \label{equivalence}
Let $M$ be a structure, and $\phi(x,y)$ a formula. Then the following are equivalent. \begin{itemize}
\item [(i)] $\phi(x,y)$ is NIP on $M$.
\item [(ii)] For each sequence $(a_n)\subseteq M$, each saturated elementary extension ${N}\succeq{M}$, and
$r>s$, there are disjoint subsets $E,F$ of ${\Bbb N}$ such that $$\Big\{b\in N: \big( \bigwedge_{n\in E}\phi(a_n,b)\leqslant
s\big)\wedge\big(\bigwedge_{n\in F}\phi(a_n,b)\geqslant r\big)\Big\}=\emptyset.$$
\item [(iii)] For each sequence $\phi(a_n,y)$ in the set
$A=\{\phi(a,y):S_y(M)\to{\Bbb R}~|~a\in M\}$, where $S_y(M)$ is
the space of all complete types on $M$ in the variable $y$, and $r>s$
there are {\em finite} disjoint subsets $E,F$ of ${\Bbb N}$ such that $$\Big\{y\in S_y(M):\big( \bigwedge_{n\in E}\phi(a_n,y)\leqslant
s\big)\wedge\big(\bigwedge_{n\in F}\phi(a_n,y)\geqslant r\big)\Big\}=\emptyset.$$
\item [(iv)] The condition (iii) holds for
{\em arbitrary} disjoint subsets $E,F$ of ${\Bbb N}$.
\item [(v)] Every sequence $\phi(a_n,y)$ in $A$ has a convergent subsequence in ${\Bbb R}^{S_y(M)}$; equivalently, $A$ has the RSC. \end{itemize} \end{lem} \begin{proof} (ii)~$\Rightarrow$~(i) follows from the compactness theorem and saturation. (Indeed, suppose that (i) fails, and then consider a suitable type and get a contradiction.)
(i)~$\Rightarrow$~(iii) is just a restatement of the notion of type.
(iv)~$\Rightarrow$~(iii): Suppose that (iii) fails for the sequence $(a_n)\subseteq M$ and $s<r$. Let $E=\{i_n:n\in \Bbb N\}$ and $F=\{j_n:n\in \Bbb N\}$ be arbitrary disjoint subsets of ${\Bbb N}$. Let $E_m=\{i_n:n\leq m\}$
and $F_m=\{j_n:n\leq m\}$ for each $m\in\Bbb N$. So, for each $m$, there is some $y_m\in S_y(M)$ such that $ \bigwedge_{n\in E_m}\phi(a_n,y_m)\leqslant s$ and $\bigwedge_{n\in F_m}\phi(a_n,y_m)\geqslant r$. Let $z$ be a cluster point of the sequence $(y_m)$. (Note that $z\in S_y(M)$ because the type space is compact.) Since the $\phi(a_n,y)$ are continuous, it is easy to verify that $ \bigwedge_{n\in E}\phi(a_n,z)\leqslant s$ and $\bigwedge_{n\in F}\phi(a_n,z)\geqslant r$. As $E,F$ are arbitrary, (iv) fails. (iii)~$\Rightarrow$~(iv) is evident.
(iii)~$\Leftrightarrow$~(v): It is easy to verify that for the set $A$, the dependence property in Definition~\ref{NIP-family} is equivalent to the condition (iii). Now, by Fact~\ref{NIP-convergence} the proof is completed.
(ii)~$\Leftrightarrow$~(iv) follows from saturation (and the notion of type). \end{proof}
Some similar notions are studied in \cite{Iba14}. Note that the notion NIP on a model is `double local', i.e. $\phi$ can be NIP on a model, but not in a theory.
If $\phi(x,y)$ is a formula, we let $\tilde{\phi}(y, x)= \phi(x,y)$. Hence $\tilde{\phi}$ is the same formula as $\phi$, but we have exchanged the role of variables and parameters.
\begin{rmk} Let $M$ be a structure and $\phi(x,y)$ a formula. The space $S_{\tilde\phi}(M)$ of all $\tilde\phi$-types on $M$ is the quotient of $S_y(M)$ given by the family of functions $\{\phi(a,y):a\in M\}$ (see \cite{BU}, Fact~4.7). So in Definition~\ref{NIP-formula} above, $S_y(M)$ can be replaced by $S_{\tilde\phi}(M)$. \end{rmk}
\begin{thm}[NIP and $\mu$-stability] \label{NIP-compactness} Let $M$ be an $\aleph_0$-saturated $L$-structure, $\phi(x;y)$ a formula, $A=\{\phi(a,y):a\in M\}$ and $\tilde{A}=\{\phi(x,b):b\in M\}$. Then the following are equivalent: \begin{itemize}
\item [{\em (i)}] $\phi$ is NIP on $M$.
\item [{\em (ii)}] $\tilde A$ is $\mu$-stable for all Radon measures $\mu$ on $S_\phi(M)$. \end{itemize} \end{thm} \begin{proof} (i)~$\Rightarrow$~(ii): By the compactness theorem of continuous logic, since $M$ is $\aleph_0$-saturated and $\phi(x,y)$ is NIP on $M$, there is some integer $n$ such that no subset (of $M$) of size $n$ is shattered by $\phi(x,y)$. We note that by Proposition~465T of \cite{Fremlin4}, the conditions (i) and (ii) of Proposition~\ref{NIP-Fremlin} are equivalent. So if $E\subseteq M$, $\mu(E)>0$, $r>s$, then for each $(a_1,\ldots,a_n)\in E^n$ there is a set $I\subseteq \{1,\ldots,n\}$ such that $$\Big\{y\in S_y(M):\big( \bigwedge_{i\in I}\phi(a_i,y)\leqslant s\big)\wedge\big(\bigwedge_{i\notin I}\phi(a_i,y)\geqslant r\big)\Big\}=\emptyset,$$ where $S_y(M)$ is the space of all complete types on $M$ in the variable $y$. Since $M\subseteq S_y(M)$, the set ${\tilde A}$ is $\mu$-stable for every Radon measure $\mu$ on $S_\phi(M)$.
(ii)~$\Rightarrow$~(i): Suppose that $\tilde A$ is $\mu$-stable for every Radon measure $\mu$ on $\tilde{X}=S_\phi(M)$. Thus, by Fact~\ref{almost-NIP}, $\tilde A$ is relatively compact in $\textbf{M}_r({\tilde X})$ (the space of all $\mu$-measurable functions on $\tilde X$ for each Radon
measure $\mu$ on ${\tilde X}$). By the BFT criterion, for each sequence $\phi(x,a_n)$ in $\tilde A$, and $s<r$, there is some $I\subseteq {\mathbb{N}}$ such that $$\Big\{x\in S_x(M):\big( \bigwedge_{i\in I}{\phi}(x,a_i)\leqslant s\big)\wedge\big(\bigwedge_{i\notin I}{\phi}(x,a_i)\geqslant r\big)\Big\}=\emptyset.$$ Thus the dual formula ${\tilde \phi}(y,x)$ is NIP on $M$. So, by applying the direction (i)~$\Rightarrow$~(ii) to the formula $\tilde \phi$, we see that ${\tilde{\tilde A}}=A$ is $\mu$-stable for every Radon measure $\mu$ on $X=S_{\tilde{\phi}}(M)$. Thus, again by the BFT criterion and Proposition~\ref{NIP-Fremlin}, we conclude that $\phi(x,y)$ is NIP on $M$. \end{proof}
In fact the proof of the previous result says more: if $M$ is $\aleph_0$-saturated, then $\phi$ is NIP on $M$ if and only if $\tilde \phi$ is NIP on $M$.
\begin{cor} Under the assumptions in Theorem~\ref{NIP-compactness}, $\phi$ is NIP on $M$ if and only if $A$ is $\mu$-stable for every Radon measure $\mu$ on $S_{\tilde\phi}(M)$. \end{cor}
The previous results also show why $\mu$-stability is the `correct' notion of NIP in integral logic (see \cite{K}).
The following is a translation of the BFT criterion into (continuous) model theory. Note that here we do not need any saturation conditions on the model.
\begin{proposition}[NIP and $\mu$-almost NIP] \label{NIP-almost} Let $M$ be an $L$-structure, $\phi(x;y)$ a formula and $A=\{\phi(a,y):a\in M\}$. Then the following are equivalent: \begin{itemize}
\item [{\em (i)}] $\phi$ is NIP on $M$.
\item [{\em (ii)}] $A$ has the $\mu$-almost NIP for all Radon measures $\mu$ on $S_{\tilde\phi}(M)$.
\end{itemize} \end{proposition} \begin{proof} This is the equivalence (i)~$\Leftrightarrow$~(v) of Fact~\ref{BFT} with $X=S_{\tilde\phi}(M)$ and $F=A$. \end{proof}
\begin{rmk} One cannot expect the notion `NIP on a model' to be symmetric. It is easy to construct examples in which $\phi$ is NIP on $M$ but $\tilde\phi$ is not NIP on $M$. Of course, if $M$ is $\aleph_0$-saturated, the two notions coincide. \end{rmk}
\subsection{Almost definable coheirs} \label{3} It is well known that every type on a stable model is definable (see \cite{Ben-Gro}). Here we want to give a counterpart of this fact for NIP theories. In \cite{K} it is shown that if a formula $\phi$ (in integral logic) is $\mu$-stable on a model $M$, then every type in $S_\phi(M)$ is $\mu$-almost definable.
We present the notion of `coheir' here. Let $M^*$ be a saturated elementary extension of $M$. A type $p(x)\in S_\phi(M^*)$ is called a {\em coheir (of a type) over $M$} if for every condition $\varphi=0$ in $p(x)$ and every $\epsilon>0$, the condition $|\varphi|\leq \epsilon$ is satisfiable in $M$. (In classical ($\{0,1\}$-valued) model theory, this means that every formula in $p(x)$ is realized in $M$.) In this case we say that $p$ is $M$-finitely satisfiable. It is easy to verify that a type $p(x)\in S_\phi(M^*)$ is a coheir over $M$ if and only if there are $(a_i\in M: i\in I)$ and an ultrafilter $\mathcal D$ on $I$ such that $\lim_{i,\mathcal D}tp(a_i/M)=p$, where the $\mathcal D$-limit is taken in the logic topology. (For the definition of $\mathcal D$-limit, see Section~5 of \cite{BBHU}.) By Proposition~\ref{key}, $p$ is a coheir over $M$ if and only if there are $(a_i\in M: i\in I)$ and an ultrafilter $\mathcal D$ on $I$ such that $\lim_{i,\mathcal D}I_{p_i}=I_p$, where $p_i=tp(a_i/M)$ and the $\mathcal D$-limit is taken in the weak* topology on $\sigma_{M^*}(M^*)$.
Here we say a function $\psi:X\to \mathbb{R}$ on a topological space $X$ is {\em universally measurable}, if it is $\mu$-measurable for every probability Radon measure $\mu$ on $X$.
\begin{dfn} \label{universal dfn} Let $M^*$ be a saturated elementary extension of $M$ and $p(x)\in S_\phi(M^*)$ be a coheir of a type over $M$.
We say that a universally measurable function $\psi:S_{\tilde\phi}(M)\to\mathbb{R}$ {\em defines} $p$ if $\phi(p,a)=\psi(tp_{\tilde\phi}(a/M))$ for all $a\in M^*$, and in this case we say that $p$ is {\em universally definable}. \end{dfn}
The above notion is well defined since every coheir is $M$-finitely satisfiable and so $M$-invariant.
\begin{rmk}[\cite{Pillay-Grothendieck}, Remark~2.1] There is a correspondence between the set of all coheirs of types over $M$ and the closure of the set
$A=\{\phi^a(y):S_{\tilde\phi}(M)\to{\mathbb{R}}~|a\in M\}$, where $\phi^a(q)=\phi(a,q)$ for all $q\in S_{\tilde\phi}(M)$. Indeed, let $M^*$ be a saturated elementary extension of $M$.
Then for any global $M$-finitely satisfiable $\phi$-type $p(x)\in S_\phi(M^*)$ there is a function $\psi_p$ in the closure $\overline{A}$ of $A$ such that $\psi_p(tp_{\tilde\phi}(a/M))=\phi(p,a)$ for all $a\in M^*$. Indeed, suppose that $tp_\phi(a_i/M^*)\to p$ in the logic topology, where $a_i\in M$. Define $\psi_p(y)=\lim_i\phi(a_i,y)$ for all $y\in S_{\tilde\phi}(M)$. Now, it is easy to verify that $\psi_p(tp_{\tilde\phi}(a/M))=\phi(p,a)$ for all $a\in M^*$. To summarize, let $S_\phi^{M\text{-fs}}(M^*)$ be the set of all global $M$-finitely satisfiable $\phi$-types (over the monster model $M^*$). Then the map $S_\phi^{M\text{-fs}}(M^*)\to \overline{A}$, defined by $p\mapsto\psi_p$, is a homeomorphic embedding of $S_\phi^{M\text{-fs}}(M^*)$ into $\overline{A}\subseteq {\Bbb R}^{S_{\tilde\phi}(M)}$ equipped with the topology of pointwise convergence. \end{rmk}
For simplicity, we will write $\phi(p,a)=\phi(p,b)$ where $b=tp_{\tilde\phi}(a/M)$ and $a\in M^*$.
The following is a translation of the BFT criterion:
\begin{proposition} \label{NIP-dfn} Let $M$ be a structure and $\phi(x,y)$ a formula. Then the following are equivalent: \begin{itemize}
\item [{\em (i)}] $\phi$ is NIP on $M$.
\item [{\em (ii)}] Every coheir of a $\phi$-type over $M$ is definable by a
universally measurable relation $\psi(y)$ over $S_{\tilde\phi}(M)$. \end{itemize} \end{proposition} \begin{proof} (i)~$\Rightarrow$~(ii): Let
$A=\{\phi^a(y):S_{\tilde\phi}(M)\to{\mathbb{R}}~|a\in M\}$. By NIP, $A$ is relatively compact in $\textbf{M}_r(S_{\tilde\phi}(M))$ (see the BFT criterion). Suppose that $p_{a_i}\to p\in S_{\phi}(M^*)$
where $p_{a_i}$ is realized by $a_i\in M$ and $M^*$ is a saturated elementary extension of $M$.
(We note that the set of all types realized in $M$ is dense in the set of all coheirs.) Thus $\phi^{a_i}\to\psi$ pointwise where $\psi$ is universally measurable, and $\psi$ defines $p$.
(ii)~$\Rightarrow$~(i): Suppose that $\phi^{a_i}\to\psi$ pointwise. We can assume that $p_{a_i}\to p\in S_{\phi}(M^*)$. Suppose that $p$ is definable by a universally measurable relation $\varphi$, so we have $\psi=\varphi$ on $S_{\tilde\phi}(M)$. So, $\psi$ is measurable for all Radon measures on $S_{\tilde\phi}(M)$. Again by the BFT criterion, $\phi$ is NIP. \end{proof}
\begin{rmk} \label{invariant-type} In \cite{HP}, the authors showed that, in 0-1 valued logic, every global invariant type admits a Borel definition assuming NIP. This implies, in particular, that every global $M$-invariant type admits a universally measurable definition. Note that Proposition~\ref{NIP-dfn} is an extension of their result to continuous logic for global finitely satisfiable types.
Moreover, one can show that $\phi(x,y)$ is NIP on a separable model $M$ if and only if every global $M$-finitely satisfiable $\phi$-type is Baire 1 definable (see also Fact~\ref{Polish-compact}, Theorem~\ref{Baire-dfn} and Remark~\ref{NIP=Baire 1} below). \end{rmk}
Here we mention a characterization of NIP in
terms of measure algebra. For this, a definition is needed. Let $\phi(x,y)$ be a formula, $r\in\mathbb{R}$ and $a\in M$. By $\{\phi(x,a)\geqslant r\}$ we denote the set $\{p\in S_\phi(M):\phi(p,a)\geqslant r \}$. The set $\{ \phi(x,a)\leqslant r\}$ has the obvious meaning. The measure algebra generated by $\phi$ on $S_\phi(M)$ is the measure algebra generated by all sets of the forms $\{\phi(x,a)\geqslant r\}$ and $\{\phi(x,b)\leqslant s\}$ where $a,b\in M$ and $r,s\in \mathbb{R}$. One can assume that all $r,s$ are rational numbers. Now, a straightforward translation of the proof for classical first order theories, as can be found in \cite[Theorem~3.14]{Keisler}, implies that:
\begin{proposition} \label{Keisler-NIP} Let $T$ be a theory and $\phi(x,y)$ a formula. Then the following are equivalent: \begin{itemize}
\item [{\em (i)}] $\phi$ is NIP.
\item [{\em (ii)}] For every sufficiently saturated model $M$, each Radon measure on $S_\phi(M)$ has a
countably generated measure algebra (which is the
measure algebra generated by $\phi$). \end{itemize} \end{proposition}
Now we are going to give another characterization of NIP. First we need some definitions. Let $\psi$ be a measurable function on
$(S_{\tilde\phi}(M),\mu)$ where $\mu$ is a probability Radon measure on $S_{\tilde\phi}(M)$. Then $\psi$ is called an {\em almost ${\tilde\phi}$-definable relation over $M$} if there is a sequence $g_n:S_{\tilde\phi}(M)\to \mathbb{R}$, $|g_n|\leqslant
|{\tilde\phi}|$, of continuous functions such that $\lim_n g_n(p)=\psi(p)$ for almost all $p\in S_{\tilde\phi}(M)$. (We note that by the Stone-Weierstrass theorem every continuous function $g_n:S_{\tilde\phi}(M)\to \mathbb{R}$ can be expressed as a uniform limit of algebraic combinations of (at most countably many) functions of the form $p\mapsto {\tilde\phi}(p, b)$, $b\in M$.)
An almost $\tilde\phi$-definable relation $\psi(y)$ over $M$ defines a coheir $p(x) \in S_\phi(M^*)$ (of a $\phi$-type over $M$) if the set $A_0\subseteq S_{\tilde\phi}(M)$ is measurable and $\mu(A_0)=1$, where $A_0=\{b\in S_{\tilde\phi}(M):\phi(p,b) =\psi(b)\}$. In this case we say that $p$ is ($\mu$-)\emph{almost definable}. It is easy to check that almost definability is well defined. Suppose that every coheir $p$ is almost definable by a measurable function $\psi^p$.
Then, we say that $p$ is {\em almost equal to $q$}, denoted by $p\equiv_\mu q$, if $\psi^p=\psi^q$ $\mu$-almost everywhere. For a coheir $p(x)$, define $[p]_\mu=\{q\in S_\phi(M^*): p\equiv_\mu q \text{ and $q$ is a coheir}\}$ and
$[S_\phi]_\mu(M)=\{[p]_\mu:p\in S_\phi(M^*) \text{ is a coheir}\}$. Then $[S_\phi]_\mu(M)$ has a natural topology which is defined by metric $d([p]_\mu,[q]_\mu)=\int|\psi^p-\psi^q|d\mu$ for coheirs $p,q\in S_\phi(M^*)$.
Recall that the density character of a topological space $X$ is the least infinite cardinal $\kappa$ such that $X$ has a dense subset of cardinality at most $\kappa$. When measuring the size of a structure we will use its density character (as a metric space), denoted $\|M\|$, rather than its cardinality. Similarly, since $[S_\phi]_\mu(M)$ is a metric space, we measure the size of $[S_\phi]_\mu(M)$ by its density character $\|[S_\phi]_\mu(M)\|$.
\begin{thm}[Almost definability of coheirs] \label{almost-dfn}
Let $T$ be a theory and $\phi(x,y)$ a formula. Then the following are equivalent: \begin{itemize}
\item [{\em (i)}] $\phi$ is NIP.
\item [{\em (ii)}] For every model $M$ and measure $\mu$ on $S_{\tilde\phi}(M)$, every
coheir of a type over $M$ is $\mu$-almost definable, and $\|[S_{\phi}]_\mu(M)\|\leqslant\|M\|$. \end{itemize} \end{thm} \begin{proof} (i)~$\Rightarrow$~(ii): Suppose that the coheir $p\in S_\phi(M^*)$ is definable by a universally measurable relation $\psi$ on $S_{\tilde\phi}(M)$. Let $\mu$ be a Radon measure on $S_{\tilde\phi}(M)$. Then there is a sequence $g_n$ of continuous functions on $S_{\tilde\phi}(M)$ such that $g_n\to \psi$ in $L^1(\mu)$ (see \cite[7.9]{Folland}), and hence
a subsequence (still denoted by $g_n$) that converges to $\psi$ $\mu$-almost everywhere. So $p$ is $\mu$-almost definable. Moreover, by the Stone-Weierstrass theorem,
$\|C(S_{\tilde\phi}(M))\|\leqslant\|M\|$. Now, since $C(S_{\tilde\phi}(M))$ is dense in $L^1(\mu)$ (again see
\cite[7.9]{Folland}), $\|L^1(\mu)\|\leqslant\|M\|$. By definition,
$\|[S_{\phi}]_\mu(M)\|\leqslant\|M\|$ and the proof is completed.
(ii)~$\Rightarrow$~(i): Let $p\in S_\phi(M^*)$ be a coheir of a type over $M$. Suppose that $p_{a_i}\to p$ where $p_{a_i}$ is realized by $a_i\in M$. Then the function $\psi(y)=\lim_i\phi(a_i,y)$ is measurable for all Radon measures on $S_{\tilde\phi}(M)$. Indeed, by definition, for each Radon measure $\mu$, there is a measurable function $\psi_\mu$ such that $\psi_\mu(b)=\phi(p,b)$ $\mu$-almost everywhere. Since $\mu$ is Radon (and so is complete), and $\psi=\psi_\mu$ almost everywhere, $\psi$ is $\mu$-measurable (see \cite[2.11]{Folland}). Then, by Proposition~\ref{NIP-dfn}, the proof is completed. \end{proof}
\subsection{Baire 1 definable types} More can be said if one works with a separable model. Let $X$ be a Polish space. A function $f:X\to{\mathbb{R}}$ is of Baire class 1 if it can be written as the pointwise limit of a sequence of continuous functions. The set of Baire class 1 functions on $X$ is denoted by $B_1(X)$.
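For instance (a standard example, not taken from the text), on $X=\mathbb{R}$ the continuous functions $f_n(x)=\max(0,\,1-n|x|)$ satisfy $$\lim_{n\to\infty} f_n(x)=\mathbf{1}_{\{0\}}(x)\qquad(x\in\mathbb{R}),$$ i.e. the pointwise limit is the characteristic function of $\{0\}$, which is therefore of Baire class 1 although it is not continuous.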
\begin{fct}[BFT Criterion for Polish spaces, \cite{BFT}, Corollary~4G] \label{Polish-compact} Let $X$ be a Polish space, and $A\subseteq C(X)$ a pointwise bounded set. Then the following are equivalent: \begin{itemize}
\item [{\em (i)}] $A$ is relatively compact in $B_1(X)$.
\item [{\em (ii)}] $A$ is relatively sequentially compact in ${\mathbb{R}}^X$, or $A$ has the RSC. \end{itemize} \end{fct}
Fremlin's notion of an angelic topological space is as follows: a regular Hausdorff space $X$ is {\em angelic} if (i) every relatively countably compact set in $X$ is relatively compact, and (ii) the closure of a relatively compact set is precisely the set of limits of its sequences. The following is the principal result of \cite{BFT}.
\begin{fct}[\cite{BFT}, Theorem~3F] \label{Polish-angelic} If $X$ is a Polish space, then $B_1(X)$ is angelic under the topology of pointwise convergence. \end{fct}
Let $M$ be a structure and $\phi(x,y)$ a formula. A Baire class 1 function $\psi:S_{\tilde\phi}(M)\to{\mathbb{R}}$ defines $p\in S_\phi(M)$ if $\phi(p,b)=\psi(b)$ for all $b\in M$. We say $p$ is Baire 1 definable if some Baire class 1 function $\psi$ defines it. The following is another criterion for NIP.
\begin{thm}[Baire 1 definability of types] \label{Baire-dfn}
Let $\phi(x,y)$ be a NIP formula
on a separable model $M$. Then every $p\in S_{\phi}(M)$ is definable by a Baire 1 function $\psi(y)$ on $S_{\tilde\phi}(M)$. \end{thm} \begin{proof} The proof is an easy consequence of Fact~\ref{Polish-compact}. Suppose that $p_{a_i}\to p\in S_{\phi}(M)$ where $a_i\in M$. (Recall that the set of all types realized in $M$ is dense in $S_{\phi}(M)$.) For each $a\in M$, define $\phi^a:M\to {\Bbb R}$ by $\phi^a(b)=\phi(a,b)$.
Since $\phi$ is NIP, the set $\hat{A}=\{\phi(a,y):S_y(M)\to{ \mathbb{R}}:a\in M\}$ is relatively sequentially compact in ${\Bbb R}^{S_y(M)}$, and in particular the set $A=\{\phi^a:a\in M\}$ is relatively sequentially compact in ${\Bbb R}^M$. Now by Fact~\ref{Polish-compact}, since $M$ is Polish, also $A$ is relatively compact in $B_1(M)$. Thus, there is a $\psi\in B_1(M)$ such that $\phi^{a_i}\to \psi$, so $p$ is definable by a Baire class 1 function. Moreover, since $B_1(M)$ is angelic, there is some sequence $\phi^{a_n},a_n\in M$ such that $\phi^{a_n}\to\psi$. \end{proof}
\begin{rmk} \label{NIP=Baire 1} Note that one can say more: $\phi$ is NIP on $M$ if and only if every coheir is Baire 1 definable. This is discussed in detail in \cite{KP}. \end{rmk}
\noindent\hrulefill
\section{SOP} \label{5}
In this section we work in classical logic. One reason for restricting our attention to the classical case is to make this section more accessible to model-theorists and other interested readers.
In \cite{Sh} Shelah introduced the strict order property as complementary to the independence property: a theory has OP iff it has IP or SOP. In functional analysis, the Eberlein-\v{S}mulian theorem states that a subset of a Banach space is not relatively weakly compact iff it has a sequence without any weak Cauchy subsequence or it has a weak Cauchy sequence with no weak limit. In fact there is a correspondence between the Eberlein-\v{S}mulian theorem and Shelah's result above. To determine this correspondence, we first give a topological description of the strict order property, and then study the above dividing line.
In classical ($\{0,1\}$-valued) model theory a formula $\phi(x,y)$ has the {\em strict order property} (or short SOP) if there exists a sequence $(a_i:i<\omega)$ in the monster model $\mathcal{U}$ such that for all $i<\omega$, $$\phi({\mathcal{U}},a_i)\subsetneqq\phi({\mathcal U},a_{i+1}).$$ The acronym SOP stands for the strict order property and NSOP is its negation. We can assume that $\phi(x,y)$ is a 0-1 valued function on $\mathcal U$ such that $\phi(a,b)=1$ iff $\models\phi(a,b)$. Then $\phi(x,y)$ has the strict order property if and only if there are sequences $(a_i,b_j:i,j<\omega)$ in $\mathcal U$ such that for each $b\in{\mathcal U}$, the sequence $\{\phi(b,a_i)\}_i$ is increasing -- therefore the pointwise limit $\psi(x):=\lim_i\phi(x,a_i)$ is well-defined -- and $\phi(b_j,a_j)<\phi(b_j,a_{j+1})$ for all $j<\omega$.
Now, suppose that the $\phi(x,a_i)$ are continuous functions on $S_\phi({\mathcal U})$, the space of all complete $\phi$-types. Suppose that $\phi$ does not have the SOP, and $\phi(x,a_i)\nearrow\psi(x)$. Then $\psi:S_\phi({\mathcal U})\to\{0,1\}$ is continuous, because there is a $k$ such that $\phi(x,a_k)=\phi(x,a_{k+1})=\cdots$. Conversely, suppose that $\phi(x,a_i)\nearrow\psi(x)$ and $\psi$ is continuous. It is a standard fact that an increasing sequence of continuous functions on a compact space which converges to a continuous function converges uniformly (Dini's Theorem). Therefore, our sequence is eventually constant, because the logic is 0-1 valued.
Therefore, it seems right to say that the SOP in classical logic is equivalent to the existence of a pointwise convergent sequence (not necessarily increasing) of continuous functions whose limit is not continuous. Our next goal is to convince the reader that, up to a technical consideration, this is indeed the case.
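A minimal standard example (ours, not from the text): in a dense linear order without endpoints take $\phi(x,y)=$ ``$x<y$'' and a strictly increasing sequence $(a_i)$ in the monster model ${\mathcal U}$. Then $$\phi({\mathcal{U}},a_1)\subsetneqq\phi({\mathcal U},a_2)\subsetneqq\cdots,\qquad \psi(x):=\lim_i\phi(x,a_i)=1 \iff x<a_i \text{ for some } i,$$ so $\phi$ has the SOP, and by the Dini argument above $\psi$ cannot be continuous on $S_\phi({\mathcal U})$, since the increasing sequence $\phi(x,a_i)$ is never eventually constant.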
In functional analysis, a Banach space $X$ is called {\em weakly sequentially complete} if every weak Cauchy sequence has a weak limit. Similarly we define the following notion and
will observe that this notion corresponds to NSOP on the model-theoretic side.
\begin{dfn} Let $X$ be a topological space and $F\subseteq C(X)$. We say that $F$ has the {\em weak sequential completeness property} (SCP for short) if the limit of each pointwise convergent sequence $\{f_n\}\subseteq F$ is continuous. \end{dfn}
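A classical illustration of the failure of this property (ours, not from the text): on $X=[0,1]$ the sequence $f_n(x)=x^n$ lies in $C(X)$ and $$\lim_{n\to\infty}x^n=0 \ \ (0\leqslant x<1),\qquad \lim_{n\to\infty}1^n=1,$$ so the pointwise limit is the characteristic function of $\{1\}$, which is not continuous; hence $\{x^n:n<\omega\}$ does not have the SCP and, by Fact~\ref{SOP->OP} below, is not relatively weakly compact in $C([0,1])$.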
As we will see shortly, the following statement is a generalization of a well known model theoretic fact, i.e. SOP implies OP.
\begin{fct} \label{SOP->OP} Let $X$ be a compact space and $F\subseteq C(X)$ a bounded subset. If $F$ is relatively weakly compact in $C(X)$, then $F$ has the SCP. \end{fct} \begin{proof} Suppose that $F$ is relatively weakly compact in $C(X)$, and $\{f_n\}$ is a sequence in $F$ which converges pointwise to $f$. Since the pointwise topology and the weak topology are the same (see Fact~\ref{Grothendieck-lemma} above), $f$, being a cluster point of $\{f_n\}$, is continuous. \end{proof}
The next result is another application of the Eberlein-Grothendieck criterion:
\begin{thm} \label{nip+scp=stable} Let $X$ be a compact space and $A\subseteq C(X)$ be bounded. Then $A$ is relatively weakly compact in $C(X)$ iff it has RSC and SCP. \end{thm} \begin{proof} First we show that $cl_p(A)\subseteq C(X)$ if every sequence of $A$ has a convergent subsequence in ${ \mathbb{R}}^X$ and the limit of every convergent sequence of $A$ is continuous. Suppose that $A$ has RSC and SCP. Let $\{f_n\}_n\subseteq A$ and $\{a_m\}_m\subseteq X$, and suppose that the double limits $\lim_m\lim_n f_n(a_m)$ and $\lim_n\lim_m f_n(a_m)$ exist. Let $a$ be a cluster point of $\{a_m\}_m$. By RSC, there is a convergent subsequence $f_{n_k}$ such that $f_{n_k}\to f$. Therefore $\lim_m\lim_{n_k} f_{n_k}(a_m)=\lim_mf(a_m)$ and $\lim_{n_k}\lim_m f_{n_k}(a_m)=\lim_{n_k}f_{n_k}(a)=f(a)$. By SCP, $\lim_mf(a_m)=f(a)$. Since the double limits exist, it is easy to verify that $\lim_m\lim_n f_{n}(a_m)=\lim_m\lim_{n_k} f_{n_k}(a_m)$ and $\lim_n\lim_m f_{n}(a_m)=\lim_{n_k}\lim_m f_{n_k}(a_m)$. So $A$ has the double limit property and thus it is relatively weakly compact in $C(X)$. The converse follows from Facts~\ref{IP->OP} and \ref{SOP->OP}. \end{proof}
\begin{proposition} \label{SCP->NSOP} If the set $\{\phi(x,a):a\in\mathcal{U}\}$ has the SCP, then $\phi(x,y)$ is NSOP. \end{proposition} \begin{proof} Suppose, for a contradiction, that $\{\phi(x,a):a\in\mathcal{U}\}$ has the SCP
and $\phi$ is SOP. By SOP, there are $(a_ib_i:i<\omega)$ in the monster model $\mathcal U$ such that $\phi({\mathcal U},a_i)\leqslant\phi({\mathcal U},a_{i+1})$ and $\phi(b_j,a_i)<\phi(b_i,a_j)$ for all $i<j$. Let $b$ be a cluster point of $\{b_i\}_{i<\omega}$. By SCP, $\phi(S_\phi({\mathcal U}),a_i)\nearrow\psi$ and $\psi$ is continuous. But $\lim_i\lim_j\phi(b_j,a_i)=0<1=\lim_i\lim_j\phi(b_i,a_j)$ and by continuity $\psi(b)<\psi(b)$, a contradiction. \end{proof}
The following example shows that the converse does not hold in analysis. It was suggested to us by M\'{a}rton Elekes.
\begin{exa} \label{exa}
Let $X$ be the Cantor set. Let $H=\{0\}\cup(X\cap(2/3,1))$. (We note that $H$ is $\Delta_2^0$, i.e. it is $F_\sigma$ and $G_\delta$ at the same time, but neither open nor closed.) Then it is easy to see that there exists a sequence $H_n$ of clopen subsets of $X$ such that if $f_n$ is the characteristic function of $H_n$ and $f$ is the characteristic function of $H$ then $f_n\to f$ pointwise. Let $A =\{f_n :n<\omega\}$. Then all $f_n$ are continuous, uniformly bounded (even 0-1 valued), the pointwise closure is $A\cup\{f\}$ (which are all Baire class 1 functions), and all monotone sequences in $A$ are eventually constant: indeed, if there were a monotone sequence in $A$ that is not eventually constant then its limit would be the characteristic function of an open or a closed set, but $H$ is neither open nor closed. Also, we note that $A$ has the RSC but it is not relatively weakly compact in $C(X)$. \end{exa}
Again we give a topological presentation of a model theoretic property. For this, we need some definitions. Let $M$ be a sufficiently saturated structure and $\phi:M\times M\to\{0,1\}$ a formula. For subsets $B,D\subseteq M$, we say that $\phi(x,y)$ has the {\em order property on } $B\times D$ (OP on $B\times D$ for short) if there are sequences $(a_i)\subseteq B$, $(b_i)\subseteq D$ such that $\phi(a_i,b_j)$ holds if and only if $i<j<\omega$. We will say that $\phi(x,y)$ has the {\em NIP on
$B\times D$}, if for the set $A=\{\phi(a,y):S_y(D)\to\{0,1\}~|a\in B\}$, any of the cases in Lemma~\ref{equivalence} holds.
\begin{proposition} \label{NSOP=SCP} Suppose that $T$ is a theory. Then the following are equivalent: \begin{itemize}
\item [{\em (i)}] $T$ is NSOP.
\item [{\em (ii)}] For each indiscernible sequence $(a_n)_{n<\omega}$ and formula $\phi(x,y)$, if the sequence $(\phi(x,a_n))_{n<\omega}$ pointwise converges on $S_\phi(\mathcal{U})$, then its limit is continuous. \end{itemize} \end{proposition} \begin{proof} (i) $\Rightarrow$ (ii): Suppose that there are an indiscernible sequence $(a_n)_{n<\omega}$ and a formula $\phi(x,y)$ such that the sequence $(\phi(x,a_n))_{n<\omega}$ pointwise converges but its limit is not continuous. Since the limit is not continuous, $\tilde{\phi}(y,x)=\phi(x,y)$ has OP on $\{a_n\}_{n<\omega}\times S_\phi({\mathcal U})$. Since every sequence in $\{\phi(x,a_n)\}_{n<\omega}$ has a pointwise convergent subsequence, $\tilde{\phi}(y,x)$ is NIP on $\{a_n\}_{n<\omega}\times S_\phi({\mathcal U})$. The following argument is classic (see \cite{Poi} and \cite{S}). Since $\tilde{\phi}(y,x)$ has OP, there is a sequence $\{b_N\}\subseteq S_\phi({\mathcal U})$ such that $\tilde\phi(a_i,b_N)$ holds if and only if $i<N$.
By NIP, there is some integer $n$ and $\eta : n \rightarrow \{0,1\}$ such that $\bigwedge_{i<n} \tilde\phi(a_i,x)^{\eta(i)}$ is inconsistent. (Recall that for a formula $\varphi$, we use the notation $\varphi^0$ to mean $\neg\varphi$ and $\varphi^1$ to mean $\varphi$.) Starting with that formula, we change one by one instances of $\neg\tilde\phi(a_i,x) \wedge \tilde\phi(a_{i+1},x)$ to $\tilde\phi(a_i,x) \wedge \neg\tilde\phi(a_{i+1},x)$. Finally, we arrive at a formula of the form $\bigwedge_{i<N} \tilde\phi(a_i,x) \wedge \bigwedge_{N\leq i<n}
\neg\tilde\phi(a_i,x)$. The tuple $b_N$ satisfies that formula.
Therefore, there is some $i_0<n$, $\eta_0 : n \rightarrow \{0,1\}$ such that $$\bigwedge_{i\neq i_0, i_0+1} \tilde\phi(a_i,x)^{\eta_0(i)} \wedge \neg\tilde\phi(a_{i_0},x) \wedge \tilde\phi(a_{i_0+1},x)$$ is inconsistent, but $$\bigwedge_{i\neq i_0, i_0+1} \tilde\phi(a_i,x)^{\eta_0(i)} \wedge \tilde\phi(a_{i_0},x) \wedge \neg\tilde\phi(a_{i_0+1},x)$$ is consistent. Let us define $\varphi(\bar a,x)=\bigwedge_{i\neq i_0,i_0+1} \tilde\phi(a_i,x)^{\eta_0(i)}$. Increase the sequence $(a_i : i<\omega)$ to an indiscernible sequence $(a_i:i\in \mathbb Q)$. Then for $i_0 \leq i<i' \leq i_0+1$, the formula $\varphi(\bar a,x) \wedge \tilde\phi(a_i,x) \wedge \neg\tilde\phi(a_{i'},x)$ is consistent, but $\varphi(\bar a,x) \wedge \neg\tilde\phi(a_i,x) \wedge \tilde\phi(a_{i'},x)$ is inconsistent. Thus the formula $\psi(x,y) = \varphi(\bar a,x) \wedge \tilde\phi(y,x)$ has the strict order property.
(ii) $\Rightarrow$ (i): Suppose that the formula $\phi(x,y)$ has SOP as witnessed by a sequence $(a_nb_n:n<\omega)$. Then the formula $\psi(y_1,y_2)=\forall x(\phi(x,y_1)\to\phi(x,y_2))$ defines a continuous pre-order for which the sequence $(a_n:n<\omega)$ forms an infinite chain. Replace $(a_n)_{n<\omega}$ by an indiscernible sequence $(c_n)_{n<\omega}$, and return to $\phi(x,y)$. Therefore, $\phi(x,y)$ has SOP as witnessed by the sequence $(c_nb_n:n<\omega)$. Now, $\phi(S_\phi({\mathcal U}),c_n)\nearrow\varphi$ but $\varphi$ is not continuous. \end{proof}
We now provide a proof of Shelah's theorem (\cite{Sh}, Theorem~4.1).
\begin{cor} \label{Shelah-continuous} Suppose that $T$ is NIP and NSOP. Then $T$ is stable. \end{cor} \begin{proof} Let $\phi(x,y)$ be a formula, $(a_n)_{n<\omega}$ an indiscernible sequence, and $(b_n)_{n<\omega}$ an arbitrary sequence. Suppose that the double limits $\lim_m\lim_n\phi(b_n,a_m)$ and $\lim_n\lim_m\phi(b_n,a_m)$ exist. By NIP, there is a convergent subsequence $\phi(x,a_{m_k})$ such that $\phi(x,a_{m_k})\to\psi(x)$ on $S_\phi(\mathcal{U})$. Therefore, $\lim_n\lim_k\phi(b_n,a_{m_k})=\lim_n\psi(b_n)$ and $\lim_k\lim_n\phi(b_n,a_{m_k})=\lim_k\phi(b,a_{m_k})=\psi(b)$ where $b$ is a cluster point of $\{b_n\}$. By NSOP, $\lim_n\psi(b_n)=\psi(b)$. So the double limits are the same and thus $T$ is stable. (Compare Theorem~\ref{nip+scp=stable}.) \end{proof}
\subsection*{Theorems of Eberlein-\v{S}mulian and Shelah} The well known compactness theorem of Eberlein and \v{S}mulian says that relative compactness, relative sequential compactness and relative countable compactness are equivalent for the weak topology of a Banach space. Now, we show the correspondence between Shelah's theorem and the Eberlein-\v{S}mulian theorem.
\begin{proposition} \label{Shelah=Eberlein} Suppose that $X$ is a space of the form $S_\phi(M)$ and $A=\{\phi(a,y):a\in M\}$ where $M$ is a sufficiently saturated model of a theory $T$ and $\phi(x,y)$ a formula. Then the following are equivalent. \begin{itemize}
\item [{\em (i)}] {\bf The Eberlein-\v{S}mulian theorem:}
For every $A\subseteq C(X)$, the following statements are equivalent:
\begin{itemize}
\item [{\em (a)}] The weak closure of $A$ is weakly compact in $C(X)$.
\item [{\em (b)}] Each sequence of elements of $A$ has a subsequence that is weakly convergent in $C(X)$.
\end{itemize}
\item [{\em (ii)}] {\bf Shelah's theorem:} The following statements are equivalent:
\begin{itemize}
\item [{\em (a$'$)}] $T$ is stable.
\item [{\em (b$'$)}] $T$ has the NIP and the NSOP.
\end{itemize} \end{itemize} \end{proposition} \begin{proof} First, we note that by the Eberlein-Grothendieck criterion, (a)~$\Leftrightarrow$~(a$'$).
It suffices to show that (b)~$\Leftrightarrow$~(b$'$). Suppose that $(f_n)$ is a sequence of the form $(\phi(a_n,y))$ where $(a_n)$ is an indiscernible sequence. By (b), there is a subsequence $(f_{n_k})$ that is convergent. Therefore, $T$ has NIP. Again by (b), its limit is continuous, so $T$ has NSOP, and (b$'$) holds. Conversely, suppose that $T$ has NIP and NSOP. Let $(f_n)$ be a sequence of the form $(\phi(a_n,y))$ where $(a_n)$ is an arbitrary sequence. By NIP, $(f_n)$ has a convergent subsequence $(f_{n_k})$. Replace $(a_n)$ by an indiscernible sequence $(c_n)$. Then, by NSOP, $f=\lim_kf_{n_k}$ is continuous. So, (b) holds. \end{proof}
To summarize: $$\begin{array}{cccccc}
\textrm{Logic:~~~~~~~~~} & \textrm{Stable} & \Longleftrightarrow & \textrm{NIP} & + & \textrm{ NSOP} \\
\textrm{ } & & & & & \\
\textrm{Analysis:~~~~~} & \textrm{Weakly Compact} & \Longleftrightarrow & \textrm{RSC} & + & \textrm{SCP} \end{array}$$
Of course, the Eberlein-\v{S}mulian theorem is proved for arbitrary Banach spaces (even normed spaces), but it follows easily from the case $C(X)$ (see \cite{Fremlin4}, Theorem~462D). On the other hand, the above argument implicitly shows that countable compactness implies compactness.
Earlier we defined angelic topological spaces. Roughly an angelic space is one for which the conclusions of the Eberlein-\v{S}mulian theorem hold. By the previous observations one can say that `first order logic is angelic.'
\noindent\hrulefill
\end{document} |
\begin{document}
\raggedbottom \topmargin-49pt \textheight620pt
\title{Topology measurement within the histories approach} \addtocounter{footnote}{+1} \footnotetext{The Centre of Mathematics, Nevsky pr., 39, 191011, St-Petersburg, Russia} \footnotetext{Department of Mathematics, SPb UEF, Griboyedova 30/32, 191023, St-Petersburg, Russia (address for correspondence)} \addtocounter{footnote}{+1} \footnotetext{Division of Mathematics, Istituto per la Ricerca di Base, I-86075, Monteroduni (IS), Molise, Italy}
\begin{abstract} An idealised experiment estimating the spacetime topology is considered in both classical and quantum frameworks. The latter is described in terms of the histories approach to quantum theory. A procedure creating combinatorial models of topology is suggested. The correspondence between these models and discretised spacetime models is established. \end{abstract}
\section{Introduction}
Within the conventional account of relativity theory the structure of spacetime as a differentiable manifold is supposed to be given, and it is the metric structure that is subject to measurement and change. So, the topology of spacetime is not {\em an observable.}
Nowadays there is no fully fledged theory in which the spacetime topology would be a variable, nor even, in a sense, a perceivable entity. However, even though such a theory does not exist, we may try to consider idealized experiments which would let us learn the spacetime topology. That means that we assume the spacetime to be a manifold, and we only wish to {\em determine} its topological structure. Accordingly, any observer should believe that the topology of the area of his observation (that is, of the appropriate coordinate neighborhood) is that of a ball. So, in order to recover the entire spacetime topology we have to find out how the balls overlap. However, no realistic experiment (having at most a finite number of outcomes) can tell us this; we are only able to learn whether the regions have common points (section \ref{semp}).
Such an experimental scheme inevitably needs {\em several} observers, but the problem of event identification arises: two observers registering an event must be sure that they really see the same one. We emphasize that this is a matter of {\em convention:\/} two observers should have a way to identify remote events. This leads to the concept of organized observation (section \ref{sentang}).
The obtained results of observations then ought to be somehow interpreted. We may do that in either a classical or a quantum way. In the classical approach this leads to the Sorkin discretization scheme (section \ref{sfinsub}).
The attempt to put a scheme of topology estimation into the framework of quantum mechanics requires the cooperative nature of the observations to be explicitly captured in the theory. It is the notion of {\em homogeneous history} in the histories approach to quantum theory \cite{ha} that can be used for this purpose. Within the histories approach we introduce the notion of the 'team' (an organized set of observers). To carry out the mathematical description of the team we had to impose an additional mathematical structure. It turned out that this structure can be represented by that of an associative algebra (section \ref{sentang}). It is worth mentioning that such structures can be introduced in different ways, reflecting different ways of organizing the team of observers.
\[ \left( \begin{array}{c} \hbox{topology} \cr \hbox{measurement} \end{array} \right) \,=\, \left( \begin{array}{c} \hbox{homogeneous} \cr \hbox{history} \end{array} \right) \,+\, \left( \begin{array}{c} \hbox{organization} \cr \hbox{of observers} \end{array} \right) \]
There are no spacetime points at all within the histories approach, and the goal of the introduced additional structure is to 'manufacture' them. We suggest an algebraic machinery for building topological spaces (namely, the Rota topologies on primitive spectra of appropriate algebras) and call it the {\em spatialization procedure} (section \ref{sspat}).
In order to make sure of the viability of our quantum construction we should take care of the {\em correspondence principle}: we should be able to carry out quasiclassical measurements. That means possessing an organization scheme for the team such that the result of the spatialization procedure is the same as in the classical approach. This reduces to a purely mathematical problem of the existence of an appropriate algebraic structure. We suggest a constructive solution of this problem using so-called incidence algebras (section \ref{sialg}).
\section{Empirical topology}\label{semp}
Let us consider, following \cite{g2}, an idealized experimental scheme for determining the topological structure of spacetime. Consider a team $\Lambda$ of observers. Each of them assumes himself to be in the center of an area ${{\cal O}_{\lambda}}$ $(\lambda \in \Lambda)$ homeomorphic to an open ball. We require it to satisfy the correspondence principle: in fact, looking around we do not see holes or borders in the sky. These areas $\{{{\cal O}_{\lambda}}\}$ will form an atlas for the spacetime manifold in which they lie. Then the problem of learning the structure of the entire manifold arises. It was solved by Alexandrov \cite{alexandrov} by introducing the notion of the {\em nerve} of the covering: the result is encoded in the structure of mutual intersections of the elements of the covering.
Within the proposed scheme, the problem is to {\em experimentally} verify which areas ${{\cal O}_{\lambda}}$ do overlap. This is done by exchanging information between observers about the events they observe. The results of the observations could be put into the following table (Tab. \ref{tab1}) whose rows correspond to events and columns correspond to the observers.
\begin{table}[h!t] \begin{center}
\begin{tabular}{||c||c|c|@{$\:\ldots\:$}|c|@{$\:\ldots\:$}||} \hline Event label & ${\cal O}_1$ & ${\cal O}_2$ & ${\cal O}_\lambda$ \cr \hline 1 & + & -- & + \cr 2 & + & + & + \cr \ldots & \ldots & \ldots & \ldots \cr $n$ & -- & + & + \cr \ldots & \ldots & \ldots & \ldots \cr \hline \end{tabular} \end{center} \caption{The results of observations: if an observer $\lambda$ registers the event $i$ we put "$+$" into the appropriate cell of the table, otherwise "$-$" is put.} \label{tab1} \end{table}
The conclusions we draw from the experiments necessarily have a statistical nature. In particular, the statement "the areas of two observers do overlap" is merely a statistical hypothesis. To verify it, the following criterion is suggested:
\begin{equation}\label{epropos} \mbox{\parbox[c]{100mm}{\em If it occurs that the observers ${\cal O}_1$ and ${\cal O}_2$ have registered the same event, then the areas of their observations do overlap. } } \end{equation}
Note that this criterion is {\em statistical} rather than {\em logical}. We emphasize that after the observations were carried out we only accept or reject the appropriate hypothesis.
When such a hypothesis is accepted, it gives us the complete information about the {\em nerve} of the covering. One might think that now we are able to recover the global topology by gluing the balls together. But this is an illusion: the obstacle is that we have nothing to glue! Moreover, the geometrical realization of the nerve may be a source of artifacts: for instance, we can cover an interval $(0,1)$ (having dimension 1) in such a way
\[ \begin{array}{rcl} {\cal O}_1 &=& (0,0.6) \\ {\cal O}_2 &=& (0.4,1) \\ {\cal O}_3 &=& (0.2,0.8) \end{array} \]
\noindent that the appropriate nerve is realized by a triangle (having dimension 2). Supposing we could exhaust {\em all} the points of spacetime, the "real ultimate" structure of the spacetime manifold would be recovered. However, what we can really carry out is to realize a "homogeneous history" whose outcome is recorded in a table like Tab. \ref{tab1}.
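As a quick check of this artifact (a sketch of our own; the helper \texttt{overlap} and the variable names are not from the text), one can compute the nerve of the covering directly:
\begin{verbatim}
from itertools import combinations

# The covering of the interval (0,1) used in the text.
cover = {1: (0.0, 0.6), 2: (0.4, 1.0), 3: (0.2, 0.8)}

def overlap(intervals):
    """True iff the given open intervals have a common point."""
    lo = max(a for a, b in intervals)
    hi = min(b for a, b in intervals)
    return lo < hi

# Simplices of the nerve: sets of observers whose areas intersect.
nerve = [set(s) for k in range(1, len(cover) + 1)
         for s in combinations(cover, k)
         if overlap([cover[i] for i in s])]

print(nerve)
# [{1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}]: the top simplex
# is 2-dimensional although the covered space (0,1) is 1-dimensional.
\end{verbatim}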
\section{Entanglement in histories approach}\label{sentang}
In this section we introduce topology measurements into the histories approach to quantum theory. It will be based on the algebraization scheme of the histories approach suggested by C.Isham \cite{qlha}. The key issues of this scheme are
\begin{itemize} \item to consider {\em propositions} about histories rather than histories themselves \item to span a linear space on elementary propositions about histories \item to endow the propositions themselves by the additional structure of orthoalgebra \end{itemize}
\noindent thus organizing them in a way similar to conventional quantum mechanics.
In this paper a similar idea is realized. We consider \begin{itemize} \item[*] propositions about topologies rather than the topologies themselves \item[*] a linear space spanned on the elementary propositions about topologies \item[*] the structure of associative algebra on this linear space \end{itemize}
Let us specify what we mean by propositions about topologies. There are at least three ways to introduce a topology on a set $M$ \cite{ishamtop}. The first two of them are in a sense exhaustive: one either defines (lists) all open sets or defines the operation of closure on all subsets of $M$. The third way is more 'economic': to declare which sequences converge. It will be suitable for us to replace topology by convergences, for both technical and operationalistic reasons (section \ref{sspat}). In fact, any realistic experiment can yield us at most a finite sequence of results. The associative algebras related with propositions about topology will be built in section \ref{sialg}.
Let us figure out how the notion of an organized team of observers can be incorporated into the histories approach. Let
\begin{equation}\label{eh1} A^1_{t_1}U_{t_1t_2} A^2_{t_2}\ldots U_{t_{n-1}t_n}A^n_{t_n}\psi_0 \end{equation}
\noindent be a homogeneous history. The operators $A^i$ are assumed to act in a Hilbert space ${\cal H}$. It was suggested by Isham \cite{qlha} to describe the history (\ref{eh1}) by an element of the tensor product $\otimes_{i=1,n}{\cal H}$. Then we assume that there is an 'organizer' of the history whose status is {\em a priori} the same as that of every member of the team. That means that he has the same state space ${\cal H}$. Thus each history, that is, a vector from $\otimes_{i=1,n}{\cal H}$, should be associated with a vector in the state space ${\cal H}$ of the organizer. The suggested correspondence should meet the following requirements:
\begin{itemize} \item[(i)] Neither the number of observers nor their particular choice of what to measure should influence the form of this organization \item[(ii)] If we have an experiment which is a refinement of different coarser experiments, their results should not contradict \item[(iii)] This correspondence should be linear in order to support the superposition principle \end{itemize}
Mathematically this correspondence is introduced by defining a family of linear mappings ${\bf O}_n$ ($n=1,2,\ldots$):
\begin{equation} \label{eooo} {\bf O}_n :{\cal H} \otimes {\cal H} \otimes \ldots \otimes {\cal H}\rightarrow {\cal H} \end{equation}
\noindent whose form is specified by a particular organization of the topology measurement. The requirement (iii) is expressed in the linearity of ${\bf O}_n$. To meet the requirement (i) we are dealing with the family $\{{\bf O}_n\}$ rather than with a single mapping.
Now the requirement (ii) can be formulated as a relation between the mappings ${\bf O}_n$. First,
\[ {\bf O}_1(x) = x \]
\noindent and
\[ {\bf O}_{p+q}(x_1\otimes \ldots \otimes x_{p+q}) = {\bf O}_2({\bf O}_p(x_1\otimes \ldots \otimes x_p) \otimes {\bf O}_q(x_{p+1}\otimes \ldots \otimes x_{p+q}) ) \]
\noindent In particular
\begin{equation}\label{eass} {\bf O}_2({\bf O}_2(x\otimes y)\otimes z) = {\bf O}_2(x\otimes {\bf O}_2(y\otimes z)) \end{equation}
\noindent therefore all the mappings ${\bf O}_n$ can be inductively expressed through ${\bf O}_2$.
\[ {\bf O}_n(x_1\otimes \ldots \otimes x_{n-1} \otimes x_n) = {\bf O}_2({\bf O}_{n-1}(x_1\otimes \ldots \otimes x_{n-1}) \otimes x_n) \]
Being a linear mapping, ${\bf O}_2:{\cal H} \otimes {\cal H} \to {\cal H}$ generates a bilinear mapping ${\cal H} \times {\cal H} \to {\cal H}$ whose action on the pair $(x,y)$ we denote simply by $x\cdot y$. Then the relation (\ref{eass}) reads:
\[ (x\cdot y)\cdot z = x\cdot (y\cdot z) \]
So, the organization of a topology measurement is mathematically expressed by defining an {\em associative} product in ${\cal H}$. Then the 'organizing operator' (\ref{eooo}) takes the form:
\[ {\bf O}_n(x\otimes y\otimes \ldots \otimes z) = x\cdot y\cdot \ldots \cdot z \]
Suppose a history realizing a topology measurement 'took place', that is, the team of observers had carried out a number of yes-no experiments (Table \ref{tab1}), or, in other words, the operators $A^i_{t_i}$ are projectors in ${\cal H}$:
\begin{equation}\label{eh10} P^1_{t_1}U_{t_1t_2} P^2_{t_2}\ldots U_{t_{n-1}t_n}P^n_{t_n}\psi_0 \end{equation}
The outcome of each of these yes-no measurements is the selection of a subspace of ${\cal H}$ (associated with the appropriate projector). The organizer has a collection of subspaces of ${\cal H}$ at his disposal. Now let us return to requirement (i): what invariant object may he construct out of them, having only these subspaces and the product in ${\cal H}$? This is the algebra ${\cal A}$ spanned on these subspaces. So, all the available information about the spacetime topology is encoded in this subalgebra of ${\cal H}$. A way to extract it is to apply the spatialization procedure described in the next section.
\section{Spatialization procedure and Rota topology}\label{sspat}
Let us consider what sort of spaces can be extracted from algebras. We begin with a discussion of what the points ought to be. Suppose for a moment that the obtained algebra ${\cal A}$ is commutative. In this case it can be canonically represented by a functional algebra on an appropriate topological space. This may be obtained using the Gel'fand representation. Then the points of this space can be thought of as characters. The characters are, in turn, one-dimensional irreducible representations whose kernels are maximal ideals. There are several ways to impose a topology on the set of points \cite{bourbaki}.
In general, when the algebra ${\cal A}$ may be non-commutative, the scheme of geometrization remains in principle unchanged: we only pass from characters to classes of irreducible representations, and, respectively, from maximal ideals to primitive ones. For a more detailed analysis of the relevance of primitive ideals the reader is referred to \cite{g1}. So
\[ X = {\rm Prim}\,{\cal A} \]
\noindent that is, the points are the elements of the primitive spectrum of ${\cal A}$ (equivalence classes of irreducible representations). Note that at this point we have $X$ as a {\em set} not yet endowed with any structure. The straightforward way to 'topologize' $X$ could be to use the Jacobson topology. Unfortunately, in the finitary context we are in (section \ref{semp}) this topology (as well as the other standard ones) reduces to the trivial case of the discrete one. So, let us seek a weaker structure which could produce a reasonable topology on $X$.
It is the notion of convergence space \cite{ishamtop} which is the closest to topological structure. It is formed by declaring a relation $(x_n)\rightharpoonup y$ of convergence between sequences and points:
\[ x_1,x_2,\ldots, x_n, \ldots \rightharpoonup y \]
\noindent which always gives rise to the following relation on the points of $X$:
\begin{equation}\label{econv} x\rightharpoonup y \:\hbox{if and only if}\: x,x, \ldots,x, \ldots \rightharpoonup y \end{equation}
Having any relation $\rightharpoonup$ on $X$, we are always in a position to define a topology on $X$ as the strongest topology in which (\ref{econv}) holds. So, we shall introduce a topology on $X = {\rm Prim}\,{\cal A}$ according to the following scheme:
\[ (\hbox{relation on }X) \longrightarrow (\hbox{topology on }X) \]
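On a finite set this scheme is easy to carry out mechanically. The following sketch (ours; the function name is hypothetical) reads the requirement as: for every declared pair $x\rightharpoonup y$, every open set containing $y$ must contain $x$, and it enumerates the open sets of the finest such topology:
\begin{verbatim}
from itertools import combinations

def finest_topology(points, rel):
    """Open sets of the finest topology in which, for every declared
    pair (x, y) in rel, the constant sequence x, x, ... converges to y,
    i.e. every open set containing y also contains x."""
    opens = []
    for k in range(len(points) + 1):
        for subset in combinations(points, k):
            u = set(subset)
            if all(x in u for (x, y) in rel if y in u):
                opens.append(u)
    return opens

# Toy relation on three points: a converges to b, b converges to c.
print(finest_topology(['a', 'b', 'c'], [('a', 'b'), ('b', 'c')]))
# [set(), {'a'}, {'a', 'b'}, {'a', 'b', 'c'}] (element order may vary)
\end{verbatim}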
Recall that the elements of $X$ are the primitive ideals of ${\cal A}$, which are, in turn, subsets of ${\cal A}$. Having two such ideals $X,Y$ we can form both their intersection $X\cap Y$ and their product $X\cdot Y$ as the ideal spanned on all products $x\cdot y$ with $x\in X$, $y\in Y$. Note that in general $X\cdot Y\neq Y\cdot X$. However both $X\cdot Y$ and $Y\cdot X$ always lie in (but may not coincide with) $X\cap Y$. Relations between primitive ideals were investigated. G.-C. Rota \cite{rota} introduced the following relation in the context of enumerative combinatorics:
\begin{equation}\label{etends} X \rho Y \:\hbox{if and only if}\: X\cdot Y \stackrel{\neq}{\subset} X\cap Y \end{equation}
We shall call the topology generated by this relation "$\rho$" the {\sc Rota topology} on the set of primitive ideals.
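As a small worked example (our own computation, not from the text), let ${\cal A}$ be the algebra of upper triangular $2\times 2$ matrices. Its two irreducible representations are the characters picking out the two diagonal entries, with primitive ideals $X_1$ (matrices with vanishing first diagonal entry) and $X_2$ (matrices with vanishing second diagonal entry). Writing $E_{12}$ for the matrix unit with $1$ in the upper right corner,
$$X_1\cap X_2=\mathrm{span}\{E_{12}\},\qquad X_1\cdot X_2=0\stackrel{\neq}{\subset}X_1\cap X_2,\qquad X_2\cdot X_1=\mathrm{span}\{E_{12}\}=X_1\cap X_2,$$
so $X_1\,\rho\,X_2$ holds while $X_2\,\rho\,X_1$ does not, and the Rota topology on $\{X_1,X_2\}$ has the open sets $\emptyset$, $\{X_1\}$ and $\{X_1,X_2\}$.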
We see that in order to judge the measured topology it suffices to build an 'organizing' algebra. A particular form of this algebra should be produced using the table of observations (like Tab. \ref{tab1}). There is no {\em a priori} preferred way to build such an algebra: different models of 'data processing' may give different spatializations. However, there exists a 'classical spatialization' using no quantum models --- this is the Sorkin discretization scheme (section \ref{sfinsub}). The problem of correspondence then arises: is it possible to suggest an organizing algebra, based on the table of results, such that the appropriate topological spaces (the Rota and Sorkin topologies) coincide? This problem will be solved in section \ref{sialg}.
\section{Finitary substitutes}\label{sfinsub}
The Sorkin spatialization procedure imposes on the set $N$ of events the topology whose prebase is formed by the subsets of events observed by each observer. Consider this construction in more detail, following the account suggested in \cite{g2,g1}.
Associate with any event $i$ the set $\Lambda_i \subseteq \Lambda$ of observers which registered it:
\begin{equation}\label{e11} \Lambda_i = \{\lambda \in \Lambda \mid \quad \hbox{the event}\; i \; \hbox{was registered by }\, \lambda\} \end{equation}
\noindent and consider the relation $\rightharpoonup$ on the set of events, where $N_\lambda$ denotes the set of events registered by the observer $\lambda$ (so that $i\in N_\lambda$ if and only if $\lambda\in\Lambda_i$):
\begin{equation}\label{e12} i\rightharpoonup j \:\:\, \hbox{if and only if} \:\:\, \forall \lambda \: j\in N_\lambda \, \Rightarrow \, i\in N_\lambda \end{equation}
Note that the relation $\rightharpoonup$ is evidently reflexive ($i\rightharpoonup i$) and transitive ($i\rightharpoonup j,j\rightharpoonup k\,\Rightarrow \, i\rightharpoonup k$). Such relations are called {\sc quasiorders}. Consider the equivalence relation $\leftrightarrow$ on the set of events $N$:
\begin{equation}\label{e12a} i\leftrightarrow j \:\:\, \hbox{if and only if} \:\:\, i\rightharpoonup j \: \hbox{ and } \: j\rightharpoonup i \end{equation}
\noindent and consider the quotient set
\begin{equation}\label{e13} X \: = \: N/\leftrightarrow \end{equation}
\noindent called {\sc finitary spacetime substitute} \cite{sorkin} or pattern space \cite{prg}. For $x,y\in X$ introduce the relation $x\to y$:
\begin{equation}\label{e13a} x\to y \:\:\, \hbox{if and only if} \:\:\, \forall i\in x, \: \forall j\in y \: \, \,i\rightharpoonup j \end{equation}
\noindent (note that the expressions like $i\in x$ make sense since the elements of $X$ are subsets of $N$). The relation $\to$ (\ref{e13a}) on $X$ is:
\begin{itemize} \item[(i)] reflexive: $x\to x$ \item[(ii)] transitive: $x\to y,y\to z\,\Rightarrow \, x\to z$ \item[(iii)] antisymmetric: $x\to y,y\to x\,\Rightarrow \, x=y$ \end{itemize}
The relations having these three properties are called {\sc partial orders}. It is known (see, e.g. \cite{sorkin}) that there is a 1--1 correspondence between partial orders and $T_0$ topologies on finite sets, and that the topology of the manifold can be recovered when the number of events and observers grows to infinity \cite{sorkin}.
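The construction (\ref{e11})--(\ref{e13a}) is easy to carry out mechanically. The following sketch (ours; the function name and data layout are not from the text) computes the quasiorder, the quotient and the partial order from a table of observations; applied to the circle example of the next paragraph it reproduces the relations (\ref{e17}):
\begin{verbatim}
def finitary_substitute(table):
    """table: dict event -> set of observers that registered it.
    Returns the classes of the finitary substitute X = N / <-> and
    the partial order on X, as pairs (x, y) meaning x -> y."""
    events = list(table)
    # Quasiorder (e12): i ~> j iff every observer seeing j also sees i.
    quasi = {(i, j) for i in events for j in events
             if table[j] <= table[i]}
    # Equivalence (e12a) and the quotient set (e13).
    classes = []
    for i in events:
        cls = frozenset(j for j in events
                        if (i, j) in quasi and (j, i) in quasi)
        if cls not in classes:
            classes.append(cls)
    # Partial order (e13a) on the classes.
    order = {(x, y) for x in classes for y in classes
             if all((i, j) in quasi for i in x for j in y)}
    return classes, order

# Circle example below: observers registering each of the four events.
table = {'0': {1}, 'pi/2': {1, 2, 4}, 'pi': {2}, '3pi/2': {1, 2, 3}}
classes, order = finitary_substitute(table)
print(sorted((min(x), min(y)) for x, y in order if x != y))
# [('3pi/2', '0'), ('3pi/2', 'pi'), ('pi/2', '0'), ('pi/2', 'pi')]
\end{verbatim}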
To conclude this section consider an example of finitary substitute from \cite{g2}. Suppose there are four observers ${\cal O}_1,\ldots,{\cal O}_4$ living on the circle $e^{\imath\varphi}$ whose areas of observations are:
\[ \begin{array}{ccl} {\cal O}_1 & \mapsto & \{-2\pi/3 < \varphi < 2\pi/3\} \cr {\cal O}_2 & \mapsto & \{\pi/3 < \varphi < 5\pi/3\} \cr {\cal O}_3 & \mapsto & \{-3\pi/4 < \varphi < 2\pi/3\} \cr {\cal O}_4 & \mapsto & \{\pi/4 < \varphi < 3\pi/4\} \end{array} \]
Then the table of outcomes takes the form:
\begin{center}
\begin{tabular}{||c||c|c|c|c||} \hline Event label & ${\cal O}_1$ & ${\cal O}_2$ & ${\cal O}_3$ & ${\cal O}_4$ \cr \hline 0 & + & -- & -- & -- \cr $\pi/2$ & + & + & -- & + \cr $\pi$ & -- & + & -- & -- \cr $3\pi/2$ & + & + & + & -- \cr \hline \end{tabular} \end{center}
The relation "$\rightharpoonup$" (\ref{e12}) is then already a partial order; its nontrivial instances are:
\begin{equation}\label{e17} \begin{array}{ccccccc} \pi/2 & \rightharpoonup & 0 &;\:& \pi/2 & \rightharpoonup & \pi \cr 3\pi/2 & \rightharpoonup & 0 &;\: & 3\pi/2 & \rightharpoonup & \pi \end{array} \end{equation}
\noindent and the equivalence relation (\ref{e12a}) turns out to be trivial (plain equality). Hence the finitary substitute $X$ can be identified with the whole set of events:
\[ x = \{\pi/2\} \; , \; y = \{3\pi/2\} \; , \; z = \{0\} \; , \; w = \{\pi\} \]
\noindent and the partial order on $X$ is shown in Fig.~\ref{f17}.
\begin{figure}
\caption{The finitary substitute of the circle.}
\label{f17}
\end{figure}
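The sketch above reproduces the present example directly. Encoding the table of outcomes as a $0/1$ matrix (rows: the events $0,\pi/2,\pi,3\pi/2$; columns: the observers ${\cal O}_1,\ldots,{\cal O}_4$), one recovers the relations (\ref{e17}) and the trivial equivalence classes:

\begin{verbatim}
table = [[1, 0, 0, 0],   # event 0
         [1, 1, 0, 1],   # event pi/2
         [0, 1, 0, 0],   # event pi
         [1, 1, 1, 0]]   # event 3pi/2

classes, order = finitary_substitute(table)
print(classes)                                   # [[0], [1], [2], [3]]
print(sorted(p for p in order if p[0] != p[1]))
# [(1, 0), (1, 2), (3, 0), (3, 2)], i.e.
#  pi/2 -> 0,  pi/2 -> pi,  3pi/2 -> 0,  3pi/2 -> pi
\end{verbatim}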
\noindent {\bf Remark.} As we have already mentioned, there is an equivalent way to define a topology in terms of converging sequences. It is worth mentioning that we use the symbol $\to$ for the partial order (\ref{e13a}) because of the following fact:
\[ x \to y \qquad \hbox{if and only if} \qquad \lim\{x,x,\ldots,x,\ldots\} = y \]
\section{Incidence algebras}\label{sialg}
A priori there is no direct evidence that the Sorkin and the histories approaches to empirical spacetime topology are compatible. In this section we solve this problem: we explicitly suggest a construction which, starting from the table of observations, produces an algebra whose space of primitive ideals, endowed with the Rota topology, is homeomorphic to the Sorkin finitary substitute obtained from the same table.
As discussed in section \ref{sentang}, in order to build a model of organized spacetime observations we need to introduce an algebra, that is, the following two objects:
\begin{itemize} \item A linear space $H$ \item A product operation on the space $H$ \end{itemize}
\noindent somehow generated by the table of observations Tab. \ref{tab1}, where $H$ will stand for a model of ${\cal H}$ and the product will capture the organization.
As promised, we shall deal with a linear space $H$ spanned by the elementary propositions about topology. What form could such propositions take? Each of them should involve at least two points, since topology is precisely the study of the mutual positions of events. We choose the simplest model, namely that of two-point statements (a higher-order situation was considered in \cite{jmp96}). Such elementary statements were already formulated in (\ref{epropos}).
The form of the algebra we suggest will be similar to that introduced in \cite{dmhv}. Let $p,q$ be two events; denote by the symbol (sic!) ${|p\!><\!q|}$ the proposition associated with this pair. Form the linear span of all such symbols, ${\rm span}\{ {|p\!><\!q|} \}$, and define the product on it:
\begin{equation}\label{epq}
{|p\!><\!q|} \cdot |r\!><\!s| \,=\, <\!q|r\!> \cdot |p\!><\!s| \end{equation}
\noindent where \( <\!q|r\!> = \delta_{qr} \). Note that the product so obtained is associative but, in general, not commutative.
In order to take into account the results of the measurement (Table \ref{tab1}) we form the linear space
\[ H \,=\, {\rm span} \{ {|p\!><\!q|} \hbox{ such that } p\rightharpoonup q \} \]
\noindent where $p\rightharpoonup q$ (\ref{e12}) means that $p$ was registered by every observer who registered $q$. To ensure that the object so obtained can really describe an organization (in the sense of section \ref{sentang}), we have to check that $H$ is closed under the product (\ref{epq}), that is, that $H$ is an algebra.
\noindent {\bf Proposition 1.} The linear space $H$ equipped with the product (\ref{epq}) is an associative algebra.
\noindent {\em Proof.\/} Let ${|p\!><\!q|}$ and $|r\!><\!s|$ be in $H$; that means $p\rightharpoonup q$ and $r\rightharpoonup s$. If $q\neq r$ then their product is zero. If $q=r$ then, according to (\ref{epq}), their product is ${|p\!><\!q|} \cdot |q\!><\!s| = |p\!><\!s|$, which is an element of $H$ since the relation $\rightharpoonup$ is transitive: $p\rightharpoonup q, q\rightharpoonup s$ implies $p\rightharpoonup s$.
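The product (\ref{epq}) and the closure argument of Proposition 1 are equally easy to make explicit. In the sketch below (again only our illustration, reusing \texttt{table} and \texttt{quasiorder} from the previous listings) an element of $H$ is a dictionary mapping admissible basis symbols $(p,q)$ with $p\rightharpoonup q$ to their coefficients, and the product is the bilinear extension of (\ref{epq}); the assertion checked in the loop is precisely the transitivity argument of the proof.

\begin{verbatim}
def product(x, y, quasi):
    """Bilinear extension of |p><q| . |r><s| = delta_{qr} |p><s|."""
    out = {}
    for (p, q), a in x.items():
        for (r, s), b in y.items():
            if q == r:
                assert (p, s) in quasi   # transitivity: p -> q -> s gives p -> s
                out[(p, s)] = out.get((p, s), 0) + a * b
    return out

quasi = quasiorder(table)
basis = sorted(quasi)                    # basis symbols |p><q| with p -> q
for e in basis:
    for f in basis:
        product({e: 1}, {f: 1}, quasi)   # never trips the assert: H is closed
\end{verbatim}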
\noindent {\bf Remark.} Algebras of this sort (called incidence algebras) were introduced by Rota in \cite{rota} in a slightly different way.
Now let us apply the spatialization procedure described in section \ref{sspat}. The primitive spectrum of the algebra $H$ was calculated in \cite{drs}; it consists of all the ideals of the form:
\begin{equation}\label{eprim}
X_s = {\rm span} \{ {|p\!><\!q|} :\; {|p\!><\!q|} \neq |s\!><\!s| \} \end{equation}
\noindent where $s$ ranges over all equivalence classes with respect to the relation "$\leftrightarrow$" (\ref{e12a}) on $N$, that is, over the points of the finitary substitute. So, already at the first stage of the spatialization procedure we have a canonical bijection between the elements of the primitive spectrum of the algebra $H$ and the points of Sorkin's discretization scheme (section \ref{sfinsub}). In order to show the compatibility of the two schemes it remains to show that the Rota topology on the set of ideals $X_s$ is the same as the Sorkin topology.
Let us work out the form of the relation $\rho$ (\ref{etends}) for the suggested algebra. Along the way we shall see that the relation $\rho$ can be thought of as a sort of 'proximity' between events. So, let $r,s$ be two events.
\noindent {\bf Proposition 2.} Let $X_r, X_s$ be two primitive ideals. Then $X_r\rho X_s$ if and only if $r\neq s$, $r\rightharpoonup s$, and there is no $t\neq r,s$ such that $r\rightharpoonup t \rightharpoonup s$.
\noindent {\em Proof\/} will be carried out exhaustively: we shall consider all possible cases.
\begin{itemize} \item {\bf Case 1.} $r=s$. Consider $X_r\cdot X_r$. To prove that this product coincides with $X_r$
recall its definition (\ref{eprim}). Let $|a\!><\!b| \in X_r$ (that is $a\neq r$ or $b\neq r$) and prove that it can be represented as the product of two elements from $X_r$. If $a\neq r$
then $|a\!><\!b| = |a\!><\!a|\cdot |a\!><\!b|$. If $b\neq r$ then
$|a\!><\!b| = |a\!><\!b|\cdot |b\!><\!b|$. Therefore $X_r\cdot X_r=X_r$, and $X_r\overline{\rho} X_r$.
\item {\bf Case 2.} $r\not\rightharpoonup s$. Consider an element ${|p\!><\!q|}$ from the intersection of the ideals:
\[
X_r\cap X_s \,=\,{\rm span} \{ {|p\!><\!q|} :\;{|p\!><\!q|} \neq |r\!><\!r| \hbox{ and }
{|p\!><\!q|} \neq |s\!><\!s|\} \]
\noindent and show that it belongs to the product
\[
X_r\cdot X_s \,=\, {\rm span} \{ {|p\!><\!q|} |a\!><\!b|:\; {|p\!><\!q|} \neq |r\!><\!r|
\hbox{ and } |a\!><\!b|\neq |s\!><\!s| \} \]
\noindent If $p=q\neq r,s$ then $|p\!><\!p| = |p\!><\!p|\cdot|p\!><\!p|$. If $p\neq q$ then $p\neq r$ or $q\neq s$ (since $r\not\rightharpoonup s$). Then
${|p\!><\!q|} = |p\!><\!p|\cdot {|p\!><\!q|}$ or ${|p\!><\!q|} = {|p\!><\!q|}\cdot |q\!><\!q|$, respectively. Therefore $X_r\overline{\rho} X_s$.
\item {\bf Case 3.} $r\rightharpoonup s$ and there is $t\neq r,s$ such that $r\rightharpoonup t \rightharpoonup s$. Consider an element ${|p\!><\!q|}$ from the intersection of the ideals:
\[
X_r\cap X_s \,=\,{\rm span} \{ {|p\!><\!q|} :\;{|p\!><\!q|} \neq |r\!><\!r| \hbox{ and }
{|p\!><\!q|} \neq |s\!><\!s|\} \]
\noindent and show that it belongs to the product
\[
X_r\cdot X_s \,=\, {\rm span} \{ {|p\!><\!q|} |a\!><\!b|:\; {|p\!><\!q|} \neq |r\!><\!r|
\hbox{ and } |a\!><\!b|\neq |s\!><\!s| \} \]
\noindent If $p=q\neq r,s$ then $|p\!><\!p| =
|p\!><\!p|\cdot|p\!><\!p|$. Let $p\neq q$ and ($p\neq r$ or $q\neq s$), then ${|p\!><\!q|} = |p\!><\!p|\cdot {|p\!><\!q|}$ (if $p\neq r$) or ${|p\!><\!q|} = {|p\!><\!q|}
\cdot |q\!><\!q|$ (if $q\neq s$). Finally let ${|p\!><\!q|} = |r\!><\!s|$, then $|r\!><\!s| = |r\!><\!t|\cdot |t\!><\!s|$, and we again have $X_r\overline{\rho} X_s$.
\item {\bf Case 4.} $r\neq s$, $r\rightharpoonup s$, and there is no $t\neq r,s$ such that $r\rightharpoonup t \rightharpoonup s$. Let us show that the element
$|r\!><\!s|$ from the intersection of the ideals:
\[
X_r\cap X_s \,=\,{\rm span} \{ {|p\!><\!q|} :\;{|p\!><\!q|} \neq |r\!><\!r| \hbox{ and }
{|p\!><\!q|} \neq |s\!><\!s|\} \]
\noindent is not an element of the product
\[
X_r\cdot X_s \,=\, {\rm span} \{ {|p\!><\!q|} |a\!><\!b|:\; {|p\!><\!q|} \neq |r\!><\!r|
\hbox{ and } |a\!><\!b|\neq |s\!><\!s| \} \]
\noindent Suppose this is not the case; then $|r\!><\!s| = \sum C_{atc}\,|a\!><\!t|\cdot|t\!><\!c|$ with $|a\!><\!t|\in X_r$ and $|t\!><\!c|\in X_s$. Multiply this equality (in $H$) by $|r\!><\!r|$ from the left and by $|s\!><\!s|$ from the right; only the terms with $a=r$ and $c=s$ survive, so $|r\!><\!s| = \sum_t C_{rts}\,|r\!><\!t|\cdot|t\!><\!s|$. Since the factors lie in $X_r$ and $X_s$, every such $t$ differs from both $r$ and $s$; but by assumption there is no $t\neq r,s$ with $r\rightharpoonup t \rightharpoonup s$, therefore the sum is zero, while $|r\!><\!s|\neq 0$, a contradiction. Hence $X_r\cdot X_s\neq X_r\cap X_s$, that is, $X_r\rho X_s$.
\end{itemize}
\noindent
So we see that two primitive ideals are linked by the relation $\rho$ if and only if they are nearest neighbours on the Hasse diagram of the partial order associated with the Sorkin topology (Case 4).
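In computational terms Proposition 2 says that the Rota relation is nothing but the covering (Hasse-diagram) relation of the partial order produced by the Sorkin scheme. Continuing our illustrative sketch (the variable \texttt{order} is the partial order computed above for the circle example):

\begin{verbatim}
def rota(order):
    """X_r rho X_s  iff  r != s, r -> s, and no intermediate t (Prop. 2)."""
    points = {a for a, _ in order} | {b for _, b in order}
    return {(r, s) for (r, s) in order
            if r != s and not any(t not in (r, s) and
                                  (r, t) in order and (t, s) in order
                                  for t in points)}

print(sorted(rota(order)))
# [(1, 0), (1, 2), (3, 0), (3, 2)]: here every ordered pair is already a cover,
# so the Rota relation reproduces the partial order of the finitary substitute.
\end{verbatim}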
The results of this section can be summarized in the following theorem.
\noindent {\bf Theorem.} {\em The Sorkin topology of a finitary substitute coincides with the Rota topology of its incidence algebra. }
\section*{Concluding remarks}
Initially, in the histories approach to quantum mechanics the existence of the spacetime as a fixed manifold was presupposed \cite{ha}. An algebraic version \cite{qlha} of this approach did not give up this presupposition, but it rendered it rudimentary. In this paper we take the next step: spacetime itself becomes an observable, up to its combinatorial approximation.
The core of the suggested quantum scheme is the spatialization procedure. We have realized it as closely as possible to the standard spatialization due to Gel'fand. The peculiarity of our approach is that we impose a new topology, namely that of Rota (section \ref{sspat}). The reason for doing so is that in finite-dimensional situations (which we regard as the realistic ones) the Gel'fand topology reduces to the trivial discrete one. The suggested machinery is a bridge between the algebraic version of the histories approach \cite{qlha} and combinatorial models such as lattice-like discretization schemes \cite{sorkin}.
On the other hand, outside the histories approach an algebraic construction merging finitary substitutes into a quantum-like environment was already carried out by the 'poseteers group' (the term was introduced by F.~Lizzi) in \cite{g2,g1}. The two constructions are compared in Table \ref{tabc}.
\begin{table}[h!t] \begin{tabular}{p{0.35\textwidth} @{\hspace{0.1\textwidth}}p{0.35\textwidth}} \hline \\ {\sc Poseteers' approach} & {\sc Incidence algebras}\\ &\\ \multicolumn{2}{c}{\bf The algebras}\\ &\\ $C^*$-algebras of infinite dimensions & finite dimensional algebras with no involution \\ &\\ \multicolumn{2}{c}{\bf The points}\\ &\\ kernels of irreducible \mbox{$*$-re}presentations & kernels of irreducible representations \\ &\\ \multicolumn{2}{c}{\bf The topology}\\ &\\ Jacobson topology & Rota topology \\ &\\ \hline \\ \end{tabular} \caption{The comparison of the two algebraic schemes.} \label{tabc} \end{table}
Note that the incidence algebras are not semisimple. At first sight this seems to be an essential drawback of the theory, taking it far from the existing quantum constructions. However, in the case of finite dimension the Jacobson topology will always be discrete. On the other hand, the lack of semisimplicity makes it possible to develop differential calculi on finitary models, which may be considered as a link between the poseteers' approach and the formalism of discrete differential manifolds \cite{dmhv}.
In \cite{dmhv} finite-dimensional semisimple commutative algebras are studied and a differential structure is built in terms of modules of differential forms (the dual of the module of derivations in the classical case), although these algebras themselves admit no nonzero derivations. By contrast, incidence algebras do possess derivations, which makes it possible to introduce tensor calculi based on the notion of duality \cite{dv}. Moreover, since all their derivations are inner, they already contain vectors (for details the reader is referred to \cite{ricomm}).
Finally, we dwell on the algebra of symbols ${|p\!><\!q|}$ introduced in section \ref{sialg}. Irrespective of the particular form of the organizing algebra (the incidence algebra in our case) we can always write down expressions like
\[ \large
\sum_{q_1,\ldots,q_n} |p\!><\!q_1|\circ |q_1\!><\!q_2|\circ
\ldots \circ |q_n\!><\!r| \]
\noindent where the operation "$\circ$" is a multiplication generating a particular organization of the team (section \ref{sentang}), and $q_i,q_{i+1}$ are neighboring events (in the sense of the Rota topology). So, this expression can be thought of as a sum over trajectories.
One of the authors (RRZ) expresses his gratitude to prof. G.Landi who organized his visit to the University of Trieste (supported by the Italian National Research Council) and conveyed many important ideas. Stimulating discussions with P.M.Ha\-jac, S.V.Kra\-s\-ni\-kov, and Te\-t\-sue Ma\-s\-u\-da are appreciated.
The work was supported by the RFFI research grant (96--02--19528). One of us (R.R.Z.) acknowledges the financial support from the Soros foundation (grant A98-42) and the research grant 97--14.3--62 "Universities of Russia".
\end{document}
\begin{document} \begin{titlepage} \begin{center} \bfseries OPTIMAL MEASUREMENTS OF SPIN DIRECTION \end{center}
\begin{center} D M APPLEBY \end{center} \begin{center} Department of Physics, Queen Mary and
Westfield College, Mile End Rd, London E1 4NS, UK
\end{center}
\begin{center}
(E-mail: [email protected]) \end{center}
\begin{center} \textbf{Abstract}\\
\parbox{10.5 cm }{ The accuracy of a measurement of the spin direction of a spin-$s$ particle is characterised, for arbitrary half-integral $s$. The disturbance caused by the measurement is also characterised. The approach is based on that taken in several previous papers concerning joint measurements of position and momentum. As in those papers, a distinction is made between the errors of retrodiction and prediction. Retrodictive and predictive error relationships are derived. The POVM describing the outcome of a maximally accurate measurement process is investigated. It is shown that, if the measurement is retrodictively optimal, then the distribution of measured values is given by the initial state $\mathrm{SU}(2)$ $Q$-function. If the measurement is predictively optimal, then the distribution of measured values is related to the final state $\mathrm{SU}(2)$ $P$-function. The general form of the unitary evolution operator producing an optimal measurement is characterised.
} \end{center}
\begin{center} Report no. QMW-PH-99-18 \end{center} \end{titlepage}
\section{Introduction} \label{sec: intro} In a recent series of papers~\cite{self1,self2a,self2b,self3,self2c} we analysed the concept of experimental accuracy, as it applies to simultaneous measurements of position and momentum~\cite{Arthurs,Peres1,Busch,Schroeck,Leonhardt}. The purpose of this paper is to give a similar analysis for measurements of spin direction.
There have been a number of previous discussions of joint, imperfectly accurate measurements of two (non-commuting) components of spin~\cite{TwoComp}. Measurements of spin direction---the kind of measurement considered in this paper---have been discussed by Busch and Schroeck~\cite{BuschSpin}, Grabowski~\cite{Grabowski}, Peres~\cite{Peres1}, and Busch \emph{et al}~\cite{Busch}. In the following we extend the work of these authors by giving an analysis of the measurement errors, and of the conditions for a measurement process to be optimal. In particular, we will show that a measurement is retrodictively optimal if and only if the distribution of measured values is given by the generalized $Q$-function which is defined in terms of $\mathrm{SU}(2)$ coherent states~\cite{SpinCoh,Lieb,Perel,SpinCohB} (corresponding to an analogous property of joint measurements of position and momentum derived by Ali and Prugove\v{c}ki~\cite{Ali}, and proved under less restrictive conditions in Appleby~\cite{self3}).
This result provides us with some further insight into the physical significance of the $\mathrm{SU}(2)$ $Q$-function. It also has a bearing on the problem of state reconstruction. Amiet and Weigert~\cite{Amiet1,Amiet2} have recently shown how, by making measurements of a single spin component for sufficiently many differently oriented Stern-Gerlach apparatuses, one can calculate the corresponding values of the $\mathrm{SU}(2)$ $Q$-function, and thereby reconstruct the density matrix. The fact that a retrodictively optimal measurement of spin direction has the $Q$-function as its distribution of measured values suggests an alternative approach to the problem of state reconstruction: for it means that one can reconstruct the density matrix from the statistics of a single run of measurements, performed on a single apparatus. The fact that measurements whose outcome is described by the $Q$-function have this property of informational completeness has been stressed by Busch and Schroeck~\cite{BuschSpin} (also see Busch~\cite{Complete}, Busch \emph{et al}~\cite{Busch} and Schroeck~\cite{Schroeck}).
Retrodictively optimal joint measurements of position and momentum~\cite{self3,Ali} give rise to the ordinary Husimi or $Q$-function~\cite{Husimi,Hillery,Lee}, and so they also have the property of informational completeness~\cite{Busch,Schroeck,Complete,Naka}, at least in principle. However, the practical usefulness of this fact is somewhat restricted, due to the amplification of statistical errors which occurs when one attempts to perform the reconstruction starting from real experimental data~\cite{Leonhardt,StatAmp}. No such difficulty arises in the case of measurements of spin direction, due to the fact that the state space is finite dimensional.
We now outline the approach taken in the remainder of this paper. We consider a system consisting of a single spin, with angular momentum operator $\hat{\mathbf{S}}$ satisfying the usual commutation relations $\bigl[\hat{S}_{a},\hat{S}_{b}\bigr]=i \sum_{c=1}^{3}\epsilon_{abc}\hat{S}_{c}$ (with units chosen such that $\hbar=1$). We take it that $\hat{S}^2=s(s+1)$ for some arbitrary, but fixed half-integer $s$.
The components of $\hat{\mathbf{S}}$ are non-commuting, so they cannot all be simultaneously measured with perfect precision. However, they can all be measured with a less than perfect degree of accuracy. In order to do so one can use the same kind of procedure which is employed in the Arthurs-Kelly process~\cite{self2a,Arthurs,Peres1,Busch,Schroeck,Leonhardt}: that is, one can couple the non-commuting observables of interest---the components of $\hat{\mathbf{S}}$---to another set of ``pointer'' or ``meter'' observables which do commute, and whose values may therefore be simultaneously determined with arbitrary precision.
The question we then have to decide is how to choose the pointer observables. The observables to be measured satisfy the constraint $\hat{S}^2=s(s+1)$, where $s$ is fixed. Consequently, one might take the view that the magnitude of the spin vector is already known, and that all that needs to be measured is its direction. This suggests that the pointer observables should be taken to be the (commuting) components of a unit vector $\hat{\mathbf{n}}$, satisfying the constraint $\hat{n}^2=1$. The direction of $\hat{\mathbf{n}}$ measures the direction of $\hat{\mathbf{S}}$. We will refer to this as a type 1 measurement. Such measurements are discussed in Sections~\ref{sec: POVM}--\ref{sec: CompOpt}.
There is another possibility: for one could take the pointer observables to be the three \emph{independent}, commuting components of a vector $\hat{\boldsymbol{\mu}}$, no constraint being placed on the squared modulus $\hat{\mu}^2$. The value of $\hat{S}_1$ (respectively $\hat{S}_2$, $\hat{S}_3$) is measured by $\hat{\mu}_1$ (respectively $\hat{\mu}_2$, $\hat{\mu}_3$). We will refer to this as a type 2 measurement. Such measurements are discussed in Section~\ref{sec: type2}.
We begin our analysis in Section~\ref{sec: POVM}, by characterising the POVM (positive operator valued measure) describing the outcome of an arbitrary type 1 measurement process.
In Section~\ref{sec: AccDis} we characterise the accuracy of and disturbance caused by a type 1 measurement process. Our definitions are based on those given in Appleby~\cite{self1,self2b}, for simultaneous measurements of position and momentum. In particular, we are led to make a distinction between two different kinds of accuracy, which we refer to as retrodictive and predictive.
After giving, in Section~\ref{sec: CohSte}, a brief summary of the relevant features of the theory of $\mathrm{SU}(2)$ coherent states we go on, in Section~\ref{sec: RetOpt}, to describe retrodictively optimal type 1 measurements. We establish a bound on the retrodictive accuracy. We define a retrodictively optimal measurement to be a measurement which (1) achieves the maximum possible degree of retrodictive accuracy, and which (2) is isotropic (in a sense to be explained). We then show that the necessary and sufficient condition for the measurement to be retrodictively optimal is that the distribution of measured values be given by the initial state $\mathrm{SU}(2)$ $Q$-function.
In Section~\ref{sec: PreOpt} we establish a bound on the predictive accuracy of a type 1 measurement. We derive a necessary and sufficient condition for this bound to be achieved, in which case we say that the measurement is predictively optimal. We show that the distribution of measured values is then related to the final state $\mathrm{SU}(2)$ $P$-function.
In Section~\ref{sec: CompOpt} we consider completely optimal type 1 measurement processes---\emph{i.e.} processes that are both retrodictively and predictively optimal. We give the general form of the unitary evolution operator describing such a process.
Finally, in Section~\ref{sec: type2}, we consider type 2 measurements. We define the retrodictive and predictive errors of such measurements, and establish bounds which the errors must satisfy. We then show that, in the limit as a type 2 measurement tends to optimality (retrodictive or predictive), it more and more nearly approaches an optimal type 1 measurement (with the replacement $s^{-1}\hat{\boldsymbol{\mu}} \rightarrow \hat{\mathbf{n}}$). It follows that, in so far as the aim is to maximise the measurement accuracy, type 2 measurements have no advantages. \section{Type 1 Measurements: POVM} \label{sec: POVM} The purpose of this section is to characterise the POVM (positive operator valued measure)~\cite{Peres1,Busch,Schroeck,Kraus,Peres2} describing the outcome of an arbitrary type 1 measurement.
We take a type 1 measurement to consist of a process in which the system, with $2s+1$ dimensional state space $\mathscr{H}_{\rm sy}$, is coupled to a measuring apparatus, with state space $\mathscr{H}_{\rm ap}$. The interaction commences at a time $t=t_{\rm i}$ when system$+$apparatus are in the product state $\ket{\psi\otimes\chi_{\rm ap}}$, where $\ket{\psi}\in \mathscr{H}_{\rm sy}$ is the initial state of the system and $\ket{\chi_{\rm ap}}\in \mathscr{H}_{\rm ap}$ is the initial state of the apparatus. It ends after a finite time interval at $t=t_{\rm f}$ when system$+$apparatus are in the state $\hat{U}\ket{\psi\otimes\chi_{\rm ap}}$, where $\hat{U}$ is the unitary evolution operator describing the measurement interaction.
It should be stressed that this description is quite general. In particular, we are not making an impulsive approximation. Nor are we assuming that the interaction Hamiltonian is large in comparison with the Hamiltonians describing the system and apparatus separately. The only substantive assumption is the statement that system$+$apparatus are initially in a product state (so that they are initially uncorrelated).
It should be noted that $\ket{\psi}$ is arbitrary, since the system might initially be in any state $\in \mathscr{H}_{\rm sy}$. On the other hand $\ket{\chi_{\rm ap}}$ is fixed, since we assume that initially the apparatus is always in the same ``zeroed'' or ``ready'' state.
As explained in Section~\ref{sec: intro}, we take it that the result of the measurement is specified by the recorded values of three commuting pointer observables $\hat{\mathbf{n}}=(\hat{n}_1,\hat{n}_2,\hat{n}_3)$, satisfying the constraint $\sum_{r=1}^{3}\hat{n}_r^2=1$ (so that there are only two pointer degrees of freedom). However, a measuring instrument does not usually consist of some pointers, and nothing else. We therefore allow for the existence of $N$ additional apparatus observables $\hat{\xi}=(\hat{\xi}_1,\dots,\hat{\xi}_N)$ which, together with the components of $\hat{\mathbf{n}}$, constitute a complete commuting set. The eigenkets $\ket{\mathbf{n},\xi}$ thus provide an orthonormal basis for $\mathscr{H}_{\rm ap}$.
The operator $\hat{U}$ specifies the final state of system$+$apparatus given \emph{any} initial state $\in \mathscr{H}_{\rm sy}\otimes\mathscr{H}_{\rm ap}$. However, we are only interested in initial states of the very special form $\ket{\psi\otimes \chi_{\rm ap}}$, where $\ket{\chi_{\rm ap}}$ is fixed. In other words, the operator $\hat{U}$ provides us with much more information than we actually need. It turns out that all the quantities which are relevant to the argument of this paper can be expressed in terms of the operator $\hat{T}(\ptv,\avv)$, defined by~\cite{Busch,Schroeck,Kraus,Peres2} \begin{equation}
\hat{T}(\ptv,\avv) = \sum_{m_1,m_2=-s}^{s}
\bigl(\bra{m_1}\otimes\bra{\mathbf{n},\xi}\bigr)
\hat{U}
\bigl(\ket{m_2}\otimes\ket{\chi_{\rm ap}}\bigr)
\ket{m_1}\bra{m_2} \label{eq: TDef} \end{equation} where $\ket{m}$ denotes the eigenket of $\hat{S}_3$ with eigenvalue $m$ (in units such that $\hbar=1$). The operator $\hat{T}(\ptv,\avv)$ is more convenient to work with because, unlike $\hat{U}$, it only acts on the system state space $\mathscr{H}_{\rm sy}$.
The significance of the operator $\hat{T}(\ptv,\avv)$ is that it describes the change in the state of the system which is caused by the measurement process~\cite{Busch,Schroeck,Kraus,Peres2} (\emph{i.e.}\ it describes the operation~\cite{Busch,Schroeck, Kraus} induced by the measurement). In fact, suppose that the measurement is non-selective (meaning that the final value of $\mathbf{n}$ is not recorded, so that there is no ``collapse''), and let $\hat{\rho}_{\rm f}$ be the reduced density matrix describing the final state of the system. It is then readily verified that \begin{equation}
\hat{\rho}_{\rm f} = \int d\mathbf{n} \, d \xi \;
\hat{T}(\ptv,\avv)\,
\ket{\psi}\bra{\psi} \,
\hat{T}^{\dagger}(\ptv,\avv) \label{eq: rhof} \end{equation} where $d \mathbf{n}$ denotes the usual measure on the unit $2$-sphere: in terms of spherical polars $d \mathbf{n} = \sin \theta d\theta d \phi$.
Let $\rho_{\rm val}(\mathbf{n})$ be the probability density function describing the distribution of measured values: \begin{equation}
\rho_{\rm val}(\mathbf{n}) = \sum_{m =-s}^{s} \int d\xi \,
\Bigl| \bigl(\bra{m}\otimes \bra{\mathbf{n},\xi}\bigr)
\hat{U}
\bigl(\ket{\psi}\otimes \ket{\chi_{\rm ap}}\bigr)
\Bigr|^2 \label{eq: pdfDef} \end{equation} $\rho_{\rm val}(\mathbf{n})$ can also be expressed in terms of the operators $\hat{T}(\ptv,\avv)$. In fact, define~\cite{Busch,Schroeck,Kraus,Peres2} \begin{equation}
\hat{E}(\ptv) = \int d \xi \, \hat{T}^{\dagger}(\ptv,\avv) \hat{T}(\ptv,\avv) \label{eq: SDef} \end{equation} Then \begin{equation}
\rho_{\rm val}(\mathbf{n}) = \bmat{\psi}{\hat{E}(\ptv)}{\psi} \label{eq: val} \end{equation} We see from this that $\hat{E}(\ptv) d\mathbf{n}$ is the POVM describing the measurement outcome. In particular \begin{equation*}
\hat{E}(\ptv) \ge 0 \end{equation*} for all $\mathbf{n}$ and \begin{equation}
\int d \mathbf{n} \, \hat{E}(\ptv) = 1 \label{eq: SNorm} \end{equation}
Until now we have been assuming that the system is initially in a pure state. If the system is initially in the mixed state with density matrix $\hat{\rho}_{\rm i}$ we have, in place of Eqs.~(\ref{eq: rhof}) and~(\ref{eq: val}), \begin{equation}
\hat{\rho}_{\rm f} = \int d \mathbf{n} \, d\xi \,
\hat{T}(\ptv,\avv) \, \hat{\rho}_{\rm i} \, \hat{T}^{\dagger}(\ptv,\avv) \label{eq: rhofMix} \end{equation} and \begin{equation}
\rho_{\rm val}(\mathbf{n}) = \Tr \left(\hat{E}(\ptv) \, \hat{\rho}_{\rm i}\right) \label{eq: rhoValTermsRhoi} \end{equation} Eq.~(\ref{eq: rhofMix}) gives the final state reduced density for the system in the case when the measurement is non-selective, so that the pointer position is not recorded. Suppose, on the other hand, that the final pointer position is recorded to be in the subset $\mathscr{R}$ of the unit $2$-sphere. Then $\hat{\rho}_{\rm f}$ is given by \begin{equation}
\hat{\rho}_{\rm f} = \frac{1}{p_{\mathscr{R}}}
\int_{\mathscr{R}} d\mathbf{n} \int d\xi \,
\hat{T}(\ptv,\avv) \, \hat{\rho}_{\rm i} \, \hat{T}^{\dagger}(\ptv,\avv) \label{eq: rhofTermsT} \end{equation} where $p_{\mathscr{R}}$ is the probability of finding $\mathbf{n} \in \mathscr{R}$: \begin{equation*}
p_{\mathscr{R}} = \int_{\mathscr{R}} d\mathbf{n} \, \rho_{\rm val}(\mathbf{n}) \end{equation*} \section{Type 1 Measurements: Accuracy and Disturbance} \label{sec: AccDis} The purpose of this paper is to establish the form of the operators $\hat{T}(\ptv,\avv)$ and $\hat{E}(\ptv)$ when the measurement is optimal. In order to give a precise definition of what ``optimal'' means in this context, we first need to define a concept of measurement accuracy, which is the problem addressed in this section. We also discuss how to quantify the degree to which the system is disturbed by the measurement process.
The approach we take is based on the approach taken in Appleby~\cite{self1,self2b}, to the problem of defining the accuracy of and disturbance caused by a simultaneous measurement of position and momentum. We thus work in terms of the Heisenberg picture.
Let $\hat{\mathbf{S}}_{\rm i} = \hat{\mathbf{S}}$ and $\hat{\mathbf{n}}_{\rm i} = \hat{\mathbf{n}}$ be the initial values of the Heisenberg spin and pointer observables at the time $t_{\rm i}$, when the measurement interaction begins; and let $\hat{\mathbf{S}}_{\rm f} = \hat{U}^{\dagger}\hat{\mathbf{S}}\hat{U}$ and $\hat{\mathbf{n}}_{\rm f} = \hat{U}^{\dagger}\hat{\mathbf{n}}\hat{U}$ be the final values of these observables at the time $t_{\rm f}$, when the measurement interaction ends. Let $\mathscr{S}_{\rm sy}\subset\mathscr{H}_{\rm sy}$ be the unit sphere in the system state space. We then define the retrodictive fidelity $\eta_{\rm i}$ by \begin{equation} \eta_{\rm i} = \inf_{\ket{\psi}\in\mathscr{S}_{\rm sy}} \Bigl(\bmat{\psi\otimes\chi_{\rm ap}}{\tfrac{1}{2}\bigl(\hat{\mathbf{n}}_{\rm f} \cdot \hat{\mathbf{S}}_{\rm i}+ \hat{\mathbf{S}}_{\rm i} \cdot \hat{\mathbf{n}}_{\rm f}\bigr)}{\psi\otimes \chi_{\rm ap}}\Bigr) \label{eq: rfDef} \end{equation} and the predictive fidelity $\eta_{\rm f}$ by \begin{align} \eta_{\rm f} & = \inf_{\ket{\psi}\in\mathscr{S}_{\rm sy}} \Bigl(\bmat{\psi\otimes\chi_{\rm ap}}{\tfrac{1}{2}\bigl(\hat{\mathbf{n}}_{\rm f} \cdot \hat{\mathbf{S}}_{\rm f}+ \hat{\mathbf{S}}_{\rm f} \cdot \hat{\mathbf{n}}_{\rm f}\bigr)}{\psi\otimes\chi_{\rm ap}}\Bigr) \notag \\ & = \inf_{\ket{\psi}\in\mathscr{S}_{\rm sy}} \Bigl(\bmat{\psi\otimes\chi_{\rm ap}}{\hat{\mathbf{n}}_{\rm f} \cdot \hat{\mathbf{S}}_{\rm f}}{\psi\otimes\chi_{\rm ap}}\Bigr) \label{eq: pfDef} \end{align} (where we have used the fact that the components of $\hat{\mathbf{n}}_{\rm f}$ and $\hat{\mathbf{S}}_{\rm f}$ commute). It should be noted that the concept of fidelity employed here is somewhat different from the concept of fidelity which is employed in discussions of cloning and state estimation ($\eta_{\rm i}$ and $\eta_{\rm f}$ are defined in terms of scalar products of observables, rather than scalar products of states).
We also define the quantity $\eta_{\rm d}$ by \begin{equation} \eta_{\rm d} = \inf_{\ket{\psi}\in\mathscr{S}_{\rm sy}} \Bigl(\bmat{\psi\otimes\chi_{\rm ap}}{\tfrac{1}{2}\bigl(\hat{\mathbf{S}}_{\rm f} \cdot \hat{\mathbf{S}}_{\rm i}+ \hat{\mathbf{S}}_{\rm i} \cdot \hat{\mathbf{S}}_{\rm f}\bigr)}{\psi\otimes\chi_{\rm ap}}\Bigr) \label{eq: syfDef} \end{equation}
The intuitive basis for these definitions is most easily appreciated if one thinks, temporarily, in classical terms. If interpreted classically $\eta_{\rm i}$ would represent the minimum expected degree of alignment between the final pointer direction and the initial direction of the spin vector. In other words, it would quantify the retrodictive accuracy of the measurement. On the other hand, $\eta_{\rm f}$ would represent the minimum expected degree of alignment between the final pointer direction and the final direction of the spin vector: it would therefore provide a quantitative indication of the predictive accuracy. Lastly, $\eta_{\rm d}$ would quantify the extent to which the measurement disturbs the system, by changing the direction of the spin vector.
Of course, $\hat{\mathbf{n}}_{\rm f}$, $\hat{\mathbf{S}}_{\rm i}$, $\hat{\mathbf{S}}_{\rm f}$ are in fact quantum mechanical observables, and so the physical interpretation of $\eta_{\rm i}$, $\eta_{\rm f}$ and $\eta_{\rm d}$ needs to be justified much more carefully. Rather than proceeding directly, it will be convenient first to relate these quantities to an alternative characterisation of the measurement accuracy and disturbance. This will allow us to appeal to the arguments given in Appleby~\cite{self1,self2b}, to justify our earlier characterisation of the accuracy of and disturbance caused by a simultaneous measurement of position and momentum. It will also be helpful in Section~\ref{sec: type2}, when we compare type 1 and type 2 measurements.
In a type 1 measurement, the result of the measurement is a direction, represented by the unit vector $\mathbf{n}$. However, one could extract from this information estimates of the initial and final values of the spin vector itself by multiplying $\mathbf{n}$ by suitable constants: say $\zeta_{\rm i} \mathbf{n}$ as an estimate for $\mathbf{S}_{\rm i}$, and $\zeta_{\rm f} \mathbf{n}$ as an estimate for $\mathbf{S}_{\rm f}$. The question then arises: what are the best choices for these constants?
To answer this question, consider the quantities \begin{align*}
\sup_{\ket{\psi }\in \mathrm{S}_{\rm sy}}
\left( \bmat{\psi\otimes\chi_{\rm ap}}{
\bigl|\zeta_{\rm i} \hat{\mathbf{n}}_{\rm f}-\hat{\mathbf{S}}_{\rm i}
\bigr|^2}{\psi\otimes\chi_{\rm ap}}
\right) & = \zeta_{\rm i}^{2} - 2 \zeta_{\rm i} \eta_{\rm i} + s(s+1) \\ \intertext{and}
\sup_{\ket{\psi }\in \mathrm{S}_{\rm sy}}
\left( \bmat{\psi\otimes\chi_{\rm ap}}{
\bigl|\zeta_{\rm f} \hat{\mathbf{n}}_{\rm f}-\hat{\mathbf{S}}_{\rm f}
\bigr|^2}{\psi\otimes\chi_{\rm ap}}
\right) & = \zeta_{\rm f}^{2} - 2 \zeta_{\rm f} \eta_{\rm f} + s(s+1) \end{align*} These expressions are minimised if we choose $\zeta_{\rm i} = \eta_{\rm i}$, $\zeta_{\rm f} = \eta_{\rm f}$. We accordingly define the maximal rms error of retrodiction \begin{equation} \Delta_{\rm e i} S
= \left(\sup_{\ket{\psi }\in \mathrm{S}_{\rm sy}}
\left( \bmat{\psi\otimes\chi_{\rm ap}}{
\bigl|\eta_{\rm i} \hat{\mathbf{n}}_{\rm f}-\hat{\mathbf{S}}_{\rm i}
\bigr|^2}{\psi\otimes\chi_{\rm ap}}
\right)
\right)^{\frac{1}{2}} = \left( s + s^2-\eta_{\rm i}^2
\right)^{\frac{1}{2}} \label{eq: RetErrA} \end{equation} and the maximal rms error of prediction \begin{equation} \Delta_{\rm e f} S
= \left(\sup_{\ket{\psi }\in \mathrm{S}_{\rm sy}}
\left( \bmat{\psi\otimes\chi_{\rm ap}}{
\bigl|\eta_{\rm f} \hat{\mathbf{n}}_{\rm f}-\hat{\mathbf{S}}_{\rm f}
\bigr|^2}{\psi\otimes\chi_{\rm ap}}
\right)
\right)^{\frac{1}{2}} = \left( s + s^2-\eta_{\rm f}^2
\right)^{\frac{1}{2}} \label{eq: PreErrA} \end{equation} We also define the maximal rms disturbance by \begin{equation} \Delta_{\rm d} S
= \left(\sup_{\ket{\psi }\in \mathrm{S}_{\rm sy}}
\left( \bmat{\psi\otimes\chi_{\rm ap}}{
\bigl| \hat{\mathbf{S}}_{\rm f}-\hat{\mathbf{S}}_{\rm i}
\bigr|^2}{\psi\otimes\chi_{\rm ap}}
\right)
\right)^{\frac{1}{2}} = \sqrt{2}\left( s + s^2-\eta_{\rm d}
\right)^{\frac{1}{2}} \label{eq: DistA} \end{equation} Comparing these expressions with those given in refs.~\cite{self1,self2b} it can be seen that $\Delta_{\rm e i} S$ plays the same role in relation to the kind of measurement here considered as do the quantities $ \Delta_{\mathrm{ei}} x$, $ \Delta_{\mathrm{ei}} p$ in relation to joint measurements of position and momentum; that $\Delta_{\rm e f} S$ is the analogue of $ \Delta_{\mathrm{ef}} x$, $ \Delta_{\mathrm{ef}} p$; and that $\Delta_{\rm d} S$ is the analogue of $ \Delta_{\mathrm{d}} x$, $ \Delta_{\mathrm{d}} p$. A suitably modified version of the argument given in Section 5 of ref.~\cite{self1} may then be used to show that $\Delta_{\rm e i} S$ (and therefore $\eta_{\rm i}$) describes the retrodictive accuracy of the measurement; that $\Delta_{\rm e f} S$ (and therefore $\eta_{\rm f}$) describes the predictive accuracy; and that $\Delta_{\rm d} S$ (and therefore $\eta_{\rm d}$) describes the degree of disturbance caused by the measurement.
Finally, we note that the quantities $\eta_{\rm i}$, $\eta_{\rm f}$ and $\eta_{\rm d}$ can be expressed in terms of the operators $\hat{T}(\mathbf{n}, \xi)$ and $\hat{E}(\mathbf{n})$ defined earlier. In fact, comparing Eqs.~(\ref{eq: TDef}) and~(\ref{eq: SDef}) with Eqs.~(\ref{eq: rfDef}--\ref{eq: syfDef}) one finds {\allowdisplaybreaks \begin{align}
\eta_{\rm i} & = \inf_{\ket{\psi}\in\mathscr{S}_{\rm sy}}
\left( \int d \mathbf{n} \,
\bmat{\psi}{\tfrac{1}{2}
\bigl( \hat{E}(\ptv) \mathbf{n} \cdot \hat{\mathbf{S}} + \mathbf{n} \cdot \hat{\mathbf{S}} \, \hat{E}(\ptv)
\bigr)}{\psi}
\right) \label{eq: etaiTermsS} \\ \eta_{\rm f} & = \inf_{\ket{\psi}\in\mathscr{S}_{\rm sy}}
\left( \int d \mathbf{n} d\xi \,
\bmat{\psi}{ \hat{T}^{\dagger}(\ptv,\avv)\, \mathbf{n} \cdot \hat{\mathbf{S}} \, \hat{T}(\ptv,\avv)}{\psi}
\right) \label{eq: etafTermsT} \end{align} and \begin{multline} \eta_{\rm d}
=\inf_{\ket{\psi}\in\mathscr{S}_{\rm sy}}
\biggl( \int d \mathbf{n} d \xi \,
\sum_{a=1}^{3}
\bbra{\psi} \tfrac{1}{2}
\bigl( \hat{T}^{\dagger}(\ptv,\avv) \, \hat{S}_{a}\, \hat{T}(\ptv,\avv) \, \hat{S}_{a}
\bigr.
\biggr. \\
\biggl.
\bigl.
+
\hat{S}_{a}\, \hat{T}^{\dagger}(\ptv,\avv) \, \hat{S}_{a} \, \hat{T}(\ptv,\avv)
\bigr)\bket{\psi}
\biggr) \label{eq: etadTermsT} \end{multline} } \section{$\mathrm{SU}(2)$ Coherent States} \label{sec: CohSte} The task we now face is to establish upper bounds on the fidelities $\eta_{\rm i}$, $\eta_{\rm f}$ (or, equivalently, lower bounds on the errors $\Delta_{\rm e i} S$, $\Delta_{\rm e f} S$), and then to establish the form of the operators $\hat{T}(\ptv,\avv)$, $\hat{E}(\ptv)$ for which these bounds are achieved. The theory of $\mathrm{SU}(2)$ coherent states will play an important role in the argument. In order to fix notation we begin by summarising the relevant parts of this theory. For proofs of the statements made in this section see refs.~\cite{SpinCoh,Lieb,Perel,SpinCohB}.
For each unit vector $\mathbf{n} \in \mathbb{R}^3$ choose a vector $\boldsymbol{\theta}_{\mathbf{n}}\in \mathbb{R}^3$ with the property \begin{equation*}
\exp\bigl[-i \boldsymbol{\theta}_{\mathbf{n}} \cdot \hat{\mathbf{S}} \bigr] \,
\hat{S}_{3} \,
\exp\bigl[i \boldsymbol{\theta}_{\mathbf{n}} \cdot \hat{\mathbf{S}} \bigr] = \mathbf{n} \cdot \hat{\mathbf{S}} \end{equation*} Define \begin{equation}
\ket{\mathbf{n},m} = \exp\bigl[ - i \boldsymbol{\theta}_{\mathbf{n}} \cdot
\hat{\mathbf{S}} \bigr]\ket{m} \label{eq: mnKetDef} \end{equation} where $\ket{m}$ is the normalized eigenvector of $\hat{S}_{3}$ with eigenvalue $m$. We then have \begin{equation}
\mathbf{n} \cdot \hat{\mathbf{S}} \ket{\mathbf{n},m} = m \ket{\mathbf{n},m} \label{eq: CohSteEval} \end{equation} and \begin{equation*}
\frac{2s+1}{4 \pi}
\int d\mathbf{n} \,
\ket{\mathbf{n},m}\bra{\mathbf{n},m} = 1 \end{equation*} for all $m$.
We are especially interested in the states $\ket{\mathbf{n},s}$. These are the minimum uncertainty states, for which $\sum_{a=1}^{3} \bigl(\Delta \hat{S}_{a}\bigr)^2=s$. To denote them we employ the abbreviated notation \begin{equation}
\ket{\mathbf{n}}=\ket{\mathbf{n},s} \label{eq: nKetDef} \end{equation} The states $\ket{\mathbf{n}}$ so defined $\in \mathscr{H}_{\rm sy}$ and are eigenvectors of $\mathbf{n} \cdot \hat{\mathbf{S}}$. They need to be carefully distinguished from the states $\ket{\mathbf{n}, \xi}$ which $\in \mathscr{H}_{\rm ap}$ and are eigenvectors of $\hat{\mathbf{n}}$.
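For concreteness, the states $\ket{\mathbf{n}}$ are easy to construct numerically. The following sketch (our illustration only, written in Python with NumPy/SciPy; the function names are not standard) builds the spin-$s$ matrices, forms $\ket{\mathbf{n}}=\exp\bigl[-i\boldsymbol{\theta}_{\mathbf{n}}\cdot\hat{\mathbf{S}}\bigr]\ket{s}$ with $\boldsymbol{\theta}_{\mathbf{n}}$ the rotation carrying the $3$-axis into $\mathbf{n}$, and checks the eigenvalue property (\ref{eq: CohSteEval}) for $m=s$.

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def spin_matrices(s):
    """Spin-s matrices S1, S2, S3 in the basis |s>, |s-1>, ..., |-s>."""
    m = np.arange(s, -s - 1, -1)
    Sz = np.diag(m)
    # <m+1|S+|m> = sqrt(s(s+1) - m(m+1)); in this ordering S+ sits on the
    # first superdiagonal.
    Sp = np.diag(np.sqrt(s * (s + 1) - m[1:] * (m[1:] + 1)), 1)
    Sm = Sp.conj().T
    return (Sp + Sm) / 2, (Sp - Sm) / (2j), Sz

def coherent_state(s, theta, phi):
    """|n> for n = (sin(theta)cos(phi), sin(theta)sin(phi), cos(theta))."""
    S1, S2, S3 = spin_matrices(s)
    ax = np.array([-np.sin(phi), np.cos(phi), 0.0])   # rotation axis: z-hat x n-hat
    U = expm(-1j * theta * (ax[0] * S1 + ax[1] * S2 + ax[2] * S3))
    ket_top = np.zeros(int(2 * s + 1), dtype=complex)
    ket_top[0] = 1.0                                   # the state |m = s>
    return U @ ket_top

# check of the eigenvalue property for m = s, e.g. for s = 3/2:
s, theta, phi = 1.5, 0.7, 2.1
S1, S2, S3 = spin_matrices(s)
n = np.array([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi), np.cos(theta)])
ket = coherent_state(s, theta, phi)
print(np.allclose((n[0] * S1 + n[1] * S2 + n[2] * S3) @ ket, s * ket))  # True
\end{verbatim}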
Let $\hat{A}$ be any operator acting on $\mathscr{H}_{\rm sy}$. The covariant symbol corresponding to $\hat{A}$ is defined by \begin{equation*}
A_{\mathrm{cv}}(\mathbf{n}) = \bmat{\mathbf{n}}{\hat{A}}{\mathbf{n}} \end{equation*} The contravariant symbol corresponding to $\hat{A}$ is defined to be the unique function $A_{\mathrm{cn}}$ for which \begin{equation*}
\hat{A} = \frac{2s+1}{4 \pi}
\int d \mathbf{n} \,
A_{\mathrm{cn}}(\mathbf{n}) \ket{\mathbf{n}}\bra{\mathbf{n}} \end{equation*} and which satisfies \begin{equation*}
\int d\mathbf{n}' \Pi_{2 s}(\ptv,\ptv') A_{\mathrm{cn}}(\mathbf{n}') = A_{\mathrm{cn}}(\mathbf{n}) \end{equation*} where $\Pi_{2 s}(\ptv,\ptv')$ is the projection kernel \begin{equation}
\Pi_{2 s}(\ptv,\ptv') = \sum_{j=0}^{2 s}\sum_{m=-j}^{j}
Y^{\vphantom{*}}_{jm}(\mathbf{n}) Y^{*}_{jm}(\mathbf{n}') = \sum_{j=0}^{2 s} \frac{2 j+1}{4 \pi} P_{j}(\mathbf{n} \cdot \mathbf{n}') \label{eq: Pi0Proj} \end{equation} In these expressions the $Y_{jm}$ are spherical harmonics and the $P_{j}$ are Legendre polynomials.
It can be shown that, given any square integrable function $f$, \begin{equation}
\hat{A} = \frac{2s+1}{4 \pi}
\int d\mathbf{n} \,f(\mathbf{n}) \ket{\mathbf{n}}\bra{\mathbf{n}} \label{eq: fCondA} \end{equation} if and only if \begin{equation}
\int d\mathbf{n}' \Pi_{2 s}(\ptv,\ptv') f(\mathbf{n}') = A_{\mathrm{cn}} (\mathbf{n}) \label{eq: fCondB} \end{equation} for almost all $\mathbf{n}$.
The covariant (respectively contravariant) symbol of an operator is often referred to as the $Q$ (respectively $P$) symbol of that operator. However, we will find it more convenient to reserve this notation for the symbols corresponding specifically to the density matrix, scaled by a factor $(2s+1)/(4 \pi)$: \begin{align}
Q(\mathbf{n}) & = \frac{2s+1}{4 \pi}\rho_{\mathrm{cv}} (\mathbf{n}) \\
P(\mathbf{n}) & = \frac{2s+1}{4 \pi}\rho_{\mathrm{cn}} (\mathbf{n}) \label{eq: PfncDef} \end{align} With this rescaling the $Q$ and $P$-functions satisfy the normalisation condition \begin{equation*}
\int d\mathbf{n} \, Q(\mathbf{n}) = \int d\mathbf{n} \, P(\mathbf{n}) =1 \end{equation*} In particular, $Q(\mathbf{n})$ is a probability density function. As we will see, it is in fact the probability density function describing the outcome of a retrodictively optimal type 1 measurement.
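As a purely illustrative sanity check (relying on the coherent-state sketch given above, and on nothing else), one may verify the normalisation of $Q$ numerically for a randomly chosen pure state, approximating the integral over the sphere by a simple product quadrature:

\begin{verbatim}
s = 1.5
rng = np.random.default_rng(0)
psi = rng.normal(size=int(2 * s + 1)) + 1j * rng.normal(size=int(2 * s + 1))
psi /= np.linalg.norm(psi)                       # a random pure state

thetas = np.linspace(0.0, np.pi, 101)
phis = np.linspace(0.0, 2 * np.pi, 200, endpoint=False)
dt, dp = thetas[1] - thetas[0], phis[1] - phis[0]

total = 0.0
for t in thetas:
    for p in phis:
        ket = coherent_state(s, t, p)
        Q = (2 * s + 1) / (4 * np.pi) * abs(np.vdot(ket, psi)) ** 2
        total += Q * np.sin(t) * dt * dp
print(total)   # approximately 1, up to the quadrature error
\end{verbatim}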
\section{Retrodictively Optimal Type 1 Measurements} \label{sec: RetOpt} The purpose of this section is to investigate those processes which maximise the retrodictive fidelity. We begin by establishing the following bound on $\eta_{\rm i}$: \begin{equation}
\eta_{\rm i} \le s \label{eq: etaiCondA} \end{equation} which, in view of Eq.~(\ref{eq: RetErrA}), implies \begin{equation}
\Delta_{\rm e i} S \ge \sqrt{s} \label{eq: RetErrRelA} \end{equation} We will refer to Inequality~(\ref{eq: RetErrRelA}) as the retrodictive error relation. It can be seen that it has the same form as the ordinary uncertainty relation, $\Delta S \ge \sqrt{s}$. It is the analogue, for the kind of measurement here considered, of the inequality $ \Delta_{\mathrm{ei}} x \, \Delta_{\mathrm{ei}} p \ge 1/2$ proved in ref.~\cite{self2b} for joint measurements of position and momentum (in units such that $\hbar=1$).
In order to prove this result we note that it follows from Eqs.~(\ref{eq: SDef}) and~(\ref{eq: etaiTermsS}) that \begin{equation*}
(2s+1) \eta_{\rm i} \le \int d \mathbf{n} d \xi \,
\Tr \bigl( \mathbf{n} \cdot \hat{\mathbf{S}} \, \hat{T}^{\dagger}(\ptv,\avv) \hat{T}(\ptv,\avv) \bigr) \end{equation*} In view of Eqs.~(\ref{eq: SDef}) and~(\ref{eq: SNorm}) we also have \begin{equation*}
\int d\mathbf{n} d\xi \,
\Tr \bigl( \hat{T}^{\dagger}(\ptv,\avv) \hat{T}(\ptv,\avv) \bigr) = (2s +1) \end{equation*} Consequently \begin{equation}
\int d\mathbf{n} d\xi \,
\Tr\bigl( (\eta_{\rm i}-\mathbf{n} \cdot \hat{\mathbf{S}})
\hat{T}^{\dagger}(\ptv,\avv) \hat{T}(\ptv,\avv) \bigr) \le 0 \label{eq: RetFidCondC} \end{equation} For each fixed $\mathbf{n}$ the kets $\ket{\mathbf{n},m}$ defined by Eq.~(\ref{eq: mnKetDef}) constitute an orthonormal basis. We may therefore write \begin{equation}
\hat{T}(\ptv,\avv) = \sum_{m,m'=-s}^{s}
T_{m m'}(\mathbf{n},\xi) \ket{\mathbf{n},m}\bra{\mathbf{n},m'} \label{eq: TExpand} \end{equation} for suitable coefficients $T_{m m'}$. Substituting this expression in Inequality~(\ref{eq: RetFidCondC}) gives \begin{equation}
\sum_{m, m'=-s}^{s}
\left(
(\eta_{\rm i}-m')
\int d\mathbf{n} d\xi\,
|T_{m m'}(\mathbf{n},\xi)|^2
\right) \le 0 \label{eq: RetFidCondD} \end{equation} Inequality~(\ref{eq: etaiCondA}) is now immediate.
We next show that the retrodictive fidelity achieves its maximum value $\eta_{\rm i} = s$ if and only if $\hat{E}(\ptv)$ is of the form \begin{equation}
\hat{E}(\ptv) = \frac{2s+1}{4 \pi}g(\mathbf{n}) \ket{\mathbf{n}}\bra{\mathbf{n}} \label{eq: SforRfMax} \end{equation} for almost all $\mathbf{n}$, where $\ket{\mathbf{n}}$ is the state defined by Eq.~(\ref{eq: nKetDef}), and where $g$ is any function satisfying \begin{equation}
\int d\mathbf{n}' \, \Pi_{2 s}(\ptv,\ptv') g(\mathbf{n}') =1 \label{eq: gCond} \end{equation}[$\Pi_{2 s}(\ptv,\ptv')$ being the projection kernel defined by Eq.~(\ref{eq: Pi0Proj})].
In fact, setting $\eta_{\rm i}=s$ in Inequality~(\ref{eq: RetFidCondD}) gives \begin{equation}
\sum_{m, m'=-s}^{s}
\left(
(s-m')
\int d\mathbf{n} d\xi\,
|T_{m m'}(\mathbf{n},\xi)|^2
\right) \le 0 \end{equation} from which it follows that the coefficients $T_{m m'}$ must be of the form \begin{equation*}
T_{m m'}(\mathbf{n},\xi) = \left( \frac{2s+1}{4 \pi}\right)^{\frac{1}{2}} \delta_{m' s} \, g_{m}(\mathbf{n}, \xi)
\hat{T}(\ptv,\avv) = \left( \frac{2s+1}{4 \pi}\right)^{\frac{1}{2}}
\ket{g(\mathbf{n},\xi)}\bra{\mathbf{n}} \label{eq: RetOptTCondC} \end{equation} for almost all $\mathbf{n}$, $\xi$, where \begin{equation*}
\ket{g(\mathbf{n},\xi)} = \sum_{m=-s}^{s} g_{m}(\mathbf{n},\xi)
\ket{\mathbf{n},m} \end{equation*} Setting \begin{equation*}
g(\mathbf{n}) = \int d\xi \, \bigl\|\ket{g(\mathbf{n},\xi)} \bigr\|^2 \end{equation*} and using Eq.~(\ref{eq: SDef}), we deduce that $\hat{E}(\ptv)$ is of the form specified by Eq.~(\ref{eq: SforRfMax}). It follows from Eqs.~(\ref{eq: SNorm}), (\ref{eq: fCondA}) and~(\ref{eq: fCondB}), and the fact that $\mathrm{id}_{\rm \mathrm{cn}}(\mathbf{n})=1$, that the function $g$ must satisfy
Eq.~(\ref{eq: gCond}). This proves that the condition represented by Eqs.~(\ref{eq: SforRfMax}) and~(\ref{eq: gCond}) is necessary.
Suppose, on the other hand, that $\hat{E}(\ptv)$ is given by Eq.~(\ref{eq: SforRfMax}), with $g$ satisfying Eq.~(\ref{eq: gCond}). Using Eqs.~(\ref{eq: etaiTermsS}), (\ref{eq: fCondA}) and~(\ref{eq: fCondB}) we deduce \begin{equation*}
\eta_{\rm i} =
\inf_{\ket{\psi}\in\mathscr{S}_{\rm sy}}
\left(\frac{2s+1}{4 \pi}\int d\mathbf{n} \,
s g(\mathbf{n}) \bigl|\overlap{\mathbf{n}}{\psi}
\bigr|^2
\right)
= s \end{equation*} which shows that the condition is also sufficient.
The condition $\eta_{\rm i} =s$ is not, by itself, enough to determine the distribution of measured values. However, the requirement that the retrodictive fidelity be maximised is not the only property which it is natural to require of a measurement that is to count as optimal. It is also natural to require that the measurement does not pick out any distinguished spatial directions. We accordingly define an isotropic measurement to be one which has the property that, if the initial system state density matrix takes the rotationally invariant form \begin{equation*}
\hat{\rho}_{\rm i} = \frac{1}{2 s+1} \end{equation*} then the distribution of measured values is also rotationally invariant: \begin{equation*}
\rho_{\rm val}(\mathbf{n}) = \frac{1}{4 \pi} \end{equation*} for all $\mathbf{n}$.
We define a retrodictively optimal type 1 measurement process to be an isotropic process for which the retrodictive fidelity is maximal, $\eta_{\rm i}=s$. It is then straightforward to verify that a type 1 measurement process is retrodictively optimal if and only if $\hat{E}(\ptv) = (2s+1)/(4 \pi) \ket{\mathbf{n}}\bra{\mathbf{n}}$. This is the POVM which has previously been discussed by Busch and Schroeck~\cite{BuschSpin}, and others~\cite{Peres1,Busch,Schroeck,Grabowski}.
We see from Eq.~(\ref{eq: rhoValTermsRhoi}) that the measurement is retrodictively optimal if and only if the distribution of measured values is given by \begin{equation*}
\rho_{\rm val}(\mathbf{n}) = Q_{\rm i} (\mathbf{n}) \end{equation*} for all $\mathbf{n}$, where $Q_{\rm i}$ is the $Q$-function corresponding to the initial system state density matrix: \begin{equation*}
Q_{\rm i} (\mathbf{n}) = \frac{2s+1}{4 \pi}\mat{\mathbf{n}}{\hat{\rho}_{\rm i}}{\mathbf{n}} \end{equation*}
In terms of the operator $\hat{T}(\ptv,\avv)$, the necessary and sufficient condition for a type 1 measurement to be retrodictively optimal is that [see Eq.~(\ref{eq: RetOptTCondC})] \begin{equation}
\hat{T}(\ptv,\avv) = \left( \frac{2s+1}{4 \pi}\right)^{\frac{1}{2}}
\ket{g(\mathbf{n},\xi)}\bra{\mathbf{n}} \label{eq: RetOptTCondA} \end{equation} where $\ket{g(\mathbf{n},\xi)}$ is any family of kets with the property \begin{equation}
\int d\xi \, \bigl\|\ket{g(\mathbf{n},\xi)} \bigr\|^2 = 1 \label{eq: RetOptTCondB} \end{equation} for all $\mathbf{n}$.
We conclude this section by showing that for retrodictively optimal type 1 measurements $\langle \hat{\mathbf{S}}_{\rm i}\rangle= (s+1)\langle \hat{\mathbf{n}}_{\rm f}\rangle$. In fact \begin{align*}
\bmat{\psi \otimes \chi_{\rm ap}}{\hat{\mathbf{n}}_{\rm f}}{\psi\otimes \chi_{\rm ap}} & = \int d\mathbf{n} \,\mathbf{n}\, \bmat{\psi}{\hat{E}(\ptv)}{\psi} \\ & = \frac{2s+1}{4 \pi}
\int d\mathbf{n} \, \mathbf{n} \, \bigl| \overlap{\psi}{\mathbf{n}}\bigr|^2 \\ & = \frac{1}{s+1}
\bmat{\psi\otimes \chi_{\rm ap}}{\hat{\mathbf{S}}_{\rm i}}{\psi\otimes \chi_{\rm ap}} \end{align*} where we have used the fact~\cite{Lieb} that $(s+1)\mathbf{n}$ is the contravariant symbol corresponding to $\hat{\mathbf{S}}$.
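This identity is easily checked numerically with the same ingredients as before (again only an illustration, reusing the state \texttt{psi} and the spherical grid of the previous listing): the vector $\frac{2s+1}{4\pi}\int d\mathbf{n}\,\mathbf{n}\,\bigl|\overlap{\mathbf{n}}{\psi}\bigr|^2$ agrees with $\bmat{\psi}{\hat{\mathbf{S}}}{\psi}/(s+1)$ up to the quadrature error.

\begin{verbatim}
S1, S2, S3 = spin_matrices(s)
lhs = np.zeros(3)
for t in thetas:
    for p in phis:
        ket = coherent_state(s, t, p)
        w = (2 * s + 1) / (4 * np.pi) * abs(np.vdot(ket, psi)) ** 2
        nvec = np.array([np.sin(t) * np.cos(p),
                         np.sin(t) * np.sin(p), np.cos(t)])
        lhs += w * nvec * np.sin(t) * dt * dp

rhs = np.array([np.vdot(psi, S @ psi).real for S in (S1, S2, S3)]) / (s + 1)
print(lhs, rhs)   # the two vectors agree up to the quadrature error
\end{verbatim}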
\section{Predictively Optimal Type 1 Measurements} \label{sec: PreOpt} The purpose of this section is to characterise the form of the operator $\hat{T}(\ptv,\avv)$ and function $\rho_{\rm val}(\mathbf{n})$ for processes which maximise the predictive fidelity, $\eta_{\rm f}$. In the last section we showed that, for retrodictively optimal type 1 measurements, $\rho_{\rm val}$ coincides with the initial system state $Q$-function. In this section we will show that if the measurement is predictively optimal, then $\rho_{\rm val}$ is related to the final system state $P$-function.
We begin by establishing an upper bound on $\eta_{\rm f}$. By a similar argument to the one leading to Inequality~(\ref{eq: RetFidCondC}) we find \begin{equation*}
\int d\mathbf{n} d\xi \,
\Tr\bigl( (\eta_{\rm f}-\mathbf{n} \cdot \hat{\mathbf{S}}) \,\hat{T}(\ptv,\avv)\, \hat{T}^{\dagger}(\ptv,\avv) \bigr) \le 0 \end{equation*} which only differs from Inequality~(\ref{eq: RetFidCondC}) in the replacement of $\eta_{\rm i}$ by $\eta_{\rm f}$, and in the fact that the order of $\hat{T}(\ptv,\avv)$ and $\hat{T}^{\dagger}(\ptv,\avv)$ is reversed. The analysis therefore proceeds in nearly the same way. Corresponding to Inequality~(\ref{eq: etaiCondA}) we deduce \begin{equation}
\eta_{\rm f} \le s \label{eq: etafCondA} \end{equation} which, in view of Eq.~(\ref{eq: PreErrA}), implies \begin{equation}
\Delta_{\rm e f} S \ge \sqrt{s} \label{eq: PreErrRelA} \end{equation} We will refer to Inequality~(\ref{eq: PreErrRelA}) as the predictive error relation. It is the analogue, for measurements of spin direction, of the inequality $ \Delta_{\mathrm{ef}} x \, \Delta_{\mathrm{ef}} p \ge 1/2$ proved in ref.~\cite{self2b} for joint measurements of position and momentum (units chosen such that $\hbar=1$).
We define a predictively optimal type 1 measurement to be one for which the predictive fidelity is maximal, $\eta_{\rm f} =s$ (unlike the case of retrodictive optimality, we do not impose the requirement that the measurement also be isotropic). By a similar argument to the one given in the last section we find, corresponding to Eqs.~(\ref{eq: RetOptTCondA}) and~(\ref{eq: RetOptTCondB}), that the necessary and sufficient condition for a type 1 measurement to be predictively optimal is that $\hat{T}(\ptv,\avv)$ be of the form \begin{equation}
\hat{T}(\ptv,\avv) = \left( \frac{2 s+1}{4 \pi}\right)^{\frac{1}{2}}
\ket{\mathbf{n}} \bra{h(\mathbf{n},\xi)} \label{eq: PreOptTCondA} \end{equation} for almost all $\mathbf{n}$, $\xi$, where $\ket{h(\mathbf{n},\xi)}$ is any family of kets satisfying the completeness relation \begin{equation}
\frac{2 s+1}{4 \pi}
\int d\mathbf{n} d\xi \, \ket{h(\mathbf{n},\xi)} \bra{h(\mathbf{n},\xi)} = 1 \label{eq: PreOptTCondB} \end{equation}
If $\hat{T}(\ptv,\avv)$ is of this form it follows from Eqs.~(\ref{eq: SDef}) and~(\ref{eq: rhoValTermsRhoi}) that \begin{equation}
\rho_{\rm val}(\mathbf{n}) = \frac{2 s+1}{4 \pi}
\int d\xi \,
\bmat{h(\mathbf{n},\xi)}{\hat{\rho}_{\rm i}}{h(\mathbf{n},\xi)} \label{eq: RetroOptPDF} \end{equation} where $\hat{\rho}_{\rm i}$ is the initial system density matrix. Now suppose that the measured value of $\mathbf{n}$ has been recorded to lie in the region $\mathscr{R}$ of the unit 2-sphere. Then, using Eqs.~(\ref{eq: rhofTermsT}), (\ref{eq: PreOptTCondA}) and~(\ref{eq: RetroOptPDF}), we find \begin{equation*} \hat{\rho}_{\rm f} = \frac{1}{p_{\mathscr{R}}}
\int_{\mathscr{R}} d\mathbf{n} \,
\rho_{\rm val} (\mathbf{n}) \ket{\mathbf{n}}\bra{\mathbf{n}} \end{equation*} where $p_{\mathscr{R}}$ is the probability of recording the result $\mathbf{n} \in\mathscr{R}$, and where $\hat{\rho}_{\rm f}$ is the final system reduced density matrix. In view of Eqs.~(\ref{eq: fCondA}), (\ref{eq: fCondB}) and~(\ref{eq: PfncDef}) this means that the final system state $P$-function $P_{\rm f}$ is given by \begin{equation*}
P_{\rm f} (\mathbf{n}) = \frac{1}{p_{\mathscr{R}}}
\int_{\mathscr{R}} d\mathbf{n}' \,
\Pi_{2s}(\mathbf{n},\mathbf{n}') \rho_{\rm val}(\mathbf{n}') \end{equation*} for almost all $\mathbf{n}$, where $\Pi_{2s}$ is the projection kernel defined by Eq.~(\ref{eq: Pi0Proj}).
If $\mathscr{R}$ is a sufficiently small region surrounding the point $\mathbf{n}_{0}$ then \begin{equation*}
\hat{\rho}_{\rm f} \approx \ket{\mathbf{n}_{0}}\bra{\mathbf{n}_{0}} \end{equation*}
Finally, we note that for a predictively optimal type 1 measurement $\langle \hat{\mathbf{S}}_{\rm f} \rangle= s\langle \hat{\mathbf{n}}_{\rm f}\rangle$. In fact \begin{align*}
\bmat{\psi \otimes \chi_{\rm ap}}{\hat{\mathbf{S}}_{\rm f}}{\psi \otimes \chi_{\rm ap}} & = \int d\mathbf{n} d\xi \,
\bmat{\psi}{\hat{T}^{\dagger}(\ptv,\avv) \, \hat{\mathbf{S}} \, \hat{T}(\ptv,\avv)}{\psi} \\ & = \frac{2 s+1}{4 \pi}\int d\mathbf{n} d\xi\bmat{\mathbf{n}}{\hat{\mathbf{S}}}{\mathbf{n}}
\boverlap{\psi}{h(\mathbf{n},\xi)}
\boverlap{h(\mathbf{n},\xi)}{\psi} \\ & = s \int d\mathbf{n} \, \mathbf{n}
\bmat{\psi}{\hat{E}(\ptv)}{\psi} \\ & = s \bmat{\psi \otimes \chi_{\rm ap}}{\hat{\mathbf{n}}_{\rm f}}{\psi \otimes \chi_{\rm ap}} \end{align*} where we have used the fact~\cite{Lieb} that $s \mathbf{n}$ is the covariant symbol corresponding to $\hat{\mathbf{S}}$.
\section{Completely Optimal Type 1 Measurements} \label{sec: CompOpt} We define a completely optimal type 1 measurement to be one which is both retrodictively and predictively optimal. Referring to Eqs.~(\ref{eq: RetOptTCondA}), (\ref{eq: RetOptTCondB}), (\ref{eq: PreOptTCondA}), and~(\ref{eq: PreOptTCondB}) we see that the necessary and sufficient condition for this to be true is that $\hat{T}(\ptv,\avv)$ be of the form \begin{equation*}
\hat{T}(\ptv,\avv) = \left(\frac{2 s+1}{4\pi}\right)^{\frac{1}{2}} f(\mathbf{n}, \xi)
\ket{\mathbf{n}}\bra{\mathbf{n}} \end{equation*} where $f$ is any function with the property \begin{equation*}
\int d \xi \, \left| f(\mathbf{n}, \xi)\right|^2 =1 \end{equation*} for all $\mathbf{n}$.
Expressed in terms of the operator $\hat{U}$ the condition reads [see Eq.~(\ref{eq: TDef})] \begin{equation*}
\bigl( \bra{m_1}\otimes\bra{\mathbf{n},\xi} \bigr)
\hat{U}
\bigl( \ket{m_2} \otimes \ket{\chi_{\rm ap}}\bigr) = \left(\frac{2 s+1}{4\pi}\right)^{\frac{1}{2}} f(\mathbf{n}, \xi)
\overlap{m_1}{\mathbf{n}} \overlap{\mathbf{n}}{m_2} \end{equation*} It is straightforward to verify that there do exist unitary operators $\hat{U}$ with this property. It follows that completely optimal measurements are mathematically well-defined. Whether they can be realised physically is, of course, rather less straightforward.
Referring to Eq.~(\ref{eq: etadTermsT}) we see that, for a completely optimal measurement, the quantity $\eta_{\rm d}$, characterising the extent to which the system is disturbed by the measurement process, is given by \begin{equation} \eta_{\rm d}
= \inf_{\ket{\psi}\in\mathscr{S}_{\rm sy}}
\biggl( \frac{2 s+1}{4 \pi}
\int d \mathbf{n} \, \frac{s}{2} \Bigl(
\boverlap{\psi}{\mathbf{n}} \bmat{\mathbf{n}}{\mathbf{n} \cdot \hat{\mathbf{S}}}{\psi}
+ \bmat{\psi}{\mathbf{n} \cdot \hat{\mathbf{S}}}{\mathbf{n}}\boverlap{\mathbf{n}}{\psi}
\Bigr)
\biggr) = s^2 \label{eq: ComOptetad} \end{equation} where we have used the fact~\cite{Lieb} that $s\mathbf{n}$ is the covariant symbol corresponding to $\hat{\mathbf{S}} $. In view of Eq.~(\ref{eq: DistA}) it follows that \begin{equation*}
\Delta_{\rm d} S = \sqrt{2 s} \end{equation*}
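One way to see directly that the integral in Eq.~(\ref{eq: ComOptetad}) takes the constant value $s^2$: since $\ket{\mathbf{n}}$ is the maximal-weight eigenstate of $\mathbf{n} \cdot \hat{\mathbf{S}}$, each of the two terms in the parentheses reduces to $s \, \boverlap{\psi}{\mathbf{n}} \boverlap{\mathbf{n}}{\psi}$, and the completeness relation $\frac{2 s+1}{4 \pi} \int d\mathbf{n} \, \ket{\mathbf{n}}\bra{\mathbf{n}} = 1$ then gives the value $s^2$ for every normalised state $\ket{\psi}$, so that the infimum is $s^2$.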
\section{Type 2 Measurements} \label{sec: type2} In the preceding sections we have been concerned with type 1 measurements, for which the pointer position is constrained to lie on the unit $2$-sphere. We now turn our attention to type 2 measurements. As explained in the Introduction, these are measurements for which the outcome is represented by the three \emph{independent} commuting components of a vector $\hat{\boldsymbol{\mu}}$, no constraint being placed on the squared modulus $\hat{\mu}^2=\sum_{a=1}^{3} \hat{\mu}^{2}_{a}$. We will show that the more nearly a type 2 measurement approaches optimality, the more nearly it approximates an (optimal) type 1 measurement.
We first need to characterise the accuracy of a type 2 measurement. A similar analysis to that given in Section~\ref{sec: POVM} can be carried through for type 2 measurements, with the replacement $\mathbf{n}\rightarrow\boldsymbol{\mu}$. As before, we denote the additional apparatus degrees of freedom by $\hat{\xi}=(\hat{\xi}_{1},\dots,\hat{\xi}_{N})$, so that the eigenkets $\ket{\boldsymbol{\mu},\xi}$ comprise an orthonormal basis for the apparatus state space, $\mathscr{H}_{\rm ap}$. Let $\ket{\chi_{\rm ap}}$ be the initial apparatus state, and let $\hat{U}$ be the unitary operator describing the evolution brought about by the measurement interaction. Then, if the initial system state is $\ket{\psi}$, the final state of system$+$apparatus, immediately after the measurement interaction has ended, will be given by $\hat{U}\ket{\psi \otimes \chi_{\rm ap}}$. Corresponding to Eqs.~(\ref{eq: TDef}) and~(\ref{eq: SDef}) we define \begin{equation*} \hat{T}(\mptv,\avv) = \sum_{m,m'=-s}^{s}
\bigl(\bra{m}\otimes
\bra{\boldsymbol{\mu},\xi}\bigr)
\hat{U}
\bigl(\ket{m'}\otimes \ket{\chi_{\rm ap}} \bigr)
\ket{m}\bra{m'} \end{equation*} and \begin{equation}
\hat{E}(\mptv) = \int d \xi \, \hat{T}^{\dagger}(\mptv,\avv) \, \hat{T}(\mptv,\avv) \label{eq: EType2Def} \end{equation} Corresponding to Eqs.~(\ref{eq: RetErrA}) and~(\ref{eq: PreErrA}) we define the maximal rms errors of retrodiction and prediction by \begin{align} \Delta_{\rm e i} S & = \left(\sup_{\ket{\psi }\in \mathrm{S}_{\rm sy}}
\left( \bmat{\psi\otimes\chi_{\rm ap}}{
\bigl| \hat{\boldsymbol{\mu}}_{\rm f}-\hat{\mathbf{S}}_{\rm i}
\bigr|^2}{\psi\otimes\chi_{\rm ap}}
\right)
\right)^{\frac{1}{2}} \label{eq: RetErrB} \\ \intertext{and} \Delta_{\rm e f} S & = \left(\sup_{\ket{\psi }\in \mathrm{S}_{\rm sy}}
\left( \bmat{\psi\otimes\chi_{\rm ap}}{
\bigl|\hat{\boldsymbol{\mu}}_{\rm f}-\hat{\mathbf{S}}_{\rm f}
\bigr|^2}{\psi\otimes\chi_{\rm ap}}
\right)
\right)^{\frac{1}{2}} \label{eq: PreErrB} \end{align} where $\hat{\mathbf{S}}_{\rm i}=\hat{\mathbf{S}}$, $\hat{\mathbf{S}}_{\rm f}=\hat{U}^{\dagger} \hat{\mathbf{S}} \hat{U}$ and $\hat{\boldsymbol{\mu}}_{\rm f} = \hat{U}^{\dagger} \hat{\boldsymbol{\mu}} \hat{U}$. It can be seen that Eq.~(\ref{eq: RetErrB}) agrees with Eq.~(\ref{eq: RetErrA}) if one replaces $\hat{\boldsymbol{\mu}}_{\rm f} \rightarrow \eta_{\rm i} \hat{\mathbf{n}}_{\rm f}$, and that Eq.~(\ref{eq: PreErrB}) agrees with Eq.~(\ref{eq: PreErrA}) if one replaces $ \hat{\boldsymbol{\mu}}_{\rm f} \rightarrow \eta_{\rm f} \hat{\mathbf{n}}_{\rm f}$.
In terms of the operators $\hat{E}(\mptv)$ and $\hat{T}(\mptv,\avv)$ we have \begin{equation}
\Delta_{\rm e i} S
=
\Biggl(\sup_{\ket{\psi}\in\mathscr{S}_{\rm sy}}
\biggl( \int d \boldsymbol{\mu} \,
\sum_{a=1}^{3}
\bmat{\psi}{
(\mu_{a}-\hat{S}_{a}) \, \hat{E}(\mptv) \, (\mu_{a}-\hat{S}_{a})}{
\psi} \biggr) \Biggr)^{\frac{1}{2}} \label{eq: RetErrType2Def} \end{equation} and \begin{equation}
\Delta_{\rm e f} S
=
\Biggl(
\sup_{\ket{\psi}\in\mathscr{S}_{\rm sy}}
\biggl( \int d \boldsymbol{\mu} d \xi \,
\bmat{\psi}{\hat{T}^{\dagger}(\boldsymbol{\mu},\xi) \,
\bigl|\hat{\boldsymbol{\mu}} - \hat{\mathbf{S}}\bigr|^2 \,
\hat{T}(\boldsymbol{\mu},\xi)}{\psi}
\biggr)\Biggr)^{\frac{1}{2}} \label{eq: PreErrType2Def} \end{equation}
We next show that, corresponding to Inequality~(\ref{eq: RetErrRelA}), one has the retrodictive error relationship for type 2 measurements \begin{equation}
\Delta_{\rm e i} S \ge \sqrt{s} \label{eq: RetErrRelB} \end{equation} and that, corresponding to Inequality~(\ref{eq: PreErrRelA}), one has the predictive error relationship for type 2 measurements \begin{equation}
\Delta_{\rm e f} S \ge \sqrt{s} \label{eq: PreErrRelB} \end{equation} In fact, it follows from Eqs.~(\ref{eq: EType2Def}), (\ref{eq: RetErrType2Def}) and~(\ref{eq: PreErrType2Def}) that {\allowdisplaybreaks \begin{align*} (2 s+1) \left(\Delta_{\rm e i} S\right)^2 & \ge
\int d\boldsymbol{\mu} d\xi \,
\Tr \left(\bigl|\boldsymbol{\mu}-\hat{\mathbf{S}}\bigr|^2 \, \hat{T}^{\dagger}(\mptv,\avv) \, \hat{T}(\mptv,\avv) \right) \\ \intertext{and} (2 s+1) \left(\Delta_{\rm e f} S \right)^2 & \ge
\int d\boldsymbol{\mu} d\xi \,
\Tr \left(\bigl|\boldsymbol{\mu}-\hat{\mathbf{S}}\bigr|^2 \, \hat{T}(\mptv,\avv)\, \hat{T}^{\dagger}(\mptv,\avv)\right) \end{align*} Using the fact \begin{equation*}
(2 s+1) = \int d\boldsymbol{\mu} d\xi \,
\Tr\left(\hat{T}^{\dagger}(\mptv,\avv) \, \hat{T}(\mptv,\avv)\right) \end{equation*} we deduce \begin{align} \int d\boldsymbol{\mu} d\xi \,
\Tr \biggl(\left(\bigl|\boldsymbol{\mu}-\hat{\mathbf{S}}\bigr|^2 -
\left(\Delta_{\rm e i} S\right)^2
\right)
\hat{T}^{\dagger}(\mptv,\avv)\, \hat{T}(\mptv,\avv) \biggr) & \le 0 \label{eq: RetErrBCondA} \\ \intertext{and} \int d\boldsymbol{\mu} d\xi \,
\Tr \biggl(\left(\bigl|\boldsymbol{\mu}-\hat{\mathbf{S}} \bigr|^2 -
\left(\Delta_{\rm e f} S\right)^2
\right)
\hat{T}(\mptv,\avv) \, \hat{T}^{\dagger}(\mptv,\avv)\biggr) & \le 0 \label{eq: PreErrBCondA} \end{align} Now make the expansion} \begin{equation*}
\hat{T}(\mptv,\avv) = \sum_{m,m'=-s}^{s}
T_{m m'} (\boldsymbol{\mu},\xi) \ket{\mathbf{n},m}\bra{\mathbf{n},m'} \end{equation*} where $\mathbf{n} = \boldsymbol{\mu} / \mu$ and $\ket{\mathbf{n},m}$ is the state defined by Eq.~(\ref{eq: mnKetDef}). Using this expansion Inequalities~(\ref{eq: RetErrBCondA}) and~(\ref{eq: PreErrBCondA}) become \begin{align}
\sum_{m,m'=-s}^{s}
\int d\boldsymbol{\mu} d\xi \,
\Bigl(\bigl(\mu-m'\bigr)^2+\bigl(s^2
- m'\vphantom{s}^2\bigr)
+\bigl(s-(\Delta_{\rm e i} S)^2 \bigr)
\Bigr)
\bigl| T_{m m'}(\boldsymbol{\mu},\xi) \bigr|^2 & \le 0 \label{eq: RetErrBCondB} \\ \intertext{and}
\sum_{m,m'=-s}^{s}
\int d\boldsymbol{\mu} d\xi \,
\Bigl(\bigl(\mu-m\bigr)^2+\bigl(s^2
- m^2\bigr)
+\bigl(s-(\Delta_{\rm e f} S)^2 \bigr)
\Bigr)
\bigl| T_{m m'}(\boldsymbol{\mu},\xi) \bigr|^2 & \le 0 \label{eq: PreErrBCondB} \end{align} Inequalities~(\ref{eq: RetErrRelB}) and~(\ref{eq: PreErrRelB}) are now immediate.
Setting $\Delta_{\rm e i} S = \sqrt{s}$ in Inequality~(\ref{eq: RetErrBCondB}) gives \begin{equation*}
\sum_{m,m'=-s}^{s}
\int d\boldsymbol{\mu} d\xi \,
\Bigl(\bigl(\mu-m'\bigr)^2+\bigl(s^2
- m'\vphantom{s}^2\bigr)
\Bigr)
\bigl| T_{m m'}(\boldsymbol{\mu},\xi) \bigr|^2 \le 0 \end{equation*} which implies \begin{equation*}
\left| T_{m m'}(\boldsymbol{\mu},\xi)\right|^2 = g_{m}(\mathbf{n},\xi) \, \delta_{m' s} \, \delta(\mu -s) \end{equation*} for suitable functions $g_{m}$. However, this is not possible, since the square root of the $\delta$-function is not defined. It follows that the lower bound set by Inequality~(\ref{eq: RetErrRelB}) is not precisely achievable. Nor is the lower bound set by Inequality~(\ref{eq: PreErrRelB}).
It is, however, possible to approach the lower bounds set by Inequalities~(\ref{eq: RetErrRelB}) and~(\ref{eq: PreErrRelB}) arbitrarily closely. It can be seen that as $\Delta_{\rm e i} S \rightarrow \sqrt{s}$ (respectively, $\Delta_{\rm e f} S \rightarrow \sqrt{s}$), then $\hat{T}(\mptv,\avv)$ and $\hat{E}(\mptv)$ become more and more strongly concentrated on the surface $\mu =s$. In other words, the measurement more and more nearly approaches a type 1 measurement of maximal retrodictive (respectively, predictive) accuracy, with pointer observable $\hat{\mathbf{n}}=\hat{\boldsymbol{\mu}}/s$. \section{Conclusion} \label{sec: conclusion} There are a number of ways in which one might seek to develop the results reported in this paper.
In the first place, although we showed that $\Delta_{\rm d} S = \sqrt{2 s}$ for a completely optimal type 1 measurement, we did not derive error-disturbance relationships, analogous to the inequalities $\Delta_{\rm e i} x \,\Delta_{\rm d} p$, $\Delta_{\rm e i} p\, \Delta_{\rm d} x$, $\Delta_{\rm e f} x \, \Delta_{\rm d} p$, $\Delta_{\rm e f} p \, \Delta_{\rm d} x \ge 1/2$ (in units such that $\hbar=1$) proved in ref.~\cite{self2b} for the case of a simultaneous measurement of position and momentum. The general principles of quantum mechanics~\cite{Heisenberg,Braginsky} indicate that relationships of this kind must also hold for measurements of spin direction, at least on a qualitative level. However, it appears that the problem of giving the relationships precise, numerical expression is not entirely straightforward. The question requires further investigation.
In this paper we have considered measurements of spin direction. However, the problem of simultaneously measuring just two components of spin is also important~\cite{TwoComp,BuschSpin}. It would be interesting to investigate the accuracy of such measurements, and to try to characterise the POVM (or POVMs) describing the outcome when the measurement is optimal.
We have seen that $\mathrm{SU}(2)$ coherent states play an important role in the description of optimal measurements of spin direction. In refs.~\cite{self3,Ali} it was shown that ordinary, Heisenberg-Weyl coherent states play an analogous role in the description of optimal joint measurements of position and momentum. It would be interesting to see if it is generally true, that every system of generalized coherent states is related in this way to joint measurements of the generators of the corresponding Lie group.
There are some important questions of principle regarding measurements of a \emph{single} spin component~\cite{Busch,BuschSpin,Wigner,Wheeler,Garra}. It would be interesting to see if the approach to the problem of defining the measurement accuracy which was described in this paper can be used to gain some additional insight into these questions.
Finally, it is obviously important to investigate whether optimal, or near optimal determinations of spin direction can be realised experimentally.
\end{document}
\begin{document}
\title{\textbf{A quantum advantage over classical for local max cut}} \author{Charlie Carlson, Zackary Jorquera, Alexandra Kolla, Steven Kordonowy} \date{} \maketitle
\begin{abstract}
We compare the performance of a quantum local algorithm to a similar classical counterpart on the well-established combinatorial optimization problem \algprobm{LocalMaxCut}. We show that a popular quantum algorithm first introduced by Farhi, Goldstone, and Gutmann \cite{FGG}, the quantum approximate optimization algorithm (QAOA), has a computational advantage over comparable local classical techniques on degree-3 graphs. These results hint that even small-scale quantum computation, which is relevant to current state-of-the-art quantum hardware, could have significant advantages over comparably simple classical computation. \end{abstract}
\section{Introduction} With the advent of quantum computers rose the desire to explore their potentially tremendous computational power. A powerful line of research questions that has gained traction revolves around proving so-called quantum ``supremacy'' over classical computation. Namely, can quantum computers perform certain important computational tasks much faster than classical computers? Shor's factoring algorithm \cite{shor} was proof that quantum computers can indeed outperform classical computers when it comes to solving questions of great significance. However, Shor and related algorithms require large-scale quantum computers in order to show any advantage whereas today's state of the art quantum hardware is still limited to a few dozen working qubits \cite{google}. Thus a new important line of questions rears its head: can algorithms that are meaningful to run in small to medium scale quantum computers, such as local quantum algorithms, outperform their local classical counterparts? And if so, what type of problems can they solve better or faster than classical local computation? In this paper, we give evidence that for certain local combinatorial optimization problems, local quantum algorithms exhibit computational advantage over comparable classical algorithms.
Given a graph $G = (V,E)$, a \textit{cut} is an assignment of the vertices to $+$ and $-$ and an edge is cut if its endpoints are assigned $+-$ or $-+$. That is, a cut is a function $\tau : V \rightarrow \set{+,-}$ where an edge $e = (u,v)$ is cut when $\tau(u) \neq \tau(v)$. The (unweighted) \algprobm{MaxCut} problem asks to find a cut that maximizes the number of cut edges. This problem arises naturally when minimizing the energy of anti-ferromagnetic Heisenberg spin systems in which the goal is to assign opposing spins to neighboring nodes \cite{spin-glass}. Finding optimal \algprobm{MaxCut} solutions is computationally intractable so we relax to an easier problem \cite{karp}. A vertex $v$ is \textit{(locally) satisfied} under $\tau$ if at least half of the edges incident to $v$ are cut. A \textit{locally maximal cut} is one in which all vertices are satisfied. Finding a locally maximal cut is not hard to do. The unweighted version can be done in $O(n^2)$ steps in the worst case if one has access to the full graph. Restricted to local computation, this is not so trivial. The \algprobm{LocalMaxCut} problem is the optimization version in which we want a cut that satisfies as many vertices as possible.
A local graph algorithm is a technique to distribute computation over vertices wherein each individual computation requires only local neighborhood information. For simplicity we assume our graphs are unweighted and regular with degree $d$. In general, local algorithms are approximations of ``global'' algorithms in which we have access to the full graph at all steps of the algorithm. For an optimization problem, let $OPT(G)$ be the optimal value for graph $G$. An algorithm that outputs value $ALG(G)$ is an \textit{$\alpha$-approximation} if $ALG(G) \geq \alpha \cdot OPT(G)$ for any $G$. The best global algorithm for \algprobm{MaxCut}, discovered by Goemans and Williamson, gives a $0.878$-approximation and is optimal under common complexity assumptions \cite{goewim, khot,hastad2000}. This algorithm relies on solving semidefinite programs, which require global information. Restricted to local computation, the best classical techniques produce a cut cutting a $1/2 + \Omega(1/\sqrt{d})$ fraction of the edges \cite{Shearer1992,barak_etal,threshold} on triangle-free graphs. These algorithms all have similar structure: randomly draw an initial cut and then every vertex queries neighboring vertices to determine how it should update its assignment. The classical algorithm our paper considers proposes an update step that generalizes \cite{threshold}.
In 2014, Farhi et al. introduced the quantum approximate optimization algorithm (QAOA) as a way to solve constraint satisfaction problems \cite{FGG}. The QAOA first encodes the linear objective function into the language of Hermitian operators and then relies on mixing techniques from quantum mechanics to round to a good solution. In particular, they proved lower bounds showing that the QAOA on \algprobm{MaxCut} performs similarly to the aforementioned classical local techniques, cutting a $1/2 + \Omega\left(\frac1{\log{d}\sqrt{d}} \right)$ fraction of the edges \cite{FGG2}. The back-and-forth between the QAOA performance and the classical techniques on specific combinatorial problems culminated with Hastings' \cite{hastings2019classical} description of the local tensor meta-algorithm. Both the QAOA and the local classical algorithms fall within this technique. Moreover, Hastings proves that a classical local tensor algorithm outperforms the QAOA on \algprobm{MaxCut} on triangle-free graphs. Hastings' local tensor algorithms are typically iterative, but this work focuses on single-round algorithms.
\begin{wrapfigure}{r}{0.5\textwidth}
\centering
\begin{tikzpicture}[scale=2, every node/.style={circle, draw, scale=2, line width=1.5pt, minimum size=15pt, inner sep=0pt}]
\node [rotate = 270] (1) at (0,1) {:(};
\node [rotate = 270] (2) at (1,1) {:)};
\node [rotate = 270] (3) at (2,1) [fill=gray!50] {:(};
\node [rotate = 270] (4) at (0,0) {:)};
\node [rotate = 270] (5) at (1,0) [fill=gray!50] {:)};
\node [rotate = 270] (6) at (2,0) {:)};
\draw [line width=1.5pt] (1) -- (2) -- (5) -- (4) -- (1) (3) -- (5) -- (6);
\end{tikzpicture}
    \caption{A non-locally maximal cut on $G$. Grey nodes correspond to $-1$ assignments while white nodes correspond to $+1$. A smiley face indicates that at least half of that vertex's edges are cut (i.e., the spin is locally satisfied).}
\label{fig:cut_graph1}
\end{wrapfigure}
An intuitive definition of \algprobm{LocalMaxCut} is to rephrase the definition of ``locally maximal'' to focus on vertices rather than edges: $v \in V$ is \textit{satisfied} when at least half of the edges incident to $v$ are cut. A cut is then locally maximal if and only if all of its vertices are satisfied.\footnote{Indeed, if a vertex were to be unsatisfied, then flipping its assignment guarantees it will be satisfied and so at least one more edge is now cut.} Any true maximum cut is locally maximal and every graph has some maximum cut therefore every graph contains a cut satisfying all vertices. We define \algprobm{LocalMaxCut} to be the optimization problem of finding a cut that satisfies as many vertices as possible.
There are a few important distinctions between \algprobm{LocalMaxCut} and \algprobm{MaxCut}. Firstly, \algprobm{LocalMaxCut} is a valid relaxation of \algprobm{MaxCut}. A true maximum cut is guaranteed to be locally maximal but the converse is not true. See Figure \ref{fig:LMC_vs_mc_graphs}. Since every graph must contain some maximum cut, this implies that $OPT(\algprobm{LocalMaxCut}) = \abs{V}$ no matter the underlying graph. This is different from \algprobm{MaxCut}, in which $OPT(\algprobm{MaxCut})$ could be very far from $\abs{E}$. For example, in the complete graph a maximum cut cuts only about a $1/2 + \frac1{2d}$ fraction of the edges. Another important distinction is the locality of the two problems. The \algprobm{MaxCut} objective can be broken into $\abs{E}$ subroutines that act on only 2 input bits per function, namely the endpoints of an edge $uv \in E$. Since each subroutine relies on the assignment of only two spins, \algprobm{MaxCut} is 2-local as a problem, and this does not change no matter how complex the underlying interaction graph. On the other hand, \algprobm{LocalMaxCut}'s objective is decomposed into $\abs{V}$ functions that act on $d+1$ input bits (the assignment of the full neighborhood of a vertex), which means that \algprobm{LocalMaxCut} is $(d+1)$-local. This distinction in locality seems to be crucial. With \algprobm{MaxCut}, we saw that local algorithms can outperform the QAOA. One main idea from this paper is that the QAOA is able to exploit the larger locality of \algprobm{LocalMaxCut} and thus outperform classical techniques on larger-degree graphs.
The main result of the paper is the following two theorems on low-degree graphs.
\begin{theorem}
For degree-2 graphs, there is a classical one-round algorithm that outperforms QAOA on \algprobm{LocalMaxCut}. \end{theorem}
\noindent In contrast, graphs with degree-3 allow for enough complexity that the QAOA outperforms the basic probabilistic classical algorithm.
\begin{theorem}
    For degree-3 graphs with large girth, the QAOA outperforms the classical algorithm on \algprobm{LocalMaxCut}. \end{theorem}
\noindent Theorem 1 is shown by contrasting Theorems 3 and 5, and Theorem 2 by contrasting Theorems 4 and 6.
\begin{figure}
\caption{A locally maximal cut with 4 edges cut out of 6 possible.}
\label{fig:local-max-ex}
\caption{A globally maximal cut with all 6 edges cut.}
\label{fig:global-max-ex}
\caption{The 6 assignments that differ from the cut in (a) in one vertex. Since none of them result in more cut edges, (a) is indeed locally maximal. These maxima are not unique as expected.}
\label{fig:more-ex}
\caption{A comparison of \algprobm{MaxCut} and \algprobm{LocalMaxCut} for different cuts. Red lines indicate an edge is cut by the assignment.}
\label{fig:LMC_vs_mc_graphs}
\end{figure}
\section{Preliminaries} For positive integer $n$, let $[n] = \{1, 2, \dots, n\}$. We consider simple, undirected $d$-regular graphs $G = (V,E)$ with $V = [n]$ and $m =\abs{E} = poly(n)$. The vertex neighborhood is denoted by the ordered set $B(v) = (v, u_1, \dots, u_d)$. For sets $A,B \subseteq [n]$, we let $A \triangle B = (A \cup B) \setminus (A \cap B)$ be the symmetric difference. Then, using the associativity of the symmetric difference we extend to a family of subsets over \([n]\), $\mathcal{F} = \left\{F_1, \dotsc, F_k\right\} \subseteq \mathcal{P}([n])$ in the following way (where \(\mathcal{P}([n])\) denotes the power set of \([n]\)).
\[\triangle \mathcal{F} = F_1 \triangle \cdots \triangle F_k = \left\{x \in \bigcup_{i = 1}^k F_i\ |\ \#\set{F \in \mathcal{F} : x \in F} \text{ is odd} \right\} \]
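As a concrete illustration (ours, not part of the formal development), the repeated symmetric difference can be computed with a few lines of Python; the helper name below is our own.
\begin{verbatim}
# Repeated symmetric difference of a family of sets, matching the
# odd-membership characterisation above.
from functools import reduce

def sym_diff_family(family):
    return reduce(lambda A, B: A ^ B, family, set())

F = [{1, 2, 3}, {2, 3, 4}, {3, 5}]
print(sym_diff_family(F))  # {1, 3, 4, 5}: the elements lying in an odd number of the F_i
\end{verbatim}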
The \textit{Pauli matrices} are $2 \times 2$ unitary operators that, along with the identity matrix, $I$, form a real basis for the Hermitian operators on a qubit:
\[I = \bmat{1 & 0 \\ 0 & 1},\ X = \bmat{0 & 1 \\ 1 & 0},\ Y = \bmat{0 & -i \\ i & 0},\ Z = \bmat{1 & 0 \\ 0 & -1}\]
We also make frequent use of the computational basis $\{ \K 0, \K 1 \}$ and the Fourier basis $\{\K +, \K - \}$ over $\H_2$, which denote the vectors
\begin{align*}
\K{0} &= \bmat{1 \\ 0},\ \K{1} = \bmat{0 \\ 1},\\
\K{+} &= \frac{1}{\sqrt{2}} \bmat{1 \\ 1},\ \K{-} = \frac{1}{\sqrt{2}} \bmat{1 \\ -1} \end{align*}
\noindent The \textit{uniform superposition} $\ket{s}$ is the state $$\ket{s} := \ket{+}^{\otimes n} = \frac1{2^{n/2}}\sum_{z \in \{0,1\}^n} \ket{z}$$ For a Hermitian operator \(A\) and angle \(\theta\), the operator \[U(A,\theta) := e^{-i \theta A} \label{U-def}\] is unitary and diagonalizable over the same basis as \(A\). If $A^2 = I$, then $U(A,\theta) = \cos(\theta)I + i \sin(\theta)A$. In particular, all Pauli operators satisfy this property.
For a 1-qubit linear operator $M$, let \(M_k\) represent the corresponding operation on $\H_2^{\otimes n}$ acting as $M$ on the \(k\)th qubit and identity for the rest:
\begin{equation} M_k = I^{\otimes k-1} \otimes M \otimes I^{\otimes n-k}\end{equation}
\noindent This is naturally extended to any subset $K \subseteq [n]$ as
\begin{equation} M_K = \prod_{k \in K} M_k \end{equation}
\noindent Note that \(M_{\emptyset} = I^{\otimes n}\). For any $S,T \subseteq [n]$ such that $S \cap T = \emptyset$ and 1-qubit operators $M$ and $N$, the operators $M_S$ and $N_T$ always commute.
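The following small \texttt{numpy} sketch (ours, for illustration only; the function names are not from the paper) builds $M_k$ and $M_K$ via Kronecker products and checks the commutation claim for operators with disjoint supports.
\begin{verbatim}
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on(M, k, n):
    # M acting on qubit k (1-indexed) of n qubits: I^{k-1} (x) M (x) I^{n-k}
    out = np.array([[1.0 + 0j]])
    for j in range(1, n + 1):
        out = np.kron(out, M if j == k else I)
    return out

def op_on_set(M, K, n):
    # M_K = product over k in K of M_k
    out = np.eye(2 ** n, dtype=complex)
    for k in K:
        out = out @ op_on(M, k, n)
    return out

n = 3
XS, ZT = op_on_set(X, {1}, n), op_on_set(Z, {2, 3}, n)
print(np.allclose(XS @ ZT, ZT @ XS))  # True: disjoint supports commute
\end{verbatim}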
\subsection{Boolean functions as Hamiltonians} Let $C: \{0,1\}^n \rightarrow \R$ be some objective function on $n$ variables which we refer to as \textit{spins}. We are interested in finding $C^* = \max_x C(x)$ as well as the input $x$ that achieves this value. An algorithm that outputs value $ALG(C)$ is an \(\alpha\)-approximation if \[\frac{ALG(C)}{C^*} \geq \alpha\]
\noindent Typically the objective function $C$ is described by $m = poly(n)$ clauses $C_1, \dots, C_m : \{0,1\}^n \rightarrow \{0,1\}$ and weights $w_1,\dots,w_m \in \R$ such that \[C(x) = \sum_{a \in [m]} w_a C_a(x)\] Spins $u$ and $v$ are \textit{neighbors} if they both appear non-trivially in at least one clause $C_a$. Using the Fourier coefficients of $C_a$, denoted by \(\hat{C}_a\), and the fact that the Pauli-$Z$ operators acting on subsets of \([n]\) give the parity functions over the computational basis, we can encode each \(C_{a}\) into a \(2^n\)-dimensional Hamiltonian operator as
\begin{equation}
H_{C_a} := \sum_{S \subseteq [n]}\hat{C}_a(S)Z_S \end{equation}
\noindent It is typical that a clause depends on only $k = O(1)$ of the $n$ spins, so we can write
\begin{equation}
H_{C_a} = \sum_{\substack{S \subseteq [n]\\\abs{S} \leq k}}\hat{C}_a(S)Z_S \label{bool-to-ham-clause} \end{equation}
\noindent where $k$ is the problem's \textit{locality}. We can then sum up \eqref{bool-to-ham-clause} to find the full \textit{problem Hamiltonian}
\begin{equation}
H_C := \sum_{\substack{S \subseteq [n]\\\abs{S} \leq k}}W_S Z_S \label{bool-to-ham} \end{equation}
\noindent where $W_S = \sum_{a \in [m]} \hat{C}_a(S)$. This is one way the algorithm reduces from $O(2^n)$ operations to something on the scale of $poly(n)$ (with a possibly $exp(k)$ hit). See Hadfield for a full description of encoding Boolean functions into Hamiltonian operators \cite{boolean-fns-as-hams}.
\subsection{Local Algorithms} Here we give a description of what it means for an algorithm to be local based on Hastings' tensor algorithms framework \cite{hastings2019classical}. We are given as input some interaction graph with vertices $V = [n]$ and a linear objective function $C : \{-1,+1\}^n \rightarrow \R$ to optimize. As an example, the \algprobm{MaxCut} objective can be written as \(\sum_{ij \in E}\frac{1-z_iz_j}{2}\). Local algorithms are in general iterative over timesteps $t = 0,1,\dots, T$ and throughout the algorithm we carry a vector $v_j^t$ for each vertex $j$ and timestep $t$. In order to calculate $v_j^{t+1}$, we need local information about vertex $j$: $v_j^t$ and $v_k^t$ for any neighbor $k$ of $j$. After the last timestep, we use $v_j^T$ to construct an assignment $\tau : V \rightarrow \{-1,+1\}$ such that $C(\tau(V))$ is large. In this manner, the full algorithm runs in time \(O(poly(n^k,k,T))\) which is efficient for constant \(k\).
The individual \(v_j^t\) can be taken from quite generic domains but common examples are $\{-1,+1\}$, probability distributions, or quantum states. It is assumed that matching coordinates across spins are taken from the same domain. To begin, \(v_j^0\) is assigned randomly and independently for each spin $j$. At each step $t$, \(v_j^{t+1}\) is constructed in two updates. First, we apply a linear update that is taken from the parts of the objective function corresponding to spin $j$, resulting in a temporary vector $u_j^t$. This step depends on $v_j^t$ as well as $v_k^t$ for any neighbor $k$. Next, there is some function $g_t$ acting such that $v_k^{t+1} := g_t(u_k^{t})$. This function $g_t$ can be non-linear and random. This paper is focused on one-round local algorithms, so $T=1$.
\subsection{QAOA} The quantum approximate optimization algorithm (QAOA) \cite{FGG} is an algorithmic paradigm that attempts to solve local computational problems using low-depth quantum circuits. The algorithm is an example of a tensor algorithm that uses quantum information, which is in general iterative, but we focus on the single-round instance in this work. Let $C: \{0,1\}^n \rightarrow \R$ be an objective function as described in the previous section and let $H_C$ be the problem Hamiltonian that encodes $C$, as in \eqref{bool-to-ham}. We additionally note that measuring any quantum state $\ket{\psi}$ in the computational basis outputs $x \in \{0,1\}^n$ with probability \(\abs{\ip{x}{\psi}}^2\). The average value of $C(x)$ over this distribution is then the expectation over this state:
\begin{equation}
\mathop{\mathbb{E}}\limits_{x \sim \psi}[C(x)] = \mel{\psi}{H_C}{\psi} \end{equation}
\noindent The goal of the QAOA is to find a $\ket{\psi}$ such that this expectation is large.
The QAOA is parameterized by a pair of angles \((\g, \bt) \in [0,2\pi) \times [0, \pi)\). Along with $H_C$, we also make use of a \textit{mixing operator} \(H_M\). The framework supports a variety of mixing operators (see \cite{qaoa-mixers} for in-depth comparisons) but the important part is that they ``spread out'' amongst the eigenspaces of $H_C$ in a non-commutative way. Typically the mixing operator is an abstraction of the quantum NOT operator $X$. We use the mixing operator $H_M = \sum_v X_v$. We then construct two quantum gates \[U_C := U(H_C,\g),\quad U_M :=U(H_M,\bt)\] to prepare the state \(\ket{\gamma, \beta} := U_M U_C \ket{s}\). The problem reduces to finding \((\g,\bt) \) such that
\begin{equation}
F(\g,\bt) := \mel{\g,\bt}{H_C}{\g,\bt} \label{F} \end{equation}
\noindent is large. This optimization is typically done as a classical-quantum hybrid algorithm making as few queries to the circuit as possible.
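To make the state preparation concrete, the following brute-force \texttt{numpy} sketch (ours) evaluates \eqref{F} for a toy instance; the diagonal Hamiltonian used here is the 2-local \algprobm{MaxCut} Hamiltonian of a $4$-cycle, chosen only because it is small, and the mixer is $H_M = \sum_v X_v$.
\begin{verbatim}
import numpy as np
from itertools import product

n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

# Diagonal of H_C over computational basis states z in {0,1}^n (MaxCut objective).
diag_HC = np.array([sum(z[u] != z[v] for u, v in edges)
                    for z in product((0, 1), repeat=n)], dtype=float)

X = np.array([[0, 1], [1, 0]], dtype=complex)

def qaoa_expectation(gamma, beta):
    state = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)      # |s>
    state = np.exp(-1j * gamma * diag_HC) * state               # U_C = e^{-i gamma H_C}
    U1 = np.cos(beta) * np.eye(2) - 1j * np.sin(beta) * X       # e^{-i beta X}
    UM = U1
    for _ in range(n - 1):
        UM = np.kron(UM, U1)                                    # U_M = e^{-i beta sum_v X_v}
    state = UM @ state
    return float(np.real(np.vdot(state, diag_HC * state)))      # <gamma,beta|H_C|gamma,beta>

print(qaoa_expectation(0.3, 0.4))
\end{verbatim}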
\begin{figure}
\caption{$d=2$}
\caption{$d=3$}
    \caption{Plots of the expectation values for all possible QAOA quantum states $\ket{\g,\bt}$ with respect to $H_2$ and $H_3$. The values are normalized by $n$ and so represent the fraction of satisfied vertices. For a fixed $\g, \bt$, we have that the expected value, $\mel{\g,\bt}{H_v^k}{\g,\bt}$, is equal to the expected value of the constraint function if we sample from the distribution induced by state $\ket{\g,\bt}$. The maxima are equal to $0.939$ in (a) and $0.819$ in (b).}
\label{fig:quantum-3dplots}
\end{figure}
\section{Algorithms} \subsection{Classical Algorithm} Classically, we use a one-round local algorithm, a common family of algorithms previously studied for \algprobm{MaxCut} type problems \cite{JPY,Shearer1992,threshold}. Our algorithm is straightforward and can be defined by independent actions on a vertex $v$:
\begin{itemize}
    \item[(i)] randomly assign $v$ an initial state in $\{-1,+1\}$
\item[(ii)] query $v$'s neighbors to count the number of agreeing neighbors
\item[(iii)] update $v$'s assignment in $\{-1,+1\}$ depending on the number of agreeing neighbors \end{itemize}
More formally, a run of the algorithm is parameterized by $d + 2$ probabilities $p$ and $q = (q_0, \dots, q_d)$. We build a cut $\tau_t$ for each timestep $t$ and define
\begin{equation}
\ell_t(v):= \# \{uv \in E : \tau_t(u) = \tau_t(v) \} \label{agreeing} \end{equation}
\noindent Initially, draw a random cut $\tau_0$ subject to \[\tau_0(v) = \begin{cases}
+1 &\text{with probability}\ p \\
    -1 &\text{otherwise}. \end{cases}\] Next, each vertex $v$ queries a bit from its neighbors to calculate \(\ell_0(v)\). Using this value, set \[\tau_1(v) := \begin{cases} -\tau_0(v) &\text{with probability}\ q_{\ell_0(v)} \\ \tau_0(v) &\text{otherwise} \end{cases}\] The algorithm outputs cut $\tau_1$, and a vertex is then satisfied if \(\ell_1(v) \leq \floor{\frac{d}{2}}\), that is, if at least $\ceil{\frac{d}{2}}$ of its edges are cut. This is an example of a local tensor algorithm \cite{hastings2019classical}.
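A minimal Python sketch (ours; the function names are not from the paper) of this one-round procedure, parameterised by $p$ and $q = (q_0, \dots, q_d)$, may help fix ideas.
\begin{verbatim}
import random

def one_round_cut(adj, p, q):
    # adj maps each vertex to the list of its neighbours in a d-regular graph
    tau0 = {v: +1 if random.random() < p else -1 for v in adj}      # initial random cut
    tau1 = {}
    for v in adj:
        ell = sum(tau0[u] == tau0[v] for u in adj[v])                # agreeing neighbours
        tau1[v] = -tau0[v] if random.random() < q[ell] else tau0[v]  # flip w.p. q_{ell}
    return tau1

def satisfied(adj, tau, v):
    d = len(adj[v])
    return sum(tau[u] != tau[v] for u in adj[v]) >= -(-d // 2)       # >= ceil(d/2) cut edges

# toy example: the 6-cycle (d = 2) with p = 1/2 and q = (0, 0, 4/5)
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
tau = one_round_cut(adj, 0.5, (0, 0, 0.8))
print(sum(satisfied(adj, tau, v) for v in adj))
\end{verbatim}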
Our algorithm generalizes the HRSS algorithm \cite{threshold}. Their algorithm uses a threshold value \(r_d := \ceil{\frac{d + \sqrt{d}}{2}}\) to make the second step deterministic: vertex $v$ flips if $\ell(v) \geq r_d$. In our algorithm, there is a degree of freedom for each possible $\ell(v)$ so that $v$ flips with probability $q_{\ell(v)}$. Taking for example the degree-2 case in which $r_2 = 2$, the HRSS algorithm corresponds to setting the flipping probabilities $(q_0,q_1,q_2) = (0,0,1)$. That being said, there is important overlap in the low-degree cases. It turns out that in degrees 2 and 3, maximizing our algorithm over the full probability space results in very similar behavior to HRSS. The optimal strategy for our algorithm is to flip only when $\ell(v) = 2$ or $\ell(v) = 3$ for the degree-2 and degree-3 cases, respectively. However, the optimal flipping probability might not be 1 as in HRSS. For example, in the degree-2 case, the maximum for our algorithm occurs at $(q_0,q_1,q_2) = (0,0,4/5)$.
\begin{theorem}
    On a degree-2 graph $G$, there exist probabilities \(p,q\) such that our algorithm outputs a cut satisfying at least \(0.95n\) vertices in expectation. \end{theorem}
\begin{theorem}
    On a degree-3 graph $G$, for all possible probabilities \(p,q\), our algorithm outputs a cut that satisfies at most $0.8n$ vertices in expectation. \end{theorem}
See Figure \ref{fig:classical-justp} for the performance of this algorithm on low-degree graphs.
\begin{figure}
    \caption{The optimal performance of the classical algorithm on low-degree graphs. Here, the functions for $\prob{S_v^1}$ are simple enough that calculus tricks are sufficient to isolate the maximizing variables. The x-axis ranges over probability values $p \in [0,1]$ and the y-axis is the probability that a vertex is satisfied using that $p$ and the corresponding optimal $q$ values. See calculations in Appendix A for more information.}
\label{fig:classical-justp}
\end{figure}
\subsection{QAOA Encoding} Here we provide the encoding of the \algprobm{LocalMaxCut} objective function into Hamiltonians as described by \eqref{bool-to-ham}. As the graph degree grows, the explicit objective function changes and so we handle the \(d=2\) and \(d=3\) cases separately.
Define the local Hamiltonian term for degree-2 graphs as
\begin{equation}
H_v^2 := \frac{3}{4}I - \frac{1}{4}Z_{v,v_1} - \frac{1}{4} Z_{v,v_2} - \frac{1}{4} Z_{v_1,v_2} \label{d2-local-ham} \end{equation}
\noindent One can verify \eqref{d2-local-ham} by checking $\mel{x}{H_v^2}{x}$ for all $x \in \{0,1\}^3$. The local terms are summed up over all vertices to build the full problem Hamiltonian
\begin{equation}
H_2 := \sum_{v \in V} H_v^2 = \frac{3n}{4}I - \frac{1}{2}\sum_{uv \in E} Z_{u,v}- \frac{1}{4}\sum_{v \in V} Z_{v_1,v_2} \label{d2-full-ham} \end{equation}
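As a quick numerical sanity check of \eqref{d2-local-ham} (ours, not part of the proofs), one can confirm that the local term acts on the computational basis as the indicator of local satisfaction of $v$.
\begin{verbatim}
import numpy as np
from itertools import product

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])

def zz(i, j):
    ops = [Z if k in (i, j) else I2 for k in range(3)]
    return np.kron(np.kron(ops[0], ops[1]), ops[2])

Hv2 = 0.75 * np.eye(8) - 0.25 * (zz(0, 1) + zz(0, 2) + zz(1, 2))

for idx, x in enumerate(product((0, 1), repeat=3)):   # x = (x_v, x_{v_1}, x_{v_2})
    cut = (x[0] != x[1]) + (x[0] != x[2])
    assert np.isclose(Hv2[idx, idx], 1.0 if cut >= 1 else 0.0)
print("H_v^2 is the indicator of local satisfaction of v")
\end{verbatim}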
We now state one of the two main quantum results.
\begin{theorem}\label{quantum-d2-result-thm}
On a degree-2 graph $G$ with large girth, every pair of angles \((\g, \bt) \) satisfies $F(\g,\bt) < 0.94n$. \end{theorem}
This theorem is in direct contrast with Theorem 3, in which we state that the classical algorithm on degree-2 graphs can achieve at least a $0.95$-approximation. Indeed, this is not too surprising: the degree-2 local Hamiltonian term \eqref{d2-local-ham} is 2-local, just like the \algprobm{MaxCut} local constraint, so the behavior of the classical versus the quantum algorithm on \algprobm{LocalMaxCut} mimics the behavior on \algprobm{MaxCut}.
On the other hand, we start to see interesting behavior for degree-3 graphs. Let the local term $H_v^3$ and the full problem Hamiltonian be given by
\begin{align}
H_v^3 &:= \frac1{2}I - \frac1{4}Z_{v,v_1} - \frac1{4}Z_{v,v_2} - \frac1{4}Z_{v,v_3} + \frac1{4}Z_{B(v)} \label{d3-local-ham} \\
    H_3 &= \sum_{v \in V}H_v^3 = \frac{n}{2}I - \frac1{2}\sum_{ij \in E}Z_{ij} + \frac1{4}\sum_{v \in V}Z_{B(v)} \label{d3-full-ham} \end{align}
\begin{theorem}\label{quantum-d3-result-thm}
On a degree-3 graph $G$ with large girth, there exist angles \((\g^*,\bt^*)\) such that $F(\g^*,\bt^*) \geq 0.81n$. \end{theorem}
This theorem can be compared with Theorem 4 to show that the QAOA outperforms the basic classical algorithm on degree-3 graphs with large girth. Looking at equations \eqref{d3-local-ham} and \eqref{d3-full-ham} we get a glimpse into why this might be the case. Unlike in degree-2 \algprobm{LocalMaxCut}, we now have Pauli-$Z$ terms that rely on upwards of 4 qubits rather than just 2. We believe that this increase in complexity is crucial for allowing the QAOA to outperform the classical technique.
The proofs of both Theorem \ref{quantum-d2-result-thm} and Theorem \ref{quantum-d3-result-thm} can be found in Appendix \ref{sec:quantum-proofs}.
\begin{figure}
\caption{Degree 2}
\label{fig:sub1}
\caption{Degree 3}
\label{fig:sub2}
    \caption{Plots of the expectation values for states $\ket{\g,\bt}$ with respect to $H_2$ and $H_3$ normalized by $n$. (a) and (b) display the solution surface as heatmaps on the state space \((\g,\bt) \in [0,2\pi) \times [0,\pi)\). The red dot marks the maximum; the maximum expectation values are approximately $0.939$ and $0.819$ for degree-2 and degree-3 graphs, respectively.}
\label{fig:quantum-heatmaps}
\end{figure}
\appendix \section{Classical Proofs} Define the following probability events. For every vertex $v$, let $S^i_v$ be the event that $v$ is satisfied by $\tau_i$. Also let $F_v = [\tau_1(v) \neq \tau_0(v)]$ be the event that $v$ flips its assignment between $\tau_0$ and $\tau_1$.
\begin{lemma}
For a $d$-regular graph $G$ and initial probability $p = 1/2$, we have that $\prob{S_v^0} = \frac1{2} + \frac1{2^{d+1}}\binom{d}{d/2} = \frac1{2} + o(1)$. \end{lemma}
The $o(1)$ term is zero for odd $d$ and is $\frac1{2^{d+1}}\binom{d}{d/2}$ for even $d$, which arises from allowing for ties.
\begin{proof} Every initial assignment of $d+1$ vertices occurs with uniform probability $\frac1{2^{d+1}}$ so this reduces to counting the number of satisfying assignments. A vertex $v$ is satisfied under $\tau_0$ when $\ell(v) \in \set{0, \dots, \floor{\frac{d}{2}}}$ of which there are $\binom{d}{j}$ many ways for each $\ell(v) = j$. Therefore,
\begin{equation}
\label{abc}
\prob{S_v^0} = \frac1{2^{d+1}}\sum_{j=0}^{\floor{d/2}}\binom{d}{j} \end{equation}
Using the fact that $2^d = \sum_{j=0}^d \binom{d}{j}$ allows us to rearrange \eqref{abc} to achieve our result. \end{proof}
A helpful observation is that once we have fixed a cut $\tau_0$, the probabilities of different vertices flipping are independent of one another.
\begin{lemma}[Independence lemma]
For vertices $u,v \in V$, we have that $$\prob{F_v \cap F_u | \tau_0(u), \tau_0(v)} = \prob{F_v | \tau_0(u), \tau_0(v)}\prob{F_u | \tau_0(u), \tau_0(v)}$$ This extends to any number of vertices such that $$\prob{\cap_{u \in U} F_u | \cap_{u \in U} \tau_0(u)} = \prod_{u \in U} \prob{F_u | \cap_{u \in U} \tau_0(u)}$$Moreover, if $u$ and $v$ are not neighbors of one another, then $$\prob{F_v | \tau_0(u), \tau_0(v)} = \prob{F_v | \tau_0(v)}$$
\end{lemma}
\subsection{Degree-2 graphs}
Fix a degree-2 graph $G$ of size at least $7$.
\begin{lemma}
For probabilities $p$ and $q=(q_0,q_1,q_2)$, we have that
\begin{equation}
\begin{split}
\prob{S_v^1 | p,q} = 1 &- (1 - p)^3 (1 - q_2) (1 - p q_1 - (1 - p) q_2)^2 - (1 - p)^3 q_2 (p q_1 + (1 - p) q_2)^2 \\
&- p^3 (1 - q_2) (1 - (1 - p) q_1 - p q_2)^2 - p^3 q_2 ((1 - p) q_1 + p q_2)^2 \label{d2-classical-exact}
\end{split}
\end{equation}
This function is maximized by $p=1/2$ and $q = (0,0,4/5)$ to value $0.95$.
\end{lemma}
The maximizer is found analytically using multivariable calculus techniques. There are a few simplifications that both shorten the calculation and eliminate some variables. In particular, we see that \(\prob{S_v^1 | p,q}\) does not depend on $q_0$. The first simplification is that, in the degree-2 case, a vertex that starts satisfied must remain so.
\begin{lemma}
For a degree-2 graph, if a vertex $v$ is satisfied under $\tau_0$, then it will remain satisfied under $\tau_1$. Moreover, if $v$ is satisfied under $\tau_0$, then at least one of $v$'s neighbors is also satisfied under $\tau_0$ and so will remain satisfied under $\tau_1$.
\end{lemma}
\begin{proof} Let $v_\ell$ and $v_r$ be $v$'s left and right neighbors, respectively. Assume that $v$ is satisfied under $\tau_0$ and, without loss of generality, let $\tau_0(v) = 0$. There are three possible assignments for these three vertices: $$\tau_0(v_\ell,v,v_r) \in \{100, 001, 101\}$$ In the first case, both $v$ and $v_\ell$ are satisfied. Satisfied vertices never flip so $$\tau_1(v) = \tau_0(v) \neq \tau_0(v_\ell) = \tau_1(v_\ell)$$ which implies that both vertices remain satisfied under $\tau_1$. The second case is symmetrical, with $v_r$ guaranteed to be the satisfied neighbor. In the last case both neighbors are satisfied (and remain so) by equivalent reasoning.
\end{proof}
Since we are calculating $\prob{\overline{S_v^1}}$, a consequence of this lemma is that we only need to consider unsatisfied initial assignments, i.e., those in which $v$ agrees with both of its neighbors. Writing $\tau_i(v,v_1,v_2) = a$ for the assignment in which all three vertices receive the value $a$, we have
\begin{equation}
\prob{\overline{S_v^1}} = \sum_{a_0 = 0}^1 \sum_{a_1 = 0}^1 \prob{\tau_1(v,v_1,v_2) = a_1 | \tau_0(v,v_1,v_2) = a_0}\prob{\tau_0(v,v_1,v_2) = a_0} \label{probd2-sum}
\end{equation}
For edge $uv \in E$, define conditional probabilities
\begin{align}
f_{00} &:=\prob{F_u | \tau_0(u,v) = 00} \label{f00-2}\\
f_{11} &:=\prob{F_u | \tau_0(u,v) = 11} \label{f11-2}
\end{align}
\noindent It is easy to check that $$f_{00} = (1-p)q_2 + pq_1, \qquad f_{11} = pq_2 + (1-p)q_1$$ Using the independence lemma, we can calculate the conditional probabilities in \eqref{probd2-sum}:
\begin{itemize}
\item $\prob{\tau_1(v,v_1,v_2) = 0 | \tau_0(v,v_1,v_2) = 0} = \prob{\overline{F_v}}\cdot \overline{f_{00}}^2 = (1-q_2)\left( 1 - ((1-p)q_2 + pq_1)\right)^2$
\item $\prob{\tau_1(v,v_1,v_2) = 1 | \tau_0(v,v_1,v_2) = 0} = \prob{F_v} \cdot f_{00}^2 = q_2\left( (1-p)q_2 + pq_1\right)^2$
\item $\prob{\tau_1(v,v_1,v_2) = 0 | \tau_0(v,v_1,v_2) = 1} = \prob{F_v} \cdot f_{11}^2 = q_2\left(pq_2 + (1-p)q_1\right)^2$
\item $\prob{\tau_1(v,v_1,v_2) = 1 | \tau_0(v,v_1,v_2) = 1} = \prob{\overline{F_v}} \cdot\overline{f_{11}}^2 = (1-q_2)\left( 1 - (pq_2 + (1-p)q_1)\right)^2$
\end{itemize}
Using $\prob{\tau_0(v,v_1,v_2) = 0} = (1-p)^3$ and $\prob{\tau_0(v,v_1,v_2) = 1} = p^3$ produces equation \eqref{d2-classical-exact}.
To maximize this function, the first step is to solve \[\frac{\partial}{\partial q_2}\prob{S_v^1 | p,q_1,q_2} = 0\] which leads to the maximizer
\begin{equation}
q_2^*(p, q_1) = \frac{-3 + 11 p - 15 p^2 + 8 p^3 - 4 p^4 + 4 p q_1 - 14 p^2 q_1 + 20 p^3 q_1 - 10 p^4 q_1}{-6 + 26 p - 44 p^2 + 36 p^3 - 18 p^4} \label{q_2max}
\end{equation}
Plugging this into \eqref{d2-classical-exact} results in the simplification
\begin{align}
    \prob{S_v^1 | p,q_1,q_2^*} = \frac{9 + p\bigl(-30 + p\bigl(p_1-p_2\,q_1(1-q_1)\bigr)\bigr)}{p_3}
\end{align}
where
\begin{equation}
\begin{split}
p_1 &:= 19 + 42 p - 55 p^2 - 4 p^3 + 76 p^4 - 64 p^5 + 16 p^6 \\
p_2 &:= 4 - 36 p + 152 p^2 - 340 p^3 + 412 p^4 - 256 p^5 + 64 p^6 \\
p_3 &:= 12 - 52 p + 88 p^2 - 72 p^3 + 36 p^4
\end{split}
\end{equation}
\noindent are three functions that depend only on $p$. This form is helpful because for any $q_1 \in [0,1]$, we have that \[\prob{S_v^1 | p,q_1,q_2^*} = \frac{9 + p\bigl(-30 + p\bigl(p_1-p_2\,q_1(1-q_1)\bigr)\bigr)}{p_3} \leq \frac{9 -30p+ p_1\cdot p^2}{p_3} = \prob{S_v^1 | p,0,q_2^*} \] since $q_1(1-q_1) \in [0,1/4]$. This implies that we need only consider the case when $q_1 = 0$, eliminating an additional variable. What is left is to maximize the one-variable function \[\prob{S_v^1 | p,0,q_2^*} = \frac{9 - 30 p + 19 p^2 + 42 p^3 - 55 p^4 - 4 p^5 + 76 p^6 - 64 p^7 +
16 p^8}{12 - 52 p + 88 p^2 - 72 p^3 + 36 p^4} \] resulting in the maximum value $19/20 = 0.95$, attained at $p=1/2$. See Figure \ref{fig:classical-justp}.
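This last maximisation is easy to confirm numerically; the short grid search below (ours) evaluates the one-variable function $\prob{S_v^1 | p,0,q_2^*}$ over a fine grid of $p$.
\begin{verbatim}
import numpy as np

def prob_sat(p):
    num = (9 - 30*p + 19*p**2 + 42*p**3 - 55*p**4 - 4*p**5
           + 76*p**6 - 64*p**7 + 16*p**8)
    den = 12 - 52*p + 88*p**2 - 72*p**3 + 36*p**4
    return num / den

ps = np.linspace(0, 1, 100001)
vals = prob_sat(ps)
print(ps[np.argmax(vals)], vals.max())  # expected: p = 0.5, value 0.95
\end{verbatim}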
\subsection{Degree-3 graphs}
Fix a degree-3 graph $G$ that is locally tree-like. The overall strategy of this section follows that of the previous one, and we would like to calculate the quantity
\begin{equation}
\prob{S_v^1|p,q} = \sum_{\nu_0 \in \{0,1\}^4} \prob{S_v^1 | \tau_0(B(v)) = \nu_0;p,q}\prob{\tau_0(B(v)) = \nu_0 |p} \label{sum-d3}
\end{equation}
\noindent However, \eqref{sum-d3} is much more complex than the simple degree-2 case in \eqref{d2-classical-exact}.
\begin{lemma}
For probabilities $p$ and $q=(q_0,q_1,q_2,q_3)$, we have that \(\prob{S_v^1 | p,q}\) is maximized by $p\approx 0.39$ and $q = (0,0,0,1)$ to value $\approx 0.77$.
\end{lemma}
It is worth pointing out that we do \textit{not} want the uniform initial assignment probability but rather $p \approx 2/5$ (equivalently $p \approx 3/5$, by the $p \leftrightarrow 1-p$ symmetry established below). Here, it is advantageous to have a slightly worse initial cut that we can improve upon in our algorithm.
There are many symmetries we may use to cut down on these cases as well as make each one simpler. First, we define some helpful conditional probabilities.
\begin{lemma}
If we define similar conditional probabilities as \eqref{f00-2} and \eqref{f11-2}, we have that
\begin{align}
f_{00}(p,q) &:=\prob{F_v| \tau_0(uv) = 00;p,q} = q_3 (1 - p)^2 + 2 q_2p (1 - p) + q_1p^2 \label{f00-3}\\
f_{01}(p,q) &:=\prob{F_v | \tau_0(uv) = 01;p,q}= q_0(1 - p)^2 + 2 q_1p (1 - p) + q_2p^2 \label{f01-3}\\
f_{10}(p,q) &:=\prob{F_v | \tau_0(uv) = 10;p,q}= q_2(1 - p)^2 + 2 q_1 p (1 - p) + q_0 p^2 \label{f10-3}\\
f_{11}(p,q) &:=\prob{F_v | \tau_0(uv) = 11;p,q}= q_1(1 - p)^2 + 2 q_2 p (1 - p) + q_3 p^2 \label{f11-3}
\end{align}
\end{lemma}
\begin{proof}
Let $v', v''$ be the other neighbors of $v$. Consider $f_{00} = \prob{F_v | \tau_0(uv) = 00}$. There are four possible assignments $\tau_0(uvv'v'')$ to consider: $0000, 0001, 0010, 0011$. If $\tau_0(uvv'v'') = 0000$, then $v$ flips with probability $q_3$ (since $v$ agrees with 3 of its neighbors). This case occurs with probability $(1-p)^2$. When $\tau_0(uvv'v'') = 0001$ or $0010$, then $v$ flips with probability $q_2$. Each of these occur with probability $p(1-p)$. Lastly, if $\tau_0(uvv'v'') = 0011$, then $v$ flips with probability $q_1$. This case occurs with probability $p^2$. Summing these together, we get $$f_{00} = q_3 (1 - p)^2 + 2 q_2p (1 - p) + q_1p^2$$The other 3 calculations follow this same pattern.
\end{proof}
For bits $a,b,y$, we want to define a function that is equal to $f_{ab}$ when $b$ and $y$ disagree (the vertex flips) and $\overline{f_{ab}}$ when they agree. That is,
\begin{equation}
f_{ab}^y := \begin{cases}
f_{ab} & b \neq y\\
\overline{f_{ab}} & b = y
\end{cases}
\end{equation}
For $xyzw, abcd \in \{0,1\}^4$, we can now use the independence lemma to write
\begin{equation}
    \prob{\tau_1(B(v)) = xyzw | \tau_0(B(v)) = abcd} = \prob{F_v = a \oplus x| \tau_0(B(v)) = abcd}\, f_{ab}^y f_{ac}^z f_{ad}^w \label{p1}
\end{equation}
We further break down this calculation. First, using some boolean algebra and that $\overline{f_{ab}}(p,q) = 1 - f_{ab}(p,q)$, we have that this can be rewritten as
\begin{equation}
    f_{ab}^y(p,q) = \frac{1 + (-1)^{b \oplus y}}{2} - (-1)^{b \oplus y}f_{ab}(p,q) \label{faby-nice}
\end{equation}
\begin{lemma}
The conditional probabilities obey the following symmetries:
\begin{enumerate}
\item $f_{\overline{xy}}(1-p,q) = f_{xy}(p,q)$ and $f_{\overline{ab}}^{\overline{y}}(1-p,q) = f_{ab}^y(p,q)$
        \item For $q = (q_0, q_1, q_2, q_3)$, let $q^* = (q_3, q_2, q_1, q_0)$ be the vector of flipped probabilities. Then $$f_{xy}(p,q^*) = f_{x\overline{y}}(p,q), \quad f_{xy}^{z}(p,q^*) = f_{x\overline{y}}^{\overline{z}}(p,q)$$
\end{enumerate}
\end{lemma}
\begin{proof}
(1) The first equality can be checked by swapping $p \mapsto 1-p$ in \eqref{f00-3} - \eqref{f11-3} and matching up corresponding equations. Then
\begin{align}
        f_{\overline{ab}}^{\overline{y}}(1-p,q) &= \frac{1 + (-1)^{\overline{b} \oplus \overline{y}}}{2} - (-1)^{\overline{b} \oplus \overline{y}}f_{\overline{ab}}(1-p,q) \\
        &= \frac{1 + (-1)^{b \oplus y}}{2} - (-1)^{b \oplus y}f_{\overline{ab}}(1-p,q) \\
        &= \frac{1 + (-1)^{b \oplus y}}{2} - (-1)^{b \oplus y}f_{ab}(p,q) \\
&= f_{ab}^y(p,q)
\end{align}
(2) \begin{align}
f_{01}(p,q^*) &= q_0^*(1-p)^2 + 2q_1^*p(1-p) + q_2^*p^2 \\
&= q_3(1-p)^2 + 2q_2p(1-p) + q_1p^2 \\
        &= f_{00}(p,q) \\
f_{11}(p,q^*) &= q_1^*(1-p)^2 + 2q_2^*p(1-p) + q_3^*p^2 \\
&= q_2(1-p)^2 + 2q_1p(1-p) + q_0 p^2 \\
&= f_{10}(p,q)
\end{align}
    We can use \eqref{faby-nice} to pass this property through to get $f_{xy}^{z}(p,q^*) = f_{x\overline{y}}^{\overline{z}}(p,q)$.
\end{proof}
These facts are helpful to eliminate cases we need to calculate for \eqref{sum-d3}.
\begin{lemma}
    For $xyzw, abcd \in \{0,1\}^4$, we have
\begin{equation}
\prob{\tau_1(B(v)) = xyzw | \tau_0(B(v)) = abcd; p,q} = \prob{\tau_1(B(v)) = \overline{xyzw} | \tau_0(B(v)) = \overline{abcd}; 1-p,q}
\end{equation}
\end{lemma}
\begin{proof}Using \eqref{p1} and the previous lemma, we have that
\begin{align}
&\prob{\tau_1(B(v)) = \overline{xyzw} | \tau_0(B(v))= \overline{abcd}; 1-p,q} \nonumber \\
&\quad = \prob{F_v = \overline{a} \oplus \overline{x}| \tau_0(B(v)) = \overline{abcd}; 1-p,q}f_{\overline{ab}}^{\overline{y}}(1-p,q) f_{\overline{ac}}^{\overline{z}}(1-p,q) f_{\overline{ad}}^{\overline{w}}(1-p,q) \nonumber\\
&\quad = \prob{F_v = a \oplus x| \tau_0(B(v)) = abcd; p,q}f_{ab}^{y}(p,q) f_{ac}^{z}(p,q) f_{ad}^{w}(p,q) \nonumber \\
&\quad =\prob{\tau_1(B(v)) = xyzw | \tau_0(B(v)) = abcd; p,q}
\end{align}
\end{proof}
This allows us to cut the number of cases in \eqref{sum-d3} in half.
\begin{lemma}
For any $abcd \in \{0,1\}^4$, we have that $$\prob{S_v^1 | \tau_0(B(v)) = abcd;p,q} = \prob{S_v^1 | \tau_0(B(v)) = \overline{abcd};1-p,q}$$
\end{lemma}
\begin{proof}
    Let $xyzw \in \{0,1\}^4$ be a satisfying assignment. Then $\overline{xyzw}$ is also a satisfying assignment. By the previous lemma, $$\prob{\tau_1(B(v)) = xyzw | \tau_0(B(v)) = abcd; p, q} = \prob{\tau_1(B(v)) = \overline{xyzw} | \tau_0(B(v)) = \overline{abcd}; 1-p, q}$$ Summing this identity over all satisfying assignments $xyzw$, and noting that $xyzw \mapsto \overline{xyzw}$ is a bijection on the set of satisfying assignments, gives the claim.
\end{proof}
Another observation is that vertex $v$ makes its decision based on its neighbors' assignments, but their order does not matter. That is, for any $a,b,c,d \in \{0,1\}$,
\begin{align}
\prob{S_v^1 | \tau_0(B(v)) = abcd,p,q} = \prob{S_v^1 | \tau_0(B(v)) = abdc,p,q} = \prob{S_v^1 | \tau_0(B(v)) = acdb,p,q}
\end{align}
This means that
\begin{align}
\prob{S_v^1 | \tau_0(B(v)) = a001,p,q} = \prob{S_v^1 | \tau_0(B(v)) = a010,p,q} = \prob{S_v^1 | \tau_0(B(v)) = a100,p,q} \\
\prob{S_v^1| \tau_0(B(v)) = a011,p,q} = \prob{S_v^1 | \tau_0(B(v)) = a101,p,q} = \prob{S_v^1 | \tau_0(B(v)) = a110,p,q}
\end{align}
Therefore, the full calculation breaks up into the following cases, where we write $\prob{A | abcd;p,q}$ as shorthand for $\prob{A | \tau_0(B(v))=abcd;p,q}$.
\begin{equation}\label{d3-classical-explicit}
\begin{split}
\prob{S_v^1} = &\prob{S_v^1 | 0000;p,q}\prob{ 0000;p } + \prob{S_v^1 | 0000;1-p,q}\prob{ 0000;1-p } \\
&+3\left( \prob{S_v^1 | 0001;p,q}\prob{ 0001;p } + \prob{S_v^1 | 0001;1-p,q}\prob{ 0001;1-p } \right)\\
&+3\left( \prob{S_v^1 | 0011;p,q}\prob{ 0011;p } + \prob{S_v^1 | 0011;1-p,q}\prob{ 0011;1-p } \right)\\
        &+\prob{S_v^1 | 0111;p,q}\prob{ 0111;p } + \prob{S_v^1 | 0111;1-p,q}\prob{ 0111;1-p }
\end{split}
\end{equation}
Though \eqref{d3-classical-explicit} contains far fewer cases than \eqref{sum-d3}, it is still a high-degree polynomial in 5 variables and so analytically maximizing it is quite difficult. Similar to the degree-2 case, we rely on a numerical optimizer to solve for the maxima here. There are multiple submanifolds over which this maximum occurs. One maximal solution is given by $p \approx 2/5$, $q = (0, 0, 0, 1)$, which evaluates to about $0.77$.
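As an independent check (ours), one can also estimate this value by direct simulation of the one-round algorithm on a random $3$-regular graph, which is locally tree-like with high probability; the \texttt{networkx} generator below is used purely for illustration.
\begin{verbatim}
import random
import networkx as nx

def one_round_cut(G, p, q):
    tau0 = {v: +1 if random.random() < p else -1 for v in G}
    tau1 = {}
    for v in G:
        ell = sum(tau0[u] == tau0[v] for u in G[v])                  # agreeing neighbours
        tau1[v] = -tau0[v] if random.random() < q[ell] else tau0[v]
    return tau1

def satisfied_fraction(G, tau):
    sat = sum(sum(tau[u] != tau[v] for u in G[v]) >= 2 for v in G)   # >= ceil(3/2) cut edges
    return sat / G.number_of_nodes()

G = nx.random_regular_graph(3, 1000, seed=0)
runs = [satisfied_fraction(G, one_round_cut(G, 0.4, (0, 0, 0, 1))) for _ in range(200)]
print(sum(runs) / len(runs))   # empirically close to 0.77 on graphs with few short cycles
\end{verbatim}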
\section{Quantum Proofs}\label{sec:quantum-proofs}
The goal of this section is to provide proofs for analytical expressions of $\mel{\g,\bt}{H_2}{\g,\bt}$ and $\mel{\g,\bt}{H_3}{\g,\bt}$. Let us fix some notation. Recall the form of the general problem Hamiltonian from \eqref{bool-to-ham}
\begin{equation}
\label{better_general_hamiltonian}
H_C = \sum_{a = 1}^{m} H_{C_a} = \sum_{a = 1}^{m} \sum_{\substack{S \subseteq [n]\\\abs{S} \leq k}}\hat{C}_a(S)Z_S = \sum_{\substack{S \subseteq [n]\\\abs{S} \leq k}}W_S Z_S \end{equation}
Let \(\mathcal{M} = \{M \subseteq [n]\ |\ W_M \neq 0\}\) be the collection of sets of indices that correspond to non-zero terms in \eqref{better_general_hamiltonian}. Fix some $K \subseteq [n]$. For any $L \subseteq K$, define the following two sets using $\mathcal{M}$:
\begin{align}
\mathcal{O}(L) &:= \{M \in \mathcal{M}\ |\ |M \cap L| \text{ is odd}\} \label{O(L)} \\
\mathcal{O}_K(L) &:= \{ \mathcal{F} \subseteq \mathcal{O}(L) \ |\ \triangle \mathcal{F} = K \} \label{O_K(L)} \end{align}
where $\triangle \mathcal{F} = M_1 \triangle \cdots \triangle M_\ell$ is the repeated symmetric difference over the family of sets \(\mathcal{F} = \{M_1, \dotsc, M_\ell\}\). $\mathcal{O}(L)$ is all of the sets in $\mathcal{M}$ whose intersection in $L$ is odd. The next set, $\mathcal{O}_K(L)$, is a bit more complicated. For a $\mathcal{F} \in \mathcal{O}_K(L)$, we have that each $M_i \in \mathcal{F}$ is such that $M_i \in \mathcal{O}(L)$ and the symmetric difference over all $M_i \in \mathcal{F}$ is exactly $K$. These are ultimately the terms that will remain in the calculations for $\mel{\g,\bt}{Z_K}{\g,\bt}$. The following statement builds off of lemma 3.1 in \cite{ryananderson2018quantum} and is the main tool used in our analysis.
\begin{lemma}\label{anderson_lemma_3_1_but_more}
Let $H_C$ be represented as in \eqref{better_general_hamiltonian} and \(\ket{\gamma, \beta} := U_M U_C \ket{s}\). Then for any \(K \subseteq [n]\),
\begin{equation}\label{anderson_lemma_3_1_but_more_eq}
\mel{\g, \bt}{Z_K}{\g, \bt} = \sum_{L \subseteq K} i^{\abs{L}}\sin(2\beta)^{\abs{L}} \cos(2\beta)^{\abs{K}-\abs{L}} \sum_{\mathcal{F} \in \mathcal{O}_K(L)} \alpha_{\mathcal{F}}
\end{equation}
where \begin{equation}
\alpha_{\mathcal{F}} = \prod_{M \in \mathcal{F}} i\sin(-2\gamma W_M)\prod_{N \in \mathcal{O}(L)\backslash \mathcal{F}} \cos(2\gamma W_N)\label{alpha}
\end{equation} \end{lemma}
It is helpful to define the following components to break up \eqref{anderson_lemma_3_1_but_more_eq} even more
\begin{align}
\nu(L) &= i^{\abs{L}}\sin(2\beta)^{\abs{L}} \cos(2\beta)^{\abs{K}-\abs{L}} \label{nu} \\
\rho(L) &= \nu(L)\sum_{\mathcal{F} \in \mathcal{O}_K(L)} \alpha_{\mathcal{F}} \label{rho}
\end{align}
Note that when \(L = \{u\}\) is a set of cardinality one, we drop the braces in \(\O(\set{u})\) for ease of notation and simply write \(\O(u)\). We also do this for \eqref{O_K(L)}, \eqref{nu}, and \eqref{rho}.
\begin{proof}
We first start by stating lemma 3.1 from \cite{ryananderson2018quantum}, which is given by the following.
\begin{lemma} [Anderson lemma 3.1 from \cite{ryananderson2018quantum}, fixed\footnote{The negative was mistakenly dropped on the imaginary in equation (3.38) while applying the binomial theorem. The effects of this mistake are inconsequential to the rest of the results in \cite{ryananderson2018quantum}.}] \label{anderson_lemma_3_1_fixed}
For \(\ket{\gamma, \beta} := U_M U_C \ket{s}\), with \(H_C\) as defined in \eqref{better_general_hamiltonian},
\begin{equation}\label{anderson_lemma_3_1_fixed_eq}
\begin{split}
&\bra{\gamma, \beta} Z_{K} \ket{\gamma, \beta} = \\
&\bra{s} Z_{K} \left(\sum_{L \subseteq K} (-i)^{|L|} \cos(2\beta)^{|K|-|L|}\sin(2\beta)^{|L|}X_L \prod_{\substack{M \subseteq [n]\\|M\cap L| \text{ is odd}}} \exp\left(-2i\gamma W_M Z_M\right) \right) \ket{s}
\end{split}
\end{equation}
\end{lemma}
We use this lemma as a starting point for the proof of our Lemma \ref{anderson_lemma_3_1_but_more}. We note that many of the steps of our proof are outlined in \cite{ryananderson2018quantum}; however, they are specific to the \algprobm{MaxCut} problem Hamiltonian from \cite{FGG}. Additionally, similar steps are carried out in other papers when solving for the expectation \cite{PhysRevA.97.022304, boolean-fns-as-hams, Marwaha2022}; however, we generalize to any real diagonal problem Hamiltonian \(H_C\) as defined in \eqref{bool-to-ham}, and make additional observations that allow for easier analysis.
One important fact we use throughout the proof is that $X \ket{+} = \ket{+}$, which extends to \(X_K \ket{s} = \ket{s}\) for any \(K \subseteq [n]\). This allows us to drop the \(X_L\) factor appearing in \(Y_L = i^{\abs{L}} X_L Z_L\).
\begin{equation}
\begin{split}
&\bra{\gamma, \beta} Z_{K} \ket{\gamma, \beta} \\
=& \sum_{L \subseteq K} \cos(2\beta)^{|K|-|L|}\sin(2\beta)^{|L|} \bra{s} Y_{L} Z_{K\backslash L} \prod_{M \in \mathcal{O}(L)} \exp\left(-2i\gamma W_M Z_M\right) \ket{s} \\
=& \sum_{L \subseteq K} \cos(2\beta)^{|K|-|L|}\sin(2\beta)^{|L|} \bra{s} i^{|L|} Z_{K} \prod_{M \in \mathcal{O}(L)} \exp\left(-2i\gamma W_M Z_M\right) \ket{s} \\
        =& \sum_{L \subseteq K} \cos(2\beta)^{|K|-|L|}\sin(2\beta)^{|L|} \bra{s} i^{|L|} Z_{K} \prod_{M \in \mathcal{O}(L)} \Bigl( I \cos\left(2\gamma W_M\right) + i Z_{M} \sin\left(-2\gamma W_M\right) \Bigr) \ket{s} \\
=& \sum_{L \subseteq K} \cos(2\beta)^{|K|-|L|}\sin(2\beta)^{|L|} \bra{s} i^{|L|} Z_{K} \sum_{\mathcal{F} \subseteq \mathcal{O}(L)} \prod_{M\in \mathcal{F}} i Z_M \sin\left(-2\gamma W_M\right) \prod_{N \notin \mathcal{F}} I \cos\left(2\gamma W_N\right) \ket{s} \\
=& \sum_{L \subseteq K} \cos(2\beta)^{|K|-|L|}\sin(2\beta)^{|L|} \bra{s} i^{|L|} Z_{K} \sum_{\mathcal{F} \subseteq \mathcal{O}(L)} \underbrace{\prod_{M\in \mathcal{F}} i \sin\left(-2\gamma W_M\right) \prod_{N\in \mathcal{O}(L)\backslash \mathcal{F}} \cos\left(2\gamma W_N\right)}_{\alpha_{\mathcal{F}}} \prod_{M \in \mathcal{F}} Z_M \ket{s} \\
=& \sum_{L \subseteq K} \cos(2\beta)^{|K|-|L|}\sin(2\beta)^{|L|} \sum_{\mathcal{F} \subseteq \mathcal{O}(L)} i^{|L|} \alpha_{\mathcal{F}} \bra{s} Z_K \prod_{M \in \mathcal{F}} Z_M \ket{s}
\end{split}
\end{equation}
Here, we turn our attention to the \(\bra{s} Z_K \prod_{M \in \mathcal{F}} Z_M \ket{s}\) term. We utilize the fact that for any non-empty subset \(P \subseteq [n]\) we always have \(\bra{s} Z_P \ket{s} = 0\). So, for each \(\mathcal{F} \subseteq \mathcal{O}(L)\) in the summation, \(\bra{s} Z_K \prod_{M \in \mathcal{F}} Z_M \ket{s}\) is non-zero exactly when the product of Pauli-\(Z\) matrices equals \(Z_K\), i.e., when \(Z_K \prod_{M \in \mathcal{F}} Z_M = I\), or equivalently \(\prod_{M \in \mathcal{F}} Z_M = Z_K\). This is because Pauli-\(Z\) operators on different qubits commute and \(Z^2 = I\). In other words, we have that
\[\bra{s} Z_K \prod_{M \in \mathcal{F}} Z_M \ket{s} = \begin{cases}
1 & \text{if } \triangle \mathcal{F} = K \\
0 & \text{if } \triangle \mathcal{F} \neq K
\end{cases}\]
This allows us to only consider the \(\mathcal{F} \subseteq \mathcal{O}(L)\) terms when \(\triangle \mathcal{F} = K\). Notationally, we write that as considering the terms \(\mathcal{F} \in \mathcal{O}_K(L)\). Putting it together, this gives us
\begin{equation}
\bra{\gamma, \beta} Z_{K} \ket{\gamma, \beta} = \sum_{L \subseteq K} \cos(2\beta)^{|K|-|L|}\sin(2\beta)^{|L|} \sum_{\mathcal{F} \in \mathcal{O}_K(L)} i^{|L|} \alpha_{\mathcal{F}}
\end{equation}
\end{proof}
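The lemma can also be sanity-checked numerically on small instances. The following self-contained Python sketch compares the combinatorial formula \eqref{anderson_lemma_3_1_but_more_eq} against a direct statevector computation; it assumes the usual QAOA conventions $U_C = e^{-i\gamma H_C}$ and $U_M = e^{-i\beta\sum_j X_j}$, and the instance (subsets, weights, and angles) is an arbitrary illustrative choice.
\begin{verbatim}
import numpy as np
from itertools import combinations

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def Z_on(S, n):
    return kron_all([Z if q in S else I2 for q in range(n)])

# Small illustrative instance: n qubits, real weights W_M on a few subsets M.
n = 3
terms = {frozenset({0, 1}): 0.7, frozenset({1, 2}): -1.3, frozenset({0, 1, 2}): 0.4}
gamma, beta = 0.37, 0.81
K = frozenset({0, 1})

# Direct evaluation of <gamma,beta| Z_K |gamma,beta>.
H_diag = sum(W * np.diag(Z_on(M, n)) for M, W in terms.items())   # H_C is diagonal
psi = np.exp(-1j * gamma * H_diag) * np.full(2**n, 2**(-n / 2), dtype=complex)
U1 = np.cos(beta) * I2 - 1j * np.sin(beta) * X    # exp(-i beta X) on one qubit
psi = kron_all([U1] * n) @ psi                    # U_M U_C |s>
direct = (psi.conj() @ Z_on(K, n) @ psi).real

# The same expectation from the combinatorial formula.
def sym_diff(F):
    out = frozenset()
    for M in F:
        out ^= M
    return out

total = 0j
for r in range(len(K) + 1):
    for L in map(frozenset, combinations(K, r)):
        odd = [M for M in terms if len(M & L) % 2 == 1]
        nu = (1j * np.sin(2 * beta))**len(L) * np.cos(2 * beta)**(len(K) - len(L))
        for rr in range(len(odd) + 1):
            for F in combinations(odd, rr):
                if sym_diff(F) == K:
                    alpha = np.prod([1j * np.sin(-2 * gamma * terms[M]) for M in F]) \
                          * np.prod([np.cos(2 * gamma * terms[N]) for N in odd if N not in F])
                    total += nu * alpha
print(direct, total.real)   # the two values should agree
\end{verbatim}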
As these expectations tend to be high-degree trigonometric polynomials, we freely use the shorthand $\cs(\theta) = \cos(\theta)$ and $\sn(\theta) = \sin(\theta)$. There are a few consequences of the angle-addition formulas that we use as simplifications throughout our calculations:
\begin{align}
\cos(\g)\cos\left(\frac{\g}{2}\right) + \sin(\g)\sin\left(\frac{\g}{2}\right) &= \cos\left(\frac{\g}{2}\right)\\
\cos(\g)\cos\left(\frac{\g}{2}\right) - \sin(\g)\sin\left(\frac{\g}{2}\right) &= \cos\left(\frac{3\g}{2}\right)\\
\cos^3(\g)\sin\left(\frac{\g}{2}\right) + \sin^3(\g)\cos\left(\frac{\g}{2}\right) &= \frac{1}{4}\left(3 \sin\left(\frac{3\g}{2}\right)-\sin\left(\frac{5\g}{2}\right) \right) \\
\cos^3(\g)\cos\left(\frac{\g}{2}\right)-\sin^3(\g)\sin\left(\frac{\g}{2}\right) &= \frac{1}{4}\left(3 \cos\left(\frac{3\g}{2}\right) + \cos\left(\frac{5\g}{2}\right) \right) \end{align}
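These identities can be spot-checked numerically; the following short NumPy sketch verifies all four pointwise on a grid of angles.
\begin{verbatim}
import numpy as np

g = np.linspace(0.0, 2*np.pi, 101)
c, s = np.cos(g), np.sin(g)
ch, sh = np.cos(g/2), np.sin(g/2)

assert np.allclose(c*ch + s*sh, ch)
assert np.allclose(c*ch - s*sh, np.cos(3*g/2))
assert np.allclose(c**3*sh + s**3*ch, (3*np.sin(3*g/2) - np.sin(5*g/2))/4)
assert np.allclose(c**3*ch - s**3*sh, (3*np.cos(3*g/2) + np.cos(5*g/2))/4)
print("all four identities hold on the grid")
\end{verbatim}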
\subsection{Degree-2} Fix a degree-2 graph $G$ with girth at least $7$.
\begin{theorem}
For a degree-2 graph, the full expected value of the QAOA is
\begin{equation}\label{d2-F-exact}
F(\gamma, \beta) = \frac{3n}{4} + \frac{n}{32}\sn(4 \beta) \left(
3\sn(\gamma) +4 \sn(2 \gamma) + 3 \sn(3 \gamma)\right) -\frac{n}{16} \sn^2(2 \beta) \sn(\gamma)\cs^2\left(\frac{\gamma}{2} \right) \left( \sn(\gamma) + 4 \sn(2 \gamma) + \sn(3 \gamma) \right)
\end{equation}
Numerical maximization of this expression gives $\max_{\g,\bt} F(\g,\bt) \approx 0.939n < 0.94n$. \end{theorem}
It is important to highlight the symmetries in this objective function. For any edges $e_1$ and $e_2$, we have $\mel{\g, \bt}{Z_{e_1}}{\g, \bt} = \mel{\g, \bt}{Z_{e_2}}{\g, \bt}$. For any two vertices $u$ and $v$ (with neighbors $u_1,u_2$ and $v_1,v_2$, respectively), we have $\mel{\g, \bt}{Z_{u_1,u_2}}{\g, \bt} = \mel{\g, \bt}{Z_{v_1,v_2}}{\g, \bt}$. So, without loss of generality, fix an edge $uv$ and a vertex $w$. Since $\abs{E(G)} = \abs{V(G)} = n$, we have that
\begin{equation} F(\g,\bt) = \frac{3n}{4} - \frac{n}{2} \mel{\g,\bt}{Z_{uv}}{\g,\bt} - \frac{n}{4}\mel{\g,\bt}{Z_{w_1w_2}}{\g,\bt} \label{d2-F-simplified} \end{equation}
Computing the expectation \(F(\g, \bt)\) of \eqref{d2-full-ham} thus reduces to computing $\mel{\g,\bt}{Z_{uv}}{\g,\bt}$ and $\mel{\g,\bt}{Z_{w_1w_2}}{\g,\bt}$.
\begin{lemma}
\begin{equation}
\mel{\g,\bt}{Z_{uv}}{\g,\bt} =-2\cs(2\bt)\sn(2\bt)\cs(\g)\sn(\g)\cs^2\left(\frac{\g}{2}\right) + 2\sn^2(2\bt)\cs(\g)\sn(\g)\cs^3\left(\frac{\g}{2}\right)\sn\left(\frac{\g}{2}\right) \label{d2-exp-edge}
\end{equation} \end{lemma}
\begin{proof}
Solving this expectation is a direct application of \eqref{anderson_lemma_3_1_but_more_eq} by iterating over $L \subseteq \set{u,v}$ and calculating $\rho(L)$. We use the convention for vertex labeling about the edge $uv$ given in Figure \ref{fig:d2-graph-edge-uv}.
\begin{figure}
\caption{Second neighborhood graph of edge $uv \in E$ for a degree-2 graph.}
\label{fig:d2-graph-edge-uv}
\end{figure}
\begin{itemize}
\item[$L = \set{u}$: ] To begin, \[\nu(u) = i \sn(2\bt)\cs(2\bt),\
\mathcal{O}(\{u\}) = \set{\{u'',u\}, \{u',u\}, \{u,v\}, \{u,v'\}} \]
The only element in $\O_{\set{u,v}}(u)$ is the edge $\set{u,v}$ and $\alpha_{\set{u,v}} = i\cs(\g)\sn(\g)\cs^2\left(\frac{\g}{2}\right)$. $$\rho(u) = i\sn(2\bt)\cs(2\bt)\cdot i\cs(\g)\sn(\g)\cs^2\left(\frac{\g}{2}\right) = -\sn(2\bt)\cs(2\bt)\cs(\g)\sn(\g)\cs^2\left(\frac{\g}{2}\right)$$
\item[$L = \set{v}$: ] Due to the symmetry of this calculation we have that $\rho(v) = \rho(u)$.
\item[$L = \set{u,v}$: ] Here $\nu(\set{u,v}) = -\sn^2(2\bt)$ and $$\mathcal{O}(\set{u,v}) = \set{\{u'',u\}, \{u',u\}, \{u',v\}, \{u,v'\}, \{v,v'\}, \{v,v''\}}$$ There are now two elements in $\mathcal{O}_{\set{u,v}}(\set{u,v})$: $\set{\set{u', u}, \set{u', v}}$ and $\set{\set{u, v'}, \set{v, v'}}$. Both contribute the same $\alpha_{\mathcal{F}}$ value of $-\cs(\g)\sn(\g)\cs^3\left(\frac{\g}{2}\right)\sn\left(\frac{\g}{2}\right)$, so $$\rho(\set{u,v}) = 2\sn^2(2\bt)\cs(\g)\sn(\g)\cs^3\left(\frac{\g}{2}\right)\sn\left(\frac{\g}{2}\right)$$
\end{itemize}
Summing up these cases results in \eqref{d2-exp-edge}. \end{proof}
\begin{lemma}
\begin{equation}
\mel{\g,\bt}{Z_{w_1} Z_{w_2}}{\g,\bt} =-2\cs(2\bt)\sn(2\bt)\cs^2(\g)\cs\left(\frac{\g}{2}\right)\sn\left(\frac{\g}{2}\right) + \sn^2(2\bt)\cs^2(\g)\sn^2(\g)\cs^2\left(\frac{\g}{2}\right) \label{d2-exp-nonedge}
\end{equation}
\end{lemma}
\begin{proof}
This proof follows the same outline as before: sum up $\rho(L)$ for each $L \subseteq \set{w_1, w_2}$. We use the convention for vertex labeling about the vertex $w$ given in Figure \ref{fig:d2-graph-nonedge}.
\begin{figure}
\caption{Third neighborhood about vertex $w$ which corresponds to the second neighborhoods of $w_1,w_2$.}
\label{fig:d2-graph-nonedge}
\end{figure}
\begin{enumerate}
\item[$L=\set{w_1}$:] $\nu(w_1) = i \sn(2\bt)\cs(2\bt)$ and $$\O(w_1) = \set{\{w_1'',w_1\}, \{w_1',w_1\}, \{w_1,w\}, \{w_1,w_2\}}$$The only solution in $\O_{\set{w_1,w_2}}(w_1)$ is $\set{\set{w_1,w_2}}$, which contributes $i\cs^2(\g)\cs\left(\frac{\g}{2}\right)\sn\left(\frac{\g}{2}\right)$. Therefore $$\rho(w_1) = - \sn(2\bt)\cs(2\bt)\cs^2(\g)\cs\left(\frac{\g}{2}\right)\sn\left(\frac{\g}{2}\right)$$
\item[$L=\set{w_2}$:] $\rho(w_2) = \rho(w_1)$
\item[$L=\set{w_1,w_2}$:] $\nu(\set{w_1,w_2}) = -\sn^2(2\bt)$ and $$\O(\set{w_1,w_2}) = \set{\{w_1'',w_1\}, \{w_1',w_1\}, \{w_1,w\}, \{w,w_2\}, \{w_2,w_2'\}, \{w_2,w_2''\}}$$The pair of edges $\set{\set{w_1,w},\set{w_2,w}}$ is the only solution with contribution $-\cs^2(\g)\sn^2(\g)\cs^2\left(\frac{\g}{2}\right)$. So $$\rho(\set{w_1,w_2}) = \sn^2(2\bt)\cs^2(\g)\sn^2(\g)\cs^2\left(\frac{\g}{2}\right)$$
\end{enumerate}
Summing up these cases results in \eqref{d2-exp-nonedge}. \end{proof}
Plugging the results of the previous two lemmas into \eqref{d2-F-simplified} produces equation \eqref{d2-F-exact}. Running this trigonometric polynomial through a numerical optimizer results in a maximum $F(\g, \bt)$ value of $\approx 0.93937n$, which is indeed less than $0.94n$. This proves Theorem \ref{quantum-d2-result-thm}.
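As a sanity check, this maximum can be reproduced by directly evaluating \eqref{d2-F-exact} on a dense grid; the following Python sketch is one way to do so (the grid bounds and resolution are arbitrary illustrative choices).
\begin{verbatim}
import numpy as np

def F2_over_n(g, b):
    # Right-hand side of (d2-F-exact), divided by n.
    return (0.75
            + np.sin(4*b) * (3*np.sin(g) + 4*np.sin(2*g) + 3*np.sin(3*g)) / 32
            - np.sin(2*b)**2 * np.sin(g) * np.cos(g/2)**2
              * (np.sin(g) + 4*np.sin(2*g) + np.sin(3*g)) / 16)

gg, bb = np.meshgrid(np.linspace(0, 2*np.pi, 1001), np.linspace(0, np.pi, 1001))
print(F2_over_n(gg, bb).max())   # ~0.9394
\end{verbatim}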
\subsection{Degree-3 graphs} Fix a degree-3 graph $G$ that is locally tree-like (specifically, let it have a girth of at least 7).
\begin{theorem}
The expected value $F$ for a degree-3 graph is given by
\begin{equation}\label{d3-F-exact}
\begin{split}
F(\g, \bt) = \frac{n}{2}&+\frac{3n}{2}\cs(2 \bt) \sn(2 \bt) \sn(\g) \cs(\g)\cs^4\left(\frac{\g}{2}\right) \\
&+\frac{n}{16} \sn(2\bt)\cs^3(2\bt)\cs^3\left(\frac{\g}{2}\right)\left(3 \sn \left(\frac{3\g}{2} \right)-\sn\left(\frac{5\g}{2} \right) \right)\\
&+\frac{3n}{16}\sn(2\bt)\cs^3(2\bt)\sn\left(\frac{\g}{2}\right)\cs^2\left(\frac{\g}{2}\right)\left(3 \cs\left(\frac{3\g}{2}\right) + \cs\left(\frac{5\g}{2}\right) \right) \\
&-\frac{3n}{4}\sn^3(2\bt)\cs(2\bt)\sn\left(\frac{\g}{2}\right)\cs^5(\g)\cs^5\left(\frac{\g}{2}\right)\\
&-\frac{n}{4}\sn^3(2\bt)\cs(2\bt)\cs^6\left(\frac{\g}{2}\right)\left(\frac{1}{64} \sn\left(\frac{\g}{2}\right)\left(3 \cs\left(\frac{3\g}{2}\right)+\cs\left(\frac{5\g}{2}\right) \right)^3 + \sn^3(\g)\cs^3(\g)\cs^4\left(\frac{\g}{2}\right)\right)
\end{split}
\end{equation}
Moreover, there exists a pair of angles $(\g^*, \bt^*)$ such that $F(\g^*, \bt^*) \approx 0.819n > 0.81n$. \end{theorem}
As in the degree-2 case, the symmetries of \(\mel{\g, \bt}{Z_{e_1}}{\g, \bt}\) allow us, without loss of generality, to fix an edge \(uv\) and a vertex \(u\) and express the expectation \(F(\g, \bt)\) of \eqref{d3-full-ham} as the following
\begin{equation}
F(\g, \bt) = \frac{n}{2} - \frac{3n}{4}\mel{\g,\bt}{Z_{uv}}{\g,\bt} + \frac{n}{4}\mel{\g,\bt}{Z_{B(u)}}{\g,\bt} \label{d3-F-simplified} \end{equation}
\begin{lemma}
\begin{equation}
\mel{\g,\bt}{Z_{uv}}{\g,\bt} = -2 \cos(2 \bt) \sin(2 \bt) \sin(\g) \cos(\g)\cos^4\left(\frac{\g}{2}\right)\label{d3-exp-edge}
\end{equation} \end{lemma}
\begin{proof} This proof follows the same outline as the two proofs in the degree-2 case. We use the convention for vertex labeling about the edge $uv$ given in Figure \ref{fig:d3-graph-edge-uv}.
\begin{figure}
\caption{Second neighborhood graph of edge $uv \in E$ for a degree-3 graph.}
\label{fig:d3-graph-edge-uv}
\end{figure}
\noindent Consider $L = \{u\}$. In this case, we have $$\O(u) = \set{\{u,v\}, \{u,u'\}, \{u,u''\}, B(u),B(v),B(u'),B(u'')}$$
There are two solutions $\mathcal{F} \in \O(u)$ such that $\triangle \mathcal{F} = \set{u,v}$: $\mathcal{F}_1 = \set{\set{u,v}}$ and $\mathcal{F}_2 = \set{B(u), \set{u,u'}, \set{u,u''}}$. The contribution for $\mathcal{F}_1$ (and not choosing $\mathcal{F}_2$) is $i \sin(\g) \cos^2(\g)\cos\left(\frac{\g}{2}\right)$. On the other hand, if we choose $\mathcal{F}_2$ and not $\mathcal{F}_1$, the contribution is $i \sin^2(\g)\sin\left(\frac{\g}{2}\right)$. Summing these together, we have a total contribution of $$i \sin(\g)\cos^2(g)\cos\left(\frac{\g}{2}\right) + i \sin^2(\g)\sin\left(\frac{\g}{2}\right)\cos(\g) = i \sin(\g)\cos(\g)\cos\left(\frac{\g}{2}\right)$$ Lastly, every element in $$\O(u) \setminus \left(\mathcal{F}_1 \cup \mathcal{F}_2\right) = \set{B(v), B(u'), B(u'')}$$ contributes $\cos\left(\frac{\g}{2}\right)$ to $\rho(u)$ since they are not used in either solution. Using $\nu(u) = i \sin(2\bt) \cos(2\bt)$,
\begin{align}
\rho(u)&= i \sin(2\bt) \cos(2\bt) \cdot i \sin(\g)\cos(\g)\cos\left(\frac{\g}{2}\right) \cdot \cos^3\left(\frac{\g}{2}\right)\\
&= - \sin(2\bt) \cos(2\bt)\sin(\g)\cos(\g)\cos^4\left(\frac{\g}{2}\right) \end{align}
Due to the symmetry of the calculation, we have $\rho(v) = \rho(u)$. The last case we need to consider is $L = \set{u,v}$. Here, $$\O(\set{u,v}) = \set{ \set{u,u'}, \set{u, u''}, \set{v, v'}, \set{v,v''}, B(u'), B(u''), B(v'), B(v'')}$$Notice that $\O_{\set{u, v}}(\set{u,v}) = \emptyset$ and so this case does not contribute any value to the expectation. \end{proof}
\begin{lemma}
\begin{equation}
\begin{split}
\mel{\g,\bt}{Z_{B(u)}}{\g,\bt} = &\frac1{4}\sn(2\bt)\cs^3(2\bt)\cs^3\left(\frac{\g}{2}\right)\left(3 \sn \left(\frac{3\g}{2} \right)-\sn\left(\frac{5\g}{2} \right) \right)\\
&+\frac{3}{4}\sn(2\bt)\cs^3(2\bt)\sn\left(\frac{\g}{2}\right)\cs^2\left(\frac{\g}{2}\right)\left(3 \cs\left(\frac{3\g}{2}\right) + \cs\left(\frac{5\g}{2}\right) \right) \\
&-3\sn^3(2\bt)\cs(2\bt)\sn\left(\frac{\g}{2}\right)\cs^5(\g)\cs^5\left(\frac{\g}{2}\right) \\
&-\sn^3(2\bt)\cs(2\bt)\cs^6\left(\frac{\g}{2}\right)\left(\frac{1}{64} \sn\left(\frac{\g}{2}\right)\left(3 \cs\left(\frac{3\g}{2}\right)+\cs\left(\frac{5\g}{2}\right) \right)^3 + \sn^3(\g)\cs^3(\g)\cs^4\left(\frac{\g}{2}\right)\right)\label{d3-exp-ball}
\end{split}
\end{equation}
\end{lemma}
\begin{proof} We define the convention for vertex labeling about the vertices \(B(u) = \set{u, u_1, u_2, u_3}\) with Figure \ref{fig:d3-graph-edge-Bu}.
\begin{figure}
\caption{Third neighborhood about vertex $u$ which corresponds to the second neighborhoods of $u_1,u_2,u_3$.}
\label{fig:d3-graph-edge-Bu}
\end{figure}
\noindent We work out the calculation of $\rho(L)$ for each $L \subseteq \set{u,u_1,u_2,u_3}$ and then sum them up to get \eqref{d3-exp-ball}, as per \eqref{anderson_lemma_3_1_but_more_eq}.
\begin{enumerate}
\item[$L = \set{u}$:] To begin, $\nu(u) = i\sn(2\bt)\cs^3(2\bt)$ and $$\O(u) = \set{\set{u,u_1}, \set{u,u_2}, \set{u,u_3}, B(u), B(u_1), B(u_2), B(u_3)}$$ There are two solutions $\mathcal{F}_1 = \{B(u)\}$ and $\mathcal{F}_2 = \set{\set{u,u_1}, \set{u,u_2}, \set{u,u_3}}$ in $\O_{B(u)}(u)$. These solutions are mutually exclusive so we can sum up their individual contributions to $\rho(u)$. Solution $\mathcal{F}_1$ contributes $-i\sn\left(\frac{\g}{2}\right)\cs^3(\g)$ and $\mathcal{F}_2$ contributes $(i\sn(\g))^3\cs\left(\frac{\g}{2}\right) = -i \sn^3(\g)\cs\left(\frac{\g}{2}\right)$. Next, notice that the elements in $\O(u) \setminus (\mathcal{F}_1 \cup \mathcal{F}_2) = \set{B(u_1), B(u_2), B(u_3)}$ are not part of either solution and so each contribute $\cs\left(\frac{\g}{2}\right)$. Putting this all together,
\begin{align}
\rho(u) &= i\sn(2\bt)\cs^3(2\bt) \left[-i\sn\left(\frac{\g}{2}\right)\cs^3(\g) - i \sn^3(\g)\cs\left(\frac{\g}{2}\right)\right]\cs^3\left(\frac{\g}{2}\right) \\
&= \frac1{4}\sn(2\bt)\cs^3(2\bt)\cs^3\left(\frac{\g}{2}\right)\left(3 \sn \left(\frac{3\g}{2} \right)-\sn\left(\frac{5\g}{2} \right) \right) \label{ru}
\end{align}
\item[$L = \set{u_1}$:] We have $\nu(u_1) = i\sn(2\bt)\cs^3(2\bt)$ and $$\O(u_1) = \set{\set{u,u_1 }, \set{u_1,u_1'}, \set{u_1,u_1''}, B(u), B(u_1), B(u_1'), B(u_1'')}$$There are two ways to get a symmetric difference of $B(u)$. Define $E_1 = \set{B(u_1), \set{u,u_1}, \set{u_1,u_1'}, \set{u_1,u_1''}}$. Then $\triangle E_1 = \emptyset$ and so both $\set{B(u)}$ and $\set{B(u)} \cup E_1$ are in $\O_{B(u)}(u_1)$. Both solutions use $B(u)$ and so have a $-i \sn\left(\frac{\g}{2}\right)$ term. If we omit $E_1$, the contribution is $\cs^3(\g)\cs\left(\frac{\g}{2}\right)$ and if we include $E_1$, the contribution is $-\sn^3(\g)\sn\left(\frac{\g}{2}\right)$. The remaining contribution comes from the elements in $\O(u_1)\setminus (\set{B(u)} \cup E_1) = \set{B(u_1'), B(u_1'')}$ which always contribute $\cs\left(\frac{\g}{2}\right)$ each. Putting these together, we have
\begin{align}
\rho(u_1) &= i\sn(2\bt)\cs^3(2\bt)\cdot (-i) \sn\left(\frac{\g}{2}\right)\left[\cs^3(\g)\cs\left(\frac{\g}{2}\right)-\sn^3(\g)\sn\left(\frac{\g}{2}\right)\right] \cdot \cs^2\left(\frac{\g}{2}\right) \\
&= \frac1{4}\sn(2\bt)\cs^3(2\bt)\sn\left(\frac{\g}{2}\right)\cs^2\left(\frac{\g}{2}\right)\left(3 \cs(\frac{3\g}{2}) + \cs(\frac{5\g}{2}) \right) \label{ru1}
\end{align}
\item[$L = \set{u_2}$, $L = \set{u_3}$:] These cases are the same as $L = \set{u_1}$.
\item[$L = \set{u, u_1}$:] Note that $$\O(\set{u,u_1}) = \set{\{u,u_2 \}, \{u,u_3 \}, \{u_1,u_1' \},\{u_1,u_1'' \}, B(u_2),B(u_3),B(u_1'),B(u_1'')}$$contains no subsets whose symmetric difference is equal to $B(u)$ and so $\rho(\set{u,u_1}) = 0$.
\item[$L = \set{u, u_2}, L = \set{u, u_3}$:] These cases are the same as $L = \set{u,u_1}$ and also contribute 0.
\item[$L = \set{u_1, u_2}$:] Note that $$\O(\set{u_1,u_2}) = \set{\{u,u_1 \}, \{u_1,u_1' \}, \{u_1,u_1'' \},\{u,u_2 \}, \{u_2,u_2' \}, \{u_2,u_2'' \}, B(u_1'), B(u_1''), B(u_2'), B(u_2'')}$$contains no subsets whose symmetric difference is equal to $B(u)$ and so $\rho(\set{u_1,u_2}) = 0$.
\item[$L = \set{u_1, u_3}, L = \set{u_2, u_3}$:] These cases are the same as $L = \set{u_1,u_2}$ and also contribute 0.
\item[$L = \set{u, u_1, u_2}$:] We have $\nu(\set{u, u_1, u_2}) = -i\sn^3(2\bt)\cs(2\bt)$ and $$\O(\set{u, u_1, u_2}) = \set{\{u,u_3 \}, \{u_1,u_1' \}, \{u_1,u_1'' \},\{u_2,u_2' \}, \{u_2,u_2'' \}, B(u), B(u_3), B(u_1'), B(u_1''), B(u_2'), B(u_2'')}$$The only solution here is $B(u)$ which contributes $-i\sn\left(\frac{\g}{2}\right)\cs^5(\g)\cs^5\left(\frac{\g}{2}\right)$. So we have
\begin{align}
\rho(\set{u, u_1, u_2}) = -i\sn^3(2\bt)\cs(2\bt)\cdot-i\sn\left(\frac{\g}{2}\right)\cs^5(\g)\cs^5\left(\frac{\g}{2}\right) = -\sn^3(2\bt)\cs(2\bt)\sn\left(\frac{\g}{2}\right)\cs^5(\g)\cs^5\left(\frac{\g}{2}\right) \label{ruu1u2}
\end{align}
\item[$L = \set{u, u_2, u_3}, L = \set{u, u_1, u_3}$:] These cases are the same as $L = \set{u, u_1, u_2}$.
\item[$L = \set{u_1, u_2, u_3}$:] This case is the most complicated. First, note that
\begin{align*}
\mathcal{O}(\set{u_1, u_2, u_3}) &= \{\{u,u_1 \}, \{u,u_2 \}, \{u,u_3 \}, \{u_1,u_1' \}, \{u_1,u_1'' \}, \{u_2,u_2' \}, \{u_2,u_2'' \},\{u_3,u_3' \}, \{u_3,u_3'' \},\\
&\qquad B(u), B(u_1), B(u_1'), B(u_1''), B(u_2), B(u_2'), B(u_2''),B(u_3), B(u_3'), B(u_3'')\}
\end{align*}
Similar to the $L = \set{u}$ case, any $\mathcal{F} \in \mathcal{O}_{B(u)}(\set{u_1, u_2, u_3})$ corresponds to either $\mathcal{F}_1 = \{B(u)\}$ or $\mathcal{F}_2 = \set{\set{u,u_1},\set{u,u_2},\set{u,u_3}}$. However, there are now many ways to produce these sets. Define the following sets $$E_{j} := \set{B(u_j), \set{u_j, u},\set{u_j, u_j'},\set{u_j, u_j''}}, \quad F_j := E_j \setminus \set{\set{u_j, u}}$$ for each $j \in \set{1,2,3}$. Then we have that
\begin{align}
\triangle E_j &= \emptyset \label{delta-ej}\\
\triangle F_j &= \set{u, u_j} \label{delta-fj}
\end{align}
For the solution $\mathcal{F}_1$, by \eqref{delta-ej}, we can construct further solutions by adjoining any of the $E_j$. On the other hand, for the solution $\mathcal{F}_2$, by \eqref{delta-fj}, we can replace any of the edges $\set{u,u_j}$ by $F_j$ to get another solution. As in the case of $L = \set{u}$, the $\mathcal{F}_1$ solutions are mutually exclusive from the $\mathcal{F}_2$ solutions, so we sum up their separate calculations. We also note that these 16 solutions make up all the possible solutions $\mathcal{F} \in \mathcal{O}_{B(u)}(\set{u_1, u_2, u_3})$.
Beginning with the $\mathcal{F}_1$ solutions, we can independently choose to include $E_1, E_2,$ and $E_3$, so there are $2^3 = 8$ possible solutions in this case. Every solution contains $B(u)$, which contributes $-i\sn\left(\frac{\g}{2}\right)$. Start by deciding whether to pick $E_1$. If we do not include $E_1$, this solution contributes $\cs^3(\g)\cs\left(\frac{\g}{2}\right)$. If we do include $E_1$, then we pick up $(i\sn(\g))^3(-i\sn\left(\frac{\g}{2}\right)) = - \sn^3(\g)\sn\left(\frac{\g}{2}\right)$. Summing up these cases and using the identities above results in $$\cs^3(\g)\cs\left(\frac{\g}{2}\right)- \sn^3(\g)\sn\left(\frac{\g}{2}\right) = \frac1{4}\left(3 \cs\left(\frac{3\g}{2}\right)+\cs\left(\frac{5\g}{2}\right) \right)$$This is also the contribution concerning $E_2$ and $E_3$, and these choices are independent. Lastly, every element of $$\mathcal{O}(L)\setminus \left(\{B(u)\} \cup E_1 \cup E_2 \cup E_3\right) = \set{B(u_1'), B(u_1''), B(u_2'), B(u_2''), B(u_3'), B(u_3'')},$$which is never chosen, contributes an additional $\cs\left(\frac{\g}{2}\right)$. Therefore, the contribution to $\rho(\set{u_1, u_2, u_3})$ using $\mathcal{F}_1$ is equal to
\begin{align}
-i\sn\left(\frac{\g}{2}\right)\left[ \frac1{4}\left(3 \cs\left(\frac{3\g}{2}\right)+\cs\left(\frac{5\g}{2}\right) \right) \right]^3 \cs^6\left(\frac{\g}{2}\right) = -\frac{1}{64}i \sn\left(\frac{\g}{2}\right)\left(3 \cs\left(\frac{3\g}{2}\right)+\cs\left(\frac{5\g}{2}\right) \right)^3 \cs^6\left(\frac{\g}{2}\right)
\end{align}
Next we need to find the contribution to $\rho(\set{u_1, u_2, u_3})$ using the $\mathcal{F}_2$ solutions. We can get $\set{u,u_1}$ by either choosing the edge itself or the set $F_1 = \set{B(u_1), \set{u_1,u_1'}, \set{u_1,u_1''}}$ since $\triangle F_1 = \set{u, u_1}$. If we choose the edge and not $F_1$, then we have a contribution of $i\sn(\g)\cs^2(\g)\cs\left(\frac{\g}{2}\right)$. If we choose $F_1$ and not the edge, this contributes $i \sn^2(\g)\sn\left(\frac{\g}{2}\right)\cs(\g)$. The full contribution for the edge $\set{u,u_1}$ is then $$i\sn(\g)\cs^2(\g)\cs\left(\frac{\g}{2}\right)+i \sn^2(\g)\sn\left(\frac{\g}{2}\right)\cs(\g) = i \sn(\g)\cs(\g)\left[\cs(\g)\cs\left(\frac{\g}{2}\right) + \sn(\g)\sn\left(\frac{\g}{2}\right)\right] = i \sn(\g)\cs(\g)\cs\left(\frac{\g}{2} \right)$$ This logic is the same for independently choosing $\set{u,u_2}$ versus $F_2$, and likewise for $\set{u,u_3}$ versus $F_3$. Lastly, elements in $$\mathcal{O}(L)\setminus \left(\set{\set{u,u_1}, \set{u,u_2}, \set{u,u_3}} \cup F_1 \cup F_2 \cup F_3\right) = \set{B(u), B(u_1'), B(u_1''), B(u_2'), B(u_2''), B(u_3'), B(u_3'')}$$are never chosen in our solution and each contributes $\cs\left(\frac{\g}{2}\right)$. The full contribution to $\rho(\set{u_1, u_2, u_3})$ using $\mathcal{F}_2$ is then $$\left( i \sn(\g)\cs(\g)\cs\left(\frac{\g}{2} \right)\right)^3\cs^7\left(\frac{\g}{2}\right) = -i \sn^3(\g)\cs^3(\g)\cs^{10}\left(\frac{\g}{2}\right) $$We have that $\nu(\set{u_1, u_2, u_3}) = -i \sn^3(2\bt)\cs(2\bt)$ and so
\begin{align}
\rho(\set{u_1, u_2, u_3}) &= -i \sn^3(2\bt)\cs(2\bt)\left(-\frac{1}{64}i \sn\left(\frac{\g}{2}\right)\left(3 \cs\left(\frac{3\g}{2}\right)+\cs\left(\frac{5\g}{2}\right) \right)^3 \cs^6\left(\frac{\g}{2}\right) -i \sn^3(\g)\cs^3(\g)\cs^{10}\left(\frac{\g}{2}\right)\right) \nonumber \\
&=-\sn^3(2\bt)\cs(2\bt)\cs^6\left(\frac{\g}{2}\right)\left(\frac{1}{64} \sn\left(\frac{\g}{2}\right)\left(3 \cs\left(\frac{3\g}{2}\right)+\cs\left(\frac{5\g}{2}\right) \right)^3 + \sn^3(\g)\cs^3(\g)\cs^4\left(\frac{\g}{2}\right)\right)\label{ru1u2u3}
\end{align} \end{enumerate}
All that is left is to sum the cases to get $\eqref{d3-exp-ball} = \eqref{ru} + 3\cdot\eqref{ru1} + 3\cdot \eqref{ruu1u2} + \eqref{ru1u2u3}$. \end{proof}
Plugging the previous two lemmas into \eqref{d3-F-simplified} results in \eqref{d3-F-exact}. Running this trigonometric polynomial through a numerical optimizer results in a maximum $F(\g, \bt)$ value of $\approx 0.819292n$, which is indeed greater than \(0.81n\). This proves Theorem \ref{quantum-d3-result-thm}.
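As in the degree-2 case, this maximum can be reproduced by directly evaluating \eqref{d3-F-exact} on a dense grid; the following Python sketch is one way to do so (the grid bounds and resolution are arbitrary illustrative choices).
\begin{verbatim}
import numpy as np

def F3_over_n(g, b):
    # Right-hand side of (d3-F-exact), divided by n.
    sb, cb = np.sin(2*b), np.cos(2*b)
    sh, ch = np.sin(g/2), np.cos(g/2)
    bracket = 3*np.cos(3*g/2) + np.cos(5*g/2)
    return (0.5
            + 1.5*cb*sb*np.sin(g)*np.cos(g)*ch**4
            + sb*cb**3*ch**3*(3*np.sin(3*g/2) - np.sin(5*g/2))/16
            + 3*sb*cb**3*sh*ch**2*bracket/16
            - 0.75*sb**3*cb*sh*np.cos(g)**5*ch**5
            - 0.25*sb**3*cb*ch**6*(sh*bracket**3/64
                                   + np.sin(g)**3*np.cos(g)**3*ch**4))

gg, bb = np.meshgrid(np.linspace(0, 2*np.pi, 1001), np.linspace(0, np.pi, 1001))
print(F3_over_n(gg, bb).max())   # ~0.819
\end{verbatim}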
\end{document} |
\begin{document}
\title[\fontsize{7}{9}\selectfont ]{Stability results of locally coupled wave equations with local Kelvin-Voigt damping: Cases when the supports of damping and coupling coefficients are disjoint} \author{Mohammad Akil$^{1}$, Haidar Badawi$^{1}$, and Serge Nicaise$^1$}
\address{$^1$ Universit\'e Polytechnique Hauts-de-France, CERAMATHS/DEMAV,
Valenciennes, France} \email{[email protected], [email protected], [email protected]} \keywords{Coupled wave equations, Kelvin-Voigt damping, strong stability, polynomial stability }
\setcounter{equation}{0}
\begin{abstract} In this paper, we study the direct/indirect stability of locally coupled wave equations with local Kelvin-Voigt dampings/damping, assuming that the supports of the dampings and the coupling coefficients are disjoint. First, we prove the well-posedness, strong stability, and polynomial stability for some one-dimensional coupled systems. Moreover, under a geometric control condition, we prove the well-posedness and strong stability in the multi-dimensional case.
\end{abstract} \maketitle \pagenumbering{roman} \tableofcontents
\pagenumbering{arabic} \setcounter{page}{1}
\section{Introduction} \noindent The direct and indirect stability of locally coupled wave equations with local damping has attracted considerable interest in recent years. The study of coupled systems is also motivated by several physical models such as Timoshenko and Bresse systems (see for instance \cite{BASSAM20151177,Akil2020,Akil2021,ABBresse,Wehbe08,Fatori01,FATORI2012600}).
The exponential or polynomial stability of the wave equation with a local Kelvin-Voigt damping is considered in \cite{Liu-Rao:06,Tebou:16,BurqSun:22}, for instance. On the other hand, the direct and indirect stability of locally coupled wave equations with local viscous dampings is analyzed in \cite{Alabau-Leautaud:13,Kassemetal:19,Gerbietal:21}. In this paper, we are interested in locally coupled wave equations with local Kelvin-Voigt dampings. Before stating our main contributions, let us mention similar results for such systems. In 2019, Hayek {\it et al.} in \cite{Hayek} studied the stabilization of a multi-dimensional system of weakly coupled wave equations with one or two local Kelvin-Voigt dampings and a non-smooth coefficient at the interface. They established several stability results. In 2021, Akil {\it et al.} in \cite{Wehbe2021} studied the stability of an elastic/viscoelastic transmission problem of locally coupled waves with non-smooth coefficients, by considering: \begin{equation*} \left\{ \begin{array}{llll}
\displaystyle u_{tt}-\left(au_x +{\color{black}b_0 \chi_{(\alpha_1,\alpha_3)}} {\color{black}u_{tx}}\right)_x +{\color{black}c_0 \chi_{(\alpha_2,\alpha_4)}}y_t =0,& \text{in}\ (0,L)\times (0,\infty) ,&\\
y_{tt}-y_{xx}-{\color{black}c_0 \chi_{(\alpha_2,\alpha_4)}}u_t =0, &\text{in} \ (0,L)\times (0,\infty) ,&\\
u(0,t)=u(L,t)=y(0,t)=y(L,t)=0,& \text{in} \ (0,\infty) ,&
\end{array}\right. \end{equation*} where $a, b_0, L >0$, $c_0 \neq 0$, and $0<\alpha_1<\alpha_2<\alpha_3<\alpha_4<L$. They established a polynomial energy decay rate of type $t^{-1}$. In the same year, Akil {\it et al.} in \cite{ABWdelay}, studied the stability of a singular local interaction elastic/viscoelastic coupled wave equations with time delay, by considering: \begin{equation*} \left\{ \begin{array}{llll}
\displaystyle u_{tt}-\left[au_x +{\color{black} \chi_{(0,\beta)}}(\kappa_1 {\color{black}u_{tx}}+\kappa_2 u_{tx}(t-\tau))\right]_x +{\color{black}c_0 \chi_{(\alpha,\gamma)}}y_t =0,& \text{in}\ (0,L)\times (0,\infty) ,&\\
y_{tt}-y_{xx}-{\color{black}c_0 \chi_{(\alpha,\gamma)}}u_t =0, &\text{in} \ (0,L)\times (0,\infty) ,&\\
u(0,t)=u(L,t)=y(0,t)=y(L,t)=0,& \text{in} \ (0,\infty) ,&
\end{array}\right. \end{equation*} where $a, \kappa_1, L>0$, $\kappa_2, c_0 \neq 0$, and $0<\alpha <\beta <\gamma <L$. They proved that the energy of their system decays polynomially in $t^{-1}$. In 2021, Akil {\it et al.} in \cite{ABNWmemory}, studied the stability of coupled wave models with locally memory in a past history framework via non-smooth coefficients on the interface, by considering: \begin{equation*} \left\{ \begin{array}{llll}
\displaystyle u_{tt}-\left(au_x +{\color{black} b_0 \chi_{(0,\beta)}} {\color{black}\int_0^{\infty}g(s)u_{x}(t-s)ds}\right)_x +{\color{black}c_0 \chi_{(\alpha,\gamma)}}y_t =0,& \text{in}\ (0,L)\times (0,\infty) ,&\\
y_{tt}-y_{xx}-{\color{black}c_0 \chi_{(\alpha,\gamma)}}u_t =0, &\text{in} \ (0,L)\times (0,\infty) ,&\\
u(0,t)=u(L,t)=y(0,t)=y(L,t)=0,& \text{in} \ (0,\infty) ,&
\end{array}\right. \end{equation*} where $a, b_0, L >0$, $c_0 \neq 0$, $0<\alpha<\beta<\gamma<L$, and $g:[0,\infty) \longmapsto (0,\infty)$ is the convolution kernel function. They established an exponential energy decay rate if the two waves have the same speed of propagation. In case of different speed of propagation, they proved that the energy of their system decays polynomially with rate $t^{-1}$. In the same year, Akil {\it et al.} in \cite{akil2021ndimensional}, studied the stability of a multi-dimensional elastic/viscoelastic transmission problem with Kelvin-Voigt damping and non-smooth coefficient at the interface, they established some polynomial stability results under some geometric control condition. In those previous literature, the authors deal with the locally coupled wave equations with local damping and by assuming that there is an intersection between the damping and coupling regions. The aim of this paper is to study the direct/indirect stability of locally coupled wave equations with Kelvin-Voigt dampings/damping localized via non-smooth coefficients/coefficient and by assuming that the supports of the dampings and coupling coefficients are disjoint. In the first part of this paper, we consider the following one dimensional coupled system: \begin{eqnarray} u_{tt}-\left(au_x+bu_{tx}\right)_x+c y_t&=&0,\quad (x,t)\in (0,L)\times (0,\infty),\label{eq1}\\ y_{tt}-\left(y_x+dy_{tx}\right)_x-cu_t&=&0,\quad (x,t)\in (0,L)\times (0,\infty),\label{eq2} \end{eqnarray} with fully Dirichlet boundary conditions, \begin{equation}\label{eq3} u(0,t)=u(L,t)=y(0,t)=y(L,t)=0,\ t\in (0,\infty), \end{equation} and the following initial conditions \begin{equation}\label{eq4}
u(\cdot,0)=u_0(\cdot),\ u_t(\cdot,0)=u_1(\cdot),\ y(\cdot,0)=y_0(\cdot)\quad \text{and}\quad y_t(\cdot,0)=y_1(\cdot), \ x \in (0,L). \end{equation} In this part, for all $b_0, d_0 >0$ and $c_0 \neq 0$, we treat the following three cases:\\[0.1in]
\textbf{Case 1 (See Figure \ref{p7-Fig1}):} \begin{equation}\tag{${\rm C1}$}\label{C1} \left\{\begin{array}{l} b(x)=b_0 \chi_{(b_1,b_2)}(x) ,\ \quad c(x)=c_0\chi_{(c_1,c_2)}(x),\ \quad d(x)=d_0\chi_{(d_1,d_2)}(x),\\[0.1in] \text{where}\ 0<b_1<b_2<c_1<c_2<d_1<d_2<L. \end{array} \right. \end{equation}
\textbf{Case 2 (See Figure \ref{p7-Fig2}):} \begin{equation}\tag{${\rm C2}$}\label{C2} \left\{\begin{array}{l} b(x)=b_0 \chi_{(b_1,b_2)}(x) ,\ \quad c(x)=c_0\chi_{(c_1,c_2)}(x),\ \quad d(x)=d_0\chi_{(d_1,d_2)}(x),\\[0.1in] \text{where}\ 0<b_1<b_2<d_1<d_2<c_1<c_2<L. \end{array} \right. \end{equation}
\textbf{Case 3 (See Figure \ref{p7-Fig3}):} \begin{equation}\tag{${\rm C3}$}\label{C3} \left\{\begin{array}{l} b(x)=b_0 \chi_{(b_1,b_2)}(x) ,\ \quad c(x)=c_0\chi_{(c_1,c_2)}(x),\ \quad d(x)=0,\\[0.1in] \text{where}\ 0<b_1<b_2<c_1<c_2<L. \end{array} \right. \end{equation} \begin{figure}
\caption{Geometric description of the functions $b, c$ and $d$ in Case 1.}
\label{p7-Fig1}
\end{figure} \begin{figure}
\caption{Geometric description of the functions $b,c$ and $d$ in Case 2.}
\label{p7-Fig2}
\end{figure} \begin{figure}
\caption{Geometric description of the functions $b$ and $c$ in Case 3.}
\label{p7-Fig3}
\end{figure} \noindent In the second part, we consider the following multi-dimensional coupled system: \begin{eqnarray}\label{ND-1} u_{tt}-\divv (\nabla u+bu_t)+cy_t&=&0\quad \text{in}\ \Omega\times (0,\infty),\\ y_{tt}-\Delta y-cu_t&=&0\quad \text{in}\ \Omega\times (0,\infty), \end{eqnarray} with full Dirichlet boundary condition \begin{equation}\label{ND-2} u=y=0\quad \text{on}\quad \Gamma\times (0,\infty), \end{equation} and the following initial condition \begin{equation}\label{ND-5}
u(\cdot,0)=u_0(\cdot),\ u_t(\cdot,0)=u_1(\cdot),\ y(\cdot,0)=y_0(\cdot)\ \text{and}\ y_t(\cdot,0)=y_1(\cdot) \ \text{in} \ \Omega, \end{equation} where $\Omega \subset \mathbb R^d$, $d\geq 2$ is an open and bounded set with boundary $\Gamma$ of class $C^2$. Here, $b,c\in L^{\infty}(\Omega)$ are such that $b:\Omega\to \mathbb R_+$ is the viscoelastic damping coefficient, $c:\Omega\to \mathbb R$ is the coupling function and \begin{equation}\label{ND-3} b(x)\geq b_0>0\ \ \text{in}\ \ \omega_b\subset \Omega, \quad c(x)\geq c_0\neq 0\ \ \text{in}\ \ \omega_c\subset \Omega\quad \text{and}\quad c(x)=0\ \ \text{on}\ \ \Omega\backslash \omega_c \end{equation} and \begin{equation}\label{ND-4}
\meas\left(\overline{\omega_c}\cap \Gamma\right)>0\quad \text{and}\quad \overline{\omega_b}\cap \overline{\omega_c}=\emptyset. \end{equation}\\\linebreak
In the first part of this paper, we study the direct and indirect stability of system \eqref{eq1}-\eqref{eq4} by considering the three cases \eqref{C1}, \eqref{C2}, and \eqref{C3}. In Subsection \ref{WP}, we prove the well-posedness of our system by using a semigroup approach. In Subsection \ref{subss}, by using a general criterion of Arendt and Batty, we prove the strong stability of our system in the absence of compactness of the resolvent. Finally, in Subsection \ref{secps}, by using a frequency domain approach combined with a specific multiplier method, we prove that the energy of our system decays polynomially as $t^{-4}$ or as $t^{-1}$.\\\linebreak In the second part of this paper, we study the indirect stability of system \eqref{ND-1}-\eqref{ND-5}. In Subsection \ref{wpnd}, we prove the well-posedness of our system by using a semigroup approach. Finally, in Subsection \ref{Strong Stability-ND}, under a geometric control condition, we prove the strong stability of this system.
\section{Direct and Indirect Stability in the one dimensional case}\label{sec1}
In this section, we study the well-posedness, strong stability, and polynomial stability of system \eqref{eq1}-\eqref{eq4}. The main results of this section are presented in the following three subsections. \subsection{Well-Posedness}\label{WP} \noindent In this subsection, we establish the well-posedness of system \eqref{eq1}-\eqref{eq4} by using a semigroup approach. The energy of system \eqref{eq1}-\eqref{eq4} is given by
\begin{equation*}
E(t)=\frac{1}{2}\int_0^L \left(|u_t|^2+a|u_x|^2+|y_t|^2+|y_x|^2\right)dx.
\end{equation*} Let $\left(u,u_{t},y,y_{t}\right)$ be a regular solution of \eqref{eq1}-\eqref{eq4}. Multiplying \eqref{eq1} and \eqref{eq2} by $\overline{u_t}$ and $\overline{y_t}$ respectively, then using the boundary conditions \eqref{eq3}, we get
\begin{equation*}
E^\prime(t)=- \int_0^L \left(b|u_{tx}|^2+d|y_{tx}|^2\right)dx.
\end{equation*} Thus, if \eqref{C1} or \eqref{C2} or \eqref{C3} holds, we get $E^\prime(t)\leq0$. Therefore, system \eqref{eq1}-\eqref{eq4} is dissipative in the sense that its energy is non-increasing with respect to time $t$. Let us define the energy space $\mathcal{H}$ by
\begin{equation*}
\mathcal{H}=(H_0^1(0,L)\times L^2(0,L))^2.
\end{equation*}
\noindent The energy space $\mathcal{H}$ is equipped with the following inner product
$$
\left(U,U_1\right)_\mathcal{H}=\int_{0}^Lv\overline{{v}}_1dx+a\int_{0}^Lu_x(\overline{{u}}_1)_xdx+\int_{0}^Lz\overline{{z}}_1dx+\int_{0}^Ly_x(\overline{{y}}_1)_xdx,
$$ for all $U=\left(u,v,y,z\right)^\top$ and $U_1=\left(u_1,v_1,y_1,z_1\right)^\top$ in $\mathcal{H}$. We define the unbounded linear operator $\mathcal{A}: D\left(\mathcal{A}\right)\subset \mathcal{H}\longrightarrow \mathcal{H}$ by \begin{equation*} D(\mathcal{A})=\left\{\begin{array}{l} \displaystyle U=(u,v,y,z)^\top \in\mathcal{H};\ v,z\in H_0^1(0,L), \ (au_{x}+bv_{x})_{x}\in L^2(0,L), \ (y_{x}+dz_x)_x\in L^2(0,L) \end{array}\right\} \end{equation*} and $$ \mathcal{A}\left(u, v,y, z\right)^\top=\left(v,(au_{x}+bv_{x})_{x}-cz, z, (y_x+dz_x)_x+cv \right)^{\top}, \ \forall U=\left(u, v,y, z\right)^\top \in D\left(\mathcal{A}\right). $$ \noindent Now, if $U=(u,u_t,y,y_t)^\top$ is the state of system \eqref{eq1}-\eqref{eq4}, then it is transformed into the following first order evolution equation \begin{equation}\label{eq-2.9} U_t=\mathcal{A}U,\quad U(0)=U_0, \end{equation} where $U_0=(u_0,u_1,y_0,y_1)^\top \in \mathcal H$.\\\linebreak \begin{pro}\label{mdissipative}
{\rm Assume that \eqref{C1} or \eqref{C2} or \eqref{C3} holds. Then, the unbounded linear operator $\mathcal A$ is m-dissipative in the Hilbert space $\mathcal H$.} \end{pro} \begin{proof}
For all $U=(u,v,y,z)^{\top}\in D(\mathcal{A})$, we have \begin{equation*} \Re\left<\mathcal{A}U,U\right>_{\mathcal{H}}=-\int_0^Lb\abs{v_x}^2dx-\int_0^Ld\abs{z_x}^2dx\leq 0, \end{equation*} which implies that $\mathcal{A}$ is dissipative. Now, similar to Proposition 2.1 in \cite{Wehbe2021} (see also \cite{ABWdelay} and \cite{ABNWmemory}), we can prove that there exists a unique solution $U=(u,v,y,z)^{\top}\in D(\mathcal{A})$ of \begin{equation*} -\mathcal{A}U=F,\quad \forall F=(f^1,f^2,f^3,f^4)^\top\in \mathcal{H}. \end{equation*} Then $0\in \rho(\mathcal{A})$ and $\mathcal{A}$ is an isomorphism, and since $\rho(\mathcal{A})$ is open in $\mathbb{C}$ (see Theorem 6.7 (Chapter III) in \cite{Kato01}), we easily get $R(\lambda I -\mathcal{A}) = {\mathcal{H}}$ for a sufficiently small $\lambda>0 $. This, together with the dissipativeness of $\mathcal{A}$, implies that $D\left(\mathcal{A}\right)$ is dense in ${\mathcal{H}}$ and that $\mathcal{A}$ is m-dissipative in ${\mathcal{H}}$ (see Theorems 4.5, 4.6 in \cite{Pazy01}). \end{proof}\\\linebreak According to the Lumer-Phillips theorem (see \cite{Pazy01}), the operator $\mathcal A$ generates a $C_{0}$-semigroup of contractions $e^{t\mathcal A}$ in $\mathcal H$, which gives the well-posedness of \eqref{eq-2.9}.
Then, we have the following result: \begin{theoreme}{\rm For all $U_0 \in \mathcal H$, system \eqref{eq-2.9} admits a unique weak solution $$U(t)=e^{t\mathcal A}U_0\in C^0 (\mathbb R_+ ,\mathcal H).
$$ Moreover, if $U_0 \in D(\mathcal A)$, then the system \eqref{eq-2.9} admits a unique strong solution $$U(t)=e^{t\mathcal A}U_0\in C^0 (\mathbb R_+ ,D(\mathcal A))\cap C^1 (\mathbb R_+ ,\mathcal H).$$} \end{theoreme}
\subsection{Strong Stability}\label{subss} In this subsection, we will prove the strong stability of system \eqref{eq1}-\eqref{eq4}. We define the following conditions: \begin{equation}\label{SSC1}\tag{${\rm SSC1}$}
\eqref{C1} \ \text{holds}\quad \text{and} \quad \abs{c_0}<\min\left(\frac{\sqrt{a}}{c_2-c_1},\frac{1}{c_2-c_1}\right), \end{equation} \begin{equation}\label{SSC2}\tag{${\rm SSC3}$}
\eqref{C3} \ \text{holds},\quad a=1\quad \text{and}\quad \abs{c_0}<\frac{1}{c_2-c_1}. \end{equation} The main result of this section is the following theorem. \begin{theoreme}\label{Th-SS1} {\rm Assume that \eqref{SSC1} or \eqref{C2} or \eqref{SSC2} holds. Then, the $C_0$-semigroup of contractions $\left(e^{t\mathcal{A}}\right)_{t\geq 0}$ is strongly stable in $\mathcal{H}$; i.e. for all $U_0\in \mathcal{H}$, the solution of \eqref{eq-2.9} satisfies $$
\lim_{t\to +\infty}\|e^{t\mathcal{A}}U_0\|_{\mathcal{H}}=0. $$} \end{theoreme}
\noindent According to Theorem \ref{App-Theorem-A.2}, to prove Theorem \ref{Th-SS1}, we need to prove that the operator $\mathcal A$ has no pure imaginary eigenvalues and $\sigma(\mathcal A)\cap i\mathbb R $ is countable. Its proof has been divided into the following Lemmas.
\begin{lemma}\label{ker-SS123}
{\rm Assume that \eqref{SSC1} or \eqref{C2} or \eqref{SSC2} holds. Then, for all ${\lambda}\in \mathbb{R}$, $i\la I-\mathcal{A}$ is injective, i.e. $$ \ker\left(i\la I-\mathcal{A}\right)=\left\{0\right\}. $$} \end{lemma}
\begin{proof} From Proposition \ref{mdissipative}, we have $0\in \rho(\mathcal{A})$. We still need to show the result for $\la\in \mathbb R^{\ast}$. For this aim, suppose that there exists a real number $\la\neq 0$ and $U=\left(u,v,y,z\right)^\top\in D(\mathcal A)$ such that \begin{equation*} \mathcal A U=i\la U. \end{equation*} Equivalently, we have \begin{eqnarray} v&=&i\la u,\label{eq-2.20}\\ (au_{x}+bv_{x})_{x}-cz&=&i\la v,\label{eq-2.21}\\ z&=&i\la y,\label{eq-2.22}\\ (y_{x}+dz_x)+cv&=&i\la z.\label{eq-2.23} \end{eqnarray} Next, a straightforward computation gives \begin{equation}\label{Re}
0=\Re\left<i\la U,U\right>_{\mathcal H}=\Re\left<\mathcal A U,U\right>_{\mathcal H}=-\int_0^L b|v_x|^2dx-\int_0^L d|z_x|^2dx. \end{equation} Inserting \eqref{eq-2.20} and \eqref{eq-2.22} in \eqref{eq-2.21} and \eqref{eq-2.23}, we get \begin{eqnarray} \la^2u+(au_{x}+i\la bu_x)_x-i\la cy&=&0\quad \text{in}\quad (0,L),\label{eq-2.27}\\ \la^2y+(y_{x}+i\la dy_x)_x+i\la cu&=&0\quad \text{in}\quad (0,L),\label{eq-2.2.8} \end{eqnarray} with the boundary conditions \begin{equation}\label{boundaryconditionker} u(0)=u(L)=y(0)=y(L)=0. \end{equation} $\bullet$ \textbf{Case 1:} Assume that \eqref{SSC1} holds.
From \eqref{eq-2.20}, \eqref{eq-2.22} and \eqref{Re}, we deduce that \begin{equation}\label{2.10} u_x= v_x=0 \ \ \text{in} \ \ (b_1,b_2) \ \ \text{and} \ \ y_x=z_x =0 \ \ \text{in} \ \ (d_1,d_2). \end{equation} Using \eqref{eq-2.27}, \eqref{eq-2.2.8} and \eqref{2.10}, we obtain \begin{equation}\label{2interval} \la^2u+au_{xx}=0\ \ \text{in}\ \ (0,c_1)\quad \text{and}\quad \la^2y+y_{xx}=0\ \ \text{in}\ \ (c_2,L). \end{equation} Deriving the above equations with respect to $x$ and using \eqref{2.10}, we get \begin{equation}\label{2interval1} \left\{\begin{array}{lll} \la^2u_x+au_{xxx}=0&\text{in}&(0,c_1),\\[0.1in] u_x=0&\text{in}&(b_1,b_2)\subset (0,c_1), \end{array} \right.\quad \text{and}\quad \left\{\begin{array}{lll} \la^2y_x+y_{xxx}=0&\text{in}&(c_2,L),\\[0.1in] y_x=0&\text{in}&(d_1,d_2)\subset (c_2,L). \end{array} \right. \end{equation} Using the unique continuation theorem, we get \begin{equation}\label{2interval2} u_x=0\ \ \text{in}\ \ (0,c_1)\quad \text{and}\quad y_x=0\ \ \text{in}\ \ (c_2,L) \end{equation} Using \eqref{2interval2} and the fact that $u(0)=y(L)=0$, we get \begin{equation}\label{2interval3} u=0\ \ \text{in}\ \ (0,c_1)\quad \text{and}\quad y=0\ \ \text{in}\ \ (c_2,L). \end{equation} Now, our aim is to prove that $u=y=0 \ \text{in} \ (c_1,c_2)$. For this aim, using \eqref{2interval3} and the fact that $u, y\in C^1([0,L])$, we obtain the following boundary conditions \begin{equation}\label{1c1c2} u(c_1)=u_x(c_1)=y(c_2)=y_x(c_2)=0. \end{equation}
Multiplying \eqref{eq-2.27} by $-2(x-c_2)\overline{u}_x$, integrating over $(c_1,c_2)$ and taking the real part, we get \begin{equation}\label{ST1step2} -\int_{c_1}^{c_2}\la^2(x-c_2)(\abs{u}^2)_xdx-a\int_{c_1}^{c_2}(x-c_2)\left(\abs{u_x}^2\right)_xdx+2\Re\left(i\la c_0\int_{c_1}^{c_2}(x-c_2)y\overline{u}_xdx\right)=0, \end{equation} using integration by parts and \eqref{1c1c2}, we get \begin{equation}\label{ST2step2} \int_{c_1}^{c_2}\abs{\la u}^2dx+a\int_{c_1}^{c_2}\abs{u_x}^2dx+2\Re\left(i\la c_0\int_{c_1}^{c_2}(x-c_2)y\overline{u}_xdx\right)=0. \end{equation} Multiplying \eqref{eq-2.2.8} by $-2(x-c_1)\overline{y}_x$, integrating over $(c_1,c_2)$, taking the real part, and using the same argument as above, we get \begin{equation}\label{ST3step2} \int_{c_1}^{c_2}\abs{\la y}^2dx+\int_{c_1}^{c_2}\abs{y_x}^2dx+2\Re\left(i\la c_0\int_{c_1}^{c_2}(x-c_1)u\overline{y}_x dx\right)=0. \end{equation} Adding \eqref{ST2step2} and \eqref{ST3step2}, we get \begin{equation}\label{ST4step2} \int_{c_1}^{c_2}\abs{\la u}^2dx+a\int_{c_1}^{c_2}\abs{u_x}^2dx+\int_{c_1}^{c_2}\abs{\la y}^2dx+\int_{c_1}^{c_2}\abs{y_x}^2dx\leq 2\abs{\la}\abs{c_0}(c_2-c_1)\int_{c_1}^{c_2}\left(\abs{y}\abs{u_x}+\abs{u}\abs{y_x}\right)dx. \end{equation} Using Young's inequality in \eqref{ST4step2}, we get \begin{equation}\label{ST5step2} \begin{array}{c} \displaystyle \int_{c_1}^{c_2}\abs{\la u}^2dx+a\int_{c_1}^{c_2}\abs{u_x}^2dx+\int_{c_1}^{c_2}\abs{\la y}^2dx+\int_{c_1}^{c_2}\abs{y_x}^2dx\leq \frac{c_0^2(c_2-c_1)^2}{a}\int_{c_1}^{c_2}\abs{\la y}^2dx
\\ \displaystyle +\, a\int_{c_1}^{c_2}\abs{u_x}^2dx+c_0^2(c_2-c_1)^2\int_{c_1}^{c_2}\abs{\la u}^2dx+\int_{c_1}^{c_2}\abs{y_x}^2dx, \end{array} \end{equation} consequently, we get \begin{equation}\label{ST6step2} \left(1-\frac{c_0^2(c_2-c_1)^2}{a}\right)\int_{c_1}^{c_2}\abs{\la y}^2dx+\left(1-c_0^2(c_2-c_1)^2\right)\int_{c_1}^{c_2}\abs{\la u}^2dx\leq 0. \end{equation} Thus, from the above inequality and \eqref{SSC1}, we get \begin{equation}\label{0c1c2} u=y=0 \ \ \text{in} \ \ (c_1,c_2). \end{equation} Next, we need to prove that $u=0$ in $(c_2,L)$ and $y=0$ in $(0,c_1)$. For this aim, from \eqref{0c1c2} and the fact that $u,y \in C^1([0,L])$, we obtain \begin{equation}\label{ST1step3} u(c_2)=u_x(c_2)=0\quad \text{and}\quad y(c_1)=y_x(c_1)=0. \end{equation} It follows from \eqref{eq-2.27}, \eqref{eq-2.2.8} and \eqref{ST1step3} that \begin{equation}\label{ST2step3} \left\{\begin{array}{lll} \la^2u+au_{xx}=0\ \ \text{in}\ \ (c_2,L),\\[0.1in] u(c_2)=u_x(c_2)=u(L)=0, \end{array} \right.\quad \text{and}\quad \left\{\begin{array}{rcc} \la^2y+y_{xx}=0\ \ \text{in}\ \ (0,c_1),\\[0.1in] y(0)=y(c_1)=y_x(c_1)=0. \end{array} \right. \end{equation} Holmgren uniqueness theorem yields \begin{equation}\label{2.25}
u=0 \ \ \text{in} \ \ (c_2,L) \ \ \text{and} \ \ y=0 \ \ \text{in} \ \ (0,c_1). \end{equation} Therefore, from \eqref{eq-2.20}, \eqref{eq-2.22}, \eqref{2interval3}, \eqref{0c1c2} and \eqref{2.25}, we deduce that $$ U=0. $$
$\bullet$ \textbf{Case 2:} Assume that \eqref{C2} holds. From \eqref{eq-2.20}, \eqref{eq-2.22} and \eqref{Re}, we deduce that \begin{equation}\label{2.10*}
u_x= v_x=0 \ \ \text{in} \ \ (b_1,b_2) \ \ \text{and} \ \ y_x=z_x =0 \ \ \text{in} \ \ (d_1,d_2). \end{equation} Using \eqref{eq-2.27}, \eqref{eq-2.2.8} and \eqref{2.10*}, we obtain \begin{equation}\label{2interval} \la^2u+au_{xx}=0\ \ \text{in}\ \ (0,c_1)\quad \text{and}\quad \la^2y+y_{xx}=0\ \ \text{in}\ \ (0,c_1). \end{equation} Deriving the above equations with respect to $x$ and using \eqref{2.10*}, we get \begin{equation}\label{C22interval1} \left\{\begin{array}{lll} \la^2u_x+au_{xxx}=0&\text{in}&(0,c_1),\\[0.1in] u_x=0&\text{in}&(b_1,b_2)\subset (0,c_1), \end{array} \right.\quad \text{and}\quad \left\{\begin{array}{lll} \la^2y_x+y_{xxx}=0&\text{in}&(0,c_1),\\[0.1in] y_x=0&\text{in}&(d_1,d_2)\subset (0,c_1). \end{array} \right. \end{equation} Using the unique continuation theorem, we get \begin{equation}\label{C22interval2} u_x=0\ \ \text{in}\ \ (0,c_1)\quad \text{and}\quad y_x=0\ \ \text{in}\ \ (0,c_1). \end{equation} From \eqref{C22interval2} and the fact that $u(0)=y(0)=0$, we get \begin{equation}\label{C22interval3} u=0\ \ \text{in}\ \ (0,c_1)\quad \text{and}\quad y=0\ \ \text{in}\ \ (0,c_1). \end{equation} Using the fact that $u,y\in C^1([0,L])$ and \eqref{C22interval3}, we get \begin{equation}\label{C21} u(c_1)=u_x(c_1)=y(c_1)=y_x(c_1)=0. \end{equation} Now, using the definition of $c(x)$ in \eqref{eq-2.27}-\eqref{eq-2.2.8}, \eqref{2.10*} and \eqref{C21} and Holmgren theorem, we get
$$u=y=0\ \text{ in} \ (c_1,c_2).$$
Again, using the fact that $u,y\in C^1([0,L])$, we get \begin{equation}\label{C22} u(c_2)=u_x(c_2)=y(c_2)=y_x(c_2)=0. \end{equation} Now, using the same argument as in Case 1, we obtain $$u=y=0 \ \text{in} \ (c_2,L),$$ consequently, we deduce that $$ U=0. $$ $\bullet$ \textbf{Case 3:} Assume that \eqref{SSC2} holds. \noindent Using the same argument as in Cases 1 and 2, we obtain \begin{equation}\label{C3-SST1} u=0\ \ \text{in}\ \ (0,c_1)\quad \text{and}\quad u(c_1)=u_x(c_1)=0. \end{equation} \textbf{Step 1.} The aim of this step is to prove that \begin{equation}\label{2c1c2} \int_{c_1}^{c_2}\abs{u}^2dx=\int_{c_1}^{c_2}\abs{y}^2dx. \end{equation} For this aim, multiplying \eqref{eq-2.27} by $\overline{y}$ and \eqref{eq-2.2.8} by $\overline{u}$ and using integration by parts, we get \begin{eqnarray} \int_{0}^{L}\la^2u\overline{y}dx-\int_{0}^{L}u_x\overline{y_x}dx-i\la c_0\int_{c_1}^{c_2}\abs{y}^2dx&=&0,\label{3c1c2}\\[0.1in] \int_{0}^{L}\la^2y\overline{u}dx-\int_{0}^{L}y_x\overline{u_x}dx+i\la c_0\int_{c_1}^{c_2}\abs{u}^2dx&=&0.\label{4c1c2} \end{eqnarray} Adding \eqref{3c1c2} and \eqref{4c1c2}, taking the imaginary part, we get \eqref{2c1c2}.\\[0.1in]
\textbf{Step 2.}
Multiplying \eqref{eq-2.27} by $-2(x-c_2)\overline{u}_x$, integrating over $(c_1,c_2)$ and taking the real part, we get \begin{equation}\label{C3ST3step2} -\Re\left(\int_{c_1}^{c_2}\la^2(x-c_2)(\abs{u}^2)_xdx\right)-\Re\left(\int_{c_1}^{c_2}(x-c_2)\left(\abs{u_x}^2\right)_xdx\right)+2\Re\left(i\la c_0\int_{c_1}^{c_2}(x-c_2)y\overline{u}_xdx\right)=0, \end{equation} using integration by parts in \eqref{C3ST3step2} and \eqref{C3-SST1}, we get \begin{equation}\label{C3ST4step2} \int_{c_1}^{c_2}\abs{\la u}^2dx+a\int_{c_1}^{c_2}\abs{u_x}^2dx+2\Re\left(i\la c_0\int_{c_1}^{c_2}(x-c_2)y\overline{u}_xdx\right)=0. \end{equation} Using Young's inequality in \eqref{C3ST4step2}, we obtain \begin{equation}\label{C3ST5step2} \int_{c_1}^{c_2}\abs{\la u}^2dx+\int_{c_1}^{c_2}\abs{u_x}^2dx\leq \abs{c_0}(c_2-c_1)\int_{c_1}^{c_2}\abs{\la y}^2dx+\abs{c_0}(c_2-c_1)\int_{c_1}^{c_2}\abs{u_x}^2dx. \end{equation} Inserting \eqref{2c1c2} in \eqref{C3ST5step2}, we get \begin{equation}\label{C3ST6step2} \left(1-\abs{c_0}(c_2-c_1)\right)\int_{c_1}^{c_2}\left(\abs{\la u}^2+\abs{u_x}^2\right)dx\leq 0. \end{equation} According to \eqref{SSC2} and \eqref{2c1c2}, we get \begin{equation}\label{C3ST7step2} u=y=0\quad \text{in}\quad (c_1,c_2). \end{equation} \textbf{Step 3.} Using the fact that $u\in H^2(c_1,c_2)\subset C^1([c_1,c_2])$, we get \begin{equation}\label{C3ST1step3} u(c_1)=u_x(c_1)=y(c_1)=y_x(c_1)=y(c_2)=y_x(c_2)=0. \end{equation} Now, from \eqref{eq-2.27}, \eqref{eq-2.2.8} and the definition of $c$, we get \begin{equation*} \left\{\begin{array}{lll} \la^2u+u_{xx}=0\ \ \text{in} \ \ (c_2,L),\\ u(c_2)=u_x(c_2)=0, \end{array} \right.\quad \text{and}\quad \left\{\begin{array}{lll} \la^2y+y_{xx}=0\ \ \text{in}\ \ (0,c_1)\cup (c_2,L),\\ y(c_1)=y_x(c_1)=y(c_2)=y_x(c_2)=0. \end{array} \right. \end{equation*} From the above systems and Holmgren uniqueness Theorem, we get \begin{equation}\label{C3ST2step3} u=0\ \ \text{in}\ \ (c_2,L)\quad \text{and}\quad y=0\ \ \text{in}\ \ (0,c_1)\cup (c_2,L). \end{equation} \\ \noindent Consequently, using \eqref{C3-SST1}, \eqref{C3ST7step2} and \eqref{C3ST2step3}, we get $U=0$. The proof is thus completed. \end{proof}
\begin{lemma}\label{surjectivity}
{\rm Assume that \eqref{SSC1} or \eqref{C2} or \eqref{SSC2} holds. Then, for all $\lambda\in \mathbb{R}$, we have $$ R\left(i\la I-\mathcal{A}\right)=\mathcal{H}. $$} \end{lemma} \begin{proof} See Lemma 2.5 in \cite{Wehbe2021} (see also \cite{ABNWmemory}). \end{proof}\\\linebreak
\noindent \textbf{Proof of Theorem \ref{Th-SS1}}. From Lemma \ref{ker-SS123}, we obtain that the operator $\mathcal{A}$ has no pure imaginary eigenvalues (i.e., $\sigma_p(\mathcal A)\cap i\mathbb R=\emptyset$). Moreover, from Lemma \ref{surjectivity} and with the help of the closed graph theorem of Banach, we deduce that $\sigma(\mathcal A)\cap i\mathbb R=\emptyset$. Therefore, according to Theorem \ref{App-Theorem-A.2}, we get that the $C_0$-semigroup $(e^{t\mathcal A})_{t\geq0}$ is strongly stable. The proof is thus complete. \xqed{$\square$}
\subsection{Polynomial Stability}\label{secps} \noindent In this subsection, we study the polynomial stability of system \eqref{eq1}-\eqref{eq4}. Our main results in this subsection are the following theorems.
\begin{theoreme}\label{1pol}
{\rm Assume that \eqref{SSC1} holds. Then, for all $U_0 \in D(\mathcal A)$, there exists a constant $C>0$ independent of $U_0$ such that \begin{equation}\label{Energypol1}
E(t)\leq \frac{C}{t^4}\|U_0\|^2_{D(\mathcal A)},\quad t>0. \end{equation}} \end{theoreme}
\begin{theoreme}\label{2pol}
{\rm Assume that \eqref{SSC2} holds. Then, for all $U_0 \in D(\mathcal A)$, there exists a constant $C>0$ independent of $U_0$ such that \begin{equation}\label{Energypol2}
E(t)\leq \frac{C}{t}\|U_0\|^2_{D(\mathcal A)},\quad t>0. \end{equation}} \end{theoreme}
\noindent According to Theorem \ref{bt}, the polynomial energy decays \eqref{Energypol1} and \eqref{Energypol2} hold if the following conditions \begin{equation}\label{H1}\tag{${\rm{H_1}}$} i\mathbb R\subset \rho(\mathcal{A}) \end{equation} and \begin{equation}\label{H2}\tag{${\rm{H_2}}$}
\limsup_{{\lambda}\in \mathbb R, \ |\la| \to \infty}\frac{1}{|\la|^\ell}\left\|(i\la I-\mathcal A)^{-1}\right\|_{\mathcal{L}(\mathcal{H})}<\infty \ \ \text{with} \ \ \ell=\left\{\begin{array}{lll} \frac{1}{2} \ \ \text{for Theorem \ref{1pol}},
\\ 2 \ \ \text{for Theorem \ref{2pol}}, \end{array}\right. \end{equation} are satisfied. Since condition \eqref{H1} is already proved in Subsection \ref{subss}. We still need to prove \eqref{H2}, let us prove it by a contradiction argument. To this aim, suppose that \eqref{H2} is false, then there exists $\left\{\left(\la_n,U_n:=(u_n,v_n,y_n,z_n)^\top\right)\right\}_{n\geq 1}\subset \mathbb R^{\ast}_+\times D(\mathcal A)$ with \begin{equation}\label{pol1}
\la_n\to \infty \ \text{as} \ n\to \infty \quad \text{and}\quad \|U_n\|_{\mathcal{H}}=1, \ \forall n\geq1, \end{equation} such that \begin{equation}\label{pol2-w} \left(\la_n\right)^{\ell}\left(i\la_nI-\mathcal A\right)U_n=F_n:=(f_{1,n},f_{2,n},f_{3,n},f_{4,n})^{\top}\to 0 \ \ \text{in}\ \ \mathcal{H}, \ \text{as} \ n\to \infty. \end{equation} For simplicity, we drop the index $n$. Equivalently, from \eqref{pol2-w}, we have \begin{eqnarray} i\la u-v&=&\dfrac{f_1}{\la^{\ell}}, \ f_1 \to 0 \ \ \text{in}\ \ H_0^1(0,L),\label{pol3}\\ i\la v-\left(au_x+bv_x\right)_x+cz&=&\dfrac{f_2}{\la^{\ell}}, \ f_2 \to 0 \ \ \text{in}\ \ L^2(0,L),\label{pol4}\\ i\la y-z&=&\dfrac{f_3}{\la^{\ell}}, \ f_3 \to 0 \ \ \text{in}\ \ H_0^1(0,L),\label{pol5}\\ i\la z-(y_x+dz_x)_x-cv&=&\dfrac{f_4}{\la^{\ell}},\ f_4 \to 0 \ \ \text{in} \ \ L^2(0,L).\label{pol6} \end{eqnarray}
\subsubsection{Proof of Theorem \ref{1pol}} In this subsection, we prove Theorem \ref{1pol} by checking condition \eqref{H2}; that is, we reach a contradiction with \eqref{pol1} by showing that $\|U\|_{\mathcal{H}}=o(1)$. For clarity, we divide the proof into several lemmas.
By taking the inner product of \eqref{pol2-w} with $U$ in $\mathcal{H}$, we remark that \begin{equation*}
\int _0^L b\left|v_{x}\right|^2dx+\int_0^Ld\abs{z_x}^2dx=-\Re\left(\left<\mathcal A U,U\right>_{\mathcal H}\right)=\la^{-\frac{1}{2}}\Re\left(\left<F,U\right>_{\mathcal H}\right)=o\left(\lambda^{-\frac{1}{2}}\right). \end{equation*} Thus, from the definitions of $b$ and $d$, we get \begin{equation}\label{eq-4.9}
\int _{b_1}^{b_2}\left|v_{x}\right|^2dx=o\left(\lambda^{-\frac{1}{2}}\right)\quad \text{and}\quad \int _{d_1}^{d_2}\left|z_{x}\right|^2dx=o\left(\lambda^{-\frac{1}{2}}\right). \end{equation} Using \eqref{pol3}, \eqref{pol5}, \eqref{eq-4.9}, and the fact that $f_1,f_3\to 0$ in $H_0^1(0,L)$, we get \begin{equation}\label{eq-5.0} \int_{b_1}^{b_2}\abs{u_x}^2dx=\frac{o(1)}{\la^{\frac{5}{2}}}\quad \text{and}\quad \int_{d_1}^{d_2}\abs{y_x}^2dx=\frac{o(1)}{\la^{\frac{5}{2}}}. \end{equation}
\begin{lemma}\label{F-est}
{\rm The solution $U\in D(\mathcal A)$ of system \eqref{pol3}-\eqref{pol6} satisfies the following estimations \begin{equation}\label{F-est1} \int_{b_1}^{b_2}\abs{v}^2dx=\frac{o(1)}{\la^{\frac{3}{2}}}\quad \text{and}\quad \int_{d_1}^{d_2}\abs{z}^2dx=\frac{o(1)}{\la^{\frac{3}{2}}}. \end{equation}} \end{lemma} \begin{proof} We give the proof of the first estimation in \eqref{F-est1}, the second one can be done in a similar way. For this aim, we fix $g\in C^1\left([b_1,b_2]\right)$ such that $$ g(b_2)=-g(b_1)=1,\quad \max_{x\in[b_1,b_2]}\abs{g(x)}=m_g\ \ \text{and}\ \ \max_{x\in [b_1,b_2]}\abs{g'(x)}=m_{g'}. $$
The proof is divided into several steps: \\
\textbf{Step 1}. The goal of this step is to prove that \begin{equation}\label{Step1-Eq1} \abs{v(b_1)}^2+\abs{v(b_2)}^2\leq \left(\frac{\la^{\frac{1}{2}}}{2}+2m_{g'}\right)\int_{b_1}^{b_2}\abs{v}^2dx+\frac{o(1)}{\la}. \end{equation} From \eqref{pol3}, we deduce that \begin{equation}\label{Step1-Eq2} v_x=i\la u_x-\la^{-\frac{1}{2}}(f_1)_x. \end{equation} Multiplying \eqref{Step1-Eq2} by $2g\overline{v}$ and integrating over $(b_1,b_2)$, then taking the real part, we get \begin{equation*} \int_{b_1}^{b_2}g\left(\abs{v}^2\right)_xdx=\Re\left(2i{\lambda}\int_{b_1}^{b_2}gu_x\overline{v}dx\right)-\Re\left(2\la^{-\frac{1}{2}}\int_{b_1}^{b_2}g(f_1)_x\overline{v}dx\right). \end{equation*} Using integration by parts in the left hand side of the above equation, we get \begin{equation}\label{Step1-Eq3} \abs{v(b_1)}^2+\abs{v(b_2)}^2=\int_{b_1}^{b_2}g'\abs{v}^2dx+\Re\left(2i{\lambda}\int_{b_1}^{b_2}gu_x\overline{v}dx\right)-\Re\left(2\la^{-\frac{1}{2}}\int_{b_1}^{b_2}g(f_1)_x\overline{v}dx\right). \end{equation} Using Young's inequality, we obtain \begin{equation*} 2\la m_g\abs{u_x}\abs{v}\leq \frac{\la^\frac{1}{2}\abs{v}^2}{2}+2\la^{\frac{3}{2}}m_g^2\abs{u_x}^2\ \text{and}\quad 2\la^{-\frac{1}{2}}m_g\abs{(f_1)_x}\abs{v}\leq m_{g'}\abs{v}^2+m_g^2m_{g'}^{-1}\la^{-1}\abs{(f_1)_x}^2. \end{equation*} From the above inequalities, \eqref{Step1-Eq3} becomes \begin{equation}\label{Step1-Eq4} \abs{v(b_1)}^2+\abs{v(b_2)}^2\leq \left(\frac{\la^{\frac{1}{2}}}{2}+2m_{g'}\right)\int_{b_1}^{b_2}\abs{v}^2dx+2\la^{\frac{3}{2}}m_g^2\int_{b_1}^{b_2}\abs{u_x}^2dx+\frac{m_g^2}{m_{g'}}\la^{-1}\int_{b_1}^{b_2}\abs{(f_1)_x}^2dx. \end{equation} Inserting \eqref{eq-5.0} in \eqref{Step1-Eq4} and the fact that $f_1 \to 0$ in $H^1_0(0,L)$, we get \eqref{Step1-Eq1}.\\[0.1in] \textbf{Step 2}. The aim of this step is to prove that \begin{equation}\label{Step2-Eq1} \abs{(au_x+bv_x)(b_1)}^2+\abs{(au_x+bv_x)(b_2)}^2\leq \frac{\la^{\frac{3}{2}}}{2}\int_{b_1}^{b_2}\abs{v}^2dx+o(1). \end{equation} Multiplying \eqref{pol4} by $-2g\left(\overline{au_x+bv_x}\right)$, using integration by parts over $(b_1,b_2)$ and taking the real part, we get \begin{equation*} \begin{array}{l} \displaystyle \abs{\left(au_x+bv_x\right)(b_1)}^2+\abs{\left(au_x+bv_x\right)(b_2)}^2=\int_{b_1}^{b_2}g'\abs{au_x+bv_x}^2dx+\\[0.1in] \displaystyle \Re\left(2i{\lambda}\int_{b_1}^{b_2}gv(\overline{au_x+bv_x})dx\right)-\Re\left(2\la^{-\frac{1}{2}}\int_{b_1}^{b_2}gf_2(\overline{au_x+bv_x})dx\right), \end{array} \end{equation*} consequently, we get \begin{equation}\label{Step2-Eq2} \begin{array}{lll} \displaystyle \abs{\left(au_x+bv_x\right)(b_1)}^2+\abs{\left(au_x+bv_x\right)(b_2)}^2\leq m_{g'}\int_{b_1}^{b_2}\abs{au_x+bv_x}^2dx\\[0.1in] \displaystyle +2\la m_g\int_{b_1}^{b_2}\abs{v}\abs{au_x+bv_x}dx+2m_g\la^{-\frac{1}{2}}\int_{b_1}^{b_2}\abs{f_2}\abs{au_x+bv_x}dx. \end{array} \end{equation} By Young's inequality, \eqref{eq-4.9}, and \eqref{eq-5.0}, we have \begin{equation}\label{Step2-Eq3} 2\la m_g\int_{b_1}^{b_2}\abs{v}\abs{au_x+bv_x}dx\leq \frac{\la^{\frac{3}{2}}}{2}\int_{b_1}^{b_2}\abs{v}^2dx+2m_g^2\la^{\frac{1}{2}}\int_{b_1}^{b_2}\abs{au_x+bv_x}^2dx\leq \frac{\la^{\frac{3}{2}}}{2}\int_{b_1}^{b_2}\abs{v}^2dx+o(1).\\[0.1in] \end{equation}
Inserting \eqref{Step2-Eq3} in \eqref{Step2-Eq2}, then using \eqref{eq-4.9}, \eqref{eq-5.0} and the fact that $f_2 \to 0$ in $L^2(0,L)$, we get \eqref{Step2-Eq1}.\\[0.1in]
\textbf{Step 3.} The aim of this step is to prove the first estimation in \eqref{F-est1}. For this aim, multiplying \eqref{pol4} by $-i\la^{-1}\overline{v}$, integrating over $(b_1,b_2)$ and taking the real part, we get \begin{equation}\label{Step3-Eq1} \int_{b_1}^{b_2}\abs{v}^2dx=\Re\left(i\la^{-1}\int_{b_1}^{b_2}(au_x+bv_x)\overline{v}_xdx-\left[i\la^{-1}\left(au_x+bv_x\right)\overline{v}\right]_{b_1}^{b_2}+i\la^{-\frac{3}{2}}\int_{b_1}^{b_2}f_2\overline{v}dx\right). \end{equation} Using \eqref{eq-4.9}, \eqref{eq-5.0}, the fact that $v$ is uniformly bounded in $L^2(0,L)$ and $f_2\to 0$ in $L^2(0,L)$, and Young's inequality, we get \begin{equation}\label{Step3-Eq2} \int_{b_1}^{b_2}\abs{v}^2dx\leq \frac{\la^{-\frac{1}{2}}}{2}[\abs{v(b_1)}^2+\abs{v(b_2)}^2]+\frac{\la^{-\frac{3}{2}}}{2}[\abs{(au_x+bv_x)(b_1)}^2+\abs{(au_x+bv_x)(b_2)}^2]+\frac{o(1)}{\la^{\frac{3}{2}}}. \end{equation} Inserting \eqref{Step1-Eq1} and \eqref{Step2-Eq1} in \eqref{Step3-Eq2}, we get \begin{equation*} \int_{b_1}^{b_2}\abs{v}^2dx\leq \left(\frac{1}{2}+m_{g'}\la^{-\frac{1}{2}}\right)\int_{b_1}^{b_2}\abs{v}^2dx+\frac{o(1)}{\la^{\frac{3}{2}}}, \end{equation*} which implies that \begin{equation}\label{Step3-Eq3} \left(\frac{1}{2}-m_{g'}\la^{-\frac{1}{2}}\right)\int_{b_1}^{b_2}\abs{v}^2dx\leq \frac{o(1)}{\la^{\frac{3}{2}}}. \end{equation} Using the fact that ${\lambda}\to \infty$, we can take ${\lambda}> 4m_{g'}^2$. Then, we obtain the first estimation in \eqref{F-est1}. Similarly, we can obtain the second estimation in \eqref{F-est1}. The proof has been completed. \end{proof}
\begin{lemma}\label{Sec-est}
{\rm The solution $U\in D(\mathcal A)$ of system \eqref{pol3}-\eqref{pol6} satisfies the following estimations \begin{equation}\label{Sec-est1} \int_0^{c_1}\left(\abs{v}^2+a\abs{u_x}^2\right)dx=o(1)\quad \text{and}\quad \int_{c_2}^L\left(\abs{z}^2+\abs{y_x}^2\right)dx=o(1). \end{equation}} \end{lemma} \begin{proof} First, let $h\in C^1([0,c_1])$ such that $h(0)=h(c_1)=0$. Multiplying \eqref{pol4} by $2a^{-1}h\overline{(au_x+bv_x)}$, integrating over $(0,c_1)$, using integration by parts and taking the real part, then using \eqref{eq-4.9} and the fact that $u_x$ is uniformly bounded in $L^2(0,L)$ and $f_2 \to 0$ in $L^2(0,L)$, we get \begin{equation}\label{Sec-est2} \Re\left(2i\la a^{-1}\int_0^{c_1}vh\overline{(au_x+bv_x)}dx\right)+a^{-1}\int_0^{c_1}h'\abs{au_x+bv_x}^2dx=\frac{o(1)}{\la^{\frac{1}{2}}}. \end{equation} From \eqref{pol3}, we have \begin{equation}\label{Sec-est3} i{\lambda}\overline{u}_x=-\overline{v}_x-\la^{-\frac{1}{2}}(\overline{f_1})_x. \end{equation} Inserting \eqref{Sec-est3} in \eqref{Sec-est2}, using integration by parts, then using \eqref{eq-4.9}, \eqref{F-est1}, and the fact that $f_1 \to 0 $ in $H^1_0 (0,L)$ and $v$ is uniformly bounded in $L^2 (0,L)$, we get \begin{equation}\label{Sec-est4} \begin{array}{c} \displaystyle \int_0^{c_1}h'\abs{v}^2dx+a^{-1}\int_0^{c_1}h'\abs{au_x+bv_x}^2dx=\underbrace{2\Re\left(\la^{-\frac{1}{2}}\int_{0}^{c_1}vh(\overline{f_1})_xdx\right)}_{=o(\la^{-\frac{1}{2}})}\\[0.1in] \displaystyle +\underbrace{\Re\left(2i\la a^{-1}b_0\int_{b_1}^{b_2}hv\overline{v}_xdx\right)}_{=o(1)}+\frac{o(1)}{\la^{\frac{1}{2}}}. \end{array} \end{equation} Now, we fix the following cut-off functions $$ p_1(x):=\left\{\begin{array}{ccc} 1&\text{in}&(0,b_1),\\ 0&\text{in}&(b_2,c_1),\\ 0\leq p_1\leq 1&\text{in}&(b_1,b_2), \end{array} \right. \quad\text{and}\quad p_2(x):=\left\{\begin{array}{ccc} 1&\text{in}&(b_2,c_1),\\ 0&\text{in}&(0,b_1),\\ 0\leq p_2\leq 1&\text{in}&(b_1,b_2). \end{array} \right. $$ Finally, take $h(x)=xp_1(x)+(x-c_1)p_2(x)$ in \eqref{Sec-est4} and using \eqref{eq-4.9}, \eqref{eq-5.0}, \eqref{F-est1}, we get the first estimation in \eqref{Sec-est1}. By using the same argument, we can obtain the second estimation in \eqref{Sec-est1}. The proof is thus completed. \end{proof}
\begin{lemma}\label{Third-est} The solution $U\in D(\mathcal A)$ of system \eqref{pol3}-\eqref{pol6} satisfies the following estimations \begin{equation}\label{Third-est1} \abs{\la u(c_1)}=o(1),\ \abs{u_x(c_1)}=o(1),\ \abs{\la y(c_2)}=o(1)\quad \text{and}\quad \abs{y_x(c_2)}=o(1). \end{equation} \end{lemma} \begin{proof} First, from \eqref{pol3} and \eqref{pol4}, we deduce that \begin{equation}\label{Th-est1} \la^2u+au_{xx}=-\frac{f_2}{\la^{\frac{1}{2}}}-i\la^{\frac{1}{2}}f_1 \ \ \text{in} \ \ (b_2,c_1). \end{equation} Multiplying \eqref{Th-est1} by $2(x-b_2)\bar{u}_x$, integrating over $(b_2,c_1)$ and taking the real part, then using the fact that $u_x$ is uniformly bounded in $L^2(0,L)$ and $f_2 \to 0$ in $L^2(0,L)$, we get \begin{equation}\label{Th-est2} \int_{b_2}^{c_1}\la^2 (x-b_2)\left(\abs{u}^2\right)_xdx+a\int_{b_2}^{c_1}(x-b_2)\left(\abs{u_x}^2\right)_xdx=-\Re\left(2i\la^{\frac{1}{2}}\int_{b_2}^{c_1}(x-b_2)f_1\overline{u}_xdx\right)+\frac{o(1)}{\la^{\frac{1}{2}}}. \end{equation} Using integration by parts in \eqref{Th-est2}, then using \eqref{Sec-est1}, and the fact that $f_1\to 0$ in $H_0^1(0,L)$ and $\la u$ is uniformly bounded in $L^2(0,L)$, we get \begin{equation}\label{Th-est3} 0\leq (c_1-b_2)\left(\abs{\la u(c_1)}^2+a\abs{u_x(c_1)}^2\right)=\Re\left(2i\la^{\frac{1}{2}}(c_1-b_2)f_1(c_1)\overline{u}(c_1)\right)+o(1), \end{equation} consequently, by using Young's inequality, we get \begin{equation*} \begin{array}{lll}
\displaystyle\abs{\la u(c_1)}^2+\abs{u_x(c_1)}^2 &\leq& \displaystyle 2\la^{\frac{1}{2}}|f_1(c_1)||u(c_1)|+o(1)\\[0.1in] &\leq &\displaystyle\frac{1}{2}\abs{\la u(c_1)}^2+\frac{2}{\la}\abs{f_1(c_1)}^2 +o(1). \end{array} \end{equation*} Then, we get \begin{equation} \frac{1}{2}\abs{\la u(c_1)}^2+\abs{u_x(c_1)}^2\leq \frac{2}{\la}\abs{f_1(c_1)}^2+o(1). \end{equation} Finally, from the above estimation and the fact that $f_1 \to 0$ in $H^1_0 (0,L)$, we get the first two estimations in \eqref{Third-est1}. By using the same argument, we can obtain the last two estimations in \eqref{Third-est1}. The proof has been completed. \end{proof}
\begin{lemma}\label{Fourth-est} The solution $U\in D(\mathcal A)$ of system \eqref{pol3}-\eqref{pol6} satisfies the following estimation \begin{equation}\label{4-est1}
\int_{c_1}^{c_2}\left(|\la u|^2 +a |u_x|^2 +|\la y|^2 +|y_x|^2\right) dx =o(1). \end{equation} \end{lemma} \begin{proof} Inserting \eqref{pol3} and \eqref{pol5} in \eqref{pol4} and \eqref{pol6}, we get \begin{eqnarray} -\la^2u-au_{xx}+i\la c_0y&=&\frac{f_2}{\la^{\frac{1}{2}}}+i\la^{\frac{1}{2}}f_1+\frac{c_0f_3}{\la^{\frac{1}{2}}} \ \ \text{in} \ \ (c_1,c_2),\label{4-est2}\\ -\la^2y-y_{xx}-i\la c_0u&=&\frac{f_4}{\la^{\frac{1}{2}}}+i\la^{\frac{1}{2}}f_3-\frac{c_0f_1}{\la^{\frac{1}{2}}} \ \ \ \text{in} \ \ (c_1,c_2)\label{4-est3}. \end{eqnarray}
Multiplying \eqref{4-est2} by $2(x-c_2)\overline{u_x}$ and \eqref{4-est3} by $2(x-c_1)\overline{y_x}$, integrating over $(c_1,c_2)$ and taking the real part, then using the fact that $\|F\|_\mathcal H =o(1)$ and $\|U\|_\mathcal H =1$, we obtain \begin{equation}\label{4-est4} \begin{array}{l} \displaystyle -\la^2\int_{c_1}^{c_2}(x-c_2)\left(\abs{u}^2\right)_xdx-a\int_{c_1}^{c_2}(x-c_2)\left(\abs{u_x}^2\right)_xdx+\Re\left(2i\la c_0\int_{c_1}^{c_2}(x-c_2)y\overline{u_x}dx\right)=
\\ \displaystyle \Re\left(2i\la^{\frac{1}{2}}\int_{c_1}^{c_2}(x-c_2)f_1\overline{u_x}dx\right)+\frac{o(1)}{\la^{\frac{1}{2}}} \end{array} \end{equation} and \begin{equation}\label{4-est5} \begin{array}{l} \displaystyle -\la^2\int_{c_1}^{c_2}(x-c_1)\left(\abs{y}^2\right)_xdx-\int_{c_1}^{c_2}(x-c_1)\left(\abs{y_x}^2\right)_xdx-\Re\left(2i\la c_0\int_{c_1}^{c_2}(x-c_1)u\overline{y_x}dx\right)=
\\ \displaystyle \Re\left(2i\la^{\frac{1}{2}}\int_{c_1}^{c_2}(x-c_1)f_3\overline{y_x}dx\right)+\frac{o(1)}{\la^{\frac{1}{2}}}. \end{array} \end{equation}
Using integration by parts, \eqref{Third-est1}, and the fact that $f_1, f_3 \to 0$ in $H^1_0(0,L)$, $\|u\|_{L^2(0,L)}=O(\la^{-1})$, $\|y\|_{L^2(0,L)}=O(\la^{-1})$, we deduce that \begin{equation}\label{4-est6} \Re\left(i\la^{\frac{1}{2}}\int_{c_1}^{c_2}(x-c_2)f_1\overline{u_x}dx\right)=\frac{o(1)}{\la^{\frac{1}{2}}}\quad \text{and}\quad \Re\left(i\la^{\frac{1}{2}}\int_{c_1}^{c_2}(x-c_1)f_3\overline{y_x}dx\right)=\frac{o(1)}{\la^{\frac{1}{2}}}. \end{equation} Inserting \eqref{4-est6} in \eqref{4-est4} and \eqref{4-est5}, then using integration by parts and \eqref{Third-est1}, we get \begin{eqnarray} \int_{c_1}^{c_2}\left(\abs{\la u}^2+a\abs{u_x}^2\right)dx+\Re\left(i\la c_0\int_{c_1}^{c_2}(x-c_2)y\overline{u_x}dx\right)&=&o(1),\label{4-est7}\\ \int_{c_1}^{c_2}\left(\abs{\la y}^2+\abs{y_x}^2\right)dx-\Re\left(i\la c_0\int_{c_1}^{c_2}(x-c_1)u\overline{y_x}dx\right)&=&o(1).\label{4-est8} \end{eqnarray} Adding \eqref{4-est7} and \eqref{4-est8}, we get $$ \begin{array}{lll} \displaystyle \int_{c_1}^{c_2}\left(\abs{\la u}^2+a\abs{u_x}^2+\abs{\la y}^2+\abs{y_x}^2\right)dx&=&\displaystyle \Re\left(2i\la c_0\int_{c_1}^{c_2}(x-c_1)u\overline{y_x}dx\right)-\Re\left(2i\la c_0\int_{c_1}^{c_2}(x-c_2)y\overline{u_x}dx\right)+o(1)\\[0.in] &\leq &\displaystyle 2{\lambda}\abs{c_0}(c_2-c_1)\int_{c_1}^{c_2}\abs{u}\abs{y_x}dx+2\la\frac{\abs{c_0}}{a^{\frac{1}{4}}}(c_2-c_1)a^{\frac{1}{4}}\int_{c_1}^{c_2}\abs{y}\abs{u_x}dx+o(1). \end{array} $$ Applying Young's inequalities, we get \begin{equation}\label{4-est9} \left(1-\abs{c_0}(c_2-c_1)\right)\int_{c_1}^{c_2}(\abs{\la u}^2+\abs{y_x}^2)dx+\left(1-\frac{1}{\sqrt{a}}\abs{c_0}(c_2-c_1)\right)\int_{c_1}^{c_2}(a\abs{u_x}^2+\abs{\la y}^2)dx\leq o(1). \end{equation} Finally, using \eqref{SSC1}, we get the desired result. The proof has been completed. \end{proof}
\begin{lemma}\label{5-est} The solution $U\in D(\mathcal A)$ of system \eqref{pol3}-\eqref{pol6} satisfies the following estimations \begin{equation}\label{5-est1} \int_0^{c_1}\left(\abs{z}^2+\abs{y_x}^2\right)dx=o(1)\quad \text{and}\quad \int_{c_2}^L\left(\abs{v}^2+a\abs{u_x}^2\right)dx=o(1). \end{equation} \end{lemma} \begin{proof} Using the same argument of Lemma \ref{Sec-est}, we obtain \eqref{5-est1}. \end{proof}
\noindent \textbf{Proof of Theorem \ref{1pol}.} Using \eqref{eq-5.0}, Lemmas \ref{F-est}, \ref{Sec-est}, \ref{Fourth-est}, \ref{5-est}, we get $\|U\|_{\mathcal{H}}=o(1)$, which contradicts \eqref{pol1}. Consequently, condition ${\rm (H2)}$ holds. This implies the energy decay estimation \eqref{Energypol1}.
\subsubsection{Proof of Theorem \ref{2pol}} In this subsection, we will prove Theorem \ref{2pol} by checking the condition \eqref{H2}, that is by finding a contradiction with \eqref{pol1} by showing $\|U\|_{\mathcal{H}}=o(1)$. For clarity, we divide the proof into several Lemmas. By taking the inner product of \eqref{pol2-w} with $U$ in $\mathcal{H}$, we remark that \begin{equation*} \int_0^L b\abs{v_x}^2dx=-\Re\left(\left<\mathcal{A}U,U\right>_{\mathcal{H}}\right)=\la^{-2}\Re\left(\left<F,U\right>_{\mathcal{H}}\right)=o(\la^{-2}). \end{equation*} Then, \begin{equation}\label{C2-dissipation} \int_{b_1}^{b_2}\abs{v_x}^2dx=o(\la^{-2}). \end{equation} Using \eqref{pol3} and \eqref{C2-dissipation}, and the fact that $f_1 \to 0$ in $H^1_0(0,L)$, we get \begin{equation}\label{C2-dissipation1} \int_{b_1}^{b_2}\abs{u_x}^2dx=o(\la^{-4}). \end{equation}
\begin{lemma}\label{C2-Fest} Let $0<\varepsilon<\frac{b_2-b_1}{2}$. The solution $U\in D(\mathcal{A})$ of the system \eqref{pol3}-\eqref{pol6} satisfies the following estimation \begin{equation}\label{C2-Fest1} \int_{b_1+\varepsilon}^{b_2-\varepsilon}\abs{v}^2dx=o(\la^{-2}). \end{equation} \end{lemma} \begin{proof} First, we fix a cut-off function $\theta_1\in C^{1}([0,c_1])$ such that \begin{equation}\label{C2-theta1} \theta_1(x)=\left\{\begin{array}{clc} 1&\text{if}&x\in (b_1+\varepsilon,b_2-\varepsilon),\\ 0&\text{if}&x\in (0,b_1)\cup (b_2,L),\\ 0\leq \theta_1\leq 1&&\text{elsewhere}. \end{array} \right. \end{equation} Multiplying \eqref{pol4} by $\la^{-1}\theta_1 \overline{v}$, integrating over $(0,c_1)$, using integration by parts, and the fact that $f_2 \to 0$ in $L^2(0,L)$ and $v$ is uniformly bounded in $L^2(0,L)$, we get \begin{equation}\label{C2-Fest2} i\int_0^{c_1}\theta_1\abs{v}^2dx+\frac{1}{\la}\int_0^{c_1}(u_x+bv_x)(\theta_1'\overline{v}+\theta_1 \overline{v_x})dx=o(\la^{-3}). \end{equation}
Using \eqref{C2-dissipation} and the fact that $\|U\|_{\mathcal{H}}=1$, we get \begin{equation*} \frac{1}{\la}\int_0^{c_1}(u_x+bv_x)(\theta_1'\overline{v}+\theta_1 \overline{v_x})dx=o(\la^{-2}). \end{equation*} Inserting the above estimation in \eqref{C2-Fest2}, we get the desired result \eqref{C2-Fest1}. The proof has been completed. \end{proof}
\begin{lemma}\label{C2-Secest} The solution $U\in D(\mathcal{A})$ of the system \eqref{pol3}-\eqref{pol6} satisfies the following estimation \begin{equation}\label{C2-Secest1} \int_{0}^{c_1}(\abs{v}^2+\abs{u_x}^2)dx=o(1). \end{equation} \end{lemma} \begin{proof} Let $h\in C^1([0,c_1])$ such that $h(0)=h(c_1)=0$. Multiplying \eqref{pol4} by $2h\overline{(u_x+bv_x)}$, integrating over $(0,c_1)$ and taking the real part, then using integration by parts and the fact that $f_2 \to 0$ in $L^2(0,L)$, we get \begin{equation}\label{C2-Secest2}
\Re\left(2\int_0^{c_1}i\la vh\overline{(u_x+bv_x)}dx\right)+\int_0^{c_1}h'\abs{u_x+bv_x}^2dx=o(\la^{-2}). \end{equation} Using \eqref{C2-dissipation} and the fact that $v$ is uniformly bounded in $L^2(0,L)$, we get \begin{equation}\label{C2-Secest3} \Re\left(2\int_0^{c_1}i\la vh\overline{(u_x+bv_x)}dx\right)=\Re\left(2\int_0^{c_1}i\la vh\overline{u_x}dx\right)+o(1). \end{equation} From \eqref{pol3}, we have \begin{equation}\label{C2-Secest4} i\la\overline{u}_x=-\overline{v}_x-\frac{\left(\overline{f_1}\right)_x}{\la^2}. \end{equation} Inserting \eqref{C2-Secest4} in \eqref{C2-Secest3}, using integration by parts and the fact that $f_1 \to 0$ in $H^1_0(0,L)$, we get \begin{equation}\label{C2-Secest5} \Re\left(2\int_0^{c_1}i\la vh\overline{(u_x+bv_x)}dx\right)=\int_0^{c_1}h'\abs{v}^2dx+o(1). \end{equation} Inserting \eqref{C2-Secest5} in \eqref{C2-Secest2}, we obtain \begin{equation}\label{C2-Secest6} \int_0^{c_1}h'\left(\abs{v}^2+\abs{u_x+bv_x}^2\right)dx=o(1). \end{equation} Now, we fix the following cut-off functions $$ \theta_2(x):=\left\{\begin{array}{ccc} 1&\text{in}&(0,b_1+\varepsilon),\\ 0&\text{in}&(b_2-\varepsilon,c_1),\\ 0\leq \theta_2\leq 1&\text{in}&(b_1+\varepsilon,b_2-\varepsilon), \end{array} \right. \quad\text{and}\quad \theta_3(x):=\left\{\begin{array}{ccc} 1&\text{in}&(b_2-\varepsilon,c_1),\\ 0&\text{in}&(0,b_1+\varepsilon),\\ 0\leq \theta_3\leq 1&\text{in}&(b_1+\varepsilon,b_2-\varepsilon). \end{array} \right. $$ Taking $h(x)=x\theta_2(x)+(x-c_1)\theta_3(x)$ in \eqref{C2-Secest6}, then using \eqref{C2-dissipation} and \eqref{C2-dissipation1}, we get \begin{equation}\label{C2-Secest7}
\int_{(0,b_1+\varepsilon)\cup (b_2-\varepsilon,c_1)}\abs{v}^2dx+\int_{(0,b_1)\cup (b_2,c_1)}|u_x|^2dx=o(1). \end{equation} Finally, from \eqref{C2-dissipation1}, \eqref{C2-Fest1} and \eqref{C2-Secest7}, we get the desired result \eqref{C2-Secest1}. The proof has been completed. \end{proof}
\noindent
\begin{lemma}\label{C2-Fourthest} The solution $U\in D(\mathcal A)$ of system \eqref{pol3}-\eqref{pol6} satisfies the following estimations \begin{equation}\label{C2-Thirest1}
\abs{\la u(c_1)}=o(1)\quad \text{and}\quad \abs{u_x(c_1)}=o(1), \end{equation} \begin{equation}\label{C2-Fourthest1} \int_{c_1}^{c_2}\abs{\la u}^2dx=\int_{c_1}^{c_2}\abs{\la y}^2dx+o(1). \end{equation} \end{lemma} \begin{proof}
First, using the same argument as in Lemma \ref{Third-est}, we obtain \eqref{C2-Thirest1}.
Inserting \eqref{pol3}, \eqref{pol5} in \eqref{pol4} and \eqref{pol6}, we get
\begin{eqnarray}
\la^2u+\left(u_x+bv_x\right)_x-i\la cy&=&-\frac{f_2}{\la^{2}}-i\frac{f_1}{\la}-c\frac{f_3}{\la^2},\label{Combination1}\\
\la^2y+y_{xx}+i\la cu&=&-\frac{f_4}{\la^2}-\frac{if_3}{\la}+c\frac{f_1}{\la^2}.\label{Combination2}
\end{eqnarray}
Multiplying \eqref{Combination1} and \eqref{Combination2} by ${\lambda}\overline{y}$ and ${\lambda}\overline{u}$ respectively, integrating over $(0,L)$, then using integration by parts, \eqref{C2-dissipation}, and the fact that $\|U\|_\mathcal H=1$ and $\|F\|_\mathcal H =o(1)$, we get \begin{eqnarray} \la^{3}\int_0^Lu\bar{y}dx-\la\int_0^Lu_x\bar{y}_xdx-i c_0\int_{c_1}^{c_2}\abs{\la y}^2dx=o(1),\label{C2-Fourthest2}\\ \la^{3}\int_0^Ly\bar{u}dx-{\lambda}\int_0^Ly_x\bar{u}_xdx+i c_0\int_{c_1}^{c_2}\abs{\la u}^2dx=\frac{o(1)}{\la}\label{C2-Fourthest3}. \end{eqnarray} Adding \eqref{C2-Fourthest2} and \eqref{C2-Fourthest3} and taking the imaginary parts, we get the desired result \eqref{C2-Fourthest1}. The proof is thus completed. \end{proof}
\begin{lemma}\label{C2-Fifthest} The solution $U\in D(\mathcal A)$ of system \eqref{pol3}-\eqref{pol6} satisfies the following asymptotic behavior \begin{equation}\label{C2-Fifthest1} \int_{c_1}^{c_2}\abs{\la u}^2dx=o(1),\quad \int_{c_1}^{c_2}\abs{\la y}^2dx=o(1)\quad \text{and}\quad \int_{c_1}^{c_2}\abs{u_x}^2dx=o(1). \end{equation} \end{lemma} \begin{proof}
First, multiplying \eqref{Combination1} by $2(x-c_2)\bar{u}_x$, integrating over $(c_1,c_2)$ and taking the real part, using the fact that $\|U\|_\mathcal H=1$ and $\|F\|_\mathcal H =o(1)$, we get \begin{equation}\label{C2-Fifthest2} \la^2\int_{c_1}^{c_2}(x-c_2)\left(\abs{u}^2\right)_xdx+\int_{c_1}^{c_2}(x-c_2)\left(\abs{u_x}^2\right)_xdx=\Re\left(2i\la c_0\int_{c_1}^{c_2}(x-c_2)y\bar{u}_xdx\right)+o(1). \end{equation} Using integration by parts in \eqref{C2-Fifthest2} with the help of \eqref{C2-Thirest1}, we get \begin{equation}\label{C2-Fifthest3} \int_{c_1}^{c_2}\abs{\la u}^2dx+\int_{c_1}^{c_2}\abs{u_x}^2dx\leq 2\la\abs{c_0}(c_2-c_1)\int_{c_1}^{c_2}\abs{y}\abs{u_x}dx+o(1). \end{equation} Applying Young's inequality in \eqref{C2-Fifthest3}, we get \begin{equation}\label{C2-Fifthest4} \int_{c_1}^{c_2}\abs{\la u}^2dx+\int_{c_1}^{c_2}\abs{u_x}^2dx\leq \abs{c_0}(c_2-c_1)\int_{c_1}^{c_2}\abs{u_x}^2dx+\abs{c_0}(c_2-c_1)\int_{c_1}^{c_2}\abs{\la y}^2dx+o(1). \end{equation} Using \eqref{C2-Fourthest1} in \eqref{C2-Fifthest4}, we get \begin{equation}\label{C2-Fifthest5}
\left(1-\abs{c_0}(c_2-c_1)\right)\int_{c_1}^{c_2}\left(\abs{\la u}^2+|u_x|^2\right)dx\leq o(1). \end{equation} Finally, from the above estimation, \eqref{SSC2} and \eqref{C2-Fourthest1}, we get the desired result \eqref{C2-Fifthest1}. The proof has been completed. \end{proof}
\begin{lemma}\label{C2-sixthest} Let $0<\delta<\frac{c_2-c_1}{2}$. The solution $U\in D(\mathcal A)$ of system \eqref{pol3}-\eqref{pol6} satisfies the following estimations \begin{equation}\label{C2-sixthest1} \int_{c_1+\delta}^{c_2-\delta}\abs{y_x}^2dx=o(1). \end{equation} \end{lemma} \begin{proof} First, we fix a cut-off function $\theta_4\in C^1([0,L])$ such that \begin{equation}\label{C2-theta4} \theta_4(x):=\left\{\begin{array}{clc} 1&\text{if}&x\in (c_1+\delta,c_2-\delta),\\ 0&\text{if}&x\in (0,c_1)\cup (c_2,L),\\ 0\leq \theta_4\leq 1&&\text{elsewhere}. \end{array} \right. \end{equation} Multiplying \eqref{Combination2} by $\theta_4\bar{y}$, integrating over $(0,L)$ and using integration by parts, we get \begin{equation}\label{C2-sixthest2*} \int_{c_1}^{c_2}\theta_4\abs{\la y}^2dx-\int_{0}^{L}\theta_4\abs{y_x}^2dx-\int_0^L\theta_4'y_x\bar{y}dx+i\la c_0\int_{c_1}^{c_2}\theta_4u\bar{y}dx=\frac{o(1)}{\la^2}. \end{equation} Using \eqref{C2-Fifthest1} and the definition of $\theta_4$, we get \begin{equation}\label{C2-sixthest3} \int_{c_1}^{c_2}\theta_4\abs{\la y}^2dx=o(1),\quad \int_0^L\theta_4'y_x\bar{y}dx=o(\la^{-1}),\quad i\la c_0\int_{c_1}^{c_2}\theta_4u\bar{y}dx=o(\la^{-1}). \end{equation} Finally, Inserting \eqref{C2-sixthest3} in \eqref{C2-sixthest2*}, we get the desired result \eqref{C2-sixthest1}. The proof has been completed. \end{proof}
\begin{lemma}\label{C2-seventhest} The solution $U\in D(\mathcal A)$ of system \eqref{pol3}-\eqref{pol6} satisfies the following estimations \begin{equation}\label{C2-seventhest1} \int_0^{c_1+\varepsilon}\abs{\la y}^2dx,\int_{0}^{c_1+\varepsilon}\abs{y_x}^2dx,\int_{c_2-\varepsilon}^L\abs{\la y}^2dx,\int_{c_2-\varepsilon}^L\abs{y_x}^2dx,\int_{c_2}^{L}\abs{\la u}^2dx,\int_{c_2}^{L}\abs{u_x}^2dx=o(1). \end{equation} \end{lemma} \begin{proof}
Let $q\in C^1([0,L])$ such that $q(0)=q(L)=0$. Multiplying \eqref{Combination2} by $2q\bar{y}_x$, integrating over $(0,L)$, using \eqref{C2-Fifthest1}, and the fact that $y_x$ is uniformly bounded in $L^2(0,L)$ and $\|F\|_{\mathcal{H}}=o(1)$, we get \begin{equation}\label{C2-sixthest2} \int_0^{L}q'\left(\abs{\la y}^2+\abs{y_x}^2\right)dx=o(1). \end{equation} Now, take $q(x)=x\theta_5(x)+(x-L)\theta_6(x)$ in \eqref{C2-sixthest2}, such that $$ \theta_5(x):=\left\{\begin{array}{ccc} 1&\text{in}&(0,c_1+\varepsilon),\\ 0&\text{in}&(c_2-\varepsilon,L),\\ 0\leq \theta_5\leq 1&\text{in}&(c_1+\varepsilon,c_2-\varepsilon), \end{array} \right. \quad\text{and}\quad \theta_6(x):=\left\{\begin{array}{ccc} 1&\text{in}&(c_2-\varepsilon,L),\\ 0&\text{in}&(0,c_1+\varepsilon),\\ 0\leq \theta_6\leq 1&\text{in}&(c_1+\varepsilon,c_2-\varepsilon). \end{array} \right. $$ Then, we obtain the first four estimations in \eqref{C2-seventhest1}. Now, multiplying \eqref{Combination1} by $2q\left(\overline{u_x+bv_x}\right)$, integrating over $(0,L)$ and using the fact that $u_x$ is uniformly bounded in $L^2(0,L)$, we get \begin{equation} \int_0^Lq'\left(\abs{\la u}^2+\abs{u_x}^2\right)dx=o(1). \end{equation} By taking $q(x)=(x-L)\theta_7(x)$, such that $$ \theta_7(x)=\left\{\begin{array}{ccc} 1&\text{in}&(c_2,L),\\ 0&\text{in}&(0,c_1),\\ 0\leq \theta_7\leq 1&\text{in}&(c_1,c_2), \end{array} \right. $$
we get the last two estimations in \eqref{C2-seventhest1}. The proof has been completed. \end{proof}
\noindent \textbf{Proof of Theorem \ref{2pol}.} Using \eqref{C2-dissipation1}, Lemmas \ref{C2-Secest}, \ref{C2-Fifthest}, \ref{C2-sixthest} and \ref{C2-seventhest}, we get $\|U\|_{\mathcal{H}}=o(1)$, which contradicts \eqref{pol1}. Consequently, condition ${\rm (H2)}$ holds. This implies the energy decay estimation \eqref{Energypol2}.
\section{Indirect Stability in the multi-dimensional case}\label{secnd} \noindent In this section, we study the well-posedness and the strong stability of system \eqref{ND-1}-\eqref{ND-5}. \subsection{Well-posedness}\label{wpnd} In this subsection, we will establish the well-posedness of \eqref{ND-1}-\eqref{ND-5} by using a semigroup approach. The energy of system \eqref{ND-1}-\eqref{ND-5} is given by \begin{equation}\label{ND-energy} E(t)=\frac{1}{2}\int_{\Omega}\left(\abs{u_t}^2+\abs{\nabla u}^2+\abs{y_t}^2+\abs{\nabla y}^2\right)dx. \end{equation} Let $(u,u_t,y,y_t)$ be a regular solution of \eqref{ND-1}-\eqref{ND-5}. Multiplying \eqref{ND-1} and \eqref{ND-2} by $\overline{u_t}$ and $\overline{y_t}$ respectively, then using the boundary conditions \eqref{ND-3}, we get \begin{equation}\label{ND-denergy}
E'(t)=-\int_{\Omega}b|\nabla u_{t}|^2dx, \end{equation} using the definition of $b$, we get $E'(t)\leq 0$. Thus, system \eqref{ND-1}-\eqref{ND-5} is dissipative in the sense that its energy is non-increasing with respect to time $t$. Let us define the energy space $\mathcal{H}$ by $$ \mathcal{H}=\left(H_0^1(\Omega)\times L^2(\Omega)\right)^2. $$ The energy space $\mathcal{H}$ is equipped with the inner product defined by $$ \left<U,U_1\right>_{\mathcal{H}}=\int_{\Omega}v\overline{v_1}dx+\int_{\Omega}\nabla{u}\nabla{\overline{u_1}}dx+\int_{\Omega}z\overline{z_1}dx+\int_{\Omega}\nabla{y}\cdot \nabla{\overline{y_1}}dx, $$ for all $U=(u,v,y,z)^\top$ and $U_1=(u_1,v_1,y_1,z_1)^\top$ in $\mathcal{H}$. We define the unbounded linear operator $\mathcal{A}_d:D\left(\mathcal{A}_d\right)\subset \mathcal{H}\longrightarrow \mathcal{H}$ by $$ D(\mathcal{A}_d)=\left\{ U=(u,v,y,z)^\top\in \mathcal{H};\ v,z\in H_0^1(\Omega),\ \divv(\nabla u+b\nabla v)\in L^2(\Omega),\ \Delta y \in L^2 (\Omega) \right\} $$ and $$ \mathcal{A}_d U=\begin{pmatrix} v\\[0.1in] \divv(\nabla u+b\nabla v)-cz\\[0.1in] z\\ \Delta y+cv \end{pmatrix}, \ \forall U=(u,v,y,z)^\top \in D(\mathcal{A}_d). $$ If $U=(u,u_t,y,y_t)$ is a regular solution of system \eqref{ND-1}-\eqref{ND-5}, then we rewrite this system as the following first order evolution equation \begin{equation}\label{ND-evolution} U_t=\mathcal{A}_dU,\quad U(0)=U_0, \end{equation} where $U_0=(u_0,u_1,y_0,y_1)^{\top}\in \mathcal H$. For all $U=(u,v,y,z)^{\top}\in D(\mathcal{A}_d )$, we have $$ \Re\left<\mathcal{A}_d U,U\right>_{\mathcal{H}}=-\int_{\Omega}b\abs{\nabla v}^2dx\leq 0, $$ which implies that $\mathcal{A}_d$ is dissipative. Now, similar to Proposition 2.1 in \cite{akil2021ndimensional}, we can prove that there exists a unique solution $U=(u,v,y,z)^{\top}\in D(\mathcal{A}_d)$ of $$ -\mathcal A_d U=F,\quad \forall F=(f^1,f^2,f^3,f^4)^\top\in \mathcal{H}. $$ Then $0\in \rho(\mathcal{A}_d)$ and $\mathcal{A}_d$ is an isomorphism. Since $\rho(\mathcal{A}_d)$ is open in $\mathbb{C}$ (see Theorem 6.7 (Chapter III) in \cite{Kato01}), we easily get $R(\lambda I -\mathcal{A}_d) = {\mathcal{H}}$ for a sufficiently small $\lambda>0$. This, together with the dissipativeness of $\mathcal{A}_d$, implies that $D\left(\mathcal{A}_d\right)$ is dense in ${\mathcal{H}}$ and that $\mathcal{A}_d$ is m-dissipative in ${\mathcal{H}}$ (see Theorems 4.5, 4.6 in \cite{Pazy01}). According to the Lumer-Phillips theorem (see \cite{Pazy01}), the operator $\mathcal A_d$ generates a $C_{0}$-semigroup of contractions $e^{t\mathcal A_d}$ in $\mathcal H$ which gives the well-posedness of \eqref{ND-evolution}. Then, we have the following result: \begin{theoreme}{\rm For all $U_0 \in \mathcal H$, system \eqref{ND-evolution} admits a unique weak solution $$U(t)=e^{t\mathcal A_d}U_0\in C^0 (\mathbb R_+ ,\mathcal H).
$$ Moreover, if $U_0 \in D(\mathcal A_d)$, then the system \eqref{ND-evolution} admits a unique strong solution $$U(t)=e^{t\mathcal A_d}U_0\in C^0 (\mathbb R_+ ,D(\mathcal A_d))\cap C^1 (\mathbb R_+ ,\mathcal H).$$} \end{theoreme}
\subsection{Strong Stability }\label{Strong Stability-ND} In this subsection, we will prove the strong stability of system \eqref{ND-1}-\eqref{ND-5}. First, we fix the following notations $$ \widetilde{\Omega}=\Omega-\overline{\omega_c},\quad \Gamma_1=\partial \omega_c-\partial \Omega\quad \text{and}\quad \Gamma_0=\partial\omega_c-\Gamma_1. $$ \begin{figure}
\caption{Geometric description of the sets $\omega_b$ and $\omega_c$}
\label{p7-Fig4}
\end{figure} \begin{comment} \begin{definition}\label{Gammacondition} Saying that $\omega$ satisfies the \textbf{$\Gamma-$condition} if it contains a neighborhood in $\Omega$ of the set $$ \left\{x\in \Gamma;\ (x-x_0)\cdot \nu(x)>0\right\}, $$ for some $x_0\in \mathbb R^n$, where $\nu$ is the outward unit normal vector to $\Gamma=\partial \Omega$. \end{definition} \end{comment} \noindent Let $x_0\in \mathbb{R}^{d}$ and $m(x)=x-x_0$ and suppose that (see Figure \ref{p7-Fig4}) \begin{equation}\tag{${\rm GC}$}\label{Geometric Condition} m\cdot \nu\leq 0\quad \text{on}\quad \Gamma_0=\left(\partial\omega_c\right)-\Gamma_1. \end{equation} The main result of this section is the following theorem \begin{theoreme}\label{Strong-Stability-ND} Assume that \eqref{Geometric Condition} holds and \begin{equation}\label{GC-Condition}\tag{${\rm SSC}$}
\|c\|_{\infty}\leq \min\left\{\frac{1}{\|m\|_{\infty}+\frac{d-1}{2}},\frac{1}{\|m\|_{\infty}+\frac{(d-1)C_{p,\omega_c}}{2}}\right\}, \end{equation} where $C_{p,\omega_c}$ is the Poincar\'e constant on $\omega_c$. Then, the $C_0-$semigroup of contractions $\left(e^{t\mathcal{A}_d}\right)$ is strongly stable in $\mathcal{H}$; i.e., for all $U_0\in \mathcal{H}$, the solution of \eqref{ND-evolution} satisfies $$
\lim_{t\to +\infty}\|e^{t\mathcal{A}_d}U_0\|_{\mathcal{H}}=0. $$ \end{theoreme} \begin{proof} First, let us prove that \begin{equation}\label{ker}\ker (i\la I-\mathcal A_d)=\{0\},\ \forall {\lambda}\in \mathbb R.\end{equation} Since $0\in \rho(\mathcal{A}_d)$, then we still need to show the result for $\lambda\in \mathbb{R}^{\ast}$. Suppose that there exists a real number $\lambda\neq 0$ and $U=(u,v,y,z)^\top\in D(\mathcal{A}_d)$, such that $$ \mathcal{A}_dU=i\la U. $$ Equivalently, we have \begin{eqnarray} v&=&i\la u,\label{ND-ST1}\\ \divv(\nabla u+b\nabla v)-cz&=&i\la v,\label{ND-ST2}\\ z&=&i\la y, \label{ND-ST3}\\ \Delta y+cv&=&i\la z.\label{ND-ST4} \end{eqnarray} Next, a straightforward computation gives $$ 0=\Re\left<i\la U,U\right>_{\mathcal{H}}=\Re\left<\mathcal{A}_dU,U\right>_{\mathcal{H}}=-\int_{\Omega}b\abs{\nabla v}^2dx, $$ consequently, we deduce that \begin{equation}\label{ND-ST5} b\nabla v=0\ \ \text{in}\ \ \Omega \quad \text{and}\quad \nabla v= \nabla u=0 \quad \text{in}\quad \omega_b. \end{equation}
Inserting \eqref{ND-ST1} in \eqref{ND-ST2}, then using the definition of $c$, we get \begin{equation}\label{ND-ST6} \Delta u=-\la^2 u\quad \text{in}\quad \omega_b. \end{equation} From \eqref{ND-ST5} we get $\Delta u=0$ in $\omega_b$ and from \eqref{ND-ST6} and the fact that $\la\neq 0$, we get \begin{equation}\label{ND-ST7} u=0\quad \text{in}\quad \omega_b. \end{equation} Now, inserting \eqref{ND-ST1} in \eqref{ND-ST2}, then using \eqref{ND-ST5}, \eqref{ND-ST7} and the definition of $c$, we get \begin{equation}\label{ND-ST8} \begin{array}{rll} \la^2u+\Delta u&=&0\ \ \text{in}\ \ \widetilde{\Omega},\\ u&=&0\ \ \text{in}\ \ \omega_b\subset \widetilde{\Omega}. \end{array} \end{equation} Using Holmgren uniqueness theorem, we get \begin{equation}\label{ND-ST9} u=0\quad \text{in}\quad \widetilde{\Omega}. \end{equation} It follows that \begin{equation}\label{ND-ST10} u=\frac{\partial u}{\partial\nu}=0\quad \text{on}\quad \Gamma_1. \end{equation} Now, our aim is to show that $u=y=0$ in $\omega_c$. For this aim, inserting \eqref{ND-ST1} and \eqref{ND-ST3} in \eqref{ND-ST2} and \eqref{ND-ST4}, then using \eqref{ND-ST5}, we get the following system \begin{eqnarray} \la^2u+\Delta u-i\la cy&=&0\quad \text{in}\ \Omega,\label{ND-ST11}\\ \la^2y+\Delta y+i\la cu&=&0\quad \text{in}\ \Omega,\label{ND-ST12}\\ u&=&0\quad \text{on}\ \partial\omega_c,\label{ND-ST13}\\ y&=&0\quad \text{on}\ \Gamma_0,\label{ND-ST14}\\ \frac{\partial u}{\partial \nu}&=&0\quad \text{on}\ \Gamma_1.\label{ND-ST15} \end{eqnarray} Let us prove \eqref{ker} by the following three steps:\\\linebreak \textbf{Step 1.} The aim of this step is to show that \begin{equation}\label{ND-Step1-1} \int_{\Omega}c\abs{u}^2dx=\int_{\Omega}c\abs{y}^2dx. \end{equation} For this aim, multiplying \eqref{ND-ST11} and \eqref{ND-ST12} by $\bar{y}$ and $\bar{u}$ respectively, integrating over $\Omega$ and using Green's formula, we get \begin{eqnarray} \la^2\int_{\Omega}u\bar{y}dx-\int_{\Omega}\nabla u\cdot \nabla{\bar{y}}dx-i{\lambda}\int_{\Omega}c\abs{y}^2dx&=&0,\label{ND-Step1-2}\\ \la^2\int_{\Omega}y\bar{u}dx-\int_{\Omega}\nabla y\cdot \nabla{\bar{u}}dx+i{\lambda}\int_{\Omega}c\abs{u}^2dx&=&0.\label{ND-Step1-3} \end{eqnarray} Adding \eqref{ND-Step1-2} and \eqref{ND-Step1-3}, then taking the imaginary part, we get \eqref{ND-Step1-1}.\\
\noindent \textbf{Step 2.} The aim of this step is to prove the following identity \begin{equation}\label{ND-Stpe2-1}
-d\int_{\omega_c}\abs{\la u}^2dx+(d-2)\int_{\omega_c}\abs{\nabla u}^2dx+\int_{\Gamma_0}(m\cdot \nu)\left|\frac{\partial u}{\partial\nu}\right|^2d\Gamma -2\Re\left(i{\lambda}\int_{\omega_c}cy\left(m\cdot \nabla{\bar{u}}\right)dx\right)=0. \end{equation} For this aim, multiplying \eqref{ND-ST11} by $2(m\cdot\nabla\bar{u})$, integrating over $\omega_c$ and taking the real part, we get \begin{equation}\label{ND-Stpe2-2} 2\Re\left(\la^2\int_{\omega_c}u(m\cdot \nabla\bar{u})dx\right)+2\Re\left(\int_{\omega_c}\Delta u(m\cdot \nabla\bar{u})dx\right)-2\Re\left(i\la\int_{\omega_c}cy(m\cdot\nabla\bar{u})dx\right)=0. \end{equation}
Now, using the fact that $u=0$ on $\partial\omega_c$, we get \begin{equation}\label{ND-Stpe2-3} \Re\left(2\la^2\int_{\omega_c}u(m\cdot\nabla\bar{u})dx\right)=-d\int_{\omega_c}\abs{\la u}^2dx. \end{equation}
Using Green's formula, we obtain \begin{equation}\label{ND-Stpe2-4} \begin{array}{ll} \displaystyle 2\Re\left(\int_{\omega_c}\Delta u(m\cdot \nabla\bar{u})dx\right)=\displaystyle -2\Re\left(\int_{\omega_c}\nabla u\cdot\nabla\left(m\cdot\nabla\bar{u}\right)dx\right)+2\Re\left(\int_{\Gamma_0}\frac{\partial u}{\partial\nu}\left(m\cdot\nabla\bar{u}\right)d\Gamma\right)\\[0.1in] \hspace{3.85cm}=\displaystyle (d-2)\int_{\omega_c}\abs{\nabla u}^2dx-\int_{\partial\omega_c}(m\cdot \nu)\abs{\nabla u}^2d\Gamma+2\Re\left(\int_{\Gamma_0}\frac{\partial u}{\partial\nu}\left(m\cdot\nabla\bar{u}\right)d\Gamma\right). \end{array} \end{equation} Using \eqref{ND-ST13} and \eqref{ND-ST15}, we get \begin{equation}\label{ND-Stpe2-5}
\int_{\partial\omega_c}(m\cdot \nu)\abs{\nabla u}^2d\Gamma=\int_{\Gamma_0}(m\cdot\nu)\left|\frac{\partial u}{\partial\nu}\right|^2d\Gamma\ \ \text{and}\ \ \Re\left(\int_{\Gamma_0}\frac{\partial u}{\partial\nu}\left(m\cdot\nabla\bar{u}\right)d\Gamma\right)=\int_{\Gamma_0}(m\cdot\nu)\left|\frac{\partial u}{\partial\nu}\right|^2d\Gamma. \end{equation} Inserting \eqref{ND-Stpe2-5} in \eqref{ND-Stpe2-4}, we get \begin{equation}\label{ND-Stpe2-6}
2\Re\left(\int_{\omega_c}\Delta u(m\cdot \nabla\bar{u})dx\right)=(d-2)\int_{\omega_c}\abs{\nabla u}^2dx+\int_{\Gamma_0}(m\cdot\nu)\left|\frac{\partial u}{\partial\nu}\right|^2d\Gamma. \end{equation} Inserting \eqref{ND-Stpe2-3} and \eqref{ND-Stpe2-6} in \eqref{ND-Stpe2-2}, we get \eqref{ND-Stpe2-1}. \\\linebreak
\noindent \textbf{Step 3}. In this step, we prove \eqref{ker}. Multiplying \eqref{ND-ST11} by $(d-1)\overline{u}$, integrating over $\omega_c$ and using \eqref{ND-ST13}, we get \begin{equation}\label{ND-Stpe2-7} (d-1)\int_{\omega_c}\abs{\la u}^2dx+(1-d)\int_{\omega_c}\abs{\nabla u}^2dx-\Re\left(i{\lambda}(d-1)\int_{\omega_c}cy\bar{u}dx\right)=0. \end{equation}
Adding \eqref{ND-Stpe2-1} and \eqref{ND-Stpe2-7}, we get \begin{equation*}
\int_{\omega_c}\abs{\la u}^2dx+\int_{\omega_c}\abs{\nabla u}^2dx=\int_{\Gamma_0}(m\cdot \nu)\left|\frac{\partial u}{\partial\nu}\right|^2d\Gamma-2\Re\left(i{\lambda}\int_{\omega_c}cy\left(m\cdot \nabla{\bar{u}}\right)dx\right)-\Re\left(i{\lambda}(d-1)\int_{\omega_c}cy\bar{u}dx\right). \end{equation*} Using \eqref{Geometric Condition}, we get \begin{equation}\label{ND-Stpe2-8} \int_{\omega_c}\abs{\la u}^2dx+\int_{\omega_c}\abs{\nabla u}^2dx\leq 2\abs{\la}\int_{\omega_c}\abs{c}\abs{y}\abs{m\cdot \nabla u}dx+\abs{\la}(d-1)\int_{\omega_c}\abs{c}\abs{y}\abs{u}dx. \end{equation} Using Young's inequality and \eqref{ND-Step1-1}, we get \begin{equation}\label{ND-Stpe2-9}
2\abs{\la}\int_{\omega_c}\abs{c}\abs{y}\abs{m\cdot \nabla u}dx\leq \|m\|_{\infty}\|c\|_{\infty}\int_{\omega_c}\left(\abs{\la u}^2+\abs{\nabla u}^2\right)dx \end{equation} and \begin{equation}\label{ND-Stpe2-10}
\abs{\la}(d-1)\int_{\omega_c}\abs{c(x)}\abs{y}\abs{u}dx\leq \frac{(d-1)\|c\|_{\infty}}{2}\int_{\omega_c}\abs{\la u}^2dx+\frac{(d-1)\|c\|_{\infty}C_{p,\omega_c}}{2}\int_{\omega_c}\abs{\nabla u}^2dx. \end{equation} Inserting \eqref{ND-Stpe2-9} and \eqref{ND-Stpe2-10} in \eqref{ND-Stpe2-8}, we get \begin{equation*}
\left(1-\|c\|_{\infty}\left(\|m\|_{\infty}+\frac{d-1}{2}\right)\right)\int_{\omega_c}\abs{\la u}^2dx+\left(1-\|c\|_{\infty}\left(\|m\|_{\infty}+\frac{(d-1)C_{p,\omega_c}}{2}\right)\right)\int_{\omega_c}\abs{\nabla u}^2dx\leq 0. \end{equation*} Using \eqref{GC-Condition} and \eqref{ND-Step1-1} in the above estimation, we get \begin{equation}\label{ND-Stpe2-11} u=0\quad \text{and}\quad y=0\quad \text{in}\quad \omega_c. \end{equation} In order to complete this proof, we need to show that $y=0$ in $\widetilde{\Omega}$. For this aim, using the definition of the function $c$ in $\widetilde{\Omega}$ and using the fact that $y=0$ in $\omega_c$, we get \begin{equation}\label{ND-Stpe2-12} \begin{array}{rll} \displaystyle \la^2y+\Delta y&=&0\ \ \text{in}\ \ \widetilde{\Omega},\\[0.1in] \displaystyle y&=&0 \ \ \text{on}\ \ \partial\widetilde{\Omega},\\[0.1in] \displaystyle \frac{\partial y}{\partial \nu}&=&0\ \ \text{on}\ \ \Gamma_1. \end{array} \end{equation} Now, using Holmgren uniqueness theorem, we obtain $y=0$ in $\widetilde{\Omega}$ and consequently \eqref{ker} holds true. Moreover, similar to Lemma 2.5 in \cite{akil2021ndimensional}, we can prove $R(i\la I-\mathcal A_d)=\mathcal H, \ \forall {\lambda}\in \mathbb R$. Finally, by using the closed graph theorem of Banach and Theorem \ref{App-Theorem-A.2}, we conclude the proof of this Theorem. \end{proof}\\\linebreak
\noindent Let us notice that, under the sole assumptions \eqref{Geometric Condition} and \eqref{GC-Condition}, the polynomial stability of system \eqref{ND-1}-\eqref{ND-5} is an open problem.
\appendix \section{Some notions and stability theorems}\label{p2-appendix} \noindent In order to make this paper more self-contained, we recall in this short appendix some notions and stability results used in this work. \begin{definition} \label{App-Definition-A.1}{\rm Assume that $A$ is the generator of $C_0-$semigroup of contractions $\left(e^{tA}\right)_{t\geq0}$ on a Hilbert space $H$. The $C_0-$semigroup $\left(e^{tA}\right)_{t\geq0}$ is said to be
\begin{enumerate}
\item[$(1)$] Strongly stable if
$$
\lim_{t\to +\infty} \|e^{tA}x_0\|_H=0,\quad \forall\, x_0\in H.
$$
\item[$(2)$] Exponentially (or uniformly) stable if there exist two positive constants $M$ and $\varepsilon$ such that
$$
\|e^{tA}x_0\|_{H}\leq Me^{-\varepsilon t}\|x_0\|_{H},\quad \forall\, t>0,\ \forall\, x_0\in H.
$$
\item[$(3)$] Polynomially stable if there exist two positive constants $C$ and $\alpha$ such that
$$
\|e^{tA}x_0\|_{H}\leq Ct^{-\alpha}\|A x_0\|_{H},\quad \forall\, t>0,\ \forall\, x_0\in D(A).
$$
\xqed{$\square$}
\end{enumerate}} \end{definition}
\noindent To show the strong stability of the $C_0$-semigroup $\left(e^{tA}\right)_{t\geq0}$ we rely on the following result due to Arendt-Batty \cite{Arendt01}. \begin{theoreme}\label{App-Theorem-A.2}{\rm
{Assume that $A$ is the generator of a $C_0$-semigroup of contractions $\left(e^{tA}\right)_{t\geq0}$ on a Hilbert space $H$. If $A$ has no pure imaginary eigenvalues and $\sigma\left(A\right)\cap i\mathbb{R}$ is countable,
where $\sigma\left(A\right)$ denotes the spectrum of $A$, then the $C_0$-semigroup $\left(e^{tA}\right)_{t\geq0}$ is strongly stable.}\xqed{$\square$}} \end{theoreme}
\noindent Concerning the characterization of the polynomial stability of a $C_0-$semigroup of contractions $\left(e^{tA}\right)_{t\geq 0}$, we rely on the following result due to Borichev and Tomilov \cite{Borichev01} (see also \cite{Batty01} and \cite{RaoLiu01}).
\begin{theoreme}\label{bt}
{\rm
Assume that $\mathcal{A}$ is the generator of a strongly continuous semigroup of contractions $\left(e^{t\mathcal{A}}\right)_{t\geq0}$ on $\mathcal{H}$. If $i\mathbb{R}\subset \rho(\mathcal{A})$, then for a fixed $\ell>0$ the following conditions are equivalent:
\begin{equation}\label{h1}
\limsup_{{\lambda}\in \mathbb R, \ |\la| \to \infty}\frac{1}{|\la|^\ell}\left\|(i\la I-\mathcal A)^{-1}\right\|_{\mathcal{L}(\mathcal{H})}<\infty,
\end{equation}
\begin{equation}\label{h2}
\|e^{t\mathcal{A}}U_{0}\|^2_{\mathcal H} \leq \frac{C}{t^{\frac{2}{\ell}}}\|U_0\|^2_{D(\mathcal A)},\hspace{0.1cm}\forall t>0,\hspace{0.1cm} U_0\in D(\mathcal A),\hspace{0.1cm} \text{for some}\hspace{0.1cm} C>0.
\end{equation}\xqed{$\square$}} \end{theoreme}
\end{document}
\begin{document}
\title{Memoir on Divisibility Sequences}
\begin{abstract}
The purpose of this memoir is to discuss two very interesting properties of integer sequences. One is the law of apparition and the other is the law of repetition. Both have been extensively studied by mathematicians such as Ward, Lucas, Lehmer, Hall, etc. However, due to the lack of a proper survey in this area, many results have been rediscovered decades later. This, along with the usefulness of such a theory, calls for a survey of this topic.
\end{abstract}
\section{Introduction}
It is well known that we have $F_{m}\mid F_{n}$ for Fibonacci numbers $(F_{n})$ if $m\mid n$. In fact, we have $\gcd(F_{m},F_{n})=F_{\gcd(m,n)}$. \textcite{lucas_1878_1,lucas_1878_2,lucas_1878_3} and \textcite{lehmer30} generalized this property to the Lucas sequence of the first kind $(U_{n})$ defined as
\begin{align*}
U_{n}
& = \dfrac{\alpha^{n}-\beta^{n}}{\alpha-\beta}
\end{align*}
where $\alpha$ and $\beta$ are the roots of $x^{2}-ax+b=0$, although under different conditions. They also established the \textit{law of apparition} and the \textit{law of repetition}. The law of apparition is: if $\rho$ is the smallest index for which a prime $p$ divides $U_{\rho}$, then $p\mid U_{k}$ if and only if $\rho\mid k$. The law of repetition is: if $p^{e}\|U_{\rho}$, then $p^{e+f}\|U_{\rho p^{f}s}$ for $p\nmid s$.
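To make the two laws concrete, the short Python sketch below (ours, purely illustrative) verifies both of them numerically for the Fibonacci numbers, that is, the Lucas sequence with $a=1$ and $b=-1$; we take an odd prime, since the law of repetition needs a separate treatment at $p=2$.
\begin{verbatim}
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def rank_of_apparition(p):
    """Smallest k >= 1 with p | F_k."""
    a, b, k = 0, 1, 0
    while True:
        a, b, k = b % p, (a + b) % p, k + 1
        if a == 0:
            return k

def nu(p, n):
    """p-adic valuation of n."""
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e

p = 7
rho = rank_of_apparition(p)
# Law of apparition: p | F_k if and only if rho | k.
assert all((fib(k) % p == 0) == (k % rho == 0) for k in range(1, 200))
# Law of repetition (odd p): p^e || F_rho implies p^(e+f) || F_{rho * p^f * s}, p not dividing s.
e = nu(p, fib(rho))
assert all(nu(p, fib(rho * p**f * s)) == e + f
           for f in range(3) for s in (1, 2, 3) if s % p)
\end{verbatim}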
In this section, we discuss some basics. In \autoref{sec:elem}, we discuss properties of divisibility sequences in general. In \autoref{sec:lucas}, we will focus on the law of apparition for linear recurrences of order $k$. The reason we are so interested in the law of apparition becomes apparent once we have \autoref{thm:equiv}. In \autoref{sec:exp}, we investigate the law of repetition.
\begin{definition}[Divisibility Sequence]
An integer sequence $(a_{n})$ is a \textit{divisibility sequence} if $a_{m}\mid a_{n}$ whenever $m\mid n$. Some simple examples of divisibility sequences are $(n!),(\varphi(n)),(x^{n}-1),(F_{n})$.
\end{definition}
The term divisibility sequence was most likely used by \textcite{hall36} for the first time. Hall called a divisibility sequence $(a_{n})$ \textit{normal} if $a_{0}=0$ and $a_{1}=1$. We can actually assume that a divisibility sequence is normal without much loss of generality, as \textcite{hall36} has shown. In this memoir, we will be mostly concerned with the following stronger assumption.
\begin{definition}[Strong Divisibility Sequence]
An integer sequence $(a_{n})$ is a \textit{strong divisibility sequence} if $\gcd(a_{m},a_{n})=a_{\gcd(m,n)}$ for all positive integers $m$ and $n$. Some simple examples of strong divisibility sequences are $(x^{n}-1),(U_{n})$.
\end{definition}
Although elliptic divisibility sequences are also divisibility sequences, we will not be focusing on that topic in this memoir. For elliptic divisibility sequences, the reader can consult \textcite{ward_1948}.
\begin{definition}[Rank of Apparition]
Let $m$ be a positive integer. If $\rho$ is the smallest index such that $m\mid a_{\rho}$, then $\rho$ is the \textit{rank of apparition} of $m$ in $(a_{n})$. For a prime $p$ and positive integer $e\geq 1$, we denote the rank of apparition of $p^{e}$ by $\rho_{e}(p)$. If it is clear what the prime $p$ is, then we may only write $\rho_{e}$.
\end{definition}
\begin{definition}[Subsequence of Strong Divisibility Sequence]
For a fixed positive integer $s$, the sequence $(c_{n})$ is a subsequence of $(a_{n})$ if
\begin{align*}
c_{n}
& = \dfrac{a_{sn}}{a_{s}}
\end{align*}
for all $n$.
\end{definition}
\begin{definition}[Binomial Coefficients]
Let $n!_{a}$ denote the product of first $n$ terms of the strong divisibility sequence $(a_{n})$. Then the \textit{binomial coefficient} of $(a_{n})$ is
\begin{align*}
\binom{n}{k}_{a}
& = \dfrac{n!_{a}}{k!_{a}(n-k)!_{a}}
\end{align*}
\end{definition}
\section{Elementary Properties}\label{sec:elem}
We will first attempt to characterize strong divisibility sequences by their divisors. We begin with an analog of the law of apparition for strong divisibility sequences. A recent publication \textcite{billal_riasat_2021} discusses divisibility sequences and covers some of the results.
\begin{theorem}
Let $p$ be a prime and $\rho$ be the rank of apparition of $p$ in the strong divisibility sequence $(a_{n})$. Then $p\mid a_{k}$ if and only if $\rho\mid k$.
\end{theorem}
\begin{theorem}
Let $m$ be a positive integer and the prime factorization of $m$ be
\begin{align*}
m
& = \prod_{i=1}^{r}p_{i}^{e_{i}}
\end{align*}
If the rank of apparition of $p_{i}^{e_{i}}$ in $(a_{n})$ is $\rho_{e_{i}}(p_{i})$, then the rank of apparition of $m$ is
\begin{align*}
\rho
& = \lcm(\rho_{e_{1}}(p_{1}),\ldots,\rho_{e_{r}}(p_{r}))
\end{align*}
\end{theorem}
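As a quick numerical illustration (ours), both of the above statements can be checked for the Fibonacci numbers: the rank of apparition of a modulus $m$ is computed from the recurrence modulo $m$ and compared with the least common multiple of the ranks of its prime power factors.
\begin{verbatim}
from math import lcm

def rank(m):
    """Smallest k >= 1 with m | F_k, from the Fibonacci recurrence mod m."""
    a, b, k = 0, 1, 0
    while True:
        a, b, k = b % m, (a + b) % m, k + 1
        if a == 0:
            return k

def prime_power_factorization(m):
    """Prime power factorization of m as a dictionary {p: e}."""
    f, p = {}, 2
    while p * p <= m:
        while m % p == 0:
            f[p] = f.get(p, 0) + 1
            m //= p
        p += 1
    if m > 1:
        f[m] = f.get(m, 0) + 1
    return f

for m in range(2, 500):
    ranks = [rank(p**e) for p, e in prime_power_factorization(m).items()]
    assert rank(m) == lcm(*ranks)
\end{verbatim}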
We have the first necessary and sufficient condition for a divisibility sequence $(a_{n})$ to be a strong divisibility sequence due to \textcite{ward36}.
\begin{theorem}\label{thm:equiv}
Let $(a_{n})$ be a divisibility sequence. Then $(a_{n})$ being a strong divisibility sequence is equivalent to the condition that, for every prime $p$ and positive integer $e$, $p^{e}\mid a_{k}$ if and only if $\rho_{e}(p)\mid k$.
\end{theorem}
\textcite{ward55_2} proves the following result. \textcite{nowicki} essentially rediscovers the same result.
\begin{theorem}
Let $(a_{n})$ be an integer sequence. Then $(a_{n})$ is a strong divisibility sequence if and only if there exists an integer sequence $(b_{n})$ such that
\begin{align*}
a_{n}
& = \prod_{d\mid n}b_{d}
\end{align*}
where $\gcd(b_{m},b_{n})=1$ whenever $m\nmid n$ and $n\nmid m$.
\end{theorem}
\begin{definition}[LCM Sequence]
This new sequence $(b_{n})$ associated with $(a_{n})$ is the \textit{lcm sequence} of $(a_{n})$. It can be thought of as a generalization of cyclotomic polynomials $\Phi_{n}(x)$ of $x^{n}-1$.
\end{definition}
\begin{theorem}
Let $(a_{n})$ be a strong divisibility sequence and $(b_{n})$ is the lcm sequence of $(a_{n})$. Then
\begin{align*}
\lcm(a_{1},\ldots,a_{n})
& = b_{1}\cdots b_{n}
\end{align*}
\end{theorem}
\begin{theorem}
The lcm sequence $(b_{n})$ of a strong divisibility sequence $(a_{n})$ is given by
\begin{align*}
b_{n}
& = \dfrac{\lcm(a_{1},\ldots,a_{n})}{\lcm(a_{1},\ldots,a_{n-1})}\\
& = \dfrac{a_{n}\prod_{\substack{p_{i},p_{j}\mid n\\i\neq j}}a_{\frac{n}{p_{i}p_{j}}}}{\prod_{p_{i}\mid n}a_{n/p_{i}}\prod_{\substack{p_{i},p_{j},p_{k}\mid n\\i\neq j\neq k}}a_{\frac{n}{p_{i}p_{j}p_{k}}}}\\
& = \dfrac{a_{n}}{\lcm(a_{n/p_{1}},\ldots,a_{n/p_{r}})}
\end{align*}
where $p_{1},\ldots,p_{r}$ are distinct prime factors of $n$.
\end{theorem}
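For the Fibonacci numbers, whose lcm sequence consists of the so-called primitive parts, the two expressions for $b_{n}$ above are easy to compare numerically; the following sketch (ours) does so.
\begin{verbatim}
from math import lcm

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def prime_factors(n):
    """Distinct prime factors of n."""
    f, p = set(), 2
    while p * p <= n:
        if n % p == 0:
            f.add(p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        f.add(n)
    return f

N = 60
F = [fib(n) for n in range(N)]
running = [1] * N                       # running[n] = lcm(F_1, ..., F_n)
for n in range(1, N):
    running[n] = lcm(running[n - 1], F[n])

for n in range(2, N):
    b_quotient_of_lcms = running[n] // running[n - 1]
    b_closed_form = F[n] // lcm(*(F[n // p] for p in prime_factors(n)))
    assert b_quotient_of_lcms == b_closed_form
\end{verbatim}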
\begin{theorem}
Let $(a_{n})$ be an integer sequence. Then $(a_{n})$ is a strong divisibility sequence if and only if for every positive integer $m>1$ and all positive integers $k,l$, we have $m\mid a_{k}$ and $m\mid a_{l}$ if and only if $m\mid a_{\gcd(k,l)}$.
\end{theorem}
A corollary is the following.
\begin{theorem}\label{thm:onerank}
A divisibility sequence $(a_{n})$ is a strong divisibility sequence if and only if any positive integer $m>1$ admits only one rank of apparition.
\end{theorem}
\begin{theorem}
If an integer sequence $(u_{n})$ has the property that $\gcd(u_{pn},u_{qn})=u_{n}$ for distinct primes $p,q$ and positive integers $n$, let us say that $(u_{n})$ has property P. Then both the strong divisibility sequence $(a_{n})$ and its lcm sequence $(b_{n})$ have the property P.
\end{theorem}
\begin{theorem}
If $(a_{n})$ is a divisibility sequence and $\gcd(a_{pn},a_{qn})=a_{n}$ for distinct primes $p$ and $q$, then $\gcd(a_{m},a_{n})=1$ if $\gcd(m,n)=1$.
\end{theorem}
\begin{theorem}
A necessary and sufficient condition that an integer sequence $(a_{n})$ is a strong divisibility sequence is that
\begin{align*}
\gcd(a_{pn},a_{qn})
& = a_{n}
\end{align*}
for all distinct primes $p,q$ and positive integers $n$.
\end{theorem}
We have the analogue of Legendre's theorem for strong divisibility sequences.
\begin{theorem}
Let $(a_{n})$ be a strong divisibility sequence and $p$ be a prime. Then
\begin{align*}
\nu_{p}(n!_{a})
& = \sum_{i\geq 1}\left\lfloor{\dfrac{n}{\rho_{i}(p)}}\right\rfloor
\end{align*}
\end{theorem}
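Again for the Fibonacci numbers, the following sketch (ours) compares the valuation $\nu_{p}(n!_{a})$ computed term by term with the sum over the ranks $\rho_{i}(p)$.
\begin{verbatim}
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def rank(m):
    """Smallest k >= 1 with m | F_k."""
    a, b, k = 0, 1, 0
    while True:
        a, b, k = b % m, (a + b) % m, k + 1
        if a == 0:
            return k

def nu(p, n):
    """p-adic valuation of n."""
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e

p, N = 3, 200
direct = sum(nu(p, fib(k)) for k in range(1, N + 1))   # nu_p(N!_F), term by term
legendre = 0
i = 1
while rank(p**i) <= N:
    legendre += N // rank(p**i)                        # floor(N / rho_i(p))
    i += 1
assert direct == legendre
\end{verbatim}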
\begin{theorem}
The binomial coefficients of a strong divisibility sequence are integers.
\end{theorem}
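For instance, the Fibonomial coefficients (the case $a_{n}=F_{n}$) can be checked to be integers with the following sketch (ours).
\begin{verbatim}
from math import prod

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_factorial(n):
    """n!_F: the product F_1 F_2 ... F_n (empty product is 1)."""
    return prod(fib(k) for k in range(1, n + 1))

for n in range(25):
    for k in range(n + 1):
        numerator = fib_factorial(n)
        denominator = fib_factorial(k) * fib_factorial(n - k)
        assert numerator % denominator == 0   # the Fibonomial coefficient is an integer
\end{verbatim}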
\section{Lucasian Sequences}\label{sec:lucas}
In this section, we will see the connection between linear recurrent and divisibility sequences. Some of the results will make use of abstract algebra when it seems convenient to do so. But we will mostly concern ourselves with integer sequences since analogous results usually extend to the appropriate field.
\begin{definition}[Linear Recurrent Sequence]
A \textit{linear recurrent sequence} of order $k$ is defined as
\begin{align}
u_{n+k}
& = c_{k-1}u_{n+k-1}+\ldots+c_{0}u_{n}\label{eqn:linrec}
\end{align}
\end{definition}
We are interested in $(u_{n})$ when the coefficients $c_{0},\ldots,c_{k-1}$ are integers. We can easily extend the definition over a field $\mathbb{F}$. The polynomial associated with $(u_{n})$ in \autoref{eqn:linrec} is the \textit{characteristic polynomial} of $u$ which is
\begin{align*}
f(x)
& = x^{k}-c_{k-1}x^{k-1}-\ldots-c_{0}
\end{align*}
Denote the discriminant of $f$ by $\mathfrak{D}(f)$. If it is clear what $f$ is, we may write $\mathfrak{D}$ only.
\begin{definition}[Lucasian Sequence]
An integer sequence $(u_{n})$ is \textit{Lucasian} if $u$ is both a linear recurrent sequence and a divisibility sequence. \textcite{ward37,ward55_2} called such sequences ``Lucasian'' \textit{in honor of the French mathematician E. Lucas who first systematically studied a special class of such sequences}.
\end{definition}
\begin{definition}[Null Divisor]
A positive integer $d$ is a \textit{null divisor} of the Lucasian sequence $(u_{n})$ if $d\mid u_{m}$ for all $m\geq n_{0}$, for some index $n_{0}$. If $(u_{n})$ has no null divisor other than $1$, then $(u_{n})$ is \textit{primary}. $d$ is a \textit{proper null divisor} of $(u_{n})$ if $d$ divides neither the initial terms $u_{0},\ldots,u_{k-1}$ nor the coefficients $c_{0},\ldots,c_{k-1}$. If $d$ is not a proper null divisor, then it is a \textit{trivial null divisor}.
\end{definition}
\begin{definition}[Generator]
Define the polynomial $f_{i}$ as $f_{0}(x)=0$ and
\begin{align*}
f_{r}
& = x^{r}-c_{r-1}x^{r-1}-\ldots-c_{0}
\end{align*}
Then the polynomial
\begin{align*}
\mathfrak{u}(x)
& = u_{0}f_{k-1}(x)+\ldots+u_{k-1}f_{0}(x)
\end{align*}
is called the \textit{generator} of $(u_{n})$. If
\begin{align*}
\Delta(\mathfrak{u})
& =
\begin{vmatrix}
u_{0} & \ldots & u_{k-1}\\
u_{1} & \ldots & u_{k}\\
\vdots & \ddots & \vdots\\
u_{k-1} & \ldots & u_{2k-2}
\end{vmatrix}
\end{align*}
then we have
\begin{align*}
\Delta(\mathfrak{u})
& = (-1)^{k(k-1)/2}\mathfrak{R}(u(x),f(x))
\end{align*}
where $\mathfrak{R}(f(x),g(x))$ is the \textit{resultant} of two polynomials $f$ and $g$.
\end{definition}
\begin{definition}[Index]
Let $\nu_{n}(a)$ be the largest non-negative integer $k$ such that $n^{k}\mid a$ but $n^{k+1}\nmid a$. If $G$ is the largest null divisor of $(u_{n})$, then for a proper null prime divisor $p$, $\nu_{p}(G)$ is the \textit{index} of $p$ in $(u_{n})$.
\end{definition}
\begin{definition}[Period and Numeric]
Consider the Lucasian sequence $(u_{n})$ modulo $m$. Let $\rho$ be the least positive index such that
\begin{align*}
U_{\rho}
& \equiv 0\pmod{m}\\
& \vdots\\
U_{\rho+k-2}
& \equiv 0\pmod{m}\\
U_{\rho+k-1}
& \equiv 1\pmod{m}
\end{align*}
Then $\rho$ is a \textit{period} of $(u_{n})$ modulo $m$ because
\begin{align*}
u_{n+\rho}
& \equiv u_{n}\pmod{m}
\end{align*}
for all $n\geq n_{0}$. The number of non-periodic terms of $(u_{n})$ modulo $m$ is the \textit{numeric}. We say that $(u_{n})$ is \textit{periodic} modulo $m$ and $(u_{n})$ is \textit{purely periodic} modulo $m$ if the numeric $n_{0}=0$. On the other hand, $\tau$ is a \textit{restricted period} of $(u_{n})$ modulo $m$ if $\tau$ is the least positive integer for which
\begin{align*}
U_{\tau}
& \equiv 0\pmod{m}\\
& \vdots\\
U_{\tau+k-2}
& \equiv 0\pmod{m}
\end{align*}
In this case, $u_{n+\tau}\equiv Au_{n}\pmod{m}$ for some $m\nmid A$ and all $n\geq n_{0}'$. This $A$ is called the \textit{multiplier} of $(u_{n})$ modulo $m$. The value of this multiplier $A$ depends on $\tau$.
\end{definition}
\begin{definition}[R-sequence]
Let $(u_{n})$ be a Lucasian sequence with an irreducible polynomial $f$. If $\alpha_{1},\ldots,\alpha_{k}$ are the roots of $f$, then
\begin{align*}
U_{n}(f)
& = \prod_{i<j}\left(\dfrac{\alpha_{i}^{n}-\alpha_{j}^{n}}{\alpha_{i}-\alpha_{j}}\right)
\end{align*}
is the \textit{R-sequence} associated with $(u_{n})$. We simply write $U_{n}$ if it is clear what $f$ is. Then $(U_{n})$ is a Lucasian sequence. The case $k=2$ gives us the classical Lucas sequence of the first kind. R-sequences are of particular importance because Lucasian sequences seem to be either R-sequences themselves or divisors of R-sequences. Moreover, the consideration of R-sequence gives us further insight into the determination of the law of apparition.
\end{definition}
\begin{definition}[Period of Polynomial]
Let $f$ be a polynomial irreducible modulo $p$. Then the smallest positive integer $n$ for which
\begin{align*}
x^{n}
& \equiv 1\pmod{p,f(x)}
\end{align*}
is the \textit{period of $f$ modulo }$p$. For two polynomials $h(x)$ and $g(x)$, we write
\begin{align*}
g(x)
& \equiv h(x)\pmod{m,f(x)}
\end{align*}
if
\begin{align*}
g(x)-h(x)
& = f(x)q(x)+m\cdot r(x)
\end{align*}
for some polynomials $q$ and $r$.
\end{definition}
\textcite{hall36} states the following easily derived results.
\begin{theorem}\label{thm:redchar}
Let $(u_{n})$ be a normal Lucasian sequence with characteristic polynomial $f$ such that the prime $p$ does not divide the discriminant $\mathfrak{D}(f)$. If
\begin{align*}
f(x)
& \equiv f_{1}(x)\cdots f_{s}(x)\pmod{p}
\end{align*}
is the factorization of $f$ modulo $p$ into irreducible polynomials $f_{1},\ldots,f_{s}$ of degree $k_{1},\ldots,k_{s}$ and $\rho$ is the least period of $(u_{n})$ modulo $p$, then
\begin{align*}
\rho
& \mid \lcm(p^{k_{1}}-1,\ldots,p^{k_{s}}-1)
\end{align*}
\end{theorem}
Due to \autoref{thm:redchar}, we can turn our attention primarily to the case when $f$ is irreducible modulo the prime $p$.
\begin{theorem}
Let $(u_{n})$ be a normal Lucasian sequence. If $\rho$ is a rank of apparition and $\tau$ is a restricted period of $(u_{n})$ modulo the prime $p$ respectively, then $\rho\mid\tau$.
\end{theorem}
\begin{theorem}
Let $(u_{n})$ be a normal Lucasian sequence and $\tau$ be its restricted period modulo the prime $p$. If $p\mid n$, then $\tau\mid n$.
\end{theorem}
Note that this result is slightly stronger than the typical result that the rank of apparition $\rho\mid n$ if $p\mid u_{n}$ since $\rho\mid\tau$ but the converse is not always true. \textcite{ward38} proves the following generalized result.
\begin{theorem}
Let $\mathfrak{O}$ be a commutative ring and $(u_{n})$ be a Lucasian sequence with elements in $\mathfrak{O}$. Moreover, $\mathfrak{A}$ is an ideal of $\mathfrak{O}$ such that no divisor of $\mathfrak{A}$ is a null divisor of $(u_{n})$. Then if $(u_{n})$ is periodic modulo $\mathfrak{A}$, the minimal restricted period of $(u_{n})$ modulo $\mathfrak{A}$ exists and divides every other restricted period of $(u_{n})$. This minimal restricted period divides the period of $(u_{n})$ modulo $\mathfrak{A}$. Furthermore, the multipliers of $(u_{n})$ modulo $\mathfrak{A}$ are relatively prime to $\mathfrak{A}$ and form a group with respect to multiplication modulo $\mathfrak{A}$.
\end{theorem}
\begin{theorem}
Let $\mathfrak{O}$ be a ring and $(u_{n})$ be a sequence of $\mathfrak{O}$ and $\mathfrak{A}$ be an ideal such that $(u_{n})$ is periodic modulo $\mathfrak{A}$ but no divisor of $\mathfrak{A}$ is a null divisor of $(u_{n})$. If $\rho$ is the least period and $\tau$ is the restricted period of $(u_{n})$ modulo $\mathfrak{A}$, then the multipliers of $(u_{n})$ form a cyclic group of order $\rho/\tau$. Furthermore, the multiplier dependent on $\tau$ is a generator of this group.
\end{theorem}
For Lucasian sequences, the concept of the rank of apparition is almost the same as that for strong divisibility sequences. However, unlike strong divisibility sequences, $(u_{n})$ may have more than one rank of apparition modulo $\mathfrak{A}$. For this reason, we redefine the rank of apparition of $\mathfrak{A}$ in the following way. We call $\rho$ a rank of apparition of $\mathfrak{A}$ in $(u_{n})$ for the ring $\mathfrak{O}$ if
\begin{align*}
u_{\rho}
& \equiv 0\pmod{\mathfrak{A}}\\
\text{and}\quad u_{d}
& \not\equiv 0\pmod{\mathfrak{A}}
\end{align*}
for any proper divisor $d$ of $\rho$. With this connection, one of our primary interests is knowing when the set of ranks of apparition is finite. Note that, when we consider such a set of ranks of apparition, we can actually consider a rank of apparition $\delta$ a duplicate of the rank of apparition $\rho$ if $\rho\mid\delta$. The obvious reason is that the ranks covered by $\delta$ are already covered by $\rho$. In this regard, we have the following result.
\begin{theorem}
Let $\mathfrak{A}$ be a divisor of the Lucasian sequence $(u_{n})$ such that $(u_{n})$ is periodic modulo $\mathfrak{A}$. Then a necessary and sufficient condition that $\mathfrak{A}$ has a finite set of ranks of apparition in $(u_{n})$ is that all the ranks divide the restricted period of $(u_{n})$ modulo $\mathfrak{A}$.
\end{theorem}
\begin{theorem}
Let $(u_{n})$ be a Lucasian sequence and $\mathfrak{A}$ be a divisor of $(u_{n})$ such that $(u_{n})$ is purely periodic modulo $\mathfrak{A}$. Then $\mathfrak{A}$ only has a finite set of ranks and each rank divides the restricted period of $(u_{n})$ modulo $\mathfrak{A}$.
\end{theorem}
Let $m$ be a positive integer that does not divide the coefficient $c_{0}$ of the characteristic polynomial of $(u_{n})$, and let $\mathfrak{S}_{m}$ denote the set of all ranks of apparition of $(u_{n})$ modulo $m$. We readily have the following result.
\begin{theorem}\label{thm:finrank}
The set $\mathfrak{S}_{m}$ consists of all multiples of a finite set of ranks of apparition $\rho_{1},\ldots,\rho_{s}$ such that
\begin{align*}
u_{\rho_{i}}
& \equiv 0\pmod{m}
\qquad\text{and}\qquad
u_{d}
\not\equiv 0\pmod{m}
\end{align*}
for every proper divisor $d$ of $\rho_{i}$, and $\rho_{i}\nmid\rho_{j}$ for $i\ne j$.
\end{theorem}
The finite set in \autoref{thm:finrank} is called the \textit{ranks of apparition} of $(u_{n})$ modulo $m$. We can actually consider $(u_{n})$ modulo $m$ using a \textit{single unified rank of apparition} $\rho$, where $\rho=\lcm(\rho_{1},\ldots,\rho_{s})$. The places of apparition of $m$ in $(u_{n})$ are periodic modulo $\rho$, and $\rho\mid\tau$ where $\tau$ is the restricted period of $(u_{n})$ modulo $m$.
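To make this concrete, the following small Python sketch searches for the ranks of apparition of a modulus $m$ in a linear recurrence by brute force, directly from the definition given above. The recurrence, its initial values, the coefficient ordering and the modulus in the example are illustrative assumptions and are not taken from the text.
\begin{verbatim}
from math import lcm

def ranks_of_apparition(coeffs, init, m, limit=2000):
    """Brute-force search for the ranks of apparition of m in the
    linear recurrence u_n = c_{k-1} u_{n-1} + ... + c_0 u_{n-k} (mod m),
    where coeffs = [c_0, ..., c_{k-1}] and init = [u_0, ..., u_{k-1}].
    A rank of apparition is an index n with u_n = 0 (mod m) such that
    u_d != 0 (mod m) for every proper divisor d of n."""
    k = len(coeffs)
    vals = [x % m for x in init]
    zeros = {n for n in range(1, k) if vals[n] == 0}
    for n in range(k, limit + 1):
        nxt = sum(c * v for c, v in zip(coeffs, vals[-k:])) % m
        vals.append(nxt)
        if nxt == 0:
            zeros.add(n)
    return sorted(n for n in zeros
                  if not any(n % d == 0 and d in zeros for d in range(1, n)))

# Illustrative third-order example modulo 7.
ranks = ranks_of_apparition([1, 1, 1], [0, 1, 1], 7)
print(ranks)                            # the minimal ranks of apparition
print(lcm(*ranks) if ranks else None)   # the single unified rank rho
\end{verbatim}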
\begin{theorem}
Let $(u_{n})$ be a normal Lucasian sequence of order $k$ and $\mathfrak{l}=\lcm(1,\ldots,k)$. Then $p^{k}(p^{\mathfrak{l}}-1)$ is a period of $(u_{n})$ modulo $p$.
\end{theorem}
\begin{theorem}
Let $(u_{n})$ be a Lucasian sequence of order $k$ with characteristic polynomial $f(x)$ and $p$ be a prime. If $p\mid u_{p}$, then $p\mid\mathfrak{D}(f)$ or $p\mid c_{0}$.
\end{theorem}
\begin{theorem}
Let $p$ be a null divisor of a normal Lucasian sequence $(u_{n})$. Then $p$ divides both $\Delta(\mathfrak{u})$ and $\mathfrak{D}(f)$, where $\mathfrak{u}$ is the generator and $f(x)$ is the characteristic polynomial of $(u_{n})$, respectively.
\end{theorem}
\begin{theorem}
A sufficient condition that the Lucasian sequence $(u_{n})$ is primary is that $\gcd(\Delta(\mathfrak{u}),\mathfrak{D}(f))=1$ where $\mathfrak{u}$ is the generator and $f$ is the characteristic polynomial of $(u_{n})$ respectively.
\end{theorem}
\begin{theorem}
Let $p$ be a null prime divisor of a Lucasian sequence $(u_{n})$ whose coefficients are relatively prime. If $\mathfrak{u}$ is the generator of $(u_{n})$, then $\nu_{p}(\Delta(\mathfrak{u}))$ is the index of $p$ in $(u_{n})$.
\end{theorem}
\begin{theorem}
A subsequence of a normal Lucasian sequence $(u_{n})$ can have no prime null divisor that is not a possible null divisor of $(u_{n})$ itself.
\end{theorem}
\begin{theorem}
Let $(u_{n})$ be a primary Lucasian sequence of order $k$ such that the characteristic polynomial has no repeated roots and the coefficients are relatively prime, and let $\mathfrak{l}=\lcm(1,\ldots,k)$. Then
\begin{align*}
u_{p}^{\mathfrak{l}}
& \equiv 1\pmod{p}
\end{align*}
for large enough $p$.
\end{theorem}
\begin{theorem}
Let $(u_{n})$ be a Lucasian sequence with characteristic polynomial $f$, $(U_{n})$ be the associated R-sequence and $p$ be a prime such that $p\nmid \mathfrak{D}(f)$. Then every rank of apparition of $p$ in $(U_{n})$ is a rank of apparition in $(u_{n})$.
\end{theorem}
Next, we have a generalization of the law of apparition given by Lucas.
\begin{theorem}\label{thm:genrank}
Let $(u_{n})$ be a Lucasian sequence of order $k$ with characteristic polynomial $f$ irreducible modulo $p$ and $\lambda$ be the period of $f$ modulo $p$. If $k$ has the prime factorization
\begin{align*}
k
& = q_{1}^{e_{1}}\cdots q_{s}^{e_{s}}
\end{align*}
then the ranks of apparition of $p$ in $(U_{n})$ are divisors of the elements of a subset of
\begin{align*}
\{\rho(k/q_{1}),\ldots,\rho(k/q_{s})\}
\end{align*}
where $\rho(s)=\lambda/\gcd(\lambda,p^{s}-1)$. Thus, $p$ has at most $k$ distinct ranks of apparition and the single unified rank of $p$ divides
\begin{align*}
\rho\left(\dfrac{k}{q_{1}\cdots q_{s}}\right)
\end{align*}
\end{theorem}
A corollary is the following.
\begin{theorem}\label{thm:lucdivex}
Any Lucasian sequence with an irreducible characteristic polynomial of order $k$, where $k$ is a prime power, has only one rank of apparition and hence is a strong divisibility sequence.
\end{theorem}
\begin{theorem}
The Lucasian sequence $(u_{n})$ is not a strong divisibility sequence if it has an irreducible characteristic polynomial and the ranks of apparition are in the set
\begin{align*}
\{\rho(k/q_{1}),\ldots,\rho(k/q_{r})\}
\end{align*}
for $1<r<s$ where $q_{1},\ldots,q_{s}$ are the distinct prime divisors of $k$.
\end{theorem}
\begin{theorem}
The prime $p$ is a null divisor of the Lucasian sequence $(U_{n})$ if and only if $p$ divides the last two coefficients $c_{1}$ and $c_{0}$ of the characteristic polynomial $f$ of $(u_{n})$.
\end{theorem}
\section{The Law of Repetition}\label{sec:exp}
We say that an integer sequence $(a_{n})$ has the \textit{law of repetition} if for any positive integers $n$ and $k$ and any prime divisor $p$ of $a_{n}$,
\begin{align*}
\nu_{p}(a_{nk})
& = \nu_{p}(a_{n})+\nu_{p}(k)
\end{align*}
holds.
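As a quick empirical illustration (not taken from the text), the following Python sketch checks this identity for the Fibonacci numbers, a classical strong divisibility sequence; the prime $p=2$ is excluded from the check, since it is a well-known exceptional case for that particular sequence.
\begin{verbatim}
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def nu(p, x):
    """p-adic valuation of the nonzero integer x."""
    e = 0
    while x % p == 0:
        x //= p
        e += 1
    return e

# Check nu_p(F_{nk}) = nu_p(F_n) + nu_p(k) whenever p | F_n (odd p only).
for n in range(3, 15):
    for p in (3, 5, 7, 11, 13):
        if fib(n) % p == 0:
            for k in range(1, 30):
                assert nu(p, fib(n * k)) == nu(p, fib(n)) + nu(p, k)
print("law of repetition verified on the sample range")
\end{verbatim}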
\begin{theorem}\label{thm:expdiv}
Let $(a_{n})$ be an integer sequence with the law of repetition. Then $(a_{n})$ is also a strong divisibility sequence.
\end{theorem}
\begin{proof}
For positive integers $m$ and $n$, write $g=\gcd(m,n)$, $m=gu$, $n=gv$ with $\gcd(u,v)=1$, and set $h=\gcd(a_{m},a_{n})$. We will show that $h=a_{g}$. First, let $p$ be a prime divisor of $g$. Then
\begin{align*}
\nu_{p}(h)
& = \min\left(\nu_{p}(a_{gu}),\nu_{p}(a_{gv})\right)\\
& = \nu_{p}(a_{g})+\min(\nu_{p}(u),\nu_{p}(v))
\end{align*}
Since $\gcd(u,v)=1$, $p$ cannot divide both $u$ and $v$, so either $\nu_{p}(u)=0$ or $\nu_{p}(v)=0$ and hence $\min(\nu_{p}(u),\nu_{p}(v))=0$. This gives $\nu_{p}(h)=\nu_{p}(a_{g})$ for every prime divisor $p$ of $g$. Next, assume that $p$ is a prime divisor of $h$ with $p^{e}\|h$, so that $p^{e}\mid a_{m}$ and $p^{e}\mid a_{n}$. By the law of repetition, $\nu_{p}(a_{gu})=\nu_{p}(a_{g})+\nu_{p}(u)$ and $\nu_{p}(a_{gv})=\nu_{p}(a_{g})+\nu_{p}(v)$. Since $p$ cannot divide both $u$ and $v$, at least one of these valuations equals $\nu_{p}(a_{g})$, so $e=\nu_{p}(h)=\min\left(\nu_{p}(a_{gu}),\nu_{p}(a_{gv})\right)=\nu_{p}(a_{g})$, that is, $p^{e}\|a_{g}$. Thus $p^{e}\|a_{g}$ holds whenever $p^{e}\|h$, and we conclude that $h=a_{g}$.
\end{proof}
By \autoref{thm:expdiv}, any sequence with the law of repetition has a corresponding lcm sequence $(b_{n})$. The next result characterizes when a strong divisibility sequence has the law of repetition.
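The lcm sequence can be recovered numerically from the divisor-product relation $a_{n}=\prod_{d\mid n}b_{d}$ that is used in the proofs below. The following Python sketch does this for the Fibonacci numbers and then lists the indices $k\le 60$ with $7\mid b_{k}$; since the rank of apparition of $7$ in the Fibonacci sequence is $\rho=8$ (as $F_{8}=21$), the characterization in \autoref{thm:expchar} below predicts exactly the indices $8$ and $56=8\cdot 7$ in this range. The choice of sequence, prime and search range is purely illustrative.
\begin{verbatim}
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

@lru_cache(maxsize=None)
def b(n):
    """Recover b_n from the relation a_n = prod_{d | n} b_d (here a = fib)."""
    prod = 1
    for d in range(1, n):
        if n % d == 0:
            prod *= b(d)
    q, r = divmod(fib(n), prod)
    assert r == 0    # b_n is an integer for strong divisibility sequences
    return q

print([k for k in range(1, 61) if b(k) % 7 == 0])   # expected: [8, 56]
\end{verbatim}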
\begin{theorem}\label{thm:expchar}
Let $(a_{n})$ be a strong divisibility sequence, $(b_{n})$ be the lcm sequence of $(a_{n})$ and $\rho$ be the rank of apparition of the prime $p$ in $(a_{n})$. Then $(a_{n})$ has the law of repetition if and only if for any positive integers $u$ and $m>1$ such that $p\nmid m$, $p\|b_{\rho p^{u}}$ but $p\nmid b_{\rho p^{u}m}$.
\end{theorem}
\begin{proof}
First, we prove the `only if' part. Since $(a_{n})$ is a strong divisibility sequence, $p\mid a_{k}$ if and only if $\rho\mid k$. By assumption, $(a_{n})$ has the law of repetition. If $p^\alpha\|a_\rho$, then $p^{\alpha+1}\|a_{\rho p}$.
\begin{align*}
a_{\rho p}
& = \prod_{d\mid \rho p}b_d\\
\nu_{p}(a_{\rho p})
& = \nu_{p}\left(\prod_{d\mid\rho p}b_d\right)
\end{align*}
If $\rho\nmid d$, then $p\nmid a_d$ and so $p\nmid b_d$. Thus,
\begin{align*}
\nu_{p}(a_{\rho p})
& = \nu_{p}\left(\prod_{d\mid p}b_{\rho d}\right)\\
& = \nu_{p}(b_\rho)+\nu_{p}(b_{\rho p})\\
\alpha+1
& = \alpha+\nu_{p}(b_{\rho p})
\end{align*}
So, $\nu_{p}(b_{\rho p})=1$ and $p\mid b_{\rho p}$. By induction, $p$ divides $b_{\rho p^i}$ for every $i\in\mathbb{N}$; more precisely, $p\|b_{\rho p^i}$. Next, let $n=\rho p^{u}m$ be a positive integer with $p\nmid m$, so that $p^{\alpha+u}\|a_{n}$ by the law of repetition. From the argument above,
\begin{align*}
\nu_{p}(a_{n})
& = \nu_{p}(a_{\rho p^{u}m})\\
& = \nu_{p}(a_{\rho})+\nu_{p}\left(\prod_{d\mid p^um}b_{\rho d}\right)\\
& = \alpha+\nu_{p}\left(\prod_{d\mid p^u}b_{\rho d}\right)+\nu_{p}\left(\prod_{\substack{d\mid p^u\\e\mid m\\e>1}}b_{\rho de}\right)
\end{align*}
Since $\nu_{p}(a_{n})=\nu_{p}(a_{\rho p^{u}m})=\nu_{p}(a_{\rho})+u$,
\begin{align*}
\alpha+u
& = \alpha+\sum_{i=1}^u\nu_{p}(b_{\rho p^i})+\nu_{p}\left(\prod_{i=1}^u\prod_{\substack{e\mid m\\e>1}}b_{\rho p^ie}\right)\\
& = \alpha+u+\nu_{p}\left(\prod_{i=1}^u\prod_{\substack{e\mid m\\e>1}}b_{\rho p^ie}\right)\\
& = \alpha+u+\sum_{i=1}^u\sum_{\substack{e\mid m\\e>1}}\nu_{p}(b_{\rho p^ie})
\end{align*}
From this, we have that $\nu_{p}(b_{\rho p^ie})=0$ for $1\leq i\leq u$ and $e\mid m$ if $e>1$. In other words, $p\mid b_k$ if and only if $k=\rho p^u$ for some non-negative integer $u$.
For the `if' part, suppose that $(a_{n})$ is a strong divisibility sequence such that $p\| b_{\rho p^u}$ but $p\nmid b_{\rho p^um}$ for $m>1$. Let $n$ be a positive integer with $n=\rho p^um$, $p\nmid m$, and let $p^\alpha\|a_\rho$. Then
\begin{align*}
\nu_{p}(a_{n})
& = \nu_{p}(a_{\rho p^um})\\
& = \nu_{p}\left(\prod_{d\mid \rho p^um}b_d\right)\\
& = \nu_{p}(a_\rho)+\nu_{p}\left(\prod_{d\mid p^um}b_{\rho d}\right)
\end{align*}
Now, separate the sum into two parts based on whether the index has a divisor of $m$ greater than $1$.
\begin{align*}
\nu_{p}(a_{n})
& = \nu_{p}(a_{\rho})+\sum_{d\mid p^u}\nu_{p}(b_{\rho d})+\sum_{d\mid p^u}\sum_{\substack{e\mid m\\e>1}}\nu_{p}(b_{\rho de})\\
& = \alpha+\sum_{i=1}^u\nu_{p}(b_{\rho p^i})+0\\
& = \alpha+\sum_{i=1}^u1\\
& = \alpha+u
\end{align*}
This proves the theorem.
\end{proof}
A corollary of \autoref{thm:expchar} is the following.
\begin{theorem}\label{thm:lte}
Let $(a_{n})$ be a sequence with the law of repetition and $(b_{n})$ be the lcm sequence of $(a_{n})$. If $m$ and $n$ are distinct positive integers, then $\gcd(b_{m},b_{n})>1$ if and only if $m/n$ is an integer power of a prime. More precisely, $p$ is a prime divisor of $\gcd(b_{m},b_{n})$ if and only if $m/n=p^{s}$ for some nonzero integer $s$.
\end{theorem}
\printbibliography
\end{document} |
\begin{document}
\title{\textbf{On the Wiener-Hopf factorization for L\'evy processes with bounded positive jumps}}
\begin{abstract} We study the Wiener-Hopf factorization for L\'evy processes with bounded positive jumps and arbitrary negative jumps. Using the results from the theory of entire functions of Cartwright class we prove that the positive Wiener-Hopf factor can be expressed as an infinite product in terms of the solutions to the equation $\psi(z)=q$, where $\psi$ is the Laplace exponent of the process. Under some additional regularity assumptions on the L\'evy measure we obtain an asymptotic expression for these solutions, which is important for numerical computations. In the case when the process is spectrally negative with bounded jumps, we
derive a series representation for the scale function in terms of the solutions to the equation $\psi(z)=q$.
To illustrate possible applications we discuss the implementation of numerical algorithms and present the results of several numerical experiments.
\noindent {\it Keywords:} L\'evy process, Wiener-Hopf factorization, entire functions of Cartwright class, distribution of the supremum, spectrally-negative processes, scale function
\noindent{\it AMS 2000 subject classification: 60G51.} \end{abstract}
\section{Introduction}\label{sec_intro}
Assume that we want to study the way in which a one-dimensional L\'evy process $X$ exits a half-line or a finite interval. For example, we might be interested in the first passage time across a barrier, the overshoot/undershoot at the first passage time, the last time that the process was closest to the barrier, the location of the process at this time, etc. These questions are usually referred to as ``exit problems'' in the literature, and they have stimulated a lot of research in recent years due to numerous applications in such diverse areas as actuarial mathematics, mathematical finance, queueing theory and optimal control.
Let us denote the supremum ${S}_t=\sup\{X_s: 0\le s \le t\}$ and infimum ${I}_t=\inf\{X_s: 0\le s \le t\}$, and let ${\textnormal e}(q)$ be an exponentially distributed random variable with parameter $q>0$, which is independent of the process $X$. It is an established fact that exit problems are closely related to the Wiener-Hopf factorization, which studies the distribution of ${S}_{{\textnormal e}(q)}$ and ${I}_{{\textnormal e}(q)}$. For example, if we know the positive Wiener-Hopf factor (which is defined as the Laplace transform of ${S}_{{\textnormal e}(q)}$), then through the Pecherskii-Rogozin identity \cite{Pecherskii} we know the joint Laplace transform of the first passage time and the overshoot. The bad news is that for general L\'evy processes the Wiener-Hopf factors cannot be obtained in closed form, therefore the best that we can do is to try to find rich enough families of L\'evy processes with special analytical properties, for which we can say something useful about the distribution of ${S}_{{\textnormal e}(q)}$ and ${I}_{{\textnormal e}(q)}$.
Let us look at the existing examples of L\'evy processes for which one can identify the Wiener-Hopf factors and the distribution of extrema. In the general case, when the process has jumps on both sides, this list includes processes with jumps having rational transform \cite{Mordecki,Pistorius} and the recently introduced meromorphic processes \cite{KuzKypJC}. The first class includes processes with hyper-exponential \cite{Cai,JP,Kou} and phase-type jumps \cite{Asmussen2,Asmussen}, while meromorphic processes include Lamperti-stable processes \cite{CaCha, Caballero2008, Chaumont2009, Patie2009}, hypergeometric processes \cite{Caballeroetal2009, KyPavS, KyPaRi}, $\beta$-processes \cite{Kuz-beta} and $\theta$-processes \cite{Kuz-theta}. In the simpler case when the process is spectrally negative (which means essentially that it has only negative jumps) it turns out that the same two classes provide analytically tractable formulas; however, in this case there also exist other interesting families, such as the processes constructed in \cite{HuKy} (see also \cite{KuKyRi}).
One may wonder what is so special about these particular processes that makes it possible to find the Wiener-Hopf factorization explicitly. It turns out that in all cases the Laplace exponent, defined as $\psi(z)=\ln ( {\mathbb E}[\exp(z X_1)])$, has some analytical structure which allows one to factorize it as a product of two functions, which are analytic in the left/right complex halfplane. It is not surprising that the analytic structure of $\psi(z)$ plays such an important role, as there is a close connection between Wiener-Hopf factorization and the Riemann boundary value problems; see \cite{Fourati}, \cite{Kuz2010c} and the references therein. For example, if the process has hyper-exponential jumps \cite{Cai}, then $\psi(z)$ is a rational function, and if $X$ is a meromorphic process then $\psi(z)$ is a meromorphic function of a very special type; in both cases these functions can be easily factorized as products of two functions. One can formulate a general ``meta-theorem'': Wiener-Hopf factorization can be obtained explicitly if and only if $\psi(z)$ can be extended to a meromorphic function in the left or right complex halfplane. This principle helps to explain why no one has yet produced an explicit Wiener-Hopf factorization for one of the processes which are widely used in mathematical finance, such as VG, CGMY/KoBoL or generalized tempered stable processes (see \cite{Cont} and the references therein for more information about these families of L\'evy processes). It turns out that in all these cases the Laplace exponent has a logarithmic or algebraic branch point in the complex plane, and, therefore, cannot be extended meromorphically. At the same time, we can use this meta-theorem to produce a large class of processes for which there is some hope to have an explicit Wiener-Hopf factorization: if the process has bounded jumps then it follows quite easily from the L\'evy-Khintchine formula that the Laplace exponent $\psi(z)$ is an entire function, and it might be possible to factorize it as a product of two functions and obtain some useful information about the Wiener-Hopf factorization.
In this paper we consider a more general class: L\'evy processes with bounded positive jumps. There are two main reasons, one theoretical and one more practical, why we are interested in studying this class of processes. First of all, one can see that this is a very large class. In a certain sense it is ``dense'' in the class of all L\'evy processes: clearly, any L\'evy measure can be approximated arbitrarily closely by truncating it at a large positive number. Therefore studying the Wiener-Hopf factorization for this class will lead to a better understanding of related results for general L\'evy processes. The second reason is that there are several situations where processes with bounded positive or negative jumps would be natural candidates for modeling purposes. One important example is the ruin problem for an insurance company which is protected by a reinsurance agreement. In this case the size of each claim is essentially capped at a fixed level, and the amount of the claim above this level is covered by the reinsurer. The value of the insurance company can be conveniently modeled by a spectrally negative L\'evy process with bounded jumps, and now we have an interesting problem of how to compute numerically such important quantities as the ruin probability, the discounted penalty function, etc.
It is instructive to draw a parallel with the results of Lewis and Mordecki \cite{Mordecki} on processes with positive jumps of rational transform (see also recent paper by Fourati \cite{Fourati} on double-sided exit problem for this class of processes). In their case the Laplace exponent of the ascending ladder process $\kappa(q,z)$ (see \cite{Kyprianou} for the definition of this object) is a rational function, with all singularities in the left half-plane $\textnormal {Re}(z)<0$. In our case it will turn out that $\kappa(q,z)$ is an entire function of a very special type: it belongs to the so-called Cartwright class (see \cite{Levin1980} and the proof of Theorem \ref{thm_main}). This makes it possible to factor it as an infinite product and to identify the Wiener-Hopf factors. There are also some similarities between the analytical structure for L\'evy processes with bounded positive jumps and meromorphic processes \cite{KuzKypJC}. In both cases the positive Wiener-Hopf factor is given as an infinite product involving the solutions to the equation $\psi(z)=q$ in the half-plane $\textnormal {Re}(z)>0$. The major difference is that in the case of meromorphic processes the solutions to the equation $\psi(z)=q$ are all real and simple, while they are complex when the process has bounded positive jumps, and this fact makes the analytical theory more interesting and the computations somewhat more challenging.
The paper is organized as follows: in Section \ref{sec_results} we present our main results on the Wiener-Hopf factorization for processes with bounded positive jumps, and we obtain an expression for the Wiener-Hopf factors as an infinite product in terms of the solutions to $\psi(z)=q$. We also study the asymptotics of these solutions, which will turn out to be very important for applications and numerical computations. In Section \ref{sec_scale_functions} we consider the spectrally negative case, and we obtain a series representation for the scale function $W^{(q)}(x)$. A brief discussion of numerical methods and the results of several numerical experiments are presented in Section \ref{sec_numerics}, while Section \ref{sec_proofs} contains the proofs of all results.
\section{Processes with bounded positive jumps}\label{sec_results}
Let us first introduce some notations and definitions. The L\'evy measure of the process $X$ will be denoted by $\Pi({\textnormal d} x)$, and we will use the following notations for its tails: $\bar \Pi^+(x)=\Pi((x,\infty))$ for $x>0$ and $\bar \Pi^-(x)=\Pi((-\infty,x))$ for $x<0$. In this paper we consider the class of processes with bounded positive jumps, thus we will assume that the L\'evy measure $\Pi$ has support on $(-\infty,k]$. Here $k$ is the right boundary of the support of $\Pi$, that is
\begin{eqnarray}\label{assmpt1}
k=\inf\{ x>0: \bar \Pi^+(x)=0\}.
\end{eqnarray} We will also assume that $k>0$, so that we exclude the spectrally negative case, which will be considered in the next section. Note that at this stage we do not impose any restrictions on the L\'evy measure on the negative half-line.
The Laplace exponent of the process $X$ is defined as $\psi({\textnormal i} z)=\ln({\mathbb E}[\exp({\textnormal i} zX_1)])$ for $z\in {\mathbb R}$, and it can be expressed by the L\'evy-Khintchine formula as follows \begin{eqnarray}\label{Levy_Khinthine2} \psi(z)=\frac12 \sigma^2 z^2 +\mu z + \int\limits_{-\infty}^{k} \left( e^{z x}-1- zx h(x) \right) \Pi({\textnormal d} x), \end{eqnarray} where $\sigma \ge 0$, $\mu \in {\mathbb R}$ and $h(x)$ is the cutoff function. When the process has jumps of bounded variation, or equivalently, when \begin{eqnarray}\label{fin_var_condition}
\int\limits_{-1}^1 |x| \Pi({\textnormal d} x) < \infty, \end{eqnarray} we will assume that $h(x)\equiv 0$, then $\mu$ can be interpreted as the linear drift of the process. When the jump part of the process has infinite variation, or equivalently, when condition (\ref{fin_var_condition}) is violated,
we will assume that $h(x)={\mathbf 1}_{\{x>-1\}}$ (or $h(x)\equiv 1$ if ${\mathbb E}[|X_1|]$ is finite). Note that formula (\ref{Levy_Khinthine2}) implies that the Laplace exponent $\psi(z)$ can be analytically continued into the half-plane $\textnormal {Re}(z)>0$. Also note that $\psi(z)$ is real when $z>0$, and that $\overline{ \psi(z)}=\psi(\overline{z})$. In particular, the last property implies that if for some $q\in {\mathbb R}$ the number $z \in {\mathbb C}$ is a solution to the equation $\psi(z)=q$, then so is $\overline{z}$.
Everywhere in this paper we will denote the first quadrant of the complex plane as \begin{eqnarray*} {\mathcal Q}_1:=\{z\in {\mathbb C}: \; \textnormal {Re}(z)>0, \; \textnormal {Im}(z)>0\}, \end{eqnarray*} and we will always use the principal branch of the logarithm and the power function, that is the branch cut will be taken along the negative half-line and for all $z\in {\mathbb C}$ we have $\textnormal {arg}(z) \in (-\pi, \pi]$.
\subsection{Analytic properties of the Wiener-Hopf factors}\label{subsec_results}
The following theorem is our first main result. It describes the analytic structure of the Wiener-Hopf factors for processes with bounded positive jumps. \begin{theorem}\label{thm_main} Assume that $q>0$. Equation $\psi(z)=q$ has a unique positive solution $\zeta_0$ and infinitely many solutions in ${\mathcal Q}_1$, which we denote by $\{\zeta_n\}_{n\ge 1}$. Assume that $\zeta_n$ are arranged
in order of increasing absolute value.
The following statements are true:
\begin{itemize}
\item[(i)] $\zeta_0$ has multiplicity one and $\textnormal {Re}(\zeta_n)\ge \zeta_0$ for all $n\ge 1$.
\item[(ii)] The series $\sum_{n\ge 1} \textnormal {Re}\left(\zeta_n^{-1} \right)$ converges.
\item[(iii)] All of the numbers $\{\zeta_n\}_{n\ge 1}$, except possibly those of a set of zero density, lie inside an arbitrarily small angle
$\pi/2-\epsilon<\textnormal {arg}(z)<\pi/2$, and the density of zeros inside this angle is equal to
\begin{eqnarray}\label{density_of_zeros}
\lim\limits_{r\to +\infty} \frac{\#\{\zeta_n : |\zeta_n|<r \; \textnormal{ and } \; \pi/2-\epsilon<\textnormal {arg}(\zeta_n)<\pi/2\} }{r}=\frac{k}{2\pi}.
\end{eqnarray}
\item[(iv)] The Wiener-Hopf factors can be identified as follows: for $\textnormal {Re}(z)\ge 0$
\begin{eqnarray}\label{wh_factor} \begin{cases} \displaystyle \phi_q^{\plus}({\textnormal i} z):={\mathbb E} \left[ e^{- z S_{{\textnormal e}(q)}} \right]= e^{\frac{ k z}2 } \left( 1+\frac{z}{\zeta_0}\right)^{-1}
\prod\limits_{n\ge 1} \left(1+\frac{ z}{\zeta_n}\right)^{-1}\left(1+\frac{z}{\bar \zeta_n}\right)^{-1}, \\
\displaystyle \phi_q^{\minus}(-{\textnormal i} z):= {\mathbb E} \left[ e^{z I_{{\textnormal e}(q)}} \right]=\frac{q}{q-\psi(z)} \frac{1}{\phi_q^{\plus}(-{\textnormal i} z)}. \end{cases}
\end{eqnarray}
\end{itemize} \end{theorem}
The proof of Theorem \ref{thm_main} can be found in Section \ref{sec_proofs}.
\label{page_examples} Let us present a very simple example, which illustrates the results presented in Theorem \ref{thm_main}. Consider a process $X_t=k N_t$, where $N_t$ is the standard Poisson process. It is clear that $X$ is a process with bounded positive jumps, and that its Laplace exponent is $\psi(z)=\exp(kz)-1$. Solving the equation $\psi(z)=q$ for $q>0$, we find that \begin{eqnarray*} \zeta_0=\frac{\ln(1+q)}{k}, \;\;\; \zeta_n=\frac{\ln(1+q)}{k}+\frac{2n\pi {\textnormal i}}{k}, \;\;\; n\ge 1. \end{eqnarray*} It is an easy exercise to verify that the series $\sum_{n\ge 1} \textnormal {Re}\left(\zeta_n^{-1} \right)$ converges, thus we have checked part (ii) of Theorem \ref{thm_main}. Next, all the zeros belong to the vertical line $\textnormal {Re}(z)=\zeta_0$; they are equidistant and the spacing between them is equal to $2\pi/k$. This confirms statement (iii): all zeros (except for a finite number) lie inside an arbitrarily small angle
$\pi/2-\epsilon<\textnormal {arg}(z)<\pi/2$, and the density of zeros inside this angle, which is inversely proportional to the spacing, is equal to $k/(2\pi)$.
Finally, since $X$ is a subordinator, we have $S_{{\textnormal e}(q)}\equiv X_{{\textnormal e}(q)}$, thus
\begin{eqnarray*}
{\mathbb E} \left[ e^{- z S_{{\textnormal e}(q)}} \right]={\mathbb E} \left[ e^{- z X_{{\textnormal e}(q)}} \right]=\frac{q}{q-\psi(-z)}=
\frac{q}{1+q-e^{-kz}}=e^{\frac{kz}2} \frac{\sinh\left(\frac12 \ln(1+q)\right)}{\sinh\left(\frac12( kz+ \ln(1+q))\right)},
\end{eqnarray*}
and we see that the infinite product representation in (\ref{wh_factor}) is equivalent to the well-known
infinite product formula for the hyperbolic sine function.
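As a purely numerical sanity check (not part of the original argument), one can truncate the infinite product in (\ref{wh_factor}) for this Poisson example and compare it with the closed-form expression above; the parameter values in the following Python sketch are arbitrary.
\begin{verbatim}
import cmath, math

k, q, z = 1.0, 2.0, 0.7 + 0.3j                # illustrative parameters
L = math.log(1.0 + q)
zeta = lambda n: (L + 2j * math.pi * n) / k   # roots of exp(k*zeta) - 1 = q

closed = q / (1.0 + q - cmath.exp(-k * z))    # E[exp(-z S_{e(q)})], closed form

prod = cmath.exp(k * z / 2) / (1 + z / zeta(0))
for n in range(1, 200000):                    # truncated infinite product
    zn = zeta(n)
    prod /= (1 + z / zn) * (1 + z / zn.conjugate())

print(closed, prod)    # the two values agree to several digits
\end{verbatim}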
It is also easy to verify the validity of Theorem \ref{thm_main} for a more general class of processes with double-sided jumps. Let us assume that for some $h>0$ the measure $\Pi({\textnormal d} x)$ is supported on a finite subset of a lattice $h {\mathbb Z}$, that is there exist $m,l \in {\mathbb N}$ such that the support of $\Pi({\textnormal d} x)$ is equal to $\{-mh, -(m-1)h,\dots,-h,h,\dots,(l-1)h,lh\}$. In this case the right boundary of the support of the L\'evy measure is $k=lh$. Let $X$ be a compound Poisson process defined by the measure $\Pi({\textnormal d} x)$ (note that $X$ can be constructed as a linear combination of $m+l$ independent Poisson processes). From the L\'evy-Khintchine formula \eqref{Levy_Khinthine2} we find that the Laplace exponent of $X$ is given by \begin{eqnarray*} \psi(z)=\sum\limits_{j=1}^m \Pi(\{-jh\})\left( e^{-jh z}-1 \right)+\sum\limits_{j=1}^l \Pi(\{jh\})\left( e^{jh z}-1 \right).
\end{eqnarray*} Note that the function $\psi(\ln(w)/h)$ is a rational function; therefore, using the change of variables $z=\ln(w)/h$ the equation $\psi(z)=q$ can be transformed into a polynomial equation of degree $m+l$. It is possible to prove (we leave it as an exercise) that this polynomial equation will have exactly $m$ solutions inside the open unit circle $\{w\in {\mathbb C}: |w|<1\}$ and exactly $l$ solutions $\{w_1,\dots,w_l\}$ outside the closed unit circle. The solutions $\zeta_n$ to the original equation $\psi(z)=q$ can now be found by solving the equation $\exp(hz)=w_j$; thus they are given by \begin{eqnarray*} \left\{ \frac{\ln(w_j)}h+\frac{2n\pi {\textnormal i}}{h}; \; n\in {\mathbb Z}, \; 1\le j \le l\right\}. \end{eqnarray*} Again, it is easy to check that the series $\sum_{n\ge 1} \textnormal {Re}\left(\zeta_n^{-1} \right)$ converges. Also, the solutions lie on $l$ vertical lines; therefore all of them (except for a finite number) lie inside an arbitrarily small angle $\pi/2-\epsilon<\textnormal {arg}(z)<\pi/2$, and the density of zeros inside this angle is equal to $l\times h/(2\pi)=k/(2\pi)$. The infinite product representation (\ref{wh_factor}) for the positive Wiener-Hopf factor is again equivalent to elementary infinite product expressions for certain trigonometric functions.
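For completeness, the following Python sketch carries out this reduction numerically for an arbitrary (illustrative) choice of the lattice spacing $h$, the atom masses and $q$, and counts the roots of the resulting polynomial inside and outside the unit circle; as claimed above, one expects exactly $m$ and $l$ of them, respectively.
\begin{verbatim}
import numpy as np

# Lattice compound Poisson example: atoms of the Levy measure at
# {-m*h, ..., -h, h, ..., l*h}; the masses, h and q below are arbitrary.
h, q = 0.5, 1.5
lam_neg = [0.7, 0.3]          # masses at -h, -2h   (so m = 2)
lam_pos = [0.4, 0.2, 0.1]     # masses at  h, 2h, 3h  (so l = 3)
m, l = len(lam_neg), len(lam_pos)

# After the substitution w = exp(h z) and multiplication by w^m,
# psi(z) = q becomes a polynomial equation of degree m + l in w.
coeffs = np.zeros(m + l + 1)               # index i <-> coefficient of w^{(m+l)-i}
for j, lam in enumerate(lam_pos, start=1):
    coeffs[(m + l) - (m + j)] += lam       # lam_{+j} * w^{m+j}
for j, lam in enumerate(lam_neg, start=1):
    coeffs[(m + l) - (m - j)] += lam       # lam_{-j} * w^{m-j}
coeffs[l] -= q + sum(lam_neg) + sum(lam_pos)   # -(q + total mass) * w^m

w = np.roots(coeffs)
print(sum(abs(w) < 1), "roots inside the unit circle,",   # expect m = 2
      sum(abs(w) > 1), "outside")                         # expect l = 3
\end{verbatim}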
In the above two examples we were able to describe the solutions to the equation $\psi(z)=q$ in a very precise form, but in the general case this will be a transcendental equation and there is little hope to obtain as much information about $\zeta_n$. However, as our next result shows, we can obtain some very useful information about the asymptotic behavior of $\zeta_n$, provided that the Laplace exponent $\psi(z)$ has regular growth as $z\to \infty$. The connection between the regularity of growth and distribution of the zeros of an entire function is well known; see, for example, chapters 2 and 3 in \cite{Levin1980}. The main idea is that analytic functions which grow regularly at infinity also enjoy certain regularity in the distribution of zeros (and in fact the opposite is also true). In the following theorem we impose a rather strong regularity condition on the growth of the Laplace exponent in the half-plane $\textnormal {Re}(z)>0$ in order to obtain an explicit asymptotic approximation for the solutions to $\psi(z)=q$. This asymptotic expression for $\zeta_n$ will prove to be very useful in the next section, where we derive a series representation for the scale function $W^{(q)}(x)$, and later in Section \ref{sec_numerics}, where we discuss numerical algorithms.
\begin{theorem}\label{thm_asymptotics} Assume that \begin{eqnarray}\label{psi_asymptotics} \psi(z)=Ae^{kz}z^{-a}+Bz^b+ o\left(e^{kz}z^{-a}\right) + o \left(z^b\right), \end{eqnarray} as $z\to \infty$ in the domain ${\mathcal Q}_1$. Let us also assume that $a\ge 0$ and $b>0$. Then all sufficiently large solutions to $\psi(z)=q$ are simple and there exists $m \in {\mathbb Z}$ such that \begin{eqnarray}\label{zeta_asymptotics}
\zeta_{n+m}&=&\frac{1}{k} \left[\ln \left( \bigg | \frac{B}{A} \bigg| \right)+(a+b) \ln \left(\frac{2n\pi }{k}\right) \right] \\ \nonumber &+& \frac{{\textnormal i} }{k} \left[ \textnormal {arg}\left( \frac{B}{A} \right)+\left( \frac12 (a+b) + 2n+1\right) \pi \right]+o(1) \end{eqnarray} as $n \to +\infty$. \end{theorem}
\noindent The proof of Theorem \ref{thm_asymptotics} is presented in Section \ref{sec_proofs}.
\begin{remark} Note that formula (\ref{zeta_asymptotics}) implies that $\zeta_n=(a+b)\ln(n)/k+2 \pi n {\textnormal i} /k+O(1)$ as $n\to \infty$, which again confirms statements (ii) and (iii) of Theorem \ref{thm_main}: the zeros cluster ``close'' to the imaginary axis, or to say it more precisely, $\textnormal {arg}(\zeta_n)\nearrow \pi/2$ as $n\to \infty$; and secondly, the density of zeros in ${\mathcal Q}_1$ (which is inversely proportional to the average spacing between them) is equal to $k/(2\pi)$. \end{remark}
Condition $b>0$ in Theorem \ref{thm_asymptotics} implies that $\psi({\textnormal i} z)\to \infty$ as $z\to \infty$, $z\in {\mathbb R}$, therefore $X$ cannot be a compound Poisson process (see Proposition 2 in \cite{Bertoin}). This shows that the two examples considered on page \pageref{page_examples} do not satisfy the conditions of Theorem \ref{thm_asymptotics}, but if we take these compound Poisson processes and add a drift (or Brownian motion with drift) then it is easy to check that the Laplace exponent of this perturbed process will satisfy \eqref{psi_asymptotics}. A natural question then is to describe sufficient conditions on the triple $\{\mu,\sigma,\Pi\}$, which
defines the Laplace exponent via (\ref{Levy_Khinthine2}), which will ensure that $\psi(z)$ satisfies asymptotic relation (\ref{psi_asymptotics}). Below we present a set of sufficient conditions.
\begin{definition} We will say that a real function $f(x)$ is piecewise $n$-times continuously differentiable on an interval $[a,b]$ if there exists a finite set of numbers $\{x_k\}_{1\le k \le m}$, such that \begin{itemize}
\item[(i)] $a=x_1<x_2<\dots<x_m=b$,
\item[(ii)] $f \in {\mathcal C}^{n}([a,b] \setminus \{x_1,x_2,\dots,x_m\})$,
\item[(iii)] for each $j=0,1,\dots,n$ and $k=1,2,\dots,m$ there exist left and right limits $f^{(j)}(x_k-)$ and $f^{(j)}(x_k+)$. \end{itemize} We will use the notation $f \in {\mathcal {PC}}^{n}[a,b]$. In the case of an open interval $(a,b)$ the definition of ${\mathcal PC}^{n}(a,b)$ is very similar, except that we require $a<x_1$ and $x_m<b$. Similarly, one can define the remaining cases of intervals $(a,b]$ and $[a,b)$. \end{definition}
\begin{definition}\label{definition_Levy_measure} We say that a L\'evy measure is {\it regular} if the following two conditions are satisfied: \begin{itemize}
\item[(1)] There exist constants $\hat C, \hat \alpha$ and $\{\hat C_j,\hat \alpha_j\}_{1\le j \le \hat m}$ such that \begin{eqnarray}\label{pibar_minus_assmptn}
\bar \Pi^-(x)-\hat C|x|^{-\hat \alpha}-\sum\limits_{j=1}^{\hat m} \hat C_j |x|^{-\hat \alpha_j} = O(1),\;\;\; x\to 0^-, \end{eqnarray} where $\hat \alpha, \hat \alpha_j \in (-\infty,2)\setminus\{0,1\}$ and $\hat \alpha_j<\hat \alpha$. \\
\item[(2)] There exists $n \in {\mathbb N}\cup\{0\}$ such that
\begin{itemize}
\item[(2a)] for some constants $C, \alpha$ and $\{C_j,\alpha_j\}_{1\le j \le m}$ we have \begin{eqnarray}\label{pibar_plus_assmptn} \bar \Pi^+(x)-Cx^{-\alpha}-\sum\limits_{j=1}^{m} C_j x^{-\alpha_j} \in {\mathcal {PC}}^{n+1}[0,k], \end{eqnarray} where $\alpha, \alpha_j \in (-\infty,2)\setminus\{0,1\}$ and $\alpha_j<\alpha$;
\item[(2b)] $\bar \Pi^+{}^{(n)}(k-) \ne 0$;
\item[(2c)] $\bar \Pi^+(x)\in {\mathcal C}^{n-1}({\mathbb R}^+)$ (this condition is not needed for $n=0$).
\end{itemize} \end{itemize} \end{definition}
\begin{remark} Note that conditions (1) and (2a) imply that the Blumenthal-Getoor index \begin{eqnarray}
\beta(\Pi)=\inf\left\{ \gamma>0 : \int_{-1}^1 |x|^{\gamma} \Pi ({\textnormal d} x) < \infty \right \} \end{eqnarray} is equal to $\beta(\Pi)=\max(\alpha,\hat \alpha,0)$. \end{remark}
Definition \ref{definition_Levy_measure} is not very easy to interpret, therefore we will try to give some intuition behind these conditions.
Conditions (1) and (2a) guarantee that the L\'evy measure is sufficiently well-behaved in the neighborhood of zero. This will help us to ensure
that the main term of $\psi(z)$ grows as $z\to \infty$ exactly as $z^b$, and does not contain any logarithmic terms.
Conditions (2b) and (2c) are slightly harder to interpret. Essentially,
they imply that the L\'evy measure restricted to ${\mathbb R}^+$ has its ``worst'' possible singularity at the right-end point of its support. Let us consider the following example, where conditions (2b) and (2c) are violated.
\begin{example} Assume that the L\'evy measure is given by \begin{eqnarray}\label{example_Pi_dx} \Pi({\textnormal d} x)= {\mathbf 1}_{\{x<0\}}e^{x }{\textnormal d} x + {\mathbf 1}_{\{0<x<4\}} {\textnormal d} x + \delta_3({\textnormal d} x), \end{eqnarray} so that $\Pi({\textnormal d} x)$ has an atom of mass one at $x=3$. Because of the atom at $x=3$ we know that $\bar \Pi^+(x)$ is not continuous, therefore we are forced to take $n=0$ in the Definition \ref{definition_Levy_measure}. But since we have no atom at $x=k=4$, we find that $\bar \Pi^+(k-)=0$, which violates condition (2b), thus we conclude that the measure $\Pi({\textnormal d} x)$ is not regular.
Next, let $X$ be a process which has a L\'evy measure \eqref{example_Pi_dx} and linear drift $\mu=1$. We will check that the Laplace exponent of the process $X$ does not satisfy \eqref{psi_asymptotics}.
We compute the Laplace exponent using the L\'evy-Khintchine formula \eqref{Levy_Khinthine2} and find that it has asymptotics \begin{eqnarray*} \psi(z)=z+\frac{e^{4z}}z+e^{3z}+O(1),
\end{eqnarray*} as $z\to \infty$, $\textnormal {Re}(z)>0$. Now, it is easy to see that in the domain $0<\textnormal {Re}(z)<\frac12\ln|z|$ we will have
$\psi(z)=z+e^{3z}+o(e^{3z})$, while in the domain $\textnormal {Re}(z)>2\ln|z|$ we have $\psi(z)=e^{4z}/z+o(e^{4z}/z)$. This implies that we cannot find a single uniform asymptotic formula for $\psi(z)$ as in \eqref{psi_asymptotics}. This happens because the ``worst'' singularity of $\Pi({\textnormal d} x)$, which is the atom at $x=3$, is not located at the right boundary $x=k=4$. One can also check that the asymptotic expression \eqref{psi_asymptotics} will be satisfied if we replace $\delta_3({\textnormal d} x)$ by $\delta_4({\textnormal d} x)$ in \eqref{example_Pi_dx} or if we add a second atom at $x=4$, and at the same time this will also give us a regular L\'evy measure according to Definition \ref{definition_Levy_measure}. \end{example}
In the following example we exhibit a large family of regular L\'evy measures (it is an easy exercise to verify all the conditions of Definition \ref{definition_Levy_measure}). \begin{example} Assume that the L\'evy measure $\Pi({\textnormal d} x)$ has a density $\pi(x)$ given by \begin{eqnarray}\label{pi_example} \pi(x)=
{\mathbf 1}_{\{x<0\}}\hat f(x)|x|^{-1-\hat \alpha}+ {\mathbf 1}_{\{0<x<k\}} f(x)x^{-1- \alpha} , \end{eqnarray} where $\alpha, \hat\alpha \in (-\infty,2)\setminus\{0,1\}$ and functions $f$, $\hat f$ satisfy the following conditions:(i) $f(x)$ and $\hat f(x)$ can be represented by convergent Taylor series in some neighborhood of zero; (ii) $f(x)$ is ${\mathcal {PC}}^1[0,k]$; (iii) $f(k-)>0$. Then $\Pi({\textnormal d} x)$ is regular. \end{example}
The above example shows that there are indeed many interesting L\'evy processes with regular L\'evy measure. For example, we can take one of the widely used processes in mathematical finance, such as CGMY/KoBoL or generalized tempered stable (see \cite{Cont}), truncate its L\'evy measure at any positive number and we will obtain a regular L\'evy measure.
The next Proposition shows that if the L\'evy process has a regular L\'evy measure, then its Laplace exponent satisfies the asymptotic expansion (\ref{psi_asymptotics}) and therefore the roots $\zeta_n$ have simple asymptotic approximation given by \eqref{zeta_asymptotics}. \begin{proposition}\label{prop_asymptotic_psi}
Assume that $X$ is not a compound Poisson process and that the L\'evy measure of $X$ is regular. Then the asymptotic expression (\ref{psi_asymptotics}) is true, with parameters $ A=(-1)^n \bar \Pi^+{}^{(n)}(k-)$, and $a=n$.
The parameters $B$ and $b$ can be identified as follows: \begin{itemize}
\item[(i)] if $\sigma>0$, then $B=\sigma^2/2$ and $b=2$,
\item[(ii)] if the process has paths of bounded variation and $\mu\ne 0$, then $B=\mu$ and $b=1$. \end{itemize} In the remaining cases, when the process has paths of unbounded variation and $\sigma=0$, or when the process has paths of bounded variation and $\mu=0$, we have $b=\beta(\Pi)=\max(\alpha,\hat \alpha)$ and \begin{eqnarray*} B= \begin{cases}
-C e^{-\pi {\textnormal i} \alpha} \Gamma(1-\alpha), \; &\textnormal{ if } \; \alpha>\hat \alpha, \\ -\hat C \Gamma(1-\hat \alpha), \; &\textnormal{ if } \; \alpha<\hat \alpha,\\ -(C e^{-\pi {\textnormal i} \alpha} + \hat C)\Gamma(1-\alpha), \; &\textnormal{ if } \; \alpha=\hat \alpha. \end{cases} \end{eqnarray*} Moreover, the asymptotic expression for $\psi'(z)$ can be obtained from (\ref{psi_asymptotics}) by taking derivative of the right-hand side. \end{proposition}
\noindent The proof of Proposition \ref{prop_asymptotic_psi} can be found in Section \ref{sec_proofs}.
\subsection{Partial fraction decomposition and distribution of $S_{{\textnormal e}(q)}$}\label{subsection_conjecture}
As we have mentioned in Section \ref{sec_intro}, in the case when $X$ has positive jumps of rational transform the positive Wiener-Hopf factor is a rational function (see \cite{Mordecki}). By performing the partial fraction decomposition
of this rational function and inverting the Laplace transform one can obtain the distribution of $S_{{\textnormal e}(q)}$ explicitly.
The same procedure works for meromorphic processes (see \cite{KuzKypJC}), though in this case we must work with meromorphic functions
instead of rational functions, and things become slightly more technical. In our case, when the process has bounded positive jumps, formula \eqref{wh_factor} tells us that the positive Wiener-Hopf factor $\phi_q^{\plus}({\textnormal i} z)$ is also a meromorphic function, thus we might hope to follow the same procedure and obtain a series representation for the distribution of $S_{{\textnormal e}(q)}$, which would be very useful for applications. Unfortunately it turns out that proving existence of the partial fraction decomposition for $\phi_q^{\plus}({\textnormal i} z)$ of the form \eqref{wh_factor} is a much harder problem, and we were not able to give a completely rigorous proof of such a result or to find such a result in the existing literature. However, if one assumes that such a partial fraction decomposition exists, it is rather easy to obtain the form of its coefficients and
the resulting expression for the distribution of $S_{{\textnormal e}(q)}$. Let us sketch here the main steps, and
later, in Section \ref{sec_numerics} we perform several numerical experiments which seem to confirm our conjecture.
\label{discussion_simple_zeros}
First of all, it is very likely that all the zeros $\{\zeta_n\}_{n\ge 1}$ of the function $\psi(z)-q$ are simple. This was proved in Theorem \ref{thm_asymptotics} for large zeros and for $\zeta_0$. For other roots one could use the following (rather informal) argument. We know that $z$ is a solution of $\psi(z)=q$ with multiplicity greater than one if and only if $\psi'(z)=0$. We can rephrase this statement: the equation $\psi(z)=q$ has a solution of multiplicity greater than one if and only if $q=\psi(\zeta)$, where $\zeta$ is a root of $\psi'(z)$. But $\psi'(z)$ is analytic in the half-plane $\textnormal {Re}(z)>0$, thus it has a discrete set of zeros, and for any given complex root $\zeta$ of $\psi'(z)$ it is very unlikely that $\psi(\zeta)$ will be a real positive number; thus it is very unlikely that there exists $q>0$ such that $\psi(z)-q$ has a multiple root. Even if such a value of $q$ exists, we see that in the worst possible case, there can be only finitely many such values on any compact subset of ${\mathbb R}$. This shows that the assumption that all the zeros $\{\zeta_n\}_{n\ge 1}$ are simple should not be very restrictive for practical purposes. However, of course this fact would require a rigorous proof in future work.
Next, assuming that $0$ is regular for $(0,\infty)$, or equivalently, that the distribution of $S_{{\textnormal e}(q)}$ has no atom at zero, we must have $\phi_q^{\plus}({\textnormal i} z)={\mathbb E}[\exp(-z S_{{\textnormal e}(q)})]\to 0$ as $\textnormal {Re}(z)\to +\infty$, thus it is reasonable to expect that the partial fraction decomposition for $\phi_q^{\plus}({\textnormal i} z)$ should be of the form \begin{eqnarray}\label{phiqp_partial_fractions}
\phi_q^{\plus}({\textnormal i} z)={\mathbb E}\left[e^{-z S_{{\textnormal e}(q)}}\right]= e^{\frac{ k z}2 } \left( 1+\frac{z}{\zeta_0}\right)^{-1}
\prod\limits_{n\ge 1} \left(1+\frac{ z}{\zeta_n}\right)^{-1}\left(1+\frac{z}{\bar \zeta_n}\right)^{-1}=
\frac{a_0}{z+\zeta_0}+ \sum\limits_{n\ge 1} \left[ \frac{a_n}{z+\zeta_n}+\frac{\bar a_n}{z+\bar \zeta_n} \right] \end{eqnarray} where $a_n={\textnormal{Res}}(\phi_q^{\plus}({\textnormal i} z): \; z=-\zeta_n)$ for $n\ge 0$. Using the above infinite product representation we can easily compute the residues at points $\zeta_n$ and obtain
\begin{eqnarray*} a_0=\zeta_0e^{-\frac12 k \zeta_0} \prod\limits_{m\ge 1} \bigg | 1-\frac{\zeta_0}{\zeta_m} \bigg |^{-2}, \end{eqnarray*} and
\begin{eqnarray*} a_n=\frac{\zeta_0 |\zeta_n|^2 e^{-\frac12 k \zeta_n}}{2\textnormal {Im}(\zeta_n)(\zeta_n-\zeta_0)}
\prod\limits_{\substack{m\ge 1 \\ m\ne n}} \left[ \left(1-\frac{\zeta_n}{\zeta_m}\right) \left(1-\frac{\zeta_n}{\bar \zeta_m}\right) \right]^{-1}. \end{eqnarray*} Therefore, provided that there exists a partial fraction decomposition of the form (\ref{phiqp_partial_fractions}), we can use the uniqueness of Laplace transform and conclude that the density of the supremum ${S}_{{\textnormal e}(q)}$ should be given by the following infinite series
\begin{eqnarray}\label{distribution_S_ee(q)}
\frac{{\textnormal d} }{{\textnormal d} x} {\mathbb P}({S}_{{\textnormal e}(q)}\le x) = a_0 e^{- \zeta_0 x} +
2\sum\limits_{n\ge 1} \textnormal {Re}\left[a_n e^{- \zeta_n x} \right].
\end{eqnarray} Again, we emphasize that at this point the existence of a partial fraction decomposition (\ref{phiqp_partial_fractions}) is just a conjecture, which would have to be proven rigorously in future work.
\section{Scale functions for spectrally negative processes with bounded jumps}\label{sec_scale_functions}
Everywhere in this section we will assume that $Y$ is a spectrally negative L\'evy process with bounded jumps, so that the L\'evy measure $\Pi_Y({\textnormal d} x)$ is supported on the interval $[-k,0)$ where $k>0$, and $k$ is the smallest such number. From the L\'evy-Khintchine formula (\ref{Levy_Khinthine2}) it follows that in this case the Laplace exponent $\psi_Y(z)=\ln {\mathbb E} [\exp(z Y_1)]$ is given by \begin{eqnarray*} \psi_Y(z)=\frac12 \sigma^2 z^2 +\mu z + \int\limits_{-k}^{0} \left( e^{z x}-1- zx h(x) \right) \Pi_Y({\textnormal d} x), \end{eqnarray*} and we see that $\psi_Y(z)$ is an entire function which is convex for real values of $z$. Since $\psi_Y(0)=0$, it is clear that for each $q>0$ the equation $\psi_Y(z)=q$ has a unique positive solution $z=\Phi(q)$, and in fact it is known from the general theory of spectrally negative processes that this is a unique solution in the half-plane $\textnormal {Re}(z)>0$, see chapter 8 in \cite{Kyprianou}.
For $q>0$ the scale function $W^{(q)}(x)$ is defined as follows: $W^{(q)}(x)=0$ for $x<0$ and on $[0,\infty)$ it is characterized via the Laplace transform identity \begin{eqnarray}\label{def_W^q} \int\limits_{0}^{\infty} e^{-zx} W^{(q)}(x) {\textnormal d} x = \frac{1}{\psi_Y(z)-q}, \;\;\; \textnormal {Re}(z) > \Phi(q). \end{eqnarray} The scale function can be considered as the main building block for the vast majority of fluctuation identities for spectrally negative processes, see \cite{Kyprianou,KuKyRi} for many examples of such identities. Here we will present one fundamental identity, which is related to the exit of the process $Y$ from an interval. If we define the first passage time $\tau_a^+=\inf\{t>0: \; Y_t>a\}$ and similarly $\tau_0^-=\inf\{t>0: \; Y_t<0\}$, then Theorem 8.1 in \cite{Kyprianou} tells us that \begin{eqnarray*} {\mathbb E}_x\left[e^{-q\tau_a^+} {\mathbf 1}_{\{\tau_a^+<\tau_0^-\}}\right]=\frac{W^{(q)}(x)}{W^{(q)}(a)}, \;\;\; x\le a, \; q\ge 0. \end{eqnarray*} In fact, this identity justifies the name ``scale function'': we see that $W^{(q)}(x)$ plays an analogous role to scale function for diffusions.
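For orientation, and although it falls outside the bounded-jump setting of this paper, it may help to recall the jump-free example of a Brownian motion with drift, where everything can be computed by hand. If $\psi_Y(z)=\frac12\sigma^2z^2+\mu z$, then $\psi_Y(z)=q$ has exactly two roots, $\Phi(q)=(-\mu+\sqrt{\mu^2+2q\sigma^2})/\sigma^2$ and $-\zeta_0=(-\mu-\sqrt{\mu^2+2q\sigma^2})/\sigma^2$, and a partial fraction decomposition of $1/(\psi_Y(z)-q)$ in (\ref{def_W^q}) gives
\begin{eqnarray*}
W^{(q)}(x)=\frac{e^{\Phi(q)x}}{\psi_Y'(\Phi(q))}+\frac{e^{-\zeta_0 x}}{\psi_Y'(-\zeta_0)}
=\frac{e^{\Phi(q)x}-e^{-\zeta_0 x}}{\sqrt{\mu^2+2q\sigma^2}}, \qquad x\ge 0,
\end{eqnarray*}
since $\psi_Y'(\Phi(q))=\sqrt{\mu^2+2q\sigma^2}=-\psi_Y'(-\zeta_0)$. The series representation obtained below can be viewed as the analogue of this two-term formula, with one additional (complex) exponential term for each root $\zeta_n$.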
Our main goal in this section is to obtain an expression for the scale function $W^{(q)}(x)$ in terms of the Laplace exponent and the roots of the entire function $\psi_Y(z)-q$. We will consider spectrally negative L\'evy processes, whose L\'evy measure satisfies the following definition.
\begin{definition}\label{def_regular_spec_negative}
We say that the L\'evy measure of a spectrally negative process $Y$ is {\it regular} if there exists $n \in {\mathbb N}\cup\{0\}$ such that \begin{itemize}
\item[(a)] for some constants $C, \alpha$ and $\{C_j,\alpha_j\}_{1\le j \le m}$ we have \begin{eqnarray*}
\bar \Pi^-(x)-C|x|^{-\alpha}-\sum\limits_{j=1}^{m} C_j |x|^{-\alpha_j} \in {\mathcal {PC}}^{n+1}[-k,0], \end{eqnarray*} where $\alpha, \alpha_j \in (-\infty,1) \cup (1,2)$ and $\alpha_j<\alpha$;
\item[(b)] $\bar \Pi^-{}^{(n)}(-k^+) \ne 0$;
\item[(c)] $\bar \Pi^-(x)\in {\mathcal C}^{n-1}({\mathbb R}^-)$ (this condition is not needed for $n=0$). \end{itemize} \end{definition}
By considering the dual process $\hat Y=-Y$ and using Proposition \ref{prop_asymptotic_psi} we see that if $Y$ has a regular L\'evy measure then its Laplace exponent satisfies \begin{eqnarray}\label{psi_Y_asymptotics} \psi_Y(-z)=Ae^{kz}z^{-a}+Bz^b+ o\left(e^{kz}z^{-a}\right) + o \left(z^b\right) \end{eqnarray} as $z\to \infty$ in the domain ${\mathcal Q}_1$, where $a\ge 0$ and $b>0$. Moreover, the parameters in the asymptotic expression (\ref{psi_Y_asymptotics}) are given by \begin{eqnarray}\label{eqn_AaBb}
\begin{cases} & A=\bar \Pi^- {}^{(n)}(-k^+), \; \textnormal{ and } \; a=n, \\
& B=\frac{\sigma^2}2, \; \textnormal{ and } \; b=2, \; \textnormal{ if } \sigma>0, \\
& B=-\mu, \; \textnormal{ and } \; b=1, \; \textnormal{ if } \sigma=0 \textnormal{ and } \alpha<1, \\
& B=-C e^{-\pi {\textnormal i} \alpha} \Gamma(1-\alpha), \; \textnormal{ and } \; b=\alpha, \; \textnormal{ if } \sigma=0 \textnormal{ and } \alpha>1. \end{cases}
\end{eqnarray}
Theorems \ref{thm_main} and \ref{thm_asymptotics} tell us that the equation $\psi_Y(-z)=q$ has infinitely many solutions $\{\zeta_n\}_{n\ge 0}$ in ${\mathcal Q}_1$; moreover, the first solution $\zeta_0$ is real and positive, $\textnormal {Re}(\zeta_n) \ge \zeta_0$ for $n\ge 1$, and the large solutions $\zeta_n$ satisfy the asymptotic relation (\ref{zeta_asymptotics}) with constants $A$, $a$, $B$ and $b$ as given in (\ref{eqn_AaBb}).
The next Theorem is our main result in this section. It provides a series representation for the scale function $W^{(q)}(x)$ in terms of the Laplace exponent $\psi_Y(z)$ and the numbers $\zeta_n$. Its proof can be found in Section \ref{sec_proofs}.
\begin{theorem}\label{thm_w^q} Assume that $q>0$, or that $q=0$ and $\Phi(0)>0$. If the L\'evy measure of $Y$ is regular and all solutions to $\psi_Y(z)=q$ are simple, then for $x>0$ we have
\begin{eqnarray}\label{eqn_W^q}
W^{(q)}(x)=\frac{e^{\Phi(q)x}}{\psi_Y'(\Phi(q))}+\frac{e^{- \zeta_0 x}}{\psi_Y'(-\zeta_0)}+
2\sum\limits_{n\ge 1} \textnormal {Re} \left[ \frac{e^{- \zeta_n x}}{\psi_Y'(-\zeta_n)} \right],
\end{eqnarray} where the series converges uniformly on $[\epsilon,\infty)$ for every $\epsilon>0$. \end{theorem}
Formula \eqref{eqn_W^q} is in fact very similar to the corresponding expression for the scale function for meromorphic processes, see \cite{KuMo} and \cite{KuKyRi}.
In the very unlikely case that some of the solutions to $\psi_Y(z)=q$ are not simple (see the discussion on page \pageref{discussion_simple_zeros}) the expression in the right-hand side of (\ref{eqn_W^q}) would have to be modified. The coefficients $\exp(- \zeta_n x)/\psi_Y'(-\zeta_n)$ are the residues of $\exp(zx)/(\psi_Y(z)-q)$ at $z=-\zeta_n$ (see the proof of Theorem \ref{thm_w^q} in Section \ref{sec_proofs}), and these coefficients would have to be appropriately modified if $\zeta_n$ is a root of $\psi_Y(-z)-q$ of multiplicity greater than one.
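In terms of implementation, formula (\ref{eqn_W^q}) translates directly into a short routine. The following Python sketch assumes that the positive root $\Phi(q)$ and (a truncated list of) the roots $\{\zeta_n\}_{n\ge 0}$ of $\psi_Y(-z)=q$ have already been computed, for instance with the root-finding procedure discussed in the next section; the function and argument names are ours.
\begin{verbatim}
import cmath

def scale_function(x, dpsi, Phi_q, zetas):
    """Evaluate the series (eqn_W^q): dpsi is the derivative psi_Y',
    Phi_q is the positive root of psi_Y(z) = q, and zetas = [zeta_0,
    zeta_1, ...] are the (truncated) roots of psi_Y(-z) = q, with
    zeta_0 real and the remaining ones in the upper half-plane."""
    val = cmath.exp(Phi_q * x) / dpsi(Phi_q)
    val += cmath.exp(-zetas[0] * x) / dpsi(-zetas[0])
    for zn in zetas[1:]:
        val += 2 * (cmath.exp(-zn * x) / dpsi(-zn)).real
    return val.real
\end{verbatim}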
\section{Numerical examples}\label{sec_numerics}
The main reason why we are interested in the analytical structure of the Wiener-Hopf factorization is that its understanding can lead to efficient numerical algorithms for computing such important objects as the distribution of supremum $S_{{\textnormal e}(q)}$ or infimum $I_{{\textnormal e}(q)}$, or the scale function $W^{(q)}(x)$. These are not easy problems for general L\'evy processes. For example, computing the distribution of $S_{{\textnormal e}(q)}$ in general involves evaluating numerically two integral transforms: first one has to compute the positive Wiener-Hopf factor via the formula (see \cite{Mordecki}) \begin{eqnarray*}
{\mathbb E}\left[ e^{{\textnormal i} z S_{{\textnormal e}(q)}} \right]=\exp\bigg[\frac{z}{2\pi {\rm i}} \int_{{\mathbb R}} \ln\bigg(\frac{q}{q-\psi({\textnormal i} u)}\bigg) \frac{{\rm d} u}{u(u-z)} \bigg], \qquad \textnormal {Im}(z)>0,
\end{eqnarray*} and then perform an inverse Fourier transform to recover the distribution of $S_{{\textnormal e}(q)}$. Similarly, computing the scale function $W^{(q)}(x)$ in general is equivalent to inverting Laplace transform in \eqref{def_W^q} (see \cite{KuKyRi} for the detailed discussion and comparison of several numerical algorithms for computing the scale function).
The results presented in this paper lead to a quite different approach for computing the distribution of $S_{{\textnormal e}(q)}$ and the scale function $W^{(q)}(x)$. This approach does not rely on the numerical evaluation of multiple integral transforms; instead, the main ingredients are the solutions to the equation $\psi(z)=q$ and the infinite series representations \eqref{distribution_S_ee(q)} and \eqref{eqn_W^q}. Therefore, this approach is very close in spirit to the techniques that are used for processes with positive jumps of rational transform \cite{Mordecki} or for meromorphic processes \cite{KuzKypJC}.
Our main goal in this section is to give a brief description of the numerical algorithms and techniques which are suitable for L\'evy processes with bounded positive jumps; in particular, we would like to show that the series expansions \eqref{distribution_S_ee(q)} and \eqref{eqn_W^q} may lead to efficient numerical computations. However, the detailed investigation of these numerical algorithms is beyond the scope of the current paper, and we leave to future work such important questions as the speed of convergence, rigorous error analysis, etc.
\subsection{Preliminaries}\label{subsec_preliminaries}
In order to implement the numerical algorithms based on formulas \eqref{distribution_S_ee(q)} and \eqref{eqn_W^q} we have to solve the following problem: how can we find the complex solutions of the equation $\psi(z)=q$? As we will see, it is not an easy problem, yet it can be solved rather efficiently provided that we use the right techniques.
The main problem in finding the zeros of $\psi(z)-q$ is that all of them (except for $\zeta_0$) are complex numbers. Note that the real zero $\zeta_0$ can easily be found by the bisection method followed by Newton's method. The large zeros of $\psi(z)-q$ satisfy the asymptotic relation (\ref{zeta_asymptotics}), thus they can be found by Newton's method started from the value on the right-hand side of (\ref{zeta_asymptotics}). The only problem that remains is how to compute the complex zeros of $\psi(z)-q$ which are not too large.
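The following Python sketch illustrates this step for a toy example, a Poisson process with an added drift, whose Laplace exponent $\psi(z)=e^{kz}-1+\mu z$ satisfies (\ref{psi_asymptotics}) with $A=1$, $a=0$, $B=\mu$, $b=1$: the right-hand side of (\ref{zeta_asymptotics}) is used as the starting point of Newton's method and the residual $|\psi(\zeta_n)-q|$ is printed. The parameter values and the index range are arbitrary.
\begin{verbatim}
import cmath, math

k, mu, q = 1.0, 1.0, 1.0                      # illustrative parameters
psi  = lambda z: cmath.exp(k * z) - 1 + mu * z
dpsi = lambda z: k * cmath.exp(k * z) + mu

def guess(n):
    # right-hand side of (zeta_asymptotics) with A = 1, a = 0, B = mu, b = 1
    return (math.log(mu) + math.log(2 * n * math.pi / k)) / k \
           + 1j * math.pi * (2 * n + 1.5) / k

for n in range(5, 16):
    z = guess(n)
    for _ in range(50):                       # Newton refinement
        z -= (psi(z) - q) / dpsi(z)
    print(n, z, abs(psi(z) - q))              # residual should be tiny
\end{verbatim}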
The problem of computing the zeros of an analytic function inside a bounded domain has been investigated by many authors, see \cite{Dellnitz2002325,Yakoubsohn,Ying_Katz} and the references therein. We will follow the method presented in \cite{Dellnitz2002325}, which is based on Cauchy's Argument Principle. Let us recall this important result. We denote the change in the argument of an analytic function $f(z)$ over a piecewise smooth curve $C$ as \begin{eqnarray}\label{def_arg} \Delta \textnormal {arg}(f,C)=\textnormal {Im} \left[ \int_C \frac{f'(z)}{f(z)}{\textnormal d} z \right], \end{eqnarray} provided that $f(z)\ne 0$ for $z\in C$. Cauchy's Argument Principle states that if $C$ is a simple closed contour which is oriented counter-clockwise, and $f(z)$ is analytic and non-zero on $C$ and analytic inside $C$, then $\Delta \textnormal {arg}(f,C)=2 \pi N$, where $N$ is the number of zeros of $f(z)$ inside contour $C$.
This result leads to a practical iterative procedure to determine the complex zeros of an analytic function $f(z)$ inside a closed contour. We start with an initial rectangle $R$ in the complex plane and proceed through the following sequence of steps: (i) compute $N=N(R)$, the number of zeros of $f(z)$ inside rectangle $R$; (ii) if $N=0$, then we stop; (iii) if $N=1$, we try to find the zero using Newton's method started from a point inside the rectangle; (iv) if $N>1$, or if Newton's method in step (iii) fails, then we subdivide the rectangle $R$ into a finite number of disjoint rectangles $R_j$. For each of the smaller rectangles $R_j$ we proceed through the same sequence of steps (i)$\to$(ii)$\to$(iii)$\to$(iv). At some point the rectangles which contain zeros of $f(z)$ become very small; therefore the starting point of Newton's method is close to the target and Newton's method converges to the zero of $f(z)$. This shows that in theory we should be able to recover all zeros of $f(z)$ inside $R$. There are some technical issues which arise when at some step of the algorithm we obtain a rectangle with one of the zeros being extremely close to the boundary of this rectangle; however, we found that in our numerical experiments this issue was not a big problem. More details of this algorithm and some numerical examples can be found in \cite{Dellnitz2002325}.
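A bare-bones Python version of this procedure is sketched below; it is a simplified illustration rather than the implementation used for the experiments in this paper (which was written in C++). The number of zeros inside a rectangle is obtained from (\ref{def_arg}) by trapezoidal quadrature of $f'/f$ along the boundary, rectangles are subdivided until they contain at most one zero, and Newton's method then polishes the root; zeros lying exactly on a boundary are not handled.
\begin{verbatim}
import cmath, math

def count_zeros(f, df, a, b, steps=2000):
    """Number of zeros of f strictly inside the rectangle with lower-left
    corner a and upper-right corner b (argument principle, trapezoid rule)."""
    corners = [a, complex(b.real, a.imag), b, complex(a.real, b.imag), a]
    integral = 0.0 + 0.0j
    for p, r in zip(corners, corners[1:]):
        prev = df(p) / f(p)
        for j in range(1, steps + 1):
            zc = p + (r - p) * j / steps
            cur = df(zc) / f(zc)
            integral += 0.5 * (prev + cur) * (r - p) / steps
            prev = cur
    return int(round(integral.imag / (2 * math.pi)))

def newton(f, df, z0, tol=1e-12, maxit=100):
    z = z0
    for _ in range(maxit):
        step = f(z) / df(z)
        z -= step
        if abs(step) < tol:
            return z
    return None

def find_zeros(f, df, a, b, found=None, depth=0):
    """Recursively subdivide [a, b] until each piece holds at most one zero,
    then polish that zero with Newton's method."""
    found = [] if found is None else found
    n = count_zeros(f, df, a, b)
    if n == 0 or depth > 12:
        return found
    if n == 1:
        z = newton(f, df, (a + b) / 2)
        if z is not None and a.real <= z.real <= b.real \
                and a.imag <= z.imag <= b.imag:
            found.append(z)
            return found
    mid = (a + b) / 2
    for lo, hi in [(a, mid),
                   (complex(mid.real, a.imag), complex(b.real, mid.imag)),
                   (complex(a.real, mid.imag), complex(mid.real, b.imag)),
                   (mid, b)]:
        find_zeros(f, df, lo, hi, found, depth + 1)
    return found

# Example: psi(z) = exp(z) - 1 with q = 2; the zeros of psi(z) - q inside
# the chosen rectangle are log(3) + 2*pi*i*n for n = 1, 2.
f  = lambda z: cmath.exp(z) - 3.0
df = lambda z: cmath.exp(z)
print(find_zeros(f, df, 0.1 + 1.0j, 3.0 + 15.0j))
\end{verbatim}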
Next, let us review some basic facts about the incomplete gamma function, which will be used extensively in this section. The incomplete gamma function is defined for $\textnormal {Re}(s)>0$ and $z>0$ as \begin{eqnarray}\label{def_gamma} \gamma(s,z)=\int\limits_0^z u^{s-1} e^{-u} {\textnormal d} u. \end{eqnarray} It is known (see section 8.35 in \cite{Jeffrey2007}) that the function $z \mapsto z^{-s}\gamma(s,z)$ is an entire function and it can be represented by a Taylor series (which converges everywhere in the complex plane) as follows \begin{eqnarray}\label{eqn_inc_gamma} z^{-s} \gamma(s,z)=s^{-1} {}_1F_1(s,s+1;-z)=\sum\limits_{n\ge 0} \frac{(-1)^n}{n!} \frac{z^n}{s+n}=e^{-z} \sum\limits_{n\ge 0} \frac{z^n}{(s)_{n+1}}. \end{eqnarray} Here, as usual, $(a)_n=a(a+1)\dots(a+n-1)$ denotes the Pochhammer symbol and the function ${}_1F_1(a,b;z)$ which appears in the above equation is the confluent hypergeometric function \begin{eqnarray}\label{def_1F1} {}_1 F_1(a,b;z)=\sum\limits_{n\ge 0} \frac{(a)_n}{(b)_n} \frac{z^n}{n!}, \end{eqnarray} see chapter 6 in \cite{Erdelyi1955V3} for an extensive collection of results and formulas related to this function.
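For moderate $|z|$ the series (\ref{eqn_inc_gamma}) is straightforward to evaluate directly; the following short Python sketch does this and checks the result against the elementary identity $\gamma(1,z)=1-e^{-z}$ (the truncation level and test value are arbitrary).
\begin{verbatim}
import cmath

def inc_gamma_reg(s, z, terms=60):
    """z^{-s} * gamma(s, z) via the series exp(-z) * sum_n z^n / (s)_{n+1};
    suitable for moderate |z| and s not a non-positive integer."""
    total, term = 0.0, 1.0 / s         # n = 0 term is 1/(s)_1 = 1/s
    for n in range(terms):
        total += term
        term *= z / (s + n + 1)        # (s)_{n+2} = (s)_{n+1} * (s + n + 1)
    return cmath.exp(-z) * total

z = 1.3 - 0.8j
print(inc_gamma_reg(1.0, z), (1 - cmath.exp(-z)) / z)   # should agree
\end{verbatim}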
\begin{figure}
\caption{On graph (a) we present the first fifty roots $\zeta_n$ (crosses) and their approximations (circles) given by Theorem \ref{thm_asymptotics}. On graph (b) we present the distance from $\zeta_n$ to its approximation (note that we use logarithmic scale for the y-axis).}
\label{fig_roots}
\end{figure}
While the incomplete gamma function is not one of the elementary functions, it can still be easily evaluated everywhere in the complex plane. In fact, numerical routines for evaluating this function are provided in such computational software programs as Maple and Mathematica.
One should use different strategies for computing $\gamma(s,z)$ depending on whether $|s|$ and/or $|z|$ is large. However, we will only need to compute $\gamma(s,z)$ for a fixed $s$ (which is real and not large) and for various values of $z$. In this case the numerical
algorithms are based on one of the infinite series expansions presented in (\ref{eqn_inc_gamma}) when $|z|$ is not large,
or on the various asymptotic approximations (and expansions in continued fractions, such as formula
8.358 in \cite{Jeffrey2007}) when $|z|$ is large. We refer to \cite{Jones1985401} or \cite{Winitzki03} for all the details.
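For moderate $|z|$ the series (\ref{eqn_inc_gamma}) can be summed directly; the following minimal C++ sketch (the function name and the truncation parameters are illustrative) is all that is needed in our setting, since $s$ is fixed, real and not large:
\begin{verbatim}
#include <cmath>
#include <complex>

// Sketch: the entire function  G(s,z) = z^{-s} gamma(s,z)
//       = sum_{n>=0} (-1)^n z^n / ( n! (s+n) ),
// for a fixed real s (not a non-positive integer) and complex z of
// moderate modulus.  For large |z| one should switch to the asymptotic
// or continued-fraction formulas mentioned above.
std::complex<double> reg_inc_gamma(double s, std::complex<double> z,
                                   double tol = 1e-15) {
    std::complex<double> term(1.0, 0.0);          // (-1)^n z^n / n!, n = 0
    std::complex<double> sum = term / s;
    for (int n = 1; n < 500; ++n) {
        term *= -z / double(n);
        std::complex<double> add = term / (s + double(n));
        sum += add;
        if (std::abs(add) < tol * std::abs(sum)) break;
    }
    return sum;
}
\end{verbatim}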
Finally, we would like to mention that the code for all numerical experiments was written in C++ and the computations were performed on a standard laptop (Intel Core i5 2.6 GHz processor and 4 GB of RAM).
\subsection{Numerical example 1: processes with double-sided jumps}\label{subsec_numerics_example1}
For our first numerical experiment, we consider a generalized tempered stable process (see \cite{Cont}), also known as a KoBoL process, with the L\'evy measure truncated at a positive number $k$ \begin{eqnarray}\label{def_pi_temp_stable}
\Pi({\textnormal d} x)={\mathbf 1}_{\{x<0\}} \hat C \hat \alpha e^{\hat \beta x} |x|^{-1-\hat \alpha} {\textnormal d} x+
{\mathbf 1}_{\{0<x<k\}} C \alpha e^{- \beta x} x^{-1- \alpha} {\textnormal d} x. \end{eqnarray}
\begin{proposition}\label{Laplace_exponent_X} Let $X$ be a L\'evy process, with the L\'evy measure $\Pi({\textnormal d} x)$ given by (\ref{def_pi_temp_stable}). Then the Laplace exponent of $X$ is given by \begin{eqnarray}\label{def_psi_X} \psi(z)=\frac12 \sigma^2 z^2 +\mu z - \hat C \Gamma(1-\hat\alpha) (\hat \beta+z)^{\hat \alpha}+ C \alpha (\beta-z)^{\alpha} \gamma(-\alpha,k(\beta-z))+\eta, \end{eqnarray} where $\eta$ is chosen so that $\psi(0)=0$. \end{proposition}
The proof of Proposition \ref{Laplace_exponent_X} can be found in Section \ref{sec_proofs}. Note that when $z< \beta$ and $k\to \infty$, then $\gamma(-\alpha,k(\beta-z)) \to \Gamma(-\alpha)$ (this follows from (\ref{def_gamma}) when $\alpha<0$, see also formulas (8.356.3) and (8.357.1) in \cite{Jeffrey2007}). This confirms the intuitively obvious result that as the cutoff $k$ becomes very large, the Laplace exponent of the truncated process converges to the Laplace exponent of the generalized tempered stable process, see Proposition 4.2 in \cite{Cont}. Note also that the function $ (\beta-z)^{\alpha} \gamma(-\alpha,k(\beta-z))$ which appears in (\ref{def_psi_X}) is an entire function of $z$, which confirms the fact that $\psi(z)$ is analytic in the half-plane $\textnormal {Re}(z)>0$. As in the case of generalized tempered stable processes, when $\sigma>0$ or $\alpha>1$ or $\hat \alpha>1$ we have a process of infinite variation. Finally, in the case when $\sigma=0$ and $\alpha<1$ and $\hat \alpha<1$ we have a process of finite variation, and in this case $\mu$ corresponds to the linear drift of this process.
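For the numerical experiments the Laplace exponent (\ref{def_psi_X}) has to be evaluated at complex arguments. It is convenient to note that $(\beta-z)^{\alpha} \gamma(-\alpha,k(\beta-z))=k^{-\alpha}\left[w^{\alpha}\gamma(-\alpha,w)\right]_{w=k(\beta-z)}$, so that only the entire function $w \mapsto w^{\alpha}\gamma(-\alpha,w)$ from (\ref{eqn_inc_gamma}) is required and no branch cuts appear. The following sketch (the structure and parameter names, which mirror (\ref{def_pi_temp_stable}), are illustrative; it reuses the routine \texttt{reg\_inc\_gamma} from the previous sketch) shows one possible implementation:
\begin{verbatim}
#include <cmath>
#include <complex>

using cplx = std::complex<double>;

// Prototype of the routine from the previous sketch:
// reg_inc_gamma(s, w) = w^{-s} gamma(s, w).
cplx reg_inc_gamma(double s, cplx w, double tol = 1e-15);

// Sketch of the Laplace exponent (def_psi_X); eta is fixed by psi(0) = 0.
struct TruncatedKoBoL {
    double sigma, mu, C, Chat, alpha, alphahat, beta, betahat, k;

    cplx psi_no_eta(cplx z) const {
        cplx gauss = 0.5 * sigma * sigma * z * z + mu * z;
        cplx neg   = -Chat * std::tgamma(1.0 - alphahat)
                          * std::pow(betahat + z, alphahat);
        // C * alpha * (beta - z)^alpha * gamma(-alpha, k(beta - z)),
        // written through the entire function reg_inc_gamma:
        cplx pos   = C * alpha * std::pow(k, -alpha)
                          * reg_inc_gamma(-alpha, k * (beta - z));
        return gauss + neg + pos;
    }
    cplx psi(cplx z) const { return psi_no_eta(z) - psi_no_eta(cplx(0.0, 0.0)); }
};
\end{verbatim}
The equation $\psi(z)=q$ can then be passed to the root-finding procedure described in Section \ref{subsec_preliminaries}.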
\begin{figure}\label{fig_density_Sq}
\end{figure}
We consider the following parameters: \begin{eqnarray*} \sigma=1, \;\; \mu=-2, \; \; C=\hat C=1, \;\; \alpha=\hat \alpha=0.5, \;\; \beta=1, \; \; \hat \beta=2, \;\; k=1. \end{eqnarray*} These parameters give us a process with a negative drift coefficient $\mu$ and jumps of infinite activity but finite variation. Note that since $\sigma=1$ the paths of the process have unbounded variation. We also set $q=1$.
First we compute 1000 roots $\zeta_n$ using the method discussed in Section \ref{subsec_preliminaries}. Overall, it takes just 0.15 seconds to compute these roots. The results are presented in Figure \ref{fig_roots}. Note that, as expected, the approximation to $\zeta_n$ provided by \eqref{zeta_asymptotics} improves as $n$ increases, but is less accurate for small values of $n$.
Next, we compute the density $p(x)$ of the supremum $S_{{\textnormal e}(q)}$ using the series representation \eqref{distribution_S_ee(q)}. The results are presented in Figure \ref{fig_density_Sq}. After we have pre-computed and stored the roots $\zeta_n$, computing 2000 values of $p(x)$ takes just 0.26 seconds. In order to test the accuracy we have numerically computed the integral of $p(x)$ on the interval $[0,10]$ (which should be close to ${\mathbb P}(S_{{\textnormal e}(q)}>0)=1$). The result equals 0.985 if we use 1000 roots and 0.995 if we use 5000 roots.
\subsection{Numerical example 2: a family of spectrally negative processes}\label{subsec_numerics_example2}
For our second numerical experiment we consider a spectrally negative L\'evy process $Y$, with the L\'evy measure defined as follows \begin{eqnarray*}
\Pi_Y({\textnormal d} x)={\mathbf 1}_{\{-k<x<0\}} C \alpha e^{ \beta x} |x|^{-1- \alpha} {\textnormal d} x. \end{eqnarray*} Note that the dual process $\hat Y=-Y$ belongs to the class of processes described in Proposition \ref{Laplace_exponent_X}, therefore the Laplace exponent of $Y$ is given by \begin{eqnarray} \psi_Y(z)=\frac12 \sigma^2 z^2 +\mu z + C \alpha (\beta+z)^{\alpha} \gamma(-\alpha,k(\beta+z))+\eta, \end{eqnarray} where again $\eta$ is chosen so that $\psi_Y(0)=0$.
\begin{figure}
\caption{The scale function $W^{(q)}(x)$ for the parameter set \eqref{par_set2}}
\label{fig_Wq}
\end{figure}
We fix the following values of the parameters \begin{eqnarray}\label{par_set2} \sigma\in\{0,1\}, \;\; \mu=2, \; \; C=1, \;\; \alpha=0.5, \;\; \beta=1, \;\; k=1, \end{eqnarray} which define a spectrally negative process with jumps of infinite activity and finite variation, and with paths of finite or infinite variation depending on whether $\sigma=0$ or $\sigma=1$. We compute 1000 numbers $\{\zeta_n\}$ using the algorithm presented in Section \ref{subsec_preliminaries}. This computation takes 0.06 seconds. The qualitative behavior of the roots $\zeta_n$ is very similar to the one presented in Figure \ref{fig_roots}. Computing the scale function via the series representation \eqref{eqn_W^q} is also very fast: it takes just 0.07 seconds to compute 1000 values of $W^{(q)}(x)$ at equally spaced points $x\in [0,3]$.
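For the reader's convenience, here is a sketch of how the series \eqref{eqn_W^q} (whose explicit form is recovered at the end of the proof of Theorem \ref{thm_w^q} below) can be evaluated once $\Phi(q)$, the roots $\zeta_n$ and the derivative $\psi_Y'$ have been pre-computed; all names are illustrative:
\begin{verbatim}
#include <cmath>
#include <complex>
#include <functional>
#include <vector>

using cplx = std::complex<double>;

// Sketch: W^{(q)}(x) = e^{Phi(q) x}/psi_Y'(Phi(q)) + e^{-zeta_0 x}/psi_Y'(-zeta_0)
//                      + 2 * sum_{n>=1} Re[ e^{-zeta_n x} / psi_Y'(-zeta_n) ],
// assuming Phi(q), zeta_0, the roots zeta_n and psi_Y' are supplied.
double scale_function(double x, double Phi_q, double zeta0,
                      const std::vector<cplx>& zeta,           // zeta_n, n >= 1
                      const std::function<cplx(cplx)>& dpsiY)  // psi_Y'
{
    double w = std::exp(Phi_q * x) / std::real(dpsiY(cplx(Phi_q, 0.0)))
             + std::exp(-zeta0 * x) / std::real(dpsiY(cplx(-zeta0, 0.0)));
    for (const cplx& zn : zeta)
        w += 2.0 * std::real(std::exp(-zn * x) / dpsiY(-zn));
    return w;
}
\end{verbatim}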
The results of the computations are presented in Figure \ref{fig_Wq}. From Lemma 8.6 in \cite{Kyprianou} we know that $W^{(q)}(0)=0$ if the process $Y$ has infinite variation, and $W^{(q)}(0)=1/\mu$ if the process has bounded variation (where $\mu$ is the linear drift). One can see from Figure \ref{fig_Wq} that our numerical results are in perfect agreement with this theoretical prediction. It would also be very interesting to compare the accuracy and performance of this algorithm for evaluating $W^{(q)}(x)$ with the methods used in \cite{KuKyRi}; however, we have decided to leave this for future work.
\section{Proofs}\label{sec_proofs}
\begin{lemma}\label{lemma_support}
Let $\nu({\textnormal d} x)$ be a finite positive measure such that $\nu([a,+\infty))=0$ for some $a\in {\mathbb R}$. Define $k=\inf\{ a \in {\mathbb R}: \nu([a,+\infty))=0\}$. Then for every $\epsilon>0$ there exists $\xi=\xi(\epsilon)>0$ such that for all $z>\xi$ we have \begin{eqnarray}\label{eqn_lemma_support} e^{(k-\epsilon)z}<\int\limits_{{\mathbb R}} e^{zx} \nu({\textnormal d} x) < e^{(k+\epsilon)z}. \end{eqnarray} \end{lemma} \begin{proof}
Assume that $\epsilon>0$. Since $\nu((k,\infty))=0$ we have for all $z>0$
\begin{eqnarray*}
e^{-(k+\epsilon)z}\int\limits_{{\mathbb R}} e^{zx} \nu({\textnormal d} x)= \int\limits_{(-\infty,k]} e^{z (x-k-\epsilon)} \nu({\textnormal d} x) < e^{-\epsilon z} \nu ({\mathbb R}),
\end{eqnarray*} and the right-hand side in the above inequality goes to zero as $z \to +\infty$. This proves the upper bound in (\ref{eqn_lemma_support}). Similarly,
\begin{eqnarray*}
e^{-(k-\epsilon)z}\int\limits_{{\mathbb R}} e^{zx} \nu({\textnormal d} x)> e^{-(k-\epsilon)z} \int\limits_{[k-\epsilon/2,k]} e^{z x} \nu({\textnormal d} x) > e^{ \frac{\epsilon}2 z} \nu ([k-\epsilon/2,k]).
\end{eqnarray*}
According to our definition of $k$, the quantity $\nu ([k-\epsilon/2,k])$ is strictly positive, thus the right-hand side in the above inequality goes to $+\infty$ as $z\to +\infty$, which proves the lower bound in (\ref{eqn_lemma_support}). \end{proof}
{\it Proof of Theorem \ref{thm_main}:} Let us prove (i). The L\'evy-Khintchine formula (\ref{Levy_Khinthine2}) implies that the function $\psi(z)$ is analytic in the half-plane $\textnormal {Re}(z)>0$ and convex for $z>0$. Since $\psi(0)=0$ and $\psi(z)$ increases exponentially as $z\to +\infty$, we conclude that there exists a unique simple real solution to $\psi(z)=q$, which we will denote by $\zeta_0$.
Let us prove that there are no other solutions in the vertical strip $0 \le \textnormal {Re}(z)<\zeta_0$. From the definition of the Laplace exponent we find
\begin{eqnarray*} e^{t \textnormal {Re}(\psi(z))}=\big | {\mathbb E}\left[ e^{z X_t} \right] \big| \le {\mathbb E}\left[ e^{\textnormal {Re}(z) X_t} \right]=e^{t \psi(\textnormal {Re}(z))}. \end{eqnarray*} This shows that for $0\le \textnormal {Re}(z) < \zeta_0$ we have $\textnormal {Re}(\psi(z)-q)\le \psi(\textnormal {Re}(z))-q<0$, therefore $\psi(z)\ne q$ in this vertical strip and
we have proved the first part of Theorem \ref{thm_main}.
Next, let us prove (ii), (iii) and (iv). Let us consider the ascending ladder process $(L^{-1}, H)$ and its Laplace exponent $\kappa(q,z)$. Similarly, let $\hat \kappa(q,z)$ denote the Laplace exponent of the descending ladder process $(\hat L^{-1}, \hat H)$, see section 6.2 in \cite{Kyprianou} for the definition and properties of these objects. Let $\Lambda({\textnormal d} t, {\textnormal d} x)$ denote the L\'evy measure of the bivariate subordinator $(L^{-1}, H)$ (see section 6.3 in \cite{Kyprianou}). One can check that $\kappa(q,z)$ can be expressed in the following form \begin{eqnarray}\label{thm_main_proof1} \kappa(q,z)=\kappa(q,0)+a z-\int\limits_0^{\infty} \left( e^{-zx}-1\right) \Lambda^{(q)}({\textnormal d} x) \end{eqnarray} where $a\ge 0$ and \begin{eqnarray*} \Lambda^{(q)}({\textnormal d} x)=\int_0^{\infty} e^{-q t} \Lambda({\textnormal d} t, {\textnormal d} x). \end{eqnarray*} Formula (\ref{thm_main_proof1}) implies that the function $z\mapsto \kappa(q,z)$ is the Laplace exponent of a subordinator with the L\'evy measure $\Lambda^{(q)}({\textnormal d} x)$ and drift $a$, which is killed at rate $\kappa(q,0)$. Note that $\Lambda^{(0)}({\textnormal d} x)$ is the L\'evy measure of the ascending ladder height process $H$. The jumps in the process $H$ happen when the process $X$ jumps over the past supremum, thus it is clear that if the jumps of $X$ are bounded from above by $k$, then the same is true for the process $H$. Therefore $\Lambda^{(0)}((k,\infty))=0$, and since for each Borel set $B$ the quantity $\Lambda^{(q)}(B)$ is decreasing in $q$ we conclude that $\Lambda^{(q)}((k,\infty))=0$ for all $q\ge 0$. Therefore we have proved that the support of the measure $\Lambda^{(q)}({\textnormal d} x)$ lies inside the interval $(0,k]$.
Let us define $\tilde k=\inf\{ x>0: \Lambda^{(q)}((x,\infty))=0\}$. Note that $\tilde k \le k$, since we have established already that the support of the measure $\Lambda^{(q)}({\textnormal d} x)$ lies inside the interval $(0,k]$. Using formula (\ref{thm_main_proof1}) and the fact that $\Lambda^{(q)}({\textnormal d} x)$ has finite support we conclude that $\kappa(q,z)$ is an entire function of $z$.
Using the Wiener-Hopf factorization (see Theorem 6.16 in \cite{Kyprianou}) we find that for $\textnormal {Re}(z)\le 0$ \begin{eqnarray}\label{WH_fact_1}
\bigg| \frac{\kappa(q,0)}{\kappa(q,-z)} \bigg|=\big | {\mathbb E}\left[ e^{z S_{{\textnormal e}(q)}} \right] \big| \le {\mathbb E}\left[ e^{\textnormal {Re}(z) S_{{\textnormal e}(q)}} \right]\le 1, \end{eqnarray} which shows that the function $\kappa(q,-z)$ has no zeros in the half-plane $\textnormal {Re}(z)\le 0$. By the same argument we conclude that $\hat \kappa(q,z)$ has no zeros in the half-plane $\textnormal {Re}(z) \ge 0$. Therefore, using the Wiener-Hopf factorization $q-\psi(z)=\kappa(q,-z) \hat \kappa(q,z)$ we find that all zeros of $\kappa(q,-z)$ in the half-plane $\textnormal {Re}(z)>0$ coincide with the zeros of $q-\psi(z)$. Recall that we have labeled these zeros as $\{\zeta_0,\zeta_n,\bar \zeta_n\}_{n\ge 1}$, where $\zeta_n \in {\mathcal Q}_1$ are arranged in order of increasing absolute value.
Next, we use Proposition 2 in \cite{Bertoin} and the fact that $\kappa(q,z)$ is the Laplace exponent of a subordinator to conclude that $\kappa(q,{\textnormal i} z) = O(z)$ as $z\to \infty$, $z\in {\mathbb R}$. Therefore the following integral converges (which is equivalent to saying that $\kappa(q,{\textnormal i} z)$ belongs to the Cartwright class of entire functions) \begin{eqnarray*}
\int\limits_{{\mathbb R}} \frac{\max(\ln(|\kappa(q,{\textnormal i} z)|),0)}{1+z^2} {\textnormal d} z, \end{eqnarray*} and we can apply Theorem 11, page 251 in \cite{Levin1980} (see also remark 2, page 130 in \cite{Levin1996}) to conclude that $\kappa(q,-z)$ can be factorized as \begin{eqnarray}\label{thm_main_proof2} \kappa(q,-z)=\kappa(q,0) e^{\frac{\tilde k}2 z} \left(1-\frac{z}{\zeta_0} \right) \prod\limits_{n\ge 1} \left(1-\frac{z}{\zeta_n} \right)\left(1-\frac{z}{\bar\zeta_n} \right). \end{eqnarray} Moreover, all of the roots $\zeta_n$ (except for a set of zero density) lie inside arbitrarily small angle $\pi/2-\epsilon<\textnormal {arg}(z)<\pi/2$, the density of the roots in this angle exists and is equal to $\tilde k/(2\pi)$ and the series $\sum\textnormal {Re}\left(\zeta_n^{-1} \right)$ converges.
Using (\ref{thm_main_proof1}) and the following result \begin{eqnarray*} {\mathbb E}\left[ e^{-zS_{{\textnormal e}(q)}} \right]=\phi_q^{\plus}({\textnormal i} z)=\frac{\kappa(q,0)}{\kappa(q,z)}, \end{eqnarray*} (see Theorem 6.16 in \cite{Kyprianou}) we obtain formula (\ref{wh_factor}) for the Wiener-Hopf factors, and in order to finish the proof we only have to show that $\tilde k= k$. This fact seems to be intuitively clear, as it means that the upper boundary of the support of the L\'evy measure of the process $X$ is exactly equal to the upper boundary of the support of the L\'evy measure of the ascending ladder height process $H$. However, we were not able to find a simple probabilistic argument to prove this statement, and we will use an analytic approach instead. Assume that $\tilde k<k$ and define $\epsilon=(k-\tilde k)/3$.
Using the L\'evy-Khintchine formula (\ref{Levy_Khinthine2}) and Lemma \ref{lemma_support} one can check that $|\psi(z)|>\exp((k-\epsilon)z)$ for all $z>0$ large enough. Similarly, using (\ref{thm_main_proof1}), Lemma \ref{lemma_support} and the fact that $\Lambda^{(q)}({\textnormal d} x)$ has support on $(0,\tilde k]$ we can check that
$|\kappa(q,-z)|<\exp((\tilde k+\epsilon)z)$ for all $z>0$ large enough. From the Wiener-Hopf factorization $\hat \kappa(q,z)=(q-\psi(z))/\kappa(q,-z)$ we conclude that \begin{eqnarray*}
|\hat \kappa(q,z)|> \frac{e^{(k-\epsilon)z}}{e^{(\tilde k+\epsilon)z}}=e^{\epsilon z} \end{eqnarray*} for all $z>0$ large enough. But this is not possible, as we know that $\hat \kappa(q,z)$ is the Laplace exponent of a subordinator, thus $\hat \kappa(q,z)=O(z)$ as $z\to \infty$, $\textnormal {Re}(z)\ge 0$ (see Proposition 2 in \cite{Bertoin}). Therefore the inequality $\tilde k<k$ is not true. At the same time we must have $\tilde k \le k$, since the support of the measure $\Lambda^{(q)}({\textnormal d} x)$ lies inside the interval $(0,k]$. This implies that $\tilde k=k$ and ends the proof of parts (ii), (iii) and (iv) of Theorem \ref{thm_main}.
$\sqcap\kern-8.0pt\hbox{$\sqcup$}$\\ \\
Recall that $\Delta \textnormal {arg}(f,C)$, which was defined in (\ref{def_arg}), denotes the change in the argument of $f(z)$ over a curve $C$. The following result will be used in the proof of Theorem \ref{thm_asymptotics}. \begin{proposition}\label{prop_Delta_arg_property}
Assume that $f$ and $g$ are analytic on a piecewise smooth curve $C$ and $f(z)\ne 0$ for $z\in C$. If for some $\epsilon \in (0,1)$ we have $|g(z)|<\epsilon |f(z)|$ for all $z\in C$, then \begin{eqnarray}\label{Delta_arg_property}
\big | \Delta \textnormal {arg}(f+g,C) - \Delta \textnormal {arg}(f,C) \big |< 4 \epsilon. \end{eqnarray} \end{proposition} \begin{proof} From the definition of $\Delta \textnormal {arg}(f,C)$ (\ref{def_arg}) it follows that \begin{eqnarray*} \Delta \textnormal {arg}(f+g,C) = \Delta \textnormal {arg}(f,C)+\Delta \textnormal {arg}(1+g/f,C).
\end{eqnarray*} Due to the condition $|g(z)/f(z)|<\epsilon$ for $z\in C$ we know that the set $\{w=1+g(z)/f(z): \; z \in C\}$ lies inside the circle of radius $\epsilon$ with center at one. Using elementary geometric considerations we check that \begin{eqnarray*}
\max\{|\textnormal {arg}(w)|: \; |w-1|<\epsilon\}=\arcsin(\epsilon)<2\epsilon, \end{eqnarray*}
thus the change of the argument along any curve lying inside the disc $|w-1|<\epsilon$ cannot be greater than $4\epsilon$, which proves (\ref{Delta_arg_property}). \end{proof}
We will also need the following two facts: \begin{itemize}
\item[(i)] For any real $u>0$ and any complex number $v$ for which $|\textnormal {arg}(v)|<\pi/2$ we have \begin{eqnarray}\label{fact1}
|\textnormal {arg}(u+v)| \le |\textnormal {arg}(v)|. \end{eqnarray}
\item[(ii)] For any two complex numbers $u$, $v$ we have \begin{eqnarray}\label{fact2}
|u+v|\ge \cos(\textnormal {arg}(u)-\textnormal {arg}(v)) (|u|+|v|). \end{eqnarray} \end{itemize}
Both of the above facts can be easily verified by elementary geometric considerations.
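For instance, (\ref{fact2}) is trivial when $\cos(\textnormal {arg}(u)-\textnormal {arg}(v))\le 0$, and for a positive cosine it follows by squaring both sides and using the identity $|u+v|^2=|u|^2+|v|^2+2|u||v|\cos(\textnormal {arg}(u)-\textnormal {arg}(v))$.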
{\it Proof of Theorem \ref{thm_asymptotics}:} Let us denote the set of solutions to $\psi(z)=q$ in ${\mathcal Q}_1$ as ${\mathcal Z}=\{\zeta_n\}_{n\ge 1}$ and introduce numbers
\begin{eqnarray*} z_n=\frac{1}{k} \left[\ln \left( \bigg | \frac{B}{A} \bigg| \right)+(a+b) \ln \left(\frac{2n\pi }{k}\right) \right] + \frac{{\textnormal i} }{k} \left[ \textnormal {arg}\left( \frac{B}{A} \right)+\left( \frac12 (a+b) + 2n+1\right) \pi \right]. \end{eqnarray*} First let us prove that every solution to the equation $\psi(z)=q$ which has sufficiently large absolute value must be close to one of $z_n$. From the asymptotic expansion (\ref{psi_asymptotics}) we find that when $z\to \infty$, $z\in {\mathcal Z}$ we have \begin{eqnarray}\label{psi_asymptotics_2}
e^{kz}=-\frac{B}{A} z^{a+b} (1+o(1)) . \end{eqnarray} Considering the absolute value of both sides of (\ref{psi_asymptotics_2}) we conclude \begin{eqnarray}\label{re_z_asymptotics}
\textnormal {Re}(z)=\frac{1}{k} \ln\bigg |\frac{B}{A} \bigg| + \frac{a+b}{k} \ln|z|+o(1), \;\;\; z\to \infty, \; z\in {\mathcal Z}.
\end{eqnarray} From the above asymptotic expression it follows that $\textnormal {Re}(z)=O(\ln|z|)$, which in turn implies \begin{eqnarray}\label{arg_z_asymptotics} \textnormal {arg}(z)=\frac{\pi}2+o(1), \;\;\; z\to \infty, \; z\in {\mathcal Z}. \end{eqnarray}
Next, considering the argument of both sides of (\ref{psi_asymptotics_2}), we find that \begin{eqnarray*} \textnormal {arg}\left( e^{k z} \right)=\textnormal {arg}\left( -\frac{B}{A} \right)+ \textnormal {arg}\left( z^{a+b}\right)+\textnormal {arg}(1+o(1)) \;\;\; (\textnormal {mod} \; 2 \pi), \end{eqnarray*} and using \eqref{arg_z_asymptotics} we conclude that there exists an integer number $n$ such that \begin{eqnarray}\label{im_z_asymptotics} k\textnormal {Im}(z)=\textnormal {arg}\left(\frac{B}{A}\right)+(2n+1)\pi + \frac12 (a+b) \pi + o(1). \end{eqnarray}
Finally, asymptotic expressions (\ref{re_z_asymptotics}) and (\ref{im_z_asymptotics}) imply that \begin{eqnarray*}
\textnormal {Re}(z)=\frac{1}{k} \left[\ln \left( \bigg | \frac{B}{A} \bigg| \right)+(a+b) \ln \left(\frac{2n\pi }{k}\right) \right]+o(1), \end{eqnarray*} which together with (\ref{im_z_asymptotics}) shows that every sufficiently large solution to $\psi(z)=q$ must be close to one of the numbers $z_n$.
\begin{figure}
\caption{Illustrations to the proofs of Theorem \ref{thm_asymptotics} and Theorem \ref{thm_w^q}.}
\label{fig_proof_a}
\label{fig_proof_b}
\label{fig_Vdxds}
\end{figure}
Now let us prove the converse statement: for all $n$ large enough there is always a solution of the equation $\psi(z)=q$ near a point $z_n$. We set $\epsilon=1/(16k(a+b))$, assume that $n$ is a large positive integer and consider the following contour $L=L(n)=L_1 \cup L_2 \cup L_3 \cup L_4$, defined as \begin{eqnarray*}
L_1&=&L_1(n)=\{ z\in {\mathbb C}: \; \textnormal {Re}(z)=0, \; |\textnormal {Im}(z)-\textnormal {Im}(z_n)|\le \frac{\pi}{k}\}, \\
L_2&=&L_2(n)=\{ z\in {\mathbb C}: \; \textnormal {Im}(z)=\textnormal {Im}(z_n)-\frac{\pi}{k}, \; 0\le \textnormal {Re}(z) \le \epsilon n\}, \\
L_3&=&L_3(n)=\{ z\in {\mathbb C}: \; \textnormal {Re}(z)=\epsilon n, \; \; |\textnormal {Im}(z)-\textnormal {Im}(z_n)|\le \frac{\pi}{k}\}, \\
L_4&=&L_4(n)=\{ z\in {\mathbb C}: \; \textnormal {Im}(z)=\textnormal {Im}(z_n)+\frac{\pi}{k}, \; 0\le \textnormal {Re}(z) \le \epsilon n\}. \end{eqnarray*} As we see on Figure \ref{fig_proof_a}, $L$ is a rectangle of dimensions $2\pi/k$ and $\epsilon n$, which contains exactly one point $z_n$ for $n$ large enough. We assume that this contour is oriented counter-clockwise. Our goal is to prove that $\Delta \textnormal {arg}(\psi(z)-q,L(n))=2\pi$ for all $n$ large enough, and our strategy is to show that the change in the argument over $L_1$, $L_2$ and $L_4$ is small, while the change in the argument over $L_3$ is close to $2\pi$.
First of all, it is clear that the number of zeros of $\psi(z)-q$ inside the contour $L$ is the same as the number of zeros of $F(z)=z^a (\psi(z)-q)$ inside the same contour. Asymptotic expression (\ref{psi_asymptotics}) tells us that \begin{eqnarray}\label{asymptotics_F(z)} F(z)=A e^{kz}+ B z^{a+b} + o\left(e^{kz}\right)+o\left(z^{a+b}\right), \;\;\; z\to \infty, \; z \in {\mathcal Q}_1. \end{eqnarray}
Let us first consider the interval $L_1$. Since $\textnormal {arg}(z)=\pi/2$ on this interval, we have $\Delta \textnormal {arg}(z^{a+b},L_1)=0$. Equation (\ref{asymptotics_F(z)}) implies that $F(z)=B z^{a+b} + o\left( z^{a+b} \right)$ when $z\in L_1$ and $n\to +\infty$,
thus we use Proposition \ref{prop_Delta_arg_property} and conclude that for all $n$ large enough we have $|\Delta \textnormal {arg}(F,L_1(n))|<1/4$.
Let us consider the contour $L_2$. From the definition of this contour it follows that for all $z\in L_2$ \begin{eqnarray}\label{contour_L2_1} \textnormal {arg}\left(\frac{A}{B}\exp\left(kz-\frac{\pi {\textnormal i}}2(a+b)\right)\right)=0. \end{eqnarray} Also, looking at Figure \ref{fig_proof_a} one can check that $\pi/2-\theta \le \textnormal {arg}(z) \le \pi/2$ for all $z \in L_2$, where we have defined \begin{eqnarray*} \theta=\theta(\epsilon,n)=\arctan\left( \frac{\epsilon n}{\textnormal {Im}(z_n)-\pi/k} \right). \end{eqnarray*} Note that as $n\to +\infty$ we have $\theta(\epsilon,n)\to \arctan(\epsilon k /(2\pi))$, and the latter quantity is smaller than $k \epsilon $. This implies that for all $n$ large enough we have $\theta(\epsilon,n)< k \epsilon$. Thus we have proved that for all $n$ large enough we have $\pi/2-k \epsilon \le \textnormal {arg}(z) \le \pi/2$ when $z \in L_2$, which is equivalent to \begin{eqnarray}\label{contour_L2_2}
\bigg |\textnormal {arg}\left(z^{a+b}\exp\left(-\frac{\pi {\textnormal i}}2(a+b)\right)\right)\bigg|< k (a+b) \epsilon=\frac{1}{16}, \;\;\; z\in L_2. \end{eqnarray}
From (\ref{contour_L2_1}), (\ref{contour_L2_2}) and property \eqref{fact1} it follows that for all $z\in L_2$ the number \begin{eqnarray*} w=\frac{A}{B}\exp\left(kz-\frac{\pi {\textnormal i}}2(a+b)\right)+z^{a+b}\exp\left(-\frac{\pi {\textnormal i}}2(a+b)\right) \end{eqnarray*}
lies in the sector $|\textnormal {arg}(w)|<1/16$. From here we find that \begin{eqnarray}\label{Delta_L2}
|\Delta \textnormal {arg}(A e^{kz}+ B z^{a+b}, L_2)|< \frac{1}{8}, \end{eqnarray} and at the same time, with the help of property (\ref{fact2}) we deduce that for all $z\in L_2$ \begin{eqnarray}\label{estimate_L2}
\big| A e^{kz}+ B z^{a+b} \big| > \cos\left(\frac{1}{16} \right)
\left( \big|A e^{kz} \big| + \big |B z^{a+b} \big| \right). \end{eqnarray} Next, if $g(z)= o\left(e^{kz}\right)+o\left(z^{a+b}\right)$ as $z\to \infty$, then
\begin{eqnarray*} g(z)=o\left( \big|A e^{kz} \big| + \big |B z^{a+b} \big| \right), \end{eqnarray*} and we again can use (\ref{Delta_L2}) and (\ref{Delta_arg_property}) with $f(z):=F(z)$ and $g(z)$ defined above, to conclude that for all $n$ large enough \begin{eqnarray*}
|\Delta \textnormal {arg}(F(z),L_2(n))-\Delta \textnormal {arg}(A e^{kz}+ B z^{a+b}, L_2)|<1/8. \end{eqnarray*} The above inequality and estimate \eqref{Delta_L2} imply that for all $n$ large enough
we have $|\Delta \textnormal {arg}(F(z),L_2(n))|<1/4$. Using exactly the same technique we obtain an identical estimate for the change of the argument over $L_4(n)$.
Finally, on the contour $L_3$ we have $F(z)=A\exp(kz)+o(\exp(kz))$. Since $\Delta \textnormal {arg}(\exp(kz),L_3)=2\pi$, we use Proposition \ref{prop_Delta_arg_property} and
conclude that for all $n$ large enough $|\Delta \textnormal {arg}(F,L_3(n))-2\pi|<1/4$. Combining these four estimates we see that
for all $n$ large enough we have $|\Delta \textnormal {arg}(F,L(n))-2\pi|<1$, and since we know that $\Delta \textnormal {arg}(F,L)$ must be an integer multiple of $2\pi$ we conclude that $\Delta \textnormal {arg}(F,L)=2\pi$, thus there is exactly one solution to $\psi(z)=q$ inside the contour $L(n)$.
From the first part of the proof we know that every sufficiently large solution to $\psi(z)=q$ must be close to $z_m$ for some $m$, and since by construction there is only one such point $z_n$ inside the contour $L(n)$, we conclude that for every $n$ large enough there is a solution to $\psi(z)=q$ close to $z_n$.
$\sqcap\kern-8.0pt\hbox{$\sqcup$}$\\ \\
{\it Proof of Proposition \ref{prop_asymptotic_psi}:} Let us assume that $\alpha<1$ and $\hat \alpha<1$. Then we can take the cutoff function $h(x)\equiv 0$ in (\ref{Levy_Khinthine2}), and we can rewrite $\psi(z)$ as follows \begin{eqnarray}\label{psi_decomposition} \psi(z)=\frac12 \sigma^2 z^2 +\mu z + \psi_1(z) + \psi_2(z), \end{eqnarray} where we have denoted \begin{eqnarray}\label{def_psi1} \psi_1(z)&=&\int\limits_{(-\infty,0)} \left( e^{z x}-1\right) \Pi({\textnormal d} x), \\ \psi_2(z)&=&\int\limits_{(0,k]} \left( e^{z x}-1\right) \Pi({\textnormal d} x). \label{def_psi2} \end{eqnarray}
First, let us study the asymptotic behavior of $\psi_1(z)$. If $\hat \alpha< 0$ then part (1) of Definition \ref{definition_Levy_measure} implies that $\Pi((-\infty,0))<\infty$, thus $\psi_1(z)=O(1)$ as $z\to \infty$, $\textnormal {Re}(z) \ge 0$. If $0<\hat \alpha<1$, then part (1) of Definition \ref{definition_Levy_measure} implies that for $x \in {\mathbb R}^-$ \begin{eqnarray*}
\Pi({\textnormal d} x)=\left( \hat C \hat \alpha |x|^{-1-\hat \alpha} + \sum\limits_{j=1}^{\hat m} \hat C_j \hat \alpha_j |x|^{-1-\hat \alpha_j} \right) {\textnormal d} x+ \nu({\textnormal d} x), \end{eqnarray*} where $\nu({\textnormal d} x)$ is a finite measure on ${\mathbb R}^{-}$ (note that $\nu({\textnormal d} x)$ does not have to be a positive measure). It is clear that \begin{eqnarray*}
\bigg | \int\limits_{-\infty}^0 \left( e^{zx}-1 \right) \nu({\textnormal d} x) \bigg | < 2 |\nu ((-\infty, 0))| \end{eqnarray*} for $\textnormal {Re}(z) \ge 0$. Using integration by parts we find that for $\textnormal {Re}(z) > 0$ \begin{eqnarray}\label{int_identity_1}
\hat \alpha \int\limits_{-\infty}^0 \left( e^{zx}-1 \right) |x|^{-1-\hat \alpha} {\textnormal d} x=-\Gamma(1-\hat \alpha) z^{\hat \alpha}. \end{eqnarray} Combining the above three equations and (\ref{def_psi1}) we conclude that \begin{eqnarray}\label{asymptotics_psi1} \psi_1(z)=- \hat C \Gamma(1-\hat \alpha) z^{\hat \alpha} + o (z^{\hat \alpha}) \end{eqnarray}
as $z\to \infty$, $\textnormal {Re}(z) > 0$.
Next, let us investigate the asymptotic behavior of $\psi_2(z)$. Let us assume that $n>0$ (where $n$ is the constant in Definition \ref{definition_Levy_measure}); the proof in the case $n=0$ is very similar. Since $n>0$, Definition \ref{definition_Levy_measure} implies that the measure $\Pi({\textnormal d} x)$ restricted to ${\mathbb R}^+$ has a density $\pi(x)$, which belongs to ${\mathcal{PC}}^{n}((0,k])$. Let us assume first that $\pi \in {\mathcal{C}}^{n}((0,k])$; we will relax this assumption later.
First let us consider the case $\alpha < 0$, which is equivalent to $\Pi((0,\infty))<\infty$. Applying integration by parts $n$ times to (\ref{def_psi2}) we obtain \begin{eqnarray*} \psi_2(z)&=&\int\limits_0^k e^{zx} \pi(x) {\textnormal d} x - \int\limits_0^k \pi(x) {\textnormal d} x\\ &=&\sum\limits_{m=0}^{n-1} (-1)^m \left[ \pi^{(m)}(k-)e^{kz}z^{-m-1}-\pi^{(m)}(0+) z^{-m-1} \right] +(-1)^n z^{-n} \int\limits_0^k \pi^{(n)}(x) e^{zx} {\textnormal d} x - \int\limits_0^k \pi(x) {\textnormal d} x. \end{eqnarray*} Since $\pi^{(n)}(x)$ is continuous we conclude that \begin{eqnarray*} \int\limits_0^k \pi^{(n)}(x) e^{zx} {\textnormal d} x= o \left( e^{kz} \right) \end{eqnarray*} as $z\to \infty$, $\textnormal {Re}(z) > 0$. At the same time, due to Definition \ref{definition_Levy_measure} we have $\pi^{(m)}(k-)=0$ for all $m\le n-2$. Using the above two results and the fact that $\frac{{\textnormal d}}{{\textnormal d} x} \Pi^+ (x)=-\pi(x)$ we obtain \begin{eqnarray}\label{asymptotics_psi2_1} \psi_2(z)=(-1)^{n} \Pi^+ {}^{(n)}(k-) e^{kz}z^{-n}+o \left( e^{kz} z^{-n} \right) + O(1), \end{eqnarray} as $z\to \infty$, $\textnormal {Re}(z) > 0$. Equation (\ref{asymptotics_psi2_1}) shows that the exponential term on the right-hand side of (\ref{psi_asymptotics}) comes from the upper boundary of the support of the L\'evy measure and from the first non-zero derivative of $\bar \Pi^+(x)$ at $k-$.
Next, let us assume that $\alpha \in (0,1)$. Then, according to Definition \ref{definition_Levy_measure}, the density of the L\'evy measure can be expressed as follows \begin{eqnarray*} \pi(x)=C\alpha x^{-1-\alpha}+\sum\limits_{j=1}^{m} C_j \alpha_j x^{-1-\alpha_j}+g(x), \end{eqnarray*} where $g \in {\mathcal{C}}^{n}([0,k])$. We can rewrite $\psi_2(z)$ as \begin{eqnarray}\label{expression_psi2} \psi_2(z)= C F(\alpha,k,z)+ \sum\limits_{j=1}^{m} C_j F(\alpha_j,k,z)+ \int\limits_0^k e^{zx} g(x) {\textnormal d} x - \int\limits_0^k g(x) {\textnormal d} x, \end{eqnarray} where we have defined \begin{eqnarray*}
F(\alpha,k,z)=\alpha \int\limits_0^k \left( e^{zx}-1 \right) x^{-1-\alpha} {\textnormal d} x. \end{eqnarray*} Let us obtain an asymptotic expansion of $F(\alpha,k,z)$ as $z\to \infty$, $z\in {\mathcal Q}_1$. Expanding $\exp(zx)$ in Taylor series centered at zero and integrating term by term we find that \begin{eqnarray}\label{eqn_F_alpha_k_z} F(\alpha,k,z)= k^{-\alpha} \left[ 1 - {}_1 F_1(-\alpha,1-\alpha;kz) \right], \end{eqnarray} where ${}_1F_1(a,b;z)$ is the confluent hypergeometric function defined by (\ref{def_1F1}). Applying asymptotic formula (2) on page 278 in \cite{Erdelyi1955V3} we conclude that \begin{eqnarray}\label{asymptotics_F_alpha_k_z} F(\alpha,k,z) &=& - \Gamma(1-\alpha) e^{-\pi {\textnormal i} \alpha} z^{\alpha}+
\alpha k^{-1-\alpha} \frac{e^{kz}}{z} \sum\limits_{m=0}^N \frac{(1+\alpha)_m}{(kz)^m} \\ \nonumber
&+&O(1)+O\left(z^{\alpha-1}\right) +O\left( e^{kz}z^{-N-2}\right), \end{eqnarray} as $z\to \infty$, $z\in {\mathcal Q}_1$. Formula (\ref{asymptotics_F_alpha_k_z}) and our previous result (\ref{asymptotics_psi2_1}) imply that \begin{eqnarray}\label{psi_2_final} \psi_2(z)&=& (-1)^{n} \Pi^+ {}^{(n)}(k-) e^{kz}z^{-n}- \Gamma(1-\alpha) e^{-\pi {\textnormal i} \alpha} z^{\alpha}\\ \nonumber &+&o \left(e^{kz}z^{-n} \right) + o\left(z^{\alpha}\right)+O(1), \end{eqnarray} as $z\to \infty$, $z\in {\mathcal Q}_1$.
As a final step, let us relax the assumption $\pi \in {\mathcal{C}}^{n}([0,k])$. Assume that there is a unique point $x_1 \in (0,k)$ at which $\pi^{(n)}(x)$ does not exist (the proof in the general case is exactly the same).
According to Definition \ref{definition_Levy_measure}, $\pi \in {\mathcal C}^{n-2}({\mathbb R}^+)$, thus $\pi^{(m)}(x_1-)=\pi^{(m)}(x_1+)$ for $m\le n-2$. Applying integration by parts $n$ times on each subinterval $(0,x_1)$ and $(x_1,k)$ we would obtain an expression (\ref{psi_2_final}) plus an extra term of the form \begin{eqnarray*} h(z)=(-1)^{n-1} \left[\pi^{(n-1)}(x_1-)-\pi^{(n-1)}(x_1+) \right] e^{x_1 z}z^{-n}. \end{eqnarray*} However, it is easy to see that $h(z)=o(e^{kz}z^{-n}) + o(z^{\alpha})$ as $z\to \infty$, $z\in {\mathcal Q}_1$. This is true since in the domain ${\mathcal D}=\{\textnormal {Re}(z)<\ln(\ln(\textnormal {Im}(z))), \; \textnormal {Im}(z)>e\}$ we have
$|\exp(x_1 z)|=\exp(x_1 \textnormal {Re}(z))=O(\ln|z|)=o(z^{a})$, while in the domain ${\mathcal Q}_1\setminus {\mathcal D}$ we have $\textnormal {Re}(z)\to \infty$ when $z\to \infty$, which implies $\exp(x_1 z)z^{-n}=o(\exp(kz)z^{-n})$.
Formula (\ref{psi_decomposition}) and asymptotic expressions (\ref{psi_2_final}), (\ref{asymptotics_psi1}) imply that $\psi(z)$ satisfies (\ref{psi_asymptotics}) with coefficients $A$, $a$, $B$ and $b$ as in Proposition \ref{prop_asymptotic_psi}, except that there would be an extra term $O(1)$ in the right-hand side of (\ref{psi_asymptotics}), which comes from (\ref{psi_2_final}). According to our assumption, the process $X$ is not a compound Poisson process, thus the constant $b$ defined in Proposition \ref{prop_asymptotic_psi} is strictly positive, and $O(1)=o(z^b)$, therefore this extra term can be absorbed into $o(z^b)$. This ends the proof in the case $\alpha<1$ and $\hat \alpha <1$.
In the case when one or both of $\alpha$, $\hat \alpha$ are greater than one the proof is identical, except that we will have to do one extra integration by parts for proving (\ref{asymptotics_psi1}). The details are left to the reader.
$\sqcap\kern-8.0pt\hbox{$\sqcup$}$\\
{\it Proof of Theorem \ref{thm_w^q}:} Let us denote
\begin{eqnarray*} z_n=\frac{1}{k} \left[\ln \left( \bigg | \frac{B}{A} \bigg| \right)+(a+b) \ln \left(\frac{2n\pi }{k}\right) \right] +\frac{{\textnormal i} }{k} \left[ \textnormal {arg}\left( \frac{B}{A} \right)+\left( \frac12 (a+b) + 2n+1\right) \pi \right]. \end{eqnarray*} Due to Definition \ref{definition_Levy_measure}, the L\'evy measure $\Pi({\textnormal d} x)$ can only have a finite number of atoms. From Corollary 2.5 in \cite{KuKyRi} we find that $W^{(q)}(x)$ can only have a finite number of points where it is not differentiable. Thus we can use (\ref{def_W^q}) and the Bromwich integral formula to conclude that for any $c> \Phi(q)$ \begin{eqnarray}\label{proof_Wq_1} W^{(q)}(x)=\frac{1}{2\pi {\textnormal i} } \int\limits_{c+{\textnormal i} {\mathbb R}} \frac{e^{zx}}{\psi_Y(z)-q} {\textnormal d} z. \end{eqnarray}
For $n>0$ and $m<0$ we define the contour $L=L(n,m)=L_1\cup L_2 \cup L_3 \cup L_4$, where \begin{eqnarray*} L_1&=&L_1(n)=\{ \textnormal {Re}(z)=c, \; -\textnormal {Im}(z_n)-\pi/k<\textnormal {Im}(z)<\textnormal {Im}(z_n)+\pi/k\},\\ L_2&=&L_2(n,m)=\{ \textnormal {Im}(z)=\textnormal {Im}(z_n)+\pi/k, \; m<\textnormal {Re}(z)<c\},\\ L_3&=&L_3(n,m)=\{ \textnormal {Re}(z)=m, \; -\textnormal {Im}(z_n)-\pi/k<\textnormal {Im}(z)<\textnormal {Im}(z_n)+\pi/k\},\\ L_4&=&L_4(n,m)=\{ \textnormal {Im}(z)=-\textnormal {Im}(z_n)-\pi/k, \; m<\textnormal {Re}(z)<c\}. \end{eqnarray*} This contour is shown in Figure \ref{fig_proof_b}. We assume that $L$ is oriented counter-clockwise. Using the residue theorem we deduce \begin{eqnarray}\label{sum_residues} \frac{1}{2\pi {\textnormal i}}\int\limits_{L} \frac{e^{zx}}{\psi_Y(z)-q} {\textnormal d} z= \frac{e^{\Phi(q)x}}{\psi_Y'(\Phi(q))} + \frac{e^{-\zeta_0 x}}{\psi_Y'(-\zeta_0)}+ 2 \sum \textnormal {Re}\left[ \frac{e^{-\zeta_j x}}{\psi_Y'(-\zeta_j)} \right], \end{eqnarray} where the summation is over all $j\ge 1$ such that $-\zeta_j$ lies inside the contour $L$.
First, assume that $n$ is fixed and let us consider what happens as $m\to -\infty$. According to the asymptotic relation (\ref{psi_Y_asymptotics}), as $\textnormal {Re}(z)\to -\infty$ the function $\psi_Y(z)$ increases exponentially (uniformly in every horizontal strip $|\textnormal {Im}(z)|<C$). In particular, for $m$ large enough we would have $|\psi_Y(z)-q|>1$ for all $z\in L_3(n,m)$, which implies \begin{eqnarray*}
\bigg | \int\limits_{L_3(n,m)} \frac{e^{zx}}{\psi_Y(z)-q} {\textnormal d} z \bigg| \le \int\limits_{L_3(n,m)} \bigg | \frac{e^{zx}}{\psi_Y(z)-q} \bigg | \times |{\textnormal d} z|
< \int\limits_{L_3(n,m)} e^{x m} \times |{\textnormal d} z| = \left(2\textnormal {Im}(z_n)+2\pi /k\right)e^{mx}, \end{eqnarray*} and for every $x>0$ the right-hand side converges to zero as $m\to -\infty$.
Our next goal is to let $n\to +\infty$ and to prove that the integrals over the two horizontal half-lines $L_2(n,-\infty)$ and $L_4(n,-\infty)$ in \eqref{sum_residues} disappear. In order to achieve this we will need to obtain good lower bounds on
$|\psi_Y(z)|$ on these horizontal half-lines. Let us consider first the contour $L_2(n,-\infty)$. We will prove that there exists a constant $C$ such that $|\psi_Y(z)|> C |\textnormal {Im}(z_n)|$ for all $z\in L_2(n,-\infty)$.
Assume that $\epsilon>0$ is a small number and define a domain \begin{eqnarray*}
{\mathcal D}_{\epsilon}=\{z \in {\mathbb C}: \; |\textnormal {arg}(z)|>\pi/2+\epsilon\}, \end{eqnarray*} see Figure \ref{fig_proof_b}. Let $L_5=L_5(n)=L_2(n,-\infty) \setminus {\mathcal D}_{\epsilon}$ and $L_6=L_6(n)=L_2(n,-\infty) \cap {\mathcal D}_{\epsilon}$.
Following the same steps as in the proof of Theorem \ref{thm_asymptotics} (see estimate (\ref{estimate_L2})) we find that there exists a constant $C_1$ (which does not depend on $n$ or $\epsilon$)
such that for all $n$ large enough we have for all $z\in L_5(n)$ \begin{eqnarray}\label{proof_Wq_2}
|\psi_Y(z)|&>&\cos(C_1 \epsilon)\left( \big|Ae^{-kz}z^{-a} \big| + \big |Bz^{b} \big| \right)>\cos(C_1 \epsilon) |B| |z|^b
\\ \nonumber &>&\cos(C_1 \epsilon) |B|\textnormal {Im}(z_n)^b>\cos(C_1 \epsilon) |B|\textnormal {Im}(z_n), \end{eqnarray} where in the last estimate we have used the fact that $b\ge 1$ (see \eqref{eqn_AaBb}).
Next, it can be easily seen from Figure \ref{fig_proof_b} that for all $z$ in the domain ${\mathcal D}_{\epsilon}$ we have \begin{eqnarray}\label{estimate_for_De}
\textnormal {Re}(z)<-|z| \sin(\epsilon),
\end{eqnarray} therefore $|z|^b = o (\exp(-k z) z^{-a})$ when $z\to \infty$, $z\in {\mathcal D}_{\epsilon}$. This fact and the asymptotic formula (\ref{psi_Y_asymptotics}) show that there exists a constant $C_2$ such that for all $z \in {\mathcal D}_{\epsilon}$ large enough we have
$|\psi_Y(z)|>C_2 |\exp(-k z) z^{-a}|$. Therefore, for all $n$ large enough we have \begin{eqnarray*}
|\psi_Y(z)|>C_2 e^{k|\textnormal {Re}(z)|}|z|^{-a}, \;\;\; z\in L_6(n). \end{eqnarray*} Using the above estimate and \eqref{estimate_for_De} we find \begin{eqnarray}\label{proof_Wq_4}
|\psi_Y(z)|>C_2 \sin(\epsilon)^a e^{k|\textnormal {Re}(z)|}|\textnormal {Re}(z)|^{-a}, \;\;\; z\in L_6(n). \end{eqnarray} Next, as $n$ increases to $+\infty$, the real part of any $z \in L_6(n)$ decreases to $-\infty$ (see Figure \ref{fig_proof_b}), thus for all $n$
large enough we have $\exp(k|\textnormal {Re}(z)|/2)>|\textnormal {Re}(z)|^a$ for all $z\in L_6(n)$. At the same time, from Figure \ref{fig_proof_b} we see that for all $z\in L_6(n)$ it is true that
$|\textnormal {Re}(z)|> \tan(\epsilon) |\textnormal {Im}(z)|=\tan(\epsilon)(\textnormal {Im}(z_n)+\pi/k)$. Using this fact and (\ref{proof_Wq_4}) we find that there exists a constant $C_3=C_3(\epsilon)$ such that for all $n$ large enough \begin{eqnarray}\label{proof_Wq_5}
|\psi_Y(z)|>C_2 \sin(\epsilon)^a e^{\frac{k}2|\textnormal {Re}(z)|}|\textnormal {Re}(z)|^{-a} e^{\frac{k}2|\textnormal {Re}(z)|}> C_2 \sin(\epsilon)^a e^{\frac{k}2\tan(\epsilon) \textnormal {Im}(z_n) }>C_3 \textnormal {Im}(z_n), \;\;\; z\in L_6(n). \end{eqnarray} Combining (\ref{proof_Wq_2}) and (\ref{proof_Wq_5}) we conclude that there exists a constant $C>0$, such that for all $n$ large enough we have \begin{eqnarray*}
|\psi_Y(z)|> C \textnormal {Im}(z_n), \;\;\; z\in L_2(n,-\infty). \end{eqnarray*} A similar estimate for $L_4(n,-\infty)$ can be obtained in the same way.
Thus setting $z=z(u):=u+{\textnormal i}(\textnormal {Im}(z_n)+\pi/k)$ we obtain \begin{eqnarray*}
&&\bigg | \int\limits_{L_2(n,-\infty)} \frac{e^{zx}}{\psi_Y(z)-q} {\textnormal d} z \bigg|=\bigg | \int\limits_{-\infty}^c \frac{e^{z(u)x}}{\psi_Y(z(u))-q} {\textnormal d} u \bigg| \\&<&
\int\limits_{-\infty}^c \frac{|e^{z(u)x}|}{|\psi_Y(z(u))-q|} {\textnormal d} u<
\int\limits_{-\infty}^c \frac{e^{ux}}{ C |\textnormal {Im}(z_n)|-q} {\textnormal d} u=
\frac{x^{-1} e^{cx}}{ C |\textnormal {Im}(z_n)|-q} \end{eqnarray*} and the right-hand side converges to zero as $n\to +\infty$. Similarly, the integral over $L_4$ vanishes. Thus as $n\to +\infty$ formula (\ref{sum_residues}) becomes \begin{eqnarray*} \frac{1}{2\pi {\textnormal i}}\int\limits_{c+{\textnormal i} {\mathbb R}} \frac{e^{zx}}{\psi_Y(z)-q} {\textnormal d} z=\frac{e^{\Phi(q)x}}{\psi_Y'(\Phi(q))} + \frac{e^{-\zeta_0 x}}{\psi_Y'(-\zeta_0)} + 2 \sum\limits_{n\ge 1} \textnormal {Re}\left[ \frac{e^{-\zeta_n x}}{\psi_Y'(-\zeta_n)} \right] \end{eqnarray*} and the left-hand side is equal to $W^{(q)}(x)$ due to the Bromwich integral formula (\ref{proof_Wq_1}).
Next, from Proposition \ref{prop_asymptotic_psi} we know that the asymptotic formula for $\psi_Y'(z)$ can be obtained by differentiating (\ref{psi_Y_asymptotics}). Therefore, using the asymptotic expression (\ref{zeta_asymptotics}) for $\zeta_n$ we find that \begin{eqnarray*}
\big | \psi_Y'(-\zeta_n) \big |=k|B| \left( \frac{2n\pi}{k} \right)^{b}+o\left(n^b\right). \end{eqnarray*} Similarly, from (\ref{zeta_asymptotics}) we find that there exists a constant $c$ such that \begin{eqnarray*}
|e^{-\zeta_n x}|\sim c n^{-\frac{x}{k}(a+b)}, \;\;\; n\to +\infty, \end{eqnarray*} thus the terms of the series in the right-hand side of (\ref{eqn_W^q}) decrease as $n^{-b-x(a+b)/k}$. According to (\ref{eqn_AaBb}) we always have $b\ge 1$, which implies that the series in the right-hand side of (\ref{eqn_W^q}) converges on ${\mathbb R}^+$ and uniformly on $[\epsilon,\infty)$ for each $\epsilon>0$.
$\sqcap\kern-8.0pt\hbox{$\sqcup$}$\\
{\it Proof of Proposition \ref{Laplace_exponent_X}:} First, let us assume that $\alpha<1$ and $\hat \alpha<1$. We start with the L\'evy-Khintchine formula (\ref{Levy_Khinthine2}) with the cutoff function $h(x)\equiv 0$, which gives us
\begin{eqnarray}\label{proof_Laplace_exp_X_1}
\psi(z)=\frac12 \sigma^2 z^2 +\mu z + \hat C \hat \alpha \int\limits_{-\infty}^{0} \left( e^{z x}-1 \right) e^{\hat \beta x} |x|^{-1-\hat \alpha} {\textnormal d} x + C \alpha \int\limits_{0}^{k} \left( e^{z x}-1 \right)e^{- \beta x} x^{-1- \alpha} {\textnormal d} x. \end{eqnarray} The first integral in (\ref{proof_Laplace_exp_X_1}) can be evaluated as follows: \begin{eqnarray*}
&&\hat C \hat \alpha \int\limits_{-\infty}^{0} \left( e^{z x}-1 \right) e^{\hat \beta x} |x|^{-1-\hat \alpha} {\textnormal d} x=
\hat C \hat \alpha \int\limits_{-\infty}^{0} \left( e^{(z+\hat \beta)x}-1 \right) |x|^{-1-\hat \alpha} {\textnormal d} x -
\hat C \hat \alpha \int\limits_{-\infty}^{0} \left( e^{\hat \beta x}-1 \right) |x|^{-1-\hat \alpha} {\textnormal d} x \\ &=& - \hat C \Gamma(1-\hat\alpha) (\hat \beta+z)^{\hat \alpha}+ \hat C \Gamma(1-\hat\alpha) \hat \beta^{\hat \alpha}, \end{eqnarray*} where we have used (\ref{int_identity_1}) in the final step. Similarly, the second integral in (\ref{proof_Laplace_exp_X_1}) can be evaluated with the help of (\ref{eqn_inc_gamma}) and (\ref{eqn_F_alpha_k_z}): \begin{eqnarray*} C \alpha \int\limits_{0}^{k} \left( e^{z x}-1 \right)e^{- \beta x} x^{-1- \alpha} {\textnormal d} x&=& C \alpha \int\limits_{0}^{k} \left( e^{(z-\beta) x}-1 \right) x^{-1- \alpha} {\textnormal d} x- C \alpha \int\limits_{0}^{k} \left( e^{-\beta x}-1 \right) x^{-1- \alpha} {\textnormal d} x \\ &=&C F(\alpha,k,z-\beta)-C F(\alpha,k,-\beta)\\&=& -Ck^{-\alpha} {}_1F_1(-\alpha,1-\alpha,-k(\beta-z))+C k^{-\alpha}{}_1F_1(-\alpha,1-\alpha,-k\beta)\\&=& C \alpha (\beta-z)^{\alpha} \gamma(-\alpha,k(\beta-z))-C \alpha \beta^{\alpha} \gamma(-\alpha,k\beta). \end{eqnarray*}
This ends the proof in the case $\alpha<1$ and $\hat \alpha<1$. When $\alpha>1$ or $\hat \alpha>1$ the proof is very similar; the only difference is that we would use the cutoff function $h(x)\equiv 1$ in (\ref{Levy_Khinthine2}) and perform an extra integration by parts in (\ref{int_identity_1}). We leave all the details to the reader.
$\sqcap\kern-8.0pt\hbox{$\sqcup$}$\\
\end{document} |
\begin{document}
\begin{center} \large \bf Birationally rigid Fano-Mori fibre spaces \end{center}
\centerline{A.V.Pukhlikov}
\parshape=1 3cm 10cm \noindent {\small \quad\quad\quad \quad\quad\quad\quad \quad\quad\quad {\bf }\newline In this paper we prove the birational rigidity of Fano-Mori fibre spaces $\pi\colon V\to S$, every fibre of which is a Fano complete intersection of index 1 and codimension $k\geqslant 3$ in the projective space ${\mathbb P}^{M+k}$ for $M$ sufficiently large, satisfying certain natural conditions of general position, under the assumption that the fibre space $V\slash S$ is sufficiently twisted over the base. The dimension of the base $S$ is bounded from above by a constant depending only on the dimension $M$ of the fibre (as the dimension of the fibre $M$ grows, this constant grows as $\frac12 M^2$).
Bibliography: 28 items.}
AMS classification: 14E05, 14E07
Key words: Fano variety, Mori fibre space, birational map, birational rigidity, linear system, maximal singularity, multi-quadratic singularity.
\section*{Introduction}
{\bf 0.1. Fano complete intersections.} In the present paper we study the birational geometry of algebraic varieties, fibred into Fano complete intersections of codimension $k\geqslant 3$ (fibrations into Fano hypersurfaces were studied in \cite{Pukh15a}, into Fano complete intersections of codimension 2 in \cite{Pukh2022a}). We start with a description of fibres of these fibre spaces. Let us fix an integer $k\geqslant 3$ and set $$
\varepsilon(k)=\mathop{\rm min} \left\{a\in {\mathbb Z}\,\left|\, a\geqslant 1, \left(1+\frac{1}{k}\right)^a\geqslant 2\right.\right\}. $$ Now let us fix $M\in{\mathbb Z}$ satisfying the inequality \begin{equation}\label{14.11.22.1} M\geqslant 10 k^2+8k+2\varepsilon(k)+3. \end{equation} We denote the right-hand side of this inequality by the symbol $\rho(k)$. Let $$ \underline{d}=(d_1,\dots, d_k) $$ be an ordered tuple of integers, $$ 2\leqslant d_1\leqslant d_2\leqslant\dots\leqslant d_k, $$ satisfying the equality $$ d_1+\dots +d_k=M+k. $$ The Fano varieties considered in this paper are complete intersections of type $\underline{d}$ in the complex projective space ${\mathbb P}^{M+k}$. More precisely, let the symbol ${\cal P}_{a,N}$ stand for the space of homogeneous polynomials of degree $a\in {\mathbb Z}_+$ in $N\geqslant 1$ variables. Set $$ {\cal P}=\prod^k_{i=1} {\cal P}_{d_i,M+k+1} $$ to be the space of all tuples $$ \underline{f}=(f_1,\dots, f_k) $$ of homogeneous polynomials of degree $d_1$, \dots, $d_k$ on ${\mathbb P}^{M+k}$. If for $\underline{f}\in {\cal P}$ the scheme of common zeros of the polynomials $f_1,\dots, f_k$ is an irreducible reduced factorial variety $F=F(\underline{f})$ of dimension $M$ with terminal singularities, then $F$ is a primitive Fano variety: $$ \mathop{\rm Pic} F={\mathbb Z} H_F,\quad K_F=-H_F, $$ where $H_F$ is the class of a hyperplane section of the variety $F$ (the Lefschetz theorem). Assuming that this is the case, let us give the following definition.
{\bf Definition 0.1.} The variety $F$ is {\it divisorially canonical}, if for every effective divisor $D\sim nH_F$ the pair $(F,\frac{1}{n}D)$ is canonical, that is, for every exceptional divisor $E$ over $F$ the inequality $$ \mathop{\rm ord}\nolimits_E D\leqslant n\cdot a(E) $$ holds, where $a(E)$ is the discrepancy of $E$ with respect to $F$.
Below is the first main result of the present paper.
{\bf Theorem 0.1.} {\it There exists a Zariski open subset ${\cal F}\subset{\cal P}$ such that for every tuple $\underline{f}\in{\cal F}$ the scheme of common zeros of the tuple $\underline{f}$ is an irreducible reduced factorial divisorially canonical variety $F(\underline{f})$ of dimension $M$ with terminal singularities, and the codimension of the complement ${\cal P}\setminus{\cal F}$ satisfies the inequality} $$ \mathop{\rm codim}(({\cal P}\setminus{\cal F})\subset{\cal P}) \geqslant M-k+5+{M-\rho(k)+2 \choose 2}. $$
(Thus for a fixed $k$ and growing $M$ the codimension of the complement ${\cal P}\setminus{\cal F}$ grows as $\frac12 M^2$.)
It is convenient to express the property of divisorial canonicity in terms of the {\it global canonical threshold} of the variety $F$.
Recall that for a Fano variety $X$ with the Picard number 1 and terminal ${\mathbb Q}$-factorial singularities its global canonical threshold $\mathop{\rm ct}(X)$ is the supremum of $\lambda\in{\mathbb Q}_+$ such that for every effective divisor $D\sim -nK_X$ (here $n\in{\mathbb Q}_+$) the pair $\left(X,\frac{\lambda}{n}D\right)$ is canonical. Therefore, Theorem 0.1 claims that for every $\underline{f}\in{\cal F}$ the inequality $\mathop{\rm ct} (F(\underline{f}))\geqslant 1$ holds.
If in the definition of the global canonical threshold instead of
``for every effective divisor $D\sim -nK_X$'' we put ``for a general divisor $D$ in any linear system $\Sigma\subset|-nK_X|$ with no fixed components'', we get the definition of the {\it mobile canonical threshold} $\mathop{\rm mct}(X)$; obviously, $\mathop{\rm mct}(X)\geqslant\mathop{\rm ct}(X)$. The inequality $\mathop{\rm mct}(X)\geqslant 1$ is equivalent to the birational superrigidity of the Fano variety $X$, see \cite{Ch05c}. If in the definition of the global canonical threshold we replace the requirement that the pair $(X,\frac{\lambda}{n}D)$ be canonical by the requirement that it be log canonical, we get the definition of the {\it global log canonical threshold} $\mathop{\rm lct}(X)$; again, $\mathop{\rm lct}(X)\geqslant\mathop{\rm ct}(X)$.
For simplicity we write $F\in{\cal F}$ instead of $F=F(\underline{f})$ for $\underline{f}\in{\cal F}$.
{\bf 0.2. Fano-Mori fibre spaces.} By a {\it Fano-Mori fibre space} we mean a surjective morphism of projective varieties $$ \pi\colon V\to S, $$ where $\dim V\geqslant 3 + \dim S$, the base $S$ is non-singular and rationally connected, and the following conditions are satisfied:
(FM1) every scheme fibre $F_s=\pi^{-1}(s)$, $s\in S$, is an irreducible reduced factorial Fano variety with terminal singularities and the Picard group $\mathop{\rm Pic} F_s\cong {\mathbb Z}$,
(FM2) the variety $V$ itself is factorial and has at most terminal singularities,
(FM3) the equality $$ \mathop{\rm Pic} V={\mathbb Z} K_V\oplus\pi^* \mathop{\rm Pic}S $$ holds.
So Fano-Mori fibre spaces are Mori fibre spaces with additional very good properties.
{\bf Definition 0.2.} A Fano-Mori fibre space $\pi\colon V\to S$ is {\it stable with respect to fibre-wise birational modifications}, if for every birational morphism $\sigma_S\colon S^+\to S$, where $S^+$ is a non-singular projective variety, the morphism $$ \pi_+\colon V^+=V\mathop{\times}\nolimits_S S^+\to S^+ $$ is a Fano-Mori fibre space.
We will consider birational maps $\chi\colon V\dashrightarrow V'$, where $V$ is the total space of a Fano-Mori fibre space and $V'$ is the total space of a fibre space $\pi'\colon V'\to S'$ which belongs to one of the two classes:
(1) {\it rationally connected fibre spaces}, that is, $V'$ and $S'$ are non-singular and the base $S'$ and a fibre of general position $(\pi')^{-1}(s')$ are rationally connected,
(2) {\it Mori fibre spaces}, where $V'$ and $S'$ are projective and the variety $V'$ has ${\mathbb Q}$-factorial terminal singularities.
For a birational map $\chi\colon V\dashrightarrow V'$, where $V'/S'$ is a rationally connected fibre space, we want to answer the question: is it fibre-wise, that is, is there a rational dominant map $\beta\colon S\dashrightarrow S'$, making the diagram \begin{equation}\label{15.11.22.1} \begin{array}{rcccl}
& V & \stackrel{\chi}{\dashrightarrow} & V' & \\ \pi\!\!\!\!\! & \downarrow & & \downarrow & \!\!\!\!\!\pi' \\
& S & \stackrel{\beta}{\dashrightarrow} & S' \end{array} \end{equation} a commutative one, that is, $\pi'\circ\chi=\beta\circ\pi$?
For a birational map $\chi\colon V\dashrightarrow V'$, where $V'/S'$ is a Mori fibre space with the additional properties (2) (only such Mori fibre spaces are considered in this paper), we want to answer the question: is there a {\it birational} map $\beta\colon S\dashrightarrow S'$, for which the diagram (\ref{15.11.22.1}) is commutative? If the answer to this question is always affirmative (that is, it is affirmative for every fibre space from the class (2)), then the fibre space $V/S$ is {\it birationally rigid}.
Now let us state the second main result of the present paper.
{\bf Theorem 0.2.} {\it Assume that a Fano-Mori fibre space $\pi\colon V\to S$ is stable with respect to fibre-wise birational modifications, and moreover,
{\rm (i)} for every point $s\in S$ the fibre $F_s$ satisfies the inequalities $\mathop{\rm lct} (F_s)\geqslant 1$ and $\mathop{\rm mct} (F_s)\geqslant 1$,
{\rm (ii)} (the $K$-condition) every mobile (that is, with no fixed components) linear system on $V$ is a subsystem of a complete linear system $|-nK_V+\pi^* Y|$, where $Y$ is a pseudoeffective class on $S$,
{\rm (iii)} for every family ${\overline{\cal C}}$ of irreducible curves on $S$, sweeping out a dense subset of the base $S$, and $\overline{C}\in{\overline{\cal C}}$, no positive multiple of the class $$ -(K_V\cdot \pi^{-1}(\overline{C}))-F\in A^{\dim S} V, $$ where $A^iV$ is the numerical Chow group of classes of cycles of codimension $i$ on $V$ and $F$ is the class of a fibre of the projection $\pi$, is represented by an effective cycle on $V$.
Then for every rationally connected fibre space $V'\slash S'$ every birational map $\chi\colon V\dashrightarrow V'$ (if such maps exist) is fibre-wise, and the fibre space $V\slash S$ itself is birationally rigid.}
By what was said in Subsection 0.1, the assumption (i) can be replaced by the single inequality $\mathop{\rm ct}(F_s)\geqslant 1$ for every $s\in S$, that is, it is sufficient to assume that every fibre of the fibre space $V\slash S$ is a divisorially canonical variety.
As we will see from the proof of Theorem 0.2, instead of the conditions (ii) and (iii) it is sufficient to require that for every family $\overline{\cal C}$ of irreducible curves on $S$, sweeping out a dense subset, and $\overline{C}\in \overline{\cal C}$ the class $$ -N(K_V\cdot \pi^{-1}(\overline{C}))-F\in A^{\dim S} V $$ is not represented by an effective cycle on $V$ for any $N\geqslant 1$. The last condition is especially easy to verify: it is enough to have a numerically effective $\pi$-ample class $H_V$ on $V$, satisfying the inequality \begin{equation}\label{18.11.22.1} \left(K_V\cdot \pi^{-1}(\overline{C})\cdot H_V^{\dim V-\dim S}\right)\geqslant 0 \end{equation} for every dense family $\overline{\cal C}\ni\overline{C}$.
{\bf 0.3. An explicit construction of a fibre space.} Now let us construct a large class of Fano-Mori fibre spaces, satisfying the conditions of Theorem 0.2. Let $S$ be a non-singular projective rationally connected positive-dimensional variety and $\pi_X\colon X\to S$ a locally trivial fibration with the fibre ${\mathbb P}^{M+k}$, where $k$ and $M$ are the same as in Subsection 0.1. We say that the subvariety $V\subset X$ of codimension $k$ is a {\it fibration into complete intersections of type} $\underline{d}$, if the base $S$ can be covered by Zariski open subsets $U$, over which the fibration $\pi_X$ is trivial, $\pi^{-1}_X(U)\cong U\times{\mathbb P}^{M+k}$, and for every $U$ there is a regular map $$ \Phi_{U}\colon U\to{\cal P}, $$ such that $V\cap\,\pi^{-1}_X(U)$ in the sense of the above-mentioned trivialization is the scheme of common zeros of a tuple $$ \underline{f}(s)=\Phi_{U}(s)=(f_1(x_*,s),\dots,f_k(x_*,s)), $$ where $x_*$ are homogeneous coordinates on ${\mathbb P}^{M+k}$ and $s$ runs through $U$.
Below (in \S 1) it will be clear that the open subset ${\cal F}$ from Theorem 0.1 is invariant under the action of the group $\mathop{\rm Aut}{\mathbb P}^{M+k}$. For that reason, the following definition makes sense.
{\bf Definition 0.3.} A fibration $V\subset X$ into complete intersections of type $\underline{d}$ is a ${\cal F}$-{\it fibration}, if for any trivialization of the bundle $\pi_X$ over an open set $U\subset S$ we have $\Phi_{U}(U)\subset{\cal F}$.
Obviously, if the inequality \begin{equation}\label{18.11.22.2} \mathop{\rm dim} S\leqslant M-k+4+{M-\rho(k)+2 \choose 2} \end{equation} holds, then we may assume that $V$ is an ${\cal F}$-fibration. Set
$\pi=\pi_X|_V$. Now from Theorems 0.1 and 0.2 it is easy to obtain the third main result of the present paper.
{\bf Theorem 0.3.} {\it Any ${\cal F}$-fibration $\pi\colon V\to S$ constructed above is a Fano-Mori fibre space. If the conditions (ii) and (iii) of Theorem 0.2 hold, then for every rationally connected fibre space $V'/S'$ every birational map $\chi\colon V\dashrightarrow V'$ is fibre-wise, and the fibre space $V\slash S$ itself is birationally rigid.}
{\bf Example 0.1.} Let $H_X$ be a numerically effective divisorial class on $X$, the restriction of which onto the fibre $\pi_X^{-1}(s)\cong {\mathbb P}^{M+k}$ is the class of a hyperplane. Let $\Delta_1$,\dots, $\Delta_k$ be very ample classes on the base $S$. Let us construct an ${\cal F}$-fibration $V/S$ as a complete intersection of $k$ general divisors $$ V=G_1\cap \dots \cap G_k, $$
where $G_i\in|d_i H_X+\pi_X^*\Delta_i|$. Let us find out when $V/S$ satisfies the conditions (ii) and (iii) of Theorem 0.2. Write $$ K_X=-(M+k+1) H_X+\pi^*_X \Delta_X, $$ then we get $$ K_V=\left.\left(-H_X+\pi^*_X\left(\Delta_X+\sum^k_{i=1}\Delta_i\right)
\right)\right|_V. $$ It is easy to check that the inequality (\ref{18.11.22.1}) in this case takes the form of the estimate $$ \left(\left(\Delta_X+\sum^k_{i=1}\left(1-\frac{1}{d_i}\right)\Delta_i\right)\cdot \overline{C}\right)\geqslant \left(H_X^{M+k+1}\cdot \pi_X^{-1}(\overline{C})\right), $$
where for the class $H_V$ we took $H_X|_V$. This inequality must be satisfied for every dense family $\overline{\cal C}\ni\overline{C}$.
Let us consider a very particular case when $X={\mathbb P}^m\times{\mathbb P}^{M+k}$ and $G_i$ are divisors of bi-degree $(m_i,d_i)$, $i=1,\dots,k$. Taking for $H_X$ the pull-back to $X$ of the class of a hyperplane in ${\mathbb P}^{M+k}$, we get that the last inequality is equivalent to the numerical inequality \begin{equation}\label{18.11.22.3} \sum^k_{i=1}\left(1-\frac{1}{d_i}\right)m_i\geqslant m+1. \end{equation} If it is satisfied and the dimension $m=\mathop{\rm dim} S$ satisfies the inequality (\ref{18.11.22.2}), then the intersection $V=G_1\cap\dots\cap G_k$ of general (in the sense of Zariski topology) divisors of bi-degree $(m_1,d_1),\dots,(m_k,d_k)$, fibred over $S={\mathbb P}^m$, is a birationally rigid Fano-Mori fibre space and every birational map of $V$ onto the total space of a rationally connected fibre space is fibre-wise. The inequality (\ref{18.11.22.3}) shows that this claim holds for almost all tuples $(m_1,\dots,m_k)\in{\mathbb Z}^k_+$ (except for finitely many of them), that is, for almost all families of Fano-Mori fibre spaces, obtained by means of this construction. Note that the condition (\ref{18.11.22.3}) is close to being a criterion: if $$ m_1+\dots+m_k\leqslant m, $$ then the projection of $V$ onto ${\mathbb P}^{M+k}$ defines on $V$ a structure of a Fano-Mori fibre space (and a rationally connected fibre space), which is ``transversal'' to the original structure $\pi\colon V\to S$ (and is not fibre-wise), so that in this case $V/S$ is not birationally rigid.
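For illustration only (the particular values below are sample parameters, chosen just to make the arithmetic of (\ref{18.11.22.3}) visible), take $k=2$, $d_1=d_2=3$ and the base $S={\mathbb P}^2$, that is, $m=2$ (assuming $M$ is large enough for (\ref{18.11.22.2}) to hold). Then $1-\frac{1}{d_i}=\frac23$ and the inequality (\ref{18.11.22.3}) reads $$ \frac23(m_1+m_2)\geqslant 3, $$ that is, $m_1+m_2\geqslant 5$, whereas the ``transversal'' structure described above appears as soon as $m_1+m_2\leqslant 2$.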
{\bf 0.4. The structure of the paper.} The paper is organized in the following way. In \S 1 we produce the explicit local conditions defining the open subset ${\cal F}\subset{\cal P}$. The proof of divisorial canonicity of a variety $F\in{\cal F}$ (that is, of the inequality $\mathop{\rm ct}(F)\geqslant 1$) is reduced in \S 1 to a number of technical facts that will be shown in the subsequent sections (\S\S 3-7). In \S 2 we show Theorem 0.2.
The proof of Theorem 0.1 consists of several pieces. The fact that the local conditions on the singularities that a variety $F\in{\cal F}$ can have ({\it multi-quadratic singularities}, see Subsection 1.2) guarantee that the variety $F$ is factorial and its singularities are terminal is proven in \S 4, where we give a general definition of multi-quadratic singularities and study their properties. The estimate for the codimension of the complement ${\cal P}\setminus {\cal F}$ (which is very important for constructing families of Fano-Mori fibre spaces, satisfying the assumptions of Theorem 0.2) is shown in \S 8. However, the main (and the hardest) part of the proof of Theorem 0.1 is to show that a variety $F\in{\cal F}$ is divisorially canonical. We assume that for some effective divisor $D\sim nH_F$ the pair $(F,\frac{1}{n}D)$ is not canonical, that is, for some exceptional divisor $E$ over $F$ the inequality $$ \mathop{\rm ord}\nolimits_E D>n\cdot a(E) $$ holds. Now we have to show that this assumption leads to a contradiction. In Subsections 1.3-1.6 it is shown how (using the inequalities for the multiplicity of subvarieties of the variety $F$ at a given point, proven in \S 7) to obtain a contradiction in the case when a point of general position $o\in B$, where $B$ is the centre of $E$ on $F$, either is non-singular on $F$, or is a quadratic singularity. The hardest task is to obtain a contradiction when the point $o$ is a multi-quadratic singularity of the variety $F$. A plan of solving this problem is given in Subsection 1.7, where we introduce the concept of a {\it working triple} and describe the procedure of constructing a sequence of subvarieties of the variety $F$, in which each subvariety is a hyperplane section of the previous one and the last subvariety delivers the desired contradiction.
This program is realized in \S 3, where we study the properties of working triples; however, a number of key technical facts are only stated there: their proofs are postponed for greater clarity of exposition. These key facts are shown in \S\S 5,6 (and the proof makes use of facts on linear subspaces contained in complete intersections of quadrics, proven in Subsection 4.5).
Finally, in \S 7 we prove the estimates for the multiplicities of certain subvarieties of the variety $F$ at given points in terms of the degrees of these subvarieties in ${\mathbb P}^{M+k}$. Here we use the well known technique of hypertangent divisors. For the purposes of our proof of Theorem 0.1 we have to somewhat modify this technique.
{\bf 0.5. General remarks.} The birational rigidity of Fano-Mori fibre spaces over a positive-dimensional base has been one of the most important topics in birational geometry over the past 40 years. For its history and place in the context of the modern birational geometry of rationally connected varieties, see \cite[Subsection 0.4]{Pukh2022a}. Here we just mention a few recent papers in the areas that are close to the direction to which the present paper belongs.
These areas are: birational rigidity, explicit birational geometry of Mori fibre spaces (including the study of their groups of birational automorphisms and, more broadly, Sarkisov links), the rationality problem, computing and estimating the global canonical thresholds and, related to these problems, the theory of $K$-stability.
In the papers \cite{KrylovOkadaetal22,AbbanKrylov22,Krylov18} important results on birational rigidity and rigidity-type results for fibrations over ${\mathbb P}^1$ were obtained. The paper \cite{Stibitz21} links the Sarkisov program with the problem of estimating the canonical threshold of certain divisors on Fano varieties. The papers \cite{AbbanOkada18,KrylovOkada20} prove the stable non-rationality of very general conic bundles and fibrations into del Pezzo surfaces, respectively, over a higher-dimensional base. The problem of stable rationality for hypersurfaces of various bi-degrees in the products of projective spaces (see Example 0.1 above) is considered in \cite{NicaiseOttem22}. The theory of $K$-stability, which is on the border of birational geometry, is investigated in many papers (especially in the recent past), in particular, see \cite{StibitzZhuang19,Zhuang21,CheltsovPark22,CheltsovDenisovaetal22}; we have mentioned the papers that are closest to birational rigidity-type problems. Finally, there has recently been a lot of development in applying the theory of Sarkisov links and the relations between them to the study of the groups of birational automorphisms of varieties for which this group is very large, see, for instance, \cite{BlancYasinsky20,BlancLamyZ21}.
Getting back to the topic of this paper, we note that its immediate predecessor is \cite{Pukh2022a}, however, that paper investigates the non canonical singularities, the centre of which is contained in the set of bi-quadratic points of the variety (from the technical viewpoint, this is the hardest part of the proof of divisorial canonicity), using the secant varieties of subvarieties of codimension 2 on an intersection of two quadrics. It is not possible to apply this approach to subvarieties of higher codimension on an intersection of $k\geqslant 3$ quadrics, and the present paper is based on a completely different construction (which applies to the bi-quadratic singularities, considered in \cite{Pukh2022a}, as well).
The author is grateful to the members of the Divisions of Algebraic Geometry and Algebra at Steklov Institute of Mathematics for their interest in his work, and also to the colleagues in the Algebraic Geometry research group at the University of Liverpool for general support.
\section{Fano complete intersections}
In this section we describe the local conditions defining the open subset ${\cal F}\subset{\cal P}$ (Subsections 1.2 and 1.4). For a complete intersection $F\in{\cal F}$ the proof of its divisorial canonicity is reduced to a number of technical claims, which will be shown later. A more detailed plan of the proof of Theorem 0.1 is given in Subsection 1.1.
{\bf 1.1. A plan of the proof of Theorem 0.1.} In order to prove Theorem 0.1, one has to give an explicit definition of the open set ${\cal F}\subset{\cal P}$. This definition consists of two groups of conditions, which should be satisfied by the polynomials $f_1,\dots, f_k$ at every point $o\in {\mathbb P}^{M+k}$ at which they all vanish. The first group of conditions is about the singularities of the complete intersection $F(\underline{f})$: they can be quadratic or multi-quadratic of a rank bounded from below. The corresponding definitions and facts are given in Subsection 1.2. Assuming that the conditions of the first group are satisfied, we get that the scheme of common zeros of the polynomials $f_1,\dots, f_k$ is an irreducible reduced factorial variety $F=F(\underline{f})\subset {\mathbb P}^{M+k}$ with terminal singularities, and so $\mathop{\rm Pic} F={\mathbb Z} H_F$ and $K_F=-H_F$, so that the question of whether it is divisorially canonical makes sense.
Assuming that $F$ is not divisorially canonical, let us fix an effective divisor $D_F\sim n(D_F)H_F$, where $n(D_F)\geqslant 1$, such that the pair $$ \left(F,\frac{1}{n(D_F)}D_F\right) $$ is not canonical, that is, there is an exceptional divisor $E$ over $F$, satisfying the Noether-Fano inequality: $$ \mathop{\rm ord}\nolimits_E D_F>n(D_F)\, a(E). $$ We have to show that the existence of such a divisor leads to a contradiction. Let $B\subset F$ be the centre of the exceptional divisor $E$ on $F$. The information about the singularities of the varieties $F$ makes it possible to easily exclude the option when $\mathop{\rm codim}(B\subset F)=2$. This is done in Subsection 1.3.
After that in Subsection 1.4 we produce the second group of local conditions for the tuple of polynomials $\underline{f}\in{\cal F}$: now they are the regularity conditions. Assuming that they are satisfied at every point $o\in F$, we exclude the option $B\not\subset\mathop{\rm Sing}F$ in Subsection 1.5, and in Subsection 1.6 the option that the point $o\in B$ of general position is a quadratic singularity of $F$. In Subsection 1.7 we describe the procedure of excluding the multi-quadratic case, when the point $o\in B$ of general position is a multi-quadratic singularity of the type $2^l$, $l\in\{2,\dots,k\}$. This is the hardest part of the work, which is completed in the subsequent sections.
{\bf 1.2. Multi-quadratic singularities.} Let $o\in{\mathbb P}^{M+k}$ be a point, at which $f_1,\dots,f_k$ all vanish. Let us consider a system of affine coordinates $z_*=(z_1,\dots,z_{M+k})$ with the origin at the point $o$ on an affine chart ${\mathbb A}^{M+k}\subset{\mathbb P}^{M+k}$, containing that point. Write down $$ \begin{array}{ccccc} f_1=f_{1,1}+f_{1,2}+ & \dots & + f_{1,d_1}, & &\\ f_2=f_{2,1}+f_{2,2}+ & \dots & & + f_{2,d_2}, & \\ & \dots & & &\\ f_k=f_{k,1}+f_{k,2}+ & \dots & & & +f_{k,d_k}, \end{array} $$ where we use the same symbols $f_i$ for the non-homogeneous polynomials in $z_*$, corresponding to the original polynomials $f_i$, and $f_{i,a}$ is a homogeneous polynomial of degree $a$ in $z_*$. Obviously, if the linear forms $f_{1,1},\dots,f_{k,1}$ are linearly independent, then in a neighborhood of the point $o$ the scheme of common zeros of the polynomials $f_1,\dots,f_k$ is a non-singular complete intersection of codimension $k$. In order to give the definition of a multi-quadratic singularity, we will need the concept of the rank of a tuple of quadratic forms.
{\bf Definition 1.1.} (\cite{Pukh2022a}) {\it The rank of the tuple of quadratic forms} $q_1,\dots,q_l$ in $N$ variables is the number $$
\mathop{\rm rk}(q_1,\dots,q_l)=\mathop{\rm min}\{\mathop{\rm rk}(\lambda_1 q_1+\dots+\lambda_l q_l)\,|\,(\lambda_1,\dots,\lambda_l)\neq (0,\dots,0)\}. $$ Obviously, $\mathop{\rm rk}(q_1,\dots,q_l)\leqslant N$. For that reason, in the sequel the inequality $\mathop{\rm rk}(q_*)\geqslant a$ means implicitly that the forms $q_i$ depend on a sufficient $(\geqslant a)$ number of variables.
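As a simple illustration of this notion (the particular forms below are sample ones, not used in the sequel), take $N=2$, $q_1=x_1^2+x_2^2$ and $q_2=x_1x_2$: each of the two forms has rank 2, but the form $\lambda_1 q_1+\lambda_2 q_2$ degenerates for $\lambda_2=\pm 2\lambda_1$, where it equals $\lambda_1(x_1\pm x_2)^2$, so that $$ \mathop{\rm rk}(q_1,q_2)=1. $$ Thus the rank of a tuple can be strictly smaller than the rank of each of its members.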
Take $l\in\{1,2,\dots,k\}$.
{\bf Definition 1.2.} The tuple $\underline{f}$ has at the point $o$ a {\it multi-quadratic singularity of type} $2^l$ of rank $a$, if the following conditions are satisfied:
\begin{itemize}
\item $\mathop{\rm dim}\langle f_{1,1},\dots,f_{k,1}\rangle=k-l$ (and in order to simplify the notations we assume that the linear forms $$ f_{l+1,1},\dots, f_{k,1} $$ are linearly independent),
\item the rank of the tuple of quadratic forms $$ f^*_{i,2}=f_{i,2}-\sum^k_{j=l+1}\lambda_{i,j}f_{j,2}, $$ $i=1,\dots,l$, where $\lambda_{i,j}\in{\mathbb C}$ are defined by the equalities $$ f_{i,1}=\sum^k_{j=l+1}\lambda_{i,j}f_{j,1}, $$ is equal to the number $a$. \end{itemize}
Now the first condition, defining the subset ${\cal F}\subset{\cal P}$, is stated in the following way.
(MQ1) For every point $o\in{\mathbb P}^{M+k}$, such that $$ f_1(o)=\dots=f_k(o)=0 $$ either the linear forms $f_{1,1},\dots,f_{k,1}$ are linearly independent, or $\underline{f}$ has at the point $o$ a multi-quadratic singularity of type $2^l$, where $l\in\{1,2,\dots,k\}$, of rank $$ \geqslant 2l+4k+2\varepsilon(k)-1. $$
{\bf Theorem 1.1.} {\it Assume that $\underline{f}$ satisfies the condition (MQ1). Then the scheme of common zeros of the polynomials $f_1,\dots,f_k$ is an irreducible reduced factorial variety $F=F(\underline{f})$, a complete intersection of codimension $k$ with terminal singularities, and, moreover,} $$ \mathop{\rm codim}(\mathop{\rm Sing}F\subset F)\geqslant 4k+2\varepsilon(k). $$
{\bf Proof} is given in \S 4 (Subsections 4.1-4.3).
Assume that $\underline{f}$ satisfies the condition (MQ1). For a point $o\in F=F(\underline{f})$ the symbol $T_oF$ stands for the subspace $\{f_{1,1}=\dots=f_{k,1}=0\}\subset{\mathbb C}^{M+k}$. For the proof of Theorem 0.1 we will need one more property of the tuple $\underline{f}$, which we include in the definition of the subset ${\cal F}$.
(MQ2) For any point $o\in F$, which is a multi-quadratic singularity of type $2^l$, where $l\geqslant 2$, the rank of the tuple of quadratic forms $$
f_{1,2}|_{T_oF},\dots,f_{k,2}|_{T_oF} $$ is at least $10k^2+8k+2\varepsilon(k)+5$.
The condition (MQ2) for multi-quadratic points of type $2^l$ with $l\geqslant 2$ implies the condition (MQ1), because the rank of a quadratic form, restricted to a hyperplane, drops at most by 2; however, for the convenience of reference we state the conditions (MQ1) and (MQ2) independently of each other. These conditions are used in the proof of Theorem 0.1 in different ways.
So every tuple $\underline{f}\in{\cal F}$ satisfies (MQ1) and (MQ2).
{\bf 1.3. Subvarieties of codimension 2.} Following the plan, given in Subsection 1.1, let us fix an effective divisor $D_F\sim n(D_F)H_F$, $n(D_F)\geqslant 1$, such that the pair $(F,\frac{1}{n(D_F)}D_F)$ is not canonical. By the symbol $$ \mathop{\rm CS}\left(F,\frac{1}{n(D_F)}D_F\right) $$ we denote the union of the centres on $F$ of all exceptional divisors over $F$, satisfying the Noether-Fano inequality (that is to say, of all non-canonical singularities of that pair). This is a closed subset of $F$. Let $B$ be an irreducible component of maximal dimension of that set.
{\bf Proposition 1.1.} {\it The following inequality holds:} $\mathop{\rm codim}(B\subset F)\geqslant 3$.
{\bf Proof.} Assume the converse: $\mathop{\rm codim}(B\subset F)=2$. Then $B\not\subset\mathop{\rm Sing}F$. Moreover, let $P\subset{\mathbb P}^{M+k}$ be a general linear subspace of codimension $2k+2$. Theorem 1.1 implies that $P\cap\mathop{\rm Sing}F=\emptyset$, so that $F\cap P$ is a non-singular complete intersection of type $\underline{d}$ in $P\cong{\mathbb P}^{2k+2}$. Furthermore, the pair $$
\left(F\cap P,\frac{1}{n(D_F)}D_F|_{F\cap P}\right) $$ is not canonical, and the irreducible subvariety $B\cap P$ is an irreducible component of maximal dimension of the set $$
\mathop{\rm CS}\left(F\cap P,\frac{1}{n(D_F)}D_F|_{F\cap P}\right), $$ so that (as $F\cap P$ is non-singular) $$
\mathop{\rm mult}\nolimits_{B\cap P}D_F|_{F\cap P}>n(D_F). $$
However, $D_F|_{F\cap P}\sim n(D_F)H_{F\cap P}$ (where $H_{F\cap P}$ is the class of a hyperplane section of $F\cap P$), so that by \cite[Proposition 3.6]{Pukh06b} or \cite{Suzuki15} we get a contradiction, proving the proposition. Q.E.D.
{\bf 1.4. Regularity conditions.} In order to continue the proof of Theorem 0.1, we need a second group of conditions defining the set ${\cal F}$. Let $o\in F$ be a point. We use the notations of Subsection 1.2. By the symbol $T_oF$ we denote the {\it linear} tangent space $$ \{f_{1,1}=\dots=f_{k,1}=0\}\subset{\mathbb C}^{M+k}, $$ and by the symbol ${\mathbb P}(T_oF)$ its projectivization. Let ${\cal S}=(h_1,\dots,h_M)$ be the sequence of homogeneous polynomials $$
f_{i,j}|_{{\mathbb P}(T_oF)}, $$ where $j\geqslant 2$, placed in the lexicographic order: $(i_1,j_1)$ precedes $(i_2,j_2)$ if $j_1<j_2$, or if $j_1=j_2$ and
$i_1<i_2$. By the symbol ${\cal S}[-m]$ we denote the sequence ${\cal S}$ with the last $m$ members removed. Finally, the symbol ${\cal S}[-m]|_{\Pi}$ stands for the restriction of that sequence (that is, the restriction of each of its members) onto a linear subspace $\Pi\subset{\mathbb P}(T_oF)$. The regularity conditions depend on the type of the singularity $o\in F$.
First, let the point $o\in F$ be non-singular, so that ${\mathbb P}(T_oF)\cong{\mathbb P}^{M-1}$. In that case the regularity condition is stated in the following way.
(R1) The sequence $$
{\cal S}[-(k+\varepsilon(k)+3)]|_{\Pi} $$ is regular for every subspace $\Pi\subset{\mathbb P}(T_oF)$ of codimension $k+\varepsilon(k)-1$.
The condition (R1) is assumed for every non-singular point $o\in F$. It implies the following key fact.
{\bf Theorem 1.2.} {\it Let $P\subset{\mathbb P}^{M+k}$ be an arbitrary linear subspace of codimension $\leqslant k+ \varepsilon(k)-1$. Then for every non-singular point $o\in F\cap P$ and every prime divisor $Y\sim n(Y)H_{F\cap P}$ on $F\cap P$ the inequality} $$ \mathop{\rm mult}\nolimits_oY\leqslant2n(Y) $$ {\it holds.}
{\bf Proof} is given in \S 7 (Subsections 7.1, 7.2).
Now let $o\in F$ be a quadratic singularity (this case corresponds to the value $l=1$ in Definition 1.2). Here ${\mathbb P}(T_oF)\cong{\mathbb P}^M$. In this case the regularity condition is stated as follows.
(R2) The sequence $$
{\cal S}[-4]|_{\Pi} $$ is regular for every hyperplane $\Pi\subset{\mathbb P}(T_oF)$.
The condition (R2) is assumed for every quadratic singular point $o\in F$ and implies the following key fact.
{\bf Theorem 1.3.} {\it Let $o\in F$ be a quadratic singularity and $W\ni o$ the section of $F$ by a hyperplane that is not tangent to $F$ at the point $o$, and $Y\sim n(Y)H_W$ a prime divisor on $W$. Then the following inequality holds:} $$ \mathop{\rm mult}\nolimits_oY\leqslant 4n(Y). $$
{\bf Proof} is given in \S 7 (Subsection 7.3).
(The symbol $H_W$ stands for the class of a hyperplane section of the variety $W$; the linear form, defining the hyperplane that cuts out $W$, is not a linear combination of the forms $f_{1,1},\dots,f_{k,1}$.)
Now let $o\in F$ be a multi-quadratic point of type $2^l$, where $l\in\{2,\dots,k\}$. Here we will need two regularity conditions. In the first of them the symbol $T_oF$ means the projective closure of the linear subspace $$ \{f_{1,1}=\dots=f_{k,1}=0\}\subset{\mathbb C}^{M+k} $$ in ${\mathbb P}^{M+k}$.
(R3.1) For every subspace $P\subset T_oF$ of codimension $\varepsilon(k)$, containing the point $o$, the scheme of common zeros of the polynomials $$
f_1|_P,\dots,f_k|_P,\quad f_{i,2}|_P\quad\mbox{for all}\quad i: d_i\geqslant 3, $$ is an irreducible reduced subvariety of codimension $k+k_{\geqslant 3}$ in $P$, where $$
k_{\geqslant 3}=\sharp\{i=1,\dots,k\,|\, d_i\geqslant 3\}. $$ Note that in the condition (R3.1) the homogeneous polynomials $f_{i,2}$ in the {\it affine} coordinates $z_*$ are considered as quadratic forms in {\it homogeneous} coordinates on ${\mathbb P}^{M+k}$.
In the second regularity condition for multi-quadratic points the symbol $T_oF$ means a linear subspace in ${\mathbb C}^{M+k}$.
(R3.2) For every linear subspace $\Pi\subset{\mathbb P}(T_oF)$ of codimension $\varepsilon(k)$ the sequence $$
{\cal S}[-m^*]|_{\Pi} $$ is regular, where $m^*=\mathop{\rm max}\{\varepsilon(k)+4-l,0\}$.
The conditions (R3.1) and (R3.2) are assumed for every multi-quadratic singular point $o\in F$. They imply the following key inequality. In Theorem 1.4, stated below, the symbol $T_oF$ stands for the projective closure of the embedded tangent space, that is, a linear subspace in ${\mathbb P}^{M+k}$, containing the point $o$.
{\bf Theorem 1.4.} {\it Let $P\subset T_oF$ be an arbitrary linear subspace of codimension $\leqslant\varepsilon(k)$ and $Y\ni o$ a prime divisor on $F\cap P$, $Y\sim n(Y)H_{F\cap P}$. Then the following inequality holds:} $$ \mathop{\rm mult}\nolimits_o Y\leqslant\frac32\cdot 2^kn(Y). $$
{\bf Proof} is given in \S 7 (Subsections 7.4, 7.5).
(The symbol $H_{F\cap P}$ stands for the class of a hyperplane section of the variety $F\cap P$; we will show below, see \S 4, that $F\cap P$ is an irreducible factorial complete intersection.)
Summing up, let us give a complete definition of the subset ${\cal F}\subset{\cal P}$: it consists of the tuples $\underline{f}$, satisfying the conditions (MQ1,2), the condition (R1) at every non-singular point $o\in F(\underline{f})$, the condition (R2) at every quadratic point $o\in F(\underline{f})$ and the conditions (R3.1,2) at every multi-quadratic point $o\in F(\underline{f})$.
The inequality for the codimension of the complement ${\cal P}\setminus{\cal F}$, given in Theorem 0.1, is shown in \S 8.
{\bf 1.5. Exclusion of the non-singular case.} We carry on with the proof of divisorial canonicity of the variety $F\in{\cal F}$. In the notations of Subsection 1.3 assume that the point of general position $o\in B$ is a non-singular point of $F$. We know (Proposition 1.1), that $\mathop{\rm codim}(B\subset F)\geqslant 3$. Consider a general subspace $P\ni o$ of dimension $k+3$. Then $F\cap P$ is a non-singular three-dimensional variety and the point $o$ is a connected component of the set $$
\mathop{\rm CS}\left(F\cap P,\frac{1}{n(D_F)}D_F|_{F\cap P}\right) $$ (if $\mathop{\rm codim}(B\subset F)\geqslant 4$, then $\mathop{\rm CS}$ can be replaced, by inversion of adjunction, by $\mathop{\rm LCS}$), that is, outside the point $o$ in a neighborhood of that point the pair \begin{equation}\label{23.11.22.1}
\left(F\cap P,\frac{1}{n(D_F)}D_F|_{F\cap P}\right) \end{equation} is canonical. It is well known (see \cite[Proposition 3]{Pukh05} or \cite[Chapter 7, Proposition 2.3]{Pukh13a}) that in this situation either the inequality $$ \mathop{\rm mult}\nolimits_oD_F>2n(D_F) $$ holds, or on the exceptional divisor $E\cong{\mathbb P}^{M-1}$ of the blow up $F^+\to F$ of the point $o$ there is a hyperplane $\Theta\subset E$ (uniquely determined by the pair (\ref{23.11.22.1})), such that the inequality $$ \mathop{\rm mult}\nolimits_oD_F+\mathop{\rm mult}\nolimits_{\Theta}D^+_F> 2n(D_F) $$ holds, where $D^+_F$ is the strict transform of $D_F$ on $F^+$.
The first option is impossible as it contradicts Theorem 1.2. In the second case denote by the symbol $|H-\Theta|$ the projectively $k$-dimensional linear system of hyperplane sections of $F$, a general element of which $W\ni o$ is non-singular at the point $o$ and satisfies the equality $$ W^+\cap E=\Theta. $$ The restriction $D_W=(D_F\circ W)$ is an effective divisor on $W$, and $n(D_W)=n(D_F)$ and the inequality $$ \mathop{\rm mult}\nolimits_oD_W\geqslant\mathop{\rm mult}\nolimits_oD_F+\mathop{\rm mult}\nolimits_{\Theta}D^+_F>2n(D_W) $$ holds, which again contradicts Theorem 1.2. We have shown the following fact.
{\bf Proposition 1.2.} {\it The subvariety $B$ is contained in the singular locus of $F$:} $B\subset\mathop{\rm Sing} F$.
{\bf 1.6. Exclusion of the quadratic case.} Again let $o\in B$ be a point of general position.
{\bf Proposition 1.3.} {\it The point $o$ is a multi-quadratic singularity of type} $2^l$, $l\geqslant 2$.
{\bf Proof.} Assume the converse: the point $o$ is a quadratic singularity of $F$. Let $P\ni o$ be a general $(k+3)$-dimensional linear subspace in ${\mathbb P}^{M+k}$. By the condition (MQ1) and Theorem 1.1 the intersection $F\cap P$ is a three-dimensional variety with the unique singular point $o$, which is a non-degenerate quadratic singularity. This intersection can be constructed in two steps: first, we consider the intersection $F\cap P'$ with a general linear subspace $P'\subset{\mathbb P}^{M+k}$, $P'\ni o$, of dimension $$ k+\mathop{\rm codim}(\mathop{\rm Sing}F\subset F) $$ and after that the intersection with a general subspace $P\subset P'$, $P\ni o$, of dimension $(k+3)$. Now we get: the pair $$
\left(F\cap P,\frac{1}{n(D_F)}D_F|_{F\cap P}\right) $$ is not log canonical, but canonical outside the point $o$. Let us consider the blow up $$ \varphi_P\colon P^+\to P $$ of the point $o$ with the exceptional divisor ${\mathbb E}_P\cong{\mathbb P}^{k+2}$ and let $(F\cap P)^+\subset P^+$ be the strict transform of $F\cap P$ on $P^+$, so that $(F\cap P)^+\to F\cap P$ is the blow up of the quadratic singularity $o$ with the exceptional divisor $E_P=(F\cap P)^+\cap{\mathbb E}_P$, which is a non-singular two-dimensional quadric in the three-dimensional subspace $\langle E_P\rangle\subset {\mathbb E}_P$. Obviously, $a(E_P, F\cap P)=1$, so that, writing down $$
D_P=D_F|_{F\cap P}\sim n(D_F)H_{F\cap P} $$ and $D_P^+\sim n(D_F)H_{F\cap P}-\nu E_P$ (the strict transform of $D_P$ on $(F\cap P)^+$), we obtain two options: \begin{itemize}
\item either $\nu>2n(D_F)$, so that $E_P$ is a non log canonical singularity of the pair $(F\cap P,\frac{1}{n(D_F)} D_P)$,
\item or $n(D_F)<\nu\leqslant 2n(D_F)$, and then the closed set $$ \mathop{\rm LCS}\left(\left(F\cap P,\frac{1}{n(D_F)} D_P\right), (F\cap P)^+\right) $$ --- the union of the centres of all non log canonical singularities of the original pair $(F\cap P,\frac{1}{n(D_F)} D_P)$ on $(F\cap P)^+$ is a connected closed subset of the non-singular quadric $E_P$, which can be either a (possibly reducible) connected curve $C_P\subset E_P$, or a point $x_P\in E_P$. \end{itemize}
(It is well known, see, for instance, \cite[Chapter 2, Proposition 3.7]{Pukh13a}, that the inequality $\nu\leqslant n(D_F)$ is impossible.) In the case $\nu>2n(D_F)$ we get $$ \mathop{\rm mult}\nolimits_o D_P=\mathop{\rm mult}\nolimits_o D_F> 4n(D_F), $$ which contradicts Theorem 1.3, so that this case is impossible. Coming back to the original variety $F$, let us consider the blow ups $\varphi_{\mathbb P}\colon({\mathbb P}^{M+k})^+\to{\mathbb P}^{M+k}$ and $\varphi\colon F^+\to F$ of the point $o$, where $F^+$ is identified with the strict transform of $F$ on $({\mathbb P}^{M+k})^+$, with the exceptional divisors ${\mathbb E}$ and $E$, respectively, so that $E=F^+\cap{\mathbb E}$ is a quadric hypersurface in the subspace $\langle E\rangle\subset{\mathbb E}$ of codimension $(k+1)$. By the condition for the rank (MQ1) the case of a point $x_P\in E_P$ is impossible: in that case the quadric $E$ would contain a linear subspace of codimension 2 (with respect to $E$), which cannot happen. Now, arguing word for word as in \cite[Subsection 3.2]{Pukh2022a} and using \cite[Theorem 3.1]{Pukh2022a}, we get that on the quadric $E$ there is a hyperplane section $\Lambda\subset E$, such that $$ \nu+\mathop{\rm mult}\nolimits_{\Lambda}D^+_F>2n(D_F). $$
Taking the linear system $|H_F-\Lambda|$ (of projective dimension $(k-1)$) of hyperplane sections of the variety $F$, a general divisor $W\in|H_F-\Lambda|$ of which contains the point $o$ and has strict transform $W^+$ cutting out $\Lambda$ on $E$ (that is, $W^+\cap E=\Lambda$), we set $D_W=(D_F\circ W)$ and obtain the inequality $$ \mathop{\rm mult}\nolimits_oD_W=2(\nu+\mathop{\rm mult}\nolimits_{\Lambda}D^+_F)>4n(D_F)=4n(D_W), $$ which contradicts Theorem 1.3. This completes the proof of Proposition 1.3.
{\bf 1.7. Exclusion of the multi-quadratic case.} This is the hardest and the longest part of our work. Fix a point $o\in B$ of general position, which by what was proven is a multi-quadratic singularity of type $2^l$, satisfying the conditions (MQ1,2). The pair $(F,\frac{1}{n(D_F)}D_F)$ has a non-canonical singularity, the centre $B$ of which is a component of the maximal dimension of the set $\mathop{\rm CS}(F,\frac{1}{n(D_F)}D_F)$, so that in a neighborhood of the point $o$ this pair is canonical outside $B$. We will show that this is impossible. This will be done in a few steps, and now we describe the scheme of the proof and state the key intermediate claims.
{\bf Definition 1.3.} A pair $[X,o]$, where $$ X\subset{\mathbb P}(X)={\mathbb P}^{N(X)} $$ is an irreducible reduced factorial complete intersection of type $\underline{d}$ in the projective space ${\mathbb P}(X)$, $\mathop{\rm dim}X=N(X)-k\geqslant 3$, and $o\in X$ is a point, is called a {\it complete intersection with a marked point} or, for brevity, a {\it marked complete intersection of level} $(l_X,c_X)$, where $l_X$, $c_X$ are positive integers, satisfying the inequalities $$ 2\leqslant l_X\leqslant k\quad\mbox{and} \quad c_X\geqslant l_X+4, $$ if the following conditions are satisfied:
(MC1) the inequality $$ \mathop{\rm codim}(\mathop{\rm Sing}X\subset X)\geqslant c_X $$ holds,
(MC2) the point $o\in X$ is a multi-quadratic singularity of type $2^{l_X}$, the rank of which satisfies the inequality $$ \mathop{\rm rk}(o\in X)\geqslant 2l_X+c_X-1, $$
(MC3) the non-singular part $X\setminus\mathop{\rm Sing}X$ of the variety $X$ satisfies the condition of divisorial canonicity, $$ \mathop{\rm ct}(X\setminus\mathop{\rm Sing}X)\geqslant 1, $$ that is, for every effective divisor $A\sim aH_X$ we have $\mathop{\rm CS}(X,\frac{1}{a}A)\subset \mathop{\rm Sing}X$.
The non-empty set of integers $$ I_X=[k+l_X+3,k+c_X-1]\cap{\mathbb Z} $$ is called the {\it admissible set} of the marked complete intersection $[X,o]$.
{\bf Remark 1.1.} (i) Since $X\subset{\mathbb P}(X)$ is a complete intersection, the factoriality of the variety $X$ follows from Grothendieck's theorem \cite{CL} by the condition (MC1). For that reason $\mathop{\rm Pic}X={\mathbb Z}H_X$, where $H_X$ is the class of a hyperplane section.
(ii) By (MC1) for every $m\leqslant k+c_X-1$ and a general subspace $P\ni o$ of dimension $m$ in ${\mathbb P}(X)$ the point $o$ is the only singularity of the variety $X\cap P$.
(iii) Let ${\mathbb P}(X)^+\to{\mathbb P}(X)$ be the blow up of the point $o$ with the exceptional divisor ${\mathbb E}_X\cong{\mathbb P}^{N(X)-1}$. The strict transform $X^+\subset{\mathbb P}(X)^+$ is the result of blowing up the point $o$ on $X$ with the exceptional divisor $E_X=X^+\cap{\mathbb E}_X$. Obviously, $E_X$ is an irreducible reduced non-degenerate complete intersection of $l_X$ quadrics in a linear subspace of codimension $(k-l_X)$ in ${\mathbb E}_X$ (this follows from (MC2), see Proposition 1.4).
{\bf Proposition 1.4.} {\it The following inequality holds:} $$ \mathop{\rm codim}(\mathop{\rm Sing}E_X\subset E_X)\geqslant c_X. $$
{\bf Proof} is given in \S 4 (Subsection 4.2; by the condition (MQ2) the claim of the proposition follows from Proposition 4.2, (ii)).
{\bf Remark 1.2.} Proposition 1.4 implies the estimate $$ \mathop{\rm codim}(\mathop{\rm Sing}E_X\subset{\mathbb E}_X)\geqslant k+c_X. $$ Therefore for every $m\leqslant k+c_X$ and a general subspace $P\ni o$ of dimension $m$ in ${\mathbb P}(X)$ the strict transform $P^+\subset{\mathbb P}(X)^+$ does not meet the set $\mathop{\rm Sing}E_X$, since $P^+\cap{\mathbb E}_X$ is a general linear subspace of dimension $m-1\leqslant k+c_X-1$ in ${\mathbb E}_X$. Therefore, for $m=k+c_X$ an isolated, and for $m\leqslant k+c_X-1$ the unique singularity $o$ of the variety $X\cap P$ is resolved by the blow up of that point, and moreover the exceptional divisor $$ E_{X\cap P}=P^+\cap E_X $$ of that blow up is a non-singular complete intersection of $l_X$ quadrics in the linear subspace of codimension $(k-l_X)$ in ${\mathbb E}_{X\cap P}=P^+\cap{\mathbb E}_X$. The discrepancy of that exceptional divisor is $$ a(E_{X\cap P})=a(E_{X\cap P}, X\cap P)=m-1-k-l_X, $$ so that for $m=k+l_X+3$ we have: $a(E_{X\cap P})=2$. The meaning of the lower end of the admissible set is in that equality.
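For the reader's convenience, here is the standard adjunction computation behind the displayed discrepancy formula (we write $\sigma_P\colon P^+\to P$ for the induced blow up of the point $o$; this is local notation used only in this remark). We have $$ K_{P^+}=\sigma_P^*K_P+(m-1)\,{\mathbb E}_{X\cap P}, $$ and of the $k$ equations cutting out $X\cap P$ in $P\cong{\mathbb P}^m$ exactly $k-l_X$ (those with a non-trivial linear part at $o$, after the linear change of the tuple of equations as in Definition 1.2) vanish at $o$ to order 1, while the remaining $l_X$ vanish to order 2, so that adjunction along this complete intersection gives $$ a(E_{X\cap P})=(m-1)-(k-l_X)-2\,l_X=m-1-k-l_X. $$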
In the following definition we use the notations of Remarks 1.1 and 1.2. We continue to consider a marked complete intersection $[X,o]$ of level $(l_X,c_X)$.
{\bf Definition 1.4.} A triple $(X,D,o)$, where $D\sim n(D)H_X$ is an effective divisor on $X$, $n(D)\geqslant 1$, is called a {\it working triple}, if for a general subspace $P\ni o$ of dimension $k+c_X-1$ in ${\mathbb P}(X)$ the pair \begin{equation}\label{28.11.22.1}
\left(X\cap P,\frac{1}{n(D)}D|_{X\cap P}\right) \end{equation} is not log canonical at the point $o$.
{\bf Remark 1.3.} Since the point $o$ is the unique singularity of the variety $X\cap P$, and by (MC3) the pair (\ref{28.11.22.1}) is canonical outside the point $o$, there is a non log canonical singularity of that pair, the centre of which on $X\cap P$ is precisely the point $o$. By inversion of adjunction, the same is true for a general subspace $P\ni o$ of dimension $m\leqslant k+c_X-2$.
Let us introduce one more notation. For the strict transform $D^+$ of the divisor $D$ on $X^+$ write $$ D^+\sim n(D)H_X-\nu(D)E_X $$ (in order to simplify the notations, the pull back of the divisorial class $H_X$ on $X^+$ is denoted by the same symbol $H_X$). Respectively, for a general subspace $P\ni o$ in ${\mathbb P}(X)$ of dimension $m\leqslant k+c_X-1$ we have $$
D_P=D|_{X\cap P}\sim n(D)H_{X\cap P} $$ and $$ D^+_P\sim n(D)H_{X\cap P}-\nu(D)E_{X\cap P}, $$
where $H_{X\cap P}=H_X|_{X\cap P}$ is the class of a hyperplane section of the variety $X\cap P\subset P\cong{\mathbb P}^m$.
{\bf Proposition 1.5.} {\it Assume that $c_X\geqslant 2l_X+4$. Then the inequality} $\nu(D)>n(D)$ {\it holds.}
{\bf Proof} is given in \S 3 (Subsection 3.2).
Let us come back to the task of excluding the multi-quadratic case. Recall that $F\in{\cal F}$, so that we can use the conditions (MQ1,2) and the statement of Theorem 1.4. We fix a point of general position $o\in B$, where $B$ is an irreducible component of the maximal dimension of the closed set $\mathop{\rm CS}(F,\frac{1}{n(D_F)}D_F)$.
{\bf Proposition 1.6.} {\it The pair $[F,o]$ is a marked complete intersection of level $(l,c_F)$, where $c_F=4k+2\varepsilon(k)$, and $(F,D_F,o)$ is a working triple.}
{\bf Proof} is given in \S 3 (Subsection 3.1).
Assume now that $l\leqslant k-1$. The symbol $T_oF$ stands again for a subspace of codimension $(k-l)$ of the projective space ${\mathbb P}^{M+k}$. Set $$ T=F\cap T_oF. $$ This is a subvariety of codimension $(k-l)$ in $F$ and a complete intersection of type $\underline{d}$ in ${\mathbb P}(T)=T_oF$.
{\bf Remark 1.4.} Let us state here two well known facts which we will use many times in the sequel: when a quadratic form is restricted to a hyperplane, its rank either remains the same or drops by 1 or 2; when a complete intersection in the projective space is intersected with a hyperplane, the codimension of its singular locus either remains the same or drops by 1 or 2 (for a proof of the second claim, see \cite{IP} or \cite{Pukh00a}).
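As a toy illustration of the first of these facts (the forms below are sample ones, not used in the proofs): the form $x_1^2+x_2^2+x_3^2$ of rank 3 restricts to the hyperplane $\{x_3=0\}$ with rank 2 and to the hyperplane $\{x_3=ix_1\}$ with rank 1 (the restriction is $x_2^2$), while the form $x_1^2$ restricted to $\{x_2=0\}$ keeps its rank; a drop by more than 2 is impossible, since in suitable coordinates restricting to a hyperplane amounts to deleting one row and the corresponding column of the Gram matrix.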
If $l=k$, then for uniformity of notations we set $T=F$.
{\bf Proposition 1.7.} {\it The pair $[T,o]$ is a marked complete intersection of level $(k,c_T)$, where $c_T=2k+2\varepsilon(k)+4$. There is an effective divisor $D_T\sim n(D_T)H_T$ on $T$, such that $(T,D_T,o)$ is a working triple.}
{\bf Proof} is given in \S 3 (Subsection 3.4).
Proposition 1.5 (taking into account Remark 1.4) implies that $\nu(D_T)>n(D_T)$. Now the main stage in the exclusion of the multi-quadratic case (and thus in the proof of Theorem 0.1) is given by the following claim.
{\bf Proposition 1.8.} {\it There is a sequence of marked complete intersections $$ [R_0=T,o],\quad [R_1,o],\quad\dots,\quad [R_a,o], $$ where $a\leqslant\varepsilon(k)$ and ${\mathbb P}(R_{i+1})$ is a hyperplane in ${\mathbb P}(R_{i})$, containing the point $o$, and of effective divisors $D_i\sim n(D_i)H_{R_i}$ on $R_i$, $n(D_i)\geqslant 1$, such that $D_0=D_T$ and $$ (R_0,D_0,o),\quad (R_1,D_1,o),\quad\dots,\quad (R_a,D_a,o) $$ are working triples, and moreover for every $i=0,\dots,a-1$ the inequality $$ 2-\frac{\nu(D_{i+1})}{n(D_{i+1})}<\frac{1}{1+\frac{1}{k}}\left( 2-\frac{\nu(D_i)}{n(D_i)}\right) $$ holds and} $\nu(D_a)>\frac32 n(D_a)$.
{\bf Proof} is given in \S 3 (Subsection 3.5) and \S 5.
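Let us record an elementary arithmetic remark illustrating how the two claims of Proposition 1.8 fit together (it adds nothing to the statement itself; $\delta_i$ is just shorthand notation). Set $\delta_i=2-\frac{\nu(D_i)}{n(D_i)}$. As noted above, $\nu(D_0)=\nu(D_T)>n(D_T)$, so that $\delta_0<1$, and iterating the inequality in Proposition 1.8 gives $$ \delta_a<\left(\frac{k}{k+1}\right)^{a}\delta_0<\left(\frac{k}{k+1}\right)^{a}, $$ while the final claim $\nu(D_a)>\frac32\, n(D_a)$ is precisely the inequality $\delta_a<\frac12$.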
Now let us complete the exclusion of the multi-quadratic case. The variety $R_a$ is a section of $T=F\cap T_oF$ by a subspace of codimension $\leqslant\varepsilon(k)$, containing the point $o$, and $D_a$ is an effective divisor on $R_a$, satisfying the inequality $$ \mathop{\rm mult}\nolimits_o D_a=2^k\nu(D_a)>\frac32\cdot 2^kn(D_a). $$ This contradicts Theorem 1.4.
The contradiction completes the proof of divisorial canonicity of the variety $F\in{\cal F}$.
\section{Fano-Mori fibre spaces}
In this section we prove Theorem 0.2. In Subsection 2.1 we associate with a birational map $\chi\colon V\dashrightarrow V'$ a mobile linear system $\Sigma$ on $V$ and state the key Theorem 2.1 about this system. In Subsection 2.2 we construct a fibre-wise birational modification of the fibre space $V/S$ for the system $\Sigma$. In Subsection 2.3 we consider a mobile algebraic family of irreducible curves ${\cal C}$ on $V$, and use it to prove (in Subsection 2.4) Theorem 2.1, which implies the first claim of Theorem 0.2 (that $\chi$ is fibre-wise). In Subsection 2.5 we prove the birational rigidity of the fibre space $V/S$.
{\bf 2.1. The mobile linear system $\Sigma$.} Assume that the Fano-Mori fibre space $\pi\colon V\to S$ satisfies all conditions of Theorem 0.2. Fix a fibre space $\pi'\colon V'\to S'$ that belongs to one of the two classes: either the class of rationally connected fibre spaces (and then we say that the rationally connected case is being considered), or the class of Mori fibre spaces in the sense of Subsection 0.2 (and then we say that the case of a Mori fibre space is being considered). We will study both cases simultaneously.
In the rationally connected case let $Y'\in\mathop{\rm Pic}S'$ be a very ample class. Set $$
\Sigma'=|(\pi')^*Y'|=|-mK_{V'}+(\pi')^*Y'|, $$ where $m=0$. This is a mobile complete linear system on $V'$ (it defines the morphism $\pi'$).
In the case of a Mori fibre space let $$
\Sigma'=|-m K_{V'}+(\pi')^*Y'| $$ be a complete linear system on $V'$, where $m\geqslant 0$ and $Y'$ is a very ample divisorial class on $S'$, and moreover, for $m\geqslant 1$ the system $\Sigma'$ is very ample.
In both cases set $$
\Sigma=(\chi^{-1})_*\Sigma'\subset|-n K_V+{\pi}^*Y| $$ to be the strict transform of $\Sigma'$ on $V$ with respect to the birational map $\chi\colon V\dashrightarrow V'$. Note that if $m=0$ and $n=0$, then by construction of these linear systems the map $\chi$ is fibre-wise.
{\bf Theorem 2.1.} {\it The following inequality holds:} $n\leqslant m$.
{\bf Proof.} Assume the converse: $n>m$. In particular, if $m=0$, then $\chi$ is not fibre-wise. Let us show that this assumption leads to a contradiction.
{\bf 2.2. A fibre-wise birational modification of the fibre space $V/S$.} Let $\sigma_S\colon S^+\to S$ be a composition of blow ups with non-singular centres, $$ S^+=S_N\stackrel{\sigma_{S,N}}{\to}S_{N-1} \to\dots\stackrel{\sigma_{S,1}}{\to}S_0=S, $$ where $\sigma_{S,i+1}\colon S_{i+1}\to S_i$ blows up a non-singular subvariety $Z_{S,i}\subset S_i$. Set $V_i=V\times_SS_i$ and $\pi_i\colon V_i\to S_i$; by the assumption on the stability with respect to birational modifications of the base, $V_i/S_i$ is a Fano-Mori fibre space. Obviously, $$ V_{i+1}=V_i\times_{S_i}S_{i+1} $$ is the result of the blow up $\sigma_{i+1}\colon V_{i+1}\to V_i$ of the subvariety $Z_i=\pi^{-1}_i(Z_{S,i})\subset V_i$. Therefore, we get the commutative diagram $$ \begin{array}{ccccccccccccccc} V^+ & = & V_N & \stackrel{\sigma_N}{\to} & \dots & \to & V_{i+1} & \stackrel{\sigma_{i+1}}{\to} & V_i & \to & \dots & \stackrel{\sigma_1}{\to} & V_0 & = & V \\
& & \downarrow & & \dots & & \downarrow & & \downarrow &
& \dots & & \downarrow & & \\ S^+ & = & S_N & \stackrel{\sigma_{S,N}}{\to} & \dots & \to & S_{i+1} & \stackrel{\sigma_{S,i+1}}{\to} & S_i & \to & \dots & \stackrel{\sigma_{S,1}}{\to} & S_0 & = & S, \end{array} $$ where the vertical arrows $\pi_i\colon V_i\to S_i$ are Fano-Mori fibre spaces. The symbol $\Sigma^i$ stands for the strict transform of the system $\Sigma$ on $V_i$, $\Sigma^+=\Sigma^N$. In these notations, let us consider a sequence of blow ups $\sigma_{S,*}$ such that for every $i=0,1,\dots,N-1$ $$ Z_i\subset\mathop{\rm Bs}\Sigma^i, $$ and the base set of the system $\Sigma^+$ does not contain any fibre $\pi^{-1}_+(s_+)$ entirely, where $s_+\in S^+$ and $\pi_+=\pi_N$. (If this is true already for the original system $\Sigma$, then we set $\sigma_S=\mathop{\rm id}_S$, $S^+=S$ and $V^+=V$, and there is no need to make any blow ups; but we will soon see that this case is impossible.)
By the assumptions on the fibre space $V/S$ the fibre $\pi^{-1}_+(s_+)$ is isomorphic to the fibre $F_s=\pi^{-1}(s)$ of the original fibre space, where $s=\sigma_S(s_+)$. Let ${\cal T}$ be the set of all prime $\sigma_S$-exceptional divisors on $S^+$. We get: $$
\Sigma^+\subset\left|-n\sigma^*K_V+\pi^*_+\left(\sigma^*_SY-\sum_{T\in{\cal T}}b_TT\right)\right|= $$ $$
=\left|-nK^++\pi^*_+\left(\sigma^*_SY+\sum_{T\in{\cal T}}
(na_T-b_T)T\right)\right|, $$ where $\sigma\colon V^+\to V$ is the composition of the morphisms $\sigma_i$, $K^+=K_{V^+}$, $b_T\geqslant 1$ and $a_T\geqslant 1$ for all $T\in{\cal T}$, $a_T=a(T,S)$ is the discrepancy of $T$ with respect to $S$.
Let $\varphi\colon\widetilde{V}\to V^+$ be the resolution of singularities of the composite map $\chi_+=\chi\circ\sigma\colon V^+\dashrightarrow V'$, ${\cal E}$ the set of prime $\varphi$-exceptional divisors on $\widetilde{V}$ and $\psi=\chi\circ\sigma\circ\varphi\colon\widetilde{V}\to V'$ the resulting birational morphism.
{\bf Proposition 2.1.} {\it For a general divisor $D^+\in\Sigma^+$ the pair $(V^+,\frac{1}{n}D^+)$ is canonical.}
{\bf Proof.} Assume that this is not the case. Then there is an exceptional divisor $E\in{\cal E}$, satisfying the Noether-Fano inequality $$ \mathop{\rm ord}\nolimits_ED^+=\mathop{\rm ord}\nolimits_E\Sigma^+>na(E,V^+) $$ (we write $D^+,\Sigma^+$ instead of $\varphi^*D^+,\varphi^*\Sigma^+$ for simplicity). Set $B=\varphi(E)\subset V^+$.
There are two options:
(1) $\pi_+(B)=S^+$,
(2) $\pi_+(B)$ is a proper irreducible closed subset of $S^+$.
If (1) is the case, then the fibre $F=F_s$ of general position intersects $B$. The restriction $$
\Sigma^+_F=\Sigma^+|_F\subset|-nK_F| $$ is a mobile linear system, and moreover, the pair
$(F,\frac{1}{n}D^+_F)$ is not canonical for $D^+_F=D^+|_F$. This contradicts the condition $\mathop{\rm mct}(F)\geqslant 1$.
Therefore, (2) is the case. Let $p\in B$ be a point of general position and $F=\pi^{-1}_+(\pi_+(p))$, so that $p\in F$. Since $F\not\subset\mathop{\rm Bs}\Sigma^+$, the restriction
$D^+_F=D^+|_F$ is well defined (although the linear system $\Sigma^+_F$ may have fixed components). By inversion of adjunction the pair $(F,\frac{1}{n}D^+_F)$ is not log canonical. This contradicts the condition $\mathop{\rm lct}(F)\geqslant 1$. Q.E.D. for the proposition.
Denote by the symbol $\widetilde{\Sigma}$ the strict transform of the system $\Sigma^+$ on $\widetilde{V}$. Obviously, \begin{equation}\label{05.11.22.1}
\widetilde{\Sigma}=\psi^*\Sigma'=|-m\psi^*K'+\psi^*(\pi')^*Y'|, \end{equation} where $K'=K_{V'}$, that is, $\widetilde{\Sigma}$ is a complete linear system. We have another presentation for this linear system: $$
\widetilde{\Sigma}=\left|\varphi^*D^+-\sum_{E\in{\cal E}}b_EE\right|= $$ \begin{equation}\label{05.11.22.2}
=\left|-n\widetilde{K}+\varphi^*\pi^*_+\left(\sigma^*_SY+\sum_{T\in{\cal T}}(na_T-b_T)T\right)+\sum_{E\in{\cal E}}(na_E-b_E)E\right|, \end{equation} where $\widetilde{K}=K_{\widetilde{V}}$, $D^+\in\Sigma^+$ is a general divisor and $a_E=a(E,V^+)$ is the discrepancy.
{\bf 2.3. The mobile system of curves.} Take a family of irreducible curves ${\cal C'}$ on $V'$, contracted by the projection $\pi'$, sweeping out a Zariski dense subset of the variety $V'$ and not meeting the set where the birational map $\psi^{-1}$ is not determined. Assume that for a general pair of points $p,q$ in a fibre of general position of the projection $\pi'$ there is a curve $C'\in {\cal C'}$, containing both points. In the rationally connected case the curves of the family ${\cal C'}$ are rational (the existence of such a family is shown in \cite[Chapter II]{Kol96}), in the case of a Mori fibre space we do not require this. For a curve $C'\in{\cal C'}$ set $\widetilde{C}=\psi^{-1}(C')$ (at every point of the curve $C'$ the map $\psi^{-1}$ is an isomorphism), thus we get a family $\widetilde{{\cal C}}$ of irreducible curves on $\widetilde{V}$. Both in the rationally connected case and the case of a Mori fibre space the inequality $$ (C'\cdot K')<0 $$ holds, so that $(\widetilde{C}\cdot\widetilde{K})=(C'\cdot K')<0$. Furthermore, $$ (\widetilde{C}\cdot\widetilde{D})=(C'\cdot D')=-m(C'\cdot K')\geqslant 0, $$ and $(\widetilde{C}\cdot\widetilde{D})=0$ if and only if $m=0$ (since obviously $(C'\cdot(\pi')^*Y')=0$).
Let ${\cal C}^+=\varphi_*\widetilde{{\cal C}}$ be the image of the family $\widetilde{{\cal C}}$ on $V^+$ and ${\cal C} =\sigma_*{\cal C}^+$ its image on $V$.
{\bf Proposition 2.2.} {\it The curves $C\in{\cal C}$ are not contracted by the projection} $\pi$.
{\bf Proof.} Assume the converse: $\pi(C)$ is a point on $S$. By the construction of the family ${\cal C}'$ this means that the map $\chi^{-1}$ is fibre-wise: there is a rational dominant map $\beta'\colon S'\dashrightarrow S$, such that the diagram $$ \begin{array}{ccc} V & \stackrel{\phantom{xxx}\chi^{-1}}{\dashleftarrow} & V'\\ \downarrow & & \downarrow \\ S & \stackrel{\phantom{xx}\beta'}{\dashleftarrow} & S' \end{array} $$ is commutative, and moreover, $\dim S' > \dim S$ (otherwise $\beta'$ is birational and then $\chi$ is fibre-wise, contrary to our assumption). In that case for a point $s\in S$ of general position the fibre $F_s=\pi^{-1}(s)$ is birational to $(\pi')^{-1}(\beta')^{-1}(s)$. Here $\dim (\beta')^{-1}(s)\geqslant 1$ and either the fibre $(\pi')^{-1}(s')$ for a point $s'\in (\beta')^{-1}(s)$ of general position is rationally connected, or the anti-canonical class of the variety $(\pi')^{-1}(\beta')^{-1}(s)$ is $\pi'$-ample, and we get a contradiction with the condition $\mathop{\rm mct} (F_s)\geqslant 1$ (the fibre $F_s$ is a birationally superrigid Fano variety). Q.E.D. for the proposition.
For a general curve $C\in{\cal C}$ set $$ \pi_*C=d_C\overline{C}, $$ where $d_C\geqslant 1$. Replacing, if necessary, the family $\cal {C}'$ by some open subfamily, we may assume that the integer $d_C$ does not depend on $C$. For the corresponding curve $C^+\in\cal{C}^+$ we have $(\pi_+)_*C^+=d_C\overline{C}^+$, where $\overline{C}^+$ is the strict transform of the curve $\overline{C}$ on $S^+$.
{\bf 2.4. Proof of Theorem 2.1.} Recall that we assume that $n>m$. Using the two presentations (\ref{05.11.22.1}) and (\ref{05.11.22.2}) for the class of a divisor $\widetilde{D}\in\widetilde{\Sigma}$, we get $$ d_C\left(\overline{C}^+\cdot\left(\sigma^*_SY+\sum_{T\in{\cal T}}(na_T-b_T)T\right)\right)+\sum_{E\in{\cal E}}(na_E-b_E)(\widetilde{C}\cdot E)= (n-m)(\widetilde{C}\cdot\widetilde{K})<0, $$ whence, taking into account the inequalities $b_E\leqslant na_E$ for all $E\in{\cal E}$ (Proposition 2.1), it follows that $$ \left(\overline{C}^+\cdot \left(\sigma^*_SY+\sum_{T\in{\cal T}}(na_T-b_T)T\right)\right)<0. $$ However, the class $Y$ is pseudo-effective, so that $$ (\overline{C}^+\cdot\sigma^*_S Y)=(\overline{C}\cdot Y)\geqslant 0, $$ and $(\overline{C}^+\cdot T)\geqslant 0$ for all $T\in{\cal T}$, so that ${\cal T}\neq\emptyset$ and for some $T\in{\cal T}$, such that $(\overline{C}^+\cdot T)>0$, the inequality $b_T>na_T$ holds. Since $a_T\geqslant 1$ for all $T\in{\cal T}$, we conclude that $$ \left(\overline{C}^+\cdot\left(\sigma^*_SY-\sum_{T\in{\cal T}} b_T T \right)\right)< -n\left(\overline{C}^+\cdot \sum_{T\in{\cal T}} a_T T \right)\leqslant -n. $$ For a general curve $\overline{C}^+$ consider the algebraic cycle of the scheme-theoretic intersection $$ (D^+\circ \pi_+^{-1}(\overline{C}^+))=\left(\left(\sigma^* D-\pi_+^*\left(\sum_{T\in{\cal T}}b_T T\right)\right)\circ\pi_+^{-1}(\overline{C}^+)\right). $$ The numerical class of that effective cycle is $$ n(\sigma^*(-K_V)\cdot\pi^{-1}_+(\overline{C}^+))+\left(\overline{C}^+ \cdot\left(\sigma^*_S Y-\sum_{T\in{\cal T}}b_TT\right)\right)F $$ (where $F$ is the class of a fibre of the projection $\pi_+$), and the class of the effective cycle $\sigma_*(D^+\circ\pi^{-1}_+(\overline{C}^+ ))$ in the numerical Chow group is $$ -n(K_V\cdot\pi^{-1}(\overline{C}))+bF, $$ where $b<-n$. This contradicts the condition (iii) of Theorem 0.2. The proof of Theorem 2.1 is complete. Therefore, in both cases (that of a rationally connected fibre space and of a Mori fibre space) the map $\chi$ is fibre-wise. The first claim of Theorem 0.2 (in the rationally connected case) is shown. It remains to prove the birational rigidity.
{\bf 2.5. Proof of birational rigidity.} Starting from this moment, we assume that $V'/S'$ is a Mori fibre space and the birational map $\chi\colon V\dashrightarrow V'$ is fibre-wise, however, the corresponding map of the bases $\beta\colon S\dashrightarrow S'$ is not birational: $\mathop{\rm dim}S>\mathop{\rm dim}S'$ and the fibres $\beta^{-1}(s')$ for $s'\in S'$ are of positive dimension. We have to obtain a contradiction, showing that this case is impossible.
First of all, let us consider the fibre-wise modification of the fibre space $V/S$ (Subsection 2.2). Now we will need a composition of blow ups $\sigma_S\colon S^+\to S$ with non-singular centres such that, as in Subsection 2.2, none of the fibres of the Fano-Mori fibre space $V^+/S^+$ is contained in the base set $\mathop{\rm Bs}\Sigma^+$ and, in addition, $\sigma_S$ resolves the singularities of the rational dominant map $\beta\colon S\dashrightarrow S'$, that is, $$ \beta_+=\beta\circ\sigma_S\colon S^+\to S' $$ is a morphism. (So the inclusion $Z_i=\pi^{-1}_i(Z_{S,i})\subset\mathop{\rm Bs}\Sigma^i$, see Subsection 2.2, need no longer hold for every $i=0,\dots,N-1$.)
The fibre $\beta^{-1}_+(s')$ over a point $s'\in S'$ of general position is an irreducible non-singular subvariety of positive dimension. Set $G(s')=(\pi')^{-1}(s')$ and let $G^+(s')$ be the strict transform of $G(s')$ on $V^+$. Obviously, $$ G^+(s')=\pi^{-1}_+(\beta^{-1}_+(s')) $$ is a union of fibres of the projection $\pi_+$ over the points of the variety $\beta^{-1}_+(s')$.
Since $\pi'\colon V'\to S'$ is a Mori fibre space, we have the equality $\rho(V')=\rho(S')+1$. Let ${\cal E}'$ be the set of all $\psi$-exceptional divisors $E'$ on $\widetilde{V}$, satisfying the equality $\pi'(\psi(E'))=S'$. Furthermore, let ${\cal Z}\subset\mathop{\rm Pic}\widetilde{V}\otimes{\mathbb Q}$ be the subspace, generated by the subspace $\psi^*(\pi')^*\mathop{\rm Pic}S'\otimes{\mathbb Q}$ and the classes of all $\psi$-exceptional divisors on $\widetilde{V}$, the images of which on $V'$ do not cover $S'$. Then the equality $$ \mathop{\rm Pic}\widetilde{V}\otimes{\mathbb Q}={\mathbb Q}\widetilde{K}\oplus\left(\bigoplus_{E'\in{\cal E}'}{\mathbb Q}E'\oplus {\cal Z}\right) $$ holds, in particular, the subspace in brackets is a hyperplane in $\mathop{\rm Pic}\widetilde{V}\otimes{\mathbb Q}$. Writing down the class $\widetilde{K}$ with respect to the morphisms $\varphi$ and $\psi$, we get the equality \begin{equation}\label{08.11.22.1} \varphi^*K^++\sum_{E\in{\cal E}}a^+_EE=\psi^* K'+\sum_{E'\in{\cal E}'}a'(E')E'+Z_1, \end{equation} where $Z_1\in{\cal Z}$ is some effective class, $a^+_E=a(E,V^+)$ for $\varphi$-exceptional divisors $E\in{\cal E}$ and $a'(E')=a(E',V')$ for $\psi$-exceptional divisors $E'\in{\cal E}'$, covering $S'$. Here all $a^+_E\geqslant 1$ and $a'(E')>0$. Using the $\psi$-presentation (\ref{05.11.22.1}) and the $\varphi$-presentation (\ref{05.11.22.2}) of the divisorial class $\widetilde{D}$ and expressing $K'$ from the formula (\ref{08.11.22.1}), we get the following equality in $\mathop{\rm Pic}\widetilde{V}\otimes{\mathbb Q}$: \begin{equation}\label{05.12.22.1} (m-n)\varphi^*K^++\varphi^*\pi^*_+Y_++\sum_{E\in{\cal E}}(ma^+_E-b_E)E=m\sum_{E'\in{{\cal E}}'}a'(E')E'+Z_2, \end{equation} where $Y_+=\sigma^*_SY+\sum_{T\in{\cal T}}(na_T-b_T)T$ and $Z_2=mZ_1+\psi^*(\pi')^*Y'\in{\cal Z}$ is an effective class. Applying to both sides of (\ref{05.12.22.1}) $\varphi_*$ and restricting onto a fibre of general position of the projection $\pi_+$, we get that $$
(m-n)K^+|_{\pi_+^{-1}(s_+)} $$ is an effective class. Since $m\geqslant n$ and the fibre $\pi_+^{-1}(s_+)$ is a Fano variety, we conclude that $m=n$ and (\ref{05.12.22.1}) turns into \begin{equation}\label{08.11.22.2} \varphi^*\pi^*_+Y_++\sum_{E\in{\cal E}}(na^+_E-b_E)E=n\sum_{E'\in{{\cal E}}'}a'(E')E'+Z_2. \end{equation} By Proposition 2.1, we have $b_E\leqslant na^+_E$ for all $E\in{\cal E}$. Again we apply $\varphi_*$ and get that the class $Y_+$ is effective on $S^+$.
Now let us consider the fibre $G=G(s')$ of general position of the morphism $\pi'$ defined above and its strict transforms $\widetilde{G}$ on $\widetilde{V}$ and $G^+$ on $V^+$ (for simplicity of notation the symbol $s'$ is omitted). Obviously, for every
$Z\in{\cal Z}$ we have $Z|_{\widetilde{G}}=0$. Furthermore, for any linear combination with non-negative coefficients $b'_{E'}\geqslant 0$ the restriction $$
\left.\left(\sum_{E'\in{\cal E'}}b'_{E'}E'\right)\right|_{\widetilde{G}} $$ is a fixed divisor on $\widetilde{G}$. Now let $\Delta$ be a very ample divisor on $S^+$. Then the restriction
$\varphi^*\pi^*_+\Delta|_{\widetilde{G}}$ is mobile (recall that $\beta^{-1}_+(s')$ is a variety of positive dimension, so that
$\Delta|_{\beta^{-1}_+(s')}$ is a mobile class). Therefore, $$ \varphi^*\pi^*_+\Delta\not\in\bigoplus_{E'\in{\cal E'}}{\mathbb Q}E'\oplus {\cal Z}, $$ whence we conclude that $$ \mathop{\rm Pic}\widetilde{V}\otimes{\mathbb Q}={\mathbb Q}[\varphi^*\pi^*_+\Delta]\oplus\left(\bigoplus_{E'\in{\cal E'}}{\mathbb Q}E'\oplus{\cal Z}\right). $$ However, this can not be the case. Let $F^+\subset G^+$ be a fibre of general position of the morphism $\pi_+$ and $\widetilde{F}\subset\widetilde{G}$ its strict transform on $\widetilde{V}$. Restricting (\ref{08.11.22.2}) onto $\widetilde{F}$, we obtain the equality $$
\sum_{E\in{\cal E}}(na_E^+-b_E)E|_{\widetilde{F}}=
n\sum_{E'\in{\cal E'}}a'(E')E'|_{\widetilde{F}}, $$
where the right-hand side is a linear combination of all divisors $E'|_{\widetilde{F}}$, $E'\in{\cal E'}$, with {\it positive} coefficients (it is here that we use the assumption that the singularities of the variety $V'$ are terminal, see Subsection 0.2), and the left-hand side is a linear combination of
$\varphi$-exceptional divisors $E|_{\widetilde{F}}$, $E\in{\cal E}$, with non-negative coefficients. Since by construction
$\pi^*_+\Delta|_{F^+}=0$, we have
$\varphi^*\pi^*_+\Delta|_{\widetilde{F}}=0$, whence it follows that the restriction of {\it every} divisorial class in $\mathop{\rm Pic}\widetilde{V}\otimes{\mathbb Q}$ onto $\widetilde{F}$ is fixed (is a linear combination of
$\varphi$-exceptional divisors $E|_{\widetilde{F}}$, $E\in{\cal E}$), which is impossible. This contradiction completes the proof of Theorem 0.2.
\section{Hyperplane sections}
This section is an immediate follow-up to \S 1: we develop the technique of working triples and consider its first applications.
{\bf 3.1. The working triple $(F,D_F,o)$.} Let us prove Proposition 1.6. Proposition 1.2, shown in Subsection 1.5, implies the condition (MC3). Theorem 1.1 gives the condition (MC1) for $c_F=4k+2\varepsilon(k)$ (the inequality $c_F\geqslant l+4$ is satisfied in the obvious way, since $l\leqslant k$). Finally, the condition (MQ1) gives precisely (MC2). Therefore, $[F,o]$ is indeed a marked complete intersection of level $(l,c_F)$.
Consider a general subspace $P^{\sharp}\ni o$ of dimension $k+c_F$ in ${\mathbb P}^{M+k}$. The pair $$
\left(F\cap P^{\sharp},\frac{1}{n(D_F)}D_F|_{F\cap P^{\sharp}}\right) $$ is not canonical. By (MC1) the singularities of the variety ${F\cap P^{\sharp}}$ are zero-dimensional, and moreover, $o\in\mathop{\rm Sing}{F\cap P^{\sharp}}$ and $$
\mathop{\rm CS}\left(F\cap P^{\sharp},\frac{1}{n(D_F)}D_F|_{F\cap P^{\sharp}}\right)\subset\mathop{\rm Sing} (F\cap P^{\sharp}), $$ and the point $o$ is an (isolated) centre of some non canonical singularity of that pair. As a general subspace $P\ni o$ of dimension $k+c_F-1$ we take a general hyperplane in $P^{\sharp}$ containing the point $o$. By inversion of adjunction we have the equalities $$
\{o\}=\mathop{\rm LCS}\left(F\cap P,\frac{1}{n(D_F)}D_F|_{F\cap P}\right)= \mathop{\rm CS}\left(F\cap P,\frac{1}{n(D_F)}D_F|_{F\cap P}\right), $$ and this is precisely (\ref{28.11.22.1}). Q.E.D. for Proposition 1.6.
As we explained in Subsection 1.7, from now on our work consists in constructing a certain special sequence of working triples. This sequence starts with the working triple $(F,D_F,o)$. In order to construct the sequence, we will need certain facts about working triples.
{\bf 3.2. Multiplicity at the marked point.} Let us prove Proposition 1.5. We use the notations of Subsection 1.7 and work with a working triple $(X,D,o)$, where $[X,o]$ is a marked complete intersection. Assume that $\nu(D)\leqslant 2n(D)$ (otherwise, there is nothing to prove).
Since for a general subspace $P\ni o$ of dimension $m\in I_X$ the inequality $a(E_{X\cap P})\geqslant 2$ holds (see Remark 1.2), the pair $$ \left((X\cap P)^+,\frac{1}{n(D)}D^+_P\right) $$ is not log canonical, and moreover, $$ \mathop{\rm LCS}\left((X\cap P)^+,\frac{1}{n(D)}D^+_P\right)\subset E_{X\cap P}. $$ Let $B(P)\subset E_{X\cap P}$ be the centre of some non log canonical singularity of that pair. Then the inequality $$ \mathop{\rm mult}\nolimits_{B(P)}D^+_P>n(D) $$ holds, and the more so
$$\mathop{\rm mult}\nolimits_{B(P)}D^+_P|_{E_{X\cap P}}>n(D). $$ Considering a general subspace $P^*\ni o$ of the minimal admissible dimension $k+l_X+3$ in $I_X$ as a general subspace of codimension $\geqslant l_X$ in a general subspace $P\ni o$ of the maximal admissible dimension $k+c_X-1$ in $I_X$ (recall that by assumption $c_X\geqslant 2l_X+4$), we see that the centre $B(P)$
of some non log canonical singularity is of dimension $\geqslant l_X$. However, $E_{X\cap P}$ is a non-singular complete intersection of $l_X$ quadrics in the projective space of dimension $l_X+c_X-2$, and the divisor $D^+_P|_{E_{X\cap P}}$ is cut out on $E_{X\cap P}$ by a hypersurface of degree $\nu(D)$ in that projective space. Therefore (for example, by \cite[Proposition 3.6]{Pukh06b}), the inequality $$
\nu(D)\geqslant\mathop{\rm mult}\nolimits_{B(P)} D^+_P|_{E_{X\cap P}} $$ holds. Therefore, $\nu(D)>n(D)$. Q.E.D. for Proposition 1.5.
{\bf 3.3. Transversal hyperplane sections.} We still work with an arbitrary working triple $(X,D,o)$, where $[X,o]$ is a marked complete intersection of level $(l_X, c_X)$.
{\bf Proposition 3.1.} {\it Let $R\ni o$ be the section of the variety $X$ by a hyperplane ${\mathbb P}(R)\subset{\mathbb P}(X)$, which is not tangent to $X$ at the point $o$. Then $D\neq bR$ for $b\geqslant 1$. Moreover, if $D$ contains $R$ as a component, that is, $$ D=D^*+bR, $$ where $b\geqslant 1$, then $(X,D^*,o)$ is a working triple.}
{\bf Proof.} If $c_X\geqslant 2l_X+4$, then the first claim (that $D$ is not a multiple of $R$) follows immediately from Proposition 1.5: indeed, the hyperplane ${\mathbb P}(R)$ is not tangent to $X$ at the point $o$, that is, for the strict transform $R^+$ on the blow up of that point we have $$ R^+\sim H_X-E_X, $$ so that the equality $D=bR$ implies that $n(D)=b=\nu(D)$, which contradicts Proposition 1.5. However, we will now show that the additional assumptions on the parameters $l_X$ and $c_X$ are not needed.
By Remark 1.4 the condition (MC1) for $X$ implies the inequality \begin{equation}\label{13.09.22.1} \mathop{\rm codim}(\mathop{\rm Sing}R\subset R)\geqslant c_X-2. \end{equation} Since the hyperplane ${\mathbb P}(R)$ is not tangent to $X$ at the point $o$, this point is a multi-quadratic singularity of the variety $R$ of type $2^{l_X}$, the rank of which (by Remark 1.4 and the condition (MC2)) satisfies the inequality $$ \mathop{\rm rk}(o\in R)\geqslant 2l_X+c_X-3. $$
Consider a general linear subspace $P\ni o$ in ${\mathbb P}(X)$ of dimension $k+c_X-2$. That dimension, generally speaking, does not belong to $I_X$ and only the inequality $$ a(E_{X\cap P})\geqslant 1 $$ holds. The variety $X\cap P$ has a unique singularity, the point $o$, and its strict transform $(X\cap P)^+$ and the exceptional divisor $E_{X\cap P}$ are non-singular.
The intersection $P\cap{\mathbb P}(R)$ is a general linear subspace of dimension $k+c_X-3$ in ${\mathbb P}(R)$, containing the point $o$. For that reason $R\cap P$ has a unique singularity, the point $o$, and moreover, the exceptional divisor $$ E_{R\cap P}=(R\cap P)^+\cap{\mathbb E}_X=R^+\cap E_{X\cap P} $$ is non-singular, and the map $(R\cap P)^+\to R\cap P$ is the blow up of the point $o$ on $R\cap P$, which resolves the singularities of that variety. From here, taking into account that $$ \nu(R)=1\leqslant a(E_{X\cap P}), $$ it follows that the pair $(X\cap P,R\cap P)$ is canonical. By inversion of adjunction we get that for every $m\in I_X$ and a general subspace $P^{\sharp}\ni o$ of dimension $m$ the pair $(X\cap P^{\sharp}, R\cap P^{\sharp})$ is canonical. Therefore, $D\neq bR$, $b\geqslant 1$, and the first claim of the proposition is shown.
Assume now that $D=D^*+bR$, where $b\geqslant 1$. Then for a general subspace $P\ni o$ of dimension $k+c_X-1$ in ${\mathbb P}(X)$ the pair $(X\cap P, \frac{1}{n(D)}D_P)$ is not log canonical at the point $o$. As we saw above, the pair $(X\cap P, R_P)$ is log canonical (and even canonical). The condition of being log canonical is linear, so we conclude that the pair $$
\left(X\cap P, \frac{1}{n(D^*)} D^*|_{X\cap P}\right) $$ is not log canonical at the point $o$. Therefore, $(X,D^*,o)$ is a working triple. Q.E.D. for the proposition.
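The convexity argument behind the phrase ``the condition of being log canonical is linear'' can be written out explicitly. Since $n(R)=1$, we have $n(D)=n(D^*)+b$, and therefore $$ \frac{1}{n(D)}\,D|_{X\cap P}=\frac{n(D^*)}{n(D)}\cdot\frac{1}{n(D^*)}\,D^*|_{X\cap P}+\frac{b}{n(D)}\,R|_{X\cap P},\qquad \frac{n(D^*)}{n(D)}+\frac{b}{n(D)}=1, $$ so that if both pairs on the right hand side were log canonical at the point $o$, the pair $(X\cap P,\frac{1}{n(D)}D|_{X\cap P})$ would be log canonical at $o$ as well, a contradiction.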
{\bf Theorem 3.1 (on the transversal hyperplane section).} {\it Let $[X,o]$ be a marked complete intersection of level $(l_X=k,c_X)$, where $c_X\geqslant k+6$, and $(X,D,o)$ a working triple. Let $R\ni o$ be a hyperplane section, which is not a component of the divisor $D$. Assume that the inequality $\mathop{\rm ct}(R\backslash\mathop{\rm Sing}R)\geqslant 1$ holds. Then $(R,(D\circ R),o)$ is a working triple on the marked complete intersection $[R,o]$ of level $(l_R=k,c_R)$, where} $c_R=c_X-2$.
{\bf Proof.} First of all, let us check that $[R,o]$ is a marked complete intersection of level $(k,c_R)$. The inequality $c_R\geqslant k+4$ holds by assumption. Furthermore, $$ \mathop{\rm codim} (\mathop{\rm Sing}R\subset R) \geqslant c_X-2=c_R, $$ so that the condition (MC1) is satisfied. Furthermore, the point $o\in R$ is a multi-quadratic singularity, the rank of which satisfies the inequality $$ \mathop{\rm rk}(o\in R)\geqslant\mathop{\rm rk}(o\in X)-2\geqslant 2k+c_R-1, $$ so that the condition (MC2) holds. The condition (MC3) holds by assumption. The bound for the codimension of the singular set $\mathop{\rm Sing}R$ guarantees that the complete intersection $R\subset{\mathbb P}(R)$ is irreducible, reduced and factorial. Therefore, $[R,o]$ is a marked complete intersection of level $(k,c_R)$. Set $$ I_R=[2k+3,k+c_R-1]. $$ Obviously, $(D\circ R)\sim n(D\circ R)H_R=n(D)H_R$, where $H_R$ is the class of a hyperplane section of $R$. It remains to check that for a general subspace $P\ni o$ of dimension $k+c_R-1$ in ${\mathbb P}(R)$ the pair \begin{equation}\label{14.09.22.1}
\left(R\cap P,\,\frac{1}{n(D\circ R)}(D\circ R)|_{R\cap P}\right) \end{equation} is not log canonical at the point $o$. In order to do this, we present $P$ as the intersection $$ P=P^{\sharp}\cap{\mathbb P}(R), $$ where $P^{\sharp}\ni o$ is a general subspace of dimension $$ k+c_R=k+c_X-2 $$ in ${\mathbb P}(X)$. As $k+c_R\in I_X$, the point $o$ is the only singularity of the variety $X\cap P^{\sharp}$ (and the only singularity of the variety $R\cap P$), and $$
\{o\}=\mathop{\rm LCS}\left(X\cap P^{\sharp},\frac{1}{n(D)}D|_{X\cap P^{\sharp}}\right). $$ The variety $R\cap P$ is the section of the variety $X\cap P^{\sharp}$ by the hyperplane $P= P^{\sharp}\cap{\mathbb P}(R)$, containing the point $o$, so that by inversion of adjunction the pair (\ref{14.09.22.1}) is not log canonical. At the same time, it is canonical outside the point $o$ since the subspace $P\subset{\mathbb P}(R)$ is generic, the non-singular part $R\backslash\mathop{\rm Sing}R$ is divisorially canonical and the equality $\{o\}=\mathop{\rm Sing}(R\cap P)$ holds. Therefore, the pair (\ref{14.09.22.1}) is not log canonical precisely at the point $o$, which completes the proof of Theorem 3.1.
{\bf 3.4. Tangent hyperplane sections.} Now let us consider a marked complete intersection $[X,o]$ of level $(l_X,c_X)$, where $l_X\leqslant k-1$. Let $R$ be the section of the variety $X$ by a hyperplane ${\mathbb P}(R)\subset{\mathbb P}(X)$, which is tangent to $X$ at the point $o$. By the symbol ${\mathbb P}(R)^+$ denote the strict transform of the hyperplane ${\mathbb P} (R)$ on ${\mathbb P}(X)^+$ and set $$ {\mathbb E}_R={\mathbb P}(R)^+\cap {\mathbb E}_X. $$ Obviously, ${\mathbb E}_R\cong{\mathbb P}^{N(X)-2}$ is the exceptional divisor of the blow up of the point $o$ on the hyperplane ${\mathbb P}(R)$. Set also $E_R=R^+\cap{\mathbb E}_X$. Obviously, $E_R\subset{\mathbb E}_R$, and $$ \mathop{\rm codim}(E_R\subset{\mathbb E}_R)=k. $$
{\bf Proposition 3.2.} {\it Assume that $c_X\geqslant l_X+5$ and the point $o\in R$ is a multi-quadratic singularity of type $2^{l_X+1}$, and moreover, the inequality $$ \mathop{\rm rk} (o\in R)\geqslant 2l_X+c_X-2 $$ holds. Then $D\neq bR$ for $b\geqslant 1$. Moreover, if the divisor $D$ contains $R$ as a component, that is, $$ D=D^*+bR, $$ where $b\geqslant 1$, then $(X,D^*,o)$ is a working triple.}
{\bf Proof} is completely similar to the proof of Proposition 3.1, but we give it in full detail, because there are some small points where the two arguments are different. The inequality (\ref{13.09.22.1}) holds in this case again. Let us use the additional assumption about the singularity $o\in R$. Consider a general linear subspace $P\ni o$ in ${\mathbb P}(X)$ of dimension $k+l_X+3$ (it is the minimal admissible dimension). We get the equality $a(E_{X\cap P})=2$. Obviously, $$ R^+\sim H_X-2E_X $$ and, respectively, on $(X\cap P)^+$ we have $$ (R\cap P)^+\sim H_{X\cap P}-2E_{X\cap P}. $$ Arguing as in the transversal case, we note that the intersection $P\cap{\mathbb P}(R)$ is a general subspace of dimension $k+l_X+2$ in ${\mathbb P}(R)$. Taking into account that by the inequality (\ref{13.09.22.1}) the inequality $$ \mathop{\rm codim}(\mathop{\rm Sing}R\subset{\mathbb P}(R))\geqslant k+c_X-2 $$ holds, and that by assumption $c_X\geqslant l_X+5$, we see that $R\cap P$ has a unique singularity, the point $o$. Furthermore, by the assumption about the rank of the singular point $o\in R$ we get the inequality $$ \mathop{\rm codim}(\mathop{\rm Sing}E_R\subset E_R)\geqslant c_X-3\geqslant l_X+2, $$ so that $$ \mathop{\rm codim}(\mathop{\rm Sing}E_R\subset{\mathbb E}_R)\geqslant k+l_X+2. $$ The exceptional divisor $E_{R\cap P}$ is the section of the subvariety $E_R\subset{\mathbb E}_R$ by a general linear subspace of dimension $k+l_X+1$, whence we conclude that the variety $E_{R\cap P}$ is non-singular. Thus we have shown that the singularity $o\in R\cap P$ is resolved by one blow up. Therefore, the pair $$ ((X\cap P)^+,(R\cap P)^+) $$ is canonical, so that the pair $$ (X\cap P,R\cap P) $$ is canonical, too. We have shown that $D\neq bR$ for $b\geqslant 1$.
By inversion of adjunction for every $m\in I_X$ and a general subspace $P^{\sharp}\ni o$ of dimension $m$ the pair $(X\cap P^{\sharp}, R\cap P^{\sharp})$ is canonical (recall that $n(R)=1$). Repeating the arguments given in the transversal case (the proof of Proposition 3.1) word for word, we complete the proof of Proposition 3.2.
{\bf Remark 3.1.} If for $l_X\leqslant k-1$ the intersection $X\cap T_oX$ has the point $o$ as a multi-quadratic singularity of type $2^k$, the rank of which satisfies the inequality $$ \mathop{\rm rk}(o\in X\cap T_oX)\geqslant 2l_X+c_X-2, $$ then the assumption about the rank $\mathop{\rm rk}(o\in R)$ in the statement of Proposition 3.2 holds automatically for every tangent hyperplane at the point $o$.
{\bf Theorem 3.2 (on the tangent hyperplane section).} {\it Let $[X,o]$ be a marked complete intersection of level $(l_X,c_X)$, where $2\leqslant l_X\leqslant k-1$ and $c_X\geqslant l_X+7$, and $(X,D,o)$ a working triple. Let $R$ be the section of $X$ by a hyperplane which is tangent to $X$ at the point $o$, and assume that $R$ is not a component of the divisor $D$. Assume that the point $o\in R$ is a multi-quadratic singularity of type $2^{l_R}$, where $l_R=l_X+1$, the rank of which satisfies the inequality $$ \mathop{\rm rk}(o\in R)\geqslant 2l_R+c_R-1=2l_X+c_X-1, $$ where $c_R=c_X-2$, and also that the inequality $\mathop{\rm ct}(R\backslash\mathop{\rm Sing}R)\geqslant 1$ holds. Then $(R,(D\circ R),o)$ is a working triple on the marked complete intersection $[R,o]$ of level} $(l_R,c_R)$.
{\bf Proof} is similar to the transversal case (Theorem 3.1), and we just emphasize the necessary modifications. The fact that $[R,o]$ is a marked complete intersection of level $(l_R,c_R)$ is checked in the tangent case even easier than in the transversal one, because the assumption about the singularity $o\in R$ is among the assumptions of the theorem.
A general subspace $P\ni o$ of dimension $k+c_R-1=k+c_X-3$ in ${\mathbb P}(R)$ is again presented as the intersection $P=P^{\sharp}\cap{\mathbb P}(R)$, where $P^{\sharp}\ni o$ is a general subspace of dimension $k+c_R\in I_X$ in ${\mathbb P}(X)$, and now, repeating the arguments in the transversal case and using inversion of adjunction, we get that $(R,(D\circ R),o)$ is a working triple. Q.E.D. for the theorem.
{\bf Proof of Proposition 1.7.} We assume that $l\leqslant k-1$. Recall that the symbol $T$ stands for the intersection $F\cap T_oF$; this is a subvariety of codimension $(k-l)$ in $F$. Let us construct a sequence of subvarieties $$ T_0=F\supset T_1\supset\dots\supset T_{k-l}=T, $$ where $T_{i+1}$ is the section of $T_i\ni o$ by some hyperplane ${\mathbb P}(T_{i+1})=\langle T_{i+1}\rangle\ni o$, which is tangent to $T_i$ at the point $o$. Theorem 1.1 implies that the inequality $$ c_F\geqslant l+3(k-l)+4 $$ holds (the inequality of Theorem 1.1 for the codimension $c_F$ is much stronger, but for the clarity of exposition we give the weakest estimate that is sufficient for the proof of Proposition 1.7; this remark also applies to the estimate of the rank of the multi-quadratic singularity $o\in T$ below). Furthermore, the condition (MQ2) implies that $o\in T$ is a multi-quadratic singularity of type $2^k$, and moreover, the inequality \begin{equation}\label{19.09.22.1} \mathop{\rm rk}(o\in T)\geqslant 2k+c_F-1 \end{equation} holds. Finally, by Theorem 1.2 for every hyperplane section $W$ of every subvariety $T_i$, $i=0,1,\dots,k-l$, every non-singular point $p\in W$ and every prime divisor $Y$ on $W$ the inequality \begin{equation}\label{19.09.22.2} \frac{\mathop{\rm mult}_p}{\mathop{\rm deg}}Y\leqslant\frac{2}{\mathop{\rm deg} F} \end{equation} holds. Then for all $i=0,1,\dots,k-l$ the pair $[T_i,o]$ is a marked complete intersection of level $$ (l_i=l+i,c_i=c_F-2i). $$ Indeed, the inequality $c_i\geqslant l_i+4$ is true by the definition of the numbers $l_i$, $c_i$, the condition (MC1) follows from Remark 1.4, the point $o\in T_i$ by construction is a multi-quadratic singularity of type $2^{l_i}$, and moreover, by (\ref{19.09.22.1}) we have $$ \mathop{\rm rk}(o\in T_i)\geqslant 2l+c_F-1=2l_i+c_i-1, $$ and, finally, repeating the proof of Proposition 1.1 and the arguments of Subsection 1.5 word for word, we get that by (\ref{19.09.22.2}) the condition (MC3) holds. Therefore, $[T,o]$ is a marked complete intersection of level $(k,c_{k-l})$, where $c_{k-l}=c_F-2(k-l)\geqslant k+4$. Recall (Proposition 1.6) that $c_F=4k+2\varepsilon(k)$. Since $l\geqslant 2$, the inequality $$ c_{k-l}\geqslant c_T=2k+2\varepsilon(k)+4 $$ holds, so that $[T,o]$ is a marked complete intersection of level $(k,c_T)$, as we claimed.
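The two bounds on $c_{k-l}$ above can be checked by elementary arithmetic. From $c_F\geqslant l+3(k-l)+4$ we get $$ c_{k-l}=c_F-2(k-l)\geqslant l+(k-l)+4=k+4, $$ and, since $c_F=4k+2\varepsilon(k)$ and $l\geqslant 2$, $$ c_{k-l}=4k+2\varepsilon(k)-2(k-l)=2k+2\varepsilon(k)+2l\geqslant 2k+2\varepsilon(k)+4=c_T. $$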
It remains to construct the working triple $(T,D_T,o)$. We will construct a sequence of working triples $(T_i,D_i,o)$, where $i=0,1,\dots,k-l$ and $D_0=D_F$. Assume that $(T_i,D_i,o)$ is already constructed and $i\leqslant k-l-1$. Let us check that all assumptions that allow us to apply Proposition 3.2 are satisfied.
Indeed, the fact that $i\leqslant k-l-1$ implies the inequality $c_i\geqslant l_i+7$. The point $o\in T_{i+1}$ is a multi-quadratic singularity of type $2^{l_i+1}$, the rank of which satisfies the inequality $$ \mathop{\rm rk}(o\in T_{i+1})\geqslant 2l_{i+1}+c_{i+1}-1=2l_i+c_i-1 $$ (see above). Applying Proposition 3.2, we remove $T_{i+1}$ from the effective divisor $D_i$ (if it is necessary) and obtain the working triple $(T_i,D^*_i,o)$, where the effective divisor $D^*_i$ does not contain $T_{i+1}$ as a component.
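The inequality $c_i\geqslant l_i+7$ stated above follows from the bound $c_F\geqslant l+3(k-l)+4$ alone: for $i\leqslant k-l-1$, $$ c_i-l_i=(c_F-2i)-(l+i)=c_F-l-3i\geqslant\bigl(l+3(k-l)+4\bigr)-l-3(k-l-1)=7. $$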
It is easy to see that all the assumptions of Theorem 3.2 are satisfied. Set $$ D_{i+1}=(D^*_i\circ T_{i+1}). $$ Now $(T_{i+1}, D_{i+1}, o)$ is a working triple. Q.E.D. for Proposition 1.7.
{\bf 3.5. Plan of the proof of Proposition 1.8.} Recall that by the condition (MQ2) the inequality $$ \mathop{\rm rk} (o\in T)\geqslant 10k^2+8k+2\varepsilon(k)+5 $$ holds. The pair $[T,o]$ is a marked complete intersection of level $(k,c_T)$, where $c_T=2k+2\varepsilon(k)+4$. Let $$ R_0=T,R_1,\dots,R_a, $$ where $a\leqslant\varepsilon(k)$, be an {\it arbitrary} sequence of subvarieties in $T$, where $R_{i+1}$ is the section of $R_i$ by the hyperplane ${\mathbb P}(R_{i+1})$ in ${\mathbb P}(R_i)$, containing the point $o$. Set $c_i=c_T-2i$, where $i=0,1,\dots,a$.
{\bf Proposition 3.3.} {\it The pair $[R_i,o]$ is a marked complete intersection of level} $(k,c_i)$.
{\bf Proof.} Since $a\leqslant\varepsilon(k)$, the inequality $c_i\geqslant k+4$ holds in an obvious way (in fact, $c_i\geqslant 2k+4$). The condition (MC1) holds by Remark 1.4. The condition (MC3) is obtained by repeating the proof of Proposition 1.1 and the arguments of Subsection 1.5 word for word, taking into account Theorem 1.2. Finally, again by Remark 1.4 the inequality $$ 10k^2+8k+2\varepsilon(k)+4\geqslant 2k+c_i+2i-1 $$ implies the condition (MC2). Q.E.D. for the proposition.
Now let us construct for every $i=0,1,\dots,a$ an effective divisor $D_i$ on $R_i$ in the same way as we did in Subsection 3.4 in the proof of Proposition 1.7, applying instead of Proposition 3.2 its ``transversal'' analog, Proposition 3.1, and Theorem 3.1 instead of Theorem 3.2. More precisely, if the effective divisor $D_i$, where $i\leqslant a-1$, is already constructed, we remove from this divisor all components that are hyperplane sections (if there are such components), and obtain an effective divisor $D^*_i$ that does not contain hyperplane sections as components, and such that $(R_i,D^*_i,o)$ is a working triple (Proposition 3.1).
{\bf Proposition 3.4.} {\it The following inequality holds:} $$ \frac{\nu(D^*_i)}{n(D^*_i)}\geqslant\frac{\nu(D_i)}{n(D_i)}. $$
{\bf Proof.} It is sufficient to consider the case when $D^*_i$ is obtained from $D_i$ by removing one hyperplane section $Z\ni o$. Write down $$ D_i=D^*_i+bZ, $$ where $b\geqslant 1$. Since $c_i\geqslant 2k+4$, we can apply Proposition 1.5: $\nu(D_i)>n(D_i)$. On the other hand, $\nu(Z)=n(Z)=1$. Set $\nu(D_i)=\alpha n(D_i)$, where $\alpha>1$. We get $$ \frac{\nu(D^*_i)}{n(D^*_i)}= \frac{\alpha n(D^*_i)+(\alpha-1)b}{n(D^*_i)}>\alpha, $$ which proves the proposition. Q.E.D.
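The middle equality in the last display can be checked as follows, assuming (as the definitions suggest) that $n(\cdot)$ and $\nu(\cdot)$ are additive on effective divisors: $n(D_i)=n(D^*_i)+b$ and $\nu(D_i)=\nu(D^*_i)+b$, so that $$ \nu(D^*_i)=\nu(D_i)-b=\alpha n(D_i)-b=\alpha\bigl(n(D^*_i)+b\bigr)-b=\alpha n(D^*_i)+(\alpha-1)b, $$ and the final strict inequality holds because $\alpha>1$ and $b\geqslant 1$.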
(If we remove from $D_i$ a hyperplane section that does not contain the point $o$, the claim of Proposition 3.4 is trivial.)
Now we apply Theorem 3.1, setting $D_{i+1}=(D^*_i\circ R_{i+1})$; this cycle of the scheme-theoretic intersection is well defined as an effective divisor on $R_{i+1}$, and moreover, $(R_{i+1}, D_{i+1}, o)$ is a working triple and $$ \frac{\nu(D_{i+1})}{n(D_{i+1})}\geqslant\frac{\nu(D^*_i)}{n(D^*_i)} \geqslant\frac{\nu(D_i)}{n(D_i)}. $$ We emphasize that $R_1,\dots, R_a$ is an arbitrary sequence of consecutive hyperplane sections. By Remark 1.4, for all $i=0,1,\dots,a$ the inequality $c_i\geqslant 2k+4$ holds, and the rank of the multi-quadratic singularity $o\in R_i$ of type $2^k$ is at least $10k^2+8k+5$. By Theorem 1.4 and Proposition 1.5 we have the inequalities $$ n(D_i)<\nu(D_i)\leqslant\frac32n(D_i). $$ Therefore, at every step of our construction the assumptions of the following claim are satisfied.
{\bf Theorem 3.3 (on the special hyperplane section).} {\it Let $[X,o]$ be a marked complete intersection of level $(k,c_X)$, where $c_X\geqslant 2k+4$ and the inequality $$ \mathop{\rm rk}(o\in X)\geqslant 10k^2+8k+5 $$ holds. Let $(X,D,o)$ be a working triple, where the effective divisor $D$ does not contain hyperplane sections and satisfies the inequalities $$ n(D)<\nu(D)\leqslant\frac32n(D). $$ Then there is a section $R\ni o$ of the variety $X$ by a hyperplane ${\mathbb P}(R)=\langle R\rangle\subset {\mathbb P}(X)= {\mathbb P}^{N(X)}$, such that the effective divisor $D_R=(R\circ D)$ on $R$ satisfies the inequality} $$ 2-\frac{\nu(D_R)}{n(D_R)}<\frac{1}{1+\frac{1}{k}}\left(2- \frac{\nu(D)}{n(D)}\right). $$
Now by the definition of the integer $\varepsilon(k)$ and what was said above, Theorem 3.3 immediately implies Proposition 1.8.
{\bf Proof} of Theorem 3.3 is given in \S 5.
\section{Multi-quadratic singularities}
In this section we consider the properties of multi-quadratic singularities, the rank of which is bounded from below: they are factorial, stable with respect to blow ups and terminal. In Subsection 4.5 we study linear subspaces on complete intersections of quadrics and the properties of projections from these subspaces.
{\bf 4.1. The definition and the first properties.} Let ${\cal X}$ be an (irreducible) algebraic variety, $o\in{\cal X}$ a point.
{\bf Definition 4.1.} The point $o$ is a {\it multi-quadratic singularity} of the variety ${\cal X}$ of type $2^l$ and rank $r\geqslant 1$, if in some neighborhood of this point ${\cal X}$ can be realized as a subvariety of a non-singular $N=(\mathop{\rm dim} {\cal X}+ l)$-dimensional variety ${\cal Y}\ni o$, and for some system $(u_1, \dots, u_N)$ of local parameters on ${\cal Y}$ at the point $o$ the subvariety ${\cal X}$ is the scheme of common zeros of regular functions $$ \alpha_1,\dots,\alpha_l\in{\cal O}_{o,{\cal Y}}\subset{\mathbb C}[[u_1,\dots,u_N]], $$ which are represented by the formal power series $$ \alpha_i=\alpha_{i,2}+\alpha_{i,3}+\dots, $$ where $\alpha_{i,j}(u_1,\dots,u_N)$ are homogeneous polynomials of degree $j$ and $$ \mathop{\rm rk}(\alpha_{1,2},\dots,\alpha_{l,2})=r. $$ (Obviously, the order of the formal power series, representing $\alpha_i$, and the rank of the tuple of quadratic forms $\alpha_{i,2}$ do not depend on the choice of the local parameters on ${\cal Y}$ at the point $o$.)
It is convenient to work in a more general context. Assume that in a neighborhood of the point $o$ the variety ${\cal X}$ is realized as a subvariety ${\cal X}\subset{\cal Z}$, where $\mathop{\rm dim}{\cal Z}=\mathop{\rm dim}{\cal X}+e=N({\cal Z})$, and for a certain system of local parameters $(v_1,\dots,v_{N({\cal Z})})$ on ${\cal Z}$ at the point $o$ the subvariety ${\cal X}$ is the scheme of common zeros of regular functions $$ \beta_1,\dots,\beta_e\in{\cal O}_{o,{{\cal Z}}}\subset{\mathbb C}[[v_*]], $$ which are represented by the formal power series $$ \beta_i=\beta_{i,1}+\beta_{i,2}+\dots, $$ where $\beta_{i,j}(v_*)$ are homogeneous polynomials of degree $j$. Assume that for some $l\in\{0,1,\dots,e\}$ $$ \mathop{\rm dim}\langle\beta_{1,1},\dots,\beta_{e,1}\rangle=e-l, $$ where we assume (for the convenience of notations), that the linear forms $\beta_{j,1}$ for $l+1\leqslant j\leqslant e$ are linearly independent, so that for $1\leqslant i\leqslant l$ and $l+1\leqslant j\leqslant e$ there are uniquely determined numbers $a_{i,j}$, such that $$ \beta_{i,1}=\sum^e_{j=l+1}a_{i,j}\beta_{j,1}. $$
Set ${\cal Y}=\{\beta_j=0\,|\, l+1\leqslant j\leqslant e\}$ and $$ \beta_i^*=\beta_i-\sum_{j=l+1}^e a_{i,j}\beta_j. $$ Then (in a neighborhood of the point $o$) the variety ${\cal Y}$ is non-singular, and ${\cal X}\subset {\cal Y}$ is realized as the scheme of common zeros of the regular functions $\beta^*_i$, $1\leqslant i\leqslant l$. Set $$
T_o{\cal Y}=T_o{\cal X}=\{\beta_{j,1}=0\,|\, l+1\leqslant j\leqslant e\}. $$ If $$
\mathop{\rm rk}\left(\beta^*_{i,2}|_{T_o{\cal X}}\, |\, 1\leqslant i\leqslant l\right)=r, $$ then obviously $o\in{\cal X}$ is a multi-quadratic singularity of rank $r$.
The rank of the multi-quadratic point $o\in{\cal X}$ is denoted by the symbol $\mathop{\rm rk} (o\in{\cal X})$ or just $\mathop{\rm rk}(o)$, if it is clear which variety is meant. For uniformity of notations we treat a non-singular point as a multi-quadratic one of type $2^0$.
{\bf Proposition 4.1.} {\it Assume that $o\in{\cal X}$ is a multi-quadratic singularity of type $2^l$, where $l\geqslant 1$, and of rank $r\geqslant 2l$. Then in a neighborhood of the point $o$ every point $p\in{\cal X}$ is either non-singular, or a multi-quadratic singularity of type $2^b$, where $b\in\{1,\dots,l\}$, of rank} $\geqslant r-2(l-b)$.
{\bf Proof.} Using the notations for the embedding ${\cal X}\subset{\cal Z}$, introduced above, with $e=l$ (so that $\beta_{i,1}=0$ for all $i=1,\dots,l$) and setting $N({\cal Z})=N$, consider an open set $U\subset{\cal Z}$, $U\ni o$, such that for every point $p\in U$ the ``shifted'' functions $$ v^{(p)}_i=v_i-v_i(p),\quad i=1,\dots,N, $$ form a system of local parameters at the point $p$, and in the formal expansion $$ \beta_i=\beta^{(p)}_{i,0}+\beta^{(p)}_{i,1}+\beta^{(p)}_{i,2}+\dots $$ with respect to the system of parameters $v^{(p)}_*$ the quadratic components satisfy the inequality $$
\mathop{\rm rk}(\beta^{(p)}_{i,2}\,|\, 1\leqslant i\leqslant l)\geqslant r. $$ If the point $p$ is a common zero of $\beta_1,\dots,\beta_l$, then $\beta^{(p)}_{i,0}=0$ for $1\leqslant i\leqslant l$. Set $$
T_p{\cal X}=\{\beta^{(p)}_{i,1}=0\,|\, 1\leqslant i\leqslant l\} $$ and assume (for the convenience of notations) that the forms $\beta^{(p)}_{i,1}$ for $b+1\leqslant i\leqslant l$ are linearly independent, where $$
\mathop{\rm dim}\langle\beta^{(p)}_{i,1}\,|\,\,1\leqslant i\leqslant l\rangle=l-b. $$ Since $\mathop{\rm codim}(T_p{\cal X}\subset T_p{\cal Z})=l-b$, by Remark 1.4 the inequality $$
\mathop{\rm rk}(\beta^{(p)}_{i,2}|_{T_p{\cal X}}\,|\, 1\leqslant i\leqslant l) \geqslant r - 2(l-b) $$ holds. It is easy to see from the construction of the quadratic forms $\beta^{(p)*}_{i,2}$, $1\leqslant i\leqslant b$, that every linear combination of these forms with coefficients $(\lambda_1,\dots,\lambda_b)\neq(0,\dots,0)$ is a linear combination of the original forms $\beta^{(p)}_{i,2}$, $1\leqslant i\leqslant l$, not all coefficients in which are equal to zero. Therefore, the point $p$ is a multi-quadratic singularity of rank $\geqslant r-2(l-b)$, as we claimed. Q.E.D. for the proposition.
{\bf 4.2. Complete intersections of quadrics.} In the notations of Definition 4.1 let ${\cal Y}^+\to{\cal Y}$ be the blow up of the point $o$ with the exceptional divisor $E_{\cal Y}\cong{\mathbb P}^{N-1}$ and ${\cal X}^+\subset{\cal Y}^+$ the strict transform of ${\cal X}$ on ${\cal Y}^+$, so that ${\cal X}^+\to{\cal X}$ is the blow up of the point $o$ on ${\cal X}$ with the exceptional divisor $E_{\cal Y}|_{{\cal X}^+}=E_{\cal X}$. Therefore, $E_{\cal X}$ is the scheme of common zeros of the quadratic forms $\alpha_{i,2}$, $i=1,\dots,l$, on $E_{\cal Y}\cong{\mathbb P}^{N-1}$.
Let $q_1,\dots,q_l$ be quadratic forms on ${\mathbb P}^{N-1}$, where $N\geqslant l+4$. By the symbol $q_{[1,l]}$ we denote the tuple $(q_1,\dots,q_l)$.
{\bf Proposition 4.2.} (i) {\it Assume that the inequality $$ \mathop{\rm rk} q_{[1,l]}\geqslant 2l+3 $$ holds. Then the scheme of common zeros of the forms $q_1,\dots,q_l$ is an irreducible non-degenerate factorial variety $Q\subset{\mathbb P}^{N-1}$ of codimension $l$, that is, a complete intersection of type $2^l$.}
(ii) {\it Assume that for some $e\geqslant 4$ the inequality $$ \mathop{\rm rk} q_{[1,l]}\geqslant 2l+e-1 $$ holds. Then the following inequality is true:} $$ \mathop{\rm codim}(\mathop{\rm Sing}Q\subset Q)\geqslant e. $$
{\bf Proof} is given below in Subsection 4.4.
{\bf Corollary 4.1.} (i) {\it Assume that the rank of the tuple $\alpha_{*,2}=(\alpha_{1,2},\dots,\alpha_{l,2})$ of quadratic forms satisfies the inequality $$ \mathop{\rm rk} (\alpha_{*,2})\geqslant 2l+3. $$ Then in a neighborhood of the point $o$ the scheme of common zeros of the regular functions $\alpha_1,\dots,\alpha_l$ is an irreducible reduced factorial subvariety ${\cal X}$ of codimension $l$ in ${\cal Y}$.}
(ii) {\it Assume that for some $e\geqslant 4$ the inequality $$ \mathop{\rm rk} (\alpha_{*,2})\geqslant 2l+e-1 $$ holds. Then in a neighborhood of the point $o$ the following inequality is true} $$ \mathop{\rm codim}(\mathop{\rm Sing}{\cal X}\subset{\cal X})\geqslant e. $$
{\bf Proof.} Both claims obviously follow from Proposition 4.2, taking into account Grothendieck's theorem \cite{CL} on the factoriality of a complete intersection, the singular set of which is of codimension $\geqslant 4$.
Therefore, for $r\geqslant 2l+3$ the assumption in Definition 4.1 that ${\cal X}$ is an irreducible variety is unnecessary: in a neighborhood of the point $o$ the scheme of common zeros of the functions $\alpha_*$ is automatically irreducible and reduced, and moreover, it is a factorial variety. This proves all claims of Theorem 1.1, except for the claim that the singularities of the variety $F$ are terminal.
{\bf 4.3. Stability with respect to blow ups.} Let $\underline{r}=(r_1,r_2,\dots,r_k)$ be a tuple of integers, satisfying the inequalities $r_{i+1}\geqslant r_i+2$ for $i=1,\dots,k-1$, where $r_1\geqslant 5$. Again, let ${\cal Y}$ be a non-singular $N$-dimensional variety, where $N\geqslant k+3$, and ${\cal X}\subset{\cal Y}$ an (irreducible) subvariety of codimension $k$, every point $o\in{\cal X}$ of which is either non-singular, or a multi-quadratic singularity of type $2^l$, where $l\in\{1,\dots,k\}$, of rank $\geqslant r_l$. Somewhat abusing the terminology, we say in this case that ${\cal X}$ has {\it multi-quadratic singularities of type} $\underline{r}$.
{\bf Theorem 4.1.} {\it In the assumptions above let $B\subset{\cal X}$ be an irreducible subvariety of codimension $\geqslant 2$. Then there is an open subset $U\subset{\cal X}$, such that $U\cap B\neq\emptyset$, $U\cap B$ is non-singular and the blow up $$ \sigma_B\colon U_B\to U $$ along $B$ gives a quasi-projective variety $U_B$ with multi-quadratic singularities of type} $\underline{r}$.
{\bf Proof.} If a point of general position $o\in B$ is non-singular on ${\cal X}$, there is nothing to prove. If $o\in{\cal X}$ is a multi-quadratic singularity of type $2^l$, where $l<k$, then a certain Zariski open subset $U\subset{\cal X}$, $U\ni o$, has multi-quadratic singularities of type $(r_1,\dots,r_l)$ (see Subsection 4.1), so that it is sufficient to consider the case when a point of general position $o\in B$ is a multi-quadratic point of type $2^k$ on ${\cal X}$. Passing over to an open subset, we may assume that the subvariety $B$ is non-singular. Let $(u_1,\dots,u_N)$ be a system of local parameters at the point $o$, such that $B=\{u_1=\dots=u_m=0\}$. Since $B\subset{\cal X}$, the subvariety ${\cal X}\subset{\cal Y}$ is the scheme of common zeros of regular functions $$ \beta_1,\dots,\beta_k\in{\cal O}_{o,{\cal Y}}\subset{\cal O}_{o,B}[[u_1,\dots,u_m]], $$ where for all $i=1,\dots,k$ $$ \beta_i=\beta_{i,2}+\beta_{i,3}+\dots, $$ where $\beta_{i,j}$ are homogeneous polynomials of degree $j$ in $u_1,\dots,u_m$ with coefficients from ${\cal O}_{o,B}$. Again replacing ${\cal Y}$, if necessary, by an open subset, containing the point $o$, we have $$ \beta_i\in{\cal O}({\cal Y})\subset{\cal O}(B)[[u_1,\dots,u_m]], $$ so that all coefficients of the forms $\beta_{i,j}$ are regular functions on $B$; in particular, $$ \beta_{i,2}=\sum_{1\leqslant j_1\leqslant j_2\leqslant m}A_{j_1,j_2}u_{j_1}u_{j_2}, $$ where $A_{j_1,j_2}\in{\cal O}(B)$. In terms of the embedding ${\cal O}_{o,{\cal Y}}\subset{\mathbb C}[[u_1,\dots,u_N]]$ we get the presentation $$ \beta_i=\overline{\beta}_{i,2}+\overline{\beta}_{i,3}+\dots, $$ where $\overline{\beta}_{i,j}$ is a homogeneous polynomial of degree $j$ in $u_*$, and moreover, in the right hand side there are no monomials that do not contain the variables $u_1,\dots,u_m$, or that contain precisely one of them (in the power 1): every monomial in the right hand side is divisible by some quadratic monomial in $u_1,\dots,u_m$.
Let ${\cal Y}_B\to{\cal Y}$ be the blow up of the subvariety $B$ and ${\cal X}_B\subset{\cal Y}_B$ the strict transform of ${\cal X}$. Obviously, the morphism ${\cal X}_B\to{\cal X}$ is the blow up of $B$ on ${\cal X}$. The symbol $E_B$ denotes the exceptional divisor of the blow up of $B$ on ${\cal Y}$. Since outside $E_B$ the varieties ${\cal X}_B$ and ${\cal X}$ are isomorphic, it is sufficient to show that every point $p\in{\cal X}_B\cap E_B$ is either non-singular, or a multi-quadratic singularity of the variety $U_B$ of type $2^l$, where $l\geqslant 1$, and of rank $\geqslant r_l$. We assume that the point $p$ lies over the point $o\in U$ and is a singularity of the variety $U_B$.
By a linear change of local parameters $u_1,\dots,u_m$ we may ensure that at the point $p\in{\cal Y}_B$ there is a system of local parameters $$ (v_1,\dots,v_m,u_{m+1},\dots,u_N), $$ linked to the original system of parameters $u_*$ by the standard relations $$ u_1=v_1,\,\,u_2=v_1v_2,\dots,\,\,u_m=v_1v_m. $$ The local equation of the exceptional divisor $E_B$ at the point $p$ is $v_1=0$, and the subvariety ${\cal X}_B\subset{\cal Y}_B$ at that point is defined by the equations $$ \widetilde{\beta}_1,\dots,\widetilde{\beta}_k\in{\cal O}_{p,{\cal Y}_B}\subset{\mathbb C}[[v_1,\dots,v_m,u_{m+1},\dots,u_N]]. $$ Write down $\widetilde{\beta}_i=\widetilde{\beta}_{i,1} +\widetilde{\beta}_{i,2}+\dots$ and assume that for some $l\in\{1,\dots,k\}$ the linear forms $\widetilde{\beta}_{j,1}$, $l+1\leqslant j\leqslant k$, are linearly independent, and moreover, $$
\mathop{\rm dim}\langle\widetilde{\beta}_{i,1}\,|\,1\leqslant i\leqslant k\rangle=k-l, $$ so that there are relations $$ \widetilde{\beta}_{i,1}=\sum^k_{j=l+1}a_{i,j}\widetilde{\beta}_{j,1}, $$ $i=1,\dots,l$. Replacing the original system of local equations $\beta_1,\dots,\beta_k$ by $$ \beta_i-\sum^k_{j=l+1}a_{i,j}\beta_j,\,\,i=1,\dots,l,\quad \beta_{l+1},\dots,\beta_k, $$ we may assume that the linear forms $\widetilde{\beta}_{i,1}$, $i=1,\dots,l$, are identically zero. In that case the following claim is true.
{\bf Lemma 4.1.} {\it For $i=1,\dots,l$ the quadratic forms $\overline{\beta}_{i,2}$ depend only on $u_2,\dots,u_m$ and $$ \widetilde{\beta}_{i,2}=\overline{\beta}_{i,2}(v_2,\dots,v_m)+\beta^{\sharp}_{i,2}, $$ where every monomial in the quadratic form $\beta^{\sharp}_{i,2}$ is divisible either by $v_1$, or by} $u_j$, $m+1\leqslant j\leqslant N$.
{\bf Proof.} This is obvious because every monomial in $\overline{\beta}_{i,j}$ is divisible by some quadratic monomial in $u_1,\dots,u_m$, and $\widetilde{\beta}_{i,1}\equiv 0$ for $i=1,\dots,l$, and because of the standard formulas for the transformation of regular functions under a blow up. Q.E.D. for the lemma.
The lemma gives us the inequality $$ \mathop{\rm rk}(\widetilde{\beta}_{i,2},\, 1\leqslant i\leqslant l)\geqslant\mathop{\rm rk}(\overline{\beta}_{i,2},\, 1\leqslant i\leqslant l)\geqslant r_k. $$
Setting $T_p{\cal X}_B=\{\widetilde{\beta}_{j,1}=0\,|\, l+1\leqslant j\leqslant k\}$ and using Remark 1.4, we get $$
\mathop{\rm rk}(\widetilde{\beta}_{i,2}|_{T_p{\cal X}_B},\, 1\leqslant i\leqslant l)\geqslant r_k-2(k-l)\geqslant r_l. $$ Therefore, $p\in{\cal X}_B$ is a multi-quadratic singularity of type $2^l$ and rank $\geqslant r_l$. Q.E.D. for Theorem 4.1.
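The last step uses only the inequalities $r_{i+1}\geqslant r_i+2$ from the definition of the type $\underline{r}$: iterating them gives $$ r_k\geqslant r_{k-1}+2\geqslant\dots\geqslant r_l+2(k-l), $$ that is, $r_k-2(k-l)\geqslant r_l$.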
{\bf Corollary 4.2.} {\it Assume that $\cal X$ has multi-quadratic singularities of type $\underline{r}$, where $r_l\geqslant 3l+1$ for all $l=1,\dots,k$. Then the singularities of ${\cal X}$ are terminal.}
{\bf Proof.} In the notations of the proof of Theorem 4.1 it is sufficient to show the inequality $$ a({\cal X}_B\cap E_B,{\cal X})\geqslant 1. $$ Assume that a point $o\in B$ of general position is a multi-quadratic singularity of type $2^l$. From the claim (ii) of Corollary 4.1 we get the inequality $$ \mathop{\rm codim}(B\subset{\cal X})\geqslant l+2, $$ so that $\mathop{\rm codim}(B\subset{\cal Y})\geqslant k+l+2$ and for that reason $$ a(E_B,{\cal Y})\geqslant k+l+1. $$ By the adjunction formula $$ a({\cal X}_B\cap E_B,{\cal X})=a(E_B,{\cal Y})-(k-l)-2l, $$ which implies the required inequality. Q.E.D. for the corollary.
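For the reader's convenience: the first codimension bound comes from the claim (ii) of Corollary 4.1 with $e=l+2$, and its rank hypothesis $2l+(l+2)-1=3l+1$ is exactly the assumption $r_l\geqslant 3l+1$; the final step is the computation $$ a({\cal X}_B\cap E_B,{\cal X})=a(E_B,{\cal Y})-(k-l)-2l\geqslant(k+l+1)-(k-l)-2l=1. $$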
This completes the proof of Theorem 1.1.
{\bf 4.4. Singularities of complete intersections.} Let us show Proposition 4.2. We will prove the claims (i) and (ii) simultaneously: by Grothendieck's theorem on parafactoriality \cite{CL,Call1994} the claim (ii) for $e=4$ implies the factoriality of the variety $Q$.
We argue by induction on $l\geqslant 1$. For one quadric ($l=1$) the claims (i) and (ii) are obvious. Since $$ \mathop{\rm rk} q_{[1,l-1]}\geqslant \mathop{\rm rk} q_{[1,l]}, $$ we may assume that the claims (i) and (ii) are true for the tuple of quadratic forms $q_1$, \dots, $q_{l-1}$. In particular, the scheme of their common zeros $Q_{l-1}$ is an irreducible reduced factorial complete intersection of type $2^{l-1}$ in ${\mathbb P}^{N-1}$, so that $\mathop{\rm Pic} Q_{l-1}={\mathbb Z} H_{l-1}$, where $H_{l-1}$ is the class of a hyperplane section: every effective divisor on $Q_{l-1}$ is cut out on $Q_{l-1}$ by a hypersurface in ${\mathbb P}^{N-1}$.
The scheme of common zeros of the quadratic forms $q_1$, \dots, $q_l$ is the divisor of zeros of the form $q_l$ on the variety $Q_{l-1}$. This divisor is reducible or non-reduced if and only if there is a form $q^*_l$ of rank $\leqslant 2$ such that $$ q_l-q^*_l\in\langle q_1,\dots,q_{l-1}\rangle, $$ and in that case $\mathop{\rm rk}q_{[1,l]}\leqslant 2$, which contradicts the assumption. Therefore, $Q$ is an irreducible reduced complete intersection. It is easy to see that $Q\subset{\mathbb P}^{N-1}$ is non-degenerate. Since $$ \mathop{\rm rk}q_{[1,l-1]}\geqslant 2(l-1)+(e+2)-1 $$ (for the claim (i) we set $e=4$), we have $$ \mathop{\rm codim}(\mathop{\rm Sing}Q_{l-1}\subset Q_{l-1})\geqslant e+2, $$ so that $$ \mathop{\rm codim}((Q\cap\mathop{\rm Sing}Q_{l-1})\subset Q)\geqslant e+1. $$ It is easy to see that a point $p\in Q$, which is non-singular on $Q_{l-1}$, is singular on $Q$ if and only if for some $\lambda_1,\dots,\lambda_{l-1}$ the quadric $$ q_l-\lambda_1q_1-\dots-\lambda_{l-1}q_{l-1}=0 $$ is singular at that point. Since the singular set of a quadric of rank $r$ in ${\mathbb P}^{N-1}$ has dimension $N-1-r$, we conclude that the dimension of the set $$ \mathop{\rm Sing}Q\cap(Q_{l-1}\setminus\mathop{\rm Sing}Q_{l-1}) $$ does not exceed $N-1-\mathop{\rm rk}q_{[1,l]}+(l-1)$, whence it follows that the codimension of that set with respect to $Q$ is at least $\mathop{\rm rk}q_{[1,l]}-2l+1\geqslant e$. Q.E.D. for Proposition 4.2.
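The final codimension count can be written out explicitly: since $\mathop{\rm dim}Q=N-1-l$, we get $$ \mathop{\rm codim}\bigl(\mathop{\rm Sing}Q\cap(Q_{l-1}\setminus\mathop{\rm Sing}Q_{l-1})\subset Q\bigr)\geqslant (N-1-l)-\bigl(N-1-\mathop{\rm rk}q_{[1,l]}+(l-1)\bigr)=\mathop{\rm rk}q_{[1,l]}-2l+1\geqslant e, $$ where the last inequality is the assumption $\mathop{\rm rk}q_{[1,l]}\geqslant 2l+e-1$.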
{\bf 4.5. Linear subspaces and projections.} Now let us consider the questions that are naturally close to Proposition 4.2 and its proof. These questions are of key importance in the proof of Theorem 3.3 (which will be given in \S 5). Since in Theorem 3.3 the multi-quadratic singularity is of type $2^k$, starting from this moment we consider $k$ quadratic forms $q_1$, \dots, $q_k$ in $N$ variables (that is, on ${\mathbb P}^{N-1}$), and the tuple of them is denoted by the symbol $q_{[1,k]}$. The symbol $Q$, as above, stands for the complete intersection of these $k$ quadrics $\{q_i=0\}$ in ${\mathbb P}^{N-1}$.
{\bf Proposition 4.3.} {\it Assume that for some $b\geqslant 0$ the inequality $$ \mathop{\rm rk} q_{[1,k]}\geqslant 2(1+b)k+3 $$ holds. Then for every point $p\in Q\setminus \mathop{\rm Sing} Q$ there is a linear space $\Pi\subset {\mathbb P}^{N-1}$ of dimension $b$, such that $p\in\Pi\subset Q$, and moreover,} $\Pi\cap\mathop{\rm Sing} Q=\emptyset$.
{\bf Proof} contains the (obvious) construction of such linear subspaces. We argue by induction on $b$. If $b=0$, then $\Pi$ is the point $p$ itself and there is nothing to prove. Assume that $b\geqslant 1$ and for $b-1$ the claim of the Proposition is true.
Consider the linear subspace $T=T_pQ$ of codimension $k$ in
${\mathbb P}^{N-1}$. Obviously, every linear space in ${\mathbb P}^{N-1}$ that contains the point $p$ and is contained in $Q$, is contained in $T$, too. Furthermore, $Q\cap T$ is defined by the quadratic forms $q_1|_T,\dots,q_k|_T$. Since $\mathop{\rm rk}
q_{[1,k]}|_T\geqslant\mathop{\rm rk} q_{[1,k]}-2k$, the inequality $$
\mathop{\rm rk} q_{[1,k]}|_T\geqslant 2bk+3 $$
holds, where every quadric $\{q_i|_T=0\}$, $i=1,\dots,k$, is by construction a cone with the vertex at $p$. Therefore, $Q\cap T$ is a cone with the vertex at the point $p$. Let $P\subset T$ be a hyperplane in $T$ that does not contain the point $p$. Then the cone $Q\cap T$ is the cone with the base $Q\cap P$, where $Q\cap P$ is a complete intersection of the quadrics
$\{q_i|_P=0\}$, where, obviously, $$
\mathop{\rm rk} q_{[1,k]}|_P=\mathop{\rm rk} q_{[1,k]}|_T\geqslant 2(1+(b-1))k+3. $$ By the induction hypothesis, there is a linear subspace $\Pi^{\sharp}\subset P$ of dimension $(b-1)$, such that $\Pi^{\sharp}\subset Q\cap P$ and $\Pi^{\sharp}\cap\mathop{\rm Sing}(Q\cap P)=\emptyset$.
Furthermore, the set of singular points $\mathop{\rm Sing}(Q\cap T)$ is a cone with the vertex $p$, the base of which is $\mathop{\rm Sing}(Q\cap P)$, so that for the subspace $\Pi=\langle p,\Pi^{\sharp}\rangle$, which is a cone with the vertex $p$ and the base $\Pi^{\sharp}$, we have $\Pi\cap\mathop{\rm Sing}(Q\cap T)=\{p\}$. Since $T\cap\mathop{\rm Sing}Q\subset\mathop{\rm Sing}(Q\cap T)$ and $p\not\in\mathop{\rm Sing}Q$, we get $\Pi\cap\mathop{\rm Sing}Q=\emptyset$, which completes the proof of the proposition.
{\bf Proposition 4.4.} {\it Let $b\geqslant \beta\geqslant 0$ be some integers. Assume that the inequality $$ \mathop{\rm rk} q_{[1,k]}\geqslant 2k(b+\beta+1)+2\beta+3 $$ holds. Then for every linear subspace $P\subset{\mathbb P}^{N-1}$ of codimension $\beta$ and a general linear subspace $\Pi\subset Q$, $\Pi\cap\mathop{\rm Sing}Q=\emptyset$, of dimension $b$ the intersection $P\cap\Pi$ has codimension $\beta$ in $\Pi$.}
{\bf Proof.} Again we argue by induction on $b,\beta$; the case $\beta=0$ is trivial: only the equality $\Pi\cap\mathop{\rm Sing} Q=\emptyset$ for a general subspace $\Pi\subset Q$ of dimension $b\geqslant 0$ is needed, and it is true by Proposition 4.3.
Let us show our claim under the assumption that it is true for $\beta-1$.
First of all, note that $$
\mathop{\rm rk} q_{[1,k]}|_P\geqslant 2k(b+\beta+1)+3>2k+3, $$ so that by Proposition 4.2 the intersection $Q\cap P$ is an irreducible reduced complete intersection of type $2^k$ in $P$; in particular, a point of general position $p\in Q\cap P$ is non-singular. This means that $$ T_p(Q\cap P)=T_pQ\cap P $$ is of codimension $k$ in $P$, so that $T_pQ$ and $P$ are in general position. The property of being in general position is an open property; therefore, for a point of general position $p\in Q$ (in particular, $p\not\in P$) the linear subspaces $T_pQ$ and $P$ are in general position and their intersection $T_pQ\cap P$ is of codimension $k$ in $P$ and of codimension $\beta$ in $T_pQ$.
Consider a general hyperplane $Z$ in $T_pQ$, containing the subspace $T_pQ\cap P$ and not containing the point $p$. We have $$
\mathop{\rm rk} q_{[1,k]}|_Z\geqslant 2k(b+\beta)+2\beta+1=2k(b+(\beta-1)+1)+2(\beta-1)+3, $$ so that by the induction hypothesis for a general linear subspace $\Pi^{\sharp}\subset Q\cap Z$ of dimension $(b-1)$ that does not meet the set $\mathop{\rm Sing}(Q\cap Z)$ the intersection $$ (P\cap T_pQ)\cap\Pi^{\sharp}=P\cap\Pi^{\sharp} $$ is of codimension $\beta-1=\mathop{\rm codim}((P\cap T_pQ)\subset Z)$ with respect to $\Pi^{\sharp}$.
Then the linear space $$ \Pi=\langle p,\Pi^{\sharp}\rangle\subset T_pQ $$ of dimension $b$ is contained in $Q$, does not meet the set $\mathop{\rm Sing}Q$ (see the proof of Proposition 4.3) and, finally, the subspace $$ P\cap\Pi=P\cap T_pQ\cap\Pi=P\cap T_pQ\cap Z\cap\Pi=P\cap\Pi^{\sharp} $$ is of codimension $\beta$ with respect to $\Pi$. Q.E.D. for the proposition.
{\bf Corollary 4.3.} {\it In the assumptions of Proposition 4.4, where $\beta\geqslant k$, let $Y\subset Q$ be an irreducible subvariety of codimension $\beta-k$. Then the restriction onto $Y$ of the projection $$ \mathop{\rm pr}\nolimits_{\Pi}\colon {\mathbb P}^{N-1}\dashrightarrow {\mathbb P}^{N-b-2} $$ from a general subspace $\Pi\subset Q$ of dimension $b$ is dominant.}
{\bf Proof.} Let $p\in Y$ be a non-singular point. We apply Proposition 4.4 to the subspace $P=T_pY\subset{\mathbb P^{N-1}}$
of codimension $\beta$. A general subspace $\Pi\subset Q$ of dimension $b\geqslant\beta$ does not contain the point $p$ and is in general position with $P$, so that $\mathop{\rm pr}_{\Pi}|_P$
is regular in a neighborhood of the point $p$ and its differential at the point $p$ is an epimorphism. Therefore, $\mathop{\rm pr}_{\Pi}|_Y$ is regular at the point $p$ and its differential at that point is an epimorphism. Q.E.D. for the corollary.
Note an important particular case.
{\bf Corollary 4.4.} {\it Assume that $b\geqslant k$ and the inequality $$ \mathop{\rm rk} q_{[1,k]}\geqslant 2k(b+k+2)+3 $$ holds. Then the restriction onto $Q$ of the projection $\mathop{\rm pr}_{\Pi}$ from a general subspace $\Pi\subset Q$ of dimension $b$ is dominant and its general fibre is a linear subspace of dimension} $b+1-k$.
{\bf Proof.} That it is dominant follows from the previous corollary, so that the dimension of a general fibre is $b+1-k$. Furthermore, $\mathop{\rm pr}_{\Pi}$ fibres ${\mathbb P}^{N-1}$ (more precisely, ${\mathbb P}^{N-1}\setminus\Pi$) into linear subspaces $\Pi^{\sharp}\supset\Pi$ of dimension $b+1$. The centre $\Pi$ of the projection is a hyperplane in $\Pi^{\sharp}$. Since
$\Pi\subset Q$, the quadric $\{q_i|_{\Pi^{\sharp}}=0\}$ is the union of two hyperplanes, one of which is $\Pi$. Now the claim of the corollary is obvious.
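Note that the dominance above is Corollary 4.3 applied with $Y=Q$ and $\beta=k$: the rank hypothesis of Proposition 4.4 then reads $$ \mathop{\rm rk} q_{[1,k]}\geqslant 2k(b+k+1)+2k+3=2k(b+k+2)+3, $$ which is exactly the assumption of the present corollary, and the dimension of a general fibre equals $\mathop{\rm dim}Q-(N-b-2)=(N-1-k)-(N-b-2)=b+1-k$.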
Let $\Pi\subset Q$ be a linear subspace of dimension $b\geqslant k$, not meeting the set $\mathop{\rm Sing}Q$, and $\sigma\colon\widetilde{Q}\to Q$ and $\sigma_{\mathbb P}\colon\widetilde{{\mathbb P}^{N-1}}\to{\mathbb P}^{N-1}$ the blow ups of $\Pi$ on $Q$ and ${\mathbb P}^{N-1}$, respectively, so that we can identify $\widetilde{Q}$ with the strict transform of $Q$ on $\widetilde{{\mathbb P}^{N-1}}$. By the symbols $E_Q$ and $E_{\mathbb P}$ we denote the exceptional divisors of these blow ups; we consider $E_Q$ as a subvariety in $E_{\mathbb P}$. Let $\varphi\colon\widetilde{Q}\to{\mathbb P}^{N-b-2}$ and $\varphi_{\mathbb P}\colon\widetilde{{\mathbb P}^{N-1}}
\to{\mathbb P}^{N-b-2}$ be the regularizations of the rational maps $\mathop{\rm pr}_{\Pi}|_Q$ and $\mathop{\rm pr}_{\Pi}$, respectively. We have the natural identification $E_{\mathbb P}=\Pi\times{\mathbb P}^{N-b-2}$, where the map $$
\varphi_{\mathbb P}|_{E_{\mathbb P}}\colon E_{\mathbb P}\to{\mathbb P}^{N-b-2} $$ is the projection onto the second factor. In the assumptions of Corollary 4.4 the morphism $\varphi$ is surjective and for a point of general position $p\in{\mathbb P}^{N-b-2}$ the fibre $\varphi^{-1}(p)$ is a linear subspace of dimension $b+1-k$ in $\varphi^{-1}_{\mathbb P}(p)\cong{\mathbb P}^{b+1}$, which is not contained entirely in the hyperplane $$
\varphi^{-1}_{\mathbb P}(p)\cap E_{\mathbb P}=\left(\varphi_{\mathbb P}|_{E_{\mathbb P}}\right)^{-1}(p), $$ which identifies naturally with $\Pi$, and for that reason $\varphi^{-1}(p)\cap E_{\mathbb P}$ identifies naturally with a subspace of dimension $b-k$ in $\Pi$ (and a hyperplane in $\varphi^{-1}(p)$). However, $$
\varphi^{-1}(p)\cap E_{\mathbb P}=\varphi^{-1}(p)\cap E_Q=(\varphi|_{E_Q})^{-1}(p), $$ so that arguing by dimensions, we conclude that the restriction
$\varphi|_{E_Q}$ is surjective.
{\bf Proposition 4.5.} {\it In the assumptions of Corollary 4.4, let $Y\subset\Pi$ be an irreducible closed subset, and assume that $$ b\geqslant k+\mathop{\rm codim} (Y\subset\Pi). $$
Then the restriction $\varphi|_{\sigma^{-1}(Y)}$ is surjective, so that for a point of general position $p\in{\mathbb P}^{N-b-2}$ the intersection $\varphi^{-1}(p)\cap \sigma^{-1}(Y)$ is non-empty and each of its components is of codimension $\mathop{\rm codim} (Y\subset\Pi)$ in the projective space} $\varphi^{-1}(p)\cap E_{\mathbb P}$.
{\bf Proof.} Obviously, $$ \sigma^{-1}(Y)=\sigma^{-1}_{\mathbb P}(Y)\cap\widetilde{Q}= \sigma^{-1}_{\mathbb P}(Y)\cap E_Q. $$ Since $\varphi^{-1}(p)\subset\widetilde{Q}$, the equality $$ \varphi^{-1}(p)\cap\sigma^{-1}(Y)=\varphi^{-1}(p)\cap\sigma^{-1}_{\mathbb P}(Y) $$ holds, but $\sigma^{-1}_{\mathbb P}(Y)=Y\times{\mathbb P}^{N-b-2}$ in terms of the direct decomposition of the exceptional divisor $E_{\mathbb P}$. Therefore, identifying the fibre of the projection
$\varphi_{\mathbb P}|_{E_{\mathbb P}}$ with the projective space $\Pi$, we get that the intersection $\varphi^{-1}(p)\cap\sigma^{-1}(Y)$ identifies naturally with the intersection of $Y$ and the linear subspace $\varphi^{-1}(p)\cap E_{\mathbb P}$ of dimension $b-k$ in $\Pi$. By our assumption this intersection is non-empty, so that the morphism
$\varphi|_{\sigma^{-1}(Y)}$ is surjective. Q.E.D. for the proposition.
\section{The special hyperplane section}
In this section we prove Theorem 3.3.
{\bf 5.1. Start of the proof.} We use the notations of Subsection 1.7 and the assumptions of Theorem 3.3. Recall that $$ I_X=[2k+3,k+c_X-1]\cap {\mathbb Z} $$ is the set of admissible dimensions for the working triple $(X,D,o)$. Consider a general subspace $P\ni o$ in ${\mathbb P}(X)$ of the minimal admissible dimension $2k+3$. Since $a(E_{X\cap P})=2$ and $\nu(D)\leqslant\frac32n(D)<2n(D)$, we conclude that the pair $$
\left((X\cap P)^+,\frac{1}{n(D)} D|^+_{X\cap P}\right) $$ is not log canonical, but canonical outside the exceptional divisor $E_{X\cap P}$. By the inequality $\nu(D)<2n(D)$ we can apply the connectedness principle to this pair: \begin{equation}\label{16.01.23.1}
\mathop{\rm LCS}\left((X\cap P)^+,\frac{1}{n(D)}D|^+_{X\cap P}\right) \end{equation} is a proper connected closed subset of the exceptional divisor $E_{X\cap P}$. There are the following options:
$(1)_P$ this subset contains a divisor,
$(2)_P$ some irreducible component of maximal dimension $B(P)\subset E_{X\cap P}$ in this set has a positive dimension and codimension $\geqslant 2$ in $E_{X\cap P}$,
$(3)_P$ this subset is a point.
{\bf Remark 5.1.} In the case $(1)_P$ the divisor in the subset (\ref{16.01.23.1}) is unique and is a hyperplane section of the variety $E_{X\cap P}\subset{\mathbb E}_{X\cap P}$, since
$D|^+_{X\cap P}$ has multiplicity $>n(D)$ along this subvariety (since it is the centre of some non log canonical singularity), whereas the restriction $D^+|_{E_{X\cap P}}$ is cut out on $E_{X\cap P}$ by a hypersurface of degree $\nu(D)<2n(D)$.
Since $P\ni o$ is a subspace of general position, we go back to the original variety $X$ and get that the pair
$(X^+,\frac{1}{n(D)}D|^+)$ is not log canonical, and moreover, for the centre $B\subset E_X$ of some non log canonical singularity of that pair one of the three options takes place:
(1) $B$ is a hyperplane section of $E_X\subset{\mathbb E}_X$,
(2) $\mathop{\rm codim}(B\subset E_X)\in\{2,\dots,k+1\}$,
(3) $B$ is a linear subspace of codimension $2k+2$ in ${\mathbb E}_X$, which is contained in $E_X$.
{\bf Proposition 5.1.} {\it The option (1) does not take place.}
{\bf Proof.} Assume the converse: $B$ is a hyperplane section of $E_X$. Let $R\subset X$, $R\ni o$ be the uniquely determined hyperplane section, such that $R^+\cap E_X=B$ (in other words, ${\mathbb P}(R)^+\cap{\mathbb E}_X$ is the hyperplane in ${\mathbb E}_X$ that cuts out $B$ on $E_X$). Since $\mathop{\rm mult}_BD^+>n(D)$, we get that for the effective divisor $D_R=(D\circ R)$ on $R$ the inequality $$ \nu(D_R)\geqslant\nu(D)+\mathop{\rm mult}\nolimits_BD^+>2n(D)=2n(D_R) $$ holds, which is impossible by Theorem 1.4. Q.E.D. for the proposition.
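The strict inequality in the display above can be seen as follows: by the assumptions of Theorem 3.3 we have $\nu(D)>n(D)$, and $\mathop{\rm mult}_BD^+>n(D)$ since $B$ is the centre of a non log canonical singularity of the pair $(X^+,\frac{1}{n(D)}D^+)$, so that $$ \nu(D)+\mathop{\rm mult}\nolimits_BD^+>n(D)+n(D)=2n(D). $$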
{\bf Proposition 5.2.} {\it The option (3) does not take place.}
{\bf Proof.} Since $\mathop{\rm codim}(B\subset E_X)=k+2$, this is impossible by the Lefschetz theorem (in order to apply the Lefschetz theorem, it is sufficient to have the inequality $\mathop{\rm codim}(\mathop{\rm Sing}E_X\subset E_X)\geqslant 2k+6$, for which by Proposition 4.2 it is sufficient to have the inequality $\mathop{\rm rk}(o\in X)\geqslant 4k+5$; we have a much stronger condition for the rank of the singularity). Q.E.D. for the proposition.
Therefore, the option (2) takes place. By construction (or arguing by dimension), $B\not\subset\mathop{\rm Sing}E_X$. Recall that there is a non log canonical singularity of the pair $(X^+,\frac{1}{n(D)}D^+)$, the centre of which is $B$.
Let $p\in B$ be a point of general position; in particular, $p\not\in\mathop{\rm Sing}E_X$ and the more so $p\not\in\mathop{\rm Sing} X^+$. Applying inversion of adjunction word for word in the same way as in \cite[Chapter 7, Proposition 2.3]{Pukh13a} (that is, restricting $D^+$ onto a general non-singular surface, containing the point $p$), we get the alternative: either $\mathop{\rm mult}_BD^+>2n(D)$, or on the blow up $$ \varphi_p\colon X^{(p)}\to X^+ $$ of the point $p$ with the exceptional divisor $E(p)\subset X^{(p)}$, $E(p)\cong{\mathbb P}^{N(X)-1}$, there is a hyperplane $\Theta(p)\subset E(p)$ in $E(p)$, satisfying the inequality \begin{equation}\label{22.09.22.1} \mathop{\rm mult}\nolimits_BD^++\mathop{\rm mult}\nolimits_{\Theta(p)}D^{(p)}>2n(D), \end{equation} where $D^{(p)}$ is the strict transform of the divisor $D^+$ on $X^{(p)}$, and moreover, the hyperplane $\Theta(p)$ is uniquely determined by the pair $(X^+,\frac{1}{n(D)}D^+)$ and varies algebraically with the point $p\in B$.
The case when the inequality $\mathop{\rm mult}\nolimits_BD^+>2n(D)$ holds is excluded (with simplifications) by the arguments excluding the option $(2)_{\Theta}$ given below; see Subsection 5.3, Remark 5.2.
There are two options for the hyperplane $\Theta(p)$:
$(1)_{\Theta}$ $\Theta(p)\neq {\mathbb P}(T_pE_X)$ (where we identify $E(p)$ with the projectivization of the tangent space $T_pX^+$), so that $\Theta(p)$ intersects ${\mathbb P}(T_pE_X)$ by some hyperplane $\Theta_E(p)$,
$(2)_{\Theta}$ the hyperplanes $\Theta(p)$ and ${\mathbb P}(T_pE_X)$ in $E(p)$ are equal.
Below (see Subsection 5.3, Remark 5.2) we show that the option $(2)_{\Theta}$ does not take place: it implies that $E_X\subset D^+$, which is impossible; the same arguments exclude the inequality $\mathop{\rm mult}\nolimits_BD^+>2n(D)$, too.
Therefore, we may assume that the option $(1)_{\Theta}$ takes place.
{\bf 5.2. The existence of the special hyperplane section.} Adding the upper index $(p)$ means the strict transform on $X^{(p)}$: we used this principle for the divisor $D$ above and will use it for other subvarieties on $X^+$. Our aim is to prove the following claim.
{\bf Theorem 5.1.} {\it There is a hyperplane section $\Lambda$ of the exceptional divisor $E_X\subset{\mathbb E}_X$, containing $B$, satisfying the inequality $$ \mathop{\rm mult}\nolimits_{\Lambda}D^+>\frac{2n(D)-\nu(D)}{k+1}. $$ Moreover, for a point of general position $p\in B$ the following equality holds:} $$ \Lambda^{(p)}\cap E(p)=\Theta_E(p). $$
{\bf Proof.} Let $L\subset E_X$, $L\ni p$ be a line in the projective space ${\mathbb E}_X$, such that $L\cap\mathop{\rm Sing}E_X=\emptyset$ and $$ L^{(p)}\cap E(p)\in\Theta_E(p). $$
{\bf Lemma 5.1.} {\it The line $L$ is contained in $D^+$.}
{\bf Proof.} Assume the converse. Then $D^+|_L$ is an effective divisor on $L$ of degree $\nu(D)\leqslant\frac32n(D)<2n(D)$. At the same time, the divisor $D^+|_L$ contains the point $p$ with multiplicity $>2n(D)$ due to the inequality (\ref{22.09.22.1}). The contradiction proves the lemma. Q.E.D.
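Let us indicate, for the reader's convenience, where the degree count comes from (a sketch; we assume, as for $D^+_R$ in Subsection 5.5 below, that $D^+\sim n(D)H-\nu(D)E_X$ on $X^+$, where $H$ is the class of a hyperplane section of $X$): since the line $L\subset E_X$ is contracted to the point $o$ by the blow up $X^+\to X$, we have $(H\cdot L)=0$, while the standard relation ${\cal O}_{X^+}(E_X)|_{E_X}={\cal O}_{E_X}(-1)$ for the blow up of a point gives $(E_X\cdot L)=-1$, so that $$ \mathop{\rm deg}(D^+|_L)=(D^+\cdot L)=n(D)(H\cdot L)-\nu(D)(E_X\cdot L)=\nu(D). $$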
{\bf Proposition 5.3.} {\it The following inequality holds:} $$ \mathop{\rm mult}\nolimits_LD^+>\frac{2n(D)-\nu(D)}{k+1}. $$
{\bf Proof} is given in \S 6.
Let us go back to the proof of Theorem 5.1.
We will construct the set $\Lambda\subset E_X$ explicitly, and then prove that it is a hyperplane section. The exceptional divisor $E_X$ is a complete intersection of $k$ quadrics in ${\mathbb E}_X$: $$ E_X=\{q_1=\dots=q_k=0\}, $$ using the notations of Subsection 4.5. Let $U_B\subset B$ be a non-empty Zariski open subset, where $$ U_B\cap\mathop{\rm Sing} E_X=\emptyset $$ and for every point $p\in U_B$ the option $(1)_{\Theta}$ takes place. By the assumption on the rank of the multi-quadratic point $o\in X$ for $p\in U_{B}$ the set $E_X\cap T_pE_X$ (where $T_pE_X\subset{\mathbb E}_X$ is the embedded tangent space, that is, a linear subspace of codimension $k$ in ${\mathbb E}_X$) is irreducible and reduced, and moreover, every hyperplane section of that set is also irreducible and reduced. Indeed, by Proposition 4.2, in order to have these properties it is sufficient to have the inequality $\mathop{\rm rk}q_{[1,k]}\geqslant 4k+5$, because by Remark 1.4 it implies the inequality $$
\mathop{\rm rk}q_{[1,k]}|_{T_pE_X}\geqslant 2k+5 $$ and we can apply Proposition 4.2. Obviously, $E_X\cap T_pE_X$ is a cone with the vertex $p$, consisting of all lines $L\subset E_X$, $L\ni p$. The singular set of that cone is of codimension $\geqslant 6$ (Proposition 4.2), and so for a general line $L\ni p$ $$ L\cap\mathop{\rm Sing}(E_X\cap T_p E_X)=\{p\}, $$ so that $L\cap\mathop{\rm Sing}E_X=\emptyset$ and the same is true for every hyperplane section of the cone $E_X\cap T_pE_X$, containing the point $p$, since its singular set is of codimension $\geqslant 4$ (Remark 1.4).
Let ${\cal L}(p)$ be the union of all lines $L\subset E_X,L\ni p$, such that $$ L^{(p)}\cap E(p)\in\Theta_E(p). $$ Obviously, ${\cal L}(p)$ is the section of the cone $E_X\cap T_pE_X$ by some hyperplane, containing the point $p$ (this hyperplane corresponds to the hyperplane $\Theta_E(p$)). As we have shown above, ${\cal L}(p)$ is an irreducible closed subset of codimension $k+1$ in $E_X$, and $$ \mathop{\rm mult}\nolimits_{{\cal L}(p)} D^+>\frac{2n(D)-\nu(D)}{k+1}. $$ Set $$ \Lambda=\overline{\mathop{\bigcup}\limits_{p\in U_B}{\cal L}(p)} $$ (the overline means the closure). By what was said above, the inequality $$ \mathop{\rm mult}\nolimits_{\Lambda}D^+>\frac{2n(D)-\nu(D)}{k+1} $$ holds.
{\bf Theorem 5.2.} {\it The subset $\Lambda\subset E_X$ is a hyperplane section of the variety} $E_X\subset{\mathbb E}_X$.
We will prove Theorem 5.2 in two steps: first, we will show that $\Lambda$ is a prime divisor on $E_X$, and then, that this divisor is a hyperplane section. By construction, the set $\Lambda$ is irreducible.
{\bf 5.3. The set $\Lambda$ is a divisor.} By our assumption about the rank of the point $o\in X$ for $b=k+1$ the inequality \begin{equation}\label{01.10.22.1} \mathop{\rm rk} q_{[1,k]}\geqslant 2k(b+2k+2)+2(2k+1)+3 \end{equation} holds. By Corollary 4.3, for a general subspace $\Pi\subset E_X$ of dimension $b$ the restriction onto $B$ of the projection $$ \mathop{\rm pr}\nolimits_{\Pi}\colon {\mathbb P}^{N(X)-1}\dashrightarrow {\mathbb P}^{N(X)-b-2} $$ from the subspace $\Pi$ is dominant. Let $s\in {\mathbb P}^{N(X)-b-2}$ be a point of general position. By the symbol $\langle\Pi,s\rangle$ denote the closure $$ \overline{\mathop{\rm pr}\nolimits_{\Pi}^{-1}(s)}\subset {\mathbb P}^{N(X)-1} $$ (this is a $(\mathop{\rm dim}\Pi+1)$-dimensional subspace) and set $$ E_X(\Pi,s)=E_X\cap \langle\Pi,s\rangle. $$ For the blow ups $\sigma\colon\widetilde{E}_X\to E_X$ and $\sigma_{\mathbb P}\colon\widetilde{{\mathbb P}^{N(X)-1}}\to{\mathbb P}^{N(X)-1}$ of the subspace $\Pi$ on $E_X$ and ${\mathbb P}^{N(X)-1}$, respectively, let $\varphi\colon\widetilde{E}_X\to{\mathbb P}^{N(X)-b-2}$ and
$\varphi_{\mathbb P}\colon\widetilde{{\mathbb P}^{N(X)-1}}\to{\mathbb P}^{N(X)-b-2}$ be the regularizations of the projections $\mathop{\rm pr}_{\Pi}|_{E_X}$ and $\mathop{\rm pr}_{\Pi}$, respectively. Obviously, the fibre $\varphi^{-1}_{\mathbb P}(s)$ identifies naturally with $\langle\Pi,s\rangle$, and the fibre $\varphi^{-1}(s)$ with $E_X(\Pi,s)$. The fibre of the surjective morphism
$\varphi|_{\sigma^{-1}(B)}$ over the point $s$ we denote by the symbol $B(s)$; this is a possibly reducible closed subset in $\varphi^{-1}_{\mathbb P}(s)$, each irreducible component of which is of codimension $c_B=\mathop{\rm codim} (B\subset E_X)$ and is not contained entirely in the hyperplane $\Pi$ (with respect to the identification $\varphi^{-1}_{\mathbb P}(s)=\langle\Pi,s\rangle$). Write down $B(s)$ as a union of irreducible components: $$ B(s)=\mathop{\bigcup}\limits_{i\in I} B_i(s), $$ and let $p\in B_i(s)$ be a point of general position on one of them; in particular, $p\not\in\Pi$, so that the projection $\mathop{\rm pr}\nolimits_{\Pi}$ is regular at that point and $p\not\in B_j(s)$ for $j\neq i$. We will consider the point $p$ as a point of general position on $B$, which was introduced in Subsection 5.1, and use the notations for the blow up $\varphi_p$ of this point and for objects linked to this blow up. Note that for $b=k+1$ we have the inequality $$ \mathop{\rm dim}B(s)=\mathop{\rm dim}B_i(s) \geqslant 1. $$ The set of lines $L\subset E_X(\Pi,s)$, $L\ni p$, such that $L^{(p)}\cap E(p)\in\Theta(p)$, forms a hyperplane in $E_X(\Pi,s)$, which we denote by the symbol $\Lambda(\Pi,s,p)$. By construction, $\Lambda(\Pi,s,p)\subset\Lambda$.
Since any non-trivial algebraic family of hyperplanes in a projective space sweeps out that space and for a general point $s$ we have $E_X(\Pi,s)\not\subset\Lambda$ (otherwise $\Lambda=E_X$, which is impossible), we conclude that the hyperplane $\Lambda(\Pi,s,p)$ does not depend on the choice of a point of general position $p\in B_i(s)$, so that $$ \Lambda(\Pi,s,p)=\Lambda(\Pi,s,B_i(s)) $$ is a hyperplane in $\varphi^{-1}(s)=E_X(\Pi,s)$, containing the component $B_i(s)$. Therefore, for a general point $s$ the intersection $\Lambda\cap E_X(\Pi,s)$ contains a divisor in $E_X(\Pi,s)$, whence we get that $\Lambda\subset E_X$ is a (prime) divisor on $E_X$, as we claimed. This divisor is cut out on $E_X$ by a hypersurface of degree $d_{\Lambda}$ in ${\mathbb E}_X$. It remains to show that $d_{\Lambda}=1$.
{\bf Remark 5.2.} We promised above that the option $(2)_{\Theta}$ does not take place. Indeed, if it does, then every line $L\ni p$ in $E_X(\Pi,s)$ is contained in $\Lambda$, so that $E_X(\Pi,s)\subset\Lambda$ and for that reason $E_X\subset\Lambda$, which is absurd. In a similar way, if $\mathop{\rm mult}_BD^+>\nu(D)$, then every line in $E_X(\Pi,s)$, meeting $B$, is contained in $\Lambda$, so that $E_X\subset\Lambda$, which is impossible. Therefore, the inequality $$ \mathop{\rm mult}\nolimits_BD^+\leqslant\nu(D) $$ holds.
{\bf 5.4. The divisor $\Lambda$ is a hyperplane section.} Let us consider the intersection $\Lambda(\Pi,s)=\Lambda\cap E_X(\Pi,s)$ for a general point $s$ in more detail. This is a possibly reducible divisor, each component of which has multiplicity 1, containing at least one hyperplane. If in this divisor there are components of degree $\geqslant 2$, then the union of the hyperplanes contained in $\Lambda(\Pi,s)$ gives a proper closed subset $\Lambda_1(\Pi,s)$ of $\Lambda(\Pi,s)$, which is also a divisor. Then $$ \overline{\bigcup_s\Lambda_1(\Pi,s)} $$ (the union is taken over a non-empty open subset in ${\mathbb P}^{N(X)-b-2}$) is a proper closed subset in $\Lambda$, which is of codimension 1 in $E_X$, which is impossible as $\Lambda$ is a prime divisor. We conclude that $\Lambda(\Pi,s)$ is a union of precisely $d_{\Lambda}$ distinct hyperplanes in $E_X(\Pi,s)$.
Assume that $d_{\Lambda}\geqslant 2$. By our assumptions about the rank $\mathop{\rm rk}(o\in X)$ the inequality (\ref{01.10.22.1}) holds for $b=3k$: $$ \mathop{\rm rk}q_{[1,k]}\geqslant 10k^2+8k+5. $$ Again we apply Corollary 4.3, now to a general subspace $\Pi^*=E_X(\Pi,s)$ of dimension $b^*=b+1-k\geqslant 2k+1$. The subspace $\Pi^*$ does not meet the set $\mathop{\rm Sing} E_X$ and the restriction of the projection from $\Pi^*$ $$ \mathop{\rm pr}\nolimits_{\Pi^*}\colon{\mathbb P}^{N(X)-1}\dashrightarrow {\mathbb P}^{N(X)-b^*-2} $$ onto $B$ is dominant. Let $s^*\in{\mathbb P}^{N(X)-b^*-2}$ be a point of general position. We use the notations introduced above and write $E_X(\Pi^*,s^*)$. For the blow ups of the subspace $\Pi^*$ we use the symbols $\sigma_{\Pi^*}$ and $\sigma_{{\mathbb P},\Pi^*}$, respectively, and for the regularized projections the symbols $\varphi_{\Pi^*}$ and $\varphi_{{\mathbb P},\Pi^*}$. The symbol $\langle\Pi^*,s^*\rangle$ has the same meaning as above. Set $$ E^*=\sigma_{\Pi^*}^{-1}(\Pi^*)\quad\mbox{and}\quad E^*_{\mathbb P}= \sigma_{{\mathbb P},\Pi^*}^{-1}(\Pi^*) $$ to be the exceptional divisors of the blow up of $\Pi^*$ on $E_X$
and ${\mathbb P}^{N(X)-1}$. By the arguments immediately before the statement of Proposition 4.5, the map $\varphi_{\Pi^*}|_{E^*}$ is surjective, and by Proposition 4.5 (which applies since $b^*\geqslant k+1$) the intersection $$ \varphi_{\Pi^*}^{-1}(s^*)\cap \sigma_{\Pi^*}^{-1}(\Lambda\cap \Pi^*) $$ is non-empty and each of its irreducible components is of codimension 1 in the projective space $\varphi_{\Pi^*}^{-1}(s^*)\cap E^*_{\mathbb P}$.
By what was shown above, $\Lambda\cap{\Pi^*}$ is a union of $d_{\Lambda}$ distinct hyperplanes $\Lambda^*_i$, $i\in I$. In a similar way, $$ \Lambda\cap E_X(\Pi^*,s^*)=\sigma^{-1}_{\Pi^*}(\Lambda)\cap\varphi^{-1}_{\Pi^*}(s^*) $$ is the union of $d_{\Lambda}$ distinct hyperplanes in $\varphi^{-1}_{\Pi^*}(s^*)$, none of which coincides with the hyperplane $\varphi^{-1}_{\Pi^*}(s^*)\cap E_{\mathbb P}^*$. Note that the strict transform of the divisor $\Lambda$ with respect to the blow up $\sigma_{\Pi^*}$ is just its full inverse image $\sigma^{-1}_{\Pi^*}(\Lambda)$, since $\Lambda\not\subset\Pi^*$. Furthermore, $$ \sigma^{-1}_{\Pi^*}(\Lambda)\cap E^*=\sigma^{-1}_{\Pi^*}(\Lambda\cap \Pi^*)=\bigcup_{i\in I}\sigma^{-1}_{\Pi^*}(\Lambda^*_i), $$ and every intersection $\varphi^{-1}_{\Pi^*}(s^*)\cap\sigma^{-1}_{\Pi^*}(\Lambda^*_i)$ is a hyperplane in $\varphi^{-1}_{\Pi^*}(s^*)\cap E^*_{\mathbb P}$. It follows that each irreducible component of the set $\Lambda\cap E_X(\Pi^*,s^*)$ intersects the hyperplane $\varphi^{-1}_{\Pi^*}(s^*)\cap E^*_{\mathbb P}$ in one of the hyperplanes $\sigma^{-1}_{\Pi^*}(\Lambda^*_i)\cap\varphi^{-1}_{\Pi^*}(s^*)$, $i\in I$. Thus one can write down $$ \Lambda\cap E_X(\Pi^*,s^*)=\bigcup_{i\in I}\Lambda_i(\Pi^*,s^*), $$ where $\Lambda_i(\Pi^*,s^*)$ is a hyperplane in $E_X(\Pi^*,s^*)$, satisfying the equality $$ \Lambda_i(\Pi^*,s^*)\cap E^*=\sigma^{-1}_{\Pi^*}(\Lambda^*_i)\cap\varphi^{-1}_{\Pi^*}(s^*). $$ In other words, the choice of a component of the intersection $\Lambda\cap\Pi^*$ determines uniquely the component of the intersection of $\Lambda$ with $\langle\Pi^*,s^*\rangle=\varphi^{-1}_{\Pi^*}(s^*)=E_X(\Pi^*,s^*)$ for a general point $s^*$. Now set $$ \Lambda_i=\sigma_{\Pi^*}\left(\overline{\mathop{\bigcup}\limits_{s^*}\Lambda_i(\Pi^*,s^*)}\right), $$ where the union is taken over a non-empty Zariski open subset of the projective space ${\mathbb P}^{N(X)-b^*-2}$. This is a prime divisor on $E_X$, and moreover, $\Lambda_i\subset\Lambda$ and for that reason $\Lambda_i=\Lambda$, whence we conclude that all hyperplanes $\Lambda_i(\Pi^*,s^*)$ are the same, which contradicts the assumption that $d_{\Lambda}\geqslant 2$.
Thus $d_{\Lambda}=1$ and $\Lambda$ is a hyperplane section of $E_X\subset {\mathbb E}_X$. Q.E.D. for Theorem 5.2 and therefore, for Theorem 5.1.
{\bf 5.5. The construction of a new working triple.} Now we can complete the proof of Theorem 3.3 and construct the new working triple $(R,D_R,o)$. Let $R\ni o$ be the section of $X$ by the hyperplane ${\mathbb P}(R)=\langle R\rangle$, such that $$ R^+\cap {\mathbb E}_X=R^+\cap E_X=\Lambda $$ (in other words, the hyperplane ${\mathbb P}(R)^+ \cap {\mathbb E}_X$ cuts out $\Lambda$ on $E_X$). Since $R$ is not a component of the effective divisor $D_X$, the scheme-theoretic intersection $(R\circ D_X)$ is well defined, and we treat this intersection as an effective divisor on $R$. Set $D_R=(R\circ D_X)$ in that sense.
On $R^+\subset X^+$ with the exceptional divisor $$ E_R=(R^+\cap E_X)=\Lambda\subset {\mathbb E}_R= {\mathbb P}(R)^+\cap{\mathbb E}_X $$ we have the equivalence $$ D^+_R\sim n(D_R) H_R-\nu(D_R) E_R, $$ where $H_R$ is the class of a hyperplane section of $R$ and $$ \nu(D_R)\geqslant\nu(D_X)+\mathop{\rm mult}\nolimits_\Lambda D^+_X>\nu(D_X)+\frac{2n(D_X)-\nu(D_X)}{k+1}. $$ Again $[R,o]$ is a marked complete intersection, of level $(k,c_R)$, where $c_R=c_X-2$, $(R,D_R,o)$ is a working triple, and the inequality $$ 2n(D_R)-\nu(D_R)<\left(1-\frac{1}{k+1}\right)(2n(D_X)-\nu(D_X)) $$ holds (since $n(D_R)=n(D_X)$).
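For the reader's convenience we spell out the elementary computation behind the last inequality: since $n(D_R)=n(D_X)$ and $\nu(D_R)>\nu(D_X)+\frac{2n(D_X)-\nu(D_X)}{k+1}$, we get $$ 2n(D_R)-\nu(D_R)<2n(D_X)-\nu(D_X)-\frac{2n(D_X)-\nu(D_X)}{k+1}=\left(1-\frac{1}{k+1}\right)(2n(D_X)-\nu(D_X)). $$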
The procedure of constructing the special hyperplane section is complete. Q.E.D. for Theorem 3.3.
\section{Multiplicity of a line}
In this section we prove Proposition 5.3.
{\bf 6.1. Blowing up a point and a curve.} Since we have completed our study of working triples, the symbol $X$ is now free and will mean an arbitrary non-singular quasi-projective variety of dimension $\geqslant 3$. Let $C\subset X$ be a non-singular projective curve, $p\in C$ a point. Furthermore, let $$ \sigma_C\colon X(C)\to X $$ be the blow up of the curve $C$ with the exceptional divisor $E_C$ and $\sigma^{-1}_C(p)\cong {\mathbb P}^{\dim X-2}$ the fibre over the point $p$. Let $$ \sigma\colon X(C,\sigma^{-1}_C(p))\to X(C) $$ be the blow up of that fibre with the exceptional divisor $E$ and $E^p_C$ the strict transform of $E_C$ on that blow up.
On the other hand, consider the blow up $$ \varphi_p\colon X(p)\to X $$ of the point $p$ with the exceptional divisor $E_p$ and denote by the symbol $C(p)$ the strict transform of the curve $C$ on $X(p)$. Finally, let $$ \varphi\colon X(p,C(p))\to X(p) $$ be the blow up of the curve $C(p)$, $E_{C(p)}$ the exceptional divisor of that blow up and $E^C_p$ the strict transform of $E_p$.
{\bf Proposition 6.1.} {\it The identity map $\mathop{\rm id}_X$ extends to an isomorphism $$ X(C,\sigma^{-1}_C(p))\cong X(p,C(p)), $$ identifying the subvarieties $E$ and $E^C_p$ and the subvarieties $E^p_C$ and $E_{C(p)}$.}
{\bf Proof.} This is a well known fact, which can be checked by elementary computations in local parameters. Q.E.D. for the proposition.
Taking into account the identifications above, we will use the notations $E^C_p$ and $E_{C(p)}$, and forget about $E$ and $E^p_C$. The variety $X(C,\sigma^{-1}_C(p))$ will be denoted by the symbol $\widetilde{X}$. Let $D$ be an effective divisor on $X$. The symbols $D^C$ and $D^p$ stand for its strict transforms on $X(C)$ and $X(p)$, respectively, and the symbol $\widetilde{D}$ for its strict transform on $\widetilde{X}$. Set $$ \mu=\mathop{\rm mult}\nolimits_CD\quad\mbox{and}\quad\mu_p=\mathop{\rm mult}\nolimits_pD, $$ where, of course, $\mu_p\geqslant\mu$.
{\bf Lemma 6.1.} {\it The following equality holds:} $$ \mathop{\rm mult}\nolimits_{\sigma^{-1}_C(p)}D^C=\mu_p-\mu. $$
{\bf Proof.} (This is a well known fact, and we give a proof for the convenience of the reader, and also because a similar argument is used below.) We have the sequence of obvious equalities: $$ \sigma^*_CD=D^C+\mu E_C, $$ so that $$ \sigma^*\sigma^*_CD=\widetilde{D}+\mu E_{C(p)}+(\mu+\mathop{\rm mult}\nolimits_{\sigma^{-1}_C(p)}D^C)E^C_p. $$ Considering the second sequence of blow ups, we get $$ \varphi^*_pD=D^p+\mu_pE_p $$ and, respectively, $$ \varphi^*\varphi^*_pD=\widetilde{D}+\mu E_{C(p)}+\mu_pE^C_p. $$ Comparing the two presentations of the same effective divisor, we get the claim of the lemma.
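To make the comparison explicit: by Proposition 6.1 both expressions represent the same divisor on $\widetilde{X}$, written as the strict transform $\widetilde{D}$ plus exceptional components, and the coefficient of a prime divisor is uniquely determined; comparing the coefficients of $E^C_p$ gives $$ \mu+\mathop{\rm mult}\nolimits_{\sigma^{-1}_C(p)}D^C=\mu_p, $$ which is precisely the claim of Lemma 6.1.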
{\bf 6.2. Blowing up two points and a curve.} In the notations of the previous subsection, let us consider the point $$ q=C(p)\cap E_p. $$ Set $\mu_q=\mathop{\rm mult}_qD^p$. Obviously, $$ \mu_q\geqslant\mathop{\rm mult}\nolimits_{C(p)}D^p=\mathop{\rm mult}\nolimits_CD=\mu. $$ Let $$ \varphi_q\colon X(p,q)\to X(p) $$ be the blow up of the point $q$ with the exceptional divisor $E_q$ and $C(p,q)$ the strict transform of the curve $C(p)$. Finally, let $$ \varphi_{\sharp}\colon X^{\sharp}\to X(p,q) $$ be the blow up of the curve $C(p,q)$ with the exceptional divisor $E_{C(p,q)}$ and $E^{\sharp}_q$ the strict transform of $E_q$. Note that the curve $C(p)$ intersects $E_p$ transversally and therefore $C(p,q)$ does not meet the strict transform $E^q_p$ of the divisor $E_p\subset X(p)$ on $X(p,q)$.
{\bf Proposition 6.2.} {\it The restriction of the divisor $D^C$ onto the exceptional divisor $E_C$ contains the fibre $\sigma^{-1}_C(p)$ with multiplicity at least} $\mu_p+\mu_q-2\mu$.
{\bf Proof.} Obviously, on $X^{\sharp}$ we have the equality $$ \varphi^*_{\sharp}\varphi^*_q\varphi^*_p D=D^{\sharp}+ (\mu_q+\mu_p)E^{\sharp}_q+\mu_p E^q_p+\mu E_{C(p,q)}, $$ where $D^{\sharp}$ is the strict transform of $D$ on $X^{\sharp}$. On the other hand, using the constructions of Subsection 6.1, we see that $X^{\sharp}$ can be obtained as the blow up of the curve $C(p)$ on $X(p)$ with the subsequent blowing up of the fibre of the exceptional divisor $E_{C(p)}$ over the point $q$ or, applying the construction of Subsection 6.1 twice, as the blow up of the curve $C$ on $X$ with the subsequent blowing up of the fibre $\sigma^{-1}_C(p)$ and then the blowing up of the subvariety $$ E^C_p\cap E^p_C. $$ In the last presentation the three prime exceptional divisors are $$ E^{\sharp}_C=E_{C(p,q)},\quad E^{\sharp}_p\quad\mbox{and}\quad E^{\sharp}_q. $$ We denote the blow up $X^{\sharp}\to\widetilde{X}$ of the subvariety $E^C_p\cap E^p_C$, mentioned above, by the symbol $\sigma_{\sharp}$. Thus we obtain the following commutative diagram of birational morphisms: $$ \begin{array}{ccccc} X(C) & \stackrel{\sigma}{\leftarrow} & \widetilde{X} & \stackrel{\sigma_{\sharp}}{\leftarrow} & X^{\sharp}\\ \downarrow & & \downarrow & & \downarrow\\ X &\stackrel{\varphi_p}{\leftarrow} & X(p) &\stackrel{\varphi_q}{\leftarrow} & X(p,q),\\ \end{array} $$ where the vertical arrows (from the left to the right) are $\sigma_C$, $\varphi$ and $\varphi_{\sharp}$, respectively. We have the equality $$ \sigma^*_{\sharp}\sigma^*\sigma^*_CD=\varphi^*_{ \sharp}\varphi^*_q\varphi^*_pD. $$ This pull back can be written down as $$ D^{\sharp}+\mu E^{\sharp}_C+\mu_pE^{\sharp}_p+(\mu_p+\mu_q)E^{\sharp}_q, $$ since $E_{C(p,q)}=E^{\sharp}_C$ and $E^{\sharp}_q$ is the exceptional divisor of the blow up $\sigma_{\sharp}$. On the other hand, $D^C=\sigma^{*}_C D-\mu E_C$ and, besides, $$ \sigma^*_{\sharp}\sigma^*E_C=E^{\sharp}_C+E^{\sharp}_p+2E^{\sharp}_q, $$ so that the exceptional divisor $E^{\sharp}_q$ comes into the pull back of the divisor $D^C$ on $X^{\sharp}$ with multiplicity $(\mu_p+\mu_q-2\mu)$. However, the blow ups $\sigma$ and $\sigma_{\sharp}$ do not change the divisor $E_C$, as they blow up subvarieties of codimension 1 on the variety: $$
(\sigma\circ\sigma_{\sharp})|_{E^{\sharp}_C}\colon E^{\sharp}_C\to E_C $$ is an isomorphism. Since the restriction of $D^{\sharp}$ onto $E^{\sharp}_C$ is an effective divisor, it follows that the restriction of the divisor $D^C$ onto $E_C$ contains the fibre $\sigma^{-1}_C(p)$ (which is precisely the restriction of $E^{\sharp}_q$ onto $E^{\sharp}_C$ in terms of the isomorphism between $E^{\sharp}_C$ and $E_C$, discussed above) with multiplicity $\geqslant\mu_p+\mu_q-2\mu$. The proof of Proposition 6.2 is complete.
{\bf 6.3. The multiplicity of an infinitely near line.} Let us come back to the proof of Proposition 5.3. We will obtain its claim from a more general fact. Let $o\in{\cal X}$ be a germ of a multi-quadratic singularity of type $2^k$, where ${\cal X}\subset{\cal Y}$, ${\cal Y}$ is non-singular, $\mathop{\rm codim}({\cal X}\subset{\cal Y})=k$ and the inequality $$ \mathop{\rm rk}(o\in{\cal X})\geqslant 2k+3 $$ holds, so that $\mathop{\rm codim}(\mathop{\rm Sing{\cal X}\subset{\cal X}})\geqslant 4$ and ${\cal X}$ is factorial. Let $\sigma_{\cal Y}\colon{\cal Y}^+\to{\cal Y}$ be the blow up of the point $o$ with the exceptional divisor $E_{\cal Y}$, ${\cal X}^+\subset{\cal Y}^+$ the strict transform, so that $$
\sigma=\sigma_{\cal Y}|_{{\cal X}^+}\colon{\cal X}^+\to{\cal X} $$ is the blow up of the point $o$ on ${\cal X}$ with the exceptional divisor $E_{\cal X}$, which is a complete intersection of $k$ quadrics in $E_{\cal Y}\cong{\mathbb P}^{\rm dim{\cal Y}-1}$. By Proposition 4.2 $$ \mathop{\rm codim}(\mathop{\rm Sing} E_{\cal X}\subset E_{\cal X})\geqslant 4. $$ Let $L\subset E_{\cal X}$ be a line, where $L\cap \mathop{\rm Sing} E_{\cal X}=\emptyset$ and $p\in L$ a point. Let us blow up this point on ${\cal Y}^+$ and ${\cal X}^+$, respectively: $$ \sigma_{p,{\cal Y}}\colon {\cal Y}_p\to {\cal Y}^+\quad\mbox{and}\quad \sigma_p\colon {\cal X}_p\to {\cal X}^+ $$ are these blow ups with the exceptional divisors $E_{p,{\cal Y}}$ and $E_p$. Set $$ q=L^{(p)}\cap E_p, $$ where $L^{(p)}\subset {\cal X}_p$ is the strict transform.
Let $D_{\cal X}$ be an effective divisor on ${\cal X}$. For its strict transform on ${\cal X}^+$ we have the equality $$ D^+_{\cal X}=\sigma^* D_{\cal X}-\nu E_{\cal X} $$ for some $\nu\in{\mathbb Z}_+$. Furthermore, we denote the strict transform of $D^+_{\cal X}$ on ${\cal X}_p$ by the symbol $D^{(p)}_{\cal X}$ and set $$ \mu_p=\mathop{\rm mult}\nolimits_pD^+_{\cal X}\quad\mbox{and} \quad\mu_q=\mathop{\rm mult}\nolimits_qD^{(p)}_{\cal X}. $$ Set also $\mu=\mathop{\rm mult}_LD^+_{\cal X}$; obviously, $\mu\leqslant\mu_p$.
{\bf Theorem 6.1.} {\it The following inequality holds:} $$ \mu\geqslant\frac{1}{k+1}(\mu_p+\mu_q-\nu). $$
{\bf Proof.} Let $P\subset E_{\cal Y}$ be a general linear subspace of dimension $(k+2)$, containing the line $L$.
{\bf Lemma 6.2.} {\it The surface $S=P\cap{\cal X}^+=P\cap E_{\cal X}$ is non-singular.}
{\bf Proof.} We argue by induction on $\mathop{\rm dim}E_{\cal X}\geqslant 2$. If $\mathop{\rm dim}E_{\cal X}=2$, then there is nothing to prove. Let $\mathop{\rm dim}E_{\cal X}\geqslant 3$. The hyperplanes in $E_{\cal Y}$, tangent to $E_{\cal X}$ at at least one point of the line $L$, form a $k$-dimensional family. The hyperplanes, containing the line $L$, form a family (a linear subspace) of codimension 2 in the dual projective space for $E_{\cal Y}$, that is, of dimension $k+\mathop{\rm dim}E_{\cal X}-2\geqslant k+1$, so that for a general hyperplane $R_{\cal Y}\supset L$ in $E_{\cal Y}$ we have: $E_{\cal X}\cap R_{\cal Y}$ is non-singular along $L$ (and, of course, for the codimension of the singular set we have the equality $\mathop{\rm codim}(\mathop{\rm Sing}(E_{\cal X}\cap R_{\cal Y})\subset(E_{\cal X}\cap R_{\cal Y}))=\mathop{\rm codim}(\mathop{\rm Sing}E_{\cal X}\subset E_{\cal X}$)). Applying the induction hypothesis, we complete the proof of the lemma. Q.E.D.
Let ${\cal Z}\subset{\cal Y}$, ${\cal Z}\ni o$, be a general subvariety of dimension $(k+3)$, non-singular at the point $o$, such that ${\cal Z}^+\cap E_{\cal Y}=P$, and $$ {\cal X}_P={\cal X}\cap{\cal Z} $$ (the notation ${\cal X}_P$ is chosen for convenience: ${\cal X}_P$ is determined by ${\cal Z}$, not by $P$). Then ${\cal X}_P$ is a three-dimensional variety with the isolated multi-quadratic singularity $o\in{\cal X}_P$, and the blow up of the point $o$ resolves this singularity: the exceptional divisor ${\cal X}^+_P\cap E_{\cal Y}$ is the non-singular surface $S$.
The restriction of the divisor $D_{\cal X}$ onto ${\cal X}_P$ is denoted by the symbol $D_P$, and its strict transform on ${\cal X}^+_P$ by the symbol $D^+_P$.
{\bf Lemma 6.3.} {\it The normal sheaf ${\cal N}_{L/{\cal X}^+_P}\cong{\cal O}_L(-\alpha)\oplus{\cal O}_L(-\beta)$, where $\alpha+\beta=k$ and $\alpha\geqslant\beta\geqslant 1$.}
{\bf Proof.} Since $P\cong{\mathbb P}^{k+2}$, by the adjunction formula $K_S=(k-3)H_{S}$, where $H_S$ is the class of a hyperplane section of $S\subset P$, whence it follows that $(L^2)_{S}=1-k$. Furthermore, the surface $S$ is the exceptional divisor of the blow up of the point $o$ on ${\cal X}_P$ ($S={\cal X}^+_P\cap E_{\cal Y}$), so that ${\cal O}_{{\cal X}^+_P}(S)|_L={\cal O}_L(-1)$ and we have the exact sequence $$
0\to{\cal N}_{L/S}\to{\cal N}_{L/{\cal X}^+_P}\to{\cal N}_{S/{\cal X}^+_P}|_L\to 0 $$ or $$ 0\to{\cal O}_L(1-k)\to{\cal N}_{L/{\cal X}^+_P}\to{\cal O}(-1)\to 0. $$ From this the claim of the lemma follows at once. Q.E.D.
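For completeness, here is one way to see the last step: write ${\cal N}_{L/{\cal X}^+_P}\cong{\cal O}_L(-\alpha)\oplus{\cal O}_L(-\beta)$ with $\alpha\geqslant\beta$; the exact sequence gives $\alpha+\beta=-\mathop{\rm deg}{\cal N}_{L/{\cal X}^+_P}=k$. If we had $\beta\leqslant 0$, then the composition ${\cal O}_L(-\beta)\hookrightarrow{\cal N}_{L/{\cal X}^+_P}\to{\cal O}_L(-1)$ would vanish (there are no non-zero maps ${\cal O}_L(-\beta)\to{\cal O}_L(-1)$ for $-\beta\geqslant 0$), so that the direct summand ${\cal O}_L(-\beta)$ would be contained in the kernel ${\cal O}_L(1-k)$, which is impossible, as $-\beta\geqslant 0>1-k$ for $k\geqslant 2$. Therefore $\alpha\geqslant\beta\geqslant 1$.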
Let $\sigma_L\colon{\cal X}_{P,L}\to{\cal X}^+_P$ be the blow up of the line $L$, and $E_{P,L}\subset{\cal X}_{P,L}$ the exceptional divisor. The lemma implies that $E_{P,L}$ is a ruled surface of type ${\mathbb F}_{\alpha-\beta}$ and its Picard group is ${\mathbb Z}s\oplus{\mathbb Z}f$, where $f$ is the class of a fibre, $s$ the class of the exceptional section, $s^2=-(\alpha-\beta)$. Again from the lemma shown above it follows that $$
(E^3_{P,L})_{{\cal X}_{P,L}}=(E_{P,L}|^2_{E_{P,L}})= -\mathop{\rm deg}{\cal N}_{L/{\cal X}^+_P}=k, $$ so that $$
-E_{P,L}|_{E_{P,L}}=s+\frac12(k+\alpha-\beta)f. $$ Obviously (since the subspace $P$ is general), $$ \mathop{\rm mult}\nolimits_pD^+_P=\mu_p. $$ Let ${\cal X}^{(p)}_P\subset{\cal X}_p$ be the strict transform of
${\cal X}^+_P$ on ${\cal X}_p$. By construction, $q\in{\cal X}^{(p)}_P$. Setting $D^{(p)}_P=D^{(p)}_{\cal X}|_{{\cal X}^{(p)}_P}$, we obtain $$ \mu_q=\mathop{\rm mult}\nolimits_qD^{(p)}_P. $$ Finally, let $D^{(L)}_P$ be the strict transform of the divisor $D^+_P$ on ${\cal X}_{P,L}$. Obviously, $$ D^{(L)}_P=\sigma^*_LD^+_P-\mu E_{P,L}, $$
so that, writing the pull back on ${\cal X}_{P,L}$ of the restriction $E_{\cal X}|_{{\cal X}^+_P}$ for simplicity as the restriction $E_{\cal X}|_{{\cal X}_{P,L}}$, we have $$
(-\nu E_{\cal X}|_{{\cal X}_{P,L}}-\mu E_{P,L})|_{E_{P,L}}\sim $$ $$ \sim\nu f+\mu(s+\frac12(k+\alpha-\beta)f)=\mu s+(\nu+\frac12\mu (k+\alpha-\beta))f. $$ By Proposition 6.2, this effective divisor contains the fibre $\sigma^{-1}_L(p)$ with multiplicity at least $\mu_p+\mu_q-2\mu$, whence we get the inequality $$ \nu+\frac12\mu(k+\alpha-\beta)\geqslant\mu_p+\mu_q-2\mu, $$ which after easy transformations gives us that $$ \mu\geqslant\frac{2(\mu_p+\mu_q)-2\nu}{k+(\alpha-\beta)+4}. $$ The denominator of the right hand side is maximal when $\alpha=k-1$ and $\beta=1$ and so $$ \mu\geqslant\frac{2(\mu_p+\mu_q)-2\nu}{2k+2}=\frac{(\mu_p+\mu_q)-\nu}{k+1}. $$ Q.E.D. for the theorem.
Proposition 5.3 follows immediately from the theorem that we have just shown, taking into account the construction of the line $L$ and the inequality (\ref{22.09.22.1}).
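In more detail: we apply Theorem 6.1 to the point $o\in X$, the divisor $D$ and the line $L$ constructed in the proof of Theorem 5.1, with a general point $p\in B$ on it; the assumptions of the theorem hold by the choice of $L$ (in particular, $L\cap\mathop{\rm Sing}E_X=\emptyset$) and by the much stronger assumption on the rank of the point $o\in X$. Here $\nu=\nu(D)$ and $\mu_p=\mathop{\rm mult}_pD^+\geqslant\mathop{\rm mult}_BD^+$, while the point $q$ lies in $\Theta_E(p)\subset\Theta(p)$ by the construction of $L$, so that $\mu_q\geqslant\mathop{\rm mult}_{\Theta(p)}D^{(p)}$. Therefore, by the inequality (\ref{22.09.22.1}), $$ \mathop{\rm mult}\nolimits_LD^+\geqslant\frac{\mu_p+\mu_q-\nu(D)}{k+1}>\frac{2n(D)-\nu(D)}{k+1}, $$ which is the claim of Proposition 5.3.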
\section{Hypertangent divisors}
In this section we prove Theorems 1.2, 1.3 and 1.4.
{\bf 7.1. Non-singular points. Tangent divisors.} Let us start the proof of Theorem 1.2. Obviously, it is sufficient to consider the case when the subspace $P$ is of maximal admissible codimension $k+\varepsilon(k)-1$ in ${\mathbb P}^{M+k}$. Theorem 1.1 and Remark 1.4 imply that the inequality $$ \mathop{\rm codim}(\mathop{\rm Sing}(F\cap P)\subset(F\cap P))\geqslant 2k+2 $$ holds. In particular, $F\cap P$ is a factorial complete intersection of codimension $k$ in $P\cong{\mathbb P}^{M-\varepsilon(k)+1}$. Moreover, by the Lefschetz theorem, applied to the section of the variety $F\cap P$ by a general linear subspace of dimension $3k+1$ in $P$ (this section is a non-singular complete intersection of codimension $k$ in ${\mathbb P}^{3k+1}$), we get that the section of $F\cap P$ by an arbitrary linear subspace of codimension $a\leqslant k$ is irreducible and reduced, since for the numerical Chow group we have $$ A^a(F\cap P)={\mathbb Z}H^a_{F\cap P}, $$ where $H_{F\cap P}$ is the class of a hyperplane section.
Assume that Theorem 1.2 is not true and $\mathop{\rm mult}\nolimits_oY>2n(Y)$. We will argue precisely as in \cite[\S 2]{Pukh01}, see also \cite[Chapter 3, Section 2.1]{Pukh13a}. Let
$T_1,\dots,T_k$ be the tangent hyperplane sections of $F\cap P$ at the point $o$ (in the notations of Subsection 1.4 they are defined by the linear forms $f_{i,1}|_P$, $i=1,\dots,k$). By what was said above, for each $i=1,\dots,k$ the intersection $T_1\cap\dots\cap T_i$ is of codimension $i$ in $F\cap P$, coincides with the scheme-theoretic intersection $(T_1\circ\dots\circ T_i)$ and its multiplicity at the point $o$ equals precisely $2^i$, since the quadratic forms $$
f_{1,2}|_{T_o(F\cap P)},\dots,f_{k,2}|_{T_o(F\cap P)} $$ satisfy the regularity condition. Now we argue as in \cite[\S 2]{Pukh01}. We set $Y_1=Y$ and see that $Y_1\neq T_1$, because $\mathop{\rm mult}_oT_1=2n(T_1)=2$. We consider the cycle $(Y_1\circ T_1)$ of the scheme-theoretic intersection and take for $Y_2$ the component of that cycle that has the maximal value of the ratio $\mathop{\rm mult}_o/\mathop{\rm deg}$. Assume that the subvariety $Y_i$ of codimension $i$ in $F\cap P$, satisfying the inequality $$ \mathop{\rm mult}\nolimits_oY_i>\frac{2^i}{\mathop{\rm deg}F}\mathop{\rm deg}Y_i, $$ is already constructed, and $i\leqslant k-1$. Then $$ Y_i\neq T_1\cap\dots\cap T_i, $$ however by construction $Y_i$ is contained in the divisors $T_1,\dots,T_{i-1}$, so that $Y_i\not\subset T_i$ and the cycle of scheme-theoretic intersection $(Y_i\circ T_i)$ of codimension $i+1$ is well defined. For $Y_{i+1}$ we take the component of this cycle with the maximal value of the ratio $\mathop{\rm mult}_o/\mathop{\rm deg}$. Completing this process, we obtain an irreducible subvariety $Y_{k+1}\subset(F\cap P)$ of codimension $k+1$, satisfying the inequality $$ \frac{\mathop{\rm mult}_o}{\mathop{\rm deg}} Y_{k+1}>\frac{2^{k+1}}{\mathop{\rm deg}F}. $$
{\bf 7.2. Non-singular points. Hypertangent divisors.} We continue the proof of Theorem 1.2. In the notations of Subsection 1.4 for each $j=2,\dots, d_k-1$ construct the hypertangent linear systems $$
\Lambda_j=\left|\sum^k_{i=1}\sum^{\min\{j,d_i-1\}}_{\alpha=1}
f_{i,[1,\alpha]}s_{i,j-\alpha}\right|_{F\cap P}, $$ where $f_{i,[1,\alpha]}=f_{i,1}+\dots +f_{i,\alpha}$ is the left segment of the polynomial $f_i$ of length $\alpha$, the polynomials $s_{i,j-\alpha}$ are homogeneous polynomials of degree $j-\alpha$, running through the spaces ${\cal P}_{j-\alpha,M+k}$ independently of each other and the restriction onto $F\cap P$ means the restriction onto the affine part of that variety in ${\mathbb A}^{M+k}_{z_*}$ followed by the closure.
Let $h_a$, where $a\geqslant k+1$, be the $a$-th polynomial in the sequence ${\cal S}$. Then $h_a=f_{i,j}|_{{\mathbb P}(T_oF)}$ for some $i$ and $j\geqslant 3$. Set ${\cal H}_a=\Lambda_{j-1}$. In this way we obtain a sequence of linear systems ${\cal H}_{k+1}$, ${\cal H}_{k+2}$,\dots, ${\cal H}_{M}$, where the system $\Lambda_j$ occurs, in the notations of \cite[\S 2]{Pukh01}, $$
w^+_j=\sharp\{i, 1\leqslant i\leqslant k\,|\, j\leqslant d_i-1\} $$ times. By the symbol ${\cal H}[-m]$ we denote the space $$ \prod^{M-m}_{a=k+1}{\cal H}_a $$ of all tuples $(D_{k+1},\dots,D_{M-m})$ of divisors, where $D_a\in{\cal H}_a$. For $a\in\{k+1,\dots,M\}$ set $$ \beta_a=\frac{j+1}{j}, $$ if ${\cal H}_a=\Lambda_j$. The number $\beta_a$ is called the {\it slope} of the divisor $D_a$. It is easy to see that \begin{equation}\label{24.10.22.1} \prod^M_{a=k+1}\beta_a=\frac{d_1\dots d_k}{2^k}=\frac{\mathop{\rm deg} F}{2^k}. \end{equation} Set $m_*=k+\varepsilon(k)+3$. Let $$ (D_{k+1},\dots,D_{M-m_*})\in{\cal H}[-m_*] $$ be a general tuple. The technique of hypertangent divisors, applied in precisely the same way as in \cite[\S 2]{Pukh01} or \cite[Chapter 3, Subsection 2.2]{Pukh13a}, see also \cite[Proposition 2.1]{Pukh2022a}, gives the following claim.
{\bf Proposition 7.1.} {\it There is a sequence of irreducible subvarieties $$ Y_{k+1},Y_{k+2},\dots,Y_{M-m_*}, $$ where $Y_{k+1}$ has been constructed above, such that $\mathop{\rm codim}(Y_i\subset(F\cap P))=i$, the subvariety $Y_i$ is not contained in the support of the divisor $D_{i+1}$ for $i\leqslant M-m_*-1$, the subvariety $Y_{i+1}$ is an irreducible component of the effective cycle $(Y_i\circ D_{i+1})$ and the following inequality holds:} $$ \frac{\mathop{\rm mult}_o}{\mathop{\rm deg}}Y_{i+1}\geqslant \beta_{i+1}\frac{\mathop{\rm mult}_o}{\mathop{\rm deg}}Y_i. $$
There is no need to give a {\bf proof} of that claim because it is identical to the arguments mentioned above. Note only that the key point in the construction of the sequence of subvarieties $Y_i$ is the fact that $Y_i$ is not contained in the support of a general divisor $D_{i+1}\in{\cal H}_{i+1}$, and this fact follows from the regularity condition (R1). Since $\mathop{\rm dim}(F\cap P)=M+1-k-\varepsilon(k)$, the subvariety $Y^*=Y_{M-m_*}$ is of dimension 4 and satisfies the inequality $$ \frac{\mathop{\rm mult}_o}{\mathop{\rm deg}}Y^*> \frac{\displaystyle\frac{2^{k+1}}{\mathop{\rm deg}F}\cdot\frac{\mathop{\rm deg} F}{2^k}}{\displaystyle\frac32\cdot\prod\limits^M_{a=M-m_*+1}\beta_a}= \frac43\frac{1}{\prod\limits^M_{a=M-m_*+1}\beta_a}. $$ (The number $\frac32$ appears in the denominator because the hypertangent divisor $D_{k+1}$ is skipped in the procedure of intersection, in the same way and for the same reason as in \cite[\S 2]{Pukh01}, and its slope is $\frac32$.) Now the inequality \begin{equation}\label{14.10.22.1} \frac43\geqslant\prod^M_{a=M-m_*+1}\beta_a, \end{equation} shown below in Proposition 7.2, completes the proof of Theorem 1.2.
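Let us make the final step explicit: if the inequality (\ref{14.10.22.1}) holds, then the last estimate gives $$ \frac{\mathop{\rm mult}_o}{\mathop{\rm deg}}Y^*>\frac43\cdot\frac{1}{\prod\limits^M_{a=M-m_*+1}\beta_a}\geqslant 1, $$ which is impossible, since the multiplicity of an irreducible projective variety at a point never exceeds its degree. This is the contradiction required to complete the proof.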
{\bf Proposition 7.2.} {\it Assume that for $k=3,4,5$ the dimension $M$ is, respectively, at least $96$, $160$, $215$, and for $k\geqslant 6$ the inequality $M\geqslant 8k^2+2k$ holds. Then the inequality (\ref{14.10.22.1}) is true.}
{\bf Proof.} Using the obvious fact that the function $\frac{t+1}{t}$ is decreasing, it is easy to see that the right hand side of the inequality (\ref{14.10.22.1}) with $k$ and $M$ fixed attains the maximum when the degrees $d_1,\dots,d_k$ are equal or ``almost equal'' in the following sense: let $M\equiv e\mathop{\rm mod}k$ with $e\in\{0,1,\dots,k-1\}$, then the ``almost equality'' means that $$ d_1=\dots=d_{k-e}=\frac{M-e}{k}+1,\quad d_{k-e+1}=\dots=d_k=\frac{M-e}{k}+2. $$ For $k\in\{3,\dots,9\}$ the claim of the proposition can be checked for each case of almost equal degrees, that is, for each possible value of $e$, manually, computing $\varepsilon(k)$ explicitly. For $k\geqslant 10$ it is easy to see that $\varepsilon(k)\leqslant k-3$, so that $m_*\leqslant 2k$. Therefore (again considering the case of almost equal degrees) the right hand side of (\ref{14.10.22.1}) does not exceed the number $$ \left(\frac{\frac{M}{k}}{\frac{M}{k}-2}\right)^k= \left(\frac{M}{M-2k}\right)^k, $$ from which we get that (\ref{14.10.22.1}) is true if $$ M\geqslant 2k\frac{(1+\frac13)^\frac{1}{k}}{(1+\frac13)^\frac{1}{k}-1}. $$ If in the numerator and denominator we replace $(1+\frac{1}{3})^{\frac{1}{k}}$ by the smaller number $1+\frac{1}{4k}$, the right hand side of the last inequality gets higher. This proves the proposition. Q.E.D.
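For the record, the last replacement yields precisely the bound assumed in the proposition: since $(1+\frac{1}{4k})^k\leqslant e^{1/4}<\frac43$, we have $(1+\frac13)^{\frac1k}\geqslant 1+\frac{1}{4k}$, and since the function $t\mapsto\frac{t}{t-1}$ is decreasing, $$ 2k\frac{(1+\frac13)^{\frac1k}}{(1+\frac13)^{\frac1k}-1}\leqslant 2k\cdot\frac{1+\frac{1}{4k}}{\frac{1}{4k}}=2k(4k+1)=8k^2+2k, $$ so that the assumption $M\geqslant 8k^2+2k$ is indeed sufficient for $k\geqslant 10$.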
Note that for $M\geqslant \rho(k)$ (see the inequality (\ref{14.11.22.1}) in Subsection 0.1) the assumptions of the previous proposition are satisfied. This completes the proof of Theorem 1.2.
{\bf 7.3. Quadratic points.} Let us show Theorem 1.3. Note first of all that if $Y$ is a section of the variety $W$ by a hyperplane that is tangent to $W$ at the point $o$ (that is, the equation of the hyperplane is a linear combination of the forms $f_{1,1},\dots,f_{k,1}$, restricted onto the hyperplane ${\mathbb P}(W)\cong{\mathbb P}^{M+k-1}$), then $$ \mathop{\rm mult}\nolimits_oY=4n(Y)=4, $$ so that the claim of the theorem is optimal. Thus we assume the converse: the inequality $$ \mathop{\rm mult}\nolimits_oY>2n(Y) $$ holds. We argue as in the non-singular case (Subsection 7.1): let $T_1,\dots,T_{k-1}$ be the tangent hyperplane sections, given by $(k-1)$ independent forms taken from the set $\{f_{1,1},\dots,f_{k,1}\}$. Since $\mathop{\rm codim}(\mathop{\rm Sing}F\subset F)\geqslant2k+2$, all scheme-theoretic intersections $(T_1\circ\dots\circ T_i)$, $1\leqslant i\leqslant k-1$, are irreducible, reduced and coincide with the set-theoretic intersection $T_1\cap\dots\cap T_i$, and moreover, by the condition (R2) the equality $$ \mathop{\rm mult}\nolimits_oT_1\cap\dots\cap T_i=2^{i+1} $$ holds. In particular, $\mathop{\rm mult}_oT_1=4n(T_1)=4$, so that $Y\neq T_1$. Arguing as in Subsection 7.1, we construct a sequence of irreducible subvarieties $Y_1=Y,Y_2,\dots,Y_k$, where $\mathop{\rm codim}(Y_i\subset W)=i$ and the inequality $$ \frac{\mathop{\rm mult}_o}{\mathop{\rm deg}}Y_k>\frac{2^{k+1}}{\mathop{\rm deg}F} $$ holds. Now the proof of Theorem 1.3 repeats the arguments of Subsection 7.2, where $m_*$ is replaced by 4. Since $4<m_*$, the inequality (\ref{14.10.22.1}) guarantees the inequality which is obtained from (\ref{14.10.22.1}) when $m_*$ is replaced by 4. This completes the proof of Theorem 1.3.
{\bf 7.4. Multi-quadratic points. Tangent divisors.} We start the proof of Theorem 1.4, the structure of which is similar to the structure of the proof of Theorem 1.2. At first we argue as in Subsection 7.1: it is sufficient to consider a linear subspace $P$ in $T_oF$ of maximal admissible codimension $\varepsilon(k)$. Assume that the prime divisor $Y$ on $F\cap P$ satisfies the inequality $$ \mathop{\rm mult}\nolimits_oY>\frac32\cdot2^kn(Y), $$ or the equivalent inequality $$ \frac{\mathop{\rm mult}_o}{\mathop{\rm deg}}Y>\frac32\cdot\frac{2^k}{\mathop{\rm deg}F}, $$ and consider the second hypertangent linear system (which in this case plays the role of the tangent linear system) $$
\Lambda_2=\left|\sum_{d_i\geqslant 3}s_{i,0}f_{i,2}\right|_{F\cap P}, $$ where $s_{i,0}\in{\mathbb C}$ are constants, independent of each other. Instead of the Lefschetz theorem we use the condition
(R3.1): the system of equations $f_{i,2}|_{F\cap P}=0$, where $d_i\geqslant 3$, defines an irreducible reduced subvariety of codimension $k+k_{\geqslant 3}$ in $P$, and by (R3.2) the multiplicity of that subvariety at the point $o$ is precisely $2^k\cdot(\frac32)^{k_{\geqslant 3}}$. More precisely, for a general tuple $(D_{2,1},\dots,D_{2,k_{\geqslant 3}})$ of divisors in the system $\Lambda_2$ the following claim is true: for each $i=1,\dots,k_{\geqslant 3}$ the cycle $(D_{2,1}\circ\dots\circ D_{2,i}$) of the scheme-theoretic intersection of the divisors $D_{2,1},\dots,D_{2,i}$ is an irreducible reduced subvariety of codimension $i$ in $F\cap P$, the multiplicity of which at the point $o$ is $2^k\cdot(\frac32)^i$. Arguing as in Subsection 7.1, we construct a sequence $Y_1=Y,Y_2,\dots,Y_{k_{\geqslant 3}}$ of irreducible subvarieties of codimension $\mathop{\rm codim}(Y_i\subset(F\cap P))=i$, where $Y_{i+1}$ is an irreducible component of the cycle $(Y_i\circ D_{2,i})$ with the maximal value of the ratio $\mathop{\rm mult}_o/\mathop{\rm deg}$. Therefore, $$ \frac{\mathop{\rm mult}_o}{\mathop{\rm deg}}Y_{k_{\geqslant 3}}> \left(\frac32\right)^{k_{\geqslant 3}}\cdot\frac{2^k}{\mathop{\rm deg}F}. $$ It follows from here that $Y_{k_{\geqslant 3}}\neq D_{2,1}\cap\dots\cap D_{2,k_{\geqslant 3}}$, but since by construction $$ Y_{k_{\geqslant 3}}\subset D_{2,1}\cap\dots\cap D_{2,k_{\geqslant 3}-1}, $$ we conclude that $Y_{k_{\geqslant 3}}\not\subset D_{2,k_{\geqslant 3}}$, so that the effective cycle $(Y_{k_{\geqslant 3}}\circ D_{2,k_{\geqslant 3}})$ of the scheme-theoretic intersection of these varieties is well defined and one of its components $Y_{k_{\geqslant 3}+1}$ satisfies the inequality $$ \frac{\mathop{\rm mult}_o}{\mathop{\rm deg}}Y_{k_{\geqslant 3}+1}> \left(\frac32\right)^{k_{\geqslant 3}+1}\cdot\frac{2^k}{\mathop{\rm deg}F}. $$
{\bf 7.5. Multi-quadratic points. Hypertangent divisors.} Now we argue almost word for word as in Subsection 7.2: construct the hypertangent systems $$
\Lambda_j=\left|\sum^j_{\alpha=2}\sum_{d_i\geqslant
\alpha+1}f_{i,[2,\alpha]}s_{i,j-\alpha}\right|_{F\cap P}, $$
where $j=3,\dots,d_k-1$ and all symbols have the same meaning as in Subsection 7.2. If $h_a$, where $a\geqslant k+k_{\geqslant 3}+1$, is the $a$-th polynomial in the sequence ${\cal S}$, $h_a=f_{i,j}|_{{\mathbb P}(T_oF)}$ for some $i$ and $j\geqslant 4$, then we set ${\cal H}_a=\Lambda_{j-1}$ and obtain the sequence of linear systems $$ {\cal H}_{k+k_{\geqslant 3}+1},\quad {\cal H}_{k+k_{\geqslant 3}+2},\quad\dots,\quad {\cal H}_M. $$ By the symbol ${\cal H}[-m]$ we denote the space $$ \prod^{M-m}_{a=k+k_{\geqslant 3}+1}{\cal H}_a. $$ Instead of the equality (\ref{24.10.22.1}) we get the equality $$ \prod^M_{a=k+k_{\geqslant 3}+1}\beta_a=\frac{\mathop{\rm deg}F}{2^k\left(\frac32\right)^{k_{\geqslant 3}}}. $$ Let $(D_{k+k_{\geqslant 3}+1},\dots,D_{M-m^*})\in{\cal H}[-m^*]$ be a general tuple. Now the technique of hypertangent divisors, applied word for word in the same way as in Subsection 7.2, gives the following claim.
{\bf Proposition 7.3.} {\it There is a sequence of irreducible subvarieties $$ Y_{k_{\geqslant 3}+1},Y_{k_{\geqslant 3}+2},\dots,Y_{M-k-m^*}, $$ where $Y_{k_{\geqslant 3}+1}$ is constructed above, such that $\mathop{\rm codim}(Y_i\subset(F\cap P))=i$, the subvariety $Y_i$ is not contained in the support of the divisor $D_{k+i+1}$ for $i\leqslant M-m^*-1$, the subvariety $Y_{i+1}$ is an irreducible component of the effective cycle $(Y_i\circ D_{k+i+1})$ and the following inequality holds:} $$ \frac{\mathop{\rm mult}_o}{\mathop{\rm deg}}Y_{i+1}\geqslant\beta_{k+i+1}\frac{\mathop{\rm mult}_o}{\mathop{\rm deg}}Y_i. $$
Now since $\mathop{\rm dim}(F\cap P)=M-(k-l)-\varepsilon(k)$, by the definition of the number $m^*$ the last subvariety $Y^*=Y_{M-k-m^*}$ in that sequence is of dimension $\geqslant 4$ and satisfies the inequality $$ \frac{\mathop{\rm mult}_o}{\mathop{\rm deg}}Y^*> \frac{ \displaystyle \left(\frac32\right)^{k_{\geqslant 3}+1}\cdot\frac{2^k}{\mathop{\rm deg}F}\cdot \frac{\mathop{\rm deg}F}{2^k \left(\frac32\right)^{k_{\geqslant 3}}}}{\displaystyle \frac43\prod\limits^M_{a=M-m^*+1}\beta_{k+a}}= \frac98\frac{1}{\displaystyle\prod\limits^M_{a=M-m^*+1}\beta_{k+a}}. $$ (The number $\frac43$ appears in the denominator of the right hand side, because the hypertangent divisor $D_{k+k_{\geqslant 3}+1}$ is skipped in the process of constructing the sequence $Y_*$, see the similar remark above, before the inequality (\ref{14.10.22.1}).) If $m^*=0$, then the product in the denominator is assumed to be equal to 1. Now the inequality \begin{equation}\label{28.10.22.1} \frac98\geqslant\prod^M_{a=M-m^*+1}\beta_{k+a}, \end{equation} shown below in Proposition 7.4, completes the proof of Theorem 1.4.
{\bf Proposition 7.4.} {\it Assume that for $k\in\{3,\dots,7\}$ the number $M$ is at least the number shown in the corresponding column of the table \begin{center}
\begin{tabular}{|c|c|c|c|c|c|} \hline $k$ & $3$ & $4$ & $5$ & $6$ & $7$\\ \hline $M\geqslant$ & $128$ & $204$ & $255$ & $357$ & $477$\\ \hline \end{tabular}, \end{center} and for $k\geqslant 8$ the inequality $M\geqslant 9k^2+k$ holds. Then the inequality (\ref{28.10.22.1}) holds.}
{\bf Proof.} As in the non-singular case (the proof of Proposition 7.2), we see that the right hand side of the inequality (\ref{28.10.22.1}) for $k$ and $M$ fixed attains the maximum when the degrees $d_i$ are equal or ``almost equal''. For $k\in\{3,\dots,7\}$ the claim of the proposition is checked manually. For $k\geqslant 8$ we have $\varepsilon(k)\leqslant k-2$, so that $m^*\leqslant k$. Therefore (considering the case of equal or almost equal degrees) the right hand side of the inequality (\ref{28.10.22.1}) does not exceed the number $$ \left(\frac{M}{M-k}\right)^k, $$ which, in its turn, does not exceed $\frac98$ for $M\geqslant 9k^2+k$, which is easy to check by elementary computations, similar to the proof of Proposition 7.2. Q.E.D.
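Similarly to the proof of Proposition 7.2, the bound in the hypothesis can be recovered explicitly: the inequality $\left(\frac{M}{M-k}\right)^k\leqslant\frac98$ is equivalent to $M\geqslant k\frac{t}{t-1}$ with $t=\left(\frac98\right)^{\frac1k}$; since $(1+\frac{1}{9k})^k\leqslant e^{1/9}<\frac98$, we have $t\geqslant 1+\frac{1}{9k}$, and therefore $$ k\frac{t}{t-1}\leqslant k\cdot\frac{1+\frac{1}{9k}}{\frac{1}{9k}}=k(9k+1)=9k^2+k. $$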
\section{The codimension of the complement}
In this section we show the estimate for the codimension of the complement ${\cal P}\setminus{\cal F}$, given in Theorem 0.1.
{\bf 8.1. Preliminary constructions.} Set $$ \gamma=M-k+5+{M-\rho(k)+2\choose 2}, $$ see Subsection 0.1. We consider $\gamma$ as a function of $M$ with $k\geqslant 3$ fixed, where $M\geqslant\rho(k)$. Let $o\in{\mathbb P}^{M+k}$ be an arbitrary point. The symbol ${\cal P}(o)$ stands for the linear subspace of the space ${\cal P}$, consisting of all tuples $\underline{f}$, vanishing at the point $o$: $\underline{f}(o)=(0,\dots,0)$. Obviously, $\mathop{\rm codim}({\cal P}(o)\subset{\cal P})=k$. Fixing the point $o$, we use the notations of Subsections 1.2-1.4, considering the polynomials $f_i$ as non-homogeneous polynomials in the affine coordinates $z_*$. By the symbols $$ {\cal B}_{MQ1},\,{\cal B}_{MQ2},\,{\cal B}_{R1},\,{\cal B}_{R2},\,{\cal B}_{R3.1},\,{\cal B}_{R3.2} $$ we denote the subsets of the subspace ${\cal P}(o)$, consisting of the tuples $\underline{f}$ that do not satisfy the conditions $$ (MQ1),\,(MQ2),\,(R1),\,(R2),\,(R3.1),\,(R3.2) $$ at the point $o$, respectively. Since the point $o$ varies in ${\mathbb P}^{M+k}$, it is sufficient to show that the codimension of each of the six sets ${\cal B}_*$ in ${\cal P}(o)$ is at least $\gamma+M$.
Furthermore, for an arbitrary tuple $$ \underline{\xi}=(\xi_1,\dots,\xi_k) $$ of linear forms in $z_*$ the symbol ${\cal P}(o,\underline{\xi})$ denotes the affine subspace, consisting of the tuples $\underline{f}$, such that $$ f_{1,1}=\xi_1,\quad\dots,\quad f_{k,1}=\xi_k. $$ By the symbol $\mathop{\rm dim}\underline{\xi}$ denote the dimension $$ \mathop{\rm dim}\langle\xi_1,\dots,\xi_k\rangle, $$ so that ${\cal P}(o)$ is fibred into disjoint subsets $$ {\cal P}^{(i)}(o)=\bigcup_{\mathop{\rm dim}\underline{\xi}=i}{\cal P}(o,\underline{\xi}), $$ where $i=0,1,\dots,k$. Obviously, the equality $$ \mathop{\rm codim}({\cal P}^{(i)}(o)\subset{\cal P}(o))=(k-i)(M+k-i) $$ holds. In particular, ${\cal P}^{(k)}(o)$ consists of the tuples $\underline{f}$, such that the scheme of their common zeros is a non-singular subvariety of codimension $k$ in a neighborhood of the point $o$. Set $$ {\cal B}_{R1}(\underline{\xi})={\cal B}_{R1}\cap{\cal P}(o,\underline{\xi}). $$ For the case of a non-singular point it is sufficient to prove the inequality $$ \mathop{\rm codim}({\cal B}_{R1}(\underline{\xi})\subset{\cal P}(o,\underline{\xi}))\geqslant \gamma+M, $$ where $\mathop{\rm dim}\underline{\xi}=k$.
Furthermore, let ${\cal B}_{MQ1}(\underline{\xi})={\cal B}_{MQ1}\cap{\cal P} (o,{\underline{\xi}})$, where $\mathop{\rm dim}\underline{\xi}=i\leqslant k-1$, be the set of the tuples $\underline{f}$, such that the condition (MQ1) for $l=k-i$ is not satisfied, and ${\cal B}_{MQ2}(\underline{\xi})={\cal B}_{MQ2}\cap{\cal P }(o,\underline{\xi})$, where $\mathop{\rm dim}\underline{\xi}=i\leqslant k-2$, be the set of the tuples $\underline{f}$, such that the condition (MQ2) for $l=k-i$ is not satisfied.
In a similar way, we define the sets ${\cal B}_{R2}(\underline{\xi})$ for $\mathop{\rm dim}\underline{\xi}=k-1$ and ${\cal B}_{R3.1}(\underline{\xi})$, ${\cal B}_{R3.2}(\underline{\xi})$ for $\mathop{\rm dim}\underline{\xi}\leqslant k-2$.
Clearly, it is sufficient to prove that for $\mathop{\rm dim}\underline{\xi}=i$ the codimension of the set ${\cal B}_*(\underline{\xi})$ in ${\cal P}(o,\underline{\xi})$ is at least $$ \gamma+M-(k-i)(M+k-i). $$
In the conditions (R1),(R2) and (R3.2) we have also an arbitrary subspace $\Pi\subset{\mathbb P}(T_oF)$ of the corresponding codimension, and in the condition (R3.1) an arbitrary subspace $P$ in the embedded tangent space $T_oF\subset{\mathbb P}^{M+k}$ of codimension $\varepsilon(k)$, containing the point $o$. For an arbitrary subspace $\Pi\subset{\mathbb P}(T_oF)$ of the corresponding codimension let $$ {\cal B}_{R1}(\underline{\xi},\Pi),\quad {\cal B}_{R2}(\underline{\xi},\Pi),\quad {\cal B}_{R3.2}(\underline{\xi},\Pi) $$ be the set of tuples $\underline{f}\in{\cal P}(o,\underline{\xi})$, such that the respective condition (R1),(R2) and (R3.2) is violated precisely for that subspace $\Pi$. In a similar way we define the subset ${\cal B}_{R3.1}(\underline{\xi},P)$. These definitions are meaningful because the tangent space $T_oF$ is given by the fixed linear forms $\xi_i$ and for that reason is fixed.
Since the subspace $\Pi$ varies in a $(\mathop{\rm dim}\Pi+1)\cdot\mathop{\rm codim}(\Pi\subset{\mathbb P}(T_oF))$-dimensional Grassmannian, the estimate for the codimension of the set ${\cal B}_*(\underline{\xi},\Pi)$ in ${\cal P}(o,\underline{\xi})$ should be stronger than the estimate for the codimension of the set ${\cal B}_*(\underline{\xi})$ by that number. Similarly, $P$ varies in an $\varepsilon(k)(\mathop{\rm dim}T_oF-\varepsilon(k))$-dimensional family, so that the estimate for the codimension of the set ${\cal B}_{R3.1}(\underline{\xi},P)$ should be stronger than the estimate for ${\cal B}_{R3.1}(\underline{\xi})$ by that number.
Now everything is ready to consider each of the six subsets ${\cal B}_*$.
{\bf 8.2. The conditions (MQ1) and (MQ2).} For a non-singular point $o\in F$ these conditions contain no restrictions, so we assume that $\dim\underline{\xi}\leqslant k-1$. It is well known that the closed subset of quadratic forms of rank $\leqslant r\leqslant N-1$ in the space ${\cal P}_{2,N}$ has the codimension $$ {N-r+1\choose 2}. $$ From here it is easy to see that the closed subset of tuples $(q_1,\dots,q_e)=q_{[1,e]}$ of quadratic forms in $N$ variables, defined by the condition $$ \mathop{\rm rk}q_{[1,e]}\leqslant r, $$ is of codimension $$ \geqslant{N-r+1\choose 2}-(e-1) $$ in the space ${\cal P}^{\times e}_{2,N}$. As we noted in Subsection 1.2 (after stating the condition (MQ2)), for $l\geqslant 2$ the condition (MQ2) is stronger than (MQ1), therefore it is sufficient to estimate the codimension of the set ${\cal B}_{MQ2}$ (in the case of quadratic points, when $l=1$, it is easy to check that the codimension of the set ${\cal B}_{MQ1}$ is higher than required). So we assume that $\mathop{\rm dim}\underline{\xi}=k-l\leqslant k-2$. The condition (MQ2)
requires the rank of the tuple of quadratic forms $q_{[1,k]}$, where $q_i=f_{i,2}|_{T_oF}$, to be at least $\rho(k)+2$, see (\ref{14.11.22.1}) in Subsection 0.1. Taking into account the variation of the tuple $\underline{\xi}$, from what was said above it is easy to obtain that the codimension of the set ${\cal B}_{MQ2}\cap{\cal P}^{(k-l)}(o)$ in ${\cal P}(o)$ is at least $$ -k+1+l(M+l)+{M+l-\rho(k)\choose 2}. $$ The minimum of this expression is attained for $l=2$ and it is easy to check that this minimum is precisely $\gamma+M$. Therefore, the codimension of the set ${\cal B}_{MQ2}$ is at least $\gamma$, and the codimension of the set ${\cal B}_{MQ1}$ for $l\geqslant 2$ is higher. For $l=1$ the last codimension is also higher. This completes our consideration of the conditions (MQ1) and (MQ2).
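Let us check the value at $l=2$ explicitly: $$ -k+1+2(M+2)+{M+2-\rho(k)\choose 2}=2M-k+5+{M-\rho(k)+2\choose 2}=\gamma+M, $$ by the definition of $\gamma$ in Subsection 8.1.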
{\bf 8.3. Regularity at the non-singular and quadratic points.} Let us estimate the codimension of the set ${\cal B}_{R1}(\underline{\xi},\Pi)$ in the space ${\cal P}(o,\underline{\xi})$. Here $\mathop{\rm dim}\underline{\xi}=k$ and $\Pi\subset{\mathbb P}(T_oF)$ is a subspace of codimension $k+\varepsilon(k)-1=m_*-4$. Let $$ {\cal G}(\underline{d})=\prod^{M-m_*}_{i=1}{\cal P}_{\mathop{\rm deg}h_i,\mathop{\rm dim}\Pi+1} $$
be the space, parameterizing all sequences ${\cal S}[-m_*]|_{\Pi}$. Since the polynomials $h_i$ are distinct homogeneous components of the polynomials of the tuple $\underline{f}$, restricted onto the subspace $\Pi$, the codimension of the subset ${\cal B}_{R1}(\underline{\xi},\Pi)$ in ${\cal P}(o,\underline{\xi})$ is equal to the codimension of the subset ${\cal B}\subset{\cal G}(\underline{d})$, which consists of the sequences that do not satisfy the condition (R1).
Using the approach that was applied in \cite{Pukh98b,Pukh13a,EvansPukh2} and many other papers, let us present ${\cal B}$ as a disjoint union $$ {\cal B}=\bigsqcup^{M-m_*}_{i=1}{\cal B}_i, $$ where ${\cal B}_i$ consists of sequences $$
(h_1|_{\Pi},\dots,h_{M-m_*}|_{\Pi}), $$ such that the first $i-1$ polynomials form a regular sequence but $h_i$ vanishes on one of the components of the set of their common zeros. The ``projection method'' estimates the codimension of ${\cal B}_i$ in ${\cal G}(\underline{d})$ from below by the integer \begin{equation}\label{17.10.22.1} {\mathop{\rm dim}\Pi-i+1+\mathop{\rm deg}h_i\choose \mathop{\rm deg}h_i} ={\mathop{\rm dim}\Pi-i+1+\mathop{\rm deg}h_i\choose \mathop{\rm dim}\Pi-i+1} \end{equation} (we will use both presentations). It follows easily from here (see \cite[\S3]{EvansPukh2}), that the worst estimate corresponds to the case of equal or ``almost equal'' degrees $d_i$, described above. We will consider this case.
Thus we need to estimate from below the minimum of $M-m_*$ integers (\ref{17.10.22.1}). Here are the first $(k+1)$ of them: $$ {\mathop{\rm dim}\Pi+2\choose 2},\,{\mathop{\rm dim}\Pi+1\choose 2},\,\dots,\,{\mathop{\rm dim}\Pi+3-k\choose 2},\,{\mathop{\rm dim}\Pi+3-k\choose 3}. $$ We call the left hand side of the equality (\ref{17.10.22.1}) the presentation of type (I), the right hand side is the presentation of type (II). Let us write down each of the numbers (\ref{17.10.22.1}) in the form $$ {A(i)\choose B(i)}, $$ where $A(i)\geqslant 2B(i)$, using the presentation of type (I) or of type (II).
At first (for the starting segment of the sequence) we use the presentation of type (I). It is easy to see that when we change $i$ by $i+1$, we have one of the two options: \begin{itemize} \item either $\mathop{\rm deg}h_{i+1}=\mathop{\rm deg}h_i$, and then $A(i+1)=A(i)-1$ and $B(i+1)=B(i)$, so that $$ {A(i+1)\choose B(i+1)}<{A(i)\choose B(i)}, $$ and moreover, $C(i)=A(i)-2B(i)$ decreases: $C(i+1)=C(i)-1$,
\item or $\mathop{\rm deg}h_{i+1}=\mathop{\rm deg}h_i+1$, and then $A(i+1)=A(i)$ and $B(i+1)=B(i)+1$, so that $C(i+1)=C(i)-2$ and if $C(i+1)\geqslant 0$, then $$ {A(i+1)\choose B(i+1)}>{A(i)\choose B(i)}. $$ \end{itemize}
This is how it goes on until the ``equilibrium'': $C(i_*)\geqslant 0$, but $C(i_*+1)<0$, and after that we use the presentation of type (II).
Now when we change $i$ by $(i+1)$, we have one of the two options:
\begin{itemize}
\item either $\mathop{\rm deg}h_{i+1}=\mathop{\rm deg}h_i$, and then $A(i+1)=A(i)-1$ and $B(i+1)=B(i)-1$, so that $C(i+1)=C(i)+1$ and $$ {A(i+1)\choose B(i+1)}<{A(i)\choose B(i)}, $$
\item or $\mathop{\rm deg}h_{i+1}=\mathop{\rm deg}h_i+1$, and then $A(i+1)=A(i)$ and $B(i+1)=B(i)-1$, so that $C(i+1)=C(i)+2$ and $$ {A(i+1)\choose B(i+1)}<{A(i)\choose B(i)}. $$ \end{itemize}
Therefore, after the ``equilibrium'' our sequence is strictly decreasing. Moreover, if $$ {A(i_1)\choose B(i_1)}\quad\mbox{and}\quad {A(i_2)\choose B(i_2)} $$ are two numbers in our sequence, where $i_1\leqslant i_*$ and $i_2>i_*$ and $B(i_1)\geqslant B(i_2)$, then, obviously, $$ {A(i_1)\choose B(i_1)}>{A(i_2)\choose B(i_2)}. $$ Recall that the degrees $d_i$ are equal or ``almost equal''.
{\bf Lemma 8.1.} {\it For $M\geqslant 3k^2$ the following inequality holds:} $i_*<M-m_*$.
{\bf Proof.} Elementary computations, using the equality $C(i+k)=C(i)-(k+1)$ if $C(i+k)\geqslant 0$. Q.E.D. for the lemma.
Therefore, the ``equilibrium'' is reached earlier than the sequence $h_i,\dots,h_{M-m_*}$ comes to an end, so that there is a non-empty segment after the ``equilibrium''. By construction, $B(M-m_*)=4$. By what was said above, the minimum of the numbers ${A(i)\choose B(i)}$ for $i=1,\dots,M-m_*$ is the minimum of the following three numbers: $$ {\mathop{\rm dim}\Pi+3-k\choose 2},\quad {\mathop{\rm dim}\Pi+4-2k\choose 3},\quad {\mathop{\rm deg}h_{M-m_*}+4\choose 4}. $$
{\bf Lemma 8.2.} {\it For $\mathop{\rm dim}\Pi\geqslant 3k+1$ the following inequality holds:} $$ {\mathop{\rm dim}\Pi+4-2k\choose 3}>{\mathop{\rm dim}\Pi+3-k\choose 2}. $$
{\bf Proof.} Elementary computations. Q.E.D.
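A possible direct check: set $u=\mathop{\rm dim}\Pi-2k$, so that $u\geqslant k+1$. Then $$ {\mathop{\rm dim}\Pi+4-2k\choose 3}-{\mathop{\rm dim}\Pi+3-k\choose 2}=\frac{(u+4)(u+3)(u+2)}{6}-\frac{(u+k+3)(u+k+2)}{2}, $$ which equals $\frac{k^3+5k+24}{6}>0$ at $u=k+1$ and is increasing in $u$ (its derivative is $\frac{3u^2+12u+11-6k}{6}>0$ for $u\geqslant k+1$), so the inequality holds for all $\mathop{\rm dim}\Pi\geqslant 3k+1$.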
{\bf Lemma 8.3.} {\it For $M\geqslant 2\sqrt{3}k^2$ the following inequality holds:} $$ {\mathop{\rm deg}h_{M-m_*}+4\choose 4}>{\mathop{\rm dim}\Pi+3-k\choose 2}. $$
{\bf Proof.} It is easy to check the inequalities $$ \frac{(M-2k)^2}{2}\geqslant{\mathop{\rm dim}\Pi+3-k\choose 2} $$ and $$ {\mathop{\rm deg}h_{M-m_*}+4\choose 4}\geqslant\frac{1}{24} \left(\frac{M}{k}+1\right)\left(\frac{M}{k}\right)\left(\frac{M}{k}-1\right) \left(\frac{M}{k}-2\right), $$ so that it is sufficient to show that for $M\geqslant 2\sqrt{3}k^2$ the inequality $$ \left(\frac{M^2}{k^2}-1\right)\left(\frac{M^2}{k^2}-2\frac{M}{k}\right) >12(M-2k)^2 $$ holds or, equivalently, $M(M^2-k^2)>12k^4(M-2k)$. It is easy to check the last inequality, considering the cubic polynomial $$ t^3-(12k^4+k^2)t+24k^5 $$ in the real variable $t$. Q.E.D. for the lemma.
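One way to carry out the last step: for $p(t)=t^3-(12k^4+k^2)t+24k^5$ we have $p'(t)=3t^2-(12k^4+k^2)>0$ for $t\geqslant 2\sqrt3k^2$, and $$ p(2\sqrt3k^2)=24\sqrt3k^6-(12k^4+k^2)\cdot2\sqrt3k^2+24k^5=24k^5-2\sqrt3k^4>0, $$ so that $p(M)>0$, that is, $M(M^2-k^2)>12k^4(M-2k)$, for all $M\geqslant 2\sqrt3k^2$.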
The work carried out above gives the inequality $$ \mathop{\rm codim}({\cal B}\subset{\cal G}(\underline{d}))\geqslant{M+3-2k-\varepsilon(k)\choose 2}. $$ From here, by elementary computations (taking into account the variation of the subspace $\Pi$, see Subsection 8.1), it is easy to obtain the required inequality $\mathop{\rm codim}({\cal B}_{R1}\subset{\cal P}(o))\geqslant\gamma+M$. This completes the proof in the case of smooth points.
It is easy to see that the methods used above give a stronger estimate for the codimension of the set ${\cal B}_{R2}$, because the dimension of the subspace $\Pi$ is higher. The computations are completely similar to those given above for the case of a non-singular point; for that reason we do not consider the case of a quadratic point separately and move on to estimating the codimension of the sets ${\cal B}_{R3.1}$ and ${\cal B}_{R3.2}$.
{\bf 8.4. Regularity at the multi-quadratic points.} Let us estimate the codimension of the set ${\cal B}_{R3.2}(\underline{\xi},\Pi)$, where $\Pi\subset{\mathbb P}(T_oF)$ is an arbitrary subspace of codimension $\varepsilon(k)$. Our arguments are completely similar to the arguments of Subsection 8.3 for a non-singular point and give a stronger estimate for the codimension. We just point out the necessary changes in the constructions of Subsection 8.3. Set $$ {\cal G}(\underline{d})=\prod^{M-m^*}_{i=1}{\cal P}_{\mathop{\rm deg} h_i,\mathop{\rm dim}\Pi+1}. $$ Denote by the symbol ${\cal B}$ the subset in ${\cal G}(\underline{d})$, consisting of the sequences that do not satisfy the condition (R3.2). Again we break ${\cal B}$ into subsets: $$ {\cal B}=\bigsqcup^{M-m^*}_{i=1}{\cal B}_i, $$ where ${\cal B}_i$ has the same meaning as in Subsection 8.3 (but for the multi-quadratic point $o$). Again the codimension of ${\cal B}_i$ in ${\cal G}(\underline{d})$ is bounded from below by the number (\ref{17.10.22.1}), and for $k$ and $M$ fixed the worst estimate corresponds to the case of equal or ``almost equal'' degrees $d_i$.
Arguing precisely in the same way as in the non-singular case (Subsection 8.3), we see, since the dimension of the subspace $\Pi$ is higher than in the non-singular case, that the claim of Lemma 8.1 is true. Note that if $m^*=0$, then in the notation of Subsection 8.3 we have $B(M-m^*)\geqslant 4$. Thus, replacing $B(M-m^*)$ by 4, we get that $\mathop{\rm codim}({\cal B}\subset{\cal G}(\underline{d}))$ is bounded from below by the least of the three numbers $$ {\mathop{\rm dim}\Pi+3-k\choose 2},\quad {\mathop{\rm dim}\Pi+4-2k\choose 3},\quad {\mathop{\rm deg}h_{M-m^*}+4\choose 4}. $$ The claim of Lemma 8.2 is true since, as we noted above, $\mathop{\rm dim} \Pi$ in the multi-quadratic case is higher than in the non-singular case. Obviously, $m^*<m_*$, so that $\mathop{\rm deg} h_{M-m^*}\geqslant\mathop{\rm deg}h_{M-m_*}$ and the claim of Lemma 8.3 is also true. As a result, we get the inequality $$ \mathop{\rm codim}({\cal B}\subset{\cal G}(\underline{d}))\geqslant{M+2+l-k-\varepsilon(k)\choose 2}, $$ where $\mathop{\rm dim}\underline{\xi}=k-l$. The minimum of the right hand side is attained for $l=2$, and it is easy to see that this minimum is significantly higher than in the non-singular case. It is easy to check, taking into account the variation of the subspace $\Pi$, that $$ \mathop{\rm codim}({\cal B}_{R3.2}\subset{\cal P}(o))>\gamma+M. $$ This completes our consideration of the condition (R3.2) in the multi-quadratic case.
{\bf 8.5. The condition (R3.1).} In order to estimate the codimension of the set ${\cal B}_{R3.1}(\underline{\xi},P)$, we need the following known general fact. Take $e\geqslant 1$ and let $\underline{w}=(w_1,\dots,w_e)\in{\mathbb Z}^e$ be a tuple of integers, where $2\leqslant w_1\leqslant\dots\leqslant w_e$.
Set $$ {\cal P}(\underline{w})=\prod^e_{i=1}{\cal P}_{w_i,N+1} $$ to be the space of tuples $\underline{g}=(g_1,\dots,g_e)$ of homogeneous polynomials in $N+1$ variables, $\mathop{\rm deg}g_i=w_i$, which we consider as homogeneous polynomials on ${\mathbb P}^N$. Let $$ {\cal B}^*(\underline{w})\subset{\cal P}(\underline{w}) $$ be the set of tuples $\underline{g}$, such that the scheme of their common zeros is not an irreducible reduced subvariety of codimension $e$ in ${\mathbb P}^N$.
{\bf Theorem 8.1.} {\it The following inequality holds:} $$ \mathop{\rm codim}({\cal B}^*(\underline{w})\subset{\cal P}(\underline{w}))\geqslant\frac12(N-e-1)(N-e-4)+2. $$
{\bf Proof.} This is Theorem 2.1 in \cite{Pukh2022a}. Q.E.D.
Let us estimate the codimension of the set ${\cal B}_{R3.1}(\underline{\xi},P)$. In order to do this, consider in the projective space $P$ a hypersurface $P^{\sharp}$ that does not contain the point $o$, for instance, the intersection of the hyperplane ``at infinity'' with respect to the system of affine coordinates $(z_1,\dots,z_{M+k})$ with the subspace $P$. If the scheme of common zeros of the tuple of polynomials, consisting of $$
f_1|_P,\dots,f_k|_P $$
and the polynomials $f_{i,2}|_P$ for $i$ such that $d_i\geqslant 3$, is not an irreducible reduced subvariety of codimension $k+k_{\geqslant 3}$ in $P$ (that is, the condition (R3.1) is violated, see Subsection 1.4), then the scheme of common zeros of the set of polynomials \begin{equation}\label{27.02.23.1}
f_1|_{P^{\sharp}},\dots,f_k|_{P^{\sharp}},f_{i,2}|_{P^{\sharp}}\quad \mbox{for} \quad d_i\geqslant 3, \end{equation} respectively, is reducible, non-reduced or is of codimension $<k+k_{\geqslant 3}$ in $P^{\sharp}$. However, for each $i$, such that $d_i\geqslant 3$, the homogeneous polynomials $$
f_i|_{P^{\sharp}}=f_{i,d_i}|_{P^{\sharp}}\quad\mbox{and}\quad f_{i,2}|_{P^{\sharp}} $$ on the projective space $P^{\sharp}$ are linear combinations of disjoint sets of monomials in $f_i$, so that the coefficients of those polynomials belong to disjoint subsets of coefficients of the polynomial $f_i$. Therefore (re-ordering the polynomials of the tuple (\ref{27.02.23.1}) so that their degrees do not decrease), applying Theorem 8.1 to the tuple (\ref{27.02.23.1}), we get that the codimension of the set ${\cal B}_{R3.1}(\underline{\xi},P)$ is at least $$ \frac12(\mathop{\rm dim}P^{\sharp}-k-k_{\geqslant 3}-1) (\mathop{\rm dim}P^{\sharp}-k-k_{\geqslant 3}-4)+2, $$ where $\mathop{\rm dim}P^{\sharp}=M+l-\varepsilon(k)-1$, $\mathop{\rm dim}\underline{\xi}=k-l$. It is easy to check by elementary computations that this estimate (with the correction due to the variation of the subspace $P$ and the set of linear forms $\underline{\xi}$) is stronger than we need.
This completes the proof of the estimate for the codimension of the complement ${\cal P}\backslash{\cal F}$ in Theorem 0.1.
Note that (for the technique of estimating the codimension that we used) the estimate of Theorem 0.1 is optimal for the condition (MQ2); that requirement turns out to be the strongest.
\begin{flushleft} Department of Mathematical Sciences,\\ The University of Liverpool \end{flushleft}
\noindent{\it [email protected]}
\end{document}
\begin{document}
\begin{frontmatter}
\title{Cycle-maximal triangle-free graphs\tnoteref{tn1}} \tnotetext[tn1]{NOTICE: this is the authors' version of a work that was accepted for publication in \emph{Discrete Mathematics}. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in \emph{Discrete Mathematics}, Volume 338, Issue 2 (6 February 2015), pages 274-290. DOI: \href{http://dx.doi.org/10.1016/j.disc.2014.10.002}{10.1016/j.disc.2014.10.002}}
\author[compsci,nserc]{Stephane Durocher} \ead{[email protected]}
\author[math,nserc]{David S. Gunderson} \ead{[email protected]}
\author[compsci]{Pak Ching Li} \ead{[email protected]}
\author[compsci]{Matthew~Skala\corref{cor1}} \ead{[email protected]} \cortext[cor1]{Principal corresponding author.}
\address[compsci]{Department of Computer Science, E2--445 EITC, University of Manitoba, Winnipeg, Manitoba, Canada, R3T 2N2}
\address[math]{Department of Mathematics, 342 Machray Hall, 186 Dysart Road, University of Manitoba, Winnipeg, Manitoba, Canada, R3T 2N2}
\fntext[nserc]{Work of these authors is supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC).}
\begin{abstract} We conjecture that the balanced complete bipartite graph $K_{\lfloor n/2 \rfloor,\lceil n/2 \rceil}$ contains more cycles than any other $n$-vertex triangle-free graph, and we make some progress toward proving this. We give equivalent conditions for cycle-maximal triangle-free graphs; show bounds on the numbers of cycles in graphs depending on numbers of vertices and edges, girth, and homomorphisms to small fixed graphs; and use the bounds to show that among regular graphs, the conjecture holds. We also consider graphs that are close to being regular, with the minimum and maximum degrees differing by at most a positive integer $k$. For $k=1$, we show that any such counterexamples have $n\le 91$ and are not homomorphic to $C_5$; and for any fixed $k$ there exists a finite upper bound on the number of vertices in a counterexample. Finally, we describe an algorithm for efficiently computing the matrix permanent (a $\#P$-complete problem in general) in a special case used by our bounds. \end{abstract}
\begin{keyword} extremal graph theory \sep cycle \sep triangle-free \sep regular graph \sep matrix permanent \sep \#P-complete \MSC[2010] 05C38 \sep 05C35 \end{keyword}
\end{frontmatter}
\section{Introduction} \label{sec:intro}
Many algorithmic problems that are computationally difficult on graphs can be solved easily in polynomial time when the graph is acyclic. Limiting input to trees (connected acyclic graphs) or forests (acyclic graphs), however, is often too restrictive; many of these problems remain efficiently solvable when the graph is ``nearly'' a tree \cite{Arnborg:Easy,Bern:Linear,Bodlaender:Dynamic,Gurevich:Solving}. Various notions exist formalizing how close a given graph is to being a tree, including bounded treewidth (partial $k$-trees), $k$-connectivity, and number of cycles.
The problem of evaluating $c(G)$, the number of cycles in a given graph $G$ (defined formally in Subsection~\ref{sub:definitions}), is $\#P$-complete, equivalent in difficulty to counting the certificates of an $NP$-complete decision problem, even though the problem of testing for the existence of a single cycle is trivially polynomial-time. Existence of a cycle is a graph property definable in monadic second-order logic. By the result known as Courcelle's Theorem~\cite{Courcelle:Monadic}, such properties can be decided in linear time for graphs of bounded treewidth, and as described by Arnborg, Lagergren, and Seese, the counting versions are also linear-time for fixed treewidth~\cite{Arnborg:Easy}. On the other hand, if we parameterize by length of the cycles instead of structure of the graph, Flum and Grohe~\cite{Flum:Parameterized} give evidence against fixed-parameter tractability: they show that counting cycles of length $k$ is $\#W[1]$-complete, with no $(f(k)\cdot n^c)$-time algorithm unless the Exponential Time Hypothesis fails.
When no restrictions are imposed on the graph, the number of cycles in an $n$-vertex graph is maximized by the complete graph on $n$ vertices, $K_n$. In this case the number of cycles is easily seen to be \begin{equation}
\sum_{i=3}^n \left( \binom{n}{i} \frac{(i-1)!}{2}
\right) = n! \sum_{i=3}^n \frac{1}{2i(n-i)!} \, . \label{eqn:kn} \end{equation}
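For instance, for $n=4$ the formula gives $\binom{4}{3}\cdot 1+\binom{4}{4}\cdot 3=7$ cycles in $K_4$: four triangles and three $4$-cycles. As an illustrative sanity check (not used anywhere in the paper), the following Python sketch compares \eqref{eqn:kn} with a direct enumeration of cycles for small $n$; the helper \texttt{count\_cycles} is ours and is exponential-time.

\begin{verbatim}
from itertools import combinations
from math import comb, factorial

def count_cycles(n, edges):
    # Count simple cycles by DFS: only extend paths whose starting vertex is
    # the smallest on the path, then halve to undo the two traversal directions.
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    total = 0
    def dfs(start, v, visited):
        nonlocal total
        for w in adj[v]:
            if w == start and len(visited) >= 3:
                total += 1
            elif w > start and w not in visited:
                dfs(start, w, visited | {w})
    for s in range(n):
        dfs(s, s, {s})
    return total // 2

for n in range(3, 8):
    complete = list(combinations(range(n), 2))     # edge set of K_n
    formula = sum(comb(n, i) * factorial(i - 1) // 2 for i in range(3, n + 1))
    assert count_cycles(n, complete) == formula
print("the summation matches brute-force enumeration for n = 3..7")
\end{verbatim}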
The bound \eqref{eqn:kn} can be refined by introducing additional parameters. Previous results include bounds on the number of cycles in terms of $n$, $m$, $\delta$, and $\Delta$ (the number of vertices, number of edges, minimum degree, and maximum degree of $G$, respectively) \cite{Entringer:Maximum,Guichard:Maximum,Volkmann:Estimations}, as well as bounds on the number of cycles for various classes of graphs, including $k$-connected graphs \cite{Knor:Number}, Hamiltonian graphs \cite{Rautenbach:Maximum,Shi:Number}, planar graphs \cite{Aldred:Maximum,Alt:Number}, series-parallel graphs \cite{DeMier:Maximum}, and random graphs \cite{Takacs:Limit}.
A graph's cycles can be classified by length. For each value of $i$, the summand in \eqref{eqn:kn} corresponds to the number of cycles of length $i$ in $K_n$. If short cycles are disallowed, the number of long cycles possible is also reduced. Every graph $G$ of girth $g$ that contains two or more cycles has $n \geq 3g/2-1$ vertices or, equivalently, if $g > 2(n+1)/3$, then $G$ has at most one cycle \cite{Bose:Bounding}. The bound on the number of cycles increases as $g$ decreases. As mentioned earlier, the case $g=3$ is maximized by $K_n$ for which the number of cycles is exactly \eqref{eqn:kn}. Can the maximum number of cycles be expressed exactly or bounded tightly as a function of arbitrary values for $n$ and $g$? Even when $g=4$ the maximum number of possible cycles is unknown. Graphs of girth four or greater are exactly the triangle-free graphs. One goal of this research program is to show that the number of cycles in a triangle-free $n$-vertex graph is maximized by the complete bipartite graph $K_{\lfloor n/2\rfloor, \lceil n/2\rceil}$, and the results in this paper represent significant progress toward that goal.
We first encountered the problem of bounding the number of cycles as a function of $n$ and $g$ when examining path-finding algorithms on graphs. A tree traversal can be achieved by applying a right-hand rule (e.g., after reaching a vertex $v$ via its $i$th edge, depart along its $(i+1)$st edge). Traversing a graph using only local information at each vertex is significantly more difficult in graphs with cycles. A successful traversal can be guaranteed, however, if the local neighbourhood of every vertex $v$ is tree-like within some distance $k$ from $v$ (e.g., the graph has girth $g \geq 2k+1$) and that a fixed upper bound is known on the number of possible cycles along paths that join pairs of leaves outside each such local tree (Bose, Carmi, and Durocher~\cite{Bose:Bounding} give a more formal discussion). Deriving a useful bound on this number of cycles led to the work presented in this paper.
In any graph, every chordless cycle of length seven or greater can be bridged by the addition of a chord without creating any triangles. Similarly, in any graph of girth six or greater, any given cycle can be bridged without creating any triangles. There exist graphs of girth four and five, however, that contain cycles of length six that cannot be bridged without creating a triangle. The Petersen graph minus one vertex, as shown in Figure~\ref{fig:pminus}, is such a graph of girth five; replacing one of its vertices with two sharing the same neighbourhood results in a graph of girth four with the same property. To increase the number of cycles in a graph, large chordless cycles can be bridged greedily until the graph is triangle-free but the addition of any edge would create a triangle. This suggests that a cycle-maximal triangle-free graph should contain many cycles of length four or five. Since bipartite graphs are triangle free, complete bipartite graphs and, more specifically, balanced bipartite graphs are natural candidates for maximizing the number of cycles. We verified the following conjecture to be true by exhaustive computer search for $n \le 13$:
\begin{figure}
\caption{The Petersen graph minus one vertex, which contains a $C_6$ that
cannot be bridged without creating a triangle.}
\label{fig:pminus}
\end{figure}
\begin{conjecture}\label{con:main} The cycle-maximal triangle-free graphs are exactly the bipartite Tur\'{a}n graphs, $K_{\lfloor n/2 \rfloor, \lceil n/2 \rceil}$ for all $n$. \end{conjecture}
\subsection{Overview of results}
Our main results, Theorems~\ref{thm:regular-triangle-free} and~\ref{thm:near-reg-triangle-free}, show that Conjecture~\ref{con:main} holds for all regular cycle-maximal triangle-free graphs, and all near-regular cycle-maximal triangle-free graphs with more than $91$ vertices. In Section~\ref{sec:props} we give some properties of cycle-maximal graphs. In Section~\ref{sec:bounds} we establish bounds on the number of cycles in triangle-free graphs. In Section~\ref{sec:regular} we prove Theorem~\ref{thm:regular-triangle-free}, and in Section~\ref{sec:near-regular} we prove Theorem~\ref{thm:near-reg-triangle-free}. Section~\ref{sec:algorithm} describes an algorithm for computing the matrix permanent, which is used in our bounds.
\subsection{Definitions and notation}\label{sub:definitions}
Graphs are simple and undirected unless otherwise specified. A \emph{block} in a graph $G$ is a maximal 2-connected subgraph of $G$. Given a graph $G$, let $V(G)$, $E(G)$, $\delta(G)$, and $\Delta(G)$ denote, respectively, the vertex set of $G$, edge set of $G$, minimum degree of any vertex in $G$, and maximum degree of any vertex in $G$. Given a vertex $v \in V(G)$, let $N(v)$ denote the neighbourhood of $v$; that is, the set of all vertices adjacent to $v$ in $G$. Given positive integers $s$ and $t$, let $K_s$ denote the complete graph on $s$ vertices, $K_{s,t}$ denote the complete bipartite graph with part sizes $s$ and $t$, $C_s$ denote the cycle of $s$ vertices, and $P_s$ denote the path of $s$ vertices. Given a positive integer $n$, let $T(n,2)$ represent the bipartite Tur\'{a}n graph on $n$ vertices, that is, $K_{\lfloor n/2 \rfloor, \lceil n/2 \rceil}$.
A graph is \emph{triangle-free} if it does not contain $C_3$ (a \emph{triangle}) as a subgraph. The \emph{girth} of a graph is the size of the smallest cycle, by convention $\infty$ if there are no cycles. Triangle-free is equivalent to having girth at least 4. A graph $G$ is \emph{maximal triangle-free} if it is triangle-free, but adding any edge would create a triangle. Let $c(G)$ denote the number of labelled cycles in $G$. That is the number of distinct subsets of $E(G)$ that are cycles; note that we are not only counting distinct cycle \emph{lengths}, which may also be interesting but is a completely different problem. Then $G$ is \emph{cycle-maximal} for some class of graphs and number of vertices $n$ if $G$ maximizes $c(G)$ among $n$-vertex graphs in the class. Most often we are interested in cycle-maximal graphs for fixed minimum girth $g$, especially the case $g\ge 4$, \emph{cycle-maximal triangle-free} graphs. It is easy to prove (see Lemma~\ref{lem:every-edge-cfour}) that a cycle-maximal triangle-free graph, if large enough to have any cycles at all, is also maximal triangle-free.
A graph $G$ is \emph{homomorphic} to a graph $H$ when there exists a function $f:V(G) \rightarrow V(H)$, called a \emph{homomorphism}, such that if $(u,v) \in E(G)$ then $(f(u),f(v)) \in E(H)$. A graph is $s$-colourable if and only if it is homomorphic to $K_s$. Given a positive integer $t$ and a graph $H$, let $H(t)$ represent the uniform \emph{blowup} of $H$: that is the graph homomorphic to $H$ formed by replacing the vertices in $H$ with independent sets, each of size $t$, and adding edges between all vertices in two independent sets if the sets correspond to adjacent vertices in $H$. If $H$ has $p$ vertices, then $H(t)$ has $pt$ vertices. When $H$ is a labelled graph with $p$ vertices $v_1,v_2,\ldots,v_p$, let $H(n_1,n_2,\ldots,n_p)$ represent the not necessarily uniform blowup of $H$ in which $v_1$ is replaced by an independent set of size $n_1$, $v_2$ by an independent set of size $n_2$, and so on, with all edges added that are allowed by the homomorphism.
We define the family of \emph{gamma graphs} as follows. For any positive integer $i$, $\Gamma_i$ is a graph with $n=3i-1$ vertices $v_1,v_2,\ldots,v_n$. Each vertex $v_j$ is adjacent to the $i$ vertices $v_{j+i},v_{j+i+1},\ldots,v_{j+2i-1}$, taking the indices modulo $n$. For $i\ge2$, this is the complement of the $(i-1)$st power of the cycle graph $C_{3i-1}$. Then $\Gamma_1$ is $K_2$, $\Gamma_2$ is $C_5$, and $\Gamma_3$ is the eight-vertex M\"{o}bius ladder, or twisted cube, shown in Figure~\ref{fig:twcube}.
\begin{figure}
\caption{The M\"{o}bius ladder $\Gamma_3$.}
\label{fig:twcube}
\end{figure}
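To make the definition of the gamma graphs concrete, the following Python sketch (illustrative only; it follows the definition above with $0$-based indices) constructs $\Gamma_i$ and checks that it is $i$-regular and triangle-free for small $i$.

\begin{verbatim}
def gamma_graph(i):
    # Gamma_i on n = 3i - 1 vertices (0-based): vertex j is adjacent to
    # j + i, j + i + 1, ..., j + 2i - 1, indices taken modulo n.
    n = 3 * i - 1
    adj = [set() for _ in range(n)]
    for j in range(n):
        for s in range(i, 2 * i):
            adj[j].add((j + s) % n)
            adj[(j + s) % n].add(j)
    return n, adj

for i in range(1, 7):
    n, adj = gamma_graph(i)
    assert all(len(adj[v]) == i for v in range(n))       # i-regular
    assert all(not (adj[u] & adj[v])                      # no triangles
               for u in range(n) for v in adj[u])
print("Gamma_1, ..., Gamma_6 are i-regular and triangle-free")
\end{verbatim}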
A few relevant pieces of notation from outside graph theory will be used. Let $\Gamma(z)$ represent the usual gamma function (generalized factorial); $n! = \Gamma(n+1)$ for nonnegative integer $n$, but the gamma function is also defined for non-integer arguments. We will use it only for nonnegative real arguments, which need not be integers. The similarity of notation between $\Gamma(n+1)$ and $\Gamma_i$ is unfortunate, but these are widely-used standard symbols for these concepts. Some authors also use $\Gamma(v)$ for the neighbourhood of a vertex $v$; we avoid that here.
For positive integers $n$ and $m$, let $I_n$ denote the $n\times n$ identity matrix, and $J_{n,m}$ denote the $n\times m$ matrix with all entries equal to $1$. Given an $n \times n$ square matrix $A$, let $\perm{} A$ denote the \emph{permanent} of $A$. That is the sum, over all ways to choose $n$ entries from $A$ with one in each row and one in each column, of the product of the chosen entries. Note that the definition of the permanent is the same as the definition of the determinant without the alternating signs.
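As a concrete instance of this definition, $\perm{} J_{3,3}=3!=6$ and $\perm{} I_n=1$. A minimal Python sketch of the defining sum (exponential-time and for illustration only; it is not the algorithm of Section~\ref{sec:algorithm}) is:

\begin{verbatim}
from itertools import permutations
from math import prod

def permanent(A):
    # Permanent of a square matrix A: the sum over all permutations s of
    # the products A[0][s(0)] * A[1][s(1)] * ...  (no alternating signs).
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n)) for s in permutations(range(n)))

J3 = [[1] * 3 for _ in range(3)]
I3 = [[int(i == j) for j in range(3)] for i in range(3)]
assert permanent(J3) == 6      # 3! ways to pick one entry per row and column
assert permanent(I3) == 1
\end{verbatim}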
\subsection{Related work}
A number of previous results examine the problem of characterizing cycle-maximal graphs and bounding the number of cycles as a function of girth, degree, or the number of edges for various classes of graphs. Entringer and Slater \cite{Entringer:Maximum} show that some $n$-vertex graph with $m$ edges has at least $2^{m-n}$ cycles and every such graph has at most $2^{m-n+1}$ cycles. Aldred and Thomassen \cite{Aldred:Maximum} improve the upper bound to $(15/16)2^{m-n+1}$. Guichard \cite{Guichard:Maximum} examines bounds on the number of cycles to which any given edge can belong, including a discussion of cubic graphs and triangle-free graphs. Alt \emph{et al.}\ \cite{Alt:Number} show that the maximum number of cycles in any $n$-vertex planar graph is at least $2.27^n$ and at most $3.37^n$. Buchin \emph{et al.}\ \cite{Buchin:Number} improve these bounds to $2.4262^n$ and $2.8927^n$, respectively. De Mier and Noy \cite{DeMier:Maximum} examine the maximum number of cycles in outerplanar and series-parallel graphs. Knor \cite{Knor:Number} examines bounds on the maximum number of cycles in $k$-connected graphs, including bounds expressed in terms of the minimum and maximum degrees. Markstr\"om \cite{Markstrom:Extremal} presents results of a computer search examining the minimum and maximum numbers of cycles as a function of girth and the number of edges in small graphs.
Several results in extremal graph theory examine bounds on triangle-free graphs. Andr{\'a}sfai \emph{et al.}\ \cite{Andrasfai:Connection} show that every $n$-vertex graph that has chromatic number $r$ but does not contain $K_r$ as a subgraph has minimum degree at most $n (3r-7)/(3r-4)$. Brandt \cite{Brandt:Structure} examines the structure of triangle-free graphs with minimum degree at least $n/3$. Brandt and Thomass\'e \cite{Brandt:Dense} show that every triangle-free graph with minimum degree greater than $n/3$ has chromatic number at most four. Jin \cite{Jin:Chromatic} gives an upper bound on the minimum degree of triangle-free graphs with chromatic number four or greater. Pach \cite{Pach:Graphs} characterizes triangle-free graphs in which every independent set has a common neighbour: a triangle-free graph has that property if and only if it is a maximal triangle-free graph homomorphic to some $\Gamma_i$. Brouwer \cite{Brouwer:Finite} provides a simpler proof of Pach's result.
\section{Properties of triangle-free and cycle-maximal graphs} \label{sec:props}
This section lists some properties of graphs that we will use in subsequent sections. Most of the proofs are simple, or already given by others, but we describe them for completeness.
First, consider the gamma graphs defined in Subsection~\ref{sub:definitions}. This family of graphs recurs throughout the literature on maximal triangle-free graphs. They seem to have been first introduced in 1964 by Andrásfai~\cite{Andrasfai:G18}. Notation and the order of labelling the vertices varies among authors; we follow Brandt and Thomass\'e~\cite{Brandt:Dense} here. All the $\Gamma_i$ graphs are $i$-regular, circulant, and three-colourable. As the following lemma describes, the $\Gamma_i$ graphs form a hierarchy in which each one is homomorphic to the next one, and deleting a vertex renders it homomorphic to the previous one.
\begin{lemma}\label{lem:gamma-deleted} For all $i>1$, $\Gamma_i$ with one vertex deleted is homomorphic to $\Gamma_{i-1}$, and $\Gamma_{i-1}$ is homomorphic to $\Gamma_i$. \end{lemma}
\begin{proof} Let $v_1,v_2,\ldots,v_{3i-1}$ denote the vertices of $\Gamma_i$ and $w_1,w_2,\ldots,w_{3i-4}$ denote the vertices of $\Gamma_{i-1}$. Assume without loss of generality that $v_{3i-1}$ is the vertex deleted from $\Gamma_i$. Then define $f$ and $F$ as follows. \begin{align*}
f(v_j) &= \begin{cases}
w_j & \text{if $j<i$}, \\
w_{j-1} & \text{if $i\le j<2i$}, \\
w_{j-2} & \text{if $j\ge 2i$};
\end{cases} \\
F(w_j) &= \begin{cases}
v_j & \text{if $j<i$}, \\
v_{j+1} & \text{if $i\le j <2i-2$}, \\
v_{j+2} & \text{if $j \ge 2i-2$}.
\end{cases} \end{align*} By checking their effects on the vertex neighbourhoods, $f$ and $F$ are homomorphisms in both directions between $\Gamma_i$ with the vertex $v_{3i-1}$ deleted, and $\Gamma_{i-1}$. Reinserting the deleted vertex, $\Gamma_{i-1}$ is also homomorphic to $\Gamma_i$. \end{proof}
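The maps $f$ and $F$ above can also be checked mechanically for small $i$. The following Python fragment (an informal verification with $0$-based indices, so $v_j$ and $w_j$ become indices $j-1$; the helper \texttt{gamma\_graph} repeats the construction from Subsection~\ref{sub:definitions}) confirms that every edge is mapped to an edge.

\begin{verbatim}
def gamma_graph(i):
    n = 3 * i - 1
    adj = [set() for _ in range(n)]
    for j in range(n):
        for s in range(i, 2 * i):
            adj[j].add((j + s) % n); adj[(j + s) % n].add(j)
    return n, adj

def f(a, i):   # Gamma_i minus its last vertex -> Gamma_{i-1}, 0-based
    return a if a <= i - 2 else (a - 1 if a <= 2 * i - 2 else a - 2)

def F(b, i):   # Gamma_{i-1} -> Gamma_i, 0-based
    return b if b <= i - 2 else (b + 1 if b <= 2 * i - 4 else b + 2)

for i in range(2, 7):
    n, big = gamma_graph(i)
    m, small = gamma_graph(i - 1)
    deleted = n - 1
    for u in range(n):
        for v in big[u]:
            if deleted not in (u, v):
                assert f(v, i) in small[f(u, i)]   # f is a homomorphism
    for u in range(m):
        for v in small[u]:
            assert F(v, i) in big[F(u, i)]         # F is a homomorphism
print("f and F are homomorphisms for i = 2..6")
\end{verbatim}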
Several known results classify triangle-free graphs according to minimum degree. In particular, if a triangle-free graph $G$ has $n$ vertices and minimum degree $\delta(G)$, then \begin{itemize} \item for every $i \in \{2,3,\ldots,10\}$, if $\delta(G)>in/(3i-1)$ then
$G$ is homomorphic to $\Gamma_{i-1}$; \item if $\delta(G)>2n/5$ then $G$ is bipartite; \item if $\delta(G)>10n/29$ then $G$ is three-colourable; and \item if $\delta(G)>n/3$ then $G$ is four-colourable. \end{itemize}
Jin~\cite{Jin:Minimal} proves that $\delta(G)>in/(3i-1)$ implies $G$ homomorphic to $\Gamma_{i-1}$ for all $i$ up to $10$. The case $i=2$, which also implies $G$ is bipartite because $\Gamma_1=K_2$, was first proved by Andr\'asfai~\cite{Andrasfai:G18}; a later paper, in English, by Andr\'asfai, Erd\H{o}s, and S\'os, is often cited for this result~\cite{Andrasfai:Connection}. H\"{a}ggkvist proved the case $i=3$~\cite{Haggkvist:Odd}. Three-colourability when $\delta(G)>10n/29$ follows from the three-colourability of $\Gamma_9$. Four-colourability when $\delta(G)>n/3$ is due to Brandt and Thomass\'e~\cite{Brandt:Dense}.
The following property of cycle-maximal graphs applies to graphs of general girth, not only triangle-free graphs: we can limit consideration to 2-connected graphs.
\begin{lemma}\label{lem:maybe-biconn} Let $3\leq g\leq n$. Among all $n$-vertex cycle-maximal graphs for girth at least $g$, there is one that is 2-connected. \end{lemma}
\begin{proof} Because $g\le n$, there exists a graph with one cycle and these parameters. That graph consists of a cycle of length $g$ and $n-g$ degree-zero vertices. Therefore any cycle-maximal graph for girth at least $g$ contains at least one cycle.
Given a disconnected graph with maximal cycle count, choose a vertex $v$; then choose one vertex in each connected component other than the one containing $v$, and add an edge from each of those vertices to $v$. The resulting connected graph contains all and only the cycles from the original, so it has the same girth and cycle count. Therefore we need only consider connected graphs.
Any block either is a single edge, or contains a cycle; if it is a single edge, it cannot be part of any cycle. We can contract it without removing any cycles or decreasing the girth, and then insert one new vertex to replace the one we eliminated, in the middle of some edge that is part of a cycle. Therefore we need only consider blocks that contain cycles, necessarily of at least $g$ vertices.
Suppose there is a cut-vertex $u$. Removing it would disconnect at least two blocks; let $v$ and $w$ be two vertices maximally distant from $u$ that would be disconnected from each other by the removal of $u$. Each of $v$ and $w$ is at least distance $\lfloor g/2 \rfloor$ from $u$. Then by adding an edge $(v,w)$, we create at least one new cycle, but none of length less than $g$. \end{proof}
In the case of triangle-free graphs, Lemma~\ref{lem:maybe-biconn} can be strengthened to require 2-connectedness in all cycle-maximal graphs.
\begin{corollary}\label{cor:definitely-biconn} All cycle-maximal triangle-free graphs with at least four vertices are 2-connected. \end{corollary}
\begin{proof} Suppose $G$ is a cycle-maximal triangle-free graph with at least four vertices. Because $C_4$ contains a cycle, $G$ contains at least one cycle and therefore at least one vertex of degree at least two. If $G$ is disconnected, let $u$ and $v$ be two vertices in distinct components and with the degree of $u$ at least two. Then add edges from $v$ to all neighbours of $u$. These edges do not create any triangles, but create at least one new cycle through $u$, $v$, and two neighbours of $u$, contradicting the cycle-maximality of $G$. Therefore $G$ is connected.
Suppose $G$ contains a block that is a single edge. Then as in the proof of Lemma~\ref{lem:maybe-biconn} we can contract it, removing a vertex while keeping all cycles and not creating any triangles; and then we can add a new vertex $v$ sharing all the neighbours of some vertex $u$ with degree at least two. By doing so we create at least one new cycle through $u$, $v$, and two neighbours of $u$, contradicting the cycle-maximality of $G$. Therefore $G$ contains no blocks that are single edges. All remaining cases are covered by the last paragraph of the proof of Lemma~\ref{lem:maybe-biconn}. \end{proof}
The next property is also specific to the triangle-free case: every edge in a cycle-maximal graph is part of some minimum-length cycle.
\begin{lemma}\label{lem:every-edge-cfour} If $G$ is a cycle-maximal triangle-free graph with at least four vertices, then $G$ is maximal triangle-free and every edge in $G$ is in some 4-cycle. \end{lemma}
\begin{proof} Suppose $u$ and $v$ are non-adjacent vertices in $G$ and adding the edge $(u,v)$ would not create a triangle. By 2-connectedness (Corollary~\ref{cor:definitely-biconn}) there exist two edge-disjoint paths from $u$ to $v$ in $G$, and then adding the edge $(u,v)$ creates at least two new cycles, contradicting cycle-maximality; therefore $G$ is maximal triangle-free.
Suppose $(u,v)$ is an edge in $G$ that is not part of any 4-cycle. Let $G'$ be the graph formed from $G$ by contracting $(u,v)$. This operation cannot create any triangles; and $G'$ contains one less vertex than $G$ and all the cycles of $G$ except any that included both $u$ and $v$ without including the edge $(u,v)$. Let $w$ be the vertex created by the edge contraction; and add a new vertex $w'$ to $G'$ with the same neighbourhood as $w$. For each cycle in $G$ that used $u$ and $v$ without the edge between them, the new graph contains at least one cycle using $w$ and $w'$ instead; and there is also at least one new 4-cycle through $w$, $w'$, and two of their neighbours. (They have at least two neighbours because $G$ was 2-connected.) Therefore we have increased the number of cycles for an $n$-vertex triangle-free graph, contradicting cycle-maximality. Therefore every edge in $G$ is part of some 4-cycle. \end{proof}
Also note that by a result of Erd\H{o}s \emph{et al.}~\cite[Lemma~2.4(ii)]{Erdos:How}, any triangle-free graph (not only maximal or cycle-maximal) with $n$ vertices and $m$ edges has at least one edge contained in at least $4m(2m^2-n^3)/n^2(n^2-2m)$ cycles of length four.
Lemma~\ref{lem:maybe-biconn} and Corollary~\ref{cor:definitely-biconn} do not generalize to higher girth. A graph consisting of the Petersen graph plus one vertex added with degree one is cycle-maximal for 11 vertices and girth at least five, but the edge to the added vertex is not in any 5-cycle, nor any cycle at all, and the graph is not 2-connected.
Finally, we list some simple equivalent conditions for cycle-maximal triangle-free graphs to be the Tur\'an graph. Any counterexample to Conjecture~\ref{con:main} would have to lack all these properties.
\begin{lemma}\label{lem:equivalent-props} If $G$ is a cycle-maximal triangle-free graph with $n\ge 4$ vertices, then these statements are equivalent: \begin{enumerate} \item\label{itm:prop-turan} $G$ is the bipartite Tur\'an graph $T(n,2)$; \item\label{itm:prop-complete} $G$ is complete bipartite; \item\label{itm:prop-bipartite} $G$ is bipartite; \item\label{itm:prop-perfect} $G$ is perfect; \item\label{itm:prop-no-pfour} $G$ contains no induced $P_4$; and \item\label{itm:prop-min-degree} for $n\ne 5$, $G$ has minimum degree
greater than $2n/5$. \end{enumerate} \end{lemma}
\begin{proof} The bipartite Tur\'an graph has all the listed properties ($\ref{itm:prop-turan}\Rightarrow \{ \ref{itm:prop-complete},\ref{itm:prop-bipartite},\ref{itm:prop-perfect}, \ref{itm:prop-no-pfour},\ref{itm:prop-min-degree}\}$), so it remains to prove the implications in the other direction. By exact cycle count, $T(n,2)$ maximizes cycles among complete bipartite graphs (see Corollary~\ref{cor:complete-bipartite}; $\ref{itm:prop-complete}\Rightarrow\ref{itm:prop-turan}$). If $G$ is bipartite, it is necessarily complete bipartite in order to be maximal triangle-free ($\ref{itm:prop-bipartite}\Rightarrow\ref{itm:prop-complete}$). Triangle-free perfect graphs are bipartite as a trivial consequence of the definition ($\ref{itm:prop-perfect}\Rightarrow\ref{itm:prop-bipartite}$). Any graph without an induced $P_4$ is perfect by a result of Seinsche~\cite{Seinsche:Property}, with a simpler proof given by Arditti and de~Werra~\cite{Arditti:Note} ($\ref{itm:prop-no-pfour}\Rightarrow\ref{itm:prop-perfect}$). Any triangle-free graph with minimum degree greater than $2n/5$ is bipartite ($\ref{itm:prop-min-degree}\Rightarrow\ref{itm:prop-bipartite}$)~\cite{Andrasfai:G18,Andrasfai:Connection}. \end{proof}
Our Theorems~\ref{thm:regular-triangle-free} and~\ref{thm:near-reg-triangle-free} have the effect of adding ``$G$ is regular'' to the list of equivalent conditions for all even $n$, and ``$G$ is near-regular'' for odd $n>91$.
\section{Bounds on cycle counts} \label{sec:bounds}
In this section we prove bounds on the numbers of cycles in certain kinds of graphs. We have three basic kinds of bounds, each of which admits some variations. First, for the bipartite Tur\'{a}n graph $T(n,2)$ it is possible to compute the number of cycles exactly for any given $n$, but the resulting expression is a summation; we also find a reasonably tight closed-form lower bound. We can then rule out potential counterexamples to Conjecture~\ref{con:main} by showing upper bounds on the number of cycles in other kinds of graphs. The remaining two kinds of bounds are based on the number of edges, and on homomorphism.
The asymptotic results come from applying Stirling's approximation for the factorial in the following form, which gives precise upper and lower bounds. Note that the approximation is actually an approximation for the gamma function, so we can apply it to non-integer arguments. The approximation is:
\begin{gather}
n \ln n - n + \frac{1}{2}\ln n + \frac{1}{2}\ln 2\pi
\le \ln \Gamma(n+1) \label{eqn:stirling-lb} \, , \text{ and} \\
\ln \Gamma(n+1) \le
n \ln n - n + \frac{1}{2}\ln n + \frac{1}{2}\ln 2\pi + \frac{1}{12}\cdot
\frac{1}{n} \, . \label{eqn:stirling-ub} \end{gather}
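These two-sided bounds can be checked numerically for a few values (a quick illustration using Python's \texttt{math.lgamma}; it is not needed for any of the arguments below):

\begin{verbatim}
from math import lgamma, log, pi

# Check the two-sided Stirling bounds on ln Gamma(n+1) for sample n.
for n in [1, 2, 5, 10, 100, 1000]:
    lower = n * log(n) - n + 0.5 * log(n) + 0.5 * log(2 * pi)
    upper = lower + 1 / (12 * n)
    assert lower <= lgamma(n + 1) <= upper
print("Stirling bounds hold for the sampled values")
\end{verbatim}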
Our general approach will be to prove bounds on $\ln c(G)$ as a function of $n$ for $G$ in various classes of graphs. The bounds typically take the form $n \ln n - cn + O(\ln n)$ for some constant coefficient $c \ge 1$. These amount to proofs that the number of cycles is on the order of $n!$ divided by some exponential function, with the coefficient of $n$ in $\ln c(G)$ describing the size of the exponential function. Comparing the coefficients suffices to show that one class of graphs has more cycles than another for sufficiently large $n$; and with more careful attention to the lower-order terms we can bound the values of $n$ that are ``sufficiently large,'' leaving a known finite number of smaller cases to address with other techniques.
\subsection{Cycles in $T(n,2)$}
It is relatively easy to count cycles in the bipartite Tur\'an graph $T(n,2)$. The following result gives the exact count as a summation, and an asymptotic approximation.
\begin{lemma} The number of cycles in $T(n,2)$ is given exactly by \begin{equation}
c(T(n,2)) =
\sum_{k=2}^{\lfloor n/2 \rfloor} \frac{\lfloor n/2 \rfloor!\lceil n/2
\rceil!}{2k(\lfloor n/2 \rfloor-k)!(\lceil n/2 \rceil-k)!} \, ,
\label{eqn:ktwo-exact} \end{equation} and satisfies the bound \begin{equation} \begin{aligned}
\ln c(T(n,2)) &\ge n \ln n -(1+\ln 2)n + \ln \pi \\
&\approx n \ln n - 1.693147n + 1.44730 \, . \end{aligned} \label{eqn:ktwo-numerical} \end{equation} \end{lemma}
\begin{proof} To describe a cycle in the bipartite Tur\'{a}n graph $T(n,2)$, we can start by choosing a value $k$ to be the number of vertices the cycle includes on each side of the bipartite graph. The length of the cycle will be $2k$, and necessarily $2 \le k \le \lfloor n/2 \rfloor$. Then we choose a permutation for $k$ of the $\lfloor n/2 \rfloor$ vertices in the smaller part, and a permutation for $k$ of the $\lceil n/2 \rceil$ vertices in the larger part. These choices will describe each possible cycle $2k$ times, because there are $k$ equivalent starting points and two equivalent directions. Therefore we divide by $2k$ to avoid overcounting, and the overall total number of cycles is given by \eqref{eqn:ktwo-exact}.
The term for $k=\lfloor n/2 \rfloor$ accounts for a constant fraction of the sum, so we can use it alone as a reasonably tight lower bound. The factorials in the denominator become one and drop out. By the properties of the gamma function, $\lfloor n/2 \rfloor!\lceil n/2 \rceil! \ge \Gamma((n/2)+1)^2$, so we can drop the floors and ceilings in the numerator, use gamma instead of factorial, and have a valid lower bound for both even and odd $n$. Similarly, replacing $2\lfloor n/2\rfloor$ by $n$ in the denominator does not increase the bound. We have: \begin{equation*}
c(T(n,2)) \ge \frac{\Gamma((n/2)+1)^2}{n} \, . \end{equation*}
Applying Stirling's approximation \eqref{eqn:stirling-lb} gives \eqref{eqn:ktwo-numerical}. \end{proof}
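As an informal consistency check (not part of the proof), \eqref{eqn:ktwo-exact} can be compared with direct enumeration for small $n$, and the lower bound \eqref{eqn:ktwo-numerical} can be evaluated numerically. A Python sketch, using our own helper code (the enumeration part is exponential-time):

\begin{verbatim}
from math import factorial, log, pi

def exact_count(n):
    # Exact cycle count of T(n,2) from the summation in the lemma.
    a, b = n // 2, (n + 1) // 2
    return sum(factorial(a) * factorial(b)
               // (2 * k * factorial(a - k) * factorial(b - k))
               for k in range(2, a + 1))

def brute_force(n):
    # Count the cycles of T(n,2) directly by DFS (feasible only for small n).
    a, b = n // 2, (n + 1) // 2
    adj = [set() for _ in range(n)]
    for i in range(a):
        for j in range(b):
            adj[i].add(a + j); adj[a + j].add(i)
    total = 0
    def dfs(start, v, visited):
        nonlocal total
        for w in adj[v]:
            if w == start and len(visited) >= 3:
                total += 1
            elif w > start and w not in visited:
                dfs(start, w, visited | {w})
    for s in range(n):
        dfs(s, s, {s})
    return total // 2

for n in range(4, 10):
    assert exact_count(n) == brute_force(n)

n = 20
assert log(exact_count(n)) >= n * log(n) - (1 + log(2)) * n + log(pi)
print("exact count matches brute force; the lower bound holds at n = 20")
\end{verbatim}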
The following corollary confirms the intuition that $T(n,2)$ should have more cycles than a less-balanced complete bipartite graph.
\begin{corollary}\label{cor:complete-bipartite} The graph $T(n,2)$ for $n\ge 4$ is uniquely cycle-maximal among complete bipartite graphs on $n$ vertices. \end{corollary}
\begin{proof} The requirement $n\ge 4$ is to rule out pathological cases in which no cycles are possible at all. Let $a$ and $b$ represent the sizes of the two parts, with $n=a+b$ and assume without loss of generality $a\le b$. The number of cycles in $K_{a,b}$ is a suitably modified version of \eqref{eqn:ktwo-exact}: \begin{align*}
c(K_{a,b})
&= \sum_{k=2}^{a} \frac{a!b!}{2k(a-k)!(b-k)!} \\
&= \sum_{k=2}^{a} \frac{1}{2k}
\cdot (ab)
\cdot \left( (a-1)\cdot(b-1) \right)
\cdots
\left( (a-k+1)\cdot(b-k+1) \right) \, . \end{align*}
If $b>a+1$, then subtracting one from $b$ and adding one to $a$ will strictly increase all the factors $(ab)$, $\left( (a-1)\cdot(b-1)\right)$, and so on. Making this change will also add an additional positive term to the sum. Therefore the sum is uniquely maximized when $b\le a+1$, which means the graph is $T(n,2)$. \end{proof}
\subsection{Cycles as a function of number of edges}
It seems intuitively reasonable that more edges should mean more cycles. We can make that more precise by giving an upper bound on number of cycles as a function of number of edges, and therefore (by comparison with the previous bound) a lower bound on number of edges necessary for a graph to potentially exceed the number of cycles in the bipartite Tur\'{a}n graph. First, we define notation for the maximal product of a constrained sequence of integers, which will be used in bounding the cycle count.
\begin{definition}\label{def:pi-func} Let $\Pi(n,m)$, with $2\le m \le \binom{n}{2}$, denote the greatest possible product for any $k< n$ of a sequence of positive integers $c_1,c_2,\ldots,c_{k}$ with $c_i\le n-i$ for all $1 \le i\le k$ and $\sum_{i=1}^{k} c_i = m$. \end{definition}
The following lemma describes the value of $\Pi(n,m)$.
\begin{lemma}\label{lem:pifunc-value} If $m=\binom{n}{2}$, then \begin{equation}
\Pi(n,m) = (n-1)! \,. \label{eqn:pi-factorial} \end{equation} If $2\le m \le 3n-7$, then \begin{equation}
\Pi(n,m) =
\begin{cases}
3^{m/3}
& \text{ for } m \equiv 0 \pmod{3}; \\
4\cdot 3^{(m-4)/3}
& \text{ for } m \equiv 1 \pmod{3}; \text{ and} \\
2\cdot 3^{(m-2)/3}
& \text{ for } m\equiv 2 \pmod{3}\, .
\end{cases} \label{eqn:pi-unconstrained} \end{equation} If $3n-7<m<\binom{n}{2}$, then $k=n-2$ and there exist integers $s\ge 3$ and $t\ge 0$ such that \begin{equation}
\Pi(n,m) = (s+1)^t s^{n-s-t} (s-1)! \, . \label{eqn:pi-constrained} \end{equation} \end{lemma}
\begin{proof} In the case $m=\binom{n}{2}$, the only sequence satisfying the constraints is $n-1, n-2, \ldots, 1$ and $\Pi$ is the product of that sequence, giving \eqref{eqn:pi-factorial}.
Sorting the $c_i$ into nonincreasing order cannot cause them to violate the constraints, so we assume it. Removing a $c_i$ term greater than $3$ and replacing it with two terms $c_i-2$ and $2$ will never decrease the product. Removing a term equal to $1$ and adding $1$ to some other term will always increase the product, as will removing three terms equal to $2$ and replacing them with two terms equal to $3$. Repeated application of these rules uniquely determines a sequence ending with at most two terms equal to $2$, all other terms equal to $3$, and if the constraints allow this sequence, then it determines $\Pi$, giving \eqref{eqn:pi-unconstrained}.
Subtracting $1$ from a term $c_i>3$ and adding $1$ to some other term less than $c_i-1$ will always increase the product. Repeated application of that operation and the operations used for \eqref{eqn:pi-unconstrained}, wherever permitted by the constraints, uniquely determines a sequence in the form given by \eqref{eqn:pi-constrained}. \end{proof}
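The closed forms in Lemma~\ref{lem:pifunc-value} can be cross-checked against a brute-force maximization for small parameters. The following Python sketch (illustration only; it searches all admissible sequences directly and is exponential-time) checks \eqref{eqn:pi-factorial} and \eqref{eqn:pi-unconstrained} for $n=7$.

\begin{verbatim}
from math import factorial

def Pi(n, m):
    # Brute force over Definition: maximize the product of positive integers
    # c_1, ..., c_k (k < n) with c_i <= n - i and sum equal to m.
    best = 0
    def rec(i, remaining, prod):
        nonlocal best
        if remaining == 0:
            best = max(best, prod)
            return
        if i >= n:
            return
        for c in range(1, min(n - i, remaining) + 1):
            rec(i + 1, remaining - c, prod * c)
    rec(1, m, 1)
    return best

n = 7
assert Pi(n, n * (n - 1) // 2) == factorial(n - 1)        # the case m = C(n,2)
for m in range(2, 3 * n - 6):                             # the cases 2 <= m <= 3n-7
    closed = {0: 3 ** (m // 3),
              1: 4 * 3 ** ((m - 4) // 3),
              2: 2 * 3 ** ((m - 2) // 3)}[m % 3]
    assert Pi(n, m) == closed, (m, Pi(n, m), closed)
print("Pi(7, m) matches the closed forms")
\end{verbatim}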
Now the $\Pi$ function is applied to bound the number of cycles.
\begin{lemma}\label{lem:edge-bound} If a graph $G$ has $n$ vertices, $m$ edges, and girth at least $g$, then \begin{equation}
c(G) \le \Pi(n-1,m) \frac{n^2}{2g} \label{eqn:edges-pifunc}
\, , \end{equation} and if $3n-7<m<\binom{n}{2}$, \begin{equation}
\ln c(G) \le
n \ln n
- (\alpha - \ln \alpha) n
+ \frac{5}{2} \ln n
+ \frac{1}{2} \ln \alpha
+ \frac{1}{2}\ln \frac{\pi}{2}
- \ln g
+ \frac{1}{12\alpha n}
\label{eqn:edges-stirling}
\, , \end{equation} where $\alpha = 1-\sqrt{1-1/n-2(m+1)/n^2}$. \end{lemma}
\begin{proof} Suppose we are counting Hamiltonian cycles in a complete graph. We might start at the first vertex, leave via one of its $n-1$ edges, then from the next vertex, choose one of the $n-2$ edges remaining (excluding the one from the first vertex), and so on. At the last vertex, there are no remaining edges to previously unvisited vertices, and we return to the starting point. Overall there are $(n-1)!$ choices of successor vertices, which suffices as an upper bound. Note that $(n-1)!$ is the product of $n-1$ positive integer factors whose sum is exactly the number of edges in the complete graph. Every time we consider an edge as a choice for leaving a vertex, that edge is eliminated from consideration for all future vertices, hence the bound on the sum. The last few factors in the sequence are $3,2,1$ because we can only visit a previously unvisited vertex and no term can be greater than the number of previously unvisited vertices that remain.
For a more general graph $G$ with $n$ vertices, $m$ edges, girth at least $g$, and cycles that might not be maximal length, we can follow a similar procedure. There are at most $n-1$ positive integers representing choices of successors of all but the last vertex; their sum is at most $m$; and if the factors are $c_1,c_2,\ldots,c_{k}$, the remaining vertex constraint is $c_i\le n-i$ for all $1 \le i \le k$. By definition, $\Pi(n,m)$ is an upper bound on the product of such a sequence.
Since we are not requiring cycles to be Hamiltonian, we cannot assume that any single vertex is the first one in the cycle or is in the cycle at all, so we multiply the bound by $n$ to account for choosing any starting vertex. To account for choosing the length, we multiply by $n$ for choosing which vertex is the last vertex, assume that the cycle closes as soon as it reaches that vertex, and then any remaining choices we may have counted for vertices not in the cycle will only go to make the upper bound a little less tight. Finally, we can remove a small amount of overcounting. With a girth of $g$ (necessarily at least $3$) there will be $g$ distinct choices of starting vertex that actually generate the same cycle; and we can always generate each cycle in two equivalent directions. So we can divide by $2g$ and still have a valid upper bound. Multiplying $\Pi(n,m)$ for choices of successors with $n^2$ for choices of starting and ending vertices, and dividing by $2g$, gives exactly the bound \eqref{eqn:edges-pifunc}.
The form of the sequence $c_i$ that achieves $\Pi(n,m)$ is described in Lemma~\ref{lem:pifunc-value}. In the case $3n-7<m<\binom{n}{2}$ (dense but not complete graphs), this sequence is of length $n-2$ and in general is of the form $\lceil \alpha n \rceil, \ldots, \lceil \alpha n \rceil, \lfloor \alpha n \rfloor, \ldots, \lfloor \alpha n \rfloor, \lfloor \alpha n \rfloor -1, \lfloor \alpha n \rfloor-2, \ldots, 4,3,2$, for some $\alpha$ chosen so that the sum of the sequence is $m$. Note that there is no final factor of $1$ counted in the sequence, because adding it to an earlier term gives a greater product. These factors are shown schematically in Figure~\ref{fig:cg-factors}.
\begin{figure}
\caption{Factors in the upper bound on $\Pi(n,m)$.}
\label{fig:cg-factors}
\end{figure}
If $\alpha n$ is an integer, then this product is $(\alpha n)^{(1-\alpha)n}(\alpha n -1)!$. The sum is $(\alpha n)(1-\alpha)n+(\alpha n -1 + \alpha n -2 + \cdots + 3 + 2)$. Setting that to $m$ and applying the usual formula for the sum of consecutive integers, we have $(-n^2/2)\alpha^2 + (n^2-n/2)\alpha -m-1=0$. Solving the quadratic, choosing the solution between $0$ and $1$, and removing some terms for an upper bound, gives
\alpha &= 1-\frac{1}{2n}
-\sqrt{1-\frac{1}{n}+\frac{1}{4n^2}-\frac{2(m+1)}{n^2}} \\
&\le 1-\sqrt{1-\frac{1}{n}-\frac{2(m+1)}{n^2}} \, . \end{align*}
For $\alpha n$ not an integer, removing the floors and ceilings outside the factorial can only increase the product, because making those terms equal maximizes their product given that their sum is fixed. The factorial is at most $\lceil \alpha n -1 \rceil!$, and changing it to $\Gamma(\lceil \alpha n -1 \rceil + 1) \le \Gamma(\alpha n +1)$ similarly cannot decrease the product. Where $\alpha=1-\sqrt{1-1/n-2(m+1)/n^2}$, we have \begin{equation*}
\Pi(n,m) \le
(\alpha n)^{(1-\alpha)n} \Gamma(\alpha n + 1) \, . \end{equation*} The result \eqref{eqn:edges-stirling} follows by Stirling's approximation~\eqref{eqn:stirling-ub}: \begin{align*} \ln c(G) &\le (1-\alpha)n \ln \alpha n + \ln \Gamma(\alpha n+1)
+ \ln \frac{n^2}{2g} \\
&\le n \ln n
- (\alpha - \ln \alpha) n
+ \frac{5}{2} \ln n
+ \frac{1}{2} \ln \alpha
+ \frac{1}{2}\ln \frac{\pi}{2}
- \ln g
+ \frac{1}{12\alpha n} \, . \qedhere \end{align*} \end{proof}
A cycle-maximal triangle-free graph $G$ necessarily contains enough edges for \eqref{eqn:edges-stirling} to exceed \eqref{eqn:ktwo-numerical}. For sufficiently large $n$, the coefficients of $n$ in the bounds on $\ln c(G)$ will determine which bound is greater; for \eqref{eqn:edges-stirling} to exceed \eqref{eqn:ktwo-numerical} requires that $\alpha-\ln \alpha\le 1+\ln 2$. Then $\alpha \ge 0.231961\ldots$ and $2m/n$ (the average degree of $G$) is at least $n(0.410116\ldots)$. Critically, that is greater than $2n/5$. In a graph that is regular, or close to regular in the sense that the difference between minimum and maximum degrees is bounded by some constant, the minimum degree approaches the average and so is also greater than $2n/5$ for sufficiently large $n$. But any triangle-free graph with minimum degree greater than $2n/5$ is bipartite~\cite{Andrasfai:G18,Andrasfai:Connection}, giving the following corollary.
\begin{corollary}\label{cor:near-regular} Let $k$ be any fixed nonnegative integer and let $G$ be any cycle-maximal triangle-free graph with $n$ vertices and $\Delta(G)-\delta(G)\le k$. Then for sufficiently large $n$, $G$ is the bipartite Tur\'{a}n graph. \end{corollary}
In particular, note that $C_5(t)$, which is an important case for many previous results on maximal triangle-free graphs including that of Andr{\'a}sfai used above~\cite{Andrasfai:G18,Andrasfai:Connection}, is regular with degree exactly $2n/5$ and so is \emph{not} cycle-maximal triangle-free once $n$ is sufficiently large. It does not have enough edges to be cycle-maximal triangle-free. Neither does any other non-bipartite regular graph for sufficiently large $n$. Later in the present work, when we show that no regular graph is a counterexample to Conjecture~\ref{con:main}, we need only consider the finite number of cases in which $n$ is not ``sufficiently large.''
However, this result concerns average degree, not minimum degree. A graph could have a large gap between average and minimum degrees. For instance, the graph formed by inserting a degree-two vertex in one edge of $T(n-1,2)$ has average degree approaching $n/2$ despite its minimum degree being fixed at $2$; and although it clearly has fewer cycles than $T(n,2)$, Lemma~\ref{lem:edge-bound} is not strong enough to prove that. Note that by a result of Erd\H{o}s~\cite[Lemma~1]{Erdos:Rademacher}, this graph also contains the maximum possible number of edges for a non-bipartite triangle-free graph on $n$ vertices.
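To make the numerical threshold above explicit, note that for large $n$ the defining relation for $\alpha$ in Lemma~\ref{lem:edge-bound} simplifies to $\alpha\approx 1-\sqrt{1-2m/n^2}$, so $2m/n^2\approx\alpha(2-\alpha)$; solving $\alpha-\ln\alpha=1+\ln 2$ then recovers the constants $0.231961\ldots$ and $0.410116\ldots$ quoted above. A short illustrative computation (the bisection solver is ours):

\begin{verbatim}
from math import log

# Solve alpha - ln(alpha) = 1 + ln(2) on (0, 1) by bisection; the function
# alpha - ln(alpha) is decreasing there, so bisection is straightforward.
target = 1 + log(2)
lo, hi = 1e-9, 1.0
for _ in range(100):
    mid = (lo + hi) / 2
    if mid - log(mid) > target:   # still left of the root
        lo = mid
    else:
        hi = mid
alpha = (lo + hi) / 2
print(round(alpha, 6), round(alpha * (2 - alpha), 6))   # about 0.231961 0.410116
\end{verbatim}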
\subsection{Cycle bounds from homomorphisms}
Several important results on maximal triangle-free graphs amount to proving that a graph $G$ with certain properties is necessarily homomorphic to some fixed, usually small, graph $H$. The following lemmas provide bounds on the number of cycles in a graph with that kind of homomorphism; first for $G$ a uniform blowup of $H$, and then more generally where the sizes of the sets mapping onto each vertex of $H$ are known but not necessarily all the same.
\begin{lemma}\label{lem:hmorph-bound} If $G$ and $H$ are graphs with $n$ and $p$ vertices respectively, $n$ an integer multiple of $p$, $G$ is a subgraph of $H(n/p)$, $g$ is the girth of $G$, and $q=\Delta(H)$, then \begin{gather}
c(G) \le q^n \left[ \left( \frac{n}{p} \right) ! \right]^p
\frac{n}{2g} \label{eqn:hmorph-qn}
\, , \text{ and} \\
\ln c(G) \le
n \ln n
- \left( 1 + \ln \frac{p}{q} \right) n
+ \left( 1 + \frac{p}{2}\right) \ln n
+ \frac{p}{2}\ln \frac{2\pi}{p}
- \ln 2g
+ \frac{p^2}{12n}
\, .
\label{eqn:hmorph-stirling} \end{gather} \end{lemma}
\begin{proof} For each vertex in $G$, we will choose a successor in $H$. There are at most $q^n$ ways to do that. By also choosing a permutation for the $n/p$ vertices in $G$ corresponding to each of the $p$ vertices of $H$ (overall $(n/p)!^p$ choices), we can uniquely determine a successor for each vertex in the cycle. Note that we can choose any arbitrary successors for vertices not in the cycle, since we have not limited the total number of times we might choose a vertex of $H$; special handling of non-cycle vertices as in Lemma~\ref{lem:perm-bound} is not necessary here.
The starting vertex is determined by choosing one of the $p$ partitions. To determine the length of the cycle, bearing in mind that the cycle can only end when it returns to its initial partition, we can choose how many of the $n/p$ vertices in the initial partition to include in the cycle. Multiplying those factors, the $p$ cancels out, leaving a factor of $n$ for the choice of both starting vertex and cycle length. Alternately, this choice can be viewed as selecting from among $n$ vertices one to be the \emph{last} vertex in the cycle, with the starting partition implicitly the partition containing that vertex, and the starting vertex implicitly the first one in the starting partition according to the earlier-counted vertex permutations. We can also remove a factor of $2g$ because any cycle (necessarily of length at least $g$) can be described using any of $2$ directions and at least $g$ starting vertices. Multiplying all these factors gives \eqref{eqn:hmorph-qn}.
Then \eqref{eqn:hmorph-stirling} follows by Stirling's approximation as follows: \begin{align*} \ln c(G) &\le n \ln q + p\left[ \frac{n}{p} \ln \frac{n}{p} - \frac{n}{p}
+\frac{1}{2}\ln \frac{n}{p} + \frac{1}{2} \ln 2\pi +\frac{p}{12n}\right]
+ \ln \frac{n}{2g} \\
&= n \ln q + n \ln \frac{n}{p} - n + \frac{p}{2} \ln \frac{n}{p}
+\frac{p}{2} \ln 2\pi + \frac{p^2}{12n} + \ln \frac{n}{2g} \\
&= n \ln n
- \left( 1 + \ln \frac{p}{q} \right) n
+ \left( 1 + \frac{p}{2}\right) \ln n
+ \frac{p}{2}\ln \frac{2\pi}{p}
- \ln 2g
+ \frac{p^2}{12n}
\, . \qedhere \end{align*} \end{proof}
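As a small illustration of \eqref{eqn:hmorph-qn} (not needed for any proof), take $G=K_{3,3}$, which is a subgraph of $K_2(3)$, so that $p=2$, $q=1$ and $g=4$: the bound evaluates to $1^6\cdot(3!)^2\cdot 6/8=27$, while $K_{3,3}$ has exactly $15$ cycles (nine $4$-cycles and six $6$-cycles, by \eqref{eqn:ktwo-exact}). In Python:

\begin{verbatim}
from math import factorial

# Bound from the q^n lemma for G = K_{3,3} as a subgraph of K_2(3).
n, p, q, g = 6, 2, 1, 4
bound = q ** n * factorial(n // p) ** p * n / (2 * g)

# Exact cycle count of K_{3,3} from the summation for T(6,2).
exact = sum(factorial(3) ** 2 // (2 * k * factorial(3 - k) ** 2)
            for k in range(2, 4))

assert exact <= bound     # 15 <= 27
print(exact, bound)
\end{verbatim}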
Lemma~\ref{lem:hmorph-bound} can potentially overcount by a significant margin because of the $q^n$ term, which allows each vertex of $G$ to choose a successor in $H$ without restriction. A Hamiltonian cycle in $G$ would necessarily visit each vertex of $H$ exactly $n/p$ times, not any arbitrary number of times; many of the $q^n$ successor-in-$H$ choices involve choosing a vertex of $H$ more than $n/p$ times and so cannot actually correspond to feasible full-length cycles in $G$. There are many fewer than $q^n$ ways to choose each vertex of $H$ exactly $n/p$ times while obeying the other applicable constraints. The situation is complicated somewhat by the possibility of non-Hamiltonian cycles, but it remains that the bound \eqref{eqn:hmorph-qn} is quite loose for many graphs of interest.
The matrix permanent offers a way to prove a tighter upper bound on cycles given a homomorphism. The following result replaces the successor choice in Lemma~\ref{lem:hmorph-bound} with a computation of the permanent of the adjacency matrix of the graph. Choosing a permutation of the rows and columns for which all the chosen entries of the adjacency matrix are nonzero corresponds to choosing a neighbour as successor for each vertex in the graph such that each vertex is chosen exactly once, and the permanent counts such choices, including all Hamiltonian cycles. To allow for non-Hamiltonian cycles, which might not involve all vertices, we add loops to all the vertices, corresponding to ones along the diagonal of the matrix, allowing any vertex to choose itself as successor and therefore not need to be chosen by any other vertex. The result is a simple upper bound on number of cycles. This approach is also more easily applicable to non-uniform blowups; that is, where different vertices in $H$ do not all correspond to the same size of independent sets in $G$. We will discuss later how to compute the permanent efficiently for the cases of interest here.
\begin{lemma}\label{lem:perm-bound} In a graph $G$ with $n$ vertices whose adjacency matrix is $(g_{ij})$, \begin{equation}
c(G) \le \frac{1}{2}\perm \left( (g_{ij}) + I_n \right ) \, .
\label{eqn:perm-bound} \end{equation} Furthermore, if $G$ is homomorphic to a graph $H$ with $p$ vertices labelled $1\ldots p$ and adjacency matrix
$(h_{ij})$, via a homomorphism $f:V(G) \rightarrow V(H)$ that maps $n_i=|f^{-1}(i)|$ vertices of $G$ to each vertex $i$ of $H$, then \begin{equation}
c(G) \le \frac{1}{2}\perm
\begin{pmatrix}
I_{n_1} & h_{12}J_{n_1n_2} & \ldots & h_{1p}J_{n_1n_p} \\
h_{21}J_{n_2n_1} & I_{n_2} & \ldots & h_{2p}J_{n_2n_p} \\
\vdots & \vdots & \ddots & \vdots \\
h_{p1}J_{n_pn_1} & h_{p2}J_{n_pn_2} & \ldots & I_{n_p}
\end{pmatrix} \label{eqn:perm-ibound} \, . \end{equation}
\end{lemma} \begin{proof} A directed cycle cover, or oriented 2-factor, of $G$ is a choice, for each vertex $v$ in $G$, of a successor vertex adjacent to $v$ such that each vertex is chosen exactly once. If we add a loop to every vertex of $G$ (making each vertex adjacent to itself), then every cycle in $G$ is uniquely determined by at least two directed cycle covers of the resulting graph: namely those in which the cycle vertices choose their successors in the cycle, going around the cycle in either direction, and any other vertices choose themselves. The permanent of $\left( (g_{ij}) + I_n \right )$ counts exactly those directed cycle covers, and dividing it by two for the two directions gives \eqref{eqn:perm-bound}.
When $G$ is homomorphic to $H$, we can assume for an upper bound that $G$ contains all edges allowed by the homomorphism; adding edges does not decrease the number of cycles. Then \eqref{eqn:perm-ibound} is just \eqref{eqn:perm-bound} applied to the maximal graph. \end{proof}
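For a small illustration of \eqref{eqn:perm-bound}: when $G=C_5$ with its vertices numbered consecutively around the cycle, the matrix $(g_{ij})+I_5$ is the circulant 0-1 matrix with first row $(1,1,0,0,1)$, and its permanent counts the permutations $\sigma$ with $\sigma(i)\in\{i-1,i,i+1\}\pmod 5$: the identity, the two rotations, and the products of disjoint adjacent transpositions (one for each of the ten non-empty matchings of $C_5$), for a permanent of $13$. The lemma then gives $c(C_5)\le\lfloor 13/2\rfloor=6$, while $C_5$ in fact contains a single cycle.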
\section{Cycles in regular triangle-free graphs} \label{sec:regular}
By Corollary~\ref{cor:near-regular}, no regular graph with $n$ vertices except $T(n,2)$ can be cycle-maximal triangle-free for $n$ sufficiently large. In this section we show that this statement in fact applies to all $n$.
Recall that a maximal triangle-free graph with $n$ vertices and minimum degree greater than $10n/29$ is homomorphic to some $\Gamma_i$ with $i\le 9$. If the graph is also regular, the following lemma narrows the possibilities further.
\begin{lemma}\label{lem:gamma-blowup} An $n$-vertex regular maximal triangle-free graph $G$ homomorphic to some $\Gamma_i$ is exactly $\Gamma_j(n/(3j-1))$ for some $j \le i$. \end{lemma}
\begin{proof} Edge-maximality implies $G$ is exactly $\Gamma_i$ with all vertices replaced by independent sets and all the edges that are allowed by the homomorphism; that is, $\Gamma_i(n_1,n_2,\ldots,n_p)$ with $p=3i-1$. Suppose one of those independent sets is empty; then some $n_k=0$ and $G$ is homomorphic to $\Gamma_i$ minus one vertex. But by Lemma~\ref{lem:gamma-deleted}, deleting a vertex from $\Gamma_i$ leaves a graph homomorphic to $\Gamma_{i-1}$. By transitivity $G$ is homomorphic to $\Gamma_{i-1}$, and by induction there exists $j\le i$ such that $G=\Gamma_j(n_1,n_2,\ldots,n_{3j-1})$ with all the $n_k>0$.
The neighbourhoods of $v_{2j}$ and $v_{2j+1}$ in $\Gamma_j$ are $\{v_1,v_2,\ldots,v_j\}$ and $\{v_2,v_3,$ $\ldots,$ $v_{j+1}\}$ respectively; these differ only by the substitution of $v_{j+1}$ for $v_1$. If $G$ is regular, we have \begin{equation*}
\sum_{k=1}^{j} n_k = \sum_{k=2}^{j+1} n_k \quad\text{and}\quad
n_1 = n_{j+1} \, . \end{equation*} Symmetrically around the cycle, $n_k=n_{j+k}$ for all $k$, taking the subscripts modulo $3j-1$. Because $j$ and $3j-1$ are coprime (any common divisor of the two would divide $1$), stepping repeatedly by $j$ modulo $3j-1$ visits every index, so these equalities link all the vertices of $\Gamma_j$ in a single cycle. Then all the $n_k$ are equal, and $G=\Gamma_j(n/(3j-1))$. \end{proof}
Figure~\ref{fig:cases-graphical} summarizes the regular graphs of interest according to number of vertices and regular degree. The horizontal line at $2m/n^2=2/5$ represents the known result that minimum degree greater than $2n/5$ in a triangle-free graph implies the graph is bipartite; anything strictly above that line is bipartite. On or below that line, but above the horizontal line at $2m/n^2=10/29$, Lemma~\ref{lem:gamma-blowup} implies only symmetric blowups of $\Gamma_i$ graphs (denoted by circles in the figure) could be regular counterexamples to Conjecture~\ref{con:main}. And Corollary~\ref{cor:near-regular} implies that the curve labelled ``\eqref{eqn:ktwo-numerical} and \eqref{eqn:edges-stirling}'' eventually crosses (and then permanently remains above) the line at $2m/n^2=2/5$, somewhere to the right of the region shown; it is approaching an asymptote at $2m/n^2\approx 0.41 > 2/5$, and therefore the number of $\Gamma_i(t)$ to consider is finite.
\begin{figure}
\caption{Cases and bounds.}
\label{fig:cases-graphical}
\end{figure}
The bounds \eqref{eqn:edges-stirling} and \eqref{eqn:hmorph-stirling} complement each other, as shown in Figure~\ref{fig:cases-graphical}; the first works well for $\Gamma_i$ with relatively large $i$ and the second works well with relatively small $i$. Applying both, we can exclude all blowups of $\Gamma_i$ for $2\le i \le 9$ except these: $\Gamma_2(t)$ for $t\le 9$; $\Gamma_3(t)$ for $t\le 6$; $\Gamma_4(t)$ for $t\le 5$; $\Gamma_5(t)$ for $t\le 5$; $\Gamma_6(t)$ for $t\le 4$; $\Gamma_7(t)$ for $t\le 3$; $\Gamma_8(t)$ for $t\le 2$; $\Gamma_9(t)$ for $t\le 2$.
By comparing \eqref{eqn:ktwo-exact} and \eqref{eqn:edges-pifunc}, which are tighter but not closed-form versions of \eqref{eqn:edges-stirling} and \eqref{eqn:hmorph-stirling}, we can exclude a few more. This comparison is shown by the zigzag dotted line in the figure; it assumes roughly the same shape and is tending to the same asymptote as the curve for \eqref{eqn:edges-stirling} and \eqref{eqn:hmorph-stirling}, because it comes from the same calculation. The zigzag pattern seems to result from parity effects in \eqref{eqn:ktwo-exact}. Although we conjecture that $T(n,2)$ is cycle-maximal for both even and odd $n$, $T(n,2)$ is Hamiltonian only for even $n$. With odd $n$, there is always at least one vertex not included in each cycle. The fact that maximal-length cycles are a little shorter, and therefore less numerous, when $n$ is odd makes $T(n,2)$ relatively poor in cycles for odd $n$ overall, because almost all cycles are maximal-length or very close. The bound \eqref{eqn:edges-pifunc} has no special dependence on parity, and so the gap between it and \eqref{eqn:ktwo-exact} tends to be narrower for odd $n$, creating the zigzag pattern. Using this bound allows us to eliminate as possibilities $\Gamma_4(4)$, $\Gamma_4(5)$, all $\Gamma_5(t)$ except $\Gamma_5$ itself, all $\Gamma_6(t)$ except $\Gamma_6$ itself, and all $\Gamma_7(t)$, $\Gamma_8(t)$, and $\Gamma_9(t)$. These computations, and the integer programming below, were performed in the \eclipse\ constraint logic programming environment, which provides easy access to backtracking search and large integer arithmetic~\cite{Schimpf:Eclipse}.
Only 20 cases remain for maximal triangle-free graphs that are regular with degree $>10n/29$. All are eliminated by comparing \eqref{eqn:ktwo-exact} with \eqref{eqn:perm-ibound} except $\Gamma_2(1)=C_5$, which has one cycle and therefore is not cycle-maximal by comparison with $T(5,2)=K_{2,3}$, which has three cycles. The numerical values for these cases are included in Appendix~\ref{sec:numerics}.
At this point we have eliminated as possible counterexamples to Conjecture~\ref{con:main} all regular graphs above the $2m/n^2=10/29$ line in Figure~\ref{fig:cases-graphical}. Then the comparison of \eqref{eqn:ktwo-numerical} with \eqref{eqn:edges-stirling} eliminates all regular graphs with $n>61$. Any remaining regular counterexamples are described by integers $n$ (number of vertices) and $\delta$ (regular degree) satisfying these constraints: \begin{equation}
\begin{gathered}
3 \le n \le 61 \, , \\
2 \le \delta \le 10n/29 \, , \text{ and} \\
m = n\delta/2 \text{ is an integer.}
\end{gathered}
\label{eqn:regular-intpro-csts} \end{equation}
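These constraints are simple enough to enumerate directly. The following Python sketch, included only for illustration and separate from the \eclipse\ programs used for the reported results, lists the qualifying pairs by brute force, using the integer form $29\delta\le 10n$ of the degree constraint.
\begin{verbatim}
# Enumerate the (n, delta) pairs allowed by the constraints above:
# 3 <= n <= 61, 2 <= delta <= 10n/29 (i.e. 29*delta <= 10*n),
# and m = n*delta/2 an integer (i.e. n*delta even).
pairs = [(n, d)
         for n in range(3, 62)
         for d in range(2, n)
         if 29 * d <= 10 * n and (n * d) % 2 == 0]
print(len(pairs))   # prints 428, the count quoted below
\end{verbatim}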
There are 428 pairs of $(n,\delta)$ satisfying \eqref{eqn:regular-intpro-csts}. All are excluded by comparing \eqref{eqn:ktwo-exact} with \eqref{eqn:edges-pifunc}. No more cases remain, so the only regular graphs that can be cycle-maximal triangle-free are of the form $T(n,2)$. Finally, note that $T(n,2)$ is a regular graph only when $n$ is even, so we have the following result.
\begin{theorem} \label{thm:regular-triangle-free} If $G$ is a regular cycle-maximal triangle-free graph with $n$ vertices, then $n$ is even and $G$ is $K_{n/2,n/2}$. \end{theorem}
\section{Cycles in near-regular triangle-free graphs} \label{sec:near-regular}
If the minimum and maximum degrees in a graph differ by one, we will call the graph \emph{near-regular}. Note that this definition is strict: regular graphs are not near-regular. When the minimum degree in a near-regular graph is at most $2n/5$, then by counting $n-1$ vertices of degree $(2n/5)+1$ and one vertex of degree $2n/5$, the maximum possible number of edges is \begin{equation*}
\frac{n^2}{5} + \frac{n-1}{2} \, . \end{equation*}
By substituting that into \eqref{eqn:edges-stirling} and comparing with \eqref{eqn:ktwo-numerical}, any near-regular cycle-maximal triangle-free graph that is not $T(n,2)$ can have at most 804 vertices.
To any near-regular graph $G$ we can assign the integer variables $n$ (number of vertices); $m$ (number of edges); $\delta$ and $\Delta$ (the lower and higher degrees respectively); and $n_\delta$ and $n_\Delta$ (number of low and high-degree vertices respectively). This collection of variables is redundant, but naming them all explicitly makes the constraints simpler. With the upper bound of $804$ vertices, and comparing \eqref{eqn:ktwo-exact} with \eqref{eqn:edges-pifunc}, the following constraints apply to any near-regular triangle-free graph that could be a counterexample to Conjecture~\ref{con:main}. \begin{equation}
\begin{gathered}
4\le n \le 804, \\
n=n_\delta+n_\Delta, \, n_\delta>0, \, n_\Delta>0, \\
2\le \delta\le \frac{2n}{5}, \, \Delta=\delta+1, \\
m=\frac{1}{2}n_\delta\delta+\frac{1}{2}n_\Delta\Delta, \text{ and} \\
\Pi(n,m) \frac{n^2}{8} \ge
\sum_{k=2}^{\lfloor n/2 \rfloor} \frac{\lfloor n/2 \rfloor!\lceil n/2
\rceil!}{2k(\lfloor n/2 \rfloor-k)!(\lceil n/2 \rceil-k)!} \, .
\end{gathered}
\label{eqn:nearreg-csts} \end{equation}
By computer search with \eclipse~\cite{Schimpf:Eclipse}, $n\le 435$; and we can obtain tighter bounds on $n$ for specific classes of graphs by further constraining the minimum degree. \begin{itemize}
\item If $G$ is not homomorphic to $\Gamma_2$, $\delta\le 3n/8$
and then $n\le 91$.
\item If $G$ is not homomorphic to $\Gamma_3$, $\delta\le 4n/11$
and then $n\le 61$.
\item If $G$ is not homomorphic to $\Gamma_4$, $\delta\le 5n/14$
and then $n\le 51$.
\item If $G$ is not homomorphic to $\Gamma_5$, $\delta\le 6n/17$
and then $n\le 51$.
\item If $G$ is not homomorphic to $\Gamma_6$, $\delta\le 7n/20$
and then $n\le 43$.
\item If $G$ is not homomorphic to $\Gamma_7$, $\delta\le 8n/23$
and then $n\le 35$.
\item If $G$ is not 3-colourable, $\delta\le 10n/29$ and then $n\le 35$.
\item If $G$ is not 4-colourable, $\delta\le n/3$ and then $n\le 33$. \end{itemize}
The same kind of argument used in Lemma~\ref{lem:gamma-blowup} can be used to show that a not necessarily uniform blowup of a $\Gamma_i$ graph which is near-regular obeys narrow bounds on its partition sizes. The following lemma gives the details for the case of $\Gamma_2=C_5$.
\begin{lemma}\label{lem:nearreg-gtwo-blowup} If a near-regular graph $G$ is maximal triangle-free, homomorphic to $\Gamma_2$, and not bipartite, then $G=\Gamma_2(n_1,n_2,n_3,n_4,n_5)$ with $n_2\le n_1+2$, $n_3\le n_1+1$, $n_4\le n_1+1$, and $n_5\le n_1+2$; and therefore it is a subgraph of $\Gamma_2(\lfloor (n+6)/5 \rfloor)$. \end{lemma}
\begin{proof} If $G$ is maximal triangle-free and homomorphic to $\Gamma_2$, then there exist nonnegative integers $n_1,\ldots,n_5$, summing to $n$, so that $G=\Gamma_2(n_1, \ldots, n_5)$. If $G$ is not bipartite, then these are all positive; and they cannot all be the same for the graph to be strictly near-regular. Therefore $n$ is at least $6$.
Let $v_1$, $v_2$, $v_3$, $v_4$, and $v_5$ be the vertices of $\Gamma_2$. Their neighbourhoods are respectively $\{v_3,v_4\}$, $\{v_4,v_5\}$, $\{v_1,v_5\}$, $\{v_1,v_2\}$, and $\{v_2,v_3\}$. The degree of vertices in $G$ mapped by the homomorphism to any given vertex in $\Gamma_2$ is equal to the sum of the sizes of sets of vertices in $G$ mapped to that vertex's neighbours. Therefore the following constraints hold: \begin{gather*}
|(n_3+n_4)-(n_4+n_5)| = |n_3-n_5| \le 1, \\
|(n_4+n_5)-(n_1+n_5)| = |n_4-n_1| \le 1, \\
|(n_1+n_5)-(n_1+n_2)| = |n_5-n_2| \le 1, \\
|(n_1+n_2)-(n_2+n_3)| = |n_1-n_3| \le 1, \\
|(n_2+n_3)-(n_3+n_4)| = |n_2-n_4| \le 1. \\ \end{gather*} Let $n_1$ be the least of the $n_k$; then $n_2\le n_1+2$, $n_3\le n_1+1$, $n_4\le n_1+1$, and $n_5\le n_1+2$.
Up to symmetry, there are nine cases for near-regular $\Gamma_2(n_1,n_2,n_3,n_4,n_5)$ obeying the above constraints: \begin{itemize}
\item $G=\Gamma_2(n_1,n_1,n_1,n_1,n_1+1)$; then $n\equiv 1 \pmod{5}$
and $G$ is a subgraph of $\Gamma_2((n+4)/5)$.
\item $G=\Gamma_2(n_1,n_1,n_1,n_1+1,n_1+1)$; then $n\equiv 2 \pmod{5}$
and $G$ is a subgraph of $\Gamma_2((n+3)/5)$.
\item $G=\Gamma_2(n_1,n_1,n_1+1,n_1,n_1+1)$; then $n\equiv 2 \pmod{5}$
and $G$ is a subgraph of $\Gamma_2((n+3)/5)$.
\item $G=\Gamma_2(n_1,n_1,n_1+1,n_1+1,n_1+1)$; then $n\equiv 3 \pmod{5}$
and $G$ is a subgraph of $\Gamma_2((n+2)/5)$.
\item $G=\Gamma_2(n_1,n_1+1,n_1+1,n_1,n_1+1)$; then $n\equiv 3 \pmod{5}$
and $G$ is a subgraph of $\Gamma_2((n+2)/5)$.
\item $G=\Gamma_2(n_1,n_1+1,n_1+1,n_1+1,n_1+1)$; then $n\equiv 4 \pmod{5}$
and $G$ is a subgraph of $\Gamma_2((n+1)/5)$.
\item $G=\Gamma_2(n_1,n_1+1,n_1+1,n_1,n_1+2)$; then $n\equiv 4 \pmod{5}$
and $G$ is a subgraph of $\Gamma_2((n+6)/5)$.
\item $G=\Gamma_2(n_1,n_1+1,n_1+1,n_1+1,n_1+2)$; then $n\equiv 0 \pmod{5}$
and $G$ is a subgraph of $\Gamma_2((n+5)/5)$.
\item $G=\Gamma_2(n_1,n_1+2,n_1+1,n_1+1,n_1+2)$; then $n\equiv 1 \pmod{5}$
and $G$ is a subgraph of $\Gamma_2((n+4)/5)$. \end{itemize}
In all these cases, $G$ is a subgraph of $\Gamma_2(\lfloor (n+6)/5 \rfloor)$. \end{proof}
Lemma~\ref{lem:nearreg-gtwo-blowup} brings down the upper bound on $n$ a little for the case of graphs homomorphic to $\Gamma_2$: because $G$ homomorphic to $\Gamma_2$ can have no more cycles than its supergraph $\Gamma_2(\lfloor (n+6)/5\rfloor)$, we can compare \eqref{eqn:ktwo-exact} for $n$ vertices with \eqref{eqn:hmorph-qn} for $5\lfloor (n+6)/5 \rfloor$ vertices, and find that for $G$ near-regular, cycle-maximal triangle-free, and homomorphic to $\Gamma_2$ but not bipartite, $n\le 184$.
If we extend the constraint program \eqref{eqn:nearreg-csts} to include separate variables for $n_1$, $n_2$, $n_3$, $n_4$, and $n_5$, with the constraints on them given by Lemma~\ref{lem:nearreg-gtwo-blowup} and the new bound $n\le 184$, we can generate an exhaustive list of the $\Gamma_2$ blowups that remain as possible counterexamples to Conjecture~\ref{con:main}. Comparing \eqref{eqn:ktwo-exact} with \eqref{eqn:perm-ibound} for these cases eliminates all of them except the three graphs shown in Figure~\ref{fig:gtwo-exceptions}: $\Gamma_2(1, 2, 1, 1, 2)$, $\Gamma_2(1, 2, 2, 1, 3)$, and $\Gamma_2(1, 3, 2, 2, 3)$. Note that the order of indices in $\Gamma_2$ and thus the order of indices in the blowup notation is not consecutive around the five-cycle: $v_1$ in $\Gamma_2$, under the definition, is adjacent to $v_3$ and $v_4$. These graphs are small enough that we can count the cycles exactly; none have as many cycles as the bipartite Tur\'{a}n graph with the same number of vertices.
\begin{figure}
\caption{Near-regular graphs homomorphic to $\Gamma_2$ and not ruled out
by comparing \eqref{eqn:ktwo-exact} with \eqref{eqn:perm-ibound}.}
\label{fig:gtwo-exceptions}
\end{figure}
\begin{equation}
\begin{aligned}
c(\Gamma_2(1,1,2,1,2)) &= 15 \, , &
c(\Gamma_2(1,2,2,1,3)) &= 216 \, , &
c(\Gamma_2(1,3,2,2,3)) &= 3051 \, , \\
c(T(7,2)) &= 42 \, , &
c(T(9,2)) &= 660 \, , &
c(T(11,2)) &= 15390 \, .
\end{aligned}
\label{eqn:gtwo-exceptions} \end{equation}
These results suffice to establish the following theorem, which limits the remaining possibilities for near-regular graphs that could be cycle-maximal triangle-free.
\begin{theorem}\label{thm:near-reg-triangle-free} If a graph $G$ with $n$ vertices and $m$ edges is cycle-maximal triangle-free, its minimum and maximum degrees differ by exactly one, and $G$ is not $T(n,2)$ with $n$ odd, then $n\le 91$, the minimum degree in $G$ is at most $3n/8$, and $G$ is not homomorphic to $C_5$. \end{theorem}
\begin{proof} Suppose $G$ is a counterexample. By comparing \eqref{eqn:edges-stirling} with \eqref{eqn:ktwo-numerical}, $n\le 804$. By solving the constraints \eqref{eqn:nearreg-csts}, $n\le 435$.
For graphs homomorphic to $C_5$, by applying Lemma~\ref{lem:nearreg-gtwo-blowup}, $n\le 184$. Then by examining specific graphs and comparing \eqref{eqn:ktwo-exact} with \eqref{eqn:perm-ibound}, the three graphs shown in Figure~\ref{fig:gtwo-exceptions} are the last remaining graphs homomorphic to $C_5$, and \eqref{eqn:gtwo-exceptions} eliminates them. For graphs not homomorphic to $C_5$: the minimum degree is at most $3n/8$ because $G$ is maximal triangle-free. Then by adding that constraint to \eqref{eqn:nearreg-csts} and solving, $n\le 91$. \end{proof}
\section{Algorithmic aspects of the upper bound calculation} \label{sec:algorithm}
Lemma~\ref{lem:perm-bound} gives a bound~\eqref{eqn:perm-ibound} on the number of cycles in a graph in terms of the permanent of a matrix; that is, the sum, over all ways to choose one entry from each row and column, of the product of the chosen entries. Note that the matrix permanent is identical to the matrix determinant except that in the determinant, each product is given a sign depending on the parity of the permutation. For the permanent, the products are simply added. Removing the signs has significant consequences for the difficulty of computing the permanent: whereas computing the determinant of an $n\times n$ matrix has the same asymptotic time complexity as matrix multiplication (Cormen \emph{et al.}\ give this as an exercise~\cite[Exercise 28.2--3]{CLRS}), computing the permanent, like cycle counting, is in general a $\#P$-complete problem, even when limited to 0-1 matrices~\cite{Valiant:Complexity}.
Solving one $\#P$-complete problem just to bound another is not obviously useful. However, the matrices for which we compute the permanent to evaluate~\eqref{eqn:perm-ibound} are of a special form which makes the computation much easier. In this section we describe an algorithm to compute such permanents with time complexity having exponential dependence on $p$ (the number of vertices in $H$) but not on $n$ (the number of vertices in $G$, and size of the matrix).
Ryser's formula~\cite{Ryser:Combinatorial} for the permanent of an $n\times n$ matrix with entries $(a_{ij})$ is \begin{equation}
\perm (a_{ij}) =
\hspace{-0.7em}\sum_{S\subseteq \{1,2,\ldots,n\}}\hspace{-0.7em}
(-1)^{n-|S|} \prod_{i=1}^n \sum_{j \in S} a_{ij} \, .
\label{eqn:ryser} \end{equation}
Ryser's formula is a standard method for computing the permanent. To summarize it in words, the permanent is the sum over all subsets of the columns of the matrix, of the product over all rows, of the sums of entries in the chosen columns, with signs according to the parity of the size of the subset. The formula follows from applying the principle of inclusion and exclusion to the permutation-based definition of permanent; and although evaluating it has exponential time complexity because of the $2^n$ distinct subsets of the columns, that is better than the factorial time complexity of examining each permutation separately.
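As an illustration only (the computations reported in this paper were done in \eclipse), Ryser's formula \eqref{eqn:ryser} can be transcribed directly into a few lines of Python:
\begin{verbatim}
from itertools import combinations

def ryser_permanent(a):
    """Permanent of a square integer matrix a (a list of rows),
    via Ryser's inclusion-exclusion formula."""
    n = len(a)
    total = 0
    for size in range(1, n + 1):                   # |S|; the empty S contributes 0
        for cols in combinations(range(n), size):  # the column subset S
            prod = 1
            for i in range(n):                     # product over rows of row sums
                prod *= sum(a[i][j] for j in cols)
            total += (-1) ** (n - size) * prod
    return total
\end{verbatim}
This naive version examines all $2^n$ column subsets and is practical only for small matrices; the blocked variant described below exploits the special structure of the matrices in \eqref{eqn:perm-ibound}.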
Suppose $A$ is an $n \times n$ binary matrix of the following form: \begin{equation*}
\begin{pmatrix}
I_{n_1} & h_{12}J_{n_1n_2} & \ldots & h_{1p}J_{n_1n_p} \\
h_{21}J_{n_2n_1} & I_{n_2} & \ldots & h_{2p}J_{n_2n_p} \\
\vdots & \vdots & \ddots & \vdots \\
h_{p1}J_{n_pn_1} & h_{p2}J_{n_pn_2} & \ldots & I_{n_p}
\end{pmatrix} \, . \end{equation*} The rows are divided into $p$ blocks with sizes $n_1$, $n_2$, \ldots, $n_p$, with $n=n_1+n_2+\cdots+n_p$. The columns are divided into the same pattern of blocks, giving the matrix an overall structure of $p$ blocks by $p$ blocks, with square blocks along the main diagonal but the other blocks not necessarily square. Furthermore, the blocks along the diagonal of $A$ are identity matrices $I_{n_i}$ and the other blocks are of the form $h_{ij}J_{n_in_j}$ with $h_{ij}\in\{0,1\}$; that is, blocks of all zeros or all ones. This is the form of the matrix for which we calculate the permanent to evaluate~\eqref{eqn:perm-ibound}.
Observe that because of the block structure, many choices of the subset $S$ in~\eqref{eqn:ryser} will produce the same product of row sums. The inside of the first summation in~\eqref{eqn:ryser}, for matrices in the form we consider, depends on how many columns are chosen from each block, but not which ones. If we let $k_i$ for $i \in \{1,2,\ldots,p\}$ be the number of columns chosen in block $i$, then we can sum over the choices of all the $k_i$ rather than the choices of $S$, using binomial coefficients to count the number of choices of $S$ for each choice of all the $k_i$. Furthermore, the innermost sum need only contain $p$ terms for the block columns rather than $n$ for the matrix columns, because we can collapse the sum within a block of columns into $0$ for a block of all zeros; the number of columns selected from the block for a block of all ones; or either $0$ or $1$ for an identity-matrix block depending on whether we are in a row corresponding to a selected column. The product, similarly, only requires $2p$ factors, raised to the appropriate powers, for the block rows and the choice of ``selected'' or ``not selected'' matrix rows; not $n$ possibilities for all the matrix rows. Algorithm~\ref{alg:permanent} gives pseudocode for the calculation.
\begin{algorithm} \caption{Computing the permanent of the blocked matrix in \eqref{eqn:perm-ibound} by the collapsed form of Ryser's formula.}\label{alg:permanent} \begin{algorithmic}
\STATE $result \gets 0$
\FORALL{integer vectors $\langle k_1,k_2,\ldots,k_p \rangle$
such that $0\le k_i \le n_i$}
\STATE $cprod \gets 1$
\FOR{$row=1$ \TO $p$}
\STATE $rsum \gets 0$
\FOR{$col=1$ \TO $p$}
\IF{$row\ne col$ \AND $h[row,col]=1$}
\STATE $rsum \gets rsum + k_{col}$
\ENDIF
\ENDFOR
\STATE $cprod \gets cprod \cdot (rsum+1)^{k_{row}}
\cdot rsum^{n_{row}-k_{row}}$
\ENDFOR
\STATE $result \gets result+cprod \cdot
\prod_{i=1}^p (-1)^{n_i-k_i}\binom{n_i}{k_i}$
\ENDFOR
\RETURN{$result$} \end{algorithmic} \end{algorithm}
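A runnable counterpart of Algorithm~\ref{alg:permanent} may make the bookkeeping concrete. The following Python sketch (again for illustration only; the reported computations used \eclipse) takes the adjacency matrix $(h_{ij})$ of $H$ and the block sizes $n_1,\ldots,n_p$ and returns the permanent of the matrix in \eqref{eqn:perm-ibound}:
\begin{verbatim}
from itertools import product
from math import comb

def blocked_permanent(h, sizes):
    """Permanent of the matrix with identity diagonal blocks of the given
    sizes and all-zero or all-one off-diagonal blocks h[i][j]*J, computed
    with the block-collapsed Ryser formula described above."""
    p = len(sizes)
    result = 0
    # k[i] = number of columns selected from block i, 0 <= k[i] <= sizes[i]
    for k in product(*(range(n_i + 1) for n_i in sizes)):
        cprod = 1
        for row in range(p):
            rsum = sum(k[col] for col in range(p)
                       if col != row and h[row][col] == 1)
            # rows whose own diagonal column is selected contribute rsum+1,
            # the remaining rows of the block contribute rsum (0**0 == 1)
            cprod *= (rsum + 1) ** k[row] * rsum ** (sizes[row] - k[row])
        for i in range(p):
            cprod *= (-1) ** (sizes[i] - k[i]) * comb(sizes[i], k[i])
        result += cprod
    return result
\end{verbatim}
The cycle bound of Lemma~\ref{lem:perm-bound} is then half of this value, rounded down since the cycle count is an integer.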
There are $\prod_{i=1}^p (n_i+1)$ choices for the vector $\langle k_1,k_2,\ldots,k_p \rangle$; because equal division is the worst case, that is $O(((n/p)+1)^p)$. For each such vector, the inner loops do $O(p^2)$ operations, giving the following result.
\begin{theorem} There exists an algorithm to compute the permanent of a matrix $A$ in $O(p^2((n/p)+1)^p)$ integer arithmetic operations if $A$ is an $n\times n$ matrix divided into $p$ blocks by $p$ blocks, not necessarily all of the same size, in which the blocks along the main diagonal are identity matrices and the other blocks each consist of all zeros or all ones. \end{theorem}
When $p=n$, the case of general unblocked $n\times n$ matrices, this time bound reduces to $O(n^22^n)$, which is the same as a straightforward implementation of Ryser's formula. Note that we describe the time complexity in terms of ``integer arithmetic operations.'' The value of the permanent can be on the order of the factorial of the number of vertices $n$, in which case representing it takes $O(n)$ words of $O(\log n)$ bits each. We cannot do arithmetic on such large numbers in constant time in the standard RAM model of computation. However, including an extended analysis here of the cost of multiple-precision arithmetic would make the presentation more confusing without providing any deeper insight into how the algorithm works. Thus we do the analysis in the unit cost model, with the caution that the cost of arithmetic may be non-constant in practice and should be considered when implementing the algorithm. Even if our model does not include ``binomial coefficient'' as a primitive constant-time operation, we can first build a table of $k!$ for $k$ from 1 to $n$ with $O(n)$ multiplications, then calculate $\binom{n}{k}$ as $n!/k!(n-k)!$ with three table lookups; time and space to build the table are lower order than the overall cost of Algorithm~\ref{alg:permanent}.
For the proofs in the previous sections, we implemented this algorithm in the \eclipse\ language~\cite{Schimpf:Eclipse} with no particular effort to optimize it, and found that the cost of calculating permanents to bound cycle counts was comparable to the cost of the integer programming to find the graphs in the first place, typically a few CPU seconds per graph for small cases, up to a few hours for the largest cases of interest.
\section{Conclusions and future work} \label{sec:conclusion}
Conjecture~\ref{con:main} postulates that the bipartite Tur\'an graphs achieve the maximum number of cycles among all triangle-free graphs. Depending on the parity of $n$, $T(n,2)$ is either regular or near-regular; and we have ruled out all regular graphs and all but a finite number of near-regular graphs as potential counterexamples to Conjecture~\ref{con:main}. It appears that our current techniques might be extended to cover a few more of the near-regular cases by proving results like Lemma~\ref{lem:nearreg-gtwo-blowup} for $\Gamma_3$, $\Gamma_4$, and so on. Each one reduces the maximum value of $\delta(G)/n$, and therefore the maximum value of $n$, for which counterexamples could exist.
However, even if we could do this for all $\Gamma_i$, and extend the theory to cover $4$-chromatic graphs too using the ``Vega graph'' classification results of Brandt and Thomass\'e~\cite{Brandt:Dense}, potential counterexamples with as many as 30 vertices would remain, and too many of them to exhaustively enumerate as we did in the case of regular graphs. Similar issues apply even more strongly to graphs with $\Delta(G)-\delta(G)$ a constant $k>1$, even though by Corollary~\ref{cor:near-regular}, the number of possible counterexamples is finite for any fixed $k$. It seems clear that closing these gaps will require a better theoretical understanding of graphs with $\delta(G)$ less than but close to $n/3$, and that to finally prove Conjecture~\ref{con:main} we need better bounds for graphs that are far from being regular.
When the girth increases beyond four the structure of cycle-maximal graphs appears to change significantly. In particular, they are not just complete bipartite graphs with degree-two vertices inserted to increase the lengths of the cycles. For small values of $n$, our computer search showed that most vertices in cycle-maximal graphs of fixed minimum girth $g \geq 5$ have degree three, with a few vertices of degree two and four present in some cases. Our preliminary examination of cycle-maximal graphs of girth greater than four has yet to suggest any natural characterization of these graphs, even when graphs are restricted to having regular degree.
\section{Example: the permanent bound for $C_5(2)$} \label{sec:example}
This appendix demonstrates the permanent-based bound on number of cycles in the graph $C_5(2)$, shown at left in Figure~\ref{fig:cfivetwo-example}. This graph comes up when trying to think of counterexamples to Conjecture~\ref{con:main}: there is no instantly obvious reason for it to have fewer cycles than $T(10,2)$, but in fact, it does have fewer cycles.
\begin{figure}
\caption{Graphs for the permanent bound example.}
\label{fig:cfivetwo-example}
\end{figure}
Let $G$ be the graph $C_5(2)$ and let $H$ be the graph $C_5$, which is the same as $\Gamma_2$. Figure~\ref{fig:cfivetwo-example} shows the vertices of $H$ labelled as in the definition of $\Gamma_2$. The adjacency matrix of $H$ is \begin{equation*}
\begin{pmatrix}
0 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 1 & 1 \\
1 & 0 & 0 & 0 & 1 \\
1 & 1 & 0 & 0 & 0 \\
0 & 1 & 1 & 0 & 0
\end{pmatrix} \, . \end{equation*}
The graph $G$ is obtained by blowing up each vertex of $H$ into a two-vertex independent set, and in the adjacency matrix that is equivalent to replacing each element with a $2\times 2$ submatrix. To apply the bound of Lemma~\ref{lem:perm-bound}, we also add ones along the diagonal, giving this modified version of the adjacency matrix of $G$: \begin{equation*}
\left(\begin{array}{cc|cc|cc|cc|cc}
1 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 \\
0 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 \\ \hline
0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\
0 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 1 & 1 \\ \hline
1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 \\
1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1 \\ \hline
1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 \\
1 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ \hline
0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\
0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 1
\end{array}\right) \, . \end{equation*}
The permanent of that $10\times 10$ matrix is 5753, so by Lemma~\ref{lem:perm-bound}, taking the floor because the cycle count is an integer, $C_5(2)$ contains at most 2876 cycles. In fact, by exact count $C_5(2)$ contains 593 cycles. Both numbers are less than the 3940 cycles in $T(10,2)$.
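The same numbers can be reproduced directly from the $5\times 5$ adjacency matrix of $H$ and the block sizes, without writing out the $10\times 10$ matrix, using the Python sketch given after Algorithm~\ref{alg:permanent} (illustration only):
\begin{verbatim}
h = [[0, 0, 1, 1, 0],
     [0, 0, 0, 1, 1],
     [1, 0, 0, 0, 1],
     [1, 1, 0, 0, 0],
     [0, 1, 1, 0, 0]]                    # adjacency matrix of H = Gamma_2
perm = blocked_permanent(h, [2, 2, 2, 2, 2])
print(perm, perm // 2)                   # 5753 and the cycle bound 2876
\end{verbatim}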
\section{Numerical results} \label{sec:numerics}
Tables~\ref{tab:counts-lo} and~\ref{tab:counts-hi} list exact counts and bounds on the number of cycles in various graphs, sorted by number of vertices for easier comparisons.
\begin{table} \caption{Cycle counts and bounds for various graphs, $n\le 30$.} \label{tab:counts-lo} \centering \begin{tabular}{rrrc} $G$ & $n$ & $c(G)$ & from \\ \hline
$\Gamma_2=C_5$ & 5 & 1 & obvious \\ $K_{2,3}$ & 5 & 3 & \eqref{eqn:ktwo-exact} \\
$\Gamma_3$ & 8 & $\le$130 & \eqref{eqn:perm-ibound}\\ $K_{4,4}$ & 8 & 204 & \eqref{eqn:ktwo-exact} \\
$\Gamma_2(2)$ & 10 & $\le$2~876 & \eqref{eqn:perm-ibound}\\ $K_{5,5}$ & 10 & 3~940 & \eqref{eqn:ktwo-exact} \\
$\Gamma_4$ & 11 & $\le$6~151 & \eqref{eqn:perm-ibound}\\ $K_{5,6}$ & 11 & 15~390 & \eqref{eqn:ktwo-exact} \\
$\Gamma_5$ & 14 & $\le$602~261 & \eqref{eqn:perm-ibound}\\ $K_{7,7}$ & 14 & 4~662~231 & \eqref{eqn:ktwo-exact} \\
$\Gamma_2(3)$ & 15 & $\le$12~782~394 & \eqref{eqn:perm-ibound}\\ $K_{7,8}$ & 15 & 24~864~588 & \eqref{eqn:ktwo-exact} \\
$\Gamma_3(2)$ & 16 & $\le$36~552~880 & \eqref{eqn:perm-ibound}\\ $K_{8,8}$ & 16 & 256~485~040 & \eqref{eqn:ktwo-exact} \\
$\Gamma_6$ & 17 & $\le$104~770~595 & \eqref{eqn:perm-ibound}\\ $K_{8,9}$ & 17 & 1~549~436~112 & \eqref{eqn:ktwo-exact} \\
$\Gamma_7$ & 20 & $\le$29~685~072~610 & \eqref{eqn:perm-ibound}\\ $\Gamma_2(4)$ & 20 & $\le$275~455~237~776 & \eqref{eqn:perm-ibound}\\ $K_{10,10}$ & 20 & 1~623~855~701~385 & \eqref{eqn:ktwo-exact} \\
$\Gamma_4(2)$ & 22 & $\le$3~544~330~396~616 & \eqref{eqn:perm-ibound}\\ $K_{11,11}$ & 22 & 177~195~820~499~335 & \eqref{eqn:ktwo-exact} \\
$\Gamma_3(3)$ & 24 & $\le$504~887~523~966~914 & \eqref{eqn:perm-ibound}\\ $K_{12,12}$ & 24 & 23~237~493~232~953~516 & \eqref{eqn:ktwo-exact} \\
$\Gamma_2(5)$ & 25 & $\le$19~610~234~100~506~750 & \eqref{eqn:perm-ibound}\\ $K_{12,13}$ & 25 & 205~717~367~581~496~628 & \eqref{eqn:ktwo-exact} \\
$\Gamma_5(2)$ & 28 & $\le$1~583~204~062~862~484~492 & \eqref{eqn:perm-ibound}\\ $K_{14,14}$ & 28 & 653~193~551~573~628~900~289 & \eqref{eqn:ktwo-exact} \\
$\Gamma_2(6)$ & 30 & $\le$3~664~979~770~718~930~748~156 & \eqref{eqn:perm-ibound}\\ $K_{15,15}$ & 30 & 136~634~950~180~317~224~866~335 & \eqref{eqn:ktwo-exact} \end{tabular} \end{table}
\begin{sidewaystable} \caption{Cycle counts and bounds for various graphs, $n>30$.} \label{tab:counts-hi} \centering \begin{tabular}{rrrc} $G$ & $n$ & $c(G)$ & from \\ \hline
$\Gamma_3(4)$ & 32 & $\le$93~314~267~145~221~727~988~928 & \eqref{eqn:perm-ibound}\\ $K_{16,16}$ & 32 & 32~681~589~590~709~963~123~092~160 & \eqref{eqn:ktwo-exact} \\
$\Gamma_4(3)$ & 33 & $\le$472~536~908~624~040~051~159~801 & \eqref{eqn:perm-ibound}\\ $K_{16,17}$ & 33 & 380~842~679~006~967~756~257~282~880 & \eqref{eqn:ktwo-exact} \\
$\Gamma_2(7)$ & 35 & $\le$1~538~132~015~230~964~742~594~686~226 & \eqref{eqn:perm-ibound}\\ $K_{17,18}$ & 35 & 109~481~704~025~024~759~751~150~754~248 & \eqref{eqn:ktwo-exact} \\
$\Gamma_3(5)$ & 40 & $\le$121~876~741~093~584~265~201~282~594~275~138 & \eqref{eqn:perm-ibound}\\ $\Gamma_2(8)$ & 40 & $\le$1~295~546~973~219~341~717~643~333~826~977~344 & \eqref{eqn:perm-ibound}\\ $K_{20,20}$ & 40 & 350~014~073~794~168~154~275~473~348~323~458~540 & \eqref{eqn:ktwo-exact} \\
$\Gamma_2(9)$ & 45 & $\le$2~011~552~320~593~475~430~049~513~125~845~530~235~126 & \eqref{eqn:perm-ibound}\\ $K_{22,23}$ & 45 & 1~072~464~279~544~434~376~131~539~091~650~605~148~971~323 & \eqref{eqn:ktwo-exact} \\
$\Gamma_3(6)$ & 48 & $\le$765~658~164~243~897~411~689~143~843~074~192~950~614~512 & \eqref{eqn:perm-ibound}\\ $K_{24,24}$ & 48 & 18~847~819~366~080~117~996~802~964~862~587~612~140~097~642~544 & \eqref{eqn:ktwo-exact} \\
$\Gamma_2(10)$ & 50 & $\le$5~387~065~180~713~482~750~668~088~096~305~965~320~151~649~500 & \eqref{eqn:perm-ibound}\\ $K_{25,25}$ & 50 & 11~294~267~336~237~005~395~453~340~472~970~226~376~143~920~186~000 & \eqref{eqn:ktwo-exact} \\
$\Gamma_3(7)$ & 56 & $\le$17~877~864~251~518~595~245~276~779~749~582~885~338~633~210~045~796~098 & \eqref{eqn:perm-ibound}\\ $K_{28,28}$ & 56 & 3~883~426~377~993~747~808~177~077~817~275~217~253~080~577~404~858~001~996~940 & \eqref{eqn:ktwo-exact} \end{tabular} \end{sidewaystable}
\end{document} |
\begin{document}
\title{ Sufficient stochastic maximum principle for the optimal control of semi-Markov modulated jump-diffusion with application to Financial optimization.} \baselineskip20pt \parskip10pt \parindent.4in \begin{abstract} \noindent \textcolor{red}{ Paper forthcoming in Stochastic Analysis and Applications}\\ The finite state semi-Markov process is a generalization of the Markov chain in which the sojourn time distribution may be any general distribution. In this article we provide a sufficient stochastic maximum principle for the optimal control of a semi-Markov modulated jump-diffusion process in which the drift, the diffusion and the jump kernel of the jump-diffusion process are modulated by a semi-Markov process. We also connect the sufficient stochastic maximum principle with the dynamic programming equation. We apply our results to a finite horizon risk-sensitive control portfolio optimization problem and to a quadratic loss minimization problem. \end{abstract} \noindent\\
{\bf Keywords}: semi-Markov modulated jump diffusions, sufficient stochastic maximum principle, dynamic programming, risk-sensitive control, quadratic loss-minimization.\\
{\bf AMS subject classification} 93E20; 60H30;46N10.
\section{Introduction} \indent The stochastic maximum principle is a stochastic version of the Pontryagin maximum principle which states that any optimal control must satisfy a system of forward-backward stochastic differential {equations,} called the optimality system, and should maximize a functional, called the Hamiltonian. The converse is indeed true and gives the sufficient stochastic maximum principle. In this article we derive a sufficient stochastic maximum principle for a class of processes called semi-Markov modulated jump-diffusion processes. In such a process the drift, the diffusion and the jump kernel are modulated by a semi-Markov process.
\\ \indent An early investigation of the stochastic maximum principle and its application to finance is credited to Cadenillas and Karatzas \cite{CK}. Framstad et al. \cite{Fr} formulated the stochastic maximum principle for jump-diffusion processes and applied it to a quadratic portfolio optimization problem. Their work was partly generalized by Donnelly \cite{Do}, who considered a diffusion process whose drift and diffusion terms are modulated by a Markov chain. Zhang et al. \cite{Zh} studied a sufficient maximum principle for a process similar to that considered by Donnelly, with an additional jump term whose kernel is also modulated by a Markov chain. Markov modulated processes have recently found many applications in finance, for example to option pricing (Deshpande and Ghosh \cite{DG} and references therein) and to portfolio optimization (Zhou and Yin \cite{XY}). However, applications of semi-Markov modulated processes to portfolio optimization, in which the portfolio wealth process is a semi-Markov modulated diffusion, are not many; see for example Ghosh and Goswami \cite{GG}. Moreover, it appears that the sufficient maximum principle has not been formulated for a semi-Markov modulated diffusion process with jumps, nor studied further in the context of quadratic portfolio optimization, and its application to risk-sensitive portfolio optimization, with the portfolio wealth process following a semi-Markov modulated diffusion, has not been studied either. This article aims to fill these gaps and connect them. Accordingly, along with a popular application of the sufficient stochastic maximum principle to a quadratic loss minimization problem when the portfolio wealth process follows a semi-Markov modulated jump-diffusion, we also provide an example of risk-sensitive portfolio optimization for the diffusion part of the said dynamics. \\
\indent The article is organized as follows. In the next section we formally describe the basic terminology used in the article. In Section 3 we detail the control problem that we are going to study. The sufficient maximum principle is proven in Section 4. This is followed by establishing its connection with dynamic programming. We conclude the article by illustrating its applications to risk-sensitive control optimization and to a quadratic loss minimization problem. \section{Mathematical Preliminaries} We adopt the following notation, valid throughout the paper:\\ $\mathbb{R}$: the set of real numbers\\ $r,M$: positive integers greater than 1.\\ $ {{\mathcal{X}=\{1,...,M\}}}.$\\ $\mathcal{C}^{1,2,1}([0,T] \times \mathbb{R}^{r} \times \mathcal{X} \times \mathbb{R}_{+})$: the family of all functions on $[0,T] \times \mathbb{R}^{r} \times \mathcal{X} \times \mathbb{R}_{+}$ which are twice continuously differentiable in $x$ and continuously differentiable in $t$ and $y$.\\ $v^{'}$, $A^{'}$: the transpose of a vector $v$ and a matrix $A$, respectively.\\
$||v||$: Euclidean norm of a vector $v$.\\
$|A|$: norm of a matrix $A$.\\ $tr(A)$: trace of a square matrix $A$.\\ $C^{m}_{b}(\mathbb{R}^{r})$: set of real $m$-times continuously differentiable functions which are bounded together with their derivatives up to the $m^{th}$ order. \\ \indent We assume that the probability space ($\Omega,\mathcal{F},\{\mathcal{F}({t})\},\mathbb{P}$) is complete, that the filtration
$\{\mathcal{F}({t})\}_{t \geq 0}$ is right-continuous, and that $\mathcal{F}({0})$ contains all $\mathbb{P}$-null sets. Let $\{{\theta}({t})\}_{t\geq 0}$ be a semi-Markov process taking values in $ {\mathcal{X}}$ with transition probabilities $ {p_{ij}}$ and conditional holding time distributions $F^{h}(t|i)$. Thus if $0 \leq t_{0}\leq t_{1}\leq ...$ are the times when jumps occur, then \begin{eqnarray}\label{2.1}
P(\theta({t_{n+1}})=j,t_{n+1}-t_{n} \leq t|\theta({t_{n}})=i)=p_{ij}F^{h}(t|i). \end{eqnarray}
The matrix $[p_{ij}]_{\{i,j=1,...,M\}}$ is irreducible and, for each $i$, $F^{h}(\cdot|i)$ has a continuously differentiable and bounded density $f^{h}(\cdot|i)$. For a fixed $t$, let $n(t) \triangleq \max\{n: t_n \leq t\}$ and $Y(t) \triangleq t- t_{n(t)}$. Thus $Y(t)$ represents the amount of time the process $\theta(t)$ has spent in the current state since the last jump. The process ($\theta{(t)},Y{(t)}$) defined on ($\Omega,\mathcal{F},\mathbb{P}$) is jointly Markov, with differential generator $\mathcal{L}$ given as follows (Chap.~2, \cite{GS}) \begin{eqnarray}\label{2.3}
\mathcal{L}\phi(i,y)=\frac{d}{dy}\phi(i,y)+\frac{f^{h}(y|i)}{1-F^{h}(y|i)}\sum_{j \neq i,j \in \mathcal{X}}{p_{ij}[\phi(j,0)-\phi(i,y)]}, \end{eqnarray} for any $C^{1}$ function $\phi:\mathcal{X} \times \mathbb{R}_{+}\rightarrow \mathbb{R}$.\\ \indent We first represent the semi-Markov process $\theta(t)$ as a stochastic integral with respect to a Poisson random measure. With that perspective in mind, embed $\mathcal{X}$ in $\mathbb{R}^{M}$ by identifying $i$ with $e_{i} \in \mathbb{R}^{M}$. For $y \in [0,\infty)$ and $i, j \in \mathcal{X}$, define \begin{eqnarray*} \lambda_{ij}(y)&=&p_{ij}\frac{f^{h}(y|i)}{1-F^{h}(y|i)} \geq 0 \quad \forall~ i \neq j, ~~\mbox{and} \\ \lambda_{ii}(y)&=&-\sum_{j\in \mathcal{X},j \neq i}{\lambda_{ij}(y)}~~ \forall~~ i~~ \in \mathcal{X}. \end{eqnarray*} \indent For $i \neq j \in \mathcal{X}$ , $y \in \mathbb{R}_{+}$ let $\Lambda_{ij}(y)$ be consecutive (with respect to lexicographic ordering on $\mathcal{X}\times \mathcal{X}$) left-closed, right-open intervals of the real line, each having length $\lambda_{ij}(y)$. Define the functions $\bar{h}:\mathcal{X}\times \mathbb{R}_{+}\times\mathbb{R}\rightarrow \mathbb{R}^{M}$ and $\bar{g}:\mathcal{X}\times \mathbb{R}_{+}\times \mathbb{R} \rightarrow \mathbb{R}_{+}$ by $$ \bar{h}(i,y,z) = \left\{ \begin{array}{rl}
j-i &\mbox{ if $z \in \Lambda_{ij}(y)$} \\
0 &\mbox{ otherwise}
\end{array} \right. $$ $$ \bar{g}(i,y,z) = \left\{ \begin{array}{rl}
y &\mbox{ if $z \in \Lambda_{ij}(y), j \neq i$} \\
0 &\mbox{ otherwise}
\end{array} \right. $$ \\ \indent Let $\mathcal{M}(\mathbb{R}_{+} \times \mathbb{R})$ be the set of all nonnegative integer-valued $\sigma$-finite measures on { Borel } $\sigma$-field of ($\mathbb{R}_{+}\times \mathbb{R}$). {The process $\{\tilde{\theta}{(t)},Y{(t)}\}$ is defined} by the following stochastic integral equations: \begin{eqnarray}\label{2.2} \begin{split} \tilde{\theta}{(t)}=\tilde{\theta}{(0)}+\int_{0}^{t}\int_{\mathbb{R}}{\bar{h}(\tilde{\theta}{(u-)},Y{(u-)},z)N_{1}(du,dz)},\\ Y{(t)}=t-\int_{0}^{t}\int_{\mathbb{R}}{\bar{g}(\tilde{\theta}{(u-)},Y{(u-)},z)N_{1}(du,dz)}, \end{split} \end{eqnarray} where $N_{1}(dt,dz)$ is an $\mathcal{M}$($\mathbb{R}_{+}\times \mathbb{R}$)-valued Poisson random measure with intensity $dt m(dz)$
independent of the $\mathcal{X}$-valued random variable $\tilde{\theta}{(0)}$, where $m(\cdot)$ is the Lebesgue measure on $\mathbb{R}$. As before, $Y(t)$ represents the amount of time the process $\tilde{\theta}(t)$ has spent in the current state since the last jump. We define the corresponding compensated (centered) one dimensional Poisson measure as $\tilde{N}_{1}(ds,dz)=N_{1}(ds,dz)-ds\,m(dz)$. It was shown in Theorem 2.1 of Ghosh and Goswami \cite{GG} that $\tilde{\theta}{(t)}$ is a semi-Markov process with transition probability matrix $[p_{ij}]_{\{i,j=1,...,M\}}$ and conditional holding time distributions $F^{h}(y|i)$. {Since by definition $\theta(t)$ is also a semi-Markov process with transition probability matrix $[p_{ij}]_{\{i,j=1,...,M\}}$ and conditional holding time distributions $F^{h}(y|i)$, defined on the same underlying probability space, we identify $\tilde{\theta}{(t)}=\theta{(t)}$ for $t \geq 0$}.\\ {
{\bf Remark 2.1}~~The semi-Markov process with conditional density $f^{h}(y|i)=\tilde{\lambda}_{i}e^{-\tilde{\lambda}_{i}y}$ for some $\tilde{\lambda}_{i}>0$, $i =1,2,\ldots,M$, is in fact a Markov chain.}
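Indeed, in this case $F^{h}(y|i)=1-e^{-\tilde{\lambda}_{i}y}$, so the hazard rate appearing in (\ref{2.3}) does not depend on the elapsed time $y$:
\begin{equation*}
\frac{f^{h}(y|i)}{1-F^{h}(y|i)}=\frac{\tilde{\lambda}_{i}e^{-\tilde{\lambda}_{i}y}}{e^{-\tilde{\lambda}_{i}y}}=\tilde{\lambda}_{i},
\end{equation*}
and, for functions $\phi$ that do not depend on $y$, the generator (\ref{2.3}) reduces to that of a continuous time Markov chain with transition rates $\tilde{\lambda}_{i}p_{ij}$, $j\neq i$.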
\section{The control problem} Let $\mathcal{U} \subset \mathbb{R}^{r}$ be a closed subset. { Let $\mathbb{B}_{0}$ be the family of Borel sets $\Gamma \subset \mathbb{R}^{r}$ whose closure $\bar{\Gamma}$ does not contain {0}. For any Borel set $B \subset \Gamma$, the one dimensional Poisson random measure $ N(t,B)$ counts the number of jumps on $[0,t]$ with values in $B$.} { For a predictable process $u:[0,T] \times \Omega \rightarrow \mathcal{U}$ with right continuous left limit paths, consider the controlled process $X$ with initial condition $X(0)=x \in \mathbb{R}^{r}$ given by} \begin{eqnarray}\label{3.1} dX({t})=b(t,X({t}),u({t}),\theta({t}))dt+\sigma(t,X({t}),u({t}),\theta({t}))dW({t})+\int_{\Gamma}g(t,X({t}),u({t}),\theta({t}),\gamma){N}(dt,d\gamma),\nonumber\\ \end{eqnarray} where $X(t) \in \mathbb{R}^{r}$ and $W(t)=(W_{1}(t),...,W_{r}(t))$ is an $r$-dimensional standard Brownian motion. The coefficients $b(\cdot,\cdot,\cdot,\cdot):[0,T] \times \mathbb{R}^{r}\times \mathcal{U} \times \mathcal{X} \rightarrow \mathbb{R}^{r}$, $\sigma(\cdot,\cdot,\cdot,\cdot):[0,T] \times \mathbb{R}^{r}\times \mathcal{U}\times \mathcal{X} \rightarrow \mathbb{R}^{r \times r}$ and $g(\cdot,\cdot,\cdot,\cdot,\cdot):[0,T] \times \mathbb{R}^{r}\times \mathcal{U} \times \mathcal{X} \times \Gamma \rightarrow \mathbb{R}^{r}$ { satisfy the following conditions.\\ {\bf Assumption (A1)}\\ \textit{(At most linear growth)~~ There exists a constant $ C_{1}< \infty $ for any $ i \in \mathcal{X} $ such that}\\
${|\sigma(t,x,u,i)|}^{2} +{||b(t,x,u,i)||}^{2}+\int_{\Gamma}{{||g(t,x,u,i, \gamma)||}^{2}}\lambda(d\gamma) \leq C_{1}(1+||x||^{2})$\\ \textit{(Lipschitz continuity)~~ There exists a constant $C_{2}< \infty$ for any $ i \in \mathcal{X} $ such that}\\
${|\sigma(t,x,u,i)-\sigma(t,y,u,i)|}^{2} +{||b(t,x,u,i)-b(t,y,u,i)||}^{2}+\int_{\Gamma}{||g(t,x,u,i,\gamma)-g(t,y,u,i,\gamma)||^{2}}\lambda(d\gamma) \leq C_{2}||x-y||^{2}$ $\forall x,y \in \mathbb{R}^{r}$.\\ Under these conditions (\ref{3.1}) admits a unique c\`{a}dl\`{a}g adapted solution $X(t)$; see Theorem 1.19 of \cite{Oks}.}\\
We assume that
$N(\cdot,\cdot), N_{1}(\cdot,\cdot)$ and $\theta_{0},W_{t},X_{0}$ defined on ($\Omega,\mathcal{F},\mathbb{P}$) are independent. For future use we define the compensated Poisson measure $\tilde{N}(dt,d\gamma)=N(dt,d\gamma)-{\lambda} \pi(d\gamma)dt$, where $\pi(\cdot)$ is the jump distribution { (a probability measure) and $0<{\lambda}<\infty$ is the jump rate} { such that $\int_{\Gamma}{\min({||\gamma||}^{2},1)}\lambda{(d\gamma)}<\infty$.}\\ \indent Consider the performance criterion \begin{eqnarray}\label{3.2} J^{u}(x,i,y)=E^{x,i,y}[\int_{0}^{T}{f_{1}(t,X({t}),u(t),\theta({t}),Y(t))dt+f_{2}(X(T),\theta(T),Y(T))}], \end{eqnarray} where $f_{1}:[0,T] \times \mathbb{R}^{r}\times \mathcal{U} \times \mathcal{X} \times \mathbb{R}_{+} \rightarrow \mathbb{R}$ is continuous and $f_{2}: \mathbb{R}^{r} \times \mathcal{X} \times \mathbb{R}_{+}\rightarrow \mathbb{R}$ is concave. We say that a control $u$ belongs to the admissible class $\mathcal{A}(T)$ if \begin{eqnarray*}
E^{x,i,y}\bigg[\int_{0}^{T}|f_{1}(t,X(t),u(t),\theta(t),Y(t))|dt+f_{2}(X(T),\theta(T),Y(T))\bigg]<\infty. \end{eqnarray*} The problem is to maximize $J^{u}$ over all $u \in \mathcal{A}(T)$, i.e., we seek $\hat{u} \in \mathcal{A}(T)$ such that \begin{eqnarray}\label{3.3} J^{\hat{u}}(x,i,y)=\sup_{u \in \mathcal{A}(T)}J^{u}(x,i,y), \end{eqnarray} where $\hat{u}$ is an optimal control.\\ Define a Hamiltonian $\mathcal{H}: [0,T] \times \mathbb{R}^{r}\times \mathcal{U} \times \mathcal{X} \times \mathbb{R}_{+} \times \mathbb{R}^{r} \times \mathbb{R}^{r \times r} \times \mathbb{R}^{r} \rightarrow \mathbb{R}$ by \begin{eqnarray}\label{3.4} \mathcal{H}(t,x,u,i,y,p,q,\eta)&:=& f_{1}(t,x,u,i,y)+\bigg(b^{'}(t,x,u,i)-\int_{\Gamma}{g^{'}(t,x,u,i,\gamma)}\pi(d\gamma)\bigg)p+tr(\sigma^{'}(t,x,u,i)q)\nonumber\\ &+&\bigg(\int_{{\Gamma}}{g^{'}(t,x,u,i,\gamma)}\pi(d\gamma)\bigg)\eta. \end{eqnarray}
We assume that the Hamiltonian $\mathcal{H}$ is differentiable with respect to $x$. {The adjoint equation corresponding to $u$ and $X^{u}$, in the unknown adapted processes $p(t) \in \mathbb{R}^{r}$, $ q(t) \in \mathbb{R}^{r \times r}$, $\eta:\mathbb{R}_{+} \times \mathbb{R}^{r}-\{0\}\rightarrow \mathbb{R}^{r }$ and $\tilde{\eta}(t,z)=(\tilde{\eta}^{(1)}(t,z),...,\tilde{\eta}^{(r)}(t,z))^{'}$, where $\tilde{\eta}^{(n)}(t,z) \in \mathbb{R}$ for each $n=1,2,...,r$, is the backward stochastic differential equation (BSDE)}, \begin{eqnarray}\label{3.5} dp(t)&=& -\nabla_{x}\mathcal{H}(t,X(t),u(t),\theta(t),p(t),q(t),\eta(t,\gamma))dt+q^{'}(t)dW(t)+\int_{\Gamma}{\eta(t,\gamma) \tilde{N}(dt,d\gamma)}\nonumber \\&+&\int_{\mathbb{R}}\tilde{\eta}(t,z) \tilde{N}_{1}(dt,dz), \nonumber \\ p(T)&=&\nabla_{x}f_{2}(X(T),\theta(T),Y(T))~~a.s. \end{eqnarray} Here $\nabla_{x}\mathcal{H}(t,X(t),u(t),\theta(t),p(t),q(t),\eta(t,\gamma))$ denotes the gradient of $\mathcal{H}$ with respect to $x$ evaluated at $x=X(t)$. { As per Remark 2.1, for the special case where the semi-Markov process has exponential holding time distributions, (\ref{3.5}) would be a BSDE with Markov chain switching. For this special case, Cohen and Elliott \cite{CE} have provided conditions for uniqueness of the solution. However, a corresponding uniqueness result for the semi-Markov modulated BSDE (\ref{3.5}) does not seem to be available in the literature. Since this paper concerns sufficient conditions, we will assume ad hoc that a solution to this BSDE exists and is unique. }\\
{\bf Remark 3.1}~~ Notice that there are jumps in the adjoint equation (3.5) attributed to jumps in the semi-Markov process $\theta({t})$. This is because the drift, the diffusion and the jump kernel of the process $X({t})$ is modulated by a semi-Markov process. Also note that the unknown process $\tilde{\eta}(t,z)$ in the adjoint equations (\ref{3.5}) does not appear in the Hamiltonian (\ref{3.4}). \section{Sufficient Stochastic Maximum principle} In this section we state and prove the sufficient stochastic maximum principle.\\ {\bf Theorem 4.1}(Sufficient Maximum principle) Let $\hat{u} \in \mathcal{A}(T)$ with corresponding solution $\hat{X} \triangleq X^{\hat{u}}$. Suppose there exists a solution ($\hat{p}(t),\hat{q}(t),\hat{\eta}(t,\gamma),\hat{\tilde{\eta}}(t,z)$)of the adjoint equation (\ref{3.5}) satisfying \\ \begin{eqnarray}\label{4.1}
&&E \int_{0}^{T}{||\bigg(\sigma(t,\hat{X}(t),\theta(t))-\sigma(t,X^{u}(t),\theta(t))\bigg)^{'}\hat{p}(t)||^{2}}dt< \infty \\
&&E \int_{0}^{T}{||\hat{q}^{'}(t)\bigg(\hat{X}(t)-X^{u}(t)\bigg)||^{2}}dt< \infty \\
&&E \int_{0}^{T}{||(\hat{X}(t)-X^{u}(t))^{'}\hat{\eta}(t,\gamma)||^{2}\pi(d\gamma)}dt< \infty \\
&&E \int_{0}^{T}{|\bigg(\hat{X}(t)-X^{u}(t)\bigg)^{'}\hat{\tilde{\eta}}(t,z)|^{2}m(dz)}dt< \infty. \end{eqnarray} for all admissible controls $u \in \mathcal{A}(T)$. If we further suppose that \\ 1. \begin{eqnarray}\label{4.5} \mathcal{H}(t,\hat{X}({t}),\hat{u}({t}),\theta(t),Y(t),\hat{p}({t}),\hat{q}({t}),\hat{\eta}{(t,\cdot)})=\sup_{u \in\mathcal{A}(T)} \mathcal{H}(t,\hat{X}({t}),{u}({t}),\theta(t),Y(t),\hat{p}({t}),\hat{q}({t}),\hat{\eta}{(t,\cdot)}). \end{eqnarray}
2. for each fixed pair $(t,i,y) \in ([0,T] \times \mathcal{X} \times \mathbb{R}_{+})$,~~$\hat{\mathcal{H}}(x):= \sup_{ u \in \mathcal{A}(T)}\mathcal{H}(t,x,u,i,y,\hat{p}(t),\hat{q}(t),\hat{\eta}(t,\cdot))$ exists and is a concave function of $x$. Then $\hat{u}$ is an optimal control.\\ {\textit{Proof}}~~Fix $u \in \mathcal{A}(T)$ with corresponding solution $X=X^{u}$. For sake of brevity we would henceforth represent ($t,\hat{X}(t-),\hat{u}(t-),\theta(t-),Y(t-)$) by ($t,\hat{X}(t-)$) and ($t,{X}(t-),{u}(t-),\theta(t-),Y(t-)$) by ($t,{X}(t-)$). Then, \begin{eqnarray*} J(\hat{u})-J(u)=E\bigg(\int_{0}^{T}\bigg({f_{1}(t,\hat{X}(t))-f_{1}(t,X(t))}\bigg)dt+f_{2}(\hat{X}(T),\theta(T),Y(T))-f_{2}(X(T),\theta(T),Y(T))\bigg). \end{eqnarray*}
By use of concavity of $f_{2}(\cdot,i,y)$ we have for each $i \in \mathcal{X},~ y \in \mathbb{R}_{+}$ and (\ref{3.5}) to obtain the inequalities, \begin{eqnarray*} E\bigg(f_{2}(\hat{X}(T),\theta(T),Y(T))-f_{2}({X}(T),\theta(T),Y(T))\bigg) & \geq & E\bigg((\hat{X}(T)-X(T))^{'}\nabla_{x}f_{2}(\hat{X}(T),\theta(T),Y(T))\bigg) \nonumber \\ &\geq & E \bigg((\hat{X}(T)-X(T))^{'}\hat{p}(T)\bigg). \end{eqnarray*} which gives \begin{eqnarray}\label{4.6} J(\hat{u})-J(u) \geq E {\int_{0}^{T}{\bigg(f_{1}(t,\hat{X}(t))-f_{1}(t,X(t))\bigg)}}dt + E\bigg((\hat{X}(T)-X(T))^{'}\hat{p}(T)\bigg). \end{eqnarray} We now expand the above equation (\ref{4.6}) term by term. For the first term in this equation we use the definition of $\mathcal{H}$ as in (\ref{3.4}) to obtain \begin{eqnarray}\label{4.7} &&E\int_{0}^{T}{\bigg(f_{1}(t,\hat{X}(t))-f_{1}(t,X(t))\bigg)}dt \nonumber\\ &=& E\int_{0}^{T}\bigg(\mathcal{H}(t,\hat{X}(t),\hat{u}(t),\theta(t),\hat{p}(t),\hat{q}(t),\hat{\eta}(t,\gamma))\nonumber \\ &-&\mathcal{H}(t,{X}(t),{u}(t),\theta(t),{p}(t),{q}(t),{\eta}(t,\gamma))\bigg)dt \nonumber \\ &-&E\int_{0}^{T}\bigg[\bigg(b(t,\hat{X}(t))-b(t,{X}(t))\nonumber\\ &-&\int_{\Gamma}{\bigg(g(t,\hat{X}(t-),\hat{u}(t-),\theta(t-),\gamma)-g(t,{X}(t-),{u}(t-),\theta(t-),\gamma)\bigg)}\pi(d\gamma)\bigg)\hat{p}(t)\nonumber\\ &+&tr\bigg((\sigma(t,\hat{X}(t))-\sigma(t,X(t)))^{'}\hat{q}(t)\bigg)\nonumber \\ &+&\int_{\Gamma}(g(t,\hat{X}(t-),\hat{u}(t-),\theta(t-),\gamma)-g(t,{X}(t-),{u}(t-),\theta(t-),\gamma))^{'}\eta(t,\gamma)\pi(d\gamma)\bigg]dt. \nonumber \\ \end{eqnarray} To expand the second term on the right hand side of (\ref{4.6}) we begin by applying the integration by parts formula to get, \begin{eqnarray*} (\hat{X}(T)-X(T))^{'}\hat{p}(T)&=& \int_{0}^{T}{(\hat{X}(t)-X(t))^{'}}d\hat{p}(t)\\ &+&\int_{0}^{T}{\hat{p}^{'}(t)d(\hat{X}(t)-X(t))}+[\hat{X}-X,\hat{p}](T). \end{eqnarray*} Substitute for $X$, $\hat{X}$ and $\hat{p}$ from (\ref{3.1}) and (\ref{3.5}) to obtain, \begin{eqnarray*} &&(\hat{X}(T)-X(T))^{'}\hat{p}(T) \\ &=&\int_{0}^{T}(\hat{X}(t)-X(t))^{'}\bigg(-\nabla_{x}\mathcal{H}(t,\hat{X}({t}),\hat{u}({t}),\hat{p}({t}),\hat{q}(t),\hat{\eta}({t,\gamma}))dt+\hat{q}^{'}(t)dW(t)\\ &+&\int_{\Gamma}{\hat{\eta}(t,\gamma)\tilde{N}(dt,d\gamma)}+\int_{\mathbb{R}}{\hat{\tilde{\eta}}(t,z)\tilde{N}_{1}(dt,dz)}\bigg)\\ &+&\int_{0}^{T}\hat{p}^{'}(t)\bigg\{\bigg(\bigg(b(t,\hat{X}(t))-b(t,X(t))\bigg) -\int_{\Gamma}\bigg(g(t,\hat{X}(t),\hat{u}(t-),\theta({t-}),\gamma)\\ &-&g(t,{X}(t-),u(t-),\theta({t-}),\gamma)\bigg)\pi(d\gamma)\bigg)dt\\ &+&\bigg(\sigma(t,\hat{X}(t))-\sigma(t,X(t))\bigg)^{'}dW(t)\\ &+&\int_{\Gamma}{\bigg(g(t,\hat{X}(t-),\hat{u}(t-),\theta({t-}),\gamma)-g(t,{X}(t-),u(t-),\theta({t-}),\gamma)\bigg)}\tilde{N}(dt,d\gamma)\bigg\}\\ &+&\int_{0}^{T}\bigg[tr\bigg(\hat{q}^{'}(t)\bigg(\sigma(t,\hat{X}(t))-\sigma(t,X(t))\bigg)\bigg)\\ &+&\int_{\Gamma}\bigg({g(t,\hat{X}(t),\hat{u}(t-),\theta({t-}),\gamma)-g(t,{X}(t),u(t-),\theta({t-}),\gamma)}\bigg)^{'}\eta(t,\gamma)\pi(d\gamma)\bigg]dt. \end{eqnarray*} Due to integrability conditions ({4.1})-({4.4}), the integral with respect to the Brownian motion and the Poisson random measure are square integrable martingales which are null at the origin. 
Thus taking expectations we obtain \begin{eqnarray*} E\bigg((\hat{X}(T)&-&X(T))^{'}\hat{p}(T)\bigg) \\ &=&\int_{0}^{T}(\hat{X}(t)-X(t))^{'}\bigg(-\nabla_{x}\mathcal{H}(t,\hat{X}({t}),\hat{u}({t}),\hat{p}({t}),\hat{q}(t),\hat{\eta}({t,\gamma}))\bigg)dt\\ &+&\int_{0}^{T}\bigg[\hat{p}^{'}(t)\bigg(b(t,\hat{X}(t))-b(t,X(t)) -\int_{\Gamma}\bigg(g(t,\hat{X}(t-),\hat{u}(t-),\theta({t-}),\gamma)\\ &-&g(t,{X}(t),u(t-),\theta({t-}),\gamma)\bigg)\pi(d\gamma)\bigg)\\ &+&\int_{0}^{T} tr\bigg(\hat{q}^{'}(t)(\sigma(t,\hat{X}(t))-\sigma(t,X(t)))\bigg)\\ &+&\int_{\Gamma}{\bigg(\bigg({g(t,\hat{X}(t-),\theta({t-}),u(t-),\gamma)-g(t,{X}(t-),\theta({t-}),u(t-),\gamma)}\bigg)^{'}\eta(t,\gamma))\bigg)\pi(d\gamma)}\bigg]dt. \end{eqnarray*} \begin{eqnarray*}
\end{eqnarray*} Substitute the last equation and (\ref{4.7}) into the inequality (\ref{4.6}) to find after cancellation that \begin{eqnarray}\label{4.8} J(\hat{u})-J(u) & \geq & E\int_{0}^{T}\bigg(\mathcal{H}(t,\hat{X}(t),\hat{u}(t),\theta(t),\hat{p}(t),\hat{q}(t),\hat{\eta}(t,\gamma))-\mathcal{H}(t,{X}(t),{u}(t),\theta(t),{p}(t),{q}(t),\eta(t,\gamma))\nonumber\\ &-&(\hat{X}(t)-X(t))^{'}\nabla_{x}\mathcal{H}(t,\hat{X}(t),\hat{u}(t),\theta(t),\hat{p}(t),\hat{q}(t),\hat{\eta}(t,\gamma))\bigg)dt. \end{eqnarray} We can show that the integrand on the RHS of (\ref{4.8}) is non-negative a.s. for each $t \in [0,T]$ by fixing the state of the semi-Markov process and then using the assumed concavity of $\hat{\mathcal{H}}(x)$, we apply the argument in Framstad et al. \cite{Fr} . This gives $J(\hat{u}) \geq J(u)$ and $\hat{u}$ is an optimal control.$\qed$ \section{Connection to the Dynamic programming} We show the connection between the stochastic maximum principle and dynamic programming principle for the semi-Markov modulated regime switching jump diffusion. This tantamounts to explicitly showing connection between the value function $V(t,x,i,y)$ of the control problem and the adjoint processes $p(t), q(t)$ ,$\eta(t,\gamma)$ and $\tilde{\eta}(t,z)$. In order to apply the dynamic programming principle we put the problem into a Markovian framework by defining \begin{eqnarray}\label{5.1} J^{u}(t,x,i,y) \triangleq E^{X(t)=x,\theta(t)=i,Y(t)=y}[\int_{t}^{T}{f_{1}(t,X({t}),u(t),\theta({t}),Y({t}))dt+f_{2}(X(T),\theta(T),Y(T))}]. \end{eqnarray} and put \begin{eqnarray}\label{5.2} V(t,x,i,y)=\sup_{u \in \mathcal{A}(T)}J^{u}(t,x,i,y)~~~~\forall~~(t,x,i,y) \in [0,T] \times \mathbb{R}^{r} \times \mathcal{X}\times \mathbb{R}_{+}. \end{eqnarray}\\ {\bf Theorem 5.1}~~\textit{Assume that $V(\cdot,\cdot,i,\cdot)\in \mathcal{C}^{1,{3},1}([0,T]\times \mathbb{R}^{r}\times \mathcal{X}\times \mathbb{R}_{+})$ for each $i,j \in \mathcal{X}$ and that there exists an optimal Markov control $\hat{u}(t,x,i,y)$ for (\ref{5.2}), with the corresponding solution $\hat{X}=X^{(\hat{u})}$. Define \begin{eqnarray}\label{5.3} p_{k}(t) &\triangleq & \frac{\partial V}{\partial x_{k}}(t,\hat{X}(t),\theta(t),Y(t)). \end{eqnarray} \begin{eqnarray}\label{5.4} q_{kl}(t) &\triangleq & \sum_{i=1}^{r}{\sigma_{il}(t,\hat{X}(t),\hat{u}(t),\theta(t))\frac{\partial^{2} V}{{\partial x_{i}}{\partial x_{k}}}(t,\hat{X}(t),\theta(t),Y(t))}. \end{eqnarray} \begin{eqnarray}\label{5.5} \eta^{(k)}(t,\gamma) &\triangleq & \frac{\partial V}{\partial x_{k}}(t,\hat{X}(t),j,Y(t))-\frac{\partial V}{\partial x_{k}}(t,\hat{X}(t),i,Y(t)). \end{eqnarray} \begin{eqnarray}\label{5.6} \tilde{\eta}^{(k)}(t,z) &\triangleq & \frac{\partial V}{\partial x_{k}}(t,\hat{X}(t-),\theta(t-)+\bar{h}(\theta({t-}),Y({t-}),z),Y({t-})-\bar{g}(\theta({t-}),Y({t-}),z))\nonumber \\ &-&\frac{\partial V}{\partial x_{k}}(t,\hat{X}({t-}),\theta({t-}),Y({t-})). \end{eqnarray} {for each $(k,l =1,...,r)$. 
We also assume that the coefficients $b(t,x,u,i)$, $\sigma(t,x,u,i)$ and $g(t,x,u,i,\gamma)$ belong to $C^{1}_{b}(\mathbb{R}^{r})$.} Then $p(t), q(t), \eta(t,\gamma)$ and $\tilde{\eta}(t,z)$ solve the adjoint equation (\ref{3.5}).}\\\\ We prove this theorem using the following Ito's formula.\\ {\bf Theorem 5.2}~~{Suppose the $r$-dimensional process $X(t)=(X_{1}(t),...,X_{r}(t))$, with components $X_{g}(t)$ indexed by $g=1,2,...,r$, satisfies the equation \begin{eqnarray*} dX_{g}(t)=b_{g}(t,X(t),u(t),\theta(t))dt+\sum_{m=1}^{r}{\sigma_{gm}(t,X(t),u(t),\theta(t))}dW_{m}(t)+\int_{\Gamma}{g_{g}(t,X(t-),u(t),\theta(t-),\gamma)}{N}(dt,d\gamma) \end{eqnarray*} with $X(0)= x_{0} \in \mathbb{R}^{r}$ a.s. Further assume that the coefficients $b, \sigma, g$ satisfy the conditions of Assumption (A1).\\
Let $ V(\cdot,\cdot,i,\cdot)~\in~C^{1,{3},1}([0,T] \times \mathbb{R}^{r}\times \mathcal{X}\times \mathbb{R}_{+})$. Then the generalized Ito's formula is given by \begin{eqnarray*} &&V(t,X({t}),\theta({t}),Y({t}))- V(t,x,\theta,y)=\int_{0}^{t}{G V(s,X({s}),\theta({s}),Y({s}))ds}\\ &+&\int_{0}^{t}{(\nabla_{x} V(s,X({s}),\theta({s}),Y({s})))'\sigma(s,X({s}),\theta({s}))dW({s})} \\ &+& \int_{0}^{t}\int_{\Gamma}[V(s,X({s-})+g(s,X({s-}),u(s),\theta({s-}),\gamma),\theta({s-}),Y({s-}))\\ &-&V(s,X({s-}),\theta({s-}),Y({s-}))]\tilde{N}(ds,d\gamma) \\ &+&\int_{0}^{t}\int_{\mathbb{R}}[V(s,X({s-}),\theta({s-})+\bar{h}(\theta({s-}),Y({s-}),z),Y({s-})-\bar{g}(\theta({s-}),Y({s-}),z))\\
&-& V(s,X({s-}),\theta({s-}),Y({s-}))]\tilde{N}_{1}(ds,dz), \end{eqnarray*} where the local martingale terms are explicitly defined as \\ \begin{eqnarray*} dM_{1}(t)&\triangleq &{(\nabla_{x} V(t,X({t}),\theta({t}),Y({t})))'\sigma(t,X({t}),u(t),\theta({t}))dW(t)},\\ dM_{2}(t)&\triangleq &\int_{\Gamma}{[V(t,X({t-})+g(t,X({t-}),u(t),\theta({t-}),\gamma),\theta({t-}),Y({t-}))-V(t,X({t-}),\theta({t-}),Y({t-}))]\tilde{N}(dt,d\gamma)},\\ dM_{3}(t)&\triangleq &\int_{\mathbb{R}}[V\bigg(t,X({t-}),\theta({t-})+\bar{h}(\theta({t-}),Y({t-}),z),Y({t-})-\bar{g}(\theta({t-}),Y(t-),z)\bigg)\\ &-&V(t,X({t-}),\theta({t-}),Y({t-}))]\tilde{N}_{1}(dt,dz), \end{eqnarray*} where \begin{eqnarray*} G V(t,x,i,y)&=&\frac{\partial V(t,x,i,y)}{\partial t}\\&+&\frac{1}{2}\sum_{g,l=1}^{r}{a_{gl}(t,x,i)\frac{\partial^{2} V(t,x,i,y)}{\partial x_{g}\partial x_{l}}}\\&+&\sum_{g=1}^{r}{b_{g}(t,x,i)\frac{\partial V(t,x,i,y)}{\partial x_{g}}} \\
&+&\frac{\partial V(t,x,i,y)}{\partial y}\\&+&\frac{f^{h}(y|i)}{1-F^{h}(y|i)}\sum_{j \in \mathcal{X},\, j \neq i}{p_{ij}[V(t,x,j,0)-V(t,x,i,y)]} \\ &+&\lambda{\int_{\Gamma}{({V(t,x+g(t,x,i,\gamma),i,y)}-{V(t,x,i,y)})}\pi(d\gamma)},\\ \forall~t~\in~[0,T],~x \in \mathbb{R}^{r},~i = 1,\ldots,M,~y \in \mathbb{R}_{+}.
\end{eqnarray*} } {\textit{Proof}}~~ For details refer to Theorem 5.1 in Ikeda and Watanabe \cite{IW}. $\qed$\\ {\textit{Proof of Theorem 5.1}}~~From the standard theory of dynamic programming, the following HJB equation holds: \begin{eqnarray*} \frac{\partial V}{\partial t}(t,x,i,y)+\sup_{u \in \mathcal{U}}\{f_{1}(t,x,u,i,y)+\mathcal{A}^{u}V(t,x,i,y)\}=0,\\ V(T,x,i,y)=f_{2}(x,i,y), \end{eqnarray*} where $\mathcal{A}^{u}$ is the infinitesimal generator and the supremum is attained by $\hat{u}(t,x,i,y)$. Define \begin{eqnarray*} F(t,x,u,i,y)=f_{1}(t,x,u,i,y)+\frac{\partial V}{\partial t}(t,x,i,y)+\mathcal{A}^{u}V(t,x,i,y). \end{eqnarray*} We assume that $f_{1}$ is differentiable w.r.t.\ $x$. We use the Ito's formula of Theorem 5.2 to get \begin{eqnarray} F(t,x,u,i,y)&=&f_{1}(t,x,u,i,y)+\frac{\partial V}{\partial t}(t,x,i,y)\nonumber \\ &+&\sum_{k=1}^{r}{\frac{\partial V}{\partial x_{k}}(t,x,i,y)b_{k}(t,x,u,i) }+ \frac{1}{2}\sum_{k=1}^{r}\sum_{l=1}^{r}{\frac{\partial^{2}V}{\partial x_{k} \partial x_{l}}}(t,x,i,y)\sum_{m=1}^{r}{\sigma_{km}(t,x,u,i)\sigma_{lm}(t,x,u,i)}\nonumber \\
&+&\sum_{j \neq i,i=1}^{M}{\frac{p_{ij}f^{h}(y|i)}{1-F^{h}(y|i)}}{(V(t,x,j,0)-V(t,x,i,y))}+\frac{\partial V}{\partial y}(t,x,i,y) \nonumber \\ &+& \lambda \int_{\Gamma}{(V(t,x+g(t,x,u,i,\gamma),i,y)-V(t,x,i,y))}\pi(d\gamma). \end{eqnarray} Differentiate $F(t,x,\hat{u}(t,x,i,y),i,y)$ with respect to $x_{g}$ and evaluate at $x=\hat{X}(t)$, $i=\theta(t)$ and $y=Y(t)$, we get, \begin{eqnarray}\label{5.8} 0&=&\frac{\partial f_{1}}{\partial x_{g}}(t,\hat{X}(t),\hat{u}(t,\hat{X}(t),\theta(t),Y(t)),\theta(t),Y(t))\nonumber\\ &+& \frac{\partial^{2}V}{\partial x_{g} \partial t}(t,\hat{X}(t),\theta(t),Y(t))+\sum_{k=1}^{r}{\frac{\partial^{2}V}{\partial x_{g} \partial x_{k}}(t,\hat{X}(t),\theta(t),Y(t))b_{k}(t,\hat{X}(t),\hat{u}(t,\hat{X}(t),\theta(t),Y(t)),\theta(t))}\nonumber\\ &+&\sum_{k=1}^{r}{\frac{\partial V}{\partial x_{k}}(t,\hat{X}(t),\theta(t),Y(t))\frac{\partial b_{k}}{\partial x_{g}}(t,\hat{X}(t),\hat{u}(t,\hat{X}(t),\theta(t),Y(t)),\theta(t))}\nonumber\\ &+&\frac{1}{2}\sum_{k=1}^{r}\sum_{l=1}^{r}{\frac{\partial^{3}V}{\partial x_{g} \partial x_{k} \partial x_{l}}(t,\hat{X}(t),\theta(t),Y(t))}\nonumber\\&\times&{\sum_{i=1}^{r}{\sigma_{k,i}(t,\hat{X}(t),\hat{u}(t,\hat{X}(t),\theta(t),Y(t)),\theta(t))\sigma_{l,i}(t,\hat{X}(t),\hat{u}(t,\hat{X}(t),\theta(t),Y(t)),\theta(t))}}\nonumber\\ &+&\frac{1}{2}\sum_{k=1}^{r}\sum_{l=1}^{r}{\frac{\partial^{2}V}{\partial x_{k} \partial x_{l} }(t,\hat{X}(t),\hat{u}(t,\hat{X}(t),\theta(t),Y(t)),\theta(t),Y(t))}\nonumber\\&\times&{\frac{\partial}{\partial x_{g}}\sum_{i=1}^{r}{\sigma_{k,i}(t,\hat{X}(t),\hat{u}(t,\hat{X}(t),\theta(t),Y(t)),\theta(t))\sigma_{l,i}(t,\hat{X}(t),\hat{u}(t,\hat{X}(t),\theta(t),Y(t)),\theta(t))}}\nonumber \\
&+&\sum_{j \neq i, j \in \mathcal{X}}^{M}{\frac {p_{ij}f^{h}(y|i)}{1-F^{h}(y|i)}\bigg(\frac{\partial V}{\partial x_{g}}(t,\hat{X}(t),j,0)-\frac{\partial V}{\partial x_{g}}(t,\hat{X}(t),i,y)\bigg)}\nonumber\\ &+&\lambda \int_{\Gamma}\bigg({\frac{\partial V}{\partial x_{g}}(t,\hat{X}(t)+g(t,\hat{X}(t),\theta(t),\gamma),\theta(t),Y(t))-\frac{\partial V}{\partial x_{g}}(t,\hat{X}(t),\theta(t),Y(t))}\bigg)\pi(d\gamma). \end{eqnarray} Next define, $Y_{g}=\frac{\partial V}{\partial x_{g}}(t,\hat{X}(t),\theta(t),Y(t))$ for ($g =1,...,r$). By Ito's formula (Theorem 5.2) we obtain the dynamics of $Y_{g}(t)$ as follows, \begin{eqnarray*} dY_{g}(t)&=&\bigg\{\frac{\partial^{2}V}{\partial {x_{g}} \partial{t}}(t,\hat{X}(t),\theta(t),Y(t))+\sum_{k=1}^{r}{\frac{\partial^{2}V}{\partial {x_{g}} \partial{x_{k}}}(t,\hat{X}(t),\theta(t),Y(t))b_{k}(t,\hat{X}(t),\hat{u}(t,\hat{X}(t),\theta(t),Y(t)),\theta(t))}\nonumber\\ &+&\frac{1}{2}\sum_{k=1}^{r}\sum_{l=1}^{r}{\frac{\partial^{3}V}{\partial x_{g} \partial x_{k} \partial x_{l}}(t,\hat{X}(t),\theta(t),Y(t))}\nonumber\\&\times&{\sum_{i=1}^{r}{\sigma_{ki}(t,\hat{X}(t),\hat{u}(t,\hat{X}(t),\theta(t),Y(t)),\theta(t)) \times \sigma_{li}(t,\hat{X}(t),\hat{u}(t,\hat{X}(t),\theta(t),Y(t)),\theta(t))}}\\
&+&\sum_{j \neq i, j =1}^{M}{\frac {p_{ij} f^{h}(y|i)}{1-F^{h}(y|i)}(\frac{\partial V}{\partial x_{g}}(t,\hat{X}(t),j,0)-\frac{\partial V}{\partial x_{g}}(t,\hat{X}(t),i,y))}\nonumber \\ &+&\lambda \int_{\Gamma}{\bigg(\frac{\partial V}{\partial x_{g}}(t,\hat{X}(t)+g(t,\hat{X}(t),\hat{u}(t,\hat{X}(t),\theta(t),Y(t)),\theta(t),\gamma),\theta(t),Y(t))}\\&-&{\frac{\partial V}{\partial x_{g}}(t,\hat{X}(t),\theta(t),Y(t))\bigg)}\pi(d\gamma)\bigg\}dt\\ &+&\sum_{k=1}^{r}\frac{{\partial^{2}V}}{{\partial x_{g} \partial x_{k}}}(t,\hat{X}(t),\theta(t),Y(t))\sum_{j=1}^{r}{\sigma_{kj}(t,\hat{X}(t),\hat{u}(t,\hat{X}(t),\theta(t),Y(t)),\theta(t))dW_{j}(t)}\nonumber \\ &+&\int_{\Gamma}\bigg\{\frac{\partial V}{\partial x_{g}}(t,\hat{X}(t-)+g(t,\hat{X}(t-),\hat{u}(t,\hat{X}(t),\theta(t),Y(t)),\theta(t-),\gamma),\theta(t-),Y(t-))\\ &-&\frac{\partial V}{\partial x_{g}}(t,\hat{X}(t-),\theta(t-),Y(t-))\bigg\}\tilde{N}(dt,d\gamma)\\ &+&\int_{\mathbb{R}}\bigg\{\frac{\partial V}{\partial x_{g}}((t,X(t-),\theta(t-)+\bar{h}(\theta(t-),Y(t-),z),Y(t-)-\bar{g}(\theta(t-),Y(t-),z)))\\&-&\frac{\partial {V}}{\partial x_{g}}(t,\hat{X}(t-),\theta(t-),Y(t-))\bigg\}{\tilde{N}_{1}(dt,dz)}. \end{eqnarray*} We substitute $\frac{\partial^{2}V}{\partial{x_{g}}\partial t } $ from (\ref{5.8}) to get, \begin{eqnarray}\label{5.9} dY_{g}(t)&=& -\frac{\partial f_{1}}{\partial x_{g}}(t,\hat{X}(t),\hat{u}(t,\hat{X}(t),\hat{u}(t,\hat{X}(t),\theta(t),Y(t)),\theta(t),Y(t)))\nonumber\\ &-&\sum_{k=1}^{r}{\frac{\partial V}{\partial x_{k}}(t,\hat{X}(t),\theta(t),Y(t))\frac{\partial b_{k}}{\partial {x_{g}}}(t,\hat{X}(t),\hat{u}(t,\hat{X}(t),\theta(t),Y(t)),\theta(t))}\nonumber \\ &-&\frac{1}{2}\sum_{k=1}^{r}\sum_{l=1}^{r}{\frac{\partial^{2}V}{\partial x_{k}\partial x_{l}}(t,\hat{X}(t),\hat{u}(t,\hat{X}(t),\theta(t),Y(t)),\theta(t),Y(t))}\nonumber\\&\times&{\frac{\partial}{\partial x_{g}}(\sum_{k=1}^{r}{\sigma_{ki}(t,\hat{X}(t),\theta(t))}{ \sigma_{li}(t,\hat{X}(t),\hat{u}(t,\hat{X}(t),\theta(t),Y(t)),\theta(t))})}\nonumber\\ &+& \sum_{k=1}^{r}{\frac{\partial^{2}V}{\partial x_{g}\partial x_{k}}(t,\hat{X}(t),\theta(t),Y(t))\sum_{j=1}^{r}{\sigma_{kj}(t,\hat{X}(t),\hat{u}(t,\hat{X}(t),\theta(t),Y(t)),\theta(t))dW_{j}(t)}}\nonumber\\ &+&\int_{\Gamma}\bigg\{(\frac{\partial V}{\partial x_{g}}(t,\hat{X}(t-)+g(t,X(t-),\hat{u}(t,\hat{X}(t),\theta(t),Y(t)),\theta(t-),\gamma),\theta(t-),Y(t-))\nonumber\\ &-&\frac{\partial V}{\partial x_{g}}(t,\hat{X}(t-),\theta(t-),Y(t-)))\bigg\}\tilde{N}(dt,d\gamma)\nonumber\\ &+&\int_{\mathbb{R}}\bigg\{\frac{\partial V}{\partial x_{g}}((t,X(t-),\theta(t-)+\bar{h}(\theta(t-),Y(t-),z),Y(t-)-\bar{g}(\theta(t-),Y(t-),z)))\nonumber\\&-&\frac{\partial {V}}{\partial x_{g}}(t,\hat{X}(t-),\theta(t-),Y(t-))\bigg\}{\tilde{N}_{1}(dt,dz)}. \end{eqnarray} We have the following identity, \begin{eqnarray}\label{5.10} &&\frac{1}{2}\sum_{k=1}^{r}\sum_{l=1}^{r}{\frac{\partial^{2}V}{\partial x_{k}\partial x_{l}}(t,\hat{X}(t),\theta(t),Y(t))}\nonumber\\&\times&{\frac{\partial}{\partial x_{g}}\bigg(\sum_{i=1}^{r}{\sigma_{ki}(t,\hat{X}(t),\hat{u}(t,\hat{X}(t),\theta(t),Y(t)),\theta(t))\sigma_{li}(t,\hat{X}(t),\hat{u}(t,\hat{X}(t),\theta(t),Y(t)),\theta(t))}\bigg)}\nonumber \\ &=&\sum_{k=1}^{r}\sum_{l=1}^{r}\sum_{i=1}^{r}{\sigma_{il}(t,\hat{X}(t),\hat{u}(t,\hat{X}(t),\theta(t),Y(t)),\theta(t)){\frac{\partial^{2}V}{\partial x_{i}\partial x_{k}}(t,\hat{X}(t),\theta(t),Y(t))}}\nonumber\\&\times&{{\frac{\partial \sigma_{kl}}{\partial x_{g}}(t,\hat{X}(t),\hat{u}(t,\hat{X}(t),\theta(t),Y(t)),\theta(t))}}. 
\end{eqnarray} Next, from (\ref{3.4}) we obtain \begin{eqnarray}\label{5.11} &&\frac{\partial \mathcal{H}}{\partial x_{g}}(t,X(t),u(t),\theta(t),Y(t),p(t),q(t),\eta(t,\gamma))\nonumber \\&=&\frac{\partial f_{1}}{\partial x_{g}}(t,\hat{X}(t),\hat{u}(t,\hat{X}(t),\theta(t),Y(t)),\theta(t),Y(t))\nonumber\\ &+&\sum_{i=1}^{r}\bigg(\frac{\partial b_{i}}{\partial x_{g}}(t,\hat{X}(t-),\hat{u}(t,\hat{X}(t),\theta(t),Y(t)),\theta(t-))\nonumber\\&-&\int_{\Gamma}{\frac{\partial g_{i}}{\partial x_{g}}(t,X(t-),\hat{u}(t,\hat{X}(t),\theta(t),Y(t)),\theta(t-),\gamma)}\pi(d\gamma)\bigg)p_{i}(t)+tr(\frac{\partial \sigma^{'}(t,x,\hat{u},\theta(t))}{\partial x_{g}}q)\nonumber\\ &+&{\sum_{i=1}^{r}\int_{\Gamma}{\frac{\partial g_{i}}{\partial x_{g}}(t,X(t-),\theta(t-),\gamma)}\pi(d\gamma)(\eta^{(g)}_{i}(t,\gamma))}. \end{eqnarray} We also note that \begin{eqnarray*} tr(\frac{\partial \sigma^{'}(t,x,u,i)}{\partial x_{g}}q)&=&\sum_{l=1}^{r}{[\frac{\partial \sigma^{'}(t,x,u,i)}{\partial x_{g}}q ]_{ll}}\\ &=&\sum_{l=1}^{r}\sum_{k=1}^{r}q_{k,l}\frac{\partial \sigma_{kl}}{\partial x_{g} }(t,x,u,i). \end{eqnarray*} Substituting (\ref{5.3})-(\ref{5.6}) and (\ref{5.11}) gives \begin{eqnarray} dY_{g}(t)&=&-\frac{\partial\mathcal{H}}{\partial x_{g}}(t,X(t),u(t),\theta(t),Y(t),p(t),q(t),\eta(t,\gamma))dt+\sum_{j=1}^{r}{q_{gj}}(t)dW_{j}(t)\nonumber\\ &+&\int_{\Gamma}\eta(t,\gamma) \tilde{N}(dt,d\gamma)+\int_{\mathbb{R}}\tilde{\eta}(t,z) \tilde{N}_{1}(dt,dz). \end{eqnarray} Since $Y_{g}(t)=p_{g}(t)$ for each $g=1,...,r$, we have shown that $p(t),q(t),\eta(t,\gamma)$ and $\tilde{\eta}(t,z)$ solve the adjoint equation (\ref{3.5}). $\qed$\\ \section{Applications} We illustrate the theory developed above by applying it to some key financial wealth optimization problems. As a first illustration of the sufficient maximum principle, we consider wealth dynamics that follow a semi-Markov modulated diffusion (the no-jump case) and apply the principle to a risk-sensitive portfolio optimization problem. We then illustrate an application of semi-Markov modulated jump-diffusion wealth dynamics to a quadratic loss minimization problem. Unless otherwise stated, all the processes defined in this section are one-dimensional.\\ {\bf Risk-sensitive control portfolio optimization}~~Let us consider a financial market consisting of two continuously traded securities, namely a riskless bond and a stock. The riskless bond evolves as \begin{eqnarray*} dS_{0}(t)=r(t,\theta(t-))S_{0}(t)dt,~~~S_{0}(0)=1, \end{eqnarray*} where $r(t,\theta(t))$ is the risk-free interest rate at time $t$, modulated by the underlying semi-Markov process as described earlier. The stock price dynamics are given by \begin{eqnarray*} dS_{1}(t)=S_{1}(t)[\mu(t,\theta(t-))dt+\sigma(t,\theta(t-))dW(t)], \end{eqnarray*} where $\mu(t,\theta(t-))$ is the instantaneous expected rate of return and $\sigma(t,\theta(t-))$ is the instantaneous volatility. The stock price process is thus driven by a one-dimensional Brownian motion. We denote the wealth of the investor at time $t$ by $X(t) \in \mathbb{R}$. The investor holds $\theta_{1}(t)$ units of the stock and $\theta_{0}(t)=1-\theta_{1}(t)$ units of the riskless bond. 
By the self-financing principle (see Karatzas and Shreve \cite{KS}), the wealth process follows the dynamics \begin{eqnarray*} dX(t)=(r(t,\theta(t-))X(t)+h(t)\sigma(t,\theta(t-))\bar{m}(t,\theta(t-)))dt+h(t)\sigma(t,\theta(t-))dW(t),~~~X(0)=x, \end{eqnarray*} where $h(t)=\theta_{1}(t)S_{1}(t)$, { $\bar{m}(t,i)=\frac{\mu(t,i)-r(t,i)}{\sigma(t,i)} \geq 0$, and the coefficients $r(t,i)$, $b(t,i)$, $\sigma(t,i)$ and $\sigma^{-1}(t,i)$ for each $i \in \mathcal{X}$ are measurable and uniformly bounded in $t \in [0,T]$}. { Also, $h(\cdot)$ occurring in the drift and diffusion terms of the above dynamics of $X(t)$ satisfies the following conditions:\\
1. $E[\int_{0}^{T}{h^{2}(t)dt}]< \infty$\\
2. $E[\int_{0}^{T}{|r(t,\theta(t-))X(t)+h(t)\sigma(t,\theta(t-))\bar{m}(t,\theta(t-))|}dt+\int_{0}^{T}{h^{2}(t)\sigma^{2}(t,\theta(t-))}dt]< \infty$\\ 3. The SDE for $X$ has a unique strong solution.\\ These conditions on $h(\cdot)$ are needed in order to rule out doubling strategies, which would otherwise yield an arbitrarily large profit at time $T$.}\\
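To make the regime-switching wealth dynamics concrete, the following is a minimal simulation sketch (Python with NumPy assumed; the two-state parameter values, the Weibull sojourn times and the constant portfolio rule are purely illustrative choices, not part of the model above). It discretizes the wealth SDE $dX=(rX+h\sigma\bar{m})\,dt+h\sigma\,dW$ by an Euler--Maruyama step and switches the regime $\theta$ at semi-Markov sojourn times.
\begin{verbatim}
# Illustrative sketch (not from the paper): Euler-Maruyama simulation of
# the semi-Markov modulated wealth dynamics
#     dX = (r X + h*sigma*mbar) dt + h*sigma dW
# with two hypothetical regimes and Weibull sojourn times.
import numpy as np

rng    = np.random.default_rng(0)
r      = {0: 0.02, 1: 0.04}          # regime-dependent interest rate
sigma  = {0: 0.15, 1: 0.30}          # regime-dependent volatility
mbar   = {0: 0.30, 1: 0.10}          # regime-dependent market price of risk
P      = np.array([[0.0, 1.0],       # embedded transition matrix p_ij
                   [1.0, 0.0]])
T, dt  = 1.0, 1e-3
X, regime, t, t_next_jump = 1.0, 0, 0.0, rng.weibull(1.5)
h = lambda X: 0.5 * X                # a fixed (hypothetical) portfolio rule

while t < T:
    if t >= t_next_jump:             # semi-Markov regime switch
        regime = rng.choice(2, p=P[regime])
        t_next_jump = t + rng.weibull(1.5)
    dW = np.sqrt(dt) * rng.standard_normal()
    drift = r[regime] * X + h(X) * sigma[regime] * mbar[regime]
    X += drift * dt + h(X) * sigma[regime] * dW
    t += dt

print("terminal wealth X(T) =", X)
\end{verbatim}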
\indent In a classical risk-sensitive control optimization problem, the investor aims to maximize over some admissible class of portfolio $\mathcal{A}(T)$ the following risk-sensitive criterion given by \begin{eqnarray*}
J(\hat{h}(\cdot),x)&=&\max_{h \in \mathcal{A}(T)}{\frac{1}{\gamma}}\mathbb{E}[{X(T)}^{\gamma}|X(0)=x,\theta(0)=i,Y(0)=y],~~~\gamma \in (1,\infty) \\
&=& -\min_{h \in \mathcal{A}(T)} \bigg(-\frac{1}{\gamma}\bigg) \mathbb{E}[{X(T)}^{\gamma}|X(0)=x,\theta(0)=i,Y(0)=y], \end{eqnarray*} where the exogenous parameter $\gamma$ is the usual risk-sensitivity parameter describing the risk attitude of the investor. Thus the optimal expected utility depends on $\gamma$; this generalizes the traditional stochastic control approach to utility optimization in the sense that the degree of risk aversion of the investor is now explicitly parameterized through $\gamma$ rather than imported into the problem via an exogenous utility function. See Whittle \cite{Wh} for a general overview of risk-sensitive control optimization. We now use the sufficient maximum principle (Theorem 4.1). Set the control $u(t)\triangleq h(t)$.\\ The corresponding Hamiltonian (\ref{3.4}) (for the no-jump case) becomes \begin{eqnarray*} \mathcal{H}(t,x,u,i,p,q)=(r(t,i)x+u\sigma(t,i)\bar{m}(t,i))p+u\sigma(t,i)q. \end{eqnarray*} The adjoint process (\ref{3.5}) is given by \begin{eqnarray}\label{6.1} dp(t)&=&-r(t,\theta(t-))p(t)dt+q(t)dW(t)+\int_{\mathbb{R}}{\tilde{\eta}(t,z)\tilde{N}_{1}(dt,dz)},\nonumber \\ p(T)&=&X(T)^{\gamma-1}~~~~a.s. \end{eqnarray} We need to determine $p(t)$, $q(t)$ and $\tilde{\eta}(t,z)$ in (\ref{6.1}). Guided by the terminal condition $p(T)$, which is proportional to the first derivative of $x^{\gamma}$, we look for $p(t)$ of the form \begin{eqnarray*} p(t)=(X(t))^{\gamma-1}e^{{\phi(t,\theta(t),Y(t))}}, \end{eqnarray*} where $\phi(T,\theta(T)=i,Y(T))=0~~~a.s.$ for each $i \in \{1,...,M\}$. Using the Ito's formula we get \begin{eqnarray}\label{6.2} &&\frac{dp(t)}{p(t)}=\sum_{i=1}^M {1}_{\theta(t-)=i}\bigg((\gamma-1)\bigg\{(r(t,\theta(t-))+\frac{u(t)\sigma(t,\theta(t-))\bar{m}(t,\theta(t-))}{X(t)}\bigg)\nonumber\\ &+&\frac{1}{2}(\gamma-1)(\gamma-2)\sigma^{2}(t,\theta(t-))\frac{u^{2}(t)}{X^{2}(t)} \nonumber \\
&+&\phi_{t}(t,\theta(t-),y)+\phi_{y}(t,\theta(t-),y)+\frac{f^{h}(y|\theta(t-)=i)}{1-F^{h}(y|\theta(t-)=i)}\sum_{j \neq i}{p_{ij}(\phi(t,j,0)-\phi(t,\theta(t-),y))}\bigg\}dt \nonumber \\ &+&{(\gamma-1)\frac{u(t)}{X(t)}\sigma(t,\theta(t-))}dW(t) \nonumber \\ &+&\int_{\mathbb{R}}\bigg(\phi(t,\theta(t-)+\bar{h}(\theta(t-),Y(t-),z),Y(t-)-\bar{g}(\theta(t-),Y(t-),z))\nonumber\\ &-&\phi(t,\theta(t-),Y(t-))\bigg)\tilde{N}_{1}(dt,dz).\nonumber\\ \end{eqnarray} Comparing the coefficients in (\ref{6.2}) with those in (\ref{6.1}) we get \begin{eqnarray}\label{6.3} -r(t,\theta(t-))&=&\sum_{i=1}^M {1}_{\theta(t-)=i}\bigg((\gamma-1)\bigg(r(t,\theta(t-))+\frac{u(t)\sigma(t,\theta(t-))\bar{m}(t,i)}{X(t)}\bigg)+\frac{1}{2}(\gamma-1)(\gamma-2)\frac{u^{2}(t)}{X^{2}(t)} \nonumber \\
&+& \phi_{t}(t,\theta(t-),y)+\phi_{y}(t,\theta(t-),y)+\frac{f^{h}(y|i)}{1-F^{h}(y|\theta(t-)=i)}\sum_{j \neq i}{p_{ij}(\phi(t,j,0)-\phi(t,\theta(t-),y))}\bigg).\nonumber\\ \end{eqnarray} \begin{eqnarray}\label{6.4} {q}(t)=(\gamma-1)\frac{u(t)}{X(t)}\sigma(t,\theta(t-)){p}(t). \end{eqnarray} \begin{eqnarray}\label{6.5} \tilde{\eta}(t,z)&=&\bigg(\phi(t,\theta(t-)+\bar{h}(\theta(t-),Y(t-),z),Y(t-)-\bar{g}(\theta(t-),Y(t-),z))\nonumber \\ &-&\phi(t,\theta(t-),Y(t-))\bigg)p(t). \end{eqnarray} Let $\hat{u} \in \mathcal{A}(T)$ be a candidate optimal control corresponding to the wealth process $\hat{X}$ and the adjoint triplet ($\hat{p},\hat{q},\hat{\eta}$), then from the Hamiltonian (\ref{3.4}) for all $u \in \mathbb{R}$ we have \begin{eqnarray}\label{6.6} \mathcal{H}(t,\hat{X}(t),u,\theta(t),\hat{p}(t),\hat{q}(t))=\bigg(r(t,\theta(t))\hat{X}(t)+u\sigma(t,\theta(t)) \bar{m}(t,\theta(t))\bigg)\hat{p}(t)+u\sigma(t,\theta(t))\hat{q}(t). \end{eqnarray} As this is a linear function of $u$, we guess that the coefficient of $u$ vanishes at optimality, which results in the equality \begin{eqnarray}\label{6.7} \bar{m}(t,\theta(t-))\hat{p}(t)+\hat{q}(t)=0. \end{eqnarray} Substitute equation (\ref{6.7}) in (\ref{6.4}) to obtain the expression for the control as \begin{eqnarray}\label{6.8} \hat{u}(t)=\frac{\bar{m}(t,\theta(t-))}{(1-\gamma)\sigma(t,\theta(t-))}\hat{X}(t). \end{eqnarray} We now aim to determine the explicit expression for ${p}(t)$ which is only possible if we can determine what $\phi(t,\theta(t),Y(t))$ is. We substitute $\hat{u}$ from above and input it in equation (\ref{6.3}) to get \begin{eqnarray}\label{6.9} 0&=&\gamma r(t,\theta(t-))-{\bar{m}^{2}(t,\theta(t-))}+\frac{(2-\gamma)}{(1-\gamma)}\frac{\bar{m}^{2}(t,\theta(t-))}{2\sigma^{2}(t,\theta(t-))}\nonumber\\
&+& \phi_{t}(t,\theta(t-),y)+\phi_{y}(t,\theta(t-),y)+\frac{f^{h}(y|\theta(t-)=i)}{1-F^{h}(y|\theta(t-)=i)}\sum_{i=1,j \neq i}^{M}{p_{ij}(\phi(t,j,0)-\phi(t,\theta(t-),y))}.\nonumber\\ \end{eqnarray} with terminal boundary condition given as $\phi(T,\theta(T),Y(T))=0$~~~a.s. Consider the process \begin{eqnarray}\label{6.10} \tilde{\phi}(t,\theta(t),Y(t)) \triangleq
E\bigg[\exp\bigg(\int_{t}^{T}\bigg\{\gamma r(s,\theta(s))-{\bar{m}^{2}(s,\theta(s))}+\frac{(2-\gamma)}{(1-\gamma)}\frac{\bar{m}^{2}(s,\theta(s))}{{2\sigma^{2}(s,\theta(s))}}\bigg\}ds\bigg)|{\theta(t-)=i,Y(t-)=y}\bigg].\nonumber\\ \end{eqnarray} We aim to show that $\phi=\tilde{\phi}$. For the same we define the following martingale, \begin{eqnarray}\label{6.11}
R(t)\triangleq E\bigg[\exp\bigg(\int_{0}^{T}{\bigg\{\gamma r(s,\theta(s))-{\bar{m}^{2}(s,\theta(s))}+\frac{(2-\gamma)}{(1-\gamma)}\frac{\bar{m}^{2}(s,\theta(s))}{{2\sigma^{2}(s,\theta(s))}}\bigg\}}ds\bigg)|\mathcal{F}_{t}^{\theta,y}\bigg], \end{eqnarray} where $\mathcal{F}_{t}^{\theta,y} \triangleq \sigma\{\theta{(\tau)},Y(\tau): \tau \in [0,t]\}$, augmented with $\mathbb{P}$-null sets, is the filtration generated by the processes $\theta(t)$ and $Y(t)$. From the $\{\mathcal{F}_{t}^{\theta,y}\}$-martingale representation theorem, there exists an $\{\mathcal{F}_{t}^{\theta,y}\}$-previsible, square integrable process $\nu(t,i,y)$ such that \begin{eqnarray}\label{6.12} R(t)=R(0)+\int_{0}^{t}\int_{\mathbb{R}}{\nu(\tau,\theta(\tau-),Y(\tau-))}\tilde{N}_{1}(d\tau,dz). \end{eqnarray} By positivity of $R(t)$ we can define $\hat{\nu}(\tau,\theta(\tau-),Y(\tau-)) \triangleq (\nu(\tau,\theta(\tau-),Y(\tau-)))R^{-1}(\tau-)$ so that \begin{eqnarray}\label{6.13} R(t)=R(0)+{\int_{0}^{t}\int_{\mathbb{R}}{R(\tau-)\hat{\nu}(\tau,\theta(\tau-),Y(\tau-))}\tilde{N}_{1}(d\tau,dz)}. \end{eqnarray} From the definition of $\tilde{\phi}$ in (\ref{6.10}) and the definition of $R$ in (\ref{6.11}) it is easy to see that we have the following relationship: \begin{eqnarray}\label{6.14} R(t)=\tilde{\phi}(t,\theta(t),Y(t))\exp\bigg\{\int_{0}^{t}(\gamma r(s,\theta(s))-{\bar{m}^{2}(s,\theta(s))}+\frac{(2-\gamma)}{(1-\gamma)}\frac{\bar{m}^{2}(s,\theta(s))}{{2\sigma^{2}(s,\theta(s))}})ds\bigg\},\nonumber\\ ~~\forall~t~\in~[0,T]. \end{eqnarray} Applying the Ito's formula to the RHS of (\ref{6.14}) and comparing the result with the martingale representation of $R(t)$ in (\ref{6.12}), we get $\phi= \tilde{\phi}$. We can thus substitute $\hat{q}$ and $\hat{\tilde{\eta}}$ for $q$ and $\tilde{\eta}(t,z)$ in expressions (\ref{6.4}) and (\ref{6.5}), respectively. With the choice of control $\hat{u}$ given by (\ref{6.8}) and the boundedness conditions on the market parameters $r,\mu$ and $\sigma$, the conditions of Theorem 4.1 are satisfied; hence $\hat{u}(t)$ is an optimal control process and the explicit representation of $\hat{p}$ is given by \begin{eqnarray*}
\hat{p}(t)=(X(t))^{\gamma-1}\exp\bigg(E\bigg[\exp\bigg(\int_{t}^{T}\bigg\{\gamma r(s,\theta(s))-{\bar{m}^{2}(s,\theta(s))}+\frac{(2-\gamma)}{(1-\gamma)}\frac{\bar{m}^{2}(s,\theta(s))}{{2\sigma^{2}(s,\theta(s))}}\bigg\}ds\bigg)\bigg|\theta(t-)=i,Y(t-)=y\bigg]\bigg). \end{eqnarray*} {\bf Quadratic loss minimization}~~ We now provide an example related to quadratic loss minimization, where the portfolio wealth process is given by
\begin{eqnarray}\label{6.15} dX^{h}({t})&=&\bigg(r({t},\theta(t))X^{h}(t)+h(t)\sigma(t,\theta(t))\bar{m}(t,\theta(t))-h(t)\int_{\Gamma}{g(t,X^{h}(t),\theta(t),\gamma)\pi(d\gamma)}\bigg)dt\nonumber\\ &+&h(t)\sigma(t,\theta(t))dW(t) + h(t)\int_{\Gamma}{g(t,X^{h}(t),\theta(t),\gamma)\tilde{N}(dt,d\gamma)}, \nonumber \\ X^{h}(0)&=&x_{0}~~a.s. \end{eqnarray} {where the market price of risk is defined as $\bar{m}({t},i)=\sigma^{-1}(t,i)(b(t,i)-r(t,i))$. {As in the earlier example, we have $\bar{m}(t,i) \geq 0$, and the coefficients $r(t,i)$, $b(t,i)$, $\sigma(t,i)$, $\sigma^{-1}(t,i)$ and $g(t,x,i,\gamma)$ for each $i \in \mathcal{X}$ are measurable and uniformly bounded in $t \in [0,T]$. We assume that $g(t,x,i,\gamma)>-1$ for each $i \in \mathcal{X}$ and for a.a. $t,x,\gamma$. This ensures that $X^{h}(t)>0$ for each $t$. We further assume the following conditions for each $i \in \mathcal{X}$:\\ 1.
$E[\int_{0}^{T}{h^{2}(t)dt}]< \infty.$\\
2.
$E[\int_{0}^{T}{|r(t,i)X(t)+h(t)\sigma(t,i)\bar{m}(t,i)|}dt+\int_{0}^{T}{h^{2}(t)\sigma^{2}(t,i)}dt+\int_{0}^{T}{h^{2}(t)g^{2}(t,X(t),i,\gamma)}dt]< \infty.$\\
3. $t \rightarrow \int_{\mathbb{R}}{h^{2}(t)g^{2}(t,x,i,\gamma)\pi(d\gamma)}$ is bounded. \\ 4. the SDE for $X$ has a unique strong solution.\\ } The portfolio process $h(\cdot)$ satisfying the above four conditions is said to be admissible and belongs to $\mathcal{A}(T)$ (say). } We consider the problem of finding an admissible portfolio process $h \in \mathcal{A}(T)$ such that \begin{eqnarray*} \inf_{h \in \mathcal{A}(T)}{E[(X^{h}(T)-d)^{2}]}, \end{eqnarray*} over all $h \in \mathcal{A}(T)$. Set the control process $u(t) \triangleq h(t)$ and $X(t) \triangleq X^{h}(t)$. For this example the Hamiltonian (\ref{3.4}) becomes \begin{eqnarray}\label{6.16} \mathcal{H}(t,x,h,i,y,p,q,\eta)&=&\bigg[r(t,i)x+u\sigma(t,i)\bar{m}(t,i)-u\int_{\Gamma}{g(t,x,i,\gamma)\pi(d\gamma)}\bigg]p+u\sigma(t,i)q \nonumber \\ &+&\bigg(u\int_{\Gamma}{g(t,x,i,\gamma)}\pi(d\gamma) \bigg)\eta , \end{eqnarray} and the adjoint equations are for all time $t \in [0,T)$, \begin{eqnarray}\label{6.17} dp(t)&=&-r(t,\theta(t-))p(t)dt+q(t)dW(t)+\int_{\Gamma}{\eta(t,\gamma)\tilde{N}(dt,d\gamma)}+\int_{\mathbb{R}}{\tilde{\eta}(t,z)\tilde{N}_{1}(dt,dz)}, \nonumber \\ p(T)&=&-2X(T)+2d ~~a.s. \end{eqnarray} We seek to determine $p(t),q(t), \eta(t,\gamma)$ and $\tilde{\eta}(t,z)$ in (\ref{6.17}). Going by (\ref{6.17}) we assume that , \begin{eqnarray}\label{6.18} p(t)=\phi(t,\theta(t),Y(t))X(t)+\psi(t,\theta(t),Y(t)). \end{eqnarray} with the terminal boundary conditions being \begin{eqnarray}\label{6.19} \phi(T,i,y)=-2~~~~~~~~~~~~\psi(T,i,y)=2d~~~~\forall ~i~\in~\mathcal{X}. \end{eqnarray} For the sake of convenience we again rewrite the following Ito's formula for a function $f(t,\theta(t),y(t))\in \mathcal{C}^{1,2,1}$ given as \begin{eqnarray}\label{6.20} &&df(t,\theta(t),Y(t))=\bigg(\frac{\partial f(t,\theta(t),Y(t))}{\partial {t}}+\frac{(f^{h}(y/i))}{(1-F^{h}(y/i))}\sum_{j \neq i, j=1}^{M}p_{\theta(t-)=i,j}[f(t,j,0)-f(t,\theta(t-),y)]\nonumber\\ &+&\frac{\partial f(t,\theta(t),Y(t))}{\partial y}\bigg)dt\nonumber \\ &+&\int_{\mathbb{R}}{[f(t,\theta({t-})+\bar{h}(\theta({t-}),Y({t-}),z),Y({t-})-\bar{g}(\theta({t-}),Y({t-}),z))-f(t,\theta({t-}),Y({t-}))]\tilde{N}_{1}(dt,dz)}.\nonumber \\ \end{eqnarray} We apply the Ito's product rule to (\ref{6.18}) to obtain \begin{eqnarray}\label{6.21} dp({t})&=&X({t-})d\phi(t,\theta(t-),Y(t))+\phi(t,\theta(t-),Y(t))dX(t)+d\phi(t,\theta(t-),Y(t))dX(t)+d\psi(t)\nonumber\\ &=& \sum_{i=1}^{M}{1_{\theta_{t-}=i}}\bigg\{X(t-)\bigg(\phi(t,\theta(t-),y)r(t,\theta(t-))+\phi_{t}(t,\theta(t-),Y(t))+\phi_{y}(t,\theta(t-),Y(t))\nonumber \\ &+&\sum_{i=1,j \neq i}^{M}{p_{ij}\frac{f^{h}(y/i)}{1-F^{h}(y/i)}(\phi(t,j,0)-\phi(t,\theta(t-),Y(t)))}\bigg)+u(t)\phi(t,\theta(t-),Y(t))\sigma(t,\theta(t-))\bar{m}(t,\theta(t-))\nonumber \\ &-&u(t)\phi(t,\theta(t-),Y(t))\int_{{\Gamma}}{g(t,X(t),\theta(t-),\gamma)\pi(d\gamma)}+\psi_{t}(t,\theta(t-),Y(t))+\psi_{y}(t,\theta(t-),Y(t))\nonumber\\ &+&\sum_{i=1,i \neq j}^{M}{p_{ij}\frac{f^{h}(y/i)}{1-F^{h}(y/i)}[\psi(t,j,0)-\psi(t,\theta(t-)=i,Y(t))]}\bigg\}dt \nonumber \\ &+&u(t)\phi(t,\theta({t-}),Y({t}))\sigma(t,\theta({t-}))dW({t})+u(t)\phi(t,\theta({t-}),Y({t-}))\int_{{\Gamma}}{g(t,X(t-),\theta({t-}),\gamma) \tilde{N}(dt,d\gamma)}\nonumber\\ &+&\int_{\mathbb{R}}\bigg[X(t-)(\phi(t,\theta({t-})+\bar{h}(\theta({t-}),Y({t-}),z),Y({t-})-\bar{g}(\theta({t-}),Y({t-}),z))-\phi(t,\theta({t-}),Y({t-})))\nonumber \\ &+&\psi(t,\theta({t-})+\bar{h}(\theta({t-}),Y({t-}),z),Y({t-})-\bar{g}(\theta({t-}),Y({t-}),z))-\psi(t,\theta({t-}),Y({t-}))\bigg]\tilde{N}_{1}(dt,dz). 
\nonumber \\ \end{eqnarray} Comparing coefficients with (\ref{6.17}) we obtain three equations given as \begin{eqnarray}\label{6.22} &-&r(t,\theta({t-}))p(t-)\nonumber\\&=&\sum_{i=1}^{M}{1_\{{\theta_{t-}=i},Y(t-)=y}\}\bigg\{X(t-)\bigg(\phi(t,\theta({t-}),Y(t))r(t,\theta({t-}))+\phi_{t}(t,\theta({t-}),Y(t))+\phi_{y}(t,\theta({t-}),Y(t))\nonumber\\ &+&\sum_{i=1,j \neq i}^{M}{p_{ij}\frac{f^{h}(y/i)}{1-F^{h}(y/i)}(\phi(t,j,0)-\phi(t,\theta({t-}),Y(t)))}\bigg) +u(t)\phi(t,\theta({t-}),Y(t))\sigma(t,\theta({t-}))\bar{m}(t,\theta({t-}))\nonumber\\ &-&u(t)\phi(t,\theta({t-}),Y(t))\int_{\Gamma}{g(t,x,\theta({t-}),\gamma)\pi(d\gamma)} +\psi_{t}(t,\theta({t-})),Y(t)+\psi_{y}(t,\theta({t-}),Y(t))\nonumber\\ &+&\sum_{i \neq j}^{M}{p_{ij}\frac{f^{h}(y/i)}{1-F^{h}(y/i)}[\psi(t,j,0)-\psi(t,\theta({t-}),Y(t))]}\bigg\}.\nonumber\\ \end{eqnarray} \begin{eqnarray}\label{6.23} q(t)=u(t)\phi(t,\theta({t-}),Y({t-}))\sigma(t,\theta({t-})). \end{eqnarray} \begin{eqnarray}\label{6.24} \eta(t,\gamma)=u(t)\phi(t,\theta({t-}),Y(t-))g(t,X(t-),\theta({t-}),\gamma). \end{eqnarray} \begin{eqnarray}\label{6.25} \tilde{\eta}(t,z)&=&X(t-)(\phi(t,\theta({t-})+\bar{h}(\theta({t-}),Y({t-}),z),Y({t-})-\bar{g}(\theta({t-}),Y({t-}),z))-\phi(t,\theta({t-}),Y({t-})))\nonumber \\ &+&\psi(t,\theta({t-})+\bar{h}(\theta({t-}),Y({t-}),z),Y({t-})-\bar{g}(\theta({t-}),Y({t-}),z))-\psi(t,\theta({t-}),Y({t-})).\nonumber\\ \end{eqnarray} Let $\hat{u} \in \mathcal{A}(T)$ be a candidate optimal control corresponding to the wealth process $\hat{X}(T)$ and the adjoint triplet ($\hat{p},\hat{q},\hat{\eta},\hat{\tilde{\eta}}$). Then from the Hamiltonian (\ref{3.4}) for all $u \in \mathcal{A}(T)$ we have \begin{eqnarray}\label{6.26} \mathcal{H}(t,\hat{X}(t),u,\theta(t),\hat{p}(t),\hat{q}(t),\hat{\eta}(t))&=&\bigg(r(t,\theta(t))\hat{X}(t)+u \sigma(t,\theta(t))\bar{m}(t,\theta(t))\nonumber\\ &-&u\int_{\Gamma}{g(t,\hat{X}(t-),\theta(t-),\gamma)\pi{d(\gamma)}}\bigg)\hat{p}(t)\nonumber\\ &+&u\sigma(t,\theta(t))\hat{q}(t)+\bigg(u\int_{\Gamma}{g(t,\hat{X}(t-),\theta(t-),\gamma)\pi(d{\gamma})}\bigg)\hat{\eta}(t,\gamma).\nonumber\\ \end{eqnarray} As this is a linear function of $u$, we guess that the coefficient of $u$ vanishes at optimality, which results in the following equality \begin{eqnarray}\label{6.27} \hat{q}(t)&=&\bigg(-\bar{m}(t,\theta({t-}))+\frac{1}{\sigma(t,\theta({t-}))}\int_{\Gamma}{g(t,\hat{X}(t),\theta(t),\gamma)\pi(d\gamma)}\bigg)\hat{p}(t)\nonumber\\ &-&\frac{1}{\sigma(t,\theta({t-}))}\int_{\Gamma}{(g^{'}(t,\hat{X}(t),\theta(t),\gamma))\pi(d\gamma)\hat{\eta}(t,\gamma)}.\nonumber\\ \end{eqnarray} Also substituting (\ref{6.27}) for $\hat{q}(t)$ in (\ref{6.23}) and using (\ref{6.18}) and(\ref{6.24}) we get, \begin{eqnarray}\label{6.28} \hat{u}(t)=\frac{\tilde{\Lambda}(t)}{\Lambda(t)}(\hat{X}(t)+\phi^{-1}(t,\theta({t-}),y)\psi(t,\theta({t-}),y)), \end{eqnarray} where \begin{eqnarray}\label{6.29} \tilde{\Lambda}(t)={-\bar{m}(t,\theta({t-}))\sigma(t,\theta({t-}))+\int_{\Gamma}{g(t,X(t),\theta({t-}),\gamma)}\pi(d\gamma)}. \nonumber \\ \Lambda(t)={\sigma}^{2}(t,\theta({t-}))+\phi(t,\theta({t-}),Y(t))\int_{\Gamma}{g^{'}(t,X(t),\theta({t-}),\gamma)g(t,X(t),\theta({t-}),\gamma)}\pi(d\gamma). \end{eqnarray} To find the optimal control it remains to find $\phi$ and $\psi$. To do so set $X(t):= \hat{X}(t), u(t):=\hat{u}(t)$ and $p(t):=\hat{p}(t)$ in (\ref{6.22}) and then substitute for $\hat{p}(t)$ in (\ref{6.18}) and $\hat{u}(t)$ from (\ref{6.28}) . 
As this result is linear in $\hat{X}(t)$ we compare the coefficient on both side of the resulting equation to get following two equations namely, \begin{eqnarray}\label{6.30} 0&=&2r\phi(t,i,Y(t))+\phi_{t}(t,i,Y(t))+\phi_{y}(t,i,Y(t))+\sum_{i \neq j,i=1}^{M}{p_{ij}\frac{f^{h}(y/i)}{1-F^{h}(y/i)}}{(\phi(t,j,0)-\phi(t,i,Y(t)))}\nonumber\\ &+& \frac{\tilde{\Lambda}(t)}{\Lambda(t)}\sigma(t,i)\bar{m}(t,i)\phi(t,i,Y(t))-\frac{\tilde{\Lambda}(t)}{\Lambda(t)}\phi(t,i,Y(t))\int_{\Gamma}{g(t,X(t),i,\gamma)}\pi (d\gamma). \end{eqnarray} \begin{eqnarray}\label{6.31} 0&=&r\psi(t,i,Y(t))+\psi_{t}(t,i,Y(t))+\psi_{y}(t,i,Y(t))+\sum_{ i \neq j,i=1}^{M}{p_{ij}\frac{f^{h}(y/i)}{1-F^{h}(y/i)}(\psi(t,j,0)-\psi(t,i,Y(t)))}\nonumber\\ &+&\frac{\tilde{\Lambda}(t)}{\Lambda(t)}\sigma(t,i)\bar{m}(t,i)\psi(t,i,Y(t))-\frac{\tilde{\Lambda}(t)}{\Lambda(t)}\psi(t,i,y)\int_{\Gamma}g(t,X(t),i,\gamma) \pi d(\gamma).\nonumber\\ \end{eqnarray} with terminal boundary conditions given by (\ref{6.19}). Consider the following process \begin{eqnarray}\label{6.32} \tilde{\phi}(t,i,y)=-2E\bigg[\exp\bigg\{\int_{t}^{T}\bigg(2r(s,\theta({s-}))+\frac{\tilde{\Lambda}(s)}{\Lambda(s)}\sigma(s,\theta({s-}))\bar{m}(s,\theta({s-}))\nonumber\\
-\frac{\tilde{\Lambda}(s)}{\Lambda(s)}\int_{\Gamma}{g(s,X(s),\theta({s-}),\gamma)\pi(d\gamma)}\bigg)ds\bigg\}|{(\theta(t-)=i,Y(t)=y)}\bigg].\nonumber\\ \end{eqnarray} \begin{eqnarray}\label{6.33} \tilde{\psi}(t,i,y)&=&2dE\bigg[\exp\bigg\{\int_{t}^{T}\bigg(r(s,\theta({s-}))+\frac{\tilde{\Lambda}(s)}{\Lambda(s)}\sigma(s,\theta({s-}))\bar{m}(s,\theta({s-}))\nonumber\\
&-&\frac{\tilde{\Lambda}(s)}{\Lambda(s)}\int_{\Gamma}{g(s,X(s),\theta({s-}),\gamma)\pi(d\gamma)}\bigg)ds\bigg\}\bigg|{(\theta(s-)=i,Y(s)=y)}\bigg].\nonumber\\ \end{eqnarray} We aim to show that $\phi=\tilde{\phi}$ and $\psi=\tilde{\psi}$. We define the following martingales: \begin{eqnarray}\label{6.34} R(t)= \resizebox{.9\hsize}{!}{$E\bigg[\exp\bigg\{\int_{0}^{T}\bigg(2r(s,\theta(s-))+\frac{\tilde{\Lambda}(s)}{\Lambda(s)}\sigma(s,\theta(s-))\bar{m}(s,\theta(s-))
-\frac{\tilde{\Lambda}(s)}{\Lambda(s)}\int_{\Gamma}{g(s,X(s),\theta(s-),\gamma)\pi(d\gamma)}\bigg)ds\bigg\}|\mathcal{F}_{t}^{\theta,y}\bigg]$},\nonumber\\ \end{eqnarray} \begin{eqnarray}\label{6.35} S(t)= \resizebox{.9\hsize}{!}{$E\bigg[\exp\bigg\{\int_{0}^{T}\bigg(r(s,\theta(s-))+\frac{\tilde{\Lambda}(s)}{\Lambda(s)}\sigma(s,\theta(s-))\bar{m}(s,\theta(s-))
-\frac{\tilde{\Lambda}(s)}{\Lambda(s)}\int_{\Gamma}{g(s,X(s),\theta(s-),\gamma)}\pi(d\gamma)\bigg)ds\bigg\}|\mathcal{F}_{t}^{\theta,y}\bigg]$},\nonumber\\ \end{eqnarray} where $\mathcal{F}_{t}^{\theta,y}$ is defined as usual. We follow steps similar to that as seen in Example 1 and conclude that $\phi=\tilde{\phi}$ and $\psi=\tilde{\psi}$ by using joint-Markov property of ($\theta(t),Y(t)$), to obtain the following expression for the control $\hat{u}(t)$ given as \begin{eqnarray*} \hat{u}(t)= \resizebox{.9\hsize}{!}{$\frac{\tilde{\Lambda}(t)}{\Lambda(t)}\bigg(\hat{X}(t) -\frac{d E\bigg[\exp\bigg\{\int_{t}^{T}(r(s,\theta(s-))+\frac{\tilde{\Lambda}(s)}{\Lambda(s)}\sigma(s,\theta(s-))\bar{m}(s,\theta(s-))
-\frac{\tilde{\Lambda}(s)}{\Lambda(s)}\int_{\Gamma}{g(s,X(s),\theta(s-),\gamma)\pi(d\gamma)})ds\bigg\}|{(\theta(t-)=i,Y(t)=y)}\bigg]}{E\bigg[\exp\bigg\{\int_{t}^{T}(2r(s,\theta(s-))+\frac{\tilde{\Lambda}(s)}{\Lambda(s)}\sigma(s,\theta(s-))\bar{m}(s,\theta(s-))
-\frac{\tilde{\Lambda}(s)}{\Lambda(s)}\int_{\Gamma}{g(s,X(s),\theta(s-),\gamma)\pi(d\gamma)})ds\bigg\}|{(\theta(t)=i,Y(t)=y)}\bigg]}\bigg)$}. \end{eqnarray*} With this choice of control and the boundedness conditions on the market parameters $r$, $b$, $\sigma$ and $g$, the conditions of Theorem 4.1 are satisfied, and hence $\hat{u}$ is an optimal control process.
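To indicate how the closed-form feedback rule (\ref{6.28}) is used in practice, the following is a minimal numerical sketch under simplifying assumptions that are not made above: a single regime, constant coefficients and no jumps ($g \equiv 0$). In that case (\ref{6.29}) gives $\tilde{\Lambda}/\Lambda = -\bar{m}/\sigma$, and (\ref{6.32})--(\ref{6.33}) reduce to $\phi(t)=-2e^{(2r-\bar{m}^{2})(T-t)}$ and $\psi(t)=2de^{(r-\bar{m}^{2})(T-t)}$, so that (\ref{6.28}) becomes $\hat{u}(t)=-(\bar{m}/\sigma)\,(\hat{X}(t)-d\,e^{-r(T-t)})$. All parameter values below are hypothetical.
\begin{verbatim}
# Illustrative sketch of the quadratic-loss rule (6.28) under the
# simplifications stated in the text: one regime, constant r, b, sigma,
# no jumps (g = 0). Then Lambda_tilde/Lambda = -mbar/sigma and
#   phi(t) = -2*exp((2r - mbar^2)(T-t)),  psi(t) = 2d*exp((r - mbar^2)(T-t)),
# so u_hat(t) = -(mbar/sigma)*(X - d*exp(-r*(T-t))).
import math

r, b, sigma, d, T = 0.03, 0.07, 0.2, 1.5, 1.0   # hypothetical parameters
mbar = (b - r) / sigma

def u_hat(t, X):
    phi = -2.0 * math.exp((2*r - mbar**2) * (T - t))
    psi = 2.0 * d * math.exp((r - mbar**2) * (T - t))
    return (-mbar / sigma) * (X + psi / phi)

print(u_hat(0.0, 1.0))   # amount held in the stock at time 0, wealth 1
\end{verbatim}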
\end{document}
\begin{document}
\title{On a Problem of Hajdu and Tengely}
\author{Samir Siksek} \address{Institute of Mathematics,
University of Warwick,
Coventry CV4 7AL, United Kingdom} \email{[email protected]}
\author{Michael Stoll} \address{Mathematisches Institut,
Universit\"at Bayreuth,
95440 Bayreuth, Germany.} \email{[email protected]}
\keywords{} \subjclass[2000]{Primary 11D41, Secondary 11G30, 14G05, 14G25}
\date{29 May, 2010}
\begin{abstract}
We prove a result that finishes the study of primitive arithmetic
progressions consisting of squares and fifth powers that was
carried out by Hajdu and Tengely in a recent paper:
The only arithmetic progression in coprime integers of the form
$(a^2, b^2, c^2, d^5)$ is $(1, 1, 1, 1)$.
For the proof, we first reduce the problem to that of determining
the sets of rational points on three specific hyperelliptic curves
of genus~4. A 2-cover descent computation shows that there are no
rational points on two of these curves. We find generators for a
subgroup of finite index of the Mordell-Weil group of the last curve.
Applying Chabauty's method, we prove that
the only rational points on this curve are the obvious ones. \end{abstract}
\maketitle
\section{Introduction}
Euler (\cite[pages 440 and 635]{Dickson}) proved Fermat's claim that four distinct squares cannot form an arithmetic progression. Powers in arithmetic progressions are still a subject of current interest. For example, Darmon and Merel \cite{DM} proved that the only solutions in coprime integers to the Diophantine equation $x^n+y^n=2z^n$ with $n \geq 3$ satisfy $xyz=0$ or $\pm 1$. This shows that there are no non-trivial three term arithmetic progressions consisting of $n$-th powers with $n \geq 3$. The result of Darmon and Merel is far from elementary; it needs all the tools used in Wiles' proof of Fermat's Last Theorem and more.
An arithmetic progression $(x_1,x_2,\ldots,x_k)$ of integers is said to be {\em primitive} if the terms are coprime, i.e., if $\gcd(x_1,x_2)=1$. Let $S$ be a finite subset of integers $\geq 2$. Hajdu \cite{Hajdu} showed that if \begin{equation}\label{eqn:ap} (a_1^{\ell_1},\ldots, a_k^{\ell_k}) \end{equation} is a non-constant primitive arithmetic progression with $\ell_i \in S$, then $k$ is bounded by some (inexplicit) constant $C(S)$. Bruin, Gy\H{o}ry, Hajdu and Tengely \cite{BGHT} showed that for any $k \geq 4$ and any $S$, there are only finitely many primitive arithmetic progressions of the form \eqref{eqn:ap}, with $\ell_i \in S$. Moreover, for $S=\{2,3\}$ and $k \geq 4$, they showed that $a_i = \pm 1$ for $i=1,\ldots,k$.
A recent paper of Hajdu and Tengely \cite{HT} studies primitive arithmetic progressions \eqref{eqn:ap} with exponents belonging to $S=\{2,n\}$ and $\{3,n\}$. In particular, they show that any primitive non-constant arithmetic progression \eqref{eqn:ap} with exponents $\ell_i \in \{2,5\}$ has $k \leq 4$. Moreover, for $k=4$ they show that \begin{equation}\label{eqn:l} (\ell_1,\ell_2,\ell_3,\ell_4) =(2,2,2,5) \quad \text{or} \quad (5,2,2,2). \end{equation} Note that if $(a_i^{\ell_i} : i=1,\ldots,k)$ is an arithmetic progression, then so is the reverse progression $(a_i^{\ell_i}: i=k,k-1,\ldots,1)$. Thus there is really only one case left open by Hajdu and Tengely, with exponents $(\ell_1,\ell_2,\ell_3,\ell_4) =(2,2,2,5)$. This is also mentioned as Problem~11 in a list of 22 open problems recently compiled by Evertse and Tijdeman~\cite{LeidenProblem}. In this paper we deal with this case. \begin{Theorem} \label{Thm}
The only arithmetic progression in coprime integers of the form
\[ (a^2, b^2, c^2, d^5) \]
is $(1, 1, 1, 1)$. \end{Theorem}
This together with the above-mentioned results of Hajdu and Tengely completes the proof of the following theorem.
\begin{Theorem} There are no non-constant primitive arithmetic progressions of the form \eqref{eqn:ap}
with $\ell_i \in \{2,5\}$ and $k \geq 4$. \end{Theorem}
The primitivity condition is crucial, since otherwise solutions abound. Let for example $(a^2, b^2, c^2, d)$ be any arithmetic progression whose first three terms are squares --- there are infinitely many of these; one can take $a = r^2 - 2rs - s^2$, $b = r^2 + s^2$, $c = r^2 + 2rs - s^2 $ --- then $\bigl((ad^2)^2, (bd^2)^2, (cd^2)^2, d^5)$ is an arithmetic progression whose first three terms are squares and whose last term is a fifth power.
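As a quick symbolic sanity check of this parametrization (an illustration only, not needed for the argument), one can verify that $a^2 + c^2 = 2b^2$ for the stated choice of $a$, $b$, $c$, so the three squares are indeed in arithmetic progression; a short Python/SymPy sketch:
\begin{verbatim}
# Symbolic check (Python with SymPy assumed) that a^2, b^2, c^2 with
#   a = r^2 - 2*r*s - s^2,  b = r^2 + s^2,  c = r^2 + 2*r*s - s^2
# form an arithmetic progression, i.e. a^2 + c^2 = 2*b^2.
from sympy import symbols, expand

r, s = symbols('r s')
a = r**2 - 2*r*s - s**2
b = r**2 + s**2
c = r**2 + 2*r*s - s**2
assert expand(a**2 + c**2 - 2*b**2) == 0   # holds identically in r and s
\end{verbatim}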
For the proof of Thm.~\ref{Thm},
we first reduce the problem to that of determining
the sets of rational points on three specific hyperelliptic curves
of genus~4. A $2$-cover descent computation
(following Bruin and Stoll \cite{BSTwoCoverDesc})
shows that there are no
rational points on two of these curves. We find generators for a
subgroup of finite index of the Mordell-Weil group of the last curve.
Applying Chabauty's method, we prove that
the only rational points on this curve are the obvious ones. All our computations are performed using the computer package {\sf MAGMA}~\cite{MAGMA}.
The result we prove here may perhaps not be of compelling interest in itself. Rather, the purpose of this paper is to demonstrate how we can solve problems of this kind with the available machinery. We review the relevant part of this machinery in Sect.~\ref{S:Back}, after we have constructed the curves pertaining to our problem in Sect.~\ref{Curves}. Then, in Sect.~\ref{S:Points}, we apply the machinery to these curves. The proofs are mostly computational. We have tried to make it clear what steps need to be done, and to give enough information to make it possible to reproduce the computations (which have been performed independently by both authors as a consistency check).
\section{Construction of the Curves} \label{Curves}
Let $(a^2, b^2, c^2, d^5)$ be an arithmetic progression in coprime integers. Since a square is $\equiv 0$ or $1 \bmod 4$, it follows that all terms are $\equiv 1 \bmod 4$, in particular, $a$, $b$, $c$ and $d$ are all odd.
Considering the last three terms, we have the relation \[ (-d)^5 = b^2 - 2 c^2 = (b + c \sqrt{2}) (b - c \sqrt{2}) \,. \] Since $b$ and~$c$ are odd and coprime, the two factors on the right are coprime in $R = {\mathbb Z}[\sqrt{2}]$. Since $R^\times/(R^\times)^5$ is generated by $1 + \sqrt{2}$, it follows that \begin{equation} \label{rel1}
b + c \sqrt{2} = (1 + \sqrt{2})^j (u + v \sqrt{2})^5
= g_j(u,v) + h_j(u,v) \sqrt{2} \end{equation} with $-2 \le j \le 2$ and $u, v \in {\mathbb Z}$ coprime (with $u$ odd and $v \equiv j+1 \bmod 2$). The polynomials $g_j$ and $h_j$ are homogeneous of degree~5 and have coefficients in~${\mathbb Z}$.
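For instance, for $j = 0$ a direct binomial expansion of $(u + v\sqrt{2})^5$ gives \[ g_0(u,v) = u^5 + 20 u^3 v^2 + 20 u v^4, \qquad h_0(u,v) = 5 u^4 v + 20 u^2 v^3 + 4 v^5, \] which leads to the polynomial $f_0$ displayed below.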
Now the first three terms of the progression give the relation \[ a^2 = 2 b^2 - c^2 = 2 g_j(u,v)^2 - h_j(u,v)^2 \,. \] Writing $y = a/v^5$ and $x = u/v$, this gives the equation of a hyperelliptic curve of genus~4, \[ C_j : y^2 = f_j(x) \] where $f_j(x) = 2 g_j(x,1)^2 - h_j(x,1)^2$. Every arithmetic progression of the required form therefore induces a rational point on one of the curves~$C_j$.
We observe that taking conjugates in~\eqref{rel1} leads to \[ (-1)^j b + (-1)^{j+1} c\sqrt{2}
= (1 + \sqrt{2})^{-j} (u + (-v) \sqrt{2})^5 \,, \] which implies that $f_{-j}(x) = f_j(-x)$ and therefore that $C_{-j}$ and~$C_j$ are isomorphic and their rational points correspond to the same arithmetic progressions. We can therefore restrict attention to $C_0$, $C_1$ and $C_2$. Their equations are as follows. \begin{align*}
C_0 : y^2 &= f_0(x) = 2 x^{10} + 55 x^8 + 680 x^6 + 1160 x^4 + 640 x^2 - 16 \\
C_1 : y^2 &= f_1(x) = x^{10} + 30 x^9 + 215 x^8 + 720 x^7 + 1840 x^6 + 3024 x^5 \\
& \qquad\qquad\qquad + 3880 x^4 + 2880 x^3 + 1520 x^2 + 480 x + 112 \\
C_2 : y^2 &= f_2(x)
= 14 x^{10} + 180 x^9 + 1135 x^8 + 4320 x^7 + 10760 x^6 + 18144 x^5 \\
& \qquad\qquad\qquad + 21320 x^4 + 17280 x^3 + 9280 x^2 + 2880 x + 368 \end{align*}
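These polynomials are easy to reproduce with any computer algebra system. The computations in this paper use {\sf MAGMA}; the following Python/SymPy sketch (an illustration assuming an available SymPy installation, not part of the paper's toolchain) recovers $f_0$, $f_1$ and $f_2$ from the definition $f_j(x) = 2 g_j(x,1)^2 - h_j(x,1)^2$:
\begin{verbatim}
# Illustrative reconstruction of the curve equations C_j : y^2 = f_j(x).
# The symbol s plays the role of sqrt(2); reducing modulo s^2 - 2 splits
# (1 + sqrt2)^j * (x + sqrt2)^5 into g_j(x,1) + h_j(x,1)*sqrt2.
from sympy import symbols, expand, rem

x, s = symbols('x s')

def f_j(j):
    e = expand((1 + s)**j * (x + s)**5)
    r = rem(e, s**2 - 2, s)            # degree <= 1 in s
    g, h = r.coeff(s, 0), r.coeff(s, 1)
    return expand(2*g**2 - h**2)

for j in (0, 1, 2):
    print(j, f_j(j))
# j = 0 gives 2*x**10 + 55*x**8 + 680*x**6 + 1160*x**4 + 640*x**2 - 16, etc.
\end{verbatim}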
The trivial solution $a = b = c = d = 1$ corresponds to $j = 1$, $(u,v) = (1,0)$ in the above and therefore gives rise to the point $\infty_+$ on~$C_1$ (this is the point at infinity where $y/x^5$ takes the value~$+1$). Changing the signs of $a$, $b$ or $c$ leads to $\infty_- \in C_1({\mathbb Q})$ (the point where $y/x^5 = -1$) or to the two points at infinity on the isomorphic curve~$C_{-1}$.
\section{Background on Rational Points on Hyperelliptic Curves} \label{S:Back}
Our task will be to determine the set of rational points on each of the curves $C_0$, $C_1$ and~$C_2$ constructed in the previous section. In this section, we will give an overview of the methods we will use, and in the next section, we will apply these methods to the given curves.
We will restrict attention to {\em hyperelliptic} curves, i.e., curves given by an affine equation of the form \[ C : y^2 = f(x) \] where $f$ is a squarefree polynomial with integral coefficients. The smooth projective curve birational to this affine curve has either one or two additional points `at infinity'. If the degree of~$f$ is odd, there is one point at infinity, which is always a rational point. Otherwise there are two points at infinity corresponding to the two square roots of the leading coefficient of~$f$. In particular, these two points are rational if and only if the leading coefficient is a square. For example, $C_1$ above has two rational points at infinity, whereas the points at infinity on $C_0$ and~$C_2$ are not rational. We will use $C$ in the following to denote the smooth projective model; $C({\mathbb Q})$ denotes as usual the set of rational points including those at infinity.
\subsection{Two-Cover Descent} \label{SS:Twocov}
It will turn out that $C_0$ and~$C_2$ do not have rational points. One way of showing that $C({\mathbb Q})$ is empty is to verify that $C({\mathbb R})$ is empty or that $C({\mathbb Q}_p)$ is empty for some prime~$p$. This does not work for $C_0$ or~$C_2$; both curves have real points and $p$-adic points for all~$p$. (This can be checked by a finite computation.) So we need a more sophisticated way of showing that there are no rational points. One such method is known as {\em 2-cover descent}. We sketch the method here; for a detailed description, see~\cite{BSTwoCoverDesc}.
An important ingredient of this and other methods is the algebra \[ L := {\mathbb Q}[T] = \frac{{\mathbb Q}[x]}{{\mathbb Q}[x] \cdot f(x)} \,, \] where $T$ denotes the image of~$x$. If $f$ is irreducible (as in our examples), then $L$ is the number field generated by a root of~$f$. In general, $L$ will be a product of number fields corresponding to the irreducible factors of~$f$. We now assume that $f$ has even degree $2g+2$, where $g$ is the genus of the curve. This is the generic case; the odd degree case is somewhat simpler. We can then set up a map, called the {\em descent map} or {\em $x-T$ map}: \[ x-T : C({\mathbb Q}) \longrightarrow H := \frac{L^\times}{{\mathbb Q}^\times (L^\times)^2} \,. \] Here $L^\times$ denotes the multiplicative group of~$L$, and $(L^\times)^2$ denotes the subgroup of squares. On points $P \in C({\mathbb Q})$ that are neither at infinity nor Weierstrass points (i.e., points with vanishing $y$ coordinate), the map is defined as \[ (x-T)(P) = x(P) - T \bmod {\mathbb Q}^\times (L^\times)^2 \,. \] Rational points at infinity map to the trivial element, and if there are rational Weierstrass points, their images can be determined using the fact that the norm of $x(P) - T$ is $y(P)^2$ divided by the leading coefficient of~$f$. If we can show that $x-T$ has empty image on~$C({\mathbb Q})$, then it follows that $C({\mathbb Q})$ is empty.
We obtain information about the image by considering again $C({\mathbb R})$ and~$C({\mathbb Q}_p)$. We can carry out the same construction over ${\mathbb R}$ and over~${\mathbb Q}_p$, leading to an algebra $L_v$ ($v = p$, or $v = \infty$ when working over~${\mathbb R}$), a group~$H_v$ and a map \[ (x-T)_v : C({\mathbb Q}_v) \longrightarrow H_v \qquad \text{(where ${\mathbb Q}_\infty = {\mathbb R}$).} \] We have inclusions $C({\mathbb Q}) \hookrightarrow C({\mathbb Q}_v)$ and canonical homomorphisms $H \to H_v$. Everything fits together in a commutative diagram \[ \xymatrix{ C({\mathbb Q}) \ar[rr]^{x-T} \ar[d] & & H \ar[d] \\
\prod_v C({\mathbb Q}_v) \ar[rr]^{\prod_v (x-T)_v} & & \prod_v H_v
} \] where $v$ runs through the primes and~$\infty$. If we can show that the images of the lower horizontal map and of the right vertical map do not meet, then the image of $x-T$ and therefore also $C({\mathbb Q})$ must be empty. We can verify this by considering a finite subset of `places'~$v$.
In general, we obtain a finite subset of~$H$ that contains the image of~$x-T$; this finite subset is known as the {\em fake 2-Selmer set} of~$C/{\mathbb Q}$. It classifies either pairs of (isomorphism classes of) 2-covering curves of~$C$ that have points {\em everywhere locally}, i.e., over~${\mathbb R}$ and over all~${\mathbb Q}_p$, or else it classifies such 2-covering curves, in which case it is the (true) 2-Selmer set. Whether it classifies pairs or individual 2-coverings depends on a certain condition on the polynomial~$f$. This condition is satisfied if either $f$ has an irreducible factor of odd degree, or if $\deg f \equiv 2 \bmod 4$ and $f$ factors over a quadratic extension ${\mathbb Q}(\sqrt{d})$ as a constant times the product of two conjugate polynomials. A {\em 2-covering} of~$C$ is a morphism $\pi : D \to C$ that is unramified and becomes Galois over a suitable field extension of finite degree, with Galois group $({\mathbb Z}/2{\mathbb Z})^{2g}$. It is known that every rational point on~$C$ lifts to a rational point on some 2-covering of~$C$.
The actual computation splits into a global and a local part. The global computation uses the ideal class group and the unit group of~$L$ (or the constituent number fields of~$L$) to construct a finite subgroup of~$H$ containing the image of~$x-T$. The local computation determines the image of $(x-T)_v$ for finitely many places~$v$.
\subsection{The Jacobian} \label{SS:Jac}
Most other methods make use of another object associated to the curve~$C$: its {\em Jacobian variety} (or just {\em Jacobian}). This is an abelian variety~$J$ (a higher-dimensional analogue of an elliptic curve) of dimension~$g$, the genus of~$C$. It reflects a large part of the geometry and arithmetic of~$C$; its main advantage is that its points form an abelian group, whereas the set of points on~$C$ does not carry a natural algebraic structure.
For our purposes, we can more or less forget the structure of~$J$ as a projective variety. Instead we use the description of the points on~$J$ as the elements of the degree zero part of the {\em Picard group} of~$C$. The Picard group is constructed as a quotient of the group of divisors on~$C$. A {\em divisor} on~$C$ is an element of the free abelian group $\operatorname{Div}_C$ on the set~$C(\bar{\mathbb Q})$ of all algebraic points on~$C$. The absolute Galois group of~${\mathbb Q}$ acts on~$\operatorname{Div}_C$; a divisor that is fixed by this action is {\em rational}. This does not mean that the points occurring in the divisor must be rational; points with the same multiplicity can be permuted. A nonzero rational function~$h$ on~$C$ with coefficients in~$\bar{\mathbb Q}$ has an associated divisor $\operatorname{div}(h)$ that records its zeros and poles (with multiplicities). If $h$ has coefficients in~${\mathbb Q}$, then $\operatorname{div}(h)$ is rational. The homomorphism $\deg : \operatorname{Div}_C \to {\mathbb Z}$ induced by sending each point in~$C(\bar{\mathbb Q})$ to~$1$ gives the {\em degree} of a divisor. Divisors of functions have degree zero.
Two divisors $D, D' \in \operatorname{Div}_C$ are {\em linearly equivalent} if their difference is the divisor of a function. The equivalence classes are the elements of the {\em Picard group} $\operatorname{Pic}_C$ defined by the following exact sequence. \[ 0 \longrightarrow \bar{\mathbb Q}^\times \longrightarrow \bar{\mathbb Q}(C)^\times \stackrel{\operatorname{div}}{\longrightarrow} \operatorname{Div}_C
\longrightarrow \operatorname{Pic}_C \longrightarrow 0 \] Since divisors of functions have degree zero, the degree homomorphism descends to~$\operatorname{Pic}_C$. We denote its kernel by $\operatorname{Pic}^0_C$. It is a fact that $J(\bar{\mathbb Q})$ is isomorphic as a group to~$\operatorname{Pic}^0_C$. The rational points $J({\mathbb Q})$ correspond to the elements of~$\operatorname{Pic}^0_C$ left invariant by the Galois group. In general it is not true that a point in~$J({\mathbb Q})$ can be represented by a rational divisor, but this is the case when $C$ has a rational point, or at least points everywhere locally. The most important fact about the group $J({\mathbb Q})$ is the statement of the {\em Mordell-Weil Theorem:} $J({\mathbb Q})$ is a {\em finitely generated} abelian group. For this reason, $J({\mathbb Q})$ is often called the {\em Mordell-Weil group} of $J$ or of~$C$.
If $P_0 \in C({\mathbb Q})$, then the map $C \ni P \mapsto [P - P_0] \in J$ is a ${\mathbb Q}$-defined embedding of $C$ into~$J$. We use $[D]$ to denote the linear equivalence class of the divisor~$D$. The basic idea of the methods described below is to try to recognise the points of~$C$ embedded in this way among the rational points on~$J$.
We need a way of representing elements of~$J({\mathbb Q})$. Let $P \mapsto P^-$ denote the {\em hyperelliptic involution} on~$C$; this is the morphism $C \to C$ that changes the sign of the $y$~coordinate. Then it is easy to see that the divisors $P + P^-$ all belong to the same class $W \in \operatorname{Pic}_C$. An effective divisor~$D$ (a divisor such that no point occurs with negative multiplicity) is {\em in general position} if there is no point~$P$ such that $D - P - P^-$ is still effective. Divisors in general position not containing points at infinity can be represented in a convenient way by pairs of polynomials $(a(x), b(x))$. This pair represents the divisor~$D$ such that its image on the projective line (under the $x$-coordinate map) is given by the roots of~$a$; the corresponding points on~$C$ are determined by the relation $y = b(x)$. The polynomials have to satisfy the relation $f(x) \equiv b(x)^2 \bmod a(x)$. This is the {\em Mumford representation} of~$D$. The polynomials $a$ and~$b$ can be chosen to have rational coefficients if and only if $D$ is rational. (The representation can be adapted to allow for points at infinity occurring in the divisor.)
If the genus $g$ is even, then it is a fact that every point in~$J({\mathbb Q})$ has a unique representation of the form $[D] - nW$ where $D$ is a rational divisor in general position of degree~$2n$ and $n \ge 0$ is minimal. The Mumford representation of~$D$ is then also called the Mumford representation of the corresponding point on~$J$. It is fairly easy to add points on~$J$ using the Mumford representation, see~\cite{Cantor}. This addition procedure is implemented in~{\sf MAGMA}, for example.
There is a relation between 2-coverings of~$C$ and the Jacobian~$J$. Assume $C$ is embedded in~$J$ as above. Then if $D$ is any 2-covering of~$C$ that has a rational point~$P$, $D$ can be realised as the preimage of~$C$ under a map of the form $Q \mapsto 2Q + Q_0$ on~$J$, where $Q_0$ is the image of~$P$ on~$C \subset J$. A consequence of this is that two rational points $P_1, P_2 \in C({\mathbb Q})$ lift to the same 2-covering if and only if $[P_1 - P_2] \in 2 J({\mathbb Q})$.
\subsection{The Mordell-Weil Group} \label{SS:MW}
We will need to know generators of a finite-index subgroup of the Mordell-Weil group~$J({\mathbb Q})$. Since $J({\mathbb Q})$ is a finitely generated abelian group, it will be a direct sum of a finite torsion part and a free abelian group of rank~$r$; $r$ is called the {\em rank} of~$J({\mathbb Q})$. So what we need is a set of $r$ independent points in~$J({\mathbb Q})$.
The torsion subgroup of~$J({\mathbb Q})$ is usually easy to determine. The main tool used here is the fact that the torsion subgroup injects into~$J({\mathbb F}_p)$ when $p$ is an odd prime not dividing the discriminant of~$f$. If the orders of the finite groups~$J({\mathbb F}_p)$ are coprime for suitable primes~$p$, then this shows that $J({\mathbb Q})$ is torsion-free.
We can find points in~$J({\mathbb Q})$ by search. This can be done by searching for rational points on the variety parameterising Mumford representations of divisors of degree 2, 4, \dots. We can then check if the points found are independent by again mapping into~$J({\mathbb F}_p)$ for one or several primes~$p$.
The hard part is to know when we have found enough points. For this we need an upper bound on the rank~$r$. This can be provided by a {\em 2-descent} on the Jacobian~$J$. This is described in detail in~\cite{Stoll2Descent}. The idea is similar to the 2-cover descent on~$C$ described above in Sect.~\ref{SS:Twocov}. Essentially we extend the $x-T$ map from points to divisors. It can be shown that the value of $(x-T)(D)$ only depends on the linear equivalence class of~$D$. This gives us a homomorphism from~$J({\mathbb Q})$ into~$H$, or more precisely, into the kernel of the norm map $N_{L/{\mathbb Q}} : H \to {\mathbb Q}^\times/({\mathbb Q}^\times)^2$. It can be shown that the kernel of this $x-T$ map on~$J({\mathbb Q})$ is either $2 J({\mathbb Q})$, or it contains $2J({\mathbb Q})$ as a subgroup of index~2. The former is the case when $f$ satisfies the same condition as that mentioned in Sect.~\ref{SS:Twocov}.
We can then bound $(x-T)(J({\mathbb Q}))$ in much the same way as we did when doing a 2-cover descent on~$C$. The global part of the computation is identical. The local part is helped by the fact that we now have a group homomorphism (or a homomorphism of ${\mathbb F}_2$-vector spaces), so we can use linear algebra. We obtain a bound for the order of $J({\mathbb Q})/2J({\mathbb Q})$, from which we can deduce a bound for the rank~$r$. If we are lucky and have found that same number of independent points in~$J({\mathbb Q})$, then we know that these points generate a subgroup of finite index.
The group containing $(x-T)(J({\mathbb Q}))$ we compute is known as the {\em fake 2-Selmer group} of~$J$~\cite{PS}. If the polynomial~$f$ satisfies the relevant condition, then this fake Selmer group is isomorphic to the true 2-Selmer group of~$J$ (that classifies 2-coverings of~$J$ that have points everywhere locally).
\subsection{The Chabauty-Coleman Method} \label{SS:Chab}
If the rank~$r$ is less than the genus~$g$, there is a method available that allows us to get tight bounds on the number of rational points on~$C$. This goes back to Chabauty~\cite{Chabauty}, who used it to prove Mordell's Conjecture in this case. Coleman~\cite{Coleman} refined the method. We give a sketch here; more details can be found for example in~\cite{StollChabauty}.
Let $p$ be a prime of good reduction for~$C$ (this is the case when $p$ is odd and does not divide the discriminant of~$f$). We use $\Omega_C^1({\mathbb Q}_p)$ and~$\Omega_J^1({\mathbb Q}_p)$ to denote the spaces of regular 1-forms on~$C$ and~$J$ that are defined over~${\mathbb Q}_p$. If $P_0 \in C({\mathbb Q})$ and $\iota : C \to J$, $P \mapsto [P-P_0]$ denotes the corresponding embedding of $C$ into~$J$, then the induced map $\iota^* : \Omega_J^1({\mathbb Q}_p) \to \Omega_C^1({\mathbb Q}_p)$ is an isomorphism that is independent of the choice of basepoint~$P_0$. Both spaces have dimension~$g$. There is an integration pairing \[ \Omega_C^1({\mathbb Q}_p) \times J({\mathbb Q}_p) \longrightarrow {\mathbb Q}_p, \quad
(\iota^* \omega, Q) \longmapsto \int_0^Q \omega = \langle \omega, \log Q \rangle \,. \] In the last expression, $\log Q$ denotes the $p$-adic logarithm on~$J({\mathbb Q}_p)$ with values in the tangent space of~$J({\mathbb Q}_p)$ at the origin, and $\Omega^1_J({\mathbb Q}_p)$ is identified with the dual of this tangent space. If $r < g$, then there are (at least) $g-r$ linearly independent differentials $\omega \in \Omega_C^1({\mathbb Q}_p)$ that annihilate the Mordell-Weil group~$J({\mathbb Q})$. Such a differential can be scaled so that it reduces to a non-zero differential $\bar\omega$ mod~$p$. Now the important fact is that if $\bar\omega$ does not vanish at a point $\bar{P} \in C({\mathbb F}_p)$, then there is at most one rational point on~$C({\mathbb Q})$ whose reduction is~$\bar{P}$. (There are more general bounds valid when $\bar\omega$ does vanish at~$\bar{P}$, but we do not need them here.)
\section{Determining the Rational Points} \label{S:Points}
In this section, we determine the set of rational points on the three curves $C_0$, $C_1$ and~$C_2$. To do this, we apply the methods described in Sect.~\ref{S:Back}.
We first consider $C_0$ and~$C_2$. We apply the 2-cover-descent procedure described in Sect.~\ref{SS:Twocov} to the two curves and find that in each case, there are no 2-coverings that have points everywhere locally. For $C_0$, only 2-adic information is needed in addition to the global computation; for $C_2$, we need 2-adic and 7-adic information. Note that the number fields generated by roots of $f_0$ or~$f_2$ are sufficiently small in terms of degree and discriminant that the necessary class and unit group computations can be done unconditionally. This leads to the following.
\begin{Proposition} \label{Prop02}
There are no rational points on the curves $C_0$ and~$C_2$. \end{Proposition}
\begin{proof}
The 2-cover descent procedure is available in recent releases of {\sf MAGMA}.
The computations leading to the stated result can be performed by issuing
the following {\sf MAGMA} commands.
\begin{verbatim} > SetVerbose("Selmer",2); > TwoCoverDescent(HyperellipticCurve(Polynomial(
[-16,0,640,0,1160,0,680,0,55,0,2]))); > TwoCoverDescent(HyperellipticCurve(Polynomial(
[368,2880,9280,17280,21320,18144,10760,4320,1135,180,14]))); \end{verbatim}
We explain how the results can be checked independently. We give details
for~$C_0$ first. The procedure for~$C_2$ is similar, so we only explain
the differences.
The polynomial~$f_0$ is irreducible, and it can be checked that the number
field generated by one of its roots is isomorphic to $L = {\mathbb Q}(\!\sqrt[10]{288})$.
Using {\sf MAGMA} or pari/gp, one checks that this field has trivial
class group. The finite subgroup~$\tilde{H}$ of~$H$ containing the Selmer~set
is then given
as ${\mathcal O}_{L,S}^\times/({\mathbb Z}_{\{2,3,5\}}^\times ({\mathcal O}_{L,S}^\times)^2)$, where
$S$ is the set of primes in~${\mathcal O}_L$ above the `bad primes' 2, 3 and~5.
The set~$S$ contains two primes
above~2, of degrees 1 and~4, respectively, and one prime above 3 and~5
each, of degree~2 in both cases. Since $L$ has two real embeddings and
four pairs of complex embeddings, the unit rank is~5. The rank (or ${\mathbb F}_2$-dimension)
of~$\tilde{H}$ is then $7$. (Note that 2 is a square in~$L$.) The descent map takes
its values in the subset of~$\tilde{H}$ consisting of elements whose norm is
twice a square. This subset is of size~$32$; elements of~${\mathcal O}_L$ representing
it can easily be obtained. Let $\delta$ be such a representative. We let
$T$ be a root of~$f_0$ in~$L$ and check that the system of equations
\[ y^2 = f_0(x), \quad x - T = \delta c z^2 \]
has no solutions with $x,y,c \in {\mathbb Q}_2$, $z \in L \otimes_{{\mathbb Q}} {\mathbb Q}_2$.
The second equation leads, after expanding $\delta z^2$ as a ${\mathbb Q}$-linear
combination of $1, T, T^2, \dots, T^9$, to eight homogeneous quadratic
equations in the ten unknown coefficients of~$z$. Any solution to these
equations gives a unique~$x$, for which~$f_0(x)$ is a square. The latter
follows by taking norms on both sides of $x-T = \delta c z^2$. So we
only have to check the intersection of eight quadrics in~${\mathbb P}^9$ for
existence of ${\mathbb Q}_2$-points. Alternatively, we evaluate the descent map
on~$C_0({\mathbb Q}_2)$, to get its image in~$H_2 = L_2^\times/({\mathbb Q}_2^\times (L_2^\times)^2)$,
where $L_2 = L \otimes_{{\mathbb Q}} {\mathbb Q}_2$. Then we check that none of the
representatives~$\delta$ map into this image.
When dealing with~$C_2$, the field~$L$ is generated by a root of
$x^{10} - 6 x^5 - 9$. Since the leading coefficient of~$f_2$ is~$14$,
we have to add (the primes above)~7 to the bad primes. As before, the
class group is trivial, and we have the same splitting behaviour of
2, 3 and~5. The prime~7 splits into two primes of degree~1 and two
primes of degree~4. The group of $S$-units of~$L$ modulo squares has
now rank~14, the group $\tilde{H}$ has rank~10, and the subset of~$H$ consisting
of elements whose norm is 14~times a square has 128~elements. These elements
now have to be tested for compatibility with the 2-adic and the 7-adic
information, which can be done using either of the two approaches described
above. The 7-adic check is only necessary for one of the elements; the 127~others
are already ruled out by the 2-adic check. \end{proof}
We cannot hope to deal with~$C_1$ in the same easy manner, since $C_1$ has two rational points at infinity coming from the trivial solutions. We can still perform a 2-cover-descent computation, though, and find that there is only one 2-covering of~$C_1$ with points everywhere locally, which is the covering that lifts the points at infinity. Only 2-adic information is necessary to show that the fake 2-Selmer set has at most one element, so we can get this result using the following {\sf MAGMA} command. \begin{verbatim} > TwoCoverDescent(HyperellipticCurve(Polynomial(
[112,480,1520,2880,3880,3024,1840,720,215,30,1]))
: PrimeCutoff := 2); \end{verbatim} (In some versions of {\sf MAGMA} this returns a two-element set. However, as can be checked by pulling back under the map returned as a second value, these two elements correspond to the images of $1$ and~$-1$ in $L^\times/(L^\times)^2 {\mathbb Q}^\times$ and therefore both represent the trivial element. The error is caused by {\sf MAGMA} using $1$ instead of~$-1$ as a `generator' of ${\mathbb Q}^\times/({\mathbb Q}^\times)^2$. This bug is corrected in recent releases.)
The computation can be performed in the same way as for $C_0$ and~$C_2$. The relevant field~$L$ is generated by a root of $x^{10} - 18 x^5 + 9$; it has class number~1, and the primes 2, 3 and~5 split in the same way as before. The subset~$H'$ (in fact a subgroup) of~$\tilde{H}$ consisting of elements with square norm has size~32. Of these, only the element represented by~1 is compatible with the 2-adic constraints.
We remark that by the way it is given, the polynomial~$f_1$ factors over~${\mathbb Q}(\sqrt{2})$ into two conjugate factors of degree~5. This implies that the `fake 2-Selmer set' computed by the 2-cover descent is the true 2-Selmer set, so that there is really only one 2-covering that corresponds to the only element of the set computed by the procedure. We state the result as a lemma. We fix $P_0 = \infty_- \in C_1$ as our basepoint and write $J_1$ for the Jacobian variety of~$C_1$. Then, as described in Sect.~\ref{SS:Jac}, \[ \iota : C_1 \longrightarrow J_1\,, \quad P \longmapsto [P - P_0] \] is an embedding defined over~${\mathbb Q}$.
\begin{Lemma} \label{Lemma2J}
Let $P \in C_1({\mathbb Q})$. Then the divisor class $[P - P_0]$ is in $2 J_1({\mathbb Q})$. \end{Lemma}
\begin{proof}
Let $D$ be the unique 2-covering of~$C_1$ (up to isomorphism) that has
points everywhere locally. The fact that $D$ is unique follows from the
computation of the 2-Selmer set.
Any rational point $P \in C_1({\mathbb Q})$ lifts to a rational point on some 2-covering
of~$C_1$. In particular, this 2-covering then has a rational point, so it
also satisfies the weaker condition that it has points everywhere locally.
Since $D$ is the only 2-covering of~$C_1$ satisfying this condition,
$P_0$ and~$P$ must both lift to a rational point on~$D$. This implies
by the remark at the end of Sect.~\ref{SS:Jac} that $[P-P_0] \in 2 J_1({\mathbb Q})$. \end{proof}
To make use of this information, we need to know $J_1({\mathbb Q})$, or at least a subgroup of finite index. A computer search reveals two points in~$J_1({\mathbb Q})$, which are given in Mumford representation (see Sect.~\ref{SS:Jac}) as follows. \begin{align*}
Q_1 &= \bigl(x^4 + 4 x^2 + \tfrac{4}{5},\quad -16 x^3 - \tfrac{96}{5} x\bigr) \\
Q_2 &= \bigl(x^4 + \tfrac{24}{5} x^3 + \tfrac{36}{5} x^2 + \tfrac{48}{5} x
+ \tfrac{36}{5},\quad
-\tfrac{1712}{75} x^3 - \tfrac{976}{25} x^2 - \tfrac{1728}{25} x
- \tfrac{2336}{25}\bigr) \end{align*} We note that $2 Q_1 = [\infty_+ - \infty_-]$; this makes Lemma~\ref{Lemma2J} explicit for the known two points on~$C_1$.
\begin{Lemma} \label{LemmaGroup}
The Mordell-Weil group $J_1({\mathbb Q})$ is torsion-free, and $Q_1$, $Q_2$ are
linearly independent. In particular, the rank of $J_1({\mathbb Q})$ is at least~2. \end{Lemma}
\begin{proof}
The only primes of bad reduction for~$C_1$ are 2, 3 and~5. It is known that
the torsion subgroup of $J_1({\mathbb Q})$ injects into $J_1({\mathbb F}_p)$ when $p$ is an
odd prime of good reduction. Since $\#J_1({\mathbb F}_7) = 2400$ and
$\#J_1({\mathbb F}_{41}) = 2633441$
are coprime, there can be no nontrivial torsion in~$J_1({\mathbb Q})$.
We check that the image of $\langle Q_1, Q_2 \rangle$ in~$J_1({\mathbb F}_7)$ is
not cyclic. This shows that $Q_1$ and~$Q_2$ must be independent. \end{proof}
The next step is to show that the Mordell-Weil rank is indeed~2. For this, we compute the 2-Selmer group of~$J_1$ as sketched in Sect.~\ref{SS:MW} and described in detail in~\cite{Stoll2Descent}. We give some details of the computation, since it is outside the scope of the functionality that is currently provided by {\sf MAGMA} (or any other software package).
We first remind ourselves that $f_1$ factors over~${\mathbb Q}(\sqrt{2})$. This implies that the kernel of the $x-T$ map on~$J({\mathbb Q})$ is~$2J({\mathbb Q})$. Therefore the `fake 2-Selmer group' that we compute is in fact the actual 2-Selmer group of~$J_1$. Since $J_1({\mathbb Q})$ is torsion-free, the order of the 2-Selmer group is an upper bound for~$2^r$, where $r$ is the rank of~$J_1({\mathbb Q})$.
The global computation is the same as that we needed to do for the \hbox{2-cover} descent. In particular, the Selmer group is contained in the group~$H'$ from above, consisting of the $S$-units of~$L$ with square norm, modulo squares and modulo $\{2,3,5\}$-units of~${\mathbb Q}$. For the local part of the computation, we have to compute the image of $J_1({\mathbb Q}_p)$ under the local $x-T$ map for the primes~$p$ of bad reduction. We check that there is no 2-torsion in $J_1({\mathbb Q}_3)$ and~$J_1({\mathbb Q}_5)$ ($f_1$ remains irreducible both over~${\mathbb Q}_3$ and over~${\mathbb Q}_5$). This implies that the targets of the local maps $(x-T)_3$ and $(x-T)_5$ are trivial, which means that these two primes need not be considered as bad primes for the descent computation. The real locus $C_1({\mathbb R})$ is connected, which implies that there is no information coming from the local image at the infinite place. (Recall that $C_1$ denotes the smooth projective model of the curve. The real locus of the affine curve $y^2 = f_1(x)$ has two components, but they are connected to each other through the points at infinity.) Therefore, we only need to use 2-adic information in the computation. We set $L_2 = L \otimes_{{\mathbb Q}} {\mathbb Q}_2$ and compute the natural homomorphism \[ \mu_2 : H' \longrightarrow H_2 = \frac{L_2^\times}{{\mathbb Q}_2^\times (L_2^\times)^2} \,. \] Let $I_2$ be the image of~$J_1({\mathbb Q}_2)$ in~$H_2$. Then the 2-Selmer group is $\mu_2^{-1}(I_2)$.
It remains to compute~$I_2$, which is the hardest part of the computation. The \hbox{2-torsion} subgroup $J_1({\mathbb Q}_2)[2]$ has order~2 ($f_1$ splits into factors of degrees 2 and~8 over~${\mathbb Q}_2$); this implies that $J_1({\mathbb Q}_2)/2 J_1({\mathbb Q}_2)$ has dimension~$g + 1 = 5$ as an \hbox{${\mathbb F}_2$-vector} space. This quotient is generated by the images of $Q_1$ and~$Q_2$ and of three further points of the form $[D_i] - \tfrac{\deg D_i}{2} W$, where $D_i$ is the sum of points on~$C_1$ whose $x$-coordinates are the roots of \begin{align*}
D_1 : & \quad \bigl(x - \tfrac{1}{2}\bigr) \bigl(x - \tfrac{1}{4}\bigr)\,, \\
D_2 : & \quad x^2 - 2 x + 6\,, \\
D_3 : & \quad x^4 + 4 x^3 + 12 x^2 + 36\,, \\ \end{align*} respectively. These points were found by a systematic search, using the fact that the local map $(x-T)_2$ is injective in our situation. We can therefore stop the search procedure as soon as we have found points whose images generate a five-dimensional ${\mathbb F}_2$-vector space. We thus find $I_2 \subset H_2$ and then can compute the 2-Selmer group. In our situation, $\mu_2$ is injective, and the intersection of its image with~$I_2$ is generated by the images of $Q_1$ and~$Q_2$. Therefore, the ${\mathbb F}_2$-dimension of the 2-Selmer group is~2.
\begin{Lemma} \label{LemmaRank}
The rank of $J_1({\mathbb Q})$ is~2, and $\langle Q_1, Q_2 \rangle \subset J_1({\mathbb Q})$
is a subgroup of finite odd index. \end{Lemma}
\begin{proof}
The Selmer group computation shows that the rank is $\le 2$, and
Lemma~\ref{LemmaGroup} shows that the rank is $\ge 2$. Regarding the second
statement, it is now clear that we have a subgroup of finite index.
The observation stated just before the lemma shows that the given subgroup
surjects onto the 2-Selmer group under the $x-T$ map. Since the kernel
of the $x-T$ map is~$2 J_1({\mathbb Q})$, this implies that the index is odd. \end{proof}
Now we want to use the Chabauty-Coleman method sketched in Sect.~\ref{SS:Chab} to show that $\infty_+$ and~$\infty_-$ are the only rational points on~$C_1$. To keep the computations reasonably simple, we want to work at $p = 7$, which is the smallest prime of good reduction.
For $p$ a prime of good reduction, we write $\rho_p$ for the two `reduction mod~$p$' maps $J_1({\mathbb Q}) \to J_1({\mathbb F}_p)$ and $C_1({\mathbb Q}) \to C_1({\mathbb F}_p)$.
\begin{Lemma} \label{LemmaRed}
Let $P \in C_1({\mathbb Q})$. Then $\rho_7(P) = \rho_7(\infty_+)$ or
$\rho_7(P) = \rho_7(\infty_-)$. \end{Lemma}
\begin{proof}
Let $G = \langle Q_1, Q_2 \rangle$ be the subgroup of $J_1({\mathbb Q})$ generated
by the two points $Q_1$ and~$Q_2$. We find that $\rho_7(G)$ has index~2
in~$J_1({\mathbb F}_7) \cong {\mathbb Z}/10{\mathbb Z} \oplus {\mathbb Z}/240{\mathbb Z}$.
By Lemma~\ref{LemmaRank}, we know that $(J_1({\mathbb Q}) : G)$ is
odd, so we can deduce that $\rho_7(G) = \rho_7(J_1({\mathbb Q}))$. The group
$J_1({\mathbb F}_7)$ surjects onto $({\mathbb Z}/5{\mathbb Z})^2$. Since $\rho_7(G)$ has index~2
in~$J_1({\mathbb F}_7)$, $\rho_7(G) = \rho_7(J_1({\mathbb Q}))$ also surjects onto~$({\mathbb Z}/5{\mathbb Z})^2$. This
implies that the index of~$G$ in~$J_1({\mathbb Q})$ is not divisible by~5.
We determine the points $P \in C_1({\mathbb F}_7)$ such that
$\iota(P) \in \rho_7(2 J_1({\mathbb Q})) = 2 \rho_7(G)$. We find the set
\[ X_7 = \{\rho_7(\infty_+), \rho_7(\infty_-), (-2, 2), (-2, -2)\}\,. \]
Note that for any $P \in C_1({\mathbb Q})$, we must have $\rho_7(P) \in X_7$
by Lemma~\ref{Lemma2J}.
Now we look at $p = 13$. The image of~$G$
in~$J_1({\mathbb F}_{13}) \cong {\mathbb Z}/10{\mathbb Z} \oplus {\mathbb Z}/2850{\mathbb Z}$ has index~5.
Since we already know that $(J_1({\mathbb Q}) : G)$ is not a multiple of~5, this
implies that $\rho_{13}(G) = \rho_{13}(J_1({\mathbb Q}))$. As above for $p = 7$,
we compute the set $X_{13} \subset C_1({\mathbb F}_{13})$ of points mapping into
$\rho_{13}(2 J_1({\mathbb Q}))$. We find
\[ X_{13} = \{\rho_{13}(\infty_+), \rho_{13}(\infty_-)\} \,. \]
Now suppose that there is $P \in C_1({\mathbb Q})$ with
$\rho_7(P) \in \{(-2,2), (-2,-2)\}$. Then $\iota(P)$
is in one of two specific cosets in
$J_1({\mathbb Q})/\ker \rho_7 \cong G/\ker \rho_7|_G$. On the other hand,
we have $\rho_{13}(P) = \rho_{13}(\infty_\pm)$, so that $\iota(P)$
is in one of two specific cosets in
$J_1({\mathbb Q})/\ker \rho_{13} \cong G/\ker \rho_{13}|_G$.
If we identify $G = \langle Q_1, Q_2 \rangle$ with ${\mathbb Z}^2$, then we can
find the kernels of $\rho_7$ and of~$\rho_{13}$ on~$G$ explicitly, and
we can also determine the relevant cosets explicitly. It can then be checked
that the union of the first two cosets does not meet the union of the
second two cosets. This implies that such a point $P$ cannot exist.
Therefore, the only remaining possibilities are that
$\rho_7(P) = \rho_7(\infty_\pm)$. \end{proof}
\begin{Remark}
The use of information at $p = 13$ to rule out residue classes at $p = 7$
in the proof above is a very simple instance of a method known as the
{\em Mordell-Weil sieve}. For a detailed description of this method,
see~\cite{BSMWS}. \end{Remark}
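For readers who want to reproduce the coset computation in the proof of Lemma~\ref{LemmaRed}, the following Python fragment sketches the underlying check: two cosets $c_1 + \Lambda_1$ and $c_2 + \Lambda_2$ of finite-index sublattices of~${\mathbb Z}^2$ intersect if and only if their images modulo~$N$ intersect, where $N$ is any common multiple of the two indices (since $N{\mathbb Z}^2$ is contained in both lattices). The kernel bases and coset representatives in the snippet are small placeholders of our own choosing; they are not the actual kernels of $\rho_7$ and~$\rho_{13}$ on $G = \langle Q_1, Q_2 \rangle$ or the actual cosets arising from~$C_1$.
\begin{verbatim}
from math import gcd

# Sketch of the coset-intersection check behind the sieve argument above.
# All lattice bases and coset representatives are PLACEHOLDERS.

def det2(basis):
    (a1, a2), (b1, b2) = basis
    return abs(a1 * b2 - a2 * b1)      # index of the sublattice in Z^2

def coset_mod(c, basis, N):
    """All residues mod N of the coset c + <basis>; N must be a multiple
    of the index, so that N*Z^2 is contained in the lattice."""
    pts = set()
    for m in range(N):
        for n in range(N):
            pts.add(((c[0] + m * basis[0][0] + n * basis[1][0]) % N,
                     (c[1] + m * basis[0][1] + n * basis[1][1]) % N))
    return pts

def cosets_meet(c1, B1, c2, B2):
    """Decide whether (c1 + <B1>) and (c2 + <B2>) intersect in Z^2."""
    d1, d2 = det2(B1), det2(B2)
    N = d1 * d2 // gcd(d1, d2)         # lcm of the two indices
    return bool(coset_mod(c1, B1, N) & coset_mod(c2, B2, N))

ker7 = [(4, 0), (1, 6)]                # placeholder basis of ker(rho_7)|_G
ker13 = [(6, 2), (0, 5)]               # placeholder basis of ker(rho_13)|_G
cosets7 = [(1, 3), (2, 5)]             # placeholder coset representatives
cosets13 = [(0, 0), (3, 4)]

print(any(cosets_meet(a, ker7, b, ker13)
          for a in cosets7 for b in cosets13))
\end{verbatim}
In the actual computation one would of course replace the brute-force enumeration by lattice reduction, since the indices of the kernels are in the thousands.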
Now we need to find the space of holomorphic 1-forms on~$C_1$, defined over~${\mathbb Q}_7$, that annihilate the Mordell-Weil group under the integration pairing, compare Sect.~\ref{SS:Chab}. We follow the procedure described in~\cite{StollXdyn06}. We first find two independent points in the intersection of $J_1({\mathbb Q})$ and the kernel of reduction mod~7. In our case, we take $R_1 = 20 Q_1$ and $R_2 = 5 Q_1 + 60 Q_2$. We represent these points in the form $R_j = [D_j - 4 \infty_-]$ with effective divisors $D_1$, $D_2$ of degree~4. The coefficients of the primitive polynomial in~${\mathbb Z}[x]$ whose roots are the $x$-coordinates of the points in the support of~$D_1$ have more than~100 digits and those of the corresponding polynomial for~$D_2$ fill several pages, so we refrain from printing them here. (This indicates that it is a good idea to work with a small prime!) The points in the support of $D_1$ and~$D_2$ all reduce to~$\infty_-$ modulo the prime above~7 in their fields of definition (which are degree~4 number fields totally ramified at~7). Expressing a basis of $\Omega^1_{C_1}({\mathbb Q}_7)$ as power series in the uniformiser $t = 1/x$ at~$P_0 = \infty_-$ times~$dt$, we compute the integrals numerically. More precisely, the differentials \[ \eta_0 = \frac{dx}{2y}, \quad \eta_1 = \frac{x\,dx}{2y}, \quad
\eta_2 = \frac{x^2\,dx}{2y} \quad\text{and}\quad \eta_3 = \frac{x^3\,dx}{2y} \] form a basis of~$\Omega_{C_1}^1({\mathbb Q}_7)$. We get \[ \eta_j = t^{3-j} \Bigl(\frac{1}{2} - \frac{15}{2} t + 115 t^2 - 1980 t^3
+ \frac{145385}{4} t^4 - \frac{2764899}{4} t^5 + \dots\Bigr)
\,dt \] as power series in the uniformiser. Using these power series up to a precision of~$t^{20}$, we compute the following 7-adic approximations to the integrals. \[ \Bigl(\int_0^{R_j} \eta_i\Bigr)_{0 \le i \le 3, 1 \le j \le 2}
= \begin{pmatrix}
-20 \cdot 7 + O(7^4) & -155 \cdot 7 + O(7^4) \\
-150 \cdot 7 + O(7^4) & -13 \cdot 7 + O(7^4) \\
-130 \cdot 7 + O(7^4) & -83 \cdot 7 + O(7^4) \\
-19 \cdot 7 + O(7^4) & 163 \cdot 7 + O(7^4)
\end{pmatrix} \] From this, it follows easily that the reductions mod~7 of the (suitably scaled) differentials that kill~$J_1({\mathbb Q})$ fill the subspace of $\Omega^1_{C_1}({\mathbb F}_7)$ spanned by \[ \omega_1 = (1 + 3 x - 2 x^2) \frac{dx}{2 y} \quad\text{and}\quad
\omega_2 = (1 - x^2 + x^3) \frac{dx}{2 y} \,. \] Since $\omega_2$ does not vanish at the points $\rho_7(\infty_\pm)$, this implies that there can be at most one rational point~$P$ on~$C_1$ with $\rho_7(P) = \rho_7(\infty_+)$ and at most one point~$P$ with $\rho_7(P) = \rho_7(\infty_-)$ (see for example~\cite[Prop.~6.3]{StollChabauty}).
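The last step can be checked directly from the printed matrix: dividing its entries by~7 and reducing mod~7, the reductions of the annihilating differentials correspond to the vectors $(c_0, c_1, c_2, c_3) \in {\mathbb F}_7^4$ annihilating both columns, where the differential is $(c_0 + c_1 x + c_2 x^2 + c_3 x^3)\,dx/(2y)$. The following brute-force sketch in Python (not part of the original computation) confirms that this kernel is two-dimensional and contains the coefficient vectors of $\omega_1$ and~$\omega_2$.
\begin{verbatim}
# Columns of the integral matrix above, divided by 7 (the O(7^4) tails
# do not affect the reduction mod 7).
col1 = [-20, -150, -130, -19]    # integrals of eta_0,...,eta_3 against R_1
col2 = [-155, -13, -83, 163]     # integrals of eta_0,...,eta_3 against R_2
p = 7
c1 = [a % p for a in col1]       # (1, 4, 3, 2) mod 7
c2 = [a % p for a in col2]       # (6, 1, 1, 2) mod 7

def dot(u, v):
    return sum(a * b for a, b in zip(u, v)) % p

# Brute-force the left kernel over F_7: coefficient vectors (c0,...,c3) of
# differentials (c0 + c1*x + c2*x^2 + c3*x^3) dx/(2y) killing R_1 and R_2.
kernel = [(a, b, c, d)
          for a in range(p) for b in range(p)
          for c in range(p) for d in range(p)
          if dot((a, b, c, d), c1) == 0 and dot((a, b, c, d), c2) == 0]
print(len(kernel))              # 49 = 7^2, so the kernel is 2-dimensional
print((1, 3, 5, 0) in kernel)   # omega_1 = (1 + 3x - 2x^2) dx/(2y)
print((1, 0, 6, 1) in kernel)   # omega_2 = (1 - x^2 + x^3) dx/(2y)
\end{verbatim}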
\begin{Proposition} \label{Prop1}
The only rational points on~$C_1$ are $\infty_+$ and~$\infty_-$. \end{Proposition}
\begin{proof}
Let $P \in C_1({\mathbb Q})$. By Lemma~\ref{LemmaRed}, $\rho_7(P) = \rho_7(\infty_\pm)$.
By the argument above, for each sign $s \in \{+,-\}$, we have
$\#\{P \in C_1({\mathbb Q}) : \rho_7(P) = \rho_7(\infty_s)\} \le 1$. These two
facts together imply that $\#C_1({\mathbb Q}) \le 2$. Since we know the two rational
points $\infty_+$ and~$\infty_-$ on~$C_1$, there cannot be any further
rational points. \end{proof}
We can now prove Thm.~\ref{Thm}.
\begin{proof}[of Thm.~\ref{Thm}]
The considerations
in Sect.~\ref{Curves} imply that if $(a^2, b^2, c^2, d^5)$ is an
arithmetic progression in coprime integers, then there are coprime $u$ and~$v$,
related to $a,b,c,d$ by~\eqref{rel1},
such that $(u/v, a/v^5)$ is a rational point on one of the curves~$C_j$
with $-2 \le j \le 2$. By Prop.~\ref{Prop02}, there are no rational
points on $C_0$ and~$C_2$ and therefore also not on the curve~$C_{-2}$, which
is isomorphic to~$C_2$. By Prop.~\ref{Prop1}, the only rational points
on~$C_1$ (and~$C_{-1}$) are the points at infinity. This translates into
$a = \pm 1$, $u = \pm 1$, $v = 0$, and we have $j = \pm 1$. We deduce
$a^2 = 1$, $b^2 = g_1(\pm 1, 0)^2 = 1$, whence also $c^2 = d^5 = 1$. \end{proof}
\end{document}
\begin{document}
\title{\large \bf Krigings Over Space and Time Based on Latent\\ Low-Dimensional Structures\footnote{Partially supported by National Statistical Research Project of China 2015LY77 and NSFC grants 11571080, 11571081, 71531006 (DH), by EPSRC grant EP/L01226X/1 (QY), and by NSFC grants 11371318 (RZ).}} \author{\normalsize Da Huang$^{\dagger}$ \qquad Qiwei Yao$^{\ddagger}$ \qquad Rongmao Zhang$^\star$\\[-1ex] \small $^{\dagger}$School of Management, Fudan University, Shanghai, 200433, China\\[-1ex] \small $^{\ddagger}$Department of Statistics, London School of Economics, London, WC2A 2AE, U.K.\\[-1ex] \small
$^\star$School of Mathematics, Zhejiang University, Hangzhou, 310058, China\\[-1ex] \small [email protected] \quad [email protected] \quad [email protected] }
\date{}
\maketitle
\begin{abstract} We propose a new approach to represent nonparametrically the linear dependence structure of a spatio-temporal process in terms of latent common factors. Though it is formally similar to the existing reduced rank approximation methods (Section 7.1.3 of Cressie and Wikle, 2011), the fundamental difference is that the low-dimensional structure is completely unknown in our setting, which is learned from the data collected irregularly over space but regularly over time. Furthermore, a graph Laplacian is incorporated into the learning in order to take advantage of the continuity over space, and a new aggregation method via randomly partitioning space is introduced to improve the efficiency. We do not impose any stationarity conditions over space either, as the learning is facilitated by the stationarity in time. Krigings over space and time are carried out based on the learned low-dimensional structure, which is scalable to cases where the data are taken over a large number of locations and/or over a long time period. Asymptotic properties of the proposed methods are established. Illustration with both simulated and real data sets is also reported. \end{abstract}
\noindent {\sc Key Words}: Aggregation via random partitioning;
Common factors; Eigenanalysis; Graph Laplacian; Nugget effect; Spatio-temporal processes.
\section{Introduction} \setcounter{equation}{0}
Kriging, referring to the spatial best linear prediction, is named by Matheron after South African mining engineer Daniel Krige.
The key step in kriging is to identify and to estimate the covariance structure. The early applications of kriging are typically based on some parametric models for spatial covariance functions. See Section 4.1 of Cressie and Wikle (2011) and references therein. However, fitting those parametric covariance models to large spatial or spatio-temporal datasets is conceptually indefensible (Hall, Fisher and Hoffmann, 1994). It also poses serious computational challenges. For example, a spatial kriging with observations from $p$ locations involves inverting a $p\times p$ covariance matrix, which typically requires $O(p^3)$ operations with $O(p^2)$ memory. One attractive approach to overcome the computational burden is to introduce reduced rank approximations for the underlying processes. Methods in this category include Higdon (2002) using kernel convolutions, Wikle and Cressie (1999), Kammann and Wand (2003) and Cressie and Johannesson (2008) using low rank basis functions (see also Section 7.1.3 of Cressie and Wikle, 2011), Banerjee \mbox{\sl et al.\;} (2008) and Finley \mbox{\sl et al.\;} (2009) using predictive processes, and Tzeng and Huang (2018) using thin-plate splines. However, as pointed out by Stein (2008), the reduced rank approximations often fail to capture small-scale correlation structure accurately. An alternative approach is to seek sparse approximations for covariance functions; see, e.g., Gneiting (2002) using compactly supported covariance functions, and Kaufman, Schervish and Nychka (2008) proposing a tempering method by setting the covariances to 0 between any two locations whose distance exceeds a threshold. Obviously these approaches miss the correlations among locations that are far apart from each other. Combining the ideas of rank reduction and tempering, Sang and Huang (2012) and Zhang, Sang and Huang (2015) proposed a so-called full scale approximation method for large spatial and spatio-temporal datasets.
In this paper we propose a new nonparametric approach to represent the linear dependence structure of a spatio-temporal process. Different from all the methods stated above, we impose neither distributional assumptions on the underlying process nor parametric forms on its covariance function. Under the setting that the observations are taken irregularly over space but regularly in time, we recover the linear dependence structure based on a latent factor representation. No stationarity conditions are imposed over space, though stationarity in time is assumed. Formally our latent factor model is a reduced rank representation. However both the factor process and the factor loadings are completely unknown. This is a marked difference from the aforementioned reduced rank approximation methods. The motivation for our approach is to learn the linear dynamic structure across both space and time directly from data with few subjective assumptions. It captures the dependence across the locations over all distances automatically.
The latent factors and the corresponding loadings are estimated via an eigenanalysis. However it differs from the eigenanalysis for estimating latent factors for multiple time series (cf. Lam and Yao, 2012, and the references therein) in at least three aspects. First, we extract the information from the dependence across different locations instead of over time: the observations are divided into two sets according to their locations, and the estimation boils down to the singular value decomposition (SVD) of the spatial covariance matrix of the two data sets. One advantage of this approach is that it is free from the impact of the `nugget effect' in the sense that we do not need to estimate the variances of, for example, measurement errors in order to recover the latent dependence structure. Secondly, we propose a new aggregation via randomly partitioning the observations over space to improve the original estimation. This also overcomes the arbitrariness in dividing the data in the eigenanalysis. The aggregation proposed is in the spirit of the Bagging of Breiman (1996), though random partitioning instead of bootstrapping is used in our approach. Thirdly, we incorporate a graph Laplacian (Hastie \mbox{\sl et al.\;}, 2009, pp.545) into the eigenanalysis to take advantage of the continuity over space, leading to further improvement in both estimation and kriging.
The number of latent factors is typically small or at least much smaller than the number of locations on which the data are recorded. Consequently the krigings can be performed by inverting only matrices of size equal to the number of factors. This is particularly appealing when dealing with large datasets. However, the SVD for estimating the latent factor structure requires $O(p^3)$ operations. Nevertheless the nonparametric nature makes our approach easily scalable to large datasets. See Section \ref{sec33} below.
It is worth pointing out that our approach is designed for analyzing spatio-temporal data or pure spatial data but with repeated observations. With the advancement of information technology, large amounts of data are collected routinely over space and time nowadays. The surge in the development of statistical methods and theory for modelling and forecasting spatio-temporal processes includes, among others, Smith, Kolenikov and Cox (2003),
Jun and Stein (2007), Li, Genton and Sherman (2007), Katzfuss and Cressie (2011), Castruccio and Stein (2013), Guinness and Stein (2013), Zhu, Fan and Kong (2014), Zhang, Sang and Huang (2015), and Wang and Huang (2017). See also the monograph by Cressie and Wikle (2011).
In addition to the methods based on low-dimensional covariance structures, the dynamic approach, which typically specifies a standard Gaussian autoregressive model of order 1 (i.e. AR(1)) coupled with MCMC computation, has gained popularity in modelling large spatio-temporal data. Cressie, Shi and Kang (2010) assumed a Gaussian AR(1) model for a low-dimensional latent process and developed a full scale Kalman filter in the context of spatio-temporal modelling. See also Chapter 7 of Cressie and Wikle (2011).
The rest of the paper is organized as follows. We specify the latent factor structure for a spatio-temporal process in Section \ref{sec2}. The newly proposed estimation methods are spelt out in Section \ref{sec3}. The kriging over space and time is presented in Section \ref{sec4},
in which we also state how to handle missing values. The asymptotic results for the proposed estimation and kriging methods are developed in Section \ref{sec5}. Illustration with both simulated and real data is reported in Section \ref{sec6}.
Technical proofs are relegated to the Appendix in a supplementary file.
\section{Models} \label{sec2} \subsection{Setting} Consider the spatio-temporal process \begin{equation} \label{b1} y_t ({\mathbf s}) = {\mathbf z}_t({\mathbf s})'\bbeta({\mathbf s})+ \xi_t({\mathbf s}) + {\varepsilon}_t({\mathbf s}), \quad t=0, \pm 1, \pm 2, \cdots, \; {\mathbf s} \in {\mathcal S} \subset \EuScript R^2, \end{equation} where ${\mathbf z}_t({\mathbf s})$ is an $m \times 1$ observable covariate vector, $\bbeta({\mathbf s})$ is an unknown parameter vector,
${\varepsilon}_t({\mathbf s}) $ is unobservable and represents the so-called nugget effect (in space) in the sense that \begin{equation} \label{b2} E\{ {\varepsilon}_t({\mathbf s})\} =0, \quad \mbox{Var}\{ {\varepsilon}_t({\mathbf s})\}= \sigma({\mathbf s})^2, \quad {\rm Cov}\{ {\varepsilon}_{t_1}({\mathbf u}), {\varepsilon}_{t_2}({\mathbf v}) \} =0 \; \; \forall \; (t_1, {\mathbf u}) \ne (t_2, {\mathbf v}), \end{equation} $ \xi_t({\mathbf s})$ is a latent spatio-temporal process satisfying the conditions \begin{equation} \label{b3} E\{ \xi_t({\mathbf s})\} =0, \qquad {\rm Cov}\{ \xi_{t_1}({\mathbf u}), \xi_{t_2}({\mathbf v}) \} =
\Sigma_{|t_1-t_2|}({\mathbf u}, {\mathbf v}). \end{equation} Consequently, $y_t({\mathbf s}) - {\mathbf z}_t({\mathbf s})'\bbeta({\mathbf s})$ is (weakly) stationary in time $t$, $E\{ y_t({\mathbf s}) -{\mathbf z}_t({\mathbf s})'\bbeta({\mathbf s})\}=0$, and \begin{equation}
{\rm Cov}\{ y_{t_1}({\mathbf u})- {\mathbf z}_{t_1}({\mathbf u})'\bbeta({\mathbf u}),\; y_{t_2}({\mathbf v})- {\mathbf z}_{t_2}({\mathbf v})'\bbeta({\mathbf v}) \} =
\Sigma_{|t_1-t_2|}({\mathbf u}, {\mathbf v}) + \sigma({\mathbf u})^2 I\{ (t_1, {\mathbf u}) = (t_2, {\mathbf v}) \}. \label{b5n} \end{equation} Furthermore we assume that $\Sigma_t({\mathbf u}, {\mathbf v})$ is continuous in ${\mathbf u}$ and ${\mathbf v}$.
Model (\ref{b1}) does not impose any stationarity conditions over space. However, it requires that $y_t(\cdot) - {\mathbf z}_t(\cdot) ' \bbeta(\cdot)$ is second order stationary in time $t$, which enables the learning of the dependence across different locations and times. In practice the data often show some trends and seasonal patterns in time. Existing detrending and deseasonalizing methods in time series analysis can be applied to make the data stationary in time.
\subsection{A finite dimensional representation for $\xi_t({\mathbf s})$}
Let $L_2({\mathcal S})$ be the Hilbert space consisting of all the square integrable functions defined on ${\mathcal S}$ equipped with the inner product \begin{equation} \label{b5} \inner{f}{g} = \int_{{\mathcal S}} f({\mathbf s}) g({\mathbf s}) d{\mathbf s}, \qquad f, g \in L_2({\mathcal S}). \end{equation} We assume that the latent process $\xi_t({\mathbf s})$ admits a finite-dimensional structure: \begin{equation} \label{b4} \xi_t ({\mathbf s}) = \sum_{j=1}^d a_j({\mathbf s}) x_{tj}, \end{equation} where $a_1(\cdot), \cdots, a_d(\cdot)$ are deterministic and linearly independent functions (i.e. none of them can be written as a linear combination of the others) in the Hilbert space $L_2({\mathcal S})$, and $x_{t1}, \cdots, x_{td}$ are $d$ latent time series. Obviously $a_1(\cdot), \cdots, a_d(\cdot)$ (as well as $x_{t1}, \cdots, x_{td}$) are not uniquely defined by (\ref{b4}), as they can be replaced by any of their non-degenerate linear transformations.
There is no loss of generality in assuming that $a_1(\cdot), \cdots, a_d(\cdot)$ are orthonormal in the sense that \begin{equation} \label{b6} \inner{a_i}{a_j} = I(i=j), \end{equation} as any set of linearly independent functions in a Hilbert space can be standardized to this effect. Let ${\mathbf x}_t = (x_{t1}, \cdots, x_{td})'$. It follows from (\ref{b3}) that ${\mathbf x}_t$ is a $d$-variate stationary time series with mean $\bf0$, and \begin{equation} \label{b10n} \Sigma_0({\mathbf u}, {\mathbf v}) = {\rm Cov}\{ \xi_{t}({\mathbf u}), \xi_{t}({\mathbf v}) \} = \sum_{i,j=1}^d a_i({\mathbf u}) a_j({\mathbf v}) \sigma_{ij}, \end{equation} where $\sigma_{ij}$ is the $(i,j)$-th element of $\mbox{Var}({\mathbf x}_{t})$. Let \begin{equation} \label{b10} \Sigma_0 \circ f ({\mathbf s}) = \int_{\mathcal S} \Sigma_0({\mathbf s}, {\mathbf u}) f({\mathbf u}) d{\mathbf u}, \qquad f \in L_2({\mathcal S}). \end{equation} Then $\Sigma_0$ is a non-negative definite operator defined on $L_2({\mathcal S})$. See Appendix A of Bathia \mbox{\sl et al.\;} (2010) for some basic facts on the operators in Hilbert spaces. It follows from Mercer's theorem (Mercer 1909) that $\Sigma_0$ admits the spectral decomposition \begin{equation} \label{b8} \Sigma_0({\mathbf u}, {\mathbf v}) = \sum_{j=1}^d \lambda_j \varphi_j({\mathbf u}) \varphi_j({\mathbf v}), \end{equation} where $\lambda_1 \ge \cdots \ge \lambda_d >0$ are the $d$ positive eigenvalues of $\Sigma_0({\mathbf u}, {\mathbf v})$, and $\varphi_1, \cdots, \varphi_d \in L_2({\mathcal S})$ are the corresponding eigenfunctions, i.e. \begin{equation} \label{b9} \Sigma_0\circ \varphi_j({\mathbf s}) = \int_{\mathcal S} \Sigma_0({\mathbf s}, {\mathbf u}) \varphi_j({\mathbf u}) d {\mathbf u} = \lambda_j \varphi_j({\mathbf s}). \end{equation} See Proposition~\ref{prop1} below.
\begin{proposition} \label{prop1} Let rank$(\mbox{Var}({\mathbf x}_t))=d$. Then the following assertions hold. \begin{itemize} \item[(i)] $\Sigma_0$ defined in (\ref{b10}) has exactly $d$ positive eigenvalues. \item[(ii)] The $d$ corresponding orthonormal eigenfunctions can be expressed as \[ \varphi_i({\mathbf s}) = \sum_{j=1}^d \gamma_{ij} a_j({\mathbf s}), \qquad i=1, \cdots, d, \] where $\boldsymbol{\gamma}_i \equiv (\gamma_{i1}, \cdots, \gamma_{id})'$, $i=1, \cdots, d$, are $d$ orthonormal eigenvectors of matrix $ \mbox{Var}({\mathbf x}_t) $. \end{itemize} \end{proposition}
The above proposition shows that the finite-dimensional structure (\ref{b4}) can be identified via the covariance functions of $\xi_t({\mathbf s})$, though the representation of (\ref{b4}) itself is not unique. Note that the linear space spanned by the eigenfunctions $\varphi_1(\cdot), \cdots, \varphi_d(\cdot)$ is the reproducing kernel Hilbert space (RKHS) generated by $\Sigma_0(\cdot, \, \cdot)$, and $\{ a_j(\cdot) \}$ and $\{ \varphi_j(\cdot)\}$ are two orthonormal bases for this RKHS. Furthermore any orthonormal basis of this RKHS can be taken as $a_1(\cdot), \cdots, a_d(\cdot)$. In Section \ref{sec3} below, the estimation for $a_1(\cdot), \cdots, a_d(\cdot)$ will be constructed in this spirit.
\section{Estimation} \label{sec3}
Let $\{ (y_t({\mathbf s}_i), {\mathbf z}_t({\mathbf s}_i) ), \; i=1, \cdots, p, \; t=1, \cdots, n\}$ be the available observations over space and time, where ${\mathcal S}_o \equiv \{{\mathbf s}_1, \cdots, {\mathbf s}_p \}
\subset {\mathcal S} $ are typically irregularly spaced. The total number of observations is $n\cdot p$.
\subsection{Estimation for finite dimensional representations of $\xi_t({\mathbf s})$} \label{sec31} To simplify the notation, we first consider the special case $\bbeta({\mathbf s}) \equiv 0$ in (\ref{b1}) in Sections~\ref{sec31} \& \ref{sec32}. Section \ref{sec34} below considers the least squares regression estimation for $\bbeta({\mathbf s})$. Then the procedures described in Sections~\ref{sec31} \& \ref{sec32} still apply if $\{ y_t({\mathbf s}_i)\}$ are replaced by the residuals from the regression estimation.
Now under (\ref{b4}), \begin{equation}\label{c2} y_t({\mathbf s}) = \xi_t({\mathbf s})+ {\varepsilon}_t({\mathbf s})= \sum_{j=1}^d a_j({\mathbf s}) x_{tj} + {\varepsilon}_t({\mathbf s}). \end{equation}
To exclude the nugget effect from our estimation, we divide the $p$ locations ${\mathbf s}_1, \cdots, {\mathbf s}_p$ into two sets ${\mathcal S}_1$ and ${\mathcal S}_2$ with, respectively, $p_1$ and $p_2$ elements, and $p_1+p_2=p$. Let ${\mathbf y}_{t, i}$ be a vector consisting of $y_t({\mathbf s})$ with $ {\mathbf s} \in {\mathcal S}_i$, $i=1, 2$. Then ${\mathbf y}_{t, 1}, {\mathbf y}_{t, 2}$ are two vectors with lengths $p_1$ and $p_2$ respectively. Denote by $\bxi_{t,1}, \, \bxi_{t, 2}$ the corresponding vectors consisting of $\xi_t(\cdot)$. It follows from (\ref{c2}) that \begin{equation} \label{f1} {\mathbf y}_{t,1} = \bxi_{t,1} + \mbox{\boldmath$\varepsilon$}_{t,1}= {\mathbf A}_1 {\mathbf x}_t + \mbox{\boldmath$\varepsilon$}_{t,1}, \qquad {\mathbf y}_{t,2} = \bxi_{t, 2} + \mbox{\boldmath$\varepsilon$}_{t,2}= {\mathbf A}_2 {\mathbf x}_t + \mbox{\boldmath$\varepsilon$}_{t,2}, \end{equation} where ${\mathbf A}_i$ is a $p_i \times d$ matrix whose rows consist of the coefficient functions $a_j(\cdot)$ on the RHS of (\ref{c2}) evaluated at the locations in ${\mathcal S}_i$, and $\mbox{\boldmath$\varepsilon$}_{t,i}$ consists of ${\varepsilon}_t({\mathbf s})$ with ${\mathbf s} \in {\mathcal S}_i$. There is no loss of generality in assuming ${\mathbf A}_1' {\mathbf A}_1 ={\mathbf I}_d$. This can be achieved by performing an orthogonal-triangular (QR) decomposition ${\mathbf A}_1 = \bGamma {\mathbf R}$, and replacing $({\mathbf A}_1, {\mathbf x}_t)$ by $(\bGamma, {\mathbf R}{\mathbf x}_t)$ in the first equation in (\ref{f1}). Note ${\mathcal M}({\mathbf A}_1) = {\mathcal M}(\bGamma)$, where ${\mathcal M}({\mathbf A})$ denotes the linear space spanned by the columns of matrix ${\mathbf A}$. Thus ${\mathcal M}({\mathbf A}_1)$ does not change from imposing the condition ${\mathbf A}_1' {\mathbf A}_1 ={\mathbf I}_d$.
Similarly, we may also assume ${\mathbf A}_2' {\mathbf A}_2 ={\mathbf I}_d$, which however implies that ${\mathbf x}_t$ in the second equation in (\ref{f1}) is unlikely to be the same as that in the first equation. Hence we may re-write (\ref{f1}) as \begin{equation} \label{f2} {\mathbf y}_{t,1} = {\mathbf A}_1 {\mathbf x}_t + \mbox{\boldmath$\varepsilon$}_{t,1}, \qquad {\mathbf y}_{t,2} = {\mathbf A}_2 {\mathbf x}_t^\star + \mbox{\boldmath$\varepsilon$}_{t,2}, \end{equation} where ${\mathbf A}_1'{\mathbf A}_1 = {\mathbf A}_2' {\mathbf A}_2= {\mathbf I}_d$, $ {\mathbf x}_t^\star = {\mathbf Q} {\mathbf x}_t $, and ${\mathbf Q}$ is an invertible $d \times d$ matrix. Note that $({\mathbf A}_1, {\mathbf x}_t)$ and $({\mathbf A}_2, {\mathbf x}_t^\star)$ are still not uniquely defined in (\ref{f2}), as they can be replaced, respectively, by $({\mathbf A}_1 \bGamma_1, \bGamma_1'{\mathbf x}_t)$ and $({\mathbf A}_2 \bGamma_2, \bGamma_2'{\mathbf x}_t^\star)$ for any $d \times d$ orthogonal matrices $\bGamma_1$ and $\bGamma_2$. However, ${\mathcal M}({\mathbf A}_1)$ and ${\mathcal M}({\mathbf A}_2)$ are uniquely defined by (\ref{f2}).
Since ${\mathbf y}_{t,1}$ and ${\mathbf y}_{t,2}$ have no common elements, it follows from (\ref{c2}) and (\ref{b2}) that \begin{equation} \label{f5n} \boldsymbol{\Sigma} \equiv {\rm Cov}({\mathbf y}_{t,1}, {\mathbf y}_{t,2}) = {\mathbf A}_1 {\rm Cov}({\mathbf x}_t, {\mathbf x}_t^{\star}) {\mathbf A}_2'. \end{equation} Note that ${\rm Cov}({\mathbf x}_t, {\mathbf x}_t^{\star}) = \mbox{Var}({\mathbf x}_t) {\mathbf Q}$. When $p \gg d$, it is reasonable to assume that rank$(\boldsymbol{\Sigma}) = {\rm rank}\{{\rm Cov}({\mathbf x}_t, {\mathbf x}_t^\star)\} ={\rm rank}\{\mbox{Var}({\mathbf x}_t)\} =d$. Let \begin{equation} \label{f5} \boldsymbol{\Sigma} \boldsymbol{\Sigma} ' = {\mathbf A}_1 {\rm Cov}({\mathbf x}_t, {\mathbf x}_t^\star) {\rm Cov}({\mathbf x}_t^\star, {\mathbf x}_t){\mathbf A}_1', \qquad \boldsymbol{\Sigma} ' \boldsymbol{\Sigma} = {\mathbf A}_2 {\rm Cov}({\mathbf x}_t^\star, {\mathbf x}_t) {\rm Cov}({\mathbf x}_t, {\mathbf x}_t^\star) {\mathbf A}_2'. \end{equation} Then these two matrices share the same $d$ positive eigenvalues, and $\boldsymbol{\Sigma} \boldsymbol{\Sigma} ' {\mathbf b} =0$ for any vector ${\mathbf b}$ perpendicular to ${\mathcal M}({\mathbf A}_1)$. Therefore, the $d$ orthonormal eigenvectors of matrix $\boldsymbol{\Sigma} \boldsymbol{\Sigma} '$ corresponding to its $d$ positive eigenvalues can be taken as the columns of ${\mathbf A}_1$. Similarly the $d$ orthonormal eigenvectors of matrix $\boldsymbol{\Sigma} '\boldsymbol{\Sigma} $ corresponding to its $d$ positive eigenvalues can be taken as the columns of ${\mathbf A}_2$. We construct the estimators for ${\mathbf A}_1, \, {\mathbf A}_2$ based on this observation.
Let $\widehat \boldsymbol{\Sigma}$ be the sample cross covariance matrix of ${\mathbf y}_{t,1}$ and ${\mathbf y}_{t,2}$, i.e. \begin{equation} \label{f3} \widehat \boldsymbol{\Sigma} = {1 \over n} \sum_{t=1}^n( {\mathbf y}_{t,1} - \bar {\mathbf y}_1) ({\mathbf y}_{t,2} - \bar {\mathbf y}_2)', \end{equation} where $\bar {\mathbf y}_i = n^{-1} \sum_t {\mathbf y}_{t,i}$. Let $\widehat \lambda_1 \ge \widehat \lambda_2 \ge \cdots $ be the eigenvalues of $\widehat \boldsymbol{\Sigma}\widehat \boldsymbol{\Sigma}'$. A natural estimator for $d$ is defined as \begin{equation} \label{c5} \widehat d = \arg \max_{1 \le j < p_*} \widehat \lambda_j \big/ \widehat \lambda_{j+1}, \end{equation} where $p_* \ll \min( p_1, \, p_2)$ is a prespecified integer (\mbox{\sl e.g.\;} $p_* = \min( p_1, \, p_2)/2$). This estimation method is based on the fact that $\lambda_j/\lambda_{j+1}$ are positive and finite constants for $j=1, \cdots, d-1$, and $\lambda_d/\lambda_{d+1} = \infty$. However, $\widehat \lambda_j/\widehat \lambda_{j+1}$ is asymptotically `$0/0$' for $j=d+1, \cdots, p-1$. In practice, we mitigate this difficulty by comparing the ratios for $j < p_* \ll \min(p_1, p_2)$. Asymptotic properties of the ratio estimators under different settings have been established in, e.g. Lam and Yao (2012), Chang \mbox{\sl et al.\;} (2015), and Zhang \mbox{\sl et al.\;} (2018). The good finite sample performance of the ratio estimators is also reported in those papers.
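As a small illustration of this step, the following sketch in Python (with synthetic data and illustrative dimensions of our own choosing) forms $\widehat\boldsymbol{\Sigma}$ as in (\ref{f3}) and applies the ratio estimator (\ref{c5}) to the eigenvalues of $\widehat\boldsymbol{\Sigma}\widehat\boldsymbol{\Sigma}'$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, p1, p2, d = 400, 30, 30, 3      # illustrative sizes, not from the paper

# Synthetic data following (f1): y_{t,i} = A_i x_t + nugget noise
A1, A2 = rng.normal(size=(p1, d)), rng.normal(size=(p2, d))
x = rng.normal(size=(n, d))
y1 = x @ A1.T + rng.normal(scale=0.5, size=(n, p1))
y2 = x @ A2.T + rng.normal(scale=0.5, size=(n, p2))

# Sample cross covariance matrix (f3)
Sigma = (y1 - y1.mean(0)).T @ (y2 - y2.mean(0)) / n

# Eigenvalues of Sigma Sigma' and the ratio estimator (c5)
lam = np.linalg.eigvalsh(Sigma @ Sigma.T)[::-1]   # decreasing order
p_star = min(p1, p2) // 2
ratios = lam[:p_star - 1] / lam[1:p_star]
d_hat = int(np.argmax(ratios)) + 1
print(d_hat)                       # should recover d = 3 in this toy case
\end{verbatim}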
Consequently the $\widehat d$ orthonormal eigenvectors of $\widehat \boldsymbol{\Sigma}\widehat \boldsymbol{\Sigma}'$ (or $\widehat \boldsymbol{\Sigma}'\widehat \boldsymbol{\Sigma}$), corresponding to the eigenvalues $\widehat \lambda_1, \cdots, \widehat \lambda_{\widehat d}$, can be taken as the estimated columns of ${\mathbf A}_1$ (or ${\mathbf A}_2$). However such an estimator ignores the fact that $\xi_t(\cdot)$ is continuous over the set ${\mathcal S}$, which should be taken into account to improve the estimation. To achieve this, denote by ${\mathbf s}_1^1, \cdots, {\mathbf s}_{p_1}^1$ the $p_1$ locations in ${\mathcal S}_1$, arranged so that the $j$-th component of ${\mathbf y}_{t,1}$ is the observation taken at the location ${\mathbf s}^1_j$. We define a graph Laplacian ${\mathbf L} \equiv {\mathbf G} - {\mathbf W}$, where ${\mathbf W}=(w_{ij})$ is a weight matrix with $w_{ii}=0$ and, e.g. $w_{ij}= 1/(1+ \| {\mathbf s}_i^1 - {\mathbf s}_j^1\|)$
($\|\cdot\|$ denotes the Euclidean norm) for $i\ne j$, and ${\mathbf G}=(g_{ij})$ with $g_{ii} = \sum_j w_{ij}$ and $g_{ij}=0$ for all $i\ne j$. Then it holds that for any column vector ${\mathbf a}=(a_1, \cdots, a_{p_1})'$, \[ {\mathbf a}' {\mathbf L} {\mathbf a} = \sum_{i=1}^{p_1} g_{ii} a_i^2 - \sum_{i,j=1}^{p_1} w_{ij} a_i a_j = {1 \over 2} \sum_{i,j=1}^{p_1} w_{ij} (a_i - a_j)^2. \] See, e.g., Hastie, Tibshirani and Friedman (2009, pp.545). By requiring ${\mathbf a}' {\mathbf L} {\mathbf a} \le c_0$ for some small positive constant $c_0$, the components of ${\mathbf a}$ at the nearby locations will be close to each other. Hence the columns of ${\mathbf A}_1$ are obtained by solving the following optimization problem: \[ \widehat \boldsymbol{\gamma}_1 =\arg \max_{ \boldsymbol{\gamma}} \boldsymbol{\gamma}' \widehat \boldsymbol{\Sigma}\widehat
\boldsymbol{\Sigma}' \boldsymbol{\gamma} \quad {\rm subject \; to} \;\; \| \boldsymbol{\gamma}\|=1 \;\; {\rm and } \; \; \boldsymbol{\gamma}' {\mathbf L} \boldsymbol{\gamma} \le c_0, \] and for $j=2, \cdots, \widehat d$, \[ \widehat \boldsymbol{\gamma}_j = \arg \max_{ \boldsymbol{\gamma}} \boldsymbol{\gamma}' \widehat \boldsymbol{\Sigma}\widehat
\boldsymbol{\Sigma}' \boldsymbol{\gamma} \quad {\rm subject \; to} \;\; \| \boldsymbol{\gamma}\|=1, \;\; \boldsymbol{\gamma}' \widehat \boldsymbol{\gamma}_i=0 \; {\rm for\;} 1\le i <j, \;\; {\rm and } \; \; \boldsymbol{\gamma}' {\mathbf L} \boldsymbol{\gamma} \le c_0. \] The above constrained optimization problem can be recast as an eigen-problem for the symmetric (but not necessarily non-negative definite) matrix $\widehat \boldsymbol{\Sigma} \widehat \boldsymbol{\Sigma}' - \tau {\mathbf L}$ stated below, where $\tau>0$ controls the penalty according to ${\mathbf L}$. \begin{quote} {\sl Find the orthonormal eigenvectors $ \widehat \boldsymbol{\gamma}_1, \cdots, \widehat \boldsymbol{\gamma}_{\widehat d}$ of $\widehat \boldsymbol{\Sigma} \widehat \boldsymbol{\Sigma}' - \tau {\mathbf L}$ corresponding to its $\widehat d$ \linebreak largest eigenvalues. } \end{quote} Denote the resulting estimator for the loading matrix ${\mathbf A}_1$ by \begin{equation} \label{f4} \widehat {\mathbf A}_1 = (\widehat \boldsymbol{\gamma}_1, \cdots, \widehat \boldsymbol{\gamma}_{\widehat d}). \end{equation} The estimator for ${\mathbf A}_2$, denoted by $\widehat {\mathbf A}_2$, is constructed in the same manner.
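A possible implementation of this penalised eigen-problem (a sketch only; the function names and the choice of $\tau$ are ours, not from the paper) builds the graph Laplacian from the locations in ${\mathcal S}_1$ and extracts the leading eigenvectors of $\widehat\boldsymbol{\Sigma}\widehat\boldsymbol{\Sigma}' - \tau{\mathbf L}$:
\begin{verbatim}
import numpy as np

def laplacian(coords):
    """Graph Laplacian L = G - W, w_ij = 1/(1 + ||s_i - s_j||), w_ii = 0."""
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    W = 1.0 / (1.0 + dist)
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W

def loading_estimate(Sigma_hat, coords1, d_hat, tau):
    """Estimated columns of A_1: top eigenvectors of Sigma Sigma' - tau*L."""
    M = Sigma_hat @ Sigma_hat.T - tau * laplacian(coords1)
    M = (M + M.T) / 2                      # symmetrise against round-off
    vals, vecs = np.linalg.eigh(M)
    order = np.argsort(vals)[::-1][:d_hat]
    return vecs[:, order]                  # p_1 x d_hat matrix
\end{verbatim}
Setting $\tau=0$ recovers the unpenalised estimator; $\widehat{\mathbf A}_2$ is obtained in the same way from $\widehat\boldsymbol{\Sigma}'\widehat\boldsymbol{\Sigma}$ and the locations in ${\mathcal S}_2$.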
By (\ref{f1}), the estimators for the two different representations of the latent processes are defined as \begin{equation} \label{f6} \widehat {\mathbf x}_t = \widehat {\mathbf A}_1' {\mathbf y}_{t,1}, \qquad \widehat {\mathbf x}_t^\star = \widehat {\mathbf A}_2' {\mathbf y}_{t,2}. \end{equation} Consequently, \begin{equation} \label{f7} \widehat \bxi_{t,1} = \widehat {\mathbf A}_1 \widehat {\mathbf x}_{t} = \widehat {\mathbf A}_1\widehat {\mathbf A}_1' {\mathbf y}_{t,1}, \qquad \widehat \bxi_{t,2} = \widehat {\mathbf A}_2 \widehat {\mathbf x}_{t}^\star = \widehat {\mathbf A}_2\widehat {\mathbf A}_2' {\mathbf y}_{t,2}. \end{equation} See also (\ref{f1}).
\begin{remark} \label{remark1} (i) The assumption that matrix $\boldsymbol{\Sigma} $ in (\ref{f5n}) has rank $d$ implies that all the latent factors are spatially correlated; see (\ref{c2}). In the {\sl unlikely} scenarios that some latent factors are only serially correlated but spatially uncorrelated, we should include autocovariance matrices in the estimation (Lam and Yao, 2012). To this end, let \[ \widehat \boldsymbol{\Sigma}_i(k) = {1 \over n} \sum_{t=1}^{n-k} ({\mathbf y}_{t+k,i} - \bar {\mathbf y}_i) ({\mathbf y}_{t,i} - \bar {\mathbf y}_i)', \quad \widehat \boldsymbol{\Sigma}_{12}(k) = {1 \over n} \, \sum_{t=\max\{1, -k\}}^{\min\{n-k, n\}} ({\mathbf y}_{t+k,1} - \bar {\mathbf y}_1) ({\mathbf y}_{t,2} - \bar {\mathbf y}_2)'. \] Assume $p_1=p_2$ for simplicity. Put \[ {\mathbf M}_1=\widehat \boldsymbol{\Sigma} \widehat \boldsymbol{\Sigma}' + \sum_{j=1}^{k_0}\big\{ \widehat \boldsymbol{\Sigma}_1(j) \widehat \boldsymbol{\Sigma}_1(j)' + \widehat \boldsymbol{\Sigma}_{12}(j) \widehat \boldsymbol{\Sigma}_{12}(j)' + \widehat \boldsymbol{\Sigma}_{12}(-j) \widehat \boldsymbol{\Sigma}_{12}(-j)' \big\}, \] \[ {\mathbf M}_2=\widehat \boldsymbol{\Sigma}' \widehat \boldsymbol{\Sigma} + \sum_{j=1}^{k_0}\big\{ \widehat \boldsymbol{\Sigma}_2(j) \widehat \boldsymbol{\Sigma}_2(j)' + \widehat \boldsymbol{\Sigma}_{12}(j)' \widehat \boldsymbol{\Sigma}_{12}(j) + \widehat \boldsymbol{\Sigma}_{12}(-j)' \widehat \boldsymbol{\Sigma}_{12}(-j) \big\}, \] where $\widehat \boldsymbol{\Sigma}$ is defined in (\ref{f3}), and $k_0\ge 1$ is an integer. Then we replace $\widehat \boldsymbol{\Sigma} \widehat \boldsymbol{\Sigma}'$ by ${\mathbf M}_1$ for computing $\widehat {\mathbf A}_1$ in (\ref{f4}), and replace $\widehat \boldsymbol{\Sigma}' \widehat \boldsymbol{\Sigma}$ by ${\mathbf M}_2$ for computing $\widehat {\mathbf A}_2$. Empirical evidence in modelling high-dimensional time series indicates that the estimation is not sensitive to the choice of $k_0$; small values of $k_0$ such as 1 to 5 are sufficient for most applications (Lam \mbox{\sl et al.\;} 2011, Lam and Yao 2012, Chang \mbox{\sl et al.\;} 2015). Since using ${\mathbf M}_1$ and ${\mathbf M}_2$ does not add anything fundamentally new, we proceed with the simple version only.
(ii) The proposed procedure encapsulates all the dependence across space and time into $d$ latent factors. Those latent factors, specified objectively by sample covariances (and autocovariances) of the data, capture all the linear correlations parsimoniously. The real data example in Section 6.2 below, and also those not shown in this paper, indicate that the estimated $d$ is often small.
\end{remark}
\subsection{Aggregating via random partitioning} \label{sec32}
The estimation for the latent variable $\xi_t(\cdot)$ depends on partitioning $ {\mathcal S}_o = \{ {\mathbf s}_1, \cdots, {\mathbf s}_p\}$ into two non-overlapping sets ${\mathcal S}_1$ and ${\mathcal S}_2$; see (\ref{f7}). Since the estimation procedure presented in Section \ref{sec31} puts ${\mathcal S}_1$ and ${\mathcal S}_2$ on equal footing, we set $p_1=[p/2]$ and $p_2=p-p_1$. By randomly dividing $ {\mathcal S}_o$ into ${\mathcal S}_1$ and ${\mathcal S}_2$ with the sizes $p_1$ and $p_2$ respectively, the estimates for $\bxi_{t,1}$ and $\bxi_{t,2}$ are obtained as in (\ref{f7}). We repeat this randomization $J$ times, where $J \ge 1$ is a large integer, leading to $J$ pairs of estimates $(\widehat \bxi_{t,1}^{j}, \, \widehat \bxi_{t,2}^{j})$ for $j=1, \cdots, J.$ The aggregating estimator over the randomized partitions is \begin{equation} \label{f8} \widetilde \xi_{t}({\mathbf s}_i) = {1 \over J} \sum_{j=1}^J \widehat\xi_{t}^{j}({\mathbf s}_i), \qquad i=1, \cdots, p, \end{equation} where $\widehat\xi_{t}^{j}({\mathbf s}_i)$ is a component of either $\widehat \bxi_{t,1}^{j}$ or $\widehat \bxi_{t,2}^{j}$, depending on ${\mathbf s}_i \in {\mathcal S}_1 $ or ${\mathcal S}_2$ in the $j$-th randomized partition of ${\mathcal S}_o$. Similar to the Bagging method of Breiman (1996), the choice of $J$ is not critical. In our numerical experiments, we set $J=100$.
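The whole procedure of Sections \ref{sec31} and \ref{sec32} can be summarised in a few lines of code. The sketch below (Python; unpenalised version with $\tau=0$, and function names of our own choosing) performs the reconstruction (\ref{f7}) for each random partition and the aggregation (\ref{f8}):
\begin{verbatim}
import numpy as np

def xi_aggregate(Y, J=100, d=None, seed=0):
    """Aggregated estimate of xi_t(s_i) from the n x p data matrix Y.
    Each round randomly splits the p locations into two halves, estimates
    A_1 and A_2 from the SVD of the cross covariance matrix (tau = 0),
    reconstructs xi as in (f7), and the J rounds are averaged as in (f8)."""
    rng = np.random.default_rng(seed)
    n, p = Y.shape
    Yc = Y - Y.mean(axis=0)
    out = np.zeros_like(Y, dtype=float)
    for _ in range(J):
        perm = rng.permutation(p)
        i1, i2 = perm[: p // 2], perm[p // 2:]
        Sigma = Yc[:, i1].T @ Yc[:, i2] / n            # (f3)
        U, s, Vt = np.linalg.svd(Sigma, full_matrices=False)
        if d is None:                                  # ratio estimator (c5)
            ratios = s[:-1] / s[1:]
            d_use = int(np.argmax(ratios[: len(s) // 2])) + 1
        else:
            d_use = d
        A1, A2 = U[:, :d_use], Vt[:d_use].T
        out[:, i1] += Y[:, i1] @ A1 @ A1.T / J         # xi-hat on S_1, (f7)
        out[:, i2] += Y[:, i2] @ A2 @ A2.T / J         # xi-hat on S_2, (f7)
    return out
\end{verbatim}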
\begin{theorem} \label{prop2}
For $k=1, \cdots, n$ and $\ell=1, \cdots, p$, \begin{equation} \label{p1}
E\Big(\big\{ \widetilde \xi_{k}({\mathbf s}_\ell) - y_{k}({\mathbf s}_\ell)\big\}^2 \Big| \{ y_t ({\mathbf s}_i) \} \Big)
\le E\Big(\big\{ \widehat \xi_{k}({\mathbf s}_\ell) - y_{k}({\mathbf s}_\ell)\big\}^2 \Big| \{ y_t ({\mathbf s}_i) \} \Big), \end{equation}
and
\begin{equation} \label{p2}
\mathrm{E}\Big({1 \over np} \sum_{t=1}^n\sum_{j=1}^p \big\{ \widetilde \xi_{t}({\mathbf s}_j) - \xi_{t}({\mathbf s}_j)\}^2\Big| \{\xi_{t}({\mathbf s}_i), \, y_t ({\mathbf s}_i) \} \Big)\le \mathrm{E}\Big({1 \over np} \sum_{t=1}^n\sum_{j=1}^p \big\{ \widehat \xi_{t}({\mathbf s}_j) - \xi_{t}({\mathbf s}_j)\}^2\Big| \{\xi_{t}({\mathbf s}_i),\, y_t ({\mathbf s}_i) \} \Big).
\end{equation}
\end{theorem}
Theorem \ref{prop2} is in the same spirit as Breiman's inequality for Bagging; see (4.2) in Breiman (1996). Note that all the conditional expectations in Theorem~\ref{prop2} above are taken with respect to the random partitioning of the location set ${\mathcal S}_o$ into ${\mathcal S}_1 $ and ${\mathcal S}_2$. There are in total $p_0\equiv p!/(p_1!p_2!)$ different partitions, each being taken with probability $1/p_0$. Denote by $\widehat \xi_k^{(1)}(\cdot), \cdots, \widehat \xi_k^{(p_0)}(\cdot)$ the resulting $p_0$ estimates as in (\ref{f7}). Then \begin{align} \label{c9na}
& E\Big(\big\{ \widetilde \xi_{k}({\mathbf s}_\ell) - y_{k}({\mathbf s}_\ell)\big\}^2 \Big| \{ y_t ({\mathbf s}_i) \} \Big)\nonumber
\; =\; E\Big(\big\{ {1\over J}\sum_{l=1}^{J}(\widehat \xi_{k}^{l}({\mathbf s}_\ell) - y_{k}({\mathbf s}_\ell))\big\}^2 \Big| \{ y_t ({\mathbf s}_i) \} \Big)\nonumber\\
\le \; & E\Big({1\over J}\sum_{l=1}^{J}\big\{ (\widehat \xi_{k}^{l}({\mathbf s}_\ell) - y_{k}({\mathbf s}_\ell))\big\}^2 \Big| \{ y_t ({\mathbf s}_i) \} \Big)\nonumber\\ = \;&{1 \over p_0} \sum_{j=1}^{p_0} \big\{\widehat \xi_k^{(j)} ({\mathbf s}_\ell) - y_{k}({\mathbf s}_\ell)\big\}^2
= E\Big(\big\{ \widehat \xi_{k}({\mathbf s}_\ell) - y_{k}({\mathbf s}_\ell)\big\}^2 \Big| \{ y_t ({\mathbf s}_i) \} \Big). \nonumber \end{align}
This completes the proof for (\ref{p1}). Note that (\ref{p2}) can be established in the same manner.
\subsection{Scalable to large datasets} \label{sec33} The estimator $\widehat {\mathbf A}_1$ in (\ref{f4}) was obtained from an eigenanalysis
which requires $O(p_1 p_2^2)$ operations. This is computationally challenging when $p$ is large. However, our approach can be easily adapted to large $p$, which is in the spirit of `divide and conquer'.
We randomly divide ${\mathcal S}_o$ into $p/q$ sets ${\mathcal S}_1^*, \cdots, {\mathcal S}_{p/q}^*$, each containing $q$ locations, where $q$ is an integer such that the eigenanalysis for $q\times q$ matrices can be performed comfortably with the available computing capacity. We estimate $\xi_t(\cdot)$ at the $q$ locations in ${\mathcal S}_i^*$ for each of $i=1, \cdots, p/q$ separately using the aggregation algorithm below. \begin{quote} (i) Randomly select $q$ locations from ${\mathcal S}_o - {\mathcal S}_i^*$.\\ (ii) Combine the data on the locations in ${\mathcal S}_i^*$ and the locations selected in (i). By treating the combined data as the whole sample, calculate $\widehat \xi_t({\mathbf s})$ for ${\mathbf s} \in {\mathcal S}_i^*$ as in (\ref{f7}).\\ (iii) Repeat (i) and (ii) above $J$ times, and aggregate the estimates as in (\ref{f8}). \end{quote}
Alternatively, we can randomly choose $2q$ locations from ${\mathcal S}_o$ to perform the estimation (\ref{f7}). Repeating the estimation a large number (say, greater than $Jp/(2q)$) of times, we then aggregate the estimates at each location as in (\ref{f8}). This is a computationally more efficient approach with the drawback that the number of estimates obtained at each location is not directly under control.
\subsection{Regression estimation} \label{sec34}
In the presence of observable covariates ${\mathbf z}_t(\cdot)$ in (\ref{b1}), the regression coefficient vector $\bbeta(\cdot)$ can be estimated by the least squares method. To this end, let \begin{equation} \label{c9} {\mathbf y}({\mathbf s}_i) = (y_1({\mathbf s}_i), \cdots, y_n({\mathbf s}_i))', \qquad {\mathbf Z}({\mathbf s}_i)= ({\mathbf z}_1({\mathbf s}_i), \cdots, {\mathbf z}_n({\mathbf s}_i) )'. \end{equation} It follows from (\ref{b1}) that \[ {\mathbf y}({\mathbf s}_i) = {\mathbf Z}({\mathbf s}_i) \bbeta({\mathbf s}_i) + {\mathbf e}({\mathbf s}_i), \] where ${\mathbf e}({\mathbf s}_i) = (\xi_1({\mathbf s}_i)+ {\varepsilon}_1({\mathbf s}_i), \cdots, \xi_n({\mathbf s}_i)+ {\varepsilon}_n({\mathbf s}_i))'$. Thus the least squares estimator for $\bbeta({\mathbf s}_i)$ is defined as \begin{equation} \label{c10} \widehat \bbeta({\mathbf s}_i) = \{ {\mathbf Z}({\mathbf s}_i)' {\mathbf Z}({\mathbf s}_i)\}^{-1} {\mathbf Z}({\mathbf s}_i)' {\mathbf y}({\mathbf s}_i), \qquad i=1, \cdots, p. \end{equation} Then, by replacing the original data $y_t({\mathbf s}_i)$ by the regression residuals $ y_t({\mathbf s}_i) - {\mathbf z}_t({\mathbf s}_i)' \widehat \bbeta({\mathbf s}_i)$, we proceed to estimate the finite dimensional structure of $\xi_t(\cdot)$ as described in Section \ref{sec31} above.
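The location-wise least squares step (\ref{c10}) is elementary; a minimal Python sketch is included below for completeness, with illustrative names only.
\begin{verbatim}
import numpy as np

def location_ols(y, Z):
    """Least squares estimator (c10) at one location.

    y : length-n vector of observations y_1(s_i), ..., y_n(s_i).
    Z : n x m matrix whose rows are the covariate vectors z_t(s_i)'.
    Returns beta_hat(s_i) and the residuals used in place of y_t(s_i).
    """
    beta_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)
    residuals = y - Z @ beta_hat
    return beta_hat, residuals
\end{verbatim}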
However, in the presence of endogeneity in the sense that ${\rm Cov}({\mathbf z}_t({\mathbf s}), \, \xi_t({\mathbf s})) \ne 0$, the regression estimator $\widehat \bbeta({\mathbf s}_i)$ in (\ref{c10}) is in practice an estimator for $$\bbeta({\mathbf s}_i)^\star \equiv \bbeta({\mathbf s}_i) + \mbox{Var}({\mathbf z}_t({\mathbf s}_i))^{-1} {\rm Cov}({\mathbf z}_t({\mathbf s}_i),\, \xi_t({\mathbf s}_i))$$ instead, as (\ref{b1}) can be written as $ y_t({\mathbf s}) = {\mathbf z}_t ({\mathbf s})' \bbeta({\mathbf s})^\star + \xi_t({\mathbf s})^\star + {\varepsilon}_t({\mathbf s}) $, where $$\xi_t({\mathbf s})^\star = \xi_t({\mathbf s}) - {\mathbf z}_t ({\mathbf s})' \mbox{Var}({\mathbf z}_t({\mathbf s}))^{-1} {\rm Cov}({\mathbf z}_t({\mathbf s}), \xi_t({\mathbf s})).$$ Since $ {\rm Cov}( {\mathbf z}_t ({\mathbf s}), \; \xi_t({\mathbf s})^\star ) = {\rm Cov}({\mathbf z}_t({\mathbf s}), \xi_t({\mathbf s})) - \mbox{Var}({\mathbf z}_t({\mathbf s}))\mbox{Var}({\mathbf z}_t({\mathbf s}))^{-1} {\rm Cov}({\mathbf z}_t({\mathbf s}), \xi_t({\mathbf s})) =0$, $\widehat \bbeta({\mathbf s}_i)$ is a consistent estimator for $\bbeta({\mathbf s}_i)^\star$. Furthermore, the estimation based on the residuals described above is still valid, though the finite dimensional structure (\ref{b4}) is now imposed upon the latent process $\xi_t({\mathbf s})^\star$ instead.
\section{Kriging} \label{sec4}
First we state a general lemma on linear prediction which shows explicitly the terms required in order to carry out kriging for spatio-temporal process $y_t({\mathbf s})$.
\begin{lemma} For any random vectors $\bzeta$ and $\bfeta$ with
$E( \|\bzeta\|^2 + \|\bfeta\|^2) < \infty$, the best linear predictor for $\bzeta$ based on $\bfeta$ is defined as $\widehat \bzeta = \balpha_0 + {\mathbf B}_0 \bfeta$, where \[ (\balpha_0, {\mathbf B}_0) = \arg \inf_{\balpha, {\mathbf B}}
E\big\{ \| \bzeta-\balpha - {\mathbf B} \bfeta\|^2 \big\}. \] In fact, \[ {\mathbf B}_0 = {\rm Cov}(\bzeta, \bfeta)\{ \mbox{Var}(\bfeta) \}^{-1} , \qquad \balpha_0 = E \bzeta - {\mathbf B}_0 E\bfeta. \] Furthermore, \begin{equation} \label{d0} E\{( \widehat \bzeta - \bzeta) ( \widehat \bzeta - \bzeta)' \} = \mbox{Var}(\bzeta) - {\rm Cov}(\bzeta, \bfeta)\{ \mbox{Var}(\bfeta) \}^{-1} {\rm Cov}(\bfeta, \bzeta). \end{equation} \end{lemma}
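As an aside, a sample analogue of the best linear predictor in Lemma 1 is straightforward to compute from joint realisations of $(\bzeta, \bfeta)$; the short Python sketch below is for illustration only and makes no use of the structure (\ref{b4}).
\begin{verbatim}
import numpy as np

def best_linear_predictor(zeta, eta):
    """Sample version of the best linear predictor in Lemma 1.

    zeta : n x k matrix of realisations of the target vector.
    eta  : n x m matrix of realisations of the predictor vector.
    Returns (alpha0, B0) such that zeta_hat = alpha0 + eta @ B0.T.
    """
    k = zeta.shape[1]
    C = np.cov(zeta, eta, rowvar=False)              # joint (k+m) x (k+m) covariance
    B0 = C[:k, k:] @ np.linalg.inv(C[k:, k:])        # Cov(zeta,eta) Var(eta)^{-1}
    alpha0 = zeta.mean(axis=0) - B0 @ eta.mean(axis=0)
    return alpha0, B0
\end{verbatim}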
With the above lemma, we can predict any value $y_t({\mathbf s})$. In the two scenarios considered below, we illustrate how to calculate the inverses of large covariance matrices by taking advantage of the finite dimensional structure (\ref{b4}): all matrices to be inverted are of size $d\times d$ only, regardless of the size of $p$. Technically, we repeatedly use the following formulas for the inverses of partitioned matrices.
\begin{lemma} For an invertible block-partitioned matrix ${\mathbf H}= \Big( \begin{array}{cc} {\mathbf H}_{11} & {\mathbf H}_{12} \\ {\mathbf H}_{21} & {\mathbf H}_{22} \end{array} \Big)$, it holds that \begin{equation} \label{d1} {\mathbf H}^{-1} = \Big( \begin{array}{ll} {\mathbf H}_{11}^{-1} + {\mathbf H}_{11}^{-1}{\mathbf H}_{12}({\mathbf H}_{22} - {\mathbf H}_{21} {\mathbf H}_{11}^{-1}{\mathbf H}_{12})^{-1} {\mathbf H}_{21} {\mathbf H}_{11}^{-1} & - {\mathbf H}_{11}^{-1}{\mathbf H}_{12} ({\mathbf H}_{22} - {\mathbf H}_{21} {\mathbf H}_{11}^{-1}{\mathbf H}_{12})^{-1}\\ -({\mathbf H}_{22} - {\mathbf H}_{21} {\mathbf H}_{11}^{-1}{\mathbf H}_{12})^{-1} {\mathbf H}_{21} {\mathbf H}_{11}^{-1} & ({\mathbf H}_{22} - {\mathbf H}_{21} {\mathbf H}_{11}^{-1}{\mathbf H}_{12})^{-1} \end{array} \Big) \end{equation} provided ${\mathbf H}_{11}^{-1}$ exists. Furthermore, \begin{equation} \label{d2} ({\mathbf H}_{22} - {\mathbf H}_{21} {\mathbf H}_{11}^{-1}{\mathbf H}_{12})^{-1} = {\mathbf H}_{22}^{-1} + {\mathbf H}_{22}^{-1}{\mathbf H}_{21}({\mathbf H}_{11} - {\mathbf H}_{12} {\mathbf H}_{22}^{-1}{\mathbf H}_{21})^{-1} {\mathbf H}_{12} {\mathbf H}_{22}^{-1} \end{equation} provided both ${\mathbf H}_{11}^{-1}$ and ${\mathbf H}_{22}^{-1}$ exist. \end{lemma}
Formula (\ref{d1}) can be proved by checking ${\mathbf H}^{-1} {\mathbf H} = {\mathbf I}$ directly, while (\ref{d2}) follows from (\ref{d1}) by comparing the (1,1) and (2,2) blocks on the RHS of (\ref{d1}).
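The formulas (\ref{d1}) and (\ref{d2}) are also easy to verify numerically. The following Python sketch, which assumes that ${\mathbf H}_{11}$ and the Schur complement ${\mathbf H}_{22} - {\mathbf H}_{21}{\mathbf H}_{11}^{-1}{\mathbf H}_{12}$ are invertible, is included purely as an illustration.
\begin{verbatim}
import numpy as np

def block_inverse(H11, H12, H21, H22):
    """Inverse of a 2x2 block matrix via formula (d1); H11 must be invertible."""
    H11_inv = np.linalg.inv(H11)
    S = H22 - H21 @ H11_inv @ H12               # Schur complement of H11
    S_inv = np.linalg.inv(S)
    top_left = H11_inv + H11_inv @ H12 @ S_inv @ H21 @ H11_inv
    top_right = -H11_inv @ H12 @ S_inv
    bottom_left = -S_inv @ H21 @ H11_inv
    return np.block([[top_left, top_right], [bottom_left, S_inv]])

# quick numerical check against a direct inverse
rng = np.random.default_rng(0)
H = rng.standard_normal((5, 5)); H = H @ H.T + 5 * np.eye(5)
B = block_inverse(H[:3, :3], H[:3, 3:], H[3:, :3], H[3:, 3:])
assert np.allclose(B, np.linalg.inv(H))
\end{verbatim}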
\subsection{Kriging over space} \label{sec41}
The goal is to predict the unobserved value $y_{t}({\mathbf s}_0)$ for some ${\mathbf s}_0 \in {\mathcal S}$, $1\le t \le n$, and ${\mathbf s}_0 \ne {\mathbf s}_j $ for $1\le j \le p$, based on the observations ${\mathbf y}_t \equiv ({\mathbf y}_{t,1}', {\mathbf y}_{t,2}')'$ only, where ${\mathbf y}_{t,1}, {\mathbf y}_{t,2}$ are defined as in (\ref{f1}). We introduce two predictors below. We always use the notation $K_h(\cdot) = h^{-1} K(\cdot/h)$, where $K(\cdot)$ denotes a kernel function, $h>0$ is a bandwidth, and $K$ and $h$ may be different at different places.
To simplify the notation, we assume $\bbeta({\mathbf s}) \equiv 0$ in (\ref{b1}). As indicated in Section \ref{sec34}, this effectively amounts to replacing the observations $y_t({\mathbf s}_j)$ by the regression residuals. For kriging, we also need to estimate $\bbeta({\mathbf s}_0)$ based on $\widehat \bbeta({\mathbf s}_j)$, $j=1, \cdots, p$, given in (\ref{c10}). This can be achieved by, for example, kernel smoothing: \begin{equation} \label{d6} \widehat \bbeta({\mathbf s}_0) = \sum_{j=1}^p \widehat \bbeta({\mathbf s}_j) K_h({\mathbf s}_j - {\mathbf s}_0) \Big/ \sum_{j=1}^p K_h({\mathbf s}_j - {\mathbf s}_0), \end{equation} where $K(\cdot)$ is a density function defined on $\EuScript R^2$ and $h>0$ is a bandwidth.
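A minimal Python sketch of the smoother (\ref{d6}) is given below; it uses a Gaussian kernel purely for illustration (Condition 3 in Section \ref{sec52} requires a compactly supported kernel), and the function and variable names are not part of the methodology.
\begin{verbatim}
import numpy as np

def kernel_smooth(values, locations, s0, h):
    """Nadaraya-Watson smoother as in (d6) or (f10n).

    values    : p x m array of quantities attached to the p locations
                (e.g. the rows beta_hat(s_j)' or the values xi_hat_t(s_j)).
    locations : p x 2 array of the sampling locations s_1, ..., s_p.
    s0        : length-2 array, the target location.
    h         : bandwidth (Gaussian kernel used here for illustration).
    """
    d2 = np.sum((locations - s0) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / h ** 2)      # kernel weights K_h(s_j - s0), up to a constant
    w = w / w.sum()                     # the normalising constant cancels in the ratio
    return w @ values
\end{verbatim}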
Furthermore, a local linear smoothing can be applied to improve the accuracy of the estimation; see, e.g. Chapter 3 of Fan and Gijbels (1996). By the standard argument it can be shown (see the supplementary document) that \[
|\widehat \bbeta({\mathbf s}_0) - \bbeta({\mathbf s}_0)| = O_p(h^2 + n^{-1/2}), \] provided that the conditions in Theorem \ref{th3} in Section \ref{sec52} below hold.
Note that if $\bbeta({\mathbf s}_j)$, $j=1, \cdots, p$, were all known, the above error rate would reduce to $O_p(h^2)$, as $\bbeta(\cdot)$ is deterministic and continuous. See Condition 4 in Section \ref{sec52} below. The term of order $n^{-1/2}$ reflects the errors in estimating $\bbeta({\mathbf s}_j)$. In the rest of Section \ref{sec4}, we adhere to the assumption $\bbeta({\mathbf s}) \equiv 0$.
It follows from Lemma 1 that the best linear predictor for $y_{t}({\mathbf s}_0)$ based on ${\mathbf y}_t $ is \begin{equation} \label{d5} \widehat y_t({\mathbf s}_0) = {\rm Cov}(y_t({\mathbf s}_0), {\mathbf y}_t) \mbox{Var}({\mathbf y}_t)^{-1} {\mathbf y}_t. \end{equation} It follows from (\ref{d0}) that \begin{align} \nonumber & E[\{ \widehat y_t({\mathbf s}_0) - y_t({\mathbf s}_0) \}^2] = \mbox{Var}\{y_t({\mathbf s}_0)\} - {\rm Cov}(y_t({\mathbf s}_0), {\mathbf y}_t) \mbox{Var}({\mathbf y}_t)^{-1} {\rm Cov}({\mathbf y}_t, y_t({\mathbf s}_0)) \\ = \; & \sigma({\mathbf s}_0)^2 + \mbox{Var}\{ \xi_t({\mathbf s}_0) \} - {\rm Cov}(\xi_t({\mathbf s}_0), \bxi_t) \{ \mbox{Var}(\bxi_t) + {\mathbf D} \}^{-1} {\rm Cov}(\bxi_t, \xi_t({\mathbf s}_0)), \label{f11} \end{align} where ${\mathbf D} =\mbox{Var}(\mbox{\boldmath$\varepsilon$}_t)$ is a diagonal matrix, $\mbox{\boldmath$\varepsilon$}_t = (\mbox{\boldmath$\varepsilon$}_{t,1}', \mbox{\boldmath$\varepsilon$}_{t,2}')'$ and $\bxi_t = (\bxi_{t,1}', \bxi_{t,2}')'$. See (\ref{f1}).
To apply predictor $\widehat y_t({\mathbf s}_0)$ in (\ref{d5}) in practice, we need to estimate both ${\rm Cov}(y_t({\mathbf s}_0), {\mathbf y}_t)$ and $\mbox{Var}({\mathbf y}_t)$. Since $ {\rm Cov}(y_t({\mathbf s}_0), {\mathbf y}_t) = {\rm Cov}(\xi_t({\mathbf s}_0), {\mathbf y}_t) $, it can be estimated by \[ c({\mathbf s}_0) = {1 \over n} \sum_{k=1}^n (\widehat \xi_k({\mathbf s}_0) - \bar \xi({\mathbf s}_0) ) ({\mathbf y}_k - \bar {\mathbf y}), \] where $\widehat \xi_t({\mathbf s}_0)$ is a kernel estimator for $\xi_t({\mathbf s}_0)$ defined as \begin{equation} \label{f10n} \widehat \xi_t({\mathbf s}_0) = \sum_{j=1}^p \widehat \xi_t({\mathbf s}_j) K_h({\mathbf s}_j - {\mathbf s}_0) \Big/ \sum_{j=1}^p K_h({\mathbf s}_j - {\mathbf s}_0) \end{equation} with $\widehat \xi_t({\mathbf s}_1), \cdots \widehat \xi_t({\mathbf s}_p)$ defined in (\ref{f7}) (see also (\ref{d6}) above), and $\bar \xi({\mathbf s}_0) = n^{-1} \sum_t \widehat \xi_t({\mathbf s}_0)$. Thus a realistic predictor for $y_t({\mathbf s}_0)$ is \begin{equation} \label{f10} \widehat y_t^r({\mathbf s}_0)=c({\mathbf s}_0) \widehat \boldsymbol{\Sigma}_y^{-1} {\mathbf y}_t, \end{equation} where $\widehat \boldsymbol{\Sigma}_y = n^{-1} \sum_{k=1}^n ({\mathbf y}_k -\bar {\mathbf y})({\mathbf y}_k -\bar {\mathbf y})'$ is the sample variance of ${\mathbf y}_t$. Nevertheless it turns out that \begin{equation} \label{f10new} \widehat y_t^r({\mathbf s}_0)=\widehat \xi_t({\mathbf s}_0) .
\end{equation}
To show this, let $w_j=K_h({\mathbf s}_j - {\mathbf s}_0) \big/ \sum_{j=1}^p K_h({\mathbf s}_j - {\mathbf s}_0)$. It follows from (\ref{f7}) that \begin{eqnarray} \widehat y_t^r({\mathbf s}_0)&=&(w_1, \cdots, w_p)\Big[{1\over n}\sum_{k=1}^{n}(\widehat\bxi_k-\bar{\bxi})({\mathbf y}_k-\bar{\mathbf y})'\Big]\widehat\Sigma_y^{-1}{\mathbf y}_t\nonumber\\ &=&(w_1, \cdots, w_p)\left( \begin{array}{cc} \widehat {\mathbf A}_1 \widehat{\mathbf A}'_1 & \bf0 \\ \bf0 &\widehat {\mathbf A}_2 \widehat{\mathbf A}'_2 \end{array} \right)\Big[{1\over n}\sum_{k=1}^{n}({\mathbf y}_k-\bar{{\mathbf y}})({\mathbf y}_k-\bar{\mathbf y})'\Big]\widehat\Sigma_y^{-1}{\mathbf y}_t\nonumber\\ &=&(w_1, \cdots, w_p)\left( \begin{array}{cc} \widehat {\mathbf A}_1 \widehat{\mathbf A}'_1 & \bf0 \\ \bf0 &\widehat {\mathbf A}_2 \widehat{\mathbf A}'_2 \end{array} \right){\mathbf y}_t\nonumber =(w_1, \cdots, w_p)\left(\begin{array}{c}\widehat \bxi_{t,1}\\ \widehat \bxi_{t,2} \end{array}\right)=\widehat \xi_t({\mathbf s}_0).\nonumber\end{eqnarray}
It is worth pointing out that expression (\ref{f10}) involves inverting the $p\times p$ matrix $\widehat \boldsymbol{\Sigma}_y$, which is difficult when $p$ is large, while (\ref{f10new}) allows the predictor $\widehat y^r_t({\mathbf s}_0)$ to be computed without evaluating $\widehat \boldsymbol{\Sigma}_y^{-1}$ at all.
By Theorem \ref{prop2}, a better predictor than $\widehat y_t^r({\mathbf s}_0)$ in (\ref{f10new}) is \begin{equation} \label{f12} \widetilde y_t^r({\mathbf s}_0) \equiv \widetilde \xi_t({\mathbf s}_0)
= \sum_{j=1}^p \widetilde \xi_t({\mathbf s}_j) K_h({\mathbf s}_j - {\mathbf s}_0) \Big/ \sum_{j=1}^p K_h({\mathbf s}_j - {\mathbf s}_0), \end{equation} where $\widetilde \xi_t({\mathbf s}_j)$ is defined in (\ref{f8}).
Both $\widehat y_t^r({\mathbf s}_0)$ and $\widetilde y_t^r({\mathbf s}_0)$ are approximate linear estimators for $\xi_t({\mathbf s}_0)$ based on $\xi_t({\mathbf s}_1), \cdots, \xi_t({\mathbf s}_p)$. Note that $y_t({\mathbf s}_0) = \xi_t({\mathbf s}_0) + {\varepsilon}_t({\mathbf s}_0)$, and the nugget effect term ${\varepsilon}_t({\mathbf s}_0)$ is unpredictable. The best (unrealistic) predictor for $y_t({\mathbf s}_0)$ is $\xi_t({\mathbf s}_0)$. It is indeed recommended to predict $\xi_t({\mathbf s}_0)$ rather than $y_t({\mathbf s}_0)$ directly. See also pp.\,136--137 of Cressie and Wikle (2011).
\begin{remark} \label{remark3} (i) The realistic kriging estimators $\widehat y_t^r({\mathbf s}_0)$ and $\widetilde y_t^r({\mathbf s}_0)$ actually make full use of all the available data, even though they were derived from (\ref{d5}). Note that the ideal (and unrealistic) predictor for $y_t({\mathbf s}_0)$ is $\sum_{1\le j \le d} a_j({\mathbf s}_0) x_{tj}$, and $\widehat x_{tj}$ and $\widetilde x_{tj}$ are the estimators for $x_{tj}$ based on all the available data from time 1 to $n$. It follows from (\ref{f10n}) -- (\ref{f10new}) that $\widehat y_t^r({\mathbf s}_0)$ is a realistic optimal predictor for $y_t({\mathbf s}_0)$ based on $\{\, \widehat x_{tj},\, j=1, \cdots, \widehat d \,\}$.
(ii) When the number of observations in the vicinity of ${\mathbf s}_0$ is small, the kernel based predictor (\ref{f10n}) may perform poorly. One alternative is to impose a parametric spatial covariance function and to perform the kriging based on the parametric model (Sections 4.1.1 and 6.1 of Cressie and Wikle, 2011). How to identify an appropriate parametric model using the nonparametric analysis presented in this paper deserves a separate study.
\end{remark}
\subsection{Kriging in time} \label{sec42}
\subsubsection{Prediction methods} The goal now is to predict the future values $y_{n+j}({\mathbf s}_1), \cdots, y_{n+j}({\mathbf s}_p) $, for some $j\ge 1$, based on ${\mathbf y}_n, \cdots, {\mathbf y}_{n-j_0}$, where $0\le j_0 < n$ is a prescribed integer. When $j_0=n-1$, we use all the available data to predict the future values. Since ${\varepsilon}_{n+j}(\cdot)$ is unpredictable, a more effective approach is to predict ${\mathbf x}_{n+j}= (x_{n+j,1}, \cdots, x_{n+j, d})'$ based on ${\mathbf x}_n, \cdots, {\mathbf x}_{n-j_0}$, as the ideal predictor for $y_{n+j}({\mathbf s}_i)$ is $\xi_{n+j}({\mathbf s}_i)$; see (\ref{c2}).
Since our procedure to recover the latent process ${\mathbf x}_t$ requires splitting ${\mathbf y}_t$ into the two subvectors ${\mathbf y}_{t,1}, \; {\mathbf y}_{t,2}$, leading to the two different configurations $ {\mathbf x}_t$ and $ {\mathbf x}_t^\star$ in (\ref{f2}), we apply the prediction procedure in Section \ref{sec422} below to each of $ {\mathbf x}_t$ and $ {\mathbf x}_t^\star$. Then the predictors for ${\mathbf y}_{n+j, 1}$ and ${\mathbf y}_{n+j,2}$ are defined as \begin{equation}\label{f12nn}
{\mathbf y}_{n,1}(j) = {\mathbf A}_1 {\mathbf x}_n(j), \qquad {\mathbf y}_{n,2}(j) = {\mathbf A}_2 {\mathbf x}_n^\star(j), \end{equation} where ${\mathbf x}_n(j)$ is the predictor for ${\mathbf x}_{n+j}$, and $ {\mathbf x}_n^\star(j)$ is the predictor for $ {\mathbf x}_{n+j}^\star$. In practice, ${\mathbf A}_i, {\mathbf x}_t, {\mathbf x}_t^\star$ are replaced by their estimators defined in (\ref{f4}) and (\ref{f6}).
The predictors defined above depend on a single partition ${\mathcal S}_o= {\mathcal S}_1 \cup {\mathcal S}_2$. By repeating the random partitioning of ${\mathcal S}_o$ $J$ times, we may obtain aggregated predicted values for $y_{n+j}({\mathbf s}_i)$ in the same manner as in (\ref{f8}).
Since $\xi_t({\mathbf s}_1), \cdots, \xi_t({\mathbf s}_p)$ are correlated with each other, we should not model $\xi_t$ at each location separately. Instead, modelling the factor process ${\mathbf x}_t$ captures the temporal dynamics much more parsimoniously.
An alternative approach, not pursued here, would be to build a dynamic model for ${\mathbf x}_t$, leading to model-based forecasts. For example, Cressie, Shi and Kang (2010) adopted a Gaussian AR(1) specification for the latent process and facilitated the forecasting by a Kalman filter.
\subsubsection{Predicting ${\mathbf x}_{n+j}$ and ${\mathbf x}_{n+j}^\star$} \label{sec422}
We only state the method for predicting ${\mathbf x}_{n+j}$; it applies to predicting ${\mathbf x}_{n+j}^\star$ in exactly the same manner.
Let ${\mathbf X}' = ({\mathbf x}_n', \cdots, {\mathbf x}_{n-j_0}')$, \begin{equation} \label{d11} {\mathbf W}_k \equiv \mbox{Var}\left( \begin{array}{l} {\mathbf x}_t\\ {\mathbf x}_{t-1}\\ \vdots\\ {\mathbf x}_{t-k} \end{array} \right) = \left( \begin{array}{llll} \boldsymbol{\Sigma}_x(0) & \boldsymbol{\Sigma}_x(1) & \cdots & \boldsymbol{\Sigma}_x(k) \\ \boldsymbol{\Sigma}_x(1)' & \boldsymbol{\Sigma}_x(0) & \cdots & \boldsymbol{\Sigma}_x(k-1) \\ & \cdots& \cdots& \\ \boldsymbol{\Sigma}_x(k)' & \boldsymbol{\Sigma}_x(k-1)'& \cdots &\boldsymbol{\Sigma}_x(0) \end{array} \right), \quad k\ge 0, \end{equation} \[ {\mathbf R}_{j_0}\equiv \left( \boldsymbol{\Sigma}_x(j), \boldsymbol{\Sigma}_x(j+1),
\cdots, \boldsymbol{\Sigma}_x(j+j_0) \right), \] where $\boldsymbol{\Sigma}_x(k) = {\rm Cov}({\mathbf x}_{t+k}, {\mathbf x}_t)$. By Lemma 1, the best linear predictor for ${\mathbf x}_{n+j}$ is \[ {\mathbf x}_{n}(j) = {\mathbf R}_{j_0} {\mathbf W}_{j_0}^{-1} {\mathbf X}. \] The key is to be able to calculate the inverse of $(j_0+1)d \times (j_0+1)d$ matrix ${\mathbf W}_{j_0}$. This can be done by calculating ${\mathbf W}_0^{-1}, {\mathbf W}_1^{-1}, \cdots $ recursively based on \begin{equation} \label{d12} {\mathbf W}_{k+1}^{-1} = \Big( \begin{array}{ll} {\mathbf W}_k^{-1} + {\mathbf W}_k^{-1} {\mathbf U}_k {\mathbf V}_k {\mathbf U}_k' {\mathbf W}_k^{-1} & - {\mathbf W}_k^{-1}{\mathbf U}_k {\mathbf V}_k\\ - {\mathbf V}_k {\mathbf U}_k' {\mathbf W}_k^{-1} & {\mathbf V}_k \end{array} \Big), \end{equation} where $$ {\mathbf U}_k' = ( \boldsymbol{\Sigma}_x(k+1)', \cdots, \boldsymbol{\Sigma}_x(1)'), \qquad {\mathbf V}_k = (\boldsymbol{\Sigma}_x(0) -{\mathbf U}_k'{\mathbf W}_k^{-1}{\mathbf U}_k)^{-1}.$$ See (\ref{d1}). Note only $d\times d$ inverse matrices are involved in this recursion.
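A minimal Python sketch of the recursion is given below; \texttt{Sigma\_x} is assumed to be the list of $d\times d$ autocovariance matrices $\boldsymbol{\Sigma}_x(0), \cdots, \boldsymbol{\Sigma}_x(j_0)$, and the function name is illustrative only.
\begin{verbatim}
import numpy as np

def recursive_W_inverse(Sigma_x, j0):
    """Inverse of W_{j0} in (d11) via the recursion (d12).

    Sigma_x : list [Sigma_x(0), Sigma_x(1), ..., Sigma_x(j0)] of d x d
              autocovariance matrices Cov(x_{t+k}, x_t).
    """
    W_inv = np.linalg.inv(Sigma_x[0])                        # W_0^{-1}
    for k in range(j0):
        # U_k stacks Sigma_x(k+1), ..., Sigma_x(1) vertically
        U = np.vstack([Sigma_x[k + 1 - i] for i in range(k + 1)])
        V = np.linalg.inv(Sigma_x[0] - U.T @ W_inv @ U)      # only a d x d inverse
        WU = W_inv @ U
        W_inv = np.block([[W_inv + WU @ V @ WU.T, -WU @ V],
                          [-V @ WU.T,             V]])
    return W_inv
\end{verbatim}
Each step forms and inverts only the $d\times d$ matrix ${\mathbf V}_k$, in line with the remark above.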
In practice we replace $\boldsymbol{\Sigma}_x(k)$ in ${\mathbf R}_{j_0}$ and ${\mathbf W}_{j_0}$ by $ \widehat \boldsymbol{\Sigma}_x(k) = \widehat {\mathbf A}_1'\widehat \boldsymbol{\Sigma}_{y,1}(k) \widehat {\mathbf A}_1, $ and replace ${\mathbf X}$ by \[ \widehat {\mathbf X} = ({\mathbf y}_{n,1}'\widehat{\mathbf A}_1, \cdots, {\mathbf y}_{n-j_0,1}'\widehat{\mathbf A}_1)', \] where \[ \widehat \boldsymbol{\Sigma}_{y,1}(k)= {1 \over n} \sum_{t=1}^{n-k}({\mathbf y}_{t+k, 1} - \bar {\mathbf y}_1) ({\mathbf y}_{t, 1} - \bar {\mathbf y}_1)' , \qquad \bar {\mathbf y}_1 = {1 \over n} \sum_{t=1}^n {\mathbf y}_{t,1}. \] The resulting predictor for ${\mathbf x}_{n+j}$ is denoted by $\widehat {\mathbf x}_{n}(j)$.
We may define $\widehat {\mathbf x}_{n}^\star(j)$ in the same manner as $\widehat {\mathbf x}_{n}(j)$ with $({\mathbf y}_{t,1}, \widehat {\mathbf A}_1)$ replaced by $({\mathbf y}_{t,2}, \widehat {\mathbf A}_2)$.
Consequently, the practically feasible predictor for ${\mathbf y}_{n+j}$ is defined by the two formulas \begin{equation} \label{f12n} \widehat {\mathbf y}_{n,1}(j) = \widehat {\mathbf A}_1 \widehat {\mathbf x}_{n}(j), \qquad \widehat {\mathbf y}_{n,2}(j) = \widehat {\mathbf A}_2 \widehat {\mathbf x}_{n}^\star(j); \end{equation} see (\ref{f12nn}).
\subsection{Handling missing values} \label{sec43}
It is not uncommon that a large data set contains some missing values. We assume that the number of missing values is small in the sense that the number of the available observations at each given time $t$ is of the order $p$, and the number of the available observations at each location $s_i$ is of the order $n$. We outline below how to apply the proposed method when some observations are missing.
First for $\boldsymbol{\Sigma}\equiv ( \sigma_{ij})$ defined in (\ref{f5n}), we may estimate each $\sigma_{ij}$ separately using all the available pairs $(y_{t,i}^1, y_{t,j}^2)$ with $1\le t \le n$, where $y_{t,i}^{\ell}$ denotes the $i$-th element of ${\mathbf y}_{t, \ell}$, $\ell =1, 2$. With the estimated $\widehat \boldsymbol{\Sigma}$, we may derive the estimators $\widehat {\mathbf A}_1, \, \widehat {\mathbf A}_2$ as in (\ref{f4}).
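A minimal Python sketch of this available-case covariance estimation is given below, with missing values encoded as \texttt{NaN}; the function name is illustrative, and the sketch assumes that sufficiently many complete pairs are available for every pair of locations.
\begin{verbatim}
import numpy as np

def pairwise_cov(Y1, Y2):
    """Entry-wise covariance estimate using all available (non-missing) pairs.

    Y1, Y2 : n x p1 and n x p2 arrays with np.nan marking missing values.
    Returns the p1 x p2 matrix of pairwise covariances (an estimate of Sigma).
    """
    p1, p2 = Y1.shape[1], Y2.shape[1]
    S = np.empty((p1, p2))
    for i in range(p1):
        for j in range(p2):
            ok = ~np.isnan(Y1[:, i]) & ~np.isnan(Y2[:, j])   # complete pairs only
            a, b = Y1[ok, i], Y2[ok, j]
            S[i, j] = np.mean((a - a.mean()) * (b - b.mean()))
    return S
\end{verbatim}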
For simplicity of notation, suppose that $y_1({\mathbf s}_1)$ is missing. Let ${\mathbf y}_1^a$ denote all the available observations at time $t=1$. By Lemma 1, the kriging predictor for $y_1({\mathbf s}_1)$ is \begin{equation} \label{xx1} \widehat y_1({\mathbf s}_1) = {\rm Cov}(y_1({\mathbf s}_1), {\mathbf y}_1^a) \{\mbox{Var}({\mathbf y}_1^a) \}^{-1} {\mathbf y}_1^a. \end{equation} We may estimate ${\rm Cov}(y_1({\mathbf s}_1), {\mathbf y}_1^a)$ and $\mbox{Var}({\mathbf y}_1^a)$ in the same manner as for estimating $\boldsymbol{\Sigma}$ described above. Replacing all the missing values with their kriging estimates, we may proceed with the estimation of $ \widehat \xi_t({\mathbf s}_j)$ and $\widetilde \xi_t({\mathbf s}_j)$ as in Sections \ref{sec31} and \ref{sec32}.
\section{Asymptotic properties} \label{sec5}
In this section, we investigate the asymptotic properties of the proposed methods. For any matrix ${\mathbf M}$, let $||{\mathbf M}||_{\min}=\sqrt{\lambda_{\min}({\mathbf M}{\mathbf M}')}$ and $||{\mathbf M}||=\sqrt{\lambda_{\max}({\mathbf M}{\mathbf M}')}$, where
$\lambda_{\min}$ and $\lambda_{\max}$ denote, respectively, the minimum and the maximum eigenvalue. When ${\mathbf M}$ is a vector, $||{\mathbf M}||$ reduces to its Euclidean norm.
\subsection{On latent finite-dimensional structures} \label{sec51}
We state in this subsection some asymptotic results
on the estimation of the factor loading spaces ${\mathcal M}({\mathbf A}_1)$ and ${\mathcal M}({\mathbf A}_2)$. They pave the way for establishing the properties of the kriging estimation presented in Section \ref{sec52} below. Proposition \ref{them52} below is similar to those in Lam and Yao (2012) and Chang \mbox{\sl et al.\;} (2015), but with the extra feature that the graph Laplacian is incorporated in order to preserve the continuity over space.
Nevertheless, its proof is similar and is therefore omitted.
For any two $k\times d$ orthogonal matrices ${\mathbf B}_1$ and $ {\mathbf B}_2$ with ${\mathbf B}_1' {\mathbf B}_1={\mathbf B}_2' {\mathbf B}_2={\mathbf I}_d$, we measure the distance between the two linear spaces ${\mathcal M}({\mathbf B}_1)$ and ${\mathcal M}({\mathbf B}_2)$ by \begin{equation} \label{5.1} D(\mathcal{M}({\mathbf B}_1), \mathcal{M}({\mathbf B}_2))=\sqrt{1-{1\over d}\mathrm{tr}({\mathbf B}_1{\mathbf B}_1' {\mathbf B}_2 {\mathbf B}_2')}. \end{equation} It can be shown that $D(\mathcal{M}({\mathbf B}_1), \mathcal{M}({\mathbf B}_2)) \in [0, 1]$, being 0 if and only if ${\mathcal{M}}({\mathbf B}_1) ={\mathcal{M}}({\mathbf B}_2)$, and 1 if and only if ${\mathcal{M}}({\mathbf B}_1)$ and ${\mathcal{M}}({\mathbf B}_2)$ are orthogonal.
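For reference, the distance (\ref{5.1}) can be computed in a few lines; the Python sketch below assumes that the columns of the two input matrices are orthonormal, and the function name is illustrative only.
\begin{verbatim}
import numpy as np

def subspace_distance(B1, B2):
    """Distance (5.1) between the column spaces of two k x d matrices
    with orthonormal columns (B1'B1 = B2'B2 = I_d)."""
    d = B1.shape[1]
    val = 1.0 - np.trace(B1 @ B1.T @ B2 @ B2.T) / d
    return np.sqrt(max(val, 0.0))   # guard against tiny negative round-off
\end{verbatim}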
We introduce some regularity conditions first. Put \[ {\mathbf y}_t = ( y_t({\mathbf s}_1), \cdots, y_t({\mathbf s}_p))', \qquad {\mathbf Z}_t = ( {\mathbf z}_t({\mathbf s}_1), \cdots, {\mathbf z}_t({\mathbf s}_p)). \]
\begin{quote}
\noindent {\bf Condition 1}. $\{ ({\mathbf y}_t, {\mathbf Z}_t), \, t=0, \pm 1, \pm 2, \cdots\} $ is a strictly stationary and $\alpha$-mixing process with $\max_{1\leq i\leq p}[\mathrm{E}|y_t({\mathbf s}_i)|^{\gamma}+\mathrm{E}||{\mathbf z}_t({\mathbf s}_i)||^{\gamma}]<\infty$ for some $\gamma> \max\{\beta, 4\}, \, \beta>2$ and the $\alpha$-mixing coefficients $\alpha_m$ satisfying the condition \begin{eqnarray} \label{5.2}\alpha_m=O(m^{-\theta}) \quad \hbox{for some}\quad \theta> {\gamma \beta/(\gamma-\beta)}.\end{eqnarray} Further, $\min_{1\leq i\leq p}\lambda_{\min}(\mathrm{Var}({\mathbf z}_t({\mathbf s}_i)))>c_0$ for some positive constant $c_0$.
\noindent {\bf Condition 2.} Let $\boldsymbol{\Sigma}_x = {\rm Cov}({\mathbf x}_t, {\mathbf x}_t^\star)$, where ${\mathbf x}_t$ and ${\mathbf x}_t^\star$ are defined in (\ref{f2}). There exists a constant $\delta\in [0, 1]$ for which
$||\boldsymbol{\Sigma}_x||_{\min}\asymp ||\boldsymbol{\Sigma}_x||\asymp p^{1-\delta}$.
\end{quote}
Constant $\delta$ in Condition 2 reflects the strength of factors. Intuitively a strong factor is linked with most components of ${\mathbf y}_{t,1}$ and ${\mathbf y}_{t,2}$, implying that the corresponding coefficients in ${\mathbf A}_1$ or ${\mathbf A}_2$ are non-zero. Therefore it is relatively easy to recover those strong factors from the observations. Unfortunately the mathematical definition of the factor strength is tangled with the standardization condition ${\mathbf A}_1' {\mathbf A}_1 ={\mathbf A}_2' {\mathbf A}_2= {\mathbf I}_d$. See Remark 1(i) of Lam and Yao (2012), and Lemma 1 of Lam \mbox{\sl et al.\;} (2011).
To simplify the presentation, Condition 2 assumes that all the factors in (\ref{f2}) are of the same strength which is measured by a constant $\delta \in [0, 1]$: $\delta=0$ indicates that the strength of the factors is at its strongest, and $\delta=1$ corresponds to the weakest factors.
\begin{proposition} \label{them52} Let Conditions 1 and 2 hold, and $p^{\delta}n^{-1/2}+p n^{-\beta/2}+p^{2\delta-2}\tau \|{\mathbf L}\| \to 0$ as $n\to \infty$. Then \begin{itemize}
\item [(i)] $|\widehat{\lambda}_i-\lambda_i|=O_p(p^{2-\delta}n^{-1/2}+\tau \|{\mathbf L}\|)$ for $1 \le i \le d,$
\item [(ii)] $|\widehat{\lambda}_i|=O_p(p^2n^{-1}+\tau \|{\mathbf L}\|)$ for $d<i\leq p$, and
\item [(iii)] $D(\mathcal{M}(\widehat{\mathbf A}_i), \mathcal{M}({\mathbf A}_i)) = O_p(p^\delta n^{-1/2}+
p^{2\delta-2}\tau \|{\mathbf L}\|)$ ($ i=1, 2$), provided that $d$ is known.
\end{itemize}
\end{proposition}
\begin{remark} \label{remark2} (i)
Proposition \ref{them52} indicates that stronger factors result in a better estimation for the factor loading spaces, and, consequently,
a better recovery of the factor process. This is due to the fact that $\lambda_d - \lambda_{d+1}$ increases as $\delta$ decreases, where $\lambda_i$ denotes the $i$-th largest eigenvalue of $\boldsymbol{\Sigma} \boldsymbol{\Sigma}'$, and $\boldsymbol{\Sigma}$ is defined in (\ref{f5n}). Especially with the strongest factors (i.e. $\delta=0$), $D(\mathcal{M}(\widehat{\mathbf A}_i), \mathcal{M}({\mathbf A}_i))$
attains the standard error rate $n^{-1/2}+p^{-2}\tau\|{\mathbf L}\|$. This phenomenon is coined the `blessing of dimensionality' in Lam and Yao (2012).
(ii) Proposition \ref{them52}(iii) can be made adaptive to unknown $d$; see Remark 5 of Bathia \mbox{\sl et al.\;} (2010). See also Theorem 2.4 of Chang \mbox{\sl et al.\;} (2015) on how to make $\widehat d $ defined in (\ref{c5}) be a consistent estimator for $d$.
(iii) The condition $p^{2\delta-2}\tau \|{\mathbf L}\|\rightarrow 0$ in Proposition \ref{them52}
controls the perturbation between $\widehat{\mathbf A}$ and ${\mathbf A}$, which is implied by either $p^{2\delta-1}\tau \to 0$ (as $\|{\mathbf L}\|\le p$) or $\|{\mathbf L}\|\le C$ and $p^{2\delta-2}\tau \to 0$.
By the perturbation theory (Theorem 8.1.10 of Golub and Van Loan, 1996), the bound $||\widehat{\mathbf A}-{\mathbf A}||$ depends on $||\widehat\boldsymbol{\Sigma} \widehat\boldsymbol{\Sigma}'+\tau
{\mathbf L}-\boldsymbol{\Sigma}\boldsymbol{\Sigma}'||$, which is bounded from above by
$ ||\widehat\boldsymbol{\Sigma} \widehat\boldsymbol{\Sigma}'-\boldsymbol{\Sigma}\boldsymbol{\Sigma}'||+\tau||{\mathbf L}||.$ This leads to the upper bound of (iii) in Proposition \ref{them52}.
\end{remark}
\subsection{On kriging} \label{sec52}
We now consider the asymptotic properties for the kriging methods proposed in Section \ref{sec4}. To simplify the presentation, we always assume that $d$ is known. We introduce some regularity conditions first.
\begin{quote} \noindent {\bf Condition 3}. The kernel $K(\cdot)$ is a symmetric density function on $\EuScript R^2$ with a bounded support. \end{quote}
\begin{quote} \noindent {\bf Condition 4}. In (\ref{c2}) $ \bbeta(\cdot)$ and $
a_j(\cdot)/||{\mathbf a}({\mathbf s}_0)||, \, j=1, \cdots, d$, are twice continuously differentiable and bounded functions on ${\mathcal S}$, where ${\mathbf a}({\mathbf s}_0)=(a_1({\mathbf s}_0), \cdots, a_d({\mathbf s}_0)).$ \end{quote}
\begin{quote} \noindent {\bf Condition 5}. There exists a positive and continuously differentiable sampling intensity $f({\mathbf s})$ on ${\mathcal S}$ such that as $p\rightarrow \infty,$ \begin{eqnarray} {1\over p}\sum_{j=1}^p I({\mathbf s}_j \in A) =\int_{A} f({\mathbf s}) \, d{\mathbf s}\,(1+o(1))\nonumber\end{eqnarray} holds for any measurable set $A\subset {\mathcal S}$. \end{quote}
Theorem \ref{th3} below presents the asymptotic properties of the two spatial kriging methods in (\ref{f10new}) and (\ref{f12}). Since $$E[\{ \widehat y_t^r({\mathbf s}_0) - y_t({\mathbf s}_0) \}^2] =E[\{ \widehat y_t^r({\mathbf s}_0) -\xi_t({\mathbf s}_0)\}^2] + \mbox{Var}({\varepsilon}_t({\mathbf s}_0)),$$ it is more relevant to measure the difference between a predictor and $\xi_t({\mathbf s}_0)$ directly.
\begin{theorem} \label{th3} Let bandwidth $h\rightarrow 0, \, ph\rightarrow\infty$ and $p^{\delta}n^{-1/2}+p n^{-\beta/2}+p^{2\delta-2}\tau \|{\mathbf L}\| \to 0$ as $n\to \infty$. It holds under Conditions 1--5 that
$$\max\{|\widehat y^r_t({\mathbf s}_0)-\xi_t({\mathbf s}_0)|, \, |\widetilde y^r_t({\mathbf s}_0)-\xi_t({\mathbf s}_0)|\}=O_p\{h^2+p^\delta (nh)^{-1/2}+(ph)^{-1/2}+p^{2\delta-2}h^{-1/2}\tau\|{\mathbf L}\|\}.$$
\end{theorem}
Theorem \ref{th4} below considers the convergence rates for the kriging predictions in time. Recall $\widehat {\mathbf y}_{n,1}(j), \; \widehat {\mathbf y}_{n,2}(j), \; \widehat {\mathbf x}_{n}(j)$ and $\widehat {\mathbf x}^\star_{n}(j)$ as defined in (\ref{f12n}).
\begin{theorem} \label{th4} Let Conditions 1 and 2 hold. As $n, p \to \infty $ and
$ p^{\delta/2}(p^{\delta}n^{-1/2}+p^{2\delta-2}\tau \|{\mathbf L}\|) \to 0$, \begin{itemize}
\item[] (a) $p^{-{1\over 2}}||\widehat {\mathbf x}_{n}(j)-{\mathbf x}_{n}(j)||=O_p(p^{\delta}n^{-1/2}+p^{2\delta-2}\tau\|{\mathbf L}\|+p^{-{1\over 2}}),$ \\
$p^{-{1\over 2}}||\widehat {\mathbf x}_{n}^\star(j)-{\mathbf x}^\star_{n}(j)||=O_p(p^{\delta}n^{-1/2}+p^{2\delta-2}\tau\|{\mathbf L}\|+p^{-{1\over 2}})$, and
\item[] (b) $
p^{-{1\over 2}} ||\widehat {\mathbf y}_{n, i}(j)- {\mathbf y}_{n,i}(j)||=O_p(p^{\delta}n^{-1/2}+p^{2\delta-2}\tau\|{\mathbf L}\|+p^{-{1\over 2}})$ for $i=1,2$. \end{itemize} \end{theorem}
Theorems \ref{th3} and \ref{th4} indicate that stronger factors result in better predictions. See also Remark \ref{remark2}(i) above.
\section{Numerical properties}\label{sec6}
We illustrate the finite sample properties of the proposed methods via both simulated and real data.
\subsection{Simulation}
For simplicity, we let ${\mathbf s}_1, \cdots, {\mathbf s}_p$ be drawn randomly from the uniform distribution on $[-1, 1]^2$ and $y_t({\mathbf s}_i)$ be generated from (\ref{c2}) in which $d=3$, ${\varepsilon}_t({\mathbf s})$ are independent and standard normal, and \[ a_1({\mathbf s}) = s_1/2, \qquad a_2({\mathbf s}) = s_2/2, \qquad a_3({\mathbf s}) = (s_1^2+ s_2^2)/2, \] \[ x_{t1} = -0.8 x_{t-1,1} + e_{t1}, \qquad x_{t2} = e_{t2} - 0.5 e_{t-1,2}, \qquad x_{t3} = -0.6 x_{t-1,3} + e_{t3} + 0.3e_{t-1,3}. \] In the above expressions, $e_{ti}$ are independent and standard normal. The signal-to-noise ratio, defined as $$ \frac{\int_{{\mathbf s}\in[-1,1]^2} \sqrt{\mbox{Var}(\xi_t({\mathbf s}))} d{\mathbf s} } {\int_{{\mathbf s}\in[-1,1]^2} \sqrt{\mbox{Var}(\varepsilon_t({\mathbf s}))} d{\mathbf s}}, $$ is about 0.72.
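A minimal Python sketch of this data-generating process is given below for concreteness; it merely re-implements the design just described, and the burn-in length and seed handling are implementation choices rather than part of the design.
\begin{verbatim}
import numpy as np

def simulate(n, p, rng=None):
    """Sketch of the simulation design: d = 3 factors, loadings a_j(s),
    AR(1)/MA(1)/ARMA(1,1) factor dynamics, i.i.d. N(0,1) nugget noise."""
    rng = np.random.default_rng() if rng is None else rng
    S = rng.uniform(-1.0, 1.0, size=(p, 2))                   # locations s_1,...,s_p
    A = np.column_stack([S[:, 0] / 2, S[:, 1] / 2,
                         (S[:, 0] ** 2 + S[:, 1] ** 2) / 2])  # p x 3 loading matrix
    burn = 100
    e = rng.standard_normal((n + burn, 3))
    x = np.zeros((n + burn, 3))
    for t in range(1, n + burn):
        x[t, 0] = -0.8 * x[t - 1, 0] + e[t, 0]                      # AR(1)
        x[t, 1] = e[t, 1] - 0.5 * e[t - 1, 1]                       # MA(1)
        x[t, 2] = -0.6 * x[t - 1, 2] + e[t, 2] + 0.3 * e[t - 1, 2]  # ARMA(1,1)
    x = x[burn:]
    xi = x @ A.T                                              # latent signal xi_t(s_j)
    Y = xi + rng.standard_normal((n, p))                      # y_t(s_j) = xi + eps
    return S, A, x, xi, Y
\end{verbatim}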
With $n=80, 160$ or $320$, and $p=50, 100$, or $200$, we draw 100 samples from each setting. With each sample, we calculate $\widehat d$ as in (\ref{c5}), and the factor loadings $\widehat {\mathbf A}_1$ and $ \widehat {\mathbf A}_2$ as in (\ref{f4}). For the latter, we choose the tuning parameter $\tau$ over 101 grid points between 0 and 10 by five-fold cross-validation: we divide ${\mathbf s}_1, \cdots, {\mathbf s}_p$ into 5 groups of the same size. Each time we use the data at the locations in four groups for estimation, and predict the values at the locations in the remaining group by spatial kriging (\ref{f10new}). We use the Gaussian kernel in (\ref{f10n}) with the bandwidth $h$ selected by the leave-one-out cross-validation method.
As the estimated value $\widehat d$ may not always be equal to $d$, and ${\mathbf A}_1, \, {\mathbf A}_2$ are not half-orthogonal matrices in the model specified above, we extend the distance measure for two linear spaces (\ref{5.1}) as follows: \[ D({\mathcal M}(\widehat{\mathbf A}_i ), {\mathcal M}({\mathbf A}_i)) = \Big( 1 - {1 \over \max( d, \widehat d) } \mbox{tr}\{\widehat {\mathbf A}_i \widehat {\mathbf A}_i' {\mathbf A}_i ({\mathbf A}_i'{\mathbf A}_i)^{-1} {\mathbf A}_i' \} \Big)^{1/2}. \] It can be shown that $D({\mathcal M}(\widehat{\mathbf A}_i ), {\mathcal M}({\mathbf A}_i)) \in [0, 1]$, being 0 if and only if ${\mathcal M}(\widehat{\mathbf A}_i ) = {\mathcal M}({\mathbf A}_i)$, and 1 if and only if ${\mathcal M}(\widehat{\mathbf A}_i)$ and $ {\mathcal M}({\mathbf A}_i)$ are orthogonal. It reduces to (\ref{5.1}) when $\widehat d = d$ and ${\mathbf A}_i'{\mathbf A}_i = {\mathbf I}_d$.
Fig.\ref{figure_distanceA} depicts the boxplots of the average distance \[ \frac{1}{2} \{ D({\mathcal M}(\widehat{\mathbf A}_1 ), {\mathcal M}({\mathbf A}_1)) + D({\mathcal M}(\widehat{\mathbf A}_2 ), {\mathcal M}({\mathbf A}_2)) \} \] over 100 replications under different settings. As expected, the errors in estimating ${\mathcal M}({\mathbf A}_1)$ and ${\mathcal M}({\mathbf A}_2)$ decrease as $n$ increases. Perhaps more interesting is the phenomenon that the estimation errors do not increase as the number of locations $p$ increases. Note that the three factors specified in the above model are all strong factors. According to Proposition \ref{them52}(iii), $D({\mathcal M}(\widehat {\mathbf A}_i),
{\mathcal M}({\mathbf A}_i))= O_p(n^{-1/2} + \tau \|{\mathbf L}\|/p^2)$ when $\delta=0$. See also Remark \ref{remark2}(i). Fig.\ref{figure_distanceA} also shows that the estimation errors with $p=50$ are noticeably greater than those with $p=100, 200$. This is due to greater errors in estimating $d$ with smaller $p$; see Table~\ref{table_kriging} below. Note that Proposition \ref{them52}(iii) assumes $d$ known.
\begin{figure}
\caption{Boxplots of the average distance $\frac{1}{2}\{D({\mathcal M}(\widehat{\mathbf A}_1), {\mathcal M}({\mathbf A}_1)) + D({\mathcal M}(\widehat{\mathbf A}_2), {\mathcal M}({\mathbf A}_2))\}$ in a simulation with 100 replications.}
\label{figure_distanceA}
\end{figure}
Fig.\ref{figure_mse_kriging} presents the boxplots of \begin{eqnarray}\label{eqn_mse_xi} {\rm MSE}(\widehat \xi ) = {1 \over n p} \sum_{t=1}^n\sum_{j=1}^p \{ \widehat \xi_t({\mathbf s}_j) - \xi_t({\mathbf s}_j) \}^2, \quad {\rm MSE}(\widetilde \xi ) = {1 \over n p} \sum_{t=1}^n\sum_{j=1}^p \{ \widetilde \xi_t({\mathbf s}_j) - \xi_t({\mathbf s}_j) \}^2, \end{eqnarray} where $\widehat \xi_t({\mathbf s}_j)$ and $\widetilde \xi_t({\mathbf s}_j) $ are defined in, respectively, (\ref{f7}) and (\ref{f8}). We set $J=100$ for the aggregation estimates $\widetilde \xi_t({\mathbf s}_j) $.
As shown by Theorem \ref{prop2}, $\widetilde \xi_t( {\mathbf s}_j)$ always provides a more accurate estimate of $\xi_t( {\mathbf s}_j)$ than $\widehat \xi_t( {\mathbf s}_j)$. Furthermore, the MSE decreases as either $n$ or $p$ increases.
\begin{figure}
\caption{Boxplot of MSE$(\widehat \xi )$ (red) and MSE$(\widetilde \xi )$ (blue) in a simulation with 100 replications.}
\label{figure_mse_kriging}
\end{figure}
Note that estimating ${\mathbf A}_1, {\mathbf A}_2$ with $\tau>0$ makes use of the continuity of the loading functions $a_i(\cdot)$. Table \ref{table_cv} lists the means and the standard errors, over 100 replications, of MSE$(\widehat \xi )$ with $\widehat \xi_t({\mathbf s}_j)$ calculated using either $\tau$ selected by the five-fold cross-validation (i.e. $\tau>0$) or $\tau=0$. The improvement from using the continuity is more pronounced when $n$ and $p$ are small.
\begin{table}
\begin{quote}
\caption{Means and standard errors (in parentheses) of MSE$(\widehat \xi )$ with $\widehat \xi_t({\mathbf s}_i)$
calculated using either $\tau>0$ selected by
five-fold cross-validation or $\tau=0$. } \label{table_cv}
\end{quote}
\centering
\begin{tabular}{rr|cc}
\hline\hline
$n$ & $p$ & $\tau>0$ & $\tau=0$ \\\hline
80 & 50 & 0.0941(0.0347) & 0.1139 (0.0429)\\
160 & 50 & 0.0665(0.0164) & 0.0795 (0.0250)\\
320 & 50 & 0.0585(0.0076) & 0.0631 (0.0126)\\
80 & 100 & 0.0243(0.0157) & 0.0279 (0.0183)\\
160 & 100 & 0.0158(0.0055) & 0.0168 (0.0073)\\
320 & 100 & 0.0146(0.0011) & 0.0150 (0.0012)\\
80 & 200 & 0.0056(0.0050) & 0.0064 (0.0058)\\
160 & 200 & 0.0039(0.0002) & 0.0039 (0.0003)\\
320 & 200 & 0.0037(0.0002) & 0.0037 (0.0002)\\
\hline\hline
\end{tabular} \end{table}
To illustrate the kriging performance, with each sample we also draw an additional 50 `post-sample' data points at the
locations randomly drawn from $U[-1, 1]^2$. For each $t = 1, \cdots, n$, we calculate the spatial kriging estimate $\widehat y_t^r(\cdot)$ in (\ref{f10new}) at
each of the 50 post-sample locations. The mean squared predictive error is computed as \begin{eqnarray}\label{eqn_mspe_kis} {\rm MSPE}(\widehat y^r)= {1 \over 50 n } \sum_{t=1}^n \sum_{{\mathbf s}_0 \in {\mathcal S}^* } \{ \widehat y_t^r({\mathbf s}_0) - y_t({\mathbf s}_0) \}^2, \end{eqnarray} where $ {\mathcal S}^*$ is the set consisting of the 50 post-sample locations. Similarly, we repeat this exercise for $\widetilde y_t^r(\cdot)$ in (\ref{f12}). To check the performance of the kriging in time, we also generate two post-sample surfaces at times $n+1$ and $n+2$ for each sample. The mean squared predictive error (${\rm MSPE}$) is calculated as follows. \begin{eqnarray} {\rm MSPE}(\widehat y^r_{n+\ell})= \frac{1}{p} \sum_{j=1}^p \{ \widehat y_{n+\ell}^r({\mathbf s}_j) - y_{n+\ell}({\mathbf s}_j) \}^2 , \qquad \ell =1, 2. \end{eqnarray} We repeat the above exercise for the aggregation estimator $\widetilde{y}^r_{n+\ell}$ with $J=100$.
\begin{table}
\caption{Means of $\widehat d$, means and standard errors (in parentheses) of MSPE for kriging in space and time.}\label{table_kriging}
\resizebox{\textwidth}{!}{
\begin{tabular}{rr|c|cc|cccc}
\hline\hline
\multicolumn{2}{r}{} & \multicolumn{1}{c}{} & \multicolumn{2}{c}{Kriging over Space} &\multicolumn{4}{c}{Kriging in Time}\\\hline
$n$ & $p$ & $\widehat{d}$ & ${\rm MSPE}(\widehat{y}^r_t)$ & ${\rm MSPE}(\tilde{y}^r_t)$ & ${\rm MSPE}(\widehat{y}^r_{t+1})$ & ${\rm MSPE}(\tilde{y}^r_{t+1})$ & ${\rm MSPE}(\widehat{y}^r_{t+2})$ & ${\rm MSPE}(\tilde{y}^r_{t+2})$\\\hline
80 & 50 & 2.03 & 1.1893(0.1094) & 1.1763(0.1030) & 1.6300(0.5402) & 1.5660(0.5225) & 1.7856(0.8940) & 1.6876(0.8237)\\
160 & 50 & 2.76 & 1.1119(0.0553) & 1.1016(0.0467) & 1.3765(0.4160) & 1.3346(0.3952) & 1.4795(0.4749) & 1.4599(0.4754)\\
320 & 50 & 2.98 & 1.1004(0.0243) & 1.0888(0.0209) & 1.5073(0.5175) & 1.4699(0.4932) & 1.6132(0.7828) & 1.5855(0.7640)\\
80 & 100 & 2.62 & 1.0829(0.0765) & 1.0804(0.0735) & 1.5037(0.4135) & 1.4354(0.3904) & 1.8469(0.7127) & 1.7680(0.6564)\\
160 & 100 & 2.97 & 1.0509(0.0283) & 1.0455(0.0255) & 1.4701(0.4359) & 1.4244(0.4119) & 1.6118(0.5449) & 1.5866(0.5357)\\
320 & 100 & 3.00 & 1.0462(0.0141) & 1.0412(0.0139) & 1.3541(0.3580) & 1.3290(0.3410) & 1.6301(0.6608) & 1.6137(0.6555)\\
80 & 200 & 2.88 & 1.0411(0.0484) & 1.0368(0.0457) & 1.5157(0.4376) & 1.4884(0.4297) & 1.8312(0.7495) & 1.7954(0.7220)\\
160 & 200 & 3.00 & 1.0238(0.0146) & 1.0221(0.0147) & 1.4471(0.4120) & 1.4326(0.4211) & 1.6841(0.5954) & 1.6721(0.5910)\\
320 & 200 & 3.00 & 1.0225(0.0122) & 1.0204(0.0121) & 1.4006(0.3285) & 1.3877(0.3299) & 1.5689(0.5047) & 1.5650(0.5111)\\
\hline\hline
\end{tabular}
} \end{table}
The means and the standard errors of the MSPE over the 100 replications for each setting are listed in Table \ref{table_kriging}. In general, the MSPE decreases as $n$ increases. For the kriging over space, the MSPE also decreases as $p$ increases. See also Theorem \ref{th3}, noting $\delta =0$ when all the factors are strong. The MSPEs of the kriging over space are smaller than those of the kriging in time, which is understandable from comparing Theorem \ref{th3} and Theorem \ref{th4}. The aggregated kriging always outperforms its non-aggregated counterpart. Last but not least, the ratio estimator (\ref{c5}) for $d$ works well for reasonably large $n$ and $p$.
\subsection{Real Data Analysis}
We illustrate the proposed methods with the monthly temperature records (in Celsius) at 128 monitoring stations in China from January 1970 to December 2000. All series are of length $n=372$. For each series, we remove the annual seasonal component by subtracting the average temperature of the same month across years. The distances among the stations are calculated as great-circle distances based on their longitudes and latitudes.
For kriging over space, we randomly select $p=78$ stations for estimation, and predict the values at the other 50 stations. The mean squared predictive error for the non-aggregation estimates (\ref{f10}) is calculated as follows: \[ {\rm MSPE}(\widehat y^r)= {1 \over 50 \times 372 } \sum_{t=1}^{372} \sum_{{\mathbf s}_0 \in {\mathcal S}^* } \big\{ \widehat y_t^r({\mathbf s}_0) - y_t({\mathbf s}_0)
\big\}^2. \] We also apply the aggregation estimator $\widetilde y_t(\cdot)$ (with $J=100$) in (\ref{f12}) to improve the kriging accuracy. To avoid sampling bias in selecting stations, we replicate this exercise 100 times by randomly dividing the 128 stations into two sets of sizes 78 and 50. The estimated $d$-values are equal to 1 in 98 replications, and to 2 in the other two replications. The means of the MSPE over the 100 replications for $\widehat y^r$ and $\widetilde y^r$ are 0.7787 and 0.7718, and the corresponding standard errors are 0.0335 and 0.0444, respectively. In the training step, the average cross-validation MSPE is 0.2407 with the selected $\tau>0$, and 0.2493 with $\tau=0$. Among the 100 replications, the selected optimal $\tau$ is strictly positive 93 times.
For kriging in time, we consider one-step-ahead and two-step-ahead post-sample prediction (with $j_0=6$) for all the 128 locations in each of the last 24 months in the data set. The corresponding mean squared predictive error at each step is defined as \[ {\rm MSPE}(\widehat y^r_{n+\ell})= {1 \over 128} \sum_{j=1}^{128} \big\{ \widehat y_{n+\ell}^r({\mathbf s}_j) - y_{n+\ell}({\mathbf s}_j) \big\}^2, \qquad \ell=1, 2. \] We also apply the aggregation estimator $\widetilde y^r_{n+\ell}(\cdot)$ with $J=100$. The means and standard errors of ${\rm MSPE}(\widehat y^r_{n+\ell})$ over the last 24 months are 1.7338 and 1.2581 for $\ell =1$, and 1.8814 and 1.4680 for $\ell=2$. Correspondingly, the means and standard errors of ${\rm MSPE}(\widetilde y^r_{n+\ell})$ are 1.7303 and 1.2583 for $\ell =1$, and 1.8802 and 1.4673 for $\ell=2$.
As we expected, the one-step-ahead prediction is more accurate than the two-step-ahead prediction.
Overall, the kriging over space is more accurate than the kriging in time. The aggregation via random partitioning of locations improves the prediction, though the improvement is not substantial in this example.
\noindent {\bf Acknowledgements}. We thank Professor Noel Cressie for helpful comments and suggestions.
\section*{References} \begin{description} \begin{singlespace}
\item Banerjee, S., Gelfand, A., Finley, A. O. and Sang, H. (2008). Gaussian predictive process models for large spatial data sets. {\sl Journal of the Royal Statistical Society}, {\bf B}, {\bf 70}, 825-848.
\item Bathia, N., Yao, Q. and Ziegelmann, F. (2010). Identifying the finite dimensionality of curve time series. {\sl The Annals of Statistics}, {\bf 38}, 3352-3386.
\item Breiman, L. (1996). Bagging predictors. {\sl Machine Learning}, {\bf 24}, 123-140.
\item Castruccio, S. and Stein, M. L. (2013). Global space-time models for climate ensembles. {\sl Annals of Applied Statistics}, \textbf{7}, 1593-1611.
\item Chang, J., Guo, B. and Yao, Q. (2015). High dimensional stochastic regression with latent factors, endogeneity and nonlinearity. {\sl Journal of Econometrics}, {\bf 189}, 297-312.
\item Cressie, N. and Johannesson, G. (2008). Fixed rank kriging for very large spatial data sets. {\sl Journal of the Royal Statistical Society}, {\bf B}, \textbf{70}, 209-226.
\item Cressie, N., Shi, T. and Kang, E.L. (2010). Fixed rank filtering for spatio-temporal data. {\sl Journal of Computational and Graphical Statistics}, {\bf 19}, 724-745.
\item Cressie, N. and Wikle, C. K. (2011). {\sl Statistics for Spatio-Temporal Data}. Wiley, Hoboken.
\item
Fan, J. and Gijbels, I. (1996). {\sl Local Polynomial Modelling and Its Applications}.
Chapman and Hall, London.
\item Finley, A., Sang, H., Banerjee, S. and Gelfand, A. (2009). Improving the performance of predictive process modeling for large datasets. {\sl Computational Statistics and Data Analysis}, {\bf 53}, 2873-2884.
\item Gneiting, T. (2002). Compactly supported correlation functions. {\sl Journal of Multivariate Analysis}, {\bf 83}, 493-508.
\item
Golub, G. and Van Loan, C. (1996). {\sl Matrix Computations (3rd edition)}. Johns Hopkins University Press, Baltimore.
\item Guinness, J. and Stein, M. L. (2013). Interpolation of nonstationary high frequency spatial-temporal temperature data. {\sl Annals of Applied Statistics}, \textbf{7}, 1684-1708.
\item Hall, P., Fisher, N. I., and Hoffmann, B. (1994). On the Nonparametric Estimation of Covariance Functions. {\sl The Annals of Statistics}, \textbf{22}, 2115-2134.
\item Hastie, T., Tibshirani, R. and Friedman, J. (2009). {\sl The Elements of Statistical Learning}. Springer, New York.
\item Higdon, D. (2002). Space and space-time modeling using process convolutions. In {\sl Quantitative Methods for Current Environmental Issues} (eds C. W. Anderson, V. Barnett, P. C. Chatwin and A. H. El-Shaarawi), pp. 37-54. London: Springer.
\item Jun, M. and Stein, M. L. (2007). An approach to producing space-time covariance functions on spheres. {\sl Technometrics}, \textbf{49}, 468-479.
\item Kammann, E. E. and Wand, M. P. (2003). Geoadditive models. {\sl Applied Statistics}, {\bf 52}, 1-18.
\item Katzfuss, M. and Cressie, N. (2011). Spatio-temporal smoothing and EM estimation for massive remote-sensing data sets. {\sl Journal of Time Series Analysis}, \textbf{32}, 430-446.
\item Kaufman, C., Schervish, M. and Nychka, D. (2008). Covariance tapering for likelihood-based estimation in large spatial data sets. {\sl Journal of the American Statistical Association}, {\bf 103}, 1545-1555.
\item Lam, C. and Yao, Q. (2012). Factor modelling for high-dimensional time series: inference for the number of factors. {\sl The Annals of Statistics}, {\bf 40}, 694-726.
\item Lam, C., Yao, Q. and Bathia, N. (2011). Estimation for latent factors for high-dimensional time series. {\sl Biometrika}, \textbf{98}, 901-918.
\item Li, B., Genton, M. G. and Sherman, M. (2007). A nonparametric assessment of properties of space-time covariance functions. {\sl Journal of the American Statistical Association}, \textbf{102}, 736-744.
\item
Lin, Z. and Lu, C. (1996). \textsl{Limit Theory on Mixing Dependent Random Variables}. Kluwer Academic Publishers, New York.
\item Mercer, J. (1909). Functions of positive and negative type and their connection with the theory of integral equations. {\sl Philosophical Transactions of the Royal Society A}, {\bf 209}, 415-446.
\item Sang, H. and Huang J. Z. (2012). A full-scale approximation of covariance functions for large spatial data sets. {\sl Journal of the Royal Statistical Society}, {\bf B}, {\bf 74}, 111-132.
\item Smith, R. L., Kolenikov, S. and Cox, L. H. (2003). Spatiotemporal modelling of PM$_{\rm 2.5}$ data with missing values. {\sl Journal of Geophysical Research}, {\bf 108}, No.D24, {\footnotesize DOI:10.1029/2002JD002914}.
\item Stein, M. (2008). A modeling approach for large spatial data sets. {\sl Journal of the Korean Statistical Society}, \textbf{37}, 3-10.
\item Tzeng, S.L. and Huang, H.C. (2018). Resolution adaptive fixed rank kriging. {\sl Technometrics}, to appear.
\item Wang, W.T. and Huang, H.C. (2017). Regularized principal component analysis for spatial data. {\sl Journal of Computational and Graphical Statistics}, {\bf 26}, 14-25.
\item Wikle, C. and Cressie, N. (1999). A dimension-reduced approach to space-time Kalman filtering. {\sl Biometrika}, {\bf 86}, 815-829.
\item Zhang, B., Sang, H., Huang, J. Z. (2015). Full-scale approximations of spatio-temporal covariance models for large datasets. {\sl Statistica Sinica}, \textbf{25}, 99-114.
\item Zhang, R., Robinson, P. and Yao, Q. (2018). Identifying cointegration by eigenanalysis. {\sl Available at} {\tt arXiv:1505.00821}.
\item Zhu, H., Fan, J. and Kong, L. (2014). Spatially varying coefficient model for neuroimaging data with jump discontinuities. {\sl Journal of the American Statistical Association}, \textbf{109}, 1084-1098.
\end{singlespace} \end{description}
\setcounter{page}{1}
\section*{\normalsize Supplementary document of ``Krigings over space and time based on latent low-dimensional structures''} \centerline{\sc Appendix: Technical proofs}
\setcounter{equation}{0} \renewcommand{A.\arabic{equation}}{A.\arabic{equation}}
{\bf Proof of Proposition \ref{prop1}}. The first part of the proposition can be proved in the same manner as Proposition 1 of Bathia \mbox{\sl et al.\;} (2010); the details are omitted. To prove the second part,
it follows from (\ref{b10}) and (\ref{b10n}) that any eigenfunction of $\Sigma_0$ must be a linear combination of $a_1, \cdots, a_d$, i.e. $\varphi_i ({\mathbf s}) = \sum_j \gamma_{ij} a_j({\mathbf s})$. Now it follows from (\ref{b9}) and (\ref{b10n}) that \begin{align*} \Sigma_0\circ \varphi_i ({\mathbf s}) &= \sum_{k,\ell, j} \sigma_{k \ell} \gamma_{ij} a_k({\mathbf s}) \inner{a_\ell}{a_j} = \sum_{k,j} \sigma_{k j} \gamma_{ij} a_k({\mathbf s}) = \sum_k \lambda_i \gamma_{ik} a_k({\mathbf s}) = \lambda_i \varphi_i ({\mathbf s}) . \end{align*} Since $a_1, \cdots, a_d$ are orthonormal, it must hold that \begin{equation} \label{x1} \sum_j \sigma_{k j} \gamma_{ij} = \lambda_i \gamma_{ik} , \qquad k=1, \cdots, d. \end{equation} As $\sigma_{k j}$ is the $(k,j)$-th element of the matrix $\mbox{Var}({\mathbf x}_t)$, (\ref{x1}) is equivalent to $\mbox{Var}({\mathbf x}_t) \boldsymbol{\gamma}_i = \lambda_i \boldsymbol{\gamma}_i$, i.e. $\boldsymbol{\gamma}_i$ is an eigenvector of $\mbox{Var}({\mathbf x}_t) $ corresponding to the eigenvalue $\lambda_i$, $i=1, \cdots, d$. Furthermore, \[ I(i=k) = \inner{\varphi_i}{\varphi_k} = \sum_{j,\ell} \gamma_{ij} \gamma_{k\ell} \inner{a_j}{a_\ell} = \sum_j \gamma_{ij} \gamma_{kj} = \boldsymbol{\gamma}_i' \boldsymbol{\gamma}_k. \] Thus $\boldsymbol{\gamma}_1, \cdots, \boldsymbol{\gamma}_d$ are orthonormal.
$\blacksquare$
\vskip3mm To prove Theorem \ref{prop2}(ii), we first introduce Lemma \ref{lemma3} below.
For simplicity of presentation, we assume that the $d$ positive eigenvalues of $\boldsymbol{\Sigma} \boldsymbol{\Sigma}'$, defined in (\ref{f5}), are distinct from each other. Then both ${\mathbf A}_1$ and ${\mathbf A}_2$ are uniquely defined if we line up each of the two sets of the $d$ orthonormal eigenvectors (i.e. the columns of ${\mathbf A}_1$ and ${\mathbf A}_2$) in the descending order of their corresponding eigenvalues, and require the first non-zero element of each of those eigenvectors to be positive. See the discussion below (\ref{f5}) above.
Using the same notation as in (\ref{f4}), we denote by $\widehat{\mathbf A}_{1}^{(j)}, \, \widehat{\mathbf A}_{2}^{(j)}$ the estimated factor loading matrices in (\ref{f4}) with the $j$-th partition, by $\boldsymbol{\Sigma}^{(j)}$ the covariance matrix in (\ref{f5n}), and by ${\mathbf x}_t^{(j)}, \, {\mathbf x}_{t}^{*(j)}$ the estimated latent factors in (\ref{f6}), $j=1, \cdots, p_0=p!/(p_1! p_2!)$. Assume that the $d$ positive eigenvalues of $\boldsymbol{\Sigma}^{(j)} (\boldsymbol{\Sigma}^{(j)})'$ are distinct. Then $\widehat{\mathbf A}_{1}^{(j)}$ and $\widehat{\mathbf A}_{2}^{(j)}$ can be uniquely defined as above. Now we are ready to state the lemma.
\begin{lemma} \label{lemma3} Let Condition 1 hold.
Let the $d$ positive eigenvalues of $\boldsymbol{\Sigma}^{(j)} (\boldsymbol{\Sigma}^{(j)})'$ be distinct, and Condition 2 hold for ${\mathbf x}_t^{(j)}$ and ${\mathbf x}_{t}^{*(j)}$ for all $j=1, \cdots, p_0$.
Then as $p^{\delta} n^{-1/2}+p^{2\delta-2}\tau\|{\mathbf L}\|=o(1)$, it holds that
\begin{eqnarray} \max_{1\leq j\leq p_0}
\{||\widehat{\mathbf A}_1^{(j)}-{\mathbf A}_1^{(j)}||+||\widehat{\mathbf A}_2^{(j)}-{\mathbf A}_2^{(j)}||\}
=O_P(p^{\delta}n^{-1/2}+p^{2\delta-2}\tau\|{\mathbf L}\|).\nonumber\end{eqnarray} \end{lemma}
\noindent {\bf Proof}. Since the bound for $\max_{1\leq j\leq p_0}||\widehat{\mathbf A}_2^{(j)}-{\mathbf A}_2^{(j)}||$ can be established in the same way as that for $\max_{1\leq j\leq p_0}||\widehat{\mathbf A}_1^{(j)}-{\mathbf A}_1^{(j)}||$, we only deal with the latter here. Note that for any $1\leq j\leq p_0$,
\begin{eqnarray} \label{l3-1}||\widehat\boldsymbol{\Sigma}^{(j)} (\widehat\boldsymbol{\Sigma}^{(j)})'-\tau{\mathbf L}-\boldsymbol{\Sigma}^{(j)} (\boldsymbol{\Sigma}^{(j)})'||\leq ||\widehat\boldsymbol{\Sigma}^{(j)}-\boldsymbol{\Sigma}^{(j)}||^2+2||\boldsymbol{\Sigma}^{(j)}||\times ||\widehat\boldsymbol{\Sigma}^{(j)}- \boldsymbol{\Sigma}^{(j)}||+\tau\|{\mathbf L}\|.\end{eqnarray}
Since ${\mathbf x}_t^{(j)}$ satisfies Condition 2, it follows that $||\boldsymbol{\Sigma}^{(j)}||=O(p^{1-\delta})$; see Lam \mbox{\sl et al.\;} (2011). On the other hand, by the mixing condition on $\{{\mathbf y}_t\}$, we have
\begin{eqnarray} \sup_{j}||\widehat\boldsymbol{\Sigma}^{(j)}- \boldsymbol{\Sigma}^{(j)}||^2&=&\sup_{j}||{1\over n}\sum_{t=1}^{n}\{({\mathbf y}_{t,1}^{(j)}-\bar{\mathbf y}_1^{(j)})({\mathbf y}_{t, 2}^{(j)}-\bar{\mathbf y}_2^{(j)})'-\mathrm{Cov}({\mathbf y}_{t,1}^{(j)}, {\mathbf y}_{t,2}^{(j)})\}||^2\nonumber\\
&\leq&\sum_{i=1}^{p}\sum_{j=1}^{p}\Big\{{1\over n}\sum_{t=1}^{n}[y_t({\mathbf s}_i)-\bar y({\mathbf s}_i)][y_t({\mathbf s}_j)-\bar y({\mathbf s}_j)]-\hbox{Cov}[y_t({\mathbf s}_i), y_t({\mathbf s}_j)]\Big\}^2 \nonumber\\
&=&O_p(p^2/n).\nonumber\end{eqnarray}
Thus, by (\ref{l3-1}),
\begin{eqnarray} \label{l3-2}\sup_{j}||\widehat\boldsymbol{\Sigma}^{(j)} (\widehat\boldsymbol{\Sigma}^{(j)})'-\tau{\mathbf L}-\boldsymbol{\Sigma}^{(j)} (\boldsymbol{\Sigma}^{(j)})'||=O_p(p^{2-\delta}n^{-1/2}+\tau\|{\mathbf L}\|).\end{eqnarray} By (\ref{l3-2}) and a similar argument to Theorem 1 of Lam \mbox{\sl et al.\;} (2011), we can show that
\begin{eqnarray} \max_{1\leq j\leq p_0}||\widehat{\mathbf A}_1^{(j)}-{\mathbf A}_1^{(j)}||=O_P(p^{\delta}n^{-1/2}+p^{2\delta-2}\tau\|{\mathbf L}\|)\nonumber\end{eqnarray} which completes the proof of Lemma \ref{lemma3}.
$\blacksquare$
\vskip3mm
\noindent{\bf Proof of Theorem \ref{prop2}(ii)}.\quad
Note that
\begin{eqnarray}\label{pp2} &&\mathrm{E}\Big[{1 \over np} \sum_{t=1}^n\sum_{i=1}^p \big\{ \widehat \xi_{t}({\mathbf s}_i) - \xi_{t}({\mathbf s}_i)\big\}^2\Big{|}\{\xi_{t}({\mathbf s}_i), \, y_t({\mathbf s}_i)\}\Big]\nonumber\\ &=&{1 \over npp_0} \sum_{t=1}^n\sum_{j=1}^{p_0} [\widehat{\mathbf A}_1^{(j)}\widehat{\mathbf x}_t^{(j)} -{\mathbf A}_1^{(j)}{\mathbf x}_t^{(j)}]'[\widehat{\mathbf A}_1^{(j)}\widehat{\mathbf x}_t^{(j)} -{\mathbf A}_1^{(j)}{\mathbf x}_t^{(j)}]\nonumber\\ &&+{1 \over npp_0} \sum_{t=1}^n\sum_{j=1}^{p_0}[\widehat{\mathbf A}_2^{(j)}\widehat{\mathbf x}_t^{*(j)} -{\mathbf A}_2^{(j)}{\mathbf x}_t^{*(j)}]'[\widehat{\mathbf A}_2^{(j)}\widehat{\mathbf x}_t^{*(j)} -{\mathbf A}_2^{(j)}{\mathbf x}_t^{*(j)}]\nonumber\\ &\equiv&\Sigma_1+\Sigma_2.\nonumber\end{eqnarray} By Lemma 3, we have \begin{eqnarray}\Sigma_1&=&{1 \over npp_0} \sum_{t=1}^n\sum_{j=1}^{p_0} \Big\{[\widehat{\mathbf A}_1^{(j)}(\widehat{\mathbf A}_1^{(j)})'-{\mathbf A}_1^{(j)}({\mathbf A}_1^{(j)})']{\mathbf A}_1^{(j)}{\mathbf x}_t^{(j)} +\widehat{\mathbf A}_1^{(j)}(\widehat{\mathbf A}_1^{(j)})'{\varepsilon}_{t,1}^{(j)}\Big\}'\nonumber\\ &&\Big\{[\widehat{\mathbf A}_1^{(j)}(\widehat{\mathbf A}_1^{(j)})'-{\mathbf A}_1^{(j)}({\mathbf A}_1^{(j)})']{\mathbf A}_1^{(j)}{\mathbf x}_t^{(j)} +\widehat{\mathbf A}_1^{(j)}(\widehat{\mathbf A}_1^{(j)})'{\varepsilon}_{t,1}^{(j)}\Big\} \nonumber\\ &=&{1 \over npp_0} \sum_{t=1}^n\sum_{j=1}^{p_0}({\mathbf x}_t^{(j)})'({\mathbf A}_1^{(j)})'[\widehat{\mathbf A}_1^{(j)} (\widehat{\mathbf A}_1^{(j)})'-{\mathbf A}_1^{(j)}({\mathbf A}_1^{(j)})']' [\widehat{\mathbf A}_1^{(j)}(\widehat{\mathbf A}_1^{(j)})'-{\mathbf A}_1^{(j)}({\mathbf A}_1^{(j)})'] {\mathbf A}_1^{(j)}{\mathbf x}_t^{(j)}\nonumber\\ &&+{1 \over npp_0} \sum_{t=1}^n\sum_{j=1}^{p_0}({\mathbf x}_t^{(j)})'({\mathbf A}_1^{(j)})'[\widehat{\mathbf A}_1^{(j)}(\widehat{\mathbf A}_1^{(j)})'-{\mathbf A}_1^{(j)} ({\mathbf A}_1^{(j)})']' \widehat{\mathbf A}_1^{(j)}(\widehat{\mathbf A}_1^{(j)})'{\varepsilon}_{t, 1}^{(j)}\nonumber\\ &&+{1 \over npp_0} \sum_{t=1}^n\sum_{j=1}^{p_0}({\varepsilon}_{t, 1}^{(j)})'\widehat{\mathbf A}_1^{(j)}(\widehat{\mathbf A}_1^{(j)})'[\widehat{\mathbf A}_1^{(j)}(\widehat{\mathbf A}_1^{(j)})'-{\mathbf A}_1^{(j)}({\mathbf A}_1^{(j)})'] {\mathbf A}_1^{(j)}{\mathbf x}_t^{(j)}\nonumber\\ &&+{1 \over npp_0} \sum_{t=1}^n\sum_{j=1}^{p_0}({\varepsilon}_{t, 1}^{(j)})'\widehat{\mathbf A}_1^{(j)}(\widehat{\mathbf A}_1^{(j)})'\widehat{\mathbf A}_1^{(j)}(\widehat{\mathbf A}_1^{(j)})'{\varepsilon}_{t,1}^{(j)}\nonumber\\
&=&O_p(p^{\delta}/n+p^{(\delta-1)/2}n^{-1/2}+p^{\delta-2}\tau\|{\mathbf L}\|)+{1 \over npp_0} \sum_{t=1}^n\sum_{j=1}^{p_0}({\varepsilon}_{t, 1}^{(j)})'{\mathbf A}_1^{(j)}({\mathbf A}_1^{(j)})'{\varepsilon}_{t,1}^{(j)}.\end{eqnarray} Since $\mathrm{E}\Big({1\over p_0}\sum_{j=1}^{p_0}({\varepsilon}_{t, 1}^{(j)})'{\mathbf A}_1^{(j)}({\mathbf A}_1^{(j)})'{\varepsilon}_{t,1}^{(j)}\Big)^2\leq {1\over p_0}\sum_{j=1}^{p_0} \mathrm{E}\Big[({\varepsilon}_{t, 1}^{(j)})'{\mathbf A}_1^{(j)}({\mathbf A}_1^{(j)})'{\varepsilon}_{t,1}^{(j)}\Big]^2<\infty,$ it follows from Markov's inequality that \begin{eqnarray} {1 \over np_0}\sum_{t=1}^n\sum_{j=1}^{p_0}({\varepsilon}_{t, 1}^{(j)})'{\mathbf A}_1^{(j)}({\mathbf A}_1^{(j)})'{\varepsilon}_{t,1}^{(j)}\stackrel{p}{\longrightarrow}
\mathrm{E}\Big[({\varepsilon}_{t, 1}^{(j)})'{\mathbf A}_1^{(j)}({\mathbf A}_1^{(j)})'{\varepsilon}_{t,1}^{(j)}\Big].\nonumber\end{eqnarray} Thus, by (\ref{pp2}), we have the following two conclusions: \begin{itemize}
\item[(i)] When $n\rightarrow\infty$, $\Sigma_1=O_p(p^{\delta}/n+p^{(\delta-1)/2}n^{-1/2}+p^{\delta-2}\tau\|{\mathbf L}\|+p^{-1}).$
\item[(ii)] When $p^{1+\delta}/n+p^{\delta-1}\tau\|{\mathbf L}\|\rightarrow 0$, $p\Sigma_1\stackrel{p}{\longrightarrow}\mathrm{E}\Big[({\varepsilon}_{t, 1}^{(1)})'{\mathbf A}_1^{(1)}({\mathbf A}_1^{(1)})'{\varepsilon}_{t,1}^{(1)}\Big].$ \end{itemize}
Similarly, the above properties hold also for $\Sigma_2.$ Hence,
\begin{eqnarray} \label{pp2-6}\mathrm{E}\Big[{1 \over np} \sum_{t=1}^n\sum_{i=1}^p \big\{ \widehat \xi_{t}({\mathbf s}_i) - \xi_{t}({\mathbf s}_i)\big\}^2\Big{|}\{\xi_{t}({\mathbf s}_i), \, y_t({\mathbf s}_i)\}\Big]=O_p(p^{\delta}/n+p^{(\delta-1)/2}n^{-1/2}+p^{\delta-2}\tau\|{\mathbf L}\|
+p^{-1}). \, \, \,\end{eqnarray} Further, when $p^{1+\delta}/n+p^{\delta-1}\tau\|{\mathbf L}\|\rightarrow 0$,
$$\mathrm{E}\Big[{1 \over n} \sum_{t=1}^n\sum_{i=1}^p \big\{ \widehat \xi_{t}({\mathbf s}_i) - \xi_{t}({\mathbf s}_i)\big\}^2\Big{|}\{\xi_{t}({\mathbf s}_i), \, y_t({\mathbf s}_i)\}\Big]\longrightarrow\mathrm{E}\Big[({\varepsilon}_{t, 1}^{(1)})'{\mathbf A}_1^{(1)}({\mathbf A}_1^{(1)})'{\varepsilon}_{t,1}^{(1)}+({\varepsilon}_{t, 2}^{(1)})'{\mathbf A}_2^{(1)}({\mathbf A}_2^{(1)})'{\varepsilon}_{t,2}^{(1)}\Big]$$ in probability.
$\blacksquare$
\begin{lemma} Let Condition 1 hold and $pn^{-\beta/2}\rightarrow 0$. Then \begin{eqnarray} \lim_{n\rightarrow\infty} P\left\{\min_{1\leq i\leq p}\lambda_{\min}[n^{-1}{\mathbf Z}({\mathbf s}_i)'{\mathbf Z}({\mathbf s}_i)]\geq c_0/2\right\}= 1.\nonumber\end{eqnarray} \end{lemma}
\noindent {\bf Proof}. Let $z_{t}^j({\mathbf s}_i), \, j=1, \cdots, m$ be the components of ${\mathbf z}_t({\mathbf s}_i).$ Since $\{{\mathbf z}_t({\mathbf s}_i)\}$ is a stationary $\alpha$-mixing process satisfying Condition 1, by
Lemma 12.2.2 of Lin and Lu (1996), we have that for any $1\leq j, k \leq m$,
\begin{eqnarray} \mathrm{E}\left|{1\over n}\sum_{t=1}^{n}\{z_t^j({\mathbf s}_i)z_t^k({\mathbf s}_i)-\mathrm{E}[z_t^j({\mathbf s}_i)z_t^k({\mathbf s}_i)]\}\right|^\beta =O(n^{-\beta/2}).\end{eqnarray} Since $m$ is finite, it follows that
\begin{eqnarray} \label{8.4}\mathrm{E}||n^{-1}{\mathbf Z}({\mathbf s}_i)'{\mathbf Z}({\mathbf s}_i)-\hbox{Var}({\mathbf z}_t({\mathbf s}_i))||_F^\beta
=O(n^{-\beta/2}),\end{eqnarray} where $||\cdot||_F$ denotes the Frobenius norm. Now suppose that $\min_{1\leq i\leq p}\lambda_{\min}[n^{-1}{\mathbf Z}({\mathbf s}_i)'{\mathbf Z}({\mathbf s}_i)]< c_0/2$. Since $\min_{1\leq i\leq p}\lambda_{\min}[\hbox{Var}({\mathbf z}_t({\mathbf s}_i))]> c_0$, and
$$\hbox{Var}({\mathbf z}_t({\mathbf s}_i))= [\hbox{Var}({\mathbf z}_t({\mathbf s}_i))-n^{-1}{\mathbf Z}({\mathbf s}_i)'{\mathbf Z}({\mathbf s}_i)]+n^{-1}{\mathbf Z}({\mathbf s}_i)'{\mathbf Z}({\mathbf s}_i),$$
it must hold that
\begin{eqnarray} \max_{1\leq i\leq p}||n^{-1}{\mathbf Z}({\mathbf s}_i)'{\mathbf Z}({\mathbf s}_i)-\hbox{Var}({\mathbf z}_t({\mathbf s}_i))||_F\geq c_0/2.\end{eqnarray} However, by (\ref{8.4}), it follows that
\begin{eqnarray} P\{\max_{1\leq i\leq p}||n^{-1}{\mathbf Z}({\mathbf s}_i)'{\mathbf Z}({\mathbf s}_i)-\hbox{Var}({\mathbf z}_t({\mathbf s}_i))||_F\geq c_0/2\}&\leq& \sum_{i=1}^{p}(c_0/2)^{-\beta}
\mathrm{E}||n^{-1}{\mathbf Z}({\mathbf s}_i)'{\mathbf Z}({\mathbf s}_i)-\hbox{Var}({\mathbf z}_t({\mathbf s}_i))||_F^\beta\nonumber\\ &=&O(pn^{-\beta/2})=o(1).\end{eqnarray} This implies that $P\{\min_{1\leq i\leq p}\lambda_{\min}[n^{-1}{\mathbf Z}({\mathbf s}_i)'{\mathbf Z}({\mathbf s}_i)]< c_0/2\}=o(1)$ and completes the proof of Lemma 4.
$\blacksquare$
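For the reader's convenience, we record the eigenvalue-perturbation step used in the above proof; it is a standard Weyl-type bound. For symmetric matrices ${\mathbf A}$ and ${\mathbf B}$ one has $\lambda_{\min}({\mathbf A})\leq \lambda_{\min}({\mathbf B})+||{\mathbf A}-{\mathbf B}||\leq \lambda_{\min}({\mathbf B})+||{\mathbf A}-{\mathbf B}||_F$. Applied with ${\mathbf A}=\hbox{Var}({\mathbf z}_t({\mathbf s}_i))$ and ${\mathbf B}=n^{-1}{\mathbf Z}({\mathbf s}_i)'{\mathbf Z}({\mathbf s}_i)$, this gives
$$c_0<\lambda_{\min}[\hbox{Var}({\mathbf z}_t({\mathbf s}_i))]\leq \lambda_{\min}[n^{-1}{\mathbf Z}({\mathbf s}_i)'{\mathbf Z}({\mathbf s}_i)]+||n^{-1}{\mathbf Z}({\mathbf s}_i)'{\mathbf Z}({\mathbf s}_i)-\hbox{Var}({\mathbf z}_t({\mathbf s}_i))||_F,$$
so that $\lambda_{\min}[n^{-1}{\mathbf Z}({\mathbf s}_i)'{\mathbf Z}({\mathbf s}_i)]< c_0/2$ indeed forces the Frobenius distance to exceed $c_0/2$.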
\vskip3mm
theorem and Theorem 8.1.10 of Golub and Van Loan (1996) (see also Lemma 3 of Lam \mbox{\sl et al.\;} (2011)). (ii) can be shown similarly to Theorem 1 of Bathia \mbox{\sl et al.\;} (2010); see also Theorem 1 of Lam and Yao (2012).
\noindent {\bf Proof for the convergence rate of $\widehat \bbeta({\mathbf s}_0)$}. Let $e_t({\mathbf s})=y_t({\mathbf s})-{\mathbf z}_t({\mathbf s})'\bbeta({\mathbf s})$ and $w_i=K_h({\mathbf s}_i-{\mathbf s}_0)/\sum_{j=1}^{p}K_h({\mathbf s}_j-{\mathbf s}_0)$. Write ${\mathbf e}({\mathbf s})=(e_1({\mathbf s}), \cdots, e_n({\mathbf s}))'$. Then \begin{eqnarray} \widehat \bbeta({\mathbf s}_0)=\sum_{i=1}^{p}[{\mathbf Z}({\mathbf s}_i)'{\mathbf Z}({\mathbf s}_i)]^{-1}[{\mathbf Z}({\mathbf s}_i)'{\mathbf e}({\mathbf s}_i)] w_i+\sum_{i=1}^{p}\bbeta({\mathbf s}_i) w_i\equiv I_1+I_2.\nonumber\end{eqnarray} For any twice differentiable function $g({\mathbf s})=g(s_1, s_2), \, {\mathbf s}=(s_1, s_2)\in \EuScript R^2$, define $g_{1\cdot}({\mathbf s})=\partial g({\mathbf s})/\partial s_1,$ $ \, \, g_{\cdot 2}({\mathbf s})=\partial g({\mathbf s})/\partial s_2,$ $ g_{1 1}({\mathbf s})=\partial^2 g({\mathbf s})/(\partial s_1)^2$ and $g_{2 2}({\mathbf s})=\partial^2 g({\mathbf s})/(\partial s_2)^2$. Under Conditions 3 and 4, a Taylor expansion shows that as $p\rightarrow\infty$,
\begin{eqnarray}\label{8.11} I_2-\bbeta({\mathbf s}_0)&=&\sum_{i=1}^{p}(\bbeta({\mathbf s}_i)-\bbeta({\mathbf s}_0)) w_i\nonumber\\ &=&{ h^2\over f({\mathbf s}_0)}[\bbeta_{1 \cdot}({\mathbf s}_0) f_{1\cdot}({\mathbf s}_0)+{1\over 2}f({\mathbf s}_0)\bbeta_{1 1}({\mathbf s}_0)]\int_{R}\int_R x^2K(x, y)\, dx dy \nonumber\\ &&+{h^2\over f({\mathbf s}_0)}[\bbeta_{\cdot 2}({\mathbf s}_0) f_{\cdot 2}({\mathbf s}_0)+{1\over 2}f({\mathbf s}_0) \bbeta_{22}({\mathbf s}_0)]\int_{R}\int_R y^2K(x, y)\, dx dy+o(h^2). \quad \end{eqnarray} As for $I_1$, by H\"{o}lder's inequality, it follows that
\begin{eqnarray} \label{8.12}||I_1||&\leq&\Big(\max_{1\leq i\leq p}\|[{{\mathbf Z}({\mathbf s}_i)'{\mathbf Z}({\mathbf s}_i)/ n}]^{-1}\|_F\Big)\sum_{i=1}^{p}\|{\mathbf Z}({\mathbf s}_i)'{\mathbf e}({\mathbf s}_i)/n\| w_i.\, \,\end{eqnarray} By Lemma 4, $\max_{1\leq i\leq p}\lambda_{\max}\{(n^{-1}{\mathbf Z}({\mathbf s}_i)'{\mathbf Z}({\mathbf s}_i))^{-1}\}\leq 2/c_0$ holds with probability tending to one. Since the dimension of ${\mathbf z}_t({\mathbf s})$ is fixed, it follows that
\begin{eqnarray} \label{8.13}\max_{1\leq i\leq p}||(n^{-1}{\mathbf Z}({\mathbf s}_i)'{\mathbf Z}({\mathbf s}_i))^{-1}||_F\leq c_1\end{eqnarray} holds in probability for some positive constant $c_1$. On the other hand, it is easy to get that
\begin{eqnarray} \max_{1\leq i\leq p}\mathrm{E}||n^{-1}{\mathbf Z}({\mathbf s}_i)'{\mathbf e}({\mathbf s}_i)||=O(n^{-1/2}),\nonumber\end{eqnarray} hence,
\begin{eqnarray} \label{8.14}\mathrm{E}\Big[\sum_{i=1}^{p}||n^{-1}{\mathbf Z}({\mathbf s}_i)'{\mathbf e}({\mathbf s}_i)|| w_i\Big]=O(n^{-1/2}).\end{eqnarray} It follows from (\ref{8.12}), (\ref{8.13}) and (\ref{8.14}) that
\begin{eqnarray} ||I_1||=O_p(n^{-1/2}).\end{eqnarray} Thus, by (\ref{8.11})--(\ref{8.14}), we have $\|\widehat\bbeta({\mathbf s}_0)-\bbeta({\mathbf s}_0)\|=O_p(h^2+n^{-1/2}).$
$\blacksquare$
\noindent {\bf Proof of Theorem 2}.
Let ${\mathbf x}_t^{o}={\mathbf x}_tI({\mathbf s}_i\in {\cal{S}}_1)+{\mathbf x}_t^*I({\mathbf s}_i\in {\cal{S}}_2).$ Then \begin{eqnarray} \widehat \xi_t({\mathbf s}_0)-\xi_t({\mathbf s}_0)&=&\sum_{i=1}^{p}(\widehat {\mathbf a}'({\mathbf s}_i)\widehat {\mathbf x}_{t}^o-{\mathbf a}'({\mathbf s}_i) {\mathbf x}_{t}^o)w_i +\sum_{j=1}^{d}\sum_{i=1}^{p} (a_j({\mathbf s}_i)-a_j({\mathbf s}_0)) x_{tj}^o w_i\nonumber\\ &\equiv & J_1+J_2.\end{eqnarray} Similar to (\ref{8.11}), we have
\begin{eqnarray} \left|\sum_{i=1}^{p} [(a_j({\mathbf s}_i)-a_j({\mathbf s}_0))/||a({\mathbf s}_0)||] w_i\right|=O(h^2),\nonumber\end{eqnarray} which implies that
\begin{eqnarray} \label{8.17}|J_2|= O(h^2)(||{\mathbf a}({\mathbf s}_0)||)\sum_{j=1}^{d} |x_{tj}^o|=O(dh^2\cdot||{\mathbf a}({\mathbf s}_0)||\cdot ||{\mathbf x}_t^o||)=O_p(h^2),\end{eqnarray} where we use the fact that $||{\mathbf x}_t^o||=O_p(p^{(1-\delta)/2})$ and $||{\mathbf a}({\mathbf s}_0)||=O(p^{(\delta-1)/2})$, which follows from $\lambda_{\min}\{\mathrm{E}({\mathbf x}_t^o({\mathbf x}_t^o)')\}\asymp p^{1-\delta}$ and
\[(||{\mathbf a}({\mathbf s}_0)||^2) \lambda_{\min}\{\mathrm{E}({\mathbf x}_t^o({\mathbf x}_t^o)')\}\leq (||{\mathbf a}({\mathbf s}_0)||^2)\mathrm{E}[({\mathbf a}'({\mathbf s}_0)/||{\mathbf a}({\mathbf s}_0)||){\mathbf x}_t^o]^2=\mathrm{E}[{\mathbf a}'({\mathbf s}_0){\mathbf x}_t^o]^2 =\mathrm{E}y_t^2({\mathbf s}_0)<\infty.\]
By (iii) of Proposition 3 and the same arguments as in Theorem 2.2 of Chang \mbox{\sl et al.\;} (2015), we have that for $i=1, 2$,
\begin{eqnarray} \label{8.18} p^{-1/2}||\widehat{\mathbf A}_i \widehat{\mathbf x}_{t}^o-{\mathbf A}_i{\mathbf x}_{t}^o||=O_p(||\widehat{\mathbf A}_i-{\mathbf A}_i||+n^{-1/2}+p^{-1/2})
=O_p(n^{-1/2}p^\delta+p^{2\delta-2}\tau\|{\mathbf L}\|+p^{-1/2}),\nonumber\end{eqnarray} which, combined with H\"{o}lder's inequality, implies that \begin{eqnarray} \label{8.19}J_1&\leq& \Big\{\sum_{i=1}^{p}[(\widehat {\mathbf a}'({\mathbf s}_i)\widehat {\mathbf x}_{t}^o-{\mathbf a}'({\mathbf s}_i) {\mathbf x}_{t}^o)]^2\Big\}^{1/2}\Big(\sum_{i=1}^{p}w_i^2\Big)^{1/2}\nonumber\\
&\leq&(\sum_{i=1}^{2}||\widehat{\mathbf A}_i \widehat{\mathbf x}_{t}^o-{\mathbf A}_i{\mathbf x}_{t}^o||)\Big(\sum_{i=1}^{p}w_i^2\Big)^{1/2}\nonumber\\
&=&O_p\{p^{1/2}(n^{-1/2}p^\delta+p^{2\delta-2}\tau\|{\mathbf L}\|+p^{-1/2})\}O((ph)^{-1/2})\nonumber\\ &=&
O_p\{p^\delta(nh)^{-1/2}+(ph)^{-1/2}+p^{2\delta-2}h^{-1/2}\tau\|{\mathbf L}\|\}.\end{eqnarray} Thus, (ii) follows from (\ref{8.17}) and (\ref{8.19}). Similarly, we can show that (\ref{8.19}) holds also for $\widetilde\bxi_t({\mathbf s}_0)$.
\ignore{Next, we show (ii). To this end, we first establish the consistency of $ c({\mathbf s}_0)$. Note that \begin{eqnarray} c'({\mathbf s}_0) &=&\Big(\sum_{j=1}^{p_1}\widehat {\mathbf a}'({\mathbf s}_j)w_j\Big)\Big[{1\over n}\sum_{t=1}^{n}(\widehat{\mathbf x}_t-\bar{\widehat{\mathbf x}})({\mathbf y}_t-\bar{\mathbf y})'\Big]+\Big(\sum_{j=p_1+1}^{p}\widehat {\mathbf a}'({\mathbf s}_j)w_j\Big)\Big[{1\over n}\sum_{t=1}^{n}(\widehat{\mathbf x}_t^*-\bar{\widehat{\mathbf x}}^*)({\mathbf y}_t-\bar{\mathbf y})'\Big]\nonumber\\ &=&\Big(\sum_{j=1}^{p_1}\widehat {\mathbf a}'({\mathbf s}_j)w_j\Big)\Big[{1\over n}\sum_{t=1}^{n}\widehat{\mathbf A}_1'[{\mathbf A}_1({\mathbf x}_t-\bar{{\mathbf x}})+(\mbox{\boldmath$\varepsilon$}_{t,1}-\bar{\mbox{\boldmath$\varepsilon$}}_1)]({\mathbf y}_t-\bar{\mathbf y})'\Big]\nonumber\\ &&+\Big(\sum_{j=p_1+1}^{p}\widehat {\mathbf a}'({\mathbf s}_j)w_j\Big)\Big[{1\over n}\sum_{t=1}^{n}\widehat{\mathbf A}_2'[{\mathbf A}_2({\mathbf x}_t^*-\bar{{\mathbf x}}^*)+(\mbox{\boldmath$\varepsilon$}_{t,2}-\bar{\mbox{\boldmath$\varepsilon$}}_2)]({\mathbf y}_t-\bar{\mathbf y})'\Big]\nonumber\\ &\equiv &\Gamma_1+\Gamma_2.\nonumber\end{eqnarray} As for $\Gamma_1$, we have \begin{eqnarray} \Gamma_1&=&\Big(\sum_{j=1}^{p_1}(\widehat {\mathbf a}'({\mathbf s}_j)-{\mathbf a}'({\mathbf s}_0))w_j\Big)\widehat{\mathbf A}_1'{\mathbf A}_1\Big[{1\over n}\sum_{t=1}^{n}({\mathbf x}_t-\bar{{\mathbf x}})({\mathbf y}_t-\bar{\mathbf y})'\Big]\nonumber\\ &&+\sum_{j=1}^{p_1}w_j{\mathbf a}'({\mathbf s}_0)(\widehat{\mathbf A}_1-{\mathbf A}_1)'{\mathbf A}_1\Big[{1\over n}\sum_{t=1}^{n}({\mathbf x}_t-\bar{{\mathbf x}})({\mathbf y}_t-\bar{\mathbf y})'\Big]+\sum_{j=1}^{p_1}w_j\Big[{1\over n}\sum_{t=1}^{n}{\mathbf a}'({\mathbf s}_0)({\mathbf x}_t-\bar{{\mathbf x}})({\mathbf y}_t-\bar{\mathbf y})'\Big]\nonumber\\ &&+\Big(\sum_{j=1}^{p_1}\widehat {\mathbf a}'({\mathbf s}_j)w_j\Big)\widehat{\mathbf A}_1'\Big[{1\over n}\sum_{t=1}^{n}(\mbox{\boldmath$\varepsilon$}_{t,1}-\bar{\mbox{\boldmath$\varepsilon$}}_1)({\mathbf y}_t-\bar{\mathbf y})'\Big]\equiv \sum_{i=1}^{4}\Delta_i.\nonumber\end{eqnarray} For $\Delta_1$, using the same argument as in $J_1$ and $J_2$, we have
\begin{eqnarray} \label{p8.20}||\hbox{Var}({\mathbf y}_t)^{-1/2}\Delta'_1||&=&
\Big{\|}\Big[{1\over n}\sum_{s=1}^{n}\hbox{Var}({\mathbf y}_t)^{-1/2}({\mathbf y}_t-\bar{\mathbf y})({\mathbf x}_t-\bar{{\mathbf x}})'\Big]{\mathbf A}'_1\widehat{\mathbf A}_1\Big(\sum_{j=1}^{p_1} {(\widehat{\mathbf a}({\mathbf s}_j)-{\mathbf a}({\mathbf s}_0))w_j}\Big)\Big{\|}\nonumber\\
&\leq& ||{\mathbf a}({\mathbf s}_0)||\cdot\Big{\|}{1\over n}\sum_{t=1}^{n}\hbox{Var}({\mathbf y}_t)^{-{1\over 2}}({\mathbf y}_t-\bar{\mathbf y})({\mathbf x}_t-\bar{{\mathbf x}})'\Big{\|}\cdot\Big{\|}\sum_{j=1}^{p_1} {(\widehat{\mathbf a}({\mathbf s}_j)-{\mathbf a}({\mathbf s}_0))w_j\over ||{\mathbf a}({\mathbf s}_0)||}\Big{\|}\nonumber\\ &=&O_p(h^2+p^\delta(nh)^{-1/2}).\end{eqnarray}
For $\Delta_2$, using (\ref{8.17}), we have
\begin{eqnarray} ||\hbox{Var}({\mathbf y}_t)^{-1/2}\Delta'_2||=O_p(||\widehat{\mathbf A}_1-{\mathbf A}_1||)=O_p(p^{\delta}n^{-1/2}).\end{eqnarray}
For $\Delta_3$, we have
\begin{eqnarray} &&\Big{\|}\hbox{Var}({\mathbf y}_t)^{-1/2}\Big[\Delta'_3-\Big(\sum_{j=1}^{p_1}w_j\Big)\hbox{Cov}(y_t({\mathbf s}_0), {\mathbf y}_t)\Big]\Big{\|}\nonumber\\
&=&\Big(\sum_{j=1}^{p_1}w_j\Big)\Big{\|}\hbox{Var}({\mathbf y}_t)^{-1/2}\Big({1\over n}\sum_{t=1}^{n}\{{\mathbf A}[({\mathbf x}_t-\bar{\mathbf x})({\mathbf x}_t-\bar{{\mathbf x}})'-\Sigma_x]+
({\varepsilon}_t-\bar{\varepsilon})({\mathbf x}_t-\bar{{\mathbf x}})'\}{\mathbf a}({\mathbf s}_0)\Big)\Big{\|}\nonumber\\
&\leq&\Big{\|}\hbox{Var}({\mathbf y}_t)^{-1/2}\Big({1\over n}\sum_{t=1}^{n}{\mathbf A}[({\mathbf x}_t-\bar{\mathbf x})({\mathbf x}_t-\bar{{\mathbf x}})'-\Sigma_x]{\mathbf a}({\mathbf s}_0)\Big)\Big{\|}\nonumber\\
&&+\Big{\|}\hbox{Var}({\mathbf y}_t)^{-1/2}\Big({1\over n}\sum_{t=1}^{n}({\varepsilon}_t-\bar{\varepsilon})({\mathbf x}_t-\bar{{\mathbf x}})'\Big){\mathbf a}({\mathbf s}_0)\Big{\|}\nonumber\\
&=&O_p(n^{-1/2}).\end{eqnarray}
Further, for $\Delta_4$, we have
\begin{eqnarray}\label{p8.23}&& ||\hbox{Var}({\mathbf y}_t)^{-1/2}\Delta'_4||\nonumber\\
&=&\Big{\|}\hbox{Var}({\mathbf y}_t)^{-1/2}\Big[{1\over n}\sum_{t=1}^{n}({\mathbf y}_t-\bar{\mathbf y})(\mbox{\boldmath$\varepsilon$}_{t,1}-\bar{\mbox{\boldmath$\varepsilon$}}_1)'\Big](\widehat{\mathbf A}_1-{\mathbf A}_1)\Big(\sum_{j=1}^{p_1}\widehat {\mathbf a}({\mathbf s}_j)w_j\Big)\Big{\|}\nonumber\\
&&+\Big{\|}\hbox{Var}({\mathbf y}_t)^{-1/2}\Big[{1\over n}\sum_{t=1}^{n}({\mathbf y}_t-\bar{\mathbf y})(\mbox{\boldmath$\varepsilon$}_{t,1}-\bar{\mbox{\boldmath$\varepsilon$}}_1)'\Big]{\mathbf A}_1\Big(\sum_{j=1}^{p_1}\widehat {\mathbf a}({\mathbf s}_j)w_j\Big)\Big{\|}\nonumber\\
&=&\Big{\|}\hbox{Var}({\mathbf y}_t)^{-1/2}\Big[{1\over n}\sum_{t=1}^{n}[{\mathbf A}_1({\mathbf x}_t-\bar{\mathbf x})+(\mbox{\boldmath$\varepsilon$}_{t,1}-\bar{\mbox{\boldmath$\varepsilon$}}_1)](\mbox{\boldmath$\varepsilon$}_{t,1}-\bar{\mbox{\boldmath$\varepsilon$}}_1)'\Big]
(\widehat{\mathbf A}_1-{\mathbf A}_1)\Big(\sum_{j=1}^{p_1}\widehat {\mathbf a}({\mathbf s}_j)w_j\Big)\Big{\|}\nonumber\\
&&+\Big{\|}\hbox{Var}({\mathbf y}_t)^{-1/2}\Big[{1\over n}\sum_{t=1}^{n}[{\mathbf A}_1({\mathbf x}_t-\bar{\mathbf x})+(\mbox{\boldmath$\varepsilon$}_{t,1}-\bar{\mbox{\boldmath$\varepsilon$}}_1)](\mbox{\boldmath$\varepsilon$}_{t,1}-\bar{\mbox{\boldmath$\varepsilon$}}_1)'\Big]{\mathbf A}_1
\Big(\sum_{j=1}^{p_1}\widehat {\mathbf a}({\mathbf s}_j)w_j\Big)\Big{\|}\nonumber\\
&=&O_p\Big(\sum_{j=1}^{p}\widehat {\mathbf a}({\mathbf s}_j)w_j\Big)=O_p(h^2+p^{(\delta-1)/2}).\end{eqnarray} Combining (\ref{p8.20})--(\ref{p8.23}) yields
\begin{eqnarray} \label{p8.24}\Big{\|}\hbox{Var}({\mathbf y}_t)^{-1/2}\Big[\Gamma_1-\Big(\sum_{j=1}^{p_1}w_j\Big)
\hbox{Cov}(y_t({\mathbf s}_0), {\mathbf y}_t)\Big]\Big{\|}=O_p(h^2+p^{(\delta-1)/2}+p^\delta(nh)^{-1/2}).\end{eqnarray} Similarly, we can show
\begin{eqnarray}\Big{\|}\hbox{Var}({\mathbf y}_t)^{-1/2}\Big[\Gamma_2-\Big(\sum_{j=p_1+1}^{p}w_j\Big)\hbox{Cov}(y_t({\mathbf s}_0), {\mathbf y}_t)\Big]\Big{\|}=O_p(h^2+p^{(\delta-1)/2}+p^\delta(nh)^{-1/2}).\nonumber\end{eqnarray} This combining with (\ref{8.24}) yields that
\begin{eqnarray} \|\hbox{Var}({\mathbf y}_t)^{-1/2}(c({\mathbf s}_0)-\hbox{Cov}(y_t({\mathbf s}_0), {\mathbf y}_t))\|=O_p(h^2+p^{(\delta-1)/2}+p^\delta(nh)^{-1/2}).\nonumber\end{eqnarray} Thus, by $ ||\widetilde\boldsymbol{\Sigma}_y(0)^{-1} -\boldsymbol{\Sigma}_y(0)^{-1}||=
O_p(p^{1+\delta}n^{-1}+p^{\delta-1}+p^{\delta} n^{-1/2})$ (see (\ref{8.45}) below) and (\ref{p8.24}), we have (ii) and complete the proof of Theorem 2.}
$\blacksquare$
\vskip3mm
\noindent {\bf Proof of Theorem 3}. For simplicity, we only show the case with spatial points over ${\cal{S}}_1$, i.e., ${\mathbf y}_{t1}={\mathbf A}_1 {\mathbf x}_{t}+\mbox{\boldmath$\varepsilon$}_{t,1}.$ The case of points over ${\cal{S}}_2$ can be shown similarly. Let $\widehat\boldsymbol{\Sigma}_{{\varepsilon}}(k)={1\over n}\sum_{t=1}^{n-k}(\mbox{\boldmath$\varepsilon$}_{t+k, 1}-\bar\mbox{\boldmath$\varepsilon$}_1)(\mbox{\boldmath$\varepsilon$}_{t, 1}-\bar\mbox{\boldmath$\varepsilon$}_1)', \, \widehat\boldsymbol{\Sigma}_{x{\varepsilon}}(k)={1\over n}\sum_{t=1}^{n-k}({\mathbf x}_{t+k}-\bar{\mathbf x})(\mbox{\boldmath$\varepsilon$}_{t,1}-\bar\mbox{\boldmath$\varepsilon$}_1)',$ $\widehat\boldsymbol{\Sigma}_{{\varepsilon} x}(k)={1\over n}\sum_{t=1}^{n-k}(\mbox{\boldmath$\varepsilon$}_{t+k,1}-\bar\mbox{\boldmath$\varepsilon$}_1)({\mathbf x}_{t}-\bar{\mathbf x})'$ and $\widehat\boldsymbol{\Sigma}_{xx}(k)={1\over n}\sum_{t=1}^{n-k}({\mathbf x}_{t+k}-\bar{\mathbf x})({\mathbf x}_{t}-\bar{\mathbf x})'$. It follows that for any $k$, \begin{eqnarray} \label{8.35}\widehat\boldsymbol{\Sigma}_x(k)-\boldsymbol{\Sigma}_x(k) &=&(\widehat{\mathbf A}'_1-{\mathbf A}'_1){\mathbf A}_1\widehat\boldsymbol{\Sigma}_{xx}(k){\mathbf A}'_1\widehat{\mathbf A}_1+\widehat\boldsymbol{\Sigma}_{xx}(k){\mathbf A}'_1(\widehat{\mathbf A}_1-{\mathbf A}_1) +(\widehat\boldsymbol{\Sigma}_{xx}(k)-\boldsymbol{\Sigma}_x(k))\nonumber\\ && +\widehat{\mathbf A}'_1\widehat\boldsymbol{\Sigma}_{{\varepsilon}}(k)\widehat{\mathbf A}_1 +\widehat{\mathbf A}'_1{\mathbf A}_1\widehat\boldsymbol{\Sigma}_{x{\varepsilon} }(k)\widehat{\mathbf A}_1 +\widehat{\mathbf A}'_1\widehat\boldsymbol{\Sigma}_{{\varepsilon} x}(k){\mathbf A}'_1\widehat{\mathbf A}_1\nonumber\\ &=: &\sum_{j=1}^{6}L_j.\nonumber\end{eqnarray}
By $||\widehat{\mathbf A}_1-{\mathbf A}_1||=O_p(n^{-1/2}p^\delta+p^{2\delta-2}\tau\|{\mathbf L}\|)$, it follows that
\begin{eqnarray}\label{PT5.1} ||L_1||+||L_2||=O(||\widehat{\mathbf A}_1-{\mathbf A}_1||\cdot ||\widehat\boldsymbol{\Sigma}_{xx}(k)||)=O_{p}(n^{-1/2}p+p^{\delta-1}\tau\|{\mathbf L}\|).\end{eqnarray} By (A.1) of Lam and Yao (2012), we have
\begin{eqnarray} \label{PT5.2} ||L_3||\leq ||\widehat\boldsymbol{\Sigma}_{xx}(k)-\boldsymbol{\Sigma}_x(k)||_F=O(d||\widehat\boldsymbol{\Sigma}_{xx}(k)-\boldsymbol{\Sigma}_x(k)||) =O(p^{1-\delta}n^{-1/2}).\end{eqnarray}
It is easy to get that
\begin{eqnarray} ||\widehat\boldsymbol{\Sigma}_{x{\varepsilon} }(k)||=O_p(p^{1-\delta/2}n^{-1/2})= ||\widehat\boldsymbol{\Sigma}_{{\varepsilon} x}(k)||, \, \,
\hbox{and} \, \, ||\widehat\boldsymbol{\Sigma}_{{\varepsilon} }(k)||=O_p(pn^{-1/2}), \nonumber\end{eqnarray}
see for example Lemma 2 of Lam \mbox{\sl et al.\;} (2011). Thus,
\begin{eqnarray} \label{PT5.3} ||L_4||+||L_5||+||L_6||=O_{p}(pn^{-1/2}).\end{eqnarray} Combining (\ref{PT5.1}), (\ref{PT5.2}) and (\ref{PT5.3}) yields that for any $0\leq k\leq j_0$,
\begin{eqnarray}\label{PT5.4} ||\widehat\boldsymbol{\Sigma}_x(k)-\boldsymbol{\Sigma}_x(k)||=O_p(pn^{-1/2}+p^{\delta-1}\tau\|{\mathbf L}\|).\end{eqnarray} Thus, by $p^{\delta}n^{-1/2}+p^{2\delta-2}\tau\|{\mathbf L}\|=o(1)$, we get $pn^{-1/2}+p^{\delta-1}\tau\|{\mathbf L}\|=o(p^{1-\delta})$ and in probability,
\begin{eqnarray} \label{PT5.5}||\widehat\boldsymbol{\Sigma}_x(k)||_{\min}\asymp ||\boldsymbol{\Sigma}_x(k)||_{\min}\asymp p^{1-\delta} \asymp ||\boldsymbol{\Sigma}_x(k)||\asymp ||\widehat\boldsymbol{\Sigma}_x(k)||.\end{eqnarray} Since $j_0$ is fixed, from (\ref{PT5.4}) it follows that
\begin{eqnarray} \label{PT5.6}||\widehat {\mathbf R}_{j_0}-{\mathbf R}_{j_0}||\asymp ||{\mathbf W}_{j_0}-\widehat{\mathbf W}_{j_0}||\asymp O_p(pn^{-1/2}+p^{\delta-1}\tau\|{\mathbf L}\|)\end{eqnarray} and from (\ref{PT5.5}) it follows that
\begin{eqnarray}\label{PT5.7} \|{\mathbf R}_{j_0}\|=O( p^{1-\delta}) \, \, \,\hbox{and}\, \, \, ||\widehat{\mathbf W}_{j_0}^{-1}||\asymp ||{\mathbf W}_{j_0}^{-1}|| \asymp O_p(p^{\delta-1}).\end{eqnarray} Since
$$||\widehat{\mathbf x}_t-{\mathbf x}_t||
=||(\widehat{\mathbf A}_1-{\mathbf A}_1)'{\mathbf A}_1{\mathbf x}_t+(\widehat{\mathbf A}_1-{\mathbf A}_1)'\mbox{\boldmath$\varepsilon$}_{t,1}+{\mathbf A}_1'\mbox{\boldmath$\varepsilon$}_{t,1}||
=O_p(p^{1/2+\delta}n^{-1/2}+p^{2\delta-3/2}\tau\|{\mathbf L}\|+1) $$
and $p^{\delta/2}(p^\delta n^{-1/2}+p^{2\delta-2}\tau\|L\|)=o(1)$, it follows that $\|\widehat{\mathbf X}\widehat{\mathbf X}'\|=O_p(p^{1-\delta}).$ Note that \begin{eqnarray} \widehat{\mathbf x}_{n+j}^r- {\mathbf x}_{n}(j) &=&\widehat {\mathbf R}_{j_0}\widehat{\mathbf W}_{j_0}^{-1}\widehat{\mathbf X}- {\mathbf R}_{j_0}{\mathbf W}_{j_0}^{-1}{\mathbf X}\nonumber\\ &=&(\widehat {\mathbf R}_{j_0}-{\mathbf R}_{j_0})\widehat{\mathbf W}_{j_0}^{-1}\widehat{\mathbf X}+{\mathbf R}_{j_0}\widehat{\mathbf W}_{j_0}^{-1}({\mathbf W}_{j_0}-\widehat{\mathbf W}_{j_0}) {\mathbf W}_{j_0}^{-1}\widehat{\mathbf X}+{\mathbf R}_{j_0}{\mathbf W}_{j_0}^{-1}(\widehat{\mathbf X}-{\mathbf X}).\nonumber\end{eqnarray} By (\ref{PT5.6}) and (\ref{PT5.7}), we have
\begin{eqnarray} ||(\widehat {\mathbf R}_{j_0}-{\mathbf R}_{j_0})\widehat{\mathbf W}_{j_0}^{-1}\widehat{\mathbf X}||^2
&=&O(||\widehat {\mathbf R}_{j_0}-{\mathbf R}_{j_0}||^2\cdot||\widehat{\mathbf W}_{j_0}^{-1}||^2\cdot ||\widehat{\mathbf X}\widehat{\mathbf X}'||)=O_p(p^{1+\delta}n^{-1}).\end{eqnarray} Similarly,
\begin{eqnarray} ||{\mathbf R}_{j_0}\widehat{\mathbf W}_{j_0}^{-1}({\mathbf W}_{j_0}-\widehat{\mathbf W}_{j_0}){\mathbf W}_{j_0}^{-1}\widehat{\mathbf X}||^2
&=&O(||{\mathbf R}_{j_0}||^2\cdot ||\widehat{\mathbf W}_{j_0}^{-1}||^2\cdot ||{\mathbf W}_{j_0}-\widehat{\mathbf W}_{j_0}||^2\cdot||{\mathbf W}_{j_0}^{-1}||^2\cdot \|\widehat{\mathbf X}\widehat{\mathbf X}'\|)\nonumber\\
&=&O_p(p^{1+\delta}n^{-1}).\end{eqnarray} On the other hand, by (\ref{PT5.7}) and $||\widehat{\mathbf x}_t-{\mathbf x}_t||=O_p(p^{1/2+\delta}n^{-1/2}+p^{2\delta-3/2}\tau\|{\mathbf L}\|+1)$, we have
\begin{eqnarray} ||{\mathbf R}_{j_0}{\mathbf W}_{j_0}^{-1}(\widehat{\mathbf X}-{\mathbf X})||=O_p(p^{1/2+\delta}n^{-1/2}+p^{2\delta-3/2}\tau\|{\mathbf L}\|+1).\end{eqnarray} Thus,
\[||\widehat{\mathbf x}_{n+j}^r- {\mathbf x}_{n}(j)||=O_p(p^{1/2+\delta}n^{-1/2}+p^{2\delta-3/2}\tau\|{\mathbf L}\|+1)\] holds and (a) of Theorem 3 is proved.
As for Conclusion (b), by Conclusion (a) and (iii) of Proposition 3, we have
\begin{eqnarray} ||\widehat{\mathbf y}_{n+j}^r- {\mathbf y}_{n}(j)||
&=&|| \widehat{\mathbf A}\widehat{\mathbf x}_{n+j}^r-{\mathbf A} {\mathbf x}_{n}(j)||\nonumber\\
&\leq&||(\widehat{\mathbf A}-{\mathbf A}){\mathbf x}_{n}(j)||+||\widehat{\mathbf A}(\widehat{\mathbf x}_{n+j}^r-{\mathbf x}_{n}(j))||\nonumber\\
&=&O_p(p^\delta n^{-1/2}p^{1/2-\delta/2} +p^{1/2+\delta}n^{-1/2}+p^{2\delta-3/2}\tau\|{\mathbf L}\|+1)\nonumber\\
&=&O_p( p^{1/2+\delta}n^{-1/2}+p^{2\delta-3/2}\tau\|{\mathbf L}\|+1).\nonumber\end{eqnarray} This gives (b) as desired and completes the proof of Theorem 3.
$\blacksquare$
\end{document}
\begin{document}
\title{Global existence, boundedness and stabilization in a high-dimensional chemotaxis system with consumption}
\begin{abstract} \noindent This paper deals with the homogeneous Neumann boundary-value problem for the chemotaxis-consumption system \begin{eqnarray*} \left\{ \begin{array}{llc} u_t=\Delta u-\chi\nabla\cdot\big(u\nabla v\big)+\kappa u-\mu u^2, &x\in \Omega, \,t>0,\\ \displaystyle v_t=\Delta v-uv , &x\in \Omega,\, t>0,
\end{array} \right. \end{eqnarray*} in $N$-dimensional bounded smooth domains for suitably regular positive initial data. \\ We shall establish the existence of a global bounded classical solution for suitably large $\mu$ and prove that for any $\mu>0$ there exists a weak solution.\\ Moreover, in the case of $\kappa>0$ convergence to the constant equilibrium $(\frac{\kappa}{\mu },0)$ is shown. \\
\noindent{\bf Keywords:} Chemotaxis; logistic source; global existence; boundedness; asymptotic stability; weak solution\\
\noindent {\bf MSC:} 35Q92; 35K55; 35A01; 35B40; 35D30; 92C17 \end{abstract}
\section{Introduction} Chemotaxis is the adaption of the direction of movement to an external chemical signal. This signal can be a substance produced by the biological agents (cells, bacteria) themselves, as is the case in the celebrated Keller-Segel model (\cite{KS}, \cite{horstmann_I}) or -- in the case of even simpler organisms -- by a nutrient that is consumed. A prototypical model taking into account random and chemotactically directed movement of bacteria alongside death effects at points with high population densities and population growth together with diffusion and consumption of the nutrient is given by \begin{align}\label{sys.intro}
u_t&=\Delta u -\chi \nabla \cdot (u\nabla v) +\kappa u-\mu u^2\\
v_t&=\Delta v - uv,\nonumber \end{align} considered in a smooth, bounded domain $\Omega\subset \mathbb{R} ^N$ together with homogeneous Neumann boundary conditions and suitable initial data. Herein, $\chi > 0$, $\kappa\in\mathbb{R} $, $\mu >0$ denote chemotactic sensitivity, growth rate (or death rate, if negative) and strength of the overcrowding effect, respectively. The system \eqref{sys.intro}, in a basic form often with $\kappa=\mu =0$, appears as part of chemotaxis-fluid models intensively studied over the past few years (see e.g. the survey \cite[sec. 4.1]{BBTW} or \cite{cao_lankeit} for a recent contribution with an extensive bibliography).
Compared with the classical Keller-Segel model \begin{align}\label{KS}
u_t&=\Delta u-\chi \nabla \cdot(u\nabla v)+\kappa u-\mu u^2\\
v_t&=\Delta v-v+u, \nonumber \end{align} which we have given in the form with logistic source terms paralleling those in \eqref{sys.intro}, at first glance \eqref{sys.intro} seems much more amenable to the global existence (and boundedness) of solutions -- after all, the second equation by comparison arguments immediately provides an $L^\infty$-bound for $v$.
However, such a bound is not sufficient for dealing with the chemotaxis term, and accordingly global existence and boundedness of solutions to \eqref{sys.intro} with $\kappa=\mu =0$ is only known under the smallness condition \begin{equation}\label{smallnessconditionforsourcefreemodel}
\chi \norm[\Lom\infty]{v(\cdot,0)}\leq \frac1{6(N+1)} \end{equation} on the initial data (\cite{tao_consumption_bdness}) or in a two-dimensional setting (\cite{win_ctfluid}, \cite{win_arma} and also \cite{xieli}). Their rate of convergence has been treated in \cite{zhang_li}. In three-dimensional domains, weak solutions have been constructed that eventually become smooth \cite{taowin_ev_consumption}.
For \eqref{KS}, the presence of logistic terms has been shown to exclude otherwise possible finite-time blow-up phenomena (cf. \cite{win_blowuphigherdim}, \cite{mizoguchi_winkler_13}) -- at least as long as $\mu $ is sufficiently large compared to the strength of the chemotactic effects (\cite{winkler_10_boundedness})
or if the dimension is $2$ (\cite{osaki_yagi_02}). If the quotient $\frac{\mu }{\chi }$ is sufficiently large, solutions to \eqref{KS} uniformly converge to the constant equilibrium (\cite{win_stability}); convergence rates have been considered in \cite{he_zheng}. Explicit largeness conditions on $\frac{\mu }{\chi }$ that ensure convergence, also for slightly more general source terms, can be found in \cite{lin_mu}, see also \cite{win_ksns_logsource}. For small $\mu >0$, at least global weak solutions are known to exist (\cite{lankeit_ev_smooth}), and in $3$-dimensional domains and for small $\kappa$, their large-time behaviour has been investigated (\cite{lankeit_ev_smooth}).
Also the chemotaxis-consumption model \eqref{sys.intro} has already been considered with nontrivial source terms in \cite{wang_khan_khan}. There it was proved that classical solutions exist globally and are bounded as long as \eqref{smallnessconditionforsourcefreemodel} holds -- which is the same condition as for $\kappa=\mu =0$, thus shedding no light on any possible interplay between chemotaxis and the population kinetics.
In a three-dimensional setting and in the presence of a Navier-Stokes fluid, in \cite{lankeit_fluid} it was recently possible to construct global weak solutions for any positive $\mu $, which moreover eventually become classical and uniformly converge to the constant equilibrium in the large-time limit.
It is the aim of the present article to prove the existence of global classical solutions if only $\mu $ is suitably large and to show their large-time behaviour. For the case of small $\mu >0$, we will prove the existence of global weak solutions (in the sense of Definition \ref{def:weaksol}). \\
\noindent\textbf{What largeness condition on $\mu $ might be sufficient for boundedness?} For the Keller-Segel type model \eqref{KS} the typical condition reads: 'If $\mu $ is large compared to $\chi $, then the solution is global and bounded, independent of initial data.' In order to see why this condition would be far less natural for \eqref{sys.intro}, let us suppose we are given suitably regular initial data $u_0$, $v_0$ and a corresponding solution $(u,v)$ of \[
\begin{cases} u_t=\Delta u - \chi \nabla\cdot(u\nabla v) + \kappa u-\mu u^2\\
v_t=\Delta v - uv\\
\partial_{\nu} u\big\rvert_{\partial\Omega}=\partial_{\nu} v\big\rvert_{\partial\Omega}=0\\
u(\cdot,0)=u_0, v(\cdot,0)=v_0,
\end{cases} \] and let us define \[
w:=\chi v. \] Then $(u,w)$ solves \[
\begin{cases} u_t=\Delta u - \nabla\cdot(u\nabla w) + \kappa u-\mu u^2\\
w_t=\Delta w - uw\\
\partial_{\nu} u\big\rvert_{\partial\Omega}=\partial_{\nu} w\big\rvert_{\partial\Omega}=0\\
u(\cdot,0)=u_0, w(\cdot,0)=\chi v_0,
\end{cases} \] which is the same system, only with different chemotaxis coefficient and rescaled initial data for the second component. Consequently, \textit{in \eqref{sys.intro}, large initial data equal high chemotactic strength}. Hence, there cannot be any condition for global existence which includes $\mu $ and $\chi $, but not $\norm[\Lom \infty]{v_0}$. In light of this discussion, the requirement in Theorem \ref{thm1} that $\mu $ be large with respect to $\chi \norm[\Lom \infty]{v_0}$ seems natural. On the other hand, this observation does not preclude conditions that involve neither $\chi $ nor $\norm[\Lom \infty]{v_0}$, and indeed $\mu >0$ is sufficient for the global existence of weak solutions.
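For the reader's convenience, the computation behind this rescaling is elementary: with $w=\chi v$ one has
\[
 w_t=\chi v_t=\chi\big(\Delta v-uv\big)=\Delta w-uw \qquad\text{and}\qquad \chi \nabla\cdot(u\nabla v)=\nabla\cdot\big(u\nabla(\chi v)\big)=\nabla\cdot(u\nabla w),
\]
so that the chemotactic coefficient is absorbed into the initial datum $w(\cdot,0)=\chi v_0$ of the second component, while the first equation keeps its form.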
The first main result of the present article is global existence of classical solutions, provided that $\mu $ is sufficiently large as compared to $\norm[\Lom\infty]{\chi v_0}$:
\begin{thm}\label{thm1}
Let $N\in \mathbb{N} $ and let $\Omega\subset \mathbb{R} ^N$ be a smooth, bounded domain. There are constants $k_1=k_1(N)$ and $k_2=k_2(N)$ such that the following holds: Whenever $\kappa\in\mathbb{R} $, $\chi >0$, and $\mu >0$ and initial data \begin{eqnarray}\label{id}
\left\{ \begin{array}{llc} u_0\in C^0(\overline\Omega), \quad { u_0> 0} \,\,\text{in}\,\,\overline{\Omega},\\ \displaystyle v_0\in C^1(\overline{\Omega}), \quad { v_0> 0 } \,\,\text{in}\,\,\overline{\Omega}
\end{array} \right. \end{eqnarray}
are such that \[
\mu >k_1(N)\norm[\Lom\infty]{\chi v_0}^{\frac1N} + k_2(N)\norm[\Lom\infty]{\chi v_0}^{2N}, \]
then the system
\begin{equation}\label{a} \left\{ \begin{array}{llc} u_t=\Delta u-\nabla\cdot\big(u\nabla v\big)+\kappa u-\mu u^2, &x\in \Omega, \,t>0,\\ \displaystyle v_t=\Delta v-uv , &x\in \Omega,\, t>0,\\
\displaystyle \partial_{\nu} u=\partial_{\nu} v=0, &x\in\partial\Omega,\, t>0,\\ \displaystyle u(x,0)=u_0(x),\,\, v(x,0)=v_0(x),
&x\in\Omega, \end{array} \right. \end{equation} has a unique global classical solution $(u,v)$ which is uniformly bounded in the sense that there is some constant $C>0$ such that \begin{eqnarray}\label{B}
\|u(\cdot,t)\|_{L^{\infty}(\Omega)}+\|v(\cdot,t)\|_{W^{1,\infty}(\Omega)}
\le C \qquad \mathrm{for}\,\,\mathrm{all} \quad t\in(0,\infty). \end{eqnarray} \end{thm}
\begin{remark}\label{R1} Here we have to leave open the question whether, for small values of $\mu >0$ and large $\chi v_0$, blow-up of solutions is possible at all. Consequently, the range of $\mu$ in this
result is not necessarily an optimal one. Nevertheless, the present condition can easily be made explicit (see Lemma \ref{lem:ge.for.positive.eps.or.large.mu}
and \eqref{eq:defk1k2}
for the values of $k_1$ and $k_2$). It seems worth pointing out that, in contrast to the condition \eqref{smallnessconditionforsourcefreemodel}, Theorem \ref{thm1} admits large values of $\chi v_0$, if only $\mu $ is appropriately large. \end{remark}
The second outcome of our analysis is concerned with the large time behaviour of global solutions and reads as follows:
\begin{thm}\label{thm3} Let $N\in \mathbb{N} $ and let $\Omega\subset\mathbb{R} ^N$ be a bounded smooth domain. Suppose that $\chi >0$, $\kappa>0$ and $\mu>0$. Let $(u,v)\in C^{2,1}(\overline{\Omega}\times(0,\infty ))\cap C^0(\overline{\Omega}\times[0,\infty))$ be any global bounded solution to \eqref{a} (in the sense that \eqref{B} is fulfilled) which obeys \eqref{id}. Then \begin{eqnarray}\label{stability1}
\Big\|u(\cdot,t)-\frac{\kappa}{\mu }\Big\|_{L^{\infty}(\Omega)}\rightarrow 0 \end{eqnarray} and \begin{eqnarray}\label{stability2}
\|v(\cdot,t)\|_{L^{\infty}(\Omega)}\rightarrow 0 \end{eqnarray} as $t\rightarrow \infty$. \end{thm}
\begin{remark}
This theorem in particular applies to the solutions considered in Theorem \ref{thm1}. \end{remark}
\begin{remark}
Boundedness is not necessary in the sense of \eqref{B}; in light of Lemma \ref{lem:ulp.to.boundedness}, the existence of $C>0$ and $p>N$ such that
\[
\norm[\Lom p]{u(\cdot,t)}\leq C\qquad \text{for all } t>0
\]
would be sufficient. \end{remark}
\noindent\textbf{Unconditional global weak solvability.}
As in the context of the classical Keller-Segel model \eqref{KS} (\cite{lankeit_ev_smooth}), global weak solutions to \eqref{a} can be shown to exist regardless of the size of initial data and for any positive $\mu$:
\begin{thm}\label{thm:weaksol}
Let $N\in\mathbb{N} $ and let $\Omega\subset \mathbb{R} ^N$ be a bounded smooth domain. Let $\chi >0$, $\kappa\in \mathbb{R} $, $\mu >0$ and assume that $u_0$, $v_0$ satisfy \eqref{id}. Then
system \eqref{a} has a global weak solution (in the sense of Definition \ref{def:weaksol} below). \end{thm}
These solutions, too, stabilize toward $(\frac{\kappa}{\mu },0)$ as $t\to\infty$, even though in a weaker sense than guaranteed by Theorem \ref{thm3} for classical solutions:
\begin{thm}\label{thm:weaksol-limit}
Let $N\in\mathbb{N} $ and let $\Omega\subset \mathbb{R} ^N$ be a bounded smooth domain. Let $\chi >0$, $\kappa>0$, $\mu >0$ and assume that $u_0$, $v_0$ satisfy \eqref{id}. Then for any $p\in[1,\infty)$ the weak solution $(u,v)$ to \eqref{a} that has been constructed during the proof of Theorem \ref{thm:weaksol} satisfies \[
\norm[\Lom p]{v(\cdot,t)}\to 0 \quad \text{ and } \int_t^{t+1} \norm[\Lom 2]{u(\cdot,s)-\frac{\kappa}{\mu }} ds \to 0 \] as $t\to \infty$. \end{thm}
\begin{remark}
Under the restriction $N=3$, the existence of global weak solutions that eventually become smooth and uniformly converge to $(\frac{\kappa}{\mu },0)$ has been proven in \cite{lankeit_fluid}, where a coupled chemotaxis-fluid model is treated. \end{remark}
{\textbf{Plan of the paper.}} In Section \ref{sec:prelim} we will prepare some general calculus inequalities. In the following for some $a>0$ we will then consider \begin{equation}\label{epssys}
\begin{cases} u_{\eps t}=\Delta u_{\eps } - \chi \nabla\cdot(u_{\eps }\nabla v_{\eps }) + \kappa u_{\eps } - \mu u_{\eps }^2 - \varepsilon u_{\eps }^2\ln au_{\eps } \\
v_{\eps t}=\Delta v_{\eps } - u_{\eps }v_{\eps }\\
\partial_{\nu} u_{\eps }\big\rvert_{\partial\Omega}=\partial_{\nu} v_{\eps }\big\rvert_{\partial\Omega}=0\\
u_{\eps }(\cdot,0)=u_0, v_{\eps }(\cdot,0)=v_0.
\end{cases} \end{equation}
For $\varepsilon =0$, this system reduces to \eqref{a}; for $\varepsilon \in(0,1)$ we will be able to derive global existence of solutions without any concern for the size of initial data and hence obtain a suitable stepping stone for the construction of weak solutions. Beginning the study of solutions to this system in Section \ref{sec:locex-andbasic} with a local existence result and elementary properties of the solutions, we will in Section \ref{sec:bdclasssol} consider a functional of the type $\int_\Omega u^p+\int_\Omega |\nabla v|^{2p}$ and finally, aided by estimates for the heat semigroup, obtain globally bounded solutions, thus proving Theorem \ref{thm1}. In Section \ref{sec:stabilization} where $\kappa$ is assumed to be positive, we will let $a:=\frac{\mu }{\kappa}$ and employ the functional \[
\mathcal{F}_{\varepsilon }(t)=\int_\Omega u_{\eps }(\cdot,t)-\frac{\kappa}{\mu }\int_\Omega \ln u_{\eps }(\cdot,t) + \frac{\kappa}{2\mu }\int_\Omega v_{\eps }^2(\cdot,t) \] in order to derive the stabilization result in Theorem \ref{thm3} and already prepare Theorem \ref{thm:weaksol-limit}. Section \ref{sec:weaksol}, finally, will be devoted to the construction of weak solutions to \eqref{a}, and to the proofs of Theorem \ref{thm:weaksol} and Theorem \ref{thm:weaksol-limit}.
\begin{remark}
In \eqref{epssys}, the additional term $-\varepsilon u_{\eps }^2\ln au_{\eps }$ could be replaced by $-\varepsilon \Phi(u_{\eps })$ with some other continuous function $\Phi$ which satisfies: $\Phi(s)\to 0$ as $s\searrow 0$, $\frac{\Phi(s)}{s^2}\to \infty$ as $s\to\infty$ and, for the stabilization results in Section \ref{sec:stabilization}, $\Phi<0$ on $(0,\frac{\kappa}{\mu})$ as well as $\Phi>0$ on $(\frac{\kappa}{\mu},\infty)$. \\
We will always let
\begin{equation}\label{defa}
a:=\begin{cases} \frac{\mu}{\kappa},& \text{if }\; \kappa>0\\ \mu&\text{if }\; \kappa\leq 0\end{cases}
\end{equation}
and note that the choice for the case $\kappa\leq 0$ was arbitrary and that in Sections \ref{sec:bdclasssol} and \ref{sec:weaksol}, the precise value of $a$ plays no important role. \end{remark}
\textbf{Notation.} For solutions of PDEs we will use $T_{\rm max}$ to denote their maximal time of existence (cf. also Lemma \ref{criterion}). Throughout the article we fix $N\in\mathbb{N}$ and a bounded, smooth domain $\Omega\subset\mathbb{R}^N$.
\section{General preliminaries}\label{sec:prelim} In this section we provide some estimates that are valid for all suitably regular functions and not only for solutions of the PDE under consideration.
\begin{lem}\label{lem:elementary.estimates}
a) For any $c\in C^2(\Omega)$: \begin{equation}\label{eq:Delta.Hessian}
|\Delta c|^2\leq N|D^2 c|^2 \quad \text{throughout } \Omega. \end{equation}
b) There are $C>0$ and $k>0$ such that every positive $c\in C^2(\overline{\Omega})$ fulfilling $\partial_{\nu} c=0$ on $\partial\Omega$ satisfies \begin{equation}\label{eq:lembdrytermvi}
-2\int_\Omega \frac{|\Delta c|^2}{c} +\int_\Omega \frac{|\nabla c|^2\Delta c}{c^2} \leq -k \int_\Omega c|D^2\ln c|^2 -k \int_\Omega\frac{|\nabla c|^4}{c^3} + C\int_\Omega c. \end{equation} \end{lem} \begin{proof} a) Straightforward calculations yield
\begin{align*}
\kl{\sum_{i=1}^N c_{x_ix_i}}^2=\sum_{i,j=1}^N c_{x_ix_i}c_{x_jx_j}\leq \sum_{i,j=1}^N \kl{\frac12 c_{x_ix_i}^2 + \frac12 c_{x_jx_j}^2}
= N\sum_{i=1}^N c_{x_ix_i}^2 \leq N\sum_{i,j=1}^N c_{x_ix_j}^2.
\end{align*} b) This is \cite[Lemma 2.7 vi)]{lankeit_fluid}. \end{proof}
Let us now derive the following interpolation inequality, on which we will rely in obtaining an estimate for $\int_\Omega u^p+\int_\Omega |\nabla v|^{2p}$ in Section \ref{sec:bdclasssol}. \begin{lem}\label{l_interpolation}
Let $q\in[1,\infty)$. Then for any $c\in C^2(\overline{\Omega} )$ satisfying $c\frac{\partial c}{\partial \nu}=0$ on $\partial \Omega$, the inequality \begin{eqnarray}\label{inter0}
\|\nabla c\|^{2q+2}_{L^{2q+2}(\Omega)} \le 2(4q^2+N)\|c\|^2_{L^{\infty}} \big\||\nabla c|^{q-1}D^2c\big\|^{2}_{L^2(\Omega)} \end{eqnarray} holds, where $D^2c$ denotes the Hessian of $c$. \end{lem} \begin{proof} Since $c\frac{\partial c}{\partial \nu}=0$ on $\partial \Omega$, an integration by parts yields
$$\|\nabla c\|^{2q+2}_{L^{2q+2}(\Omega)}=-\int_{\Omega}c|\nabla c|^{2q}\Delta c-2q\int_{\Omega}c |\nabla c|^{2q-2} \nabla c\cdot (D^2 c\cdot \nabla c).$$ Using Young's inequality and \eqref{eq:Delta.Hessian} we can estimate \begin{eqnarray}\label{inter1}
\Big|-\int_{\Omega}c|\nabla c|^{2q}\Delta c\Big| &\le &
\frac{1}{4}\int_{\Omega}|\nabla c|^{2q+2}+\int_{\Omega}c^2|\nabla c|^{2q-2}|\Delta c|^2\nonumber\\
&\le&
\frac{1}{4}\int_{\Omega}|\nabla c|^{2q+2}+N\|c\|^2_{L^{\infty}(\Omega)}\int_{\Omega}|\nabla c|^{2q-2}|D^2 c|^2. \end{eqnarray} Likewise, we see that \begin{eqnarray}\label{inter2}
\Big|-2q\int_{\Omega}c |\nabla c|^{2q-2} \nabla c\cdot (D^2 c\cdot
\nabla c)\Big| &\le &
\frac{1}{4}\int_{\Omega}|\nabla c|^{2q+2}+4q^2\|c\|^2_{L^{\infty}(\Omega)}\int_{\Omega}|\nabla c|^{2q-2}|D^2 c|^2. \end{eqnarray} In consequence, \eqref{inter1} and \eqref{inter2} prove \eqref{inter0}. \qquad \end{proof}
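We remark that the integration by parts at the beginning of the proof rests on the pointwise identity
\[
 \nabla\cdot\big(c\,|\nabla c|^{2q}\nabla c\big)=|\nabla c|^{2q+2}+c\,|\nabla c|^{2q}\Delta c+2q\,c\,|\nabla c|^{2q-2}\,\nabla c\cdot\big(D^2c\,\nabla c\big),
\]
which follows from $\nabla|\nabla c|^{2}=2D^2c\,\nabla c$, together with the fact that the boundary integral $\int_{\partial\Omega}c\,|\nabla c|^{2q}\frac{\partial c}{\partial\nu}$ vanishes due to the assumption $c\frac{\partial c}{\partial \nu}=0$ on $\partial\Omega$.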
\section{Local existence and basic properties of solutions}\label{sec:locex-andbasic} We first recall a result on local solvability of \eqref{epssys}: \begin{lem}\label{criterion} Let $u_0$, $v_0$ satisfy \eqref{id}, let $\kappa\in\mathbb{R} $, $\mu >0$, $\chi >0$ and $q>N$. Then for any $\varepsilon \in[0,1)$ there exist $T_{\rm max}\in (0,\infty]$ and a unique classical solution $(u_{\eps },v_{\eps })$ of system \eqref{epssys} with $a$ as in \eqref{defa} in $\Omega\times(0,T_{\rm max})$ such that \begin{eqnarray*} &&u_{\eps }\in C^{0}\big(\overline{\Omega} \times[0,T_{\rm max})\big)\cap C^{2,1}\big(\overline{\Omega} \times(0,T_{\rm max})\big),\\ &&v_{\eps }\in C^{0}\big(\overline{\Omega} \times[0,T_{\rm max})\big)\cap C^{2,1}\big(\overline{\Omega} \times(0,T_{\rm max})\big). \end{eqnarray*} Moreover, we have $u_{\eps }> 0$ and $v_{\eps } > 0$ in $\overline{\Omega} \times [0, T_{\rm max})$, and \begin{equation}\label{extcrit} \mathrm{if} \,\,\, T_{\rm max}<\infty,\,\,\, \mathrm{then}
\,\,\,\limsup_{t\nearrow T_{\rm max}}\kl{\|u_{\eps }(\cdot,t)\|_{L^{\infty}(\Omega)}+\|v_{\eps }(\cdot,t)\|_{W^{1,q}(\Omega)}}=\infty. \end{equation} \end{lem} \begin{proof}
Apart from minor adaptions necessary if $\varepsilon >0$ (see also \cite[Lemma 3.1]{win_ksns_logsource}), this lemma is contained in \cite[Lemma 2.1]{win_ctfluid}. \end{proof}
Even though the total mass is not conserved, an upper bound for it can be obtained easily:
\begin{lem}\label{lu1} Let $u_0$, $v_0$ satisfy \eqref{id}, let $\kappa\in\mathbb{R} $, $\mu >0$, $\chi >0$. Then for any $\varepsilon \in[0,\infty)$ the solution of \eqref{epssys} with $a$ as in \eqref{defa} satisfies
\begin{eqnarray}\label{2.1}
\int_\Omega u_{\eps }(x,t) dx\le \max\Big\{\frac{\kappa|\Omega|}{2\mu }+\sqrt{\kl{\frac{\kappa_+|\Omega|}{2\mu }}^2+\varepsilon \frac{|\Omega|}{2a^2e\mu }},\int_\Omega u_0 \Big\}=:m_{\varepsilon } \quad\mathrm{ for\,\,\, all }\quad t\in(0,T_{\rm max}). \end{eqnarray} \end{lem} \begin{proof}
Because $s^2\ln(as)\geq -\frac1{2a^2e}$ for all $s>0$, integrating the first equation in \eqref{epssys} over $\Omega$ and applying H\"{o}lder's inequality shows that
\ddt \int_\Omega u_{\eps }\leq \kappa\int_\Omega u_{\eps } - \frac{\mu }{|\Omega|}\kl{\int_\Omega u_{\eps }}^2+\frac{\varepsilon |\Omega|}{2a^2e}\quad \text{ on } (0,T_{\rm max}) \]
and the claim results from an ODI-comparison argument. \end{proof}
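For the reader's convenience, we make the two elementary steps explicit. First, minimizing $s\mapsto s^2\ln (as)$ over $s>0$ shows that the minimum is attained at $s=\frac{1}{a\sqrt{e}}$ with value $-\frac{1}{2a^2e}$, which is the inequality used above. Second, abbreviating $y(t):=\int_\Omega u_{\eps }(x,t) dx$, the differential inequality
\[
 y'(t)\leq \kappa y(t)-\frac{\mu }{|\Omega|}y(t)^2+\frac{\varepsilon |\Omega|}{2a^2e}, \qquad t\in(0,T_{\rm max}),
\]
has a right-hand side which is negative whenever $y(t)$ exceeds the unique positive root $y_*$ of the quadratic $y\mapsto \kappa y-\frac{\mu }{|\Omega|}y^2+\frac{\varepsilon |\Omega|}{2a^2e}$; hence $y(t)\leq\max\{y(0),y_*\}$ for all $t\in(0,T_{\rm max})$, which is a bound of the form stated in \eqref{2.1}.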
For the second component, even uniform boundedness can be deduced instantly:
\begin{lem}\label{lem:normvdecreases} Let $u_0$, $v_0$ satisfy \eqref{id}, let $\kappa\in\mathbb{R} $, $\mu >0$, $\chi >0$. Then for any $\varepsilon \in[0,1)$ the solution of \eqref{epssys} with $a$ as in \eqref{defa} satisfies
\begin{eqnarray}\label{2.2}
\|v_{\eps }(\cdot,t)\|_{L^{\infty}(\Omega)}\le \|v_0\|_{L^{\infty}(\Omega)} \qquad\mathrm{ for\,\,\, all }\quad t\in(0,T_{\rm max}) \end{eqnarray}
and \[
(0,T_{\rm max})\ni t\mapsto \norm[\Lom\infty]{v_{\eps }(\cdot,t)} \] is monotone decreasing. \end{lem} \begin{proof}
This is a consequence of the maximum principle and the nonnegativity of the solution. \end{proof}
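For the reader's convenience we sketch the argument: since $u_{\eps }\geq 0$ and $v_{\eps }\geq 0$,
\[
 v_{\eps t}=\Delta v_{\eps }-u_{\eps }v_{\eps }\leq \Delta v_{\eps } \qquad \text{in } \Omega\times(0,T_{\rm max}),
\]
so that for any $t_0\in[0,T_{\rm max})$ the constant $\|v_{\eps }(\cdot,t_0)\|_{L^{\infty}(\Omega)}$ is a spatially homogeneous supersolution of the Neumann heat equation which dominates $v_{\eps }(\cdot,t_0)$. The parabolic comparison principle hence yields $v_{\eps }(\cdot,t)\leq \|v_{\eps }(\cdot,t_0)\|_{L^{\infty}(\Omega)}$ throughout $[t_0,T_{\rm max})$, which gives both \eqref{2.2} and the asserted monotonicity.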
Also the gradient of $v$ can be controlled in an $L^2(\Omega)$-sense: \begin{lem}\label{ltv2} Let $u_0$, $v_0$ satisfy \eqref{id}, let $\kappa\in\mathbb{R} $, $\mu >0$, $\chi >0$. There exists a positive constant $M$ such that for all $\varepsilon \in[0,1)$ the solution of \eqref{epssys} with $a$ as in \eqref{defa} satisfies \begin{eqnarray}\label{2.tv2}
\int_{\Omega}|\nabla
v_{\eps }(\cdot,t)|^2\le M \qquad\mathrm{ for\,\,\, all }\quad t\in(0,T_{\rm max}). \end{eqnarray} \end{lem} \begin{proof} Integration by parts and the Young inequality result in \begin{eqnarray}\label{2.tv1}
\frac{d}{dt}\int_{\Omega}|\nabla v_{\eps }|^2&=& 2\int_{\Omega} \nabla v_{\eps }\cdot \nabla (\Delta v_{\eps }-u_{\eps }v_{\eps })\nonumber \\
&\le& -2\int_{\Omega}|\Delta v_{\eps }|^2 -2\int_{\Omega}|\nabla
v_{\eps }|^2+2\int_{\Omega} v_{\eps }(u_{\eps }-1)\Delta v_{\eps } \nonumber\\
&\le& -\int_{\Omega}|\Delta v_{\eps }|^2 -2\int_{\Omega}|\nabla v_{\eps }|^2 +\int_{\Omega} v_{\eps }^2(u_{\eps }-1)^2\nonumber\\
&\le& -\int_{\Omega}|\Delta v_{\eps }|^2 -2\int_{\Omega}|\nabla
v_{\eps }|^2+\|v_0\|_{L^{\infty}(\Omega)}^2\int_{\Omega}
u_{\eps }^2+2\|v_0\|_{L^{\infty}(\Omega)}^2\int_{\Omega}
u_{\eps }+\|v_0\|_{L^{\infty}(\Omega)}^2 \end{eqnarray} on $(0,T_{\rm max})$. Furthermore, \begin{eqnarray}\label{2.tv3}
\frac{\|v_0\|_{L^{\infty}(\Omega)}^2}{\mu}\frac{d}{dt}\int_{\Omega}
u_{\eps }\le \frac{\kappa_{+}\|v_0\|_{L^{\infty}(\Omega)}^2}{\mu}\int_{\Omega}
u_{\eps }-\|v_0\|_{L^{\infty}(\Omega)}^2\int_{\Omega} u_{\eps }^2-\frac{\varepsilon \norm[\Lom\infty]{v_0}^2}{\mu }\int_\Omega u_{\eps }^2\ln (au_{\eps }). \end{eqnarray} Adding \eqref{2.tv1} to \eqref{2.tv3} and taking into account that \[
-\frac{\varepsilon \norm[\Lom\infty]{v_0}^2}{\mu } s^2\ln as\leq \frac{\norm[\Lom\infty]{v_0}^2}{2ea^2\mu } \] for any $\varepsilon \in[0,1)$ and $s\geq 0$, we obtain that \begin{eqnarray*}
&&\frac{d}{dt}\Big\{\int_{\Omega}|\nabla
v_{\eps }|^2+\frac{\|v_0\|_{L^{\infty}(\Omega)}^2}{\mu}\int_{\Omega}u_{\eps }\Big\}\\
&& \le -\left(\int_{\Omega}|\nabla
v_{\eps }|^2+\frac{\|v_0\|_{L^{\infty}(\Omega)}^2}{\mu}\int_{\Omega}u_{\eps }\right)+\left(2\|v_0\|_{L^{\infty}(\Omega)}^2+\frac{\kappa_{+}+1}{\mu}\|v_0\|_{L^{\infty}(\Omega)}^2\right)\int_{\Omega}
u_{\eps }+\|v_0\|_{L^{\infty}(\Omega)}^2+\frac{|\Omega|\norm[\Lom\infty]{v_0}^2}{2\mu ea^2}. \end{eqnarray*} Since Lemma \ref{lu1} shows that $\int_\Omega u_{\eps }(x,t) dx\le m_1$ for any $\varepsilon \in[0,1)$ and $t\in(0,T_{\rm max})$, a comparison argument leads to \begin{eqnarray*}
&&\int_{\Omega}|\nabla
v_{\eps }|^2+\frac{\|v_0\|_{L^{\infty}(\Omega)}^2}{\mu}\int_{\Omega}u_{\eps } \\
&& \le \max \left\{\int_{\Omega}|\nabla v_0|^2+\frac{\|v_0\|_{L^{\infty}(\Omega)}^2}{\mu}\int_{\Omega}u_0,\,\kl{1+\frac{|\Omega|}{2ea^2\mu }}\|v_0\|_{L^{\infty}(\Omega)}^2+
\left(2\|v_0\|_{L^{\infty}(\Omega)}^2+\frac{\kappa_{+}+1}{\mu}\|v_0\|_{L^{\infty}(\Omega)}^2\right)m_1\right\}, \end{eqnarray*} holding true on $(0,T_{\rm max})$, which in particular implies \eqref{2.tv2}. \end{proof}
\section{Existence of a bounded classical solution}\label{sec:bdclasssol}
We now turn to the analysis of the coupled functional of
$\int_{\Omega}u^p$ and $\int_{\Omega}|\nabla v|^{2p}$. We first apply standard testing procedures to derive the time evolution of each quantity.
\begin{lem}\label{l3.1} Let $u_0$, $v_0$ satisfy \eqref{id}, let $\kappa\in\mathbb{R} $, $\mu >0$, $\chi >0$. For any $p\in[1,\infty)$, any $\varepsilon \in[0,1)$, we have that the solution of \eqref{epssys} with $a$ as in \eqref{defa} satisfies \begin{equation}\label{3.1}
\frac{d}{dt}\int_{\Omega}u_{\eps }^p+\frac{2(p-1)}{p}\int_{\Omega}|\nabla
u_{\eps }^{\frac{p}{2}}|^2\le \frac{p(p-1)}{2} \chi
^2\int_{\Omega}u_{\eps }^p|\nabla v_{\eps }|^2+p\kappa\int_{\Omega}u_{\eps }^p-p\mu \int_{\Omega}u_{\eps }^{p+1} -\varepsilon p\int_\Omega u_{\eps }^{p+1}\ln au_{\eps } \end{equation} on $(0,T_{\rm max})$. \end{lem} \begin{proof} Testing the first equation in \eqref{epssys} against $u_{\eps }^{p-1}$ and using Young's inequality, we can obtain \begin{align}\label{3.1''} \frac{1}{p}\frac{d}{dt}\int_{\Omega}u_{\eps }^p
&=-(p-1)\int_{\Omega}u_{\eps }^{p-2}|\nabla
u_{\eps }|^2+(p-1)\chi \int_{\Omega}u_{\eps }^{p-1}\nabla u_{\eps } \cdot \nabla v_{\eps }+\kappa\int_{\Omega}u_{\eps }^p-\mu\int_{\Omega}u_{\eps }^{p+1}\nonumber\\ &\quad -\varepsilon \int_\Omega u_{\eps }^{p+1}\ln (au_{\eps })\nonumber\\
&\le -(p-1)\int_{\Omega}u_{\eps }^{p-2}|\nabla
u_{\eps }|^2+\frac{p-1}{2}\int_{\Omega}u_{\eps }^{p-2}|\nabla
u_{\eps }|^2+\frac{p-1}{2} \chi^2\int_{\Omega}u_{\eps }^{p}|\nabla
v_{\eps }|^2 \nonumber\\ &\quad+\kappa\int_{\Omega}u_{\eps }^p-\mu\int_{\Omega}u_{\eps }^{p+1}-\varepsilon \int_\Omega u_{\eps }^{p+1}\ln (au_{\eps }) \end{align} on $(0,T_{\rm max})$, which by using the fact that
$$\int_{\Omega}u_{\eps }^{p-2}|\nabla
u_{\eps }|^2=\frac{4}{p^2}\int_{\Omega}|\nabla u_{\eps }^{\frac{p}{2}}|^2 \quad \text{ on } (0,T_{\rm max})$$ directly results in \eqref{3.1}. \end{proof}
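We remark that the second term on the right-hand side of \eqref{3.1''} arises from the chemotaxis term through an integration by parts, in which the boundary integral vanishes by the homogeneous Neumann conditions:
\[
 -\chi \int_{\Omega}u_{\eps }^{p-1}\nabla\cdot(u_{\eps }\nabla v_{\eps })=\chi \int_{\Omega}u_{\eps }\nabla u_{\eps }^{p-1}\cdot\nabla v_{\eps }=(p-1)\chi \int_{\Omega}u_{\eps }^{p-1}\nabla u_{\eps }\cdot\nabla v_{\eps }.
\]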
\begin{lem}\label{l3.2} Let $u_0$, $v_0$ satisfy \eqref{id}, let $\kappa\in\mathbb{R} $, $\mu >0$, $\chi >0$.
For any $p\in[1,\infty)$, any $\varepsilon \in[0,1)$, we have that the solution of \eqref{epssys} with $a$ as in \eqref{defa} satisfies \begin{eqnarray}\label{3.1'}
\frac{d}{dt}\int_{\Omega} |\nabla v_{\eps }|^{2p}+p\int_{\Omega}|\nabla
v_{\eps }|^{2p-2}|D^2 v_{\eps }|^2 \le p(p+N-1)\|v_0\|_{L^{\infty}(\Omega)}^2\int_{\Omega}u_{\eps }^2|\nabla
v_{\eps }|^{2p-2} \quad \text{on } (0,T_{\rm max}). \end{eqnarray} \end{lem} \begin{proof} We differentiate the second equation in \eqref{epssys} to compute
$$(|\nabla v_{\eps }|^2)_t=2\nabla v_{\eps }\cdot \nabla \Delta v_{\eps }-2\nabla v_{\eps }\cdot \nabla (u_{\eps }v_{\eps })=\Delta |\nabla v_{\eps }|^2-2|D^2v_{\eps }|^2-2\nabla v_{\eps }\cdot \nabla (u_{\eps }v_{\eps })
\quad \text{in } \Omega\times (0,T_{\rm max}).$$ Upon multiplication by $(|\nabla v_{\eps }|^2)^{p-1}$ and integration, this leads to \begin{equation}\label{3.2}
\frac{1}{p}\frac{d}{dt} \int_{\Omega}|\nabla
v_{\eps }|^{2p}+(p-1)\int_{\Omega} |\nabla v_{\eps }|^{2p-4}\big|\nabla|\nabla
v_{\eps }|^2\big|^2+2\int_{\Omega}|\nabla v_{\eps }|^{2p-2}|D^2v_{\eps }|^2\le
-2\int_{\Omega}|\nabla v_{\eps }|^{2p-2} \nabla v_{\eps } \cdot \nabla(u_{\eps }v_{\eps }) \end{equation} on $(0,T_{\rm max})$. Then integrating by parts, we achieve \begin{align*}
-2\int_{\Omega}|\nabla v_{\eps }|^{2p-2} \nabla v_{\eps } \cdot
\nabla(u_{\eps }v_{\eps })&=2\int_{\Omega} u_{\eps }v_{\eps }|\nabla v_{\eps }|^{2p-2} \Delta
v_{\eps }+2(p-1)\int_{\Omega} u_{\eps }v_{\eps }|\nabla v_{\eps }|^{2p-4}\nabla v_{\eps }\cdot
\nabla|\nabla v_{\eps }|^2\\
&\le 2\|v_0\|_{L^{\infty}(\Omega)}\int_{\Omega} u_{\eps }|\nabla
v_{\eps }|^{2p-2} |\Delta
v_{\eps }|+2(p-1)\|v_0\|_{L^{\infty}(\Omega)}\int_{\Omega} u_{\eps }|\nabla
v_{\eps }|^{2p-3}\cdot \big|\nabla|\nabla v_{\eps }|^2\big| \end{align*}
throughout $(0,T_{\rm max})$, where we have used Lemma \ref{lem:normvdecreases}.
Next by Young's inequality and Lemma \ref{lem:elementary.estimates} a) we have that
$$2\|v_0\|_{L^{\infty}(\Omega)}\int_{\Omega} u_{\eps }|\nabla
v_{\eps }|^{2p-2} |\Delta v_{\eps }|\le \int_{\Omega} |\nabla v_{\eps }|^{2p-2} |D^2
v_{\eps }|^2+N\|v_0\|^2_{L^{\infty}(\Omega)}\int_{\Omega}u_{\eps }^2 |\nabla
v_{\eps }|^{2p-2},$$
and \begin{eqnarray*}
&& 2(p-1)\|v_0\|_{L^{\infty}(\Omega)}\int_{\Omega} u_{\eps }|\nabla
v_{\eps }|^{2p-3}\cdot \big|\nabla|\nabla v_{\eps }|^2\big|\\
&&\le (p-1)\int_{\Omega} |\nabla v_{\eps }|^{2p-4}\big|\nabla|\nabla
v_{\eps }|^2\big|^2+(p-1)\|v_0\|^2_{L^{\infty}(\Omega)}\int_{\Omega}u_{\eps }^2
|\nabla v_{\eps }|^{2p-2}. \end{eqnarray*}
Thereupon, \eqref{3.2} implies that
$$\frac{d}{dt} \int_{\Omega}|\nabla
v_{\eps }|^{2p}+p\int_{\Omega}|\nabla v_{\eps }|^{2p-2}|D^2v_{\eps }|^2\le p(p+N-1)\|v_0\|^2_{L^{\infty}(\Omega)}\int_{\Omega}u_{\eps }^2 |\nabla
v_{\eps }|^{2p-2}$$ on $(0,T_{\rm max})$. \end{proof}
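We also remark that the first displayed identity in the above proof relies on the pointwise Bochner-type formula
\[
 \Delta |\nabla w|^2=2|D^2w|^2+2\nabla w\cdot\nabla\Delta w,
\]
valid for any smooth function $w$, which can be verified by writing $\Delta|\nabla w|^2=\sum_{i,j}\partial_{x_j}\partial_{x_j}(w_{x_i})^2=2\sum_{i,j}\big(w_{x_ix_j}^2+w_{x_i}w_{x_ix_jx_j}\big)$.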
Next we will show that if $\mu$ is suitably large, then all integrals on the right side in \eqref{3.1} and \eqref{3.1'}
can adequately be estimated in terms of the respective dissipated quantities on the left, in consequence implying an $L^p$ estimate for $u_{\eps }$ and an $L^{2p}$ estimate for $\nabla v_{\eps }$.
\begin{lem}\label{l3.5} Let $p>1$. With \begin{align}\label{eq:defk1k2}
k_1(p,N):=\frac{p(p-1)}{(p+1)}\left(\frac{4(p-1)(4p^2+N)}{p+1}\right)^{\frac{1}{p}}\nonumber\\
k_2(p,N):=\frac{4(p+N-1)}{p+1}\left(\frac{8(p-1)(p+N-1)(4p^2+N)}{p+1}\right)^{\frac{p-1}{2}} \end{align} the following holds: If $\mu >0$, $\chi >0$ and the positive function $v_0\in C^1(\overline{\Omega})$ fulfil \begin{equation}\label{cond:mularge}
\mu\ge k_1(p,N)\|\chi v_0\|^{\frac{2}{p}}_{L^\infty(\Omega)}+k_2(p,N)\|\chi v_0\|^{2p}_{L^\infty(\Omega)}, \end{equation}
then for every $\kappa\in \mathbb{R} $, $0<u_0\in C^0(\overline{\Omega})$ there is $C>0$ such that for every $\varepsilon \in[0,1)$ the solution $(u_{\eps },v_{\eps })$ of \eqref{epssys} with $a$ as in \eqref{defa} satisfies \[
\int_\Omega u_{\eps }^p(\cdot,t) + \int_\Omega |\nabla v_{\eps }(\cdot,t)|^{2p}\leq C \qquad \text{on } (0,T_{\rm max}). \] If, however, $\mu >0$, $\chi >0$ and $0<v_0\in C^1(\overline{\Omega})$ do not satisfy \eqref{cond:mularge}, then for every $\varepsilon \in(0,1)$, $\kappa\in \mathbb{R} $, $0<u_0\in C^0(\overline{\Omega})$ there is $c_\varepsilon>0$ such that the solution $(u_{\eps },v_{\eps })$ of \eqref{epssys} with $a$ as in \eqref{defa} satisfies \[
\int_\Omega u_{\eps }^p(\cdot,t) + \int_\Omega |\nabla v_{\eps }(\cdot,t)|^{2p}\leq c_\varepsilon \qquad \text{on } (0,T_{\rm max}). \] \end{lem} \begin{proof} Lemma \ref{l3.1} and \ref{l3.2} show that \begin{align}\label{3.12}
& \frac{d}{dt}\Big(\int_{\Omega}u_{\eps }^p+\chi ^{2p}\int_{\Omega}|\nabla
v_{\eps }|^{2p}\Big)+\frac{2(p-1)}{p}\int_{\Omega}|\nabla
u_{\eps }^{\frac{p}{2}}|^2+p\chi ^{2p}\int_{\Omega}|\nabla v_{\eps }|^{2p-2}|D^2 v_{\eps }|^2 \nonumber\\
&\le \frac{p(p-1)}{2} \chi ^2\int_{\Omega}u_{\eps }^p|\nabla
v_{\eps }|^2+p(p+N-1)\|v_0\|_{L^{\infty}(\Omega)}^2\chi ^{2p}\int_{\Omega}u_{\eps }^2|\nabla
v_{\eps }|^{2p-2}\nonumber\\ &+p\kappa\int_{\Omega}u_{\eps }^p-p\mu \int_{\Omega}u_{\eps }^{p+1}- \varepsilon p\int_\Omega u_{\eps }^{p+1}\ln au_{\eps } \end{align} throughout $(0,T_{\rm{max}})$. Using Young's inequality, we can assert that for any $\delta_1>0$, \begin{eqnarray}\label{3.13}
\frac{p(p-1)}{2} \chi ^{2}\int_{\Omega}u_{\eps }^p|\nabla
v_{\eps }|^2\le \frac{p(p-1)\delta_1 ^{p+1}}{2(p+1)}\chi ^{2p}\int_{\Omega}|\nabla
v_{\eps }|^{2(p+1)}+\frac{p^2(p-1)}{2(p+1)}\Big(\frac{1}{\delta_1}\Big)^{\frac{p+1}{p}}\chi ^{\frac2p}\int_{\Omega}u_{\eps }^{p+1} \end{eqnarray} on $(0,T_{\rm max})$. We then apply Lemma
\ref{l_interpolation} and $\|v_{\eps }(\cdot,t)\|_{L^{\infty}(\Omega)}\le
\|v_0\|_{L^{\infty}(\Omega)}$ to obtain
$$ \frac{p(p-1)\delta_1 ^{p+1}}{2(p+1)}\chi ^{2p}\int_{\Omega}|\nabla
v_{\eps }|^{2(p+1)}\le \frac{p(p-1)(4p^2+N)\delta_1
^{p+1}\|v_0\|^2_{L^\infty(\Omega)}}{(p+1)}\chi ^{2p}\int_{\Omega}|\nabla
v_{\eps }|^{2p-2}|D^2v_{\eps }|^2 $$ for all $t\in (0,T_{\rm max})$. If we let
$\delta_1=\left(\frac{p+1}{4(p-1)(4p^2+N)\|v_0\|^2_{L^\infty(\Omega)}}\right)^{\frac{1}{p+1}}$, \eqref{3.13} shows that \begin{equation}\label{3.14}
\frac{p(p-1)}{2}\chi ^2 \int_{\Omega}u_{\eps }^p|\nabla v_{\eps }|^2\le
\frac{p}{4}\chi ^{2p}\int_{\Omega}|\nabla
v_{\eps }|^{2p-2}|D^2v_{\eps }|^2+\frac{p^2(p-1)}{2(p+1)}\left(\frac{4(p-1)(4p^2+N)}{p+1}\right)^{\frac{1}{p}}\|v_0\|^{\frac{2}{p}}_{L^\infty(\Omega)}\chi ^{\frac2p}\int_{\Omega}u_{\eps }^{p+1} \end{equation} on $(0,T_{\rm max})$. Similarly, for any $\delta_2>0$ we have \begin{eqnarray}\label{3.15}
&& p(p+N-1)\|v_0\|_{L^{\infty}(\Omega)}^2\chi ^{2p}\int_{\Omega}u_{\eps }^2|\nabla
v_{\eps }|^{2p-2} \nonumber\\ && \le \frac{p(p-1)(p+N-1)\delta_2
^{\frac{p+1}{p-1}}\|v_0\|^2_{L^\infty(\Omega)}}{p+1}\chi
^{2p}\int_{\Omega}|\nabla
v_{\eps }|^{2(p+1)}\nonumber\\ &&\quad
+\frac{2p(p+N-1)\|v_0\|^2_{L^\infty(\Omega)}}{p+1}\Big(\frac{1}{\delta_2}\Big)^{\frac{p+1}{2}}\chi ^{2p}\int_{\Omega}u_{\eps }^{p+1} \end{eqnarray} on $(0,T_{\rm max})$. Using Lemma \ref{l_interpolation} once more and taking
$\delta_2=\left(\frac{p+1}{8(p-1)(p+N-1)(4p^2+N)\|v_0\|^4_{L^\infty(\Omega)}}\right)^{\frac{p-1}{p+1}}$, we can obtain from \eqref{3.15} that \begin{eqnarray}\label{3.17}
&& p(p+N-1)\|v_0\|_{L^{\infty}(\Omega)}^2\chi ^{2p}\int_{\Omega}u_{\eps }^2|\nabla
v_{\eps }|^{2p-2}\nonumber\\
&& \le \frac{p}{4}\chi ^{2p}\int_{\Omega}|\nabla
v_{\eps }|^{2p-2}|D^2v_{\eps }|^2\nonumber\\
&&\quad+\frac{2p(p+N-1)}{p+1}\left(\frac{8(p-1)(p+N-1)(4p^2+N)}{p+1}\right)^{\frac{p-1}{2}}\|v_0\|^{2p}_{L^\infty(\Omega)}\chi ^{2p}\int_{\Omega}u_{\eps }^{p+1} \end{eqnarray} on $(0,T_{\rm max})$. Combining inequalities \eqref{3.12}, \eqref{3.14} and \eqref{3.17}, we arrive at \begin{align}\label{eq:diffineq}
&\frac{d}{dt}\kl{\int_{\Omega}u_{\eps }^p+\chi ^{2p}\int_{\Omega}|\nabla
v_{\eps }|^{2p}}+\frac{2(p-1)}{p}\int_{\Omega}|\nabla
u_{\eps }^{\frac{p}{2}}|^2+\frac{p}{2}\chi ^{2p}\int_{\Omega}|\nabla
v_{\eps }|^{2p-2}|D^2 v_{\eps }|^2 \\\nonumber &\le \frac{p}2\kl{k_1(p,N)\norm[\Lom\infty]{\chi v_0}^{\frac2p} + k_2(p,N)\norm[\Lom\infty]{\chi v_0}^{2p} - \mu }\int_\Omega u_{\eps }^{p+1} - \varepsilon p\int_\Omega u_{\eps }^{p+1}\ln au_{\eps } + p\kappa\int_\Omega u_{\eps }^p - \frac{\mu p}2\int_\Omega u_{\eps }^{p+1} \end{align} on $(0,T_{\rm{max}})$.
We can moreover invoke the Poincar\'{e} inequality along with Lemma \ref{lu1} to estimate \begin{eqnarray*}
\int_{\Omega}u_{\eps }^p = \|u_{\eps }^{\frac{p}{2}}\|^2_{L^2(\Omega)}\le c_1\big(\|\nabla
u_{\eps }^{\frac{p}{2}}\|^2_{L^2(\Omega)}+\|u_{\eps }^{\frac{p}{2}}\|^2_{L^{\frac{2}{p}}(\Omega)}\big)\le c_2\Big(\int_{\Omega}|\nabla u_{\eps }^{\frac{p}{2}}|^2+1\Big) \qquad \text{on } (0,T_{\rm max}) \end{eqnarray*} with some $c_1>0$ and $c_2>0$. In a quite similar way, using Lemma \ref{ltv2} we obtain constants $c_3>0$ and $c_4>0$ such that \begin{eqnarray*}
\int_{\Omega}|\nabla v_{\eps }|^{2p} &=& \big\||\nabla
v_{\eps }|^{p}\big\|^2_{L^2(\Omega)}\\
&\le& c_3\big(\big\|\nabla |\nabla v_{\eps }|^{p}
\big\|^2_{L^2(\Omega)}+\big\||\nabla
v_{\eps }|^{p}\big\|^2_{L^{\frac{2}{p}}(\Omega)}\big)\\
&\le& c_4\Big(\int_{\Omega}\big|\nabla |\nabla
v_{\eps }|^{p}\big|^2+1\Big)\\
&=& c_4\Big(p^2\int_{\Omega}|\nabla v_{\eps }|^{2p-4}|D^2v_{\eps } \nabla
v_{\eps }|^2+1\Big)\\
&\le& c_4\Big(p^2\int_{\Omega}|\nabla v_{\eps }|^{2p-2}|D^2v_{\eps } |^2+1\Big)
\qquad \text{on } (0,T_{\rm max}). \end{eqnarray*}
Introducing $c_5:=\min\set{\frac{2(p-1)}{c_2p},\frac{p}{2c_4}}$ and abbreviating $y_{\eps }(t):=\int_\Omega u_{\eps }^p+\int_\Omega |\nabla v_{\eps }|^{2p}$, we thus obtain from \eqref{eq:diffineq} that \[
y_{\eps }'(t)+c_5 y_{\eps }(t) \leq K \qquad \text{for all } t\in(0,T_{\rm max}), \] where \[
K:=\begin{cases}
p|\Omega|\cdot\kl{\sup_{s>0} (\kappa s^p -\frac{\mu }2s^{p+1}) + |\inf_{s>0} s^{p+1} \ln s|}, &\text{ if \eqref{cond:mularge}}\\%\mu >k_1(p,N)\|v_0\|^{\frac{2}{p}}_{L^\infty(\Omega)}+k_2(p,N)\|v_0\|^{2p}_{L^\infty(\Omega)}\\
p|\Omega| \sup_{s>0} \kl{\kl{\frac12 k_1(p,N)\norm[\Lom\infty]{\chi v_0}^{\frac2p} + \frac12 k_2(p,N)\norm[\Lom\infty]{\chi v_0}^{2p}-\mu } s^{p+1} + \kappa s^p -\varepsilon s^{p+1}\ln as}&\text{ else}.
\end{cases} \] In consequence, \[
y_{\eps }(t)\leq \max\set{y_{\eps }(0);\frac{K}{c_5}} \] for all $t\in(0,T_{\rm max})$. We note that $K$ depends on $\varepsilon $ if and only if \eqref{cond:mularge} is not satisfied. \end{proof}
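We remark that the comparison step at the end of the preceding proof is elementary: multiplying the differential inequality $y_{\eps }'+c_5 y_{\eps }\leq K$ by the integrating factor $e^{c_5 t}$ and integrating over $(0,t)$ shows
\[
 y_{\eps }(t)\leq y_{\eps }(0)e^{-c_5 t}+\frac{K}{c_5}\kl{1-e^{-c_5 t}}\leq \max\set{y_{\eps }(0);\frac{K}{c_5}} \qquad \text{for all } t\in(0,T_{\rm max}).
\]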
The previous lemma ensures boundedness of $u$ in some $L^p$-space for finite $p$ only. Fortunately, this is already sufficient for the solution to be bounded -- and global.
\begin{lem}\label{lem:ulp.to.boundedness}
Let $T\in(0,\infty]$, $p>N$, $M>0$, $a>0$, $\kappa\in\mathbb{R} $, $\mu >0$. Then there is $C>0$ with the following property:
If for some $\varepsilon \in[0,1)$, the function $(u_{\eps },v_{\eps })\in (C^0(\overline{\Omega}\times[0,T))\cap C^{2,1}(\overline{\Omega}\times(0,T)))^2$ is a solution to \eqref{epssys} with $a$ as in \eqref{defa} such that \[
0\leq u_{\eps },\,\, 0\leq v_{\eps } \text{ in }\Omega\times(0,T)\text{ and } \int_\Omega u_{\eps }^p(\cdot,t)\leq M \text{ for all } t\in (0,T), \] then \[
\norm[\Lom\infty]{u_{\eps }(\cdot,t)} + \norm[\Lom\infty]{\nabla v_{\eps }(\cdot,t)}\leq C \qquad \text{for all } t\in(0,T). \] \end{lem} \begin{proof} We use the standard estimate for the Neumann heat semigroup (\cite[Lemma 1.3]{win_aggregationvs}) to conclude that with some $c_1>0$ \begin{align*}
\|\nabla v_{\eps }(\cdot,t)\|_{L^{\infty}(\Omega)}&\le \|\nabla e^{t\triangle} v_{\eps }(\cdot,0)\|_{L^{\infty}(\Omega)}+\int^t_0 \|\nabla e^{(t-s)\triangle}u_{\eps }(\cdot,s)v_{\eps }(\cdot,s)\|_{L^{\infty}(\Omega)}\\
&\le \norm[\Lom\infty]{\nabla v_0}+c_1\int^t_0 \Big(1+(t-s)^{-\frac{1}{2}-\frac N{2p}}\Big)e^{-\lambda_1(t-s)}\|u_{\eps }(\cdot,s)v_{\eps }(\cdot,s)\|_{L^{p}(\Omega)} \quad \text{for all } t\in(0,T), \end{align*} where $\lambda_1$ denotes the first nonzero eigenvalue of $-\Delta$ in $\Omega$ under the homogeneous Neumann boundary conditions. Due to Lemma \ref{lem:normvdecreases} and the condition on $u_{\eps }$, we obtain $c_2>0$ such that \begin{equation}\label{vlinftybound}
\norm[\Lom \infty]{\nabla v_{\eps }(\cdot,t)}\leq c_2 \quad \text{for all } t\in (0,T). \end{equation} In order to obtain a bound for $u_{\eps }$, we use the variation-of-constants formula to represent $u_{\eps }(\cdot,t)$ as \begin{eqnarray}\label{duhamel.u} u_{\eps }(\cdot,t)&=&e^{(t-t_0)\Delta}u_{\eps }(\cdot,t_0)-\int^t_{t_0} e^{(t-s)\Delta}\nabla\cdot(u_{\eps }(\cdot,s)\nabla v_{\eps }(\cdot,s))ds\nonumber\\ &\quad&+\int^t_{t_0}e^{(t-s)\Delta}(\kappa u_{\eps }(\cdot,s)-\mu u_{\eps }^2(\cdot,s)-\varepsilon u_{\eps }^2(\cdot,s)\ln au_{\eps }(\cdot,s))ds, \end{eqnarray} for each $t\in (0,T)$, where $t_0=(t-1)_{+}$.
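In the estimates below we shall use the elementary observation that $s\mapsto s^2\ln (as)$ attains its minimum on $(0,\infty)$ at $s=\frac{1}{a\sqrt{e}}$, so that
\[
 s^2\ln (as)\geq -\frac{1}{2a^2e} \qquad \text{and hence} \qquad -\varepsilon s^2\ln (as)\leq \frac{1}{2a^2e} \qquad \text{for all } s>0 \text{ and } \varepsilon \in[0,1).
\]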
Due to the estimate \[
\kappa s-\mu s^2-\varepsilon s^2\ln as\leq \frac1{2a^2e} + \sup_{\xi >0} (\kappa\xi -\mu \xi ^2)=:c_3, \] positivity of the heat semigroup ensures that \begin{equation}\label{uestimate:inhomogeneity}
\int_{t_0}^t e^{(t-s)\Delta }(\kappa u_{\eps }(\cdot,s)-\mu u_{\eps }^2(\cdot,s)-\varepsilon u_{\eps }^2(\cdot,s)\ln au_{\eps }(\cdot,s)) \leq c_3(t-t_0)\leq c_3. \end{equation} Moreover, from the maximum principle we can easily infer that \begin{equation}\label{uestimate:initdata.smallt}
\norm[\Lom\infty]{e^{(t-t_0)\Delta} u_{\eps }(\cdot,t_0)}=\norm[\Lom\infty]{e^{t\Delta} u_0} \leq \norm[\Lom\infty]{u_0} \text{ if } t\in[0,1] \end{equation} and that with $c_4>0$ taken from \cite[Lem. 1.3]{win_aggregationvs} \begin{equation}\label{uestimate:initdata.larget}
\norm[\Lom\infty]{e^{(t-t_0)\Delta}u_{\eps }(\cdot,t_0)}=\norm[\Lom\infty]{e^{1\cdot\Delta} u_{\eps }(\cdot,t_0)} \leq c_4 (1+1^{-\frac N2})
\norm[\Lom 1]{u_{\eps }(\cdot,t_0)}\leq 2c_4 m_1, \end{equation} whenever $t>1$ and with $m_1$ as in \eqref{2.1}.
Finally, we estimate the second integral on the right-hand side of \eqref{duhamel.u}. \cite[Lemma 1.3]{win_aggregationvs} provides $c_5>0$ fulfilling \begin{align}\label{uestimate:chemotaxisterm}
\Big\|\int^t_{t_0} e^{(t-s)\Delta}\nabla\cdot(u_{\eps }(\cdot,s)\nabla
v_{\eps }(\cdot,s))ds\Big\|_{L^{\infty}(\Omega)}&\le c_5\int^t_{t_0}
(t-s)^{-\frac{1}{2}-\frac{N}{2p}}\|u_{\eps }(\cdot,s)\nabla
v_{\eps }(\cdot,s)\|_{L^{p}(\Omega)}ds\nonumber\\ &\leq c_5M c_2\int_0^1\sigma^{-\frac12-\frac N{2p}} d\sigma =:c_6 \end{align} for $t\in(0,T)$. In view of \eqref{duhamel.u}, \eqref{uestimate:initdata.smallt}, \eqref{uestimate:initdata.larget}, \eqref{uestimate:chemotaxisterm}, we have obtained that \[
0\leq u_{\eps }(\cdot,t)\leq \max\set{\norm[\Lom\infty]{u_0}, 2c_4m_1} + c_6 + c_3 \] holds for any $t\in(0,T)$, which combined with \eqref{vlinftybound} is the desired conclusion. \end{proof}
In fact, the assumption of Lemma \ref{lem:ulp.to.boundedness} suffices for even higher regularity, as we will see in Lemma \ref{lem:holder}. For the moment we return to the proof of global existence of solutions.
\begin{lem}\label{lem:ge.for.positive.eps.or.large.mu}
Let $\varepsilon \in(0,1)$ and let $a$ be as in \eqref{defa} or let $\varepsilon =0$ and $\mu >k_1(N,N)\norm[\Lom\infty]{\chi v_0}^{\frac2N}+k_2(N,N)\norm[\Lom\infty]{\chi v_0}^{2N}$, where $k_1$, $k_2$ are as in Lemma \ref{l3.5}. Then the classical solution to \eqref{epssys} given by Lemma \ref{criterion} is global and bounded. \end{lem} \begin{proof}
By continuity, there is $p>N$ such that $\mu >k_1(p,N)\norm[\Lom\infty]{\chi v_0}^{\frac2p}+k_2(p,N)\norm[\Lom\infty]{\chi v_0}^{2p}$, and Lemma \ref{l3.5} shows that $\int_\Omega u_{\eps }^p$ is bounded on $(0,T_{\rm max})$. Lemma \ref{lem:ulp.to.boundedness} together with Lemma \ref{lem:normvdecreases} turns this into a uniform bound on $\norm[\Lom \infty]{u_{\eps }(\cdot,t)}+\norm[W^{1,\infty}(\Omega)]{v_{\eps }(\cdot,t)}$ on $(0,T_{\rm max})$, so that the extensibility criterion \eqref{extcrit} shows that $T_{\rm max}=\infty$. \end{proof}
\begin{proof}[Proof of Theorem \ref{thm1}] Theorem \ref{thm1} is the case $\varepsilon =0$ in Lemma \ref{lem:ge.for.positive.eps.or.large.mu}.\end{proof}
\section{Stabilization}\label{sec:stabilization}
In this section, we shall consider the large time asymptotic stabilization of any global classical bounded solution.
In a first step we derive uniform Hölder bounds that will facilitate convergence. After that, we have to ensure that solutions actually converge, and in particular must identify their limit. In the spirit of the persistence-of-mass result in \cite{taowin_persistence}, showing that $v\to 0$ as $t\to \infty$ would be possible by relying on a uniform lower bound for $\int_\Omega u$ and finiteness of $\int_0^\infty\int_\Omega uv$ (see also \cite[Lemmata 3.2 and 3.3]{lankeit_fluid}). We will instead focus on other information that can be obtained from the following functional of a type already employed in \cite{lankeit_fluid} (after the example of \cite{win_ksns_logsource}), namely \begin{equation}\label{eq:defF}
\mathcal{F}_{\varepsilon}(t):=\int_{\Omega}u_{\eps }(\cdot,t)-\frac{\kappa}{\mu }\int_{\Omega}\ln u_{\eps }(\cdot,t)+\frac{\kappa}{2\mu }\int_{\Omega}v_{\eps }^2(\cdot,t). \end{equation} This way, in Lemma \ref{lem:vtozero} we will achieve a convergence result for $v_{\eps }$ that will also be useful in the investigation of the large time behaviour of weak solutions in Section \ref{sec:weaksol}.
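Let us note at once that for $\kappa>0$ this functional is bounded from below, uniformly with respect to $\varepsilon $ and $t$: since $\xi-\frac{\kappa}{\mu }\ln \xi\geq \frac{\kappa}{\mu }\kl{1-\ln\frac{\kappa}{\mu }}$ for all $\xi>0$ and the last summand in \eqref{eq:defF} is nonnegative, we have
\[
 \mathcal{F}_{\varepsilon}(t)\geq \frac{\kappa}{\mu }\kl{1-\ln\frac{\kappa}{\mu }}|\Omega| \qquad \text{for all } t>0.
\]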
\begin{lem}\label{lem:holder}
Let $\varepsilon\in[0,1)$, $\mu >0$, $\chi >0$, $\kappa\in\mathbb{R} $. Let $(u_{\eps },v_{\eps })\in C^0(\overline{\Omega}\times [0,\infty))\cap C^{2,1}(\overline{\Omega}\times(0,\infty))$ be a solution to \eqref{epssys} with $a$ as in \eqref{defa} which is bounded in the sense that there exists $M>0$ such that
\begin{equation}\label{eq:boundednessconditionforregularity}
\norm[\Lom\infty]{u_{\eps }(\cdot,t)}+\norm[W^{1,\infty}(\Omega)]{v_{\eps }(\cdot,t)}\leq M \quad \text{for all } t\in(0,\infty).
\end{equation}
Then there are $\alpha\in(0,1)$ and $C>0$ such that
\[
\normm{C^{\alpha,\frac{\alpha}2}(\overline{\Omega}\times[t,t+1])}{u_{\eps }} + \normm{C^{2+\alpha,1+\frac\alpha2}(\Omega\times[t,t+1])}{v_{\eps }}\leq C \qquad \text{for all } t\in (2,\infty).
\] \end{lem} \begin{proof} Due to the time-uniform ($L^\infty(\Omega)$-)bound on $v_{\eps }$ and on the right hand side of \[
v_{\eps t}-\Delta v_{\eps } = -u_{\eps }v_{\eps } \text{ in } \Omega\times[t,t+2], \quad \partial_{\nu} v_{\eps }\big\rvert_{\partial\Omega}=0 \] of which $v_{\eps }$ is a weak solution, \cite[Thm. 1.3]{porzio_vespri} immediately yields $\alpha _1\in(0,1)$ and $c_1>0$ such that \[
\normm{C^{\alpha _1,\frac{\alpha _1}2}(\overline{\Omega}\times[t+1,t+2])}{v_{\eps }}\leq c_1 \] for any $t>0$.
Similarly, \eqref{eq:boundednessconditionforregularity} provides $t$-independent bounds on the functions $\psi _0:=\frac12\chi ^2u_{\eps }^2|\nabla v_{\eps }|^2$, $\psi _1:=\chi u_{\eps }|\nabla v_{\eps }|$, $\psi _2:=|\kappa|u_{\eps }-\mu u_{\eps }^2-\varepsilon u_{\eps }^2\ln au_{\eps }$ in conditions (A$_1$), (A$_2$), (A$_3$) of \cite{porzio_vespri}. An application of \cite[Thm. 1.3]{porzio_vespri} to solutions of \[
u_{\eps t} - \nabla\cdot(\nabla u_{\eps }-\chi u_{\eps }\nabla v_{\eps }) = \kappa u_{\eps }-\mu u_{\eps }^2-\varepsilon u_{\eps }^2\ln au_{\eps } \text{ in } \Omega\times[t,t+2], \quad \partial_{\nu} u_{\eps }\big\rvert_{\partial\Omega}=0 \] therefore provides $\alpha _2\in(0,1)$, $c_2>0$ such that \[
\normm{C^{\alpha _2,\frac{\alpha _2}2}(\overline{\Omega}\times[t+1,t+2])}{u_{\eps }}\leq c_2 \] for any $t>0$.
We pick a monotone increasing function $\zeta \in C^\infty(\mathbb{R} )$ such that $\zeta |_{(-\infty,\frac12)}\equiv 0$, $\zeta|_{(1,\infty)}\equiv 1$ and note that, for any $t_0>1$, the function $(x,t)\mapsto \zeta (t-t_0)v_{\eps }(x,t)$ belongs to $C^{2,1}(\overline{\Omega}\times[t_0,t_0+2])$ and satisfies \[
(\zeta v_{\eps })_t=\Delta (\zeta v_{\eps }) - u_{\eps }\zeta v_{\eps } + \zeta 'v_{\eps }, \quad (\zeta v_{\eps })(\cdot,t_0)=0, \quad \partial_{\nu}(\zeta v_{\eps })\big\rvert_{\partial\Omega}=0. \] Due to the uniform bound for $u_{\eps }\zeta v_{\eps } + \zeta 'v_{\eps }$ in some Hölder space, an application of \cite[Thm. IV.5.3]{LSU} (together with \cite[Thm. III.5.1]{LSU}) ensures the existence of $\alpha _3\in(0,1)$ and $c_3>0$ such that \[
\normm{C^{2+\alpha _3,1+\frac{\alpha _3}2}(\overline{\Omega}\times[t_0+1,t_0+2])}{v_{\eps }}=\normm{C^{2+\alpha _3,1+\frac{\alpha _3}2}(\overline{\Omega}\times[t_0+1,t_0+2])}{\zeta v_{\eps }}\leq\normm{C^{2+\alpha _3,1+\frac{\alpha _3}2}(\overline{\Omega}\times[t_0,t_0+2])}{\zeta v_{\eps }}\leq c_3
\begin{lem}\label{lem:ddtF} Let $u_0$, $v_0$ satisfy \eqref{id} and assume that $\mu >0$, $\kappa>0$, $\chi >0$, $a=\frac{\mu }{\kappa}$ (as in \eqref{defa}). Then for any $\varepsilon \in[0,1)$ any solution $(u_{\eps },v_{\eps })\in C^{2,1}(\overline{\Omega}\times(0,\infty))\cap C^0(\overline{\Omega}\times[0,\infty))$ of \eqref{epssys} satisfies \begin{equation}\label{FODI}
\mathcal{F}_\varepsilon'(t)+\mu \int_\Omega\kl{u_{\eps }-\frac{\kappa}{\mu }}^2\leq 0 \qquad \text{for all } t\in(0,\infty) \end{equation} and, consequently, there is $C>0$ such that for any $\varepsilon \in [0,1)$ \begin{equation}\label{eq:ueminlimit2.bounded}
\int_0^\infty \int_\Omega \kl{u_{\eps }-\frac{\kappa}{\mu }}^2 \leq C. \end{equation}
\end{lem}
\begin{proof} In fact, on $(0,\infty)$ \begin{align*} \mathcal{F}_\varepsilon'&=\int_{\Omega} u_{\eps t}-\frac{\kappa}{\mu }\int_{\Omega}\frac{u_{\eps t}}{u_{\eps }}+\frac{\kappa}{\mu }\int_{\Omega}v_{\eps }v_{\eps t} \nonumber \\ &= \kappa\int_{\Omega} u_{\eps }-\mu\int_{\Omega}u_{\eps }^2 - \varepsilon \int_\Omega u_{\eps }^2\ln au_{\eps } -\frac{\kappa}{\mu }\int_{\Omega}\frac{\Delta u_{\eps }}{u_{\eps }}+\frac{\kappa}{\mu }\int_{\Omega}\frac{\nabla u_{\eps }\cdot \nabla v_{\eps }}{u_{\eps }}-\frac{\kappa^2}{\mu }\int_{\Omega} 1+\kappa\int_{\Omega} u_{\eps }\\ &+\frac{\kappa}{\mu }\int_{\Omega}v_{\eps }\Delta v_{\eps }-\frac{\kappa}{\mu }\int_{\Omega}u_{\eps }v_{\eps }^2 +\varepsilon \frac{\kappa}{\mu }\int_\Omega u_{\eps }\ln au_{\eps } \end{align*} Because $\varepsilon s(\frac{\kappa}{\mu }-s)\ln (\frac{\mu }{\kappa}s)$ is negative for any $s>0$, we obtain \begin{align*} \mathcal{F}_\varepsilon'&\le
-\mu\int_{\Omega}\Big(u_{\eps }-\frac{\kappa}{\mu }\Big)^2-\frac{\kappa}{\mu }\int_{\Omega}\frac{|\nabla
u_{\eps }|^2}{u_{\eps }^2}+\frac{\kappa}{2\mu }\int_{\Omega}\frac{|\nabla
u_{\eps }|^2}{u_{\eps }^2}+\frac{\kappa}{2\mu }\int_{\Omega}|\nabla
v_{\eps }|^2-\frac{\kappa}{\mu }\int_{\Omega}|\nabla
v_{\eps }|^2-\frac{\kappa}{\mu }\int_{\Omega}u_{\eps }v_{\eps }^2\nonumber\\ &=
-\mu\int_{\Omega}\Big(u_{\eps }-\frac{\kappa}{\mu }\Big)^2-\frac{\kappa}{2\mu }\int_{\Omega}\frac{|\nabla
u_{\eps }|^2}{u_{\eps }^2}-\frac{\kappa}{2\mu }\int_{\Omega}|\nabla
v_{\eps }|^2-\frac{\kappa}{\mu }\int_{\Omega}u_{\eps }v_{\eps }^2 \end{align*} on $(0,\infty)$, which implies \eqref{FODI}. Since $\mathcal{F}_\varepsilon$ is bounded from below uniformly with respect to $\varepsilon $ (see the remark following \eqref{eq:defF}) and since $\mathcal{F}_\varepsilon(0)$ does not depend on $\varepsilon $, an integration of \eqref{FODI} moreover yields \eqref{eq:ueminlimit2.bounded}. \end{proof}
Building upon \eqref{eq:ueminlimit2.bounded} and the second equation of \eqref{epssys}, we can now acquire decay information about $v_{\eps }$: \begin{lem}\label{lem:vtozero} Let $\chi >0$, $\kappa>0$, $\mu >0$, let $u_0$ and $v_0$ satisfy \eqref{id} and moreover set $a:=\frac{\mu }{\kappa}$. Then for every $p\in[1,\infty)$ and every $\eta >0$ there is $T>0$ such that for every $t>T$ and every $\varepsilon \in[0,1)$ every global classical solution $(u_{\eps },v_{\eps })$ of \eqref{epssys} satisfies \begin{equation}\label{eq:vtozerolp}
\norm[\Lom p]{v_{\eps }(\cdot,t)}<\eta . \end{equation} \end{lem} \begin{proof}
By Lemma \ref{lem:ddtF} we find $c_1>0$ such that for any $\varepsilon\in[0,1)$ we have $\int_0^\infty\int_\Omega \kl{\frac{\kappa}{\mu }-u_{\eps }}^2<c_1$.
Integrating the second equation of \eqref{epssys} shows that \[
(0,\infty)\ni t\mapsto \int_\Omega v_{\eps }(\cdot,t) \]
is decreasing, and that, moreover, \[
\frac{\kappa}{\mu }\int_0^t\int_\Omega v_{\eps } + \int_0^t\int_\Omega\kl{u_{\eps }-\frac{\kappa}{\mu }}v_{\eps } = \int_0^t \int_\Omega u_{\eps }v_{\eps } \leq \int_\Omega v_0 \]
for any $t>0$.
We conclude that for any $t>0$ and any $\varepsilon \in[0,1)$ \begin{align*}
\int_\Omega v_{\eps }(\cdot,t) \leq \frac1t\int_0^t \int_\Omega v_{\eps }&\leq\frac{\mu }{\kappa t} \int_\Omega v_0 + \frac{\mu }{\kappa t}\int_0^t\int_\Omega \kl{\frac{\kappa}{\mu }-u_{\eps }}v_{\eps }\\
&\leq \frac{\mu }{\kappa t}\int_\Omega v_0 + \frac{\mu }{\kappa t}\sqrt{\int_0^t\int_\Omega v_{\eps }^2}\sqrt{\int_0^t\int_\Omega \kl{\frac{\kappa}{\mu }-u_{\eps }}^2}\\
&\leq \frac{\mu }{\kappa t}\int_\Omega v_0 +\frac{\mu }{\kappa t} \sqrt{\norm[\Lom\infty]{v_0}^2|\Omega|t} \sqrt{\int_0^\infty\int_\Omega\kl{\frac{\kappa}{\mu }-u_{\eps }}^2}\\
&\leq \frac{\mu }{\kappa t}\int_\Omega v_0 +\frac{\mu \sqrt{|\Omega|}\norm[\Lom\infty]{v_0}\sqrt{c_1}}{\kappa\sqrt{t}} \end{align*} and hence already have that $\int_\Omega v_{\eps }(\cdot,t)$ converges to $0$ as $t\to\infty$, uniformly with respect to $\varepsilon $. In order to obtain \eqref{eq:vtozerolp}, we invoke the additional interpolation \[
\norm[\Lom p]{v_{\eps }(\cdot,t)}\leq \norm[\Lom\infty]{v_{\eps }(\cdot,t)}^{\frac{p-1}p}\norm[\Lom 1]{v_{\eps }(\cdot,t)}^{\frac1p}\leq \norm[\Lom\infty]{v_0}^{\frac{p-1}p}\norm[\Lom 1]{v_{\eps }(\cdot,t)}^{\frac1p}, \] valid for any $t>0$. \end{proof}
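We remark that, with $c_1$ as in the above proof, the estimates there even provide an explicit algebraic rate of decay, namely
\[
 \norm[\Lom p]{v_{\eps }(\cdot,t)}\leq \norm[\Lom\infty]{v_0}^{\frac{p-1}{p}}\kl{\frac{\mu }{\kappa t}\int_\Omega v_0+\frac{\mu \sqrt{|\Omega|}\norm[\Lom\infty]{v_0}\sqrt{c_1}}{\kappa\sqrt{t}}}^{\frac1p} \qquad \text{for all } t>0,
\]
uniformly with respect to $\varepsilon \in[0,1)$.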
A combination of the previous lemmata in this section reveals the large time behaviour of bounded classical solutions:
\begin{lem}\label{lem:convergence.classical} Let $\kappa>0$, $\mu >0$, $\chi >0$ and let $u_0$, $v_0$ satisfy \eqref{id}. For any solution $(u,v)\in C^{2,1}(\overline{\Omega}\times(0,\infty))\cap C^0(\overline{\Omega}\times[0,\infty))$ of \eqref{a} that satisfies the boundedness condition \eqref{eq:boundednessconditionforregularity}, we have \begin{equation}\label{eq:convergencestatement}
u(\cdot,t)\to \frac{\kappa}{\mu } \quad \text{in } C^0(\overline{\Omega}),\quad v(\cdot,t)\to 0\quad \text{in }C^2(\overline{\Omega}) \end{equation} as $t\to \infty $. \end{lem} \begin{proof} For $j\in\mathbb{N} $ we define \[
u_j(x,\tau ):=u(x,j+\tau ), \qquad v_j(x,\tau ):=v(x,j+\tau ),\qquad x\in\overline{\Omega}, \tau \in[0,1]. \] We let $(j_k)_{k\in \mathbb{N}}\subset\mathbb{N} $ be a sequence satisfying $j_k\to \infty $ as $k\to \infty $. By Lemma \ref{lem:holder} there are $\alpha \in(0,1)$, $C>0$ such that \[
\normm{C^{\alpha,\frac{\alpha }2}(\overline{\Omega}\times[0,1])}{u_{j_k}}\leq C, \quad \normm{C^{2+\alpha,1+\frac{\alpha}2}(\overline{\Omega}\times[0,1])}{v_{j_k}}\leq C \] for all $k\in \mathbb{N} $ and hence there are $\hat u,\hat v\in C^{\alpha ,\frac{\alpha }2}(\overline{\Omega}\times[0,1])$ such that $u_{j_{k_l}}\to \hat u$ in $C^0(\overline{\Omega}\times[0,1])$ and $v_{j_{k_l}}\to \hat v$ in $C^2(\overline{\Omega}\times[0,1])$ as $l\to \infty $ along a suitable subsequence. According to \eqref{eq:ueminlimit2.bounded} and Lemma \ref{lem:vtozero}, $\hat u\equiv \frac{\kappa}{\mu }$ and $\hat v\equiv 0$. Because every subsequence of $((u_j,v_j))_{j\in\mathbb{N} }$ contains a subsequence converging to $(\frac{\kappa}{\mu },0)$, we conclude that $(u_j,v_j)\to(\frac{\kappa}{\mu },0)$ in $C^0(\overline{\Omega}\times[0,1])\times C^2(\overline{\Omega}\times[0,1])$ and hence, a fortiori, \eqref{eq:convergencestatement}. \end{proof}
\begin{proof}[Proof of Theorem \ref{thm3}]
The statement of Lemma \ref{lem:convergence.classical} is even slightly stronger than that of Theorem \ref{thm3}. \end{proof}
\section{Weak solutions}\label{sec:weaksol} The purpose of this section is the construction of weak solutions to \eqref{a} in those cases where Theorem \ref{thm1} is not applicable. To this end let us first state what a weak solution is supposed to be:
\begin{dnt}\label{def:weaksol}
A weak solution to \eqref{a} for initial data $(u_0,v_0)$ as in \eqref{id} is a pair $(u,v)$ of functions
\begin{align*}
u\in L^2_{loc}(\overline{\Omega}\times[0,\infty)) \quad \text{ with } \quad \nabla u \in L^1_{loc}(\overline{\Omega}\times [0,\infty)),\\
v\in L^\infty(\Omega\times(0,\infty))\quad \text{ with } \quad \nabla v \in L^2(\Omega\times(0,\infty))
\end{align*}
such that, for every $\varphi \in C_0^{\infty}(\overline{\Omega}\times[0,\infty))$,
\begin{align*}
-\int_0^\infty\int_\Omega u\varphi _t -\int_\Omega u_0\varphi (\cdot,0) &= -\int_0^\infty\int_\Omega \nabla u\cdot \nabla \varphi + \chi \int_0^\infty\int_\Omega u\nabla v\cdot\nabla\varphi +\kappa\int_0^\infty\int_\Omega u\varphi -\mu \int_0^\infty\int_\Omega u^2\varphi \\
-\int_0^\infty\int_\Omega v\varphi _t -\int_\Omega v_0\varphi (\cdot,0) &= -\int_0^\infty\int_\Omega \nabla v\cdot \nabla \varphi - \int_0^\infty \int_\Omega uv\varphi
\end{align*}
hold true. \end{dnt}
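Note that all of the integrals appearing in Definition \ref{def:weaksol} are finite due to the regularity required of $(u,v)$ and the compact support of $\varphi $; for instance, if $\varphi $ is supported in $\overline{\Omega}\times[0,T]$, then
\[
 \left|\int_0^\infty\int_\Omega u\nabla v\cdot\nabla\varphi \right|\leq \norm[L^\infty(\Omega\times(0,\infty))]{\nabla \varphi }\norm[L^2(\Omega\times(0,T))]{u}\norm[L^2(\Omega\times(0,\infty))]{\nabla v}<\infty .
\]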
Some of the estimates needed for the compactness arguments in the construction of these weak solutions will spring from the following quasi-energy inequality:
\begin{lem}\label{lem:energyfunctional}
Let $\mu ,\chi \in(0,\infty)$, $\kappa\in\mathbb{R} $ and let $(u_0,v_0)$ satisfy \eqref{id}.
There are constants $k_1>0$, $k_2>0$, $k_3>0$ such that for any $\varepsilon \in(0,1)$ the solution of \eqref{epssys} with $a$ as in \eqref{defa} satisfies \begin{align}\label{eq:ddtuloguplusnavdv}
\ddt&\kl{\int_\Omega u_{\eps }\ln u_{\eps } + \frac{\chi }{2} \int_\Omega \frac{|\nabla v_{\eps }|^2}{v_{\eps }}}\nonumber\\
& + \int_\Omega \frac{|\nabla u_{\eps }|^2}{u_{\eps }} + k_1 \int_\Omega \frac{|\nabla v_{\eps }|^4}{v_{\eps }^3} + k_1 \int_\Omega v_{\eps }|D^2\ln v_{\eps }|^2 + \frac{\mu }2\int_\Omega u_{\eps }^2\ln u_{\eps }+ \varepsilon \int_\Omega u_{\eps }^2\ln au_{\eps }\ln u_{\eps } \nonumber\\
&\leq k_2\int_\Omega v_{\eps }+k_3 \end{align} on $(0,\infty)$. \end{lem} \begin{proof} According to Lemma \ref{lem:ge.for.positive.eps.or.large.mu}, for any $\varepsilon \in(0,1)$, the solution to \eqref{epssys} is global, and from the second equation of \eqref{epssys} we obtain that
\begin{align}\label{eq:ddtnavdv}
\ddt \int_\Omega \frac{|\nabla v_{\eps }|^2}{v_{\eps }} &= 2\int_\Omega \frac{\nabla v_{\eps }\cdot \nabla v_{\eps t}}{v_{\eps }} - \int_\Omega \frac{|\nabla v_{\eps }|^2}{v_{\eps }^2}v_{\eps t}\nonumber\\
&=-2\int_\Omega \frac{\Delta v_{\eps } v_{\eps t}}{v_{\eps }} + 2\int_\Omega \frac{|\nabla v_{\eps }|^2}{v_{\eps }^2} v_{\eps t} -\int_\Omega \frac{|\nabla v_{\eps }|^2}{v_{\eps }^2}v_{\eps t}\nonumber\\
&= -2\int_\Omega \frac{|\Delta v_{\eps }|^2}{v_{\eps }} + 2\int_\Omega u_{\eps }\Delta v_{\eps } +\int_\Omega \frac{|\nabla v_{\eps }|^2}{v_{\eps }^2}\Delta v_{\eps } - \int_\Omega \frac{|\nabla v_{\eps }|^2}{v_{\eps }}u_{\eps }\nonumber\\
&\leq -2\int_\Omega \frac{|\Delta v_{\eps }|^2}{v_{\eps }} - 2\int_\Omega \nabla u_{\eps }\cdot \nabla v_{\eps } + \int_\Omega \frac{|\nabla v_{\eps }|^2}{v_{\eps }^2}\Delta v_{\eps }\quad\text{on } (0,\infty).
\end{align} Here we may rely on Lemma \ref{lem:elementary.estimates} b) to obtain $k_1>0$, $k_2>0$ such that \[
\ddt \int_\Omega \frac{|\nabla v_{\eps }|^2}{v_{\eps }}\leq -2\int_\Omega \nabla u_{\eps }\cdot \nabla v_{\eps } - \frac{2k_1}{\chi }\int_\Omega v_{\eps }|D^2\ln v_{\eps }|^2-\frac{2k_1}{\chi }\int_\Omega \frac{|\nabla v_{\eps }|^4}{v_{\eps }^3} + \frac{2k_2}{\chi } \int_\Omega v_{\eps } \quad \text{on } (0,\infty). \] Concerning the entropy term, we compute \begin{align}\label{eq:ddtulogu}
\ddt \int_\Omega u_{\eps }\ln u_{\eps } &= \int_\Omega u_{\eps t} \ln u_{\eps } + \kappa\int_\Omega u_{\eps } - \mu \int_\Omega u_{\eps }^2 -\varepsilon \int_\Omega u_{\eps }^2\ln au_{\eps }\nonumber\\
&= -\int_\Omega \frac{|\nabla u_{\eps }|^2}{u_{\eps }} + \chi \int_\Omega \nabla u_{\eps }\cdot \nabla v_{\eps } + \kappa\int_\Omega u_{\eps }\ln u_{\eps } - \mu \int_\Omega u_{\eps }^2\ln u_{\eps }-\varepsilon \int_\Omega u_{\eps }^2\ln au_{\eps }\ln u_{\eps }\nonumber\\
&\quad+\kappa\int_\Omega u_{\eps }-\mu \int_\Omega u_{\eps }^2-\varepsilon \int_\Omega u_{\eps }^2\ln au_{\eps } \quad \text{on } (0,\infty ). \end{align} Additionally, $s^2\ln as>-\frac{1}{2a^2e}$ for all $s\in(0,\infty)$, so that for all $\varepsilon \in(0,1)$ we have $-\varepsilon (s^2\ln as)<\frac1{2a^2e}$. Since moreover $\lim_{s\to \infty}(\kappa s-\mu s^2+\kappa s\ln s-\frac{\mu }2s^2\ln s)=-\infty $, we can find $k_3>0$ such that \[
\kappa s\ln s -\frac{\mu }{2}s^2\ln s +\kappa s-\mu s^2-\varepsilon s^2\ln as \leq \frac{k_3}{|\Omega|} \] for any $s\geq 0$ and $\varepsilon \in(0,1)$. Inserting this into the sum of \eqref{eq:ddtulogu} and a multiple of \eqref{eq:ddtnavdv}, we obtain \eqref{eq:ddtuloguplusnavdv}. \end{proof}
The following lemma serves as collection of the bounds we have prepared:
\begin{lem}\label{lem:bounds}
Let $\mu >0$, $\chi >0$, $\kappa\in\mathbb{R} $ and suppose that $u_0$, $v_0$ satisfy \eqref{id}.
Then there is $C>0$ and for any $T>0$ and $q>N$ there is $C(T)>0$ such that for any $\varepsilon \in(0,1)$ the solution $(u_{\eps },v_{\eps })$ of \eqref{epssys} with $a$ as in \eqref{defa} satisfies \begin{align}
\int_0^T \int_\Omega u_{\eps }^2\leq C(T)\label{bd:ul2}\\
\int_0^T \int_\Omega \frac{|\nabla u_{\eps }|^2}{u_{\eps }}\leq C(T)\label{bd:nau2u}\\
\int_0^T \int_\Omega |\nabla u_{\eps }|^\frac43\leq C(T)\label{bd:nau}\\
\int_0^T \int_\Omega u_{\eps }^2\ln au_{\eps } \leq C(T)\label{bd:u2logu}\\
\int_0^T \int_\Omega \varepsilon u_{\eps }^2(\ln u_{\eps })\ln au_{\eps }\leq C(T)\label{bd:epsu2logu2}\\
\int_0^T \int_\Omega |\nabla v_{\eps }|^4 \leq C\label{bd:nav4}\\
\int_0^{\infty } \int_\Omega |\nabla v_{\eps }|^2\leq C\label{bd:nav}\\
\norm[ L^\infty(\Omega\times (0,\infty ))]{v_{\eps }}\leq C\label{bd:v}\\
\norm[L^2((0,T);(W_0^{1,2}(\Omega))^\ast)]{v_{\eps t}}\leq C(T)\label{bd:vt}\\
\norm[L^1((0,T);(W_0^{2,q}(\Omega))^\ast)]{u_{\eps t}}\leq C(T)\label{bd:ut} \end{align} If, moreover, $\kappa>0$, then there is $C>0$ such that for any $\varepsilon \in(0,1)$ the solution $(u_{\eps },v_{\eps })$ of \eqref{epssys} with $a=\frac{\mu}{\kappa}$ as in \eqref{defa} satisfies \begin{equation}
\int_0^\infty\int_\Omega \kl{u_{\eps }-\frac{\kappa}{\mu }}^2 \leq C.\label{bd:uminlimit2} \end{equation} \end{lem} \begin{proof}
Boundedness of $v_{\eps }$ as in \eqref{bd:v} has been shown in Lemma \ref{lem:normvdecreases}; \eqref{bd:ul2}, \eqref{bd:nau2u}, \eqref{bd:u2logu}, \eqref{bd:epsu2logu2} result from Lemma \ref{lem:energyfunctional} by straightforward integration, as well as \eqref{bd:nav4} if Lemma \ref{lem:normvdecreases} is taken into account. Testing the second equation in \eqref{epssys} by $v_{\eps }$, \eqref{bd:nav} is readily obtained. By an application of Hölder's inequality, \eqref{bd:nau} immediately follows from \eqref{bd:ul2} and \eqref{bd:nau2u}. Moreover, \eqref{bd:uminlimit2} is a consequence of \eqref{FODI}.
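As to \eqref{bd:nav}, for instance, the testing procedure consists in multiplying the second equation in \eqref{epssys} by $v_{\eps }$ and integrating by parts, which gives
\[
 \frac12\frac{d}{dt}\int_\Omega v_{\eps }^2+\int_\Omega |\nabla v_{\eps }|^2=-\int_\Omega u_{\eps }v_{\eps }^2\leq 0 \qquad \text{on } (0,\infty),
\]
so that $\int_0^{\infty }\int_\Omega |\nabla v_{\eps }|^2\leq \frac12\int_\Omega v_0^2$; and \eqref{bd:nau} follows from the pointwise identity $|\nabla u_{\eps }|^{\frac43}=\kl{\frac{|\nabla u_{\eps }|^2}{u_{\eps }}}^{\frac23}u_{\eps }^{\frac23}$ upon an application of Hölder's inequality with exponents $\frac32$ and $3$.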
For any $\varphi \in C_0^\infty(\overline{\Omega}\times[0,T))$ we have \begin{align*}
\int_0^T\int_\Omega v_{\eps t}\varphi &= -\int_0^T\int_\Omega \nabla \varphi \cdot \nabla v_{\eps } - \int_0^T\int_\Omega \varphi u_{\eps }v_{\eps } \\
&\leq \norm[L^2(\Omega\times(0,T))]{\nabla \varphi }\norm[L^2(\Omega\times(0,T))]{\nabla v_{\eps }} + \norm[L^\infty(\Omega\times(0,T))]{v_{\eps }}\norm[L^2(\Omega\times(0,T))]{u_{\eps }}\norm[L^2(\Omega\times(0,T))]{\varphi } \end{align*} and -- by \eqref{bd:ul2}, \eqref{bd:nav}, \eqref{bd:v} -- hence \eqref{bd:vt}. In order to obtain \eqref{bd:ut}, we let $\varphi \in (L^1((0,T);(W_0^{2,q}(\Omega))^\ast))^\ast = L^\infty((0,T);W_0^{2,q}(\Omega))$ with $\norm[L^\infty((0,T);W_0^{2,q}(\Omega))]{\varphi }\leq 1$ and have \begin{align*}
\int_0^T\int_\Omega u_{\eps t} \varphi &=\int_0^T\int_\Omega u_{\eps } \Delta \varphi + \chi \int_0^T\int_\Omega u_{\eps }\nabla v_{\eps }\cdot\nabla\varphi + \kappa\int_0^T\int_\Omega u_{\eps }\varphi -\mu \int_0^T\int_\Omega u_{\eps }^2\varphi +\varepsilon \int_0^T\int_\Omega \varphi u_{\eps }^2\ln au_{\eps }\\
&\leq \norm[L^2(\Omega\times(0,T))]{u_{\eps }}\norm[L^2(\Omega\times(0,T))]{\Delta \varphi }+\chi \norm[L^2(\Omega\times(0,T))]{u_{\eps }}\norm[L^2(\Omega\times(0,T))]{\nabla v_{\eps }}\norm[L^\infty(\Omega\times(0,T))]{\nabla \varphi } \\
&+ |\kappa| \norm[L^2(\Omega\times(0,T))]{u_{\eps }}\norm[L^2(\Omega\times(0,T))]{\varphi }+\mu \norm[L^2(\Omega\times(0,T))]{u_{\eps }}^2\norm[L^\infty(\Omega\times(0,T))]{\varphi } \\
&+ \norm[L^\infty(\Omega\times(0,T))]{\varphi }\varepsilon \int_0^T\int_\Omega u_{\eps }^2|\ln au_{\eps }|, \end{align*}
which, due to \eqref{bd:ul2}, \eqref{bd:nav}, \eqref{bd:u2logu}, proves \eqref{bd:ut}. \end{proof}
By means of compactness arguments, these estimates allow for the construction of weak solutions. This is to be our next undertaking:
\begin{lem}\label{lem:weaksol}
Let $\mu >0$, $\chi >0$, $\kappa\in\mathbb{R} $ and assume that $u_0$, $v_0$ satisfy \eqref{id}.
There are a sequence $(\varepsilon _j)_{j\in \mathbb{N}}$, $\varepsilon _j\searrow 0$ and functions \begin{align*}
u&\in L^2_{loc}(\overline{\Omega}\times[0,\infty)) \quad \text{ with } \quad \nabla u\in L^{\frac43}_{loc}(\overline{\Omega}\times [0,\infty)),\\
v&\in L^\infty(\Omega\times(0,\infty)) \quad \text{ with }\quad \nabla v \in L^2(\Omega\times(0,\infty)) \end{align*} such that the solutions $(u_{\eps },v_{\eps })$ of \eqref{epssys} with $a$ as in \eqref{defa} satisfy \begin{align}
u_{\eps }&\to u & &\text{in } L^{\frac 43}_{loc}([0,\infty);L^{\frac43}(\Omega)) \quad \text{ and a.e. in } \Omega\times(0,\infty)\label{conv:u}\\
\nabla u_{\eps }&\rightharpoonup \nabla u&&\text{ in } L^{\frac 43}_{loc}([0,\infty);L^{\frac43}(\Omega))\label{conv:nau}\\
u_{\eps }^2&\to u^2&&\text{ in } L^1_{loc}(\overline{\Omega}\times[0,\infty))\label{conv:u2}\\
\varepsilon u_{\eps }^2\ln (au_{\eps })&\to 0&&\text{ in } L^1_{loc}(\overline{\Omega}\times[0,\infty))\label{conv:epsu2logu}\\
v_{\eps } &\to v & & \text{a.e. in } \Omega\times (0,\infty)\label{conv:v}\\
v_{\eps } &\weakstarto v && \text{in } L^\infty((0,\infty),\Lom p)\quad \text{for any } p\in[1,\infty]\label{conv:vweakstar}\\
\nabla v_{\eps }&\rightharpoonup \nabla v && \text{in } L^4_{loc}([0,\infty);\Lom4)\label{conv:nav4}\\
\nabla v_{\eps }&\rightharpoonup \nabla v && \text{in } L^2((0,\infty);\Lom2)\label{conv:nav} \end{align} as $\varepsilon =\varepsilon _j\searrow 0$ and such that $(u,v)$ is a weak solution to \eqref{a}.\\ If additionally $\kappa>0$ and $a=\frac{\mu }{\kappa}$ as in \eqref{defa}, then $\varepsilon _j$ can be chosen such that, in addition, \begin{equation}
u_{\eps }-\frac{\kappa}{\mu }\rightharpoonup u-\frac{\kappa}{\mu }\qquad\text{ in } L^2(\Omega\times(0,\infty))\label{conv:uminlimit} \end{equation} as $\varepsilon =\varepsilon _j\searrow 0$, and \begin{equation}\label{uminlimitinL2}
\kl{u-\frac{\kappa}{\mu }}\in L^2(\Omega\times(0,\infty)). \end{equation} \end{lem} \begin{proof}
\cite[Cor. 8.4]{simon} transforms \eqref{bd:ul2}, \eqref{bd:nau} and \eqref{bd:ut} into \eqref{conv:u} along a suitable sequence $(\varepsilon_j)_j\searrow 0$; the bound in \eqref{bd:nau} enables us to find a further subsequence such that \eqref{conv:nau} holds. Similarly, \eqref{bd:nav} facilitates the extraction of a subsequence satisfying \eqref{conv:nav}, and an analogous application of \cite[Cor. 8.4]{simon} as before from \eqref{bd:nav}, \eqref{bd:v} and \eqref{bd:vt} provides a (non-relabeled) subsequence such that $v_{\varepsilon _j}\to v$ in $L^2(\Omega\times(0,\infty))$ and, along another subsequence thereof establishes \eqref{conv:v}. Also \eqref{conv:vweakstar} is immediately obtained from \eqref{bd:v}, as is \eqref{conv:nav4} from \eqref{bd:nav4}; \eqref{conv:uminlimit} results from \eqref{bd:uminlimit2}. For the $L^1$-convergence statements in \eqref{conv:u2} and \eqref{conv:epsu2logu}, mere boundedness, like obtainable from \eqref{bd:nau2u} and \eqref{bd:u2logu}, even if combined with the a.e. convergence provided by \eqref{conv:u}, is insufficient for the existence of a convergent subsequence; we must, in addition, check for equi-integrability on $\Omega\times(0,T)$ for any finite $T>0$. To this purpose we note that with $C(T)$ from \eqref{bd:epsu2logu2} \begin{align*}
\inf_{b\geq0} \sup_{\varepsilon \in(0,1)} \int_0^T\int_{\set{\varepsilon u_{\eps }^2\ln a u_{\eps }>b}} |\varepsilon u_{\eps }^2\ln a u_{\eps }| &\leq \inf_{b>a }\sup_{\varepsilon \in(0,1)} \int_0^T\int_{\set{\varepsilon u_{\eps }^2\ln a u_{\eps }>b}} \varepsilon u_{\eps }^2\ln a u_{\eps }\\
&\leq \inf_{b>a }\sup_{\varepsilon \in(0,1)} \int_0^T\int_{\set{a u_{\eps }^3>b}} \varepsilon u_{\eps }^2\ln a u_{\eps }\\
&\leq \inf_{b>a }\sup_{\varepsilon \in(0,1)} \int_0^T\int_{\set{\ln u_{\eps }>\frac13 \ln \frac ba}} \varepsilon u_{\eps }^2(\ln au_{\eps })\ln u_{\eps }\cdot \frac 3{\ln \frac{b}{a}}\\
&\leq \inf_{b>a } \frac{3C(T)}{\ln \frac{b}{a}} =0 \end{align*} and, due to \eqref{bd:u2logu}, \begin{align*}
\inf_{b\geq 0} \sup_{\varepsilon \in(0,1)} \int_0^T\int_{\set{u_{\eps }^2>b}}u_{\eps }^2 \leq \inf_{b>1}\sup_{\varepsilon \in(0,1)}\int_0^T\int_{\set{u_{\eps }^2>b}} u_{\eps }^2\ln u_{\eps } \frac{1}{\ln b} \leq \inf_{b>1} \frac{C(T)}{\ln b} = 0. \end{align*} Accordingly, $\set{ \varepsilon u_{\eps }^2\ln u_{\eps }; \varepsilon \in(0,1)}$ and $\set{u_{\eps }^2; \varepsilon \in(0,1)}$ are uniformly integrable, hence by \eqref{conv:u} and the Vitali convergence theorem we can extract subsequences such that \eqref{conv:epsu2logu} and \eqref{conv:u2} hold; \eqref{conv:u2} also proves that $u\in L^2_{loc}(\overline{\Omega}\times[0,\infty ))$. Passing to the limit in each of the integrals making up a weak formulation of \eqref{epssys} with $\varepsilon >0$, which is possible due to \eqref{conv:u}, \eqref{conv:nau}, \eqref{conv:nav4}, \eqref{conv:u2}, \eqref{conv:epsu2logu} and \eqref{conv:vweakstar}, shows that $(u,v)$ is a weak solution to \eqref{epssys} with $\varepsilon =0$. \end{proof}
\begin{proof}[Proof of Theorem \ref{thm:weaksol}]
The assertion of Theorem \ref{thm:weaksol} is part of Lemma \ref{lem:weaksol}. \end{proof}
We will finally prove that at least some stabilization can be expected of weak solutions as well. Here, the preparation in Lemma \ref{lem:vtozero}, obtained from the energy inequality for $\mathcal{F}$, will be crucial.
\begin{lem}\label{lem:weaklimit} Let $\mu >0$, $\chi >0$, $\kappa>0$ and assume that $u_0$, $v_0$ satisfy \eqref{id}.
The weak solution $(u,v)$ to \eqref{a} obtained in Lemma \ref{lem:weaksol} satisfies
\begin{equation}\label{eq:weaksolvtozero}
\norm[\Lom p]{v(\cdot,t)}\to 0
\end{equation}
for any $p\in[1,\infty)$ and \begin{equation}\label{eq:weaksoluconv}
\int_t^{t+1} \norm[\Lom 2]{u-\frac{\kappa}{\mu }}\to0 \end{equation}
as $t\to \infty$. \end{lem} \begin{proof}
Using characteristic functions of sets $\Omega\times(t,t+1)$ for sufficiently large $t$ as test functions in the weak-$*$-convergence statement \eqref{conv:vweakstar}, from Lemma \ref{lem:vtozero} we obtain that for every $\eta>0$ there is $T>0$ such that $\norm[L^\infty((T,\infty);\Lom p)]{v}<\eta$,
whereas \eqref{eq:weaksoluconv} is implied by \eqref{uminlimitinL2}: by the Cauchy-Schwarz inequality, $\int_t^{t+1} \norm[\Lom 2]{u-\frac{\kappa}{\mu }}\leq\kl{\int_t^{t+1}\int_\Omega \kl{u-\frac{\kappa}{\mu }}^2}^{\frac12}$, and the right-hand side tends to zero as $t\to\infty $. \end{proof}
\begin{remark}
If $N\leq 3$, the uniform bound on $\int_t^{t+1}\int_\Omega |\nabla v|^4$ contained in Lemma \ref{lem:energyfunctional} proves to be sufficient for \eqref{eq:weaksolvtozero} even to hold for $p=\infty$, which can be used as starting point for derivation of eventual smoothness of solutions via a quasi-energy-inequality for $\int_\Omega \frac{u^p}{(\eta -v)^\theta}$ with suitable numbers $\theta$ and $\eta $. This result is already contained in \cite{lankeit_fluid}. \end{remark}
\begin{proof}[Proof of Theorem \ref{thm:weaksol-limit}]
Lemma \ref{lem:weaklimit} is identical with Theorem \ref{thm:weaksol-limit}. \end{proof}
\section*{Acknowledgment}
J.~Lankeit acknowledges support of the {\em Deutsche Forschungsgemeinschaft} within the project {\em Analysis of chemotactic cross-diffusion in complex
frameworks}. Y.~Wang was supported by the NNSF of China (no. 11501457).
{\small
}
\end{document} |
\begin{document}
\title{Copolarity of isometric actions}
\author{Claudio Gorodski\footnote{Partially supported by CNPq grant 300720/93-9 and FAPESP grant 01/04793-8.}\hspace{.1cm} and Carlos Olmos\footnote{ Supported by Universidad Nacional de C\'ordoba and CONICET, partially supported by CIEM, Secyt-UNC and ANPCYT.}\hspace{.1cm}
and Ruy Tojeiro\footnote{ Partially supported by CNPq grant~300229/92-5 and FAPESP grant 01/05318-1.}}
\footnotetext{2000 \emph{Mathematics Subject Classification}. Primary 57S15; Secondary 53C20.}
\maketitle
\begin{abstract} We introduce a new integral invariant for isometric actions of compact Lie groups, the \emph{copolarity}. Roughly speaking, it measures how far from being polar the action is. We generalize some results about polar actions in this context. In particular, we develop some of the structural theory of copolarity $k$ representations, we classify the irreducible representations of copolarity one, and we relate the copolarity of an isometric action to the concept of variational completeness in the sense of Bott and Samelson. \end{abstract}
\section{Introduction}
An isometric action of a compact Lie group $G$ on a complete Riemannian manifold $M$ is called \emph{polar} if there exists a connected, complete submanifold $\Sigma$ of $M$ which intersects all $G$-orbits and such that $\Sigma$ is orthogonal to every $G$-orbit it meets. Such a submanifold is called a \emph{section}. It is easy to see that a section is automatically totally geodesic. If the section is also flat in the induced metric, then the action is called \emph{hyperpolar}. In the case of Euclidean spaces, there is clearly no difference between polar and hyperpolar representations since totally geodesic submanifolds of an Euclidean space are affine subspaces. Polar representations were classified by Dadok~\cite{D} and it follows from his work that a polar representation of a compact Lie group is orbit equivalent to (i.~e.~has the same orbits as) the isotropy representation of a symmetric space.
In this paper we introduce a new invariant for isometric actions of compact Lie groups, the \emph{copolarity}. Roughly speaking, it measures how far from being polar the action is. This is based on the idea of a $k$-\emph{section}, which is a generalization of the concept of section. The \emph{minimal $k$-section} passing through a regular point of the action is the smallest connected, complete, totally geodesic submanifold of the ambient space passing through that point which intersects all the orbits and such that, at any intersection point with a principal orbit, its tangent space contains the normal space of that orbit with codimension $k$. It is easy to see that this is a good definition and uniquely specifies an integer~$k$ which we call the \emph{copolarity} of the isometric action (see Section~\ref{sec:k-sec}). It is also obvious that the $k=0$ case precisely corresponds to the polar actions.
It is apparent that for most isometric actions the minimal $k$-section coincides with the ambient space. Note that in this case $k$ equals the dimension of a principal orbit. We say that such isometric actions have \emph{trivial copolarity}. The obvious questions that emerge are: \begin{quote} \em What are the isometric actions with nontrivial copolarity? What is the meaning of the integer $k$? \end{quote} In this paper we examine this problem in the case of orthogonal representations. Examples of representations of nontrivial copolarity and minimal $k$-sections appear naturally in the framework of the reduction principle in compact transformation groups (see~\cite{GS,SS,S1} for that principle). In fact, in~\cite{GTh2} the reduction principle was used to describe the geometry of the irreducible representations in the table of Theorem~\ref{thm:1} below, which have copolarity $1$. In that paper one was motivated by the fact that the orbits of those representations are tautly embedded in Euclidean space; we call representations with this property \emph{taut}. This work is mainly motivated by the desire to better understand and generalize that description.
We give a complete answer to the above questions for the extremal values of the invariant $k$. Namely, let $(G,V)$ be an irreducible representation of a compact connected Lie group. Let $n$ be the dimension of a principal orbit. We prove the following two theorems.
\begin{thm}\label{thm:1} If $k=1<n$, then $(G,V)$ is one of the following orthogonal representations ($m\geq2$):
\[ \begin{array}{|c|c|} \hline \SO2\times\Spin9 & \mbox{(standard)}\otimes_{\mathbf R}\mbox{(spin)} \\ \U2\times\SP m & \mbox{(standard)}\otimes_{\mathbf C}\mbox{(standard)} \\ \SU2\times\SP m & \mbox{(standard)}^3\otimes_{\mathbf H}\mbox{(standard)} \\ \hline \end{array}\] \end{thm}
\begin{thm}\label{thm:2} If $k=n-1$ or $k=n-2$, then $k=0$. \end{thm}
Theorems~\ref{thm:1} and~\ref{thm:2} will be proved as corollaries of some other, stronger results, see Corollaries~\ref{cor:one} and~\ref{cor:cod2} respectively. A couple of remarks are in order. Theorem~\ref{thm:2} says that in the nonpolar case a nontrivial minimal $k$-section must have codimension at least $3$. Also, the three representations listed in the table of Theorem~\ref{thm:1} are precisely the irreducible representations of cohomogeneity~$3$ that are not polar (see~\cite{Y,D}). In fact, according to the main result of~\cite{GTh3} (see also~\cite{GTh1}), these three representations together with the polar ones precisely comprise all the taut irreducible representations. Hence, we have the following beautiful characterization of taut irreducible representations.
\begin{thm}\label{thm:3} An irreducible representation of a compact Lie group is taut if and only if $k=0$ or $k=1$. \end{thm}
Regarding Theorem~\ref{thm:3}, it is worth pointing out that the case $k=1=n$ is impossible, for such a representation would be orbit equivalent to a linear circle action and hence, by irreducibility, it would have to be the standard action of $\SO2$ on $\mathbf R^2$, which has $k=0$. Notice that Theorems~\ref{thm:1} and~\ref{thm:3} cease to hold if the representation is not irreducible, as can be seen by taking the $7$-dimensional representation of $\U2$ given by the direct sum of the vector representation on $\mathbf C^2$ and the adjoint representation on $\mathfrak{su}(2)$. (It is interesting to remark that this representation still has cohomogeneity $3$.)
Another result we would like to explain here is the following. Let $(G,V)$ be an orthogonal representation of nontrivial copolarity. It is easy to see that the $G$-translates of a nontrivial minimal $k$-section naturally determine a group invariant foliation $\mathcal F$ on the $G$-regular set of $V$ (in fact, here the $k$-section need not be minimal, but we do not go into details in this introduction). We prove the following theorem (see Theorem~\ref{thm:int}).
\begin{thm} If the distribution orthogonal to $\mathcal F$ is integrable, then $(G,V)$ is orbit equivalent to a direct product representation $(G_1\times G_2,V_1\oplus V_2)$, where $G_1$, $G_2$ are subgroups of $G$, $(G_1,V_1)$ is a polar representation and $(G_2,V_2)$ is any representation; here the leaves of the distribution orthogonal to $\mathcal F$ correspond to the $G_1$-orbits. In particular, if $(G,V)$ is nonpolar then it cannot be irreducible. \end{thm}
In this paper we also relate the copolarity of an isometric action to the concept of variationally complete actions which was introduced by Bott in \cite{B} (see also~\cite{BS}). Roughly speaking, an isometric action of a compact Lie group on a complete Riemannian manifold is \emph{variationally complete} if it produces enough Jacobi fields along geodesics to determine the multiplicities of focal points to the orbits (see Section~\ref{sec:varcomp} for the precise definition). Conlon proved in~\cite{C} that a hyperpolar action of a compact Lie group on a complete Riemannian manifold is variationally complete. On the other hand, it is known that a variationally complete representation is polar~\cite{DO,GTh1}, and that a variationally complete action on a compact symmetric space is hyperpolar~\cite{GTh4}. This implies that the converse to Conlon's theorem is true for actions on Euclidean spaces or compact symmetric spaces. In this paper we introduce the notion of variational co-completeness of an isometric action and prove that it does not exceed $k$ for an action that admits a flat $k$-section (Theorem~\ref{thm:Conlon}). This reduces to Conlon's theorem for $k=0$. We also prove a weak converse of this result in the case of representations (Theorem~\ref{thm:converse}).
The paper is organized as follows. We first define $k$-sections and the copolarity of an isometric action (Section~\ref{sec:k-sec}) and present some examples (Section~\ref{sec:ex}). Then we go on to develop some of the structural theory of copolarity $k$ representations. In particular, we show that the copolarity of an orthogonal representation behaves well with respect to taking slice representations (Theorem~\ref{thm:slice-copolar}) and forming direct sums (Theorem~\ref{thm:red}), and we obtain a reduction principle in terms of $k$-sections (Theorem~\ref{thm:reduction}). We also show that the codimension of a nontrivial minimal $k$-section of an irreducible representation is at least $3$ (Corollary~\ref{cor:cod2}), and characterize the orthogonal representations admitting a minimal $k$-section whose orthogonal distribution is integrable (Theorem~\ref{thm:int}). We describe the geometry of a principal orbit of a representation of copolarity one (Theorem~\ref{thm:one}) and classify the irreducible representations of copolarity one (Corollary~\ref{cor:one}). We finally prove the extension of Conlon's theorem for copolarity $k$ actions (Theorem~\ref{thm:Conlon}) and its weak converse in the case of representations (Theorem~\ref{thm:converse}). As a corollary, we generalize a result about polar representations (Corollary~\ref{cor:normal}).
As a final note, we recall that the principal orbits of polar representations can be characterized as being the only compact homogeneous isoparametric submanifolds of Euclidean space~\cite{PT}. An open problem in the area is to similarly characterize the principal orbits of more general orthogonal representations in terms of their submanifold geometry and topology. We believe that orthogonal representations of low copolarity may serve as testing cases for this problem.
The first author wishes to thank Prof.~Gudlaugur Thorbergsson for very useful conversations. Part of this work was completed while the third author was visiting University of S\~ao Paulo (USP), for which he wishes to thank Prof.~Ant\^onio Carlos Asperti and the other colleagues from USP for their hospitality, and FAPESP for financial support.
\section{Actions admitting $k$-sections}\label{sec:k-sec} \setcounter{thm}{0}
Let $(G,M)$ be an isometric action of the compact Lie group $G$ on the complete Riemannian manifold $M$. A $k$-\emph{section} for $(G,M)$, where $k$ is a nonnegative integer, is a connected, complete submanifold $\Sigma$ of $M$ such that the following hold: \begin{enumerate} \item[(C1)] $\Sigma$ is totally geodesic in $M$; \item[(C2)] $\Sigma$ intersects all $G$-orbits; \item[(C3)] for every $G$-regular point $p\in\Sigma$ we have that
$T_p\Sigma$ contains the normal space $\nu_p(Gp)$ as a subspace of codimension $k$; \item[(C4)] for every $G$-regular point $p\in\Sigma$ we have that if $gp\in\Sigma$ for some $g\in G$, then $g\Sigma=\Sigma$. \end{enumerate} If $\Sigma$ is a $k$-section through $p$, then $g\Sigma$ is a $k$-section through $gp$ for any $g\in G$. We also remark that: since a $k$-section $\Sigma$ is connected, complete and totally geodesic, for every $p\in\Sigma$ we have that $\Sigma=\exp_p T_p\Sigma$; and, since the $G$-orbits are compact, for every $p\in M$ we have that the set $\exp_p\nu(Gp)$ intersects all $G$-orbits. Using these remarks, it is easy to see that, given a $G$-regular $p\in M$, the connected component containing $p$ of the intersection of a $k_1$-section and a $k_2$-section passing through $p$ is a smooth submanifold and it is a $k$-section passing through $p$ with $k\leq\min\{k_1,k_2\}$. It is also clear that the ambient $M$ is a trivial $k$-section for $k$ equal to the dimension of a principal orbit. It follows that the set of $k$-sections, $k=0,1,2,\ldots$, passing through a fixed regular point admits a unique minimal element. We say that the \emph{copolarity} of $(G,M)$ is $k_0$, and we write $\copol{G}{M}=k_0$, if that minimal element is a $k_0$-section. In this way, the copolarity is well defined for any isometric action $(G,M)$ as being an integer~$k_0$ between zero and the dimension of a principal orbit, and then a $k_0$-section is uniquely determined through any given regular point. We say that $(G,M)$ has \emph{nontrivial copolarity} if $k_0$ is strictly less than the dimension of a principal orbit (or, equivalently, if a $k_0$-section is properly contained in $M$). Since a $0$-section is simply a section, we have that $\copol{G}{M}=0$ if and only if $(G,M)$ is polar. Note that the set of connected, complete submanifolds of $M$ passing through a fixed $G$-regular point and satisfying only conditions (C1), (C2) and (C3) is also closed under connected intersection, and that a minimal element in this set automatically satisfies condition (C4), so that it represents the same minimal $k_0$-section. \emph{It follows from this observation that, in order to show that an isometric action has copolarity at most $k$, it is enough to construct a connected, complete submanifold of codimension $k$ satisfying conditions (C1), (C2) and (C3).} On the other hand, note that many of our applications will not depend on the fact that a $k$-section is minimal, but rather will depend on the fact that it satisfies condition (C4).
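For instance, for the standard action of $\SO2$ on $\mathbf R^2$, every line $\Sigma=\mathbf Rp$ through the origin is a $0$-section: at any $G$-regular point, that is, at any $p\neq0$,
\[
 T_p(\SO2\, p)=(\mathbf Rp)^{\perp}, \qquad \nu_p(\SO2\, p)=\mathbf Rp=T_p\Sigma ,
\]
so that conditions (C1)--(C4) are immediately verified and $\copol{\SO2}{\mathbf R^2}=0$, in accordance with the fact that this action is polar.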
Next we discuss the conditions in the definition of a $k$-section. Note that if $k=0$ condition~(C4) is unnecessary and also~(C1) follows rather easily from~(C2) and~(C3), but in general we cannot dispense with them. A standard argument also shows that in the general case condition~(C2) follows from~(C3), if the latter is not empty, namely if we assume that $\Sigma$ contains a regular point. Condition~(C4) (combined with~(C2)) is equivalent to the fact that the $G$-translates of $\Sigma$ define a foliation in the regular set of $M$. Even more interesting is the following rephrasing of condition~(C4). (Note that condition~(C3) implies that the intersection of a $k$-section with a principal orbit is a smooth manifold.)
\begin{prop}\label{prop:distr} Let $(G,M)$ be an isometric action of the compact Lie group $G$ on the complete Riemannian manifold $M$. Suppose that $\Sigma$ is a connected, complete submanifold of $M$ satisfying conditions~(C1), (C2) and~(C3). For every regular $p\in\Sigma$ define a $k$-dimensional subspace $\mbox{$\mathcal D$}_p$ of $T_p(Gp)$ by $\mbox{$\mathcal D$}_p=T_p\Sigma\cap T_p(Gp)$. Then condition~(C4) is equivalent to any one of the following assertions: \begin{enumerate} \item[(a)] the subspaces $\{\mbox{$\mathcal D$}_p\}$ extend to a $k$-dimensional, $G$-invariant distribution $\mbox{$\mathcal D$}$ on the regular set of $M$; \item[(b)] for every $G$-regular point $p\in\Sigma$ we have that if $gp\in\Sigma$ for some $g\in G$, then $g_*\mbox{$\mathcal D$}_p=\mbox{$\mathcal D$}_{gp}$; \item[(c)] there exists a principal orbit $\xi$ such that if $p$, $gp\in\Sigma\cap\xi$ for some $g\in G$, then $g_*\mbox{$\mathcal D$}_p=\mbox{$\mathcal D$}_{gp}$. \end{enumerate} Moreover, any one of the above conditions implies that the stabilizer $G_\Sigma$ of $\Sigma$ acts transitively on the intersection of $\Sigma$ with any principal orbit. \end{prop}
{\em Proof}. Since $\Sigma$ is connected, complete and totally geodesic in $M$, for any $G$-regular point $p\in\Sigma$ we have that $\Sigma=\exp_p T_p\Sigma$, where $T_p\Sigma=\mbox{$\mathcal D$}_p\oplus\nu_p(Gp)$. We now show that~(c) implies~(C4). Assume that~(c) holds with respect to the principal orbit $\xi$. Let $q\in\Sigma$ be a $G$-regular point. There exists a minimal geodesic from~$q$ to~$\xi$. Therefore we can write $p=\exp_q v$ for some $p\in\xi$ and $v\in\nu_q(Gq)$. Conditions~(C3) and~(C1) imply that $p\in\Sigma$. Now if $gq\in\Sigma$ for some $g\in G$, then $gp=\exp_{gq}g_*v\in\exp_{gq}(\nu_{gq}(Gq))\subset \exp_{gq}(T_{gq}\Sigma)=\Sigma$. Since $p$, $gp\in\Sigma\cap\xi$, by~(c) we deduce that $g_*\mbox{$\mathcal D$}_p=\mbox{$\mathcal D$}_{gp}$. It follows that $g\Sigma=g\exp_p(\mbox{$\mathcal D$}_p\oplus\nu_p(Gp))= \exp_{gp}(g_*\mbox{$\mathcal D$}_p\oplus g_*\nu_p(Gp))= \exp_{gp}(\mbox{$\mathcal D$}_{gp}\oplus\nu_{gp}(Gp))=\Sigma$, and this gives~(C4).
Notice that~(a) is just a reformulation of~(b), and the fact that $G_\Sigma$ is transitive on the intersection of $\Sigma$ with any principal orbit follows immediately from~(C4). Also, the equivalence between~(C4) and~(b) follows from $\Sigma=\exp_p(\mbox{$\mathcal D$}_p\oplus\nu_p(Gp))$. Since~(b) trivially implies~(c), this completes the proof of the proposition.
$\square$
Let $\Sigma$ be a $k$-section of $(G,M)$ and consider the distribution~$\mbox{$\mathcal D$}$ as in Proposition~\ref{prop:distr}. Define $\mbox{$\mathcal E$}$ to be the distribution on the regular set of $M$ such that $\mbox{$\mathcal E$}_p$ is the orthogonal complement of $\mbox{$\mathcal D$}_p$ in $T_p(Gp)$. If $p$ is $G$-regular, then it follows from the fact that $\Sigma$ is totally geodesic in $M$ that
the distribution $\mbox{$\mathcal D$}|_{Gp}$ is autoparallel in $Gp$ and invariant under the second fundamental form $\alpha$ of $Gp$, in the sense that $\alpha(\mbox{$\mathcal D$},\mbox{$\mathcal E$})=0$ on $Gp$.
The following proposition will be used later to show that some orthogonal representations admit $k$-sections.
\begin{prop}\label{prop:suf} Let $(G,V)$ be an orthogonal representation of a compact Lie group on a Euclidean space $V$. Let $Gp$ be a principal orbit. Suppose there is a $G$-invariant, autoparallel, $k$-dimensional distribution $\mbox{$\mathcal D$}$ on $Gp$ such that $\alpha(\mbox{$\mathcal D$},\mbox{$\mathcal E$})=0$, where $\alpha$ is the second fundamental form of $Gp$ and $\mbox{$\mathcal E$}$ is the distribution on $Gp$ orthogonal to $\mbox{$\mathcal D$}$. Define $\Sigma=\mbox{$\mathcal D$}_p\oplus\nu_p(Gp)$. \begin{enumerate} \item[(a)] Suppose that for every $v\in\nu_p(Gp)$ with $q=p+v$ a $G$-regular point we have that $\mbox{$\mathcal E$}_p\subset T_q(Gq)$. Then $\Sigma$ satisfies conditions~(C1), (C2) and~(C3) in the definition of a $k$-section. \item[(b)] Suppose that, in addition to the hypothesis in~(a), for every $v\in\nu_p(Gp)$ with $q=p+v\in Gp$ we have that $\mbox{$\mathcal E$}_p=\mbox{$\mathcal E$}_q$. Then $\Sigma$ is a $k$-section for $(G,V)$. \end{enumerate} \end{prop}
{\em Proof}. Let $\beta$ be the maximal integral submanifold of $\mbox{$\mathcal D$}$ through $p$. Since $\beta$ is totally geodesic in $Gp$ and $\alpha(\mbox{$\mathcal D$},\mbox{$\mathcal E$})=0$, we have that the covariant derivative in $V$ of an $\mbox{$\mathcal E$}$-section along $\beta$ is in $\mbox{$\mathcal E$}$, which implies that $\mbox{$\mathcal E$}$ is constant in $V$ along $\beta$. Therefore $\beta$ is contained in the affine subspace of $V$ orthogonal to $\mbox{$\mathcal E$}_p$, which is $\Sigma$; in fact, $\beta$ is the connected component of $\Sigma\cap Gp$ containing $p$.
Now if $gp\in\beta$ then $\mbox{$\mathcal E$}_p=\mbox{$\mathcal E$}_{gp}$. Taking orthogonal complements in $V$, we get that $\Sigma=\mbox{$\mathcal D$}_{gp}\oplus\nu_{gp}(Gp)$; but the right hand side in turn equals $g\Sigma$, as $\mbox{$\mathcal D$}$ is $G$-invariant. This shows that conditions~(C3) and~(C4) are already verified for points in~$\beta$.
Next let $\gamma\neq\beta$ be a connected component of the intersection of $\Sigma$ with a principal orbit (which possibly could be $Gp$). Let $q\in\gamma$ and consider the minimal geodesic in $V$ from $q$ to $\beta$. Then we can write $q=gp+v$, where $gp\in\beta$ for some $g\in G$ and $v\in\nu_{gp}(Gp)$. Since $g^{-1}q=p+g^{-1}v$, we have $T_q(Gq)=gT_{g^{-1}q}(Gq)\supset g\mbox{$\mathcal E$}_p=\mbox{$\mathcal E$}_{gp}=\mbox{$\mathcal E$}_p$ (where the inclusion follows from the hypothesis in~(a)). Taking orthogonal complements in $V$, we get that $\nu_q(Gq)\subset\Sigma$. This shows that condition~(C3) is fully verified. If, in addition, $\gamma$ is a connected component of $\Sigma\cap Gp$ and $q=hp$ for some $h\in G$, then we can write $h\mbox{$\mathcal E$}_p=\mbox{$\mathcal E$}_q=g\mbox{$\mathcal E$}_{g^{-1}q}= g\mbox{$\mathcal E$}_p=\mbox{$\mathcal E$}_{gp}=\mbox{$\mathcal E$}_p$ (where the equality $\mbox{$\mathcal E$}_{g^{-1}q}=\mbox{$\mathcal E$}_p$ follows from the hypothesis in~(b)). Taking orthogonal complements we get $h\Sigma=\Sigma$ which, by Proposition~\ref{prop:distr}(c), finally implies condition~(C4).
$\square$
\section{Examples}\label{sec:ex} \setcounter{thm}{0}
In this section we exhibit some examples of actions admitting $k$-sections.
\subsection{Product actions}
Let $(G_1,M_1)$ be a polar action with a section $\Sigma_1$, and let $(G_2,M_2)$ be any isometric action. Let $G=G_1\times G_2$, $M=M_1\times M_2$ and consider the product action $(G,M)$. Then it is easy to see that $\Sigma=\Sigma_1\times M_2$ is a $k$-section of $(G,M)$, where $k$ is the dimension of a principal orbit of $(G_2,M_2)$. Note that in this example the distribution $\mbox{$\mathcal E$}$ is integrable and its leaves are of the form $\{q_1\}\times G_2q_2$, that is, copies of the $G_2$-orbits in $M_2$. We will show later that, in the case of orthogonal representations, this example is essentially the only one whose distribution $\mbox{$\mathcal E$}$ is integrable.
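For a concrete instance (an elementary illustration, recorded only for orientation): let $G_1=\SO2$ act by rotations on a Euclidean plane $M_1$, with section $\Sigma_1$ a line through the origin, and let $G_2=\SO3$ act in the standard way on a $3$-dimensional Euclidean space $M_2$, so that the principal $G_2$-orbits are round $2$-spheres. Then
\[
\Sigma=\Sigma_1\times M_2
\]
is a $2$-section of the product action of $\SO2\times\SO3$ on $M_1\times M_2$, and the leaf of $\mbox{$\mathcal E$}$ through a regular point $(q_1,q_2)$ is $\{q_1\}\times\SO3 q_2$, a round $2$-sphere.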
\subsection{The reduction principle in compact transformation groups}
Let $(G,M)$ be an isometric action and fix a principal isotropy subgroup $H$. Then the $H$-fixed point set $\Sigma=M^H$ is a $k$-section for $(G,M)$, where $k$ is the difference between the dimension of $\Sigma$ and the cohomogeneity of $(G,M)$. In fact, $\Sigma$ is a totally geodesic submanifold of $M$, being the common fixed point set of a set of isometries of $M$. This is condition (C1) in the definition of a $k$-section. Condition~(C3) follows from the fact that the slice representation at a regular point is trivial. Moreover, if $p$, $gp\in\Sigma$ are regular points for some $g\in G$, then both the isotropy subgroups at $p$ and $gp$ are $H$, so $g$ normalizes $H$ and therefore preserves $\Sigma$. This is condition~(C4). Note also that $\Sigma$ meets every orbit, since every isotropy group contains a conjugate of $H$, so that every orbit contains a point fixed by $H$; the remaining conditions follow.
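As a classical illustration (a standard polar example, included only to fix ideas): let $\SO{n}$, $n\geq3$, act by conjugation on the space of traceless symmetric $n\times n$ real matrices. The principal isotropy group at a diagonal matrix with pairwise distinct eigenvalues is
\[
H=\{\mbox{diag}(\varepsilon_1,\dots,\varepsilon_n):\ \varepsilon_i=\pm1,\ \varepsilon_1\cdots\varepsilon_n=1\},
\]
and $M^H$ is the set of traceless diagonal matrices; its dimension $n-1$ equals the cohomogeneity, so in this case the construction produces a section of a polar action and $k=0$.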
It is interesting to notice that sometimes the group $G$ can be enlarged to another group $\hat G$ that has the same orbits as $G$ on $M$ but has a larger principal isotropy subgroup $\hat H$, and then the $\hat H$-fixed point set is smaller than the $H$-fixed point set (see~\cite{S1}).
This example is very important in the sense that it shows that if $(G,M)$ is an arbitrary isometric action, and $\Sigma$ is a $k$-section which is \emph{minimal} (so that $\copol{G}{M}=k$), then we always have that $\Sigma\subset M^{G_p}$, for any $G$-regular point $p\in\Sigma$. Therefore $G_p$ acts trivially on $\mbox{$\mathcal D$}_p=T_p\Sigma\cap T_p(Gp)$. We do not know of any example of an isometric action such that the minimal $k$-section is \emph{strictly} contained in the fixed point set of a principal isotropy subgroup.
In any case, $G_p\subset G_\Sigma$ for a minimal $k$-section $\Sigma$ and a $G$-regular point $p\in\Sigma$. This implies that the isotropy subgroup of $G_\Sigma$ at~$p$ is $G_p$. Since $G_p$ acts trivially on $\Sigma$, using Lemma~\ref{lem:inj} below, we get that $Gp\cap\Sigma=G_\Sigma p=G_\Sigma/G_p$ is a group manifold.
\subsection{Some orthogonal representations of low copolarity}\label{subsec:low}
Some calculations in~\cite{GTh3,GTh1} involving the reduction principle in transformation groups produced some examples of irreducible orthogonal representations of low copolarity. \begin{enumerate} \item[(i)] The three irreducible representations of cohomogeneity $3$ have copolarity $1$. These are listed in the table of Theorem~\ref{thm:1}. \item[(ii)] The tensor product of the vector representation of $\SO3$ and the $7$-dimensional representation of $\mbox{$\mathbf{G}_2$}$ is an irreducible representation of copolarity $2$. \item[(iii)] Let $(S^1\times H, V)$ be a polar irreducible representation which is \emph{Hermitian}, namely one that leaves a complex structure on $V$ invariant. Assume that the restricted representation $(H,V)$ is not orbit equivalent to $(S^1\times H, V)$. Then $(H,V)$ has nontrivial copolarity $k>0$. (There are four families of such Hermitian polar irreducible representations, and perhaps the simplest example is the action of the unitary group $\U3$ on the space of complex symmetric bilinear forms in three variables. The induced representation of $\SU3$ has real dimension $12$, cohomogeneity $4$ and copolarity $2$; a dimension count for this example is spelled out after the list.) \end{enumerate}
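To fix the numerology in the $\SU3$-example of item~(iii) (only the dimension count is spelled out here; the cohomogeneity and the copolarity are the values quoted above): the space of complex symmetric bilinear forms in three variables has complex dimension $6$, so that, for a principal orbit $Gp$ and a minimal $2$-section $\Sigma=\mbox{$\mathcal D$}_p\oplus\nu_p(Gp)$,
\[
\dim V=12,\qquad \dim Gp=12-4=8,\qquad \dim\Sigma=2+4=6,\qquad \dim\mbox{$\mathcal E$}_p=8-2=6.
\]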
\subsection{Examples derived from polar actions}
Let $(G,V)$ be a polar representation of a compact Lie group $G$ on a Euclidean space. Then, of course, $\copol{G}{V}=0$. Nevertheless, in general there are interesting examples of $k$-sections for $(G,V)$ with $k>0$.
In fact, let $Gp$ be a principal orbit. It is known that equivariant normal vector fields along $Gp$ are parallel in the normal connection and therefore $Gp$ is an \emph{isoparametric submanifold} of $V$, namely the principal curvatures of $Gp$ along a parallel normal vector field are constant and the normal bundle of $Gp$ in $V$ is globally flat. Let $v\in\nu_p(Gp)$, $A_{v}$ the Weingarten operator in the direction of $v$ and $\lambda$ a nonzero principal curvature in the direction of $v$ with multiplicity $k$. Since the subspace $\ker(A_v-\lambda\mbox{id}_{T_p(Gp)})$ of $T_p(Gp)$ is $G_p$- and $A_v$-invariant, it extends to a $G$-invariant distribution $\mbox{$\mathcal D$}$ on $Gp$ which is invariant under the second fundamental form of $Gp$, and one can see from the Codazzi equation that $\mbox{$\mathcal D$}$ is also autoparallel. Moreover, it is very easy to deduce from the theory of isoparametric submanifolds that the conditions on $\mbox{$\mathcal E$}$ from Proposition~\ref{prop:suf} are satisfied, so $\Sigma=\mbox{$\mathcal D$}_p\oplus\nu_p(Gp)$ is a $k$-section.
\section{Structural theory of actions admitting $k$-sections} \setcounter{thm}{0}
\subsection{Slice representations}
In this section we will prove that the copolarity of a slice representation is not bigger than the copolarity of the original representation (Theorem~\ref{thm:slice-copolar}). Let $(G,M)$ be an isometric action of the compact Lie group $G$ on the complete Riemannian manifold $M$. Let $q\in M$. Then $G_q$ acts on $T_qM$ via the differential, and $T_qM=T_q(Gq)\oplus\nu_q(Gq)$ is an invariant decomposition. The orthogonal representation $(G_q,\nu_q(Gq))$ is called the \emph{slice representation at~$q$}.
\begin{lem}\label{lem:slice-reg} The following assertions are equivalent: \begin{enumerate} \item[(a)] $v\in\nu_q(Gq)$ is $G_q$-regular. \item[(b)] There exists $\epsilon>0$ such that $\exp_q(tv)$ is $G$-regular for $0<t<\epsilon$. \item[(c)] $\exp_q(t_0v)$ is $G$-regular for some $t_0>0$. \end{enumerate} \end{lem}
{\em Proof}. (c) implies (b): Let $p=\exp_q(t_0v)$ be $G$-regular. Then there exists $\epsilon>0$ such that \[ G_{\exp_q(tv)}=(G_q)_{tv}=(G_q)_v\subset G_p\subset G_q, \] for $0<t<\epsilon$, where the first equality follows from the fact that the exponential map is an equivariant diffeomorphism of a small normal disk of radius $\epsilon$ onto the image, and the last inclusion follows from the fact that the slice representation at $p$ is trivial as $p$ is $G$-regular. Again by the $G$-regularity of~$p$, we have that $G_{\exp_q(tv)}=G_p$ for $0<t<\epsilon$, and hence $\exp_q(tv)$ is $G$-regular for $0<t<\epsilon$.
Clearly (b) implies (c), and the equivalence of (a) with (b) follows from the equality $G_{\exp_q(tv)}=(G_q)_v$ for $0<t<\epsilon$ and the slice theorem.
$\square$
In the following we consider the case of an orthogonal representation $(G,V)$. Let $q\in V$, choose a $k$-section $\Sigma$ through $q$ and consider the slice representation $(G_q,\nu_q(Gq))$.
\begin{lem} There exists $v\in\Sigma\cap\nu_q(Gq)$ which is a $G_q$-regular point. \end{lem}
{\em Proof}. Let $\xi$ be a principal $G$-orbit and choose a connected component $\beta$ of $\Sigma\cap\xi$. Let $c(t)=q+tv$, $0\leq t\leq l$, be a minimal geodesic in $\Sigma$ from $q$ to $\beta$. Then $\dot{c}(l)\in\Sigma=T_{c(l)}\beta\oplus\nu_{c(l)}(Gc(l))$. But $\dot{c}(l)$ must be orthogonal to $\beta$, by minimality. Therefore $\dot{c}(l)\in\nu_{c(l)}(Gc(l))$. Since a geodesic orthogonal to an orbit must be orthogonal to every orbit it meets, $v=\dot{c}(0)\in\Sigma\cap\nu_q(Gq)$. It follows from Lemma~\ref{lem:slice-reg} that $v$ is $G_q$-regular.
$\square$
For the $G$-regular $p=q+tv\in V$, $0<t<\epsilon$, we then have that $\Sigma=\nu_p(Gp)\oplus\mbox{$\mathcal D$}_p$, where $\mbox{$\mathcal D$}$ has rank $k$.
\begin{lem} There is an orthogonal decomposition \[ T_p(G_q p)= \big(T_p(G_q p)\cap\mbox{$\mathcal D$}_p\big)\oplus\big(T_p(G_q p)\cap\mbox{$\mathcal E$}_p\big). \] \end{lem}
{\em Proof}. Note that $T_p(Gp)=\mbox{$\mathcal D$}_p\oplus\mbox{$\mathcal E$}_p$ is an $A_v$-invariant decomposition, where $A_v$ denotes the Weingarten operator of $Gp$ with respect to the (say, unit) vector $v\in\nu_p(Gp)$. We first claim that $T_p(G_q p)$ is contained in the eigenspace of $A_v$
corresponding to the eigenvalue $-1/t$. In fact, let $u=\frac{d}{ds}|_{s=0}\varphi_s(p)\in T_p(G_q p)$ for $\varphi_s\in G_q$, $\varphi_0=1$. Define $\hat{v}(s)=\varphi_sv$, a normal vector field along $s\mapsto\varphi_sp$. Then $\hat v(s)=\frac{1}{t}(\varphi_sp-q)$ and \[ -A_vu+\nabla_u^\perp\hat{v}=\frac{d}{ds}
\Big|_{s=0}\hat{v}(s)=\frac{1}{t}u, \] so, by taking tangent components, we get that $A_vu=-\frac{1}{t}u$.
Next write $u=u'+u''$ where $u'\in\mbox{$\mathcal D$}_p$ and $u''\in\mbox{$\mathcal E$}_p$. Then $u'$ and $u''$ are eigenvectors of $A_v$ with eigenvalue $-1/t$. Since $u''\in\mbox{$\mathcal E$}_p$, we have that $u''\in T_p(G_q p)$ by Corollary~\ref{cor:partial-vc}. Therefore, we also have that $u'\in T_p(G_q p)$.
$\square$
Let $\mbox{$\mathcal D$}_{1p}=T_p(G_q p) \cap\mbox{$\mathcal D$}_p$ and $\mbox{$\mathcal E$}_{1p}=T_p(G_q p) \cap\mbox{$\mathcal E$}_p$. Let $\mbox{$\mathcal D$}_{2p}$ be the orthogonal complement of $\mbox{$\mathcal D$}_{1p}$ in $\mbox{$\mathcal D$}_p$ and let $\mbox{$\mathcal E$}_{2p}$ be the orthogonal complement of $\mbox{$\mathcal E$}_{1p}$ in $\mbox{$\mathcal E$}_p$.
\begin{lem}\label{lem:d2perp} We have that $\mbox{$\mathcal E$}_{2p}\subset T_q(Gq)$. \end{lem}
{\em Proof}. Let $u=\frac{d}{ds}|_{s=0}\varphi_sp\in\mbox{$\mathcal E$}_{2p}$ with $\varphi_s\in G$, $\varphi_0=1$. Without loss of generality we may assume that $A_vu=\lambda u$, and then $\lambda\neq-1/t$ because the $-1/t$ eigenvectors in $\mbox{$\mathcal E$}_p$ belong to $T_p(G_q p)$ (Corollary~\ref{cor:partial-vc}). Let $r:Gp\to Gq$ be the canonical equivariant submersion. We compute
\[ r_*(u)=\frac{d}{ds}\Big|_{s=0}\underbrace{r\varphi_s}_{=\varphi_sr} (p)
=\frac{d}{ds}\Big|_{s=0}\underbrace{\varphi_s (q)}_{=\varphi_s(p) -t\varphi_s(v)}
=u-t\frac{d}{ds}\Big|_{s=0}\hat v(s), \] so
\[ u=r_*(u)+t(-A_vu+\nabla_u^\perp\hat v). \] Now $\nabla_u^\perp\hat v=0$ because $u\in\mbox{$\mathcal E$}_p$ (Corollary~\ref{cor:partial-parallel}). Hence $u=(1+\lambda t)^{-1}r_*(u)\in T_q(Gq)$.
$\square$
Notice that each one of $\mbox{$\mathcal D$}_{1p}$, $\mbox{$\mathcal E$}_{1p}$, $\nu_p(Gp)\oplus\mbox{$\mathcal D$}_{2p}$ and $\mbox{$\mathcal E$}_{2p}$ is a constant subspace of $V$ with respect to $t\in(0,\epsilon)$, where~$p=q+tv$.
\begin{lem}\label{lem:slice-intersect} We have that $\Sigma\cap\nu_q(Gq)$ intersects all $G_q$-orbits. \end{lem}
{\em Proof}. It follows from Lemma~\ref{lem:d2perp} that $\nu_q(Gq)\subset\Sigma\oplus\mbox{$\mathcal E$}_{1p}$. Let $N_v$ be the normal space to $T_p(G_q p)$ in $\nu_q(Gq)$. Then $N_v$ is orthogonal to $\mbox{$\mathcal E$}_{2p}$ so that \[ \nu_q(Gq) = \underbrace{T_p(G_q p)}_{=\mbox{$\mathcal D$}_{1p}\oplus\mbox{$\mathcal E$}_{1p}}
\oplus\underbrace{N_v}_{\subset\nu_p(Gp)\oplus\mbox{$\mathcal D$}_{2p}}. \] Hence $\nu_q(Gq)\cap\Sigma=N_v\oplus\mbox{$\mathcal D$}_{1p}$. Since $\nu_q(Gq)\cap\Sigma$ contains $N_v$ and $v$ is $G_q$-regular, we get that $\nu_q(Gq)\cap\Sigma$ intersects all $G_q$-orbits.
$\square$
For a $G_q$-regular $w\in\nu_q(Gq)\cap\Sigma$, we have that $q+tw$ is $G$-regular for $t>0$ small, and thus it makes sense to define $\mbox{$\mathcal D$}_{1w}=\mbox{$\mathcal D$}_{q+tw}\cap T_{q+tw}G_q(q+tw)$. It is clear that $\mbox{$\mathcal D$}_1$ is a $G_q$-invariant distribution on the $G_q$-regular set of $\nu_q(Gq)$. Moreover, $\nu_q(Gq)\cap\Sigma=N_w\oplus\mbox{$\mathcal D$}_{1w}$, as above. It follows that $\nu_q(Gq)\cap\Sigma$ is a $k_1$-section for $(G_q,\nu_q(Gq))$, where $k_1$ is the dimension of $\mbox{$\mathcal D$}_1$. Since the dimension of $\mbox{$\mathcal D$}_1$ is not bigger than the dimension of $\mbox{$\mathcal D$}$, we finally conclude:
\begin{thm}\label{thm:slice-copolar} If $\mbox{copol}(G,V)\leq k$, then $\mbox{copol}(G_q,\nu_q(Gq))\leq k$. \end{thm}
\subsection{The reduction}
In this section we establish a reduction principle for orthogonal representations in terms of $k$-sections (Theorem~\ref{thm:reduction}). Let $(G,V)$ be an orthogonal representation admitting a $k$-section $\Sigma$.
\begin{lem}\label{lem:trans-sec} Let $q\in V$. Then the isotropy subgroup $G_q$ is transitive on the set $\mathcal F$ of $k$-sections through $q$ which are $G$-translates of $\Sigma$, namely $\mathcal F=\{g\Sigma:g\in G,\, q\in g\Sigma\}$. \end{lem}
{\em Proof}. The result is trivial for a $G$-regular point $q$, since in this case there is a unique element in $\mathcal F$ by condition (C4). Assume $q$ is not a $G$-regular point, and let $\Sigma_1$, $\Sigma_2\in\mathcal F$. Let $S$ be a normal slice at $q$ and choose a $G_q$-regular $p\in S$. By Lemma~\ref{lem:slice-intersect}, $\Sigma_i\cap\nu_q(Gq)$ intersects all $G_q$-orbits on $S$, $i=1$, $2$. Therefore we can select $h_i\in G_q$ such that $h_ip\in\Sigma_i$. Now~$p\in h_1^{-1}\Sigma_1\cap h_2^{-1}\Sigma_2$ where $p$ is $G$-regular, so $h_2^{-1}h_1\Sigma_1=\Sigma_2$ where $h_2^{-1}h_1\in G_q$.
$\square$
\begin{lem}\label{lem:inj} Let $G_\Sigma$ be the stabilizer of $\Sigma$. Then $G_\Sigma q=Gq\cap\Sigma$, for all $q\in\Sigma$. \end{lem}
{\em Proof}. Let $p$, $q\in\Sigma$ be in the same $G$-orbit. We need to show that they are in the same $G_\Sigma$-orbit. If they belong to a principal $G$-orbit, then the result follows from Proposition~\ref{prop:distr}. Suppose $p$ and $q$ are not $G$-regular points and $q=gp$ for some $g\in G$. Then $q\in\Sigma\cap g\Sigma$. By Lemma~\ref{lem:trans-sec}, there is $h\in G_q$ such that $h\Sigma=g\Sigma$. Then $h^{-1}g\in G_\Sigma$ and $(h^{-1}g)p=q$.
$\square$
\begin{thm}\label{thm:reduction} The inclusion $\Sigma\to V$ induces a homeomorphism $\Sigma/G_\Sigma\to V/G$. \end{thm}
{\em Proof}. The surjectivity is implied by the fact that $\Sigma$ intersects \emph{all} $G$-orbits. The injectivity is precisely Lemma~\ref{lem:inj}. Since the inclusion $\Sigma\to V$ is continuous, the induced map $\Sigma/G_\Sigma\to V/G$ is continuous. The restriction $S(\Sigma)/G_\Sigma\to S(V)/G$, where $S(\Sigma)\subset\Sigma$, $S(V)\subset V$ are the unit spheres, is also a continuous bijection, and thus a homeomorphism, as $S(\Sigma)/G_\Sigma$ is compact. Since $\Sigma/G_\Sigma\to V/G$ is the cone over $S(\Sigma)/G_\Sigma\to S(V)/G$, it follows that $\Sigma/G_\Sigma\to V/G$ is a homeomorphism.
$\square$
In the remainder of this section we study the intersection of the $k$-section with a nonprincipal orbit. Let $q\in\Sigma$ be a singular point.
\begin{lem}\label{lem:sigma-decomp} We have an orthogonal decomposition $\Sigma=\Sigma\cap T_q(Gq)\oplus\Sigma\cap\nu_q(Gq)$. \end{lem}
{\em Proof}. Write $\Sigma=T_q(G_\Sigma q)\oplus\nu^\Sigma_q(G_\Sigma q)$, where $\nu^\Sigma_q(G_\Sigma q)$ denotes the normal space to $G_\Sigma q$ at $q$ in $\Sigma$. Lemma~\ref{lem:inj} implies that $T_q(G_\Sigma q)=\Sigma\cap T_q(Gq)$. We still need to show that $\nu^\Sigma_q(G_\Sigma q)=\Sigma\cap\nu_q(Gq)$ in order to complete the proof.
Since $\Sigma\cap\nu_q(Gq)\subset\nu^\Sigma_q(G_\Sigma q)$ and $\Sigma\cap\nu_q(Gq)$ intersects all $(G_q,\nu_q(Gq))$-orbits by Lemma~\ref{lem:slice-intersect}, we have that there is $v\in\nu^\Sigma_q(G_\Sigma q)$ which is $G_q$-regular. Now by Lemma~\ref{lem:slice-reg} we know that $p=q+t_0v$ is $G$-regular for some $t_0>0$. Since a geodesic orthogonal to an orbit is orthogonal to every orbit it meets (applied here to the $G_\Sigma$-action on $\Sigma$), we must have $v\in\nu_p^\Sigma(G_\Sigma p)$. Since $p$ is $G$-regular, this implies that $v\in\nu_p(Gp)$. Again, by the same remark (now applied to the $G$-action on $V$), $v\in\nu_q(Gq)$.
$\square$
\begin{lem}\label{lem:sigma-sing} Let $r:Gp\to Gq$ be the canonical equivariant submersion, where $p$ is a $G$-regular point in the normal slice at $q$. Then $\Sigma\cap T_q(Gq)=r_*\mbox{$\mathcal D$}_{2p}$ and this is the orthogonal complement of $r_*\mbox{$\mathcal E$}_{2p}=\mbox{$\mathcal E$}_{2p}$ in $T_q(Gq)$. \end{lem}
{\em Proof}. Write $p=q+tv$ where $v\in\nu_q(Gq)$. Let $u\in\mbox{$\mathcal D$}_{2p}\subset T_p(Gp)$. A computation similar to the one done in Lemma~\ref{lem:d2perp} shows that \[ r_*(u)=\underbrace{(\mbox{id}+tA_v)u}_{\in\mbox{$\mathcal D$}_{2p}} +\underbrace{t\nabla^\perp_u\hat v}_{\in\nu_p(Gp)} \] is a vector in $\Sigma$ and therefore orthogonal to $\mbox{$\mathcal E$}_{2p}$. Now \[ T_q(Gq)=r_*T_p(Gp)=r_*\mbox{$\mathcal D$}_p+r_*\mbox{$\mathcal E$}_p= \underbrace{r_*\mbox{$\mathcal D$}_{2p}}_{\subset\Sigma}+\underbrace{r_*\mbox{$\mathcal E$}_{2p}}_{=\mbox{$\mathcal E$}_{2p}}. \] Since $\Sigma$ is orthogonal to $\mbox{$\mathcal E$}_{2p}$, we conclude that the last sum is an orthogonal direct sum.
$\square$
\begin{cor} We have that $r_*\mbox{$\mathcal D$}_{2p}\oplus N_v=\mbox{$\mathcal D$}_{2p}\oplus\nu_p(Gp)$. \end{cor}
{\em Proof}. Consider the orthogonal decomposition $\Sigma=\Sigma\cap T_q(Gq)\oplus\Sigma\cap\nu_q(Gq)$ from Lemma~\ref{lem:sigma-decomp}. On the one hand, we know that $\Sigma=\nu_p(Gp)\oplus\mbox{$\mathcal D$}_{2p}\oplus\mbox{$\mathcal D$}_{1p}$. On the other hand, we have that $\Sigma\cap T_q(Gq)=r_*\mbox{$\mathcal D$}_{2p}$ by Lemma~\ref{lem:sigma-sing} and $\Sigma\cap\nu_q(Gq)=N_v\oplus\mbox{$\mathcal D$}_{1p}$ by the proof of Lemma~\ref{lem:slice-intersect}. This gives the result.
$\square$
\subsection{Reducible representations}
In this section we prove that the copolarity of a direct sum of representations is not smaller than the copolarity of its summand representations. The result is:
\begin{thm}\label{thm:red} Let $(G,V)$ be an orthogonal representation and suppose that $V=V_1\oplus V_2$ is an invariant decomposition. If $\copol{G}{V}\leq k$, then $\copol{G}{V_i}\leq k$ for $i=1$, $2$. \end{thm}
\begin{lem}\label{lem:p_2} Given a $(G,V_1)$-regular point $p_1\in V_1$, there is $p_2\in V_2$ such that $p=(p_1,p_2)$ is $(G,V)$-regular. \end{lem}
{\em Proof}. Consider the representation $(G_{p_1},V_2)$ and take a $(G_{p_1},V_2)$-regular point $p_2\in V_2$. We claim that $p=(p_1,p_2)$ is $(G,V)$-regular. In fact, let $q=(q_1,q_2)\in V_1\oplus V_2$. Since $p_1$ is $(G,V_1)$-regular, there is $h\in G$ such that $G_{p_1}\subset hG_{q_1}h^{-1}=G_{hq_1}$, and therefore $(G_{p_1})_{hq_2}\subset(G_{hq_1})_{hq_2}=G_{hq}$. Now $p_2$ is $(G_{p_1},V_2)$-regular, so there is $g\in G$ such that $G_p=(G_{p_1})_{p_2}\subset g(G_{p_1})_{hq_2}g^{-1}\subset gG_{hq}g^{-1}=(gh)G_q(gh)^{-1}$. Hence, the result.
$\square$
\begin{lem}\label{lem:p_2-Sigma} Let $\Sigma$ be a $k$-section of $(G,V)$. Given a $(G,V_1)$-regular point $p_1\in\Sigma\cap V_1$, there is $p_2\in V_2$ such that $p=(p_1,p_2)$ is $(G,V)$-regular and $p\in\Sigma$. \end{lem}
{\em Proof}. We already know from Lemma~\ref{lem:p_2} that there is $p'_2\in V_2$ such that $p'=(p_1,p'_2)$ is $(G,V)$-regular. Let $g\in G$ be such that $gp'=(gp_1,gp'_2)\in\Sigma$. Note that $gp_1\in\nu_{gp'}(Gp')$. Since $gp'$ is $(G,V)$-regular, $\nu_{gp'}(Gp')\subset\Sigma$ so that $gp_1\in\Sigma$. Now $p_1\in\Sigma\cap g^{-1}\Sigma$. Therefore, by Lemma~\ref{lem:trans-sec}, there is $k\in G_{p_1}$ such that $kg^{-1}\Sigma=\Sigma$. Hence $p=kp'=(p_1,kp'_2)\in\Sigma$.
$\square$
\begin{lem}\label{lem:normal-red} Let $\Sigma$ be a $k$-section of $(G,V)$. Given a $(G,V_1)$-regular point $p_1\in\Sigma\cap V_1$, we have that $\Sigma\cap V_1$ contains $\nu_{p_1}^{V_1}(Gp_1)$ as a subspace of codimension at most $k$, where $\nu_{p_1}^{V_1}(Gp_1)$ denotes the normal space to $Gp_1$ at $p_1$ in $V_1$. \end{lem}
{\em Proof}. Use Lemma~\ref{lem:p_2-Sigma} to find $p_2\in V_2$ such that $p=(p_1,p_2)$ is $(G,V)$-regular and $p\in\Sigma$. Now $\Sigma$ contains $\nu_p(Gp)$ with codimension $k$ and $\nu_{p_1}^{V_1}(Gp_1)=\nu_p(Gp)\cap V_1$.
$\square$
\textit{Proof of Theorem~\ref{thm:red}.} Fix a $(G,V_1)$-regular $p_1\in V_1$ and fix a $k$-section $\Sigma$ for $(G,V)$ with $p_1\in\Sigma$. Define $\Sigma_1=\cap_{h\in G_{p_1}}h\Sigma\cap V_1$ and let $q_1\in\Sigma_1$ be $(G,V_1)$-regular (for instance, $q_1$ could be equal to $p_1$). Then, for all $h\in G_{p_1}$, we have $q_1\in h\Sigma\cap V_1$ and $h\Sigma$ is a $k$-section for $(G,V)$. It follows by Lemma~\ref{lem:normal-red} that $h\Sigma\cap V_1$ contains $\nu_{q_1}^{V_1}(Gq_1)$ as a subspace of codimension at most $k$ for all $h\in G_{p_1}$. Therefore $\Sigma_1$ contains $\nu_{q_1}^{V_1}(Gq_1)$ as a subspace of codimension at most $k$ and hence $\Sigma_1$ satisfies conditions~(C1), (C2) and~(C3). This already implies that $\copol{G}{V_1}\leq k$, but we go on to show that $\Sigma_1$ itself satisfies condition~(C4).
Suppose that $p_1\in\Sigma_1\cap g^{-1}\Sigma_1$ for some $g\in G$. Then $p_1\in h\Sigma\cap g^{-1}h\Sigma$ for all $h\in G_{p_1}$. We use Lemma~\ref{lem:trans-sec} to find $l=l(h)\in G_{p_1}$ such that $lh\Sigma=g^{-1}h\Sigma$. This gives $g^{-1}\Sigma_1=\cap_{h\in G_{p_1}}lh\Sigma\cap V_1\supset\Sigma_1$. Therefore $g\Sigma_1=\Sigma_1$. This shows that the distribution $\mbox{$\mathcal D$}_1$ defined by $\mbox{$\mathcal D$}_{1q_1}=\Sigma_1\cap T_{q_1}(Gq_1)$ for $q_1\in Gp_1$ satisfies assertion~(c) in Proposition~\ref{prop:distr} with respect to the principal $(G,V_1)$-orbit $Gp_1$. It follows that $\Sigma_1$ satisfies condition~(C4) and hence it is a $k_1$-section with $k_1=\dim\mbox{$\mathcal D$}_1\leq\dim\mbox{$\mathcal D$}=k$.
$\square$
\subsection{Minimal $k$-sections, the osculating spaces of orbits and the integrability of $\mbox{$\mathcal E$}$}
In this section we study properties of minimal $k$-sections of orthogonal representations with nontrivial copolarity. (Recall that an isometric action $(G,M)$ has nontrivial copolarity if a minimal $k$-section is a proper subset of $M$.) In particular, we show that in the irreducible case there can be no minimal $k$-sections of codimension one or two in the ambient space (Corollary~\ref{cor:cod2}). We also characterize the case where the distribution $\mbox{$\mathcal E$}$ is integrable (Theorem~\ref{thm:int}).
Let $(G,V)$ be an orthogonal representation. Define an equivalence relation on the set of $G$-regular points by declaring two points to be equivalent if they can be joined by a polygonal path which is (at smooth points) tangent to the distribution of normal spaces of the $G$-orbits. It is clear that the group action permutes the equivalence classes. For a $G$-regular $p\in V$, denote the equivalence class of $p$ by $\mathfrak S_p$. Note that the ray $\{tp:t>0\}$ is contained in $\mathfrak S_p$, as $p$ is a vector in the normal space of $Gp$ at $p$. Therefore the affine hull $\vecsp{\mathfrak S_p}$ contains $0$ and is a vector subspace of $V$.
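For completeness, here is the standard verification behind the last remark: for every $X\in\mbox{$\mathfrak{g}$}$,
\[
\inn{X_p}{p}=\frac12\,\frac{d}{dt}\Big|_{t=0}\|\exp(tX)p\|^2=0,
\]
since the action is orthogonal; as $G_{tp}=G_p$ for $t>0$, the same computation applies at every point of the ray through $p$, so the segments along this ray are orthogonal to all the orbits they meet.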
Next suppose that $(G,V)$ admits a $k$-section $\Sigma$. Let $p\in\Sigma$ be $G$-regular. It is clear that $\vecsp{\mathfrak S_p}\subset\Sigma$. Let $\mbox{$\mathfrak{g}$}$ be the Lie algebra of $G$ and consider its induced action by linear skew-symmetric endomorphisms on $V$. For each $X\in\mbox{$\mathfrak{g}$}$, let $f_X:\Sigma\to\Sigma$ be defined by $f_X(q)=\Pi(X_q)$, where $\Pi:V\to\Sigma$ is orthogonal projection. Then $f_X$ is a linear map and $W_X:=\ker f_X$ is a subspace of $\Sigma$. Define the following subspace of $\Sigma$: \[ \bar\Sigma_p:=\bigcap_{W_X\ni p}W_X. \] We have that $\bar\Sigma_p$ is the subset of $\Sigma$ consisting of the common zeros of the vector fields that are orthogonal projections onto $\Sigma$ of the $G$-Killing fields which are orthogonal to $\Sigma$ at $p$. It is clear from this definition that if $q\in\bar\Sigma_p$ is a $G$-regular point then $\bar\Sigma_q\subset\bar\Sigma_p$.
\begin{prop}\label{prop:barsigma} We have that $\vecsp{\mathfrak S_p}\subset\bar\Sigma_p\subset\Sigma$ and $\bar\Sigma_p$ is an $l$-section of $(G,V)$ with $l\leq k$. \end{prop}
{\em Proof}. Let $X\in\mbox{$\mathfrak{g}$}$ be such that $p\in W_X$. Then $X_p\perp\Sigma$. If $q\in\mathfrak S_p$, then $q$ can be joined to $p$ by a polygonal path which is normal to the orbits. It follows by an iterated application of Lemmas~\ref{lem:2} and~\ref{lem:3} below that $X_q\perp\Sigma$, that is, $q\in W_X$. This shows that $\mathfrak S_p\subset W_X$. Therefore $\vecsp{\mathfrak S_p}\subset W_X$. Since $X$ can be any element in $\mbox{$\mathfrak{g}$}$ satisfying $W_X\ni p$, we get that $\vecsp{\mathfrak S_p}\subset\bar\Sigma_p$.
We next prove that $\bar\Sigma_p$ is an $l$-section. Since $\bar\Sigma_p\subset\Sigma$, it will follow that $l\leq k$. In fact, condition (C1) for $\bar\Sigma_p$ is obvious and condition (C2) follows from the facts that $\bar\Sigma_p\supset\mathfrak S_p$ and $\mathfrak S_p$ intersects all orbits. Let us verify condition (C4). Let $q\in\bar\Sigma_p$ be a $G$-regular point and $g\in G$ with $gq\in\bar\Sigma_p$. We need to show that $g\bar\Sigma_p=\bar\Sigma_p$. First we note that it is clear from the definition of $\bar\Sigma_p$ that $\bar\Sigma_{gq}\subset\bar\Sigma_p$. Next, since $\mathfrak S_p$ intersects all orbits and $\mathfrak S_p\subset\bar\Sigma_p$, we may assume that $q\in\mathfrak S_p$, and this implies as above, via Lemmas~\ref{lem:2} and~\ref{lem:3}, that $\bar\Sigma_q=\bar\Sigma_p$. Finally, since $q$, $gq\in\bar\Sigma_p\subset\Sigma$, condition (C4) for $\Sigma$ gives that $g\Sigma=\Sigma$, and then we have $gW_X=W_{gXg^{-1}}$, which shows that $g\bar\Sigma_q=\bar\Sigma_{gq}$. Putting all this together we have $g\bar\Sigma_p=g\bar\Sigma_q=\bar\Sigma_{gq} \subset\bar\Sigma_p$, and hence $g\bar\Sigma_p=\bar\Sigma_p$.
In order to check (C3), let $q\in\bar\Sigma_p$ be a $G$-regular point. Since $\mathfrak S_p$ intersects all orbits, there is $g\in G$ such that $gq\in\mathfrak S_p$. It is clear that $\nu_{gq}(Gq)\subset\vecsp{\mathfrak S_p}\subset\bar\Sigma_p$. Then $\nu_q(Gq)=g^{-1}\nu_{gq}(Gq)\subset g^{-1}\bar\Sigma_p= \bar\Sigma_p$, where the last equality follows from (C4) for $\bar\Sigma_p$.
$\square$
\begin{cor}\label{cor:barsigma} If $\Sigma$ is a \emph{minimal} $k$-section for $(G,V)$ and $p\in\Sigma$ is $G$-regular, then for all $X\in\mbox{$\mathfrak{g}$}$ we have that: $X_p\perp\Sigma$ if and only if $X_q\perp\Sigma$ for all $q\in\Sigma$. \end{cor}
{\em Proof}. This is clear, because $\Sigma=\bar\Sigma_p$.
$\square$
Fix a minimal $k$-section $\Sigma$ for $(G,V)$ and a $G$-regular $p\in\Sigma$. Let $\mbox{$\mathfrak{k}$}$ be the Lie subalgebra of $\mbox{$\mathfrak{g}$}$ generated by all the $X\in\mbox{$\mathfrak{g}$}$ such that $X_p\perp\Sigma$ and let $K$ be the associated connected subgroup of $G$. Let $G_\Sigma^0$ be the connected component of the identity in~$G_\Sigma$. Denote by $G'$ the subgroup of $G$ generated by $G_\Sigma^0$ and $K$. Note that the orbits $G'p$ and $Gp$ have the same tangent space at $p$ (cf.~Lemma~\ref{lem:inj}). Therefore they are equal. Since $Gp$ is a principal orbit, it is not hard to show that $(G',V)$ and $(G,V)$ are orbit equivalent (cf.~Lemma~3.6 in~\cite{GTh1}). By replacing $G$ by $G'$, we may now assume that $G$ is generated by $G_\Sigma^0$ and $K$. Next we show that $K$ is a normal subgroup of $G$. It is enough to show that if $X\in\mbox{$\mathfrak{g}$}$ satisfies $X_p\perp\Sigma$ and $g\in G_\Sigma$, then $Y=gXg^{-1}\in\mbox{$\mathfrak{k}$}$. In fact, $Y_{gp}=gX_p\perp g\Sigma=\Sigma$ and hence $Y\in\mbox{$\mathfrak{k}$}$ by Corollary~\ref{cor:barsigma}. Since $K$ is normal in $G$, we get that $G$ is a quotient of the semidirect product of $G_\Sigma^0$ and $K$.
\begin{prop}\label{prop:cod} Let $(G,V)$ be irreducible with nontrivial copolarity $k$ and assume that $\Sigma$ is a $k$-section. Then, for every $G$-regular $p\in\Sigma$, there does not exist a nonzero $\xi\in\nu_p(Gp)$ whose Weingarten operator satisfies
$A_\xi|_{\mbox{$\mathcal E$}_p}=0$. \end{prop}
{\em Proof}. Suppose there is a nonzero $\xi\in\nu_p(Gp)$ such that the Weingarten operator satisfies
$A_\xi|_{\mbox{$\mathcal E$}_p}=0$. Let $\hat \xi$ be the equivariant normal vector field along $Gp$ which extends $\xi$. Then $\nabla^\perp_u\hat\xi=0$ for all $u\in\mbox{$\mathcal E$}_p$ by Corollary~\ref{cor:partial-parallel} below. This implies that $\hat\xi$ is constant on $Kp$ as a vector in $V$, so that $\xi$ is in the fixed-point set of $K$. Since $K$ is normal in $G$, the fixed-point set of $K$ is $G$-invariant. Since $G$ is irreducible on $V$, $K$ must be trivial on $V$ and then $\Sigma=V$, but this is impossible as $(G,V)$ has nontrivial copolarity.
$\square$
\begin{cor}\label{cor:cod1} Let $(G,V)$ be irreducible with nontrivial copolarity $k$. Then the cohomogeneity of $(G,V)$ is at most $\frac{l(l+1)}2$, where $l$ is the codimension of a $k$-section in $V$. \end{cor}
{\em Proof}. Let $Gp$ be a principal orbit and choose a minimal $k$-section $\Sigma\ni p$. It follows from Proposition~\ref{prop:cod} that the map
\[ \xi\in\nu_p(Gp) \mapsto A_\xi|_{\mbox{$\mathcal E$}_p}\in\mbox{Sym}^2(\mbox{$\mathcal E$}_p^*) \] is injective, where $\mbox{Sym}^2(\mbox{$\mathcal E$}_p^*)$ denotes the symmetric square of the dual space of $\mbox{$\mathcal E$}_p$. Now we just need to note that $\dim\mbox{$\mathcal E$}_p=l$.
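Explicitly (just spelling out the dimension count behind the last sentence): the cohomogeneity of $(G,V)$ equals $\dim\nu_p(Gp)$, so the injectivity of the map above gives
\[
\dim\nu_p(Gp)\leq\dim\mbox{Sym}^2(\mbox{$\mathcal E$}_p^*)=\frac{l(l+1)}2 .
\]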
$\square$
\begin{cor}\label{cor:cod2} Let $(G,V)$ be irreducible with nontrivial copolarity $k>0$ and let $n$ be the dimension of a principal orbit. Then $k\leq n-3$. \end{cor}
{\em Proof}. If $k=n-1$, then $l=n-k=1$, and by Corollary~\ref{cor:cod1} the codimension of a principal orbit is $1$. In this case $G$ is transitive on the unit sphere and therefore polar, so this case is impossible.
If $k=n-2$, then $l=2$ and the codimension of a principal orbit is at most $3$. Here $G$ is either polar on $V$, or has copolarity $1$ and cohomogeneity $3$, by the classification of irreducible representations of cohomogeneity at most $3$~\cite{Y}. Since $k>0$, we must have $k=1$. Then $n=3$, but none of the irreducible representations of cohomogeneity $3$ has principal orbits of dimension $3$. So this case is impossible as well.
$\square$
We next characterize the orthogonal representations of nontrivial copolarity whose distribution $\mbox{$\mathcal E$}$ is integrable.
\begin{thm}\label{thm:int} Let $(G,V)$ be an orthogonal representation with nontrivial copolarity. Suppose that the distribution $\mbox{$\mathcal E$}$ is integrable. Then there is an orthogonal decomposition $V=V_1\oplus V_2$, a polar representation $(K,V_1)$ and another orthogonal representation $(H,V_2)$ such that $(G,V)$ is orbit equivalent to the direct product representation $(K\times H,V_1\oplus V_2)$. Here the leaves of $\mbox{$\mathcal E$}$ correspond to the $K$-orbits. \end{thm}
{\em Proof}. Let $K$ be the normal subgroup of $G$ as above. We first show that the leaves of $\mbox{$\mathcal E$}$ coincide with the $K$-orbits. For that purpose, note that $\mbox{$\mathcal E$}_q\subset T_q(Kq)$ for every $G$-regular $q\in\Sigma$, and then $\mbox{$\mathcal E$}_q\subset T_q(Kq)$ for every $G$-regular $q\in V$, as every $K$-orbit intersects $\Sigma$ and $\mbox{$\mathcal E$}$ is $K$-invariant. It follows that, if $p\in\Sigma$ is a fixed $G$-regular point and $\beta$ is the leaf of $\mbox{$\mathcal E$}$ through $p$, then $\beta\subset Kp$. Let $K_\beta^0$ denote the connected component of the stabilizer of $\beta$ in $K$. It is clear that $K_\beta^0 p=\beta$. Let $X\in\mbox{$\mathfrak{g}$}$ be such that $X_p=u\in\mbox{$\mathcal E$}_p$. Since $\mbox{$\mathcal E$}_p=T_p\beta=T_p(K_\beta^0 p)$, there is a $Y$ in the Lie algebra of $K_\beta^0$ such that $Y_p=u$. Now $Z=X-Y\in\mbox{$\mathfrak{k}$}$ and $Z_p=0$. Therefore the one-parameter subgroup of $K$ generated by $Z$ is in the isotropy subgroup $K_p$. But $K_p\subset K_\beta^0$, since $K$ maps $\mbox{$\mathcal E$}$-leaves onto $\mbox{$\mathcal E$}$-leaves. It follows that $Z$ is in the Lie algebra of $K_\beta^0$ and so is $X$. Since $X$ is an arbitrary generator of $\mbox{$\mathfrak{k}$}$, it follows that $K_\beta^0=K$ and hence $\beta=Kp$.
Now it is clear that $\Sigma$ is a section of $(K,V)$ so that $(K,V)$ is polar. Note that the $G$-regular points in $V$ are also $K$-regular. Let $N$ denote the normalizer of $K$ in the orthogonal group $\mathbf O(V)$. Then $N$ maps $K$-orbits onto $K$-orbits. Let $n\in N$. Since $\Sigma$ intersects all $K$-orbits, there is $k\in K$ such that $knp\in\Sigma$. Now $kn$ maps $\Sigma$ onto $\Sigma$. The principal $K$-orbits are isoparametric in $V$ and $kn$ preserves their common focal set. Therefore $kn$ preserves the focal hyperplanes in $\Sigma$. Decompose $\Sigma$ into an orthogonal sum $\Sigma_1\oplus V_2$, where $\Sigma_1$ is the span of the curvature normals of the principal $K$-orbits. If $q\in\Sigma$ is $K$-regular, then $Kq$ is full in the affine subspace $q+V_1$, where $V_1$ is the orthogonal complement of $V_2$ in $V$, and $K$ acts trivially on $V_2$. The decomposition $V=V_1\oplus V_2$ is $N$-invariant. Now $kn$ maps $\Sigma_1$ onto $\Sigma_1$ and maps the $K$-orbits in $V_1$ onto $K$-orbits in $V_1$. Therefore $kn$ is in the Weyl group of $(K,V_1)$, which is a finite group generated by the reflections on the focal hyperplanes in $\Sigma_1$. It follows that $kn$ maps a $K$-orbit in $V_1$ onto the \emph{same} $K$-orbit in $V_1$. In particular, if $n$ is in the connected component $N^0$ of $N$, then $kn$ is the identity on $V_1$. In any case, $N$ and $K$ have the same orbits in $V_1$.
Since $G$ normalizes $K$, we have $Kp_1=Gp_1$ for $p_1\in V_1$, and then $G_\Sigma p_1= Gp_1\cap\Sigma=Kp_1\cap\Sigma_1$ is finite. Now $G_\Sigma^0p_1=\{p_1\}$. This shows that $G_\Sigma^0$ is trivial on $V_1$. Since $K$ is trivial on $V_2$, the intersection $G_\Sigma^0\cap K$ is in the kernel of the $G$-action on $V$. Therefore we may assume that $G_\Sigma^0\cap K=\{1\}$. Now the action of $G$ on $V$ is orbit equivalent to the action of the direct product of $G_\Sigma^0$ and $K$ and this completes the proof of the theorem.
$\square$
\section{Representations of copolarity one}\label{sec:one} \setcounter{thm}{0}
In this section we describe the structure of a principal orbit of a representation of copolarity one (Theorem~\ref{thm:one}) and classify the irreducible representations of copolarity one (Corollary~\ref{cor:one}). We start with a lemma of independent interest.
\begin{lem}\label{lem:decomp} Let $(G,V)$ be an orthogonal representation with copolarity $k$. Suppose that $p\in V$ is a regular point and $\Sigma$ is a $k$-section through $p$. Then, given a Killing field $X$ of $V$ induced by $G$, we can write $X=X_1+X_2$, where $X_1$, $X_2$ are Killing fields induced by $G$ such that
$X_1|_{\Sigma}$ is always tangent to $\Sigma$ and $X_2|_{\Sigma}$ is always perpendicular to $\Sigma$. \end{lem}
{\em Proof}. Let $\bar X$ be the intrinsic Killing field of $\Sigma $ which
is obtained by projecting $X|_{\Sigma}$ to the tangent space of $\Sigma$. Since $X_p$ is perpendicular to the normal space $\nu_p(Gp) \subset \Sigma$, we must have that $\bar X_p$ is tangent to $G p \cap \Sigma$ at $p$. By Lemma~\ref{lem:inj}, $G_\Sigma $ acts transitively on $Gp \cap \Sigma$, so there exists a Killing field $X_1$ induced by $G_\Sigma \subset G$ such that $X_{1p} = \bar X_p$ (observe that the restriction to $\Sigma$ of $X_1$ is always tangent to $\Sigma$). Then $X_2 = X-X_1$ is perpendicular to $\Sigma $ at $p$, and, by Corollary~\ref{cor:barsigma}, we
have that $X_2|_{\Sigma}$ is always perpendicular to $\Sigma$.
$\square$
\begin{thm}\label{thm:one} Let $(G,V)$ be an orthogonal representation with copolarity $k=1$ and let $M = Gp$ be a principal orbit. Then the submanifold $M$ of $V$ splits extrinsically as $M = M_0 \times M_1$, where $M_0$ is a homogeneous isoparametric submanifold and $M_1$ is one of the following: \begin{enumerate} \item[(i)] a nonisoparametric homogeneous curve; \item[(ii)] a focal manifold of an irreducible homogeneous isoparametric submanifold which is obtained by focalizing a one-dimensional distribution; \item[(iii)] a codimension $3$ homogeneous submanifold. \end{enumerate} \end{thm}
{\em Proof}. It follows from the proof of Theorem~B in~\cite{OS} that the Lie algebra $\mbox{$\mathfrak{h}$}$, which is (algebraically) generated by the projection to $\nu_p(M)$ of Killing fields induced by $G$ (restricted to $\nu_p (M)$), contains the normal holonomy algebra (and it is contained in its normalizer). By Lemma~\ref{lem:decomp}, $\mbox{$\mathfrak{h}$}$ is generated by the projection of the Killing fields induced by $G_\Sigma$, where $\Sigma$ is a $1$-section through $p$. But if $X$, $Y\neq 0 $ are such Killing fields then they must be proportional at $p$, since $\dim (Gp\cap\Sigma)=1$. By multiplying one of them by a nonzero scalar, we may assume that $X_p=Y_p$. Now, by Corollary~\ref{cor:barsigma},
$(X-Y)|_{\Sigma}$ must be always perpendicular to $\Sigma$ and is thus zero.
Therefore $\mbox{$\mathfrak{h}$}$ is generated by $X|_{\Sigma}$ and so it has dimension $1$. So, the restricted normal holonomy group of $M$ has dimension $0$ or $1$.
Let $\bar M$ be a nonisoparametric irreducible extrinsic factor of $M$. We may assume it is full (since we are only concerned with the geometry of $M$). If $\bar M$ has flat normal bundle, then, by~Theorem~A in~\cite{O1}, $\bar M$ is a homogeneous curve (which is not an extrinsic circle). Assume that $\bar M$ has nonflat normal bundle. Orthogonally decompose the normal bundle \[ \nu (\bar M) = \nu _0(\bar M) \oplus \nu _s(\bar M), \] where $\nu _0(\bar M)$ is the maximal parallel and flat subbundle of $\nu (\bar M)$. By the Normal Holonomy Theorem~\cite{O2}, as the restricted normal holonomy group of $\bar M$ has dimension $1$, $\nu_s(\bar M)$ has dimension $2$ over $\bar M$ (and the restricted normal holonomy group acts as the circle action on a two-dimensional Euclidean space); notice that there can be at most one irreducible extrinsic factor of $M$ with nonflat normal bundle. If the codimension of $\bar M$ is greater than $3$, then $\mbox{rank}(\bar M)$ (i.e., the dimension over $\bar M$ of $\nu _0(\bar M)$) is at least $2$. Then, by Theorem~A in~\cite{O1}, $G$ can be enlarged to a group admitting a representation which is the isotropy representation of an irreducible symmetric space and has $\bar M$ as an orbit. The normal holonomy tube, which has one dimension more and coincides with a principal orbit of the isotropy representation, is an irreducible isoparametric submanifold. We finish the proof by observing that there cannot be two distinct nonisoparametric irreducible extrinsic factors of $M$, for otherwise the copolarity would be at least $2$.
$\square$
\begin{cor}\label{cor:one} Let $(G,V)$ be an irreducible representation of nontrivial copolarity $1$. Then $(G,V)$ is one of the three orthogonal representations listed in the table of Theorem~\ref{thm:1}. \end{cor}
{\em Proof}. We know from Theorem~\ref{thm:one} that any principal orbit is either a codimension $3$ homogeneous submanifold or a focal manifold of an irreducible homogeneous isoparametric submanifold. If there is a principal orbit which falls into the first case, then the cohomogeneity is $3$ and the result follows from the classification of cohomogeneity $3$ irreducible representations (see~\cite{Y}). Suppose now that no principal orbit falls into the first case. Then any principal orbit is a focal manifold of an isoparametric submanifold, thus it is taut (see~\cite{HPT}). If we can show that the nonprincipal orbits are also taut, then it will follow from the classification of taut irreducible representations~\cite{GTh3} that the cohomogeneity is $3$; but then the principal orbits have codimension $3$, contradicting our assumption.
Let $Gq$ be a nonprincipal orbit. We need to prove that it is tautly embedded in $V$.
We can find a vector $v\in\nu_q(Gq)$ and a decreasing sequence $\{t_n\}$ such that $t_n\to0$ and $p_n=q+t_nv$ are $G$-regular points. For each $n$ there is a group $K_n\supset G$ and an isotropy representation of a symmetric space $(K_n,V)$ such that $Gp_n=K_np_n$. Since the number of isotropy representations of symmetric spaces of a given dimension is finite, by passing to a subsequence we can assume that there is a sequence $h_n\in\SO{V}$ such that $K_n=h_nK_1h_n^{-1}$ for $n\geq2$. By compactness of $\SO{V}$, again by passing to a subsequence we can write $h_n\to h\in\SO{V}$. Let $K_\infty=hK_1h^{-1}$. We claim that $K_\infty q=Gq$; granting the claim, since $(K_\infty, V)$ is conjugate to the isotropy representation of a symmetric space, its orbits are tautly embedded \cite{BS}, and hence $Gq$ is taut.
Let $k\in K_\infty$. We have that $k=hk_1h^{-1}$ for some $k_1\in K_1$, and then $k=\lim k_n$, where $k_n=h_nk_1h_n^{-1}\in K_n$ for $n\geq2$. Since $K_np_n=Gp_n$, there is $g_n\in G$ such that $g_np_n=k_np_n$. By passing to a subsequence we may assume that $g_n\to g\in G$. Now $kq=\lim k_np_n=\lim g_np_n=gq\in Gq$, and this proves that $K_\infty q\subset Gq$. Since the reverse inclusion is clear, this completes the proof of the claim and the proof of the corollary.
$\square$
\begin{rmk} \em Regarding Theorem~\ref{thm:one} and Corollary~\ref{cor:one}, we know of examples of reducible representations of nontrivial copolarity $1$, but all of them have cohomogeneity $3$. Thus we do not know whether case~(ii) in Theorem~\ref{thm:one} can indeed occur. \end{rmk}
\section{Variational co-completeness}\label{sec:varcomp} \setcounter{thm}{0}
Let $N$ be a submanifold of a complete Riemannian manifold~$M$. Let $\eta:\nu(N)\to M$ denote the \emph{endpoint map} of $N$, that is, the restriction of the exponential map of $M$ to the normal bundle of $N$. A point $q=\eta(v)$ is a \emph{focal point of $N$ in the direction of $v\in\nu(N)$ of multiplicity $m>0$} if $d\eta_v:T_v\nu(N)\to T_qM$ is not injective and the dimension of its kernel is $m$. Let $v\in\nu_p(N)$ and let $\gamma_v$ denote the geodesic $t\mapsto\exp_p(tv)$. A Jacobi field along $\gamma_v$ is called an $N$-\emph{Jacobi field} if it is the variational vector field of a variation through geodesics that are at time zero orthogonal to $N$. We will denote the space of $N$-Jacobi fields along $\gamma_v$ by $\mathcal{J}^N(\gamma_v)$. It is not difficult to see that $J$ is an $N$-Jacobi field along $\gamma_v$ if and only if $J(0)\in T_pN$ and $J'(0)+A_vu\in\nu_p(N)$, where $p$ is the footpoint of $v$, $u=J(0)$ and $A_v$ is the Weingarten map in direction $v$. The point $q$ is a focal point of $N$ in the direction $v$ if and only if there is an $N$-Jacobi field along $\gamma_v$ that vanishes at $q$. We will denote the space of $N$-Jacobi fields along $\gamma_v$ that vanish at $q$ by $\mathcal{J}^N_q(\gamma_v)$.
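Note, for later use with the isomorphism appearing below (a standard count), that a Jacobi field along $\gamma_v$ is determined by the pair $(J(0),J'(0))$; by the characterization above, the map $J\mapsto(J(0),J'(0))$ identifies
\[
\mathcal{J}^N(\gamma_v)\cong\{(u,w)\in T_pN\times T_pM:\ w+A_vu\in\nu_p(N)\},
\]
a space of dimension $\dim T_pN+\dim\nu_p(N)=\dim M$.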
Now let $(G,M)$ be an isometric action of a compact Lie group $G$ on the complete Riemannian manifold $M$. The action $(G,M)$ is called \emph{variationally complete} if every Jacobi field $J\in\mathcal{J}^N_q(\gamma_v)$, where $N$ is a $G$-orbit and $q$ is a focal point of $N$ in the direction of $v$, is the restriction along $\gamma_v$ of a Killing field on $M$ induced by the action of $G$.
More generally, let $N$ be a fixed principal orbit of an isometric action $(G,M)$ and let $p\in N$. For each $v\in\nu_p(N)$, we have an isomorphism $\mathcal{J}^N(\gamma_v)\to T_p N\oplus\nu_p N=T_p M$ given by $J\mapsto(J(0),J'(0)+A_vJ(0))$. Let $U_p$ be a subspace of $T_p M$ with the following property: \begin{enumerate} \item[(P)] for each $v\in\nu_p(N)$, if $J\in\mathcal{J}^N(\gamma_v)$ vanishes for some $t_0>0$ and $(J(0),J'(0)+A_vJ(0))$ is orthogonal to $U_p$, then $J$ is the restriction along $\gamma_v$ of a $G$-Killing field. \end{enumerate} Of course, $U_{gp}$ can always be taken to be $g_*U_p$, where $g\in G$, and in particular, $U_p$ can always be taken to be $G_p$-invariant. Moreover, $U_p$ can always be taken to be all of~$T_p M$. In any case we write $\mbox{covar}_N(G,M)\leq\dim U_p$. We say that the \emph{variational co-completeness} of $(G,M)$ is less than or equal to $k$, where $k$ is an integer between $0$ and $\dim M$, and we write $\covar{G}{M}\leq k$, if $\mbox{covar}_N(G,M)\leq k$ for all principal $G$-orbits $N$. Clearly, $\covar{G}{M}\leq0$ (or, in this case, $\covar{G}{M}=0$) if and only if $(G,M)$ is variationally complete, and one cannot do better than $\covar{G}{M}\leq\dim M$ for a generic isometric action. In the next section we will describe a situation where the intermediate values occur.
Observe that the intersection of two subspaces of $T_pM$ with property~(P) does not need to have that property, so we cannot speak of a minimal subspace with property~(P). Nonetheless, given a general isometric action $(G,M)$ and a $G$-regular $p\in M$, we next show how to construct a canonical subspace $U_p^0$ of $T_p M$ with property~(P). Let $N=Gp$. For each $v\in\nu_p(Gp)$ and $q$ a focal point of $N$ in the direction $v$, consider the subspace $\tilde U_p^{v,q}$ of $T_pM$ spanned by the initial conditions $(J(0),J'(0)+A_vJ(0))$ for all $J\in\mathcal{J}^N_q(\gamma_v)$. Now take the subspace of $\tilde U_p^{v,q}$ spanned by the initial conditions of all $G$-Killing fields in $\mathcal{J}^N_q(\gamma_v)$ and let $U_p^{v,q}$ denote its orthogonal complement in~$\tilde U_p^{v,q}$. Finally, define $U_p^0$ as the sum over $v$, $q$ of the subspaces $U_p^{v,q}$. It is clear that $U_p^0$ has property~(P). Note also that these $U_p^0$, for $p\in N$, define a $G$-invariant distribution on $N$.
\subsection{The theorem of Conlon for actions admitting $k$-sections}
In this section we prove a version of Conlon's theorem~\cite{C}.
\begin{thm}\label{thm:Conlon} If $(G,M)$ is an isometric action admitting a $k$-section which is flat in the induced metric, then $\covar{G}{M}\leq k$. \end{thm}
Let $N=Gp$ be a principal orbit, $v\in\nu_p(N)$, and choose a flat $k$-section $\Sigma$ through $p$.
\begin{lem}\label{lem:1} Let $J$ be an $N$-Jacobi field along $\gamma_v$ such that $J(0)\in\mbox{$\mathcal E$}_p$. If $J(t_0)=0$ for some $t_0>0$, then $J$ is always orthogonal to $\Sigma$. \end{lem}
{\em Proof}. Decompose $J=J_1+J_2$, where $J_1(t)$ and $J_2(t)$ are respectively the tangent and normal components of $J(t)$ relative to $T_{\gamma_v(t)}\Sigma$. Since $\Sigma$ is totally geodesic, $J_1$ and $J_2$ are Jacobi fields along $\gamma_v$. Now $J_1$ is a Jacobi field in $\Sigma$ with $J_1(0)=J_1(t_0)=0$. Since $\Sigma$ is flat, we have that $J_1$ identically vanishes and hence $J=J_2$.
$\square$
\begin{lem}\label{lem:2} Let $J$ be an $N$-Jacobi field along $\gamma_v$ such that $J(0)\in\mbox{$\mathcal E$}_p$. If $J$ is the restriction along $\gamma_v$ of a $G$-Killing field on~$M$, then $J$ satisfies $J'(0)+A_vJ(0)=0$. \end{lem}
{\em Proof}. Let $X$ be a $G$-Killing field on $M$ which restricts to $J$ along~$\gamma_v$. Note that $X_p=J(0)\in\mbox{$\mathcal E$}_p$. Denote by $\nabla$ the Levi-Civita connection of $M$. Now $J'(0)=(\nabla_v X)_p$. Let $E(t)$ be any vector field along $\gamma_v(t)$ which is normal to $G\gamma_v(t)$. Then $E(t)$ is tangent to $\Sigma$ for small $t$. We have $\inn{J'(0)}{E}_p=\inn{\nabla_v X}{E}_p=-\inn{X}{\nabla_v E}_p=0$. Therefore $J'(0)\in T_pN$, and this implies $J'(0)+A_vJ(0)\in T_pN$. Since we already have $J'(0)+A_vJ(0)\in\nu_pN$, we get that $J'(0)+A_vJ(0)=0$.
$\square$
\begin{lem}\label{lem:3} Let $J$ be an $N$-Jacobi field along $\gamma_v$ such that $J(0)\in\mbox{$\mathcal E$}_p$. Then $J$ is always orthogonal to $\Sigma$ if and only if $J$ satisfies $J'(0)+A_vJ(0)=0$. \end{lem}
{\em Proof}. If $J$ is always orthogonal to $\Sigma$, then $J'(0)+A_vJ(0)$ is also orthogonal to $\Sigma$ as $\Sigma$ is totally geodesic. But as an $N$-Jacobi field, $J$ satisfies $J'(0)+A_vJ(0)\in\nu_p(N)$. Now we have $\nu_p(N)\subset T_p\Sigma$, hence $J'(0)+A_vJ(0)=0$. Conversely, if $J'(0)+A_vJ(0)=0$, then $J'(0)=-A_vJ(0)\in\mbox{$\mathcal E$}_p$, since $\mbox{$\mathcal E$}_p$ is $A_v$-invariant and $J(0)\in\mbox{$\mathcal E$}_p$. Since $J$ and $J'$ are both orthogonal to $\Sigma$ at time zero and $\Sigma$ is totally geodesic, we deduce that $J$ is always orthogonal to $\Sigma$.
$\square$
We finish the proof of Theorem~\ref{thm:Conlon} by observing that $\mbox{$\mathcal D$}_p$ has property (P). In fact, if an $N$-Jacobi field $J$ along $\gamma_v$ for some $v\in\nu_p N$ vanishes for some $t_0>0$ and $(J(0),J'(0)+A_vJ(0))$ is orthogonal to $\mbox{$\mathcal D$}_p$, then $J(0)\in\mbox{$\mathcal E$}_p$. By Lemmas~\ref{lem:1} and~\ref{lem:3}, $J'(0)+A_vJ(0)=0$. Let $X$ be a $G$-Killing field on $M$ such that $X_p=J(0)$. Then $X$ restricts to a Jacobi field along $\gamma_v$ which by Lemma~\ref{lem:2} must be~$J$.
Since Lemmas~\ref{lem:1}, \ref{lem:2} and~\ref{lem:3} do not depend on condition~(C4) in the definition of a $k$-section, we have the following corollary of the proof.
\begin{cor}\label{cor:partial-vc} Let $(G,M)$ be an isometric action. Suppose there is a flat, connected, complete submanifold $\Sigma$ of $M$ satisfying conditions~(C1), (C2) and~(C3) in the definition of $k$-section. Let $N$ be a principal orbit, $p\in N\cap\Sigma$ and $v\in\nu_pN$. Then $T_p(N\cap\Sigma)$ has property (P). \end{cor}
Let $N$ be a principal orbit of an isometric action $(G,M)$ admitting a $k$-section $\Sigma$. Let $\xi$ be a normal vector field parallel along a curve in $N$ that is everywhere tangent to the distribution $\mbox{$\mathcal E$}$. The next corollary implies that the principal curvatures of $N$ along $\xi$ are constant.
\begin{cor}\label{cor:partial-parallel} Let $(G,M)$ be an isometric action admitting a $k$-section $\Sigma$ (not necessarily flat). Let $N$ be a principal orbit, $p\in N\cap\Sigma$ and $v\in\nu_pN$. Extend $v$ to an equivariant normal vector field $\hat v$ along $N$. Then $\hat v$ is parallel along $\mbox{$\mathcal E$}$. \end{cor}
{\em Proof}. By homogeneity, it is enough to show that $\nabla^\perp_u\hat v=0$ for all $u\in\mbox{$\mathcal E$}_p$. Let $X$ be a $G$-Killing field such that $X_p=u\in\mbox{$\mathcal E$}_p$. Let $J$ be the $N$-Jacobi field along $\gamma_v$ which is the restriction of $X$. Then $J(0)=u\in\mbox{$\mathcal E$}_p$. By Lemma~\ref{lem:2}, we have $J'(0)+A_vu=0$. On the other hand, since $(L_X\hat v)_p=0$, $J'(0)=(\nabla_v X)_p= (\nabla_{X}\hat v)_p=-A_vu+\nabla_u^\perp\hat v$. Hence, $\nabla^\perp_u\hat v=0$. \mbox{}
$\square$
\subsection{A weak converse to Theorem~\ref{thm:Conlon} in the Euclidean case}
In this section we prove a sort of converse to Theorem~\ref{thm:Conlon} in the Euclidean case (Theorem~\ref{thm:converse}) and obtain, as a corollary, a generalization of a result about polar representations (Corollary~\ref{cor:normal}). First, observe that if $(G,V)$ is an orthogonal representation and $N=Gp$ is a principal orbit, then an~$N$-Jacobi field~$J$ along~$\gamma_v$ which vanishes for some~$t_0>0$ is necessarily of the form $J(t)=(1-\frac{t}{t_0})J(0)$ (since the Jacobi equation in Euclidean space is $J''=0$). Therefore the vector $J'(0)+A_vJ(0)=(A_v-\frac{1}{t_0}\mbox{id}_{T_p N})J(0)$ is simultaneously normal and tangent to $N$, so that it must vanish. In this way we see that an element~$J$ of $\mathcal{J}^N_q(\gamma_v)$ is completely determined by the value of $J(0)$, so that it is enough to consider property (P) for subspaces of $T_p N$. Note also that the span in $T_pN$ of the $J(0)$, where $J\in\mathcal{J}^N_q(\gamma_v)$ and $J=X\circ\gamma_v$ for some $X\in\mbox{$\mathfrak{g}$}$, is $T_p(G_qp)$. We conclude that in the Euclidean case property (P) can be rewritten in the following form: \begin{enumerate} \item[($\mbox{P}_{\mbox{\scriptsize Euc}}$)] for each $v\in\nu_p(N)$, if $A_vu=\lambda u$ for some $\lambda\neq0$ and $u$ is orthogonal to $U_p$, then $u\in T_p(G_qp)$, where $q=p+\frac{1}{\lambda}v$. \end{enumerate} Moreover, it follows that the canonical subspace $U_p^0$ is the sum, over $v\in\nu_pN$ and $q$ a focal point of $N$ in the direction $v$, of the orthogonal complement of $T_p(G_qp)$ in $\{J(0):J\in\mathcal{J}^N_q(\gamma_v)\}$. Recall that $U_p^0$ is $G_p$-invariant, but it is not clear that $U_p^0$ is invariant under the second fundamental form of $N$.
\begin{prop}\label{prop:can} Every subspace $\mbox{$\mathcal D$}_p$ of $T_p N$ which satisfies property ($\mbox{P}_{\mbox{\scriptsize Euc}}$) and is invariant under the second fundamental form of $N$ must contain $U_p^0$. \end{prop}
{\em Proof}. Suppose, on the contrary, that $U_p^0\not\subset\mbox{$\mathcal D$}_p$. Then, by the definition of $U_p^0$, there exists $v\in\nu_pN$ and an eigenvector $u$ of $A_v$ with eigenvalue $\lambda\neq0$ such that $u\not\in\mbox{$\mathcal D$}_p$ and $u$ is in the orthogonal complement of $T_p(G_qp)$ in $\{J(0):J\in\mathcal{J}^N_q(\gamma_v)\}$, where $q=p+\frac{1}{\lambda}v$.
Write $u=u_1+u_2$, where $u_1\in\mbox{$\mathcal D$}_p$ and $u_2\perp\mbox{$\mathcal D$}_p$. Then $u_2\neq0$. Since $\mbox{$\mathcal D$}_p$ is $A_v$-invariant, we get that $A_v u_2=\lambda u_2$. By ($\mbox{P}_{\mbox{\scriptsize Euc}}$) for $\mbox{$\mathcal D$}_p$, we have that $u_2\in T_p(G_q p)$. Therefore $u_2\perp u$; since $\inn{u}{u_2}=\inn{u_2}{u_2}$, this contradicts the fact that $u_2\neq0$.
$\square$
\begin{thm}\label{thm:converse} Let $(G,V)$ be an orthogonal representation, $N$ be a principal orbit and suppose there is a $G$-invariant, $k$-dimensional distribution $\mbox{$\mathcal D$}$ on $N$ which is autoparallel in $N$ and invariant under the second fundamental form of $N$, and satisfies property ($\mbox{P}_{\mbox{\scriptsize Euc}}$). Then $\copol{G}{V}\leq k$ (and hence, by Theorem~\ref{thm:Conlon}, we have that $\covar{G}{V}\leq k$). \end{thm}
{\em Proof}. Fix $p\in N$. We will prove that $\Sigma=\mbox{$\mathcal D$}_p\oplus\nu_p(Gp)$ satisfies conditions (C1), (C2) and (C3) for $(G,V)$. For that purpose, we will use Proposition~\ref{prop:suf}(a). Let $v\in\nu_p N$ be such that the principal curvatures of $A_v$ are all nonzero (note that the subset of all such $v$ in $\nu_p N$ is open and dense). Suppose that $q=p+v$ is a $G$-regular point. Let $\mbox{$\mathcal E$}$ be the orthogonal complement distribution of $\mbox{$\mathcal D$}$ in $N$. Let $u\in\mbox{$\mathcal E$}_p$. Since $\mbox{$\mathcal D$}$ is $A_v$-invariant, we may assume that $u$ is an eigenvector of $A_v$, and we know that the corresponding eigenvalue $\lambda$ is not zero. Now the $N$-Jacobi field $J$ along $\gamma_v(t)=p+tv$ with initial conditions $J(0)=u$, $J'(0)+A_v u=0$ is given by $J(t)=(1-t\lambda)u$. By property ($\mbox{P}_{\mbox{\scriptsize Euc}}$), $J$ is the restriction of a $G$-Killing field along~$\gamma_v$. In particular, $J(1)$ is tangent to $T_q(Gq)$. Since $Gq$ is a principal orbit, the slice representation at $q$ is trivial, so the one-parameter subgroup of $G$ that induces $J$ cannot fix $q$ and thus $J(1)\neq0$. This implies that $u\in T_q(Gq)$. We have shown that $\mbox{$\mathcal E$}_p\subset T_q(Gq)$ in the case $q=p+v$ is a $G$-regular point and $v\in\nu_p N$ is such that the principal curvatures of $A_v$ are all nonzero. The case of an arbitrary $v\in\nu_pN$ follows from a limiting argument. This implies that $\Sigma$ satisfies conditions~(C1), (C2) and~(C3) by Proposition~\ref{prop:suf}(a).
$\square$
\begin{rmk} \em In Theorem~\ref{thm:converse}, if it happens that the distribution $\mbox{$\mathcal D$}$ coincides with the distribution defined by the canonical subspaces, namely $\mbox{$\mathcal D$}_p=U_p^0$ for all $p\in N$, then we can show that $\Sigma$ satisfies condition (C4) so that it is already a $k$-section (not necessarily minimal). In fact, following the notation of the proof, suppose that $v\in\nu_pN$ is arbitrary and $q=p+v$ is a $G$-regular point where $q=gp$ for some $g\in G$. We want to show that $\mbox{$\mathcal E$}_p=\mbox{$\mathcal E$}_q$. Note that $T_q(\Sigma\cap N)$ is invariant under the second fundamental form of $N$ at $q$, since $\Sigma$ is totally geodesic. Moreover, $T_q(\Sigma\cap N)$ has property ($\mbox{P}_{\mbox{\scriptsize Euc}}$) by Corollary~\ref{cor:partial-vc}. Now Proposition~\ref{prop:can} implies that $T_q(\Sigma\cap N)\supset\mbox{$\mathcal D$}_q$. Since $\mbox{$\mathcal E$}_p$ is the orthogonal complement of $T_q(\Sigma\cap N)$ in $T_qN$, we deduce that $\mbox{$\mathcal E$}_p\subset\mbox{$\mathcal E$}_q$, and thus, by dimensional reasons, $\mbox{$\mathcal E$}_p=\mbox{$\mathcal E$}_q$. It follows from Proposition~\ref{prop:suf}(b) that $\Sigma$ is a $k$-section. \end{rmk}
It is known that if a principal orbit of an orthogonal representation has the property that equivariant normal vector fields are parallel in the normal connection, then the representation is polar. The following corollary is a generalization of this result.
\begin{cor}\label{cor:normal} Let $(G,V)$ be an orthogonal representation, $N$ be a principal orbit and suppose there is a $G$-invariant, $k$-dimensional distribution $\mbox{$\mathcal D$}$ on $N$ which is autoparallel in $N$ and invariant under the second fundamental form of $N$. Let $\mbox{$\mathcal E$}$ be the orthogonal complement distribution of $\mbox{$\mathcal D$}$ in $N$. Assume that every equivariant normal vector field on $N$ is parallel along $\mbox{$\mathcal E$}$. Then $\copol{G}{V}\leq k$. \end{cor}
{\em Proof}. It is enough to see that $\mbox{$\mathcal D$}$ has property ($\mbox{P}_{\mbox{\scriptsize Euc}}$) and use Theorem~\ref{thm:converse}. Let $p\in N$, $v\in\nu_p(N)$ and suppose that $A_vu=\lambda u$ where $\lambda\neq0$ and $u$ is orthogonal to $\mbox{$\mathcal D$}_p$. Set $q=p+\frac{1}{\lambda}v$. Consider the equivariant normal vector field $\hat v$ that extends $v$. Then $\nabla^{\perp}_u\hat v=0$ by hypothesis. Let $X\in\mbox{$\mathfrak{g}$}$ be such that $X_p=u$.
Now \[X_q=\frac{d}{ds}\Big|_{s=0}(\exp sX)q=
\frac{d}{ds}\Big|_{s=0}(\exp sX)p+
\frac{1}{\lambda}\frac{d}{ds}\Big|_{s=0}\hat v(s) =u+\frac{1}{\lambda}(-A_v u+\nabla^{\perp}_u\hat v)=0.\] Hence $u\in T_q(G_qp)$.
$\square$
\textbf{Final questions} \begin{enumerate} \item Is it true that a minimal $k$-section $\Sigma$ of an orthogonal representation $(G,V)$, where $G$ is the maximal (not necessarily connected) subgroup of $\mathbf O(V)$ with the same orbits, always coincides with the fixed point set of a principal isotropy group at a point $p\in\Sigma$? \item Is there an example of a focal manifold of an irreducible homogeneous isoparametric submanifold, obtained by focalizing a one-dimensional distribution, which is an extrinsic factor of a principal orbit of a \emph{reducible} representation of nontrivial copolarity $1$ and cohomogeneity greater than $3$? \item Classify representations of nontrivial copolarity (in the irreducible case we believe that there should not be too many examples). \end{enumerate}
\parbox[t]{7cm}{\footnotesize\sc Instituto de Matem\'atica e Estat\'\i stica\\
Universidade de S\~ao Paulo\\
Rua do Mat\~ao, 1010\\
S\~ao Paulo, SP 05508-090\\
Brazil\\
E-mail: {\tt [email protected]}}
\parbox[t]{9cm}{\footnotesize\sc Facultad de Matem\'atica, Astronom\'\i a y F\'\i sica\\ Universidad Nacional C\'ordoba\\ Medina Allende y Haya de la Torre\\ Ciudad Universitaria\\ 5000 C\'ordoba Argentina\\ E-mail: {\tt [email protected]}}
\parbox[t]{7cm}{\footnotesize\sc Departamento de Matem\'atica\\
Universidade Federal de S\~ao Carlos\\
Rodovia Washington Luiz, km 235\\
S\~ao Carlos, SP 13565-905\\
Brazil\\
E-mail: {\tt [email protected]}}
\end{document}
\begin{document}
\title{On the integral form of rank 1 Kac-Moody algebras}
\date{} \author{Ilaria Damiani, Margherita Paolini} \maketitle
\begin{abstract}
\noindent In this paper we shall prove that the $\Z$-subalgebra generated by the divided powers of the Drinfeld generators $x_r^{\pm}$ ($r\in\Z$) of the Kac-Moody algebra of type $A_2^{(2)}$ is an integral form of the enveloping algebra (strictly smaller than Mitzman's, see \cite{DM}), we shall exhibit a basis generalizing the one provided in \cite{HG} for the untwisted affine Kac-Moody algebras, and we shall determine the commutation relations explicitly. Moreover, we prove that both in the untwisted and in the twisted case the positive (respectively negative) imaginary part of the integral form is an algebra of polynomials over $\Z$.
\end{abstract}
\small \tableofcontents
\section{Introduction} \label{intr} \vskip .5truecm
\noindent Recall that the twisted affine Kac-Moody algebra of type $A_2^{(2)}$ is $\hat{{\frak sl_3}}^{\!\!\chi}$, the $\chi$-invariant subalgebra of $\hat{{\frak sl_3}}$, where $\chi$ is the nontrivial Dynkin diagram automorphism of $A_2$ (see \cite{VK}); denote by $\tilde{\cal U}$ its enveloping algebra ${\cal U}(\hat{{\frak sl_3}}^{\!\!\chi})$.
\noindent The aim of this paper is to give a basis over $\Z$ of the $\Z$-subalgebra of $\tilde{\cal U}$
generated by the divided powers of the Drinfeld generators $x_r^{\pm}$'s ($r\in\Z$) (see definitions \ref{a22} and \ref{thuz}), thus proving that this $\Z$-subalgebra is an integral form of $\tilde{\cal U}$. \vskip .3 truecm
\noindent The integral forms for finite dimensional semisimple Lie algebras were first introduced by Chevalley in \cite{Ch} for the study of the Chevalley groups and of their representation theory.
\noindent The construction of the ``divided power'' $\Z$-form for the simple finite dimensional Lie algebras is due to Kostant (see \cite{Ko}); it has been generalized to the untwisted affine Kac-Moody algebras by Garland in \cite{HG}, as we shall quickly recall.
\noindent Given a simple Lie algebra ${{\frak g}}_0$ and the corresponding untwisted affine Kac-Moody algebra ${{\frak g}}={{\frak g}}_0\otimes\C[t,t^{-1}]\oplus\C c$ provided with an (ordered) Chevalley basis, the $\Z$-subalgebra $\u_{\Z}$ of ${\cal U}={\cal U}({{\frak g}})$ generated by the divided powers of the real root vectors is an integral form of ${\cal U}$; a $\Z$-basis of this integral form (hence its $\Z$-module structure) can be described by decomposing $\u_{\Z}$ as tensor product of its $\Z$-subalgebras relative respectively to the real root vectors ($\u_{\Z}^{re,+}$ and $\u_{\Z}^{re,-}$), to the imaginary root vectors ($\u_{\Z}^{im,+}$ and $\u_{\Z}^{im,-}$) and to the Cartan subalgebra ($\u_{\Z}^{{\frak h}}$):
$\u_{\Z}^{re,+}$ has a basis $B^{re,+}$ consisting of the (finite) ordered products of divided powers of the distinct positive real root vectors
and $(\u_{\Z}^{re,-},B^{re,-})$ can be described in the same way: $$B^{re,\pm}=\{x_{\pm\beta_1}^{(k_{\beta_1})}\cdot ...\cdot x_{\pm\beta_N}^{(k_{\beta_N})}|N\geq 0,\ \beta_1>...>\beta_N>0\ {\rm{real\ roots}},\ k_{\beta_j}> 0\ \forall j\}.$$ Here a real root $\beta$ of ${{\frak g}}$ is said to be positive if there exists a positive root $\alpha$ of ${{\frak g}}_0$ such that $\beta=\alpha$ or $\beta-\alpha$ is imaginary; $x_{\beta}$ is the Chevalley generator corresponding to the real root $\beta$.
A basis $B^{\frak h}$ of $\u_{\Z}^{{\frak h}}$, which is commutative, consists of the products of the ``binomials'' of the (Chevalley) generators $h_{i }$ ($i\in I$) of the Cartan subalgebra of ${{\frak g}}$: $$B^{\frak h}=\left\{ \prod_i{h_{i
}\choose k_i}\Big|k_i\geq 0\ \forall i\right\};$$ it is worth remarking that $\u_{\Z}^{{\frak h}}$ is not an algebra of polynomials.
$\u_{\Z}^{im,+}$ (and its symmetric $\u_{\Z}^{im,-}$) is commutative, too; as $\Z$-module it is isomorphic to the tensor product of the ${\cal U}_{i,\Z}^{im,+}$'s (each factor corresponding to the $i^{th}$ copy of ${\cal U}(\hat\frak sl_2)$ inside ${\cal U}$), so that it is enough to describe it in the rank 1 case: the basis $B^{im,+}$ of $\u_{\Z}^{im,+}(\hat\frak sl_2)$ provided by Garland can be described as a set of finite products of the elements $\Lambda_k{(\xi(m))}$ ($k\in\N$, $m>0$), where the $\Lambda_k{(\xi(m))}$'s are the elements of ${\cal U}^{im,+}=\C[h_{ r}(=h
\otimes t^r)|r>0]$ defined recursively (for all $m\neq 0$) by $$\Lambda_{-1}{(\xi(m))}=1,\ \ k\Lambda_{k-1}{(\xi(m))} =\sum_{r\geq 0,s>0\atop r+s=k}\Lambda_{r-1}{(\xi(m))} {h_{ms}}:$$ $$B
^{im,+}=\left\{\prod_{m>0}\Lambda_{k_m-1}{(\xi(m))}|k_m\geq 0\ \forall m,\ \#\{m>0|k_m\neq 0\}<\infty\right\};$$
\noindent It is not clear from this description that $\u_{\Z}^{im,+}$ and $\u_{\Z}^{im,-}$ are algebras of polynomials.
\noindent Thanks to the isomorphism of $\Z$-modules $$\u_{\Z}\cong\u_{\Z}^{re,-}\otimes_{\Z}\u_{\Z}^{im,-}\otimes_{\Z}\u_{\Z}^{{\frak h}}\otimes_{\Z}\u_{\Z}^{im,+}\otimes_{\Z}\u_{\Z}^{re,+}$$ a $\Z$-basis $B$ of $\u_{\Z}$ is produced as multiplication of $\Z$-bases of these subalgebras: $$B=B^{re,-}B^{im,-}B^{{\frak h}}B^{im,+}B^{re,+}.$$
\noindent The same result has been proved for all the twisted affine Kac-Moody algebras by Mitzman in \cite{DM}, where the author provides a deeper understanding and a compact description of the commutation formulas by means of a drastic simplification of both the relations and their proofs. This goal is achieved by remarking that the generating series of the elements involved in the basis can be expressed as suitable exponentials, an observation that allows one to apply very general tools of calculus, such as the well-known properties $$x\exp(y)=\exp(y)\exp([\cdot,y])(x)$$ if $\exp(y)$ and $\exp([\cdot,y])(x)$ are well defined, and $$D(\exp(f))=D(f)\exp(f)$$ if $D$ is a derivation such that $[D(f),f]=0$.
\noindent Here, too, it is not yet clear that $\u_{\Z}^{im,\pm}$ are algebras of polynomials.
\noindent However, this property, namely $\u_{\Z}^{im,+}=\Z[\Lambda_{k-1}=\Lambda_{k-1}{(\xi(1))}=p_{k,1}|k>0]$, is stated in Fisher-Vasta's PhD thesis (\cite{F}), where the author describes the results of Garland for the untwisted case and of Mitzman for $A_2^{(2)}$, aiming at a better understanding of the commutation formulas. Yet the proof is missing: the theorem describing the integral form is based on observations which seem to overlook some necessary commutations, those between $(x_r^+)^{(k)}$ and $(x_s^-)^{(l)}$ when $|r+s|>1$; in \cite{F} only the cases $r+s=0$ and $r+s=\pm 1$ are considered, the former producing the binomials appearing in $B^{{\frak h}}$, the latter producing the elements $p_{n,1}$ (and their corresponding negative elements in $\u_{\Z}^{im,-}$).
\vskip .3 truecm
\noindent Comparing the Kac-Moody presentation of an affine Kac-Moody algebra with its ``Drinfeld'' presentation as a current algebra, one can notice a difference between the untwisted and the twisted case, which is at the origin of our work. As in the simple finite dimensional case, in the affine cases too the generators of $\u_{\Z}$ described above are redundant: the $\Z$-subalgebra of ${\cal U}$ generated by $\{e_i^{(k)},f_i^{(k)}|i\in I,\ k\in\N\}$, obviously contained in $\u_{\Z}$, is actually equal to $\u_{\Z}$.
\noindent On the other hand, the situation changes when we move to the Drinfeld presentation and study the $\Z$-subalgebra $^*{\cal U}_{\Z}$ of ${\cal U}$ generated by the divided powers of the Drinfeld generators $ (x_{i,r}^{\pm})^{(k)}$: indeed, while in the untwisted case it is still true that $\u_{\Z}=$ $^*{\cal U}_{\Z}$ and (also in the twisted case) it is always true that $^*{\cal U}_{\Z}\subseteq\u_{\Z}$, in general we get two different $\Z$-subalgebras of ${\cal U}$; more precisely $^*{\cal U}_{\Z}\subsetneq\u_{\Z}$ in case $A_{2n}^{(2)}$, that is when there exists a vertex $i$ whose corresponding rank 1 subalgebra is not a copy of ${\cal U}(\hat\frak sl_2)$ but is a copy of ${\cal U}(\hat{{\frak sl_3}}^{\!\!\chi})$. Thus in order to complete the description of $^*{\cal U}_{\Z}$ we need to study the case of $A_2^{(2)}$. \vskip .3 truecm
\noindent In the present paper we prove that the $\Z$-subalgebra generated by $$\{(x_r^+)^{(k)},(x_r^-)^{(k)}|r\in\Z,k\in\N\}$$ is an integral form of the enveloping algebra also in the case of $A_2^{(2)}$, we exhibit a basis generalizing the one provided in \cite{HG} and in \cite{DM} and determine the commutation relations in a compact yet explicit formulation (see theorem \ref{trmA22} and appendix \ref{appendA}). We use the same approach as Mitzman's, with a further simplification consisting in the remark that an element of the form $G(u,v)=\exp(xu)\exp(yv)$ is characterized by two properties: $G(0,v)=\exp(yv)$ and ${dG\over du}=xG$.
\noindent Moreover, studying the rank 1 cases we prove that, both in the untwisted and in the twisted case, $\u_{\Z}^{im,+}$ and $^*{\cal U}_{\Z}^{im,+}$ are algebras of polynomials: as stated in \cite{F}, the generators of $\u_{\Z}^{im,+}$ are the elements $\Lambda_k$ introduced in \cite{HG} and \cite{DM} (see proposition \ref{tmom} and remark \ref{tmfv}); the generators of $^*{\cal U}_{\Z}^{im,+}$ in the case $A_2^{(2)}$ are elements defined formally as the $\Lambda_k$'s after a deformation of the $h_r$'s (see definition \ref{thuz} and remark \ref{hdiversi}): describing $^*{\cal U}_{\Z}^{im,+}(\hat{{\frak sl_3}}^{\!\!\chi})$ (denoted by $\tilde{\cal U}_{\Z}^{0,+}$) has been the hard part of this work. \vskip .3 truecm
\noindent We work over ${\mathbb{Q}}$ and devote particular preliminary care to the description of some integral forms of ${\mathbb{Q}}[x_i|i\in I]$ and of their properties and relations when they appear in some non commutative situations, properties that will be repeatedly used for the computations in ${\frak g}$: fixing the notations helps to understand the construction in the correct setting. With analogous care we discuss the symmetries arising both in $\hat\frak sl_2$ and in $\hat{{\frak sl_3}}^{\!\!\chi}$. We also choose to recall the case of $\frak sl_2$ and to give in a few lines the proof of the theorem describing its divided power integral form, in order to present in this easy context the tools that will be used in the more complicated affine cases.
\vskip .3 truecm
\noindent The paper is organized as follows.
Section \ref{intgpl} is devoted to reviewing the description of some integral forms of the algebra of polynomials (polynomials over $\Z$, divided powers, ``binomials'' and symmetric functions, see \cite{IM}): they are introduced together with their generating series as exponentials of suitable series with null constant term, and their properties are rigorously stated, thus preparing for their use in the Lie algebra setting.
\noindent We have inserted here, in proposition \ref{tmom}, a result about the stability of the symmetric functions with integral coefficients under the homomorphism $\lambda_m$ mapping $x_i$ to $x_i^m$ ($m>0$ fixed), which is almost trivial in the symmetric function context; it is a straightforward consequence of this observation that $\u_{\Z}^{im,+}$ is an algebra of polynomials and so is $^*\u_{\Z}^{im,+}$ in the twisted case. We also provide a direct, elementary proof of this proposition (see proposition \ref{tdmom}).
In section \ref{ncn} we collect some computations in non commutative situations that we shall systematically refer to in the following sections.
Section \ref{sld} deals with the case of $\frak sl_2$: the one-page formulation and proof that we present (see theorem \ref{trdc}) inspire the way we study $\hat\frak sl_2$ and $\hat{{\frak sl_3}}^{\!\!\chi}$, and offer an easy introduction to the strategy followed also in the harder affine cases: decomposing our $\Z$-algebra as a tensor product of commutative subalgebras; describing these commutative structures thanks to the examples introduced in section \ref{intgpl}; and glueing the pieces together by applying the results of section \ref{ncn}.
\noindent Even though the results of this section imply the commutation rules between $(x_r^+)^{(k)}$ and $(x_{-r}^-)^{(l)}$ ($r\in\Z$, $k,l\in\N$) in the enveloping algebra of $\hat\frak sl_2$ (see remark \ref{hrs}), it is worth remarking that section \ref{slh} does not depend on section \ref{sld}, and can be read independently (see remark \ref{hev}).
In section \ref{slh} we discuss the case of $\hat\frak sl_2$.
\noindent The first part of the section is devoted to the choice of the notations in $\hat{\cal U}={\cal U}(\hat\frak sl_2)$; to the definition of its (commutative) subalgebras $\hat{\cal U}^{\pm}$ (corresponding to the real component of $\hat{\cal U}$), $\hat{\cal U}^{0,\pm}$ (corresponding to the imaginary component), $\hat{\cal U}^{0,0}$ (corresponding to the Cartan), of their integral forms $\hat{\cal U}_{\Z}^{\pm}$, $\hat{\cal U}_{\Z}^{0,\pm}$, $\hat{\cal U}_{\Z}^{0,0}$, and of the $\Z$-subalgebra $\hat{\cal U}_{\Z}$ of $\hat{\cal U}$; and to a detailed reminder about the useful symmetries (automorphisms, antiautomorphisms, homomorphisms and triangular decomposition) thanks to which we can get rid of redundant computations.
\noindent In the second part of the section the apparently tough computations involved in the commutation relations are reduced to four formulas whose proofs are contained in a few lines:
proposition \ref{zzk}, proposition \ref{pum}, lemma \ref{limt} and proposition \ref{exefh} (together with proposition \ref{tmom}) are all that is needed to show that $\hat{\cal U}_{\Z}$ is an integral form of $\hat{\cal U}$, to recognize that the imaginary (positive and negative) components $\hat{\cal U}_{\Z}^{0,\pm}$ of $\hat{\cal U}_{\Z}$ are the algebras of polynomials $\Z[\Lambda_k(\xi(\pm 1))|k\geq 0]=\Z[\hat h_{\pm k}|k>0]$, and to exhibit a $\Z$-basis of $\hat{\cal U}_{\Z}$ (see theorem \ref{trm}).
In section \ref{ifa22} we finally present the case of $A_2^{(2)}$.
\noindent As for $\hat\frak sl_2$ we first make explicit some general structures of ${\cal U}(\hat{{\frak sl_3}}^{\!\!\chi})$ (that we denote here $\tilde{\cal U}$ in order to distinguish it from $\hat{\cal U}={\cal U}(\hat\frak sl_2)$): notations, subalgebras and symmetries. Here we introduce the elements $\tilde h_k$ through the announced deformation of the formulas defining the elements $\hat h_k$'s (see definition \ref{thuz} and remark \ref{hdiversi}). We also describe a ${\mathbb{Q}}[w]$-module structure on a Lie subalgebra $L$ of $\hat{{\frak sl_3}}^{\!\!\chi}$ (see definitions \ref{sottoalgebraL} and \ref{qwmodulo}), thanks to which we can further simplify the notations.
\noindent In addition, in remark \ref{emgg} we recall the embeddings of $\hat{\cal U}$ inside $\tilde{\cal U}$ thanks to which a big part of the work can be translated from section \ref{slh}.
\noindent The heart of the problem is thus reduced to the commutation of $\exp(x_0^+u)$ with $\exp(x_1^-v)$ (which is technically more complicated than for $A_1^{(1)}$ since it is a product involving a higher number of factors) and to deducing from this formula the description of the imaginary part of the integral form as the algebra of the polynomials in the $\tilde h_k$'s. To the solution of this problem, which represents the central contribution of this work, we dedicate subsection \ref{sottosezione}, where we concentrate, perform and explain the necessary computations. \vskip .15 truecm \noindent At the end of the paper some appendices are added for the sake of completeness.
In appendix \ref{appendA} we collect all the straightening formulas: since not all of them are necessary for our proofs, and in the previous sections we computed only those which were essential for our argument, we give here a complete explicit picture of the commutation relations.
Appendix \ref{appendB} is devoted to the description of a $\Z$-basis of $\Z^{(sym)}[h_r|r>0]$ alternative to the one introduced in example \ref{rvsf}.
\noindent $\Z^{(sym)}[h_r|r>0]$ is the algebra of polynomials $\Z[\hat h_k|k>0]$, and as such it has a $\Z$-basis consisting of the monomials in the $\hat h_k$'s, which is the one considered in our paper. But, as mentioned above, this algebra, that we are naturally interested in because it is isomorphic to the imaginary positive part of the integral form of the rank 1 Kac-Moody algebras, was not recognized by Garland and Mitzman as an algebra of polynomials: in this appendix the $\Z$-basis they introduce is studied from the point of view of the symmetric functions and thanks to this interpretation it is easily proved to generate freely the same $\Z$-submodule of ${\mathbb{Q}}[h_r|r>0]$ as the monomials in the $\hat h_k$'s.
In appendix \ref{appendC} we compare Mitzman's integral form of the enveloping algebra of type $A_2^{(2)}$ with the one studied here, proving the inclusion stated above. We also show that our commutation relations imply Mitzman's theorem.
Finally, in order to help the reader navigate the notations and easily find their definitions, we conclude the paper with an index of symbols, collected in appendix \ref{appendD}.
\vskip .3 truecm \noindent The study of the integral form of the affine Kac-Moody algebras from the point of view of the Drinfeld presentation, which differs from the one defined through the Kac-Moody presentation (\cite{HG} and \cite{DM}) in the case $A_{2}^{(2)}$ as outlined above, is motivated by the interest in the representation theory over $\Z$, since for the affine Kac-Moody algebras the notion of highest weight vector with respect to the $e_i$'s has been usefully replaced with that defined through the action of the $x_{i,r}^+$'s (see the works of Chari and Pressley \cite{C} and \cite{CP2}): in order to study what happens over the integers it is useful to work with an integral form defined in terms of the same $x_{i,r}^+$'s.
\noindent This work is also intended to be the preliminary classical step in the project of constructing and describing the {\it quantum} integral form for the twisted affine quantum algebras (with respect to the Drinfeld presentation). It is a joint project with Vyjayanthi Chari (see also \cite{CP}), who proposed it during a three-month period that she spent as a visiting professor at the Department of Mathematics of the University of Rome ``Tor Vergata''.
\noindent The commutation relations involved are extremely complicated and appear to be unworkable by hand without a deeper insight; we hope that a simplified approach can open a viable way to work in the quantum setting.
\vskip .5 truecm
\section{Integral form and commutative examples} \label{intgpl}
\vskip .5 truecm \noindent In this section we give the definition of integral form and summarize, fixing the notations useful to our purpose, some well known commutative examples (studied in depth and systematically presented in \cite{IM}), which will play a central role in the non commutative enveloping algebras of the finite and affine Kac-Moody algebras.
\vskip .3 truecm
\begin{definition} \label{intu}
\noindent Let $U$ be a ${\mathbb{Q}}$-algebra. An integral form of $U$ is a $\Z$-algebra $U_{\Z}$ such that:
i) $U_{\Z}$ is a free $\Z$-module;
ii) $U={\mathbb{Q}}\otimes_{\Z}U_{\Z}$.
\noindent In particular an integral form of $U$ is (can be identified with) a $\Z$-subalgebra of $U$, and a $\Z$-basis of an integral form of $U$ is a ${\mathbb{Q}}$-basis of $U$.
\end{definition}
\vskip .3 truecm \begin{example} \label{clpol}
\noindent Of course $\Z[x_i|i\in I]$ is an integral form of ${\mathbb{Q}}[x_i|i\in I]$ with basis the set of monomials in the $x_i$'s, namely
$\{{\bf{x}}^{{\bf{k}}}=\prod_{i\in I}x_i^{k_i}\}$ where ${\bf{k}}:I\to\N$ is finitely supported (that is $\#\{i\in I|k_i\neq 0\}<\infty$).
\noindent If $\{y_i\}_{i\in I}$ and $\{x_i\}_{i\in I}$ are $\Z$-bases of the same $\Z$-module, then $\Z[x_i|i\in I]=\Z[y_i|i\in I]$.
\end{example}
This can be said also as follows:
\noindent {Let $M$ be a free $\Z$-module and $V={\mathbb{Q}}\otimes_{\Z}M$} and consider the functor $S=$ ``symmetric algebra'' from the category of $\Z$-modules (respectively ${\mathbb{Q}}$-vector spaces) to the category of commutative unitary $\Z$-algebras (respectively commutative unitary ${\mathbb{Q}}$-algebras).
\noindent Then $SM$ is an integral form of $SV$ and $SM\cap V=M$.
\noindent By definition, every integral form of $SV$ containing $M$ contains $SM$, that is $SM$ is the least integral form of $SV$ containing $M$. \vskip .3 truecm \noindent We are interested in other remarkable integral forms of $SV$ containing $M$.
\begin{remark} \label{srinv} \noindent Let $U$ be a unitary $\Z$-algebra and $f(u)\in U[[u]]$. Then:
\noindent 1) If $f(u)\in 1+uU[[u]]$ then:
i) $f(u)$ is invertible in $U[[u]]$;
ii) the coefficients of $f(u)$, those of $f(-u)$ and those of $f(u)^{-1}$ generate the same $\Z$-subalgebra of $U$;
\noindent 2) If $f(u)\in uU[[u]]$ then ${\rm{exp}}(f(u))$ is a well defined element of $1+uU[[u]]$;
\noindent 3) If $f(u)\in 1+uU[[u]]$ then ${\rm{ln}}(f(u))$ is a well defined element of $uU[[u]]$;
\noindent 4) ${\rm{exp}}\,{\scriptstyle\circ}\,{\rm{ln}}\big|_{1+uU[[u]]}=id$ and ${\rm{ln}}\,{\scriptstyle\circ}\, {\rm{exp}}\big|_{uU[[u]]}=id$. \end{remark}
\vskip .3 truecm \begin{notation} \label{ntdvd} \noindent Let $a$ be an element of a unitary ${\mathbb{Q}}$-algebra $U$. The divided powers of $a$ are the elements $$a^{(k)}={a^k\over k!}\, \, (k\in\N).$$ Remark that the generating series of the $a^{(k)}$'s is $\exp(a u)$, that is \begin{equation}\label{gensexp} \sum_{k\geq 0}a^{(k)}u^k=\exp(au). \end{equation} \end{notation}
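\noindent For instance, the divided powers of a single element $a$ multiply according to $$a^{(k)}a^{(l)}={k+l\choose k}a^{(k+l)}\,\,\,\forall k,l\in\N$$ (equivalently, $\exp(au)\exp(av)=\exp(a(u+v))$), so that the $\Z$-span of the $a^{(k)}$'s is closed under multiplication.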
\vskip .3 truecm \begin{example} \label{dvdpw}
\noindent Let $\{x_i\}_{i\in I}$ be a $\Z$-basis of $M$. Then it is well known and trivial that:
i) The $\Z$-subalgebra $S^{(div)}M\subseteq SV$ generated by $\{x^{(k)}\}_{x\in M,k\in\N}$ contains $M$;
ii) $S^{(div)}M\cap V=M$;
iii) $\{x_i^{(k)}\}_{i\in I,k\in\N}$ is a set of algebra-generators (over $\Z$) of $S^{(div)}M$;
iv) the set $\{{\bf{x}}^{({\bf{k}})}=\prod_{i\in I}x_i^{(k_i)}|{\bf{k}}:I\to\N$ is finitely supported$\}$ is a $\Z$-basis of $S^{(div)}M$.
v) $S^{(div)}M$ is an integral form of $SV$ (called the algebra of the divided powers of $M$).
\noindent $S^{(div)}M$ is also denoted $\Z^{(div)}[x_i|i\in I]$.
\noindent Remark that if $m(u)=\sum_{r\in\N}m_ru^r\in uM[[u]]$ then \begin{equation} \label{divpoweq} m(u)^{(k)}\in S^{(div)}M[[u]]\,\,\forall k\in\N \end{equation} or equivalently $${\rm{exp}}(m(u))\in S^{(div)}M[[u]].$$ The converse is obviously also true: \begin{equation}\label{sdivmv}m(u)\in uV[[u]],\ {\rm{exp}}(m(u))\in S^{(div)}M[[u]]\Leftrightarrow m(u)\in uM[[u]].\end{equation}
\end{example}
\vskip .3 truecm
\begin{notation} \label{ntbin}
\noindent Let $a$ be an element of a unitary ${\mathbb{Q}}$-algebra $U$. The ``binomials'' of $a$ are the elements $${a\choose k}={a(a-1)\cdot...\cdot(a-k+1)\over k!}\, \,\,\,\, \,\,\,(k\in\N).$$ Notice that ${a\choose k}$ is the image of ${x\choose k}$ through the evaluation $ev_a:{\mathbb{Q}}[x]\to U$ mapping $x$ to $a$. \end{notation}
\noindent Now consider the series $\exp(a\ln(1+u))$: this is a well defined element of $U[[u]]$ (because $a\ln(1+u)\in uU[[u]]$) whose coefficients are polynomials in $a$; this means that with the notations above $$\exp(a\ln(1+u))=ev_a(\exp(x{\rm{ln}}(1+u))).$$ In particular if we want to prove that for all $U$ and for all $a\in U$ the generating series of the ${a\choose k}$'s is $\exp(a\ln(1+u))$ it is enough to prove the claim in the case $a=x\in{\mathbb{Q}}[x]$, and to this aim it is enough to compare the evaluations on an infinite subset of ${\mathbb{Q}}$ (for instance on $\N$), thus reducing the proof to the trivial observation that $\forall n\in\N$ $$\sum_{k\in\N}{n\choose k}u^k=(1+u)^n=\exp(n\ln(1+u)).$$ Thus in general the generating series $\exp(a\ln(1+u))$ of the ${a\choose k}$'s can and will be denoted as $(1+u)^a$; more explicitly \begin{equation}\label{gensbin}\sum_{k\in\N}{a\choose k}u^k=(1+u)^a=\exp\Big(\sum_{r>0}(-1)^{r-1}{a\over r}u^r\Big).\end{equation}
\noindent It is clear from the definition of $(1+u)^a$ that if $a$ and $b$ are commuting elements of $U$ then $$(1+u)^{a+b}=(1+u)^a(1+u)^b.$$ It is also clear that the $\Z$-submodule of $U$ generated by the coefficients of $(1+u)^{a+m}$ ($a\in U$, $m\in\Z$) depends only on $a$ and not on $m$; it is actually a $\Z$-subalgebra of $U$: indeed for all $k,l\in\N$ $${a\choose k}{a-k\choose l}={k+l\choose k}{a\choose k+l}.$$
\noindent More precisely for each $m\in\Z$ and $n\in\N$ the $\Z$-submodule of $U$ generated by the ${a+m\choose k}$'s for $k=0,...,n$ ($a\in U$) depends only on $a$ and $n$ and not on $m$.
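\noindent For instance, for $m=1$ this independence is made explicit by the polynomial identity $${a+1\choose k}={a\choose k}+{a\choose k-1}\,\,\,\forall k>0,$$ which expresses the coefficients of $(1+u)^{a+1}=(1+u)(1+u)^a$ as integral combinations of the coefficients of $(1+u)^a$ (and conversely, by induction on $k$).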
\noindent Finally notice that in $U[[u]]$ we have ${{\rm{d}}\over{\rm{d}}u}(1+u)^a=a(1+u)^{a-1}$. \vskip .3 truecm \begin{example} \label{binex}
\noindent Let $\{x_i\}_{i\in I}$ be a $\Z$-basis of $M$. Then it is well known and trivial that:
i) The $\Z$-subalgebra $S^{(bin)}M\subseteq SV$ generated by $\{{x\choose k} \}_{x\in M,k\in\N}$ contains $M$;
ii) $\{{x_i\choose k}\}_{i\in I,k\in\N}$ is a set of algebra-generators (over $\Z$) of $S^{(bin)}M$;
iii) the set $\{{{\bf{x}}\choose{\bf{k}}}=\prod_{i\in I}{x_i\choose k_i}|{\bf{k}}:I\to\N$ finitely supported$\}$ is a $\Z$-basis of $S^{(bin)}M$.
iv) $S^{(bin)}M\cap V=M$;
v) $S^{(bin)}M$ is an integral form of $SV$ (called the algebra of binomials of $M$).
\noindent $S^{(bin)}M$ is also denoted $\Z^{(bin)}[x_i|i\in I]$. \end{example}
\vskip .3 truecm \begin{example} {\,(Review of the symmetric functions, see \cite{IM})} \label{rvsf}
\noindent Let $n\in\N$. It is well known that $\Z[x_1,...,x_n]^{{\cal{S}}_n}$ is an integral form of ${\mathbb{Q}}[x_1,...,x_n]^{{\cal{S}}_n}$ and that $\Z[x_1,...,x_n]^{{\cal{S}}_n}=\Z[e_1^{[n]},...,e_n^{[n]}]$, where the (algebraically independent for $k=1,...,n$) elementary symmetric polynomials $e_k^{[n]}$'s are defined by \begin{equation}\label{mcd} \prod_{i=1}^n(T-x_i)=\sum_{k\in\N}(-1)^ke_k^{[n]}T^{n-k} \end{equation} and are homogeneous of degree $k$, that is $e_k^{[n]}\in\Z[x_1,...,x_n]_k^{{\cal{S}}_n}\subseteq{\mathbb{Q}}[x_1,...,x_n]_k^{{\cal{S}}_n}$.
\noindent It is also well known that for $n_1\geq n_2$ the natural projection $$\pi_{n_1,n_2}:{\mathbb{Q}}[x_1,...,x_{n_1}]\to{\mathbb{Q}}[x_1,...,x_{n_2}]$$ defined by $$\pi_{n_1,n_2}(x_i)=\begin{cases}x_i&{\rm{if\, }}i\leq n_2\\ 0 &{\rm{otherwise}}\end{cases}$$ is such that $\pi_{n_1,n_2}(e_k^{[n_1]})= e_k^{[n_2]} $ for all $k\in\N$. Then $$\bigoplus_{d\geq 0}\varprojlim\Z[x_1,...,x_n]_d^{{\cal{S}}_n}=\Z[e_1,...,e_k,...]\, \, (e_k\, {\rm{inverse\, limit\, of\, the\, }}e_k^{[n]})$$ is an integral form of $\oplus_{d\geq 0}\varprojlim{\mathbb{Q}}[x_1,...,x_n]_d^{{\cal{S}}_n}$, which is called the algebra of the symmetric functions.
\noindent Moreover the elements $$p_r^{[n]}=\sum_{i=1}^nx_i^r\in\Z[x_1,...,x_n]^{{\cal S}_n}\, \, (r>0,\, n\in\N)$$ and their inverse limits $p_r\in\Z[e_1,...,e_k,...]$ ($\pi_{n_1,n_2}(p_r^{[n_1]})=p_r^{[n_2]}$ for all $r>0$ and all $n_1\geq n_2$) give another set of generators of the ${\mathbb{Q}}$-algebra of the symmetric functions: the $p_r$'s are algebraically independent and $$\bigoplus_{d\geq 0}\varprojlim{\mathbb{Q}}[x_1,...,x_n]_d^{{\cal S}_n}={\mathbb{Q}}[p_1,...,p_r,...].$$ Finally $\Z[e_1,...,e_k,...]$ is an integral form of ${\mathbb{Q}}[p_1,...,p_r,...]$ containing $p_r$ for all $r>0$ (more precisely a linear combination of the $p_r$'s lies in $\Z[e_1,...,e_k,...]$ if and only if it has integral coefficients), the relation between the $e_k$'s and the $p_r$'s being given by: $$\sum_{k\in\N}(-1)^ke_ku^k={\rm{exp}}\Big(-\sum_{r>0}{p_r\over r}u^r\Big).$$
\noindent In this context we use the notation $$\Z[e_k|k>0]=\Z^{(sym)}[p_r|r>0
]\subseteq{\mathbb{Q}}[p_r|r>0] ;$$ to stress the dependence of the $e_k$'s on the $p_r$'s we set $e=\hat p$, that is \begin{equation} \label{dfhp} \hat p(u)=\sum_{k\in\N}\hat p_ku^k={\rm{exp}}\Big(\sum_{r>0}(-1)^{r-1}{p_r\over r}u^r\Big) \end{equation}
and \begin{equation}\label{zdfhp}\Z[\hat p_k|k>0
]=\Z^{(sym)}[p_r|r>0].\end{equation} \end{example}
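\noindent In low degree, (\ref{dfhp}) gives explicitly $$\hat p_1=p_1,\,\,\,\hat p_2={p_1^2-p_2\over 2},\,\,\,\hat p_3={p_1^3-3p_1p_2+2p_3\over 6},$$ that is the usual expressions of $e_1$, $e_2$, $e_3$ in terms of the power sums; note that these elements belong to $\Z^{(sym)}[p_r|r>0]$ although their coefficients with respect to the $p_r$'s are not integral.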
\vskip .3 truecm \begin{remark} \label{spsym} \noindent With the notations above, let $\varphi:{\mathbb{Q}}[p_1,...,p_r,...]\to U$ be an algebra-homomorphism and $a=\varphi(p_1)$:
i) if $\varphi(p_r)=0$ for $r>1$ then $\varphi(\hat p_k)=a^{(k)}$ for all $k\in\N$;
ii) if $\varphi(p_r)=a$ for all $r>0$ then $\varphi(\hat p_k)={a\choose k}$ for all $k\in\N$.
\noindent Hence $\Z^{(sym)}$ is a generalization of both $\Z^{(div)}$ and $\Z^{(bin)}$.
\end{remark}
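\noindent Both statements can be checked directly on the generating series (\ref{dfhp}): in case i) $\varphi(\hat p(u))={\rm{exp}}(au)$, which by (\ref{gensexp}) is the generating series of the $a^{(k)}$'s, while in case ii) $\varphi(\hat p(u))={\rm{exp}}\Big(\sum_{r>0}(-1)^{r-1}{a\over r}u^r\Big)=(1+u)^a$ by (\ref{gensbin}).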
\vskip .3 truecm
\begin{remark}\label{funtorialita}
Let $M$ be the $\Z$-module with basis $\{p_r|r>0\}$ and, as above, $V={\mathbb{Q}}\otimes_{\Z}M$. Then:
\noindent i) as for the functors $S$, $S^{(div)}$ and $S^{(bin)}$, we have $\Z^{(sym)}[p_r|r>0]\cap V=M$;
\noindent ii) unlike the functors $S$, $S^{(div)}$ and $S^{(bin)}$, $\Z^{(sym)}[p_r|r>0]$ depends on $\{p_r|r>0\}$ and not only on $M$: for instance
$$\Z^{(sym)}[-p_1,p_r|r>1]\neq \Z^{(sym)}[p_r|r>0]$$ (it is easy to check that these integral forms are different, for example in degree 3; an explicit computation is given after this remark);
\noindent iii) not all the sign changes
of the $p_r$'s produce different $\Z^{(sym)}$-forms of ${\mathbb{Q}}[p_r|r>0]$:
$$\Z^{(sym)}[(-1)^rp_r|r>0]=\Z^{(sym)}[-p_r|r>0]=\Z^{(sym)}[p_r|r>0]$$ since $$\exp\left(\sum_{r>0}(-1)^{r-1}{(-1)^rp_r\over r}u^r\right)=\exp\left(\sum_{r>0}(-1)^{r-1}{p_r\over r}(-u)^r\right)$$ and $$\exp\left(\sum_{r>0}(-1)^{r-1}{-p_r\over r}u^r\right)=\exp\left(\sum_{r>0}(-1)^{r-1}{p_r\over r}u^r\right)^{-1}$$ (see remark \ref{srinv},1),ii)). \end{remark}
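\noindent As for the example in remark \ref{funtorialita},ii), a direct computation shows that the generating series obtained from (\ref{dfhp}) by replacing $p_1$ with $-p_1$ has degree 3 coefficient $$-{p_1^3\over 6}+{p_1p_2\over 2}+{p_3\over 3}={2\over 3}\hat p_1^3-2\hat p_1\hat p_2+\hat p_3,$$ which does not belong to $\Z[\hat p_k|k>0]$ since $\{\hat p_1^3,\hat p_1\hat p_2,\hat p_3\}$ is a $\Z$-basis of its degree 3 component; on the other hand in degrees 1 and 2 the two integral forms obviously coincide (the coefficients being respectively $\mp p_1$ and ${p_1^2-p_2\over 2}$).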
\vskip .3 truecm
\noindent In general it is not trivial to understand whether an element of ${\mathbb{Q}}[p_r|r>0]$ belongs to $\Z^{(sym)}[p_r|r>0]$ or not; proposition \ref{tmom} gives an answer to this question, which is generalized in proposition \ref{convoluzioneintera} (the examples in remark \ref{funtorialita}, ii) and iii) can also be obtained as applications of proposition \ref{convoluzioneintera}).
\vskip .3 truecm \begin{proposition} \label {tmom}
\noindent Let us fix $m>0$ and let $\lambda_m:{\mathbb{Q}}[p_r|r>0]\to{\mathbb{Q}}[p_r|r>0]$ be the algebra homomorphism defined by $\lambda_m(p_r)=p_{mr}$ for all $r>0$.
\noindent Then $\Z^{(sym)}[p_r|r>0]$ $(=\Z[\hat p_k|k>0])$ is $\lambda_m$-stable. \begin{proof} \noindent For $n\in\N$ let $\lambda_m^{[n]}:{\mathbb{Q}}[x_1,...,x_n]\to{\mathbb{Q}}[x_1,...,x_n]$ be the algebra homomorphism defined by $\lambda_m^{[n]}(x_i)=x_i^m$ for all $i=1,...,n$.
\noindent We obviously have that $$\Z[x_1,...,x_n]\ \ {\rm{is\ }}\lambda_m^{[n]}{\rm{-stable}},$$ $${\mathbb{Q}}[x_1,...,x_n]_d\ \ {\rm{is\ mapped\ to\ }}{\mathbb{Q}}[x_1,...,x_n]_{md}\,\,\forall d\geq 0,$$ $$\lambda_m^{[n]}\circ\sigma=\sigma\circ\lambda_m^{[n]}\,\,\forall n\in\N,\, \sigma\in{\cal{S}}_n,$$ $$\pi_{n_1,n_2}\circ\lambda_m^{[n_1]}=\lambda_m^{[n_2]}\circ\pi_{n_1,n_2}\,\,\forall n_1\geq n_2,$$ $$\lambda_m^{[n]}(p_r^{[n]})=p_{mr}^{[n]}\,\,\forall n\in\N,\,r>0,$$
hence the limits of the $\lambda_m^{[n]}\big|_{{\mathbb{Q}}[x_1,...,x_n]_d^{{\cal S}_n}}$'s exist: their direct sum over $d\geq 0$ stabilizes $\oplus_{d\geq 0}\varprojlim\Z[x_1,...,x_n]_d^{{\cal S}_n}=\Z[\hat p_k|k>0]$ and coincides with $\lambda_m$.
\noindent In particular $\lambda_m(\hat p_k)\in\Z[\hat p_l|l>0]$ $\forall k\in\N$.
\end{proof} \end{proposition}
\vskip .3 truecm \noindent We also propose a second, direct, proof of proposition \ref{tmom}, which provides in addition an explicit expression of the $\lambda_m(\hat p_k)$'s in terms of the $\hat p_l$'s.
\begin{proposition}\label{tdmom} Let $m$ and $\lambda_m$ be as in proposition \ref{tmom} and $\omega \in\C$ a primitive $m^{th}$ root of 1. Then
$$\lambda_m(\hat p(-u^m))=\prod_{j=0}^{m-1}\hat p(-\omega^j u)\in\Z[\hat p_k|k>0][[u]].$$
\begin{proof} The equality in the statement is an immediate consequence of
$$\sum_{j=0}^{m-1}\omega^{jr}=\begin{cases}m&{\rm{if}}\ m|r\\0&{\rm{otherwise}},\end{cases}$$ so that $$-\sum_{j=0}^{m-1}\sum_{r>0}{p_r\over r}\omega^{jr}u^r=-\sum_{r>0}{p_{mr}\over r}u^{mr}=\lambda_m\left(-\sum_{r>0}{p_r\over r}(u^m)^r\right),$$ whose exponential is the claim.
\noindent Then for all $k>0$
$$\lambda_m(\hat p_k)\in{\mathbb{Q}}[\hat p_l|l>0]\cap\Z[\omega][\hat p_l|l>0]=\Z[\hat p_l|l>0]$$ since ${\mathbb{Q}}\cap\Z[\omega]=\Z$. \end{proof}
\end{proposition}
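\noindent For instance, for $m=2$ the formula of proposition \ref{tdmom} reads $\lambda_2(\hat p(-u^2))=\hat p(-u)\hat p(u)$; comparing the coefficients of $u^2$ on both sides gives $-\lambda_2(\hat p_1)=2\hat p_2-\hat p_1^2$, that is $$\lambda_2(\hat p_1)=p_2=\hat p_1^2-2\hat p_2\in\Z[\hat p_k|k>0].$$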
\noindent In order to characterize the functions $a:\Z_+\to{\mathbb{Q}}$ such that
$$\Z^{(sym)}[a_rp_r|r>0]\subseteq\Z^{(sym)}[p_r|r>0]$$ we introduce the notation \ref{hcappucciof}, where we rename the $p_r$'s into $h_r$ since in the affine Kac-Moody case the $\Z^{(sym)}$-construction describes the imaginary component of the integral form. Moreover from now on $p_i$ will denote a positive prime number. \begin{notation}\label{hcappucciof} Given $a:\Z_+\to{\mathbb{Q}}$ set $$\sum_{k\geq 0}\hat h^{\{ a \} }_ku^k=\hat h^{\{a\}}(u)=\exp\left(\sum_{r>0}(-1)^{r-1}{a_rh_r\over r}u^r\right);$$ $1\!\!\!\!1$ denotes the function defined by $1\!\!\!\!1_r=1$ for all $r\in\Z_+$;
\noindent for all $m>0$
$1\!\!\!\!1^{(m)}$ denotes the function defined by $1\!\!\!\!1^{(m)}_r=\begin{cases}m&{\rm{if}}\ m|r\\0&{\rm{otherwise}}.\end{cases}$
\noindent Thus $\hat h^{\{1\!\!\!\!1\}}(u)=\hat h(u)$ and $\hat h^{\{1\!\!\!\!1^{(m)}\}}(-u)=\lambda_m(\hat h(-u^m))$.
\end{notation} \vskip .3 truecm \begin{recall}
\noindent The convolution product $*$ in the ring of the ${\mathbb{Q}}$-valued arithmetic functions $${\cal A}r=\{f:\Z_+\to{\mathbb{Q}}\}$$ is defined by $$(f*g)(n)=\sum_{r,s:\atop rs=n}f(r)g(s).$$ The M\"obius function $\mu:\Z_+\to{\mathbb{Q}}$ defined by $$\mu\left(\prod_{i=1}^n p_i^{r_i}\right)=\begin{cases}(-1)^n&{\rm {if}}\ r_i=1\ \forall i\\0&{\rm{otherwise}}\end{cases}$$ {\centerline{(where the $p_i$'s are distinct positive prime integers and $r_i\geq 1$ for all $i$)}} is the inverse of $1\!\!\!\!1$ in the ring of the arithmetic functions. \end{recall}
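\noindent The identity element of $({\cal A}r,*)$ is the function $\delta$ defined by $\delta(1)=1$ and $\delta(n)=0$ for $n>1$; the relation $\mu*1\!\!\!\!1=\delta$ amounts to the classical identity $\sum_{d|n}\mu(d)=0$ for all $n>1$: for instance for $n=p^2$ one gets $\mu(1)+\mu(p)+\mu(p^2)=1-1+0=0$.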
\vskip .3 truecm
\begin{proposition}\label{convoluzioneintera} \noindent Let $a:\Z_+\to{\mathbb{Q}}$ be any function; then, with the notations fixed in \ref{hcappucciof},
$$\hat h^{\{a\}}_k\in\Z[\hat h_l|l>0]\ \ \forall k>0\Leftrightarrow (\mu*a)(n)\in n\Z\ \ \forall n>0.$$
\begin{proof} \noindent Remark that $a=1\!\!\!\!1*\mu*a$, that is
$$\forall n>0\ a_n=\sum_{m|n}(\mu*a)(m)=\sum_{m|n}{(\mu*a)(m)\over m}m=\sum_{m>0}{(\mu*a)(m)\over m}1\!\!\!\!1^{(m)}_n,$$ which means $$a=\sum_{m>0}{(\mu*a)(m)\over m}1\!\!\!\!1^{(m)}.$$ Let $k_m={(\mu*a)(m)\over m}$ for all $m>0$, choose $m_0>0$ such that $k_m\in\Z$ $\forall m<m_0$ and set $a^{(0)}=\sum_{m<m_0}k_m1\!\!\!\!1^{(m)}$, $a'=a-a^{(0)}$, so that $$\hat h^{\{a\}}(u)=\hat h^{\{a'\}}(u)\hat h^{\{a^{(0)}\}}(u),$$ and, by proposition \ref{tmom} (see also notation \ref{hcappucciof}),
$$\ \ \ \hat h^{\{a^{(0)}\}}(u)\in\Z[\hat h_k|k>0][[u]].$$ It follows that
\noindent i) $\hat h^{\{a\}}(u)\in\Z[\hat h_k|k>0][[u]]\Leftrightarrow\hat h^{\{a'\}}(u)\in\Z[\hat h_k|k>0][[u]]$.
\noindent ii) $\forall n<m_0$ $\hat h^{\{a'\}}_n=0$, so that $\hat h^{\{a\}}_n=\hat h^{\{a^{(0)}\}}_n\in\Z[\hat h_k|k>0]$;
in particular $\hat h^{\{a\}}(u)\in\Z[\hat h_k|k>0][[u]]$ if $k_m\in\Z$ $\forall m>0$.
\noindent iii) $a'_{m_0}=(\mu*a)(m_0)=m_0k_{m_0}$ so that $\hat h^{\{a'\}}_{m_0}=(-1)^{m_0-1}k_{m_0}h_{m_0}$, which belongs to $\Z[\hat h_k|k>0]$ if and only if $k_{m_0}\in\Z$ (see remark \ref{funtorialita},i));
in particular $\hat h^{\{a\}}(u)\not\in\Z[\hat h_k|k>0][[u]]$ if $\exists m_0\in\Z_+$ such that $k_{m_0}\not\in\Z$.
\end{proof} \end{proposition} \begin{proposition}\label{emmepiallaerre} \noindent Let $a:\Z_+\to\Z$ be a function satisfying the condition
$$p^r|a_{mp^r}-a_{mp^{r-1}}\ \ \forall p,m\in\Z_+\ \ {\rm{with}}\ \ p\ \ {\rm{prime\ and}}\ (m,p)=1.$$
Then $n|(\mu*a)(n)$ $\forall n\in\Z_+$.
\begin{proof}
\noindent
The condition $1|(\mu*a)(1)$ is equivalent to the condition $a_1\in\Z$.
\noindent For $n>1$ remark that $$n|(\mu*a)(n)\Leftrightarrow p^r|(\mu*a)(n)\ \ \forall p\ {\rm{prime}},\ r>0\ {\rm{such\ that}}\ p^r||n.$$ Recall that if $P $ is the set of the prime factors of $n$ and $p\in P$ then $$(\mu*a)(n)=\sum_{S\subseteq P}(-1)^{\#S}a_{{n\over\prod_{q\in S}q}}=$$ \begin{equation}\label{mpr}=\sum_{S'\subseteq P\setminus\{p\}}(-1)^{\#S'}(a_{{n\over\prod_{q\in S'}q}}-a_{{n\over p\prod_{q\in S'}q}}).\end{equation} The claim follows from the remark that
$p^r||n$ if and only if $p^r||{n\over\prod_{q\in S'}q}$.
\end{proof} \end{proposition} \begin{remark} \label{vicelambda} The converse of proposition \ref{emmepiallaerre} is trivially true, too, and is immediately proved by applying (\ref{mpr})
to the minimal $n>0$ such that there exists $p|n$ and $r>0$ ($p^r|n$, $n=mp^r$) not satisfying the hypothesis of the statement. \end{remark} \noindent Proposition \ref{tmom} will play an important role in the study of the commutation relations in the enveloping algebra of $\hat\frak sl_2$ (see remarks \ref{stuz},vi) and \ref{exev}) and of $\hat{{\frak sl_3}}^{\!\!\chi}$ (see remark \ref{ometiomecap} and proposition \ref{sttuz},iv)).
\noindent Proposition
\ref{convoluzioneintera} is based on and generalizes proposition \ref{tmom}; it is a key tool in the study of the integral form in the case of $A_2^{(2)}$, see corollary \ref{hcappucciod}.
\noindent A more precise connection between the integral form $\Z^{(sym)}[h_r|r>0]$ of ${\mathbb{Q}}[h_r|r>0]$ and the homomorphisms $\lambda_m$'s, namely another $\Z$-basis of $\Z^{(sym)}[h_r|r>0]$ (basis defined in terms of the elements $\lambda_m(\hat h_k)$'s and arising from Garland's and Mitzman's description of the integral form of the affine Kac-Moody algebras) is discussed in appendix \ref{appendB}.
\vskip .5 truecm \section{Some non commutative cases} \label{ncn}
\vskip .5truecm \noindent We start this section with a basic remark.
\begin{remark} \label{dbsg} \noindent i) Let $U_1$, $U_2$ be two ${\mathbb{Q}}$-algebras, with integral forms respectively $\tilde U_1$ and $\tilde U_2$. Then $\tilde U_1\otimes_{\Z}\tilde U_2$ is an integral form of the ${\mathbb{Q}}$-algebra $U_1\otimes_{{\mathbb{Q}}}U_2$ .
\noindent ii) Let $U$ be an associative unitary ${\mathbb{Q}}$-algebra (not necessarily commutative) and $U_1,U_2\subseteq U$ be two ${\mathbb{Q}}$-subalgebras such that $U\cong U_1\otimes_{{\mathbb{Q}}} U_2$ as ${\mathbb{Q}}$-vector spaces. If $\tilde U_1,\tilde U_2$ are integral forms of $U_1,U_2$, then $\tilde U_1\otimes_{\Z} \tilde U_2$ is an integral form of $U$ if and only if $\tilde U_2\tilde U_1\subseteq\tilde U_1\tilde U_2$. \end{remark}
\noindent Remark \ref{dbsg},ii) suggests that if we have a (linear) decomposition of an algebra $U$ as an ordered tensor product of polynomial algebras $U_i$ ($i=1,...,N$), that is we have a linear isomorphism $$U\cong U_1\otimes_{{\mathbb{Q}}}...\otimes_{{\mathbb{Q}}} U_N,$$ then one can tackle the problem of finding an integral form of $U$ by studying the commutation relations among the elements of some suitable integral forms of the $U_i$'s.
\noindent Glueing together in a non commutative way the different integral forms of the algebras of polynomials discussed in section \ref{intgpl} is the aim of this section, which collects the preliminary work of the paper: the main results of the following sections are applications of the formulas found here.
\vskip .3 truecm \begin{notation}\label{lard} \noindent Let $U$ be an associative ${\mathbb{Q}}$-algebra and $a\in U$.
\noindent We denote by $L_a$ and $R_a$ respectively the left and right multiplication by $a$; of course $L_a-R_a=[a,\cdot]=-[\cdot,a]$. \end{notation}
\vskip .3 truecm \begin{lemma} \label{cle} \noindent Let $U$ be an associative unitary ${\mathbb{Q}}$-algebra.
\noindent Consider elements $a,b,c\in U$, $f,g\in End(U)$ and $\alpha(u)\in U[[u]]$. Then:
i) if ${\exp}(f)$ and ${\exp}(g)$ converge and $[f,g]=0$ we have $${\rm{exp}}(f\pm g)={\rm{exp}}(f){\rm{exp}}(g)^{\pm 1};$$
ii) $[L_a,R_a]=0$;
iii) if $f$ is an algebra-homomorphism and $f(a)=a$ we have
$$[f,L_a]=[f,R_a]=0;$$
iv) if ${\exp}(a)$ converges so do ${\rm{exp}}(L_a)$ and ${\rm{exp}}(R_a)$, and we have $${\rm{exp}}(L_a)=L_{{\rm{exp}}(a)},\,\,{\rm{exp}}(R_a)=R_{{\rm{exp}}(a)},\,\,{\rm{exp}}(R_a)=L_{{\rm{exp}}(a)}{\rm{exp}}([\cdot,a]);$$
v) if $\exp(a)$ and ${\rm{exp}}(c)$ converge we have $$ab=bc\Leftrightarrow {\rm{exp}}(a)b=b{\rm{exp}}(c);$$
vi) if $\exp(b)$ converges and $[b,c]=0$ we have $$[a,b]=c\Leftrightarrow a{\rm{exp}}(b)={\rm{exp}}(b)(a+c) ;$$
vii) if ${\rm{exp}}(a)$, ${\rm{exp}}(b)$ and ${\rm{exp}}(c)$ converge and $[a,c]=[b,c]=0$ then $$[a,b]=c\Leftrightarrow {\rm{exp}}(a){\rm{exp}}(b)={\rm{exp}}(b){\rm{exp}}(a){\rm{exp}}(c)$$
viii) if ${\exp}(a)$, ${\rm{exp}}(b)$ and ${\rm{exp}}(c)$ converge and $[a,c]=[b,c]=0$ then $$[a,b]=c\Rightarrow {\rm{exp}}(a+b)={\rm{exp}}(a){\rm{exp}}(b){\rm{exp}}(-c/2);$$
ix) if ${\rm{d}}:U\to U$ is a derivation and $[a,{\rm{d}}(a)]=0$ we have $${\rm{d}}({\rm{exp}}(a))={\rm{d}}(a){\rm{exp}}(a)={\rm{exp}}(a){\rm{d}}(a).$$
x) if $\alpha(u)=\sum_{r\in\N}\alpha_ru^r $ ($\alpha_r\in U$ $\forall r\in\N$) we have $${{\rm{d}}\over{\rm{d}}u}\alpha(u)=\alpha(u)b\Leftrightarrow \alpha(u)=\alpha_0{\rm{exp}}(bu)$$ and $${{\rm{d}}\over{\rm{d}}u}\alpha(u)=b\alpha(u)\Leftrightarrow \alpha(u)={\rm{exp}}(bu)\alpha_0.$$
\begin{proof} Statements v) and vi) are immediate consequence respectively of the fact that for all $n\in\N$:
v) $a^nb=bc^n$;
vi) $ab^{(n)}=b^{(n)}a+b^{(n-1)}c$.
\noindent vii) follows from v) and vi).
\noindent viii) follows from vii): $$(a+b)^{(n)}=\sum_{r,s,t:\atop r+s+2t=n}{(-1)^t\over 2^t}a^{(r)}b^{(s)}c^{(t)}.$$
\noindent The other points are obvious. \end{proof} \end{lemma}
\vskip .3 truecm \begin{proposition} \label{bdm}
\noindent Let us fix $m\in\Z$ and consider the ${\mathbb{Q}}$-algebra structure on $U={\mathbb{Q}}[x]\otimes_{{\mathbb{Q}}}{\mathbb{Q}}[h]$ given by $xh=(h-m)x$.
\noindent Then $\Z^{(div)}[x]\otimes_{\Z} \Z^{(bin)}[h]$ and $\Z^{(bin)}[h]\otimes_{\Z} \Z^{(div)}[x]$ are integral forms of $U$: their images in $U$ are closed under multiplication, and coincide. Indeed \begin{equation}\label{fru}x^{(k)}{h\choose l}={h-mk\choose l}x^{(k)}\,\,\forall k,l\in\N\end{equation} or equivalently, with a notation that will be useful in the following, \begin{equation}\label{fu} {\rm{exp}}(xu)(1+v)^h=(1+v)^h {\rm{exp}}\left({xu\over (1+v)^m}\right). \end{equation}
\begin{proof} The relation between $x$ and $h$ can be written as $$xP(h)=P(h-m)x$$ and $$x^{(k)}P(h)=P(h-mk)x^{(k)}$$ for all $P\in{\mathbb{Q}}[h]$ and for all $k >0$. In particular it holds for $P(h)={h\choose l}$, that is \begin{equation} \label{xvh} x(1+v)^h=(1+v)^{h-m}x=(1+v)^h{x\over(1+v)^m} \end{equation} and \begin{equation}\label{xvh2}x^{(k)}(1+v)^h=(1+v)^h\left({x\over(1+v)^m}\right)^{(k)}.\end{equation} \noindent The conclusion follows multiplying by $u^k$ and summing over $k$. \end{proof} \end{proposition}
\vskip .3 truecm \begin{proposition} \label{jhg} \noindent Let us fix $m\in\Z$ and consider the ${\mathbb{Q}}$-algebra structure on $$U={\mathbb{Q}}[x]\otimes_{{\mathbb{Q}}}{\mathbb{Q}}[z]\otimes_{{\mathbb{Q}}}{\mathbb{Q}}[y]$$ defined by $[x,z]=[y,z]=0$, $[x,y]=mz$.
\noindent Then $\Z^{(div)}[x]\otimes_{\Z}\Z^{(div)}[z]\otimes_{\Z}\Z^{(div)}[y]$ is an integral form of $U$. \begin{proof} Since $z$ commutes with $x$ and $y$ we just have to straighten $y^{(r)}x^{(s)}$. Thus the claim is a straightforward consequence of lemma \ref{cle},vii): \begin{equation}\label{strxx}\exp(yu)\exp(xv)=\exp(xv)\exp(zuv)^{-m}\exp(yu).\end{equation} \end{proof} \end{proposition}
\vskip .3 truecm
\begin{proposition} \label{heise}
\noindent Let us fix $m,l\in\Z$ and consider the ${\mathbb{Q}}$-algebra structure on $U={\mathbb{Q}}[h_r|r<0]\otimes_{{\mathbb{Q}}}{\mathbb{Q}}[h_0,c]\otimes_{{\mathbb{Q}}}{\mathbb{Q}}[h_r|r>0]$ given by $$[c,h_r]=0,\,\,[h_r,h_s]=\delta_{r+s,0}r(m+(-1)^rl)c\,\,\forall r,s\in\Z.$$
\noindent Then, recalling the notation $\Z[\hat h_{\pm k}|k>0]=\Z^{(sym)}[h_{\pm r}|r>0]$ and defining $U_{\Z}$ to be the $\Z$-subalgebra of $U$ generated by
$U_{\Z}^{\pm}=\Z^{(sym)}[h_{\pm r}|r>0]$ and $U_{\Z}^0=\Z^{(bin)}[h_0,c]$, we have that \begin{equation} \label{hhh} \hat h_+(u)\hat h_-(v)=\hat h_-(v)(1-uv)^{-mc}(1+uv)^{-lc}\hat h_+(u) \end{equation} and $U_{\Z}=U_{\Z}^-U_{\Z}^0U_{\Z}^+$, so that
$$U_{\Z}\cong\Z^{(sym)}[h_{-r}|r>0]\otimes_{\Z}\Z^{(bin)}[h_0,c]\otimes_{\Z}\Z^{(sym)}[h_{r}|r>0]$$ is an integral form of $U$. \begin{proof} \ref{hhh} follows from lemma \ref{cle}, vii) remarking that $$\Big[\sum_{r>0}(-1)^{r-1}{h_r\over r}u^r,\sum_{s>0}(-1)^{s-1}{h_{-s}\over s}v^s\Big]=c\sum_{r>0}{m+(-1)^rl\over r}u^rv^r=$$ $$=-mc{\rm {ln}}(1-uv)-lc{\rm {ln}}(1+uv).$$
\noindent Of course $U_{\Z}^0U_{\Z}^-=U_{\Z}^-U_{\Z}^0$ is a $\Z$-subalgebra of $U$, $U_{\Z}^-U_{\Z}^0U_{\Z}^+\subseteq U_{\Z}$, $U_{\Z}$ is generated by $U_{\Z}^-U_{\Z}^0U_{\Z}^+$ as $\Z$-algebra and $U_{\Z}^-U_{\Z}^0U_{\Z}^+\cong U_{\Z}^-\otimes_{\Z}U_{\Z}^0\otimes_{\Z}U_{\Z}^+$ as $\Z$-modules.
\noindent Hence we need to prove that $U_{\Z}^-U_{\Z}^0U_{\Z}^+$ is a $\Z$-subalgebra of $U$, or equivalently that it is closed under left multiplication by $U_{\Z}^+$ (because it is obviously closed under left multiplication by $U_{\Z}^-U_{\Z}^0$), which is a straightforward consequence of \ref{hhh}.
\end{proof}
\end{proposition}
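\noindent As a low degree consistency check, comparing the coefficients of $uv$ on both sides of (\ref{hhh}) gives $h_1h_{-1}=h_{-1}h_1+(m-l)c$, which is exactly the defining relation $[h_1,h_{-1}]=(m+(-1)^1l)c$.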
\vskip .3 truecm
\begin{lemma}\label{lhlh} Let $U$ be a ${\mathbb{Q}}$-algebra, $T:U\to U$ an automorphism, $$f\in\sum_{r>0}\Z T^ru^r\subseteq End(U[[u]]),$$ $h\in uU[[u]]$ and $x\in U$ such that $T(h)=h$ and $[x,h]=f(x)$. Then $$x{\rm{exp}}(h)=\exp(h)\cdot \exp(f)(x).$$ \begin{proof} By lemma \ref{cle},iv) $$x\exp(h)=\exp(h)\exp([\cdot,h])(x),$$ so we have to prove that $\exp([\cdot,h])(x)=\exp(f)(x),$ or equivalently that $[\cdot,h]^n(x)=f^n(x)$ for all $n\in\N$.
\noindent If $n=0,1$ the claim is obvious; if $n>1$, $f^{n-1}(x)=\sum_{r>0}a_rT^ru^r(x)$ with $a_r\in\Z$ for all $r>0$, $f$ commutes with $T$, and by the inductive hypothesis $$[\cdot,h]^n(x)=[f^{n-1}(x),h]=\left[\sum_{r>0}a_rT^{r}u^r(x),h\right]=$$ $$=\sum_{r>0}a_ru^rT^r([x,h])=\sum a_ru^rT^r f(x)=f\sum a_r u^rT^r(x)=f(f^{n-1}(x))=f^n(x).$$
\end{proof} \end{lemma}
\begin{proposition} \label{hh}
\noindent Let us fix integers $m_d$'s ($d>0$) and consider elements $\{h_r,\ x_s|r>0,s\in\Z\}$ in a ${\mathbb{Q}}$-algebra $U$ such that
$$[h_r,x_s]=\sum_{d|r}dm_dx_{r+s}\ \ \forall r>0, s\in\Z.$$
\noindent Let $T$ be an algebra automorphism of $U$ such that $$T(h_r)=h_r\,\,{\rm{and}}\,\, T(x_s)=x_{s-1}\,\,\forall r>0, s\in\Z.$$
\noindent Then, recalling the notation $\Z[\hat h_k|k>0]=\Z^{(sym)}[h_r|r>0]$, we have that \begin{equation}\label{cxh} x_r\hat h_+(u)=\hat h_+(u)\cdot\left(\prod_{d>0}(1-(-T^{-1}u)^d)^{-m_d}\right)(x_r). \end{equation}
If moreover the subalgebras of $U$ generated by $\{h_r|r>0\}$ and $\{x_r|r\in\Z\}$ are isomorphic respectively to
${\mathbb{Q}}[h_r|r>0]$ and ${\mathbb{Q}}[x_r|r\in\Z]$ and there is a ${\mathbb{Q}}$-linear isomorphism
$U\cong{\mathbb{Q}}[h_r|r>0]\otimes_{{\mathbb{Q}}}{\mathbb{Q}}[x_r|r\in\Z]$ then
$$\Z^{(sym)}[h_r|r>0]\otimes_{\Z}\Z^{(div)}[x_r|r\in\Z]$$ is an integral form of $U$. \begin{proof} This is an application of lemma \ref{lhlh}: let $h=\sum_{r>0}(-1)^{r-1}{h_r\over r}u^r$; then
$$[x_0,h]=\sum_{r>0}{(-1)^r\over r}u^r\sum_{d|r}dm_dT^{-r}(x_0)=$$ $$=\sum_{d>0}\sum_{s>0}{(-1)^{ds}\over s}m_dT^{-ds}u^{ds}(x_0)=f(x_0)$$ where $$f= -\sum_{d>0}m_d\ln(1-(-1)^{d}T^{-d}u^d).$$ Then $$x_0\hat h_+(u)=\hat h_+(u)\cdot\exp(f)(x_0)=\hat h(u)\cdot\left(\prod_{d>0}(1-(-T^{-1}u)^d)^{-m_d}\right)(x_0),$$ and the analogous statement for $x_r$ follows applying $T^{-r}$.
\noindent Remark that $\prod_{d>0}(1-(-T^{-1}u)^d)^{-m_d}=\sum_{r\geq 0}a_rT^{-r}u^r$ with $a_r\in\Z$ $\forall r\in\N$; the hypothesis on the commutativity of the subalgebra generated by the $x_r$'s implies that
$(\sum_{r\geq 0}a_rx_ru^r)^{(k)}$ lies in the subalgebra of $U$ generated by the divided powers $\{x_r^{(k)}|r\in\Z,k\geq 0\}$, which allows one to conclude the proof thanks to the last hypotheses on the structure of $U$.
\end{proof} \end{proposition}
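\noindent For instance, when $m_1=2$ and $m_d=0$ for all $d>1$ (which is the situation of $\hat\frak sl_2$, where $[h_r,x_s^+]=2x_{r+s}^+$, see section \ref{slh}), the operator appearing in (\ref{cxh}) is $(1+T^{-1}u)^{-2}=\sum_{k\geq 0}(-1)^k(k+1)T^{-k}u^k$, so that (\ref{cxh}) reads $$x_r\hat h_+(u)=\hat h_+(u)\sum_{k\geq 0}(-1)^k(k+1)x_{r+k}u^k.$$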
\vskip .3 truecm
\begin{remark} \label{praff}
\noindent Proposition \ref{hh} implies proposition \ref{bdm}: indeed when $m_1=m$, $m_d=0$ $\forall d>1$ we have a projection $h_r\mapsto h, x_r\mapsto x$, which maps $\exp(x_0u)$ to $\exp(xu)$, $\hat h(u)$ to $(1+u)^h$ and $T$ to the identity. \end{remark}
\vskip .5 truecm \section{The integral form of $\frak sl_2$ ($A_1$)} \label{sld} \vskip .5truecm
\noindent The results about $\frak sl_2$ and the $\Z$-basis of the integral form ${\cal U}_{\Z}(\frak sl_2)$ of its enveloping algebra ${\cal U}(\frak sl_2)$ are well known (see \cite{Ko} and \cite{S}). Here we recall the description of $\u_{\Z}(\frak sl_2)$ in terms of the non-commutative generalizations described in section \ref{ncn}, with the notations of the commutative examples given in section \ref{intgpl}.
\noindent The proof expressed in this language has the advantage that it can easily be generalized to the affine case. \vskip .3 truecm \begin{definition} \label{sl2}
\noindent $\frak sl_2$ (respectively ${\cal U}(\frak sl_2)$) is the Lie algebra (respectively the associative algebra) over ${\mathbb{Q}}$ generated by $\{e,f,h\}$ with relations $$[h,e]=2e,\, [h,f]=-2f,\, [e,f]=h.$$
\noindent $\u_{\Z}(\frak sl_2)$ is the $\Z$-subalgebra of ${\cal U}(\frak sl_2)$ generated by $\{e^{(k)},f^{(k)}| \; k\in\N\}$. \end{definition}
\vskip .3 truecm \begin{theorem}\label{trdc} \noindent Let ${\cal U}^+$, ${\cal U}^-$, ${\cal U}^0$ denote the ${\mathbb{Q}}$-subalgebras of ${\cal U}(\frak sl_2) $ generated respectively by $e$, by $f$, by $h$.
\noindent Then ${\cal U}^+\cong{\mathbb{Q}}[e]$, ${\cal U}^-\cong{\mathbb{Q}}[f]$, ${\cal U}^0\cong{\mathbb{Q}}[h]$ and ${\cal U}(\frak sl_2)\cong{\cal U}^-\otimes{\cal U}^0\otimes{\cal U}^+$; moreover \begin{equation} \label{usldi} \u_{\Z}(\frak sl_2)\cong\Z^{(div)}[f]\otimes_{\Z}\Z^{(bin)}[h]\otimes_{\Z}\Z^{(div)}[e] \end{equation} is an integral form of ${\cal U}(\frak sl_2)$. \begin{proof} Thanks to proposition \ref{bdm}, we just have to study the commutation between $e^{(k)}$ and $f^{(l)}$ for $k,l\in\N$.
\noindent Let us recall the commutation relation \begin{equation} \label{efu} e\exp(fu)=\exp(fu)(e+hu-fu^2) \end{equation} which is a direct application of lemma \ref{cle},iv) and of the relations $[e,f]=h$, $[h,f]=-2f$ and $[f,f]=0$.
\noindent We want to prove that in ${\cal U}(\frak sl_2)[[u,v]]$ \begin{equation} \label{cef} {\rm{exp}}(eu){\rm{exp}}(fv)={\rm{exp}}\Big({fv\over 1+uv}\Big)(1+uv)^h{\rm{exp}}\Big({eu\over 1+uv}\Big). \end{equation} Let $F(u)={\rm{exp}}\Big({fv\over 1+uv}\Big)(1+uv)^h{\rm{exp}}\Big({eu\over 1+uv}\Big).$
\noindent It is obvious that $F(0)={\rm{exp}}(fv)$; hence our claim is equivalent to $${{\rm{d}}\over{\rm{d}}u}F(u)=eF(u).$$ To obtain this result we differentiate using lemma \ref{cle},ix) and then apply formulas (\ref{xvh}) and (\ref{efu}): $${{\rm{d}}\over{\rm{d}}u}F(u)=$$ $$={\rm{exp}}\Big({fv\over 1+uv}\Big)(1+uv)^h{e\over(1+uv)^2}{\rm{exp}}\Big({eu\over 1+uv}\Big)+$$ $$+{\rm{exp}}\Big({fv\over 1+uv}\Big)\Big({hv\over 1+uv}-{fv^2\over(1+uv)^2}\Big)(1+uv)^h{\rm{exp}}\Big({eu\over 1+uv}\Big)=$$ $$={\rm{exp}}\Big({fv\over 1+uv}\Big)\Big(e+{hv\over 1+uv}-{fv^2\over(1+uv)^2}\Big)(1+uv)^h{\rm{exp}}\Big({eu\over 1+uv}\Big)=$$ $$=eF(u).$$ Remarking that $${xu\over 1+v}\in\Z[x][[u,v]],\,\,{\rm{hence}}\,\,\Big({xu\over 1+v}\Big)^{(k)}\in\Z^{(div)}[x][[u,v]]\,\,\forall k\in\N,$$ it follows that the right hand side of (\ref{usldi}) is an integral form of ${\cal U}(\frak sl_2)$ (containing $\u_{\Z}(\frak sl_2)$).
\noindent Finally remark that inverting the exponentials on the right hand side, the formula (\ref{cef}) gives an expression of $(1+uv)^h$ in terms of the divided powers of $e$ and $f$, so that $\Z^{(bin)}[h] \subseteq \u_{\Z}(\frak sl_2)$, which completes the proof.
\end{proof} \end{theorem}
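\noindent As a quick consistency check, comparing the coefficients of $uv$ on both sides of (\ref{cef}) gives $ef=fe+h$, that is the defining relation $[e,f]=h$.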
\vskip .5 truecm \section{The integral form of $\hat{\frak sl_2}$ ($A_1^{(1)}$)} \label{slh} \vskip .5truecm
\noindent The results about $\hat{\frak sl_2}$ and the integral form $\hat{\cal U}_{\Z}$ of its enveloping algebra $\hat{\cal U}$ are due to Garland (see \cite{HG}). Here we simplify the description of the imaginary positive component of $\hat{\cal U}_{\Z}$, proving that it is an algebra of polynomials over $\Z$, and give a compact and complete proof of the assertion that the set given in theorem \ref{trm} is actually a $\Z$-basis of $\hat{\cal U}_{\Z}$. This proof has the advantage, following \cite{DM}, of reducing the long and complicated commutation formulas to compact, easily readable and easily proved ones. It is evident from this approach that the results for $\hat{\frak sl_2}$ are generalizations of those for $\frak sl_2$, so that the commutation formulas arise naturally by recalling the homomorphism \begin{equation} \label{evaluation} ev:\hat{\frak sl_2}=\frak sl_2\otimes{\mathbb{Q}}[t^{\pm 1}]\oplus{\mathbb{Q}} c\to\frak sl_2\otimes{\mathbb{Q}}[t^{\pm 1}]\to\frak sl_2 \end{equation} induced by the evaluation of $t$ at $1$.
\noindent On the other hand these results and the strategy for their proof will be shown to be in turn generalizable to $\hat{{\frak sl_3}}^{\!\!\chi}$.
\noindent As announced in the introduction, the proof of theorem \ref{trm} is based on a few results: proposition \ref{zzk}, proposition \ref{pum}, lemma \ref{limt}, and proposition \ref{exefh}.
\vskip .3 truecm
\begin{definition} \label{hs2} \noindent $\hat{\frak sl_2}$ (respectively $\hat{\cal U}
$) is the Lie algebra (respectively the associative algebra) over ${\mathbb{Q}}$ generated by $\{x_r^+,x_r^-,h_r,c|r\in\Z\}$ with relations $$c\,\,\,{\rm{is\,\,central}},$$ $$[h_r,h_s]=2r\delta_{r+s,0}c,\,\,\,[h_r,x_s^{\pm}]=\pm 2x_{r+s}^{\pm}$$ $$[x_r^+,x_s^+]=0=[x_r^-,x_s^-],$$ $$[x_r^+,x_s^-]=h_{r+s} +r\delta_{r+s,0}c.$$
\noindent Notice that $\{x_r^+,x_r^-|r\in\Z\}$ generates $\hat{\cal U}$.
\noindent $\hat{\cal U}^+$, $\hat{\cal U}^-$, $\hat{\cal U}^0$ are the subalgebras of $\hat{\cal U}$ generated respectively by $\{x_r^+|r\in\Z\}$, $\{x_r^-|r\in\Z\}$, $\{c,h_r|r\in\Z\}$.
\noindent $\hat{\cal U}^{0,+}$, $\hat{\cal U}^{0,-}$, $\hat{\cal U}^{0,0}$, are the subalgebras of $\hat{\cal U}$ (of $\hat{\cal U}^0$) generated respectively by $\{h_r|r>0\}$, $\{h_r|r<0\}$, $\{c,h_0\}$. \end{definition} \vskip .3 truecm
\begin{remark} \label{hefp} \noindent $\hat{\cal U}^+$, $\hat{\cal U}^-$ are (commutative) algebras of polynomials:
$$\hat{\cal U}^+\cong{\mathbb{Q}}[x_r^+|r\in\Z],\,\,\,\hat{\cal U}^-\cong{\mathbb{Q}}[x_r^-|r\in\Z];$$ $\hat{\cal U}^0$ is not commutative: $[h_r,h_{-r}]=2rc$;
\noindent $\hat{\cal U}^{0,+}$, $\hat{\cal U}^{0,-}$, $\hat{\cal U}^{0,0}$, are (commutative) algebras of polynomials:
$$\hat{\cal U}^{0,+}\cong{\mathbb{Q}}[h_r|r>0],\,\,\,\hat{\cal U}^{0,-}\cong{\mathbb{Q}}[h_r|r<0],\,\,\,\hat{\cal U}^{0,0}\cong{\mathbb{Q}}[c,h_0];$$ Moreover we have the following ``triangular'' decompositions: $$\hat{\cal U}\cong\hat{\cal U}^-\otimes\hat{\cal U}^0\otimes\hat{\cal U}^+,$$ $$\hat{\cal U}^0\cong\hat{\cal U}^{0,-}\otimes\hat{\cal U}^{0,0}\otimes\hat{\cal U}^{0,+}.$$ Remark that the images in $\hat{\cal U}$ of $\hat{\cal U}^-\otimes\hat{\cal U}^0$ and $\hat{\cal U}^0\otimes\hat{\cal U}^+$ are subalgebras of $\hat{\cal U}$ and the images of $\hat{\cal U}^{0,-}\otimes\hat{\cal U}^{0,0}$ and $\hat{\cal U}^{0,0}\otimes\hat{\cal U}^{0,+}$ are commutative subalgebras of $\hat{\cal U}^0$.
\end{remark}
\vskip .3 truecm
\begin{definition} \label{hto} \noindent $\hat{\cal U}$ is endowed with the following anti/auto/homo/morphisms:
\noindent $\sigma$ is the antiautomorphism defined on the generators by: $$x_r^+\mapsto x_r^+,\,\,\,x_r^-\mapsto x_r^-,\,\,\,(\Rightarrow h_r\mapsto-h_r,\,\,\,c\mapsto -c);$$ $\Omega$ is the antiautomorphism defined on the generators by: $$x_r^+\mapsto x_{-r}^-,\,\,\,x_r^-\mapsto x_{-r}^+,\,\,\,(\Rightarrow h_r\mapsto h_{-r},\,\,\,c\mapsto c);$$ \noindent $T$ is the automorphism defined on the generators by: $$x_r^+\mapsto x_{r-1}^+,\,\,\,x_r^-\mapsto x_{r+1}^-,\,\,\,(\Rightarrow h_r\mapsto h_r-\delta_{r,0}c,\,\,\,c\mapsto c);$$ \noindent for all $m\in\Z$, $\lambda_m$ is the homomorphism defined on the generators by: $$x_r^+\mapsto x_{mr}^+,\,\,\,x_r^-\mapsto x_{mr}^-,\,\,\,(\Rightarrow h_r\mapsto h_{mr},\,\,\,c\mapsto mc).$$ \end{definition} \vskip .3 truecm
\begin{remark} \label{hti} \noindent $\sigma^2={\rm{id}}_{\hat{\cal U}}$, $\Omega^2={\rm{id}}_{\hat{\cal U}}$, $T$ is invertible of infinite order;
\noindent $\lambda_{-1}^2=\lambda_1={\rm{id}}_{\hat{\cal U}}$; $\lambda_m$ is not invertible if $m\neq\pm 1$; $\lambda_0=ev$ (through the identification $<x_0^+,x_0^-,h_0>\cong<e,f,h>$). \end{remark}
\begin{remark} \label{htc} \vskip .3 truecm \noindent $\sigma\Omega=\Omega\sigma$, $\sigma T=T\sigma$, $\sigma\lambda_m=\lambda_m\sigma$ for all $m\in\Z$;
\noindent $\Omega T=T\Omega$, $\Omega\lambda_m=\lambda_m\Omega$ for all $m\in\Z$;
\noindent $\lambda_m T^{\pm 1}=T^{\pm m}\lambda_m$ for all $m\in\Z$;
\noindent $\lambda_m\lambda_n=\lambda_{mn}$, for all $m,n\in\Z$. \end{remark}
\vskip .3 truecm
\begin{remark} \label{hbs}
\noindent $\sigma\big|_{\hat{\cal U}^{\pm}}={\rm{id}}_{\hat{\cal U}^{\pm}},\,\,\,\sigma(\hat{\cal U}^{0,\pm})=\hat{\cal U}^{0,\pm},\,\,\,\sigma(\hat{\cal U}^{0,0})=\hat{\cal U}^{0,0}$.
\noindent $\Omega(\hat{\cal U}^{\pm})=\hat{\cal U}^{\mp},\,\,\,\Omega(\hat{\cal U}^{0,\pm})=\hat{\cal U}^{0,\mp},\,\,\,\Omega\big|_{\hat{\cal U}^{0,0}}={\rm{id}}_{\hat{\cal U}^{0,0}}$.
\noindent $T(\hat{\cal U}^{\pm})=\hat{\cal U}^{\pm},\,\,\,T\big|_{\hat{\cal U}^{0,\pm}}={\rm{id}}_{\hat{\cal U}^{0,\pm}},\,\,\, T(\hat{\cal U}^{0,0})=\hat{\cal U}^{0,0}$.
\noindent For all $m\in\Z$ $\lambda_m(\hat{\cal U}^{\pm})\subseteq\hat{\cal U}^{\pm},\,\,\,\lambda_m(\hat{\cal U}^0)=\hat{\cal U}^0,\,\,\,\lambda_m(\hat{\cal U}^{0,0})\subseteq\hat{\cal U}^{0,0}$, $$\lambda_m(\hat{\cal U}^{0,\pm})\subseteq \begin{cases}\hat{\cal U}^{0,\pm}&{\rm{if}}\,m>0\cr \hat{\cal U}^{0,\mp}&{\rm{if}}\, m<0\cr \hat{\cal U}^{0,0}&{\rm{if}}\,m=0. \end{cases}$$
\end{remark}
\vskip .3 truecm
\begin{definition}\label{hhuz} \noindent Here we define some $\Z$-subalgebras of $\hat{\cal U}$:
\noindent $\hat{\cal U}_{\Z}$ is the $\Z$-subalgebra of $\hat{\cal U}
$ generated by $\{(x_r^{+})^{(k)},(x_r^{-})^{(k)}|r\in\Z,k\in\N\}$;
\noindent $\hat{\cal U}_{\Z}^{\pm}=\Z^{(div)}[x_r^{\pm}|r\in\Z]$;
\noindent $\hat{\cal U}_{\Z}^{0,0}=\Z^{(bin)}[h_0,c]$;
\noindent $\hat{\cal U}_{\Z}^{0,\pm}=\Z^{(sym)}[h_{\pm r}|r>0]$;
\noindent $\hat{\cal U}_{\Z}^0$ is the $\Z$-subalgebra of $\hat{\cal U}$ generated by $\hat{\cal U}_{\Z}^{0,-}$, $\hat{\cal U}_{\Z}^{0,0}$ and $\hat{\cal U}_{\Z}^{0,+}$.
The notations are those of section \ref{intgpl}. \end{definition}
\vskip .3 truecm \noindent We want to prove that $\hat{\cal U}_{\Z}^0 =\hat{\cal U}_{\Z}^{0,-}\hat{\cal U}_{\Z}^{0,0}\hat{\cal U}_{\Z}^{0,+}$, so that it is an integral form of $\hat{\cal U}^0$, and that $\hat{\cal U}_{\Z}=\hat{\cal U}_{\Z}^-\hat{\cal U}_{\Z}^0\hat{\cal U}_{\Z}^+$, so that $\hat{\cal U}_{\Z}$ is an integral form of $\hat{\cal U}$.
\noindent As in the case of $\frak sl_2$, working in $\hat{\cal U}[[u]]$ (see the notation below) simplifies the proofs enormously and gives a deeper insight into the question.
\vskip .3 truecm
\begin{notation}\label{hgens} \noindent We shall consider the following elements in $\hat{\cal U}[[u]]$: $$x^+(u)=\sum_{r\geq 0}x_r^+u^r=\sum_{r\ge0} T^{-r}u^r(x_0^+),$$ $$x^-(u)=\sum_{r\geq 0}x_{r+1}^-u^r=\sum_{r\ge0} T^{r}u^r(x_1^-),$$ $$h_{\pm}(u)=\sum_{r\geq 1}(-1)^{r-1}{h_{\pm r}\over r}u^r,$$ $$\hat h_{\pm}(u)={\exp}(h_{\pm}(u))=\sum_{r\geq 0}\hat h_{\pm r} u^r.$$ \end{notation}
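\vskip .3 truecm \noindent For instance, taking the logarithmic derivative of $\hat h_+(u)$ one gets the recursion $k\hat h_k=\sum_{r=1}^k(-1)^{r-1}h_r\hat h_{k-r}$ (and similarly for $\hat h_-(u)$), so that $$\hat h_1=h_1,\ \ \hat h_2={h_1^2-h_2\over 2},\ \ \hat h_3={h_1^3-3h_1h_2+2h_3\over 6};$$ formally, the $\hat h_k$'s are the elementary symmetric functions associated to the ``power sums'' $h_r$.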
\vskip .3 truecm{} \begin{remark} \label{nev} \noindent Notice that $ev \circ T=ev$ and $$ev(x^+(-u))=ev\left({{1}\over{1+T^{-1}u}}x_0^+\right)={e\over 1+u},$$ $$ev(x^-(-u))=ev\left({{T}\over{1+Tu}}x_0^-\right)={f\over 1+u},$$ $$ev(h_{\pm}(u))=h{\rm{ln}}(1+u),$$ $$ev(\hat h_{\pm}(u))=(1+u)^h.$$ \end{remark} \vskip .3 truecm
\begin{remark} \label{stuz} Here we list some obvious remarks.
\noindent i) $\hat{\cal U}_{\Z}^{\pm}\subseteq\hat{\cal U}_{\Z}\cap\hat{\cal U}^{\pm}$ and $\hat{\cal U}_{\Z}$ is the $\Z$-subalgebra of $\hat{\cal U}$ generated by $\hat{\cal U}_{\Z}^+\cup\hat{\cal U}_{\Z}^-$;
\noindent ii) $\hat{\cal U}_{\Z}^{\pm}$, $\hat{\cal U}_{\Z}^{0,0}$, $\hat{\cal U}_{\Z}^{0,\pm}$ and $\hat{\cal U}_{\Z}^{0,\pm}\hat{\cal U}_{\Z}^{0,0}=\hat{\cal U}_{\Z}^{0,0}\hat{\cal U}_{\Z}^{0,\pm}$ are integral forms respectively of $\hat{\cal U}^{\pm}$, $\hat{\cal U}^{0,0}$, $\hat{\cal U}^{0,\pm}$ and $\hat{\cal U}^{0,\pm}\hat{\cal U}^{0,0}=\hat{\cal U}^{0,0}\hat{\cal U}^{0,\pm}$;
\noindent iii) $\hat{\cal U}_{\Z}$ and $\hat{\cal U}_{\Z}^{0,0}$ are stable under $\sigma$, $\Omega$, $T^{\pm 1}$, $\lambda_m $ for all $m\in\Z$;
\noindent iv) $\hat{\cal U}_{\Z}^{\pm}$ is stable under $\sigma$, $T^{\pm 1}$, $\lambda_m $ for all $m\in\Z$ and $\Omega(\hat{\cal U}_{\Z}^{\pm})=\hat{\cal U}_{\Z}^{\mp}$;
\noindent v) $\hat{\cal U}_{\Z}^{0,\pm}$ is stable under $\sigma$, $T^{\pm 1}$ and $\Omega(\hat{\cal U}_{\Z}^{0,\pm})=\lambda_{-1}(\hat{\cal U}_{\Z}^{0,\pm})=\hat{\cal U}_{\Z}^{0,\mp}$: more precisely $$\sigma(\hat h_{\pm}(u))\!=\!\hat h_{\pm}(u)^{-1},\,\Omega(\hat h_{\pm}(u))\!=\!\lambda_{-1}(\hat h_{\pm}(u))\!=\!\hat h_{\mp}(u),\,T^{\pm 1}(\hat h_{\pm}(u))\!=\!\hat h_{\pm}(u);$$ vi) for $m\in\Z$ $$\lambda_m(\hat{\cal U}_{\Z}^{0,\pm})\subseteq \begin{cases}\hat{\cal U}_{\Z}^{0,\pm}&{\rm{if}}\,m>0\cr \hat{\cal U}_{\Z}^{0,\mp}&{\rm{if}}\, m<0\cr \hat{\cal U}_{\Z}^{0,0}&{\rm{if}}\,m=0, \end{cases}$$ thanks to v), to proposition \ref{tmom} and to remarks \ref{htc} and \ref{nev}. \end{remark}
\vskip.3 truecm \begin{remark} \label{tmfv} \noindent The elements $\hat h_k$'s with $k>0$ generate the same $\Z$-subalgebra of $\hat{\cal U}$ as the elements $\Lambda_k$'s ($k\geq 0$) defined in \cite{HG}.
\noindent Indeed let $$\sum_{n\geq 0}p_nu^n=P(u)=\hat h_+(-u)^{-1};$$ then
remarks \ref{srinv},1,ii) and \ref{funtorialita},iii) imply that $\Z[\hat h_k|k>0]=\Z[p_{n}|n>0]$; but $${{\rm{d}}\over{\rm{d}}u}P(u)=P(u)\sum_{r>0}h_ru^{r-1},$$ that is $$p_0=1,\ \ p_n={1\over n}\sum_{r=1}^nh_rp_{n-r}\ \forall n>0,$$ hence $p_n=\Lambda_{n-1}$ $\forall n\geq 0$.
\noindent On the other hand applying $\lambda_m$ we get $$\lambda_m(p_0)=1,\ \ \lambda_m(p_n)={1\over n}\sum_{r=1}^nh_{rm}\lambda_m(p_{n-r}),$$ so that $\lambda_m(p_n)=\lambda_m(\Lambda_{n-1})=\Lambda_{n-1}{(\xi(m))}$ (see \cite{HG}).
\end{remark}
\vskip .3 truecm
\begin{remark} \label{hrs} \noindent Remark that for all $r\in\Z$ the subalgebra of $\hat{\frak sl_2}$ generated by $$\{x_r^+,x_{-r}^-,h_0+rc\}$$ maps isomorphically onto $\frak sl_2$ through the evaluation homomorphism $ev$ (see formula \ref{evaluation}). On the other hand for each $r\in\Z$ there is an injection ${\cal U}(\frak sl_2)\to\hat{\cal U}$: $$e\mapsto x_r^+,\,\,f\mapsto x_{-r}^-,\,\,h\mapsto h_0+rc.$$ In particular theorem \ref{trdc} implies that the elements ${h_0+rc\choose k}$ belong to $\hat{\cal U}_{\Z}$ for all $r\in\Z, k\in\N$ (thus, remarking that the elements ${c\choose k}$ are central and recalling example \ref{binex}, we get that $\hat{\cal U}_{\Z}^{0,0}\subseteq\hat{\cal U}_{\Z}$) and proposition \ref{bdm} implies that $\hat{\cal U}_{\Z}^{0,0}\hat{\cal U}_{\Z}^+$ and $\hat{\cal U}_{\Z}^-\hat{\cal U}_{\Z}^{0,0}$ are integral forms respectively of $\hat{\cal U}^{0,0}\hat{\cal U}^+$ and $\hat{\cal U}^-\hat{\cal U}^{0,0}$. \end{remark} \vskip .3 truecm
\vskip .3 truecm \begin{proposition} \label{zzk} \noindent The following identity holds in $\hat{\cal U}$:
$$\hat h_+(u)\hat h_-(v)=\hat h_-(v)(1-uv)^{-2c}\hat h_+(u).$$
\noindent $\hat{\cal U}_{\Z}^0=\hat{\cal U}_{\Z}^{0,-}\hat{\cal U}_{\Z}^{0,0}\hat{\cal U}_{\Z}^{0,+}$: it is an integral form of $\hat{\cal U}^0$.
\begin{proof}
\noindent Since $[h_r,h_s]=2r\delta_{r+s,0}c$, the claim is proposition \ref{heise} with $m\!=\!2$, $l\!=\!0$. \end{proof} \end{proposition}
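\vskip .3 truecm \noindent As a quick check, comparing the coefficients of $uv$ on the two sides of the identity of proposition \ref{zzk} gives $\hat h_1\hat h_{-1}=\hat h_{-1}\hat h_1+2c$, that is $[h_1,h_{-1}]=2c$ (recall that $\hat h_{\pm 1}=h_{\pm 1}$), in accordance with the defining relations.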
\vskip .3 truecm \begin{proposition}\label{pum} \noindent The following identity holds in $\hat{\cal U}$: \begin{equation} \label{xup} x_0^+\hat h_+(u)= \hat h_+(u)(1+T^{-1}u)^{-2}(x_0^+). \end{equation}
\noindent Hence for all $k\in\N$ \begin{equation} \label{xup2} (x_0^+)^{(k)}\hat h_+(u)= \hat h_+(u)((1+T^{-1}u)^{-2}(x_0^+))^{(k)}. \end{equation} \begin{proof} The claim follows from proposition \ref{hh} with $m_1=2$, $m_d=0 \; \forall d>1$, and from \ref{divpoweq}.
\end{proof} \end{proposition} \vskip .3 truecm
\begin{remark} The identity (\ref{xup}) can be written as $$x_0^+\hat h_+(u)=\hat h_+(u){{\rm{d}}\over{\rm{d}}u}(ux^+(-u)).$$ Indeed $$(1+T^{-1}u)^{-2}(x_0^+)=\sum_{r\in\N}(-1)^r(r+1)x_r^+u^r={{\rm{d}}\over{\rm{d}}u}(ux^+(-u)).$$ \end{remark}
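\noindent As a quick check of (\ref{xup}), the coefficient of $u$ gives $x_0^+h_1=h_1x_0^+-2x_1^+$, that is $[h_1,x_0^+]=2x_1^+$ (recall that $\hat h_1=h_1$), in accordance with the defining relations.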
\begin{remark} Remark that the identity (\ref{xup2}) is the affine version of \begin{equation} \label{xup3} e^{(k)}(1+u)^h=(1+u)^{h}\left({e\over (1+u)^2}\right)^{(k)} \end{equation} (see equation (\ref{xvh2})); indeed $ev$ maps (\ref{xup2}) to (\ref{xup3}). \end{remark}
\begin{corollary}\label{cum} \noindent $\hat{\cal U}_{\Z}^+\hat{\cal U}_{\Z}^{0,\pm}\subseteq\hat{\cal U}_{\Z}^{0,\pm}\hat{\cal U}_{\Z}^+$ and $\hat{\cal U}_{\Z}^{\pm}\hat{\cal U}_{\Z}^0=\hat{\cal U}_{\Z}^0\hat{\cal U}_{\Z}^{\pm}$.
\noindent Then $\hat{\cal U}_{\Z}^0\hat{\cal U}_{\Z}^+$ and $\hat{\cal U}_{\Z}^-\hat{\cal U}_{\Z}^0$ are integral forms respectively of $\hat{\cal U}^0\hat{\cal U}^+$ and $\hat{\cal U}^-\hat{\cal U}^0$. \begin{proof} Applying $T^{-r}$ to (\ref{xup2}), we find that $(x_r^+)^{(k)}\hat h_+(u)\in\hat h_+(u)\hat{\cal U}_{\Z}^+[[u]]$ $\forall r\in\Z,k\in\N$, hence $\hat{\cal U}_{\Z}^+\hat h_+(u)\subseteq\hat h_+(u)\hat{\cal U}_{\Z}^+[[u]]$ and $\hat{\cal U}_{\Z}^+\hat{\cal U}_{\Z}^{0,+}\subseteq\hat{\cal U}_{\Z}^{0,+}\hat{\cal U}_{\Z}^+$. From this, applying $\lambda_{-1}$ we get $\hat{\cal U}_{\Z}^+\hat{\cal U}_{\Z}^{0,-}\subseteq\hat{\cal U}_{\Z}^{0,-}\hat{\cal U}_{\Z}^+$, hence $\hat{\cal U}_{\Z}^+\hat{\cal U}_{\Z}^0\subseteq\hat{\cal U}_{\Z}^0\hat{\cal U}_{\Z}^+$ thanks to remark \ref{hrs}. Finally applying $\Omega$ we obtain that $\hat{\cal U}_{\Z}^0\hat{\cal U}_{\Z}^-\subseteq\hat{\cal U}_{\Z}^-\hat{\cal U}_{\Z}^0$ and applying $\sigma$ we get the reverse inclusions. \end{proof} \end{corollary}
\vskip .3 truecm
We are now left to prove that $\hat{\cal U}_{\Z}^+\hat{\cal U}_{\Z}^-\subseteq\hat{\cal U}_{\Z}^-\hat{\cal U}_{\Z}^0\hat{\cal U}_{\Z}^+$ and that $\hat{\cal U}_{\Z}^0\subseteq\hat{\cal U}_{\Z}$.
\noindent To this end we study the commutation relations between $(x_r^+)^{(k)}$ and $(x_s^-)^{(l)}$, or equivalently between ${\rm{exp}}(x_r^+u)$ and ${\rm{exp}}(x_s^-v)$. \vskip .3 truecm \begin{remark}\label{exev} \noindent Remark \ref{hrs} implies that ${\rm{exp}}(x_r^+u){\rm{exp}}(x_{-r}^-v)\in\hat{\cal U}_{\Z}^-\hat{\cal U}_{\Z}^0\hat{\cal U}_{\Z}^+$ for all $r\in\Z$.
\noindent In order to prove a similar result for ${\rm{exp}}(x_r^+u){\rm{exp}}(x_s^-v)$ when $r+s\neq 0$ remark that in general $${\rm{exp}}(x_r^+u){\rm{exp}}(x_s^-v)=T^{-r}\lambda_{r+s}({\rm{exp}}(x_0^+u){\rm{exp}}(x_1^-v)),$$ so that remark \ref{stuz},iv),v),vi) allows us to reduce to the case $r=0$, $s=1$.
\noindent This case will turn out to be enough also to prove that $\hat{\cal U}_{\Z}^0\subseteq\hat{\cal U}_{\Z}$. \end{remark}
\vskip .3 truecm \begin{remark} \label{hev} \noindent In the study of the commutation relations in $\hat{\cal U}_{\Z}$ remark that $$ev(\exp(x_0^+u)\exp(x_1^-v))=\exp(eu)\exp(fv)$$ and that straightening $\exp(x_0^+u)\exp(x_1^-v)$ through the triangular decomposition $\hat{\cal U}\cong\hat{\cal U}^-\otimes\hat{\cal U}^0\otimes\hat{\cal U}^+$ we get an element of $\hat{\cal U}[[u,v]]$ whose coefficients involve $x_{r+1}^-,h_{r+1},\, x_r^+$ with $r\geq 0$ and whose image through $ev$ is $$\exp\Big({fv\over 1+uv}\Big)(1+uv)^h\exp\Big({eu\over 1+uv}\Big)$$ (see remark \ref{nev}).
\noindent Vice versa, once we have such an expression for $\exp(x_0^+u)\exp(x_1^-v)$, applying $T^{-r}\lambda_{r+s}$ we can deduce from it the identity (\ref{cef}) and the expression for $\exp(x_r^+u)\exp(x_s^-v)$ for all $r,s\in\Z$ (also in the case $r+s=0$).
\noindent Remark that $${\rm{exp}}(vx^-(-uv))\hat h_+(uv){\rm{exp}}(ux^+(-uv))$$ is an element of $\hat{\cal U}[[u,v]]$ which has the required properties (see remark \ref{nev}) and belongs to $\hat{\cal U}_{\Z}^-\hat{\cal U}_{\Z}^0\hat{\cal U}_{\Z}^+$.
\vskip .3 truecm \end{remark}
Our aim is to prove that $$\exp(x_0^+u)\exp(x_1^-v)=\exp(vx^-(-uv))\hat h_+(uv)\exp(ux^+(-uv)).$$
\vskip .3 truecm \begin{lemma} \label{limt} \noindent In $\hat{\cal U}[[u,v]]$ we have $$x_0^+\exp(vx^-(-uv))=\exp(vx^-(-uv))\Big(x_0^++{{\rm{d}}h_+(uv)\over{\rm{d}}u}+{{\rm{d}}vx^-(-uv)\over{\rm{d}}u}\Big).$$ \begin{proof} The claim follows from lemma \ref{cle},iv) remarking that $$[x_0^+,vx^-(-uv)]=v\sum_{r\in\N}h_{r+1}(-uv)^r={{\rm{d}}\over{\rm{d}}u}\sum_{r\in\N}{h_{r+1}\over r+1}(-1)^{r}(uv)^{r+1}={{\rm{d}}h_+(uv)\over{\rm{d}}u},$$ $$\Big[{{\rm{d}}h_+(uv)\over{\rm{d}}u},vx^-(-uv)\Big]=-2v^2\sum_{r,s\in\N}x_{r+s+2}^-(-uv)^{r+s}=$$ $$=-2v^2\sum_{r\in\N}(r+1)x_{r+2}^-(-uv)^r=2{{\rm{d}}vx^-(-uv)\over{\rm{d}}u}$$ and $$\Big[{{\rm{d}}vx^-(-uv)\over{\rm{d}}u},vx^-(-uv)\Big]=0.$$ \end{proof} \end{lemma}
\vskip .3 truecm \begin{proposition}\label{exefh} \noindent In $\hat{\cal U}[[u,v]]$ we have $${\rm{exp}}(x_0^+u){\rm{exp}}(x_1^-v)={\rm{exp}}(vx^-(-uv))\hat h_+(uv){\rm{exp}}(ux^+(-uv)).$$ \begin{proof} Let $F(u)={\rm{exp}}(vx^-(-uv))\hat h_+(uv){\rm{exp}}(ux^+(-uv))$. It is clear that $F(0)={\rm{exp}}(x_1^-v)$, so that it is enough to prove that $${{\rm{d}}\over{\rm{d}}u}F(u)=x_0^+F(u).$$ Remark that, thanks to the derivation rules (lemma \ref{cle},ix)), to proposition \ref{pum}, and to lemma \ref{limt}, we have: $${{\rm{d}}\over{\rm{d}}u}F(u)={\rm{exp}}(vx^-(-uv))\hat h_+(uv){{{\rm d}}\over{\rm{d}}u}(ux^+(-uv)){\rm{exp}}(ux^+(-uv))+$$ $$+{\rm{exp}}(vx^-(-uv))\Big({{{\rm d}}\over{\rm{d}}u}h_+(uv)+{{{\rm d}}\over{\rm{d}}u}(vx^-(-uv))\Big)\hat h_+(uv){\rm{exp}}(ux^+(-uv))=$$ $$={\rm{exp}}(vx^-(-uv))\Big(x_0^++{{{\rm d}}(h_+(uv)+vx^-(-uv))\over{\rm{d}}u}\Big)\hat h_+(uv){\rm{exp}}(ux^+(-uv))=$$ $$=x_0^+{\rm{exp}}(vx^-(-uv))\hat h_+(uv){\rm{exp}}(ux^+(-uv))=x_0^+F(u).$$ \vskip .3 truecm \end{proof} \end{proposition}
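\noindent As a quick check, comparing the coefficients of $uv$ on the two sides of the identity of proposition \ref{exefh} gives $$x_0^+x_1^-=x_1^-x_0^++\hat h_1=x_1^-x_0^++h_1,$$ that is $[x_0^+,x_1^-]=h_1$, as prescribed by the defining relations.
\vskip .3 truecm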
\begin{corollary} \label{tfin} \noindent $\hat{\cal U}_{\Z}^0\subseteq\hat{\cal U}_{\Z}$. \begin{proof} That $\hat{\cal U}_{\Z}^{0,+}\subseteq\hat{\cal U}_{\Z}$ is a consequence of proposition \ref{exefh} inverting the exponentials (see the proof of theorem \ref{trdc}), which also implies (applying $\Omega$) that $\hat{\cal U}_{\Z}^{0,-}\subseteq\hat{\cal U}_{\Z}$; the claim then follows thanks to remark \ref{hrs}. \end{proof} \end{corollary}
\begin{proposition} \label{strutmodulo} $\hat{\cal U}_{\Z}^-\hat{\cal U}_{\Z}^0\hat{\cal U}_{\Z}^+$ is a $\Z$-subalgebra of $\hat{\cal U}$ (hence $\hat{\cal U}_{\Z}=\hat{\cal U}_{\Z}^-\hat{\cal U}_{\Z}^0\hat{\cal U}_{\Z}^+$).
\begin{proof} We want to prove that $\hat{\cal U}_{\Z}^-\hat{\cal U}_{\Z}^0\hat{\cal U}_{\Z}^+$ (which is obviously a $\hat{\cal U}_{\Z}^-$-module and, by corollary \ref{cum}, a $\hat{\cal U}_{\Z}^0$-module) is also a $\hat{\cal U}_{\Z}^+$-module, or equivalently that $ \hat{\cal U}_{\Z}^+\hat{\cal U}_{\Z}^-\subseteq\hat{\cal U}_{\Z}^-\hat{\cal U}_{\Z}^0\hat{\cal U}_{\Z}^+$.
\noindent By proposition \ref{exefh} together with remark \ref{exev}, formula \ref{cef} and remark \ref{hrs} we have that $ y_+y_-\in\hat{\cal U}_{\Z}^-\hat{\cal U}_{\Z}^0\hat{\cal U}_{\Z}^+$ in the particular case when $y_+=(x_r^+)^{(k)}$ and $y_-=(x_s^-)^{(l)}$, thus we just need to perform the correct induction to deal with the general $y_{\pm}\in\hat{\cal U}_{\Z}^{\pm}$.
\noindent Remark that setting $$deg(x_r^{\pm})=\pm 1,\ \ deg(h_r)=deg(c)=0$$ induces a $\Z$-gradation on $\hat{\cal U}$ (since the relations defining $\hat{\cal U}$ are homogeneous) and on $\hat{\cal U}_{\Z}$ (since its generators are homogeneous), which is preserved by $\sigma$, $T^{\pm 1}$ and $\lambda_m$ $\forall m\in\Z$; in particular it induces $\N$-gradations $$\hat{\cal U}^{\pm}=\bigoplus_{k\in\N}\hat{\cal U}_{\pm k}^{\pm}, \qquad \qquad \hat{\cal U}_{\Z}^{\pm}=\bigoplus_{k\in\N}\hat{\cal U}_{\Z,\pm k}^{\pm}$$ with the properties that $$\Omega(\hat{\cal U}_{\Z,\pm k}^{\pm})= \hat{\cal U}_{\Z,\mp k}^{\mp},$$ $$\hat{\cal U}_{\Z,k}^{+}=\sum_{n\in\N\atop k_1+...+k_n=k}\Z(x_{r_1}^{+})^{(k_1)}\cdot ... \cdot(x_{r_n}^{+})^{(k_n)}=\sum_{r\in\Z}\Z(x_r^{+})^{(k)}+\sum_{k_1,k_2>0\atop k_1+k_2=k}\hat{\cal U}_{\Z, k_1}^{+}\hat{\cal U}_{\Z, k_2}^{+},$$ $$\hat{\cal U}_{\Z,k}^{+}\hat{\cal U}_{\Z}^0=\hat{\cal U}_{\Z}^0\hat{\cal U}_{\Z,k}^{+}\ \ {\rm{(because}}\ \hat{\cal U}_{k}\hat{\cal U}^0=\hat{\cal U}^0\hat{\cal U}_{k}\ {\rm{and}}\ \hat{\cal U}_{\Z}^{+}\hat{\cal U}_{\Z}^0=\hat{\cal U}_{\Z}^0\hat{\cal U}_{\Z}^{+}{\rm{)}}$$ and $$[\hat{\cal U}_k^+,\hat{\cal U}_{-l}^-]\subseteq\sum_{m>0}\hat{\cal U}_{-l+m}^-\hat{\cal U}^0\hat{\cal U}_{k-m}^+\ \ \forall k,l\in\N.$$ We want to prove that \begin{equation}\label{uzruzs}\hat{\cal U}_{\Z,k}^+\hat{\cal U}_{\Z,-l}^-\subseteq\sum_{m\geq 0}\hat{\cal U}_{\Z,-l+m}^-\hat{\cal U}_{\Z}^0\hat{\cal U}_{\Z,k-m}^+\ \ \forall k,l\in\N,\end{equation} the claim being obvious for $k=0$ or $l=0$.
\noindent Suppose $k\neq 0$, $l\neq 0$ and the claim true for all $(\tilde k,\tilde l)\neq (k,l)$ with $\tilde k\leq k$ and $\tilde l\leq l$. Then:
a) proposition \ref{exefh} together with remark \ref{exev}, formula (\ref{cef}) and remark \ref{hrs} imply that $$(x_r^+)^{(k)}(x_s^-)^{(l)}\in\sum_{m\geq 0}\hat{\cal U}_{\Z,-l+m}^-\hat{\cal U}_{\Z}^0\hat{\cal U}_{\Z,k-m}^+\ \ \forall r,s\in\Z;$$
b) if $k_1,k_2>0$ are such that $k_1+k_2=k$ or $l_1,l_2>0$ are such that $l_1+l_2=l$, then $$\hat{\cal U}_{\Z,k_1}^+\hat{\cal U}_{\Z,k_2}^+\hat{\cal U}_{\Z,-l}^-\subseteq\sum_{m_2\geq 0}\hat{\cal U}_{\Z,k_1}^+\hat{\cal U}_{\Z,-l+m_2}^-\hat{\cal U}_{\Z}^0\hat{\cal U}_{\Z,k_2-m_2}^+\subseteq$$ $$\subseteq\sum_{m_1,m_2\geq 0}\hat{\cal U}_{\Z,-l+m_2+m_1}^-\hat{\cal U}_{\Z}^0\hat{\cal U}_{\Z,k_1-m_1}^+\hat{\cal U}_{\Z}^0\hat{\cal U}_{\Z,k_2-m_2}^+=$$ $$=\sum_{m_1,m_2\geq 0}\hat{\cal U}_{\Z,-l+m_2+m_1}^-\hat{\cal U}_{\Z}^0\hat{\cal U}_{\Z,k_1-m_1}^+\hat{\cal U}_{\Z,k_2-m_2}^+\subseteq\sum_{m\geq 0}\hat{\cal U}_{\Z,-l+m}^-\hat{\cal U}_{\Z}^0\hat{\cal U}_{\Z,k-m}^+$$ and symmetrically applying $\Omega$ $$\hat{\cal U}_{\Z,k}^+\hat{\cal U}_{\Z,-l_1}^-\hat{\cal U}_{\Z,-l_2}^-= \Omega(\hat{\cal U}_{\Z,l_2}^+\hat{\cal U}_{\Z,l_1}^+\hat{\cal U}_{\Z,-k}^-)\subseteq$$ $$\subseteq\Omega (\sum_{m\ge 0}\hat{\cal U}_{\Z,-k+m}^-\hat{\cal U}_{\Z}^0\hat{\cal U}_{\Z,l-m}^+)=\sum_{m\geq 0}\hat{\cal U}_{\Z,-l+m}^-\hat{\cal U}_{\Z}^0\hat{\cal U}_{\Z,k-m}^+.$$ (\ref{uzruzs}) follows from a) and b).
\end{proof} \end{proposition}
\vskip .3 truecm \begin{theorem}\label{trm}
\noindent The $\Z$-subalgebra $\hat{\cal U}_{\Z}$ of $\hat{\cal U}$ generated by $$\{(x_r^{\pm})^{(k)}|r\in\Z,k\in\N\}$$ is an integral form of $\hat{\cal U}$.
\noindent More precisely $$\hat{\cal U}_{\Z}\cong\hat{\cal U}_{\Z}^-\otimes\hat{\cal U}_{\Z}^0\otimes\hat{\cal U}_{\Z}^+\cong\hat{\cal U}_{\Z}^-\otimes\hat{\cal U}_{\Z}^{0,-}\otimes\hat{\cal U}_{\Z}^{0,0}\otimes\hat{\cal U}_{\Z}^{0,+}\otimes\hat{\cal U}_{\Z}^+$$ and a $\Z$-basis of $\hat{\cal U}_{\Z}$ is given by the product $$B^-B^{0,-}B^{0,0}B^{0,+}B^+$$ where $B^{\pm}$, $B^{0,\pm}$ and $B^{0,0}$ are the $\Z$-bases respectively of $\hat{\cal U}_{\Z}^{\pm}$, $\hat{\cal U}_{\Z}^{0,\pm}$ and $\hat{\cal U}_{\Z}^{0,0}$ given as follows:
$$B^{\pm}=\Big\{({\bf{x}}^{\pm})^{({\bf{k}})}=\prod_{r\in\Z}(x_r^{\pm})^{(k_r)} \;|\; {\bf{k}}:\Z\to\N\,\, {\rm{is\, finitely\, supported}}\Big\}$$
$$B^{0,\pm}=\Big\{{\hat{{\bf{h}}}_{\pm}^{\bf{k}}}=\prod_{l\in\Z_+}{\hat h_{\pm l}^{k_l}}\;|\; {\bf{k}}:\Z_+\to\N\,\,{\rm{is\, finitely\, supported}}\Big\}$$
$$B^{0,0}=\Big\{{h_0\choose k}{c\choose\tilde k} \;|\; k,\tilde k\in\N\Big\}.$$ \end{theorem}
\vskip .5 truecm
\section{The integral form of $\hat{{\frak sl_3}}^{\!\!\chi}$ ($A_2^{(2)}$) }\label{ifa22} \vskip .5truecm
\noindent In this section we describe the integral form $\tilde{\cal U}_{\Z}$ of the enveloping algebra $\tilde{\cal U}$ of the Kac-Moody algebra of type $A_2^{(2)}$ generated by the divided powers of the Drinfeld generators $x_r^{\pm}$; unlike the untwisted case, this integral form is strictly smaller than the one (studied in \cite{DM}) generated by the divided powers of the Chevalley generators $e_0$, $e_1$, $f_0$, $f_1$ (see appendix \ref{appendC}).
\noindent However, the construction of a $\Z$-basis of $\tilde{\cal U}_{\Z}$ follows the idea of the analogous construction in the case $A_1^{(1)}$, seen in the previous section; this method allows us to overcome the technical difficulties arising in the case $A_2^{(2)}$, difficulties which would otherwise seem overwhelming.
\noindent The commutation relations needed for our aim can be partially deduced from the case $A_1^{(1)}$: indeed, exploiting some embeddings of $\hat{\frak sl_2}$ into $\hat{{\frak sl_3}}^{\!\!\chi}$ (see remark \ref{emgg}), the commutation relations in $\hat{\cal U}$ can be directly translated into a class of commutation relations in $\tilde{\cal U}$ (see corollary \ref{czzp}, proposition \ref{czzq} and appendix \ref{appendA} for more details).
\noindent Yet, there are some differences between $A_1^{(1)}$ and $A_2^{(2)}$.
\noindent First of all, the real (positive and negative) components of $\tilde{\cal U}$ are no longer commutative (this is well known: it happens in all the affine cases different from $A_1^{(1)}$, as well as in all the finite cases different from $A_1$), hence the study of their integral forms requires some easy additional observations (see lemma \ref{zkp}).
\noindent The non-commutativity of the real components of $\tilde{\cal U}$ makes the general commutation formula between the exponentials of positive and negative Drinfeld generators technically more complicated to compute and express than in the case of $\hat{\frak sl_2}$; nevertheless, general and explicit compact formulas can be given in this case too, always thanks to the exponential notation. As already seen, the simplification provided by the exponential approach lies essentially in lemma \ref{cle},iv), which allows one to perform the computations in $\tilde{\cal U}$ by reducing them to much simpler computations in $\hat{{\frak sl_3}}^{\!\!\chi}$, and even, thanks to the symmetries highlighted in definition \ref{tto}, in the Lie subalgebra $L=\hat{{\frak sl_3}}^{\!\!\chi} \cap({\frak sl_3}\otimes{\mathbb{Q}}[t])\subseteq\hat{{\frak sl_3}}^{\!\!\chi}$ (see definition \ref{sottoalgebraL}). Recognizing a ${\mathbb{Q}}[w]$-module structure on each direct summand of $L=L^-\oplus L^0\oplus L^+$ and unifying them into a ${\mathbb{Q}}[w]$-module structure on $L$ (see definition \ref{qwmodulo}) provides a further simplification in the notation: one could have done the same construction for $\hat{\frak sl_2}$, but we feel that in the case of $\hat{\frak sl_2}$ it would be unnecessary, and that on the other hand it is useful to present both formulations.
\noindent The most remarkable difference with respect to $A_1^{(1)}$ on the one hand and to Mitzman's integral form on the other hand lies in the description of the generators of the imaginary (positive and negative) components; it may be surprising that they are not what one would expect: $\tilde{\cal U}_{\Z}^{0,+}\neq\Z^{(sym)}[h_r|r>0]$. More precisely (see remark \ref{hdiversi} and theorem \ref{trmA22})
$$\tilde{\cal U}_{\Z}^{0,+}\not\subseteq\Z^{(sym)}[h_r|r>0]\ \ {\rm{ and}}\ \ \Z^{(sym)}[h_r|r>0]\not\subseteq\tilde{\cal U}_{\Z}^{0,+};$$ as we shall show, we need to somehow ``deform'' the $h_r$'s (by changing some of their signs) to get a basis of $\tilde{\cal U}_{\Z}^{0,+}$ by the $(sym)$-construction (see definition \ref{thuz}, example \ref{rvsf} and remark \ref{funtorialita}).
\noindent Notice that in order to prove that $\tilde{\cal U}_{\Z}$ is an integral form of $\tilde{\cal U}$ and that $B$ is a $\Z$-basis of $\tilde{\cal U}_{\Z}$ (theorem \ref{trmA22}) it is not necessary to find explicitly all the commutation formulas between the basis elements. In any case, for completeness, we shall collect them in the appendix \ref{appendA}.
\vskip .3 truecm
\begin{definition} \label{a22}
\noindent ${\hat{{\frak sl_3}}^{\!\!\chi}}$ (respectively $\tilde{\cal U}$) is the Lie algebra (respectively the associative algebra) over ${\mathbb{Q}}$ generated by $\{c,h_r,x_r^{\pm},X_{2r+1}^{\pm}|r\in\Z\}$ with relations $$c\,\,\,{\rm{is\,\,central}}$$ $$[h_r,h_s]=\delta_{r+s,0}2r(2+(-1)^{r-1})c$$ $$[h_r,x_s^{\pm}]=\pm 2(2+(-1)^{r-1})x_{r+s}^{\pm}$$
$$[h_r,X_s^{\pm}]=\begin{cases}\pm 4X_{r+s}^{\pm}&{\rm{if}}\ 2|r
\\ 0&{\rm{if}}\ 2\not|r \end{cases}\leqno{(s\ {\rm{odd}})}$$
$$[x_r^{\pm},x_s^{\pm}]=\begin{cases}0&{\rm{if}}\ 2|r+s
\\ \pm(-1)^sX_{r+s}^{\pm}&{\rm{if}}\ 2\not|r+s \end{cases}$$ $$[x_r^{\pm},X_s^{\pm}]=[X_r^{\pm},X_s^{\pm}]=0$$ $$[x_r^+,x_s^-]=h_{r+s} +\delta_{r+s,0}rc$$ $$[x_r^{\pm},X_s^{\mp}]=\pm(-1)^r4x_{r+s}^{\mp}\leqno{(s\ {\rm{odd}})}$$ $$[X_r^+,X_s^-]=8h_{r+s} +4\delta_{r+s,0}rc\leqno{(r,s\ {\rm{odd}})}$$ \end{definition}
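\vskip .3 truecm \noindent For instance, the relations above give $[x_0^+,x_1^+]=-X_1^+$, $[x_1^+,X_1^-]=-4x_2^-$ and $[X_1^+,X_{-1}^-]=8h_0+4c$.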
\noindent Notice that $\{x_r^+,x_r^-|r\in\Z\}$ generates $\tilde{\cal U}$.
\noindent Moreover $\{c,h_r,x_r^{\pm},X_{2r+1}^{\pm}|r\in\Z\}$ is a basis of $\hat{{\frak sl_3}}^{\!\!\chi}$; hence the ordered monomials in these elements (with respect to any total ordering of the basis) form a PBW-basis of $\tilde{\cal U}$.
\noindent $\tilde{\cal U}^+$, $\tilde{\cal U}^-$, $\tilde{\cal U}^0$ are the subalgebras of $\tilde{\cal U}$ generated respectively by $$\{x_r^+|r\in\Z\},\ \{x_r^-|r\in\Z\},\ \{c,h_r|r\in\Z\}.$$
\noindent $\tilde{\cal U}^{\pm,0}$, $\tilde{\cal U}^{\pm,1}$ and $\tilde{\cal U}^{\pm,c}$ are the subalgebras of $\tilde{\cal U}^{\pm}$ generated respectively by
$$\{x_r^{\pm}|r\equiv 0\ (mod\ 2)\},\ \{x_r^{\pm}|r\equiv 1\ (mod\ 2)\}\ {\rm{and}}\ \{X_{2r+1}^{\pm}|r\in\Z\}.$$
\noindent $\tilde{\cal U}^{0,+}$, $\tilde{\cal U}^{0,-}$, $\tilde{\cal U}^{0,0}$, are the subalgebras of $\tilde{\cal U}$ (of $\tilde{\cal U}^0$) generated respectively by $$\{h_r|r>0\},\ \{h_r|r<0\},\ \{c,h_0\}.$$
\vskip .3 truecm \noindent The following remark is a consequence of trivial applications of the PBW-theorem to different subalgebras of $\hat{{\frak sl_3}}^{\!\!\chi}$.
\begin{remark} \label{tefp}
\noindent $\tilde{\cal U}^+$ and $\tilde{\cal U}^-$ are not commutative: $[x_0^+,x_1^+]=-X_1^+$ and $[x_0^-,x_1^-]=X_1^-$.
\noindent $\tilde{\cal U}^{\pm,0}$, $\tilde{\cal U}^{\pm,1}$ and $\tilde{\cal U}^{\pm,c}$ are (commutative) algebras of polynomials:
$$\tilde{\cal U}^{+,0}\cong{\mathbb{Q}}[x_{2r}^+\; | \; r \in \Z],\ \
\tilde{\cal U}^{+,1}\cong{\mathbb{Q}}[x_{2r+1}^+\; | \; r \in \Z],\ \
\tilde{\cal U}^{+,c}\cong{\mathbb{Q}}[X_{2r+1}^+\; | \; r \in \Z],$$
$$\tilde{\cal U}^{-,0}\cong{\mathbb{Q}}[x_{2r}^-\; | \; r \in \Z],\ \
\tilde{\cal U}^{-,1}\cong{\mathbb{Q}}[x_{2r+1}^-\; | \; r \in \Z],\ \
\tilde{\cal U}^{-,c}\cong{\mathbb{Q}}[X_{2r+1} ^-\; | \; r \in \Z].$$ We have the following ``triangular'' decompositions of $\tilde{\cal U}^{\pm}$: $$\tilde{\cal U}^{\pm}\cong\tilde{\cal U}^{\pm,0}\otimes\tilde{\cal U}^{\pm,c}\otimes\tilde{\cal U}^{\pm,1}\cong\tilde{\cal U}^{\pm,1}\otimes\tilde{\cal U}^{\pm,c}\otimes\tilde{\cal U}^{\pm,0}$$ Remark that $\tilde{\cal U}^{\pm,c}$ is central in $\tilde{\cal U}^{\pm}$, so that the images in $\tilde{\cal U}^{\pm}$ of $\tilde{\cal U}^{\pm,0}\otimes\tilde{\cal U}^{\pm,c}$ and $\tilde{\cal U}^{\pm,1}\otimes\tilde{\cal U}^{\pm,c}$ are commutative subalgebras of $\tilde{\cal U}$. \vskip .15 truecm \noindent $\tilde{\cal U}^0$ is not commutative: $[h_r,h_{-r}]\neq 0$ if $r\neq 0$;
\noindent $\tilde{\cal U}^{0,+}$, $\tilde{\cal U}^{0,-}$, $\tilde{\cal U}^{0,0}$, are (commutative) algebras of polynomials:
$$\tilde{\cal U}^{0,+}\cong{\mathbb{Q}}[h_r|r>0],\,\,\,\tilde{\cal U}^{0,-}\cong{\mathbb{Q}}[h_r|r<0],\,\,\,\tilde{\cal U}^{0,0}\cong{\mathbb{Q}}[c,h_0];$$ Moreover we have the following triangular decomposition of $\tilde{\cal U}^0$: $$\tilde{\cal U}^0\cong\tilde{\cal U}^{0,-}\otimes\tilde{\cal U}^{0,0}\otimes\tilde{\cal U}^{0,+}\cong\tilde{\cal U}^{0,+}\otimes\tilde{\cal U}^{0,0}\otimes\tilde{\cal U}^{0,-}.$$ Remark that $\tilde{\cal U}^{0,0}$ is central in $\tilde{\cal U}^0$, so that the images in $\tilde{\cal U}^0$ of $\tilde{\cal U}^{0,-}\otimes\tilde{\cal U}^{0,0}$ and $\tilde{\cal U}^{0,0}\otimes\tilde{\cal U}^{0,+}$ are commutative subalgebras of $\tilde{\cal U}$. \vskip .15 truecm \noindent Finally remark the triangular decomposition of $\tilde{\cal U}$: $$\tilde{\cal U}\cong\tilde{\cal U}^-\otimes\tilde{\cal U}^0\otimes\tilde{\cal U}^+\cong\tilde{\cal U}^+\otimes\tilde{\cal U}^0\otimes\tilde{\cal U}^-,$$ and observe that the images of $\tilde{\cal U}^-\otimes\tilde{\cal U}^0$ and $\tilde{\cal U}^0\otimes\tilde{\cal U}^+$ are subalgebras of $\tilde{\cal U}$. \end{remark}
\vskip .3 truecm
\begin{definition}\label{tto} \noindent $\hat{{\frak sl_3}}^{\!\!\chi}$ and $\tilde{\cal U}$ are endowed with the following anti/auto/ho\-mo/morphisms:
\noindent $\sigma$ is the antiautomorphism defined on the generators by: $$x_r^+\mapsto x_r^+,\,\,\,x_r^-\mapsto x_r^-,\,\,\,(\Rightarrow X_r^{\pm}\mapsto-X_r^{\pm},\,\,\,h_r\mapsto-h_r,\,\,\,c\mapsto -c);$$ $\Omega$ is the antiautomorphism defined on the generators by: $$x_r^+\mapsto x_{-r}^-,\,\,\,x_r^-\mapsto x_{-r}^+,\,\,\,(\Rightarrow X_r^{\pm}\mapsto X_{-r}^{\mp},\,\,\,h_r\mapsto h_{-r},\,\,\,c\mapsto c);$$ \noindent $T$ is the automorphism defined on the generators by: $$x_r^+\mapsto x_{r-1}^+,\,\,\,x_r^-\mapsto x_{r+1}^-,\,\,\,(\Rightarrow X_r^{\pm}\mapsto -X_{r\mp 2}^{\pm},\,\,\,h_r\mapsto h_r-\delta_{r,0}c,\,\,\,c\mapsto c);$$ \noindent for all odd integer $m\in\Z$, $\lambda_m$ is the homomorphism defined on the generators by: $$x_r^+\mapsto x_{mr}^+,\,\,\,x_r^-\mapsto x_{mr}^-,\,\,\,(\Rightarrow X_r^{\pm}\mapsto X_{mr}^{\pm},\,\,\,h_r\mapsto h_{mr},\,\,\,c\mapsto mc).$$
Remark that if $m$ is even $\lambda_m$ is not defined on $\tilde{\cal U}$, but it is still defined on $\tilde{\cal U}^{0,+}={\mathbb{Q}}[h_r|r>0]$. \end{definition}
\vskip .3 truecm \begin{remark}\label{tti} \noindent $\sigma^2={\rm{id}}_{\tilde{\cal U}}$, $\Omega^2={\rm{id}}_{\tilde{\cal U}}$, $T$ is invertible of infinite order;
\noindent $\lambda_{-1}^2=\lambda_1={\rm{id}}_{\tilde{\cal U}}$; $\lambda_m$ is not invertible if $m\neq\pm 1$.
\end{remark}
\vskip .3 truecm \begin{remark}\label{tct} $\sigma\Omega=\Omega\sigma$, $\sigma T=T\sigma$, $\Omega T=T\Omega$. Moreover for all $m,n$ odd we have $\sigma\lambda_m=\lambda_m\sigma$, $\Omega\lambda_m=\lambda_m\Omega$, $\lambda_m T^{\pm 1}=T^{\pm m}\lambda_m$, $\lambda_m\lambda_n=\lambda_{mn}$.
\end{remark}
\vskip .3 truecm
\begin{remark}\label{tsb}
\noindent $\sigma\big|_{\tilde{\cal U}^{\pm,0}}={\rm{id}}_{\tilde{\cal U}^{\pm,0}},\,\sigma\big|_{\tilde{\cal U}^{\pm,1}}={\rm{id}}_{\tilde{\cal U}^{\pm,1}},\,\sigma(\tilde{\cal U}^{\pm,c})=\tilde{\cal U}^{\pm,c},\,\sigma(\tilde{\cal U}^{0,\pm})=\tilde{\cal U}^{0,\pm}$, $\sigma(\tilde{\cal U}^{0,0})=\tilde{\cal U}^{0,0}$.
\noindent $\Omega(\tilde{\cal U}^{\pm,0})\!=\tilde{\cal U}^{\mp,0}$, $\Omega(\tilde{\cal U}^{\pm,1})\!=\tilde{\cal U}^{\mp,1}$, $\Omega(\tilde{\cal U}^{\pm,c})\!=\tilde{\cal U}^{\mp,c}$, $\Omega(\tilde{\cal U}^{0,\pm})\!=\tilde{\cal U}^{0,\mp}$, $\Omega\big|_{\tilde{\cal U}^{0,0}}\!\!=\!{\rm{id}}_{\tilde{\cal U}^{0,0}}$.
\noindent $T (\tilde{\cal U}^{\pm,0})=\tilde{\cal U}^{\pm,1}$, $T (\tilde{\cal U}^{\pm,1})=\tilde{\cal U}^{\pm,0}$, $T (\tilde{\cal U}^{\pm,c})=\tilde{\cal U}^{\pm,c}$, $T
\big|_{\tilde{\cal U}^{0,\pm}}={\rm{id}}_{\tilde{\cal U}^{0,\pm}},\,\,\, T(\tilde{\cal U}^{0,0})=\tilde{\cal U}^{0,0}$.
\noindent For all odd $m\in\Z$:
\noindent $\lambda_m(\tilde{\cal U}^{\pm,0})\subseteq\tilde{\cal U}^{\pm,0},\,\,\,\lambda_m(\tilde{\cal U}^{\pm,1})\subseteq\tilde{\cal U}^{\pm,1},\,\,\,\lambda_m(\tilde{\cal U}^{\pm,c})\subseteq\tilde{\cal U}^{\pm,c},\,\,\,\lambda_m(\tilde{\cal U}^{0,0})\subseteq\tilde{\cal U}^{0,0}$, $$\lambda_m(\tilde{\cal U}^{0,\pm})\subseteq \begin{cases}\tilde{\cal U}^{0,\pm}&{\rm{if}}\,m>0\cr \tilde{\cal U}^{0,\mp}&{\rm{if}}\, m<0. \end{cases}$$
\end{remark} \vskip .3 truecm
\begin{definition}\label{sottoalgebraL} $L$, $L^{\pm}$, $L^0$, $L^{\pm,0}$, $L^{\pm,1}$, $L^{\pm,c}$ are the Lie-subalgebras of $\hat{{\frak sl_3}}^{\!\!\chi}$ generated by:
$$L: \{x_r^+,x_r^-|r\geq 0\},$$
$$L^+: \{x_r^+|r\geq 0\},\ \ L^-: \{x_r^-|r\geq 0\},\ \ L^0: \{h_r|r\geq 0\},$$
$$L^{+,0}: \{x_{2r}^+|r\geq 0\},\ \ L^{+,1}: \{x_{2r+1}^+|r\geq 0\},\ \ L^{+,c}: \{X_{2r+1}^+|r\geq 0\}.$$
$$L^{-,0}: \{x_{2r}^-|r\geq 0\},\ \ L^{-,1}: \{x_{2r+1}^-|r\geq 0\},\ \ L^{-,c}: \{X_{2r+1}^-|r\geq 0\}.$$
\end{definition} \vskip .3 truecm \begin{remark}\label{Lbase} $L^0$, $L^{\pm,0}$, $L^{\pm,1}$ and $L^{\pm,c}$ are commutative Lie-algebras; for these subalgebras of $L$ the Lie-generators given in definition \ref{sottoalgebraL} are bases over ${\mathbb{Q}}$.
\noindent Moreover we have ${\mathbb{Q}}$-vector space decompositions $$L=L^-\oplus L^0\oplus L^+,\ \ L^+=L^{+,0}\oplus L^{+,1}\oplus L^{+,c},\ \ L^-=L^{-,0}\oplus L^{-,1}\oplus L^{-,c}.$$ Finally remark that $L^+$ is $T^{-1}$-stable and that $L^-$ is $T$-stable; in more detail $T^{\mp 1}(L^{\pm,0})=L^{\pm,1}$, $T^{\mp 1}(L^{\pm,1})\subseteq L^{\pm,0}$ (so that $L^{\pm,0}$ and $L^{\pm,1}$ are $T^{\mp 2}$-stable); $L^{\pm,c}$ is $T^{\mp1}$-stable.
\end{remark} \vskip .3 truecm \begin{definition}\label{qwmodulo}
$L$ is endowed with the ${\mathbb{Q}}[w]$-module structure defined by $w\big|_{L^-}=T\big|_{L^-}$, $w\big|_{L^+}=T^{-1}\big|_{L^+}$, $w.h_r=h_{r+1}$ $\forall r\in\N$. \end{definition}
\vskip .3 truecm
\begin{lemma}\label{qwconti} Let $\xi_1(w),\xi_2(w)\in{\mathbb{Q}}[w][[u,v]]$. Then: $$[\xi_1(w^2).x_0^{\pm},\xi_2(w^2).x_1^{\pm}]=\mp(\xi_1\xi_2)(-w).X_1^{\pm};\leqno{i)}$$ $$ [\xi_1(w).x_0^+,\xi_2(w).x_0^-]=(\xi_1\xi_2)(w).h_0;\leqno{ii)}$$ $$[\xi_1(w).x_0^+,\xi_2(w).X_1^-]=4\xi_1(-w)\xi_2(-w^2).x_1^-;\leqno{iii)}$$ $$[\xi_1(w).h_0,\xi_2(w).x_0^{\pm}]=\pm(4\xi_1(w)-2\xi_1(-w))\xi_2(w).x_0^{\pm}.\leqno{iv)}$$ \begin{proof} The assertions are just a translation of the defining relations of $\tilde{\cal U}$: $$[x_{2r}^{\pm},x_{2s+1}^{\pm}],\ \ [x_r^+,x_s^-],\ \ [x_r^+,X_{2s+1}^-],\ \ [h_r,x_s^{\pm}].$$ For iv), remark that $$2(2+(-1)^{r-1})w^r=4w^r-2(-w)^r.$$ \end{proof} \end{lemma}
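\vskip .3 truecm \noindent For instance, taking $\xi_1(w)=w$ and $\xi_2(w)=1$ in i) (for the upper sign) gives $[x_2^+,x_1^+]=-(-w).X_1^+=-X_3^+$, since $w.X_1^+=T^{-1}(X_1^+)=-X_3^+$; this agrees with the defining relation $[x_2^+,x_1^+]=(-1)^1X_3^+$.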
\vskip .3 truecm \begin{definition}\label{thuz} \noindent Here we define some $\Z$-subalgebras of $\tilde{\cal U}$:
\noindent $\tilde{\cal U}_{\Z}$ is the $\Z$-subalgebra of $\tilde{\cal U}
$ generated by $\{(x_r^{+})^{(k)},(x_r^{-})^{(k)}|r\in\Z,k\in\N\}$;
\noindent $\tilde{\cal U}_{\Z}^+$ and $\tilde{\cal U}_{\Z}^-$ are the $\Z$-subalgebras of $\tilde{\cal U}
$ (and of $\tilde{\cal U}_{\Z}$) generated respectively by $\{(x_r^{+})^{(k)}|r\in\Z,k\in\N\}$, and $\{(x_r^{-})^{(k)}|r\in\Z,k\in\N\}$;
\noindent $\tilde{\cal U}_{\Z}^{\pm,0}=\Z^{(div)}[x_{2r}^{\pm}|r\in \Z]$;
\noindent $\tilde{\cal U}_{\Z}^{\pm,1}=\Z^{(div)}[x_{2r+1}^{\pm}|r\in \Z]$;
\noindent $\tilde{\cal U}_{\Z}^{\pm,c}=\Z^{(div)}[X_{2r+1}^{\pm}|r\in \Z]$;
\noindent $\tilde{\cal U}_{\Z}^{0,0}=\Z^{(bin)}[h_0,c]$;
\noindent $\tilde{\cal U}_{\Z}^{0,\pm}=\Z^{(sym)}[\varepsilon_rh_{\pm r}|r>0]$ with $\varepsilon_r=\begin{cases}1&{\rm{if}}\ 4\not|r\\-1&{\rm{if}}\ 4|r\end{cases}$;
\noindent $\tilde{\cal U}_{\Z}^0$ is the $\Z$-subalgebra of $\tilde{\cal U}$ generated by $\tilde{\cal U}_{\Z}^{0,-}$, $\tilde{\cal U}_{\Z}^{0,0}$ and $\tilde{\cal U}_{\Z}^{0,+}$.
The notations are those of section \ref{intgpl}.
\noindent In particular remark the definition of $\tilde{\cal U}_{\Z}^{0,\pm}$ (where the $\varepsilon_r$'s represent the necessary ``deformation'' announced in the introduction of this section, and discussed in detail in proposition \ref{convoluzioneintera}) and introduce the notation
$$\Z[\tilde h_k|\pm k>0]=\Z^{(sym)}[\varepsilon_rh_{\pm r}|r>0]$$ where $$\tilde h_{\pm}(u)=\sum_{k\in\N}\tilde h_{\pm k}u^k=\exp\Big(\sum_{r>0}(-1)^{r-1}{\varepsilon_rh_{\pm r}\over r}u^r\Big).$$ \end{definition} \begin{remark} \label{hdiversi} It is worth underlining that $\tilde h_{+}(u)\neq\hat h_{+}(u)$, where
$$\Z[\hat h_k|k>0]=\Z^{(sym)}[h_{r}|r>0],$$ that is $$\hat h_{+}(u)=\sum_{k\in\N}\hat h_{k}u^k=\exp\Big(\sum_{r>0}(-1)^{r-1}{h_{r}\over r}u^r\Big).$$
More precisely the $\Z$-subalgebras generated respectively by $\{\hat h_k|k>0\}$ and $\{\tilde h_k|k>0\}$ are different and not included in each other: indeed $\tilde h_1=\hat h_1$, $\tilde h_2=\hat h_2$, $\tilde h_3=\hat h_3$ but $\hat h_4\not\in\Z[\tilde h_k|k>0]$ and $\tilde h_4\not\in\Z[\hat h_k|k>0]$ (see propositions \ref{convoluzioneintera} and \ref{emmepiallaerre} and remark \ref{vicelambda}).
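\noindent Explicitly, since $\tilde h_+(u)=\hat h_+(u)\exp\big(\sum_{r>0}{h_{4r}\over 2r}u^{4r}\big)$, comparing the coefficients of $u^4$ gives $\tilde h_4=\hat h_4+{h_4\over 2}$, which makes the discrepancy at order $4$ explicit.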
\end{remark} \vskip .3 truecm \begin{remark}\label{whtilde} Let $\xi(w)\in{\mathbb{Q}}[w][[u]]$; the elements $$\exp(\xi(w^2).x_0^{\pm}),\ \ \exp(\xi(w^2).x_1^{\pm})\ \ {\rm{and}}\ \ \exp(\xi(w).X_1^{\pm})$$ lie respectively in $\tilde{\cal U}_{\Z}^{\pm,0}[[u]]$, $\tilde{\cal U}_{\Z}^{\pm,1}[[u]]$ and $\tilde{\cal U}_{\Z}^{\pm,c}[[u]]$ if and only if $\xi(w)$ has integral coefficients, that is if and only if $\xi(w)\in\Z[w][[u]]$ (see example \ref{dvdpw}).
\noindent Remark also that $$\hat h_+(u)=\exp(\ln(1+wu).h_0),$$ while $$\tilde h_+(u)=\exp\left(\big(\ln(1+uw)-{1\over 2}\ln(1-u^4w^4)\big).h_0\right).$$ \end{remark}
\noindent Before entering into the study of the integral forms just introduced, we dwell a little longer on the comparison between $\tilde h_+(u)$ and $\hat h_+(u)$, proving lemma \ref{ometiomecap}, which will be useful later.
\begin{lemma} \label{emme} For all $m\in\Z\setminus\{0\}$ we have $$(1+m^2u)^{{1\over m}}\in 1+mu\Z[[u]].$$ \begin{proof} Write $(1+m^2u)^{{1\over m}}=1+\sum_{r> 0}a_ru^r$ with $a_r\in{\mathbb{Q}}$; then $(1+\sum_{r> 0}a_ru^r)^m=1+m^2u$ implies $$1+m^2u=1+m\sum_{r> 0}a_ru^r+\sum_{k>1}{m\choose k}\big(\sum_{r> 0}a_ru^r\big)^k.$$ Let us prove by induction on $s$ that $a_s\in m\Z$:
if $s=1$ we have that $ma_1=m^2$;
if $s>1$ the coefficient $c_s$ of $u^s$ in $\sum_{k>1}{m\choose k}\big(\sum_{r> 0}a_ru^r\big)^k$ is a combination with integral coefficients of products of the $a_t$'s with $t<s$, which are all multiples of $m$. Then, since $k\geq 2$, $m^2|c_s$. But $ma_s+c_s=0$, thus $m|a_s$.
\end{proof} \end{lemma}
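\noindent For instance, for $m=2$ the lemma gives $(1+4u)^{{1\over 2}}\in 1+2u\Z[[u]]$; indeed $$(1+4u)^{{1\over 2}}=1+2u-2u^2+4u^3-10u^4+28u^5-\dots,$$ all the coefficients beyond the constant term being even.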
\begin{lemma} \label{ometiomecap} Let us consider the integral forms
$\Z[\hat h_k|k>0]$ and $\Z[\tilde h_k|k>0]$ of ${\mathbb{Q}}[h_r|r>0]$ (see example \ref{rvsf}, formula \ref{dfhp}, definition \ref{thuz} and remark \ref{hdiversi}); for all $m>0$ recall the ${\mathbb{Q}}$-algebra homomorphism $\lambda_m$ of ${\mathbb{Q}}[h_r|r>0]$ (see proposition \ref{tmom}) and define the analogous homomorphism $\tilde \lambda_m$ mapping each $\varepsilon_rh_r$ to $\varepsilon_{mr}h_{mr}$ (of course $\Z[\tilde h_k|k>0]$ is $\tilde\lambda_m$-stable $\forall m>0$).
\noindent We have that:
\noindent i) if $m$ is odd then $\tilde\lambda_m=\lambda_m$; in particular $\Z[\tilde h_k|k>0]$ is $\lambda_m$-stable;
\noindent ii) $\lambda_2(\hat h_k)\in\Z[\tilde h_l|l>0]$ for all $k>0$;
\noindent iii) $\hat h_+(4u)^{{1\over 2}}\in\Z[\tilde h_k|k>0][[u]]$; \begin{proof}
\noindent i) If $m$ is odd then $4|mr\Leftrightarrow 4|r$, hence $\varepsilon_{mr}=\varepsilon_r$ $\forall r>0$ and the claim follows from proposition \ref{tmom}.
\noindent ii) By proposition \ref{tmom} we know that $\Z[\tilde h_k|k>0]$ is $\tilde\lambda_2$-stable; but $$\tilde\lambda_2(\tilde h_+(u^2))=\exp\sum_{r>0}(-1)^{r-1}{\varepsilon_{2r}h_{2r}\over r}u^{2r}= \exp\sum_{r>0}{h_{2r}\over r}u^{2r}=\lambda_2(\hat h_+(-u^2))^{-1};$$ equivalently $$\lambda_2(\hat h_+(u^2))=\tilde\lambda_2(\tilde h_+(-u^2))^{-1},$$ which implies the claim.
\noindent iii) Remark that $$\hat h_+(u)\tilde h(u)_+^{-1}= \exp\left(-\sum_{r>0}{2h_{4r}\over 4r}u^{4r}\right)= \tilde\lambda_4(\tilde h_+(-u^4))^{-{1\over 2}};$$ then $$\hat h_+(4u)^{{1\over 2}}=\tilde h_+(4u)^{{1\over 2}}\tilde\lambda_4(\tilde h_+(-4^4u^4))^{-{1\over 4}}.$$
Since $\tilde h_+(4u)\in 1+4u\Z[\tilde h_k | k>0][[u]]$ and $\tilde\lambda_4(\tilde h_+(-4^4u^4))\in 1+4^4u\Z[\tilde h_k| k>0][[u]]$, we deduce from lemma \ref{emme} that
$$\tilde h_+(4u)^{{1\over 2}},\ \ \tilde\lambda_4(\tilde h_+(-4^4u^4))^{{1\over 4}}\in\Z[\tilde h_k | k>0][[u]];$$ since these series have constant term $1$, their inverses also lie in $\Z[\tilde h_k | k>0][[u]]$, which implies the claim.
\end{proof} \end{lemma}
\vskip .3 truecm \begin{remark}\label{tzp}
\noindent It is obvious that $\tilde{\cal U}_{\Z}^{\pm,0}$, $\tilde{\cal U}_{\Z}^{\pm,1}$, $\tilde{\cal U}_{\Z}^{\pm,c}$, $\tilde{\cal U}_{\Z}^{0,\pm}$ and $\tilde{\cal U}_{\Z}^{0,0}$ are integral forms respectively of $\tilde{\cal U}^{\pm,0}$, $\tilde{\cal U}^{\pm,1}$, $\tilde{\cal U}^{\pm,c}$, $\tilde{\cal U}^{0,\pm}$ and $\tilde{\cal U}^{0,0}$.
\noindent Hence by the commutativity properties we also have that $\tilde{\cal U}_{\Z}^{\pm,0}\tilde{\cal U}_{\Z}^{\pm,c}$ and $\tilde{\cal U}_{\Z}^{\pm,1}\tilde{\cal U}_{\Z}^{\pm,c}$ are integral forms respectively of $\tilde{\cal U}^{\pm,0}\tilde{\cal U}^{\pm,c}$ and $\tilde{\cal U}^{\pm,c}\tilde{\cal U}^{\pm,1}$.
\noindent Analogously $\tilde{\cal U}_{\Z}^{0,0}\tilde{\cal U}_{\Z}^{0,+}$ and $\tilde{\cal U}_{\Z}^{0,-}\tilde{\cal U}_{\Z}^{0,0}$ are integral forms respectively of $\tilde{\cal U}^{0,0}\tilde{\cal U}^{0,+}$ and $\tilde{\cal U}^{0,-}\tilde{\cal U}^{0,0}$.
\noindent We want to prove that:
\noindent 1) $\tilde{\cal U}_{\Z}^0=\tilde{\cal U}_{\Z}^{0,-}\tilde{\cal U}_{\Z}^{0,0}\tilde{\cal U}_{\Z}^{0,+}$, so that $\tilde{\cal U}_{\Z}^0$ is an integral form of $\tilde{\cal U}^0$;
\noindent 2) $\tilde{\cal U}_{\Z}^{\pm}=\tilde{\cal U}_{\Z}^{\pm,1}\tilde{\cal U}_{\Z}^{\pm,c}\tilde{\cal U}_{\Z}^{\pm,0}$, so that $\tilde{\cal U}_{\Z}^+$ and $\tilde{\cal U}_{\Z}^-$ are integral forms respectively of $\tilde{\cal U}^+$ and $\tilde{\cal U}^-$;
\noindent 3) $\tilde{\cal U}_{\Z}=\tilde{\cal U}_{\Z}^-\tilde{\cal U}_{\Z}^0\tilde{\cal U}_{\Z}^+$, so that $\tilde{\cal U}_{\Z}$ is an integral form of $\tilde{\cal U}$. \vskip .3 truecm
\noindent It is useful to make explicit the behaviour of the $\Z$-subalgebras introduced above under the symmetries of $\tilde{\cal U}$.
\end{remark}
\vskip .3 truecm \begin{proposition}\label{sttuz} \noindent The following stability properties under the action of $\sigma$, $\Omega$, $T^{\pm 1}$ and $\lambda_m$ ($m \in \Z$ odd) hold:
\noindent i) $\tilde{\cal U}_{\Z}$, $\tilde{\cal U}_{\Z}^+$ and $\tilde{\cal U}_{\Z}^-$ are $\sigma$-stable, $T^{\pm 1}$-stable, $\lambda_m$-stable.
\noindent \,\,\,\,\, $\tilde{\cal U}_{\Z}$ is also $\Omega$-stable, while $\Omega(\tilde{\cal U}_{\Z}^{\pm})=\tilde{\cal U}_{\Z}^{\mp}$.
\noindent ii) $\tilde{\cal U}_{\Z}^{+,0}$, $\tilde{\cal U}_{\Z}^{+,1}$ and $\tilde{\cal U}_{\Z}^{+,c}$ are $\sigma$-stable, $T^{\pm 2}$-stable, $\lambda_m$-stable.
\noindent \,\,\,\,\, $\tilde{\cal U}_{\Z}^{+,c}$ is also $T^{\pm 1}$-stable, while $T^{\pm 1}(\tilde{\cal U}_{\Z}^{+,0})=\tilde{\cal U}_{\Z}^{+,1}$.
\noindent \,\,\,\,\, $\Omega(\tilde{\cal U}_{\Z}^{+,0})=\tilde{\cal U}_{\Z}^{-,0}$, $\Omega(\tilde{\cal U}_{\Z}^{+,1})=\tilde{\cal U}_{\Z}^{-,1}$ and $\Omega(\tilde{\cal U}_{\Z}^{+,c})=\tilde{\cal U}_{\Z}^{-,c}$.
\noindent iii) $\tilde{\cal U}_{\Z}^{0,0}$, $\tilde{\cal U}_{\Z}^{0,+}$ and $\tilde{\cal U}_{\Z}^{0,-}$ are $\sigma$-stable and $T^{\pm 1}$-stable.
\noindent \,\,\,\,\, $\tilde{\cal U}_{\Z}^{0,0}$ is also $\Omega$-stable and $\lambda_m$-stable; $\Omega(\tilde{\cal U}_{\Z}^{0,\pm})=\tilde{\cal U}_{\Z}^{0,\mp}$; $\tilde{\cal U}_{\Z}^{0,\pm}$ is $\lambda_m$-stable if $m>0$, while $\lambda_m(\tilde{\cal U}_{\Z}^{0,\pm})\subseteq\tilde{\cal U}_{\Z}^{0,\mp}$ if $m<0$.
\noindent iv) $\tilde{\cal U}_{\Z}^0$ is $\sigma$-stable, $\Omega$-stable, $T^{\pm 1}$-stable, $\lambda_m$-stable. \begin{proof} The only non-trivial assertion is the claim that $\tilde{\cal U}_{\Z}^{0,+}$ is $\lambda_m$-stable when $m>0$, which was proved in lemma \ref{ometiomecap},i).
\noindent The assertion about $\lambda_m(\tilde{\cal U}_{\Z}^{0,\pm})$ in the general case follows using that $$\Omega(\tilde{\cal U}_{\Z}^{0,\pm})=\tilde{\cal U}_{\Z}^{0,\mp}=\lambda_{-1}(\tilde{\cal U}_{\Z}^{0,\pm}),\ \ \lambda_m\Omega=\Omega\lambda_m\ \ {\rm{and}}\ \ \lambda_{-m}=\lambda_{-1}\lambda_m.$$ Remark that $$\sigma(\tilde h_{\pm}(u))\!=\!\tilde h_{\pm}(u)^{-1},\,\Omega(\tilde h_{\pm}(u))\!=\!\lambda_{-1}(\tilde h_{\pm}(u))\!=\!\tilde h_{\mp}(u),\,T^{\pm 1}(\tilde h_{\pm}(u))\!=\!\tilde h_{\pm}(u).$$ \end{proof} \end{proposition}
\vskip .3 truecm
\begin{remark} \label{autstab}
\noindent The stability properties described in proposition \ref{sttuz} imply that:
\noindent i) $\sigma(\tilde{\cal U}_{\Z}^{0,-}\tilde{\cal U}_{\Z}^{0,0}\tilde{\cal U}_{\Z}^{0,+})=\tilde{\cal U}_{\Z}^{0,+}\tilde{\cal U}_{\Z}^{0,0}\tilde{\cal U}_{\Z}^{0,-}$; in particular $$\tilde{\cal U}_{\Z}^0=\tilde{\cal U}_{\Z}^{0,-}\tilde{\cal U}_{\Z}^{0,0}\tilde{\cal U}_{\Z}^{0,+}\Leftrightarrow\tilde{\cal U}_{\Z}^0=\tilde{\cal U}_{\Z}^{0,+}\tilde{\cal U}_{\Z}^{0,0}\tilde{\cal U}_{\Z}^{0,-}.$$
\noindent ii) $T^{\pm 1}(\tilde{\cal U}_{\Z}^{+,1}\tilde{\cal U}_{\Z}^{+,c}\tilde{\cal U}_{\Z}^{+,0})=\tilde{\cal U}_{\Z}^{+,0}\tilde{\cal U}_{\Z}^{+,c}\tilde{\cal U}_{\Z}^{+,1}$ and $\tilde{\cal U}_{\Z}^{+,1}\tilde{\cal U}_{\Z}^{+,c}\tilde{\cal U}_{\Z}^{+,0}$ is $T^{\pm 2}$-stable and $\lambda_m$-stable ($m\in\Z$ odd); in particular:
$$\tilde{\cal U}_{\Z}^+=\tilde{\cal U}_{\Z}^{+,1}\tilde{\cal U}_{\Z}^{+,c}\tilde{\cal U}_{\Z}^{+,0}\Leftrightarrow\tilde{\cal U}_{\Z}^+=\tilde{\cal U}_{\Z}^{+,0}\tilde{\cal U}_{\Z}^{+,c}\tilde{\cal U}_{\Z}^{+,1}.$$
\noindent iii) $\tilde{\cal U}_{\Z}^0\tilde{\cal U}_{\Z}^+$ is $T^{\pm 1}$-stable and $\lambda_{-1}$-stable, and $\Omega(\tilde{\cal U}_{\Z}^0\tilde{\cal U}_{\Z}^+)=\tilde{\cal U}_{\Z}^-\tilde{\cal U}_{\Z}^0$; in particular it is enough to prove that $(x_0^+)^{(k)}\tilde h_+(u)\in\tilde h_+(u)\tilde{\cal U}_{\Z}^+[[u]]$ $\forall k\geq 0$ in order to show that $$(x_r^+)^{(k)}\tilde h_{\pm}(u)\in\tilde h_{\pm}(u)\tilde{\cal U}_{\Z}^+[[u]], \tilde h_{\pm}(u)(x_r^-)^{(k)}\in\tilde{\cal U}_{\Z}^-[[u]]\tilde h_{\pm}(u)\ \ \forall r\in\Z,\ k\in\N,$$ or equivalently that $\tilde{\cal U}_{\Z}^+\tilde{\cal U}_{\Z}^0\subseteq\tilde{\cal U}_{\Z}^0\tilde{\cal U}_{\Z}^+$ and $\tilde{\cal U}_{\Z}^0\tilde{\cal U}_{\Z}^-\subseteq\tilde{\cal U}_{\Z}^-\tilde{\cal U}_{\Z}^0$.
\noindent iv) $\tilde{\cal U}_{\Z}^-\tilde{\cal U}_{\Z}^0\tilde{\cal U}_{\Z}^+$ is $T^{\pm 1}$-stable and $\lambda_m$-stable ($m\in\Z$ odd); in particular if one shows that $(x_0^+)^{(k)}(x_1^-)^{(l)}\in\tilde{\cal U}_{\Z}^-\tilde{\cal U}_{\Z}^0\tilde{\cal U}_{\Z}^+$ it follows that
$\forall r,s\in\Z$ such that $2\not|r+s$ $$(x_r^+)^{(k)}(x_s^-)^{(l)}=T^{-r}\lambda_{r+s}((x_0^+)^{(k)}(x_1^-)^{(l)})\in\tilde{\cal U}_{\Z}^-\tilde{\cal U}_{\Z}^0\tilde{\cal U}_{\Z}^+.$$
\end{remark}
\vskip.3 truecm
\begin{proposition} \label{zkd}
\noindent The following identities hold in $\tilde{\cal U}$:
$$\hat h_+(u)\hat h_-(v)=\hat h_-(v)(1-uv)^{-4c}(1+uv)^{2c}\hat h_+(u)$$ and $$\tilde h_+(u)\tilde h_-(v)=\tilde h_-(v)(1-uv)^{-4c}(1+uv)^{2c}\tilde h_+(u).$$ In particular $\tilde{\cal U}_{\Z}^0=\tilde{\cal U}_{\Z}^{0,-}\tilde{\cal U}_{\Z}^{0,0}\tilde{\cal U}_{\Z}^{0,+}$ and $\tilde{\cal U}_{\Z}^0$ is an integral form of $\tilde{\cal U}^0$. \begin{proof}
\noindent Since $[h_r,h_s]=[\varepsilon_rh_r,\varepsilon_sh_s]=\delta_{r+s,0}2r(2+(-1)^{r-1})c$, the claim is proposition \ref{heise} with $m=4$, $l=-2$.
\end{proof} \end{proposition}
\begin{lemma} \label{zkp}
\noindent The following identity holds in $\tilde{\cal U}$ for all $r,s\in\Z$:
$$\exp(x_{2r}^+u)\exp(x_{2s+1}^+v)=\exp(x_{2s+1}^+v)\exp(-X_{2r+2s+1}^+uv)\exp(x_{2r}^+u).$$ \begin{proof} The claim is an immediate consequence of lemma \ref{cle},vii), thanks to the relation $[x_{2r}^+,x_{2s+1}^+]=-X_{2r+2s+1}^+$. \end{proof} \end{lemma}
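\noindent Extracting the coefficient of $u^kv^l$ from the identity of lemma \ref{zkp} gives the divided-power form $$(x_{2r}^+)^{(k)}(x_{2s+1}^+)^{(l)}=\sum_{j=0}^{\min(k,l)}(x_{2s+1}^+)^{(l-j)}(-X_{2r+2s+1}^+)^{(j)}(x_{2r}^+)^{(k-j)},$$ which is essentially the shape in which the lemma is applied in the following corollary.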
\vskip.3 truecm \begin{corollary} \label{zppkd} $\tilde{\cal U}_{\Z}^+=\tilde{\cal U}_{\Z}^{+,1}\tilde{\cal U}_{\Z}^{+,c}\tilde{\cal U}_{\Z}^{+,0}$; then $\tilde{\cal U}_{\Z}^{\pm}$ is an integral form of $\tilde{\cal U}^{\pm}$. \begin{proof} \noindent From lemma \ref{zkp} we deduce that:
\noindent i) $(X_{2r+1}^+)^{(k)}\in\tilde{\cal U}_{\Z}^+$ $\forall k\in\N,r\in\Z$; this implies that $$\tilde{\cal U}_{\Z}^{+,c}\subseteq\tilde{\cal U}_{\Z}^+\ \ {\rm{and}}\ \ \tilde{\cal U}_{\Z}^{+,1}\tilde{\cal U}_{\Z}^{+,c}\tilde{\cal U}_{\Z}^{+,0}\subseteq\tilde{\cal U}_{\Z}^+.$$
\noindent ii) $\tilde{\cal U}_{\Z}^{+,0}\tilde{\cal U}_{\Z}^{+,1}\subseteq\tilde{\cal U}_{\Z}^{+,1}\tilde{\cal U}_{\Z}^{+,c}\tilde{\cal U}_{\Z}^{+,0}$, hence $\tilde{\cal U}_{\Z}^{+,1}\tilde{\cal U}_{\Z}^{+,c}\tilde{\cal U}_{\Z}^{+,0}$ is stable by left multiplication by $\tilde{\cal U}_{\Z}^{+,0}$, hence by $\tilde{\cal U}_{\Z}^+$ (which is generated by $\tilde{\cal U}_{\Z}^{+,0}$ and $\tilde{\cal U}_{\Z}^{+,1}$).
\noindent Since $1\in\tilde{\cal U}_{\Z}^{+,1}\tilde{\cal U}_{\Z}^{+,c}\tilde{\cal U}_{\Z}^{+,0}$, we deduce $\tilde{\cal U}_{\Z}^+\subseteq\tilde{\cal U}_{\Z}^{+,1}\tilde{\cal U}_{\Z}^{+,c}\tilde{\cal U}_{\Z}^{+,0}$, and the claim follows applying $\Omega$ (see proposition \ref{sttuz},i)).
\end{proof} \end{corollary}
\begin{proposition} \label{xtuzzero} $\tilde{\cal U}_{\Z}^+\tilde{\cal U}_{\Z}^{0,0}\subseteq\tilde{\cal U}_{\Z}^{0,0}\tilde{\cal U}_{\Z}^+$; more precisely $$(x_r^+)^{(k)}{h_0\choose l}={h_0-2k\choose l}(x_r^+)^{(k)}\ \ \forall r\in\Z,\ k,l\in\N.$$ \begin{proof} The claim follows by immediate application of \ref{fru}.\end{proof} \end{proposition} \vskip .3 truecm
\begin{proposition} \label{zkopp} In $\tilde{\cal U}$ the following holds:
\noindent i) $x_0^+\tilde h_+(u)=\tilde h_+(u)(1-uT^{-1})^6(1-u^2T^{-2})^{-3}(1+u^2T^{-2})(x_0^+)$;
\noindent ii) $(x_0^+)^{(k)}\tilde h_+(u)\in\tilde h_+(u)\tilde{\cal U}_{\Z}^+[[u]]$ $\forall k\in\N$;
\noindent iii) $\tilde{\cal U}_{\Z}^+\tilde{\cal U}_{\Z}^{0,+}\subseteq\tilde{\cal U}_{\Z}^{0,+}\tilde{\cal U}_{\Z}^+$. \begin{proof} i) We have that $[\varepsilon_r h_r, x_0^+]=\varepsilon_r2(2+(-1)^{r-1})x_r^+$
and $$\varepsilon_r2(2+(-1)^{r-1})=\begin{cases}6&{\rm{if}}\ 2\not|r\\2=6-4&{\rm{if}}\ 2|r\ {\rm{and}}\ 4\not|r\\-2=6-4-4&{\rm{if}}\ 4|r,\end{cases}$$ hence proposition \ref{hh} applies, with $m_1=6$, $m_2=-2$, $m_4=-1$ and implies that $$x_0^+\tilde h_+(u)=\tilde h_+(u)(1+uT^{-1})^{-6}(1-u^2T^{-2})^{2}(1-u^4T^{-4})(x_0^+)=$$ $$=\tilde h_+(u)(1-uT^{-1})^6(1-u^2T^{-2})^{-3}(1+u^2T^{-2})(x_0^+).$$
\noindent ii) Let us underline that $(1-u^2)^{-3}(1+u^2)\in\Z[[u^2]]$, hence from the coefficients of $(1-u)^6$ it can be deduced that $$(1-u)^6(1-u^2)^{-3}(1+u^2)\in\Z[[u^2]]+2u\Z[[u^2]]$$ and
$$x_0^+\tilde h_+(u)=\tilde h_+(u)\sum_{r\geq 0}a_rx_r^+u^r\ \ {\rm{with}}\ \ a_r\in\Z\ \forall r\geq 0\ {\rm{and}}\ 2|a_r\ \forall r\ {\rm{odd}}.$$ If we define $y_0=\sum_{r\ge0}a_{2r}x_{2r}^+u^{2r}$, $y_1={1\over 2}\sum_{r \ge0}a_{2r+1}x_{2r+1}^+u^{2r+1}$ we have that, thanks to lemma \ref{cle},viii) $$\exp(x_0^+v)\tilde h_+(u)=\tilde h_+(u)\exp((y_0+2y_1)v)=$$ $$=\tilde h_+(u)\exp(2y_1v)\exp([y_0,y_1]v^2)\exp(y_0v)\in\tilde h_+(u)\tilde{\cal U}_{\Z}^+[[u,v]]$$ from which the claim follows thanks to remark \ref{whtilde}.
\noindent iii) From the $T^{\pm 1}$-stability of $\tilde{\cal U}_{\Z}^+$ and the fact that $T^{\pm 1}\big|_{\tilde{\cal U}_{\Z}^{0,+}}=id$ we deduce that for all $r\in\Z,\ k\in\N$ $$(x_r^+)^{(k)}\tilde h_+(u)\in\tilde h_+(u)\tilde{\cal U}_{\Z}^+[[u]].$$ The claim follows recalling that the $(x_r^+)^{(k)}$'s generate $\tilde{\cal U}_{\Z}^+$ and the $\tilde h_k$'s generate $\tilde{\cal U}_{\Z}^{0,+}$. \end{proof} \end{proposition}
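\vskip .3 truecm \noindent As a quick check of proposition \ref{zkopp},i), comparing the coefficients of $u$ (and recalling that $\tilde h_1=h_1$ and that the coefficient of $u$ in $(1-u)^6(1-u^2)^{-3}(1+u^2)$ is $-6$) gives $x_0^+h_1=h_1x_0^+-6x_1^+$, that is $[h_1,x_0^+]=2(2+(-1)^{0})x_1^+=6x_1^+$, in accordance with the defining relations.
\vskip .3 truecm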
\begin{corollary} \label{czzp2} $\tilde{\cal U}_{\Z}^{\pm}\tilde{\cal U}_{\Z}^0=\tilde{\cal U}_{\Z}^0\tilde{\cal U}_{\Z}^{\pm}$. In particular $\tilde{\cal U}_{\Z}^0\tilde{\cal U}_{\Z}^+$ and $\tilde{\cal U}_{\Z}^-\tilde{\cal U}_{\Z}^0$ are subalgebras of $\tilde{\cal U}_{\Z}$. \begin{proof} $\tilde{\cal U}_{\Z}^+\tilde{\cal U}_{\Z}^{0,0}\subseteq \tilde{\cal U}_{\Z}^{0,0}\tilde{\cal U}_{\Z}^+$ (see proposition \ref{xtuzzero}) and $\tilde{\cal U}_{\Z}^+\tilde{\cal U}_{\Z}^{0,+}\subseteq \tilde{\cal U}_{\Z}^{0,+}\tilde{\cal U}_{\Z}^+$ (see \ref{zkopp},iii)); moreover $$\tilde{\cal U}_{\Z}^+\tilde{\cal U}_{\Z}^{0,-}=\lambda_{-1}(\tilde{\cal U}_{\Z}^+\tilde{\cal U}_{\Z}^{0,+})\subseteq\lambda_{-1}(\tilde{\cal U}_{\Z}^{0,+}\tilde{\cal U}_{\Z}^+)=\tilde{\cal U}_{\Z}^{0,-}\tilde{\cal U}_{\Z}^+.$$ Hence $\tilde{\cal U}_{\Z}^+\tilde{\cal U}_{\Z}^0\subseteq\tilde{\cal U}_{\Z}^0\tilde{\cal U}_{\Z}^+$.
\noindent Applying $\sigma$ we get the reverse inclusion and applying $\Omega$ we obtain the claim for $\tilde{\cal U}_{\Z}^-$.
\end{proof} \end{corollary}
\noindent Now that we have described $\tilde{\cal U}_{\Z}^0$, $\tilde{\cal U}_{\Z}^{\pm}$ and the $\Z$-subalgebras generated by $\tilde{\cal U}_{\Z}^0$ and $\tilde{\cal U}_{\Z}^+$ (respectively by $\tilde{\cal U}_{\Z}^0$ and $\tilde{\cal U}_{\Z}^-$), in order to show that $\tilde{\cal U}_{\Z}= \tilde{\cal U}_{\Z}^-\tilde{\cal U}_{\Z}^0\tilde{\cal U}_{\Z}^+$ it remains to prove that $$\tilde{\cal U}_{\Z}^0\subseteq\tilde{\cal U}_{\Z}\ \ {\rm{and}}\ \ \tilde{\cal U}_{\Z}^+\tilde{\cal U}_{\Z}^-\subseteq \tilde{\cal U}_{\Z}^-\tilde{\cal U}_{\Z}^0\tilde{\cal U}_{\Z}^+.$$ Before attacking this problem in its generality it is worth pointing out the existence of some copies of $\hat{\frak sl_2}$ inside $\hat{{\frak sl_3}}^{\!\!\chi}$, hence of embeddings $\hat{\cal U}\hookrightarrow\tilde{\cal U}$, which induce some useful commutation relations in $\tilde{\cal U}$.
\vskip .3 truecm
\begin{remark}\label{emgg} The ${\mathbb{Q}}$-linear maps $f,F:\hat\frak sl_2\to\hat{{\frak sl_3}}^{\!\!\chi}$ defined by $$x_r^{\pm}\mapsto x_{2r}^{\pm},\ \ h_r\mapsto h_{2r},\ \ c\mapsto 2c\leqno{f:}$$ $$x_r^{\pm}\mapsto {X_{2r\mp 1}^{\pm}\over 4} ,\ \ h_r\mapsto {h_{2r}\over 2}-\delta_{r,0}{c\over 4},\ \ c\mapsto {c\over 2}\leqno{F:}$$ are Lie-algebra homomorphisms, obviously injective, inducing embeddings $f,F:\hat{\cal U}\hookrightarrow\tilde{\cal U}$.
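\noindent As a check (the one where the shift $-\delta_{r,0}{c\over 4}$ in the definition of $F$ is needed), remark that $$[F(x_r^+),F(x_s^-)]={1\over 16}[X_{2r-1}^+,X_{2s+1}^-]={h_{2(r+s)}\over 2}+{2r-1\over 4}\delta_{r+s,0}c=F(h_{r+s}+r\delta_{r+s,0}c).$$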
\end{remark} \begin{corollary}\label{czzp} $f(\hat{\cal U}_{\Z}^{0,0})\subseteq\tilde{\cal U}_{\Z}^{0,0}\subseteq\tilde{\cal U}_{\Z}$. \begin{proof} Since $f(\hat{\cal U}_{\Z}^{\pm})\subseteq\tilde{\cal U}_{\Z}^{\pm,0}\subseteq\tilde{\cal U}_{\Z}$ we have that $f$ maps $\hat{\cal U}_{\Z}$ (which is generated by $\hat{\cal U}_{\Z}^+$ and $\hat{\cal U}_{\Z}^-$) into $\tilde{\cal U}_{\Z}$; in particular $f(\hat{\cal U}_{\Z}^{0,0})\subseteq\tilde{\cal U}_{\Z}$. But $$f(\hat{\cal U}_{\Z}^{0,0})=f(\Z^{(bin)}[h_0,c])=\Z^{(bin)}[h_0,2c],$$ thus $\Z^{(bin)}[h_0,2c]\subseteq\tilde{\cal U}_{\Z}.$ Since $\tilde{\cal U}_{\Z}$ is $T$-stable and $T(h_0)=h_0-c$ we also have $\Z^{(bin)}[h_0-c]\subseteq\tilde{\cal U}_{\Z}$, so that $$f(\hat{\cal U}_{\Z}^{0,0})=\Z^{(bin)}[h_0,2c]\subseteq\Z^{(bin)}[h_0,c]=\Z^{(bin)}[h_0,h_0-c]\subseteq\tilde{\cal U}_{\Z}$$ which is the claim because $\tilde{\cal U}_{\Z}^{0,0}=\Z^{(bin)}[h_0,c]$. \end{proof} \end{corollary}
\begin{proposition} \label{czzq} $\tilde{\cal U}_{\Z}^{+,0}\tilde{\cal U}_{\Z}^{-,0}\subseteq\tilde{\cal U}_{\Z}^-\tilde{\cal U}_{\Z}^0\tilde{\cal U}_{\Z}^+$ and $\tilde{\cal U}_{\Z}^{+,1}\tilde{\cal U}_{\Z}^{-,1}\subseteq\tilde{\cal U}_{\Z}^-\tilde{\cal U}_{\Z}^0\tilde{\cal U}_{\Z}^+$. \begin{proof} $\tilde{\cal U}_{\Z}^{+,0}\tilde{\cal U}_{\Z}^{-,0}=f(\hat{\cal U}_{\Z}^+\hat{\cal U}_{\Z}^-)\subseteq f(\hat{\cal U}_{\Z}^-\hat{\cal U}_{\Z}^0\hat{\cal U}_{\Z}^+)=\tilde{\cal U}_{\Z}^{-,0}f(\hat{\cal U}_{\Z}^0)\tilde{\cal U}_{\Z}^{+,0}$: we want to prove that $f(\hat{\cal U}_{\Z}^0)=f(\hat{\cal U}_{\Z}^{0,-}\hat{\cal U}_{\Z}^{0,0}\hat{\cal U}_{\Z}^{0,+})\subseteq\tilde{\cal U}_{\Z}^0$.
\noindent By corollary \ref{czzp} $f(\hat{\cal U}_{\Z}^{0,0})\subseteq\tilde{\cal U}_{\Z}^{0,0}$.
\noindent On the other hand
$$f(\hat{\cal U}_{\Z}^{0,+})=f(\Z^{(sym)}[h_r|r>0])=\Z^{(sym)}[h_{2r}|r>0]=\lambda_2(\Z[\hat h_k|k>0]),$$
hence $f(\hat{\cal U}_{\Z}^{0,+})\subseteq\Z[\tilde h_k|k>0]=\tilde{\cal U}_{\Z}^{0,+}$ thanks to lemma \ref{ometiomecap} ii).
\noindent Finally remark that $f\Omega=\Omega f$, thus $f(\hat{\cal U}_{\Z}^{0,-})=f\Omega(\hat{\cal U}_{\Z}^{0,+})\subseteq\Omega\tilde{\cal U}_{\Z}^{0,+}\subseteq\tilde{\cal U}_{\Z}^{0,-}$.
\noindent It follows that $f(\hat{\cal U}_{\Z}^0)\subseteq\tilde{\cal U}_{\Z}^0$ and $\tilde{\cal U}_{\Z}^{+,0}\tilde{\cal U}_{\Z}^{-,0}\subseteq\tilde{\cal U}_{\Z}^-\tilde{\cal U}_{\Z}^0\tilde{\cal U}_{\Z}^+$.
\noindent The assertion for $\tilde{\cal U}_{\Z}^{\pm,1}$ follows applying $T$, see proposition \ref{sttuz},i),ii) and iv).
\end{proof} \end{proposition}
\subsection{$\exp(x_0^+u)\exp(x_1^-v)$ and $\tilde{\cal U}_{\Z}^{0,+}$: here comes the hard work}\label{sottosezione}
\noindent We shall deal with the commutation between $\tilde{\cal U}_{\Z}^{+,0}$ and $\tilde{\cal U}_{\Z}^{-,1}$ following the strategy already proposed for $\hat{\cal U}_{\Z}$ and recalling remark \ref{autstab},iv): finding an explicit expression involving suitable exponentials for $$\exp(x_0^+u)\exp(x_1^-v)\in\tilde{\cal U}^{-,1}\tilde{\cal U}^{-,c}\tilde{\cal U}^{-,0}\tilde{\cal U}^{0,+}\tilde{\cal U}^{+,1}\tilde{\cal U}^{+,c}\tilde{\cal U}^{+,0}[[u,v]]$$ and proving that all its coefficients lie in $$\tilde{\cal U}_{\Z}^{-,1}\tilde{\cal U}_{\Z}^{-,c}\tilde{\cal U}_{\Z}^{-,0}\tilde{\cal U}_{\Z}^{0,+}\tilde{\cal U}_{\Z}^{+,1}\tilde{\cal U}_{\Z}^{+,c}\tilde{\cal U}_{\Z}^{+,0}\subseteq\tilde{\cal U}_{\Z}^-\tilde{\cal U}_{\Z}^0\tilde{\cal U}_{\Z}^+.$$
\noindent Since more factors are involved here, the computation is more complicated than in the case of $\hat\frak sl_2$, and the simplification provided by this approach is even more evident. On the other hand it is not immediately clear from the commutation formula that our element belongs to $\tilde{\cal U}_{\Z}^-\tilde{\cal U}_{\Z}^0\tilde{\cal U}_{\Z}^+$; more precisely, the factors relative to the negative (resp. positive) real root vectors will evidently be elements of $\tilde{\cal U}_{\Z}^-$ (resp. $\tilde{\cal U}_{\Z}^+$), while proving that the null part indeed lies in $\tilde{\cal U}_{\Z}^0$ is not evident at all and will require a deeper inspection (see remark \ref{hhdehh}, lemma \ref{contidp} and corollary \ref{hcappucciod}).
\noindent As we shall see, in order to complete the proof that $\tilde{\cal U}_{\Z}^{0,+}\subseteq\tilde{\cal U}_{\Z}$ (see proposition \ref{samealg}), it is useful to compute also $\exp(x_0^+u)\exp(X_1^-v)$. The two computations ($\exp(x_0^+u)\exp(yv)$ with $y=x_1^-$ or $y=X_1^-$) are essentially the same and will be performed together (see the considerations from remark \ref{gesponenziale} to lemma \ref{wderivcomm}, of which propositions \ref{xmenogrande} and \ref{x0piux1meno} are straightforward applications); even though $\exp(x_0^+u)\exp(x_1^-v)$ presents more symmetries than $\exp(x_0^+u)\exp(X_1^-v)$ (see remark \ref{gtg},iii)), its interpretation will require more work, since, as just mentioned, its connection with $\tilde{\cal U}_{\Z}^{0,+}$ is not evident.
\vskip .3 truecm
\begin{remark}\label{gesponenziale} Let $G=G(u,v)\in\tilde{\cal U}[[u,v]]$ and $y\in L^-$ (see definition \ref{sottoalgebraL}); then $$G(u,v)=\exp(x_0^+u)\exp(yv)$$ if and only if the following two conditions hold:
a) $G(0,v)=\exp(yv)$;
b) ${d\over du}G(u,v)=x_0^+G(u,v)$.
\end{remark}
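\noindent Indeed, writing $G(u,v)=\sum_{n\geq 0}G_n(v)u^n$, condition b) is equivalent to $(n+1)G_{n+1}=x_0^+G_n$ for all $n\geq 0$, that is to $G(u,v)=\exp(x_0^+u)G(0,v)$; condition a) then prescribes $G(0,v)=\exp(yv)$.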
\vskip .3 truecm \begin{notation}\label{gabg} In the following $G^-$, $G^0$, $G^+$ will denote elements of $\tilde{\cal U}[[u,v]]$ of the form $$G^-=\exp({\alpha_-})\exp({\beta_-})\exp({\gamma_-}),$$ $$G^+=\exp(\gamma_+)\exp(\beta_+)\exp(\alpha_+),$$ $$G^0=\exp(\eta)$$ with $$\alpha_-\in{\mathbb{Q}}[w^2][[u,v]].x_1^-,\ \beta_-\in{\mathbb{Q}}[w][[u,v]].X_1^-,\ \gamma_-\in{\mathbb{Q}}[w^2][[u,v]].x_0^-,$$ $$\alpha_+\in{\mathbb{Q}}[w^2][[u,v]].x_0^+,\ \beta_+\in{\mathbb{Q}}[w][[u,v]].X_1^+,\ \gamma_+\in{\mathbb{Q}}[w^2][[u,v]].x_1^+,$$ $$\eta\in w{\mathbb{Q}}[w][[u,v]].h_0.$$ $G(u,v)$ will denote the element $G(u,v)=G=G^-G^0G^+$.
\end{notation} \vskip .3 truecm \begin{remark}\label{gtg} Let $G=G^-G^0G^+\in\tilde{\cal U}[[u,v]]$ be as in notation \ref{gabg}. Then:
\noindent i) Of course $${dG\over du}={dG^-\over du}G^0G^++G^-{dG^0\over du}G^++G^-G^0{dG^+\over du}$$ where, considering the commutativity properties, we have that $${dG^-\over du}=\exp({\alpha_-})\exp({\beta_-}){d(\alpha_-+\beta_-+\gamma_-)\over du}\exp({\gamma_-}),$$ $${dG^+\over du}=\exp(\gamma_+){d(\alpha_++\beta_++\gamma_+)\over du}\exp(\beta_+)\exp(\alpha_+),$$ $${dG^0\over du}={d\eta\over du}G^0 .$$
\noindent ii) If moreover $G=\exp(x_0^+u)\exp(yv)$ with $y\in L^-$, the property b) of remark \ref{gesponenziale} translates into
$$x_0^+G=\exp({\alpha_-})\exp({\beta_-}){d(\alpha_-+\beta_-+\gamma_-)\over du}\exp({\gamma_-})G^0G^++$$ $$+G^-{d\eta\over du}G^0G^++G^-G^0\exp(\gamma_+){d(\alpha_++\beta_++\gamma_+)\over du}\exp(\beta_+)\exp(\alpha_+).$$
\noindent iii) If in particular $y=x_1^-$, then $T\lambda_{-1}\Omega(G(u,v))=G(v,u)$; hence $$G^-(u,v)=T\lambda_{-1}\Omega(G^+)(v,u),$$ $$\alpha_-(u,v)=T\lambda_{-1}\Omega(\alpha_+)(v,u),$$ $$\beta_-(u,v)=T\lambda_{-1}\Omega(\beta_+)(v,u),$$ $$\gamma_-(u,v)=T\lambda_{-1}\Omega(\gamma_+)(v,u),$$ $$\eta(u,v)=\eta(v,u).$$ Observe that $T\lambda_{-1}\Omega(X_{2r+1}^+)=-X_{2r+3}^-$ $\forall r\in\Z$. \end{remark} \vskip .3 truecm
\noindent The following lemma is based on lemma \ref{cle}, iv) and on the defining relations of $\tilde{\cal U}$ (definition \ref{a22}). \begin{lemma}\label{derivcomm} With the notations fixed in \ref{gabg} we have that: $$x_0^+\exp(\alpha_-)=\leqno{i)}$$ $$=\exp(\alpha_-)\left(x_0^++[x_0^+,\alpha_-]+{1\over 2}[[x_0^+,\alpha_-],\alpha_-]+{1\over 6}[[[x_0^+,\alpha_-],\alpha_-],\alpha_-]\right);$$ $$x_0^+\exp(\alpha_-)\exp(\beta_-)=\exp(\alpha_-)\exp(\beta_-)\cdot\leqno{ii)}$$ $$ \cdot\left(x_0^++[x_0^+,\alpha_-]+{1\over 2}[[x_0^+,\alpha_-],\alpha_-]+{1\over 6}[[[x_0^+,\alpha_-],\alpha_-],\alpha_-]+[x_0^+,\beta_-]\right);$$ $$(x_0^++[x_0^+,\alpha_-])\exp(\gamma_-)=\leqno{iii)}$$ $$=\exp(\gamma_-)\left(x_0^++[x_0^+,\alpha_-]+[x_0^+,\gamma_-]\right)+$$ $$+\left([[x_0^+,\alpha_-],\gamma_-]+{1\over 2}[[x_0^+,\gamma_-],\gamma_-]-{1\over 2}[[[x_0^+,\alpha_-],\gamma_-],\gamma_-]\right)\exp(\gamma_-);$$ iv) $x_0^+\exp(\eta)=\exp(\eta)(y_0+y_1)$ with $$y_0\in{\mathbb{Q}}[w^2][[u,v]].x_0^+,\ \ y_1\in w{\mathbb{Q}}[w^2][[u,v]].x_0^+;$$ $$(y_0+y_1)\exp(\gamma_+)=\exp(\gamma_+)(y_0+y_1+[y_0,\gamma_+]).\leqno{v)}$$ vi) In conclusion $$x_0^+G={dG\over du}$$ if and only if the following relations hold: $${d\alpha_-\over du}=[x_0^+,\beta_-]+[[x_0^+,\alpha_-],\gamma_-]$$ $${d\beta_-\over du}={1\over 6}[[[x_0^+,\alpha_-],\alpha_-],\alpha_-]-{1\over 2}[[[x_0^+,\alpha_-],\gamma_-],\gamma_-]$$ $${d\gamma_-\over du}={1\over 2}[[x_0^+,\alpha_-],\alpha_-]+{1\over 2}[[x_0^+,\gamma_-],\gamma_-]$$ $${d\eta\over du}=[x_0^+,\gamma_-]+[x_0^+,\alpha_-]$$ $${d\alpha_+\over du}=y_0$$ $${d\beta_+\over du}=[y_0,\gamma_+]$$ $${d\gamma_+\over du}=y_1.$$ \begin{proof} i)-v) are straightforward repeated applications of lemma \ref{cle},iv) remarking that:
\noindent i) and ii): $[[[x_0^+,\alpha_-],\alpha_-],\alpha_-]\in\tilde{\cal U}^{-,c}[[u,v]]$, hence it commutes with both $\alpha_-$ and $\beta_-$ (which are in $\tilde{\cal U}^-[[u,v]]$);
\noindent ii): $\beta_-\in\tilde{\cal U}^{-,c}[[u,v]]$, hence it commutes also with $[[x_0^+,\alpha_-],\alpha_-]$ and $[x_0^+,\beta_-]$ (which belong to $\tilde{\cal U}^-[[u,v]]$) and with $[x_0^+,\alpha_-]$ (because $[h_{2r+1},\tilde{\cal U}^{-,c}]=0$ $\forall r\in\Z$);
\noindent iii): $[[x_0^+,\gamma_-],\gamma_-]$ and $[[[x_0^+,\alpha_-],\gamma_-],\gamma_-]$ belong respectively to $\tilde{\cal U}^{-,0}[[u,v]]$ and $\tilde{\cal U}^{-,c}[[u,v]]$, so that they commute with $\gamma_-\in\tilde{\cal U}^{-,0}[[u,v]]$; the claim follows from the identities $$(x_0^++[x_0^+,\alpha_-])\exp(\gamma_-)=\exp(\gamma_-)\cdot \Big(x_0^++[x_0^+,\alpha_-]+$$ $$+[x_0^+,\gamma_-]+[[x_0^+,\alpha_-],\gamma_-]+{1\over 2}[[x_0^+,\gamma_-],\gamma_-]+{1\over 2}[[[x_0^+,\alpha_-],\gamma_-],\gamma_-]\Big)$$ and $$\exp(\gamma_-)[[x_0^+,\alpha_-],\gamma_-]=([[x_0^+,\alpha_-],\gamma_-]-[[[x_0^+,\alpha_-],\gamma_-],\gamma_-])\exp(\gamma_-);$$ iv): lemma \ref{lhlh} implies that $\exp(\eta)^{-1}x_0^+\exp(\eta)\in{\mathbb{Q}}[w][[u,v]].x_0^+;$
\noindent v): $\gamma_+\in\tilde{\cal U}^{+,1}[[u,v]]$ commutes with both $y_1\in\tilde{\cal U}^{+,1}[[u,v]]$ and $[y_0,\gamma_+]\in\tilde{\cal U}^{+,c}[[u,v]]$.
\noindent Point vi) is a consequence of points i)-v) and remark \ref{gtg},i).
\end{proof} \end{lemma}
\begin{lemma}\label{wderivcomm} By abuse of notation let $\alpha_{\pm}$, $\beta_{\pm}$, $\gamma_{\pm}$, $\eta$ and $y_0$ (see notation \ref{gabg} and lemma \ref{derivcomm},iv)) denote also the elements of ${\mathbb{Q}}[w][[u,v]]$ such that $$\alpha_+=\alpha_+(w^2).x_0^+,\ \ \beta_+=\beta_+(w).X_1^+,\ \ \gamma_+=\gamma_+(w^2).x_1^+,$$ $$\alpha_-=\alpha_-(w^2).x_1^-,\ \ \beta_-=\beta_-(w).X_1^-,\ \ \gamma_-=\gamma_-(w^2).x_0^-,$$ $$\eta=\eta(w).h_0.$$ Then the relations of lemma \ref{derivcomm},vi) become: $${d\alpha_-(w^2)\over du}=4\beta_-(-w^2)-6\alpha_-(w^2)\gamma_-(w^2),$$ $${d\beta_-(w)\over du}=\alpha_-(-w)(w\alpha_-^2(-w)-3\gamma_-^2(-w)),$$ $${d\gamma_-(w^2)\over du}=-3w^2\alpha_-^2(w^2)-\gamma_-^2(w^2),$$ $${d\eta(w)\over du}=w\alpha_-(w^2)+\gamma_-(w^2),$$ $${d(\alpha_+(w^2)+w\gamma_+(w^2))\over du}=\exp(-4\eta(w)+2\eta(-w)),$$ $${d\beta_+(w)\over du}=-{d\alpha_+(-w)\over du}\gamma_+(-w).$$ \begin{proof} The claim is obtained using lemma \ref{qwconti}.\end{proof} \end{lemma}
\begin{proposition}\label{xmenogrande} $$\exp(x_0^+u)\exp(X_1^-v)=$$ $$=\exp({\alpha_-})\exp({\beta_-})\exp({\gamma_-})\exp(\eta)\exp(\gamma_+)\exp(\beta_+)\exp(\alpha_+)$$ where, with the notations of lemma \ref{wderivcomm}, $$\alpha_-(w)={4uv\over 1-4^2wu^4v^2},\ \ \ \ \alpha_+(w)={u\over 1-4^2wu^4v^2},$$ $$\beta_-(w)={(1+3\cdot 4^2wu^4v^2)v\over (1+4^2wu^4v^2)^2},\ \ \ \ \beta_+(w)={(1-4^2wu^4v^2)u^4v\over (1+4^2wu^4v^2)^2},$$ $$\gamma_-(w)={-4^2wu^3v^2\over 1-4^2wu^4v^2},\ \ \ \ \gamma_+(w)={-4u^3v\over 1-4^2wu^4v^2},$$ $$\eta(w)={1\over 2}\ln(1+4wu^2v).$$ In particular:
\noindent i) $(x_0^+)^{(k)}(X_1^-)^{(l)}\in\tilde{\cal U}_{\Z}^-\tilde{\cal U}_{\Z}^0\tilde{\cal U}_{\Z}^+$ for all $k,l\in\N$;
\noindent ii) $\hat h_+(4u)^{{1\over 2}}\in\tilde{\cal U}_{\Z}[[u]]$. \begin{proof} We use the notation fixed in \ref{gabg}.
\noindent It is obvious that $G(0,v)=\exp(X_1^-v)$, so that the condition a) of remark \ref{gesponenziale} is fulfilled, and we need to verify condition b), following lemmas \ref{derivcomm},vi) and \ref{wderivcomm}.
\noindent Remark that $${d\eta(w)\over du}={4wuv\over 1+4wu^2v}={4wuv(1-4wu^2v)\over 1-4^2w^2u^4v^2}=w\alpha_-(w^2)+\gamma_-(w^2)$$ and $$\exp(-4\eta(w)+2\eta(-w))={1-4wu^2v\over(1+4wu^2v)^2},$$ $$\alpha_+(w^2)+w\gamma_+(w^2)={u(1-4wu^2v)\over1-4^2w^2u^4v^2}={u\over 1+4wu^2v},$$ so that $${d(\alpha_+(w^2)+w\gamma_+(w^2))\over du}={1+4wu^2v-8wu^2v\over (1+4wu^2v)^2}=\exp(-4\eta(w)+2\eta(-w)).$$ Now let us recall that for all $n,m\in\N$ $${d\over du}{u^n\over (1-au^4)^m}={nu^{n-1}+(4m-n)au^{n+3}\over (1-au^4)^{m+1}},$$ hence, fixing $a=4^2w^2v^2$, we get $${d\alpha_-(w^2)\over du}={4v(1+3au^4)\over(1-au^4)^2},$$ $${d\beta_-(-w^2)\over du}={-4au^3v(1+3au^4)\over(1-au^4)^3},$$ $${d\gamma_-(w^2)\over du}={-a(3u^2+au^6)\over(1-au^4)^2},$$ $${d\alpha_+(w^2)\over du}={1+3au^4\over(1-au^4)^2},$$ $${d\beta_+(-w^2)\over du}={4vu^3(1+3au^4)\over(1-au^4)^3}.$$
The relations to prove are then equivalent to the following: $$4v(1+3au^4)=4(1-3au^4)v+6\cdot 4uv\cdot au^3,$$ $$-4au^3v(1+3au^4)=4uv(-w^24^2u^2v^2-3a^2u^6),$$ $$-a(3u^2+au^6)=-3w^2\cdot 4^2u^2v^2-a^2u^6,$$ $$4u^3v(1+3au^4)=(1+3au^4)4u^3v,$$ which are easily verified.
\noindent Then, since $\alpha_{\pm}$, $\beta_{\pm}$, $\gamma_{\pm}$ have integral coefficients, i) follows from example \ref{dvdpw}, remark \ref{whtilde} and lemma \ref{ometiomecap},iii).
\noindent ii) follows at once from the above considerations, inverting the exponentials.
\end{proof} \end{proposition}
\begin{proposition}\label{x0piux1meno} $$\exp(x_0^+u)\exp(x_1^-v)=$$ $$=\exp({\alpha_-})\exp({\beta_-})\exp({\gamma_-})\exp(\eta)\exp(\gamma_+)\exp(\beta_+)\exp(\alpha_+)$$ where, with the notations of lemma \ref{wderivcomm},
$$\alpha_+(w)={(1+wu^2v^2)u\over 1-6wu^2v^2+w^2u^4v^4},\ \ \ \ \alpha_-(w)={(1+wu^2v^2)v\over 1-6wu^2v^2+w^2u^4v^4},$$ $$\beta_+(w)={(1-4wu^2v^2-w^2u^4v^4)u^3v\over (1+6wu^2v^2+w^2u^4v^4)^2},\ \ \ \ \beta_-(w)={(1-4wu^2v^2-w^2u^4v^4)wuv^3\over (1+6wu^2v^2+w^2u^4v^4)^2},$$ $$\gamma_+(w)={(-3+wu^2v^2)u^2v\over 1-6wu^2v^2+w^2u^4v^4},\ \ \ \ \gamma_-(w)={(-3+wu^2v^2)wuv^2\over 1-6wu^2v^2+w^2u^4v^4},$$ $$\eta(w)={1\over 2}\ln(1+2wuv-w^2u^2v^2).$$ \begin{proof} \noindent We use the notations fixed in \ref{gabg}.
\noindent It is obvious that $G(0,v)=\exp(x_1^-v)$, so that condition a) of remark \ref{gesponenziale} is fulfilled, and we need to verify condition b), following lemma \ref{wderivcomm}.
\noindent First of all remark that $$1-6t^2+t^4=(1+2t-t^2)(1-2t-t^2)$$ and that $$1+t^2+(-3+t^2)t=1-3t+t^2+t^3=(1-t)(1-2t-t^2);$$ thus, replacing $t$ by $wuv$, we get $$\alpha_+(w^2)+w\gamma_+(w^2)={(1-wuv)u\over 1+2wuv-w^2u^2v^2}$$ and $$w\alpha_-(w^2)+\gamma_-(w^2)={(1-wuv)wv\over 1+2wuv-w^2u^2v^2}.$$ Hence the relations of lemma \ref{wderivcomm} involving $\eta$ are easily proved: $${d\eta(w)\over du}={(1-wuv)wv\over 1+2wuv-w^2u^2v^2}=w\alpha_-(w^2)+\gamma_-(w^2)$$ and $$\exp(-4\eta(w)+2\eta(-w))={1-2wuv-w^2u^2v^2\over(1+2wuv-w^2u^2v^2)^2}$$ while, on the other hand, $${d\over dt}{t-t^2\over 1+2t-t^2}={1-2t-t^2\over (1+2t-t^2)^2}$$ so that $${d\over du}(\alpha_+(w^2)+w\gamma_+(w^2))={1-2wuv-w^2u^2v^2\over(1+2wuv-w^2u^2v^2)^2}$$ and $$\exp(-4\eta(w)+2\eta(-w))={d\over du}(\alpha_+(w^2)+w\gamma_+(w^2)).$$
\noindent In order to prove the remaining relations remark that for all $n,m\in\N$ $${d\over dt}{t^n\over (1-6t^2+t^4)^m}={nt^{n-1}+6(2m-n)t^{n+1}+(n-4m)t^{n+3}\over (1-6t^2+t^4)^{m+1}},$$ which helps to compute the derivative of $\alpha_{\pm}(w^2)$, $\beta_{\pm}(-w^2)$, $\gamma_-(w^2)$, fixing $t=wuv$ and recalling that ${d\over du}=wv{d\over dt}$: $${d\alpha_-(w^2)\over du}={wv^2(14t-4t^3-2t^5)\over (1-6t^2+t^4)^2},$$ $${d\beta_-(-w^2)\over du}={w^2v^3(-1-30t^2-12t^4+14t^6-3t^8)\over (1-6t^2+t^4)^3},$$ $${d\gamma_-(w^2)\over du}={w^2v^2(-3-15t^2+3t^4-t^6)\over (1-6t^2+t^4)^2},$$ $${d\alpha_+(w^2)\over du}={1+9t^2-9t^4-t^6\over (1-6t^2+t^4)^2},$$ $${d\beta_+(-w^2)\over du}={w^{-2}v^{-1}(3t^2+26t^4-36t^6+6t^8+t^{10})\over (1-6t^2+t^4)^3}.$$ The relations to prove are then equivalent to the following: $$14t-4t^3-2t^5=-4(1+4t^2-t^4)t-6(1+t^2)(-3+t^2)t,$$ $$-1-30t^2-12t^4+14t^6-3t^8=(1+t^2)(-(1+t^2)^2-3(-3+t^2)^2t^2), $$ $$-3-15t^2+3t^4-t^6=-3(1+t^2)^2-(-3+t^2)^2t^2,$$ $$3t^2+26t^4-36t^6+6t^8+t^{10}=-(1+9t^2-9t^4-t^6)(-3+t^2)t^2,$$ which are easily verified.
\end{proof}
\end{proposition}
\begin{remark} Since $(1+2t-t^2)^{-1}\in\Z[[t]]$ proposition \ref{x0piux1meno} implies that $G^{\pm}\in\tilde{\cal U}_{\Z}^{\pm}[[u,v]]$ (see notation \ref{gabg}). Then, in order to prove that $$(x_0^+)^{(k)}(x_1^-)^{(l)}\in\tilde{\cal U}_{\Z}^{-}\tilde{\cal U}_{\Z}^{0}\tilde{\cal U}_{\Z}^{+},$$ we just need to show that $\exp(\eta)\in\tilde{\cal U}_{\Z}^{0}[[u,v]]$. This will imply that $\tilde{\cal U}_{\Z}^{-}\tilde{\cal U}_{\Z}^{0}\tilde{\cal U}_{\Z}^{+}$ is closed under multiplication, hence it is an integral form of $\tilde{\cal U}$, obviously containing $\tilde{\cal U}_{\Z}$.
\noindent In order to prove that $\tilde{\cal U}_{\Z}=\tilde{\cal U}_{\Z}^{-}\tilde{\cal U}_{\Z}^{0}\tilde{\cal U}_{\Z}^{+}$ we need to show in addition that $\tilde{\cal U}_{\Z}^0\subseteq\tilde{\cal U}_{\Z}$.
\noindent The last part of this paper is devoted to proving that $$\exp\left({1\over 2}\ln(1+2u-u^2).h_0\right)\in\tilde{\cal U}_{\Z}^0[[u]]$$ (see corollary \ref{hcappucciod}) and that $\tilde{\cal U}_{\Z}^0\subseteq\tilde{\cal U}_{\Z}$ (see proposition \ref{samealg}).
\end{remark}
\begin{notation} \label{notedn}
\noindent In the following $d:\Z_+\to{\mathbb{Q}}$ denotes the function defined by $$\sum_{n>0}(-1)^{n-1}{d_n\over n}u^n={1\over 2}\ln(1+2u-u^2)$$ and $\tilde d=\varepsilon d$ (that is $\tilde d_n=\varepsilon_n d_n$ for all $n>0$, where $\varepsilon_n$ has been defined in definition \ref{thuz}).
\noindent Remark that with this notation we have $\exp(\eta)=\hat h_+^{\{d\}}(uv)$ ($\eta$ as in lemma \ref{wderivcomm} and proposition \ref{x0piux1meno}, $\hat h_+^{\{d\}}(u)$ as in notation \ref{hcappucciof}, where we replace $\hat h^{\{d\}}(u)$ by $\hat h_+^{\{d\}}(u)$ in order to distinguish it from its symmetric $\hat h_-^{\{d\}}(u)=\Omega(\hat h_+^{\{d\}}(u))$). \end{notation}
\begin{remark}\label{hhdehh} From $1+2u-u^2=(1+(1+\sqrt 2)u)(1+(1-\sqrt 2)u)$, we get that:
\noindent i) for all $n\in\Z_+$ $d_n={1\over 2}((1+\sqrt{2})^n+(1-\sqrt{2})^n)$; equivalently $\exists\delta_n\in\Z$ such that $$\forall n\in\Z_+\ \ \ (1+\sqrt{2})^n=d_n+\delta_n\sqrt{2}.$$
\noindent ii) $d_n$ is odd for all $n\in\Z_+$; $\delta_n$ is odd if and only if $n$ is odd.
\noindent iii) $\Z[\hat h_k^{\{d\}}|k>0]\not\subseteq\Z[\hat h_k|k>0]$ (indeed $(\mu*d)(4)=d_4-d_2=17-3=14$, which is not a multiple of 4, see propositions \ref{convoluzioneintera} and \ref{emmepiallaerre}).
\noindent iv) $\Z[\hat h_k^{\{d\}}|k>0]\subseteq\Z[\tilde h_k|k>0]$ if and only if $\Z[\hat h_k^{\{\tilde d\}}|k>0]\subseteq\Z[\hat h_k|k>0]$.
\end{remark}
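\noindent For concreteness, remark \ref{hhdehh},i) (equivalently, the recurrences $d_{n+1}=2d_n+d_{n-1}$ and $\delta_{n+1}=2\delta_n+\delta_{n-1}$, which follow from $(1+\sqrt 2)^2=2(1+\sqrt 2)+1$) gives the first values $$d_1,\dots,d_8=1,3,7,17,41,99,239,577,\ \ \ \ \delta_1,\dots,\delta_8=1,2,5,12,29,70,169,408,$$ in agreement with points ii) and iii) of remark \ref{hhdehh}.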
\begin{lemma}\label{contidp} \noindent Let $p,m,r\in\Z_+$ be such that $p$ is prime and $(m,p)=1$. Then
$${\rm{if}}\ p^r=4:\ \ \ 4\,|\,d_{4m}+d_{2m},$$
$${\rm{if}}\ p^r\neq 4:\ \ \ p^r\,|\,d_{p^rm}-d_{p^{r-1}m}.$$
\begin{proof} \noindent The claim is obvious for $p^r=2$ since the $d_n$'s are all odd.
\noindent In general if $n$ is any positive integer it follows from remark \ref{hhdehh} that $$d_{np}+\delta_{np}\sqrt{2}=(d_n+\delta_n\sqrt{2})^p.$$ If $p=2$ this means that $$d_{2n}=d_n^2+2\delta_n^2,$$ $$\delta_{2n}=2d_n\delta_n,$$ hence
$$2^r||\delta_{2^rm}\ \ {\rm{(recall\ that}}\ \delta_m\ {\rm{is\ odd\ since}}\ m\ {\rm{is\ odd)}}$$ $$d_{2^rm}\equiv d_{2^{r-1}m}^2\ \ (mod\ 2^{2r-1}),$$ from which it follows that $$d_{2m}\equiv -1\ \ (mod\ 4),$$ $$d_{2^rm}\equiv 1\ \ (mod\ 2^{r+1})\ \ \ {\rm{if}}\ r>1:$$ indeed, since $d_m$ and $\delta_m$ are odd, $$d_{2m}\equiv_{(8)}1+2\equiv_{(4)}-1,$$ while if $r\geq 2$ then $2r-1\geq r+1$ and by induction on $r$ we get $$d_{2^rm}\equiv d^2_{2^{r-1}m}=(\pm 1+2^rk)^2\equiv 1\ \ (mod\ 2^{r+1}).$$ These last relations immediately imply the claim for $p=2$.
\noindent Now let $p\neq 2$. Then $$d_{pn}=\sum_{h\geq 0}{p\choose 2h}2^hd_{n}^{p-2h}\delta_n^{2h},$$ $$\delta_{pn}=\sum_{h\geq 0}{p\choose 2h+1}2^hd_{n}^{p-2h-1}\delta_n^{2h+1}.$$ Suppose that $d_n=d+p^{r-1}k$, $\delta_n=\delta+p^{r-1}k'$ with $k=k'=0$ if $r=1$. Then $$d_{pn}\equiv\sum_{h\geq 0}{p\choose 2h}2^hd^{p-2h}\delta^{2h}\ \ (mod\ p^r),$$ $$\delta_{pn}\equiv\sum_{h\geq 0}{p\choose 2h+1}2^hd^{p-2h-1}\delta^{2h+1}\ \ (mod\ p^r).$$ The above relations allow us to prove by induction on $r>0$ that if $\zeta_p$ is defined by the properties $\zeta_p\in\{\pm 1\}$, $\zeta_p\equiv_{(p)} 2^{{p-1\over 2}}$ then $$d_{p^rm}\equiv d_{p^{r-1}m}\ \ (mod\ p^r)\ \ \ {\rm{and}}\ \ \ \delta_{p^rm}\equiv \zeta_p\delta_{p^{r-1}m}\ \ (mod\ p^r):$$ indeed if $r=1$ $$d_{pm}\equiv d_m^p\equiv d_m\ \ (mod\ p),$$ $$\delta_{pm}\equiv 2^{{p-1\over 2}}\delta_m^p\equiv\zeta_p\delta_m\ \ (mod\ p);$$ if $r>1$ then $$d_{p^rm}\equiv_{(p^r)}\sum_{h\geq 0}{p\choose 2h}2^hd_{p^{r-2}m}^{p-2h}\delta_{p^{r-2}m}^{2h}\equiv_{(p^r)}d_{p^{r-1}m},$$ $$\delta_{p^rm}\equiv_{(p^r)}\zeta_p\sum_{h\geq 0}{p\choose 2h+1}2^hd_{p^{r-2}m}^{p-2h-1}\delta_{p^{r-2}m}^{2h+1}\equiv_{(p^r)}\zeta_p\delta_{p^{r-1}m}.$$ \end{proof} \end{lemma} \vskip .3 truecm
\begin{corollary}\label{hcappucciod}
\noindent $\hat h_n^{\{d\}}\in\Z[\tilde h_k|k>0]$ for all $n>0$.
\noindent In particular $(x_0^+)^{(k)}(x_1^-)^{(l)}\in\tilde{\cal U}_{\Z}^-\tilde{\cal U}_{\Z}^0\tilde{\cal U}_{\Z}^+$ $\forall k,l\in\N$.
\begin{proof} \noindent The claim follows from propositions \ref{convoluzioneintera} and \ref{emmepiallaerre}, remark \ref{hhdehh} and lemma \ref{contidp}, remarking that if $m$ is odd then $$ d_{4m}+d_{2m}=-(\tilde d_{4m}-\tilde d_{2m})$$ while if $(m,p)=1$ and $p^r\neq 4$ then $$d_{p^rm}-d_{p^{r-1}m}=\pm(\tilde d_{p^rm}-\tilde d_{p^{r-1}m}) .$$
Thus for all $n>0$ $\hat h_n^{\{\tilde d\}}\in\Z[\hat h_k|k>0]$ and
$\hat h_n^{\{d\}}\in\Z[\tilde h_k|k>0]$. \end{proof} \end{corollary}
\begin{corollary}\label{b22} $\tilde{\cal U}_{\Z}^+\tilde{\cal U}_{\Z}^-\subseteq\tilde{\cal U}_{\Z}^-\tilde{\cal U}_{\Z}^0\tilde{\cal U}_{\Z}^+$; equivalently $\tilde{\cal U}_{\Z}^-\tilde{\cal U}_{\Z}^0\tilde{\cal U}_{\Z}^+$ is an integral form of $\tilde{\cal U}$. \begin{proof} The proof is identical to that of proposition \ref{strutmodulo} replacing $\hat{\cal U}$ with $\tilde{\cal U}$, taking care to remark that in this case, too, $$(x_r^+)^{(k)}(x_s^-)^{(l)}\in\sum_{m\geq 0}\tilde{\cal U}_{\Z,-l+m}^-\tilde{\cal U}_{\Z}^0\tilde{\cal U}_{\Z,k-m}^+\ \ \forall r,s\in\Z,\ \forall k,l\in\N:$$ if $r+s$ is even this follows at once comparing proposition \ref{czzq} with the properties of the gradation, while if $r+s$ is odd it is true by proposition \ref{x0piux1meno} and remark \ref{autstab},iv).
\end{proof} \end{corollary}
\begin{proposition} \label{samealg} $\tilde{\cal U}_{\Z}^0\subseteq\tilde{\cal U}_{\Z}$ and $\tilde{\cal U}_{\Z}=\tilde{\cal U}_{\Z}^-\tilde{\cal U}_{\Z}^0\tilde{\cal U}_{\Z}^+$.
\begin{proof}
\noindent Let ${\cal Z}$ be the $\Z$-subalgebra of ${\mathbb{Q}}[h_r|r>0]$ generated by the coefficients of $\hat h_+^{\{d\}}(u)$ and of $\hat h_+(4u)^{1/2}$. Remark that, by propositions \ref{xmenogrande} and \ref{x0piux1meno}, ${\cal Z}\subseteq\tilde{\cal U}_{\Z}$.
\noindent We have already proved that ${\cal Z}\subseteq\Z[\tilde h_k|k>0]$ (see lemma \ref{ometiomecap},iii) and corollary \ref{hcappucciod}). Let us prove, by induction on $j$, that $\tilde h_j\in{\cal Z}$ for all $j>0$.
\noindent If $j=1$ the claim depends on the equality $\tilde h_1=h_1=\hat h^{\{d\}}_1$ (since $\varepsilon_1=d_1=1$).
\noindent Let $j>1$ and suppose that $\tilde h_1,...,\tilde h_{j-1}\in{\cal Z}$.
\noindent We notice that if $a:\Z_+\to\Z$ is such that $\hat h_j^{\{a\}}\in{\cal{Z}}$ then $a_j\tilde h_j\in{\cal{Z}}$: indeed it is always true that $$\tilde h_j-{\varepsilon_jh_j\over j}\in{\mathbb{Q}}[h_1,...,h_{j-1}]$$ and $$\hat h_j^{\{a\}}-{a_jh_j\over j}\in{\mathbb{Q}}[h_1,...,h_{j-1}]$$ from which we get that $$\hat h_j^{\{a\}}-\varepsilon_ja_j\tilde h_j\in{\mathbb{Q}}[h_1,...,h_{j-1}];$$
but the condition $\hat h_j^{\{a\}}\in{\cal{Z}}\subseteq\Z[\tilde h_k|k>0]$ and the inductive hypothesis $\Z[\tilde h_1,...,\tilde h_{j-1}]\subseteq{\cal{Z}}$ imply that $$\hat h_j^{\{a\}}-\varepsilon_ja_j\tilde h_j\in{\mathbb{Q}}[h_1,...,h_{j-1}]
\cap\Z[\tilde h_k|k>0]=\Z[\tilde h_1,...,\tilde h_{j-1}] \subseteq{\cal{Z}}$$ hence $a_j\tilde h_j\in{\cal{Z}}$.
\noindent This in particular holds for $a=d$ and for $\hat h^{\{a\}}(u)=\hat h_+(4u)^{{1\over 2}}$, hence $$d_j\tilde h_j\in{\cal Z}\ \ {\rm{and}}\ \ 2^{2j-1}\tilde h_j\in{\cal Z}.$$ But $(d_j,2^{2j-1})=1$ because $d_j$ is odd; writing $1=\alpha d_j+\beta 2^{2j-1}$ with $\alpha,\beta\in\Z$ we then get $\tilde h_j=\alpha(d_j\tilde h_j)+\beta(2^{2j-1}\tilde h_j)\in{\cal Z}$.
\noindent Then $\tilde{\cal U}_{\Z}^{0,+}=\Z[\tilde h_k|k>0]={\cal Z}\subseteq\tilde{\cal U}_{\Z}$ and, applying $\Omega$, $\tilde{\cal U}_{\Z}^{0,-}\subseteq\tilde{\cal U}_{\Z}$. The claim follows recalling corollary \ref{czzp}. \end{proof} \end{proposition}
\noindent We can now collect the results obtained so far in the main theorem of this work.
\vskip .3 truecm \begin{theorem}\label{trmA22}
The $\Z$-subalgebra $\tilde{\cal U}_{\Z}$ of $\tilde{\cal U}$ generated by $$\{(x_r^+)^{(k)},(x_r^-)^{(k)}|r\in\Z,k\in\N\}$$ is an integral form of $\tilde{\cal U}$.
\noindent More precisely $$\tilde{\cal U}_{\Z}\cong \tilde{\cal U}_{\Z}^{-,1}\otimes\tilde{\cal U}_{\Z}^{-,c}\otimes\tilde{\cal U}_{\Z}^{-,0}\otimes\tilde{\cal U}_{\Z}^{0,-}\otimes\tilde{\cal U}_{\Z}^{0,0}\otimes\tilde{\cal U}_{\Z}^{0,+}\otimes\tilde{\cal U}_{\Z}^{+,1}\otimes\tilde{\cal U}_{\Z}^{+,c}\otimes\tilde{\cal U}_{\Z}^{+,0}$$
and a $\Z$-basis of $\tilde{\cal U}_{\Z}$ is given by the product $$B^{-,1}B^{-,c}B^{-,0}B^{0,-}B^{0,0}B^{0,+}B^{+,1}B^{+,c}B^{+,0}$$ where $B^{\pm,0}$, $B^{\pm,1}$, $B^{\pm,c}$, $B^{0,\pm}$ and $B^{0,0}$ are the $\Z$-bases respectively of $\tilde{\cal U}_{\Z}^{\pm,0}$, $\tilde{\cal U}_{\Z}^{\pm,1}$, $\tilde{\cal U}_{\Z}^{\pm,c}$, $\tilde{\cal U}_{\Z}^{0,\pm}$ and $\tilde{\cal U}_{\Z}^{0,0}$ given as follows:
$$B^{\pm,0}=\Big\{{(\bf{x}}^{\pm,0})^{({\bf{k}})}=\prod_{r\in\Z}(x_{2r}^{\pm})^{(k_r)}|{\bf{k}}:\Z\to\N\,\, {\rm{is\, finitely\, supported}}\Big\}$$
$$B^{\pm,1}=\Big\{{(\bf{x}}^{\pm,1})^{({\bf{k}})}=\prod_{r\in\Z}(x_{2r+1}^{\pm})^{(k_r)}|{\bf{k}}:\Z\to\N\,\, {\rm{is\, finitely\, supported}}\Big\}$$
$$B^{\pm,c}=\Big\{{(\bf{X}}^{\pm})^{({\bf{k}})}=\prod_{r\in\Z}(X_{2r+1}^{\pm})^{(k_r)}|{\bf{k}}:\Z\to\N\,\, {\rm{is\, finitely\, supported}}\Big\}$$
$$B^{0,\pm}=\Big\{{\tilde{{\bf{h}}}_{\pm}^{\bf{k}}}=\prod_{l\in\N}{\tilde h_{\pm l}^{k_l}}|{\bf{k}}:\N\to\N\,\,{\rm{is\, finitely\, supported}}\Big\}$$
$$B^{0,0}=\Big\{{h_0\choose k}{c\choose\tilde k}|k,\tilde k\in\N\Big\}.$$ \end{theorem}
\vskip .5 truecm
\appendix\label{appendix}
\addcontentsline{toc}{section}{Appendices}
\renewcommand{\thesubsection}{\Alph{subsection}} \subsection{Straightening formulas of $A^{(2)}_2$}\label{appendA}
\newtheorem{mydefinition}{Definition} \numberwithin{mydefinition}{subsection} \newtheorem{mytheorem}[mydefinition]{Theorem} \newtheorem{myremark}[mydefinition]{Remark} \newtheorem{mynotation}[mydefinition]{Notation} \newtheorem{mylemma}[mydefinition]{Lemma} \newtheorem{mycorollary}[mydefinition]{Corollary}
\numberwithin{equation}{subsection}
For the sake of completeness we collect here the commutation formulas of $A^{(2)}_2$, including also the formulas that were not needed for the proof of theorem \ref{trmA22}.
\noindent Notation \ref{pppm} and remark \ref{epppm} will help in writing some of the following straightening relations and in understanding the origin of some apparently mysterious terms.
\begin{mynotation}\label{pppm}
Given $p(t)\in{\mathbb{Q}}[[t]]$ let us define $p_+(t), p_-(t)\in{\mathbb{Q}}[[t^2]]$ and $p_0(t)\in{\mathbb{Q}}[[t]]$ by $$p(t)=p_+(t)+tp_-(t),\ p_0(t^2)={1\over 2}p_+(t)p_-(t).$$ Remark that the maps $p(t)\mapsto p_+(t)$ and $p(t)\mapsto p_-(t)$ are homomorphisms of ${\mathbb{Q}}[[t^2]]$-modules while $q(t)\in{\mathbb{Q}}[[t^2]], \tilde q(t^2)=q(t)\Rightarrow (qp)_0(t)=\tilde q(t)^2p_0(t)$. \end{mynotation} \begin{myremark}\label{epppm} Given $p(t)\in{\mathbb{Q}}[[t]]$ we have that $$\exp(p(uw).x_0^+)=$$ $$=\exp(p_+(uw).x_0^+)\exp(up_0(-u^2w).X_1^+)\exp(up_-(uw).x_1^+)=$$ $$=\exp(up_-(uw).x_1^+)\exp(-up_0(-u^2w).X_1^+)\exp(p_+(uw).x_0^+).$$
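\noindent For instance, taking $p(t)=(1-t)^2=1-2t+t^2$ in notation \ref{pppm} one finds $p_+(t)=1+t^2$, $p_-(t)=-2$ and $p_0(t)=-(1+t)$, so that the first of the above identities reads $$\exp((1-uw)^2.x_0^+)=\exp((1+u^2w^2).x_0^+)\exp(-(1-u^2w)u.X_1^+)\exp(-2u.x_1^+);$$ this simple case may serve as a check of the conventions, while the case $p(t)=(1-t)^6$, needed below, is worked out in VI).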
\end{myremark} \vskip .3 truecm \noindent We shall now list a complete set of {\bf{straightening formulas}} in $\tilde{\cal U}_{\Z}$.
\vskip .3 truecm \noindent I) Zero commutations regarding $\tilde{\cal U}_{\Z}^{0,0}$: $${c\choose k}\ {\rm{is\ central\ in\ }}\tilde{\cal U}_{\Z};$$ $${h_0\choose k}\ {\rm{is\ central\ in\ }}\tilde{\cal U}_{\Z}^0:\left[{h_0\choose k},\tilde h_l\right]=0\ \forall k\geq 0,\ l\neq 0.$$
\noindent II) Relations in $\tilde{\cal U}_{\Z}^{0,+}$ (from which those in $\tilde{\cal U}_{\Z}^{0,-}$ follow as well): $$\tilde{\cal U}_{\Z}^{0,+}\ {\rm{is\ commutative}}:\ [\tilde h_k,\tilde h_l]=0\ \forall k,l>0;$$ $$\tilde \lambda_m(\tilde h_+(-u^m))=\prod_{j=1}^{m}\tilde h_+(-\omega^ju)\ \forall m\in \Z_+$$ ${\rm{where}}\ \omega\ {\rm{is\ a\ primitive\ }}m^{{\rm{th}}}\ {\rm{root\ of\ }}1$, that is $$\tilde \lambda_m(\tilde h_k)=(-1)^{(m-1)k}\sum_{(k_1,...,k_m):\atop k_1+...+k_m=mk} \omega^{\sum_{j=1}^m j k_j}\tilde h_{k_1} \dots \tilde h_{k_m} ;$$ if $m$ is odd $$\lambda_m(\tilde h_k)=\tilde \lambda_m(\tilde h_k)\ \forall k\geq 0;$$ if $m$ is even $$\lambda_{m}(\hat h_+(u))=\tilde\lambda_{m}(\tilde h_+((-1)^{m\over 2}u)^{-1});$$
$$\hat h_+(u)=\tilde h_+(u)\tilde\lambda_4(\tilde h_+(-u^4)^{-{1\over 2}})=\tilde h_+(u)^{{1\over 2}}\tilde h_+(-u)^{-{1\over 2}}\tilde h_+(iu)^{-{1\over 2}}\tilde h_+(-iu)^{-{1\over 2}},$$
$$\hat h_+^{\{d\}}(u)=\hat h_+((1+\sqrt{2})u)^{{1\over 2}}\hat h_+((1-\sqrt{2})u)^{{1\over 2}}=\prod_{m>0}\tilde\lambda_m(\tilde h_+(u^m))^{k_m}$$ where the $k_m$'s are integers defined by the identity $$1+2u-u^2=(1-2u-u^2)(1+6u^2+u^4)\prod_{m>0}(1+u^m)^{4k_m}.$$
\noindent The corresponding relations in $\tilde{\cal U}_{\Z}^{0,-}$ are obtained applying $\Omega$, that is just replacing $\tilde h_k$, $\tilde h_+(u)$ and $\hat h_+(u)$ with $\tilde h_{-k}$, $\tilde h_-(u)$ and $\hat h_-(u)$.
\noindent III) Other straightening relations in $\tilde{\cal U}_{\Z}^0$: $$\tilde h_+(u)\tilde h_-(v)=\tilde h_-(v)(1-uv)^{-4c}(1+uv)^{2c}\tilde h_+(u).$$
\noindent IV) Commuting elements and straightening relations in $\tilde{\cal U}_{\Z}^+$ (and in $\tilde{\cal U}_{\Z}^-$): $$(X_{2r+1}^+)^{(k)}\ {\rm{is\ central\ in\ }}\tilde{\cal U}_{\Z}^+:$$ $$[(X_{2r+1}^+)^{(k)},(x_s^+)^{(l)}]=0=[(X_{2r+1}^+)^{(k)},(X_{2s+1}^+)^{(l)}]\ \forall r,s\in\Z,\ k,l\in\N;$$ $${\rm{if}}\ r+s\ {\rm{is\ even\ }}[(x_r^+)^{(k)},(x_s^+)^{(l)}]=0\ \forall k,l\in\N;$$ $${\rm{if}}\ r+s\ {\rm{is\ odd\ }} \exp(x_r^+ u)\exp(x_{s}^+ v)=\exp(x_s^+ v)\exp((-1)^sX_{r+s}^+ uv)\exp(x_{r}^+ u).$$
\noindent All the relations in $\tilde{\cal U}_{\Z}^-$ are obtained from those in $\tilde{\cal U}_{\Z}^+$ applying the antiautomorphism $\Omega$; in particular if $r+s$ is odd $$\exp(x_r^- u)\exp(x_{s}^- v)=\exp(x_s^- v)\exp((-1)^rX_{r+s}^- uv)\exp(x_{r}^- u).$$
\noindent V) Straightening relations for $\tilde{\cal U}_{\Z}^+\tilde{\cal U}_{\Z}^{0,0}$ (and for $\tilde{\cal U}_{\Z}^{0,0} \tilde{\cal U}_{\Z}^-$): $\forall r\in\Z,\ k,l\in\N$
$$(x_r^+)^{(k)}{h_0\choose l}={h_0-2k\choose l}(x_r^+)^{(k)},$$ $$(X_{2r+1}^+)^{(k)}{h_0\choose l}={h_0-4k\choose l}(X_{2r+1}^+)^{(k)},$$ and $${h_0\choose l}(x_r^-)^{(k)}=(x_r^-)^{(k)}{h_0-2k\choose l},$$ $${h_0\choose l}(X_{2r+1}^-)^{(k)}=(X_{2r+1}^-)^{(k)}{h_0-4k\choose l}.$$
\noindent VI) Straightening relations for $\tilde{\cal U}_{\Z}^+\tilde{\cal U}_{\Z}^{0,+}$ (and for $\tilde{\cal U}_{\Z}^+\tilde{\cal U}_{\Z}^{0,-}$, $\tilde{\cal U}_{\Z}^{0,\pm} \tilde{\cal U}_{\Z}^-$):
$$(X_{2r+1}^+)^{(k)}\tilde h_+(u)=\tilde h_+(u)\left((1-u^2T^{-1})^2X_{2r+1}^+\right)^{(k)}$$ and $$(x_r^+)^{(k)}\tilde h_+(u)=\tilde h_+(u)\left({(1-uT^{-1})^6(1+u^2T^{-2})\over(1-u^2T^{-2})^3}x_r^+\right)^{(k)};$$ the expression for $\left({(1-uT^{-1})^6(1+u^2T^{-2})\over(1-u^2T^{-2})^3}x_r^+\right)^{(k)}$ can be straightened more explicitly: setting $p(t)=(1-t)^6$ we have $$ p_+(t)=1+15t^2+15t^4+t^6,$$ $$ p_-(t)=-6-20t^2-6t^4,$$ $$ p_0(t)=-(1+15t+15t^2+t^3)(3+10t+3t^2),$$ so that $$\exp(x_r^+v)\tilde h_+(u)=\tilde h_+(u)\exp\left({(1-uT^{-1})^6(1+u^2T^{-2})\over(1-u^2T^{-2})^3}x_r^+v\right)=$$ $$=\tilde h_+(u)\exp\left({p_-(uT^{-1})(1+u^2T^{-2})\over(1-u^2T^{-2})^3}x_{r+1}^+uv\right)\cdot$$ $$\cdot\exp\left({(-1)^{r-1}p_0(-u^2T^{-1})(1-u^2T^{-1})^2\over(1+u^2T^{-1})^6}X_{2r+1}^+uv^2\right)\cdot$$ $$\cdot\exp\left({p_+(uT^{-1})(1+u^2T^{-2})\over(1-u^2T^{-2})^3}x_r^+v\right).$$ Applying the homomorphism $\lambda_{-1}$ (that is $x_s^+\mapsto x_{-s}^+$, $X_s^+\mapsto X_{-s}^+$, $\tilde h_+\mapsto\tilde h_-$, $T^{-1}\mapsto T$) one immediately gets the expression for $(X_{2r+1}^+)^{(k)}\tilde h_-(u)$ and for $\exp(x_{r}^+v)\tilde h_-(u)$.
\noindent Applying the antiautomorphism $\Omega$ ($x_s^+\mapsto x_{-s}^-$, $X_s^+\mapsto X_{-s}^-$, $\tilde h_+\leftrightarrow\tilde h_-$) one gets analogously the expression for $\tilde h_{\pm}(u)(X_{2r+1}^-)^{(k)}$ and for $\tilde h_{\pm}(u)\exp(x_{r}^-v)$.
\noindent VII) Straightening relations for $\tilde{\cal U}_{\Z}^+\tilde{\cal U}_{\Z}^-$:
\noindent VII,a) $\frak sl_2$-like relations: $\forall r\in\Z$ $${\rm{exp}}(x_r^+u){\rm{exp}}(x_{-r}^-v)={\rm{exp}}\Big({x_{-r}^-v\over 1+uv}\Big)(1+uv)^{h_0+rc}{\rm{exp}}\Big({x_r^+u\over 1+uv}\Big),$$ $${\rm{exp}}(X_{2r+1}^+u){\rm{exp}}(X_{-2r-1}^-v)={\rm{exp}}\Big({X_{-2r-1}^-v\over 1+4^2uv}\Big)(1+4^2uv)^{{h_0\over 2}+{(2r+1)c\over 4}}{\rm{exp}}\Big({X_{2r+1}^+u\over 1+4^2uv}\Big).$$
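\noindent As a consistency check, comparing the coefficients of $uv$ on the two sides of these identities one recovers the relations $[x_r^+,x_{-r}^-]=h_0+rc$ and $[X_{2r+1}^+,X_{-2r-1}^-]=8h_0+4(2r+1)c$.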
\noindent VII,b) $\hat\frak sl_2$-like relations:
\noindent if $r+s\neq 0$ is even $$\exp(x_{r}^+u)\exp(x_s^-v)=$$ $$=\exp\left({1\over 1+uvT^{r+s}}x_s^-v\right) \lambda_{r+s}(\hat h_+(uv)) \exp\left({1\over 1+uvT^{-r-s}}x_r^+v\right),$$ while $\forall r+s\neq 0$ $$\exp(X_{2r+1}^+u)\exp(X_{2s-1}^-v)=$$ $$=\exp\left({1\over 1+4T^{s+r}uv}X_{2s-1}^-v\right)\cdot$$ $$\cdot\lambda_{2(r+s)}(\hat h_+(4^2uv)^{{1\over 2}})\cdot$$ $$\cdot\exp\left({1\over1+4uvT^{-s-r}}X_{2r+1}^+u\right).$$
\noindent VII,c) Straightening relations for $\tilde{\cal U}_{\Z}^{+,0}\tilde{\cal U}_{\Z}^{-,c}$ (and $\tilde{\cal U}_{\Z}^{+,1}\tilde{\cal U}_{\Z}^{-,c}$, $\tilde{\cal U}_{\Z}^{+,c}\tilde{\cal U}_{\Z}^{-,{0\atop 1}}$): $$\exp(x_0^+u)\exp(X_{1}^-v)=$$ $$=\exp\left({4\over 1-4^2w^2u^4v^2}x_1^-uv\right)\exp\left({-4^2w^2\over 1-4^2w^2u^4v^2}x_0^-u^3v^2\right)\cdot$$ $$\cdot\exp\left({1+3\cdot 4^2wu^4v^2\over (1+4^2wu^4v^2)^2}X_1^-v\right) \hat h_+(4u^2v)^{{1\over 2}}\exp\left({1-4^2w^{-1}u^4v^2\over (1+4^2w^{-1}u^4v^2)^2}X_1^+u^4v\right)\cdot$$ $$\cdot\exp\left({-4\over 1-4^2w^{-2}u^4v^2}x_1^+u^3v\right)\exp\left({1\over 1-4^2w^{-2}u^4v^2}x_0^+u\right),$$ which can be written in a more compact way observing that $${1\over 1-4^2t^2}=\left({1\over 1+4t}\right)_+,\ {-4\over 1-4^2t^2}=\left({1\over 1+4t}\right)_-,\ \left({1\over 1+4t}\right)_0={-2\over (1-4^2t)^2}:$$
$$\exp(x_0^+u)\exp(X_{1}^-v)=$$ $$=\exp\left({4\over 1+4wu^2v}x_1^-uv\right)\exp\left({1\over 1+4^2wu^4v^2}X_1^-v\right)\cdot$$ $$\cdot\hat h_+(4u^2v)^{{1\over 2}}\exp\left({1\over 1+4w^{-1}u^2v}x_0^+u\right)\exp\left(-{1\over 1+4^2w^{-1}u^4v^2}X_1^+u^4v\right);$$ that is more symmetric but less explicit in terms of the given basis of $\tilde{\cal U}_{\Z}$.
\noindent Applying the homomorphism $T^{-r}\lambda_{2r+2s+1}$ (that is $x_l^{\pm}\mapsto x_{l(2r+2s+1)\pm r}^{\pm}$, $X_1^{\pm}\mapsto (-1)^rX_{2r+2s+1\pm 2r}^{\pm}$, $\hat h_k\mapsto \lambda_{2r+2s+1}(\hat h_k)$, $w\big|_{L^{\pm}}\mapsto T^{\mp(2r+2s+1)}$) one deduces the expression for $\exp(x_r^+u)\exp(X_{2s+1}^-v)$.
\noindent Applying $\Omega$ one analogously gets the expression for $\exp(X_{2r+1}^+u)\exp(x_s^-v)$.
\noindent VII,d) The remaining relations:
$$\exp(x_0^+u)\exp(x_1^-v)=$$ $$=\exp \left ({{1+w^2u^2v^2}\over{1-6w^2u^2v^2+w^4u^4v^4}}x_1^-v\right) \exp \left({{-3+w^2u^2v^2}\over{1-6w^2u^2v^2+w^4u^4v^4}}x_2^-uv^2\right) \cdot$$ $$\cdot \exp \left (-{{1-4wu^2v^2-w^2u^4v^4}\over{(1+6wu^2v^2+w^2u^4v^4)^2}}X_3^-uv^3\right ) \hat h_+^{\{d\}}(uv)\cdot$$ $$\cdot\exp \left({{1-4wu^2v^2-w^2u^4v^4}\over{(1+6wu^2v^2+w^2u^4v^4)^2}}X_1^+u^3v\right)\cdot$$ $$\cdot \exp \left({{-3+w^2u^2v^2}\over{1-6w^2u^2v^2+w^4u^4v^4}}x_1^+u^2v\right)\exp \left({{1+ w^2u^2v^2}\over{(1-6w^2u^2v^2+w^4u^4v^4)^2}}x_0^+u\right)$$ or, as well, $$\exp(x_0^+u)\exp(x_1^-v)=$$ $$=\exp\left({1-wuv\over 1+2wuv -w^{2}u^2v^2} x_{1}^{-}v\right) \exp\left({1\over 2(1+6wu^2v^2 +w^2u^4v^4)} X_{3}^{-}uv^3\right) \cdot$$ $$\cdot\hat h_+^{\{d\}}(uv)\cdot $$ $$\cdot \exp\left({1-wuv\over 1+2wuv -w^2u^2v^2}x_0^{+}u\right)\left({-1\over 2(1+6wu^2v^2 +w^2u^4v^4)} X_{1}^{+}u^3v\right).$$
\noindent The general straightening formula for $\exp(x_r^+u)\exp(x_s^-v)$ when $r+s$ is odd is obtained from the case $r=0$, $s=1$ applying $T^{-r}\lambda_{r+s}$, remarking that $w\big|_{L^{\pm}}\mapsto T^{\mp(r+s)}$.
\subsection{Garland's description of $\u_{\Z}^{im,+}$}\label{appendB} In this appendix we focus on the imaginary positive part $\u_{\Z}^{im,+}$ of $\u_{\Z}=\u_{\Z}(\frak g)$ (see section \ref{intr}) when $\frak g$ is an affine Kac-Moody algebra of rank 1 (that is $\frak g=\hat\frak sl_2$ or $\frak g=\hat{{\frak sl_3}}^{\!\!\chi}$: these cases are enough to understand also the cases of higher rank): we aim at a better understanding of Garland's (and Mitzman's) basis of $\u_{\Z}^{im,+}$
and of its connection with the basis consisting of the monomials in the $\hat h_k$'s, a basis which arises naturally from the description of $\u_{\Z}^{im,+}$ as $\Z^{(sym)}[h_r|r>0]=\Z[\hat h_k|k>0]$.
\noindent First of all let us fix some notations and recall Garland's description of $\u_{\Z}^{im,+}$.
\begin{definition}\label{bun}
With the notations of example \ref{rvsf} and proposition \ref{tmom} let us define the following elements and subsets in ${\mathbb{Q}}[h_r|r>0]$:
i) $b_{{\bf{k}}}=\prod_{m>0}\lambda_m(\hat h_{k_m})$ where ${\bf{k}}:\Z_+\to\N$ is finitely supported;
\begin{equation}\label{GarlandBase}
{\rm{ii)}}\,\, B_{\lambda}=\left\{b_{{\bf{k}}}|{\bf{k}}:\Z_+\to\N\,\,{\rm{is\, finitely\, supported}}\right\};
\end{equation}
iii) $\Z_{\lambda}[h_r|r>0]=\sum_{{\bf{k}}}\Z b_{{\bf{k}}}$ is the $\Z$-submodule of ${\mathbb{Q}}[h_r|r>0]$ generated by $B_{\lambda}$.
\end{definition}
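\noindent For instance, if $k_2=3$ and $k_m=0$ for $m\neq 2$ then $b_{{\bf{k}}}=\lambda_2(\hat h_3)$, while if $k_1=2$, $k_3=1$ and $k_m=0$ otherwise then $b_{{\bf{k}}}=\lambda_1(\hat h_2)\,\lambda_3(\hat h_1)$; in the language of remark \ref{rcdcp} below these elements correspond to the symmetric functions $\sum_{i<j<l}x_i^2x_j^2x_l^2$ and $\big(\sum_{i<j}x_ix_j\big)\big(\sum_ix_i^3\big)$ respectively (see lemma \ref{sldftv},i)).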
\noindent Then, with our notation, Garland's description of $\u_{\Z}^{im,+}$ can be stated as follows:
\begin{theorem} $\u_{\Z}^{im,+}$ is a free $\Z$-module with basis $B_{\lambda}$.
\noindent Equivalently:
\noindent i) $\u_{\Z}^{im,+}=\Z_{\lambda}[h_r|r>0]$;
\noindent ii) $B_{\lambda}$ is linearly independent.
\end{theorem}
\begin{remark}
Once it has been proved that $\u_{\Z}^{im,+}$ is the $\Z$-subalgebra of ${\cal U}$ generated by $\{\lambda_m(\hat h_k)|m>0,k\geq 0\}$ (hence by $B_{\lambda}$ or equivalently by $\Z_{\lambda}[h_r|r>0]$), proceeding in two different directions leads to the two descriptions of $\u_{\Z}^{im,+}$ that we want to compare:
\noindent $\star$) $\Z_{\lambda}[h_r|r>0]$ is a $\Z$-subalgebra of ${\mathbb{Q}}[h_r|r>0]$ (that is $\Z_{\lambda}[h_r|r>0]$ is closed under multiplication): this implies that $$\u_{\Z}^{im,+}=\Z_{\lambda}[h_r|r>0];$$
it also implies that $\Z[\hat h_k|k>0]\subseteq\Z_{\lambda}[h_r|r>0]$;
\noindent $\star\star$) $\Z[\hat h_k|k>0]$ is $\lambda_m$-stable for all $m>0$ (see proposition \ref{tmom}): this implies that $$\u_{\Z}^{im,+}=\Z[\hat h_k|k>0];$$
it also implies that $\Z_{\lambda}[h_r|r>0]\subseteq\Z[\hat h_k|k>0]$.
Hence $\star$) and $\star\star$) imply that $\u_{\Z}^{im,+}=\Z_{\lambda}[h_r|r>0]=\Z[\hat h_k|k>0]$.
\noindent $\star$) has been proved in \cite{HG} by induction on a suitably defined degree. The first step of the induction is the second assertion of \cite{HG}-lemma 5.11(b), proved in \cite{HG}-section 9: for all $k,l\in\N$ $\hat h_k\hat h_l-{k+l\choose k}\hat h_{k+l}$ is a linear combination {\it {with integral coefficients}} of elements of $B_{\lambda}$ {\it{of degree lower}} than the degree of $\hat h_{k+l}$.\\ In the proof the author uses that $B_{\lambda}$ is a ${\mathbb{Q}}$-basis of ${\mathbb{Q}}[h_r|r>0]$ and concentrates on the integrality of the coefficients: he studies the action of $\frak h$ on $\hat{{\frak sl_3}}^{\otimes N}$ where $\frak h$ is the commutative Lie-algebra with basis $\{h_r|r>0\}$ and $N\in\N$ is large enough ($N$ is the maximum among the degrees of the elements of $B_{\lambda}$ appearing in $\hat h_k\hat h_l$ with non-integral coefficient, assuming that such an element exists): $\frak h$ is a subalgebra of $\hat\frak sl_2$ and there is an embedding of $\hat\frak sl_2$ in $\hat{{\frak sl_3}}$ for every vertex of the Dynkin diagram of ${\frak sl}_3$, so that fixing a vertex of the Dynkin diagram of ${\frak sl}_3$ induces an embedding ${\frak h}\subseteq\hat\frak sl_2\hookrightarrow\hat{{\frak sl_3}}$, hence an action of ${\frak h}$ on $\hat{{\frak sl_3}}$. But the integral form of $\hat{{\frak sl_3}}$ defined as the $\Z$-span of a Chevalley basis is $\u_{\Z}(\hat{\frak sl_3})$-stable; since the stability under $\u_{\Z}(\hat{\frak sl_3})$ is preserved by tensor products (\cite{HG}-section 6), the author can finally deduce the desired integrality property of $\hat h_k\hat h_l$ from the study of the $\frak h$-action on $\hat{{\frak sl_3}}^{\otimes N}$.
\vskip .3 truecm
\noindent Garland's argument has sometimes been misunderstood: this is the case, for instance, in \cite{JM}, where the authors assert (in lemma 1.5) that \cite{HG}-lemma 5.11(b) implies that $\u_{\Z}^{im,+}=\Z[\hat h_k|k>0]$, while, as discussed above, it just implies the inclusion $\Z[\hat h_k|k>0]\subseteq\u_{\Z}^{im,+}=\Z_{\lambda}[h_r|r>0]$.
\noindent On the other hand Garland's argument relies heavily on many results of the (integral) representation theory of the Kac-Moody algebras, while $\star$) is a property of the algebra ${\mathbb{Q}}[h_r|r>0]$ and of its integral forms that can be stated in a way completely independent of the Kac-Moody algebra setting:
$$\Z^{(sym)}[h_r|r>0]\subseteq\Z_{\lambda}[h_r|r>0].$$
\noindent The above considerations motivate the present appendix, whose aim is to propose a self-contained proof of $\star$), independent of the Kac-Moody algebra context: on one hand we think that a direct proof can help to bring out the essential structure of the integral form of ${\mathbb{Q}}[h_r|r>0]$ arising from our study; on the other hand the idea of isolating the single pieces and gluing them together after studying them separately is much in the spirit of this work, so that it is natural for us to also explain Garland's basis of $\u_{\Z}^{im,+}$ through this approach; and finally we hope that presenting a different proof can also help to clarify the steps which appear more difficult in Garland's proof. \vskip .3 truecm
\noindent In the following we go back to the description of $\Z[\hat h_k|k>0]$ as the algebra of the symmetric functions and we show that $B_{\lambda}$ is a basis of $\Z[\hat h_k|k>0]$ by comparing it with a well known $\Z$-basis of this algebra.
\end{remark}
\begin{remark}\label{rcdcp}
Recall that $\Z[\hat h_k|k>0]$ is the algebra of the symmetric functions and that $\forall n\in\N$ the projection $\pi_n:\Z[\hat h_k|k>0]\to\Z[x_1,...,x_n]^{{\cal{S}}_n}$ induces an isomorphism $\Z[\hat h_1,...,\hat h_n]\cong\Z[x_1,...,x_n]^{{\cal{S}}_n}$ through which $\hat h_k$ corresponds to the $k^{{\rm{th}}}$ elementary symmetric polynomial $e_k^{[n]}$, while $\pi_n(\hat h_k)=0$ if $k>n$ and $h_r$ corresponds to the sum of the $r^{{\rm{th}}}$-powers $\sum_{i=1}^nx_i^r$ $\forall r>0$ (see example \ref{rvsf}).
\noindent Then it is well known and obvious that:
\noindent i) $\forall{\bf{k}}:\Z_+\to\N$ finitely supported $\exists!(\sigma x)_{{\bf{k}}}\in\Z[\hat h_k|k>0]$ such that
$$\pi_n((\sigma x)_{\bf{k}})=\sum_{{a_1,...,a_n\atop\#\{i|a_i=m\}=k_m\ \forall m>0}}\prod_{i=1}^nx_i^{a_i}\in\Z[x_1,...,x_n]^{{\cal{S}}_n}\ \ \forall n\in\N;$$
\noindent ii) $\{(\sigma x)_{\bf{k}}|{\bf{k}}:\Z_+\to\N$ finitely supported$\}$ is a $\Z$-basis of $\Z[\hat h_k|k>0]$.
\noindent (It is the basis that in \cite{IM} is denoted by $\{m_{\lambda}|\lambda=(\lambda_1\geq\lambda_2\geq...\geq 0)\}$: $m_{\lambda}=(\sigma x)_{{\bf{k}}}$ where $\forall m>0$ $k_m=\#\{i|\lambda_i=m\}$).
\end{remark}
\begin{notation}
As in remark \ref{rcdcp}, for all ${\bf{k}}:\Z_+\to\N$ finitely supported let us denote by $(\sigma x)_{\bf{k}}$ the limit of the elements $$\sum_{{a_1,...,a_n\atop\#\{i|a_i=m\}=k_m\ \forall m>0}}\prod_{i=1}^nx_i^{a_i}\ \ (n\in\N).$$ By abuse of notation, when $n\geq\sum_{m>0}k_m$ we shall write
$$(\sigma x)_{\bf{k}}=\sum_{{a_1,...,a_n\atop\#\{i|a_i=m\}=k_m\ \forall m>0}}\prod_{i=1}^nx_i^{a_i},$$
which is justified because, under the hypothesis that $n\geq\sum_{m>0}k_m$, ${\bf{k}}$ is determined by the set $\{(a_1,...,a_n)|\#\{i=1,...,n|a_i=m\}=k_m$ $\forall m>0\}$.
\end{notation} \begin{definition} \label{basi}
$\forall n\in\N$ define $B_{\lambda}^{(n)}$, $B_x^{(n)}$, $\Z_{\lambda}^{(n)}$, $\Z_x^{(n)}\subseteq{\mathbb{Q}}[h_r|r>0]={\mathbb{Q}}[\hat h_k|k>0]$ as follows:
$$B_{\lambda}^{(n)}=\left\{b_{\bf{k}}=\prod_{m>0}\lambda_m(\hat h_{k_m})\in B_{\lambda}|\sum_{m>0}k_m\leq n\right\},$$
$$B_x^{(n)}=\left\{(\sigma x)_{\bf{k}}|\sum_{m>0}k_m\leq n\right\},$$ $\Z_{\lambda}^{(n)}$ is the $\Z$-module generated by $B_{\lambda}^{(n)}$, $\Z_x^{(n)}$ is the $\Z$-module generated by $B_x^{(n)}$.
\end{definition}
\begin{remark}\label{vdbx} By the very definition of $B_x^{(n)}$ we have that:
\noindent i) $B_x^{(n)}$ is a basis of $\Z_x^{(n)}\subseteq\Z[\hat h_k|k>0]=\sum_{n'\in\N}\Z_x^{(n')}$, see remark \ref{rcdcp}, ii);
\noindent ii) $h\in\Z_x^{(n)}$ means that for all $N\geq n$ each monomial in the $x_i$'s appearing in $\pi_N(h)$ with nonzero coefficient involves no more than $n$ indeterminates $x_i$; hence in particular $$h\in\Z_x^{(n)}, h'\in\Z_x^{(n')}\Rightarrow hh'\in\Z_x^{(n+n')}.$$ \end{remark} \begin{lemma} \label{sxkp} Let $n,n',n''\in\N$ and ${\bf{k}}',{\bf{k}}'':\Z_+\to\N$ be such that $n'+n''=n$, $\sum_{m>0}k'_m=n'$, $\sum_{m>0}k''_m=n''$. Then:
\noindent i) $(\sigma x)_{{\bf{k}}'}\cdot(\sigma x)_{{\bf{k}}''}\in\Z(\sigma x)_{{\bf{k}}'+{\bf{k}}''}\oplus\Z_x^{(n-1)}$;
\noindent ii) if $k'_m k''_m=0$ $\forall m>0$ then $(\sigma x)_{{\bf{k}}'} (\sigma x)_{{\bf{k}}''}-(\sigma x)_{{\bf{k}}'+{\bf{k}}''}\in\Z_x^{(n-1)}$. \begin{proof} That $(\sigma x)_{{\bf{k}}'}\cdot(\sigma x)_{{\bf{k}}''}$ lies in $\Z_x^{(n)}$ follows from remark \ref{vdbx},ii), so we just need to:
\noindent i) prove that if $\prod_{i=1}^nx_i^{a_i}$ with $a_i\neq 0$ $\forall i=1,...,n$ is the product of two monomials $M'$ and $M''$ appearing with nonzero coefficient respectively in $(\sigma x)_{{\bf{k}}'}$ and in $(\sigma x)_{{\bf{k}}''}$ then $\#\{i|a_i=m\}=k'_m+k''_m$ for all $m>0$;
\noindent ii) compute the coefficient of $(\sigma x)_{{\bf{k}}'+{\bf{k}}''}$ in the expression of $(\sigma x)_{{\bf{k}}'}\cdot(\sigma x)_{{\bf{k}}''}$ as a linear combination of the $(\sigma x)_{{\bf{k}}}$'s when $\forall m>0$ $k'_m$ and $k''_m$ are not simultaneously non zero, and find that it is 1.
\noindent i) is obvious because the condition $a_i\neq 0$ $\forall i=1,...,n$ implies that the indeterminates involved in $M'$ and those involved in $M''$ are disjoint sets.
\noindent For ii) it is enough to show that, under the further condition on $k'_m$ and $k''_m$, the monomial $\prod_{i=1}^nx_i^{a_i}$ chosen in i) uniquely determines $M'$ and $M''$ such that $\prod_{i=1}^nx_i^{a_i}=M'M''$: indeed $$M'=\prod_{i:k'_{a_i}\neq 0}x_i^{a_i}\ \ {\rm{and}}\ M''=\prod_{i:k''_{a_i}\neq 0}x_i^{a_i}.$$
\end{proof} \end{lemma}
\begin{lemma}\label{sldftv} Let ${\bf{k}}:\Z_+\to\N$, $n\in\N$ be such that $\sum_{m>0}k_m=n$. Then:
\noindent i) if $\exists m>0$ such that $k_{m'}= 0$ for all $m'\neq m$ (equivalently $k_m=n$) we have $$(\sigma x)_{{\bf{k}}}=\lambda_m(\hat h_{n})=b_{{\bf{k}}}\in\Z_x^{(n)}\cap\Z_{\lambda}^{(n)};$$ \noindent ii) in general $b_{{\bf{k}}}-(\sigma x)_{{\bf{k}}}\in\Z_x^{(n-1)}$. \begin{proof} i) $\forall N\geq n$ we have $$(\sigma x)_{{\bf{k}}}=\sum_{1\leq i_1<...<i_n\leq N}x_{i_1}^m\cdot...\cdot x_{i_n}^m=\lambda_m\left(\sum_{1\leq i_1<...<i_n\leq N}x_{i_1}\cdot...\cdot x_{i_n}\right)=\lambda_m(e_n^{[N]})$$ so that $(\sigma x)_{{\bf{k}}}=\lambda_m(\hat h_{n})$.
\noindent ii) $b_{{\bf{k}}}=\prod_{m>0}\lambda_m(\hat h_{k_m})=\prod_{m>0}(\sigma x)_{{\bf{k}}^{[m]}}$ where $k^{[m]}_{m'}=\delta_{m,m'}k_m$ $\forall m,m'>0$; thanks to lemma \ref{sxkp},ii) we have that $\prod_{m>0}(\sigma x)_{{\bf{k}}^{[m]}}-(\sigma x)_{\sum_m{\bf{k}}^{[m]}}\in\Z_x^{(n-1)}$; but $\sum_{m>0}{\bf{k}}^{[m]}={\bf{k}}$ and the claim follows.
\end{proof} \end{lemma} \begin{theorem}\label{gmrvs}
$B_{\lambda}$ is a $\Z$-basis of $\Z[\hat h_k|k>0]$ (thus $\Z[\hat h_k|k>0]=\Z_{\lambda}[h_r|r>0]$). \begin{proof} We prove by induction on $n$ that $B_{\lambda}^{(n)}$ is a $\Z$-basis of $\Z_x^{(n)}=\Z_{\lambda}^{(n)}$ $\forall n\in\N$, the case $n=0$ being obvious.
\noindent Let $n>0$: by the inductive hypothesis $B_{\lambda}^{(n-1)}$ and $B_x^{(n-1)}$ are both $\Z$-bases of $\Z_x^{(n-1)}=\Z_{\lambda}^{(n-1)}$; by definition $B_x^{(n)}\setminus B_x^{(n-1)}$ represents a $\Z$-basis of $\Z_x^{(n)}/\Z_x^{(n-1)}$ while $B_{\lambda}^{(n)}\setminus B_{\lambda}^{(n-1)}$ represents a set of generators of the $\Z$-module $\Z_{\lambda}^{(n)}/\Z_{\lambda}^{(n-1)}$.
\noindent Now lemma \ref{sldftv},ii) implies that if $\sum_{m>0}k_m=n$ then $b_{{\bf{k}}}$ and $(\sigma x)_{{\bf{k}}}$ represent the same element in ${\mathbb{Q}}[\hat h_k|k>0]/\Z_x^{(n-1)}={\mathbb{Q}}[\hat h_k|k>0]/\Z_{\lambda}^{(n-1)}$.
\noindent Hence $B_{\lambda}^{(n)}\setminus B_{\lambda}^{(n-1)}$ represents a $\Z$-basis of $\Z_x^{(n)}/\Z_x^{(n-1)}=\Z_x^{(n)}/\Z_{\lambda}^{(n-1)}$, that is $B_{\lambda}^{(n)}$ is a $\Z$-basis of $\Z_x^{(n)}$; but $B_{\lambda}^{(n)}$ generates $\Z_{\lambda}^{(n)}$ and the claim follows.
\end{proof} \end{theorem}
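\noindent The triangularity at the heart of this proof can be seen explicitly in the smallest nontrivial case: for ${\bf{k}}$ with $k_1=k_2=1$ (so $n=2$), lemma \ref{sldftv},ii) amounts to $$b_{{\bf{k}}}=\lambda_1(\hat h_1)\lambda_2(\hat h_1)=\Big(\sum_ix_i\Big)\Big(\sum_jx_j^2\Big)=\sum_{i\neq j}x_i^2x_j+\sum_ix_i^3=(\sigma x)_{{\bf{k}}}+(\sigma x)_{{\bf{k}}'},$$ where $k'_3=1$ and $k'_m=0$ for $m\neq 3$, so that $b_{{\bf{k}}}-(\sigma x)_{{\bf{k}}}=(\sigma x)_{{\bf{k}}'}\in\Z_x^{(1)}$.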
\subsection{Comparison with the Mitzman integral form}\label{appendC} In the present appendix we compare the integral form $\tilde{\cal U}_{\Z}=$ $^*{\cal U}_{\Z}(\hat{{\frak sl_3}}^{\!\!\chi})$ of $\tilde{\cal U}$ described in section \ref{ifa22} with the integral form $\u_{\Z}(\hat{{\frak sl_3}}^{\!\!\chi})$ of the same algebra $\tilde{\cal U}$ introduced and studied by Mitzman in \cite{DM}, which we denote here by $\tilde{\cal U}_{\Z,M}$ and which is simply defined as the $\Z$-subalgebra of $\tilde{\cal U}$ generated by the divided powers of the Kac-Moody generators $e_i,f_i$ ($i=0,1$): see also remark \ref{rfink}.
\noindent More precisely:
\begin{mydefinition}\label{ka22}
\noindent $\tilde{\cal U}$ is the enveloping algebra of the Kac-Moody algebra whose generalized Cartan matrix is $A_2^{(2)}=(a_{i,j})_{i,j\in\{0,1\}}=\left(\begin{matrix}2&-1\\-4&2\end{matrix}\right)$ (see \cite{VK}): it has generators $\{e_i,f_i,h_i|i=0,1\}$ and relations $$[h_i,h_j]=0,\ \ [h_i,e_j]=a_{i,j}e_j,\ \ [h_i,f_j]=-a_{i,j}f_j,\ \ [e_i,f_j]=\delta_{i,j}h_i\ \ (i,j\in \{0,1\})$$ $$({\rm{ad}}e_i)^{1-a_{i,j}}(e_j)=0=({\rm{ad}}f_i)^{1-a_{i,j}}(f_j)\ \ (i\neq j\in \{0,1\}).$$
\end{mydefinition}
\begin{mydefinition} \label{mif}
The Mitzman integral form $\tilde{\cal U}_{\Z,M}$ of $\tilde{\cal U}$ is the $\Z$-subalgebra of $\tilde{\cal U}$ generated by $\{e_i^{(k)},f_i^{(k)}|i=0,1,\ k\in\N\}$.
\end{mydefinition}
\begin{myremark} The Kac-Moody presentation of $\tilde{\cal U}$ (definition \ref{ka22}) and its presentation given in definition \ref{a22} are identified through the following isomorphism: $$e_1\mapsto x_0^+,\ \ f_1\mapsto x_0^-,\ \ h_1\mapsto h_0,\ \ e_0\mapsto{1\over 4}X_1^-,\ \ f_0\mapsto{1\over 4}X_{-1}^+,\ \ h_0\mapsto {1\over 4}c-{1\over 2} h_0.$$
\end{myremark}
\begin{mynotation} \label{mitNota} In order to avoid confusion and heavy notation in what follows, we set: $$y_{2r+1}^{\pm}={1\over 4}X_{2r+1}^{\pm},\ \ \mathcal{h}_r={1\over 2}h_r,\ \ \tilde c={1\over 4}c$$ where the $X_{2r+1}^{\pm}$'s, the $h_r$'s and $c$ are those introduced in definition \ref{a22} (thus $e_0=y_1^-$, $f_0=y_{-1}^+$, while the Kac-Moody $h_0$ and $h_1$ appearing in definition \ref{ka22} are respectively $\tilde c-\mathcal{h}_0$ and $2\mathcal{h}_0$; moreover
$\tilde{\cal U}_{\Z,M}$ is the $\Z$-subalgebra of $\tilde{\cal U}$ generated by $\{(x_0^{\pm})^{(k)},(y_{\mp 1}^{\pm})^{(k)}|k\in\N\}$). \end{mynotation}
\begin{myremark}\label{stau} $\tilde{\cal U}_{\Z,M}$ is $\Omega$-stable, $\exp(\pm{\rm{ad}}e_i)$-stable and $\exp(\pm{\rm{ad}}f_i)$-stable. In particular $\tilde{\cal U}_{\Z,M}$ is stable under the action of $$\tau_0=\exp({\rm{ad}}e_0)\exp(-{\rm{ad}}f_0)\exp({\rm{ad}}e_0)=\exp({\rm{ad}}y_1^-)\exp(-{\rm{ad}}y_{-1}^+)\exp({\rm{ad}}y_1^-)$$ and $$\tau_1=\exp({\rm{ad}}e_1)\exp(-{\rm{ad}}f_1)\exp({\rm{ad}}e_1)=\exp({\rm{ad}}x_0^+)\exp(-{\rm{ad}}x_0^-)\exp({\rm{ad}}x_0^+)$$ (cf. \cite{JH}).
\begin{proof} The claim for $\Omega$ follows at once from the definitions; the remaining claims are an immediate consequence of the identity $({\rm{ad}}a)^{(n)}(b)=\sum_{r+s=n}(-1)^sa^{(r)}ba^{(s)}$. \end{proof} \end{myremark}
\begin{myremark}\label{emgfg} Recalling the embedding $F:\hat{\cal U}\to\tilde{\cal U}$ defined in remark \ref{emgg}, theorem \ref{trm} implies that the $\Z$-subalgebra of $\tilde{\cal U}$ generated by the divided powers of the $y_{2r+1}^{\pm}$'s is the tensor product of the $\Z$-subalgebras
$\Z^{(div)}[y_{2r+1}^{\pm}|r\in\Z]$, $\Z^{(sym)}[\mathcal{h}_{\pm r}|r>0]$, $\Z^{(bin)}[\mathcal{h}_0-\tilde c, 2\tilde c]$.
\end{myremark}
\noindent Mitzman completely described the integral form generated by the divided powers of the Kac-Moody generators in all the twisted cases; in the case of $A_2^{(2)}$ his result can be stated as follows, using our notation (see examples \ref{dvdpw}, \ref{binex} and \ref{rvsf}, definition \ref{bun} and notation \ref{mitNota}):
\begin{mytheorem}\label{mitz} $\tilde{\cal U}_{\Z,M}\cong\tilde{\cal U}_{\Z,M}^-\otimes_{\Z}\tilde{\cal U}_{\Z,M}^0\otimes_{\Z}\tilde{\cal U}_{\Z,M}^+$ where
$$\tilde{\cal U}_{\Z,M}^{\pm}\cong\Z^{(div)}[x_{2r}^{\pm}|r\in\Z]\otimes_{\Z}\Z^{(div)}[y_{2r+1}^{\pm}|r\in\Z]\otimes_{\Z}\Z^{(div)}[x_{2r+1}^{\pm}|r\in\Z]\cong$$
$$\cong\Z^{(div)}[x_{2r+1}^{\pm}|r\in\Z]\otimes_{\Z}\Z^{(div)}[y_{2r+1}^{\pm}|r\in\Z]\otimes_{\Z}\Z^{(div)}[x_{2r}^{\pm}|r\in\Z],$$
$$\tilde{\cal U}_{\Z,M}^0\cong\Z_{\lambda}[\mathcal{h}_{-r}|r>0]\otimes_{\Z}\Z^{(bin)}[2\mathcal{h}_0,\tilde c-\mathcal{h}_0]\otimes_{\Z}\Z_{\lambda}[\mathcal{h}_r|r>0].$$ The isomorphisms are all induced by the product in $\tilde{\cal U}$.
Remark that $\Z^{(bin)}[2\mathcal{h}_0,\tilde c-\mathcal{h}_0]=\Z^{(bin)}[\mathcal{h}_0-\tilde c,2\tilde c]$ (see example \ref{binex}) and $\Z_{\lambda}[\mathcal{h}_r|r>0]=\Z^{(sym)}[\mathcal{h}_r|r>0]$ (see theorem \ref{gmrvs}). \end{mytheorem}
\vskip.3 truecm \begin{myremark} \label{tmnv} \noindent As in the case of $\hat\frak sl_2$ (see remark \ref{tmfv}) we can make explicit the relation between the elements $\hat \mathcal{h}_k$ ($k>0$) and the elements $p_{n,1}$ ($n>0$) defined in \cite{F} following Garland's $\Lambda_{k}$'s.
\noindent Setting $$\sum_{n\geq 0}p_nu^n=P(u)=\hat \mathcal{h}(-u)^{-1}$$ we have on one hand
$\Z[\hat \mathcal{h}_k|k>0]=\Z[p_{n}|n>0]$ and on the other hand $$p_0=1,\ \ p_n={1\over n}\sum_{r=1}^n\mathcal{h}_rp_{n-r}\ \forall n>0,$$
hence $p_n=p_{n,1}$ $\forall n\geq 0$ (see \cite{F}) and $\Z[\hat \mathcal{h}_k|k>0]=\Z[p_{n,1}|n>0]$.
\end{myremark}
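\noindent For instance, the recurrence in remark \ref{tmnv} gives $$p_1=\mathcal{h}_1,\ \ p_2={\mathcal{h}_1^2+\mathcal{h}_2\over 2},\ \ p_3={\mathcal{h}_1^3+3\mathcal{h}_1\mathcal{h}_2+2\mathcal{h}_3\over 6},$$ while inverting $P(u)=\hat \mathcal{h}(-u)^{-1}$ directly (with $\hat \mathcal{h}(u)=\sum_{k\geq 0}\hat \mathcal{h}_ku^k$, $\hat \mathcal{h}_0=1$) gives $p_1=\hat \mathcal{h}_1$ and $p_2=\hat \mathcal{h}_1^2-\hat \mathcal{h}_2$: although the first expressions have denominators when written in the $\mathcal{h}_r$'s, each $p_n$ lies in $\Z[\hat \mathcal{h}_k|k>0]$, in accordance with the equality $\Z[\hat \mathcal{h}_k|k>0]=\Z[p_{n}|n>0]$ recalled above.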
\begin{mycorollary}\label{yins} $\tilde{\cal U}_{\Z}\subsetneq\tilde{\cal U}_{\Z,M}$.
\noindent More precisely:
$$\Z^{(div)}[X_{2r+1}^{\pm}|r\in\Z]\subsetneq\Z^{(div)}[y_{2r+1}^{\pm}|r\in\Z],$$ so that $\tilde{\cal U}_{\Z}^+\subsetneq\tilde{\cal U}_{\Z,M}^+$ and $\tilde{\cal U}_{\Z}^-\subsetneq\tilde{\cal U}_{\Z,M}^-$; $$\Z^{(bin)}[h_0,c]=\Z^{(bin)}[2\mathcal{h}_0,4\tilde c]\subsetneq\Z^{(bin)}[2\mathcal{h}_0,\tilde c-\mathcal{h}_0]$$ and (see definition \ref{thuz})
$$\Z^{(sym)}[\varepsilon_rh_r|r>0]\subsetneq\Z^{(sym)}[\mathcal{h}_r|r>0]$$ (and similarly for the negative part of $\tilde{\cal U}_{\Z,M}^0$), so that $\tilde{\cal U}_{\Z}^0\subsetneq\tilde{\cal U}_{\Z,M}^0$. \begin{proof} For $\Z^{(div)}$ and $\Z^{(bin)}$ the claim is obvious.
For $\Z^{(sym)}$ the inequality follows at once from the fact that $\mathcal{h}_1={h_1\over 2}$ does not belong to $\Z^{(sym)}[\varepsilon_rh_r|r>0]$ while the inclusion follows from propositions \ref{convoluzioneintera} and \ref{emmepiallaerre} remarking that for all $r>0$ $\varepsilon_rh_r=2\varepsilon_r\mathcal{h}_r$.
\noindent Then the assertion for $\tilde{\cal U}_{\Z}$ and $\tilde{\cal U}_{\Z,M}$ follows from theorems \ref{trmA22} and \ref{mitz}. \end{proof} \end{mycorollary}
\begin{myremark} \label{mizRem} Theorem \ref{mitz} can be deduced from the commutation formulas discussed in this paper and collected in appendix \ref{appendA}, thanks to the triangular decompositions (see remark \ref{tefp}) and to the following observations:
\noindent i) $\tilde{\cal U}_{\Z,M}^0$ is a $\Z$-subalgebra of $\tilde{\cal U}$:
\noindent indeed, since the map $h_r\mapsto \mathcal{h}_r$, $c\mapsto\tilde c$ defines an automorphism of $\tilde{\cal U}^0$, proposition \ref{zkd} implies that $$\hat \mathcal{h}_+(u)\hat \mathcal{h}_-(v)=\hat \mathcal{h}_-(v)(1-uv)^{-4\tilde c}(1+uv)^{2\tilde c}\hat \mathcal{h}_+(u).$$
\noindent ii) $\tilde{\cal U}_{\Z,M}^+$ and $\tilde{\cal U}_{\Z,M}^-$ are $\Z$-subalgebras of $\tilde{\cal U}$:
\noindent indeed the $[(x_{2r}^+)^{(k)},(x_{2s+1}^+)^{(l)}]$'s (the only non trivial commutators in $\tilde{\cal U}_{\Z,M}^+$) lie in $\tilde{\cal U}_{\Z}^+\subseteq\tilde{\cal U}_{\Z,M}^+$; on the other hand $\tilde{\cal U}_{\Z,M}^-=\Omega(\tilde{\cal U}_{\Z,M}^+)$.
\noindent iii) $\exp\left(\sum_{r>0}a_rx_r^+\right)\in\tilde{\cal U}_{\Z,M}^+$ if $a_r\in\Z$ for all $r>0$:
\noindent see lemma \ref{cle},viii), formula (\ref{sdivmv}) and the relation $[x_{2r}^+,x_{2s+1}^+]=-4y_{2r+2s+1}^+$.
\noindent iv) $\tilde{\cal U}_{\Z,M}^0\tilde{\cal U}_{\Z,M}^+$ and $\tilde{\cal U}_{\Z,M}^-\tilde{\cal U}_{\Z,M}^0$ are $\Z$-subalgebras of $\tilde{\cal U}$:
\noindent that $(y_{2r+1}^+)^{(k)}\tilde{\cal U}_{\Z,M}^0\subseteq\tilde{\cal U}_{\Z,M}^0\tilde{\cal U}_{\Z,M}^+$ follows from remark \ref{emgfg}; moreover by propositions \ref{bdm} and \ref{hh} we get $$(x_r^+)^{(k)}{\mathcal{h}_0-\tilde c\choose l}={\mathcal{h}_0-\tilde c-k\choose l}(x_r^+)^{(k)},$$ $$(x_r^+)^{(k)}\hat \mathcal{h}_+(u)=\hat \mathcal{h}_+(u)\left({1-uT^{-1}\over(1+uT^{-1})^2}x_r^+\right)^{(k)},$$ $$\lambda_{-1}(x_r^+)=x_{-r}^+,\ \ \lambda_{-1}(\hat \mathcal{h}_+(u))=\hat \mathcal{h}_-(u).$$ On the other hand $\tilde{\cal U}_{\Z,M}^-\tilde{\cal U}_{\Z,M}^0=\Omega(\tilde{\cal U}_{\Z,M}^0\tilde{\cal U}_{\Z,M}^+)$.
\noindent v) $\tilde{\cal U}_{\Z,M}^-\tilde{\cal U}_{\Z,M}^0\tilde{\cal U}_{\Z,M}^+$ is a $\Z$-subalgebra of $\tilde{\cal U}$: $$(x_r^+)^{(k)}(x_s^-)^{(l)}\in\tilde{\cal U}_{\Z}=\tilde{\cal U}_{\Z}^-\tilde{\cal U}_{\Z}^0\tilde{\cal U}_{\Z}^+\subseteq\tilde{\cal U}_{\Z,M}^-\tilde{\cal U}_{\Z,M}^0\tilde{\cal U}_{\Z,M}^+$$ (see theorem \ref{trmA22} and corollary \ref{yins}), $$(y_{2r+1}^+)^{(k)}(y_{2s+1}^-)^{(l)}\in\tilde{\cal U}_{\Z,M}^-\tilde{\cal U}_{\Z,M}^0\tilde{\cal U}_{\Z,M}^+$$ (see remark \ref{emgfg}), and $$\exp(x_0^+u)\exp(y_1^-v)=$$ $$=\exp({\alpha_-})\exp({\beta_-})\exp({\gamma_-})\hat \mathcal{h}_+(u^2v) \exp(\gamma_+)\exp(\beta_+)\exp(\alpha_+)$$ where $$\alpha_-={uv\over 1-w^2u^4v^2}.x_1^-,\ \ \ \ \alpha_+={u\over 1-w^2u^4v^2}.x_0^+,$$ $$\beta_-={(1+3\cdot wu^4v^2)v\over (1+wu^4v^2)^2}.y_1^-,\ \ \ \ \beta_+={(1-wu^4v^2)u^4v\over (1+wu^4v^2)^2}.y_1^+,$$ $$\gamma_-={-w^2u^3v^2\over 1-w^2u^4v^2}.x_0^-,\ \ \ \ \gamma_+={-u^3v\over 1-w^2u^4v^2}.x_1^+$$ \noindent (see proposition \ref{xmenogrande} recalling definition \ref{qwmodulo} and remark \ref{whtilde}), so that $(x_0^+)^{(k)}(y_1^-)^{(l)}$ lies in $\tilde{\cal U}_{\Z,M}^-\tilde{\cal U}_{\Z,M}^0\tilde{\cal U}_{\Z,M}^+$ for all $k,l\geq 0$; from this it follows that $(x_r^+)^{(k)}(y_{2s+1}^-)^{(l)}$ and $(y_{2s+1}^+)^{(l)}(x_r^-)^{(k)}$ lie in $\tilde{\cal U}_{\Z,M}^-\tilde{\cal U}_{\Z,M}^0\tilde{\cal U}_{\Z,M}^+$ for all $r,s\in\Z$, $k,l\geq 0$ because $\tilde{\cal U}_{\Z,M}^-\tilde{\cal U}_{\Z,M}^0\tilde{\cal U}_{\Z,M}^+$ is stable under $T^{\pm 1}$, $\lambda _m$ ($m\in\Z$ odd) and $\Omega$, and $$x_r^+=T^{-r}\lambda_{2r+2s+1}(x_0^+),\ \ y_{2s+1}^-=(-1)^rT^{-r}\lambda_{2r+2s+1}(y_1^-),$$ $$y_{2s+1}^+=\Omega(y_{-2s-1}^-),\ \ x_r^-=\Omega(x_{-r}^+);$$
\noindent vi) $\tilde{\cal U}_{\Z,M}\subseteq\tilde{\cal U}_{\Z,M}^-\tilde{\cal U}_{\Z,M}^0\tilde{\cal U}_{\Z,M}^+ $:
\noindent it follows from v) since $(x_0^{\pm})^{(k)}\in\Z^{(div)}[x_{2r}^{\pm}|r\!\in\!\Z]$ and $(y_{\mp 1}^{\pm})^{(k)}\in\Z^{(div)}[y_{2r+1}^{\pm}|r\!\in\!\Z]$.
\noindent vii) $\tilde{\cal U}_{\Z,M}^{\pm}\subseteq\tilde{\cal U}_{\Z,M}$:
\noindent this follows from remark \ref{stau}, observing that $$\tau_0(x_r^+)=(-1)^{r-1}x_{r+1}^-,\ \ \tau_1(x_r^-)=x_r^+,\ \ \tau_1(y_{2r+1}^-)=y_{2r+1}^+,\ \ \tau_0(y_{2r+1}^+)=-y_{2r+3}^-.$$
\noindent viii) $\tilde{\cal U}_{\Z,M}^0\subseteq\tilde{\cal U}_{\Z,M}$:
\noindent it follows from vii), v) and the stability under $\Omega$.
\noindent ix) $\tilde{\cal U}_{\Z,M}^-\tilde{\cal U}_{\Z,M}^0\tilde{\cal U}_{\Z,M}^+\subseteq\tilde{\cal U}_{\Z,M}$:
\noindent this is just vii) and viii) together.
Then $\tilde{\cal U}_{\Z,M}=\tilde{\cal U}_{\Z,M}^-\tilde{\cal U}_{\Z,M}^0\tilde{\cal U}_{\Z,M}^+$, which is the claim.
\end{myremark}
\begin{myremark}\label{rfink}
As one can see from remark \ref{mizRem},vii), $$\{x_r^{\pm},y_{2r+1}^{\pm},\mathcal{h}_s,2\mathcal{h}_0,\tilde c-\mathcal{h}_0|r,s\in\Z, s\neq 0\}$$ is, up to signs, a Chevalley basis of $\hat{{\frak sl_3}}^{\!\!\chi}$ (see \cite{DM}).
\noindent It is actually through these basis elements that Mitzman introduces, following \cite{HG}, the integral form of $\tilde{\cal U}$, as the $\Z$-subalgebra of $\tilde{\cal U}$ generated by
$$\{(x_r^{\pm})^{(k)},(y_{2r+1}^{\pm})^{(k)}|r\in\Z,k\in\N\};$$
but this $\Z$-subalgebra is precisely the algebra $\tilde{\cal U}_{\Z,M}$ introduced in definition \ref{mif}: indeed it turns out to be generated over $\Z$ just by $\{e_i^{(k)},f_i^{(k)}|i=0,1,\ k\geq 0\}$, that is by $\{(x_0^{\pm})^{(k)},(y_{\mp 1}^{\pm})^{(k)}|k\geq 0\}$, thanks to remarks \ref{stau} and \ref{mizRem},vii).
\end{myremark}
\subsection{List of Symbols}\label{appendD}
\underline{Lie Algebras and Commutative Algebras}:
\begin{abbrv}
\item[$S^{(div)}$]
Example \ref{dvdpw} \\
\item[$S^{(bin)}$]
Example \ref{binex} \\
\item[$S^{(sym)}$]
Example \ref{rvsf} \\
\item[$\frak {sl_2}$]
Definition \ref{sl2} \\
\item[$\hat{\frak sl_2}$]
Definition \ref{hs2} \\
\item[$\hat{{\frak sl_3}}^{\!\!\chi}$]
Definition \ref{a22} \\
\end{abbrv}
\underline{Enveloping Algebras}:
\begin{abbrv}
\item[$ \u_{\Z} ^{re, \pm}, \;\u_{\Z}^{im,\pm}, \; \u_{\Z}^{\frak h},\; ^*{\cal U}_{\Z}, \; ^*{\cal U}_{\Z}^{imm, \pm}$]
Section \ref{intr} \\
\item[${\cal U}, \; \u_{\Z}$]
Definition \ref{sl2} \\
\item[${\cal U}^+,\; {\cal U}^-,\; {\cal U}^0$]
Theorem \ref{trdc} \\
\item[$\hat{\cal U},\; \hat{\cal U}^{+},\; \hat{\cal U}^{-}, \; \hat{\cal U}^{0}, \; \hat{\cal U}^{0,\pm},\; \hat{\cal U}^{0,0}$ ]
Definition \ref{hs2} \\
\item[$\hat{\cal U}_{\Z},\; \hat{\cal U}_{\Z}^\pm, \; \hat{\cal U}_{\Z}^{0,\pm}, \; \hat{\cal U}_{\Z}^{0,0}$ ]
Definition \ref{hhuz} \\
\item[$\tilde{\cal U},\; \tilde{\cal U}^{\pm},\; \tilde{\cal U}^{0},\; \tilde{\cal U}^{\pm,0},\; \tilde{\cal U}^{\pm,1},\; \tilde{\cal U}^{\pm,c}, \; \tilde{\cal U}^{0,\pm},\; \tilde{\cal U}^{0,0}$]
Definition \ref{a22} \\
\item[$\tilde{\cal U}_{\Z},\; \tilde{\cal U}_{\Z}^{\pm},\; \tilde{\cal U}_{\Z}^{0},\; \tilde{\cal U}_{\Z}^{\pm,0},\; \tilde{\cal U}_{\Z}^{\pm,1},\; \tilde{\cal U}_{\Z}^{\pm,c},\; \tilde{\cal U}_{\Z}^{0,\pm},\; \tilde{\cal U}_{\Z}^{0,0}$]
Definition \ref{thuz} \\
\item[$\tilde{\cal U}_{\Z,M}$]
Definition \ref{mif}\\
\item[$\tilde{\cal U}_{\Z,M}^-, \;\tilde{\cal U}_{\Z,M}^0, \;\tilde{\cal U}_{\Z,M}^+ \;$]
Theorem \ref{mitz}\\
\end{abbrv}
\underline{Bases}:
\begin{abbrv}
\item[$B^{re,\pm}, \; B^{im,\pm}, \;B^{\frak h}$]
Section \ref{intr}
\item[$B^\pm,\; B^{0, \pm},\; B^{0,0} $ ]
Theorem \ref{trm} \\
\item[$B^{\pm,0},\; B^{\pm,1},\; B^{\pm,c} $ ]
Theorem \ref{trmA22} \\
\item[$B_\lambda, \; B_\lambda^{[n]}, \; B_x, \;B_x^{[n]}$]
Definitions \ref{bun} and \ref{basi} \\
\end{abbrv}
\underline{Elements and their generating series}:
\begin{abbrv}
\item[$\Lambda_r(\xi(k))$]
Section \ref{intr}
\item[$a^{(k)}, \; \exp(au)$]
Notation \ref{ntdvd} \\
\item[$\binom{a}{k}, \; (1+u)^a$]
Notation \ref{ntbin} \\
\item[$\hat p(u),\; \hat p_r$]
Example \ref{rvsf} \\
\item[$\hat h_r^{\{a\}},\; \hat h_+^{\{a\}}(u)$]
Notation \ref{hcappucciof} \\
\item[$x^\pm_r, \; h_r, \;c$]
Definition \ref{hs2} and Definition \ref{a22}\\
\item[$X^\pm_{2r+1}$]
Definition \ref{a22}\\
\item[$x^\pm(u),\; h_\pm(u), \; \hat h_\pm(u), \; \hat h_r$]
Notation \ref{hgens} \\
\item[$\tilde h_{\pm}(u), \tilde h_{\pm r}$]
Definition \ref{thuz} \\
\item[$e_i, f_i, h_i$]
Remark \ref{ka22}\\
\item[$y_{2r+1}^\pm , \; \mathcal{h}_r, \; \tilde c$]
Notation \ref{mitNota}\\
\item[$\mathcal{h}_\pm(u)$]
Remark \ref{mizRem}\\
\end{abbrv}
\underline{Anti/auto/homomorphisms}:
\begin{abbrv}
\item[$\lambda_m, \; \lambda_m ^{[n]}$]
Proposition \ref{tmom} \\
\item[$ev$]
Equation \ref{evaluation} \\
\item[$\sigma, \; \Omega,\; T, \; \lambda_m$]
Definition \ref{hto} and Definition \ref{tto} \\
\item[$\tilde \lambda_m$]
Lemma \ref{ometiomecap} \\
\end{abbrv}
\underline{Other symbols}:
\begin{abbrv}
\item[$ 1\!\!\!\!1, \; 1\!\!\!\!1^{(m)}, \; 1\!\!\!\!1_r, \; 1\!\!\!\!1^{(m)}_r$]
Notation \ref{hcappucciof} \\
\item[$L_a, \; R_a$]
Notation \ref{lard} \\
\item[$\varepsilon_r$]
Definition \ref{thuz} \\
\item[$L, \; L^\pm,\; L^0, \; L^{\pm, 0}, \; L^{\pm, 1}, \; L^{\pm, c}$]
Definition \ref{sottoalgebraL} \\
\item[$w.$]
Definition \ref{qwmodulo} \\
\item[$d, \; \tilde d, \; d_n, \; \tilde d_n$]
Notation \ref{notedn} \\
\item[$\delta_n$]
Remark \ref{hhdehh} \\
\end{abbrv}
\vskip .5 truecm
\end{document}
\begin{document}
\title[Classification of terminal quartics and rationality]{A classification of terminal quartic $3$-folds and applications to rationality questions} \date{} \author{Anne-Sophie Kaloghiros} \address{Department of Pure Mathematics and Mathematical Statistics, Uni\-ver\-si\-ty of Cambridge, Wilberforce Road, Cambridge CB3 0WB, Uni\-ted Kingdom} \email{[email protected]} \setcounter{tocdepth}{1} \maketitle \begin{abstract} This paper studies the birational geometry of terminal Gorenstein Fano $3$-folds. If such a $3$-fold $Y$ is not $\Q$-factorial, it is possible in most cases to describe explicitly the divisor class group $\Cl Y$ by running a Minimal Model Program (MMP) on $X$, a small $\Q$-factorialisation of $Y$. In this case, the generators of $\Cl Y/ \Pic Y$ are ``topological traces'' of $K$-negative extremal contractions on $X$. One can show, as an application of these methods, that a number of families of non-factorial terminal Gorenstein Fano $3$-folds are rational. In particular, I give some examples of rational quartic hypersurfaces $Y_4\subset \PS^4$ with $\rk \Cl Y=2$ and show that when $\rk \Cl Y\geq 6$, $Y$ is always rational. \end{abstract}
\tableofcontents \section*{Introduction} \label{sec:introduction}
Let $Y_4 \subset \PS^4$ be a quartic hypersurface in $\PS^4$ with terminal singularities. The Grothendieck-Lefschetz theorem states that every Cartier divisor on $Y$ is the restriction of a Cartier divisor on $\PS^4$. Recall that a variety $Y$ is $\Q$-factorial when a multiple of every Weil divisor is Cartier. There is no analogous statement for the group of Weil divisors: \cite{Kal07b} bounds the rank of the divisor class group $\Cl Y$ of $Y_4\subset \PS^4$, but when $Y_4$ is not factorial, $\Cl Y$ remains poorly understood.
Terminal quartic hypersurfaces in $\PS^4$ are terminal Gorenstein Fano $3$-folds. Any terminal Gorenstein Fano $3$-fold $Y$ is a $1$-parameter flat deformation of a nonsingular Fano $3$-fold $\mathcal{Y}_{\eta}$ with $\rho(Y)= \rho(\mathcal{Y}_{\eta})$ \cite{Nam97a}. Nonsingular Fano $3$-folds are classified in \cite{Isk77, Isk78} and \cite{MM82, MM03}; there are $17$ deformation families with Picard rank $1$. Terminal Fano $3$-folds with $\Q$-factorial singularities play a central role in Mori theory: they are one of the possible end products of the Minimal Model Program (MMP) on nonsingular varieties. In \cite{T02}, Takagi develops a method to classify such $\Q$-Fano $3$-folds under some mild assumptions; his techniques rely in an essential way on the study of Weil non-Cartier divisors on some canonical Gorenstein Fano $3$-folds.
By definition, $\Q$-factoriality is a global topological property: it depends on the prime divisors lying on $Y$ rather than on the local analytic type of its singular points alone. The divisor class group of a terminal Gorenstein $3$-fold $Y$ is torsion-free \cite{Kaw88}, so that $Y$ is $\Q$-factorial precisely when it is factorial. If $Y$ is a terminal Gorenstein Fano $3$-fold, by Kawamata-Viehweg vanishing, $\Pic Y\simeq H^2(Y, \Z)$, and by \cite[Theorem 3.2]{NS95}, $\Cl Y\simeq H_4(Y,\Z)$. Hence $Y$ is factorial if and only if \[ H_4(Y, \Z)\simeq H^2(Y, \Z). \]
Birational techniques can be used to bound the rank of the divisor class group of terminal Gorenstein Fano $3$-folds \cite{Kal07b}. More precisely, under the assumption that $Y$ does not contain a plane, Weil non-Cartier divisors on $Y$ are precisely the divisors that are contracted by the MMP on a small factorialisation $X\to Y$. The assumption that $Y$ does not contain a plane guarantees that each step of the MMP on $X$ is a Gorenstein weak Fano $3$-fold, and hence that this MMP can be studied explicitly. In \cite{Kal07b}, I use numerical constraints associated to the extremal contractions of this MMP to bound the Picard rank of $X$, or equivalently the rank of $\Cl Y$. These methods can be refined to describe explicitly the possible extremal divisorial contractions that occur in this MMP. For instance, if $Y_4 \subset \PS^4$ is a terminal quartic hypersurface, one can state a geometric ``motivation'' of non-factoriality as follows:
\begin{thm}[Main Theorem]
\label{thm:1} Let $Y_4^3 \subset \PS^4$ be a terminal Gorenstein quartic $3$-fold. Then one of the following holds: \begin{enumerate} \item[1.] $Y$ is factorial. \item[2.] $Y$ contains a plane $\PS^2$. \item[3.] $Y$ contains an anticanonically embedded del Pezzo surface of
degree $4$ and $\rk \Cl Y=2$. \item[4.] A small factorialisation of $Y$ has a structure of Conic Bundle over $\PS^2$, $\F_0$
or $\F_2$ and $\rk \Cl Y= 2$ or $3$. \item[5.] $Y$ contains a rational scroll $E \to C$ over a curve $C$ whose
genus and degree appear in Table~\ref{table1} (see page \pageref{table1}). \end{enumerate} \end{thm}
Analogous results can be stated for any terminal Gorenstein Fano $3$-fold (see Section~\ref{tables}). When $Y$ is not factorial, either all small factorialisations of $Y$ are Mori fibre spaces or at least one of the surfaces listed in Theorem~\ref{thm:1} is a generator of $\Cl Y/\Pic Y$; observe that these surfaces have relatively low degree (see Remark~\ref{degree}). Further, when $Y$ does not contain a plane, the rank of $\Cl Y$ can only be large when $Y$ contains many independent surfaces of low degree. The divisor class group is generated by prime divisors that are ``topological traces'' on $Y$ of the extremal divisorial contractions that occur when running the MMP on $X$.
The explicit study of the MMP on $X$ exhibits birational models of $Y$ that are small modifications of terminal Gorenstein Fano $3$-folds with Picard rank $1$ and higher anticanonical degree. Questions on rationality of $Y$, or at the other end of the spectrum, questions on rigidity, can be easier to answer on these models than on $Y$ itself. A nonsingular quartic hypersurface $Y=Y_4\subset \PS^4$ is nonrational \cite{IM}; more precisely, $Y$ is the unique Mori fibre space in its birational equivalence class (up to birational automorphisms), i.e.~ $Y_4$ is \emph{birationally rigid}. This remains true for a quartic hypersurface with ordinary double points: \cite{Me04} shows that if $Y$ has ordinary double points but is factorial, $Y$ remains birationally rigid. When the factoriality assumption is dropped, many examples of rational quartic hypersurfaces with ordinary double points are known. It is a notoriously difficult question to determine which mildly singular quartic hypersurfaces are rational. The methods of this paper yield a partial answer and a byproduct of Theorem~\ref{thm:1} is :
\begin{cor} \label{rat} Let $Y_4\subset \PS^4$ be a non-factorial quartic hypersurface with no worse than terminal singularities. Assume that $Y$ contains a rational scroll as in Theorem~\ref{thm:1}. Then $Y$ is rational except possibly when $\rk \Cl Y=2$ and $Y$ is the midpoint of a Sarkisov link of type $15,17,25,29,35$ or $36$ in Table~\ref{table1}. \end{cor}
There are some rationality criteria for strict Mori fibre spaces \cite{Sh83,Al87, Sh07}, so that partial answers are known when a small $\Q$-factorialisation of $Y$ is a strict Mori fibre space. Terminal quartic hypersurfaces that contain a plane may in some cases be studied directly. The main open case is therefore the one addressed by Corollary~\ref{rat}. The behaviour of the $6$ cases left out by Corollary~\ref{rat} and that of factorial terminal quartic hypersurfaces is unclear. Some of these cases would be settled by the following conjecture.
\begin{con} \label{con:rig} A factorial quartic hypersurface $Y_4\subset \PS^4$ (resp.~ a generic complete intersection $Y_{2,3}\subset \PS^5$) with no worse than terminal singularities has a finite number of models as Mori fibre spaces, i.e.~ the pliability of $Y$ is finite. \end{con}
Whereas factorial nodal quartic hypersurfaces are birationally rigid, \cite{CM04} constructs an example of a ``bi-rigid'' terminal factorial quartic hypersurface $Y_4\subset \PS^4$. A nonsingular general complete intersection $Y_{2,3}$ of a quadric and a cubic in $\PS^5$ is birationally rigid \cite{IP96}. However, \cite{ChGr} constructs a small deformation of a (factorial) rigid $Y_{2,3}\subset \PS^5$ with one ordinary double point to a bi-rigid complete intersection of the same type. This example relies on an appropriate deformation of a Sarkisov link between two complete intersections $Y_{2,3}$ (compare with case $35$ in Table~\ref{table1}). Hence, the notion of finite pliability -- rather than that of birational rigidity -- is the one that might behave well in (suitable) families.
Let $Y$ be a terminal quartic hypersurface and $X\to Y$ a small factorialisation. Assume that $X$ is not a strict Mori fibre space. Assuming Conjecture~\ref{con:rig}, $Y$ has finite pliability when $Y$ is factorial or has $\rk \Cl Y=2$ and $Y$ is one of cases $35$ or $36$; $Y$ is rational in all other cases except possibly when $Y$ has $\rk \Cl Y= 2$ and $Y$ is one of cases $15,17,25$ or $29$. In particular, the question of rationality or of finite pliability of $Y$ would be of a topological nature and would be determined by $\Cl Y$.
\subsection*{Outline} I sketch the proofs of Theorem~\ref{thm:1} and of Corollary~\ref{rat} and present an outline of this paper.
In Section~\ref{weak-star}, I recall the definition of weak-star Fano $3$-folds introduced in \cite{Kal07b}. If $Y$ is a terminal Gorenstein Fano $3$-fold that does not contain a plane, a small factorialisation $X \to Y$ is a weak-star Fano $3$-fold. The category of weak-star Fano $3$-folds is preserved by the birational operations of the MMP. If $X$ is weak-star Fano, then each birational step of the MMP on $X$ is either a flop or an extremal divisorial contraction for which the geometric description of Cutkosky-Mori \cite{Cut88} holds. The end product of the MMP on $X$ is well understood. This approach then yields a complete description of $\Cl Y$: $\Cl Y/\Pic Y$ is generated by the proper transforms of the exceptional divisors of the divisorial contractions of the MMP on $X$.
If $X$ has Picard rank $2$ and if $\phi \colon X \to X'$ is a divisorial contraction, then $\phi$ is one side of a Sarkisov link with centre along $Y$. A \emph{$2$-ray game} on $X$ as in \cite{Tak89} determines a (finite number of) possibilities for the contraction $\phi$; by construction, $\Cl Y$ is then generated by $\mathcal{O}_Y(1)$ and by the image of $\Exc \phi$ on $Y$.
In the general case, if $\phi \colon X\to X'$ is a divisorial extremal contraction with $\Exc \phi=E$, there is an extremal contraction $\varphi \colon Z \to Z_1$ where $Z$ is a Picard rank $2$ small modification of $Y$ that sits under $X$ and such that $\Exc \varphi$ is the image of $E$ on $Z$. Since $Z$ is not factorial, there is a priori no ``sensible'' Sarkisov link with centre along $Y$ involving $Z$. The proof will rely on exhibiting a natural link.
In Section~\ref{weak-star}, I show that the explicit geometric description of extremal divisorial contractions of \cite{Cut88} holds on non-factorial terminal Gorenstein $3$-folds so long as the exceptional divisor is Cartier.
Section~\ref{deformation} recalls results on the deformation theory of (small modifications of) terminal Gorenstein Fano $3$-folds and on the deformation of extremal contractions. Following the above notation, there is a $1$-parameter proper flat deformation $Z\hookrightarrow\mathcal{Z}$, where $\mathcal{Z}_{\eta}$ is a nonsingular Picard rank $2$ small modification of a terminal Gorenstein Fano $3$-fold $\mathcal{Y}_{\eta}$ that is a $1$-parameter proper flat deformation of $Y$. The extremal contraction $\varphi$ deforms to an extremal contraction on $\mathcal{Z}_{\eta}$ that is one side of a Sarkisov link with centre along $\mathcal{Y}_{\eta}$. Each possible Sarkisov link with centre along $\mathcal{Y}_{\eta}$ obtained by the $2$-ray game on $\mathcal{Z}_{\eta}$ can then be specialised to the central fibre. The specialisation to the central fibre is a Sarkisov link with centre along $Y$, one side of which is $\varphi$. The divisor class group $\Cl \mathcal{Y}_{\eta}$ is isomorphic to a rank $2$ sublattice of $\Cl Y$.
Section~\ref{motivation} presents the systems of Diophantine equations used in the $2$-ray game. Roughly, to each extremal contraction one associates numerical constraints on some intersection numbers in cohomology. If a Sarkisov link involves two extremal contractions $\varphi$ on $Z$ and $\alpha$ on $\widetilde{Z}$, since $Z$ and $\widetilde{Z}$ are connected by a flop, the constraints on each side give rise to systems of Diophantine equations. All possible Sarkisov links with centre along $Y$ are solutions of these systems. The solutions to all systems associated to Sarkisov links with centre along a terminal Gorenstein Fano $3$-fold with Picard rank $1$ are listed in Section~\ref{tables}. Section~\ref{motivation} classifies terminal quartic hypersurfaces according to their divisor class group. The case when there is no extremal divisorial contraction on $X$, where the previous arguments do not apply, is treated separately. When $Y$ does not contain a plane, a consequence of the explicit study of the MMP on a small factorialisation $X \to Y$ is that the bound on the rank of $\Cl Y$ stated in \cite{Kal07b} is not optimal (see Remark~\ref{bound}).
Section~\ref{rationality} first states some easy consequences of the previous explicit study on rationality of non-factorial Fano $3$-folds. I conjecture that a terminal quartic hypersurface $Y_4\subset \PS^4$ either has finite pliability or is rational; and that this is determined by $\Cl Y$. I show that most non-factorial terminal quartic hypersurfaces that do not contain a plane are rational. I then study explicitly quartic hypersurfaces that contain a plane and state some partial results. Last, Section~\ref{examples} gives some examples of non-factorial terminal Gorenstein Fano $3$-folds and gathers some observations and remarks.
\subsection*{Notations and conventions} All varieties considered in this paper are normal, projective and defined over $\C$. Let $Y$ be a terminal Gorenstein Fano $3$-fold; $A_Y=-K_Y$ denotes the anticanonical divisor of $Y$. The \emph{Fano index} of $Y$ is the maximal integer $i(Y)$ such that $A_Y=i(Y) H$ with $H$ Cartier. As I only consider Fano $3$-folds with terminal Gorenstein singularities in this paper, the term index always stands for Fano index. The \emph{degree} of $Y$ is $H^3$ and the \emph{genus} of $Y$ is $g(Y)=h^0(Y,A_Y)-2$. I denote by $Y_{2g-2}$ for $2 \leq g \leq 10$ or $g=12$ (resp.~ $V_d$ for $1\leq d \leq 5$) terminal Gorenstein Fano $3$-folds of Picard rank $1$, index $1$ (resp.~ $2$) and genus $g$ (resp.~ degree $d$). Finally, $\F_m= \PS(\mathcal{O}_{\PS^1}\oplus \mathcal{O}_{\PS^1}(-m))$ denotes the $m$th Segre-Hirzebruch surface.
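As a quick illustration of these conventions (a sanity check rather than new material): by adjunction, a terminal quartic hypersurface $Y_4\subset \PS^4$ has $A_{Y_4}=\mathcal{O}_{Y_4}(1)$, hence (by the Grothendieck-Lefschetz theorem recalled in the Introduction) index $1$, and
\[
A_{Y_4}^3=\deg Y_4=4=2g(Y_4)-2, \qquad g(Y_4)=h^0(Y_4,A_{Y_4})-2=5-2=3,
\]
which is the value $g=3$ used for quartic $3$-folds in Table~\ref{table1} below.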
\section{Birational Geometry of weak Fano $3$-folds} \label{weak-star} In this section, I recall the definition of weak-star Fano $3$-folds and some of their properties. Most terminal Gorenstein Fano $3$-folds have a weak-star small factorialisation $X\to Y$. If $X$ is weak-star Fano, the MMP on $X$ is well behaved, i.e.~ there is an explicit geometric description of each step, it terminates and its end product is either a terminal factorial Fano $3$-fold or a simple Mori fibre space. Since $\Cl Y \simeq \Cl X \simeq \Pic X$, the MMP on $X$ yields much information on the divisor class group of $Y$. I also gather some easy results on elementary contractions on small modifications of terminal Gorenstein Fano $3$-folds: these will be used in the following Sections.
\subsection{Weak-star Fano $3$-folds}
\begin{dfn}\label{dfn:1} \mbox{}\begin{enumerate} \item[1.] A $3$-fold $Y$ with terminal Gorenstein singularities is \emph{Fano} if its anticanonical divisor $A_Y={-}K_Y$ is ample. \item[2.] A $3$-fold $X$ with terminal Gorenstein singularities is \emph{weak Fano} if $A_X$ is
nef and big.
\item[3.] The morphism $X \to Y$ defined by $\vert {-}nK_X \vert$ for $n\gg 0$ is the \emph{(pluri-)anticanonical map} of $X$, $R=R(X, A)$ is the \emph{anticanonical ring} of $X$ and $Y= \Proj R$ is the \emph{anticanonical model} of $X$. \item[4.] A weak Fano $3$-fold $X$ is a \emph{weak-star Fano} if, in addition:
\begin{enumerate}
\item[(i)] $A_X$ is ample outside of a finite set of curves, so that $h\colon X \to Y$ is a small modification,
\item[(ii)] $X$ is factorial, and in particular $X$ is Gorenstein,
\item[(iii)] $X$ is inductively Gorenstein, that is $(A_X)^2\cdot S >1$ for every irreducible divisor $S$ on $X$,
\item[(iv)] $\vert A_X \vert$ is basepoint free, so that $\varphi_{\vert A \vert}$ is generically finite.
\end{enumerate} \end{enumerate} \end{dfn} \begin{rem}\label{rem:1} Let $Y$ be a terminal Gorenstein Fano $3$-fold and $X$ a small factorialisation of $Y$ as in \cite{Kaw88}. \cite[Lemma 2.3]{Kal07b} shows that when $Y$ has Picard rank $1$ and genus $g\geq 3$, $X$ is weak-star unless $Y$ contains a plane $\PS^2$ with ${A_Y}_{\vert \PS^2}=\mathcal{O}_{\PS^2}(1)$. \end{rem}
\begin{nt} I call a surface $S \subset Y_{2g-2}$ a plane (resp.~ a quadric) when the image of $S$ by the anticanonical map is a plane (resp.~ a quadric) in $\PS^{g+1}$, that is when $(A_Y)^2\cdot S=1$ (resp.~ $2$). \end{nt}
\begin{thm} \label{thm:3}\cite[Theorem 3.2, Lemma 3.3]{Kal07b} The category of weak-star Fano $3$-folds is preserved by the birational operations of the MMP. More precisely, if $X := X_0$ is a weak-star Fano $3$-fold whose anticanonical model $Y_0$ has Picard rank $1$, there is a sequence of extremal contractions: \begin{equation*} \xymatrix{ X_0 \ar@{-->}[r]^-{\varphi_0} \ar[d]& X_1 \ar@{-->}[r]^-{\varphi_1}\ar[d] & \cdots & X_{n-1}\ar@{-->}[r]^-{\varphi_{n-1}} \ar[d]& X_n \ar[d] \\ Y_0 & Y_1 & \cdots & Y_{n-1} & Y_n } \end{equation*} where for each $i$, $X_i$ is a weak-star Fano $3$-fold, $Y_i$ is its anticanonical model, and each $\varphi_i$ is either a divisorial contraction or a flop. The Picard rank of $Y_i$, $\rho(Y_i)$, is equal to $1$ for all $i$. The final $3$-fold $X_n$ is either a Fano $3$-fold with $\rho(X_n)=1$ or a strict Mori fibre space. In that latter case, $X_n$ is a del Pezzo fibration over $\PS^1$ or a conic bundle over $\PS^2, \F_0$ or $\F_2$ and $\rho(X_n)=2$ or $3$. \end{thm}
\subsection{Elementary contractions of terminal Gorenstein $3$-folds}
Let $h\colon Z\to Y$ be a small modification of a terminal Gorenstein Fano $3$-fold $Y$ with $\rho(Y)=1$ and $g\geq 3$. Suppose that $\varphi \colon Z \to Z_1$ is an extremal contraction such that $E=\Exc \varphi$ is Cartier. If $g \colon X \to Z$ is a small factorialisation, and if $\widetilde{E}=g^{\ast} E$, there is an extremal contraction $\phi \colon X \to X_1$ that makes the diagram \begin{eqnarray} \label{eq:40} \xymatrix{\widetilde{E} \subset X \ar[r]^{\phi} \ar[d]_g & X_1\ar[d]^{g_1}\\ E \subset Z \ar[d]_h \ar[r]^{\varphi} & Z_1\\ \overline{E} \subset Y & } \end{eqnarray} commutative, where $\widetilde{E}=\Exc \phi$ and $g_1$ is an isomorphism in codimension $1$ (See the proof of \cite[Lemma 3.3]{Kal07b} for details). \begin{rem} Note that $\overline{E}= h(E)$ is not Cartier: since $\rho(Y)=1$, $E$ would be ample if it were Cartier. \end{rem} Cutkosky extended Mori's geometric description of extremal contractions to terminal Gorenstein factorial $3$-folds \cite[Theorems 4 and 5]{Cut88}. The next Lemma is an easy generalisation of his results to divisorial contractions with Cartier exceptional divisor on terminal Gorenstein Fano $3$-folds that are not necessarily factorial. \begin{lem}
\label{lem:10}\cite[Lemma 3.1]{Kal07b}
Let $Z$ be a small modification of a terminal Gorenstein Fano $3$-fold $Y$. Assume that $\vert A_Z \vert$ is basepoint free. Denote by $\varphi \colon Z \to Z'$ an extremal divisorial contraction with centre a curve $\Gamma$ and assume that $\Exc \varphi=E$ is a Cartier divisor.
Then $\Gamma \subset Z'$ is locally a complete intersection and has planar singularities. The contraction $\varphi$ is locally the blow up of the ideal sheaf $\mathcal{I}_{\Gamma}$. In addition, the following relations hold : \begin{align} A_Z^3&=(A_{Z'})^3-2(A_{Z'})\cdot \Gamma-2+2p_a (\Gamma) \\ A_Z^2 \cdot E&= A_{Z'}\cdot \Gamma+2-2 p_a(\Gamma)\\ A_{Z} \cdot E^{2}&=-2+2p_a(\Gamma) \\ E^{3}&=-(A_{Z'})\cdot \Gamma +2 -2 p_a(\Gamma)
\end{align}
\end{lem} \begin{lem}
\label{lem:7}\mbox{} Assume that $\varphi \colon Z \to Z_1$ contracts $E$ to a point $P$; then one of the
following holds: \begin{enumerate} \item[E$2$:] $(E,\mathcal{O}_{E}(E))\simeq ( \PS^2,
\mathcal{O}_{\PS^2}(-1))$ and $P$ is nonsingular. \item[E$3$:] $(E,\mathcal{O}_{E}(E))\simeq (\PS^1 \times \PS^1,
\mathcal{O}_{\PS^1 \times \PS^1}(-1,-1))$ and $P$ is an ordinary double point. \item[E$4$:] $(E, \mathcal{O}_{E}(E))\simeq (Q, \mathcal{O}_{Q}(-1))$, where $Q\subset \PS^3$ is an irreducible reduced
singular quadric surface, and $P$ is a cA$_{n-1}$ point. \item[E$5$:] $(E,\mathcal{O}_{E}(E))\simeq ( \PS^2,
\mathcal{O}_{\PS^2}(-2))$, and $P$ is a point of Gorenstein
index $2$. \end{enumerate} \end{lem} \begin{proof} Diagram~\eqref{eq:40} shows that $g$ maps the centre of $\phi$ to the centre of $\varphi$; in particular $\phi$ also contracts a divisor to a point unless $\phi$ has centre along a curve $C$ such that $A_{X_1} \cdot C=0$. In this case, by Lemma~\ref{lem:10}, $\widetilde{E}\simeq \F_2$ or $\widetilde{E}\simeq \PS^1 \times \PS^1$ and $\varphi$ is of type E$3$ or E$4$.
I now assume that the centre of $\phi$ is a point. The divisor $E\subset Z$ is Cartier by assumption and $A_E=(A_Z-E)_{\vert E}$ is ample: $E$ is a Gorenstein, possibly nonnormal, del Pezzo surface.
The birational morphism $g_{\vert E}\colon \widetilde{E} \to E$ induced by $g$ is an isomorphism outside a finite set of curves. Since $g$ preserves the anticanonical degree of $E$, Cutkosky's classification \cite{Cut88} shows that this degree is $1,2$ or $4$ and that the normalisation of $E$ is a plane or a quadric. Since $E$ is Cartier and $Z$ is Cohen-Macaulay, the Serre criterion shows that $E$ is nonnormal if and only if it is not regular in codimension $1$. As the centre of $g_{\vert E}$ is at worst a finite number of points, $E$ is normal and $E \simeq \widetilde{E}$; the result follows from \cite{Cut88}. \end{proof}
\begin{lem}
\label{lem:8} Let $Y_4\subset \PS^4$ be a non-factorial terminal quartic hypersurface that does not contain a plane. Let $Z\to Y$ be a small modification such that $\rho(Z/Y)=1$. Assume that $\varphi \colon Z \to Z_1$ is an extremal contraction such that $E=\Exc \varphi$ is Cartier and that $E$ is mapped to a curve $\Gamma$. Then $Z_1$ is a terminal Gorenstein Fano $3$-fold with $\rho(Z_1)=1$ and the following relations hold: \begin{enumerate} \item[1.] If $i(Z_1)=1$ and $A_{Z_1}^3=2g_1-2$, then $\deg(\Gamma)=g_1-4+p_a(\Gamma)$ and $p_a(\Gamma) \leq g_1-1$, \item[2.] If $i(Z_1)=2$ and
$A_{Z_1}^3= 8d$, then
$2\deg(\Gamma)= 4d-3+p_a(\Gamma)$ and $p_a(\Gamma)=2k+1$, for some $0\leq k\leq 2d-1 $ \item[3.] If $Z_1$ is a quadric in $\PS^4$, then $3
\deg(\Gamma)= 24+p_a(\Gamma)$ and $p_a(\Gamma)= 3k$, for some $0 \leq k \leq 9$, \item[4.] If $Z_1= \PS^3$, then $4 \deg(\Gamma)=29+p_a(\Gamma) $
and $p_a(\Gamma)= 4k-1$, for some $0 \leq k \leq 7$. \end{enumerate} \end{lem} \begin{rem} \label{rem:35} Note that the bound obtained on the genus of $\Gamma$ is sharper than the Castelnuovo bound when $\Gamma$ is a nonsingular curve. \end{rem} \begin{proof} Since $\vert A_Z\vert = \vert \varphi^{\ast}A_{Z_1}-E\vert$ is basepoint free, $\Gamma$ is a scheme theoretic intersection of members of $\vert A_{Z_1} \vert$, and hence $A_{Z_1}\cdot \Gamma\leq A_{Z_1}^3$. The Lemma then follows from standard manipulation of the relations of Lemma~\ref{lem:10}. \end{proof} \section{Deformation theory} \label{deformation}
This Section first recalls results on the deformation theory of terminal Gorenstein Fano $3$-folds and of their small modifications. I then state an easy extension of results of \cite{Mo82, KM92} on deformations of extremal contractions. As is explained in the Introduction, if $X \to Y$ is a small factorialisation of a terminal Gorenstein Fano $3$-fold and if $\rho(X)=2$, a \emph{$2$-ray game} on $X$ as in \cite{Tak89} determines all possible Sarkisov links with centre along $Y$ and hence all possible $K$-negative extremal contractions $\varphi \colon X \to X'$ and all possible generators of $\Cl Y/\Pic Y$. In the general case, I show that a similar $2$-ray game can be played on $Z$, a small partial factorialisation of $Y$ with $\rho(Z)=2$. This procedure is delicate because $Z$ is not factorial. However, $Z$ can be smoothed and the $2$-ray game on the generic fibre yields Sarkisov links that specialise to appropriate Sarkisov links involving $Z$ with centre along $Y$. All possible generators of $\Cl Y/\Pic Y$ arise in that way.
\subsection{Deformation Theory of weak Fano $3$-folds} \begin{dfn} \label{kura} Let $X$ be a projective variety. The \emph{Kuranishi space} $\Def(X)$ of $X$ is the semi-universal space of flat deformations of $X$. When the functor of flat deformations of $X$ is pro-representable, the Kuranishi family $\mathcal{X}$ is the universal deformation object. \end{dfn} \begin{thm}\cite{Nam97a}
\label{thm:7} Let $X$ be a small modification of a terminal Gorenstein Fano $3$-fold. There is a $1$-parameter flat deformation of $X$ \[ \xymatrix{ X\ar[r] \ar[d]& \mathcal{X} \ar[d]\\ \{0\} \ar[r]& \Delta } \] such that the generic fibre $\mathcal{X}_{\eta}$ is a nonsingular small modification of a terminal Gorenstein Fano $3$-fold. The Picard ranks, the anticanonical degrees and the indices of $X$ and $\mathcal{X}_{\eta}$ are equal. \end{thm} Let $f\colon X \to Y$ be a small modification of a terminal Gorenstein Fano $3$-fold. Let $E$ be a Cartier divisor such that $\overline{E}=f(E)$ is not Cartier, and denote by $Z$ the symbolic blow up of $\overline{E}$ on $Y$, i.e.~ $Z= \Proj_Y \bigoplus_{n \geq 0}\mathcal{O}_Y(n\overline{E})$. Then $f$ naturally decomposes as: \[ f \colon X \stackrel{h}\to Z \stackrel{g}\to Y. \] Note that if $\rho(X/Y)=1$, $h$ is the identity and $Z=X$. I recall some results that relate the deformations of $X, Z$ and $Y$. \begin{pro}\cite[11.4,11.10]{KM92} \label{pro:1} Let $X$ be a normal projective $3$-fold and $f \colon X \to Y$ a proper map with connected fibres such that $R^1f_{\ast}\mathcal{O}_X=0$. \begin{enumerate} \item[1.] There are natural morphisms $F$ and $\mathcal{F}$ that make the diagram \[ \xymatrix{ \mathcal {X} \ar[r]^{\mathcal{F}} \ar[d] & \mathcal{Y} \ar[d]\\ \Def(X) \ar[r]^{F} & \Def(Y) } \] commutative. In addition, $\mathcal{F}$ restricts to $f$ on $X$. \item[2.] Assume further that $X$ has terminal Gorenstein singularities and that
$f$ contracts a curve $C \subset X$ with $A_{X}$-trivial components to a point $\{Q\} \in Y$. Let $X_S \to S$ be a flat deformation of $X$ over the germ of a complex space $0 \in S$. Then, $f$ extends to a contraction $F_S \colon X_S \to Y_S$, and the flop $F_S^+ \colon X_S^+ \to Y_S $ exists and commutes with any base change. \end{enumerate} \end{pro} \begin{thm}\cite[12.7.3-12.7.4]{KM92} \label{thm:2} Let $f\colon X \to Y$ be a small factorialisation of a terminal Gorenstein $3$-fold $Y$. Then, $F\colon \Def(X)\to \Def(Y)$ is finite and $\im [\Def(X) \to \Def(Y)]$ is closed and independent of the choice of $f$. \end{thm} \begin{rem} \label{stratification} By Proposition~\ref{pro:1}, there are maps $\mathcal{G}$ and $\mathcal{H}$ that restrict to $g$ and $h$ on the central fibre and that make the diagram \[ \xymatrix{ \mathcal{X}\ar[r]^{\mathcal{H}}\ar[d] & \mathcal{Z}\ar[r]^{\mathcal{G}}\ar[d]& \mathcal{Y}\ar[d] \\ \Def(X)\ar[r]^{H} & \Def(Z)\ar[r]^{G} & \Def(Y) } \] commutative. The Kuranishi space of $Y$ thereby acquires a natural stratification by sublattices of $\Cl Y$; by Theorem~\ref{thm:2}, there is an inclusion of closed subspaces \[ \Def(X) \subset \Def(Z) \subset \Def(Y). \] As the Picard rank is constant in any $1$-parameter deformation of $Z$, $\Def(Z) \subset \Def(Y)$ corresponds to the locus of the Kuranishi space where the algebraic cycle representing $E$ is preserved. Further, these inclusions are strict because a smoothing of $Y$ does not sit under any $1$-parameter flat deformation of $Z$. \end{rem} \subsection{Deformation of extremal rays}
For future reference, I state a mild generalisation of the results on deformation of extremal rays in \cite{Mo82, KM92}. \begin{thm}[Extension of extremal contractions] \mbox{} \label{thm:5} Let $Z \to Y$ be a small modification of a terminal Gorenstein Fano $3$-fold $Y$. Consider a projective flat deformation $\mathcal{Z} \to S$ of $Z$, where $S$ is a smooth affine complex curve with closed point $\{0\}$ and generic point $\eta$. Let $\varphi \colon Z \to Z_1$ be the contraction of an extremal ray $R \subset Z$ and assume that if $\Exc \varphi$ is a divisor, it is Cartier. The contraction $\varphi$ extends to an $S$-morphism $\Phi \colon \mathcal{Z} \to \mathcal{Z}_1$, where $\mathcal{Z}_1\to S$ is a projective $1$-parameter flat deformation of $Z_1$, and \begin{enumerate}
\item[1.] $\Phi_{\eta}$ is the contraction of an extremal ray,
\item[2.] If $\varphi= \Phi_{0}$ contracts a subset
of $\codim \geq 2$ (resp.~ a divisor, resp.~ is a fibre space of generic
relative dimension $k$), so does $\Phi_\eta$, \item[3.] If $\Exc \varphi$ is a Cartier divisor, in the notation of Lemma~\ref{lem:7},
either $\Phi_{\eta}$ and $\varphi$ are of the same type, or $\Phi_{\eta}$
and $\varphi$ are of types E$3$ and E$4$. \end{enumerate} \end{thm} \begin{proof} The assumption that $E$ is Cartier ensures that the proof of \cite[Theorem 3.47]{Mo82} can be extended to this case. See \cite{Kal07a} for a complete proof. \end{proof}
\begin{thm}[The $2$-ray game] \label{thm:2ray} Let $Z \to Y$ be a small modification of a terminal Gorenstein Fano $3$-fold $Y$ with $\rho(Y)=1$ and $\rho(Z/Y)=1$. Assume that $\varphi \colon Z \to Z_1$ is a divisorial contraction and that $E=\Exc \varphi$ is Cartier. There is a diagram: \begin{eqnarray} \label{eq:41} \xymatrix{ \quad & Z \ar[dl]_{\varphi} \ar[dr]^{g} \ar@{<-->}[rr]^{\Phi} &\quad & \widetilde{Z} \ar[dr]^{\alpha} \ar[dl]_{\tilde{g}} &\quad\\ Z_1 &\quad & Y & \quad & \widetilde{Z_1}} \end{eqnarray} where: \begin{enumerate} \item[1.] $Z$ and $\widetilde{Z}$ are small modifications of $Y$ with Picard rank
$2$, \item[2.] $\Phi$ is a composition of flops that is not an isomorphism, \item[3.] $\alpha$ is a $K$-negative extremal contraction, \item[4.] $Z_1$ (resp.~$\widetilde{Z_1}$) is one of:
\begin{enumerate}
\item[(i)] a terminal Gorenstein Fano $3$-fold with Picard rank $1$ if $\varphi$ (resp.~$\alpha$) is birational,
\item[(ii)] $\PS^2$ if $\varphi$ (resp.~$\alpha$) is a conic bundle,
\item[(iii)] $\PS^1$ if $\varphi$ (resp.~$\alpha$) is a del Pezzo fibration.
\end{enumerate} \end{enumerate} If $\alpha$ is birational, then $\Exc \alpha$ is Cartier. \end{thm} \begin{rem} I want to stress that since $Z$ is not factorial, such a diagram does not exist a priori; in particular, there is no a priori reason for $\Exc \alpha$ to be Cartier when it is a divisor. \end{rem} \begin{proof} By Theorem~\ref{thm:7}, there is a $1$-parameter smoothing $\mathcal{Z}\to \Delta$ of $Z$. For all $t\in \Delta\smallsetminus \{0\}$, $\mathcal{Z}_t$ is a nonsingular small modification of a terminal Gorenstein Fano $3$-fold with $\rho(\mathcal{Z}_t)=2$. Let $g_t \colon \mathcal{Z}_t \to \mathcal{Y}_t$ be the anticanonical map. Note that Proposition~\ref{pro:1} ensures that $\mathcal{Y}_t$ is a $1$-parameter flat deformation of $Y$; in particular $\mathcal{Y}_t$ is a terminal Gorenstein Fano $3$-fold with $\rho(\mathcal{Y}_t)=1$ and $A_{\mathcal{Y}_t}^3=A_Y^3$.
Theorem~\ref{thm:5} shows that there is an extremal contraction $\varphi_t $ of $\mathcal{Z}_t$ that specialises to $\varphi$ on the central fibre. As $\mathcal{Z}_t$ is factorial, a $2$-ray game as in \cite{Tak89} yields a diagram: \[ \xymatrix{ \quad & \mathcal{Z}_t \ar[dl]_{\varphi_t} \ar[dr]^{h_t} \ar@{<-->}[rr]^{\Phi_t} &\quad & \widetilde{Z}_t \ar[dr]^{\alpha_t} \ar[dl]_{\widetilde{h}_t} &\quad\\ \mathcal{Z}_{1,t} &\quad & \mathcal{Y}_t & \quad & \widetilde{\mathcal{Z}_{1,t}},} \] where \begin{enumerate} \item[1.] $\mathcal{Z}_t$ and $\widetilde{\mathcal{Z}_t}$ are
nonsingular small modifications of $\mathcal{Y}_t$ with Picard rank
$2$, \item[2.] $\Phi_t$ is a composition of flops that is not an isomorphism, \item[3.] $\alpha_t$ is a $K$-negative extremal contraction, \item[4.] $\mathcal{Z}_{1,t}$ (resp.~ $\widetilde{\mathcal{Z}_{1,t}}$) is one of:
\begin{enumerate}
\item[(i)] a terminal Gorenstein Fano $3$-fold with Picard rank $1$ if $\varphi_t$ (resp.~ $\alpha_t$) is birational,
\item[(ii)] $\PS^2$ if $\varphi_t$ (resp.~ $\alpha_t$) is a conic bundle,
\item[(iii)] $\PS^1$ if $\varphi_t$ (resp.~ $\alpha_t$) is a del Pezzo fibration.
\end{enumerate} \end{enumerate} The theorem then follows from Lemma~\ref{lem:spe}. \end{proof} \begin{lem}[Specialisation of a $2$-ray game] \label{lem:spe}
The elementary Sarkisov link on $\mathcal{Z}_t$, $t \neq 0$, induces
an elementary Sarkisov link on the central fibre of $\mathcal{Z} \to \Delta$. \end{lem} \begin{proof} This lemma is standard, see \cite{Kal07a} for a proof; it follows from the more general \cite[Theorem 4.1]{dFH09}. \end{proof}
Let $Y$ be a terminal Gorenstein Fano $3$-fold with $\rho(Y)=1$. Assume that $X$, a small factorialisation of $Y$, is weak-star Fano. Theorem~\ref{thm:3} shows that there is a sequence of contractions: \begin{equation} \label{eq:7} \xymatrix{ X_0 \ar@{-->}[r]^-{\phi_0} \ar[d]& X_1 \ar@{-->}[r]^-{\phi_1}\ar[d] & \cdots & X_{n-1}\ar@{-->}[r]^-{\phi_{n-1}} \ar[d]& X_n \ar[d] \\ Y_0 & Y_1 & \cdots & Y_{n-1} & Y_n } \end{equation} I assume that at least one of the contractions $\phi_i$ is divisorial. Then, for a suitable small factorialisation $X_0$, $\phi_{0}= \phi$ is divisorial. Let $\widetilde{E}= \Exc \phi$ and $Z_0$ be a small modification of $Y$ such that $X_0\to Y_0$ factors through $Z_0$, $\rho(Z_0/Y_0)=1$ and such that $E$, the image of $\widetilde{E}$ on $Z_0$, is Cartier. Then there is an extremal contraction $\varphi \colon Z_0\to Z_1=Y_1$, such that the diagram \begin{eqnarray} \label{eq:1} \xymatrix{\widetilde{E} \subset X \ar[r]^{\phi} \ar[d]_g & X_1\ar[d]^{g_1}\\ E \subset Z \ar[d]_h \ar[r]^{\varphi} & Z_1\\ \overline{E} \subset Y & } \end{eqnarray} commutes. Theorem~\ref{thm:2ray} shows that $Z_0,Y_0$ and $Z_1$ fit in an elementary Sarkisov link as in \eqref{eq:41}. To each such elementary link, one can associate systems of Diophantine equations that reflect the numerical constraints imposed by contractions of extremal rays on intersections of classes in cohomology. These constraints can be made explicit when $\Exc \varphi$ (and $\Exc \alpha$, if it is a divisor) is Cartier, as is explained in Lemma~\ref{lem:10} and in Section~\ref{motivation}.
This procedure can be carried out at each divisorial step of the MMP on $X_0$.
\section{A geometric motivation of non-factoriality} \label{motivation} In this section, I write down explicitly systems of Diophantine equations associated to elementary Sarkisov links as in \eqref{eq:41}. As Lemma~\ref{lem:10} shows, one can associate to each extremal contraction numerical constraints. These systems of Diophantine equations reflect the relationships between the constraints associated to the extremal contractions $\varphi$ and $\alpha$ on both sides of the link.
I then list all possible divisorial contractions that can occur when running the MMP on weak-star Fano $3$-folds $X$ whose anticanonical model $Y$ has $\rho(Y)=1$. Not all links listed in this Section are geometrically realizable; I call them \emph{numerical links} in order to stress this fact.
\subsection{Systems of Diophantine equations associated to elementary links} \label{2raygame} In this section, I use the notation set in \eqref{eq:41}. Let $\widetilde{E}$ be the proper transform of $E=\Exc\varphi$ on $\widetilde{Z}$; $\widetilde{E}$ is a Cartier divisor. In what follows, I assume that $i(\widetilde{Z})= i(Z)=i(Y)=1$. This is a convenience; Remark~\ref{rem:hif} explains how to recover the general case. By construction, $A_{\widetilde{Z}}$ and $\widetilde{E}$ are generators of $\Pic \widetilde{Z}$. Let $g$ denote the genus of $Y,Z$ and $\widetilde{Z}$.
Since $\Phi$ is a sequence of flops, \begin{eqnarray}
\label{eq:17}
A_Z^2 {\cdot} E & = A_{\widetilde{Z}}^2 {\cdot} \widetilde{E} \nonumber\\
A_Z {\cdot} E^{2} & = A_{\widetilde{Z}}{\cdot} \widetilde{E}^{2}\\ \widetilde{E}^{3} & =E^{3}-e.\nonumber \end{eqnarray} \begin{lem}\cite{T02}
\label{lem:4} The correction term $e$ in \eqref{eq:17} is a strictly positive integer. \end{lem} \begin{proof} This is standard; I include the argument for clarity of exposition. As $E$ is Cartier and $\varphi$-negative, for any effective curve $\gamma \in \Exc \Phi$, $E\cdot\gamma$ is strictly positive. Since $\widetilde{E}$ is also Cartier, $e$ is an integer.
Consider a common resolution of $Z$ and $\widetilde{Z}$: \[ \xymatrix{ \quad & W\ar[dr]^{q}\ar[dl]_{p}& \quad \\ Z \ar@{<-->}[rr]^{\Phi} &\quad & \widetilde{Z}, } \] Since $Z$ and $\widetilde{Z}$ are terminal, the Negativity Lemma shows that every $p$-exceptional divisor is also $q$-exceptional and that $p^{\ast}A_Z= q^{\ast}A_{\widetilde{Z}}$.
Then, \[p_{\ast}^{-1}E= p^{\ast} E-R= q^{\ast}(\widetilde{E})-R',\] where $R$ and $R'$ are effective exceptional divisors for $p$ and $q$. In particular: \[ -p^{\ast}(E)=-q^{\ast}(\widetilde{E})+R'-R. \] By the construction of the $E$-flop, $-q^{\ast}(\widetilde{E})$ is $p$-nef. The Negativity Lemma shows that $R'-R$ is strictly effective because $\Phi$ is not an isomorphism, and its pushforward $p_{\ast}(R'-R)$ is effective. Hence, $-p_{\ast}(R'-R)^2$ is a non-zero effective $1$-cycle contained in the indeterminacy locus of $\Phi$, and $e=-p_{\ast}(R'-R)\cdot E>0$. \end{proof}
I now write down numerical constraints associated to the extremal contraction $\alpha$; this is similar to what is done in Lemma~\ref{lem:10}. These constraints and \eqref{eq:17} yield the systems of Diophantine equations that underlie the $2$-ray game.
\subsubsection{$\alpha$ is divisorial} Let $D$ be the exceptional divisor of $\alpha$ and $C$ its centre. Since $D$ is Cartier, there are integers $x, y$ such that: \begin{equation}
\label{eq:21} D=x A_{\widetilde{Z}} - y \widetilde{E}. \end{equation} If $\alpha$ is of type E$1$, \begin{equation}
\label{eq:20} A_{\widetilde{Z}}= \alpha^{\ast}(A_{\widetilde{Z}_1})-D. \end{equation} Since $\widetilde{Z}_1$ is Gorenstein, $\alpha(\widetilde{E})$ is Cartier because it is $\Q$-Cartier; \eqref{eq:21} and \eqref{eq:20} show that $y$ divides $x+1$. Note that $y$ is the index of $\widetilde{Z}_1$ and define $k$ by $x+1=yk$. By Lemma~ \ref{lem:10}, \[ \left \{ \begin{array}{c} (A_{\widetilde{Z}}+D)^3=(A_{\widetilde{Z}}+D)^2 A_{\widetilde{Z}}=(A_{\widetilde{Z}_1})^3\\ (A_{\widetilde{Z}}+D)^2 D=0\\ (A_{\widetilde{Z}}+D)DA_{\widetilde{Z}}=A_{\widetilde{Z}_1}\cdot C=i(\widetilde{Z}_1) \deg(C)\\ A_{\widetilde{Z}}D^2=2p_a(C)-2 \end{array} \right . \] These relations and \eqref{eq:17} yield the system of equations associated to the configuration $(\varphi, \alpha)$: \[ \left \{ \begin{array}{c} y^2[A_Z^3k^2-2(A_{Z_1}\cdot \Gamma+2-2p_a(\Gamma))k+2p_a(\Gamma)-2]\nonumber \\=A_{\widetilde{Z_1}}^3 \nonumber \\ A_Z^3k^2(yk-1)+(A_{Z_1}\cdot \Gamma+2-2p_a(\Gamma))(2k-3k^2y)\nonumber \\+ (2p_a(\Gamma)-2)(3ky-1)+(A_{Z_1}\cdot \Gamma-2+2p_a(\Gamma)+e)y=0 \nonumber\\ A_Z^3k(yk-1)-(A_{Z_1}\cdot \Gamma+2-2p_a(\Gamma))(2yk-1)\nonumber \\+ (2p_a(\Gamma)-2)y= \deg(C) \nonumber\\ A_Z^3(yk-1)^2-2(A_{Z_1}\cdot \Gamma+2-2p_a(\Gamma))y(yk-1)\nonumber \\ +(2p_a(\Gamma)-2)y^2=2p_a(C)-2 \nonumber \end{array} \right . \]
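As a consistency check (this is only an illustration, reading the entries of Table~\ref{table1} as $(g,d)=(p_a(\Gamma),\deg \Gamma)$ and $(h,e)=(p_a(C),\deg C)$), consider case $1$ of Table~\ref{table1}: there $A_Z^3=4$, $Z_1=\widetilde{Z_1}=X_{22}$, $p_a(\Gamma)=0$, $\deg \Gamma=8$ and $y=1$, since $\widetilde{Z_1}$ has index $1$. The first equation becomes
\[
4k^2-20k-2=22, \qquad \text{that is } (k-6)(k+1)=0,
\]
so $k=6$; substituting $k=6$ into the remaining equations gives
\[
720-960-34+(6+e)=0, \qquad \deg(C)=120-110-2=8, \qquad 2p_a(C)-2=100-100-2=-2,
\]
that is $e=268$, $\deg(C)=8$ and $p_a(C)=0$, in agreement with the entries of the table.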
\begin{rem} Assume that the degree of $Z$ is fixed. Since $Z_1$ and $\widetilde{Z_1}$ are terminal Gorenstein Fano $3$-folds with Picard rank $1$, there are finitely many possible values for ${A_{Z_1}}^3$ and ${A_{\widetilde{Z_1}}}^3$. Further, once ${A_{Z_1}}^3$ and ${A_{\widetilde{Z_1}}}^3$ are fixed, Lemma~\ref{lem:8} shows that there are only finitely many possibilities for $(p_aC, \deg C)$. As a result, there is a finite number of Diophantine systems to consider to determine all numerical Sarkisov links $(\varphi, \alpha)$ with centre along $Z$. \end{rem}
\subsubsection{$\alpha$ is a conic bundle} Let $L$ be the pullback of an ample generator of $\Pic \widetilde{Z_1}$. Since $\rho(\widetilde{Z})=2$, $\widetilde{Z_1}=\PS^2$ \cite[Lemma 3.6]{Kal07b} and $L=\alpha^{\ast}\mathcal{O}_{\PS^2}(1)$.
There are integers $x, y$ such that: \begin{eqnarray}
\label{eq:22} L=x A_{\widetilde{Z}} - y \widetilde{E}. \end{eqnarray}
\begin{cla} The integers $x$ and $y$ are positive and coprime; $y$ is equal to $1$ or $2$. \end{cla} This is similar to the argument in \cite{Tak89}. Since $E$ is fixed on $Z$, $\widetilde{E}$ is fixed, and hence $x\geq 0$. If $y\leq 0$, $\vert L \vert \supset \vert x A_{\widetilde{Z}}\vert$, and $L$ is big. This contradicts $\alpha$ being of fibering type. The integers $x, y$ are coprime because $A_{\widetilde{Z}}$ and $\widetilde{E}$ form a $\Z$-basis of $\Pic \widetilde{Z}$, $L$ is prime and $L$ is not an integer multiple of either of them. Denote by $l$ an effective nonsingular curve that is contracted by $\alpha$. Then $A_{\widetilde{Z}}\cdot l \leq 2$, and since $x(A_{\widetilde{Z}}\cdot l)=y\widetilde{E}\cdot l$, the claim follows.
Let $\Delta\sim-\alpha_{\ast}(A_{\widetilde{Z}/\widetilde{Z}_1})^2$ be the discriminant curve of $\alpha$. \[ \left \{ \begin{array}{c} L^3=0\\ L^2 {\cdot} A_{\widetilde{Z}}=2 \\ L {\cdot} A_{\widetilde{Z}}^2 =12-\deg(\Delta) \end{array} \right. \] The system of equations associated to the configuration $(\varphi, \alpha)=(E1, CB)$ reads: \[ \left \{ \begin{array}{c} A_Z^3x^3 -3(A_{Z_1}{\cdot} \Gamma+2-2p_a(\Gamma))x^2y \\+3(2p_a(\Gamma)-2)xy^2 +(A_{Z_1}{\cdot} \Gamma-2+2p_a(\Gamma)+e)y^3=0\\ A_Z^3x^2-2(A_{Z_1}{\cdot} \Gamma+2-2p_a(\Gamma))xy\\+(2p_a(\Gamma)-2)y^2=2\\ A_Z^3x-(A_{Z_1}{\cdot} \Gamma+2-2p_a(\Gamma))y= 12-\deg(\Delta) \end{array} \right . \]
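Again as a consistency check (same reading of the table entries as above), case $4$ of Table~\ref{table1} has $A_Z^3=4$, $Z_1=X_{22}$, $p_a(\Gamma)=2$ and $\deg\Gamma=10$, so that $A_{Z_1}\cdot \Gamma+2-2p_a(\Gamma)=8$. The second equation, $4x^2-16xy+2y^2=2$, forces $(x,y)=(4,1)$ (the only solution with $x,y$ positive and coprime and $y\leq 2$), and then
\[
12-\deg(\Delta)=4\cdot 4-8\cdot 1=8, \qquad 4\cdot 4^3-3\cdot 8\cdot 4^2+3\cdot 2\cdot 4+(10-2+4+e)=0,
\]
that is $\deg(\Delta)=4$ and $e=92$, as in the table.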
\subsubsection{$\alpha$ is a del Pezzo fibration} Let $L$ be the pullback of an ample generator of $\Pic \widetilde{Z_1}$. As $\widetilde{Z_1}=\PS^1$, $L=\alpha^{\ast}\mathcal{O}_{\PS^1}(1)$. Let $d$ be the degree of the generic fibre. There are integers $x,y$ such that \eqref{eq:22} holds. \begin{cla} The integers $x$ and $y$ are positive and coprime; $y$ can only be equal to $1,2$ or $3$. \end{cla} This is proved as in the conic bundle case. \[ \left \{ \begin{array}{c} L^2{\cdot} A_{\widetilde{Z}}=0\\ L^2 {\cdot} \widetilde{E}=0\\ L{\cdot}A_{\widetilde{Z}}^2=d \end{array} \right . \] The system of equations associated to $(\varphi, \alpha)$ reads: \[ \left \{ \begin{array}{c} A_Z^3x^2-2(A_{Z_1}{\cdot} \Gamma+2-2p_a(\Gamma))xy+\\(2p_a(\Gamma)-2)y^2=0 \\ (A_{Z_1}{\cdot} \Gamma+2-2p_a(\Gamma))x^2-2(2p_a(\Gamma)-2)xy \\-(A_{Z_1}{\cdot} \Gamma-2+2p_a(\Gamma)+e)y^2=0 \\ A_Z^3x-(A_{Z_1}{\cdot} \Gamma+2-2p_a(\Gamma))y= d \end{array} \right . \] \begin{rem} \label{rem:hif}
\cite{Sh89} shows that when $i(Y)=4$, $Y$ is isomorphic to $\PS^3$. If $i(Y)=2$, both $\alpha$ and $\varphi$ are either E$2$ contractions, \'etale conic bundles or quadric bundles. If $i(Y)=3$, then $\alpha$ and $\varphi$ are $\PS^2$-bundles over $\PS^1$. The MMP on small modifications of higher index Fano $3$-folds is therefore very simple. If $\widetilde{H}$ is such that $A_{\widetilde{Z}}= i(\widetilde{Z})\widetilde{H}$, the systems written above hold for any index after replacing $A_{\widetilde{Z}}$ by $\widetilde{H}$ in \eqref{eq:21} and \eqref{eq:22}. \end{rem}
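For the del Pezzo fibration case, a similar check can be run (with the same reading of the table entries as above, and taking the entry $k$ in the dP rows to be the degree $d$ of the generic fibre) on case $12$ of Table~\ref{table1}: $A_Z^3=4$, $Z_1=X_{16}$, $p_a(\Gamma)=1$, $\deg\Gamma=6$, so that $A_{Z_1}\cdot \Gamma+2-2p_a(\Gamma)=6$ and $2p_a(\Gamma)-2=0$. The first equation of the del Pezzo fibration system reads
\[
4x^2-12xy=0,
\]
so $(x,y)=(3,1)$; the second gives $6\cdot 3^2-(6-2+2+e)=0$, that is $e=48$, and the third gives $d=4\cdot 3-6\cdot 1=6$, matching the entries $k=6$ and $e=48$ in the table.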
\begin{table}
\caption{Numerical Sarkisov Links, $g=3$}
\label{table1}
\begin{tabular}{rclclccc} \hline
&$(\varphi, \alpha)$ &$Z_1$& $ \varphi$ & $\widetilde{Z_1}$ & $\alpha$ & $e$ & $R$\\ \hline $1$ & E$1$-E$1$ &$X_{22}$ &$(g,d)= (0,8)$ & $X_{22}$ & $(h,e)=(0, 8)$ & $268$ & $+$\\
$2$ & E$1$-E$1$& $X_{22}$ & $(g,d)= (1,9)$ & $V_5$ & $(h,e)= (1,9)$ & $171$ & $+$\\
$3$ & E$1$-E$1$& $X_{22} $ & $(g,d)=(2,10)$ & $X_{22}$ & $(h,e)= (2,10)$ & $80$& $+$\\
$4$ & E$1$-CB & $X_{22}$ & $(g,d)=(2,10)$ & $ \PS^{2}$ & $\deg \Delta= 4$ &$92$& $+$\\
$5$& E$1$-E$1$ & $X_{22}$ & $(g,d)=(3,11)$ & $X_{12}$ & $(h,e)=(0,3) $ & $29$& $+$\\
$6$ & E$1$-E$1$& $X_{18}$ & $(g,d)=(0,6)$ & $X_{18}$& $(h,e)= (0,6)$ & $144$ & $+$\\
$7$ & E$1$-E$1$& $X_{18}$ & $(g,d)=(1,7)$ & $V_4$& $(h,e)=(1,7)$ & $77$ & $+$\\
$8$& E$1$-E$1$ & $X_{18}$ & $(g,d)=(2,8)$& $X_{18}$ & $(h,e)= (2,8)$ & $16$& $+$\\
$\times 9$ & E$1$-CB& $X_{18}$ & $(g,d)=(2,8)$ & $\PS^{2}$& $\deg \Delta = 6$ & $26$ & $+$ \\
$10$ & E$1$-E$1$& $X_{16}$ & $(g,d)=(0,5)$ & $Q $ & $(h,e)= (3,9)$ & $103$& $+$\\
$11$ & E$1$-E$1$& $X_{16}$ & $(g,d)=(1,6)$ & $X_{16}$ & $(h,e)=(1,6)$ & $42$& $+$\\
$12$ & E$1$-dP& $X_{16}$ & $(g,d)=(1,6)$ & $\PS^{1}$& $k=6$ & $48$& $+$\\
$13$ & E$1$-E$1$& $X_{16}$ & $(g,d)=(2,7)$ & $X_{2,2,2}$ & $(h,e)= (0,1)$ & $8$ & $+$\\
$14$ & E$1$-E$1$& $X_{16}$ & $(g,d)=(2,7)$ & $V_4$& $(h,e)= (5,9)$ & $4$& $+$\\
$15$ & E$1$-E$1$& $X_{14}$ & $(g,d)=(0,4)$ & $X_{14}$ & $(h,e)= (0,4)$ & $68$ & ?\\
$16$ & E$1$-E$1$& $X_{14}$ & $(g,d)=(1,5)$ & $Q$& $(h,e)=(9, 11)$ & $24$ & $+$\\
$\bullet17$ & E$1$-E$1$& $X_{14}$ & $(g,d)=(1,5)$& $V_3$ & $(h,e)= (1,5)$ & $25$& ?\\
$18$ & E$1$-E$1$& $X_{12}$ & $(g,d)=(0,3)$ & $X_{22}$& $(h,e)= (3,11)$ &$29$ & $+$\\
$\bullet19$ & E$1$-E$1$& $X_{12}$ & $(g,d)=(0,3)$& $\PS^{3}$ & $(h,e)=(7, 9)$ & $45$ & $+$\\
$20$ & E$1$-E$1$& $X_{12}$& $(g,d)=(1,4)$ & $X_{12}$ & $(h,e)= (1,4)$ & $8$& $+$\\
$21$ & E$1$-dP& $X_{12}$ & $(g,d)=(1,4)$ & $\PS^1$& $k=4$ & $12$ & $+$\\
$\bullet22$& E$2$-E$2$& $X_{12}$ && $X_{12}$ && $30$ & $+$\\
$\times 23$& E$2$-E$1$& $X_{12}$& & $X_{10}$ & $(h,e)= (0,2)$ & $29$& $+$\\
$\times 24$& E$2$-E$1$& $X_{12}$& & $V_{5}$ & $(h,e)= (7,12)$ & $24$& $+$\\
$\bullet25$ & E$1$-E$1$& $X_{10}$ & $(g,d)=(0,2)$ & $X_{10}$ & $(h,e)= (0,2)$ & $28$ & ?\\
$\times 26$ & E$1$-E$1$& $X_{10}$ & $(g,d)=(0,2)$ & $V_5$& $(h,e)= (7,12)$ & $23$& $+$\\
$\times27$ & E$1$-E$2$& $X_{10}$ & $(g,d)=(0,2)$ & $X_{12}$&& $29$ & $+$ \\
$\bullet28$ & E$1$-E$1$& $X_{10}$ & $(g,d)=(1,3)$ & $\PS^3$& $(h,e)= (15,11)$ & $2$ & $+$\\
$\times 29$ & E$1$-E$1$& $X_{10}$ & $(g,d)=(1,3)$& $V_2$ & $(h,e)= (1,3)$ & $3$ & ?\\
$\bullet30$ & E$1$-CB& $X_{2,2,2}$ & $(g,d)=(0,1)$ & $\PS^{2}$ & $\deg \Delta = 7$ & $17$& \\
$\times31$ & E$1$-E$1$& $X_{2,2,2}$ & $(g,d)=(0,1)$ &$X_{16}$ & $(h,e)=(2,7)$ & $8$ & $+$\\ $32$ & E$1$-dP& $V_2$ & $(g,d)=(1,3)$ & $\PS^1$ & $k=6$ & $48$& $+$\\ $33$& E$1$-E$1$ & $V_2$ & $(g,d)=(1,3)$ & $X_{16}$ & $(h,e)=(1,6)$ & $42$& $+$\\
$\bullet34$ & E$1$-E$1$& $V_3$ & $(g,d)=(3,6)$ & $\PS^{3}$ & $(h,e)=(3,8)$ & $65$ & $+$ \\
$\bullet35$ &E$3$-E$3$& $X_{2,3}$ & $(g,d)=(0,0)$ & $X_{2,3}$ & $(h,e)=(0,0)$ & $12$ & $?$ \\
$36$ &E$3$-E$1$& $X_{2,3}$ & $(g,d)=(0,0)$ & $V_3$ & $(h,e)=(3,6)$ & $9$ & $?$ \\ $37$ &E$3$-E$1$& $X_{2,3}$ & $(g,d)=(0,0)$ & $Q$ & $(h,e)=(12,12)$ & $8$ & $+$ \\ \hline
\end{tabular}
\end{table}
\begin{nt} Most of the notation used in Tables~\ref{table1} and \ref{table2} is self-explanatory.
The column labelled $R$ gathers results from Section~\ref{rationality} on rationality. The symbol $\bullet$ indicates that the link is a known geometric construction (e.g.~ Cases $17,19$ and $26$); see Section~\ref{examples} for examples and details. The symbol $\times$ indicates that the link is not geometrically realizable. Every numerical Sarkisov link that involves a contraction of type E$1$ with centre along a curve $\Gamma$ such that $(p_a(\Gamma), \deg \Gamma)=(0,0)$ also appears with that contraction replaced by a contraction of type E$3$ or E$4$. I do not repeat these solutions in the tables.
\end{nt}
\begin{rem}
\label{degree}
Observe that the possible generators of $\Cl Y/\Pic Y$ have relatively low degree. When $Y$ is the midpoint of a Sarkisov link of type $(\varphi, \alpha)$, one can choose as a generator of $\Cl Y/\Pic Y$ either $\Exc \varphi$ or $\Exc \alpha$ when both $\varphi$ and $\alpha$ are divisorial. When $\varphi$ (resp.~ $\alpha$) is a strict fibration, consider the pullback of $\mathcal{O}_{Z_1}(1)$ (resp.~ $\mathcal{O}_{\widetilde{Z}_1}(1)$) instead of $\Exc \varphi$ (resp.~ $\Exc \alpha$). The anticanonical degree of the generator of $\Cl Y/\Pic Y$ is then given by Lemma~\ref{lem:10} or by the systems in Section~\ref{2raygame}. For $Y\subset \PS^4$ a quartic $3$-fold, the degree of a generator of $\Cl Y/ \Pic Y$ is at most $10$.
\end{rem}
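A minimal illustration of where the bound $10$ comes from (with the same reading of Table~\ref{table1} as above): for a contraction $\varphi$ of type E$1$, Lemma~\ref{lem:10} gives
\[
A_Z^2\cdot E=A_{Z_1}\cdot \Gamma+2-2p_a(\Gamma),
\]
and, for instance, in case $1$ of Table~\ref{table1} this equals $8+2-0=10$, which realises the bound.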
\begin{rem}[Exclusion of cases] It is known that if $\widetilde{X}\to \PS^2$ is a standard conic bundle whose discriminant has degree at least $6$, then it is not rational. This shows, using Lemma~\ref{ratio}, that the (deformed numerical) Sarkisov link $9$ in Table~\ref{table1} is not geometrically realizable. The other excluded cases correspond to deformed numerical Sarkisov links that are not geometrically realizable \cite{Tak89, IP99}. \end{rem}
\subsection{Numerical Sarkisov links with centre along higher degree Fano $3$-folds} \label{tables} The numerical Sarkisov links with centres along terminal Gorenstein Fano $3$-folds of index $1$ and genus $g\geq 4$ are listed in Table~\ref{table2}.
\begin{center}
\begin{longtable}[p]{ rclclccc}
\label{table2}\\
\caption{Numerical Sarkisov links, $g\geq4$}\\
\hline
& $(\varphi, \alpha)$ & $Z_1$ &$\varphi$ & $\widetilde{Z_1} $& $\alpha$ & $e$ & $R$
\\[2mm] \hline
\endfirsthead
\caption[]{Continued}\\
\hline
& $(\varphi, \alpha)$ & $Z_1$ &$\varphi$ & $\widetilde{Z_1} $& $\alpha$ & $e$ & $R$
\\[2mm] \hline \endhead \multicolumn{8}{r}{{Continued on Next Page\ldots}} \\ \endfoot
\\[2mm] \hline \endlastfoot
\multicolumn{8}{c}{$g=4$}\\ \hline
$1$ & E$1$-E$1$& $X_{22}$ & $(g,d)=(0,7)$ & $X_{22}$ & $(h,e)=(0,7)$ & $89$ & $+$ \\
$2$ & E$1$-E$1$& $X_{22}$ & $(g,d)=(1,6)$ & $Q$ & $(h,e)=(1,8)$ & $48$ & $+$\\
$3$ & E$1$-E$1$& $X_{22}$ & $(g,d)=(2,9)$ & $X_{14}$ & $(h,e)=(0,3)$ & $12$ & $+$\\
$4$ & E$1$-E$1$& $X_{18}$ & $(g,d)=(1,6)$ & $X_{18}$ & $(h,e)=(1,6)$ & $12$ & $+$ \\
$5$ & E$1$-E$1$& $X_{18}$ & $(g,d)=(0,5)$ & $V_5$ & $(h,e)=(2,9)$ & $47$ & $+$ \\
$6$ & E$1$-dP& $X_{18}$ & $(g,d)=(1,6)$ & $\PS^1$ & $k=6$ & $18$& $+$ \\
$7$ & E$1$-E$1$& $X_{16}$ & $(g,d)=(0,4)$ & $X_{16}$ & $(h,e)=(0,4)$ & $32$ & $+$\\
$8$ & E$1$-E$1$& $X_{16}$ & $(g,d)=(1,5)$ & $\PS^3$ &$(h,e)=(8,9)$ & $8$ &$+$\\
$9$ & E$1$-E$1$& $X_{14}$ & $(g,d)=(0,3)$ & $X_{22}$ & $(h,e)=(2,9)$ & $12$ & $+$\\
$10$ & E$1$-E$1$& $X_{14}$ & $(g,d)=(1,4)$ & $X_{2,2,2}$ & $(h,e)=(0,0)$ & $4$ & ?\\
$11$ & E$1$-CB& $X_{14}$ & $(g,d)=(0,3)$ & $\PS^2$ &$\deg \Delta= 5$ & $23$ &$+$ \\
$12$&E$1$-E$3$& $X_{14}$ & $(g,d)=(1,4)$ & $V_1$ & & $4$ & $?$ \\
$\bullet 13$& E$2$-E$1$& $X_{14}$& & $V_{3}$ & $(h,e)= (0,4)$ & $16$& ?\\
$\times14$& E$2$-E$1$& $X_{14}$& & $Q$ & $(h,e)= (7,10)$ & $15$& $+$\\
$\bullet15$ & E$1$-E$1$& $X_{12}$ & $(g,d)=(0,2)$ & $Q$ &$(h,e)=(7,10)$ & $14$ & $+$ \\
$\times16$ & E$1$-E$1$& $X_{12}$ & $(g,d)=(0,2)$ & $V_3$ &$(h,e)=(0,4)$ & $15$ & $+$ \\
$\bullet17$ & E$1$-E$1$& $X_{10}$ & $(g,d)=(0,1)$ & $X_{10}$ & $(h,e)=(0,1)$ & $11$& ? \\
$\times18$& E$1$-E$1$ & $X_{10}$ & $(g,d)=(0,1)$ & $V_5$ & $(h,e)=(6,11)$ & $6$ & $+$\\
$19$ & E$3$-E$1$& $X_{2,2,2}/V_1$ & $(g,d)=(0,0)$ & $X_{14}$ & $(h,e)=(1,4)$ & $4$ & ?\\
$20$ & E$3$-dP& $X_{2,2,2}/V_1$ & $(g,d)=(0,0)$ & $\PS^1$ & $k=4$ & $8$ & ? \\
$21$ & E$1$-E$1$& $V_5$ & $(g,d)=(6, 11)$ & $V_5$ & $(h,e)=(0,8)$ & $45$ & $+$ \\
$22$ & E$1$-E$1$& $V_4$ & $(g,d)=(4,8) $ & $\PS^1$ & $k=8$ & $32$ & $+$\\
$23$ & E$1$-E$1$& $V_4$ & $(g,d)=(4,8)$ & $X_{22}$ & $(h,e)=(1,8)$ & $24$ & $+$ \\
$24$ & E$1$-E$1$& $V_3$ & $(g,d)=(2,5)$ & $V_4$ & $(h,e)=(0,6)$ & $28$ & $+$\\
$\times25$ & E$1$-E$1$& $V_2$ & $(g,d)=(0,2)$ & $X_{16}$ & $(h,e)=(0,4)$ & $32$ & $+$\\ \hline
\multicolumn{8}{c}{$g=5$}\\
\hline $1$ & E$1$-E$1$& $X_{22}$ & $(g,d)=(0,6)$ & $X_{22}$ & $(h,e)=(0,6)$ & $36$ & $+$ \\
$\bullet2$ & E$1$-E$1$& $X_{22}$ & $(g,d)=(1,7)$ & $\PS^3$ & $(h,e)=(1,7)$ & $14$ & $+$ \\
$3$ & E$1$-E$1$& $X_{18}$ & $(g,d)=(1,5)$ & $X_{12}$ & $(h,e)=(0,1)$ & $3$ & $+$ \\
$4$ & E$1$-E$1$& $X_{18}$ & $(g,d)=(0,4)$ & $Q$ & $(h,e)=(2,8)$ & $20$ & $+$ \\
$5$ & E$1$-E$1$& $X_{16}$ & $(g,d)=(0,3)$ & $V_5$ & $(h,e)=(3,9)$ & $12$ & $+$ \\
$\times 6$& E$2$-E$1$& $X_{16}$& & $X_{14}$ & $(h,e)= (0,2)$ & $11$& $+$ \\
$\bullet7$& E$2$-E$2$ & $X_{16}$ &&$X_{16}$ && $12$ & $+$\\
$\times 8$& E$2$-E$2$ & $X_{16}$ &&$V_{2}$ && $12$ & $+$\\
$\bullet9$ & E$1$-E$1$& $X_{14}$ & $(g,d)=(0,2)$ & $X_{14}$ & $(h,e)=(0,2)$ & $10$ & ? \\
$\times 10$& E$1$-E$2$& $X_{14}$ & $(g,d)=(0,2)$ & $X_{16}$&& $11$ & $+$ \\
$\times11$& E$1$-E$2$& $X_{14}$ & $(g,d)=(0,2)$ & $V_2$ && $11$ & $+$ \\
$\times12$ & E$1$-E$1$& $X_{12}$ & $(g,d)=(0,1)$ & $X_{18}$ & $(h,e)=(1,5)$ & $3$ & $+$\\
$\bullet13$ & E$1$-dP& $X_{12}$ & $(g,d)=(0,1)$ & $\PS^1$ & $k=5$ & $8$ & $+$\\
$14$ & E$3$-CB& $X_{10}$ & $(g,d)=(0,0)$ & $\PS^2$ & $\deg \Delta= 6$ & $6$ & ?\\
$15$ & E$1$-dP& $V_3$ & $(g,d)=(1,4)$ & $\PS^1$ & $k=8$ & $24$ & $+$\\
$16$ & E$1$-dP & $Q$ & $(g,d)=(30,23)$ & $\PS^1$ & $k=2$ & $4464$ & $+$\\
\hline
\multicolumn{8}{c}{$g=6$}\\
\hline $1$ & E$1$-E$1/2$& $X_{22}$ & $(g,d)=(1,6)$ & $X_{16}$ & $(h,e)=(0,2)$ & $2$ & $+$ \\
$2$ & E$1$-E$1$& $X_{22}$ & $(g,d)=(0,5)$ & $V_5$ & $(h,e)=(0,7)$ & $18$& $+$ \\
$3$ & E$1$-E$1$& $X_{18}$ & $(g,d)=(0,3)$ & $X_{18}$ & $(h,e)=(0,3)$ & $9$ & $+$\\
$\times4$& E$2$-E$1$& $X_{18}$& & $X_{22}$ & $(h,e)= (1,6)$ & $3$& $+$\\
$\bullet5$& E$2$-dP & $X_{18} $&& $\PS^1$ & $k=6$ & $9$ & $+$\\
$\times6$ & E$1$-E$1$& $X_{16}$ & $(g,d)=(0,2)$ & $X_{22}$ & $(h,e)=(1,6)$ & $2$ & $+$\\
$\bullet7$ & E$1$-dP& $X_{16}$ & $(g,d)=(0,2)$ & $\PS^1$ & $k=6$ & $8$ & $+$ \\
$\bullet8$ & E$1$-CB& $X_{14}$ & $(g,d)=(0,1)$ & $\PS^2$ & $\deg \Delta=5$ & $6$ & $+$ \\
$9$& E$3$-E$1$& $X_{12}$ & $(g,d)=(0,0)$ & $\PS^3$ & $(h,e)=(6,8)$ & $5$& $+$ \\
$10$& E$1$-CB & $V_4$ & $(g,d)=(2,6)$ & $\PS^2$ & $\deg \Delta=2$ & $14$ & $+$ \\
$\times11$ & E$1$-dP& $V_2$ & $(g,d)=(0,1)$ & $\PS^1$ & $k=6$ & $8$ & $+$\\
$\times12$ & E$1$-E$1$& $V_2$ & $(g,d)=(0,1)$ & $X_{22}$ & $(h,e)=(1,6)$ & $2$ & $+$ \\
$13$& E$1$-dP & $Q$ & $(g,d)=(36,33)$ & $\PS^1$ & $k=2,6,8$ & $1620$ & $+$\\ \hline
\multicolumn{8}{c}{$g=7$}\\
\hline
$1$ & E$1$-E$1$& $X_{22}$ & $(g,d)=(0,4)$ & $X_{22}$ & $(h,e)=(0,4)$ & $8$ & $+$ \\
$2$ & E$1$-CB& $X_{18}$ & $(g,d)=(0,2)$ & $\PS^2$ &$\deg \Delta=4$ & $6$ & $+$\\
$\bullet3$ & E$1$-E$1$& $X_{16}$ & $(g,d)=(0,1)$ & $\PS^3$ & $(h,e)=(3,7)$ & $5$ & $+$ \\
$4$ & E$3$-E$1$& $X_{14}$ & $(g,d)=(0,0)$ & $Q$ & $(h,e)=(4,8)$ & $4$ & $+$\\
$5$& E$1$-dP & $Q$ & $(g,d)=(1,14)$ & $\PS^1$ & $k=6$ & $2016$ & $+$\\
$6$ & E$1$-dP & $Q$ & $(g,d)=(37,38)$ & $\PS^1$ & $k=6$ & $462$ & $+$\\
\hline
\multicolumn{8}{c}{$g=8$}\\
\hline
$1$ & E$1$-CB& $X_{22}$ & $(g,d)=(0,3)$ & $\PS^2$ & $\deg \Delta=3$ & $6$ & $+$ \\
$\bullet2$& E$2$-E$1$& $X_{22}$& & $\PS^3$ & $(h,e)= (0,6)$ & $6$& $+$\\
$\bullet3$ & E$1$-E$1$& $X_{18}$ & $(g,d)=(0,1)$ & $Q$ & $(h,e)=(2,7)$ & $4$ & $+$\\
$4$& E$3$-E$1$ & $X_{16}/V_2$ & $(g,d)=(0,0)$ & $V_4$ & $(h,e)=(0,4)$ & $4$ & $+$ \\
$5$ & E$1$-dP& $V_3$ & $(g,d)=(0,2)$ & $\PS^1$ & $k=8$ & $8$ & $+$\\
$6$& E$1$-dP & $Q$ & $(g,d)=(26,30)$ & $\PS^1$ & $k=2$ & $360$ & $+$\\
\hline
\multicolumn{8}{c}{$g=9$}\\
\hline
$\bullet1$ & E$1$-E$1$& $X_{22}$ & $(g,d)=(0,2)$ & $Q$ & $(h,e)=(0,6)$ & $4$ & $+$ \\
$2$ & E$3$-E$1$& $X_{18}$ & $(g,d)=(0,0)$ & $V_{5}$ & $(h,e)=(1,6)$ & $3$ & $+$\\ \hline
\multicolumn{8}{c}{$g=10$}\\
\hline
$\bullet1$& E$1$-E$1$ & $X_{22}$ & $(g,d)=(0,1)$ & $V_5$ & $(h,e)=(0,5)$ & $3$ & $+$\\
\end{longtable} \end{center}
\section{A classification of non-factorial terminal Gorenstein Fano $3$-folds} \label{classification}
Let $Y_4^3 \subset \PS^4$ be a terminal non-factorial quartic $3$-fold. The best known examples of non-factorial quartic $3$-folds are those containing a plane or a quadric. Yet, a very general determinantal quartic hypersurface $Y'$ is not factorial and it contains neither a plane nor a quadric. However, $Y'$ does contain a degree $6$ \emph{Bordiga surface}, i.e.~ a surface whose ideal is generated by the $3\times 3$ minors of the matrix defining $Y'$. In the general case, I show that $Y$ contains some surface of relatively low degree. In other words, the degree of the surface lying on $Y$ that breaks factoriality cannot be arbitrarily large.
\subsection{Quartic $3$-folds} I now prove Theorem~\ref{thm:1}.
\begin{proof} Let $Y$ be a non-factorial terminal Gorenstein Fano $3$-fold and $X\to Y$ a small factorialisation of $Y$. I assume that $Y$ does not contain a plane: $X$ is weak-star Fano by Remark~\ref{rem:1}. We may run an MMP on $X$ as in Theorem~\ref{thm:3}.
If the MMP on $X$ involves at least one divisorial contraction, then up to a different choice of factorialisation $X\to Y$, we may assume that $X \to X_1$ is divisorial; let $E$ be its exceptional divisor. The solutions of the systems of Diophantine equations in Section~\ref{2raygame} determine all the possible contractions $X\to X_1$. To each configuration is associated a Weil non-Cartier divisor $F$ on $Y$. By Section~\ref{motivation}, $E$ is a rational scroll over a curve $\Gamma$ as in Table~\ref{table1}.
I now assume that the MMP on $X$ involves no divisorial contraction.
If a small factorialisation $X$ of $Y$ has a structure of Conic bundle over $\PS^2$, $\F_0$ or $\F_1$, we are in Case $5.$ of the Theorem. Hence, it suffices to prove that if $Y$ is the midpoint of a link between two del Pezzo fibrations, then $Y$ contains one of the surfaces listed in the Theorem.
Vologodsky shows that if $Y$ is the midpoint of a link between two nonsingular weak Fano $3$-folds that are extremal del Pezzo fibrations of degrees $d,d'$, then $d=d'=2$ or $4$ \cite{Vo01}. \cite[Lemma 3.4]{Kal07b} shows that $d\neq2$ because $A_Y$ is very ample.
\begin{cla}
If $Y$ is the midpoint of a link between two weak-star Fano dP$4$ fibrations $X$ and $\widetilde{X}$, $Y$
contains an anticanonically embedded del Pezzo surface of degree $4$,
and the equation of $Y$ can be written: \[ Y=\{a_2q +b_2q'=0 \}\subset \PS^4 \] where $a_2, b_2, q$ and $q'$ are homogeneous forms of degree $2$ on $\PS^4$. \end{cla} Let $F$ be a general fibre of $X \to \PS^1$; $F$ is a nonsingular del Pezzo surface of degree $4$ and $A_F= {A_X}_{\vert
F}$.
Since $\vert A_X \vert_{\vert F}\subset \vert A_F \vert$, the restriction of the anticanonical map of $Y$ to $F$ factors as $g_{\vert F} = \nu \circ \Phi_{\vert A_F \vert}$, where $\nu$ is the projection from a (possibly empty) linear subspace \[ \xymatrix{\PS(H^0(F, A_F))\simeq \PS^4\ar@{-->}[r] & \PS(H^0(F,
\vert A_{X} \vert_{\vert F}))}. \] If $\nu_{\vert F}$ is not the identity, as $h^0(A_F)=h^0(A_X)=5$, the map $i$ in \begin{align*} 0 \to H^{0}(X, A_X-F)\to H^{0}(X, A_X) \stackrel{i} \to H^{0}(F, A_F)\to \\ \to H^{1}(X, A_X-F)\to 0 \end{align*} is not surjective, and $H^0(X, A_X-F)\neq (0)$: there is a hyperplane section of $Y$ that contains $\Phi_{\vert A_X \vert}(F)$. As this holds for the general fibre $F$, the fibration $X \to \PS^1$ is induced by a pencil of hyperplanes on $Y$. Without loss of generality, we may assume that $\xymatrix{Y \ar@{-->}[r]& \PS^1}$ is determined by the pencil of hyperplanes $\mathcal{H}_{(\lambda{:} \mu)}= \{\lambda x_0+\mu x_1=0\}$ for $(\lambda{:} \mu)\in \PS^1$. The map $X \to Y$ is a resolution of the base locus of $\mathcal{H}$ on $Y$ and therefore $\Pi= \{x_0{=}x_1{=}0\}= \Bs\mathcal{H}$ lies on $Y$: this contradicts $X$ being weak-star Fano.
As $H^0(X, A_X-F)=(0)$, $\nu$ is the identity and $Y$ contains an anticanonically embedded nonsingular del Pezzo surface $S$ of degree $4$, i.e.~ the intersection of two quadric hypersurfaces in $\PS^4$. Since $S =\{ q{=}q'{=}0\} \subset \PS^4$ lies on $Y$, where $q$ and $q'$ are homogeneous quadric forms, the equation of $Y$ can be written: \begin{equation}\label{eq:dp4} Y=\{a_2q +b_2q'=0 \}\subset \PS^4 \end{equation} with $a_2$ and $b_2$ homogeneous forms of degree $2$.
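To see directly that such a surface is an anticanonically embedded del Pezzo surface of degree $4$, recall that for a nonsingular complete intersection $S$ of two quadrics in $\PS^4$, adjunction gives \[ K_S= \big(K_{\PS^4}+2H+2H\big)_{\vert S}= -H_{\vert S}, \qquad (-K_S)^2= H^2\cdot S=4, \] where $H$ is the hyperplane class; $S$ is therefore embedded by $\vert -K_S\vert$ and has degree $4$.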
Geometrically, the two structures of del Pezzo fibrations on small factorialisations of $Y$ arise as the maps induced by the pencils of quadrics (namely $\mathcal{L}=\{ q,q'\}$ and $\mathcal{M}=\{ a_2,b_2\}$) after blowing up their base loci on $Y$, which are anticanonically embedded del Pezzo surfaces of degree $4$.
Conversely, if the equation of $Y$ is of the form \eqref{eq:dp4} and if $\rk \Cl Y=2$, let $X$ (resp.~ $X'$) be the blow up of $Y$ along $S$ (resp.~ along $S'= \{a_2{=}b_2{=}0\}$); there is a diagram \[ \xymatrix{ \quad & X \ar[dl] \ar[dr] \ar@{-->}[rr] & \quad & X'\ar[dl] \ar[dr] \\ \PS^1 & \quad & Y & \quad & \PS^1 } \]
The $3$-fold $X$ (resp.~ $X'$) lies on $Q \times \PS^1$ (resp.~ $Q'
\times \PS^1$) for $Q \subset
\PS^4$ (resp.~ $Q'$) a quadric that is the proper transform of $\{a_2{=}0 \}$ under
the blow up of $\PS^4$ along $S$ (resp.~ $S'$). The $3$-fold $X$ (resp.~ $X'$) is a member of the linear system $\vert 2M +2F \vert$ on $ Q \times \PS^1$ (resp.~ $Q'\times\PS^1$), where $M=p_1^{\ast}\mathcal{O}_Q(1)$ (resp.~ $M=p_1^{\ast}(\mathcal{O}_{Q'}(1))$) and $F=p_2^{\ast}\mathcal{O}_{\PS^1}(1)$. The map $\xymatrix{X \ar@{-->}[r] & X'}$ is a flop in the curves lying above the points $\{q{=}q'{=}a_2{=}b_2{=}0\}$. \end{proof} \begin{rem} \label{bound} The bound on the rank of the divisor class group of quartic $3$-folds given in \cite{Kal07b} is too high: if $Y_4\subset \PS^4$ does not contain a plane, $\rk \Cl Y\leq 6$. Fujita classifies all polarised del Pezzo $3$-folds $(V, L)$ with Cohen-Macaulay Gorenstein singularities \cite{F90}. It is possible that the application of his results would yield an even finer bound. \end{rem} \subsection{Non-factorial terminal Gorenstein Fano $3$-folds with $g\geq 4$} By the same methods as above, one obtains the following theorem for non-factorial terminal Gorenstein Fano $3$-folds of index $1$ and higher genus. \begin{thm} Let $Y=Y_{2g-2}\subset \PS^{g+1}$ be a terminal Gorenstein Fano $3$-fold with $\rho(Y)=1$ and $g(Y)=g$. Then one of the following holds: \begin{enumerate} \item[1.] $Y$ is factorial. \item[2.]$Y$ contains a plane $\PS^2$ and $g\leq 8$. \item[3.] $Y$ is the midpoint of a link between two weak-star Fano del Pezzo fibrations of degree $g+1$ and $g \leq 8$, $g\neq 6$. \item[4.] $Y$ has a structure of Conic Bundle over $\PS^2$, $\F_0$ or $\F_2$. \item[5.] $Y$ contains a rational scroll $E \to C$ over a curve $C$ whose
genus and degree appear in the appropriate section of Table~\ref{table2} (see page \pageref{table2}). \end{enumerate} \end{thm} \begin{proof} This is entirely similar to what is done in the previous subsection. See \cite{Vo01} for $3$. \end{proof}
\section{Rationality} \label{rationality}
Classically, it was known that del Pezzo surfaces are rational over any algebraically closed field. Understanding whether Fano varieties are rational or not was one of the early problems of higher dimensional birational geometry. Intuitively, Fano varieties can be thought of as being close to $\PS^n$: they are covered by rational curves and, in some sense, these curves should govern their birational geometry. However, the rationality question proved very difficult and it was not until the early seventies that it was settled for nonsingular Fano hypersurfaces in $\PS^4$ \cite{IM, CG72}. \cite{IM} developed the Noether-Fano method and proved that any smooth quartic hypersurface is \emph{birationally rigid}-- i.e.~ that every rational map from a smooth quartic hypersurface to a Mori fibre space is a birational automorphism-- and in particular, that quartic hypersurfaces are very far from being rational. This approach was further developed and applied to a number of cases; it yielded surprising rigidity results-- see \cite{S82, Co95, Co00,CPR, Me04, IP99} or the survey \cite{Puk07}. The Noether-Fano method works in principle in any dimension and for singular varieties, but the technical difficulties are considerable. This section presents some results related to the rationality question for terminal Gorenstein Fano $3$-folds.
\subsection{Rationality, Rational connectivity and ruledness for mildly singular $3$-folds}
\cite{P04} shows that most canonical Gorenstein Fano $3$-folds with Picard rank $1$ that have at least one non-cDV point are rational. These results concern $3$-folds that are strictly canonical. However, one could argue that singularities make Fano $3$-folds ``more rational''. From the point of view of the Noether-Fano method, the valuations with centre at a singular point give rise to infinitely more complex divisorial extractions--even in the case of isolated hypersurface singularities \cite{Kawk01,Kawk02,Kawk03}-- and hence potentially to many more Sarkisov links and birational maps to other Mori fibre spaces. The following results do not require anything that technical but they do formalise this idea.
\begin{thm} [Matsusaka's Theorem] \cite[IV.1.6]{Kol96} \label{mat} Let $R$ be a DVR with quotient field $K$ and residue field $k$ and denote $T= \Spec R$. Let $f\colon X\to T$ be a morphism where $X$ is normal and irreducible. \begin{enumerate} \item[1.] If $X_K$ is ruled over $K$, then $X_k$ has ruled components over $k$. \item[2.] If $X_K$ is geometrically ruled, then every reduced irreducible component of $X_k$ is geometrically ruled. \end{enumerate} \end{thm} \begin{thm}\cite{KMM92a} \label{rc} Let $X$ be a normal projective weak Fano $3$-fold. If $X$ is klt, $X$ is rationally connected. \end{thm}
\begin{lem} \label{ratio} Let $f\colon \mathcal{Y}\to \Delta$ be a $1$-parameter smoothing of a terminal Gorenstein Fano $3$-fold $Y$. If $\mathcal{Y}_{\eta}$ is geometrically rational then so is $Y$. \end{lem} \begin{proof} This is a direct consequence of Theorem~\ref{mat}. Indeed, $Y$ is rationally connected by Theorem~\ref{rc}, so that $Y$ is rational if and only if $Y$ is ruled. \end{proof}
I now recall and discuss Conjecture~\ref{con:rig}. \setcounter{con}{0}
\begin{con} A factorial quartic hypersurface $Y_4\subset \PS^4$ (resp.~ a generic complete intersection $Y_{2,3}\subset \PS^5$) with no worse than terminal singularities has a finite number of models as Mori fibre spaces, i.e.~ the pliability of $Y$ is finite. \end{con} \begin{rem}\mbox{} \label{rem11} \begin{enumerate} \item[1.] Conjecture~\ref{con:rig} is supported by some evidence. \cite{Me04} shows that a factorial quartic $3$-fold $Y_4\subset \PS^4$ with ordinary double points is rigid, while \cite{IP96} shows that the same is true for a general non-singular $Y_{2,3}$. Mella's proof is based on the Noether-Fano/maximal singularity method of Iskovskikh-Manin as formulated in \cite{Co00, CPR}. It is difficult to extend these results to terminal Gorenstein singularities, because these methods require a careful analysis of $3$-fold divisorial extractions with centre along (possibly singular) points or curves. While divisorial extractions centred at nonsingular or ordinary double points are reasonably tractable, there is an a priori infinite number of divisorial extractions centred on slightly more complicated singularities \cite{Kawk01,Kawk02,Kawk03}. \item[2.] Conjecture~\ref{con:rig} does not hold for some other rigid Fano $3$-folds with Picard rank $1$. For instance, a cubic $3$-fold with a single ordinary double point is both rational and factorial. Since several Sarkisov links exist between a nonsingular cubic $3$-fold and a nonsingular Fano $3$-fold $X_{14}\subset \PS^9$ of genus $8$ \cite{IP99,Tak89}, the same phenomenon can be expected on $X_{14}$. \item[3.] \cite{ChGr} shows that birational rigidity is not preserved under small deformations, and exhibits a small deformation from a rigid $Y_{2,3}$ with one ordinary double point to a bi-rigid $Y_{2,3}$. Similarly, \cite{CM04} gives an example of a bi-rigid terminal factorial quartic hypersurface. As I mention in the Introduction, in known examples where a birationally rigid Fano $3$-fold $V$ of genus $3$ or $4$ degenerates to a non-rigid and nonrational $3$-fold $V'$, $V'$ has finitely many models as a Mori fibre space, i.e~ $V'$ has finite \emph{pliability}. I believe that the correct notions to consider are rationality on the one hand, and finite pliability on the other. \end{enumerate} \end{rem} \subsection{Rationality of terminal quartic $3$-folds} \subsubsection{Quartic $3$-folds that do not contain a plane}
Let $Y$ be a non-factorial terminal Gorenstein Fano $3$-fold. Theorem~\ref{thm:1} shows that when $Y$ does not contain a plane, $Y$ has a structure of Conic Bundle, $Y$ is the midpoint of a link between two del Pezzo fibrations of degree $4$, or $Y$ contains a scroll as in Table~\ref{table1}. Let $X$ be a small factorialisation of $Y$. \begin{lem} Let $Y$ be a non-factorial terminal quartic $3$-fold and denote $X\to Y$ a small factorialisation. Assume that the MMP on $X$ involves at least one divisorial contraction. Then $Y$ is rational except possibly if the first divisorial contraction $\varphi$ is one of cases $15,17,25, 29,35$ or $36$ in Table~\ref{table1}. \end{lem}
\begin{proof} This is an immediate consequence of the classification of Tables~\ref{table1} and \ref{table2} and of Lemma~\ref{ratio}. \end{proof}
\begin{lem} If $\varphi$ is one of cases $15,17,25$ or $29$, and if the MMP on $X$ involves at least one other divisorial contraction or a del Pezzo fibration, then $Y$ is rational. In particular, if $\rk \Cl Y \geq 5$, $Y$ is rational. \end{lem}
\begin{rem}\mbox{} \begin{enumerate} \item[1.] Note that if $\varphi$ is as in cases $17$ or $36$ and if $\widetilde{Z}_1$ has a singular point, $Y$ is rational. \item[2.] In Case $29$, when $\rk \Cl Y=2$, the Conic bundle on the deformed Sarkisov link is nonrational \cite{Sh83}. However, it is not clear whether the same is true for $Y$. \item[3.] According to Conjecture~\ref{con:rig}, one can expect that when $\rk \Cl Y=2$, Case $36$ is impossible, and that when Case $35$ occurs, $Y$ is birationally rigid. \item[4.] It is unlikely that these methods would lead to any conclusion when $\rk \Cl Y=2$ and $Y$ is one of Cases $15,17,25$ or $29$ (see Rem~\ref{rem11}). \end{enumerate} \end{rem}
When $X\to \PS^1$ is an extremal del Pezzo fibration, recall the following rationality criteria.
\begin{thm}\cite[Section III.3]{Kol96} Let $S_k$ be a nonsingular, proper and geometrically irreducible del Pezzo surface of degree $d\geq 5$ over an arbitrary field $k$. If $S(k)\neq \emptyset$, then $S_k$ is rational. \end{thm}
\begin{thm}\cite{CT87,KMM92b} Let $C$ be an algebraic curve defined over an algebraically closed field $k$ and let $K=k(C)$ be its field of rational functions. If $X$ is a del Pezzo surface over $K$, then $X(K)\neq \emptyset$ and $X(K)$ is dense in the Zariski topology of $X$. In particular, if $X\to \PS^1$ is a del Pezzo fibration of degree $d\geq 5$, then $X$ is rational. \end{thm}
\begin{thm}[\cite{Al87, Sh07}] Let $V \to \PS^1$ be a standard fibration by del Pezzo surfaces of degree $4$. The topological Euler characteristic $\chi(V)$ equals $-8,-4$ or $0$ precisely when $V$ is rational. \end{thm} \begin{rem} In particular, rationality of a del Pezzo fibration $V\to \PS^1$ of degree $4$ is a topological question and depends only on the Hodge numbers of $V$. \cite{Ch06} shows that if $V\to \PS^1$ is the small factorialisation of a terminal quartic $3$-fold and is nonsingular, then $V$ is nonrational. \end{rem} Last, recall the following rationality criterion for standard Conic Bundles over minimal surfaces. \begin{thm}\cite{Sh83} Let $X\to S$ be a standard Conic Bundle over $S= \PS^2$ or $\F_n$. Assume that $\Delta$, the discriminant curve, is connected. If one of the following holds: \begin{enumerate} \item[1.] $\Delta+2K_S$ is not effective, \item[2.] $\Delta\subset \PS^2$ has degree $5$ and the associated double cover $\overline{\Delta}\to \Delta$ has even theta characteristic, \end{enumerate} $X$ is rational. \end{thm} \subsubsection{Quartic $3$-folds that contain a plane.}
Assume that $Y\subset \PS^4$ contains a plane $\Pi=\{x_0{=}x_1{=}0\}$ and let $X$ be the blow up of $Y$ along $\Pi$; $X$ has a natural structure of dP$3$ fibration $\pi\colon X \to \PS^1$ induced by the pencil of hyperplanes that contains $\Pi$ on $\PS^4$ (see \cite[Section 4]{Kal07b} for details).
Write the equation of $Y$ as: \begin{equation} \label{eq:2} \{x_0a_3(x_0, x_1,x_2, x_3,x_4)+ x_1 b_3(x_0, x_1,x_2, x_3,x_4)=0\} \subset \PS^4 \end{equation} so that $X$ is given by:
\begin{eqnarray} \label{eq:3} \{t_0a_3(t_0x, t_1x,x_2, x_3,x_4)+ t_1 b_3(t_0x, t_1x,x_2, x_3,x_4)=0\} \\
\subset \PS^1_{(t_0{:}t_1)}\times \PS(x, x_2, x_3,x_4). \nonumber
\end{eqnarray}
\begin{lem}\cite[Lemma 4.1]{Kal07b} The divisor class group $\Cl Y$ is generated by $\pi^{\ast}\mathcal{O}_{\PS^1}(1)$, by the completion of divisors that generate $\Pic X_{\eta}$ and by irreducible components of the reducible fibres of $X$. \end{lem}
As $X$ has terminal Gorenstein singularities, \cite{Co96} shows that there is a birational map \[\xymatrix{X \ar@{-->}[r]^{\Phi} \ar[d] & X' \ar[d]\\ \PS^1 & \PS^1 }\] where $\Phi$ is the composition of projections from planes contained in reducible fibres and $X'$ has irreducible and reduced fibres. Note that $X_{\eta}\simeq X'_{\eta}$ because $\Phi$ is an isomorphism outside of the reducible fibres of $X$. In particular, if $X_{\eta}$ is rational, $X\to \PS^1$ is geometrically rational, i.e.~ is birational to $\PS^2\times \PS^1$ . I recall some results on rationality of cubic surfaces over arbitrary fields.
Let $X_{\eta}$ be a nonsingular cubic surface defined over a field $\eta$ and let $K/\eta$ be a field extension over which the $27$ lines of $X_{\eta}$ are defined. Denote $S_n$ any subset of the $27$ lines on $X_{\eta}\otimes K$ that consists of $n$ skew lines and that is defined over $\eta$, i.e.~ if $S_n$ contains a line $L$, then it contains all its conjugates under the action of $\Gal(K/\eta)$. Note that, by the geometry of the configuration of the $27$ lines on $X_K$, any $S_n$ has $n\leq 6$.
\begin{thm}\cite{Se42,SD70} \label{cubics} \begin{enumerate}\item[1.] $\overline{NS}(X_{\eta})\otimes_{\Z}\Q$ is generated as a $\Q$-vector space by the class of a hyperplane section of $X_{\eta}$ and by the classes of the $S_n$, when there are any. \item[2.] If $X_{\eta}$ has an $S_4$ or an $S_5$, $X_{\eta}$ has an $S_2$ or an $S_6$. \item[3.] If $X_{\eta}$ has an $S_2$, $X_{\eta}$ is rational over $\eta$. \item[4.] If $X_{\eta}$ has an $S_3$ or an $S_6$ and $X_{\eta}(\eta) \neq \emptyset$, $X_{\eta}$ is rational over $\eta$. \end{enumerate} \end{thm}
Here, $X_{\eta}$ denotes the generic fibre of $X \to \PS^1$; it is a nonsingular cubic surface embedded in $\PS^3$ over $\C(t)$, with coordinates $x, x_2,x_3,x_4$ (see \eqref{eq:3}).
\begin{cla}
Assume that $X_{\eta}$ contains a Cartier divisor of type $S_n$ and denote
$D_n$ the completion of $S_n$ to a (Weil) divisor on $X$. The proper transform of $D_n$ on a small factorialisation of $X$ has anticanonical degree $n$; the image of $D_n$ on $Y$ is Weil non-Cartier.
\end{cla} In the light of Theorem~\ref{cubics}, it is then natural to consider the following cases: \setcounter{case}{0}
\begin{case}{$X$ is an extremal Mori fibre space, i.e.~ $\rk \Cl Y =2$, $X\to \PS^1$ has irreducible and reduced fibres and $\rho(X_{\eta})=1$.} \end{case} It is known that $X$ admits another model as a Mori fibre space \cite{BCZ}. Indeed, $Y$ is the midpoint of a link \[ \xymatrix{ \quad & X \ar@{-->}[rr] \ar[dl]\ar[dr] &&\widetilde{X}\ar[dr]\ar[dl]& \quad\\ \PS^1 & \quad& Y & \quad & Z} \] where $Z=Y_{3,3}\subset \PS(1^5,2)$ is a codimension $2$ terminal Fano $3$-fold with one point of Gorenstein index $2$ at $P=(0{:}0{:}0{:}0{:}0{:}1)$; $Y\dashrightarrow Z$ can be described as follows. Introduce a variable of weight $2$ \[y= \frac{a_3}{x_1}=-\frac{b_3}{x_0},\] where the two expressions agree on $Y$ by \eqref{eq:2}; then $Z$ is the complete intersection: \[\left \{ \begin{array}{c} a_3-yx_1=0\\ b_3+yx_0=0 \end{array}\right. \] The contraction $\widetilde{X}\to Z$ contracts the preimage of the plane $\{x_0{=}x_1{=}0\}$ to the point $P$, the map $X\dashrightarrow \widetilde{X}$ is the flop of the rational curves lying above the locus $\{ x_0{=}x_1{=}a_3{=}b_3{=}0\}$, and $\widetilde{X} \to Y$ is the blow up of the surface $\{a_3{=}b_3{=}0\}$. Recall that $X$ is a member of the linear system $ \vert 3M+L\vert $ on the scroll $\F(0,0,1)$ (see \cite{BCZ} for notation conventions on scrolls); \cite{Ch08} shows that if $X$ is a general member of $\vert 3M+L \vert$, $X$ is nonrational. I make the following conjecture: \begin{con} If $X$ is a standard dP3 fibration, $X$ is bi-rigid. \end{con}
\begin{case}{$X$ is not an extremal Mori fibre space, i.e.~ $\rk \Cl Y \geq 3$, and $\rho(X_{\eta})=1$.} \end{case} \begin{lem}{\cite{Kol96}} \label{3planes} Let $Y_4\subset \PS^4$ be a quartic hypersurface. If $Y$ contains three planes $\Pi_0, \Pi_1, \Pi_2$ such that $\Pi_0\cap\Pi_1\cap\Pi_2=\emptyset$, $Y$ is rational. \end{lem}
\begin{cor} Let $X\to Y$ be as above and assume that $\rho(X_{\eta})=1$. If there are at least $3$ planes lying in at least $2$ distinct reducible fibres of $X$, then $Y$ is rational. More precisely, $Y$ is rational if $X$ has at least two reducible fibres, one of which is the union of $3$ planes, or if $X$ has at least $3$ reducible fibres. \end{cor} \begin{proof} This follows from the possible configurations of planes lying in reducible fibres obtained as in \cite[Section 4]{Kal07b}. \end{proof}
Assume that $X\to \PS^1$ has $\rho(X_{\eta})=1$, and that $X\to \PS^1$ has $1$ or $2$ reducible fibres, each containing a quadric ($\rk \Cl Y=3$ or $4$). Among the generators of $\Cl Y/\Pic Y$, there is a surface $S$ such that $A_Y^2\cdot S= 2$, i.e.~ there is a quadric lying on $Y$. Denote $f\colon \widetilde{X}\to X \to Y$ a small factorialisation of $X$ and $Y$ and note that there is an extremal divisorial contraction $\varphi \colon \widetilde{X} \to \widetilde{X}_1$ such that $\widetilde{S}=f_{\ast}^{-1}S=\Exc \varphi$ (possibly after flops of $\widetilde{X}$).
Observe that $\widetilde{X}_1$ is the small modification of a terminal Gorenstein Fano $3$-fold $Y_1=Y_{2,3}\subset \PS^5$. For any divisor $D_1\subset \widetilde{X_1}$, the proper transform $D$ of $D_1$ on $\widetilde{X}$ is such that $A_{Y}^2\cdot D\leq A_{Y_1}^2\cdot D_1$ and the inequality is strict when $D$ intersects the quadric $S$ (see the proof of \cite[Theorem 3.2]{Kal07b}). Note that $\Pi$ and all planes contained in reducible fibres of $X\to \PS^1$ do intersect the quadric $S$ and since $\rho(X_{\eta})=1$, $\widetilde{X_1}$ is weak-star Fano: the methods of the previous subsection apply.
More precisely, as $Y_1$ is terminal Gorenstein and has $\rk \Cl(Y_1)\geq 2$, unless $Y_1$ has a structure of Conic Bundle or the MMP on $\widetilde{X_1}$ consists of one divisorial contraction of type $10,11,12,13,14,18,20$ or $21$ in Table~\ref{table2}, $Y$ is rational.
\begin{exa} In particular, this gives potential examples of rational cubic fibrations that are not geometrically rational. \end{exa}
\begin{case} {$X$ is not an extremal Mori fibre space, i.e.~ $\rk \Cl Y \geq 3$, and $\rho(X_{\eta})>1$.} \end{case}
\begin{pro} If $\rho(X_{\eta})\geq 3$, $X$ is rational. If $\rho(X_{\eta})=2$ and either $\Cl Y/\Pic Y$ is not generated by planes or $X$ has at least one reducible fibre, $X$ is rational. \end{pro} \begin{proof}
Theorem~\ref{cubics} shows that unless $\Pic X_{\eta}$ is generated by the class of a hyperplane section and divisors of type $S_1$, $X_{\eta}$ is rational. But then, as \cite{Co96} shows that $X \to \PS^1$ is birational to a cubic fibration $X'\to \PS^1$ with reduced and irreducible fibres and $X_{\eta}\simeq X'_{\eta}$, $X\to \PS^1$ is rational.
We now turn to the case when $\rho(X_{\eta})>1$ and $X_{\eta}$ does not contain any $S_n$ for $n\geq 2$.
The proposition follows from the following Claims. \begin{cla} If $\rho(X_{\eta})\geq 3$ and if $\Pi', \Pi''$ are two planes on $X\to \PS^1$ that arise as completions of divisors of type $S_1$ on $X_{\eta}$, then $\Pi\cap\Pi'\cap\Pi''= \emptyset$. \end{cla} Any $S_1$ lying on $X_{\eta}$ completes to a plane $\Pi'$ that meets $\Pi$ in a point. Indeed, if $\Pi$ and $\Pi'$ met in a line, the image of $\Pi'$ on $Y$ would be contained in a hyperplane section of the original quartic $Y$, and $\Pi'$ would have to be contained in a reducible fibre. If $X_{\eta}$ contains two distinct $S_1$, these cannot be skew (otherwise they would form an $S_2$) and therefore up to coordinate change on $\PS(x, x_2, x_3,x_4)$, $X_{\eta}$ contains the lines \begin{eqnarray*} L=\{x_2{=}x_3{=}0\}\\ L'=\{x_2{=}x_4{=}0\} \end{eqnarray*}
so that $Y$ contains the planes $\{x_0{=}x_1{=}0\}$, $\{x_2{=}x_3{=}0\}$ and $\{x_2{=}x_4{=}0\}$ and by Lemma~\ref{3planes}, $Y$ is rational.
\begin{cla} If there are at least $3$ planes lying in reducible fibres of $X\to \PS^1$, then we may choose $\Pi''$ lying in a reducible fibre of $X$ such that $\Pi\cap\Pi'\cap\Pi''= \emptyset$. \end{cla} Any plane contained in a fibre of $X\to \PS^1$ intersects $\Pi$ in a line, and given any $3$ such planes, \cite{Kal07b} shows that the $3$ associated lines are distinct and non-concurrent; we may therefore choose one plane that does not contain $\Pi\cap\Pi'$. \end{proof}
We have proved the following. \begin{pro} Let $Y_4\subset \PS^4$ be a quartic hypersurface that contains a plane. If $6\leq \rk \Cl Y\leq 16$, $Y$ is rational. \end{pro}
\section{Examples and geometric realizability of numerical Sarkisov links} \label{examples} \subsection{Examples}
In this section, I construct examples of non-factorial Fano $3$-folds with terminal Gorenstein singularities. I use the Tables of numerical Sarkisov links to recover some known examples and construct some new ones. \begin{exa} Let $X_{2g-2}\subset \PS^{g+1}$ be a nonsingular Fano $3$-fold of genus $g \geq 7$ and let $P\in X$ be a point that does not lie on any line of $X$ (such a point exists by \cite{Isk78}). Let $\widetilde{X}\to X$ be the blow up of $P$. Then $\widetilde{X}$ is a weak-star Fano $3$-fold with Picard rank $2$. The anticanonical model $Y$ of $\widetilde{X}$ is a terminal Gorenstein non-factorial Fano $3$-fold of genus $g-4$. The map $\widetilde{X}\to Y$ is small and contracts the preimages of conics through $P$ to points (\cite{Tak89} proves that there are finitely many such conics). \begin{enumerate} \item[1.] When $g=7$, $Y$ is a quartic $3$-fold that is the midpoint of a link where both contractions of the Sarkisov link are of type E$2$. The link is a self-map of $X_{12}\subset \PS^8$; the centre of the link is a rational quartic $3$-fold $Y$ with $\rk \Cl Y=2$. \item[2.] When $g\geq 8$, \cite{Tak89} lists all possible constructions starting with a nonsingular Fano $3$-fold $X_{2g-2}\subset \PS^{g+1}$. Takeuchi uses Hodge theoretical computations to show that some numerical Sarkisov links are not realizable. Here, since I allow terminal Gorenstein singularities, it is not clear that these links can be excluded (see Remark~\ref{Hodge}). \end{enumerate} \end{exa} Let $X$ be a nonsingular Fano $3$-fold with $\rho(X)=1$ and $\Gamma \subset X$ a curve such that $X=Z_1$ and $\Gamma$ is the centre of $\varphi$ for one case appearing in Table~\ref{table1} (resp.~ of Table~\ref{table2}). Let $\widetilde{X} \to X$ be the blow up of $X$ along $\Gamma$. By construction, $A_{\widetilde{X}}^3=4$ (resp.~ $2g-2$ for $g\geq 4$), so that if $A_{\widetilde{X}}$ is nef, it is big and $\widetilde{X}$ is a Picard rank $2$ weak Fano $3$-fold. Observe that when $\Gamma$ is an intersection of members of $\vert A_X\vert$, then $A_{\widetilde{X}}$ is nef and big.
The anticanonical map $f \colon \widetilde{X} \to Y$ maps to a Gorenstein Fano $3$-fold with canonical singularities. If, in addition, $(A_{\widetilde{X}})^2\cdot D>0$ for every effective divisor $D$, $Y$ has terminal singularities and $f$ is small. Still by construction, in this case, $f$ is not an isomorphism because $e\neq 0$, and $Y$ is a non-factorial terminal Gorenstein Fano $3$-fold with $\rho(Y)=1$, $\rk \Cl Y=2$.
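The anticanonical degrees appearing in this construction, and in the examples below, can be checked with the standard blow-up formulas. Assuming that the centre is a nonsingular point, resp.~ a nonsingular curve $\Gamma$ contained in the nonsingular locus of the $3$-fold being blown up, one has \[ A_{\widetilde{X}}^3= A_{X}^3-8 \quad \text{(point)}, \qquad A_{\widetilde{X}}^3= A_{X}^3-2\, A_{X}\cdot \Gamma +2p_a(\Gamma)-2 \quad \text{(curve)}. \] In the first example above, $A_X^3=2g-2$ gives $A_{\widetilde{X}}^3=2(g-4)-2$, so that the genus drops by $4$; in the second construction, the blow up of a line, a smooth conic or a rational normal cubic lowers the genus by $2$, $3$ and $4$ respectively, consistent with the examples that follow.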
\begin{thm}\cite{Sh79, Re80, Tak89, IP99}\label{lineconic} Let $X_{2g-2}\subset \PS^{g+1}$ be a nonsingular anticanonically embedded Fano $3$-fold of index $1$. If $g\geq 5$ (resp.~ $g\geq 6$), there exists a line (resp.~ a smooth conic) on $X$. For any $g\geq 5$, if $X$ contains a line and a smooth conic, it also contains a rational normal cubic curve. \end{thm}
\begin{exa}\cite{ Isk78, IP99} Let $X=X_{2g'-2}\subset \PS^{g'+1}$ be a nonsingular (or more generally terminal Gorenstein factorial) Fano $3$-fold of index $1$ such that $A_X$ is very ample and let $\Gamma$ be a line lying on $X$. As above, let $\widetilde{X}\to X$ be the blow up along $\Gamma$ and let $\widetilde{X}\to Y$ be the anticanonical map of $\widetilde{X}$. Recall the following result of Iskovskikh's: \begin{thm}\cite{Isk78} If $\Gamma \subset X_{2g'-2}$ is a line on a nonsingular Fano $3$-fold of genus $g'$ and Picard rank $1$, and if $\widetilde{X}\to X$ is the blow up of $X$ along $\Gamma$, then $\widetilde{X}$ is a small modification of a terminal Gorenstein Fano $3$-fold $Y_{2g-2}$ of index $1$, Picard rank $1$ and genus $g= g'-2$. \end{thm}
By construction, $Y$ is not factorial and its divisor class group is generated by the hyperplane section and by a surface $\overline{E}=f(\PS(\mathcal{N}^v_{\Gamma/X}))$, which is the image by the anticanonical map of a cubic scroll.
The blow up $f$ is one side of a Sarkisov link with midpoint along $Y$. Note that the rational map between the two sides of the Sarkisov link $Z_1=X_{2g'-2}\dashrightarrow \widetilde{Z}_1$ is Iskovskikh's double projection from a line \cite{Isk78}, which enabled him to classify Fano $3$-folds of the first species. \begin{enumerate} \item[1.]Case $30$ in Table~\ref{table1} is a geometric construction that was known classically \cite{Be77, BCZ}. Let $X=X_{2,2,2}\subset \PS^6$ be a codimension $3$ complete intersection of quadrics in $\PS^6$ and $l\subset X$ be a line. Then the other contraction in the link starting with the projection from $l$ is a conic bundle with discriminant of degree $7$. Conversely, given a plane curve $\Delta\subset \PS^2 $ of degree $7$, \cite{BCZ} constructs standard conic bundles with ramification data a $2$-to-$1$ admissible cover $N \to \Delta$. When $\deg \Delta=7$, there are $4$ deformation families of standard conic bundles and Case $30$ corresponds to the generic even theta characteristic case. By \cite{Sh83}, the standard conic bundle $\widetilde{X}$ is nonrational. \item[2.]\cite{Isk78} When $X$ is nonsingular, the link that occurs is Case $17, g=4$ in Table~\ref{table2} for $g'=6$, Case $13, g=5$ for $g'=7$, Case $8,g=6$ for $g'=8$, Case $3,g=7$ for $g'=9$, Case $3, g=8$ for $g'=10$ and Case $1,g=10$ for $g'=12$.
\end{enumerate} Note that Case $31$ in Table~\ref{table1}, Cases $18, g=4$ and $12, g=5$, and Cases $11,12, g=6$ do not occur \cite{Isk78, IP99} if $Z_1$ is nonsingular. One can describe explicitly the inverse rational map $\widetilde{Z}_1 \dashrightarrow Z_1$ for $g \geq 7$ by choosing the curve $C$ carefully on an appropriate $\widetilde{Z_1}$ \cite{IP99}. \end{exa}
\begin{exa}\cite{Tak89} Let $X=X_{2g'-2}\subset \PS^{g'+1}$ be a nonsingular (or more generally terminal Gorenstein factorial) Fano $3$-fold of index $1$ such that $A_X$ is very ample and let $\Gamma$ be a smooth conic lying on $X$. As above, let $\widetilde{X}\to X$ be the blow up along $\Gamma$ and let $\widetilde{X}\to Y$ be the anticanonical map of $\widetilde{X}$. \begin{thm}\cite{Tak89} Let $\Gamma \subset X_{2g'-2}$ be a conic on a nonsingular Fano $3$-fold of genus $g'$ and Picard rank $1$, and $\widetilde{X}\to X$ the blow up of $X$ along $\Gamma$. If $g'\geq 7$ and $\Gamma$ is general, then $\widetilde{X}$ is a small modification of a terminal Gorenstein Fano $3$-fold $Y_{2g-2}$ of index $1$, Picard rank $1$ and genus $g= g'-3$. If $g'\geq 9$, the same holds for any conic $\Gamma \subset X$. \end{thm} Note that when $\widetilde{X}$ is a small modification of a terminal Gorenstein Fano $3$-fold $Y$ with $\rho(Y)=1$, the divisor class group of $Y$ is generated by a hyperplane section and by a surface $\overline{E}= f(\PS(\mathcal{N}_{\Gamma/X}^v))$ which has degree $4$. More precisely, $\mathcal{N}_{\Gamma/X}= \mathcal{O}_{\PS^1}(d)\oplus\mathcal{O}_{\PS^1}(-d)$, for $d=0,1$ or $2$. If $d=0$, $f_{\vert E}\colon \PS^1 \times \PS^1\to \overline{E}$ is induced by a divisor of bidegree $(1,2)$. If $d=1$, $f_{\vert E}\colon \F_2 \to \overline{E}$ is induced by $\vert s+3f\vert$ and if $d=2$, $f_{\vert E}\colon \F_4 \to \overline{E}$ is induced by $\vert s+4f\vert$.
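The identification of $E$ with a Hirzebruch surface and the degree of $\overline{E}$ can be checked directly. Since projectivising a rank $2$ bundle on $\PS^1$ is unaffected by twisting with a line bundle, \[ \PS\big(\mathcal{O}_{\PS^1}(d)\oplus \mathcal{O}_{\PS^1}(-d)\big)\simeq \PS\big(\mathcal{O}_{\PS^1}\oplus \mathcal{O}_{\PS^1}(-2d)\big)= \F_{2d}, \] so that $d=0,1,2$ give $\PS^1\times \PS^1$, $\F_2$ and $\F_4$ respectively. Moreover, with $s$ the negative section and $f$ a fibre, $(s+3f)^2=-2+6=4$ on $\F_2$ and $(s+4f)^2=-4+8=4$ on $\F_4$, while a divisor of bidegree $(1,2)$ on $\PS^1\times \PS^1$ has self-intersection $4$; this is consistent with $\deg \overline{E}=4$.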
The blow up $f$ is one side of a Sarkisov link with midpoint along $Y$ and corresponds to one of Cases $25$ or $26$ in Table~\ref{table1}, or Cases $15,16$ for $g=4$, $9,10,11$ for $g=5$, $6,7$ for $g=6$, $2$ for $g=7$ or $1$ for $g=9$. \cite{Tak89} shows that for nonsingular Fano $3$-folds $X_{2g-2}=Z_1$, the Cases indicated by $\bullet$ in Tables~\ref{table1} and \ref{table2} are the only geometrically realizable constructions. \end{exa} \begin{exa} Let $X_{2g'-2}\subset \PS^{g'+1}$ be a nonsingular Fano $3$-fold with $g'\geq 6$. By Theorem~\ref{lineconic}, there is a rational normal cubic curve $\Gamma$ lying on $X$. If $\widetilde{X}\to Y$ is small, then for $g'=7$ we are in case $18$ or $19$ of Table~\ref{table1}: $Y$ is a terminal Gorenstein non-factorial quartic $3$-fold that is rational; and for $g'\geq 8$, we are in one of Cases $9$ or $11, g=4$, $5, g=5$, $3, g=6$ or $1,g=8$ of Table~\ref{table2}. \end{exa} \begin{thm}\cite{Mo84, MM83} \label{exi} Let $k=\overline{k}$ be a field of characteristic $0$ and let $d>0$ and $g\geq 0$ be integers. There exists a nonsingular curve $C$ lying on a nonsingular quartic surface $S_4\subset \PS^3_k$ with $(p_a C, \deg C)=(g,d)$ if and only if \[g=d^2/8+1\mbox{ or } g<d^2/8\] and $(g,d)\neq (3,5)$. \end{thm}
\begin{exa} I now use Theorem~\ref{exi} to show that some constructions that appear in Tables~\ref{table1} and \ref{table2} may be geometrically realizable.
\begin{enumerate} \item[1.] Let $C\subset \PS^3$ be a nonsingular curve that is an intersection of nonsingular quartic surfaces with $(p_a C, \deg C)= (15,11)$. Let $X$ be the blow up of $\PS^3$ along $C$, assume that $X$ is the small modification of a terminal quartic hypersurface $Y \subset \PS^4$. The linear system $\vert \mathcal{O}_{\PS^3} \vert $ determines a rational map $\PS^3 \dashrightarrow X_{10}\subset \PS^7$ that corresponds to the inverse of Case $28$ in Table~\ref{table1}. The midpoint $Y_4\subset \PS^4$ is a non-factorial rational quartic $3$-fold; $\Cl Y$ is generated by the hyperplane section and the image in $Y$ of $E$ or $D$. Note that this rational map provides an example of a rational Fano $3$-fold of genus $6$. \item[2.]\cite{IP99} Let $C\subset \PS^3$ be a nonsingular curve that is an intersection of nonsingular quartic surfaces with $(p_a C, \deg C)= (7,9)$. Let $X$ be the blow up of $\PS^3$ along $C$, then since $C$ is an intersection of nonsingular quartic surfaces, $X$ is the small modification of a terminal quartic hypersurface $Y \subset \PS^4$. The linear system $\vert \mathcal{O}_{\PS^3}(15)-4C \vert$ determines a rational map $\PS^3 \dashrightarrow X_{12}\subset \PS^8$ that corresponds to the inverse of Case $19$ in Table~\ref{table1}. The midpoint $Y_4\subset \PS^4$ is a non-factorial rational quartic $3$-fold; $\Cl Y$ is generated by the hyperplane section and the image in $Y$ of $E$ or $D$. \item[3.] Let $C\subset \PS^3$ be a nonsingular curve that is an intersection of nonsingular quartic surfaces with $(p_a C, \deg C)= (3,8)$. Let $X$ be the blow up of $\PS^3$ along $C$, assume that $X$ is the small modification of a terminal quartic hypersurface $Y \subset \PS^4$. The linear system $\vert \mathcal{O}_{\PS^3}\vert$ determines a rational map $\PS^3 \dashrightarrow V_3\subset \PS^4$ that corresponds to the inverse of Case $34$ in Table~\ref{table1}. The midpoint $Y_4\subset \PS^4$ is a non-factorial rational quartic $3$-fold; $\Cl Y$ is generated by the hyperplane section and the image in $Y$ of $E$ or $D$. Note that in this case $V_3\subset \PS^4$ would necessarily be singular, because it would be rational. \item[4.] Let $C\subset \PS^3$ be a nonsingular curve that is an intersection of nonsingular quartic surfaces with $(p_a C, \deg C)= (1,7)$. Let $X$ be the blow up of $\PS^3$ along $C$, assume that $X$ is the small modification of a terminal Gorenstein Fano $3$-fold $Y_{2,2,2} \subset \PS^6$ that is non-factorial and rational. The linear system $\vert \mathcal{O}_{\PS^3}\vert$ determines a rational map $\PS^3 \dashrightarrow X_{22}$ that corresponds to the inverse of Case $2, g=5$ in Table~\ref{table2}. \item[5.] Let $C\subset \PS^3$ be a nonsingular curve lying that is an intersection of nonsingular quartic surfaces with $(p_a C, \deg C)= (6,8)$. Let $X$ be the blow up of $\PS^3$ along $C$, assume that $X$ is the small modification of a terminal Gorenstein Fano $3$-fold $Y_{10} \subset \PS^7$ that is non-factorial and rational. The linear system $\vert \mathcal{O}_{\PS^3}\vert$ determines a rational map $\PS^3 \dashrightarrow X_{22}$ that corresponds to the inverse of Case $9,g=6$ in Table~\ref{table2}. \end{enumerate} \end{exa} \begin{exa}\cite{IP99} Let $C\subset \PS^3$ be a nonsingular non hyperelliptic curve lying on a nonsingular quartic surface with $(p_a C, \deg C)= (3,7)$. 
Let $X$ be the blow up of $\PS^3$ along $C$; then $X$ is the small modification of a terminal Gorenstein Fano $3$-fold $Y_{12} \subset \PS^8$ that is non-factorial and rational. The linear system $\vert \mathcal{O}_{\PS^3}\vert$ determines a rational map $\PS^3 \dashrightarrow X_{16}$ that corresponds to the inverse of Case $3,g=7$ in Table~\ref{table2}. \end{exa}
\subsection{Some remarks on geometric realizability}
Classically, numerical Sarkisov links have been shown not to be geometrically realizable by using constraints on the Hodge numbers of blow ups of nonsingular varieties along smooth centres, or constraints on the Euler characteristics of fibrations. In the case of divisorial contractions of factorial terminal Gorenstein $3$-folds, I have been unable to extend these results so as to use them to rule out some numerical Sarkisov links.
It is easy to show the following weakened version: \begin{lem} \label{htexc} Let $Z$ be a nonsingular weak Fano $3$-fold and $\varphi \colon Z\to Z_1$ an extremal divisorial contraction with centre along a curve $\Gamma$ and such that $Z_1$ is a terminal Gorenstein Fano $3$-fold. Then \[ h^{1,2}(Z)\leq h^{1,2}(\mathcal{Z}_{1, \eta})+ p_a(\Gamma), \] where $p_a(\Gamma)$ denotes the arithmetic genus of $\Gamma$ and $\mathcal{Z}_1$ a smoothing of $Z_1$. \end{lem}
\begin{rem} \label{Hodge} \cite{Kol89} shows that $Z$ and $\widetilde{Z}$ have the same analytic type of singularities. Since $h^{1,2}(Z)$ and $h^{1,2}(\widetilde{Z})$ can be expressed only in terms of local invariants of singularities and of $\rk \Cl Y$, where $Y$ is the anticanonical model of $Z$ and $\widetilde{Z}$, $h^{1,2}(Z)=h^{1,2}(\widetilde{Z})$ . In order to exclude some numerical Sarkisov links, I would need to find a lower bound for $h^{1,2}(Z)$ (resp.~ $h^{1,2}(\widetilde{Z})$). This would follow if the following question could be answered. \begin{que} Is it possible to relate $W_3H^4(Z)$ and $W_2H^3(Z)$ when $Z$ has terminal Gorenstein singularities? What if $Z$ is factorial? \end{que} \end{rem} \begin{rem} Observe that in order to determine that a numerical Sarkisov link is not realizable, it is enough to observe that no deformed (nonsingular) link exists between a Fano in the deformation family of $Z_1$ and a Fano in the deformation family of $\widetilde{Z}_1$. This has been used in the previous subsections. \end{rem}
Another question of interest would be to understand the geometric meaning of the correction term $e$ that appears in the tables of numerical Sarkisov links. The proof of Lemma~\ref{lem:4} shows that $e$ is the intersection of $E$, the exceptional divisor of the left hand side contraction, with the flopping locus of $Z\dashrightarrow \widetilde{Z}$. The large values of $e$ that appear in the table suggest the following question. \begin{que} Let $f \colon Z\to Y$ be a small factorialisation and assume that $f^{-1}(P)$ is a chain of rational curves $\bigcup_i \Gamma_i$. If $E$ is the proper transform on $Z$ of a Weil non-Cartier divisor passing through the singular point $P$, is it possible to have $E\cdot \Gamma_i>1$? The image of $E$ in $Y$ is a priori not Cohen-Macaulay at $P$, but is it possible to bound this intersection number? \end{que}
\end{document}
\begin{document}
\title{\bf Hölder's inequality and its reverse \\-- a probabilistic point of view }
\author{Lorenz Fr\"uhwirth and Joscha Prochno}
\date{}
\maketitle
\begin{abstract} In this article we take a probabilistic look at H\"older's inequality, considering the ratio of the two sides of the classical H\"older inequality for random vectors in $\R^n$. We prove a central limit theorem for this ratio, which then allows us to reverse the inequality up to a multiplicative constant with high probability. The models of randomness include the uniform distribution on $\ell_p^n$ balls and spheres. We also provide a Berry-Esseen type result and prove a large and a moderate deviation principle for the suitably normalized H\"older ratio. \end{abstract}
\section{Introduction \& Main results}
There are a number of classical inequalities frequently used throughout mathematics. A natural question is then to characterize the equality cases or to determine to what degree a reverse inequality may hold. The latter shall be the main focus here and we begin by motivating and illustrating the approach in the case of the classical arithmetic-geometric mean inequality (AGM inequality), which has attracted attention in the past decade. Let us recall that the AGM inequality states that for any finite number of non-negative real values, the geometric mean is less than or equal to the arithmetic mean. More precisely, for all $n \in \N$ and $x_1, \dots , x_n \geq 0$ it holds that \begin{equation*} \label{EqClassicalArithGeoIneq} \Big( \prod_{i=1}^n x_i \Big)^{1/n} \leq \frac{1}{n} \sum_{i=1}^n x_i \end{equation*} and equality holds if and only if $x_1= \dots = x_n$. Setting $y_i := \sqrt{x_i}$ for $i=1, \dots ,n$, we obtain \begin{equation*} \Big( \prod_{i=1}^n y_i \Big)^{1/n} \leq \Big( \frac{1}{n} \sum_{i=1}^n y_i^2 \Big)^{1/2}. \end{equation*}
For a point $y$ in the Euclidean unit sphere $\mathbb{S}_2^{n-1}:=\{x=(x_i)_{i=1}^n\in\R^n\,:\, \sum_{i=1}^n|x_i|^2 = 1\}$, this leads to the estimate \begin{equation*}
\Big( \prod_{i=1}^{n} |y_i |\Big)^{1/n} \leq \frac{1}{\sqrt{n}}. \end{equation*} It is natural to ask whether this inequality can be reversed for a ``typical'' point in $\mathbb{S}_2^{n-1}$ and in \cite[Proposition 1]{GluMil}, Gluskin and Milman showed that for any $t > 0$, \begin{equation*} \label{EqGluMilResult}
\sigma^{(n)}_2 \Big( \Big \{ x \in \mathbb{S}_2^{n-1} \ : \ \Big( \prod_{i=1}^{n} |x_i |\Big)^{1/n} \geq t \cdot \frac{1}{\sqrt{n}} \Big \} \Big) \geq 1- (1.6 \sqrt{t})^n, \end{equation*} where $\sigma^{(n)}_2$ denotes the unique rotationally invariant probability surface measure (the Haar measure) on $\mathbb{S}_2^{n-1}$. For large dimensions $n \in \N$ this means that with high probability, we can reverse the AGM inequality up to a constant. The problem was then revisited by Aldaz in \cite[Theorem 2.8]{Aldaz} and he showed that for all $\epsilon > 0, k> 0$ there exists an $N:=N(k, \epsilon ) \in \N$ such that for every $n \geq N$ \begin{equation*} \label{EqAldazResult}
\sigma^{(n)}_2\Bigg( \Bigg \{ x \in \mathbb{S}_2^{n-1} \ :\ \frac{(1- \epsilon) e^{ -\frac{1}{2} ( \gamma + \log 2)} }{\sqrt{n}} < \Big( \prod_{i=1}^{n} |x_i |\Big)^{1/n} < \frac{(1 + \epsilon) e^{ -\frac{1}{2} ( \gamma + \log 2)} }{\sqrt{n}} \Bigg \} \Bigg) \geq 1- \frac{1}{n^k}, \end{equation*} where $\gamma=0.5772\dots$ is Euler's constant. The previous works motivated Kabluchko, Prochno, and Vysotsky \cite{ArithGeoIneq} to study the asymptotic behavior of the $p$-generalized AGM inequality, which states that for $p \in (0, \infty)$, $n \in \N$, and $(x_i)_{i=1}^n \in \R^n$, \begin{equation*} \label{EqIntrpgenArithGeoIneq}
\Big( \prod_{i=1}^{n} |x_i| \Big)^{1/n} \leq \Big( \frac{1}{n} \sum_{i=1}^{n} |x_i|^p \Big)^{1/p}. \end{equation*} The authors then analyzed the quantity \begin{equation*} \label{EqRatioArithGeo}
\mathcal{R}_n:= \frac{\big( \prod_{i=1}^{n} |x_i| \big)^{1/n} }{ || x||_p}, \end{equation*}
where $ || x||_p := \big( \sum_{i=1}^{n} |x_i|^p \big)^{1/p} $, $x=(x_i)_{i=1}^n\in\R^n$. As in the case of the classical AGM inequality above, it is now natural to consider points $x \in \R^n$ that are uniformly distributed on the $\ell_p^n$ unit sphere $\mathbb{S}^{n-1}_p$ or the $\ell_p^n$ unit ball $\mathbb{B}^{n}_p$ respectively, where
\mathbb{B}^n_p := \big \{ x \in \R^n \ : \ || x ||_p \leq 1 \big \} \quad \text{and} \quad \mathbb{S}^{n-1}_p := \big \{ x \in \R^n \ : \ || x ||_p = 1 \big \}. \end{equation*} In \cite[Theorem 1.1]{ArithGeoIneq}, for a constant $m_p\in(0,\infty)$ only depending on $p$, it is shown that \begin{equation*} \sqrt{n} \big( e^{- m_p} \mathcal{R}_n -1 \big), \quad n \in \N \end{equation*} converges to a centered normal distribution with known variance and in \cite[Theorem 1.3]{ArithGeoIneq} a large deviation principle for the sequence $(\mathcal{R}_n)_{n \in \N }$ is proven (see Section \ref{SecLDPProb} for the definition of an LDP). \par{} The work \cite{ArithGeoIneq} of Kabluchko, Prochno, and Vysotsky was then recently complemented by Th\"ale in \cite{Th2021}, who obtained a Berry-Esseen type bound and a moderate deviation principle for a wider class of distributions on the $\ell_p^n$ balls (see \cite{Bartheetal}). In the subsequent paper \cite[Theorem 1.1]{KaufThaele}, Kaufmann and Th\"ale were able to identify the sharp asymptotics of $(\mathcal{R}_n)_{n \in \N }$. \par{} Another classical inequality which is used throughout mathematics and applied in numerous situations is H\"older's inequality. While, as outlined above, the AGM inequality is by now well understood from a probabilistic point of view, here we shall focus on H\"older's inequality and take a probabilistic approach in the same spirit. We recall that for $n\in\N$ and $p,q\in (1, \infty)$ with $ \frac{1}{p} + \frac{1}{q} = 1$, H\"older's inequality states that for all points $x,y\in\R^n$, \begin{equation} \label{EqHoelderIneq}
\sum_{i=1}^{n} |x_i y_i | \leq \Big( \sum_{i=1}^{n} |x_i|^p \Big)^{1/p} \Big( \sum_{i=1}^{n} |y_i|^q \Big)^{1/q}. \end{equation} The random quantity to be analyzed is therefore the ratio \begin{equation} \label{EqRationHoelderIneq}
\mathcal{R}_{p,q}^{(n)} := \frac{\sum_{i=1}^{n} |X^{(n)}_i Y^{(n)}_i |}{\Big( \sum_{i=1}^{n} |X^{(n)}_i|^p \Big)^{1/p} \Big( \sum_{i=1}^{n} |Y^{(n)}_i|^q \Big)^{1/q}}, \quad n \in \N , \end{equation} where we assume that $X^{(n)}$ and $Y^{(n)}$ are independent random points in $\mathbb{B}^{n}_p$ and $ \mathbb{B}^n_q$, respectively. In fact, we focus here on the uniform distribution on $\mathbb{B}_p^n$ and $\mathbb{S}_p^{n-1}$, i.e., we consider the cases where $X^{(n)} \sim \U( \mathbb{B}^n_p)$ and $Y^{(n)} \sim \U( \mathbb{B}^n_q)$ or $X^{(n)} \sim \U( \mathbb{S}^{n-1}_p)$ and $Y^{(n)} \sim \U( \mathbb{S}^{n-1}_q)$. The uniform distribution on $\mathbb{B}^{n}_p$ is given by the normalized Lebesgue measure, whereas there are two meaningful uniform distributions on $\mathbb{S}^{n-1}_p $, namely the surface measure denoted by $\sigma^{(n)}_p$ and the cone probability measure $\mu^{(n)}_p$ (see Subsection \ref{SecLDPProb} for precise definitions).
\subsection{Main results -- Limit theorems for the H\"older ratio} \label{SubSectionMainResults}
Let us now present our main results. For the sake of brevity, we first introduce the following general assumption on our random quantities.
\begin{AssumptionA}
\label{AssA}
Let $X^{(n)},Y^{(n)}$ be independent random vectors in $\R^n$ and let $p,q \in (1, \infty)$ with $\frac{1}{p} + \frac{1}{q} =1$. We assume that either $(X^{(n)},Y^{(n)}) \sim \U(\mathbb{B}_p^{n}) \otimes \U(\mathbb{B}_q^{n})$, or $(X^{(n)},Y^{(n)}) \sim \mu^{(n)}_p \otimes \mu^{(n)}_q$, or $(X^{(n)},Y^{(n)}) \sim \sigma^{(n)}_p \otimes \sigma^{(n)}_q$, where $\mu^{(n)}_p$ and $\sigma^{(n)}_p$ denote the cone probability measure and the surface probability measure on $\mathbb{S}_p^{n-1}$, respectively. \end{AssumptionA}
The following quantities appear in the formulation of Theorems \ref{ThmCLT} and \ref{ThmMDP}. Let $\Gamma$ denote the Gamma function and set
\begin{equation} \label{EqCovMatrixVectord}
\mathbf{C_{p,q}} = \left( \begin{matrix}
p^{2/p} \frac{\Gamma \left( \frac{3}{p} \right)}{\Gamma \left( \frac{1}{p} \right)} q^{2/q} \frac{\Gamma \left( \frac{3}{q} \right)}{\Gamma \left( \frac{1}{q} \right)} - m_{p,q}^2 & m_{p,q} & m_{p,q}\\
m_{p,q} & p & 0 \\
m_{p,q} & 0 & q
\end{matrix} \right), \quad d_{p,q} :=\left( 1, - \frac{m_{p,q}}{p}, - \frac{m_{p,q}}{q} \right),
\end{equation}
where $m_{p,q} := p^{1/p} \frac{\Gamma \left( \frac{2}{p} \right)}{\Gamma \left( \frac{1}{p} \right)} q^{1/q} \frac{\Gamma \left( \frac{2}{q} \right)}{\Gamma \left( \frac{1}{q} \right)} $.
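These quantities have a transparent probabilistic interpretation that may serve as a guide. If $X_1$ and $Y_1$ are independent random variables with Lebesgue densities proportional to $e^{-|x|^p/p}$ and $e^{-|y|^q/q}$ (the densities that also appear in the definition of $\Lambda$ in Theorem \ref{ThmLDP} below), then a direct computation shows that
\begin{equation*}
\mathbb{E}\big[ |X_1|^s \big] = p^{s/p}\, \frac{\Gamma\left( \frac{s+1}{p} \right)}{\Gamma\left( \frac{1}{p} \right)}, \qquad s>-1.
\end{equation*}
In particular, $m_{p,q}= \mathbb{E}|X_1|\, \mathbb{E}|Y_1| = \mathbb{E}|X_1 Y_1|$, $\mathbb{E}|X_1|^p=1$ and $\mathrm{Var}(|X_1|^p)=p$, and one checks that $\mathbf{C_{p,q}}$ is the covariance matrix of the random vector $\big(|X_1Y_1|, |X_1|^p, |Y_1|^q\big)$, while $d_{p,q}$ is the gradient of $(u,v,w)\mapsto u\, v^{-1/p} w^{-1/q}$ at the point of means $(m_{p,q},1,1)$. Note also that $m_{p,q}<1$, since $\mathbb{E}|X_1|< \big(\mathbb{E}|X_1|^p\big)^{1/p}=1$ by Jensen's inequality.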
\subsubsection{The CLT and Berry-Esseen bounds for $\mathcal{R}_{p,q}^{(n)}$}
We start with the central limit theorem and a Berry-Esseen type result for the H\"older ratio $\mathcal{R}_{p,q}^{(n)}$ (see \eqref{EqRationHoelderIneq}). As a consequence we shall see that H\"older's inequality may be reversed up to a specific multiplicative constant only depending on $p$ and $q$ with high probability.
\begin{thmalpha}[Central limit theorem]
\label{ThmCLT}
Let $X^{(n)},Y^{(n)}$ be random vectors satisfying Assumption \ref{AssA} and let
$(\mathcal{R}_{p,q}^{(n)})_{n \in \N }$ be given as in \eqref{EqRationHoelderIneq}, i.e.,
\begin{equation*}
\mathcal{R}_{p,q}^{(n)} = \frac{\sum_{i=1}^{n} |X^{(n)}_i Y^{(n)}_i |}{\Big( \sum_{i=1}^{n} |X^{(n)}_i|^p \Big)^{1/p} \Big( \sum_{i=1}^{n} |Y^{(n)}_i|^q \Big)^{1/q}}, \quad n \in \N.
\end{equation*}
Then, we have
\begin{equation}
\label{EqCLTRpq}
\sqrt{n} \big( \mathcal{R}_{p,q}^{(n)} - m_{p,q} \big) \stackrel{d }{\longrightarrow} Z,
\end{equation}
where $Z \sim \mathcal{N}(0, \sigma_{p,q}^2)$, $\sigma_{p,q}^2 := \langle d_{p,q} , \mathbf{C_{p,q}} d_{p,q} \rangle \in (0, \infty)$ with $ \mathbf{C_{p,q}}$ and $d_{p,q}$ as in \eqref{EqCovMatrixVectord}. \end{thmalpha}
\begin{rem}
As a consequence of Theorem \ref{ThmCLT}, for any $t \in \R$,
\begin{equation*}
\lim_{n \rightarrow \infty} \mathbb{P} \Big[ \sum_{i=1}^n |X_i^{(n)} Y_i^{(n)} | \geq \Big( \frac{t}{\sqrt{n}} + m_{p,q} \Big) ||X^{(n)}||_p ||Y^{(n)}||_q \Big] = \frac{1}{\sqrt{2 \pi } \sigma_{p,q}} \int_{t}^{\infty} e^{- \frac{x^2}{2 \sigma_{p,q}^2}} dx.
\end{equation*}
In particular, for $t=0$, we obtain
\begin{equation*}
\lim_{n \rightarrow \infty} \mathbb{P} \Big[ \sum_{i=1}^n |X_i^{(n)} Y_i^{(n)} | \geq m_{p,q} ||X^{(n)}||_p ||Y^{(n)}||_q \Big] = \frac{1}{2}.
\end{equation*}
This means that, with probability tending to $1/2$, we can reverse H\"older's inequality up to the explicit constant $m_{p,q} = p^{1/p} \frac{\Gamma \left( \frac{2}{p} \right)}{\Gamma \left( \frac{1}{p} \right)}
q^{1/q} \frac{\Gamma \left( \frac{2}{q} \right)}{\Gamma \left( \frac{1}{q} \right)} $. \end{rem}
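To illustrate the size of this constant, consider the Euclidean case $p=q=2$. Since $\Gamma(1)=1$ and $\Gamma(1/2)=\sqrt{\pi}$,
\begin{equation*}
m_{2,2} = 2^{1/2}\, \frac{\Gamma(1)}{\Gamma(1/2)} \cdot 2^{1/2}\, \frac{\Gamma(1)}{\Gamma(1/2)} = \frac{2}{\pi} \approx 0.6366.
\end{equation*}
Thus, while the Cauchy-Schwarz inequality bounds $\mathcal{R}_{2,2}^{(n)}$ from above by $1$, for independent uniform random points on high-dimensional Euclidean balls or spheres the ratio concentrates around $2/\pi$.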
We are also able to provide a quantitative version of Theorem \ref{ThmCLT}, i.e., a Berry-Esseen type result. For real-valued random variables $X$ and $Y $ on a common probability space, we define the Kolmogorov-distance \begin{equation} \label{EqKolmDist}
d_{Kol}( X,Y) := \sup_{t \in \R} \Big| \mathbb{P} \left[ X \leq t \right] - \mathbb{P} \left[ Y \leq t \right] \Big|. \end{equation}
\begin{thmalpha}[Berry-Esseen bound]
\label{ThmBerryEssentype}
Let $X^{(n)},Y^{(n)}$ be random vectors satisfying Assumption \ref{AssA} and let $(\mathcal{R}_{p,q}^{(n)})_{n \in \N }$ be given as in \eqref{EqRationHoelderIneq}. Then there exists a constant $C_{p,q} \in (0, \infty)$ only depending on $p$ and $q$, such that
\begin{equation*}
d_{Kol} \left (\sqrt{n} \big( \mathcal{R}_{p,q}^{(n)} - m_{p,q} \big), Z \right ) \leq C_{p,q} \frac{ \log(n)}{\sqrt{n}},
\end{equation*}
where $m_{p,q}= p^{1/p} \frac{\Gamma( \frac{2}{p})}{\Gamma( \frac{1}{p})} q^{1/q} \frac{\Gamma( \frac{2}{q})}{\Gamma( \frac{1}{q})} $, $Z \sim \mathcal{N}(0, \sigma_{p,q}^2)$ and $ \sigma_{p,q}^2 = \langle d_{p,q} , \mathbf{C_{p,q}} d_{p,q} \rangle $ is the same quantity as in Theorem \ref{ThmCLT}. \end{thmalpha}
\begin{rem}
\label{RemarkThmBE1}
Theorem \ref{ThmBerryEssentype} gives an asymptotic bound on the distance to a normal distribution similar to that of the classical Berry-Esseen theorem. The factor $\log(n)$ on the right-hand side seems to be an artifact of our method of proof, and we conjecture that it is not necessary. \end{rem}
\begin{rem}
\label{RemarkThmBE2}
Although Theorem \ref{ThmBerryEssentype} implies Theorem \ref{ThmCLT}, we provide a direct and more self-contained proof of Theorem \ref{ThmCLT}, which also contains estimates that we use in the proof of Theorem \ref{ThmMDP}. \end{rem}
\subsubsection{Moderate and large deviations for $\mathcal{R}_{p,q}^{(n)}$}
Two other classical types of limit theorems in probability theory are moderate and large deviation principles. Moderate deviations typically describe fluctuations on scales between the normal fluctuations of a central limit theorem and the scale of a law of large numbers, at which large deviations occur. In these regimes the probabilistic behavior is different: universality is replaced by a sensitivity to the tails of the distributions, which enters the rate function and/or the speed in a subtle way. For the definitions we refer to Section \ref{sec:notation and prelim} below.
\begin{thmalpha}[Large deviation principle]
\label{ThmLDP}
Let $X^{(n)},Y^{(n)}$ be random vectors satisfying Assumption \ref{AssA} and let $(\mathcal{R}_{p,q}^{(n)})_{n \in \N }$ be given as in \eqref{EqRationHoelderIneq}. Then, $ (\mathcal{R}_{p,q}^{(n)})_{n \in \N }$ satisfies a large deviation principle in $\R$ at speed $n$ and with good rate function $\mathbb{I}: \R \rightarrow [0, \infty]$ defined as
\begin{equation}
\label{EqGRFLDP}
\mathbb{I}(x) := \begin{cases}
\inf \Big \{ \Lambda^{*}( u,v,w) \ : \ x = \frac{u}{v^{1/p} w^{1/q}} \Big \}, &: x > 0 \\
+ \infty &: x \leq 0.
\end{cases}
\end{equation}
The function $\Lambda^{*} : \R^3 \rightarrow [0 , \infty ]$ is given by
\begin{equation*}
\Lambda^{*}( u,v,w) := \sup_{ (r,s,t) \in \R^3} \big[ ru + sv + tw - \Lambda( r,s,t) \big], \quad (u,v,w) \in \R^3,
\end{equation*}
where
\begin{equation*}
\Lambda(r,s,t) := \log \int_{\R^2} c_{p,q} \exp \Big( r |xy| + s |x|^p +t |y|^q - \frac{|x|^p}{p} - \frac{|y|^q}{q} \Big) dx \,dy, \quad (r,s,t) \in \R^3
\end{equation*}
with $c_{p,q}:= \frac{1}{2 p^{1/p} \Gamma( 1 + \frac{1}{p})} \frac{1}{2 q^{1/q} \Gamma( 1 + \frac{1}{q})}$. \end{thmalpha}
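\begin{rem}
Note that $e^{\Lambda(r,s,t)}$ is the moment generating function, evaluated at $(r,s,t)$, of the image of the probability density $c_{p,q} e^{ - |x|^p/p - |y|^q/q}$ on $\R^2$ under the map $(x,y) \mapsto (|xy|, |x|^p, |y|^q)$, whose expectation equals $(m_{p,q},1,1)$. Hence $\Lambda^{*}(m_{p,q},1,1) = 0$ and, since $m_{p,q} = m_{p,q}/(1^{1/p} 1^{1/q})$, the good rate function satisfies $\mathbb{I}(m_{p,q}) = 0$. This is consistent with the concentration of $\mathcal{R}_{p,q}^{(n)}$ around $m_{p,q}$ quantified by Theorem \ref{ThmCLT}.
\end{rem}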
The next result concerns the moderate deviation principle for the H\"older ratio and complements the central limit theorem and the large deviation principle already presented.
\begin{thmalpha}[Moderate deviation principle]
\label{ThmMDP}
Let $X^{(n)},Y^{(n)}$ be random vectors satisfying Assumption \ref{AssA} and let $(\mathcal{R}_{p,q}^{(n)})_{n \in \N }$ be given as in \eqref{EqRationHoelderIneq}. Further, assume that $(b_n)_{n \in \N}\in\R^{\mathbb N}$ is a sequence such that
\begin{equation*}
\lim_{n \rightarrow \infty} \frac{b_n}{\sqrt{\log n}} = \infty \quad \text{ and } \quad \lim_{n \rightarrow \infty} \frac{b_n}{ \sqrt{n}} = 0.
\end{equation*}
Then $\left( \frac{\sqrt{n}}{b_n} \big( \mathcal{R}_{p,q}^{(n)} - m_{p,q} \big) \right)_{n \in \N}$ satisfies a moderate deviation principle in $\R$ at speed $(b_n^2)_{n \in \N}$ and with a good rate function $\I : \R \rightarrow [0, \infty] $ given by $ \I(t) := \frac{t^2}{2 \sigma_{p,q}^2}$, where $\sigma_{p,q}^2 = \langle d_{p,q} , \mathbf{C_{p,q}} d_{p,q} \rangle \in (0, \infty)$ with $ \mathbf{C_{p,q}}$ and $d_{p,q}$ as in \eqref{EqCovMatrixVectord}, while $m_{p,q}= p^{1/p} \frac{\Gamma( \frac{2}{p})}{\Gamma( \frac{1}{p})} q^{1/q} \frac{\Gamma( \frac{2}{q})}{\Gamma( \frac{1}{q})} $. \end{thmalpha}
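\begin{rem}
Informally, Theorem \ref{ThmMDP} interpolates between the Gaussian fluctuations described by Theorem \ref{ThmCLT} and the large deviations described by Theorem \ref{ThmLDP}: for fixed $t > 0$ and $(b_n)_{n \in \N}$ as above, it yields
\begin{equation*}
\lim_{n \rightarrow \infty} \frac{1}{b_n^2} \log \mathbb{P} \Big[ \mathcal{R}_{p,q}^{(n)} - m_{p,q} \geq t \, \frac{b_n}{\sqrt{n}} \Big] = - \frac{t^2}{2 \sigma_{p,q}^2},
\end{equation*}
since $[t, \infty)$ is a continuity set for the rate function $\I$.
\end{rem}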
\section{Notation and Preliminaries}\label{sec:notation and prelim}
We shall now briefly introduce the notation used throughout the text together with some background material on large deviations and some further results used in the proofs.
\subsection{Notation}
For $p \in [1, \infty)$, $d \in \N$ and $x \in \R^d$, \begin{equation*}
||x||_p := \left( \sum_{i=1}^{d} |x_i|^p \right)^{1/p} \end{equation*} denotes the $p$-norm in $\R^d$. We recall the definitions of the $\ell_p^n$ unit ball and the $\ell_p^n$ unit sphere, i.e., \begin{equation*}
\mathbb{B}_p^n := \left \{ x \in \R^n \ : \ ||x||_p \leq 1 \right \} \quad \text{and} \quad \mathbb{S}_p^{n-1} := \left \{ x \in \R^n \ : \ ||x||_p = 1 \right \}. \end{equation*} Moreover, for $x,y \in \R^d$, $ \langle x, y \rangle := \sum_{i=1}^n x_i y_i$ is the standard scalar product on $\R^d$. $\mathscr{B}( \R^d)$ denotes the Borel-sigma algebra on $\R^d$. For $p \geq 1$, $\gamma_p$ denotes the $p$-generalized Gaussian distribution with Lebesgue-density \begin{equation*}
\frac{d\gamma_p}{d x}(x) := \frac{1}{2 p^{1/p} \Gamma \left( 1 + \frac{1}{p} \right)} e^{- |x|^p/p}, \quad x \in \R. \end{equation*} We denote by $\mathcal{N}( \mu, \sigma^2)$ the normal distribution with mean $\mu \in \R$ and variance $\sigma^2 \in (0, \infty)$. For two distributions $\nu_1 $ and $ \nu_2$ on $\mathscr{B}( \R^d)$, we denote by $\nu_1 \otimes \nu_2$ the product measure of $\nu_1$ and $\nu_2$. Given a sequence of real-valued random variables $(X_n)_{n \in \N}$ and another real-valued random variable $X$, we denote by $X_n \stackrel{d}{\longrightarrow} X$ convergence in distribution. We shall also write iid for independent and identically distributed.
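Let us also record, for illustration purposes only, an elementary way of simulating $\gamma_p$ that is used in the numerical sketches below: if $G$ is Gamma-distributed with shape $1/p$ and scale $p$, then $|\zeta|^p \stackrel{d}{=} G$, so a symmetrized $G^{1/p}$ has exactly the density above. A minimal sketch in Python (the function name is ad hoc) reads as follows.
\begin{verbatim}
# Minimal sketch (illustration only): sampling from gamma_p, i.e. from the
# density c_p * exp(-|x|^p / p).  Uses |zeta|^p ~ Gamma(shape=1/p, scale=p).
import numpy as np

def sample_gamma_p(p, size, seed=None):
    rng = np.random.default_rng(seed)
    g = rng.gamma(shape=1.0 / p, scale=p, size=size)
    sign = rng.choice([-1.0, 1.0], size=size)
    return sign * g ** (1.0 / p)

# Moment check: E|zeta|^p = 1, consistent with the normalization of gamma_p.
z = sample_gamma_p(3.0, size=10**6, seed=0)
print(np.mean(np.abs(z) ** 3.0))   # approximately 1.0
\end{verbatim}
Alternatively, SciPy's \texttt{gennorm} distribution (density proportional to $e^{-|x/s|^{\beta}}$) with shape $\beta = p$ and scale $s = p^{1/p}$ should yield the same law.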
\subsection{Basics from large deviation theory and probability}
\label{SecLDPProb}
Let $d \in \N $ and $( \xi_n)_{n \in \N}$ be a sequence of $\R^d$-valued random variables and let $(s_n)_{n \in \N}$ be a sequence of real numbers tending to infinity. We say that $( \xi_n)_{ n \in \N}$ satisfies a large deviation principle (LDP) in $\R^d$ at speed $(s_n)_{n \in \N }$ if and only if there exists a good rate function (GRF) $\I : \R^d \rightarrow [0, \infty ]$, i.e., $\I $ has compact level sets, such that \begin{equation} \label{EqIntrLDP} - \inf_{ x \in A^{ \circ }} \I( x) \leq \liminf_{ n \rightarrow \infty } \frac{1}{s_n} \log \prb[ \xi_n \in A^{ \circ } ] \leq \limsup_{ n \rightarrow \infty } \frac{1}{s_n} \log \prb[ \xi_n \in \overline{A}] \leq - \inf_{ x \in \overline{A}} \I( x) \end{equation} for all $A\in \mathscr{B}( \R^d)$.
There are different ways to show that a certain sequence of random variables satisfies an LDP, one of the most commonly used is the so called contraction principle (see, e.g., Theorem 4.2.1 in \cite{DZ2011}).
\begin{lem}[Contraction principle]
\label{LemContractionPrinciple}
Let $d,n\in\N$ and $f: \R^d \rightarrow \R^n$ be a continuous function. Let $( \xi_n)_{ n \in \N}$ be a sequence of random variables that satisfies an LDP in $\R^d$ at speed $(s_n)_{n \in \N }$ with GRF $\mathbb{I}: \R^d \rightarrow [0, \infty ]$. Then, the sequence $( f( \xi_n))_{n \in \N}$ satisfies an LDP in $\R^n$ at speed $( s_n)_{n \in \N}$ with the GRF $\mathbb{I}' : \R^n \rightarrow [0, \infty ]$, where
\begin{equation*}
\mathbb{I}'( y) := \inf_{ x \in f^{-1}( \{ y \})} \mathbb{I}(x).
\end{equation*} \end{lem}
We shall also use Cramér's large deviation theorem for $\R^d $-valued random variables (see, e.g., \cite[Corollary 6.1.6]{DZ2011}).
\begin{proposition}[Cram\'er's theorem]
\label{ThmCramer}
Let $(X_i)_{i \in \N}$ be a sequence of iid $\R^d$-valued random variables such that $0 \in \mathcal{D}_{ \Lambda}^{ \circ }$, where
\begin{equation*}
\mathcal{D}_{ \Lambda} := \big \{ t \in \R^d \ : \ \Lambda(t) = \log \E [ e^{ \langle t , X_1 \rangle }] < \infty \big \}.
\end{equation*}
Then, the sequence $(\xi_n)_{n \in \N}$ with
\begin{equation*}
\xi_n := \frac{X_1 + \cdots + X_n }{n}, \quad n \in \N
\end{equation*}
satisfies an LDP in $\R^d$ at speed $n$ with GRF $\Lambda^{*} : \R^d \rightarrow [0, \infty]$, where
\begin{equation*}
\Lambda^{*}(x) := \sup_{ t \in \R^d} \big[ \langle x ,t \rangle - \Lambda(t) \big].
\end{equation*} \end{proposition}
Two sequences of $ \R^d$-valued random variables $(\xi_n)_{n \in \N}$ and $(\eta_n)_{n \in \N}$ are said to be exponentially equivalent at speed $(s_n)_{n \in \N}$, if for all $\epsilon > 0$, we have
\begin{equation}
\label{EqExpEquivalence}
\lim_{n \rightarrow \infty} \frac{1}{s_n} \log \mathbb{P} \big[ \big || \xi_n - \eta_n \big ||_2 > \epsilon \big] = - \infty.
\end{equation}
The following result can be found, e.g., in \cite[Theorem 4.2.13]{DZ2011}, and states that if a sequence of random vectors satisfies an LDP and is exponentially equivalent to another sequence of random vectors, then both satisfy the same LDP.
\begin{proposition}
\label{PropExpEquivalence}
Let $(\xi_n)_{n \in \N}$ and $(\eta_n)_{n \in \N}$ be two random $\R^d$-valued sequences. Assume that $(\xi_n)_{n \in \N}$ satisfies an LDP at speed $(s_n)_{n \in \N}$ with GRF $\I: \R^d \rightarrow [0, \infty]$. Moreover, let $(\xi_n)_{n \in \N}$ and $(\eta_n)_{n \in \N}$ be exponentially equivalent at speed $(s_n)_{n \in \N}$. Then, $(\eta_n)_{n \in \N}$ satisfies an LDP at speed $(s_n)_{n \in \N}$ with the same GRF $\I: \R^d \rightarrow [0, \infty]$.
\end{proposition}
Let $(\xi_n)_{n \in \N}$ be a sequence of $\R^d$-valued random variables. We say that the sequence of random variables satisfies a moderate deviation principle (MDP) if and only if $\big( \frac{\xi_n}{\sqrt{n} b_n} \big)_{n \in \N} $ satisfies an LDP at speed $(b_n^2)_{n \in \N}$ and with some GRF $\mathbb{I}: \R^d \rightarrow [0, \infty]$ for some positive sequence $(b_n)_{n\in\N}$ satisfying $\lim_{n \rightarrow \infty} b_n = \infty$ and $\lim_{n \rightarrow \infty} \frac{b_n}{\sqrt{n}}=0$.
The scaling by $\sqrt{n} b_n$ is typically faster than the scaling in a central limit theorem but slower than the scaling in a law of large numbers. This property of an MDP is nicely illustrated in the following Cramér-type theorem (see, e.g., \cite[Theorem 3.7.1]{DZ2011}).
\begin{proposition}
\label{ThmCramerMDP}
Let $d \in \N$ and $(X_i)_{i \in \N }$ be a sequence of iid $\R^d$-valued random variables such that
\begin{equation*}
\Lambda(t) = \log \E [ e^{ \langle t , X_1 \rangle }] < \infty ,
\end{equation*}
for all $t$ in some ball around the origin, $\E[X_1]=0$ and $\mathbf{C}$, the covariance matrix of $X_1$, is invertible. Let $(b_n)_{n \in \N}$ be a sequence of real numbers with
\begin{equation*}
\lim_{n \rightarrow \infty} b_n = \infty \quad \text{ and } \quad \lim_{n \rightarrow \infty} \frac{b_n}{ \sqrt{n}} = 0.
\end{equation*}
Then, the sequence $(\xi_n)_{n \in \N}$ with $\xi_n := \frac{1}{b_n \sqrt{n}} \sum_{i=1}^n X_i$ satisfies an LDP in $\R^d$ at speed $( b_n^2)_{n \in \N}$ with GRF $\I : \R^d \rightarrow [0, \infty]$, where
\begin{equation*}
\I(x) := \frac{1}{2} \langle x, \mathbf{C}^{-1 } x \rangle , \quad x \in \R^d.
\end{equation*}
\end{proposition}
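In the Gaussian case, i.e., when the $X_i$ are standard Gaussian random variables in $\R$, we have $\xi_n = \frac{1}{b_n \sqrt{n}} \sum_{i=1}^n X_i \sim \mathcal{N}(0, b_n^{-2})$, and the Gaussian tail asymptotics give $\frac{1}{b_n^2} \log \mathbb{P}[ \xi_n \geq x ] \rightarrow - x^2/2$ for every $x > 0$, in accordance with the rate function $\I(x) = \frac{1}{2} x^2$ (here $\mathbf{C}$ is the identity).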
The following result is taken from \cite[Lemma 4.1]{APTGaussianFluct}.
\begin{proposition}
\label{PropEstimateKolmogoroff}
Let $Y_1,Y_2,Y_3$ be three random variables, let $Z$ be a centered Gaussian random variable with variance $ \sigma^2 \in (0, \infty)$ and let $\epsilon > 0$. Then,
\begin{equation*}
\sup_{t \in \R} \big | \mathbb{P}[ Y_1 + Y_2 +Y_3 \geq t] - \mathbb{P} [ Z \geq t] \big | \leq \sup_{t \in \R} \big | \mathbb{P}[ Y_1 \geq t] - \mathbb{P} [ Z \geq t] \big | + \mathbb{P} \left [| Y_2 | > \frac{\epsilon }{2} \right ] + \mathbb{P} \left [| Y_3 | > \frac{\epsilon }{2} \right ] + \frac{\epsilon }{\sqrt{2 \pi \sigma^2 }}.
\end{equation*} \end{proposition}
There are two meaningful uniform distributions on the $\ell_p^n$ unit sphere $\mathbb{S}_p^{n-1}$, namely the cone probability measure $\mu^{(n)}_p$ and the surface probability measure $\sigma^{(n)}_p$. In the following, we will briefly discuss their theoretical foundation as well as the relation between those two distributions. We can equip $\mathbb{S}^{n-1}_p$ with the trace Borel-sigma algebra on $\R^n$ which we denote by $\mathcal{B}( \mathbb{S}_p^{n-1})$. For $A \in \mathcal{B}(\mathbb{S}_p^{n-1})$, the cone probability measure $\mu^{(n)}_p$ is then defined as \begin{equation*} \mu^{(n)}_p(A):= \frac{\lambda^{(n)}([0,1]A)}{\lambda^{(n)}(\mathbb{B}_p^n)}, \end{equation*} where $\lambda^{(n)}$ denotes Lebesgue measure on $\mathscr{B}(\R^n)$ and $[0,1]A := \{ x \in \R^n \ : \ x= ra, \ r \in [0,1], \ a \in A\}$. By a result of Schechtman and Zinn \cite{SchechtZinn} and Rachev and Rüschendorf \cite{RachevRuesch}, we know that for $X_p^{(n)} \sim \U \left( \mathbb{B}_p^n \right) $ and $Y_p^{(n)} \sim \mu^{(n)}_p $, \begin{align} \begin{split} \label{EqProbRepSchechtmannZinn}
X_p^{(n)} & \stackrel{d}{=} U^{1/n} \frac{ \zeta^{(n)} }{ || \zeta^{(n)}||_p} \\
Y_p^{(n)} & \stackrel{d}{=} \frac{ \zeta^{(n)} }{ || \zeta^{(n)}||_p}, \end{split} \end{align} where $U \sim \U( [0,1])$ and $\zeta^{(n)} := ( \zeta_1, \cdots, \zeta_n )$ are independent and $( \zeta_i)_{ i \in \N}$ is an iid sequence distributed with respect to the $p$-generalized Gaussian distribution $\gamma_p$; we recall the corresponding Lebesgue-density \begin{equation} \label{EqDensitypGenGaussian}
\frac{d \gamma_p}{dx}(x) = \frac{1}{2 p^{1/p} \Gamma( 1 + \frac{1}{p})} e^{ - |x|^p/p}, \quad x \in \R. \end{equation}
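As an aside, the representation \eqref{EqProbRepSchechtmannZinn} also provides a convenient simulation scheme for $\U(\mathbb{B}_p^n)$ and for the cone measure $\mu^{(n)}_p$. The following minimal sketch (for illustration only; function names are ad hoc) implements it, generating $\gamma_p$ samples via $|\zeta|^p \sim \mathrm{Gamma}(\text{shape}=1/p, \text{scale}=p)$.
\begin{verbatim}
# Minimal sketch (illustration only): sampling U(B_p^n) and the cone
# measure mu_p^(n) via the representation above.
import numpy as np

def sample_gamma_p_vec(p, n, rng):
    g = rng.gamma(shape=1.0 / p, scale=p, size=n)
    return rng.choice([-1.0, 1.0], size=n) * g ** (1.0 / p)

def sample_cone(p, n, rng):
    # One sample from the cone measure on the l_p^n unit sphere.
    z = sample_gamma_p_vec(p, n, rng)
    return z / np.sum(np.abs(z) ** p) ** (1.0 / p)

def sample_ball(p, n, rng):
    # One sample from the uniform distribution on the l_p^n unit ball.
    return rng.uniform() ** (1.0 / n) * sample_cone(p, n, rng)

rng = np.random.default_rng(1)
x = sample_ball(1.5, 5, rng)
print(np.sum(np.abs(x) ** 1.5) ** (1 / 1.5))   # p-norm of x, lies in [0, 1]
\end{verbatim}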
Let $\sigma^{(n)}_p$ be the $(n-1)$-dimensional Hausdorff probability measure or, equivalently, the $(n-1)$-dimensional normalized Riemannian volume measure on $\mathbb{S}_p^{n-1}$, $p\in [1,\infty)$. We have the following relation between $\mu^{(n)}_p$ and $\sigma^{(n)}_p$ (see \cite[Lemma 2]{NaorRomikProjSurfMeasure}).
\begin{proposition} Let $n \in \N$ and $1 \leq p < \infty$. Then, for all $x \in \mathbb{S}_p^{n-1}$, \begin{equation*}
\frac{d \sigma^{(n)}_p}{d \mu^{(n)}_p}(x) = C_{n,p} \Big( \sum_{i=1}^n |x_i|^ {2p-2} \Big)^{1/2}, \end{equation*} where \begin{equation*}
C_{n,p}:= \Big( \int_{\mathbb{S}_p^{n-1}} \sum_{i=1}^n |x_i|^ {2p-2} \mu^{(n)}_p(dx) \Big)^{-1/2} . \end{equation*} \end{proposition} If $p=1,2$, it is clear that $\sigma^{(n)}_p=\mu^{(n)}_p$. We remark that in case of $p = \infty $, we know that (see \cite{NaorRomikProjSurfMeasure}) $\sigma^{(n)}_{\infty}=\mu^{(n)}_{\infty}$. In contrast, for all $p \in (1, \infty)$ with $p \neq 2$, we have that $\sigma^{(n)}_p \neq \mu^{(n)}_p$. Nevertheless, for large $n \in \N$, one can prove that $\sigma^{(n)}_p$ and $\mu^{(n)}_p$ are close in the total variation distance (see \cite[Theorem 2]{NaorRomikProjSurfMeasure}).
\begin{proposition} \label{PropSurfMeasConeMeasClose} For all $1 \leq p < \infty$, we have \begin{equation*}
|| \mu^{(n)}_p - \sigma^{(n)}_p ||_{TV} := \sup_{ A \in \mathscr{B}(\mathbb{S}_p^{n-1}) } | \mu^{(n)}_p(A) - \sigma^{(n)}_p(A) | \leq \frac{c_p}{\sqrt{n}}, \end{equation*} where $c_p \in (0,\infty)$ only depends on $p$. \end{proposition}
\section{Proofs of the main results}
Before we continue with some technical results and the proofs of the main theorems, let us recall here that all theorems stated in Section \ref{SubSectionMainResults} assume Assumption \ref{AssA}.
We begin with a technical lemma giving a useful representation of the H\"older ratio $\mathcal{R}_{p,q}^{(n)}$, $ n \in \N$.
\begin{lem} \label{LemReprRpq} Let $p,q \in (1,\infty)$ with $\frac{1}{p} + \frac{1}{q} = 1$ and assume that either, $(X^{(n)},Y^{(n)}) \sim \U( \mathbb{B}^n_p) \otimes \U( \mathbb{B}^n_q)$ or $ (X^{(n)},Y^{(n)}) \sim \mu^{(n)}_p \otimes \mu^{(n)}_q$. Then, we have \begin{equation} \label{EqRepRationRpq}
\mathcal{R}_{p,q}^{(n)} = \frac{\sum_{i=1}^{n} |X^{(n)}_i Y^{(n)}_i |}{\Big( \sum_{i=1}^{n} |X^{(n)}_i|^p \Big)^{1/p} \Big( \sum_{i=1}^{n} |Y^{(n)}_i|^q \Big)^{1/q}} \stackrel{d}{=} \frac{\sum_{i=1}^{n} |\zeta_i \eta_i |}{\Big( \sum_{i=1}^{n} |\zeta_i|^p \Big)^{1/p} \Big( \sum_{i=1}^{n} |\eta_i|^q \Big)^{1/q}}, \end{equation} where $\big( (\zeta_i, \eta_i) \big)_{i \in \N} $ is an iid sequence with $ (\zeta_1, \eta_1) \sim \gamma_p \otimes \gamma_q $. \end{lem} \begin{proof}
First, assume that $(X^{(n)}, Y^{(n)}) \sim \U( \mathbb{B}^n_p) \otimes \U( \mathbb{B}^n_q)$. Then, using the Schechtman-Zinn representation in \eqref{EqProbRepSchechtmannZinn}, we have $X^{(n)} \stackrel{d}{=} U^{1/n} \frac{\zeta^{(n)}}{|| \zeta^{(n)} ||_p}$, where $U$ and $\zeta^{(n)} =(\zeta_1, \cdots, \zeta_n)$ are independent with $U \sim \U([0,1])$ and iid $\zeta_i \sim \gamma_p $ for $i \in \N$. The random variable $Y^{(n)}$ has a similar form, i.e., $Y^{(n)} \stackrel{d}{=} V^{1/n} \frac{\eta^{(n)}}{|| \eta^{(n)} ||_q}$, where $V$ and $\eta^{(n)} =(\eta_1, \cdots, \eta_n)$ are independent with $V \sim \U([0,1])$ and iid $\eta_i \sim \gamma_q $ for $i \in \N$. Using this leads to
\begin{equation*}
\mathcal{R}_{p,q}^{(n)} = \frac{\sum_{i=1}^{n} |X^{(n)}_i Y^{(n)}_i |}{\Big( \sum_{i=1}^{n} |X^{(n)}_i|^p \Big)^{1/p} \Big( \sum_{i=1}^{n} |Y^{(n)}_i|^q \Big)^{1/q}} \stackrel{d}{=}
\frac{ \sum_{i=1}^{n} |\zeta_i \eta_i |}{\Big( \sum_{i=1}^{n} |\zeta_i|^p \Big)^{1/p} \Big( \sum_{i=1}^{n} |\eta_i|^q \Big)^{1/q}}.
\end{equation*}
Now assume that $X^{(n)}$ and $Y^{(n)}$ are independent and distributed with respect to the cone measure on $\mathbb{S}^{n-1}_p$ and $\mathbb{S}^{n-1}_q$, respectively, i.e., $(X^{(n)}, Y^{(n)}) \sim \mu_p^{(n)} \otimes \mu_q^{(n)}$. Then, again by \eqref{EqProbRepSchechtmannZinn}, we have that $X^{(n)} \stackrel{d}{=} \frac{\zeta^{(n)}}{|| \zeta^{(n)} ||_p}$ and $Y^{(n)} \stackrel{d}{=} \frac{\eta^{(n)}}{|| \eta^{(n)} ||_q}$ with $\zeta^{(n)} = (\zeta_1, \cdots ,\zeta_n) $ and $ \eta^{(n)} = (\eta_1, \cdots ,\eta_n)$, where $\big( (\zeta_i, \eta_i) \big)_{i \in \N}$ is an iid sequence with $(\zeta_1, \eta_1) \sim \gamma_p \otimes \gamma_q$. Hence, we obtain the same representation in distribution for $\mathcal{R}_{p,q}^{(n)}$, i.e., we have
\begin{equation*}
\mathcal{R}_{p,q}^{(n)} = \frac{\sum_{i=1}^{n} |X^{(n)}_i Y^{(n)}_i |}{\Big( \sum_{i=1}^{n} |X^{(n)}_i|^p \Big)^{1/p} \Big( \sum_{i=1}^{n} |Y^{(n)}_i|^q \Big)^{1/q}} \stackrel{d}{=} \frac{\sum_{i=1}^{n} |\zeta_i \eta_i |}{\Big( \sum_{i=1}^{n} |\zeta_i|^p \Big)^{1/p} \Big( \sum_{i=1}^{n} |\eta_i|^q \Big)^{1/q}}, \quad n \in \N.
\end{equation*} \end{proof}
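Before turning to the proofs, let us illustrate Lemma \ref{LemReprRpq} and Theorem \ref{ThmCLT} numerically. The following minimal sketch (for illustration only) simulates $\mathcal{R}_{p,q}^{(n)}$ from the right-hand side of \eqref{EqRepRationRpq} in the symmetric case $p=q=2$, where $\gamma_2$ is the standard Gaussian distribution, $m_{2,2} = 2/\pi$ and $\sigma_{2,2}^2 = 1 - 8/\pi^2$.
\begin{verbatim}
# Minimal sketch (illustration only): Monte Carlo check of the lemma above
# and of the CLT, for p = q = 2 (gamma_2 is the standard normal law).
import numpy as np

rng = np.random.default_rng(0)
n, reps = 1000, 5000
zeta = rng.standard_normal((reps, n))       # iid gamma_2 samples
eta = rng.standard_normal((reps, n))
num = np.abs(zeta * eta).sum(axis=1)
den = np.sqrt((zeta ** 2).sum(axis=1)) * np.sqrt((eta ** 2).sum(axis=1))
R = num / den                               # realizations of the Hoelder ratio

m22 = 2.0 / np.pi
print(R.mean(), m22)                        # empirical mean vs 2/pi ~ 0.6366
print((np.sqrt(n) * (R - m22)).std())       # ~ sqrt(1 - 8/pi^2) ~ 0.435
\end{verbatim}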
\subsection{Proof of Theorem \ref{ThmCLT}}
We start with an auxiliary lemma that is used in the proof of the central limit theorem (Theorem \ref{ThmCLT}) and also later in the proof of Theorem \ref{ThmMDP} (see Section \ref{SectionProofMDP}).
\begin{lem} \label{LemTaylorCLT} Let $X^{(n)}$ and $Y^{(n)}$ be two independent random vectors and assume that either $(X^{(n)}, Y^{(n)}) \sim \U(\mathbb{B}_p^{n}) \otimes \U(\mathbb{B}_q^{n})$ or $(X^{(n)}, Y^{(n)}) \sim \mu^{(n)}_p \otimes \mu^{(n)}_q$, and let \begin{equation*}
\mathcal{R}_{p ,q}^{(n)} = \frac{\sum_{i=1}^{n} |X^{(n)}_i Y^{(n)}_i |}{\Big( \sum_{i=1}^{n} |X^{(n)}_i|^p \Big)^{1/p} \Big( \sum_{i=1}^{n} |Y^{(n)}_i|^q \Big)^{1/q}}. \end{equation*} Then, we can write \begin{equation} \label{EqTaylorApproxRpq} \mathcal{R}_{p ,q}^{(n)} = m_{p,q} + \frac{1}{\sqrt{n}} S_n^{(1)} - \frac{m_{p,q}}{p \sqrt{n}} S_n^{(2)} - \frac{m_{p,q}}{q \sqrt{n}} S_n^{(3)} + R \Big( \frac{S_n^{(1)}}{\sqrt{n}} , \frac{S_n^{(2)}}{\sqrt{n}}, \frac{S_n^{(3)}}{\sqrt{n}} \Big), \end{equation}
where $m_{p,q} = p^{1/p} \frac{\Gamma( \frac{2}{p})}{\Gamma( \frac{1}{p})} q^{1/q} \frac{\Gamma( \frac{2}{q})}{\Gamma( \frac{1}{q})} $, $ S_n^{(1)} := \frac{1}{\sqrt{n}} \sum_{i=1}^n ( | \zeta_i \eta_i | - m_{p , q}) $, $S_n^{(2)} := \frac{1}{\sqrt{n}} \sum_{i=1}^n ( | \zeta_i |^p - 1) $ and $S_n^{(3)} := \frac{1}{\sqrt{n}} \sum_{i=1}^n ( | \eta_i |^q - 1)$ with iid $(\zeta_i , \eta_i ) \sim \gamma_p \otimes \gamma_q$, $i \in \N$. The function $R$ has the property that there is an $M \in ( 0, \infty ) $ such that \begin{equation} \label{EqErrorEstimate}
|R(x,y,z)| \leq M || (x,y,z) ||^2_2 \quad \text{ for all sufficiently small } || (x,y,z) ||_2. \end{equation} \end{lem}
\begin{proof}[Proof of Lemma \ref{LemTaylorCLT}]
\label{ProofTheoremCLT}
Consider the function $F: D_F \rightarrow \R$ with
\begin{equation*}
F(x,y,z) := \frac{x + m_{p,q}}{(1+y)^{1/p} ( 1+ z)^{1/q}},
\end{equation*}
where $D_F := \R \times (-1, \infty) \times (-1, \infty) \subseteq \R^3$ is the natural domain of $F$ and $m_{p,q}$ is the constant from Lemma \ref{LemTaylorCLT}. Clearly, $F$ is twice continuously differentiable on $D_F$, which contains an open neighborhood of $(0,0,0)$. Hence, the first-order Taylor expansion of $F$ exists locally around $(0,0,0)$ and, for $(x,y,z) \in D_F$, we get
\begin{equation}
\label{EqTaylorExpF}
F(x,y,z) = m_{p,q} + x - \frac{m_{p,q}}{p} y - \frac{m_{p,q}}{q} z + R(x,y,z),
\end{equation}
where there exist $M, \delta \in (0, \infty)$ such that, for $||(x,y,z)||_2 \leq \delta $, we have $ |R(x,y,z)| \leq M ||(x,y,z)||_2 ^2$.
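For completeness, a direct computation of the gradient of $F$ at the origin gives
\begin{equation*}
F(0,0,0) = m_{p,q}, \qquad \frac{\partial F}{\partial x}(0,0,0) = 1, \qquad \frac{\partial F}{\partial y}(0,0,0) = - \frac{m_{p,q}}{p}, \qquad \frac{\partial F}{\partial z}(0,0,0) = - \frac{m_{p,q}}{q},
\end{equation*}
which identifies the linear part in \eqref{EqTaylorExpF}.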
Using the representation of $\mathcal{R}_{p,q}^{(n)}$ from Lemma \ref{LemReprRpq}, it follows that
\begin{align*}
\mathcal{R}_{p,q}^{(n)} & \stackrel{d}{=} \frac{\sum_{i=1}^{n} |\zeta_i \eta_i |}{\Big( \sum_{i=1}^{n} |\zeta_i|^p \Big)^{1/p} \Big( \sum_{i=1}^{n} |\eta_i|^q \Big)^{1/q}} \\
& =
\frac{\frac{1}{n} \sum_{i=1}^{n} \big( |\zeta_i \eta_i | - m_{p,q} \big) + m_{p,q} }{\Big( \frac{1}{n} \sum_{i=1}^{n} \big( |\zeta_i|^p -1 \big) +1\Big)^{1/p} \Big( \frac{1}{n} \sum_{i=1}^{n} \big( |\eta_i|^q -1 \big) +1\Big)^{1/q}} \\
& = \frac{ \frac{1}{\sqrt{n}}S_n^{(1)} + m_{p,q} }{\big( \frac{1}{\sqrt{n}}S_n^{(2)} +1 \big)^{1/p} \big( \frac{1}{\sqrt{n}}S_n^{(3)} +1 \big)^{1/q}} \\
& = F \left(\frac{S_n^{(1)}}{\sqrt{n}},\frac{S_n^{(2)}}{\sqrt{n}},\frac{S_n^{(3)}}{\sqrt{n}} \right),
\end{align*}
where we have used that $ \frac{1}{p} + \frac{1}{q} = 1$ and the quantities $S_n^{(i)}, i =1,2,3$ are given as in Lemma \ref{LemTaylorCLT}. By the Taylor expansion of $F$ in \eqref{EqTaylorExpF}, we get
\begin{equation*}
\mathcal{R}_{p ,q}^{(n)} = m_{p,q} + \frac{1}{\sqrt{n}} S_n^{(1)} - \frac{m_{p,q}}{p \sqrt{n}} S_n^{(2)} - \frac{m_{p,q}}{q \sqrt{n}} S_n^{(3)} + R \Big( \frac{S_n^{(1)}}{\sqrt{n}} , \frac{S_n^{(2)}}{\sqrt{n}}, \frac{S_n^{(3)}}{\sqrt{n}} \Big),
\end{equation*}
as claimed. \end{proof} \begin{proof}[Proof of Theorem \ref{ThmCLT}] First, we assume that either $(X^{(n)}, Y^{(n)}) \sim \U(\mathbb{B}_p^{n}) \otimes \U(\mathbb{B}_q^{n})$ or $(X^{(n)}, Y^{(n)}) \sim \mu^{(n)}_p \otimes \mu^{(n)}_q$. Then, we can use the Taylor expansion in \eqref{EqTaylorApproxRpq} from Lemma \ref{LemTaylorCLT}, from which we get \begin{equation} \label{EqRepCLTRpq} \sqrt{n} \big( \mathcal{R}_{p,q}^{(n)} - m_{p,q} \big) \stackrel{d}{=} S_n^{(1)} - \frac{m_{p,q}}{p} S_n^{(2)} - \frac{m_{p,q}}{q} S_n^{(3)} + \sqrt{n} R \Big( \frac{S_n^{(1)}}{\sqrt{n}} , \frac{S_n^{(2)}}{\sqrt{n}}, \frac{S_n^{(3)}}{\sqrt{n}} \Big), \end{equation}
where $ S_n^{(1)} := \frac{1}{\sqrt{n}} \sum_{i=1}^n ( | \zeta_i \eta_i | - m_{p , q}) $, $S_n^{(2)} := \frac{1}{\sqrt{n}} \sum_{i=1}^n ( | \zeta_i |^p - 1) $ and $S_n^{(3)} := \frac{1}{\sqrt{n}} \sum_{i=1}^n ( | \eta_i |^q - 1)$ with iid $(\zeta_i , \eta_i ) \sim \gamma_p \otimes \gamma_q$, $i \in \N$. We show that \begin{equation} \label{EqConvinProbR} \sqrt{n} R \Big( \frac{S_n^{(1)}}{\sqrt{n}} , \frac{S_n^{(2)}}{\sqrt{n}}, \frac{S_n^{(3)}}{\sqrt{n}} \Big) \stackrel{\mathbb{P}}{\longrightarrow} 0. \end{equation}
To that end, we recall \eqref{EqErrorEstimate} from Lemma \ref{LemTaylorCLT}. There, we showed that for sufficiently small $\delta > 0$ and $(x,y,z) \in \R^3$ with $ || (x,y,z) ||_2 < \delta $, we have that $ | R(x,y,z) | \leq M ||(x,y,z)||_2^2 $ for some constant $M \in (0, \infty)$. This gives the following estimate \begin{align*}
\mathbb{P} \left[ \frac{ R \Big( \frac{S_n^{(1)}}{\sqrt{n}} , \frac{S_n^{(2)}}{\sqrt{n}}, \frac{S_n^{(3)}}{\sqrt{n}} \Big)}{ \Big | \Big | \Big( \frac{S_n^{(1)}}{\sqrt{n}} , \frac{S_n^{(2)}}{\sqrt{n}}, \frac{S_n^{(3)}}{\sqrt{n}} \Big) \Big | \Big |_2^2 } > M \right] & \leq \mathbb{P} \Big[ \Big | \Big | \Big( \frac{S_n^{(1)}}{\sqrt{n}} , \frac{S_n^{(2)}}{\sqrt{n}}, \frac{S_n^{(3)}}{\sqrt{n}} \Big) \Big | \Big |_2^2 > \delta^2 \Big] \\ & = \mathbb{P} \Big[ \Big( \frac{S_n^{(1)}}{\sqrt{n}} \Big)^2 + \Big( \frac{S_n^{(2)}}{\sqrt{n}} \Big)^2 + \Big( \frac{S_n^{(3)}}{\sqrt{n}} \Big)^2 > \delta^2 \Big] \longrightarrow 0, \quad \text{ as } \quad n \rightarrow \infty. \end{align*}
The latter holds due to Slutsky's theorem and the fact that $ \frac{S_n^{(i)}}{\sqrt{n}} \stackrel{\mathbb{P}}{\longrightarrow}0$, $i =1,2,3$ as $n \rightarrow \infty $ by the strong law of large numbers (note that $ \mathbb{E} [ | \zeta_1| |\eta_1|] = m_{p,q}$ and $\mathbb{E} [ | \zeta_1|^p] = \mathbb{E} [ |\eta_1|^q]=1$ ). Further, we have for $\epsilon > 0$, \begin{align*}
\mathbb{P} \left[ \sqrt{n} R \Big( \frac{S_n^{(1)}}{\sqrt{n}} , \frac{S_n^{(2)}}{\sqrt{n}}, \frac{S_n^{(3)}}{\sqrt{n}} \Big) > \epsilon \right] & \leq \mathbb{P} \left[ \sqrt{n} \Big | \Big |\Big( \frac{S_n^{(1)}}{\sqrt{n}} , \frac{S_n^{(2)}}{\sqrt{n}}, \frac{S_n^{(3)}}{\sqrt{n}} \Big) \Big | \Big |_2^2 > \frac{\epsilon }{M}\right ] + \mathbb{P} \left[ \frac{ R \Big( \frac{S_n^{(1)}}{\sqrt{n}} , \frac{S_n^{(2)}}{\sqrt{n}}, \frac{S_n^{(3)}}{\sqrt{n}} \Big)}{ \Big | \Big | \Big( \frac{S_n^{(1)}}{\sqrt{n}} , \frac{S_n^{(2)}}{\sqrt{n}}, \frac{S_n^{(3)}}{\sqrt{n}} \Big) \Big | \Big |_2^2 } > M \right] \\ & = \mathbb{P} \Big[ \frac{ \big( S_n^{(1)} \big)^2}{\sqrt{n}} + \frac{ \big( S_n^{(2)} \big)^2}{\sqrt{n}} + \frac{ \big( S_n^{(3)} \big)^2}{\sqrt{n}} > \frac{\epsilon}{M} \Big] +
\mathbb{P} \left[ \frac{ R \Big( \frac{S_n^{(1)}}{\sqrt{n}} , \frac{S_n^{(2)}}{\sqrt{n}}, \frac{S_n^{(3)}}{\sqrt{n}} \Big)}{ \Big | \Big | \Big( \frac{S_n^{(1)}}{\sqrt{n}} , \frac{S_n^{(2)}}{\sqrt{n}}, \frac{S_n^{(3)}}{\sqrt{n}} \Big) \Big | \Big |_2^2 } > M \right]. \end{align*} The second term tends to zero as $n \rightarrow \infty$ as shown before. For the first term, we observe that $ S_n^{(i)}$ converges to a normal distribution as $n \rightarrow \infty$ for $i=1,2,3$. Thus, $ \frac{(S_n^{(i)})^2}{\sqrt{n}} \stackrel{\mathbb{P}}{\longrightarrow} 0$ as $n \rightarrow \infty$ and hence, again employing Slutsky's theorem, we get \begin{equation*} \mathbb{P} \Big[ \frac{ \big( S_n^{(1)} \big)^2}{\sqrt{n}} + \frac{ \big( S_n^{(2)} \big)^2}{\sqrt{n}} + \frac{ \big( S_n^{(3)} \big)^2}{\sqrt{n}} > \frac{\epsilon}{M} \Big] \longrightarrow 0, \quad \text{as} \quad n \rightarrow \infty. \end{equation*} This completes the argument and shows the claim in \eqref{EqConvinProbR}. Now, let us consider the sequence \begin{equation} \label{EqVecofSis}
S_n^{(1)} - \frac{m_{p,q}}{p} S_n^{(2)} - \frac{m_{p,q}}{q} S_n^{(3)} = \frac{1}{\sqrt{n}} \sum_{i=1}^n \left( | \zeta_i \eta_i| - m_{p,q} - \frac{m_{p,q}}{p} \left( | \zeta_i|^p -1 \right) - \frac{m_{p,q}}{q} \left(|\eta_i|^q -1 \right) \right), \quad n \in \N . \end{equation} We observe that \eqref{EqVecofSis} is a sum of iid scaled and centered random variables with finite second moment. Thus, by the central limit theorem, \eqref{EqVecofSis} converges in distribution to a centered normal distribution with variance \begin{align} \label{EqVarianceCLT}
\sigma_{p,q}^2 :&= \mathbb{V} \left[ | \zeta_1 \eta_1 | - \frac{m_{p,q}}{p} | \zeta_1 |^p - \frac{m_{p,q}}{q} | \eta_1|^q \right] \\ \notag &=
\mathbb{V} \left [ \left \langle d_{p,q} , \left( | \zeta_1 \eta_1 | , | \zeta_1|^p, | \eta_1 |^q \right) \right \rangle \right] \\ \notag & =
\left \langle d_{p,q} , \mathbf{C_{p,q}} d_{p,q} \right \rangle. \end{align}
Here, $ \mathbf{C_{p,q}}$ is the covariance matrix of the vector $\left( | \zeta_1 \eta_1 | , | \zeta_1|^p, | \eta_1 |^q \right)$ and $d_{p,q} = \left( 1, - \frac{m_{p,q}}{p}, - \frac{m_{p,q}}{q} \right) $.
We note that the coordinates of the vector $ \left( | \zeta_1 \eta_1 | , | \zeta_1 |^p, | \eta_1 |^q \right) \in \R^3$ are linearly independent, in the sense that no non-trivial linear combination of them is almost surely constant. Thus, the covariance matrix $ \mathbf{C_{p,q}}$ is positive definite. Moreover, since $( \zeta_1 , \eta_1) \sim \gamma_p \otimes \gamma_q$, we can compute the entries of $\mathbf{C_{p,q}}$ explicitly, where we get \begin{equation} \label{EqCovMatrixGammapq} \mathbf{C_{p,q}} = \left ( \begin{matrix} p^{2/p} \frac{\Gamma \left( \frac{3}{p} \right)}{\Gamma \left( \frac{1}{p} \right)} q^{2/q} \frac{\Gamma \left( \frac{3}{q} \right)}{\Gamma \left( \frac{1}{q} \right)} - m_{p,q}^2 & m_{p,q} & m_{p,q}\\ m_{p,q}& p & 0 \\
m_{p,q} & 0 & q \end{matrix} \right), \end{equation} with $m_{p,q} = p^{1/p} \frac{\Gamma \left( \frac{2}{p} \right)}{\Gamma \left( \frac{1}{p} \right)} q^{1/q} \frac{\Gamma \left( \frac{2}{q} \right)}{\Gamma \left( \frac{1}{q} \right)} $. This shows that $ \sigma_{p,q}^2 $ is positive and finite. Hence, by Slutsky's theorem, we obtain, as $n \rightarrow \infty$, the convergence in distribution claimed in Theorem \ref{ThmCLT}: \begin{equation*} \sqrt{n} \big( \mathcal{R}_{p,q}^{(n)} - m_{p,q} \big) \stackrel{d}{=} S_n^{(1)} - \frac{m_{p,q}}{p} S_n^{(2)} - \frac{m_{p,q}}{q} S_n^{(3)} + \sqrt{n} R \Big( \frac{S_n^{(1)}}{\sqrt{n}} , \frac{S_n^{(2)}}{\sqrt{n}}, \frac{S_n^{(3)}}{\sqrt{n}} \Big) \stackrel{d}{\longrightarrow} \mathcal{N}(0, \sigma_{p,q}^2). \end{equation*} \par{}
Now we consider the case when $(X^{(n)}, Y^{(n)}) \sim \sigma^{(n)}_p \otimes \sigma^{(n)}_q, n \in \N$. Let $A \in \mathcal{B}(\R)$ and recall the set $ D_n = \Big \{ (x,y) \in \mathbb{S}_p^{n-1} \times \mathbb{S}_q^{n-1} \ : \ \frac{ \sum_{i=1}^n |x_i y_i |}{|| x ||_p ||y||_q} \in A \Big \}$. Then, we have $\mathbb{P} \Big[ \mathcal{R}_{p,q}^{(n)} \in A \Big] = \sigma^{(n)}_p \otimes \sigma^{(n)}_q (D_n)$. Let $Z \sim \mathcal{N}(0, \sigma_{p,q}^2)$. Then \begin{equation*}
\Big| \mathbb{P} \big [ \mathcal{R}_{p,q}^{(n)} \in A \big] - \mathbb{P} \big [Z \in A \big ] \Big | \leq
\Big| \sigma^{(n)}_p \otimes \sigma^{(n)}_q \big [ D_n \big] - \mu^{(n)}_p \otimes \mu^{(n)}_q \big [ D_n \big ] \Big |
+ \Big| \mu^{(n)}_p \otimes \mu^{(n)}_q \big [ D_n \big ] - \mathbb{P} \big [Z \in A \big ] \Big | \stackrel{n \rightarrow \infty}{\longrightarrow} 0. \end{equation*} The second term in the previous expression tends to zero as seen in the first part of this proof. The first term tends to zero, since \begin{align*}
\Big| \sigma^{(n)}_p \otimes \sigma^{(n)}_q \big [ D_n \big] - \mu^{(n)}_p \otimes \mu^{(n)}_q \big [ D_n \big ] \Big | & \leq \sup_{A \in \mathcal{B}( \mathbb{S}_p^{n-1}), B \in \mathcal{B}( \mathbb{S}_q^{n-1}) } \left | \sigma^{(n)}_p \otimes \sigma^{(n)}_q \big [ A \times B \big] - \mu^{(n)}_p \otimes \mu^{(n)}_q \big [ A \times B \big ] \right| \\
& \leq \sup_{A \in \mathcal{B}( \mathbb{S}_p^{n-1}), B \in \mathcal{B}( \mathbb{S}_q^{n-1}) } \left | \sigma^{(n)}_p \otimes \sigma^{(n)}_q \big [ A \times B \big] - \sigma_p^{(n)}\big [A \big ] \mu_q^{(n)} \big [B \big ] \right| \\
& \qquad + \sup_{A \in \mathcal{B}( \mathbb{S}_p^{n-1}), B \in \mathcal{B}( \mathbb{S}_q^{n-1}) } \left | \sigma^{(n)}_p \big [A \big ] \mu^{(n)}_q \big [B \big] - \mu^{(n)}_p \otimes \mu^{(n)}_q \big [ A \times B \big ] \right| \\ & \leq
\sup_{ B \in \mathcal{B}( \mathbb{S}_q^{n-1}) } \left | \sigma^{(n)}_q \big [ B \big ]- \mu_q^{(n)} \big [B \big ] \right| +
\sup_{ A \in \mathcal{B}( \mathbb{S}_p^{n-1}) } \left | \sigma^{(n)}_p \big [ A \big ]- \mu_p^{(n)} \big [ A \big ] \right| \stackrel{n \rightarrow \infty}{\longrightarrow} 0. \end{align*} The latter follows from Proposition \ref{PropSurfMeasConeMeasClose}. Further, we used that $D_n = D_n^1 \times D_n^2$, where $D_n^1 \in \mathscr{B}(\mathbb{S}_p^{n-1})$ and $D_n^2 \in \mathscr{B}(\mathbb{S}_q^{n-1})$ (since $D_n \in \mathscr{B}(\mathbb{S}_p^{n-1} \times \mathbb{S}_q^{n-1}) = \mathscr{B}(\mathbb{S}_p^{n-1}) \times \mathscr{B}(\mathbb{S}_q^{n-1}) $, where the latter holds by, e.g. \cite[Theorem D.4]{DZ2011}). \end{proof}
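The explicit form \eqref{EqCovMatrixGammapq} makes the limiting variance easy to evaluate. The following minimal sketch (for illustration only; names are ad hoc) computes $\mathbf{C_{p,q}}$, $d_{p,q}$ and $\sigma_{p,q}^2 = \langle d_{p,q} , \mathbf{C_{p,q}} d_{p,q} \rangle$ for given H\"older conjugates; for $p=q=2$ it returns $1 - 8/\pi^2 \approx 0.189$.
\begin{verbatim}
# Minimal sketch (illustration only): the limiting variance sigma_{p,q}^2
# from the closed-form covariance matrix above.
import numpy as np
from math import gamma

def sigma2(p, q):
    m = (p ** (1 / p) * gamma(2 / p) / gamma(1 / p)
         * q ** (1 / q) * gamma(2 / q) / gamma(1 / q))
    v = (p ** (2 / p) * gamma(3 / p) / gamma(1 / p)
         * q ** (2 / q) * gamma(3 / q) / gamma(1 / q) - m ** 2)
    C = np.array([[v, m, m],
                  [m, p, 0.0],
                  [m, 0.0, q]])
    d = np.array([1.0, -m / p, -m / q])
    return float(d @ C @ d)

print(sigma2(2.0, 2.0), 1 - 8 / np.pi ** 2)   # both approximately 0.18943
print(sigma2(3.0, 1.5))                       # a non-symmetric example
\end{verbatim}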
\subsection{Proof of Theorem \ref{ThmBerryEssentype}}
We now present the proof of the Berry-Esseen bound.
\begin{proof}[Proof of Theorem \ref{ThmBerryEssentype}] First, we assume that either, $(X^{(n)}, Y^{(n)}) \sim \U(\mathbb{B}_p^{n}) \otimes \U(\mathbb{B}_q^{n})$ or $(X^{(n)}, Y^{(n)}) \sim \mu^{(n)}_p \otimes \mu^{(n)}_q$. Then, we recall identity \eqref{EqTaylorApproxRpq} from Lemma \ref{LemTaylorCLT}, i.e., \begin{equation*} \sqrt{n} \big( \mathcal{R}_{p,q}^{(n)} - m_{p,q} \Big) \stackrel{d}{=} S_n^{(1)} - \frac{m_{p,q}}{p} S_n^{(2)} - \frac{m_{p,q}}{q} S_n^{(3)} + \sqrt{n} R \Big( \frac{S_n^{(1)}}{\sqrt{n}} , \frac{S_n^{(2)}}{\sqrt{n}}, \frac{S_n^{(3)}}{\sqrt{n}} \Big), \end{equation*}
where $ S_n^{(1)} := \frac{1}{\sqrt{n}} \sum_{i=1}^n ( | \zeta_i \eta_i | - m_{p , q}) $, $S_n^{(2)} := \frac{1}{\sqrt{n}} \sum_{i=1}^n ( | \zeta_i |^p - 1) $ and $S_n^{(3)} := \frac{1}{\sqrt{n}} \sum_{i=1}^n ( | \eta_i |^q - 1)$ with iid $(\zeta_i , \eta_i ) \sim \gamma_p \otimes \gamma_q$, $i \in \N$. We can apply Proposition \ref{PropEstimateKolmogoroff} with $Y_1 := S_n^{(1)} - \frac{m_{p,q}}{p} S_n^{(2)} - \frac{m_{p,q}}{q} S_n^{(3)} $, $Y_2 := \sqrt{n} R \Big( \frac{S_n^{(1)}}{\sqrt{n}} , \frac{S_n^{(2)}}{\sqrt{n}}, \frac{S_n^{(3)}}{\sqrt{n}} \Big)$ and $Y_3 := 0$, which yields, for $Z \sim \mathcal{N}(0 , \sigma_{p,q}^2)$ and $\epsilon > 0$, \begin{align} \begin{split} \label{EqKolDistRZ} d_{Kol} \left( \sqrt{n} \big( \mathcal{R}_{p,q}^{(n)} - m_{p,q} \Big), Z \right) & \leq d_{Kol} \left( S_n^{(1)} - \frac{m_{p,q}}{p} S_n^{(2)} - \frac{m_{p,q}}{q} S_n^{(3)} , Z \right) \\ & +
\mathbb{P} \left[ \sqrt{n} \left | R \left( \frac{S_n^{(1)}}{\sqrt{n}} , \frac{S_n^{(2)}}{\sqrt{n}}, \frac{S_n^{(3)}}{\sqrt{n}} \right ) \right | > \frac{\epsilon}{2} \right] + \frac{\epsilon }{\sqrt{2 \pi \sigma_{p,q}^2 }}. \end{split} \end{align} By the definition of $(S_n^{(i)})_{n \in \N}$ for $i=1,2,3$, we have \begin{equation*}
S_n^{(1)} - \frac{m_{p,q}}{p} S_n^{(2)} - \frac{m_{p,q}}{q} S_n^{(3)} = \frac{1}{\sqrt{n}} \sum_{i=1}^n \left( | \zeta_i \eta_i | - m_{p,q} - \frac{m_{p,q}}{p} \left( | \zeta_i |^p - 1 \right) - \frac{m_{p,q}}{q} \left( | \eta_i |^q - 1 \right) \right), \end{equation*} which is a sum of iid centered random variables with finite third moments. Hence, the classical Berry-Esseen theorem (see, e.g., \cite[Chapter XVI.5, Theorem 1]{Feller1971}) gives us a constant $C_1 \in (0, \infty)$ such that \begin{equation} \label{EqBerryEssenTaylorPoly} d_{Kol} \left( S_n^{(1)} - \frac{m_{p,q}}{p} S_n^{(2)} - \frac{m_{p,q}}{q} S_n^{(3)} , Z \right) \leq \frac{C_1}{\sqrt{n}}, \quad n \in \N. \end{equation} Now, we establish an upper bound of the same order for \begin{equation*}
\mathbb{P} \left[ \sqrt{n} \left | R \left( \frac{S_n^{(1)}}{\sqrt{n}} , \frac{S_n^{(2)}}{\sqrt{n}}, \frac{S_n^{(3)}}{\sqrt{n}} \right) \right |> \frac{\epsilon}{2} \right], \quad n \in \N. \end{equation*}
We recall the local behavior of the function $R$ around zero given in Lemma \ref{LemTaylorCLT}. We have that there exist constants $M , \delta \in (0, \infty)$ such that $ | R( x,y,z) | \leq M || (x,y,z) ||_2^2$ for all $(x,y,z) \in \R^3$ with $||(x,y,z) ||_2 \leq \delta$. This gives us the following estimate \begin{align} \begin{split} \label{EqErrorTermBE}
\mathbb{P} \left[ \sqrt{n} \left | R \left( \frac{S_n^{(1)}}{\sqrt{n}} , \frac{S_n^{(2)}}{\sqrt{n}}, \frac{S_n^{(3)}}{\sqrt{n}} \right) \right | > \frac{\epsilon}{2} \right] & \leq \mathbb{P} \left[ \left | \left | \left( \frac{S_n^{(1)}}{\sqrt{n}}, \frac{S_n^{(2)}}{\sqrt{n}}, \frac{S_n^{(3)}}{\sqrt{n}} \right) \right | \right |_2 > \sqrt{ \frac{\epsilon}{2 \sqrt{n} M }} \right] \\ & +
\mathbb{P} \left[ \left | \left | \left( \frac{S_n^{(1)}}{\sqrt{n}} , \frac{S_n^{(2)}}{\sqrt{n}}, \frac{S_n^{(3)}}{\sqrt{n}} \right) \right | \right |_2 > \delta \right]. \end{split} \end{align} For a random vector $Y =( Y_1,Y_2,Y_3) \in \R^3$ and for every $ \overline{\delta} \in (0, \infty ) $, we have the following upper bound \begin{equation*}
\mathbb{P} \left[ || Y ||_2 > \overline{\delta} \right] \leq \mathbb{P} \left[ |Y_1| > \frac{\overline{\delta}}{\sqrt{3}} \right] + \mathbb{P} \left[ |Y_2| > \frac{\overline{\delta}}{\sqrt{3}} \right] +
\mathbb{P} \left[ |Y_3| > \frac{\overline{\delta}}{\sqrt{3}} \right]. \end{equation*} Applying this to the right-hand side in Equation \eqref{EqErrorTermBE} leads to \begin{align*}
\mathbb{P} \left[ \sqrt{n} \left | R \left( \frac{S_n^{(1)}}{\sqrt{n}} , \frac{S_n^{(2)}}{\sqrt{n}}, \frac{S_n^{(3)}}{\sqrt{n}} \right) \right | > \frac{\epsilon}{2} \right] \leq P_n \left( \frac{S_n^{(1)}}{\sqrt{n}} \right) + P_n \left( \frac{S_n^{(2)}}{\sqrt{n}} \right) + P_n \left( \frac{S_n^{(3)}}{\sqrt{n}} \right), \end{align*}
where $P_n \left (\frac{S_n^{(i)}}{\sqrt{n}} \right ) := \mathbb{P} \left[ \left | \frac{S_n^{(i)}}{\sqrt{n}} \right | > \sqrt{ \frac{\epsilon}{6 \sqrt{n} M }} \right] + \mathbb{P} \left[ \left | \frac{S_n^{(i)}}{\sqrt{n}} \right | > \frac{\delta}{\sqrt{3}} \right]$, $i=1,2,3$. To bound these quantities, we use \cite[Lemma 2.9]{JPBerryEsseenlp} with $\epsilon = \epsilon_n = \tilde{C}_{p,q} \frac{\log n}{\sqrt{n}}$ and $\beta_n = \overline{C}_{p,q} n$, where $ \tilde{C}_{p,q} , \overline{C}_{p,q} \in (0, \infty)$ are suitably chosen constants only depending on $p$ and $q$. As shown in \cite[Section 5.3]{JPBerryEsseenlp}, there exist constants $ C_{i,p,q} \in (0, \infty)$, $i=1,2,3$, such that \begin{equation*} P_n \left (\frac{S_n^{(i)}}{\sqrt{n}} \right ) \leq \frac{C_{i,p,q}}{\sqrt{n}}, \quad i=1,2,3 . \end{equation*} For the quantity in \eqref{EqKolDistRZ}, by combining the previous estimate and \eqref{EqBerryEssenTaylorPoly}, we get \begin{equation*} d_{Kol} \left( \sqrt{n} \big( \mathcal{R}_{p,q}^{(n)} - m_{p,q} \Big), Z \right) \leq \frac{\hat{C}_{p,q}}{\sqrt{n}}+ \frac{\epsilon_n }{\sqrt{2 \pi \sigma^2 }}, \end{equation*} with $\hat{C}_{p,q} := C_1 + C_{1,p,q} + C_{2,p,q} + C_{3,p,q}$. Since $\epsilon_n = \tilde{C}_{p,q} \frac{\log n}{\sqrt{n}}$, we can find a constant $ C_{p,q} \in (0 , \infty)$ such that \begin{equation} \label{EqBEBoundConeMeas} d_{Kol} \left( \sqrt{n} \big( \mathcal{R}_{p,q}^{(n)} - m_{p,q} \Big), Z \right) \leq C_{p,q} \frac{\log n}{\sqrt{n}}, \quad n \in \N , \end{equation} as claimed. \par{}
Now we consider the case when $(X^{(n)}, Y^{(n)}) \sim \sigma^{(n)}_p \otimes \sigma^{(n)}_q$ and recall that $\mathcal{R}_{p,q}^{(n)} = \frac{\sum_{i=1}^n |X^{(n)}_i Y^{(n)}_i |}{||X^{(n)}||_p ||Y^{(n)}||_q}$. Further, let $(\tilde{X}^{(n)}, \tilde{Y}^{(n)}) \sim \mu^{(n)}_p \otimes \mu^{(n)}_q$ and define $\tilde{\mathcal{R}}_{p,q}^{(n)} := \frac{\sum_{i=1}^n |\tilde{X}^{(n)}_i \tilde{Y}^{(n)}_i |}{||\tilde{X}^{(n)}||_p ||\tilde{Y}^{(n)}||_q} $. We want to show that there exists a constant $C \in (0, \infty)$ such that \begin{equation*} d_{Kol} \left( \sqrt{n} \big( \mathcal{R}_{p,q}^{(n)} - m_{p,q} \big), \sqrt{n} \big( \tilde{\mathcal{R}}_{p,q}^{(n)} - m_{p,q} \big) \right) \leq \frac{C}{\sqrt{n}}. \end{equation*} For a fixed $t \in \R$, we recall that $\mathbb{P}[ \mathcal{R}_{p,q}^{(n)} \geq t ] = \sigma^{(n)}_p \otimes \sigma^{(n)}_q (D_{n,t})$ as well as $ \mathbb{P}[ \tilde{\mathcal{R}}_{p,q}^{(n)} \geq t ] = \mu^{(n)}_p \otimes \mu^{(n)}_q (D_{n,t})$
with the set $D_{n,t} = \Big \{ (x,y) \in \mathbb{S}_p^{n-1} \times \mathbb{S}_q^{n-1} \ : \ \frac{\sum_{i=1}^{n} |x_i y_i|}{||x||_p ||y||_q} \geq t \Big \}$. Thus, for all $n \in \N$, we have \begin{align} \label{EqKolDist1} d_{Kol} \left( \sqrt{n} \big( \mathcal{R}_{p,q}^{(n)} - m_{p,q} \big), \sqrt{n} \big( \tilde{\mathcal{R}}_{p,q}^{(n)} - m_{p,q} \big) \right) & = d_{Kol} \left( \mathcal{R}_{p,q}^{(n)} , \tilde{\mathcal{R}}_{p,q}^{(n)} \right)\\ \notag
& = \sup_{t \in \R} \big | \sigma^{(n)}_p \otimes \sigma^{(n)}_q (D_{n,t}) - \mu^{(n)}_p \otimes \mu^{(n)}_q(D_{n,t}) \big | \\ \notag
& \leq || \sigma^{(n)}_p \otimes \sigma^{(n)}_q - \mu^{(n)}_p \otimes \mu^{(n)}_q ||_{TV} \\ \label{EqKolDist2} & \leq \frac{C}{\sqrt{n}}. \end{align}
Equation \eqref{EqKolDist1} follows immediately from the definition of the Kolmogorov distance. The estimate in \eqref{EqKolDist2} follows from Proposition \ref{PropSurfMeasConeMeasClose}, since, for every product set $A = A_1 \times A_2$ with $A_1 \in \mathscr{B}( \mathbb{S}_p^{n-1})$ and $A_2 \in \mathscr{B}( \mathbb{S}_q^{n-1})$, we have \begin{align*}
\left | \sigma_p^{(n)} \otimes \sigma_q^{(n)}(A) - \mu_p^{(n)} \otimes \mu_q^{(n)}(A) \right | & \leq
\left | \sigma_p^{(n)}(A_1) - \mu_p^{(n)}(A_1) \right | +
\left | \sigma_q^{(n)}(A_2) - \mu_q^{(n)}(A_2) \right | \\
& \leq \left | \left | \sigma_p^{(n)} - \mu_p^{(n)} \right | \right |_{TV} + \left | \left | \sigma_q^{(n)} - \mu_q^{(n)} \right | \right |_{TV} \\ & \leq \frac{C}{\sqrt{n}}. \end{align*}
By maximizing over all such $A$, we get $ || \sigma^{(n)}_p \otimes \sigma^{(n)}_q - \mu^{(n)}_p \otimes \mu^{(n)}_q ||_{TV} \leq \frac{C}{\sqrt{n}}$. Now, for $Z \sim \mathcal{N}(0,\sigma_{p,q}^2)$, we get the following estimate \begin{align*} d_{Kol} \left( \sqrt{n} \big( \mathcal{R}_{p,q}^{(n)} - m_{p,q} \big), Z \right) & \leq d_{Kol} \left( \sqrt{n} \big( \tilde{\mathcal{R}}_{p,q}^{(n)} - m_{p,q} \big), \sqrt{n} \big( \mathcal{R}_{p,q}^{(n)} - m_{p,q} \big) \right) + d_{Kol} \left( \sqrt{n} \big( \tilde{\mathcal{R}}_{p,q}^{(n)} - m_{p,q} \big), Z \right) \\ & \leq \frac{C}{\sqrt{n}} + C_{p,q} \frac{\log n}{\sqrt{n}} \\ & \leq \left(C+ C_{p,q}\right) \frac{\log n}{\sqrt{n}} , \end{align*} where we used the bound established in \eqref{EqBEBoundConeMeas} and the first part of this proof. \end{proof}
\subsection{Proof of Theorem \ref{ThmLDP}}
In the proofs of our main results, we frequently use the probabilistic representation of random variables distributed according to the cone measure $\mu_p^{(n)}$ (see Equation \eqref{EqProbRepSchechtmannZinn}). For the surface measure, things are more delicate, as no such representation is available. In order to establish large deviation and moderate deviation results for the surface measure $\sigma_p^{(n)}$, we will need the following asymptotic equivalence, on the exponential scale, of the cone measure $\mu_p^{(n)}$ and the surface measure $\sigma_p^{(n)}$. \begin{lem} \label{LemExpEquivSurfConeMeas} Let $ A \in \mathcal{B}(\R)$, $p,q \in (1,\infty)$ with $\frac{1}{p} + \frac{1}{q} = 1$ and define \begin{equation*}
D_n:= \Big \{ (x,y) \in \mathbb{S}_p^{n-1} \times \mathbb{S}_q^{n-1} \ : \ \frac{\sum_{i=1}^n |x_i y_i |}{||x||_p ||y||_q} \in A \Big \}. \end{equation*} Then, for every positive sequence $(s_n)_{n \in \N}$ with $\lim_{n \rightarrow \infty} \frac{\log n}{s_n}=0$, it holds that \begin{equation*}
\lim_{n \rightarrow \infty} \Big | \frac{1}{s_n} \log \sigma^{(n)}_p \otimes \sigma^{(n)}_q (D_n) - \frac{1}{s_n} \log \mu^{(n)}_p \otimes \mu^{(n)}_q (D_n) \Big| = 0. \end{equation*} \end{lem} \begin{proof}
Recall the Lebesgue-density $\frac{d \sigma^{(n)}_p}{d \mu^{(n)}_p}(x)=C_{n,p} \Big( \sum_{i=1}^{n} |x_i|^{2p-2} \Big)^{1/2} =: h_{n,p}(x)$, where $C_{n,p}\in(0,\infty)$ denotes the normalizing constant. By Lemma 2.2 in \cite{ArithGeoIneq}, there exists a constant $C \in (0, \infty)$ such that for all $x \in \mathbb{S}_p^{n-1}$, we have $ n^{-C} \leq h_{n,p}(x) \leq n^C$. Further, we can write $D_n = D_n^1 \times D_n^2$, where $D_n^1 \in \mathscr{B}(\mathbb{S}_p^{n-1})$ and $D_n^2 \in \mathscr{B}(\mathbb{S}_q^{n-1})$ (note that $D_n \in \mathscr{B}(\mathbb{S}_p^{n-1} \times \mathbb{S}_q^{n-1}) = \mathscr{B}(\mathbb{S}_p^{n-1}) \times \mathscr{B}(\mathbb{S}_q^{n-1}) $, where the latter holds, e.g., by Theorem D.4 in \cite{DZ2011}). We hence get the following estimate \begin{align} \begin{split}
\Big| \frac{1}{s_n} \log \sigma^{(n)}_p \otimes \sigma^{(n)}_q( D_n) - \frac{1}{s_n} \log \mu^{(n)}_p \otimes \mu^{(n)}_q( D_n) \Big| \leq &
\Big| \frac{1}{s_n} \log \sigma^{(n)}_p ( D^1_n) - \frac{1}{s_n} \log \mu^{(n)}_p ( D^1_n) \Big| \\ \label{EqEstimateConeSurf}
& + \Big| \frac{1}{s_n} \log \sigma^{(n)}_q ( D^2_n) - \frac{1}{s_n} \log \mu^{(n)}_q ( D^2_n) \Big|. \end{split} \end{align} We consider the first expression on the right-hand side in \eqref{EqEstimateConeSurf}, where we have \begin{align*}
\Big| \frac{1}{s_n} \log \sigma^{(n)}_p ( D^1_n) - \frac{1}{s_n} \log \mu^{(n)}_p ( D^1_n) \Big| =\left | \frac{1}{s_n} \log \left( \frac{\sigma_p^{(n)}(D_n^1)}{\mu_p^{(n)}(D_n^1)} \right) \right | \leq C \frac{\log \left ( n \right )}{s_n} \stackrel{n \rightarrow \infty}{\longrightarrow} 0. \end{align*} Here, we used that $ n^{-C} \mu_p^{(n)}( D_n^1) \leq \sigma_p^{(n)} ( D_n^1) \leq n^C \mu_p^{(n)}( D_n^1) $. The second term in \eqref{EqEstimateConeSurf} can be treated analogously. \end{proof}
We will now present the proof of the large deviation principle.
\begin{proof}[Proof of Theorem \ref{ThmLDP}]
First, assume that either, $(X^{(n)},Y^{(n)}) \sim \U( \mathbb{B}^n_p) \otimes \U( \mathbb{B}^n_q)$ or $ (X^{(n)},Y^{(n)}) \sim \mu^{(n)}_p \otimes \mu^{(n)}_q$. We use the probabilistic representation of $( \mathcal{R}_{p,q}^{(n)})_{n \in \N}$ and Cramér's theorem (see Proposition \ref{ThmCramer}) together with the contraction principle (see Lemma \ref{LemContractionPrinciple}). By Lemma \ref{LemReprRpq}, we have that
\begin{equation*}
\mathcal{R}_{p,q}^{(n)} \stackrel{d}{=} \frac{\sum_{i=1}^{n} |\zeta_i \eta_i |}{\Big( \sum_{i=1}^{n} |\zeta_i|^p \Big)^{1/p} \Big( \sum_{i=1}^{n} |\eta_i|^q \Big)^{1/q}}, \quad n \in \N,
\end{equation*}
where $(\zeta_i)_{i \in \N}$ is an iid sequence of $p$-generalized Gaussian random variables, $(\eta_i)_{i \in \N}$ is an iid sequence of $q$-generalized Gaussian random variables, and the two sequences are independent of each other. We consider the sequence $( \xi_n)_{n \in \N}$ with
\begin{equation}
\label{EqTrippleSeq}
\xi_n := \frac{1}{n} \sum_{i=1}^n \big( | \zeta_i \eta_i | , | \zeta_i |^p , | \eta_i |^q \big) \in \R^3, \quad n \in \N .
\end{equation}
The summands on the right-hand side in \eqref{EqTrippleSeq} are iid and thus we want to apply Cramér's theorem (see Proposition \ref{ThmCramer}) in order to establish an LDP for $( \xi_n)_{n \in \N}$. We do this by showing that the cumulant generating function $\Lambda: \R^3 \rightarrow (-\infty, \infty] $ with
\begin{equation}
\label{EqCumGenFLdp}
\Lambda(r,s,t ) := \log \E \big[ e^{ r |\zeta_1 \eta_1 | +s | \zeta_1 |^p + t |\eta_1 |^q} \big],
\end{equation}
is finite in some ball around $0 \in \R^3$.
Using the density in \eqref{EqDensitypGenGaussian}, we can write \eqref{EqCumGenFLdp} as
\begin{equation*}
\Lambda(r,s,t)= \log\int_{\R^2} e^{ r |x y | +s | x|^p + t |y |^q - \frac{1}{p} |x|^p - \frac{1}{q} |y|^q} c_{p,q} dx \,dy,
\end{equation*}
where $c_{p,q} = \frac{1}{2 p^{1/p} \Gamma( 1 + \frac{1}{p})} \frac{1}{2 q^{1/q} \Gamma( 1 + \frac{1}{q})}$. Then, $ \Lambda(r,s,t)$ is finite if and only if
\begin{equation*}
c_{p,q} \int_{\R} \int_{\R} e^{ r |x y| + \big( s - \frac{1}{p} \big) |x|^p + \big( t - \frac{1}{q} \big) |y| ^q } dx\, dy < \infty.
\end{equation*}
Let us fix a point $(r,s,t) \in \R^3 $ with $|r| < \epsilon$, $|s| < \epsilon$ and $|t| < \epsilon$, where we choose $ \epsilon:= \frac{1}{2(1+\max(q,p))}$. Then, using that $|xy| \leq \frac{1}{p}|x|^p + \frac{1}{q} |y|^q$ for all $x,y \in \R$, we get the following estimate
\begin{align*}
c_{p,q} \int_{\R} \int_{\R} e^{ r |x y| + \big( s - \frac{1}{p} \big) |x|^p + \big( t - \frac{1}{q} \big) |y| ^q } dx\, dy &
\leq c_{p,q} \int_{\R} \int_{\R} e^{\Big( s- \frac{1}{p} + \frac{r}{p} \Big) |x|^p} e^{\Big( t- \frac{1}{q} + \frac{r}{q} \Big) |y|^q}dx \,dy < \infty.
\end{align*}
The integral on the right-hand side is finite, since $ \Big( s - \frac{1}{p} + \frac{r}{p} \Big) < 0$ and $ \Big( t - \frac{1}{q} + \frac{r}{q} \Big) < 0 $ by our choice of $\epsilon $.
So, we are able to find an open neighborhood $U \subseteq \R^3$ around zero such that $\Lambda(r,s,t) < \infty$ for all $( r,s,t ) \in U$.
Now we can apply Cramér's theorem to the sequence $( \xi_n)_{n \in \N}$ defined in \eqref{EqTrippleSeq}. It follows that $( \xi_n)_{n \in \N}$ satisfies an LDP in $\R^3$ at speed $n$ with GRF $\Lambda^{*}: \R^3 \rightarrow [0, \infty]$, where
\begin{equation*}
\Lambda^{*}( u,v,w) := \sup_{ (r,s,t) \in \R^3 } \big[ ru + sv + tw - \Lambda(r,s,t) \big].
\end{equation*}
Consider the continuous mapping $F : \R \times (0, \infty)^2 \rightarrow \R$ with
\begin{equation*}
F( u, v,w) := \frac{u}{v^{1/p} w^{1/q}}.
\end{equation*}
We see that
\begin{align*}
F( \xi_n ) & = \frac{ \frac{1}{n} \sum_{i=1}^n |\zeta_i \eta_i | }{ \big( \frac{1}{n} \sum_{i=1}^n | \zeta_i |^p \big)^{1/p} \big( \frac{1}{n} \sum_{i=1}^n | \eta_i |^q \big)^{1/q} } \\
& \stackrel{d}{=} \mathcal{R}_{p,q}^{(n)}, \quad n \in \N,
\end{align*}
where we used that $ \frac{1}{p}+ \frac{1}{q} =1$. Hence, the contraction principle (see Lemma \ref{LemContractionPrinciple}) applied to the sequence $ ( F( \xi_n ))_{n \in \N}$ gives us an LDP for $ ( \mathcal{R}_{p,q}^{(n)})_{n \in \N}$ at speed $n$ with GRF $ \I : \R \rightarrow [0, \infty]$, where
\begin{equation*}
\I( x) = \begin{cases}
\inf \Big \{ \Lambda^{*}( u,v,w) \ : \ x = \frac{u}{v^{1/p} w^{1/q}} \Big \}, & \text{if } x > 0 \\
+ \infty & \text{ else}.
\end{cases}
\end{equation*}
Now we assume that $ (X^{(n)}, Y^{(n)}) \sim \sigma^{(n)}_p \otimes \sigma^{(n)}_q , n \in \N$. For a closed set $A \subseteq \R $, by Lemma \ref{LemExpEquivSurfConeMeas}, we get
\begin{equation*}
\limsup_{ n \rightarrow \infty } \frac{1}{n} \log \mathbb{P} \big [ \mathcal{R}_{p,q}^{(n)} \in A \big] = \limsup_{ n \rightarrow \infty } \frac{1}{n} \log \sigma^{(n)}_p \otimes \sigma^{(n)}_q (D_n) =
\limsup_{ n \rightarrow \infty } \frac{1}{n} \log \mu^{(n)}_p \otimes \mu^{(n)}_q (D_n) \leq - \inf_{x \in A} \mathbb{I}(x).
\end{equation*}
Here, $D_n= \Big \{ (x,y) \in \mathbb{S}_p^{n-1} \times \mathbb{S}_q^{n-1} \ : \ \frac{\sum_{i=1}^n |x_i y_i |}{||x||_p ||y||_q} \in A \Big \}$ is the set from Lemma \ref{LemExpEquivSurfConeMeas}, which is applicable with $s_n = n$, and we have used the large deviation upper bound, which holds for $\mu^{(n)}_p \otimes \mu^{(n)}_q$ by the first part of the proof. This proves the large deviation upper bound for $\sigma^{(n)}_p \otimes \sigma^{(n)}_q$. The lower bound can be shown analogously. \end{proof}
\subsection{Proof of Theorem \ref{ThmMDP}} \label{SectionProofMDP}
We now present the proof of the MDP.
\begin{proof}[Proof of Theorem \ref{ThmMDP}] First, we assume that either, $(X^{(n)}, Y^{(n)}) \sim \U(\mathbb{B}_p^{n}) \otimes \U(\mathbb{B}_q^{n})$ or $(X^{(n)}, Y^{(n)}) \sim \mu^{(n)}_p \otimes \mu^{(n)}_q$. We work with the sequence \begin{equation*} \frac{\sqrt{n}}{b_n} \left( \mathcal{R}_{p,q}^{(n)} - m_{p,q} \right), \quad n \in \N, \end{equation*} where $\mathcal{R}_{p,q}^{(n)}$ is the quantity from Equation \eqref{EqRepRationRpq} and $m_{p,q} = p^{1/p} \frac{\Gamma( \frac{2}{p})}{\Gamma( \frac{1}{p})} q^{1/q} \frac{\Gamma( \frac{2}{q})}{\Gamma( \frac{1}{q})} $. Using the Taylor expansion of $\mathcal{R}_{p,q}^{(n)}$ given in Lemma \ref{LemTaylorCLT}, we obtain \begin{equation} \label{EqTaylorExpRMDP} \frac{\sqrt{n}}{b_n} \left( \mathcal{R}_{p,q}^{(n)} - m_{p,q} \right) \stackrel{d}{=} \frac{1}{b_n} S_n^{(1)} - \frac{m_{p,q}}{b_n p} S_n^{(2) } - \frac{m_{p,q}}{b_n q} S_n^{(3) } + \frac{\sqrt{n}}{b_n} R \left( \frac{S_n^{(1)}}{\sqrt{n}} , \frac{S_n^{(2)}}{\sqrt{n}} , \frac{S_n^{(3)}}{\sqrt{n}}\right), \end{equation}
where $ S_n^{(1)} := \frac{1}{\sqrt{n}} \sum_{i=1}^n ( | \zeta_i \eta_i | - m_{p , q}) $, $S_n^{(2)} := \frac{1}{\sqrt{n}} \sum_{i=1}^n ( | \zeta_i |^p - 1) $ and $S_n^{(3)} := \frac{1}{\sqrt{n}} \sum_{i=1}^n ( | \eta_i |^q - 1)$ with iid $(\zeta_i , \eta_i ) \sim \gamma_p \otimes \gamma_q$, $i \in \N$. First, we show that the quantity in \eqref{EqTaylorExpRMDP} and \begin{equation} \label{EqTaylorExpMDP} Y_n := \frac{1}{b_n} S_n^{(1)} - \frac{m_{p,q}}{b_n p} S_n^{(2) } - \frac{m_{p,q}}{b_n q} S_n^{(3) }, \quad n \in \N \end{equation} are exponentially equivalent on the scale $ (b_n^2)_{n \in \N}$. To that end, for $\epsilon >0$, we consider \begin{align*}
\frac{1}{b_n^2} \log \mathbb{P} \left[ \left | \frac{\sqrt{n}}{b_n} \left( \mathcal{R}_{p,q}^{(n)} - m_{p,q} \right) - Y_n \right | > \epsilon \right] & =
\frac{1}{b_n^2} \log \mathbb{P} \left[ \left | \frac{\sqrt{n}}{b_n} R \left( \frac{S_n^{(1)}}{\sqrt{n}} , \frac{S_n^{(2)}}{\sqrt{n}} , \frac{S_n^{(3)}}{\sqrt{n}}\right) \right | > \epsilon \right]. \end{align*} Here, we take a closer look at the probability in the logarithm on the right-hand side. For sufficiently large $n \in \N$, we get \begin{align*}
\mathbb{P} \left[ \left | \frac{\sqrt{n}}{b_n} R \left( \frac{S_n^{(1)}}{\sqrt{n}} , \frac{S_n^{(2)}}{\sqrt{n}} , \frac{S_n^{(3)}}{\sqrt{n}}\right) \right | > \epsilon \right] & \leq
\mathbb{P} \left[ \left | \left | \left( \frac{S_n^{(1)}}{\sqrt{n}} , \frac{S_n^{(2)}}{\sqrt{n}} , \frac{S_n^{(3)}}{\sqrt{n}} \right) \right | \right |_2^2 \geq \delta \right]
+ \mathbb{P} \left[ \left | \left | \left( \frac{S_n^{(1)}}{\sqrt{n}} , \frac{S_n^{(2)}}{\sqrt{n}} , \frac{S_n^{(3)}}{\sqrt{n}} \right) \right | \right |_2^2 > \frac{\epsilon}{M} \frac{b_n}{\sqrt{n}} \right] \\
& \leq 2 \mathbb{P} \left[ \left | \left | \left( \frac{S_n^{(1)}}{\sqrt{n}} , \frac{S_n^{(2)}}{\sqrt{n}} , \frac{S_n^{(3)}}{\sqrt{n}} \right) \right | \right |_2^2 > \frac{\epsilon}{M} \frac{b_n}{\sqrt{n}} \right] \\
& = 2 \mathbb{P} \left[ \left | \left | \left( \frac{S_n^{(1)}}{b_n} , \frac{S_n^{(2)}}{b_n} , \frac{S_n^{(3)}}{b_n} \right) \right | \right |_2^2 > \frac{\epsilon}{M} \frac{\sqrt{n}}{b_n} \right], \end{align*} where we used the properties of the function $R$ from Lemma \ref{LemTaylorCLT}. For a value $T \in (0, \infty)$, we have $ \frac{\epsilon}{M} \frac{\sqrt{n}}{b_n} > T$ for all sufficiently large $n \in \N$, since $ \frac{b_n}{\sqrt{n}}$ tends to zero as $n \rightarrow \infty$. The sequence $ \left( \frac{S_n^{(1)}}{b_n} , \frac{S_n^{(2)}}{b_n} , \frac{S_n^{(3)}}{b_n} \right)_{n \in \N}$ satisfies an MDP at speed $( b_n^2)_{n \in \N}$ by Proposition \ref{ThmCramerMDP} with GRF $ \mathbb{J} : \R^3 \rightarrow [0, \infty)$, where \begin{equation} \label{EqGRFMDP} \mathbb{J}(x) := \frac{1}{2}\langle x, \mathbf{C_{p,q}}^{-1} x \rangle , \quad x \in \R^3. \end{equation}
Here, $ \mathbf{C_{p,q}} \in \R^{3 \times 3}$ is the positive definite covariance matrix of the vector $ \left( | \zeta_1 \eta_1 | , | \zeta_1 |^p, | \eta_1 |^q \right) \in \R^{3}$ (see the proof of Theorem \ref{ThmCLT}). Thus, by combining the upper bound of the MDP and the contraction principle (see Lemma \ref{LemContractionPrinciple}), we get \begin{align*}
\limsup_{ n \rightarrow \infty } \frac{1}{b_n^2} \log \mathbb{P} \left[ \left | \frac{\sqrt{n}}{b_n} R \left( \frac{S_n^{(1)}}{\sqrt{n}} , \frac{S_n^{(2)}}{\sqrt{n}} , \frac{S_n^{(3)}}{\sqrt{n}}\right) \right | > \epsilon \right] & \leq \limsup_{ n \rightarrow \infty } \frac{1}{b_n^2} \log
\mathbb{P} \left[ \left | \left | \left( \frac{S_n^{(1)}}{b_n} , \frac{S_n^{(2)}}{b_n} , \frac{S_n^{(3)}}{b_n} \right) \right | \right |_2^2 > T \right] \\
& \leq - \inf \left \{ \tfrac{1}{2} \langle x , \mathbf{C_{p,q}}^{-1} x \rangle \ : \ || x ||_2^2 \geq T \right \} \\ & \leq - \frac{T}{2} \lambda_{min}( \mathbf{C_{p,q}}^{-1}), \end{align*} where $ \lambda_{min}( \mathbf{C_{p,q}}^{-1}) \in (0, \infty) $ denotes the smallest eigenvalue of $ \mathbf{C_{p,q}}^{-1}$. Since the previous bound holds for any $T \in (0, \infty)$, we have \begin{equation*}
\limsup_{ n \rightarrow \infty } \frac{1}{b_n^2} \log \mathbb{P} \left[ \left | \frac{\sqrt{n}}{b_n} R \left( \frac{S_n^{(1)}}{\sqrt{n}} , \frac{S_n^{(2)}}{\sqrt{n}} , \frac{S_n^{(3)}}{\sqrt{n}}\right) \right | > \epsilon \right] = - \infty, \end{equation*} showing the exponential equivalence at scale $(b_n^2)_{n \in \N }$ as desired. Given that, we need to prove an MDP at rate $(b_n^2)_{n \in \N }$ for the sequence $(Y_n)_{n \in \N}$ defined in Equation \eqref{EqTaylorExpMDP}. For $n \in \N$, we have $Y_n = \left \langle \left( \frac{S_n^{(1)}}{b_n} , \frac{S_n^{(2)}}{b_n} , \frac{S_n^{(3)}}{b_n}\right) , d_{p,q} \right \rangle $, where $d_{p,q} = \left( 1, - \frac{m_{p,q}}{p}, - \frac{m_{p,q}}{q} \right)$. We recall that $\left( \frac{S_n^{(1)}}{b_n} , \frac{S_n^{(2)}}{b_n} , \frac{S_n^{(3)}}{b_n} \right)_{n \in \N } $ satisfies an MDP in $\R^3 $ at speed $ (b_n^2)_{n \in \N}$ with GRF $ \mathbb{J} : \R^3 \rightarrow [0, \infty)$. Now, we can apply the contraction principle (see Lemma \ref{LemContractionPrinciple}) in order to establish an MDP for the sequence $( Y_n)_{n \in \N}$ in $\R$ at speed $( b_n^2)_{n \in \N}$ with GRF $ \I : \R \rightarrow [0, \infty ]$, where \begin{equation} \label{EqGRFMDPSeqY} \I( t) := \inf \left \{ \frac{1}{2}\langle x , \mathbf{C_{p,q}}^{-1} x \rangle \ : \ \langle d_{p,q} , x \rangle = t \right \}. \end{equation} We are able to give a closed form of $ \I$ by using the Lagrange method for optimization. For this, we fix $ t \in \R$ and consider the Lagrange function $ L: \R^4 \rightarrow \R$ given by \begin{equation*} L( x, \lambda ): =\frac{1}{2} \langle x , \mathbf{C_{p,q}}^{-1} x \rangle + \lambda ( t - \langle d_{p,q} , x \rangle ), \quad x \in \R^3 , \quad \lambda \in \R . \end{equation*} The directional derivatives are \begin{equation*} \left ( \frac{\partial L( x , \lambda)}{ \partial x_1 }, \frac{\partial L( x , \lambda)}{ \partial x_2 } , \frac{\partial L( x , \lambda)}{ \partial x_3 } \right) = \mathbf{C_{p,q}}^{-1} x - \lambda d_{p,q} =0 \in \R^3 \quad \text{and} \quad \frac{\partial }{ \partial \lambda } L( x , \lambda ) = t - \langle d_{p,q} ,x \rangle = 0. \end{equation*} This system of equations can be solved elementary and for the solution $x = x(t)$, we get \begin{equation*} x(t)= \frac{ \mathbf{C_{p,q}} d_{p,q} }{ \langle d_{p,q} , \mathbf{C_{p,q}} d_{p,q} \rangle } t, \end{equation*} where we mention that $d_{p,q} \neq (0,0,0)$ and hence $ \langle d_{p,q} , \mathbf{C_{p,q}} d_{p,q} \rangle \in (0, \infty )$. Finally, our GRF $\I : \R \rightarrow [0, \infty]$ is given as \begin{equation*} \I( t) = \frac{t^2}{2 \langle d_{p,q} , \mathbf{C_{p,q}} d_{p,q} \rangle}, \quad t \in \R, \end{equation*} where the quantity $ \sigma_{p,q}^2 = \langle d_{p,q} , \mathbf{C_{p,q}} d_{p,q} \rangle \in (0, \infty ) $ is as claimed in Theorem \ref{ThmMDP}. \par{}
Now we consider the case when $(X^{(n)}, Y^{(n)}) \sim \sigma^{(n)}_p \otimes \sigma^{(n)}_q$, $n \in \N$. We fix a closed set $A \subseteq \R$ and denote $ D_n = \Big \{ (x,y) \in \mathbb{S}_p^{n-1} \times \mathbb{S}_q^{n-1} \ : \ \frac{ \sum_{i=1}^n |x_i y_i |}{|| x ||_p ||y||_q} \in A \Big \}$. We then get \begin{equation*} \limsup_{n \rightarrow \infty} \frac{1}{b_n^2} \log \mathbb{P} \left[ \mathcal{R}_{p,q}^{(n)} \in A \right] = \limsup_{n \rightarrow \infty} \frac{1}{b_n^2} \log \sigma^{(n)}_p \otimes \sigma^{(n)}_q(D_n) = \limsup_{n \rightarrow \infty} \frac{1}{b_n^2} \log \mu^{(n)}_p \otimes \mu^{(n)}_q(D_n) \leq - \inf_{t \in A} \mathbb{I}(t), \end{equation*} where we have used Lemma \ref{LemExpEquivSurfConeMeas} and the assumption $\lim_{n \rightarrow \infty} \frac{b_n}{\sqrt{\log n}} = \infty$. This establishes the upper bound of the MDP. The lower bound can be shown analogously. \end{proof}
\small
\noindent \textsc{Lorenz Fr\"uhwirth:} Institut für Analysis und Zahlentheorie, Graz University of Technology, Kopernikusgasse 24/II, 8010 Graz, Austria
\noindent \textit{E-mail:} \texttt{[email protected]}
\small
\noindent \textsc{Joscha Prochno:} Faculty of Computer Science and Mathematics, University of Passau, Dr.-Hans-Kapfinger-Stra{\ss}e 30, 94032 Passau, Germany.
\noindent \textit{E-mail:} \texttt{[email protected]}
\end{document}
\begin{document}
\authortitle{Anders Bj\"orn and Daniel Hansevi} {Semiregular and strongly irregular boundary points on unbounded sets} {Semiregular and strongly irregular boundary points for {$p\mspace{1mu}$}-harmonic functions on unbounded sets \\ in metric spaces}
\author {Anders Bj\"orn \\ \it\small Department of Mathematics, Link\"oping University, \\ \it\small SE-581 83 Link\"oping, Sweden\/{\rm ;} \it \small [email protected] \\ \\ Daniel Hansevi \\ \it\small Department of Mathematics, Link\"oping University, \\ \it\small SE-581 83 Link\"oping, Sweden\/{\rm ;} \it \small [email protected] }
\date{Preliminary version, \today} \date{}
\noindent{\small {\bf Abstract}. The trichotomy between regular, semiregular, and strongly irregular boundary points for {$p\mspace{1mu}$}-harmonic functions is obtained for unbounded open sets in complete metric spaces with a doubling measure supporting a {$p\mspace{1mu}$}-Poincar\'e inequality, $1<p<\infty$. We show that these are local properties. We also deduce several characterizations of semiregular points and strongly irregular points. In particular, semiregular points are characterized by means of capacity, {$p\mspace{1mu}$}-harmonic measures, removability, and semibarriers.
\noindent {\small \emph{Key words and phrases}: barrier, boundary regularity, Dirichlet problem, doubling measure, metric space, nonlinear potential theory, Perron solution, {$p\mspace{1mu}$}-harmonic function, Poincar\'e inequality, semibarrier, semiregular boundary point, strongly irregular boundary point. }
\noindent {\small Mathematics Subject Classification (2010): Primary: 31E05; Secondary: 30L99, 35J66, 35J92, 49Q20. } }
\section{Introduction} \label{sec:intro}
Let $\Omega\subset\mathbb{R}^n$ be a nonempty bounded open set and let $f\in C(\partial\Omega)$. The Perron method provides us with a unique function $P f$ that is harmonic in $\Omega$ and takes the boundary values $f$ in a weak sense, i.e., $P f$ is a solution of the Dirichlet problem for the Laplace equation $\Delta u=0$. It was introduced on $\mathbb{R}^2$ in 1923 by Perron~\cite{Perron23} and independently by Remak~\cite{remak}. A point $x_0\in\partial\Omega$ is \emph{regular} if $\lim_{\Omega\ni y\to x_0}P f(y)=f(x_0)$ for every $f\in C(\partial\Omega)$. Wiener~\cite{Wiener} characterized regular boundary points by means of the \emph{Wiener criterion} in 1924. In the same year Lebesgue~\cite{Lebesgue} gave a different characterization using barriers.
This definition of boundary regularity can be paraphrased in the following way: The point $x_0\in\partial\Omega$ is regular if the following two conditions hold:
\begin{enumerate}
\renewcommand{\theenumi}{\textup{(\roman{enumi})}}
\item For all $f\in C(\partial\Omega)$ the limit $\lim_{\Omega\ni y\to x_0}P f(y)$ exists.
\item For all $f\in C(\partial\Omega)$ there is a sequence $\Omega\ni y_j\to x_0$ such that $\lim_{j\to\infty}P f(y_j)=f(x_0)$.
\end{enumerate}
Perhaps surprisingly, it is the case that for irregular boundary points \emph{exactly one} of these two properties fails; one might have guessed that both can fail at the same time but this can in fact never happen. A boundary point $x_0\in\partial\Omega$ is \emph{semiregular} if the first condition holds but not the second; and \emph{strongly irregular} if the second condition holds but not the first.
For the Laplace equation it is well known that all boundary points are either regular, semiregular, or strongly irregular, and this trichotomy (in an abstract linear setting) was developed in detail in Luke\v{s}--Mal\'y~\cite{lukesmaly}. Key examples of semiregular and strongly irregular points are Zaremba's punctured ball and the Lebesgue spine, respectively, see Examples~13.3 and~13.4 in \cite{BBbook}.
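For orientation we briefly sketch the classical linear picture behind these two examples (for $p=2$; see the references above for details). If $\Omega=B(0,1)\setminus\{0\}\subset\mathbb{R}^n$, $n\geq2$, then the origin has zero capacity, so the Perron solution $P f$ in $\Omega$ coincides with the Perron solution in $B(0,1)$ of $f|_{\partial B(0,1)}$; hence $\lim_{\Omega\ni y\to0}P f(y)$ exists for every $f\in C(\partial\Omega)$ but in general differs from $f(0)$, making $0$ semiregular. At the tip of the Lebesgue spine the situation is reversed: the boundary value is attained along a suitable sequence, but the limit fails to exist for some $f$, so the tip is strongly irregular.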
A nonlinear analogue is to consider the Dirichlet problem for {$p\mspace{1mu}$}-harmonic functions, which are solutions of the {$p\mspace{1mu}$}-Laplace equation
$\Delta_p u:=\Div(|\nabla u|^{p-2}\,\nabla u)=0$, $1<p<\infty$. This leads to a nonlinear potential theory that has been studied since the 1960s. Initially, it was developed for $\mathbb{R}^n$, but it has also been extended to weighted $\mathbb{R}^n$, Riemannian manifolds, and other settings. In more recent years, it has been generalized to metric spaces, see, e.g., the monograph Bj\"orn--Bj\"orn~\cite{BBbook} and the references therein. The Perron method was extended to such metric spaces by Bj\"orn--Bj\"orn--Shanmugalingam~\cite{BBS2} for bounded open sets and Hansevi~\cite{hansevi2} for unbounded open sets.
Boundary regularity for {$p\mspace{1mu}$}-harmonic functions on metric spaces was first studied by Bj\"orn~\cite{BjIll} and Bj\"orn--MacManus--Shan\-mu\-ga\-lin\-gam~\cite{BMS}, and a rather extensive study was undertaken by Bj\"orn--Bj\"orn~\cite{BB} on bounded open sets. Recently this theory was generalized to unbounded open sets by Bj\"orn--Hansevi~\cite{BHan1}; see also Bj\"orn--Bj\"orn--Li~\cite{BBLi}. For further references and a historical discussion on regularity for {$p\mspace{1mu}$}-harmonic functions we refer the interested reader to the introduction in \cite{BHan1}.
For {$p\mspace{1mu}$}-harmonic functions on $\mathbb{R}^n$ and metric spaces the trichotomy was obtained by Bj\"orn~\cite{ABclass} for bounded open sets. It was also obtained for unbounded sets in certain Ahlfors regular metric spaces by Bj\"orn--Bj\"orn--Li~\cite{BBLi}. Adamowicz--Bj\"orn--Bj\"orn~\cite{ABB} obtained the trichotomy for $p(\cdot)$-harmonic functions on bounded open sets in $\mathbb{R}^n$.
In this paper we obtain the trichotomy in the following form, where regularity is defined using upper Perron solutions (Definition~\ref{def:reg}). (We use upper Perron solutions as it is not known whether continuous functions are resolutive with respect to unbounded {$p\mspace{1mu}$}-hyperbolic sets.)
\begin{theorem}\label{thm:trichotomy} \textup{(Trichotomy)} Assume that $X$ is a complete metric space equipped with a doubling measure supporting a {$p\mspace{1mu}$}-Poincar\'e inequality\textup{,} $1<p<\infty$. Let\/ $\Omega\subset X$ be a nonempty\/ \textup{(}possibly unbounded\/\textup{)} open set with the capacity ${C_p}(X\setminus\Omega)>0$.
Let $x_0\in\bdy\Omega\setm\{\infty\}$. Then $x_0$ is either regular\textup{,} semiregular\textup{,} or strongly irregular for functions that are {$p\mspace{1mu}$}-harmonic in $\Omega$. Moreover, \begin{itemize} \item $x_0$ is strongly irregular if and only if $x_0\in\itoverline{R}\setminus R$, where \[
R
:= \{x\in\bdy\Omega\setm\{\infty\}:x\text{ is regular}\}. \] \item The relatively open set \begin{equation} \label{eq-S}
S
:= \{x\in\bdy\Omega\setm\{\infty\}:
\text{there is $r>0$ such that ${C_p}(B(x,r)\cap\partial\Omega)=0$}\} \end{equation} consists exactly of all semiregular boundary points of $\bdy\Omega\setm\{\infty\}$. \end{itemize} \end{theorem}
The importance of the distinction between semiregular and strongly irregular boundary points is perhaps best illustrated by the equivalent characterizations given in Theorems~\ref{thm:rem-irr-char} and~\ref{thm:ess-irr-char}. Semiregular points are in some ways not seen by Perron solutions.
Our contribution here is to extend the results in \cite{ABclass} to unbounded open sets. Doing so involves extra complications, most notably the fact that it is not known whether continuous functions are resolutive with respect to unbounded {$p\mspace{1mu}$}-hyperbolic sets. We will also rely on the recent results by Bj\"orn--Hansevi~\cite{BHan1} on regularity for {$p\mspace{1mu}$}-harmonic functions on unbounded sets in metric spaces. Most of our results are new also on unweighted $\mathbb{R}^n$.
\section{Notation and preliminaries} \label{sec:prel}
We assume that $(X,d,\mu)$ is a metric measure space (which we simply refer to as $X$) equipped with a metric $d$ and a positive complete Borel measure $\mu$ such that $0<\mu(B)<\infty$ for every ball $B\subset X$. It follows that $X$ is second countable. For balls $B(x_0,r):=\{x\in X:d(x,x_0)<r\}$ and $\lambda>0$, we let $\lambda B=\lambda B(x_0,r):=B(x_0,\lambda r)$. The $\sigma$-algebra on which $\mu$ is defined is the completion of the Borel $\sigma$-algebra. We also assume that $1<p<\infty$. Later we will impose further requirements on the space and on the measure. We will keep the discussion short, see the monographs Bj\"orn--Bj\"orn~\cite{BBbook} and Heinonen--Koskela--Shanmugalingam--Tyson~\cite{HKSTbook} for proofs, further discussion, and references on the topics in this section.
The measure $\mu$ is \emph{doubling} if there exists a constant $C\geq 1$ such that \[
0
< \mu(2B)
\leq C\mu(B)
< \infty \] for every ball $B\subset X$. A metric space is \emph{proper} if all bounded closed subsets are compact, and this is in particular true if the metric space is complete and the measure is doubling.
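For example, Lebesgue measure on (unweighted) $\mathbb{R}^n$ is doubling with constant $C=2^n$, since $\mu(2B)=2^n\mu(B)$ for every ball $B\subset\mathbb{R}^n$.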
We say that a property holds for \emph{{$p\mspace{1mu}$}-almost every curve} if it fails only for a curve family $\Gamma$ with zero {$p\mspace{1mu}$}-modulus, i.e., there exists a nonnegative $\rho\inL^p(X)$ such that $\int_\gamma\rho\,ds=\infty$ for every curve $\gamma\in\Gamma$. For us, a curve in $X$ is a rectifiable nonconstant continuous mapping from a compact interval into $X$, and it can thus be parametrized by its arc length $ds$.
Following Koskela--MacManus~\cite{KoMac98} we make the following definition, see also Heinonen--Koskela~\cite{HeKo98}.
\begin{definition}\label{def:upper-gradients} A measurable function $g\colon X\to[0,\infty]$ is a \emph{{$p\mspace{1mu}$}-weak upper gradient} of the function $u\colon X\to{\overline{\R\kern-0.08em}\kern 0.08em}:=[-\infty,\infty]$ if \[
|u(\gamma(0)) - u(\gamma(l_{\gamma}))|
\leq \int_{\gamma}g\,ds \] for {$p\mspace{1mu}$}-almost every curve $\gamma\colon[0,l_{\gamma}]\to X$, where we use the convention that the left-hand side is $\infty$ whenever at least one of the terms on the left-hand side is infinite. \end{definition}
One way of controlling functions by their {$p\mspace{1mu}$}-weak upper gradients is to require a Poincar\'e inequality to hold.
\begin{definition}\label{def:Poincare-inequality} We say that $X$ supports a {$p\mspace{1mu}$}-\emph{Poincar\'e inequality} if there exist constants, $C>0$ and $\lambda\geq 1$ (the dilation constant), such that for all balls $B\subset X$, all integrable functions $u$ on $X$, and all {$p\mspace{1mu}$}-weak upper gradients $g$ of $u$, \begin{equation}\label{def:Poincare-inequality-ineq}
\vint_B|u-u_B|\,d\mu
\leq C\diam(B)\biggl(\vint_{\lambda B}g^p\,d\mu\biggr)^{1/p}, \end{equation} where $u_B:=\vint_B u\,d\mu:=\frac{1}{\mu(B)}\int_B u\,d\mu$. \end{definition}
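For instance, unweighted $\mathbb{R}^n$ supports a $1$-Poincar\'e inequality, and hence a {$p\mspace{1mu}$}-Poincar\'e inequality for every $p\geq1$: for $u\in C^1(\mathbb{R}^n)$ one may take $g=|\nabla u|$ and $\lambda=1$ in \eqref{def:Poincare-inequality-ineq}, by the classical Poincar\'e inequality on balls.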
Shanmugalingam~\cite{Shanmugalingam00} used {$p\mspace{1mu}$}-weak upper gradients to define so-called Newtonian spaces.
\begin{definition}\label{def:Newtonian-space} The \emph{Newtonian space} on $X$, denoted $N^{1,p}(X)$, is the space of all extended real-valued functions $u\inL^p(X)$ such that \[
\|u\|_{N^{1,p}(X)}
:= \biggl(\int_X|u|^p\,d\mu + \inf_g\int_X g^p\,d\mu\biggr)^{1/p}<\infty, \] where the infimum is taken over all {$p\mspace{1mu}$}-weak upper gradients $g$ of $u$. \end{definition}
The quotient space $N^{1,p}(X)/\sim$,
where $u\sim v$ if and only if $\|u-v\|_{N^{1,p}(X)}=0$, is a Banach space, see Shanmugalingam~\cite{Shanmugalingam00}.
\begin{definition}\label{def:Dirichlet-space} The \emph{Dirichlet space} on $X$, denoted $D^p(X)$, is the space of all measurable extended real-valued functions on $X$ that have a {$p\mspace{1mu}$}-weak upper gradient in $L^p(X)$. \end{definition}
In this paper we assume that functions in $N^{1,p}(X)$ and $D^p(X)$ are defined everywhere (with values in ${\overline{\R\kern-0.08em}\kern 0.08em}$), not just up to an equivalence class. This is important, in particular for the definition of {$p\mspace{1mu}$}-weak upper gradients to make sense.
A measurable set $A\subset X$ can itself be considered to be a metric space (with the restriction of $d$ and $\mu$ to $A$) with the Newtonian space $N^{1,p}(A)$ and the Dirichlet space $D^p(A)$ given by Definitions~\ref{def:Newtonian-space}~and~\ref{def:Dirichlet-space}, respectively. If $X$ is proper and $\Omega\subset X$ is open, then $u\inN^{1,p}\loc(\Omega)$ if and only if $u\inN^{1,p}(V)$ for every open $V$ such that $\overline{V}$ is a compact subset of $\Omega$, and similarly for $D^{p}\loc(\Omega)$. If $u\inD^{p}\loc(X)$, then there exists a \emph{minimal {$p\mspace{1mu}$}-weak upper gradient} $g_u\inL^{p}\loc(X)$ of $u$ such that $g_u\leq g$ a.e.\ for all {$p\mspace{1mu}$}-weak upper gradients $g\inL^{p}\loc(X)$ of $u$.
\begin{definition}\label{def:capacity} The (\emph{Sobolev}) \emph{capacity} of a set $E\subset X$ is the number \[
{C_p}(E)
:= \inf_u\|u\|_{N^{1,p}(X)}^p, \] where the infimum is taken over all $u\inN^{1,p}(X)$ such that $u\geq 1$ on $E$.
A property that holds for all points except for those in a set of capacity zero is said to hold \emph{quasieverywhere} (\emph{q.e.}). \end{definition}
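As a standard example, in unweighted $\mathbb{R}^n$ a single point has zero capacity, ${C_p}(\{x\})=0$, precisely when $p\leq n$; for $p>n$ Newtonian functions have continuous representatives and hence ${C_p}(\{x\})>0$.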
The capacity is countably subadditive, and it is the correct gauge for distinguishing between two Newtonian functions: If $u\inN^{1,p}(X)$, then $u\sim v$ if and only if $u=v$ q.e. Moreover, if $u,v\inN^{1,p}\loc(X)$ and $u=v$ a.e., then $u=v$ q.e.
Continuous functions will be assumed to be real-valued unless otherwise stated, whereas semicontinuous functions are allowed to take values in ${\overline{\R\kern-0.08em}\kern 0.08em}$. We use the common notation $u_\limplus=\max\{u,0\}$, let $\chi_E$ denote the characteristic function of the set $E$, and consider all neighbourhoods to be open.
\section{The obstacle problem and \texorpdfstring{\boldmath$p\mspace{1mu}$}{p}-harmonic functions} \label{sec:p-harmonic}
\emph{We assume from now on that\/ $1<p<\infty$\textup{,} that $X$ is a complete metric measure space supporting a {$p\mspace{1mu}$}-Poincar\'e inequality\textup{,} that $\mu$ is doubling\textup{,} and that\/ $\Omega\subset X$ is a nonempty \textup{(}possibly unbounded\textup{)} open subset with ${C_p}(X\setminus\Omega)>0$.}
\begin{definition}\label{def:min} A function $u\inN^{1,p}\loc(\Omega)$ is a \emph{minimizer} in $\Omega$ if \[
\int_{\varphi\neq 0}g_u^p\,d\mu
\leq \int_{\varphi\neq 0}g_{u+\varphi}^p\,d\mu
\quad\text{for all }\varphi\inN^{1,p}_0(\Omega), \] where
$N^{1,p}_0(\Omega)=\{u|_\Omega:u\inN^{1,p}(X)\text{ and }u=0\text{ in }X\setminus\Omega\}$. Moreover, a function is \emph{{$p\mspace{1mu}$}-harmonic} if it is a continuous minimizer. \end{definition}
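We remark, without proof, that in unweighted $\mathbb{R}^n$ this is consistent with the introduction: the minimal {$p\mspace{1mu}$}-weak upper gradient of a Sobolev function $u$ is $g_u=|\nabla u|$ a.e., and minimizers are precisely the weak solutions of the {$p\mspace{1mu}$}-Laplace equation $\Delta_p u=0$.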
Kinnunen--Shanmugalingam~\cite[Proposition~3.3 and Theorem~5.2]{KiSh01} used De Giorgi's method to show that every minimizer $u$ has a H\"older continuous representative $\tilde{u}$ such that $\tilde{u}=u$ q.e. Bj\"orn--Marola~\cite[p.\ 362]{BMarola} obtained the same conclusions using Moser iterations. See alternatively Theorems~8.13 and 8.14 in \cite{BBbook}. Note that $N^{1,p}\loc(\Omega)=D^{p}\loc(\Omega)$, by Proposition~4.14 in \cite{BBbook}.
The following obstacle problem is an important tool. In this generality, it was considered by Hansevi~\cite{hansevi1}.
\begin{definition}\label{def:obst} Let $V\subset X$ be a nonempty open subset with ${C_p}(X\setminus V)>0$. For $\psi\colon V\to{\overline{\R\kern-0.08em}\kern 0.08em}$ and $f\inD^p(V)$, let \[
{\mathscr{K}}_{\psi,f}(V)
= \{v\inD^p(V):v-f\inD^p_0(V)\textup{ and }v\geq\psi\text{ q.e.\ in }V\}, \]
where $D^p_0(V)=\{u|_V:u\inD^p(X)\text{ and }u=0\text{ in }X\setminus V\}$. We say that $u\in{\mathscr{K}}_{\psi,f}(V)$ is a \emph{solution of the }${\mathscr{K}}_{\psi,f}(V)$-\emph{obstacle problem \textup{(}with obstacle $\psi$ and boundary values $f$\,\textup{)}} if \[
\int_V g_u^p\,d\mu
\leq \int_V g_v^p\,d\mu
\quad\textup{for all }v\in{\mathscr{K}}_{\psi,f}(V). \] When $V=\Omega$, we usually denote ${\mathscr{K}}_{\psi,f}(\Omega)$ by ${\mathscr{K}}_{\psi,f}$. \end{definition}
The ${\mathscr{K}}_{\psi,f}$-obstacle problem has a unique (up to sets of capacity zero) solution whenever ${\mathscr{K}}_{\psi,f}\neq\varnothing$, see Hansevi~\cite[Theorem~3.4]{hansevi1}. Furthermore, there is a unique lsc-regularized solution of the ${\mathscr{K}}_{\psi,f}$-obstacle problem, by Theorem~4.1 in~\cite{hansevi1}. A function $u$ is \emph{lsc-regularized} if $u=u^*$, where the \emph{lsc-regularization} $u^*$ of $u$ is defined by \[
u^*(x)
= \essliminf_{y\to x}u(y)
:= \lim_{r\to 0}\essinf_{B(x,r)}u. \]
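For instance, if $u=1$ outside a set of measure zero and $u=0$ on it, then $u^*\equiv1$, since the essential infima do not see sets of measure zero.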
If $\psi\colon\Omega\to[-\infty,\infty)$ is continuous as an extended real-valued function, and ${\mathscr{K}}_{\psi,f}\neq\varnothing$, then the lsc-regularized solution of the ${\mathscr{K}}_{\psi,f}$-obstacle problem is continuous, by Theorem~4.4 in \cite{hansevi1}. Hence the following generalization of Definition~3.3 in Bj\"orn--Bj\"orn--Shanmugalingam~\cite{BBS} (and Definition~8.31 in \cite{BBbook}) to Dirichlet functions and to unbounded sets makes sense. It was first used by Hansevi~\cite[Definition~4.6]{hansevi1}.
\begin{definition}\label{def:ext} Let $V\subset X$ be a nonempty open set with ${C_p}(X\setminus V)>0$. The \emph{{$p\mspace{1mu}$}-harmonic extension} $H_V f$ of $f\inD^p(V)$ to $V$ is the continuous solution of the ${\mathscr{K}}_{-\infty,f}(V)$-obstacle problem. When $V=\Omega$, we usually write $H f$ instead of $H_\Omega f$. \end{definition}
\begin{definition}\label{def:superharm} A function $u\colon\Omega\to(-\infty,\infty]$ is \emph{superharmonic} in $\Omega$ if
\begin{enumerate}
\renewcommand{\theenumi}{\textup{(\roman{enumi})}}
\item $u$ is lower semicontinuous;
\item $u$ is not identically $\infty$ in any component of $\Omega$;
\item for every nonempty open set $V$ such that $\overline{V}$ is a compact subset of $\Omega$ and all $v\in\Lip(\overline{V})$, we have $H_{V}v\leq u$ in $V$ whenever $v\leq u$ on $\partial V$.
\end{enumerate}
A function $u\colon\Omega\to[-\infty,\infty)$ is \emph{subharmonic} if $-u$ is superharmonic. \end{definition}
There are several other equivalent definitions of superharmonic functions, see, e.g., Theorem~6.1 in Bj\"orn~\cite{ABsuper} (or Theorem~9.24 and Propositions~9.25 and~9.26 in \cite{BBbook}).
An lsc-regularized solution of the obstacle problem is always superharmonic, by Proposition~3.9 in \cite{hansevi1} together with Proposition~7.4 in Kinnunen--Martio~\cite{KiMa02} (or Proposition~9.4 in \cite{BBbook}). On the other hand, superharmonic functions are always lsc-regularized, by Theorem~7.14 in Kinnunen--Martio~\cite{KiMa02} (or Theorem~9.12 in \cite{BBbook}).
\section{Perron solutions} \label{sec:perron}
\emph{In addition to the assumptions given at the beginning of Section~\ref{sec:p-harmonic}\textup{,} from now on we make the convention that if\/ $\Omega$ is unbounded\textup{,} then the point at infinity\textup{,} $\infty$\textup{,} belongs to the boundary $\partial\Omega$. Topological notions should therefore be understood with respect to the one-point compactification $X^*:=X\cup\{\infty\}$.}
Note that this convention does not affect any of the definitions in Sections~\ref{sec:prel} or~\ref{sec:p-harmonic}, as $\infty$ is \emph{not} added to $X$ (it is added solely to $\partial\Omega$).
Since continuous functions are assumed to be real-valued, every function in $C(\partial\Omega)$ is bounded even if $\Omega$ is unbounded. Note that since $X$ is second countable so is $X^*$, and hence $X^*$ is metrizable by Urysohn's metrization theorem, see, e.g., Munkres~\cite[Theorems~32.3 and~34.1]{Munkres00}.
We will only consider Perron solutions and {$p\mspace{1mu}$}-harmonic measures with respect to $\Omega$ and therefore omit $\Omega$ from the notation below.
\begin{definition}\label{def:Perron} Given a function $f\colon\partial\Omega\to{\overline{\R\kern-0.08em}\kern 0.08em}$, let $\mathscr{U}_f$ be the collection of all functions $u$ that are superharmonic in $\Omega$, bounded from below, and such that \[
\liminf_{\Omega\ni y\to x}u(y)
\geq f(x)
\quad\textup{for all }x\in\partial\Omega. \] The \emph{upper Perron solution} of $f$ is defined by \[
\itoverline{P} f(x)
= \inf_{u\in \mathscr{U}_f }u(x),
\quad x\in\Omega. \] The \emph{lower Perron solution} can be defined similarly using subharmonic functions, or by letting $\itunderline{P} f=-\itoverline{P}(-f)$. If $\itoverline{P} f=\itunderline{P} f$, then we denote the common value by $P f$. Moreover, if $P f$ is real-valued, then $f$ is said to be \emph{resolutive} (with respect to $\Omega$). \end{definition}
An immediate consequence of the definition is that $\itoverline{P} f\leq\itoverline{P} h$ whenever $f\leq h$ on $\partial\Omega$. Moreover, if $\alpha\in\mathbb{R}$ and $\beta\geq 0$, then $\itoverline{P}(\alpha + \beta f)=\alpha+\beta\itoverline{P} f$. Corollary~6.3 in Hansevi~\cite{hansevi2} shows that $\itunderline{P} f\leq\itoverline{P} f$. In each component of $\Omega$, $\itoverline{P} f$ is either {$p\mspace{1mu}$}-harmonic or identically $\pm\infty$, by Theorem~4.1 in Bj\"orn--Bj\"orn--Shanmugalingam~\cite{BBS2} (or Theorem~10.10 in \cite{BBbook}); the proof is local and applies also to unbounded $\Omega$.
\begin{definition}\label{def:p-para} Assume that $\Omega$ is unbounded. Then $\Omega$ is \emph{{$p\mspace{1mu}$}-parabolic} if for every compact $K\subset\Omega$, there exist functions $u_j\inN^{1,p}(\Omega)$ such that $u_j\geq 1$ on $K$ for all $j=1,2,\ldots$\,, and \[
\int_\Omega g_{u_j}^p\,d\mu
\to 0
\quad\text{as }j\to\infty. \] Otherwise, $\Omega$ is \emph{{$p\mspace{1mu}$}-hyperbolic}. \end{definition}
For examples of {$p\mspace{1mu}$}-parabolic sets, see, e.g., Hansevi~\cite{hansevi2}. The main reason for introducing {$p\mspace{1mu}$}-parabolic sets in \cite{hansevi2} was to be able to obtain resolutivity results, and in particular to establish the following resolutivity and invariance result for {$p\mspace{1mu}$}-parabolic unbounded sets. The first such invariance result for {$p\mspace{1mu}$}-harmonic functions was obtained, for bounded sets, by Bj\"orn--Bj\"orn--Shanmugalingam~\cite{BBS2}.
\begin{theorem}\label{thm-hansevi2-main} \textup{(\cite[Theorem~6.1]{BBS2} and~\cite[Theorem~7.8]{hansevi2})} Assume that\/ $\Omega$ is bounded or {$p\mspace{1mu}$}-parabolic. Let $h\colon\partial\Omega\to{\overline{\R\kern-0.08em}\kern 0.08em}$ be $0$ q.e.\ on $\bdy\Omega\setm\{\infty\}$ and $f\in C(\partial\Omega)$. Then $f$ and $f+h$ are resolutive and $P(f+h)=P f$. \end{theorem}
Resolutivity of continuous functions is not known for unbounded {$p\mspace{1mu}$}-hyperbolic sets, but it is rather trivial to show that constant functions are resolutive. We shall show that an invariance result similar to the one in Theorem~\ref{thm-hansevi2-main} can be obtained for constant functions on unbounded {$p\mspace{1mu}$}-hyperbolic sets. This fact will be an important tool when characterizing semiregular boundary points.
We first need to define {$p\mspace{1mu}$}-harmonic measures, which despite the name are (usually) not measures, but nonlinear generalizations of the harmonic measure.
\begin{definition}\label{def:p-harmonic-measure} The \emph{upper and lower {$p\mspace{1mu}$}-harmonic measures} of $E\subset\partial\Omega$ are \[
{\overline{\omega}}(E)
:= \itoverline{P}\chi_E
\quad\text{and}\quad
{\itunderline{\omega}}(E)
:= \itunderline{P}\chi_E, \] respectively. \end{definition}
\begin{proposition}\label{prop-inv-pharm} Let $E\subset\bdy\Omega\setm\{\infty\}$\textup{,} $a\in\mathbb{R}$\textup{,} and $f\colon\partial\Omega\to{\overline{\R\kern-0.08em}\kern 0.08em}$ be such that ${C_p}(E)=0$ and $f(x)=a$ for all $x\in\partial\Omega\setminus E$. Then $P f\equiv a$.
In particular\textup{,} ${\overline{\omega}}(E)={\itunderline{\omega}}(E)\equiv 0$. \end{proposition}
\begin{proof} Without loss of generality we may assume that $a=0$. As the capacity ${C_p}$ is an outer capacity, by Corollary~1.3 in Bj\"orn--Bj\"orn--Shanmugalingam~\cite{BBS5} (or \cite[Theorem~5.31]{BBbook}), we can find open sets $G'_j\supset E$ such that ${C_p}(G'_j)<2^{-j-1}$, $j=1,2,\ldots$\,. From the decreasing sequence $\{\bigcup_{k=j}^\infty G'_k\}_{j=1}^\infty$, we can choose a decreasing subsequence of open sets $G_k$ with ${C_p}(G_k)<2^{-kp}$, $k=1,2,\ldots$\,. By Lemma~5.3 in Bj\"orn--Bj\"orn--Shanmugalingam~\cite{BBS2} (or \cite[Lemma~10.17]{BBbook}), there is a decreasing sequence $\{\psi_j\}_{j=1}^\infty$ of nonnegative functions
such that $\lim_{j\to\infty}\|\psi_j\|_{N^{1,p}(X)}=0$ and $\psi_j\geq k-j$ in $G_k$ whenever $k>j$. In particular, $\psi_j=\infty$ on $E$ for each $j=1,2,\ldots$\,.
Let $u_j$ be the lsc-regularized solution of the ${\mathscr{K}}_{\psi_j,0}(\Omega)$-obstacle problem, $j=1,2,\ldots$\,. As $u_j$ is lsc-regularized and $u_j\geq\psi_j$ q.e., we see that $u_j\geq k-j$ everywhere in $G_k$ whenever $k>j$, and also that $u_j\geq 0$ everywhere in $\Omega$. In particular, $\liminf_{\Omega\ni y\to x}u_j(y)=\infty$ for $x\in E$, which shows that $u_j\in\mathscr{U}_f(\Omega)$ and thus $u_j\geq\itoverline{P} f$.
On the other hand, Theorem~3.2 in Hansevi~\cite{hansevi2} shows that the sequence $u_j$ decreases q.e.\ to $0$, and hence $\itoverline{P} f\leq 0$ q.e.\ in $\Omega$. Since $\itoverline{P} f$ is continuous, we get that $\itoverline{P} f\leq 0$ everywhere in $\Omega$. Applying this to $-f$ shows that $\itunderline{P} f=-\itoverline{P}(-f)\geq 0$ everywhere in $\Omega$, which together with the inequality $\itunderline{P} f\leq\itoverline{P} f$ shows that $\itunderline{P} f=\itoverline{P} f\equiv 0$. In particular, ${\itunderline{\omega}}(E)=\itunderline{P}\chi_E\equiv 0$ and ${\overline{\omega}}(E)=\itoverline{P}\chi_E\equiv 0$. \end{proof}
We will also need the following result.
\begin{proposition}\label{prop:Perron-semicont} If $f\colon\partial\Omega\to[-\infty,\infty)$ is an upper semicontinuous function\textup{,} then \[
\itoverline{P} f
= \inf_{C(\partial\Omega)\ni\varphi\geq f}\itoverline{P}\varphi. \] \end{proposition}
\begin{proof} Let $\mathscr{F}=\{\varphi\in C(\partial\Omega):\varphi\geq f\}$. Then $\mathscr{F}$ is downward directed, i.e., for each pair of functions $u,v \in \mathscr{F}$ there is a function $w\in\mathscr{F}$ such that $w\leq\min\{u,v\}$. Because $f$ is upper semicontinuous, $\partial\Omega$ is compact, and $X^*$ is metrizable, it follows from Proposition~1.12 in \cite{BBbook} that $f=\inf_{\varphi\in\mathscr{F}}\varphi$. Hence by Lemma~10.31 in~\cite{BBbook} (whose proof is valid also for unbounded $\Omega$) $\itoverline{P} f=\inf_{\varphi\in\mathscr{F}}\itoverline{P}\varphi$. \end{proof}
\section{Boundary regularity} \label{sec:bdy-regularity}
It is not known whether continuous functions are resolutive also with respect to unbounded {$p\mspace{1mu}$}-hyperbolic sets. We therefore define regular boundary points in the following way.
\begin{definition}\label{def:reg} We say that a boundary point $x_0\in\partial\Omega$ is \emph{regular} if \[
\lim_{\Omega\ni y\to x_0}\itoverline{P} f(y)
= f(x_0)
\quad\text{for all }f\in C(\partial\Omega). \] This can be paraphrased in the following way: A point $x_0\in\partial\Omega$ is regular if the following two conditions hold:
\begin{enumerate}
\renewcommand{\theenumi}{\textup{(\Roman{enumi})}}
\item\label{semi} For all $f\in C(\partial\Omega)$ the limit \[
\lim_{\Omega\ni y\to x_0}\itoverline{P} f(y)
\quad\text{exists}. \] \item\label{strong} For all $f\in C(\partial\Omega)$ there is a sequence $\{y_j\}_{j=1}^\infty$ in $\Omega$ such that \[
\lim_{j\to\infty}y_j
= x_0
\quad\text{and}\quad
\lim_{j\to\infty}\itoverline{P} f(y_j)
= f(x_0). \] \end{enumerate}
Furthermore, we say that a boundary point $x_0\in\partial\Omega$ is \emph{semiregular} if \ref{semi} holds but not \ref{strong}; and \emph{strongly irregular} if \ref{strong} holds but not \ref{semi}. \end{definition}
We do not require $\Omega$ to be bounded in this definition, but if it is, then it follows from Theorem~6.1 in Bj\"orn--Bj\"orn--Shanmugalingam~\cite{BBS2} (or Theorem~10.22 in \cite{BBbook}) that our definition coincides with the definitions of regularity in Bj\"orn--Bj\"orn--Shanmugalingam~\cite{BBS}, \cite{BBS2}, and Bj\"orn--Bj\"orn~\cite{BB}, \cite{BBbook}, where regularity is defined using $P f$ or $H f$. Thus we can use the boundary regularity results from these papers when considering bounded sets.
Since $\itoverline{P} f=-\itunderline{P}(-f)$, the same concept of regularity is obtained if we replace the upper Perron solution by the lower Perron solution in Definition~\ref{def:reg}.
Boundary regularity for {$p\mspace{1mu}$}-harmonic functions on unbounded sets in metric spaces was recently studied by Bj\"orn--Hansevi~\cite{BHan1}. We will need some of the characterizations obtained therein. For the reader's convenience we state these results here. We will not discuss regularity of the point $\infty$ in this paper. One of the important results we will need from \cite{BHan1} is the Kellogg property.
\begin{theorem}\label{thm:kellogg} \textup{(The Kellogg property)} If $I$ is the set of irregular points in $\bdy\Omega\setm\{\infty\}$\textup{,} then ${C_p}(I)=0$. \end{theorem}
\begin{definition}\label{def:barrier} A function $u$ is a \emph{barrier} (with respect to $\Omega$) at $x_0\in\partial\Omega$ if
\begin{enumerate}
\renewcommand{\theenumi}{\textup{(\roman{enumi})}}
\item\label{barrier-i} $u$ is superharmonic in $\Omega$;
\item\label{barrier-ii} $\lim_{\Omega\ni y\to x_0}u(y)=0$;
\item\label{barrier-iii} $\liminf_{\Omega\ni y\to x}u(y)>0$ for every $x\in\partial\Omega\setminus\{x_0\}$.
\end{enumerate} \end{definition}
Superharmonic functions satisfy the strong minimum principle, i.e., if $u$ is superharmonic and attains its minimum in some component
$G$ of $\Omega$, then $u|_G$ is constant (see Theorem~9.13 in \cite{BBbook}). This implies that a barrier is always nonnegative, and furthermore, that a barrier is positive if $\partial G\setminus\{x_0\}\neq\varnothing$ for every component $G\subset\Omega$.
The following result is a collection of the key facts we will need from Bj\"orn--Hansevi~\cite[Theorems~5.2, 5.3, 6.2, and 9.1]{BHan1}.
\begin{theorem}\label{thm:reg} Let $x_0\in\bdy\Omega\setm\{\infty\}$ and $\delta>0$. Also define $d_{x_0}\colon X^*\to[0,1]$ by \begin{equation}\label{eq-dx0}
d_{x_0}(x)
= \begin{cases}
\min\{d(x,x_0),1\}
& \text{if }x\neq\infty, \\
1
& \text{if }x=\infty.
\end{cases} \end{equation} Then the following are equivalent\/\textup{:} \begin{enumerate} \item\label{reg-reg} The point $x_0$ is regular. \item\label{barrier-bar-Om} There is a barrier at $x_0$. \item\label{barrier-bar-pos-Om} There is a positive continuous barrier at $x_0$. \item\label{barrier-reg-B} The point $x_0$ is regular with respect to $\Omega\cap B(x_0,\delta)$. \item\label{reg-cont-x0} It is true that \[
\lim_{\Omega\ni y\to x_0}\itoverline{P} f(y)
= f(x_0) \] for all $f\colon\partial\Omega\to\mathbb{R}$ that are bounded on $\partial\Omega$ and continuous at $x_0$. \item \label{reg-Pd} It is true that \[
\lim_{\Omega\ni y\to x_0}\itoverline{P} d_{x_0}(y)
= 0. \] \item\label{reg-2-obst-dist} The continuous solution $u$ of the ${\mathscr{K}}_{d_{x_0},d_{x_0}}$-obstacle problem satisfies \[
\lim_{\Omega\ni y\to x_0}u(y)
= 0. \] \item\label{reg-2-obst-cont} If $f\in C(\overline{\Omega})\capD^p(\Omega)$\textup{,} then the continuous solution $u$ of the ${\mathscr{K}}_{f,f}$-obstacle problem satisfies \[
\lim_{\Omega\ni y\to x_0}u(y)
= f(x_0). \] \end{enumerate} \end{theorem}
\section{Semiregular and strongly irregular points} \label{sec:trichotomy}
We are now ready to start our discussion of semiregular and strongly irregular boundary points. We begin by proving Theorem~\ref{thm:trichotomy}.
\begin{proof}[Proof of Theorem~\ref{thm:trichotomy}] We consider two complementary cases.
\emph{Case} 1: \emph{There exists $r>0$ such that ${C_p}(B\cap\partial\Omega)=0$\textup{,} where $B:=B(x_0,r)$.}
Let $G$ be the component of $B$ containing $x_0$. Since $X$ is quasiconvex, by, e.g., Theorem~4.32 in \cite{BBbook}, and thus locally connected, it follows that $G$ is open. Let $F=G\setminus\Omega$. Then \[
{C_p}(G\cap\partial F)
= {C_p}(G\cap\partial\Omega)
\leq {C_p}(B\cap\partial\Omega)
= 0, \] and hence ${C_p}(F)=0$, by Lemma~8.6 in Bj\"orn--Bj\"orn--Shanmugalingam~\cite{BBS2} (or Lemma~4.5 in \cite{BBbook}).
Let $f\in C(\partial\Omega)$. Then the Perron solution $\itoverline{P} f$ is bounded (as $f$ is bounded), and thus $\itoverline{P} f$ has a {$p\mspace{1mu}$}-harmonic extension $U$ to $\Omega\cup G$, by Theorem~6.2 in Bj\"orn~\cite{ABremove} (or Theorem~12.2 in \cite{BBbook}). Since $U$ is continuous, it follows that \[
\lim_{\Omega\ni y\to x_0}\itoverline{P} f(y)
= \lim_{\Omega\ni y\to x_0}U(y)
= U(x_0), \] i.e., condition~\ref{semi} in Definition~\ref{def:reg} holds, and hence $x_0$ is either regular or semiregular.
To show that $x_0$ must be semiregular, we let $f(x)=(1-d_{x_0}(x)/{\min\{r,1\}})_\limplus$ on $\partial\Omega$, where $d_{x_0}$ is defined by \eqref{eq-dx0}. Then $f=0$ q.e.\ on $\partial\Omega$, and Proposition~\ref{prop-inv-pharm} shows that $P f\equiv 0$. Since \[
\lim_{\Omega\ni y\to x_0}\itoverline{P} f(y)
= 0
\neq 1
= f(x_0), \] $x_0$ is not regular, and hence must be semiregular.
\emph{Case} 2: \emph{For all $r>0$\textup{,} ${C_p}(B(x_0,r)\cap\partial\Omega)>0$.}
For every $j=1,2,\ldots$\,, ${C_p}(B(x_0,1/j)\cap\partial\Omega)>0$, and by the Kellogg property (Theorem~\ref{thm:kellogg}) there exists a regular boundary point $x_j\in B(x_0,1/j)\cap\partial\Omega$. (We do not require the $x_j$ to be distinct.)
Let $f\in C(\partial\Omega)$. Because $x_j$ is regular, there is $y_j\in B(x_j,1/j)\cap\Omega$
so that $|\itoverline{P} f(y_j)-f(x_j)|<1/j$. It follows that $y_j\to x_0$ and $\itoverline{P} f(y_j)\to f(x_0)$ as $j\to\infty$, i.e., condition~\ref{strong} in Definition~\ref{def:reg} holds, and hence $x_0$ must be either regular or strongly irregular.
As there are no strongly irregular points in case~1, it follows that $x_0\in\bdy\Omega\setm\{\infty\}$ is strongly irregular if and only if $x_0\in\itoverline{R}\setminus R$, where $R:=\{x\in\bdy\Omega\setm\{\infty\}:x\text{ is regular}\}$. And since there are no semiregular points in case~2, the set $S$ in \eqref{eq-S} consists exactly of all semiregular boundary points of $\bdy\Omega\setm\{\infty\}$. \end{proof}
In fact, in case~2 it is possible to improve upon the result above. The sequence $\{y_j\}_{j=1}^\infty$ can be chosen independently of $f$, see the characterization \ref{not-reg-one-seq} in Theorem~\ref{thm:rem-irr-char}.
We will characterize semiregular points by a number of equivalent conditions in Theorem~\ref{thm:rem-irr-char}. But first we obtain the following characterizations of relatively open sets of semiregular points.
\begin{theorem}\label{thm:irr-char-V} Let $V\subset\bdy\Omega\setm\{\infty\}$ be relatively open. Then the following statements are equivalent\/\textup{:} \begin{enumerate} \item\label{V-semireg} The set $V$ consists entirely of semiregular points. \item\label{V-R} The set $V$ does not contain any regular point. \item\label{V-Cp-V-bdy} The capacity ${C_p}(V)=0$. \item\label{V-upharm} The upper {$p\mspace{1mu}$}-harmonic measure ${\overline{\omega}}(V)\equiv 0$. \item\label{V-lpharm} The lower {$p\mspace{1mu}$}-harmonic measure ${\itunderline{\omega}}(V)\equiv 0$. \item\label{V-alt-def-irr-super} The set\/ $\Omega\cup V$ is open in $X$\textup{,} ${C_p}(X\setminus(\Omega\cup V))>0$\textup{,} $\mu(V)=0$\textup{,} and every function that is bounded and superharmonic in $\Omega$ has a superharmonic extension to $\Omega\cup V$. \item\label{V-alt-def-irr} \setcounter{saveenumi}{\value{enumi}} The set\/ $\Omega\cup V$ is open in $X$\textup{,} ${C_p}(X\setminus(\Omega\cup V))>0$\textup{,} and every function that is bounded and {$p\mspace{1mu}$}-harmonic in $\Omega$ has a {$p\mspace{1mu}$}-harmonic extension to $\Omega\cup V$. \end{enumerate}
If moreover $\Omega$ is bounded or {$p\mspace{1mu}$}-parabolic\textup{,} then also the following statement is equivalent to the statements above. \begin{enumerate} \setcounter{enumi}{\value{saveenumi}} \item\label{V-rem-motiv} For every $f\in C(\partial\Omega)$\textup{,}
the Perron solution $P f$ depends only on $f|_{\partial\Omega\setminus V}$ \textup{(}i.e., if $f,h\in C(\partial\Omega)$ and $f=h$ on $\partial\Omega\setminus V$\textup{,} then $P f\equivP h$\textup{)}. \end{enumerate} \end{theorem}
Note that there are examples of sets with positive capacity and even positive measure which are removable for bounded {$p\mspace{1mu}$}-harmonic functions, see Section~9 in Bj\"orn~\cite{ABremove} (or \cite[Section~12.3]{BBbook}). For superharmonic functions it is not known whether such examples exist. This motivates the formulations of \ref{V-alt-def-irr-super} and~\ref{V-alt-def-irr}.
The following example shows that the condition ${C_p}(X\setminus(\Omega\cup V))>0$ cannot be dropped from \ref{V-alt-def-irr}, nor from \ref{alt-def-irr} in Theorem~\ref{thm:rem-irr-char} below. We do not know whether the conditions ${C_p}(X\setminus(\Omega\cup V))>0$ and $\mu(V)=0$ can be dropped from \ref{V-alt-def-irr-super}, but they are needed for our proof. Similarly they are needed in \ref{alt-def-irr-super} in Theorem~\ref{thm:rem-irr-char} below.
The condition ${C_p}(X\setminus(\Omega\cup V))>0$ was unfortunately overlooked in Bj\"orn~\cite{ABclass} and in Bj\"orn--Bj\"orn~\cite{BBbook}: It should be added to conditions (d$'$) and (e$'$) in \cite[Theorem~3.1]{ABclass}, to (h) and (i) in \cite[Theorem~3.3]{ABclass}, to (f$'$) and (g$'$) in \cite[Theorem~13.5]{BBbook}, and to (j) and (l) in \cite[Theorem~13.10]{BBbook}.
\begin{example} Let $X=[0,1]$ be equipped with the Lebesgue measure, and let $1<p<\infty$, $\Omega=(0,1]$ and $V=\{0\}$. Then ${C_p}(V)>0$. In this case the {$p\mspace{1mu}$}-harmonic functions on $\Omega$ are just the constant functions, and these trivially have {$p\mspace{1mu}$}-harmonic extensions to $X$. Thus the condition ${C_p}(X\setminus(\Omega\cup V))>0$ cannot be dropped from \ref{V-alt-def-irr}.
On the other hand, the set $V$ is not removable for bounded superharmonic functions on $\Omega$, see Example~9.1 in Bj\"orn~\cite{ABremove} or Example~12.17 in \cite{BBbook}. \end{example}
\begin{proof}[Proof of Theorem~\ref{thm:irr-char-V}] \ref{V-R} $\ensuremath{\Rightarrow} $ \ref{V-Cp-V-bdy} This follows from the Kellogg property (Theorem~\ref{thm:kellogg}).
\ref{V-Cp-V-bdy} $\ensuremath{\Rightarrow} $ \ref{V-upharm} This follows directly from Proposition~\ref{prop-inv-pharm}.
\ref{V-upharm} $\ensuremath{\Rightarrow} $ \ref{V-lpharm} This is trivial.
\ref{V-lpharm} $\ensuremath{\Rightarrow} $ \ref{V-R} Suppose that $x\in V$ is regular. Because $\chi_V$ is continuous at $x$, this yields a contradiction, as it follows from Theorem~\ref{thm:reg} that \[
0
= \lim_{\Omega\ni y\to x}{\itunderline{\omega}}(V)(y)
= \lim_{\Omega\ni y\to x}\itunderline{P}\chi_V(y)
= -\lim_{\Omega\ni y\to x}\itoverline{P}(-\chi_V)(y)
= \chi_V(x)
= 1. \] Thus $V$ does not contain any regular point.
\ref{V-Cp-V-bdy} $\ensuremath{\Rightarrow} $ \ref{V-alt-def-irr-super} Suppose that ${C_p}(V)=0$. Then ${C_p}(X\setminus(\Omega\cup V))={C_p}(X\setminus\Omega)>0$ and $\mu(V)=0$. Let $x\in V$ and let $G$ be a connected neighbourhood of $x$ such that $G\cap\partial\Omega\subset V$. Sets of capacity zero cannot separate space, by Lemma~4.6 in Bj\"orn--Bj\"orn~\cite{BBbook}, and hence $G\setminus\partial\Omega$ must be connected, i.e., $G\subset\overline\Omega$, from which it follows that $\Omega\cup V$ is open in $X$. The superharmonic extension is now provided by Theorem~6.3 in Bj\"orn~\cite{ABremove} (or Theorem~12.3 in \cite{BBbook}).
\ref{V-alt-def-irr-super} $\ensuremath{\Rightarrow} $ \ref{V-alt-def-irr} Let $u$ be a bounded {$p\mspace{1mu}$}-harmonic function on $\Omega$. Then, by assumption, $u$ has a superharmonic extension $U$ to $\Omega\cup V$. Moreover, as $-u$ is also bounded and {$p\mspace{1mu}$}-harmonic, there is a superharmonic extension $W$ of $-u$ to $\Omega\cup V$. Now, as $-W$ is clearly a subharmonic extension of $u$ to $\Omega\cup V$, Proposition~6.5 in Bj\"orn~\cite{ABremove} (or Proposition~12.5 in \cite{BBbook}) asserts that $U=-W$ is {$p\mspace{1mu}$}-harmonic (it is here that we use that $\mu(V)=0$).
\ref{V-alt-def-irr} $\ensuremath{\Rightarrow} $ \ref{V-semireg} Let $x_0\in V$. Since $\Omega\cup V$ is open in $X$, we see that $V\cap\partial(\Omega\cup V)=\varnothing$, and hence $x_0\notin\partial(\Omega\cup V)$. Let \[
h(x)
= \biggl(1-\frac{d_{x_0}(x)}{\min\{
\dist(x_0,\partial(\Omega\cup V)),1\}}\biggr)_\limplus,
\quad x\in\partial\Omega, \] where $d_{x_0}$ is defined by \eqref{eq-dx0}. Then $\itoverline{P} h$ is bounded and has a {$p\mspace{1mu}$}-harmonic extension $U$ to $\Omega\cup V$, and hence the Kellogg property (Theorem~\ref{thm:kellogg}) implies that \begin{equation}\label{eq-U=0}
\lim_{\Omega\cup V\ni y\to x}U(y)
= \lim_{\Omega\ni y\to x}\itoverline{P} h(y)
= h(x)
= 0
\quad\text{for q.e. }x\in\partial(\Omega\cup V)\setminus\{\infty\}. \end{equation} Let $G$ be the component of $\Omega\cup V$ containing $x_0$. Then \[
{C_p}(X\setminus G)
\geq {C_p}(X\setminus(\Omega\cup V))
>0. \] It then follows from Lemma~4.3 in Bj\"orn--Bj\"orn~\cite{BB} (or Lemma~4.5 in \cite{BBbook}) that ${C_p}(\partial G)>0$. In particular, it follows from \eqref{eq-U=0} that $U\not\equiv 1$ in $G$, and thus, by the strong maximum principle (see Corollary~6.4 in Kinnunen--Shanmugalingam~\cite{KiSh01} or \cite[Theorem~8.13]{BBbook}), that $U(x_0)<1$. Therefore \[
\lim_{\Omega\ni y\to x_0}\itoverline{P} h(y)
= U(x_0)
< 1
= h(x_0), \] and hence $x_0$ must be irregular.
However, if $f\in C(\partial\Omega)$, then $\itoverline{P} f$ has a {$p\mspace{1mu}$}-harmonic extension $W$ to $\Omega\cup V$. Since $W$ is continuous in $\Omega\cup V$, it follows that \[
\lim_{\Omega\ni y\to x_0}\itoverline{P} f(y)
= W(x_0), \] and hence the limit on the left-hand side always exists. Thus $x_0$ is semiregular.
\ref{V-semireg} $\ensuremath{\Rightarrow} $ \ref{V-R} This is trivial.
We now assume that $\Omega$ is bounded or {$p\mspace{1mu}$}-parabolic.
\ref{V-Cp-V-bdy} $\ensuremath{\Rightarrow} $ \ref{V-rem-motiv} This implication follows from Theorem~\ref{thm-hansevi2-main}.
\ref{V-rem-motiv} $\ensuremath{\Rightarrow} $ \ref{V-lpharm} As $-\chi_V\colon\partial\Omega\to\mathbb{R}$ is upper semicontinuous, it follows from Proposition~\ref{prop:Perron-semicont}, and \ref{V-rem-motiv}, that \[
0
\leq {\itunderline{\omega}}(V)
= -\itoverline{P}(-\chi_V)
= -\inf_{\substack{\varphi\in C(\partial\Omega)\\-\chi_V\leq\varphi\leq 0}}\itoverline{P}\varphi \\
= 0, \] and hence ${\itunderline{\omega}}(V)=0$. \end{proof}
\begin{definition}\label{def:semibarrier} A function $u$ is a \emph{semibarrier} (with respect to $\Omega$) at $x_0\in\partial\Omega$ if
\begin{enumerate}
\renewcommand{\theenumi}{\textup{(\roman{enumi})}}
\item\label{semibarrier-i} $u$ is superharmonic in $\Omega$;
\item\label{semibarrier-ii} $\liminf_{\Omega\ni y\to x_0}u(y)=0$;
\item\label{semibarrier-iii} $\liminf_{\Omega\ni y\to x}u(y)>0$ for every $x\in\partial\Omega\setminus\{x_0\}$.
\end{enumerate}
Moreover, we say that $u$ is a \emph{weak semibarrier} (with respect to $\Omega$) at $x_0\in\partial\Omega$ if $u$ is a positive superharmonic function such that \ref{semibarrier-ii} holds. \end{definition}
Now we are ready to characterize the semiregular points by means of capacity, {$p\mspace{1mu}$}-harmonic measures, removable singularities, and semibarriers. In particular, we show that semiregularity is a local property.
\begin{theorem}\label{thm:rem-irr-char} Let $x_0\in\bdy\Omega\setm\{\infty\}$\textup{,} $\delta>0$\textup{,} and $d_{x_0}\colon X^*\to[0,1]$ be defined by \eqref{eq-dx0}. Then the following statements are equivalent\/\textup{:} \begin{enumerate} \item\label{semireg} The point $x_0$ is semiregular. \item\label{semireg-local} The point $x_0$ is semiregular with respect to $G:=\Omega\cap B(x_0,\delta)$. \item\label{not-reg-one-seq} There is no sequence $\{y_j\}_{j=1}^\infty$ in $\Omega$ such that $y_j\to x_0$ as $j\to\infty$ and \[
\lim_{j\to\infty}\itoverline{P} f(y_j)
= f(x_0)
\quad\text{for all }f\in C(\partial\Omega). \] \item\label{not-reg} The point $x_0$ is neither regular nor strongly irregular. \item\label{R} It is true that $x_0\notin\overline{\{x\in\partial\Omega:x\text{ is regular}\}}$. \item\label{Cp-V-bdy} There is a neighbourhood $V$ of $x_0$ such that ${C_p}(V\cap\partial\Omega)=0$. \item\label{Cp-V} There is a neighbourhood $V$ of $x_0$ such that ${C_p}(V\setminus\Omega)=0$. \item\label{upharm} There is a neighbourhood $V$ of $x_0$ such that ${\overline{\omega}}(V\cap\partial\Omega)\equiv 0$. \item\label{lpharm} There is a neighbourhood $V$ of $x_0$ such that ${\itunderline{\omega}}(V\cap\partial\Omega)\equiv 0$. \item\label{alt-def-irr} There is a neighbourhood $V\subset\overline\Omega$ of $x_0$\textup{,} with ${C_p}(X\setminus(\Omega\cup V))>0$\textup{,} such that every function that is bounded and {$p\mspace{1mu}$}-harmonic in $\Omega$ has a {$p\mspace{1mu}$}-harmonic extension to $\Omega\cup V$. \item\label{rem-irr} There is a neighbourhood $V$ of $x_0$ such that every function that is bounded and {$p\mspace{1mu}$}-harmonic in $\Omega$ has a {$p\mspace{1mu}$}-harmonic extension to $\Omega\cup V$\textup{,} and moreover $x_0$ is irregular. \item\label{alt-def-irr-super} There is a neighbourhood $V$ of $x_0$\textup{,} with ${C_p}(X\setminus(\Omega\cup V))>0$ and $\mu(V\setminus\nobreak\Omega)=0$\textup{,} such that every function that is bounded and superharmonic in $\Omega$ has a superharmonic extension to $\Omega\cup V$. \item\label{d-lim} It is true that \[
\lim_{\Omega\ni y\to x_0}\itoverline{P} d_{x_0}(y)
> 0. \] \item\label{d-liminf} It is true that \[
\liminf_{\Omega\ni y\to x_0}\itoverline{P} d_{x_0}(y)
> 0. \] \item\label{weaksemibarrier} There is no weak semibarrier at $x_0$. \item\label{semibarrier} There is no semibarrier at $x_0$. \item\label{obst-dist-semibarrier} \setcounter{saveenumi}{\value{enumi}} The continuous solution of the ${\mathscr{K}}_{d_{x_0},d_{x_0}}$-obstacle problem is not a semibarrier at $x_0$. \end{enumerate}
If moreover $\Omega$ is bounded or {$p\mspace{1mu}$}-parabolic\textup{,} then also the following statement is equivalent to the statements above. \begin{enumerate} \setcounter{enumi}{\value{saveenumi}} \item \label{rem-motiv} There is a neighbourhood $V$ of $x_0$ such that for every $f\in C(\partial\Omega)$\textup{,}
the Perron solution $P f$ depends only on $f|_{\partial\Omega\setminus V}$ \textup{(}i.e., if $f,h\in C(\partial\Omega)$ and $f=h$ on $\partial\Omega\setminus V$\textup{,} then $P f\equivP h$\textup{)}. \end{enumerate} \end{theorem}
\begin{proof} \ref{R} $\eqv$ \ref{Cp-V-bdy} $\eqv$ \ref{upharm} $\eqv$ \ref{lpharm} $\ensuremath{\Rightarrow} $ \ref{semireg} This follows directly from Theorem~\ref{thm:irr-char-V}, with $V$ therein corresponding to $V\cap\partial\Omega$ here.
\ref{semireg} $\ensuremath{\Rightarrow} $ \ref{d-lim} Since $x_0$ is semiregular, the limit \[
\alpha
:= \lim_{\Omega\ni y\to x_0}\itoverline{P} d_{x_0}(y) \] exists. If $\alpha=0$, then $x_0$ must be regular by Theorem~\ref{thm:reg}, which is a contradiction. Hence $\alpha>0$.
\ref{d-lim} $\ensuremath{\Rightarrow} $ \ref{d-liminf} $\ensuremath{\Rightarrow} $ \ref{not-reg} $\ensuremath{\Rightarrow} $ \ref{not-reg-one-seq} These implications are trivial.
$\neg$\ref{R} $\ensuremath{\Rightarrow} $ $\neg$\ref{not-reg-one-seq} Suppose that $x_0\in\overline{\{x\in\partial\Omega:x\textup{ is regular}\}}$. For each integer $j\geq 2$, there exists a regular point $x_j\in B(x_0,1/j)\cap\partial\Omega$. Define $f_j\in C(\partial\Omega)$ by letting \[
f_j(x)
= (jd_{x_0}(x)-1)_\limplus,
\quad j=2,3,\ldots. \] Because $x_j$ is regular, there is $y_j\in B(x_j,1/j)\cap\Omega$ such that \[
|\itoverline{P} f_j(y_j)|
= |f_j(x_j)-\itoverline{P} f_j(y_j)|
< 1/j. \] Hence $y_j\to x_0$ and $\itoverline{P} f_j(y_j)\to 0$ as $j\to\infty$.
Let $f\in C(\partial\Omega)$ and $\alpha:=f(x_0)$. Let $\varepsilon>0$. Then we can find an integer $k\geq 2$ such that
$|f-\alpha|\leq\varepsilon$ on $B(x_0,2/k)\cap\partial\Omega$.
Choose $m$ such that $|f-\alpha|\leq m$. It follows that $f-\alpha\leq mf_j+\varepsilon$ for every $j\geq k$, and thus \[
\limsup_{j\to\infty}\itoverline{P} f(y_j)
\leq \limsup_{j\to\infty}\itoverline{P}(mf_j+\alpha+\varepsilon)(y_j)
= m\lim_{j\to\infty}\itoverline{P} f_j(y_j)+\alpha+\varepsilon
= \alpha+\varepsilon. \] Letting $\varepsilon\to 0$ shows that $\limsup_{j\to\infty}\itoverline{P} f(y_j)\leq\alpha$.
Applying this to $\tilde{f}=-f$ yields $\limsup_{j\to\infty}\itoverline{P} \tilde{f}(y_j)\leq-\alpha$. It follows that \[
\liminf_{j\to\infty}\itoverline{P} f(y_j)
\geq \liminf_{j\to\infty}\itunderline{P} f(y_j)
= -\limsup_{j\to\infty}\itoverline{P} \tilde{f}(y_j)
\geq \alpha, \] and hence $\lim_{j\to\infty}\itoverline{P} f(y_j)=f(x_0)$.
\ref{Cp-V-bdy} $\eqv$ \ref{semireg-local} Observe that \ref{Cp-V-bdy} is equivalent to the existence of a neighbourhood $U$ of $x_0$ with ${C_p}(U\cap\partial G)=0$, which is equivalent to \ref{semireg-local}, by the already proved equivalence \ref{Cp-V-bdy} $\eqv$ \ref{semireg} applied to $G$ instead of $\Omega$.
\ref{Cp-V-bdy} $\ensuremath{\Rightarrow} $ \ref{Cp-V} Let $V$ be a neighbourhood of $x_0$ such that ${C_p}(V\cap\partial\Omega)=0$. By Theorem~\ref{thm:irr-char-V}, \ref{V-Cp-V-bdy} $\ensuremath{\Rightarrow} $ \ref{V-alt-def-irr-super}, the set $U:=\Omega\cup(V\cap\partial\Omega)$ is open and ${C_p}(U\setminus\Omega)=0$.
\ref{Cp-V} $\ensuremath{\Rightarrow} $ \ref{Cp-V-bdy} This is trivial.
\ref{Cp-V} $\eqv$ \ref{alt-def-irr} $\eqv$ \ref{alt-def-irr-super} In all three statements it follows directly that $V\subset\overline\Omega$. Thus their equivalence follows directly from Theorem~\ref{thm:irr-char-V}, with $V$ in Theorem~\ref{thm:irr-char-V} corresponding to $V\cap\partial\Omega$ here.
\ref{alt-def-irr} $\ensuremath{\Rightarrow} $ \ref{rem-irr} We only have to show the last part, i.e., that $x_0$ is irregular, but this follows from the already proved implication \ref{alt-def-irr} $\ensuremath{\Rightarrow} $ \ref{semireg}.
\ref{rem-irr} $\ensuremath{\Rightarrow} $ \ref{semireg} Let $f\in C(\partial\Omega)$. Then $\itoverline{P} f$ has a {$p\mspace{1mu}$}-harmonic extension $U$ to $\Omega\cup V$ for some neighbourhood $V$ of $x_0$, and hence \[
\lim_{\Omega\ni y\to x_0}\itoverline{P} f(y)
= U(x_0). \] Since $x_0$ is irregular it follows that $x_0$ must be semiregular.
\ref{alt-def-irr-super} $\ensuremath{\Rightarrow} $ \ref{weaksemibarrier} Let $u$ be a positive superharmonic function on $\Omega$. Then $\min\{u,1\}$ is superharmonic by Lemma~9.3 in Bj\"orn--Bj\"orn~\cite{BBbook}, and hence has a superharmonic extension $U$ to $\Omega\cup V$. As $U$ is lsc-regularized (see Section~\ref{sec:p-harmonic}) and $\mu(V\setminus\Omega)=0$, it follows that $U\geq 0$ in $\Omega\cup V$. Suppose that $U(x_0)=0$. Then the strong minimum principle \cite[Theorem~9.13]{BBbook} implies that $U\equiv 0$ in the component of $\Omega\cup V$ that contains $x_0$. But this is in contradiction with $u$ being positive in $\Omega$, and thus \[
\liminf_{\Omega\ni y\to x_0}u(y)
\geq U(x_0)
> 0. \] Thus there is no weak semibarrier at $x_0$.
$\neg$\ref{semibarrier} $\ensuremath{\Rightarrow} $ $\neg$\ref{weaksemibarrier} Let $u$ be a semibarrier at $x_0$. If $u>0$ in all of $\Omega$, then $u$ is a weak semibarrier at $x_0$. On the other hand, assume that there exists $x\in\Omega$ such that $u(x)=0$ (in this case $u$ is not a weak semibarrier). Then the strong minimum principle \cite[Theorem~9.13]{BBbook} implies that $u\equiv 0$ in the component $G\subset\Omega$ that contains $x$, and hence $x_0$ must be the only boundary point of $G$, because $u$ is a semibarrier. As ${C_p}(X\setminus G)\geq{C_p}(X\setminus\Omega)>0$, Lemma~4.3 in Bj\"orn--Bj\"orn~\cite{BB} (or Lemma~4.5 in \cite{BBbook}) implies that ${C_p}(\{x_0\})={C_p}(\partial G)>0$. By the Kellogg property (Theorem~\ref{thm:kellogg}), $x_0$ is regular, and hence Theorem~\ref{thm:reg} asserts that there is a positive barrier $v$ at $x_0$, and thus $v$ is a weak semibarrier.
\ref{semibarrier} $\ensuremath{\Rightarrow} $ \ref{obst-dist-semibarrier} This is trivial.
$\neg$\ref{R} $\ensuremath{\Rightarrow} $ $\neg$\ref{obst-dist-semibarrier} Let $u$ be the continuous solution of the ${\mathscr{K}}_{d_{x_0},d_{x_0}}$-obstacle problem, which is superharmonic (see Section~\ref{sec:p-harmonic}). Moreover, it is clear that \[
\liminf_{\Omega\ni y\to x}u(y)
> 0
\quad\text{whenever }x\in\partial\Omega\setminus\{x_0\}, \] and thus $u$ satisfies \ref{semibarrier-i} and \ref{semibarrier-iii} in Definition~\ref{def:semibarrier}.
Let $\{x_j\}_{j=1}^\infty$ be a sequence of regular boundary points such that $d_{x_0}(x_j)<1/j$. By Theorem~\ref{thm:reg}, $\lim_{\Omega\ni y\to x_j}u(y)=d_{x_0}(x_j)$. Hence we can find $y_j\in B(x_j,1/j)\cap\Omega$ so that $u(y_j)<2/j$. Thus $u$ satisfies \ref{semibarrier-ii} in Definition~\ref{def:semibarrier} as \[
0
\leq \liminf_{\Omega\ni y\to x_0}u(y)
\leq \liminf_{j\to\infty}u(y_j)
= 0. \]
We now assume that $\Omega$ is bounded or {$p\mspace{1mu}$}-parabolic.
\ref{R} $\eqv$ \ref{rem-motiv} This follows directly from Theorem~\ref{thm:irr-char-V}, with $V$ therein corresponding to $V\cap\partial\Omega$ here. \end{proof}
We conclude our description of boundary points with some characterizations of strongly irregular points. As for regular and semiregular points, strong irregularity is a local property.
\begin{theorem}\label{thm:ess-irr-char} Let $x_0\in\bdy\Omega\setm\{\infty\}$\textup{,} $\delta>0$\textup{,} and $d_{x_0}\colon X^*\to[0,1]$ be defined by \eqref{eq-dx0}. Then the following are equivalent\/\textup{:} \begin{enumerate} \item\label{ess-irr} The point $x_0$ is strongly irregular. \item\label{ess-local} The point $x_0$ is strongly irregular with respect to $G:=\Omega\cap B(x_0,\delta)$. \item\label{ess-one-seq} The point $x_0$ is irregular and there exists a sequence $\{y_j\}_{j=1}^\infty$ in $\Omega$ such that $y_j\to x_0$ as $j\to\infty$\textup{,} and \[
\lim_{j\to\infty}\itoverline{P} f(y_j)
= f(x_0)
\quad\text{for all }f\in C(\partial\Omega). \] \item\label{ess-R} It is true that $x_0\in\itoverline{R}\setminus R$\textup{,} where $R:=\{x\in\partial\Omega:x\text{ is regular}\}$. \item\label{ess-d-liminf} It is true that \[
\liminf_{\Omega\ni y\to x_0}\itoverline{P} d_{x_0}(y)
= 0
< \limsup_{\Omega\ni y\to x_0}\itoverline{P} d_{x_0}(y). \] \item\label{ess-f-nolim} There exists $f\in C(\partial\Omega)$ such that \[
\lim_{\Omega\ni y\to x_0}\itoverline{P} f(y) \] does not exist. \item\label{obst-dist-irr} The continuous solution $u$ of the ${\mathscr{K}}_{d_{x_0},d_{x_0}}$-obstacle problem satisfies \[
\liminf_{\Omega\ni y\to x_0}u(y)
= 0
< \limsup_{\Omega\ni y\to x_0}u(y). \] \item\label{barrier-irr} There is a semibarrier \textup{(}or equivalently there is a weak semibarrier\/\textup{)} but no barrier at $x_0$. \end{enumerate} \end{theorem}
The trichotomy property (Theorem~\ref{thm:trichotomy}) shows that a boundary point is either regular, semiregular, or strongly irregular. We will use this in the following proof.
\begin{proof} \ref{ess-irr} $\eqv$ \ref{ess-local} By Theorems~\ref{thm:reg} and~\ref{thm:rem-irr-char}, regularity and semiregularity are local properties, and hence this must be true also for strong irregularity.
\ref{ess-irr} $\eqv$ \ref{ess-one-seq} $\eqv$ \ref{ess-R} This follows from Theorem~\ref{thm:rem-irr-char} \ref{semireg} $\eqv$ \ref{not-reg-one-seq} $\eqv$ \ref{R}.
\ref{ess-irr} $\ensuremath{\Rightarrow} $ \ref{ess-d-liminf} Since $x_0$ is strongly irregular and $\itoverline{P} d_{x_0}$ is nonnegative, it follows that \[
\liminf_{\Omega\ni y\to x_0}\itoverline{P} d_{x_0}(y)
= 0
\leq \limsup_{\Omega\ni y\to x_0}\itoverline{P} d_{x_0}(y). \] If $\limsup_{\Omega\ni y\to x_0}\itoverline{P} d_{x_0}(y)=0$, then $x_0$ must be regular by Theorem~\ref{thm:reg}, which is a contradiction. Thus \[
\limsup_{\Omega\ni y\to x_0}\itoverline{P} d_{x_0}(y)
> 0. \]
\ref{ess-d-liminf} $\ensuremath{\Rightarrow} $ \ref{ess-f-nolim} This is trivial.
\ref{ess-f-nolim} $\ensuremath{\Rightarrow} $ \ref{ess-irr} By definition, $x_0$ is neither regular nor semiregular, and hence must be strongly irregular.
\ref{ess-irr} $\eqv$ \ref{obst-dist-irr} Theorem~\ref{thm:reg} shows that $x_0$ is regular if and only if $\lim_{\Omega\ni y\to x_0}u(y)=0$. On the other hand, Theorem~\ref{thm:rem-irr-char} implies that $x_0$ is semiregular if and only if $\liminf_{\Omega\ni y\to x_0}u(y)>0$. The equivalence follows by combining these two facts.
\ref{ess-irr} $\eqv$ \ref{barrier-irr} By Theorem~\ref{thm:rem-irr-char}, $x_0$ is semiregular if and only if there is no (weak) semibarrier at $x_0$. On the other hand, by Theorem~\ref{thm:reg}, there is a barrier at $x_0$ if and only if $x_0$ is regular. Combining these two facts gives the equivalence. \end{proof}
\end{document}
\begin{document}
\maketitle
\begin{abstract} Federated learning (FL) is the most practical multi-source learning method for electronic healthcare records (EHR). Despite its guarantee of privacy protection, the wide application of FL is restricted by two major challenges: heterogeneous EHR systems and the non-i.i.d. characteristics of the data. A recent study proposed a framework that unifies heterogeneous EHRs, named UniHPF. We attempt to address both challenges simultaneously by combining UniHPF and FL. Our study is the first approach to unify heterogeneous EHRs into a single FL framework. This combination provides an average performance gain of 3.4\% compared to local learning. We believe that our framework is practically applicable to real-world FL.
\end{abstract}
\begin{keywords} Electronic Healthcare Record, Federated Learning, Multi-Source Learning, Centralized Learning, UniHPF \end{keywords}
\section{Introduction} \label{sec:intro}
\begin{figure*}
\caption{\camr{The application of FL with EHR data is restricted by two problems: (1) EHR system heterogeneity and (2) the non-i.i.d. problem. Although multiple prior studies attempt to resolve (2), there is no known solution for (1). Our framework is the first attempt to handle both problems simultaneously by combining UniHPF and FL.}}
\end{figure*}
An electronic healthcare record (EHR) is a rich data source that records all hospital events for each patient. Using EHR in machine learning (ML) allows \ecedit{us} to make useful predictions for patients' future health status \citep{doctorai,retain}. Since the predictions can affect the life \camr{of the patients}, \ecedit{accurate prediction is vital.} Due to the nature of ML, using more data helps to improve model accuracy \citep{sordo2005sample, prusa2015effect}. Considering that each EHR is maintained by an individual hospital, the size of single-source data is limited. Therefore, employing data from multiple hospitals (multi-source learning) is required to improve accuracy.
Unfortunately, this is not \ecedit{straightforward} due to the privacy issues surrounding EHR. Since healthcare data contains personal information, exporting data outside the hospital is highly restricted. Therefore, traditional centralized learning (CL) has limited practicality, because it needs to gather all data into a central server (\algorithmref{alg:cl}).
In this situation, federated learning (FL) can be a solution since it does not need to share data among clients (hospitals). It only has to share model weights that are trained \ecedit{only} on each client's data. The global server aggregates the \ecedit{weights} and sends the \ecedit{global} model to each client \ecedit{in} each communication round \camr{(\algorithmref{alg:fed}).} Owing to this mechanism, FL is the most appropriate method for achieving multi-source learning on EHR while protecting patient privacy.
Despite \ecedit{its} benefits, the application \ecedit{of FL} is limited due to the heterogeneity of EHR systems. In \ecedit{each} EHR system (\textit{i.e.}, client, hospital), the medical codes and the \ecedit{database} schema are typically not shared \camr{(\figureref{fig:prob_def} (1)).} Therefore, most of the previous \ecedit{studies conduct FL experiments only with clients using} a single system \citep{lee2020federated, huang2019patient, fedpxn}. However, these approaches are not able to handle the \camr{system heterogeneity of real-world EHRs.} Unifying all EHR systems into a standard format (common data model, CDM) can resolve this limitation \citep{rajkomar2018scalable, li2019distributed}. However, it has not yet been examined due to its cost- and time-consuming nature.
On the other hand, UniHPF \citep{unihpf} is a framework that can effectively handle heterogeneous EHR systems in a cost- and time-efficient manner. It replaces medical codes with text and linearizes different database schemas into a mutually compatible free-text format (\figureref{fig:prob_def}). However, \camr{the success of} UniHPF was only shown in the CL setting, which has the aforementioned practical limitations. In FL, as opposed to CL, we have to consider the differences among the clients\ecedit{'} data distributions (non-i.i.d. problem, \figureref{fig:prob_def} (2)) \citep{rieke2020future, li2022federated}.
Therefore, we combine UniHPF with multiple FL methods to resolve the non-i.i.d. problem and compare these methods. Since FL improves performance \camr{compared to learning from each client's data alone,} we successfully resolve both the privacy problem and the non-i.i.d. problem. Our main contributions can be summarized as follows:
\begin{itemize}
\item We suggest a practically applicable EHR multi-source learning framework by combining UniHPF and FL.
\item Our proposed framework demonstrates improved prediction performance compared to local learning, and occasionally even shows performance similar to centralized learning.
\item To the best of our knowledge, it is the first attempt to unify heterogeneous time-series EHRs into a single FL framework. \end{itemize}
\section{Background and Methods} \label{sec:related}
\subsection{Federated Learning} Federated learning is a kind of distributed learning that trains a model without sharing data among clients \citep{fedavg} (\algorithmref{alg:fed}). It enables training the model without the risk of data leakage by aggregating the parameters or gradients \citep{brisimi2018federated}. An obstacle to applying FL is that the data among clients is often not independent and identically distributed (non-i.i.d.). This makes optimizing a global model challenging \citep{rieke2020future}. We examine four well-known FL algorithms with UniHPF.
\camr{ \begin{itemize}
\item \texttt{FedAvg} \citep{fedavg}, the \textit{de facto} standard FL algorithm, simply averages the local model weights.
Since this method does not fully consider the non-i.i.d. problem, various FL algorithms have been developed.
\item \texttt{FedProx} \citep{fedprox} regularizes the local model with the $L_2$ distance between local and global parameters.
This prevents the local weights from drifting too far away from the global model.
\item \texttt{FedBN} \citep{fedbn} handles the feature heterogeneity among clients by excluding the batch normalization layers from the aggregation step.
\item \texttt{FedPxN} \citep{fedpxn} combines the advantages of \texttt{FedProx} and \texttt{FedBN}, and is reported to show the best performance for FL with EHRs. A schematic comparison of these aggregation rules is sketched below. \end{itemize} }
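To make the differences among these aggregation rules concrete, the following minimal sketch (our illustration, not code taken from the cited works) contrasts \texttt{FedAvg}-style weighted averaging, the \texttt{FedBN}-style exclusion of batch-normalization parameters from aggregation, and the \texttt{FedProx}-style proximal term added to the local objective; the \texttt{numpy}-dict model representation and the parameter-naming convention are assumptions made for brevity.
\begin{verbatim}
import numpy as np

def fedavg_aggregate(client_weights, client_sizes, skip_bn=False):
    """Weighted average of client parameter dicts (FedAvg).
    With skip_bn=True, parameters whose names contain 'bn' are left
    out of aggregation and stay client-specific, as in FedBN."""
    total = float(sum(client_sizes))
    new_global = {}
    for name in client_weights[0]:
        if skip_bn and "bn" in name:
            continue  # batch-norm layers are kept local
        new_global[name] = sum((n / total) * w[name]
                               for w, n in zip(client_weights, client_sizes))
    return new_global

def fedprox_local_loss(task_loss, local_params, global_params, mu):
    """FedProx: add mu/2 * ||w_local - w_global||^2 to the task loss
    computed on the client's own data, pulling local updates toward
    the current global model."""
    prox = sum(np.sum((local_params[k] - global_params[k]) ** 2)
               for k in local_params)
    return task_loss + 0.5 * mu * prox
\end{verbatim}
\texttt{FedPxN} then corresponds to using both ingredients at once: the proximal local loss together with aggregation that skips the batch-normalization parameters.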
\subsection{UniHPF} As mentioned earlier, the EHR system heterogeneity is the biggest obstacle to performing multi-source learning. To overcome this problem, unifying the input format is required.
Recently, UniHPF \citep{unihpf} has successfully addressed this problem without using domain knowledge and excessive preprocessing (\figureref{fig:unihpf}).
The two key concepts of UniHPF are treating EHR as free text, and utilizing the EHR hierarchy. A patient $\mathcal{P}$ in any EHR system is composed of multiple medical events $m_i \in \mathcal{P}$, and each event has its type $e_i$, such as ``labevents'' or ``prescriptions''. The events are composed of the corresponding features, each consisting of a name and a value $(n_{i,j}, v_{i,j}) \in m_i$. Some of the values are in the form of medical codes $c$, and these differ among the EHR systems. Thus, UniHPF replaces the code $c$ with its text description $d$ \citep{descemb}. For example, the lab measurement code ``50912'' can be converted into ``Glucose''. UniHPF makes a free-text representation $R_i$ of each event $m_i$ by linearizing the schema as \[R_i = e_i \oplus n_{i,1} \oplus v_{i,1} \oplus n_{i,2} \oplus v_{i,2} \oplus \cdots,\] where $\oplus$ is a concatenation operator. Note that UniHPF does not perform feature selection, which is time- and cost-consuming. Since these text representations are mutually compatible among the heterogeneous EHR systems, UniHPF is a suitable framework to perform FL. To make a prediction $\hat{y}$, UniHPF uses a sub-word tokenizer ($\text{Tok}$) and a word embedding layer ($\text{Emb}$), and encodes the text-represented events individually with the event encoder ($\text{Enc}$): \[z_i = \text{Enc}(\text{Emb}(\text{Tok}(R_i))).\] The encoded events are aggregated by the event aggregator ($\text{Agg}$): \[\hat{y} = \text{Agg}(z_1, z_2, \cdots).\] This helps the model to understand the patient-event level hierarchy of EHRs. Since no medical domain knowledge is used in any of the above steps, UniHPF can unify heterogeneous EHR systems efficiently.
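As a concrete illustration of the linearization step, the following minimal sketch (ours, with hypothetical field names and a toy code-to-description table; it is not the released UniHPF code) turns one structured event into the free-text representation $R_i$ described above.
\begin{verbatim}
# Toy code-to-description vocabulary (e.g., a lab item id).
CODE_TO_TEXT = {"50912": "Glucose"}

def linearize_event(event_type, features):
    """Build the free-text representation R_i of one medical event by
    concatenating the event type with feature names and values;
    medical codes are replaced by text descriptions when known."""
    parts = [event_type]
    for name, value in features:
        parts.extend([name, CODE_TO_TEXT.get(str(value), str(value))])
    return " ".join(parts)

# The same call works regardless of the source EHR schema.
print(linearize_event("labevents",
                      [("itemid", "50912"), ("valuenum", "102"),
                       ("valueuom", "mg/dL")]))
# -> labevents itemid Glucose valuenum 102 valueuom mg/dL
\end{verbatim}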
\begin{figure*}
\caption{Test AUPRC of the local learning (LL), federated learning (FL), and centralized learning (CL) experiments. Note that the graphs are ordered by the client size from left top to right bottom. $\star$ mark indicates the p-value of the Student's t-test is lower than 0.05.}
\end{figure*}
\section{Experiments and Discussion}
\subsection{Datasets} We use three open-sourced EHR datasets: MIMIC-III \citep{mimic3}, MIMIC-IV \citep{mimic4}, and Philips eICU \citep{eicu}. The first two are composed of data from a single hospital, and the last one is a combination of data from multiple hospitals. MIMIC-III is recorded with two heterogeneous EHR systems, so we split it into MIMIC-III-CV (CareVue) and MIMIC-III-MV (Metavision) based on the systems. Since different hospitals have different data distributions, we treat the 7 largest hospitals in the eICU dataset as independent clients. To summarize, we have a total of 10 clients from 4 different EHR systems and 10 different cohorts: MIMIC-IV, MIMIC-III-CV, MIMIC-III-MV, and 7 hospitals in eICU. Note that the clients are heterogeneous enough in terms of demographic information and label distributions (\appendixref{apd:stat}). The data is split into train, validation, and test sets with an 8:1:1 ratio in a stratified manner for each task.
\subsection{Experimental Setting} Our cohorts include patients over 18 years \ecedit{of age} who stayed in the intensive care unit (ICU) longer than 24 hours. We only use the first 12 hours of the first ICU stay from each hospital admission to make predictions. We follow the settings of \citet{unihpf}, except that we use GRU \citep{chung2014empirical} as the event encoder of UniHPF. All experimental resources \camr{and hyperparameters} are available on GitHub\footnote{\url{https://github.com/starmpcc/UniFL}}. We adopt 5 prediction tasks from \citet{mcdermott2020comprehensive}. \begin{itemize}
\setlength\itemsep{3pt}
\item Diagnosis (Dx): Predict all categorized diagnosis codes during the whole hospital stay of a patient.
\item Length of Stay (LOS3, LOS7): Predict whether a patient would stay in ICU longer than 3 or 7 days.
\item Mortality (Mort): Predict whether a patient would be alive or die within 60 hours.
\item Readmission (Readm): Predict whether a patient would be readmitted to the ICU within the same hospital admission. \end{itemize}
\camr{To compare with multi-source learning, we examine the performance of Local Learning (LL), which is training and evaluating on each client's data alone.} We use the Area Under the Precision-Recall Curve (AUPRC) as the metric. We assume a stable internet connection and full participation because the hospitals are generally connected by LAN. All experiments are repeated with five random seeds on one NVIDIA A100 80G or two RTX A6000 48G GPUs.
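For clarity on how the reported numbers can be obtained, the following minimal sketch (ours; whether these exact library calls were used is an assumption, and variable names are placeholders) shows how the AUPRC of one binary task and the per-seed significance test might be computed.
\begin{verbatim}
from scipy.stats import ttest_ind
from sklearn.metrics import average_precision_score

def auprc(y_true, y_score):
    """Area under the precision-recall curve for one binary task."""
    return average_precision_score(y_true, y_score)

def significantly_better(scores_a, scores_b, alpha=0.05):
    """Student's t-test over per-seed AUPRC scores (e.g., five seeds
    per method); a star is drawn in the figure when p < alpha."""
    _, p_value = ttest_ind(scores_a, scores_b)
    return p_value < alpha
\end{verbatim}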
\subsection{Experimental Result}
The experimental results are shown \ecedit{in} \figureref{fig:result}. The average performance for each task and client is reported in \appendixref{apd:res}. First, we evaluate whether our framework successfully handles the EHR system heterogeneity. Second, we compare the FL algorithms with respect to the non-i.i.d. problem.
Consistent with \citet{unihpf}, UniHPF with CL outperforms LL in every case, achieving an average performance increase of 10.4\%. This means that UniHPF properly overcomes the EHR system heterogeneity.
Overall, FL shows an average performance increase of 3.4\% compared to LL. This implies that UniHPF is helpful in aggregating the clients' data into a single FL model, demonstrating its potential, since this combined method is practically applicable to real-world heterogeneous EHRs.
We compare the performance among the algorithms with respect to the non-i.i.d. problem. Our results agree with \citet{choudhury2019predicting, niu2020billion}, which showed that CL is the upper bound of FL performance. In the CL setting, the clients' data is pooled before training starts, which avoids the non-i.i.d. problem. Therefore, CL has the best performance among the learning methods. \camr{In contrast, \texttt{FedAvg} can be treated as an empirical lower bound among the FL algorithms under non-i.i.d. data \citep{li2019convergence, hsu2019measuring}.} Although \texttt{FedProx} is an algorithm designed to handle the non-i.i.d. problem, its performance is lower than that of \texttt{FedAvg}. This seems to be because the performance of \texttt{FedProx} heavily depends on the hyperparameter $\mu$. The performance of \texttt{FedBN} and \texttt{FedPxN} is higher than that of \texttt{FedAvg} and lower than that of CL. This suggests that these algorithms address the non-i.i.d. problem in EHRs to some extent, but not completely.
Nevertheless, our framework shows its potential when the data distribution is extremely heterogeneous. \ecedit{Unlike} the other clients, eICU-73 and eICU-443 do not have drug infusion information. \camr{Even in these extreme cases, performing FL with \texttt{FedBN} or \texttt{FedPxN} resulted in some performance increase compared to LL.}
Regarding training time, FL requires an average of 1.9 times more communication rounds than CL epochs to satisfy the same early stopping criterion. Nevertheless, the performance of FL is inferior to that of CL. This result indicates that the gradient update per communication round is relatively less accurate. We expect that this can be improved by developing a better EHR-specific FL algorithm.
\begin{comment} There is no clear relation between client size and performance increment. Considering that the maximum size gap between each client is about 28 times, both the large and the small clients can get benefit from FL. It argues that we can compose clients without the restriction of their sizes. Since hospital sizes are not uniform in real world, it has the potential to make it easier to collect the participants for FL. Rather than client size, the performance increment depends on the label distribution of the clients and the tasks (\tableref{tab:res}). \end{comment}
\section{Conclusion}
In this paper, we empirically show that the combination of UniHPF and FL successfully resolves both the EHR system heterogeneity and the non-i.i.d. problem simultaneously.
The lower performance of FL compared to CL implies that there is still room for improvement with a new FL algorithm for EHR. We leave the investigation of EHR-specific pretraining with FL as future work.
\acks{This work was supported by Institute of Information \& Communications Technology Planning \& Evaluation (IITP) grant (No.2019-0-00075), Korea Medical Device Development Fund grant (Project Number: 1711138160, KMDF\_PR\_20200901\_0097), and the Korea Health Industry Development Institute (KHIDI) grant (No.HR21C0198), funded by the Korea government (MSIT, MOTIE, MOHW, MFDS).}
\onecolumn
\appendix
\section{Federated and Centralized Learning Algorithms}\label{apd:first} In federated learning (\algorithmref{alg:fed}), each client holds an individual copy of the model weights. The copies are initialized with the same weights and trained locally on the corresponding client's data. After the local training, the weights of the clients are gathered by the central server, aggregated, and synchronized. Each FL algorithm has a different corresponding \texttt{Aggregate} function.
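The round structure described above can be summarized by the following minimal sketch (ours; \texttt{local\_train} and \texttt{aggregate} stand for an arbitrary client optimizer and for any of the aggregation rules discussed earlier, and are placeholders rather than the exact implementation used in our experiments).
\begin{verbatim}
def federated_training(global_weights, clients, rounds,
                       local_train, aggregate):
    """Generic FL loop: broadcast, local training, aggregation.
    `clients` is a list of (dataset, size) pairs; `local_train` returns
    the updated weight dict of one client; `aggregate` is FedAvg, FedBN, ..."""
    for _ in range(rounds):
        # Every client starts the round from the same global weights.
        local_weights = [local_train(dict(global_weights), data)
                         for data, _ in clients]
        sizes = [size for _, size in clients]
        # The server gathers the local copies and synchronizes a new global model.
        global_weights = aggregate(local_weights, sizes)
    return global_weights
\end{verbatim}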
\begin{figure*}
\caption{Centralized Learning}
\label{alg:cl}
\caption{Federated Learning}
\label{alg:fed}
\end{figure*}
\section{Clients Statistics}\label{apd:stat}
\begin{table}[!htp]\centering \caption{Cohort Statics and Label Distributions}\label{tab:stat} \scriptsize \setlength{\tabcolsep}{2pt} \adjustbox{max width=\textwidth}{ \begin{tabular}{lrrrrrrrrrrrrrr}\toprule \multicolumn{2}{c}{} &MIMIC-IV &MIMIC-III-MV &MIMIC-III-CV &eICU-264 &eICU-420 &eICU-338 &eICU-73 &eICU-243 &eICU-458 &eICU-443 &Micro Avg. &Macro Avg. \\\cmidrule{1-14} \multicolumn{2}{c}{Cohort Size} &65594 &21160 &16831 &3637 &3153 &2636 &2612 &2423 &2368 &2367 &\multicolumn{2}{c}{12278.10} \\\cmidrule{1-14} \multicolumn{2}{c}{No. of Unique codes} &1908 &1923 &3226 &347 &320 &367 &384 &306 &284 &281 &1844.43 &934.60 \\\cmidrule{1-14} \multicolumn{2}{c}{Average No. of events per sample} &112.89 &91.43 &100.46 &53.69 &87.60 &47.32 &58.13 &51.80 &58.09 &51.82 &99.07 &71.32 \\\cmidrule{1-14} \multicolumn{2}{c}{\textbf{Demographic Informations}} & & & & & & & & & & & & \\\cmidrule{1-2} \multicolumn{2}{c}{Mean Ages} &63.28 &75.18 &73.90 &62.92 &63.73 &61.84 &63.54 &63.77 &61.60 &55.06 &66.58 &64.48 \\\cmidrule{1-14} \multirow{2}{*}{Gender(\%)} &M &55.81 &56.31 &56.74 &51.61 &57.78 &55.73 &54.98 &55.57 &54.18 &57.02 &55.92 &55.57 \\\cmidrule{2-14} &F &43.69 &43.69 &43.26 &48.39 &42.22 &44.27 &45.02 &44.43 &45.82 &42.98 &44.08 &44.38 \\\cmidrule{1-14} \multirow{5}{*}{Ethnicity(\%)} &White &67.78 &72.78 &70.67 &87.82 &86.08 &92.87 &75.61 &64.05 &64.02 &42.97 &70.18 &72.46 \\\cmidrule{2-14} &Black &10.84 &10.25 &8.74 &7.26 &4.19 &1.52 &13.51 &31.82 &29.10 &52.68 &11.61 &16.99 \\\cmidrule{2-14} &Hispanic &3.82 &4.07 &2.80 &0.33 &0.03 &1.29 &7.66 &0.00 &0.00 &1.10 &3.35 &2.11 \\\cmidrule{2-14} &Asian &2.96 &2.72 &2.10 &0.82 &1.49 &0.27 &1.30 &0.99 &1.27 &0.38 &0.00 &1.43 \\\cmidrule{2-14} &Other &14.60 &10.18 &15.69 &3.77 &8.21 &4.06 &1.91 &3.14 &5.62 &2.87 &14.87 &7.00 \\\cmidrule{1-14} \multicolumn{2}{c}{\textbf{Label Ratio}} & & & & & & & & & & & & \\\cmidrule{1-2} \multirow{18}{*}{Dx(\%)} &1 &4.73 &4.81 &4.99 &3.78 &3.26 &0.48 &4.49 &3.40 &3.10 &3.85 &3.00 &3.69 \\\cmidrule{2-14} &2 &3.99 &4.25 &4.16 &2.56 &1.75 &1.54 &2.59 &1.98 &1.23 &4.84 &1.30 &2.89 \\\cmidrule{2-14} &3 &10.36 &10.87 &12.16 &5.55 &12.40 &8.87 &12.25 &11.75 &5.04 &7.09 &3.43 &9.63 \\\cmidrule{2-14} &4 &6.77 &6.55 &6.24 &2.47 &8.97 &1.67 &3.33 &3.23 &1.55 &1.25 &2.38 &4.20 \\\cmidrule{2-14} &5 &7.73 &6.60 &5.16 &2.49 &5.35 &1.90 &2.67 &2.11 &1.98 &2.55 &2.81 &3.85 \\\cmidrule{2-14} &6 &6.18 &5.76 &4.27 &9.21 &5.83 &6.12 &4.82 &5.98 &6.63 &8.69 &2.34 &6.35 \\\cmidrule{2-14} &7 &11.15 &11.92 &15.35 &23.57 &11.55 &22.44 &18.75 &24.62 &23.34 &19.63 &4.38 &18.23 \\\cmidrule{2-14} &8 &6.95 &7.52 &9.24 &17.27 &9.91 &18.52 &12.81 &13.47 &15.40 &15.92 &2.96 &12.70 \\\cmidrule{2-14} &9 &7.25 &7.43 &7.46 &6.94 &5.95 &5.99 &4.19 &3.88 &4.96 &3.92 &3.26 &5.80 \\\cmidrule{2-14} &10 &6.98 &7.33 &7.53 &5.29 &6.93 &6.72 &9.58 &7.56 &12.60 &4.51 &3.30 &7.50 \\\cmidrule{2-14} &11 &0.08 &0.05 &0.07 &0.06 &0.03 &0.05 &0.09 &0.04 &0.12 &0.07 &0.04 &0.06 \\\cmidrule{2-14} &12 &1.53 &1.72 &1.88 &0.55 &0.88 &1.14 &0.46 &0.42 &0.56 &0.24 &0.77 &0.94 \\\cmidrule{2-14} &13 &4.34 &4.33 &2.96 &0.63 &0.58 &0.51 &0.46 &0.36 &0.40 &0.54 &2.20 &1.51 \\\cmidrule{2-14} &14 &0.58 &0.56 &0.57 &0.00 &0.02 &0.00 &0.06 &0.03 &0.00 &0.14 &0.31 &0.20 \\\cmidrule{2-14} &15 &0.01 &0.00 &0.00 &0.00 &0.01 &0.00 &0.00 &0.00 &0.00 &0.00 &0.01 &0.00 \\\cmidrule{2-14} &16 &5.78 &6.52 &8.12 &14.89 &10.85 &20.57 &18.70 &14.50 &18.32 &23.59 &3.03 &14.18 \\\cmidrule{2-14} &17 &6.67 &5.55 &3.81 &3.58 &5.58 &2.53 &2.20 &4.98 &3.83 &2.55 &3.71 &4.13 \\\cmidrule{2-14} &18 &8.91 
&8.25 &6.04 &1.40 &10.25 &1.36 &2.73 &1.93 &1.25 &1.02 &5.23 &4.31 \\\cmidrule{1-14} LOS3(\%) &true &32.50 &36.93 &42.24 &42.86 &48.24 &38.66 &37.10 &39.50 &39.74 &45.59 &36.26 &40.34 \\\cmidrule{1-14} LOS7(\%) &true &11.08 &12.51 &15.37 &12.48 &16.97 &11.87 &11.06 &10.94 &15.58 &17.66 &12.44 &13.55 \\\cmidrule{1-14} Mort(\%) &true &1.68 &2.70 &2.84 &2.12 &3.55 &2.35 &0.54 &1.32 &3.04 &2.83 &2.11 &2.30 \\\cmidrule{1-14} Readm(\%) &true &7.94 &5.81 &5.72 &8.83 &11.48 &8.35 &17.99 &12.30 &8.49 &10.31 &7.75 &9.72 \\ \bottomrule \end{tabular} } \end{table}
\section{Experimental Result}\label{apd:res} \begin{table*}[!htp]\centering \caption{Average performance for each task and client. The numbers in the parentheses mean the relative performance improvement compared to the local learning (LL). Red and blue texts mean the negative and more than 10\% increments, respectively.}\label{tab:res} \scriptsize \renewcommand{1.5}{1.5} \adjustbox{max width=\textwidth}{ \begin{tabular}{lrrrrrrr}\toprule &Local &FedAvg &FedProx &FedBN &FedPxN &Centralized \\\hline Dx &0.622 &0.619 \textcolor{red}{($-$0.38\%)} &0.616 \textcolor{red}{($-$0.87\%)} &0.663 (+6.67\%) &0.641 (+3.08\%) &0.714 \textcolor{blue}{(+14.81\%)} \\\hline LOS3 &0.603 &0.603 (+0.12\%) &0.609 (+1.07\%) &0.609 (+1.12\%) &0.613 (+1.81\%) &0.623 (+3.54\%) \\\hline LOS7 &0.266 &0.295 \textcolor{blue}{(+11.33\%)} &0.299 \textcolor{blue}{(+12.67\%)} &0.302 \textcolor{blue}{(+13.88\%)} &0.297 \textcolor{blue}{(+11.94\%)} &0.312 \textcolor{blue}{(+17.51\%)} \\\hline Mort &0.153 &0.167 (+9.33\%) &0.157 (+2.76\%) &0.162 (+5.76\%) &0.158 (+3.35\%) &0.166 (+8.90\%) \\\hline Readm &0.117 &0.118 (+1.62\%) &0.116 \textcolor{red}{($-$0.34\%)} &0.116 \textcolor{red}{($-$0.75\%)} &0.116 \textcolor{red}{($-$0.59\%)} &0.129 \textcolor{blue}{(+10.70\%)} \\ \hline \hline MIMIC-IV &0.414 &0.428 (+3.59\%) &0.410 \textcolor{red}{($-$0.72\%)} &0.424 (+2.45\%) &0.412 \textcolor{red}{($-$0.24\%)} &0.417 (+0.83\%) \\\hline MIMIC-III-MV &0.384 &0.431 \textcolor{blue}{(+12.15\%)} &0.423 \textcolor{blue}{(+10.33\%)} &0.437 \textcolor{blue}{(+13.81\%)} &0.423 \textcolor{blue}{(+10.10\%)} &0.428 \textcolor{blue}{(+11.61\%)} \\\hline MIMIC-III-CV &0.41 &0.416 (+1.52\%) &0.411 (+0.38\%) &0.423 (+3.27\%) &0.409 \textcolor{red}{($-$0.24\%)} &0.427 (+4.23\%) \\\hline eICU-264 &0.283 &0.282 \textcolor{red}{($-$0.06\%)} &0.289 (+2.36\%) &0.299 (+6.12\%) &0.300 (+6.17\%) &0.336 \textcolor{blue}{(+18.95\%)} \\\hline eICU-420 &0.457 &0.445 \textcolor{red}{($-$2.58\%)} &0.445 \textcolor{red}{($-$2.67\%)} &0.458 (+0.25\%) &0.450 \textcolor{red}{($-$1.49\%)} &0.483 (+5.65\%) \\\hline eICU-338 &0.269 &0.289 (+7.89\%) &0.286 (+6.60\%) &0.304 \textcolor{blue}{(+13.14\%)} &0.300 \textcolor{blue}{(+11.67\%)} &0.322 \textcolor{blue}{(+19.86\%)} \\\hline eICU-73 &0.31 &0.327 (+5.77\%) &0.326 (+5.46\%) &0.334 (+8.13\%) &0.327 (+5.72\%) &0.380 \textcolor{blue}{(+23.00\%)} \\\hline eICU-243 &0.356 &0.367 (+3.15\%) &0.368 (+3.43\%) &0.375 (+5.39\%) &0.381 (+7.14\%) &0.382 (+7.29\%) \\\hline eICU-458 &0.343 &0.330 \textcolor{red}{($-$3.59\%)} &0.349 (+1.96\%) &0.349 (+1.84\%) &0.350 (+2.26\%) &0.359 (+4.92\%) \\\hline eICU-443 &0.296 &0.291 \textcolor{red}{($-$1.52\%)} &0.286 \textcolor{red}{($-$3.12\%)} &0.300 (+1.76\%) &0.298 (+0.96\%) &0.355 \textcolor{blue}{(+20.33\%)} \\ \hline \hline Average &0.352 &0.361 (+2.54\%) &0.359 (+2.19\%) &0.370 (+5.29\%) &0.365 (+3.76\%) &0.389 \textcolor{blue}{(+10.57\%)} \\ \bottomrule \end{tabular} } \end{table*}
\end{document}
\begin{document}
\title[
The Complexity of Generalized Satisfiability for LTL ]{
The Complexity of Generalized Satisfiability \\ for Linear Temporal Logic }
\author[M.~Bauland]{Michael Bauland\rsuper a} \address{{\lsuper a}Knipp GmbH, Martin-Schmei\ss er-Weg 9, 44227 Dortmund, Germany} \email{[email protected]}
\author[T.~Schneider]{Thomas Schneider\rsuper b} \address{{\lsuper b}School of Computer Science, University of Manchester, Oxford Road, Manchester M13 9PL, UK} \email{[email protected]}
\author[H.~Schnoor]{Henning Schnoor\rsuper c} \address{{\lsuper c}Inst.\ f\"ur Informatik, Christian-Albrechts-Universit\"at zu Kiel, 24098 Kiel, Germany} \email{[email protected]} \thanks{Supported by the Postdoc Programme of the German Academic Exchange Service (DAAD)}
\author[I.~Schnoor]{Ilka Schnoor\rsuper d} \address{{\lsuper d}Inst.\ f\"{u}r Theoretische Informatik, Universit\"{a}t zu L\"{u}beck, Ratzeburger Allee 160, 23538 L\"{u}beck, Germany} \email{[email protected]}
\author[H.~Vollmer]{Heribert Vollmer\rsuper e} \address{{\lsuper e}Inst.\ f\"ur Theoretische Informatik, Universit\"{a}t Hannover, Appelstr. 4, 30167 Hannover, Germany} \email{[email protected]} \thanks{Supported in part by DFG VO 630/6-1.}
\keywords{computational complexity, linear temporal logic, satisfiability} \subjclass{F.4.1}
\titlecomment{{\lsuper*}This article extends the conference contribution \cite{bss+07} with full proofs of all lemmata and theorems.}
\begin{abstract}
In a seminal paper from 1985, Sistla and Clarke showed
that satisfiability for Linear Temporal Logic (LTL) is either \NP-complete
or \PSPACE-complete, depending on the set of temporal operators used.
If, in contrast, the set of propositional operators is restricted, the complexity may decrease.
This paper undertakes a systematic study of satisfiability for LTL formulae over restricted sets
of propositional and temporal operators. Since every propositional operator corresponds to a
Boolean function, there exist infinitely many propositional operators. In order to systematically
cover all possible sets of them, we use Post's lattice. With its help, we determine the
computational complexity of LTL satisfiability for all combinations of temporal operators and
all but two classes of propositional functions. Each of these infinitely many problems is shown
to be either \PSPACE-complete, \NP-complete, or in \PTIME. \end{abstract}
\maketitle
\section{Introduction}
\newlength{\abo}\setlength{\abo}{-3pt}
\newlength{\abu}\setlength{\abu}{0pt}
\newlength{\Abu}\setlength{\Abu}{4pt}
\newcommand{\Stab}{\rule{0pt}{14pt}}
\newcommand{\stab}{\rule{0pt}{12pt}}
\noindent
\emph{Linear Temporal Logic (LTL)}~
was introduced by Pnueli in~\cite{pnu77} as a formalism
for reasoning about the properties and the behaviors of parallel programs and concurrent systems,
and has widely been used
for these purposes. Because of the need to perform reasoning tasks---such as deciding
satisfiability, validity, or truth in a structure generated by binary relations---in an
automated manner, the decidability and computational complexity of these tasks are important issues.
It is known that in the case of full LTL with the operators \F\ (eventually), \G\ (invariantly),
\X\ (next-time), \U\ (until), and \S\ (since), satisfiability and determination of truth
are \PSPACE-complete~\cite{sicl85}.
Restricting the set of temporal operators leads to \NP-completeness in some cases~\cite{sicl85}.
These results imply that reasoning with LTL is difficult in terms of computational complexity.
This raises the question under which restrictions the complexity of these problems decreases.
Contrary to classical modal logics, there does not seem to be a natural way to modify the semantics of LTL and obtain decision problems with lower complexity.
However, there are several possible constraints that can be posed on the syntax.
One possibility is to restrict the set of temporal operators, which has been done
exhaustively in~\cite{sicl85,mar04}.
Another constraint is to allow only a certain ``degree of propositionality'' in the language,
\ie, to restrict the set of allowed propositional operators. Every propositional operator
represents a Boolean function---\eg, the operator $\wedge$ (\AND) corresponds to the
binary function whose value is 1 if and only if both arguments have value 1. There are
infinitely many Boolean functions and hence an infinite number of propositional operators.
We will consider propositional restrictions in a systematic way, achieving a
complete classification of the complexity of the reasoning problems for LTL.
Not only will this reveal all cases in this framework where satisfiability is tractable. It will also
provide a better insight into the sources of hardness by explicitly stating the combinations
of temporal and propositional operators that lead to \NP- or \PSPACE-hard fragments.
In addition, the ``sources of hardness'' will be identified whenever a proof technique
is not transferable from an easy to a hard fragment.
\par
\noindent
\emph{Related work.}~ The complexity of model-checking and
satisfiability problems for several syntactic restrictions of LTL
fragments has been determined in the literature: In
\cite{sicl85,mar04}, temporal operators and the use of negation have
been restricted; these fragments have been shown to be \NP- or
\PSPACE-complete. In \cite{ds02}, temporal operators, their
nesting, and the number of atomic propositions have been restricted;
these fragments have been shown to be tractable or \NP-complete.
Furthermore, due to \cite{CL93,DFR00}, the restriction to Horn
formulae does not decrease the complexity of satisfiability for LTL.
As for related logics, the complexity of satisfiability has been
shown in \cite{ees90} to be tractable or \NP-complete for three
fragments of CTL (computation tree logic) with temporal and
propositional restrictions. In \cite{hal95}, satisfiability for
multimodal logics has been investigated systematically, bounding
the depth of modal operators and the number of atomic
propositions. In \cite{hem01}, it was shown that satisfiability
for modal logic over linear frames drops from \NP-complete to
tractable if propositional operators are restricted to conjunction
and atomic negation.
The effect of propositional restrictions on the complexity of the satisfiability problem
was first considered \emph{systematically}
by Lewis for the case of classical propositional logic in~\cite{lew79}.
He established a dichotomy---depending on the set of propositional operators,
satisfiability is either \NP-complete or decidable in polynomial time. In the case of modal
propositional logic, a trichotomy has been achieved in~\cite{bhss06}: modal satisfiability
is \PSPACE-complete, \CONP-complete, or in \PTIME. That complete classification in terms of
restrictions on the propositional operators follows the structure of Post's
lattice of closed sets of Boolean functions~\cite{pos41}.
\par
\noindent
\emph{Our contribution.}~
This paper analyzes the same systematic propositional restrictions for LTL,
and combines them
with restrictions on the temporal operators. Using Post's lattice,
we examine the satisfiability problem for every possible fragment of
LTL determined by an arbitrary set of propositional operators
\textit{and} any subset of the five temporal operators listed
above. We determine the computational complexity of these problems,
except for one case---where only propositional operators
based on the binary \XOR\ function (and, perhaps, constants) are
allowed. We show that all remaining cases are either
\PSPACE-complete, \NP-complete, or in \PTIME.
It is not the aim of this paper to focus on particular propositional
restrictions that are motivated by certain applications. We prefer
to give a classification as complete as possible, which allows one to
choose a fragment that is appropriate, in terms of expressivity and
tractability, for any given application.
Applications of syntactically restricted fragments of temporal
logics can be found, for example, in the study of cryptographic
protocols: In \cite{low08}, Gavin Lowe restricts the application of
negation and temporal operators to obtain practical verification
algorithms.
Among our results, we exhibit cases with non-trivial tractability as well as
the smallest possible sets of propositional and temporal operators that already lead
to \NP-completeness or \PSPACE-completeness, respectively. Examples for the first
group are cases in which only the unary \NOT\ function, or only monotone functions are
allowed, but there is no restriction on the temporal operators. As for the second group,
if only the binary function $f$ with $f(x,y) = (x \wedge \overline{y})$ is permitted,
then satisfiability is \NP-complete already in the case of propositional
logic~\cite{lew79}. Our results show that the presence of the same function $f$
separates the tractable languages from the \NP-complete and \PSPACE-complete ones,
depending on the set of temporal operators used. According to this, minimal sets of temporal operators
leading to \PSPACE-completeness together with $f$ are, for example, $\{\U\}$ and
$\{\F,\X\}$.
The technically most involved proof is that of \PSPACE-hardness for the language
with only the temporal operator \S\ and the boolean operator $f$
(Theorem~\ref{theorem:SAT(S;BF) SAT(S|U;S1) PSPACE-h}). The difficulty lies in
simulating the quantifier tree of a Quantified Boolean Formula (QBF) in a linear structure.
Our results are summarized in Table~\ref{tab:results_sat}. The first column contains
the sets of propositional operators, with the terminology taken from Definition \ref{def:BFs}.
The second column shows the classification of
classical propositional logic as known from~\cite{lew79} and~\cite{coo71a}.
The last line in column 3 and 4 is largely due to~\cite{sicl85}. All other
entries are the main results of this paper.
The only open case appears in the third line and is discussed in the
Conclusion. Note that the case distinction
also covers all clones which are not mentioned in the present paper.
\begin{table}
\centering
\begin{small}
\begin{tabular}{l|c|cc}
\stab \hspace*{\fill} set of temporal operators & $\emptyset$ & $\{\F\}$, $\{\G\}$, & any other \\[2pt]
set of propositional operators & & $\{\F,\G\}$, $\{\X\}$ & combination \\
\hline
\Stab all operators 1-reproducing or self-dual & trivial & trivial & trivial \\
\stab only negation or all operators monotone & in \PTIME & in \PTIME & in \PTIME \\
\stab all operators linear & in \PTIME & ? & ? \\
\stab $x \wedge \neg y$ is expressible & \NP-c.\ & \NP-c.\ & \PSPACE-c.\ \\
\stab all Boolean functions & \NP-c.\ & \NP-c.\ & \PSPACE-c.\
\end{tabular}
\end{small}
\par
\caption{
Complexity results for satisfiability. The entries ``trivial'' denote cases in which a given formula
is always satisfiable. The abbreviation ``c.'' stands for ``complete.'' Question marks stand for open questions.
}
\label{tab:results_sat}
\end{table}
\section{Preliminaries}
A \emph{Boolean function} or \emph{Boolean operator} is a function
$f:\{0,1\}^n\rightarrow\{0,1\}$. We can identify an $n$-ary
propositional connector $c$ with the $n$-ary Boolean operator $f$
defined by: $f(a_1,\dots,a_n)=1$ if and only if the formula
$c(x_1,\dots,x_n)$ becomes true when assigning $a_i$ to $x_i$ for
all $1\leq i\leq n$. In addition to propositional connectors, we use
the unary temporal operators $\X$ (next-time), $\F$ (eventually),
$\G$ (invariantly) and the binary temporal operators $\U$ (until),
and $\S$ (since).
Let $B$ be a finite set of Boolean functions and $M$ be a set of
temporal operators. A \emph{temporal $B$-formula over $M$} is a
formula $\varphi$ that is built from variables, propositional
connectors from $B$, and temporal operators from $M$. More
formally, a temporal $B$-formula over $M$ is either a propositional
variable or of the form $f(\varphi_1,\dots,\varphi_n)$ or
$g(\varphi_1,\dots,\varphi_m)$, where $\varphi_i$ are temporal
$B$-formulae over $M$, $f$ is an $n$-ary propositional operator from
$B$ and $g$ is an $m$-ary temporal operator from $M$. In
\cite{sicl85}, complexity results for formulae using the temporal
operators \F, \G, \X\ (unary), and \U, \S\ (binary) were
presented. We extend these results to temporal $B$-formulae over
subsets of those temporal operators. The set of variables appearing
in $\varphi$ is denoted by $V_{\varphi}.$ If $M=\{\X,\F,\G,\U,\S\}$
we call $\varphi$ a \emph{temporal $B$-formula}, and if
$M=\emptyset$ we call $\varphi$ a \emph{propositional $B$-formula}
or simply a \emph{$B$-formula}. The set of all temporal $B$-formulae
over $M$ is denoted by $\LL{M,B}.$
A model in linear temporal logic is a linear structure of states,
which intuitively can be seen as different points of time, with
propositional assignments. Formally a \emph{structure} $S=(s,V,\xi)$
consists of an infinite sequence $s=(s_i)_{i\in\mathbb{N}}$ of
distinct states, a set of variables $V$, and a function
$\xi:\{s_i\mid i\in\mathbb{N}\}\rightarrow 2^V$ which induces a
propositional assignment of $V$ for each state.
Let $S=(s,V,\xi)$ be a structure and $\varphi$ a temporal
$\{\wedge,\neg\}$-formula over $\{\X,\U,\S\}$ with variables from $V$.
We define what it means that \emph{$S$ satisfies
$\varphi$ in $s_i$} ($S,s_i\vDash\varphi$): let $\varphi_1$ and
$\varphi_2$ be temporal $\{\wedge,\neg\}$-formulae over
$\{\X,\U,\S\}$ and $x\in V$ a variable.
\begin{center}
\begin{tabular}{lll}
$S,s_i\vDash x$ & if and only if & $x\in\xi(s_i)$, \\
$S,s_i\vDash \varphi_1\wedge\varphi_2$ & if and only if & $S,s_i\vDash \varphi_1$ and $S,s_i\vDash \varphi_2$, \\
$S,s_i\vDash \neg\varphi_1$ & if and only if & $S,s_i\nvDash\varphi_1$, \\
$S,s_i\vDash \X\varphi_1$ & if and only if & $S,s_{i+1}\vDash\varphi_1$, \\
$S,s_i\vDash \varphi_1\U\varphi_2$ & if and only if & there is a $k\geq i$ such that $S,s_k\vDash \varphi_2$, \\
& & and for every $i\leq j<k$,~ $S,s_j\vDash\varphi_1$, \\
$S,s_i\vDash \varphi_1\S\varphi_2$ & if and only if & there is a $k\leq i$ such that $S,s_k\vDash \varphi_2$, \\
& & and for every $k<j\leq i$,~ $S,s_j\vDash\varphi_1$.
\end{tabular}
\end{center}
The remaining temporal operators are interpreted as abbreviations:
$\F\varphi=\mathop{true}\U\varphi$ and
$\G\varphi=\neg\F\neg\varphi.$ Because of this, and since every Boolean
operator can be composed from $\wedge$ and $\neg$, the above
definition generalizes to temporal $B$-formulae for arbitrary sets
$B$ of Boolean operators.
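To illustrate the semantics above, the following minimal sketch (our own illustration, not part of the formal development) evaluates a formula at a state of an ultimately periodic structure, i.e., a finite prefix followed by a finite loop repeated forever; such structures suffice for many purposes, since satisfiable LTL formulae have ultimately periodic models. The encoding of formulae as nested tuples is an assumption made for brevity.
\begin{verbatim}
def sat(phi, i, prefix, loop):
    """Does the structure prefix . loop^omega satisfy phi at state s_i?
    States are sets of variable names; phi is either a variable name or
    a tuple ("not", p), ("and", p, q), ("X", p), ("U", p, q), ("S", p, q)."""
    def state(k):
        return prefix[k] if k < len(prefix) else loop[(k - len(prefix)) % len(loop)]
    if isinstance(phi, str):
        return phi in state(i)
    op = phi[0]
    if op == "not":
        return not sat(phi[1], i, prefix, loop)
    if op == "and":
        return sat(phi[1], i, prefix, loop) and sat(phi[2], i, prefix, loop)
    if op == "X":
        return sat(phi[1], i + 1, prefix, loop)
    if op == "U":
        # Looking one loop length beyond max(i, len(prefix)) is enough,
        # because later positions only repeat already inspected suffixes.
        for k in range(i, max(i, len(prefix)) + len(loop)):
            if sat(phi[2], k, prefix, loop):
                return True
            if not sat(phi[1], k, prefix, loop):
                return False
        return False
    if op == "S":
        for k in range(i, -1, -1):
            if sat(phi[2], k, prefix, loop):
                return True
            if not sat(phi[1], k, prefix, loop):
                return False
        return False
    raise ValueError("unknown operator: " + str(op))

# F phi = true U phi and G phi = not F not phi, as in the text.
# Example: (not p) U p holds at s_0 when p first becomes true at s_1.
print(sat(("U", ("not", "p"), "p"), 0, [set()], [{"p"}]))   # True
\end{verbatim}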
A temporal $B$-formula $\varphi$ over $M$ is \emph{satisfiable} if
there exists a structure $S$ such that $S,s_i\vDash\varphi$ for some
state $s_i$ from $S$. Furthermore, $\varphi$ is called \emph{valid}
if, for all structures $S$ and all states $s_i$ from $S$, it holds
that $S,s_i\vDash\varphi$. We will consider the following problems:
Let $B$ be a finite set of Boolean functions and $M$ a set of
temporal operators. Then \tsat{M,B} is the problem to decide whether
a given temporal $B$-formula over $M$ is satisfiable.
In the literature, another notion of satisfiability is sometimes
considered, where we ask if a formula can be satisfied at the first
state in a structure. It is easy to see that, in terms of
computational complexity, this does not make a difference for our
problems as long as the considered fragment does not contain the
temporal operator $\S$. For this paper, we only study the
satisfiability problem as defined above.
Sistla and Clarke analyzed the satisfiability problem for temporal $\{\wedge,\vee,\neg\}$-formulae over some sets of temporal operators, see Theorem \ref{theorem:sicl85}. Note that, due to de Morgan's laws, there is no significant difference between the sets $\{\wedge,\vee,\neg\}$ and $\{\wedge,\neg\}$ of Boolean operators. For convenience, we will therefore prefer the former denotation to the latter when stating results. Furthermore, the original proof of Theorem \ref{theorem:sicl85} explicitly uses the operator $\vee$.
\begin{theorem}[\cite{sicl85}]\label{theorem:sicl85}
~\par
\begin{enumerate}[\em(1)]
\item\label{theorem:sicl85:SAT(F;BF) NP-c}
$\tsat{\{\F\},\{\wedge,\vee,\neg\}}$ is \NP-complete.
\item\label{theorem:sicl85:SAT(F,X|U|U,S,X;BF) PSPACE-c}
$\tsat{\{\F,\X\},\{\wedge,\vee,\neg\}}$,
$\tsat{\{\U\},\{\wedge,\vee,\neg\}}$, and
$\tsat{\{\U,\S,\X\},\{\wedge,\vee,\neg\}}$ are \PSPACE-complete.
\end{enumerate}
\end{theorem}
Since there are infinitely many finite sets of Boolean functions, we
introduce some algebraic tools to classify the complexity of the
infinitely many arising satisfiability problems. We denote with
\proj{n}{k} the $n$-ary projection to the $k$-th variable, \ie,
$\proj{n}{k}(x_1,\dots,x_n)=x_k$, and with \const{n}{a} the $n$-ary
constant function defined by $\const{n}{a}(x_1,\dots,x_n)=a$. For
$\const{1}{1}(x)$ and $\const{1}{0}(x)$ we simply write 1 and 0. A
set $C$ of Boolean functions is called a \emph{clone} if it is
closed under superposition, which means $C$ contains all projections
and $C$ is closed under arbitrary composition \cite{pip97b}. For a
set $B$ of Boolean functions we denote with $\clone{B}$ the smallest
clone containing $B$ and call $B$ a \emph{base} for $\clone{B}$. In
\cite{pos41} Post classified the lattice of all clones
(see Figure~\ref{Lattice}) and found a finite base for each clone.
We now define some properties of Boolean functions, where $\oplus$ denotes the binary exclusive or; a brute-force check of these properties is sketched after the definition.
\begin{definition}
\label{def:BFs}
Let $f$ be an $n$-ary Boolean function.
\begin{enumerate}[$\bullet$]
\item $f$ is 1\emph{-reproducing} if $f(1,\dots,1)=1$.
\item $f$ is \emph{monotone} if $a_1\leq b_1,\dots,a_n\leq b_n$ implies $f(a_1,\dots,a_n)\leq f(b_1,\dots,b_n)$.
\item $f$ is 1\emph{-separating} if there exists an $i\in\{1,\dots,n\}$ such that $f(a_1,\dots,a_n)=1$ implies $a_i=1$.
\item $f$ is \emph{self-dual} if $f\equiv\dual(f)$, where $\dual(f)(x_1,\dots,x_n)= \neg f(\neg x_1,\dots,\neg x_n)$.
\item $f$ is \emph{linear} if $f\equiv x_1\oplus\dots\oplus x_n\oplus c$ for a constant $c\in\set{0,1}$ and variables $x_1,\dots,x_n$.
\end{enumerate}
\end{definition}
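These properties can be checked mechanically from a truth table. The following small sketch (our own illustration) tests them by brute force over all assignments, which is adequate for the small arities occurring in the bases of the clones below; for linearity it compares $f$ against every exclusive-or of a subset of the variables, plus a constant.
\begin{verbatim}
from itertools import product

def points(n):
    return list(product([0, 1], repeat=n))

def is_1_reproducing(f, n):
    return f(*([1] * n)) == 1

def is_monotone(f, n):
    return all(f(*a) <= f(*b) for a in points(n) for b in points(n)
               if all(x <= y for x, y in zip(a, b)))

def is_1_separating(f, n):
    # Some coordinate i equals 1 whenever the function value is 1.
    return any(all(a[i] == 1 for a in points(n) if f(*a) == 1)
               for i in range(n))

def is_self_dual(f, n):
    return all(f(*a) == 1 - f(*[1 - x for x in a]) for a in points(n))

def is_linear(f, n):
    # Linear: a constant plus an exclusive-or of some of the variables.
    return any(all(f(*a) == (c + sum(m * x for m, x in zip(mask, a))) % 2
                   for a in points(n))
               for c in (0, 1) for mask in points(n))

g = lambda x, y: x & (1 - y)      # x AND (NOT y), the base of S_1
print(is_1_separating(g, 2), is_monotone(g, 2), is_linear(g, 2))
# -> True False False
\end{verbatim}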
\begin{figure}
\caption{Graph of some closed classes of Boolean functions}
\label{Lattice}
\end{figure}
In Table~\ref{baselist} we define those clones that are essential for this paper plus four basic ones,
and give Post's bases \cite{pos41} for them. The inclusions between them are given in Figure~\ref{Lattice}.
The definitions of all clones as well as the full inclusion graph can be found, for example,
in~\cite{bcrv03}.
\def\noalign{\hrule height.8pt}{\noalign{\hrule height.8pt}}
\begin{table}
\begin{scriptsize}
\begin{center}
\begin{tabular}{lll}
\noalign{\hrule height.8pt} Name & Definition & Base \\
\noalign{\hrule height.8pt} $\cBF$ & All Boolean functions & $\{\vee,\wedge,\neg\}$ \\
\hline $\cR_1$ & $\{f \in \mathtext{BF} \mid f$ is $1$-reproducing $\}$ & $\{\vee,\ifff\}$ \\
\hline $\cM$ & $\{f \in \mathtext{BF} \mid f$ is monotone $\}$ & $\{\vee,\wedge,0,1\}$ \\
\hline $\cS_1$ & $\{f \in \mathtext{BF} \mid f$ is $1$-separating $\}$ & $\{x \wedge \overline{y}\}$ \\
\hline $\cD$ & $\{f \mid f \text{ is self-dual}\}$ & $\{x\overline{y} \vee x\overline{z} \vee (\overline{y} \wedge \overline{z})\}$ \\
\hline $\cL$ & $\{f \mid f\text{ is linear}\}$ & $\{\oplus,1\}$ \\
\hline $\cL_0$ & $\clone{\{\oplus\}}$ & $\{\oplus\}$ \\% \hline $\cL_0$ & $\mathtext{L}\cap\cR_0$ & $\{\oplus\}$ \\
\hline $\cV$ & $\{f \mid \text{There is a formula of the form } c_0 \vee c_1x_1 \vee \dots \vee c_nx_n$ & $\{\vee,1,0\}$ \\ &
$\text{ such that } c_i \text{ are constants for } 1\leq i \leq n \text{ that describes } f\}$ & \\
\hline $\cE$ & $\{f \mid \text{There is a formula of the form } c_0 \wedge(c_1\vee x_1)\wedge\dots\wedge(c_n\vee x_n)$ &
$\{\wedge,1,0\}$ \\ & $\text{ such that } c_i \text{ are constants for } 1\leq i \leq n \text{ that describes } f\}$ & \\
\hline $\cN$ & $\{f \mid f\text{ depends on at most one variable}\}$ & $\{\neg,1,0\}$ \\
\hline $\cI$ & $\{f \mid f\text{ is a projection or constant}\}$ & $\{0,1\}$ \\
\hline $\cI_2$ & $\{f \mid f\text{ is a projection}\}$ & $\,\emptyset$ \\
\noalign{\hrule height.8pt}
\end{tabular}
\end{center}
\end{scriptsize}
\caption{List of some closed classes of Boolean functions with bases}
\label{baselist}
\end{table}
There is a strong connection between propositional formulae and Post's lattice.
If we interpret propositional formulae as Boolean functions, it is
obvious that $[B]$ includes exactly those functions that can be
represented by $B$-formulae. This connection has been used various
times to classify the complexity of problems related to
propositional formulae: For example, Lewis presented a dichotomy for
the satisfiability problem for propositional $B$-formulae:
$\tsat{\emptyset,B}$ is \NP-complete if $\cS_1 \subseteq\clone{B}$,
and solvable in \PTIME\ otherwise \cite{lew79}.
Post's lattice was applied for the equivalence problem \cite{rei01},
counting \cite{rewa99-dt} and finding minimal \cite{revo03}
solutions, and learnability \cite{dal00} for Boolean formulae. The
technique has been used in non-classical logic as well: Bauland et
al. achieved a trichotomy in the context of modal logic, which says
that the satisfiability problem for modal formulae is, depending on
the allowed propositional connectives, \PSPACE-complete,
\CONP-complete, or solvable in \PTIME\ \cite{bhss06}. For the
inference problem for propositional circumscription, Nordh presented
another trichotomy theorem \cite{nor05}.
An important tool in restricting the length of the resulting formula
in many of our reductions is the following lemma. It shows that for
certain sets $B$, there are always short formulae representing the
functions \AND, \OR, or \NOT, respectively. Points (2) and (3)
follow directly from the proofs in \cite{lew79}, and point (1) is
Lemma~3.3 from \cite{sch05}. \eject
\begin{lemma}\label{lemma:short formulae}
\begin{enumerate}[\em(1)]
\item
Let $B$ be a finite set of Boolean functions such that $V\subseteq\clone{B}\subseteq\mathtext{M}$
($E\subseteq\clone{B}\subseteq\mathtext{M}$, resp.). Then there exists a $B$-formula $f(x,y)$ such that $f$ represents
$x\vee y$ ($x\wedge y$, resp.) and each of the variables $x$ and $y$ occurs exactly once in $f(x,y)$.
\item
Let $B$ be a finite set of Boolean functions such that $\clone B=\rm{BF}$. Then there are $B$-formulae $f(x,y)$ and
$g(x,y)$ such that $f$ represents $x\vee y$, $g$ represents $x\wedge y$, and both variables occur in each of these
formulae exactly once.
\item
Let $B$ be a finite set of Boolean functions such that $\mathtext{N}\subseteq\clone{B}$. Then there is a $B$-formula
$f(x)$ such that $f$ represents $\neg x$ and the variable $x$ occurs in $f$ only once.
\end{enumerate}
\end{lemma}
\section{Results}
Our proofs for most of the upper complexity bounds will rely on similar ideas as the ones in \cite{bhss06}, which are extensions of the proof techniques for the polynomial time results in \cite{lew79}. However, the proof of our polynomial time result for formulae using the exclusive or (Theorem \ref{theorem:SAT(X;L) in P}) will be unrelated to the positive cases for XOR in the mentioned papers.
The proofs for hardness results will use different techniques. Hardness proofs for unimodal logics usually work in embedding a tree-like structure directly into a tree-like model for modal formulae. Naturally, this approach does not work with LTL which speaks about linear models. Hence, in the proof of Theorem
\ref{theorem:SAT(S;BF) SAT(S|U;S1) PSPACE-h}, we will encode a tree-like structure into a linear one, and most of the complexity of the proof will come from the need to enforce a tree-like behavior of linear models.
\input{pspace}
\input{np}
\input{p}
\section{Conclusion}
We have almost completely classified the computational complexity of satisfiability
for LTL with respect to the sets of propositional and temporal operators
permitted, see Table \ref{tab:results_concl}. The only case left open is the one in which only propositional
operators constructed from the binary \XOR\ function (and, perhaps, constants)
are allowed. This case has already turned out to be difficult to
handle---and hence was left open---in~\cite{bhss06} for modal satisfiability
under \textit{restricted} frame classes.
The difficulty here and in~\cite{bhss06} is reflexivity, \ie, the property that
the formula $\F\varphi$ is satisfied at some state if $\varphi$ is satisfied
at \textit{the same} state. This does not allow for a separate treatment of
the propositional part (without temporal operators) and the remainder of a
given formula.
\begin{table}
\centering
\begin{small}
\begin{tabular}{l|cc}
\hspace*{\fill} temporal operators & $\{\F\}$, $\{\G\}$, & any other \\[2pt]
function class $B$ (propositional operators) & $\{\F,\G\}$, $\{\X\}$ & combination \\
\hline
\Stab $B \subseteq \cR_1$ or $B \subseteq \cD$ & trivial & trivial \\
\stab $B \subseteq \cM$ or $B \subseteq \cN$ & in \PTIME & in \PTIME \\
\stab $\cL_0$, $\cL$ & ? & ? \\
\stab else (\ie, $B \supseteq \cS_1$) & \NP-c.\ & \PSPACE-c.\
\end{tabular}
\end{small}
\par
\caption{
Complexity results for satisfiability. The entries ``trivial'' denote cases in which a given formula
is always satisfiable. The abbreviation ``c.'' stands for ``complete.'' Question marks stand for open questions.
}
\label{tab:results_concl}
\end{table}
Our results bear an interesting resemblance to the classifications obtained
in \cite{lew79} and in \cite{bhss06}. In all of these cases (except for one of the several
classifications obtained in the latter), it turns out that sets of Boolean functions $B$
which generate a clone above $\cS_1$ give rise to computationally hard problems,
while other cases seem to be solvable in polynomial time. Therefore, in a precise sense, it is
the function represented by the formula $x\wedge\overline{y}$ which renders problems in
this context computationally intractable.
These hardness results seem to indicate that $x\wedge\overline{y}$ and other functions which
generate clones above $\cS_1$ have properties that make computational problems hard, and this notion
of hardness is to a large extent independent of the actual problem considered.
In \cite{bms+07}, we have separated tractable and intractable cases of the model checking problem for LTL with restrictions
to the propositional operators. Without such restrictions, this problem has the same complexity as satisfiability~\cite{sicl85}.
The results from this paper leave two open questions. Besides the unsolved \XOR\ case, it would be
interesting to further classify the polynomial-time solvable cases.
Further work could also examine related specification languages, such as CTL, $\text{CTL}^\ast$,
or hybrid temporal languages.
\section*{Acknowledgments} We thank Martin Mundhenk and the anonymous referees for helpful comments and suggestions.
\end{document}
\begin{document}
\newcommand{1503.01529}{1503.01529}
\renewcommand{081}{081}
\FirstPageHeading
\ShortArticleName{Monge--Amp{\`e}re Systems with Lagrangian Pairs}
\ArticleName{Monge--Amp{\`e}re Systems with Lagrangian Pairs}
\Author{Goo ISHIKAWA~$^\dag$ and Yoshinori MACHIDA~$^\ddag$}
\AuthorNameForHeading{G.~Ishikawa and Y.~Machida}
\Address{$^\dag$~Department of Mathematics, Hokkaido University, Sapporo 060-0810, Japan} \EmailD{\href{mailto:[email protected]}{[email protected]}}
\Address{$^\ddag$~Numazu College of Technology, 3600 Ooka, Numazu-shi, Shizuoka, 410-8501, Japan} \EmailD{\href{mailto:[email protected]}{[email protected]}}
\ArticleDates{Received April 10, 2015, in f\/inal form October 05, 2015; Published online October 10, 2015}
\Abstract{The classes of Monge--Amp{\`e}re systems, decomposable and bi-decomposable Monge--Amp{\`e}re systems, including equations for improper af\/f\/ine spheres and hypersurfaces of constant Gauss--Kronecker curvature are introduced. They are studied by the clear geometric setting of Lagrangian contact structures, based on the existence of Lagrangian pairs in contact structures. We show that the Lagrangian pair is uniquely determined by such a~bi-decomposable system up to the order, if the number of independent variables $\geq 3$. We remark that, in the case of three variables, each bi-decomposable system is generated by a~non-degenerate three-form in the sense of Hitchin. It is shown that several classes of homogeneous Monge--Amp{\`e}re systems with Lagrangian pairs arise naturally in various geometries. Moreover we establish the upper bounds on the symmetry dimensions of decomposable and bi-decomposable Monge--Amp{\`e}re systems respectively in terms of the geometric structure and we show that these estimates are sharp (Proposition~\ref{Hess=0} and Theorem~\ref{maximal symmetry}). }
\Keywords{Hessian Monge--Amp{\`e}re equation; non-degenerate three form; bi-Legendrian f\/ibration; Lagrangian contact structure; geometric structure; simple graded Lie algebra}
\Classification{58K20; 53A15; 53C42}
\section{Introduction}
{\bf 1.1.} The second-order partial dif\/ferential equation \begin{gather*} \mathrm{Hess}(f)= \det\left( \frac{\partial^2 f}{\partial x_i\partial x_j} \right)_{1\leq i, j \leq n} = c \quad (\text{$c$ is constant}, \ c \not= 0), \end{gather*} for a scalar function $f$ of $n$ real variables $x_i$, $i = 1, 2, \dots, n$, describes improper (parabolic) af\/f\/ine hyperspheres $ z = f(x_1, \dots, x_n) $ and it plays a signif\/icant role in equi-af\/f\/ine geometry (see \cite{N-S} for example). Similarly the equation of constant Gaussian (Gauss--Kronecker) curvature \begin{gather*} K = c \quad (\text{$c$ is constant}) \end{gather*} for hypersurfaces is important in Riemannian geometry (see \cite{K-N} for example). Note that it is written, for graphs $z = f(x_1, \dots, x_n)$, as the equation \begin{gather*} \mathrm{Hess}(f) = (-1)^n c \big(1 + p_1^2 + \cdots + p_n^2\big)^{\frac{n+2}{2}}, \end{gather*} where $p_i = \frac{\partial f}{\partial x_i}$. Therefore the equations ${\mathrm{Hess}}(f) = c$ and $K = c$ are regarded as Monge--Amp{\`e}re equations, and they are studied from geometric aspects in this paper. If we treat these equations in the framework of {\it Monge--Amp{\`e}re systems}, then we realize that they have a specif\/ic character.
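As a quick consistency check of the displayed relation between $\mathrm{Hess}(f)$ and the Gauss--Kronecker curvature (our own verif\/ication, not part of the original argument), the following sketch uses \texttt{sympy} with $n = 2$ and the upper unit hemisphere $z = \sqrt{1 - x_1^2 - x_2^2}$, for which $K = 1$.
\begin{verbatim}
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = sp.sqrt(1 - x1**2 - x2**2)          # upper unit hemisphere, K = 1

hess = sp.hessian(f, (x1, x2)).det()    # Hess(f) = f_11*f_22 - f_12**2
p1, p2 = sp.diff(f, x1), sp.diff(f, x2)

# For n = 2 the text gives Hess(f) = (-1)^n * K * (1 + p1^2 + p2^2)^((n+2)/2).
print(sp.simplify(hess / (1 + p1**2 + p2**2)**2))   # expected output: 1
\end{verbatim}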
In~\cite{I-M}, we treated improper af\/f\/ine spheres and constant Gaussian curvature surfaces in~${\mathbf{R}}^3$ from the view point of Monge--Amp{\`e}re equations of two variables, and we analyzed the singula\-ri\-ties of their geometric solutions. There we ef\/fectively used the direct sum decomposition of the standard contact structure $D \subset T{\mathbf{R}}^5$ on~${\mathbf{R}}^5$ into a pair of two Lagrangian plane f\/ields~$E_1$,~$E_2$, namely a {\it Lagrangian pair}.
Based on the notion of Lagrangian pairs generalized to the higher-dimensional cases, namely, for contact manifolds of dimension $2n+1$, we introduce decomposable and bi-decomposable Monge--Amp{\`e}re systems with Lagrangian pairs in Section~\ref{Monge-Ampere systems and Lagrangian pairs}. A decomposable (resp.\ a~bi-decomposable) Monge--Amp{\`e}re system is def\/ined by a decomposable $n$-form (resp.\ a~sum of two decomposable $n$-forms) which is compatible with the underlying Lagrangian pair. The class of Monge--Amp{\`e}re systems with Lagrangian pairs, which is introduced in this paper, is invariant under contact transformations. If a Monge--Amp{\`e}re system is isomorphic to a Monge--Amp{\`e}re system with a Lagrangian pair, then it is accompanied by a Lagrangian pair (see Def\/inition~\ref{isomorphism}). On the other hand, it is not trivial that the Lagrangian pair is uniquely asso\-cia\-ted to a given Monge--Amp{\`e}re system. Then we are led to natural questions: Is the Lagrangian pair uniquely determined by the decomposable (resp.\ the bi-decomposable) form? Is the Lagrangian pair recovered only from the Monge--Amp{\`e}re system?
We see that Lagrangian pair is not determined by the decomposable form. Any decomposable Monge--Amp{\`e}re system with Lagrangian pair $(E_1, E_2)$ is of Lagrangian type in the sense of \cite{M-M}, and the Lagrangian subbundle $E_1$ is obtained as the characteristic system of the Monge--Amp{\`e}re system (see also \cite{Mo2} and \cite[Chapter~V]{B-C-3G}). However the complementary Lagrangian subbundle~$E_2$ is not uniquely determined.
In Section~\ref{Lagrangian pair and bi-decomposable forms.}, we show a close relation between Lagrangian pairs and bi-decomposable forms, and give an answer to the above questions by showing that a bi-decomposable Monge--Amp{\`e}re system has a unique $n$-form as a local generator up to multiplication by a non-zero function and modulo the contact form (Theorem~\ref{bi-decomposable-form}) and that such a bi-decomposable form determines the associated Lagrangian pair $(E_1, E_2)$ uniquely, provided $n\geq 3$ (Theorem~\ref{Uniqueness}). Thus we see that any automorphism of a given Monge--Amp{\`e}re system with Lagrangian pair induces an automorphism of the underlying Lagrangian pair, if $n \geq 3$. Note that the Lagrangian pair is not determined by the bi-decomposable form if $n = 2$ (Remark~\ref{n=2}).
{\bf 1.2.} It follows that the study of Monge--Amp{\`e}re systems with Lagrangian pairs has a close relation to the theory of Takeuchi \cite{Tak} on ``Lagrangian contact structures''.
From the viewpoint of geometric structures, the comparison of the Lagrangian contact structures and Monge--Amp{\`e}re systems with Lagrangian pairs goes as follows: we treat Monge--Amp{\`e}re systems with Lagrangian pairs on $M = P(T^*W)$, the projective cotangent bundle over a manifold~$W$ of dimension $n+1$ in the following two cases.
For the f\/irst case, if the base space $W$ has an af\/f\/ine structure, then $M = P(T^*W)$ has the natural Lagrangian contact structure, i.e., a Lagrangian pair, see~\cite{Tak}. Moreover a Monge--Amp{\`e}re system with the Lagrangian pair on~$M$ is naturally induced, if~$W$ has an equi-af\/f\/ine structure. Here an {\it equi-affine structure} on~$W$ means that $W$ is equipped with a torsion-free linear connection and a parallel volume form on $W$, see~\cite{N-S}. Furthermore if $W$ is the af\/f\/ine f\/lat ${{\mathbf{R}}}^{n+1}$ or the torus $T^{n+1}$, then we have the generalization of the Hessian constant equation ${\mathrm{Hess}}(f) = c$.
For the second case, we take a Lagrangian contact structure on $M=P(T^*W)$ or on the unit tangent bundle $T_1W$ over~$W$ with the projective structure induced from a Riemannian metric on~$W$. Recall that the projective structure is def\/ined as the equivalence class of the Levi-Civita connection, under the projective equivalence on torsion-free linear connections which is determined by the set of un-parametrized geodesics. Moreover we consider a Monge--Amp{\`e}re system with Lagrangian pair on~$M$ induced from the volume of the Riemannian metric on~$W$. Furthermore if~$W$ is a projectively f\/lat Riemannian manifold, that is, one of the spaces ${{\mathbf{E}}}^{n+1}$, $S^{n+1}$, $H^{n+1}$ with constant curvature, we obtain the generalization of the Gaussian curvature constant equation $K = c$ as an ``Euler--Lagrange'' Monge--Amp\`ere system (see Section~\ref{Hesse representations.} and Sections~\ref{Homogeneous M-A systems with Lagrangian pairs.}.2--\ref{Homogeneous M-A systems with Lagrangian pairs.}.5).
We summarize those subjects as the chart: \begin{gather*} \begin{array}{@{}c@{\,}c@{\,}c@{}} W^{n+1} & & M^{2n+1}=P(T^*W)
\\ \left[ {\begin{array}{@{}l@{}} {\mbox{\rm an equi-af\/f\/ine structure, }}\\ {\mbox{\rm the volume structure of a Riemannian metric}} \end{array} } \right]
& \longleftrightarrow & {\mbox{\rm a M-A system with Lagrangian pair}} \\ \downarrow & & \downarrow
\\ \left[ {\begin{array}{@{}l@{}} {\mbox{\rm an af\/f\/ine structure,}} \\ {\mbox{\rm a projective structure}} \end{array} } \right]
& \longleftrightarrow & {\mbox{\rm a Lagrangian contact structure}} \end{array} \end{gather*} Here the lower row indicates the underlying structures and the upper row indicates the additional structures.
{\bf 1.3.} In Section~\ref{Lagrangian contact structures.} we recall the theory of Takeuchi. In Section~\ref{Automorphisms}, we study the sym\-met\-ries of a Monge--Amp{\`e}re system with a Lagrangian pair. Using the results in this paper, we show that the local automorphisms of a Monge--Amp{\`e}re system with a Lagrangian pair form a f\/inite-dimensional Lie pseudo-group, provided $n\geq 3$. We determine the maximal dimension of the automorphism pseudo-groups of the Monge--Amp{\`e}re systems with f\/lat Lagrangian pairs (Theorem~\ref{maximal symmetry}).
Based on those aspects, we characterize a class of Monge--Amp{\`e}re systems which includes the equations ${\mathrm{Hess}}(f) = c$ and $K = c$, $c \not= 0$. In fact, the class of Monge--Amp{\`e}re equations of type \begin{gather*} \mathrm{Hess}(f) = F(x_1, \dots, x_n, f(x), p_1, \dots, p_n), \qquad F \not= 0, \end{gather*} is characterized and called the class of {\it Hesse Monge--Amp{\`e}re systems} in Section~\ref{Hesse representations.}. We observe that the class of Hesse Monge--Amp{\`e}re systems is invariant under the contact transformations in the cases $n \geq 3$ (Proposition~\ref{contact invariance}). For instance, the Legendre dual of the above equation is well def\/ined and given by \begin{gather*} \mathrm{Hess}(f) = \frac{1}{F\left(p_1, \dots, p_n, \sum\limits_{i=1}^n x_i p_i - f(x), x_1, \dots, x_n\right)}. \end{gather*} Note that in the case $n = 2$, ${\rm Hess}(f) = \pm 1$ is transformed to the Laplace equation \mbox{$f_{x_1x_1} {+} f_{x_2x_2} {=} 0$} or to the wave equation $f_{x_1x_1} - f_{x_2x_2} = 0$. Therefore the class of Hesse Monge--Amp{\`e}re systems is not invariant under the contact transformations in the case $n = 2$.
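The displayed Legendre dual rests on the classical duality ${\mathrm{Hess}}(f) \mapsto 1/{\mathrm{Hess}}(f)$ under the Legendre transformation $(x, f) \mapsto \big(p, \sum\limits_i x_i p_i - f\big)$. A minimal {\tt sympy} sketch of this duality for a quadratic test function in the case $n = 2$ (the parametrization is chosen only for illustration):
\begin{verbatim}
# Quadratic test of Hess(g) = 1/Hess(f) for the Legendre transform
# g(p) = x1*p1 + x2*p2 - f(x), with p = grad f(x).
import sympy as sp

x1, x2, p1, p2 = sp.symbols('x1 x2 p1 p2', real=True)
a, b, c = sp.symbols('a b c', positive=True)

f = sp.Rational(1, 2)*(a*x1**2 + 2*b*x1*x2 + c*x2**2)
sol = sp.solve([sp.Eq(p1, sp.diff(f, x1)), sp.Eq(p2, sp.diff(f, x2))],
               [x1, x2])
g = sp.simplify((x1*p1 + x2*p2 - f).subs(sol))        # Legendre transform of f

hess_f = sp.simplify(sp.hessian(f, (x1, x2)).det())   # = a*c - b^2
hess_g = sp.simplify(sp.hessian(g, (p1, p2)).det())
print(sp.simplify(hess_g - 1/hess_f))                 # expected: 0
\end{verbatim}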
In the case $n = 3$, any Monge--Amp{\`e}re system of the class is given by a non-degenerate three-form which is decomposed uniquely, up to ordering, into two decomposable forms (see \cite{B1,H,K-L-R,L-R-C} and also Section~\ref{Lagrangian pair and bi-decomposable forms.}). This fact and its generalizations are the basic reasons behind the above observation.
Further, we provide a unif\/ied picture of various subclasses of Monge--Amp{\`e}re equations, with signif\/icant examples in arbitrary dimensions arising from various geometric frameworks.
In Section~\ref{A method to construct systems with Lagrangian pairs.}, we introduce a general method to construct Euler--Lagrange Monge--Amp{\`e}re systems. We apply this construction to several situations and obtain illustrative examples. In Section~\ref{Homogeneous M-A systems with Lagrangian pairs.}, based on the general method, we show that homogeneous Monge--Amp{\`e}re systems with f\/lat Lagrangian pairs arise in a very natural manner, in equi-af\/f\/ine geometry, in Euclidean geometry, in sphere geometry, in hyperbolic geometry, and moreover in Minkowski geometry. For these geometries, we construct Monge--Amp{\`e}re systems with Lagrangian pairs explicitly and globally. Moreover we show that the estimate proved in Section~\ref{Automorphisms} is best possible by providing an example with the maximal symmetry.
{\bf 1.4.} In this paper we treat, as underlying manifolds for Monge--Amp{\`e}re systems, contact manifolds of dimensions $\geq 5$. We remark that $3$-dimensional contact manifolds with Lagrangian pairs are related to second-order ordinary dif\/ferential equations with normal form. They are studied in detail in \cite{A,I-L}.
As in \cite{B1,B2,K-L-R, L-R-C}, Lagrangian pairs can be formulated, at least locally, on symplectic manifolds by means of a reduction process. In this paper we adopt the contact framework of Monge--Amp{\`e}re equations based on Lagrangian contact structures.
We remark that the paper \cite{M-M} treats in detail the class of decomposable Monge--Amp{\`e}re systems or Monge--Amp{\`e}re systems with one decomposability, which are modeled on the Monge--Amp{\`e}re equation $\mathrm{Hess}(z) =0$.
We will solve the equivalence problem of Monge--Amp{\`e}re systems with Lagrangian pairs in the subsequent paper.
\section[Monge-Amp{\`e}re systems and Lagrangian pairs]{Monge--Amp{\`e}re systems and Lagrangian pairs} \label{Monge-Ampere systems and Lagrangian pairs}
We start with the general def\/inition of Monge--Amp{\`e}re systems \cite{B-G-G,L, Mo1,Mo2}. Recall that a~contact structure $D$ on a~mani\-fold~$M$ is a subbundle of~$TM$ of codimension one, locally def\/ined as $D = \{ \theta = 0\}$ by a~contact $1$-form $\theta$ such that~$d\theta$ is non-degenerate, that is, symplectic, on the bundle~$D$. A~manifold endowed with a contact structure is called a contact manifold. It is known that the dimension of a contact manifold is odd.
Let $(M, D)$ be a contact manifold of dimension $2n+1$ with the contact structure \mbox{$D \subset TM$}. A~{\it Monge--Amp{\`e}re system} on $M$ is by def\/inition an exterior dif\/ferential system ${\mathcal M} \subset \Omega_M$ ge\-ne\-rated locally by a contact form $\theta$ for $D$ and an $n$-form $\omega$ on~$M$: for each point $x \in M$, there exists an open neighborhood~$U$ of~$x$ in~$M$ such that, algebraically, \begin{gather*} {\mathcal M}\vert_U = \langle \theta, d\theta, \omega \rangle_{\Omega_U}. \end{gather*} Here $\Omega_M$ (resp.\ $\Omega_U$) is the sheaf of germs of exterior dif\/ferential forms on $M$ (resp.\ on~$U$). In this case we call $\omega$ a {\it local generator} of the Monge--Amp{\`e}re system ${\mathcal M}$ (modulo the contact ideal $\langle \theta, d\theta \rangle$). Note that one may assume that $\omega$ is ef\/fective, i.e., $d\theta \wedge \omega \equiv 0\ (\rm{mod}\ \theta)$. Also note that the $(n+1)$-form $d\omega$ belongs to the contact ideal locally necessarily (see \cite{B1,B2, L}).
Let $D \subset TM$ be a vector bundle of rank $2n$ in the tangent bundle of a manifold $M$. Recall that a conformal symplectic structure on $D$ is a reduction of the structure group of $D$ to the conformal symplectic group ${\rm CSp}({\mathbf{R}}^{2n})$. If $(M, D)$ is a contact manifold of dimension $2n+1$, then the conformal symplectic structure on $D$ is def\/ined locally by $d\theta$ for a contact form $\theta$ which gives $D$ locally. In particular, for each point $x \in M$, $D_x$ has the symplectic structure which is determined uniquely up to a multiplication of a non-zero constant. We call a linear subspace $W \subset D_x$ {\it Lagrangian} if $W$ is isotropic for the conformal symplectic structure on $D_x$ and $\dim_{{\mathbf{R}}} W = n$. A subbundle $E \subset D$ is called a {\it Lagrangian subbundle} if $E_x \subset D_x$ is Lagrangian for any $x \in M$.
Now we def\/ine the key notion in this paper.
\begin{Def} Let $(M, D)$ be a contact manifold. A {\it Lagrangian pair} is a pair $(E_1, E_2)$ of Lagrangian subbundles of $D$ with respect to the conformal symplectic structure on $D$ which satisf\/ies the condition $D = E_1 \oplus E_2$. \end{Def}
\begin{Rem} In \cite[\S~5.2]{B}, the notion of bi-Lagrangian structure is def\/ined as the transverse pair of Lagrangian foliations in a symplectic manifold. Since we treat the contact case, it might be natural to use the terminology ``Legendrian'' instead of ``Lagrangian''. However we would like to use ``Lagrangian'' in the general cases, and to use the terminology ``Legendrian'' just for the integrable cases, such as ``Legendrian submanifolds'' and ``Legendrian f\/ibrations'' (see Section~\ref{Hesse representations.}). \end{Rem}
The standard example of a Lagrangian pair is given as follows.
{\bf The standard example.} On the standard Darboux model, consider ${\mathbf{R}}^{2n+1}$ with coordinates $(x_1,\dots,x_n,z,p_1,\dots,p_n)$ and with the standard contact structure
$D_{\rm{st}} = \{ v \in TM \,|\, \theta_{\rm{st}}(v) = 0\}$ def\/ined by the standard contact form $\theta_{\rm{st}} = dz - \sum\limits_{i=1}^np_idx_i$. Then we set \begin{gather*} E^{\rm{st}}_1 =
\{ v \in D_{\rm{st}} \,|\, dp_1(v) = \cdots = dp_n(v) = 0 \} = \left\langle \frac{\partial}{\partial x_1} + p_1\frac{\partial}{\partial z}, \dots, \frac{\partial}{\partial x_n} + p_n\frac{\partial}{\partial z}\right\rangle, \\ E^{\rm{st}}_2 =
\{ u \in D_{\rm{st}} \,|\, dx_1(u) = \cdots = dx_n(u) = 0\} = \left\langle \frac{\partial}{\partial p_1}, \dots, \frac{\partial}{\partial p_n}\right\rangle. \end{gather*} Then $(E^{\rm{st}}_1, E^{\rm{st}}_2)$ is a Lagrangian pair on $({\mathbf{R}}^{2n+1}, D_{\rm{st}})$.
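The Lagrangian property of $(E^{\rm{st}}_1, E^{\rm{st}}_2)$ can be checked by evaluating $d\theta_{\rm{st}}$ on the frames given above. A minimal numerical sketch (here $n = 3$, the coordinates are ordered $(x_1, \dots, x_n, z, p_1, \dots, p_n)$, and the chosen point is arbitrary):
\begin{verbatim}
# Check that E1^st and E2^st are isotropic for d(theta_st) and pair
# nondegenerately, at one point (n = 3, illustrative values).
import numpy as np

n = 3
p = np.array([0.7, -1.2, 2.0])           # p-coordinates of the point
dim = 2*n + 1                            # order: (x_1..x_n, z, p_1..p_n)

# d(theta_st) = sum_i dx_i ^ dp_i as an antisymmetric matrix Omega
Omega = np.zeros((dim, dim))
for i in range(n):
    Omega[i, n + 1 + i] = 1.0
    Omega[n + 1 + i, i] = -1.0

# frames: X_i = d/dx_i + p_i d/dz span E1^st,  P_j = d/dp_j span E2^st
X = np.zeros((n, dim)); P = np.zeros((n, dim))
for i in range(n):
    X[i, i] = 1.0; X[i, n] = p[i]
    P[i, n + 1 + i] = 1.0

print(np.allclose(X @ Omega @ X.T, 0))           # E1^st is Lagrangian
print(np.allclose(P @ Omega @ P.T, 0))           # E2^st is Lagrangian
print(np.allclose(X @ Omega @ P.T, np.eye(n)))   # nondegenerate pairing
\end{verbatim}
All three printed values should be {\tt True}.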
Note that, in \cite{Tak}, Takeuchi called a contact structure $D$ endowed with a Lagrangian pair $(E_1,E_2)$ a {\it Lagrangian contact structure} $(D; E_1,E_2)$ and gave a detailed study on this geometric structure.
Moreover we consider an exterior dif\/ferential system associated to a given Lagrangian pair or Lagrangian contact structure.
\begin{Def} A Monge--Amp{\`e}re system $\mathcal{M}$ is called a {\it decomposable Monge--Amp{\`e}re system with a Lagrangian pair} $(E_1,E_2)$ if in a neighborhood of each point of $M$, there exists a local generator $\omega$ of $\mathcal{M}$ satisfying the following {\it decomposing condition}: \begin{gather*} i_u\omega = 0\ (u\in E_1), \quad \text{and} \quad \omega\vert_{E_2}\ \text{is a volume form on}\ E_2. \end{gather*} \end{Def}
Such a decomposable Monge--Amp{\`e}re system is of Lagrangian type, with the characteristic system $E_1$, in the sense of~\cite{M-M}. Note that this class of Monge--Amp{\`e}re equations has been introduced in~\cite{G}. Also note that decomposable Monge--Amp{\`e}re systems are studied from another perspective in~\cite{A-B-M-P} by the name of Goursat equations.
\begin{Def} A Monge--Amp{\`e}re system $\mathcal{M}$ is called a {\it bi-decomposable Monge--Amp{\`e}re system with a Lagrangian pair} $(E_1,E_2)$ if, around each point of $M$, there exists a local generator~$\omega$ of~$\mathcal{M}$ of the form \begin{gather*} \omega=\omega_1-\omega_2 \end{gather*} by $n$-forms $\omega_1$ and $\omega_2$ satisfying the following {\it bi-decomposing condition}: \begin{gather*} i_u\omega_1=0\quad (u\in E_2),\qquad i_v\omega_2=0\quad (v\in E_1), \\
\omega_1| _{E_1}\ \text{is a volume form on}\ E_1,\quad
\mathrm{and}\quad \omega_2| _{E_2}\ \text{is a volume form on}\ E_2. \end{gather*} We call such an $n$-form $\omega$ a {\it bi-decomposable form} and $(\omega_1, \omega_2)$ a {\it bi-decomposition} of~$\omega$. Given a~Lagrangian pair $(E_1,E_2)$ on $(M, D)$ and $n$-forms $\omega_1$, $\omega_2$ satisfying the bi-decomposing condition for $(E_1,E_2)$, we def\/ine a Monge--Amp{\`e}re system ${\mathcal M}$ with Lagrangian pair by setting $\omega = \omega_1-\omega_2$. \end{Def}
\begin{Rem} An immersion $f\colon L \to M$ of an $n$-dimensional manifold $L$ to $M$ is called a~{\it geometric solution} of a Monge--Amp{\`e}re system ${\mathcal M} = \langle \theta, d\theta, \omega\rangle$ if $f^*{\mathcal M} = 0$, namely, if $f^*\theta = 0$, i.e., $f$ is a Legendrian immersion, and $f^*\omega = 0$.
Any geometric solution to a decomposable (resp.\ bi-decomposable) Monge--Amp{\`e}re system~${\mathcal M}$ with a Lagrangian pair $D = E_1\oplus E_2$ on a contact manifold $(M^{2n+1}, D)$ has a crucial property.
In fact, a Legendrian immersion $f\colon L \to M$ is a geometric solution of a decomposable Monge--Amp{\`e}re system with a~Lagrangian pair $D = E_1\oplus E_2$ if and only if $f_*(T_pL) \cap (E_1)_{p} \not= \{ 0\}$ for every $p \in L$.
For a bi-decomposable Monge--Amp{\`e}re system ${\mathcal M}$ with a Lagrangian pair $D = E_1\oplus E_2$, suppose that $f\colon L \to M$ is a geometric solution of ${\mathcal M}$. Then, for any $p \in L$, we see that \begin{gather*} E_1 \cap f_*(T_pL) = \{ 0\},\qquad \text{if and only if}\quad E_2 \cap f_*(T_pL) = \{ 0\}, \end{gather*} equivalently, \begin{gather*} E_1 \cap f_*(T_pL) \not= \{ 0\},\qquad \text{if and only if}\quad E_2 \cap f_*(T_pL) \not= \{ 0\}. \end{gather*} In fact, let $W$ be an $n$-plane in $D_p = (E_1)_p \oplus (E_2)_p$ satisfying $\omega\vert_W = 0$. The direct sum decomposition def\/ines projections $\pi_1\colon D_p \to (E_1)_p$ and $\pi_2\colon D_p \to (E_2)_p$. Then $E_1 \cap W = \{ 0\}$ if and only if $\pi_2\vert_{W}\colon W \to (E_2)_p$ is an isomorphism, and $E_2 \cap W = \{ 0\}$ if and only if $\pi_1\vert_{W}\colon W \to (E_1)_p$ is an isomorphism. For any bi-decomposition $\omega = \omega_1 - \omega_2$ and for any basis $u_1, \dots, u_n$ of $W$, we have \begin{gather*} \omega_1(\pi_1(u_1), \dots, \pi_1(u_n)) = \omega_1(u_1, \dots, u_n) = \omega_2(u_1, \dots, u_n) = \omega_2(\pi_2(u_1), \dots, \pi_2(u_n)). \end{gather*} Using the bi-decomposing condition again, we see that $\pi_1\vert_{W}$ is an isomorphism if and only if the leftmost term is non-zero, and this is equivalent to the condition that $\pi_2\vert_{W}$ is an isomorphism. \end{Rem}
Now we are led to natural questions: \begin{itemize}\itemsep=0pt \item Is the Lagrangian pair $(E_1,E_2)$ uniquely determined by a decomposable form?
\item Is the Lagrangian pair $(E_1,E_2)$ uniquely determined by a bi-decomposable form?
\item Is the Lagrangian pair $(E_1, E_2)$ recovered only from the Monge--Amp{\`e}re system $\mathcal{M}$? \end{itemize}
As stated in the Introduction, the f\/irst question is answered negatively. To answer the second and third questions, we recall some basic def\/initions.
\begin{Def} \label{isomorphism} Let $(M, D)$, $(M', D')$ be contact manifolds of dimension $2n+1$, and ${\mathcal M}$, ${\mathcal M'}$ Monge--Amp{\`e}re systems on contact manifolds $(M,D)$, $(M',D')$ respectively. A dif\/feomorphism $\Phi\colon M \longrightarrow M'$ is called an {\it isomorphism of Monge--Amp{\`e}re systems} if (1)~$\Phi$ is a~contactomorphism, namely $(\Phi_*)D=D'$, and (2)~${\Phi}^*{\mathcal M'}={\mathcal M}$. \end{Def}
Now suppose that contact manifolds $(M, D)$, $(M', D')$ of dimension $2n+1$ are endowed with Lagrangian pairs $(E_1, E_2)$, $(E'_1, E'_2)$ respectively, namely, that the decompositions $D = E_1\oplus E_2$ and $D' = E'_1\oplus E'_2$ are given. Suppose $n \geq 3$. Then, from the result in Section~\ref{Lagrangian pair and bi-decomposable forms.} mentioned above, we have that any isomorphism ${\Phi}$ of a Monge--Amp{\`e}re system ${\mathcal M}$ with the Lagrangian pair $(E_1, E_2)$ and a Monge--Amp{\`e}re system ${\mathcal M}'$ with the Lagrangian pair $(E'_1, E'_2)$ necessarily preserves the Lagrangian pairs up to ordering, namely, $(\Phi_*)E_1 = E'_1$, $(\Phi_*)E_2 = E'_2$ or $(\Phi_*)E_1 = E'_2$, $(\Phi_*)E_2 = E'_1$.
In the case $n = 2$, a Lagrangian pair is not uniquely determined from a Monge--Amp{\`e}re system and moreover, the automorphism pseudo-group may be of inf\/inite dimension. For instance, consider the equation ${\rm Hess} = -1$ which is isomorphic to the linear wave equation $f_{x_1x_1} - f_{x_2x_2} = 0$ and to the equation $f_{x_1x_2} = 0$. Then the last equation has inf\/inite-dimensional automorphisms induced by dif\/feomorphisms $(x_1, x_2) \mapsto (X_1(x_1), X_2(x_2))$.
\section{Lagrangian pair and bi-decomposable form} \label{Lagrangian pair and bi-decomposable forms.}
Let $(M, D)$ be a contact manifold of dimension $2n+1$ with a contact structure $D \subset TM$. We have def\/ined in Section~\ref{Monge-Ampere systems and Lagrangian pairs} the notion of bi-decomposing conditions and bi-decomposable forms on~$(M, D)$. Then, f\/irst we show
\begin{Lem} \label{bi-decomposability} Let $\omega$ be an $n$-form on a contact manifold $(M, D)$,
$D = \{ u \in TM \,|\, \theta(u) = 0\}$ for a local contact form $\theta$ defining $D$, and $(E_1, E_2)$ a Lagrangian pair of~$D$. Assume that $\omega$ is a~bi-decomposable form for $(E_1, E_2)$, and $\omega = \omega_1 - \omega_2$ is any bi-decomposition of $\omega$ for $(E_1, E_2)$. Then locally there exists a coframe $\theta, \alpha_1, \dots, \alpha_n, \beta_1, \dots, \beta_n$ of $T^*M$ such that \begin{gather*}
E_1 = \{ v \in D \,|\, \beta_1(v) = \cdots = \beta_n(v) = 0\}, \qquad E_2 = \{ u \in D \,|\, \alpha_1(u) = \cdots = \alpha_n(u) = 0\}, \end{gather*} and that the $n$-forms \begin{gather*} \widetilde{\omega}_1 = \alpha_1\wedge \cdots \wedge \alpha_n, \qquad \widetilde{\omega}_2 = \beta_1\wedge \cdots \wedge \beta_n \end{gather*} satisfy the bi-decomposing condition for $(E_1, E_2)$ with \begin{gather*} \widetilde{\omega}_1 \equiv \omega_1, \qquad \widetilde{\omega}_2 \equiv \omega_2, \qquad \omega \equiv \widetilde{\omega}_1 - \widetilde{\omega}_2 \end{gather*} up to a multiple of $\theta$. \end{Lem}
\begin{proof} The proof is based on the fact that the symplectic group on a f\/inite-dimensional symplectic vector space acts transitively on the set of transversal pairs of Lagrangian subspaces.
Let $X_1, \dots, X_n$ and $P_1, \dots, P_n$ be local frames of~$E_1$ and~$E_2$ respectively. Let~$R$ be the Reeb vector f\/ield for a local contact form $\theta$ def\/ining $D$. Recall that~$R$ is def\/ined uniquely by $i_R \theta = 1$, $i_R d\theta = 0$. Consider the dual coframe $\theta, \alpha_1,\dots,\alpha_n, \beta_1,\dots,\beta_n$ of~$T^*M$ to the frame~$R$, $X_1, \dots, X_n$, $P_1, \dots, P_n$ of~$TM$. Then we see, from the bi-decomposing condition, that there exist an $(n-1)$-form $\gamma$ and a~non-vanishing function $\mu$ on $M$ such that $\omega_1 = \mu(\alpha_1 \wedge \cdots \wedge \alpha_n) + \theta\wedge\gamma$. By replacing $\alpha_1$ by $\mu\alpha_1$, we may suppose $\mu \equiv 1$. Similarly, we have $\omega_2 \equiv \beta_1 \wedge \cdots \wedge \beta_n \ \rm{mod}\ \theta$. \end{proof}
Next we show that $\mathcal{M}$ has a unique bi-decomposable local generator~$\omega$, up to multiplication by a non-vanishing function and modulo~$\theta$.
\begin{Th} \label{bi-decomposable-form} Let $(E_1, E_2)$ be a Lagrangian pair and $\omega$, $\omega'$ be two bi-decomposable $n$-forms for the Lagrangian pair $(E_1, E_2)$ on $(M, D)$. Assume that they generate the same Monge--Amp{\`e}re system \begin{gather*} {\mathcal M} = \langle \theta, d\theta, \omega \rangle = \langle \theta, d\theta, \omega' \rangle. \end{gather*} Then there exist locally a non-vanishing function $\mu$ and an $(n-1)$-form $\eta$ on $M$ such that $\omega' = \mu \omega + \theta \wedge \eta$. \end{Th}
To show Theorem~\ref{bi-decomposable-form}, we study, for each $x \in M$, the symplectic exterior linear algebra on the symplectic vector space $V = D_x$ with the symplectic form $\Theta = d\theta\vert_{D_x}$ and with the decomposition $V = V_1\oplus V_2$, $V_1 = (E_1)_x$, $V_2 = (E_2)_x$, of $(V, \Theta)$ into Lagrangian subspaces.
Let $(V,\Theta)$ be a $2n$-dimensional symplectic vector space. We say that an $n$-form $\omega \in \wedge^n V^*$ is {\it bi-decomposable} if there exist a decomposition $V = V_1\oplus V_2$ of~$V$ into Lagrangian sub\-spa\-ces~$V_1$,~$V_2$ in~$V$ and $n$-forms $\omega_1, \omega_2 \in \wedge^n V^*$ such that $\omega = \omega_1 - \omega_2$, $i_u\omega_1 = 0$ $(u \in V_2)$, $i_v\omega_2 = 0$ $(v \in V_1)$, $\omega_1\vert_{V_1} \not= 0$, $\omega_2\vert_{V_2} \not= 0$. In this case $(\omega_1, \omega_2)$ is called a {\it bi-decomposition} of~$\omega$. Then, similarly to the proof of Lemma~\ref{bi-decomposability}, we have that there exists a basis $\{ a_1,\dots, a_n, b_1,\dots, b_n\}$ of~$V$ such that \begin{gather*} \omega_1 = a_1^*\wedge \cdots \wedge a_n^*, \qquad \omega_2 = b_1^*\wedge \cdots \wedge b_n^*, \end{gather*} and that \begin{gather*} V_1= \langle a_1,\dots, a_n\rangle, \qquad V_2= \langle b_1,\dots, b_n\rangle, \end{gather*} where $\{ a_1^*,\dots, a_n^*, b_1^*,\dots, b_n^*\}$ denotes the dual basis of $\{ a_1,\dots, a_n, b_1,\dots, b_n\}$. Note that $V_2$ (resp.~$V_1$) coincides with the annihilator of $a_1^*,\dots, a_n^*$ (resp.~$b_1^*,\dots, b_n^*$). Moreover we see that there exist a symplectic basis $\{ a_1,\dots, a_n, b_1,\dots, b_n\}$ of $(V, \Theta)$ and non-zero constants~$a$,~$b$ such that \begin{gather*} \omega = a a_1^*\wedge \cdots \wedge a_n^* - b b_1^*\wedge \cdots \wedge b_n^*. \end{gather*} If we replace $a_1$, $b_1$ by $b a_1$, $(1/b) b_1$ respectively, so that $a_1^*$, $b_1^*$ are replaced by $(1/b) a_1^*$, $b b_1^*$, and set $c = ab$, then we have the following:
\begin{Lem} \label{symplectic basis} Let $\omega \in \wedge^n V^*$ be a bi-decomposable $n$-form on a $2n$-dimensional symplectic vector space $(V, \Theta)$, and $\omega = \omega_1 - \omega_2$ a~bi-decomposition of~$\omega$. Then there exist a symplectic basis $\{ a_1,\dots, a_n, b_1,\dots, b_n\}$ of~$(V, \Theta)$ and a non-zero constant $c$ such that \begin{gather*} \omega_1 = c a_1^*\wedge \cdots \wedge a_n^*, \qquad \omega_2 = b_1^*\wedge \cdots \wedge b_n^*, \\ \omega = c a_1^*\wedge \cdots \wedge a_n^* - b_1^*\wedge \cdots \wedge b_n^*, \qquad \Theta = a_1^* \wedge b_1^* + \cdots + a_n^* \wedge b_n^*. \end{gather*} \end{Lem}
A form $\varphi \in \wedge^n V^*$ on a symplectic vector space $(V,\Theta)$ is called {\it effective} if the interior product $i_{X_{\Theta}}\varphi = 0$ for the $2$-vector $X_{\Theta}$ corresponding to the $2$-form~$\Theta$. Note that \begin{gather*} X_{\Theta} = a_1 \wedge b_1 + \cdots + a_n \wedge b_n \end{gather*} in terms of the basis in Lemma \ref{symplectic basis}. Note also that $\varphi \in \wedge^n V^*$ is ef\/fective if and only if the wedge $\varphi\wedge\Theta$ with the symplectic form~$\Theta$ is equal to zero~\cite{B2, L}. Then we have, by Lemma~\ref{symplectic basis}:
\begin{Cor} \label{effective} If $\omega \in \wedge^n V^*$ is bi-decomposable and $\omega = \omega_1 - \omega_2$ is any bi-decomposition, then~$\omega$,~$\omega_1$ and~$\omega_2$ are all effective. \end{Cor}
We will apply the following basic result to our situation.
\begin{Lem}[\protect{\cite[Theorem~1.6]{L}}]\label{Lych} Let $\omega$, $\omega'$ be effective $k$-forms on a symplectic vector space $(V, \Theta)$, $(0\leq k\leq n)$. Suppose that, for every isotropic subspace $L\subset V$ on which $\omega\vert_L = 0$, the form $\omega'$ also vanishes on~$L$, $\omega'\vert_L = 0$. Then we have $\omega' =\mu{\omega}$ for some~$\mu \in {\mathbf{R}}$. \end{Lem}
Then we have the following:
\begin{Lem} \label{bi-decomposable2} Let $\omega$, ${\omega}'$ be bi-decomposable forms on $(V, \Theta)$. Suppose that~${\omega}'$ is of the form $\lambda \omega+\phi \wedge \Theta$ for a scalar $\lambda$ and an $(n-2)$-form $\phi$. Then ${\omega}' = \mu\omega$ for a scalar~$\mu$. \end{Lem}
\begin{proof} By Corollary \ref{effective}, the $n$-form $\omega$ is ef\/fective and so is the $n$-form~${\omega}'$. For every Lagrangian subspace~$L$ $(\Theta\vert_L = 0)$ on which $\omega\vert_L = 0$, the form $\omega'$ also vanishes, since $\omega'\vert_L = \lambda\,\omega\vert_L + (\phi\wedge\Theta)\vert_L = 0$. Therefore, by Lemma~\ref{Lych}, it follows that $\omega' =\mu{\omega}$ for a scalar~$\mu$. \end{proof}
\begin{Rem} Lemma \ref{bi-decomposable2} can be shown, as an alternative proof, by applying the Lefschetz isomorphism $\Theta^2\colon \wedge^{n-2} V^* \to \wedge^{n+2} V^*$. In fact, from ${\omega}' - \lambda\omega = \phi \wedge \Theta$, wedging with $\Theta$ and using the ef\/fectivity of $\omega$ and ${\omega}'$, we have $\phi\wedge \Theta^2 = 0$, which implies~$\phi = 0$. \end{Rem}
\begin{proof}[Proof of Theorem \ref{bi-decomposable-form}] Since $\omega'$ belongs to ${\mathcal M} = \langle \theta, d\theta, \omega \rangle$, we set $\omega' = \lambda\omega + d\theta\wedge\phi + \theta\wedge \eta$, for a~function~$\lambda$, an $(n-2)$-form $\phi$ and an $(n-1)$-form $\eta$ on~$M$. For each $x \in M$, we have $\omega'\vert_{D_x} = \lambda(x)\omega\vert_{D_x} + \Theta\wedge\phi\vert_{D_x}$, where $\Theta = d\theta\vert_{D_x}$. Then, by Lemma~\ref{bi-decomposable2}, we have $\omega'\vert_{D_x} = \mu(x)\omega\vert_{D_x}$ for a scalar~$\mu(x)$ depending on $x \in M$. Since $\omega\vert_{(E_1)_x}$ is a volume form, we see that~$\mu(x)$ is unique and of class $C^\infty$. Moreover there exists a $C^\infty$ $(n-1)$-form $\eta$ such that $\omega' - \mu\omega = \theta \wedge \eta$, which implies the required consequence. \end{proof}
Moreover we show the following basic result: \begin{Th}\label{Uniqueness} Assume that $n\geq 3$. Let $\mathcal{M}= \langle \theta, d\theta, \omega \rangle$ be a Monge--Amp{\`e}re system with a~Lag\-rangian pair locally generated by a bi-decomposable $n$-form~$\omega$. Then~$\mathcal{M}$ canonically determines the Lagrangian pair~$(E_1,E_2)$. Namely, if $(\omega_1, \omega_2)$ $($resp.\ $(\omega'_1, \omega'_2))$ is a bi-decomposition of~$\omega$ for a~Lag\-rangian pair $(E_1,E_2)$ $($resp.~$(E_1', E_2'))$ enjoying the bi-decomposing condition, then we have \begin{gather*} E_1'=E_1,\quad E_2'=E_2, \qquad {\mathrm or} \qquad E_1' = E_2, \quad E_2' = E_1. \end{gather*} \end{Th}
Theorem \ref{Uniqueness} follows immediately from the following:
\begin{Prop} \label{uniqueness 2} Assume that $n\geq 3$. Let $(V,\Theta)$ be a $2n$-dimensional symplectic vector space. Let $\omega=\omega_1-\omega_2$ be a bi-decomposable $n$-form for a Lagrangian pair $(V_1, V_2)$ of $V$. Then the bi-decomposition $(\omega_1, \omega_2)$ of~$\omega$ is unique up to ordering: If $\omega=\omega'_1-\omega'_2$ is another bi-decomposition of $\omega$ for another Lagrangian pair $(V'_1, V'_2)$ of~$V$, then \begin{gather*} V'_1 = V_1, \qquad V'_2 = V_2, \qquad \omega'_1 = \omega_1, \qquad \omega'_2 = \omega_2, \qquad {\mathrm{or}} \\ V'_1 = V_2, \qquad V'_2 = V_1, \qquad \omega'_1 = - \omega_2, \qquad \omega'_2 = - \omega_1. \end{gather*} \end{Prop}
The bi-decomposition of $\omega$ (Proposition~\ref{uniqueness 2}) is given by using the symplectic structure, based on Hitchin's result \cite[Propositions 2.1,~2.2]{H}, in the case that $n$ is odd and $n \geq 3$, as follows:
Let $\omega = \omega_1 - \omega_2$, with $\omega_1 = c a_1^*\wedge \cdots \wedge a_n^*$ and $\omega_2 = b_1^*\wedge \cdots \wedge b_n^*$, as in Lemma~\ref{symplectic basis}. Let $\epsilon=a^*_1\wedge \cdots \wedge a^*_n \wedge b^*_1\wedge \cdots \wedge b^*_n$ be the associated basis vector for $\Lambda^{2n}V^*$, which is the volume form intrinsically def\/ined by the symplectic structure on~$V$. From the isomorphism $A\colon \Lambda^{2n-1}V^* \longrightarrow V\otimes \wedge^{2n}V^*$ induced by the exterior product $V^*\otimes\wedge^{2n-1}V^* \to \wedge^{2n}V^*$, we def\/ine, for each $\psi \in \wedge^nV^*$, a linear transformation $K_{\psi}=K_{\psi}^{\epsilon}\colon V \longrightarrow V$ by $K_{\psi}(u)\epsilon = A(i_u(\psi)\wedge \psi)$, and put $\lambda(\psi)=\lambda_{\epsilon}(\psi)= \frac{1}{2n}\operatorname{tr}(K_{\psi}^2)$.
In particular, we set $\psi = \omega$. Then we have $K_{\omega}a_i=-c a_i$, $K_{\omega}b_i = (-1)^{n+1} c b_i = cb_i$. Hence $K_{\omega}^2 = c^2\operatorname{id}$, therefore $\lambda(\omega) = c^2$. Since $K_{\omega}^*a_i^* = -ca_i^*$, $K_{\omega}^*b_i^* = cb_i^*$, we have \begin{gather*} K_{\omega}^*\omega = c(-c)^n a^*_1\wedge \cdots \wedge a^*_n - c^{n}b^*_1\wedge \cdots \wedge b^*_n = (-c)^n(\omega_1 + \omega_2). \end{gather*} Thus, from $\omega={\omega}_1-{\omega}_2$, $-\frac{1}{c^n}K_{\omega}^*\omega={\omega}_1+{\omega}_2$ and $\lambda(\omega) = c^2$, we see \begin{gather*} \omega_1 = \frac{1}{2}\big(\omega \mp \lambda(\omega)^{-\frac{n}{2}} K_{\omega}^*\omega\big),\qquad \omega_2= \frac{1}{2}\big({-}\omega \mp \lambda(\omega)^{-\frac{n}{2}} K_{\omega}^*\omega\big), \end{gather*} where the upper signs are taken for $c > 0$ and the lower signs for $c < 0$. Since $K_{\omega}^*\omega$ and $\lambda(\omega)$ are intrinsically determined by $\omega$ and the symplectic structure on~$V$, so is the decomposition of~$\omega$.
To verify Proposition~\ref{uniqueness 2} in the general case, we observe a fact from the projective geometry of Pl\"{u}cker embeddings. Consider the Grassmannian ${\rm Gr}(n, V^*)$ consisting of all $n$-dimensional subspaces of the $2n$-dimensional vector space $V^*$, and its Pl\"{u}cker embedding ${\rm Gr}(n, V^*) \hookrightarrow P(\wedge^{n} V^*)$.
\begin{Lem} \label{proj.geom} Let $\Lambda_1, \Lambda_2 \in {\rm Gr}(n, V^*)$ with $\Lambda_1 \cap \Lambda_2 = \{ 0\}$. \begin{enumerate}\itemsep=0pt \item[$(1)$] The projective line in $P(\wedge^{n} V^*)$ through the two points~$\Lambda_1$,~$\Lambda_2$ does not intersect with ${\rm Gr}(n, V^*)$ at other points.
\item[$(2)$] Assume $n \geq 3$. Let $\Lambda_3$ be another point in~${\rm Gr}(n, V^*)$ different from $\Lambda_1$, $\Lambda_2$. Then the projective plane in $P(\wedge^{n} V^*)$ through the three points $\Lambda_1$, $\Lambda_2$, $\Lambda_3$ does not intersect with ${\rm Gr}(n, V^*)$ at other points. \end{enumerate} \end{Lem}
\begin{proof} Choose a basis $e_1, \dots, e_n$ of $\Lambda_1$ and $e_{n+1}, \dots, e_{2n}$ of $\Lambda_2$ to form a basis of~$V^*$. Let $( p_{i_1, i_2, \dots, i_n})$ denote Pl\"{u}cker coordinates of the Pl\"{u}cker embedding. Here $(i_1, i_2, \dots, i_n)$ runs over $n$-tuples chosen from $\{ 1, 2, \dots, 2n\}$. For any sequences $1 \leq j_1 < j_2 < \cdots < j_{n-1} \leq 2n$ of $(n-1)$-letters and $1 \leq k_1 < k_2 < \cdots < k_{n+1} \leq 2n$ of $(n+1)$-letters, we have the Pl\"{u}cker relation \begin{gather*} \sum_{\ell = 1}^{n+1} (-1)^\ell p_{j_1, j_2, \dots, j_{n-1}, k_{\ell}} p_{k_1, k_2, \dots, \breve{k}_\ell, \dots, k_{n+1}} = 0 \end{gather*} (see \cite{G-K-Z,G-H} for instance). To see (1), take a point $W \in {\rm Gr}(n, V^*)$ on the projective line through~$\Lambda_1$,~$\Lambda_2$. Since Pl\"{u}cker coordinates of $\Lambda_1$ (resp.\ $\Lambda_2$) are given by $p_{1,\dots,n} = 1$ (resp.\ $p_{n+1, \dots, 2n} = 1$) with all other coordinates zero, we have that Pl\"{u}cker coordinates of~$W$ are given by $p_{1,\dots,n} = \lambda$, $p_{n+1, \dots, 2n} = \mu$ for some~$\lambda$,~$\mu$, all other coordinates $p_{i_1, \dots, i_n}$ being zero. Then, by applying the Pl\"{u}cker relation for $(j_1, \dots, j_{n-1}) = (1, 2, \dots, n-1)$ and $(k_1, k_2, \dots, k_{n+1}) = (n, n+1, \dots, 2n)$, we have $\lambda\mu = 0$. Therefore $\lambda = 0$ or $\mu = 0$, namely, $W = \Lambda_1$ or $W = \Lambda_2$.
To see (2), let $\bar{p}=(\bar{p}_{i_1, i_2, \dots, i_n})$ be coordinates of~$\Lambda_3$. Then the coordinates $p=(p_{i_1, i_2, \dots, i_n})$ for a point~$W$ on the plane through $\Lambda_1$, $\Lambda_2$, $\Lambda_3$ are given by \begin{gather*} p_{1,\dots,n} = \lambda + \bar{p}_{1,\dots,n}, \qquad p_{n+1, \dots, 2n} = \mu + \bar{p}_{n+1, \dots, 2n}, \qquad p_{i_1, \dots, i_n} = \bar{p}_{i_1, \dots, i_n}, \end{gather*} $\{i_1, \dots, i_n\} \not= \{ 1, \dots, n\}, \{ n+1, \dots, 2n\}$. Suppose $W \in {\rm Gr}(n, V^*)$, $W \not= \Lambda_1, \Lambda_2, \Lambda_3$. Then we see that both~$p$ and~$\bar{p}$ satisfy the Pl\"{u}cker relations and that $\lambda\mu\not= 0$. Let $\{ i_1, \dots, i_n \} \not= \{ 1, \dots, n\}, \{ n+1, \dots, 2n\}$. Then, because $n \geq 3$, there exist $\ell$, $\ell'$, $\ell \not= \ell'$, such that $i_\ell \not\in \{ n+1, \dots, 2n\}$ and $i_{\ell'} \not\in \{ n+1, \dots, 2n\}$ or that $i_\ell \not\in \{ 1, \dots, n\}$ and $i_{\ell'} \not\in \{ 1, \dots, n\}$. In the former case, we take $(i_1, \dots, \breve{i}_\ell, \dots, i_n)$, $(i_\ell, n+1, \dots, 2n)$, write the Pl\"{u}cker relation for $p$ and for $\bar{p}$, and subtract, to get $\mu\cdot\bar{p}_{i_1, \dots, i_n} = 0$. In the latter case, we take $(i_1, \dots, \breve{i}_\ell, \dots, i_n), (1, \dots, n, i_\ell)$ to get similarly $\lambda\cdot\bar{p}_{i_1, \dots, i_n} = 0$. Therefore we obtain that $\bar{p}_{i_1, \dots, i_n} = 0$ for $\{ i_1, \dots, i_n \} \not= \{ 1, \dots, n\}, \{ n+1, \dots, 2n\}$. This means that $\Lambda_3$ lies on the projective line through~$\Lambda_1$,~$\Lambda_2$, which contradicts~(1). \end{proof}
\begin{proof}[Proof of Proposition \ref{uniqueness 2}] Set $\omega_1 - \omega_2 = \omega'_1 - \omega'_2$. Then $\omega_1 - \omega_2 - \omega'_1$ is equal to a decomposable form $- \omega'_2$. Let $\Lambda_1$, $\Lambda_2$, $\Lambda'_1$, $\Lambda'_2$ be points in ${\rm Gr}(n, V^*) \subset P(\wedge^n V^*)$ corresponding to $\omega_1$, $\omega_2$, $\omega'_1$, $\omega'_2$ respectively. Then the projective plane through $\Lambda_1$, $\Lambda_2$, $\Lambda'_1$ intersects with~${\rm Gr}(n, V^*)$ also at~$\Lambda'_2$. Note that $\Lambda'_1 \not= \Lambda'_2$ as well as $\Lambda_1 \not= \Lambda_2$. Suppose $\Lambda'_1 \not= \Lambda_1, \Lambda_2$. By Lemma~\ref{proj.geom}(2), we have $\Lambda'_2 = \Lambda_1$ or $\Lambda'_2 = \Lambda_2$. Then we have that~$\Lambda'_1$ lies on the line through $\Lambda_1$, $\Lambda_2$, and by Lemma~\ref{proj.geom}(1) we conclude that $\Lambda'_1 = \Lambda_1$ or $\Lambda'_1 = \Lambda_2$, contradicting the supposition. Hence $\Lambda'_1 = \Lambda_1$ or $\Lambda'_1 = \Lambda_2$ in any case. If $\omega'_1 = \lambda\omega_1$ for some $\lambda \not= 0$, then $V'_2 = V_2$ as the annihilator of $\omega_1$ or $\omega'_1$. If $\omega'_1 = \mu\omega_2$ for some $\mu \not= 0$, then we have $V'_2 = V_1$. By the symmetric argument, we obtain also that $V'_1 = V_1$ or $V'_1 = V_2$. Thus we have $V_1' = V_1$, $V_2' = V_2$ or $V_1' = V_2$, $V_2' = V_1$. If $V_1' = V_1$, $V_2' = V_2$, then, restricting $\omega$ to $V = V_1 \oplus V_2$, we have $\omega'_1 = \omega_1$, $\omega'_2 = \omega_2$. If $V_1' = V_2$, $V_2' = V_1$, then we have $\omega'_1 = -\omega_2$, $\omega'_2 = - \omega_1$. \end{proof}
\begin{Rem} \label{n=2} If $n=2$, that is, $M$ is of dimension~$5$, then Theorem~\ref{Uniqueness} does not hold. In fact, consider~$M={\mathbf{R}}^5$ with coordinates $(x,y,z,p,q)$ and with the contact form $\theta=dz-pdx-qdy$. Take a $2$-form \begin{gather*} \omega=dx\wedge dy-dp\wedge dq. \end{gather*} Then decomposable $2$-forms $\omega_1 = dx\wedge dy$, $\omega_2=dp\wedge dq$ satisfy $\omega = \omega_1 - \omega_2$ and the bi-decomposing condition for the Lagrangian pair given by \begin{gather*}
E_1= \big\{ v \in T{\mathbf{R}}^5 \,|\, \theta(v) = dp(v) = dq(v) = 0 \big\} = \left\langle \frac{\partial}{\partial x}+ p\frac{\partial}{\partial z},\frac{\partial}{\partial y}+ q\frac{\partial}{\partial z}\right\rangle, \\
E_2= \big\{ u \in T{\mathbf{R}}^5 \,|\, \theta(u) = dx(u) = dy(u) = 0\big\} = \left\langle \frac{\partial}{\partial p}, \frac{\partial}{\partial q}\right\rangle. \end{gather*} Then we can f\/ind other decomposable $2$-forms $\omega'_1= d(x+p)\wedge dy$, $\omega'_2=dp\wedge d(y+q)$ satisfying $\omega=\omega'_1-\omega'_2$ and the bi-decomposing condition for another Lagrangian pair given by \begin{gather*}
E'_1 = \big\{ v \in T{\mathbf{R}}^5 \,|\, \theta(v) = dp(v) = d(y+q)(v) = 0\big\} = \left\langle \frac{\partial}{\partial x}+p\frac{\partial}{\partial z}, \frac{\partial}{\partial y}+q\frac{\partial}{\partial z}- \frac{\partial}{\partial q}\right\rangle,\\
E'_2 = \big\{ u \in T{\mathbf{R}}^5 \,|\, \theta(u) = d(x+p)(u) = dy(u) = 0 \big\} = \left\langle \frac{\partial}{\partial x}+p\frac{\partial}{\partial z}- \frac{\partial}{\partial p},\frac{\partial}{\partial q}\right\rangle. \end{gather*}
Lemma~\ref{proj.geom}(2) does not hold in the case $n = 2$, because ${\rm Gr}(2, {\mathbf{R}}^4) \hookrightarrow P(\wedge^2({\mathbf{R}}^4)) = P^5$ is a~hypersurface and a~projective plane intersects with~${\rm Gr}(2, {\mathbf{R}}^4)$ in inf\/initely many points (a~plane curve). \end{Rem}
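The computations in this remark can be conf\/irmed pointwise by representing the $2$-forms as antisymmetric matrices. A small numerical sketch (the coordinate ordering $(x, y, z, p, q)$ and the chosen point are assumptions made only for illustration):
\begin{verbatim}
# Check omega = omega'_1 - omega'_2 and the bi-decomposing condition
# for (E'_1, E'_2), at one point.
import numpy as np

ix, iy, iz, ip, iq = range(5)
def wedge(i, j, dim=5):
    """Antisymmetric matrix of dc_i ^ dc_j: (u, v) -> u_i v_j - u_j v_i."""
    m = np.zeros((dim, dim)); m[i, j], m[j, i] = 1.0, -1.0
    return m

omega = wedge(ix, iy) - wedge(ip, iq)
w1p   = wedge(ix, iy) + wedge(ip, iy)     # omega'_1 = d(x+p) ^ dy
w2p   = wedge(ip, iy) + wedge(ip, iq)     # omega'_2 = dp ^ d(y+q)
print(np.allclose(omega, w1p - w2p))      # omega = omega'_1 - omega'_2

p0, q0 = 0.3, -1.1                        # an arbitrary point
E1p = np.array([[1, 0, p0, 0, 0], [0, 1, q0, 0, -1]], float)  # frame of E'_1
E2p = np.array([[1, 0, p0, -1, 0], [0, 0, 0, 0, 1]], float)   # frame of E'_2
# i_u omega'_1 = 0 on E'_2 and omega'_1 is a volume form on E'_1, and dually
print(np.allclose(E2p @ w1p, 0), np.linalg.det(E1p @ w1p @ E1p.T) != 0)
print(np.allclose(E1p @ w2p, 0), np.linalg.det(E2p @ w2p @ E2p.T) != 0)
\end{verbatim}
All printed values should be {\tt True}.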
From the above-mentioned propositions, it follows that, in the case $n \geq 3$, the notion of a bi-decomposable Monge--Amp{\`e}re system with a Lagrangian pair $(E_1,E_2)$ is nothing but
the notion of a Monge--Amp{\`e}re system generated by a bi-decomposable form $\omega=\omega_1-\omega_2$.
\section{Lagrangian contact structures} \label{Lagrangian contact structures.}
Let us recall Takeuchi's paper \cite{Tak} for Lagrangian contact structures. A contact structure with a Lagrangian pair is called a~Lagrangian contact structure in~\cite{Tak}. A typical example of Lagrangian contact structures is the projective cotangent bundle $M = P(T^*W)$ of an $(n+1)$-dimensional manifold~$W$ with an af\/f\/ine structure (a~torsion-free linear connection) or a~projective structure (a~projective equivalence class of torsion-free linear connections). For the canonical contact structure~$D$ on~$M$, we take as a Lagrangian pair the horizontal and vertical vector bundles (cf.\ Example~\ref{hor+ver}). In~\cite{Tak}, a description is given of the Cartan connections on~$P(T^*W)$ associated to Lagrangian contact structures, together with the equivalence between the vanishing of the curvature of the Cartan connection and the projective f\/latness of~$W$.
The f\/lat model with Lagrangian contact structure, namely the homogeneous space qualif\/ied as a model for the Cartan connection, is the projective cotangent bundle $P(T^*P^{n+1})$ of the ($n+1$)-dimensional projective space $P^{n+1}=P(\mathbf{R}^{n+2})$.
Put $G={\rm PGL}(n+2,{\mathbf{R}})={\rm GL}(n+2,{\mathbf{R}})/C$ ($C$ is the center, $C=\mathbf{R}^{\times}\cdot I_{n+2}$), and $\mathfrak{g}=\operatorname{Lie} G \cong \mathfrak{sl}(n+2,{\mathbf{R}})$. The Lie algebra $\mathfrak{g}$ has a structure of a simple graded Lie algebra (GLA) of second kind as follows: \begin{gather*}
{\mathfrak g} = {\mathfrak{sl}}(n+2,{\mathbf{R}}) = {\mathfrak g}_{-2}\oplus {\mathfrak g}_{-1} \oplus
{\mathfrak g}_0
\oplus {\mathfrak g}_1 \oplus {\mathfrak g}_2 \\ \hphantom{{\mathfrak g}}{} = \left\{{\begin{pmatrix}
0 & 0 & 0 \\
0 & \mathrm{O}_n & 0 \\
a & 0 & 0
\end{pmatrix}}\right\}
\oplus
\left\{{\begin{pmatrix}
0 & 0 & 0 \\
b_1 & \mathrm{O}_n & 0 \\
0 & {}^tb_2 & 0
\end{pmatrix}}\right\}
\oplus
\left\{{\begin{pmatrix}
\alpha & 0 & 0 \\
0 & A & 0 \\
0 & 0 &\beta
\end{pmatrix}}\right\} \\ \hphantom{{\mathfrak g}=}{} \quad {}\oplus
\left\{{\begin{pmatrix}
0 & {}^tc_1 & 0 \\
0 & \mathrm{O}_n & c_2 \\
0 & 0 & 0
\end{pmatrix}}\right\}
\oplus
\left\{{\begin{pmatrix}
0 & 0 & d \\
0 & \mathrm{O}_n & 0 \\
0 & 0 & 0
\end{pmatrix}}\right\}, \\
(a,d,\alpha,\beta \in {\mathbf R},\, b_1,b_2,c_1,c_2\in
{\mathbf R}^n, \, A\in \mathfrak{gl}(n,{\mathbf{R}}); \,
\alpha +\beta +\operatorname{tr}A=0),
\qquad [\mathfrak{g}_p,\mathfrak{g}_q] \subset
\mathfrak{g}_{p+q}. \end{gather*} Put \begin{gather*} \mathfrak{m}= \mathfrak{g}_{-2}\oplus \mathfrak{g}_{-1}, \end{gather*} then $\mathfrak{m}$ is a fundamental GLA of contact type, i.e., Heisenberg algebra. Put \begin{gather*} \mathfrak{g}'=\mathfrak{g}_0 \oplus \mathfrak{g}_1 \oplus \mathfrak{g}_2. \end{gather*} Let $G'$ be the Lie subgroup of $G = {\rm PGL}(n+2, {\mathbf{R}})$ def\/ined by \begin{gather*} G' = P\left\{ \begin{pmatrix} * & * & * \\ 0 & * & * \\ 0 & 0 & * \end{pmatrix} \in {\rm GL}(n+2, {\mathbf{R}}) \right\}. \end{gather*} Then the Lie algebra of $G'$ is given by $\mathfrak{g}'$. Note that $\dim \mathfrak{g}' = n^2 + 2n + 2$.
The group $G$ transitively acts on the f\/lag manifold \begin{gather*}
P\big(T^*P^{n+1}\big) = \big\{ V_1 \subset V_{n+1} \subset {\mathbf{R}}^{n+2} \,|\, \dim V_1 = 1, \dim V_{n+1} = n + 1 \big\} \subset P^{n+1}\times P^{n+1*}. \end{gather*} Then $G'$ is the isotropy group of $(V_1, V_{n+1}) = (\langle e_0 \rangle, \langle e_0, \dots, e_n \rangle)$ for the standard basis $e_0, e_1, \dots$, $e_n, e_{n+1}$ of ${\mathbf{R}}^{n+2}$. Therefore we have \begin{gather*} G/G' \cong P\big(T^*P^{n+1}\big) \qquad \big(\cong P\big(T^*P^{n+1*}\big)\big). \end{gather*} Note that $\mathfrak{g}_{-1} \subset \mathfrak{m} = T_o(G/G')$, where $o = G'$ is the origin, def\/ines the contact structure $D$ on $G/G'$ which corresponds to the canonical contact structure on $P(T^*P^{n+1})$ via the above dif\/feomorphism (cf.~\cite{I-M}).
Next, we consider \begin{gather*} \mathfrak{e}^1=\left\{{\begin{pmatrix}
0 & 0 & 0 \\
b_1 & \mathrm{O}_n & 0 \\
0 & 0 & 0
\end{pmatrix}}\right\}, \;
\mathfrak{e}^2=\left\{{\begin{pmatrix}
0 & 0 & 0 \\
0 & \mathrm{O}_n & 0 \\
0 & {}^tb_2 & 0
\end{pmatrix}}\right\}. \end{gather*} Then we have \begin{gather*} \mathfrak{g}_{-1}=\mathfrak{e}^1\oplus \mathfrak{e}^2,\qquad [\mathfrak{e}^1,\mathfrak{e}^1]=[\mathfrak{e}^2,\mathfrak{e}^2]=0,\qquad \mathfrak{g}_{-2}=[\mathfrak{e}^1,\mathfrak{e}^2], \\ [\mathfrak{g}_0,\mathfrak{e}^1]
\subset \mathfrak{e}^1,\qquad
[\mathfrak{g}_0,\mathfrak{e}^2]
\subset \mathfrak{e}^2. \end{gather*} Denote by $E_{ij}\in \mathfrak{gl}(n+2,{\mathbf{R}})$ the matrix unit of $(i,j)$ component. Then we put \begin{gather*} \gamma =E_{n+2,1}\in \mathfrak{g}_{-2},\\ e_i =E_{i+1,1}\in \mathfrak{e}^1\subset \mathfrak{g}_{-1},\qquad f_i=E_{n+2,i+1}\in \mathfrak{e}^2\subset \mathfrak{g}_{-1}\qquad (1\leq i\leq n). \end{gather*} We set \begin{gather*} [X,Y] = - A(X,Y)\gamma \qquad (X,Y\in \mathfrak{g}_{-1}). \end{gather*} It follows that \begin{gather*} A(e_i,f_j)={\delta}_{ij}, \qquad A(e_i, e_j) = 0, \qquad A(f_i, f_j) = 0 \qquad (1 \leq i, j \leq n). \end{gather*} Therefore we have that $A$ is a symplectic form. Then $\mathfrak{g}_{-1}$ becomes a symplectic vector space with respect to~$A$, and $e_1, \dots, e_n$, $f_1, \dots, f_n$ form a symplectic basis of the symplectic vector space~$(\mathfrak{g}_{-1},A)$. Moreover $\mathfrak{e}^1$, $\mathfrak{e}^2$ form a Lagrangian pair of~$(\mathfrak{g}_{-1}, A)$.
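The relations $A(e_i, f_j) = \delta_{ij}$ follow from the bracket computation $[e_i, f_j] = -\delta_{ij}\gamma$ in ${\mathfrak{gl}}(n+2,{\mathbf{R}})$, which can be checked directly. A small numerical sketch for $n = 3$ (matrix units indexed from $1$, as in this section):
\begin{verbatim}
# Check [e_i, f_j] = -delta_ij * gamma and [e_i, e_j] = [f_i, f_j] = 0.
import numpy as np

n = 3
def E(i, j, size=n + 2):
    """Matrix unit E_{ij}: 1 in row i, column j (1-based, as in the text)."""
    m = np.zeros((size, size)); m[i - 1, j - 1] = 1.0
    return m

gamma = E(n + 2, 1)
e = [E(i + 1, 1) for i in range(1, n + 1)]        # e_i = E_{i+1,1}
f = [E(n + 2, j + 1) for j in range(1, n + 1)]    # f_j = E_{n+2,j+1}

ok = all(np.allclose(e[i-1] @ f[j-1] - f[j-1] @ e[i-1],
                     -float(i == j)*gamma)
         for i in range(1, n + 1) for j in range(1, n + 1))
ok2 = all(np.allclose(e[i] @ e[j] - e[j] @ e[i], 0)
          and np.allclose(f[i] @ f[j] - f[j] @ f[i], 0)
          for i in range(n) for j in range(n))
print(ok, ok2)   # expected: True True, i.e., A(e_i, f_j) = delta_ij
\end{verbatim}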
Moreover, put \begin{gather*} \mathfrak{a}^1=\mathfrak{e}^1+\mathfrak{g}', \qquad \mathfrak{a}^2=\mathfrak{e}^2+\mathfrak{g}'. \end{gather*} Then we easily verify that \begin{gather*} \mathrm{Ad}(G')\mathfrak{a}^1=\mathfrak{a}^1, \qquad \mathrm{Ad}(G')\mathfrak{a}^2=\mathfrak{a}^2,\qquad \mathfrak{a}^1\cap \mathfrak{a}^2=\mathfrak{g}'. \end{gather*} Thus $\mathfrak{a}^1$ and $\mathfrak{a}^2$ induce invariant dif\/ferential systems~$E_1$ and $E_2$ on $G/G'$ respectively. The pair $(E_1,E_2)$ forms a Lagrangian pair of $(G/G', D)$ and thus that of the standard contact structure $D$ on $P(T^*P^{n+1})$. Moreover both $E_1$ and $E_2$ are completely integrable. It follows that $P(T^*P^{n+1})/\mathcal{E}_2\cong P^{n+1}$, $P(T^*P^{n+1})/\mathcal{E}_1 \cong {P^{n+1}}^*$, where $\mathcal{E}_1$, $\mathcal{E}_2$ are the foliations induced by~$E_1$,~$E_2$ respectively.
For the linear isotropy representation $\rho \colon G' \longrightarrow \mathrm{GL}(\mathfrak{m})$ at the origin $o = G'$ in $G/G'$, we have, with respect to the basis $\{ \gamma, e_1, \dots, e_n, f_1, \dots, f_n \}$ in $\mathfrak{m}$, \begin{gather*} \tilde{G} =\rho(G')
=\left\{{\begin{pmatrix}
a & 0 & 0 \\
b_1 & A & \mathrm{O}_n \\
b_2 & \mathrm{O}_n & a\, {}^t\! A^{-1}
\end{pmatrix}} \Bigg\vert \, a\in \mathbf{R}^*,\, A\in \mathrm{GL}(n,{\mathbf{R}}), \,b_1, b_2\in \mathbf{R}^n \right\}. \end{gather*}
Then the $\tilde{G}$-structures of type $\mathfrak{m}$ are in bijective correspondence with the Lagrangian contact structures~\cite[Theorem~5.1]{Tak}. Note that a $\tilde{G}$-structure is of inf\/inite type~\cite[Chapter~I]{K}. How\-ever~${\mathfrak g}$ is the prolongation of $({\mathfrak m}, {\mathfrak g}_0)$ in the sense of Tanaka~\cite{Tan1,Y}. Moreover $\tilde{G} = G_0^{\#}$ in the notation of~\cite{Tan}. Thus, by the f\/initeness theorem of Tanaka~\cite[Corollary~2]{Tan}, we have:
\begin{Prop} Let $(M,D)$ be a contact manifold of dimension $2n+1$ with a Lagrangian pair $(E_1,E_2)$. Then the automorphism pseudo-group of all compatible diffeomorphisms on $M$ as a Lagrangian contact structure is of finite type, that is to say, it is a finite-dimensional Lie pseudo-group. The maximum dimension of the automorphism pseudo-groups, fixing~$n$, is given by $\dim \mathfrak{sl}(n+2,{\mathbf{R}}) = (n+2)^2-1$, which is attained by the flat model. \end{Prop}
Moreover, using a result in \cite{M-M}, we have:
\begin{Prop} \label{Hess=0} The equivalence class of a decomposable Monge--Amp{\`e}re system with a~Lag\-rangian pair $(E_1, E_2)$ on a contact manifold $(M, D)$ is uniquely determined by the Lagrangian contact structure $(D, E_1, E_2)$. Therefore the maximal dimension of the automorphism pseudo-groups of decomposable Monge--Amp{\`e}re systems on $(M^{2n+1}, D)$ is equal to $(n+2)^2-1$. The maximum is attained by the decomposable Monge--Amp{\`e}re system ${\rm Hess} = 0$ on the flat model on $M = PT^*(P^{n+1})$. \end{Prop}
In fact, the following result, which implies Proposition~\ref{Hess=0}, is given in~\cite{M-M}:
\begin{Lem}[\protect{\cite[Proposition~2.1]{M-M}}] \label{decomposable-M-M} Let $(V, \Theta)$ be a symplectic vector space of dimension~$2n$. For two given non-zero decomposable $n$-covectors $\omega = \beta_1 \wedge \cdots \wedge \beta_n$, $\omega' = \beta'_1 \wedge \cdots \wedge \beta'_n$ with $\beta_i, \beta'_i \in V^*$, we have $\omega' = \lambda\omega + \phi\wedge\Theta$ for a nonzero scalar $\lambda$ and an~$(n-2)$-covector $\phi$, if and only if, the annihilator of $\beta_1, \dots, \beta_n$ in~$V$ and the annihilator of $\beta'_1, \dots, \beta'_n$ in~$V$ are either identical or perpendicular with respect to~$\Theta$. \end{Lem}
\begin{proof}[Proof of Proposition \ref{Hess=0}] Let ${\mathcal M}$ be a decomposable Monge--Amp{\`e}re system with a~Lag\-rangian pair $(E_1, E_2)$ on $(M, D)$. First we observe that ${\mathcal M}$ has a decomposable local generator. To see this, let $X_1, \dots, X_n$ and $P_1, \dots, P_n$ be local frames of $E_1$ and~$E_2$ respectively. Let~$R$ be the Reeb vector f\/ield for a local contact form~$\theta$ def\/ining~$D$. Consider the dual coframe $\theta, \alpha_1,\dots,\alpha_n, \beta_1,\dots,\beta_n$ of~$T^*M$ to the frame $R, X_1, \dots, X_n, P_1, \dots, P_n$ of $TM$. Then we see, by the decomposing condition, that there exist an $(n-1)$-form $\gamma$ and a non-vanishing function~$\mu$ on~$M$ such that $\omega = \mu(\beta_1 \wedge \cdots \wedge \beta_n) + \theta\wedge\gamma$. Thus we have ${\mathcal M} = \langle \beta_1 \wedge \cdots \wedge \beta_n, \theta, d\theta\rangle$. Note that ${\rm Ann}(\theta, \beta_1, \dots, \beta_n) = E_1$. Then, by Lemma~\ref{decomposable-M-M}, we see that ${\mathcal M}$ is determined just by~$E_1$. In particular, given a Lagrangian pair $(E_1, E_2)$, the decomposable Monge--Amp{\`e}re system with the Lagrangian pair $(E_1, E_2)$ is uniquely determined. \end{proof}
\begin{Rem} If the characteristic system $E_1$ is integrable, then the decomposable Monge--Amp{\`e}re system is isomorphic to the system corresponding to the equation ${\rm Hess} = 0$ (see \cite{M-M,Mo2}). \end{Rem}
\begin{Rem} The equation ${\rm Hess} = 0$ has an inf\/inite-dimensional automorphism pseudo-group, which consists of the lifts of dif\/feomorphisms of the dual projective space. However, if a Lagrangian pair is associated, then the automorphism pseudo-group turns out to be f\/inite-dimensional, as stated in Proposition~\ref{Hess=0}. \end{Rem}
\section[Automorphisms of Monge-Amp{\`e}re systems with Lagrangian pairs]{Automorphisms of Monge--Amp{\`e}re systems\\ with Lagrangian pairs} \label{Automorphisms}
Let us consider the automorphism pseudo-group Aut($\mathcal{M}$) of all local isomorphisms of a bi-decomposable Monge--Amp{\`e}re system $\mathcal{M} = \langle \theta, d\theta, \omega \rangle$ with a Lagrangian pair $(E_1,E_2)$ on a~contact manifold $(M,D)$ of dimension $2n+1$.
By Theorems \ref{bi-decomposable-form} and~\ref{Uniqueness}, in the case $n \geq 3$, any automorphism of any bi-decomposable Monge--Amp{\`e}re system with a~Lagrangian pair $(E_1, E_2)$ preserves the Lagrangian pair $(E_1, E_2)$ up to the interchange of~$E_1$,~$E_2$. Therefore inf\/initesimal symmetries of bi-decom\-po\-sable Monge--Amp{\`e}re systems are studied based on inf\/initesimal symmetries of Lagrangian contact structures.
Now let $\overline{G}$ be \begin{gather*} \overline{G} = \left\{\left. \begin{pmatrix}
c^2 & 0 & 0 \\
b_1 & cA & \mathrm{O}_n \\
b_2 & \mathrm{O}_n & c\, {}^t\!\! A^{-1}
\end{pmatrix}
\, \right\vert \, c \in {\mathbf{R}}^{\times}, \,A\in \mathrm{SL}(n,{\mathbf{R}}),\, b_1, b_2 \in {\mathbf{R}}^n \right\}. \end{gather*}
\begin{Prop} The bi-decomposable Monge--Amp{\`e}re systems with Lagrangian pairs are in bijective correspondence with $\overline{G}$-structures of type~$\mathfrak{m}$. \end{Prop}
\begin{proof} We consider, as in Section~\ref{Lagrangian contact structures.}, the fundamental GLA of contact type $\mathfrak{m}= \mathfrak{g}_{-2}\oplus \mathfrak{g}_{-1}$ and the decomposition $\mathfrak{g}_{-1}=\mathfrak{e}^1\oplus \mathfrak{e}^2$ into the Lagrangian pair. Moreover we f\/ix a volume form $\Omega_1 \in \wedge^n(\mathfrak{e}^{1*})$ on $\mathfrak{e}^1$ and a volume form $\Omega_2 \in \wedge^n(\mathfrak{e}^{2*})$ on $\mathfrak{e}^2$. Consider the group $C(\mathfrak{m}; \mathfrak{e}^1, \Omega_1; \mathfrak{e}^2, \Omega_2)$ consisting of all $a \in {\rm GL}(\mathfrak{m})$ which satisfy the following conditions: $a\mathfrak{g}_{-1} = \mathfrak{g}_{-1}$, the graded linear automorphism $\overline{a}$ of $\mathfrak{m}$ induced by $a$ is a GLA-automorphism, $a\mathfrak{e}^1 = \mathfrak{e}^1$, $a\mathfrak{e}^2 = \mathfrak{e}^2$, and $a^*\Omega_1 = \lambda\Omega_1$, $a^*\Omega_2 = \lambda\Omega_2$ for some $\lambda \in {\mathbf{R}}^{\times}$. Then we have that $C(\mathfrak{m}; \mathfrak{e}^1, \Omega_1; \mathfrak{e}^2, \Omega_2)$ is identical with $\overline{G}$. \end{proof}
Thus the equivalence problem of bi-decomposable Monge--Amp{\`e}re systems with Lagrangian pairs is studied as that of adapted $\overline{G}$-structures on a contact manifold of dimension~$2n+1$.
Let $G^0$ be a subgroup of ${\rm GL}(n+2, {\mathbf{R}})$ def\/ined by \begin{gather*} G^0 = \left\{ \left. \begin{pmatrix}
\ k^{-1} & 0 & \ 0 \\
\ b_1 & A & \ 0 \\
\ a & {}^tb_2 & \ k
\end{pmatrix}
\, \right\vert \,
k \in {\mathbf{R}}^{\times}, \, A \in {\rm SL}(n, {\mathbf{R}}), \, b_1, b_2 \in {\mathbf{R}}^n, \, a \in {\mathbf{R}}
\right\}. \end{gather*} We set \begin{gather*} H^0 = \left\{ \left. \begin{pmatrix}
\ k^{-1} & 0 & \ 0 \\
\ 0 & A & \ 0 \\
\ 0 & 0 & \ k
\end{pmatrix}
\, \right\vert \,
k \in {\mathbf{R}}^{\times}, \, A \in {\rm SL} (n, {\mathbf{R}})
\right\} \subset G^0. \end{gather*} Then we have the model space $G^0/H^0 \cong {\mathbf{R}}^{2n+1}$ with coordinates $(x, z, p)$. The $G^0$-action on ${\mathbf{R}}^{2n+1}$ is described by \begin{gather*} (x, z, p) \mapsto (x', z', p') = \left(k(Ax + b_1), k\big(kz - {}^tb_2x - a\big), k\left(p - \frac{1}{k}{}^tb_2\right)A^{-1}\right). \end{gather*} Then $dz' - p'dx' = k^2(dz - pdx)$ and the form $\omega = c dx_1\wedge \cdots\wedge dx_n - dp_1\wedge\cdots\wedge dp_n$ is transformed to $\omega' = k^n\omega$.
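The two transformation rules just stated can be verif\/ied symbolically. A {\tt sympy} sketch for $n = 2$ (the ${\rm SL}(2, {\mathbf{R}})$ parametrization below is an assumption made only for this check):
\begin{verbatim}
# Check dz' - p'dx' = k^2 (dz - p dx) and the volume scalings (n = 2).
import sympy as sp

x1, x2, z, p1, p2, k, a, b11, b12, c1, c2, a11, a12, a21 = sp.symbols(
    'x1 x2 z p1 p2 k a b11 b12 c1 c2 a11 a12 a21', real=True)
a22 = (1 + a12*a21)/a11                      # forces det A = 1, A in SL(2, R)
A = sp.Matrix([[a11, a12], [a21, a22]])
x = sp.Matrix([x1, x2]); p = sp.Matrix([[p1, p2]])
b1 = sp.Matrix([b11, b12]); b2 = sp.Matrix([c1, c2])

xp = k*(A*x + b1)                            # x'
zp = k*(k*z - (b2.T*x)[0] - a)               # z'
pp = k*(p - b2.T/k)*A.inv()                  # p'

base = [x1, x2, z, p1, p2]
d = sp.symbols('dx1 dx2 dz dp1 dp2')         # formal differentials
def diff1(F):                                # total differential of a function
    return sum(sp.diff(F, v)*dv for v, dv in zip(base, d))

lhs = diff1(zp) - sum(pp[i]*diff1(xp[i]) for i in range(2))
rhs = k**2*(d[2] - p1*d[0] - p2*d[1])
print(sp.simplify(lhs - rhs))                # expected: 0

# Jacobian factors of x -> x' and p -> p': both equal k^2 = k^n for n = 2
Jx = sp.Matrix([[sp.diff(xp[i], v) for v in (x1, x2)] for i in range(2)]).det()
Jp = sp.Matrix([[sp.diff(pp[i], v) for v in (p1, p2)] for i in range(2)]).det()
print(sp.simplify(Jx), sp.simplify(Jp))      # expected: k**2  k**2
\end{verbatim}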
Let $E_{i,j}$, $0 \leq i, j \leq n+1$, denote the elementary $(n+2)\times (n+2)$-matrix such that the $(i, j)$ component is $1$ and all other components are zero. We set \begin{gather*} \varepsilon = - E_{0,0} + E_{n+1,n+1}, \qquad e_i = E_{i,0} \quad (1 \leq i \leq n), \\ f_j = E_{n+1,j} \quad (1 \leq j \leq n), \qquad \gamma = E_{n+1,0}, \end{gather*} and identify the set of traceless matrices ${\mathfrak{sl}}(n, {\mathbf{R}})$ with \begin{gather*} \left\{ \left. \begin{pmatrix}
0 & 0 & 0 \\
0 & A & 0 \\
0 & 0 & 0
\end{pmatrix}
\, \right\vert \,
A \in {\mathfrak{sl}}(n, {\mathbf{R}})
\right\} \subset {\mathfrak g}^0. \end{gather*} Then we set ${\mathfrak g}_0 = \langle\varepsilon\rangle_{{\mathbf{R}}} \oplus {\mathfrak{sl}}(n, {\mathbf{R}})$, ${\mathfrak g}_{-1}^1 = \langle e_1, \dots, e_n\rangle_{{\mathbf{R}}}$, ${\mathfrak g}_{-1}^2 = \langle f_1, \dots, f_n\rangle_{{\mathbf{R}}}$, ${\mathfrak g}_{-1} = {\mathfrak g}_{-1}^1 \oplus {\mathfrak g}_{-1}^2$, and ${\mathfrak g}_{-2} = \langle\gamma\rangle_{{\mathbf{R}}}$. Then \begin{gather*} {\mathfrak g}^0 = {\mathfrak g}_{-2} \oplus {\mathfrak g}_{-1} \oplus {\mathfrak g}_{0} \end{gather*} is the Lie algebra of $G^0$. We write ${\mathfrak m} = {\mathfrak g}_{-2}\oplus{\mathfrak g}_{-1}$. Then the Lie subalgebra~${\mathfrak m}$ becomes the split Heisenberg algebra.
The matrix representation of $A_0 \in \mathfrak{g}_0=\langle\varepsilon\rangle_{{\mathbf{R}}} \oplus {\mathfrak{sl}}(n, {\mathbf{R}})$ with respect to the basis $\{ \gamma, \, e_i $ $(1 \leq i \leq n),\, f_j\, (1 \leq j \leq n) \}$ in the Heisenberg algebra $V= \mathfrak{m} = \mathfrak{g}_{-2}\oplus \mathfrak{g}_{-1}$ has the following form: \begin{gather*} A_0=C+A_0'=
c \begin{pmatrix} 2 & 0 & 0 \\ 0 & I & \mathrm{O} \\ 0 & \mathrm{O} & I \end{pmatrix} + \begin{pmatrix} 0 & 0 & 0 \\ 0 & A & \mathrm{O} \\ 0 & \mathrm{O} & -{}^t\!A \end{pmatrix}. \end{gather*} In fact, for the commutators, we have by the direct calculations: \begin{gather*} [\varepsilon, \gamma] = 2\gamma, \qquad [\varepsilon, e_i] = e_i, \qquad [\varepsilon, f_j] = f_j, \\ [A, \gamma] = O, \qquad [A, e_i] = a_i, \qquad [A, f_j] = - a^j, \end{gather*} where $a_i$ is the $i$-th column of $A$ and $a^j$ is the $j$-th row of $A$, $1 \leq i, j \leq n$, and $A \in {\mathfrak{sl}}(n, {\mathbf{R}})$. Moreover we have \begin{gather*} [e_i, f_j] = - \delta_{ij}\gamma, \qquad [e_i, e_j] = 0, \qquad [f_i, f_j] = 0 \quad (1 \leq i, j \leq n). \end{gather*}
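The displayed block form is equivalent to the commutator relations above, and can be checked numerically by computing $\operatorname{ad}(A_0)$ on the basis $\{\gamma, e_i, f_j\}$. A small sketch for $n = 3$ (using the $0$-based matrix units $E_{i,j}$ of this section; the random choice of $A$ and $c$ is for illustration only):
\begin{verbatim}
# Check that ad(c*eps + A) acts on m = <gamma, e_i, f_j> by the block
# matrix diag(2c, c*I + A, c*I - A^T).
import numpy as np
rng = np.random.default_rng(0)

n = 3
def E(i, j, size=n + 2):
    m = np.zeros((size, size)); m[i, j] = 1.0     # 0-based matrix unit
    return m

eps   = -E(0, 0) + E(n + 1, n + 1)
gamma = E(n + 1, 0)
e = [E(i, 0) for i in range(1, n + 1)]
f = [E(n + 1, j) for j in range(1, n + 1)]
basis = [gamma] + e + f                            # order of the text

A = rng.standard_normal((n, n)); A -= np.trace(A)/n*np.eye(n)   # A in sl(n)
c = 0.7
A0 = c*eps + sum(A[i, j]*E(i + 1, j + 1) for i in range(n) for j in range(n))

def coords(X):
    """Coordinates of X in the basis (gamma, e_i, f_j)."""
    return np.array([X[n + 1, 0]] + [X[i + 1, 0] for i in range(n)]
                    + [X[n + 1, j + 1] for j in range(n)])

ad = np.column_stack([coords(A0 @ X - X @ A0) for X in basis])
expected = np.zeros((2*n + 1, 2*n + 1))
expected[0, 0] = 2*c
expected[1:n+1, 1:n+1] = c*np.eye(n) + A
expected[n+1:, n+1:] = c*np.eye(n) - A.T
print(np.allclose(ad, expected))                   # expected: True
\end{verbatim}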
We will study the prolongation of $({\mathfrak m}, {\mathfrak g}_0)$. We def\/ine the prolongation inductively ${\mathfrak g}_k = {\mathfrak g}({\mathfrak m}, {\mathfrak g}_0)_k$ $(k \geq 1)$ by the set of elements $\{ (\alpha, \beta) \in {\rm Hom}({\mathfrak g}_{-1}, {\mathfrak g}_{k-1})\oplus {\rm Hom}({\mathfrak g}_{-2}, {\mathfrak g}_{k-2})\}$ satisfying \begin{alignat*}{3} & {\rm (i)} \ \ && \beta([x, y]) = [\alpha(x), y] - [\alpha(y), x] \qquad (x, y \in {\mathfrak g}_{-1}), & \\ & {\rm (ii)} \ \ && [\alpha(y), z] = [\beta(z), y] \qquad (y \in {\mathfrak g}_{-1},\, z \in {\mathfrak g}_{-2}).& \end{alignat*} See \cite[p.~429]{Y}. Then we have
\begin{Lem} \label{prolongation-calculation} The prolongation ${\mathfrak g}_i$ vanishes for any $i \geq 1$. \end{Lem}
\begin{proof} First we calculate ${\mathfrak g}_1$. For any $(\alpha, \beta) \in {\rm Hom}(\mathfrak{g}_{-1}, \mathfrak{g}_0) \oplus {\rm Hom}(\mathfrak{g}_{-2}, \mathfrak{g}_{-1})$, the condi\-tions~(i) and (ii) imply that \begin{alignat*}{3} & \text{(i)} \ \ && \beta([e_i, f_j]) = [\alpha(e_i), f_j] - [\alpha(f_j), e_i] \qquad (1 \leq i, j \leq n), & \\ & \text{(ii-1)} \ \ & & [\alpha(e_i), \gamma] = [\beta(\gamma), e_i] \qquad (1 \leq i \leq n),& \\ & \text{(ii-2)} \ \ && [\alpha(f_j), \gamma] = [\beta(\gamma), f_j] \qquad (1 \leq j \leq n).& \end{alignat*} Set \begin{gather*} \beta(\gamma) = \sum_{\ell=1}^n b_\ell e_\ell + \sum_{m=1}^n c_m f_m, \qquad \alpha(e_i) = h_i\varepsilon + A^{(i)}, \qquad \alpha(f_j) = k_j\varepsilon + B^{(j)}, \end{gather*} where $b_\ell, c_m, h_i, k_j \in {\mathbf{R}}$, $A^{(i)}, B^{(j)} \in {\mathfrak{sl}}(n, {\mathbf{R}})$. Then \begin{gather*} \beta([e_i, f_j]) = \sum_{\ell=1}^n (- \delta_{ij}b_\ell)e_\ell + \sum_{m=1}^n (- \delta_{ij}c_m)f_m, \\ {[\alpha(e_i), f_j]} = h_i f_j - \big(j{\mbox{\rm -th row of }} A^{(i)}\big), \qquad {[\alpha(f_j), e_i]} = k_j e_i + \big(i{\mbox{\rm -th column of }} B^{(j)}\big), \\ {[\alpha(e_i), \gamma]} = 2h_i\gamma, \qquad {[\beta(\gamma), e_i]} = c_i\gamma, \qquad {[\alpha(f_j), \gamma]} = 2k_j\gamma, \qquad {[\beta(\gamma), f_j]} = - b_j\gamma. \end{gather*} By the condition (i), we have \begin{gather*} B^{(j)}_{\ell i} = - \delta_{\ell i}k_j + \delta_{ij}b_\ell, \qquad A^{(i)}_{j m} = \delta_{j m}h_i + \delta_{ij}c_m \qquad (1 \leq i, j, \ell, m \leq n). \end{gather*} By the condition (ii-1), we have $c_i = 2h_i$, $1 \leq i \leq n$. By the condition (ii-2), we have $b_j = - 2k_j$, $1 \leq j \leq n$. Therefore we have \begin{gather*} B^{(j)}_{\ell i} = - \delta_{\ell i}k_j - 2\delta_{ij}k_\ell, \qquad A^{(i)}_{j m} = \delta_{j m}h_i + 2\delta_{ij}h_m \qquad (1 \leq i, j, \ell, m \leq n). \end{gather*} In particular, for any $j$, $\ell$ with $1 \leq j, \ell \leq n$, we have $B^{(j)}_{\ell\ell} = - k_j - 2\delta_{\ell j}k_\ell$, and therefore \begin{gather*} 0 = \operatorname{tr} B^{(j)} = - (n+2)k_j, \end{gather*} hence $k_j = 0$, $1 \leq j \leq n$. For any $i$, $m$ with $1 \leq i, m \leq n$, we have $A^{(i)}_{mm} = h_i + 2\delta_{i m}h_m$, and therefore \begin{gather*} 0 = \operatorname{tr} A^{(i)} = (n+2)h_i, \end{gather*} hence $h_i = 0$, $1 \leq i \leq n$. Thus we have $\beta = 0$ and $\alpha = 0$. Therefore we have ${\mathfrak g}_1 = 0$.
Second we calculate ${\mathfrak g}_2$. Take any $(\alpha, \beta) \in {\mathfrak g}_2 \subset {\rm Hom}({\mathfrak g}_{-1}, {\mathfrak g}_{1})\oplus {\rm Hom}({\mathfrak g}_{-2}, {\mathfrak g}_{0})$. Since ${\mathfrak g}_{1} = 0$, $\alpha = 0$. Then we see $\beta(\gamma) = 0$. Therefore $\beta = 0$. Thus we obtain that ${\mathfrak g}_{2} = 0$.
From ${\mathfrak g}_{1} = 0$, ${\mathfrak g}_{2} = 0$, we have ${\mathfrak g}_{i} = 0$, $i \geq 3$, automatically. \end{proof}
Then we have
\begin{Th} \label{maximal symmetry} Let $(M, D)$ be a contact manifold of dimension $2n+1$ and~$\mathcal{M}$ a bi-decomposable Monge--Amp{\`e}re system with a Lagrangian pair on~$(M,D)$. Assume that $n \geq 3$. Then the automorphism pseudo-group $\operatorname{Aut}(\mathcal{M})$ of~$\mathcal{M}$ has dimension at most $(n+1)^2$. The estimate is best possible. \end{Th}
\begin{proof} By Lemma \ref{prolongation-calculation}, we know that the prolongations ${\mathfrak g}_i$, $i \geq 1$, in the sense of Tanaka vanish. On the other hand, $\overline{G}$ is equal to $(H^0)^{\#}$ in the notation of~\cite{Tan}. Then, by the f\/initeness theorem of Tanaka~\cite[Corollary~2]{Tan}, we see that the dimension of the pseudo-group of automorphisms on ${\mathcal M}$ is estimated by $\dim({\mathfrak g^0}) = \sum\limits_{i \leq 0} \dim({\mathfrak g}_i) = (n+1)^2$. Moreover there exists a Monge--Amp{\`e}re system $\mathcal{M}$ with a Lagrangian pair on $M = {\mathbf{R}}^{2n+1}$, which arises in equi-af\/f\/ine geometry, such that the automorphism group {\rm Aut}($\mathcal{M}$) attains the maximal dimension $(n+1)^2$. See Section~\ref{Homogeneous M-A systems with Lagrangian pairs.}.1. \end{proof}
\begin{Rem} The sharp symmetry bounds of non-f\/lat Lagrangian contact structures together with many other parabolic geometries have been obtained by Kruglikov and The~\cite{K-T}. Lemma~\ref{prolongation-calculation} in our paper is similar to the concept of prolongation rigidity studied by them in~\cite{K-T}. \end{Rem}
\section{Hesse representations} \label{Hesse representations.}
Let $(E_1,E_2)$ be a Lagrangian pair on a contact manifold $(M, D)$. We call $(E_1,E_2)$ {\it bi-Legendre-integrable} or simply {\it integrable} if both~$E_1$ and~$E_2$ are completely integrable as subbundles in~$TM$. Then $(E_1, E_2)$ def\/ines a pair of Legendrian foliations $({\mathcal E}_1, {\mathcal E}_2)$ on $M$ locally. The standard Lagrangian pair $(E^{\rm{st}}_1, E^{\rm{st}}_2)$ introduced in Section~\ref{Monge-Ampere systems and Lagrangian pairs} is integrable. In fact the foliation~${\mathcal E}^{\rm{st}}_1$ is def\/ined by f\/ibers of the projection $(x, z, p) \mapsto \Big(p, \sum\limits_{i=1}^n x_ip_i - z\Big)$ and the foliation ${\mathcal E}^{\rm{st}}_2$ is def\/ined by $(x, z, p) \mapsto (x, z)$.
\begin{Def} If a Lagrangian pair $(E_1,E_2)$ on $(M, D)$ is locally contactomorphic to the standard Lagrangian pair $(E^{\rm{st}}_1, E^{\rm{st}}_2)$ on $({\mathbf{R}}^{2n+1}, D_{\rm{st}})$, then we call $(E_1, E_2)$ {\it flat}. \end{Def}
Let $(E_1, E_2)$ be a Lagrangian pair on a contact manifold $(M, D)$ of dimension $2n + 1$. Assume that $(E_1, E_2)$ is bi-Legendre-integrable. Then, locally, there exist Legendrian f\/ibrations \mbox{$\pi_1 \colon M \to W_1$} and $\pi_2 \colon M \to W_2$ having $E_2$ and $E_1$ as the kernels of the dif\/ferentials $(\pi_1)_* \colon TM \to TW_1$ and $(\pi_2)_* \colon TM \to TW_2$ for some manifolds $W_1$ and $W_2$ of dimension $n+1$ respectively. Then we have the following diagram: \begin{gather*} \xymatrix{ & M \ar[ld]_{\pi_1} \ar[rd]^{\pi_2} &\\ W_1 & & W_2}
\end{gather*} We call $(\pi_1, \pi_2)$ a {\it double Legendrian fibration}. In this case we say that $W_1$ and $W_2$ are in the {\it dual} relation via the {\it Legendre transformation} on~$M$. Since $D = E_1\oplus E_2$, we see $(\pi_1, \pi_2)\colon M \to W_1\times W_2$ is an immersion. Thus $M$ has a {\it pseudo-product structure} doubly foliated by Legendrian submanifolds~\cite{Tab, Tan1}.
\begin{Ex} The projective cotangent bundle $M = P(T^*W)$ over a manifold $W$ of dimension $n+1$ with the canonical projection $\pi_1 \colon M \to W$ has the canonical contact structure $D \subset TM$ def\/ined by, for any $(x, [\alpha]) \in M$ with $x \in W$, $[\alpha] \in P(T^*_xW)$, \begin{gather*}
D_{(x, [\alpha])} = \{ v \in T_xM \,|\, \alpha((\pi_1)_* v) = 0 \}. \end{gather*} We see moreover that $D$ has the integrable Lagrangian subbundle $E_2 = {\rm Ker}(\pi_{1*})$.
Assume that $W$ is the projective space~$P^{n+1}$. Then the contact manifold $M=P(T^*P^{n+1})$ is naturally identif\/ied with the contact manifold $P(T^*{P^{n+1}}^*)$, the projective cotangent bundle over the dual projective space ${P^{n+1}}^*$ with the canonical projection $\pi_2 \colon P(T^*{P^{n+1}}^*) \to {P^{n+1}}^*$~\cite{I-Mo}. Then $D \subset TM$ has another Lagrangian subbundle $E_1 = {\rm Ker}((\pi_2)_*)$ and $(E_1, E_2)$ turns out to be an integrable Lagrangian pair of $(M, D)$. There is associated to $(E_1, E_2)$ the double Legendrian f\/ibration \begin{gather*} \xymatrix{ & P\big(T^*P^{n+1}\big) \ar[ld]_{\pi_1} \ar[rd]^{\pi_2} &\\ P^{n+1} & & {P^{n+1}}^*}
\end{gather*} for $W_1=P^{n+1}$, $W_2= {P^{n+1}}^*$. This globally def\/ined Lagrangian pair $(E_1, E_2)$ is f\/lat. In fact, $P(T^*P^{n+1})$ is identif\/ied with the incidence hypersurface $I \subset P^{n+1}\times {P^{n+1}}^*$ def\/ined by \begin{gather*} x_0y_0 + x_1y_1 + \cdots + x_{n+1}y_{n+1} = 0 \end{gather*} for homogeneous coordinates $[x] = [x_0 : x_1 : \cdots : x_{n+1}]$ of $P^{n+1}$ and $[y] = [y_0 : y_1 : \cdots : y_{n+1}]$ of ${P^{n+1}}^*$. On the af\/f\/ine open subset $U = \{ x_0 \not= 0, \, y_{n+1}\not= 0\} \subset P^{n+1}\times {P^{n+1}}^*$, take local coordinates of \begin{gather*} x'_i = \frac{x_i}{x_0}, \qquad z' = - \frac{x_{n+1}}{x_0}, \qquad p'_j = \frac{y_j}{y_{n+1}}, \qquad \tilde{z}' = - \frac{y_0}{y_{n+1}}, \end{gather*} $(1 \leq i \leq n,\ 1 \leq j \leq n)$. Then $I \cap U$ is def\/ined by $- z' - \tilde{z}' + \sum\limits_{i=1}^n x'_ip'_i = 0$, where we have $dz' - \sum\limits_{i=1}^n p'_i dx'_i + \Big(d\tilde{z}' - \sum\limits_{i=1}^n x'_i dp'_i\Big) = 0$. The contact structure on $I \cap U$ is given by $dz' - \sum\limits_{i=1}^n p'_i dx'_i = 0$. Also it is given by $d\tilde{z}' - \sum\limits_{i=1}^n x'_i dp'_i = 0$. This shows $(E_1, E_2)\vert_{U}$ is f\/lat. Also on other af\/f\/ine open subsets, we can verify the f\/latness of $(E_1, E_2)$ similarly or by an argument using homogeneity. \end{Ex}
\begin{Ex} \label{hor+ver} For a Riemannian manifold $W=(W,g)$ of dimension $n+1$, the unit tangent bundle $M=T_1W$ of $W$ has the canonical contact structure $D\subset TM$, \begin{gather*}
D_{(x, v)} = \big\{ u \in T_{(x, v)}M \,|\, \pi_*u \in v^{\perp}\big\}, \qquad (x, v) \in T_1W, \end{gather*} and the canonical Lagrangian pair $(E_1,E_2)$ induced by the horizontal lift and the vertical lift of the Levi-Civita connection: \begin{gather*} (E_1)_{(x, v)} = \big(v^\perp\big)^{\rm{hor}}, \qquad (E_2)_{(x, v)} = \big(v^\perp\big)^{\rm{ver}}, \qquad (x, v) \in T_1W. \end{gather*}
Here $\pi\colon M \longrightarrow W$ is the canonical projection and $v^{\perp} = \{ w\in T_{x}W \,|\, g(v,w)=0 \}$. Then $(E_1,E_2)$ is bi-Legendre-integrable if and only if~$W$ is a space form. In fact, the vertical lift~$E_2$ is always completely integrable. Moreover the horizontal lift $E_1$ is completely integrable if and only if~$W$ is projectively f\/lat, that is, a space form (see~\cite[Corollaries~3.5 and~6.5]{Tak}). \end{Ex}
The bi-decomposable class of Monge--Amp{\`e}re systems with f\/lat Lagrangian pairs turns out to be an intrinsic representation of the well-known class of Monge--Amp{\`e}re equations locally expressed by Hesse representations: \begin{gather*} \mathrm{Hess}(z) = F(x_1, \dots, x_n, z, p_1, \dots, p_n)\quad (\not=0). \end{gather*} Thus we are led to the following def\/inition:
\begin{Def} \label{Hesse-M-A-def} We call a Monge--Amp{\`e}re system with a f\/lat Lagrangian pair a {\it Hesse Monge--Amp{\`e}re system}. \end{Def}
Note that there are sub-classes of Hesse Monge--Amp{\`e}re equations, {\it Euler--Lagrange Monge--Amp{\`e}re equations}: \begin{gather*} \mathrm{Hess}(z) = F_1(x_1, \dots, x_n, z)\cdot F_2\left(p_1,\dots, p_n, \sum_{j=1}^n p_jx_j-z\right)\quad (\not=0), \end{gather*} (see \cite[p.~21, Example~2]{B-G-G}) and {\it flat Monge--Amp{\`e}re equations}: \begin{gather*} \mathrm{Hess}(z) = c\quad (c \ \text{is constant}, \ c\not=0). \end{gather*}
\begin{Def} \label{E-L-M-A-def} Let $\mathcal{M}$ be a Hesse Monge--Amp{\`e}re system. Then we call $\mathcal{M}$ an {\it Euler--Lagrange Monge--Amp{\`e}re system} if $\mathcal{M}$ is locally generated by a bi-decomposable $n$-form $\omega = \omega_1 - \omega_2$ satisfying \begin{gather*} d\omega_1 \equiv 0, \qquad d\omega_2 \equiv 0 \quad \rm{mod}\ \theta. \end{gather*} \end{Def}
Thus we have a sequence of classes of Monge--Amp{\`e}re systems: \begin{gather*}
\text{\{M-A system with Lagrangian pair\} \ $\supset$ \ \{M-A system with integrable Lag. pair\}}\\ \text{$\supset$ \ \{Hesse M-A system\} \ $\supset$ \ \{Euler--Lagrange M-A system\} \ $\supset$ \ \{f\/lat M-A system\}}. \end{gather*}
We will study Hesse Monge--Amp{\`e}re systems in detail, and, in fact, we characterize the above three classes in an intrinsic way. Note that the equation of non-zero constant Gauss--Kronecker curvature $K = c$ falls into the class of Euler--Lagrange Monge--Amp{\`e}re systems and the equation of improper af\/f\/ine hyperspheres ${\rm Hess} = c$ falls into the class of f\/lat Monge--Amp{\`e}re systems.
We will show, in Sections~\ref{Homogeneous M-A systems with Lagrangian pairs.}.2--\ref{Homogeneous M-A systems with Lagrangian pairs.}.5, that the Monge--Amp{\`e}re systems def\/ined by the constant Gaussian curvature equation $K=c$ in ${\mathbf{E}}^{n+1}$, $S^{n+1}$, $H^{n+1}$ are Euler--Lagrange systems.
\begin{Prop} \label{Hesse-MA} Let $\mathcal{M}$ be a Hesse Monge--Amp{\`e}re system, i.e., a Monge--Amp{\`e}re system with a flat Lagrangian pair $(E_1,E_2)$ generated by an $n$-form $\omega = \omega_1-\omega_2$ enjoying the bi-decomposing condition. Then $\mathcal{M}$ is locally isomorphic to a Monge--Amp{\`e}re system $\mathcal{M}'$ on an open subset $U \subset {\mathbf{R}}^{2n+1}$ with the standard Lagrangian pair $(E^{\rm{st}}_1, E^{\rm{st}}_2)$ which is locally generated by an $n$-form of type \begin{gather*} F(x_1, \dots, x_n, z, p_1, \dots, p_n) dx_1\wedge\cdots\wedge dx_n - dp_1\wedge\cdots\wedge dp_n \end{gather*} for a non-vanishing function $F$ on $U$. In particular, there exists a system of local Darboux coordinates $(x_1, \dots, x_n, z, p_1, \dots, p_n)$ in some neighborhood of each point, such that $\mathcal{M}$ is represented by a Hesse Monge--Amp{\`e}re equation of the form \begin{gather*} \mathrm{Hess}(z) = F(x_1, \dots, x_n, z, p_1, \dots, p_n) \qquad (F\not=0). \end{gather*} \end{Prop}
\begin{proof} Since $(E_1,E_2)$ is f\/lat, around each point of~$M$, there exists a system of local coordinates $ (x_1,\dots,x_n,z,p_1,\dots,p_n) $ such that $D =\{ \theta = 0 \}$, $\theta=dz-\sum\limits_{i=1}^np_idx_i$, and that \begin{gather*} E_1 =\mathrm{Ker}({\pi}_{2*})=\left\langle \frac{\partial}{\partial x_1}+ p_1\frac{\partial}{\partial z}, \dots,\frac{\partial}{\partial x_n}+p_n\frac{\partial}{\partial z}\right\rangle, \\ E_2 =\mathrm{Ker}({\pi}_{1*})=\left\langle \frac{\partial}{\partial p_1},\dots, \frac{\partial}{\partial p_n}\right\rangle, \\ {\pi}_1 \colon \ {\mathbf{R}}^{2n+1} \to {\mathbf{R}}^{n+1}, \qquad \pi_1(x_1, \dots, x_n, z, p_1, \dots, p_n)=(x_1, \dots, x_n, z), \\ {\pi}_2 \colon \ {\mathbf{R}}^{2n+1} \to {\mathbf{R}}^{n+1}, \qquad {\pi}_2(x_1, \dots, x_n, z, p_1, \dots, p_n) = \left(p_1, \dots, p_n, \sum_{i=1}^np_ix_i-z\right). \end{gather*} This means that $\mathcal{M}$ is locally isomorphic to a Monge--Amp{\`e}re system with the standard Lag\-rangian pair. Then, from the bi-decomposing condition, we have, setting $x = (x_1, \dots, x_n)$, $p = (p_1, \dots, p_n)$, \begin{gather*} \omega = \omega_1-\omega_2
=f(x,z,p)dx_1\wedge \cdots \wedge dx_n - g(x,z,p)dp_1\wedge \cdots \wedge dp_n \end{gather*} for some functions $f(x,z,p)$, $g(x,z,p)$ $(\not=0)$ on ${\mathbf{R}}^{2n+1}$. Therefore, putting $F=f/g$, we have a~form $\mathrm{Hess}(z) = F(x,z,p)$. \end{proof}
\begin{Prop} \label{E-L-MA} Let $\mathcal{M}$ be a Hesse Monge--Amp{\`e}re system. Then $\mathcal{M}$ is an Euler--Lagrange Monge--Amp{\`e}re system if and only if $\mathcal{M}$ is locally isomorphic to a Monge--Amp{\`e}re system $\mathcal{M}'$ on an open subset $U \subset {\mathbf{R}}^{2n+1}$ with the standard Lagrangian pair generated by an $n$-form of type \begin{gather*} F_1(x_1, \dots, x_n, z)\cdot F_2(\widetilde{z}, p_1, \dots, p_n) dx_1\wedge\cdots\wedge dx_n - dp_1\wedge\cdots\wedge dp_n \end{gather*} for some non-vanishing functions $F_1$ of~$x$,~$z$ and $F_2$ of $\widetilde{z} = \sum\limits_{j=1}^n p_jx_j-z$ and~$p$. In particular, ${\mathcal M}$~is locally represented as \begin{gather*} \mathrm{Hess}(z) = F_1(x_1, \dots, x_n, z)\cdot F_2(\widetilde{z}, p_1, \dots, p_n) \qquad (F_1,\ F_2\not=0). \end{gather*} \end{Prop}
\begin{proof} By the assumption, we can set \begin{gather*} \omega_1 = f(x,z,p)dx_1\wedge \cdots \wedge dx_n, \qquad \omega_2 = g(x,z,p)dp_1\wedge \cdots \wedge dp_n. \end{gather*} Since \begin{gather*} d\omega_1 =df\wedge dx_1\wedge \cdots \wedge dx_n =\left(\sum_{i=1}^n\frac{\partial f}{\partial x_i}dx_i+\frac{\partial f}{\partial z}dz+ \sum_{i=1}^n\frac{\partial f}{\partial p_i}dp_i\right)\wedge dx_1\wedge \cdots \wedge dx_n \\ \hphantom{d\omega_1}{} \equiv \sum_{i=1}^n\frac{\partial f}{\partial p_i} dp_i\wedge dx_1\wedge \cdots \wedge dx_n, \quad \rm{mod}\ \theta, \end{gather*} we see $d\omega_1 \equiv 0 \ \rm{mod}\ \theta$ if and only if $f$ is independent of $p_1, \dots, p_n$: $f=f(x_1, \dots, x_n,z)$. Besides, we take another system of local coordinates $(x, \tilde{z}, p)$ with $\tilde{z} = \sum\limits_{i=1}^n x_ip_i - z$. Then $\theta = - \Big(d\tilde{z} - \sum\limits_{i=1}^n x_idp_i\Big)$ and we have \begin{gather*} d\omega_2 = dg\wedge dp_1\wedge \cdots \wedge dp_n
= \left(\sum_{i=1}^n\frac{\partial g}{\partial x_i}dx_i +\frac{\partial g}{\partial \tilde{z}}d\tilde{z}+ \sum_{i=1}^n\frac{\partial g}{\partial p_i}dp_i\right) \wedge dp_1\wedge \cdots \wedge dp_n \\ \hphantom{d\omega_2}{}
\equiv \sum_{i=1}^n\frac{\partial g}{\partial x_i} dx_i \wedge dp_1\wedge \cdots \wedge dp_n, \quad \rm{mod}\ \theta. \end{gather*} Therefore we see $d\omega_2 \equiv 0 \ \rm{mod}\ \theta$ if and only if~$g$ is independent of~$x$ for the system of local coordinates $(x, \tilde{z}, p)$: $g = g(\widetilde{z}, p_1, \dots, p_n)$. Therefore putting $F_1 = f$, $F_2 = 1/g$, we have a form $\mathrm{Hess}(z) = F_1(x, z)\cdot F_2(\widetilde{z}, p)$. \end{proof}
By Theorem \ref{Uniqueness}, we have the following:
\begin{Prop} \label{contact invariance} Let $n \geq 3$. Then Definition~{\rm \ref{Hesse-M-A-def}} $($resp.\ Definition~{\rm \ref{E-L-M-A-def})} depends only on the Monge--Amp{\`e}re system and does not depend on the choice of Lagrangian pairs of the Monge--Amp{\`e}re system. The class of Hesse Monge--Amp{\`e}re systems $($resp.\ the class of Euler--Lagrange Monge--Amp{\`e}re systems$)$ is invariant under contact transformations. \end{Prop}
\begin{proof} Let ${\mathcal M}$ be a Monge--Amp{\`e}re system with a Lagrangian pair on a contact manifold $(M, D)$. Let $(E_1, E_2)$ and $(E_1', E_2')$ be two Lagrangian pairs associated to ${\mathcal M}$. Then, by Theorem~\ref{Uniqueness}, $E_1' = E_1$, $E_2' = E_2$ or $E_1' = E_2$, $E_2' = E_1$. Therefore the f\/latness of Lagrangian pair depends only on~${\mathcal M}$. Moreover it is clear that, the condition of Def\/inition~\ref{E-L-M-A-def}, i.e., the possibility of a bi-decomposition $\omega = \omega_1 - \omega_2$ of a local generator~$\omega$ into closed decomposable forms $\omega_1$, $\omega_2$ up to a contact form~$\theta$ depends only on~${\mathcal M}$.
Let $\Phi \colon (M, D) \to (M', D')$ be a contact transformation between~$(M, D)$ and another contact manifold $(M', D')$. Set ${\mathcal M}' = {\Phi^{-1}}^*{\mathcal M}$. Then ${\mathcal M}'$ is a Monge--Amp{\`e}re system with the Lagrangian pair $(\Phi_*E_1, \Phi_*E_2)$. If $(E_1, E_2)$ is f\/lat, then so is $(\Phi_*E_1, \Phi_*E_2)$. Moreover if $\omega = \omega_1 - \omega_2$ is a bi-decomposition satisfying the condition of Def\/inition~\ref{E-L-M-A-def} for ${\mathcal M}$, then $({\Phi^{-1}})^*\omega = ({\Phi^{-1}})^*\omega_1 - ({\Phi^{-1}})^*\omega_2$ satisf\/ies the condition of Def\/inition \ref{E-L-M-A-def} for~${\mathcal M}'$. \end{proof}
\begin{Rem} \label{Poincare-Cartan form} The Euler--Lagrange systems are studied in \cite{B-G-G} via the key notion ``Poincar\'{e}--Cartan form''. If an Euler--Lagrange Monge--Amp{\`e}re system with Lagrangian pair is given by \begin{gather*} \theta = dz - \sum_{i=1}^n p_i dx_i = - \left(d\tilde{z} - \sum_{i=1}^n x_i dp_i\right), \\ \omega = f(x, z) dx_1 \wedge \cdots \wedge dx_n - g(p, \tilde{z}) dp_1\wedge \cdots \wedge dp_n, \end{gather*} then the Poincar\'{e}--Cartan form is given by \begin{gather*} \Pi = \theta\wedge\omega = f(x, z) dz \wedge dx_1 \wedge \cdots \wedge dx_n - g(p, \tilde{z}) d\tilde{z} \wedge dp_1\wedge \cdots \wedge dp_n. \end{gather*} \end{Rem}
To conclude this section, we characterize, among others, the class of equations $\mathrm{Hess} = c$ in terms of the projective structure:
\begin{Prop} Let $\mathcal{M}$ be an Euler--Lagrange Monge--Amp{\`e}re system on $M=P(T^*P^{n+1})$ induced by the diagram \begin{gather*} \xymatrix{ & P\big(T^*P^{n+1}\big) \ar[ld]_{\pi_1} \ar[rd]^{\pi_2} &\\ P^{n+1} & & {P^{n+1}}^*}
\end{gather*} and $W_1=P^{n+1}$, $W_2={P^{n+1}}^* $, generated by a bi-decomposable $n$-form $\omega = \omega_1-\omega_2$. Then the condition \begin{gather*} \nabla \omega_1=0, \quad \nabla \omega_2=0 \end{gather*} is satisfied for the covariant derivative $\nabla$ of the flat connection induced on each local projective chart of~$W_1=P^{n+1}$ if and only if~${\mathcal M}$ is represented by a Monge--Amp{\`e}re equation \begin{gather*} \mathrm{Hess}(z) = c \quad (c \ \text{\rm is constant}, \ c\not=0). \end{gather*} \end{Prop}
\begin{proof} From the equivalence between the conditions $\nabla \omega_1=0$ and $df=0$ (resp.\ $\nabla \omega_2=0$ and $dg=0$) in the proof of Proposition~\ref{Hesse-MA}, we have that $f$, $g$ are non-zero constants. Therefore we have the form $\mathrm{Hess}(z) = c$ $(\not=0)$. \end{proof}
\section[A method to construct Monge-Amp{\`e}re systems with Lagrangian pairs]{A method to construct Monge--Amp{\`e}re systems\\ with Lagrangian pairs} \label{A method to construct systems with Lagrangian pairs.}
Let $(M, D)$ be a contact manifold of dimension $2n+1$ and $(E_1, E_2)$ a Lagrangian pair on $(M, D)$. Consider the quotient bundle $TM/E_2$ (resp.~$TM/E_1$) of rank~$n+1$ and a~section~$\omega'_1$ (resp.~$\omega'_2$) to the line bundle $\wedge^{n+1}(TM/E_2)^*$ (resp.~$\wedge^{n+1}(TM/E_1)^*$) of\/f the zero-section. Let $\theta$ be a local contact form def\/ining $D$. Recall that the Reeb vector f\/ield $R = R_{\theta}$ is def\/ined by the condition $i_{R_\theta} \theta = 1$, $i_{R_\theta} d\theta = 0$. Then we def\/ine an $n$-form on $M$ by \begin{gather*} \omega = i_{R_\theta} \big(\Pi_1^*\omega'_1 - \Pi_2^*\omega'_2\big). \end{gather*} Here $\Pi_1\colon TM \to TM/E_2$ (resp.\ $\Pi_2\colon TM \to TM/E_1$) denotes the bundle projection, and $\Pi_1^*\colon (TM/E_2)^* \to T^*M$ and $\Pi_1^*\colon \wedge^{n+1}(TM/E_2)^* \to \wedge^{n+1}T^*M$ (resp.\ $\Pi_2^*\colon (TM/E_1)^* \to T^*M$ and $\Pi_2^*\colon \wedge^{n+1}(TM/E_1)^* \to \wedge^{n+1}T^*M$) its dual injections. Then we have the following basic lemma for our construction:
\begin{Lem} \label{MA-system construction} The differential system $\mathcal{M} = \langle \theta, d\theta, \omega \rangle$, generated by $\omega$ and the contact form $\theta$, is independent of the choice of $\theta$, and depends only on given $\omega'_1$, $\omega'_2$. \end{Lem}
We call $\mathcal{M}$ {\it the Monge--Amp{\`e}re system with} $(E_1, E_2)$ {\it induced from} $\omega'_1$, $\omega'_2$.
To show Lemma \ref{MA-system construction}, take a local symplectic frame $X_1, \dots, X_n, P_1, \dots, P_n$ of $D$ with respect to $d\theta$: \begin{gather*} d\theta(X_i, X_j) = 0, \qquad d\theta(P_i, P_j) = 0, \qquad d\theta(X_i, P_j) = \delta_{ij}, \end{gather*} with \begin{gather*} E_1 = \langle X_1, \dots, X_n\rangle, \qquad E_2 = \langle P_1, \dots, P_n\rangle. \end{gather*} Then $X_1, \dots, X_n, P_1, \dots, P_n, R_\theta$ form a local frame of~$TM$.
\begin{Lem} \label{Reeb} The Reeb vector field $R_{\theta'}$ for a contact form $\theta' = \rho\theta$ defining $D$ is given by \begin{gather*} R_{\theta'} = \frac{1}{\rho^2}\left[ \sum_{i=1}^n (P_i\rho)X_i - \sum_{i=1}^n (X_i\rho)P_i + \rho R_{\theta} \right]. \end{gather*} \end{Lem}
\begin{proof} Let $\alpha_1, \dots, \alpha_n, \beta_1, \dots, \beta_n, \theta$ be the frame of $T^*M$ dual to $X_1, \dots, X_n, P_1, \dots, P_n, R_\theta$. Set $R_{\theta'} = \sum_i a_iX_i + \sum_j b_jP_j + cR_{\theta}$. Then, by $i_{R_{\theta'}} \theta' = 1$, we have $c = \frac{1}{\rho}$. Besides we have \begin{gather*} i_{R_{\theta'}} d\theta' = [i_{R_{\theta'}} d\rho]\theta - i_{R_{\theta'}} \theta\ d\rho + \rho i_{R_{\theta'}} d\theta \\ \hphantom{i_{R_{\theta'}} d\theta'}{}
= \left[\sum_i a_i(X_i\rho) + \sum_j b_j(P_j\rho) + c(R_\theta\rho)\right]\theta - c d\rho + \rho\left[\sum_i a_i\beta_i - \sum_j b_j\alpha_j\right], \end{gather*} while $d\rho = \sum_i (X_i\rho)\alpha_i + \sum_j (P_j\rho)\beta_j + (R_\theta \rho)\theta$. Therefore, by $i_{R_{\theta'}} d\theta' = 0$, we have \begin{gather*} - \sum_i [\rho b_i + c(X_i\rho)]\alpha_i + \sum_j [\rho a_j - c(P_j\rho)]\beta_j + \left[\sum_i a_i(X_i\rho) + \sum_j b_j(P_j\rho)\right]\theta = 0. \end{gather*} Thus we have \begin{gather*} a_i = \frac{1}{\rho^2} P_i\rho, \qquad b_i = - \frac{1}{\rho^2} X_i\rho, \qquad c = \frac{1}{\rho}.\tag*{\qed} \end{gather*} \renewcommand{\qed}{} \end{proof}
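As a quick check of Lemma~\ref{Reeb} (an illustrative remark only), consider the special case where $\rho$ is a non-zero constant: then $X_i\rho = P_i\rho = 0$ and the formula reduces to
\begin{gather*}
R_{\theta'} = \frac{1}{\rho}\, R_{\theta},
\end{gather*}
which indeed satisf\/ies $i_{R_{\theta'}} \theta' = \frac{1}{\rho}\, i_{R_{\theta}} (\rho\theta) = 1$ and $i_{R_{\theta'}} d\theta' = \frac{1}{\rho}\, i_{R_{\theta}} (\rho\, d\theta) = 0$.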
\begin{proof}[Proof of Lemma \ref{MA-system construction}] We set $\Pi_1^*{\omega'}_1 = \lambda\cdot \theta\wedge\alpha_1\wedge\cdots\wedge\alpha_n$ for a function~$\lambda$. Then $i_{R_{\theta}} (\Pi_1^*{\omega'}_1) = \lambda\cdot \alpha_1\wedge\cdots\wedge\alpha_n$. By Lemma~\ref{Reeb}, we have for~$\theta'$ \begin{gather*} i_{R_{\theta'}} \big(\Pi_1^*{\omega'}_1\big) \equiv \frac{1}{\rho}\lambda\cdot \alpha_1\wedge\cdots\wedge\alpha_n = \frac{1}{\rho}\cdot i_{R_{\theta}} {\omega'}_1,\quad \rm{mod}\ \theta. \end{gather*} Similarly we have $i_{R_{\theta'}} (\Pi_2^*{\omega'}_2) \equiv \frac{1}{\rho}\cdot i_{R_{\theta}} {\omega'}_2$, $\rm{mod}\ \theta$. Therefore \begin{gather*} i_{R_{\theta'}} \big(\Pi_1^*{\omega'}_1 - \Pi_2^*{\omega'}_2\big) \equiv \frac{1}{\rho} i_{R_{\theta}} \big(\Pi_1^*{\omega'}_1 - \Pi_2^*{\omega'}_2\big),\quad \rm{mod}\ \theta. \end{gather*} Therefore $\mathcal{M} = \langle \theta, d\theta, \omega\rangle$ is independent of the choice of~$\theta$. \end{proof}
Suppose that $(E_1, E_2)$ is integrable. Let \begin{gather*} \xymatrix{ & M^{2n+1} \ar[ld]_{\pi_1} \ar[rd]^{\pi_2} &\\ W^{n+1}_1 & & W^{n+1}_2}
\end{gather*} be a double Legendrian f\/ibration induced by a Lagrangian pair $(E_1, E_2)$ locally. Suppose that a volume $(n+1)$-form ${\Omega_1}$ is given on $W_1$ (resp.\ a~volume $(n+1)$-form ${\Omega_2}$ on $W_2$). Since $(TM/E_2)_x \cong T_{\pi_1(x)}W_1$ via $\pi_1$ (resp.\ $(TM/E_1)_x \cong T_{\pi_2(x)}W_2$ via $\pi_2$) for $x\in M$, ${\Omega_1}$ (resp.~${\Omega_2}$) is regarded as a non-zero section $\omega'_1$ of $\wedge^{n+1}(TM/E_2)^*$ (resp.~$\omega'_2$ of $\wedge^{n+1}(TM/E_1)^*$). Then, following the general setting, we set \begin{gather*} \omega = i_{R_\theta} \big(\pi_1^*\Omega_1 - \pi_2^*\Omega_2\big). \end{gather*} Note that $\pi_1^*\Omega_1$, $\pi_2^*\Omega_2$ are basic forms (cf.~\cite{I-L}) for $\pi_1$, $\pi_2$ respectively; however, $i_{R_\theta} \pi_1^*\Omega_1$ and $i_{R_\theta} \pi_2^*\Omega_2$ need not be basic.
\begin{Ex} \label{standard model calcu} Consider ${\mathbf{R}}^{2n+1}$ with coordinates $(x, z, p) = (x_1, \dots, x_n, z$, $p_1, \dots, p_n)$, $D = \{ \theta = 0\}$, $\theta = dz - \sum\limits_{i=1}^n p_i dx_i$, $E^{\rm{st}}_1 = \langle \frac{\partial}{\partial x_1} + p_1\frac{\partial}{\partial z}, \dots, \frac{\partial}{\partial x_n} + p_n\frac{\partial}{\partial z}\rangle$, $E^{\rm{st}}_2 = \langle \frac{\partial}{\partial p_1}, \dots, \frac{\partial}{\partial p_n}\rangle$, $\pi_1 \colon {\mathbf{R}}^{2n+1} \to {\mathbf{R}}^{n+1}$, $\pi_1(x, z, p) = (x, z)$, $\pi_2 \colon {\mathbf{R}}^{2n+1} \to {\mathbf{R}}^{n+1}$, $\pi_2(x, z, p) = (p, x\cdot p - z)$. Set $\tilde{z} = x\cdot p - z$. The Reeb vector f\/ield for $\theta$ is given by $R = \frac{\partial}{\partial z}$.
Let $\pi_1^*\omega'_1 = f(x, z, p) dz \wedge dx_1\wedge \cdots \wedge dx_n$ be a non-zero section of $\wedge^{n+1}(T{\mathbf{R}}^{2n+1}/E_2)^*$ pulled-back to an $(n+1)$-form on ${\mathbf{R}}^{2n+1}$. Then $i_R (\pi_1^*\omega'_1) = f(x, z, p) dx_1\wedge \cdots \wedge dx_n$. Similarly, let $\pi_2^*\omega'_2 = -g(x, z, p) d\tilde{z} \wedge dp_1\wedge \cdots \wedge dp_n$ be a non-zero section of $\wedge^{n+1}(T{\mathbf{R}}^{2n+1}/E_1)^*$ pulled-back to an $(n+1)$-form on~${\mathbf{R}}^{2n+1}$. Then $i_R (\pi_2^*\omega'_2) = g(x, z, p) dp_1\wedge \cdots \wedge dp_n$. Thus, following the general setting, we have \begin{gather*} \omega = f(x, z, p)dx_1\wedge \cdots \wedge dx_n - g(x, z, p) dp_1\wedge \cdots \wedge dp_n, \end{gather*} and we obtain Hesse Monge--Amp{\`e}re systems.
Further, let $\Omega_1 = f(x, z) dz \wedge dx_1\wedge \cdots \wedge dx_n$ (resp.\ $\Omega_2 = - g(p, \tilde{z}) d\tilde{z} \wedge dp_1\wedge \cdots \wedge dp_n$) be a~volume form on~$W_1 = {\mathbf{R}}^{n+1}$ (resp.\ on~$W_2 = {\mathbf{R}}^{n+1}$). Then $i_R \pi_1^*\Omega_1 = f(x, z) dx_1\wedge \cdots \wedge dx_n$ (resp.\ $i_R \pi_2^*\Omega_2 = g(p, \tilde{z}) dp_1\wedge \cdots \wedge dp_n$). Thus we have \begin{gather*} \omega = f(x, z) dx_1\wedge \cdots \wedge dx_n - g(p, \tilde{z}) dp_1\wedge \cdots \wedge dp_n, \end{gather*} and we obtain Euler--Lagrange Monge--Amp\`ere systems. \end{Ex}
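In particular, the simplest choice $\Omega_1 = c\, dz \wedge dx_1\wedge \cdots \wedge dx_n$ with a non-zero constant $c$ and $\Omega_2 = - d\tilde{z} \wedge dp_1\wedge \cdots \wedge dp_n$ yields
\begin{gather*}
\omega = c\, dx_1\wedge \cdots \wedge dx_n - dp_1\wedge \cdots \wedge dp_n,
\end{gather*}
namely the f\/lat Monge--Amp{\`e}re system for $\mathrm{Hess}(z) = c$, which is treated in detail in Section~\ref{Homogeneous M-A systems with Lagrangian pairs.}.1.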
\begin{Rem} The Poincar\'{e}--Cartan form is given by $\Pi = \pi_1^*\Omega_1 - \pi_2^*\Omega_2$ (see Remark~\ref{Poincare-Cartan form}). \end{Rem}
\section{Homogeneous Monge--Amp{\`e}re systems with Lagrangian pairs} \label{Homogeneous M-A systems with Lagrangian pairs.}
{\bf 8.1.} Monge--Amp{\`e}re system on ${\mathbf{R}}^{2n+1}$ as ${\rm Hess} = c$ on~${\mathbf{R}}^{n+1}$. The Monge--Amp{\`e}re system with Lagrangian pair for the equation ${\rm Hess}(f)= c$ in equi-af\/f\/ine geometry is given as follows.
Consider the contact manifold $M = {\mathbf{R}}^{2n+1}$ with coordinates \begin{gather*} (x, z, p) = (x_1, \dots, x_n, z, p_1, \dots, p_n) \end{gather*} and with the contact form $\theta = dz - \sum\limits_{i=1}^n p_idx_i$. We set two Lagrangian sub-bundles $E_1$, $E_2$ of the contact distribution $D=\{ \theta =0 \}$ by \begin{gather*} E_1 = \left\langle \frac{\partial}{\partial x_1}+p_1\frac{\partial}{\partial z}, \dots,\frac{\partial}{\partial x_n}+p_n\frac{\partial}{\partial z}\right\rangle, \qquad E_2 = \left\langle \frac{\partial}{\partial p_1},\dots,\frac{\partial}{\partial p_n} \right\rangle, \end{gather*} which form a Lagrangian pair of $(M, D)$. The Reeb vector f\/ield~$R$ is given by~$\frac{\partial}{\partial z}$. Note that $ - \theta = d\Big(\sum\limits_{i=1}^n p_ix_i - z\Big) - \sum\limits_{i=1}^n x_idp_i. $ Then we have the double Legendrian f\/ibration induced by~$(E_1,E_2)$: \begin{gather*} \xymatrix{ & M={\mathbf{R}}^{2n+1} \ar[ld]_{\pi_1} \ar[rd]^{\pi_2} &\\ W_1={\mathbf{R}}^{n+1} & & W_2={\mathbf{R}}^{n+1}}
\end{gather*} where \begin{gather*} \pi_1(x, z, p) = (x, z), \qquad \pi_2(x, z, p) = (p, \tilde{z}), \qquad \tilde{z} = \sum_{i=1}^n p_ix_i - z. \end{gather*} Moreover take the $(n+1)$-forms \begin{gather*} \Omega_1 = c(dz \wedge dx_1\wedge \cdots \wedge dx_{n}),\qquad \Omega_2 = - d\tilde{z} \wedge dp_1\wedge \cdots \wedge dp_{n} \end{gather*} on $W_1 = {\mathbf{R}}^{n+1}$ ($c\in {\mathbf{R}}$, $c\not= 0$) and on $W_2 = {\mathbf{R}}^{n+1}$ respectively. Then $\omega = i_R (\pi_1^*\Omega_1 - \pi_2^*\Omega_2)$ is given by \begin{gather*} \omega = c dx_1\wedge \cdots \wedge dx_{n} - dp_1\wedge \cdots \wedge dp_{n}. \end{gather*}
Thus we construct the Monge--Amp{\`e}re system $\mathcal{M}=\langle \theta, d\theta, \omega \rangle $ with the Lagrangian pair $(E_1,E_2)$ globally on $M={\mathbf{R}}^{2n+1}$.
\begin{Prop} Under the situation above, we have a Monge--Amp{\`e}re system $\mathcal{M}$ generated by $(\theta, d\theta, \omega)$ with a Lagrangian pair $(E_1,E_2)$ on $M={\mathbf R}^{2n+1}$. The projection to $W_1 = {\mathbf{R}}^{n+1}$ of a~geometric solution of $\mathcal{M}$ satisfies the equation $\textrm{Hess}(f)=c$, when it is represented as a~graph $z = f(x_1,\dots,x_n)$ outside of its singular locus. The projection to $W_2 = {\mathbf{R}}^{n+1}$ of a~geometric solution of $\mathcal{M}$ satisfies the equation $\textrm{Hess}(f)=\frac{1}{c}$, when it is represented as a~graph $z'=\sum\limits_{i=1}^np_ix_i-z = f(p_1,\dots,p_n)$ outside of its singular locus. \end{Prop}
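To see directly how the equation arises (a brief sketch), suppose that a geometric solution $L$ is represented over $W_1$ as a graph $z = f(x_1, \dots, x_n)$ outside of its singular locus. Then on $L$ we have $p_i = \frac{\partial f}{\partial x_i}$, hence
\begin{gather*}
dp_1\wedge \cdots \wedge dp_n\big\vert_L = \det\left(\frac{\partial^2 f}{\partial x_i \partial x_j}\right) dx_1\wedge \cdots \wedge dx_n\big\vert_L, \qquad
\omega\vert_L = \big(c - \mathrm{Hess}(f)\big)\, dx_1\wedge \cdots \wedge dx_n\big\vert_L,
\end{gather*}
so that the condition $\omega\vert_L = 0$ is exactly the equation $\mathrm{Hess}(f) = c$. The statement on the projection to $W_2$ follows in the same way in the dual coordinates $(p, \tilde{z})$.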
\begin{Rem} The Monge--Amp{\`e}re system for ${\mathrm{Hess}} = c$ is isomorphic to the system for ${\mathrm{Hess}} = 1$ if $c > 0$, and is isomorphic to the system for ${\mathrm{Hess}} = -1$ if $c < 0$. \end{Rem}
\begin{Rem} The Monge--Amp{\`e}re system for ${\mathrm{Hess}} = c, c \not= 0$ has the natural symmetry by the group $G'$ of equi-af\/f\/ine transformations on $W_1={\mathbf{R}}^{n+1}$ preserving the vector f\/ield $\frac{\partial}{\partial z}$. The group $G'$ is given by the semi-direct product $G' = G''\ltimes {{\mathbf{R}}}^{n+1}$ of $G'' \subset {\rm SL}(n+1, {\mathbf{R}})$ and ${\mathbf{R}}^{n+1}$, where \begin{gather*} G'' =\left\{ \left.\left( \begin{matrix} A & 0 \\ {}^t\!a & 1 \end{matrix} \right)\, \right\vert \, A \in {\rm SL}(n, {\mathbf{R}}), \,a \in {\mathbf{R}}^n \right\}. \end{gather*} Note that $\dim G' = n(n+2)$ and also that each element of $G'$ is identif\/ied with \begin{gather*} \left( \begin{matrix} 1 & 0 & 0 \\ b & A & 0 \\ c & {}^t\!a & 1 \end{matrix} \right) \end{gather*} via an appropriate embedding ${\rm SL}(n+1, {\mathbf{R}}) \hookrightarrow {\rm GL}(n+2, {\mathbf{R}})$. The Monge--Amp{\`e}re system for ${\mathrm{Hess}} = c, c \not= 0$, in fact, has bigger symmetry which attains the maximum for the dimension estimate of automorphisms given in Section~\ref{Automorphisms} for bi-decomposable Monge--Amp{\`e}re systems.
Let $G$ be a subgroup of the projective transformation group ${\rm PGL}(n+2, {\mathbf{R}})$ on ${\mathbf{R}}^{n+2}$ consisting of transformations represented by the matrices \begin{gather*} \widetilde{A} = \left( \begin{matrix} \ell & 0 & 0 \\ b & A & 0 \\ c & {}^t\!a & k \end{matrix} \right), \end{gather*} considered up to non-zero scalar multiples. Here $\ell, k \in {\mathbf{R}}^{\times}$, $A \in {\rm GL}(n, {\mathbf{R}})$, $a, b \in {\mathbf{R}}^n, c \in {\mathbf{R}}$ satisfying the condition $(\det A)^2 = (k\ell)^n$. From the condition, we can take $\det A = \pm 1$, $\ell = \pm 1/k$ in ${\rm PGL}(n+2, {\mathbf{R}})$. If $n$ is odd, then we can take $\det A = 1$ and $\ell = 1/k$. Note that $\dim G = (n+1)^2$.
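For instance, the count $\dim G = (n+1)^2$ can be verif\/ied directly: the parameters $\ell$, $k$, $A$, $a$, $b$, $c$ contribute $1 + 1 + n^2 + n + n + 1 = n^2 + 2n + 3$ dimensions, and subtracting one for the scalar ambiguity in ${\rm PGL}(n+2, {\mathbf{R}})$ and one for the relation $(\det A)^2 = (k\ell)^n$ gives
\begin{gather*}
\dim G = \big(n^2 + 2n + 3\big) - 2 = (n+1)^2.
\end{gather*}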
The group $G$ acts on $M = {\mathbf{R}}^{2n+1} = {\mathbf{R}}^n \times {\mathbf{R}} \times {\mathbf{R}}^n$ transitively by \begin{gather*} \widetilde{A}(x, z, p) = \left(\frac{1}{\ell}(Ax + b), \, \frac{1}{\ell}\big(kz - {}^t\!ax - c\big), \, k\left(p - \frac{1}{k}{}^t\!a\right)A^{-1}\right). \end{gather*} Here $a$, $b$, $x$ are regarded as column $n$-vectors and $p$ a row $n$-vector. The contact form $\theta = dz - pdx$ is transformed to $\frac{k}{\ell}\theta$, and therefore the contact structure $D=\{ \theta =0 \}$ is $G$-invariant. The bi-decomposable form $\omega$ is transformed to \begin{gather*} \frac{1}{\ell^n}(\det A)c(dx_1\wedge \cdots \wedge dx_{n}) - k^n(\det A)^{-1}(dp_1\wedge \cdots \wedge dp_{n}) = \frac{1}{\ell^n}(\det A) \omega, \end{gather*} by the condition $(\det A)^2 = (k\ell)^n$. Therefore $G$ leaves the Monge--Amp{\`e}re system ${\mathcal M} = \langle \theta, d\theta, \omega\rangle$ for ${\mathrm{Hess}} = c$, $c \not= 0$, invariant.
The group $G$ acts on $W_1 = {\mathbf{R}}^{n+1}$ and on $W_2={\mathbf{R}}^{n+1}$ respectively by \begin{gather*} \widetilde{A}(x, z) = \left(\frac{1}{\ell}(Ax + b), \, \frac{1}{\ell}\big(kz - {}^t\!ax - c\big)\right), \end{gather*} and by \begin{gather*} \widetilde{A}(\tilde{z}, p) = \left(\frac{1}{\ell}\left(k\tilde{z} + k\left(p - \frac{1}{k}{}^t\!a\right)A^{-1}b + c\right), \, k\left(p - \frac{1}{k}{}^t\!a\right)A^{-1}\right). \end{gather*} Then the volume form $\Omega_1$ on~$W_1$ (resp.~$\Omega_2$ on~$W_2$) is transformed to $\frac{k}{\ell^{n+1}}(\det A)\Omega_1$ (resp.\ $\frac{k^{n+1}}{\ell}(\det A)^{-1}\Omega_2$). \end{Rem}
{\bf 8.2.} Monge--Amp{\`e}re system on $T_1{\mathbf{E}}^{n+1}$ as $K=c$ in ${\mathbf{E}}^{n+1}$. In Sections~8.2--8.4, we describe the Monge--Amp{\`e}re systems corresponding to the equation of constant Gaussian curvature in Euclidean, spherical or hyperbolic geometry. To provide the concrete form of the Monge--Amp{\`e}re system, we treat the three cases separately.
In the famous paper of Gauss~\cite{S}, the ``Gaussian curvature'' of a space surface is introduced as the ratio of areas in the ``Gauss map'' of the surface. We observe that the equation of constant Gaussian curvature is regarded as a Monge--Amp{\`e}re system with Lagrangian pair as follows.
Consider the unit tangent bundle $T_1{\mathbf{E}}^{n+1}={\mathbf{E}}^{n+1}\times S^n$ of the Euclidean space ${\mathbf{E}}^{n+1}$. The standard contact structure on ${\mathbf{E}}^{n+1}\times S^n$ is given by the one-form $\theta = y_1dx_1 + y_2dx_2 + \cdots + y_{n+1}dx_{n+1}$ on ${\mathbf{E}}^{n+1}\times {\mathbf{E}}^{n+1}$, restricted to ${\mathbf{E}}^{n+1}\times S^n$. Here \begin{gather*} (x;y)=(x_1,x_2,\dots,x_{n+1};y_1,y_2,\dots,y_{n+1}) \end{gather*} is the system of coordinates on ${\mathbf{E}}^{n+1}\times {\mathbf{E}}^{n+1}$. We set the contact distribution $D=\{ \theta = 0 \} \subset T({\mathbf{E}}^{n+1}\times S^n)$ and two Lagrangian subbundles of $D$: \begin{gather*} E_1 =\left\{ u = \xi_1 \frac{\partial}{\partial x_1}+ \xi_2 \frac{\partial}{\partial x_2}+\cdots + \xi_{n+1}\frac{\partial}{\partial x_{n+1}}
\,\Big|\, \xi_1y_1+\xi_2y_2+\cdots +\xi_{n+1}y_{n+1}=0 \right\}, \\ E_2 =\left\{ v=\eta_1 \frac{\partial}{\partial y_1}+ \eta_2 \frac{\partial}{\partial y_2}+\cdots + \eta_{n+1}\frac{\partial}{\partial y_{n+1}}
\,\Big|\, v\ \text{is tangent to}\ S^n \right\}, \end{gather*} which form an integrable Lagrangian pair of~$(M, D)$. Then we have the double Legendrian f\/ibration induced by $(E_1,E_2)$: \begin{gather*} \xymatrix{ & M={\mathbf{E}}^{n+1}\times S^n \ar[ld]_{\pi_1} \ar[rd]^{\pi_2} &\\ W_1={\mathbf{E}}^{n+1} & & W_2={\mathbf{R}}\times S^n}
\end{gather*} Here $\pi_1(x,y)=x, \pi_2(x,y)=(x\cdot y,y)$ for $(x,y)\in {\mathbf{E}}^{n+1}\times S^n \subset {\mathbf{E}}^{n+1} \times {\mathbf{E}}^{n+1}$.
\begin{Lem} \label{Euclid-M-A} The above Lagrangian pair is f\/lat, namely, the bi-Legendrian f\/ibration is contactomorphic to the standard one: The Lagrangian pair $(E_1, E_2) = ({\rm Ker}(\pi_2)_*, {\rm Ker}(\pi_1)_*)$ is flat $($see the standard example in Introduction$)$. \end{Lem}
\begin{proof} On the open sets $U = \{ (x, y) \in {\mathbf{E}}^{n+1}\times S^n
\,|\, y_{n+1} \not= 0 \}$ of ${\mathbf{E}}^{n+1}\times S^n$
and $V = \{ (w, y) \in {\mathbf{R}}\times S^n \,|\,
y_{n+1} \not= 0 \}$ of ${\mathbf{R}}\times S^n$, take the system of coordinates $x'_i = x_i$, $p'_i = - \frac{y_i}{y_{n+1}}$, $z' = x_{n+1}$, $1 \leq i \leq n$, on~$U$, and def\/ine the dif\/feomorphism $\Phi \colon U \to {\mathbf{R}}^{2n+1}$ by $\Phi(x, y) = (x', z', p')$. Moreover take the system of coordinates $\tilde{z}' = \frac{w}{y_{n+1}}$, $p'_i = - \frac{y_i}{y_{n+1}}$, $1 \leq i \leq n$, on $V$ and def\/ine the dif\/feomorphism $\psi \colon V \to {\mathbf{R}}^{n+1}$ by $\psi(w, y) = (\tilde{z}', p')$. We denote by $\varphi \colon {\mathbf{E}}^{n+1} \to {\mathbf{R}}^{n+1}$ the identity map, forgetting the Euclidean metric. Then $(\Phi, \varphi, \psi)$ induces the contactomorphism from $(U, D; E_1, E_2)$ to $({\mathbf{R}}^{2n+1}, D_{\rm{st}}; E^{\rm{st}}_1, E^{\rm{st}}_2)$. In fact $\theta = y_{n+1}\Big(dz' - \sum\limits_{i=1}^n p'_i dx'_i\Big) = y_{n+1}(\Phi^{-1})^*\theta_{\rm{st}}$ and $y_{n+1} = \pm 1\Big/\sqrt{1 + \sum\limits_{i=1}^n (p'_i)^2}$. Similarly on each open set $\{ y_i \not= 0\}$, $1 \leq i \leq n+1$, we see the f\/latness of $(E_1, E_2)$. \end{proof}
We endow ${\mathbf{E}}^{n+1}$ with the standard volume form \begin{gather*} \Omega_1 = c dx_1\wedge dx_2 \wedge \cdots \wedge dx_{n+1}, \end{gather*} multiplied by a real constant~$c$ $(\not= 0)$. Moreover, in the coordinates $(z;y_1,y_2,\dots,y_{n+1})$ on ${\mathbf{R}}\times S^n$ $(\subset {\mathbf{R}} \times {\mathbf{E}}^{n+1})$, we endow ${\mathbf{R}}\times S^n$ with the standard volume form \begin{gather*} \Omega_2 = dz \wedge \sum_{i=1}^{n+1}\big((-1)^{i+1}y_idy_1\wedge \cdots \wedge \breve{dy_i} \wedge \cdots \wedge dy_{n+1}\big)\vert_{{\mathbf{R}}\times S^n}. \end{gather*} The Reeb vector f\/ield $R$ on ${\mathbf{E}}^{n+1}\times S^n$ is given by $R = y_1\frac{\partial}{\partial x_1} + y_2\frac{\partial}{\partial x_2} + \cdots + y_{n+1}\frac{\partial}{\partial x_{n+1}}$, the ``tautological'' vector f\/ield. Then we set $\omega = i_R (\pi_1^*\Omega_1 - \pi_2^*\Omega_2)$. Since \begin{gather*} i_R \pi_2^*\Omega_2 =
i_R d(x\cdot y)\wedge \sum_{i=1}^{n+1}\big((-1)^{i+1}y_idy_1\wedge \cdots \wedge \breve{dy_i} \wedge \cdots \wedge dy_{n+1}\big)\\ \hphantom{i_R \pi_2^*\Omega_2 }{}
= \sum_{i=1}^{n+1}((-1)^{i+1}y_idy_1\wedge \cdots \wedge \breve{dy_i} \wedge \cdots \wedge dy_{n+1}), \end{gather*} we have \begin{gather*} \omega
= c(y_1dx_2\wedge \cdots \wedge dx_{n+1} - y_2dx_1\wedge dx_3\wedge \cdots \wedge dx_{n+1} + \cdots + (-1)^ny_{n+1}dx_1\wedge \cdots \wedge dx_n) \\
\hphantom{\omega=}{} - (y_1dy_2 \wedge \cdots \wedge dy_{n+1} - y_2dy_1 \wedge dy_3\wedge \cdots \wedge dy_{n+1} + \cdots +
(-1)^ny_{n+1}dy_1 \wedge \cdots \wedge dy_n) \end{gather*} on ${\mathbf{E}}^{n+1}\times S^n$.
This is exactly the reincarnation of the equation $K = c$ from the original def\/inition due to Gauss.
\begin{Prop} Under the situation above, we have a Monge--Amp{\`e}re system $\mathcal{M}$ generated by $(\theta, d\theta, \omega)$ with a Lagrangian pair $(E_1,E_2)$ on $M=T_1{\mathbf{E}}^{n+1}={\mathbf{E}}^{n+1}\times S^n$. The Gaussian curvature of the projection to $W_1={\mathbf{E}}^{n+1}$ of a geometric solution of~$\mathcal{M}$ is equal to constant~$c$ outside of singular locus. \end{Prop}
\begin{Rem} The Monge--Amp{\`e}re system for $K = c$ is isomorphic to the system for $K = 1$ if $c > 0$, and is isomorphic to the system for $K = -1$ if $c < 0$. \end{Rem}
\begin{Rem} Let $G$ be the Euclidean group on ${\mathbf{E}}^{n+1}$. The group $G$ acts on the orthonormal frame bundle transitively, therefore so on the unit tangent sphere bundle (the orthonormal $1$-Stiefel bundle) ${\mathbf{E}}^{n+1}\times S^n$ of ${\mathbf{E}}^{n+1}$. We f\/ix the origin $0$ in ${\mathbf{E}}^{n+1}$ and identify ${\mathbf{E}}^{n+1}$ with ${\mathbf{R}}^{n+1}$, giving the isomorphism $G \cong {\rm O}(n+1) \ltimes {\mathbf{R}}^{n+1}$. The contact structure $D = \{ \theta = 0\}$ and the contact form $\theta = \sum\limits_{i=1}^{n+1} y_idx_i$ on ${\mathbf{E}}^{n+1}\times S^n$ are $G$-invariant, and any $G$-invariant contact form is a non-zero multiple of~$\theta$.
The group $G$ acts on ${\mathbf{R}} \times S^n$ by \begin{gather*} g(r, v) = (g(0)\cdot Av + r, \, Av) = (b\cdot Av + r, \, Av). \end{gather*} Here, for $g \in G$, we write $g(x) = Ax + b$ ($A \in {\rm O}(n+1)$, $b \in {\mathbf{R}}^{n+1}$). Then we get the diagram: \begin{gather*} {\mathbf{E}}^{n+1} = G/H' \leftarrow {\mathbf{E}}^{n+1}\times S^n = G/H \rightarrow {\mathbf{R}}\times S^n = G/H'', \end{gather*} for the isotropy groups $H$, $H'$ and $H''$ satisfying \begin{gather*} H' \cong {\rm O}(n+1) \hookleftarrow H \cong {\rm O}(n) \hookrightarrow H'' \cong {\rm O}(n) \ltimes {\mathbf{R}}^n. \end{gather*} The $G$-invariant volume forms on ${\mathbf{E}}^{n+1}$ and ${\mathbf{R}}\times S^n$ are unique up to non-zero constant. We can construct the Monge--Amp{\`e}re system $\mathcal{M}=\langle \theta, d\theta, \omega \rangle $ with the Lagrangian pair $(E_1,E_2)$ globally on $M=T_1{\mathbf{E}}^{n+1}$. \end{Rem}
{\bf 8.3.} Monge--Amp{\`e}re system on $T_1S^{n+1}$ as $K=c$ in $S^{n+1}$. Consider ${\mathbf{E}}^{n+2}\times {\mathbf{E}}^{n+2}$ with coordinates $x = (x_0, x_1, \dots, x_{n+1}), y = (y_0, y_1, \dots, y_{n+1})$. Set $x\cdot y = \sum_{i=0}^{n+1}x_iy_i$, the standard inner product. Consider \begin{gather*} T_1S^{n+1} = \big\{ (x, y) \in {\mathbf{E}}^{n+2}\times {\mathbf{E}}^{n+2}
\,|\, \vert x \vert = 1,\, \vert y\vert = 1, \, x\cdot y = 0 \big\}, \end{gather*} the unit tangent bundle of~$S^{n+1}$. The standard contact structure~$D$ of~$T_1S^{n+1}$ is def\/ined by the contact form $\theta = \sum\limits_{i=0}^{n+1} y_idx_i$. Then we have the double Legendrian f\/ibration \begin{gather*} \xymatrix{ & M=T_1S^{n+1} \ar[ld]_{\pi_1} \ar[rd]^{\pi_2} &\\ W_1=S^{n+1} & & W_2=S^{n+1}}
\end{gather*} where $\pi_1(x, y) = x$, $\pi_2(x, y) = y$.
\begin{Lem} \label{sphere-M-A} The above bi-Legendrian fibration is contactomorphic to the standard model: The Lagrangian pair $(E_1, E_2) = ({\rm Ker}(\pi_2)_*, {\rm Ker}(\pi_1)_*)$ is flat. \end{Lem}
\begin{proof} For instance, on the open subset $U = \{ x_0 \not= 0, \,y_{n+1} \not= 0\}$ of $T_1S^{n+1}$, consider the system of coordinates $x'_i = \frac{x_i}{x_0}$, $z' = - \frac{x_{n+1}}{x_0}$, $ p'_i = \frac{y_i}{y_{n+1}}$, $1 \leq i \leq n$. Moreover we set $\tilde{z}' = - \frac{y_0}{y_{n+1}}$. Then $\tilde{z}' + z' - \sum\limits_{i=1}^n x'_ip'_i = 0$ is satisf\/ied. Moreover we have \begin{gather*} \theta = - x_0y_{n+1}\left(dz' - \sum_{i=1}^n p'_idx'_i\right) = x_0y_{n+1}\left(d\tilde{z}' - \sum_{i=1}^n x'_i dp'_i\right). \end{gather*} Thus we easily see that the dif\/feomorphism $\Phi \colon U \to {\mathbf{R}}^{2n+1}$ def\/ined by $\Phi(x, y) = (x', z', p')$ provides the required contactomorphism. \end{proof}
We set \begin{gather*} \Omega_1 = c\ i_X(dx_0\wedge\cdots\wedge dx_{n+1}), \end{gather*} the standard volume form on $W_1=S^{n+1}$ multiplied by $c(\not= 0)\in {\mathbf{R}}$, and set \begin{gather*} \Omega_2 = i_Y(dy_0\wedge\cdots\wedge dy_{n+1}), \end{gather*} the standard volume form on $W_2=S^{n+1}$. Here $X = \sum\limits_{i=0}^{n+1} x_i\frac{\partial}{\partial x_i}$ and $Y = \sum\limits_{i=0}^{n+1} y_i\frac{\partial}{\partial y_i}$. The Reeb vector f\/ield $R$ on $T_1S^{n+1}$ is given by $R = \sum\limits_{i=0}^{n+1} \big(y_i\frac{\partial}{\partial x_i} - x_i\frac{\partial}{\partial y_i}\big)$. Then we have the following, which is the geometric form of the equation ``Gaussian curvature $=c$''.
\begin{Prop} The associated Monge--Amp{\`e}re system~$\mathcal{M}$ with Lagrangian pair on $M=T_1S^{n+1}$ is generated by~$\theta$ and \begin{gather*} \omega = i_R \big({\pi}_1^*{\Omega}_1-{\pi}_2^*{\Omega}_2\big)\\ \hphantom{\omega}{} = c \sum_{0 \leq j < i \leq n+1} (-1)^{i+j} x_iy_j dx_0 \wedge \cdots \wedge \breve{dx_j} \wedge \cdots \wedge \breve{dx_i} \wedge \cdots \wedge dx_{n+1}\\ \hphantom{\omega=}{} - c \sum_{0 \leq i < k \leq n+1} (-1)^{i+k} x_iy_k dx_0 \wedge \cdots \wedge \breve{dx_i} \wedge \cdots \wedge \breve{dx_k} \wedge \cdots \wedge dx_{n+1} \\ \hphantom{\omega=}{} + \sum_{0 \leq j < i \leq n+1} (-1)^{i+j} y_ix_j dy_0 \wedge \cdots \wedge \breve{dy_j} \wedge \cdots \wedge \breve{dy_i} \wedge \cdots \wedge dy_{n+1} \\ \hphantom{\omega=}{} - \sum_{0 \leq i < k \leq n+1} (-1)^{i+k} y_ix_k dy_0 \wedge \cdots \wedge \breve{dy_i} \wedge \cdots \wedge \breve{dy_k} \wedge \cdots \wedge dy_{n+1}. \end{gather*} The Gaussian curvature of the projection to $W_1=S^{n+1}$ of a geometric solution of~$\mathcal{M}$ is equal to constant $c$ outside of singular locus, while the Gaussian curvature of the projection to~$W_2=S^{n+1}$ of a geometric solution of~$\mathcal{M}$ is equal to constant~$\frac{1}{c}$ outside of singular locus, \end{Prop}
\begin{Rem} Note that the Gaussian curvature $K$ of a hypersurface in the unit sphere $S^{n+1}$ and its sectional curvature $S$ as a Riemannian manifold are related by $S = K + 1$. For example, the great sphere $S^{n} \subset S^{n+1}$ has constant Gauss map to $S^{n+1}$ and $K = 0$, while it has sectional curvature $1$. \end{Rem}
\begin{Rem} The group $G = {\rm O}(n+2)$ acts on $W_1=S^{n+1}$, on $W_2=S^{n+1}$
and on $S^{n+1}\times S^{n+1}$, thus on
$T_1S^{n+1} = \{ (x, y) \in S^{n+1}\times S^{n+1} \,|\, x\cdot y = 0\}$. The contact structure $D = \{ \theta = 0\}$ and the contact form $\theta = \sum\limits_{i=0}^{n+1} y_i dx_i$ are $G$-invariant, and any contact form def\/ining $D$ is a non-zero constant multiple of~$\theta$.
We get the diagram: \begin{gather*} W_1=S^{n+1} = G/H' \leftarrow M=T_1S^{n+1} = G/H \rightarrow W_2=S^{n+1} = G/H'', \end{gather*} for the isotropy groups $H$, $H'$ and $H''$ satisfying \begin{gather*} H' \cong {\rm O}(n+1) \hookleftarrow H \cong {\rm O}(n) \hookrightarrow H'' \cong {\rm O}(n+1). \end{gather*} The $G$-invariant volume forms on $W_1=S^{n+1}$ and $W_2=S^{n+1}$ are unique up to non-zero constant. We can construct the Monge--Amp{\`e}re system $\mathcal{M}=\langle \theta, d\theta, \omega \rangle $ with Lagrangian pair globally on $M=T_1S^{n+1}$. \end{Rem}
{\bf 8.4.} Monge--Amp{\`e}re system on $T_1H^{n+1}$ as $K=c$ in $H^{n+1}$. In general, let us consider ${\mathbf{R}}_r^{n+2} = {\mathbf{R}}^{n+2}$ with the inner product \begin{gather*} x\cdot y = - \sum_{i=0}^{r-1} x_iy_i + \sum_{j=r}^{n+1} x_jy_j, \end{gather*} of signature $(r, n+2-r)$. We set, for $\varepsilon_1 = 0, \pm 1$, $\varepsilon_2 = 0, \pm 1$, and for a~real number~$a$, \begin{gather*} S^{2n+1}_{\varepsilon_1, \varepsilon_2, a} = \big\{ (x, y) \in {\mathbf{R}}_r^{n+2}\times {\mathbf{R}}_r^{n+2}
\,|\, x\cdot x = \varepsilon_1, \, y\cdot y = \varepsilon_2, \, x\cdot y = a , \, x \not= 0, \, y \not= 0 \big\}, \end{gather*}
provided that $S^{2n+1}_{\varepsilon_1, \varepsilon_2, a} \not= \varnothing$. Moreover we set $S^{n+1}_\varepsilon = \{ x \in {\mathbf{R}}_r^{n+2} \,|\, x\cdot x = \varepsilon, \, x \not= 0 \}$ for $\varepsilon = 0, \pm 1$. On $S^{2n+1}_{\varepsilon_1, \varepsilon_2, a}$, the contact structure $D$ is def\/ined by $\theta = - \sum\limits_{i=0}^{r-1} y_i dx_i + \sum\limits_{j=r}^{n+1} y_j dx_j$. We have the double Legendrian f\/ibration \begin{gather*} \xymatrix{ & M=S^{2n+1}_{\varepsilon_1, \varepsilon_2} \ar[ld]_{\pi_1} \ar[rd]^{\pi_2} &\\ W_1 = S^{n+1}_{\varepsilon_1} & & W_2 = S^{n+1}_{\varepsilon_2}}
\end{gather*} by $\pi_1(x, y) = x$ and $\pi_2(x, y) = y$.
In the case where $r=1$, $\varepsilon_1=-1$, $\varepsilon_2=1$, $a=0$, since \begin{gather*} S_{-1,1,0}^{2n+1}=T_1H^{n+1}=T_{-1}S_1^{n+1}\subset {\mathbf{R}}_1^{n+2}\times {\mathbf{R}}_1^{n+2},\\
S_{-1}^{n+1}=H^{n+1}\colon \ \text{the hyperbolic space}, \qquad S_1^{n+1}\colon \ \text{the de Sitter space}, \end{gather*} we have the double Legendrian f\/ibration \begin{gather*} \xymatrix{ & M=T_1H^{n+1}\cong H^{n+1}\times S^n \ar[ld]_{\pi_1} \ar[rd]^{\pi_2} &\\ W_1 = H^{n+1} & & W_2 =S_1^{n+1}}
\end{gather*} (cf.\ the hyperbolic Gauss map~\cite{I-P-S}).
\begin{Lem} \label{Minkowskii-M-A} The above bi-Legendrian fibration is contactomorphic to the standard model. The Lagrangian pair $(E_1, E_2) = ({\rm Ker}(\pi_2)_*, {\rm Ker}(\pi_1)_*)$ is flat. \end{Lem}
\begin{proof} For instance, on the open subset $\{ x_0 \not= 0,\, y_{n+1} \not= 0\}$, we take the system of coordinates $x'_i = \frac{x_i}{x_0}$, $p'_i = \frac{y_i}{y_{n+1}}$, $z' = - \frac{x_{n+1}}{x_0}$. Moreover we set $\tilde{z}' = \frac{y_0}{y_{n+1}}$. Then we have $z' + \tilde{z}' + \sum\limits_{i=1}^n x'_i p'_i = 0$, and $\theta = - x_0y_{n+1}\Big(dz' - \sum\limits_{i=1}^n p'_i dx'_i\Big)$. Then we see that the bi-Legendrian f\/ibration is contactomorphic to the standard model. \end{proof}
We set \begin{gather*} \Omega_1 = c\ i_X(dx_0\wedge\cdots\wedge dx_{n+1}), \end{gather*} the standard volume form on $W_1=H^{n+1}$ multiplied by $c\in {\mathbf{R}}$ $(c\not= 0)$, and set \begin{gather*} \Omega_2 = i_Y(dy_0\wedge\cdots\wedge dy_{n+1}), \end{gather*} the standard volume form on $W_2=S_1^{n+1}$. Here $X = -x_0\frac{\partial}{\partial x_0}+\sum\limits_{i=1}^{n+1} x_i\frac{\partial}{\partial x_i}$ and $Y = -y_0\frac{\partial}{\partial y_0}+\sum\limits_{i=1}^{n+1} y_i\frac{\partial}{\partial y_i}$. The Reeb vector f\/ield~$R$ on~$T_1H^{n+1}$ is given by $R = \sum\limits_{i=0}^{n+1} \big(y_i\frac{\partial}{\partial x_i} + x_i\frac{\partial}{\partial y_i}\big)$. Then we have the following.
\begin{Prop} The associated Monge--Amp{\`e}re system $\mathcal{M}$ with Lagrangian pair on $M=T_1H^{n+1}$ is generated by $\theta$ and \begin{gather*} \omega = i_R \big({\pi}_1^*{\Omega}_1-{\pi}_2^*{\Omega}_2\big)\\ \hphantom{\omega}{} = c \sum_{i=1}^{n+1} (-1)^{i} x_0y_i dx_1 \wedge \cdots \wedge \breve{dx_i} \wedge \cdots \wedge dx_{n+1} \\ \hphantom{\omega=}{} + c \sum_{0 \leq j < i \leq n+1} (-1)^{i+j} x_iy_j dx_0 \wedge \cdots \wedge \breve{dx_j} \wedge \cdots \wedge \breve{dx_i} \wedge \cdots \wedge dx_{n+1} \\ \hphantom{\omega=}{} - c \sum_{1 \leq i < k \leq n+1} (-1)^{i+k} x_iy_k dx_0 \wedge \cdots \wedge \breve{dx_i} \wedge \cdots \wedge \breve{dx_k} \wedge \cdots \wedge dx_{n+1} \\ \hphantom{\omega=}{} + \sum_{i=1}^{n+1} (-1)^{i} y_0x_i dy_1 \wedge \cdots \wedge \breve{dy_i} \wedge \cdots \wedge dy_{n+1} \\ \hphantom{\omega=}{} + \sum_{0 \leq j < i \leq n+1} (-1)^{i+j} y_ix_j dy_0 \wedge \cdots \wedge \breve{dy_j} \wedge \cdots \wedge \breve{dy_i} \wedge \cdots \wedge dy_{n+1} \\ \hphantom{\omega=}{} - \sum_{1 \leq i < k \leq n+1} (-1)^{i+k} y_ix_k dy_0 \wedge \cdots \wedge \breve{dy_i} \wedge \cdots \wedge \breve{dy_k} \wedge \cdots \wedge dy_{n+1}. \end{gather*}
The Gaussian curvature of the projection to $W_1=H^{n+1}$ of a geometric solution of~$\mathcal{M}$ is equal to constant $c$ outside of singular locus. \end{Prop}
\begin{Rem} The group $G = {\rm O}(1,n+1)$ acts on $W_1=H^{n+1}$, on $W_2=S_1^{n+1}$ and on $H^{n+1}\times S_1^{n+1}$, thus on
$T_1H^{n+1} = \{ (x, y) \in H^{n+1}\times S_1^{n+1} \,|\, x\cdot y = 0\}$. The contact structure $D = \{ \theta = 0\}$ and the contact form $\theta = - y_0 dx_0 + \sum\limits_{i=1}^{n+1} y_i dx_i$ are $G$-invariant, and any contact form def\/ining $D$ is a non-zero constant multiple of $\theta$.
We get the diagram: \begin{gather*} W_1=H^{n+1} = G/H' \leftarrow M=T_1H^{n+1} = G/H \rightarrow W_2=S_1^{n+1} = G/H'', \end{gather*} for the isotropy groups $H$, $H'$ and $H''$ satisfying \begin{gather*} H' \cong {\rm O}(n+1) \hookleftarrow H \cong {\rm O}(n) \hookrightarrow H'' \cong {\rm O}(1,n). \end{gather*} The $G$-invariant volume forms on $W_1=H^{n+1}$ and $W_2=S_1^{n+1}$ are unique up to non-zero constant. We can construct the Monge--Amp{\`e}re system $\mathcal{M}= \langle \theta, d\theta, \omega \rangle $ with Lagrangian pair globally on $M=T_1H^{n+1}$. \end{Rem}
{\bf 8.5.} The Monge--Amp{\`e}re systems introduced in Sections~8.1--8.4 are all Euler--Lagrange Monge--Amp{\`e}re systems.
In fact, by Lemmas~\ref{Euclid-M-A}, \ref{sphere-M-A}, \ref{Minkowskii-M-A}, there exists a system of local coordinates $(x', z', p') = (x'_1, \dots, x'_n, z', p'_1, \dots, p'_n)$ at any point of~$M$ such that a local contact form is given by $\theta = dz' - \sum\limits_{i=1}^n p'_idx'_i$, the Lagrangian pair is given by \begin{gather*}
E_1 = \{ v \in TM \,|\, \theta(v) = 0, \, dp'_1(v) = 0, \dots, dp'_n(v) = 0 \}, \\
E_2 = \{ u \in TM \,|\, \theta(u) = 0, \, dx'_1(u) = 0, \dots, dx'_n(u) = 0 \}, \end{gather*} the Monge--Amp{\`e}re system~${\mathcal M}$ is generated by an $n$-form \begin{gather*} \tilde{\omega} = f(x'_1, \dots, x'_n, z')dx'_1\wedge \cdots \wedge dx'_n - g\left(p'_1, \dots, p'_n, \sum_{i=1}^n x'_ip'_i - z'\right)dp'_1\wedge \cdots \wedge dp'_n \end{gather*} for some (pulled-back) functions $f$, $g$ on $W_1$, $W_2$ respectively. If a local contactomorphism $\Phi \colon M \to {\mathbf{R}}^{2n+1}$ and dif\/feomorphisms $\varphi \colon W_1 \to {\mathbf{R}}^{n+1}$, $\psi \colon W_2 \to {\mathbf{R}}^{n+1}$ give the contactomorphism of the bi-Legendrian f\/ibration to the standard model, then we have \begin{gather*} \big(\varphi^{-1}\big)^*\Omega_1 = f(x, z)dz \wedge dx_1 \wedge \cdots \wedge dx_n, \qquad \big(\psi^{-1}\big)^*\Omega_2 = g(p, \tilde{z})d\tilde{z} \wedge dp_1 \wedge \cdots \wedge dp_n, \end{gather*} for some non-zero functions $f$, $g$.
For example, we calculate $f$, $g$ explicitly in Euclidean geometry (Section~8.2), for the system of coordinates $x'_i = x_i$, $p'_i = - y_i/y_{n+1}$ $(1\leq i\leq n)$, $z' = x_{n+1}$ on the open set $\{ y_{n+1} \not= 0 \}$. A~direct calculation yields that the Monge--Amp{\`e}re system for $K = c$ is generated by \begin{gather*} \tilde{\omega} = c \ dx'_1\wedge \cdots \wedge dx'_n - (-1)^n\big(1 + {p'}_{1}^{2} + \cdots + {p'}_{n}^{2}\big)^{-\frac{n+2}{2}}dp'_1\wedge \cdots \wedge dp'_n, \end{gather*} with the contact form $\theta = dz' - \sum\limits_{i=1}^n p'_idx'_i$.
Since $(\varphi^{-1})^*\Omega_1$ and $(\psi^{-1})^*\Omega_2$ are local volume forms on ${\mathbf{R}}^{n+1}$, the argument in Example~\ref{standard model calcu} and Lemma~\ref{MA-system construction} yields that all Monge--Amp{\`e}re systems introduced in Sections 8.1--8.4 are Euler--Lagrange Monge--Amp{\`e}re systems.
\LastPageEnding
\end{document}
\begin{document}
\title[A study on multiple zeta values]{A study on multiple zeta values from the viewpoint of zeta-functions of root systems} \date{}
\begin{abstract} We study multiple zeta values (MZVs) from the viewpoint of zeta-functions associated with the root systems which we have studied in our previous papers. In fact, the $r$-ple zeta-functions of Euler-Zagier type can be regarded as the zeta-function associated with a certain sub-root system of type $C_r$. Hence, by the action of the Weyl group, we can find new aspects of MZVs which imply that the well-known formula for MZVs given by Hoffman and Zagier coincides with Witten's volume formula associated with the above sub-root system of type $C_r$. Also, from this observation, we can prove some new formulas, which include, in particular, the parity results for double and triple zeta values. As another important application, we give a certain refinement of restricted sum formulas, which yields restricted sum formulas among MZVs of arbitrary depth $r$; such formulas were previously known only in the cases of depth $2$, $3$, $4$. Furthermore, considering a sub-root system of type $B_r$ analogously, we can give relevant analogues of the Hoffman-Zagier formula, parity results and restricted sum formulas. \end{abstract}
\maketitle
\section{Introduction}\label{sec-1}
Let $\mathbb{N}$, $\mathbb{N}_0$, $\mathbb{Z}$, $\mathbb{Q}$, $\mathbb{R}$, $\mathbb{C}$ be the set of positive integers, non-negative integers, rational integers, rational numbers, real numbers, and complex numbers, respectively.
We define the Euler-Zagier $r$-ple zeta-function (simply called the Euler-Zagier sum) by \begin{equation} \zeta_r(s_1,\ldots,s_r)=\sum_{0<n_1<\cdots<n_r}\frac{1}{n_1^{s_1}n_2^{s_2}
\cdots n_r^{s_r}},\label{e-1-1} \end{equation} where $s_1,\ldots,s_r$ are complex variables. When $(s_1,s_2,\ldots,s_r)\in \mathbb{N}^r$ $(s_r>1)$, this is called the multiple zeta value (MZV) of depth $r$, first studied by Hoffman \cite{Hoff} and Zagier~\cite{Za}. Though the opposite order of summation in the definition of $\zeta_{r}(s_1,\ldots,s_r)$ has also been used recently, we use the order in \eqref{e-1-1} throughout this paper because it is natural in our study. In the research of MZVs, the main target is to give non-trivial relations among them, in order to investigate the structure of the algebra generated by them (for the details, see Kaneko \cite{Ka}).
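For example, the simplest non-trivial relation of this kind, going back to Euler, is
\begin{equation*}
\zeta_2(1,2)=\sum_{0<n_1<n_2}\frac{1}{n_1 n_2^{2}}=\zeta(3),
\end{equation*}
where $\zeta(s)$ denotes the Riemann zeta-function.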
In our previous papers \cite{KMT}-\cite{KM4} and \cite{MT1}, as more general multiple series, we defined and studied multi-variable zeta-functions associated with root systems of type $X_r$ ($X=A,B,C,D,E,F,G$) denoted by $\zeta_r(s_1,\ldots,s_n;X_r)$, where $n$ is the number of positive roots of type $X_r$ (see definition \eqref{def-zeta}). In particular, when $s_1=\cdots=s_r=s$, $\zeta_r(s,\ldots,s;X_r)$ essentially coincides with the Witten zeta-function (see Witten \cite{Wi} and Zagier~\cite{Za}). An important fact is \begin{equation} \zeta_r(2k,2k,\ldots,2k;X_r)\in \mathbb{Q}\cdot \pi^{2kn}\quad (k\in \mathbb{N}), \label{Witten-VF} \end{equation} which is a consequence of Witten's volume formula given in \cite{Wi}. Since we considered a multi-variable version of the Witten zeta-function, we were able to determine the rational coefficients in \eqref{Witten-VF} explicitly in a generalized form (see \cite[Theorem 4.6]{KM3}).
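For instance, in the simplest case $X_r=A_1$ (where the number of positive roots is $n=1$), the zeta-function reduces essentially to the Riemann zeta-function, and \eqref{Witten-VF} is nothing but Euler's classical evaluation
\begin{equation*}
\zeta(2k)\in \mathbb{Q}\cdot \pi^{2k}\quad (k\in \mathbb{N}),\qquad \text{e.g.}\quad \zeta(2)=\frac{\pi^2}{6},\qquad \zeta(4)=\frac{\pi^4}{90}.
\end{equation*}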
Recently, in our previous paper \cite{KMT-MZ}, we regarded MZVs as special values of zeta-functions of root systems of type $A_r$, and clarified the structure of the shuffle product procedure for MZVs from this viewpoint. In fact, we showed that the shuffle product procedure can be described in terms of partial fraction decompositions of zeta-functions of root systems of type $A_r$.
The main idea in the present paper is to regard \eqref{e-1-1} as a specialization of zeta-functions of root systems of type $C_r$ (see below). It is essential in our theory that $C_r$ is not simply-laced. In fact, there exists a subset of the root system of type $C_r$ such that the Euler-Zagier sum \eqref{e-1-1} is the zeta-function associated with this subset (see Section \ref{sec-4}). This subset itself is a root system, and hence the Weyl group naturally acts on \eqref{e-1-1}. General fundamental results will be stated in Section \ref{sec-3}, and their proofs will be given in Section \ref{sec-proof1}. As a consequence, it can be shown that an analogue of formula \eqref{Witten-VF} corresponding to this sub-root system implies the well-known result given by Hoffman \cite[Section 2]{Hoff} and Zagier \cite[Section 9]{Za} independently: \begin{equation} \zeta_r(2k,2k,\ldots,2k)\in \mathbb{Q}\cdot \pi^{2kr}\quad (k\in \mathbb{N})\label{H-Z} \end{equation} (see Corollary \ref{Cor-Z}).
Furthermore, based on this observation in the cases when $r=2,3$, we will give explicit formulas for double series and for triple series (see Proposition \ref{Pr-1} and Theorem \ref{T-5-1}) which include what is called the parity results for double and triple zeta values (see Corollary \ref{C-5-3}).
Similarly we can consider analogues of those results corresponding to a sub-root system of type $B_r$. In fact, we can define a $B_r$-type analogue of $\zeta_{r}({\bf s})$ by \begin{align} & \zeta_{r}^\sharp({\bf s}) =\sum_{m_1,\ldots,m_r=1}^\infty \prod_{i=1}^{r} \frac{1}{(2\sum_{j=r-i+1}^{r-1}m_j+m_r)^{{s_i}}}, \label{def-Br-Zeta} \end{align} which is a ``partial sum'' of the series of $\zeta_{r}({\bf s})$ (see Section \ref{sec-6}). From the viewpoint of root systems, we see that this has some properties similar to those of $\zeta_{r}({\bf s})$, because the root system of type $B_r$ is dual to that of type $C_r$. Actually we can obtain an analogue of \eqref{H-Z} for this series (see Corollary \ref{C-6-2}). We also prove a formula relating the values of $\zeta_{2}^\sharp({\bf s})$ to the Riemann zeta values (see Theorem \ref{T-B2-EZ}), which gives the parity result corresponding to type $B_r$. This result plays an important role in a recent study on the dimension of the linear space spanned by double zeta values of level $2$ given by Kaneko and Tasaka (see \cite{Ka-Ta}).
The fact that parity results hold in those classes implies that those are ``nice'' classes. In Section \ref{sec-acs} we will study those classes from the analytic point of view, and prove that those classes, as well as the subclass of zeta-functions of root systems of type $A_r$ introduced in \cite{KMT-MZ}, are ``closed'' in a certain analytic sense.
Another important consequence of our fundamental theorem in Section \ref{sec-3} is the ``refined restricted sum formulas'' for the values of $\zeta_{r}({\bf s})$ and $\zeta_{r}^\sharp({\bf s})$, which are embodied in Corollaries \ref{Cor-Cr-Sr} and \ref{Cor-Br-Sr}.
One of the famous formulas among MZVs is the sum formula, which is, in the case of double zeta values, written as \begin{equation} \sum_{j=2}^{K-1}\zeta_2(K-j,j)=\zeta(K)\quad (K\in \mathbb{Z}_{\geq 3}). \label{sumformula} \end{equation} Gangl, Kaneko and Zagier \cite{GKZ} obtained the following formulas, which ``divide'' \eqref{sumformula} for even $K$ into two parts: \begin{equation} \begin{split} & \sum_{a,b \in \mathbb{N}\atop a+b=N} \zeta_2(2a,2b)=\frac{3}{4}\zeta(2N)\in \mathbb{Q}\cdot \pi^{2N}\quad (N\in \mathbb{Z}_{\geq 2}), \\ & \sum_{a,b \in \mathbb{N}\atop a+b=N} \zeta_2(2a-1,2b+1)=\frac{1}{4}\zeta(2N)\in \mathbb{Q}\cdot \pi^{2N}\quad (N\in \mathbb{Z}_{\geq 2}), \end{split} \label{F-GKZ} \end{equation} which are sometimes called the restricted sum formulas. More recently, Shen and Cai \cite{Shen-Cai} gave restricted sum formulas for triple and quadruple zeta values (see \eqref{sumf-triple} and \eqref{sumf-fourth}). As we will discuss in Section \ref{sumf}, our Corollaries \ref{Cor-Cr-Sr} and \ref{Cor-Br-Sr} give more refined restricted sum formulas for $\zeta_r({\bf s})$ and for $\zeta_r^\sharp({\bf s})$ of an arbitrary depth $r$. From these refined formulas we can deduce the restricted sum formulas for an arbitrary depth $r$, actually in a generalized form involving a parameter $d$ (see Theorems \ref{sumf-EZ-Cr} and \ref{sumf-EZ-Br}).
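As a quick numerical illustration (not needed for any of the proofs), the case $N=2$ of \eqref{F-GKZ} reads $\zeta_2(2,2)=\tfrac34\zeta(4)$ and $\zeta_2(1,3)=\tfrac14\zeta(4)$. The following Python sketch is ours; it assumes the mpmath library, and the rewriting of the inner sums through polygamma functions is our own device.
\begin{verbatim}
# Check (ours) of (F-GKZ) at N = 2, assuming mpmath:
#   zeta_2(2,2) = (3/4) zeta(4),  zeta_2(1,3) = (1/4) zeta(4).
# Inner sums: sum_{n>m} n^{-2} = psi(1, m+1),  sum_{n>m} n^{-3} = -psi(2, m+1)/2.
from mpmath import mp, psi, zeta, nsum, inf

mp.dps = 20
z22 = nsum(lambda m: psi(1, m + 1) / m**2, [1, inf])
z13 = nsum(lambda m: -psi(2, m + 1) / (2 * m), [1, inf])
print(z22, 3 * zeta(4) / 4)   # both approx. 0.8117424...
print(z13, zeta(4) / 4)       # both approx. 0.2705808...
\end{verbatim}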
A part of the results in the present paper has been announced in \cite{KMT-PJA}.
\section{Zeta-functions of root systems and root sets}\label{sec-2}
In this section, we recall the definition of zeta-functions of root systems studied in our papers \cite{KMT}-\cite{KM3}. For the details of basic facts about root systems and Weyl groups, see \cite{Bourbaki,Hum72,Hum}.
Let $V$ be an $r$-dimensional real vector space equipped with an inner product $\langle \cdot,\cdot\rangle$.
The dual space $V^*$ is identified with $V$ via the inner product of $V$. Let $\Delta$ be a finite irreducible reduced root system, and $\Psi=\{\alpha_1,\ldots,\alpha_r\}$ its fundamental system. We fix $\Delta_+$ and $\Delta_-$ as the set of all positive roots and negative roots respectively. Then we have a decomposition of the root system $\Delta=\Delta_+\coprod\Delta_-$. Let $Q=Q(\Delta)$ be the root lattice, $Q^\vee$ the coroot lattice, $P=P(\Delta)$ the weight lattice, $P^\vee$ the coweight lattice, and $P_{++}$ the set of integral strongly dominant weights respectively defined by \begin{align*} & Q=\bigoplus_{i=1}^r\mathbb{Z}\,\alpha_i,\qquad
Q^\vee=\bigoplus_{i=1}^r\mathbb{Z}\,\alpha^\vee_i,\\ & P=\bigoplus_{i=1}^r\mathbb{Z}\,\lambda_i, \qquad
P^\vee=\bigoplus_{i=1}^r\mathbb{Z}\,\lambda^\vee_i,\qquad P_{++}=\bigoplus_{i=1}^r\mathbb{N}\,\lambda_i, \end{align*} where the fundamental weights $\{\lambda_j\}_{j=1}^r$ and the fundamental coweights $\{\lambda_j^\vee\}_{j=1}^r$ are the dual bases of $\Psi^\vee$ and $\Psi$ satisfying $\langle \alpha_i^\vee,\lambda_j\rangle=\delta_{ij}$ (Kronecker's delta) and $\langle \lambda_i^\vee,\alpha_j\rangle=\delta_{ij}$ respectively.
Let $\sigma_\alpha :V\to V$ be the reflection with respect to a root $\alpha\in\Delta$ defined by $$\sigma_\alpha:v\mapsto v-\langle \alpha^\vee,v\rangle\alpha.$$ For a subset $A\subset\Delta$, let $W(A)$ be the group generated by reflections $\sigma_\alpha$ for all $\alpha\in A$. In particular, $W=W(\Delta)$ is the Weyl group, and
$\{\sigma_j:=\sigma_{\alpha_j}\,|\,1\leq j \leq r\}$ generates $W$. For $w\in W$, denote $\Delta_w=\Delta_+\cap w^{-1}\Delta_-$. The zeta-function associated with $\Delta$ is defined by \begin{equation}
\zeta_r(\mathbf{s},\mathbf{y};\Delta)
=
\sum_{\lambda\in P_{++}}
e^{2\pi i\langle\mathbf{y},\lambda\rangle}\prod_{\alpha\in\Delta_+}
\frac{1}{\langle\alpha^\vee,\lambda\rangle^{s_\alpha}}, \label{def-zeta} \end{equation}
where $\mathbf{s}=(s_{\alpha})_{\alpha\in\Delta_+}\in \mathbb{C}^{|\Delta_+|}$ and $\mathbf{y}\in V$. This can be regarded as a multi-variable version of Witten zeta-functions formulated by Zagier \cite{Za} based on the work of Witten \cite{Wi}.
Let $\Delta^*$ be a subset of $\Delta_+$. We call $\Delta^*$ a {\it root set} (or a {\it root subset} of $\Delta_+$) if, for any $\lambda_j$ ($1\leq j\leq r$), there exists an element $\alpha\in\Delta^*$ for which $\langle\alpha,\lambda_j \rangle\neq 0$ holds. We define the zeta-function associated with a root set $\Delta^*$ by \begin{equation}
\zeta_r(\mathbf{s},\mathbf{y};\Delta^*)
=
\sum_{\lambda\in P_{++}}
e^{2\pi i\langle\mathbf{y},\lambda\rangle}\prod_{\alpha\in\Delta^*}
\frac{1}{\langle\alpha^\vee,\lambda\rangle^{s_\alpha}}. \label{def-zeta-root-set} \end{equation} In the case $\mathbf{y}=\mathbf{0}$, this zeta-function was introduced in \cite{KM2}. When the root system is of type $X_r$, we write $\Delta=\Delta(X_r)$, $\Delta^*=\Delta^*(X_r)$, and so on.
\begin{remark} The definition of $\zeta_r(\mathbf{s},\mathbf{y};\Delta^*)$ depends not only on $\Delta^*$, but also on $\Delta_+$, because the summation in \eqref{def-zeta-root-set} runs over all strongly dominant weights associated with $\Delta_+$. \end{remark}
\section{Fundamental formulas}\label{sec-3}
In this section, we state several fundamental formulas which are certain extensions of our previous results given in \cite{KM2,KM5,KM3}. Proofs of theorems stated in this section will be given in Section \ref{sec-proof1}.
Let $\mathscr{V}$ be the set of all bases $\mathbf{V}\subset\Delta_+$. Let $\mathbf{V}^*=\{\mu_\beta^{\mathbf{V}}\}_{\beta\in\mathbf{V}}$ be the dual basis of $\mathbf{V}^\vee=\{\beta^\vee\}_{\beta\in\mathbf{V}}$. Let $L(\mathbf{V}^\vee) =\bigoplus_{\beta\in\mathbf{V}}\mathbb{Z}\beta^\vee$. Then we have $\abs{Q^\vee/L(\mathbf{V}^\vee)}<\infty$. Fix $\phi\in V$ such that $\langle\phi,\mu^{\mathbf{V}}_\beta\rangle\neq 0$ for all $\mathbf{V}\in\mathscr{V}$ and $\beta\in\mathbf{V}$. If the root system $\Delta$ is of $A_1$ type, then we choose $\phi=\alpha_1^\vee$. We define a multiple generalization of the fractional part as \begin{equation*} \{\mathbf{y}\}_{\mathbf{V},\beta}= \begin{cases}
\{\langle\mathbf{y},\mu^{\mathbf{V}}_\beta\rangle\}\quad&(\langle\phi,\mu^{\mathbf{V}}_\beta\rangle>0),\\
1-\{-\langle\mathbf{y},\mu^{\mathbf{V}}_\beta\rangle\}&(\langle\phi,\mu^{\mathbf{V}}_\beta\rangle<0), \end{cases} \end{equation*} where the notation $\{x\}$ on the right-hand sides stands for the usual fractional
part of $x\in\mathbb{R}$. Let $\mathbf{T}=\{t\in\mathbb{C}\;|\;|t|<2\pi\}^{|\Delta_+|}$.
\begin{definition}
\label{thm:exp_F}
For $\mathbf{t}=(t_{\alpha})_{\alpha\in\Delta_+}\in\mathbf{T}$ and $\mathbf{y}\in V$,
we define
\begin{equation}\label{def-F}
\begin{split}
F&(\mathbf{t},\mathbf{y};\Delta)=
\sum_{\mathbf{V}\in\mathscr{V}}
\Bigl(
\prod_{\gamma\in\Delta_+\setminus\mathbf{V}}
\frac{t_\gamma}
{t_\gamma-\sum_{\beta\in\mathbf{V}}t_\beta\langle\gamma^\vee,\mu^{\mathbf{V}}_\beta\rangle}
\Bigr)
\\
&\times
\frac{1}{\abs{Q^\vee/L(\mathbf{V}^\vee)}}
\sum_{q\in Q^\vee/L(\mathbf{V}^\vee)}
\Bigl(
\prod_{\beta\in\mathbf{V}}\frac{t_\beta\exp
(t_\beta\{\mathbf{y}+q\}_{\mathbf{V},\beta})}{e^{t_\beta}-1}
\Bigr)
\\
&=
\sum_{\mathbf{k}\in\mathbb{N}_{0}^{|\Delta_+|}}
\mathcal{P}(\mathbf{k},\mathbf{y};\Delta)
\prod_{\alpha\in\Delta_+} \frac{t_\alpha^{k_\alpha}}{k_\alpha!}
\end{split}
\end{equation} which is independent of the choice of $\phi$. \end{definition}
\begin{remark} In \cite{KM5}, $F(\mathbf{t},\mathbf{y};\Delta)$ is defined in a different way. The above is \cite[Theorem 4.1]{KM5}. In particular when $\Delta=\Delta(A_1)$, we see that $$F(\mathbf{t},\mathbf{y};\Delta(A_1))=\frac{te^{t\{y\}}}{e^t-1},$$
which is the generating function of ordinary Bernoulli periodic functions $\{B_k(\{y\})\}$. \end{remark}
Let \begin{equation}
\label{eq:def_S}
S(\mathbf{s},\mathbf{y};\Delta)
=\sum_{\lambda\in P\setminus H_{\Delta^\vee}}
e^{2\pi i\langle \mathbf{y},\lambda\rangle}
\prod_{\alpha\in\Delta_+}
\frac{1}{\langle\alpha^\vee,\lambda\rangle^{s_\alpha}}, \end{equation}
where $H_{\Delta^\vee}=\{v\in V~|~\langle \alpha^\vee,v\rangle=0\text{
for some }\alpha\in\Delta\}$ is the set of all walls of Weyl chambers. For $\mathbf{s}\in\mathbb{C}^{|\Delta_+|}$, we define $(w\mathbf{s})_{\alpha}=s_{w^{-1}\alpha}$, where if $w^{-1}\alpha\in\Delta_-$ we use the convention $s_{-\alpha}=s_\alpha$. \begin{prop}[{\cite[Theorem 4.4]{KM3},\cite[Proposition 3.2]{KM5}}]
\label{prop:ZP}
\begin{equation}
\label{eq:formula1}
\begin{split}
S(\mathbf{k},\mathbf{y};\Delta)
&=
\sum_{w\in W}
\Bigl(\prod_{\alpha\in\Delta_+\cap w\Delta_-}
(-1)^{k_{\alpha}}\Bigr)
\zeta_r(w^{-1}\mathbf{k},w^{-1}\mathbf{y};\Delta)\\
&=(-1)^{\abs{\Delta_+}}
\mathcal{P}(\mathbf{k},\mathbf{y};\Delta)
\biggl(\prod_{\alpha\in\Delta_+}
\frac{(2\pi i)^{k_\alpha}}{k_\alpha!}\biggr)
\end{split}
\end{equation}
for $k_\alpha\in\mathbb{Z}_{\geq2}$ ($\alpha\in\Delta_+$). \end{prop} \begin{remark}
It should be noted that
the formula \eqref{eq:formula1} remains valid when
$k_\alpha=1$ for some $\alpha\in\Delta_+$, while it fails when
$k_\alpha=0$ for some $\alpha\in\Delta_+$. \end{remark}
For $\mathbf{v}\in V$, and a differentiable function $f$ on $V$, let \begin{equation*}
(\partial_{\mathbf{v}}f)(\mathbf{y})=\lim_{h\to 0}\frac{f(\mathbf{y}+h\mathbf{v})-f(\mathbf{y})}{h} \end{equation*} and for $\alpha\in\Delta_+$, \begin{equation*}
\mathfrak{D}_\alpha=
\frac{\partial}{\partial t_\alpha}
\biggr\rvert_{t_\alpha=0}\partial_{\alpha^\vee}. \end{equation*} Let $\Delta^*\subset\Delta_+$ be a root set and let $A=\Delta_+\setminus\Delta^*=\{\nu_1,\ldots,\nu_N\}\subset\Delta_+$, and define \begin{equation*} \mathfrak{D}_A=\mathfrak{D}_{\nu_N}\cdots \mathfrak{D}_{\nu_1}. \end{equation*} Similarly we define \begin{gather}
\mathfrak{D}_{\alpha,2}=
\frac{1}{2}\frac{\partial^2}{\partial t_\alpha^2}
\biggr\rvert_{t_\alpha=0}\partial^2_{\alpha^\vee},\\
\mathfrak{D}_{A,2}=\mathfrak{D}_{\nu_N,2}\cdots \mathfrak{D}_{\nu_1,2}. \end{gather}
Further, let $A_j=\{\nu_1,\ldots,\nu_j\}$ ($1\leq j\leq N-1$), $A_0=\emptyset$, and \begin{equation*}
\mathscr{V}_A=\{\mathbf{V}\in\mathscr{V}~|~
\text{$\nu_{j+1}\notin{\rm L.h.}[\mathbf{V}\cap A_j]\;(0\leq j \leq N-1)$} \}, \end{equation*} where ${\rm L.h.}[\;\cdot\;]$ denotes the linear hull (linear span). Let $\mathscr{R}$ be the set of all linearly independent subsets $\mathbf{R}=\{\beta_1,\ldots,\beta_{r-1}\}\subset\Delta$ and
\begin{equation}
\label{eq:def_H_R}
\mathfrak{H}_{\mathscr{R}}:=
\bigcup_{\substack{\mathbf{R}\in\mathscr{R}\\ q\in Q^\vee}}({\rm L.h.}[\mathbf{R}^\vee]+q). \end{equation}
\begin{remark}\label{tsuika} It is to be noted that $\mathbf{y}\in\mathfrak{H}_{\mathscr{R}}$ if and only if $\langle\mathbf{y}+q,\mu^{\mathbf{V}}_\beta\rangle\in\mathbb{Z}$ for some $\mathbf{V}\in\mathscr{V},\mathbf{\beta}\in\mathbf{V},q\in Q^\vee$. In fact, if $\mathbf{y}\in\mathfrak{H}_{\mathscr{R}}$ then we can write $\mathbf{y}=\sum_{j=1}^{r-1}a_j \beta_j^{\vee}+q$ ($a_j\in\mathbb{R}$). We can find an element $\beta_r\in\Delta$ such that $\mathbf{V}=\{\beta_1,\ldots,\beta_r\}\in\mathscr{V}$. Then $\langle\mathbf{y}-q,\mu_{\beta_r}^{\mathbf{V}}\rangle=0\in\mathbb{Z}$. Conversely, assume $\langle\mathbf{y}+q,\mu^{\mathbf{V}}_\beta\rangle=c\in\mathbb{Z}$. Write $\mathbf{V}=\{\beta_1,\ldots,\beta_{r-1},\beta\}$. Since this is a basis, we may write $\mathbf{y}+q=\sum_{j=1}^{r-1}a_j \beta_j^{\vee}+a\beta^{\vee}$ with $a_j,a\in\mathbb{R}$. Then $c=\langle\mathbf{y}+q,\mu^{\mathbf{V}}_\beta\rangle=a$, especially $a\in\mathbb{Z}$. Therefore $a\beta^{\vee}-q\in Q^{\vee}$, which implies $\mathbf{y}\in \mathfrak{H}_{\mathscr{R}}$. \end{remark}
\begin{definition}
For $\Delta_+\setminus \Delta^*=A=\{\nu_1,\ldots,\nu_N\}\subset\Delta_+$, $\mathbf{t}_{\Delta^*}=\{t_\alpha\}_{\alpha\in\Delta^*}$ and $\mathbf{y}\in V$,
we define
\begin{equation}
\begin{split}
F_{\Delta^*}(\mathbf{t}_{\Delta^*},\mathbf{y};\Delta)
&=
\sum_{\mathbf{V}\in\mathscr{V}_A}
(-1)^{\abs{A\setminus\mathbf{V}}}\\
& \times\Bigl(
\prod_{\gamma\in\Delta_+\setminus(\mathbf{V}\cup A)}
\frac{t_\gamma}
{t_\gamma-\sum_{\beta\in\mathbf{V}\setminus A}t_\beta\langle\gamma^\vee,\mu^{\mathbf{V}}_\beta\rangle}
\Bigr)
\\
&\times
\frac{1}{\abs{Q^\vee/L(\mathbf{V}^\vee)}}
\sum_{q\in Q^\vee/L(\mathbf{V}^\vee)}
\Bigl(
\prod_{\beta\in\mathbf{V}\setminus A}\frac{t_\beta\exp
(t_\beta\{\mathbf{y}+q\}_{\mathbf{V},\beta})}{e^{t_\beta}-1}
\Bigr).
\end{split}
\end{equation} \end{definition}
\begin{thrm}
\label{thm:main0}
For $\Delta_+\setminus \Delta^*=A=\{\nu_1,\ldots,\nu_N\}\subset\Delta_+$, $\mathbf{t}_{\Delta^*}=\{t_\alpha\}_{\alpha\in\Delta^*}$ and $\mathbf{y}\in V\setminus\mathfrak{H}_{\mathscr{R}}$,
we have
\begin{equation}
\label{eq:main0}
\bigl(\mathfrak{D}_A F\bigr) (\mathbf{t}_{\Delta^*},\mathbf{y};\Delta)=
\bigl(\mathfrak{D}_{A,2} F\bigr) (\mathbf{t}_{\Delta^*},\mathbf{y};\Delta)=
F_{\Delta^*}(\mathbf{t}_{\Delta^*},\mathbf{y};\Delta),
\end{equation}
and hence is independent of the choice of the ordering of the elements of $A$.
The function $F_{\Delta^*}(\mathbf{t}_{\Delta^*},\mathbf{y};\Delta)$
is the continuous extension
of $\bigl(\mathfrak{D}_A F\bigr) (\mathbf{t}_{\Delta^*},\mathbf{y};\Delta)$
in $\mathbf{y}$ in the sense that
$\bigl(\mathfrak{D}_A F\bigr) (\mathbf{t}_{\Delta^*},\mathbf{y}+c\phi;\Delta)$
tends continuously to $F_{\Delta^*}(\mathbf{t}_{\Delta^*},\mathbf{y};\Delta)$
when $c\to 0+$,
and is holomorphic with respect to $\mathbf{t}_{\Delta^*}$ around the origin. \end{thrm}
\begin{definition} For $\Delta^*\subset\Delta_+$ and $\mathbf{t}_{\Delta^*}=\{t_\alpha\}_{\alpha\in\Delta^*}$, we define $\mathcal{P}_{\Delta^*}(\mathbf{k}_{\Delta^*},\mathbf{y};\Delta)$ by
\begin{align*}
&
F_{\Delta^*}(\mathbf{t}_{\Delta^*},\mathbf{y};\Delta)
\\
&=
\sum_{\mathbf{k}_{\Delta^*}\in \mathbb{N}_{0}^{\abs{\Delta^*}}}\mathcal{P}_{\Delta^*}(\mathbf{k}_{\Delta^*},\mathbf{y};\Delta)
\prod_{\alpha\in\Delta^*}
\frac{t_{\alpha}^{k_\alpha}}{k_\alpha!}.
\end{align*} \end{definition}
\begin{thrm} \label{thm:main1}
For
$\mathbf{s}=\mathbf{k}=(k_\alpha)_{\alpha\in\Delta_+}$ with
$k_\alpha\in\mathbb{Z}_{\geq2}$ ($\alpha\in \Delta^*$),
$k_\alpha=0$ ($\alpha\in \Delta_+\setminus \Delta^*$),
we have
\begin{align}
&\sum_{w\in W}
\Bigl(\prod_{\alpha\in\Delta_+\cap w\Delta_-}
(-1)^{k_{\alpha}}\Bigr)
\zeta_r(w^{-1}\mathbf{k},w^{-1}\mathbf{y};\Delta) \label{Th-main}\\
& =(-1)^{\abs{\Delta_+}}
\mathcal{P}_{\Delta^*}(\mathbf{k}_{\Delta^*},\mathbf{y};\Delta)
\biggl(\prod_{\alpha\in\Delta^*}
\frac{(2\pi i)^{k_\alpha}}{k_\alpha!}\biggr)
\notag
\end{align} provided all the series on the left-hand side absolutely converge. \end{thrm}
Assume that $\Delta$ is not simply-laced. Then we have the disjoint union $\Delta=\Delta_l\cup\Delta_s$, where $\Delta_l$ is the set of all long roots and $\Delta_s$ is the set of all short roots.
Note that if some $k_i$ is odd, then both sides of \eqref{Th-main} vanish. On the other hand, when all the $k_i$ are even, by applying Theorem \ref{thm:main1} to $\Delta^*=\Delta_l$ or $\Delta_s$, we obtain the following theorem immediately, which is a generalization of the explicit volume formula proved in \cite[Theorem 4.6]{KM3}.
\begin{thrm} \label{thm:main2} Let $\Delta_1=\Delta_l$ (resp.~$\Delta_s$), $\Delta_2=\Delta_s$ (resp.~$\Delta_l$), and $\Delta_{j+}=\Delta_j\cap\Delta_+$ ($j=1,2$). Then $\Delta_{j+}$ ($j=1,2$) is a root subset of $\Delta_+$. For
$\mathbf{s}_1=\mathbf{k}_1=(k_\alpha)_{\alpha\in\Delta_{1+}}$ with
$k_\alpha=k\in 2\mathbb{N}$ (for all $\alpha\in\Delta_{1+}$) and $\nu\in P^\vee/Q^\vee$, we have
\begin{align}
&\zeta_r(\mathbf{k}_1,\nu;\Delta_{1+}) =\frac{(-1)^{\abs{\Delta_+}}}{|W|}
\mathcal{P}_{\Delta_{1+}}(\mathbf{k}_1,\nu;\Delta)
\biggl(\prod_{\alpha\in\Delta_{1+}}
\frac{(2\pi i)^{k_\alpha}}{k_\alpha!}\biggr).\label{main2}
\end{align} \end{thrm}
\begin{remark} Let $\mathbf{s}=\mathbf{k}=(k_\alpha)_{\alpha\in\Delta_{+}}$ with $k_{\alpha}=k\in 2\mathbb{N}$ ($\alpha\in\Delta_{1+}$) and $k_\alpha=0$ ($\alpha\in\Delta_{2+}$). Then obviously $\zeta_r(\mathbf{k}_1,\nu;\Delta_{1+})=\zeta_r(\mathbf{k},\nu;\Delta)$. Our proof of Theorem \ref{thm:main2} is actually based on the latter viewpoint. \end{remark}
\section{Multiple zeta values and zeta-functions of root system of type $C_r$} \label{sec-4}
Now we study MZVs from the viewpoint of zeta-functions of root systems of type $C_r$. For $\Delta=\Delta(C_r)$, we have the disjoint union $\Delta_+^\vee=(\Delta_{l+})^\vee\cup(\Delta_{s+})^\vee$, where $\Delta_{l+}=\Delta_{l+}(C_r)=\Delta_l(C_r)\cap\Delta_+(C_r)$, $\Delta_{s+}=\Delta_{s+}(C_r)=\Delta_s(C_r)\cap\Delta_+(C_r)$, and \begin{align*}
(\Delta_{l+})^\vee&
=\{
\alpha_r^\vee,\
\alpha_{r-1}^\vee+\alpha_r^\vee,\
\alpha_{r-2}^\vee+\alpha_{r-1}^\vee+\alpha_r^\vee,\
\ldots,
\alpha_1^\vee+\cdots+\alpha_r^\vee \}. \end{align*} Since $P^\vee/Q^\vee=\{\mathbf{0},\lambda_r^\vee\}$, for $\mathbf{s}_l=(s_{\alpha})_{\alpha\in\Delta_{l+}}$ we have \begin{align*}
\zeta_r(\mathbf{s}_l,\mathbf{0};\Delta_{l+}(C_r))&=\sum_{m_1,\ldots,m_r=1}^\infty \prod_{i=1}^{r} \frac{1}{(\sum_{j=r-i+1}^{r-1}m_j+m_r)^{{s_i}}},\\
\zeta_r(\mathbf{s}_l,\lambda_r^\vee;\Delta_{l+}(C_r))&=\sum_{m_1,\ldots,m_r=1}^\infty \prod_{i=1}^{r} \frac{(-1)^{m_r}}{(\sum_{j=r-i+1}^{r-1}m_j+m_r)^{{s_i}}}, \end{align*} where the first equation is exactly the Euler-Zagier sum $\zeta_{r}(s_1,\ldots,s_r)$ (see \eqref{e-1-1}).
In order to apply Theorems \ref{thm:main1} and \ref{thm:main2} to MZVs, we rewrite the root system of type $C_r$ in terms of standard orthonormal basis $\{e_1,\ldots,e_r\}$. We put $\alpha_i^\vee=e_{i}-e_{i+1}$ for $1\leq i\leq r-1$ and $\alpha_r^\vee=e_r$. Then we have \begin{equation*}
(\Delta_{l+})^\vee
=\{ \alpha_r^\vee=e_r,\ \alpha_{r-1}^\vee+\alpha_r^\vee=e_{r-1},\ \alpha_{r-2}^\vee+\alpha_{r-1}^\vee+\alpha_r^\vee=e_{r-2},\ \ldots, \alpha_1^\vee+\cdots+\alpha_r^\vee=e_1 \}. \end{equation*} In this realization, we see that $W(C_r)=(\mathbb{Z}/2\mathbb{Z})^r\rtimes \mathfrak{S}_r$, where $\mathfrak{S}_r$ is the symmetric group of degree $r$ which permutes bases, and the $j$-th $\mathbb{Z}/2\mathbb{Z}$ flips the sign of $e_j$. Since the sign flips act trivially on the variables $\mathbf{s}_l$, from Theorem \ref{thm:main1} we obtain the following formulas. These are the ``refined restricted sum formulas'' for $\zeta_r(\mathbf{s})$, which we will discuss in Section \ref{sumf}. \begin{cor} \label{Cor-Cr-Sr} Let $\Delta=\Delta(C_r)$. For $(2{\bf k})_l=(2k_{\alpha})_{\alpha\in\Delta_{l+}}=(2k_1,\ldots,2k_r)\in \left(2\mathbb{N}\right)^r$ and $\mathbf{y}=\nu\in P^\vee/Q^\vee$, \begin{equation} \label{EZ-Sr-1}
\sum_{\sigma\in\mathfrak{S}_r}
\zeta_r(\sigma^{-1}(2\mathbf{k})_l,\nu;\Delta_{l+}) =\frac{(-1)^{r}}{2^r}\mathcal{P}_{\Delta_{l+}} ((2\mathbf{k})_l,\nu;\Delta)
\prod_{j=1}^{r}\frac{(2\pi i)^{2k_j}}{(2k_j)!} \in\mathbb{Q}\cdot \pi^{2\sum_{j=1}^{r}k_j}. \end{equation} In particular when $\nu={\bf 0}$, \begin{equation} \label{EZ-Sr-11}
\sum_{\sigma\in\mathfrak{S}_r}
\zeta_r(2k_{\sigma^{-1}(1)},\ldots,2k_{\sigma^{-1}(r)}) =\frac{(-1)^{r}}{2^r}\mathcal{P}_{\Delta_{l+}} ((2\mathbf{k})_l,{\bf 0};\Delta)
\prod_{j=1}^{r}\frac{(2\pi i)^{2k_j}}{(2k_j)!} \in\mathbb{Q}\cdot \pi^{2\sum_{j=1}^{r}k_j}. \end{equation}
\end{cor}
Also Theorem \ref{thm:main2} in the case of type $C_r$ immediately gives the following.
\begin{cor}\label{Cor-Z} Let $\Delta=\Delta(C_r)$. For $({\bf 2k})_l=(2k,\ldots,2k)$ with any $k\in \mathbb{N}$, \begin{align}
& \zeta_{r}(2k,2k,\ldots,2k)=\frac{(-1)^{r}}{2^r r!}\mathcal{P}_{\Delta_{l+}} ((\mathbf{2k})_l,{\bf 0};\Delta)
\frac{(2\pi i)^{2kr}}{\{ (2k)! \}^r} \in\mathbb{Q}\cdot \pi^{2kr}. \label{Zagier-F2} \end{align} \end{cor}
\begin{remark}\label{Rem-Hof} The fact $$\sum_{\sigma\in\mathfrak{S}_r} \zeta_r(2k_{\sigma^{-1}(1)},\ldots,2k_{\sigma^{-1}(r)})\in\mathbb{Q}\cdot \pi^{2\sum_{j=1}^{r}k_j}$$ was first proved by Hoffman \cite[Theorem 2.2]{Hoff}. This gives the well-known result $$\zeta_{r}(2k,\ldots,2k)\in\mathbb{Q}\cdot \pi^{2kr},$$ which was also given by Zagier \cite[Section 9]{Za} independently. Broadhurst, Borwein and Bradley gave explicit formulas for these values in \cite[Section 2]{BBB}. Also it is known that \begin{equation} \zeta_{r}(2k,\ldots,2k)={\mathcal{C}}_r^{(k)}\frac{(2\pi i)^{2kr}}{(2kr)!},\label{Zagier-F} \end{equation} where $${\mathcal{C}}_0^{(k)}=1,\ \ {\mathcal{C}}_{n}^{(k)}=\frac{1}{2n}\sum_{j=1}^{n}(-1)^j \binom{2nk}{2jk}B_{2jk}{\mathcal{C}}_{n-j}^{(k)}\ \ (n \geq 1).$$ Formula \eqref{Zagier-F} was first published in the lecture notes \cite{AK1}, \cite{AK2} written in Japanese (Exercise 5, Section 1.1 of those lecture notes). See also Muneta \cite{Mun}.
We emphasize that \eqref{Zagier-F} can be regarded as a kind of Witten's volume formula \eqref{Zagier-F2}. Because \eqref{Zagier-F2} and \eqref{Zagier-F} in the case $r=1$ are both Euler's well-known formula \begin{equation} \zeta(2k)=-B_{2k}\frac{(2\pi i)^{2k}}{2(2k)!}\qquad (k\in \mathbb{N}), \label{Euler-F} \end{equation} we can see that $\mathcal{P}_{\Delta_{l+}}((\mathbf{2k})_l,{\bf 0};\Delta)$ and ${\mathcal{C}}_r^{(k)}$ are different types of generalizations of the ordinary Bernoulli number $B_{2k}$. \end{remark}
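The recursion for ${\mathcal{C}}_n^{(k)}$ above can be checked symbolically. The following sketch is ours and assumes the SymPy library; it reproduces $\zeta(2)=\pi^2/6$, $\zeta_2(2,2)=\pi^4/120$ and $\zeta_3(2,2,2)=\pi^6/5040$ from \eqref{Zagier-F}.
\begin{verbatim}
# Symbolic check (ours) of the recursion for C_n^{(k)} and of (Zagier-F),
# assuming SymPy.
from sympy import Rational, I, pi, bernoulli, binomial, factorial, simplify

def C(n, k):
    if n == 0:
        return Rational(1)
    return Rational(1, 2 * n) * sum(
        (-1)**j * binomial(2*n*k, 2*j*k) * bernoulli(2*j*k) * C(n - j, k)
        for j in range(1, n + 1))

def zeta_repeated(r, k):
    # right-hand side of (Zagier-F): C_r^{(k)} (2 pi i)^{2kr} / (2kr)!
    return simplify(C(r, k) * (2*pi*I)**(2*k*r) / factorial(2*k*r))

print([zeta_repeated(r, 1) for r in (1, 2, 3)])
# expected: [pi**2/6, pi**4/120, pi**6/5040]
\end{verbatim}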
\begin{example} \label{Exam-C2} Let $\Delta=\Delta(C_2)$ be the root system of type $C_2$. By Theorem \ref{thm:main0}, we have
\begin{align*}
\ (\mathfrak{D}_{\Delta_{s+}}F)(t_1,t_2,y_1,y_2;\Delta)
& =1+\frac{t_1 t_2 e^{\{y_2\}t_1}}{(e^{t_1}-1)(t_1-t_2)}\\
& \quad +\frac{t_1 t_2 e^{\{y_2\} t_2}}{(e^{t_2}-1) (-t_1+t_2)}
+\frac{t_1 t_2 e^{(1-\{y_1-y_2\}) t_1+\{y_1\} t_2}}{(e^{t_1}-1) (e^{t_2}-1)}
\\
&\quad
-\frac{t_1 t_2 e^{(1-\{2 y_1-y_2\}) t_1}}{(e^{t_1}-1) (t_1+t_2)}
-\frac{t_1 t_2 e^{\{2 y_1-y_2\} t_2}}{(e^{t_2}-1) (t_1+t_2)}
\\
& =\sum_{k_1,k_2=0}^\infty \mathcal{P}_{\Delta_{l+}}(k_1,k_2,y_1,y_2;\Delta)\frac{t_1^{k_1}t_2^{k_2}}{k_1!k_2!}.
\end{align*} Set $(y_1,y_2)=(0,0)$ and $\mathbf{k}=(0,k_1,k_2,0)$. Then $\zeta_2(0,k_1,k_2,0;y_1,y_2;\Delta)=\zeta_2(k_1,k_2)$ for $\Delta=\Delta(C_2)$. Hence it follows from \eqref{Th-main} that
\begin{align} & (1+(-1)^{k_1})(1+(-1)^{k_2}) \zeta_2(k_1,k_2)+(1+(-1)^{k_2})(1+(-1)^{k_1}) \zeta_2(k_2,k_1) \label{exam-C2} \\ & = (-1)^4\mathcal{P}_{\Delta_{l+}}(k_1,k_2,0,0;\Delta)\frac{(2\pi i)^{k_1+k_2}}{k_1!k_2!} \notag
\end{align} for $k_1,k_2\geq 2$.
For example, we can compute $$\mathcal{P}_{\Delta_{l+}}(4,4,0,0;\Delta)=\frac{1}{6300}$$ from the above expansion. Hence we obtain $$\zeta_2(4,4)=\frac{(-1)^4}{8}\frac{1}{6300} \frac{(2\pi i)^8}{(4!)^2}=\frac{\pi^8}{113400}.$$ Similarly we can compute $\zeta_{2}(2k,2k)$ for $k\in \mathbb{N}$, though in this case we can also compute $\zeta_{2}(2k,2k)$ by using the well-known harmonic product formula for double zeta values \begin{equation} \zeta(s)\zeta(t)=\zeta_{2}(s,t)+\zeta_{2}(t,s) +\zeta(s+t). \label{harm} \end{equation} In the next section, we introduce a slight generalization of Corollary \ref{Cor-Z} which gives evaluation formulas of $\zeta_{2}(k,l)$ for odd $k+l$ in terms of $\zeta(s)$ (see Proposition \ref{Pr-1}). \end{example}
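The value $\zeta_2(4,4)=\pi^8/113400$ obtained above can also be confirmed numerically. The following sketch is ours and assumes the mpmath library; it uses the harmonic product \eqref{harm} as well as a direct truncated double sum.
\begin{verbatim}
# Numerical confirmation (ours) of zeta_2(4,4) = pi^8/113400, assuming mpmath:
# once via the harmonic product (harm), zeta(4)^2 = 2 zeta_2(4,4) + zeta(8),
# and once via a truncated double sum.
from mpmath import mp, zeta, pi, mpf

mp.dps = 20
via_harm = (zeta(4)**2 - zeta(8)) / 2
direct = sum(1 / (mpf(m)**4 * n**4) for n in range(2, 400) for m in range(1, n))
print(via_harm, pi**8 / 113400, direct)   # all approx. 0.0836731...
\end{verbatim}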
\begin{remark} In the general $C_r$ case, considering the expansion of $$(\mathfrak{D}_{\Delta_{s+}}F)({\bf t}_{\Delta_{l+}},{\bf 0};\Delta(C_r))$$ similarly, we can systematically compute $\zeta_{r}(2k,\ldots,2k)$. Moreover, considering the case $\nu\not={\bf 0}$ for $\zeta_r(\mathbf{s},\nu;\Delta(C_r))$, we can give character analogues of Corollary \ref{Cor-Z} for multiple $L$-values, which were first proved by Yamasaki \cite{Ya}. \end{remark}
\section{Some relations and parity results for double and triple zeta values}\label{sec-5}
In Theorem \ref{thm:main1}, we considered the sum over $W$ on the left-hand side of \eqref{Th-main}. Here, more generally, we consider the sum over a certain set of minimal coset representatives on the left-hand side of \eqref{Th-main}. In this case, it is not easy to carry out the computation directly. Hence we use a more technical method which was already introduced in \cite{KMT-CJ}. First we show the following result for double zeta values corresponding to a sub-root system of type $C_2$, where the number of terms on the left-hand side is just half of that on the left-hand side of \eqref{exam-C2}.
\begin{prop}\label{Pr-1} For $p,q \in \mathbb{N}_{\geq 2}$, \begin{align*} & \left( 1+(-1)^p\right)\zeta_{2}(p,q)+\left( 1+(-1)^q\right) \zeta_{2}(q,p) \\ & \ =2\sum_{j=0}^{[p/2]}\binom{p+q-2j-1}{q-1}\zeta(2j)\zeta(p+q-2j) \\ & \quad +2\sum_{j=0}^{[q/2]}\binom{p+q-2j-1}{p-1}\zeta(2j)\zeta(p+q-2j) -\zeta(p+q). \end{align*} \end{prop}
\begin{proof} The proof was essentially stated in \cite[Theorem 3.1]{KMT-CJ} which is a simpler form of a previous result for zeta-functions of type $A_2$ given by the third-named author \cite[Theorem 4.5]{Ts-Cam}. In fact, setting $(k,l,s)=(p,q,0)$ in \cite[Theorem 3.1]{KMT-CJ}, we have \begin{align*} & \zeta(p)\zeta(q)+(-1)^p\zeta_{2}(p,q)+(-1)^q \zeta_{2}(q,p) \\ & \ =2\sum_{j=0}^{[p/2]}\binom{p+q-2j-1}{q-1}\zeta(2j)\zeta(p+q-2j) \\ & \quad +2\sum_{j=0}^{[q/2]}\binom{p+q-2j-1}{p-1}\zeta(2j)\zeta(p+q-2j). \end{align*} Combining this and \eqref{harm}, we have the assertion. \end{proof}
In particular when $p$ and $q$ are of different parity, we see that $\zeta_{2}(p,q)\in \mathbb{Q}[\{\zeta(j+1)\,|\,j\in \mathbb{N}\}]$ which was first proved by Euler. For example, we have $$\zeta_2(2,3)=3\zeta(2)\zeta(3)-\frac{11}{2}\zeta(5).$$
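This evaluation of $\zeta_2(2,3)$ can be verified numerically as follows. The sketch is ours and assumes the mpmath library; writing the inner sum through the polygamma function is our own shortcut.
\begin{verbatim}
# Numerical check (ours) of zeta_2(2,3) = 3 zeta(2) zeta(3) - (11/2) zeta(5),
# assuming mpmath; sum_{n>m} n^{-3} is written as -psi(2, m+1)/2.
from mpmath import mp, psi, zeta, nsum, inf, mpf

mp.dps = 20
lhs = nsum(lambda m: -psi(2, m + 1) / (2 * m**2), [1, inf])
rhs = 3 * zeta(2) * zeta(3) - mpf(11) / 2 * zeta(5)
print(lhs, rhs)   # both approx. 0.228810...
\end{verbatim}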
Next we consider triple zeta values. From the viewpoint of the root system of type $C_3$, we have the following theorem. Note that, unlike the case of double zeta values, this result does not seem to follow from the corresponding result for type $A_3$ (cf. \cite[Theorems 5.9 and 5.10]{MT1}).
\begin{thrm} \label{T-5-1}\ For $a,b,c\in \mathbb{N}_{\geq 2}$, \begin{align*}
&(1+(-1)^a)\zeta_3(a,b,c)+(1+(-1)^b)\{ \zeta_3(b,a,c)+\zeta_3(b,c,a)\}+(-1)^b(1+(-1)^c)\zeta_3(c,b,a)\\ & =2\bigg\{ \sum_{\xi=0}^{[a/2]}\zeta(2\xi)\sum_{\omega=0}^{a-2\xi}\binom{\omega+b-1}{\omega}\binom{a+c-2\xi-\omega-1}{c-1}\zeta_2(b+\omega,a+c-2\xi-\omega)\\ & +\sum_{\xi=0}^{[b/2]}\zeta(2\xi)\sum_{\omega=0}^{a-1}\binom{\omega+b-2\xi}{\omega}\binom{a+c-\omega-2}{c-1}\zeta_2(b-2\xi+\omega+1,a+c-1-\omega)\\ & +(-1)^b\sum_{\xi=0}^{[c/2]}\zeta(2\xi)\sum_{\omega=0}^{c-2\xi}\binom{\omega+b-1}{\omega}\binom{a+c-2\xi-\omega-1}{a-1}\zeta_2(b+\omega,a+c-2\xi-\omega)\\ & +(-1)^b\sum_{\xi=0}^{[b/2]}\zeta(2\xi)\sum_{\omega=0}^{c-1}\binom{\omega+b-2\xi}{\omega}\binom{a+c-\omega-2}{a-1}\zeta_2(b-2\xi+\omega+1,a+c-1-\omega)\bigg\}\\ & -\zeta_2(a+b,c)-(1+(-1)^b)\zeta_2(b,a+c)-(-1)^b\zeta_2(b+c,a). \end{align*} \end{thrm}
The proof of this theorem will be given in Section \ref{sec-proof2}.
In particular, this theorem implies the following result, which was proved by Borwein and Girgensohn (see \cite{BG}).
\begin{cor} \label{C-5-3} \ Let $$\mathfrak{X}=\mathbb{Q}\left[ \left\{ \zeta(j+1),\zeta_2(k,l+1)\right\}_{j,k,l\in \mathbb{N}}\right],$$ namely the $\mathbb{Q}$-algebra generated by Riemann zeta values and double zeta values at positive integers except singularities. Suppose $a,b,c\in \mathbb{N}_{\geq 2}$ satisfy that $a+b+c$ is even. Then $\zeta_3(a,b,c)\in \mathfrak{X}$. \end{cor}
\begin{proof} We recall the harmonic product formula \begin{align} & \zeta_3(a,b,c)+\zeta_3(b,a,c)+\zeta_3(b,c,a) =\zeta(a)\zeta_2(b,c)-\zeta_2(b,c+a)-\zeta_2(a+b,c) \label{harmonic} \end{align} for $a,b,c\in \mathbb{N}_{\geq 2}$ (see \cite{Ka}).
Let $a,b,c \in \mathbb{N}_{\geq 2}$ be such that $a+b+c$ is even. First we assume that $a,b,c$ are all even. Then, combining Theorem \ref{T-5-1} and \eqref{harmonic}, we see that $\zeta_3(c,b,a)\in \mathfrak{X}$.
Next we assume that $a$ is even and $b,c$ are odd. Then, by Theorem \ref{T-5-1}, we see that $\zeta_3(a,b,c)\in \mathfrak{X}$.
As for other cases, we can similarly obtain the assertions by using Theorem \ref{T-5-1} and \eqref{harmonic}. Thus we complete the proof. \end{proof}
\begin{remark} The following property of the multiple zeta value is sometimes called the parity result:
\it The multiple zeta value $\zeta_r(k_1,k_2,\ldots,k_r)$ of depth $r$ can be expressed as a rational linear combination of products of MZVs of depth lower than $r$, when its depth $r$ and its weight $\sum_{j=1}^{r}k_j$ are of different parity.
\rm The case of depth $2$ was proved by Euler, and that of depth $3$ by Borwein and Girgensohn (see \cite{BG}). Further, they conjectured the above assertion for an arbitrary depth. This conjecture was proved by the third-named author \cite{TsActa04} and by Ihara, Kaneko and Zagier \cite{IKZ} independently. It should be stressed that our Corollary \ref{C-5-3} gives an explicit expression of the parity result for the triple zeta value under the condition $a,b,c\in \mathbb{N}_{\geq 2}$.
Therefore it seems important to generalize Theorem \ref{T-5-1} in order to give an explicit expression of the parity result of an arbitrary depth. \end{remark}
\begin{example} Putting $(a,b,c)=(2,2,4)$ in Theorem \ref{T-5-1}, we have \begin{align*} & 2\zeta_3(2,2,4)+2\{\zeta_3(2,2,4)+\zeta_3(2,4,2)\}+2\zeta_3(4,2,2)\\ & =2\zeta(4)\zeta_2(2,2)+\zeta(2)\{8\zeta_2(4,2)+ 12\zeta_2(3,3)+16\zeta_2(2,4)+16\zeta_2(1,5)\} \\ & \quad -16\zeta_2(6,2) - 20\zeta_2(5,3)-25\zeta_2(4,4)-24\zeta_2(3,5)-17\zeta_2(2,6). \end{align*} Therefore, using \eqref{harmonic}, we obtain \begin{align*} \zeta_3(4,2,2)& =\zeta(4)\zeta_2(2,2)+\zeta(2)\{4\zeta_2(4,2)+ 6\zeta_2(3,3)+7\zeta_2(2,4)+8\zeta_2(1,5)\}\\ & \quad -8\zeta_2(6,2) - 10\zeta_2(5,3)-\frac{23}{2}\zeta_2(4,4)-12\zeta_2(3,5)-\frac{15}{2}\zeta_2(2,6) \in \mathfrak{X}. \end{align*} Note that this formula can be proved by combining known results for MZVs given by the double shuffle relations and harmonic product formulas (see, for example, \cite[Section 5]{MP}). \end{example}
\begin{remark} If we replace \eqref{5-1-0} (in Section \ref{sec-proof2}) by $$\sum_{l\in \mathbb{N}}\sum_{m\in \mathbb{Z}^*} (-1)^{l+m}x^l y^m e^{i(l+m)\theta}, $$ and argue along the same line as in the proof of Theorem \ref{T-5-1}, then we can obtain \begin{align*} & \left( 1+(-1)^a\right)\left( 1+(-1)^c\right)\left\{ \zeta_3(a,b,c)+\zeta_3(a,c,b)+\zeta_3(c,a,b)\right\} \\ & \ +\left( 1+(-1)^b\right)\left( 1+(-1)^c\right)\left\{ \zeta_3(c,b,a)+\zeta_3(b,c,a)+\zeta_3(b,a,c)\right\}\\
& \qquad \in \mathbb{Q}[\{\zeta(j+1)\,|\,j\in \mathbb{N}\}] \end{align*} for $a,b,c\in \mathbb{N}_{\geq 2}$. In particular when $a,b,c$ are all even, we have \eqref{Zagier-F} for the triple zeta value which can be regarded as a kind of Witten's volume formula \eqref{Zagier-F2} (see Section \ref{sec-4}). Furthermore, when $a$ is odd and both $b$ and $c$ are even, then \begin{align*}
& {\zeta_3(c,b,a)+\zeta_3(b,c,a)+\zeta_3(b,a,c)} \in \mathbb{Q}\left[ \left\{\zeta(j+1)\,|\,j\in \mathbb{N}\right\}\right]. \end{align*} Note that this result can also be deduced by combining \eqref{harmonic} and Proposition \ref{Pr-1}. \end{remark}
\section{Multiple zeta values associated with the root system of type $B_r$} \label{sec-6}
In this section we discuss the $B_r$-analogue of our theory developed in the preceding two sections.
As for the root system of type $B_r$, namely for $\Delta=\Delta(B_r)$, we see that \begin{align*}
(\Delta_{s+})^\vee&
=\{\alpha_r^\vee,\ 2\alpha_{r-1}^\vee+\alpha_r^\vee,
2\alpha_{r-2}^\vee+2\alpha_{r-1}^\vee+\alpha_r^\vee,
\ldots, 2\alpha_1^\vee+\cdots+2\alpha_{r-1}^\vee+\alpha_r^\vee \}. \end{align*} Therefore for $\mathbf{s}_s=(s_{\alpha})_{\alpha\in\Delta_{s+}}$ we have \begin{align} & \zeta_r({\bf s}_s,{\bf 0};\Delta_{s+}(B_r))=\sum_{m_1,\ldots,m_r=1}^\infty \prod_{i=1}^{r} \frac{1}{(2\sum_{j=r-i+1}^{r-1}m_j+m_r)^{{s_i}}}, \label{B2-zeta} \end{align} which is a partial sum of $\zeta_{r}({\bf s})$. For example, we have \begin{align} & \zeta_2({\bf s}_s,{\bf 0};\Delta_{s+}(B_2))=\sum_{l,m=1}^\infty \frac{1}{m^{s_1}(2l+m)^{s_2}},\label{6-1}\\ & \zeta_3({\bf s}_s,{\bf 0};\Delta_{s+}(B_3))=\sum_{l,m,n=1}^\infty \frac{1}{n^{s_1}(2m+n)^{s_2}(2l+2m+n)^{s_3}},\label{6-2} \end{align} where $s_j=s_{\alpha_j}$ corresponding to $\alpha_j\in \Delta_{s+}$.
From the viewpoint of zeta-functions of root systems, values of \eqref{B2-zeta} at positive integers can be regarded as the objects dual to MZVs, in the sense that $B_r$ and $C_r$ are dual to each other. Hence we denote \eqref{B2-zeta} by $\zeta_r^\sharp(s_1,\ldots,s_r)$.
Since $W(B_r)\simeq W(C_r)$, just like Corollary \ref{Cor-Cr-Sr}, from Theorem \ref{thm:main1} we can obtain the following result, which gives the ``refined restricted sum formulas'' for $\zeta_r^{\sharp}(\mathbf{s})$. \begin{cor}\label{Cor-Br-Sr} Let $\Delta=\Delta(B_r)$. For $(2{\bf k})_s=(2k_{\alpha})_{\alpha\in\Delta_{s+}}=(2k_1,\ldots,2k_r)\in \left(2\mathbb{N}\right)^r$ and $\mathbf{y}=\nu\in P^\vee/Q^\vee$, \begin{equation}
\sum_{\sigma\in\mathfrak{S}_r}
\zeta_r (\sigma^{-1}(2\mathbf{k})_s,\nu;\Delta_{s+}) =\frac{(-1)^{r}}{2^r}\mathcal{P}_{\Delta_{s+}} ((2\mathbf{k})_s,\nu;\Delta)
\prod_{j=1}^{r}\frac{(2\pi i)^{2k_j}}{(2k_j)!} \in\mathbb{Q}\cdot \pi^{2\sum_{j=1}^{r}k_j}. \end{equation} In particular when $\nu={\bf 0}$, \begin{equation} \label{EZ-Sr-1-2}
\sum_{\sigma\in\mathfrak{S}_r}
\zeta_r^\sharp(2k_{\sigma^{-1}(1)},\ldots,2k_{\sigma^{-1}(r)}) =\frac{(-1)^{r}}{2^r}\mathcal{P}_{\Delta_{s+}} ((2\mathbf{k})_s,{\bf 0};\Delta)
\prod_{j=1}^{r}\frac{(2\pi i)^{2k_j}}{(2k_j)!} \in\mathbb{Q}\cdot \pi^{2\sum_{j=1}^{r}k_j}. \end{equation} \end{cor}
From Theorem \ref{thm:main2}, we obtain an analogue of Corollary \ref{Cor-Z}, which is a kind of Witten's volume formula and also a $B_r$-type analogue of \eqref{Zagier-F}.
\begin{cor} \label{C-6-2} Let $\Delta=\Delta(B_r)$. For $({\bf 2k})_s=(2k,\ldots,2k)$ with any $k\in \mathbb{N}$, \begin{align*} & \zeta_r^\sharp(2k,\ldots,2k)=\frac{(-1)^{r}}{2^r r!}\mathcal{P}_{\Delta_{s+}} ((\mathbf{2k})_s,{\bf 0};\Delta)\frac{(2\pi i)^{2kr}}{\{ (2k)! \}^r} \in\mathbb{Q}\cdot \pi^{2kr}. \end{align*} \end{cor}
\begin{example}\label{B-EZ-Exam} \begin{align*} \zeta_2^\sharp(2,2)&=\sum_{m,n=1}^\infty \frac{1}{n^{2} (2m+n)^{2}}=\frac{1}{320}\pi^4,\\ \zeta_2^\sharp(4,4)&=\sum_{m,n=1}^\infty \frac{1}{n^{4} (2m+n)^{4}}=\frac{23}{14515200}\pi^8,\\ \zeta_2^\sharp(6,6)&=\sum_{m,n=1}^\infty \frac{1}{n^{6} (2m+n)^{6}}=\frac{1369}{871782912000}\pi^{12}. \end{align*} These formulas can be obtained by calculating the generating function of type $B_2$ similarly to the case of type $C_2$ in Example \ref{Exam-C2} (see Section \ref{sec-4}). Also we can obtain these formulas by Theorem \ref{T-B2-EZ} in the case $(p,q)=(2k,2k)$ for $k\in \mathbb{N}$. However, unlike the ordinary double zeta value, these cannot be easily deduced from \eqref{harm}.
Similarly, calculating the generating function of type $B_3$, we have explicit examples of Corollary \ref{C-6-2}: \begin{align*} \zeta_3^\sharp(2,2,2)&=\sum_{l,m,n=1}^\infty \frac{1}{n^{2} (2m+n)^{2}(2l+2m+n)^2}=\frac{1}{40320}\pi^{6},\\ \zeta_3^\sharp(4,4,4)&=\sum_{l,m,n=1}^\infty \frac{1}{n^{4} (2m+n)^{4}(2l+2m+n)^4}=\frac{23}{697426329600}\pi^{12},\\ \zeta_3^\sharp(6,6,6)&=\sum_{l,m,n=1}^\infty \frac{1}{n^{6} (2m+n)^{6}(2l+2m+n)^6}=\frac{1997}{17030314057236480000}\pi^{18}. \end{align*} \end{example}
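The first value in the example above, $\zeta_2^\sharp(2,2)=\pi^4/320$, can be checked numerically. The sketch below is ours and assumes the mpmath library; the inner sum over $m$ is expressed through the trigamma function, $\sum_{m\geq1}(2m+n)^{-2}=\tfrac14\psi'(1+n/2)$.
\begin{verbatim}
# Numerical check (ours) of zeta_2^sharp(2,2) = pi^4/320, assuming mpmath;
# the inner sum over m equals (1/4) psi(1, 1 + n/2).
from mpmath import mp, psi, pi, nsum, inf

mp.dps = 20
val = nsum(lambda n: psi(1, 1 + n / 2) / (4 * n**2), [1, inf])
print(val, pi**4 / 320)   # both approx. 0.3044034...
\end{verbatim}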
Also, similarly to Proposition \ref{Pr-1}, we can obtain the following result whose proof will be given in Section \ref{sec-proof2}.
\begin{thrm}\label{T-B2-EZ} For $p,q\in \mathbb{N}_{\geq 2}$, \begin{align} & \ (1+(-1)^p)\zeta_2^\sharp (p,q) +(1+(-1)^q)\zeta_2^\sharp(q,p)\label{Pr-2-1}\\ & = 2 \sum_{j=0}^{[p/2]} \frac{1}{2^{p+q-2j}}\binom{p+q-1-2j}{q-1}\zeta(2j)\zeta(p+q-2j)\notag\\ & + 2\sum_{j=0}^{[q/2]} \frac{1}{2^{p+q-2j}}\binom{p+q-1-2j}{p-1}\zeta(2j)\zeta(p+q-2j) -\zeta(p+q). \notag \end{align} \end{thrm}
Theorem \ref{T-B2-EZ} in the case that $p$ and $q$ are of different parity implies the following.
\begin{cor} \label{parity-B2} Let $p,q \in \mathbb{N}_{\geq 2}$. If $p$ and $q$ are of different parity, then
$$\zeta_2^\sharp(p,q)\in \mathbb{Q}\left[ \left\{\zeta(j+1)\,|\,j\in \mathbb{N}\right\}\right],$$ which is a parity result for $\zeta_2^\sharp$. \end{cor}
\begin{remark} This parity result for $\zeta_2^\sharp(p,q)$ is important in a recent study of the dimension of the linear space spanned by double zeta values of level $2$ given by Kaneko and Tasaka (see \cite{Ka-Ta}).
For example, setting $(p,q)=(3,2)$ in \eqref{Pr-2-1}, we have \begin{align*} \zeta_2^\sharp(2,3)&=\sum_{m,n=1}^\infty \frac{1}{n^{2} (2m+n)^{3}}=-\frac{21}{32}\zeta(5)+\frac{3}{8}\zeta(2)\zeta(3). \end{align*} It should be noted that this property can be given by combining the known facts for double zeta values and for their alternating series $$\varphi_2(s_1,s_2)=\sum_{m,n=1}^\infty \frac{(-1)^m}{n^{s_1}(m+n)^{s_2}}.$$ Actually we see that $$\zeta_2^\sharp(s_1,s_2)=\frac{1}{2}\left\{\zeta_2(s_1,s_2)+\varphi_2(s_1,s_2)\right\}.$$ When $p$ and $q$ are of different parity ($p,q \in \mathbb{N}$ and $q\geq 2$), Euler proved that
$$\zeta_2(p,q)\in \mathbb{Q}\left[ \left\{\zeta(j+1)\,|\,j\in \mathbb{N}\right\}\right],$$ and Borwein et al. proved that
$$\varphi_2(p,q)\in \mathbb{Q}\left[ \left\{\zeta(j+1)\,|\,j\in \mathbb{N}\right\}\right]$$ (see \cite{BBG}), from which Corollary \ref{parity-B2} follows. However \eqref{Pr-2-1} gives more explicit information on the parity result for $\zeta_2^\sharp(p,q)$. \end{remark}
Furthermore we can obtain the following result which can be regarded as an analogue of Theorem \ref{T-5-1} for type $B_3$. This can be proved similarly to Theorem \ref{T-5-1}, hence we omit its proof here.
\begin{thrm}\label{T-5-2} For $a,b,c\in \mathbb{N}_{\geq 2}$, \begin{align*}
&(1+(-1)^a)\zeta_3^\sharp(a,b,c)+(1+(-1)^b)\{ \zeta_3^\sharp(b,a,c)+\zeta_3^\sharp(b,c,a)\}+(-1)^b(1+(-1)^c)\zeta_3^\sharp(c,b,a)\\ & =2^{1-a-b-c}\bigg\{ \sum_{\xi=0}^{[a/2]}2^\xi \zeta(2\xi)\sum_{\omega=0}^{a-2\xi}\binom{\omega+b-1}{\omega}\binom{a+c-2\xi-\omega-1}{c-1}\zeta_2(b+\omega,a+c-2\xi-\omega)\\ & +\sum_{\xi=0}^{[b/2]}2^\xi\zeta(2\xi)\sum_{\omega=0}^{a-1}\binom{\omega+b-2\xi}{\omega}\binom{a+c-\omega-2}{c-1}\zeta_2(b-2\xi+\omega+1,a+c-1-\omega)\\ & +(-1)^b\sum_{\xi=0}^{[c/2]}2^\xi\zeta(2\xi)\sum_{\omega=0}^{c-2\xi}\binom{\omega+b-1}{\omega}\binom{a+c-2\xi-\omega-1}{a-1}\zeta_2(b+\omega,a+c-2\xi-\omega)\\ & +(-1)^b\sum_{\xi=0}^{[b/2]}2^\xi\zeta(2\xi)\sum_{\omega=0}^{c-1}\binom{\omega+b-2\xi}{\omega}\binom{a+c-\omega-2}{a-1}\zeta_2(b-2\xi+\omega+1,a+c-1-\omega)\bigg\}\\ & -\zeta_2^\sharp(a+b,c)-(1+(-1)^b)\zeta_2^\sharp(b,a+c)-(-1)^b\zeta_2^\sharp(b+c,a). \end{align*} \end{thrm}
\begin{remark} In \cite{KMT-Lie}, we study zeta-functions of weight lattices of semisimple compact connected Lie groups. We can prove analogues of Theorem \ref{thm:main1} for those zeta-functions by a method similar to the above. We will give the details in a forthcoming paper. \end{remark}
\section{Certain restricted sum formulas for $\zeta_r({\bf s})$ and for $\zeta_r^\sharp({\bf s})$} \label{sumf}
In this section, we give certain restricted sum formulas for $\zeta_r({\bf s})$ and for $\zeta_r^\sharp({\bf s})$ of an arbitrary depth $r$ which essentially include known results.
As we stated in Section \ref{sec-1}, Gangl, Kaneko and Zagier \cite{GKZ} obtained the restricted sum formulas \eqref{F-GKZ} for double zeta values. Recently Nakamura \cite{Na-Sh} gave certain analogues of \eqref{F-GKZ}.
More recently, Shen and Cai \cite{Shen-Cai} gave the following restricted sum formulas for triple and quadruple zeta values: \begin{align} & \sum_{a_1,a_2,a_3 \in \mathbb{N}\atop a_1+a_2+a_3=N} \zeta_3(2a_1,2a_2,2a_3)= \frac{5}{8}\zeta(2N) - \frac{1}{4}\zeta(2)\zeta(2N - 2)\in \mathbb{Q}\cdot \pi^{2N}\quad (N\in \mathbb{Z}_{\geq 3}), \label{sumf-triple}\\ & \sum_{a_1,a_2,a_3,a_4 \in \mathbb{N}\atop a_1+a_2+a_3+a_4=N} \zeta_4(2a_1,2a_2,2a_3,2a_4)\label{sumf-fourth}\\ & \quad = \frac{35}{64}\zeta(2N) - \frac{5}{16}\zeta(2)\zeta(2N - 2)\in \mathbb{Q}\cdot \pi^{2N}\quad (N\in \mathbb{Z}_{\geq 4}). \notag \end{align} Also Machide \cite{Mach} gave certain restricted sum formulas for triple zeta values.
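In the smallest case $N=3$, the left-hand side of \eqref{sumf-triple} consists of the single term $\zeta_3(2,2,2)$, and the formula can be confirmed numerically. The sketch is ours and assumes the mpmath library; the truncated triple sum is evaluated in one pass over the largest index.
\begin{verbatim}
# Numerical check (ours) of (sumf-triple) at N = 3, assuming mpmath.
from mpmath import mp, zeta, pi, mpf

mp.dps = 20
S1 = S2 = Z3 = mpf(0)         # running sums of depth 1, 2 and 3
for n in range(1, 200001):
    Z3 += S2 / mpf(n)**2      # triples whose largest index equals n
    S2 += S1 / mpf(n)**2      # pairs whose largest index equals n
    S1 += 1 / mpf(n)**2
rhs = 5 * zeta(6) / 8 - zeta(2) * zeta(4) / 4
print(Z3, rhs, pi**6 / 5040)  # all approx. 0.190751 (Z3 truncated, error ~ 4e-6)
\end{verbatim}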
Now recall our Corollaries \ref{Cor-Cr-Sr} and \ref{Cor-Br-Sr}. In the above restricted sum formulas, the summations are taken over all tuples $(a_1,\ldots,a_r)$ satisfying $a_1+\cdots+a_r=N$. On the other hand, the summations in the formulas of Corollaries \ref{Cor-Cr-Sr} and \ref{Cor-Br-Sr} run over a much smaller range, that is, just all the permutations of one fixed $(a_1,\ldots,a_r)$ with $a_1+\cdots+a_r=N$. Therefore our corollaries give subdivisions, or refinements, of the known restricted sum formulas.
Summing our formulas for all tuples $(a_1,\ldots,a_r)$ satisfying $a_1+\cdots+a_r=N$, we can obtain the $r$-ple generalization of \eqref{F-GKZ}, \eqref{sumf-triple} and \eqref{sumf-fourth}. Moreover we can show the following further generalization, which gives a new type of restricted sum formulas.
For $d\in \mathbb{N}$ and $N\in \mathbb{N}$, let
$$I_{r}(d,N)=\left\{ (2da_1,\ldots,2da_r)\in (2d\mathbb{N})^r\,|\,a_1+\cdots+a_r=N\right\}.$$ Denote by $P_r$ the set of all partitions of $r$, namely
$$P_r=\bigcup_{\nu=1}^{r}\{(j_1,\cdots,j_\nu)\in \mathbb{N}^\nu\,|\,j_1+\cdots+j_\nu=r\}.$$ For $J=(j_1,\cdots,j_\nu)\in P_r$, we set
$$\mathcal{A}_r(d,N,J)=\left\{ ((2dh_1)^{[j_1]},\ldots,(2dh_\nu)^{[j_\nu]})\in I_{r}(d,N)\,|\, h_1<\cdots<h_\nu\right\},$$ where $(2h)^{[j]}=(2h,\ldots,2h)\in (2\mathbb{N})^j$. Then we have the following restricted sum formulas of depth $r$.
\begin{thrm} \label{sumf-EZ-Cr} For $d\in \mathbb{N}$ and $N\in \mathbb{N}$ with $N\geq r$, \begin{align} & \sum_{a_1,\ldots,a_r \in \mathbb{N} \atop a_1+\cdots+a_r=N}\zeta_r(2da_1,\ldots,2da_r)\label{R-sumf}\\ & =\frac{(-1)^{r}}{2^r}\sum_{J=(j_1,\cdots,j_\nu)\in P_r}\frac{1}{j_1!\cdots j_\nu !}\notag\\ & \qquad \times \sum_{(2d\mathbf{k})_l \in \mathcal{A}_r(d,N,J)}\mathcal{P}_{\Delta_{l+}} ((2d\mathbf{k})_l,{\bf 0};\Delta(C_r)) \prod_{\rho=1}^{r}\frac{(2\pi i)^{2dk_\rho}}{(2dk_\rho)!} \in\mathbb{Q}\cdot \pi^{2dN}.\notag \end{align} \end{thrm}
\begin{remark} In the case $d=1$ and $r=2,3,4$, we essentially obtain \eqref{F-GKZ}, \eqref{sumf-triple}, \eqref{sumf-fourth}. Also, in the case $N=r$, we obtain \eqref{Zagier-F2} stated in Corollary \ref{Cor-Z}. More generally,
in the case $d=1$ and $r\geq 2$, Muneta \cite{Mu} already conjectured an explicit expression of the left-hand side of \eqref{R-sumf} in terms of $\{\zeta(2k)\,|\,k\in \mathbb{N}\}$. \end{remark}
\begin{proof}[Proof of Theorem \ref{sumf-EZ-Cr}] Let $(2da_1,\ldots,2da_r)\in I_{r}(d,N)$. Denote the set of distinct elements among $\{a_1,\ldots,a_r\}$ by $\{h_1,\ldots,h_\nu\}$, and put
$j_\mu=\sharp \{ m\,|\, a_m=h_\mu\}$ $(1\leq \mu\leq \nu)$. We may assume $h_1<\cdots<h_{\nu}$. We can easily see that there exist $\sigma\in \mathfrak{S}_r$ and $((2dh_1)^{[j_1]},\ldots,(2dh_\nu)^{[j_\nu]}) \in \mathcal{A}_r(d,N,J)$ with $J=(j_1,\cdots,j_\nu)\in P_r$ such that $$(2da_1,\ldots,2da_r)=((2dh_1)^{[j_1]},\ldots,(2dh_\nu)^{[j_\nu]})^\sigma,$$ where we use the notation $$(k_1,\ldots,k_r)^\sigma=(k_{\sigma(1)},\ldots,k_{\sigma(r)}).$$ On the other hand, the set
$\{\left( (2dh_1)^{[j_1]},\ldots,(2dh_\nu)^{[j_\nu]}\right)^\tau\,|\,\tau\in \mathfrak{S}_r\}$ contains $j_1!\cdots j_\nu!$ copies of each element. In fact, if we denote by $\frak{S}(1,\ldots,j_1)$ the group of all permutations of $\{1,\ldots,j_1\}$, then $$\mathfrak{X}(J):=\frak{S}(1,\dots,j_1) \times \frak{S}(j_1+1,\ldots,j_1+j_2)
\times\cdots\times\frak{S}(\sum_{\rho=1}^{\nu-1}j_\rho+1, \ldots,\sum_{\rho=1}^{\nu}j_\rho) \subset \frak{S}_r $$
forms the stabilizer subgroup of $((2dh_1)^{[j_1]},\ldots,(2dh_\nu)^{[j_\nu]})$, and hence $\sharp \mathfrak{X}(J)=j_1!\cdots j_\nu!$. Therefore, using Corollary \ref{Cor-Cr-Sr}, we have \begin{align*} & \sum_{a_1,\ldots,a_r \in \mathbb{N} \atop a_1+\cdots+a_r=N} \zeta_r(2da_1,\ldots,2da_r)=\sum_{(2da_1,\ldots,2da_r)\in I_{r}(d,N)}\zeta_r(2da_1,\ldots,2da_r) \\ & =\sum_{J=(j_1,\cdots,j_\nu)\in P_r}\frac{1}{j_1!\cdots j_\nu !}\sum_{(2dk_1,\ldots,2dk_r) \atop \in \mathcal{A}_r(d,N,J)}\sum_{\sigma\in \mathfrak{S}_r}\zeta_r(2dk_{\sigma(1)},\ldots,2dk_{\sigma(r)})\\ & =\frac{(-1)^{r}}{2^r}\sum_{J=(j_1,\cdots,j_\nu)\in P_r}\frac{1}{j_1!\cdots j_\nu !}\sum_{(2d\mathbf{k})_l \in \mathcal{A}_r(d,N,J)}\mathcal{P}_{\Delta_{l+}(C_r)} ((2d\mathbf{k})_l,{\bf 0};\Delta) \prod_{\rho=1}^{r}\frac{(2\pi i)^{2dk_\rho}}{(2dk_\rho)!}. \end{align*} This completes the proof. \end{proof}
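The counting argument in the above proof can be illustrated by a small enumeration. The following sketch is ours (plain Python 3.8+); it checks that summing the orbit sizes $r!/(j_1!\cdots j_\nu!)$ over all $J\in P_r$ and all representatives in $\mathcal{A}_r(d,N,J)$ recovers $\sharp I_{r}(d,N)=\binom{N-1}{r-1}$; note that the value of $d$ plays no role in the counting.
\begin{verbatim}
# Enumerative check (ours) of the orbit decomposition used in the proof.
from itertools import product
from math import factorial, prod, comb

def compositions(r):
    # all tuples (j_1,...,j_nu) of positive integers with j_1 + ... + j_nu = r
    return [()] if r == 0 else [(j,) + t for j in range(1, r + 1)
                                for t in compositions(r - j)]

def representatives(r, N, J):
    # tuples (h_1,...,h_nu) with h_1 < ... < h_nu and sum_mu j_mu * h_mu = N
    nu = len(J)
    return [h for h in product(range(1, N + 1), repeat=nu)
            if all(h[i] < h[i + 1] for i in range(nu - 1))
            and sum(j * x for j, x in zip(J, h)) == N]

r, N = 4, 9
total = sum(len(representatives(r, N, J)) * factorial(r) // prod(map(factorial, J))
            for J in compositions(r))
print(total, comb(N - 1, r - 1))   # both 56
\end{verbatim}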
Similarly, using Corollary \ref{Cor-Br-Sr}, we obtain the following.
\begin{thrm} \label{sumf-EZ-Br} For $d\in \mathbb{N}$ and $N\in \mathbb{N}$ with $N\geq r$, \begin{align*} & \sum_{a_1,\ldots,a_r \in \mathbb{N} \atop a_1+\cdots+a_r=N}\zeta_r^\sharp (2da_1,\ldots,2da_r)\\ & =\frac{(-1)^{r}}{2^r}\sum_{J=(j_1,\cdots,j_\nu)\in P_r}\frac{1}{j_1!\cdots j_\nu !}\notag\\ & \quad \times \sum_{(2d\mathbf{k})_s \in \mathcal{A}_r(d,N,J)}\mathcal{P}_{\Delta_{s+}} ((2d\mathbf{k})_s,{\bf 0};\Delta(B_r)) \prod_{\rho=1}^{r}\frac{(2\pi i)^{2dk_\rho}}{(2dk_\rho)!} \in\mathbb{Q}\cdot \pi^{2dN}. \end{align*} \end{thrm}
\section{Analytically closed subclass}\label{sec-acs}
In this section we observe our theory from the analytic point of view.
First consider the case of type $C_r$. In Section \ref{sec-4} we have shown that the zeta-functions corresponding to the sub-root system of type $C_r$ consisting of all long roots are exactly the family of Euler-Zagier sums. On the other hand, it is known that the Euler-Zagier $r$-fold sum can be expressed as an integral involving the Euler-Zagier $(r-1)$-fold sum in the integrand. In fact, it holds that \begin{align}\label{acs-1} \zeta_r(s_1,\ldots,s_r)=\frac{1}{2\pi i}\int_{(\kappa)}\frac{\Gamma(s_r+z)\Gamma(-z)} {\Gamma(s_r)}\zeta_{r-1}(s_1,\ldots,s_{r-2},s_{r-1}+s_r+z)\zeta(-z)dz \end{align} for $r\geq 2$, where $-\Re s_r<\kappa<-1$ and the path of integration is the vertical line from $\kappa-i\infty$ to $\kappa+i\infty$ (see \cite[Section 12]{Mat-NMJ}, \cite[Section 3]{Mat-JNT}). This formula is proved by applying the classical Mellin-Barnes integral formula (\eqref{acs-2} below), so we may call \eqref{acs-1} the Mellin-Barnes integral expression of $\zeta_r(s_1,\ldots,s_r)$.
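For $r=2$, formula \eqref{acs-1} can be tested numerically. The following sketch is ours and assumes the mpmath library; it takes $s_1=s_2=3$ and $\kappa=-3/2$, and compares the integral with $\zeta_2(3,3)=\tfrac12(\zeta(3)^2-\zeta(6))$, which follows from the harmonic product \eqref{harm}.
\begin{verbatim}
# Numerical test (ours) of (acs-1) for r = 2, assuming mpmath.
from mpmath import mp, mpc, gamma, zeta, quad, pi, inf

mp.dps = 20
s1, s2, kappa = 3, 3, mp.mpf('-1.5')

def integrand(t):                       # z = kappa + i t, dz = i dt
    z = mpc(kappa, t)
    return gamma(s2 + z) * gamma(-z) / gamma(s2) * zeta(s1 + s2 + z) * zeta(-z)

mb = quad(integrand, [-inf, 0, inf]) / (2 * pi)
print(mb.real, (zeta(3)**2 - zeta(6)) / 2)   # both approx. 0.2137987...
\end{verbatim}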
Formula \eqref{acs-1} implies that the family of Euler-Zagier sums is closed under the Mellin-Barnes integral operation. (Note that the Riemann zeta-function, also appearing in the integrand, is the Euler-Zagier sum with $r=1$.) When some family of zeta-functions is closed in this sense, we call the family {\it analytically closed}. The aim of this section is to prove that the subclasses of type $B_r$ and of type $A_r$ discussed in our theory are both analytically closed.
\begin{prop}\label{prop-acs-1} The family of zeta-functions $\zeta_r({\bf s}_s,{\bf 0};\Delta_{s+}(B_r))$ defined by \eqref{B2-zeta} is analytically closed. \end{prop}
\begin{proof} Recall the Mellin-Barnes formula \begin{align}\label{acs-2} (1+\lambda)^{-s}=\frac{1}{2\pi i}\int_{(\kappa)}\frac{\Gamma(s+z)\Gamma(-z)} {\Gamma(s)}\lambda^z dz, \end{align} where $s,\lambda\in\mathbb{C}$ with $\Re s>0$, $\lambda\neq 0$,
$|\arg\lambda|<\pi$, and $\kappa$ is real with $-\Re s<\kappa<0$.
Dividing the factor $(2(m_1+\cdots+m_{r-1})+m_r)^{-s_r}$ as $$ (2(m_2+\cdots+m_{r-1})+m_r)^{-s_r} \left(1+\frac{2m_1}{2(m_2+\cdots+m_{r-1})+m_r}\right)^{-s_r} $$ and applying \eqref{acs-2} to the second factor with $\lambda=2m_1/(2(m_2+\cdots+m_{r-1})+m_r)$, we obtain \begin{align} &\zeta_r((s_1,\ldots,s_r),{\bf 0};\Delta_{s+}(B_r))\label{acs-3}\\ &=\frac{1}{2\pi i}\int_{(\kappa)}\frac{\Gamma(s_r+z)\Gamma(-z)}{\Gamma(s_r)}
\sum_{m_1,\ldots,m_r=1}^{\infty}\prod_{i=1}^{r-1}\frac{1}
{(2\sum_{j=r-i+1}^{r-1}m_j+m_r)^{s_i}}\notag\\ &\qquad\times(2(m_2+\cdots+m_{r-1})+m_r)^{-s_r}
\left(\frac{2m_1}{2(m_2+\cdots+m_{r-1})+m_r}\right)^z dz\notag\\ &=\frac{1}{2\pi i}\int_{(\kappa)}\frac{\Gamma(s_r+z)\Gamma(-z)}{\Gamma(s_r)}
\sum_{m_1=1}^{\infty}(2m_1)^z\notag\\ &\;\times\sum_{m_2,\ldots,m_r=1}^{\infty}
\prod_{i=1}^{r-2}\frac{1}{(2\sum_{j=r-i+1}^{r-1}m_j+m_r)^{s_i}}
(2(m_2+\cdots+m_{r-1})+m_r)^{-s_{r-1}-s_r-z}dz\notag\\ &=\frac{1}{2\pi i}\int_{(\kappa)}\frac{\Gamma(s_r+z)\Gamma(-z)}{\Gamma(s_r)}
2^z \zeta(-z)
\zeta_{r-1}((s_1,\ldots,s_{r-2},s_{r-1}+s_r+z),{\bf 0};\Delta_{s+}(B_{r-1}))dz.\notag \end{align} This implies the assertion. \end{proof}
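The same kind of numerical test applies to \eqref{acs-3}: for $r=2$ the remaining one-fold factor is simply the Riemann zeta-function $\zeta(s_1+s_2+z)$, and the integral should reproduce $\zeta_2^\sharp(2,2)=\pi^4/320$ from Example \ref{B-EZ-Exam}. The sketch below is ours and assumes the mpmath library; it uses $s_1=s_2=2$ and $\kappa=-3/2$.
\begin{verbatim}
# Analogous test (ours) of (acs-3) for r = 2, assuming mpmath.
from mpmath import mp, mpc, gamma, zeta, quad, power, pi, inf

mp.dps = 20
s1, s2, kappa = 2, 2, mp.mpf('-1.5')

def integrand(t):
    z = mpc(kappa, t)
    return (gamma(s2 + z) * gamma(-z) / gamma(s2)
            * power(2, z) * zeta(-z) * zeta(s1 + s2 + z))

val = quad(integrand, [-inf, 0, inf]) / (2 * pi)
print(val.real, pi**4 / 320)   # both approx. 0.3044034...
\end{verbatim}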
Next we consider the subclass of type $A_r$ which we studied in \cite{KMT-MZ}, and prove that it is also analytically closed. This part may be regarded as a supplement of \cite{KMT-MZ}.
The explicit form of the zeta-function of the root system of type $A_r$ is given by \begin{align}\label{acs-4} \zeta_r(\mathbf{s},\mathbf{0};\Delta(A_r))=\sum_{m_1,\ldots,m_r=1}^{\infty}\prod_{h=1}^r \prod_{j=h}^r\left(\sum_{k=h}^{r+h-j}m_k\right)^{-s_{hj}} \end{align} (where $\mathbf{s}=(s_{hj})_{h,j}$; see \cite[formula (13)]{KMT-MZ}). Let $a,b\in\mathbb{N}$, $c\in\mathbb{N}_0$ with $a+b+c=r$. The main result in \cite{KMT-MZ} asserts that the shuffle product procedure can be completely described by the partial fraction decomposition of zeta-functions \eqref{acs-4} at special values $\mathbf{s}=\mathbf{d}=(d_{hj})_{h,j}$, where $d_{hj}$ for \begin{align}\label{acs-5} \left\{ \begin{array}{lll}h=1,\;1\leq j\leq c\\ h=1,\;b+c+1\leq j\leq a+b+c\\ h=a+1,\;a+c+1\leq j\leq a+b+c \end{array} \right. \end{align} are all positive integers, and all other $d_{hj}$ are equal to 0. Let $\Delta_+^{(a,b,c)}=\Delta_+^{(a,b,c)}(A_r)$ be the set of all positive roots corresponding to $s_{hj}$ with $(h,j)$ in the list \eqref{acs-5}. Then this is a root set, and the above special values can be interpreted as special values of zeta-functions of $\Delta_+^{(a,b,c)}$.
\begin{thrm}\label{Th-A} The family of zeta-functions $\zeta_r(\mathbf{s}^{(a,b,c)},\mathbf{0}; \Delta_+^{(a,b,c)}(A_r))$ is analytically closed, where $\mathbf{s}^{(a,b,c)}=(s_{hj})_{h,j}$ with $(h,j)$ in the list \eqref{acs-5}. \end{thrm}
\begin{proof} We prove that zeta-functions $\zeta_{r+1}$ belonging to the above family can be expressed as a Mellin-Barnes integral, or multiple integrals, involving $\zeta_r$ also belonging to the above family. Let $a,b\in\mathbb{N}$, $c\in\mathbb{N}_0$ with $a+b+c=r$. We show that all of the zeta-functions $\zeta_{r+1}$ associated with (i) $\Delta_+^{(a+1,b,c)}$, (ii) $\Delta_+^{(a,b+1,c)}$, (iii) $\Delta_+^{(a,b,c+1)}$ have integral expressions involving the zeta-function of $\Delta_+^{(a,b,c)}$.
From \eqref{acs-4} we see that \begin{align} \zeta_r(\mathbf{s}^{(a,b,c)},\mathbf{0}; \Delta_+^{(a,b,c)}(A_r)) &=\sum_{m_1,\ldots,m_{a+b+c}=1}^{\infty}
\prod_{j=1}^c(m_1+m_2+\cdots+m_{a+b+c+1-j})^{-s_{1j}}\label{acs-6}\\ &\times\prod_{j=b+c+1}^{a+b+c}(m_1+m_2+\cdots+m_{a+b+c+1-j})^{-s_{1j}}\notag\\ &\times\prod_{j=a+c+1}^{a+b+c}(m_{a+1}+m_{a+2}+\cdots+m_{2a+b+c+1-j})
^{-s_{a+1,j}}, \notag \end{align} which is, by renaming the variables, \begin{align} =&\sum_{m_1,\ldots,m_{a+b+c}=1}^{\infty}(m_1+\cdots+m_{a+b+1})^{-s_{11}}\cdots
(m_1+\cdots+m_{a+b+c})^{-s_{1c}}\label{acs-7}\\ &\times m_1^{-s_{21}}(m_1+m_2)^{-s_{22}}\cdots(m_1+\cdots+m_a)^{-s_{2a}}\notag\\ &\times m_{a+1}^{-s_{31}}(m_{a+1}+m_{a+2})^{-s_{32}}\cdots
(m_{a+1}+\cdots+m_{a+b})^{-s_{3b}}.\notag \end{align}
Now we consider the above three cases (i), (ii) and (iii) separately.
The simplest case is (iii). When we replace $c$ by $c+1$ in \eqref{acs-7}, the differences are that the summation is now with respect to $m_1,\ldots,m_{a+b+c+1}$, and a new factor $(m_1+\cdots+m_{a+b+c+1})^{-s_{1,c+1}}$ appears. Dividing this factor as \begin{align*} \lefteqn{(m_1+\cdots+m_{a+b+c+1})^{-s_{1,c+1}}}\\ &=(m_1+\cdots+m_{a+b+c})^{-s_{1,c+1}} \left(1+\frac{m_{a+b+c+1}}{m_1+\cdots+m_{a+b+c}}\right)^{-s_{1,c+1}} \end{align*} and applying \eqref{acs-2} as in the argument of \eqref{acs-3}, we find that the sum with respect to $m_{a+b+c+1}$ is separated, which produces a Riemann zeta factor, and hence the zeta-function of $\Delta_+^{(a,b,c+1)}$ can be expressed as an integral of Mellin-Barnes type, involving gamma factors, a Riemann zeta factor, and the zeta-function of $\Delta_+^{(a,b,c)}$.
Next consider the case (ii). When we replace $b$ by $b+1$, \eqref{acs-7} is changed to \begin{align} =&\sum_{m_1,\ldots,m_{a+b+c+1}=1}^{\infty}(m_1+\cdots+m_{a+b+2})^{-s_{11}}\cdots
(m_1+\cdots+m_{a+b+c+1})^{-s_{1c}}\label{acs-8}\\ &\times m_1^{-s_{21}}(m_1+m_2)^{-s_{22}}\cdots(m_1+\cdots+m_a)^{-s_{2a}}\notag\\ &\times m_{a+1}^{-s_{31}}(m_{a+1}+m_{a+2})^{-s_{32}}\cdots
(m_{a+1}+\cdots+m_{a+b})^{-s_{3b}}\notag\\ &\times(m_{a+1}+\cdots+m_{a+b+1})^{-s_{3,b+1}}. \notag \end{align} The last factor is \begin{align} &=(m_{a+1}+\cdots+m_{a+b})^{-s_{3,b+1}}\left(1+\frac{m_{a+b+1}}
{m_{a+1}+\cdots+m_{a+b}}\right)^{-s_{3,b+1}}\label{acs-9}\\ &=(m_{a+1}+\cdots+m_{a+b})^{-s_{3,b+1}}\notag\\ &\qquad\times\frac{1}{2\pi i}\int_{(\kappa)}\frac{\Gamma(s_{3,b+1}+z)
\Gamma(-z)}{\Gamma(s_{3,b+1})}\left(\frac{m_{a+b+1}}{m_{a+1}+\cdots+m_{a+b}}
\right)^z dz.\notag \end{align} The factors $(m_1+\cdots+m_{a+b+n})^{-s_{1,n-1}}$ ($2\leq n\leq c+1$) also include the term $m_{a+b+1}$. We divide these factors as \begin{align*} \lefteqn{(m_1+\cdots+m_{a+b}+m_{a+b+2}+\cdots+m_{a+b+n})^{-s_{1,n-1}}}\\ &\times\left(1+\frac{m_{a+b+1}}{m_1+\cdots+m_{a+b}+m_{a+b+2}+\cdots+m_{a+b+n}} \right)^{-s_{1,n-1}} \end{align*} and apply \eqref{acs-2} to obtain \begin{align} \lefteqn{(m_1+\cdots+m_{a+b+n})^{-s_{1,n-1}}}\label{acs-10} \\ &=(m_1+\cdots+m_{a+b}+m_{a+b+2}+\cdots+m_{a+b+n})^{-s_{1,n-1}}\notag\\ &\times\frac{1}{2\pi i}\int_{(\kappa_n)}\frac{\Gamma(s_{1,n-1}+z_n)
\Gamma(-z_n)}{\Gamma(s_{1,n-1})}\left(\frac{m_{a+b+1}}
{m_1+\cdots+m_{a+b}+m_{a+b+2}+\cdots+m_{a+b+n}}\right)^{z_n}dz_n \notag \end{align} for $2\leq n\leq c+1$. Substituting \eqref{acs-9} and \eqref{acs-10} into \eqref{acs-8}, we find that the sum with respect to $m_{a+b+1}$ is separated and gives a Riemann zeta factor $\zeta(-z_2-\cdots-z_{c+1}-z)$. Since the remaining sum produces the zeta-function of $\Delta_+^{(a,b,c)}$, we obtain that the zeta-function of $\Delta_+^{(a,b+1,c)}$ can be expressed as a $(c+1)$-ple integral of Mellin-Barnes type involving $\zeta(-z_2-\cdots-z_{c+1}-z)$ and the zeta-function of $\Delta_+^{(a,b,c)}$.
The case (i) is similar; we omit the details, only noting that in this case the variable to be separated is $m_{a+1}$. The proof of Theorem \ref{Th-A} is now complete. \end{proof}
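For the reader who wishes to experiment with the Mellin--Barnes expansion that drives the induction above, the following sketch compares the two sides of the formula $(1+X)^{-s}=\frac{1}{2\pi i}\int_{(\kappa)}\frac{\Gamma(s+z)\Gamma(-z)}{\Gamma(s)}X^{z}\,dz$ (valid for $X>0$ and $-\Re s<\kappa<0$), in the shape in which it is applied in \eqref{acs-9} and \eqref{acs-10}. The sketch is written in Python and assumes the \texttt{mpmath} library; the sample values of $s$, $X$ and $\kappa$ are arbitrary choices made purely for illustration.
\begin{verbatim}
# Numerical check (illustrative only) of the Mellin-Barnes formula used in
# (acs-9) and (acs-10):
#   (1+X)^{-s} = (1/(2 pi i)) \int_{(kappa)} Gamma(s+z)Gamma(-z)/Gamma(s) X^z dz
# for X > 0 and -Re(s) < kappa < 0.  Requires Python with mpmath installed;
# the sample values of s, X and kappa below are arbitrary.
from mpmath import mp, mpf, mpc, gamma, quad, pi

mp.dps = 30                        # work with 30 significant digits
s, X, kappa = mpf(3), mpf('0.4'), mpf('-0.5')

def integrand(t):
    z = mpc(kappa, t)              # point kappa + i*t on the vertical contour
    return gamma(s + z) * gamma(-z) / gamma(s) * X**z

# With z = kappa + i*t we have dz = i*dt, so the contour integral divided by
# 2*pi*i becomes an ordinary integral over t divided by 2*pi.  The integrand
# decays like exp(-pi*|t|), so truncating at |t| = 40 is harmless here.
rhs = quad(integrand, [-40, 40]) / (2 * pi)
lhs = (1 + X)**(-s)
print(lhs, rhs.real)               # the two printed numbers should agree
\end{verbatim}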
\section{Proof of fundamental formulas}\label{sec-proof1}
In this section we prove the fundamental formulas stated in Section \ref{sec-3}.
\begin{lem} For $B\subset \Delta_+$ and $\mathbf{V}\in\mathscr{V}$, we have
\begin{equation}
{\rm L.h.}[\mathbf{V}\cap B]
=\{v\in V~|~\text{$\langle v,\mu^{\mathbf{V}}_\beta\rangle=0$ for all $\beta\in\mathbf{V}\setminus B$}\}.
\end{equation} \end{lem} \begin{proof} Let $v$ be an element of the right-hand side, and write $v=\sum_{\beta\in\mathbf{V}}c_\beta \beta$. Then $c_\beta=0$ for all $\beta\in\mathbf{V}\setminus B$, and hence \begin{equation}
v=\sum_{\beta\in\mathbf{V}\cap B}c_\beta \beta\in{\rm L.h.}[\mathbf{V}\cap B]. \end{equation} The converse is shown similarly. \end{proof}
\begin{proof}[Proof of Theorem \ref{thm:main0}] For $\mathbf{t}=(t_{\alpha})_{\alpha\in\Delta_+}\in\mathbf{T}$, $\mathbf{y}\in V$, $\mathbf{V}\in\mathscr{V}$, $B\subset \Delta_+$ and $q\in Q^\vee/L(\mathbf{V}^\vee)$, let \begin{equation}
\begin{split}
F(\mathbf{t},\mathbf{y};\mathbf{V},B,q)&=
(-1)^{\abs{B\setminus\mathbf{V}}}
\Bigl(
\prod_{\gamma\in\Delta_+\setminus(\mathbf{V}\cup B)}
\frac{t_\gamma}
{t_\gamma-\sum_{\beta\in\mathbf{V}\setminus B}t_\beta\langle\gamma^\vee,\mu^{\mathbf{V}}_\beta\rangle}
\Bigr) \\ &\qquad\times
\Bigl(
\prod_{\beta\in\mathbf{V}\setminus B}\frac{t_\beta\exp
(t_\beta\{\mathbf{y}+q\}_{\mathbf{V},\beta})}{e^{t_\beta}-1}
\Bigr), \end{split} \end{equation} so that \begin{equation}\label{termwise}
F(\mathbf{t},\mathbf{y};\Delta)=
\sum_{\mathbf{V}\in\mathscr{V}}
\frac{1}{\abs{Q^\vee/L(\mathbf{V}^\vee)}}
\sum_{q\in Q^\vee/L(\mathbf{V}^\vee)}
F(\mathbf{t},\mathbf{y};\mathbf{V},\emptyset,q). \end{equation}
Assume $\mathbf{y}\in V\setminus\mathfrak{H}_{\mathscr{R}}$, and let \begin{equation}
F_j=F(\mathbf{t},\mathbf{y};\mathbf{V},A_j,q). \end{equation} We calculate $\mathfrak{D}_{\nu_{j+1}} F_j$. First, since $\mathbf{y}\notin\mathfrak{H}_{\mathscr{R}}$, noting Remark \ref{tsuika} we find that \begin{equation}
\partial_{\nu_{j+1}^\vee} F_j
=
\Bigl(\sum_{\beta\in\mathbf{V}\setminus A_j}t_\beta\langle\nu_{j+1}^\vee,\mu^{\mathbf{V}}_\beta\rangle\Bigr)
F_j. \end{equation} Consider the case $\nu_{j+1}\in\mathbf{V}$. Then $\langle\nu_{j+1}^\vee,\mu^{\mathbf{V}}_\beta\rangle=\delta_{\nu_{j+1},\beta}$ and \begin{equation}
\sum_{\beta\in\mathbf{V}\setminus A_j}t_\beta\langle\nu_{j+1}^\vee, \mu^{\mathbf{V}}_\beta\rangle=t_{j+1}, \end{equation} where we write $t_{\nu_{j+1}}=t_{j+1}$ for brevity. Hence we have $\partial_{\nu_{j+1}^\vee} F_j =t_{j+1} F_j$. Therefore we obtain \begin{equation}
\begin{split}
\mathfrak{D}_{\nu_{j+1}} F_j
&=
(-1)^{\abs{A_j\setminus\mathbf{V}}}
\Bigl(
\prod_{\gamma\in\Delta_+\setminus(\mathbf{V}\cup A_j)}
\frac{t_\gamma}
{t_\gamma-\sum_{\beta\in\mathbf{V}\setminus(A_j\cup\{\nu_{j+1}\})}t_\beta\langle\gamma^\vee,\mu^{\mathbf{V}}_\beta\rangle}
\Bigr)
\\
&
\qquad \times
\Bigl(
\prod_{\beta\in\mathbf{V}\setminus(A_j\cup\{\nu_{j+1}\})}\frac{t_\beta\exp
(t_\beta\{\mathbf{y}+q\}_{\mathbf{V},\beta})}{e^{t_\beta}-1}
\Bigr)
\end{split} \end{equation} which is equal to $F_{j+1}$ because $\Delta_+\setminus(\mathbf{V}\cup (A_j\cup\{\nu_{j+1}\}))=\Delta_+ \setminus(\mathbf{V}\cup A_j)$ and $\abs{(A_j\cup\{\nu_{j+1}\})\setminus\mathbf{V}}=\abs{A_j\setminus\mathbf{V}}$.
Next consider the case $\nu_{j+1}\notin\mathbf{V}$. If $\langle\nu_{j+1}^\vee,\mu^{\mathbf{V}}_\beta\rangle=0$ for all $\beta\in\mathbf{V}\setminus A_j$, then \begin{equation}
\partial_{\nu_{j+1}^\vee} F_j
=
\Bigl(\sum_{\beta\in\mathbf{V}\setminus A_j}t_\beta\langle\nu_{j+1}^\vee,\mu^{\mathbf{V}}_\beta\rangle\Bigr)
F_j =0 \end{equation} and hence $\mathfrak{D}_{\nu_{j+1}} F_j=0$. Otherwise, since \begin{equation}
\frac{\partial}{\partial t_{j+1}}\biggr\rvert_{t_{j+1}=0}
\Bigl(\frac{t_{j+1}}
{t_{j+1}-\sum_{\beta\in\mathbf{V}\setminus A_j}t_\beta\langle\nu_{j+1}^\vee,\mu^{\mathbf{V}}_\beta\rangle}
\Bigr)
=
-\frac{1}{\sum_{\beta\in\mathbf{V}\setminus A_j}t_\beta\langle\nu_{j+1}^\vee,\mu^{\mathbf{V}}_\beta\rangle} \end{equation} we have \begin{equation}
\begin{split}
\mathfrak{D}_{\nu_{j+1}} F_j&=
(-1)^{\abs{A_j\setminus\mathbf{V}}+1}
\Bigl(
\prod_{\gamma\in\Delta_+\setminus(\mathbf{V}\cup A_j\cup\{\nu_{j+1}\})}
\frac{t_\gamma}
{t_\gamma-\sum_{\beta\in\mathbf{V}\setminus A_j}t_\beta\langle\gamma^\vee,\mu^{\mathbf{V}}_\beta\rangle}
\Bigr)
\\
&\qquad\times
\Bigl(
\prod_{\beta\in\mathbf{V}\setminus A_j}\frac{t_\beta\exp
(t_\beta\{\mathbf{y}+q\}_{\mathbf{V},\beta})}{e^{t_\beta}-1}
\Bigr).
\end{split} \end{equation} By noting $\mathbf{V}\setminus (A_j\cup\{\nu_{j+1}\})= \mathbf{V}\setminus A_j$ and $\abs{(A_j\cup \{\nu_{j+1}\})\setminus \mathbf{V}}= \abs{A_j\setminus \mathbf{V}}+1$ we find that the right-hand side is equal to $F_{j+1}$.
We see that the condition $\langle\nu_{j+1},\mu^{\mathbf{V}}_\beta\rangle=0$ for all $\beta\in\mathbf{V}\setminus A_j$
is equivalent to the condition $\nu_{j+1}\in{\rm L.h.}[\mathbf{V}\cap A_j]$. Therefore the above results can be summarized as \begin{equation}
\mathfrak{D}_{\nu_{j+1}} F_j
=
\begin{cases}
0\qquad &(\nu_{j+1}\in{\rm L.h.}[\mathbf{V}\cap A_j]),\\
F_{j+1}\qquad &(\nu_{j+1}\notin{\rm L.h.}[\mathbf{V}\cap A_j]).
\end{cases} \end{equation} Hence \begin{equation} \label{eq:DAF0}
\mathfrak{D}_A F_0
=
\begin{cases}
0\qquad &(\mathbf{V}\notin\mathscr{V}_A),\\
F_N\qquad &(\mathbf{V}\in\mathscr{V}_A).
\end{cases} \end{equation} Similarly to the above calculations, we see that $\mathfrak{D}_{A,2} F_0$ gives the same result as \eqref{eq:DAF0}. Thus, since $F_0=F(\mathbf{t},\mathbf{y};\mathbf{V},\emptyset,q)$, from \eqref{termwise} we obtain \eqref{eq:main0}.
The continuity with respect to $\mathbf{y}$
follows from the limit \begin{equation}
\lim_{c\to0+}\{\mathbf{y}+q+c\phi\}_{\mathbf{V},\beta} =\{\mathbf{y}+q\}_{\mathbf{V},\beta} \end{equation} (see the last part of the proof of \cite[Theorem 4.1]{KM5}.) Finally, since $F(\mathbf{t},\mathbf{y};\Delta)$ is holomorphic with respect to $\mathbf{t}$ around the origin, so is $\bigl(\mathfrak{D}_A F\bigr) (\mathbf{t}_{\Delta^*},\mathbf{y};\Delta)$ with respect to $\mathbf{t}_{\Delta^*}$. The proof of Theorem \ref{thm:main0} is thus complete. \end{proof}
\begin{proof}[Proof of Theorem \ref{thm:main1}]
First assume $\mathbf{y}\in V\setminus\mathfrak{H}_{\mathscr{R}}$.
Let
$\mathbf{k}'=(k'_\alpha)_{\alpha\in\Delta_+}$ with
$k'_\alpha=k_\alpha$ ($\alpha\in \Delta^*$),
$k'_\alpha=2$ ($\alpha\in \Delta_+\setminus\Delta^*=A$).
Then by Proposition \ref{prop:ZP}, we have
\begin{equation}
\begin{split}
S(\mathbf{k}',\mathbf{y};\Delta)
&=
\sum_{\lambda\in P\setminus H_{\Delta^\vee}}
e^{2\pi i\langle \mathbf{y},\lambda\rangle}
\prod_{\alpha\in\Delta_+}
\frac{1}{\langle\alpha^\vee,\lambda\rangle^{k'_\alpha}}
\\
&=
\sum_{w\in W}
\Bigl(\prod_{\alpha\in\Delta_+\cap w\Delta_-}
(-1)^{k'_{\alpha}}\Bigr)
\zeta_r(w^{-1}\mathbf{k}',w^{-1}\mathbf{y};\Delta)
\\
&=(-1)^{\abs{\Delta_+}}
\mathcal{P}(\mathbf{k}',\mathbf{y};\Delta)
\biggl(\prod_{\alpha\in\Delta_+}
\frac{(2\pi i)^{k'_\alpha}}{k'_\alpha!}\biggr).
\end{split}
\end{equation} We now apply $\prod_{\alpha\in A}\partial_{\alpha^\vee}^2$ to the above. From the first line we observe that each $\partial_{\alpha^\vee}^2$ produces the factor $(2\pi i\langle\alpha^{\vee},\lambda\rangle)^2$. Hence the factor $\zeta_r(w^{-1}\mathbf{k}',w^{-1}\mathbf{y};\Delta)$ on the second line is transformed into
$(2\pi i)^{2|A|}\zeta_r(w^{-1}\mathbf{k},w^{-1}\mathbf{y};\Delta)$. Therefore we have
\begin{multline}
\label{eq:key1}
(2\pi i)^{2|A|}
\sum_{w\in W}
\Bigl(\prod_{\alpha\in\Delta_+\cap w\Delta_-}
(-1)^{k_{\alpha}}\Bigr)
\zeta_r(w^{-1}\mathbf{k},w^{-1}\mathbf{y};\Delta)
\\
=(-1)^{\abs{\Delta_+}}
\Bigl(\prod_{\alpha\in A}\partial_{\alpha^\vee}^2 \Bigr)
\mathcal{P}(\mathbf{k}',\mathbf{y};\Delta)
\biggl(\prod_{\alpha\in\Delta_+}
\frac{(2\pi i)^{k'_\alpha}}{k'_\alpha!}\biggr).
\end{multline} Since \begin{align*} \biggl(\prod_{\alpha\in\Delta_+}
\frac{(2\pi i)^{k'_\alpha}}{k'_\alpha!}\biggr)= \biggl(\prod_{\alpha\in\Delta^*}
\frac{(2\pi i)^{k'_\alpha}}{k'_\alpha!}\biggr)
\biggl(\prod_{\alpha\in A}
\frac{(2\pi i)^{2}}{2!}\biggr), \end{align*} we have
\begin{equation} \label{eq:key1bis}
\begin{split}
&\sum_{w\in W}
\Bigl(\prod_{\alpha\in\Delta_+\cap w\Delta_-}
(-1)^{k_{\alpha}}\Bigr)
\zeta_r(w^{-1}\mathbf{k},w^{-1}\mathbf{y};\Delta)
\\
&=(-1)^{\abs{\Delta_+}}
\Bigl(\prod_{\alpha\in A}\frac{1}{2}\partial_{\alpha^\vee}^2 \Bigr)
\mathcal{P}(\mathbf{k}',\mathbf{y};\Delta)
\biggl(\prod_{\alpha\in\Delta^*}
\frac{(2\pi i)^{k'_\alpha}}{k'_\alpha!}\biggr).
\end{split}
\end{equation} From \eqref{def-F} it follows that
\begin{multline} \label{eq:key2}
\Bigl(\prod_{\alpha\in A}\frac{1}{2}\frac{\partial^2 }{\partial t_\alpha^2 }\biggr\rvert_{t_\alpha=0}\partial_{\alpha^\vee}^2 \Bigr)
F(\mathbf{t},\mathbf{y};\Delta)
\\
=
\sum_{\substack{\mathbf{m}=(m_\alpha)_{\alpha\in\Delta_+}\\ m_\alpha\in \mathbb{N}_{0}\ (\alpha\in\Delta^*)\\ m_\alpha=2\ (\alpha\in A)}}
\Bigl(\prod_{\alpha\in A}\frac{1}{2}\partial_{\alpha^\vee}^2 \Bigr)
\mathcal{P}(\mathbf{m},\mathbf{y};\Delta)
\prod_{\alpha\in\Delta^*} \frac{t_\alpha^{m_\alpha}}{m_\alpha!}.
\end{multline}
By Theorem \ref{thm:main0}, we see that the left-hand side of \eqref{eq:key2} is equal to
\begin{equation}
\label{eq:key3}
F_{\Delta^*}(\mathbf{t}_{\Delta^*},\mathbf{y};\Delta) =
\sum_{\mathbf{m}_{\Delta^*}\in \mathbb{N}_{0}^{\abs{\Delta^*}}}\mathcal{P}_{\Delta^*}(\mathbf{m}_{\Delta^*},\mathbf{y};\Delta)
\prod_{\alpha\in\Delta^*}
\frac{t_{\alpha}^{m_\alpha}}{m_\alpha!}.
\end{equation} Comparing \eqref{eq:key2} with \eqref{eq:key3} we find that \begin{align*} \Bigl(\prod_{\alpha\in A}\frac{1}{2}\partial_{\alpha^\vee}^2 \Bigr)
\mathcal{P}(\mathbf{k}',\mathbf{y};\Delta) =\mathcal{P}_{\Delta^*}(\mathbf{k}_{\Delta^*},\mathbf{y};\Delta). \end{align*} Therefore \eqref{eq:key1bis} implies the desired result when $\mathbf{y}\in V\setminus\mathfrak{H}_{\mathscr{R}}$. By the continuity with respect to $\mathbf{y}$, the result is also valid in the case when $\mathbf{y}\in\mathfrak{H}_{\mathscr{R}}$. \end{proof}
\begin{remark}
It is possible to prove Theorem \ref{thm:main1} by using $\mathfrak{D}_A$ instead of $\mathfrak{D}_{A,2}$. In this method we need to consider the case $k_\alpha=1$ for some $\alpha\in A$; such an argument is indeed valid (see \cite[Remark 3.2]{KM3}). \end{remark}
\section{Proofs of Theorems \ref{T-5-1} and \ref{T-B2-EZ}}\label{sec-proof2}
In this final section we prove Theorems \ref{T-5-1} and \ref{T-B2-EZ}. The basic principle of the proofs of these theorems is similar to that of the argument developed in \cite[Section 7]{KMT-CJ}. We first state the following lemma.
\begin{lem} \label{L-5-2} \ For an arbitrary function $f\,:\, \mathbb{N}_{0} \to \mathbb{C}$ and $d\in \mathbb{N}$, we have \begin{align} &\sum_{k=0}^{d}\phi(d-k){\varepsilon}_{d-k}\sum_{\nu=0}^{k}f(k-\nu)\frac{(i\pi)^{\nu}}{\nu!} =-\frac{i\pi}{2}f(d-1)+\sum_{\xi=0}^{[d/2]} \zeta(2\xi)f(d-2\xi), \label{MNOT} \end{align} where we denote the integer part of $x\in \mathbb{R}$ by $[x]$, ${\varepsilon}_j=(1+(-1)^j)/2$ $(j\in \mathbb{Z})$ and $\phi(s)=\sum_{m\geq 1}(-1)^m m^{-s}=\left(2^{1-s}-1\right)\zeta(s)$. \end{lem}
\begin{proof} This can be immediately obtained by combining (2.6) and (2.7) (with the choice $g(x)=i\pi f(x-1)$) in \cite[Lemma 2.1]{MNOT}. \end{proof}
\begin{proof}[Proof of Theorem \ref{T-5-1}] From \cite[(4.31) and (4.32)]{KMT-CJ}, we have \begin{align} & \sum_{n\in \mathbb{Z}^*} \frac{(-1)^{n}e^{in\theta}}{n^a}-2\sum_{j=0}^{a}\ \phi(a-j){\varepsilon}_{a-j} \frac{(i\theta)^{j}}{j!}=0 \label{e-5-1} \end{align} for $a\geq 2$ and $\theta \in [-\pi,\pi]$, where $\mathbb{Z}^*=\mathbb{Z}\smallsetminus \{0\}$. For $x,y \in \mathbb{R}$ with
$|x|<1$
and $|y|<1$, multiply the above by \begin{equation} \sum_{l,m\in \mathbb{N}} (-1)^{l+m}x^l y^m e^{i(l+m)\theta}. \label{5-1-0} \end{equation} Separating the terms corresponding to $l+m+n=0$, we obtain \begin{align*} & \sum_{l,m\in \mathbb{N}}\sum_{n\in \mathbb{Z}^*\atop l+m+n\not=0} \frac{(-1)^{l+m+n}x^l y^m e^{i(l+m+n)\theta}}{n^a}\\ & \ -2\sum_{j=0}^{a}\ \phi(a-j){\varepsilon}_{a-j}\sum_{l,m\in \mathbb{N}}(-1)^{l+m}x^l y^m e^{i(l+m)\theta} \frac{(i\theta)^{j}}{j!}\\ & \ =-(-1)^a\sum_{l,m\in \mathbb{N}} \frac{x^l y^m}{(l+m)^a} \end{align*} for $\theta \in [-\pi,\pi]$. The right-hand side of the above is constant with respect to $\theta$. Therefore we can apply \cite[Lemma 6.2]{KMT-CJ} with $h=1$, $a_1=a$, $d=c\geq 2$, $$C(N)=\sum_{l,m\in\mathbb{N},n\in\mathbb{Z}^*\atop l+m+n=N}\frac{x^l y^m}{n^a},$$ \begin{align*} D(N;r;1)=
\begin{cases}
\sum_{l,m\in\mathbb{N}\atop l+m=N}x^l y^m & (N\geq 2,r=0),\\
0 & ({\rm otherwise})
\end{cases} \end{align*} in the notation of \cite{KMT-CJ}. The result is \begin{align*} & \sum_{l,m\in \mathbb{N}}\sum_{n\in \mathbb{Z}^*\atop {l+m+n\not=0}} \frac{(-1)^{l+m+n}x^l y^m e^{i(l+m+n)\theta}}{n^a(l+m+n)^c} \\ & \ -2\sum_{j=0}^{a}\ \phi(a-j){\varepsilon}_{a-j}\sum_{\xi=0}^{j} \binom{j-\xi+c-1}{j-\xi} (-1)^{j-\xi}\sum_{l,m\in \mathbb{N}}\frac{(-1)^{l+m}x^l y^m e^{i(l+m)\theta}} {(l+m)^{c+j-\xi}} \frac{(i\theta)^{\xi}}{\xi!}\\ & \ +2\sum_{j=0}^{c}\ \phi(c-j){\varepsilon}_{c-j}\sum_{\xi=0}^{j} \binom{j-\xi+a-1}{a-1} (-1)^{a-1}\sum_{l,m\in \mathbb{N}}\frac{x^l y^m }{(l+m)^{a+j-\xi}} \frac{(i\theta)^{\xi}} {\xi!}=0. \end{align*} Replace $x$ by $-xe^{-i\theta}$ and separate the term corresponding to $m+n=0$ in the first member on the left-hand side, and apply \cite[Lemma 6.2]{KMT-CJ} again with $d=b\geq 2$. Then we can obtain \begin{align} & \sum_{l,m\in \mathbb{N}}\sum_{n\in \mathbb{Z}^*\atop {m+n\not=0 \atop l+m+n\not=0}} \frac{(-1)^{m+n}x^l y^m e^{i(m+n)\theta}}{n^a(m+n)^b (l+m+n)^c} \label{triple-1} \\ & \ =2\sum_{j=0}^{a}\ \phi(a-j){\varepsilon}_{a-j}\sum_{\xi=0}^{j} \sum_{\omega=0}^{j-\xi} \binom{\omega+b-1}{\omega}(-1)^\omega \binom{j-\xi-\omega+c-1}{c-1}(-1)^{j-\xi-\omega} \notag\\ & \qquad \qquad \times \sum_{l,m\in \mathbb{N}}\frac{(-1)^{m}x^l y^m e^{im\theta}} {m^{b+\omega}(l+m)^{c+j-\xi-\omega}} \frac{(i\theta)^{\xi}}{\xi!} \notag\\ & \ -2\sum_{j=0}^{b}\ \phi(b-j){\varepsilon}_{b-j}\sum_{\xi=0}^{j} \sum_{\omega=0}^{a-1} \binom{\omega+j-\xi}{\omega}(-1)^\omega \binom{a-1-\omega+c-1}{c-1}(-1)^{a-1-\omega} \notag\\ & \qquad \qquad \times \sum_{l,m\in \mathbb{N}}\frac{x^l y^m}{m^{j-\xi+\omega+1} (l+m)^{a+c-1-\omega}} \frac{(i\theta)^{\xi}}{\xi!} \notag\\ & \ -2\sum_{j=0}^{c}\ \phi(c-j){\varepsilon}_{c-j}\sum_{\xi=0}^{j} \sum_{\omega=0}^{j-\xi} \binom{\omega+b-1}{\omega}(-1)^\omega \binom{j-\xi-\omega+a-1}{a-1}(-1)^{a-1} \notag\\ & \qquad \qquad \times \sum_{l,m\in \mathbb{N}}\frac{(-1)^{l}x^l y^m e^{-il\theta}} {(-l)^{b+\omega}(l+m)^{a+j-\xi-\omega}} \frac{(i\theta)^{\xi}}{\xi!} \notag\\ & \ +2\sum_{j=0}^{b}\ \phi(b-j){\varepsilon}_{b-j}\sum_{\xi=0}^{j} \sum_{\omega=0}^{c-1} \binom{\omega+j-\xi}{\omega}(-1)^\omega \binom{a-1-\omega+c-1}{a-1}(-1)^{a-1} \notag\\ & \qquad \qquad \times \sum_{l,m\in \mathbb{N}}\frac{x^l y^m}{(-l)^{j-\xi+\omega+1} (l+m)^{a+c-1-\omega}} \frac{(i\theta)^{\xi}}{\xi!}. \notag \end{align} Since $a,b,c \geq 2$, we can let $x,y \to 1$ on the both sides because of absolute convergence. Then set $\theta=\pi$, and consider the left-hand side of the resulting formula first. The contribution of the terms corresponding to $m+2n=0$ is obviously $(-1)^a\zeta_2(a+b,c)$. The contribution of the terms corresponding to $l+m+2n=0$ is (with rewriting $-n$ by $n$) \begin{align*} (-1)^a\sum_{m,n\in\mathbb{N}\atop m\neq n, m<2n}\frac{1}{n^{a+c}(m-n)^b}, \end{align*} which is, by separating into two parts according to $n<m<2n$ and $0<m<n$, equal to $(-1)^a(1+(-1)^b)\zeta_2(b,a+c)$. We can also see that the contribution of the terms corresponding to $l+2m+2n=0$ is \begin{align*} (-1)^a \sum_{m,n\in\mathbb{N}\atop n>m}\frac{1}{n^a(m-n)^b(n-m)^c}
=(-1)^{a+b}\zeta_2(b+c,a). \end{align*}
The remaining part of the left-hand side is \begin{align*} & \sum_{l,m\in \mathbb{N}}\sum_{n\in \mathbb{Z}^*\atop {m+n\not=0 \atop {m+2n\not=0 \atop {l+m+n\not=0 \atop {l+m+2n\not=0 \atop l+2m+2n\not=0}}}}} \frac{1} {n^a(m+n)^b (l+m+n)^c} \notag\\ & =\zeta_3(a,b,c)+(-1)^a\sum_{l,m\in \mathbb{N}} \sum_{n\in \mathbb{N}\atop {m\not=n \atop {m\not=2n \atop {l+m\not=n \atop {l+m\not=2n \atop l+2m\not=2n}}}}} \frac{1} {n^a(m-n)^b (l+m-n)^c}. \end{align*} On the above double sum, replace $j=m-n$ and $k=n-m$ correspondingly to $m>n$ and $m<n$, respectively. On the part corresponding to $m>n$, we further divide the sum into three parts according to $l+j<n$, $j<n<l+j$, $n<j$ and find that the contribution of this part is $$ (-1)^a\left\{\zeta_3(b,c,a)+\zeta_3(b,a,c)+\zeta_3(a,b,c)\right\}. $$ Similarly we treat the part $m<n$. Collecting the above results, we obtain that the left-hand side is \begin{align*}
(-1)^a&\bigg\{(1+(-1)^a)\zeta_3(a,b,c)+(1+(-1)^b)\left( \zeta_3(b,a,c)+ \zeta_3(b,c,a)\right)\\ & \qquad +(-1)^b(1+(-1)^c)\zeta_3(c,b,a)+\zeta_2(a+b,c)\\ & \qquad +(1+(-1)^b)\zeta_2(b,a+c)+(-1)^b\zeta_2(b+c,a)\bigg\}. \end{align*} On the other hand, applying Lemma \ref{L-5-2}, we can rewrite the right-hand side to \begin{align*} & 2(-1)^a\bigg\{ \sum_{\xi=0}^{[a/2]}\zeta(2\xi)\sum_{\omega=0}^{a-2\xi}\binom{\omega+b-1} {\omega}\binom{a+c-2\xi-\omega-1}{c-1}\zeta_2(b+\omega,a+c-2\xi-\omega)\\ & \ +\sum_{\xi=0}^{[b/2]}\zeta(2\xi)\sum_{\omega=0}^{a-1}\binom{\omega+b-2\xi}{\omega} \binom{a+c-\omega-2}{c-1}\zeta_2(b-2\xi+\omega+1,a+c-1-\omega)\\ & \ +(-1)^b\sum_{\xi=0}^{[c/2]}\zeta(2\xi)\sum_{\omega=0}^{c-2\xi}\binom{\omega+b-1}{\omega} \binom{a+c-2\xi-\omega-1}{a-1}\zeta_2(b+\omega,a+c-2\xi-\omega)\\ & \ +(-1)^b\sum_{\xi=0}^{[b/2]}\zeta(2\xi)\sum_{\omega=0}^{c-1}\binom{\omega+b-2\xi}{\omega} \binom{a+c-\omega-2}{a-1}\zeta_2(b-2\xi+\omega+1,a+c-1-\omega)\bigg\}. \end{align*} This completes the proof of Theorem \ref{T-5-1}. \end{proof}
Finally we give the proof of Theorem \ref{T-B2-EZ}.
\begin{proof}[Proof of Theorem \ref{T-B2-EZ}] Let $p\in \mathbb{N}_{\geq 2}$ and $s\in \mathbb{R}_{>1}$. It follows from \cite[Equation (4.7)]{KMT-Pala} that \begin{equation*} \begin{split} & \sum_{l\in \mathbb{Z}^*, m\in\mathbb{N}\atop l+m\not=0} \frac{(-1)^{l+m}x^m e^{i(l+m)\theta}}{l^{p}m^{s}}-2\sum_{j=0}^{p}\ \phi(p-j)\varepsilon_{p-j}\left\{ \sum_{m=1}^\infty \frac{(-1)^{m}x^m e^{im\theta}}{m^s}\right\} \frac{(i\theta)^{j}}{j!}\\ & \ \ \ \ +(-1)^{p}\sum_{m=1}^\infty \frac{x^m}{m^{s+p}}=0 \end{split} \end{equation*}
for $\theta \in [-\pi,\pi]$ and $x\in \mathbb{C}$ with $|x|\leq 1$. Setting $x=-e^{i\theta}$ on the both sides and separating the term corresponding to $l+2m=0$ of the first term on the left-hand side, we have \begin{align*} & \sum_{l\in \mathbb{Z}^*,m\in\mathbb{N}\atop {l+m\not=0 \atop l+2m\not=0}} \frac{(-1)^{l} e^{i(l+2m)\theta}}{l^{p}m^{s}} -2\sum_{j=0}^{p}\ \phi(p-j)\varepsilon_{p-j}\left\{ \sum_{m=1}^\infty \frac{ e^{2im\theta}}{m^s}\right\} \frac{(i\theta)^{j}}{j!}\\ & \ \ \ \ +(-1)^{p}\sum_{m=1}^\infty \frac{(-1)^me^{im\theta}}{m^{s+p}}=-\sum_{m=1}^\infty \frac{1}{(-2m)^p m^s}. \end{align*} By \cite[Lemma 6.2]{KMT-CJ} with $d=q\geq 2$, we obtain \begin{align} & \sum_{l\in \mathbb{Z}^*,m\in\mathbb{N}\atop{l+m\not=0 \atop l+2m\not=0}} \frac{(-1)^{l} e^{i(l+2m)\theta}}{l^{p}m^{s}(l+2m)^q} \label{eq-9-2}\\ & \quad =2\sum_{j=0}^{p}\ \phi(p-j)\varepsilon_{p-j}\sum_{\xi=0}^{j}\binom{j-\xi+q-1}{j-\xi}\frac{(-1)^{j-\xi}}{2^{q+j-\xi}}\sum_{m=1}^{\infty}\frac{e^{2im\theta}}{m^{s+q+j-\xi}}\frac{(i\theta)^{\xi}}{\xi!}\notag\\ & \quad -2\sum_{j=0}^{q}\ \phi(q-j)\varepsilon_{q-j}\sum_{\xi=0}^{j}\binom{j-\xi+p-1}{j-\xi}\frac{(-1)^{p-1}}{2^{p+j-\xi}}\sum_{m=1}^{\infty}\frac{1}{m^{s+p+j-\xi}}\frac{(i\theta)^{\xi}}{\xi!}\notag\\ & \ \ \ \ -(-1)^{p}\sum_{m=1}^{\infty}\frac{(-1)^m e^{im\theta}}{m^{s+p+q}}. \notag \end{align} Let $\theta=\pi$ and using Lemma \ref{L-5-2}. Then the right-hand side of \eqref{eq-9-2} is equal to \begin{align} & 2(-1)^{p}\sum_{\xi=0}^{[p/2]}\ \frac{1}{2^{p+q-2\xi}}\binom{p+q-1-2\xi}{q-1}\zeta(2\xi)\zeta(s+p+q-2\xi)\label{eq-9-3} \\ & +2(-1)^{p}\sum_{\xi=0}^{[q/2]}\ \frac{1}{2^{p+q-2\xi}}\binom{p+q-1-2\xi}{p-1}\zeta(2\xi)\zeta(s+p+q-2\xi) \notag\\ & -(-1)^{p}\zeta(s+p+q). \notag \end{align} On the other hand, we can see that the left-hand side can be written in terms of the zeta-function of $B_2$. Recall that \begin{align*} \zeta_2(s_1,s_2,s_3,s_4;B_2)&=\zeta_2((s_1,s_2,s_3,s_4),{\bf 0};\Delta(B_2))\\ &=\sum_{m_1=1}^{\infty}\sum_{m_2=1}^{\infty}\frac{1}{m_1^{s_1}m_2^{s_2} (m_1+m_2)^{s_3}(2m_1+m_2)^{s_4}}. \end{align*} The contribution of the terms with $l>0$ to the left-hand side is obviously $\zeta_2(s,p,0,q;B_2)$. As for the terms with $l<0$, we rewrite $-l$ by $l$, divide the sum into three parts according to the conditions $l<m$, $m<l<2m$ and $l>2m$, and evaluate each part in terms of the zeta-function of $B_2$. The conclusion is that the left-hand side is \begin{align} & \zeta_2(s,p,0,q;B_2)+(-1)^p\zeta_2(0,p,s,q;B_2) +(-1)^p\zeta_2(0,q,s,p;B_2)\label{eq-9-4}\\ & \qquad +(-1)^{p+q}\zeta_2(s,q,0,p;B_2).\notag \end{align} We combine \eqref{eq-9-3} and \eqref{eq-9-4} and multiply by $(-1)^p$. Then we can set $s=0$ because \eqref{eq-9-3} and \eqref{eq-9-4} are absolutely convergent for $s>-1$. Noting $\zeta_2(0,p,0,q;B_2)=\zeta_2^\sharp(p,q)$, we complete the proof of Theorem \ref{T-B2-EZ}. \end{proof}
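As a complement to the proof above, the double series defining $\zeta_2(s_1,s_2,s_3,s_4;B_2)$ can be evaluated numerically by direct truncation. The following Python sketch is purely illustrative: the truncation level $M$ and the exponents are arbitrary choices, and since the convergence is only polynomial the output is a rough approximation rather than a high-precision value.
\begin{verbatim}
# Illustrative truncation of the double series recalled in the proof above:
#   zeta_2(s1,s2,s3,s4;B_2)
#     = sum_{m1,m2 >= 1} m1^{-s1} m2^{-s2} (m1+m2)^{-s3} (2*m1+m2)^{-s4}.
# The truncation level M and the exponents are arbitrary; convergence is
# only polynomial, so this yields a rough numerical value.

def zeta2_B2(s1, s2, s3, s4, M=2000):
    total = 0.0
    for m1 in range(1, M + 1):
        for m2 in range(1, M + 1):
            total += (m1 ** -s1) * (m2 ** -s2) \
                     * ((m1 + m2) ** -s3) * ((2 * m1 + m2) ** -s4)
    return total

# Example: a truncated value of zeta_2(0,2,0,2;B_2), which is the quantity
# zeta_2^sharp(2,2) appearing in Theorem T-B2-EZ.
print(zeta2_B2(0, 2, 0, 2))
\end{verbatim}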
\
\proof[Acknowledgements] The authors would like to express their sincere gratitude to Professor Mike Hoffman for pointing out that symmetric sums for MZVs in \eqref{EZ-Sr-11} can be written in terms of products of Riemann's zeta values at even positive integers and giving related valuable comments (see Remark \ref{Rem-Hof}).
\
\end{document}
\begin{document}
\title{Bratteli diagrams where random orders are imperfect}
\begin{abstract} For simple Bratteli diagrams $B$ where there is a single edge connecting any two vertices in consecutive levels, we show that a random order almost surely has uncountably many maximal paths if and only if the growth rate of the level-$n$ vertex sets is super-linear. This gives us the following dichotomy: a random order on a slowly growing Bratteli diagram almost surely admits a homeomorphic Vershik map, while a random order on a quickly growing Bratteli diagram almost surely does not. We also show that for a large family of infinite rank Bratteli diagrams $B$, a random order on $B$ almost surely does not admit a continuous Vershik map. \end{abstract}
\maketitle
\section{Introduction}\label{Introduction} Consider the following random process. For each natural number $n$, we have a collection of finitely many individuals. Each individual in the $(n+1)$-st collection randomly picks a parent from the $n$-th collection, and this is done for all $n$. If we know how many individuals there are in each generation, the question ``How many infinite ancestral lines are there?'' almost always has a common answer $j$: what is it? We can also make this game more general by changing, for each individual, the odds that it chooses a certain parent, and ask the same question.
The information that we are given will come as a {\em Bratteli diagram} $B$ (Definition \ref{Definition_Bratteli_Diagram}), where each ``individual'' in generation $n$ is represented by a vertex in the $n$-th vertex set $V_n$, and the probability that an individual $v\in V_{n+1}$ chooses $v' \in V_n$ as a parent is the ratio of the number of edges incoming to $v$ with source $v'$ to the total number of edges incoming to $v$. We consider the space $\mathcal O_B$ of {\em orders} on $B$ (Definition \ref{order_definition}) as a measure space equipped with the completion of the uniform product measure $\mathbb P$. A result in \cite{bky} (stated as Theorem \ref{generic_theorem} here) tells us that there is some $j$, either a positive integer or infinite, such that a $\mathbb P$-random order $\omega$ almost surely possesses $j$ maximal paths.
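For illustration, the following Python sketch simulates the random process just described in the special case, treated in Section \ref{special_case}, where every individual chooses its parent uniformly at random (one edge between any two vertices in consecutive levels). It counts how many level-$k$ individuals are still ancestors of somebody at a later level $N$; the generation sizes used are arbitrary illustrative choices. In line with Theorem \ref{thm:dichot} below, one typically sees the count collapse to $1$ when the generation sizes grow slowly and stay larger than $1$ when they grow super-linearly.
\begin{verbatim}
# Illustrative simulation (not used in any proof) of the random ancestral
# process above, in the case where each individual picks its parent
# uniformly at random.  surviving_ancestors returns the number of distinct
# level-k ancestors of the level-N population.  Generation sizes are
# arbitrary illustrative choices.
import random

def surviving_ancestors(M, k, N):
    # ancestor[v] = label of the level-k ancestor of individual v
    ancestor = list(range(M[k]))          # at level k everyone is its own ancestor
    for n in range(k, N):
        # each of the M[n+1] individuals at level n+1 picks a parent in level n
        ancestor = [ancestor[random.randrange(M[n])] for _ in range(M[n + 1])]
    return len(set(ancestor))

random.seed(0)
slow = [10] * 60                                 # constant-size generations
fast = [10 * (n + 1) ** 2 for n in range(60)]    # super-linear growth
print(surviving_ancestors(slow, 0, 59), surviving_ancestors(fast, 0, 59))
\end{verbatim}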
Bratteli diagrams, which were first studied in operator algebras, appeared implicitly in the measurable dynamical setting in \cite{vershik_1,vershik}, where it was shown that any ergodic invertible transformation of a Lebesgue space can be represented as a measurable ``successor'' (or {\em Vershik}) map on the space of infinite paths $X_B$ in some Bratteli diagram $B$ (Definition \ref{Good_and_bad}). The successor map, which is defined using an order on $B$, is not defined on the set of maximal paths in $X_B$, but as this set is typically a null set, it poses no problem in the measurable framework. Similar results were discovered in the topological setting in \cite{hps}: any minimal homeomorphism on a Cantor space has a representation as a (continuous, invertible) Vershik map which is defined on all of $X_B$ for some Bratteli diagram $B$. To achieve this, the technique used in \cite{hps} was to construct the order so that it had a unique minimal and maximal path, in which case the successor map extends uniquely to a homeomorphism of $X_B$. For such an order our quantity $j$ takes the value 1. We were curious to see whether such an order is typical, and whether a typical order defines a continuous Vershik map. Note that the value $j$ is not an invariant of topological dynamical properties that are determined by the Bratteli diagram's {\em dimension group} \cite{effros}, such as {\em strong orbit equivalence} \cite{g_p_s}.
In this article we compute $j$ for a large family of {\em infinite rank} Bratteli diagrams (Definition \ref{rank_d_definition}). Namely, in Theorem \ref{thm:dichot}, we show that $j$ is uncountable for the situation where any individual at stage $n$ is equally likely to be chosen as a parent by any individual at stage $n+1$, whenever the generation growth rate is super-linear. If the generations grow at a slower rate than this, $j=1$. We note that this latter situation has been studied in the context of gene survival in a variable-size population, as in the Wright-Fisher model (e.g. \cite{seneta}, \cite{donnelly}). We describe this connection in Section \ref{results}.
In Theorem \ref{general_case} we generalise part of Theorem \ref{thm:dichot} to a large family of Bratteli diagrams. We can draw the following conclusion from these results. An order $\omega$ is called {\em perfect} if it admits a continuous Vershik map. Researchers working with continuous Bratteli-Vershik representations of dynamical systems usually work with a subfamily of perfect orders called {\em proper} orders: those that have only one maximal and one minimal path. For a large class of simple Bratteli diagrams (including the ones we identify in Theorems \ref{thm:dichot} and \ref{general_case}), if $j>1$, then a $\mathbb P$-random order is almost surely not perfect (Theorem \ref{random_order_imperfect}), hence not proper. We note that this is in contrast to the case for finite rank diagrams, where almost any order put on any reasonable finite rank Bratteli diagram is perfect \cite[Section 5]{bky}.
\section{Bratteli diagrams and Vershik maps}\label{Preliminaries}
In this section, we collect the notation and basic definitions that are used throughout the paper.
\subsection{Bratteli diagrams} \begin{definition}\label{Definition_Bratteli_Diagram} A {\it Bratteli diagram} is an infinite graph $B=(V,E)$ such that the vertex set $V=\bigcup_{i\geq 0}V_i$ and the edge set $E=\bigcup_{i\geq 1}E_i$ are partitioned into disjoint subsets $V_i$ and $E_i$ where
\begin{enumerate}[(i)] \item $V_0=\{v_0\}$ is a single point; \item $V_i$ and $E_i$ are finite sets; \item there exists a range map $r$ and a source map $s$, both from $E$ to $V$, such that $r(E_i)= V_i$, $s(E_i)= V_{i-1}$. \end{enumerate}
\end{definition}
Note that $E$ may contain multiple edges between a pair of vertices. The pair $(V_i,E_i)$ or just $V_i$ is called the {\em $i$-th level} of the diagram $B$. A finite or infinite sequence of edges $(e_i : e_i\in E_i)$ such that $r(e_{i})=s(e_{i+1})$ is called a {\it finite} or {\it infinite path}, respectively.
For $m<n$, $v\, \in V_{m}$ and $w\,\in V_{n}$, let $E(v,w)$ denote the set of all paths $\overline{e} = (e_{1},\ldots, e_{p})$ with $s(e_{1})=v$ and $r(e_{p})=w$. For a Bratteli diagram $B$, let $X_B$ be the set of infinite paths starting at the top vertex $v_0$.
We endow $X_B$ with the topology generated by cylinder sets $\{U(e_j,\ldots,e_n): j, \,\, n \in \mathbb N, \mbox{ and }
(e_j,\ldots,e_n) \in E(v, w), v \in V_{j-1}, w \in V_n \}$, where $U(e_j,\ldots,e_n):=\{x\in X_B : x_i=e_i,\;i=j,\ldots,n\}$. With this topology, $X_B$ is a 0-dimensional compact metrizable space.
\begin{definition}\label{incidence_matrices_definition} Given a Bratteli diagram $B$, the $n$-th {\em incidence matrix}
$F_{n}=(f^{(n)}_{v,w}),\ n\geq 0,$ is a $|V_{n+1}|\times |V_n|$ matrix whose entries $f^{(n)}_{v,w}$ are equal to the number of edges between the vertices $v\in V_{n+1}$ and $w\in V_{n}$, i.e. $$
f^{(n)}_{v,w} = |\{e\in E_{n+1} : r(e) = v, s(e) = w\}|. $$ \end{definition}
\begin{definition}\label{rank_d_definition} Let $B$ be a Bratteli diagram. \begin{enumerate}
\item We say $B$ has \textit{finite rank} if for some $k$, $|V_n| \leq k$ for all $n\geq 1$. \item We say that $B$ is {\em simple} if for any level $m$ there is $n>m$ such that $E(v,w) \neq \emptyset$ for all $v\in V_m$ and $w\in V_n$. \item We say that a Bratteli diagram is {\em completely connected} if all entries of its incidence matrices are positive.
\end{enumerate} \end{definition}
In this article we work only with completely connected Bratteli diagrams.
\subsection{Orderings on a Bratteli diagram}
\begin{definition}\label{order_definition} A Bratteli diagram $B=(V,E) $ is called {\it ordered} if a linear order ``$>$" is defined on every set $r^{-1}(v)$, $v\in \bigcup_{n\ge 1} V_n$. We use $\omega$ to denote the corresponding partial order on $E$ and write $(B,\omega)$ when we consider $B$ with the ordering $\omega$. Denote by $\mathcal O_{B}$ the set of all orderings on $B$. \end{definition}
Every $\omega \in \mathcal O_{B}$ defines a \textit{lexicographic} partial ordering on the set of finite paths between vertices of levels $V_k$ and $V_l$: $(e_{k+1},\ldots,e_l) > (f_{k+1},\ldots,f_l)$ if and only if there is an $i$ with $k+1\le i\le l$, $e_j=f_j$ for $i<j\le l$ and $e_i> f_i$. It follows that, given $\omega \in \mathcal O_{B}$, any two paths from $E(v_0, v)$ are comparable with respect to the lexicographic ordering generated by $\omega$. If two infinite paths are {\em tail equivalent}, i.e. agree from some vertex $v$ onwards, then we can compare them by comparing their initial segments in $E(v_0,v)$. Thus $\omega$ defines a partial order on $X_B$, where two infinite paths are comparable if and only if they are tail equivalent.
\begin{definition} We call a finite or infinite path $e=(e_i)$ \textit{ maximal (minimal)} if every $e_i$ is maximal (minimal) amongst the edges from $r^{-1}(r(e_i))$. \end{definition}
Notice that, for $v\in V_i,\ i\ge 1$, the minimal and maximal (finite) paths in $E(v_0,v)$ are unique. Denote by $X_{\max}(\omega)$ and $X_{\min}(\omega)$ the sets of all maximal and minimal infinite paths in $X_B$, respectively. It is not hard to show that $X_{\max}(\omega)$ and $X_{\min}(\omega)$ are non-empty closed subsets of $X_B$. If $B$ is completely connected, then $X_{\max}(\omega)$ and $X_{\min}(\omega)$ have no interior points.
Given a Bratteli diagram $B$, we can describe the set of all orderings $\mathcal O_{B}$ in the following way. Given a vertex $v\in V\backslash V_0$, let $P_v$ denote the set of all orders on $r^{-1}(v)$; an element in $P_v$ is denoted by $\omega_v$. Then $\mathcal O_{B}$ can be represented as \begin{equation}\label{orderings_set} \mathcal O_{B} = \prod_{v\in V\backslash V_0}P_v . \end{equation} We write an element of $\mathcal O_B$ as $(\omega_v)_{v\in V\setminus V_0}$.
Recall that an $N$th level \emph{cylinder set} is a set of the form $\bigcap_{v\in \bigcup_{i=1}^{N} V_i}[w_{v}^*]$, where $[w_v^*]=\{ \omega: \omega_v=\omega_v^{*} \}$. The collection of $N$th level cylinder sets forms a finite $\sigma$-algebra, $\mathcal F_N$. We let $\mathcal B$ denote the $\sigma$-algebra generated by $\bigcup_N\mathcal F_N$ and equip $(\mathcal O_B,\mathcal B)$ with the product measure, $\mathbb P'= \prod_{v\in V\backslash V_0} \mathbb P_v$ where $\mathbb P_v$ is the uniform measure on $P_v$: $\mathbb P_v(\{i\}) =
(|r^{-1}(v)|!)^{-1}$ for every $i \in P_v$ and $v\in V\backslash V_0$. Finally, it will be convenient to extend the measure space $(\mathcal O_B, \mathcal B,\mathbb P')$ to its completion, $(\mathcal O_B,\mathcal F,\mathbb P)$. (The reason for the use of the completion is that the subset of $\mathcal O_B$ consisting of orders with uncountably many maximal paths may not be $\mathcal B$-measurable, but will be shown to be $\mathcal F$-measurable.)
\begin{definition} \label{telescoping_definition} Let $B$ be a Bratteli diagram, and let $n_0 = 0 <n_1<n_2 < \ldots$ be a strictly increasing sequence of integers. The {\em telescoping of $B$ to $(n_k)$} is the Bratteli diagram $B'$ whose $k$-th level vertex set is $V_k'= V_{n_k}$ and whose incidence matrices $(F_k')$ are defined by \[F_k'= F_{n_{k+1}-1} \circ \ldots \circ F_{n_k},\] where $(F_n)$ are the incidence matrices for $B$. \end{definition}
If $B'$ is a telescoping of $B$, then there is a natural injection
$L: \mathcal O_B \rightarrow \mathcal O_{B'}$. Note that unless $|V_n|=1$ for all but finitely many $n$, $L(\mathcal O_{B})$ is a set of zero measure in $\mathcal O_{B'}$.
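The incidence matrices of a telescoping as in Definition \ref{telescoping_definition} are just the corresponding products of incidence matrices, and can be computed mechanically. The following Python sketch (purely illustrative; the example diagram at the end is an arbitrary choice) does this for matrices stored as lists of rows.
\begin{verbatim}
# Illustrative helper computing the incidence matrices of the telescoping
# of B to a subsequence (n_k):  F'_k = F_{n_{k+1}-1} ... F_{n_k}.
# Matrices are lists of rows; F[n] is the |V_{n+1}| x |V_n| incidence matrix.

def mat_mult(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def telescope(F, levels):
    """F = [F_0, F_1, ...]; levels = [0 = n_0 < n_1 < n_2 < ...].
    Returns the list of incidence matrices of the telescoped diagram."""
    out = []
    for k in range(len(levels) - 1):
        prod = F[levels[k]]
        for n in range(levels[k] + 1, levels[k + 1]):
            prod = mat_mult(F[n], prod)   # multiply on the left by the next matrix
        out.append(prod)
    return out

# Example: |V_0|,...,|V_3| = 1, 2, 3, 2 with single edges everywhere,
# telescoped to the levels 0 and 3.
F = [[[1]] * 2, [[1, 1]] * 3, [[1, 1, 1]] * 2]
print(telescope(F, [0, 3]))   # one 2 x 1 matrix; each entry counts the
                              # 2*3 = 6 paths from v_0 to a level-3 vertex
\end{verbatim}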
\subsection{Vershik maps}
\begin{definition}\label{measvmap} Let $(B, \omega)$ be an ordered Bratteli diagram. The \emph{successor map}, $s_\omega$ is the map from $X_B\setminus X_\text{max}(\omega)$ to $X_B\setminus X_\text{min}(\omega)$ defined by $s_\omega(x_1,x_2,\ldots)=(x_1^0,\ldots,x_{k-1}^0,\overline {x_k},x_{k+1},x_{k+2},\ldots)$, where $k=\min\{n\geq 1 : x_n\mbox{ is not maximal}\}$, $\overline{x_k}$ is the successor of $x_k$ in $r^{-1}(r(x_k))$, and $(x_1^0,\ldots,x_{k-1}^0)$ is the minimal path in $E(v_0,s(\overline{x_k}))$. \end{definition}
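To make the definition concrete, here is a small Python sketch of the successor map acting on finite paths, in the special case of diagrams with a single edge between any pair of vertices in consecutive levels (so a path may be recorded by its vertex sequence, and an order assigns to each vertex an ordering of the previous level). The diagram sizes, the random order and the sample path below are arbitrary illustrative choices.
\begin{verbatim}
# Illustrative Python sketch of the successor map s_omega on finite paths,
# specialised to single-edge diagrams: a path is the vertex sequence
# [w_1,...,w_L] with w_i in V_i = {0,...,M[i]-1}, and order[i][v] lists the
# vertices of V_{i-1} (the sources of the edges into v) from minimal to
# maximal.  All concrete values below are arbitrary.
import random

def random_order(M):
    order = [None]                                   # level 0 carries no order
    for i in range(1, len(M)):
        order.append([random.sample(range(M[i - 1]), M[i - 1])
                      for _ in range(M[i])])
    return order

def successor(path, order):
    """Return s_omega(path), or None if the path is maximal."""
    w = list(path)
    source = lambda i: 0 if i == 1 else w[i - 2]     # source of the i-th edge
    for k in range(1, len(w) + 1):
        lst = order[k][w[k - 1]]                     # incoming edges of w_k
        pos = lst.index(source(k))
        if pos + 1 < len(lst):                       # the k-th edge is not maximal
            u = lst[pos + 1]                         # source of the successor edge
            for i in range(k - 1, 0, -1):            # rebuild the minimal path to u
                w[i - 1] = u
                u = order[i][u][0]                   # minimal incoming edge of u
            return w
    return None

random.seed(2)
M = [1, 3, 4, 5]                                     # |V_0|, ..., |V_3|
om = random_order(M)
print(successor([2, 1, 3], om))
\end{verbatim}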
\begin{definition}\label{VershikMap} Let $(B, \omega)$ be an ordered Bratteli diagram. We say that $\varphi = \varphi_\omega : X_B\rightarrow X_B$ is a {\it continuous Vershik map} if it satisfies the following conditions: \begin{enumerate}[i] \item $\varphi$ is a homeomorphism of the Cantor set $X_B$; \item $\varphi(X_{\max}(\omega))=X_{\min}(\omega)$; \item $\varphi(x)=s_\omega(x)$ for all $x\in X_B\setminus X_\text{max}(\omega)$. \end{enumerate} \end{definition}
If there is an $s_\omega$-invariant measure $\mu$ on $X_B$ such that $\mu(X_{\max}(\omega))=\mu(X_{\min}(\omega))=0$, then we may extend $s_\omega$ to a measure-preserving transformation $\phi_\omega$ of $X_B$. In this case, we call $\phi_\omega$ a \emph{measurable Vershik map} of $(X_B,\mu)$. Note that in our case, $X_{\max}(\omega)$ and $X_{\min}(\omega)$ have empty interiors, so that there is at most one continuous extension of the successor map to the whole space.
\begin{definition}\label{Good_and_bad} Let $B$ be a Bratteli diagram. We say that an ordering $\omega\in \mathcal O_{B}$ is \textit{perfect} if $\omega$ admits a continuous Vershik map $\varphi_{\omega}$ on $X_B$. If $\omega$ is not perfect, we call it \textit{imperfect}. \end{definition}
Let $\mathcal P_B\subset \mathcal O_B$ denote the set of perfect orders on $B$.
\section{The size of certain sets in $\mathcal O_B$.}\label{genericity_results}
A finite rank version of the following result was shown in \cite{bky} as a corollary of the Kolmogorov 0--1 law; the proof for non-finite rank diagrams is the same.
\begin{theorem}\label{generic_theorem} Let $B$ be a simple Bratteli diagram. Then there exists $j \in \mathbb N \cup \{\infty\}$ such that for $\mathbb P$-almost all orderings,
$|X_{\max}(\omega)|=j$. \end{theorem}
By symmetry (since the sets $X_{\text{max}}(\omega)$ and $X_{\text{min}}(\omega)$
have the same distribution), it follows that $|X_{\text{min}}(\omega)|=j$ almost surely as well.
\begin{example} It is not difficult, though contrived, to find a simple finite rank Bratteli diagram $B$ where almost all orderings are not perfect. Let $V_n=V= \{v_{1},v_{2}\}$ for $n\geq 1$, and define $m^{(n)}_{v,w}:=\frac{f^{(n)}_{v,w}}{\sum_{w}f^{(n)}_{v,w}}$: i.e. $m^{(n)}_{v,w}$ is the proportion of edges with range $v\in V_{n+1}$ that have source $w\in V_{n}$. Suppose that $\sum_{n=1}^{\infty} m_{v_{i},v_{j}}^{(n)}<\infty$ for $i\neq j$. Then for almost all orderings, there is some $K$ such that for $n>K$, the sources of the two maximal/minimal edges at level $n$ are distinct, i.e. $j=2$. The assertion follows from \cite[Theorem~5.4]{bky}. \end{example}
We point out that given an unordered Bratteli diagram $B$, if $B$ is equipped with two proper orderings $\omega$ and $\omega'$, then the resulting topological Vershik dynamical systems $s$ and $s'$ are strong orbit equivalent \cite{g_p_s}. Likewise, if $B$ equipped with a perfect ordering is telescoped, the topological Vershik systems are conjugate. The number of maximal paths that a random order on $B$ possesses is not invariant under telescoping. Take for example a Bratteli diagram $B$ where the odd levels consist of a unique vertex, the $2n$-th level has $n^2$ vertices, and all incidence matrices have all entries equal to $1$. Every vertex at an even level then has a single incoming edge, and each single-vertex odd level has a unique maximal incoming edge, so every order on $B$ has exactly one maximal path. On the other hand, telescoping $B$ to its even levels yields a diagram $B'$ whose incidence matrices again have all entries equal to $1$ and whose vertex sets grow quadratically, so by Theorem \ref{thm:dichot} a random order on $B'$ has uncountably many maximal paths.
A finite rank version of the following result is proved in Theorem 5.4 of \cite{bky}.
\begin{theorem}\label{random_order_imperfect} Suppose that $B$ is a completely connected Bratteli diagram of infinite rank such that $\mathbb P$-almost all orderings have $j$ maximal and $j$ minimal paths. If $j>1$, then $\mathbb P$-almost all orderings are imperfect. \end{theorem}
We motivate the proof of Theorem \ref{random_order_imperfect} with the following remarks. For an $\omega\in\Omega$, in order that $s_\omega$ be extended continuously to $X$, it is necessary that for each $n$, there exists an $N$ such that knowledge of any path, $x$, in $X\setminus X_\text{max}$ up to level $N$ determines $s_\omega(x)$ up to level $n$. Conversely to show $\omega$ is imperfect, one shows that there exists an $n$ such that for each $N$, the first $N$ terms of $x$ do not determine the first $n$ terms of $s(x)$. In fact, we show that for any $n$ and $N$, there exists a sequence of values, $K$, such that if one considers the collection, $\mathcal M_{K}$ of finite paths from the $K$th level to the root that are non-maximal in the $K$th edge, but maximal in all prior edges, the set of $\omega$ such that the first $N$ edges of $x$ determine the first $n$ edges of $s_\omega(x)$ is of measure 0. The following lemma provides a key combinatorial estimate that we use in the proof of Theorem \ref{random_order_imperfect}. We make use of the obvious correspondence between the collection of orderings on a set $S$ with $n$ elements, and the collection of bijections from $\{1,2,\ldots,n\}$ to $S$.
\begin{lemma}\label{lem:Bing} Let $S$ be a finite set of size $n$ and let $F$ and $G$ be maps from $S$ into a set $R$ with $G$ non-constant. Let the set $\Sigma$ of total orderings on $S$ be equipped with the uniform probability measure $\mathbb P$.
Then \begin{equation}\label{estimate} \mathbb P(O)\le \tfrac 1{n-1} \text{, where }O=\big\{\sigma \in \Sigma\colon F(\sigma(i))=G(\sigma(i+1)) \text{ for all $1\le i<n$}\big\}. \end{equation} \end{lemma}
\begin{proof}
Let $V$ be the union of the range of $F$ and the range of $G$. Form a directed multigraph $\mathcal G=(V,E)$ as follows. For each $s\in S$, define the ordered pair $e_s=(G(s),F(s))$, regarded as a directed edge with source $G(s)$ and range $F(s)$, and let $E$ consist of the edges $e_s$, $s\in S$ (counted with multiplicity). Now let $\sigma\in O$. Then for $1\leq i<n$, the range of $e_{\sigma(i)}$ equals the source of $e_{\sigma(i+1)}$. Therefore, $e_{\sigma(1)}e_{\sigma(2)}\dots e_{\sigma(n)}$ is an Eulerian trail in $\mathcal G$.
It is straightforward to check that the map from $O$ to Eulerian trails is bijective, and thus we need to bound the number of Eulerian trails in $\mathcal G$. To do this, note that each Eulerian trail induces an ordering on the out-edges of each vertex. Let $V=\{ v_1,\dots , v_k\}$, and let $n_i$ be the number of out-edges of $v_i$. Since $G$ is non-constant, there are at least two directed edges with different sources, and thus $n_i\leq n-1$ for $1\leq i\leq k$. The number of orderings of out-edges is $n_1!n_2!\dots n_k!$.
We distinguish two cases. If all vertices have out-degree equal to in-degree, then each Eulerian trail is in fact an Eulerian circuit. An Eulerian circuit corresponds to $n$ different Eulerian trails, distinguished by their starting edge. To count the number of circuits, we may fix a starting edge $e^*$, and then note that each circuit induces exactly one out-edge ordering if we start following the circuit at this edge. Note that in each such ordering, the edge $e^*$ must be the first in the ordering of the out-edges of its source. We may choose $e^*$ such that its source, say $v_1$, has maximum out-degree. Thus the number of compatible out-edge orderings is at most $(n_1-1)!n_2!\dots n_k!$ This expression is maximized, subject to the conditions $n_1+n_2+\dots +n_k\leq n$ and $n_i\leq n_1\leq n-1 $ for $1\leq i\leq k$, when $k=2$ and $n_1=n-1$, $n_2=1$. Therefore, there are at most $(n-2)!$ Eulerian circuits, so at most $n(n-2)!$ Eulerian trails and elements of $O$.
If not all vertices have out-degree equal to in-degree, then either no Eulerian trail exists and the lemma trivially holds, or exactly one vertex, say $v_1$, has out-degree greater than in-degree, and this vertex must be the starting vertex of every trail. In this case, an ordering of out-edges from all vertices precisely determines the trail. The number of out-edge orderings (and hence of Eulerian trails, and of elements of $O$) in this case is bounded above by $(n-1)!$.
Therefore, $O$ consists of at most $n(n-2)!$ orderings (out of $n!$), and the lemma follows. \end{proof}
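The bound of Lemma \ref{lem:Bing} can also be confirmed by brute force for small sets. The following Python sketch (illustrative; the size $n$, the target set and the random maps $F$ and $G$ are arbitrary) enumerates all orderings of $S$ and compares the proportion satisfying the chaining condition with $1/(n-1)$.
\begin{verbatim}
# Brute-force check (illustrative) of the bound P(O) <= 1/(n-1) from
# Lemma lem:Bing.  We identify S with {0,...,n-1}, draw random maps F and G
# into a small target set R (resampling G until it is non-constant), and
# enumerate all n! orderings sigma.
import itertools, random
from fractions import Fraction
from math import factorial

random.seed(3)
n, R = 6, ['a', 'b', 'c']            # |S| = 6 and target set R (arbitrary)
S = range(n)
F = [random.choice(R) for _ in S]
G = [random.choice(R) for _ in S]
while len(set(G)) == 1:              # the lemma requires G to be non-constant
    G = [random.choice(R) for _ in S]

count = sum(1 for sigma in itertools.permutations(S)
            if all(F[sigma[i]] == G[sigma[i + 1]] for i in range(n - 1)))
prob = Fraction(count, factorial(n))
print(prob, Fraction(1, n - 1), prob <= Fraction(1, n - 1))
\end{verbatim}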
\begin{proof}[Proof of Theorem \ref{random_order_imperfect}]
Note that if $|V_n|=1$ for infinitely many $n$, then any order on $B$ has exactly
one maximal and one minimal path. So we assume that $|V_n|\geq 2$ for all large $n$.
We first define some terminology. Recall that $s(e)$ and $r(e)$ denote the source and range of the edge $e$ respectively. Given an order $\omega \in \mathcal O_B$, we let $e_{\alpha,\omega}(v)$ be the $\alpha$th edge in $r^{-1}(v)$. If $v\in V_{N'}$ for some $N'>n$, we let $t_{n,\omega}(v)$ be the element of $V_n$ that the maximal incoming path to $v$ goes through. We call $t_{n,\omega}(v)$ the $n$-\emph{tribe} of $v$. Similarly, the $n$-\emph{clan} of $v$, $c_{n,\omega}(v)$, is the element of $V_n$ through which the minimal incoming path to $v$ passes. If $n$ is such that for any $N>n$ the elements of $V_N$ belong to at least two $n$-clans (or $n$-tribes), we shall say that $\omega$ has at least two infinite $n$-clans (or $n$-tribes).
Let $N>n$ and define $C_{n,N}$ to be the set of orders $\omega$ such that if the non-maximal paths $x$ and $y$ agree to level $N$, then their successors $s_\omega(x)$ and $s_\omega(y)$ agree to level $n$. Note that $\mathcal P_B \subset \bigcap_{n=1}^\infty \bigcup_{N=n}^{\infty}C_{n,N}$. In what follows we show that this last set has zero mass.
Fix $n$ and $N$ with $N>n$, and take any $N'>N$. Any order $\omega\in C_{n,N}$ must satisfy the following constraints: given any two non-maximal edges with range in $V_{N'}$ whose sources belong to the same $N$-tribe, the sources of their successors must belong to the same $n$-clan. In particular, if $v$ and $v'$ are vertices in $V_{N'}$ such that the sources of $e_{\alpha,\omega}(v)$ and $e_{\beta,\omega}(v')$ belong to the same $N$-tribe, where $\alpha$ and $\beta$ are both non-maximal, then the sources of $e_{\alpha+1,\omega}(v)$ and $e_{\beta+1,\omega}(v')$ must belong to the same $n$-clan. That is, there is a map $f\colon V_{N}\to V_n$ such that for any $v\in V_{N'}$ and any non-maximal $\alpha$, $f(t_{N,\omega}(s(e_{\alpha,\omega}(v))))=c_{n,\omega}(s(e_{\alpha+1,\omega}(v)))$. We think of this $f$ as mapping $N$-tribes to $n$-clans. This is illustrated in Figure \ref{fig:fcond}.
\begin{figure}
\caption{The maximal upward paths from
$s(e_{\alpha,\omega}(v))$ and $s(e_{\beta,\omega}(v'))$ (blue and bold) agree above level $N$, so the minimal upward path from $s(e_{\beta+1,\omega}(v'))$ (red, dashed, not bold) must hit the same vertex in the $n$th level as the minimal upward path from $s(e_{\alpha+1,\omega}(v))$ (red, solid, not bold).}
\label{fig:fcond}
\end{figure}
Motivated by the preceding remark, if $N'>N>n$, we define two subsets of $\mathcal O_B$. We let $D_{n,N'}$ be the set of orders such that $V_{N'}$ contains members of at least two $n$-clans, and $E_{n,N,N'}$ be the subset of orders in $D_{n,N'-1}$ which additionally satisfy the condition (*):
\begin{quote} There is a function $f\colon V_N\to V_n$ such that for all $v \in V_{N'}$, if $\alpha$ is a non-maximal edge entering $v$ then $f(t_{N,\omega}(s(e_{\alpha,\omega}(v))))=c_{n,\omega}(s(e_{\alpha+1,\omega}(v)))$. \end{quote}
We observe that $D_{n,N'}$ and $E_{n,N,N'}$
are $\mathcal F_{N'}$-measurable. We compute $\mathbb P(E_{n,N,N'}|\mathcal F_{N'-1})$. Since $D_{n,N'-1}$ is $\mathcal F_{N'-1}$ measurable, we
have $\mathbb P(E_{n,N,N'}|\mathcal F_{N'-1})(\omega)$ is 0 for $\omega\not\in D_{n,N'-1}$. For a fixed map $f\colon V_N\to V_n$, and a fixed vertex $v\in V_{N'}$, and $\omega\in D_{n,N'-1}$, the conditional probability given $\mathcal F_{N'-1}$ that (*)
with the specific function $f$ is satisfied at $v$ is at most $1/(|V_{N'-1}|-1)$. To see this, notice that for $\omega\in D_{n,N'-1}$, the $n$-clan is a non-constant function of $V_{N'-1}$, so that the hypothesis of Lemma \ref{lem:Bing} is satisfied, with $F=f\circ t_{N,\omega}\circ s$ and
$G=c_{n,\omega}\circ s$, both applied to the set of incoming edges to $v$. Also, since $B$ is completely connected, there are at least $|V_{N'-1}|$ edges coming into $v$.
Since these are independent events conditioned on $\mathcal F_{N'-1}$, the conditional probability that (*) is satisfied for the fixed function $f$ over all $v\in V_{N'}$ is at most
$1/(|V_{N'-1}|-1)^{|V_{N'}|}$. There are $|V_n|^{|V_N|}$ possible functions $f$ that might satisfy (*). Hence we obtain $$
\mathbb P(E_{n,N,N'})\le \frac{|V_n|^{|V_N|}\mathbb P(D_{n,N'-1})}
{(|V_{N'-1}|-1)^{|V_{N'}|}}, $$ so that for fixed $n$ and $N$ with $n<N$, one has $\liminf_{N'\to\infty}\mathbb P(E_{n,N,N'})=0$. By the hypothesis, for any $\epsilon>0$, there exists $m(\epsilon)$ such that $\mathbb P(R_n)>1-\epsilon$ for all $n>m(\epsilon)$, where $R_n=\{\omega\in\mathcal O_B\colon \text{$\omega$ has at least 2 infinite $n$-clans}\}$.
Since $C_{n,N}\cap R_n\subset E_{n,N,N'}$ for all $N'>N>n$, we conclude that $\mathbb P(C_{n,N} \cap R_n )=0$ for $N>n$, so that $\mathbb P(C_{n,N}) \leq \epsilon$ for $N>n>m(\epsilon)$. Now since $\mathcal P_B \subset \bigcap_{n=1}^\infty \bigcup_{N=n}^{\infty}C_{n,N}$ and $C_{n,N} \subset C_{n,N+1}$ for each $N\geq n$, we conclude that $\mathbb P( \mathcal P_B)=0$. \end{proof}
\section{Diagrams whose orders are almost always imperfect}\label{results}
\subsection{Bratteli diagrams and the Wright-Fisher model}\label{special_case}
Let $\mathbf1_{a\times b}$ denote the $a\times b$ matrix all of whose entries are 1. If $V_n$ is the $n$-th vertex set of $B$, define $M_n= |V_n|$. In this subsection, all Bratteli diagrams that we consider have incidence matrices $F_n=\mathbf 1_{M_{n+1}\times M_n}$ for each $n$.
We wish to give conditions on $(M_n)$ so that a $\mathbb P$-random order has infinitely many maximal paths. We first comment on the relation between our question and the Wright-Fisher model in population genetics. Given a subset $A\subset V_k$, and an ordering $\omega \in \mathcal O_B$, we let $S^\omega_{k,n}(A)$ for $n>k$ be the collection of vertices $v$ in $V_n$ such that the unique upward maximal path in the $\omega$ ordering through $v$ passes through $A$. Informally, if we consider the tree formed by all maximal edges, then $S^\omega_{k,n}(A)$ is the set of vertices in $V_n$ that have ``ancestors'' in $A$.
Let $Y_n=|S^\omega_{k,n}(A)|/M_n$. We observe that conditional on $Y_n$, each vertex in $V_{n+1}$ has probability $Y_n$ of belonging to
$S^\omega_{k,n+1}(A)$ (since each vertex in $V_{n+1}$ chooses an independent ordering of $V_n$ from the uniform distribution), so that the distribution of $|S^{\omega}_{k,n+1}(A)|$ conditional on $Y_n$ is binomial with parameters $M_{n+1}$ and $Y_n$. In particular, $(Y_n)$ is a martingale with respect to the natural filtration $(\mathcal F_n)$, where $\mathcal F_n$ is the $\sigma$-algebra generated by the $n$th level cylinder sets. Since $(Y_n)$ is a bounded martingale, it follows from the martingale convergence theorem that $(Y_n)$ almost surely converges to some limit $Y_\infty$ where $0\leq Y_\infty \leq 1$.
It turns out that the study of maximal paths is equivalent to the Wright-Fisher model in population genetics. Here one studies populations where there are disjoint generations; each population member inherits an allele (gene type) from a uniformly randomly chosen member of the previous generation. To compare the randomly ordered Bratteli diagram and Wright-Fisher models, the vertices in $V_n$ represent the $n$th generation and a vertex $v\in V_{n+1}$ ``inherits an allele'' from $w\in V_n$ if the edge from $w$ to $v$ is the maximal incoming edge to $v$. Since for each $v$, one of the $M_n!$ orderings of $V_n$ is chosen uniformly at random, the probability that any element of $V_n$ is the source of the maximal incoming edge to $v$ is $1/M_n$. Since the orderings are chosen independently, the ancestor of $v\in V_{n+1}$ is independent of the ancestor of any other $v'\in V_{n+1}$.
Analogous to $Y_n$, in the Wright-Fisher context, one studies the proportion of the population carrying each allele. If one declares the vertices in $A \subset V_k$ to have allele type \textbf{A} and the other vertices in that level to have allele type \textbf{a}, then there is a maximal path through $A$ if and only if, in the Wright-Fisher model, the allele \textbf{A} persists; that is, there exist individuals with type \textbf{A} alleles in all levels beyond the $k$th.
In a realization of the Wright-Fisher model, an allele type is said to \emph{fixate} if the proportion $Y_n$ of individuals with that allele type in the $n$th level converges to 0 or 1 as $n\to\infty$. An allele type is said to become \emph{extinct} if $Y_n=0$ for some finite level, or to \emph{dominate} if $Y_n=1$ for some finite level.
\begin{theorem} \cite[Theorem 3.2]{donnelly} \label{thm:Donnelly} Consider a Wright-Fisher model with population structure $(M_n)_{n\ge 0}$. Then domination of one of the alleles occurs almost surely if and only if $\sum_{n\ge 0}1/M_n=\infty$. \end{theorem}
Theorem \ref{thm:Donnelly} also holds if in the Wright-Fisher model, individuals can inherit one of $r$ alleles with $r\geq 2$. We exploit this below by letting each member of a chosen generation have a distinct allele type.
To indicate the flavour of the arguments, we give a proof of the simpler fact that if $\sum_{n\ge 0}1/M_n=\infty$ then each allele type fixates. To see this, let $Q_n=Y_n(1-Y_n)$.
Now we have \[\mathbb E(Q_n|\mathcal F_{n-1})
=Y_{n-1}-Y_{n-1}^2-(\mathbb E(Y_n^2|Y_{n-1})-\mathbb E(Y_n|Y_{n-1})^2)
=Q_{n-1}-\Var(Y_n|Y_{n-1}).\] Since $M_nY_n$ is binomial with parameters $M_n$ and $Y_{n-1}$, \[
\Var(Y_n|Y_{n-1})= (1/M_n^2)(M_nY_{n-1}(1-Y_{n-1}))=Q_{n-1}/M_n. \]
This gives $\mathbb E(Q_n|\mathcal F_{n-1})= (1-1/M_n)Q_{n-1}$. Now using the tower property of conditional expectations, we have
$\mathbb E Q_n=\mathbb E(Q_n|\mathcal F_0)=\prod_{j=1}^n(1-1/M_j)\mathbb E Q_0$, which converges to 0 since $\sum_{j\ge 1}1/M_j=\infty$. As noted above, the sequence $(Y_n(\omega))$ converges for almost all $\omega$ to $Y_\infty(\omega)$, say. It follows that $Q_n(\omega)$ converges pointwise to $Y_\infty(1-Y_\infty)$. By the bounded convergence theorem, we deduce that $\mathbb E\,[Y_\infty(1-Y_\infty)]=0$, so that $Y_\infty$ is equal to 0 or 1 almost everywhere.
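The computation above can also be checked empirically. The following Python sketch (illustrative; the population sizes, the initial frequency and the number of trials are arbitrary) simulates the chain in which $M_nY_n$ is binomial with parameters $M_n$ and $Y_{n-1}$, and compares the sample mean of $Q_n$ with the exact value $\prod_{j=1}^n(1-1/M_j)\,Q_0$.
\begin{verbatim}
# Monte Carlo check (illustrative) of E[Q_n] = prod_{j=1}^n (1 - 1/M_j) Q_0
# for the chain in which M_n Y_n is Binomial(M_n, Y_{n-1}).
# Population sizes, initial frequency and trial count are arbitrary.
import random

M = [20, 35, 50, 65, 80]           # M_1, ..., M_5
y0 = 0.5                           # initial allele frequency, so Q_0 = 0.25
trials = 20000

def run_chain():
    y = y0
    for m in M:
        k = sum(1 for _ in range(m) if random.random() < y)   # Binomial(m, y)
        y = k / m
    return y * (1 - y)             # Q_n for this realisation

random.seed(1)
empirical = sum(run_chain() for _ in range(trials)) / trials
exact = y0 * (1 - y0)
for m in M:
    exact *= 1 - 1 / m
print(empirical, exact)            # these two values should be close
\end{verbatim}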
We shall use Theorem \ref{thm:Donnelly} to prove the first part of the following theorem.
\begin{theorem}\label{thm:dichot} Consider a Bratteli diagram with $M_n\ge 1$ vertices in the $n$th level and whose incidence matrices are all of the form $\mathbf 1_{M_{n+1}\times M_n}$. We have the following dichotomy:
If $\sum_n 1/M_n=\infty$, then there is $\mathbb P$-almost surely a unique maximal path.
If $\sum_n 1/M_n<\infty$, then there are $\mathbb P$-almost surely uncountably many maximal paths. \end{theorem}
To prove the second part of this result we will need the following tool. Recall the definition of $S^\omega_{k,n}(A)$, from the beginning of Section \ref{special_case}.
\begin{proposition}\label{thm:mgslowdown} Consider a Wright-Fisher model with population structure $(M_n)_{n\ge 0}$. Suppose that $\sum_{n\ge 0}1/M_n<\infty$. Then for each $\epsilon>0$ and $\eta>0$, there exists an $l>0$ such that for any $\mathcal F_l$-measurable random subset, $A(\omega)$, of $V_l$ (that is an $\mathcal F_l$-measurable map $\Omega\to\mathcal P(V_l)$) and any $L>l$, $$
\mathbb P\left(\left|\frac{|A(\omega)|}{|V_l|} - \frac{|S^\omega_{l,L}(A(\omega))|}{|V_L|}\right| \ge \eta\right)<\epsilon. $$ \end{proposition}
\begin{proof} Let $l$ be chosen so that $\sum_{n=l+1}^\infty 1/M_n<4\epsilon\eta^2$ and let $L>l$.
For $n>l$, let $Y_n=\big|S^\omega_{l,n}\big(A(\omega)\big)\big|/|V_n|$. Recall that $(Y_n)$ is a martingale with respect to the filtration $(\mathcal F_n)$. Set $Z_n=(Y_n-Y_l)^2$ and notice that $(Z_n)_{n\ge l}$ is a bounded sub-martingale by the conditional expectation version of Jensen's inequality.
We have \begin{align*}
\mathbb E Z_L&=\mathbb E\big(\mathbb E((Y_L-Y_l)^2|\mathcal F_l)\big)\\
&=\mathbb E\big(\mathbb E(Y_L^2-Y_l^2|\mathcal F_l)\big)\\ &=\mathbb E(Y_L^2-Y_l^2)\\ &=\sum_{j=l}^{L-1} \mathbb E(Y_{j+1}^2-Y_{j}^2). \end{align*} A calculation shows that \begin{align*}
\mathbb E(Y_{j+1}^2-Y_{j}^2|\mathcal F_j)&=
\mathbb E(Y_{j+1}^2|\mathcal F_j)-\mathbb E(Y_{j+1}|\mathcal F_j)^2\\
&=\Var(Y_{j+1}|\mathcal F_j)\\ &=\frac{Y_j(1-Y_j)}{M_{j+1}}\\ \end{align*} so that $\mathbb E(Y_{j+1}^2-Y_j^2)\le 1/(4M_{j+1})$ and we obtain $\mathbb E Z_L\le \sum_{j=l+1}^L 1/(4M_j)$. In particular we have $\mathbb E Z_{L}\le \epsilon\eta^2$. The claim follows from Markov's inequality.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:dichot}] Suppose first that $\sum_n 1/M_n=\infty$. We show that for all $k$, with probability 1, there exists $n>k$ such that all maximal paths from each level-$n$ vertex to the root vertex pass through a single vertex at level $k$.
To do this, we consider the $M_k$ vertices at level $k$ to each have a distinct allele type. By Theorem \ref{thm:Donnelly}, there is for almost every $\omega$, a level $n$ such that by level $n$ one of the $M_k$ allele types has dominated all the others. This is a direct translation of the statement that we need, which is that every maximal finite path with range in $V_n$ passes through the same vertex in $V_k$.
Now we consider the case $\sum_n 1/M_n<\infty$. In this case, we identify a sequence $(n_k)$ of levels. We start with a single allele at $v_0$ and evolve it to level $n_1$, where it is split into two almost equal sub-alleles. This \emph{evolve-and-split} operation is repeated inductively, evolving the two alleles at level $n_1$ to level $n_2$ and splitting each one to give four sub-alleles, and so on, so that there are $2^k$ alleles in the generations of the Bratteli diagram between the $n_k$th and the $n_{k+1}$st. We show that, with very high probability, they all persist and maintain a roughly even share of the population. This splitting allows us to find, with probability arbitrarily close to one, a surjective map from the set of maximal paths onto all possible sequences of 0s and 1s.
Fix a small $\kappa>0$. Using Proposition \ref{thm:mgslowdown}, choose an increasing sequence of levels $(n_k)_{k\ge 1}$ with the property that $M_{n_k}>4^k$ and that for any random $\mathcal F_{n_k}$-measurable subset, $A(\omega)$ of $V_{n_k}$, one has with probability at least $1-\kappa 4^{-k}$, \begin{equation}\label{eq:slowdensitychange}
\left|\frac{|S_{n_k,n_{k+1}}^\omega(A(\omega))|}{|V_{n_{k+1}}|}-
\frac{|A(\omega)|}{|V_{n_k}|}\right| < 4^{-(k+1)}. \end{equation}
Let $n_0=0$ and let $A_\epsilon(\omega)=V_0$ (here $\epsilon$ stands for the empty string). We inductively define a collection of $\mathcal F_{n_k}$-measurable subsets of $V_{n_k}$ indexed by strings of 0's and 1's of length $k$. Suppose that for each string $s$ of length $k$, $A_s(\omega)$ is a random $\mathcal F_{n_k}$-measurable subset of $V_{n_k}$. Then we let $A_{s0}(\omega)$ be the first half of $S_{n_k,n_{k+1}}^\omega(A_s(\omega))$ and $A_{s1}(\omega)$ be the second half (by the first half of a subset $A$ of $V_n$, we mean the subset consisting of the
first $\lceil \frac {|A|}2\rceil$ elements of $A$ with respect to the fixed indexing of $V_n$ and the second half is the subset
consisting of the last $\lfloor \frac {|A|}2\rfloor$ elements of $A$). By the union bound, we see that with probability at least $1-(\sum_{k=1}^\infty 2^k \kappa 4^{-k})=1-\kappa$, the sets satisfy for each $s\in \{0,1\}^k$, \begin{equation}\label{eq:nonwand2}
\left|\frac{|S^\omega_{n_k,n_{k+1}}(A_{s}(\omega))|}{|V_{n_{k+1}}|}-
\frac{|A_s(\omega)|}{|V_{n_k}|}\right|<4^{-(k+1)}. \end{equation} In particular, this suffices to ensure that the sets $A_s(\omega)$ are non-empty for each finite string of 0's and 1's. Now we define a map from the collection of maximal paths to $\{0,1\}^{\mathbb N}$: for each $k$, the $A_s(\omega)$ for $s\in\{0,1\}^k$ partition $V_{n_k}$. Given $x\in X_\text{max}(\omega)$, there is a unique sequence $\iota(x)= i_1i_2\ldots \in \{0,1\}^{\mathbb N}$ such that, for each $k$, the range of the $n_k$th edge satisfies $r(x_{n_k})\in A_{i_1\ldots i_k}(\omega)$. The map $\iota\colon X_\text{max}(\omega)\to\{0,1\}^\mathbb N$ is then continuous. For any $\omega$ satisfying \eqref{eq:nonwand2}, the map $\iota\colon X_\text{max}(\omega)\to \{0,1\}^\mathbb N$ is surjective. Hence for each $\kappa>0$, we have exhibited a measurable subset of $\Omega$ with measure at least $1-\kappa$ for which there are uncountably many maximal paths. By completeness of the measure, it follows that almost every $\omega$ has uncountably many maximal paths.
\end{proof}
\subsection{Other Bratteli diagrams whose orders support many maximal paths}
Next we partially extend the results in Section \ref{special_case} to a larger family of Bratteli diagrams.
\begin{definition} \label{equal_path_number} Let $B$ be a Bratteli diagram. \begin{itemize} \item We say that $B$ is {\em superquadratic} if there exists $\delta >0$ so that $M_n \geq n^{2+\delta}$ for all large $n$. \item Let $B$ be superquadratic with constant $\delta$. We say that $B$ is {\em exponentially bounded} if
$\sum_{n=1}^{\infty} |V_{n+1}|\exp(- |V_n|/n^{2 + 2\delta/3}) $ converges.
\end{itemize} \end{definition}
We remark that the condition that $B$ is exponentially bounded is very mild.
In Theorem \ref{general_case} below we show that Bratteli diagrams satisfying these conditions, together with the impartiality condition introduced below, almost surely have infinitely many maximal paths. Given $v\in V_{n+1}$, define \[ V_{n}^{v,i}:=\{ w \in V_n: f_{v,w}^{(n)}= i\}\, , \] so that if the incidence matrix entries for $B$ are all positive and bounded above by $r$, then $V_n = \bigcup_{i=1}^r V_n^{v,i} \mbox{ for each } v \in V_{n+1}$.
\begin{definition} Let $B$ be a Bratteli diagram with positive incidence matrices. We say that $B$ is {\em impartial} if there exists an integer $r$ so that all of $B$'s incidence matrix entries are bounded above by $r$, and if there exists some $\alpha \in (0,1)$ such that for any $n$, any $i\in \{1, \ldots , r\} $ and any
$v\in V_{n+1}$, $|V_n^{v,i}|\geq \alpha |V_n|$. \end{definition}
In other words, $B$ is impartial if for any row of any incidence matrix, no entry occurs disproportionately rarely or often with respect to the others.
For example, fixing $r$, if we let $|V_n|=r(n+1)$, and let each row of $F_n$ consist of any vector with entries equidistributed from $\{1,\ldots, r\}$, the resulting Bratteli diagram is impartial. Note that our diagrams in Theorem \ref{thm:dichot} are impartial. However the vertex sets can grow as fast as we want, so the diagrams are not necessarily exponentially bounded. We remark also that if a Bratteli diagram is impartial, then it is completely connected, which means that we can apply Theorem \ref{random_order_imperfect} if $j>1$.
\begin{definition} Suppose that $B$ is a Bratteli diagram each of whose incidence matrices has entries with a maximum value of $r$. We say that $A\subset V_n$ is {\em $(\beta,\epsilon)$-equitable} for $B$ if for each $v\in V_{n+1}$ and for each $i=1, \ldots , r$, \[
\left| \frac{|V_n^{v,i}\cap A|}{|V_n^{v,i}|} - \beta \right|\leq \epsilon. \]
In the case $\beta=\frac 12$, we shall speak simply of $\epsilon$-equitability. \end{definition}
Given $v\in V\backslash V_0$ and an order $\omega\in \mathcal O_B$, recall that we use $\widetilde e_v = \widetilde e_v(\omega)$ to denote the maximal edge with range $v$.
\begin{lemma}\label{mean_roughly_beta} Suppose that $B$ is impartial. Let $A\subset V_n$ be $(\beta,\epsilon)$-equitable, and $v\in V_{n+1}$. Let the random variable $X_v$ be defined as \begin{equation*} X_v(\omega)
= \left\{ \begin{array}{rl} 1 & \mbox{if $ s(\widetilde e_v)\in A$,
} \\ 0 & \mbox{ otherwise.}
\end{array}
\right. \end{equation*} Then $\beta- \epsilon \leq \mathbb E (X_v) \leq \beta+ \epsilon $. \end{lemma}
\begin{proof} We have \begin{align*}
\mathbb E (X_v) &= \frac {\sum_{j=1}^{r} j|A\cap V_n^{v,j}|}{\sum_{j=1}^r j|V_n^{v,j}|}\\ &\leq
\frac{\sum_{j=1}^{r} j| V_n^{v,j}|(\beta+\epsilon)}
{\sum_{j=1}^r j|V_n^{v,j}|} = \beta+\epsilon, \end{align*} the inequality following since $A$ is $(\beta,\epsilon)$-equitable. Similarly, $\mathbb E (X_v) \geq \beta-\epsilon$. \end{proof}
\begin{lemma}\label{lem:Hoeffding-app} Let $B$ be an impartial Bratteli diagram with impartiality constant $\alpha$ and the property that each entry of each incidence matrix is between 1 and $r$. Let $\beta$, $\delta$ and $\epsilon$ be positive,
let $(p_v)_{v\in V_N}$ satisfy $|p_v-\beta|<\delta$ for each $v\in V_N$ and let $A\subset V_N$ be a randomly chosen subset, where each $v$ is included with probability $p_v$ independently of the inclusion of all other vertices. Then the probability that $A$ fails to be
$(\beta,\delta+\epsilon)$-equitable is at most $2r|V_{N+1}|e^{-2\alpha|V_N|\epsilon^2}$. \end{lemma}
\begin{proof} For $v\in V_N$, let $Z_v=\mathbf 1_{v\in A}$, so that the $(Z_v)_{v\in V_N}$ are independent Bernoulli random variables, where $Z_v$ takes the value 1 with probability $p_v$.
For $u\in V_{N+1}$ and $1\leq i \leq r$, define \begin{equation}\label{definition_Yw,i}
Y_{u,i} := \frac{1}{|V_{N}^{u,i}|}\sum_{v\in V_{N}^{u,i} } Z_{v}
=\frac{ | \{ v\in V_{N}^{u,i}: v\in A \}| }{|V_{N}^{u,i}|}
=\frac{ |A\cap V_{N}^{u,i}| }{|V_{N}^{u,i}|} \, . \end{equation}
Using Hoeffding's inequality \cite{hoeffding}, since $\beta-\delta\leq \mathbb E (Y_{u,i} ) \leq \beta+\delta$ we have that \begin{align*}
\mathbb P (\{|Y_{u,i}- \beta|\geq (\delta + \epsilon) \})& \leq
\mathbb P (\{ | Y_{u,i}- \mathbb E (Y_{u,i}) |\geq \epsilon \}) \\ & \leq
2 e^{-2|V_{N}^{u,i}|\epsilon^2} \leq 2 e^{-2\alpha|V_{N}|\epsilon^2}. \end{align*} This implies that \begin{equation}\label{distributed_a} \mathbb P \left(
\bigcup_{i=1}^{r}\bigcup_{u\in V_{N+1}}\{|Y_{u,i}- \beta|\geq \delta+\epsilon \}
\right) \leq 2r|V_{N+1}| e^{-2|V_{N}|\alpha \epsilon^2}. \end{equation} Since $A$ fails to be $(\beta,\delta+\epsilon)$-equitable only on the event in \eqref{distributed_a}, the claim follows.
\end{proof}
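For a quick numerical illustration of Hoeffding's bound as used above (not needed for the proof), the following Python sketch samples the mean of $n$ independent Bernoulli variables, playing the role of $Y_{u,i}$, and compares the empirical tail probability with $2e^{-2n\epsilon^2}$; the parameters are hypothetical.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters: n plays the role of |V_N^{u,i}|, p the role of p_v.
n, p, eps, trials = 200, 0.5, 0.1, 100000

# Empirical means of n independent Bernoulli(p) variables (the Y_{u,i} of the proof).
means = rng.binomial(n, p, size=trials) / n

empirical = np.mean(np.abs(means - p) >= eps)
hoeffding = 2 * np.exp(-2 * n * eps**2)
print(empirical, hoeffding)   # empirical tail frequency vs. Hoeffding bound
\end{verbatim}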
\begin{lemma}\label{distributed} Suppose that $B$ is impartial, superquadratic and exponentially bounded. Then for any small $\epsilon>0$ there exist $n$ and $A\subset V_n$ such that $A$ is $(\frac12,\epsilon)$-equitable. \end{lemma}
\begin{proof} Let $r$ and $\alpha$ be as in the statement of Lemma \ref{lem:Hoeffding-app} and apply that lemma with $p_v=\frac 12$ for each $v\in V_n$. By the superquadratic and exponentially bounded properties, one has
$2r|V_{n+1}|e^{-2\alpha|V_n|\epsilon^2}<1$ for large $n$. Since the probability that a randomly chosen set is $(\frac12,\epsilon)$-equitable is positive, the existence of such a set is guaranteed. \end{proof}
\begin{theorem}\label{general_case} Suppose that $B$ is a Bratteli diagram that is impartial, superquadratic and exponentially bounded. Then $\mathbb P$-almost all orders on $B$ have infinitely many maximal paths. \end{theorem}
We note that in the special case where $B$ is defined as in Section \ref{special_case}, the following proof can be simplified and does not require the condition that $B$ is exponentially bounded. Instead of beginning our procedure with an equitable set, which is what we do below, we can start with any set $A_N\subset V_N$ whose size relative to $V_N$ is around 1/2.
\begin{proof} Since $B$ is superquadratic, we find a sequence $(\epsilon_j)$ such that \begin{eqnarray}\label{assumptions_on_epsilon} &&\sum_{j=1}^{\infty}\epsilon_j <\infty \text{ and } \label{epsilon1} \\ \label{assumptions_on_epsilon_2} &&M_j\epsilon_{j}^{2} \geq j^{\gamma} \text{ for some }\gamma >0\text{ and large enough }j. \label{epsilon2} \end{eqnarray}
Fix $N$ so that (\ref{epsilon2}) holds for all $j\geq N$, and let $N$ be large enough so that $ \sum_{j=N}^{\infty}\epsilon_j<\frac{1}{2} $. Moreover, we can also choose our sequence $(\epsilon_j)$ and our $N$ large enough so that there exists a set $A_N\subset V_N$ which is $\epsilon_N$-equitable: by Lemma \ref{distributed}, this can be done. For all $k\geq 0$, define also \[ \delta_{N+k}=\sum_{i=0}^{k} \epsilon_{N+i}. \] Finally, let $r$ be so that all entries of all $F_n$ are bounded above by $r$.
Define recursively, for all integers $k>0$ and all $v\in V_{N+k}$, the Bernoulli random variables $\{X_v: \mathcal O_B \rightarrow \{0,1\}: v\in V_{N+k} \}$, and the random sets $\{ A_{N+k}: \mathcal O_B \rightarrow 2^{V_{N+k}}: k\geq 1\}$, where $X_v(\omega)=1$ if $s(\widetilde e_v) \in A_{N+k-1}$, and 0 otherwise, and $A_{N+k}=\{ v\in V_{N+k}\,:\, X_v=1\}$.
We shall show that for a large set of $\omega$, each set $A_{N+k}$ is $\delta_{N+k}$-equitable.
This implies that the size of $A_{N+k}$ is not far from $\frac12 |V_{N+k}|$. To this end, for $k\geq 1$, define the event \[ D_{N+k} := \{ \omega : A_{N+k}\text{ is }\delta_{N+k}-\text{equitable} \}. \]
We claim that $$
\mathbb P(D_{N+k+1}|D_{N+k})\ge 1-2r|V_{N+k+2}|e^{-2\alpha|V_{N+k+1}|\epsilon_{N+k+1}^2}. $$ To see this, notice that if $\omega\in D_{N+k}$, then by Lemma \ref{mean_roughly_beta}, given $\mathcal F_{N+k}$, each vertex in $V_{N+k+1}$ is independently present in $A_{N+k+1}$ with probability in the range $[\frac12-\delta_{N+k},\frac12+\delta_{N+k}]$. Hence by Lemma
\ref{lem:Hoeffding-app}, $A_{N+k+1}$ is $\delta_{N+k+1}$-equitable with probability at least $1-2r|V_{N+k+2}|e^{-2\alpha|V_{N+k+1}|\epsilon_{N+k+1}^2}$.
Next we show that our work implies that a random order has at least two maximal paths. Let $c =\frac12- \sum_{j=N}^{\infty}\epsilon_j$. Notice that if $\emptyset\neq A_n\neq V_n$ for all $n>N$, then there are at least two maximal paths. By our choice of $N$ we have $c >0$, and thus \begin{align*}
\mathbb P(\{\omega: |X_\text{max}(\omega)|\geq 2\})
&\geq \mathbb P\left(\bigcap_{k=1}^{\infty}\left\{\omega: c \leq
\frac{|A_{N+k}|}{|V_{N+k}|}\leq 1 - c \,\, \right\}\right)\\
&\geq \mathbb P\left(\bigcap_{k=1}^{\infty}D_{N+k}\right)\\
& = \lim_{n\rightarrow \infty} \mathbb P(D_{N+1})
\prod_{k=1}^n\mathbb P(D_{N+k+1}\vert D_{N+k})\\
& \geq \lim_{n\rightarrow \infty} \mathbb P(D_{N+1})
\prod_{k=1}^n (1- 2r |V_{N+k+2}|e^{-2|V_{N+k+1}|\alpha\epsilon_{N+k+1}^2 }), \end{align*} and the condition that $B$ is superquadratic and exponentially bounded ensures that this last term converges to a non-zero value.
We can repeat this argument to show that for any natural $k$, a random order has at least $k$ maximal paths. We remark also that the techniques of Section \ref{special_case} could be generalized to show that a random order would have uncountably many maximal paths.
We now apply Theorem \ref{random_order_imperfect}. \end{proof}
\subsection*{Acknowledgements} We thank Richard Nowakowski and Bing Zhou for helpful discussions around Lemma \ref{lem:Bing}.
\end{document}
\begin{document}
\title{Application of planar Randers geodesics \\ with river-type perturbation in search models} \author{Piotr Kopacz} \date{} \maketitle \begin{abstract} \noindent We consider remodeling planar search patterns in the presence of river-type perturbations represented by a weak vector field, based on the time-optimal paths obtained as Finslerian solutions to the Zermelo navigation problem via the Randers metric. \end{abstract}
\textbf{M.S.C. 2010}: 53C22, 53C60, 53C21, 53B20, 68U35, 68T20.
\noindent \textbf{Keywords}: Zermelo navigation, Randers space, time-optimal path, perturbation, search pattern.
\section{Introduction}
\subsection{Motivation}
In geometric geodesy there are two standard problems. The first (direct) geodetic problem is to determine the final point $A_2 \in M$, represented by its coordinates, when the starting point $A_1\in M$ is given together with the distance and the initial angle, measured from a fixed direction, of the geodesic of the modeling surface $M$ running from $A_1$ towards $A_2$. The second (inverse) geodetic problem is to determine the length of the geodesic between the endpoints and its angles at them, when both points are given. In a more advanced setting, which covers the area of our interest, these problems are applied in navigation, in complex route planning and monitoring processes under real perturbations resulting in leeway and drift. The acting perturbation can modify the preplanned trajectory and change the endpoint of the former problem in each part of the passage. Alternatively, a correction is applied to keep the ship on the set track over ground. Thus, to determine the correction as a function of position and time in general, we need the notions considered in the latter problem. In this paper we aim to investigate special solutions to the generalized geodetic problems which additionally provide time-optimality under an acting perturbation modeled by a vector field. We achieve the optimality by making use of the Randers metric. Hence, the modern approach to Zermelo's problem (1931) \cite{zermelo} in Finsler geometry is considered (cf. \cite{colleen_shen}). Due to potential real applications we pay more attention to low dimensions in our research (cf. \cite{kopi}).
\subsection{Preliminaries and posing the problem}
Let a pair $ (M,h) $ be a Riemannian manifold where $h = h_{ij}dx^i\otimes dx^j$ is a Riemannian metric and denote the corresponding norm-squared of tangent vectors $\mathbf{y} \in T_x M$ by
$\left|\mathbf{y} \right|^2 = h_{ij}y^iy^j = h(\mathbf{y}, \mathbf{y}).$
The unit tangent sphere in each $T_x M$ consists of all tangent vectors $u$ such that $\left|u\right| = 1$. Equivalently, $h(u, u) = 1$. Then we introduce a vector field $W$ such that $\left|W \right| < 1$, thought of as the spatial velocity vector of a weak wind on the Riemannian sea $(M, h)$. Before $W$ sets in, a passage from the base to the tip of any $u$ would take one unit of time. The effect of the wind is to cause the journey to veer off course, or merely off target if $u$ is collinear with $W$. Instead of $u$ we traverse the resultant $v = u + W$ within the same one unit of time. Thus, in the presence of the wind the Riemannian metric $h$ no longer gives the travel time along vectors. This motivates the introduction of a function $F$ on the tangent bundle $T M$, in order to keep track of the travel time needed to traverse tangent vectors $\mathbf{y}$ under perturbation. For all those resultants $v = u + W$, we have $F(v) = 1$. Within each tangent space $T_x M$, the unit sphere of $F$ is the $W$-translate of the unit sphere of $h$. Since this $W$-translate is no longer centrally symmetric, $F$ cannot possibly be Riemannian \cite{colleen_shen}.
In general, the problem is to find the trajectory followed by the ship and the corresponding steering angle such that the ship completes her journey in the least time. In the paper we focus on a 2D river-type perturbation, i.e. one with one component zeroed. Working in the planar coordinate system, we let the vertical component be that one. Now we consider a ship navigating on a sea with a current and assume that the current runs parallel to the $x$-axis. The state of a ship is represented by its position $(x,y)$ and heading angle $\varphi$. Thus, the maximal state space is $\mathbb{R}^2 \times S^1$ if the convexity condition is fulfilled in the whole plane. Otherwise we have to restrict our domain for the position coordinates to a subset of $\mathbb{R}^2$. The current may have a rotational effect as well as a translational effect on the ship, and this effect depends on the ship's heading angle. Note that in the general Zermelo navigation problem we consider a time-dependent vector field which is given in the most general form \begin{equation} \label{general_field} W = \frac{\partial}{\partial t} + W^i\left(t,x^j\right)\frac{\partial}{\partial x^i}. \end{equation} \noindent The variational approach to the solution to the Zermelo navigation problem in time-space under the time-dependent perturbation (\ref{general_field}) via the Euler-Lagrange equations is presented in \cite{palacek} and followed in \cite{kopi}. In this paper we begin with a stationary current, namely the perturbation does not depend on time. This is a special case of the Zermelo navigation problem researched by C. Caratheodory by means of the Hamiltonian formalism in the calculus of variations (cf. \S 276 in \cite{caratheodory}). We aim to find the deviation of the calm-sea Riemannian geodesics under the action of a distributed wind modeled by a vector field on the manifold $M$, with the application of a special class of Finsler metrics, namely Randers metrics. Consequently, this also enables us to analyze the geometric properties of the trajectories as Finslerian solutions to Zermelo's problem depending on the type of perturbation and the initial or boundary conditions.
\section{Randers metric} From \cite{colleen_shen} we know that in Riemann-Finsler geometry Randers metrics may be identified with the solutions to the navigation problem on Riemannian manifolds. This navigation structure establishes a bijection between Randers spaces and pairs $(h,W)$ of Riemannian metrics $h$ and vector fields $W$ on the manifold $M$.
\subsection{The general form of the metric}
The resulting Randers metric is composed of the new Riemannian metric and $1$-form, and is given by \begin{equation} \label{ran}
F(\mathbf{y}) = \frac{ \sqrt{ \left[ h(W,\mathbf{y}) \right]^2 + |\mathbf{y}|^2 \lambda} } {\lambda} - \frac{ h(W,\mathbf{y})} {\lambda}. \end{equation}
\noindent The resulting Randers metric can also be presented in the form $F = \alpha + \beta$ as the sum of two components. Explicitly, \begin{itemize} \item the first term is the norm of $\mathbf{y}$ with respect to a new Riemannian metric \begin{equation} \alpha(x,\mathbf{y}) = \sqrt{a_{ij}(x)y^iy^j}, \quad \text{ where } a_{ij} = \frac{ h_{ij}}{\lambda} + \frac{W_i}{\lambda}\frac{W_j}{\lambda}, \label{Alpha} \end{equation}
\item the second term is the value on $\mathbf{y}$ of a differential 1-form \begin{equation} \beta(x,\mathbf{y})=b_i(x)y^i,\quad \text{where}\quad b_i = \frac{-W_i}{\lambda}. \label{Beta} \end{equation} \end{itemize} where $W_i=h_{ij}W^j$ and $\lambda=1-W^iW_i$.
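For a numerical cross-check of the formulas $\eqref{ran}$, $\eqref{Alpha}$ and $\eqref{Beta}$ one may evaluate them directly; the following minimal Python sketch (added for illustration, with a Euclidean background $h=\delta_{ij}$ and a merely sample constant wind $W$) verifies that $F=\alpha+\beta$ and that $F(u+W)=1$ for an $h$-unit vector $u$.
\begin{verbatim}
import numpy as np

def randers(W):
    """F, alpha, beta from the navigation data (h = delta_ij, W) with |W| < 1."""
    W = np.asarray(W, dtype=float)
    lam = 1.0 - W @ W                                      # lambda = 1 - |W|^2

    def F(y):
        y = np.asarray(y, dtype=float)
        return (np.sqrt((W @ y) ** 2 + (y @ y) * lam) - W @ y) / lam

    a = np.eye(len(W)) / lam + np.outer(W, W) / lam**2     # a_ij
    b = -W / lam                                           # b_i
    alpha = lambda y: np.sqrt(y @ a @ y)
    beta = lambda y: b @ np.asarray(y, dtype=float)
    return F, alpha, beta

W = np.array([0.3, -0.2])                 # sample weak wind
F, alpha, beta = randers(W)
y = np.array([1.0, 2.0])
print(F(y), alpha(y) + beta(y))           # F = alpha + beta
u = np.array([np.cos(0.7), np.sin(0.7)])  # h-unit vector
print(F(u + W))                           # the resultant v = u + W satisfies F(v) = 1
\end{verbatim}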
As we aim to combine the theoretical research with real applications, we have to make a remark here in reference to the notion of perturbation. In marine navigation there are two main types of real perturbation considered, namely wind and current (stream). Both of them affect the ship's trajectories and the corresponding angles, i.e. heading, course through the water and course over ground. In the literature on Finsler geometry the notions of wind and current are treated interchangeably and equivalently by different authors. We will also use both notions of the acting perturbation interchangeably, although the notion of current fits the river-type perturbation which we consider in this paper better. However, we allow ourselves to call the acting vector field $W$ a wind in order to be in accordance with the contributions to the navigation problem in the Finslerian approach. To be more precise, note that the notion of current, or equivalently wind, as used by different authors, means in fact the total drift of a vessel, i.e. the resulting perturbation, and so it ought to be understood this way in other papers implicitly if not otherwise stipulated.
\subsection{Randers metric with Euclidean background and river-type perturbation}
Let us assume that the initial Riemannian metric $h_{ij}$ to be perturbed is the standard Euclidean metric $\delta_{ij}$ on $\mathbb{R}^2$, as it can be applied in local modeling referring to Zermelo's problem. The Randers metric $\eqref{ran}$ which comes from the navigation data $(h, W)$ including the initial Euclidean metric is expressed as follows \begin{equation}
F(x,\mathbf{y}) = \frac{ \sqrt{( \delta_{ij}W^iy^j)^2 + |\mathbf{y}|^2 (1-|W|^2)} } {1-|W|^2} - \frac{ \delta_{ij}W^iy^j} {1-|W|^2}. \end{equation} Since our focus is on dimension two, we denote the position coordinates $(x^1, x^2)$ by $(x, y)$, and expand arbitrary tangent vectors $y^1\frac{\partial}{\partial x^1}+y^2\frac{\partial}{\partial x^2}$ at $(x^1, x^2)$ as $(x,y;u,v)$ or $u\frac{\partial}{\partial x} + v\frac{\partial}{\partial y}$. Thus, adopting the notations in two dimensional case yields
\begin{equation}
\label{W1W2}
F(x,y; u,v) = \frac{\sqrt{u^2+v^2-(uW^2-vW^1)^2}-W^1u-W^2v}{1-|W|^2}, \end{equation} \noindent which defines the metric for an arbitrary vector field $W=(W^1, W^2)$. Let us consider the perturbation $W_I$ represented by the river-type vector field $W$ given in the following form \begin{alignat}{1} \label{pole_river} (W)& \left\{ \begin{array}{l l}
W^1 = W^1(x,y) \\
W^2 = W^2(x,y)
\end{array} \right.
\xrightarrow{}
(W_I)\left\{ \begin{array}{l l}
W^1(x,y):= f(y) \\
W^2(x,y) := 0
\end{array} \right. \end{alignat}
which determines the scenario for the navigation problem. To apply the Finslerian version of the Zermelo navigation we have to take into consideration the assumptions which limit the set of solutions. Namely, only a weak perturbation can be applied, i.e. $|W|<1$, and the vessel's own speed is constant, $|(u, v)|=1$. The former assumption is due to the requirement of the strong convexity of the Finsler metric. This is implied by Theorem \ref{THM} which we shall apply next. Let us also make two remarks on the Finslerian research on the problem under stronger perturbation. \begin{remark}
For stronger perturbation, i.e. $|W|=1$, the Kropina metric has been applied to the Zermelo navigation problem in Finsler geometry recently (R. Yoshikawa, S. Sabau in \cite{kropina}). \end{remark} \begin{remark}
A strong wind, i.e. $|W|>1$, in the Zermelo navigation has been considered very recently (e.g. by E. Caponio, M.A. Javaloyes, M. Sanchez in \cite{sanchez}) in reference to Lorentzian metrics with a general relativity background. \end{remark}
\section{Perturbing by shear vector field}
Analyses involving Randers spaces are generally difficult and finding solutions to the geodesic equations is not straightforward \cite{brody, chern_shen}. That is why, by solving the problem under a particular perturbation, i.e. the shear vector field, we show the respective steps of the solution, obtaining at the end the flows of Randers geodesics representing the time-optimal paths. Next, we present graphically the final solutions which are obtained by following analogous steps for other river-type perturbations - the Gaussian function and the quartic curve perturbation.
The general solution refers to the system of geodesic equations $\eqref{geo1}$ applied to the Randers metric. We create some computational programmes by means of Wolfram Mathematica ver. 10.2 to generate the graphs and evaluate some numerical computations if the complete symbolic ones cannot be obtained. Using our software in the case of Finslerian computations can strengthen our intuition as well as check and simplify some of the obtained formulae and expressions. \subsection{The resulting metric} We perturb $h$ by the shear vector field (Figure $\ref{pole_fig}$) which is also applied to discrete problems in optimal control theory. In the example the current of the river increases as a linear function of $y$, reaching its minimal absolute value in the midstream. E. Zermelo described the perturbation as \textit{``the simplest nontrivial example of our theory''} in his variational paper, where the navigation problem was initially formulated. Let
W = (y, 0) \qquad \text{with} \qquad |W| = |y|<1. \label{pole} \end{equation} \begin{figure}
\caption{Shear vector field and its density plot under convexity restriction, $|y|<1$.}
\label{pole_fig}
\end{figure}
\noindent The condition $|y|<1$ ensures that $F$ is strongly convex and it is a necessary condition if we want to study the problem via the construction of the Randers metric. Let us observe that E. Zermelo \cite{zermelo}, followed by C. Caratheodory \cite{caratheodory}, considered the problem on the open Euclidean sea without such a restriction, which is a difference between both approaches worth mentioning. As a consequence, we are formally forced to omit the solutions under a stronger current ($ |W| \geq 1$) in the Finslerian approach, even in the low-dimensional Euclidean landscape. \noindent We obtain the resulting Randers metric for the shear perturbation \begin{equation} F(x,y; u, v) = \frac{ \sqrt{(1-y^2) (u^2+v^2) +(uy)^2} } {1-y^2} - \frac{uy} {1-y^2}. \end{equation} The metric can also be presented in the form $F = \alpha + \beta$ as the sum of the new Riemannian metric and a 1-form, i.e. \begin{itemize} \item the new Riemannian metric \begin{equation}
\alpha(x,\mathbf{y}) = \frac{\sqrt {(\delta_{ij}\lambda+W^iW^j)y^iy^j}}{\lambda} =\frac{\sqrt {(\lambda+(W^1)^2)(y^1)^2+(\lambda+(W^2)^2)(y^2)^2+2W^1W^2y^1y^2}}{\lambda},
\end{equation} \noindent and adopting 2D notations yields \begin{equation}
\alpha(x,y; u, v)= \frac{\sqrt{u^2+v^2-(uW^2-vW^1)^2}}{1-|W|^2}=\frac{ \sqrt{ (1-y^2)v^2 +u^2 } } {1- y^2}, \label{alpha} \end{equation} \item 1-form \begin{equation}
\beta(x,y; u, v)=\frac{-1} {1-|W|^2}(W^1u+W^2v) = \frac{-yu}{ 1 - y^2 }. \end{equation} \end{itemize}
\noindent Hence, the Randers metric in the problem is expressed in the final form by \begin{equation} \label{moja} F(x,y; u, v) = \alpha(x,y; u, v) + \beta(x,y; u, v) =\frac{ \sqrt{u^2 + v^2 -( yv)^2}-yu } {1- y^2}. \end{equation} Under the influence of $W$, the most efficient navigational paths are no longer the geodesics of the Riemannian metric $h:=(\delta_{ij})$. Instead, they are the geodesics of the Finsler metric $F$. Z. Shen \cite{chern_shen} showed that the same phenomenon as in $\mathbb{R}^2$ holds for arbitrary Riemannian backgrounds in all dimensions. The following theorem \cite{colleen_shen} lets us formally combine the Randers geodesics with the optimality condition. \begin{theorem}{\text{[D. Bao, C. Robles, Z. Shen, 2004]}} \label{THM} A strongly convex Finsler metric $F$ is of Randers type if and only if it solves the Zermelo navigation problem on some Riemannian manifold $(M, h)$, under the influence of a wind $W$ with $h(W,W)<1$. Also, $F$ is Riemannian if and only if $W=0$. \end{theorem} \noindent Recall that Randers metrics may be identified with solutions to the navigation problem on Riemannian manifolds. This navigation structure establishes a bijection between Randers spaces $(M, F=\alpha+\beta)$ and pairs $(h, W)$ of Riemannian metrics $h$ and vector fields $W$ on the manifold $M$. Note that, based on Theorem \ref{THM} and the idea of analysing the indicatrices, the solution to the navigation problem has also been developed on Hermitian manifolds in complex Finsler geometry with the application of the complex Randers metric \cite{aldea}.
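As a sanity check of \eqref{moja} (an illustrative computation only), one can verify numerically that the resultants $v=u+W$ of $h$-unit vectors $u$ lie on the indicatrix of $F$, i.e. $F(v)=1$, at several positions in the strip $|y|<1$; a minimal Python sketch:
\begin{verbatim}
import numpy as np

def F_shear(y, u, v):
    """Randers metric (moja) for the shear field W = (y, 0), |y| < 1."""
    return (np.sqrt(u**2 + v**2 - (y * v)**2) - y * u) / (1.0 - y**2)

for y in (-0.9, -0.5, 0.0, 0.4, 0.8):             # sample positions in the strip
    t = np.linspace(0.0, 2 * np.pi, 13)            # sample headings of the unit vector u
    vals = F_shear(y, np.cos(t) + y, np.sin(t))    # resultants v = u + W
    assert np.allclose(vals, 1.0)
print("F(u + W) = 1 at all sampled positions and headings")
\end{verbatim}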
\subsection{Projective flatness}
A Finsler metric $F=F(x,\mathbf{y})$ on an open subset $\mathcal{U}\subset \mathbb{R}^n$ is said to be projectively flat if all geodesics are straight lines in $\mathcal{U}$. Our background metric is projectively flat since it is the Euclidean metric. We verify the fact that the resulting Finsler metric \eqref{moja} is not projectively flat. Due to the action of the perturbing vector field which we applied, the obtained Randers metric loses this property. In general, there are Riemannian metrics $h$ composing the navigation data $(h, W)$ in the Zermelo navigation problem which are projectively flat, for instance the Klein metric. Depending on the type of perturbation the resulting Randers metric may have or may not have this geometric property. A Finsler metric $F=F(x,\mathbf{y})$ on an open subset $\mathcal{U}\subset \mathbb{R}^n$ is projectively flat iff it satisfies the following system of equations \cite{chern_shen} \begin{equation} \label{flat} F_{x^ky^l}y^k-F_{x^l}=0. \end{equation} We prove that the metric is not projectively flat. From (\ref{flat}) we observe in dimension two that \begin{equation} u F_{xu}+v F_{yu}=\frac{\left(1-y^2\right) \left(\frac{u v^2 y}{\left(u^2-v^2 \left(y^2-1\right)\right)^{3/2}}-1\right)+2 y \left(\frac{u}{\sqrt{u^2-v^2 \left(y^2-1\right)}}-y\right)}{\left(y^2-1\right)^2}v\neq F_x=0 \end{equation} and \begin{equation} \small{\quad u F_{xv}+v F_{yv}=\frac{v^4 y}{\left(u^2-v^2 \left(y^2-1\right)\right)^{3/2}}\neq F_y=\frac{-u \left(y^2+1\right) \sqrt{u^2-v^2 \left(y^2-1\right)}+2 u^2 y-v^2 y \left(y^2-1\right)}{\left(y^2-1\right)^2 \sqrt{u^2-v^2 \left(y^2-1\right)}}}. \end{equation} Thus, the metric $\eqref{moja}$ is not a projectively flat Finsler metric, since $\eqref{flat}$ does not hold. As a consequence we cannot obtain the spray coefficients or the flag curvature by the simplified formulae using the scalar function, namely the projective factor $P$ of $F$, where $G^i=Py^i$. In this case we shall compute both quantities applying the formulae for a general Finsler metric. Recall also that a Finsler metric $F$ on a manifold $M$ is said to be locally projectively flat if at any point, there is a local coordinate system $(x^i)$ in which $F$ is projectively flat.
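The non-flatness can also be confirmed symbolically, in the spirit of the Mathematica computations mentioned above; a minimal sympy sketch (illustrative only) evaluates the left-hand sides of \eqref{flat} for the metric \eqref{moja} at a sample point and finds nonzero values.
\begin{verbatim}
import sympy as sp

x, y, u, v = sp.symbols('x y u v', real=True)
F = (sp.sqrt(u**2 + v**2 - (y*v)**2) - y*u) / (1 - y**2)   # metric (moja)

# Projective flatness test F_{x^k y^l} y^k - F_{x^l} = 0 in dimension two.
flat_u = u*sp.diff(F, x, u) + v*sp.diff(F, y, u) - sp.diff(F, x)
flat_v = u*sp.diff(F, x, v) + v*sp.diff(F, y, v) - sp.diff(F, y)

pt = {x: 0, y: sp.Rational(1, 2), u: 1, v: 1}
print(flat_u.subs(pt).evalf(), flat_v.subs(pt).evalf())    # both nonzero
\end{verbatim}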
We also verify the projective flatness of the new Riemannian metric $\eqref{alpha}$ of the Randers metric $\eqref{moja}$. We show that (with $F$ now denoting $\alpha$) \begin{equation} u F_{xu}+v F_{yu}=\frac{u v y \left(2 u^2-3 v^2 \left(y^2-1\right)\right)}{\left(y^2-1\right)^2 \left(u^2-v^2 \left(y^2-1\right)\right)^{3/2}}\neq F_x=0 \end{equation} and \begin{equation} \quad u F_{xv}+v F_{yv}=\frac{v^4 y}{\left(u^2-v^2 \left(y^2-1\right)\right)^{3/2}}\neq F_y=\frac{y \left(2 u^2-v^2 \left(y^2-1\right)\right)}{\left(y^2-1\right)^2 \sqrt{u^2-v^2 \left(y^2-1\right)}}. \end{equation} \noindent As for the resulting Randers metric, the condition $\eqref{flat}$ is not fulfilled, so the Riemannian component $\alpha$ is not projectively flat either. An example of a pair $(F, \alpha)$ where both metrics, i.e. the resulting Finslerian one and the corresponding new Riemannian one, are projectively flat is given by the Funk and the Klein metric, respectively.
\subsection{Flag curvature}
For a tangent plane $\tilde{\pi}\subset T_xM$ containing $\mathbf{y}$, $\mathbf{K}=\mathbf{K}(\tilde{\pi},\mathbf{y})$ is the flag curvature, $\tilde{\pi}=span \{\mathbf{y}, \tilde{u}\}$, where $\tilde{u}\in \tilde{\pi}$. In dimension two $\tilde{\pi}=T_xM$ is the tangent plane. Thus the flag curvature $\mathbf{K}=\mathbf{K}(x,\mathbf{y})$ is a scalar function on $TM\backslash\{0\}$. For a Finsler metric $F=F(x,y;u,v)$ the Gauss curvature $\mathbf{K}=\mathbf{K}(x,y;u,v)$ is given by \begin{equation} \mathbf{K}=\frac{1}{F^2}(-2 G_v H_u+2 G Q_u-G_u^2+2 G_x+2 H Q_v-H_v^2+2 H_y-u Q_x-v Q_y) \end{equation} where $Q=G_u+H_v$ and $G=G(x,y;u,v)$, $H=H(x,y;u,v)$ denote its spray coefficients. This is the formula for the Ricci scalar $Ric$ divided by $F^2$; in dimension two the quotient $Ric/F^2$ is the Gauss curvature. We obtain the formula \eqref{Gauss_curvature} which expresses the curvature of the Randers metric in the navigation problem with the shear perturbation
\begin{equation} \label{Gauss_curvature} \mathbf{K}=-3\frac{k_n}{k_d} \end{equation} where \begin{equation} \scriptsize{ \begin{split} k_d=4 \sqrt{u^2-v^2 \left(y^2-1\right)}\left(\sqrt{u^2-v^2 \left(y^2-1\right)}-u y\right)^2 \\ \left(y \left(y^2+3\right) u^3-\left(3 y^2+1\right) \sqrt{u^2-v^2 \left(y^2-1\right)} u^2-3 v^2 y \left(y^2-1\right) u+v^2 \left(y^2-1\right) \sqrt{u^2-v^2 \left(y^2-1\right)}\right)^4 \end{split} } \end{equation} and \begin{equation} \scriptsize{ \begin{split} k_n=-4 y \left(7 y^{12}+182 y^{10}+1001 y^8+1716 y^6+1001 y^4+182 y^2+7\right) u^{15}\\ +2 \left(y^{14}+91 y^{12}+1001 y^{10}+3003 y^8+3003 y^6+1001 y^4+91 y^2+1\right) \sqrt{u^2-v^2 \left(y^2-1\right)} u^{14}\\ +32 v^2 y \left(2 y^{14}+63 y^{12}+364 y^{10}+429 y^8-286 y^6-455 y^4-112 y^2-5\right) u^{13}\\ +v^2 \left(-3 y^{16}-374 y^{14}-4914 y^{12}-14014 y^{10}-3432 y^8+14014 y^6+7826 y^4+886 y^2+11\right) \sqrt{u^2-v^2 \left(y^2-1\right)} u^{12}\\ -2 v^4 y \left(y^2-1\right)^2 \left(23 y^{12}+1056 y^{10}+8921 y^8+21648 y^6+16929 y^4+3968 y^2+191\right) u^{11}\\ +v^4 \left(y^2-1\right)^2 \left(y^{14}+241 y^{12}+4895 y^{10}+23199 y^8+33495 y^6+15191 y^4+1801 y^2+25\right) \sqrt{u^2-v^2 \left(y^2-1\right)} u^{10}\\ +2 v^6 y \left(y^2-1\right)^3 \left(5 y^{12}+440 y^{10}+5379 y^8+16896 y^6+16115 y^4+4440 y^2+245\right) u^9\\ -3 v^6 \left(y^2-1\right)^3 \left(15 y^{12}+605 y^{10}+4224 y^8+7986 y^6+4455 y^4+625 y^2+10\right) \sqrt{u^2-v^2 \left(y^2-1\right)} u^8\\ -4 v^8 y \left(y^2-1\right)^4 \left(30 y^{10}+723 y^8+3330 y^6+4133 y^4+1390 y^2+90\right) u^7\\ +2 v^8 \left(y^2-1\right)^4 \left(105 y^{10}+1491 y^8+4197 y^6+3075 y^4+530 y^2+10\right) \sqrt{u^2-v^2 \left(y^2-1\right)} u^6\\ +4 v^{10} y \left(y^2-1\right)^5 \left(63 y^8+588 y^6+1080 y^4+472 y^2+37\right) u^5\\ -v^{10} \left(y^2-1\right)^5 \left(210 y^8+1245 y^6+1367 y^4+307 y^2+7\right) \sqrt{u^2-v^2 \left(y^2-1\right)} u^4\\ -2 v^{12} y \left(y^2-1\right)^6 \left(60 y^6+235 y^4+152 y^2+15\right) u^3\\ +v^{12} \left(y^2-1\right)^6 \left(45 y^6+113 y^4+37 y^2+1\right) \sqrt{u^2-v^2 \left(y^2-1\right)} u^2\\ +2 v^{14} y \left(y^2-1\right)^7 \left(5 y^4+8 y^2+1\right) u-v^{14} y^2 \left(y^2-1\right)^7 \left(y^2+1\right) \sqrt{u^2-v^2 \left(y^2-1\right)}. \end{split} } \end{equation}
Let us look at some computed values of the curvature of the Randers metric. For instance, we obtain $\mathbf{K}(0,-\frac{1}{2};0,\frac{\sqrt{3}}{2})=\mathbf{K}(0, 0; \pm \frac{1}{2},\pm\frac{\sqrt{3}}{2})= -\frac{15}{64}\approx-0.234$; $\mathbf{K}(0, 0; \frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}) = -\frac{9}{16}\approx -0.563 $. In general, the range is determined by $x\in \mathbb{R} \ \wedge \ y\in (-1, 1)$. Taking into consideration the type of the perturbation, the ranges refer to the chosen point $(x, y)$ and $v\in [-1, 1], u\in (y_0-1, y_0+1) \Rightarrow u\in (-2, 2)$. The flag curvature does not depend on the $x$-coordinate due to the form of the current, which is parallel to the banks of the river. Let us fix $A=(x_0, y_0)=(0, -\frac{1}{2})$ which represents the starting point of the time-optimal paths. The graph of $\mathbf{K}$ at $A$ for the corresponding $u\in (-\frac{3}{2}, \frac{1}{2})$ and $v\in [-1, 1]$ as well as its contour plot are presented in Figure \ref{fig-K}. However, the domain of the tangent vector's coordinates in the problem is restricted and it is represented graphically by the space curve (purple) lying on the 3D surface of the curvature. In other words, the length of the resulting tangent vector to the Randers geodesic at the starting point is determined by the equations of motion while the ship's own speed is unit. The change of the Gaussian curvature, given as a function of the initial angle of the rotating tangent vector in the tangent space, is shown in Figure \ref{fig-K2}. \begin{figure}
\caption{The flag (Gaussian) curvature $\mathbf{K}$ at $A=(0, -\frac{1}{2})$ and its contour plot.}
\label{fig-K}
\end{figure} At the fixed position the minimal value of $\mathbf{K}$ equals $-\frac{3}{2}$ and at maximum it approaches 0. The Randers metric \eqref{moja} is not of constant flag curvature. \begin{figure}
\caption{The flag (Gaussian) curvature $\mathbf{K}$ as a function of the initial control angle $\varphi_0\in[0, 2\pi) $ of rotating tangent vector, $A=(0, -\frac{1}{2})$.}
\label{fig-K2}
\end{figure}
\subsection{In steady flow of a current}
\label{constant}
For a steady current we choose any two constants $p, q$ which satisfy $p^2 + q^2 < 1$. Let us assume for a while that the flow of the river is represented by the particular river-type vector field $W=[p, 0]$, $|W|=|p|<1$ due to the convexity of $F$. Then the resulting Randers metric $\eqref{W1W2}$ is expressed by \begin{equation} F(x,y; u,v) = \frac{ \sqrt{u^2 + v^2 -( pv)^2}-pu } {1-p^2}. \end{equation}
This metric is of Minkowski type. The geodesics are straight lines, as can be intuitively expected. The least-time trajectories have constant velocity and control, i.e. heading angle $\varphi$. The flag curvature is constant, $\mathbf{K}=0$, and the metric fulfills the conditions for projective flatness with the projective factor $P=0$. More generally, for a two-dimensional background with a constant vector field $W=[p, q]$ with $ |W|=\sqrt{p^2+q^2}<1$ the geodesics are obviously straight lines, too. Thus, the ship sails along the time-optimal path following the same straight track and a constant course over ground before and after the perturbation, while the courses through the water (headings) differ due to the acting perturbation. As a consequence there are different resulting speeds over ground as well as different times of the motions between a given point of departure and the point of destination. Obviously, since we assume a constant perturbation, the problem is invariant under translation. We draw attention to this case. Although it represents a theoretically simple scenario in the navigation problem, it is widely applied even in non-optimal and non-Finslerian geometric models and computational algorithms in real navigational software, which we discuss further in the later part of the paper.
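Since the time-optimal paths are straight lines in this case, the corresponding own velocity (course through the water) and passage time between two given points can be computed by elementary vector algebra: the resultant $u+W$ of the unit own velocity $u$ must point along the displacement. A minimal Python sketch with hypothetical endpoints and current:
\begin{verbatim}
import numpy as np

def straight_passage(A, B, W):
    """Constant current W (|W| < 1), unit own speed: own velocity and time from A to B."""
    A, B, W = (np.asarray(p, dtype=float) for p in (A, B, W))
    disp = B - A
    dist = np.linalg.norm(disp)
    d = disp / dist                                   # unit direction over ground
    s = d @ W + np.sqrt((d @ W) ** 2 + 1.0 - W @ W)   # speed over ground, so that |u| = 1
    u = s * d - W                                     # own velocity (course through the water)
    return u, dist / s

u, T = straight_passage((0.0, 0.0), (4.0, 3.0), (0.3, 0.1))
print(u, np.linalg.norm(u), T)   # |u| = 1; T is the passage time
\end{verbatim}
The direction of $u$ differs from the track direction over ground, which reflects the difference between the course through the water and the course over ground described above.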
\section{Geodesics of Randers metric as the solutions to Zermelo's problem}
\subsection{Fundamental tensor} The function $F$ is positive on the manifold $TM\setminus 0$ whose points are of the form $(x, \mathbf{y})$ with $0 \neq \mathbf{y} \in T_x M$. Over each point $(x, \mathbf{y})$ of $TM\setminus 0$ treated as a parameter space we designate the vector space $T_x M$ as a fiber and name the resulting vector bundle $\pi^ \ast TM$. There is a canonical symmetric bilinear form $g_{ij} dx^i \otimes dx^j$ on the fibers of $\pi^\ast T M$ with \begin{equation} g_{ij} := \frac 1 2 [ F^2 ]_{y^i y^j}=\frac{1}{2}\frac{\partial^2F^2}{\partial y^i \partial y^j}. \label{g_ij_Randers} \end{equation} The subscripts $y^i , y^j$ signify here partial differentiation and the matrix $(g_{ij} )$ is the fundamental tensor. We compute the partial derivatives of the square of the metric \eqref{moja} \begin{equation} F^2_u=\frac{2 \left(\frac{u}{\sqrt{u^2-v^2 y^2+v^2}}-y\right) \left(\sqrt{u^2-v^2 y^2+v^2}-u y\right)}{\left(1-y^2\right)^2}, \end{equation}
\begin{equation} F^2_v=\frac{\left(2 v-2 v y^2\right) \left(\sqrt{u^2-v^2 y^2+v^2}-u y\right)}{\left(1-y^2\right)^2 \sqrt{u^2-v^2 y^2+v^2}}, \end{equation}
\begin{equation} \small{F^2_{uu}=\frac{2 \left(-2 u^3 y+u^2 \left(y^2+1\right) \sqrt{u^2-v^2 \left(y^2-1\right)}-v^2 \left(y^4-1\right) \sqrt{u^2-v^2 \left(y^2-1\right)}+3 u v^2 y \left(y^2-1\right)\right)}{\left(y^2-1\right)^2 \left(u^2-v^2 \left(y^2-1\right)\right)^{3/2}}}, \end{equation}
\begin{equation} F^2_{vv}=\frac{2 u^3 y-2 u^2 \sqrt{u^2-v^2 \left(y^2-1\right)}+2 v^2 \left(y^2-1\right) \sqrt{u^2-v^2 \left(y^2-1\right)}}{\left(y^2-1\right) \left(u^2-v^2 \left(y^2-1\right)\right)^{3/2}}, \end{equation}
\begin{equation} F^2_{uv}=-\frac{2 v^3 y}{\left(u^2-v^2 \left(y^2-1\right)\right)^{3/2}}=F^2_{vu}. \end{equation}
\noindent Hence, the 2D fundamental tensor, i.e. one half of the Hessian matrix of $F^2$, is given by
\begin{equation} \scriptsize{H=\left( \begin{array}{cc}
\frac{-2 y u^3+\left(y^2+1\right) \sqrt{u^2-v^2 \left(y^2-1\right)} u^2+3 v^2 y \left(y^2-1\right) u-v^2 \left(y^4-1\right) \sqrt{u^2-v^2 \left(y^2-1\right)}}{\left(y^2-1\right)^2 \left(u^2-v^2 \left(y^2-1\right)\right)^{3/2}} & -\frac{v^3 y}{\left(u^2-v^2 \left(y^2-1\right)\right)^{3/2}} \\ \\
-\frac{v^3 y}{\left(u^2-v^2 \left(y^2-1\right)\right)^{3/2}} & \frac{y u^3-\sqrt{u^2-v^2 \left(y^2-1\right)} u^2+v^2 \left(y^2-1\right) \sqrt{u^2-v^2 \left(y^2-1\right)}}{\left(y^2-1\right) \left(u^2-v^2 \left(y^2-1\right)\right)^{3/2}} \\ \end{array} \right)}. \end{equation} The determinant of the fundamental tensor equals
\begin{equation} \footnotesize{\text{det}(H)=\frac{u^3 y \left(y^2+3\right)-u^2 \left(3 y^2+1\right) \sqrt{u^2-v^2 \left(y^2-1\right)}+v^2 \left(y^2-1\right) \sqrt{u^2-v^2 \left(y^2-1\right)}-3 u v^2 y \left(y^2-1\right)}{\left(y^2-1\right)^3 \left(u^2-v^2 \left(y^2-1\right)\right)^{3/2}}}. \end{equation}
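The entries of the fundamental tensor and its determinant can be reproduced symbolically from \eqref{g_ij_Randers}; a minimal sympy sketch, added for illustration and analogous in spirit to the Mathematica computations mentioned above:
\begin{verbatim}
import sympy as sp

x, y, u, v = sp.symbols('x y u v', real=True)
F = (sp.sqrt(u**2 + v**2 - (y*v)**2) - y*u) / (1 - y**2)     # metric (moja)

# Fundamental tensor g_ij = (1/2) d^2(F^2)/dy^i dy^j with (y^1, y^2) = (u, v).
g = sp.Matrix([[sp.diff(F**2, a, b) / 2 for b in (u, v)] for a in (u, v)])

print(sp.simplify(g[0, 1]))                    # compare with F^2_{uv}/2 above
pt = {y: sp.Rational(1, 2), u: 1, v: 1}
print(g.det().subs(pt).evalf())                # positive, as required by strong convexity
\end{verbatim}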
\subsection{Spray coefficients} A spray on $M$ is a smooth vector field on $TM_0:=TM\backslash \{0\}$ locally expressed in the following form \begin{equation}
\label{spray} G=y^i\frac{\partial}{\partial x^i}-2G^i \frac{\partial}{\partial y^i}, \end{equation} where $G^i=G^i(x,\mathbf{y})$ are the local functions on $TM_0$ satisfying $G^i(x, \widehat{\lambda} \mathbf{y})=\widehat{\lambda} ^2G^i(x,\mathbf{y}), \quad \widehat{\lambda} >0$. The spray is induced by $F$, and the spray coefficients $G^i$ of $G$, given by \begin{equation}
\label{spray2} G^i:=\frac{1}{4}g^{il}\{ [F^2]_{x^ky^l}y^k-[F^2]_{x^l}\}=\frac{1}{4}g^{il}\left(2\frac{\partial g_{jl}}{\partial x^k}-\frac{\partial g_{jk}}{\partial x^l}\right)y^jy^k, \end{equation} are the spray coefficients of $F$. More generally, in comparison to the 2D components obtained above, plugging \eqref{Alpha} and \eqref{Beta} in \eqref{g_ij_Randers} one obtains the general form of the fundamental tensor of the Randers metric given by \begin{equation} g_{ij}(\mathbf{y})=\frac{F}{\alpha}\left(a_{ij}-\frac{y_i}{\alpha}\frac{y_j}{\alpha}\right)+\left(\frac{y_i}{\alpha}+b_i\right)\left(\frac{y_j}{\alpha}+b_j\right) \end{equation} or in the equivalent form \begin{equation} g_{ij}(\mathbf{y})=\frac{F}{\alpha}\left[a_{ij}-\frac{y_i}{\alpha}\frac{y_j}{\alpha}+\frac{\alpha}{F}\left(\frac{y_i}{\alpha}+b_i\right)\left(\frac{y_j}{\alpha}+b_j\right)\right] \end{equation} where $y_i=a_{ij}y^j$. The inverse $(g^{ij})=(g_{ij})^{-1}$ can be expressed in the following form \begin{equation} g^{ij}(\mathbf{y})=\frac{\alpha}{F}a^{ij}+\left(\frac{\alpha}{F}\right)^2\frac{\beta+\alpha b^2}{F}\frac{y^i}{\alpha}\frac{y^j}{\alpha}-\left(\frac{\alpha}{F}\right)^2\left(b^j\frac{y^i}{\alpha}+b^i\frac{y^j}{\alpha}\right) \end{equation} where $b^i=a^{ij}b_j$. The determinant of the matrix $(g_{ij})$ is given by \begin{equation} \text{det} (g_{ij})=\left(\frac{F}{\alpha}\right)^{n+1}\text{det}(a_{ij}). \end{equation} \noindent Now we consider the spray coefficients for a general 2D Finsler metric, including a non-Euclidean background. Let $$L(x,y;u,v):=\frac{1}{2}F^2(x,y;u,v).$$ Hence, \begin{equation} G^1=\frac{L_{vv}(L_{xu}u+L_{yu}v-L_x)-L_{uv}(L_{xv}u+L_{yv}v-L_y)}{2(L_{uu}L_{vv}-L_{uv}L_{uv})}, \end{equation} \begin{equation} G^2=\frac{-L_{uv}(L_{xu}u+L_{yu}v-L_x)+L_{uu}(L_{xv}u+L_{yv}v-L_y)}{2(L_{uu}L_{vv}-L_{uv}L_{uv})}. \end{equation} The spray coefficients $G^1:=G=G(x,y;u,v)$ and $G^2:=H=H(x,y;u,v)$ can be expressed by more suitable formulae \cite{chern_shen} \begin{equation} G=\frac{\left(\frac{\partial ^2L}{\partial v^2} \frac{\partial L}{\partial x}-\frac{\partial L}{\partial y} \frac{\partial ^2L}{\partial u\, \partial v}\right)-\frac{\partial L}{\partial v} \left(\frac{\partial ^2L}{\partial x\, \partial v}-\frac{\partial ^2L}{\partial y\, \partial u}\right)}{2 \left[\frac{\partial ^2L}{\partial u^2} \frac{\partial ^2L}{\partial v^2}-\left(\frac{\partial ^2L}{\partial u\, \partial v}\right)^2\right]}, \end{equation} \begin{equation} H=\frac{\left(\frac{\partial ^2L}{\partial u^2} \frac{\partial L}{\partial y}-\frac{\partial L}{\partial x} \frac{\partial ^2L}{\partial u\, \partial v}\right)+\frac{\partial L}{\partial u} \left(\frac{\partial ^2L}{\partial x\, \partial v}-\frac{\partial ^2L}{\partial y\, \partial u}\right)}{2 \left[\frac{\partial ^2L}{\partial u^2} \frac{\partial ^2L}{\partial v^2}-\left(\frac{\partial ^2L}{\partial u\, \partial v}\right)^2\right]} \end{equation} which we apply to compute the spray coefficients of the considered Randers metric. We obtain
\begin{equation} \label{G} \scriptsize{G=-\frac{v \left(\sqrt{u^2-v^2 \left(y^2-1\right)}-u y\right)^2 \left(4 u y \sqrt{u^2-v^2 \left(y^2-1\right)}-2 u^2 \left(y^2+1\right)+v^2 \left(y^4-1\right)\right)}{2 \left(y^2-1\right) \left(u^3 (-y) \left(y^2+3\right)+u^2 \left(3 y^2+1\right) \sqrt{u^2-v^2 \left(y^2-1\right)}-v^2 \left(y^2-1\right) \sqrt{u^2-v^2 \left(y^2-1\right)}+3 u v^2 y \left(y^2-1\right)\right)}}, \end{equation}
\begin{equation} \label{H} \tiny{H=\frac{\left(\sqrt{u^2-v^2 \left(y^2-1\right)}-u y\right)^2 \left(u^3 \left(-\left(3 y^2+1\right)\right)+u^2 y \left(y^2+3\right) \sqrt{u^2-v^2 \left(y^2-1\right)}-v^2 y^3 \left(y^2-1\right) \sqrt{u^2-v^2 \left(y^2-1\right)}+3 u v^2 y^2 \left(y^2-1\right)\right)}{2 \left(y^2-1\right)^2 \left(u^3 y \left(y^2+3\right)-u^2 \left(3 y^2+1\right) \sqrt{u^2-v^2 \left(y^2-1\right)}+v^2 \left(y^2-1\right) \sqrt{u^2-v^2 \left(y^2-1\right)}-3 u v^2 y \left(y^2-1\right)\right)}}. \end{equation}
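The spray coefficients can also be generated directly from $L=\tfrac12F^2$ with the 2D formulas above; a minimal sympy sketch (illustrative only) computes them for the shear metric, and their values at a sample point can be compared with \eqref{G} and \eqref{H}.
\begin{verbatim}
import sympy as sp

x, y, u, v = sp.symbols('x y u v', real=True)
F = (sp.sqrt(u**2 + v**2 - (y*v)**2) - y*u) / (1 - y**2)
L = F**2 / 2

Lx, Ly = sp.diff(L, x), sp.diff(L, y)
Luu, Luv, Lvv = sp.diff(L, u, 2), sp.diff(L, u, v), sp.diff(L, v, 2)
P = sp.diff(L, x, u)*u + sp.diff(L, y, u)*v - Lx
Q = sp.diff(L, x, v)*u + sp.diff(L, y, v)*v - Ly
den = 2*(Luu*Lvv - Luv**2)

G1 = (Lvv*P - Luv*Q) / den          # spray coefficient G
G2 = (-Luv*P + Luu*Q) / den         # spray coefficient H

pt = {y: sp.Rational(1, 2), u: 1, v: 1}
print(G1.subs(pt).evalf(), G2.subs(pt).evalf())   # compare with (G) and (H) at the same point
\end{verbatim}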
\noindent Additionally, we also obtain the spray coefficients $G_{\alpha}, H_{\alpha}$ of the new Riemannian term \eqref{alpha} of the resulting Randers metric \begin{equation} G_{\alpha}= -\frac{2 u v y}{y^2-1}, \qquad H_{\alpha}= \frac{y \left(-2 u^2-v^2 \left(y^2-1\right)\right)}{2 \left(y^2-1\right)^2}. \end{equation}
\subsection{Randers geodesic equations}
The solution curves in the Zermelo navigation problem are found by working out the geodesics of the Randers metric. For a standard local coordinate system $(x^i, y^i)$ in $TM_0$ the geodesic equation for a Finsler metric is expressed in the general form \begin{equation} \label{geo} \dot{y}^i+2G^i(x,\mathbf{y})=0.
\end{equation} Hence, \begin{equation} \label{geo1} \ddot{x}^i+\frac{1}{2} g^{il} (2\frac{\partial g_{jl}}{\partial x^k} - \frac{\partial g_{jk}}{\partial x^l})\dot{x}^j\dot{x}^k=0. \end{equation}
\noindent Let us abbreviate $\Phi = \sqrt{\dot x^2-\delta \dot y^2}, \delta = \left(y^2-1\right), \Delta = \left(y^2+3\right), \psi = \left(3 y^2+1\right), \Psi = \dot y^2 \Phi, \sigma = \dot x^2 \Phi$. Then plugging $\eqref{G}$ and $\eqref{H}$ in $\eqref{geo}$ we obtain the system of geodesic equations in the navigation problem under the shear vector field as follows
\begin{equation} \ddot x-\frac{2 \left(\dot y\left(\Phi-y \dot x\right)^2 \left(4 y \dot x \Phi-2 \left(y^2+1\right) \dot x^2+\left(y^4-1\right) \dot y^2\right)\right)}{2 \delta \left(\psi \sigma+3 y \delta \dot x \dot y^2-\delta \Psi-y \Delta \dot x^3\right)}=0, \label{g1} \end{equation}
\begin{equation} \ddot y+\frac{2 \left(\left(\Phi-y \dot x\right)^2 \left(\delta y^3 \left(-\dot y^2\right) \Phi+3 \delta y^2 \dot x \dot y^2+\Delta y \sigma-\psi \dot x^3\right)\right)}{2 \delta^2 \left(-\psi \sigma-3 y \delta \dot x \dot y^2+\delta \Psi+y \Delta \dot x^3\right)}=0. \label{g2} \end{equation}
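Flows such as the one in Figure \ref{fam_1} can be reproduced numerically by integrating \eqref{geo} with the spray coefficients generated from $L=\tfrac12F^2$. A minimal Python sketch (illustrative only; it assumes that $\varphi_0$ is the initial heading of the unit own velocity, so that the initial tangent vector is the resultant $u+W$ at $A=(0,-\frac{1}{2})$, and the solver settings are hypothetical):
\begin{verbatim}
import numpy as np
import sympy as sp
from scipy.integrate import solve_ivp

x, y, u, v = sp.symbols('x y u v', real=True)
F = (sp.sqrt(u**2 + v**2 - (y*v)**2) - y*u) / (1 - y**2)
L = F**2 / 2
Luu, Luv, Lvv = sp.diff(L, u, 2), sp.diff(L, u, v), sp.diff(L, v, 2)
P = sp.diff(L, x, u)*u + sp.diff(L, y, u)*v - sp.diff(L, x)
Q = sp.diff(L, x, v)*u + sp.diff(L, y, v)*v - sp.diff(L, y)
den = 2*(Luu*Lvv - Luv**2)
G1 = sp.lambdify((x, y, u, v), (Lvv*P - Luv*Q)/den, 'numpy')
G2 = sp.lambdify((x, y, u, v), (-Luv*P + Luu*Q)/den, 'numpy')

def rhs(t, s):
    """Geodesic system (geo): state s = (x, y, xdot, ydot), with xddot^i = -2 G^i."""
    xx, yy, du, dv = s
    return [du, dv, -2*G1(xx, yy, du, dv), -2*G2(xx, yy, du, dv)]

def near_bank(t, s):          # stop before leaving the convexity strip |y| < 1
    return 0.995 - abs(s[1])
near_bank.terminal = True

A = (0.0, -0.5)
paths = []
for phi0 in np.arange(0.0, 2*np.pi, np.pi/18):
    u0, v0 = np.cos(phi0) + A[1], np.sin(phi0)   # resultant of unit own velocity and W = (y, 0)
    sol = solve_ivp(rhs, (0.0, 1.5), [A[0], A[1], u0, v0],
                    max_step=0.01, events=near_bank)
    paths.append((sol.y[0], sol.y[1]))           # one time-optimal path; plot as needed
\end{verbatim}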
\noindent Now we evaluate the system of the Randers geodesic equations over the full range of the initial angle $\varphi_0$, starting from the fixed point $A=(0, -\frac{1}{2})$. Figure $\ref{fam_1}$ provides the graph of the solutions to the system of equations \eqref{g1} and \eqref{g2}, giving the optimal curves for $\varphi_0$ ranging from 0 to 2$\pi$ in increments of $\vartriangle\varphi_0=\frac{\pi}{18}$ on the open sea, together with the acting vector field. The convexity condition for $|W|$ implies a bounded domain, i.e. $|y|<1$, marked by the black dashed lines. \begin{figure}
\caption{The family of Randers geodesics with the increments $\vartriangle\varphi_0=\frac{\pi}{18}$ at open sea.}
\label{fam_1}
\end{figure} We also divide the Randers geodesics into subsets referring to the four $\frac{\pi}{2}$-ranges of the initial angle $\varphi_0$ with $\vartriangle\varphi_0=\frac{\pi}{18}$ in the bounded domain, which is presented in Figure $\ref{fam_1e}$. The time-optimal paths for $\varphi_0\in[0, \frac{\pi}{2})$ are marked in blue, $\varphi_0\in[\frac{\pi}{2}, \pi)$ in black, $\varphi_0\in[\pi, \frac{3}{2}\pi)$ in red and $\varphi_0\in[\frac{3}{2}\pi, 2\pi)$ in green. We shall use the same color codes in the analogous figures later in the paper. \begin{figure}
\caption{The subsets of Randers geodesics divided with respect to $\frac{\pi}{2}$-ranged $\varphi_0\in[0, 2\pi)$ with the increments $\vartriangle\varphi_0=\frac{\pi}{18}$ in bounded domain, $|y|<1$.}
\label{fam_1e}
\end{figure} \begin{corollary}{} \label{cor1} The obtained results on the navigation problem referring to the mild shear perturbation $\eqref{pole}$ by means of Finsler geometry are consistent with the initial results obtained with the use of the Hamiltonian formalism in the calculus of variations (cf. \S 458 in \cite{caratheodory}). \end{corollary} \noindent In our Finslerian approach the corresponding optimal steering angle can be deduced from the obtained equations of the Randers geodesics or, equivalently, from Zermelo's implicit formula applied in $\mathbb{R}^2$.
\section{Examples of Randers geodesics' flows}
Without loss of generality, following the above steps of the solution for the new Finsler metrics \eqref{W1W2}, one can obtain the final flows of Randers geodesics \eqref{geo} depending on other types of perturbation \eqref{pole_river} and different initial conditions. Next, as acting vector fields we shall consider the ones which are determined by the quartic curve and the Gauss function. In Finsler geometry the computations of geometric quantities are usually complicated. As it takes a while if one computes some quantities manually, we create some programmes with the use of Wolfram Mathematica to generate the graphs and provide some numeric computations when the complete symbolic ones cannot be obtained. The numerical schemes can give useful information when studying the geometric properties of the obtained flows.
\subsection{Quartic curve perturbation}
First let us start with the current $W$ defined by the quartic plane curve given by \begin{equation} f(x)=a \left(b-x^2\right)^2. \label{quart} \end{equation} \begin{figure}
\caption{The families of the plane quartic curves \eqref{quart} depending on the parameters $a$, $b$.}
\label{rb_pole2}
\end{figure}
The families of the plane quartic curves for different parameters $a$, $b$ are presented in Figure \ref{rb_pole2}. Let $a=0.8$ and $b=1$, and adapt the curve to the scenario with the horizontal flow of the perturbation. This is shown in Figure \ref{rb_pole}. Recall that to apply Theorem \ref{THM} the convexity condition must be fulfilled. In the chosen coordinate system the condition $|f(y)|<1$ implies the domain $|y|<y_0\approx 1.4553$, which is marked by red dotted lines in Figure \ref{rb_7c}.
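The convexity bound quoted above can be recomputed directly; a minimal Python sketch (illustrative) solves $f(y_0)=1$ for the quartic profile with $a=0.8$, $b=1$, and also checks that the amplitude of the scaled Gaussian profile used in the next subsection stays below $1$.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

a, b = 0.8, 1.0
f_quartic = lambda yy: a * (b - yy**2)**2        # quartic river profile

# Smallest y_0 > 1 with f(y_0) = 1; the convexity domain is |y| < y_0.
y0 = brentq(lambda yy: f_quartic(yy) - 1.0, 1.0, 3.0)
print(y0)                                         # approximately 1.4553

amp = 5.0 / (2.0 * np.sqrt(2.0 * np.pi))          # amplitude of (5/2) * Gauss(0, 1)
print(amp)                                        # approximately 0.997 < 1
\end{verbatim}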
\begin{figure}
\caption{Perturbation determined by the plane quartic curve with $a=0.8$, $b=1$.}
\label{rb_pole}
\end{figure}
The Randers geodesics starting from a point in the midstream, generated in the increments $\bigtriangleup\varphi_0=\frac{\pi}{18}$ in the presence of the acting quartic curve perturbation, are presented in Figure \ref{rb_7c}. Figure \ref{rb_7fffff} shows the subsets of the geodesics divided with respect to the $\frac{\pi}{2}$-ranged $\varphi_0\in[0, 2\pi)$, in the increments $\vartriangle\varphi_0=\frac{\pi}{18}$, in the bounded domain. There are upstream focal points, although there are none downstream. In Figure \ref{rb_3} and Figure \ref{rb_5} we can observe the change of the flow, which depends on the initial point from which the family of Randers geodesics leaves. As in the previous case of the shear vector field, we fix the initial position at $A=(0, -\frac{1}{2})$. The upstream and downstream geodesics connecting the banks of the river and starting from $(0, 1.4553)$ are shown in Figure \ref{rb_8d}, and in the partition with respect to the $\frac{\pi}{2}$-ranged $\varphi_0\in[0, 2\pi)$ in Figure \ref{rb_8e}. \begin{figure}
\caption{Randers geodesics starting from a point in midstream under quartic curve perturbation in the increments $\bigtriangleup\varphi_0=\frac{\pi}{18}$.}
\label{rb_7c}
\end{figure} \begin{figure}
\caption{The subsets of Randers geodesics starting from a point in midstream under quartic curve perturbation divided with respect to $\frac{\pi}{2}$-ranged $\varphi_0\in[0, 2\pi)$ in the increments $\vartriangle\varphi_0=\frac{\pi}{18}$ in bounded domain, $|y|<1.4553$.}
\label{rb_7fffff}
\end{figure} \begin{figure}
\caption{The flow of Randers geodesics starting at $(0, -\frac{1}{2})$ under quartic curve perturbation for $\bigtriangleup\varphi_0=\frac{\pi}{36}$.}
\label{rb_3}
\end{figure} \begin{figure}
\caption{The subsets of Randers geodesics starting at $(0, -\frac{1}{2})$ under quartic curve perturbation divided with respect to $\frac{\pi}{2}$-ranged $\varphi_0\in[0, 2\pi)$ in the increments $\vartriangle\varphi_0=\frac{\pi}{18}$ in bounded domain, $|y|<1.4553$.}
\label{rb_5}
\end{figure} \begin{figure}
\caption{Randers geodesics starting from upper bank of the river at $(0, 1.4553)$ under quartic curve perturbation in bounded domain, $\bigtriangleup\varphi_0=\frac{\pi}{18}$.}
\label{rb_8d}
\end{figure} \begin{figure}
\caption{The subsets of Randers geodesics under quartic curve perturbation starting from upper bank of the river at $(0, 1.4553)$, divided with respect to $\frac{\pi}{2}$-ranged $\varphi_0\in[0, 2\pi)$ in bounded domain, with the increments $\vartriangle\varphi_0=\frac{\pi}{18}$.}
\label{rb_8e}
\end{figure}
\subsection{Gaussian function perturbation}
Next, we consider the current $W$ determined by the Gauss function, which is expressed in the following form \begin{equation} f(x)=ae^{-\frac{(x-b)^2}{2c^2}}, \label{gauss} \end{equation} where $a, b, c$ are arbitrary real constants. The families of plane Gauss function curves for different parameters $a$, $b$, $c$ are presented in Figure \ref{gauss3_5}. This time we want the strong convexity condition, compulsory for the Randers metric applied in the Finslerian approach to the Zermelo navigation, to be fulfilled in the whole space, so that the restriction of the domain is not necessary. We increase the speed of the current described by the basic Gaussian function $\frac{1}{\sqrt{2 \pi }}e^{-\frac{1}{2}x^2}$ using a constant multiplier, e.g. $\frac{5}{2}$, to approach its maximal allowable value implied by strong convexity. Consequently, let $a=\frac{5}{2\sqrt{2 \pi }}\approx 0.997$, $b=0$, $c=1$ and adapt the curve as before to the scenario with the horizontal flow of the perturbation. The field is shown in Figure \ref{gauss3_7}. Hence, \begin{equation} f(y)=\frac{5}{2}\frac{1}{\sqrt{2 \pi }}e^{-\frac{1}{2}y^2}. \end{equation} \begin{figure}
\caption{The families of planar Gaussian functions \eqref{gauss} depending on the parameters $a$, $b$, $c$.}
\label{gauss3_5}
\end{figure} \begin{figure}\label{gauss3_7}
\end{figure} \noindent Time-optimal paths coming from the midstream with the increments $\bigtriangleup\varphi_0=\frac{\pi}{18}$ in the presence of acting Gaussian function perturbation are presented in Figure \ref{gauss3_10a}. The subfamilies of Randers geodesics starting from a point in midstream and divided with respect to $\frac{\pi}{2}$-ranged $\varphi_0\in[0, 2\pi)$ are presented in Figure \ref{gauss3_1b}, with the increments $\vartriangle\varphi_0=\frac{\pi}{18}$. The analogous flow of Randers geodesics but starting at $(0, -\frac{1}{2})$ with $\bigtriangleup\varphi_0=\frac{\pi}{18}$ is given in Figure \ref{gauss3_9} and Figure \ref{gauss3_4AA}. \begin{figure}
\caption{Randers geodesics starting from a point in midstream under Gaussian function perturbation, with the increments $\bigtriangleup\varphi_0=\frac{\pi}{18}$.}
\label{gauss3_10a}
\end{figure} \begin{figure}
\caption{The subfamilies of Randers geodesics starting from a point in midstream under Gaussian function perturbation divided with respect to $\frac{\pi}{2}$-ranged $\varphi_0\in[0, 2\pi)$, with $\vartriangle\varphi_0=\frac{\pi}{18}$.}
\label{gauss3_1b}
\end{figure} \begin{figure}
\caption{The flow of Randers geodesics starting at $(0, -\frac{1}{2})$ under Gaussian function perturbation, with the increments $\bigtriangleup\varphi_0=\frac{\pi}{18}$.}
\label{gauss3_9}
\end{figure} \begin{figure}
\caption{The subsets of Randers geodesics starting at $(0, -\frac{1}{2})$ under Gaussian function perturbation, divided with respect to $\frac{\pi}{2}$-ranged $\varphi_0\in[0, 2\pi)$ in the increments $\vartriangle\varphi_0=\frac{\pi}{18}$.}
\label{gauss3_4AA}
\end{figure} \begin{corollary}{} \label{cor2} The flows of time-optimal paths in the presence of the mild river-type perturbations $\eqref{pole}$ obtained by means of Finsler geometry are consistent with the results on the Zermelo navigation problem obtained with the use of the equations of motion and the implicit navigation formula of Zermelo for the optimal control angle in the calculus of variations (cf. \S 279, \S 458 in \cite{caratheodory}). \end{corollary}
\section{Application to search models}
\subsection{Revisiting SAR patterns and stating the problem}
Our motivation in the current research also touches on the search patterns which are applied in air and marine Search and Rescue (SAR) operations. They are determined in the International Aeronautical and Maritime Search and Rescue (IAMSAR) Manual, which is required to be up-to-date and carried on board ships worldwide by the International Convention for the Safety of Life at Sea (SOLAS, 1974). The Manual provides guidelines for a common aviation and maritime approach to organizing and providing search and rescue services.
\begin{figure}
\caption{Planar SAR patterns: creeping line, expanding square and sector search.}
\label{sar_1}
\end{figure} \begin{figure}
\caption{Planar SAR patterns in the real air and marine applications: expanding square, sector search, parallel search, creeping line.}
\label{sar_2}
\end{figure}
Briefly, the essential criterion in SAR operations is time. The standard patterns are applied when the exact position of the searched object is unknown, so the key is to search the area efficiently and in the shortest time. Thus, the point of the highest probability (datum), which represents the centre point of the search, is to be determined. In particular, this is the field for applying the theory of reliability and probability, including the notion of the concentration ellipse as in the study of uncertainties and error analysis. However, in our study we focus on the geometry referring to the flows of the time-optimal paths which might be used by the searching object.
We observe that the search models are created and required to be used routinely while neglecting, from the point of view of optimization, the influence of the acting perturbation, which in reality may be understood as the real wind or stream (current), in particular. The standard methods of search are based on the following patterns: expanding square, creeping line, sector search and parallel search. They are presented in Figure \ref{sar_1}. The search paths followed in the air and marine applications are shown in Figure \ref{sar_2}. Now, let us recall the two standard geodetic problems in the navigational context from the introduction of the paper. The search path can be sailed in an active or a passive way. The former means that we correct the route so as to follow the search pattern "over ground". In the latter, we let the ship be drifted continuously by the acting perturbation, so we follow the search pattern just "through the water". We ask whether the search paths, or their segments, could be based on the time-optimal paths which we study by means of geometry, in order to meet the time criterion more efficiently.
\subsection{Simulations}
In this part of our research we use the navigation information system Navi-Sailor 4000 ECDIS (Electronic Chart Display and Information System) integrated with the navigational simulator Navi-Trainer Professional 5000 by Transas Technologies Ltd., one of the worldwide leaders in developing a wide range of IT solutions for the marine industry, which promotes its own concept of e-Navigation. We pay attention to the geometric models and, consequently, to the formulae implemented in the navigational software, on which the information generated automatically for an ever wider group of users is based. In particular, we recall the applications therein which correspond exactly to the search models mentioned above. \begin{figure}
\caption{Simulated search paths in the models of expanding square, sector search and creeping line without perturbation; searching ship positioned in (50$^\circ$08.607'N, 005$^\circ$59.654'W), $|u|=10$ kn.}
\label{sar_bez}
\end{figure}
Let us consider some examples referring to real applications. First, we activate the search paths for the scenario without perturbation (calm sea model). The generated routes based on the models of expanding square, sector search and creeping line in the case of one searching ship are presented in Figure \ref{sar_bez}. We assume the same conditions as in the Zermelo navigation problem considered above, i.e. a constant own speed of the ship sailing on a two-dimensional sea. In practice the search speed means the maximal speed of the ship which can be maintained in the real conditions. Note that for $n>1$ searching ships this means the highest speed of the slowest ship. The simulated trajectories correspond to the standard patterns.
\begin{figure}
\caption{Simulated search paths in the models of expanding square under acting perturbations with set of 2.0 kn and drifts: 270.0$^\circ$, 180.0$^\circ$, 000.0$^\circ$, respectively; $|u|=10$ kn.}
\label{sar_z1}
\end{figure}
Next, we introduce some perturbations for the same datum in the way they can be set up in the system. The real perturbation can be defined by two parameters, namely the drift (direction) and the set (speed). Let us assume the ship is positioned at a fixed point given by the geographical coordinates 50$^\circ$08.607'N (latitude) and 005$^\circ$59.654'W (longitude). We set the following conditions in the simulations. In the first expanding square model, presented in Figure \ref{sar_z1}, the speed of the ship equals 10 kn (knots) and the perturbation is determined by the pairs of drift and set as follows: (270.0$^\circ$, 2.0 kn), (180.0$^\circ$, 2.0 kn), (000.0$^\circ$, 2.0 kn). The commence search point is 50$^\circ$17.438'N and 005$^\circ$59.466'W, the number of legs equals 10, the starting leg length is 1 Nm (nautical mile) and the search pattern heading equals 000.0$^\circ$. Then we increase the ship's speed up to 15 kn, keeping the same strength of perturbation. Thus, the perturbations become relatively weaker. The solutions obtained in the simulator are given in Figure \ref{sar_z2} and refer to the perturbations given by drift and set as follows: (225.0$^\circ$, 2.0 kn), (180.0$^\circ$, 2.0 kn) for the expanding square, and (225.0$^\circ$, 2.0 kn) for the sector search. The commence search point is (50$^\circ$18.435'N, 006$^\circ$05.126'W), the number of legs equals 10, the search pattern heading is 045.0$^\circ$ and the starting leg length is 2 Nm in the case of the expanding square. The number of sectors is 6 and the search radius equals 10 Nm in the case of the sector search.
The modified search paths in the models of expanding square and sector search are presented in Figure \ref{sar_z1} and Figure \ref{sar_z2}. Geometrically, the Euclidean plane with a steady current is simply used in order to generate the search paths. This refers to the case mentioned before in $\S$ \ref{constant}. In fact, only constant river-type perturbations are applied in the simulator as well as in the real devices on board the ships, as the same software is used therein. The perturbation is considered in the passive way, so it refers to the direct geodetic problem in the navigational context. Due to some practical reasons this simplified approach is followed routinely in reality. However, taking into consideration the state of the art in positioning, modeling, tracking and control in the context of, e.g., robotic sailing and drone piloting, we may ask whether the approach becomes oversimplified, as the models can be optimized geometrically. This remark plays a role if the perturbations are variable in space and/or time, as the real ones are.
\begin{figure}
\caption{Simulated search paths in the models of expanding squares and sector search under "weaker" perturbations with set of 2.0 kn and drifts: 225.0$^\circ$, 180.0$^\circ$, 225.0$^\circ$, respectively; $|u|=10$ kn.}
\label{sar_z2}
\end{figure}
\subsection{Preliminary modification with application of time-optimal paths}
Let us first imagine $n$ bees which have been ordered to leave the hive in different directions and reach the boundary of a disc-shaped garden in the shortest time, such that during the flight in windy conditions the whole neighbourhood of the hive is fully patrolled. Randers geodesics (red solid) simulating the trajectories in a unit disc starting from the origin, in the increments $\bigtriangleup\varphi_0=\frac{\pi}{18}$, and the corresponding indicatrices for $t=1$ (blue dashed), under the acting shear, quartic curve and Gaussian function background fields, are illustrated in Figure \ref{disc}. In the absence of the winds the optimal trajectories are described by the straight rays coming from the origin. They are indicated by the black arrows. In the presence of the wind, for instance in the left-hand side graph of the figure referring to the shear perturbation, the space can be fully covered (searched) alternatively by $n$ curved time-optimal paths after suitably adjusting the initial angles $\varphi_0$ which determine the flow of the Randers geodesics. The example suggests considering the model with only one ship involved in the problem.
\begin{figure}
\caption{Randers geodesics (red solid) in a unit disc starting from the origin, with the increments $\bigtriangleup\varphi_0=\frac{\pi}{18}$ and the indicatrices for $t=1$ (blue dashed) under acting shear, quartic curve and Gaussian function perturbation, respectively.}
\label{disc}
\end{figure}
In contrast to the search paths created in the software of the navigational simulator, let us continue with the model with a perturbation $W$ which is not constant in space. To begin with, we combine the standard patterns with the time-optimal paths, which can be represented by the Randers geodesics. Let us remark here that the application of Randers geodesics to the problem can bring a more noticeable benefit when solving the problem with non-Euclidean backgrounds, where the non-Finslerian approach cannot be applied. Let the current be given by the river-type perturbation \eqref{pole_river}. We consider the piecewise time-optimal paths connecting the fixed waypoints defined in the standard models where the directions of the straight search paths change. With the search problem in mind, we refer to the optimal control angle $\varphi$ by the following corollary. \begin{corollary}{(cf. \S 459 in \cite{caratheodory})} The steering must always be toward the side which makes the wind component acting against the steering direction greater. \end{corollary} \noindent Thus, the idea is to make use of the perturbing vector field in order to increase the resulting ship's speed and not to follow the fixed standard pattern without regard for the type and properties of the acting perturbation. Briefly, if possible we aim to avoid sailing routinely "against" the current. \begin{figure}
\caption{Expanding square model modified by the piecewise time-optimal legs under the shear vector field.}
\label{exsq}
\end{figure} The current effect represented by the drift vector is the same for the searching and the searched object, as is assumed in the simulations. In each of the simulated scenarios, if we subtract the drift vector, given as a linear function of time, then we obtain the standard search patterns illustrated in Figure \ref{sar_bez}. Hence, with reference to the flowing water the standard model is followed continuously; however, over ground the search paths are modified by the vector of drift, as presented in Figure \ref{sar_z1} and Figure \ref{sar_z2}.
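This relation between the pattern followed "through the water" and the track "over ground" can be made explicit with a short computational sketch. The following minimal Python illustration is ours; the waypoint generator, function names, parameters and units are illustrative assumptions and not the simulator's code.
\begin{verbatim}
import numpy as np

def expanding_square(leg0_nm, n_legs, heading_deg=0.0):
    # Waypoints of a standard expanding square: leg lengths S, S, 2S, 2S, 3S, ...
    th = np.deg2rad(90.0 - heading_deg)          # navigational heading -> math angle
    p, pts = np.zeros(2), [np.zeros(2)]
    for i in range(n_legs):
        p = p + leg0_nm*(i//2 + 1)*np.array([np.cos(th), np.sin(th)])
        pts.append(p.copy())
        th -= np.pi/2                            # 90 degree turn after each leg
    return np.array(pts)

def over_ground(waypoints, speed_kn, drift_deg, set_kn, dt_h=0.01):
    # Passive way: keep the pattern through the water and add the drift vector set*t.
    d = np.deg2rad(90.0 - drift_deg)
    cur = set_kn*np.array([np.cos(d), np.sin(d)])
    t, track = 0.0, [waypoints[0].copy()]
    for A, B in zip(waypoints[:-1], waypoints[1:]):
        leg_t = np.linalg.norm(B - A)/speed_kn
        for s in np.linspace(0.0, leg_t, max(2, int(leg_t/dt_h)))[1:]:
            track.append(A + (B - A)*s/leg_t + cur*(t + s))
        t += leg_t
    return np.array(track)

wp = expanding_square(leg0_nm=1.0, n_legs=10)                    # cf. the first scenario
og = over_ground(wp, speed_kn=10.0, drift_deg=270.0, set_kn=2.0)
\end{verbatim}
Subtracting the drift contribution \texttt{cur*(t + s)} from the generated track recovers the standard pattern, as stated above.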
We proceed by first considering the expanding square and, for simplicity but without loss of the general idea, let us come back to the weak shear vector field \eqref{pole}. Figure \ref{exsq} shows the standard expanding square (blue dotted) of a given starting leg length, which determines the track spacing $\varepsilon^*$ in the whole model, oriented such that the horizontal legs are parallel to the flow. Thus, the fixed waypoints are determined and they represent the consecutive startpoints and endpoints connected by the Randers geodesics (red solid). Since the time of passage along each leg is shorter than along the corresponding straight leg of the standard pattern, the total time along the new modified path is decreased. Obviously, we require that the search is efficient, so the maximal distance $\varepsilon$ between the points of the searched space and the search path ought to be taken into consideration as well. Therefore, in what follows we shall define \textit{the complete search}. So far we aim to show the potential application of the (piecewise) time-optimal path, where it is reasonable, in order to minimize the total time of search without exceeding the required value of $\varepsilon$. For that reason the previously fixed waypoints are now translated such that the new Randers geodesics obtained (green solid) fulfill the condition for the maximal distance between the points of the searched space and the search paths. \begin{figure}
\caption{Sector search modified by the time-optimal legs starting from fixed waypoints determined by the standard pattern under the shear vector field, $\bigtriangleup\varphi_0=\frac{\pi}{18}$.}
\label{sector}
\end{figure}
Next, we combine the Randers geodesics and the sector search under the same weak shear perturbation, which is presented in Figure \ref{sector}. The standard pattern (blue dotted) determines, as previously, the consecutive fixed waypoints from which the color-coded families of optimal connections start. In the example we assume that the diameter of the circular search area equals the width of the river, due to the restriction on strong convexity. If the Randers geodesics connecting the fixed points directly do not fulfill the condition for $\varepsilon$, then we follow the members of the families generated from the intermediate endpoints of the followed legs. In general, the partition depends on $\varepsilon$ and the properties of the time-optimal paths in the given vector field. We proceed by considering the third standard pattern, namely the creeping line, in the presence of the quartic curve perturbation and the Gaussian function perturbation, which is shown in Figure \ref{creep_RB} and Figure \ref{creep_gauss}, respectively. In an analogous way as before, the standard paths (blue dotted) are modified with the use of the time-optimal legs starting from the fixed waypoints determined by the standard pattern in both scenarios.
\begin{figure}
\caption{Creeping line search modified by the time-optimal legs starting from fixed waypoints determined by the standard pattern under the quartic curve perturbation, $\bigtriangleup\varphi_0=\frac{\pi}{18}$.}
\label{creep_RB}
\end{figure} \begin{figure}
\caption{Creeping line modified by the time-optimal legs starting from fixed waypoints determined by the standard pattern under the Gaussian function perturbation, $\bigtriangleup\varphi_0=\frac{\pi}{18}$.}
\label{creep_gauss}
\end{figure} Charts of the color-coded Randers geodesics have been created which cover the background spaces under the acting river-type perturbations. The curves start from the determined fixed points and represent the quickest connections. In this sense they are the solutions to Zermelo's problem in the presence of a weak perturbation in the entire space without singularities. However, in the search problem additional conditions are included, e.g. $\varepsilon$ or a restricted domain, in the context of free or fixed final time problems which belong to optimal control theory. In each scenario the conditions modify the final search model. We aim to make use of the time-optimal legs to decrease the total time of searching the whole space, or to maximize the space which can be searched in a limited fixed time, with the required value of $\varepsilon$. Beside combining the Randers geodesics with the standard search patterns, in the presence of some perturbations it can be reasonable to omit the standard paths completely. This means following non-standard search models configured with the time-optimal legs and rearranged waypoints without reference to the standard patterns. The approach allows the perturbation to influence the geometry of the search models.
\subsection{Generalizations and extensions} \label{gen} The problem requires that the search is efficient, not just time-optimal in the sense of following the time-optimal legs. Thus, when introducing the time-optimal paths into the search problem, the condition for searching the entire space needs to be fulfilled. This means that the points of the searched space should be close enough to at least one path. In the standard models, which we analyzed above, the track spacing $\varepsilon^*$ is fixed as a constant value and determines the entire model and so the total time of search. Let us define the complete search for a subset $\mathcal{D}$ of a metric space $(M, d)$. \begin{definition} \em{Let $\mathcal{D}\subset M$ be a search space, $\Gamma=\underset{i}{\cup}\gamma_i$ a search path, where $\gamma_i$ represents the $i$-th time-optimal leg, and let $\varepsilon\geq0$ be a fixed search parameter. We say that a search is complete if \begin{equation}
\forall \ \ A\in \mathcal{D} \ \ \exists \ \ \tilde{A}\in \Gamma: \ d(A, \tilde{A})\leq \varepsilon. \label{complete_search} \end{equation}} \end{definition} \noindent The definition implies that there are no omitted "zones" left, if the time is free and the ship follows the time-optimal legs. Hence, the Randers geodesic paths complying with the condition \eqref{complete_search} can be applied in the problem. The particular application depends on the initial conditions, the type of perturbation and the preset parameter $\varepsilon$. In comparison to the track spacing parameter $\varepsilon^*$ applied in the real navigational devices and systems we are led to the relation, for example in the case of the creeping line search, $\varepsilon=0.5 \varepsilon^*$. However, recall that the key is to minimize the total time $t_c$ of the complete search. Otherwise, the solution based on the piecewise time-optimal search path $\Gamma$, which can be represented by the solutions to Zermelo's problem, might not constitute the time-optimal solution to the problem of search; $\gamma_i$ only guarantees local optimality in the connections of the intermediate waypoints. To begin with, we restrict the study by excluding topological singularities; in particular, let the search subset $\mathcal{D}$ be compact, connected and convex. The generalized geometric and optimal control problems with $t_c\rightarrow min$, which arise from the above, can be proposed as follows (a minimal computational check of condition \eqref{complete_search} is sketched after the list below) \begin{enumerate}
\item time-optimal complete search of the ellipse-shaped area under weak river-type perturbation (note, in engineering and navigation the planar positions' distributions of graduated probability levels centered at datum are determined by the error and concentration ellipses, in 3D by the triaxial ellipsoids),
\item time-optimal complete search of given 2D area under weak river-type perturbation, arbitrary weak stationary perturbation, strong stationary perturbation, arbitrary time-dependent perturbation,
\item simultaneous time-optimal complete search of given 2D area by $n\geq2$ ships under above mentioned perturbations,
\item 3D and higher dimensions analogues of above mentioned tasks with Euclidean background,
\item analogues of above mentioned tasks with non-Euclidean background, e.g. $\mathbb{S}^n$.
\end{enumerate} \noindent Let us observe that, further, in the extended problem one can combine the notions of differential geometry and optimization together with reliability and probability. We can ask, for instance, about the complete search model with $\varepsilon\geq0$ in which the searched space is covered by $\Gamma$ such that $t_c\rightarrow min$, while the space is graduated with respect to the probability, starting from the datum (maximum) to the boundary (minimum) of the space. This means that we first search completely the subspace of the highest probability level and then expand the search to the subsequent subspaces of lower probability levels in descending order.
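As announced above, condition \eqref{complete_search} can be checked numerically on discretized data. The following Python sketch is our own illustration; the unit-disc search space, the creeping-line path and all parameter values are hypothetical.
\begin{verbatim}
import numpy as np

def is_complete_search(domain_pts, path_pts, eps):
    # Condition (complete_search): every sampled point of D lies within eps of Gamma.
    d = np.linalg.norm(domain_pts[:, None, :] - path_pts[None, :, :], axis=-1)
    return float(d.min(axis=1).max()) <= eps

# Illustrative data: a sampled unit disc D and a creeping-line path Gamma.
xs = np.linspace(-1.0, 1.0, 81)
X, Y = np.meshgrid(xs, xs)
mask = X**2 + Y**2 <= 1.0
disc = np.c_[X[mask], Y[mask]]

spacing = 0.25                                   # track spacing eps* of the pattern
rows = np.arange(-1.0 + spacing/2, 1.0, spacing)
path = np.concatenate([np.c_[np.linspace(-1, 1, 50)[::(-1)**i], np.full(50, y)]
                       for i, y in enumerate(rows)])

# For the creeping line, eps = 0.5*eps* (plus a small discretization allowance).
print(is_complete_search(disc, path, eps=0.5*spacing + 0.05))   # True
\end{verbatim}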
\section{Conclusions}
We have studied the navigation problem by means of Finsler geometry, namely the Randers metric, under the river-type perturbation. In the case of the planar Euclidean background the final system of equations defining the problem can also be formulated and solved with the use of the equations of motion and the classical implicit navigation formula of Zermelo for the optimal control angle. Moreover, in order to optimize the real-time computations, which is the essential point in real applications such as the implementation of the geometric models in the software of the real navigational systems and simulators, the latter approach becomes more efficient and simpler. This is due to the fact that in Finsler geometry the computations of geometric quantities are usually complicated, even in 2D with the Euclidean background metric. Analyses involving Randers spaces are generally difficult and finding solutions to the geodesic equations is not straightforward (cf. \cite{brody, chern_shen}). For instance, this is shown directly in the obtained final system of Randers geodesics' equations \eqref{g1} and \eqref{g2} in the case of the shear wind. In the context of the real applications this is a strong argument, as the computational ability represented by the central processing unit is taken into account in practice, even though each of the mentioned theoretical methods gives the same final solutions represented by the time-optimal paths (cf. Corollary \ref{cor1} \& \ref{cor2}).
The choice of the river-type perturbation allows us to simplify the Randers geodesics' equations without loss of generality and to present the steps of the solution in completed form. However, the Randers metric with the application of Theorem \ref{THM} is the key to solving the Zermelo navigation problem for non-Euclidean Riemannian background metrics in any dimension under arbitrary weak perturbations. Thus, the Finslerian approach, as well as the variational one \cite{palacek} for an arbitrary wind, enables us to investigate and solve more advanced geometric and control problems in which, for instance, the implicit formula of Zermelo cannot be used. The analysis of the problem simplifies if we observe that it suffices to find the locally optimal solution on the tangent space.
The idea behind our scheme is to make use of the time-optimal paths, represented in particular by the Randers geodesics. With the condition \eqref{complete_search} in mind, we aimed to optimize the total time of search in comparison to following the standard fixed search patterns. These are routinely used without taking into consideration the type of the perturbation and the initial conditions. However, these factors influence the geometric properties of the models, the flow of Randers geodesics and thus the resulting speed of the searching object, and finally the total traverse time $t_c$. The standard models are also formally required to be used in the real applications. This also motivated us to revisit the standard patterns and focus on the reasons enforcing them to be followed in the scenarios with acting perturbations. In the presented approach we let the perturbation influence the geometry of the search models, so they differ from the standard ones in the case of an acting vector field. The study also shows that it is not necessarily efficient to orient the search model in each case such that the starting leg is oriented only with or against the acting river-type perturbation, as is routinely assumed. As the main criterion is time under the complete search condition \eqref{complete_search}, combining the standard models with the time-optimal paths, or creating new models based solely on the time-optimal paths in the presence of the perturbation, may give a potential benefit, i.e. higher efficiency of searching. The remodeling which we considered becomes more significant when the ratio $\frac{|W|}{|u|}$ increases. Thus, in the context of some real applications, let us note that the current technology enables implementing the models based on the time-optimal paths, for instance in route planning and monitoring referring to drone aerial survey and patrolling a fixed zone, robotic sailing, underwater slow-speed gliding, and weather routing combined with the numerical weather prediction models.
The application to the search models can also be transferred to non-Euclidean geometric structures. The idea of combining the theory of search with time-optimal paths, represented here by the Randers geodesics, can then be followed and developed. Examples of the extended and generalized theoretical problems arising from our study have been proposed in \S \ref{gen}. Regarding implementations, the applied simplifications admitting only a constant perturbation can be preliminarily optimized by considering the stationary current with the use of time-optimal paths, as presented in the above examples. Note that the complete solution to Zermelo's problem refers to the perturbation given as a function of position and time, so the standard search models may become inefficient with respect to the criterion of time. This fact gives a meaningful opportunity to apply models more advanced than the standard search models, due to the essential time reduction, in the scenarios with acting perturbations.
\
\noindent \textsc{Faculty of Mathematics and Computer Science, Jagiellonian University,\\ 6, Prof. S. Łojasiewicza, 30 - 348, Kraków, Poland}\\ and \\ \textsc{Faculty of Navigation, Gdynia Maritime University,\\ 3, Al. Jana Pawla II, 81-345, Gdynia, Poland}\\
\noindent \textit{E-mail address:} \texttt{[email protected]}
\end{document}
\begin{document}
\monthyear{Month Year} \volnumber{Volume, Number} \setcounter{page}{1}
\title{An Infinite 2-Dimensional Array Associated With Electric Circuits} \author{Emily Evans} \address{Brigham Young University} \email{[email protected]} \author{Russell Jay Hendel} \address{Towson University} \email{[email protected]}
\begin{abstract} Except for Koshy, who devotes seven pages to applications of Fibonacci numbers to electric circuits, most books and the Fibonacci Quarterly have been relatively silent on applications of graphs and electric circuits to Fibonacci numbers. This paper continues a recent trend of papers studying the interplay of graphs, circuits, and Fibonacci numbers by presenting and studying the Circuit Array, an infinite 2-dimensional array whose entries are electric resistances labelling edge values of circuits associated with a family of graphs. The Circuit Array has several features distinguishing it from other more familiar arrays such as the Binomial Array and the Wythoff Array. For example, it can be proven, modulo a strongly supported conjecture, that the numerators of its left-most diagonal do not satisfy any linear, homogeneous recursion with constant coefficients (LHRCC). However, we conjecture, with supporting numerical evidence, an asymptotic formula involving $\pi$ satisfied by the left-most diagonal of the Circuit Array.
\end{abstract}
\maketitle
\section{Electrical Circuits, Linear 2-trees, and Fibonacci Numbers}\label{sec:s1_circuits}
Koshy \cite[pp. 43-49]{Koshy} lists applications of electrical circuits yielding interesting Fibonacci identities. However, aside from this, most books as well as the issues of the Fibonacci Quarterly have been mostly silent on this application.
To begin our review of the recent literature, which has renewed interest in this application, first recall that one modern graph metric, effective resistance, requires that the graph be represented as an electric circuit with edges in the graph represented by resistors. Figure \ref{fig:pawcircuit} illustrates this.
\begin{figure}
\caption{Illustration of a graph and its associated circuit. }
\label{fig:pawcircuit}
\end{figure}
Several papers \cite{Barrett9, Barrett0} have explored effective resistances in electrical circuits whose underlying graphs are so-called linear 2-trees. In addition to showing that these effective resistances are rational functions of Fibonacci numbers, these circuits naturally give rise to interesting and new Fibonacci identities. For example the identities
\begin{equation}\label{eq:fibid1}
\sum_{i = 1}^{m} \frac{F_i F_{i+1}}{L_i L_{i+1}} = \frac{(m+1) L_{m+1} - F_{m+1}}{5 L_{m+1}}, \quad\text{for $m \geq 1$,} \end{equation} and for $k=3, 4, \dots, n-2$, \begin{equation}\label{eq:wayne1a} \sum_{j=3}^k {[(-1)^j F_{n-2j+1}(F_{n}+F_{j-2}F_{n-j-1})]}=-F_{k-2}F_{k+1}F_{n-k-2}F_{n+1-k}. \end{equation}
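Both identities can be checked quickly by machine. The following short Python sketch is a numerical sanity check of ours, not part of the cited proofs; it verifies \eqref{eq:fibid1} and \eqref{eq:wayne1a} over a range of indices, using the usual extension of the Fibonacci numbers to negative indices, $F_{-m}=(-1)^{m+1}F_m$, which is needed in \eqref{eq:wayne1a} when $n-2j+1<0$.
\begin{verbatim}
from fractions import Fraction as Fr
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):                       # F_0 = 0, F_1 = 1, F_{-m} = (-1)^(m+1) F_m
    if n < 0:
        return (-1)**(-n + 1)*fib(-n)
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def luc(n):                       # Lucas numbers, L_n = F_{n-1} + F_{n+1}
    return fib(n - 1) + fib(n + 1)

for m in range(1, 21):            # identity (eq:fibid1)
    lhs = sum(Fr(fib(i)*fib(i + 1), luc(i)*luc(i + 1)) for i in range(1, m + 1))
    assert lhs == Fr((m + 1)*luc(m + 1) - fib(m + 1), 5*luc(m + 1))

for n in range(7, 26):            # identity (eq:wayne1a), k = 3, ..., n-2
    for k in range(3, n - 1):
        lhs = sum((-1)**j*fib(n - 2*j + 1)*(fib(n) + fib(j - 2)*fib(n - j - 1))
                  for j in range(3, k + 1))
        assert lhs == -fib(k - 2)*fib(k + 1)*fib(n - k - 2)*fib(n + 1 - k)

print("identities (eq:fibid1) and (eq:wayne1a) verified")
\end{verbatim}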
To appreciate these recent contributions we provide additional background. Effective resistance, also termed resistance distance in the literature, is a graph metric whose definition was motivated by the consideration of a graph as an electrical circuit. More formally, given a graph, we determine the effective resistance between any two vertices in that graph by assuming that the graph represents an electrical circuit with resistances on each edge. Given any two vertices labeled $i$ and $j$ for convenience assume that one unit of current flows into vertex $i$ and one unit of current flows out of vertex $j$. The potential difference $v_i - v_j$ between nodes $i$ and $j$ needed to maintain this current is the {\it effective resistance} between $i$ and $j$. Figure \ref{fig:pawcircuit} illustrates this.
Recent prior works~\cite{Barrett9, Barrett0, Barrett0b}, study effective resistance in a class of graphs termed {\it linear 2-trees}, also known as 2-paths, which we now define and illustrate. \begin{definition}\label{def:2tree} In graph--theoretic language, a 2-tree is defined inductively as follows \begin{enumerate}
\item $K_3$ is a 2-tree.
\item If $G$ is a 2-tree, the graph obtained by inserting a vertex adjacent to the two vertices of an edge of $G$ is a 2-tree. \end{enumerate}
A linear $2$-tree (or $2$-path) is a $2$-tree in which exactly two vertices have degree $2$. For an illustration of two sample linear 2--trees see Figure~\ref{fig:2tree}. \end{definition}
\begin{figure}
\caption{On the left, a straight linear 2-tree with $n$ vertices. On the right, a linear 2-tree with $n$ vertices and single bend at vertex $k$. }
\label{fig:2tree}
\end{figure}
In \cite{Barrett9} network transformations (identical to those found in Section~\ref{sec:s2_basics}) were used to determine the effective resistance in a linear 2-tree with $n$ vertices; the following results were obtained. \begin{theorem}~\cite[Th. 20]{Barrett9}\label{thm:sl2t} Let $S_n$ be the straight linear 2-tree on $n$ vertices labeled as in the graph on the left in Figure~\ref{fig:2tree}. Then for any two vertices $u$ and $v$ of $S_n$ with $u < v$, \begin{equation} r_{S_n}(u,v)=\frac{\sum_{i=1}^{v-u} (F_i F_{i+2u-2}-F_{i-1} F_{i+2u-3})F_{2n-2i-2u+1}}{F_{2n-2}}. \label{eq:resdiststraightsum} \end{equation} or equivalently in closed form
\begin{multline*}r_{S_n}(u,v) = \frac{F_{m+1}^2+F_{v-u}^2F_{m-2j-v+u+3}^2}{F_{2m+2}}\\ +\frac{F_{m+1}\left[{F_{m-v+u}}((v-u)L_k-F_{v-u})+{F_{m-v+u+1}}\left((v-u-5)F_{v-u+1}+(2v-2u+2)F_{v-u}\right)\right]}{5F_{2m+2}}\end{multline*} \noindent where $F_p$ is the $p$th Fibonacci number and $L_q$ is the $q$th Lucas number.
\end{theorem} \noindent Moreover identity~\ref{eq:fibid1} was shown.
In~\cite{Barrett0} the formulas for a straight linear 2-tree were generalized to a linear 2-tree with any number of bends. See the graph on the right in Figure~\ref{fig:2tree} for an example of a linear 2--tree with a bend at vertex $k$. The following result is the main result from~\cite{Barrett0} and nicely gives the effective resistance between two vertices in a bent linear 2--tree. \begin{theorem}~\cite[Th. 3.1]{Barrett0}\label{cor:main2}
Given a bent linear 2-tree with $n$ vertices, and $p = p_1 + p_2 + p_3$ single bends located at nodes $k_1, k_2, \ldots, k_p$ and $k_1 < k_2 < \cdots < k_{p-1} < k_p$ the effective resistance between vertices $u$ and $v$ is given by
\begin{multline}\label{eq:genericformres}
r_G(u,v)=r_{S_n}(u,v)-\sum_{j=p_1+1}^{p_1+p_2}\Big[F_{k_j-3}F_{k_j}-2\sum_{i=p_1+1}^{j-1}[(-1)^{k_j-k_i+1+j-i}F_{k_i}F_{k_i-3}]+2(-1)^{j+u+k_j}F_{u-1}^2\Big]\cdot\\
\Big[F_{n-k_j+2}F_{n-k_j-1}+2(-1)^{v-k_j}F_{n-v}^2\Big]/F_{2n-2}.
\end{multline}
\end{theorem} \noindent In addition, identity~\ref{eq:wayne1a} was shown.
This paper adds to the growing literature on electrical circuits and recursions by presenting, exploring, and proving results about an infinite array, $C_{i,j}, j \ge 1, 0 \le i \le 2(j-1),$ whose elements are electrical resistances associated with circuits defined on triangular grid graphs.
\section{Some Definitions }\label{sec:s2_basics}
This section gathers and defines some assorted terms used throughout the paper.
\textbf{The (Triangular) $n$-grid.} \cite[Figure 1]{Hendel},\cite[Figure 2]{EvansHendel}. Figure \ref{fig:3grid} illustrates the general (triangular) $n$-grid for $n=3.$ As can be seen, the $n$-grid consists of $n$ rows, with row $i$, $1 \le i \le n,$ containing $i$ upright oriented triangles, arranged in a triangular grid. Triangles are labeled by row, top to bottom, and diagonal, left to right, as shown in Figure \ref{fig:3grid}.
\begin{figure}
\caption{A 3-grid with the upright oriented triangles labeled by row and diagonal.}
\label{fig:3grid}
\end{figure}
\textbf{The all-one $n$-grid.} Throughout this paper the edge labels of a graph correspond to actual resistance values. The \textit{all-one $n$-grid} refers to an $n$-grid all of whose resistance values are uniformly 1.
We use the notation $T_{r,d,e}$ to refer to the edge label of edge $e, e \in \{L,R,B\}$ (standing, respectively, for the left, right, and base edges of a triangle in the upright oriented position), of the triangle in row $r$ diagonal $d.$ Similarly, $T_{r,d}$ will refer to the triangle in row $r$ diagonal $d.$
Throughout the paper both the all--one $n$-grid and the $m$-grids derived from it ($1 \le m \le n-1$) possess vertical and rotational symmetry (when rotated by $\frac{\pi}{3}).$ \cite[Definition 9.1]{Hendel},\cite[Definition 2.11]{EvansHendel}.
This symmetry allows us to avoid presenting results separately for the left, right, and base sides. Typically it will suffice to consider \textit{the upper left half} of a grid, \cite[Definition 9.6]{Hendel},\cite[Definition 2.12]{EvansHendel}, defined as the set of triangles $T_{r,d}$ with $0 \le r \le \lfloor \frac{m+1}{2} \rfloor,$ $1 \le d \le \lfloor\frac{m+2}{2}\rfloor.$
\begin{example}\label{exa:upperlefthalf}
If $n=3,$ (see panel A1 in Figure \ref{fig:5panels}) the upper left half consists of the
triangles $\langle r,d \rangle, d=1, r=1,2.$
\end{example}
The importance of the upper left half lies in the following result, which captures the implications of the symmetry of the $m$-grids \cite[Corollary 9.6]{Hendel},\cite[Lemma 2.14]{EvansHendel}. \begin{lemma} \label{lem:upperlefthalf} For an $m$-grid, once the edge values of the upper left half are known, all edge values in the $m$-grid are fixed. \end{lemma}
\textbf{Corners.} \cite[Equation (29)]{Hendel},\cite[Definition 2.15]{EvansHendel}. Graph--theoretically, a triangle is a corner of an $m$-grid if it has a degree-2 vertex. The 3 corner triangles of an $m$-grid are located at $T_{1,1}, T_{m,1}, T_{m,m}.$ For example, for the 3-grid on Figure \ref{fig:3grid}, the three corners are located at $\langle 1,1 \rangle, \langle 3,1 \rangle, \langle 3,3 \rangle.$
\section{The Three Circuit Transformations}\label{sec:circuitfunctions}
As pointed out in Section \ref{sec:s1_circuits}, every circuit has associated with it an underlying labeled graph whose edge labels are electrical resistances. Therefore, to specify an \textit{equivalent circuit transformation} from an initial parent circuit to a transformed child circuit we must specify the vertex, edge, and label transformations. By an equivalent circuit transformation we mean one that maintains the effective resistance between vertices that appear in both parent and child circuit. There are three basic circuit transformations that we use, each of which preserves effective resistance: \textit{series, $\Delta-Y$, and $Y-\Delta$.} Figure \ref{fig:seriesparallel} illustrates the series transformation.
The following are the key points about this transformation. \begin{itemize}
\item The top parent graph has 3 nodes and 2 edges
\item The transformed child graph below has 2 nodes and one edge
\item There is a formula,\cite[pg. 43]{Koshy} $R_1+R_2$ giving the edge label of the child graph in terms of the edge labels of the parent graph. \end{itemize}
\begin{figure}
\caption{Illustration of the series transformations. See narrative for further details. }
\label{fig:seriesparallel}
\end{figure}
The remaining two circuit transformations are the $\Delta-Y$ transformation which transforms a parent simple 3-edge loop to a claw (3-edge outstar), and the $Y-\Delta$ transformation which takes a claw to a 3-edge loop,\cite[Figure 2]{Hendel}, \cite[Definition 2.4]{EvansHendel}. The relevant transformation functions are \begin{equation}\label{equ:deltay} \Delta(x,y,z) = \frac{xy}{x+y+z}; \qquad Y(x,y,z) = \frac{xy+yz+zx}{x}.\end{equation}
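For concreteness, the two transformation values in \eqref{equ:deltay}, together with the series rule, can be coded directly with exact rational arithmetic. The following Python sketch is our own illustration (function names are ours) and is all that is reused in the later computational sketches.
\begin{verbatim}
from fractions import Fraction as Fr

def delta(x, y, z):               # Delta-Y value from (equ:deltay)
    return Fr(x)*Fr(y)/(Fr(x) + Fr(y) + Fr(z))

def Y(x, y, z):                   # Y-Delta value from (equ:deltay)
    x, y, z = Fr(x), Fr(y), Fr(z)
    return (x*y + y*z + z*x)/x

def series(r1, r2):               # series transformation: resistances add
    return Fr(r1) + Fr(r2)

# e.g. on an all-one triangle: delta(1,1,1) = 1/3, and Y(1/3,1/3,1/3) = 1.
print(delta(1, 1, 1), Y(Fr(1, 3), Fr(1, 3), Fr(1, 3)), series(Fr(1, 3), Fr(1, 3)))
\end{verbatim}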
Following the computations presented in this paper will not require details of these transformations or how the order of arguments relates to the underlying graphs. To follow the computations needed in this paper it suffices to know the four circuit transformation functions presented in Section \ref{sec:proofmethods}.
\section{The Reduction Algorithm}\label{sec:reduction}
This section presents the basic reduction algorithm. This algorithm was first presented in \cite[pg. 18]{Barrett0} where the algorithm was used for purposes of proof but not used computationally, since computations were done using the combinatorial Laplacian. Hendel \cite[Definition 2.3,Figure 3]{Hendel} was the first to use the reduction algorithm computationally. Moreover, \cite[Algorithm 2.8, Figure 3 and Section 4]{EvansHendel} was the first to show that four transformation functions suffice for all computations. These four circuit transformation functions will be presented in Section \ref{sec:proofmethods}; knowledge of them suffices to follow, and be able to reproduce, all computations presented in this paper. The usefulness of this algorithm in uncovering patterns is alluded to in \cite{Hendel, EvansHendel}.
We begin the presentation of the four circuit transformations with some basic illustrations.
The reduction algorithm takes a parent $m$-grid and \textit{reduces} it, by removing one row of triangles, to a child $(m-1)$-grid. Figure \ref{fig:5panels} illustrates the five steps in reducing the 3-grid (Panel A) to a 2-grid (Panel E), \cite[Steps A-E, Figure 3]{Hendel}, \cite[Algorithm 2.8]{EvansHendel}. \begin{itemize}
\item Step 1 - Panel A: Start with a labeled 3-grid
\item Step 2 - Panel B: Apply a $\Delta-Y$ transformation to each upright triangle (a 3-loop) resulting in a grid of 3 rows of 3-stars, as shown.
\item Step 3 - Panel C: Discard the corner tails, edges with a vertex of degree one. This does not affect the resistance labels of edges in the reduced two grid in panel E. (However, these corner tails are useful for computing effective resistance as shown in
\cite{Barrett0,Evans2022}).
\item Step 4 - Panel D: Perform series transformations on all consecutive pairs of boundary edges (i.e., the dashed edges in panel C).
\item Step 5 - Panel E: Apply $Y-\Delta$ transformations to all remaining claws, transforming them into 3-loops. \end{itemize}
\begin{figure}
\caption{Illustration of the reduction algorithm, on a 3-grid. The panel labels correspond to the five steps indicated in the narrative.}
\label{fig:5panels}
\end{figure}
The important point here is that each of the five steps involves specific circuit transformations. However, to follow, and be able to reproduce the computations in this paper, only the four circuit transformation functions presented in the next section are needed. The derivation of these four circuit transformation functions is not needed and has been given in detail in the references cited. An example at the end of this section illustrates what is needed.
In the sequel, we will typically start with an all--one $n$-grid and successively apply the reduction algorithm resulting in a collection of $m$ grids, $1 \le m \le n-1.$ The notation \begin{multline*} T_{r,d,X}^m, X \in \{L,R,B, LR\} \text{ indicates the resistance label of side $X$}\\ \text{in triangle $T_{r,d}$ of the all--one $n$-grid reduced $m$ times}\\ \text{The symbol LR will be used in a context} \text{ when the side depends on the parity of a parameter. } \end{multline*}
Additionally, if we deal with a single reduction we may use the superscripts $p,c$ to distinguish between the parent grid and the child grid when the actual number of reductions used is not important.
\begin{example}\label{exa:suffices} Referring to Figure \ref{fig:5panels}, the function \emph{left} presented in the next section takes the 9 resistance edge-labels of triangles $T_{2,1}^p, T_{2,2}^p,T_{3,2}^p$ in the parent 3-grid in Panel A and computes the resistance edge-value, $T_{2,2,L}^c$ of the child 2-grid in Panel E. Thus the four transformation functions of the next section suffice to verify and reproduce the computations in this paper. \end{example}
\section{The Four Transformation Functions.}\label{sec:proofmethods}
As mentioned in Example \ref{exa:suffices} and the surrounding narrative, this section presents the four circuit transformation functions that suffice to follow and reproduce the computations presented in this paper \cite[Section 4]{EvansHendel}: \begin{itemize}
\item Boundary edges
\item Base (non-boundary) edges
\item Right (non-boundary)edges
\item Left (non-boundary) edges \end{itemize}
We begin our description of the four transformation functions with the base edge case. Illustrations are based on Figure \ref{fig:3grid}. We first illustrate with the base edge of the top corner triangle in Figure \ref{fig:3grid} and then generalize. Note, that the $\Delta$ and $Y$ functions have been defined in \eqref{equ:deltay}.
We have $$ T_{1,1,B}^c =
Y(\Delta(T_{3,2,L}^{p},
T_{3,2,R}^{p},
T_{3,2,B}^{p}),
\Delta(T_{2,1,R}^{p},
T_{2,1,B}^{p},
T_{2,1,L}^{p}),
\Delta(T_{2,2,B}^{p},
T_{2,2,L}^{p},
T_{2,2,R}^{p})). $$ This is a function of 9 variables. At times it becomes convenient to emphasize the triangles involved. We will use the following notation to indicate the dependency on triangles. $$ T_{1,1,B}^c =F(T_{3,2}^{p},
T_{2,1}^{p},
T_{2,2}^{p}), $$ which is interpreted as saying \textit{the base edge of $T_{1,1}^c$ is some function ($F$) of the edge-labels of the triangles $T_{3,2}^p,T_{2,1}^p,T_{2,2}^p.$} Clearly, this notation is mnemonical and cannot be used computationally. It is however very useful in proofs as will be seen later.
The previous two equations can be generalized to an arbitrary $m$-grid and arbitrary row and diagonal (with minor constraints, $r+2 \le n, d+1 \le r,$ on the row and diagonal). We have \begin{multline*} T_{r,d,B}^c =
Y(\Delta(T_{r+2,d+1,L}^{p},
T_{r+2,d+1,R}^{p},
T_{r+2,d+1,B}^{p}),
\Delta(T_{r+1,d,R}^{p},
T_{r+1,d,B}^{p},
T_{r+1,d,L}^{p}),\\
\Delta(T_{r+1,d+1,B}^{p},
T_{r+1,d+1,L}^{p},
T_{r+1,d+1,R}^{p})). \end{multline*} and $$ T_{r,d,B}^c =F(T_{r+2,d+1}^{p},
T_{r+1,d}^{p},
T_{r+1,d+1}^{p}). $$
We next list the remaining three transformation functions.
For $r+2 \le n, d+1 \le r,$ for a boundary left edge we have
\begin{equation*} T_{r,1,L}^c =
\Delta(T_{r,1,B}^{p},
T_{r,1,L}^{p},
T_{r,1,R}^{p})+
\Delta(T_{r+1,1,L}^{p},
T_{r+1,1,R}^{p},
T_{r+1,1,B}^{p}) , \end{equation*} and $$ T_{r,1,L}^c =F(T_{r,1}^{p},
T_{r+1,1}^{p}). $$
For $r+2 \le n, d+1 \le r,$ for non boundary left edges we have,
\begin{multline}\label{equ:leftside9proofs} T_{r,d,L}^c =
Y(\Delta(T_{r,d-1,R}^{p},
T_{r,d-1,B}^{p},
T_{r,d-1,L}^{p}),
\Delta(T_{r,d,B}^{p},
T_{r,d,L}^{p},
T_{r,d,R}^{p}),\\
\Delta(T_{r+1,d,L}^{p},
T_{r+1,d,R}^{p},
T_{r+1,d,B}^{p}))
\end{multline} and \begin{equation}\label{equ:leftside3proofs} T_{r,d,L}^c =F(T_{r,d-1}^{p},
T_{r,d}^{p},
T_{r+1,d}^{p}). \end{equation}
For $r+1 \le n, 2 \le d \le r-1,$ for the right sides, \begin{multline*} T_{r,d,R}^c =
Y(\Delta(T_{r,d,B}^{p},
T_{r,d,L}^{p},
T_{r,d,R}^{p}),
\Delta(T_{r+1,d,L}^{p},
T_{r+1,d,R}^{p},
T_{r+1,d,B}^{p}),\\
\Delta(T_{r,d-1,R}^{p},
T_{r,d-1,B}^{p},
T_{r,d-1,L}^{p})) \end{multline*} and $$ T_{r,d,R}^c =F(T_{r,d}^{p},
T_{r+1,d}^{p},
T_{r,d-1}^{p}). $$
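The four transformation functions can be packaged as short routines. The sketch below is an illustration under our own naming conventions (triangles are given as $(L,R,B)$ triples of edge labels); it reproduces values used later, namely the boundary label $\frac{2}{3}$ and $T_{3,2,L}^2=\frac{26}{27}$.
\begin{verbatim}
from fractions import Fraction as Fr
delta = lambda x, y, z: Fr(x)*Fr(y)/(Fr(x) + Fr(y) + Fr(z))     # as in (equ:deltay)
Y = lambda x, y, z: (Fr(x)*Fr(y) + Fr(y)*Fr(z) + Fr(z)*Fr(x))/Fr(x)

# Each triangle is an (L, R, B) triple of edge labels in the parent grid.
def boundary_left(Tr1, Tr11):            # T_{r,1,L}^c
    return delta(Tr1[2], Tr1[0], Tr1[1]) + delta(Tr11[0], Tr11[1], Tr11[2])

def base(Tr2d1, Tr1d, Tr1d1):            # T_{r,d,B}^c
    return Y(delta(*Tr2d1), delta(Tr1d[1], Tr1d[2], Tr1d[0]),
             delta(Tr1d1[2], Tr1d1[0], Tr1d1[1]))

def left(Trd_1, Trd, Tr1d):              # T_{r,d,L}^c (non-boundary)
    return Y(delta(Trd_1[1], Trd_1[2], Trd_1[0]),
             delta(Trd[2], Trd[0], Trd[1]), delta(*Tr1d))

def right(Trd, Tr1d, Trd_1):             # T_{r,d,R}^c (non-boundary)
    return Y(delta(Trd[2], Trd[0], Trd[1]), delta(*Tr1d),
             delta(Trd_1[1], Trd_1[2], Trd_1[0]))

one = (Fr(1), Fr(1), Fr(1))              # an all-one triangle
print(boundary_left(one, one))           # 2/3, cf. the one-reduction lemma below
print(left((Fr(2, 3), Fr(1), Fr(1)), one, one))  # 26/27 = T_{3,2,L}^2, cf. (equ:t322627)
\end{verbatim}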
\begin{comment} Notice that we only defined the boundary function for the left boundary ($d=1$). Similarly, notice that for example the base edge function requires $r \le n-2.$ This is not a restriction. For by Lemma \ref{lem:upperlefthalf}, once the upper left half is calculated, the remaining edge values follow by symmetry considerations. Thus the above functions with their restrictions do indeed suffice. \end{comment}
\section{Computational Examples}\label{sec:Appendix_A}
The four transformation functions of Section \ref{sec:proofmethods} with up to 9 arguments may appear computationally challenging. The purpose of this section is to illustrate their computational use. Additionally, the results computed will be used both to motivate and prove the main theorem.
\subsection{One Reduction of an all--one $n$--grid}
An all--one $n$-grid by definition has uniform labels of 1. Hence, we may calculate the edge resistance values in the $(n-1)$-grid arising from one reduction of the all--one $n$-grid as follows: \begin{itemize}
\item $T_{r,1,L}=\Delta(1,1,1)+\Delta(1,1,1)=\frac{2}{3}, 1 \le r \le n-1.$
\item The preceding bullet computes resistance labels for the left boundary. By Lemma \ref{lem:upperlefthalf}, and by symmetry considerations the same computed value holds on the other two grid boundary edges: $T_{r,r,R} = T_{n-1,r,B} = \frac{2}{3}, 1 \le r \le n-1.$
\item All other edge values are 1, since
the computation $Y(\Delta(1,1,1), \Delta(1,1,1), \Delta(1,1,1))=1$ applies to $T_{r,d,X}, X \in \{L,R,B\}$
\item Again, cases not covered by the four transformation functions are covered by symmetry considerations and Lemma \ref{lem:upperlefthalf}. For example the formula for $T_{r,d,B}$ is only valid for $r \le n-2,$ and therefore both the symmetry considerations and the lemma are needed. \end{itemize}
We may summarize our results in a lemma, see also, \cite[Corollary 5.1]{Hendel},\cite[Lemma 6.1]{EvansHendel}.
\begin{lemma}\label{lem:1reduction} The resistance labels of the $n-1$ grid arising from one reduction of an all--one $n$ grid are as follows: \begin{enumerate}
\item Boundary resistance labels are uniformly $\frac{2}{3}$.
\item Interior resistance labels are uniformly 1. \end{enumerate} \end{lemma}
The top corner triangle, $T_{1,1}$ of the $n-1$ grid is presented in Panel A of Figure \ref{fig:motivationillustration}.
\subsection{Uniform Central Regions} Prior to continuing with the computations we introduce the concept of the uniform center which will be used in the proof of the main theorem.
First, we can identify a triangle with the ordered list, Left, Right, Base, of its resistance labels. Two triangles are then equivalent if their edge labels are equal. By Lemma \ref{lem:1reduction} for the once reduced $n-1$ grid we have $$
T_{r,1}=\left(\frac{2}{3},1,1\right) \text{ and }
T_{r,r} = \left(1,\frac{2}{3},1\right) \text{ for }
2 \le r \le n-1. $$ Although $T_{r,1}$ and $T_{r,r}$ are not strictly equivalent we will say they are equivalent up to symmetry since each triangle may be derived from the other by a vertical symmetry, \cite[Definition 5.8]{EvansHendel}.
Using these concepts of triangle equivalence and triangle equality up to symmetry, we note that the central region, $2 \le r \le n-1$ of diagonal 1 of the reduced $n-1$ grid is uniform, that is all triangles are equal. We also note that the interior of the reduced $n-1$ grid is uniform.
This presence of uniformity generalizes. The formal statement of the uniform center \cite[Theorem 6.2]{EvansHendel} is as follows:
\begin{theorem}[Uniform Center]\label{the:uniformcenter} For any $s \ge 1,$ let $n \ge 4s,$ and $1 \le d \le s:$
\begin{enumerate} \item For
\begin{equation}\label{equ:uniformcenter}
s+ d \le r \le m-2s
\end{equation}
the triangles $T_{r,d}^s$ are all equal. \item For \begin{equation}\label{equ:uniformcenter2} 2s-1 \le r \le m-2s \end{equation} the left sides $T_{r,s,L}^s$ are all equal, $T_{2s-1,s,R}^s=T_{2s-1,s,L}^s,$ $T_{r,s,R}^s=1, 2s \le r \le m-2s,$ and $T_{r,s,B}^s=1, 2s-1 \le r \le m-2s-1.$ \item For any triangle in the uniform center, that is, satisfying \eqref{equ:uniformcenter}, $ T_{r,d,R}=T_{r,d,B}.$ \end{enumerate} \end{theorem}
This theorem has an elegant graphical interpretation. It states that the sub triangular grid whose corner triangles are $T_{2s-1,s}^s, T_{m-2s,m}^s, T_{m-2s, m-2s}^s$ has interior labels of 1 and a single uniform label along its edge boundary. However, this interpretation is not needed in the sequel.
\subsection{Two Reductions of an all--one $n$--grid}
We continue illustrating computations by considering the $n-2$ grid arising from 2 reductions of the all--one $n$ grid (or one reduction of the $n-1$ grid.)
By \eqref{equ:leftside3proofs} $$
T_{3,2,L}^2 = F(T_{3,1}^1, T_{3,2}^1, T_{4,2}^1). $$ By Lemma \ref{lem:1reduction}, we have $$
T_{3,1}^1=(\frac{2}{3},1,1),
T^1_{3,2}=T^1_{4,2} =(1,1,1). $$ Hence, by \eqref{equ:leftside9proofs} \begin{equation}\label{equ:t322627} T_{3,2,L}^2=Y(\Delta(1,1,\frac{2}{3}),
\Delta(1,1,1), \Delta(1,1,1))=
\frac{26}{27}, \end{equation} as shown in Panel B of Figure \ref{fig:motivationillustration}.
To continue with the computations we define the function, \begin{equation}\label{equ:g0}
G_0(X)=Y(\Delta(1,1,X),
\Delta(1,1,1), \Delta(1,1,1))=
\frac{X+8}{9}, \end{equation} and confirm $G_0(\frac{2}{3}) = \frac{26}{27}.$
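The algebraic simplification in \eqref{equ:g0} is easily confirmed with a computer algebra system; a minimal sympy check of ours follows (the symbol and function names are illustrative).
\begin{verbatim}
import sympy as sp

X = sp.symbols('X', positive=True)
one = sp.Integer(1)
Delta = lambda x, y, z: x*y/(x + y + z)          # as in (equ:deltay)
Yfun  = lambda x, y, z: (x*y + y*z + z*x)/x

G0 = sp.simplify(Yfun(Delta(one, one, X), Delta(one, one, one), Delta(one, one, one)))
print(G0)                                        # X/9 + 8/9, i.e. (X + 8)/9
print(G0.subs(X, sp.Rational(2, 3)))             # 26/27
\end{verbatim}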
\subsection{Three Reductions of an all--one $n$-grid.} We next compute $T_{5,3,L}^3.$ Continuing as in the case of $T_{3,2,L}^2,$ we have by \eqref{equ:leftside3proofs} \begin{equation}\label{equ:temp1} T_{5,3,L}^3=F(T_{5,2}^2, T_{5,3}^2, T_{6,3}^2). \end{equation} By Theorem \ref{the:uniformcenter}(b) $$
T_{5,2,L}^2 = T_{3,2,L}^2, $$ and by \eqref{equ:t322627} $$
T_{3,2,L}^2 = \frac{26}{27}, $$ implying $$
T_{5,2,L}^2 = \frac{26}{27}. $$ Again, by Theorem \ref{the:uniformcenter} all resistance labels of $T_{5,3}^2, T_{6,3}^2$ are 1. Plugging this into \eqref{equ:temp1} and using \eqref{equ:leftside9proofs} and \eqref{equ:g0}, we have $$ T_{5,3,L}^3 = Y\left(\Delta\left(\frac{26}{27},1,1\right), \Delta(1,1,1), \Delta(1,1,1)\right)=G_0\left(\frac{26}{27}\right)=\frac{242}{243} $$ Panel C of Figure \ref{fig:motivationillustration} illustrates this.
We can continue this process inductively. For example, $T^4_{7,4} = G_0(\frac{242}{243}).$ The result is summarized as follows.
\begin{lemma} \label{lem:row0} With $G_0(X)$ defined by \eqref{equ:g0} we have $T_{1,1,L}^1=\frac{2}{3}$ and, for $s \ge 2,$ $T_{2s-1,s,L}^s=G_{0}(T_{2s-3,s-1,L}^{s-1}).$ \end{lemma}
An almost identical argument using the circuit transformations for the right edge shows the following.
\begin{lemma} \label{lem:row1}
Let $G_1(X)=\frac{1}{3} \frac{X+8}{X+2}.$ Then for $k \ge 0,$ $T_{3+2k,2+k,R}^{2+k}=G_1(T^{1+k}_{1+2k,1+k,L}).$ \end{lemma}
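The two lemmas can be checked against the values computed above with exact arithmetic; the following short sketch is ours.
\begin{verbatim}
from fractions import Fraction as Fr

G0 = lambda X: (X + 8)/9                       # Lemma (row 0)
G1 = lambda X: Fr(1, 3)*(X + 8)/(X + 2)        # Lemma (row 1)

x = Fr(2, 3)                                   # T_{1,1,L}^1
print(G0(x), G0(G0(x)))                        # 26/27, 242/243
print(G1(x), G1(G0(x)))                        # 13/12, 121/120
\end{verbatim}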
\section{Motivation for the Circuit Array}\label{sec:motivation}
This section motivates the underlying construction of the Circuit Array. We initialize with an all--one $n$-grid for $n$ large enough. As computed in Section \ref{sec:Appendix_A}, we have: \begin{itemize}
\item
$T_{1,1,L}^1=\frac{2}{3}=1-\frac{3}{9^1}.$
See Panel A of Figure \ref{fig:motivationillustration}.
\item $T_{3,2,L}^2=\frac{26}{27}=1-\frac{3}{9^2}.$ See Panel B of Figure \ref{fig:motivationillustration}.
\item $T_{5,3,L}^3=\frac{242}{243}=1-\frac{3}{9^3}.$ See Panel C of Figure \ref{fig:motivationillustration}. \end{itemize}
The resulting sequence $$
\frac{2}{3}, \frac{26}{27}, \frac{242}{243}, \dotsc $$ satisfies \begin{equation}\label{equ:row0}
1 -\frac{3}{9^s}, s=1,2,3, \dotsc. \end{equation} In this particular case the denominators, $9^s$ form a linear homogeneous recursion with constant coefficients (LHRCC) of order 1, $$
G_s = 9 G_{s-1}, s \ge 1, \qquad G_0=1. $$ Similarly, the numerators satisfy the LRCC, $$
G_s =9G_{s-1}+24, s \ge 1, \qquad G_0=-2. $$ (And therefore, since a sequence satisfying a linear, non--homogeneous recursion with constant coefficients (LRCC) also satisfies an LHRCC, albeit of higher order, the sequence of numerators also satisfies an LHRCC.)
The sequence just studied forms row 0 of the Circuit Array, Table \ref{tab:circuitarray}. To determine row 1 of the Circuit Array we compute the following: \begin{itemize}
\item The right side of the triangle left-adjacent to the top corner triangle of the 2-rim of reduction 2 has label
$\frac{13}{12} = 1+\frac{2}{3} \frac{1}{9^{2-1}-1},$ as shown in Panel B of Figure \ref{fig:motivationillustration}.
\item The right side of the triangle left-adjacent to the top corner triangle of the 3-rim of reduction 3 has label
$\frac{121}{120} = 1+\frac{2}{3} \frac{1}{9^{3-1}-1},$ as shown in Panel C of Figure \ref{fig:motivationillustration}.
\item The right side of the triangle left-adjacent to the top corner triangle of the 4-rim of reduction 4 has label
$\frac{1093}{1092} = 1+\frac{2}{3} \frac{1}{9^{4-1}-1}.$ \end{itemize}
The resulting sequence $$
\frac{13}{12}, \frac{121}{120}, \frac{1093}{1092}, \frac{9841}{9840}, \dotsc $$ satisfies $1+\frac{2}{3}\,\frac{1}{9^{s-1}-1}, s=2,3,4,\dotsc.$ We again see the presence of LRCCs. The sequence of twice the denominators satisfies the LRCC $$
G_{s+1} = 9G_s +24, s \ge 2, \qquad G_2 =24, $$ while the sequence of twice the numerators satisfies the LRCC $$
G_{s+1} = 9G_s +8, s \ge 2, \qquad G_2 =26, $$ and hence both the numerators and the denominators satisfy LHRCCs, albeit of higher order.
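These observations are easy to confirm. The following sketch of ours generates the row 1 values from the closed form above and checks both order-1 recursions for the doubled numerators and denominators.
\begin{verbatim}
from fractions import Fraction as Fr

row1 = [1 + Fr(2, 3)/(9**(s - 1) - 1) for s in range(2, 9)]
print(row1[:4])                                # 13/12, 121/120, 1093/1092, 9841/9840

num = [2*r.numerator for r in row1]            # twice the numerators
den = [2*r.denominator for r in row1]          # twice the denominators
assert all(num[i + 1] == 9*num[i] + 8 for i in range(len(num) - 1))
assert all(den[i + 1] == 9*den[i] + 24 for i in range(len(den) - 1))
\end{verbatim}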
These calculations determine the construction of the Circuit Array by rows. Figure~\ref{fig:motivationillustration} can be used to motivate a construction by columns. Each perspective provides different sequences. \begin{itemize}
\item As shown in Panel A, Column 1 consists of the singleton $\frac{2}{3}.$ We may describe the process of generating this column by starting at the left resistance edge label of triangle $T_{1,1},$ where 1 corresponds to the number of underlying reductions of the all--one $n$-grid, traversing to the left (in this case there is nothing further to traverse) and ending at the left--most edge of the underlying row. This singleton $\frac{2}{3}$ is column 1 of the Circuit Array, Table \ref{tab:circuitarray}.
\item As shown in Panel B, Column 2 may be obtained as follows: Start at the left resistance label of triangle $T_{3,2},$ where the number of reductions of the all--one $n$-grid for this column is $2, \text{ and } 3 = 2 \times 2 -1$. This resistance is $\frac{26}{27}.$ Then traverse to the left, and end at the left--most edge of the underlying row. By recording the labels during this traversal we obtain $\frac{26}{27},\frac{13}{12}, \frac{1}{2},$ which is column 2 of the Circuit Array, Table \ref{tab:circuitarray}, starting at row 0 and ending at row 2.
\item As shown in Panel C, Column 3 may be obtained as follows: Start at the left resistance label of triangle $T_{5,3},$ where the number of reductions of the all--one $n$-grid for this column is $3, \text{ and } 5 = 2 \times 3 -1.$ This resistance is
$\frac{242}{243}.$ Then traverse to the left, and end at the left--most edge of the underlying row. By recording the labels during this traversal we obtain $\frac{242}{243},\frac{121}{120}, \frac{89}{100},\frac{1157}{960},\frac{13}{32},$ which is column 3 of the Circuit Array, Table \ref{tab:circuitarray}, starting at row 0 and ending at row 4.
\item The above suggests, in general, that column $c \ge 1$ of the Circuit Array will consist of the resistance labels of the left and right sides of the triangles $T_{2c-1,i}^c, i=c, c-1, \dotsc, 1.$ This will be formalized in the next section. \end{itemize}
\begin{figure}
\caption{Graphical illustration showing locations of various edge resistance values computed in this section. See the narrative for further details. }
\label{fig:motivationillustration}
\end{figure}
\section{The Circuit Array}
This section formally defines the Circuit Array, whose $i$-th row, $0 \le i \le 2(j-1),$ and $j$-th column, $j \ge 1,$ contains $T_{2j-1,j-\lfloor \frac{i+1}{2}\rfloor,LR}^j$ where, as indicated at the end of Section \ref{sec:reduction}, the symbol $LR$ means $L$ (respectively $R$) if $i$ is even (respectively odd).
\begin{example} This example repeats the derivation of the first three columns given at the end of Section \ref{sec:motivation}.
Referring to Figure \ref{fig:motivationillustration}, we see Panel A contains row 0, column 1 of the array containing $T_{2j-1,j-i}^j=T_{1,1}^1 = \frac{2}{3}.$
Panel B contains column 2 rows 0,1,2, which respectively contain $T_{3,2,L}^2=\frac{26}{27}, T_{3,2,R}^2=\frac{13}{12}, T_{3,1,L}^2=\frac{1}{2}.$
Panel C contains column 3 rows 0,1,2,3,4, which respectively contain $T_{5,3,L}^3=\frac{242}{243}, T_{5,2,R}^3=\frac{121}{120}, T_{5,2,L}^3=\frac{89}{100}, T_{5,1,R}^3=\frac{1157}{960}, T_{5,1,L}^3=\frac{13}{32}. $
\end{example}
Table \ref{tab:circuitarrayformal} presents the first few rows and columns of the formal entries of the Circuit Array, while Table \ref{tab:circuitarray} presents the first few rows and columns of the numerical values of the Circuit Array.
\begin{center} \begin{table}
\begin{small} \caption {First few rows and columns of the formal entries of the Circuit Array. } \label{tab:circuitarrayformal} { \renewcommand{\arraystretch}{1.3} \begin{center}
\begin{tabular}{||c||r|r|r|r|r|r|r|r||} \hline \hline
\;&$1$&$2$&$3$&$4$&$5$&$6$&$7$&$\dotsc$\\ \hline $0$&$T_{1,1,L}^1$&$T_{3,2,L}^2$&$T_{5,3,L}^3$&$T_{7,4,L}^4$&$T_{9,5,L}^5$&$T_{11,6,L}^6$&$T_{13,7,L}^7$&$\dotsc$\\ $1$&\;&$T_{3,2,R}^2$&$T_{5,2,R}^3$&$T_{7,3,R}^4$&$T_{9,4,R}^5$&$T_{11,5,R}^6$&$T_{13,6,R}^7$&$\dotsc$\\ $2$&\;&$T_{3,1,L}^2$&$T_{5,2,L}^3$&$T_{7,3,L}^4$&$T_{9,4,L}^5$&$T_{11,5,L}^6$&$T_{13,6,L}^7$&$\dotsc$\\ $3$&\;&\;&$T_{5,1,R}^3$&$T_{7,2,R}^4$&$T_{9,3,R}^5$&$T_{11,4,R}^6$&$T_{13,5,R}^7$&$\dotsc$\\ $4$&\;&\;&$T_{5,1,L}^3$&$T_{7,2,L}^4$&$T_{9,3,L}^5$&$T_{11,4,L}^6$&$T_{13,5,L}^7$&$\dotsc$\\ $5$&\;&\;&\;&$T_{7,1,R}^4$&$T_{9,2,R}^5$&$T_{11,3,R}^6$&$T_{13,4,R}^7$&$\dotsc$\\ $6$&\;&\;&\;&$T_{7,1,L}^4$&$T_{9,2,L}^5$&$T_{11,3,L}^6$&$T_{13,4,L}^7$&$\dotsc$\\ $7$&\;&\;&\;&\;&$T_{9,1,R}^5$&$T_{11,2,R}^6$&$T_{13,3,R}^7$&$\dotsc$\\ $8$&\;&\;&\;&\;&$T_{9,1,L}^5$&$T_{11,2,L}^6$&$T_{13,3,L}^7$&$\dotsc$\\ $9$&\;&\;&\;&\;&\;&$T_{11,1,R}^6$&$T_{13,2,R}^7$&$\dotsc$\\ $10$&\;&\;&\;&\;&\;&$T_{11,1,L}^6$&$T_{13,2,L}^7$&$\dotsc$\\ $11$&\;&\;&\;&\;&\;&\;&$T_{13,1,R}^7$&$\dotsc$\\ $12$&\;&\;&\;&\;&\;&\;&$T_{13,1,L}^7$&$\dotsc$\\ $13$&\;&\;&\;&\;&\;&\;&\;&$\ddots$\\ $14$&\;&\;&\;&\;&\;&\;&\;&$\ddots$\\
\hline \hline \end{tabular} \end{center} }
\end{small} \end{table} \end{center}
\begin{center} \begin{table} \begin{large} \caption{First few rows and columns of the numerical values of the Circuit Array.} \label{tab:circuitarray} { \renewcommand{\arraystretch}{1.3} \begin{center}
\begin{tabular}{||c||r|r|r|r|r|r|r||} \hline \hline
\;&$1$&$2$&$3$&$4$&$5$&$6$&$\dotsc$\\ \hline \hline $0$&$\frac{2}{3}$&$\frac{26}{27}$&$\frac{242}{243}$&$\frac{2186}{2187}$&$\frac{19682}{19683}$&$\frac{177146}{177147}$&$\dotsc$\\ $1$&\;&$\frac{13}{12}$&$\frac{121}{120}$&$\frac{1093}{1092}$&$\frac{9841}{9840}$&$\frac{88573}{88572}$&$\dotsc$\\ $2$&\;&$\frac{1}{2}$&$\frac{89}{100}$&$\frac{16243}{16562}$&$\frac{335209}{336200}$&$\frac{108912805}{108958322}$&$\dotsc$\\ $3$&\;&\;&$\frac{1157}{960}$&$\frac{1965403}{1904448}$&$\frac{366383437}{364552320}$&$\frac{ 1071810914005}{1071023961216}$&$\dotsc$\\ $4$&\;&\;&$\frac{13}{32}$&$\frac{305041}{380192}$&$\frac{1303624379}{1372554304}$&$\frac{9044690242835}{9138722473024}$&$\dotsc$\\ $5$&\;&\;&\;&$\frac{224369}{167424}$&$\frac{19373074829}{18067568640}$&$\frac{308084703953915}{303469074613248}$&$\dotsc$\\ $6$&\;&\;&\;&$\frac{89}{256}$&$\frac{296645909}{412902400}$&$\frac{31631261501245}{34990560891392}$&$\dotsc$\\ $7$&\;&\;&\;&\;&$\frac{46041023}{31211520}$&$\frac{112546800611915}{99980909002752}$&$\dotsc$\\ $8$&\;&\;&\;&\;&$\frac{2521}{8192}$&$\frac{320676092095}{495976128512}$&$\dotsc$\\ $9$&\;&\;&\;&\;&\;&$\frac{4910281495}{3059613696}$&$\dotsc$\\ $10$&\;&\;&\;&\;&\;&$\frac{18263}{65536}$&$\dotsc$\\ $\dotsc$&\;&\;&\;&\;&\;&\;&$\dotsc$\\
\hline \hline \end{tabular} \end{center} }
\end{large}
\end{table} \end{center}
The \textit{leftmost} diagonal of the circuit array is defined by \begin{equation}\label{equ:leftside} L_s = T_{2s-1,1,L}^s, s=1,2,3,\dotsc \end{equation}
\section{The Main Theorem}\label{sec:main}
The main theorem asserts that the Circuit Array is a recursive array: along any fixed row, the entries are a uniform function of entries in previous rows and columns. We have already introduced the row-0 function $G_0$ \eqref{equ:g0} (see Lemma \ref{lem:row0}) and the row-1 function $G_1$ (see Lemma \ref{lem:row1}).
\begin{theorem} For each even $e \ge 0$ there exists a rational function $G_{e}$ such that for all $k \ge 0$ \begin{equation}\label{equ:maineven} T_{e+3+2k,2+k,L}^{\frac{e}{2}+2+k} = G_{e}(T_{1+2k,1+k,L}^{1+k}, T_{3+2k,1+k,L}^{2+k}, \dotsc, T_{e+1+2k,1+k,L}^{\frac{e}{2}+1+k}). \end{equation} Similarly, for each odd $o = e+1$ there exists a rational function $G_{o}$ such that for all $k \ge 0$ \begin{equation}\label{equ:mainodd} T_{e+3+2k,1+k,R}^{\frac{e}{2}+2+k} = G_{o}(T_{1+2k,1+k,L}^{1+k}, T_{3+2k,1+k,L}^{2+k}, \dotsc, T_{e+1+2k,1+k,L}^{\frac{e}{2}+1+k}). \end{equation} \end{theorem}
Proof of the main theorem is deferred to Sections \ref{sec:s13_proofbasecase} and \ref{sec:proofmain}. Illustrative examples of these recursions are provided in the next section.
\section{Illustrations of the Main Theorem}\label{sec:examples}
\begin{example} For $i=0$ (rows 0 and 1) we have, by Lemma \ref{lem:row0}, $$ G_0(X)=\frac{X+8}{9}. \text{ Hence, } C_{0,2}= \frac{26}{27}=G_0(C_{0,1})= G_0\left(\frac{2}{3}\right), \text{ and } C_{0,3}= \frac{242}{243}=G_0(C_{0,2})= G_0\left(\frac{26}{27}\right). $$ Similarly, we have by Lemma \ref{lem:row1}, $$ G_1(X)=\frac{1}{3} \frac{X+8}{X+2}. \text{ Hence, } C_{1,2}= \frac{13}{12}=G_1(C_{0,1})= G_1\left(\frac{2}{3}\right), \text{ and } C_{1,3}= \frac{121}{120}=G_1(C_{0,2})= G_1\left(\frac{26}{27}\right). $$ \end{example}
\begin{example}\label{exa:row2} For $i=1$ (rows 2 and 3) we have $$ G_2(X,Y)=\frac{9Y(X+2)^2+8(X+8)^2}{(X+26)^2}.$$ Hence, $$C_{2,3}= \frac{89}{100}=G_2(C_{0,1},C_{2,2})= G_2\left(\frac{2}{3},\frac{1}{2}\right), \text{ and }$$ $$C_{2,4}= \frac{16243}{16562}=G_2(C_{0,2},C_{2,3})= G_2\left(\frac{26}{27},\frac{89}{100}\right). $$ Similarly, we have $$ G_3(X,Y)=\frac{9Y(X+2)^2 (X+8)+8(X+8)^3} {9Y(X+2)^2(X+26)+6(X+2)(X+8)(X+26)}.$$ Hence, $$C_{3,3}= \frac{1157}{960}=G_3(C_{0,1},C_{2,2})= G_3\left(\frac{2}{3},\frac{1}{2}\right), \text{ and }$$ $$C_{3,4}= \frac{1965403}{1904448}=G_3(C_{0,2},C_{2,3})= G_3\left(\frac{26}{27},\frac{89}{100}\right). $$ \end{example}
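The recursions above can be verified with exact rational arithmetic. The following sketch (Python, purely illustrative; the variable names are ours) reproduces the row 0--3 entries listed in the two examples from Table \ref{tab:circuitarray}.
\begin{verbatim}
from fractions import Fraction as F

G0 = lambda X: (X + 8) / 9
G1 = lambda X: F(1, 3) * (X + 8) / (X + 2)
G2 = lambda X, Y: (9*Y*(X + 2)**2 + 8*(X + 8)**2) / (X + 26)**2
G3 = lambda X, Y: ((9*Y*(X + 2)**2*(X + 8) + 8*(X + 8)**3)
                   / (9*Y*(X + 2)**2*(X + 26) + 6*(X + 2)*(X + 8)*(X + 26)))

C01, C02 = F(2, 3), F(26, 27)     # row 0, columns 1 and 2
C22, C23 = F(1, 2), F(89, 100)    # row 2, columns 2 and 3

assert G0(C01) == F(26, 27) and G0(C02) == F(242, 243)                       # row 0
assert G1(C01) == F(13, 12) and G1(C02) == F(121, 120)                       # row 1
assert G2(C01, C22) == F(89, 100) and G2(C02, C23) == F(16243, 16562)        # row 2
assert G3(C01, C22) == F(1157, 960) and G3(C02, C23) == F(1965403, 1904448)  # row 3
print("recursions G_0, ..., G_3 reproduce rows 0-3 of the table")
\end{verbatim}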
\begin{example} For $i=2,$ we have $G_4(X,Y,Z)=\frac{N(X,Y,Z)}{D(X,Y,Z)},$ with \begin{equation*} N(X,Y,Z) = \begin{cases} 512(X+2)^0(X+8)^5(X+80)Y^0+\\ 1152(X+2)^2(X+8)^3(X+80)Y^1+ \\ 648(X+2)^4(X+8)^1(X+80)Y^2+ \\ 36(X+2)^2(X+8)^2(X+80)^2 Y^0 Z+ \\ 108(X+2)^3(X+8)^1(X+80)^2Y^1 Z+ \\ 81(X+2)^4(X+8)^0(X+80)^2Y^2Z, \end{cases} \end{equation*} and \begin{equation*} D(X,Y,Z)= \begin{cases}
676(X+2)^0(X+8)^2Q(X)^2Y^0 +\\ 1404(X+2)^2(X+8)^2Q(X)^1Y^1+ \\ 729(X+2)^4(X+8)^2Q(X)^0Y^2, \end{cases} \end{equation*} with, $Q(X)=13X^2+298X+2848.$
These polynomials are formatted so as to display certain underlying patterns, the statement and proof of which will be the subject of another paper.
One then has $C_{4,4}=\frac{305041}{380192}= G_4(C_{0,1}, C_{2,2}, C_{4,3})= G_4\left(\frac{2}{3}, \frac{1}{2}, \frac{13}{32}\right).$ \end{example}
\section{Alternate Approaches to the Main Theorem}
The main theorem formulates the recursiveness of the Circuit Array in terms of recursions by rows, with the number of arguments of these recursions growing by row. There are other approaches to formulating the main theorem, explored in the next few sections. \begin{itemize} \item Section \ref{sec:closed} explores a formulation of the main theorem in terms of closed formulae similar to those found in Section \ref{sec:motivation}. \item Section \ref{sec:closed} also explores a formulation of the main theorem in terms of a single variable rather than multiple variables. \item Section \ref{sec:determinant} explores determining an LHRCC for the numerators of the leftmost diagonal and strongly conjectures its impossibility. This contrasts with other 2-dimensional arrays whose diagonals do satisfy LHRCCs. \item Section \ref{sec:product} explores asymptotic approximations to the leftmost diagonal. \end{itemize}
\section{Recursions vs. Closed Formulae}\label{sec:closed}
This section explores a closed-formula approach to the main theorem. We begin with a review.
We have already seen (Lemmas \ref{lem:row0} and \ref{lem:row1}) that row 0 of the circuit array has a simple closed form, $$
C_{0,s} = 1 - \frac{3}{9^s} \qquad s \ge 1; $$ and similarly, row 1 also has a simple closed form, $$
C_{1,s} = 1+\frac{2}{3} \frac{1}{9^{s-1}-1}. $$
This naturally motivated seeking a formulation of the entire array in terms of closed formulae. However, this approach quickly becomes excessively cumbersome. For example, consider row 2. With the aid of \cite[A163102,A191008]{OEIS}, we found the following closed form for this row: \begin{multline*} \textbf{Define } n=2(s-2), \qquad d=\frac{1}{2}\biggl( 3^{s-1}-1 \biggr)\\ N=\frac{1}{4} \biggl(n\cdot3^{n+1}\biggr)+ \frac{1}{16} \biggl(5\cdot3^{n+1}+(-1)^n\biggr), \qquad D=\frac{1}{2} d^2 (d+1)^2 \end{multline*} then $$ C_{2,s} = 1 - \frac{N}{D}.$$
However, this formula is much more complicated than the formula presented in Section \ref{sec:examples}, $$ C_{2,s} = G_2(X,Y)=\frac{9Y(X+2)^2+8(X+8)^2}{(X+26)^2}, \qquad \text{ with } X=C_{0,s-2}, Y= C_{2,s-1}. $$
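As a sanity check, the closed form for row 2 can be compared against Table \ref{tab:circuitarray} by exact arithmetic; the following sketch (Python, purely illustrative) does so for $s=2,\dots,5$.
\begin{verbatim}
from fractions import Fraction as F

def C2_closed(s):
    """Closed form for row 2 (see the display above): C_{2,s} = 1 - N/D."""
    n = 2 * (s - 2)
    d = F(3**(s - 1) - 1, 2)
    N = F(n * 3**(n + 1), 4) + F(5 * 3**(n + 1) + (-1)**n, 16)
    D = F(1, 2) * d**2 * (d + 1)**2
    return 1 - N / D

row2 = {2: F(1, 2), 3: F(89, 100), 4: F(16243, 16562), 5: F(335209, 336200)}
assert all(C2_closed(s) == v for s, v in row2.items())
print("closed form for row 2 agrees with the table for s = 2, ..., 5")
\end{verbatim}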
We present one more attempt at a closed formula which also failed, namely that of using a single variable. We begin by re-labeling $\frac{2}{3}$ as $1-\frac{3}{x}$ in the first reduction of an all--one $n$-grid (see Panel A in Figure \ref{fig:motivationillustration}). If we then continue the reductions, all labels are rational functions of this single variable $x$, so that upon substituting $x=9$ we recover the desired edge resistance labels.
As before, the resulting formulas are highly complex. We present below these closed formulas for the leftmost diagonal, $L_s$; they are derived by ``plugging in'' to the four basic transformation functions of Section \ref{sec:proofmethods}, as we did in Section \ref{sec:motivation}. (A numerical check of these formulas appears after the list.)
\begin{itemize}
\item
\[ 1-\frac{3}{x}=\frac{x-3}{x} \qquad \text{gives $L_1=\frac{2}{3}$ when $x=9$}\]
\item \[\frac{2}{3}\frac{x-3}{x-1} \qquad \text{gives $L_2=\frac{1}{2}$ when $x=9$}\]
\item \[\frac{(x-3)(3x-1)}{6(x-1)^2} \qquad \text{gives $L_3=\frac{13}{32}$ when $x=9$}\]
\item \[\frac{(x-3)(3(x-1)(x-3) + 4(3x-1)^2)}{96(x-1)^3} \qquad \text{gives $L_4=\frac{89}{256}$ when $x=9$}\]
\item \[\frac{(x-3)(3(x-1)(x-3)(34x-18) + 16(3x-1)^3)}{1536(x-1)^4}, \qquad \text{gives $L_5$ when $x=9$} \]
\item \[\frac{(x-3)(3(x-1)(x-3)(793x^2-874x+273) + 64(3x-1)^4)}{24576(x-1)^5}, \qquad \text{gives $L_6$ when $x=9$} \]
\item \[\frac{(x-3) (6(x - 1)(x - 3)(7895x^3 - 13549x^2 + 8693x - 2015)+4^4(3x-1)^5)}{393216 (x-1)^6}, \qquad \text{gives $L_7$ when $x=9$}. \]
\end{itemize}
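The formulas in the above list can be checked by substituting $x=9$ with exact rational arithmetic; the following sketch (Python, purely illustrative) confirms $L_1,\dotsc,L_6$ against the values recorded in Table \ref{tab:circuitarray}.
\begin{verbatim}
from fractions import Fraction as F

x = F(9)   # substituting x = 9 recovers the resistance labels

formulas = [
    (x - 3) / x,                                                         # L_1
    F(2, 3) * (x - 3) / (x - 1),                                         # L_2
    (x - 3) * (3*x - 1) / (6 * (x - 1)**2),                              # L_3
    (x - 3) * (3*(x - 1)*(x - 3) + 4*(3*x - 1)**2) / (96 * (x - 1)**3),  # L_4
    (x - 3) * (3*(x - 1)*(x - 3)*(34*x - 18) + 16*(3*x - 1)**3)
        / (1536 * (x - 1)**4),                                           # L_5
    (x - 3) * (3*(x - 1)*(x - 3)*(793*x**2 - 874*x + 273) + 64*(3*x - 1)**4)
        / (24576 * (x - 1)**5),                                          # L_6
]
expected = [F(2, 3), F(1, 2), F(13, 32), F(89, 256), F(2521, 8192), F(18263, 65536)]
assert formulas == expected
print("the single-variable formulas reproduce L_1, ..., L_6 at x = 9")
\end{verbatim}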
There are interesting patterns in the above results, and they may yield future results. One example of an interesting pattern is found in the constants appearing in the denominators: for $s \ge 3$ the denominator constant in the formula yielding $L_s$ upon substitution of $x=9$ equals $3\times2^{4(s-3)+1}.$ We do not, however, pursue this further in this paper.
To sum up, because of the greater complexity, as well as the lack of completely describable patterns in the closed formulae, we abandoned this approach in favor of a recursive approach in several variables.
\section{Impossibility of a recursive sequence for the leftmost diagonal}\label{sec:determinant}
It is natural, when studying sequences of fractions, to study their numerators and denominators separately. We have seen that for the row sequences $C_{0,s}$ and $C_{1,s}$ such an approach uncovers LHRCCs. It therefore comes as a surprise to have a result stating the impossibility of an LHRCC.
To present this impossibility result, we first briefly review a technique for discovering LHRCCs. Suppose we have an integer sequence $G_1, G_2, \dotsc$ Suppose further we believe this sequence satisfies a second-order recursion, say $$
G_n = x\, G_{n-2}+ y\, G_{n-1}. $$ As $n$ varies, this equation generates an infinite number of linear equations in $x$ and $y.$ In other words, to investigate the possible recursiveness of this sequence we can solve the following system for any $m$ and then use the solution to test further values, \begin{center} $\begin{bmatrix} G_m & G_{m+1} \\ G_{m+1} & G_{m+2} \end{bmatrix}$ $ \begin{bmatrix} x \\ y \end{bmatrix}$ $=$ $ \begin{bmatrix} G_{m+2} \\ G_{m+3} \end{bmatrix}.$ \end{center} Solving this system by Cramer's rule naturally motivates considering the determinant $$ \begin{vmatrix} G_m & G_{m+1} \\ G_{m+1} & G_{m+2} \end{vmatrix} $$ for any integer $m.$ While these determinants may be non-zero, the order-3 determinants, $$ \begin{vmatrix} G_m & G_{m+1} & G_{m+2}
\\ G_{m+1} & G_{m+2} & G_{m+3} \\
G_{m+2} & G_{m+3} & G_{m+4}
\end{vmatrix}, $$
must be zero because of the dependency captured by the LHRCC.
These remarks generalize to $r$-th order recursions for integer $r \ge 2,$ and explain why, in the search for recursions, it is natural to consider such determinants. It follows that if for every $r \ge 1$ there is some $m$ for which the following order-$(r+1)$ determinant is non-zero, $$ \begin{vmatrix} G_m & G_{m+1} & \dotsc & G_{m+r}
\\
G_{m+1} & G_{m+2} & \dotsc & G_{m+r+1} \\
\vdots & \vdots & \ddots & \vdots \\
G_{m+r} & G_{m+r+1} & \dotsc & G_{m+2r}
\end{vmatrix}, $$ then it is impossible for the sequence $\{G_n\}$ to satisfy an LHRCC of any order.
The following conjecture, verified for several dozen initial values of $j,$ shows a remarkable and unexpected simplicity in the values of these determinants.
\begin{conjecture} Let $T(j)= \frac{j(j+1)}{2}$ denote the $j$-th triangular number. Using \eqref{equ:leftside}, define ${n'}_s$ by $L_s = \frac{n_s}{d_s}=\frac{n'_s}{2^{4s-7}}$, where $n_s$ and $d_s$ are relatively prime. For any $j \ge 2$ we have \begin{center} $$\begin{vmatrix} n'_2 & n'_3 & \cdots& n'_{1+j}\\ n'_3 & n'_4 & \cdots &n'_{2+j}\\ \vdots & \vdots & \ddots &\vdots\\ n'_{1+j}& n'_{2+j} & \cdots & n'_{2j}\end{vmatrix} =9^{T(j-1)}.$$ \end{center} \end{conjecture}
\begin{corollary} Under the conditions stated in the conjecture, it is impossible for the sequence $\{n'_s\}_{s \ge 2}$ to satisfy an LHRCC of any order.
\end{corollary}
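The first two instances of the conjecture ($j=2,3$) can be checked directly from the values of $L_s$ available in Table \ref{tab:circuitarray}; the following sketch (Python, purely illustrative) computes the corresponding $j\times j$ Hankel determinants.
\begin{verbatim}
from fractions import Fraction

# Leftmost-diagonal values L_s read off the numerical Circuit Array table;
# n'_s is defined by L_s = n'_s / 2^(4s-7).
L = {2: Fraction(1, 2), 3: Fraction(13, 32), 4: Fraction(89, 256),
     5: Fraction(2521, 8192), 6: Fraction(18263, 65536)}
nprime = {s: int(L[s] * 2**(4 * s - 7)) for s in L}

def det(a):
    """Determinant by Laplace expansion (fine for these tiny matrices)."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1)**c * a[0][c] * det([row[:c] + row[c + 1:] for row in a[1:]])
               for c in range(len(a)))

def hankel(j):
    """The j x j matrix with rows n'_2, ..., n'_{1+j} as in the conjecture."""
    return [[nprime[2 + r + c] for c in range(j)] for r in range(j)]

for j in (2, 3):
    T = j * (j - 1) // 2          # T(j-1), the (j-1)-st triangular number
    assert det(hankel(j)) == 9**T
    print(j, det(hankel(j)), 9**T)
\end{verbatim}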
\begin{comment} It is tempting to suggest that the numerators satisfy no LHRCC because they are growing too fast. But that is not true. We know that $L_s <1,$ \cite[Corollary 7.2]{EvansHendel} and that the denominators form a geometric sequence. It follows that the numerators are bounded by a geometric sequence. In terms of growth rate, there is no reason why the sequence shouldn't be able to satisfy an LHRCC.
\end{comment}
\section{An Asymptotic Approach}\label{sec:product}
Prior to presenting the proof of the main theorem, we explore one more approach in this section. By way of motivation recall that several infinite arrays have asymptotic formulas associated with them. For example, the central binomial coefficients have asymptotic formulas arising from Stirling's formula.
For purposes of expositional smoothness, we focus on the leftmost diagonal, $L_s,$ \eqref{equ:leftside}.
Hendel \cite{Hendel} introduced the idea of finding explicit formulas for edge values in terms of products of factors. After numerical experimentation, the following approximation was found: \begin{equation}\label{equ:A}
L_s \asymp A_s = \frac{2}{3} \displaystyle \prod_{i=2}^s \Bigl(1 - \frac{1}{2i-1}\Bigr), \end{equation} with $A$ standing for approximation. Tables \ref{tab:leftcenter5rows} and \ref{tab:leftcenter80rows} provide numerical evidence for this approximation. The key takeaways from both tables are that both the differences $L_s - A_s$ and the ratios $\frac{L_s}{A_s}$ are monotone decreasing for $s \ge 3.$
\begin{center} \begin{table}
\begin{small} \caption{Numerical evidence for conjectures about $L_s,$ first five rows. Notice that after $s=2$ all difference and ratio columns are monotone decreasing.} \label{tab:leftcenter5rows} { \renewcommand{\arraystretch}{1.3} \begin{center}
\begin{tabular}{||c||c|c|c|c||c|c|c||c|c||} \hline \hline
$s$&$L_s$&$A_s$&$L_s-A_s$&$\frac{L_s}{A_s}$&$P_s$&$A_s-P_s$&$\frac{A_s}{P_s}$&$L_s-P_s$&$\frac{L_s}{P_s}$\\ \hline $1$&$0.6667$&$0.6667$&$0$&$1$&$0.5908$&$0.0758$&$1.1284$&$0.0758$&$1.1284$\\ $2$&$0.5$&$0.4444$&$0.0556$&$1.125$&$0.4178$&$0.0267$&$1.0638$&$0.0822$&$1.1968$\\ $3$&$0.4063$&$0.3556$&$0.0507$&$1.1426$&$0.3411$&$0.0144$&$1.0424$&$0.0651$&$1.191$\\ $4$&$0.3477$&$0.3048$&$0.0429$&$1.1407$&$0.2954$&$0.0094$&$1.0317$&$0.0522$&$1.1769$\\ $5$&$0.3077$&$0.2709$&$0.0368$&$1.136$&$0.2642$&$0.0067$&$1.0253$&$0.0435$&$1.1647$\\
\hline \hline \end{tabular} \end{center} }
\end{small} \end{table} \end{center}
\begin{center} \begin{table}
\begin{small} \caption{Numerical evidence for conjectures about $L_s,$ first 80 rows. Observe that except for a few initial values the difference and ratio columns are monotone decreasing.} \label{tab:leftcenter80rows} { \renewcommand{\arraystretch}{1.3} \begin{center}
\begin{tabular}{||c||c|c|c|c||c|c|c||c|c||} \hline \hline
$s$&$L_s$&$A_s$&$L_s-A_s$&$L_s/A_s$&$P_s$&$A_s-P_s$&$A_s/P_s$&$L_s-P_s$&$L_s/P_s$\\ \hline $8$&$0.2387$&$0.2122$&$0.0265$&$1.125$&$0.2089$&$0.0033$&$1.0157$&$0.0298$&$1.1427$\\ $16$&$0.1658$&$0.1489$&$0.017$&$1.1141$&$0.1477$&$0.0012$&$1.0078$&$0.0181$&$1.1228$\\ $24$&$0.1346$&$0.1212$&$0.0134$&$1.1103$&$0.1206$&$0.0006$&$1.0052$&$0.014$&$1.1161$\\ $32$&$0.1162$&$0.1049$&$0.0114$&$1.1084$&$0.1044$&$0.0004$&$1.0039$&$0.0118$&$1.1127$\\ $40$&$0.1038$&$0.0937$&$0.0101$&$1.1072$&$0.0934$&$0.0003$&$1.0031$&$0.0103$&$1.1107$\\ $48$&$0.0946$&$0.0855$&$0.0091$&$1.1065$&$0.0853$&$0.0002$&$1.0026$&$0.0093$&$1.1094$\\ $56$&$0.0875$&$0.0791$&$0.0084$&$1.1059$&$0.079$&$0.0002$&$1.0022$&$0.0086$&$1.1084$\\ $64$&$0.0818$&$0.074$&$0.0078$&$1.1055$&$0.0739$&$0.0001$&$1.002$&$0.008$&$1.1077$\\ $72$&$0.0771$&$0.0697$&$0.0073$&$1.1052$&$0.0696$&$0.0001$&$1.0017$&$0.0075$&$1.1071$\\ $80$&$0.0731$&$0.0662$&$0.0069$&$1.105$&$0.0661$&$0.0001$&$1.0016$&$0.007$&$1.1067$\\
\hline \hline \end{tabular} \end{center} }
\end{small} \end{table} \end{center}
The $P$ columns in these tables (which also provide good approximations as measured by differences and ratios) correspond to the following further approximation \begin{equation}\label{equ:P}
A_s \asymp P_s = \sqrt{\frac{\pi}{9s}}, \end{equation} with $P$ standing for the approximation of $A_s$ involving $\pi$.
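Both approximations are straightforward to tabulate; the following sketch (Python, purely illustrative) reproduces the $A_s$ and $P_s$ columns of Tables \ref{tab:leftcenter5rows} and \ref{tab:leftcenter80rows}.
\begin{verbatim}
import math

def A(s):
    """Product approximation: A_s = (2/3) * prod_{i=2}^{s} (1 - 1/(2i-1))."""
    val = 2.0 / 3.0
    for i in range(2, s + 1):
        val *= 1.0 - 1.0 / (2 * i - 1)
    return val

def P(s):
    """Stirling-based approximation: P_s = sqrt(pi / (9 s))."""
    return math.sqrt(math.pi / (9 * s))

for s in (1, 2, 3, 4, 5, 8, 16, 80):
    print(s, round(A(s), 4), round(P(s), 4), round(A(s) / P(s), 6))
\end{verbatim}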
Equation \eqref{equ:P} is naturally derived from \eqref{equ:A} using Stirling's formula. The next lemma contains a formal statement of the result.
\begin{lemma} $$
A_s \asymp P_s. $$ \end{lemma} \begin{proof} By \eqref{equ:A} we have $$
A_s = \frac{2}{3} \Biggl( \frac{2}{3}\, \frac{4}{5} \dotsm \frac{2s-2}{2s-1} \Biggr) . $$ Applying the identity $(2s-1)! = \Bigl(2 \cdot 4 \dotsm (2s-2) \Bigr) \Bigl( 1 \cdot 3 \cdot 5 \dotsm (2s-1)\Bigr)$ to the last equation, we have
$$
A_s = \frac{2}{3} \frac{\Biggl( 2^{s-1} (s-1)! \Biggr)^2} {(2s-1)!}. $$ Of the many forms of Stirling's formula, we can simplify the last equation by applying the standard approximation (see for example~\cite{Wolfram}) $n! \asymp \Biggl( \frac{n}{e} \Biggr)^n \sqrt{2\pi n} $, yielding
$$
A_s \asymp \frac{2}{3} \frac{4^s}{4} \Biggl( \frac{s-1}{e} \Biggr)^{2(s-1)} \biggl(2\pi(s-1) \biggr) \Biggl(\frac{e}{2s-1}\Biggr)^{2s-1} \frac{1}{\sqrt{2 \pi (2s-1)}}. $$
By gathering constants, cancelling the powers of $e,$ and using the fact that $c_1 s -c_2 \asymp s$ for constants $c_1, c_2,$ we can simplify this last equation to $$
A_s \asymp \frac{e}{6} 4^s \sqrt{s} \sqrt{\pi} (s-1)^{2s-2} \left(\frac{1}{2s-1}\right)^{2s-1}. $$ Further simplification is obtained from the standard limits that produce powers of $e$: $$
(s-1)^{2s-2} = \Biggl( \frac{s-1}{s} \Biggr)^{2s} s^{2s} \frac{1}{(s-1)^2} \asymp e^{-2} s^{2s} \frac{1}{s^2},$$ $$
\frac{1}{(2s-1)^{2s-1}} = \Biggl(\frac{2s}{2s-1}\Biggr)^{2s-1} \frac{1}{(2s)^{2s-1}} \asymp e \frac{1}{4^s} \frac{1}{s^{2s}}2s. $$ Combining these last 3 equations, cancelling powers of $e$ and $4,$ and using the fact that $c_1 s + c_2 \asymp s,$ we obtain $$
A_s \asymp \sqrt{\pi} \frac{1}{6} 2s \frac{1}{s^2} \sqrt{s} = \frac{\sqrt{\pi}}{3 \sqrt{s}} = \sqrt{\frac{\pi}{9s}} = P_s, $$ as required. \end{proof}
\section{Base Case of the Inductive Proof}\label{sec:s13_proofbasecase}
The proof of the main theorem is by induction on the row index, parametrized by whether the row is even or odd, as shown in equations \eqref{equ:maineven}-\eqref{equ:mainodd}. The base case requires proofs for rows 0,1,2,3.
Throughout the proof we confine ourselves to the even rows, the proof for the odd rows being highly similar and hence omitted. The proof for row 0 has already been completed and is summarized in Lemma \ref{lem:row0}. Recall that this proof was based on the equations describing a non-boundary left edge (\eqref{equ:leftside3proofs} and \eqref{equ:leftside9proofs}) as well as on Theorem \ref{the:uniformcenter}. Proofs for rows 0 and 1 can be found in Lemmas \ref{lem:row0} and \ref{lem:row1}. Proofs in this and the next section are accomplished similarly, by applying the appropriate transformation functions found in Section~\ref{sec:proofmethods} as well as the Uniform Center Theorem, Theorem \ref{the:uniformcenter}.
In this section we show \eqref{equ:maineven} for the case $e=2.$ We accomplish this by first proving \eqref{equ:maineven} when $k=0$ and then proving it for $k>0.$ This separation into two cases is purely for expositional clarity, since the proof could be carried out directly for arbitrary $k$.
\noindent\textsc{Case 1: Proof of Equation \eqref{equ:maineven} for $e=2,k=0.$}
We must show \begin{equation}\label{equ:tempbasecase1}
T_{5,2,L}^3 = G_2(T_{1,1,L}^1, T_{3,1,L}^2), \end{equation} for some rational function $G_2.$
By the formula for non-boundary left edges, \eqref{equ:leftside3proofs}, we know \begin{equation}\label{equ:tempbasecase2}
T_{5,2,L}^3 = F(T_{5,1}^2,T_{5,2}^2, T_{6,2}^2). \end{equation}
Proceeding as in the proof of Lemma \ref{lem:row0} we have as follows: \begin{itemize}
\item By Theorem \ref{the:uniformcenter}(b) the six edges of triangles $T_{5,2}^2, T_{6,2}^2$ are identically one.
\item By \eqref{equ:uniformcenter}, the uniform center for the first diagonal in the twice reduced $n$-grid begins on row $s+d=2+1=3.$
Therefore, the argument $T_{5,1}^2$ in \eqref{equ:tempbasecase2} may be replaced by the identically labeled triangle $T_{3,1}^2.$
\item Triangle $T_{3,1}^2$ has three sides, $T_{3,1,L}^2, T_{3,2,R}^2, T_{3,2,B}^2.$
\item But by Lemma \ref{lem:row1},
$T_{3,2,R}^2= G_0(T_{1,1,L}^1),$ and by Theorem \ref{the:uniformcenter}(c),
$T_{3,2,B}^2=T_{3,2,R}^2.$ \end{itemize}
Applying the above to \eqref{equ:tempbasecase2} and plugging into \eqref{equ:leftside9proofs} we have $$ T_{5,2,L}^3 = Y(\Delta(T_{3,1,L}^2,G_0(T_{1,1,L}^1), G_0(T_{1,1,L}^1)), \Delta(1,1,1),\Delta(1,1,1)) =G_2(T_{1,1,L}^1, T_{3,1,L}^2), $$ which has the required form of \eqref{equ:tempbasecase1}, as desired. This completes the proof of \eqref{equ:maineven} for the case $e=2,k=0.$
\noindent\textsc{Case 2: Proof of Equation \eqref{equ:maineven} for $e=2,k>0.$}
Proceeding exactly as we did in the case $k=0$ (we write $K$ for the positive value of the parameter $k$), we have by \eqref{equ:leftside3proofs}, \begin{equation} \label{equ:tempbasecase3}
T_{5+2K,2+K,L}^{3+K} = F(T_{5+2K,1+K}^{2+K},T_{5+2K,2+K}^{2+K}, T_{6+2K,2+K}^{2+K}). \end{equation}
Continuing as in the case $k=0$ we have: \begin{itemize}
\item By Theorem \ref{the:uniformcenter}(b) the six edges of triangles $T_{5+2K,2+K}^{2+K}, T_{6+2K,2+K}^{2+K}$ are identically one.
\item By \eqref{equ:uniformcenter}, the uniform center for the $(1+K)$-th diagonal of the all--one $n$-grid reduced $2+K$ times begins on row $s+d=(2+K)+(1+K)=3+2K.$
Therefore, the argument $T_{5+2K,1+K}^{2+K}$ in \eqref{equ:tempbasecase3} may be replaced by the identically labeled triangle $T_{3+2K,1+K}^{2+K}.$
\item Triangle $T_{3+2K,1+K}^{2+K}$ has three sides, $T_{3+2K,1+K,L}^{2+K}, T_{3+2K,1+K,R}^{2+K}, T_{3+2K,2+K,B}^{2+K}.$
\item But by Lemma \ref{lem:row1},
$T_{3+2K,2+K,R}^{2+K}= G_0(T_{1+2K,1+K,L}^{1+K}),$ and by Theorem \ref{the:uniformcenter}(c),
$T_{3+2K,2+K,B}^{2+K}=T_{3+2K,2+K,R}^{2+K}.$ \end{itemize}
Applying the above to \eqref{equ:tempbasecase3} and plugging into the edge version of the equation, \eqref{equ:leftside9proofs}, we have \begin{multline*} T_{5+2K,2+K,L}^{3+K} = Y(\Delta(T_{3+2K,1+K,L}^{2+K},G_0(T_{1+2K,1+K,L}^{1+K}), G_0(T_{1+2K,1+K,L}^{1+K})), \Delta(1,1,1),\Delta(1,1,1))\\ =G_2(T_{1+2K,1+K,L}^{1+K}, T_{3+2K,1+K,L}^{2+K}), \end{multline*} which has the form required by \eqref{equ:maineven} for $e=2,$ as was to be shown. This completes the proof for the second case and hence completes the proof of the base case $e=2.$
\section{Proof of the Main Theorem}\label{sec:proofmain}
This section completes the inductive proof of the main theorem by showing equations \eqref{equ:maineven} and \eqref{equ:mainodd}, the base case of which was completed in the prior section. Accordingly, throughout this section we fix an even number $E,$ corresponding to row $E$ of the Circuit Array, such that \begin{equation}\label{equ:ebigger4}
E \ge 4. \end{equation}
We will utilize the following lemma, whose proof follows directly from the definition of the Circuit Array (compare Table \ref{tab:circuitarrayformal}).
\begin{lemma}\label{lem:isamember} Triangle $T^a_{b,c,LR}$ belongs to row $d$ column $e$ of the Circuit array if $b=2a-1,$ $a=e$ and either i) $LR=L, a=c, d=0,$ ii) $LR=L, d=2(a-c)$, or iii) $LR=R, d=2(a-c)-1>0.$ \end{lemma}
As induction hypothesis we assume that \eqref{equ:maineven} and \eqref{equ:mainodd} hold for all $e < E$ and proceed to prove these equations for the case $E.$ We confine ourselves to the proof for even rows (i.e., Equation \eqref{equ:maineven}), the proof for odd rows being similar and hence omitted. The proof proceeds in a manner similar to the proofs presented in the prior section.
First, by \eqref{equ:leftside3proofs} we have \begin{equation}\label{equ:tempfirst}
T^{\frac{E}{2}+2+k}_{E+3+2k, 2+k, L}=
F(T^{\frac{E}{2}+1+k}_{E+3+2k, 1+k},
T^{\frac{E}{2}+1+k}_{E+3+2k, 2+k},
T^{\frac{E}{2}+1+k}_{E+4+2k, 2+k}) \end{equation}
Second, utilizing assumption \eqref{equ:ebigger4} and part (a) of the Uniform Center Theorem, in the three triangle arguments on the right-hand side of \eqref{equ:tempfirst} the row indices $E+3+2k$ and $E+4+2k$ can be replaced with $E+1+2k$, since the replacement row already lies in the uniform center: $$ E+1+2k \ \ge\ s+d = \Bigl(\frac{E}{2}+1+k\Bigr)+(1+k), \quad \text{ and similarly } \quad E+1+2k \ \ge\ s+d = \Bigl(\frac{E}{2}+1+k\Bigr)+(2+k),$$ both inequalities holding by \eqref{equ:ebigger4}; this implies \begin{equation}\label{equ:tempsecond}
T^{\frac{E}{2}+2+k}_{E+3+2k, 2+k, L}=
F(T^{\frac{E}{2}+1+k}_{E+1+2k, 1+k},
T^{\frac{E}{2}+1+k}_{E+1+2k, 2+k},
T^{\frac{E}{2}+1+k}_{E+1+2k, 2+k}). \end{equation}
Third, expanding \eqref{equ:leftside3proofs} to the full nine-variable function \eqref{equ:leftside9proofs} and using Theorem \ref{the:uniformcenter}(c), we have that \eqref{equ:tempsecond} expands to \begin{multline*}
T^{\frac{E}{2}+2+k}_{E+3+2k, 2+k, L}=
Y(\Delta(T^{\frac{E}{2}+1+k}_{E+1+2k,1+k,L},
T^{\frac{E}{2}+1+k}_{E+1+2k,1+k,R},
T^{\frac{E}{2}+1+k}_{E+1+2k,1+k,R}),\\
\Delta(T^{\frac{E}{2}+1+k}_{E+1+2k,2+k,L},
T^{\frac{E}{2}+1+k}_{E+1+2k,2+k,R},
T^{\frac{E}{2}+1+k}_{E+1+2k,2+k,R}),\\
\Delta(T^{\frac{E}{2}+1+k}_{E+1+2k,2+k,L},
T^{\frac{E}{2}+1+k}_{E+1+2k,2+k,R},
T^{\frac{E}{2}+1+k}_{E+1+2k,2+k,R})). \end{multline*}
Fourth, examining, via Lemma \ref{lem:isamember}, the rows to which the arguments of this last equation belong, we have \begin{itemize}
\item $T^{\frac{E}{2}+1+k}_{E+1+2k, 1+k, L}$ in row $E$,
\item $T^{\frac{E}{2}+1+k}_{E+1+2k, 1+k, R}$ in row $E-1$,
\item $T^{\frac{E}{2}+1+k}_{E+1+2k, 2+k, L}$ in row $E-2$, and
\item $T^{\frac{E}{2}+1+k}_{E+1+2k, 2+k, R}$ in row $E-3$. \end{itemize}
The proof is completed by applying the induction hypothesis to rows $E-1, E-2, E-3.$ More specifically, we must show that the right-hand side is a function of the arguments listed in \eqref{equ:maineven}. The first argument is the row-$E$ entry and is itself among those arguments, while the induction hypothesis assures us that the remaining arguments, lying in rows $E-1,$ $E-2$ and $E-3,$ are functions of the corresponding entries of previous rows, $e < E.$ This completes the proof.
\end{document} |
\begin{document}
\begin{abstract}
We give a new approach to the failure of the Canonical Base Property
(CBP) in the so far only known counterexample, produced by Hrushovski,
Palacin and Pillay. For this purpose, we will give an alternative
presentation of the counterexample as an additive cover of an
algebraically closed field. We isolate two fundamental weakenings of
the CBP, which already appeared in work of Chatzidakis, and show that
they do not hold in the counterexample. In order to do so, a study of
imaginaries in additive covers is developed, for elimination of finite
imaginaries yields a connection to the CBP. As a by-product of the
presentation, we notice that no pure Galois-theoretic account of the
CBP can be provided. \end{abstract}
\title{Additive covers and the Canonical Base Property}
\section{Introduction} Internality is a fundamental notion in geometric model theory for understanding a complete stable theory of finite Lascar rank in terms of its building blocks, its minimal types of rank one. A type $p$ is internal, resp. almost internal, to the family $\mathbb{P}$ of all non-locally modular minimal types if there exists a set of parameters $C$ such that every realization $a$ of $p$ is definable, resp. algebraic, over $C, e$, where $e$ is a tuple of realizations of types (each one based over $C$) in $\mathbb{P}$.
Motivated by results of Campana \cite{fC80} on algebraic coreductions, Pillay and Ziegler \cite{PZ03} showed that in the finite rank part of the theory of differentially closed fields in characteristic zero, the type of the canonical base of a stationary type over a realization is almost internal to the constants. With this result, Pillay and Ziegler reproved the function field case of the Mordell-Lang conjecture in characteristic zero following Hrushovski's original proof but with considerable simplifications.
The above phenomenon is captured in the notion of the Canonical Base Property (CBP), which was introduced and studied by Moosa and Pillay \cite{MP08}: Over a realization of a stationary type, its canonical base is almost $\mathbb{P}$-internal. Chatzidakis \cite{zC12} showed that the CBP already implies a seemingly stronger statement, the so-called uniform canonical base property (UCBP): Whenever the type of a realization of the stationary type $p$ over some set $C$ of parameters is almost $\mathbb{P}$-internal, then so is $\textnormal{stp}(\textnormal{Cb}(p)/C)$. For the proof, she isolated two remarkable properties which hold in every theory of finite rank with the CBP: Almost internality to $\mathbb{P}$ is preserved on intersections and more generally on quotients. Motivated by her work, we introduce the following two notions. A stationary type is good, resp. special, if the condition for the CBP, resp. UCBP, holds for this type. (See Definitions \ref{D:good} and \ref{D:AB} for a precise formulation.) The following result relates these two notions to the aforementioned properties.
\begin{theoremA}\textup{(Propositions \ref{P:propA}
and \ref{P:propB})}
The theory $T$ preserves internality on intersections, resp. on
quotients, if and only if
every stationary almost $\mathbb{P}$-internal type in $T^\textnormal{eq}$ is
good, resp. special. \end{theoremA}
Though most relevant examples of theories satisfy the CBP, Hrushovski, Palacín and Pillay \cite{HPP13} produced the so far only known example of an uncountably categorical theory without the CBP. We will give an alternative description of their counterexample in terms of additive covers of an algebraically closed field of characteristic zero. Covers are already present in early work of Hrushovski \cite{eH91}, Ahlbrandt and Ziegler \cite{AZ91} as well as of Hodges and Pillay \cite{HP94}. For an additive cover $\mathcal{M}$ of an algebraically closed field, the sort $S$ is the home-sort and $P$ is the field-sort. The automorphism group $\textnormal{Aut}(\mathcal{M}/P)$ embeds canonically in the group of all additive maps on $P$. If the sort $S$ is almost $P$-internal, the CBP trivially holds. The counterexample to the CBP has a ring structure on the sort $S$ and the ring multiplication $\otimes$ is a lifting of the field multiplication. The automorphism group over $P$ corresponds to the group of derivations, which ensures that the sort $S$ is not almost $P$-internal. We prove the following result.
\begin{theoremB}\textup{(Propositions \ref{P:M1AutVersion}
and \ref{P:CBPpureCover})}
The CBP holds whenever every
additive map on $P$ induces an automorphism in
$\textnormal{Aut}(\mathcal{M}/P)$.
If $\textnormal{Aut}(\mathcal{M}/P)$ corresponds to the group of
derivations, then the product $\otimes$
is definable in $\mathcal{M}$. \end{theoremB}
We focus on additive covers in which the sort $S$ is not almost $P$-internal, since otherwise the CBP trivially holds, and show that no such additive cover can eliminate imaginaries. On the other hand, the counterexample to the CBP does eliminate finite imaginaries, which fits into the following situation:
\begin{theoremC}\textup{(Theorem \ref{T:finImagCBP})}
If $\mathcal{M}$ eliminates finite imaginaries,
then it cannot preserve internality on quotients, so in particular
the CBP does not hold. \end{theoremC}
A standard argument shows that the CBP holds whenever it holds for all real stationary types. We will note that in the counterexample to the CBP the corresponding real versions of goodness and specialness hold, namely, every real stationary almost $P$-internal type is special. However, the version for real types does not imply the full condition, and this yields a new proof of the failure of the CBP.
\begin{theoremD}\textup{(Propositions \ref{P: M1PropB}
and \ref{P: M1PropA})}
The counterexample to the CBP does not preserve internality on intersections. \end{theoremD}
Palac\'in and Pillay~\cite{PP17} considered a strengthening of the CBP, called the strong canonical base property, which we show cannot hold in any additive cover in which $S$ is not almost $P$-internal. Regarding a question which arose in \cite{PP17}, we prove that no \textit{pure} Galois-theoretic account of the CBP can be provided.
In a forthcoming work, we use the approach with additive covers in order to produce new counterexamples to the CBP.
\begin{akn} The author would like to thank his supervisor Amador Mart\'in Pizarro for numerous helpful discussions, his support, generosity and guidance. He also would like to thank Daniel Palac\'in for multiple interesting discussions. Part of this research was carried out at the University of Notre Dame (Indiana, USA) with financial support from the DAAD, which the author gratefully acknowledges. The author would like to thank the University of Notre Dame for its hospitality and Anand Pillay for many helpful discussions and for suggesting the study of imaginaries in the counterexample to the CBP.
\end{akn}
\section{The Canonical Base Property and Related Properties}\label{S:CBPAB}
In this section we introduce two properties related to the canonical base property. We assume throughout this article a solid knowledge of geometric stability theory \cite{aP96,TZ12}. Most of the results in this section can be found in \cite{zC12}.
Let us fix a complete stable theory of finite Lascar rank. As usual, we work inside a sufficiently saturated ambient model. We denote by $\mathbb{P}$ the $\emptyset$-invariant family of all non-locally modular minimal types.
The following notions provide an equivalent formulation of the CBP and the UCBP. They will play a crucial role in our attempt to weaken the CBP to other contexts.
\begin{definition}\label{D:good}
A stationary type $p$ is: \begin{itemize}
\item \emph{good} if $\textnormal{stp}(\textnormal{Cb}(p)/a)$ is almost
$\mathbb{P}$-internal for some (any) realization $a$ of $p$,
\item \emph{special} if, for every parameter set $C$ and every
realization $a$ of $p$, whenever $\textnormal{stp}(a/C)$ is almost $\mathbb{P}$-internal, so
is $\textnormal{stp}(\textnormal{Cb}(p)/C)$. \end{itemize} \end{definition}
\begin{remark}\label{R:CBP_AB}~
\begin{enumerate}[(a)]
\item Note that every special type is good, by setting $C=\{a\}$.
\item It is
immediate from the definitions that the theory $T$ has
the CBP, resp.\ the UCBP, if and only if every
stationary type in $T^\textnormal{eq}$ is good, resp.\ special.
\item Analogously to \cite[Remark 2.6]{aP95}, it can be easily shown
that
whether or not every stationary type is good, resp. special, is
preserved
under naming parameters.
\end{enumerate} \end{remark}
Chatzidakis showed in \cite[Theorem 2.5]{zC12} that the CBP already implies the UCBP for (simple) theories of finite rank. In order to prove so, she first shows in \cite[Proposition 2.1]{zC12} that, under the CBP, the type $\textnormal{tp}(b/\textnormal{acl}^{\textnormal{eq}}(a)\cap\textnormal{acl}^{\textnormal{eq}}(b))$ is almost $\mathbb{P}$-internal, whenever $\textnormal{stp}(b/a)$ is almost $\mathbb{P}$-internal, and secondly in \cite[Lemma 2.3]{zC12}, that $\textnormal{tp}(b/\textnormal{acl}^{\textnormal{eq}}(a_1)\cap \textnormal{acl}^{\textnormal{eq}}(a_2))$ is almost $\mathbb{P}$-internal, if both $\textnormal{stp}(b/a_1)$ and $\textnormal{tp}(b/a_2)$ are. Motivated by her work, we now introduce two notions capturing these intermediate steps and study their relation to the CBP.
\begin{definition}\label{D:AB}
The theory $T$ \emph{preserves internality on intersections} if
the type
\[\textnormal{tp}(b/\textnormal{acl}^{\textnormal{eq}}(a)\cap\textnormal{acl}^{\textnormal{eq}}(b))\]
is almost $\mathbb{P}$-internal,
whenever $\textnormal{stp}(b/a)$ is almost $\mathbb{P}$-internal.
Similarly, the theory
\emph{preserves internality on quotients} if the type
\[\textnormal{tp}(b/\textnormal{acl}^{\textnormal{eq}}(a_1)\cap \textnormal{acl}^{\textnormal{eq}}(a_2))\]
is almost $\mathbb{P}$-internal,
whenever both $\textnormal{stp}(b/a_1)$ and $\textnormal{stp}(b/a_2)$ are. \end{definition}
In order to relate the above properties to consequences of the CBP, we will need the following observation.
\begin{fact}\label{F:level}\textup{(}\cite[Proposition 1.18]{zC12} \textnormal{ \& } \cite[Theorem 3.6]{PW13}\textup{)}
Let $\textnormal{stp}(b/A)$ and $\textnormal{stp}(b/C)$ be two $\mathbb{P}$-analysable types.
\begin{enumerate}[(a)]
\item The type $\textnormal{stp}(b/\textnormal{acl}^{\textnormal{eq}}(A)\cap\textnormal{acl}^{\textnormal{eq}}(C))$ is again
$\mathbb{P}$-analysable. In particular,
$\textnormal{stp}(b/\textnormal{acl}^{\textnormal{eq}}(A)\cap\textnormal{acl}^{\textnormal{eq}}(b))$ is also $\mathbb{P}$-analysable.
\item Let $b_A$ be the maximal
subset of $\textnormal{acl}^{\textnormal{eq}}(A,b)$ such that $\textnormal{stp}(b_A /A)$
is almost $\mathbb{P}$-internal. The tuple $b_A$ (in some fixed enumeration)
dominates $b$ over
$A$, that is, for every set of parameters $D\supset A$,
\[ b \mathop{\mathpalette\Ind{}}_A D \ \ \text{ whenever } \ \ b_A \mathop{\mathpalette\Ind{}}_A D.\]
Furthermore, whenever $\textnormal{acl}^{\textnormal{eq}}(D)\cap\textnormal{acl}^{\textnormal{eq}}(A,b_A)=\textnormal{acl}^{\textnormal{eq}}(A)$, then also
\[ \textnormal{acl}^{\textnormal{eq}}(D)\cap\textnormal{acl}^{\textnormal{eq}}(A,b)=\textnormal{acl}^{\textnormal{eq}}(A).\]
\end{enumerate} \end{fact}
\begin{proposition}\label{P:propA}
The theory $T$ preserves internality on intersections if and only if every stationary almost $\mathbb{P}$-internal type in $T^\textnormal{eq}$ is good. \end{proposition} \begin{proof}
We assume first that every stationary almost $\mathbb{P}$-internal type is
good, but
the conclusion fails,
witnessed by
two tuples $a$ and $b$. By Remark \ref{R:CBP_AB}, we may assume
\[ \textnormal{acl}^{\textnormal{eq}}(a)\cap\textnormal{acl}^{\textnormal{eq}}(b)=\textnormal{acl}^{\textnormal{eq}}(\emptyset).\]
Thus, the type $\textnormal{stp}(b/a)$ is
almost $\mathbb{P}$-internal, but the type $\textnormal{stp}(b)$ is not. Note that
$\textnormal{stp}(b)$ is $\mathbb{P}$-analysable, by Fact \ref{F:level}.
Among all possible (imaginary) tuples in the ambient model take now
$a'$ such that
$\textnormal{stp}(b/a')$ is almost $\mathbb{P}$-internal and
\[\textnormal{acl}^{\textnormal{eq}}(a')\cap\textnormal{acl}^{\textnormal{eq}}(b)=\textnormal{acl}^{\textnormal{eq}}(\emptyset)\]
with $\textnormal{U}(b_\emptyset/a')$ maximal. Since $\textnormal{stp}(b/a')$ is almost
$\mathbb{P}$-internal, there is a set of parameters $A$ containing $a'$ with
$A \mathop{\mathpalette\Ind{}}_{a'} b$ such that $b$ is algebraic over $Ae$, where $e$ is a
tuple of
realizations of types (each one based over $A$) in $\mathbb{P}$. Since each
type in the family $\mathbb{P}$ is minimal, we may assume, after possibly
enlarging $A$, that $e$ and $b$ are interalgebraic over $A$.
Let now $e'$ be a maximal
subtuple of $e$ independent from $b_\emptyset$ over $A$, so \[ e'
\mathop{\mathpalette\Ind{}}_{A}
b_\emptyset \ \ \text{ and } \ \ e \in \textnormal{acl}^{\textnormal{eq}}(A, e', b_\emptyset).\]
Hence,
the tuple $b$
is algebraic over
$Ae' b_\emptyset $ and
\[\textnormal{acl}^{\textnormal{eq}}(A,e')\cap\textnormal{acl}^{\textnormal{eq}}(b_\emptyset)\subset
\textnormal{acl}^{\textnormal{eq}}(a')\cap\textnormal{acl}^{\textnormal{eq}}(b)=\textnormal{acl}^{\textnormal{eq}}(\emptyset).\]
Therefore
$\textnormal{acl}^{\textnormal{eq}}(A,e')\cap\textnormal{acl}^{\textnormal{eq}}(b) =\textnormal{acl}^{\textnormal{eq}}(\emptyset)$, by Fact \ref{F:level}.
Notice that $\textnormal{stp}(b/A, e')$ is almost $\mathbb{P}$-internal, yet this does not
yield any contradiction since
$\textnormal{U}(b_\emptyset/A,e')=\textnormal{U}(b_\emptyset/a')$. Choose now $b'$ realizing
$\textnormal{stp}(b/A,e')$ independent from $b$ over $A, e'$. An easy forking
computation yields
\[ \textnormal{acl}^{\textnormal{eq}}(b')\cap\textnormal{acl}^{\textnormal{eq}}(b)=\textnormal{acl}^{\textnormal{eq}}(\emptyset).\] By the hypothesis we
have that the almost $\mathbb{P}$-internal type
\[\textnormal{stp}(b'/\textnormal{acl}^{\textnormal{eq}}(A,e'))=\textnormal{stp}(b/\textnormal{acl}^{\textnormal{eq}}(A,e')) \] is good,
so we deduce that $\textnormal{stp}(\textnormal{Cb}(b/A,e')/b')$ is almost $\mathbb{P}$-internal.
Remark that $b$ is algebraic over $\textnormal{Cb}(b/A,e', b_\emptyset)$ and thus
also algebraic over $b_\emptyset \textnormal{Cb}(b/A,e')$.
Putting all of the above together, we conclude that the type
$\textnormal{stp}(b/b')$ is almost $\mathbb{P}$-internal. Since
\[\textnormal{U}(b_\emptyset/b')\geq \textnormal{U}(b_\emptyset/A, e',b') =
\textnormal{U}(b_\emptyset/A,e')=\textnormal{U}(b_\emptyset/a'),\]
we deduce by the maximality of $\textnormal{U}(b_\emptyset/a')$ that
$\textnormal{U}(b_\emptyset/b')= \textnormal{U}(b_\emptyset/A, e',b')$, that is, \[
b_\emptyset \mathop{\mathpalette\Ind{}}_{\textnormal{acl}^{\textnormal{eq}}(A,e')\cap \ \textnormal{acl}^{\textnormal{eq}}(b')} A, e', b'.\]
Hence $b_\emptyset \mathop{\mathpalette\Ind{}} b'$, so
$b \mathop{\mathpalette\Ind{}} b'$, by Fact \ref{F:level}, contradicting that $\textnormal{stp}(b)$ is
not almost $\mathbb{P}$-internal.
For the other direction, we need to show that the almost
$\mathbb{P}$-internal
type $\textnormal{stp}(a/b)$ is good, that is, that $\textnormal{stp}(\textnormal{Cb}(a/b)/a)$ is almost
$\mathbb{P}$-internal. We may assume that $b$ equals the canonical base
$\textnormal{Cb}(a/b)$. Superstability yields that $b$ is contained in the
algebraic closure of
finitely many $b$-conjugates of $a$. By preservation of internality on
intersections, the type
$\textnormal{tp}(a/\textnormal{acl}^{\textnormal{eq}}(a)\cap\textnormal{acl}^{\textnormal{eq}}(b))$ is almost $\mathbb{P}$-internal, so it follows
that \[\textnormal{tp}(b/\textnormal{acl}^{\textnormal{eq}}(a)\cap\textnormal{acl}^{\textnormal{eq}}(b))\] is almost $\mathbb{P}$-internal. Hence,
the type $\textnormal{stp}(b/a)$ is almost $\mathbb{P}$-internal, as desired. ~\end{proof}
It follows now from Remark \ref{R:CBP_AB} that preservation of internality on intersections does not depend on parameters being named. \begin{corollary}\label{C:NamingParametersIntersections} Preservation of internality on intersections is invariant under naming and forgetting parameters. \end{corollary}
\begin{remark}\label{R:CCBP}
It follows from Remark \ref{R:CBP_AB} and Proposition \ref{P:propA} that
the CBP is equivalent to the property that
whenever $b=\textnormal{Cb}(a/b)$, then $\textnormal{tp}(b/\textnormal{acl}^{\textnormal{eq}}(a)\cap\textnormal{acl}^{\textnormal{eq}}(b))$ is almost
$\mathbb{P}$-internal, which was already shown in \cite[Theorem 2.1]{zC12}. \end{remark}
\begin{proposition}\label{P:propB}
The theory $T$ preserves internality on quotients if and only if
every stationary almost $\mathbb{P}$-internal type in $T^\textnormal{eq}$ is
special. \end{proposition}
\begin{proof}
Assume that every stationary almost $\mathbb{P}$-internal type is
special. We want
to show that
\[\textnormal{tp}(b/\textnormal{acl}^{\textnormal{eq}}(a_1)\cap\textnormal{acl}^{\textnormal{eq}}(a_2))\]
is almost $\mathbb{P}$-internal,
whenever both $\textnormal{stp}(b/a_1)$ and $\textnormal{stp}(b/a_2)$ are.
By Remark \ref{R:CBP_AB}, we may assume that
\[ \textnormal{acl}^{\textnormal{eq}}(a_1)\cap\textnormal{acl}^{\textnormal{eq}}(a_2)=\textnormal{acl}^{\textnormal{eq}}(\emptyset).\]
Note that the type $\textnormal{stp}(b)$ is $\mathbb{P}$-analysable, by Fact
\ref{F:level}, so
recall that $b_\emptyset$ is the maximal almost $\mathbb{P}$-internal
subset of
$\textnormal{acl}^{\textnormal{eq}}(b)$.
As in the proof of Proposition \ref{P:propA} there is a set of
parameters $A_1$ containing
$a_1$ such that $A_1 \mathop{\mathpalette\Ind{}}_{a_1} b$ and $b$ is
interalgebraic over $A_1$
with some tuple $e$ of realizations of types (each one based over
$A_1$) in $\mathbb{P}$. Choosing a maximal subtuple $e'$ of $e$ with
$e' \mathop{\mathpalette\Ind{}}_{A_1} b_\emptyset$, it follows that $b$ is algebraic over
$b_\emptyset A_1 e'$ and that
\[
\textnormal{acl}^{\textnormal{eq}}(b_\emptyset)\cap\textnormal{acl}^{\textnormal{eq}}(A_1,e')\subset \textnormal{acl}^{\textnormal{eq}}(a_1).
\]
Hence
\begin{equation*}
\tag{$\star$}
\textnormal{acl}^{\textnormal{eq}}(b)\cap\textnormal{acl}^{\textnormal{eq}}(A_1,e')\cap\textnormal{acl}^{\textnormal{eq}}(a_2)=\textnormal{acl}^{\textnormal{eq}}(\emptyset),
\end{equation*}
by Fact \ref{F:level}. Since the almost
$\mathbb{P}$-internal type
$\textnormal{stp}(b / A_1 , e')$ is special, we have that
\[\textnormal{stp}(\nicefrac{\textnormal{Cb}(b / A_1 , e')}{a_2})\]
is almost $\mathbb{P}$-internal. Therefore
\[\textnormal{stp}(\nicefrac{\textnormal{Cb}(b / A_1 , e')}{\textnormal{acl}^{\textnormal{eq}}(A_1,
e')\cap\textnormal{acl}^{\textnormal{eq}}(a_2)})\]
is almost $\mathbb{P}$-internal by Remark \ref{R:CBP_AB}.
Since
\[ b \mathop{\mathpalette\Ind{}}_{\textnormal{Cb}(b / A_1 , e'),b_\emptyset} A_1, e' \]
and $b$ is algebraic over $b_\emptyset A_1 e'$, the tuple $b$ is
algebraic over
$\textnormal{Cb}(b / A_1 , e') b_\emptyset$. In particular, the type
\[ \textnormal{stp}(b/\textnormal{acl}^{\textnormal{eq}}(A_1, e')\cap\textnormal{acl}^{\textnormal{eq}}(a_2)) \]
is almost $\mathbb{P}$-internal and hence so is $\textnormal{stp}(b)$ because of
($\star$).
In order to prove the other direction, we want to show that the almost
$\mathbb{P}$-internal type $\textnormal{stp}(a/b)$ is special. Fix a set $C$ of
parameters such that $\textnormal{stp}(a/C)$ is almost $\mathbb{P}$-internal. By
preservation of internality on quotients, the type
\[\textnormal{stp}(a/\textnormal{acl}^{\textnormal{eq}}(b)\cap\textnormal{acl}^{\textnormal{eq}}(C)) \]
is almost $\mathbb{P}$-internal and so is
\[\textnormal{stp}(\nicefrac{\textnormal{Cb}(a/b)}{\textnormal{acl}^{\textnormal{eq}}(b)\cap\textnormal{acl}^{\textnormal{eq}}(C)}),\]
since the canonical base $\textnormal{Cb}(a/b)$ is algebraic over finitely many
$b$-conjugates of $a$. ~\end{proof} We deduce now the analog of Corollary \ref{C:NamingParametersIntersections} for preservation of internality on quotients. \begin{corollary}\label{C:NamingParametersQuotients} Preservation of internality on quotients is invariant under naming and forgetting parameters. \end{corollary}
Thanks to the previous notions, we will provide for the sake of completeness a compact proof in Corollary \ref{C:Zoe} that the CBP already implies the UCBP, which essentially follows the lines of Chatzidakis's proof \cite[Theorem 2.5]{zC12}: Under the assumption of the CBP, the UCBP is equivalent to preservation of internality of quotients. Hence, we need only show in Proposition \ref{P:CBPQuotients} that the CBP implies the latter (cf. \cite[Lemma 2.3]{zC12}). For this, we need some auxiliary results.
Let $\Sigma$ denote the family of all minimal types, that is, of Lascar rank one. For a set $A$ of parameters, denote by $A_{\emptyset}^{\Sigma}$ the maximal almost $\Sigma$-internal subset (in some fixed enumeration) of $\textnormal{acl}^{\textnormal{eq}}(A)$.
\begin{fact}\label{F:Comp}\textup{(}\cite[Lemma 1.10]{zC12} \textnormal{ \& } \cite[Observation 1.2]{zC12}\textup{)}
Assume that the types $\textnormal{stp}(e)$ and $\textnormal{stp}(c)$ are almost
$\Sigma$-internal.
\begin{enumerate}[(a)]
\item If the tuple $e$ is algebraic over $Ac$ for some parameter
set $A$,
then $e$ is algebraic over
$A_{\emptyset}^{\Sigma} c$.
\item If the type $\textnormal{stp}(c)$ is $\mathbb{P}$-analysable, then it is almost
$\mathbb{P}$-internal.
\end{enumerate} \end{fact}
\begin{lemma}\label{L:CompCBP}
Assume that the theory $T$ has the CBP and let $e$ be a tuple which is
algebraic over $AB$ with
$\textnormal{acl}^{\textnormal{eq}}(A)\cap\textnormal{acl}^{\textnormal{eq}}(B)=\textnormal{acl}^{\textnormal{eq}}(\emptyset)$. If the type $\textnormal{stp}(e)$ is
almost $\Sigma$-internal, then $e$ is algebraic over
$A_{\emptyset}^{\Sigma} B$. \end{lemma} \begin{proof}
Choose a set of parameters $D$ with $D \mathop{\mathpalette\Ind{}} e,A,B$ such that $e$ is
interalgebraic over $D$ with a tuple of realizations of types (each
one based over $D$) in $\Sigma$.
Since
\[ e \mathop{\mathpalette\Ind{}}_{A_{\emptyset}^{\Sigma} B} D \ \ \textnormal{ and } \ \
\textnormal{acl}^{\textnormal{eq}}(A,D)\cap\textnormal{acl}^{\textnormal{eq}}(B,D)=\textnormal{acl}^{\textnormal{eq}}(D),\]
we may assume, after naming $D$, that $e$ is a single
element of Lascar rank one. If
\[ e \mathop{\mathpalette\notind{}}_{B} A_{\emptyset}^{\Sigma}, \]
we are done. Otherwise
\[ e \mathop{\mathpalette\Ind{}}_{B} A_{\emptyset}^{\Sigma}, \]
so \[
\textnormal{acl}^{\textnormal{eq}}(A_{\emptyset}^{\Sigma})\cap\textnormal{acl}^{\textnormal{eq}}(B,e)=\textnormal{acl}^{\textnormal{eq}}(A_{\emptyset}^{\Sigma})\cap\textnormal{acl}^{\textnormal{eq}}(B)=
\textnormal{acl}^{\textnormal{eq}}(\emptyset).\]
The variant of Fact \ref{F:level} (b) with respect to $\Sigma$ yields
\[\textnormal{acl}^{\textnormal{eq}}(A)\cap\textnormal{acl}^{\textnormal{eq}}(B,e)=\textnormal{acl}^{\textnormal{eq}}(\emptyset).\]
Now the CBP and Remark \ref{R:CCBP} imply that the type
$\textnormal{stp}(\textnormal{Cb}(B,e/A))$
is almost
$\mathbb{P}$-internal, hence almost $\Sigma$-internal. Therefore, the
canonical base $\textnormal{Cb}(B,e/A)$ is contained
in $A_{\emptyset}^{\Sigma}$. Since $e$
is algebraic over $\textnormal{Cb}(B,e/A) B$, we conclude that $e$ is
algebraic
over
$A_{\emptyset}^{\Sigma} B$, as desired. ~\end{proof}
We have now the necessary ingredients to show that every complete stable theory of finite rank with the CBP preserves internality on quotients.
\begin{proposition}\label{P:CBPQuotients}
If the theory $T$ has the CBP, then it preserves internality on
quotients. \end{proposition} \begin{proof}
We want to show that
\[\textnormal{tp}(b/\textnormal{acl}^{\textnormal{eq}}(a_1)\cap\textnormal{acl}^{\textnormal{eq}}(a_2))\]
is almost $\mathbb{P}$-internal, whenever both $\textnormal{stp}(b/a_1)$ and
$\textnormal{stp}(b/a_2)$ are. Since the CBP is preserved under naming parameters,
we may assume that
\[\textnormal{acl}^{\textnormal{eq}}(a_1)\cap\textnormal{acl}^{\textnormal{eq}}(a_2)=\textnormal{acl}^{\textnormal{eq}}(\emptyset).\]
Choose sets of parameters $A_1$ containing $a_1$ and $A_2$ containing
$a_2$
with
\[ A_1 \mathop{\mathpalette\Ind{}}_{a_1} b, a_2 \ \ \textnormal{ and } \ \ A_2 \mathop{\mathpalette\Ind{}}_{a_2}
b, A_1 \]
such that $b$ is algebraic over both $A_1 e_1$ and $A_2 e_2$, where
$e_1$ and $e_2$ are tuples of realizations of types (each one based
over $A_1$, resp. $A_2$) in $\mathbb{P}$. Since
\[\textnormal{acl}^{\textnormal{eq}}(A_1)\cap\textnormal{acl}^{\textnormal{eq}}(A_2)=\textnormal{acl}^{\textnormal{eq}}(a_1)\cap\textnormal{acl}^{\textnormal{eq}}(a_2)=\textnormal{acl}^{\textnormal{eq}}(\emptyset),\]
the CBP and Remark \ref{R:CCBP} imply that $\textnormal{stp}(\textnormal{Cb}(A_1/A_2))$ is
almost $\mathbb{P}$-internal, so
\[A_1 \mathop{\mathpalette\Ind{}}_{(A_2)_{\emptyset}^{\Sigma}} A_2.\]
Choose now a maximal subtuple
$e_{1}'$ of $e_1$ which is independent from $A_2$ over $A_1$, so $e_1$
is algebraic over $A_1 e_{1}' A_2$ and
\[\textnormal{acl}^{\textnormal{eq}}(A_1 ,
e_{1}')\cap\textnormal{acl}^{\textnormal{eq}}(A_2)=\textnormal{acl}^{\textnormal{eq}}(A_1)\cap\textnormal{acl}^{\textnormal{eq}}(A_2)=\textnormal{acl}^{\textnormal{eq}}(\emptyset).\]
Now, let
$e_{2}'$ be a maximal subtuple of $e_2$ with
\[ e_{2}' \mathop{\mathpalette\Ind{}}_{A_2} A_1 , e_{1}'. \]
We deduce that
\[ A_1 , e_{1}' \mathop{\mathpalette\Ind{}}_{(A_2)_{\emptyset}^{\Sigma}} A_2 , e_{2}' \]
and $e_2$ is algebraic over $A_1 e_{1}' e_{2}' A_2$.
Moreover
\[\textnormal{acl}^{\textnormal{eq}}(A_1 , e_{1}')\cap\textnormal{acl}^{\textnormal{eq}}(A_2, e_{2}')\subset\textnormal{acl}^{\textnormal{eq}}(A_1,
e_{1}')\cap\textnormal{acl}^{\textnormal{eq}}(A_2)=\textnormal{acl}^{\textnormal{eq}}(\emptyset).\]
By Lemma \ref{L:CompCBP},
we get that $e_{1}$ is algebraic over
$(A_1,e_{1}')_{\emptyset}^{\Sigma} A_2$ and that
$e_{2}$ is algebraic over $A_1 e_{1}' (e_{2}'
A_2)_{\emptyset}^{\Sigma}$.
It follows from Fact \ref{F:Comp} (a) that
\[ (A_1,e_{1}')_{\emptyset}^{\Sigma} = (A_1)_{\emptyset}^{\Sigma}
e_{1}'\ \ \text{ and } \ \ (e_{2}',A_2)_{\emptyset}^{\Sigma} =
e_{2}'
(A_2)_{\emptyset}^{\Sigma} .\]
We deduce that $e_{1}$ is algebraic over $(A_1)_{\emptyset}^{\Sigma}
e_{1}'
A_2$ and $e_{2}$ is algebraic over $A_1 e_{1}'
e_{2}'(A_2)_{\emptyset}^{\Sigma}$. Therefore
\[ A_1 , e_{1} \mathop{\mathpalette\Ind{}}_{(A_1)_{\emptyset}^{\Sigma},
(A_2)_{\emptyset}^{\Sigma}, e_{1}, e_{2}}
A_2 , e_{2}. \]
Hence $b$ is algebraic over $(A_1)_{\emptyset}^{\Sigma},
(A_2)_{\emptyset}^{\Sigma}, e_{1}, e_{2}$, so the type $\textnormal{stp}(b)$ is
almost
$\Sigma$-internal. Since, by Fact \ref{F:level}, the type $\textnormal{stp}(b)$ is
$\mathbb{P}$-analysable, we conclude by Fact \ref{F:Comp} (b) that $\textnormal{stp}(b)$
is almost $\mathbb{P}$-internal,
as desired. ~\end{proof}
\begin{remark}\label{R:IndepQuotients} It is easy to see that a weakening of preservation of internality on quotients holds in every complete stable theory of finite rank, when the quotients are independent: If the types $\textnormal{stp}(b/a_1)$ and
$\textnormal{stp}(b/a_2)$ are almost $\mathbb{P}$-internal
and $a_1 \mathop{\mathpalette\Ind{}} a_2$, then the type $\textnormal{stp}(b)$ is almost $\mathbb{P}$-internal. \end{remark}
For completeness, we now restate Chatzidakis's proof \cite[Theorem 2.5]{zC12} that the CBP implies the UCBP using the aforementioned terminology.
\begin{corollary}\label{C:Zoe} The CBP and UCBP are equivalent properties for theories of finite rank. \end{corollary} \begin{proof} The UCBP clearly implies the CBP, similarly to the remark that every special type is good.
We assume now that the theory has the CBP. We need to show that every type $\textnormal{stp}(a/b)$ is special. Since \[ \textnormal{Cb}(a/b)=\textnormal{Cb}(\nicefrac{\textnormal{Cb}(b/a)}{b}),\] we may assume that $a$ is the
canonical base $\textnormal{Cb}(b/a)$. In particular, the type $\textnormal{stp}(a/b)$ is almost
$\mathbb{P}$-internal, by the CBP. Now, Propositions \ref{P:CBPQuotients} and
\ref{P:propB} yield that the type $\textnormal{stp}(a/b)$ is special, as desired. ~\end{proof}
The equivalence of the previous corollary motivates the following question, after localizing to almost $\mathbb{P}$-internal types.
\begin{question}\label{Q:1}
Are preservation of internality on intersections and on quotients
equivalent
properties for theories of finite rank? \end{question}
At the moment of writing, we do not know whether the previous question has a positive answer. Note that a structure answering the above question negatively would in particular yield a new theory of finite rank without the CBP, since we will see in Section \ref{S:Imag} that the so far only known counterexample to the CBP, given in \cite{HPP13}, does not preserve internality on intersections.
It was remarked in \cite[Lemma 2.11]{BMPW12} that the CBP holds whenever it holds for stationary real types, or equivalently, for real types over models. A natural question is whether the same holds for the above properties of preservation of internality.
\begin{question}\label{Q:2}
Does a theory of finite rank preserve internality on intersections,
resp. on quotients, if every stationary real almost $\mathbb{P}$-internal type
is good, resp. special? \end{question} Additive covers of the algebraically closed field $\mathbb{C}$, which will be introduced in the following section, will provide a negative answer (see Corollary \ref{C:AB_imag}) to Question \ref{Q:2}.
\section{Additive Covers}\label{S:AddCovers}
The only known example so far of a stable theory of finite rank without the CBP appeared in \cite{HPP13}. We will consider this example from the perspective of additive covers of the algebraically closed field $\mathbb{C}$. We start this section with a couple of definitions.
Following the terminology of Hrushovski ~\cite{eH91}, Ahlbrandt and Ziegler ~\cite{AZ91}, and Hodges and Pillay ~\cite{HP94}, we say that $M$ is a \textit{cover} of $N$ if the following three conditions hold: \begin{itemize}
\item The set $N$ is a stably embedded $\emptyset$-definable subset
of $M$.
\item There is a surjective $\emptyset$-definable map $\pi:M\backslash
N\rightarrow N$.
\item There is a family of groups $(G_a)_{a\in N}$ definable in
$N^{\text{eq}}$ without parameters such that $G_a$ acts definably and
regularly on the fiber $\pi^{-1}(a)$.
\end{itemize} For the purpose of this article, we will concentrate on particular covers of the algebraically closed field $\mathbb{C}$, and hence provide a definition adapted to this context. From now on, given the canonical projection of the sort $S=\mathbb{C}\times\mathbb{C}$ onto the first coordinate $P=\mathbb{C}$, we will denote the elements of $P$ with the Greek letters $\alpha$, $\beta$, etc., while the elements of $S$ will be seen accordingly as pairs $(\alpha,a')$ and so on. \begin{definition}\label{D:AdditiveCover} An \emph{additive cover} of the algebraically closed field $\mathbb{C}$ is a structure $\mathcal{M}=(P,S,\pi,\star,\ldots)$ with the distinguished sorts $P=\mathbb{C}$ and $S=\mathbb{C}\times\mathbb{C}$ such that the following conditions hold: \begin{itemize}
\item The structure $\mathcal{M}$ is a reduct of
$(\mathbb{C},\mathbb{C}\times\mathbb{C})$ with
the full field structure on the sort $P$.
\item The projection $\pi$ maps $S$ onto $P$.
\item There is an action $\star$ of $P$ on $S$ given
by $\alpha\star(\beta,b')=(\beta,b'+\alpha)$. \end{itemize} Moreover, the map \[ \begin{array}{rccc}
\oplus: & S\times S & \rightarrow & S \\[2mm]
& \big((\alpha,a'),(\beta,b')\big)&\mapsto& (\alpha+\beta,a'+b')
\end{array}\] is definable in $\mathcal{M}$ without parameters. \end{definition}
\begin{example}~\label{E:covers}
\begin{itemize}
\item The additive cover
$\mathcal{M}_{0}=(P,S,\pi,\star,\oplus)$ with no additional
structure.
\item The additive cover
$\mathcal{M}_{1}=(P,S,\pi,\star,\oplus,\otimes)$
with the product
\[ \begin{array}{rccc}
\otimes: & S\times S & \rightarrow & S \\[2mm]
& \big((\alpha,a'),(\beta,b')\big)&\mapsto& (\alpha\beta,\alpha
b'+\beta
a').
\end{array}\]
Note that in $\mathcal{M}_{1}$ the sort $S$, equipped with $\oplus$ and $\otimes$, is a commutative ring with multiplicative neutral element $(1,0)$.
The zero-divisors are exactly the elements $a$ in $S$ with
$\pi(a)=0$, that is, the pairs $a=(0, a')$.
\end{itemize}
Given an additive cover $\mathcal{M}$, there is a canonical embedding \[\begin{array}{rll} \textnormal{Aut}(\mathcal{M}/P)& \hookrightarrow & \{F:\mathbb{C}\rightarrow\mathbb{C} \text{
additive} \}\\[1mm] \sigma &\mapsto & F_\sigma \end{array}\] uniquely determined by the identity $\sigma(x)=F_\sigma(\pi(x))\star x$.
For the additive cover $\mathcal M_0$ of Example \ref{E:covers}, the above embedding defines a bijection \[\textnormal{Aut}(\mathcal{M}_{0}/P)\leftrightarrow\{F:\mathbb{C}\rightarrow\mathbb{C}
\text{ additive}\}\] and a straightforward calculation yields that
\[\textnormal{Aut}(\mathcal{M}_{1}/P)\leftrightarrow\{F:\mathbb{C}\rightarrow\mathbb{C}
\text{
derivation}\}.\] Indeed, for elements $a=(\alpha,a')$ and
$b=(\beta,b')$ in $S$, we have
\begin{align*}
\sigma(a\otimes b)&=F_{\sigma}(\alpha\beta)\star (a\otimes b)
\textnormal{ and } \\
\sigma(a)\otimes\sigma(b)&=\big(F_{\sigma}(\alpha)\star a\big)\otimes
\big(F_{\sigma}(\beta)\star b\big)=(\alpha\beta,\alpha
(b'+F_{\sigma}(\beta))+\beta (a'+F_{\sigma}(\alpha)))\\&=\big(\alpha
F_{\sigma}(\beta)+\beta F_{\sigma}(\alpha)\big)\star (a\otimes b).
\end{align*} \end{example}
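It may help to note (an observation added here for orientation, not needed in what follows) that in $\mathcal{M}_{1}$ the sort $S$, equipped with $\oplus$ and $\otimes$, is simply the ring of dual numbers: the assignment \[ (\alpha,a')\mapsto \alpha+a'\varepsilon \] defines a ring isomorphism onto $\mathbb{C}[\varepsilon]/(\varepsilon^2)$, under which the projection $\pi$ corresponds to reduction modulo the ideal $(\varepsilon)$, and the computation above states precisely that $F_\sigma$ satisfies the Leibniz rule.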
\begin{remark}\label{R:genCov} Every additive cover $\mathcal{M}$ is a saturated uncountably categorical structure, where $P$ is the unique strongly minimal set up to non-orthogonality. The sort $S$ has Morley rank two and degree one, and is $P$-analysable in two steps. Moreover, each fiber $\pi^{-1}(\alpha)$ is strongly minimal.
Therefore, for additive covers, almost $\mathbb{P}$-internality in the CBP is equivalent to almost internality to $P$. If $S$ is almost $P$-internal, then the CBP trivially holds. \end{remark}
\begin{remark}\label{R: Cex} The counterexample to the CBP given in \cite{HPP13} is an additive cover, including for every irreducible variety $V$ defined over $\mathbb{Q}^{\textnormal{alg}}$ a predicate in the sort $S$ for the tangent bundle of $V$. It is easy to see that this structure has the same definable sets as the additive cover $\mathcal{M}_1$ given in Example \ref{E:covers}, since every polynomial expression over $\mathbb{Q}^{\textnormal{alg}}$ in $P$ lifts to a polynomial equation in $S$, using the ring operations $\oplus$ and $\otimes$.
A key ingredient in the proof that the sort $S$ in the above counterexample is not almost $P$-internal \cite[Corollary 3.3]{HPP13} is that every derivation on the algebraically closed field $\mathbb{C}$ induces an automorphism
in $\textnormal{Aut}(\mathcal{M}_1 / P)$. \end{remark}
For the following sections, we will need some auxiliary lemmas on the structure of additive covers, and particularly those where the sort $S$ is not almost $P$-internal. For the sake of completeness, note that there are additive covers, besides the full structure, where the sort $S$ is $P$-internal: Consider the additive cover $\mathcal{M}$ with the following binary relation $R$ on $S\times S$ \[ R((\alpha,a'),(\beta,b')) \iff \big(\alpha\notin\mathbb{Q}\ \ \& \ \ \beta\notin\mathbb{Q} \ \ \& \ \ a'=b' \big).
\] It is easy to verify that $\textnormal{Aut}(\mathcal{M}/P)=(\mathbb{C},+)$ and the sort $S$ is $P$-algebraic (actually $P$-definable), after naming any element in the fiber $\pi^{-1}(1)$.
The following notion will be helpful in the next section.
\begin{definition}\label{D:mean} Given elements $a_1=(\alpha, a_1'),\ldots, a_n=(\alpha, a'_n)$ of $S$ all in the same fiber $\pi^{-1}(\alpha)$, their \emph{average} is the element \[\Big(\alpha, \frac{a'_1+\ldots+a'_n}{n}\Big).\] \end{definition}
\begin{lemma}\label{L:mean}
Given a non-empty finite set $A$ of elements of $S$, all lying in the
same fiber, every
automorphism $\sigma$ of the additive cover maps the average of $A$ to
the average of $\sigma[A]$. In particular, the average of $A$ is
definable
over $A$. \end{lemma} \begin{proof}
We proceed by induction on the size $n$ of the non-empty set $A$. For
$n=1$, there is nothing to prove. Assume $A$ contains at least two
elements, and choose some element $a$ of $A$. Set $b=\sigma(a)$.
Inductively, we have that $\sigma$ maps the average $d_1$ of
$A\backslash\{a\}$
to the average $d_2$ of $\sigma[A]\backslash\{b\}$. Let
$\varepsilon_1$ and $\varepsilon_2$ be the unique elements in $P$ such
that $\varepsilon_1\star
d_1=a$ and $\varepsilon_2\star d_2=b$. A straightforward computation
yields that
$\frac{\varepsilon_1}{n}\star d_1$, resp.
$\frac{\varepsilon_2}{n}\star d_2$, is the average of $A$, resp. of
$\sigma[A]$. Now the
claim follows since $\sigma$ maps $\frac{\varepsilon_1}{n}$ to
$\frac{\varepsilon_2}{n}$. ~\end{proof}
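For instance (a two-element illustration of the inductive step, in the notation of the proof): if $A=\{a_1,a_2\}$ lies in the fiber $\pi^{-1}(\alpha)$ and $a=a_1$, then $d_1=a_2$, the element $\varepsilon_1$ equals $a_1'-a_2'$, and \[ \frac{\varepsilon_1}{2}\star d_1=\Big(\alpha,\frac{a_1'+a_2'}{2}\Big) \] is indeed the average of $A$.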
\begin{lemma}\label{L:stat}
Let $a_1=(\alpha_1,0),\ldots,a_n=(\alpha_n,0)$ be elements in $S$. The
type $\textnormal{tp}(a_1,\ldots,a_n / \alpha_1,\ldots,\alpha_n)$ is
stationary. \end{lemma} \begin{proof}
Choose a maximal subtuple $\hat{a}$ of $(a_1,\ldots,a_n)$ which is (algebraically) independent over the tuple $\bar{\alpha}=(\alpha_1,\ldots,\alpha_n)$. Note that each $a_i$ is algebraic over $\bar{\alpha},\hat{a}$. Let $b_i=(\alpha_i,b_{i}')$ be the average of the finite set of $\{\bar{\alpha},\hat{a}\}$-conjugates of $a_i$. The element $b_i$ is
definable over $\bar{\alpha},\hat{a}$, by Lemma \ref{L:mean}.
\begin{claim*}
The second coordinate $b_{i}'$ of the average $b_i$ is definable (as
an element of $P$) over $\bar{\alpha}$.
\end{claim*}
\begin{claimproof*}
We need only show that $b_{i}'$ is fixed by every automorphism $\tau$
of the sort $P$ fixing $\bar{\alpha}$. The map
$\sigma=(\tau,\tau\times \tau)$ is an automorphism of the full
structure $(\mathbb{C},\mathbb{C}\times\mathbb{C})$, and hence of the
reduct $\mathcal M$. Since $\tau(0)=0$, the automorphism $\sigma$
fixes $\bar{\alpha},a_1,\ldots, a_n$. Hence $\sigma(b_i)=b_i$, so in
particular $\tau(b_{i}')=b_{i}'$.\end{claimproof*}
Therefore $a_i=(-b_{i}')\star b_i$ is definable
over $\bar{\alpha},\hat{a}$. Since the fibers of the projection $\pi$
are
strongly minimal (see Remark \ref{R:genCov}), the type
$\textnormal{tp}(\hat{a}/\bar{\alpha})$ is stationary, so we obtain the desired
conclusion. ~\end{proof} The above proof yields in particular the following: \begin{remark}\label{R:algFib}
\begin{itemize}
\item Every automorphism $\tau$ of $P$ fixing a subset $A$ induces an automorphism $\sigma$ of the additive cover which fixes all the elements of $S$ of the form $(\alpha,0)$, with $\alpha$ in $A$.
\item The definable and the algebraic closure of $P$ in the sort $S$ coincide: \[ S \cap \textnormal{acl}(P) = S \cap \textnormal{dcl}(P). \]
\item Given a set of parameters $B$ in $S$ and an element $\beta$ in the sort $P$, all elements of the strongly minimal fiber $\pi^{-1}(\beta)$ have the same type over $B, P$ whenever the element $b=(\beta,0)$ of $S$ is not algebraic over $B, P$.
\end{itemize} \end{remark}
\begin{lemma}\label{L:indep}
Let $a_1=(\alpha_1,0),\ldots,a_n=(\alpha_n,0)$ be elements in the sort
$S$ with generic independent elements $\alpha_i$ in $P$.
If the sort $S$ is not almost $P$-internal, then the $a_i$'s are
generic independent. \end{lemma} \begin{proof}
Choose some $\beta$ generic in $P$ independent from
$\alpha_1$ and set $a=(\alpha_1,\beta)$ in $S$. Note that the Morley
rank of $a$ is two. If $a_1$ were not generic, then
$a_1$
must be algebraic over the generic element $\alpha_1$ of $P$. Since
$a=\beta\star a_1$, it would
follow that the generic element
$a$ of $S$ is algebraic over $P$, which contradicts our assumption that
the sort $S$ is not almost $P$-internal. Hence $a_1$ is generic in
$S$.
Now, we inductively assume that the tuple $(a_1,\ldots,a_{n-1})$
consists of generic independent elements and want to show that $a_n
\mathop{\mathpalette\Ind{}} \bar{a}_{<n}$. Assume otherwise that $a_n \mathop{\mathpalette\notind{}}
\bar{a}_{<n}$.
Note that $\alpha_n$ is not algebraic over $\bar{a}_{<n}$, by
Remark \ref{R:algFib}, since $\alpha_n$ is not algebraic over
$\bar{\alpha}_{<n}$. Thus $a_n
\mathop{\mathpalette\notind{}}_{\alpha_n} \bar{a}_{<n}$, so $a_n$ is algebraic over
$\alpha_n \bar{a}_{<n}$.
Choose now some element $\gamma$ in $P$ generic over
$(\alpha_1,\ldots, \alpha_n)$
and set $c=(\alpha_n,\gamma)=\gamma\star a_n$ in $S$. Note that $c$ is
algebraic over
$\bar{a}_{<n}P$. Observe that $\textnormal{RM}(c/\bar{a}_{<n})=2$, by the choice of $\gamma$, so the generic type of $S$ over $\bar{a}_{<n}$ is almost $P$-internal and hence so is $S$, which gives the desired contradiction. ~\end{proof}
We conclude this section with a full description of the Galois groups of stationary $P$-internal types in additive covers, whenever the sort $S$ is not almost $P$-internal.
\begin{remark}\label{R:Gal}
If $S$ is not $P$-internal, then every definable subgroup of
$(\mathbb{C}^n,+)$ appears as a Galois group and conversely every
Galois group is (definably isomorphic to) such a subgroup. \end{remark} \begin{proof}
We show first that every definable subgroup $G$ of $(\mathbb{C}^n,+)$
appears as a Galois group. Let
$a_1=(\alpha_1,0),\ldots,a_n=(\alpha_n,0)$ be elements in the sort
$S$ with generic independent elements $\alpha_i$ in $P$ and
set
\[ E = \{ \bar{x}\in S^n \ |\ \exists \bar{g}\in G \bigwedge_{i=1}^{n}
g_i \star a_i = x_i \}. \]
The type $\textnormal{stp}(a_1,\ldots,a_n/\ulcorner E\urcorner)$ is $P$-internal
because $\alpha_1,\ldots,\alpha_n$ are definable over
$\ulcorner
E\urcorner$. We show that $G$ is the Galois group of this type.
If
\[ b_1,\ldots,b_n \equiv_{\ulcorner E\urcorner, P} a_1,\ldots,a_n, \]
then $\bar{b}$ is in $E$ and there is a unique $\bar{g}$ in
$G$ with $\bar{g}\star \bar{a}=\bar{b}$.
Now assume that conversely $\bar{g}\star \bar{a}=\bar{b}$ for some
$\bar{g}$ in $G$. Note that for $1\leq k\leq n$ the element $a_k$ is
not algebraic over $\bar{a}_{<k},P$ by Lemma \ref{L:indep}, since $S$
is not almost $P$-internal. Hence, Remark \ref{R:algFib} yields
that all elements in the fiber $\pi^{-1}(\alpha_k)$ have the same type
over $\bar{a}_{<k},P$. This shows that we can inductively construct an
automorphism $\sigma$ in $\textnormal{Aut}(\mathcal{M}/P)$ with $\sigma(a_k)=g_k
\star a_k$ for $1\leq k\leq n$. The
automorphism $\sigma$ determines an element of the Galois group of the
fundamental type $\textnormal{stp}(a_1,\ldots,a_n/\ulcorner E\urcorner)$.
Now we show that every Galois group is of the claimed form.
The Galois group $G$ of a
real stationary fundamental
$P$-internal type
$\textnormal{tp}(a_1,\ldots,a_n/B)$
equals
\[ G = \{ (g_1,\ldots,g_n)\in P^n \ \vert \
g_1 \star a_1 , \ldots , g_n \star a_n \equiv_{B,P} a_1 ,
\ldots, a_n \}. \]
More generally, given an imaginary element $e=f(a)$, where $a$ is a
real tuple and $f$ is an $\emptyset$-interpretable function, the
Galois group of the stationary fundamental
$P$-internal type $\textnormal{tp}(e/B)$ equals
\[ \{ g\in P^n \ \vert \
f(g\star a) \equiv_{B,P} e \}. \]
Hence, the statement follows, since every Galois group is the Galois
group of a stationary fundamental (possibly imaginary) type.
Note that we did not use here that $S$ is not almost $P$-internal. ~\end{proof}
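For a concrete instance of the first construction (an illustration, not part of the proof): for $n=2$ and the subgroup $G=\{(g,-g) \ | \ g\in\mathbb{C}\}$ of $(\mathbb{C}^2,+)$, the set $E$ becomes \[ E=\{(x_1,x_2)\in S^2 \ | \ \pi(x_1)=\alpha_1, \ \pi(x_2)=\alpha_2 \ \textnormal{ and } \ x_1\oplus x_2=a_1\oplus a_2\}, \] so its canonical parameter is interdefinable with $(\alpha_1,\alpha_2,a_1\oplus a_2)$, and the Galois group of $\textnormal{stp}(a_1,a_2/\ulcorner E\urcorner)$ is exactly $G$.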
\section{Imaginaries in additive covers}\label{S:Imag}
In order to answer Question \ref{Q:2}, we are led to study imaginaries in additive covers, with a particular focus on the additive covers of Example \ref{E:covers}. We will first show that neither the counterexample $\mathcal{M}_1$ to the CBP of \cite{HPP13} nor the additive cover $\mathcal{M}_0$ eliminates imaginaries.
\begin{lemma}\label{L:imag}
The additive cover $\mathcal{M}$ does not eliminate imaginaries if
every derivation on $\mathbb{C}$ induces an automorphism
in $\textnormal{Aut}(\mathcal{M}/P)$. \end{lemma}
\begin{proof}
Choose two generic independent elements $\alpha$ and $\beta$ in
the sort $P$, and pick elements $a$ and $b$ in the fiber of $\alpha$
and $\beta$, respectively. Fix a derivation $D$ with kernel
$\mathbb{Q}^{\text{alg}}$. Let us assume for a contradiction that the
definable set
\[E=\{(x,y)\in S^2 \ |\ \exists (\lambda,\mu)\in P^2 (\lambda\star a=x
\ \& \
\mu\star b=y \ \& \ D(\beta)\lambda-D(\alpha)\mu=0)\}\] has a real
canonical parameter $e$. By hypothesis, the derivation $D$ induces an
automorphism $\sigma_{D}$ in $\textnormal{Aut}(\mathcal{M}/P)$. Note that
$\sigma_{D}$ must fix $E$
setwise, because
$D(\beta)D(\alpha)-D(\alpha)D(\beta)=0$. Therefore (every element
of) the tuple $e$ lies in $P\cup\pi^{-1}(\mathbb{Q^{\text{alg}}})$.
In particular, the definable set $E$ is permuted by every automorphism
induced by a derivation. Now
let $D_1$ be a derivation with $D_1(\alpha)=1$ and $D_1(\beta)=0$, and
note that
$\sigma_{D_1}$ does not permute $E$, since
$D(\beta)\cdot 1-D(\alpha)\cdot 0=D(\beta)\neq 0$, which gives the
desired contradiction. ~\end{proof}
The proof of \cite{HPP13} shows that the sort $S$ in an additive cover $\mathcal M$ is not almost $P$-internal, whenever every derivation on $\mathbb{C}$ induces an automorphism in $\textnormal{Aut}(\mathcal{M}/P)$. We will now give a strengthening of Lemma \ref{L:imag}.
\begin{proposition}\label{P:ImagInt}
If the additive cover $\mathcal{M}$ eliminates imaginaries, then the
sort $S$ is $P$-internal. \end{proposition} \begin{proof}
We will mimic the proof of Lemma \ref{L:imag}. Assume for a
contradiction that the sort $S$ is not $P$-internal and choose two
generic independent elements $a$ and $b$ in $S$. Since $S$ is not
$P$-internal, there is
an automorphism $\tau$ in $\textnormal{Aut}(\mathcal{M}/P)$ which fixes $b$ and
moves
$a$. If we can construct an automorphism $\sigma$ (playing the role of $\sigma_D$ in the proof of Lemma \ref{L:imag}) which fixes only the definable closure of $P$ (in $S$), we conclude as before that the real canonical parameter $e$ of the definable set \[E=\{(x,y)\in S^2 \ |\ \exists (\lambda,\mu)\in P^2 (\lambda\star a=x \ \& \ \mu\star b=y \ \& \ F_\sigma(\beta)\lambda-F_\sigma(\alpha)\mu=0)\}\] is definable over $P$, where $\alpha=\pi(a)$ and $\beta=\pi(b)$. The automorphism $\tau$ fixes $e$, yet it maps the pair $(a,b)$ in $E$ outside of $E$.
Hence, we need only show in the rest of the proof that there exists such an automorphism $\sigma$.
Choose an enumeration of elements $a_i=(\alpha_i,0)$ and $b_i=(\beta_i,0)$ in $S$, with $i<2^{\aleph_0}$, such that: \begin{itemize}
\item The tuple
$\bar{\alpha}=(\alpha_i)_{i<2^{\aleph_0}}$
is a transcendence basis of the algebraically closed field
$\mathbb{C}$.
\item For each $i<2^{\aleph_0}$, the element $b_i$ is not algebraic
over
$\bar{a},\bar{b}_{<i}$, where
$\bar{a}=(a_i)_{i<2^{\aleph_0}}$ and
$\bar{b}=(b_i)_{i<2^{\aleph_0}}$. Hence
$\textnormal{RM}(b_i/\bar{a},\bar{b}_{<i})=1$ since $\beta_i$ is in
$\textnormal{acl}(\bar{a})$.
\item Each element in $S$ is algebraic over $\bar{a},\bar{b}$. \end{itemize} We denote by ${\langle \alpha \rangle}_i$ the unique subtuple of $\bar{\alpha}$ of smallest length such that $\beta_i$ is algebraic over ${\langle \alpha \rangle}_i$. Write $\mathcal X$ for the set of all finite subtuples of $\bar{\alpha}$ and consider the map \[ \begin{array}{rccc} \Phi: & \mathcal X & \rightarrow & 2^{\aleph_0} \\[2mm] & (\alpha_{i_1},\ldots,\alpha_{i_n})&\mapsto& \max(i_1,\ldots,i_n). \end{array}\] The partial function $F$ defined by \[ F(\alpha_i)=\alpha_{\omega^{i+1}}
\ \ \textnormal{ and } \ \ F(\beta_i)=\alpha_{\omega^{\max\left(i,\Phi({\langle \alpha \rangle}_i)\right)+1}+\omega^{i}} \] is clearly injective. It follows inductively by Remark \ref{R:algFib} that \[\bar{a},\bar{b} \equiv_{P} F(\bar{a})\star\bar{a},F(\bar{b})\star\bar{b},\] so $F$ induces a partial automorphism fixing $P$ pointwise with domain the set \[\big(\pi^{-1}(\bar{\alpha})\times\pi^{-1}(\bar{\beta})\big)\cup P.\]
Given an element $c$ in $S$, it is by construction algebraic over $\bar a, \bar b$, so the average of its conjugates is definable over $\bar a, \bar b$, by Lemma \ref{L:mean}. Thus, every element of $S$ is definable over $\bar{a}, \bar{b}, P$. Therefore the above partial automorphism extends uniquely to an automorphism $\sigma$ in $\textnormal{Aut}(\mathcal{M}/P)$. \begin{claim*} The automorphism $\sigma$ only fixes the definable closure of $P$ in $S$. \end{claim*} \begin{claimproof*}
Since $\sigma$ fixes the sort $P$, it suffices to show that every element $c$ of $S$ fixed by $\sigma$ whose second coordinate is $0$, that is, $c=(\pi(c),0)$, is definable over $P$. Otherwise, choose subtuples of least possible length \[ \hat{a}=(a_{i_1},\ldots,a_{i_m}) \ \ \textnormal{ and } \ \ \hat{b}=(b_{j_1},\ldots,b_{j_n}) \] of $\bar{a}$ and $\bar{b}$ such that $c$ is definable over $\hat{a},\hat{b},P$. Note that $\max(n,m)>0$ and that every element in the fiber of $\pi(c)$ is definable over $\hat{a},\hat{b},P$. The type \[ p=\textnormal{tp}(\hat{a},\hat{b},c/\hat{\alpha},\hat{\beta},\pi(c)) \] is fundamental and stationary by Lemma \ref{L:stat}. Its Galois group $G$ is a definable additive subgroup of $\mathbb{C}^{m+n+1}$, by Remark \ref{R:Gal}. If $\pi(c)$ is not algebraic over $\hat{\alpha},\hat{\beta}$, Lemma \ref{L:indep} yields that $c \mathop{\mathpalette\Ind{}} \hat{a},\hat{b}$, so the type $\textnormal{stp}(c)$ is $P$-internal and hence so is (the generic element in the fiber $\pi^{-1}(\pi(c))$ of) $S$, contradicting our assumption. Since the Galois group $G$ of $p$ is definable over $\{ \hat{\alpha},\hat{\beta},\pi(c) \}$, we deduce that it is definable over \[A=\textnormal{acl}(\hat{\alpha}, {\langle \alpha \rangle}_{j_1},\ldots,{\langle \alpha \rangle}_{j_n})\supset \{ \hat{\alpha},\hat{\beta}\}.\] The group $G$ is given by a system $\mathcal{G}$ of linear equations of the form \begin{equation*}
\lambda_{1} \cdot x_{1}+\dots+\lambda_{m+n+1} \cdot x_{m+n+1}=0, \end{equation*} with coefficients $\lambda_{i}$ in $A$ and the tuple \[ ( F(\alpha_{i_1}),\ldots, F(\alpha_{i_m}), F(\beta_{j_1}),\ldots, F(\beta_{j_n}), 0) \] is a solution. Set now $\gamma= \Phi\big( (\hat{\alpha}, {\langle \alpha \rangle}_{j_1},\ldots,{\langle \alpha
\rangle}_{j_n})\big) <2^{\aleph_0}$. If $\alpha_\gamma=\alpha_{i_k}$
for some $1\le k\le m$, denote $i(\gamma)=i_k=\gamma$. Otherwise set
$i(\gamma)=j_\ell$ if $1\le \ell\le n$ is the least index such that
$\alpha_\gamma$ is an element in the tuple
${\langle \alpha \rangle}_{j_\ell}$.
Observe that there is a linear equation in the system $\mathcal{G}$ such that the coefficient $\lambda_{i(\gamma)}$ is non-trivial, for otherwise every automorphism in $\textnormal{Aut}(\mathcal{M}/P)$ fixing all coordinates except (possibly) the element $d_{i(\gamma)}$, which is the $i(\gamma)^\text{th}$-coordinate of the tuple $(\hat{a},\hat{b})$, must also fix $c$, contradicting the minimality of $m$ and $n$. The set \[ B=\{ F(\alpha_{i_1}),\ldots, F(\alpha_{i_m}), F(\beta_{j_1}),\ldots, F(\beta_{j_n}) \} \] consists of distinct elements, by the injectivity of $F$. Therefore, it suffices to show that the element $F(d_{i(\gamma)})$ is not algebraic over \[ A\cup \big(B\backslash \{ F(d_{i(\gamma)}) \}\big) \] to reach the desired contradiction. For this we need only show that the element $F(d_{i(\gamma)})$ is not contained in the set \[ \{ \hat{\alpha}, {\langle \alpha \rangle}_{j_1},\ldots,{\langle \alpha
\rangle}_{j_n} \}. \]
If $d_{i(\gamma)}=\alpha_{i(\gamma)}$, we obtain the result since \begin{align*} \Phi(F(d_{i(\gamma)}))&=\Phi(F(\alpha_\gamma)) \\ &=\Phi(\alpha_{\omega^{\gamma+1}}) \\ &=\omega^{\gamma+1}\geq\gamma+1 \\ &>\gamma= \Phi\big( (\hat{\alpha}, {\langle \alpha \rangle}_{j_1},\ldots,{\langle \alpha
\rangle}_{j_n})\big). \end{align*}
Otherwise $d_{i(\gamma)}=\beta_{i(\gamma)}$, so \begin{align*} \Phi(F(d_{i(\gamma)}))&=\Phi(F(\beta_{i(\gamma)}))\\ &= \omega^{\max\left({i(\gamma)},\Phi({\langle \alpha
\rangle}_{i(\gamma)})\right)+1}+\omega^{{i(\gamma)}} \\
&> \omega^{ \Phi({\langle \alpha
\rangle}_{i(\gamma)}) +1 } = \omega^{\gamma+1}, \end{align*} and we conclude as in the first case. \end{claimproof*} ~\end{proof}
Whenever the sort $S$ is not $P$-internal, the additive cover does not eliminate imaginaries. The situation is different for finite imaginaries: we will see below that the additive cover $\mathcal{M}_0$ does not eliminate finite imaginaries, whereas the additive cover $\mathcal{M}_{1}$ does.
\begin{remark}\label{R:FiniteImagM0}
Let $\alpha$ and $\beta$ be two generic independent elements in the sort $P$. The finite subset
$\{(\alpha,0),(\beta,0)\}$ of $S$ has no real canonical parameter in
$\mathcal{M}_{0}$. \end{remark}
\begin{proof}
Assume that the tuple $e$ is a real canonical parameter of the set
$\{(\alpha,
0),(\beta, 0)\}$. Since the tuple $e$ is clearly definable over
$(\alpha,0),
(\beta, 0), P$, the projection $\pi(c)$ of every element $c$ in $S$
appearing in $e$
(if any) must be contained in the $\mathbb{Q}$-vector space generated
by $\alpha$ and $\beta$.
There is an automorphism $\tau$ of $P$ extending the
non-trivial permutation of the set $\{\alpha, \beta\}$, so it is easy
to show that there is a rational number $q$ such that
$\pi(c)=q\cdot(\alpha+\beta)$. Hence, the tuple $e$ is definable over
$(\alpha+\beta,0),P$.
Therefore, any additive map $F$ with $F(\alpha)=1$ and
$F(\beta)=-1$ induces an automorphism $\sigma_F$ fixing $e$, yet it
does not permute $\{(\alpha,0),(\beta,0)\}$. ~\end{proof}
In order to show that the additive cover $\mathcal{M}_1$ eliminates finite imaginaries, we first provide a sufficient condition.
\begin{proposition}\label{P:generalimag}
An additive cover $\mathcal{M}$ eliminates
finite imaginaries, whenever every finite subset of $S$ on which $\pi$ is
injective has a real
canonical parameter. \end{proposition} \begin{proof}
Let $A$ be the finite set $\{\bar{a}_1,\ldots,\bar{a}_n\}$ of
real $m$-tuples.
Every function $\Phi:\{1,\ldots, m\}\longrightarrow\{P,S\}$ determines a subset $A_{\Phi}$ of $A$, consisting of those tuples whose $j^\text{th}$-coordinate lies in the sort $\Phi(j)$ for each $1\leq j\leq m$. Every automorphism permuting $A$ permutes each $A_{\Phi}$, so we may assume that for every tuple in $A$, the coordinates have the same configuration (given by the function $\Phi$).
Since the canonical parameter is only determined up to
interdefinability, we may further assume (after possibly permuting the
coordinates) that there is a natural number $0\leq k\leq m$ such that
for each tuple $\bar {a}_i$ in $A$:
\begin{itemize}
\item The $j^\text{th}$-coordinate $a_{i}^{j}$ lies in $S$ for
$1\leq j\leq k$.
\item The $\ell^\text{th}$-coordinate $a_{i}^{\ell}$ lies in $P$
for $k< \ell\leq m$.
\end{itemize}
For every coordinate $1\leq j\leq k$ set $A^j=\{a_{i}^{j} \ | \ 1\leq i\leq n \}$ and let $d_{i}^{j}$ be the average of the subset $A^j \cap \pi^{-1}(\pi(a_{i}^{j}))$. For
$1\leq i\leq
n$ let now $\varepsilon_{i}^{j}$ be the unique element in $P$ with
$a_{i}^{j}=\varepsilon_{i}^{j}
\star d_{i}^{j}$. Consider the tuples
$\varepsilon_{i}=(\varepsilon_{i}^{1},\ldots,\varepsilon_{i}^{k})$
and
\[
\alpha_i=(\pi(a_{i}^{1}),\ldots,\pi(a_{i}^{k}),a_{i}^{k+1},\ldots,a_{i}^{m})\]
in $P$. We need only show that the tuple
\[ e=\big( \ulcorner \{(\varepsilon_{1},\alpha_{1}),\ldots,
(\varepsilon_{n},\alpha_{n})
\}\urcorner,\ulcorner
\{d_{1}^{1},\ldots,d_{n}^{1}\}\urcorner,\ldots,\ulcorner
\{d_{1}^{k},\ldots,d_{n}^{k}\}\urcorner \big) \]
is a canonical parameter of $A$. Note that $e$ is a real tuple since
the sets
$\{d_{1}^{j},\ldots,d_{n}^{j}\}$ have real canonical parameters, by
our assumption.
Let $\sigma$ be an automorphism. If $\sigma$ permutes the set $A$,
Lemma \ref{L:mean} yields
that $\sigma$ permutes each set
$\{d_{1}^{j},\ldots,d_{n}^{j}\}$ since the image of $A^j \cap \pi^{-1}(\pi(a_{i}^{j}))$ under $\sigma$ is $A^j \cap \pi^{-1}(\pi(a_{i(\sigma)}^{j}))$ for some index $i(\sigma)$ with $\sigma(a_{i}^{j})=a_{i(\sigma)}^{j}$ and $\sigma(\alpha_i)=\alpha_{i(\sigma)}$. Therefore
$\sigma(\varepsilon_{i})=\varepsilon_{i(\sigma)}$, since
$\sigma(d_{i}^{j})=d_{i(\sigma)}^{j}$. Hence $\sigma$ fixes $e$.
Assume now that $\sigma$ fixes the tuple $e$. The tuple $\alpha_i$ is mapped to $\alpha_{i(\sigma)}$ and \[ \sigma(a_{i}^{j})=\sigma(\varepsilon_{i}^{j}) \star \sigma(d_{i}^{j})=\varepsilon_{i(\sigma)}^{j}\star \sigma(d_{i}^{j}). \] It suffices to show that $\sigma(d_{i}^{j})=d_{i(\sigma)}^{j}$ to conclude that $\sigma$ permutes $A$. This follows immediately from
\[\pi(\sigma(d_{i}^{j}))=\sigma(\pi(d_{i}^{j}))=
\sigma(\alpha_{i}^{j})=\alpha_{i(\sigma)}^{j},\] since $\sigma$
permutes the set $\{d_{1}^{j},\ldots,d_{n}^{j}\}$. ~\end{proof}
Thus, we will deduce that the additive cover $\mathcal{M}_1$ eliminates finite imaginaries, by applying Proposition \ref{P:generalimag}, lifting the corresponding canonical parameters of finite subsets of $P$ to $S$ using the ring operations.
\begin{corollary}\label{C:finiteImag}
The additive cover $\mathcal{M}_{1}$ eliminates finite imaginaries. \end{corollary} \begin{proof}
By Proposition \ref{P:generalimag}, we need only show that every
finite
subset
$\{a_1,\ldots,a_n\}$ of $S$, with pairwise distinct projections
$\pi(a_i)=\alpha_i$, has a real canonical parameter.
For $1\leq i\leq n$ lift the $i^\text{\,th}$ elementary symmetric function to $S$:
\begin{equation*}
\tag{$\spadesuit$}
b_i=\sum_{1\leq j_1<\dots<j_i\leq n}a_{j_1}\otimes\cdots\otimes
a_{j_i}.
\end{equation*}
We
claim that the tuple $b=(b_1,\ldots,b_n)$ is a canonical parameter of
the set
$A=\{a_1,\ldots,a_n\}$. If the automorphism
$\sigma$ permutes $A$, then it clearly fixes $b$.
Assume now that $\sigma$ fixes the tuple $b$. Write each element $a_i$
of $A$ as $a_i=(\alpha_i,a_{i}')$, and similarly
$b_i=(\beta_i,b_{i}')$.
In the full structure
$(\mathbb{C},\mathbb{C}\times\mathbb{C})$ the definable condition
($\spadesuit$) uniquely translates into
\[ \beta_i=\sum_{1\leq j_1<\dots<j_i\leq
n}\alpha_{j_1}\cdots\alpha_{j_i}\]
and the system of linear equations:
\begin{equation*}
\begin{pmatrix}
1 & 1 & \cdots & 1 \\
\sum\limits_{j\neq 1}\alpha_j & \sum\limits_{j\neq 2}\alpha_j
& \cdots & \sum\limits_{j\neq n}\alpha_j \\
\sum\limits_{\substack{j_1<j_2 \\ j_1,j_2\neq 1}}\alpha_{j_1}
\alpha_{j_2}
& \sum\limits_{\substack{j_1<j_2 \\ j_1,j_2\neq
2}}\alpha_{j_1} \alpha_{j_2}
& \cdots
& \sum\limits_{\substack{j_1<j_2 \\ j_1,j_2\neq
n}}\alpha_{j_1} \alpha_{j_2} \\
\vdots & \vdots & \ddots & \vdots \\
\prod\limits_{j\neq 1}\alpha_j & \prod\limits_{j\neq
2}\alpha_j & \cdots & \prod\limits_{j\neq n}\alpha_j
\end{pmatrix}
\begin{pmatrix}
a_1' \\ a_2' \\ \vdots \\ \vdots \\ \vdots \\ a_n'
\end{pmatrix}
=
\begin{pmatrix}
b_1' \\ b_2' \\ \vdots \\ \vdots \\ \vdots \\ b_n'
\end{pmatrix}
\end{equation*}
Since the tuple $(\beta_1,\ldots,\beta_n)$ encodes the finite set
$\{\alpha_1,\ldots,\alpha_n\}$ and the above
matrix has determinant $\prod_{i<j}(\alpha_i-\alpha_j)\neq 0$,
we
conclude that the automorphism $\sigma$ permutes the set $A$. ~\end{proof}
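To illustrate the previous proof in the simplest case $n=2$ (a worked instance, not needed for the argument): here $b_1=a_1\oplus a_2=(\alpha_1+\alpha_2,a_1'+a_2')$ and $b_2=a_1\otimes a_2=(\alpha_1\alpha_2,\alpha_1 a_2'+\alpha_2 a_1')$, so $(\beta_1,\beta_2)$ encodes the set $\{\alpha_1,\alpha_2\}$, and the system above reads \[ \begin{pmatrix} 1 & 1 \\ \alpha_2 & \alpha_1 \end{pmatrix} \begin{pmatrix} a_1' \\ a_2' \end{pmatrix} = \begin{pmatrix} b_1' \\ b_2' \end{pmatrix}, \] with determinant $\alpha_1-\alpha_2\neq 0$; hence any automorphism fixing $(b_1,b_2)$ must permute $\{a_1,a_2\}$.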
\section{The CBP in additive covers}\label{S:CBP_addcovers}
As already stated in Remark \ref{R: Cex}, the CBP does not hold in the
additive cover $\mathcal{M}_1$ (see Example
\ref{E:covers}). For the sake of completeness, we will
now sketch a proof, using the
terminology introduced so far. For generic independent
elements $a$, $b$ and $c$ in $S$, set $d=(a \otimes c) \oplus b$.
Assuming the CBP, the type
$\textnormal{stp}(a/c,d)$ is almost $P$-internal, since $\textnormal{Cb}(c,d/a,b)=(a,b)$. As
the elements $a$, $c$ and $d$ are again (generic)
independent, we conclude that the type $\textnormal{stp}(a)$ is almost
$P$-internal, contradicting the fact that $S$ is not
almost $P$-internal.
The above is a lifting to the sort $S$ of a configuration witnessing that the field $P$ is not one-based. We will now present another proof that the additive cover $\mathcal{M}_1$ does not have the CBP, using the so-called group version of the CBP, which already appeared in \cite[Theorem 4.1]{KP06}.
\begin{fact}\label{F:groupCBP}\textup{(}\cite[Fact 1.3]{HPP13}\textup{)}
Let $G$ be a definable group in a theory with the CBP. Whenever $a$ is
in $G$ and the type $p=\textnormal{tp}(a/A)$ has finite stabilizer, then $p$ is
almost internal to the family of all non-locally modular minimal types. \end{fact}
The failure of the group version of the CBP is another example of such a lifting approach: consider two generic independent elements $a$ and $b$ of $S$, and set $c=a\otimes b$. It is easy to see that $\textnormal{stp}(a,b,c)$ has trivial stabilizer in the definable group $(S^3,\oplus)$, so the above Fact \ref{F:groupCBP} yields, assuming the CBP, that $S$ is almost $P$-internal, which is a contradiction.
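To sketch the stabilizer computation (an illustration; the details are not used later): if $(u,v,w)$ lies in the stabilizer, then $(a\oplus u)\otimes(b\oplus v)=c\oplus w$ for $(a,b,c)$ realizing the type independently from $(u,v,w)$, that is, \[ (a\otimes v)\oplus(b\otimes u)\oplus(u\otimes v)=w. \] Comparing first coordinates for generic $\pi(a)$ and $\pi(b)$ forces $\pi(u)=\pi(v)=\pi(w)=0$, and then $w=\big(0,\pi(a)v'+\pi(b)u'\big)$ can only be independent of $(a,b)$ if $u'=v'=0$, whence also $w=(0,0)$.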
Now we will see that the additive cover $\mathcal{M}_1$ is already determined by its automorphism group over $P$.
\begin{proposition}\label{P:M1AutVersion}
If $\mathcal{M}$ is an additive cover such that
$\textnormal{Aut}(\mathcal{M}/P)$ corresponds to the group of derivations on
$\mathbb{C}$, then the product
\[(\alpha,a')\otimes (\beta,b')=(\alpha\beta,\alpha b' + \beta a')\]
is definable in $\mathcal{M}$. \end{proposition} \begin{proof}
Choose two generic independent elements
$\alpha$ and $\beta$ in $P$ and consider the elements
$a=(\alpha,0),b=(\beta,0)$ and $c=(\alpha\beta,0)$
in $S$. The type $\textnormal{tp}(a,b,c/\alpha,\beta,\alpha\beta)$ is
$P$-internal and stationary, by Lemma \ref{L:stat}.
Since every element in its Galois group corresponds to a derivation,
we deduce
that for all elements
\[
\tilde a=(\alpha,a'), \tilde b=(\beta,b') \ \ \textnormal{ and } \ \
\tilde c=(\alpha\beta,c')
\]
in $S$, we have that $a,b,c \equiv_{P} \tilde a,\tilde b,\tilde c$ if
and only
if $c'=\alpha
b'+\beta a'$.
Therefore $c$ is definable over $a,b,P$.
In fact, we obtain that $c$ is definable over $a, b$:
Let $\bar{\gamma}$ be a tuple of elements in $P$ such that $c$ is
definable over $a,b,\bar{\gamma}$.
Now let $\bar{\varepsilon}$ be a maximal subtuple of $\bar{\gamma}$
such that
\[\bar{\varepsilon} \mathop{\mathpalette\Ind{}}_{\alpha,\beta}a,b,c.\]
Note that $\bar{\gamma}\backslash\bar{\varepsilon}$ is algebraic over
$\bar{\varepsilon},a,b,c$. Therefore Remark $\ref{R:algFib}$
implies that
$\bar{\gamma}\backslash\bar{\varepsilon}$ is algebraic over
$\bar{\varepsilon},\alpha,\beta$.
Hence $c$ is algebraic over $a,b,\bar{\varepsilon}$ and so, by independence, we deduce that $c$ is algebraic over $a,b$.
The average $(\alpha\beta,e')$ of the finite set consisting of the
$\{a,b\}$-conjugates of $c$ is
definable over $a,b$. Similarly as in the proof of Lemma \ref{L:stat},
we deduce that
$e'$ is definable over $\alpha,\beta$. Hence,
\[ c=(-e')\star (\alpha\beta,e') \]
is definable over $a,b$.
Let $\varphi(x,y,z)$ be
a formula such that $c$ is the unique realization of $\varphi(a,b,z)$.
For two generic independent elements
$a_1=(\alpha_1,a_1')$ and $b_1=(\beta_1,b_1')$ in $S$, choose a
derivation $D$ with $D(\alpha_1)=-a_1'$
and $D(\beta_1)=-b_1'$ and let $\sigma_D$ be the induced automorphism
in $\textnormal{Aut}(\mathcal{M}/P)$. Furthermore, take a
field automorphism $\tau$ of $P$ with $\tau(\alpha_1)=\alpha$
and $\tau(\beta_1)=\beta$ and let $\sigma_{\tau}$ be the induced
automorphism of the additive cover as in Remark \ref{R:algFib}.
Since $\sigma_{\tau}(\sigma_{D}(a_1,b_1))=(a,b)$ and \[(\sigma_{\tau}\sigma_{D})^{-1}(c)=\sigma_{D}^{-1}(\alpha_1\beta_1,0)=\big(\alpha_1\beta_1,-D(\alpha_1\beta_1)\big)=(\alpha_1\beta_1,\alpha_1 b_1'+\beta_1 a_1')=a_1\otimes b_1,\] we deduce that $\mathcal{M}\models\varphi(a_1,b_1,c_1)$ if and only if $c_1=a_1\otimes b_1$.
Now we show that the multiplication $\otimes$ is globally definable,
following
the field version in Marker and Pillay's work \cite[Fact 1.5]{MP90}.
Set
\[X=\{ a \ | \ \varphi(\varepsilon\star a,b,(\varepsilon \star
a)\otimes b) \text{ for generic } b \text{ independent from } a \text{
and every } \varepsilon \text{ in }P \}.\]
Note that $\pi(X)$ is cofinite and $\pi^{-1}[\pi(a)]$ is contained in
$X$ for every $a$ in $X$.
Note that $a=b$ if and only if they define the same germ, that is
$a\otimes c=b\otimes c$ for generic $c$ independent from $a$ and $b$,
since generic elements have an inverse.
Write $P\backslash \pi(X)=\{\gamma_1,\ldots,\gamma_k\}$ for the finite complement.
For $1\leq i\leq k$ choose $\alpha_i$ and $\beta_i$ in $\pi(X)$ such
that $\gamma_i=\alpha_i\beta_i$.
Using the elements $(\gamma_i,0),(\alpha_i,0),(\beta_i,0)$ as
parameters, we can uniformly identify every element in the fiber of
$\gamma_i$
with the product of two elements in $X$, namely
$(\gamma_i,c')=(\alpha_i,0)\otimes \big(\varepsilon\star
(\beta_i,0)\big)$, where
$\varepsilon$ is the unique element in $P$ such that
$(\varepsilon\alpha_i)\star(\gamma_i,0)=(\gamma_i,c')$. Now we can
define the multiplication $\otimes$ globally as the composition of
germs of
elements in $X$.
~\end{proof}
We will now show that the CBP holds in the additive cover $\mathcal{M}_0$ and more generally whenever there is essentially no additional structure on the sort $S$.
\begin{proposition}\label{P:CBPpureCover}
The CBP holds in an additive cover $\mathcal{M}$, whenever every
additive map on $\mathbb{C}$ induces an automorphism in
$\textnormal{Aut}(\mathcal{M}/P)$. \end{proposition} In particular, the additive cover $\mathcal{M}_0$ has the CBP. \begin{proof}
Recall that we need only consider real types over models in order to
deduce that the CBP holds.
Let $p(x)$ be the type of some finite real tuple $\bar{a}$ of length
$k$
over an elementary substructure $N$. In order to show that
the type
$\textnormal{stp}(\textnormal{Cb}(p)/\bar{a})$ is almost
$P$-internal, choose a formula $\varphi(x;\bar{b},\bar{\gamma})$ in
$p$ of least Morley rank and Morley degree one, where $\bar{b}$ is a
tuple of elements in $S\cap N$ and $\bar{\gamma}$ is a tuple of
elements
in
$P \cap N$.
We claim that every automorphism in
$\textnormal{Aut}(\mathcal{M}/P,\bar{a})$ fixes the canonical base $\textnormal{Cb}(p)$, which
is (interdefinable with) the canonical parameter $\ulcorner
\textnormal{d}_{p} x
\varphi(x;y)\urcorner$. For this, it suffices to show that every
such automorphism sends the tuple
$\bar{b}$ to another realization of the formula
$\textnormal{d}_{p} x \varphi(x; y_1, \gamma)$.
Write $\bar{a}=(a_1,\ldots,a_k)$ and \[ \alpha_i=\begin{cases} \pi(a_i) & \text{if $a_i$ is in $S$,}\\ a_i & \text{otherwise.} \end{cases} \] For
$\bar{b}=(b_1,\ldots,b_n)$, set
$\beta_i=\pi(b_i)$.
We may assume (after possibly reordering) that
$(\beta_1,\ldots,\beta_m)$ is a maximal subtuple of $\bar{\beta}$
which
is $\mathbb{Q}$-linearly independent over $\bar{\alpha}$. So,
\[ \beta_j = \sum\limits_{i=1}^{m}q_i\cdot\beta_i +
\sum\limits_{i=1}^{k}r_i\cdot\alpha_i \]
for $m+1\leq j\leq n$ and rational numbers $q_i$ and $r_i$. In order
to show that $\bar{b}$ is mapped by the automorphism
$\sigma$ of $\textnormal{Aut}(\mathcal{M}/P,\bar{a})$ to another realization of
the formula
$\textnormal{d}_{p} x \varphi(x;y_1, \gamma)$, it suffices to show
that
\[ N\models \forall \varepsilon_1,\ldots,\varepsilon_m \in P\ \
\textnormal{d}_{p} x \varphi(x;\bar{\varepsilon}\star\bar{b},\gamma)
\]
where $\bar{\varepsilon}=(\varepsilon_1,\ldots,\varepsilon_n)$ with
\[ \varepsilon_j = \sum\limits_{i=1}^{m}q_i\cdot\varepsilon_i \]
for $m+1\leq j\leq n$. Indeed: since $N$ is an elementary
substructure of $\mathcal{M}$, the above implies that \[
\mathcal{M}\models \forall \varepsilon_1,\ldots,\varepsilon_m \in
P\ \ \textnormal{d}_{p} x
\varphi(x,\bar{\varepsilon}\star\bar{b},\bar{\gamma}), \] so
$\sigma(\bar b)= F_\sigma(\bar \beta)\star \bar b$ realizes $\textnormal{d}_{p} x \varphi(x;\bar y_1,\gamma)$, as desired.
So, let $\varepsilon_1,\ldots,\varepsilon_m$ be in $P\cap N$ and set
$\varepsilon_j = \sum_{i=1}^{m}q_i\cdot\varepsilon_i
$ for $m+1\leq j\leq n$. Choose an additive map $G$ vanishing on $\alpha_i$ for $1\leq i\leq k$ and with $G(\beta_i)=\varepsilon_i$ for $1\leq i \leq m$. Hence \[G(\beta_j)= \sum\limits_{i=1}^{m}q_i\cdot\varepsilon_i=\varepsilon_j , \] so the image of $\bar b$ under the automorphism $\sigma_G$ induced by $G$ lies in $N$.
Hence $\sigma_{G}(\bar{b})=\bar \varepsilon \star \bar b$ realizes
$\textnormal{d}_{p} x \varphi(x,y_1,\gamma)$ since
$\sigma_{G}(\bar{a})=\bar{a}$, as desired. ~\end{proof}
\begin{remark}\label{R:rsCBP}
The above proof shows that the canonical base of a real stationary
type $\textnormal{stp}(a/B)$ is definable over $a,P$, which is stronger than $P$-internality. As we will see below, this does not hold for all
imaginary types. \end{remark}
Palac\'in and Pillay~\cite{PP17} considered a strengthening of the CBP, called the \textit{strong canonical base property}, which we reformulate in the setting of additive covers: given a (possibly imaginary) type $p=\textnormal{stp}(a/B)$, its canonical base $\textnormal{Cb}(p)$ is algebraic over $a, \bar d$, where $\textnormal{stp}(\bar d)$ is $P$-internal. If we denote by $\mathcal{Q}$ the family of types over $\textnormal{acl}^{\textnormal{eq}}(\emptyset)$ which are $P$-internal, then the strong CBP holds if and only if every Galois group $G$ relative to $\mathcal{Q}$ is \emph{rigid} \cite[Theorem 3.4]{PP17}, that is, the connected component of every definable subgroup of $G$ is definable over $\textnormal{acl}(\ulcorner G\urcorner)$.
Notice that no additive cover where the sort $S$ is not almost $P$-internal can have the strong CBP: For the two generic independent elements $a=(\alpha,0)$ and $b=(\beta,0)$ in $S$, the stationary $P$-internal type $\textnormal{tp}(a,b/\alpha,\beta)$ is fundamental and has Galois group $(\mathbb{C}^2,+)$. This is clearly a $\mathcal{Q}$-internal type whose Galois group $G$ (relative to $\mathcal{Q}$) is a definable subgroup of $(\mathbb{C}^2,+)$. Since vector groups are never rigid, it suffices to show that $G=\mathbb{C}^2$ (compare to \cite[Proposition 4.9]{JJP20}). Otherwise, the element $b$ is algebraic over $a,\bar{d}$, where $\textnormal{stp}(\bar d)$ belongs to $\mathcal Q$ (up to permutation of $a$ and $b$). Hence, the type $\textnormal{stp}(b/a)$, and thus $S$, is almost $P$-internal.
The question whether a Galois-theoretic interpretation of the CBP exists arose in \cite{PP17}. We conclude this section by showing that no \textit{pure} Galois-theoretic account of the CBP can be provided. We already noticed in Remark \ref{R:Gal} that, whenever the sort $S$ in an additive cover is not almost $P$-internal, then the Galois groups relative to $P$ are precisely all definable subgroups of $(\mathbb{C}^n,+)$, as $n$ varies. In particular, all such additive covers share the same Galois groups (relative to $P$). We will now see that the same holds for the Galois groups relative to $\mathcal{Q}$.
\begin{lemma}\label{L:GalAccount}
All additive covers where the sort $S$ is
not almost $P$-internal share
the same Galois groups relative to $\mathcal{Q}$. \end{lemma} \begin{proof}
Note that $\mathcal{Q}$-internality coincides with $P$-internality.
Moreover,
the Galois group relative to $\mathcal{Q}$ is a subgroup of the Galois
group relative to $P$, which by Remark \ref{R:Gal} is a definable
subgroup of some $(\mathbb{C}^n,+)$. So it suffices to show that
every definable subgroup $G$ of $(\mathbb{C}^n,+)$
appears as a Galois group relative to $\mathcal{Q}$.
Choose a tuple $\bar{a}$ of elements
$a_1=(\alpha_1,0),\ldots,a_n=(\alpha_n,0)$ in the sort
$S$ with generic independent elements $\alpha_i$ in $P$ and
set
\[ E = \{ \bar{x}\in S^n \ |\ \exists \bar{g}\in G \bigwedge_{i=1}^{n}
g_i \star a_i = x_i \}. \]
The proof of Remark \ref{R:Gal} shows that the stationary type
$\textnormal{stp}(\bar a/\ulcorner E\urcorner)$
is $P$-internal and fundamental with Galois group $G$. Moreover, for
every set $B$ of parameters we have that \[\textnormal{stp}(\bar a/\ulcorner
E\urcorner, B) \vdash
\textnormal{tp}(\bar a/\ulcorner E\urcorner,B,P).\]
We now show that the Galois group $H$ relative to $\mathcal{Q}$ equals
$G$. Assume for a contradiction that $H$ is a
proper subgroup of $G$. The group $G$ (and $H$ relative to $G$) is
given by a system of linear equations in echelon form, so we find an
index $1\le k\le n$ and a tuple $\bar d$ with $\textnormal{stp}(\bar d)$
$P$-internal such that the element $a_k$ is not algebraic over
$\bar{a}_{>k},\ulcorner E\urcorner$, yet it is algebraic over
$\bar{a}_{>k},\ulcorner
E\urcorner,\bar d$.
By $P$-internality of $\textnormal{stp}(\bar d)$, there is a set of
parameters $C$ with $C \mathop{\mathpalette\Ind{}} \bar{d},\bar{a},\ulcorner E\urcorner$ such
that $\bar d$ is
definable over $C,P$.
The above yields that $a_k$ is algebraic over $\bar{a}_{>k},\ulcorner
E\urcorner,C,P$ and therefore over $\bar{a}_{>k},\ulcorner
E\urcorner$, which yields the desired contradiction. \end{proof}
\section{Preservation of internality in additive covers}\label{S:PI_addcovers}
In this section we will show that the additive cover $\mathcal{M}_1$ preserves internality neither on intersections nor on quotients. We will start with the latter, whose proof is considerably simpler.
\begin{proposition}\label{P: M1PropB}
The additive cover $\mathcal{M}_1$ does not preserve internality on
quotients. \end{proposition} \begin{proof}
Choose generic independent elements $a,b$ and $c$ in $S$ and set
$d=(a\otimes c)\oplus b$.
Consider now the following definable set:
\begin{align*}
E &= \{ (x,y)\in S^2 \ |\ \pi(x)=\pi(a) \ \& \ \pi(y)=\pi(b) \ \& \
d=(x\otimes c)\oplus y \}
\end{align*}
Since the canonical parameter
$\ulcorner E\urcorner$ is clearly definable over $c,d,\pi(a),\pi(b)$
and the type $\textnormal{stp}(c,d,\pi(a),\pi(b)/\pi(c),\pi(d))$
is $P$-internal,
we deduce that the type \[ \textnormal{stp}(\ulcorner E\urcorner/\pi(c),\pi(d))\]
is $P$-internal.
\begin{claim*}
The type $\textnormal{stp}(\ulcorner E\urcorner / \pi(a),\pi(b))$ is
$P$-internal.
\end{claim*}
\begin{claimproof*}
Choose elements $a_1$ and $b_1$ in the fiber of $\pi(a)$, resp.
$\pi(b)$, such that
\[ a_1,b_1 \mathop{\mathpalette\Ind{}}_{\pi(a),\pi(b)} \ulcorner E\urcorner. \]
Note that every automorphism $\sigma$ in $\textnormal{Aut}(\mathcal{M}_1/P)$
fixing the elements $a_1$ and $b_1$ must fix
$\pi^{-1}(\pi(a))\times\pi^{-1}(\pi(b))$, so $\sigma$
permutes $E$. In particular, the canonical parameter $ \ulcorner
E\urcorner$ is definable over $a_1,b_1,P$, as desired.
\end{claimproof*}
We assume now that $\mathcal M_1$ preserves internality on
quotients in order to
reach a
contradiction. Since
\[\textnormal{acl}^{\textnormal{eq}}\big(\pi(a),\pi(b)\big)\cap\textnormal{acl}^{\textnormal{eq}}\big(\pi(c),\pi(d)\big)=\textnormal{acl}^{\textnormal{eq}}(\emptyset),\]
we deduce
that the type $\textnormal{stp}(\ulcorner E\urcorner)$
is almost $P$-internal.
Therefore there is a real subset $C$ of $S$ with
$C \mathop{\mathpalette\Ind{}} \ulcorner E\urcorner$ such that the canonical parameter
$\ulcorner E\urcorner$ is algebraic over $C,P$. Note that in
particular
\[\pi(C),\pi(a) \mathop{\mathpalette\Ind{}} \pi(b).\]
Choose now a derivation $D$ vanishing both on $\pi(C)$ and on
$\pi(a)$ with
$D(\pi(b))=1$. The induced automorphism $\sigma_D$ fixes $C$ and $P$
pointwise but $\ulcorner E\urcorner$ has an infinite orbit, yielding
the desired contradiction.
~\end{proof} \begin{remark}
The previous set is definable in every additive cover, since $E$ equals
\[
\{ (x,y)\in S^2 \ |\ \exists\big(\lambda,\mu)\in P^2 ( \lambda\star
a = x \ \& \ \mu\star b= y \ \& \ \lambda \cdot \pi(c)+\mu=0
\big) \}.
\]
The main cause for the failure of preservation of internality on
quotients is that $E$ is definable over $c,d,P$ in $\mathcal{M}_1$. \end{remark}
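The identification of the two descriptions of $E$ can be checked directly (a small computation included here for convenience): for $x=\lambda\star a$ and $y=\mu\star b$ we have \[ (x\otimes c)\oplus y=\big((a\otimes c)\oplus b\big)\oplus\big(0,\lambda\cdot\pi(c)+\mu\big)=d\oplus\big(0,\lambda\cdot\pi(c)+\mu\big), \] so the condition $d=(x\otimes c)\oplus y$ holds precisely when $\lambda\cdot\pi(c)+\mu=0$.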
\begin{proposition}\label{P: M1PropA}
The additive cover $\mathcal{M}_1$ does not preserve internality on
intersections. \end{proposition} \begin{proof}
Choose generic independent elements $a_1$ and $a_2$ in $S$ and
$\varepsilon$ in $P$ generic over $a_1, a_2$. Set
$\bar\alpha=(\alpha_1,\alpha_2)=(\pi(a_1),\pi(a_2))$.
Consider now the definable set
\[ E= \{ (x,y)\in S^2 \ |\ \exists(\lambda,\mu)\in P^2 ( \lambda\star a_1=x \ \& \ \mu\star a_2=y \ \& \ \varepsilon\cdot\lambda+\mu=0 ) \}. \]
Choose $\beta_1$ in $P$ generic over $
\ulcorner
E\urcorner, \bar\alpha,\varepsilon$ as well as elements
$\beta_2$ and $\beta_3$ in $P$ with
\begin{align}
0&=\beta_1 \alpha_1 + \frac{1}{2}\beta_2 \alpha_{1}^2 +
\frac{1}{3}\beta_3 \alpha_{1}^3+\alpha_2 \\
0&=\beta_1 +\beta_2 \alpha_1 + \beta_3 \alpha_{1}^2 - \varepsilon
\end{align}
This is possible because the matrix
\[ \begin{pmatrix}
\frac{\alpha_1^2}{2} & \frac{\alpha_{1}^3}{3} \\
\alpha_1 & \alpha_{1}^2
\end{pmatrix}\]
has determinant $\frac{\alpha_{1}^4}{2}-\frac{\alpha_{1}^4}{3}=\frac{\alpha_{1}^4}{6}\neq 0$.
Since $\beta_2$ and $\beta_3$ are definable over
$\beta_1,\bar\alpha,\varepsilon$, we get the independence
\begin{equation*}
\tag{$\blacklozenge$}
\bar\beta \mathop{\mathpalette\Ind{}}_{\bar\alpha,\varepsilon}
\ulcorner E\urcorner,
\end{equation*}
where $\bar\beta=(\beta_1,\beta_2,\beta_3)$. \begin{claim} The type $\textnormal{stp}(\ulcorner E\urcorner/\bar\beta)$
is $P$-internal. \end{claim}
\begin{claimproof}
Let $b_1,b_2$ and $b_3$ be elements in $S$ such that $b_i$ is in
the
fiber of $\beta_i$ with
\[ b_1,b_2,b_3 \mathop{\mathpalette\Ind{}}_{\bar\beta} \ulcorner
E\urcorner, \bar\alpha,\varepsilon \]
We show that every automorphism $\sigma$ in
$\textnormal{Aut}(\mathcal{M}_{1}/P)$ fixing $b_1,b_2$ and $b_3$ must permute
$E$.
Recall that
$F_\sigma$ is the derivation on $\mathbb{C}$ induced by the
automorphism $\sigma$.
Since $F_{\sigma}(\beta_i)=0$, we deduce from equations (1) and
(2)
that $\varepsilon\cdot
F_{\sigma}(\alpha_1)+F_{\sigma}(\alpha_2)=0$. Hence, the
automorphism $\sigma$ permutes the set $E$.
\end{claimproof}
\begin{claim} The intersection $\textnormal{acl}^{\textnormal{eq}}(\ulcorner
E\urcorner)\cap\textnormal{acl}^{\textnormal{eq}}(\bar\beta)=\textnormal{acl}^{\textnormal{eq}}(\emptyset)$. \end{claim}
\begin{claimproof}
Because of the independence $(\blacklozenge)$, we need only show
that
\[\textnormal{acl}^{\textnormal{eq}}(\bar\beta)\cap\textnormal{acl}^{\textnormal{eq}}
(\bar\alpha,\varepsilon)=\textnormal{acl}^{\textnormal{eq}}(\emptyset).\]
Choose tuples $\bar{\beta}',\bar{\alpha}',\varepsilon',
\bar{\beta}'',\bar{\alpha}'',\varepsilon'',
\bar{\beta}'''$ such that
\[
\bar{\beta},\bar{\alpha},\varepsilon \equiv
\bar{\beta}',\bar{\alpha},\varepsilon \equiv
\bar{\beta}',\bar{\alpha}',\varepsilon' \equiv
\bar{\beta}'',\bar{\alpha}',\varepsilon' \equiv
\bar{\beta}'',\bar{\alpha}'',\varepsilon'' \equiv
\bar{\beta}''',\bar{\alpha}'',\varepsilon''
\]
with
\begin{align*}
\bar{\beta}' \mathop{\mathpalette\Ind{}}_{\bar{\alpha},\varepsilon} \bar{\beta} \qquad
\bar{\alpha}',\varepsilon' \mathop{\mathpalette\Ind{}}_{\bar{\beta}'}
\bar{\beta},\bar{\alpha},\varepsilon \qquad
\bar{\beta}'' \mathop{\mathpalette\Ind{}}_{\bar{\alpha}',\varepsilon'}
\bar{\beta},\bar{\alpha},\varepsilon,\bar{\beta}' \qquad
\bar{\alpha}'',\varepsilon'' \mathop{\mathpalette\Ind{}}_{\bar{\beta}''}
\bar{\beta},\bar{\alpha},\varepsilon,\bar{\beta}',\bar{\alpha}',\varepsilon'
\end{align*} and
\begin{align*}
\bar{\beta}''' \mathop{\mathpalette\Ind{}}_{\bar{\alpha}'',\varepsilon''}
\bar{\beta},\bar{\alpha},\varepsilon,\bar{\beta}',\bar{\alpha}',\varepsilon',
\bar{\beta}''.
\end{align*}
Since
\[ \textnormal{acl}^{\textnormal{eq}}(\bar{\beta})\cap\textnormal{acl}^{\textnormal{eq}}(\bar{\alpha},\varepsilon)\subset
\textnormal{acl}^{\textnormal{eq}}(\bar{\beta})\cap\textnormal{acl}^{\textnormal{eq}}(\bar{\beta}'''), \]
we need only show the independence $\bar{\beta} \mathop{\mathpalette\Ind{}} \bar{
\beta''' }
$. Note first that the whole configuration has Morley rank 9:
\[
\textnormal{RM}(\bar{\beta},\bar{\alpha},\varepsilon,
\bar{\beta}',\bar{\alpha}',\varepsilon',
\bar{\beta}'',\bar{\alpha}'',\varepsilon'',
\bar{\beta}''') =
\textnormal{RM}(\beta_1,\alpha_1,\alpha_2,\varepsilon,
\beta_1',\alpha_1',\beta_1'',\alpha_1'',\beta_1''') =9.
\]
Since \begin{multline*}
\textnormal{RM}(\bar{\beta}''',\bar{\beta},\alpha_1,\alpha_{1}',\alpha_{1}'')=
\\ \textnormal{RM}(
\bar{\beta}'''/\bar{\beta},\alpha_1,\alpha_{1}',\alpha_{1}'') +
\textnormal{RM}(\alpha_{1}'' / \bar{\beta},\alpha_1,\alpha_{1}' ) +
\textnormal{RM}( \alpha_{1}' / \bar{\beta},\alpha_1 ) + \\ +
\textnormal{RM}(\alpha_1 /\bar{\beta}) + \textnormal{RM}(\bar{\beta}) =
\textnormal{RM}(
\bar{\beta}'''/\bar{\beta},\alpha_1,\alpha_{1}',\alpha_{1}'')
+ 6,
\end{multline*}
it suffices to show that
$\alpha_{2},\varepsilon,\bar{\beta}',\alpha_{2}',\varepsilon',
\bar{\beta}'',\alpha_{2}''$ and $\varepsilon''$ are all algebraic
over the tuple
$(\bar{\beta}''',\bar{\beta},\alpha_1,\alpha_{1}',\alpha_{1}'')$.
Clearly
$\alpha_2,\varepsilon,\alpha_{2}''$ and $\varepsilon''$ are
algebraic over
$\bar{\beta}''',\bar{\beta},\alpha_1,\alpha_{1}''$. Furthermore
we
have the following system of linear equations:
\begin{align*}
\begin{pmatrix}
6\alpha_1 & 3\alpha_{1}^2 & 2\alpha_{1}^3 & 0 & 0 & 0 & 0 & 0 \\
1 & \alpha_1 & \alpha_{1}^2 & 0 & 0 & 0 & 0 & 0 \\
6\alpha_{1}' & 3\alpha_{1}'^2 & 2\alpha_{1}'^3 & 6 & 0 & 0 & 0 &
0 \\
1 & \alpha_{1}' & \alpha_{1}'^2 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 6 & 0 & 6\alpha_{1}' & 3\alpha_{1}'^2 &
2\alpha_{1}'^3 \\
0 & 0 & 0 & 0 & 1 & 1 & \alpha_{1}' & \alpha_{1}'^2 \\
0 & 0 & 0 & 0 & 0 & 6\alpha_{1}'' & 3\alpha_{1}''^2 &
2\alpha_{1}''^3 \\
0 & 0 & 0 & 0 & 0 & 1 & \alpha_{1}'' & \alpha_{1}''^2 \\
\end{pmatrix}
\begin{pmatrix}
\beta_{1}' \\ \beta_{2}' \\ \beta_{3}' \\ \alpha_{2}' \\
\varepsilon' \\
\beta_{1}'' \\ \beta_{2}'' \\ \beta_{3}''
\end{pmatrix}
=
\begin{pmatrix}
-6\alpha_2 \\
\varepsilon \\
0 \\
0 \\
0 \\
0 \\
-6\alpha_{2}'' \\
\varepsilon'' \\
\end{pmatrix}
\end{align*} Thus, we need only show that the above matrix has non-zero
determinant
\begin{align*}
&6
\begin{vmatrix}
6\alpha_1 & 3\alpha_{1}^2 & 2\alpha_{1}^3 \\
1 & \alpha_1 & \alpha_{1}^2 \\
1 & \alpha_{1}' & \alpha_{1}'^2
\end{vmatrix}
\begin{vmatrix}
6\alpha_{1}' & 3\alpha_{1}'^2 & 2\alpha_{1}'^3 \\
6\alpha_{1}'' & 3\alpha_{1}''^2 & 2\alpha_{1}''^3 \\
1 & \alpha_{1}'' & \alpha_{1}''^2
\end{vmatrix}
-6
\begin{vmatrix}
6\alpha_1 & 3\alpha_{1}^2 & 2\alpha_{1}^3 \\
1 & \alpha_1 & \alpha_{1}^2 \\
6\alpha_{1}' & 3\alpha_{1}'^2 & 2\alpha_{1}'^3
\end{vmatrix}
\begin{vmatrix}
1 & \alpha_{1}' & \alpha_{1}'^2 \\
6\alpha_{1}'' & 3\alpha_{1}''^2 & 2\alpha_{1}''^3 \\
1 & \alpha_{1}'' & \alpha_{1}''^2
\end{vmatrix} \\
&=72 \alpha_{1}^2 \alpha_{1}'^2 \alpha_{1}''^2
(\alpha_{1}-\alpha_{1}')(\alpha_{1}-\alpha_{1}'')(\alpha_{1}''-\alpha_{1}')\neq
0.
\end{align*}
~\end{claimproof}
If $\mathcal M_1$ had preservation of internality on
intersections, then
the type
\[\textnormal{stp}(\ulcorner E\urcorner /
\textnormal{acl}^{\textnormal{eq}}(\ulcorner E\urcorner)\cap\textnormal{acl}^{\textnormal{eq}}(\beta_1,\beta_2,\beta_3))\]
would be almost $P$-internal, by Claim 1, and so would be
$\textnormal{stp}(\ulcorner E\urcorner)$, by the previous claim, which yields a
contradiction, exactly as in the proof of Proposition \ref{P: M1PropB}. ~\end{proof}
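The $8\times 8$ determinant appearing in the second claim of the previous proof can also be double-checked symbolically. The following sketch is not part of the argument; it assumes the Python computer algebra package SymPy, writes \texttt{a}, \texttt{b}, \texttt{c} for $\alpha_1$, $\alpha_1'$, $\alpha_1''$, and simply recomputes and factors the determinant next to the closed form stated in the proof.
\begin{verbatim}
# Symbolic sanity check of the 8x8 determinant from the proof above.
# Here a, b, c stand for alpha_1, alpha_1', alpha_1'' respectively.
from sympy import symbols, Matrix, factor

a, b, c = symbols("a b c")

M = Matrix([
    [6*a, 3*a**2, 2*a**3, 0, 0, 0,   0,      0],
    [1,   a,      a**2,   0, 0, 0,   0,      0],
    [6*b, 3*b**2, 2*b**3, 6, 0, 0,   0,      0],
    [1,   b,      b**2,   0, 1, 0,   0,      0],
    [0,   0,      0,      6, 0, 6*b, 3*b**2, 2*b**3],
    [0,   0,      0,      0, 1, 1,   b,      b**2],
    [0,   0,      0,      0, 0, 6*c, 3*c**2, 2*c**3],
    [0,   0,      0,      0, 0, 1,   c,      c**2],
])

det = factor(M.det())
# Closed form claimed in the proof:
claimed = 72*a**2*b**2*c**2*(a - b)*(a - c)*(c - b)

print(det)                    # factored determinant
print(factor(det - claimed))  # prints 0 precisely when both expressions agree
\end{verbatim}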
Recall that an additive cover preserves internality on intersections, resp. on quotients, if and only if every almost $P$-internal type is good, resp. special, by Propositions \ref{P:propA} and \ref{P:propB}. For real types, the property of being special follows directly from almost internality.
\begin{remark}\label{R:realpart} Almost $P$-internal
real types are special in every additive cover. \end{remark} \begin{proof}
We may assume that the sort $S$ is not almost $P$-internal. By a
straightforward forking calculation (cf. \cite[Theorem 2.5]{zC12} or
Proposition \ref{P:propB}), it suffices to show that, whenever the
real type $\textnormal{stp}(a/B)$ is almost $P$-internal, with $a$ a single
element in $S$, then $\alpha=\pi(a)$ is algebraic over $B$.
Choose a set of parameters $B_1$
with $B_1 \mathop{\mathpalette\Ind{}}_{B} a$ and $a$ algebraic over $B_1,P$. We need only
show that $\alpha$ is algebraic over $B_1$. Otherwise, choose an
element $a_1$ of $S$ in the fiber of $\alpha$ generic over $B_1$. The
elements $a$ and $a_1$ are interdefinable over $P$, so $a_1$ is
algebraic over $B_1, P$, contradicting that $S$ is not almost
$P$-internal. \end{proof}
Propositions \ref{P: M1PropA} and \ref{P: M1PropB} and the above remark give a negative answer to Question~\ref{Q:2}.
\begin{corollary}\label{C:AB_imag}
There is a stable theory of finite Morley rank, where every stationary
real almost
$\mathbb{P}$-internal type is special, yet internality on intersections is not
preserved. \end{corollary}
We now conclude this work by relating the failure of the CBP to elimination of finite imaginaries, always in the context of additive covers. For this, we need the following easy remark, which follows immediately from \cite[Remark 1.1 (2)]{zC12}. \begin{remark}\label{R:intersec} Given tuples $a$ and $b$ in an ambient model of an $\omega$-stable theory such that $\textnormal{RM}(a)-\textnormal{RM}(a/b)=1$ and $b=\textnormal{Cb}(a/b)$, the intersection
\[\textnormal{acl}^{\textnormal{eq}}(a)\cap
\textnormal{acl}^{\textnormal{eq}}(b)=\textnormal{acl}^{\textnormal{eq}}(\emptyset).\] \end{remark} \begin{theorem}\label{T:finImagCBP}
Suppose that the sort $S$ in the additive cover $\mathcal{M}$ is not
almost $P$-internal. If $\mathcal{M}$ eliminates finite imaginaries,
then it cannot preserve internality on quotients, so in particular
the CBP does not hold. \end{theorem} In a forthcoming work, we will explore whether the converse holds. We believe that similar techniques show that an additive cover as above cannot even preserve internality on intersections, but we have not yet pursued this problem thoroughly. \begin{proof}
We assume that the additive cover $\mathcal{M}$ eliminates finite
imaginaries and that the sort $S$ is not almost $P$-internal. In order
to show the failure of preserving internality on quotients, we will
find a similar configuration
to $(a\otimes c)\oplus b=d$, resonating with Martin's work \cite{gM88}
on
recovering
multiplication.
Choose two generic independent elements
$a_0=(\alpha_0,0)$ and
$a_1=(\alpha_1,0)$ in $S$. The
real canonical parameter of the finite set
$\{ a_0,a_1 \}$ is not definable over $a_0 \oplus a_1, P$: Indeed,
since $S$
is not almost $P$-internal, there is an automorphism $\sigma$ in
$\textnormal{Aut}(\mathcal{M}/P)$ with $\sigma(a_0)=1\star a_0$ and
$\sigma(a_1)=(-1)\star
a_1$, so $\sigma(a_0 \oplus a_1)=a_0\oplus a_1$, but $\sigma$ does not
permute
$\{ a_0,a_1 \}$. Choose now some coordinate $e$ of the real canonical
parameter which
is not definable over $a_0\oplus a_1,P$.
Note that $\varepsilon=\pi(e)$ is definable over $\alpha_0,\alpha_1$,
by
Remark \ref{R:algFib}. Therefore $\varepsilon=r(\alpha_0,\alpha_1)$
for
some symmetric
rational function $r(X,Y)$ over $\mathbb{Q}$. Let $\rho(x,y,z)$ be a
formula such that $e$ is the unique element realizing
$\rho(a_0,a_1,z)$.
We now proceed according to whether $r(\alpha_0,Y)$ is a
polynomial
map. Assume first that the map $r_{\alpha_0}(Y)=r(\alpha_0,Y)$ is not
polynomial.
As in the proof of \cite[Lemma 3.2]{gM88}, there are natural numbers
$n_1,\ldots,n_k$
such that the
degree of the numerator $P_{\alpha_0}(Y)$ of the rational function
\[
\sum_{j=0}^{k}(-1)^{k-j}
\sum_{1\leq i_1<\dots<i_j\leq k}
r_{\alpha_0}\big(Y+n_{i_1}+\cdots+n_{i_j}\big)
\]
is strictly smaller than the degree of its denominator
$Q_{\alpha_0}(Y)$.
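(For illustration only, and not taken from \cite{gM88}: with $r_{\alpha_0}(Y)=Y+\frac{1}{Y}$, $k=2$ and $n_1=n_2=1$, the sum above equals \[ r_{\alpha_0}(Y+2)-2\,r_{\alpha_0}(Y+1)+r_{\alpha_0}(Y)=\frac{2}{Y(Y+1)(Y+2)}, \] whose numerator has degree $0$, strictly smaller than the degree $3$ of its denominator.)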
For $1\leq i_1<\dots<i_j\leq k$, the formula $\rho(a_0,a_1\oplus
(n_{i_1}+\cdots+n_{i_j},0),z)$
has a unique realization $e_{i_1,\dots,i_j}$, since
\[\alpha_1 \equiv_{\alpha_0} \alpha_1+n_{i_1}+\cdots+n_{i_j},\]
so by Remark \ref{R:algFib}
\[a_1 \equiv_{a_0} a_1\oplus(n_{i_1}+\cdots+n_{i_j},0). \]
Set now
\[e_j=\sum_{1\leq i_1<\dots<i_j\leq k} e_{i_1,\dots,i_j}\]
and \begin{multline*}
\psi(x, y,z)=
\exists \bar{z} \Big(
\bigwedge_{j=0}^{k} \ \
\bigwedge_{1\leq i_1<\dots<i_j\leq k}
\rho(x,y\oplus(n_{i_1}+\cdots+n_{i_j},0),z_{i_1,\dots,i_j}) \ \
\land \\
z=
\sum_{j=0}^{k}(-1)^{k-j}
\sum_{1\leq i_1<\dots<i_j\leq k}
z_{i_1,\dots,i_j}
\Big).
\end{multline*}
Note that the element
\[ \sum_{j=0}^{k}(-1)^{k-j} e_j\]
is the unique realization of $\psi(a_0,a_1,z)$ and its projection to
$P$ is
\[ \frac{P_{\alpha_0}(\alpha_1)}{Q_{\alpha_0}(\alpha_1)}.\]
By Remark \ref{R:algFib}, every element in the fiber of $\alpha_0$ has
the same type as $a_0$ over $a_1,P$, so the formula
\[ \forall u \Big( \pi(u)=x \rightarrow \Big(
\exists ! z \psi(u, a_1,z) \land \forall w \Big( \psi(u, a_1,w)
\rightarrow \pi(w)= \frac{P_{x}(\alpha_1)}{Q_{x}(\alpha_1)}
\Big)\Big)\Big)\] \noindent belongs to the generic type $\textnormal{tp}(\alpha_0/a_1)$ in $P$. Therefore, there exists an algebraic number $\xi$ realizing it such that $\deg(Q_\xi(Y))>\deg(P_\xi(Y))$. Write now $\varphi(y,z)=\psi( (\xi, 0), y,z)$ and choose generic independent elements $a,b$ and $c$ in $S$ with
projections
\[\pi(a)=\alpha,\pi(b)=\beta \ \ \textnormal{ and } \ \ \pi(c)=\gamma.
\]
The formula $\varphi$ will play the role of the multiplication
$\otimes$, so let
$d=(\delta,d')$ be the unique element such that
\[ \mathcal{M}\models \exists z \big(\varphi(a\oplus c,z)\land z\oplus
b=d\big). \]
\begin{claim}
The intersection
$\textnormal{acl}^{\textnormal{eq}}(\alpha,\beta)\cap\textnormal{acl}^{\textnormal{eq}}(\gamma,\delta)=\textnormal{acl}^{\textnormal{eq}}(\emptyset)$.
\end{claim}
\begin{claimproof}
Since
$\textnormal{RM}(\alpha,\beta)-\textnormal{RM}(\alpha,\beta/\gamma,\delta)=2-1=1$, by
Remark \ref{R:intersec} it suffices to show that
$\textnormal{Cb}(\alpha,\beta/\gamma,\delta)$ is interdefinable with
$(\gamma,\delta)$.
Choose elements $\alpha'$ and $\beta'$ such that
\[\alpha',\beta' \equiv_{\gamma,\delta} \alpha,\beta \ \
\textnormal{
and } \ \ \alpha',\beta' \mathop{\mathpalette\Ind{}}_{\gamma,\delta} \alpha,\beta \
,\]
so
\[
\frac{P_\xi(\alpha+\gamma)}{Q_\xi(\alpha+\gamma)}+\beta=\delta=
\frac{P_\xi(\alpha'+\gamma)}{Q_\xi(\alpha'+\gamma)}+\beta'.
\]
Therefore
\[
P_\xi(\alpha+\gamma)Q_\xi(\alpha'+\gamma)-P_\xi(\alpha'+\gamma)Q_\xi(\alpha+\gamma)+
(\beta-\beta')Q_\xi(\alpha+\gamma)Q_\xi(\alpha'+\gamma)=0.
\]
Since
\[\deg(Q_\xi(Y))>\deg(P_\xi(Y)),\]
we need only show $\beta\neq\beta'$, for then
$\gamma$ is algebraic over $\alpha,\beta,\alpha',\beta'$ and
hence so is $\delta$, as desired.
We assume for a contradiction that
$\beta=\beta'$. Since $\beta' \mathop{\mathpalette\Ind{}}_{\gamma,\delta} \beta$, the element $\beta$ is then algebraic over $\gamma,\delta$,
so the equation
\[ P_\xi(\alpha+\gamma)=(\delta-\beta)Q_\xi(\alpha+\gamma)\]
yields that $\alpha$ is also algebraic over $\gamma,\delta$,
contradicting $\textnormal{RM}(\alpha,\beta/\gamma,\delta)=1$.
\end{claimproof}
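To illustrate the role of the degree condition (a toy computation, not needed for the argument): if, say, $P_\xi(Y)=1$ and $Q_\xi(Y)=Y$, the displayed identity in the proof of the claim becomes
\[
(\alpha'-\alpha)+(\beta-\beta')(\alpha+\gamma)(\alpha'+\gamma)=0,
\]
so for $\beta\neq\beta'$ the coefficient of $\gamma^2$ equals $\beta-\beta'\neq 0$ and $\gamma$ is algebraic over $\alpha,\beta,\alpha',\beta'$. In general, the first two summands of the identity have degree at most $\deg(P_\xi)+\deg(Q_\xi)$ in $\gamma$, while for $\beta\neq\beta'$ the last summand has degree $2\deg(Q_\xi)>\deg(P_\xi)+\deg(Q_\xi)$, so the equation is non-trivial in $\gamma$.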
As in Proposition \ref{P: M1PropB}, with the definable set
\begin{align*}
E &= \big\{ (x,y)\in S^2 \ |\ \pi(x)=\alpha \ \& \ \pi(y)=\beta
\ \& \
\exists z \big(\varphi(x\oplus c,z)\land z\oplus y=d\big) \big\}
\end{align*}
we can easily prove that
the types
\[
\textnormal{stp}(\ulcorner E\urcorner/\gamma,\delta) \ \ \textnormal{ and } \ \
\textnormal{stp}(\ulcorner E\urcorner/\alpha,\beta)
\]
are $P$-internal, since $(\xi,0)$ is internal over $\textnormal{acl}(\emptyset)$.
We assume now that $\mathcal M$ preserves internality on
quotients in order to reach a contradiction. By
the above claim,
the type $\textnormal{stp}(\ulcorner E\urcorner)$
is almost $P$-internal.
Therefore, there is a set $C$ of parameters with
$C \mathop{\mathpalette\Ind{}} \ulcorner E\urcorner,a,b$ such that the canonical parameter
$\ulcorner E\urcorner$ is algebraic over $C,P$.
Note that in
particular
\[C,a \mathop{\mathpalette\Ind{}} b.\]
Since the sort $S$ is not almost $P$-internal, there is an
automorphism
$\sigma$ in $\textnormal{Aut}(\mathcal{M}/P)$ fixing $C$ and $a$, yet
$\sigma(b)\neq b$.
The orbit of $\ulcorner E\urcorner$ under $\sigma$ is hence infinite,
which gives the desired contradiction.
The remaining case is that the rational function
$r(\alpha_0,Y)$ is
polynomial. For a natural number $m$, write $r(X,mX+Y)$ as
\[
r(X,mX+Y)=\sum_{i=0}^{n} \frac{P_{m,i}(X)}{Q_{m,i}(X)}Y^i,
\]
where $P_{m,i}(X)$ and $Q_{m,i}(X)$ are coprime polynomials over
$\mathbb{Q}$ and $P_{m,n}\neq 0$ (since $r$ is not the zero map).
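For orientation (a toy example, not part of the proof): if $r(X,Y)=XY$, which is symmetric, non-constant and polynomial in each variable, then $r(X,mX+Y)=mX^2+XY$, so $P_{m,1}(X)=X$ and $Q_{m,1}(X)=1$, and $\deg(P_{m,1})\neq\deg(Q_{m,1})$ already for every $m$. The claim below states that such $m$ and $i>0$ can always be found.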
\begin{claim}
There exists a natural number $m$ such that
$\deg(P_{m,i})\neq \deg(Q_{m,i})$
for some $i>0$.
\end{claim}
\begin{claimproof}
Note that $n>0$ because $r(X,Y)$ is symmetric and non-constant.
We may assume that $\deg(P_{0,i})= \deg(Q_{0,i})$ for all
$i>0$, since otherwise we are done.
If $n>1$, then collecting the coefficient of $Y^{n-1}$ in $r(X,X+Y)$ yields
\begin{align*}
\frac{P_{1,n-1}(X)}{Q_{1,n-1}(X)}&=
n\,\frac{P_{0,n}(X)}{Q_{0,n}(X)}\, X +
\frac{P_{0,n-1}(X)}{Q_{0,n-1}(X)}\\
&=\frac{n\,P_{0,n}(X)Q_{0,n-1}(X)\,X + P_{0,n-1}(X)Q_{0,n}(X)
}{Q_{0,n}(X) Q_{0,n-1}(X)},
\end{align*}
where the factor $n$ comes from the binomial expansion of $(X+Y)^n$. This implies
\[
\deg(P_{1,n-1})= \deg(Q_{1,n-1}) + 1,
\]
so the claim follows with $m=1$ and $i=n-1$. Thus, we are left with the case
$n=1$, where
\begin{align*}
r(X,Y)=\frac{P_{0,1}(X)}{Q_{0,1}(X)} Y +
\frac{P_{0,0}(X)}{Q_{0,0}(X)}
= \frac{P_{0,1}(X)Q_{0,0}(X)Y + P_{0,0}(X)Q_{0,1}(X)
}{Q_{0,1}(X) Q_{0,0}(X)}.
\end{align*}
The map
\begin{align*}
r(\alpha_0,Y)=r(Y,\alpha_0)=\frac{P_{0,1}(Y)Q_{0,0}(Y)\alpha_0 +
P_{0,0}(Y)Q_{0,1}(Y)
}{Q_{0,1}(Y) Q_{0,0}(Y)}
\end{align*}
is polynomial and since $\alpha_0 \equiv \alpha_0 + 1$, so is
the map
\begin{align*}
\frac{P_{0,1}(Y)Q_{0,0}(Y)(\alpha_0+1) +
P_{0,0}(Y)Q_{0,1}(Y)
}{Q_{0,1}(Y) Q_{0,0}(Y)}.
\end{align*}
Since $P_{0,1}$ and $Q_{0,1}$ as well as $P_{0,0}$ and $Q_{0,0}$
are coprime,
it follows that $Q_{0,0}=\lambda Q_{0,1}$ for some
rational number $\lambda\neq 0$. We deduce that both
\[
\frac{\lambda \alpha_0 P_{0,1}(Y) + P_{0,0}(Y)}{\lambda
Q_{0,1}(Y)}
\]
and
\[
\frac{\lambda (\alpha_0 +1) P_{0,1}(Y) + P_{0,0}(Y)}{\lambda
Q_{0,1}(Y)}
\]
are polynomials. Hence, every root $\zeta$ of $Q_{0,1}$
is a root of
\[
\lambda \alpha_0 P_{0,1} + P_{0,0}
\ \ \textnormal { and of } \ \
\lambda (\alpha_0 +1) P_{0,1} + P_{0,0}
\]
and therefore $P_{0,1}(\zeta)=0$.
This implies that $Q_{0,1}$ is constant, since
$P_{0,1}$ and $Q_{0,1}$
are coprime.
It follows that $P_{0,1}$ cannot be constant, since otherwise
the symmetric function $r(X,Y)$ would equal
$q_1\cdot(X+Y)+q_0$ for
some rational numbers $q_1$ and $q_0$, which would yield that
the element $e$ is definable over $a_0\oplus a_1, P$, a
contradiction. Hence $\deg(P_{0,1})>0=\deg(Q_{0,1})$, so the
claim holds with $m=0$ and $i=1$.
\end{claimproof}
Fix now a natural number $m$ as in the previous claim and choose as
before generic independent elements $a$, $b$ and $c$ in $S$ with
projections
\[\pi(a)=\alpha,\pi(b)=\beta \ \ \textnormal{ and } \ \ \pi(c)=\gamma.
\]
Let
$d=(\delta,d')$ be the unique element such that
\[ \mathcal{M}\models \exists z \big(\rho(a,(m\cdot a) \oplus
c,z)\land z\oplus
b=d\big). \]
Considering the set
\begin{align*}
\big\{ (x,y)\in S^2 \ |\ \pi(x)=\alpha \ \& \ \pi(y)=\beta \ \&
\
\exists z \big(\rho(x,(m\cdot x) \oplus c,z)\land z\oplus y=d\big)
\big\},
\end{align*} we need only show as before that
\[\textnormal{acl}^{\textnormal{eq}}(\alpha,\beta)\cap\textnormal{acl}^{\textnormal{eq}}(\gamma,\delta)=\textnormal{acl}^{\textnormal{eq}}(\emptyset).\]
The strategy is the same as in the proof of Claim 1. Choose
elements $\alpha'$ and $\beta'$ such that
\[\alpha',\beta' \equiv_{\gamma,\delta} \alpha,\beta \ \
\textnormal{
and } \ \ \alpha',\beta' \mathop{\mathpalette\Ind{}}_{\gamma,\delta} \alpha,\beta \
.\]
Note that
\[
r(\alpha,m\cdot\alpha+\gamma)+\beta=\delta=r(\alpha',m\cdot\alpha'+\gamma)+\beta'
,
\]
so
\[
r(\alpha,m\cdot\alpha+\gamma)-r(\alpha',m\cdot\alpha'+\gamma)+\beta-\beta'=0.
\]
Now Claim 2 yields some $i>0$ with
$\deg(P_{m,i})\neq \deg(Q_{m,i})$; in particular, the rational map
$P_{m,i}/Q_{m,i}$ is not constant. Since $\alpha \mathop{\mathpalette\Ind{}} \alpha'$ (for
otherwise both $\alpha$ and $\beta$ are algebraic over
$\gamma,\delta$), this map takes different values at $\alpha$ and
$\alpha'$. Writing
$r(\alpha,m\cdot\alpha+\gamma)=\sum_{j=0}^{n}
\frac{P_{m,j}(\alpha)}{Q_{m,j}(\alpha)}\gamma^j$, the coefficient of
$\gamma^i$ in the above equation is therefore non-zero, so $\gamma$
is algebraic over $\alpha,\beta,\alpha',\beta'$. It follows that
$\delta$ is also algebraic over
$\alpha,\beta,\alpha',\beta'$, as desired.
\end{proof}
\end{document}